
Deep reinforcement learning of energy management with continuous control strategy and traffic information for a series-parallel plug-in hybrid electric bus

Wu, Yuankai, Tan, Huachun, Peng, Jiankun, Zhang, Hailong, He, Hongwen
Applied Energy 2019 v.247 pp. 454-466
algorithms, artificial intelligence, dynamic programming, emissions, energy efficiency, fuels, issues and policy, traffic, vehicles (equipment)
Hybrid electric vehicles offer an immediate solution for emissions reduction and fuel displacement at the current level of technology. Energy management strategies are critical for improving the fuel economy of hybrid electric vehicles. In this paper we propose an energy management strategy for a series-parallel plug-in hybrid electric bus based on deep deterministic policy gradient (DDPG). Specifically, DDPG is an actor-critic, model-free reinforcement learning algorithm that can learn the optimal energy split of the bus over continuous action spaces. We consider buses operating on a fixed bus line, where the driving cycle is constrained by traffic. Traffic information and the number of passengers are also incorporated into the energy management system. The deep reinforcement learning based energy management agent is trained on a large number of driving cycles generated from traffic simulation. Experiments on the traffic simulation driving cycles show that the proposed approach outperforms a conventional reinforcement learning approach and achieves performance close to that of globally optimal dynamic programming. Moreover, it generalizes well to standard driving cycles that differ significantly from those it was trained on. We also reveal some interesting attributes of the learned energy management strategies through visualizations of the actor and critic. The main contribution of this study is to explore the incorporation of traffic information within hybrid electric vehicle energy management through advanced intelligent algorithms.
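The abstract's core mechanism, an actor-critic update with a deterministic policy over a continuous action (the engine/battery power split), can be illustrated with a minimal sketch. Everything below is hypothetical: the state variables (battery SOC, vehicle speed, power demand), the toy reward, and the linear function approximators are illustrative stand-ins, not the paper's powertrain model or network architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 3  # illustrative state: [battery SOC, vehicle speed, power demand]

# Linear actor: a = sigmoid(w_a . s) -> engine power-split fraction in [0, 1]
w_actor = rng.normal(scale=0.1, size=STATE_DIM)
# Linear critic: Q(s, a) = w_c . [s, a]
w_critic = rng.normal(scale=0.1, size=STATE_DIM + 1)

def actor(s):
    return 1.0 / (1.0 + np.exp(-(w_actor @ s)))

def critic(s, a):
    return w_critic @ np.append(s, a)

def toy_env_step(s, a):
    # Stand-in for the powertrain/traffic simulator (purely illustrative):
    # penalize deviation from a 0.5 split and drift of SOC away from 0.6.
    reward = -(a - 0.5) ** 2 - 0.1 * abs(s[0] - 0.6)
    s_next = np.clip(s + rng.normal(scale=0.02, size=STATE_DIM), 0.0, 1.0)
    return s_next, reward

gamma, lr = 0.99, 1e-2
s = np.array([0.6, 0.3, 0.5])
for _ in range(200):
    # Deterministic policy plus exploration noise, clipped to the valid range.
    a = float(np.clip(actor(s) + rng.normal(scale=0.05), 0.0, 1.0))
    s_next, r = toy_env_step(s, a)
    # Critic: one-step TD update toward r + gamma * Q(s', pi(s')).
    target = r + gamma * critic(s_next, actor(s_next))
    td_err = target - critic(s, a)
    w_critic += lr * td_err * np.append(s, a)
    # Actor: deterministic policy gradient, dQ/da * da/dw_actor.
    dq_da = w_critic[-1]
    pi = actor(s)
    w_actor += lr * dq_da * pi * (1.0 - pi) * s
    s = s_next
```

The full DDPG algorithm additionally uses deep networks, a replay buffer, and slowly updated target networks for stability; this sketch keeps only the two coupled updates that let the agent output a continuous power split rather than a discretized one.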