The Transactions of the Korean Institute of Electrical Engineers (ISO title: Trans. Korean. Inst. Elect. Eng.)
ISSN: 1975-8359 (Print) / 2287-4364 (Online)
http://www.tkiee.org/kiee
2022-09 (Vol. 71, No. 9)
DOI: 10.5370/KIEE.2022.71.9.1293