The Transactions of the Korean Institute of Electrical Engineers (KIEE)
Open Access | Monthly
ISSN: 1975-8359 (Print) / ISSN: 2287-4364 (Online)
http://www.tkiee.org/kiee
ISO Journal Title: Trans. Korean. Inst. Elect. Eng.
2021-01 (Vol. 70, No. 1)
DOI: 10.5370/KIEE.2021.70.1.108
References
1. Q. Jin, C. Li, S. Chen, H. Wu, 2015, Speech emotion recognition with acoustic and lexical features, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4749-4753
2. H. S. Kumbhar, S. U. Bhandari, 2019, Speech Emotion Recognition using MFCC features and LSTM network, International Conference on Computing, Communication, Control and Automation, pp. 1-3
3. N. Jain, S. Kumar, A. Kumar, P. Shamsolmoali, M. Zareapoor, 2018, Hybrid deep neural networks for face emotion recognition, Pattern Recognition Letters, Vol. 115, pp. 101-106
4. D. Shin, D. Shin, D. Shin, 2017, Development of emotion recognition interface using complex EEG/ECG bio-signal for interactive contents, Multimedia Tools and Applications, Vol. 76, No. 9, pp. 11449-11470
5. J. Zhao, X. Mao, L. Chen, 2019, Speech emotion recognition using deep 1D & 2D CNN LSTM networks, Biomedical Signal Processing and Control, Vol. 47, pp. 312-323
6. K. Han, D. Yu, I. Tashev, 2014, Speech Emotion Recognition Using Deep Neural Network and Extreme Learning Machine, Interspeech 2014
7. K. Ko, D. Shin, K. Sim, 2009, Development of Context Awareness and Service Reasoning Technique for Handicapped People, Korean Institute of Intelligent Systems, Vol. 19, No. 1, pp. 34-39
8. Y. Huang, J. Yang, P. Liao, J. Pan, 2017, Fusion of Facial Expressions and EEG for Multimodal Emotion Recognition, Computational Intelligence and Neuroscience, Vol. 2017, pp. 1-8
9. Y. Wang, R. Skerry-Ryan, D. Stanton, Y. Wu, R. J. Weiss, N. Jaitly, et al., 2017, Tacotron: Towards End-to-End Speech Synthesis, Interspeech, pp. 4006-4010
10. S. Byun, S. Lee, 2016, Emotion Recognition Using Tone and Tempo Based on Voice for IoT, The Transactions of the Korean Institute of Electrical Engineers, Vol. 65, No. 1, pp. 116-121
11. K. Park, 2018, KSS Dataset: Korean single speaker speech dataset, https://kaggle.com/bryanpark/korean-single-speaker-speech-dataset/
12. S. Yoon, S. Byun, K. Jung, 2018, Multimodal Speech Emotion Recognition Using Audio and Text, IEEE Spoken Language Technology Workshop (SLT), pp. 112-118
13. B. T. Atmaja, K. Shirai, M. Akagi, 2019, Speech Emotion Recognition Using Speech Feature and Word Embedding, Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), pp. 519-523
14. F. Eyben, M. Wöllmer, B. Schuller, 2010, Opensmile: the Munich versatile and fast open-source audio feature extractor, Proceedings of the 18th ACM International Conference on Multimedia (MM '10), pp. 1459-1462
15. S. Bird, E. Loper, 2004, NLTK: the Natural Language Toolkit, Proceedings of the ACL Interactive Poster and Demonstration Sessions, pp. 214-217
16. J. Pennington, R. Socher, C. Manning, 2014, GloVe: Global Vectors for Word Representation, Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532-1543