References

1. D. Belanche, L. V. Casaló, C. Flavián and J. Schepers, “Service robot implementation: a theoretical framework and research agenda,” The Service Industries Journal, vol. 40, pp. 203-225, 2020. DOI: 10.1080/02642069.2019.1672666
2. S. Thrun, “Probabilistic robotics,” Communications of the ACM, vol. 45, no. 3, pp. 52-57, 2002. DOI: 10.1145/504729.504754
3. Y. Yamada, T. Sumikura, T. Harada and Y. Yoshida, “Evaluating Spatial Understanding of Large Language Models,” arXiv preprint arXiv:2401.16865, 2024.
4. C. Zhu, X. Xu, W. Wang, J. Yang and D. Wang, “Can Large Language Models Understand Spatial Audio?,” arXiv preprint arXiv:2309.11359, 2023.
5. F. Li, Y. Zhang, Z. Feng, X. Li and Y. Gao, “Advancing Spatial Reasoning in Large Language Models: An In-Depth Evaluation and Enhancement Using the StepGame Benchmark,” arXiv preprint arXiv:2401.03991, 2024. DOI: 10.48550/arXiv.2401.03991
6. Y. Tanaka and S. Katsura, “A voice-controlled motion reproduction using large language models for polishing robots,” in 2023 IEEE International Conference on Mechatronics (ICM), pp. 1-6, 2023. DOI: 10.1109/ICM54990.2023.10101966
7. N. Kojima, P. Shah, K. Dogan, A. Agarwal, J. Baldridge and Y. Artzi, “Zero-Shot Compositional Concept Learning,” arXiv preprint arXiv:2205.01536, 2022.
8. A. Radford, J. Wu, R. Child, D. Luan, D. Amodei and I. Sutskever, “Language Models are Unsupervised Multitask Learners,” OpenAI, 2019.
9. T. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al., “Language Models are Few-Shot Learners,” arXiv preprint arXiv:2005.14165, 2020. DOI: 10.48550/arXiv.2005.14165
10. J. Garcia and F. Fernández, “A Comprehensive Survey on Safe Reinforcement Learning,” Journal of Machine Learning Research, vol. 16, no. 1, pp. 1437-1480, 2015.
11. S. A. Li, Y. Y. Liu, Y. C. Chen, H. M. Feng, P. K. Shen and Y. C. Wu, “Voice Interaction Recognition Design in Real-Life Scenario Mobile Robot Applications,” Applied Sciences, vol. 13, no. 5, p. 3359, 2023. DOI: 10.3390/app13053359
12. J. Huang, Y. Gao, L. Weng, C. Xiong and X. Hu, “LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models,” arXiv preprint arXiv:2212.04088, 2023. DOI: 10.48550/arXiv.2212.04088
13. M. Ahn, A. Brohan, N. Brown, Y. Chebotar, O. Cortes, B. David, C. Finn, C. Fu, K. Gopalakrishnan, K. Hausman, et al., “Do As I Can, Not As I Say: Grounding Language in Robotic Affordances,” arXiv preprint arXiv:2204.01691, 2022. DOI: 10.48550/arXiv.2204.01691
14. P. Sikorski, L. Schrader, K. Yu, L. Billadeau, J. Meenakshi, N. Mutharasan, F. Esposito, H. AliAkbarpour and M. Babaiasl, “Deployment of NLP and LLM Techniques to Control Mobile Robots at the Edge: A Case Study Using GPT-4-Turbo and LLaMA 2,” arXiv preprint arXiv:2405.17670, 2024. DOI: 10.48550/arXiv.2405.17670
15. M. S. Hossain, S. Aktar, N. Gu, W. Liu and Z. Huang, “GeoSCN: A novel multimodal self-attention to integrate geometric information on spatial-channel network for fine-grained image captioning,” Expert Systems with Applications, vol. 272, p. 126692, 2025. DOI: 10.1016/j.eswa.2025.126692
16. Z. Shi, Q. Zhang and A. Lipani, “StepGame: A new benchmark for robust multi-hop spatial reasoning in texts,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, no. 10, pp. 11321-11329, 2022. DOI: 10.48550/arXiv.2204.08292
17. Z. Zheng, P. Peng, Z. Ma, X. Chen, E. Choi and D. Harwath, “BAT: Learning to Reason about Spatial Sounds with Large Language Models,” arXiv preprint arXiv:2402.01591, 2024. DOI: 10.48550/arXiv.2402.01591
18. Y. Zhang, Z. Wang, Z. He, J. Li, G. Mai, J. Lin, C. Wei and W. Yu, “BB-GeoGPT: A framework for learning a large language model for geographic information science,” Information Processing & Management, vol. 61, no. 5, p. 103808, 2024. DOI: 10.1016/j.ipm.2024.103808
19. I. Singh, V. Blukis, A. Mousavian, A. Goyal, D. Xu, J. Tremblay, D. Fox, J. Thomason and A. Garg, “ProgPrompt: program generation for situated robot task planning using large language models,” Autonomous Robots, vol. 47, no. 8, pp. 1689-1706, 2023. DOI: 10.48550/arXiv.2209.11302
20. S. Godfrey, A. Tomar, R. Gopalakrishnan, S. Niekum and P. Stone, “MARLIN: Multi-Agent Reinforcement Learning with Language-Based Negotiation,” arXiv preprint arXiv:2310.12534, 2023.
21. Y. Kim, D. Kim, J. Choi, J. Park, N. Oh and D. Park, “A survey on integration of large language models with intelligent robots,” Intelligent Service Robotics, vol. 17, no. 5, pp. 1091-1107, 2024. DOI: 10.1007/s11370-024-00550-5
22. G. Sejnova, M. Vavrecka and K. Stepanova, “Bridging Language, Vision and Action: Multimodal VAEs in Robotic Manipulation Tasks,” arXiv preprint arXiv:2404.01932, 2024. DOI: 10.1109/IROS58592.2024.10802160