Title 虛擬代理人的相互性行為對信任之影響
Agents’ Mutuality Behavior and Human Trust in Human-Agent Interaction
Author Lin, Ting-Yu (林庭羽)
Advisors Chen, Yihsiu; Liao, Chun-Feng (陳宜秀; 廖峻鋒)
Keywords Agent
Virtual reality
Human-computer interaction
Mutuality in behavior
Trust
Date 2023
Uploaded 1-Sep-2023 15:56:23 (UTC+8)
Abstract  In recent years, advances in artificial intelligence have given computers increasingly powerful decision-making capabilities, transforming the way people live and work and diversifying the forms of human-agent interaction (HAI). AI can take the form of embodied or disembodied agents that act on people's behalf, such as intelligent robots and self-driving cars, and these agents help people complete tasks. Because trust is one of the keys to sustained cooperation and interaction between humans and agents, we need to study which factors strengthen or weaken people's trust in AI, and in AI-based agents in particular.
  In agent design, people typically rely on mental models formed through past interactions with other humans when they interact with an agent. According to Communication Accommodation Theory, people often adjust their verbal, nonverbal, and paraverbal behavior to match their interlocutor's communication characteristics and reduce social distance. This study therefore asks whether people come to trust an agent in a virtual environment when it displays eye-gaze attention and accommodating behavior. To test our hypotheses, we ran a two-factor between-subjects experiment with two independent variables: agent appearance (humanoid vs. robotic) and attentive behavior (present vs. absent). Participants collaborated with an agent on a jigsaw-puzzle task in virtual reality; the agent offered instructions and suggestions, which participants were free to accept or reject. After the task, participants completed a combined trust and technology-acceptance questionnaire, allowing us to examine whether the agent's anthropomorphic appearance and attentive behavior affected human trust.
  The results show that our manipulation did not produce the social perception we had intended to shape, because participants formed differing subjective perceptions of the agent's attentive behavior; the experiment thus reconfirms the importance of mental models in human-agent interaction. When the agent behaved in a human-like way and displayed the corresponding attentive behavior, matching participants' mental models, it became more predictable, participants felt more secure and could concentrate on the task, and task performance improved. When the experimental condition did not match participants' mental models, the agent's reduced predictability hurt task performance, and performance was more easily disrupted by the agent's appearance and perceived attention. In addition, a stronger perception of humanoid appearance raised the agent's perceived integrity, and participants who believed the agent was attentive trusted it more, were more willing to accept its instructions, and built a better cooperative relationship with it. Although the results did not fully match our original hypotheses, they went beyond what we had imagined and yielded many interesting findings, which we hope will point to new directions for future HAI research.
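For readers who want a concrete picture of the analysis the abstract describes, here is a minimal Python sketch of a 2 (appearance: humanoid/robotic) × 2 (attentive behavior: present/absent) between-subjects ANOVA on a composite trust score. It is an illustration under assumptions, not the thesis's actual analysis pipeline: the cell size, factor labels, and trust values are hypothetical stand-ins for the post-task questionnaire data, and pandas/statsmodels are only one way to run such a test.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 10  # hypothetical participants per cell; the thesis's actual N is not given here

# One row per participant: factor levels plus a placeholder trust composite
# standing in for the post-task trust/technology-acceptance questionnaire.
df = pd.DataFrame({
    "appearance": np.repeat(["humanoid", "robotic"], 2 * n),
    "behavior": np.tile(np.repeat(["attentive", "none"], n), 2),
    "trust": rng.normal(4.5, 1.0, 4 * n),  # synthetic Likert-scale composite
})

# Two-way between-subjects ANOVA with the appearance x behavior interaction.
model = smf.ols("trust ~ C(appearance) * C(behavior)", data=df).fit()
print(anova_lm(model, typ=2))
```

The `C(appearance) * C(behavior)` term expands into the two main effects plus their interaction, mirroring the one-way and two-way analyses reported in Chapter 4 of the thesis.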
References [1] Admoni, H. et al. 2011. Robot gaze does not reflexively cue human attention. (2011).
[2] Adolphs, R. et al. 1999. Social cognition and the human brain. Trends in Cognitive Sciences. 3, 12 (Dec. 1999), 469–479. DOI:https://doi.org/10.1016/S1364-6613(99)01399-6.
[3] Automated Trading Systems: The Pros and Cons: 2019. https://www.investopedia.com/articles/trading/11/automated-trading-systems.asp. Accessed: 2022-03-25.
[4] Axelrod, R. and Hamilton, W.D. 1981. The Evolution of Cooperation. Science, New Series. 211, 4489 (1981), 1390–1396.
[5] Bailenson, J.N. et al. 2001. Equilibrium theory revisited: Mutual gaze and personal space in virtual environments. Presence: Teleoperators and Virtual Environments. 10, 6 (Dec. 2001), 583–598. DOI:https://doi.org/10.1162/105474601753272844.
[6] Bar, M. et al. 2006. Top-down facilitation of visual recognition. Proceedings of the National Academy of Sciences of the United States of America. 103, 2 (Jan. 2006), 449–454. DOI:https://doi.org/10.1073/PNAS.0507062103/SUPPL_FILE/07062FIG9.PDF.
[7] Baron-Cohen, S. 1995. Mindblindness: An Essay on Autism and Theory of Mind. Learning, Development, and Conceptual Change. (1995).
[8] Bayliss, A.P. and Tipper, S.P. 2006. Predictive Gaze Cues and Personality Judgments: Should Eye Trust You? Psychological science. 17, 6 (Jun. 2006), 514. DOI:https://doi.org/10.1111/J.1467-9280.2006.01737.X.
[9] Bell, L. and Gustafson, J. 1999. Repetition and its phonetic realizations: Investigating a Swedish database of spontaneous computer-directed speech. (1999).
[10] Benbasat, I. and Wang, W. 2005. Trust In and Adoption of Online Recommendation Agents. Journal of the Association for Information Systems. 6, 3 (Mar. 2005), 4. DOI:https://doi.org/10.17705/1jais.00065.
[11] Bennett, M.T. and Maruyama, Y. 2021. Intensional Artificial Intelligence: From Symbol Emergence to Explainable and Empathetic AI. (Apr. 2021). DOI:https://doi.org/10.48550/arxiv.2104.11573.
[12] Berger, C.R. and Bradac, J.J. 1982. Language and social knowledge : uncertainty in interpersonal relations. (1982), 151.
[13] Blakemore, S.J. and Decety, J. 2001. From the perception of action to the understanding of intention. Nature reviews. Neuroscience. 2, 8 (2001), 561–567. DOI:https://doi.org/10.1038/35086023.
[14] Blascovich, J. et al. 2002. Immersive Virtual Environment Technology as a Methodological Tool for Social Psychology.
[15] Bradshaw, J.M. An Introduction to Software Agents.
[16] Bratman, M.E. 1992. Shared Cooperative Activity. The Philosophical Review. 101, 2 (Apr. 1992), 327. DOI:https://doi.org/10.2307/2185537.
[17] Breazeal, C. et al. 2005. Effects of nonverbal communication on efficiency and robustness in human-robot teamwork. 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS. (2005), 708–713. DOI:https://doi.org/10.1109/IROS.2005.1545011.
[18] Breazeal, C. 2003. Emotion and sociable humanoid robots. International Journal of Human-Computer Studies. 59, 1–2 (Jul. 2003), 119–155. DOI:https://doi.org/10.1016/S1071-5819(03)00018-1.
[19] Brennan, S.E. and Hanna, J.E. 2009. Partner-Specific Adaptation in Dialog. Topics in Cognitive Science. 1, 2 (Apr. 2009), 274–291. DOI:https://doi.org/10.1111/J.1756-8765.2009.01019.X.
[20] Burgoon, J.K. et al. 2007. Interpersonal Adaptation: Dyadic Interaction Patterns. (2007).
[21] Burnett, R. 2004. How images think. (2004), 253.
[22] Burton-Chellew, M.N. and West, S.A. 2013. Prosocial preferences do not explain human cooperation in public-goods games. Proceedings of the National Academy of Sciences of the United States of America. 110, 1 (Jan. 2013), 216–221. DOI:https://doi.org/10.1073/PNAS.1210960110/SUPPL_FILE/PNAS.201210960SI.PDF.
[23] Buschmeier, H. and Kopp, S. 2018. Communicative Listener Feedback in Human-Agent Interaction: Artificial Speakers Need to Be Attentive and Adaptive Socially Interactive Agents Track. IFAAMAS. 9, (2018).
[24] Camerer, C. 2003. Behavioral game theory : experiments in strategic interaction. Russell Sage Foundation.
[25] Gambetta, D. 2000. Can We Trust Trust? https://www.semanticscholar.org/paper/Can-We-Trust-Trust-Gambetta/542ace96c6daa25922e626aaa8ca4aa904c2a2b0. Accessed: 2022-04-26.
[26] Cannon-Bowers, J.A. and Salas, E. 2001. Reflections on shared cognition. Journal of Organizational Behavior. 22, 2 (Mar. 2001), 195–202. DOI:https://doi.org/10.1002/JOB.82.
[27] Card, S.K. et al. 1980. The keystroke-level model for user performance time with interactive systems. Communications of the ACM. 23, 7 (Jul. 1980), 396–410. DOI:https://doi.org/10.1145/358886.358895.
[28] Carlisle, J.H. Evaluating the impact of office automation on top management communication. Proceedings of the June 7-10, 1976, national computer conference and exposition on - AFIPS ’76. DOI:https://doi.org/10.1145/1499799.
[29] Carter, E.J. et al. 2014. Playing catch with robots: Incorporating social gestures into physical interactions. IEEE RO-MAN 2014 - 23rd IEEE International Symposium on Robot and Human Interactive Communication: Human-Robot Co-Existence: Adaptive Interfaces and Systems for Daily Life, Therapy, Assistance and Socially Engaging Interactions. (Oct. 2014), 231–236. DOI:https://doi.org/10.1109/ROMAN.2014.6926258.
[30] Cassell, J. 2000. Embodied conversational agents. (2000), 430.
[31] Chavaillaz, A. et al. 2018. Automation in visual inspection tasks: X-ray luggage screening supported by a system of direct, indirect or adaptable cueing with low and high system reliability. Ergonomics. 61, 10 (Oct. 2018), 1395–1408. DOI:https://doi.org/10.1080/00140139.2018.1481231.
[32] Chen, J.Y.C. et al. 2010. Supervisory Control of Unmanned Vehicles.
[33] Chen, T. et al. 2015. Increasing Autonomy Transparency through capability communication in multiple heterogeneous UAV management. IEEE International Conference on Intelligent Robots and Systems. 2015-December, (Dec. 2015), 2434–2439. DOI:https://doi.org/10.1109/IROS.2015.7353707.
[34] Christoffersen, K. and Woods, D.D. 2002. How to make automated systems team players. Advances in Human Performance and Cognitive Engineering Research. 2, (2002), 1–12. DOI:https://doi.org/10.1016/S1479-3601(02)02003-9.
[35] Clark, H.H. 2005. Coordinating with each other in a material world. Discourse Studies. 7, 4–5 (Aug. 2005), 507–525. DOI:https://doi.org/10.1177/1461445605054404.
[36] Clark, H.H. and Brennan, S.E. 1991. Grounding in communication. (1991).
[37] Cook, A. et al. 2005. Complex trauma in children and adolescents. Psychiatric Annals. 35, 5 (2005), 390–398. DOI:https://doi.org/10.3928/00485713-20050501-05.
[38] Costo, S. and Molfino, R. 2004. A new robotic unit for onboard airplanes bomb disposal. (2004).
[39] Cuzzolin, F. et al. 2020. Knowing me, knowing you: theory of mind in AI. Psychological medicine. 50, 7 (May 2020), 1057–1061. DOI:https://doi.org/10.1017/S0033291720000835.
[40] Dennett, D. 1987. The intentional stance. (1987), 400.
[41] Desideri, L. et al. 2019. Emotional processes in human-robot interaction during brief cognitive testing. Computers in Human Behavior. 90, (Jan. 2019), 331–342. DOI:https://doi.org/10.1016/J.CHB.2018.08.013.
[42] DiSalvo, C.F. et al. 2002. All robots are not created equal: The design and perception of humanoid robot heads. Proceedings of the Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques, DIS. (2002), 321–326. DOI:https://doi.org/10.1145/778712.778756.
[43] Duffy, B.R. 2003. Anthropomorphism and the social robot. Robotics and Autonomous Systems (Mar. 2003), 177–190.
[44] Duncan, S. and Fiske, D.W. 1977. Face-to-face interaction : research, methods, and theory. (1977), 361.
[45] Dzindolet, M.T. et al. 2003. The role of trust in automation reliance. International Journal of Human Computer Studies. 58, 6 (2003), 697–718. DOI:https://doi.org/10.1016/S1071-5819(03)00038-7.
[46] Ekman, P. 1964. Body position, facial expression, and verbal behavior during interviews. Journal of Abnormal and Social Psychology. 68, 3 (Mar. 1964), 295–301. DOI:https://doi.org/10.1037/H0040225.
[47] Ekman, P. and Friesen, W. 2003. Unmasking the face: A guide to recognizing emotions from facial clues. (2003).
[48] Ekman, P. and Friesen, W. V. 1969. Nonverbal leakage and clues to deception. Psychiatry. 32, 1 (Feb. 1969), 88–106. DOI:https://doi.org/10.1080/00332747.1969.11023575.
[49] Emery, N.J. 2000. The eyes have it: the neuroethology, function and evolution of social gaze. Neuroscience & Biobehavioral Reviews. 24, 6 (Aug. 2000), 581–604. DOI:https://doi.org/10.1016/S0149-7634(00)00025-7.
[50] Emmerich, K. et al. 2018. I’m glad you are on my side: How to design compelling game companions. CHI PLAY 2018 - Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play. (Oct. 2018), 153–162. DOI:https://doi.org/10.1145/3242671.3242709.
[51] Fan, X. and Yen, J. 2007. Realistic Cognitive Load Modeling for Enhancing Shared Mental Models in Human-Agent Collaboration. (2007).
[52] Fink, J. 2012. Anthropomorphism and human likeness in the design of robots and human-robot interaction. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). 7621 LNAI, (2012), 199–208. DOI:https://doi.org/10.1007/978-3-642-34103-8_20.
[53] Finomore, V. et al. 2009. Predicting vigilance: a fresh look at an old problem. Ergonomics. 52, 7 (2009), 791–808. DOI:https://doi.org/10.1080/00140130802641627.
[54] Fiore, S.M. et al. 2013. Toward understanding social cues and signals in human-robot interaction: Effects of robot gaze and proxemic behavior. Frontiers in Psychology. 4, NOV (2013). DOI:https://doi.org/10.3389/FPSYG.2013.00859.
[55] Folkes, V.S. 1982. Forming Relationships and the Matching Hypothesis. Personality and Social Psychology Bulletin. 8, 4 (1982), 631–636. DOI:https://doi.org/10.1177/0146167282084005.
[56] Fong, T. et al. 2003. A survey of socially interactive robots. Robotics and Autonomous Systems. 42, 3–4 (Mar. 2003), 143–166. DOI:https://doi.org/10.1016/S0921-8890(02)00372-X.
[57] Freire, A. et al. 2004. Are Eyes Windows to a Deceiver’s Soul? Children’s Use of Another’s Eye Gaze Cues in a Deceptive Situation. Developmental psychology. 40, 6 (Nov. 2004), 1093. DOI:https://doi.org/10.1037/0012-1649.40.6.1093.
[58] Friedman, J. et al. 2000. Additive logistic regression: A statistical view of boosting. Annals of Statistics. 28, 2 (2000), 337–407. DOI:https://doi.org/10.1214/AOS/1016218223.
[59] Frischen, A. et al. 2007. Gaze cueing of attention: visual attention, social cognition, and individual differences. Psychological bulletin. 133, 4 (Jul. 2007), 694–724. DOI:https://doi.org/10.1037/0033-2909.133.4.694.
[60] Fusaroli, R. et al. 2014. Dialog as interpersonal synergy. New Ideas in Psychology. 32, 1 (Jan. 2014), 147–157. DOI:https://doi.org/10.1016/J.NEWIDEAPSYCH.2013.03.005.
[61] Gallagher, H.L. and Frith, C.D. 2003. Functional imaging of ‘theory of mind.’ Trends in Cognitive Sciences. 7, 2 (Feb. 2003), 77–83. DOI:https://doi.org/10.1016/S1364-6613(02)00025-6.
[62] Garau, M. et al. 2003. The Impact of Avatar Realism and Eye Gaze Control on Perceived Quality of Communication in a Shared Immersive Virtual Environment.
[63] Gaze and eye contact: A research review: 1986. https://psycnet.apa.org/record/1986-27160-001. Accessed: 2022-04-26.
[64] Gebhard, P. and Schmitt, M. 2002. CrossTalk: An Interactive Installation with Animated Presentation Agents.
[65] Genesereth, M.R. and Ketchpel, S.P. 1994. Software agents. Communications of the ACM. 37, 7 (Jul. 1994), 48-ff. DOI:https://doi.org/10.1145/176789.176794.
[66] Gholami, B. et al. 2018. AI in the ICU: In the intensive care unit, artificial intelligence can keep watch. IEEE Spectrum. 55, 10 (Oct. 2018), 31–35. DOI:https://doi.org/10.1109/MSPEC.2018.8482421.
[67] Gibson, J.J. 2014. The Ecological Approach to Visual Perception : Classic Edition. The Ecological Approach to Visual Perception. (Nov. 2014). DOI:https://doi.org/10.4324/9781315740218.
[68] Gielniak, M.J. and Thomaz, A.L. 2011. Generating anticipation in robot motion. Proceedings - IEEE International Workshop on Robot and Human Interactive Communication. (2011), 449–454. DOI:https://doi.org/10.1109/ROMAN.2011.6005255.
[69] Giles, H. and Ogay, T. 2007. Communication Accommodation Theory.
[70] Goetz, J. et al. 2003. Matching robot appearance and behavior to tasks to improve human-robot cooperation. Proceedings - IEEE International Workshop on Robot and Human Interactive Communication (2003), 55–60.
[71] De Graaf, M.M.A. and Ben Allouch, S. 2013. Exploring influencing variables for the acceptance of social robots. Robotics and Autonomous Systems. 61, 12 (Dec. 2013), 1476–1486. DOI:https://doi.org/10.1016/J.ROBOT.2013.07.007.
[72] Grace, K. et al. 2018. Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts.
[73] Graefe, V. and Bischoff, R. 2003. Past, present and future of intelligent robots. Proceedings of IEEE International Symposium on Computational Intelligence in Robotics and Automation, CIRA. 2, (2003), 801–810. DOI:https://doi.org/10.1109/CIRA.2003.1222283.
[74] Gratch, J. et al. 2007. Creating Rapport with Virtual Agents. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). 4722 LNCS, (2007), 125–138. DOI:https://doi.org/10.1007/978-3-540-74997-4_12.
[75] Gratch, J. 2019. The Social Psychology of Human-agent Interaction. (Sep. 2019), 1–1.
[76] Gratch, J. et al. 2006. Virtual rapport. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). 4133 LNAI, (2006), 14–27. DOI:https://doi.org/10.1007/11821830_2.
[77] Grodzinsky, F.S. et al. 2011. Developing artificial agents worthy of trust: “Would you buy a used car from this artificial agent?” Ethics and Information Technology. 13, 1 (Mar. 2011), 17–27. DOI:https://doi.org/10.1007/S10676-010-9255-1/FIGURES/2.
[78] Gudykunst, W.B. 1995. Anxiety/uncertainty management (AUM) theory: Current status.
[79] Haim, G. et al. A Cultural Sensitive Agent for Human-Computer Negotiation.
[80] Haleem, A. et al. 2020. Artificial Intelligence (AI) applications in orthopaedics: An innovative technology to embrace. Journal of clinical orthopaedics and trauma. 11, Suppl 1 (Feb. 2020), S80–S81. DOI:https://doi.org/10.1016/J.JCOT.2019.06.012.
[81] Hancock, P.A. et al. 2011. A meta-analysis of factors affecting trust in human-robot interaction. Human Factors. 53, 5 (Oct. 2011), 517–527. DOI:https://doi.org/10.1177/0018720811417254.
[82] Hancock, P.A. and Scallen, S.F. 2004. Allocating functions in human–machine systems. Viewing psychology as a whole: The integrative science of William N. Dember. (Oct. 2004), 509–539. DOI:https://doi.org/10.1037/10290-024.
[83] Hayes, B. and Scassellati, B. 2013. Challenges in Shared-Environment Human-Robot Collaboration. (2013).
[84] Heerink, M. et al. 2010. Assessing acceptance of assistive social agent technology by older adults: The almere model. International Journal of Social Robotics. 2, 4 (2010), 361–375. DOI:https://doi.org/10.1007/s12369-010-0068-5.
[85] Hegel, F. et al. 2006. Playing a different imitation game: Interaction with an Empathic Android Robot. Proceedings of the 2006 6th IEEE-RAS International Conference on Humanoid Robots, HUMANOIDS. (2006), 56–61. DOI:https://doi.org/10.1109/ICHR.2006.321363.
[86] Hemsley, G.D. and Doob, A.N. 1978. The Effect of Looking Behavior on Perceptions of a Communicator’s Credibility1. Journal of Applied Social Psychology. 8, 2 (Jun. 1978), 136–142. DOI:https://doi.org/10.1111/J.1559-1816.1978.TB00772.X.
[87] Hinds, P.J. et al. 2004. Whose Job Is It Anyway? A Study of Human-Robot Interaction in a Collaborative Task.
[88] Hocks, M.E. and Kendrick, M.R. 2003. Eloquent images : word and image in the age of new media. (2003), 318.
[89] Hogg, M.A. and Reid, S.A. 2006. Social Identity, Self-Categorization, and the Communication of Group Norms. Communication Theory. 16, 1 (Feb. 2006), 7–30. DOI:https://doi.org/10.1111/J.1468-2885.2006.00003.X.
[90] Conte, R. and Castelfranchi, C. 1995. Cognitive and Social Action. (Jun. 1995). DOI:https://doi.org/10.4324/9780203783221.
[91] Jackson, R.B. and Williams, T. 2019. Language-Capable Robots may Inadvertently Weaken Human Moral Norms. ACM/IEEE International Conference on Human-Robot Interaction. 2019-March, (Mar. 2019), 401–410. DOI:https://doi.org/10.1109/HRI.2019.8673123.
[92] Joo, E.J. et al. 2019. Response by joo et al to letter regarding article, “high-risk human papillomavirus infection and the risk of cardiovascular disease in Korean women.” Circulation Research. 125, 3 (Jul. 2019), E15. DOI:https://doi.org/10.1161/CIRCRESAHA.119.315480.
[93] Jung, B. and Kopp, S. 2003. FlurMax: An Interactive Virtual Agent for Entertaining Visitors in a Hallway.
[94] Kalegina, A. et al. 2018. Characterizing the Design Space of Rendered Robot Faces. ACM/IEEE International Conference on Human-Robot Interaction (Feb. 2018), 96–104.
[95] Kanda, T. et al. 2002. Development and evaluation of an interactive humanoid robot “Robovie.” Proceedings - IEEE International Conference on Robotics and Automation. 2, (2002), 1848–1854. DOI:https://doi.org/10.1109/ROBOT.2002.1014810.
[96] Kang, S.-H. et al. 2008. Does the Contingency of Agents’ Nonverbal Feedback Affect Users’ Social Anxiety? (2008).
[97] Katagiri, Y. et al. 2001. Social Persuasion in Human-Agent Interaction.
[98] Kiesler, S. et al. 1996. A prisoner’s dilemma experiment on cooperation with people and human-like computers. Journal of Personality and Social Psychology. 70, 1 (1996), 47–65. DOI:https://doi.org/10.1037//0022-3514.70.1.47.
[99] Kiesler, S. et al. 2008. Anthropomorphic Interactions with a Robot and Robot-like Agent. Social Cognition. 26, 2 (Apr. 2008), 169–181. DOI:https://doi.org/10.1521/SOCO.2008.26.2.169.
[100] Kim, D.J. and Lim, Y.K. 2019. Co-performing agent: Design for building user–agent partnership in learning and adaptive services. Conference on Human Factors in Computing Systems - Proceedings. (May 2019). DOI:https://doi.org/10.1145/3290605.3300714.
[101] Klein, G. et al. 2005. Common Ground and Coordination in Joint Activity. Organizational Simulation. (Jun. 2005), 139–184. DOI:https://doi.org/10.1002/0471739448.CH6.
[102] Kobayashi, H. 1993. Study on face robot for active human interface: mechanisms of face robot and expression of 6 basic facial expressions. (1993).
[103] Komiak, S.Y.X. and Benbasat, I. 2006. The effects of personalization and familiarity on trust and adoption of recommendation agents. MIS Quarterly: Management Information Systems. 30, 4 (2006), 941–960. DOI:https://doi.org/10.2307/25148760.
[104] Kopp, S. and Krämer, N. 2021. Revisiting Human-Agent Communication: The Importance of Joint Co-construction and Understanding Mental States. Frontiers in Psychology. 12, (Mar. 2021), 597. DOI:https://doi.org/10.3389/FPSYG.2021.580955/BIBTEX.
[105] Krämer, N. et al. 2013. Smile and the world will smile with you-The effects of a virtual agent’s smile on users’ evaluation and behavior. International Journal of Human-Computer Studies. 71, 3 (Mar. 2013), 335–349. DOI:https://doi.org/10.1016/J.IJHCS.2012.09.006.
[106] Krämer, N.C. et al. 2012. Human-agent and human-robot interaction theory: Similarities to and differences from human-human interaction. Studies in Computational Intelligence. 396, (2012), 215–240. DOI:https://doi.org/10.1007/978-3-642-25691-2_9.
[107] Kramer, R.M. and Tyler, T.R. 1996. Trust in organizations: Frontiers of theory and research. (1996), 429.
[108] Kraut, R.E. and Poe, D.B. 1980. Behavioral roots of person perception: The deception judgments of customs inspectors and laymen. Journal of Personality and Social Psychology. 39, 5 (1980), 784–798. DOI:https://doi.org/10.1037/0022-3514.39.5.784.
[109] Krumhuber, E. et al. 2007. Facial Dynamics as Indicators of Trustworthiness and Cooperative Behavior. Emotion. 7, 4 (Nov. 2007), 730–735. DOI:https://doi.org/10.1037/1528-3542.7.4.730.
[110] Kunze, A. et al. 2019. Automation transparency: implications of uncertainty communication for human-automation interaction and interfaces. Ergonomics. 62, 3 (Mar. 2019), 345–360. DOI:https://doi.org/10.1080/00140139.2018.1547842.
[111] Lackey, S. et al. 2011. Defining next-generation multi-modal communication in Human Robot Interaction. Proceedings of the Human Factors and Ergonomics Society. (2011), 461–464. DOI:https://doi.org/10.1177/1071181311551095.
[112] LaFrance, M. 1979. Nonverbal Synchrony and Rapport: Analysis by the Cross-Lag Panel Technique. Social Psychology Quarterly. 42, 1 (Mar. 1979), 66. DOI:https://doi.org/10.2307/3033875.
[113] Lee, E.J. 2010. What Triggers Social Responses to Flattering Computers? Experimental Tests of Anthropomorphism and Mindlessness Explanations. Communication Research. 37, 2 (Feb. 2010), 191–214. DOI:https://doi.org/10.1177/0093650209356389.
[114] Lee, J.D. and See, K.A. 2004. Trust in automation: Designing for appropriate reliance. Human Factors. 46, 1 (Aug. 2004), 50–80. DOI:https://doi.org/10.1518/hfes.46.1.50_30392.
[115] Lee, S. et al. 1996. When the interface is a face. Human-Computer Interaction. 11, 2 (1996), 97–124. DOI:https://doi.org/10.1207/S15327051HCI1102_1.
[116] Lee, S. et al. 1996. When the interface is a face. Human-Computer Interaction. 11, 2 (1996), 97–124. DOI:https://doi.org/10.1207/S15327051HCI1102_1.
[117] Legacy, C. et al. 2019. Planning the driverless city. Transport Reviews. 39, 1 (Jan. 2019), 84–102. DOI:https://doi.org/10.1080/01441647.2018.1466835.
[118] Leslie, A.M. 1987. Pretense and representation: The origins of “theory of mind.” Psychological Review. 94, 4 (1987), 412–426. DOI:https://doi.org/10.1037//0033-295X.94.4.412.
[119] Licklider, J.C.R. 1960. Man-Computer Symbiosis. IRE Transactions on Human Factors in Electronics. HFE-1, 1 (1960), 4–11. DOI:https://doi.org/10.1109/THFE2.1960.4503259.
[120] Lohse, M. et al. 2014. Robot gestures make difficult tasks easier: The impact of gestures on perceived workload and task performance. Conference on Human Factors in Computing Systems - Proceedings. (2014), 1459–1466. DOI:https://doi.org/10.1145/2556288.2557274.
[121] Looije, R. et al. 2010. Persuasive robotic assistant for health self-management of older adults: Design and evaluation of social behaviors. International Journal of Human-Computer Studies. 68, 6 (Jun. 2010), 386–397. DOI:https://doi.org/10.1016/J.IJHCS.2009.08.007.
[122] Looser, C.E. and Wheatley, T. 2010. The tipping point of animacy: How, when, and where we perceive life in a face. Psychological Science. 21, 12 (Dec. 2010), 1854–1862. DOI:https://doi.org/10.1177/0956797610388044.
[123] Lucas, G.M. et al. 2014. It’s only a computer: Virtual humans increase willingness to disclose. Computers in Human Behavior. 37, (Aug. 2014), 94–100. DOI:https://doi.org/10.1016/J.CHB.2014.04.043.
[124] Luger, E. and Sellen, A. 2016. “Like having a really bad pa”: The gulf between user expectation and experience of conversational agents. Conference on Human Factors in Computing Systems - Proceedings. (May 2016), 5286–5297. DOI:https://doi.org/10.1145/2858036.2858288.
[125] Lyons, J.B. 2013. Being Transparent about Transparency: A Model for Human-Robot Interaction. (2013).
[126] Machine Theory of Mind: 2018. http://proceedings.mlr.press/v80/rabinowitz18a.html. Accessed: 2022-04-27.
[127] Manser Payne, E.H. et al. 2021. Enhancing the value co-creation process: artificial intelligence and mobile banking service platforms. Journal of Research in Interactive Marketing. 15, 1 (2021), 68–85. DOI:https://doi.org/10.1108/JRIM-10-2020-0214.
[128] Marakas, G.M. et al. 2000. Theoretical model of differential social attributions toward computing technology: when the metaphor becomes the model. International Journal of Human Computer Studies. 52, 4 (2000), 719–750. DOI:https://doi.org/10.1006/IJHC.1999.0348.
[129] Marková, I. and British Academy. 2004. Trust and democratic transition in post-communist Europe. (2004), 217.
[130] Mayer, R.C. et al. 1995. An Integrative Model of Organizational Trust.
[131] McKinney, S.M. et al. 2020. International evaluation of an AI system for breast cancer screening. Nature 2020 577:7788. 577, 7788 (Jan. 2020), 89–94. DOI:https://doi.org/10.1038/s41586-019-1799-6.
[132] De Melo, C.M. et al. 2015. Humans versus computers: Impact of emotion expressions on people’s decision making. IEEE Transactions on Affective Computing. 6, 2 (Apr. 2015), 127–136. DOI:https://doi.org/10.1109/TAFFC.2014.2332471.
[133] Merritt, S.M. 2011. Affective processes in human-automation interactions. Human Factors. 53, 4 (Aug. 2011), 356–370. DOI:https://doi.org/10.1177/0018720811411912.
[134] Merritt, S.M. et al. 2013. I trust it, but I don’t know why: effects of implicit attitudes toward automation on trust in an automated system. Human factors. 55, 3 (Jun. 2013), 520–534. DOI:https://doi.org/10.1177/0018720812465081.
[135] Merritt, S.M. and Ilgen, D.R. 2008. Not all trust is created equal: dispositional and history-based trust in human-automation interactions. Human factors. 50, 2 (Apr. 2008), 194–210. DOI:https://doi.org/10.1518/001872008X288574.
[136] Mishra, P. and Mishra, P. 2006. Affective Feedback from Computers and its Effect on Perceived Ability and... Journal of Educational Multimedia and Hypermedia. 15, 1 (2006), 107–131.
[137] Misuraca, G. et al. 2020. The use of AI in public services: results from a preliminary mapping across the EU. Proceedings of the 13th International Conference on Theory and Practice of Electronic Governance. 2020, (2020), 23–25. DOI:https://doi.org/10.1145/3428502.
[138] Moon, Y. 2000. Intimate Exchanges: Using Computers to Elicit Self-Disclosure from Consumers. Journal of Consumer Research. 26, 4 (Mar. 2000), 323–339. DOI:https://doi.org/10.1086/209566.
[139] Moon, Y. and Nass, C. 1996. How “real” are computer personalities? Psychological responses to personality types in human-computer interaction. Communication Research. 23, 6 (1996), 651–674. DOI:https://doi.org/10.1177/009365096023006002.
[140] Morewedge, C.K. 2009. Negativity Bias in Attribution of External Agency. Journal of Experimental Psychology: General. 138, 4 (Nov. 2009), 535–545. DOI:https://doi.org/10.1037/A0016796.
[141] Nakajima, K. and Niitsuma, M. 2020. Effects of Space and Scenery on Virtual-Pet-Assisted Activity. Proceedings of the 8th International Conference on Human-Agent Interaction. 20, (2020). DOI:https://doi.org/10.1145/3406499.
[142] Nakatsu, R. et al. Emotion Recognition and Its Application to Computer Agents with Spontaneous Interactive Capabilities.
[143] Nass, C. et al. 1997. Are Machines Gender Neutral? Gender-Stereotypic Responses to Computers With Voices. Journal of Applied Social Psychology. 27, 10 (May 1997), 864–876. DOI:https://doi.org/10.1111/J.1559-1816.1997.TB00275.X.
[144] Nass, C. et al. 1994. Computers are social actors. Conference on Human Factors in Computing Systems - Proceedings. (1994), 72–78. DOI:https://doi.org/10.1145/259963.260288.
[145] Nass, C. et al. 1994. Computers are social actors. Proceedings of the SIGCHI conference on Human factors in computing systems celebrating interdependence - CHI ’94. (1994), 72–78. DOI:https://doi.org/10.1145/191666.191703.
[146] Nass, C. et al. 1997. How users reciprocate to computers: An experiment that demonstrates behavior change. Conference on Human Factors in Computing Systems - Proceedings. 22-27-March-1997, (Mar. 1997), 331–332. DOI:https://doi.org/10.1145/1120212.1120419.
[147] Nass, C. and Moon, Y. 2000. Machines and mindlessness: Social responses to computers. Journal of Social Issues. 56, 1 (2000), 81–103. DOI:https://doi.org/10.1111/0022-4537.00153.
[148] Neubauer, C. et al. 2012. Fatigue and voluntary utilization of automation in simulated driving. Human factors. 54, 5 (Oct. 2012), 734–746. DOI:https://doi.org/10.1177/0018720811423261.
[149] Norman, D. 1988. The Design of Everyday Things. DOI:https://doi.org/10.5860/CHOICE.51-5559.
[150] Norman, D.A. 1988. The psychology of everyday things. (1988), 257.
[151] Nowak, K. 2001. Defining and Differentiating Copresence, Social Presence and Presence as Transportation. Presence 2001: 4th Annual International Workshop, Philadelphia. (2001), 686–690.
[152] Nowak, K.L. 2015. Examining Perception and Identification in Avatar-mediated Interaction. The Handbook of the Psychology of Communication Technology. (Jan. 2015), 87–114. DOI:https://doi.org/10.1002/9781118426456.CH4.
[153] Nowak, K.L. and Biocca, F. 2003. The Effect of the Agency and Anthropomorphism on users’ Sense of Telepresence, Copresence, and Social Presence in Virtual Environments. Presence: Teleoperators and Virtual Environments. 12, 5 (Oct. 2003), 481–494. DOI:https://doi.org/10.1162/105474603322761289.
[154] Ogreten, S. et al. 2010. Recommended roles for uninhabited team members within mixed-initiative combat teams. 2010 International Symposium on Collaborative Technologies and Systems, CTS 2010. (2010), 531–536. DOI:https://doi.org/10.1109/CTS.2010.5478468.
[155] Oviatt, S. et al. 2004. Toward adaptive conversational interfaces. ACM Transactions on Computer-Human Interaction (TOCHI). 11, 3 (Sep. 2004), 300–328. DOI:https://doi.org/10.1145/1017494.1017498.
[156] Pak, R. et al. 2012. Decision support aids with anthropomorphic characteristics influence trust and performance in younger and older adults. Ergonomics. 55, 9 (Sep. 2012), 1059–1072. DOI:https://doi.org/10.1080/00140139.2012.691554.
[157] Panagiotopoulos, I. and Dimitrakopoulos, G. 2018. An empirical investigation on consumers’ intentions towards autonomous driving. Transportation Research Part C: Emerging Technologies. 95, (Oct. 2018), 773–784. DOI:https://doi.org/10.1016/J.TRC.2018.08.013.
[158] Parasuraman, R. et al. 2009. Adaptive automation for human supervision of multiple uninhabited vehicles: Effects on change detection, situation awareness, and mental workload. Military Psychology. 21, 2 (Apr. 2009), 270–297. DOI:https://doi.org/10.1080/08995600902768800.
[159] Parasuraman, R. and Riley, V. 1997. Humans and automation: Use, misuse, disuse, abuse. Human Factors. 39, 2 (Jun. 1997), 230–253. DOI:https://doi.org/10.1518/001872097778543886.
[160] Parise, S. et al. 1999. Cooperating with life-like interface agents. Computers in Human Behavior. 15, 2 (Mar. 1999), 123–142. DOI:https://doi.org/10.1016/S0747-5632(98)00035-1.
[161] Pertaub, D.P. et al. 2002. An experiment on public speaking anxiety in response to three different types of virtual audience. Presence: Teleoperators and Virtual Environments. 11, 1 (Feb. 2002), 68–78. DOI:https://doi.org/10.1162/105474602317343668.
[162] Philip, P.C. and Hafez, M.A. 2020. Entropy Weighted-Based (EWB) I-LEACH Protocol for Energy-Efficient IoT Applications. 2020 IEEE Global Conference on Artificial Intelligence and Internet of Things, GCAIoT 2020. (Dec. 2020). DOI:https://doi.org/10.1109/GCAIOT51063.2020.9345819.
[163] Phillips, E. et al. 2018. What is Human-like?: Decomposing Robots’ Human-like Appearance Using the Anthropomorphic roBOT (ABOT) Database. (2018). DOI:https://doi.org/10.1145/3171221.
[164] Poggi, I. and D’Errico, F. 2010. Cognitive modelling of human social signals. SSPW’10 - Proceedings of the 2010 ACM Social Signal Processing Workshop, Co-located with ACM Multimedia 2010. (2010), 21–26. DOI:https://doi.org/10.1145/1878116.1878124.
[165] Qiu, L. and Benbasat, I. 2008. Evaluating anthropomorphic product recommendation agents: A social relationship perspective to designing information systems. Journal of Management Information Systems. 25, 4 (Apr. 2008), 145–182. DOI:https://doi.org/10.2753/MIS0742-1222250405.
[166] Quigley, K.F.F. 1996. Trust: The social virtues and the creation of prosperity: By Francis Fukuyama. (New York: Free Press, 1995). Orbis. 40, 2 (1996), 333.
[167] Rai, A. 2020. Explainable AI: from black box to glass box. Journal of the Academy of Marketing Science. 48, 1 (Jan. 2020), 137–141. DOI:https://doi.org/10.1007/S11747-019-00710-5/TABLES/1.
[168] Ram, A. et al. 2018. Conversational AI: The Science Behind the Alexa Prize. (Jan. 2018). DOI:https://doi.org/10.48550/arxiv.1801.03604.
[169] Rand, D.G. and Nowak, M.A. 2013. Human cooperation. Trends in Cognitive Sciences. 17, 8 (Aug. 2013), 413–425. DOI:https://doi.org/10.1016/J.TICS.2013.06.003.
[170] Rau, P.L.P. et al. 2010. A cross-cultural study: Effect of robot appearance and task. International Journal of Social Robotics. 2, 2 (2010), 175–186. DOI:https://doi.org/10.1007/s12369-010-0056-9.
[171] Reeves, B. and Nass, C.I. 1996. The media equation: how people treat computers, television, and new media like real people and places. CSLI Publications; Cambridge University Press.
[172] Reichenbach, J. et al. 2011. Human performance consequences of automated decision aids in states of sleep loss. Human factors. 53, 6 (Dec. 2011), 717–728. DOI:https://doi.org/10.1177/0018720811418222.
[173] Richardson, D.C. et al. 2007. The art of conversation is coordination: Common ground and the coupling of eye movements during dialogue: Research article. Psychological Science. 18, 5 (May 2007), 407–413. DOI:https://doi.org/10.1111/j.1467-9280.2007.01914.x.
[174] Rickel, J. and Johnson, W.L. 1999. Animated agents for procedural training in virtual reality: perception, cognition, and motor control. Applied Artificial Intelligence. 13, 4–5 (May 1999), 343–382. DOI:https://doi.org/10.1080/088395199117315.
[175] Rickenberg, R. and Reeves, B. 2000. The effects of animated characters on anxiety, task performance, and evaluations of user interfaces. Conference on Human Factors in Computing Systems - Proceedings. (2000), 49–56. DOI:https://doi.org/10.1145/332040.332406.
[176] Robotic surgery - Mayo Clinic: 2019. https://www.mayoclinic.org/tests-procedures/robotic-surgery/about/pac-20394974. Accessed: 2022-03-25.
[177] Rousseau, D.M. et al. 1998. Not so different after all: A cross-discipline view of trust. Academy of Management Review. 23, 3 (1998), 393–404. DOI:https://doi.org/10.5465/AMR.1998.926617.
[178] Salem, M. and Dautenhahn, K. 2015. Evaluating Trust and Safety in HRI: Practical Issues and Ethical Challenges.
[179] Samovar, L.A. et al. 2009. Communication between cultures. (2009).
[180] Schermerhorn, P. et al. 2008. Robot social presence and gender: Do females view robots differently than males? HRI 2008 - Proceedings of the 3rd ACM/IEEE International Conference on Human-Robot Interaction: Living with Robots. (2008), 263–270. DOI:https://doi.org/10.1145/1349822.1349857.
[181] Schwaninger, I. et al. 2020. Exploring trust in human-agent collaboration. ECSCW 2019 - Proceedings of the 17th European Conference on Computer Supported Cooperative Work (2020).
[182] Sebeok, T.A. 1979. The sign & its masters. University of Texas Press.
[183] Serenko, A. 2007. Are interface agents scapegoats? Attributions of responsibility in human–agent interaction. Interacting with Computers. 19, 2 (Mar. 2007), 293–303. DOI:https://doi.org/10.1016/J.INTCOM.2006.07.005.
[184] Serholt, S. et al. 2020. Trouble and Repair in Child–Robot Interaction: A Study of Complex Interactions With a Robot Tutee in a Primary School Classroom. Frontiers in Robotics and AI. 7, (Apr. 2020), 46. DOI:https://doi.org/10.3389/FROBT.2020.00046/BIBTEX.
[185] Shergadwala, M. and El-Nasr, M.S. 2021. Esports Agents with a Theory of Mind: Towards Better Engagement, Education, and Engineering. (2021). DOI:https://doi.org/10.31219/OSF.IO/QJCG9.
[186] Shockley, K. et al. 2003. Mutual interpersonal postural constraints are involved in cooperative conversation. Journal of experimental psychology. Human perception and performance. 29, 2 (Apr. 2003), 326–332. DOI:https://doi.org/10.1037/0096-1523.29.2.326.
[187] Shvo, M. et al. 2020. Towards the Role of Theory of Mind in Explanation. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). 12175 LNAI, (2020), 75–93. DOI:https://doi.org/10.1007/978-3-030-51924-7_5.
[188] ACM SIGCHI Curriculum Development Group. 1992. ACM SIGCHI curricula for human-computer interaction. Association for Computing Machinery.
[189] Similarity of gestures and interpersonal influence.: 1969. https://psycnet.apa.org/record/1969-17366-001. Accessed: 2022-04-24.
[190] Singh, I.L. et al. 1993. Automation-Induced “Complacency”: Development of the Complacency-Potential Rating Scale. The International Journal of Aviation Psychology. 3, 2 (1993), 111–122. DOI:https://doi.org/10.1207/S15327108IJAP0302_2.
[191] Slater, M. et al. 1999. Public Speaking in Virtual Reality: Facing an Audience of Avatars. IEEE Computer Graphics and Applications. 19, 2 (Mar. 1999), 6–9. DOI:https://doi.org/10.1109/38.749116.
[192] Stafford, B.M. 2007. Echo objects : the cognitive work of images. (2007), 281.
[193] Strabala, K. et al. 2012. Learning the communication of intent prior to physical collaboration. Proceedings - IEEE International Workshop on Robot and Human Interactive Communication. (2012), 968–973. DOI:https://doi.org/10.1109/ROMAN.2012.6343875.
[194] Kaiser, S. et al. 1998. Emotional Episodes, Facial Expressions, and Reported Feelings in Human-Computer Interactions.
[195] Szalma, J.L. and Taylor, G.S. 2011. Individual differences in response to automation: the five factor model of personality. Journal of experimental psychology. Applied. 17, 2 (Jun. 2011), 71–96. DOI:https://doi.org/10.1037/A0024170.
[196] Takayama, L. 2008. Making sense of agentic objects and teleoperation: In-the-moment and reflective perspectives. Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction, HRI’09. (2008), 239–240. DOI:https://doi.org/10.1145/1514095.1514155.
[197] Teoh, E.R. and Kidd, D.G. 2017. Rage against the machine? Google’s self-driving cars versus human drivers. Journal of Safety Research. 63, (Dec. 2017), 57–60. DOI:https://doi.org/10.1016/J.JSR.2017.08.008.
[198] Tesla Autopilot “partly to blame” for crash - BBC News: https://www.bbc.com/news/technology-41242884. Accessed: 2023-03-25.
[199] Tesla Faces Lawsuit after Model X on Autopilot with “Dozing Driver” Blamed for Fatal Crash: https://www.newsweek.com/tesla-lawsuit-model-x-autopilot-fatal-crash-japan-yoshihiro-umeda-1501114. Accessed: 2023-03-25.
[200] Tesla Sued Over Fatal Crash Blamed on Autopilot Malfunction - Bloomberg: https://www.bloomberg.com/news/articles/2019-05-01/tesla-sued-over-fatal-crash-blamed-on-autopilot-navigation-error. Accessed: 2022-03-19.
[201] The ghost in the machine. The influence of Embodied Conversational Agents on user expectations and user behaviour in a TV/VCR application: 2003. https://www.researchgate.net/publication/242273054_The_ghost_in_the_machine_The_influence_of_Embodied_Conversational_Agents_on_user_expectations_and_user_behaviour_in_a_TVVCR_application1. Accessed: 2022-03-27.
[202] Tickle-Degnen, L. and Rosenthal, R. 1990. The Nature of Rapport and Its Nonverbal Correlates. Psychological Inquiry. 1, 4 (Jan. 1990), 285–293. DOI:https://doi.org/10.1207/S15327965PLI0104_1.
[203] Tjøstheim, T.A. et al. 2019. A computational model of trust-, pupil-, and motivation dynamics. HAI 2019 - Proceedings of the 7th International Conference on Human-Agent Interaction. (Sep. 2019), 179–185. DOI:https://doi.org/10.1145/3349537.3351896.
[204] Todorov, A. et al. 2015. Social Attributions from Faces: Determinants, Consequences, Accuracy, and Functional Significance. Annual Review of Psychology. 66, (Jan. 2015), 519–545. DOI:https://doi.org/10.1146/ANNUREV-PSYCH-113011-143831.
[205] Tomasello, M. 2006. Why Don’t Apes Point? (2006).
[206] Triandis, H.C. 1960. Some Determinants of Interpersonal Communication. Human Relations. 13, 3 (1960), 279–287. DOI:https://doi.org/10.1177/001872676001300308.
[207] Tung, F.W. 2011. Influence of Gender and Age on the Attitudes of Children towards Humanoid Robots. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). 6764 LNCS, PART 4 (2011), 637–646. DOI:https://doi.org/10.1007/978-3-642-21619-0_76.
[208] Tyler, T.R. and Degoey, P. 2012. Trust in Organizational Authorities: The Influence of Motive Attributions on Willingness to Accept Decisions. Trust in Organizations: Frontiers of Theory and Research. (May 2012), 331–356. DOI:https://doi.org/10.4135/9781452243610.N16.
[209] Understanding the intentions of others from visual signals: Neurophysiological evidence: 1994. https://psycnet.apa.org/record/1995-24608-001. Accessed: 2022-04-26.
[210] Vinciarelli, A. 2009. Capturing order in social interactions [social sciences]. IEEE Signal Processing Magazine. 26, 5 (2009). DOI:https://doi.org/10.1109/MSP.2009.933382.
[211] de Visser, E.J. et al. 2016. Almost human: Anthropomorphism increases trust resilience in cognitive agents. Journal of Experimental Psychology: Applied. 22, 3 (Sep. 2016), 331–349. DOI:https://doi.org/10.1037/xap0000092.
[212] De Visser, E.J. et al. 2012. The World is not Enough: Trust in Cognitive Agents. Proceedings of the Human Factors and Ergonomics Society Annual Meeting. (Sep. 2012), 263–267. DOI:https://doi.org/10.1177/1071181312561062.
[213] Wagner, A.R. et al. 2018. Modeling the human-robot trust phenomenon: A conceptual framework based on risk. ACM Transactions on Interactive Intelligent Systems. 8, 4 (Nov. 2018). DOI:https://doi.org/10.1145/3152890.
[214] Walker, J.H. et al. 1994. Using a human face in an interface. (1994), 205. DOI:https://doi.org/10.1145/259963.260290.
[215] Warm, J.S. et al. 2008. Vigilance requires hard mental work and is stressful. Human Factors. 50, 3 (Jun. 2008), 433–441. DOI:https://doi.org/10.1518/001872008X312152.
[216] Watch the horrifying moment a Tesla car crashes into a parked lorry while in “autopilot” mode - Mirror Online: https://www.mirror.co.uk/tech/watch-horrifying-moment-tesla-car-8839739. Accessed: 2023-03-25.
[217] Waytz, A. et al. 2010. Making sense by making sentient: effectance motivation increases anthropomorphism. Journal of personality and social psychology. 99, 3 (2010), 410–435. DOI:https://doi.org/10.1037/A0020240.
[218] Waytz, A. et al. 2014. The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology. 52, (May 2014), 113–117. DOI:https://doi.org/10.1016/j.jesp.2014.01.005.
[219] Weitz, K. et al. 2019. “Do you trust me?”: Increasing User-Trust by Integrating Virtual Agents in Explainable AI Interaction Design. IVA 2019 - Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents. (Jul. 2019), 7–9. DOI:https://doi.org/10.1145/3308532.3329441.
[220] When non-human is better than semi-human: Consistency in speech interfaces | Request PDF: 2001. https://www.researchgate.net/publication/229068086_When_non-human_is_better_than_semi-human_Consistency_in_speech_interfaces. Accessed: 2022-03-28.
[221] Wiese, E. et al. 2014. What We Observe Is Biased by What Other People Tell Us: Beliefs about the Reliability of Gaze Behavior Modulate Attentional Orienting to Gaze Cues. PLOS ONE. 9, 4 (Apr. 2014), e94529. DOI:https://doi.org/10.1371/JOURNAL.PONE.0094529.
[222] Wiltshire, T.J. et al. 2014. An interdisciplinary taxonomy of social cues and signals in the service of engineering robotic social intelligence. Unmanned Systems Technology XVI. 9084, (Jun. 2014), 90840F. DOI:https://doi.org/10.1117/12.2049933.
[223] Wiltshire, T.J. et al. 2013. Effects of robot gaze and proxemic behavior on perceived social presence during a hallway navigation scenario. Proceedings of the Human Factors and Ergonomics Society. (2013), 1273–1277. DOI:https://doi.org/10.1177/1541931213571282.
[224] Woods, S. et al. 2005. Is Someone Watching Me?-Consideration of Social Facilitation Effects in Human-Robot Interaction Experiments. (2005).
[225] Wooldridge, M. and Jennings, N.R. 1995. Intelligent Agents: Theory and Practice.
[226] Yamazaki, Y. et al. 2009. Intent expression using eye robot for mascot robot system. (Apr. 2009). DOI:https://doi.org/10.48550/arxiv.0904.1631.
[227] Yuksel, B.F. et al. 2017. Brains or Beauty. ACM Transactions on Internet Technology (TOIT). 17, 1 (Jan. 2017). DOI:https://doi.org/10.1145/2998572.
[228] Zanbaka, C.A. et al. 2007. Social responses to virtual humans: Implications for future interface design. Conference on Human Factors in Computing Systems - Proceedings. (2007), 1561–1570. DOI:https://doi.org/10.1145/1240624.1240861.
[229] Zhang, T. et al. 2019. The roles of initial trust and perceived risk in public’s acceptance of automated vehicles. Transportation Research Part C: Emerging Technologies. 98, (Jan. 2019), 207–220. DOI:https://doi.org/10.1016/J.TRC.2018.11.018.
[230] Zuckerman, M. et al. 1981. Verbal and Nonverbal Communication of Deception. Advances in Experimental Social Psychology. 14, C (Jan. 1981), 1–59. DOI:https://doi.org/10.1016/S0065-2601(08)60369-X.
[231] 董士海 2004. 人机交互的进展及面临的挑战 [Advances and challenges in human-computer interaction]. 计算机辅助设计与图形学学报 [Journal of Computer-Aided Design & Computer Graphics]. (2004).
[232] 蔡佳蒓 2019. 合作點好嗎?從自我決定理論探討在《OVERCOOKED》合作模擬遊戲中目標結構與玩家類型對享樂感與未來合作意願的影響 [Shall we cooperate? A self-determination-theory study of how goal structure and player type affect enjoyment and future cooperation intention in the cooperative simulation game Overcooked]. (Jan. 2019), 1–78. DOI:https://doi.org/10.6814/NCCU201901256.
Description Master's thesis
National Chengchi University
Master's Program in Digital Content
109462013
Source http://thesis.lib.nccu.edu.tw/record/#G0109462013
Type thesis
URI http://nccur.lib.nccu.edu.tw/handle/140.119/147168
Table of Contents
Abstract
Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Research Background and Motivation
1.2 Research Objectives
Chapter 2 Literature Review
2.1 Agents
2.1.1 Definition
2.1.2 Related Cases
2.1.3 Human-Agent Interaction
2.2 Trust
2.2.1 Definitions of Trust
2.2.2 Trust in Human-Computer Interaction
2.2.3 Factors Affecting Human Trust in Agents
2.3 Human-Human versus Human-Agent Interaction
2.3.1 The Significance of Humanoid Robots
2.3.2 Mutuality in Human Communication
2.3.3 Human-like Behavior in Robots
2.3.4 Mutuality and Trust
2.3.5 Research Questions
2.3.6 Research Hypotheses
Chapter 3 Method
3.1 Experimental Design
3.2 Participants and Pre-study Questionnaire
3.3 Apparatus and Game Engine
3.3.1 Apparatus
3.3.2 Game Engine
3.4 Task: Jigsaw Puzzle
3.5 Procedure
3.6 Dependent Variables
Chapter 4 Results
4.1 Questionnaire Reliability Analysis
4.2 Manipulation Checks of the Independent Variables
4.3 Pearson Correlation Analysis
4.3.1 Correlations among Trust Measures
4.3.2 Pearson Correlations between Trust Measures and Manipulation Checks
4.3.3 Median Split of Subjective Perception
4.4 One-way ANOVA
4.4.1 One-way ANOVA on Appearance
4.4.2 One-way ANOVA on Perceived Appearance
4.4.3 One-way ANOVA on Perceived Similarity
4.4.4 One-way ANOVA on Perceived Attention
4.4.5 One-way ANOVA on Perceived Compliance
4.4.6 Social Relationship
4.5 Two-way ANOVA
4.5.1 Appearance × Perceived Attention Interaction
4.5.2 Perceived Appearance × Behavior Interaction
4.5.3 Perceived Appearance × Perceived Compliance Interaction
4.5.4 Perceived Similarity × Behavior Interaction
4.5.5 Perceived Similarity × Perceived Compliance Interaction
4.5.6 Appearance × Gender Interaction
Chapter 5 Discussion
5.1 Humanoid Appearance: Higher Integrity and Lower Distrust, but Worse Task Performance
5.2 Subjective Perception: Mutuality Behavior Improves Trust and Interaction
5.3 Appearance × Behavior Interactions
5.3.1 Perceived Mutuality Behavior Affects Perceived Competence and Usefulness
5.3.2 Perceived Eye Gaze Affects Perceived Integrity of and Distrust in the Robotic Agent
5.3.3 Participants' Mental Models of Agent Behavior Affect Task Performance
5.3.4 Attribution of Responsibility for Task Success or Failure
Chapter 6 Conclusion
6.1 Conclusion
6.2 Limitations and Future Work
References
Appendix
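Chapter 4 above opens with a questionnaire reliability analysis (section 4.1). As a hedged illustration of the kind of check involved, the sketch below computes Cronbach's alpha for a single hypothetical five-item trust subscale; the `cronbach_alpha` helper and the synthetic item data are ours, and the thesis's actual scales and scores are not reproduced here.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert responses."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of respondents' totals
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(0.0, 1.0, (40, 1))            # shared trait per respondent
items = latent + rng.normal(0.0, 0.7, (40, 5))    # five correlated scale items
print(f"alpha = {cronbach_alpha(items):.3f}")
```

By convention, alpha values above roughly 0.7 are read as acceptable internal consistency for a subscale.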
dc.relation.reference (參考文獻) [1] Admoni, H. et al. 2011. Robot gaze does not reflexively cue human attention. (2011).
[2] Adolphs, R. et al. 1999. Social cognition and the human brain. Trends in Cognitive Sciences. 3, 12 (Dec. 1999), 469–479. DOI:https://doi.org/10.1016/S1364-6613(99)01399-6.
[3] Automated Trading Systems: The Pros and Cons: 2019. https://www.investopedia.com/articles/trading/11/automated-trading-systems.asp. Accessed: 2022-03-25.
[4] Axelrod, R. and Hamilton, W.D. 1981. The Evolution of Cooperation. New Series. 211, 4489 (1981), 1390–1396.
[5] Bailenson, J.N. et al. 2001. Equilibrium theory revisited: Mutual gaze and personal space in virtual environments. Presence: Teleoperators and Virtual Environments. 10, 6 (Dec. 2001), 583–598. DOI:https://doi.org/10.1162/105474601753272844.
[6] Bar, M. et al. 2006. Top-down facilitation of visual recognition. Proceedings of the National Academy of Sciences of the United States of America. 103, 2 (Jan. 2006), 449–454. DOI:https://doi.org/10.1073/PNAS.0507062103/SUPPL_FILE/07062FIG9.PDF.
[7] Baron-Cohen lgg, S. et al. 1995. From: Mindblindness: ern essay on autism end theor-y of mind. kc Learning, Development, and Conceptual Change. (1995).
[8] Bayliss, A.P. and Tipper, S.P. 2006. Predictive Gaze Cues and Personality Judgments: Should Eye Trust You? Psychological science. 17, 6 (Jun. 2006), 514. DOI:https://doi.org/10.1111/J.1467-9280.2006.01737.X.
[9] Bell, L. and Gustafson, J. 1999. REPETITION AND ITS PHONETIC REALIZATIONS: INVESTIGATING A SWEDISH DATABASE OF SPONTANEOUS COMPUTER-DIRECTED SPEECH. (1999).
[10] Benbasat, I. and Wang, W. 2005. Trust In and Adoption of Online Recommendation Agents. Journal of the Association for Information Systems. 6, 3 (Mar. 2005), 4. DOI:https://doi.org/10.17705/1jais.00065.
[11] Bennett, M.T. and Maruyama, Y. 2021. Intensional Artificial Intelligence: From Symbol Emergence to Explainable and Empathetic AI. (Apr. 2021). DOI:https://doi.org/10.48550/arxiv.2104.11573.
[12] Berger, C.R. and Bradac, J.J. 1982. Language and social knowledge : uncertainty in interpersonal relations. (1982), 151.
[13] Blakemore, S.J. and Decety, J. 2001. From the perception of action to the understanding of intention. Nature reviews. Neuroscience. 2, 8 (2001), 561–567. DOI:https://doi.org/10.1038/35086023.
[14] Blascovich, J. et al. 2002. Immersive Virtual Environment Technology as a Methodological Tool for Social Psychology.
[15] Bradshaw, J.M. An Introduction to Software Agents.
[16] Bratman, M.E. 1992. Shared Cooperative Activity. The Philosophical Review. 101, 2 (Apr. 1992), 327. DOI:https://doi.org/10.2307/2185537.
[17] Breazeal, C. et al. 2005. Effects of nonverbal communication on efficiency and robustness in human-robot teamwork. 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS. (2005), 708–713. DOI:https://doi.org/10.1109/IROS.2005.1545011.
[18] Breazeal, C. 2003. Emotion and sociable humanoid robots. International Journal of Human-Computer Studies. 59, 1–2 (Jul. 2003), 119–155. DOI:https://doi.org/10.1016/S1071-5819(03)00018-1.
[19] Brennan, S.E. and Hanna, J.E. 2009. Partner-Specific Adaptation in Dialog. Topics in Cognitive Science. 1, 2 (Apr. 2009), 274–291. DOI:https://doi.org/10.1111/J.1756-8765.2009.01019.X.
[20] Burgoon, J.K. et al. 2007. Interpersonal Adaptation: Dyadic Interaction Patterns. (2007).
[21] Burnett, R. 2004. How images think. (2004), 253.
[22] Burton-Chellew, M.N. and West, S.A. 2013. Prosocial preferences do not explain human cooperation in public-goods games. Proceedings of the National Academy of Sciences of the United States of America. 110, 1 (Jan. 2013), 216–221. DOI:https://doi.org/10.1073/PNAS.1210960110/SUPPL_FILE/PNAS.201210960SI.PDF.
[23] Buschmeier, H. and Kopp, S. 2018. Communicative Listener Feedback in Human-Agent Interaction: Artificial Speakers Need to Be Attentive and Adaptive Socially Interactive Agents Track. IFAAMAS. 9, (2018).
[24] Camerer, C. 2003. Behavioral game theory : experiments in strategic interaction. Russell Sage Foundation.
[25] Can We Trust Trust | Semantic Scholar: 2000. https://www.semanticscholar.org/paper/Can-We-Trust-Trust-Gambetta/542ace96c6daa25922e626aaa8ca4aa904c2a2b0. Accessed: 2022-04-26.
[26] Cannon-Bowers, J.A. and Salas, E. 2001. Reflections on shared cognition. Journal of Organizational Behavior. 22, 2 (Mar. 2001), 195–202. DOI:https://doi.org/10.1002/JOB.82.
[27] Card, S.K. et al. 1980. The keystroke-level model for user performance time with interactive systems. Communications of the ACM. 23, 7 (Jul. 1980), 396–410. DOI:https://doi.org/10.1145/358886.358895.
[28] Carlisle, J.H. Evaluating the impact of office automation on top management communication. Proceedings of the June 7-10, 1976, national computer conference and exposition on - AFIPS ’76. DOI:https://doi.org/10.1145/1499799.
[29] Carter, E.J. et al. 2014. Playing catch with robots: Incorporating social gestures into physical interactions. IEEE RO-MAN 2014 - 23rd IEEE International Symposium on Robot and Human Interactive Communication: Human-Robot Co-Existence: Adaptive Interfaces and Systems for Daily Life, Therapy, Assistance and Socially Engaging Interactions. (Oct. 2014), 231–236. DOI:https://doi.org/10.1109/ROMAN.2014.6926258.
[30] Cassell, J. 2000. Embodied conversational agents. (2000), 430.
[31] Chavaillaz, A. et al. 2018. Automation in visual inspection tasks: X-ray luggage screening supported by a system of direct, indirect or adaptable cueing with low and high system reliability. Ergonomics. 61, 10 (Oct. 2018), 1395–1408. DOI:https://doi.org/10.1080/00140139.2018.1481231.
[32] Chen, J.Y.C. et al. 2010. Supervisory Control of Unmanned Vehicles.
[33] Chen, T. et al. 2015. Increasing Autonomy Transparency through capability communication in multiple heterogeneous UAV management. IEEE International Conference on Intelligent Robots and Systems. 2015-December, (Dec. 2015), 2434–2439. DOI:https://doi.org/10.1109/IROS.2015.7353707.
[34] Christoffersen, K. and Woods, D.D. 2002. How to make automated systems team players. Advances in Human Performance and Cognitive Engineering Research. 2, (2002), 1–12. DOI:https://doi.org/10.1016/S1479-3601(02)02003-9.
[35] Clark, H.H. 2005. Coordinating with each other in a material world. Discourse Studies. 7, 4–5 (Aug. 2005), 507–525. DOI:https://doi.org/10.1177/1461445605054404.
[36] Clark, H.H. and Brennan, S.E. 1991. GROUNDING IN COMMUNICATION. (1991).
[37] Cook, A. et al. 2005. Complex trauma in children and adolescents. Psychiatric Annals. 35, 5 (2005), 390–398. DOI:https://doi.org/10.3928/00485713-20050501-05.
[38] Costo, S. and Molfino, R. 2004. A new robotic unit for onboard airplanes bomb disposal. (2004).
[39] Cuzzolin, F. et al. 2020. Knowing me, knowing you: theory of mind in AI. Psychological medicine. 50, 7 (May 2020), 1057–1061. DOI:https://doi.org/10.1017/S0033291720000835.
[40] Dennett, D. 1987. The intentional stance. (1987), 400.
[41] Desideri, L. et al. 2019. Emotional processes in human-robot interaction during brief cognitive testing. Computers in Human Behavior. 90, (Jan. 2019), 331–342. DOI:https://doi.org/10.1016/J.CHB.2018.08.013.
[42] DiSalvo, C.F. et al. 2002. All robots are not created equal: The design and perception of humanoid robot heads. Proceedings of the Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques, DIS. (2002), 321–326. DOI:https://doi.org/10.1145/778712.778756.
[43] Duffy, B.R. 2003. Anthropomorphism and the social robot. Robotics and Autonomous Systems (Mar. 2003), 177–190.
[44] Duncan, S. and Fiske, D.W. 1977. Face-to-face interaction: research, methods, and theory. (1977), 361.
[45] Dzindolet, M.T. et al. 2003. The role of trust in automation reliance. International Journal of Human Computer Studies. 58, 6 (2003), 697–718. DOI:https://doi.org/10.1016/S1071-5819(03)00038-7.
[46] Ekman, P. 1964. Body position, facial expression, and verbal behavior during interviews. Journal of Abnormal and Social Psychology. 68, 3 (Mar. 1964), 295–301. DOI:https://doi.org/10.1037/H0040225.
[47] Ekman, P. and Friesen, W. 2003. Unmasking the face: A guide to recognizing emotions from facial clues. (2003).
[48] Ekman, P. and Friesen, W. V. 1969. Nonverbal leakage and clues to deception. Psychiatry. 32, 1 (Feb. 1969), 88–106. DOI:https://doi.org/10.1080/00332747.1969.11023575.
[49] Emery, N.J. 2000. The eyes have it: the neuroethology, function and evolution of social gaze. Neuroscience & Biobehavioral Reviews. 24, 6 (Aug. 2000), 581–604. DOI:https://doi.org/10.1016/S0149-7634(00)00025-7.
[50] Emmerich, K. et al. 2018. I’m glad you are on my side: How to design compelling game companions. CHI PLAY 2018 - Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play. (Oct. 2018), 153–162. DOI:https://doi.org/10.1145/3242671.3242709.
[51] Fan, X. and Yen, J. 2007. Realistic Cognitive Load Modeling for Enhancing Shared Mental Models in Human-Agent Collaboration. (2007).
[52] Fink, J. 2012. Anthropomorphism and human likeness in the design of robots and human-robot interaction. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). 7621 LNAI, (2012), 199–208. DOI:https://doi.org/10.1007/978-3-642-34103-8_20.
[53] Finomore, V. et al. 2009. Predicting vigilance: a fresh look at an old problem. Ergonomics. 52, 7 (2009), 791–808. DOI:https://doi.org/10.1080/00140130802641627.
[54] Fiore, S.M. et al. 2013. Toward understanding social cues and signals in human-robot interaction: Effects of robot gaze and proxemic behavior. Frontiers in Psychology. 4, NOV (2013). DOI:https://doi.org/10.3389/FPSYG.2013.00859.
[55] Folkes, V.S. 1982. Forming Relationships and the Matching Hypothesis. Personality and Social Psychology Bulletin. 8, 4, 631–636. DOI:https://doi.org/10.1177/0146167282084005.
[56] Fong, T. et al. 2003. A survey of socially interactive robots. Robotics and Autonomous Systems. 42, 3–4 (Mar. 2003), 143–166. DOI:https://doi.org/10.1016/S0921-8890(02)00372-X.
[57] Freire, A. et al. 2004. Are Eyes Windows to a Deceiver’s Soul? Children’s Use of Another’s Eye Gaze Cues in a Deceptive Situation. Developmental psychology. 40, 6 (Nov. 2004), 1093. DOI:https://doi.org/10.1037/0012-1649.40.6.1093.
[58] Friedman, J. et al. 2000. Additive logistic regression: A statistical view of boosting. Annals of Statistics. 28, 2 (2000), 337–407. DOI:https://doi.org/10.1214/AOS/1016218223.
[59] Frischen, A. et al. 2007. Gaze cueing of attention: visual attention, social cognition, and individual differences. Psychological bulletin. 133, 4 (Jul. 2007), 694–724. DOI:https://doi.org/10.1037/0033-2909.133.4.694.
[60] Fusaroli, R. et al. 2014. Dialog as interpersonal synergy. New Ideas in Psychology. 32, 1 (Jan. 2014), 147–157. DOI:https://doi.org/10.1016/J.NEWIDEAPSYCH.2013.03.005.
[61] Gallagher, H.L. and Frith, C.D. 2003. Functional imaging of ‘theory of mind.’ Trends in Cognitive Sciences. 7, 2 (Feb. 2003), 77–83. DOI:https://doi.org/10.1016/S1364-6613(02)00025-6.
[62] Garau, M. et al. 2003. The Impact of Avatar Realism and Eye Gaze Control on Perceived Quality of Communication in a Shared Immersive Virtual Environment.
[63] Gaze and eye contact: A research review: 1986. https://psycnet.apa.org/record/1986-27160-001. Accessed: 2022-04-26.
[64] Gebhard, P. and Schmitt, M. 2002. CrossTalk: An Interactive Installation with Animated Presentation Agents.
[65] Genesereth, M.R. and Ketchpel, S.P. 1994. Software agents. Communications of the ACM. 37, 7 (Jul. 1994), 48-ff. DOI:https://doi.org/10.1145/176789.176794.
[66] Gholami, B. et al. 2018. AI in the ICU: In the intensive care unit, artificial intelligence can keep watch. IEEE Spectrum. 55, 10 (Oct. 2018), 31–35. DOI:https://doi.org/10.1109/MSPEC.2018.8482421.
[67] Gibson, J.J. 2014. The Ecological Approach to Visual Perception: Classic Edition. (Nov. 2014). DOI:https://doi.org/10.4324/9781315740218.
[68] Gielniak, M.J. and Thomaz, A.L. 2011. Generating anticipation in robot motion. Proceedings - IEEE International Workshop on Robot and Human Interactive Communication. (2011), 449–454. DOI:https://doi.org/10.1109/ROMAN.2011.6005255.
[69] Giles, H. and Ogay, T. 2007. Communication Accommodation Theory.
[70] Goetz, J. et al. 2003. Matching robot appearance and behavior to tasks to improve human-robot cooperation. Proceedings - IEEE International Workshop on Robot and Human Interactive Communication (2003), 55–60.
[71] De Graaf, M.M.A. and Ben Allouch, S. 2013. Exploring influencing variables for the acceptance of social robots. Robotics and Autonomous Systems. 61, 12 (Dec. 2013), 1476–1486. DOI:https://doi.org/10.1016/J.ROBOT.2013.07.007.
[72] Grace, K. et al. 2018. Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts.
[73] Graefe, V. and Bischoff, R. 2003. Past, present and future of intelligent robots. Proceedings of IEEE International Symposium on Computational Intelligence in Robotics and Automation, CIRA. 2, (2003), 801–810. DOI:https://doi.org/10.1109/CIRA.2003.1222283.
[74] Gratch, J. et al. 2007. Creating Rapport with Virtual Agents. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). 4722 LNCS, (2007), 125–138. DOI:https://doi.org/10.1007/978-3-540-74997-4_12.
[75] Gratch, J. 2019. The Social Psychology of Human-agent Interaction. (Sep. 2019), 1–1.
[76] Gratch, J. et al. 2006. Virtual rapport. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). 4133 LNAI, (2006), 14–27. DOI:https://doi.org/10.1007/11821830_2.
[77] Grodzinsky, F.S. et al. 2011. Developing artificial agents worthy of trust: “Would you buy a used car from this artificial agent?” Ethics and Information Technology. 13, 1 (Mar. 2011), 17–27. DOI:https://doi.org/10.1007/s10676-010-9255-1.
[78] Gudykunst, W.B. 1995. Anxiety/uncertainty management (AUM) theory: Current status.
[79] Haim, G. et al. A Cultural Sensitive Agent for Human-Computer Negotiation.
[80] Haleem, A. et al. 2020. Artificial Intelligence (AI) applications in orthopaedics: An innovative technology to embrace. Journal of clinical orthopaedics and trauma. 11, Suppl 1 (Feb. 2020), S80–S81. DOI:https://doi.org/10.1016/J.JCOT.2019.06.012.
[81] Hancock, P.A. et al. 2011. A meta-analysis of factors affecting trust in human-robot interaction. Human Factors. 53, 5 (Oct. 2011), 517–527. DOI:https://doi.org/10.1177/0018720811417254.
[82] Hancock, P.A. and Scallen, S.F. 2004. Allocating functions in human–machine systems. Viewing psychology as a whole: The integrative science of William N. Dember. (Oct. 2004), 509–539. DOI:https://doi.org/10.1037/10290-024.
[83] Hayes, B. and Scassellati, B. 2013. Challenges in Shared-Environment Human-Robot Collaboration. (2013).
[84] Heerink, M. et al. 2010. Assessing acceptance of assistive social agent technology by older adults: The almere model. International Journal of Social Robotics. 2, 4 (2010), 361–375. DOI:https://doi.org/10.1007/s12369-010-0068-5.
[85] Hegel, F. et al. 2006. Playing a different imitation game: Interaction with an Empathic Android Robot. Proceedings of the 2006 6th IEEE-RAS International Conference on Humanoid Robots, HUMANOIDS. (2006), 56–61. DOI:https://doi.org/10.1109/ICHR.2006.321363.
[86] Hemsley, G.D. and Doob, A.N. 1978. The Effect of Looking Behavior on Perceptions of a Communicator’s Credibility1. Journal of Applied Social Psychology. 8, 2 (Jun. 1978), 136–142. DOI:https://doi.org/10.1111/J.1559-1816.1978.TB00772.X.
[87] Hinds, P.J. et al. 2004. Whose Job Is It Anyway? A Study of Human-Robot Interaction in a Collaborative Task.
[88] Hocks, M.E. and Kendrick, M.R. 2003. Eloquent images: word and image in the age of new media. (2003), 318.
[89] Hogg, M.A. and Reid, S.A. 2006. Social Identity, Self-Categorization, and the Communication of Group Norms. Communication Theory. 16, 1 (Feb. 2006), 7–30. DOI:https://doi.org/10.1111/J.1468-2885.2006.00003.X.
[90] Conte, R. and Castelfranchi, C. 1995. Cognitive and Social Action. (Jun. 1995). DOI:https://doi.org/10.4324/9780203783221.
[91] Jackson, R.B. and Williams, T. 2019. Language-Capable Robots may Inadvertently Weaken Human Moral Norms. ACM/IEEE International Conference on Human-Robot Interaction. 2019-March, (Mar. 2019), 401–410. DOI:https://doi.org/10.1109/HRI.2019.8673123.
[92] Joo, E.J. et al. 2019. Response by joo et al to letter regarding article, “high-risk human papillomavirus infection and the risk of cardiovascular disease in Korean women.” Circulation Research. 125, 3 (Jul. 2019), E15. DOI:https://doi.org/10.1161/CIRCRESAHA.119.315480.
[93] Jung, B. and Kopp, S. 2003. FlurMax: An Interactive Virtual Agent for Entertaining Visitors in a Hallway.
[94] Kalegina, A. et al. 2018. Characterizing the Design Space of Rendered Robot Faces. ACM/IEEE International Conference on Human-Robot Interaction (Feb. 2018), 96–104.
[95] Kanda, T. et al. 2002. Development and evaluation of an interactive humanoid robot “Robovie.” Proceedings - IEEE International Conference on Robotics and Automation. 2, (2002), 1848–1854. DOI:https://doi.org/10.1109/ROBOT.2002.1014810.
[96] Kang, S.-H. et al. 2008. Does the Contingency of Agents’ Nonverbal Feedback Affect Users’ Social Anxiety? (2008).
[97] Katagiri, Y. et al. 2001. Social Persuasion in Human-Agent Interaction.
[98] Kiesler, S. et al. 1996. A prisoner’s dilemma experiment on cooperation with people and human-like computers. Journal of Personality and Social Psychology. 70, 1 (1996), 47–65. DOI:https://doi.org/10.1037//0022-3514.70.1.47.
[99] Kiesler, S. et al. 2008. Anthropomorphic Interactions with a Robot and Robot-like Agent. Social Cognition. 26, 2 (Apr. 2008), 169–181. DOI:https://doi.org/10.1521/soco.2008.26.2.169.
[100] Kim, D.J. and Lim, Y.K. 2019. Co-performing agent: Design for building user–agent partnership in learning and adaptive services. Conference on Human Factors in Computing Systems - Proceedings. (May 2019). DOI:https://doi.org/10.1145/3290605.3300714.
[101] Klein, G. et al. 2005. Common Ground and Coordination in Joint Activity. Organizational Simulation. (Jun. 2005), 139–184. DOI:https://doi.org/10.1002/0471739448.CH6.
[102] Kobayashi, H. 1993. Study on face robot for active human interface: mechanisms of face robot and expression of 6 basic facial expressions. (1993).
[103] Komiak, S.Y.X. and Benbasat, I. 2006. The effects of personalization and familiarity on trust and adoption of recommendation agents. MIS Quarterly: Management Information Systems. 30, 4 (2006), 941–960. DOI:https://doi.org/10.2307/25148760.
[104] Kopp, S. and Krämer, N. 2021. Revisiting Human-Agent Communication: The Importance of Joint Co-construction and Understanding Mental States. Frontiers in Psychology. 12, (Mar. 2021), 597. DOI:https://doi.org/10.3389/fpsyg.2021.580955.
[105] Krämer, N. et al. 2013. Smile and the world will smile with you-The effects of a virtual agent’s smile on users’ evaluation and behavior. International Journal of Human-Computer Studies. 71, 3 (Mar. 2013), 335–349. DOI:https://doi.org/10.1016/J.IJHCS.2012.09.006.
[106] Krämer, N.C. et al. 2012. Human-agent and human-robot interaction theory: Similarities to and differences from human-human interaction. Studies in Computational Intelligence. 396, (2012), 215–240. DOI:https://doi.org/10.1007/978-3-642-25691-2_9.
[107] Kramer, R.M. and Tyler, T.R. 1996. Trust in organizations: frontiers of theory and research. (1996), 429.
[108] Kraut, R.E. and Poe, D.B. 1980. Behavioral roots of person perception: The deception judgments of customs inspectors and laymen. Journal of Personality and Social Psychology. 39, 5 (1980), 784–798. DOI:https://doi.org/10.1037/0022-3514.39.5.784.
[109] Krumhuber, E. et al. 2007. Facial Dynamics as Indicators of Trustworthiness and Cooperative Behavior. Emotion. 7, 4 (Nov. 2007), 730–735. DOI:https://doi.org/10.1037/1528-3542.7.4.730.
[110] Kunze, A. et al. 2019. Automation transparency: implications of uncertainty communication for human-automation interaction and interfaces. Ergonomics. 62, 3 (Mar. 2019), 345–360. DOI:https://doi.org/10.1080/00140139.2018.1547842.
[111] Lackey, S. et al. 2011. Defining next-generation multi-modal communication in Human Robot Interaction. Proceedings of the Human Factors and Ergonomics Society. (2011), 461–464. DOI:https://doi.org/10.1177/1071181311551095.
[112] LaFrance, M. 1979. Nonverbal Synchrony and Rapport: Analysis by the Cross-Lag Panel Technique. Social Psychology Quarterly. 42, 1 (Mar. 1979), 66. DOI:https://doi.org/10.2307/3033875.
[113] Lee, E.J. 2010. What Triggers Social Responses to Flattering Computers? Experimental Tests of Anthropomorphism and Mindlessness Explanations. Communication Research. 37, 2 (Feb. 2010), 191–214. DOI:https://doi.org/10.1177/0093650209356389.
[114] Lee, J.D. and See, K.A. 2004. Trust in automation: Designing for appropriate reliance. Human Factors. 46, 1 (Aug. 2004), 50–80. DOI:https://doi.org/10.1518/hfes.46.1.50_30392.
[115] Lee, S. et al. 1996. When the interface is a face. Human-Computer Interaction. 11, 2 (1996), 97–124. DOI:https://doi.org/10.1207/S15327051HCI1102_1.
[117] Legacy, C. et al. 2019. Planning the driverless city. Transport Reviews. 39, 1 (Jan. 2019), 84–102. DOI:https://doi.org/10.1080/01441647.2018.1466835.
[118] Leslie, A.M. 1987. Pretense and representation: The origins of “theory of mind.” Psychological Review. 94, 4 (1987), 412–426. DOI:https://doi.org/10.1037//0033-295X.94.4.412.
[119] Licklider, J.C.R. 1960. Man-Computer Symbiosis. IRE Transactions on Human Factors in Electronics. HFE-1, 1 (1960), 4–11. DOI:https://doi.org/10.1109/THFE2.1960.4503259.
[120] Lohse, M. et al. 2014. Robot gestures make difficult tasks easier: The impact of gestures on perceived workload and task performance. Conference on Human Factors in Computing Systems - Proceedings. (2014), 1459–1466. DOI:https://doi.org/10.1145/2556288.2557274.
[121] Looije, R. et al. 2010. Persuasive robotic assistant for health self-management of older adults: Design and evaluation of social behaviors. International Journal of Human-Computer Studies. 68, 6 (Jun. 2010), 386–397. DOI:https://doi.org/10.1016/J.IJHCS.2009.08.007.
[122] Looser, C.E. and Wheatley, T. 2010. The tipping point of animacy: How, when, and where we perceive life in a face. Psychological Science. 21, 12 (Dec. 2010), 1854–1862. DOI:https://doi.org/10.1177/0956797610388044.
[123] Lucas, G.M. et al. 2014. It’s only a computer: Virtual humans increase willingness to disclose. Computers in Human Behavior. 37, (Aug. 2014), 94–100. DOI:https://doi.org/10.1016/J.CHB.2014.04.043.
[124] Luger, E. and Sellen, A. 2016. “Like having a really bad pa”: The gulf between user expectation and experience of conversational agents. Conference on Human Factors in Computing Systems - Proceedings. (May 2016), 5286–5297. DOI:https://doi.org/10.1145/2858036.2858288.
[125] Lyons, J.B. 2013. Being Transparent about Transparency: A Model for Human-Robot Interaction. (2013).
[126] Machine Theory of Mind: 2018. http://proceedings.mlr.press/v80/rabinowitz18a.html. Accessed: 2022-04-27.
[127] Manser Payne, E.H. et al. 2021. Enhancing the value co-creation process: artificial intelligence and mobile banking service platforms. Journal of Research in Interactive Marketing. 15, 1 (2021), 68–85. DOI:https://doi.org/10.1108/JRIM-10-2020-0214.
[128] Marakas, G.M. et al. 2000. Theoretical model of differential social attributions toward computing technology: when the metaphor becomes the model. International Journal of Human Computer Studies. 52, 4 (2000), 719–750. DOI:https://doi.org/10.1006/IJHC.1999.0348.
[129] Marková, I. and British Academy. 2004. Trust and democratic transition in post-communist Europe. (2004), 217.
[130] Mayer, R.C. et al. 1995. An Integrative Model of Organizational Trust. Academy of Management Review. 20, 3 (1995), 709–734.
[131] McKinney, S.M. et al. 2020. International evaluation of an AI system for breast cancer screening. Nature 2020 577:7788. 577, 7788 (Jan. 2020), 89–94. DOI:https://doi.org/10.1038/s41586-019-1799-6.
[132] De Melo, C.M. et al. 2015. Humans versus computers: Impact of emotion expressions on people’s decision making. IEEE Transactions on Affective Computing. 6, 2 (Apr. 2015), 127–136. DOI:https://doi.org/10.1109/TAFFC.2014.2332471.
[133] Merritt, S.M. 2011. Affective processes in human-automation interactions. Human Factors. 53, 4 (Aug. 2011), 356–370. DOI:https://doi.org/10.1177/0018720811411912.
[134] Merritt, S.M. et al. 2013. I trust it, but I don’t know why: effects of implicit attitudes toward automation on trust in an automated system. Human factors. 55, 3 (Jun. 2013), 520–534. DOI:https://doi.org/10.1177/0018720812465081.
[135] Merritt, S.M. and Ilgen, D.R. 2008. Not all trust is created equal: dispositional and history-based trust in human-automation interactions. Human factors. 50, 2 (Apr. 2008), 194–210. DOI:https://doi.org/10.1518/001872008X288574.
[136] Mishra, P. 2006. Affective Feedback from Computers and its Effect on Perceived Ability and... Journal of Educational Multimedia and Hypermedia. 15, 1 (2006), 107–131.
[137] Misuraca, G. et al. 2020. The use of AI in public services: results from a preliminary mapping across the EU. Proceedings of the 13th International Conference on Theory and Practice of Electronic Governance. 2020, (2020), 23–25. DOI:https://doi.org/10.1145/3428502.
[138] Moon, Y. 2000. Intimate Exchanges: Using Computers to Elicit Self-Disclosure from Consumers. Journal of Consumer Research. 26, 4 (Mar. 2000), 323–339. DOI:https://doi.org/10.1086/209566.
[139] Moon, Y. and Nass, C. 1996. How “real” are computer personalities? Psychological responses to personality types in human-computer interaction. Communication Research. 23, 6 (1996), 651–674. DOI:https://doi.org/10.1177/009365096023006002.
[140] Morewedge, C.K. 2009. Negativity Bias in Attribution of External Agency. Journal of Experimental Psychology: General. 138, 4 (Nov. 2009), 535–545. DOI:https://doi.org/10.1037/A0016796.
[141] Nakajima, K. and Niitsuma, M. 2020. Effects of Space and Scenery on Virtual-Pet-Assisted Activity. Proceedings of the 8th International Conference on Human-Agent Interaction. 20, (2020). DOI:https://doi.org/10.1145/3406499.
[142] Nakatsu, R. et al. Emotion Recognition and Its Application to Computer Agents with Spontaneous Interactive Capabilities.
[143] Nass, C. et al. 1997. Are Machines Gender Neutral? Gender-Stereotypic Responses to Computers With Voices. Journal of Applied Social Psychology. 27, 10 (May 1997), 864–876. DOI:https://doi.org/10.1111/J.1559-1816.1997.TB00275.X.
[144] Nass, C. et al. 1994. Computers are social actors. Conference on Human Factors in Computing Systems - Proceedings. (1994), 72–78. DOI:https://doi.org/10.1145/259963.260288.
[145] Nass, C. et al. 1994. Computers are social actors. Proceedings of the SIGCHI conference on Human factors in computing systems celebrating interdependence - CHI ’94. (1994), 72–78. DOI:https://doi.org/10.1145/191666.191703.
[146] Nass, C. et al. 1997. How users reciprocate to computers: An experiment that demonstrates behavior change. Conference on Human Factors in Computing Systems - Proceedings. 22-27-March-1997, (Mar. 1997), 331–332. DOI:https://doi.org/10.1145/1120212.1120419.
[147] Nass, C. and Moon, Y. 2000. Machines and mindlessness: Social responses to computers. Journal of Social Issues. 56, 1 (2000), 81–103. DOI:https://doi.org/10.1111/0022-4537.00153.
[148] Neubauer, C. et al. 2012. Fatigue and voluntary utilization of automation in simulated driving. Human factors. 54, 5 (Oct. 2012), 734–746. DOI:https://doi.org/10.1177/0018720811423261.
[149] Norman, D.A. 1988. The Design of Everyday Things. Choice Reviews Online. 51, 10, 51-5559. DOI:https://doi.org/10.5860/CHOICE.51-5559.
[150] Norman, D.A. 1988. The psychology of everyday things. (1988), 257.
[151] Nowak, K. 2001. Defining and Differentiating Copresence, Social Presence and Presence as Transportation. Presence 2001: 4th Annual International Workshop, Philadelphia. (2001), 686–690.
[152] Nowak, K.L. 2015. Examining Perception and Identification in Avatar-mediated Interaction. The Handbook of the Psychology of Communication Technology. (Jan. 2015), 87–114. DOI:https://doi.org/10.1002/9781118426456.CH4.
[153] Nowak, K.L. and Biocca, F. 2003. The Effect of the Agency and Anthropomorphism on users’ Sense of Telepresence, Copresence, and Social Presence in Virtual Environments. Presence: Teleoperators and Virtual Environments. 12, 5 (Oct. 2003), 481–494. DOI:https://doi.org/10.1162/105474603322761289.
[154] Ogreten, S. et al. 2010. Recommended roles for uninhabited team members within mixed-initiative combat teams. 2010 International Symposium on Collaborative Technologies and Systems, CTS 2010. (2010), 531–536. DOI:https://doi.org/10.1109/CTS.2010.5478468.
[155] Oviatt, S. et al. 2004. Toward adaptive conversational interfaces. ACM Transactions on Computer-Human Interaction (TOCHI). 11, 3 (Sep. 2004), 300–328. DOI:https://doi.org/10.1145/1017494.1017498.
[156] Pak, R. et al. 2012. Decision support aids with anthropomorphic characteristics influence trust and performance in younger and older adults. Ergonomics. 55, 9 (Sep. 2012), 1059–1072. DOI:https://doi.org/10.1080/00140139.2012.691554.
[157] Panagiotopoulos, I. and Dimitrakopoulos, G. 2018. An empirical investigation on consumers’ intentions towards autonomous driving. Transportation Research Part C: Emerging Technologies. 95, (Oct. 2018), 773–784. DOI:https://doi.org/10.1016/J.TRC.2018.08.013.
[158] Parasuraman, R. et al. 2009. Adaptive automation for human supervision of multiple uninhabited vehicles: Effects on change detection, situation awareness, and mental workload. Military Psychology. 21, 2 (Apr. 2009), 270–297. DOI:https://doi.org/10.1080/08995600902768800.
[159] Parasuraman, R. and Riley, V. 1997. Humans and automation: Use, misuse, disuse, abuse. Human Factors. 39, 2 (Jun. 1997), 230–253. DOI:https://doi.org/10.1518/001872097778543886.
[160] Parise, S. et al. 1999. Cooperating with life-like interface agents. Computers in Human Behavior. 15, 2 (Mar. 1999), 123–142. DOI:https://doi.org/10.1016/S0747-5632(98)00035-1.
[161] Pertaub, D.P. et al. 2002. An experiment on public speaking anxiety in response to three different types of virtual audience. Presence: Teleoperators and Virtual Environments. 11, 1 (Feb. 2002), 68–78. DOI:https://doi.org/10.1162/105474602317343668.
[162] Philip, P.C. and Hafez, M.A. 2020. Entropy Weighted-Based (EWB) I-LEACH Protocol for Energy-Efficient IoT Applications. 2020 IEEE Global Conference on Artificial Intelligence and Internet of Things, GCAIoT 2020. (Dec. 2020). DOI:https://doi.org/10.1109/GCAIOT51063.2020.9345819.
[163] Phillips, E. et al. 2018. What is Human-like?: Decomposing Robots’ Human-like Appearance Using the Anthropomorphic roBOT (ABOT) Database. (2018). DOI:https://doi.org/10.1145/3171221.
[164] Poggi, I. and D’Errico, F. 2010. Cognitive modelling of human social signals. SSPW’10 - Proceedings of the 2010 ACM Social Signal Processing Workshop, Co-located with ACM Multimedia 2010. (2010), 21–26. DOI:https://doi.org/10.1145/1878116.1878124.
[165] Qiu, L. and Benbasat, I. 2008. Evaluating anthropomorphic product recommendation agents: A social relationship perspective to designing information systems. Journal of Management Information Systems. 25, 4 (Apr. 2008), 145–182. DOI:https://doi.org/10.2753/MIS0742-1222250405.
[166] Quigley, K.F.F. 1996. Trust: The social virtues and the creation of prosperity: By Francis Fukuyama. (New York: Free Press, 1995). Orbis. 40, 2 (1996), 333.
[167] Rai, A. 2020. Explainable AI: from black box to glass box. Journal of the Academy of Marketing Science. 48, 1 (Jan. 2020), 137–141. DOI:https://doi.org/10.1007/s11747-019-00710-5.
[168] Ram, A. et al. 2018. Conversational AI: The Science Behind the Alexa Prize. (Jan. 2018). DOI:https://doi.org/10.48550/arxiv.1801.03604.
[169] Rand, D.G. and Nowak, M.A. 2013. Human cooperation. Trends in Cognitive Sciences. 17, 8 (Aug. 2013), 413–425. DOI:https://doi.org/10.1016/J.TICS.2013.06.003.
[170] Rau, P.L.P. et al. 2010. A cross-cultural study: Effect of robot appearance and task. International Journal of Social Robotics. 2, 2 (2010), 175–186. DOI:https://doi.org/10.1007/s12369-010-0056-9.
[171] Reeves, B. and Nass, C.I. 1996. The media equation: how people treat computers, television, and new media like real people and places. CSLI Publications; Cambridge University Press.
[172] Reichenbach, J. et al. 2011. Human performance consequences of automated decision aids in states of sleep loss. Human factors. 53, 6 (Dec. 2011), 717–728. DOI:https://doi.org/10.1177/0018720811418222.
[173] Richardson, D.C. et al. 2007. The art of conversation is coordination: Common ground and the coupling of eye movements during dialogue: Research article. Psychological Science. 18, 5 (May 2007), 407–413. DOI:https://doi.org/10.1111/j.1467-9280.2007.01914.x.
[174] Rickel, J. and Johnson, W.L. 1999. Animated agents for procedural training in virtual reality: perception, cognition, and motor control. Applied Artificial Intelligence. 13, 4–5 (May 1999), 343–382. DOI:https://doi.org/10.1080/088395199117315.
[175] Rickenberg, R. and Reeves, B. 2000. The effects of animated characters on anxiety, task performance, and evaluations of user interfaces. Conference on Human Factors in Computing Systems - Proceedings. (2000), 49–56. DOI:https://doi.org/10.1145/332040.332406.
[176] Robotic surgery - Mayo Clinic: 2019. https://www.mayoclinic.org/tests-procedures/robotic-surgery/about/pac-20394974. Accessed: 2022-03-25.
[177] Rousseau, D.M. et al. 1998. Not so different after all: A cross-discipline view of trust. Academy of Management Review. 23, 3 (1998), 393–404. DOI:https://doi.org/10.5465/AMR.1998.926617.
[178] Salem, M. and Dautenhahn, K. 2015. Evaluating Trust and Safety in HRI: Practical Issues and Ethical Challenges.
[179] Samovar, L.A. et al. 2009. Communication between cultures. (2009).
[180] Schermerhorn, P. et al. 2008. Robot social presence and gender: Do females view robots differently than males? HRI 2008 - Proceedings of the 3rd ACM/IEEE International Conference on Human-Robot Interaction: Living with Robots. (2008), 263–270. DOI:https://doi.org/10.1145/1349822.1349857.
[181] Schwaninger, I. et al. 2020. Exploring trust in human-agent collaboration. ECSCW 2019 - Proceedings of the 17th European Conference on Computer Supported Cooperative Work (2020).
[182] Sebeok, T.A. 1979. The sign & its masters. University of Texas Press.
[183] Serenko, A. 2007. Are interface agents scapegoats? Attributions of responsibility in human–agent interaction. Interacting with Computers. 19, 2 (Mar. 2007), 293–303. DOI:https://doi.org/10.1016/J.INTCOM.2006.07.005.
[184] Serholt, S. et al. 2020. Trouble and Repair in Child–Robot Interaction: A Study of Complex Interactions With a Robot Tutee in a Primary School Classroom. Frontiers in Robotics and AI. 7, (Apr. 2020), 46. DOI:https://doi.org/10.3389/frobt.2020.00046.
[185] Shergadwala, M. and El-Nasr, M.S. 2021. Esports Agents with a Theory of Mind: Towards Better Engagement, Education, and Engineering. (2021). DOI:https://doi.org/10.31219/OSF.IO/QJCG9.
[186] Shockley, K. et al. 2003. Mutual interpersonal postural constraints are involved in cooperative conversation. Journal of experimental psychology. Human perception and performance. 29, 2 (Apr. 2003), 326–332. DOI:https://doi.org/10.1037/0096-1523.29.2.326.
[187] Shvo, M. et al. 2020. Towards the Role of Theory of Mind in Explanation. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). 12175 LNAI, (2020), 75–93. DOI:https://doi.org/10.1007/978-3-030-51924-7_5.
[188] ACM SIGCHI Curriculum Development Group. 1992. ACM SIGCHI curricula for human-computer interaction. Association for Computing Machinery.
[189] Similarity of gestures and interpersonal influence: 1969. https://psycnet.apa.org/record/1969-17366-001. Accessed: 2022-04-24.
[190] Singh, I.L. et al. 1993. Automation-Induced “Complacency”: Development of the Complacency-Potential Rating Scale. The International Journal of Aviation Psychology. 3, 2 (1993), 111–122. DOI:https://doi.org/10.1207/S15327108IJAP0302_2.
[191] Slater, M. et al. 1999. Public Speaking in Virtual Reality: Facing an Audience of Avatars. IEEE Computer Graphics and Applications. 19, 2 (Mar. 1999), 6–9. DOI:https://doi.org/10.1109/38.749116.
[192] Stafford, B.M. 2007. Echo objects : the cognitive work of images. (2007), 281.
[193] Strabala, K. et al. 2012. Learning the communication of intent prior to physical collaboration. Proceedings - IEEE International Workshop on Robot and Human Interactive Communication. (2012), 968–973. DOI:https://doi.org/10.1109/ROMAN.2012.6343875.
[194] Kaiser, S. et al. 1998. Emotional Episodes, Facial Expressions, and Reported Feelings in Human-Computer Interactions.
[195] Szalma, J.L. and Taylor, G.S. 2011. Individual differences in response to automation: the five factor model of personality. Journal of experimental psychology. Applied. 17, 2 (Jun. 2011), 71–96. DOI:https://doi.org/10.1037/A0024170.
[196] Takayama, L. 2008. Making sense of agentic objects and teleoperation: In-the-moment and reflective perspectives. Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction, HRI’09. (2008), 239–240. DOI:https://doi.org/10.1145/1514095.1514155.
[197] Teoh, E.R. and Kidd, D.G. 2017. Rage against the machine? Google’s self-driving cars versus human drivers. Journal of Safety Research. 63, (Dec. 2017), 57–60. DOI:https://doi.org/10.1016/J.JSR.2017.08.008.
[198] Tesla Autopilot “partly to blame” for crash - BBC News: https://www.bbc.com/news/technology-41242884. Accessed: 2023-03-25.
[199] Tesla Faces Lawsuit after Model X on Autopilot with “Dozing Driver” Blamed for Fatal Crash: https://www.newsweek.com/tesla-lawsuit-model-x-autopilot-fatal-crash-japan-yoshihiro-umeda-1501114. Accessed: 2023-03-25.
[200] Tesla Sued Over Fatal Crash Blamed on Autopilot Malfunction - Bloomberg: https://www.bloomberg.com/news/articles/2019-05-01/tesla-sued-over-fatal-crash-blamed-on-autopilot-navigation-error. Accessed: 2022-03-19.
[201] The ghost in the machine. The influence of Embodied Conversational Agents on user expectations and user behaviour in a TV/VCR application: 2003. https://www.researchgate.net/publication/242273054_The_ghost_in_the_machine_The_influence_of_Embodied_Conversational_Agents_on_user_expectations_and_user_behaviour_in_a_TVVCR_application1. Accessed: 2022-03-27.
[202] Tickle-Degnen, L. and Rosenthal, R. 1990. The Nature of Rapport and Its Nonverbal Correlates. Psychological Inquiry. 1, 4 (Jan. 1990), 285–293. DOI:https://doi.org/10.1207/S15327965PLI0104_1.
[203] Tjøstheim, T.A. et al. 2019. A computational model of trust-, pupil-, and motivation dynamics. HAI 2019 - Proceedings of the 7th International Conference on Human-Agent Interaction. (Sep. 2019), 179–185. DOI:https://doi.org/10.1145/3349537.3351896.
[204] Todorov, A. et al. 2015. Social Attributions from Faces: Determinants, Consequences, Accuracy, and Functional Significance. Annual Review of Psychology. 66, (Jan. 2015), 519–545. DOI:https://doi.org/10.1146/annurev-psych-113011-143831.
[205] Tomasello, M. 2006. Why Don’t Apes Point? (2006).
[206] Triandis, H.C. 1960. Some Determinants of Interpersonal Communication. Human Relations. 13, 3 (1960), 279–287. DOI:https://doi.org/10.1177/001872676001300308.
[207] Tung, F.W. 2011. Influence of Gender and Age on the Attitudes of Children towards Humanoid Robots. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). 6764 LNCS, PART 4 (2011), 637–646. DOI:https://doi.org/10.1007/978-3-642-21619-0_76.
[208] Tyler, T.R. and Degoey, P. 2012. Trust in Organizational Authorities: The Influence of Motive Attributions on Willingness to Accept Decisions. Trust in Organizations: Frontiers of Theory and Research. (May 2012), 331–356. DOI:https://doi.org/10.4135/9781452243610.N16.
[209] Understanding the intentions of others from visual signals: Neurophysiological evidence: 1994. https://psycnet.apa.org/record/1995-24608-001. Accessed: 2022-04-26.
[210] Vinciarelli, A. 2009. Capturing order in social interactions [social sciences]. IEEE Signal Processing Magazine. 26, 5 (2009). DOI:https://doi.org/10.1109/MSP.2009.933382.
[211] de Visser, E.J. et al. 2016. Almost human: Anthropomorphism increases trust resilience in cognitive agents. Journal of Experimental Psychology: Applied. 22, 3 (Sep. 2016), 331–349. DOI:https://doi.org/10.1037/xap0000092.
[212] De Visser, E.J. et al. 2012. The World is not Enough: Trust in Cognitive Agents. Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 56, 1 (Sep. 2012), 263–267. DOI:https://doi.org/10.1177/1071181312561062.
[213] Wagner, A.R. et al. 2018. Modeling the human-robot trust phenomenon: A conceptual framework based on risk. ACM Transactions on Interactive Intelligent Systems. 8, 4 (Nov. 2018). DOI:https://doi.org/10.1145/3152890.
[214] Walker, J.H. et al. 1994. Using a human face in an interface. (1994), 205. DOI:https://doi.org/10.1145/259963.260290.
[215] Warm, J.S. et al. 2008. Vigilance requires hard mental work and is stressful. Human Factors. 50, 3 (Jun. 2008), 433–441. DOI:https://doi.org/10.1518/001872008X312152.
[216] Watch the horrifying moment a Tesla car crashes into a parked lorry while in “autopilot” mode - Mirror Online: https://www.mirror.co.uk/tech/watch-horrifying-moment-tesla-car-8839739. Accessed: 2023-03-25.
[217] Waytz, A. et al. 2010. Making sense by making sentient: effectance motivation increases anthropomorphism. Journal of personality and social psychology. 99, 3 (2010), 410–435. DOI:https://doi.org/10.1037/A0020240.
[218] Waytz, A. et al. 2014. The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology. 52, (May 2014), 113–117. DOI:https://doi.org/10.1016/j.jesp.2014.01.005.
[219] Weitz, K. et al. 2019. “Do you trust me?”: Increasing User-Trust by Integrating Virtual Agents in Explainable AI Interaction Design. IVA 2019 - Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents. (Jul. 2019), 7–9. DOI:https://doi.org/10.1145/3308532.3329441.
[220] When non-human is better than semi-human: Consistency in speech interfaces: 2001. https://www.researchgate.net/publication/229068086_When_non-human_is_better_than_semi-human_Consistency_in_speech_interfaces. Accessed: 2022-03-28.
[221] Wiese, E. et al. 2014. What We Observe Is Biased by What Other People Tell Us: Beliefs about the Reliability of Gaze Behavior Modulate Attentional Orienting to Gaze Cues. PLOS ONE. 9, 4 (Apr. 2014), e94529. DOI:https://doi.org/10.1371/JOURNAL.PONE.0094529.
[222] Wiltshire, T.J. et al. 2014. An interdisciplinary taxonomy of social cues and signals in the service of engineering robotic social intelligence. Unmanned Systems Technology XVI. 9084, (Jun. 2014), 90840F. DOI:https://doi.org/10.1117/12.2049933.
[223] Wiltshire, T.J. et al. 2013. Effects of robot gaze and proxemic behavior on perceived social presence during a hallway navigation scenario. Proceedings of the Human Factors and Ergonomics Society. (2013), 1273–1277. DOI:https://doi.org/10.1177/1541931213571282.
[224] Woods, S. et al. 2005. Is Someone Watching Me? Consideration of Social Facilitation Effects in Human-Robot Interaction Experiments. (2005).
[225] Wooldridge, M. and Jennings, N.R. 1995. Intelligent Agents: Theory and Practice. The Knowledge Engineering Review. 10, 2 (1995), 115–152.
[226] Yamazaki, Y. et al. 2009. Intent expression using eye robot for mascot robot system. (Apr. 2009). DOI:https://doi.org/10.48550/arxiv.0904.1631.
[227] Yuksel, B.F. et al. 2017. Brains or Beauty. ACM Transactions on Internet Technology (TOIT). 17, 1 (Jan. 2017). DOI:https://doi.org/10.1145/2998572.
[228] Zanbaka, C.A. et al. 2007. Social responses to virtual humans: Implications for future interface design. Conference on Human Factors in Computing Systems - Proceedings. (2007), 1561–1570. DOI:https://doi.org/10.1145/1240624.1240861.
[229] Zhang, T. et al. 2019. The roles of initial trust and perceived risk in public’s acceptance of automated vehicles. Transportation Research Part C: Emerging Technologies. 98, (Jan. 2019), 207–220. DOI:https://doi.org/10.1016/J.TRC.2018.11.018.
[230] Zuckerman, M. et al. 1981. Verbal and Nonverbal Communication of Deception. Advances in Experimental Social Psychology. 14, C (Jan. 1981), 1–59. DOI:https://doi.org/10.1016/S0065-2601(08)60369-X.
[231] 董士海 2004. Progress and challenges in human-computer interaction [人机交互的进展及面临的挑战]. Journal of Computer-Aided Design & Computer Graphics [计算机辅助设计与图形学学报]. (2004).
[232] 蔡佳蒓 2019. Shall we cooperate? Exploring how goal structure and player type affect enjoyment and future willingness to cooperate in the cooperative simulation game OVERCOOKED, from the perspective of self-determination theory [合作點好嗎?從自我決定理論探討在《OVERCOOKED》合作模擬遊戲中目標結構與玩家類型對享樂感與未來合作意願的影響]. (Jan. 2019), 1–78. DOI:https://doi.org/10.6814/NCCU201901256.