Title: GazeNoter: Co-Piloted AR Note-Taking via Gaze Selection of LLM Suggestions to Match Users' Intentions
Authors: 蔡欣叡 (Tsai, Hsin-Ruey); Chiu, Shih-Kang; Wang, Bryan
Contributor: 資訊系 (Department of Computer Science)
Keywords: note-taking; augmented reality; large language models; artificial intelligence; gaze input; wearable devices
Date: 2025-04
Uploaded: 3-Oct-2025 09:53:38 (UTC+8)
Abstract: Note-taking is critical during speeches and discussions, serving not only for later summarization and organization but also for recalling questions and opinions in real time during question-and-answer sessions and for timely contributions in discussions. Manually typing notes on a smartphone can be distracting and increases users' cognitive load. While large language models (LLMs) can automatically generate summaries and highlights, content generated by artificial intelligence (AI) may not match users' intentions without user input or interaction. We therefore propose GazeNoter, an AI-copiloted augmented reality (AR) system that lets users swiftly select from diverse LLM-generated suggestions via gaze on an AR headset for real-time note-taking. GazeNoter uses the AR headset as a medium through which users quickly adjust the LLM output to match their intentions, forming a user-in-the-loop AI system for both within-context and beyond-context notes. We conducted two user studies to verify the usability of GazeNoter: attending speeches in a static sitting condition, and holding walking meetings and discussions in a mobile walking condition.
Relation: CHI '25: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, ACM, pp. 1-22
Type: conference
DOI: https://doi.org/10.1145/3706598.3714294
URI: https://nccur.lib.nccu.edu.tw/handle/140.119/159778
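
The abstract describes a user-in-the-loop architecture in which an LLM proposes candidate notes and the user confirms them by gaze rather than by typing. Below is a minimal, hypothetical Python sketch of such a loop, not the authors' implementation: the LLM call (suggest_notes), the gaze input (gaze_select), and the transcript chunks are stubbed placeholders introduced only for illustration.

```python
# Hypothetical sketch of a user-in-the-loop note-taking flow: an LLM
# proposes candidate notes for each transcript chunk, and the user
# confirms one via gaze selection on an AR headset. The LLM call and
# gaze input are simulated; all names here are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class NoteSession:
    """Notes the user has confirmed so far during a speech or discussion."""
    notes: List[str] = field(default_factory=list)


def suggest_notes(transcript_chunk: str, n: int = 3) -> List[str]:
    """Stand-in for an LLM call that proposes diverse candidate notes
    (within-context summaries plus beyond-context questions/opinions)."""
    candidates = [
        f"Key point: {transcript_chunk}",
        f"Question to ask: how does '{transcript_chunk}' generalize?",
        f"Opinion: '{transcript_chunk}' may not hold while walking",
    ]
    return candidates[:n]


def gaze_select(candidates: List[str]) -> Optional[int]:
    """Stand-in for gaze-dwell selection on an AR headset; the console
    plays that role here. Returns the chosen index, or None to skip."""
    for i, candidate in enumerate(candidates):
        print(f"[{i}] {candidate}")
    choice = input("Select a suggestion (index, blank to skip): ").strip()
    if choice.isdigit() and int(choice) < len(candidates):
        return int(choice)
    return None


def run_note_taking(transcript_chunks: List[str]) -> NoteSession:
    """User-in-the-loop: the model proposes, the user confirms via gaze."""
    session = NoteSession()
    for chunk in transcript_chunks:
        candidates = suggest_notes(chunk)
        picked = gaze_select(candidates)
        if picked is not None:
            session.notes.append(candidates[picked])
    return session


if __name__ == "__main__":
    demo = run_note_taking(["gaze input avoids manual typing during talks"])
    print("Saved notes:", demo.notes)
```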