Title 基於眼動資料之影片字幕依賴性研究
Investigating Viewer's Reliance on Captions Based on Gaze Information
Author Chen, Chiao-Ju (陳巧如)
Advisor Liao, Wen-Hung (廖文宏)
Keywords eye movement data
eye tracker
subtitles
Date 2018
Upload time 3-Sep-2018 16:02:31 (UTC+8)
Abstract Subtitles are present in almost all TV programs and films in Taiwan, and Taiwanese audiences appear to rely on them. What happens to viewers if the subtitles are removed, or replaced with an unfamiliar foreign language? In this research, we use a Tobii EyeX to collect valid eye movement data from 45 native speakers while they watch different films, and propose appropriate indicators to analyze their viewing behavior under different controlled conditions. Combining macro- and micro-level analyses gives a fuller picture of how much participants rely on subtitles and on other areas of interest (AOIs), such as faces. Because both subtitles and faces change over time within a video, the caption region is detected automatically with the Canny edge detector and face regions with Faster R-CNN, yielding stable and valid AOIs that facilitate automated analysis.
Experimental results indicate that the audio language is the most critical factor. From a macro perspective, subjects in Group 1 (video order: English, Chinese, English, Chinese) tended to focus on the face area, whereas subjects in Group 2 (video order: Chinese, English, Chinese, English) read the subtitles more often. From a micro perspective, the initial preference determined the subsequent viewing pattern: Group 2 showed a significantly stronger preference for captions than Group 1 in the later videos as well, and this habitual preference carried over into follow-up videos, producing an immersion phenomenon. We also observe that when unfamiliar text appears, subjects exhibit 'escaping' behavior by avoiding the text region. Notably, the audio of Group 2's opening video was in the subjects' native language, and this group nevertheless preferred reading the subtitles, so we can preliminarily confirm that Taiwanese viewers depend on subtitles to a certain degree.
References [1] 黃坤年, "電視字幕改良之我見", 廣播與電視第23期 (1973): 70-74.
[2] 劉幼俐, 楊忠川. “傳播科技的另一種選擇──我國隱藏式字幕的政策研究”.廣播與電視第三卷第二期,(1997):109-140.
[3] 王福興, 周宗奎, 趙顯, 白學軍, 閆國利. “文字熟悉度對電影字幕偏好性影響的眼動研究”. 心理與行為研究10.1 (2012):50-57.
[4] Boatner, E. (1980). "Captioned Films For the Deaf." National Association of the Deaf. Retrieved February 15, 2016.
[5] Heppoko, “日本的電視,為什麼不像台灣的一樣「有字幕」?”.關鍵評論網The News Lens, from: https://www.thenewslens.com/article/43280
[6] 張翔等, "這時候,美國人會怎麼說?", 台北:知識工場. (2013)
[7] O'Bryan, Kenneth G. "Eye Movements as an Index of Television Viewing Strategies." (1975).
[8] 蔡政旻. "眼球追蹤技術應用於商品喜好評估." 創新管理與設計跨域研討會 (2013):1-4.
[9] 張晉文. "基於眼動軌跡之閱讀模式分析." 政治大學資訊科學系碩士論文 (2017): 1-91.
[10] Tobii Developer Zone, "Sentry Versus the EyeX", from:
http://developer.tobii.com/community/forums/topic/sentry-versus-the-eyex/
[11] Tobii Developer Zone, "Fixing Sampling/Refresh Rate", from:
http://developer.tobii.com/community/forums/topic/fixing-samplingrefresh-rate
[12] OpenCV documentation, "Canny Edge Detection", from: https://docs.opencv.org/3.1.0/da/d22/tutorial_py_canny.html
[13] 傅筠駿、林崇偉、施懿芳, “超越數位巴別塔:從 TED 開放翻譯計畫探索數位內容的全球在地化策略”。2011 年數位創世紀研討會。(2011)
[14] OGAMA: Open Gaze and Mouse Analyzer. Last visited on 2/4/2016.
[15] He, Kaiming, et al. "Deep residual learning for image recognition." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
Description Master's thesis
National Chengchi University
In-service Master's Program, Department of Computer Science
1019710252
Source http://thesis.lib.nccu.edu.tw/record/#G1019710252
Type thesis
dc.contributor.advisor 廖文宏zh_TW
dc.contributor.advisor Liao, Wen-Hungen_US
dc.contributor.author (Authors) 陳巧如zh_TW
dc.contributor.author (Authors) Chen, Chiao-Juen_US
dc.creator (作者) 陳巧如zh_TW
dc.creator (作者) Chen, Chiao-Juen_US
dc.date (日期) 2018en_US
dc.date.accessioned 3-Sep-2018 16:02:31 (UTC+8)-
dc.date.available 3-Sep-2018 16:02:31 (UTC+8)-
dc.date.issued (上傳時間) 3-Sep-2018 16:02:31 (UTC+8)-
dc.identifier (Other Identifiers) G1019710252en_US
dc.identifier.uri (URI) http://nccur.lib.nccu.edu.tw/handle/140.119/119970-
dc.description (描述) 碩士zh_TW
dc.description (描述) 國立政治大學zh_TW
dc.description (描述) 資訊科學系碩士在職專班zh_TW
dc.description (描述) 1019710252zh_TW
dc.description.tableofcontents Chapter 1 Introduction
1.1 Research Background
1.2 Research Motivation and Objectives
1.3 Thesis Organization
Chapter 2 Literature Review and Related Work
2.1 Research on Subtitles
2.2 Eye Tracking
2.3 Research Tools
Chapter 3 Experimental Design and Preliminary Validation
3.1 Preliminary Study Design and Implementation
3.1.1 Experimental Procedure
3.1.2 Video Selection
3.1.3 Subtitle Design for the Videos
3.1.4 Experimental Design and Indicators
3.2 Tools and Methods
3.2.1 Introduction to OGAMA
3.2.2 Pilot Experiment Procedure
3.2.3 Experimental Variables
3.3.1 Automatic Subtitle-Region Detection
3.3.1.1 Loose Subtitle-Region Setting
3.3.1.2 Assumed Subtitle-Region Setting
3.3.2 Automatic Face-Region Detection
3.3.3 Heat-Map Analysis and Comparison
3.3.4 Data Validation
3.4 Revisions to the Experimental Design
Chapter 4 Research Process and Results
4.1 Participant Data Collection
4.2 Overall Analysis
4.2.1 Overall Correlations
4.2.2 Between-Group Differences by Audio Language
4.2.3 Within-Group Comparisons
4.3 Video 1 Data Analysis
4.3.1 Video 1: Correlations with Subtitle Segments
4.3.2 Video 1: Between-Group Differences in Subtitle Segments by Audio Language
4.3.3 Video 1: Within-Group Differences Across Subtitle Segments
4.3.4 Video 1: Between-Group Differences in Opening/Closing Subtitle Sub-segments
4.3.5 Video 1: Within-Group Differences Across Opening/Closing Subtitle Sub-segments
4.4 Video 2 Data Analysis
4.4.1 Video 2: Correlations with Subtitle Segments
4.4.2 Video 2: Between-Group Differences in Subtitle Segments by Audio Language
4.4.3 Video 2: Within-Group Differences Across Subtitle Segments
4.4.4 Video 2: Between-Group Differences in Opening/Closing Subtitle Sub-segments
4.4.5 Video 2: Within-Group Differences Across Opening/Closing Subtitle Sub-segments
4.5 Video 3 Data Analysis
4.5.1 Video 3: Correlations with Subtitle Segments
4.5.2 Video 3: Between-Group Differences in Subtitle Segments by Audio Language
4.5.3 Video 3: Within-Group Differences Across Subtitle Segments
4.5.4 Video 3: Between-Group Differences in Opening/Closing Subtitle Sub-segments
4.5.5 Video 3: Within-Group Differences Across Opening/Closing Subtitle Sub-segments
4.6 Video 4 Data Analysis
4.6.1 Video 4: Correlations with Subtitle Segments
4.6.2 Video 4: Between-Group Differences in Subtitle Segments by Audio Language
4.6.3 Video 4: Within-Group Differences Across Subtitle Segments
4.6.4 Video 4: Between-Group Differences in Opening/Closing Subtitle Sub-segments
4.6.5 Video 4: Within-Group Differences Across Opening/Closing Subtitle Sub-segments
4.7 OGAMA Heat Maps
4.8 Summary
Chapter 5 Conclusions and Future Work
5.1 Conclusions
5.2 Future Research Directions
References
Appendix A
zh_TW
dc.format.extent 5971245 bytes-
dc.format.mimetype application/pdf-
dc.source.uri (資料來源) http://thesis.lib.nccu.edu.tw/record/#G1019710252en_US
dc.subject (關鍵詞) 眼動資料zh_TW
dc.subject (關鍵詞) 眼動儀zh_TW
dc.subject (關鍵詞) 字幕zh_TW
dc.title (題名) 基於眼動資料之影片字幕依賴性研究zh_TW
dc.title (題名) Investigating Viewer’s Reliance on Captions Based on Gaze Informationen_US
dc.type (資料類型) thesisen_US
dc.identifier.doi (DOI) 10.6814/THE.NCCU.EMCS.009.2018.B02-