Title: 應用深度學習架構於社群網路資料分析:以Twitter圖文資料為例
Title (English): Analyzing Social Network Data Using Deep Neural Networks: A Case Study Using Twitter Posts
Author: Yang, Tzu-Hsuan (楊子萲)
Advisor: Liao, Wen-Hung (廖文宏)
Keywords: Twitter; Social networks; Graphical and text analysis; Word2Vec; Deep learning
Date: 2018
Uploaded: 23-Jan-2019 14:59:44 (UTC+8)

Abstract:
Interaction on various social networking platforms has become an important part of our daily life. Apart from text messages, images are also a popular media format for online communication. Text or image alone, however, cannot fully convey the ideas that users wish to express. In this thesis, we employ computer vision and word embedding techniques to analyze the relationship between image content and text messages and to explore the rich information entangled in the two modalities.

The limit on the total number of characters compels Twitter users to compose their messages succinctly, suggesting a stronger association between text and image. In this study, we collected all tweets posted during 2017 that contain keywords related to Taiwan. After data cleaning, we apply machine learning techniques to classify tweets into 'travel' and 'non-travel' types. This is achieved by employing deep neural networks to process and integrate the text and image information.
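The record itself contains no code, but the fusion step just described lends itself to a short illustration. The following is a minimal sketch, assuming TensorFlow/Keras for the image branch (the Keras application models are cited as [21]) and gensim for Word2Vec [24]; the choice of InceptionV3, all layer sizes, and the helper names are hypothetical rather than taken from the thesis.

```python
# Hypothetical sketch of an image+text fusion classifier for the
# travel / non-travel decision. Not the thesis implementation: the
# InceptionV3 backbone, layer sizes, and all names are assumptions.
import numpy as np
import tensorflow as tf
from gensim.models import Word2Vec

EMBED_DIM = 100  # assumed Word2Vec dimensionality

# Image branch: a pretrained CNN used as a fixed feature extractor;
# InceptionV3 with average pooling yields a 2048-d vector per image
# (inputs must first go through inception_v3.preprocess_input).
cnn = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", pooling="avg")
cnn.trainable = False

# Text branch: a Word2Vec model trained on the tokenized tweet corpus,
# e.g. w2v = Word2Vec(sentences=tokenized_tweets, vector_size=EMBED_DIM).
# Each tweet is represented by the mean of its word vectors.
def tweet_vector(tokens, w2v):
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(EMBED_DIM)

# Fusion: concatenate the image and text features, then a small dense
# head with a sigmoid output for the binary travel / non-travel label.
img_in = tf.keras.Input(shape=(2048,), name="image_features")
txt_in = tf.keras.Input(shape=(EMBED_DIM,), name="text_features")
x = tf.keras.layers.Concatenate()([img_in, txt_in])
x = tf.keras.layers.Dense(256, activation="relu")(x)
x = tf.keras.layers.Dropout(0.5)(x)
out = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model([img_in, txt_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```

Feeding precomputed features (cnn.predict(...) for images, tweet_vector(...) for text) keeps the two branches independent, so either modality could be swapped out without retraining the other.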
Within each class, we use hierarchical clustering to further partition the data into different clusters and investigate their characteristics.

Through this research, we expect to identify the relationship between the text and images in a tweet and to gain more understanding of the properties of tweets on social networking platforms. The proposed framework and the corresponding analytical results should also prove useful for qualitative research.
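The clustering-and-visualization stage (see also Sections 2.2.4, 2.2.5, and 5.4 in the table of contents below) could be sketched with SciPy and scikit-learn as follows. The thesis follows an N-group average-linkage variant [25] and t-SNE [26]; the plain average linkage, the random feature matrix, the cluster count, and the perplexity here are placeholder assumptions.

```python
# Hypothetical sketch of the within-class analysis: average-linkage
# hierarchical clustering of fused tweet features, shown in 2-D via t-SNE.
# The feature matrix, cluster count, and perplexity are placeholders.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 2148))  # stand-in for 2048-d image + 100-d text features

# Build the average-linkage dendrogram and cut it into 8 flat clusters.
Z = linkage(features, method="average", metric="euclidean")
labels = fcluster(Z, t=8, criterion="maxclust")

# Project the features to 2-D with t-SNE and color points by cluster.
xy = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
plt.scatter(xy[:, 0], xy[:, 1], c=labels, s=8, cmap="tab10")
plt.title("t-SNE of fused tweet features, colored by cluster")
plt.show()
```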
References:
[1] Google Cloud Vision API Documentation. https://cloud.google.com/vision/docs/
[2] Amazon Rekognition. https://aws.amazon.com/rekognition/?nc1=h_ls
[3] Tourism Bureau, Ministry of Transportation and Communications, R.O.C. Tourism statistics. https://admin.taiwan.net.tw/public/public.aspx?no=315
[4] GU, Chunhui, et al. AVA: A video dataset of spatio-temporally localized atomic visual actions. arXiv preprint arXiv:1705.08421, 2017.
[5] HUBEL, David H.; WIESEL, Torsten N. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology, 1962, 160.1: 106-154.
[6] HINTON, Geoffrey E.; OSINDERO, Simon; TEH, Yee-Whye. A fast learning algorithm for deep belief nets. Neural Computation, 2006, 18.7: 1527-1554.
[7] RANJAN, Rajeev; PATEL, Vishal M.; CHELLAPPA, Rama. HyperFace: A deep multi-task learning framework for face detection, landmark localization, pose estimation, and gender recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41.1: 121-135.
[8] KRIZHEVSKY, Alex; SUTSKEVER, Ilya; HINTON, Geoffrey E. ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems. 2012. p. 1097-1105.
[9] HE, Kaiming, et al. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016. p. 770-778.
[10] HU, Jie; SHEN, Li; SUN, Gang. Squeeze-and-excitation networks. arXiv preprint arXiv:1709.01507, 2017.
[11] 林之昫 (Hubert Lin) (2017). The last ImageNet Large Scale Visual Recognition Challenge (ILSVRC 2017) has concluded; will the WebVision image challenge be the next ImageNet challenge? https://goo.gl/5rHG1y
[12] LI, Wen, et al. WebVision database: Visual learning and understanding from web data. arXiv preprint arXiv:1708.02862, 2017.
[13] WebVision. https://www.vision.ee.ethz.ch/webvision/2017/index.html
[14] WebVision Challenge Results. https://www.vision.ee.ethz.ch/webvision/2017/challenge_results.html
[15] HU, Yuheng, et al. What we Instagram: A first analysis of Instagram photo content and user types. In: ICWSM. 2014.
[16] SZEGEDY, Christian, et al. Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015. p. 1-9.
[17] IOFFE, Sergey; SZEGEDY, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[18] SZEGEDY, Christian, et al. Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016. p. 2818-2826.
[19] CHOLLET, François. Xception: Deep learning with depthwise separable convolutions. arXiv preprint arXiv:1610.02357, 2017.
[20] HOWARD, Andrew G., et al. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
[21] Keras Documentation. https://keras.io/applications/
[22] HARRIS, Zellig S. Distributional structure. Word, 1954, 10.2-3: 146-162.
[23] Vector Representations of Words. https://www.tensorflow.org/tutorials/word2vec
[24] MIKOLOV, Tomas, et al. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
[25] 李維平; 張加憲 (2013). Automatic hierarchical clustering using the N-group average-linkage method. 電子商務學報 (Journal of E-Business), 15.1: 35-56.
[26] VAN DER MAATEN, Laurens; HINTON, Geoffrey. Visualizing data using t-SNE. Journal of Machine Learning Research, 2008, 9.Nov: 2579-2605.
[27] HINTON, Geoffrey E.; ROWEIS, Sam T. Stochastic neighbor embedding. In: Advances in Neural Information Processing Systems. 2003. p. 857-864.
[28] Flood and Fire Twitter Capture and Analysis Toolset (ff-tcat). https://github.com/Sparklet73/ff-tcat.git
[29] ROBINSON, Sara (2016). Google Cloud Vision – Safe Search Detection API. https://cloud.google.com/blog/big-data/2016/08/filtering-inappropriate-content-with-the-cloud-vision-api
[30] Caffe. http://caffe.berkeleyvision.org/
[31] Open NSFW model, Yahoo. https://github.com/yahoo/open_nsfw.git
[32] GODIN, Fréderic, et al. Multimedia Lab @ ACL W-NUT NER Shared Task: Named entity recognition for Twitter microposts using distributed word representations. In: Proceedings of the Workshop on Noisy User-generated Text. 2015. p. 146-153.

Description: Master's thesis
National Chengchi University
Department of Computer Science
Student ID: 105753041
Source: http://thesis.lib.nccu.edu.tw/record/#G0105753041
Type: thesis
Identifier: G0105753041
URI: http://nccur.lib.nccu.edu.tw/handle/140.119/122134
Format: application/pdf (5,047,183 bytes)

Table of Contents:
Chapter 1. Introduction
  1.1 Research background
  1.2 Research objectives and methods
  1.3 Contributions
  1.4 Thesis organization
Chapter 2. Technical Background and Related Work
  2.1 Evolution of deep learning
  2.2 Related work
    2.2.1 Convolutional neural networks and related models
    2.2.2 Bag-of-words
    2.2.3 Word2Vec
    2.2.4 Hierarchical clustering algorithms
    2.2.5 t-SNE
Chapter 3. Dataset
  3.1 Travel categories
    3.1.1 Food
    3.1.2 Animals
    3.1.4 Accommodation
    3.1.5 Transportation
    3.1.6 Scenery
    3.1.7 Street views
    3.1.8 Aerial views
    3.1.9 Fireworks
  3.2 Non-travel categories
    3.2.1 Idols
    3.2.2 Political news
    3.2.3 Portraits
    3.2.4 Text
    3.2.5 Non-photorealistic images
    3.2.6 Pornographic images
Chapter 4. Methodology
  4.1 Tools
    4.1.1 AllDup
    4.1.2 Google Cloud Vision API [1]
    4.1.3 Open NSFW [31]
  4.2 Experimental pipeline
    4.2.1 Duplicate image removal
    4.2.2 Pornographic image filtering (see the sketch after this outline)
    4.2.3 Definition of travel and non-travel samples
    4.2.4 Deep learning model training
Chapter 5. Experimental Results and Discussion
  5.1 Duplicate image removal
  5.2 Pornographic image filtering
    5.2.1 Tool testing and comparison
    5.2.2 Image detection and filtering
  5.3 Model training
  5.4 Hierarchical clustering and t-SNE visualization
Chapter 6. Conclusion and Future Work
References
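Sections 4.2.2 and 5.2 cover the pornographic-image filtering step, for which the thesis evaluated Google Cloud Vision SafeSearch [1][29] and Yahoo's Open NSFW model [31]. As a rough sketch of what the SafeSearch check could look like with the google-cloud-vision Python client (the record includes no code; the likelihood threshold and file handling are assumptions):

```python
# Hypothetical SafeSearch-based filter (cf. [1], [29]); the threshold and
# file handling are illustrative assumptions, not the thesis implementation.
from google.cloud import vision

client = vision.ImageAnnotatorClient()  # needs GOOGLE_APPLICATION_CREDENTIALS set

def is_explicit(path: str) -> bool:
    """Return True if Cloud Vision rates the image LIKELY or VERY_LIKELY adult."""
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    annotation = client.safe_search_detection(image=image).safe_search_annotation
    return annotation.adult >= vision.Likelihood.LIKELY

# Usage: keep only images that pass the filter.
# clean_paths = [p for p in image_paths if not is_explicit(p)]
```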
DOI: 10.6814/THE.NCCU.CS.002.2019.B02