Publications-Theses

Title 應用可解釋的遞歸神經網路於社群媒體中的假新聞辨識 (Applying explainable recurrent neural networks to fake news identification on social media)
XFlag: explainable fake news detection model on social media
Author 楊程鈞 (Yang, Cheng-Jun)
Contributors (Advisors) 簡士鎰 (Chien, Shih-Yi); 郁方 (Fang, Yu)
Keywords Explainable AI (XAI); Layer-wise relevance propagation (LRP); Situation awareness-based agent transparency (SAT); Transparency; Fake news detection; Long short-term memory (LSTM); Social media
Date 2022
Uploaded 10-Feb-2022 12:53:38 (UTC+8)
Abstract (translated from the Chinese original) Social media has become a channel through which news spreads rapidly: with only a computer or mobile device and an Internet connection, anyone can conveniently browse the latest news of the day. This is, however, a double-edged sword. Unlike traditional media, the public can easily spread information online without oversight from fact-checking organizations, so online news sources are mixed and their authenticity is hard to verify, and the proliferation of fake news has severely affected people's intention and behavior in trusting online information. To address this problem, recent studies have proposed using artificial intelligence to build fake news detection models; however, most of them focus on improving model performance (e.g., accuracy) and overlook the issue of information transparency. This study therefore proposes XFlag, an innovative explainable AI framework that proceeds in three stages. First, a long short-term memory (LSTM) model is trained to detect fake news articles on social media. Second, the layer-wise relevance propagation (LRP) algorithm is applied to the trained detection model to produce explanation vectors for its predictions. Finally, because raw numerical vectors are hard for ordinary users to interpret, the situation awareness-based agent transparency (SAT) model is used to turn the explanation vectors and predictions into a human-machine interface that users can easily understand, increasing information transparency between humans and the AI system. The effectiveness of XFlag is validated through an online user study. The results show that, compared with black-box predictions, the framework better improves system transparency and lets users understand the logic behind the detection model, thereby helping to address the fake news problem on social media. Furthermore, XFlag helps users understand system goals, justify system decisions, and predict system uncertainty at little cost in cognitive workload.
Social media platforms provide an easy and rapid approach for news consumption. They allow any individual to disseminate information without third-party restrictions (such as fact-checking), making it difficult to verify the authenticity of a source. The proliferation of fake news has severely affected people’s intentions and behaviors in trusting online sources. Applying AI approaches for fake news detection on social media is the focus of much recent research, most of which, however, focuses on enhancing AI performance (such as accuracy). In contrast, in this study we propose XFlag, an innovative explainable AI (XAI) framework which uses long short-term memory (LSTM) to identify fake news articles, a layer-wise relevance propagation (LRP) algorithm to explain the fake news detection model based on LSTM, and a situation awareness-based agent transparency (SAT) model to increase transparency in human–AI interaction. The proposed framework has been empirically validated via online user studies, the results of which confirm that the XFlag framework is effective in resolving the fake news problems on social media by enhancing system transparency and enabling a user to understand the logic behind an AI model. The research findings suggest that the use of XFlag supports users in understanding system goals (i.e., perception), justifying system decisions (i.e., comprehension), and predicting system uncertainty (i.e., projection), with little cost of perceived cognitive workload.
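To make the three-stage pipeline described in the abstract concrete, the following is a minimal, hypothetical Python sketch, not the author's implementation: an LSTM detector, a per-token relevance computation, and a mapping of the prediction and relevance scores onto the three SAT levels. All names and hyperparameters (FakeNewsLSTM, word_relevance, sat_summary, the vocabulary size, and so on) are illustrative assumptions, and gradient-times-input is used here as a simpler stand-in for the LRP rules of Arras et al. (2017) that the thesis actually applies.

    # Hypothetical sketch of the three XFlag stages; not the author's code.
    import torch
    import torch.nn as nn

    class FakeNewsLSTM(nn.Module):
        # Stage 1: LSTM-based detector scoring an article as fake (1) or real (0).
        def __init__(self, vocab_size=20000, emb_dim=128, hidden=64):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_dim)
            self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
            self.out = nn.Linear(hidden, 1)

        def forward(self, token_ids):                  # token_ids: (batch, seq_len)
            _, (h, _) = self.lstm(self.emb(token_ids))
            return torch.sigmoid(self.out(h[-1]))      # probability of "fake"

    def word_relevance(model, token_ids):
        # Stage 2: one relevance score per token (gradient x input on the embeddings),
        # standing in for the layer-wise relevance propagation used in the thesis.
        emb = model.emb(token_ids).detach().requires_grad_(True)
        _, (h, _) = model.lstm(emb)
        prob = torch.sigmoid(model.out(h[-1]))
        prob.sum().backward()
        return (emb.grad * emb).sum(-1).squeeze(0), prob.item()

    def sat_summary(tokens, relevance, prob, top_k=5):
        # Stage 3: map the prediction and relevance onto the three SAT levels
        # (goal / reasoning / uncertainty) for a user-facing explanation.
        top = relevance.argsort(descending=True)[:top_k].tolist()
        return {
            "SAT level 1 (goal)": "flag articles the model judges likely to be fake",
            "SAT level 2 (reasoning)": [tokens[i] for i in top if i < len(tokens)],
            "SAT level 3 (uncertainty)": f"estimated probability of fake: {prob:.2f}",
        }

    # Minimal usage with dummy token ids (a trained model and real tokenizer assumed).
    model = FakeNewsLSTM()
    ids = torch.randint(0, 20000, (1, 30))
    rel, p = word_relevance(model, ids)
    print(sat_summary([f"w{i}" for i in range(30)], rel, p))

In the framework itself, relevance is propagated backward through the LSTM gates with LRP rather than approximated by gradients, and the SAT levels are rendered as a designed flagging interface rather than a dictionary.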
References Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on Explainable Artificial Intelligence (XAI). IEEE Access, 6, 52138-52160.
Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of economic perspectives, 31(2), 211-236.
Arras, L., Arjona-Medina, J., Widrich, M., Montavon, G., Gillhofer, M., Müller, K.-R., Hochreiter, S., & Samek, W. (2019). Explaining and interpreting LSTMs. In Explainable AI: Interpreting, explaining and visualizing deep learning (pp. 211-238). Springer, Cham.
Arras, L., Montavon, G., Müller, K.-R., & Samek, W. (2017). Explaining recurrent neural network predictions in sentiment analysis. arXiv preprint arXiv:1706.07206.
Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., García, S., Gil-López, S., Molina, D., & Benjamins, R. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82-115.
Ayoub, J., Yang, X. J., & Zhou, F. (2021). Combat COVID-19 infodemic using explainable natural language processing models. Information Processing & Management, 58(4), 102569.
Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., & Samek, W. (2015). On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 10(7), e0130140.
Bansal, G., Wu, T., Zhou, J., Fok, R., Nushi, B., Kamar, E., ... & Weld, D. (2021, May). Does the whole exceed its parts? The effect of AI explanations on complementary team performance. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-16).
Chen, H., Lundberg, S., & Lee, S.-I. (2021). Explaining models by propagating Shapley values of local components. In Explainable AI in Healthcare and Medicine (pp. 261-270). Springer, Cham.
Chen, J. Y., Procci, K., Boyce, M., Wright, J., Garcia, A., & Barnes, M. (2014). Situation awareness-based agent transparency. Aberdeen Proving Ground, MD: U.S. Army Research Laboratory, Human Research and Engineering Directorate.
Chien, S.-Y., Lewis, M., Sycara, K., Kumru, A., & Liu, J. S. (2020). Influence of Culture, Transparency, Trust, and Degree of Automation on Automation Use. IEEE Transactions on Human-Machine Systems, 50(3), 205–214. https://doi.org/10.1109/THMS.2019.2931755
Chien, S.-Y., Lewis, M., Sycara, K., Liu, J. S., & Kumru, A. (2016). Relation between trust attitudes toward automation, Hofstede’s cultural dimensions, and big five personality traits. Proceedings of the Human Factors and Ergonomics Society, 840–844. https://doi.org/10.1177/1541931213601192
Chien, S.-Y., Lewis, M., Sycara, K., Liu, J.-S., & Kumru, A. (2018). The Effect of Culture on Trust in Automation: Reliability and Workload. ACM Transactions on Interactive Intelligent Systems. https://doi.org/10.1145/0000000.0000000
Conroy, N. J., Rubin, V. L., & Chen, Y. (2015). Automatic deception detection: Methods for finding fake news. Proceedings of the Association for Information Science and Technology, 52(1), 1-4.
Das, S. D., Basak, A., & Dutta, S. (2021). A Heuristic-driven Uncertainty based Ensemble Framework for Fake News Detection in Tweets and News Articles. arXiv preprint arXiv:2104.01791.
Dong, Y., Su, H., Zhu, J., & Zhang, B. (2017). Improving interpretability of deep neural networks with semantic information. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4306-4314).
Feng, V. W., & Hirst, G. (2013). Detecting deceptive opinions with profile compatibility. In Proceedings of the sixth international joint conference on natural language processing (pp. 338-346).
Ferrara, E., Varol, O., Davis, C., Menczer, F., & Flammini, A. (2016). The rise of social bots. Communications of the ACM, 59(7), 96-104.
Gedikli, F., Jannach, D., & Ge, M. (2014). How should I explain? A comparison of different explanation types for recommender systems. International Journal of Human-Computer Studies, 72(4), 367-382.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., ... & Bengio, Y. (2014). Generative adversarial nets. Advances in neural information processing systems, 27.
Gramlich, J. (2019, May 16). 10 facts about Americans and Facebook. Pew Research Center. https://www.pewresearch.org/fact-tank/2019/05/16/facts-about-americans-and-facebook/
Gunning, D. (2017). Explainable artificial intelligence (XAI). Defense Advanced Research Projects Agency (DARPA), nd Web, 2(2).
He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).
Herlocker, J. L., Konstan, J. A., & Riedl, J. (2000). Explaining collaborative filtering recommendations. In Proceedings of the 2000 ACM conference on Computer supported cooperative work (pp. 241-250).
Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural computation, 9(8), 1735-1780.
Kahneman, D. (2011). Thinking, fast and slow. Macmillan.
Kim, A., & Dennis, A. R. (2019). Says who? The effects of presentation format and source rating on fake news in social media. MIS Quarterly, 43(3).
Kim, A., Moravec, P. L., & Dennis, A. R. (2019). Combating Fake News on Social Media with Source Ratings: The Effects of User and Expert Reputation Ratings. Journal of Management Information Systems, 36(3), 931-968.
Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Le, Q., & Mikolov, T. (2014). Distributed representations of sentences and documents. In International conference on machine learning (pp. 1188-1196). PMLR.
Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human factors, 46(1), 50-80.
Lei, T., Barzilay, R., & Jaakkola, T. (2016). Rationalizing neural predictions. arXiv preprint arXiv:1606.04155.
Lu, Y.-J., & Li, C.-T. (2020). GCAN: Graph-aware co-attention networks for explainable fake news detection on social media. arXiv preprint arXiv:2004.11648.
Lundberg, S., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. arXiv preprint arXiv:1705.07874.
Lyons, J. B. (2013). Being transparent about transparency: A model for human-robot interaction. In 2013 AAAI Spring Symposium Series.
Ma, J., Gao, W., Mitra, P., Kwon, S., Jansen, B. J., Wong, K.-F., & Cha, M. (2016). Detecting rumors from microblogs with recurrent neural networks.
Ma, J., Gao, W., & Wong, K.-F. (2018). Rumor detection on twitter with tree-structured recursive neural networks. Association for Computational Linguistics.
Madumal, P., Singh, R., Newn, J., & Vetere, F. (2018). Interaction Design for Explainable AI: Workshop Proceedings. arXiv preprint arXiv:1812.08597.
Mercado, J. E., Rupp, M. A., Chen, J. Y., Barnes, M. J., Barber, D., & Procci, K. (2016). Intelligent agent transparency in human–agent teaming for Multi-UxV management. Human factors, 58(3), 401-415.
Mishra, S., Sturm, B. L., & Dixon, S. (2017). Local Interpretable Model-Agnostic Explanations for Music Content Analysis. In ISMIR (pp. 537-543).
Moravec, P., Kim, A., & Dennis, A. R. (2020). Appealing to Sense and Sensibility: System 1 and System 2 Interventions for Fake News on Social Media. Information Systems Research, 31(3), 987-1006.
Moravec, P., Kim, A., Dennis, A. R., & Minas, R. (2018a). Do you really know if it’s true? How asking users to rate stories affects belief in fake news on social media (October 22, 2018). Kelley School of Business Research Paper, (18-89).
Moravec, P., Minas, R., & Dennis, A. R. (2018b). Fake News on Social Media: People Believe What They Want to Believe When it Makes No Sense at All. Kelley School of Business Research Paper, (18-87).
Ott, M., Cardie, C., & Hancock, J. T. (2013). Negative deceptive opinion spam. In Proceedings of the 2013 conference of the north american chapter of the association for computational linguistics: human language technologies (pp. 497-501).
Pynadath, D. V., Barnes, M. J., Wang, N., & Chen, J. Y. (2018). Transparency communication for machine learning in human-automation interaction. In Human and machine learning (pp. 75-90). Springer, Cham.
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (pp. 1135-1144).
Rubin, V. L., & Lukoianova, T. (2015). Truth and deception at the rhetorical structure level. Journal of the Association for Information Science and Technology, 66(5), 905-917.
Ruchansky, N., Seo, S., & Liu, Y. (2017). CSI: A hybrid deep model for fake news detection. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (pp. 797-806).
Sagheer, A., & Kotb, M. (2019). Time series forecasting of petroleum production using deep LSTM recurrent networks. Neurocomputing, 323, 203-213.
Selkowitz, A. R., Lakhmani, S. G., Larios, C. N., & Chen, J. Y. (2016). Agent transparency and the autonomous squad member. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 60, No. 1, pp. 1319-1323). Sage CA: Los Angeles, CA: SAGE Publications.
Shin, D. (2021). The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human-Computer Studies, 146, 102551.
Shrikumar, A., Greenside, P., Shcherbina, A., & Kundaje, A. (2016). Not just a black box: Learning important features through propagating activation differences. arXiv preprint arXiv:1605.01713.
Shu, K., Cui, L., Wang, S., Lee, D., & Liu, H. (2019). dEFEND: Explainable fake news detection. In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining (pp. 395-405).
Tandoc Jr, E. C., Lim, Z. W., & Ling, R. (2018). Defining “fake news”: A typology of scholarly definitions. Digital Journalism, 6(2), 137-153.
van der Linden, S., Roozenbeek, J., & Compton, J. (2020). Inoculating against fake news about COVID-19. Frontiers in Psychology, 11, 2928.
Wang, N., Pynadath, D. V., & Hill, S. G. (2016). The Impact of POMDP-Generated Explanations on Trust and Performance in Human-Robot Teams. In Proceedings of the 2016 international conference on autonomous agents & multiagent systems (pp. 997-1005).
Wang, Y., Qian, S., Hu, J., Fang, Q., & Xu, C. (2020). Fake news detection via knowledge-driven multimodal graph convolutional networks. In Proceedings of the 2020 International Conference on Multimedia Retrieval (pp. 540-547).
Wang, Z., & Guo, Y. (2020). Empower rumor events detection from Chinese microblogs with multi-type individual information. Knowledge and Information Systems, 62(9), 3585-3614.
World Health Organization (2020). Coronavirus disease (COVID-19) advice for the public: Mythbusters. Available online at: https://www.who.int/emergencies/diseases/novel-coronavirus-2019/advice-for-public/myth-busters (accessed May 5, 2020).
Yang, F., Pentyala, S. K., Mohseni, S., Du, M., Yuan, H., Linder, R., ... & Hu, X. (2019). XFake: Explainable fake news detector with visualizations. In The World Wide Web Conference (pp. 3600-3604).
Yu, J., Huang, Q., Zhou, X., & Sha, Y. (2020). IARNet: An information aggregating and reasoning network over heterogeneous graph for fake news detection. In 2020 International Joint Conference on Neural Networks (IJCNN) (pp. 1-9). IEEE.
Yuan, C., Ma, Q., Zhou, W., Han, J., & Hu, S. (2019). Jointly embedding the local and global relations of heterogeneous graph for rumor detection. In 2019 IEEE International Conference on Data Mining (ICDM) (pp. 796-805). IEEE.
Zhang, H., Fan, Z., Zheng, J., & Liu, Q. (2012). An improving deception detection method in computer-mediated communication. Journal of Networks, 7(11), 1811.
Zhao, R., Benbasat, I., & Cavusoglu, H. (2019). Transparency in Advice-Giving Systems: A Framework and a Research Model for Transparency Provision. In IUI Workshops.
Zhou, Y., Booth, S., Ribeiro, M. T., & Shah, J. (2021). Do Feature Attribution Methods Correctly Attribute Features? arXiv preprint arXiv:2104.14403.
Description Master's thesis
National Chengchi University
Department of Management Information Systems
108356018
Source http://thesis.lib.nccu.edu.tw/record/#G0108356018
Type thesis
URI http://nccur.lib.nccu.edu.tw/handle/140.119/138885
Table of Contents
CHAPTER 1 INTRODUCTION
CHAPTER 2 RELATED WORK
2.1 Fake news detection on social media
2.2 Explainable artificial intelligence (XAI)
2.3 Fake news flagging mechanism
2.4 Situation awareness-based agent transparency (SAT)
CHAPTER 3 XFLAG: EXPLAINABLE FAKE NEWS DETECTION AND INTERFACE SYNTHESIS
3.1 Detection model
3.1.1 Feature selection
3.1.2 Long short-term memory (LSTM) construction
3.2 Explanation model
3.2.1 Computation of feature explanation
3.2.2 User explanation synthesis
CHAPTER 4 MODEL VALIDATION
4.1 Dataset
4.2 Experiment setup
4.3 Fake news detection performance
4.4 LRP explanation
4.5 Relevance validation
4.6 Cross-validation with Twitter dataset
CHAPTER 5 USER STUDY
5.1 First-round user study
5.2 Second-round user study
5.2.1 Pilot test: Source validation
5.2.2 Online survey user study: Experimental designs and conditions
CHAPTER 6 RESULTS
6.1 Perceived news authenticity of different XFlag conditions
6.2 Calibration of trust beliefs
6.3 System understandability and explainability in SAT approaches
6.4 Trust and workload in SAT approaches
6.5 Feature importance and user preferences
CHAPTER 7 DISCUSSION
7.1 Development of XFlag framework
7.2 SAT model in source authenticity
7.3 User perception in XFlag framework
CHAPTER 8 CONCLUSION AND FUTURE WORK
REFERENCES
Format application/pdf (1,978,537 bytes)
DOI 10.6814/NCCU202200094