Academic Output - Periodical Articles


Citation Information

Title: XFlag: Explainable Fake News Detection Model on Social Media
Author(s): 簡士鎰;郁方
Chien, Shih-Yi; Yu, Fang; Yang, Cheng-Jun
Contributor: Department of Management Information Systems (資管系)
Date: 2022-04
Upload time: 21-Sep-2022 11:55:20 (UTC+8)
Abstract: Social media allows any individual to disseminate information without third-party restrictions, making it difficult to verify the authenticity of a source. The proliferation of fake news has severely affected people’s intentions and behaviors in trusting online sources. Applying AI approaches to fake news detection on social media is the focus of recent research, most of which, however, concentrates on enhancing AI performance. This study proposes XFlag, an innovative explainable AI (XAI) framework that uses a long short-term memory (LSTM) model to identify fake news articles, a layer-wise relevance propagation (LRP) algorithm to explain the LSTM-based detection model, and a situation awareness-based agent transparency (SAT) model to increase transparency in human-AI interaction. The developed XFlag framework has been empirically validated. The findings suggest that XFlag supports users in understanding system goals (perception), justifying system decisions (comprehension), and predicting system uncertainty (projection), at little cost in perceived cognitive workload.
Relation: International Journal of Human-Computer Interaction, Vol.38, No.18-20, pp.1808-1827
Type: article
DOI: https://doi.org/10.1080/10447318.2022.2062113
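The abstract pairs an LSTM classifier with layer-wise relevance propagation (LRP) to explain its predictions. Purely as an illustration of the LRP idea (not the authors' implementation), the sketch below applies the epsilon-LRP redistribution rule to a single dense layer in plain Python; the layer sizes, weights, and function name are hypothetical. XFlag applies LRP to an LSTM, which additionally requires propagation rules for gates and memory cells.

```python
# Minimal epsilon-LRP redistribution for one dense layer y_k = sum_j a_j * w[j][k].
# Each input j receives a share of output relevance proportional to its
# contribution z_jk = a_j * w[j][k] to the pre-activation z_k.

EPS = 1e-9  # stabilizer to avoid division by (near-)zero pre-activations

def lrp_dense(activations, weights, relevance_out):
    """Redistribute relevance from outputs back to inputs."""
    n_in, n_out = len(activations), len(relevance_out)
    # Pre-activation of each output neuron
    z = [sum(activations[j] * weights[j][k] for j in range(n_in))
         for k in range(n_out)]
    relevance_in = [0.0] * n_in
    for j in range(n_in):
        for k in range(n_out):
            zjk = activations[j] * weights[j][k]
            relevance_in[j] += zjk / (z[k] + EPS) * relevance_out[k]
    return relevance_in

# Toy example: 3 input features, 2 output neurons (all values hypothetical)
a = [1.0, 2.0, 0.5]
w = [[0.2, -0.1],
     [0.4, 0.3],
     [-0.1, 0.5]]
r_out = [1.0, 0.5]
r_in = lrp_dense(a, w, r_out)
```

The property that makes such scores interpretable is relevance conservation: the relevance assigned to the inputs sums (approximately) to the relevance placed on the outputs, so each input token's score can be read as its share of the model's decision.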
dc.contributor 資管系
dc.creator (Author) 簡士鎰;郁方
dc.creator (Author) Chien, Shih-Yi; Yu, Fang; Yang, Cheng-Jun
dc.date (Date) 2022-04
dc.date.accessioned 21-Sep-2022 11:55:20 (UTC+8)
dc.date.available 21-Sep-2022 11:55:20 (UTC+8)
dc.date.issued (Upload time) 21-Sep-2022 11:55:20 (UTC+8)
dc.identifier.uri (URI) http://nccur.lib.nccu.edu.tw/handle/140.119/142041
dc.description.abstract (Abstract) Social media allows any individual to disseminate information without third-party restrictions, making it difficult to verify the authenticity of a source. The proliferation of fake news has severely affected people’s intentions and behaviors in trusting online sources. Applying AI approaches to fake news detection on social media is the focus of recent research, most of which, however, concentrates on enhancing AI performance. This study proposes XFlag, an innovative explainable AI (XAI) framework that uses a long short-term memory (LSTM) model to identify fake news articles, a layer-wise relevance propagation (LRP) algorithm to explain the LSTM-based detection model, and a situation awareness-based agent transparency (SAT) model to increase transparency in human-AI interaction. The developed XFlag framework has been empirically validated. The findings suggest that XFlag supports users in understanding system goals (perception), justifying system decisions (comprehension), and predicting system uncertainty (projection), at little cost in perceived cognitive workload.
dc.format.extent 109 bytes
dc.format.mimetype text/html
dc.relation (Relation) International Journal of Human-Computer Interaction, Vol.38, No.18-20, pp.1808-1827
dc.title (Title) XFlag: Explainable Fake News Detection Model on Social Media
dc.type (Type) article
dc.identifier.doi (DOI) 10.1080/10447318.2022.2062113
dc.doi.uri (DOI) https://doi.org/10.1080/10447318.2022.2062113