Title 可解釋AI之類型對資訊透明度及使用者信任之影響:以假評論偵測為例
ReView: the effects of explainable AI on information transparency and user trust under fake review detection
Author Weng, Yu-Liang (翁羽亮)
Advisor Chien, Shih-Yi (簡士鎰)
Keywords XAI; User study (使用者研究); Fake review detection (假評論偵測); Types of explanation (多重解釋類型); SAT
Date 2023
Date uploaded 1-Sep-2023 14:54:24 (UTC+8)
Abstract 自從COVID-19疫情後消費場所從實體大幅轉移至線上之後,消費者有比較多的機會參考線上評論,假評論的嚴重性也因此擴大,於是假評論偵測的議題在近年獲得許多關注,但由於AI/ML模型的黑盒特性,假評論偵測系統在現實場景中難以落地,因此本研究主要目的是發展具有解釋能力的系統,促進使用者對系統的信任程度。本研究提出三層式框架:AI/ML模型、XAI (可解釋AI)、XUI (可解釋介面),這三個層次環環相扣,本研究基於該框架建立一個可解釋的假評論辨識系統,LSTM在系統中作為底層AI模型的算法,XAI演算法採用LIME,而在XUI上,我們操弄介面上的解釋類型與解釋層次,假評論辨識系統需要具備什麼解釋類型,或是給予多少解釋內容,是本篇研究想探討的問題。當XUI中包含全局解釋 (global explanation)、局部解釋 (local explanation)、案例解釋 (example-based explanation)三種方法時,實驗結果發現兩種解釋彼此會有互補效果,也就是說全局解釋搭配局部解釋或是案例解釋,會表現得比單種解釋還有效,但是當三種解釋同時出現時,局部解釋和案例解釋彼此反而會有干擾效果。此外,我們發現當三種解釋方法同時出現時,減少解釋的內容也不會影響使用者的信任程度。本篇研究除了提出可解釋系統的三層框架以外,更重要的是發現全局解釋搭配上局部解釋或案例解釋可以有效提升使用者對假評論偵測系統的信任程度。本研究發現可供線上評論平台發展假評論辨識系統,藉由本篇研究知道如何提升使用者對系統的信任程度,促進他們合理的使用假評論辨識系統。
Since the COVID-19 pandemic, consumer activity has largely shifted from physical stores to online platforms, and the problem of fake reviews has grown accordingly. However, because of the black-box nature of AI/ML models, fake review detection systems have been difficult to deploy in real-world settings. The present study therefore aimed to develop an explainable system that enhances user trust. This study adopted a three-layer framework: AI/ML models, eXplainable AI (XAI), and eXplainable User Interface (XUI). These three layers are interconnected, and an explainable fake review detection system was built on this framework: an LSTM served as the underlying AI model, LIME was the XAI algorithm, and at the XUI layer the study manipulated the types and levels of explanation shown on the interface. The research questions concern which types of explanation a fake review detection system should provide and how much explanatory content it should present. When the XUI included three types of explanations (global, local, and example-based), the experimental results revealed that combining a global explanation with either a local or an example-based explanation was more effective than presenting a single type of explanation. However, when all three types appeared simultaneously, the local and example-based explanations interfered with each other. Additionally, when all three types were presented together, reducing the content of the local explanation did not significantly affect users' trust. Besides proposing the three-layer framework for an explainable system, this research shows that combining a global explanation with a local or example-based explanation effectively enhances user trust in a fake review detection system. Online review platforms seeking to develop fake review detection systems can draw on these findings to improve users' trust and promote appropriate use of such systems.
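To make the AI/ML and XAI layers of the framework concrete, the following is a minimal sketch (not the thesis's actual pipeline, dataset, or hyperparameters): it wires a small Keras LSTM review classifier to LIME's LimeTextExplainer to produce the word-level local explanations that an XUI layer could render. The toy reviews, model sizes, and class names are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the thesis's actual system): an LSTM text
# classifier explained locally with LIME, mirroring the AI/ML + XAI layers.
import numpy as np
import tensorflow as tf
from lime.lime_text import LimeTextExplainer

# Toy labelled reviews standing in for a real fake-review dataset (hypothetical).
reviews = np.array([
    "best product ever amazing amazing buy now",
    "the battery lasted two days shorter than advertised",
])
labels = np.array([1, 0])  # 1 = fake, 0 = genuine

# Text -> integer sequences -> LSTM -> P(fake).
vectorizer = tf.keras.layers.TextVectorization(max_tokens=5000,
                                               output_sequence_length=50)
vectorizer.adapt(reviews)
model = tf.keras.Sequential([
    vectorizer,
    tf.keras.layers.Embedding(input_dim=5000, output_dim=64),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(reviews, labels, epochs=2, verbose=0)

def predict_proba(texts):
    """LIME expects class probabilities with shape (n_samples, n_classes)."""
    p_fake = model.predict(np.array(texts), verbose=0).reshape(-1, 1)
    return np.hstack([1.0 - p_fake, p_fake])

# Local (per-review) explanation: which words pushed the prediction toward "fake".
explainer = LimeTextExplainer(class_names=["genuine", "fake"])
explanation = explainer.explain_instance(
    "best product ever amazing amazing buy now",
    predict_proba,
    num_features=5,  # how many highlighted words the XUI would display
)
print(explanation.as_list())  # [(word, weight), ...] for the interface to render
```

In the same spirit, a global explanation could be approximated by aggregating such per-review word weights across many reviews, and an example-based explanation by retrieving labelled training reviews similar to the one being judged.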
References Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138-52160.
Alqaraawi, A., Schuessler, M., Weiß, P., Costanza, E., & Berthouze, N. (2020). Evaluating saliency map explanations for convolutional neural networks: a user study. Proceedings of the 25th International Conference on Intelligent User Interfaces.
Arras, L., Montavon, G., Müller, K.-R., & Samek, W. (2017). Explaining recurrent neural network predictions in sentiment analysis. arXiv preprint arXiv:1706.07206.
Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., & Samek, W. (2015). On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 10(7), e0130140.
Bove, C., Lesot, M.-J., Tijus, C. A., & Detyniecki, M. (2023). Investigating the Intelligibility of Plural Counterfactual Examples for Non-Expert Users: an Explanation User Interface Proposition and User Study. Proceedings of the 28th International Conference on Intelligent User Interfaces.
Budhi, G. S., Chiong, R., & Wang, Z. (2021). Resampling imbalanced data to detect fake reviews using machine learning classifiers and textual-based features. Multimedia Tools and Applications, 80, 13079-13097.
Cai, C. J., Jongejan, J., & Holbrook, J. (2019). The effects of example-based explanations in a machine learning interface. Proceedings of the 24th International Conference on Intelligent User Interfaces.
Chattopadhay, A., Sarkar, A., Howlader, P., & Balasubramanian, V. N. (2018). Grad-CAM++: Generalized gradient-based visual explanations for deep convolutional networks. 2018 IEEE Winter Conference on Applications of Computer Vision (WACV).
Chen, J. Y., Procci, K., Boyce, M., Wright, J. L., Garcia, A., & Barnes, M. (2014). Situation awareness-based agent transparency.
Chien, S.-Y., Yang, C.-J., & Yu, F. (2022). XFlag: Explainable fake news detection model on social media. International Journal of Human–Computer Interaction, 38(18-20), 1808-1827.
Chromik, M., & Butz, A. (2021). Human-XAI interaction: a review and design principles for explanation user interfaces. Human-Computer Interaction–INTERACT 2021: 18th IFIP TC 13 International Conference, Bari, Italy, August 30–September 3, 2021, Proceedings, Part II.
Danry, V., Pataranutaporn, P., Mao, Y., & Maes, P. (2020). Wearable Reasoner: towards enhanced human rationality through a wearable device with an explainable AI assistant. Proceedings of the Augmented Humans International Conference.
Endsley, M. R. (2023). Supporting Human-AI Teams: Transparency, explainability, and situation awareness. Computers in Human Behavior, 140, 107574.
Fei, G., Mukherjee, A., Liu, B., Hsu, M., Castellanos, M., & Ghosh, R. (2013). Exploiting burstiness in reviews for review spammer detection. Proceedings of the International AAAI Conference on Web and Social Media.
Fontanarava, J., Pasi, G., & Viviani, M. (2017). Feature analysis for fake review detection through supervised classification. 2017 IEEE International Conference on Data Science and Advanced Analytics (DSAA).
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018). A survey of methods for explaining black box models. ACM Computing Surveys, 51(5), 1-42.
Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S., & Yang, G.-Z. (2019). XAI—Explainable artificial intelligence. Science Robotics, 4(37), eaay7120.
Hadash, S., Willemsen, M. C., Snijders, C., & IJsselsteijn, W. A. (2022). Improving understandability of feature contributions in model-agnostic explainable AI tools. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems.
Hart, S. G. (2006). NASA-task load index (NASA-TLX); 20 years later. Proceedings of the Human Factors and Ergonomics Society Annual Meeting.
Hoffman, R. R., Mueller, S. T., Klein, G., & Litman, J. (2018). Metrics for explainable AI: Challenges and prospects. arXiv preprint arXiv:1812.04608.
Humer, C., Hinterreiter, A., Leichtmann, B., Mara, M., & Streit, M. (2022). Comparing Effects of Attribution-based, Example-based, and Feature-based Explanation Methods on AI-Assisted Decision-Making.
Jindal, N., & Liu, B. (2007). Review spam detection. Proceedings of the 16th International Conference on World Wide Web.
Kohli, R., Devaraj, S., & Mahmood, M. A. (2004). Understanding determinants of online consumer satisfaction: A decision process perspective. Journal of Management Information Systems, 21(1), 115-136.
Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50-80. https://doi.org/10.1518/hfes.46.1.50_30392
Liao, Q. V., Gruen, D., & Miller, S. (2020). Questioning the AI: informing design practices for explainable AI user experiences. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems.
Lim, B. Y., & Dey, A. K. (2009). Assessing demand for intelligibility in context-aware applications. Proceedings of the 11th International Conference on Ubiquitous Computing.
Lim, B. Y., Dey, A. K., & Avrahami, D. (2009). Why and why not explanations improve the intelligibility of context-aware intelligent systems. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30.
Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1-38.
Mukherjee, A., Venkataraman, V., Liu, B., & Glance, N. (2013). What Yelp fake review filter might be doing? Proceedings of the International AAAI Conference on Web and Social Media.
Papenmeier, A., Englebienne, G., & Seifert, C. (2019). How model accuracy and explanation fidelity influence user trust. arXiv preprint arXiv:1907.12652.
Park, D.-H., Lee, J., & Han, I. (2007). The effect of on-line consumer reviews on consumer purchasing intention: The moderating role of involvement. International Journal of Electronic Commerce, 11(4), 125-148.
Pennington, J., Socher, R., & Manning, C. D. (2014). GloVe: Global vectors for word representation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP).
Rastogi, A., & Mehrotra, M. (2018). Impact of behavioral and textual features on opinion spam detection. 2018 Second International Conference on Intelligent Computing and Control Systems (ICICCS).
Rayana, S., & Akoglu, L. (2015). Collective opinion spam detection: Bridging review networks and metadata. Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Refaeli, D., & Hajek, P. (2021). Detecting fake online reviews using fine-tuned BERT. Proceedings of the 2021 5th International Conference on E-Business and Internet.
Ren, Y., Yan, M., & Ji, D. (2022). A hierarchical neural network model with user and product attention for deceptive reviews detection. Information Sciences, 604, 1-10.
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?" Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. (2017). Grad-CAM: Visual explanations from deep networks via gradient-based localization. Proceedings of the IEEE International Conference on Computer Vision.
Yang, S., Yu, X., & Zhou, Y. (2020). LSTM and GRU neural network performance comparison study: Taking Yelp review dataset as an example. 2020 International Workshop on Electronic Communication and Artificial Intelligence (IWECAI).
Zhao, W., Joshi, T., Nair, V. N., & Sudjianto, A. (2020). SHAP values for explaining CNN-based text classification models. arXiv preprint arXiv:2008.11825.
Zou, L., Goh, H. L., Liew, C. J. Y., Quah, J. L., Gu, G. T., Chew, J. J., Kumar, M. P., Ang, C. G. L., & Ta, A. W. A. (2022). Ensemble image explainable AI (XAI) algorithm for severe community-acquired pneumonia and COVID-19 respiratory infections. IEEE Transactions on Artificial Intelligence, 4(2), 242-254.
Description Master's thesis
National Chengchi University
Department of Management Information Systems
110356037
Source http://thesis.lib.nccu.edu.tw/record/#G0110356037
Type thesis
URI http://nccur.lib.nccu.edu.tw/handle/140.119/146892
Table of Contents CHAPTER 1. INTRODUCTION 1
CHAPTER 2. LITERATURE REVIEW 4
2.1 Fake Review Detection 4
2.2 eXplainable AI (XAI) 5
2.3 eXplainable User Interface (XUI) 9
2.4 Situation awareness-based Agent Transparency (SAT) 11
CHAPTER 3. METHODOLOGY 13
3.1 Fake review detection model (Rayana & Akoglu, 2015) 13
3.1.1 Behavioral feature 15
3.1.2 Textual feature 16
3.2 User study – XUI manipulation 19
3.3 Pilot test 22
3.4 Formal study 25
CHAPTER 4. RESULT 30
4.1 Effect of example-based explanation within multiple types of explanation 30
4.1.1 Example-based explanation within two types of explanation 30
H1.2: Example-based explanation and global explanation together can increase user trust, compared to global explanation alone. 30
4.1.2 Example-based explanation within three types of explanation 30
H1.4: Local explanation, example-based explanation and global explanation together can increase user trust, compared to global explanation together with local explanation. 30
4.2 Effect of local explanation within multiple types of explanation 32
4.2.1 Local explanation within two types of explanation 32
H1.1: Local explanation and global explanation together can increase user trust, compared to global explanation alone. 32
4.2.2 Local explanation within three types of explanation 33
H1.3: Local explanation, example-based explanation and global explanation together can increase user trust, compared to global explanation together with example-based explanation. 33
4.2.3 Comparable result between local explanation and full explanation 34
4.3 Effect of local explanation levels with global explanation 35
4.4 Effect of local explanation levels with global explanation and example-based explanation 37
CHAPTER 5. DISCUSSION 42
5.1 Types of explanation 42
5.2 Levels of explanation 44
5.3 Theoretical and Practical implication 45
CHAPTER 6. CONCLUSION and LIMITATION 47
REFERENCE 49
Appendix A - Questionnaire 53
A.1 Satisfaction 53
A.2 Understandability 53
A.3 Perceived transparency 54
A.4 Trust 54
Appendix B – NASA TLX 56
B.1 Mental demand 56
B.2 Physical demand 56
B.3 Temporal demand 56
B.4 Performance 57
B.5 Effort 57
B.6 Frustration 57
Format application/pdf, 1410347 bytes