Please use this identifier to cite or link to this item: https://ah.lib.nccu.edu.tw/handle/140.119/137296
Title: Improving Recommendation Performance via Enhanced User-Item relations based on Knowledge Graph Embedding
Author: Tuan, Pao-Chun (段寶鈞)
Contributors: Tsai, Ming-Feng (蔡銘峰)
Tuan, Pao-Chun (段寶鈞)
Keywords: Recommendation system
Knowledge graph
Alignment
Textual information
Date: 2021
Date Uploaded: 1-Oct-2021
Abstract: Knowledge graphs play an increasingly important role in recommendation systems, yet almost no existing method considers that the knowledge graph itself may be incomplete. Most previous studies align items on the user-item interaction graph with entities on the knowledge graph using titles or other simple information, without considering that an alignment may be wrong or that an item may not exist in the knowledge graph at all. This thesis therefore proposes a new idea: a model computes the similarity between items and entities from their textual features and derives the alignments from these similarity scores, thereby enriching the knowledge graph.

In addition, we find that almost all existing recommendation systems use one-to-one alignments between items and entities: during training, each aligned item-entity pair is merged into a single node, and other related information on the knowledge graph is used to assist training. However, recommender models trained through multi-hop propagation over the knowledge graph may lose information, require long training times, or overfit. Based on these issues, this thesis proposes extending one-to-one alignments to many-to-many alignments. Because our alignment method scores the similarity between items and entities, many-to-many alignments are easy to obtain. We then replace the word side of the item-word graph in the Text-aware Preference Ranking for Recommender Systems (TPR) model with entities during training to realize the many-to-many alignments.

We conduct Top-N recommendation tasks on four large real-world datasets. To examine whether the number of alignments affects recommendation quality, we compare many-to-one and many-to-many alignments. We also connect items and entities at random to confirm the effectiveness of the proposed alignment method, and we swap in a different knowledge graph to verify that the many-to-many alignment method performs consistently under different conditions. Furthermore, we test the hypothesis that the correctness of the alignments does not affect recommendation performance. Finally, the experimental results show that, compared with user-item recommendation systems and graph neural network (GNN) recommendation models that incorporate knowledge graphs, the proposed many-to-many alignment method achieves the best recommendation performance in most cases.
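Illustrative sketch (not part of the thesis record): the alignment idea described in the abstract can be approximated by scoring text similarity between each item and candidate entities and keeping several matches per item, which directly yields many-to-many alignments. The sketch below uses TF-IDF vectors as a stand-in for the thesis's learned text features; all names and thresholds (align_items_to_entities, top_k, min_sim) are hypothetical.

```python
# Minimal sketch of text-similarity-based, many-to-many item-entity alignment.
# TF-IDF is only a stand-in for the learned text features used in the thesis;
# item_texts, entity_texts, top_k, and min_sim are illustrative names/values.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def align_items_to_entities(item_texts, entity_texts, top_k=3, min_sim=0.2):
    """Return (item_idx, entity_idx, score) edges.

    Each item may link to several entities (and vice versa), producing
    many-to-many alignments instead of a single one-to-one link.
    """
    vectorizer = TfidfVectorizer()
    # Fit one vocabulary over both sides so the vectors are comparable.
    matrix = vectorizer.fit_transform(list(item_texts) + list(entity_texts))
    item_vecs = matrix[: len(item_texts)]
    entity_vecs = matrix[len(item_texts):]

    sims = cosine_similarity(item_vecs, entity_vecs)  # shape: (n_items, n_entities)
    edges = []
    for i, row in enumerate(sims):
        # Keep the top-k most similar entities above a similarity floor.
        for j in np.argsort(row)[::-1][:top_k]:
            if row[j] >= min_sim:
                edges.append((i, int(j), float(row[j])))
    return edges

if __name__ == "__main__":
    items = ["The Lord of the Rings fantasy novel by Tolkien",
             "A science fiction film about space travel"]
    entities = ["J. R. R. Tolkien", "The Lord of the Rings (novel)",
                "Interstellar (film)"]
    for edge in align_items_to_entities(items, entities):
        print(edge)
```

Under the approach described in the abstract, edges of this kind would then replace the word side of TPR's item-word graph with entities, so that items connect to multiple entities during training.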
References:
[1] H.-T. Cheng, L. Koc, J. Harmsen, T. Shaked, T. Chandra, H. Aradhye, G. Anderson, G. Corrado, W. Chai, M. Ispir, R. Anil, Z. Haque, L. Hong, V. Jain, X. Liu, and H. Shah. Wide & deep learning for recommender systems. In DLRS@RecSys, pages 7–10, 2016.
[2] Y.-N. Chuang, C.-M. Chen, C.-J. Wang, M.-F. Tsai, Y. Fang, and E.-P. Lim. TPR: Text-aware preference ranking for recommender systems. In CIKM, pages 215–224, 2020.
[3] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, pages 4171–4186, 2019.
[4] X. He and T.-S. Chua. Neural factorization machines for sparse predictive analytics. In SIGIR, pages 355–364, 2017.
[5] X. He, Z. He, J. Song, Z. Liu, Y.-G. Jiang, and T.-S. Chua. NAIS: Neural attentive item similarity model for recommendation. In TKDE, pages 2354–2366, 2018.
[6] X. He, L. Liao, H. Zhang, L. Nie, X. Hu, and T.-S. Chua. Neural collaborative filtering. In WWW, pages 173–182, 2017.
[7] J. Lian, X. Zhou, F. Zhang, Z. Chen, X. Xie, and G. Sun. xDeepFM: Combining explicit and implicit feature interactions for recommender systems. In KDD, pages 1754–1763, 2018.
[8] T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space. In ICLR, 2013.
[9] S. Rendle, C. Freudenthaler, Z. Gantner, and L. Schmidt-Thieme. BPR: Bayesian personalized ranking from implicit feedback. In UAI, pages 452–461, 2009.
[10] S. Rendle, Z. Gantner, C. Freudenthaler, and L. Schmidt-Thieme. Fast context-aware recommendations with factorization machines. In SIGIR, pages 635–644, 2011.
[11] S. E. Robertson, S. Walker, S. Jones, M. M. Hancock-Beaulieu, and M. Gatford. Okapi at TREC-3. In TREC, pages 109–126, 1996.
[12] Y. Shan, T. R. Hoens, J. Jiao, H. Wang, D. Yu, and J. C. Mao. Deep crossing: Web-scale modeling without manually crafted combinatorial features. In KDD, pages 255–262, 2016.
[13] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. In NIPS, pages 6000–6010, 2017.
[14] X. Wang, X. He, Y. Cao, M. Liu, and T.-S. Chua. KGAT: Knowledge graph attention network for recommendation. In KDD, pages 950–958, 2019.
[15] X. Wang, X. He, F. Feng, L. Nie, and T.-S. Chua. TEM: Tree-enhanced embedding model for explainable recommendation. In WWW, pages 1543–1552, 2018.
[16] X. Wang, X. He, L. Nie, and T.-S. Chua. Item silk road: Recommending items from information domains to social users. In SIGIR, pages 185–194, 2017.
[17] X. Wang, X. He, M. Wang, F. Feng, and T.-S. Chua. Neural graph collaborative filtering. In SIGIR, pages 165–174, 2019.
[18] X. Wang, T. Huang, D. Wang, Y. Yuan, Z. Liu, X. He, and T.-S. Chua. Learning intents behind interactions with knowledge graph for recommendation. In WWW, pages 878–887, 2021.
[19] T. Wu, E. K.-I. Chio, H.-T. Cheng, Y. Du, S. Rendle, D. Kuzmin, R. Agarwal, L. Zhang, J. Anderson, S. Singh, T. Chandra, E. H. Chi, W. Li, A. Kumar, X. Ma, A. Soares, N. Jindal, and P. Cao. Zero-shot heterogeneous transfer learning from recommender systems to cold-start search retrieval. In CIKM, pages 2821–2828, 2020.
[20] G. Zhou, X. Zhu, C. Song, Y. Fan, H. Zhu, X. Ma, Y. Yan, J. Jin, H. Li, and K. Gai. Deep interest network for click-through rate prediction. In KDD, pages 1059–1068, 2018.
Description: Master's thesis
National Chengchi University
Department of Computer Science
108753116
Source: http://thesis.lib.nccu.edu.tw/record/#G0108753116
Type: thesis
Appears in Collections: Theses

Files in This Item:
311601.pdf (5.48 MB, Adobe PDF)