Please use this identifier to cite or link to this item: https://ah.lib.nccu.edu.tw/handle/140.119/137296
DC Field | Value | Language
dc.contributor.advisor | 蔡銘峰 | zh_TW
dc.contributor.advisor | Tsai, Ming-Feng | en_US
dc.contributor.author | 段寶鈞 | zh_TW
dc.contributor.author | Tuan, Pao-Chun | en_US
dc.creator | 段寶鈞 | zh_TW
dc.creator | Tuan, Pao-Chun | en_US
dc.date | 2021 | en_US
dc.date.accessioned | 2021-10-01T02:06:19Z | -
dc.date.available | 2021-10-01T02:06:19Z | -
dc.date.issued | 2021-10-01T02:06:19Z | -
dc.identifier | G0108753116 | en_US
dc.identifier.uri | http://nccur.lib.nccu.edu.tw/handle/140.119/137296 | -
dc.description | 碩士 | zh_TW
dc.description | 國立政治大學 | zh_TW
dc.description | 資訊科學系 | zh_TW
dc.description | 108753116 | zh_TW
dc.description.abstract (zh_TW):
在推薦系統(Recommendation System)中,知識圖譜(Knowledge Graph)扮演著越來越重要的角色。但幾乎沒有任何方法考慮到知識圖譜為不完整的可能性,現有方法大多單純透過標題或其他簡易資訊將使用者-物品偏好關係圖(User-item Interaction Graph)上的物品(Item)與知識圖譜上的實體(Entity)進行連線(Alignment),卻不曾考慮到連線可能有誤或是物品其實並不存在於知識圖譜上。因此本論文提出了一個新的想法,便是透過物品和實體的文本特徵,加入模型來計算兩邊的相似度,進而獲得連線。
另外,我們發現現有的推薦系統幾乎都是使用一對一連線,在訓練過程中直接將連線的物品與實體合併為同一點,並透過知識圖譜上其他相關資訊的連線來協助訓練。但這種透過知識圖譜上的多點跳躍(Multi-hop)所訓練出來的推薦系統,有丟失資訊、訓練時間過長或模型過擬合(Overfitting)的可能性發生。於是,本論文基於此,提出將一對一連線擴展至多對多連線的概念。因為本論文之連線方式都是計算兩邊的相似度來進行連線,因此也很容易可得到多對多連線。另外,本論文將 Text-aware Preference Ranking for Recommender Systems(TPR)模型的物品與詞語關係圖(Item-word Graph)的詞語部分替換為實體來進行訓練達成了多對多連線之目的。
本論文在四個真實世界的巨量資料集上進行 Top-N 推薦任務,且為了證明連線數多寡是否影響推薦效果,我們也進行了多對一與多對多的比較實驗。除此之外,我們將物品與實體進行隨機連線,來確認本論文提出之連線方式的有效性。本論文也透過更替知識圖譜的實驗,來確保多對多連線方式在不同的條件下依然能夠保持相同表現。而我們也透過實驗來驗證「連線正確與否並不影響推薦成效」之假說。最後,在實驗結果的部分,其數據表現呈現出我們所提出之多對多連線方式與使用者-物品推薦系統或加入知識圖譜之圖神經網路(Graph Neural Network)推薦模型實際比較後大多能取得最佳的推薦效果。
dc.description.abstract (en_US):
The knowledge graph plays an increasingly important role in recommendation systems. However, almost no existing method considers the possibility that the knowledge graph is incomplete. Most previous studies align items on the user-item interaction graph with entities on the knowledge graph using only titles or other simple information, without considering that an alignment may be wrong or that an item may not exist in the knowledge graph at all. Therefore, this thesis proposes a new idea: use a model to compute the similarity between items and entities from their textual features and then obtain the possible alignments.
In addition, we find that almost all existing recommendation systems use one-to-one alignments between items and entities. During training, each aligned item-entity pair is merged into a single node, and other related information on the knowledge graph is then used to assist training. However, recommendation systems trained through multi-hop propagation over the knowledge graph may lose information, require long training times, or overfit. Based on these issues, this thesis proposes extending one-to-one alignments to many-to-many alignments. Because our alignment method scores the similarity between items and entities, many-to-many alignments are obtained naturally. In addition, we replace the word side of the item-word graph in the Text-aware Preference Ranking for Recommender Systems (TPR) model with entities during training to realize many-to-many alignment.
This thesis conducts Top-N recommendation tasks on four large real-world datasets. To examine whether the number of alignments affects recommendation performance, we also compare many-to-one and many-to-many alignments. In addition, we randomly connect items and entities to confirm the validity of the proposed alignment method. We also swap in a different knowledge graph to verify that the many-to-many alignment method maintains its performance under different conditions. Moreover, we verify the hypothesis that whether an alignment is correct or not does not affect recommendation performance. Finally, the experimental results show that in most cases our many-to-many alignment method outperforms both user-item recommendation systems and graph neural network recommendation models that incorporate a knowledge graph.
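To make the alignment idea in the abstract concrete, the following is a minimal sketch, not the thesis's actual implementation, of the text-similarity step: each item's text is scored against every entity description with BM25, and the top-k entities are kept, which directly yields many-to-many item-entity alignments. The third-party `rank_bm25` package, all identifiers, and the toy data below are assumptions for illustration only.

```python
# Minimal sketch of BM25-based item-entity alignment (illustrative only).
from rank_bm25 import BM25Okapi

# Hypothetical entity descriptions taken from a knowledge graph.
entities = {
    "e1": "science fiction film about space exploration",
    "e2": "documentary series on space exploration",
    "e3": "romantic comedy set in Paris",
}
entity_ids = list(entities)
bm25 = BM25Okapi([entities[e].lower().split() for e in entity_ids])

def top_k_alignments(item_text: str, k: int = 2):
    """Score one item's text against every entity and keep the k best."""
    scores = bm25.get_scores(item_text.lower().split())
    ranked = sorted(zip(entity_ids, scores), key=lambda p: p[1], reverse=True)
    return ranked[:k]

# An item can now align to several entities (and an entity to several items),
# instead of relying on a single, possibly wrong, title match.
print(top_k_alignments("a film about space exploration"))
```

Keeping more than one candidate per item is what produces the many-to-many structure; according to the table of contents, the thesis further refines these candidate lists (e.g., re-ranking the BM25 top-k table with BERT in Section 3.3.1).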
dc.description.tableofcontents (zh_TW):
致謝 i
中文摘要 ii
Abstract iii
第一章 緒論 1
 1.1 前言 1
 1.2 研究目的 2
第二章 相關文獻探討 4
 2.1 詞嵌入 4
 2.2 知識圖譜 5
 2.3 推薦系統 6
第三章 研究方法 8
 3.1 文本數據 8
 3.2 模型學習與進行連線 10
  3.2.1 連線型態 10
  3.2.2 BM25 11
  3.2.3 Word2vec 12
 3.3 模型優化 13
  3.3.1 以 BERT 優化 BM25 之 Top-k 連線表 13
  3.3.2 以 BPR 預訓練物品詞嵌入再用以優化 Word2vec 14
 3.4 多對多連線推薦系統 16
  3.4.1 TPR 16
  3.4.2 基於物品-實體連線之 TPR 18
第四章 實驗結果與討論 20
 4.1 資料集 20
 4.2 比較模型 21
 4.3 比較連線方式 21
 4.4 實驗設定與評估標準 22
  4.4.1 實驗設定 22
  4.4.2 評估標準 22
 4.5 實驗結果 23
  4.5.1 Top-N 推薦任務 25
  4.5.2 連線數目多寡之比較 25
  4.5.3 多對多連線的有效性 27
  4.5.4 實現於不同知識圖譜之效果 28
  4.5.5 連線正確與否對於推薦系統之影響效果 28
第五章 結論 30
 5.1 結論 30
 5.2 未來研究 30
參考文獻 32
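Section 3.3.2 in the table of contents above pre-trains item embeddings with BPR (reference [9]) before using them to refine Word2vec. As a reading aid, here is a hedged numpy sketch of a single BPR update under the standard matrix-factorization formulation; the dimensions, learning rate, and all variable names are illustrative assumptions, not the thesis's training code.

```python
# Toy numpy sketch of one BPR (Bayesian Personalized Ranking) SGD step,
# i.e. ascending the gradient of log sigma(x_ui - x_uj) with L2 regularization.
import numpy as np

rng = np.random.default_rng(0)
dim, n_users, n_items = 8, 5, 10
U = rng.normal(scale=0.1, size=(n_users, dim))   # user embeddings
V = rng.normal(scale=0.1, size=(n_items, dim))   # item embeddings

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + np.exp(-x))

def bpr_step(u: int, i: int, j: int, lr: float = 0.05, reg: float = 0.01) -> None:
    """One update for (user u, observed item i, unobserved item j)."""
    u_vec, v_i, v_j = U[u].copy(), V[i].copy(), V[j].copy()
    x_uij = u_vec @ (v_i - v_j)
    g = sigmoid(-x_uij)                  # gradient weight: 1 - sigma(x_uij)
    U[u] += lr * (g * (v_i - v_j) - reg * u_vec)
    V[i] += lr * (g * u_vec - reg * v_i)
    V[j] += lr * (-g * u_vec - reg * v_j)

# Toy usage: user 0 interacted with item 2 but not with item 7.
bpr_step(u=0, i=2, j=7)
```

In the thesis, item embeddings pre-trained this way are then used to refine the Word2vec representations that feed the alignment step.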
dc.format.extent | 5606530 bytes | -
dc.format.mimetype | application/pdf | -
dc.source.uri | http://thesis.lib.nccu.edu.tw/record/#G0108753116 | en_US
dc.subject | 推薦系統 | zh_TW
dc.subject | 知識圖譜 | zh_TW
dc.subject | 連線 | zh_TW
dc.subject | 文本資訊 | zh_TW
dc.subject | Recommendation system | en_US
dc.subject | Knowledge graph | en_US
dc.subject | Alignment | en_US
dc.subject | Textual information | en_US
dc.title | 基於知識圖譜表示法學習增強使用者與物品交互關係於推薦系統之效能改進 | zh_TW
dc.title | Improving Recommendation Performance via Enhanced User-Item Relations Based on Knowledge Graph Embedding | en_US
dc.type | thesis | en_US
dc.relation.reference (zh_TW):
[1] H.-T. Cheng, L. Koc, J. Harmsen, T. Shaked, T. Chandra, H. Aradhye, G. Anderson, G. Corrado, W. Chai, M. Ispir, R. Anil, Z. Haque, L. Hong, V. Jain, X. Liu, and H. Shah. Wide & deep learning for recommender systems. In DLRS@RecSys, pages 7–10, 2016.
[2] Y.-N. Chuang, C.-M. Chen, C.-J. Wang, M.-F. Tsai, Y. Fang, and E.-P. Lim. TPR: Text-aware preference ranking for recommender systems. In CIKM, pages 215–224, 2020.
[3] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, pages 4171–4186, 2019.
[4] X. He and T.-S. Chua. Neural factorization machines for sparse predictive analytics. In SIGIR, pages 355–364, 2017.
[5] X. He, Z. He, J. Song, Z. Liu, Y.-G. Jiang, and T.-S. Chua. NAIS: Neural attentive item similarity model for recommendation. IEEE TKDE, pages 2354–2366, 2018.
[6] X. He, L. Liao, H. Zhang, L. Nie, X. Hu, and T.-S. Chua. Neural collaborative filtering. In WWW, pages 173–182, 2017.
[7] J. Lian, X. Zhou, F. Zhang, Z. Chen, X. Xie, and G. Sun. xDeepFM: Combining explicit and implicit feature interactions for recommender systems. In KDD, pages 1754–1763, 2018.
[8] T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space. In ICLR, 2013.
[9] S. Rendle, C. Freudenthaler, Z. Gantner, and L. Schmidt-Thieme. BPR: Bayesian personalized ranking from implicit feedback. In UAI, pages 452–461, 2009.
[10] S. Rendle, Z. Gantner, C. Freudenthaler, and L. Schmidt-Thieme. Fast context-aware recommendations with factorization machines. In SIGIR, pages 635–644, 2011.
[11] S. E. Robertson, S. Walker, S. Jones, M. M. Hancock-Beaulieu, and M. Gatford. Okapi at TREC-3. In TREC, pages 109–126, 1996.
[12] Y. Shan, T. R. Hoens, J. Jiao, H. Wang, D. Yu, and J. C. Mao. Deep crossing: Web-scale modeling without manually crafted combinatorial features. In KDD, pages 255–262, 2016.
[13] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. In NIPS, pages 6000–6010, 2017.
[14] X. Wang, X. He, Y. Cao, M. Liu, and T.-S. Chua. KGAT: Knowledge graph attention network for recommendation. In KDD, pages 950–958, 2019.
[15] X. Wang, X. He, F. Feng, L. Nie, and T.-S. Chua. TEM: Tree-enhanced embedding model for explainable recommendation. In WWW, pages 1543–1552, 2018.
[16] X. Wang, X. He, L. Nie, and T.-S. Chua. Item silk road: Recommending items from information domains to social users. In SIGIR, pages 185–194, 2017.
[17] X. Wang, X. He, M. Wang, F. Feng, and T.-S. Chua. Neural graph collaborative filtering. In SIGIR, pages 165–174, 2019.
[18] X. Wang, T. Huang, D. Wang, Y. Yuan, Z. Liu, X. He, and T.-S. Chua. Learning intents behind interactions with knowledge graph for recommendation. In WWW, pages 878–887, 2021.
[19] T. Wu, E. K.-I. Chio, H.-T. Cheng, Y. Du, S. Rendle, D. Kuzmin, R. Agarwal, L. Zhang, J. Anderson, S. Singh, T. Chandra, E. H. Chi, W. Li, A. Kumar, X. Ma, A. Soares, N. Jindal, and P. Cao. Zero-shot heterogeneous transfer learning from recommender systems to cold-start search retrieval. In CIKM, pages 2821–2828, 2020.
[20] G. Zhou, X. Zhu, C. Song, Y. Fan, H. Zhu, X. Ma, Y. Yan, J. Jin, H. Li, and K. Gai. Deep interest network for click-through rate prediction. In KDD, pages 1059–1068, 2018.
dc.identifier.doi | 10.6814/NCCU202101566 | en_US
item.openairecristype | http://purl.org/coar/resource_type/c_46ec | -
item.cerifentitytype | Publications | -
item.fulltext | With Fulltext | -
item.grantfulltext | open | -
item.openairetype | thesis | -
Appears in Collections: 學位論文
Files in This Item:
File | Description | Size | Format
311601.pdf | - | 5.48 MB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.