題名 基於異質型偏好排序表示法之整合圖結構資訊於改進推薦系統效能
Improving Recommendation Performance via Incorporating Graph Structural Information based on Heterogeneous Preference Embedding
作者 張麒竑
Chang, Chi-Hung
貢獻者 蔡銘峰
Tsai, Ming-Feng
張麒竑
Chang, Chi-Hung
關鍵詞 推薦系統
圖形學習
圖形結構
Recommender system
Graph learning
Graph structure
日期 2022
上傳時間 2-Dec-2022 15:20:18 (UTC+8)
摘要 推薦系統(Recommendation System)發展至今已有三十餘年,從最初較為簡單的暢銷品(Best-Seller)推薦,到有參考他人和商品資訊的方法,如傳統的協同過濾(Collaborative Filtering)演算法和基於內容過濾(Content-Based Filtering);後續進階有將多種方法混合的方法(Hybrid Method),以及近年相當盛行的使用了機器學習(Machine Learning)和深度學習(Deep Learning)的各式先進模型。然而,現在先進的模型或引入知識圖譜(Knowledge Graph),或加入神經網路(Neural Network),雖然能確實地提升模型訓練預測的準確度,但除了會耗費較長的訓練時間及記憶體空間消耗外,某些看似隱含著有助於預測的資訊也有可能會被忽略而未被訓練模型考慮,「圖形結構」即為一個可能隱含正向幫助的資訊,但是鮮少有將此資訊應用至模型訓練當中。

在本篇當中,我們提出了 HPEstruc 方法。此方法發想於 Facebook 團隊在 2018 年提出的 SEAL(Learning from Subgraphs, Embeddings, and Attributes for Link prediction)模型,除了使用了深度學習進行訓練外,另外將「圖形結構」的資訊加入到模型中,並且在預測任務中得到了相當不錯的成績。因此我們認為,「圖形結構」對於推薦模型的訓練及預測,應能帶來正向的幫助。於是,在本篇當中,我們選定了異質型偏好排序表示法(Heterogeneous Preference Embedding,HPE)作為推薦的訓練模型,並且使用了可以將圖形結構轉換成向量表示法的 struc2vec 來進行圖形結構的擷取,將擷取到的圖形結構資訊加入異質型偏好排序表示法訓練模型當中,比較「圖形結構」對於推薦模型的訓練及預測是否有幫助。從實驗結果可以得知,HPEstruc 在社群網路類型的資料集可以得到比原始 HPE 模型更好的預測準確度,證明了「圖形結構」對於社群網路類型的資料,在推薦預測上是有所幫助的。

除了比較「圖形結構資訊加入與否」對於推薦模型預測的效果是否有改善外,另外有將預測結果與現今廣泛被使用的深度學習模型預測結果進行比較,以及與較為先進、預測結果較佳的 SEAL 模型進行比較,並且針對 HPEstruc 與 SEAL 模型作法上的差異進行深入探討與比較。
Recommender systems have been developed for about 30 years. In addition to traditional methods such as collaborative filtering and content-based filtering, hybrid methods that combine several approaches have been proposed in the field. In recent years, more advanced methods based on machine learning and deep learning have also been proposed, some of which incorporate knowledge graphs for better prediction. However, such models are costly in training time and memory, and information that could aid prediction, such as graph structure, is rarely exploited. Leveraging graph structural information is therefore a potentially crucial research direction.

In this thesis, we propose a training method named HPEstruc, which is inspired by SEAL (Learning from Subgraphs, Embeddings, and Attributes for Link prediction). SEAL uses deep learning for training, incorporates "graph structural information" into the model, and achieves good results on prediction tasks. Accordingly, we believe that graph structural information can positively influence a recommender model during training and prediction. We therefore choose Heterogeneous Preference Embedding (HPE) as our training model and use struc2vec, which converts graph structure into embeddings, to extract the graph structural information. We then add the extracted structural information to the HPE model and examine whether it improves training and prediction. The experimental results show that HPEstruc achieves better prediction accuracy than the original HPE model on social network datasets, demonstrating that graph structure is helpful for recommendation on this type of data.
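The pipeline described above — extracting structural embeddings with struc2vec and feeding them into HPE alongside preference information — can be sketched at a high level as follows. This is a minimal illustration under stated assumptions: the fusion by weighted concatenation, the function names, and the dimensions are hypothetical, not the thesis's actual formulation.

```python
import numpy as np

def combine_embeddings(pref_emb, struc_emb, alpha=0.5):
    """Fuse an HPE-style preference embedding with a struc2vec-style
    structural embedding by weighted concatenation. This scheme is an
    illustrative assumption, not the thesis's exact method."""
    return np.concatenate([alpha * np.asarray(pref_emb),
                           (1.0 - alpha) * np.asarray(struc_emb)])

def score(user_vec, item_vec):
    """Rank a candidate item for a user by the inner product of the
    combined embeddings."""
    return float(np.dot(user_vec, item_vec))

# Toy example with random vectors standing in for learned embeddings.
rng = np.random.default_rng(0)
user = combine_embeddings(rng.normal(size=8), rng.normal(size=4))
item = combine_embeddings(rng.normal(size=8), rng.normal(size=4))
print(score(user, item))
```

In this sketch, candidates would be ranked by `score`; the point is only that the structural signal enters the model as extra embedding dimensions rather than replacing the preference signal.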

Furthermore, we compare the results of HPEstruc with those of widely used deep learning prediction models and with SEAL, a state-of-the-art model with strong prediction results, and we discuss the methodological differences between HPEstruc and SEAL in detail.
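Both components named above rely on weighted random walks over a graph: HPE samples edges from the user-item preference network, and struc2vec generates node contexts from structural similarity. A generic weighted walk, over a hypothetical toy preference graph, might look like the sketch below; the actual weighting schemes in the thesis differ for each component.

```python
import random

def weighted_random_walk(adj, start, length, rng=None):
    """Generate one random walk of at most `length` steps, choosing
    each next node with probability proportional to edge weight.
    `adj` maps a node to a dict of {neighbour: weight}."""
    rng = rng or random.Random(42)  # fixed seed for reproducibility
    walk = [start]
    node = start
    for _ in range(length):
        neighbours = adj.get(node)
        if not neighbours:  # dead end: stop early
            break
        nodes, weights = zip(*neighbours.items())
        node = rng.choices(nodes, weights=weights, k=1)[0]
        walk.append(node)
    return walk

# Hypothetical toy user-item preference graph; weights could be
# interaction counts in a recommendation setting.
adj = {
    "u1": {"i1": 3.0, "i2": 1.0},
    "i1": {"u1": 3.0, "u2": 2.0},
    "i2": {"u1": 1.0},
    "u2": {"i1": 2.0},
}
print(weighted_random_walk(adj, "u1", 5))
```

The walks sampled this way supply (node, context) pairs for embedding training, which is the common mechanism behind both HPE's edge sampling and struc2vec's context generation.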
參考文獻 [1] N. S. Altman. An introduction to kernel and nearest-neighbor nonparametric regression. The American Statistician, 46(3):175–185, 1992.
[2] Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828, 2013.
[3] J. S. Breese, D. Heckerman, and C. Kadie. Empirical analysis of predictive algorithms for collaborative filtering. In Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence, UAI '98, pages 43–52, San Francisco, CA, USA, 1998. Morgan Kaufmann Publishers Inc.
[4] C.-M. Chen, M.-F. Tsai, Y.-C. Lin, and Y.-H. Yang. Query-based music recommendations via preference embedding. In Proceedings of the 10th ACM Conference on Recommender Systems, RecSys '16, pages 79–82, New York, NY, USA, 2016. Association for Computing Machinery.
[5] G. E. Dahl, T. N. Sainath, and G. E. Hinton. Improving deep neural networks for LVCSR using rectified linear units and dropout. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 8609–8613, 2013.
[6] L. Ehrlinger and W. Wöß. Towards a definition of knowledge graphs. SEMANTiCS (Posters, Demos, SuCCESS), 48(1-4):2, 2016.
[7] J. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 79(8):2554–2558, 1982.
[8] K. S. Jones. A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation, 1972.
[9] T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In Proceedings of the 5th International Conference on Learning Representations, ICLR '17, 2017.
[10] Y. Koren, R. Bell, and C. Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8):30–37, 2009.
[11] P. Liang. Semi-supervised learning for natural language. PhD thesis, Massachusetts Institute of Technology, 2005.
[12] D. G. Lowe. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, volume 2, pages 1150–1157. IEEE, 1999.
[13] H. P. Luhn. A statistical approach to mechanized encoding and searching of literary information. IBM Journal of Research and Development, 1(4):309–317, 1957.
[14] J. Mairal, J. Ponce, G. Sapiro, A. Zisserman, and F. Bach. Supervised dictionary learning. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems, volume 21. Curran Associates, Inc., 2008.
[15] W. S. McCulloch and W. Pitts. A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics, 5(4):115–133, 1943.
[16] A. A. Mohammed and V. Umaashankar. Effectiveness of hierarchical softmax in large scale classification tasks. In 2018 International Conference on Advances in Computing, Communications and Informatics (ICACCI), pages 1090–1094, 2018.
[17] M. Müller. Dynamic time warping. Information Retrieval for Music and Motion, pages 69–84, 2007.
[18] K. O'Shea and R. Nash. An introduction to convolutional neural networks. arXiv preprint arXiv:1511.08458, 2015.
[19] J. Qiu, Q. Chen, Y. Dong, J. Zhang, H. Yang, M. Ding, K. Wang, and J. Tang. GCC: Graph contrastive coding for graph neural network pre-training. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '20, pages 1150–1160, New York, NY, USA, 2020. Association for Computing Machinery.
[20] L. F. Ribeiro, P. H. Saverese, and D. R. Figueiredo. struc2vec: Learning node representations from structural identity. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '17, pages 385–394, New York, NY, USA, 2017. Association for Computing Machinery.
[21] S. E. Robertson, S. Walker, S. Jones, M. M. Hancock-Beaulieu, M. Gatford, et al. Okapi at TREC-3. NIST Special Publication SP, 109:109, 1995.
[22] F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6):386, 1958.
[23] G. Salton and C. Buckley. Term-weighting approaches in automatic text retrieval. Information Processing & Management, 24(5):513–523, 1988.
[24] J. Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85–117, 2015.
[25] Y. Shoham. Combining content-based and collaborative recommendation. Communications of the ACM, 1997.
[26] C. Sun and G. Wu. Adaptive graph diffusion networks with hop-wise attention. arXiv preprint arXiv:2012.15024, 2020.
[27] H. Wang, F. Zhang, J. Wang, M. Zhao, W. Li, X. Xie, and M. Guo. RippleNet: Propagating user preferences on the knowledge graph for recommender systems. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM '18, pages 417–426, New York, NY, USA, 2018. Association for Computing Machinery.
[28] X. Wang, Y. Xu, X. He, Y. Cao, M. Wang, and T.-S. Chua. Reinforced negative sampling over knowledge graph for recommendation. In Proceedings of The Web Conference 2020, WWW '20, pages 99–109, New York, NY, USA, 2020. Association for Computing Machinery.
[29] Z. Wang, G. Lin, H. Tan, Q. Chen, and X. Liu. CKAN: Collaborative knowledge-aware attentive network for recommender systems. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '20, pages 219–228, New York, NY, USA, 2020. Association for Computing Machinery.
[30] Z. Wang, Y. Zhou, L. Hong, Y. Zou, and H. Su. Pairwise learning for neural link prediction. arXiv preprint arXiv:2112.02936, 2021.
[31] H. Yin, M. Zhang, Y. Wang, J. Wang, and P. Li. Algorithm and system co-design for efficient subgraph-based graph representation learning. arXiv preprint arXiv:2202.13538, 2022.
[32] F. Zhang, N. J. Yuan, D. Lian, X. Xie, and W.-Y. Ma. Collaborative knowledge base embedding for recommender systems. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, pages 353–362, New York, NY, USA, 2016. Association for Computing Machinery.
[33] M. Zhang and Y. Chen. Link prediction based on graph neural networks. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS '18, pages 5171–5181, Red Hook, NY, USA, 2018. Curran Associates Inc.
[34] M. Zhang, Z. Cui, M. Neumann, and Y. Chen. An end-to-end deep learning architecture for graph classification. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1), 2018.
描述 碩士
國立政治大學
資訊科學系
109753119
資料來源 http://thesis.lib.nccu.edu.tw/record/#G0109753119
資料類型 thesis
dc.contributor.advisor 蔡銘峰zh_TW
dc.contributor.advisor Tsai, Ming-Fengen_US
dc.contributor.author (Authors) 張麒竑zh_TW
dc.contributor.author (Authors) Chang, Chi-Hungen_US
dc.creator (作者) 張麒竑zh_TW
dc.creator (作者) Chang, Chi-Hungen_US
dc.date (日期) 2022en_US
dc.date.accessioned 2-Dec-2022 15:20:18 (UTC+8)-
dc.date.available 2-Dec-2022 15:20:18 (UTC+8)-
dc.date.issued (上傳時間) 2-Dec-2022 15:20:18 (UTC+8)-
dc.identifier (Other Identifiers) G0109753119en_US
dc.identifier.uri (URI) http://nccur.lib.nccu.edu.tw/handle/140.119/142640-
dc.description (描述) 碩士zh_TW
dc.description (描述) 國立政治大學zh_TW
dc.description (描述) 資訊科學系zh_TW
dc.description (描述) 109753119zh_TW
dc.description.tableofcontents 致謝
中文摘要
Abstract

第一章 緒論
第二章 相關文獻探討
2.1 協同過濾(Collaborative Filtering)
2.1.1 k-鄰近演算法(k-Nearest Neighbors,kNN)
2.1.2 矩陣分解
2.2 基於內容過濾(Content-Based Filtering)
2.2.1 TF-IDF(Term Frequency-Inverse Document Frequency)
2.2.2 Okapi BM25
2.3 混合方法(Hybrid Method)
2.4 網路表示法用於推薦系統(Network Embedding for Recommender Systems)
2.4.1 表示法學習(Representation Learning)
2.4.2 網路表示法(Network Embedding)
2.4.3 卷積神經網路(Convolutional Neural Network)
2.4.4 圖卷積神經網路(Graph Convolutional Network)
2.4.5 Learning from Subgraphs, Embeddings, and Attributes for Link prediction (SEAL)
2.5 知識圖譜(Knowledge Graph)
2.5.1 Collaborative Knowledge Base Embedding for Recommender Systems (CKE)
2.5.2 Propagating User Preferences on the Knowledge Graph for Recommender Systems (RippleNet)
第三章 研究方法
3.1 Struc2vec
3.1.1 計算結構相似度
3.1.2 建立前後關係
3.1.3 生成節點之間的前後關係
3.1.4 學習語言模型
3.2 異質型偏好排序表示法
3.2.1 建立網路
3.2.2 藉由加權的隨機遊走進行邊取樣
3.2.3 藉由異質型偏好排序表示法進行問題導向的建模
3.3 基於圖形結構相似度之隨機遊走
第四章 實驗結果與討論
4.1 資料集
4.2 實驗設定
4.3 實驗結果
4.4 問題探討
4.4.1 節點之結構深度對於訓練模型的影響
4.4.2 運算上的時間花費
4.4.3 計算上的空間使用
4.4.4 圖形結構資訊使用與否的比較
4.4.5 與SEAL模型之比較
第五章 結論
5.1 結論
參考文獻
zh_TW
dc.format.extent 1751421 bytes-
dc.format.mimetype application/pdf-
dc.source.uri (資料來源) http://thesis.lib.nccu.edu.tw/record/#G0109753119en_US
dc.subject (關鍵詞) 推薦系統zh_TW
dc.subject (關鍵詞) 圖形學習zh_TW
dc.subject (關鍵詞) 圖形結構zh_TW
dc.subject (關鍵詞) Recommender systemen_US
dc.subject (關鍵詞) Graph learningen_US
dc.subject (關鍵詞) Graph structureen_US
dc.title (題名) 基於異質型偏好排序表示法之整合圖結構資訊於改進推薦系統效能zh_TW
dc.title (題名) Improving Recommendation Performance via Incorporating Graph Structural Information based on Heterogeneous Preference Embeddingen_US
dc.type (資料類型) thesisen_US
dc.identifier.doi (DOI) 10.6814/NCCU202201707en_US