Title 基於高階相鄰關係與最短路徑之圖表示法學習
Exploring High-order Proximity and Shortest-path Walking for Graph Representation Learning
Author 林柏宇 (Lin, Bo-Yu)
Contributors 蔡銘峰 (advisor); 林柏宇 (Lin, Bo-Yu)
Keywords 圖學習表示法 (Graph Representation Learning); 最短路徑 (Shortest Path); 鏈結預測 (Link Prediction)
Date 2023
Uploaded 1-Sep-2023 15:24:12 (UTC+8)
Abstract 圖學習表示法作為一種重要的圖分析方法,旨在將圖中的節點和邊映射到低維度的向量空間,以更精確地捕捉其結構和關係特徵。在這一框架中,高階相鄰關係扮演著關鍵角色。相對於傳統的低階表示法,高階相鄰關係提供了更深入的分析和處理能力。它能夠捕捉到節點之間的複雜連接關係,包括間接連接和共同鄰居等。此外,高階相鄰關係在圖的結構中具有重要性,如三角形關係和子圖模式等。通過學習這些關係,模型能夠更準確地理解圖的整體特徵。因此,探索高階相鄰關係對於進一步提升圖學習表示法的性能和能力至關重要。
本研究旨在引入「最短路徑」作為主要概念,以改善圖學習表示法對高階相鄰關係的學習能力。通過利用最短路徑可達到 k 步為 k 階鄰居的特性,我們希望表示法能更加直觀且準確地捕捉高階相鄰關係。此外,我們進一步探索以最短路徑長的分佈來動態決定高階鄰居的取值範圍。這種方法使得圖學習表示法能夠自動調整參數,而無需耗費大量時間和計算資源。透過基於最短路徑長的分佈,我們能夠更有效地決定高階鄰居的範圍,從而提高圖學習表示法的性能和效率。
透過以上方法,本研究希望為圖學習表示法在學習高階相鄰關係方面提供一個新的視角和改進策略,期待過程中的實驗結果與觀察發現能為未來相關的研究提供幫助。
Graph representation learning is a core technique in graph analysis that maps the nodes and edges of a graph into a low-dimensional vector space so as to capture structural and relational characteristics more precisely. Higher-order neighboring relationships are central to this task. Unlike traditional lower-order relationships, they support deeper analysis: they capture nuanced connections between nodes, such as indirect ties and shared neighbors, and they highlight important structural patterns such as triangle relationships and subgraph motifs. By learning these relationships, models can understand the overall characteristics of a graph more thoroughly. Examining higher-order neighboring relationships is therefore essential to improving the efficacy of graph representation learning.
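The notion of higher-order neighbors can be made concrete with a small sketch (an illustration, not code from the thesis): a breadth-first search that groups nodes by shortest-path distance, so that the nodes at distance k from a source node are exactly its k-th-order neighbors.

```python
from collections import deque

def neighbors_by_order(adj, source, max_order):
    """BFS from `source`, grouping nodes by shortest-path distance.

    Nodes at distance k are the k-th-order neighbors of `source`.
    `adj` maps each node to an iterable of its neighbors.
    """
    dist = {source: 0}
    orders = {k: set() for k in range(1, max_order + 1)}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if dist[u] == max_order:
            continue  # do not expand beyond the requested order
        for v in adj[u]:
            if v not in dist:  # first visit = shortest distance
                dist[v] = dist[u] + 1
                orders[dist[v]].add(v)
                queue.append(v)
    return orders
```

For example, on a graph where node 0 links to {1, 2}, 1 to {0, 2, 3}, and 3 to {1, 4}, the first-order neighbors of 0 are {1, 2}, the second-order neighbors {3}, and the third-order neighbors {4}.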
In this research, we introduce the "shortest path" principle to strengthen the capacity of graph representation learning to capture higher-order neighboring relationships. By exploiting the fact that nodes reachable within k steps along a shortest path are k-th-order neighbors, we aim for a more direct and precise portrayal of these relationships. Additionally, we explore determining the range of higher-order neighbors adaptively from the distribution of shortest-path lengths. This strategy lets graph representation learning tune its own parameters without excessive expenditure of time and computation, and thereby improves both its effectiveness and its efficiency.
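As a hedged illustration of the adaptive idea (the function names, the sampling scheme, and the quantile criterion are assumptions for this sketch, not the thesis's actual procedure), one could sample shortest-path lengths between node pairs and pick the maximum neighbor order k as a quantile of that empirical distribution:

```python
import random
from collections import deque

def bfs_lengths(adj, source):
    """Shortest-path lengths from `source` to every reachable node."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def adaptive_max_order(adj, num_samples=100, quantile=0.9, seed=0):
    """Choose the neighbor order k from sampled shortest-path lengths.

    Samples random (source, destination) pairs, records their
    shortest-path lengths, and returns the given quantile of the
    empirical distribution as the cutoff order.
    """
    rng = random.Random(seed)
    nodes = list(adj)
    lengths = []
    for _ in range(num_samples):
        src = rng.choice(nodes)
        dist = bfs_lengths(adj, src)
        dst = rng.choice(nodes)
        if dst != src and dst in dist:  # skip self and unreachable pairs
            lengths.append(dist[dst])
    if not lengths:
        return 1  # fall back to first-order neighbors
    lengths.sort()
    return lengths[min(len(lengths) - 1, int(quantile * len(lengths)))]
```

The point of the design is that k is read off the graph itself rather than swept as a hyperparameter, which is the time and computation saving the abstract refers to.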
With these methods, our research offers a new perspective and an improvement strategy for learning higher-order neighboring relationships in graph representation learning. We expect the experimental results and observations from this study to be useful to future work in this domain.
References
1. Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828, 2013.
2. Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. DeepWalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 701–710, 2014.
3. Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 855–864, 2016.
4. Xin Rong. word2vec parameter learning explained. arXiv preprint arXiv:1411.2738, 2014.
5. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
6. Lisa Ehrlinger and Wolfram Wöß. Towards a definition of knowledge graphs. SEMANTiCS (Posters, Demos, SuCCESS), 48(1-4):2, 2016.
7. Jheng-Hong Yang, Chih-Ming Chen, Chuan-Ju Wang, and Ming-Feng Tsai. HOP-Rec: High-order proximity for implicit recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems, pages 140–144, 2018.
8. Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open Graph Benchmark: Datasets for machine learning on graphs. Advances in Neural Information Processing Systems, 33:22118–22133, 2020.
9. Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. LINE: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web, pages 1067–1077, 2015.
10. Gueorgi Kossinets. Effects of missing data in social networks. Social Networks, 28(3):247–268, 2006.
11. Albert-László Barabási and Réka Albert. Emergence of scaling in random networks. Science, 286(5439):509–512, 1999.
12. Lada A. Adamic and Eytan Adar. Friends and neighbors on the web. Social Networks, 25(3):211–230, 2003.
13. Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8):30–37, 2009.
14. Keiron O'Shea and Ryan Nash. An introduction to convolutional neural networks. arXiv preprint arXiv:1511.08458, 2015.
15. Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.
16. Qiaoyu Tan, Xin Zhang, Ninghao Liu, Daochen Zha, Li Li, Rui Chen, Soo-Hyun Choi, and Xia Hu. Bring your own view: Graph neural networks for link prediction with personalized subgraph selection. In Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining, pages 625–633, 2023.
17. Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, and Meng Wang. LightGCN: Simplifying and powering graph convolution network for recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 639–648, 2020.
18. Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. Advances in Neural Information Processing Systems, 30, 2017.
19. Muhan Zhang and Yixin Chen. Link prediction based on graph neural networks. Advances in Neural Information Processing Systems, 31, 2018.
20. Anh Viet Phan, Minh Le Nguyen, Yen Lam Hoang Nguyen, and Lam Thu Bui. DGCNN: A convolutional neural network over large-scale labeled graphs. Neural Networks, 108:533–543, 2018.
21. Zhitao Wang, Yong Zhou, Litao Hong, Yuanhang Zou, Hanjing Su, and Shouzhi Chen. Pairwise learning for neural link prediction. arXiv preprint arXiv:2112.02936, 2021.
Description Master's thesis
National Chengchi University
Department of Computer Science
110753109
Source http://thesis.lib.nccu.edu.tw/record/#G0110753109
Data type thesis
dc.contributor.advisor 蔡銘峰zh_TW
dc.contributor.author (Authors) 林柏宇zh_TW
dc.contributor.author (Authors) Lin, Bo-Yuen_US
dc.creator (作者) 林柏宇zh_TW
dc.creator (作者) Lin, Bo-Yuen_US
dc.date (日期) 2023en_US
dc.date.accessioned 1-Sep-2023 15:24:12 (UTC+8)-
dc.date.available 1-Sep-2023 15:24:12 (UTC+8)-
dc.date.issued (上傳時間) 1-Sep-2023 15:24:12 (UTC+8)-
dc.identifier (Other Identifiers) G0110753109en_US
dc.identifier.uri (URI) http://nccur.lib.nccu.edu.tw/handle/140.119/147031-
dc.description (描述) 碩士zh_TW
dc.description (描述) 國立政治大學zh_TW
dc.description (描述) 資訊科學系zh_TW
dc.description (描述) 110753109zh_TW
dc.description.tableofcontents Chapter 1 Introduction
Section 1 Foreword
Section 2 Research Objectives
Chapter 2 Related Work
Section 1 Graph Representation Learning
Section 2 Link Prediction
Chapter 3 Methodology
Section 1 Problem Definition
Section 2 HOP-Rec
Section 3 Path-based Feature Extraction
Section 4 Improvement Strategies
Section 5 Method Summary
Chapter 4 Experimental Results and Discussion
Section 1 Datasets
Section 2 Baseline Models
Section 3 Experimental Settings and Evaluation Metrics
Section 4 Experimental Results
Section 5 Experiment Summary
Chapter 5 Conclusion
Section 1 Conclusion
References
zh_TW
dc.format.extent 2135945 bytes-
dc.format.mimetype application/pdf-
dc.source.uri (資料來源) http://thesis.lib.nccu.edu.tw/record/#G0110753109en_US
dc.subject (關鍵詞) 圖學習表示法zh_TW
dc.subject (關鍵詞) 最短路徑zh_TW
dc.subject (關鍵詞) 鏈結預測zh_TW
dc.subject (關鍵詞) Graph Representation Learningen_US
dc.subject (關鍵詞) Shortest Pathen_US
dc.subject (關鍵詞) Link Predictionen_US
dc.title (題名) 基於高階相鄰關係與最短路徑之圖表示法學習zh_TW
dc.title (題名) Exploring High-order Proximity and Shortest-path Walking for Graph Representation Learningen_US
dc.type (資料類型) thesisen_US