Please use this identifier to cite or link to this item: https://ah.lib.nccu.edu.tw/handle/140.119/137294
DC Field | Value | Language
dc.contributor.advisor | 蔡銘峰 | zh_TW
dc.contributor.advisor | Tsai, Ming-Feng | en_US
dc.contributor.author | 陳先灝 | zh_TW
dc.contributor.author | Chen, Hsien-Hao | en_US
dc.creator | 陳先灝 | zh_TW
dc.creator | Chen, Hsien-Hao | en_US
dc.date | 2021 | en_US
dc.date.accessioned | 2021-10-01T02:05:46Z | -
dc.date.available | 2021-10-01T02:05:46Z | -
dc.date.issued | 2021-10-01T02:05:46Z | -
dc.identifier | G0108753107 | en_US
dc.identifier.uri | http://nccur.lib.nccu.edu.tw/handle/140.119/137294 | -
dc.description | 碩士 | zh_TW
dc.description | 國立政治大學 | zh_TW
dc.description | 資訊科學系 | zh_TW
dc.description | 108753107 | zh_TW
dc.description.abstract (zh_TW):
隨著電子商務、影像串流服務等線上服務平台的發展,各大服務供應商對於「精準掌握用戶喜好」等相關技術的需求也逐季提升。其中,推薦系統作為這類方法的核心技術,如何在多變的現實問題中提出符合特定需求的解決方式,也成為近年來相關研究的主要方向。

在本研究中,我們特別關心的是推薦系統中的冷啟動 (Cold Start) 問題。冷啟動問題發生的主要原因,是特定情況造成的資料稀缺,比如推薦系統中的新用戶/物品等等。由於其困難性和實際應用中的無可避免,冷啟動一直是推薦系統研究中一個具有挑戰性的問題。

緩解此問題的一種有效方法,是利用相關領域的知識來彌補目標領域的數據缺失,即所謂跨領域推薦 (Cross-Domain Recommendation)。跨領域推薦的主要目的,是在多個不同的領域中實行推薦演算法,從中描繪出用戶的個人偏好 (Personal Preference),再利用這些資訊來補充目標領域缺少的數據,從而在某種程度上解決冷啟動問題。

在本文中,我們提出了一個基於用戶表示法轉換的跨領域偏好排序方法 (CPR),它讓用戶從源域 (Source Domain) 和目標域 (Target Domain) 的物品中同時擷取資訊,並據此進行表示法學習,將其轉化為自身偏好的表示向量。通過這樣的轉換形式,CPR 除了能有效地利用源域的資訊之外,也能直接地以此更新目標域中用戶和物品的相關表示,從而有效地改善目標域的推薦成果。

在數據實驗中,為了有效證明 CPR 方法的能力,我們將 CPR 實驗在六個不同的工業級資料集上,並在差異化的條件設定 (目標域全體用戶、冷啟動用戶、共同用戶) 中進行測試,同時以先進的跨領域和單領域推薦演算法做為比較基準。實驗結果顯示,CPR 不僅成功提高目標域整體的推薦效能,針對冷啟動用戶也達到相當好的成果。
dc.description.abstract (en_US):
With the development of online service platforms such as e-commerce and video streaming services, major service providers' demand for technologies that accurately capture user preferences has grown steadily. Recommender systems are the core technology behind such services, and how to devise solutions that fit specific needs amid changing real-world problems has become a main direction of recent research.

In this research, we focus on the cold-start problem in recommender systems. The cold-start problem arises from data scarcity under specific circumstances, such as new users or new items entering the system. Owing to its difficulty and its inevitability in practical applications, it has long been a challenging problem in recommender systems research.

One effective way to mitigate it is to use knowledge from a related domain to compensate for the missing data in the target domain, which is known as cross-domain recommendation: recommendation algorithms are run over multiple domains to profile each user's personal preferences, and this information supplements the data the target domain lacks.

In this thesis, we propose a cross-domain preference ranking method based on user embedding transformation (CPR). It lets a user's representation draw information from items in both the source domain and the target domain, learns representations accordingly, and transforms them into a vector expressing the user's own preferences. Through this transformation, CPR not only exploits source-domain information effectively but also uses it to directly update the representations of users and items in the target domain, thereby improving recommendation results in the target domain.

To demonstrate the capability of CPR, we experiment with it on six different industrial-scale datasets under differentiated settings (all target-domain users, cold-start users, and users shared between domains), with state-of-the-art cross-domain and single-domain recommendation algorithms as comparison baselines. The results show that CPR not only improves overall recommendation performance in the target domain but also achieves very good results for cold-start users.
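The abstract's central mechanism (pre-trained user vectors from a source domain, transformed and then used to rank target-domain items with a BPR-style pairwise objective) can be illustrated with a minimal sketch. All names, shapes, and update rules below are illustrative assumptions for exposition, not the thesis's actual UET-CPR implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                      # embedding dimension (illustrative)
n_users, n_items_tgt = 5, 10

# Source-domain user embeddings, assumed pre-trained on source-domain interactions.
U_src = rng.normal(scale=0.1, size=(n_users, d))
# Target-domain item embeddings and the user-embedding transformation matrix.
V_tgt = rng.normal(scale=0.1, size=(n_items_tgt, d))
W = np.eye(d)              # transformation, initialized to identity

def score(u, i):
    """Rank item i for user u via the transformed user vector (U_src[u] @ W)."""
    return (U_src[u] @ W) @ V_tgt[i]

def bpr_step(u, i_pos, i_neg, lr=0.05, reg=0.01):
    """One BPR update: push score(u, i_pos) above score(u, i_neg).

    Maximizes log sigmoid(x) for x = score(u, i_pos) - score(u, i_neg),
    with L2 regularization, updating both W and the two item vectors.
    """
    global W
    x = score(u, i_pos) - score(u, i_neg)
    g = 1.0 / (1.0 + np.exp(x))           # d(log sigmoid(x))/dx
    t_u = U_src[u] @ W                    # transformed user vector
    diff = V_tgt[i_pos] - V_tgt[i_neg]
    W += lr * (g * np.outer(U_src[u], diff) - reg * W)
    V_tgt[i_pos] += lr * (g * t_u - reg * V_tgt[i_pos])
    V_tgt[i_neg] += lr * (-g * t_u - reg * V_tgt[i_neg])

# Toy training: treat user 0 as a target-domain cold-start user who was
# observed to prefer item 1 over item 2.
before = score(0, 1) - score(0, 2)
for _ in range(200):
    bpr_step(0, 1, 2)
after = score(0, 1) - score(0, 2)
print(after > before)
```

In the thesis's actual method, the user and item representations are additionally built from graph-based neighbor aggregation (Chapter 3); here `W` is a plain linear map only to keep the pairwise-ranking idea visible in a few lines.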
dc.description.tableofcontents (zh_TW):
致謝 ... 1
中文摘要 ... 2
Abstract ... 3
第一章 緒論 ... 1
1.1 前言 ... 1
1.2 研究目的 ... 3
第二章 相關文獻探討 ... 5
2.1 表示法學習 ... 5
2.2 推薦系統 ... 6
2.2.1 冷啟動推薦 ... 7
2.2.2 跨領域推薦 ... 7
2.3 深度學習 ... 8
2.3.1 圖神經網路 ... 8
第三章 研究方法 ... 10
3.1 問題定義 ... 10
3.1.1 符號說明 ... 10
3.1.2 推薦系統 ... 11
3.1.3 跨領域推薦 ... 11
3.2 貝葉斯個人化推薦排序 ... 12
3.3 訊息傳遞模型與鄰居聚合 ... 12
3.4 UET-CPR ... 14
3.4.1 基於使用者向量轉換的跨領域鄰居聚合 ... 14
3.4.2 模型與實現方法說明 ... 15
3.5 UET-CPR 的推薦方法 ... 18
3.6 UET-CPR 的優化 ... 19
3.6.1 優化貝葉斯個人化推薦排序 ... 19
3.6.2 UET-CPR 模型設計的合理性 ... 21
第四章 實驗結果與討論 ... 22
4.1 實驗設定 ... 22
4.1.1 模型比較對象 ... 22
4.1.2 資料集 ... 23
4.1.3 實驗設定與方法說明 ... 24
4.2 研究問題說明 ... 25
4.3 實驗結果 ... 25
4.3.1 目標域上的使用者 ... 25
4.3.2 目標域上的冷啟動使用者 ... 27
4.3.3 目標域和源域上的共同使用者 ... 28
4.4 關於 UET-CPR 模型與其他模型在不同資料集上表現差異的討論 ... 28
4.4.1 不同資料集對結果的影響 ... 29
4.4.2 TV-VOD 資料集 ... 29
4.4.3 CSJ-HK 資料集 ... 29
4.4.4 MT-B 資料集 ... 30
4.4.5 小結 ... 30
4.5 個案探討-以 TV-VOD 資料集為例 ... 30
第五章 結論 ... 32
參考文獻 ... 34
dc.format.extent | 1829149 bytes | -
dc.format.mimetype | application/pdf | -
dc.source.uri | http://thesis.lib.nccu.edu.tw/record/#G0108753107 | en_US
dc.subject | 推薦系統 | zh_TW
dc.subject | 機器學習 | zh_TW
dc.subject | 跨領域推薦 | zh_TW
dc.subject | 冷啟動問題 | zh_TW
dc.subject | Recommendation System | en_US
dc.subject | Recommender System | en_US
dc.subject | Machine Learning | en_US
dc.subject | Cross Domain Recommendation | en_US
dc.subject | Cold-start | en_US
dc.title | 基於使用者表示法轉換之跨領域偏好排序於推薦系統 | zh_TW
dc.title | User Embedding Transformation on Cross-domain Preference Ranking for Recommender Systems | en_US
dc.type | thesis | en_US
dc.relation.reference:
[1] A. Andoni, R. Panigrahy, G. Valiant, and L. Zhang. Learning polynomials with neural networks. In ICML, volume 32 of JMLR Workshop and Conference Proceedings, pages 1908–1916. JMLR.org, 2014.
[2] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to rank using gradient descent. In ICML '05: Proceedings of the 22nd International Conference on Machine Learning, pages 89–96, New York, NY, USA, 2005. ACM.
[3] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27, 2011. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[4] W.-L. Chiang, X. Liu, S. Si, Y. Li, S. Bengio, and C.-J. Hsieh. Cluster-GCN: An efficient algorithm for training deep and large graph convolutional networks. CoRR, abs/1905.07953, 2019.
[5] P. Covington, J. Adams, and E. Sargin. Deep neural networks for YouTube recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems, New York, NY, USA, 2016.
[6] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding, 2019.
[7] J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl. Neural message passing for quantum chemistry. CoRR, abs/1704.01212, 2017.
[8] A. Grover and J. Leskovec. node2vec: Scalable feature learning for networks. CoRR, abs/1607.00653, 2016.
[9] W. Hamilton, Z. Ying, and J. Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, 2017.
[10] X. He, K. Deng, X. Wang, Y. Li, Y. Zhang, and M. Wang. LightGCN: Simplifying and powering graph convolution network for recommendation. CoRR, abs/2002.02126, 2020.
[11] X. He, L. Liao, H. Zhang, L. Nie, X. Hu, and T.-S. Chua. Neural collaborative filtering. In Proceedings of the 26th International Conference on World Wide Web, WWW '17, pages 173–182, Republic and Canton of Geneva, CHE, 2017. International World Wide Web Conferences Steering Committee.
[12] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[13] K. Hornik, M. Stinchcombe, and H. White. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359–366, 1989.
[14] T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. 5th International Conference on Learning Representations, 2016.
[15] Y. Koren, R. Bell, and C. Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8):30–37, Aug. 2009.
[16] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
[17] M. Liu, J. Li, G. Li, and P. Pan. Cross domain recommendation via bi-directional transfer graph collaborative filtering networks. In M. d'Aquin, S. Dietze, C. Hauff, E. Curry, and P. Cudré-Mauroux, editors, CIKM, pages 885–894. ACM, 2020.
[18] T. Man, H. Shen, X. Jin, and X. Cheng. Cross-domain recommendation: An embedding and mapping approach. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, pages 2464–2470, 2017.
[19] T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space. In Y. Bengio and Y. LeCun, editors, 1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings, 2013.
[20] T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. CoRR, abs/1310.4546, 2013.
[21] B. Perozzi, R. Al-Rfou, and S. Skiena. DeepWalk: Online learning of social representations. CoRR, abs/1403.6652, 2014.
[22] R. Raina, A. Madhavan, and A. Y. Ng. Large-scale deep unsupervised learning using graphics processors. In A. P. Danyluk, L. Bottou, and M. L. Littman, editors, ICML, volume 382 of ACM International Conference Proceeding Series, pages 873–880. ACM, 2009.
[23] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi. You only look once: Unified, real-time object detection, 2015. arXiv:1506.02640.
[24] S. Rendle, C. Freudenthaler, Z. Gantner, and L. Schmidt-Thieme. BPR: Bayesian personalized ranking from implicit feedback. CoRR, abs/1205.2618, 2012.
[25] S. Rendle, W. Krichene, L. Zhang, and J. R. Anderson. Neural collaborative filtering vs. matrix factorization revisited. CoRR, abs/2005.09683, 2020.
[26] P. Resnick, N. Iacovou, M. Suchak, P. Bergstrom, and J. Riedl. GroupLens: An open architecture for collaborative filtering of netnews. In CSCW '94: Proceedings of the 1994 ACM Conference on Computer Supported Cooperative Work, pages 175–186, New York, NY, USA, 1994. ACM Press.
[27] B. Sarwar, G. Karypis, J. Konstan, and J. Riedl. Incremental singular value decomposition algorithms for highly scalable recommender systems. In Proceedings of the 5th International Conference on Computers and Information Technology, 2002.
[28] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[29] A. P. Singh and G. J. Gordon. Relational learning via collective matrix factorization. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 650–658, 2008.
[30] J. Tang, M. Qu, M. Wang, M. Zhang, J. Yan, and Q. Mei. LINE: Large-scale information network embedding. CoRR, abs/1503.03578, 2015.
[31] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc., 2017.
[32] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio. Graph attention networks, 2018.
[33] M. Volkovs, G. W. Yu, and T. Poutanen. Content-based neighbor models for cold start in recommender systems. In Proceedings of the Recommender Systems Challenge 2017, pages 1–6. 2017.
[34] M. Volkovs, G. W. Yu, and T. Poutanen. DropoutNet: Addressing cold start in recommender systems. In I. Guyon, U. von Luxburg, S. Bengio, H. M. Wallach, R. Fergus, S. V. N. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 4957–4966, 2017.
[35] X. Wang, X. He, Y. Cao, M. Liu, and T.-S. Chua. KGAT: Knowledge graph attention network for recommendation. CoRR, abs/1905.07854, 2019.
[36] X. Wang, X. He, M. Wang, F. Feng, and T. Chua. Neural graph collaborative filtering. CoRR, abs/1905.08108, 2019.
[37] K. Xu, W. Hu, J. Leskovec, and S. Jegelka. How powerful are graph neural networks? CoRR, abs/1810.00826, 2018.
[38] R. Ying, R. He, K. Chen, P. Eksombatchai, W. L. Hamilton, and J. Leskovec. Graph convolutional neural networks for web-scale recommender systems. In KDD 2018. arXiv:1806.01973.
dc.identifier.doi | 10.6814/NCCU202101563 | en_US
item.openairetype | thesis | -
item.cerifentitytype | Publications | -
item.openairecristype | http://purl.org/coar/resource_type/c_46ec | -
item.fulltext | With Fulltext | -
item.grantfulltext | restricted | -
Appears in Collections: 學位論文
Files in This Item:
File | Description | Size | Format
310701.pdf | - | 1.79 MB | Adobe PDF (View/Open)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.