Please use this identifier to cite or link to this item: https://ah.lib.nccu.edu.tw/handle/140.119/146861
Title: 從高維度消費紀錄挖掘隱藏偏好
Discovering Hidden Preferences from High Dimensional Consumption Records
Author: 陳品嘉
Chen, Pin-Chia
Contributors: Chuang, Hao-Chun (莊皓鈞); Lin, Ching-Ting (林靖庭); Chen, Pin-Chia (陳品嘉)
Keywords: High-dimensional data; Topic modeling; Non-negative matrix factorization (NMF); Deep learning
Date: 2023
Upload date: 1-Sep-2023
Abstract: Understanding users' consumption behavior is critical in many fields, especially marketing. However, the complex behavior embedded in high-dimensional, dynamic transaction data makes it hard to extract meaningful insights. To tackle this problem, we combine non-negative matrix factorization (NMF) and a recurrent neural network (RNN) to develop Dynamic Deep NMF, a model that captures dynamic patterns and elicits hidden preferences. The NMF decomposition summarizes consumption topics and users' interests across those topics, while the recurrent structure of the RNN captures how users' interests evolve over time. We design a simulation experiment that generates synthetic data to test the performance of NMF and Dynamic Deep NMF. Finally, we use an empirical dataset to demonstrate what hidden topics Dynamic Deep NMF finds and how it captures dynamic user behavior.
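The NMF step described in the abstract can be sketched as follows: a non-negative user-by-item consumption matrix V is factored into user-topic interests W and topic-item loadings H, so that V ≈ WH. This is a minimal illustration using scikit-learn, with invented dimensions and synthetic count data; it is not the thesis's actual Dynamic Deep NMF implementation, whose recurrent component would additionally model the time evolution of each user's row of W.

```python
# Minimal NMF sketch: factor a user-by-item count matrix V into
# W (user-topic interests) and H (topic-item loadings), V ~= W @ H.
# Dimensions and data are illustrative, not from the thesis.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
V = rng.poisson(lam=1.0, size=(100, 50)).astype(float)  # 100 users x 50 items

model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(V)   # (100, 5): each user's interest in 5 topics
H = model.components_        # (5, 50): each topic's loading on the 50 items

print(W.shape, H.shape)
```

Both factors are constrained to be non-negative, which is what makes the topics interpretable as additive consumption patterns rather than signed components.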
Description: Master's thesis
National Chengchi University
Department of Money and Banking
110352019
Source: http://thesis.lib.nccu.edu.tw/record/#G0110352019
Type: thesis
Appears in Collections: Theses

Files in This Item:
index.html (113 B, HTML)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.