Title Discovering Hidden Preferences from High Dimensional Consumption Records (從高維度消費紀錄挖掘隱藏偏好)
Author Chen, Pin-Chia (陳品嘉)
Advisors Chuang, Hao-Chun (莊皓鈞); Lin, Ching-Ting (林靖庭)
Keywords High-dimensional data; topic modeling; non-negative matrix factorization (NMF); deep learning
Date 2023
Uploaded 1-Sep-2023 14:47:45 (UTC+8)
Abstract Understanding users' consumption behavior is critical in many fields, especially marketing. However, the complex behavior embedded in high-dimensional, dynamic transaction data makes it hard to extract meaningful insights. To tackle this problem, we combine non-negative matrix factorization (NMF) and a recurrent neural network (RNN) to develop a Dynamic Deep NMF that captures dynamic patterns and elicits hidden preferences. The NMF decomposition summarizes consumption topics and each user's interest in those topics, while the recurrent structure of the RNN captures how users' interests change over time. We design a simulation experiment that generates synthetic data to compare the performance of NMF and Dynamic Deep NMF. Finally, we use an empirical dataset to demonstrate what hidden topics Dynamic Deep NMF finds and how the method captures dynamic user behavior.
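The factorization the abstract describes — decomposing a user-by-item consumption matrix into consumption topics and per-user topic interests — can be sketched with scikit-learn's NMF. This is a minimal illustration on made-up synthetic counts, not the thesis's actual Dynamic Deep NMF; the dimensions, rank, and generating process below are assumptions for demonstration only:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Hypothetical consumption counts: 100 users x 50 product categories,
# generated from 3 hidden "topics" (a stand-in for real transaction data).
true_topics = rng.dirichlet(np.ones(50) * 0.1, size=3)    # topic x item
true_interest = rng.dirichlet(np.ones(3), size=100)       # user x topic
counts = rng.poisson(20 * true_interest @ true_topics)    # user x item

# NMF decomposes counts ~= W @ H with W >= 0 and H >= 0:
#   W (user x topic): each user's interest in the hidden topics
#   H (topic x item): which items make up each consumption topic
model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(counts)   # users' interests
H = model.components_             # consumption topics

print(W.shape, H.shape)           # (100, 3) (3, 50)
```

In the thesis's setting there is one such interest vector per user per period, and Dynamic Deep NMF feeds those period-by-period interests through a recurrent network so they can evolve over time; the static factorization above covers only the decomposition half of the method.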
References
[1] A. V. Bodapati, “Recommendation systems with purchase data,” Journal of Marketing Research, vol. 45, no. 1, pp. 77–93, 2008. [Online]. Available: https://doi.org/10.1509/jmkr.45.1.077
[2] J. R. Hauser, G. L. Urban, G. Liberali, and M. Braun, “Website morphing,” Marketing Science, vol. 28, no. 2, pp. 202–223, 2009. [Online]. Available: http://www.jstor.org/stable/23884254
[3] J. R. Hauser, G. G. Liberali, and G. L. Urban, “Website morphing 2.0: Switching costs, partial exposure, random exit, and when to morph,” Management Science, vol. 60, no. 6, pp. 1594–1616, 2014. [Online]. Available: https://doi.org/10.1287/mnsc.2014.1961
[4] A. Goldfarb and C. Tucker, “Online display advertising: Targeting and obtrusiveness,” Marketing Science, vol. 30, no. 3, pp. 389–404, 2011. [Online]. Available: http://www.jstor.org/stable/23012474
[5] C. Perlich, B. Dalessandro, T. Raeder, O. Stitelman, and F. Provost, “Machine learning for targeted display advertising: Transfer learning in action,” Machine Learning, vol. 95, no. 1, pp. 103–127, Apr. 2014. [Online]. Available: https://doi.org/10.1007/s10994-013-5375-2
[6] D. D. Lee and H. S. Seung, “Learning the parts of objects by non-negative matrix factorization,” Nature, vol. 401, pp. 788–791, 1999.
[7] J. K. Pritchard, M. Stephens, and P. Donnelly, “Inference of population structure using multilocus genotype data,” Genetics, vol. 155, no. 2, pp. 945–959, Jun. 2000. [Online]. Available: https://doi.org/10.1093/genetics/155.2.945
[8] D. Guillamet and J. Vitrià, “Non-negative matrix factorization for face recognition,” in Proceedings of the 5th Catalonian Conference on AI: Topics in Artificial Intelligence (CCIA ’02). Berlin, Heidelberg: Springer-Verlag, 2002, pp. 336–344.
[9] D. Lee and H. Seung, “Algorithms for non-negative matrix factorization,” in Advances in Neural Information Processing Systems 13 (NIPS 2000), 2001.
[10] W. Xu, X. Liu, and Y. Gong, “Document clustering based on non-negative matrix factorization,” in Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’03). New York, NY, USA: Association for Computing Machinery, 2003, pp. 267–273. [Online]. Available: https://doi.org/10.1145/860435.860485
[11] J. Mejia, S. Mankad, and A. Gopal, “Service quality using text mining: Measurement and consequences,” Manufacturing & Service Operations Management, vol. 23, no. 6, pp. 1354–1372, Nov. 2021. [Online]. Available: https://doi.org/10.1287/msom.2020.0883
[12] G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science, vol. 313, no. 5786, pp. 504–507, 2006. [Online]. Available: https://www.science.org/doi/abs/10.1126/science.1127647
[13] F. Ye, C. Chen, and Z. Zheng, “Deep autoencoder-like nonnegative matrix factorization for community detection,” in Proceedings of the 27th ACM International Conference on Information and Knowledge Management (CIKM ’18). New York, NY, USA: Association for Computing Machinery, 2018, pp. 1393–1402. [Online]. Available: https://doi.org/10.1145/3269206.3271697
[14] J. Wang and X.-L. Zhang, “Deep NMF topic modeling,” Neurocomputing, vol. 515, pp. 157–173, Jan. 2023. [Online]. Available: https://doi.org/10.1016/j.neucom.2022.10.002
[15] P. S. Dhillon and S. Aral, “Modeling dynamic user interests: A neural matrix factorization approach,” Marketing Science, vol. 40, no. 6, pp. 1059–1080, Nov. 2021. [Online]. Available: https://doi.org/10.1287/mksc.2021.1293
[16] T. G. Kang, K. Kwon, J. W. Shin, and N. S. Kim, “NMF-based target source separation using deep neural network,” IEEE Signal Processing Letters, vol. 22, no. 2, pp. 229–233, 2015.
[17] J. Le Roux, J. R. Hershey, and F. Weninger, “Deep NMF for speech separation,” in 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2015, pp. 66–70.
[18] C. Févotte and J. Idier, “Algorithms for nonnegative matrix factorization with the β-divergence,” Neural Computation, vol. 23, no. 9, pp. 2421–2456, 2011.
[19] A. Cichocki and A.-H. Phan, “Fast local algorithms for large scale nonnegative matrix and tensor factorizations,” IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. E92.A, no. 3, pp. 708–721, 2009.
[20] J. Pennington, R. Socher, and C. Manning, “GloVe: Global vectors for word representation,” in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Doha, Qatar: Association for Computational Linguistics, Oct. 2014, pp. 1532–1543. [Online]. Available: https://aclanthology.org/D14-1162
[21] X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” in Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS), ser. Proceedings of Machine Learning Research, vol. 15. Fort Lauderdale, FL, USA: PMLR, Apr. 2011, pp. 315–323. [Online]. Available: https://proceedings.mlr.press/v15/glorot11a.html
[22] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
[23] W. Webber, A. Moffat, and J. Zobel, “A similarity measure for indefinite rankings,” ACM Transactions on Information Systems, vol. 28, no. 4, Nov. 2010. [Online]. Available: https://doi.org/10.1145/1852102.1852106
[24] M. Kohjima, T. Matsubayashi, and H. Sawada, “Non-negative multiple matrix factorization for consumer behavior pattern extraction by considering attribution information,” Transactions of the Japanese Society for Artificial Intelligence, vol. 30, no. 6, pp. 745–754, 2015.
Description Master's thesis, 國立政治大學 (National Chengchi University), 金融學系, student ID 110352019
Source http://thesis.lib.nccu.edu.tw/record/#G0110352019
Type thesis
Identifier G0110352019
URI http://nccur.lib.nccu.edu.tw/handle/140.119/146861
Table of Contents
     Chapter 1  Introduction
     Chapter 2  Literature Review
       2.1  Non-negative Matrix Factorization
       2.2  Neural Networks and Non-negative Matrix Factorization
     Chapter 3  Methodology
       3.1  Non-negative Matrix Factorization
       3.2  Dynamic Deep Non-negative Matrix Factorization
     Chapter 4  Simulation Experiments
       4.1  Experimental Design
       4.2  Experimental Results
     Chapter 5  Empirical Analysis
       5.1  Category-Level Analysis
       5.2  Item-Level Analysis
     Chapter 6  Conclusion
     References