Title: Integrating LinUCB with K-Means to Analyze Personal CTR: Yahoo News Recommendation as an Example
Author: Chang, Hsiang (張翔)
Contributors: Hu, Yuh-Jong (胡毓忠), advisor; Chang, Hsiang (張翔)
Keywords: Top-N recommendation; contextual multi-armed bandit problem; linear upper confidence bound (LinUCB); K-means
Date: 2017
Date uploaded: 10-Aug-2017 09:58:35 (UTC+8)
Abstract: The linear upper confidence bound (LinUCB) algorithm, a Top-N recommendation method for the contextual multi-armed bandit problem, estimates a user's preference for an item from a linear combination of user features and the feedback received on past recommendations; each item also keeps a design matrix that accumulates the feature vectors of the users it has been matched with. When user samples are scarce, or an item has never been recommended to a given type of user, the algorithm cannot reliably estimate that user's preference, and failed recommendations become more frequent. Analyzing a personalized click-through-rate (CTR) case, this study revises the LinUCB update rule: when a user shows no interest in a recommended item, the K-means clusters computed over item features are used to record the same user features on the other items in that cluster, so that similar user-item pairs are not recommended again. The revised policy is assessed offline with an off-policy evaluation under a policy-optimization criterion, using the Yahoo Front Page Today Module dataset as the source of the expected CTR.
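
As a rough illustration of the approach described in the abstract, the sketch below combines standard disjoint LinUCB arm selection with a K-means clustering of item features and shares the user's feature vector with the other items in the chosen arm's cluster whenever the recommendation is not clicked. This is a minimal sketch based only on the abstract: the class name, the `alpha` exploration weight, and the exact sharing rule are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np
from sklearn.cluster import KMeans


class LinUCBWithClusterSharing:
    """Disjoint LinUCB whose failed recommendations are shared with the
    other items in the same K-means cluster (illustrative sketch only)."""

    def __init__(self, item_features, n_clusters=5, alpha=0.3):
        self.n_arms = item_features.shape[0]
        self.alpha = alpha
        # Cluster the items once, on their own feature vectors.
        self.cluster_of = KMeans(n_clusters=n_clusters, n_init=10,
                                 random_state=0).fit_predict(item_features)
        self.A = None  # per-arm d x d design matrices (lazily sized from user dim)
        self.b = None  # per-arm d-dimensional reward-weighted feature sums

    def select(self, x):
        """Return the arm with the highest upper confidence bound for user features x."""
        d = x.shape[0]
        if self.A is None:
            self.A = np.stack([np.eye(d) for _ in range(self.n_arms)])
            self.b = np.zeros((self.n_arms, d))
        scores = np.empty(self.n_arms)
        for a in range(self.n_arms):
            A_inv = np.linalg.inv(self.A[a])
            theta = A_inv @ self.b[a]                          # ridge-regression estimate
            scores[a] = theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        """Standard LinUCB update, plus cluster-wide feature sharing on failure."""
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x
        if reward == 0:
            # Assumed reading of the abstract: when the user does not click,
            # record the same user features on the other items in this arm's
            # cluster so that similar user-item pairs are discounted later.
            same_cluster = np.where(self.cluster_of == self.cluster_of[arm])[0]
            for other in same_cluster:
                if other != arm:
                    self.A[other] += np.outer(x, x)  # no reward contribution to b
```

A policy like this would typically be scored offline with the replay method of Li et al. [36]: stream the logged events, keep only the rounds where the policy's choice matches the arm that was actually shown, and average the clicks over those rounds to estimate CTR. The loop below is a hypothetical usage under that scheme; `item_features` and `replay_log` stand in for data derived from the Yahoo Front Page Today Module logs.

```python
# Hypothetical replay-style offline evaluation (in the spirit of [36]).
# replay_log is assumed to yield (x_user, shown_arm, click) triples.
bandit = LinUCBWithClusterSharing(item_features, n_clusters=5, alpha=0.3)
clicks, matches = 0, 0
for x_user, shown_arm, click in replay_log:
    if bandit.select(x_user) == shown_arm:       # keep only matching rounds
        bandit.update(shown_arm, x_user, click)
        clicks += click
        matches += 1
estimated_ctr = clicks / max(matches, 1)
```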
References
[1] F. Ricci, L. Rokach, and B. Shapira, "Introduction to recommender systems handbook," in Recommender Systems Handbook, Springer, 2011, pp. 1-35.
[2] G. Adomavicius and A. Tuzhilin, "Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions," IEEE Transactions on Knowledge and Data Engineering, vol. 17, pp. 734-749, 2005.
[3] M. Deshpande and G. Karypis, "Item-based top-n recommendation algorithms," ACM Transactions on Information Systems (TOIS), vol. 22, pp. 143-177, 2004.
[4] G. Linden, B. Smith, and J. York, "Amazon.com recommendations: Item-to-item collaborative filtering," IEEE Internet Computing, vol. 7, pp. 76-80, 2003.
[5] B. Sarwar, G. Karypis, J. Konstan, and J. Riedl, "Item-based collaborative filtering recommendation algorithms," in Proceedings of the 10th international conference on World Wide Web, 2001, pp. 285-295.
[6] J. B. Schafer, J. Konstan, and J. Riedl, "Recommender systems in e-commerce," in Proceedings of the 1st ACM conference on Electronic commerce, 1999, pp. 158-166.
[7] C. D. Manning, P. Raghavan, and H. Schütze, Introduction to Information Retrieval, vol. 1. Cambridge: Cambridge University Press, 2008.
[8] L. Terveen, W. Hill, B. Amento, D. McDonald, and J. Creter, "PHOAKS: A system for sharing recommendations," Communications of the ACM, vol. 40, pp. 59-62, 1997.
[9] B. Mobasher, H. Dai, T. Luo, and M. Nakagawa, "Discovery and evaluation of aggregate usage profiles for web personalization," Data mining and knowledge discovery, vol. 6, pp. 61-82, 2002.
[10] Y. G. Jung, M. S. Kang, and J. Heo, "Clustering performance comparison using K-means and expectation maximization algorithms," Biotechnology & Biotechnological Equipment, vol. 28, pp. S44-S48, 2014.
[11] J. Vermorel and M. Mohri, "Multi-armed bandit algorithms and empirical evaluation," in ECML, 2005, pp. 437-448.
[12] A. Mahajan and D. Teneketzis, "Multi-armed bandit problems," Foundations and Applications of Sensor Management, pp. 121-151, 2008.
[13] M. Coggan, "Exploration and exploitation in reinforcement learning," Research supervised by Prof. Doina Precup, CRA-W DMP Project at McGill University, 2004.
[14] L. Li, W. Chu, J. Langford, and R. E. Schapire, "A contextual-bandit approach to personalized news article recommendation," in Proceedings of the 19th international conference on World wide web, 2010, pp. 661-670.
[15] T. Lu, D. Pál, and M. Pál, "Contextual multi-armed bandits," in Proceedings of the Thirteenth international conference on Artificial Intelligence and Statistics, 2010, pp. 485-492.
[16] M. Fukushima, T. Takayama, and M. Takayama, "How Developers Explore and Exploit Instant Innovation from Experiment to Implementing New Product Development," in IFIP International Conference on Product Lifecycle Management, 2014, pp. 507-517.
[17] J. C. Gittins, "Bandit processes and dynamic allocation indices," Journal of the Royal Statistical Society. Series B (Methodological), pp. 148-177, 1979.
[18] X. Wang, Y. Wang, D. Hsu, and Y. Wang, "Exploration in interactive personalized music recommendation: a reinforcement learning approach," ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), vol. 11, p. 7, 2014.
[19] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, vol. 1. Cambridge, MA: MIT Press, 1998.
[20] M. Tokic, "Adaptive ε-greedy exploration in reinforcement learning based on value differences," in Annual Conference on Artificial Intelligence, 2010, pp. 203-210.
[21] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire, "The nonstochastic multiarmed bandit problem," SIAM Journal on Computing, vol. 32, pp. 48-77, 2002.
[22] J. Langford and T. Zhang, "The epoch-greedy algorithm for multi-armed bandits with side information," in Advances in neural information processing systems, 2008, pp. 817-824.
[23] O. Chapelle and L. Li, "An empirical evaluation of Thompson sampling," in Advances in neural information processing systems, 2011, pp. 2249-2257.
[24] S. L. Scott, "A modern Bayesian look at the multi‐armed bandit," Applied Stochastic Models in Business and Industry, vol. 26, pp. 639-658, 2010.
[25] W. R. Thompson, "On the likelihood that one unknown probability exceeds another in view of the evidence of two samples," Biometrika, vol. 25, pp. 285-294, 1933.
[26] P. Auer, "Using confidence bounds for exploitation-exploration trade-offs," Journal of Machine Learning Research, vol. 3, pp. 397-422, 2002.
[27] P. Auer, N. Cesa-Bianchi, and P. Fischer, "Finite-time analysis of the multiarmed bandit problem," Machine learning, vol. 47, pp. 235-256, 2002.
[28] N. Hansen and A. Ostermeier, "Adapting arbitrary normal mutation distributions in evolution strategies: The covariance matrix adaptation," in Proceedings of the IEEE International Conference on Evolutionary Computation, 1996, pp. 312-317.
[29] R. Allesiardo, R. Féraud, and D. Bouneffouf, "A neural networks committee for the contextual bandit problem," in International Conference on Neural Information Processing, 2014, pp. 374-381.
[30] R. Féraud, R. Allesiardo, T. Urvoy, and F. Clérot, "Random Forest for the Contextual Bandit Problem," in Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, 2016, pp. 93-101.
[31] M. Valko, N. Korda, R. Munos, I. Flaounas, and N. Cristianini, "Finite-time analysis of kernelised contextual bandits," arXiv preprint arXiv:1309.6869, 2013.
[32] L. Tang, Y. Jiang, L. Li, and T. Li, "Ensemble contextual bandits for personalized recommendation," in Proceedings of the 8th ACM Conference on Recommender Systems, 2014, pp. 73-80.
[33] K.-H. Huang and H.-T. Lin, "Linear Upper Confidence Bound Algorithm for Contextual Bandit Problem with Piled Rewards," in Pacific-Asia Conference on Knowledge Discovery and Data Mining, 2016, pp. 143-155.
[34] A. Swaminathan, A. Krishnamurthy, A. Agarwal, M. Dudík, J. Langford, D. Jose, et al., "Off-policy evaluation for slate recommendation," arXiv preprint arXiv:1605.04812, 2016.
[35] P. Thomas and E. Brunskill, "Data-efficient off-policy policy evaluation for reinforcement learning," in International Conference on Machine Learning, 2016, pp. 2139-2148.
[36] L. Li, W. Chu, J. Langford, and X. Wang, "Unbiased offline evaluation of contextual-bandit-based news article recommendation algorithms," in Proceedings of the fourth ACM international conference on Web search and data mining, 2011, pp. 297-306.
[37] S. Li, A. Karatzoglou, and C. Gentile, "Collaborative filtering bandits," in Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval, 2016, pp. 539-548.
Description: Master's thesis, National Chengchi University, Department of Computer Science, student ID 103753038
Source: http://thesis.lib.nccu.edu.tw/record/#G0103753038
Type: thesis
URI: http://nccur.lib.nccu.edu.tw/handle/140.119/111785
Table of Contents
Chapter 1  Introduction  9
  1.1  Research Motivation  9
  1.2  Research Objectives  9
  1.3  Thesis Organization  10
Chapter 2  Background  11
  2.1  Personalized Recommendation  11
  2.2  The K-Means Algorithm  12
Chapter 3  Related Work  13
  3.1  The Multi-Armed Bandit Problem  13
    3.1.1  Upper Confidence Bound  15
  3.2  The Contextual Multi-Armed Bandit Problem  15
    3.2.1  Linear Upper Confidence Bound  16
Chapter 4  Research Framework  21
  4.1  Dataset Overview  21
  4.2  Methodology  23
    4.2.1  System Environment  23
    4.2.2  Algorithm Design  23
Chapter 5  Results  29
  5.1  Evaluation Results  29
Chapter 6  Future Work  32
References  33
Format: application/pdf (1175272 bytes)