Academic Output - Degree Thesis

Title 非對稱性加權之排名學習機制
Learning to rank with asymmetric discordant penalty
Author 王榮聖 (Wang, Rung Sheng)
Advisor 廖文宏 (Liao, Wen Hung)
Keywords Ranking
Learning to rank
Data mining
Asymmetric weighting
Information retrieval
RealRankBoost
Date 2008
Upload time 19-Sep-2009 12:11:09 (UTC+8)
Abstract  With advances in information technology, information has become easier to access through more channels than ever before, yet the sheer volume of data often makes it difficult to find what we really need, which is why the ranking of data has become so important. Ranking is the central issue of many applications, such as document retrieval, expert finding, and anti-spam filtering. The objective of this thesis is to learn a good ranking function from the ranking orders that people assign to a given topic, and to apply it in data mining so that a computer can automatically score data and produce a correct ordering, thereby aiding data search.

This thesis is divided into two parts. First, we design a new learning-to-rank algorithm, RealRankBoost, by improving an existing method (RankBoost), and we evaluate it on the LETOR benchmark to compare it with other approaches and to demonstrate the performance gain. Second, we propose the concept of asymmetric weighting: because items at different rank positions are examined with different probabilities, incorrect placement of top-ranked items should incur a higher penalty. Incorporating this asymmetric weighting lets the ranking results better reflect human judgment and allows the system to better mimic human ranking strategy.
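The following minimal Python sketch illustrates the asymmetric discordant penalty described above: every pair of items that the learned ranking function orders incorrectly contributes to the penalty, and the contribution is larger when the misordered pair involves an item near the top of the ground-truth ranking. The reciprocal position weight 1/rank and the helper name asymmetric_discordant_penalty are assumptions made for this sketch, not the weighting function actually defined in the thesis.

def asymmetric_discordant_penalty(true_order, predicted_scores):
    # true_order:       list of item ids, best first (ground-truth ranking)
    # predicted_scores: dict mapping item id -> score from the learned ranking function
    # Assumed position weight: 1 / (true rank of the better item in the pair).
    penalty = 0.0
    n = len(true_order)
    for i in range(n):             # item at true rank i + 1 (the better item of the pair)
        for j in range(i + 1, n):  # item at a worse true rank
            better, worse = true_order[i], true_order[j]
            # A pair is discordant when the learned function scores it the wrong way around.
            if predicted_scores[better] <= predicted_scores[worse]:
                penalty += 1.0 / (i + 1)
    return penalty

# Example: the items at true ranks 2 and 3 are swapped by the predicted scores.
truth = ["d1", "d2", "d3", "d4"]
scores = {"d1": 0.9, "d2": 0.2, "d3": 0.8, "d4": 0.1}
print(asymmetric_discordant_penalty(truth, scores))  # 0.5

Here the swap of the items at true ranks 2 and 3 costs 1/2, whereas swapping the items at ranks 3 and 4 would cost only 1/3, so errors near the top of the list dominate the penalty, mirroring the intuition that top positions are examined more often.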
References [1] Thorsten Joachims, “Optimizing Search Engines using Clickthrough Data,” Proceedings of the ACM Conference on Knowledge Discovery and Data Mining (KDD), pp. 133-142, 2002.
[2] Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, Hang Li, “Learning to Rank: From Pairwise Approach to Listwise Approach,” Proceedings of the 24th International Conference on Machine Learning (ICML), pp. 129-136, 2007.
[3] Ping Li, Christopher J.C. Burges, Qiang Wu, “McRank: Learning to Rank Using Multiple Classification and Gradient Boosting,” Advances in Neural Information Processing Systems (NIPS), pp. 897-904, 2007.
[4] Yoav Freund, Raj Iyer, Robert E. Schapire, Yoram Singer, “An Efficient Boosting Algorithm for Combining Preferences,” International Conference on Machine Learning (ICML), 1998.
[5] Ming-Feng Tsai, Tie-Yan Liu, Tao Qin, Hsin-Hsi Chen, Wei-Ying Ma, “FRank: A Ranking Method with Fidelity Loss,” Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 383-390, 2007.
[6] Jun Xu, Hang Li, “AdaRank: A Boosting Algorithm for Information Retrieval,” Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 391-398, 2007.
[7] Christopher J.C. Burges, “A Tutorial on Support Vector Machines for Pattern Recognition,” Data Mining and Knowledge Discovery, Vol. 2, pp. 121-167, 1998.
[8] Robert E. Schapire, Yoram Singer, “Improved Boosting Algorithms Using Confidence-rated Predictions,” Machine Learning, 37(3), pp. 297-336, 1999.
[9] Maurice George Kendall, “Rank Correlation Methods,” Hafner, 1955.
[10] Frank Rosenblatt, “The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain,” Psychological Review, Vol. 65, No. 6, pp. 386-408, 1958.
[11] Marvin Minsky, Seymour Papert, “Perceptrons,” in Neurocomputing: Foundations of Research, MIT Press, pp. 157-169, 1988.
[12] Jooyoung Park, Irwin W. Sandberg, “Universal Approximation Using Radial-Basis-Function Networks,” Neural Computation, MIT Press, pp. 246-257, 1991.
[13] J.A. Leonard, M.A. Kramer, L.H. Ungar, “Using Radial Basis Functions to Approximate a Function and Its Error Bounds,” IEEE Transactions on Neural Networks, pp. 207-224, 1991.
[14] Tie-Yan Liu, Tao Qin, Jun Xu, Wenying Xiong, Hang Li, “LETOR: Benchmark Dataset for Research on Learning to Rank for Information Retrieval,” LR4IR 2007, in conjunction with SIGIR 2007, 2007.
[15] William Hersh, Chris Buckley, T. J. Leone, David Hickam, “OHSUMED: An Interactive Retrieval Evaluation and New Large Test Collection for Research,” Proceedings of the 17th Annual ACM SIGIR Conference, pp. 192-201, 1994.
[16] Nick Craswell, David Hawking, “Overview of the TREC 2004 Web Track,” Proceedings of the 13th Text REtrieval Conference (TREC 2004), 2004.
[17] Stephen E. Robertson, “Overview of the Okapi Projects,” Journal of Documentation, Vol. 53, No. 1, pp. 3-7, 1997.
Description Master's thesis
National Chengchi University
Department of Computer Science
96753004
97
Source http://thesis.lib.nccu.edu.tw/record/#G0096753004
Type thesis
Identifier G0096753004
URI https://nccur.lib.nccu.edu.tw/handle/140.119/37115
Table of contents Chapter 1 Research Background
Chapter 2 Introduction and Related Work
Chapter 3 Pairwise Learning to Rank
 3.1 Ranking Measurement
 3.1.1 Pairwise Ranking Measures
 3.1.2 Score-based Ranking Measures
 3.2 Learning to Rank
 3.2.1 AdaBoost
 3.2.2 RankBoost
 3.2.3 Improved RankBoost (RealRankBoost)
 3.2.4 Handling Tied Ranks
Chapter 4 Ranking Learners
 4.1 Ranking Functions
 4.1.1 Linear Weight Vector
 4.1.2 Polynomial Weight Function
 4.1.3 Radial Basis Function
 4.2 Existing Ranking Methods
Chapter 5 Experiments and Performance Analysis
 5.1 The LETOR Benchmark
 5.1.1 Dataset
 5.1.2 Feature Extraction
 5.1.3 Data Partitioning
 5.1.4 Performance Evaluation
 5.2 Experimental System Architecture
 5.3 Experimental Results and Comparison
Chapter 6 Asymmetrically Weighted Ranking
 6.1 Asymmetric Weighting
 6.2 Asymmetric Weighting for Pairwise Ranking
 6.3 Weighting Functions
 6.4 Experimental Results
 6.5 Negative Weighting
 6.6 Negative Weighting Experiments
 6.6.1 Dataset
 6.6.2 Experimental Design
 6.6.3 Experimental Results
Chapter 7 Conclusion and Future Work
References
Appendix