dc.contributor.advisor | 余清祥 | zh_TW |
dc.contributor.advisor | Yue, Jack C. | en_US |
dc.contributor.author (Authors) | 高靖翔 | zh_TW |
dc.contributor.author (Authors) | Kao, Ching Hsiang | en_US |
dc.creator (作者) | 高靖翔 | zh_TW |
dc.creator (作者) | Kao, Ching Hsiang | en_US |
dc.date (日期) | 2008 | en_US |
dc.date.accessioned | 18-Sep-2009 20:10:38 (UTC+8) | - |
dc.date.available | 18-Sep-2009 20:10:38 (UTC+8) | - |
dc.date.issued (上傳時間) | 18-Sep-2009 20:10:38 (UTC+8) | - |
dc.identifier (Other Identifiers) | G0096354005 | en_US |
dc.identifier.uri (URI) | https://nccur.lib.nccu.edu.tw/handle/140.119/36927 | - |
dc.description (描述) | 碩士 | zh_TW |
dc.description (描述) | 國立政治大學 | zh_TW |
dc.description (描述) | 統計研究所 | zh_TW |
dc.description (描述) | 96354005 | zh_TW |
dc.description (描述) | 97 | zh_TW |
dc.description.abstract (摘要) | 由於電腦科技的快速發展,網際網路(World Wide Web;簡稱WWW)使得資料共享及搜尋更為便利,其中的網路搜尋引擎(Search Engine)更是尋找資料的利器,最知名的「Google」公司就是藉由搜尋引擎而發跡。網頁搜尋多半依賴各網頁的特徵,像是熵(Entropy)即是最為常用的特徵指標,藉由使用者選取「關鍵字詞」,找出與使用者最相似的網頁,換言之,找出相似指標函數最高的網頁。藉由相似指標函數分類也常見於生物學及生態學,但多半會計算兩個社群間的相似性,再判定兩個社群是否相似,與搜尋引擎只計算單一社群的想法不同。本文的目標在於研究若資料服從多項分配,特別是似幾何分配的多項分配(許多生態社群都滿足這個假設),單一社群的指標、兩個社群間的相似指標,何者會有較佳的分類正確性。本文考慮的指標包括單一社群的熵及Simpson指標、兩社群間的熵及相似指標(Yue and Clayton, 2005)、支持向量機(Support Vector Machine)、邏輯斯迴歸等方法,透過電腦模擬及交叉驗證(cross-validation)比較方法的優劣。本文發現單一社群熵指標之表現,在本文的模擬研究有不錯的分類結果,甚至普遍優於支持向量機,但單一社群熵指標分類法的結果並不穩定,為該分類方法之主要缺點。 | zh_TW |
dc.description.abstract (摘要) | With the rapid development of computer technology, the World Wide Web (WWW) has made it much easier to share and retrieve information, and search engines help us find target information conveniently; the famous Google company was in fact built on its search engine. Web search usually relies on characteristics of the web pages, entropy being one of the most commonly used index features. Given keywords supplied by the user, a search engine finds the pages most similar to the user's demand, that is, the pages with the highest similarity index values. Similarity index functions are also commonly used for classification problems in biology and ecology, but there the pairwise similarity between two communities, rather than an index of a single community, is usually computed to decide whether the two communities are alike, unlike the single-community approach of search engines. This research investigates which gives better classification accuracy, a single-community index or a pairwise index, when the data follow a multinomial distribution, especially one whose cell probabilities decay like a geometric distribution, an assumption satisfied by many ecological communities. The classification methods considered include the single-community entropy and Simpson indices, the pairwise entropy and similarity indices (Yue and Clayton, 2005), the support vector machine, and logistic regression; the methods are compared through computer simulation and cross-validation. We find that the single-community entropy index classifies surprisingly well, sometimes even better than a support vector machine applied to the raw data, but its results are not robust, which is the main drawback of this classification method and the point most in need of future improvement. | en_US
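dc.description.abstract (補充範例) | The single-community and pairwise indices compared in the abstract can be sketched in Python. This is a minimal illustration, not the thesis' own code: the helper names (`geometric_like_proportions`, `entropy`, `simpson`, `yue_clayton_similarity`) are hypothetical, and the similarity formula used here, sum(pq) / (sum(p²) + sum(q²) − sum(pq)), is the commonly cited Yue–Clayton (2005) form; the exact formulation and parameters in the thesis may differ.

```python
import numpy as np

def geometric_like_proportions(n_species, theta=0.3):
    """Cell probabilities decaying like a geometric distribution,
    normalized to sum to one (the 'geometric-like multinomial'
    assumption of the abstract). theta is an illustrative choice."""
    p = theta * (1 - theta) ** np.arange(n_species)
    return p / p.sum()

def entropy(p):
    """Single-community Shannon entropy H = -sum p_i log p_i."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # convention: 0 * log 0 = 0
    return float(-np.sum(p * np.log(p)))

def simpson(p):
    """Single-community Simpson index D = 1 - sum p_i^2."""
    p = np.asarray(p, dtype=float)
    return float(1.0 - np.sum(p ** 2))

def yue_clayton_similarity(p, q):
    """Pairwise similarity of two communities' species proportions,
    assumed Yue-Clayton (2005) form: sum(pq)/(sum p^2 + sum q^2 - sum pq).
    Equals 1 exactly when p == q."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    pq = np.sum(p * q)
    return float(pq / (np.sum(p ** 2) + np.sum(q ** 2) - pq))

# Estimate proportions from multinomial counts, then evaluate the indices.
rng = np.random.default_rng(0)
true_p = geometric_like_proportions(10)
counts = rng.multinomial(1000, true_p)   # one observed community
p_hat = counts / counts.sum()
print(entropy(p_hat), simpson(p_hat), yue_clayton_similarity(p_hat, true_p))
```

A single-community classifier would threshold `entropy(p_hat)` or `simpson(p_hat)` directly, while a pairwise classifier compares `p_hat` against a reference community through the similarity index; the thesis evaluates both families against support vector machines and logistic regression via simulation and cross-validation. | en_US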
dc.description.tableofcontents | 第一章 前言 1
第二章 文獻探討 3
　第一節 搜尋引擎原理 3
　第二節 索引指標 5
　　一、Simpson指標 6
　　二、熵 6
　　三、成對樣本相似指標 7
　　四、成對樣本熵 8
　第三節 機器學習 8
　　一、邏輯斯迴歸分析 9
　　二、支持向量機 9
第三章 使用資料與模擬設定 11
　第一節 似幾何分配資料與分類模擬設定 11
　第二節 Zipf’s Law資料與分類模擬設定 13
　第三節 紅樓夢實證資料之介紹 15
第四章 模擬結果與實證研究 17
　第一節 似幾何分配下之分類模擬結果 17
　　一、分類結果與設定討論 17
　　二、變數選取與子集討論 21
　　三、多物種資料延伸討論 24
　第二節 Zipf’s Law下之分類模擬結果 28
　　一、分類結果與設定討論 28
　　二、變數選取與子集討論 31
　　三、多物種資料延伸討論 35
　第三節 紅樓夢實證之分類結果 39
　　一、分類結果 39
　　二、變數選取與子集討論 40
第五章 結論與建議 42
　第一節 結論 42
　第二節 建議 42
參考文獻 44
附錄 46
　附錄一 似幾何分配其他相關附錄資料 46
　附錄二 Zipf’s Law相關附錄資料 53
　附錄三 紅樓夢相關附錄資料 60 | zh_TW
dc.format.extent | 95625 bytes | - |
dc.format.extent | 120594 bytes | - |
dc.format.extent | 165681 bytes | - |
dc.format.extent | 307664 bytes | - |
dc.format.extent | 139546 bytes | - |
dc.format.extent | 266137 bytes | - |
dc.format.extent | 308082 bytes | - |
dc.format.extent | 2006549 bytes | - |
dc.format.extent | 1975195 bytes | - |
dc.format.extent | 263717 bytes | - |
dc.format.extent | 177603 bytes | - |
dc.format.extent | 169392 bytes | - |
dc.format.extent | 3959073 bytes | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.language.iso | en_US | - |
dc.source.uri (資料來源) | http://thesis.lib.nccu.edu.tw/record/#G0096354005 | en_US |
dc.subject (關鍵詞) | 多項分配 | zh_TW |
dc.subject (關鍵詞) | 熵 | zh_TW |
dc.subject (關鍵詞) | 相似指標 | zh_TW |
dc.subject (關鍵詞) | 電腦模擬 | zh_TW |
dc.subject (關鍵詞) | 支持向量機 | zh_TW |
dc.subject (關鍵詞) | 冪次定理 | zh_TW |
dc.subject (關鍵詞) | Multinomial distribution | en_US |
dc.subject (關鍵詞) | Entropy | en_US |
dc.subject (關鍵詞) | Similarity index | en_US |
dc.subject (關鍵詞) | Computer simulation | en_US |
dc.subject (關鍵詞) | Support vector machine | en_US |
dc.subject (關鍵詞) | Power Law | en_US |
dc.subject (關鍵詞) | Zipf`s Law | en_US |
dc.title (題名) | 多項分配之分類方法比較與實證研究 | zh_TW |
dc.title (題名) | An empirical study of classification on multinomial data | en_US |
dc.type (資料類型) | thesis | en |
dc.relation.reference (參考文獻) | 中文部分 | zh_TW |
dc.relation.reference (參考文獻) | 1. 余清祥 (1998), “統計在紅樓夢的應用”, 政大學報, 76, 303-327. | zh_TW |
dc.relation.reference (參考文獻) | 英文部分 | zh_TW |
dc.relation.reference (參考文獻) | 1. Agresti, A. (2007), An Introduction to Categorical Data Analysis, 2nd ed., John Wiley & Sons, Inc. | zh_TW |
dc.relation.reference (參考文獻) | 2. Boser, B. E., Guyon, I. M., and Vapnik, V. N. (1992), “A training algorithm for optimal margin classifiers”, Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144-152. | zh_TW
dc.relation.reference (參考文獻) | 3. Cortes, C. and Vapnik, V. (1995), “Support-vector networks”, Machine Learning, 20, 273-297. | zh_TW
dc.relation.reference (參考文獻) | 4. Drucker, P. F. (1999), “Beyond the information revolution”, The Atlantic Monthly, 284, 47-59. | zh_TW
dc.relation.reference (參考文獻) | 5. Meyer, D. (2009), “Support Vector Machines: The Interface to libsvm in package e1071”, Technische Universität Wien, Austria. | zh_TW |
dc.relation.reference (參考文獻) | 6. Page, L., Brin, S., Motwani, R. and Winograd, T. (1998), “The PageRank citation ranking: Bringing order to the Web”, Stanford Digital Library Technologies Project. | zh_TW
dc.relation.reference (參考文獻) | 7. Reed, W. J. (2001), “The Pareto, Zipf and other power laws”, Economics Letters, 74(1), 15-19. | zh_TW
dc.relation.reference (參考文獻) | 8. Shannon, C. E. (1948), “A mathematical theory of communication”, Bell System Technical Journal, 27, 379-423, 623-656. | zh_TW |
dc.relation.reference (參考文獻) | 9. Sharma, S. (1996), Applied Multivariate Techniques, John Wiley & Sons, Inc. | zh_TW
dc.relation.reference (參考文獻) | 10. Simpson, E. H. (1949), “Measurement of diversity”, Nature, 163, 688. | zh_TW |
dc.relation.reference (參考文獻) | 11. Yue, J. C. and Clayton, M. K. (2005), “A Similarity Measure Based on Species Proportions”, Communications in Statistics: Theory and Methods, 34, 2123-2131. | zh_TW
dc.relation.reference (參考文獻) | 12. Wikipedia, Web search engine, http://en.wikipedia.org/wiki/Web_search_engine (as of June 15, 2009). | zh_TW |
dc.relation.reference (參考文獻) | 13. Zipf, G. K. (1935), The Psychobiology of Language: An Introduction to Dynamic Philology, Houghton-Mifflin. | zh_TW
dc.relation.reference (參考文獻) | 14. Zipf, G. K. (1949), Human Behavior and the Principle of Least Effort: An Introduction to Human Ecology, Addison-Wesley, Cambridge, MA. | zh_TW