Title 混和型資料之距離矩陣計算
Calculation of Dissimilarity Matrix for Mixed-type Data
Author 姜順貿
Jiang, Shun-Mao
Advisor 周珮婷
Keywords Mixed-type data
Clustering
Distance matrix
Distance learning
Date 2018
Uploaded 3-Jul-2018 17:23:30 (UTC+8)
Abstract Clustering is a common data-mining technique. It relies on information about the distances between observations, so how to define those distances becomes a major challenge. As data collection grows more convenient, datasets are often mixed-type, which raises two problems: how to compute distances between categorical values and how to combine continuous and categorical variables. The proposed method defines the categorical distance by a value's ability to distinguish other related variables; when no related variables exist, only the variable itself is considered. For continuous variables, weights are first derived by discretizing each variable, the original variables are then normalized, and a weighted Euclidean distance is computed; finally, this distance is combined with the categorical distance to obtain the overall dissimilarity.
To validate the method, hierarchical clustering was applied to several real-world datasets and the results were compared with methods from the literature. The proposed method proved comparable to existing methods, achieved the best overall average performance, and can be applied to all types of data. In addition, heat-map visualizations of the resulting dissimilarity matrices show that, for most datasets, the original number of classes is preserved, or an alternative, closely related interpretation can be found from the data's variables.
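The pipeline the abstract describes (normalize the continuous variables, apply a weighted Euclidean distance, combine it with a categorical distance, then run hierarchical clustering on the resulting matrix) can be sketched as below. This is a minimal illustration, not the thesis's actual algorithm: the thesis derives categorical distances from each value's ability to distinguish related variables and derives continuous weights via discretization, neither of which is specified in this abstract, so the sketch substitutes simple matching for the categorical part and uniform weights for the continuous part.

```python
# Minimal sketch: combined dissimilarity matrix for mixed-type data,
# fed into SciPy's hierarchical clustering. Simple matching and uniform
# weights are placeholders for the thesis's learned distances/weights.
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

def mixed_dissimilarity(X_num, X_cat, num_weights=None):
    """Weighted Euclidean (continuous) + simple matching (categorical)."""
    # Min-max normalize each continuous variable to [0, 1].
    rng = X_num.max(axis=0) - X_num.min(axis=0)
    rng[rng == 0] = 1.0  # guard against constant columns
    Z = (X_num - X_num.min(axis=0)) / rng
    if num_weights is None:
        num_weights = np.ones(X_num.shape[1])  # uniform placeholder
    # Pairwise weighted Euclidean distances on the normalized part.
    diff = Z[:, None, :] - Z[None, :, :]
    d_num = np.sqrt((num_weights * diff ** 2).sum(axis=2))
    # Proportion of mismatched categories (stand-in for the
    # context-based categorical distance).
    d_cat = (X_cat[:, None, :] != X_cat[None, :, :]).mean(axis=2)
    return d_num + d_cat

# Toy data: 5 observations, 2 continuous and 1 categorical variable.
X_num = np.array([[1.0, 2.0], [1.1, 2.1], [5.0, 6.0], [5.2, 5.9], [1.0, 2.2]])
X_cat = np.array([["a"], ["a"], ["b"], ["b"], ["a"]])
D = mixed_dissimilarity(X_num, X_cat)
# Hierarchical clustering expects the condensed form of the matrix.
Z_link = linkage(squareform(D, checks=False), method="average")
labels = fcluster(Z_link, t=2, criterion="maxclust")
print(labels)
```

With two clearly separated groups in both the continuous and categorical parts, cutting the dendrogram at two clusters recovers the intended grouping; a heat map of `D` (e.g. via `matplotlib.pyplot.imshow`) would show the block structure the abstract refers to.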
References
1. A. Ahmad, L. Dey, "A k-mean clustering algorithm for mixed numeric and categorical data," Data & Knowledge Engineering, vol. 63, November 2007, pp. 503-527.
2. C. Stanfill, D. Waltz, "Toward memory-based reasoning," Commun. ACM, vol. 29, no. 12, 1986, pp. 1213-1228.
3. D. R. Wilson, T. R. Martinez, "Improved heterogeneous distance functions," J. Artif. Intell. Res., vol. 6, 1997, pp. 1-34.
4. D. Ienco, R. G. Pensa, R. Meo, "From context to distance: Learning dissimilarity for categorical data clustering," ACM Transactions on Knowledge Discovery from Data (TKDD), vol. 6, no. 1, March 2012.
5. J. C. Gower, P. Legendre, "Metric and Euclidean properties of dissimilarity coefficients," J. Classification, vol. 3, no. 1, 1986, pp. 5-48.
6. L. Yu, H. Liu, "Feature selection for high-dimensional data: A fast correlation-based filter solution," in Proceedings of ICML 2003, Washington, DC, USA, 2003, pp. 856-863.
7. L. Hubert, P. Arabie, "Comparing partitions," Journal of Classification, vol. 2, no. 1, December 1985, pp. 193-218.
8. M. Ring, F. Otto, M. Becker, T. Niebler, D. Landes, A. Hotho, "ConDist: A context-driven categorical distance measure," Machine Learning and Knowledge Discovery in Databases, 2015, pp. 251-266.
9. S. Boriah, V. Chandola, V. Kumar, "Similarity measures for categorical data: A comparative evaluation," in Proc. SIAM Int. Conference on Data Mining, 2008, pp. 243-254.
10. S. C. Johnson, "Hierarchical clustering schemes," Psychometrika, vol. 32, no. 3, September 1967.
11. V. Batagelj, M. Bren, "Comparing resemblance measures," J. Classification, vol. 12, no. 1, 1995, pp. 73-90.
12. Z. Huang, "Extensions to the k-means algorithm for clustering large data sets with categorical values," Data Mining and Knowledge Discovery, vol. 2, no. 3, 1998, pp. 283-304.
13. Z. Hubálek, "Coefficients of association and similarity, based on binary (presence-absence) data: An evaluation," Biol. Rev., vol. 57, no. 4, 1982, pp. 669-689.
Description Master's thesis
National Chengchi University (國立政治大學)
Department of Statistics (統計學系)
105354016
Source http://thesis.lib.nccu.edu.tw/record/#G0105354016
Type thesis
dc.identifier.uri (URI) http://nccur.lib.nccu.edu.tw/handle/140.119/118219-
dc.description.tableofcontents Oral Defense Committee Approval i
Chinese Abstract ii
English Abstract iii
Table of Contents iv
List of Figures v
List of Tables vi
Chapter 1 Introduction 1
Chapter 2 Literature Review 3
Chapter 3 Methodology 11
Chapter 4 Research Process 16
Chapter 5 Results and Conclusions 22
Section 1 Results 22
Section 2 Conclusions and Suggestions 27
References 29
Appendix 30
dc.format.extent 2456548 bytes-
dc.format.mimetype application/pdf-
dc.identifier.doi (DOI) 10.6814/THE.NCCU.STAT.002.2018.B03-