Academic Output - Theses

Title: Categorical Exploratory Data Analysis - Feature Selection for Average Scores of NBA Players (類別資料探索 - 影響NBA球員分數的變數選取)
Author: Chao, Li-Teng (趙立騰)
Advisors: Chou, Pei-Ting (周珮婷); Chang, Yu-Wei (張育瑋)
Keywords: NBA; Conditional Entropy (條件熵); Mutual Information (互信息); Feature Selection (特徵選取); Categorical Data Analysis (類別資料分析)
Date: 2023
Uploaded: 6-Jul-2023 17:05:27 (UTC+8)
Abstract: Conditional entropy is a central concept in information theory, used to quantify the uncertainty of one variable given the value of another random variable. This study applies conditional entropy, and the drops in conditional entropy obtained by conditioning on additional variables, to conduct a categorical data analysis of NBA player data, aiming to identify the variables that most strongly influence average scores. By combining variables to extract more information from conditional entropy, we deepen the analysis. The key variables identified are usg_pct (usage rate) and reb (rebounds). The analysis then focuses on eleven prominent contemporary NBA players, with specific attention to Dwight Howard and Carmelo Anthony. For the group of prominent players, the most influential variable is player_name; for Dwight Howard, the critical variables are ts_pct (true shooting percentage), reb, and age; for Carmelo Anthony, the defining variable is ts_pct. Finally, we compare these results with the important variables determined by the Random Forest method.
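The conditional-entropy drop described in the abstract can be illustrated with a short sketch (the thesis itself uses the R infotheo package; the variable names and toy category labels below are hypothetical, not taken from the thesis data):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy in bits of a sequence of categorical labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def conditional_entropy(y, x):
    """H(Y|X) = H(X,Y) - H(X) for paired categorical sequences."""
    return entropy(list(zip(x, y))) - entropy(x)

# Toy data: a discretized average-score label (Y) and a usage-rate bin (X).
pts_cat = ["high", "high", "low", "low", "high", "low"]
usg_cat = ["hi",   "hi",   "lo",  "lo",  "hi",   "hi"]

h_y = entropy(pts_cat)                        # H(Y) = 1.0 bit here
h_yx = conditional_entropy(pts_cat, usg_cat)  # H(Y|X) ~ 0.541 bits
ce_drop = h_y - h_yx                          # drop ~ 0.459 bits
# The drop H(Y) - H(Y|X) equals the mutual information I(X;Y):
# a larger drop means knowing X removes more uncertainty about Y.
```

Ranking candidate features (and combinations of features) by this drop is the selection criterion the abstract refers to: a feature whose conditioning produces a large conditional-entropy drop is judged influential for average scores.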
References:
Breiman, L. (2001). Random forests. Machine Learning, 45:5–32.
Cao, C. (2012). Sports data mining technology used in basketball outcome prediction.
Chen, T.-L., Chou, E. P., and Fushing, H. (2021). Categorical nature of major factor selection via information theoretic measurements. Entropy, 23(12):1684.
Cirtautas, J. (2022). NBA players. https://www.kaggle.com/datasets/justinas/nba-players-data.
Cortes, C. and Vapnik, V. (1995). Support-vector networks. Machine Learning, 20:273–297.
Duda, R. O., Hart, P. E., et al. (1973). Pattern Classification and Scene Analysis, volume 3. Wiley, New York.
Fisher, R. A. (1936). The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7(2):179–188.
Guyon, I. and Elisseeff, A. (2003). An introduction to variable and feature selection. Journal of Machine Learning Research, 3(Mar):1157–1182.
Guyon, I., Weston, J., Barnhill, S., and Vapnik, V. (2002). Gene selection for cancer classification using support vector machines. Machine Learning, 46:389–422.
Hlaváčková-Schindler, K., Paluš, M., Vejmelka, M., and Bhattacharya, J. (2007). Causality detection based on information-theoretic approaches in time series analysis. Physics Reports, 441(1):1–46.
Hu, Q., Yu, D., Liu, J., and Wu, C. (2008). Neighborhood rough set based heterogeneous feature subset selection. Information Sciences, 178(18):3577–3594.
Kira, K. and Rendell, L. A. (1992). A practical approach to feature selection. In Machine Learning Proceedings 1992, pages 249–256. Elsevier.
Kononenko, I. et al. (1994). Estimating attributes: Analysis and extensions of RELIEF. In ECML, volume 94, pages 171–182. Citeseer.
LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning. Nature, 521(7553):436–444.
Liaw, A. and Wiener, M. (2002). Classification and regression by randomForest. R News, 2(3):18–22.
Loeffelholz, B., Bednar, E., and Bauer, K. W. (2009). Predicting NBA games using neural networks. Journal of Quantitative Analysis in Sports, 5(1).
Meyer, P. E. (2022). infotheo: Information-Theoretic Measures. R package version 1.2.0.1.
Oughali, M. S., Bahloul, M., and El Rahman, S. A. (2019). Analysis of NBA players and shot prediction using random forest and XGBoost models. In 2019 International Conference on Computer and Information Sciences (ICCIS), pages 1–5. IEEE.
Pawlak, Z. (1982). Rough sets. International Journal of Computer & Information Sciences, 11:341–356.
Peng, H., Long, F., and Ding, C. (2005). Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(8):1226–1238.
Zou, H. and Hastie, T. (2005). Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301–320.
Description: Master's thesis; National Chengchi University; Department of Statistics; 110354017
Source: http://thesis.lib.nccu.edu.tw/record/#G0110354017
Type: thesis
URI: http://nccur.lib.nccu.edu.tw/handle/140.119/145945
Table of Contents:
Abstract (Chinese)
Abstract (English)
Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Feature Selection
Chapter 2 Literature Review
2.1 Feature Selection
2.2 NBA Dataset
Chapter 3 Methodology
3.1 Conditional Entropy
3.2 Random Forest
Chapter 4 Data Description
4.1 Exploratory Data Analysis
4.2 Data Categorization
Chapter 5 Results
5.1 All Players
5.1.1 CEDA Method
5.1.2 RF Method
5.2 Notable Players
5.2.1 CEDA Method
5.2.2 RF Method
5.3 Specific Players
5.3.1 Dwight Howard
5.3.2 Carmelo Anthony
Chapter 6 Conclusions and Suggestions
6.1 Conclusions
6.2 Future Directions and Suggestions
References
Format: application/pdf (1,048,773 bytes)