題名 深度學習在不平衡數據集之研究
Survey on Deep Learning with Imbalanced Data Sets
作者 蔡承孝
Tsai, Cheng-Hsiao
貢獻者 蔡炎龍
蔡承孝
Tsai, Cheng-Hsiao
關鍵詞 深度學習
卷積神經網路
不平衡數據集
異常偵測
圖像分類
Deep Learning
CNN
Imbalanced Data Sets
Anomaly Detection
Image Classification
日期 2019
上傳時間 3-十月-2019 17:17:29 (UTC+8)
摘要 本文旨在回顧利用深度學習處理不平衡數據集和異常偵測的方法。我們從 MNIST 生成兩個高度不平衡數據集,不平衡比率高達 2500,並應用在多元分類任務跟二元分類任務上:在二元分類任務中第 0 類為少數類;而在多元分類任務中少數類為第 0、1、4、6、7 類。我們利用卷積神經網路來訓練我們的模型。在異常偵測方面,我們用預先訓練好的手寫辨識 CNN 模型來判斷其他 18 張貓狗的圖片是否為手寫辨識圖片。
由於數據的高度不平衡,原始分類模型的表現不盡理想。因此,在不同的分類任務上,我們分別利用 6 個和 7 個不同的方法來調整我們的模型。我們發現新的損失函數 Focal Loss 在多元分類任務表現最好,而在二元分類任務中隨機過採樣的表現最佳,但是成本敏感學習的方法並不適用於我們所生成的不平衡數據集。我們利用信心估計讓分類器成功判斷所有貓狗圖片皆不是手寫辨識圖片。
This paper is a survey on deep learning with imbalanced data sets and anomaly detection. We create two imbalanced data sets from MNIST: one for a multi-class classification task with minority classes 0, 1, 4, 6, and 7, and one for a binary classification task with minority class 0. Our data sets are highly imbalanced, with imbalance ratio ρ = 2500, and we use a convolutional neural network (CNN) for training. For anomaly detection, we use a pretrained CNN handwriting classifier to decide whether 18 cat and dog pictures are handwriting pictures.
Because the data sets are imbalanced, the baseline models perform poorly on the minority classes. Hence, we apply 6 and 7 different methods, respectively, to adjust our models. We find that the focal loss function performs best on the multi-class classification task and random over-sampling (ROS) performs best on the binary classification task, but cost-sensitive learning is not suitable for our imbalanced data sets. Using confidence estimation, our classifier successfully judges that none of the cat and dog pictures are handwriting pictures.
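The abstract's ingredients (a highly imbalanced set with ratio ρ = 2500, random over-sampling, and the focal loss of Lin et al.) can be sketched as follows. This is an illustrative sketch only, not the thesis code: the sample counts, function names, and label convention (1 = majority, 0 = minority) are assumptions, and class labels stand in for MNIST images.

```python
import numpy as np

def make_imbalanced(n_major, rho, rng):
    """Toy binary label set with imbalance ratio rho = majority/minority."""
    n_minor = max(n_major // rho, 1)
    y = np.concatenate([np.ones(n_major, dtype=int),
                        np.zeros(n_minor, dtype=int)])
    rng.shuffle(y)
    return y

def random_over_sample(y, rng):
    """ROS: duplicate random minority samples until both classes match."""
    minor_idx = np.flatnonzero(y == 0)
    major_idx = np.flatnonzero(y == 1)
    extra = rng.choice(minor_idx, size=len(major_idx) - len(minor_idx),
                       replace=True)
    return np.concatenate([y, y[extra]])

def focal_loss(p_t, gamma=2.0):
    """Mean focal loss FL(p_t) = -(1 - p_t)^gamma * log(p_t);
    gamma = 0 reduces to ordinary cross-entropy."""
    p_t = np.clip(p_t, 1e-12, 1.0)
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))

rng = np.random.default_rng(0)
y = make_imbalanced(n_major=25000, rho=2500, rng=rng)   # 25000 vs. 10 samples
balanced = random_over_sample(y, rng)                   # 25000 vs. 25000
```

The focal loss's `(1 - p_t)^gamma` factor down-weights well-classified (high-confidence) examples, so the abundant easy majority samples contribute little to the gradient; this is one way to read why it helps on the multi-class task here.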
參考文獻 [1] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
[2] Mateusz Buda, Atsuto Maki, and Maciej A Mazurowski. A systematic study of the class imbalance problem in convolutional neural networks. Neural Networks, 106:249–259, 2018.
[3] MHB Carvalho, ML Brizot, LM Lopes, CH Chiba, S Miyadahira, and M Zugaib. Detection of fetal structural abnormalities at the 11–14 week ultrasound scan. Prenatal Diagnosis, 22(1):1–4, 2002.
[4] Varun Chandola, Arindam Banerjee, and Vipin Kumar. Anomaly detection: A survey. ACM Computing Surveys (CSUR), 41(3):15, 2009.
[5] Nitesh V Chawla, Kevin W Bowyer, Lawrence O Hall, and W Philip Kegelmeyer. SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16:321–357, 2002.
[6] Edward Choi, Andy Schuetz, Walter F Stewart, and Jimeng Sun. Using recurrent neural network models for early detection of heart failure onset. Journal of the American Medical Informatics Association, 24(2):361–370, 2016.
[7] David A Cieslak, Nitesh V Chawla, and Aaron Striegel. Combating imbalance in network intrusion datasets. In GrC, pages 732–737, 2006.
[8] Ronan Collobert and Jason Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, pages 160–167. ACM, 2008.
[9] MJ Desforges, PJ Jacob, and JE Cooper. Applications of probability density estimation to the detection of abnormal conditions in engineering. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, 212(8):687–703, 1998.
[10] Chris Drummond, Robert C Holte, et al. C4.5, class imbalance, and cost sensitivity: Why under-sampling beats over-sampling. In Workshop on Learning from Imbalanced Datasets II, volume 11, pages 1–8. Citeseer, 2003.
[11] Charles Elkan. The foundations of cost-sensitive learning. In International Joint Conference on Artificial Intelligence, volume 17, pages 973–978. Lawrence Erlbaum Associates Ltd, 2001.
[12] Guo Haixiang, Li Yijing, Jennifer Shang, Gu Mingyun, Huang Yuanyue, and Gong Bing. Learning from class-imbalanced data: Review of methods and applications. Expert Systems with Applications, 73:220–239, 2017.
[13] Haibo He and Edwardo A Garcia. Learning from imbalanced data. IEEE Transactions on Knowledge & Data Engineering, (9):1263–1284, 2008.
[14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[15] JB Heaton, Nicholas G Polson, and Jan Hendrik Witte. Deep learning in finance. arXiv preprint arXiv:1602.06561, 2016.
[16] David Hsu, Gildardo Sánchez-Ante, and Zheng Sun. Hybrid PRM sampling with a cost sensitive adaptive strategy. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, pages 3874–3880. IEEE, 2005.
[17] Anil K Jain, Jianchang Mao, and KM Mohiuddin. Artificial neural networks: A tutorial. Computer, (3):31–44, 1996.
[18] Justin M Johnson and Taghi M Khoshgoftaar. Survey on deep learning with class imbalance. Journal of Big Data, 6(1):27, 2019.
[19] Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1725–1732, 2014.
[20] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[21] Miroslav Kubat, Robert C Holte, and Stan Matwin. Machine learning for the detection of oil spills in satellite radar images. Machine Learning, 30(2–3):195–215, 1998.
[22] Matjaz Kukar, Igor Kononenko, et al. Cost-sensitive learning with neural networks. In ECAI, pages 445–449, 1998.
[23] Yoji Kukita, Junji Uchida, Shigeyuki Oba, Kazumi Nishino, Toru Kumagai, Kazuya Taniguchi, Takako Okuyama, Fumio Imamura, and Kikuya Kato. Quantitative identification of mutant alleles derived from lung cancer in plasma cell-free DNA via anomaly detection using deep sequencing data. PLoS ONE, 8(11):e81468, 2013.
[24] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436, 2015.
[25] Hansang Lee, Minseok Park, and Junmo Kim. Plankton classification on imbalanced large scale database via convolutional neural networks with transfer learning. In 2016 IEEE International Conference on Image Processing (ICIP), pages 3713–3717. IEEE, 2016.
[26] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, pages 2980–2988, 2017.
[27] CX Ling and VS Sheng. Cost-sensitive learning and the class imbalance problem. Encyclopedia of Machine Learning, Springer, 24, 2011.
[28] Amogh Mahapatra, Nisheeth Srivastava, and Jaideep Srivastava. Contextual anomaly detection in text data. Algorithms, 5(4):469–489, 2012.
[29] Bomin Mao, Zubair Md Fadlullah, Fengxiao Tang, Nei Kato, Osamu Akashi, Takeru Inoue, and Kimihiro Mizutani. Routing or computing? The paradigm shift towards intelligent computer network packet transmission based on deep learning. IEEE Transactions on Computers, 66(11):1946–1960, 2017.
[30] David Masko and Paulina Hensman. The impact of imbalanced training data for convolutional neural networks, 2015.
[31] P Rahmawati and Prawito Prajitno. Online vibration monitoring of a water pump machine to detect its malfunction components based on artificial neural network. In Journal of Physics: Conference Series, volume 1011, page 012045. IOP Publishing, 2018.
[32] R Bharat Rao, Sriram Krishnan, and Radu Stefan Niculescu. Data mining for improved cardiac care. ACM SIGKDD Explorations Newsletter, 8(1):3–10, 2006.
[33] Richard G Stafford, Jacob Beutel, et al. Application of neural networks as an aid in medical diagnosis and general anomaly detection, July 19, 1994. US Patent 5,331,550.
[34] David WJ Stein, Scott G Beaven, Lawrence E Hoff, Edwin M Winter, Alan P Schaum, and Alan D Stocker. Anomaly detection from hyperspectral imagery. IEEE Signal Processing Magazine, 19(1):58–69, 2002.
[35] Daniel Svozil, Vladimir Kvasnicka, and Jiri Pospichal. Introduction to multi-layer feed-forward neural networks. Chemometrics and Intelligent Laboratory Systems, 39(1):43–62, 1997.
[36] Shoujin Wang, Wei Liu, Jia Wu, Longbing Cao, Qinxue Meng, and Paul J Kennedy. Training deep neural networks on imbalanced data sets. In 2016 International Joint Conference on Neural Networks (IJCNN), pages 4368–4374. IEEE, 2016.
[37] Wei Wei, Jinjiu Li, Longbing Cao, Yuming Ou, and Jiahang Chen. Effective detection of sophisticated online banking fraud on extremely imbalanced data. World Wide Web, 16(4):449–475, 2013.
[38] Rui Yan, Yiping Song, and Hua Wu. Learning to respond with deep neural networks for retrieval-based human-computer conversation system. In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 55–64. ACM, 2016.
[39] Ke Zhang, Jianwu Xu, Martin Renqiang Min, Guofei Jiang, Konstantinos Pelechrinis, and Hui Zhang. Automated IT system failure prediction: A deep learning approach. In 2016 IEEE International Conference on Big Data (Big Data), pages 1291–1300. IEEE, 2016.
[40] Zhi-Hua Zhou and Xu-Ying Liu. Training cost-sensitive neural networks with methods addressing the class imbalance problem. IEEE Transactions on Knowledge & Data Engineering, (1):63–77, 2006.
描述 碩士
國立政治大學
應用數學系
105751009
資料來源 http://thesis.lib.nccu.edu.tw/record/#G0105751009
資料類型 thesis
dc.contributor.advisor 蔡炎龍zh_TW
dc.contributor.author (作者) 蔡承孝zh_TW
dc.contributor.author (作者) Tsai, Cheng-Hsiaoen_US
dc.creator (作者) 蔡承孝zh_TW
dc.creator (作者) Tsai, Cheng-Hsiaoen_US
dc.date (日期) 2019en_US
dc.date.accessioned 3-十月-2019 17:17:29 (UTC+8)-
dc.date.available 3-十月-2019 17:17:29 (UTC+8)-
dc.date.issued (上傳時間) 3-十月-2019 17:17:29 (UTC+8)-
dc.identifier (其他 識別碼) G0105751009en_US
dc.identifier.uri (URI) http://nccur.lib.nccu.edu.tw/handle/140.119/126578-
dc.description (描述) 碩士zh_TW
dc.description (描述) 國立政治大學zh_TW
dc.description (描述) 應用數學系zh_TW
dc.description (描述) 105751009zh_TW
dc.description.tableofcontents 1. Introduction 1
2. Deep Learning 3
2.1 Neurons and Neural Networks 4
2.2 Activation Function 7
2.3 Loss Function 9
2.4 Gradient Descent Method 10
3. Convolutional Neural Network (CNN) 11
3.1 Convolutional Layer 12
3.2 Max Pooling Layer 12
4. Abnormal Condition and Imbalanced Data Set 14
4.1 Abnormal Condition 14
4.2 Imbalanced Data Set 15
5. Anomaly Detection 17
5.1 Confidence Estimation 17
5.2 Gaussian Distribution 18
5.3 Model for Confidence Estimation 20
6. Methods for Imbalanced Data Problem 23
6.1 Data-level Methods 23
6.1.1 Random Over-sampling (ROS) 23
6.1.2 Synthetic Minority Over-sampling Technique (SMOTE) 24
6.1.3 Random Under-sampling (RUS) 25
6.2 Algorithm-level Methods 26
6.2.1 Mean False Error (MFE) 26
6.2.2 Mean Squared False Error (MSFE) 27
6.2.3 Focal Loss 28
6.2.4 Cost Sensitive Learning 30
7. Experiment for Multi-classification Task 32
7.1 Baseline Model 33
7.2 Random Over-sampling Model 35
7.3 Synthetic Minority Over-sampling Technique Model 36
7.4 Random Under-sampling Model 37
7.5 Mean False Error Model 38
7.6 Focal Loss Model 39
7.7 Cost Sensitive Learning Model 42
7.8 Result for Multi-classification Task 43
8. Experiment for Binary Classification Task 45
8.1 Baseline Model 45
8.2 Random Over-sampling Model 46
8.3 Synthetic Minority Over-sampling Technique Model 47
8.4 Random Under-sampling Model 48
8.5 Mean False Error Model 48
8.6 Mean Squared False Error Model 49
8.7 Focal Loss Model 50
8.8 Cost Sensitive Learning Model 52
8.9 Result for Binary Classification Task 53
9. Conclusion 55
9.1 Contribution 55
9.2 Future Work 55
Appendix A Python Code 56
A.1 Baseline Model 56
A.2 Random Over-sampling Model 68
A.3 Synthetic Minority Over-sampling Technique Model 81
A.4 Random Under-sampling Model 100
A.5 Mean False Error Model 113
A.6 Focal Loss Model 125
A.7 Cost Sensitive Learning Model 138
A.8 Mean Squared False Error Model 145
A.9 Anomaly Detection Model 154
Bibliography 164
zh_TW
dc.format.extent 3698886 bytes-
dc.format.mimetype application/pdf-
dc.source.uri (資料來源) http://thesis.lib.nccu.edu.tw/record/#G0105751009en_US
dc.subject (關鍵詞) 深度學習zh_TW
dc.subject (關鍵詞) 卷積神經網路zh_TW
dc.subject (關鍵詞) 不平衡數據集zh_TW
dc.subject (關鍵詞) 異常偵測zh_TW
dc.subject (關鍵詞) 圖像分類zh_TW
dc.subject (關鍵詞) Deep Learningen_US
dc.subject (關鍵詞) CNNen_US
dc.subject (關鍵詞) Imbalanced Data Setsen_US
dc.subject (關鍵詞) Anomaly Detectionen_US
dc.subject (關鍵詞) Image Classificationen_US
dc.title (題名) 深度學習在不平衡數據集之研究zh_TW
dc.title (題名) Survey on Deep Learning with Imbalanced Data Setsen_US
dc.type (資料類型) thesisen_US
dc.identifier.doi (DOI) 10.6814/NCCU201901175en_US