Publications-Theses

Title: A Mathematical Study of the Rule Extraction of a 3-layered Feed-forward Neural Networks
Author: Lin, Chih-chung (林志忠)
Advisor: Tsaih, Ray (蔡瑞煌)
Keywords: neural networks (類神經網路); rule extraction (法則萃取); inverse function (反函數)
Date: 2004
Uploaded: 18-Sep-2009 14:36:15 (UTC+8)
Abstract: A rule-extraction method for layered feed-forward neural networks is proposed here for identifying the rules embodied in the network. The method, applied to a trained layered feed-forward neural network, is based on inverting the function computed by each layer of the network. The new rule-extraction method back-propagates regions from the output layer to the input layer, and we hope that the method can further help address the predicament of the ANN being a black box.
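As a rough illustration of the layer-by-layer inversion idea described above, the following sketch back-propagates an output interval through a toy network with a single tanh hidden node: it inverts the linear output transformation, then the transfer function, then the affine net transformation. The network shape and all weight values here are hypothetical, chosen only for demonstration; they are not taken from the thesis.

```python
import numpy as np

# Hypothetical 1-input, 1-hidden-node, 1-output network:
#   y = w2 * tanh(w1 * x + b1) + b2
# (all weight values are made up for illustration)
w1, b1 = 2.0, 0.5
w2, b2 = 1.5, -0.2

def forward(x):
    """Feed-forward pass through the toy network."""
    return w2 * np.tanh(w1 * x + b1) + b2

def invert(y):
    """Back-propagate an output value through each layer in reverse."""
    a = (y - b2) / w2      # invert the linear output transformation
    net = np.arctanh(a)    # invert the tanh transfer function
    x = (net - b1) / w1    # invert the affine net transformation
    return x

# Back-propagate the output interval [-0.5, 0.5] to an input region.
# With positive weights the network is monotone, so the preimage of the
# interval is itself an interval bounded by the inverted endpoints.
y_lo, y_hi = -0.5, 0.5
x_lo, x_hi = invert(y_lo), invert(y_hi)
```

With more hidden nodes the preimage of an output interval is no longer an interval but a region bounded by the inverted constraints, which is why the thesis works with polyhedra rather than scalar intervals.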
References:
[1] Andrews, R., Diederich, J., and Tickle, A. (1995). “A survey and critique of techniques for extracting rules from trained artificial neural networks.” Knowledge-Based Systems, Vol. 8, Issue 6, pp. 373-389.
[2] Zhou, R. R., Chen, S. F., and Chen, Z. Q. (2000). “A statistics based approach for extracting priority rules from trained neural networks.” In: Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks, Como, Italy, Vol. 3, pp. 401-406.
[3] Thrun, S. B., and Linden, A. (1990). “Inversion in time.” In: Proceedings of the EURASIP Workshop on Neural Networks, Sesimbra, Portugal.
[4] Ke, W. C. (2003). “The Rule Extraction from Multi-layer Feed-forward Neural Networks.” Taiwan: National Chengchi University.
[5] Tsaih, R., and Lin, C. C. (2004). “The Layered Feed-Forward Neural Networks and Its Rule Extraction.” In: Proceedings of ISNN 2004 International Symposium on Neural Networks, Dalian, China, pp. 377-382.
[6] Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). “Learning internal representations by error propagation.” In: Parallel Distributed Processing, Vol. 1, Cambridge, MA: MIT Press, pp. 318-362.
[7] Tsaih, R. (1998). “An Explanation of Reasoning Neural Networks.” Mathematical and Computer Modelling, Vol. 28, pp. 37-44.
[8] Maire, F. (1999). “Rule-extraction by backpropagation of polyhedra.” Neural Networks, Vol. 12, pp. 717-725.
Description: Master's thesis
National Chengchi University
Graduate Institute of Management Information Systems (資訊管理研究所)
Student ID: 92356014
Academic year: 93
Source: http://thesis.lib.nccu.edu.tw/record/#G0923560141
Type: thesis
Identifier: G0923560141
URI: https://nccur.lib.nccu.edu.tw/handle/140.119/35271
Table of contents:
1. Introduction
2. Literature Review
2.1 A Mathematical Study of the Layered Feed-forward Neural Networks
2.2 The Rule Extraction from Multi-layer Feed-forward Neural Networks
2.3 Discussion
3. The Definition and Representation of Polyhedra
4. The Definition and Representation of Feed-forward Neural Networks
5. The rule-extraction of a 3-layer feed-forward approximation network
5.1 The back-propagation with respect to the linear output transformation sub-process
5.2 The back-propagation with respect to the approximation function sub-process
5.3 The back-propagation with respect to the affine net transformation sub-process
5.4 An illustration of the rule-extraction
6. The rule-extraction of a 3-layer feed-forward neural network
6.1 The back-propagation with respect to the linear output transformation sub-process
6.2 The back-propagation with respect to the transfer function sub-process
6.3 The back-propagation with respect to the affine net transformation sub-process
6.4 An illustration of the rule-extraction
7. The Illustration
7.1 Definition
7.2 The Network I
7.2.1 The rule-extraction of the Network I
7.2.2 The rule-extraction of the approximation Network I
7.3 The Network II
7.3.1 The rule-extraction of the Network II
7.3.2 The rule-extraction of the approximation Network II
7.4 The Network III
7.4.1 The rule-extraction of the Network III
7.4.2 The rule-extraction of the approximation Network III
8. The Discussion of Error Ratio and The Future Work
8.1 Definition
8.2 The Definition of Evaluation Mechanism
8.3 The Discussion of Network I
8.3.1 The Discussion of ER1 in Network I
8.3.2 The Discussion of ER2 in Network I
8.4 The Discussion of Network II
8.4.1 The Discussion of ER1 in Network II
8.4.2 The Discussion of ER2 in Network II
8.5 The Discussion of Network III
8.5.1 The Discussion of ER1 in Network III
8.5.2 The Discussion of ER2 in Network III
8.6 The Future Work

Figure 1: The feed-forward neural network with one hidden layer and one output node
Figure 2: The feed-forward neural network with one hidden layer and one output node
Figure 3: The framework of the feed-forward neural network
Figure 6: The observation of f^-1(-0.5) in Network I
Figure 7: The observation of Xa(-0.5) in the approximation Network I
Figure 9: The observation of Network II
Figure 10: The observation of the approximation Network II
Figure 11: The observation of f^-1(y) in Network III
Figure 12: The observation of Xa( ) in Network III
Figure 13: The observation of Xa(-0.5) and f^-1(-0.5) in Network I
Figure 14: The difference of y and ya in Network I
Figure 15: The observation of ER1 in Network I
Figure 16: The difference of and in Network I
Figure 17: The observation of ER2 in Network I
Figure 18: The observation of Xa(-1.29) and f^-1(-1.29) in Network II
Figure 19: The difference of y and ya in Network II
Figure 20: The observation of ER1 in Network II
Figure 21: The difference of y and ya in Network II
Figure 22: The observation of ER1 in Network II
Figure 23: The difference of y and ya in Network II
Figure 24: The observation of ER1 in Network II
Figure 25: The difference of and in Network II
Figure 26: The observation of ER2 in Network II
Figure 27: The difference of and in Network II
Figure 28: The observation of ER2 in Network II
Figure 29: The difference of and in Network II
Figure 30: The observation of ER2 in Network II
Figure 31: The difference of and in Network II
Figure 32: The observation of ER2 in Network II
Figure 33: The observation of Xa(y) and f^-1(y) in Network III
Figure 34: The difference of y and ya in Network III
Figure 35: The observation of ER1 in Network III
Figure 36: The difference of and in Network III
Figure 37: The observation of ER2 in Network III
Figure 38: The difference of and in Network III
Figure 39: The observation of ER2 in Network III