dc.contributor.advisor | 蔡瑞煌 | zh_TW |
dc.contributor.advisor | Tsaih, Ray | en_US |
dc.contributor.author (Authors) | 林志忠 | zh_TW |
dc.contributor.author (Authors) | Lin, Chih-chung | en_US |
dc.creator (Author) | 林志忠 | zh_TW |
dc.creator (Author) | Lin, Chih-chung | en_US |
dc.date (Date) | 2004 | en_US |
dc.date.accessioned | 18-Sep-2009 14:36:15 (UTC+8) | - |
dc.date.available | 18-Sep-2009 14:36:15 (UTC+8) | - |
dc.date.issued (Upload Time) | 18-Sep-2009 14:36:15 (UTC+8) | - |
dc.identifier (Other Identifiers) | G0923560141 | en_US |
dc.identifier.uri (URI) | https://nccur.lib.nccu.edu.tw/handle/140.119/35271 | - |
dc.description (Description) | Master's | zh_TW |
dc.description (Description) | National Chengchi University | zh_TW |
dc.description (Description) | Department of Management Information Systems | zh_TW |
dc.description (Description) | 92356014 | zh_TW |
dc.description (Description) | 93 | zh_TW |
dc.description.abstract (Abstract) | A rule-extraction approach for neural network systems is proposed to obtain the relevant rules from the network. The method presented here is based on the concept of the inverse function. | zh_TW |
dc.description.abstract (Abstract) | A rule-extraction method for layered feed-forward neural networks is proposed here to identify the rules embedded in the network. For a trained layered feed-forward neural network, the method is based on inverting the function computed by each layer of the network: it back-propagates regions from the output layer back to the input layer. We hope the method can be used further to deal with the predicament of the ANN being a black box. | en_US |
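To make the layer-inversion idea in the abstract concrete, here is a short sketch in our own notation (the symbols W, b, w, c and the decomposition below are assumptions for illustration; the thesis may name the sub-processes differently): a 3-layer network is a composition of the affine net transformation, the transfer function, and the linear output transformation, so a target output region can be back-propagated through the pre-image of each sub-process in turn.

% A minimal sketch, assuming m tanh hidden nodes and one linear output node.
% Sub-process names follow the table of contents below; the symbols are ours:
%   affine net transformation:    g_{net}(x) = Wx + b
%   transfer function:            g_{tanh}(n) = \tanh(n)   (componentwise)
%   linear output transformation: g_{out}(h) = w^{\top}h + c
f = g_{out} \circ g_{tanh} \circ g_{net},
\qquad
f^{-1}(R) = g_{net}^{-1}\bigl(g_{tanh}^{-1}\bigl(g_{out}^{-1}(R)\bigr)\bigr).
% Back-propagating an output region R step by step:
g_{out}^{-1}(R) = \{\, h : w^{\top}h + c \in R \,\}   % a slab between two hyperplanes
g_{tanh}^{-1}(S) = \{\, n : \tanh(n) \in S \,\}       % componentwise \operatorname{arctanh}
g_{net}^{-1}(T) = \{\, x : Wx + b \in T \,\}          % a polyhedron whenever T is one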
dc.description.tableofcontents | 1. Introduction
2. Literature Review
2.1 A Mathematical Study of the Layered Feed-forward Neural Networks
2.2 The Rule Extraction from Multi-layer Feed-forward Neural Networks
2.3 Discussion
3. The Definition and Representation of Polyhedra
4. The Definition and Representation of Feed-forward Neural Networks
5. The rule-extraction of a 3-layer feed-forward approximation network
5.1 The back-propagation with respect to the linear output transformation sub-process
5.2 The back-propagation with respect to the approximation function sub-process
5.3 The back-propagation with respect to the affine net transformation sub-process
5.4 An illustration of the rule-extraction
6. The rule-extraction of a 3-layer feed-forward neural network
6.1 The back-propagation with respect to the linear output transformation sub-process
6.2 The back-propagation with respect to the transfer function sub-process
6.3 The back-propagation with respect to the affine net transformation sub-process
6.4 An illustration of the rule-extraction
7. The Illustration
7.1 Definition
7.2 The Network I
7.2.1 The rule-extraction of the Network I
7.2.2 The rule-extraction of the approximation Network I
7.3 The Network II
7.3.1 The rule-extraction of the Network II
7.3.2 The rule-extraction of the approximation Network II
7.4 The Network III
7.4.1 The rule-extraction of the Network III
7.4.2 The rule-extraction of the approximation Network III
8. The Discussion of Error Ratio and The Future Work
8.1 Definition
8.2 The Definition of Evaluation Mechanism
8.3 The Discussion of Network I
8.3.1 The Discussion of ER1 in Network I
8.3.2 The Discussion of ER2 in Network I
8.4 The Discussion of Network II
8.4.1 The Discussion of ER1 in Network II
8.4.2 The Discussion of ER2 in Network II
8.5 The Discussion of Network III
8.5.1 The Discussion of ER1 in Network III
8.5.2 The Discussion of ER2 in Network III
8.6 The Future Work
Figure 1: The feed-forward neural network with one hidden layer and one output node
Figure 2: The feed-forward neural network with one hidden layer and one output node
Figure 3: The framework of feed-forward neural network
Figure 6: The observation of f^-1(-0.5) in Network I
Figure 7: The observation of Xa(-0.5) in the approximation Network I
Figure 9: The observation of Network II
Figure 10: The observation of approximation Network II
Figure 11: The observation of f^-1(y) in Network III
Figure 12: The observation of Xa( ) in Network III
Figure 13: The observation of Xa(-0.5) and f^-1(-0.5) in Network I
Figure 14: The difference of y and ya in Network I
Figure 15: The observation of ER1 in Network I
Figure 16: The difference of and in Network I
Figure 17: The observation of ER2 in Network I
Figure 18: The observation of Xa(-1.29) and f^-1(-1.29) in Network II
Figure 19: The difference of y and ya in Network II
Figure 20: The observation of ER1 in Network II
Figure 21: The difference of y and ya in Network II
Figure 22: The observation of ER1 in Network II
Figure 23: The difference of y and ya in Network II
Figure 24: The observation of ER1 in Network II
Figure 25: The difference of and in Network II
Figure 26: The observation of ER2 in Network II
Figure 27: The difference of and in Network II
Figure 28: The observation of ER2 in Network II
Figure 29: The difference of and in Network II
Figure 30: The observation of ER2 in Network II
Figure 31: The difference of and in Network II
Figure 32: The observation of ER2 in Network II
Figure 33: The observation of Xa(y) and f^-1(y) in Network III
Figure 34: The difference of y and ya in Network III
Figure 35: The observation of ER1 in Network III
Figure 36: The difference of and in Network III
Figure 37: The observation of ER2 in Network III
Figure 38: The difference of and in Network III
Figure 39: The observation of ER2 in Network III | zh_TW |
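As a runnable illustration of the three back-propagation sub-processes listed in Sections 5 and 6 above, the following minimal Python sketch (the toy weights and function names are hypothetical, not taken from the thesis) inverts a network with one input node and one tanh hidden node, where every pre-image is a single point; with several hidden nodes each pre-image becomes a region, such as the polyhedra of Section 3.

import math

# Hypothetical toy weights, for illustration only.
w1, b1 = 2.0, -0.5   # affine net transformation: n = w1*x + b1
w2, b2 = 1.5, 0.1    # linear output transformation: y = w2*h + b2

def forward(x):
    n = w1 * x + b1       # affine net transformation sub-process
    h = math.tanh(n)      # transfer function sub-process
    return w2 * h + b2    # linear output transformation sub-process

def backward(y):
    """Invert the three sub-processes in reverse order for a reachable y."""
    h = (y - b2) / w2     # invert the linear output transformation
    n = math.atanh(h)     # invert the tanh transfer function (requires |h| < 1)
    return (n - b1) / w1  # invert the affine net transformation

x0 = 0.3
y0 = forward(x0)
print(abs(backward(y0) - x0) < 1e-9)  # True: the inversion recovers the input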
dc.format.extent | 43419 bytes | - |
dc.format.extent | 200561 bytes | - |
dc.format.extent | 25327 bytes | - |
dc.format.extent | 38352 bytes | - |
dc.format.extent | 26846 bytes | - |
dc.format.extent | 140054 bytes | - |
dc.format.extent | 21584 bytes | - |
dc.format.extent | 32878 bytes | - |
dc.format.extent | 99774 bytes | - |
dc.format.extent | 119180 bytes | - |
dc.format.extent | 216500 bytes | - |
dc.format.extent | 193505 bytes | - |
dc.format.extent | 31937 bytes | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.language.iso | en_US | - |
dc.source.uri (Source) | http://thesis.lib.nccu.edu.tw/record/#G0923560141 | en_US |
dc.subject (Keywords) | neural networks | zh_TW |
dc.subject (Keywords) | rule extraction | zh_TW |
dc.subject (Keywords) | inverse function | zh_TW |
dc.subject (Keywords) | neural networks | en_US |
dc.subject (Keywords) | rule-extraction | en_US |
dc.subject (Keywords) | inverse function | en_US |
dc.title (Title) | A Mathematical Study of the Rule Extraction of a 3-Layered Feed-forward Neural Network | zh_TW |
dc.type (資料類型) | thesis | en |
dc.relation.reference (References) | [1] Andrews, R., Diederich, J., and Tickle, A. (1995). "A survey and critique of techniques for extracting rules from trained artificial neural networks." Knowledge-Based Systems, Vol. 8, Issue 6, pp. 373-389. | zh_TW |
dc.relation.reference (References) | [2] Zhou, R. R., Chen, S. F., and Chen, Z. Q. (2000). "A statistics based approach for extracting priority rules from trained neural networks." In: Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks, Como, Italy, Vol. 3, pp. 401-406. | zh_TW |
dc.relation.reference (References) | [3] Thrun, S. B., and Linden, A. (1990). "Inversion in time." In: Proceedings of the EURASIP Workshop on Neural Networks, Sesimbra, Portugal. | zh_TW |
dc.relation.reference (References) | [4] Ke, W. C. (2003). "The Rule Extraction from Multi-layer Feed-forward Neural Networks." National Chengchi University, Taiwan. | zh_TW |
dc.relation.reference (References) | [5] Tsaih, R., and Lin, C. C. (2004). "The Layered Feed-Forward Neural Networks and Its Rule Extraction." In: Proceedings of ISNN 2004 International Symposium on Neural Networks, Dalian, China, pp. 377-382. | zh_TW |
dc.relation.reference (References) | [6] Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). "Learning internal representations by error propagation." In: Parallel Distributed Processing, Vol. 1, Cambridge, MA: MIT Press, pp. 318-362. | zh_TW |
dc.relation.reference (References) | [7] Tsaih, R. (1998). "An Explanation of Reasoning Neural Networks." Mathematical and Computer Modelling, Vol. 28, pp. 37-44. | zh_TW |
dc.relation.reference (References) | [8] Maire, F. (1999). "Rule-extraction by backpropagation of polyhedra." Neural Networks, Vol. 12, pp. 717-725. | zh_TW |