Title 類神經網路與混沌現象 (The Neural Network and Chaos)
Author Wu, Hui-Chuan (吳慧娟)
Advisor Ray Tsaih, R. (蔡瑞煌)
Keywords Neural Network systems
Chaos
Boundedness
Determinism
Aperiodicity
Sensitive dependence on initial conditions
Date 2000
Uploaded 31-Mar-2016 15:43:24 (UTC+8)
Abstract This study designs experiments to test whether a neural network system that has learned chaotic data is itself a chaotic system. The test checks for the four characteristics of chaotic data: boundedness, determinism, aperiodicity, and sensitive dependence on initial conditions. The trained networks are then used to predict the chaotic model they learned, in order to compare the short-term and long-term predictive performance of trained networks that behave as chaotic systems against that of trained networks that do not.
     We also prove theoretically that a neural network system trained on chaotic data cannot rebuild the chaotic model it learned. Nevertheless, the trained network can sometimes mimic a chaotic system. Using such a network to predict data with chaotic behavior then amounts to using one chaotic system to predict another, and by the property of sensitive dependence on initial conditions such predictions should produce large errors. Our experiments show, however, that whether or not the trained network is a chaotic system has no significant influence on its predictive performance.
     We hope this thesis contributes to the use of neural network systems for prediction in financial markets and other domains that exhibit chaotic phenomena.
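The chaotic model the thesis studies is the logistic growth equation (Section 2.4). As a minimal sketch in Python (ours, not code from the thesis), the following iterates the logistic map x_{k+1} = r*x_k*(1 - x_k) in its chaotic regime (r = 4) and illustrates two of the four characteristics above: boundedness and sensitive dependence on initial conditions.

```python
def logistic_series(x0, n, r=4.0):
    """Iterate the logistic map x_{k+1} = r * x_k * (1 - x_k) for n steps."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two orbits whose initial conditions differ by only 1e-8.
a = logistic_series(0.2, 50)
b = logistic_series(0.2 + 1e-8, 50)

# Boundedness: for r = 4 and x0 in (0, 1), every iterate stays in [0, 1]
# (tolerance only for floating-point rounding).
bounded = all(-1e-9 <= x <= 1.0 + 1e-9 for x in a)

# Sensitive dependence on initial conditions: the perturbation is amplified
# roughly exponentially (Lyapunov exponent ln 2 for r = 4), so the two
# orbits decorrelate within a few dozen iterations.
max_gap = max(abs(x - y) for x, y in zip(a, b))
```

A 1e-8 perturbation of the initial condition grows to order one well within the 50 plotted steps, which is why long-horizon prediction of such data is hard even with a good model; the remaining two tests (determinism, aperiodicity) require the techniques the thesis reviews in its Section 2.3.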
References English:
     1. Barndorff-Nielsen, O.E., Jensen, J.L., and Kendall, W.S. (1993), Networks and Chaos - Statistical and Probabilistic Aspects, 1st edition, Chapman & Hall.
     2. Barron, A.R. (1991), "Complexity regularization with application to artificial neural networks," Nonparametric Functional Estimation and Related Topics (G. Roussas, ed.), pp. 561-576.
     3. Barron, A.R. (1992), "Neural net approximation," Proceedings of the Seventh Yale Workshop on Adaptive and Learning Systems, pp. 69-72.
     4. Battiti, R. (1992), "First- and second-order methods for learning: between steepest descent and Newton's method," Neural Networks, Vol. 5, pp. 507-529.
     5. Cohen, J., Kesten, H., and Newman, C. (1986), Random Matrices and Their Applications, Contemporary Mathematics, Vol. 50, American Mathematical Society.
     6. Devaney, R.L. (1989), An Introduction to Chaotic Dynamical Systems, 2nd edition, Addison-Wesley.
     7. Ott, E., Sauer, T., and Yorke, J.A. (1994), Coping with Chaos: Analysis of Chaotic Data and the Exploitation of Chaotic Systems, 1st edition, Wiley.
     8. Fischer, P. and Smith, W.R. (1985), Chaos, Fractals, and Dynamics, Marcel Dekker.
     9. Feigenbaum, M.J. (1980), "Universal behavior in nonlinear systems," Los Alamos Science, Vol. 1, pp. 4-27.
     10. Gleick, J. (1987), Chaos, 1st edition, Viking.
     11. Jacobs, R.A. (1988), "Increased rates of convergence through learning rate adaptation," Neural Networks, Vol. 1, pp. 295-307.
     12. Kaplan, D. and Glass, L. (1995), Understanding Nonlinear Dynamics, Springer-Verlag New York.
     13. Matsuba, I., Masui, H., and Hebishima, S. (1992), "Prediction of chaotic time-series data using optimized neural networks," Proceedings of IJCNN, Beijing.
     14. McCaffrey, D.F., Ellner, S., Gallant, A.R., and Nychka, D.W. (1992), "Estimating the Lyapunov exponent of a chaotic system with nonparametric regression," Journal of the American Statistical Association, Vol. 87, No. 419, pp. 682-695.
     15. Oseledec, V. (1968), "A multiplicative ergodic theorem: Lyapunov characteristic numbers for dynamical systems," Trans. Moscow Math. Soc., Vol. 19, pp. 197-231.
     16. Rosenblatt, F. (1958), "The perceptron: a probabilistic model for information storage and organization in the brain," Psychological Review, Vol. 65, pp. 386-408.
     17. Rumelhart, D.E., Hinton, G.E., and Williams, R.J. (1986), "Learning internal representations by error propagation," Parallel Distributed Processing, Vol. 1, MIT Press, Cambridge, MA, pp. 318-362.
     18. Sarkar, D. (1995), "Methods to speed up error back-propagation learning algorithm," ACM Computing Surveys, Vol. 27, No. 4, pp. 519-542.
     19. Stewart, I. (1989), Does God Play Dice? The Mathematics of Chaos, 1st edition, Blackwell.
     20. Takechi, H., Murakami, K., and Izumida, M. (1995), "Back propagation learning algorithm with different learning coefficients for each layer," Systems and Computers in Japan, Vol. 26, No. 7, pp. 47-56.
     21. Tsoukas, H. (1998), "Chaos, complexity and organisation theory," Organization, Vol. 5, pp. 291-313.
     22. Walters, P. (1982), An Introduction to Ergodic Theory, Springer-Verlag.
     23. Wang, S. (1995), "The unpredictability of standard back propagation neural networks in classification applications," Management Science, Vol. 41, No. 3, pp. 555-559.
     24. Wong, F.S. (1991), "Time series forecasting using back-propagation neural networks," Neurocomputing, Vol. 2, pp. 147-159.
     Chinese:
     楊朝成 (1996), 「渾沌理論與類神經網路之結合運用於股市走勢預測」 [Applying the Combination of Chaos Theory and Neural Networks to Stock Market Trend Prediction], Science and Technology Information Center, National Science Council, Executive Yuan.
Description Master's thesis
National Chengchi University
Department of Management Information Systems
87356001
Source http://thesis.lib.nccu.edu.tw/record/#A2002002104
Type thesis
dc.identifier (Other Identifiers) A2002002104
dc.identifier.uri (URI) http://nccur.lib.nccu.edu.tw/handle/140.119/83304
dc.description.tableofcontents Cover Page
     Certification
     Acknowledgements
     Abstract
     Table of Contents
     List of Tables
     List of Figures
     Chapter 1 Introduction
     Chapter 2 Literature Review
     2.1 Neural Networks
     2.1.1 An Introduction to Neural Networks
     2.1.2 Back Propagation Neural Networks
     2.2 Chaos
     2.2.1 Course of Development in Chaos
     2.2.2 Definition of Chaos
     2.3 The Techniques of Investigating the Characteristics in Chaotic Data
     2.3.1 Boundedness
     2.3.2 Aperiodicity
     2.3.3 Determinism
     2.3.4 Sensitive Dependence on Initial Conditions
     2.4 A Chaotic Model: The Logistic Growth Equation
     Chapter 3 Experiment Design and Methodology
     3.1 Experiment Design
     3.2 The Verifying Method of a Chaotic System
     3.2.1 Stationarity
     3.2.2 Aperiodicity
     3.2.3 Determinism
     3.2.4 Sensitive Dependence on Initial Conditions
     Chapter 4 Empirical Results and Analysis
     4.1 Why the Back Propagation Neural Networks System Cannot Rebuild the Learned Chaotic Model
     4.2 The Result of Verifying the Neural Networks System after Learning the Chaotic Data
     4.2.1 The Result of Verifying Stationarity
     4.2.2 The Figures to Observe Aperiodicity
     4.2.3 The Result of Verifying Determinism
     4.2.4 The Results of Verifying Sensitive Dependence on Initial Conditions
     4.2.5 Summary of the Verifying Results
     4.3 Comparison in Predicting Results
     Chapter 5 Summary and Future Work
     5.1 Discussions from the Experiments and Predictions
     5.2 Future Work
     References
     Chinese References
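The networks involved are back-propagation networks (Section 2.1.2). The following is a self-contained sketch of the experimental setup, with hidden-layer size, learning rate, and epoch count chosen by us for illustration rather than taken from the thesis: a one-hidden-layer back-propagation network is trained on one-step-ahead pairs (x_k, x_{k+1}) drawn from the logistic map.

```python
import math
import random

def logistic_series(x0, n, r=4.0):
    """Chaotic training data: the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class OneHiddenLayerBP:
    """1-input, H-hidden (sigmoid), 1-linear-output network trained with
    stochastic gradient descent on squared error (plain back-propagation)."""

    def __init__(self, hidden=8, seed=0):
        rng = random.Random(seed)
        self.w1 = [rng.uniform(-1.0, 1.0) for _ in range(hidden)]
        self.b1 = [0.0] * hidden
        self.w2 = [rng.uniform(-1.0, 1.0) for _ in range(hidden)]
        self.b2 = 0.0

    def forward(self, x):
        h = [sigmoid(w * x + b) for w, b in zip(self.w1, self.b1)]
        y = sum(w * hj for w, hj in zip(self.w2, h)) + self.b2
        return h, y

    def predict(self, x):
        return self.forward(x)[1]

    def train(self, pairs, lr=0.1, epochs=500):
        for _ in range(epochs):
            for x, t in pairs:
                h, y = self.forward(x)
                e = y - t  # output error
                # Compute all gradients before updating any weight.
                gw2 = [e * hj for hj in h]
                gw1 = [e * w2 * hj * (1.0 - hj) * x for w2, hj in zip(self.w2, h)]
                gb1 = [e * w2 * hj * (1.0 - hj) for w2, hj in zip(self.w2, h)]
                self.w2 = [w - lr * g for w, g in zip(self.w2, gw2)]
                self.b2 -= lr * e
                self.w1 = [w - lr * g for w, g in zip(self.w1, gw1)]
                self.b1 = [b - lr * g for b, g in zip(self.b1, gb1)]

def mse(net, pairs):
    return sum((net.predict(x) - t) ** 2 for x, t in pairs) / len(pairs)

series = logistic_series(0.2, 200)
pairs = list(zip(series[:-1], series[1:]))  # one-step-ahead training pairs

net = OneHiddenLayerBP()
error_before = mse(net, pairs)
net.train(pairs)
error_after = mse(net, pairs)  # training sharply reduces the one-step error
```

Whether the trained network itself behaves as a chaotic system would then be checked by iterating net.predict from a seed value and applying the four characteristic tests to the generated orbit.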