Title: 結合隱私保護功能之GRU預測模型框架 (A Study on Privacy-preserving GRU Inference Framework)
Author: Hsiao, Shou-Ching (蕭守晴)
Advisor: Tso, Ray-Lin (左瑞麟)
Keywords: Privacy-preserving; Gated Recurrent Unit Model; Secret Sharing; Universal Composability Framework
Date: 2020
Uploaded: 2-Sep-2020 12:15:48 (UTC+8)

Abstract (translated from Chinese): Gated Recurrent Unit (GRU) models are widely applied in areas such as sentiment analysis, speech recognition, and malware analysis. At the serving stage, model owners often adopt Machine-learning-as-a-service (MLaaS) as the system architecture, since it lets enterprises deploy models at low cost while delivering high-performance machine learning services. Uploading data to the cloud, however, raises privacy concerns about the model, the users' input data, and the prediction results: whether the cloud provider suffers an external intrusion or theft by an insider, privacy may be leaked. This research targets inference scenarios that involve private data, such as text, network packets, and medical electrocardiogram records, and designs a privacy-preserving inference framework around the GRU model, which can learn temporal dependencies. To balance accuracy and performance, secret sharing is adopted as the main privacy-protection mechanism, and a secret-sharing-based GRU architecture and algorithms are designed. Because all cloud-side computation operates on secret shares, no single party can recover the original model parameters, input data, or prediction results from its shares. Security is proven under the Universal Composability framework in the semi-honest adversary model, which ensures the protocols can be safely applied to GRU models of different architectures. In addition, the correctness of the framework and algorithms is verified by an implementation, and the experimental results are reported in terms of time and accuracy.
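As a rough illustration of the secret-sharing idea described above, the following is a minimal sketch of 2-out-of-2 additive secret sharing between two non-colluding servers. It is not the thesis's implementation; the modulus, the function names, and the two-server setting are assumptions made for the example.

```python
# Minimal sketch of 2-out-of-2 additive secret sharing over the ring Z_{2^64}.
# Illustrative only: the thesis's actual modulus, encoding, and protocols may differ.
import secrets

MOD = 2 ** 64  # assumed ring size

def share(x: int) -> tuple[int, int]:
    """Split x into two shares that each look uniformly random on their own."""
    s0 = secrets.randbelow(MOD)
    s1 = (x - s0) % MOD
    return s0, s1          # s0 goes to server 0, s1 to server 1

def reconstruct(s0: int, s1: int) -> int:
    """Only a party holding both shares can recover the secret."""
    return (s0 + s1) % MOD

# Addition of two shared values is purely local: each server adds its own shares.
a0, a1 = share(5)
b0, b1 = share(7)
c0, c1 = (a0 + b0) % MOD, (a1 + b1) % MOD
assert reconstruct(c0, c1) == 12   # the result shares reconstruct to a + b
```

Since each individual share is uniformly distributed, a single cloud server (or anyone who steals its data) learns nothing about the model parameters, inputs, or outputs from its shares alone.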
Abstract (English): The Gated Recurrent Unit (GRU) is applied broadly, for example to sentiment analysis, speech recognition, malware analysis, and other sequential-data processing. For low-cost deployment and efficient machine learning services, a growing number of model owners choose to serve trained GRU models through Machine-learning-as-a-service (MLaaS). However, privacy has become a significant concern for both model owners and prediction clients, covering model weights, input data, and output results, and leakage may be caused by either external intrusion or insider attacks. To address these issues, this research designs a privacy-preserving GRU inference framework aimed at privacy-sensitive scenarios such as prediction on textual data, network packets, and heart-rate data. Balancing accuracy and efficiency, the basic operations and gating mechanisms of the GRU are built on additive secret sharing. The resulting protocols satisfy the privacy and correctness requirements under the Universal Composability framework against semi-honest adversaries. Additionally, the framework and protocols are realized in a proof-of-concept implementation, and the experimental results are reported in terms of time consumption and inference accuracy.
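For reference, the computations that such a framework must evaluate on secret shares are those of a standard GRU cell. The equations below follow one common formulation (the thesis's own notation, and the sign convention for the update gate, may differ):

```latex
\begin{aligned}
z_t &= \sigma\!\left(W_{z} x_t + U_{z} h_{t-1} + b_z\right) && \text{(update gate)}\\
r_t &= \sigma\!\left(W_{r} x_t + U_{r} h_{t-1} + b_r\right) && \text{(reset gate)}\\
\tilde{h}_t &= \tanh\!\left(W_{h} x_t + U_{h}\,(r_t \odot h_{t-1}) + b_h\right) && \text{(current memory)}\\
h_t &= z_t \odot h_{t-1} + (1 - z_t) \odot \tilde{h}_t && \text{(activation of the current cell)}
\end{aligned}
```

Because both the model weights and the inputs are secret-shared, the matrix products, the Hadamard products ($\odot$), and the sigmoid and tanh activations all have to be computed jointly on shares, which is what the framework's basic and gating protocols are designed to do.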
Degree: Master's
Institution: National Chengchi University (國立政治大學)
Department: Department of Computer Science (資訊科學系)
Student ID: 107753010
Identifier: G0107753010
Source: http://thesis.lib.nccu.edu.tw/record/#G0107753010
URI: http://nccur.lib.nccu.edu.tw/handle/140.119/131633
Type: thesis
Format: application/pdf, 4,667,550 bytes

Table of Contents:
1 Introduction
  1.1 Motivations and Purposes
  1.2 Contributions
2 Definitions and Preliminaries
  2.1 Additive Secret Sharing (ASS)
  2.2 Gated Recurrent Unit (GRU) Model
  2.3 Universal Composability (UC) Framework
3 Technical Literature
  3.1 Privacy-preserving Techniques
  3.2 Privacy-preserving Deep Neural Network
4 Privacy-preserving GRU Inference Framework
  4.1 Architecture
  4.2 Security Model
    4.2.1 Non-colluding Cloud Servers
    4.2.2 Prediction Clients
    4.2.3 Outsiders
    4.2.4 Network Transmission
  4.3 Basic Protocols
    4.3.1 Hadamard Product
    4.3.2 Division
    4.3.3 Share Re-generation
    4.3.4 Sigmoid Activation Function
    4.3.5 Tanh Activation Function
  4.4 Gating Protocols
    4.4.1 Update Gate and Reset Gate
    4.4.2 Current Memory
    4.4.3 Activation of Current Cell
  4.5 Putting It All Together
5 Security Analysis
  5.1 Security of Basic Protocols
  5.2 Security of Gating Protocols
  5.3 Security of GRU Inference
6 Experiments and Results
  6.1 Dataset
  6.2 Implementation
  6.3 Results
7 Discussions and Future Works
  7.1 Discussions on Accuracy
  7.2 Discussions on Time Consumption
  7.3 Potential Collusion Problems
  7.4 Extended Future Works
8 Conclusion
Reference
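Sections 4.3.1–4.3.5 above cover the operations on shares that are not purely local, such as the Hadamard product and the activation functions. Purely as an illustration of how two additively shared values can be multiplied, the sketch below uses Beaver's multiplication-triple technique; the thesis's own Hadamard-product protocol may be constructed differently, and here the triple is generated in the clear, whereas a real deployment would obtain it from an offline phase or a helper party.

```python
# Beaver-triple multiplication of two additively shared values (illustration only).
import secrets

MOD = 2 ** 64

def share(x):
    s0 = secrets.randbelow(MOD)
    return s0, (x - s0) % MOD

def reconstruct(s0, s1):
    return (s0 + s1) % MOD

def beaver_multiply(x_sh, y_sh):
    # Offline phase (idealized here): a random triple (a, b, c = a*b), secret-shared.
    a, b = secrets.randbelow(MOD), secrets.randbelow(MOD)
    a_sh, b_sh, c_sh = share(a), share(b), share((a * b) % MOD)
    # Online phase: the servers open d = x - a and e = y - b, which reveal nothing
    # about x and y because a and b are uniformly random masks.
    d = reconstruct((x_sh[0] - a_sh[0]) % MOD, (x_sh[1] - a_sh[1]) % MOD)
    e = reconstruct((y_sh[0] - b_sh[0]) % MOD, (y_sh[1] - b_sh[1]) % MOD)
    # Each server then computes its share of x*y locally; only server 0 adds d*e.
    z0 = (c_sh[0] + d * b_sh[0] + e * a_sh[0] + d * e) % MOD
    z1 = (c_sh[1] + d * b_sh[1] + e * a_sh[1]) % MOD
    return z0, z1

x_sh, y_sh = share(6), share(9)
assert reconstruct(*beaver_multiply(x_sh, y_sh)) == 54   # shares of x*y
```

Applied coordinate-wise to vectors, the same idea gives an elementwise (Hadamard) product on shares; non-linear functions such as sigmoid and tanh are commonly handled by polynomial or piecewise approximations built from such multiplications.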
DOI: 10.6814/NCCU202001474