Academic Output - Proceedings

Title Compressed Multimodal Hierarchical Extreme Learning Machine for Speech Enhancement
Authors Liao, Wen-Hung (廖文宏)
Hussain, Tassadaq
Tsao, Yu
Wang, Hsin-Min
Wang, Jia-Ching
Siniscalchi, Sabato Marco
Contributor Department of Computer Science (資科系)
Keywords Data compression; Feedforward neural networks; Quantization (signal); Speech enhancement
Date 2019-11
Upload time 4-Jun-2021 14:46:13 (UTC+8)
Abstract Recently, model compression, which aims to facilitate the use of deep models in real-world applications, has attracted considerable attention. Several model compression techniques have been proposed to reduce computational costs without significantly degrading the achievable performance. In this paper, we propose a multimodal framework for speech enhancement (SE) that utilizes a hierarchical extreme learning machine (HELM) to improve on conventional HELM-based SE frameworks, which consider audio information only. Furthermore, we investigate the performance of the HELM-based multimodal SE framework trained with binary weights and quantized input data to reduce the computational requirements. The experimental results show that the proposed multimodal SE framework outperforms the conventional HELM-based SE framework in terms of three standard objective evaluation metrics. The results also show that the performance of the proposed multimodal SE framework degrades only slightly when the model is compressed through model binarization and input-data quantization.
Relation Proceedings of the 2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), IEEE Signal Processing Society
Type conference
DOI https://doi.org/10.1109/APSIPAASC47483.2019.9023012
URI http://nccur.lib.nccu.edu.tw/handle/140.119/135532
Format application/pdf (946,159 bytes)
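The abstract outlines the core recipe: a hierarchical extreme learning machine (HELM) maps noisy speech features to clean ones, and the model is compressed by binarizing weights and quantizing the input data. Below is a minimal, self-contained sketch (not the authors' implementation) of a single ELM regression layer with these two compression steps applied; the paper's actual model stacks ELM layers hierarchically and also consumes visual features. All names, shapes, and hyperparameters are illustrative assumptions.

# Minimal sketch, assuming a single ELM layer: random hidden weights
# binarized to {-1, +1}, inputs uniformly quantized, and output weights
# solved in closed form. This illustrates the compression ideas named in
# the abstract, not the paper's full hierarchical/multimodal pipeline.
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, n_bits=8):
    """Uniform quantization of inputs to 2**n_bits levels over their range."""
    lo, hi = x.min(), x.max()
    levels = 2 ** n_bits - 1
    q = np.round((x - lo) / (hi - lo) * levels)
    return q / levels * (hi - lo) + lo

class BinaryELM:
    def __init__(self, n_in, n_hidden, reg=1e-3):
        # Random input weights, binarized to {-1, +1} (model binarization).
        self.w = np.sign(rng.standard_normal((n_in, n_hidden)))
        self.b = rng.standard_normal(n_hidden)
        self.reg = reg
        self.beta = None

    def _hidden(self, x):
        return np.tanh(x @ self.w + self.b)  # hidden-layer activations

    def fit(self, x, t):
        # Closed-form ridge solution for the output weights:
        # beta = (H^T H + reg*I)^{-1} H^T T  (no backpropagation needed).
        h = self._hidden(x)
        self.beta = np.linalg.solve(
            h.T @ h + self.reg * np.eye(h.shape[1]), h.T @ t)
        return self

    def predict(self, x):
        return self._hidden(x) @ self.beta

# Toy usage: map quantized "noisy" frames to "clean" targets.
noisy = rng.standard_normal((1000, 257))   # e.g. log-power spectral frames
clean = noisy * 0.8 + 0.1                  # placeholder clean targets
model = BinaryELM(n_in=257, n_hidden=512).fit(quantize(noisy, 8), clean)
enhanced = model.predict(quantize(noisy, 8))
print(enhanced.shape)                      # (1000, 257)

A design point worth noting: because ELM training is a single least-squares solve rather than gradient descent, binarizing the untrained random hidden weights changes nothing in the training procedure, and only the output weights remain real-valued, which is consistent with the abstract's observation that compression costs little performance.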