Title: 基於球員行動的羽球贏家策略 (Badminton winner strategy synthesis based on player position motions)
Author: 陳宜莉
Advisor: 郁方
Keywords: Automation Programs; Object Detection; Sequence Prediction; Logic Rule; Sequence Synthesis
Date: 2022
Uploaded: 1-Sep-2023 14:52:33 (UTC+8)

Abstract
Systematic analysis of player behavior that leverages modern machine learning techniques has shown great success in various activities such as football and baseball. We extend this success to badminton and aim to systematically synthesize player movements to analyze their winning/losing strategies against different opponents from YouTube videos. The learning process is based on player positions that can be recognized from rather low-resolution frames (in contrast to approaches based on tracking shuttlecocks), scaling our source base to widely available videos, e.g., from the BWF official channel on YouTube. The analysis framework involves several stages: we first convert frames of badminton games into normalized court frames, then adopt Mask-RCNN to detect both players in each frame and obtain their bounding boxes and coordinates. We train a DeepCTRL model to predict the direction of the shuttlecock in each frame based on the players' coordinates. Our DeepCTRL model combines a Bi-LSTM model as the task base for direction-sequence prediction with an additional rule loss on scores to guide the learning process toward correct score changes. The directions of the shuttlecock can then be used to identify shot frames (based on changes of direction), point winners (the next starting direction), and match history (direction segments). We then use the positions of players in shot frames to detect shot types and further identify winning/losing shot sequences in each point. To overcome the limitation of insufficient observed shot sequences, we synthesize more winning/losing shot sequences with SeqGAN to recognize Tai Tzu Ying's strategies against different opponents. We take Tai Tzu Ying's matches, including Olympic games and BWF HSBC Finals games, as an example and summarize our findings on her winning and losing strategies against different opponents.

Degree: Master's
Institution: 國立政治大學 (National Chengchi University)
Department: 資訊管理學系 (Department of Management Information Systems)
Student ID: 109356049
Source: http://thesis.lib.nccu.edu.tw/record/#G0109356049
Identifier: G0109356049
URI: http://nccur.lib.nccu.edu.tw/handle/140.119/146884
Type: thesis
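The abstract's direction-based segmentation — a shot frame wherever the shuttlecock's predicted direction changes, and the point winner read off the next starting direction — can be sketched minimally as follows. The ±1 per-frame direction encoding and the function names are illustrative assumptions, not the thesis's actual representation.

```python
# Hedged sketch: identifying shot frames and the point winner from a
# per-frame shuttlecock-direction sequence. The +1/-1 encoding is an
# assumption for illustration (+1 = toward the far court, -1 = near).

def shot_frames(directions):
    """Indices where the shuttlecock's direction changes, i.e., a shot."""
    return [i for i in range(1, len(directions))
            if directions[i] != directions[i - 1]]

def point_winner(next_serve_direction):
    """The winner of a point serves next, so the first direction of the
    following rally identifies who won ('the next starting direction')."""
    return "far player" if next_serve_direction == +1 else "near player"

rally = [+1, +1, +1, -1, -1, +1, +1, -1]
print(shot_frames(rally))   # frames 3, 5, and 7 are shot frames
print(point_winner(+1))     # far player serves next, so the far player won
```

Match history then falls out of the same signal: the segments between consecutive shot frames are the direction segments the abstract mentions.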
Table of Contents

Chapter 1 Introduction
Chapter 2 Related Work
  2.1 Sports Science
  2.2 Machine Learning Technique Adoptions
Chapter 3 Methodology
  3.1 Overview
  3.2 Data Preprocessing
    3.2.1 Image Cropping
    3.2.2 Frame Filtering
    3.2.3 Frame Reshaping
  3.3 Object Detection: Mask-RCNN
  3.4 Bidirectional LSTM
    3.4.1 Logical Loss Function
    3.4.2 Training Process
  3.5 Shot Position, Shot Type and Score
    3.5.1 Shot Position
    3.5.2 Shot Type
    3.5.3 Score
    3.5.4 Winning Sequences
  3.6 Sequence Synthesis: SeqGAN
Chapter 4 Experiments
  4.1 Dataset
  4.2 Data
    4.2.1 Image Cropping
    4.2.2 Frame Filtering
    4.2.3 Frame Reshaping
  4.3 Object Detection
  4.4 Prediction of Shuttlecock Flight Direction
    4.4.1 Prediction
    4.4.2 Comparison to Baseline
  4.5 Shot Position, Shot Type and Score
    4.5.1 Shot Position
    4.5.2 Shot Type
    4.5.3 Score
    4.5.4 Winning Sequence
  4.6 Sequence Synthesis and Analysis
    4.6.1 Shot Type Analysis
    4.6.2 Attack Sequence Analysis
Chapter 5 Conclusion
References
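The abstract's DeepCTRL objective — a Bi-LSTM task loss blended with a rule loss that penalizes predictions implying impossible score changes — can be sketched as below. The alpha-blending follows the general DeepCTRL recipe (Seo et al., NeurIPS 2021); the concrete rule (between consecutive points exactly one player's score rises by 1), the unit penalty, and the mixing weight are illustrative stand-ins, not the thesis's implementation.

```python
# Hedged sketch of a DeepCTRL-style combined objective: task loss plus a
# rule loss on score sequences. All names and constants are assumptions.

def rule_loss(predicted_scores):
    """Penalty for score sequences violating badminton's scoring rule:
    between consecutive points, exactly one player's score rises by 1."""
    penalty = 0.0
    for (a1, b1), (a2, b2) in zip(predicted_scores, predicted_scores[1:]):
        if (a2 - a1, b2 - b1) not in ((1, 0), (0, 1)):
            penalty += 1.0
    return penalty

def combined_loss(task_loss, predicted_scores, alpha=0.5):
    """Blend the task objective with the rule objective, DeepCTRL-style."""
    return alpha * task_loss + (1.0 - alpha) * rule_loss(predicted_scores)

valid = [(0, 0), (1, 0), (1, 1), (2, 1)]
invalid = [(0, 0), (2, 0), (2, 1)]           # a 2-point jump breaks the rule
print(combined_loss(1.0, valid))    # 0.5: task loss only, no rule penalty
print(combined_loss(1.0, invalid))  # 1.0: rule penalty added on top
```

In training, the rule term would be computed on the scores implied by the predicted direction sequence, steering the Bi-LSTM toward predictions consistent with legal score changes rather than fitting the task labels alone.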