dc.contributor | Department of Statistics, National Chengchi University | en_US |
dc.contributor | National Science Council, Executive Yuan | en_US |
dc.creator (Author) | 洪英超 | zh_TW |
dc.date (Date) | 2010 | en_US |
dc.date.accessioned | 30-Aug-2012 09:58:31 (UTC+8) | - |
dc.date.available | 30-Aug-2012 09:58:31 (UTC+8) | - |
dc.date.issued (Upload Time) | 30-Aug-2012 09:58:31 (UTC+8) | - |
dc.identifier.uri (URI) | http://nccur.lib.nccu.edu.tw/handle/140.119/53373 | - |
dc.description.abstract (Abstract) | 多拉桿吃角子老虎問題(multi-armed bandit problem)可以應用在許多領域,如臨床試驗、線上工業實驗(on-line industrial experimentations)、可調性網路路由(adaptive network routing)等。本計畫將以貝氏的角度探討「無窮多拉桿之吃角子老虎問題」。我們假設未知的白努利參數為相互獨立且來自同一個機率分配F,而我們的目的是找出一個選擇拉桿的策略,使得長時間操作下的失敗率為最低。在本計畫的第一部份,我們假設F為一任意但已知的機率分配,接著介紹1996年由Berry等人提出的三種策略,並證明當試驗次數趨近無窮大時,此三種策略皆可以使長時間操作下的失敗率為最低。此外,我們也利用電腦模擬來比較此三種策略的實際表現。在本計畫的第二部份,我們假設F為一未知的機率分配。在此假設下,我們提出一個新的策略,叫做「empirical non-recalling m-run策略」,並證明此策略亦為一近似最佳策略。此外,我們也將利用電腦模擬與Herschkorn等人於1995年提出的二個策略進行比較。 | zh_TW |
dc.description.abstract (Abstract) | Multi-armed bandit problems have a wide range of applications, such as clinical trials, on-line industrial experimentation, and adaptive network routing. In this study, we examine the bandit problem with infinitely many arms from a Bayesian perspective. We assume the unknown Bernoulli parameters are independent observations from a common distribution F, and the objective is to provide strategies for selecting arms at each decision epoch so that the expected long-run failure rate is minimized. In the first part of this study, we assume the common distribution F is arbitrary but known. We introduce three strategies proposed by Berry et al. (1996) and show that they asymptotically minimize the expected long-run failure rate. Numerical results from computer simulations are also provided to evaluate the performance of the three strategies. In the second part of this study, we assume the common distribution F is unknown. For this setting, we propose a strategy called the “empirical non-recalling m-run strategy” and prove that this strategy is asymptotically optimal. Numerical results from computer simulations will also be provided to evaluate the proposed strategy against two strategies by Herschkorn et al. (1995). | en_US |
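The abstract does not spell out the strategies in detail, so the following is only a rough illustrative sketch, not the authors' exact formulation: a non-recalling m-run-type rule under the assumption F = Uniform(0,1), where a fresh arm is played until its first failure, and the first arm to produce m consecutive successes is kept forever. All function and parameter names here are hypothetical.

```python
import random

def non_recalling_m_run(n, m, draw_p, rng):
    """Simulate a non-recalling m-run-type strategy on a Bernoulli bandit
    with infinitely many arms over n pulls (illustrative sketch only).

    Play a fresh arm until its first failure; once some arm produces
    m consecutive successes, commit to that arm for all remaining pulls.
    Returns the proportion of failures among the n pulls.
    """
    failures = 0
    p = draw_p(rng)          # success probability of the current arm
    run = 0                  # current streak of successes on this arm
    committed = False
    for _ in range(n):
        if rng.random() < p:       # success
            run += 1
            if run >= m:
                committed = True   # m straight successes: keep this arm
        else:                      # failure
            failures += 1
            if not committed:
                p = draw_p(rng)    # discard the arm, draw a fresh one
                run = 0
    return failures / n

rng = random.Random(0)
rate = non_recalling_m_run(n=10_000, m=100,
                           draw_p=lambda r: r.random(),  # assume F = Uniform(0,1)
                           rng=rng)
print(f"failure rate: {rate:.4f}")
```

With a uniform F and m on the order of the square root of the horizon, the simulated failure rate is typically a few percent, consistent with the abstract's claim that such strategies drive the long-run failure rate toward its minimum as the number of trials grows.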
dc.language.iso | en_US | - |
dc.relation (Relation) | Basic research | en_US |
dc.relation (Relation) | Academic grant | en_US |
dc.relation (Relation) | Research period: 9908–10007 (ROC calendar: Aug 2010 – Jul 2011) | en_US |
dc.relation (Relation) | Research funding: NT$429,000 | en_US |
dc.subject (Keywords) | 多拉桿吃角子老虎問題; 貝氏策略 | zh_TW |
dc.subject (Keywords) | Multi-armed bandit problem; Bayesian strategy | en_US |
dc.title (Title) | 吃角子老虎問題之最佳貝氏策略 | zh_TW |
dc.title.alternative (Alternative Title) | Optimal Bayesian Strategies for Bandit Problems | en_US |
dc.type (Type) | report | en |