
Citation Information

Title A generic construct based workload model for business intelligence benchmark
Authors Seng, Jia-Lang (諶家蘭); Chiu, S.H.
Contributor Department of Accounting (會計系)
Keywords Data warehouse; Data mining; Business intelligence; Generic construct; Benchmark; Workload model; Performance measurement and evaluation
Date 2011.11
Uploaded 25-Jun-2014 10:23:45 (UTC+8)
Abstract Benchmarks are vital tools in the performance measurement and evaluation of computer hardware and software systems. Standard benchmarks such as TREC, TPC, SPEC, SAP, Oracle, Microsoft, IBM, Wisconsin, AS3AP, OO1, OO7, and XOO7 have been used to assess system performance. These benchmarks are domain-specific: they model typical applications and are tied to a particular problem domain. Test results from these benchmarks are estimates of possible system performance for certain pre-determined problem types. When the user domain differs from the standard problem domain, or when the application workload diverges from the standard workload, they do not provide an accurate way to measure the system performance of the user's problem domain. System performance on the actual problem domain, in terms of data and transactions, may vary significantly from the standard benchmarks. In this research, we address the issues of domain boundness and workload boundness, which result in unrepresentative and irreproducible performance readings. We tackle these issues by proposing a domain-independent and workload-independent benchmark method developed from the perspective of user requirements. We present a user-driven workload model that develops a benchmark through a process of workload requirements representation, transformation, and generation. We aim to create a more generalized and precise evaluation method that derives test suites from the actual user domain and application. The benchmark method comprises three main components: a high-level workload specification scheme, a translator for the scheme, and a set of generators that produce the test database and the test suite. The specification scheme formalizes the workload requirements, the translator transforms the specification, and the generators produce the test database and the test workload.
In our approach, the generic constructs are the main common carriers we adopt to capture and compose the workload requirements. We determine the requirements through analysis of the literature. In this study, we conducted ten baseline experiments to validate the feasibility and validity of the benchmark method. An experimental prototype was built to execute these experiments. Experimental results demonstrate that the method is capable of modeling the standard benchmarks as well as more general benchmark requirements.
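The abstract describes a three-part pipeline: a high-level workload specification, a translator, and generators for the test database and test workload. The paper's actual specification scheme and generators are not reproduced in this record; the following is only a minimal illustrative sketch of that general shape, with all names (`spec`, `translate`, `generate_database`, the "aggregate" construct) being hypothetical stand-ins rather than the authors' notation.

```python
# Illustrative sketch only: a toy spec -> translator -> generator pipeline
# in the spirit of the abstract. All identifiers here are hypothetical.
import random
import sqlite3

# 1. High-level workload specification built from generic constructs,
#    independent of any particular problem domain.
spec = {
    "tables": {"sales": {"rows": 100, "columns": ["id", "region", "amount"]}},
    "queries": [{"construct": "aggregate", "table": "sales",
                 "group_by": "region", "measure": "amount"}],
}

def translate(spec):
    """2. Translator: turn the high-level spec into concrete DDL and SQL."""
    ddl, sql = [], []
    for name, t in spec["tables"].items():
        ddl.append(f"CREATE TABLE {name} ({', '.join(t['columns'])})")
    for q in spec["queries"]:
        if q["construct"] == "aggregate":
            sql.append(f"SELECT {q['group_by']}, SUM({q['measure']}) "
                       f"FROM {q['table']} GROUP BY {q['group_by']}")
    return ddl, sql

def generate_database(conn, spec, seed=7):
    """3. Generator: populate the test database with synthetic rows."""
    rng = random.Random(seed)  # fixed seed for reproducible test data
    for name, t in spec["tables"].items():
        rows = [(i, rng.choice(["north", "south"]), rng.randint(1, 100))
                for i in range(t["rows"])]
        conn.executemany(f"INSERT INTO {name} VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
ddl, workload = translate(spec)
for stmt in ddl:
    conn.execute(stmt)
generate_database(conn, spec)
# Run the generated test workload against the generated test database.
results = [conn.execute(q).fetchall() for q in workload]
```

The point of the sketch is the separation of concerns the abstract emphasizes: the same `spec` could be re-translated for a different target system, and the generators could be re-run at a different scale, without changing the workload description itself.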
Relation Expert Systems with Applications, 38(12), 14460-14477
Type article
DOI http://dx.doi.org/10.1016/j.eswa.2011.04.193
URI http://nccur.lib.nccu.edu.tw/handle/140.119/66907
Format application/pdf, 2676879 bytes
Language en_US