Academic Output: NSC Research Project

Title 基於MPEG-7之地理群眾註記個人生命記憶典藏暨檢索資訊系統
Alternative Title MPEG-7 Based Geo-Crowdsourcing Personal Lifelong Archiving and Retrieving Information System
Author 郭正佩
Contributor Department of Computer Science (資科系)
Date 2013
Uploaded 27-Jul-2015 17:59:13 (UTC+8)
Abstract Nowadays, an individual can easily own more than one device with photographic capability. The proliferation of consumer recording and audio-visual products such as digital camcorders, digital cameras, tablets, and smartphones, together with ubiquitous social sharing services, means that personal digital life traces are being recorded continuously, anywhere and at any time. With storage prices falling year by year and cloud services maturing, building personal digital life-memory databases and emerging digital text and audio-visual services have become increasingly important topics. However, without appropriate multimedia content annotation, retrieving specific audio-visual information from a long-accumulated personal digital life-memory collection is difficult and time-consuming. In 2004, Kuo et al. proposed PARIS (Personal Archiving and Retrieving Image System), an MPEG-7-based multimedia content annotation system. [1] PARIS combines the MPEG-7 semantic annotation standard with the temporal and spatial information of multimedia content, aiming to semi-automatically annotate long-term, continuous audio-visual records of personal life experience with metadata, and to exploit external social-media data for automatic or semi-automatic annotation, retrieval, and management. Since 2010, Kuo and Cheng have experimented with using crowd-annotated content from external social media to accelerate the annotation of personal images. [2],[4] In 2012, Chen and Kuo proposed the DDDC (Dozen Dimensional Digital Content) annotation structure extended from PARIS, together with the iDDDC (Integrated Dozen Dimensional Digital Content) geographic annotation structure [3], which exploits crowd-annotated geographic information from social media; on the basis of iDDDC, they began building an MPEG-7-based geo-crowdsourcing personal life-memory archiving and retrieval information system. Multimedia retrieval has been an active research topic since the 1970s. Much previous work has focused on combining the image-feature analysis of content-based image retrieval (CBIR) with the semantic annotations used in metadata-based image retrieval to improve retrieval accuracy in audio-visual databases. Personal digital audio-visual databases, however, differ from general-purpose collections in many respects; designing archives around these characteristics opens up archiving requirements and retrieval possibilities that differ from traditional audio-visual archiving. Moreover, with the rise of tablet-style mobile devices, many novel modes of presenting and revisiting personal digital life memories remain to be developed. This project aims to build, for the characteristics of personal audio-visual databases, an interoperable annotation structure and a geo-crowdsourcing ontology system that effectively support future archiving, management, and retrieval, working toward a long-term personal text, image, and audio-visual archive. In addition to exploiting the semantic annotations that recent audio-visual devices can generate automatically, the project will build an ontology database from social geo-tags collaboratively created by crowds across related social services, achieving semi-automatic metadata construction and novel retrospective browsing of personal digital life memories.
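The abstract describes wrapping a recording's temporal and spatial context in an MPEG-7-style semantic description. A minimal sketch of that idea in Python follows; note that the element names (`Mpeg7`, `CreationCoordinates`, `CreationLocation`, and so on) only loosely follow MPEG-7 MDS vocabulary and are not a validated schema instance of the actual PARIS/iDDDC format, which this record does not reproduce.

```python
import xml.etree.ElementTree as ET

# Illustrative sketch only: element names loosely follow the MPEG-7 MDS
# vocabulary; they are assumptions, not the real PARIS/iDDDC schema.
def build_annotation(title, creation_time, lat, lon, free_text):
    """Wrap one photo's spatio-temporal metadata in an MPEG-7-style XML description."""
    root = ET.Element("Mpeg7")
    desc = ET.SubElement(root, "Description")
    creation = ET.SubElement(ET.SubElement(desc, "CreationInformation"), "Creation")
    ET.SubElement(creation, "Title").text = title
    coords = ET.SubElement(creation, "CreationCoordinates")
    ET.SubElement(coords, "CreationTime").text = creation_time   # when it was taken
    loc = ET.SubElement(coords, "CreationLocation")              # where it was taken
    ET.SubElement(loc, "Latitude").text = str(lat)
    ET.SubElement(loc, "Longitude").text = str(lon)
    ET.SubElement(desc, "FreeTextAnnotation").text = free_text   # semantic annotation
    return ET.tostring(root, encoding="unicode")

xml_doc = build_annotation("Taipei 101 at dusk", "2013-05-04T18:30:00+08:00",
                           25.0340, 121.5645, "evening walk in Xinyi district")
print(xml_doc)
```

Keeping time, place, and free-text annotation in one XML description is what lets later retrieval queries combine "when", "where", and "what" filters over the same record.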
Relation Project No. NSC102-2221-E004-010
Description Nowadays, almost everyone owns one or more camera-equipped devices. With the proliferation of digital recording devices such as digital video cameras, digital cameras, tablets, and smartphones, an increasing number of users are building up private multimedia repositories. At the same time, the rapid growth of hard-drive capacity and of digital audio-visual device resolution results in ever-larger personal multimedia collections. Without appropriate multimedia content annotations, however, it is difficult and time-consuming to organize and relocate a specific audio-visual item within a long-term personal digital life-memory database. In 2004, Kuo et al. proposed PARIS (Personal Archiving and Retrieving Image System), a multimedia description schema system based on MPEG-7. [1] It was designed to integrate the spatial and temporal information of multimedia content into an MPEG-7-based semantic description. With this description architecture, PARIS envisioned continuously capturing and archiving personal experience as audio-visual recordings while exploiting social-networking annotations provided by third-party services. Since 2010, Kuo and Cheng have experimented with using user-generated social-network content to facilitate the annotation of personal photographs. [2],[4] In 2012, Chen and Kuo proposed the MPEG-7-based Integrated Dozen Dimensional Digital Content (iDDDC) description structure [3], extended from the PARIS DDDC (Dozen Dimensional Digital Content) architecture, to enable semi-automatic crowdsourced geographic annotation via third-party social-networking services. Multimedia retrieval has been a very active research area since the 1970s. While previous research on multimedia database systems has focused mainly on general-purpose archives, this proposal targets continuous personal multimedia collections.
Personal long-term archives have very different characteristics from general-purpose digital libraries, so we envision novel archiving requirements and retrieval possibilities. In addition, with emerging tablet devices and smartphones, creative personal life-memory storytelling systems and autobiographical narratives are yet to be developed. This research aims to construct a common annotation architecture and geography-specific ontologies to facilitate the archiving, retrieval, and management of lifelong personal multimedia collections. We also envision semi-automatically generating crowdsourced geographic ontologies from related social-network content to achieve novel personal audio-visual database management and autobiographical narrative implementations.
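The crowdsourced geographic annotation the description refers to amounts to matching a photo's coordinates against place labels contributed by social-network users. A minimal sketch of that matching step follows; the `crowd_tags` list and its field names are hypothetical stand-ins for harvested social geo-tags, and the real iDDDC pipeline is not reproduced here.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS-84 points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def nearest_tag(photo_lat, photo_lon, crowd_tags):
    """Pick the crowd-sourced label whose coordinates lie closest to the photo."""
    return min(crowd_tags, key=lambda t: haversine_km(photo_lat, photo_lon, t["lat"], t["lon"]))

# Hypothetical crowd-contributed place tags (stand-ins for social-media data).
crowd_tags = [
    {"label": "Taipei 101", "lat": 25.0340, "lon": 121.5645},
    {"label": "NCCU campus", "lat": 24.9870, "lon": 121.5766},
]
best = nearest_tag(24.9880, 121.5770, crowd_tags)
print(best["label"])
```

In practice the looked-up label would feed the semi-automatic metadata step, becoming a candidate place annotation that the user confirms rather than types from scratch.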
Data Type report
URI http://nccur.lib.nccu.edu.tw/handle/140.119/77007