| Graduate student | 黃英旗 Ying Chi Huage |
|---|---|
| Thesis title | 以語音呈現模式導讀網頁文件之研究 Research on Web Accessing with Aural Rendering Model |
| Advisor | 葉耀明 Yeh, Yao-Ming |
| Degree | Master |
| Department | 資訊教育研究所 Graduate Institute of Information and Computer Education |
| Year of publication | 2002 |
| Graduating academic year | 90 (ROC calendar) |
| Language | Chinese |
| Number of pages | 107 |
| Chinese keywords | 全球資訊網應用、可擴展標籤語言應用、語音瀏覽器、網路可及性、語音呈現模式 |
| English keywords | WWW Application, XML Application, Voice Browser, Web Accessibility, Aural Rendering Model |
| Thesis type | Academic thesis |
Since the development of the Internet, it has clearly become a boundless repository of knowledge, with the multimedia-rich World Wide Web attracting the most attention. Traditional browser software, however, can present web pages only in visual form; even when paired with existing commercial screen-reading software, it still cannot present web information correctly in auditory form, and can even mislead the user's understanding of the content. Recent advances in wireless communication, speech recognition, and speech synthesis now let people obtain web information anytime, anywhere, using only a mobile phone. Establishing a new kind of voice browsing service is therefore bound to help people obtain the web information they need through voice communication services.
Based on the reasons and motivation above, this thesis proposes an Aural Rendering Model (ARM) and implements a system called the Aural Rendering Model Designer (AURMOD) to address these problems. The design concept of ARM is to take web information that is presented visually, automatically add appropriate semantic information, and convert it into a voice document presented aurally; combined with a mature off-the-shelf speech synthesizer, the information in a web document can then be read aloud to sighted or visually impaired users. With the convenience this system provides, even visually impaired users can obtain information from the World Wide Web as promptly and conveniently as anyone else.
With the rapid development of the Internet, it has become a boundless knowledge-base system, and the World Wide Web (WWW), which provides multimedia information, is its most popular framework. Traditional browsers can present web information only visually; even a browser integrated with commercial aural software can confuse the user when its speech synthesizer reads the web content. Recent advances in wireless communication, speech recognition, and speech synthesis technologies have made it possible for people to obtain Internet information from any place at any time using only a cellular phone. Hence, building a new architecture for voice browsers enables people to access web information through voice communication services.
Based on the reasons and motivation mentioned above, this study proposes an Aural Rendering Model (ARM) and implements a software system named the Aural Rendering Model Designer (AURMOD) to resolve these problems. ARM is designed to transform visual-type web pages into aural-type voice documents, automatically adding the semantic information needed so that no relevant information is lost; combined with a mature speech synthesizer that reads out the information on the page, people with and without visual disabilities can both "read" web pages by listening. With the convenience this system provides, people with visual disabilities can access web pages as instantly and efficiently as ordinary people do.
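The abstracts describe ARM only at a high level: visually structured HTML is converted into a linear, aurally oriented document by inserting spoken structural cues, which a speech synthesizer then reads aloud. As a rough illustration of that general idea only, and not of the thesis's actual AURMOD design, the following Python sketch walks a small HTML fragment and emits such cues; the `AuralRenderer` class and its element-to-cue mapping are assumptions made for this example.

```python
# A minimal, illustrative sketch of an "aural rendering" pass over HTML:
# visual markup is linearized into text enriched with spoken structural cues,
# which a text-to-speech engine could then read aloud.  The element-to-cue
# mapping below is a hypothetical choice made for this sketch only.

from html.parser import HTMLParser


class AuralRenderer(HTMLParser):
    """Convert simple HTML into a linear 'aural document' of spoken cues."""

    # Hypothetical mapping from visual markup to spoken structural cues.
    OPEN_CUES = {
        "h1": "Heading, level one:",
        "h2": "Heading, level two:",
        "ul": "Start of list.",
        "li": "List item:",
        "a": "Link:",
        "table": "Start of table.",
    }
    CLOSE_CUES = {
        "ul": "End of list.",
        "table": "End of table.",
    }

    def __init__(self):
        super().__init__()
        self.spoken = []  # accumulated utterances, in reading order

    def handle_starttag(self, tag, attrs):
        cue = self.OPEN_CUES.get(tag)
        if cue:
            self.spoken.append(cue)
        if tag == "img":
            # Images have no aural form; fall back to alt text if present.
            alt = dict(attrs).get("alt")
            self.spoken.append(f"Image: {alt}." if alt else "Image without description.")

    def handle_endtag(self, tag):
        cue = self.CLOSE_CUES.get(tag)
        if cue:
            self.spoken.append(cue)

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.spoken.append(text)


if __name__ == "__main__":
    html = """
    <h1>Campus News</h1>
    <ul>
      <li><a href="/a">Admission schedule</a></li>
      <li><a href="/b">Library hours</a></li>
    </ul>
    <img src="map.png" alt="Campus map">
    """
    renderer = AuralRenderer()
    renderer.feed(html)
    # The joined string is what a speech synthesizer would be asked to read.
    print(" ".join(renderer.spoken))
```

Run on the sample fragment, this prints a single utterance string beginning with "Heading, level one: Campus News Start of list. List item: Link: Admission schedule …", which is the kind of linearized, semantically annotated text a voice browser would hand to its synthesizer.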