Center for Language Research
University of Aizu

In the CLR Phonetics Lab, we mainly focus our research on speech production and pronunciation. Some of our research projects focus on articulatory phonetics, while others focus on acoustic phonetics.

For studies on articulatory phonetics, we have an ultrasound machine to display real-time images of the tongue moving during speech. We also have access to a Vicon motion capture system for tracking the lips, jaw, eyebrows, etc. during speech.

For studies on acoustic phonetics, we mainly use open-source acoustic analysis software such as Praat. Many of our research projects involve an analysis of both articulation and acoustics.

Specific Research Projects
・Ultrasound Research Methodology
As a method of observing tongue movement for speech research, one-dimensional ultrasound was first used about 40 years ago (e.g., Kelsey et al., 1969, JASA 46), allowing one point on the tongue's surface to be tracked at a time. Two-dimensional ultrasound has been used in speech research for 27 years (since Sonies et al., 1981, JASA 70). However, only recently have higher image quality and greater affordability made ultrasound viable for this kind of research. Methodologies for ultrasound data collection and analysis vary widely, and we are developing and testing them in our lab.
See more about this research...
・Articulatory Setting
Articulatory setting is the underlying setting of the articulators (i.e., the tongue, lips, jaw, etc.) during speech. In his PhD research, Dr. Wilson showed that the articulatory settings for French and English differ significantly. We are now measuring the articulatory setting for Japanese and determining whether the explicit teaching of the differences between English and Japanese articulatory setting affects EFL students' pronunciation.
See more about this research...
・Ultrasound Tongue in Context
The ultrasound image of the surface of the tongue moving during speech can be difficult to interpret. This is because the ultrasound image usually shows only the tongue and not the opposing surfaces (i.e., the hard and soft palates and the teeth). This project involved the combination of ultrasound, CT, and video images, with the help of motion capture data. The resultant movies have been useful for illustrating the physical context of ultrasound data.
See more about this research...
・Ultrasound and L2 Pronunciation
In this area of research, we are comparing second language (L2) learners' tongue movements during the pronunciation of their native language (usually Japanese) with their L2 (English). We are also investigating the effectiveness of ultrasound as a form of real-time visual biofeedback for pronunciation learners.
See more about this research...
・Wind Instrument Tonguing
This project is a collaboration between our lab and Professor Masaichi Takeuchi of Nagoya University of Arts. We are using ultrasound to examine the tongue's movements and shape during the playing of wind instruments. By observing and recording the relationship between the tongue's shape and movements and the resultant sounds, we can create pedagogical materials for university music classes.
See more about this research...
・Tongue Twisters
In December 2006, we recorded ultrasound movies of tongue movements during the production of English and Japanese tongue twisters. Some of those movies are available on this page. It is readily apparent that the degree of tongue movement does not, on its own, determine the difficulty of a tongue twister.
See more about this research...
・Praat Japanese Tutorial
Praat is very popular open-source software for acoustic analysis. We have started to create a tutorial in Japanese, showing translations of menu items and how to use the software for basic acoustic analysis. We hope that this tutorial can benefit Japanese users who are learning Praat, including students and teachers at the School for the Hearing Impaired (located next to our university).
See more about this research...
・Acoustic Analysis of L2 Pronunciation
As Japan attempts to meet the demands of the Ministry of Education by introducing English-as-a-Foreign-Language (EFL) classes in all elementary schools, there has been a shortage of qualified native Japanese EFL teachers at all levels. The English communicative ability (and pronunciation, in particular) of Japanese EFL teachers varies across educational levels throughout Japan, including within individual prefectures. This study presents an acoustic analysis of read speech by 77 Fukushima Prefecture junior and senior high school Japanese EFL teachers, comparing the two levels. It also presents an analysis of read speech by 133 Japanese university students, recorded both before and after 14 weeks of weekly 90-minute explicit pronunciation instruction. The reading passage was the "Please call Stella" paragraph from the Speech Accent Archive. Detailed analyses of vowel formants, voice onset time, fricative spectral peaks, and intonation were carried out using Praat. The data have implications for curriculum planning for both EFL pronunciation classes and teacher-training courses in Japan.
See more about this research...
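To give a sense of the kind of measurement involved in this analysis, the sketch below estimates fundamental frequency (F0, the acoustic correlate of pitch and intonation) from a waveform using simple autocorrelation. This is only a toy illustration of the principle, not the lab's actual method: Praat's own pitch tracker (Boersma, 1993) is far more robust, and the 200 Hz "vowel" here is synthetic.

```python
import math

def estimate_f0(samples, sample_rate, fmin=75.0, fmax=500.0):
    """Estimate F0 by finding the lag with maximal autocorrelation.

    A minimal stand-in for the autocorrelation-based pitch tracking
    that tools like Praat perform; real trackers add windowing,
    interpolation, and voicing decisions.
    """
    n = len(samples)
    # Search only lags corresponding to a plausible F0 range.
    min_lag = int(sample_rate / fmax)
    max_lag = int(sample_rate / fmin)
    energy = sum(s * s for s in samples)
    best_lag, best_corr = min_lag, -1.0
    for lag in range(min_lag, max_lag + 1):
        corr = sum(samples[i] * samples[i + lag] for i in range(n - lag))
        corr /= energy  # rough normalization
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag

# Synthetic 100 ms "vowel": a 200 Hz fundamental plus two weaker harmonics.
rate = 16000
signal = [math.sin(2 * math.pi * 200 * t / rate)
          + 0.5 * math.sin(2 * math.pi * 400 * t / rate)
          + 0.25 * math.sin(2 * math.pi * 600 * t / rate)
          for t in range(1600)]

f0 = estimate_f0(signal, rate)
print(round(f0))  # close to 200 Hz
```

Running the estimator over successive short windows of a recorded sentence would yield the pitch contour that an intonation analysis compares across speakers.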
・Pitch and Facial Movement
In this research, we are investigating whether a relationship exists, in spoken Japanese, between the prominent pitch contours in a sentence and the up-down motion of the eyebrows and nodding motion of the head. Applications of this research include increasing the degree of realism in animation and video games.
See more about this research...
・Aizu Dialect of Japanese
The Aizu region of Fukushima Prefecture has a relatively large number of dialects, and we have started a project to record, phonetically analyze, and document these dialects. Results, including many audio samples, will be made available on this website.
See more about this research...