Current Projects
Bilingual children are at elevated risk of clinical misdiagnosis in speech and language development. This is partly because there are many paths to bilingualism: even children from similar language backgrounds may have widely varying language experiences. For most language combinations, speech-language pathologists must rely heavily on clinical judgment, informed by each child’s linguistic background, to determine whether a bilingual child presents with a speech or language disorder. Diagnostic accuracy decreases when clinicians lack expertise in both of a child’s languages. Yet, in 2022, only 2.7% of service providers in the US could provide clinical speech services in a language other than English or Spanish. This falls well below the 8.4% of the US population who spoke a language other than English or Spanish at home in 2019. Meeting the linguistic needs of this growing, linguistically heterogeneous population of children in the US is crucial.
The Speech in Little Bilinguals Lab (SLBL) focuses on the development of speech perception and speech production in children who are exposed to more than one language at home, in the community, or at school. We aim to build the knowledge base necessary for evidence-based clinical decisions across a variety of bilingual populations.

Korean-English bilingual speech development
Perception and production of speech sounds are tightly linked, and perception tends to precede production in both children and adults. This is especially relevant for second language learners: learners who can perceptually differentiate two speech sounds (phonemes) in their second language are also better at producing the difference between them. Bilinguals cannot simply ‘turn off’ one of their languages, so speech perception in bilingual children must be approached with the understanding that both languages will always affect perception.
Although it is known that during speech therapy, bilingual children’s productions of phonemes shared between English and Spanish tend to improve more quickly than productions of phonemes unique to each language, it is not clear how well this finding generalizes to less closely related language pairs. The ability to perceive acoustic differences between similar sounds across languages is likely related to greater differentiation in producing those sounds, but this remains to be tested. The goal of this study is to examine this cross-linguistic perception-production link, guided by the principle that perceptual tests could help determine which phonemes to target in speech treatment based on each bilingual child’s individual knowledge. The project focuses on children acquiring Korean and English, two languages less closely related than English and Spanish.

Cross-linguistic influence of signed and spoken language
The research project Cross-Linguistic Influence of Signed and Spoken Language examines bimodal English-American Sign Language (ASL) bilinguals as they complete a verbal fluency task. Hearing bimodal bilinguals make up a large percentage of ASL learners, and this project asks whether the influence (or activation) of one language can affect responses in the other language during a verbal fluency task. Cross-linguistic activation of first and second languages is well studied in spoken languages: researchers and educators alike use activation methods, such as cognates (e.g., “doctor” in English versus “doctor” in Spanish), to boost second language learners’ responses. The project was started by undergraduate researcher Archisa Ghimire, who noticed in the classroom that some signs are easier to memorize than others.
To explain the verbal fluency task these participants completed, it helps to first define who hearing bimodal English-ASL bilinguals are. “Hearing” refers to participants who have no known hearing impairments and who do not use devices such as hearing aids or cochlear implants. Bilinguals are people who have at least some proficiency in two languages, typically defined as the ability to use both languages with communication partners who themselves have at least some proficiency in one or both. “Bimodal” refers to bilinguals whose two languages use separate modalities. Communication modalities include, but are not limited to, speaking, hand signs (formally part of signed languages), gestures (like pointing; not formally part of signed languages), and picture exchange (using a board for communication). For the purposes of this project, participants use signed (ASL) and spoken (English) modalities.
In this project, verbal responses include not only spoken responses but signed responses too! Think of a verbal response as any word or phrase, whether produced in spoken English or signed in ASL. The project examines the activation of the spoken (English) modality during signed (ASL) responses and the activation of the signed (ASL) modality during spoken (English) responses. The main hypothesis predicts that activation of ASL will have minimal impact on the number of correct responses in the spoken (English) condition, whereas activation of English will impact the number of correct responses in the ASL condition. The goal is to discover whether English knowledge helps emerging hearing bimodal bilingual learners access verbal responses in ASL. These findings have implications for ASL instructors of hearing bimodal bilinguals, who need to understand how existing English knowledge may or may not affect student learning, and for ASL learners seeking to understand their own learning process, especially when acquiring a language in a different modality. The project will be presented to a public audience at KU’s Research Symposium in December 2024.

Learning sounds from native and non-native speech contexts
Understanding speech from an unknown speaker can be challenging at first, especially if the speaker has an unfamiliar accent. Research has shown that adults can rapidly adapt to unfamiliar speech, including foreign accents and unfamiliar dialects, but children struggle more. Bilingual children are likely to encounter foreign-accented speech in their homes and communities, which may make their language learning environment more challenging. Yet children in these communities do not tend to pick up foreign accents in their own speech. Moreover, challenging environments have been shown to produce more robust learning, potentially yielding a more adaptive speech perception system.
Similar sounds can be difficult for second language learners to perceive. Have you ever heard someone say two words in another language and been unable to hear the difference between them? For instance, in tonal languages such as Mandarin, a character can carry different meanings depending on its tone: the character 好 is pronounced hǎo with a low-dipping tone or hào with a falling tone. Listeners of Mandarin Chinese not only differentiate these two sounds aurally but also distinguish their meanings in utterances. All language learners face this challenge, especially in the early stages of learning a new language. Now imagine learning these sounds from a non-native speaker who also struggles to produce the difference between them. This is the challenging environment that bilingual children may face when learning the sounds of a second language. This project examines how children develop the ability to perceive new, difficult sounds produced by native and non-native speakers, addressing both the benefits and the challenges such environments pose in the initial stages of learning.