
Phonological development


Sound is where language learning begins. Children must learn to distinguish different sounds and to segment the speech stream they are exposed to into units – eventually meaningful units – in order to acquire words and sentences. One reason speech segmentation is challenging: written words are separated by spaces, but no such pauses reliably separate spoken words. So if an infant hears the sound sequence “thisisacup,” it has to learn to segment this stream into the distinct units “this”, “is”, “a”, and “cup.” Once the child can extract the sequence “cup” from the speech stream, it has to assign a meaning to that word. Furthermore, the child has to distinguish the sequence “cup” from “cub” in order to learn that these are two distinct words with different meanings. Finally, the child has to learn to produce these words. The acquisition of native-language phonology begins in the womb and is not fully adult-like until the teenage years. Perceptual abilities (such as being able to segment “thisisacup” into four individual word units) usually precede production and thus aid the development of speech production.
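The segmentation problem can be made concrete with a small sketch. The function below segments an unbroken character stream into words, but only because it is given a lexicon in advance; the names and the tiny lexicon here are illustrative assumptions, and real infants face the much harder version of the problem, where no lexicon exists yet.

```python
# Illustrative sketch of speech-stream segmentation, ASSUMING the listener
# already knows a small lexicon. Infants must bootstrap without one, which
# is exactly what makes the segmentation problem hard.

def segment(stream, lexicon):
    """Return one segmentation of `stream` into lexicon words, or None."""
    if stream == "":
        return []  # the whole stream has been consumed
    for word in lexicon:
        if stream.startswith(word):
            rest = segment(stream[len(word):], lexicon)
            if rest is not None:
                return [word] + rest
    return None  # no word in the lexicon fits here

print(segment("thisisacup", {"this", "is", "a", "cup"}))
# → ['this', 'is', 'a', 'cup']
```

With the lexicon removed (or with unfamiliar words in the stream), the function returns None – a rough analogue of why an infant cannot pull “cup” out of “thisisacup” before learning anything about the language.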

Children don’t utter their first words until they are about 1 year old, but already at birth they can distinguish some utterances in their native language from utterances in languages with different prosodic features.

Infants as young as 1 month perceive some speech sounds categorically (they display categorical perception of speech). For example, the sounds /b/ and /p/ differ in voice onset time – the lag between the opening of the lips and the onset of vocal-fold vibration. Using a computer-generated continuum of voice onset times between /b/ and /p/, Eimas et al. (1971) showed that English-learning infants paid more attention to differences near the boundary between /b/ and /p/ than to equal-sized differences within the /b/ category or within the /p/ category. Their measure, monitoring infants’ sucking rate, became a major experimental method for studying infant speech perception.
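The key finding can be sketched as a toy model: a sharp category boundary means that two stimuli separated by the same physical distance are heard as “same” or “different” depending on whether the boundary lies between them. The boundary value of 25 ms used below is an illustrative assumption, not a measured constant.

```python
# Toy model of categorical perception along a voice-onset-time (VOT)
# continuum. ASSUMPTION: an idealized, perfectly sharp category boundary
# at 25 ms, chosen only for illustration.

BOUNDARY_MS = 25

def perceive(vot_ms):
    """Classify a VOT value as /b/ (short lag) or /p/ (long lag)."""
    return "/b/" if vot_ms < BOUNDARY_MS else "/p/"

# Two stimulus pairs, each separated by the same 20 ms physical step:
print(perceive(0), perceive(20))   # within-category pair  → /b/ /b/
print(perceive(15), perceive(35))  # cross-boundary pair   → /b/ /p/
```

The within-category pair maps to the same percept while the cross-boundary pair does not, mirroring why infants in Eimas et al.’s study responded to equal-sized acoustic differences only when they straddled the /b/–/p/ boundary.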

Infants up to 10–12 months of age can distinguish not only native contrasts but also nonnative ones. Older children and adults lose the ability to discriminate some nonnative contrasts. Thus, it seems that exposure to one’s native language causes the perceptual system to be restructured, and the restructuring reflects the system of contrasts in the native language.

At 4 months infants still prefer infant-directed speech to adult-directed speech. Whereas 1-month-olds exhibit this preference only if the full speech signal is played to them, 4-month-olds prefer infant-directed speech even when just the pitch contours are played. This shows that between 1 and 4 months of age, infants improve at tracking the suprasegmental information in the speech directed at them; by 4 months, they have learned which suprasegmental features to attend to.

