
Julia Alexander on Grammar and Listening (2)

Published: 2007-09-07 | Source: 昂立外语网站

The importance of Listening in second language learning

Julia Alexander: Beijing, August 2006

Key words:
neuro-biology, neuron, neural activity
acoustic (of sound), phoneme (individual speech sound), phonetic (of speech sounds)
cognitive, language cognition (the mental processes that convert phonological data into meaning)
phonological processing (perceiving/discriminating acoustic input and processing the data into categories)
working memory, coded information
pitch, rhythm, frequency, tone
sensory (of the 5 senses), articulatory (of the motor system, e.g. pronunciation)
statistical/computational
lexical (of words), syntactical (of grammar)

- Research into hearing and learning suggests that listening is the most important activity for language learning.
- ‘Working memory’: phonological processing + language cognition = working memory.
Professor David Baddeley (2001) [1], writing about deafened and hearing-impaired people:
‘Working memory is a cognitive system adapted for the storage and processing of information during a short period of time, and is similar to phonological processing, a central processing component in most cognitive activities relating to the processing of spoken and written languages. How do working memory performance in general and the components of the working memory system in particular relate to speech understanding? The working memory is the same in deaf and hearing subjects. They have the same sensitivity to manipulations of word-length, poor articulation and phonological familiarity.’ … ‘The missing parts of the signal that are absent or distorted must be filled in by verbal inference and disambiguation. This depends on an individual’s capacity to store and process verbal information over a short period of time. A large working memory allows connexions and inferences with previous parts of the same context.’
A large working memory goes with good conversational and communicative skills. Studies in Sweden [2] show that working memory becomes smaller when we can’t ‘hear’ well. Phonological processing is the most basic brain activity for all language tasks.
Listening is central to all learning.

- Phonological categories [3] come from learning our native language. The brains of newborn babies respond to every sound. During the first year of life, the neural response to sound becomes specialised in favour of the sounds of the mother-tongue. By 12 months, a child can separate familiar words from unknown words [4], spoken by different voices, at different speeds, in different contexts. He or she does this by making categories.
Our brains are shaped by the native language, and they remain so lifelong [5]. When we hear a phoneme that is similar to one in our first language, the neurons fire as if it were the same: the ‘perceptual magnet’ effect makes it hard to ‘hear’ sounds in a second language.
- In order to ‘hear’ a foreign language, our brains must overcome the ‘magnet’ effect [6]. Our brains can adapt throughout life [7]. The adaptation will not change the first-language phonological map. But it will reactivate the learning pathways created when we were babies.
- Verbal processing in the adult brain is mainly in the left hemisphere. In children, parts of the right hemisphere are involved as well. The same is true of people who are learning a foreign language. For adults to learn a foreign language, their brains must be actively engaged in 3 ways: sensory processing, articulatory-motor perception, cognitive engagement.
- 3 kinds of learning, each in a different brain area [7]:
- unsupervised learning, in the cortex, where we make categories by finding patterns in the input;
- supervised learning with error feedback, in the cerebellum, which includes motor/articulatory learning as well as cognitive processing;
- supervised learning with a reward for success, in the basal ganglia: ‘language as communication’ [3].
We know that learners learn when:
- they can understand the input,
- the input is structured,
- they have feedback about success or failure,
- and they are rewarded for being successful.

- Rhythm: the way that sound is organised in time. Languages are said to be either ‘stress-timed’ or ‘syllable-timed’ [8]. In a syllable-timed language, each syllable occupies more or less the same time as the syllable next to it.

Les parents se sont approchés de l’enfant sans faire de bruit.
Cette boulangerie fabrique les meilleurs gâteaux de tout le quartier.
Les banques ferment particulièrement tôt le vendredi soir.

In a stress-timed language like English, syllables vary in length.
Les parents se sont approchés de l’enfant sans faire de bruit.
The parents crept over to the child without making a sound.
Cette boulangerie fabrique les meilleurs gâteaux de tout le quartier.
This bakery makes the best cakes in the entire district.
Les banques ferment particulièrement tôt le vendredi soir.
The banks shut especially early on Friday evenings.

There are 3 contrasting characteristics between the two languages.
- English has a lot of consonants. In French, more time is given to vowels.
- English consonants can occur singly or in clusters of two or three. French does not have many consonant clusters at all.
- The placement of consonants in English does not tell us much about word boundaries. In French, most words end in a vowel sound, and a consonant often marks the onset of a new word. Most consonant + consonant placements occur across a word boundary.
These contrasts are typical of stress-timed versus syllable-timed languages, and they can be quantified with a simple rhythm metric, sketched below.
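
Reference 8 quantifies this contrast with the normalized Pairwise Variability Index (nPVI), computed over successive syllable (or vowel) durations. The following is a minimal Python sketch of that metric; the duration values are invented for illustration and are not taken from the study.

```python
def npvi(durations):
    """Normalized Pairwise Variability Index over a sequence of durations (ms).

    High values mean neighbouring syllables differ a lot in length (typical of
    stress-timed languages such as English); low values mean syllables are
    roughly even (typical of syllable-timed languages such as French).
    """
    if len(durations) < 2:
        raise ValueError("need at least two durations")
    diffs = [abs(a - b) / ((a + b) / 2)        # normalised difference of each adjacent pair
             for a, b in zip(durations, durations[1:])]
    return 100 * sum(diffs) / len(diffs)

# Invented syllable durations in milliseconds, purely for illustration:
english_like = [210, 90, 260, 70, 180, 110, 240]     # long stressed / short unstressed
french_like = [150, 160, 145, 155, 150, 160, 148]    # roughly even syllables

print(f"English-like nPVI: {npvi(english_like):.1f}")   # markedly larger value
print(f"French-like nPVI:  {npvi(french_like):.1f}")    # much smaller value
```

The much higher value for the uneven, English-like durations is what ‘stress-timed’ means in quantitative terms.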

- Vowels in English vary in duration (timing), amplitude (dB) and pitch (Hz), depending on whether they are stressed or unstressed. They are also affected by the timing of syllable-end consonants. Explicit teaching will have no cognitive effect unless it comes after ‘statistical learning’ [3].
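
The three acoustic dimensions mentioned here can all be measured directly from the waveform. Below is a minimal sketch, using only numpy, of how duration, level in dB and pitch in Hz might be estimated for a vowel segment; the ‘vowel’ is a synthetic sine wave and the autocorrelation pitch estimate is deliberately crude, so this is an illustration rather than a speech-analysis tool.

```python
import numpy as np

SAMPLE_RATE = 16_000  # Hz; assumed sampling rate for this sketch

def vowel_features(samples, sample_rate=SAMPLE_RATE):
    """Return (duration in s, level in dB, estimated pitch in Hz) for a vowel segment."""
    duration = len(samples) / sample_rate
    rms = np.sqrt(np.mean(samples ** 2))
    level_db = 20 * np.log10(rms + 1e-12)    # level relative to full scale
    # Crude pitch estimate: strongest autocorrelation lag in the 75-400 Hz range
    ac = np.correlate(samples, samples, mode="full")[len(samples) - 1:]
    lo, hi = sample_rate // 400, sample_rate // 75
    lag = lo + int(np.argmax(ac[lo:hi]))
    return duration, level_db, sample_rate / lag

# A synthetic 'stressed vowel': 250 ms sine wave at 180 Hz, moderate amplitude
t = np.arange(0, 0.25, 1 / SAMPLE_RATE)
stressed = 0.5 * np.sin(2 * np.pi * 180 * t)
print(vowel_features(stressed))   # roughly (0.25 s, about -9 dB, about 180 Hz)
```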

- ‘Statistical learning’: finding patterns in the speech stream of the ‘unsupervised learning environment’. Foreign language learners need to ‘be the audience’. People find statistical regularities in the speech around them, but they need to process the data in working memory [1].
- Language input is phonetic before it is phonological. That is, it is acoustic, not cognitive. Working memory fails when the learner is distracted [2]. Statistical learning is blocked by pre-teaching vocabulary, by translation before listening, by teachers explaining before the students have listened to the dialogue.
- Statistical learning applies, for example, to ‘permitted’ consonant + consonant pairs in word-initial/word-final positions, to trochaic word stress, and to timing (a small sketch of cluster counting follows after this group of points). The brain is interested in timing [7]. The right ear is specialised for tracking fast transitions in vowels and consonants. Vowels in stressed syllables can carry a glide.
- Tone changes, and changes in articulation, are picked up by the brain area that tracks consonant/vowel timings [6]. ‘Motherese’ [3] helps the child identify phonetic data, essential for organising phonological data, for both reception and production.
- Tone/pitch changes mark the status of key words. They also mark affirmative, negative and interrogative. They mark attitudes and emotions. Neither the speaker nor the listener is conscious of pitch changes: they are part of the language process. The learner must have an unconscious awareness of pitch variations before we give explanations.
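
To make the idea of ‘permitted’ consonant + consonant pairs concrete, here is a minimal Python sketch that counts word-initial and word-final consonant clusters over a tiny word list. The word list is invented, and ordinary spelling stands in for phonetic transcription, so this only illustrates the kind of statistics an ‘unsupervised’ learner could accumulate from exposure.

```python
from collections import Counter

# Tiny illustrative word list (spelling, not phonetic transcription)
words = ["street", "spring", "blank", "crisp", "text", "lamp", "ask", "tree", "plan", "ant"]

VOWELS = set("aeiou")

def edge_cluster(word, initial=True):
    """Return the run of consonant letters at the start (or end) of a word."""
    letters = word if initial else word[::-1]
    cluster = ""
    for ch in letters:
        if ch in VOWELS:
            break
        cluster += ch
    return cluster if initial else cluster[::-1]

initial_clusters = Counter(edge_cluster(w, initial=True) for w in words)
final_clusters = Counter(edge_cluster(w, initial=False) for w in words)

print("word-initial:", initial_clusters)   # e.g. 'str', 'spr', 'bl', 'cr', ...
print("word-final:  ", final_clusters)     # e.g. 'sp', 'xt', 'mp', 'sk', ...
```

No teaching is involved: the inventory of permitted clusters simply falls out of counting what actually occurs in the input.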

- Syntactic and lexical learning are also ‘statistical’ [9], but the input also requires interaction before we can assign meaning. Language is social. Our brains ‘frame’ the familiar word, which divides the words that come before it from the words that come after. This process is repeated and repeated, segmenting/chunking down, until we can separate out other words too. This is how we acquire grammar.
- Acoustic perception + lexical coding + grammatical coding lead to segmentation: that is, identifying word boundaries (a minimal sketch of statistical segmentation follows after this group of points). Phonological processing is all about segmentation. To understand, speak, read and write, we must be able to identify word boundaries.
- The whole process of language learning depends on listening.
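
Reference 9 surveys computational approaches to exactly this segmentation problem. The sketch below shows the simplest version of the idea: in a toy stream of syllables with the word boundaries removed, the transitional probability from one syllable to the next drops sharply at word boundaries. The words, syllables and numbers are invented for illustration; this is not the model described in the reference.

```python
import random
from collections import Counter

# Toy lexicon: three two-syllable 'words' (invented for illustration)
syllables_of = {"baby": ["ba", "by"], "kitty": ["ki", "tty"], "doggy": ["do", "ggy"]}
words = list(syllables_of)

# Build a continuous 'speech stream' of syllables with the word boundaries removed
random.seed(0)
stream = [syl for w in random.choices(words, k=200) for syl in syllables_of[w]]

# Count how often each syllable is followed by each other syllable
pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

def transition_prob(a, b):
    """P(next syllable is b | current syllable is a)."""
    return pair_counts[(a, b)] / first_counts[a]

for a, b in [("ba", "by"), ("ki", "tty"), ("by", "ki"), ("tty", "do")]:
    print(f"P({b} | {a}) = {transition_prob(a, b):.2f}")
# Within-word transitions ('ba'->'by', 'ki'->'tty') come out at 1.0;
# across-word transitions ('by'->'ki', 'tty'->'do') come out near 1/3,
# so a dip in transitional probability is a cue to a word boundary.
```

Once a word such as the toy ‘baby’ has been segmented reliably, it ‘frames’ its neighbours in exactly the way described above, and the remaining words become easier to isolate.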

References
(I am indebted to Dr. Jennifer Linden, Department of Research Neurology at the Ear Institute, University College, London, for her generous assistance with research references.)
[1] Professor David Baddeley, Working Memory, Oxford, Clarendon Press 1986; and Is working memory still working? – American Psychologist 2001
[2] Björn Lyxell, Ulf Andersson, Erik Borg, Inga-Stina Ohlsson (Örebro University, Sweden): Working memory capacity and phonological processing in deafened adults and individuals with a severe hearing impairment – International Journal of Audiology 2003 (also 1994, 1996, 1998)
[3] Patricia K. Kuhl (Institute for Learning and Brain Science and the Department of Speech and Hearing Sciences, University of Washington, Seattle, USA): Early Language Acquisition: cracking the speech code – Nature, November 2004
[4] Peter W. Jusczyk (Department of Psychology and Cognitive Science, Johns Hopkins University, Baltimore, USA): How infants begin to extract words from speech – Trends in Cognitive Sciences Vol. 3, No. 9, 1999
[5] Bruce McCandliss and Julie A. Fiez et al. (Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania): Success and failure in teaching the /r/-/l/ contrast to Japanese adults: tests of a Hebbian model of plasticity and stabilization in spoken language perception – Psychonomic Society 2002
[6] Ann R. Bradlow and David Pisoni (Speech Research Laboratory, Department of Psychology, Indiana University): Training Japanese listeners to identify English /r/ and /l/: some effects of perceptual learning on speech production – Journal of the Acoustical Society of America 1997
[7] Daniel E. Callan, Keiichi Tajima, et al. (Human Information Science Laboratories, ATR International, Kyoto): Learning-induced neural plasticity associated with improved identification performance after training of a difficult second-language phonetic contrast – Academic Press 2003
[8] Aniruddh D. Patel, John R. Iversen, and Jason C. Rosenberg (The Neurosciences Institute, San Diego, CA, USA): Comparing the rhythms of speech and music: the case of British English and French – Journal of the Acoustical Society of America 2006; and Aniruddh D. Patel (Neurosciences Institute, San Diego): An Empirical Method for Comparing Pitch Patterns in Spoken and Musical Notation – Empirical Musicology Review Vol. 1, No. 3, 2006
[9] Michael R. Brent (Johns Hopkins University): Speech segmentation and word discovery: a computational perspective – Trends in Cognitive Sciences Vol. 3, No. 8, 1999
[10] Ed Kaiser: The Structure of Spoken Language: Spectral cues – 1997
