
By James Hunter, Assistant Professor, English Language Center, Gonzaga University, Spokane, USA

Excel has an Analysis ToolPak add-in which can do a lot of statistical tasks (help on installing it is here). Also, try the R Project, a free “software environment for statistical computing and graphics” that runs on Windows, Mac, and Linux. I haven’t had much of a chance to play with it, but it is certainly not user-friendly. However, you can also get Statistical Lab, a free GUI front end for R, though it is not available for Mac or Linux. There’s also PSPP, a free program modeled on SPSS (the “big” stats package that businesses and colleges use).

With all of these, you can easily do correlation matrices, t-tests, chi-square tests, item analysis, ANOVA, and so on. These will enable you to compare results on assessments, do pre- and post-tests, get inter-rater reliability information, find links between variables, etc. See also this for information on which statistical procedures to use when.
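
To give a sense of how little code these procedures take, here is a minimal sketch in R (the environment recommended above). Every class name and number below is invented for illustration; it only shows the shape of each call:

# Toy data: scores for two classes on the same test (invented numbers)
class_a <- c(72, 85, 90, 66, 78, 88, 95, 70)
class_b <- c(64, 80, 75, 60, 82, 71, 68, 77)

# Correlation matrix across several (fabricated) measures
scores <- data.frame(reading   = class_a,
                     listening = class_b,
                     grammar   = c(70, 83, 88, 62, 80, 85, 90, 69))
cor(scores)

# Independent-samples t-test: do the two classes differ?
t.test(class_a, class_b)

# Chi-square test on pass/fail counts by class
# (chisq.test will warn with toy counts this small)
pass_fail <- matrix(c(6, 2, 4, 4), nrow = 2,
                    dimnames = list(class = c("A", "B"),
                                    result = c("pass", "fail")))
chisq.test(pass_fail)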

I use mean and SD on most tests and quizzes to a) compare classes to previous semesters and b) look at the distribution and spread of scores on a test/item. This helps to make informed decisions about assessment instruments, especially those that might be adopted as standardized tests for the program. I’ve done a lot of work with our placement instruments, for example, to determine reliability and check our cut scores.
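
For anyone wanting to try this kind of check themselves, here is a base-R sketch of the descriptive and reliability work just described. The item-response matrix is simulated, so the numbers mean nothing, but the procedure is the sort you would run on real placement data:

# Simulate an item-response matrix: 200 students x 20 items (1 = correct)
set.seed(1)
ability <- rnorm(200)                        # each student's underlying ability
p       <- plogis(rep(ability, 20))          # probability of a correct answer
items   <- matrix(rbinom(200 * 20, 1, p), nrow = 200)

total <- rowSums(items)
mean(total); sd(total)                       # central tendency and spread
hist(total, main = "Score distribution", xlab = "Total score")

# Split-half reliability with the Spearman-Brown correction
odd  <- rowSums(items[, seq(1, 20, by = 2)])
even <- rowSums(items[, seq(2, 20, by = 2)])
r_half <- cor(odd, even)
(2 * r_half) / (1 + r_half)                  # estimated full-test reliability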

Recently, I’ve been doing research on corrective feedback in oral production, so I have needed measures of accuracy and fluency (and complexity!). Statistical analysis has been essential for finding correlations between, say, accuracy and reaction time on a grammaticality test, or accuracy and production time in a correction test. For instance, in class a student says to another: *”Yeah, actually I’m agree with you”. This goes down on a worksheet for her (and occasionally for other classmates; see this for a description of the methodology), and she is later given a timed test in which she sees the incorrect sentence and has to record a corrected version. Her speed in doing this task (plus her accuracy) gives a measure of whether this structure/lexis is part of her competence (or, to use Krashen’s model, whether it has been “acquired” or “learned”: presumably, if this theory holds water, “learned” forms will take longer to process and produce than “acquired” ones). In addition to this production test, I’ve been doing a reaction-time test in which the same learner hears her own recording and has to decide, as quickly as possible, whether what she said is correct or not. You can try this for yourself here (you will not be able to hear student recordings, only a few practice sets, recorded by me using student errors from our database; use anything as Username and “elc” as password).
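
The statistical core of that accuracy/reaction-time question is just a correlation test. Here is a hedged sketch with simulated data standing in for the real database (the variables and their distributions are my own invention):

# Simulated per-trial results: reaction time in ms, accuracy as 0/1
set.seed(2)
rt  <- rnorm(300, mean = 1800, sd = 400)
acc <- rbinom(300, 1, plogis(2 - rt / 1500))   # slower trials made slightly less accurate

cor.test(rt, acc)                              # point-biserial correlation, with p-value and CI
summary(glm(acc ~ rt, family = binomial))      # or model accuracy as a function of RT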

These measures yield thousands of results, which is why statistical analysis has been essential. Excel can do a lot of the work, especially in graphical representation, but SPSS has done most of the heavy lifting. For instance, it has revealed no significant difference in reaction time (or accuracy) between a student listening to herself correcting an error she originally made and listening to herself correcting errors made by classmates. In other words, students are just as good (or bad) at noticing and judging errors whether they made them or a classmate did. The same is true in the correction task described above. This indicates that WHOSE error a student is correcting/judging has much less effect on her speed or accuracy than some other factor, e.g. the nature of the error itself. There is probably a large “Duh!” factor there, but these things need to be ruled out before moving on…
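
That “no significant difference” finding boils down to a two-sample comparison. In R it looks roughly like this, again with invented data; a real analysis would probably use per-student means or a mixed model to respect the repeated measures:

# Invented reaction times (ms) by whose error is being judged
set.seed(3)
self_rt  <- rnorm(150, mean = 1750, sd = 380)  # judging one's own error
other_rt <- rnorm(150, mean = 1790, sd = 400)  # judging a classmate's error

t.test(self_rt, other_rt)  # Welch t-test; a large p-value gives no evidence of a difference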


By Steve Kaufmann, The Linguist Institute, West Vancouver, Canada

In my experience, deliberate vocabulary review is a support activity to the main task of exposing oneself to massive input through reading and listening. Any words I study come from my reading and listening, which I have chosen.

I would find it annoying to sit through a class where a teacher is providing me with his or her examples, or forcing me to produce examples, of words or phrases chosen by the teacher. I learn better from meaningful examples in meaningful contexts, even if they reappear in a somewhat random fashion. On the other hand, any communication that takes place in the classroom is a good thing, so to that extent talking about words and phrases is good. It is just that there are so many words and phrases to learn.

Studying with flash cards, when the words for review are chosen from content of interest to the learner, is actually quite enjoyable; I find that, and so do our learners. But it is a support activity, not “the very heart of what language learning is all about”, at least in my experience.

I did a podcast recently about what might be closer to the very heart of language learning. It is entitled “Language learning is like falling in love”.

The text can be found on my blog.

By Margaret Orleans

Dave and Jane Willis (authors of the COBUILD series) suggest, as a pre-reading activity, having students skim a new passage and identify a limited number of words (say, five) that they think are the most important for them to know before reading the passage more carefully. This can lead to fruitful discussion among small groups of students, as they share explanations of words that some but not all of them know and search for as many contextual clues as possible.

I agree that this is very fruitful, because it helps students create the links that will enable them to access the words from memory through various routes (rather than just through Chinese-English equivalents), but it is so time-consuming that it cannot be effective by itself. It is better as a model of how students should be dealing with words when they learn them: not just compiling simple lists that gather dust. In fact, students need to be learning hundreds of new words every month, and the best (but not always practicable) method for retaining them is constant exposure in new contexts, which means wide reading and listening.

By P. Ilangovan, Freelance Teacher Trainer, Teacher & Materials Writer, India

Quite a few people have now pointed out the fallacy of trying to learn a language by isolating its elements. And as one teacher pointed out in an earlier mail, even when a teacher is presenting a grammatical item to her class, the Ss are most probably also learning bits of pronunciation, vocabulary, and so on.

This means that they are not only focusing on what their teacher wants them to, but also “paying attention” to the other things that their SUBCONSCIOUS minds want them to attend to.

It is a fallacy of post-modern thinking (Outcome = or > Input) that Ss will only learn what teachers present.

Many years ago, Dick Allwright wrote a paper called “Why don’t learners learn what teachers teach?”, in which he answered, in part, the questions he raised:

Why learners do not learn everything they are taught;
How learners manage to learn things they are not taught;
How learners manage to learn things they are taught (some learners learned bits of language that they had done interactive work on, such as comprehending…).

Visit this URL to read his paper: Dick Allwright – Language Learning in Formal and Informal Contexts

All said and done, Ss appear to learn (and acquire) languages when they are LEAST focused on any one single item during communal learning. In other words, the more focus a teacher gives to an element of language, the less likely it is that all the Ss in the class will learn it.

Consequently, the less focus a teacher gives to the items to be learnt (in other words, the more embedded they are in discourse context), the greater the chances of Ss picking them up! THAT INDEED IS THE POWER OF SUB-CONSCIOUS LANGUAGE LEARNING!