Jason Anderson: Problems with grasping elementary rules of logic persist

In his well-received plenary on TBLT at the IATEFL conference last week, Neil McMillan cited Bryfonski and McKay's (2019) meta-analysis of 52 studies of TBLT programmes, which concludes that TBLT has a strong positive effect on second language learning outcomes. The following week, in his blog post (Methodological monocultures or ecosystems? A few reflections and cautions on an IATEFL plenary), Jason Anderson refers to a 2021 article by Boers et al. that highlights weaknesses in the 2019 meta-analysis, in order to question Neil's (and my) academic credibility. Here's what he said:

“the fact that the original study ever got published, and has since been cited over 170 times (Google Scholar stats) offers clear evidence of systematic bias, and arguably critical illiteracy, both within the academic peer-review community …… and among those that have since cited it as evidence for stronger forms of TBLT without first assessing it critically, including Geoff Jordan and Neil McMillan. It is indeed ironic that the very study that they have cited in support of their arguments actually constitutes evidence against their arguments” (emphasis in original).

There's no doubt that Boers et al. (2021) make a persuasive case; I quite agree with their view that the 2019 meta-analysis is so flawed that it doesn't provide reliable evidence for the efficacy of a strong version of TBLT. Furthermore, I acknowledge that I was wrong to cite it as I did in articles and presentations. But Anderson's post raises some doubts too: about his own "systematic bias" and "critical literacy", and about his continued struggle to assemble a logical argument.

Bryfonski and McKay's (2019) meta-analysis examined 52 studies. In their re-examination, Boers et al. (2021) immediately rejected 24 of those 52 studies on the grounds that they were not between-group studies. Of the remaining 28 studies, only one met their standards. I repeat: only one. The other 27 were judged ineligible for one or more of the following reasons:

(1) no pre-test administered to both groups;

(2) not a ‘genuine’ TBLT implementation and vague notion of ‘task’ in classroom activities;

(3) insufficient details on control group; and

(4) practice–test congruency (i.e. the probability of better performance due to practice in the treatment group that embeds test components, while control group does not).

Allow me to stress the point: only one of the studies in the Bryfonski & McKay meta-analysis was judged to provide reliable evidence on the efficacy of TBLT programmes.

Anderson focuses on the part of the Boers et al. (2021) paper that discusses the failure of the Bryfonski and McKay analysis to distinguish carefully enough between strong versions of TBLT like Mike Long's (which use tasks as the primary unit of the syllabus; see Long, 2015) and weak versions (commonly referred to as task-supported syllabuses, where the syllabus lists language elements and patterns to be taught explicitly during the course). This is certainly a major problem when discussing TBLT; the differences between, for example, Long's, Skehan's, Willis's and Rod Ellis's views of how a TBLT programme should be designed and implemented are enormous, and the Bryfonski and McKay meta-analysis fails to distinguish such crucial differences carefully enough. One of Boers et al.'s main recommendations is that studies of TBLT should take more care to identify what type of TBLT syllabus and pedagogy is being used, so that programmes using strong and weak versions can be clearly distinguished.

Anderson Logic Problem 1

Given all this, it's hard to understand how Anderson can use the Boers et al. paper to criticise Neil and me for bias and critical illiteracy, and at the same time use the flawed findings of the Bryfonski and McKay meta-analysis to argue for his own views about ELT.

Recall this: "It is indeed ironic that the very study that they have cited in support of their arguments actually constitutes evidence against their arguments". First Anderson says that the study Neil and I have used to support our argument for the efficacy of a strong version of TBLT is "deeply problematic", so poorly executed that it should never have been published in a refereed journal, and that any decent academic should have realised that its conclusions were completely untrustworthy. In the next sentence, he says that the same study provides evidence which supports weak, task-supported versions of TBLT, i.e. the ones he favours.

Logic Problem 2

The Boers et al. paper explains why the main finding of the Bryfonski and McKay study (a strong positive effect on second language learning outcomes, with an effect size of d = 0.93) is unreliable. Having pointed this out in order to discredit the claims Neil and I make for a strong version of TBLT, Anderson goes on to say that the strong effect size of the Bryfonski and McKay meta-analysis "actually provides compelling evidence for task-supported language teaching, something that I have always argued can also work effectively". Again, Anderson says that the faulty analysis can't be trusted to support the strong version of TBLT which he rejects, but can be trusted to provide compelling evidence for the kind of TBLT programmes that he endorses. If there are robust studies in the meta-analysis which examined task-supported programmes and judged them to be efficacious, then Anderson should find them and report them. What he can't do, if he wants to be taken seriously, is manipulate the data in the way he does, seemingly unaware that he's using the same data to make two contradictory claims.
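For readers unfamiliar with the statistic at issue, here is a rough, hypothetical sketch of what an effect size like d = 0.93 represents and of how a meta-analysis averages such values. The group means, standard deviations and sample sizes below are invented purely for illustration; they are not taken from Bryfonski and McKay (2019), and the simple unweighted average stands in for the more elaborate aggregation a real meta-analysis would use. The point it illustrates is modest: a pooled effect size is only as trustworthy as the individual studies feeding into it.

```python
# Minimal illustrative sketch (invented numbers, not data from any cited study).
from math import sqrt

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardised mean difference (Cohen's d) between treatment and control."""
    pooled_sd = sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Two hypothetical between-group studies, post-test scores out of 100.
study_a = cohens_d(mean_t=72, sd_t=10, n_t=30, mean_c=63, sd_c=10, n_c=30)  # ~0.90
study_b = cohens_d(mean_t=68, sd_t=12, n_t=25, mean_c=66, sd_c=12, n_c=25)  # ~0.17

# The crudest possible "meta-analysis": an unweighted mean across studies.
# If either study is methodologically unsound, the average is too.
average_d = (study_a + study_b) / 2
print(round(study_a, 2), round(study_b, 2), round(average_d, 2))
```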

Logic Problem 3

There's more. Anderson says that once Boers et al. had eliminated all the studies in the 2019 meta-analysis that either did not involve TBLT or were problematic in other ways,

 “they actually found a negative impact of TBLT: “an averaged d-value of -0.06” (Boers et al., 2020, p. 15)!”

Anderson insists that no general conclusions about the effectiveness of TBLT can be drawn because "the sample size is too small". In fact, as noted above, the sample size is precisely one. In other words, Anderson uses bold type, italics and an exclamation mark to emphasise the importance of a single non-significant datum.

Logic Problem 4

Anderson ends his discussion of the Bryfonski and McKay (2019) meta-analysis by arguing that, despite its weaknesses (51 of its 52 studies failed to meet reasonable criteria for comparative analysis), "the meta-analysis itself nonetheless provides evidence that tasks can fit into synthetic curricula and work well in diverse contexts worldwide, and also, importantly, that it's OK and effective for those teachers who prefer to, or are compelled to, work within the constraints of such curricula to do so, contrary to Long and Jordan's repeated arguments against such practices".

What are methodological ecosystems? Never mind, psychocognitive culture is appalling

In the final part of his post, Anderson pleads for a move towards methodological ecosystems, not monocultures. It's mostly devoid of content, apart from accusing Mike Long (Chief of the "psychocognitive" (mental-mental?) gang) of trying to carry out "alarming theory culling", which, according to Anderson, "prompted the sociocultural turn in applied linguistics".

Anderson has never seemed to grasp that in most fields of investigation, researchers deliberately limit the domain of their work. In applied linguistics, psycholinguistics is one domain and sociolinguistics is another. Some psycholinguists study what goes on in the mind when people learn an additional language, using constructs such as the mind, modularity, working and long-term memory, parsing, attention, implicit and explicit learning, declarative and procedural knowledge, and so on. That's their domain. Sociolinguists work in a different domain, with different interests, constructs and ways of studying the phenomena they try to explain. Unless the sociolinguists adopt a relativist epistemology, deny the possibility of objective knowledge and invite us to throw off our "positivist" blinkers so as to evaluate rival theories of SLA in the same way as you might evaluate paintings in the Prado, I see no reason why we can't discuss our different areas of interest, find common ground, collaborate and so on. Which is, of course, what often happens. But it requires a certain basic grasp of logic, reasoning, and the need for empirical evidence. I'm afraid Anderson's grasp of these matters often seems less than assured.

Anderson Double Standards Problem 1

In a number of papers published in praise of the Presentation-Practice-Production (PPP) approach to ELT (e.g. Anderson, 2016), Anderson claims that the meta-analysis by Norris & Ortega (2000) "indicates strongly that explicit instruction (including PPP) is more effective than implicit instruction". Rather like the Bryfonski & McKay paper, the Norris & Ortega (2000) study met with high praise at first, but was later the subject of much criticism. Shin (2017), for example, re-examined their procedures and reassessed the 49 unique samples used in the meta-analysis. She found three key methodological limitations, concerning (a) the data collection procedure, (b) the coding system, and (c) the statistical analysis. Shin concludes that "the lack of data quality inherent in the primary studies, the oversimplified coding scheme, and the inappropriate use of effect size statistics combine to compromise the validity of the conclusions Norris and Ortega have drawn from their meta-analysis". Anderson has never recognised the weaknesses of the Norris & Ortega study that he so frequently cites to support his arguments in favour of a PPP approach.

Logic Problem 5

Anderson's (2016) article in praise of PPP also illustrates his shaky grasp of elementary logic and of the distinction between facts and value judgements. Anderson bases his arguments on the following non sequitur, which appears throughout the paper: there is evidence to support explicit (grammar) instruction, therefore there is evidence to support the "PPP paradigm". While there is certainly evidence to support explicit (grammar) instruction, and indeed it is generally accepted that explicit instruction has a role to play in ELT, this evidence can't logically be used to support a PPP methodology. Explicit instruction can take many forms, including, for example, different types of error correction, different types of grammar explanation, and different types of explanation of unknown vocabulary. PPP, on the other hand, involves explicit (grammar) instruction of a very specific type: the presentation and practice of a linear sequence of chopped-up bits of language. Anderson appeals to evidence for the effectiveness of a variety of types of explicit instruction to support the argument that PPP is efficacious in many ELT contexts. In doing so, he commits a schoolboy error in logic.

Logic Problem 6

As for not being able to distinguish a fact from a value judgement (i.e. difficulty understanding the logical rule “you can’t get an ought from an is”), in his blog post The PPP Saga Ends, Anderson says in reply to a comment by Neil McMillan:

the notion of ‘linear progress’ is a reflection of a much wider tendency in curricula and syllabi design. Given that the vast majority of English language teaching in the world today is happening in state-sponsored primary and secondary education, where national curricula perform precisely this role, we can predict to a large extent that top down approaches to language instruction are going to dominate for the foreseeable future

Well yes, as a matter of fact, the notion of linear progress is a reflection of top-down approaches, and yes, they do dominate ELT today, but that doesn’t mean that we should countenance the obviously erroneous notion of linear progress, or approve of top-down approaches to language instruction. 

References

Anderson, J. (2016). Why practice makes perfect sense. English Language Teaching Education and Development, 19(1), 14-22.

Boers, F., Bryfonski, L., Faez, F., & McKay, T. (2021). A call for cautious interpretation of meta-analytic reviews. Studies in Second Language Acquisition, 43(1), 2-24.

Bryfonski, L., & McKay, T. H. (2019). TBLT implementation and evaluation: A meta-analysis. Language Teaching Research, 23(5), 603-632.

Norris, J., & Ortega, L. (2000). Effectiveness of L2 instruction: A research synthesis and quantitative meta-analysis. Language Learning, 50, 417-528.

Shin, H. W. (2017). Another look at Norris and Ortega (2000). Teachers College, Columbia University Working Papers in TESOL & Applied Linguistics, 10(1).

Peter Fenton

IELTS and EAP Teacher | MA TESOL | Trinity DipTESOL

4w

I personally don't think one even needed to read beyond the Norris and Ortega (2000) paper itself to notice problems with it, as they acknowledge in the conclusion. For example: 'First, testing of learning outcomes usually favors explicit treatments by asking learners to engage in explicit memory tasks and/or in discrete, decontextualized L2 use.' This sentence alone should set the alarm bells ringing. The Spada and Tomita (2010) study, which Anderson also uses to support PPP, includes similar caveats.

Bruno Albuquerque

Co-Founder @ELT in Brazil | Communications Director @BRAZ-TESOL | Teacher Educator | Writer | ELT Consultant | Teacher

4w

Luiz Otávio Barros here's a reply to that text you tagged me on! The conversation continues 😀

Neil McMillan

EAP Lecturer, University of Glasgow; Founding member of Serveis Lingüístics de Barcelona

4w

There is nothing in Bryfonski and McKay (2019) that constitutes evidence *against* anything I said at IATEFL. Clearly, however, we need to highlight better evidence in favour of programmatic TBLT. Absent a rigorous enough meta-analysis, we can point to individual studies - e.g. Gonzalez-Lloret and Nielson (2024) (Spanish programme in the US); McDonough and Chaikitmongkol (2007) (English programme in Thailand); and Gong & Skehan (2022) (English programme for school-age children in China). I think the argument may also be strengthened by looking at EAP courses which may not always be organised purely around tasks, but are analytic programmes which feature communicative tasks alongside a focus on genre and academic literacies.
