
Ofqual Chief Does The Decent Thing And Resigns. Meanwhile The Main Culprit, Gavin Williamson Clings On

Gromit | 19:53 Tue 25th Aug 2020 | News
67 Answers
https://www.bbc.co.uk/news/education-53909487

There is a pattern emerging.
Cummings, Jenrick and Williamson all *** up massively, but brazen it out as though they are angels.

Bunch of arrogant twits?

Answers

41 to 60 of 67


Incidentally:

// One needs no clever way. The idea of exams is primarily to give an indication of the ability of the individual to understand and take in the subject matter. That doesn't rely on format of exam, or the syllabus. //

This is also utter nonsense. For one thing, syllabuses change, which makes comparison across years difficult: if students in 2020 were tested on their understanding of something that was *not* tested in 2019 (or vice versa), then it's a comparison of apples with oranges. Even within a single year, comparisons between exam boards are difficult: although their syllabuses are more or less the same, different boards emphasise different aspects, or style their questions differently, making it easier or harder for some students to understand what is being asked and how to answer it.

This also ignores the element of luck in exams. No exam can test everything, so the only safe approach is to understand all the material; but, in principle, a student who learned 95% of the course can be beaten by a student who learned only 5% of it, if the exam happens to focus on that 5% -- and it should be obvious that in that case the exam has ranked the students incorrectly. It has certainly happened to me: in my final-year university exams I gave up studying one module, because I simply didn't understand it, had no interest in it, and was utterly bored by the whole thing, so I read the first five pages of an 80-page set of lecture notes and hoped they came up. If they hadn't, I was resigned to a zero. In the event they did, and I got 65%. That is luck, not skill. The exam failed to assess me correctly.
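
To put a rough number on that luck element, here's a toy simulation in Python (the figures are my own inventions, nothing to do with any real exam): one exam covers a small sample of the course's topics, and each student only scores on the topics they happen to have learned.

import random

N_TOPICS = 20      # topics in the course (invented numbers throughout)
EXAM_TOPICS = 3    # topics this particular exam happens to cover

def upset_on_one_exam():
    """One exam, two students: does the narrow reviser beat the broad one?"""
    exam = random.sample(range(N_TOPICS), EXAM_TOPICS)
    narrow = set(random.sample(range(N_TOPICS), 5))    # learned 25% of topics
    broad = set(random.sample(range(N_TOPICS), 15))    # learned 75% of topics
    narrow_score = sum(topic in narrow for topic in exam)
    broad_score = sum(topic in broad for topic in exam)
    return narrow_score > broad_score

trials = 100_000
upsets = sum(upset_on_one_exam() for _ in range(trials))
print(f"narrow reviser outscores broad reviser in {upsets / trials:.1%} of exams")

The upset rate is small but it is not zero, and that's the whole point: a short exam is a small random sample of the syllabus, so it cannot reliably rank two students.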

A separate flaw is that exams are very dependent on mood and circumstances. Somebody I know once failed an exam because it happened on the same day that their grandfather died, so they spent the entire three hours crying. I don't think anyone could pretend that the exam in that case was a fair assessment either.

This is a wider discussion than just 2020, but the underlying point is that exams are a terrible way of assessing ability in the first place. They are as much about your ability to regurgitate information in a short space of time, the luck of what comes up and what doesn't, and your mood on the day you sit them, as they are about anything else. I'm not saying ability doesn't matter, but ability to do what? To be good at exams, not good at the subject.

This does bear on any predictive model, because ultimately a student's performance in an exam is only partially connected to their ability at the subject. Clearly a bright student can be expected to do well, and a student who hasn't learned the material can be expected to do badly, but that is an expectation, not a certainty. On the other hand, any attempt to compensate for the randomness would amount to arbitrarily assigning revised grades on the basis of little more than a coin toss.

Ofqual's approach was flawed from the start, and, since you are trying to achieve the same thing they were going for, so is yours.
Jim, any chance of a precis at the end of your posts?
I thought that "Ofqual's approach was flawed from the start..." captured the sense of my post fairly succinctly.
Thanks jim :-)
I should just add, though, that Ofqual's approach was based on instructions given to them by the Minister (i.e. by Williamson): as soon as they were told to ensure that results were consistent with previous years, the approach was doomed, because there is simply no fair way to achieve that.
With the benefit of hindsight, they should have run some sort of exams, so that they had something tangible to base the grades on. When a system is built around a final exam and you take that exam away, you've ripped out the foundations and should expect the whole thing to come crumbling down. So, who decided not to run any exams?

And then, even without exams, they had literally months to come up with a fair replacement. The fact that they didn't model and test their algorithm, and just put the results straight out there, beggars belief. It's like writing the code for a nuclear power station and then installing it without testing ... "Looks good enough. We can always fix any bugs as and when they happen." Again, who was accountable for that?

If I had been in Williamson's shoes, I would have wanted to know at the time:

1) Is there really no way we can run any kind of exams? Really? You know the problems this is going to cause us ...
2) Show me the results of the modelling your algorithm has produced, so we can assess anomalies and unfairness and the likely political fallout.
It's not really true to say that Ofqual didn't test it, although they certainly didn't test it enough. What I mean by "flawed from the start" is that, to an extent, it doesn't matter how much testing is done. Their objective, as instructed by Williamson, was to match 2019's national results, and since that was the criterion, anomalies in individual cases were more or less irrelevant: fix the individual anomalies, and the national picture -- the target of results being no more than 1-2% better than 2019's -- is broken.
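
To make that concrete, here is a toy sketch in Python of what pinning grades to a fixed national distribution means. It is emphatically not Ofqual's actual algorithm, and the grade shares are invented, but the shape of the problem is the same.

HISTORICAL_SHARE = {"A": 0.20, "B": 0.30, "C": 0.30, "D": 0.20}  # invented 2019-style target

def grade_by_distribution(ranked_students):
    """Hand out grades down a rank-ordered cohort so the shares hit the target.

    Note what plays no part here: the cohort's actual quality. The shares
    are fixed in advance, so "fixing" one anomaly (moving a student up a
    grade) must push somebody else down to keep the totals on target.
    """
    n = len(ranked_students)
    grades, cursor = {}, 0
    for grade, share in HISTORICAL_SHARE.items():
        quota = round(share * n)
        for student in ranked_students[cursor:cursor + quota]:
            grades[student] = grade
        cursor += quota
    for student in ranked_students[cursor:]:   # rounding leftovers
        grades[student] = "D"
    return grades

cohort = [f"student_{i}" for i in range(1, 11)]   # best first, however ranked
print(grade_by_distribution(cohort))
# Always 2 As, 3 Bs, 3 Cs and 2 Ds -- however strong this year's cohort is.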

I hit answer too soon, but I would only have added general agreement that it beggars belief. The technical report shows a lot of testing, but it's all focused on national considerations, individual students be damned -- and it saddens me that this was either not noticed or regarded as irrelevant.
I wonder if they thought of running the new system on the 2018 results, to see how closely its outputs matched the actual 2019 results?
Yes, they did... See the technical report...

(Although it's claimed that there was a flaw in the way they did this, i.e. they tried to predict the 2019 results using a model shaped by the 2019 data, which is effectively cheating.)
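
For anyone curious what that flaw amounts to, here's a minimal illustration in Python with invented marks. A "backtest" that tunes its model on the very year it then predicts is circular; the honest version fits on 2018 alone and is scored against unseen 2019 results.

import statistics

# Invented marks, grouped by exam centre.
results_2018 = {"centre_A": [64, 60, 68], "centre_B": [55, 60, 58]}
results_2019 = {"centre_A": [70, 74, 71], "centre_B": [52, 61, 57]}

def centre_means(results):
    # A deliberately crude "model": predict each centre's mean mark.
    return {centre: statistics.mean(marks) for centre, marks in results.items()}

def backtest_error(model, actual):
    errors = [abs(model[centre] - mark)
              for centre, marks in actual.items() for mark in marks]
    return statistics.mean(errors)

leaky = centre_means(results_2019)    # shaped by the year it then "predicts"
honest = centre_means(results_2018)   # fitted on the previous year only

print(f"leaky 2019 error:  {backtest_error(leaky, results_2019):.2f}")
print(f"honest 2019 error: {backtest_error(honest, results_2019):.2f}")

The leaky check always looks at least as good as the honest one, which is exactly why it can't tell you whether the model would hold up on a year it hasn't seen.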
I agree with OG. And no, I don't trust the teachers, who not only have their own interests at the forefront but are often very biased. Universities will now be swamped with applications from undeserving pupils. I think the results should have remained as amended by the authority, and kids should learn to cope with the fact that they got grades they didn't like. That's real life. And the government should never have given in. If the kids want higher grades, they should retake and reapply next year.
Oh my god. Really?!
Yes, really!
I'm confused as to how you can say that while ignoring all of the flaws I've pointed out, is all. Why should a student accept a U when they were predicted a B and have been working at a B standard all year?
And I base my feelings on many years of working in schools (not as a teacher, but in a senior admin capacity).
Teachers were never going to be entirely accurate. It's a shame we didn't work out a way to run some form of testing as well, so that there would at least have been a compromise, as in every other year.
Predictions can be very flawed. The examination system is not perfect, but I don't see any alternative. Basically, I have lost my trust in the teaching profession.
I respect your views Jim, but can't agree with them.
My children receive estimated grades from Year 7 (based on CAT scores). They are regularly assessed and receive grades throughout the five years of high school including mock exam grades. This gives a pretty clear picture of what they should achieve in their actual GCSEs (although some will have a shocker in the exams and some will pull out all the stops). Teachers’ estimated grades are backed up by this data. Slagging off the teaching profession doesn’t achieve anything.
I have every respect for the teaching profession... but for years we have used their assessments alongside objective exams to get the grades. We have never just asked them before (and I'm glad, as they got all mine wrong). This is the first year we haven't had that compromise and balance, so it is bound to be different.
