Saul Nassé – LABCI
Alternative facts? Lies and deception? Post-truth? Oxford Dictionaries made ‘post-truth’ their word of the year for 2016. Apparently, post-truth usage went up 2,000% last year alone. That’s quite some increase for a phrase we’d barely heard of a couple of years ago.
Post-truth – the idea that facts can be bent and changed to suit your argument. That emotional appeals and prejudices trump logic and reason.
Well, I want to say something which is simultaneously controversial and comforting. It’s a big lie! Yup. The irony of it! This notion of ‘post-truth’ is a great big lie.
Because the truth is that the whole premise is fundamentally flawed. To talk about a post-truth era implies there was once an era of truth. So when was that?
The age when they thought the world was flat? The age when they thought leeches provided the cure for everything? The age of Machiavelli, the age of Henry the Eighth, the age of Nixon?
The truth is that deception – alternative facts if you like - has been around for as long as there’s been communication. In Ancient Greece, the Sophists even gave training courses in how to lie.
So forget this stuff about post-truth. My argument is the opposite. Where some people are filled with negativity and pessimism about this modern age, I am filled with excitement and hope.
I believe we are actually living in an age of unprecedented truth, of unparalleled intelligence and insight. It’s a post-post-truth world, where we all have the facts at our fingertips.
Now, every time Donald Trump speaks, every time Vladimir Putin speaks, their comments are instantly subjected to challenge and debate online.
If you Google ‘Is Donald Trump wrong on climate change?’ you get more than 2 million results. In the past, what the President of the United States said went as fact. Unchallenged. Well that era has gone for good.

And it goes bigger than checking a Wikipedia page or running a speech through PolitiFact.com. People have access to a massive wealth of published data, of learned journals that they can sift through themselves. This post-control world is one where facts have become democratised.


But the digital revolution goes much further than old facts being at people’s fingertips. We’re now deep in the data revolution, where new facts are being revealed simply from mining the vast tracts of data that are being created in this always on, always connected world we now live in.

IBM estimates that 2.5 quintillion bytes of data are created every day – the equivalent of 100 million Blu-ray discs. They also estimate that 90% of the world’s current data were created in the last two years alone. By crunching this data, we can gain some of the most astonishing insights.


We can use data to predict and understand trends in health, crime, energy consumption, transport use, food, the environment… And it’s leading the whole human race to become much smarter. Data helps us to boldly go to places the brain has never been before. We’re not living in a world of post-truth, we’re living in a world of data-assisted truth.
Let me pose a question to you. ‘Who is the first person to know when a woman is pregnant?’
Is it the woman herself? Her partner? Her doctor?
Well these days it’s often her supermarket. It all started when a supermarket chain found they could detect slight shifts in a woman’s purchasing patterns almost immediately after conception.

Some of them were pretty predictable… For instance, a slight increase in the amount of fruit and vegetables that were bought. But the others were less easy to explain… There was a certain propensity towards buying products in green packaging.


No-one could explain what the causal relationship was, but in the world of data the facts don’t lie. They simply worked back nine months from the date that thousands of women first bought nappies in the supermarket and looked at the change in the patterns!
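To make that concrete, here’s a minimal sketch of the back-calculation in Python, with invented purchase records standing in for the supermarket’s data:

```python
import pandas as pd

# Toy sketch of the back-calculation: from the date of a customer's first
# nappy purchase, estimate conception roughly nine months earlier, then
# compare basket patterns before and after that date. Data are invented.
purchases = pd.DataFrame({
    "customer": ["a", "a", "a", "b"],
    "date": pd.to_datetime(["2016-01-10", "2016-09-02",
                            "2016-10-05", "2016-10-05"]),
    "product": ["fruit", "nappies", "fruit", "nappies"],
})

first_nappies = (purchases[purchases["product"] == "nappies"]
                 .groupby("customer")["date"].min())
estimated_conception = first_nappies - pd.DateOffset(months=9)
print(estimated_conception)
```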
There are heaps of other examples in the world of data-assisted truth.
For instance, there is an academic called James Pennebaker at the University of Texas at Austin. He has spent his life analysing the different use of language of men and women.
He is the Sherlock Holmes of language. He has discovered, for instance, how women use more first person pronouns than men; men use more articles; women use more verbs, social and cognitive words; and men use more nouns, numbers, big words and swear words. Or in my case a number of big swear words.
He’s used these insights to develop an analytical tool. This is now able to say, with some accuracy, whether the author of a text is a man or a woman.
He put 100,000 blog posts through this tool and it was able to say with 72% accuracy whether the author was a man or a woman. This rose to 76% when the topics covered in the blogs were also taken into account.
Genius!
When he asked real people the same question – whether the author was a man or a woman – the result was much worse. So the machines do a better job than humans.
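To give a flavour of how such a tool might work (this is not Pennebaker’s actual system), here is a minimal sketch of a classifier trained on function-word counts, the kind of features his research highlights. The word list and training texts are invented placeholders:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Function words carry the signal: pronouns, articles, cognitive words.
# This list and the training texts are toy placeholders, not Pennebaker's.
FUNCTION_WORDS = ["i", "me", "my", "we", "the", "a", "an",
                  "think", "feel", "know", "one", "two"]

texts = [
    "i feel i know what we think about it",      # stand-in 'female' sample
    "the one thing was a fact and an outcome",   # stand-in 'male' sample
]
labels = ["f", "m"]

model = make_pipeline(
    CountVectorizer(vocabulary=FUNCTION_WORDS),  # count only function words
    LogisticRegression(),
)
model.fit(texts, labels)
print(model.predict(["i think i feel we know"]))  # leans 'f' on these toys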
There are all sorts of buzzwords flying around in this brave new world. The Semantic Web. Machine Learning. The Internet of Things. And buzziest of all… Big Data. The good news is these powerful new sciences are making their mark in the world we know and love, the world of language learning. I’d argue the opportunity for us is not so much in the Big Data space, but in thinking about the Right Data – using all these wonderful new techniques, but applying the knowledge and understanding that we’ve built up as language teachers, as assessment experts. That’s why for me data-assisted truth has the ring… of truth.
The data are already out there. Take Cambridge English. We generate half a billion marks every single year. Half a billion? It sounds impossible. But take 5 million exams with on average 100 marks per exam, that’s where you get to! 

 

Look at this.


This is a small, representative data set for a typical Cambridge English First listening test – not live data - and it contains 900 pieces of data generated by 30 candidates answering 30 questions.  Candidates are on the y-axis running top to bottom and the items are on the x-axis running left to right.

 

When I mentioned the half billion marks, this is what most of them look like - simple binary data.  A zero marks an incorrect answer in the red squares; a one marks a correct answer in the green squares.  Data, plain and simple.



 

Candidates are listed sequentially by candidate number and items are listed in the order they appeared in the exam. At this stage no patterns are visible.  It’s all data, and no truth.

 

To get to the data assisted truth, we have to ask the right questions. We do this by crunching the data through one of our 40 statistical models.  



 

Obviously, the first question asked by everyone involved with an exam is ‘How well did the candidates do?’

So if we ask the spreadsheet to reorder the data so that the best performing candidate is at the top, and the worst performing one is at the bottom, you start to see a different pattern.  You can see that most of the green squares – the correct answers – are now in the top half of our dataset, and most of the red squares – the wrong answers – are in the bottom half.

 

But a much clearer, and more striking, pattern emerges if we sift the data in another way. We keep the best candidate at the top, and the worst candidate at the bottom. But we change the order of the answers, so instead of them being in the order from the exam, we make the first column the question most people got right, and the last column the question most people got wrong.
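Here’s a minimal sketch of those two sorts in Python, with a random matrix standing in for the real 30-by-30 marks:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the 30 candidates x 30 items matrix of 0/1 marks.
responses = rng.integers(0, 2, size=(30, 30))

# Sort candidates (rows) so the best performer is at the top...
row_order = np.argsort(responses.sum(axis=1))[::-1]
# ...and items (columns) so the easiest question comes first.
col_order = np.argsort(responses.sum(axis=0))[::-1]

sorted_responses = responses[row_order][:, col_order]
# Correct answers (1s) now cluster towards the top-left,
# wrong answers (0s) towards the bottom-right.
print(sorted_responses)
```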



 

On the left of the chart, where the questions are easiest, only the worst candidates are getting them wrong. On the right of the chart, where the questions are hardest, even some of the best candidates are getting them wrong. This shows that the questions are doing a good job of discriminating between candidates according to their ability.

 

However, look at questions 21 and 27.  The data shows that strong candidates are getting these items wrong while weaker ones are getting them right.  We would not want to see this type of pattern resulting from an actual exam – these two questions are not working effectively. That is why we pre-test all our items and subject them to this kind of analysis. We must be confident the exams provide accurate, fair results which give a clear, reliable picture of every candidate’s true level of ability in English.



 
There is another unexpected pattern hidden in this dataset. These three candidates have given almost identical answers. That immediately raises a suspicion that some sort of malpractice may be taking place. And so, as a matter of course, we run further statistical analysis to see if we can find any further patterns by assembling the right data. For example, we compare every candidate’s answers with every other candidate’s, pair by pair: candidate one against candidate two, then against three and so on. This sample of 30 candidates alone generates 435 different pairs.
Brace yourselves, linguists, for some mind-bending mathematics. This is what the analysis shows. On the x-axis across the bottom is the average number of wrong answers for a pair of candidates. So if Candidate One got 30 wrong, and Candidate Two got 50 wrong, the average is 40. On the left are pairs with very few answers wrong, on the right pairs with lots of answers wrong.
On the y-axis along the side we plot the average number of common wrong answers for the pair. That’s what you’re looking for – when wrong answers are in common it’s most likely there has been collusion. Much more so than common right answers.

So as you move from pairs with a low average number of wrong answers to a high average, you of course get a higher number of common wrong answers. Which is how you can spot the outliers. Look at these three pairs – a significantly higher number of common wrong answers than for other pairs at the same average number of wrong answers. Possible collusion - we’ve found the right data to focus some human effort upon.
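A sketch of that pairwise comparison, again on stand-in data: rank the 435 pairs by how far their shared wrong answers exceed what you’d expect by chance. The ‘expected overlap’ here is a naive independence assumption, not Cambridge’s actual statistical model:

```python
from itertools import combinations
import numpy as np

rng = np.random.default_rng(1)
responses = rng.integers(0, 2, size=(30, 30))  # stand-in 0/1 marks
wrong = responses == 0                         # True where an answer is wrong

def expected_overlap(a, b):
    # Naive expectation if the two candidates' wrong answers were independent.
    return wrong[a].sum() * wrong[b].sum() / wrong.shape[1]

# 30 candidates give 435 pairs; rank them by how far the shared wrong
# answers exceed that naive expectation.
pairs = sorted(
    combinations(range(len(responses)), 2),
    key=lambda p: (wrong[p[0]] & wrong[p[1]]).sum() - expected_overlap(*p),
    reverse=True,
)
for a, b in pairs[:3]:   # the biggest outliers merit human investigation
    common = (wrong[a] & wrong[b]).sum()
    print(f"candidates {a} and {b}: {common} shared wrong answers, "
          f"about {expected_overlap(a, b):.1f} expected")
```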


Because when we do find unusual patterns relating to candidate or item performance we always investigate the cause – we don’t just rely on the data, as it does not give us the complete answer. We bring human expertise and judgement into play in order to interpret the evidence. Data helps us to make informed decisions, but data alone is often not enough. It really is data-assisted truth.
Cambridge English has always collected such data but until relatively recently this was a laborious and time consuming process. With the advent of purely digital products data has become much easier to collect, collate and analyse.
In January, we changed the technology behind our Test your English webpages and it’s enabled us to view huge amounts of data very quickly. Here’s some of the output. You can see that 472,129 tests have been taken, with 8,335,935 questions answered, 5,156,146 correct answers and an average score of 10.92. That’s a lot of data.
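A quick sanity check on those figures – the average score is simply correct answers per test:

```python
tests, answered, correct = 472_129, 8_335_935, 5_156_146
print(round(correct / tests, 2))   # 10.92 correct answers per test on average
print(round(answered / tests, 1))  # roughly 17.7 questions answered per test
```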

One of the more difficult questions is:


It was only ten days ago ...... she started her new job.

  1. Then

  2. Since

  3. After

  4. That

The correct answer is ‘that’.


This is the data output of learners at all levels. As we can see, fewer than a quarter of people got the correct answer, ‘that’. But if you delve into the data, you can see more interesting patterns.
This is a chart of learners who score ‘below A2’ in the test overall. At this level we see a fairly random set of answers; they’re quite evenly weighted. Learners find this such a complex construct that they are just guessing.
Here are the learners who score B1 overall. The majority incorrectly answered ‘since’. At this level, learners have learned some past tense constructions, and some constructions with ‘since’, and they default to that being the answer. Interestingly, knowing more gives them a worse average answer than making a random choice!
Finally, these are C2-level learners. Here the majority have acquired the ‘It was … ago that …’ construction. And they’re getting the right answer.
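Tallying that per-level breakdown is a simple grouping exercise; here’s a sketch on invented response records (the real data are Cambridge’s own):

```python
import pandas as pd

# Invented response records for the 'It was only ten days ago ...' item.
df = pd.DataFrame({
    "level":  ["below A2", "below A2", "below A2",
               "B1", "B1", "B1", "C2", "C2"],
    "answer": ["then", "since", "that",
               "since", "since", "that", "that", "that"],
})

# Share of each answer option within each proficiency band.
distribution = (df.groupby("level")["answer"]
                  .value_counts(normalize=True)
                  .unstack(fill_value=0))
print(distribution)
```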
So the Right Data can help us gain really personal insights into performance as we saw with the malpractice piece, but it can also give us insights into what’s going on at a much larger level.
One other trick we can pull with our data is to take individual exam results to look at skills profiles at a national level. This is territory where you need to tread very carefully, to find the right data that get you to data-assisted truth. Then add that to meaningful expert interpretation.
Simple rankings on their own are not useful or meaningful – you’re skating on the thin ice of post-truth. I’m sure you’re familiar with the PISA studies, but the OECD, the authors of the study, warn that big data generated from test results must be used in conjunction with additional contextual information such as descriptive data, research findings and practitioner knowledge. That’s the only way you’ll generate insight which can really improve practice at the level of the individual classroom. This requires skill and an understanding of which data is important, and why.

Data can help to shape policy at a national level. It can also help to plan effective teaching at a school and class level. And it can help teachers to spot and prioritise areas for improvement. But if it’s going to do this it has to be good data and it has to be used properly. Otherwise it can lead to bad decisions and can discourage teachers and learners, leading to negative learning experiences.


We’ve all seen the global “league tables” which claim to rank countries’ use of language according to the average scores of people who take a quick online test, without testing their speaking skills. These usually put Latin American countries low down the ranking. Great publicity for the organisations behind the survey, and great fun for journalists, but what do they really tell us? I’ll stick my neck out and say next to nothing. You have to know much more about the participants and to understand the nature of your data before it can be of any use.
Where the data starts to become really useful is when we start to look at relative strengths and weaknesses by skill, level and age. That’s when we start to discover some much more useful insights. For example, at Cambridge English, we test the four skills – listening, speaking, reading and writing. Which do you think is the strongest skill in Latin America?

It’s actually speaking. This is a general pattern worldwide, and it happens largely because candidates for the Cambridge English exams know that in order to succeed they have to really practise their speaking skills, which in turn means that in preparing for the exam, they are learning the skills they really need in order to use English effectively. If you don’t test the four skills, learners won’t concentrate on speaking, simple as that. And they won’t be effectively mastering the communicative skills we all know are so important.


But, here’s the interesting bit. While speaking is on average the strongest skill across the world, there is quite some variation from country to country. For instance, Mexico is strong on speaking, with scores at B1 and B2 that are in line with Italy and France, and significantly ahead of Spain. A lot of this is down to systematic efforts by the state governments we work with and by teachers in many individual schools to really encourage students to speak English, and we all know what a struggle that can be.
We also see relatively high rankings for speaking in other countries in the region, so Argentina and Brazil do well at level B1, while at C1 level Argentina, Peru and especially Chile score highly.
This is just one example of the kind of data we have available and can share with policy makers and educators across the world. Letting everyone harness the right data to get to a world of data-assisted truth.
What’s getting more interesting in the world of language learning is the kind of data that start to be generated as digital suffuses every aspect of students’ learning journeys. Especially in the classroom, where it’s now possible for teachers to gain new insights into their students’ work. Let’s look at an example.
At the heart of communicative language learning, teaching and assessment is the notion of the task.   Let's just take a moment to remind ourselves what the process is for task-based learning.  

A learning objective is set for the learner, prior knowledge is checked and so is the learner's understanding of the task. The learner undertakes a language activity, observed by the teacher who interprets the language produced against the goal.

The teacher then decides what feedback to give, whether to change the method of instruction and the goal. The teacher corrects errors, offers guidance and gives support.

Task-based learning creates a virtuous circle whereby the output of a language activity becomes the input for the next activity. It creates the opportunity for teachers and learners to repeat, extend and improve performance outcomes.


There’s one step I missed out – that’s the record keeping part of the process. It’s a key part, but in the traditional classroom it’s the hardest thing to do, thanks to time constraints and the practical difficulties of recording language production as it is being performed.
Harness digital technology, and you can record learning and performance in real time, as it happens. It’s here, in the middle of the task cycle at the heart of learning, that digital technology makes the difference and can be transformative.  It closes the circle and accelerates learning. It allows evidence for learning to flow freely and acts as a catalyst to convert outputs into inputs.

This evidence for learning takes many forms. It can be the diagnostic profile generated by the student’s score in a progress test taken within a learner management system, as in our Empower course.

And as the Internet of Things reaches into the classroom, everything that takes place there can be captured, recorded and replayed.  Learners can watch videos of their speaking, and with their peers reflect on their performance.  This self-reflection is in turn a virtuous circle as learners become better learners: more motivated and taking ownership.
Evidence for learning helps us to personalise learning according to individual learners’ needs. To see what students can and can’t do, and how they learn, what works for them and what doesn’t. Goals, tasks and instruction can be scaffolded and adjusted to maximise learning opportunities within what Vygotsky calls the ‘zone of proximal development’, between tasks that are too easy and boring or too difficult and de-motivating. The right data can help you target the right task at the right learner.
So far I have been talking about what we humans can learn from digital data. One of the most exciting developments made possible in the world of data-assisted truth is what happens when the machines start learning – the dawn of artificial intelligence. So what can a computer learn to do? And what does this have to do with language learning? Take a look at this.


Play video

This technology has lots of applications, the most obvious being auto-marking, but we’re putting it to use in the field of learning too.


Take a look at this, it’s called Write and Improve. It provides an instant way for your students to practise their written English. As you saw in the film, the underlying technology is cutting edge but it is really simple to use. It’s free and it provides your students with a choice of writing tasks that they can complete in their own time. When they’ve finished, they submit the completed task for marking and the system very rapidly provides feedback. This consists of a proficiency level aligned to the Common European Framework of Reference – the CEFR – and pedagogic advice to help them improve specific areas of their writing. Based on this feedback, they can try again and resubmit an improved version – many times if they want to. So students write, and improve.
Hence the name!
I’d now like to show you how that works by demonstrating the system online – in other words, a live demo! You go to the website and click the Start Writing button. There is a range of questions or prompts at different levels: Beginner, Intermediate and Advanced.
I am going to go to one I prepared earlier. It requires me to write a report on ‘Making a video on daily life at my school’. Now, as a native English-speaking former TV producer, hopefully my writing on this topic would be ok…! So I have here a piece of typical B1 learner’s writing already copied in. The prompt asks for 150-190 words and this handy green bar tells you how many words you have entered. Now the text is in, I submit my task for marking by the system and wait for the result, which takes under 15 seconds.
You see that a rather large badge has appeared, and this is one of the gamification features included to motivate learners.
Before I start to improve my writing, this is what all the different symbols and colours mean. We have word-level feedback and sentence-level feedback. The darker the highlight, the more room there is for improvement.
I’m just going to pick out a few examples – including a couple of errors; I’ll make the corrections to them, and then resubmit the text.
In paragraph 1, you notice it is marked as white – a well-formed sentence.
In paragraph 2 you can see some errors in the darker highlight. I am correcting them now. And now I can resubmit.
As you see, this now gives new feedback showing that although the level hasn’t changed, the writing has improved.
I’m now making the remaining corrections. In the orange sentence, I am changing ‘enjoin’ to ‘enjoy’ and adding ‘the’ before ‘students’.
I have also spotted a couple of errors to correct in Paragraph 4. Let’s correct those and click check again.
You can see that this has raised the level to B2 – notice the green circle and some exciting feedback.
This graph shows how progress is charted on the CEFR. The really neat thing about machine learning is that every piece of data that I have submitted during this session – and it’s happened here and now for real – has been captured by the system. And not just stored, but used by the machine learning engine to get even better with time. Data assisted truth again.
And there’s a brand new feature released today. We want learners to experiment and develop their writing. We don’t want them to repeat set phrases. So we have introduced Prompt Relevance - does the answer match the question? It is a really interesting approach and another very different example of using the right data.
Prompt relevance uses a technique called distributional semantics. Brace yourself again! What the system does first is look at the prompt and break it down into the ‘stem’ words. It strips away the ands and ifs and buts, and converts the rest into the stem of the words. Filming would become film; walked, walk; and so on.

It then heads off to one of the biggest data sets in the world – Wikipedia. The content may not always be 100% correct, but it’s an amazing user-generated database, where thousands of words appear alongside thousands of other words. So the software looks at the stem words from the prompt and sees which other stem words it’s associated with across the mass of Wikipedia. For example, we would find that the word "film" tends to occur alongside words like “movie”, "watch", "actor", “director”, “Hollywood” and “action”. While the word "movie" rarely connects with "avocado" for instance.


The system then goes to the learner’s answer, strips it back to stem words and sees if they match with the expectation it has generated from Wikipedia. If the words in a learner’s text aren’t related to the words in the prompt the writing will get a low prompt relevance score and if they have used many related words they will get a high prompt relevance score.
So this text scores a five, but if you look at this text, which is about the environment, you can see it only gets a one.
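For the curious, here is a toy sketch of the idea in Python: crude stemming, stopword stripping, co-occurrence vectors built from a tiny stand-in corpus (Wikipedia in the real system), and a cosine similarity between prompt and answer. Every detail here is a simplified placeholder for the real distributional-semantics pipeline:

```python
from collections import Counter
import math
import re

STOPWORDS = {"the", "a", "an", "and", "if", "but", "of",
             "to", "on", "at", "in", "with", "my", "we"}

def stems(text):
    # Crude stemmer: lowercase, drop stopwords, strip common suffixes.
    out = []
    for w in re.findall(r"[a-z]+", text.lower()):
        if w in STOPWORDS:
            continue
        for suffix in ("ing", "ed", "s"):
            if w.endswith(suffix) and len(w) > len(suffix) + 2:
                w = w[:-len(suffix)]
                break
        out.append(w)
    return out

# Tiny stand-in corpus; the real system mines Wikipedia-scale co-occurrence.
CORPUS = [
    "the actor watched the film with the director in hollywood",
    "a movie is a film and an action film needs a director",
    "students film daily life at school and watch the video",
]

def vector(word):
    # Co-occurrence vector: which stems appear in the same sentence as 'word'?
    v = Counter()
    for sentence in CORPUS:
        sent = stems(sentence)
        if word in sent:
            v.update(s for s in sent if s != word)
    return v

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in set(u) & set(v))
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def prompt_relevance(prompt, answer):
    # For each answer stem, take its best match against the prompt stems,
    # then average; a real system would map this onto the 1-5 scale.
    prompt_vecs = [vector(w) for w in stems(prompt)]
    scores = [max((cosine(vector(w), pv) for pv in prompt_vecs), default=0.0)
              for w in stems(answer)]
    return sum(scores) / len(scores) if scores else 0.0

print(prompt_relevance("Making a video of daily life at my school",
                       "We filmed the students and watched a movie"))
```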
Write & Improve is designed not to give too much feedback in one go. Too much ‘red pen’ is ultimately demotivating and counter-productive. Instead, it returns initial feedback on common errors after the first submission and then, as the learner edits those errors, surrounding errors become marked on the next view, where possible and appropriate. This is designed to keep the user motivated and engaged.
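The actual gating logic isn’t published, so this is purely illustrative: reveal only a few errors per pass, commonest categories first.

```python
# Purely illustrative: reveal a few errors per pass, commonest types first.
COMMON_FIRST = ["spelling", "article", "agreement", "word_order", "register"]

def next_feedback(detected, already_fixed, batch_size=3):
    """Pick the next few errors to highlight, by category frequency."""
    pending = [e for e in detected if e not in already_fixed]
    pending.sort(key=lambda e: COMMON_FIRST.index(e[0]))
    return pending[:batch_size]

detected = [("article", "before 'students'"), ("spelling", "'enjoin'"),
            ("register", "'gonna'"), ("agreement", "'she go'"),
            ("word_order", "'always I do'")]
print(next_feedback(detected, already_fixed=set()))
# First pass: spelling, article and agreement; the rest surface later.
```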

In summary, the system is designed to be engaging, inspiring and encouraging. Overall encouragement needs to be personalised and adaptive – so the system needs to be changing and regularly refreshed.


As we all know, learning a language takes a long time and it is important for learners to be persistent - and to get lots of practice, particularly for the productive skills. As you have seen, learners using Write and Improve are encouraged to read, think - and then have another go until they feel that real progress is being made.
Supportive feedback is key to maintaining this kind of interest and persistence. The feedback also helps by creating learning pathways that are tailored to the level of the learner. This is why comments generated by the system are graded to suit the level of the learner.
A low level learner at A1 gets feedback like this:

This is a good start! Now improve your writing. Read the feedback. Make changes and click Check again!


A high level learner at C2 gets a much more complex response from the system, “C2! Your writing is excellent! We advise you to practise writing in many different genres and styles. You can afford to take risks and then check the results. Keep learning! You can use Start again to start a whole new essay or return to Workbooks to try a different task. Keep on writing to keep on improving!”
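In code terms, graded feedback is just a level-keyed lookup. A trivial sketch using the two messages quoted above (the fallback line is invented):

```python
# Graded feedback as a simple level-keyed lookup. The A1 and C2 messages
# are the ones quoted above; the default is an invented placeholder.
FEEDBACK = {
    "A1": "This is a good start! Now improve your writing. Read the "
          "feedback. Make changes and click Check again!",
    "C2": "C2! Your writing is excellent! We advise you to practise "
          "writing in many different genres and styles.",
}

def feedback_for(level: str) -> str:
    return FEEDBACK.get(level, "Read the feedback, make changes and check again!")

print(feedback_for("A1"))
```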
A proportion of submitted checks are cross-checked by our human annotators (‘humannotators’) who teach the algorithm about subtleties it may never have seen before and reinforce where it is performing well. In this way, a virtuous circle is created where users learn from the algorithm which in turn learns from the users.
Since September over a million pieces of writing have been submitted and checked. The average student takes on board the feedback and improves on each check by an amount equivalent to 5% of a CEFR level. Though of course larger gains tend to be made by A1/A2 students, and gains become harder on successive checks.
Write & Improve helps learners to identify and eliminate their common and repeated errors, so that when they submit their writing to their teacher, the teacher is freed up to focus on the things only they can help with – discourse organisation, argumentation, nuance, and much more.
Write & Improve is a great example of using data – in very different ways. There’s the mass of data from the corpus that feeds the machine learning, the human annotation that adds sophistication to the Artificial Intelligence, the massive user-generated data set from Wikipedia that makes prompt relevance happen, and the logic that connects the performance data with appropriate feedback. It’s like a case study for the world of data-assisted truth!
So does this mean that machines and artificial intelligence will be the teachers of tomorrow? Absolutely not – these technologies will be the teachers’ assistants of the future. They will give teachers more time to teach and learners more time to learn.
I am passionate about digital technology but I am not advocating a paperless classroom or suggesting that you throw away the course books. Far from it. We saw earlier some of the data to show how well your students are learning in your classrooms and performing in exams. What you do works, and what I hope I have done in this presentation is give you some insights and ideas about how data is changing the world you teach in.
Because the world needs teachers. The world needs you. In 2015, 74 countries faced teacher shortages, and about 59 million children were excluded from primary education. Universal primary education by 2030 requires 25.8 million teachers – 3.2 million new teachers and 22.6 million replacements. 1.5 billion people are learning English and less than a quarter of them have access to formal instruction.
How do we reach these learners without access to teachers? That is a challenge for all of us working in English language learning. Here technology helps us reach out. I am just going to quickly show you Quiz Your English, an app we’ve recently launched in partnership with Bravi from Brazil.
It uses data in that most modern of ways – in the social media context. Learners sign in with Facebook to a rapid-fire English quizzing game. They can compete with their friends, or be paired with other players from all over the world.
There is a range of topics at B1 and B2. I am going to choose Random Mix, which is new and has questions at both levels and from all the topics.
You have to be fast to win, so I am really going to go for it. I have a challenger so let’s begin.

DEMO
And if you want you can challenge me, I’m on the Cambridge English stand during the break after this session.


You could also use Quiz Your English in the classroom, for pair work or when students finish a task ahead of the rest of the class. Previous research we have done in Mexico with Dr David Graddol showed that students get very excited and engaged when they are allowed to use their phones in class. By the end of last week an amazing 1,345,000 games had been played. So at a minute a game this little app has so far provided 22,417 hours – more than two and a half years – of lexico-grammatical skills practice for learners. And I haven’t even talked about how much data all that learning has created!
This is what technology can do for us. This is what a good idea, some good content and a smartphone can achieve.

It can be hard to appreciate the value of technology. The head of the British Post Office said in the 19th century, ‘The Americans might have need of the telephone, but we do not’. A famous newspaper editor in the UK, C. P. Scott, said when the television was first invented, ‘No good can possibly come of it!’ And famously, Thomas Watson of IBM said after the war, ‘I think there is a world market for maybe five computers’.


There is only one certainty when it comes to predicting the future, particularly when it comes to the future of technology. You will be wrong.
But I am as certain as certain can be that we are not living in a post truth world. The search for data-assisted truth is happening all around us. Finding the right data and learning from it is changing our lives like never before – and it’s truly exciting.
The most exciting part is that the intoxicating new data techniques I’ve talked about today are already being applied in the world that we know and love so much, the world of language learning. Helping us design the right questions and find cheats. Looking for patterns of answers across the levels. Seeing how learning differs from country to country. Helping machines learn. And creating games that power learning.
In education, we are about looking to the future, not the past; moving forward, not backward; helping society move ever more into the light.
Today, we are at the dawning of a new era, where we can achieve unparalleled knowledge, unparalleled learning, unparalleled insight. It is up to us to grasp this opportunity, not to let it slip through our fingers. So let’s grab the right data, search for data-assisted truth and be part of this wonderful new world.
And by the way, how did post-truth get to be Oxford’s word of the year? Even with a hyphen it looks very much like two words to me! Thank you very much.



