Sunday, February 28, 2016

Completion: a category mistake in MOOCs



In a fascinating survey taken at the start of the University of Derby’s ‘Dementia’ MOOC, run on Canvas, 775 learners were asked whether they expected to fully engage with the course: 477 said yes, but 258 stated that they did NOT INTEND TO COMPLETE. This shows that people come to MOOCs with different intentions. In fact, around 35% of both groups completed, a much higher level of completion than the vast majority of MOOCs. They bucked the trend.
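A quick back-of-envelope in Python, using the survey numbers above (the 35% figure is approximate), shows what that means in absolute terms:

```python
# Survey figures from the start of the Derby 'Dementia' MOOC (as quoted above).
intend_to_complete = 477
no_intent_to_complete = 258
completion_rate = 0.35  # roughly the same for both groups

for label, n in [("intended to complete", intend_to_complete),
                 ("did not intend to complete", no_intent_to_complete)]:
    print(f"{label}: ~{round(n * completion_rate)} completers")
```

Roughly 90 people who said at the outset that they would not complete went on to complete anyway – stated intention is a poor predictor here.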

Now much is made of dropout rates in MOOCs, yet the debate is usually flawed. It is a category mistake to describe people who stop at some point in a MOOC as ‘dropouts’. This is the language of institutions. People drop out of institutions – ‘University dropouts’ – not open, free and online experiences. I’m just amazed that many millions have dropped in.

So let’s go back to that ‘Dementia’ MOOC, where 26.29% of those that enrolled never actually did anything in the course. These are the window-shoppers and false starters. False starters are common in the consumer learning market. For example, the majority of those who buy language courses never complete much more than a small portion of the course. And in MOOCs, many simply have a look, often just curious; others want a brief taster, just an introduction to the subject, or some familiarity with the topic; and further in, many find the level inappropriate or, because they are NOT 18-year-old undergraduates, find that life (job, kids etc.) makes them too busy to continue. For these reasons, many, myself included, have long argued that course completion is NOT the way to judge a MOOC (Clark, 2013; Ho et al., 2014; Hayes, 2015).

Course completion may make sense when you have paid up front for your University course and made a huge investment in terms of money, effort, moving to a new location and so on. Caplan rightly says that the 'signalling' value of attending a branded institution explains the difference. In open, free and online courses there is no such commitment, risk or investment. The team at Derby argue for a different approach to measuring the impact of MOOCs, based not on completion but on meaningful learning. This recognises that a diverse audience wants, and gets, different things from a MOOC. MOOCs are not single long-haul flights; they are more like train journeys, where some people want to get to the end of the line but most get on and off along the way.

Increasing persistence
Many of the arguments around course completion in MOOCs are, I have argued, category mistakes, based on a false comparison with traditional HE, semester-long courses. We should not, of course, allow these arguments to distract us from making MOOCs better, in the sense of having more sticking power for participants. This is where things get interesting, as there have been some features of recent MOOCs that have caught my eye as providing higher levels of persistence among learners. The University of Derby ‘Dementia’ MOOC, full title ‘Bridging the Dementia Divide: Supporting People Living with Dementia’, is a case in point.

1. Audience sensitive
MOOC learners are not undergraduates who expect a diet of lectures delivered synchronously over a semester. They are not at college and do not want to conform to institutional structures and timetables. It is unfortunate that many MOOC designers treat MOOC learners as if they were physically (and psychologically) at a University – they are not. They have jobs, kids, lives, things to do. MOOC designers have to get out of their institutional thinking and realise that their audience often has a different set of intentions and needs. The new MOOCs need to be sensitive to learner needs.

2. Make all material available
To be sensitive to a variety of learners (see why course completion is a wrong-headed measure), the solution is to provide flexible approaches to learning within a MOOC, so that different learners can take different routes and approaches. Some may want to be part of a ‘cohort’ of learners and move through the course with a diet of synchronous events but many MOOC learners are far more likely to be driven by interest than paper qualifications, so make the learning accessible from the start. Having materials available from day one allows learners to start later than others, proceed at their own rate and, importantly, catch up when they choose. This is in line with real learners in the real world and not institutional learning.

3. Modular
The idea of a strictly linear diet of lectures and learning should also be eschewed, as different learners want different portions of the learning at different times. A more modular approach, where modules are self-contained and can be taken in any order, is one tactic. Adaptive MOOCs, using AI software that guides learners through content on the basis of their needs, are another. In fact, 6.16% of learners on the Dementia MOOC didn’t start with Module 1.
This tracked data shows that some completed the whole course in one day, others did a couple of modules on one day, many did the modules in a different order, some went through in a linear and measured fashion. Some even went backwards. The lesson here is that the course needs to be designed to cope with these different approaches to learning, in terms of order and time. This is better represented in this state diagram, showing the different strokes for different folks. 
Each circle is a module containing the number of completions. Design for flexibility.
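The analysis behind that diagram can be sketched in a few lines. The event format below is invented for illustration – a real Canvas export looks different – but the idea is the same: reconstruct each learner's path through the modules and see how many are non-linear.

```python
from collections import defaultdict

# Hypothetical tracking events (learner_id, module_completed), in time order.
events = [
    ("a", 1), ("a", 2), ("a", 3),   # linear learner
    ("b", 4), ("b", 2),             # dips into the middle of the course
    ("c", 3), ("c", 2), ("c", 1),   # goes 'backwards'
]

paths = defaultdict(list)
for learner, module in events:
    paths[learner].append(module)

# Flag anyone whose path is not an ascending run starting at Module 1.
non_linear = [learner for learner, path in paths.items()
              if path != sorted(path) or path[0] != 1]
print(non_linear)
```

Run against real event data, the counts of each path would populate the circles in a state diagram like the one described above.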

4. Shorter
MOOC learners don’t need the 10-week semester structure. Some want much shorter and faster experiences, others medium-length and some longer. Higher Education is based on an agricultural calendar, with set semesters that fit harvest and holiday patterns. The rest of the world does not work to this pre-industrial timetable. In the Derby Dementia MOOC, there was considerable variability in when people did their learning. Many took less than the six weeks, but that did not mean they spent less time on the course. Many preferred concentrated bouts of longer learning to the regular once-per-week model that many MOOCs recommend or mandate. Others did the week-by-week learning. We have to understand that learning for MOOC audiences is taken erratically and not always in line with the campus model. We need to design for this.

5. Structured and unstructured
I personally find the drip-feed, synchronous model of moving through the course with a cohort rather annoying and condescending. The evidence in the Dementia MOOC suggests that there was more learner activity in unsupported periods than in supported periods. This shows a considerable thirst for doing things at your own pace and convenience, rather than at a pace mandated by synchronous, supported courses. Nevertheless, this is not an argument for a wholly unstructured strategy. This MOOC attracted a diverse set of learners, and having both structured and unstructured approaches brought the entire range of learners along.
You can see that the learners who experienced the structured approach – live Monday announcements by the lead academic, a Friday wrap-up with a live webinar, a help forum and an email query service – formed a sizeable group in any one week. Yet the others, who learnt without support, were also substantial in every week. This dual approach seems ideal, appealing to an entire range of learners with different needs and motivations.

6. Social not necessary
Many have little interest in social chat and being part of a consistent group or cohort. One of the great MOOC myths is that social participation is a necessary condition for learning and/or success. Far too much is made of ‘chat’ in MOOCs, in terms of both need and quality. I’m not arguing for no social components in MOOCs, only claiming that the evidence shows they are less important than the ‘social constructivist’ orthodoxy in design would suggest. In essence, I’m saying social interaction is desirable but not essential. To rely on it as the essential pedagogic technique is, in my opinion, a mistake, and imposes an ideology on learners that they do not want.

7. Adult content
In line with the idea of being sensitive to the needs of the learners, I’ve found too many rather earnest talking heads from academics, especially the cosy chats, more suitable to the 18-year-old undergraduate than the adult learner. You need to think about voice and tone, and avoid second-rate PhD research and an overly Departmental approach to the content. I’m less interested in what your Department is doing and far more interested in the important developments and findings, at an international level, in your field. MOOC learners have not chosen to come to your University; they’ve chosen to study a topic. We have to let up on being too parochial in content, tone and approach.

8. Content as a driver
In another interesting study of MOOCs, the researchers found that stickiness was highly correlated with the quality of the 'content'. This contradicts those who believe that the primary driver in MOOCs is social. They found that learners dropped out if they didn't find the content appropriate or of the right quality; good content turns out to be a primary driver of perseverance and completion, as their stats show.
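The study itself isn't reproduced here, but the underlying measure is just a correlation between content quality and persistence. A toy version, with made-up numbers, might look like this:

```python
# Illustrative only: invented per-module quality ratings vs. the share of
# learners who persisted past each module.
quality   = [4.5, 3.1, 4.8, 2.6, 4.2]        # mean rating out of 5
persisted = [0.62, 0.41, 0.66, 0.35, 0.58]   # fraction continuing

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"Pearson r = {pearson(quality, persisted):.2f}")
```

With real MOOC data you would, of course, also control for confounds such as topic and audience; this only shows the mechanics.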

9. Badges
The Dementia MOOC had six independent, self-contained sections, each with its own badge for completion and each of which could be taken in any order, with an overall badge for completing the whole course. These partial rewards for partial completion proved valuable. They move us away from the idea that certificates of completion are the way we should judge MOOC participation. In the Dementia MOOC, 1,201 learners were rewarded with badges, against 527 completion certificates.

10. Demand driven
MOOCs are made for all sorts of reasons – marketing, grant applications, even whim. This is supply-led. Yet the MOOC market has changed dramatically, away from representing the typical course offerings of Universities towards more vocational subjects. This is a good thing, as the providers are quite simply reacting to demand. Before making your MOOC, do some market research, estimate the size of your addressable audience and tweak your marketing towards that audience. This is likely to result in a higher number of participants, as well as higher stickiness.

11. Marketing
If there's one thing that will get you more participants and more stickiness, it's good marketing. Yet academic institutions are often short of these skills, or see marketing as 'trade'. This is a big mistake. Marketing matters: it is a skill and it needs a budget.

Conclusion
The researchers at Derby used a very interesting phrase in their conclusion, that “a certain amount of chaos may have to be embraced”. This is right. Too many MOOCs are over-structured, too linear and too like traditional University courses. They need to loosen up and deliver what these newer diverse audiences want. Of course, this also means being careful about what is being achieved here. Quality within these looser structures and in each of these individual modules must be maintained.

Bibliography
Clark, D. (2013). MOOCs: Adoption curve explains a lot. http://donaldclarkplanb.blogspot.co.uk/2013/12/moocs-adoption-curve-explains-lot.html
Hayes, S. (2015). MOOCs and Quality: A review of the recent literature. Retrieved 5 October 2015, from http://www.qaa.ac.uk/en/Publications/Documents/MOOCs-and-Quality-Literature-Review-15.pdf
Ho, A. D., Reich, J., Nesterko, S., Seaton, D. T., Mullaney, T., Waldo, J. & Chuang, I. (2014). HarvardX and MITx: The first year of open online courses. Retrieved 22 September 2015, from http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2381263
Leach, M., Hadi, S. & Bostock, A. (2016). Supporting Diverse Learner Goals through Modular Design and Micro-Learning. Presentation at the European MOOCs Stakeholder Summit.
Hadi, S. & Gagen, P. (2016). New model for measuring MOOC completion rates. Presentation at the European MOOCs Stakeholder Summit.
You can enrol for the University of Derby 'Dementia' MOOC here.
And more MOOC stuff here.

Saturday, February 27, 2016

MOOCs: course completion is wrong measure

In a fascinating survey taken at the start of the University of Derby’s ‘Dementia’ MOOC, run on Canvas, 775 learners were asked whether they expected to fully engage with the course: 477 said yes, but 258 stated that they did NOT INTEND TO COMPLETE. This shows that people come to MOOCs with different intentions. In fact, around 35% of both groups completed, a much higher level of completion than the vast majority of MOOCs. They bucked the trend.
Now much is made of dropout rates in MOOCs, yet the debate is usually flawed. It is a category mistake to describe people who stop at some point in a MOOC as ‘dropouts’. This is the language of institutions. People drop out of institutions – ‘University dropouts’ – not open, free and online experiences. I’m just amazed that 40 million have dropped in.
So let’s go back to that ‘Dementia’ MOOC, where 26.29% of enrolees never actually did anything in the course. These are the window-shoppers and false starters. False starters are common in the consumer learning market. For example, the majority of those who buy language courses never complete much more than a small portion of the course. And in MOOCs, many simply have a look, often just curious; others want a brief taster, just an introduction to the subject, or some familiarity with the topic; and further in, many find the level inappropriate or, because they are NOT 18-year-old undergraduates, find that life (job, kids etc.) makes them too busy to continue. For these reasons, many, myself included, have long argued that course completion is NOT the way to judge a MOOC (Clark, 2013; Ho et al., 2014; Hayes, 2015).
Course completion may make sense when you have paid up front for your University course and made a huge investment in terms of money, effort, moving to a new location and so on. In open, free and online courses there is no such commitment, risk or investment. The team at Derby argue for a different approach to measuring the impact of MOOCs, based not on completion but on meaningful learning. This recognises that a diverse audience wants, and gets, different things from a MOOC. MOOCs are not single long-haul flights; they are more like train journeys, where some people want to get to the end of the line but most get on and off along the way.
Audience age
Here are two sets of data, from the Derby Dementia MOOC and the six Coursera MOOCs delivered by the University of Edinburgh. It is clear that MOOCs attract a much older audience than the average campus student.



This is important, as older learners are far less likely to want pieces of paper and certification, or to worry much about not completing the full diet of content.
Audience mix
We are also seeing a drift away from the initial graduate-only audience. There is still a skew towards graduates, but this is because these are the early adopters and almost the only group who know that MOOCs exist. Only now do we see some serious marketing, targeted at different audiences, and this is starting to have an effect. Indeed, the majority of participants (55%) in the Dementia MOOC are not University graduates.
Audience motivation
Now here’s an interesting thing – a point often forgotten in MOOCs: learner motivation.
This compares well with the Edinburgh data.
The bottom line is that people who do MOOCs really want to learn. They are not largely motivated by pieces of paper or even completion.
Conclusion
As MOOC audiences are different from traditional HE students, and as those audiences change in terms of age, background and motivation, MOOCs will have to respond to these new audiences and not mimic University semester courses. The team at Derby have already suggested an alternative set of metrics for measuring the success of a MOOC. They’re right. It’s time to move beyond the boring, repetitive questions we hear every time the word MOOC is mentioned – dropout, graduates only…
Bibliography

Hadi, S. & Gagen, P. (2016). New model for measuring MOOC completion rates. Presentation at the European MOOCs Stakeholder Summit.
You can enrol for the University of Derby 'Dementia' MOOC here.
And more MOOC stuff here.

Friday, February 26, 2016

AI maths app that students love and teachers hate

We’ve all been stuck on a maths problem. Looking it up in a textbook hardly ever helps, as the worked examples are rarely close to what you need and the explanations are clumsy and generic. What you really need is help on THAT specific problem. This is personalised learning, and an app called Photomath does it elegantly using AI. Simply point your mobile camera at the problem. You don’t even have to click. It simply scans and comes up with the answer, plus a breakdown of the steps you need to take to get to that answer. It can’t do everything, such as word problems, but it’s fine for school-level maths.
Getting there
The app is quite simple at the moment and only solves basic maths problems. It has been criticised for being basic, but it’s at this level that the vast majority of learners fail. It’s getting there, and I don't want to get hung up on whether Photomath is as good as it says it is, or better than other maths apps. For me, it's a great start and a hint of great things to come. Wolfram Alpha is, in fact, a lot more sophisticated, but it is the convenience of the mobile camera functionality that makes Photomath special.
The problem that is maths
Maths is a subject that is full of small pitfalls for learners, many of which switch learners off, inducing a mindset of ‘I’m not good at maths’. In my experience, this can be overcome by good teaching/tutoring and detailed, deliberate feedback, something that is difficult in a class of 30-plus students. This subject, above all others, needs detailed feedback, as little things lead to catastrophic failure. This approach, where the detail of a maths problem is unpacked, is therefore exactly what maths teaching needs. It is a glimpse of a future where performance support, or teacher-like help, is available on mobile devices. AI will do what good teachers do: walk you through specific problems until you can do it for yourself.
Students love it, teachers hate it
Predictably, students love this app, while teachers hate it. This is a predictable phenomenon and neither side is to blame. It happened with Google, Wikipedia, MOOCs… and it’s the same argument we heard when calculators were invented. The teachers’ point is that kids use it to cheat on homework. That depends on whether you see viewing the right answer, and the steps in solving an equation, as cheating. In my opinion, it simply exposes bad homework. Simply setting a series of dry problems, without adequate support, is exactly what makes people hate maths, as help is so hard to find when you’re sitting there, on your own, struggling to solve problems. Setting problems is fine for those who are confident and competent; it often disheartens those who are not.
Sure, the app will give you the answer, but it also gives you a breakdown of the steps. That’s exactly where the real learning takes place. What we need is a rethink of what learning, practice and homework mean to the learner in maths. The app is simple, but we now see technology that is, in effect, doing what a good teacher does – illustrating, step by step, how to solve maths problems.
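Photomath's camera and recognition pipeline is proprietary, but the 'breakdown of steps' half of the idea is easy to illustrate. Here's a toy step-by-step solver for simple linear equations – nothing to do with Photomath's actual code, just a sketch of what step-wise feedback looks like:

```python
from fractions import Fraction

def solve_linear(a, b, c):
    """Solve ax + b = c, printing each step like a worked example."""
    a, b, c = Fraction(a), Fraction(b), Fraction(c)
    print(f"{a}x + {b} = {c}")
    print(f"{a}x = {c - b}   (subtract {b} from both sides)")
    x = (c - b) / a
    print(f"x = {x}   (divide both sides by {a})")
    return x

solve_linear(3, 4, 19)  # x = 5
```

It is this narration of the intermediate steps, not the final answer, that does the teaching.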
Homework
Homework causes no end of angst for teachers, parents and students. Some teachers, based on cherry-picked evidence or hearsay, don't provide any homework at all. Many set banal and ill-designed tasks that become no more than a chore to be endured by the student. I personally think the word 'homework' is odd. Why use the language of the workplace, 'work', to describe autonomous learning? In any case, we must move beyond 'design a poster' and get-the-right-answer tests, to encouraging autonomy in the learner. This means providing tasks where adequate support is available to help the learner understand the process or task at hand.
AI in learning
AI is entering the learning arena at five different taxonomic levels: tech, assistive, analytic, hybrid and automatic. This is a glimpse of what the future will bring, as intelligent AI-driven software delivers, initially, assistance to students, then teacher-level functionality and eventually the equivalent of the autonomous, self-driving car. It's early days, but I've been involved in projects that are seeing dramatic improvements in attainment, dropout and motivation using AI technology in learning.
WildFire

I’ve been using AI in a tool called WildFire that uses semantic AI to create online learning content from ANY document, PowerPoint or video. No lead time, sophisticated active learning and a massive reduction in cost. We’re starting to see a new generation of tools that use smart AI techniques to deliver personalised learning. AI is fast becoming the most important development in the advancement of teaching we’ve seen to date.
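WildFire's semantic AI doesn't reduce to a few lines, but the core idea – turning any document into open-input questions – can be sketched naively. In this illustration a crude word-frequency heuristic stands in for the real semantic analysis:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "of", "and", "to", "in", "is", "that", "it", "as"}

def cloze_questions(text, n=2):
    """Naive sketch: blank out the most frequent non-trivial words to make
    open-input questions. Purely illustrative, not WildFire's method."""
    words = re.findall(r"[A-Za-z]+", text.lower())
    keywords = [w for w, _ in Counter(
        w for w in words if w not in STOPWORDS and len(w) > 3).most_common(n)]
    questions = []
    for kw in keywords:
        blanked = re.sub(kw, "_" * len(kw), text, flags=re.IGNORECASE)
        questions.append((blanked, kw))
    return questions

doc = "Dementia is a syndrome. Dementia affects memory, thinking and behaviour."
for question, answer in cloze_questions(doc, n=1):
    print(question, "->", answer)
```

The real value, as with WildFire, is the zero lead time: any document becomes active learning content immediately.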

Friday, February 19, 2016

We have fetishised 'Leadership', we're all leaders now, rendering the word meaningless

We have fetishised the word 'Leader'. Everyone is obsessed with doing Leadership training and reading fourth-rate paperbacks on this dubious subject. You're a leader, I'm a leader, we're all leaders now – rendering the very meaning of the word useless. What do you do for a living? I’m a ‘leader’. Cue laughter and ridicule. Have you ever heard anyone in an organisation say, 'We need to ask our Leader'? Only if it was sneering sarcasm. The term was invented by people who sell management training, to fool us all into thinking that it's a noble calling. But is it all a bit phoney and exaggerated, and does it lead to dysfunctional behaviour?

Weasel words
When I first started in the learning world over 35 years ago, ‘Leader’ was not a word I heard – ever. There was plenty of good management theory and training, and most people who headed up companies were called Managing Directors; in Education, Vice-Chancellors and Heads. Then the tech bubble came along in the 90s and we all went gaga for snazzy new US terms, and everyone swapped out the sober and descriptive MD for CEO (Chief Executive Officer) (I’m also guilty here). The word ‘Chief’ is an interesting choice. You were no longer someone who ‘managed’ others but the big chief, the big cheese, a big shot. It was then that another word rose like Godzilla from the depths of the cesspit that is HR lingo – ‘leader’. Suddenly, managers weren’t people with competences but top dogs who ‘led’ people towards victory. Brian, the Head of Geography, was now a leader. Mike, senior manager in accounts, was now a dog of war.

Followers
Using the word 'Leader' creates a sense of us and them, a sort of feudal relationship. Leaders are now the aristocracy in an organisation; everyone else is a kulak, working serf or follower. In a sense the word implies that the people you lead and manage are followers. It sets you apart from other people, not a great quality in management within an educational institution. In fact, it is the exact opposite of most qualities you need to manage an organisation or school. Of course, leadership trainers will tell you that it’s not about creating followers, but in practice this is the effect the word creates, and management trainers jump through hoops to reconcile this leader/follower dilemma. We need to connect with people, not lead and follow.

Leadership courses
When the language changed, so did the training. HR bods and teacher trainers were suddenly the leading thinkers on leadership. HR and training departments saw an opportunity to big-up their status by breeding, not managers, but leaders. Middle managers went on ‘leadership’ courses run by people who had never led anything, except flipchart workshops. In practice this meant cobbling together stuff from existing management courses and adding a veneer of specious content from books on leadership. Winging it became a new course-design methodology and every management trainer in the land suddenly became a leadership trainer.

Stanford Professor Jeffrey Pfeffer’s book Leadership BS should be required reading for all Leadership trainers and consultants. He puts the blame for all the hubris around Leaders and Leadership firmly on the world of training: a confusion of nostrums, stories, fictions, anecdotes, promises, glib simplicities, bromides, romanticism and myth-making, feel-good nonsense. His solution – realism. The real world is much more complex and messy than the Leadership theorists and trainers would have you believe. The reality of most Leadership training is that it peddles inspiration rather than real competence. Worse still, those who value a quieter approach, with some modesty and humility, are seen as not having the right qualities. Competence gets bowled away by confidence.

Leadership books
At the theoretical level, the idea that there is a body of knowledge and practice called ‘Leadership’ is laughable. It’s the word that launched a thousand bad books. Middle managers and senior teaching staff went crazy for books they’d never dreamt of reading. I’ve seen everything from Meditations by Marcus Aurelius to Clausewitz touted as serious management texts. I knew it had all gone seriously wrong when I saw a commuter, with a bad suit and combination lock briefcase, on the 7.15 from Brighton to London, reading ‘The Art of War’ by Sun Tzu. What next? Hitler, Stalin… How to reset your company to Year Zero by Pol Pot?

What we find is an astonishing lack of evidence and evaluation. Leadership has the characteristics of a cult or creed rather than a set of competences. It is what gave us Trump and The Art of the Deal. Witness the greed that enveloped Vice-Chancellors when they were awash with cash from student fees. This hasn’t gone away.

Led to the abyss
Managers loved their new-found status as little generals, leading the troops. They responded to the training as narcissists respond to flattery – with gusto. I don’t think it’s an accident that this coincided with the megalomaniac behaviour in the banks, where ‘leaders’, fed on a high-octane diet of ‘leadership’ training, ‘led’ us into the abyss of financial collapse. I well remember the managers at RBS getting ‘Leadership’ training and turning into the monsters they became: mis-selling, stupid strategies on acquisitions, losing touch with reality, and a huge vacuum in competences. These ‘leaders’ adopted delusional strategies based on over-confidence and a lack of realism. There’s a measurable price to pay for believing that you’re destined to ‘lead’ – the loss of realism. Managers who now saw themselves as ‘leaders of the pack’ engaged in behaviours that flowed from the word. They became driven by their own goals and not the goals of the organisation or others. It also led to greater differentials between leader and follower salaries – witness the gross gaps in income between our new 'leaders', in business and education, and the rest. Leadership has led to rising inequalities.

Conclusion
It's the language of narcissism and excess. We have seen leaders in every area of human endeavour succumb to the tyranny of ‘leadership’ – in business, politics, newspapers, even sport. In finance, for example, rather than focus on competences and sound management, fuelled by greed, they focused on personal rewards and ‘go for broke’ strategies. So what happened to these ’leaders’? Did they lose their own money? No. Did any go to jail? No. Are they still around? Yes. Have we reflected on whether all of that ‘leadership’ malarkey was right? No. In education we saw Vice-Chancellors and their sidekicks, on so-called Remuneration Committees, balloon senior salaries. The Academy system has seen obscene salaries and an increase in corruption, all in the new 'Leadership' class. In politics (Trump, May, Boris, Corbyn), finance (most of them), sport (FIFA), entertainment (Weinstein et al) and education (Vice-Chancellors, Academies), we have fetishised leaders and there has been a disastrous loss of faith, as greed, corruption, self-aggrandisement and narcissism went through the roof. Let’s get real and go back to realistic learning and realistic titles. Time to call it out.

10 powerful results from Adaptive (AI) learning trial at ASU

AI in general, and adaptive learning systems in particular, will have enormous long-term effect on teaching, learner attainment and student drop-out. This was confirmed by the results from courses run at Arizona State University in Fall 2015. 
One course, Biology 100, delivered as blended learning, was examined in detail. The students did the adaptive work on the CogBooks platform then brought that knowledge to class, where group work and teaching took place – a flipped classroom model. This data was presented at the Educause Learning Initiative in San Antonio in February and is impressive.
Aims
The aim of this technology enhanced teaching system was to:
increase attainment
reduce dropout rates
maintain student motivation
increase teacher effectiveness
It is not easy to juggle all four at the same time, but ASU want these undergraduate courses to be a success on all four fronts, as they are seen as the foundation for sustainable progress by students as they move through a full degree course.
1. Higher attainment
A dumb rich kid is more likely to graduate from college than a smart poor one. These increases in attainment are therefore hugely significant, especially for students from low-income backgrounds in high-enrolment courses. Many interventions in education show razor-thin improvements. These are significant, not just in overall attainment rates but, just as importantly, in the way this squeezes dropout rates. It’s a double dividend.
2. Lower dropout
A key indicator is the immediate impact on dropout. Dropout can be catastrophic for the students and, as funding follows students, also for the institution. Between 41% and 45% of those who enrol in US colleges drop out. Given the $1.3 trillion student debt problem, and the fact that these students drop out but still carry the burden of that debt, this is a catastrophic level of failure. In the UK it is 16%. As we can see, increase overall attainment and you squeeze dropout and failure. Too many teachers and institutions are coasting along with predictable dropout and failure rates. This can change. The fall in dropout rate for the most experienced instructor was also greater than for other instructors. In fact, the fall was dramatic.
3. Experienced instructor effect
An interesting effect emerged from the data. Both attainment and lower dropout were better with the most experienced instructor. Most instructors take two years until their class grades rise to a stable level. In this trial the most experienced instructor achieved greater attainment rises (13%), as well as the greatest fall in dropout rates (18%).
4. Usability
Adaptive learning systems do not follow the usual linear path. This often makes the adaptive interface look different and navigation difficult. The danger is that students don't know what to do next or feel lost. In this case, ASU saw good student acceptance across the board.
5. Creating content
One of the difficulties in adaptive, AI-driven systems is the creation of usable content. By content, I mean content, structures, assessment items and so on. CogBooks has created a suite of tools that allow instructors to create a network of content, working back from objectives. Automatic help with layout and conversion of content is also provided. Once done, this creates a complex network of learning content that students vector through, each student taking a different path depending on their ongoing performance. The system is like a satnav, always trying to get students to their destination, even when they go off course.
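To make the satnav analogy concrete, here's a toy routing sketch in Python. The content graph, the node names and the routing rule are all invented for illustration – this is not CogBooks' actual algorithm:

```python
# A toy adaptive 'satnav': pick the next content node the learner is ready for.
prerequisites = {
    "intro": [],
    "cells": ["intro"],
    "genetics": ["cells"],
    "remedial_cells": ["intro"],
}

def next_node(mastered, struggling):
    """Return the next node: re-route strugglers to remedial material,
    otherwise pick any unmastered node whose prerequisites are all met."""
    if struggling and "remedial_cells" not in mastered:
        return "remedial_cells"
    for node, prereqs in prerequisites.items():
        if node not in mastered and all(p in mastered for p in prereqs):
            return node
    return None  # destination reached

print(next_node({"intro"}, struggling=False))  # prints "cells"
```

Every call re-plans from the learner's current state, which is why two students on the same course can take entirely different paths to the same destination.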
6. Teacher dashboards
Beyond these results lies something even more promising. The CogBooks system throws off detailed and useful data on every student, as well as analyses of that data. Different dashboards give unprecedented, real-time insights into student performance. This allows the instructor to help those in need. The promise here is of continuous improvement, badly needed in education. We could be looking at an approach that improves not only the performance of teachers but also of the system itself, the consequence being ongoing improvement in attainment, dropout and motivation in students.
7. Automatic course improvement
Adaptive systems, such as Cogbooks, take an AI approach, where the system uses its own data to automatically readjust the course to make it better. Poor content, badly designed questions and so on, are identified by the system itself and automatically adjusted. So, as the courses get better, as they will, the student results are likely to get better.
8. Useful across the curriculum
By way of contrast, ASU is also running a US History course, very different from Biology. Similar results are being reported. The CogBooks platform is content agnostic and has been designed to run any course. Evidence has already emerged that this approach works in both STEM and humanities courses.
9. Personalisation works
Underlying this approach is the idea that all learners are different and that one-size-fits-all, largely linear courses, delivered largely by lectures, do not meet this need. It is precisely this dimension, the real-time adjustment of the learning to the needs of the individual, that produces the results, along with the increase in the teacher’s ability to know the class and adjust their teaching to individual student needs through real-time data.
10. Students want more
Over 80% of students, on this their first experience of an adaptive course, said they wanted to use the approach in other modules and courses. This is heartening, as without their acceptance it is difficult to see the approach working well.
Conclusion
I have described the use of AI in learning in terms of a 5-Level taxonomy. This Level 4 application (hybrid of teacher plus adaptive system) assists instructors to increase attainment and combat dropout. So far, so good. If we can replicate this overall increase in attainment across all courses and the system as a whole, the gains are enormous. The immediate promise is one of blended learning, using adaptive systems to get immediate results. The future promise is of autonomous systems, even adaptive driven MOOCs, that deliver massive amounts of high quality learning at a minute cost per learner.