Google-izing Online Learning.

Recently, I wrote about the emerging social learning platforms--and why they are all a bust.

In short, I said that learning requires tasks, criteria, and a curriculum. That is: there must be tasks in a sequence, and each task needs criteria that define success or failure. At a minimum, learners need to know: "Am I succeeding or failing?" And once they know that, they can work toward a degree of mastery--"How can I do a bit better?"

Already you can guess why so much teaching transfers so poorly to the web. Professors are experts with vast quantities of rich information and complex ideas at their fingertips. So a terrific professor is apt to tell you the twelve things wrong with your work--rather than simply "You're on the right track, but head a bit more in this direction." And it's the latter communication learners really need: right track/wrong track, steps for improvement.

So if learning is to transfer successfully to the internet--how?
What special role could software play? What kind of software?

Again, I've already suggested that to transfer learning from one setting to another, you need to think "learning how" and not just "learning in general."

And this brings us to how the current social learning platforms get learning wrong.

It's true that a curriculum is a sequence of tasks. It may involve resources like books or web sites or videos. The Khan Academy seems to believe--wrongly, I think, and I can prove it--that a series of videos and online quizzes makes up a curriculum. If you believe this is right, go watch a video, take the corresponding quiz, and see if you learn what's in the video. An eight-minute whiteboard-capture lecture or a recorded interview no more causes learning than a classroom lecture does. It may, but it just as likely may not.

A list of videos and some quizzes does not a curriculum make. It might, if the learner were prepared and the feedback on the quizzes were tuned to the learner's level. It also might if there were practice that helped the learner get it right. But that's not where we are yet.

The thing software could really get right is the sequencing of tasks in the curriculum.

The flip side of the "social" web is not just sharing ratings with friends: it's crowd-sourcing. That is the secret behind Google's phenomenal early success. Every other search engine wanted either to evaluate web sites individually or to jigger the search rankings by taking money from web sites to place their URLs higher in the results. Google refused to do this.

Google observed that when they returned search results, those results appeared on Google's own page. So Google knew which results users clicked on. Google's own traffic showed them which results were better--information that could be fed back into the search algorithm.
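This isn't Google's actual algorithm, of course, but the feedback loop is simple enough to sketch. Here's a toy version in Python, with made-up queries and URLs: tally which results past users clicked for a query, then rank candidates by those tallies.

```python
from collections import defaultdict

# Hypothetical click log: (query, clicked URL) pairs harvested from the
# engine's own result pages.
click_log = [
    ("fix bike tire", "bikerepair.example/patch"),
    ("fix bike tire", "bikerepair.example/patch"),
    ("fix bike tire", "tire-ads.example/buy"),
]

# Tally clicks per (query, URL) pair.
clicks = defaultdict(int)
for query, url in click_log:
    clicks[(query, url)] += 1

def rerank(query, candidates):
    """Order candidate URLs by how often past users clicked them for this query."""
    return sorted(candidates, key=lambda url: clicks[(query, url)], reverse=True)

print(rerank("fix bike tire",
             ["tire-ads.example/buy", "bikerepair.example/patch"]))
# -> ['bikerepair.example/patch', 'tire-ads.example/buy']
```

The point is only that the users' own behavior, captured where it happens, becomes the ranking signal.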

That's it. That's why Google was better than the search engines you've forgotten about--like Lycos.

And that's why a web-based curriculum will only be as good as the size of its user base.

If a curriculum is a series of sequenced tasks with matching criteria, a web-based repository of tasks could collect what educational researchers today only dream of. Namely, such a web site could capture the exact likelihood of a learner performing better on Task B after performing Task A.

That is the heart of a curriculum: the likelihood of performing better on one task after performing another. A curriculum is a sequence of tasks arranged so that success on one task makes success on the next more likely rather than less. If you're not doing that, you're just frustrating learners with obstacles--because any task that does not improve learning is an obstacle, and it more likely demotivates than motivates.
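Here is a minimal sketch of what capturing that likelihood might look like--assuming an invented log format and a naive difference-of-proportions estimate, nothing like a production system:

```python
# Hypothetical log: for each learner, the tasks attempted in order and
# whether each attempt met the task's success criteria.
attempts = {
    "learner1": [("A", True), ("B", True)],
    "learner2": [("B", False)],
    "learner3": [("A", True), ("B", True)],
    "learner4": [("A", False), ("B", False)],
}

def lift(task_a, task_b, attempts):
    """How much more likely is success on task_b after succeeding at task_a?"""
    with_a, without_a = [], []  # outcomes on task_b, split by prior success on task_a
    for history in attempts.values():
        succeeded_at_a = False
        for task, success in history:
            if task == task_b:
                (with_a if succeeded_at_a else without_a).append(success)
            if task == task_a and success:
                succeeded_at_a = True
    p_with = sum(with_a) / len(with_a) if with_a else 0.0
    p_without = sum(without_a) / len(without_a) if without_a else 0.0
    return p_with - p_without

print(lift("A", "B", attempts))  # 1.0 here: Task A looks like it helps with Task B
```

A positive lift suggests Task A belongs before Task B in the sequence; a lift near zero suggests Task A is just an obstacle.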

The rest is a statistical nicety to be debated by stats wonks. (And the company with the best stats wonks will win--another Google takeaway.)

In short, online learning tasks need to be Google-ized: ranked by effectiveness.

The rest of the details of a good social learning platform could be inferred from other examples. But I'll blog more about that later.

In the meantime, I think the idea of a self-adjusting system that ranks tasks (with clear success criteria attached) is enough for one day.

--Edward R. O'Neill
