Monday, September 8, 2014

Algorithms learning about human learning

One consequence of the online five-minute-lecture approach (e.g., Khan Academy) is that with lessons broken into bite-size pieces, each of which has associated exercises, it's easier to test the effectiveness of any given piece. From Khan Academy, "Video tasks on the learning dashboard":
Many of our exercises are tagged with “curated related videos”—videos that are hand-selected as related to the exercise. Using this as a starting point, we looked at all the videos that were already tagged as related to any exercise. For each of these videos, we compared the accuracy on its associated exercise both before watching the video and after watching it. From there, we selected the top fifty most effective videos, each improving the accuracy on its associated exercise by at least twenty percent, and are now highlighting them on the mission dashboard. When the system recommends an exercise with an associated video on the list of our top fifty related videos, it will automatically recommend the related video as well.
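To make the selection step concrete, here is a minimal sketch of a before/after comparison like the one described above. The data layout (attempt records carrying a correctness flag and a flag for whether the student had already watched the related video), the field names, and the interpretation of "twenty percent" as a twenty-point gain are all my assumptions for illustration, not Khan Academy's actual pipeline.

def accuracy(attempts):
    """Fraction of correct attempts; None if there are no attempts."""
    return sum(a["correct"] for a in attempts) / len(attempts) if attempts else None

def top_effective_videos(attempts_by_video, top_n=50, min_lift=0.20):
    """Rank related videos by the accuracy lift on their associated exercise.

    attempts_by_video maps video_id -> list of attempt dicts, each with a
    boolean 'correct' flag and a boolean 'watched_video' flag saying whether
    the attempt came after the student watched the related video.
    """
    lifts = {}
    for video_id, attempts in attempts_by_video.items():
        before = accuracy([a for a in attempts if not a["watched_video"]])
        after = accuracy([a for a in attempts if a["watched_video"]])
        if before is None or after is None:
            continue  # not enough data on one side of the comparison
        lift = after - before
        if lift >= min_lift:  # assumed reading: at least a twenty-point gain
            lifts[video_id] = lift
    # Keep the top_n videos with the largest improvement.
    return sorted(lifts, key=lifts.get, reverse=True)[:top_n]

The output list is then exactly what the dashboard needs: whenever a recommended exercise has a related video in that list, recommend the video alongside it.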
Compare this with a human teacher trying to see which explanations are most helpful, judging by class reaction and then perhaps a weekly quiz...the algorithm is of course completely incapable of what the human does, but on the other hand it has immediate access to individual data about what works for whom. In the not-terribly-long run we should be able to have videos tagged with different styles (highly compressed vs. wordy, words vs. equations vs. pictures, rules vs. examples, humor vs. straight exposition...) and automatically choose whichever works for a given student, based on what has improved that student's scores in the past. I suppose in the very long run we're moving toward a time-and-motion study program for small units of learning effort.
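As a rough sketch of that longer-run idea (everything here is hypothetical: the style tags, the shape of the student history, and the fallback rule), one could pick, per student, the explanation style that has historically improved their scores the most:

from collections import defaultdict

def best_style_for_student(history, default_style="examples"):
    """history: list of (style_tag, score_improvement) pairs for one student."""
    totals, counts = defaultdict(float), defaultdict(int)
    for style, improvement in history:
        totals[style] += improvement
        counts[style] += 1
    if not counts:
        return default_style  # no data yet: fall back to a default style
    # Choose the style with the highest average improvement so far.
    return max(counts, key=lambda s: totals[s] / counts[s])

def pick_video(candidate_videos, history):
    """candidate_videos: list of (video_id, style_tag) for one exercise."""
    preferred = best_style_for_student(history)
    for video_id, style in candidate_videos:
        if style == preferred:
            return video_id
    return candidate_videos[0][0] if candidate_videos else None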

Or then again, maybe not.
