Knewton – the future of education?

During the Learning Analytics conference in Banff, several presenters commented on the speed at which analytics are moving into policy-level decisions in universities and schools. Malcolm Brown, from EDUCAUSE, stated that learning analytics “are moving faster than any of us realize”.

This rapid development is due to a variety of factors: a data mining focus in the technology sector, the rise of business intelligence, increased calls for accountability in the publicly funded education sector, organizations and foundations targeting analytics in research projects (Digging into Data, Gates Foundation, EDUCAUSE), and increased entrepreneurial activity in the educational technology sector.

Taken together, these trends produce compelling pressure for change in education and learning (compelling enough that learning analytics might even survive the coming “death by hype and consultants” wave).

A few years ago I wrote a post on technologically externalized knowledge and learning (TEKL), arguing that classrooms need to give way to wearable, adaptive, personal, ubiquitous learning systems composed of agents that track our activity and automatically provide information based on context, knowledge needs, and our previous learning activity. Earlier this year, I wrote a short article on the role of learning analytics as an alternative to traditional curriculum design and teaching methods. The core message of both TEKL and learning analytics: the traditional model of education is positioned for dramatic transformation – a transformation that will invalidate many of our current assumptions about classrooms, learning content, teaching, and schools/universities.

While planning the Learning Analytics 2011 conference, the steering committee spent quite a bit of time debating the question “what are learning analytics?”. We didn’t come to a full agreement, but settled on the following broad definition: “learning analytics is the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning and the environments in which it occurs”. This is an effective definition for the conference, but it only alludes to an important component: content. Learning analytics should lead to alteration of course content. This is hardly a new idea – in the late ’90s we were talking about content personalization under the umbrella of computer-based training. What has changed, however, is the growth of semantic content, ontologies (knowledge domain structures), educational data mining models/algorithms, and the amount of digitally captured learner activity.
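As a loose illustration of the definition above, the sketch below aggregates hypothetical learner-activity events into simple per-content measures and flags items that might warrant alteration. The event format, IDs, and threshold are invented for illustration only; they are not tied to any particular platform.

```python
from collections import defaultdict

# Hypothetical learner-activity events: (learner_id, content_id, answered_correctly)
events = [
    ("lrn-01", "quadratics-02", True),
    ("lrn-01", "quadratics-03", False),
    ("lrn-02", "quadratics-03", False),
    ("lrn-03", "quadratics-03", True),
]

# Measurement/collection: aggregate attempts and successes per content item.
attempts = defaultdict(int)
successes = defaultdict(int)
for learner_id, content_id, correct in events:
    attempts[content_id] += 1
    successes[content_id] += int(correct)

# Analysis/reporting: flag content most learners struggle with,
# i.e. candidates for alteration or re-sequencing.
for content_id in attempts:
    rate = successes[content_id] / attempts[content_id]
    if rate < 0.5:
        print(f"{content_id}: success rate {rate:.0%} -- review this content")
```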

The core concept of learning analytics has existed for several decades in education theory, computer science, human-computer interaction, and what’s now called web science. University research centres and labs, however, often make poor use of their intellectual resources for commercial purposes or broad application.

Over the last week, I’ve been exploring various companies to profile on FutureLearn. I kept encountering companies in the test prep (SAT, GRE, GMAT, etc.) space. The lower-hanging fruit of innovation is to take an existing system (such as test prep) and make it more effective and efficient. After all, it’s easier to get schools, universities, and individuals to pay for a product that will help students do well on existing test models than it is to get someone to buy something as vague as learning analytics.

One company kept surfacing: Knewton. Knewton is often cited as an innovative test prep company using advanced algorithms to provide personalized learning. They collect up to 150,000 data points from a single student over one afternoon of study. This data is then used to adapt the learning experience to each student. As far as I can tell, they are doing quite well with this test prep model. I’m far more interested, however, in their adaptive learning platform.

Educational technology and content providers have an uneasy relationship. Large companies like Pearson provide their own learning platforms, but also offer content for LMS providers like Blackboard and D2L. LMS providers, recognizing that Moodle and other open systems have commoditized their offerings, have started to adopt tightly integrated ecosystems of services in order to maintain value points for customers. In pursuing this vision, Blackboard has acquired synchronous tools like Wimba and Elluminate as well as analytics platforms like iStrategy. Content providers like Pearson and Thomson Reuters are actively trying to reposition themselves as learning or intelligence companies. Pearson, for example, is (or soon will be) offering degrees in the UK. These three industry segments (LMS, content, intelligence platforms) are heading for a convergence/collision. They are battling over the multi-billion dollar education market of content, teaching, and testing. Universities may well be reduced to degree-granting systems, as they simply have not been capable, as a whole, of adapting to the digital world (in terms of teaching, at least – they’ve adapted quite well in terms of research).

Knewton, with their adaptive learning platform, sits a bit outside of the LMS/content space. They now offer universities, organizations, and content publishers the opportunity to use their platform to provide customized, personalized learning. I’ve signed up for the service in order to learn more about how it works, but haven’t received a response yet. From the videos on the site, the best I can glean is that learning content from publishers and universities is repurposed into Knewton’s platform, which then algorithmically personalizes content in real time based on each student’s activities.

I’m curious to find out how they do this – do they scan and automatically classify content according to an ontology? Do developers have to code content by level (basic, intermediate, advanced) so that the system can deliver customized content to learners? (This model wouldn’t be true personalization – it would classify the student at a certain level and then match content to the classification scheme, rather than personalizing to the actual student.) Or do they computationally generate content for each learner? I suspect it’s a combination of the first two approaches (i.e. an ontology with a learner classification model). The Wolfram computational vision of learning content and adaptivity is still a bit in the future.
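To make that suspicion concrete, here is a minimal sketch of what an ontology-plus-learner-classification approach could look like: content tagged by topic and level, a crude classifier that buckets a learner from recent scores, and a matcher that serves content at that level. All names, thresholds, and structures here are hypothetical illustrations, not Knewton’s actual model.

```python
# Content tagged against a (very small) ontology of topics and difficulty levels.
CONTENT = [
    {"id": "alg-101", "topic": "algebra", "level": "basic"},
    {"id": "alg-201", "topic": "algebra", "level": "intermediate"},
    {"id": "alg-301", "topic": "algebra", "level": "advanced"},
]

def classify_learner(recent_scores):
    """Crude classifier: map a learner's recent scores (0.0-1.0) to a level."""
    avg = sum(recent_scores) / len(recent_scores)
    if avg < 0.5:
        return "basic"
    if avg < 0.8:
        return "intermediate"
    return "advanced"

def next_item(topic, recent_scores):
    """Return the first content item matching the learner's inferred level."""
    level = classify_learner(recent_scores)
    for item in CONTENT:
        if item["topic"] == topic and item["level"] == level:
            return item

print(next_item("algebra", [0.4, 0.6, 0.7]))  # -> the intermediate algebra item
```

As the parenthetical above notes, this is level-matching rather than true personalization: the student is fitted to a classification scheme, and the content never changes for the individual.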

Regardless of the model used, Knewton has effectively positioned itself in the short term as a partner to publishers, schools, universities, and organizations that recognize the value of analytics but don’t know how to start. In the long term, Knewton is a takeover target for Pearson, Thomson, or possibly Blackboard. If they can resist that temptation, they may well create an entirely separate category (learning and knowledge analytics) in the learning space.

1 Comment.

  1. An entirely separate category indeed!

    While there is an overwhelming push to integrate technology into education at every level, I don’t see a clearly defined model for evaluating its success.

    Identifying goals and metrics, and tracking/reporting the progress of any initiative, remains quite underdeveloped.

    I am encouraged to see articles such as this that parallel my modest implementation of educational analytics on SharePoint.

    L.