Tuesday, April 19, 2016

FutureLearn MOOC thoughts #flnetwork

Today I'm attending a meeting of the FutureLearn Academic Network, the research network for partners in FutureLearn (a consortium that operates a platform for MOOCs, Massive Open Online Courses). The meeting's taking place at Glasgow University. I'll just pick out a few things that struck me during the day.
Minji Xu (FutureLearn) compared her experience of a MOOC with that of an online degree course. She identified that the MOOC had more culturally diverse participants, which meant that people felt more included (rather than being left out because those in the national majority made assumptions about time zones, language etc.). Both courses had group work, and she liked being able to see what people had already done (profile, previous posts and portfolio) when choosing groupmates. This struck me as interesting, as it may be easier to find evidence about your classmates in online learning than, say, early on in a face-to-face class, where you might be going more on what people do or don't say in class.
Phil Tubman (Lancaster University) was talking about social learning, reflecting on interaction in FutureLearn MOOCs. He felt that existing tools and instruments (content analysis etc.) for "measuring" learning in small group discussions or extended online conversations were not necessarily best suited to investigating learning in MOOCs. Tubman identified the dimensions of sociocultural learning as: participative, interactive, social, cognitive and metacognitive (I think referring to this). He decided to focus on the interactive dimension, looking at comments and replies in 10 FutureLearn MOOCs. There was a similar curve for all the MOOCs: about 50-60% of comments that received replies had just one reply, 19-25% had two replies, with a steeply declining curve after that. From that point of view, sociocultural learning seemed low, and all the MOOCs (which were in widely different subject areas, with widely different numbers of participants) showed similar trends. You could hypothesise from this that it was something to do with the nature of the platform. Tubman proposed that people needed to be able to discover conversations that interest them (like having hashtags they could follow or search for), to be able to filter out irrelevant conversations (e.g. the large number that simply thank the person who made the original comment) and to know what was expected of them in discussion spaces.
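The reply-distribution analysis Tubman describes is easy to reproduce if you have comment data with parent pointers. As a minimal sketch (the comment records and field names here are invented for illustration, not FutureLearn's actual data model), you can count replies per top-level comment and then look at the distribution of those counts:

```python
from collections import Counter

# Hypothetical comment records: a reply carries the id of its parent comment.
comments = [
    {"id": 1, "parent": None},
    {"id": 2, "parent": 1},
    {"id": 3, "parent": None},
    {"id": 4, "parent": 3},
    {"id": 5, "parent": 3},
    {"id": 6, "parent": None},  # a comment with no replies at all
]

# Number of replies each parent comment received.
reply_counts = Counter(c["parent"] for c in comments if c["parent"] is not None)

# Distribution over comments that received at least one reply:
# how many got exactly 1 reply, exactly 2 replies, and so on.
dist = Counter(reply_counts.values())
total = sum(dist.values())
percentages = {n: 100 * count / total for n, count in sorted(dist.items())}
print(percentages)  # → {1: 50.0, 2: 50.0}
```

On Tubman's data, a histogram of `percentages` would show the steep drop-off he reports: the mass concentrated at one reply, falling away quickly after two.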
Finally this morning, Paul Browning (National STEM Learning Centre) was talking about a MOOC on Assessment for Learning for STEM teachers (aimed at pre-university teachers who teach science etc. subjects). He was focusing on their use of peer review in the MOOC (at a point where the teachers were creating and critiquing a particular type of question, a hinge-point question). The main learning point for the MOOC designers was ensuring that teachers could peer review work from their own discipline (rather than someone in, say, physics, being presented with a biology question to review). To begin with, people were randomly allocated, and they were not happy about it. They solved the problem by allowing the learner to "shuffle" if they didn't like the peer-review assignment they were presented with, which they could keep doing until they got one they liked. Enabling this learner choice led to greater learner satisfaction (and, presumably, learning) and didn't lead to "unwanted" peer assignments.
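The "shuffle" mechanic is essentially random allocation with a learner-driven redraw. A minimal sketch of the idea (the data structures, the discipline tags, and the automatic discipline-match stopping rule are all assumptions for illustration; in the actual MOOC the learner simply chose when to stop shuffling):

```python
import random

# Hypothetical pool of submissions awaiting peer review, tagged by discipline.
submissions = [
    {"id": "q1", "discipline": "physics"},
    {"id": "q2", "discipline": "biology"},
    {"id": "q3", "discipline": "chemistry"},
    {"id": "q4", "discipline": "physics"},
]

def allocate_with_shuffle(pool, reviewer_discipline, rng, max_shuffles=10):
    """Draw a random assignment, redrawing ("shuffling") until it is accepted.

    Acceptance here is modelled as a discipline match; a real learner might
    shuffle for other reasons, or accept a near-match.
    """
    assignment = rng.choice(pool)  # initial random allocation
    shuffles = 0
    while assignment["discipline"] != reviewer_discipline and shuffles < max_shuffles:
        assignment = rng.choice(pool)  # learner hits "shuffle"
        shuffles += 1
    return assignment

# A biology teacher shuffling until they land on a biology question.
result = allocate_with_shuffle(submissions, "biology", random.Random(0), max_shuffles=1000)
print(result)
```

With enough shuffles allowed, each learner ends up reviewing in their own discipline without anyone being forced into an unwanted assignment, which matches the outcome Browning reported.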
