Wednesday 12 September 2012

Some thoughts about yesterday's symposium on awareness in technology-enhanced learning

Yesterday, Katrien gave a great presentation on our most recent work. We are trying to wrap up all our case studies and draw some general conclusions from our experience with learning dashboards.

One of the main criticisms we received concerned the kind of evaluations we perform with students: typical standardized forms that capture the users' perception of the tools rather than actual usage data... and I agree that we cannot stop at this level... we are taking steps forward in this area.

We are also trying to find correlations between different kinds of activities. Another criticism was about this point: if the activities are mandatory... it's normal to find correlations. I think that is a totally understandable assumption, but my research in this field says something different...

To summarize what I am doing... I am building dashboards that visualize different kinds of traces: tweets, blog posts, comments, papers read and time spent. We deploy these dashboards in Erik's courses, which follow a kind of 'open learning' approach where we encourage students to share their results and opinions through the above-mentioned social media. The line between encouraging and making mandatory is pretty thin, so for sure there is some bias in the results.
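
Just to make the setup a bit more concrete, here is a minimal sketch (with hypothetical field names, not our actual pipeline) of how traces from different sources can be aggregated per student before they reach the dashboard:

```python
from collections import defaultdict

# Hypothetical trace records: (student, source, value). In reality the traces
# come from different collectors (Twitter, blog feeds, time trackers), but the
# aggregation step looks roughly like this.
traces = [
    ("alice", "tweets", 1),
    ("alice", "blog_comments", 3),
    ("bob", "tweets", 2),
    ("bob", "time_spent_min", 45),
]

# Sum the activity per student and per source.
per_student = defaultdict(lambda: defaultdict(int))
for student, source, value in traces:
    per_student[student][source] += value

# The dashboard then visualizes one row per student.
for student, sources in per_student.items():
    print(student, dict(sources))
```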

Based on that first assumption, if something is mandatory, you will find correlations with the grades... this is not true... for instance, commenting on each other's blogs and tweeting are equally 'mandatory' (although these activities do not influence the grades), so based on this assumption both should correlate with the grades... sorry! There is no significant correlation in... (I'll let you guess which variable shows no significance, and try to guess why!).
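
To be explicit about what 'significant correlation' means here, this is a minimal sketch, with made-up numbers and variable names (not our real course data), of the kind of test I run per activity type:

```python
import numpy as np
from scipy import stats

# Made-up example data: one value per student.
grades        = np.array([12, 15, 9, 17, 14, 10, 16, 11])
blog_posts    = np.array([ 4,  6, 2,  8,  5,  3,  7,  2])   # graded activity
blog_comments = np.array([ 5,  1, 7,  2,  6,  3,  1,  8])   # 'mandatory' but not graded

# Pearson correlation plus p-value; only correlations with p < 0.05
# would count as significant.
for name, activity in [("blog posts", blog_posts), ("comments", blog_comments)]:
    r, p = stats.pearsonr(grades, activity)
    print(f"{name}: r = {r:.2f}, p = {p:.3f}")
```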

We build dashboards to help students and teachers (I prefer the idea of helping students... they are a bigger challenge from my point of view). How can we help them? By giving them metrics that help them understand their performance.

I really liked Marcus Specht's presentation yesterday. They also work a lot on awareness and reflection in learning, and I could see yesterday that they apply it in other interesting fields as well. Like us, they get a lot of inspiration from the quantified self movement. But I also think it is a bit dangerous to oversimplify: the quantified self movement starts from a motivation of self-knowledge or personal goals... learning has a mix of motivations... even more so when a learning activity such as "using a learning dashboard" is not mandatory, because we consider it optional: a master or bachelor student should be almost autonomous in their learning decisions.

Why did I start talking about the quantified self movement? Because, from my point of view and from the conclusions of my evaluations, understandability and motivation are linked. Every visualization, dashboard or tool has its own learning curve. If it's complicated, nobody will use it (another reason why we still perform usability tests). So the metrics and visualizations should be easily understandable.

I performed an evaluation with HCI students to compare two of my prototypes: a mobile version vs. a big table. Some of the students who preferred the mobile version pointed out that they still wanted the big table available, because they wanted to understand their own results. Others commented that they wanted to use the big table exclusively, because they wanted to draw conclusions by themselves.

I agree that algorithmic and computational efforts can make a big contribution to the field; I strongly believe they are completely necessary... but we are developing tools intended to help users, and users have their own feelings and opinions; if they do not like the tool, they will try to avoid using it.

Algorithms can do a good job... but they can also be wrong... and being grouped into some kind of cluster is not always nice... for instance, I am following some courses on Coursera. I am sure that people who participate in forums, meetups and so on will learn much more than I do... I am sure of it... If the system gave me feedback, it would probably categorize me in the group of 'slackers', totally ignoring my priorities in life... maybe I am a person who works in a factory twelve hours a day and arrives home very tired, with just enough time to watch the videos and nothing else... or maybe I do not work twelve hours, but I recently had a baby who requires most of my attention... or maybe it's true! I'm a slacker... but how do you differentiate between the three cases? I don't think it is an easy task... in the same way, it is hard to define in advance what conclusions your algorithm should draw from the data... and another point: the data itself. When do you start to have a high-quality dataset large enough to draw conclusions? Coursera courses are six weeks long... how long do you have to wait until your dataset works?
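
To illustrate the point with a toy sketch (made-up numbers, not any real Coursera data or algorithm): a simple clustering on activity counts would put the factory worker, the new parent and the genuine slacker in exactly the same cluster, because their traces look identical:

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy features per learner: [forum posts, videos watched, meetups attended].
# The last three learners have identical low-activity profiles, for very
# different reasons that the data simply does not contain.
X = np.array([
    [25, 40, 3],   # very active learner
    [18, 35, 2],   # active learner
    [ 1, 30, 0],   # factory worker, twelve-hour days
    [ 1, 30, 0],   # new parent
    [ 1, 30, 0],   # actual slacker
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # the three low-activity learners inevitably share a cluster
```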

I am not claiming that our way of doing things is the correct one... I really like the idea of recommenders, profiling users and so on... but maybe, and only maybe... given that, for the moment, learners are not so interested in the results of our European projects... maybe it is because we do not understand them... and for that, we do not need to build great, fancy and elaborate applications; we need to evaluate everything and try to understand them... and my hope is that with a little more time... my PhD and our work can shed a bit of light in the middle of the darkness... :)