Wednesday 12 September 2012

Some thoughts about yesterday's symposium on awareness in technology-enhanced learning

Yesterday, Katrien gave a great presentation about our most recent work. We are trying to wrap up all our case studies and draw some generic conclusions from our experience with learning dashboards.

One of the main criticisms we received concerned the kind of evaluations we perform with students: typical standardized forms that capture the user's perception of the tools rather than actual usage data... and I agree that we cannot stop at this level... we are taking steps forward in this area.

We are also trying to find correlations between different kinds of activities. Another criticism was about this point: if the activities are mandatory, it's normal to find correlations. And I think that is a totally understandable assumption, but my research in this field says something different...

Trying to summarize what I am doing... I am building dashboards that visualize different kinds of traces: tweets, blogs, comments, paper reads and time spent. We deploy these dashboards in Erik's courses, which follow some kind of 'open learning' approach where we encourage students to share their results/opinions through the social networks mentioned above. The line between encouraging and making mandatory is pretty thin, so for sure there are biased results.

Based on the first assumption, if something is mandatory, you will find correlations with the grades... but this is not true. For instance, commenting on each other's blogs and tweeting are equally 'mandatory' (although these activities do not influence the grades), so based on this assumption, both should correlate with the grades... sorry! There is no significant correlation in... (I'll let you guess which variable has no significance, and try to guess why!).
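To make the kind of check I'm talking about concrete, here is a purely hypothetical sketch (all the numbers are invented for illustration, they are not our course data): compute each activity's correlation and p-value against the grades and see which one comes out significant.

```python
# Hypothetical sketch: do per-student activity counts correlate with grades?
# All numbers below are invented for illustration; they are not our course data.
from scipy.stats import pearsonr

tweets   = [12, 30, 5, 44, 21, 9, 35, 17]   # tweets per student
comments = [2, 3, 3, 2, 3, 2, 2, 3]         # blog comments per student
grades   = [11, 15, 9, 17, 13, 10, 16, 12]  # final grades out of 20

for name, activity in [("tweets", tweets), ("comments", comments)]:
    r, p = pearsonr(activity, grades)
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"{name}: r = {r:.2f}, p = {p:.3f} ({verdict})")
```

With these invented numbers, tweets would correlate with grades while comments would not: two equally 'mandatory' activities do not have to behave the same way statistically, which is exactly the point.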

We build dashboards to try to help students and teachers (I prefer the idea of students... they are a bigger challenge from my point of view). How can we help them? By giving them metrics that help them understand their performance.

I really liked the presentation by Marcus Specht yesterday. They also work quite a lot on awareness and reflection in learning, and I could see yesterday that they do it in other interesting fields as well. They also get a lot of inspiration from the quantified self movement, as we do. But I think it is a bit dangerous to oversimplify the comparison... the quantified self movement starts from a self-knowledge or self-goal motivation, while learning has a mix of motivations... even more so when a learning activity such as "using a learning dashboard" is not mandatory, because we think it should be optional: a master or bachelor student should be almost autonomous in their learning decisions.

Why did I start talking about the quantified self movement? Because understandability and motivation are linked, from my point of view and from the conclusions of my evaluations. Every visualization, dashboard or tool has its own learning curve. If it's complicated, nobody will use it (another reason why we still perform usability tests). So the metrics and the visualizations should be easily understandable.

I performed an evaluation with HCI students to compare two of my prototypes: the mobile version vs the big table. Some of the students who preferred the mobile version pointed out that they still wanted to have the big table available, because they wanted to understand their own results. Others commented that they wanted to use the big table exclusively, because they wanted to draw conclusions by themselves.

I agree that algorithmic and computational efforts can make a big contribution to the field; I strongly believe they are completely necessary... but we are developing tools intended to help users, and users have their own feelings and opinions: if they do not like the tool, they will try to avoid using it.

Algorithms can do a good job... but they can also be wrong... and being placed in some kind of cluster is not always nice. For instance, I am attending some courses on Coursera. I am sure that people who participate in forums, meetups and so on will learn much more than me... I am sure of it. If the system gave me feedback, it would probably categorize me in the group of 'slackers', totally ignoring what my priorities in life are. Maybe I am a person who works in a factory twelve hours per day, and I arrive home very tired, with just enough time to watch the videos and nothing else... or maybe I do not work twelve hours, but I recently had a baby who requires most of my attention... or maybe it's true! I'm a slacker... but how do you differentiate between the three cases? I think it is not an easy task... in the same way, it's hard to define beforehand what conclusions your algorithm should draw from the data. Another point: the data itself... when do you start to have a dataset of high enough quality to draw conclusions? Coursera courses are 6 weeks long... how long do you have to wait until your dataset works?

I am not claiming that our way of doing things is the correct one... I really like the idea of recommenders, profiling users and so on... but maybe, and only maybe... if, for the moment, learners are not so interested in the results of our European projects... maybe it is because we do not understand them... and for that, we do not need to build great, fancy and elaborate applications; we need to evaluate everything and try to understand them... my hope is that, with a little more time, my PhD and our work can provide a bit of light in the middle of the darkness... :)


Tuesday 5 June 2012

Invitation to participate in a survey funded by the European Commission

A colleague is conducting a survey on why some innovative SMEs do or do not take part in R&D projects funded by the European Commission. If you own an SME or work for one, I think your input can be highly relevant for this study. Hopefully, the results can help to improve the funding programmes, making them a useful way to provide resources to the true backbone of the European economy.

I hope you can contribute your highly valuable opinion. If this is your situation, please read below and share it with other SMEs.

 
*Please feel free to disseminate. Apologies for any cross-postings*
We are currently running a survey on the reasons why some innovative SMEs (small and medium-sized enterprises) in the ICT sector do or do not take part in R&D projects funded by the European Commission, and we would very much like to have your opinion.

By taking part in the survey, funded by the European Commission, you will not only be ensuring that ICT SMEs will influence the Commission's project planning, but you will also be entered into a draw to win an iPad. You can also undertake a free innovation audit for your company.

If you are interested in more information about this study, please visit our website: www.smenonparticipation.eu


To take part in the survey you should be either an innovative ICT SME or an ICT Association from the EU-27 or associated countries (including Switzerland, Israel, Norway, Iceland, Liechtenstein, Turkey, Croatia, Macedonia, Serbia, Albania, Montenegro, Bosnia & Herzegovina, Faroe Islands and Moldova).

The survey should take no more than 15 minutes to complete and does not require any prior knowledge of R&D funding programmes. It is currently available in English, and will soon be available in French and German as well. If you have any questions please get in touch directly by email (noaa.barak@theia.eu) or by filling in this contact form.

Our target is to collect the views of as many companies as possible so that our findings truly represent the views of companies in your sector. We would appreciate it if you could pass this invitation on to your contacts, if you feel it could be of interest to them.

Thank you in advance for your support, and good luck in the draw!
 

Friday 25 May 2012

New prototype and playing with quartiles and outliers in STEP UP!

"Subliminal" advertisement - REMEMBER: we are organizing LAK'13. Are you ready for your submission? Do you have a good, amazing, original idea? Come on! Let's do it! ;) - the "subliminal" advertisement is now finished - yes... I know... I don't really understand the concept of subliminal :)

As I already mentioned at the end of my presentation at LAK'12, we are trying to simplify STEP UP!.

First, we did a small prototype. You already know our iterative process methodology, don't you? Otherwise, read one of our papers! ;)



We didn't evaluate this prototype because I had a PhD meeting with Katrien and Erik, and during the discussion the idea came up of developing it for mobile devices. So I moved the code to jQuery Mobile, which gave us the following result:

What did we change from one prototype to the other?

Basically, both follow the same concept except for the colors. In the first prototype, we used:
  • red: Bad student!
  • yellow: Careful! Maybe you should work more, shouldn't you?
  • green: Good boy/girl! Good student!
But we realized that we cannot say that from the activity on a social network, at least not with the analysis that we do. We try to encourage students to reflect on their data; we do not intend to say: "You are a good/bad student". So we decided to give the colors a different meaning:
  • blue: cold activity. Dude! Your activity is lower than your peers'. It's up to you! Maybe you don't need anything from the community, but maybe the community needs something from you. We are also learning how to become good open learning students. Share your learning and knowledge for free and you will receive something back... not sure why, when and how, but do it and you will see!
  • green: you have average activity... you are participating like most of your peers. It does not mean that your contributions are good, but you at least have the habit of contributing to the community, and that is also part of the process.
  • red: "You are in the hot zone". What is going on with you? Are you addicted to social networks? Are you addicted to studying? Go out into real life and enjoy a beer with your peers! Just kidding... for sure, this is not the message that we want to send to the students; on the other hand, it is quite similar... the student is participating above the average activity. Is it really necessary? If others are not so active, why are you contributing so actively? Reflect on that! If you really need it, do it! But it's important to be aware of this aspect.
Is the prototype already plugged into real data?

Yes, it is! We have already started to play with student data. For instance, the screenshot above contains real data from this week. We have already finished the lectures of the course, so this week there is not so much activity.

Once we had the prototype, we had to decide on the criteria to translate activity data into a percentage to fill the bar. First, we thought of the arithmetic mean, and we implemented it. But we were not totally convinced... why? How do you detect outliers?

So we decided to go for the concept of box plots, calculating quartiles and outliers. We found a really easy way to calculate quartiles that made our work easier. And how can we calculate the outliers? That is also very simple:

  • IQR = Q3 - Q1
  • Up outliers: activity > Q3 + 1.5*IQR
  • Down outliers: activity < Q1 - 1.5*IQR
All the students with an activity between Q1 and Q3 have their bar filled with color, with a percentage between 25% and 75%.

Students with an activity below Q1 go from 0 to 25% (in blue), and those above Q3 go from 75 to 100% (in red). Outliers are assigned 0% and 100% respectively.
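As a sketch of the whole mapping, here is my own Python reconstruction of the rule just described (this is not the actual STEP UP! code; the function and variable names are invented):

```python
# Sketch of the quartile-based mapping described above.
# Not the actual STEP UP! code; function and variable names are invented.
import numpy as np

def activity_to_percentage(value, activities):
    """Map one student's activity count to a 0-100% bar fill,
    using the quartiles of the whole group's activity."""
    q1, q3 = np.percentile(activities, [25, 75])
    if q3 == q1:                     # degenerate group: everyone alike
        return 50.0
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr   # outlier fences

    if value <= low:                 # down outlier: empty bar
        return 0.0
    if value >= high:                # up outlier: full bar
        return 100.0
    if value < q1:                   # 'cold' zone (blue): 0-25%
        return 25.0 * (value - low) / (q1 - low)
    if value <= q3:                  # average zone (green): 25-75%
        return 25.0 + 50.0 * (value - q1) / (q3 - q1)
    return 75.0 + 25.0 * (value - q3) / (high - q3)  # 'hot' zone (red)
```

For example, with activity counts 1..20 the median student lands exactly at 50%, and anything beyond the fences is clamped to an empty or full bar.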

What do you think? Does it make sense? If you have a better idea, don't hesitate to share it with us!
  

Wednesday 9 May 2012

LAK'12 conclusions

After a few days, I've had time to think about the LAK'12 conference.

BTW, really important note: Next year, we are going to organize LAK'13 in Leuven! So prepare your submissions... we are waiting for your contribution!

But how do we see learning analytics? You can check some slides from my professor, Erik Duval, or some of his thoughts about Educational Data Mining vs Learning Analytics.

But, what are my conclusions after LAK'12?

I would summarize it with a really nice photo that I took during my days off in Vancouver.


What does this photo mean to me?

The conference is finished, but now we have a nice picture of what our community in learning analytics is working on. The mountains (our goals) are still far away, but we only have to swim (to work) to get there. One day is over, but we are sure that tomorrow another great day is going to start. I think it's really nice when you finish a conference with this kind of feeling.

It was a really nice experience that gave me a lot of food for thought. I would highlight a really nice talk with Jon Dron, who during the demo session gave me some interesting pointers (i.e. The Design of Everyday Things and Dr. Vive Kumar (I still have to take a look at his work)), and with whom I had the opportunity to discuss issues regarding privacy and my PhD topic.

Another positive aspect is reading the conclusions from others (e.g. Abelardo's blog, Doug Clow's blog, Audrey Watters' blog, ...). People share their thoughts about the topic... the common conclusion after every conference is that sharing knowledge is the key to progress in a research field, and there are people doing a great job at that! So I can only say: thanks, guys, for sharing your thoughts with all of us!

After my presentation, I also had a really nice talk about how to engage people in the reflection process. One easy argument is to include this process in the learning itinerary. However, we are often tracking sensitive data and we cannot force students to give it to us (e.g. tracking data beyond the LMS, a problem we share with Abelardo's group's work). In our case, we track data from different systems, and students have to give us their API keys so that we can access their own data. If we include the reflection process in the learning itinerary, the tracking becomes mandatory... and somehow the final feeling is that everything is corrupted (in addition, it can be against the law). We are trying to engage students in the process of open learning: show and share your thoughts with the world, and somehow it will come back to you with some additional food for thought. Sharing your information and reflections should be a voluntary and participatory process. That is our premise.

Also, I had a really nice talk with David García-Solórzano from the Open University of Catalonia regarding his paper "Educational Monitoring Tool Based on Faceted Browsing and Data Portraits". Damn! They are doing really nice work. I really encouraged him to evaluate their prototype because I think it has a lot of possibilities. They are concentrating a lot of information and I think they need to get some feedback now. Also, it was really nice to hear that he got a lot of inspiration from the paper "Attention please! Learning analytics for visualization and recommendation" by Erik Duval. You feel somehow lucky and think: "Yeah! And I am doing my PhD with him". Afterwards, you wonder why he still hasn't fired you after you questioned all his ideas in the PhD meetings... I guess it's the PhD student syndrome: we think we know more than we actually do... or maybe it's my personal syndrome, but I feel better thinking that others share the same problem... as the saying goes:

"It is a fool's consolation to think everyone is in the same boat"

Cheers! ;)

Wednesday 25 April 2012

Preparing LAK'12 presentation!

I was thinking of writing some lines about my presentation at the LAK'12 conference. The presentation is about the paper Goal-oriented visualizations of activity tracking: a case study with engineering students, and just now, taking a look at the program, I saw that it will be broadcast by video streaming. It is getting funnier: before, I was just worried about giving a presentation to approximately one hundred attendees... now it will be broadcast... so... more people... even more, I guess it will be recorded, so... more fun? :-) Anyway... summarizing... it is a bit scary, isn't it? I think I'll never get used to presenting my work, although it's always a nice learning experience (once it's finished :-))

However, thinking about it, my conclusion is that I don't have anything to worry about... because I have a super presentation, thanks to the feedback I got from my colleagues in an internal try-out presentation.

I would like to share it with you and maybe get some additional feedback. I am still not convinced about mixing bar charts and box plots to show similar information; however, the SUS questionnaire is not represented using box plots because the result was a bit weird. Anyway, any feedback is welcome.

So... here you go!

Thursday 22 March 2012

Twitter, blogs, tinyarm and a possible experiment

I have been playing with graph libraries to visualize social relationships... you know, the kind of stuff that nobody has done so far... (irony? ;)) So yes... I was playing with these libraries and thinking about what kind of experiment could involve these visualizations.

I haven't really thought about it in depth, but I remembered a conversation with Luis de la Fuente (when he was visiting us): they (he and Katrien) were thinking of setting up an ethnographic experiment with Twitter, doing some kind of comparison between Belgian and Spanish behavior... after some time brainstorming, we didn't see the point... and we finished with the unusual (irony again) sentence: "we have to think more about it", and we didn't talk about the topic anymore...

But now, just thinking, an idea came to my mind... we could set up an experiment with four different research groups in different countries(?). The scope could be master thesis supervision.

What would be the requirements?
  • Similar approaches using social networks in the supervision of master theses. For instance, Twitter, blogs and TinyARM to report the read/skimmed papers (it is not a hypothetical example... it is how we supervise ours ;))
  • The topic of the research groups should be similar.
  • The location of the groups could be two from the center of Europe (Belgium-Germany) and two from the north of Africa... sorry... the south of Europe (Portugal-Italy-Greece-Spain) (the order of the countries is not random). But preferably countries that share the roots of their languages, so Greece could be discarded.
  • All the communication should be in English.
  • All the groups will share the same hashtag on Twitter.
  • All the papers in TinyARM should also be in English.
What could we study?
  • Are new links established between students in different countries?
  • If yes, does it make sense to think that there is some intercultural relation?
  • Maybe the topic is more important.
  • In which system do students create more new relationships? For instance, on Twitter sending tweets to each other, commenting on each other's blogs, or even in TinyARM reading papers that others have read or recommending papers.
And for sure... STEP UP! could be the central point of information... ;)

I guess it is a very open experiment where we could study many factors; for sure, we have to think more about it. Damn! The forbidden sentence... now such an experiment is condemned to obscurity... Anyway... it was a nice exercise to write it down...

Keep in mind that you are the only one who can prevent the predictable and sad fate of this experiment! ;)

Feedback is welcome; for sure, I can learn from you! :) Thanks in advance!

Friday 24 February 2012

Meeting with one of my assessors

Wow! I was a bit... I don't know the correct word... afraid? I arranged a meeting with Andrew Vande Moere to sign my PhD plan. Would he criticize my PhD plan too much? Ok... I know... the plan went through several iterations with Erik and Katrien, but I was still aware that it was a bit open, facing my next two years of PhD.

I am working in learning... and learning is affected by multiple factors. We can control some of them, but there may be others we are not aware of (or cannot control, because they are external). Sometimes I am a bit afraid... I have a really good scenario to get a good PhD. We are experimenting with real students, so I can do really nice experiments; however, after every evaluation I end up with the feeling that I could have gotten a more valuable evaluation (it also relates to my usual non-sexual sadomasochistic tendencies; I don't know why, but I'm always thinking that I did something wrong).

Anyway, let's get to the conversation. In the last post I wrote about concepts such as awareness, meaningfulness and usefulness. After the conversation, I added two concepts to my TODO list: trust and robustness. Btw, sometimes I have the feeling that my thesis will be like the Collins dictionary, a collection of concepts with their context-dependent definitions. Or maybe it will end up as an ontology... a self-definition of concepts that nobody uses except the owners (Oooops! Sorry! I don't want to offend pro-ontologists ;))

Andrew pointed out these concepts as something I should consider in the evaluations. Why? Because they are a really important part of the learning process: an iterative feedback cycle between students and teachers. If students don't trust the teacher, would teaching make sense? We are trying to increase the awareness of our students through some kind of feedback (STEP UP! - the dashboard, for those who don't remember the name ;)). I really like this picture: trust is reliability plus delight, and in this case, reliability relies (I know it's redundant) on robustness. How will students increase their awareness if they don't trust the system? And just thinking about it, the thesis students scenario is ideal for evaluating this. We have students, we have supervisors, and we have the activity of both in the social networks. In addition, students and supervisors do not have the same motivation to work on the topics. The master students haven't seen the visualizations yet, so we have the students' perception of how they are performing before the dashboard. We can show them the dashboard and ask them about their perception after the reflection. Afterwards, we can ask supervisors about their perception of how each student is doing, and, finally, we can mix motivation on the topic with social network activity. This includes posts, tweets and read/skimmed papers, but even more importantly, the comments received on their blogs from their peers and supervisors. These parameters can therefore show us whether they are performance indicators. That was another part of the conversation, and it also relates to trust:

What do we visualize?
It also relates to visual storytelling concepts. What is the message? What is the goal of the visualization? I think I already mentioned this concept in other posts. Andrew explained to me how they use visualizations to display energy consumption. He told me that it makes no sense to show consumed energy without explaining the context, for instance, where the sensors are and some additional contextual information.

We talked about more things, but I was trying to summarize... although, as you can see, I'm not good at it! He also offered me an additional testbed and recommended one paper to read.

So conclusions of the meeting: It was a really productive meeting!

Monday 20 February 2012

Usefulness, meaningfulness and awareness

Last week, I had a couple of discussions with my colleagues Sten and Gonzalo about these terms. One common topic in our research is awareness... we try to increase the user's awareness of what s/he is doing, and we use the (sub-)community to contextualize that activity. To this end, Gonzalo uses activity streams in TinyARM (Haven't you tried it yet? Do it! It's a really cool tool for research awareness) and I visualize activity streams (posts, comments, Twitter, Toggl and soon TinyARM activity).

Ok, two different methods to achieve awareness... but how do we evaluate whether our tools provide awareness or not? Most of the papers that I've read so far report evaluations of conclusions extracted from the visualization or the tool in general... from my point of view, that relates to the meaningfulness of something. Sure! It is a really important step! The user reflects on something meaningful and becomes aware(?) of their conclusions...

(?) Can we say that someone becomes aware of something if there is no change in her/his behavior?

(?) What is the proof that someone becomes aware?

From my point of view, a change of behavior can be a proof of awareness; however, someone can change their behavior for different reasons. And no change in behavior does not mean anything... maybe s/he feels ok with her/his behavior and does not need such a change... or maybe s/he does, but the trigger is not strong enough.

So what is a solution?

Sten sent me several user experience/design models (1, 2 and 3) last week, and my conclusion is that we should take a more pragmatic point of view. All of them share (directly or indirectly) concepts such as requirements, satisfaction and user needs. These concepts are also related to marketing. And ok... we cannot claim that we satisfy a predefined user need just because the user comes back to our tool and uses it... but at least we know that something is going well when the user does. We cannot claim that all users share the same needs and that we cover them... but we can say that we are providing a service and that users consider it useful for some reason.

I have to think further about this and, even more importantly, about metrics that can be meaningful for us... I believe that usefulness (not perceived usefulness) is the key.

If anybody has some useful pointers, don't hesitate to tell me about them!

Thursday 9 February 2012

Step Up! Because sometimes the name really matters....

Sometimes I feel a bit... how can I say it? Silly... We are trying to find a suitable name for our application... Yes! Because marketing is also important in research... in fact, I think it is becoming more and more important... sometimes I think it is even more important than the research itself... but anyway... I'll drop these ideas and start to discuss the name itself.

So... yes... sometimes I feel a bit weird because we discuss names before we have our application in a stable version, even before we know that the concept works...

Yes... it sometimes seems obvious that the concept works... For instance, in our case, most people think that increasing your awareness is positive! But how you will achieve this goal, and whether people will have enough time to spend on your proposed solution, is very often not so clear.

Ok... so nobody understands why we look for a name at such an early stage... but there is an explanation... a name can define ideas... can send a message... can explain a concept... for me, a name is like a slogan in an election campaign... Would Obama have won without the slogan "Yes, we can!"? I don't know, but it's clear that it had a real impact on people.

So we have been thinking of different names such as:
  1. LYA -> Learning from your activity.
  2. Step Up!
  3. SuP! -> Step Up!
  4. Learnograph
  and other names that could be representative of our application. But the one we selected is:


And maybe the question now is: What does it mean to me?

STEP UP YOUR AWARENESS!

Because we are too worried about achieving goals, getting certificates and so on. Outside the educational context it is the same... what really matters is to buy a new car, a new house, a new phone... but what happens with the process? Maybe if we were more aware of the process... our choices, goals and achievements would change...

And this is what STEP UP! tries to change, telling the students:

STOP! LOOK WHAT THE OTHERS ARE DOING!
ARE YOU ON THE RIGHT PATH?
THINK ABOUT IT!
MAYBE YOU CAN LEARN SOMETHING
MAYBE YOU CAN APPLY WHAT YOU LEARNT

And that is all that STEP UP! means to me. It is a tool for students, a tool for teachers, a tool for users who want to enjoy the process instead of just achieving something, because the process is where we spend most of our time... Success, achievements, goals are ephemeral... but we keep our learned lessons all our life...

Tuesday 7 February 2012

Blogs viz and twitter aggregation

After a while, I have finally decided to post about the new visualization of blog posts/comments and the Twitter integration.

The usual problem is that the decisions you take (related to architecture/development) have consequences for the next iterations, so you have to stop and think a bit about them.

Now we are working on visualizing the blogs of our thesis students, but they also have to tweet. So the ideal situation is to merge both visualizations to get an overview of the students' activity. You can see the visualization here. And you can find a small screencast below.


There are two tables: the first one for the authors, and the other for external people (supervisors, colleagues of the authors, or simply people who decided to comment on the blogs). There is a legend with three colors (green means posts, blue means comments and red means no activity at all).

You can interact with the table in different ways:

  • If you click on the headers, the rows are sorted by activity.
  • If you click on a header with the right mouse button, you can access the blogs.
  • If you click on a cell, you get a tooltip with extra information. Click again to hide it.
  • If you click on the sparklines, you get a bigger visualization that shows the aggregated data of comments and posts per week of the year.
Now the question is:

How to add twitter information to the current table?

Just thinking: in the end, Twitter in this visualization can be like a blog, an additional source where users do some activity.

However, we are also thinking of aggregating other data, from Toggl. Ok... it can be another column. And every additional source can be an additional column, but in the end you get a huge overview of your classroom activity.

For instance, another use case that I'm thinking about is a course that I'm teaching. It is about learning management systems and technological resources. The students learn how to manage different LMSs over several weeks. In this period, they have to tweet and blog about web 2.0 tools and/or reflect on their weekly activity. So another scenario could be something like blog activity, Twitter activity and activity in the different LMSs: the number of activities, resources, lessons and so on. But it is always the same... I'm not sure whether the column system is a good approach. In addition, there are more than 100 students... that means more than 100 rows in the table... ok, we can minimize the impact by adding filter and sorting functionality, but sometimes you lose a bit of the overview. Maybe the solution is to add additional metrics to capture the overview, but the question here is... what are meaningful metrics? Once you decide to provide metrics, you are trying to drive the conclusions. As a teacher, maybe this is the goal. But I'm not sure whether students expect that from a learning dashboard, or what data they want to look into. Yes... for sure... you can ask them, but it is not such an obvious approach, because sometimes they have never thought about what is meaningful for them. Anyway, life is a matter of decisions... and wrong decisions mean a lesson that can be learned for the future... as Erik usually says, life is learning.
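To illustrate the 'one column per source' idea, here is a hypothetical sketch (the students, sources and counts are invented, and this is not our actual implementation): pivot raw activity events into one row per student and one column per source, plus a simple aggregate metric for sorting and filtering when the table grows large.

```python
# Hypothetical sketch of the 'one column per source' overview table.
# Students, sources and counts are invented for illustration.
import pandas as pd

events = pd.DataFrame([
    ("an", "blog", 4), ("an", "twitter", 10), ("an", "lms", 7),
    ("bo", "blog", 1), ("bo", "twitter", 25), ("bo", "lms", 2),
    ("cy", "blog", 6), ("cy", "twitter", 3),  ("cy", "lms", 12),
], columns=["student", "source", "count"])

# One row per student, one column per activity source.
overview = events.pivot(index="student", columns="source", values="count")

# A simple aggregate metric to sort/filter by when there are 100+ rows.
overview["total"] = overview.sum(axis=1)
print(overview.sort_values("total", ascending=False))
```

The open question from the paragraph above remains, of course: a "total" column is easy to compute, but whether it is a meaningful metric for students is exactly what needs evaluating.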

I will tell you more about my decision and what I have learnt in the next post! Because that is what really matters... what we learn from our research...