Friday, April 22, 2011

Experimental Draft 2

This week I was looking into research on the efficacy of video lecture learning versus traditional in-class lectures. There have been many studies in this area recently as video lectures become more prevalent, and the general consensus of the results is that there is no statistically significant evidence that video lectures are less effective than in-person lectures, although the majority of people prefer the experience of a regular lecture [1,2]. There is also some evidence that re-watching recordings of lectures you have already attended in person can improve how well you do in a course [1]. There has also been some research into how best to present video lectures for optimal learning [1,3]. In order to test a hypothesis on distance vs. traditional learning, I would need to measure students' learning in some way, and this can be done by administering quizzes, so I then researched a bit into quizzes and learning [1,4]. I had planned on doing in-class quiz trials and possibly online lecture quiz trials, but after reading about [7], testing quiz efficacy for online learning through Wikipedia seemed like a great approach (and also a great way to get participants, since so many people seem interested).

Papers:
[1] M.B. Wieling and W.H.A. Hofman. "The impact of online video lecture recordings and automated feedback on student performance." Computers & Education, 2010.
[2] Benjamin E. Schreiber, Junaid Fukuta, and Fabiana Gordon. "Live lecture versus video podcast in undergraduate medical education: a randomised controlled trial." BMC Medical Education, 2010.
[3] Tanja Jadin, Astrid Gruber, and Bernad Batinic. "Learning with e-lectures: the meaning of learning strategies." Educational Technology & Society 12.3 (July 2009): 282-288. Academic OneFile, Gale, Stanford University Libraries, 22 Apr. 2011. http://find.galegroup.com/gtx/start.do?prodId=AONE&userGroupName=stan90222
[4] Carol Sansone, Tamra Fraughton, Joseph L. Zachary, Jonathan Butner, and Cecily Heiner. "Self-regulation of motivation when learning online: the importance of who, why and how." Educational Technology Research and Development 59.2 (April 2011): 199-212. Academic OneFile, Gale, Stanford University Libraries, 22 Apr. 2011. http://find.galegroup.com/gtx/start.do?prodId=AONE&userGroupName=stan90222
[5] Harm-Jan Steenhuis, Brian Grinder, and Erik Joost De Bruijn. "The use(lessness) of online quizzes for achieving student learning." International Journal of Information and Operations Management Education 3.2 (2010): 119. Academic OneFile, Gale, Stanford University Libraries, 22 Apr. 2011. http://find.galegroup.com/gtx/start.do?prodId=AONE&userGroupName=stan90222
[6] Danette Ifert Johnson and Kaleigh Mrowka. "Generative learning, quizzing and cognitive learning: an experimental study in the communication classroom." Communication Education 59.2 (April 2010): 107-123. Academic OneFile, Gale, Stanford University Libraries, 22 Apr. 2011. http://find.galegroup.com/gtx/start.do?prodId=AONE&userGroupName=stan90222
[7] http://www.reddit.com/r/AskReddit/comments/gubge/if_wikipedia_were_to_have_a_quiz_section_at_the/

Hypotheses: I could test multiple hypotheses related to quizzes and/or online vs. traditional learning. One is that multiple shorter quizzes placed periodically throughout the lecture/material are more effective than a single longer quiz at the end. I also think that providing immediate feedback on quizzes, versus delayed feedback or no feedback, will increase the user's satisfaction and learning. I can also test the effect of notifying the user of a quiz beforehand (or not) on quiz performance and learning, and the effect of providing a "pre-quiz" beforehand on post-quiz performance and learning. Finally, I could test the effect of positive feedback (even fake positive feedback) on user performance - for example, by keeping track of the user's percentage of correct answers in real time and letting them see it (but making the number up for some users).
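As a rough illustration of that last hypothesis, here is a minimal sketch of how the score shown to the user could be manipulated for some participants. The condition labels and cutoff numbers are hypothetical placeholders, not settled design decisions.

```python
# Minimal sketch of the manipulated real-time score display described above.
# Condition names and the 90/40 cutoffs are hypothetical placeholders.

def displayed_accuracy(num_correct, num_answered, condition):
    """Return the accuracy percentage shown to the user.

    'truthful' shows the real running score; 'inflated' and 'deflated'
    always show a score well above or well below a typical average,
    regardless of the user's actual performance.
    """
    if num_answered == 0:
        return None  # nothing to show before the first answer
    real = 100.0 * num_correct / num_answered
    if condition == "truthful":
        return round(real)
    elif condition == "inflated":
        return max(round(real), 90)   # always looks well above average
    elif condition == "deflated":
        return min(round(real), 40)   # always looks well below average
    raise ValueError("unknown condition: %s" % condition)
```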

Experimental Method: I think generating a Wikipedia quiz website would be a great way to test several of these quiz hypotheses and garner a larger number of participants than an in-class quiz trial. The experiment would involve picking several subjects of general interest on Wikipedia and coming up with several quiz questions for each subject based on the material in the page. Then several different versions of a Wikipedia quiz site would be generated, possibly including:

1. The article followed by several quiz questions at the end
2. The article with single questions intermixed with the subsections
A. The viewing page with a notification that quizzes were present
B.  The viewing page with no prior notification that quizzes would be appearing. 
AA. The viewing page keeping track of the user's performance in real time
BB. The viewing page always showing the user's performance as well above average
CC. The viewing page always showing the user's performance as well below average

Setups 1 and 2 could be combined with any of the other setups. The A/B comparison would show any possible effect of having prior knowledge of the quiz on quiz performance and on learning. The AA/BB/CC comparison could show any effect of positive / negative feedback on user performance. I am still considering different variations of these trials, or different trial setups entirely - the pilot study might help with this. For all trials, after viewing a page and completing a quiz, the user could complete a short survey on their experience with the page and quiz, measuring their enjoyment and how much they feel they learned using a Likert scale. The relevant data gathered during the process would include the quiz responses, the amount of time spent on the page, and the survey responses. I am weighing the idea of allowing users to go through several pages vs. just one. 
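To make the setup more concrete, here is a minimal sketch of how a participant could be randomly assigned to one combination of these setups and how the relevant data (quiz responses, time on page, and survey answers) could be logged. All names and condition labels are hypothetical placeholders and would change as the design firms up after the pilot study.

```python
import random
import time

# Hypothetical labels for the experimental factors described above.
QUIZ_PLACEMENT = ["end_of_article", "interleaved"]      # setups 1 / 2
NOTIFICATION   = ["notified", "not_notified"]           # setups A / B
SCORE_DISPLAY  = ["truthful", "inflated", "deflated"]   # setups AA / BB / CC

def assign_conditions():
    """Randomly assign a new participant to one cell of the design."""
    return {
        "placement": random.choice(QUIZ_PLACEMENT),
        "notification": random.choice(NOTIFICATION),
        "score_display": random.choice(SCORE_DISPLAY),
    }

def make_session(user_id):
    """Start a per-participant record of everything to be analyzed."""
    return {
        "user_id": user_id,
        "conditions": assign_conditions(),
        "page_opened_at": time.time(),
        "quiz_responses": [],   # (question_id, answer, is_correct) tuples
        "survey": {},           # Likert items filled in after the quiz
    }

def record_answer(session, question_id, answer, is_correct):
    session["quiz_responses"].append((question_id, answer, is_correct))

def finish_session(session, enjoyment, perceived_learning):
    """Store the post-quiz Likert ratings (e.g. 1-5) and total time on page."""
    session["survey"] = {"enjoyment": enjoyment,
                         "perceived_learning": perceived_learning}
    session["time_on_page_sec"] = time.time() - session["page_opened_at"]
    return session
```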

The website for collecting this data could be deployed and advertised online, and if there is trouble finding participants, then a service like Mechanical Turk could be used to get users to complete the reading/quizzes, although this is less ideal than finding people genuinely interested in the topic.

Friday, April 15, 2011

Experiment Draft 1

*Just to note, this is completely a draft, and I am not even fully decided on the topic of the experiment*

The experiment I am thinking about doing right now is something in the realm of measuring conversational involvement, or some specific aspects of conversational involvement, within different types of communication. Conversational involvement is a measure of how cognitively and behaviorally engaged participants are in a conversation. It measures specific nonverbal behaviors along five dimensions: immediacy, expressiveness, interaction management, altercentrism, and social anxiety. In prior work on conversational involvement, a number of behaviors have been identified that strongly differentiate high-involvement from low-involvement conversations. Some of these behaviors include general proxemic attentiveness, forward lean, relaxed laughter, coordinated speech, the number of silences and latencies in communication, the number of object manipulations, facial animation, vocal warmth, and the amount of random body movement. Some of these can be easily measured quantitatively, such as silences/pauses, forward lean, and the amount of relaxed laughter. I want to explore conversational involvement during online interactive communication vs. in-person communication, and I want to see the effect of an initial in-person conversation with controlled high or low conversational involvement on subsequent online interactive communications.
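As a very rough illustration of how the easily countable behaviors could be reduced to a single number per conversation, here is a hypothetical scoring sketch. The behaviors, weights, and normalization are placeholders of my own, not an established involvement coding scheme.

```python
# Hypothetical sketch: reduce coded behaviors from one recorded conversation
# to a single rough involvement index. Weights are arbitrary placeholders.

def involvement_score(counts, duration_min):
    """Combine per-conversation behavior counts into one rough index.

    counts: dict with keys like 'pauses', 'forward_leans', 'relaxed_laughs',
            each a raw count from the annotated video.
    duration_min: length of the conversation in minutes, used to get rates.
    """
    rate = {k: v / duration_min for k, v in counts.items()}
    # More pauses suggests lower involvement; forward lean and relaxed
    # laughter suggest higher involvement.
    return (-1.0 * rate.get("pauses", 0.0)
            + 1.0 * rate.get("forward_leans", 0.0)
            + 0.5 * rate.get("relaxed_laughs", 0.0))

# Example with made-up counts from a 10-minute conversation:
print(involvement_score({"pauses": 12, "forward_leans": 8, "relaxed_laughs": 5},
                        duration_min=10))
```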

An initial draft of the experiment would involve videotaping short conversations on set topics between two people in person. One person would be the study subject, and the other would be a 'tester'. The tester would vary their conversational involvement between trials from high to low by changing their behavior according to the conversational involvement metrics. The subjects would later be recorded engaging in subsequent video chat or audio conversations with the tester, and these recordings could be analyzed to extract the subjects' level of conversational involvement during those sessions. The subjects would also be recorded in another chat with a random tester they had not met before, and the conversational involvement in these two trials would be compared. In addition, the conversational involvement of subjects in the second conversation would be compared across groups, depending on whether the tester showed high or low involvement in the initial in-person interaction.
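To make that last comparison concrete, here is a minimal analysis sketch, assuming each subject's follow-up online conversation has already been reduced to a single numeric involvement score. The scores below are made-up placeholders, and the comparison uses a standard independent-samples t-test from scipy.

```python
from scipy import stats

# Made-up placeholder scores: one overall involvement score per subject in the
# follow-up online conversation, grouped by whether the tester showed high or
# low involvement in the initial in-person session.
high_primed = [4.2, 3.8, 4.5, 3.9, 4.1]
low_primed = [3.1, 3.4, 2.9, 3.6, 3.2]

# Independent-samples t-test comparing mean involvement between the two groups.
t_stat, p_value = stats.ttest_ind(high_primed, low_primed)
print("t = %.2f, p = %.3f" % (t_stat, p_value))
```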