Two articles based on Fundación Ceibal’s research, conducted in collaboration with Jacob Whitehill (Worcester Polytechnic Institute, WPI), Monica Bulger (Future of Privacy Forum), and the Ceibal en Inglés team at Plan Ceibal, have been accepted for publication at the International Conference on Educational Data Mining (EDM 2019), to be held in Montréal, Canada, on July 2-5, 2019.
The first article, a collaboration with the Ceibal en Inglés team at Plan Ceibal, Jacob Whitehill (WPI), and Monica Bulger (Future of Privacy Forum), summarizes research on teacher feedback in online learning environments:
Title: How Should Online English as a Foreign Language Teachers Write their Feedback to Students?
Authors: Cecilia Aguerrebere, Monica Bulger, Cristóbal Cobo, Sofía García, Gabriela Kaplan, Jacob Whitehill
The article analyzes how teachers write feedback to students in an online learning environment, specifically a setting in which high school students are learning English as a foreign language. How complex should teachers’ feedback be? Should it be adapted to each student’s English proficiency level? How does teacher feedback affect the probability of engaging the student in a conversation? The results suggest: (1) Teachers should adapt the complexity of their feedback to their students’ English proficiency level: students who receive feedback that is too complex or too basic for their level post 13-15% fewer comments than those who receive adapted feedback. (2) Feedback that includes a question is associated with much higher odds of engaging the student in conversation (an odds ratio between 17.5 and 19). (3) For students with low English proficiency, slow turnaround (feedback given one week later) reduces this odds ratio by a factor of 0.7. These results have potential implications for online platforms offering foreign-language learning services, where it is crucial to provide the best possible learning experience while judiciously allocating teachers’ time.
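To make the odds-ratio findings concrete, here is a minimal, self-contained sketch of how such effects are commonly estimated with logistic regression, where exponentiated coefficients are odds ratios. All data below are simulated, and the variable names (has_question, slow_reply) are hypothetical stand-ins rather than the paper’s actual features; the simulated effect sizes are chosen only so the recovered odds ratios land near the reported ranges.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000

# Hypothetical stand-ins for the study's variables (names are illustrative):
# has_question: the teacher's feedback includes a question
# slow_reply:   the feedback arrived about one week after the student's post
has_question = rng.integers(0, 2, n)
slow_reply = rng.integers(0, 2, n)

# Simulate whether the student replies, with log-odds chosen so that
# exp(2.9) ~ 18 (within the reported 17.5-19 range) and exp(-0.35) ~ 0.70;
# purely illustrative, not the paper's data.
log_odds = -2.0 + 2.9 * has_question - 0.35 * slow_reply
replied = (rng.random(n) < 1.0 / (1.0 + np.exp(-log_odds))).astype(int)

df = pd.DataFrame({"replied": replied,
                   "has_question": has_question,
                   "slow_reply": slow_reply})

# Exponentiated logistic-regression coefficients are odds ratios.
fit = smf.logit("replied ~ has_question + slow_reply", data=df).fit(disp=0)
print(np.exp(fit.params))
```

Reading the output, the coefficient on has_question exponentiates to roughly 18 (feedback with a question multiplies the odds of a reply by about 18), while slow_reply exponentiates to roughly 0.7 (slow turnaround shrinks those odds by about 30%).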
The second article summarizes the results of a project on Open Educational Resources, led by Jacob Whitehill (WPI):
Title: Do Learners Know What’s Good for Them? Crowdsourcing Subjective Ratings of OERs to Predict Learning Gains
Authors: Jacob Whitehill, Cecilia Aguerrebere, and Benjamin Hylak
In this case, they explored how learners’ subjective ratings of open educational resources (OERs), in terms of how “helpful” they find them, can predict the actual learning gains associated with those resources, as measured with pre- and post-tests. To this end, they developed a probabilistic model called GRAM (Gaussian Rating Aggregation Model) that combines subjective ratings from multiple learners into an aggregate quality score for each resource. Based on an experiment conducted on Mechanical Turk, they found that aggregated subjective ratings are highly (and statistically significantly) predictive of the resources’ average learning gains. Moreover, when predicting the average learning gains of new learners, subjective scores remained predictive and attained higher prediction accuracy than a model that directly uses pre- and post-test data to estimate learning gains for each resource. These results have potential implications for large-scale learning platforms (e.g., MOOCs, Khan Academy) that assign resources (tutorials, explanations, hints, etc.) to learners based on expected learning gains.
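As a rough intuition for rating aggregation, and not the paper’s exact GRAM specification, a generic Gaussian formulation treats each resource’s quality as a latent variable and each learner’s rating as a noisy Gaussian observation of it; conjugacy then gives a closed-form posterior that pools the ratings. A minimal sketch under those assumptions:

```python
import numpy as np

def aggregate_ratings(ratings, prior_mean=0.0, prior_var=1.0, noise_var=1.0):
    """Posterior mean and variance of a latent quality given noisy ratings.

    A generic Gaussian aggregation (not necessarily the exact GRAM model):
    quality ~ N(prior_mean, prior_var) and each rating
    r_i ~ N(quality, noise_var). Conjugacy yields a closed-form posterior.
    """
    ratings = np.asarray(ratings, dtype=float)
    n = ratings.size
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + ratings.sum() / noise_var)
    return post_mean, post_var

# Example: three learners rate one resource's helpfulness on a centered scale.
mean, var = aggregate_ratings([0.8, 1.2, 0.5])
print(f"aggregate quality = {mean:.2f} +/- {np.sqrt(var):.2f}")
```

The per-resource aggregate scores produced this way could then be correlated with average learning gains (post-test minus pre-test) to assess how predictive the subjective ratings are, which is the kind of comparison the study reports.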