Basser Seminar Series
TOWARD RELIABLE AUTOMATIC ASSESSMENT OF DOMAIN EXPERTISE
Multimodal learning analytics is an emerging area that analyzes students’ natural communication patterns (speech, writing, images) to predict learning-oriented behaviors and the consolidation of expertise during educational activities. It offers better-matched and more robust techniques for evaluating students’ learning progress than click-stream analysis. In addition, it is compatible with the shift to multimodal-multisensor interfaces on the cell phones, tablets, and other devices that now dominate educational technologies. In this talk, I’ll describe data resources available for working in this area, as well as promising initial research findings. I’ll also describe an example of how multi-level multimodal learning analytics can inspire new insights and systems-theoretic views of the process of learning.
As one example, new research indicates that signal-level features of dynamic writing, extracted with existing technology as students wrote with digital pens on paper, can reliably identify their domain expertise in mathematics. Analyses were conducted using the Math Data Corpus, in which collaborating student groups jointly solved mathematics problems varying in difficulty. Linear regressions confirmed that lower total energy expended during writing is a significant predictor of higher domain expertise, with models of energy accounting for 35-43% of the variance in students’ expertise level. Further convergent analyses have demonstrated that both empirical modeling and machine learning techniques can yield over 90% correct classification of students by expertise level.
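The kind of regression analysis described above can be sketched in a few lines. The snippet below is a minimal, purely illustrative example, not the actual analysis or the Math Data Corpus: it regresses a made-up numeric expertise score on a synthetic "total writing energy" feature via ordinary least squares and reports the slope and variance explained (R²). All variable names and data are hypothetical.

```python
import numpy as np

# Hypothetical data (NOT the Math Data Corpus): one total-energy value per
# student, with expertise assumed to decrease as writing energy increases.
rng = np.random.default_rng(0)
n = 40
energy = rng.uniform(50.0, 150.0, n)              # total stroke energy, arbitrary units
expertise = 10.0 - 0.05 * energy + rng.normal(0.0, 0.8, n)  # illustrative ground truth

# Ordinary least squares: expertise ~ b0 + b1 * energy
X = np.column_stack([np.ones(n), energy])
beta, *_ = np.linalg.lstsq(X, expertise, rcond=None)
pred = X @ beta

# R^2: proportion of variance in expertise explained by the energy model
r2 = 1.0 - np.sum((expertise - pred) ** 2) / np.sum((expertise - expertise.mean()) ** 2)
print(f"slope = {beta[1]:.3f}, R^2 = {r2:.2f}")
```

On this synthetic data the fitted slope is negative (higher energy predicting lower expertise), mirroring the direction of the reported finding; the actual reported effect sizes come from the corpus analyses, not from anything like this toy setup.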
These results demonstrate that signal-level writing features, and models of total energy expenditure based on them, can predict domain expertise with surprisingly high reliability. To interpret these findings, I’ll present a new limited-resource theory that describes the role of adaptive energy expenditure during acquisition of domain expertise. From a more pragmatic perspective, corporate interest is building because the signal-level writing features outlined in this work can be collected automatically in real time as students use existing digital pen technologies. The race is now on to collect strategic new datasets, in partnership with corporations and school districts, so innovative and high-quality educational applications can be developed.
Sharon Oviatt is internationally known for her multidisciplinary work on multimodal and mobile interfaces, human-centered interfaces, educational interfaces and learning analytics. She has been a recipient of the inaugural ACM-ICMI Sustained Accomplishment Award, the National Science Foundation Special Creativity Award, and the ACM-SIGCHI CHI Academy award. She has published over 160 scientific articles in a wide range of venues, and is an Associate Editor of the main journals and edited book collections in the field of human-centered interfaces. Her recent books include The Design of Future Educational Interfaces (2013, Routledge) and The Paradigm Shift to Multimodality in Contemporary Computer Interfaces (2015, Morgan & Claypool). She is currently editing The Handbook of Multimodal-Multisensor Interfaces (forthcoming in 2017, ACM Books). Related to today’s talk, Sharon was a founder of the ACM international conference series on Multimodal Interfaces (ICMI), and also of its satellite series of Data-Driven Grand Challenge Workshops on Multimodal Learning Analytics.