Strand 1
Understanding and Facilitating Collaborations
Strand 1 researchers are working to answer the foundational question: What AI advances are needed to understand and facilitate collaborative learning conversations? Foundational AI research in natural language understanding, multimodal processing, and knowledge representation is needed to develop AI models that can autonomously monitor the unfolding collaborative learning discourse at multiple levels—understanding the content, the conversational dynamics, gestures, and social signals (e.g., facial expressions)—and learn to generate appropriate conversational moves to be an effective partner in the learning conversation. Strand 1 develops mechanisms to sift through and integrate information from multiple student-AI conversations, both within a class and over time. The main areas of focus for Strand 1 are: Speech Processing and Diarization; Content Analysis & Dialogue Management (also known as MMIA: Multimodal Interactive Agent); and Situated Grounding.
This team is working to enable our AI Partners to better understand students' speech and to identify who is speaking and when, by improving automatic speech recognition and diarization models for classrooms.
This theme is dedicated to helping our AI Partners make sense of what they are hearing and seeing and determine optimal interactions between students and teachers. This work is helping the partners understand key content words and concepts uttered by students.
Students and teachers establish common ground when interacting with one another through behavioral and verbal cues, as well as through prior goals, expectations, and beliefs. This theme is identifying this common ground through discourse and gesture.