Developing the next generation of equitable, collaborative learning environments is an exciting and important mission, but one that is nearly impossible without data. Enter our latest cross-strand collaboration endeavor—the iSAT Fall 2021 Data Collection team.
Our creative team-naming chops aside, this team began working in tight sprints over our fourth quarter to determine the best approach for recording high-fidelity audio, and eventually video, of small groups of students in real-world K–12 classrooms while protecting the privacy and agency of these students and teachers. Those who teach in these classrooms, or who know anything about analyzing audio data in noisy environments, will immediately recognize the magnitude of this challenge. How does one record multiple groups working together in one room—with students talking over each other and using gestures instead of words—at a quality sufficient for automated analysis?
To tackle this challenge, the team includes experts from each of our three strands as well as software development experts. Our Strand 1 automatic speech recognition (ASR) experts are testing a variety of microphones to determine which provide the best audio quality for our data collection needs. Education experts from Strand 3 guide the group by adapting our existing Sensor Immersion curriculum unit for enhanced collaboration opportunities and by working closely with our K–12 partners to ensure our data collection methods protect students’ privacy and do not interrupt classroom activities. Strand 2 experts in team science and computer-supported learning are working with the group to develop data recording and data access solutions that can be incorporated into iSAT’s AI-enabled Collaborative Learning Environments (AICL). Software developers are implementing a standardized data schema so that the data collected can be easily stored, accessed, and analyzed by the strand researchers.
With the spread of the COVID-19 delta variant creating new uncertainty, the team has worked quickly to collect as much data as possible. In July, our school district liaisons and education experts recorded audio from multiple microphones as students worked in small groups on the Sensor Immersion unit (check out more about this effort in this newsletter’s feature on Strand 3!). We are now analyzing these recordings to determine which microphone provides the best quality.
The efforts of this group have enabled us to take a giant step toward our mission. As we end this quarter and begin our second year, our vision of an AI Partner helping teachers orchestrate rich, collaborative classroom experiences comes into ever greater focus.