Project Details
Description
This proposal combines the excitement presented by Making, as instantiated through Minecraft, with the affordances and opportunities that can be realized through artificial intelligence and multimodal interfaces. Specifically, the last several years have seen advances in the adoption of Minecraft as a novel, collaborative learning platform, as well as the popularization of speech-enabled interfaces (e.g., Google Home, Siri, and Amazon’s Alexa) and gesture-enabled interfaces (e.g., Xbox Kinect, Nintendo Wii, Leap Motion). This proposal builds on those capabilities to consider ways that intelligent, multimodal, naturalistic interfaces can support a novel and effective paradigm for learning and working. The work is thus positioned as an exploration of multimodal interfaces that support the collaborative creation of digital artifacts and learning.
Recent developments in multimodal sensor technologies are creating novel opportunities for naturalistic multimodal interfaces, including speech-, gesture-, touch-, video-, and gaze-based input and feedback. We will leverage these technologies to develop a multimodal interface for creating, mining, and exploring in Minecraft. The design of the platform will include several important features: the ability to use natural language and gaze for mining, exploring, and building; the naming of custom-designed objects to simplify their reuse; the interpretation of simple mathematical operations and shapes; and the ability to quickly undo one’s design. We hypothesize that the proposed platform could confer a number of important benefits to learners. For example, it may 1) enable younger learners to participate in Minecraft experiences, 2) accelerate the development of spatial reasoning skills through faster design cycles, 3) provide a naturalistic means by which students can learn computational thinking, and 4) allow students to be more cognitively engaged in developing the macro-level structures and relationships within the game environment.
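To make the feature list above concrete, the kind of command interpretation, object naming, and undo support described could be sketched as follows. This is a minimal illustration only: the command grammar, function names, and action representation are hypothetical assumptions, not part of the proposal or any Minecraft API.

```python
# Hypothetical sketch of a speech-command interpreter for block building.
# The grammar ("build a W by H SHAPE of MATERIAL"), the action dict, and the
# naming/undo helpers are illustrative assumptions, not the project's design.
import re

def parse_build_command(utterance):
    """Map a spoken phrase like 'build a 3 by 4 wall of stone' to an action."""
    m = re.search(r"build a (\d+) by (\d+) (\w+) of (\w+)", utterance.lower())
    if not m:
        return None
    width, height, shape, material = m.groups()
    return {"action": "build", "shape": shape,
            "width": int(width), "height": int(height),
            "material": material}

named_objects = {}   # lets a learner name a design to simplify its reuse
history = []         # stack of executed commands, enabling quick undo

def execute(command):
    history.append(command)      # in a real system this would modify the world

def undo():
    return history.pop() if history else None

command = parse_build_command("Build a 3 by 4 wall of stone")
execute(command)
named_objects["my wall"] = command   # reuse later, e.g. "place my wall"
```

A speech-recognition front end would supply the utterance string; the parsed action would then drive in-game construction, with the `history` stack supporting the fast design cycles the proposal emphasizes.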
This project uses a design-based implementation research methodology situated in a research-practice partnership. The research will be conducted in concert with local Minecraft clubs and through an ongoing partnership with local media arts teachers who use Minecraft in their courses. In year 1, the team will conduct extensive observations, prototype development, and small-scale individual testing. This will be followed by small-group testing and further prototype refinement in year 2, and larger-group testing in year 3. Across the three years of the project, the research team will employ a host of analytic strategies for studying the digital traces produced through the Minecraft environment and through multimodal data capture. The team will also explore strategies for increasing the intelligence of the platform; for example, the speech-based commands will be customizable for different age groups and for learners with differing levels of design proficiency.
At the conclusion of this project we aim to have 1) developed a robust prototype that has been tested in both laboratory and ecological settings, 2) documented the differential learning gains achieved by using this multimodal naturalistic interface, and 3) advanced the state-of-the-art in adaptive, multimodal interfaces.
Intellectual Merit: Beyond the development of a prototype, this project will lay the groundwork for ongoing research and development in the genre of multimodal naturalistic interfaces for the collaborative creation of digital artifacts and learning.
Status | Finished |
---|---|
Effective start/end date | 9/1/18 → 8/31/22 |
Funding
- National Science Foundation (IIS-1822865 Amnd 2)