How Can Collaborative Tools Improve Online Learning with VR Video?


Virtual Reality (VR) has long been touted as a revolutionary technology, offering a unique and immersive learning experience that can transport students to far-flung locations and bring abstract concepts to life. However, one of the biggest challenges for VR adoption has been the high cost of creating VR content. Instructors often have to seek help from VR developers or 3D model designers, because it is hard to find or create content that fits their classes.

With the proliferation of inexpensive panoramic consumer video cameras and various types of video editing software, using 360-degree videos in VR has attracted more attention as an alternative way for instructors to build realistic and immersive environments. Compared to developing a simulated VR environment, it is a “more user-friendly, realistic and affordable” way to create a realistic digital experience.

Pedagogically, collaborative learning outperforms individual learning in many scenarios. Yet there is a research gap in the development and empirical investigation of collaborative VR video learning environments.

Our work designed two modes to investigate the roles of collaborative tools and shared video control, and compared them with a plain video player (see our demo in the following video). Each mode contains a video viewing system and an after-video platform for further discussion and collaboration:

  • Basic mode uses a conventional VR video viewing system together with an existing, widely available online platform.
  • Non-sync mode includes a collaborative video viewing system, where each student has individual control over their own video timeline, and an in-VR platform for after-video discussion.
  • Sync mode contains the same in-VR after-video platform, but students share video control.
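
To make the difference between shared and individual control concrete, here is a minimal sketch of how Sync-mode playback could stay aligned across a trio. This is our illustration, not the paper's implementation: the WebSocket endpoint, event names, and relay behavior are all assumptions.

```typescript
// Illustrative sketch of shared video control (Sync mode), assuming a
// WebSocket channel that relays playback events to every viewer in a trio.
// The endpoint and event shapes are hypothetical, not from the paper.

type PlaybackEvent =
  | { kind: "play"; time: number }
  | { kind: "pause"; time: number }
  | { kind: "seek"; time: number };

class SharedVideoControl {
  constructor(
    private video: HTMLVideoElement,
    private socket: WebSocket, // e.g., new WebSocket("wss://example.org/room/42")
  ) {
    // Apply every teammate's action locally so all timelines stay aligned.
    socket.onmessage = (msg) => this.apply(JSON.parse(msg.data));
  }

  // Local actions are broadcast instead of applied directly; the server
  // echoes them back to all clients, including the sender.
  play() { this.send({ kind: "play", time: this.video.currentTime }); }
  pause() { this.send({ kind: "pause", time: this.video.currentTime }); }
  seek(t: number) { this.send({ kind: "seek", time: t }); }

  private send(e: PlaybackEvent) { this.socket.send(JSON.stringify(e)); }

  private apply(e: PlaybackEvent) {
    this.video.currentTime = e.time;
    if (e.kind === "play") void this.video.play();
    if (e.kind === "pause") this.video.pause();
    // "seek" only updates currentTime; playback state is unchanged.
  }
}
```

In Non-sync mode, each client would instead apply play, pause, and seek actions locally without broadcasting them, giving every student an independent timeline.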

The study aimed to answer two research questions: 

RQ1: How does VR video delivery via existing technology (Basic mode) compare to collaborative VR video delivery (Sync and Non-sync modes) on measures of knowledge acquisition, collaboration, social presence, cognitive load, and satisfaction?
RQ2: How does individual VR video control (Non-sync mode) compare with shared video control (Sync mode) on measures of knowledge acquisition, collaboration, social presence, cognitive load, and satisfaction?

To examine the influence of different types of collaborative technology on the perceptions and experiences of online learning, we conducted a three-condition within-subjects experiment with 54 participants (18 trios). We collected quantitative data from knowledge assessments, self-reported questionnaires, and log files, then triangulated the validated measures with qualitative data from semi-structured interviews.
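
For context on the scale of the design: three conditions yield 3! = 6 possible presentation orders, so 18 trios divide evenly across them. The sketch below illustrates one plausible counterbalancing scheme; the actual ordering procedure is not described in this post, so treat it purely as an assumption.

```typescript
// Hypothetical counterbalancing sketch: 3 conditions have 3! = 6 orderings,
// so 18 trios can be spread evenly with 3 trios per ordering.

const conditions = ["Basic", "Non-sync", "Sync"] as const;

// All permutations of a list of items.
function permutations<T>(items: T[]): T[][] {
  if (items.length <= 1) return [items];
  return items.flatMap((item, i) =>
    permutations([...items.slice(0, i), ...items.slice(i + 1)]).map(
      (rest) => [item, ...rest],
    ),
  );
}

const orders = permutations([...conditions]); // 6 orderings

// Assign each of the 18 trios one ordering, cycling through all 6.
for (let trio = 0; trio < 18; trio++) {
  console.log(`Trio ${trio + 1}: ${orders[trio % 6].join(" -> ")}`);
}
```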

Figure 1. Study protocol

For RQ1, we found that both collaborative VR-based systems achieved statistically significantly higher scores on the measures of visual knowledge acquisition, collaboration, social presence, and satisfaction, compared to the baseline system. In the qualitative results, participants pointed to potential reasons for the Basic mode's lower collaboration and satisfaction scores, such as the lack of shared context and current technical obstacles (e.g., echoes). They also appreciated the in-VR platform's ability to transmit and display visuals for after-video discussion, which helps explain the Basic mode's lower scores on visual knowledge acquisition.

For RQ2, the shared control in Sync mode significantly increased the ease of collaboration and the sense of social presence. In particular, shared control significantly increased view similarity (the extent to which the team was watching the same view of the video) and discussion time during the video. The qualitative results suggest that collaboration experiences were better with shared control in Sync mode because of greater communication comfort. However, there was a tension between communication comfort and learning pace flexibility, and the control method influenced the perceived usefulness of the collaborative tools.
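
To make the view-similarity measure concrete, here is one plausible way to compute it from logged head orientations: count the fraction of time samples in which all teammates' viewing directions fall within an angular threshold of each other. The 30-degree threshold and the sampling scheme are our assumptions for illustration, not necessarily the paper's exact method.

```typescript
// Illustrative view-similarity computation from logged head orientations.

type Orientation = { yaw: number; pitch: number }; // radians

// Convert yaw/pitch to a unit viewing-direction vector.
function toVector({ yaw, pitch }: Orientation): [number, number, number] {
  return [
    Math.cos(pitch) * Math.sin(yaw),
    Math.sin(pitch),
    Math.cos(pitch) * Math.cos(yaw),
  ];
}

// Angle in radians between two viewing directions.
function angleBetween(a: Orientation, b: Orientation): number {
  const [ax, ay, az] = toVector(a);
  const [bx, by, bz] = toVector(b);
  const dot = Math.min(1, Math.max(-1, ax * bx + ay * by + az * bz));
  return Math.acos(dot);
}

// Fraction of log samples where every pair of teammates is looking
// within `threshold` radians of each other.
function viewSimilarity(
  logs: Orientation[][], // logs[t] = orientations of all viewers at sample t
  threshold = (30 * Math.PI) / 180, // assumed threshold
): number {
  const similar = logs.filter((sample) =>
    sample.every((a, i) =>
      sample.slice(i + 1).every((b) => angleBetween(a, b) <= threshold),
    ),
  );
  return similar.length / logs.length;
}
```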

Our work provides implications for design and research on collaborative VR video viewing. One key implication is balancing the trade-off between learning pace flexibility and communication comfort based on teaching needs. The expectations for time flexibility and collaboration experience may differ across educational activities and learning scenarios. Therefore, collaborative VR applications should decide whether or not to use shared control based on their specific purposes.

Finally, takeaways from this paper:

  • Collaborative VR video viewing systems can improve visual knowledge acquisition, collaboration, social presence, and satisfaction compared to conventional systems.
  • Shared video control in VR video viewing can enhance collaboration experiences by increasing communication comfort, but may also reduce learning pace flexibility.
  • In-VR platforms for after-video discussion can enhance visual transmission and engagement, and improve the overall learning experience in collaborative VR video environments.

Find more information in our paper here – coming to CHI 2023!

Cite this paper:

Qiao Jin*, Yu Liu*, Ruixuan Sun, Chen Chen, Puqi Zhou, Bo Han, Feng Qian, and Svetlana Yarosh. 2023. Collaborative Online Learning with VR Video: Roles of Collaborative Tools and Shared Video Control. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23). https://doi.org/10.1145/3544548.3581395

Towards Practices for Human-Centered Machine Learning


Video: Toward Practices for Human-Centered Machine Learning (from CACM on Vimeo).

People are excited about human-centered AI and machine learning as a way to make AI more ethical and socially appropriate. AI has captured the popular zeitgeist with promises of generalized artificial intelligence that can solve many complex human problems. These promises of ML, however, have had negative consequences, with both ridiculous and catastrophic failures – they rack up so fast that colleagues are keeping AI Incident databases, reports of AI ethics failures, and more.

How will ML researchers and engineers avoid these problems and move towards more compassionate and responsible ML? There aren’t many concrete guidelines on what it looks like to do human-centered machine learning (HCML) in practice. And while there are some pragmatic guides, they often lack the connection between technical work and social, cultural, and ethical concerns.

In my recently published CACM article, I argue that there is a gap in building human-centered systems – the gap between the values we hold but lack actionable methods for, and the technical methods that don’t align with our values. The paper argues for practices that bridge ever-significant values with ever-practical methods.

This paper synthesizes my CS and Critical Media Studies background in thinking about how we should DO HCML. It also builds on my decade of experience doing human-centered research in a challenging area – predicting and acting on dangerous mental health behaviors discussed in social media data. The paper builds on classical definitions of human-centeredness in defining HCML and lays out five practices for researchers and practitioners. These practices ask us to prioritize technical advancements EQUAL TO our commitments to social realities. In doing this, we can make genuinely impactful technical systems that meet people and communities where they’re at.

Here are the five big takeaways from the paper and the practices you can implement immediately.

  1. Ask if machine learning is the appropriate approach to take 
  2. Acknowledge that ML decisions are “political”
  3. Consider more than just a single “user” of an ML system
  4. Recognize other fields’ contributions to HCML 
  5. Think about ML failures as a point of interest, not something to be afraid of

Let’s dig into one of these – considering more than just a single “user” of an ML system. When considering who “uses” a system, we often only consider the person commissioning or building the system. Even in HCI, we talk about “users” of systems and (if we’re lucky) the people whose data goes into the model. However, many systems have much larger constellations of people “involved” in the ML model. For example, in facial recognition technology, the “user” may be a government or business. But the people whose faces are in that system are also “users” of the technology. Likewise, if that facial recognition system is used in an airport to screen passengers for flight identification, everyone who walks by ambiently interacts with it. The system also meaningfully impacts a person who chooses NOT to interact with it – if opting out means they must spend more time in airport security or have their identity scrutinized more closely. Both examples make it clear that we should consider all the stakeholders involved in an ML model, including everyone whose data goes into creating it.

I aim for these principles to inspire action – to encourage more profound research, empirical evaluations, and new ML methods. I also hope the practices make human-centered activities more tractable for researchers AND practitioners. I hope this inspires you and your colleagues to ask hard questions that may mean making bold decisions, taking action, and balancing these competing priorities in our work. 

You can read more about this paper in the recently published Featured Article in the Communications of the ACM here.