Project summary

The metaverse is a digitally simulated environment of rich user interactions that aims to mimic and augment the real world. Global platforms are extending their social and gaming experiences for children and young people, but more research is urgently needed to examine and mitigate the risks these environments pose to their safety. We have determined that social engineering attacks can cause social harms and are key threats to users of metaverse applications. In this project, we will build upon this work to research, design and evaluate mitigations and education components to enable safer use of shared metaverse gaming [1].


Project description

This project will develop an innovative threat-detection system to improve safety for young people in the metaverse. We have identified that young people are particularly at risk from threats targeting the technology's new attack surfaces, interaction methods, and capacity to heighten engagement and collect data from users at unprecedented levels [2] [3]. The project's innovation centres on two key strategies. First, we will research human-centric design of immersive visual interventions to protect user data. Second, we will develop prototype immersive safety training materials to guide young people's use of the metaverse.

The primary research question that this project will address is: how can we design extensible social engineering threat visualisations that indicate the extent of risk to young people on metaverse platforms? The project will involve the following steps:

Requirements Gathering and Analysis

A specialist panel of industry experts (metaverse developers and practitioners), CIs and partners (approx. 15-20 people) will convene in an online workshop comprising a focus group, a panel discussion and a scenario design fiction activity. This will identify current capability gaps and inform our co-design of visual interventions.

We have received ethical clearance (2021 HE002032) to engage industry in a co-design workshop to develop visualisation interventions and to engage (adult) participants in user studies. Additional ethical clearance will be sought to involve young people in the evaluation of the visualisation tools and the development of the learning resources.

Proof of Concept Metrics, Visualisations and Learning Resources

1. We will design and implement, in the UCL social VR platform Ubiq, a set of metrics to characterise and indicate levels of trust and risk. These will quantify risks from conversational patterns using natural language processing, yielding a degree of confidence about a particular threat, e.g. attempts to identify a user's location, identity or personal information.

2. Visualisations will be developed to advise users of potential adversaries (e.g. a history of using grooming language, being unknown to the user's network, a summary of reporting/warning history, or a community trust passport). These can be adapted to the user's risk appetite and used to inform users (and caregivers) of their behaviour and their vulnerability to attack surfaces and interactions.

3. We will also develop an awareness training application to improve metaverse cyberliteracy skills for young people, schools, parents, and caregivers. This learning module will be co-designed with children, carers, and educators and with support from learning designers at UQ ITALI.
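As an illustration of how conversational risk metrics and risk-appetite-adjusted warnings might fit together, the sketch below scores a conversation against hypothetical threat categories using simple keyword patterns, then maps the resulting confidence to a warning tier. This is a minimal stand-in: the category names, patterns, update rule and thresholds are all illustrative assumptions, not the project's actual design, and a real system would use trained NLP models rather than regexes.

```python
import re
from dataclasses import dataclass

# Hypothetical threat categories and trigger patterns (illustrative only).
THREAT_PATTERNS = {
    "location_probe": [r"\bwhere do you live\b", r"\bwhat school\b", r"\bwhat city\b"],
    "identity_probe": [r"\byour real name\b", r"\bhow old are you\b"],
    "contact_probe": [r"\bphone number\b", r"\bsend (?:me )?a photo\b"],
}

@dataclass
class RiskAssessment:
    scores: dict    # per-category confidence in [0, 1)
    overall: float  # highest category confidence

def assess_messages(messages, patterns=THREAT_PATTERNS):
    """Score a conversation: each pattern hit raises that category's confidence."""
    scores = {cat: 0.0 for cat in patterns}
    for msg in messages:
        text = msg.lower()
        for cat, pats in patterns.items():
            for pat in pats:
                if re.search(pat, text):
                    # Diminishing-returns update keeps confidence below 1.
                    scores[cat] = scores[cat] + (1.0 - scores[cat]) * 0.5
    return RiskAssessment(scores=scores, overall=max(scores.values()))

def warning_level(assessment, risk_appetite=0.5):
    """Map overall confidence to a visual warning tier.

    A lower risk appetite lowers the threshold, so cautious users (or their
    caregivers) see warnings earlier. The mapping itself is illustrative.
    """
    threshold = 0.25 + 0.5 * risk_appetite
    if assessment.overall >= threshold:
        return "alert"
    if assessment.overall >= threshold / 2:
        return "caution"
    return "none"
```

For example, a conversation containing "what school do you go to?" and "what's your real name?" would trigger two categories; a cautious risk-appetite setting would then surface an "alert" tier, while a permissive setting would only show "caution". The design choice here is that the same underlying metric feeds different visual interventions depending on user preference, matching the idea of adapting visualisations to the user's risk appetite.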

Evaluation

1. We will conduct a mixed-methods study to collect quantitative and qualitative data about users' experience of the visualisation tools. This will include analysis of users' interactions with a visualisation tool and post-experience interviews to evaluate their behavioural responses to the tools. Two groups of approx. 10 users each (young people over the age of 16, and researchers in Australia) will engage in a VR scenario aided by visual interventions, allowing us to evaluate how successfully the visual interventions influenced users' risk perceptions.

2. We will evaluate our interventions and training materials with stakeholders through the UQ node of the Digital Child CoE and ITALI.




Partner organization(s)

Royal Holloway, University of London
University College London


References

[1] Baldry, Moya, Happa, Jassim, Steed, Anthony, Smith, Simon and Glencross, Mashhuda, "Affective computing in the metaverse: A diegetic fiction exploration of risks and harms," special issue, IEEE Transactions on Affective Computing, forthcoming.

[2] Baldry, Moya, Happa, Jassim, Steed, Anthony and Glencross, Mashhuda, "X-IRL risks: Identifying privacy and security risks in inter-reality attacks and interactions," IEEE VR workshop, March 13, 2022.

[3] Baldry, Moya and Glencross, Mashhuda, "Novel attacks in Extended Reality applications," Birds of a Feather workshop, IEEE VR, December 4, 2021.

Project members

Lead investigator:

Dr Mashhuda Glencross

Senior Lecturer in Computer Science
School of Electrical Engineering and Computer Science

Other investigator(s):

Professor Simon Smith

Professorial Research Fellow
Institute for Social Science Research

Dr Janelle MacKenzie

Postdoctoral Research Fellow
Institute for Social Science Research