Credit: Pixabay / CC0 Public Domain
Educators, mental health professionals, juvenile justice officers, and child welfare workers, who often witness the trials faced by vulnerable youth and who are charged with protecting them, see some value in using artificial intelligence to improve online safety.
But they are concerned about feasibility, citing a lack of resources, limited access to the necessary social media data and context, and worries about breaching the relationships of trust with youth that take time to build.
As part of the National Science Foundation I-Corps program, a team of researchers led by Vanderbilt University Computer Science Associate Professor Pamela J. Wisniewski, Flower Family Fellow in Engineering, conducted interviews with 37 social service providers (SSPs) across the United States who work with disadvantaged young people, to determine which online risks concern them most and whether they see value in using AI for automated online risk detection.
Respondents included child, youth, and family services workers, mental health therapists, teachers, juvenile justice officers, an LGBTQ+ advocate, a government consultant, and police officers.
Online sexual risks, such as sexual grooming and abuse, and cyberbullying were top concerns, especially when these experiences crossed the boundary between the digital and physical worlds. SSPs said they rely heavily on self-reporting to learn if and when online risks arise, which requires building a trusting relationship; otherwise, they find out only after a formal investigation has been launched.
While child welfare agencies already use algorithmic decision-support systems to assess offline risk outcomes so caseworkers can support the needs of children placed in care, this study seeks to help SSPs identify and mitigate online risk experiences. It is the first study to examine AI-based online risk detection for disadvantaged youth.
"What we found, and what was striking, is that SSPs don't want to use technology as a form of surveillance or to crack down on youth; they want it to help them start conversations. There is very little interest in a solution that censors youth or sends alerts to legal authorities," said Xavier V. Caddle, a graduate student on Wisniewski's research team. "They want a nudge to ask, 'Did something happen at school today? Someone sent this message. Did it hurt you?'"
Wisniewski said the study provides detailed feedback from a variety of SSPs indicating that risk detection technology needs to recognize differences in end-user views and how those differences should influence model design. "AI can over-flag risks. Kids cuss, so using the F-word becomes 'noise.'"
For example, users in the justice system need insights that support investigation and incident response; they care about detecting and preventing illegal behavior. Educators and child welfare officers need a more everyday view of the experiences of specific adolescents. Therapists and mental health practitioners primarily want risk assessments they can correlate with their established methods of patient evaluation to identify factors that may indicate poor mental health.
"There is interest among SSPs in online risk detection technology because they rely mainly on self-disclosure and tip-offs, and they see it as useful for starting conversations, not for surveilling and reporting on the children in their care," Wisniewski said. "It is clear that automated risk detection systems for SSPs must be designed and deployed with care."
The findings of the study were reported in Proceedings of the ACM on Human-Computer Interaction.
More information:
Xavier V. Caddle et al, Duty to Respond, Proceedings of the ACM on Human-Computer Interaction (2022). DOI: 10.1145/3567556
Citation: Service providers responsible for keeping children safe are cautious but see value in AI tools to track risky behavior online (2023, 8 June) retrieved 8 June 2023
This document is subject to copyright. No part may be reproduced without written permission, except in any fair dealing for the purpose of private study or research. The content is provided for information purposes only.