Teen Girls Fear Being Perceived as ‘Weak’ or ‘Lacking in Self-Reliance’ for Using AI Interventions to Address Cyberbullying
The DCU Anti-Bullying Centre, in collaboration with the ADAPT SFI Research Centre, undertook the first study to engage children’s views on the use of artificial intelligence (AI) to detect cyberbullying on social media. Fifty-nine teenagers aged 12-17 from Ireland participated in interviews and focus groups. Social media platforms increasingly rely on AI to proactively detect and remove cyberbullying, yet it is not always clear how companies use AI to accomplish this. This research consulted teenagers’ views on proposed and existing AI interventions across popular social media applications such as Instagram and TikTok.
The AI interventions included in the study were broadly welcomed by teenagers, provided they were given the option to opt in. However, the research raised concerns about how young people, in particular older teenage girls, perceive seeking help with cyberbullying, as well as concerns about privacy, freedom of expression and the monitoring of their communication via AI.
Commenting on the findings of the report, the lead researcher and expert on cyberbullying, Tijana Milosevic, Research Fellow, DCU Anti-Bullying Centre said,
This is the first study where teenagers’ views on the evolution of AI interventions to address cyberbullying were listened to and examined. While AI interventions appear to be broadly welcomed, we must also ensure that teenagers’ privacy and rights are protected, and not infringed upon.
In addition to asking teenagers about these issues, the research team designed and tested hypothetical AI interventions. Examples of such interventions include: teenagers selecting a trusted adult or friend on their social media platform to be automatically notified if cyberbullying is detected by AI; involving bystanders, i.e., those who witness bullying, to support the target of cyberbullying; and using facial recognition to identify online bullying through the tagging of a user, e.g., tagging someone left out in a group post. Despite the broad welcome for AI interventions, teenagers were particularly concerned about the possibility of having their private messages screened, as well as the use of facial recognition, with some expressing unease with its use and describing it as ‘creepy.’
Furthermore, some teenagers expressed a desire to handle cyberbullying complaints on their own, for example by choosing to unfollow, mute, restrict or block someone. While these tools can be helpful in some instances, the researchers conducting the study expressed concern about young people’s perception of asking for help when they are the target of cyberbullying. Older teenage girls, aged 15-17, in particular expressed a preference for handling cyberbullying themselves, as they did not think someone else should be responsible for solving their problems. Some said they would be reluctant to admit to having a support contact, as this implied weakness or a lack of self-reliance, adding that they thought this intervention was more appropriate for younger children. Interestingly, the male focus groups were more welcoming of the support contact feature as an AI intervention.
Given the concerns about teenagers seeking to handle complaints on their own, Tijana Milosevic said,
As teenagers increasingly use social media to communicate with their peers, we as researchers are concerned about some of the feedback from young people, in particular older teenage girls, regarding their fears of being perceived as “sensitive” or “making a big deal” out of being the target of cyberbullying. This finding has implications for how we design online safety technologies and interventions: when much of online safety education with respect to cyberbullying advises young people to report incidents and talk to someone, it is important to know that young people might be reluctant to do so.
These findings underscore the importance of engaging teenagers’ views in the development of new technologies, such as AI applications for cyberbullying detection. Children’s rights, which are outlined in the UN Convention on the Rights of the Child (UNCRC), apply in the digital environment; under the UNCRC, children have the right to privacy and freedom of expression, among other rights. The research is also particularly relevant given the passage through the Houses of the Oireachtas of the Online Safety and Media Regulation Bill 2022 (OSMR), which provides for the regulation of harmful online content and the protection of children online, amongst other issues. Furthermore, this study highlights the need for continued consultation with children and experts in the academic community, and for the future policies of social media companies to address cyberbullying to be open to public scrutiny.