DCU Anti-Bullying Centre

Co-Designing Artificial Intelligence-Based Cyberbullying Interventions on Social Media with Children

Qualitative Research Findings

By: Tijana Milosevic, Kanishk Verma, Samantha Vigil, Michael Carter, Derek Laffan, Brian Davis, & James O’Higgins Norman


Abstract

This report details the results of a qualitative research study (focus groups and in-depth interviews) with children and teens aged 12-17 (N=59) in Ireland about the perceived effectiveness of Artificial Intelligence (AI)-based cyberbullying enforcement mechanisms on popular social media platforms. The adoption of the UN General Comment No. 25 established that children’s rights, as outlined in the UN Convention on the Rights of the Child (UNCRC), apply in the digital environment. We therefore examine children’s perceptions of how AI-based enforcement mechanisms affect their rights to protection (safety), participation and privacy. We inquire into how children perceive the effectiveness of the proposed mechanisms, how these could be made more effective from their perspective, and which changes or alternatives they propose. The proposed interventions are based on social learning and social norm theories, and they include designated support contacts, bystander and school involvement, and systems designed to reward prosocial behaviours and deter perpetration. We find that children would welcome many of these interventions but raise concerns about their privacy and the effectiveness of what has been proposed. We provide policy recommendations for the technology industry and policymakers.

Tijana Milosevic, Elite-S Research Fellow, DCU Anti-Bullying Centre (ABC) and ADAPT SFI

Kanishk Verma, Irish Research Council PhD Candidate, DCU School of Computing, ADAPT SFI, ABC DCU 

Samantha Vigil, PhD Student, Department of Communication, University of California, Davis

Michael Carter, PhD Candidate (ABD), Department of Communication, University of California, Davis

Derek Laffan, ABC DCU

Brian Davis, Professor, DCU School of Computing and ADAPT SFI

James O’Higgins Norman, Director of ABC DCU, Professor at DCU, and UNESCO Chair on Tackling Bullying in Schools and Cyberspace

 

The current study 

The following research questions guided this phase of the project: 

RQ1: How can we design automatic tools that support effective proactive bullying interventions that assist victimised children while ensuring children’s rights to privacy, freedom of expression and other relevant rights as outlined in the UNCRC? 

RQ2: How can we leverage children’s feedback to optimise the effectiveness of such tools?

Interventions tested in the study 

The interventions we designed in this study involve not only the target (victim) and the perpetrator but also those who witness cyberbullying incidents, the so-called “bystanders” (Rudnicki et al., 2022). Bystanders can remain neutral and stay out of the incident they are witnessing; or they can support the perpetrator or support the victim (at which point they are considered “upstanders”). Furthermore, we included a feature called “support contact/helper/friend”, whom children can add upon sign-up and who can be contacted when abuse is detected by AI. The idea behind the support contact is based on peer mentoring (Papatraianou et al., 2014; Bauman & Yoon, 2014), but we envisaged that the support contact could be an adult as well (a parent/caregiver, or someone else who is close to the child).

Using a collaborative interface design tool, Figma,13 the research team created four core and two optional demos,14 each showing a scenario with an example of abusive behaviour that could constitute a cyberbullying incident on Instagram, TikTok or Trill,15 and a subsequent intervention. Core scenarios were shown in every interview and focus group, while the optional ones were shown if there was additional time in the session. Each scenario showed how the incident could be detected proactively by AI, followed by an intervention based on research into bystander involvement in cyberbullying incidents (Bastiaensens et al., 2014; DiFranzo et al., 2018; Macaulay et al., 2022). The interventions as designed in this study are hypothetical, and only some of their components are currently available on certain social media platforms. For example, “hidden words” on Instagram allows the user to turn on comment filtering, which hides abusive comments; the user can nonetheless view them later if they wish. All of the features we propose should, however, be technologically feasible to implement, based on the current state of AI development for detecting cyberbullying and harassment, as previously identified by the authors of this report (Milosevic et al., 2021).
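
To make this kind of proactive filtering concrete, the sketch below illustrates, in Python, how a “hidden words”-style comment filter might work: a classifier scores each incoming comment, and comments above a threshold are hidden rather than deleted, so the user can still review them later. This is a minimal illustration under our own assumptions; the toy lexicon-based scorer stands in for a trained abusive-language model, and none of the names reflect any platform’s actual implementation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of "hidden words"-style proactive comment filtering.
# The scorer is a toy stand-in; a real system would use a trained
# abusive-language model (cf. Milosevic et al., 2021, on AI feasibility).

ABUSIVE_TERMS = {"loser", "ugly", "nobody likes you"}  # toy lexicon

def abuse_score(comment: str) -> float:
    """Return a score in [0, 1]; a production model would be ML-based."""
    text = comment.lower()
    hits = sum(term in text for term in ABUSIVE_TERMS)
    return min(1.0, hits / 2)

@dataclass
class CommentThread:
    visible: list[str] = field(default_factory=list)
    hidden: list[str] = field(default_factory=list)  # kept for later review

    def add_comment(self, comment: str, threshold: float = 0.5) -> None:
        # Comments over the threshold are hidden, not deleted:
        # the user can still choose to view them later.
        if abuse_score(comment) >= threshold:
            self.hidden.append(comment)
        else:
            self.visible.append(comment)

thread = CommentThread()
thread.add_comment("Great photo!")
thread.add_comment("You are such a loser, nobody likes you")
print(thread.visible)  # ['Great photo!']
print(thread.hidden)   # ['You are such a loser, nobody likes you']
```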

For example, we proposed that once children create an account on Instagram/TikTok/Trill, they be offered the option to add a support contact/helper/friend who could be contacted if AI detects cyberbullying or some other type of abuse on the platform. A support contact could be a friend, parent, teacher or someone else, and the person need not be using the given platform. In Demo 1 (Image sequence 1), we showed an example of a girl receiving negative comments on her TikTok post; once these are detected by AI, the girl receives a notice from TikTok that abusive comments have been detected, and she is prompted either to review them (abusive comments are not displayed automatically, so as not to traumatise her if she chooses not to see them) or to request help from the support contact. Demo 1 also showed the option to request support from those detected by AI as bystanders (e.g., they posted something positive or neutral on the post that received negative comments, or were merely detected as having seen the abusive post). Those identified by AI as bystanders would receive a prompt from the platform that abusive comments had been detected on the person’s post, and they would be prompted to intervene: by providing support to the person who was abused; by reporting the abusive content or account to the platform; or by reaching out to the perpetrator to ask them to take it down. We then asked children for feedback on the desirability of such options, the perceived effectiveness of these interventions, and their perceptions of how such deployment of AI might affect their privacy and freedom of expression.
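
The Demo 1 flow amounts to a simple notification-routing rule: on detection, the victim gets a review prompt that does not display the abusive content, the support contact can be alerted, and detected bystanders are invited to intervene. The sketch below is our hypothetical rendering of that flow; all function and variable names are illustrative.

```python
def route_detection(victim: str, support_contact: str | None,
                    bystanders: list[str]) -> list[tuple[str, str]]:
    """Hypothetical notification routing for the Demo 1 scenario."""
    notices = []
    # The victim is told abuse was detected but the content is not shown
    # automatically, to avoid re-traumatising her.
    notices.append((victim,
                    "Abusive comments were detected on your post. "
                    "Review them, or ask your support contact for help?"))
    if support_contact:
        notices.append((support_contact,
                        f"{victim} may need support with abuse detected on "
                        "their post."))
    for b in bystanders:
        # Bystanders detected by AI (e.g. they commented on or viewed the
        # post) are prompted to support the victim, report the content, or
        # ask the perpetrator for a takedown.
        notices.append((b,
                        "Abuse was detected on a post you saw. Support the "
                        "person, report it, or ask for it to be taken down?"))
    return notices

for recipient, message in route_detection("Aoife", "Mum", ["Liam", "Sadhbh"]):
    print(f"To {recipient}: {message}")
```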

In the second demo, we featured an example of cyberbullying by exclusion, which according to Instagram is a common way for teen girls to experience cyberbullying on the platform.16 For example, purposeful exclusion can be made visible and performative (Marwick & boyd, 2014) by tagging the person in a story or post featuring photos from the event to which she was not invited. In the demo, we showed three teen girls tagging a fourth in a photo from an event to which she was not invited.

Through photo analysis and facial recognition, an AI application could detect that more people are tagged in the photo than actually appear in it, and establish that bullying has possibly occurred by further examination of the direct messages (DMs) exchanged among the three girls, who discussed not inviting the fourth one to the event and then showing her she was excluded by tagging her in the photos. Thereafter, the victim would receive a prompt asking whether she would like to review the post in which she had been tagged and report it to Instagram, in case it was bullying. Any intervention that prompts the victim to view an abusive message should contain a trigger warning as well. She would also be prompted to reach out to her support contact for help. The support contact would be given the option to reach out to the girls who engaged in exclusion and ask them to take the post/story down, explaining that such behaviour is hurtful. Both the victim and the support contact would have the option to restrict further sharing of this post/story on Instagram and other platforms, in addition to the regular options of reporting it to the platform and untagging themselves.
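
The detection heuristic in this demo can be summarised as: flag a possible exclusion incident when more accounts are tagged than faces appear in the photo, corroborated by DM analysis. A minimal sketch of that logic, under our assumptions, follows; a real system would need far richer signals, safeguards and human review.

```python
def exclusion_signal(tagged_users: set[str], faces_detected: int,
                     dm_suggests_exclusion: bool) -> bool:
    """Hypothetical heuristic for the Demo 2 exclusion scenario.

    Flags possible exclusion bullying when more accounts are tagged than
    faces appear in the photo AND DM analysis suggests deliberate
    exclusion. Both inputs are assumed to come from upstream AI components
    (face detection and DM text analysis) not modelled here.
    """
    more_tags_than_faces = len(tagged_users) > faces_detected
    return more_tags_than_faces and dm_suggests_exclusion

# Three girls appear in the photo but a fourth account is tagged; their DMs
# discussed not inviting her, so the heuristic raises a flag for review.
print(exclusion_signal({"girl_a", "girl_b", "girl_c", "girl_d"},
                       faces_detected=3,
                       dm_suggests_exclusion=True))  # True
```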

Demo 3 offered the possibility of reporting a cyberbullying incident on Instagram to one’s official school account, which would be managed by a professional at the school. Under this scheme, every school in Ireland would have an official account on Instagram. Upon sign-up, children would be given the option to confirm their attendance at a particular school and the ability to report incidents to that school. This demo is a variation on Facebook/Meta’s earlier proposals and efforts in the United States (at the state level)17 to involve schools as escalators or trusted flaggers.

Under such a scheme, the school would be able to flag a case to the platform for prioritised handling as a trusted flagger (Milosevic, 2018). In the demo, we did not position schools as trusted flaggers; rather, we tested the desirability of school involvement in cyberbullying cases altogether. The demo shows a boy tagged in a post with abusive comments underneath; the post is detected proactively by AI and the boy is prompted to report it to his school in addition to reporting it to the platform. As in the previous demos, the option to reach out to a support person was provided, as was the possibility of asking the perpetrator to take the post down. Furthermore, the perpetrator was punished with reduced engagement on all his posts over the course of the following month (i.e., all his posts, regardless of the nature of their content, would be less visible to other users on the platform, similar to shadow banning18), following a notification and the option to appeal the decision.
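
The engagement penalty described here is essentially a time-limited visibility multiplier applied to all of the perpetrator’s posts, paired with a notification and an appeal option. The sketch below shows one hypothetical way to model it; the penalty factor and duration are our assumptions for illustration, not any platform’s documented behaviour.

```python
import time

PENALTY_FACTOR = 0.5           # halve the ranking score (assumption)
PENALTY_DURATION = 30 * 86400  # one month, in seconds

class Account:
    """Hypothetical model of the Demo 3 visibility penalty."""

    def __init__(self, username: str):
        self.username = username
        self.penalty_until = 0.0
        self.appeal_pending = False  # set if the user files an appeal

    def apply_penalty(self, now: float) -> str:
        self.penalty_until = now + PENALTY_DURATION
        # Per the demo's design, the user is notified and may appeal.
        return (f"{self.username}: your posts will have reduced visibility "
                "for 30 days. You can appeal this decision.")

    def ranking_multiplier(self, now: float) -> float:
        # While the penalty is active, ALL posts are down-ranked,
        # regardless of their content (similar to shadow banning).
        return PENALTY_FACTOR if now < self.penalty_until else 1.0

acct = Account("perpetrator01")
now = time.time()
print(acct.apply_penalty(now))
print(acct.ranking_multiplier(now))              # 0.5 during the penalty
print(acct.ranking_multiplier(now + 31 * 86400)) # 1.0 after it lapses
```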

Demo 4 took place on Trill and showed homophobic bullying of a person via direct messaging. AI was able to scan DMs for abusive content; following detection, the sender was automatically blocked, and the victim received prompts with options to seek support from the support contact and to report the content to the platform. Subsequently, those who acted as support contacts were rewarded with support score points, which could be added to one’s account profile/username; they were also rewarded with the ability to unlock additional platform features such as colours.
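
Demo 4 combines three mechanisms: DM scanning, automatic blocking of the sender on detection, and a “support score” reward for helpers. A minimal hypothetical sketch of how these pieces could fit together is shown below; the placeholder lexicon, point values and names are ours, not Trill’s.

```python
# Hypothetical sketch of the Demo 4 flow: DM scanning, automatic block,
# and support-score rewards. The slur lexicon is a placeholder; a real
# system would use a trained abusive-language model.

ABUSIVE_PHRASES = {"<slur1>", "<slur2>"}  # placeholder lexicon

blocked: set[str] = set()
support_scores: dict[str, int] = {}

def handle_dm(sender: str, recipient: str, text: str) -> list[str]:
    """Scan a DM; on detection, block the sender and prompt the victim."""
    prompts = []
    if any(phrase in text.lower() for phrase in ABUSIVE_PHRASES):
        blocked.add(sender)  # the sender is blocked automatically
        prompts.append(f"{recipient}: abuse was detected and the sender "
                       "was blocked. Ask your support contact for help, "
                       "or report the content to the platform?")
    return prompts

def reward_supporter(helper: str, points: int = 10) -> None:
    # Helpers accrue support score points shown on their profile/username
    # and can unlock cosmetic features (e.g. colours) at point thresholds.
    support_scores[helper] = support_scores.get(helper, 0) + points

print(handle_dm("bully99", "alex", "you are a <slur1>"))
print(blocked)          # {'bully99'}
reward_supporter("best_friend")
print(support_scores)   # {'best_friend': 10}
```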

Demo 5 was optional (we showed it only if there was enough time at the end of an interview/FG session) and allowed users to create an anti-bullying video on TikTok and Instagram upon sign-up. The anti-bullying video could be tailored by the user, created together with the support contact/helper/friend, and feature any available music/sound clips. Users could incorporate a pre-made message such as “be kind”, “that was hurtful” or “this is not ok”, asking the perpetrator to take abusive content down or stop the abuse (common messages in online safety campaigns19); or the user could write something they thought was appropriate, which could even frame the situation in a joking manner or be more assertive in tone towards the perpetrator. The video could then be sent automatically when AI detects something abusive towards the user, or the user could choose whether and when it should be sent.

Finally, the last optional demo showed a “reflective message”, a well-researched intervention already used by some platforms, which prompts a user who is about to post something detected as abusive to think twice before posting it. In our demo, the message the poster was about to post was not necessarily abusive; it expressed a negative opinion, “a bit dull if you ask me”, in response to a throwback photo of someone having fun at a pre-Covid lockdown party. The comment conveyed that their party did not seem like that much fun after all.
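
Reflective messages of this kind (cf. Van Royen et al., 2017) boil down to a pre-post check: score the draft for negativity and, above a threshold, show a nudge while still letting the user post. The sketch below illustrates that gate with a toy scorer; a production system would use a trained model and a carefully tuned threshold.

```python
# Hypothetical "think twice" pre-post check. The cue list and threshold
# are toy stand-ins for a trained negativity/abuse classifier.

NEGATIVE_CUES = {"dull", "boring", "lame"}

def negativity_score(text: str) -> float:
    """Fraction of words matching negative cues; toy stand-in scorer."""
    words = text.lower().split()
    return sum(w.strip(".,!?") in NEGATIVE_CUES for w in words) / max(len(words), 1)

def maybe_prompt_reflection(draft: str, threshold: float = 0.1) -> str | None:
    # The nudge is advisory: the user can still post after reflecting.
    if negativity_score(draft) >= threshold:
        return ("This comment may come across as hurtful. "
                "Do you still want to post it?")
    return None  # no prompt; the post goes through as normal

print(maybe_prompt_reflection("a bit dull if you ask me"))  # shows the nudge
print(maybe_prompt_reflection("great party, wish I was there!"))  # None
```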

 

Method and data analyses 

We rely on qualitative research with preteen and teen children aged 12-17: 15 semi-structured in-depth interviews conducted online (8 females, 7 males) and 6 focus groups (hereafter FGs; 4 groups with female participants conducted offline in one school in an urban area of Ireland, and 2 online FGs with males, with 6-10 children per group). See Tables 1 and 2 in the Appendix for the sample structure. All research was conducted in Ireland. Interview recruitment took place with the help of the youth organisation Foróige20 as well as via the Amarach research agency. The fieldwork was conducted from May to August 2021, and all sessions except the 4 school-based FGs were conducted online due to lockdown conditions. All procedures received approval from the Dublin City University Research Ethics Committee (REC) as well as the Data Protection Unit (DPU). Parental/caregiver written consent and child written assent were sought from all participants following the provision of plain language statements (PLS), which explained, in a child-friendly manner, that participation was voluntary and could be withdrawn at any time, as well as the principles of confidentiality and anonymity. Following transcription and anonymisation, three coders engaged in an iterative thematic analysis of the data; they discussed the themes that emerged, refined broad themes into more nuanced ones, and discussed any disagreements over how the content was coded (Boyatzis, 1998; Braun & Clarke, 2006). Deductive coding (following predefined codes) was performed first, with all three coders searching for codes driven by the research questions; an open-ended, inductive round of coding was performed thereafter, with coders adding codes that they thought emerged from the data, which were subsequently exchanged and discussed.


Limitations: We experienced significant recruitment difficulties and delays due to Covid-19 lockdown circumstances, and we were unable to specifically recruit children from non-white Irish ethnic backgrounds; while some children in our sample did come from minority ethnic backgrounds, we were not able to recruit based on this criterion, nor, consequently, did we record this feature as a variable in our study. We were also unable to recruit any children who openly identified as non-binary or as LGBTQI+, and non-governmental organisations in Ireland catering to this minority group were not able to assist with recruitment at the time of our fieldwork.

 

Findings


Concluding summary

In this report, we detailed the key findings of the project, which solicited children’s ideas and suggestions for the design of AI-based cyberbullying interventions on popular social media platforms. We were especially interested in children’s views on how proactive regulation of abusive behaviours such as cyberbullying affected their rights to protection (safety); what kinds of interventions they would consider effective in reducing cyberbullying; and how these interventions affected their rights to privacy, freedom of expression and access to information. Unlike reactive moderation, where a child first reports content or an account to the platform before moderation takes place, proactive moderation refers to platforms deploying AI to detect and act against abusive behaviours before they are reported by users (Milosevic et al., 2022).

Following the UN Committee on the Rights of the Child’s adoption of General Comment No. 25, children’s rights, as stipulated in the UNCRC, apply in the digital world (Livingstone, 2021; Third et al., 2021). This means that in addition to the rights to protection (safety), privacy and freedom of expression, children also have the right to be heard on matters that concern them (Article 12). While states, and not technology companies, are the primary duty bearers of UNCRC implementation (see, e.g., Benesch, 2020), the passage of General Comment No. 25 nonetheless underscores calls long made by scholars: all stakeholders whose activity has an impact on children’s lives, including the technology industry, need to take responsibility for upholding children’s rights in digital environments, and especially to take children’s views into account when developing policies and mechanisms that affect them (Lievens et al., 2018; Staksrud, 2016).

Embedding children’s views on matters that concern them with respect to technology design will become ever more important with the implementation of national laws regulating online safety, such as the OSMR in Ireland and the Online Safety Bill in the UK, as well as the Digital Services Act at the EU level. With respect to privacy and the implementation of the General Data Protection Regulation (GDPR),26 the Irish Data Protection Commission’s Fundamentals for a Child-Oriented Approach to Data Processing already stipulates, among other clauses, that children should have a say regarding data processing by online services.27 In our study, therefore, we solicit children’s views on AI-based enforcement as a step towards ensuring that children’s best interests are a primary consideration in interventions that have a clear impact on them. The authors of this report have long emphasised the need for technology companies to consult children during the safety policy design process and, moreover, to open to public scrutiny both the information about how this is done and the results of the process (Milosevic, 2018).

Our results, based on qualitative research with 59 adolescents aged 12 to 17 from Ireland, suggest that children would generally welcome AI-based interventions provided that they are given the option to opt in and out. Children, however, raised a number of privacy concerns, especially regarding the use of facial recognition and DM/private message monitoring for the purpose of cyberbullying intervention.

While most of them would welcome the option to have a support contact/helper/friend whose help could be solicited when cyberbullying is detected by AI, children raised a number of concerns about the effectiveness of such assistance and their willingness to use it: from preferring to deal with cyberbullying on their own, and an unwillingness to tell others that they had experienced cyberbullying or to bring them into the incident, to the fear of burdening their friends with their own problems. Involving the support contact to ask the perpetrator to stop was considered particularly problematic, especially by older girls (15-17) in focus groups, who did not think someone else should be responsible for solving their problems. Some pointed out that they would be reluctant to admit to having a support contact, as this implied weakness or a lack of self-reliance and was perceived to be appropriate for smaller children.

How bystanders can be encouraged to become upstanders, and the conditions under which they are most likely to help children experiencing victimisation, are widely researched questions (Bastiaensens et al., 2014; DiFranzo et al., 2018; Macaulay et al., 2022; Williford et al., 2013). In our study, children expressed reluctance to bring random bystanders into the incident, emphasising that such involvement was platform- and context-dependent. They preferred to address the problem with their support contact or on their own, and even said that bystanders (if they were strangers) could make things worse.

While many children were reluctant to bring others in, they seemed to think that if a bystander was someone they knew, their involvement could be welcome, depending on the context of a particular incident.

Reporting incidents to school via an official Instagram account handled by the school counsellor or another professional was also met with ambivalence; some children thought it would be helpful to have it in place but offered a number of reasons why they would not wish to have the school involved. Some thought that there was little the school could do in any event, especially if the perpetrator did not attend that school, or that the school was not responsible for what happened online outside school hours and premises.

Custom-tailored anti-bullying videos, which could be sent in response to abusive behaviour when detected by AI, were met with mixed feelings and seen as more appropriate for younger children; many children thought that telling someone who is bullying you to be kind could backfire, just as telling them that their behaviour is hurtful could be counterproductive (children surmised that some perpetrators could think: “well, that is the point, I want to hurt you”). A number of cyberbullying interventions designed by adults, researchers and advocacy organisations, many of which are featured every year on Safer Internet Day,28 include messages such as “Be Kind!” and “Don’t Bully”. The feedback we received from children shows how these messages might fail to resonate with youth culture, and how we need to ensure that cyberbullying prevention and intervention is meaningful and context-sensitive in order for it to be effective in reducing the problem (Jones et al., 2014; Finkelhor et al., 2021).

While respondents did not seem too concerned about freedom of expression (FoE), they nonetheless emphasised the importance of effective appeals mechanisms when AI-based takedown decisions or activity restrictions are made (such as the perpetrator’s content being algorithmically deprioritised, similar to shadow banning).

Some pointed out that such restrictions should be time-limited, or triggered only after repeated violations and reconsidered after a while. While they did not think that reduced engagement should be the only punishment available (some thought banning or content takedown more appropriate), they would welcome this feature as long as it is transparent and appropriate appeals mechanisms are provided. Similarly, they thought that giving the victim and support contact the option to restrict the sharing of AI-detected cyberbullying content to other platforms (such as posts or stories, e.g., from Instagram to Snapchat) could be a welcome feature; however, they thought it would not necessarily be effective, given that one can screenshot and copy content in many ways. Finally, they were concerned overall that AI would wrongly detect joking or slagging as cyberbullying, which would negatively impact FoE as well as their friendships.


Policy Recommendations

References

Anderle, M. (2016, March 15). Making a more Empathetic Facebook. The Atlantic. Retrieved from: https://www.theatlantic.com/technology/archive/2016/03/facebooks-anti-b…

Ashktorab, Z., & Vitak, J. (2016, May). Designing cyberbullying mitigation and prevention solutions through participatory design with teenagers. In Proceedings of the 2016 CHI conference on human factors in computing systems (pp. 3895-3905).

Barlińska, J., Szuster, A., & Winiewski, M. (2013). Cyberbullying among adolescent bystanders: Role of the communication medium, form of violence, and empathy. Journal of Community & Applied Social Psychology, 23(1), 37-51.

Bastiaensens, S., Vandebosch, H., Poels, K., Van Cleemput, K., DeSmet, A., & De Bourdeaudhuij, I. (2014). Cyberbullying on social network sites. An experimental study into bystanders’ behavioural intentions to help the victim or reinforce the bully. Computers in Human Behavior, 31, 259-271.

Bastiaensens, S., Pabian, S., Vandebosch, H., Poels, K., Van Cleemput, K., DeSmet, A., & De Bourdeaudhuij, I. (2016). From normative influence to social pressure: How relevant others affect whether bystanders join in cyberbullying. Social Development, 25(1), 193-211.

Bauman, S., & Yoon, J. (2014). This issue: Theories of bullying and cyberbullying. Theory Into Practice, 53(4), 253-256.

Benesch, S. (2020). But Facebook’s Not a Country: How to Interpret Human Rights Law for Social Media Companies. JREG Bulletin, 38, 86.

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative research in psychology, 3(2), 77-101.

DiFranzo, D., Taylor, S. H., Kazerooni, F., Wherry, O. D., & Bazarova, N. N. (2018, April). Upstanding by design: Bystander intervention in cyberbullying. In Proceedings of the 2018 CHI conference on human factors in computing systems (pp. 1-12).

DeSmet, A., Veldeman, C., Poels, K., Bastiaensens, S., Van Cleemput, K., Vandebosch, H., & De Bourdeaudhuij, I. (2014). Determinants of self-reported bystander behavior in cyberbullying incidents amongst adolescents. Cyberpsychology, Behavior, and Social Networking, 17(4), 207-215.

Douek, E. (2022). Second Wave Content Moderation Institutional Design: From Rights To Regulatory Thinking. Available at SSRN 4005326.

Espelage, D. L., Rao, M. A., & Craven, R. G. (2012). Theories of cyberbullying. In S. Bauman, D. Cross and J. Walker (Eds.). Principles of cyberbullying research: Definitions, measures, and methodology, 49-67.

Finkelhor, D., Walsh, K., Jones, L., Mitchell, K., & Collier, A. (2021). Youth internet safety education: aligning programs with the evidence base. Trauma, violence, & abuse, 22(5), 1233-1247.

Gillespie, T. (2018). Custodians of the Internet. Yale University Press.

Ging, D., & O’Higgins Norman, J. (2016). Cyberbullying, conflict management or just messing? Teenage girls’ understandings and experiences of gender, friendship, and conflict on Facebook in an Irish second-level school. Feminist Media Studies, 16(5), 805-821.

Görzig, A., & Macháčková, H. (2015). Cyberbullying from a socio-ecological perspective: a contemporary synthesis of findings from EU Kids Online. Retrieved from: https://www.researchgate.net/publication/281554815_Cyberbullying_from_a_socioecological_perspective_A_contemporary_synthesis_of_findings_from_EU_Kids_Online

Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1), 2053951719897945.

Heldt, A. & Dreyer, S. (2021). Competent third parties and content moderation on platforms. Journal of Information Policy, 11, 265-300. https://doi.org/10.5325/jinfopoli.11.2021.0266

Hinduja, S., & Patchin, J. W. (2013). Social influences on cyberbullying behaviors among middle and high school students. Journal of youth and adolescence, 42(5), 711-722.

Hinduja, S., & Patchin, J. W. (2015). Bullying beyond the schoolyard: Preventing and responding to cyberbullying. Corwin press.

Jones, L. M., Mitchell, K. J., & Walsh, W. A. (2014). A content analysis of youth internet safety programs: Are effective prevention strategies being used? Retrieved from: https://scholars.unh.edu/ccrc/41/

Kowalski, R. M., Giumetti, G. W., Schroeder, A. N., & Lattanner, M. R. (2014). Bullying in the digital age: a critical review and meta-analysis of cyberbullying research among youth. Psychological bulletin, 140(4), 1073.

Latané, B., & Darley, J. M. (1970). The unresponsive bystander: Why doesn’t he help? Prentice Hall.

Lieberman, H., Dinakar, K., & Jones, B. (2011). Let’s gang up on cyberbullying. Computer, 44(9), 93-96.

Lievens, E., Livingstone, S., McLaughlin, S., O’Neill, B., & Verdoodt, V. (2018). Children’s rights and digital technologies. International human rights of children, 1-27.

Livingstone, S., Carr, J., & Byrne, J. (2016). One in three: Internet governance and children’s rights.

Livingstone, S., & Third, A. (2017). Children and young people’s rights in the digital age: An emerging agenda. New media & society, 19(5), 657-670.

Livingstone, S., Stoilova, M., & Nandagiri, R. (2019). Children’s data and privacy online: growing up in a digital age: an evidence review. Retrieved from: http://eprints.lse.ac.uk/101283/1/Livingstone_childrens_data_and_privac…

Livingstone, S., Stoilova, M., Nandagiri, R., Milosevic, T., Zdrodowska, A., Mascheroni, G., … & Wartella, E. A. (2020). The datafication of childhood: Examining children’s and parents’ data practices, children’s right to privacy and parents’ dilemmas. AoIR Selected Papers of Internet Research.

Livingstone, S. (2021, February 4). Children’s Rights Apply in the Digital World! LSE blogs. Retrieved from: https://blogs.lse.ac.uk/medialse/2021/02/04/childrens-rights-apply-in-t…

Lobe, B., Velicu, A., Staksrud, E., Chaudron, S., & Di Gioia, R. (2021). How children (10-18) experienced online risks during the Covid-19 lockdown-Spring 2020. Key findings from surveying families in 11 European countries. The Joint Research Centre (JRC) of the European Commission. Retrieved from: https://publications.jrc.ec.europa.eu/repository/handle/JRC124034

Macaulay, P. J., Betts, L. R., Stiller, J., & Kellezi, B. (2022). Bystander responses to cyberbullying: The role of perceived severity, publicity, anonymity, type of cyberbullying, and victim response. Computers in Human Behavior, 107238.

Macháčková, H., & Pfetsch, J. (2016). Bystanders’ responses to offline bullying and cyberbullying: The role of empathy and normative beliefs about aggression. Scandinavian journal of psychology, 57(2), 169-176.

Marwick, A., & Boyd, D. (2014). ‘It’s just drama’: Teen perspectives on conflict and aggression in a networked era. Journal of youth studies, 17(9), 1187-1204.

Mascheroni, G., & Siibak, A. (2021). Datafied Childhoods: Data Practices and Imaginaries in Children’s Lives. Peter Lang.

Miller, P. H., Baxter, S. D., Royer, J. A., Hitchcock, D. B., Smith, A. F., Collins, K. L., … & Finney, C. J. (2015). Children’s social desirability: Effects of test assessment mode. Personality and individual differences, 83, 85-90.

Mishna, F., Saini, M., & Solomon, S. (2009). Ongoing and online: Children and youth’s perceptions of cyber bullying. Children and Youth Services Review, 31(12), 1222-1228.

Mishna, F., Birze, A., Greenblatt, A., & Khoury-Kassabri, M. (2021). Benchmarks and bellwethers in cyberbullying: the relational process of telling. International Journal of Bullying Prevention, 3(4), 241-252.

Milosevic, T., Van Royen, K., & Davis, B. (2022). Artificial intelligence to address cyberbullying, harassment and abuse: new directions in the midst of complexity. International journal of bullying prevention, 1-5.

Milosevic, T., Verma, K., Davis, B., Laffan, D., Walshe, R., O’Higgins Norman, J. (2021, September). Developing AI-based Interventions on Online Platforms: Standardising Children’s Rights. 11th International Conference on Standardisation and Innovation in Information Technology (SIIT).

Milosevic, T. (2016). Social media companies’ cyberbullying policies. International Journal of Communication, 10, 22.

Milosevic, T. (2018). Protecting children online?: Cyberbullying policies of social media companies. The MIT Press.

Montgomery, K. C., Chester, J., & Milosevic, T. (2017). Children’s privacy in the big data era: Research opportunities. Pediatrics, 140(Supplement_2), S117-S121.

O’Higgins Norman, J. (2020). Tackling bullying from the inside out: Shifting paradigms in bullying research and interventions. International journal of bullying prevention, 2(3), 161-169.

National Advisory Council for Online Safety (Report of a National Survey of Children, their Parents and Adults regarding Online Safety). Retrieved from: https://www.gov.ie/en/publication/ebe58-national-advisory-council-for-o…

Papatraianou, L. H., Levine, D., & West, D. (2014). Resilience in the face of cyberbullying: An ecological perspective on young people’s experiences of online adversity. Pastoral Care in Education, 32(4), 264-283.

Pfattheicher, S., & Keller, J. (2015). The watching eyes phenomenon: The role of a sense of being seen and public self-awareness. European journal of social psychology, 45(5), 560-566.

Phillips, W. (2015). This is why we can’t have nice things: Mapping the relationship between online trolling and mainstream culture. MIT Press.

Rudnicki, K., Vandebosch, H., Voué, P., & Poels, K. (2022). Systematic review of determinants and consequences of bystander interventions in online hate and cyberbullying among adults. Behaviour & Information Technology, 1-18.

Smith, P. K. (2016). Bullying: Definition, Types, Causes, Consequences, and Intervention. Social and Personality Psychology Compass, 10, 519-532.

Staksrud, E. (2016). Children in the online world: Risk, regulation, rights. Routledge.

Third, A., Collin, P., Fleming, C., Hanckel, B., Moody, L., Swist, T., & Theakstone, G. (2021). Governance, children’s rights and digital health. Retrieved from: https://www.governinghealthfutures2030.org/wp-content/uploads/2021/10/G…

Van Bommel, M., van Prooijen, J. W., Elffers, H., & Van Lange, P. A. (2012). Be aware to care: Public self-awareness leads to a reversal of the bystander effect. Journal of Experimental Social Psychology, 48(4), 926-930.

Van Royen, K. V., Poels, K., Vandebosch, H., & Zaman, B. (2021). Think Twice to be Nice? A User Experience Study on a Reflective Interface to Reduce Cyber Harassment on Social Networking Sites. International Journal of Bullying Prevention, 1-12.

Van Royen, K., Poels, K., Vandebosch, H., & Adam, P. (2017). “Thinking before posting?” Reducing cyber harassment on social networking sites through a reflective message. Computers in human behavior, 66, 345-352.

Van Royen, K., Poels, K., & Vandebosch, H. (2016). Harmonizing freedom and protection: Adolescents’ voices on automatic monitoring of social networking sites. Children and Youth Services Review, 64, 35-41

Vidgen, B., & Derczynski, L. (2020). Directions in abusive language training data, a systematic review: Garbage in, garbage out. PloS one, 15(12), e0243300.

Williford, A., Elledge, L. C., Boulton, A. J., DePaolis, K. J., Little, T. D., & Salmivalli, C. (2013). Effects of the KiVa antibullying program on cyberbullying and cybervictimization frequency among Finnish youth. Journal of Clinical Child & Adolescent Psychology, 42(6), 820-833.

Wu, J., Luan, S., & Raihani, N. (2022). Reward, punishment, and prosocial behavior: Recent developments and implications. Current opinion in psychology, 44, 117-123.


Appendix

Table 1: Focus Groups (FGs), sample structure

 

Focus group | Number of participants | Sex | Age
FG1 | 9 | Female | 13-14
FG2 | 6 | Female | 16-17
FG3 | 8 | Female | 15-16
FG4 | 9 | Female | 15-16
FG5 | 6 | Male | 13-14
FG6 | 6 | Male | 15-16

 

Table 2: Interviews, sample structure

Sex and age | Number of interviews
Males, age 12 | 2
Males, age 13 | 1
Males, age 14 | 1
Males, age 15 | 1
Males, age 16 | 2
Females, age 12 | 1
Females, age 13 | 1
Females, age 14 | 1
Females, age 15 | 3
Females, age 16 | 2

Acknowledgements

As an Elite-S fellow, Tijana Milosevic has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 801522, and from Science Foundation Ireland, co-funded by the European Regional Development Fund, through the ADAPT Centre for Digital Content Technology, grant number 13/RC/2106_P2.

Mr Derek Laffan also received funds from the Department of Education, Ireland, while working on the project.

We thank Prof. Sameer Hinduja and Ms. Anne Collier for kindly providing feedback on this report.

We would also like to thank Foróige for their help with participant recruitment. Special thanks to Ms. Angela Kinahan and ABC for all the administrative support which enabled the execution of this study, and to Mr. Darran Heaney and Ms. Katrina Harrison for their help with recruitment.

We thank Dushandthini Saravanan and Vivek Raj for their help with transcriptions.

We would like to thank Trill for allowing us to use their platform in our study and for their support.

Last but not least, our enormous gratitude goes to all the young people who took the time and energy to participate in this study, and we hope that this work will translate into meaningful policies made not just for but by them.

All photos of users used in demos are taken from community mockup templates available in Figma: https://www.figma.com/file/q3YK35FXUSrlZzjgd3NFqC/TikTok-UI-Screens-Com…

The photos were published open access for TikTok by: https://www.figma.com/@pixsellz and for Instagram by: https://www.figma.com/@arthurhazan

All the other photos are purchased and owned by the Anti-bullying Centre and do not require crediting.

ISBN Number: 978-1-911669-45-6

Contact details
tijana.milosevic@dcu.ie
@TiMilosevic

Footnotes

1 UNESCO and the World Anti-Bullying Forum. (November 1-3, 2022). Presenting a proposed revised definition of school bullying. Retrieved from: https://delegia-virtual.s3.eu-north-1.amazonaws.com/projects/delegia-wa… WABF_summary_of_new_definition.pdf 

2 Meta (2021, November 9). Community Standards Enforcement Report: Third Quarter 2021. Retrieved from: https://about.fb.com/news/2021/11/community-standards-enforcement-report-q3-2021/

3 Instagram Help Centre (2022). How do I filter out and hide comments I don’t want to appear on my posts on Instagram? Retrieved from: https://help.instagram.com/700284123459336

4 Government of Ireland. (2022, January 25). Publication of the Online Safety and Media Regulation Bill. Retrieved from: https://www.gov.ie/en/publication/88404-publication-of-the-online-safety-and-media-regulation-bill/

5 Gov. UK, Department for Digital, Culture and Sport. (2022, March 17). Online Safety Bill: FactSheet. Retrieved from: https://www.gov.uk/government/publications/online-safety-bill-supportin…

6 European Commission.(2022, March 25). The Digital Services Act Package. Retrieved from: https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package

7 Australian Government (n.d.). Federal Register of Legislation: Online Safety Act 2021. Retrieved from: https://www.legislation.gov.au/Details/C2021A00076

8 Meta. (2019). Announcing the winners of phase two content policy research awards. Retrieved from: https://research.facebook.com/blog/2019/09/announcing-the-winners-of-phase-two-content-policy-research-awards/

9 Australian Government, eSafety Commissioner. (n.d.). Safety by design. Retrieved from: https://www.esafety.gov.au/industry/safety-by-design

10 United Nations Human Rights Office of the High Commissioner (2021). General Comment No. 25 (2021) on children’s rights in relation to the digital environment. Retrieved from: https://www.ohchr.org/EN/HRBodies/CRC/Pages/GCChildrensRightsRelationDigitalEnvironment.aspx

11 Instagram Help Centre (2022). How do I filter out and hide comments I don’t want to appear on my posts on Instagram? Retrieved from: https://help.instagram.com/700284123459336

12 YouTube Help. (n.d.). YouTube trusted flagger program. Retrieved from: https://support.google.com/youtube/answer/7554338?hl=en; European Commission (2019). Code of Conduct on Countering Illegal Hate Speech Online. Retrieved from: https://ec.europa.eu/info/sites/default/files/code_of_conduct_factsheet_5_web.pdf

13 Figma can be accessed here: https://www.figma.com/ 

14 All demos can be found at this link: https://drive.google.com/file/d/1O6PzyffWKhjP1SkJedDbgl_qrjFG1HYl/view?…

15 Trill is a social network that allows for anonymous sharing and aims to provide a supportive space for improving mental health: https://www.trillproject.com/

16 According to information presented at the Meta/Facebook Global Safety Summit, 2019: https://about.fb.com/news/2019/05/2019-global-safety-well-being-summit/

17 The Baltimore Sun. (2013, October 3). Facebook and Md. Schools Partner to Combat Bullying. Retrieved from: https://www.baltimoresun.com/education/bs-xpm-2013-10-03-bs-md-facebook-school-partnership-20131003-story.html

18 TikTok. (n.d.) What is Shadow Banning. Retrieved from: https://www.tiktok.com/discover/what-is-shadowbanning?lang=en 

19 Webwise.ie (n.d.). Be kind online. Retrieved from: https://www.webwise.ie/uncategorized/be-kind-online-sid/; TackleBullying.ie (n.d.). Resources. Retrieved from: https://tacklebullying.ie/resources/

20 Foróige. (n.d.). Foróige’s philosophy. Retrieved from: https://www.foroige.ie/

21 Minister for Education and Skills, IE. (2013). Action plan on bullying. Retrieved from: https://assets.gov.ie/24758/0966ef74d92c4af3b50d64d286ce67d0.pdf

22 Circular 045/2013. Anti-bullying Procedures for Primary and Post Primary Schools. Retrieved from: https://circulars.gov.ie/pdf/circular/education/2013/45.pdf 

23 TUSLA, Child and Family Agency. (n.d.). Children First Guidance and Legislation. Retrieved from: https://www.tusla.ie/children-first/children-first-guidance-and-legislation/

24 Child Protection Procedures for Primary and Post-Primary Schools, IE. (2017). Retrieved from: https://www.pdst.ie/sites/default/files/Child%20Protection%20Procedures%202017.pdf

25 After the post was detected by AI, reported to the platform and confirmed as violating platform policy.

26 GDPR. EU. (n.d.) Complete Guide to GDPR compliance. Retrieved from: https://gdpr.eu/ 

27 Data Protection Commission, Ireland. (2021, December). Fundamentals for a Child-Oriented Approach to Data Processing. Retrieved from: https://www.dataprotection.ie/sites/default/files/uploads/2021-12/Fundamentals%20for%20a%20Child-Oriented%20Approach%20to%20Data%20Processing_FINAL_EN.pdf

28 Safer Internet Day. (n.d.) Together for a Better Internet. Retrieved from: https://www.saferinternetday.org

29 Exclusion-based bullying was said to be frequent on Instagram, according to information presented at the Meta/Facebook Global Safety Summit, 2019: https://about.fb.com/news/2019/05/2019-global-safety-wellbeing-summit/

30 Companies are concerned that by revealing the exact details of their policies and their moderation decisions, they might inadvertently provide guidelines for those who wish to violate the policies as to how to get around those (see Milosevic, 2018). We do not think that by revealing to the user that a piece of their content or an action violated the company policy would necessarily lead to such an outcome. It is important to exhibit transparency in the context of restrictive decisions, and children have voiced such concerns as well. 

31 Meta. (2022, March 16). Introducing Family Centre and Parental Supervision Tools on Instagram and in VR. Retrieved from: https://about.fb.com/news/2022/03/parental-supervision-tools-instagram-vr/