Opening Statement and Submission to the Joint Oireachtas Committee on Children and Equality
Prof. James O’Higgins Norman
UNESCO Chair on Bullying and Cyberbullying
Dr. Darragh McCashin
Chair, Observatory on Cyberbullying, Cyberhate & Online Harassment
5th February 2026
Chair and Members of the Committee,
Thank you for the opportunity to appear before you today and to speak to the submission from the DCU Anti-Bullying Centre.
Our Centre is Ireland’s leading research centre on bullying, cyberbullying and online safety. In recent years, our research has informed policy and practice nationally and internationally, and today’s submission draws on a substantial body of peer-reviewed Irish and international evidence focused on children and young people aged 4 to 17.
The central message we wish to convey is that online safety cannot be reduced to simple debates about screen time or technology bans. The evidence is very clear: screen time alone is a weak and inconsistent predictor of harm. What matters far more are specific online experiences, particularly cyberbullying and peer harm, image-based sexual abuse, coercive or pressured sexual communication, appearance-related distress, exposure to misogynistic and extremist content, and the impact of digital engagement on sleep and wellbeing.
Our research shows that bullying and cyberbullying remain persistent harms across childhood and adolescence, with a significant minority of young people directly affected and many more exposed as bystanders. In fact, bystander exposure to online bullying is now a common experience, yet many young people report low confidence in knowing how to intervene or report harm. This has important implications for policy, as it highlights the need to invest not only in reporting systems but also in strengthening young people’s self-efficacy and peer support.
We are also deeply concerned about the gendered nature of emerging online harms. Algorithmically amplified influencer cultures are increasingly normalising misogyny, entitlement and sexual harassment, particularly among boys, while simultaneously escalating risks to girls through image-based abuse and the misuse of AI-enabled “nudification” tools. These developments represent a serious threat to children’s dignity, safety and equality, and they underline the urgent need for robust regulation, safety-by-design, and platform accountability.
Importantly, our evidence also cautions against policy responses that exclude children and young people from decision-making. Measures such as smartphone or social media bans are often introduced without meaningful consultation, despite children’s right to be heard in matters affecting their lives. Moreover, there is currently little robust evidence that such bans improve mental wellbeing or reduce harm, particularly when they are not accompanied by broader educational, regulatory and rights-based approaches.
In conclusion, the evidence strongly supports a shift toward experience-focused online safety policy, one that prioritises children’s rights, addresses high-risk online interactions, strengthens reporting and bystander responses, and holds platforms accountable for the design and amplification of harmful content. At the same time, it is essential to recognise that digital technologies also play a positive and deeply embedded role in children’s learning, relationships and identity development.
We look forward to discussing these issues further with the Committee and to supporting evidence-based policy that genuinely improves the online safety and wellbeing of children and young people in Ireland.
Chair and Members of the Committee,
Thank you for the opportunity to meet with you today on behalf of the DCU Anti-Bullying Centre. The Centre is Ireland’s leading centre of excellence in research on bullying, social communication and relationships.
Over the last five years, research from the Centre has been cited in 106 policy documents across 18 countries, including Ireland, where the Centre has worked with various Government departments to develop policy and resources to tackle bullying and online safety issues in schools and communities across Ireland.
This submission draws on peer-reviewed Irish and international research, including a substantial body of work authored or co-authored by colleagues from DCU Anti-Bullying Centre, and addresses children and young people aged 4 to 17 years. It focuses specifically on overall online safety and wellbeing, with attention to the design of online environments and the systems in which risks, benefits and harms occur. Crucially, our work is critically aware of the rights of the child in this rapidly changing environment.
The rise of digital technologies has created new contexts in which children and young people interact, alongside new forms of aggression that are often poorly understood. Our research shows that to be effective, policy should avoid framing online risks and harms as isolated digital behaviour problems. Instead, these must be situated within the broader continuum of bullying that spans both offline and online environments, and the societal norms that underpin these behaviours.
Screen time receives considerable attention. However, Irish and Ireland-linked evidence indicates that “screen time” on its own is an imprecise predictor of harm. Associations with wellbeing tend to be small and inconsistent when measured robustly, while poorer outcomes are more consistently linked to specific online experiences and mechanisms, particularly cyberbullying/peer harm, sexting behaviours (including pressured or unwanted requests and non-consensual sharing), appearance-related distress, and sleep displacement or sleep difficulty (Orben & Przybylski, 2019; Foody et al., 2021; Foody et al., 2023; Cotter et al., 2025; Walsh & Perrotta, 2019).
Irish population-based studies of adolescents have found that high social media use, body dissatisfaction and sexting behaviours were each associated with poorer mental health and elevated self-harm risk indicators, with associations often stronger for girls (Cotter et al., 2025); however, the authors refrain from claiming that these associations are causal.
Although debates are often framed in medicalised language such as dose-dependent responses, “dopamine hits” and addiction, the scientific evidence suggests a far more complex picture for young people (Meshi and Binder, 2025). This involves a complicated interrelationship between individual characteristics, school and family contexts, and broader cultural and sociological factors (Etchells, 2024). Recent randomised controlled trials and observational studies have evaluated the impact of smartphone bans and digital detoxes. The overall findings are mixed, but a large portion of the research on school bans found no significant changes in participants’ mental health and wellbeing, their average time spent on social media, or their educational outcomes.
Online risks change as children grow, but peer harm, particularly bullying and cyberbullying, remains a consistent concern across age groups.
Because studies use different methodologies and definitions, estimates of bullying in Ireland vary widely, making it hard to build a clear overall picture. For example, a recent nationally representative survey of 2nd-year post-primary students found that 6.4% reported online bullying victimisation at least once a month, with boys (9.6%) more than twice as likely as girls (3.8%) to report monthly online victimisation (O’Higgins Norman et al., 2026, in review). In contrast, the Health Behaviour in School-aged Children (HBSC) Ireland 2022 Study, published in 2024, reports that about 15–18% of Irish adolescents aged roughly 10–17 have experienced cyberbullying at least once in the past couple of months, reflecting a broader definition and an older age range.
In a separate bystander study in Ireland, roughly 60% of young people reported having witnessed cyberbullying on social networks, highlighting that many encounter online bullying even if they are not direct targets (Sanmartín Feijóo et al., 2023).
Either way, these data underscore that online bullying is a significant minority experience among young people in Ireland, and that bystander exposure, where young people witness these behaviours, is even more common, making bystander dynamics a critical component of understanding bullying in the digital age.
Other recent research by DCU Anti-Bullying Centre highlights the growing impact of toxic and misogynistic online influencers, particularly within short-form video platforms such as TikTok and YouTube Shorts, where algorithmic recommender systems play a central role in amplifying harmful content. Experimental research using simulated adolescent user accounts found that boys were exposed to male-supremacist, anti-feminist and reactionary content within minutes of use, even when accounts initially engaged only with generic interests such as sport, fitness or gaming. Once engaged, algorithmic systems rapidly increased the volume and intensity of this content, with the majority of recommended videos becoming toxic within a short viewing period. This content frequently exploits boys’ vulnerabilities around mental health help-seeking, identity and economic insecurity, promoting rigid and oppressive models of masculinity that normalise misogyny, emotional suppression, aggression and entitlement.
These dynamics intersect with emerging and deeply concerning risks linked to image-based sexual abuse and AI-generated child sexual abuse material (CSAM), including the recent proliferation of so-called “nudification” tools that enable the creation and circulation of non-consensual sexually explicit images, particularly of girls and minors (Baker et al., 2024; O’Rourke et al., 2024).
Both misogynistic influencer content and so-called “nudification” technologies represent an intensified form of sexual violence, underpinned by entitlement-based conspiratorial ideologies often promoted within segments of wider society. Evidence from experimental research, schools and youth settings indicates that these narratives are shaping how some boys perceive and interact with girls and women, contributing to the normalisation of sexist language, sexual harassment and gender-based abuse, with serious implications for girls’ safety, dignity and participation in educational spaces (Renström and Bäck, 2024; Wescott et al., 2024).
Taken together, the findings underline that algorithmically amplified toxic influencer cultures present a dual harm: they undermine boys’ emotional wellbeing, empathy and healthy identity development, while simultaneously escalating the risk of sexualised and gendered harms to girls, reinforcing the urgent need for robust regulation, safeguarding-by-design, and accountability for platforms and AI systems that enable and amplify these abuses (Flynn et al., 2025; McCashin, 2024; Setty et al., 2025). In Ireland, there is also a strong appetite among the education sector, the youth justice sector and parents’ groups to engage with evidence-informed toolkits that facilitate meaningful dialogue with young people on such issues, including digital citizenship, masculinity and identity, sex and sexuality, and combating mis/disinformation.
Frontline cybercrime practitioners and researchers alike have long anticipated the misuse of generative AI for the offences we are now witnessing. The high-profile coverage of Grok has raised extremely serious concerns for everyone in this room regarding the online safety and dignity of young people. Since September 2025, the Observatory project within DCU Anti-Bullying Centre, funded by the Department of Justice, has been investigating the role of emerging technologies in child sexual abuse material (CSAM) and will publish an evidence review later this year. Although empirical research on Grok specifically is limited, we would like to highlight some research insights that may be of note.
A recent report by AI Forensics collected 50,000 mentions of Grok and 20,000 images generated by Grok between December 25th and January 1st (inclusive). It found: 53% of images generated by Grok contained individuals in minimal attire, of which 81% were women; 2% of images depicted persons appearing to be 18 years old or younger (as determined by Google's Gemini vision model); and 6% of images depicted public figures, approximately one-third of whom were political figures. Overall, approximately 74–92% of the images were of females, generated by profiles that were 83% male-presenting in terms of profile name/picture. Updated data from AI Forensics as of January 13th and 14th suggest that these figures have significantly reduced in response to recently introduced safeguards. However, Grok can still be used via the website and app to generate far more explicit images than on X (AI Forensics, 2026). Unfortunately, there is also a wider range of similar technologies being abused for nefarious purposes by those with specialist skills (Easttom, 2025).
Therefore, it is reasonable to suggest that there is likely a range of typologies of individuals using such technologies for differing and potentially overlapping reasons. This includes, but is not limited to: those motivated by commercial interests, the (gendered) exertion of power and control, fantasy and/or contact-driven sexual interests, or those who purport to be ‘curious’ in exploring the technology based on what is a clear misperception of the harm caused by such AI (Fido, 2026; Flynn et al., 2025; Harper et al., 2021; McGlynn and Toparlak, 2025). It is important to note that pre-teen children, who are not developmentally capable of fully understanding the risks of such technologies, are at risk of becoming perpetrators of real harm to others, especially if these technologies are being advertised to them.
Of the factors associated with the perpetration of image-based sexual abuse, evidence reviews have found that the acceptance of myths that minimise/excuse the perpetrator correlated positively with the perpetration of non-consensual distribution of intimate material and all forms of image-based sexual abuse (Paradiso et al., 2024). Therefore, it is crucial that future research understands the factors that influence how young people either accept or reject the many myths associated with image-based sexual abuse. Finally, scholars have continually noted the absence of impactful victim-centric responses to image-based abuse, such that many of those targeted by this imagery have reported feeling unsupported (Caletti and Summerer, 2024).
Research consistently identifies a persistent tendency among children and young people not to report bullying, alongside particular weaknesses in students’ self-efficacy to report harm when they witness unwanted behaviour online. Studies indicate that a substantial proportion of adolescents act as passive online bystanders, with between 45.3% and 58.8% reporting having witnessed cyberbullying without intervening, highlighting that many young people encounter online bullying indirectly rather than as direct targets. Evidence further shows that online bystanders are often less confident in recognising, interpreting, and knowing how to respond appropriately when cyberbullying occurs, reinforcing non-reporting as a significant barrier to effective intervention (O’Higgins Norman et al., in review; O’Higgins Norman et al., 2024; Feijóo et al., 2025).
Our research indicates that neither primary nor post-primary students have been meaningfully consulted on proposed smartphone bans and other issues related to social media and the internet, nor has their feedback been systematically sought in contexts where such bans have already been implemented. As a result, children and adolescents are once again excluded from decisions that directly affect their daily lives. This raises significant concerns in light of the rights afforded to children and young people under Article 12 of the United Nations Convention on the Rights of the Child, which affirms their right to be heard in matters affecting them (Reynolds et al., 2025).
Similarly, the framing of a ‘social media ban’ belies the reality that any proposed ‘ban’ is in fact a requirement for industry to meet safety thresholds to ensure services are age-appropriate for young people. Current bans in other countries (such as Australia) do not include gaming and generative AI platforms, and face practical challenges in effective implementation. In keeping with many of the issues raised here, there is very poor evidence that social media bans work (Livingstone, 2026; Radtke et al., 2022), and recent real-world evidence from UK schools with over 1,200 students found no evidence that restrictive school policies were associated with lower overall phone and social media use or better mental wellbeing in adolescents (Goodyear et al., 2025).
The research overview presented above underscores the need for a more nuanced, rights-based and evidence-informed approach to online safety and wellbeing for children and young people in Ireland. This approach closely aligns with the recent recommendations of the Online Health Taskforce (2025), particularly in relation to:
a. A strong children and young people’s rights focus
b. Safety-by-design approaches to online services
c. The development of critical digital literacy
d. Robust enforcement and accountability mechanisms
e. Coherence between European Union (EU) and national regulatory frameworks
Taken together, the evidence points to the following priorities for policy and practice:
- Move beyond screen-time debates and focus on preventing and responding to high-risk online experiences that are more strongly associated with harm.
- Adopt age-appropriate protections across childhood, recognising that online risks and protective needs change significantly from early childhood through adolescence.
- Invest in accessible reporting systems, alongside interventions that strengthen self-efficacy and bystander empowerment, particularly in school and youth settings.
- Address gendered and sexual harms explicitly, including image-based abuse, coercive sexual communication, and appearance-related pressures.
- Strengthen regulation and corporate accountability for platform design, recommender systems and algorithmic amplification, ensuring that the safety and rights of children are prioritised over engagement-driven business models.
The evidence base supports a shift toward experience-focused online safety, robust reporting and peer-support systems, and strong regulation of platform design and algorithms. Online safety is a matter of children’s rights, well-being, and equality.
Finally, it would be remiss of us not to underline the positive and deeply embedded role of technology in the everyday lives of children; according to the many young people we speak to, it plays a significant role in their play, connections with others, education and identity.
References
AI Forensics. (2026, January 2). Grok unleashed: Grok generating flood of sexualized images of women, including minors, and extremist propaganda [Flash report]. https://aiforensics.org
AI Forensics. (2026, January 16). AI-generated image abuse: Closing the accountability gap [Policy brief]. https://aiforensics.org
Baker, C. R., Ging, D., & Brandt Andreasen, M. (2024). Recommending toxicity: The role of algorithmic recommender functions on YouTube Shorts and TikTok in promoting male supremacist influencers. DCU Anti-Bullying Centre, Dublin City University.
Caletti, G. M., & Summerer, K. (Eds.). (2024). Criminalising intimate image abuse: A comparative perspective. Oxford University Press.
Cotter, D. S., Dooley, N., Staines, L., Power, E., McCay, S., Gallo, K., Gupta, A., Doyle, L., Cotter, D. R., & Cannon, M. (2025). An investigation of gender-based differences in social media use, sexting behaviours and body dissatisfaction as risk factors for poor mental health and self-harm in adolescents: A cross-sectional population-based study. Irish Journal of Psychological Medicine. Advance online publication. https://doi.org/10.1017/ipm.2025.10122
Easttom, C. (2025, January). Malicious use of artificial intelligence. In 2025 IEEE 15th Annual Computing and Communication Workshop and Conference (CCWC) (pp. 499–507). IEEE.
Etchells, P. (2024). Unlocked: the real science of screen time (and how to spend it better). Piatkus.
Feijóo, S., Laffan, D. A., Sargioti, A., Sciacca, B., McGarrigle, J., Heaney, D., & O’Higgins Norman, J. (2025). Bystander behaviour online and anti-cyberbullying self-efficacy among a post-primary school-aged sample in Ireland. Educational Review. Advance online publication. https://doi.org/10.1080/00131911.2025.2582556
Flynn, A., Powell, A., Eaton, A., & Scott, A. J. (2025). Sexualized deepfake abuse: Perpetrator and victim perspectives on the motivations and forms of non-consensually created and shared sexualized deepfake imagery. Journal of Interpersonal Violence. Advance online publication. https://doi.org/10.1177/08862605251368834
Foody, M., Mazzone, A., Laffan, D. A., Loftsson, M., & O’Higgins Norman, J. (2021). “It’s not just sexy pics”: An investigation into sexting behaviours and behavioural problems in adolescents. Computers in Human Behavior, 117, Article 106662. https://doi.org/10.1016/j.chb.2020.106662
Foody, M., Kuldas, S., Sargioti, A., Mazzone, A., & O’Higgins Norman, J. (2023). Sexting behaviour among adolescents: Do friendship quality and social competence matter? Computers in Human Behavior, 142, Article 107651. https://doi.org/10.1016/j.chb.2023.107651
Goodyear, V. A., Randhawa, A., Adab, P., Al-Janabi, H., Fenton, S., Jones, K., ... & Pallan, M. (2025). School phone policies and their association with mental wellbeing, phone use, and social media use (SMART Schools): A cross-sectional observational study. The Lancet Regional Health–Europe, 51, Article 101246. https://doi.org/10.1016/j.lanepe.2025.101246
Livingstone, S. (2026). The UK shouldn’t rush to a social media ban for children under 16. LSE Politics and Policy Blog. Retrieved from: https://blogs.lse.ac.uk/politicsandpolicy/the-ukshouldnt-rush-to-a-social-media-ban-for-children-under-16/?123
McCashin, D. (2024). Understanding the Andrew Tate phenomenon among boys–a state of the literature review and recommendations for future directions. Retrieved from: https://doras.dcu.ie/31699/1/Understanding-Andrew-Tate_ABC-Report_2024%…;
McGlynn, C., & Toparlak, R. T. (2025). The “new voyeurism”: Criminalizing the creation of “deepfake porn”. Journal of Law and Society, 52(2), 204–228. https://doi.org/10.1111/jols.12456
Meshi, D., & Binder, J. (2025). Smartphones are not addictive: A proposal to distinguish between rewards and reward delivery vehicles. Addictive Behaviors, Article 108585.
O’Higgins Norman, J., & Gorman, A. (2025). Cyberbullying in context: Embedding equity, evidence and education in European policy: Submission to the European Commission call for evidence on cyberbullying. DCU Anti-Bullying Centre, Dublin City University.
O’Higgins Norman, J., Pigeon, D., & Sciacca, B. (in review). The role of school climate in shaping gendered bullying behaviours among post-primary students in Ireland. International Journal of Bullying Prevention.
O’Higgins Norman, J., Viejo Otero, P., Canning, C., Kinehan, A., Heaney, D., & Sargioti, A. (2024). FUSE anti-bullying and online safety programme: Measuring self-efficacy amongst post-primary students. Irish Educational Studies, 43(4), 865–882. https://doi.org/10.1080/03323315.2023.2174573
Online Health Taskforce. (2025). Online health and rights for Ireland’s children and young people: Final report (Final Report, September 2025). Department of Health, Government of Ireland. https://assets.gov.ie/static/documents/b192b694/Online_Health_Taskforce… _2025.pdf
Orben, A., & Przybylski, A. K. (2019). Screens, teens, and psychological well-being: Evidence from three time-use-diary studies. Psychological Science, 30(5), 682–696. https://doi.org/10.1177/0956797619830329
O’Rourke, F., Baker, C., & McCashin, D. (2024). Addressing the impact of masculinity influencers on teenage boys: A guide for schools, teachers and parents/guardians. DCU AntiBullying Centre, Dublin City University. https://doi.org/10.5281/zenodo.14102915
Paradiso, M. N., Rollè, L., & Trombetta, T. (2024). Image-based sexual abuse associated factors: A systematic review. Journal of Family Violence, 39(5), 931–954.
Radtke, T., Apel, T., Schenkel, K., Keller, J., & von Lindern, E. (2022). Digital detox: An effective solution in the smartphone era? A systematic literature review. Mobile Media & Communication, 10, 190–215.
Renström, E. A., & Bäck, H. (2024). Manfluencers and young men’s misogynistic attitudes: The role of perceived threats to men’s status. Sex Roles, 90(12), 1787–1806.
Reynolds, M., Esfandiari, M., & O’Higgins Norman, J. (2025). Restriction or resilience? Smartphone bans in schools: A qualitative study of the experiences of students. DCU Anti-Bullying Centre, Dublin, Ireland.
Sanmartín Feijóo, S., Sargioti, A., Sciacca, B. & McGarrigle, J. (2023). Bystander Behaviour Online Among Young People in Ireland. DCU Anti-Bullying Centre. Dublin, Ireland; commissioned by Webwise (Safer Internet Day 2023).
Setty, E., Hunt, J., & Ringrose, J. (2025). From behaviours to interactions: Reframing conceptualisations of the nature and causes of sexual harm among young people. Journal of Sexual Aggression, 1–21.
Walsh, C., & Perrotta, C. (2019). The association between self-reported online screen time and self-reported sleep outcomes in adolescents living in Ireland. Paper presented at the 11th Annual Growing Up in Ireland Research Conference, Dublin, Ireland. Growing Up in Ireland (GUI), Department of Children and Youth Affairs.
Wescott, S., Roberts, S., & Zhao, X. (2024). The problem of anti-feminist ‘manfluencer’ Andrew Tate in Australian schools: Women teachers’ experiences of resurgent male supremacy. Gender and Education, 36(2), 167–182.