Exploring How Young People Navigate the Evolving Online World in the Era of Artificial Intelligence and Misinformation
Maryam Esfandiari, Sinan Aşçı, Sandra Sanmartín Feijóo, Megan Reynolds, Carol O’Toole, Jane McGarrigle, Darran Heaney & James O’Higgins Norman
March 2025
Cover illustration: Teresa Di Manno (TeresaDiMannoDesign@gmail.com)
This study was supported by Vodafone Foundation Ireland and Enterprise Ireland Disruptive Technologies Fund.
How to cite this report: Esfandiari, M., Aşçı, S., Feijóo, S., Reynolds, M., O’Toole, C., McGarrigle, J., Heaney, D.,
& O’Higgins Norman, J. (2025). Exploring How Young People Navigate the Evolving Online World in the Era of Artificial Intelligence and Misinformation. DCU Anti-Bullying Centre ISBN: 978-1-911669-84-5.
In-Text Citation:
(Esfandiari et al., 2025)
DCU Anti-Bullying Centre
DCU Anti-Bullying Centre (ABC) is a research centre located in the DCU Institute of Education. In line with DCU's strategy, the core mission of the Centre is to be a future-focused and globally connected European centre of excellence for research and education on bullying and digital safety. The Centre hosts the UNESCO Chair on Bullying and Cyberbullying and the International Journal of Bullying Prevention. Between 2018 and 2024, the Centre produced over 100 academic publications and 24 scientific reports, achieving a current combined Field-Weighted Citation Index of 2.4. Members of the Centre are drawn from all five faculties of DCU and from seven other universities, and the Centre takes pride in its ethical research practices and the positive social impact of its work in tackling bullying and promoting digital safety.
About Webwise
Webwise is the Irish Internet Safety Awareness Centre, providing free information, advice and resources on online safety and digital citizenship for schools, families and young people. Funded by the Department of Education and co-funded through the European Commission, Webwise develops and disseminates free resources that help teachers integrate digital citizenship and online safety into teaching and learning in their schools. Webwise also provides information, advice, and tools to parents to support their engagement in their children's online lives. With the help of the Webwise Youth Advisory Panel, Webwise develops youth-oriented awareness-raising resources and training programmes that promote digital citizenship and address topics such as online wellbeing, cyberbullying and more.
Contents
Key Findings
Introduction
Method
Study Design and Ethics
Survey Design and Measures
Recruitment and sample
Findings
Smart Device Ownership
Daily Smart Device Usage
Social media and app usage
Online Safety Self-efficacy
Familiarity with social media features
AI-based tools and features
Open-ended questions on misinformation and AI
Top Influencers
Influencer Engagement
Presentation of Online Self
Identifying fake and real headlines
Conclusions
Limitations
Recommendations and Future Directions
Appendix
Adolescent Digital Habits and Preferences
- The majority of adolescents (76.1%) were aged between 11 and 15 years. 60.2% received their first smartphone between the ages of 10 and 13, and 43.1% of participants reported using their smart mobile devices frequently throughout the day.
- YouTube stands out as the most popular social media platform, with 61.5% of participants identifying it as their favourite online social network site. The findings also reveal that filters (photo or video) were the most widely used tools, with 61.5% of adolescents using them.
- Many respondents feel that being online enhances their self-expression, although a similar proportion are neutral on this point.
Online Self-efficacy and Misinformation Perception
- Adolescents report a reasonable level of perceived online self-efficacy, particularly in maintaining privacy. Over half of respondents are confident in keeping passwords safe and knowing whom not to share them with.
- In terms of AI and deepfake-related efficacy items, gaps exist in advanced digital literacy, particularly in identifying advanced forms of misinformation.
Influencer Engagement
- Adolescents use influencer content but seem indifferent to its influence on their self-concept and emotions. Their engagement with this content is moderate, but creating their own content about the influencers they follow is less common.
- The findings indicate that consuming influencer content on social media fulfils children's and young people's need for entertainment. Participants showed limited interest in the influence of social media personalities on their self-identity, and their active engagement with this content was minimal; children and young people primarily consume influencer content for entertainment rather than emotional connection.
The proliferation of advanced digital media technologies in recent years, such as artificial intelligence (AI), video-sharing platforms, and virtual and augmented reality, has transformed how children and young people connect, learn, and express themselves online. These advancements have also significantly reshaped the landscape of information accessibility and consumption for children and young people (Klopfenstein Frei et al., 2024). One crucial aspect of adolescents' online experience is immediate access to online information. They access information across a wide range of areas, including health and wellness (Abrha et al., 2024), entertainment and pop culture (Ohiagu & Okorie, 2014), and educational resources such as e-books and online tutorials (Oddone & Merga, 2024). Instead of traditional media formats such as newspapers or TV, adolescents now interact with fragmented, algorithm-based content that reflects their interests, a shift that brings both opportunities and challenges. Among these, the rapid spread of false information is one of the key challenges children and young people face while navigating the online world (Reid Chassiakos et al., 2016). Interactive and hypertextual features, such as the repost function and hyperlinks, combined with compulsive internet use (Maftei et al., 2022), make spreading false information faster and easier.
False and incorrect information, commonly referred to by terms such as misinformation, disinformation, or fake news, poses several challenges and threats, such as influencing individuals' decision-making (El Mikati et al., 2023), fostering biases and false beliefs about specific topics or sections of society, and contributing to cyberbullying (Klopfenstein Frei et al., 2024). In today's digitalised society, this phenomenon is gaining new forms, structures, and unprecedented speed. In 2018, "misinformation" was chosen as the word of the year, showing its influence on contemporary society (Guess & Lyons, 2020). The Merriam-Webster Dictionary defines misinformation as "incorrect or misleading information" (Merriam-Webster, 2024). Beyond this dictionary definition, scholars propose more nuanced, context-dependent explanations of the term (Søe, 2018). For instance, in health communication, the spread of misinformation was amplified during the COVID-19 pandemic. In their 2020 study, Basch, Basch, and Hillyer identify various types of misinformation that circulated during that year, including false claims, conspiracy theories, and misleading information about preventive measures. The authors emphasise that major social media platforms, such as Facebook and YouTube, played a significant role in disseminating this misinformation.
Given rapid technological advancements such as artificial intelligence, it is crucial to gain updated insights from children and young people about how they engage with these advanced and complex innovations. Such insights can inform educators and policymakers as they develop regulations and educational resources grounded in the most recent perspectives of children and young people. Considering that a substantial number of existing studies have examined misinformation in adult populations, this study aims to fill this gap by providing a more nuanced understanding of how adolescents navigate their online world, focusing in particular on recent technologies such as AI and on the phenomenon of misinformation. We examined adolescents' broader understanding and experiences of their online world and their perceptions of online misinformation. This research seeks to extend the literature on children and young people's recent online experiences and perceptions of online misinformation. The following sections provide a detailed overview of the study's methodology, findings, and discussion.
This study is part of a broader investigation on adolescents and misinformation, as well as their online behaviour in Ireland, using an exploratory sequential mixed methods design. Ethical approval was obtained from Dublin City University before conducting this research. In the first phase, a student focus group was conducted with adolescents in Ireland to understand their perception of misinformation and how they navigate the online world (Feijóo et al., 2024). The focus group guided the design of the questionnaire in the second phase of the study, on the results of which the present report will focus.
The online questionnaire was designed to assess students' understanding of misinformation, artificial intelligence (AI), and their broader online behaviour. The first questions referred to participants' demographics and their overall smart device usage and social media consumption. The participants were then presented with a series of scales assessing their online self-efficacy, susceptibility to misinformation, identification of fake and real headlines, presentation of their online self, and influencer engagement.
The online survey included items specifically developed for this study based on prior research on the topic in Ireland (Feijóo et al., 2023; NACOS, 2021), asking about the age at which participants received their first smart device, their daily use of smart devices, and their use of social media, apps, and different social media features. Furthermore, previously existing scales were included to assess an array of variables; each is described below alongside the construct it measures.
Online Safety Self-Efficacy. Participants' self-efficacy regarding online behaviours was measured using an adapted version of the Students’ Self-Efficacy in Online Safety Scale by O’Higgins Norman and colleagues (2023). The scale uses a 5-point response range from "not at all" to "very." The adapted items for the present study focus on participants' confidence in areas such as managing online risks, safeguarding personal data, and handling inappropriate content. The report will describe the items specifically about AI and misinformation in the Findings, but the full adapted scale is available in the Appendix.
Presentation of Online Self. The Presentation of Online Self Scale (POSS) developed by Fullwood et al. (2016) was included. Each item is rated on a 5-point Likert scale ranging from "strongly disagree" to "strongly agree." This measure evaluates participants' perceptions of their online selves, with exemplary items including "I find it easier to communicate in face-to-face contexts" and "I can show my best qualities online".
Influencer Engagement on Social Media. Participants' engagement with social media influencers was assessed using the Influencer Engagement on Social Media (IESM) Scale developed by Levesque and Pons (2023). This scale evaluates behaviours such as following influencers and engaging with their content.
Identifying fake and real headlines. To assess misinformation susceptibility, a short version of the Misinformation Susceptibility Test (MIST) was adapted for the Irish context, based on Maertens et al. (2023). This test evaluated participants' ability to identify and resist misinformation presented in the form of news headlines: participants had to indicate whether each headline was fake or real. Half of the presented headlines were real, while the other half were fake.
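For illustration only, the sketch below shows one plausible way responses to such a task could be scored: each judgement is compared against the headline's known veracity, and accuracy is the share of correct judgements. The two headlines are taken from Table 7, but the data structures and the participant's responses are invented for the example and do not represent the study's actual scoring procedure.

```python
# Hypothetical sketch of scoring a MIST-style fake/real headline task.
# True = real headline, False = fake headline (labels follow Table 7).
answer_key = {
    "Instagram Launches 'Restrict' Feature to Help Stop Bullying": True,
    "Wi-Fi Found to Cause Brain Damage in Irish Teens, Experts Warn": False,
}

# One invented participant's judgements (True = they rated the headline "real").
responses = {
    "Instagram Launches 'Restrict' Feature to Help Stop Bullying": True,
    "Wi-Fi Found to Cause Brain Damage in Irish Teens, Experts Warn": True,
}

# A judgement is correct when it matches the headline's known veracity.
correct = sum(responses[h] == is_real for h, is_real in answer_key.items())
print(f"Correct: {correct}/{len(answer_key)} ({correct / len(answer_key):.0%})")
```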
The participants were recruited through convenience sampling via Webwise's school network. The sample was drawn from post-primary students aged 11 to 19 years. School principals and designated teachers distributed an online survey invitation to parents/guardians to obtain parental consent and participant assent. Parental consent and child assent were obtained before participation in the survey, and 397 parents consented to their children participating in the online study. Those who consented shared the survey link, including the assent form, with their child and invited them to complete the survey. All adolescents were informed of their option to decline participation in the study and that their responses would be kept confidential and anonymous. Those who did not assent were not allowed to proceed to the survey, as an answer to this question was necessary for inclusion in the final sample. Despite the recruitment efforts, a smaller sample than intended started the survey after receiving consent from their parents or guardians (214 adolescents). After data cleaning, 109 responses could be included for further analysis. Regarding demographics in the final sample (n = 109), 47.7% identified as boys, 48.6% as girls, 2.8% as other, and 0.9% preferred not to say (see Figure 1).
Figure 1 Sample Composition
Age of Participants: The demographic data showed that most participants were aged between 11 and 15, making up 76.1% of the total sample (see Figure 2).
Figure 2 Age of Participants
Regarding device ownership, participants answered an open-ended question about when they received their first smart digital device. Most participants received their first devices between 9 and 13 years old, with 10 and 12 being the most frequent ages. A few participants reported getting their first phone between 15 and 16 years old, which may reflect individual parental choices or cultural practices.
Table 1 Age of Getting a First Mobile Device
| Age Group | (%) |
| ≤ 9 | 30.12 |
| 10-11 | 30.11 |
| 12-13 | 30.11 |
| ≥ 14 | 7.53 |
Figure 3 illustrates the daily usage of smart devices among participants. The findings show that 43.1% of respondents use these devices frequently throughout the day.
In comparison, 21.1% of students engage with their devices a few times each day,
indicating a more moderate level of daily usage. Additionally, 20.2% of participants reported using their devices only once per day, and 10.1% use them less often, highlighting limited digital interaction for this segment.
Figure 3 Daily Smart Device Usage
Participants indicated a diverse range of online social networking sites and apps they use (see Figure 4). YouTube emerged as the most popular platform, with approximately 61.5% of participants indicating they regularly use it. Following YouTube, WhatsApp was favoured by 49.5% of participants, while TikTok was chosen by 43.1%. Snapchat came next at 40.4%, and Instagram was mentioned by 33.9% of participants. Additionally, participants highlighted other apps they use, including Bedtime History, Patreon, Discord, Fortnite, Google, and Netflix.
Figure 4 Online Social Networking Sites
The data on online self-efficacy reveal a mixed level of awareness and confidence among respondents regarding online safety and digital skills (see appendices for complete statistics). Notably, a significant portion of students demonstrated high self-efficacy in key areas: 40% of respondents expressed high confidence in determining which videos to avoid posting online, while only 4.6% reported feeling no confidence in this regard (see Appendix 1). Additionally, 43.1% of students reported the highest level of confidence in knowing "how to respect others online," indicating a solid grasp of online etiquette.
In terms of confidence regarding misinformation content, participants demonstrated varying levels. For instance, most adolescents feel moderately confident in distinguishing jokes or parodies from real stories, with 28.4% rating their confidence as 4 and 25.7% as 3 on the scale. However, there are notable challenges in areas like detecting bot-generated content, where one in five respondents (20.3%) feels very unsure. Similarly, identifying fake online profiles poses difficulties, with 18.4% expressing low confidence. When it comes to recognising deepfake or AI-generated content, while some rated their confidence as 3 (27.5%) or 4 (22%), a smaller group (10.1%) rated their confidence as 0 or 1.
Table 2 Self-Efficacy in Identifying Misinformation and AI-Generated Content

| Question | Not at all (%) (0) | (1) | (2) | (3) | (4) | (5) |
| ...when I am seeing content created by a bot | 6.4 | 2.8 | 11.9 | 33 | 11 | 12.8 |
| ...when an online profile is fake | 9.2 | 9.2 | 11.9 | 21.1 | 14.7 | 11.9 |
| ...how to distinguish between real news and fake news | 5.5 | 9.2 | 7.3 | 21.1 | 23.9 | 11 |
| ...when a real story is manipulated to trick me/clickbait me | 4.6 | 6.4 | 9.2 | 21.1 | 20.2 | 16.5 |
| ...when real content is manipulated/photoshopped | 6.4 | 3.7 | 11 | 25.7 | 13.8 | 17.4 |
| ...when a story is made up | 6.4 | 2.8 | 10.1 | 26.6 | 17.4 | 14.7 |
| ...when I am seeing a deepfake/AI generated content | 5.5 | 4.6 | 7.3 | 27.5 | 22 | 11 |
| ...how to distinguish a joke or parody from a real story | 1.8 | 2.8 | 6.4 | 25.7 | 28.4 | 12.8 |
The findings reveal that participants are generally familiar with social media features. For instance, 30.3% are aware of the blocking button but have not used it, while 50.5% have used it themselves. Regarding privacy settings, 66.1% reported that they manage these settings themselves.
Table 3 Familiarity with Social Media Features
| Feature | I don't know what this is | No, but I know what this is | Yes, I have used it for myself | Yes, I have used it to help someone else |
| Blocking Button | 9.2 | 30.3 | 50.5 | 1.8 |
| Report button | 6.4 | 42.2 | 36.7 | 6.4 |
| Help centre or link to a helpline | 14.7 | 55.0 | 16.5 | 4.6 |
| Privacy settings | 5.5 | 18.3 | 66.1 | 2.8 |
| Artificial Intelligence (AI) | 9.2 | 36.7 | 41.3 | 4.6 |
Table 4 presents awareness of AI-based digital tools; participants were asked, "When using digital media, which of the following features have you come across?". The findings show that filters (photo or video) are the AI-based tool they are most aware of, with 61.5% of the adolescents reporting having encountered them. Smart and virtual assistants, such as Alexa and Cortana, seem to be quite present as well, with an awareness rate of 59.6%, while automatic spell check is reported by 54.1% of respondents. Some adolescents have engaged with image generators and deepfakes, with 35.8% indicating they have encountered them. Additionally, 27.5% of participants indicated coming across algorithms, and 38.5% report encountering personalised feeds on social media.
Table 4 AI-Based Tools and Features
| Feature | (%) |
| Filters (photo or video) | 61.5 |
| Spam filters | 26.6 |
| Facial Recognition | 48.6 |
| Automatic Spell Check | 54.1 |
| Writing aids/Word suggester | 44 |
| Image generators/Deepfakes | 35.8 |
| Speech recognition (microphone, voice notes) | 51.4 |
| Chatbots | 40.4 |
| Smart/Virtual assistants (Alexa, Cortana, Siri) | 59.6 |
| Recommendation systems | 48.6 |
| Real-time captioning | 24.8 |
| Navigation/Mapping | 47.7 |
| Algorithms | 27.5 |
| Personalised feeds in social media | 38.5 |
In your own words, please briefly describe your understanding of the term “misinformation”.
Based on adolescents' responses, misinformation primarily refers to incorrect, false, or untrue information. Common examples include "fake news" and "wrong information," demonstrating awareness of inaccuracies that can mislead individuals. Students also recognise the intentional nature of misinformation, acknowledging that it can be used to manipulate or deceive others. Phrases such as "used to persuade you differently" show that they perceive such information as designed to mislead. Furthermore, references to misrepresentation indicate that misinformation often involves twisted facts or inaccuracies, where information may be misinterpreted or deliberately distorted.
There is an acknowledgement of the harmful effects of misinformation, with some students pointing out that it can spread purposefully, often for personal gain or to attract attention, negatively impacting political and social discussions. However, not all misinformation is intentional; some responses suggest it can arise from accidental errors or miscommunication, where information is incorrectly conveyed or misunderstood. Lastly, the term "fake" is frequently used to describe misinformation, reinforcing the notion that it is fabricated or unreal. While students provide some simplistic definitions of misinformation, indicating a lack of depth in understanding its complexities, there is a clear trend toward recognising its connection to digital platforms and social media, along with a mixed perception of the intent behind the spread of misinformation.
Can you list as many examples of misinformation you may encounter when using social media as you can think of?
Analysing adolescents' responses regarding examples of misinformation on social media reveals diverse perceptions and patterns, reflecting a complex interaction with online platforms. One of the most frequently mentioned types of misinformation relates to news and current events. Students consistently highlighted issues such as fake news, misleading headlines, and biased reporting. Many noted the sensationalism often present in reporting (e.g., a story about “selling dog meat with a sheep's head”) and the deliberate dissemination of false political information. These responses indicate an awareness of how media can shape public perception.
Manipulation of media content through technology also emerged as a prominent theme, particularly the role of AI and editing tools. Students frequently mentioned deepfakes, AI-generated images, and photoshopped pictures as sources of misinformation, reflecting a growing concern about the credibility of visual content online. This highlights the challenges posed by advancements in media creation technologies.
Rumours and gossip also emerged as prominent categories, especially concerning celebrities, public figures, and social contexts like peer groups. These examples underscore the social nature of misinformation, where individuals often share unverifiable or sensationalised stories for engagement or entertainment. Similarly, scams and fraudulent activities, including fake profiles, phishing attempts, and misleading advertisements, were another common concern. Many students identified these as direct threats, particularly forms that target users' financial security or personal data.
While health and medical misinformation were mentioned less frequently, they remain an area of concern for some participants. Examples include diet myths, misleading health advice, and unverified medical claims. Identity-related misinformation was another frequent issue, with examples such as fake accounts and impersonation. Two respondents mentioned demographic inaccuracies (e.g., "age, weight, size" and "gender and age") as forms of misinformation, viewing them as misleading representations of personal information. In addition, concerns were expressed about the authenticity of online personas and the potential for these false identities to mislead or exploit others (e.g., fake news impersonation).
Educational misinformation emerged as a notable theme, with students pointing out poorly researched or misleading content presented as fact. Examples include viral “life hacks” and questionable educational videos, such as exaggerated claims about products (e.g., “mascara that will change your life”) or information without any actual content.
Additionally, some mentioned “news” or “news headline” as examples of misinformation, indicating that there is a lack of trust in news for some.
In your own words, describe your understanding of the term “Artificial Intelligence (AI)”.
The analysis of student responses about their understanding of Artificial Intelligence (AI) reveals a range of interpretations, from basic concepts to more advanced insights, reflecting different levels of familiarity with the subject. Many students define AI as technology designed to simulate human intelligence, often referencing robots, computers, or bots that can perform tasks typically associated with humans. These tasks are usually described as generating responses, solving problems, or mimicking human behaviour. This fundamental understanding highlights the widespread perception of AI as an entity with human-like capabilities.
Some students emphasise AI as a tool for convenience, describing it as technology that can "solve problems," "help achieve goals," or "make things easier." Specific examples include AI applications like autocorrect, image or text generation, and creating videos or accounts online. These responses demonstrate an awareness of AI as a practical, problem-solving tool integrated into everyday life.
Others associate AI with advanced computational capabilities, emphasising its ability to gather information, process data, and make decisions based on large datasets. Several responses focus on the machine learning aspect of AI, highlighting its capacity to learn from inputs, adapt, and apply knowledge in different contexts. For example, some students describe AI as a "computer that thinks like a human with all the information in the world" or "very intelligent, capable of solving many problems automatically."
Interestingly, a few responses reflect misconceptions or incomplete understandings. Some students perceive AI as "not real," "fake," or "not smart," suggesting scepticism or limited engagement with the technology. Others express concerns about potential misuse, such as AI being used to manipulate appearances or create inappropriate content. These responses highlight ethical implications and the potential for harm associated with AI (e.g., "It can generate images and text that is fake. Homework helper. I don't use it, but others in my class do. People are lazy and don't want to bother."). Additionally, there is a tendency to humanise AI, frequently mentioning robots and "thinking computers." This reflects a common cultural framing of AI as resembling human intelligence or behaviour (e.g., "A computer who thinks like me"), even though many students acknowledge its non-human nature. Fewer students demonstrated a deeper understanding of AI's technical aspects, explicitly referencing databases, programming, and machine learning.
List as many examples as you can of Artificial Intelligence (AI) you may encounter when using social media.
Participants demonstrate varying levels of understanding regarding the use of Artificial Intelligence (AI) on social media. Many students identified AI-driven technologies embedded in popular platforms, such as Snapchat’s My AI, Siri, Alexa, and ChatGPT. These tools are appreciated for their ability to facilitate personalised interactions, answer questions, and provide recommendations, highlighting their role as virtual assistants in daily life.
Creative applications of AI were also notable, with students mentioning tools that generate text (e.g., ChatGPT), create images (e.g., Craiyon, Midjourney), and edit videos or photos. These applications demonstrate AI's ability to enhance user-generated content and provide entertainment, including deepfakes and interactive role-playing scenarios involving fictional characters.
A number of participants also recognised AI's role in curating social media feeds, targeting advertisements, and moderating content. This indicates that respondents are aware that these algorithms influence user engagement by presenting tailored content, although their underlying mechanisms often go unnoticed.
From the total sample, 23% responded regarding influencers they follow, mentioning a wide range of influencers and celebrities. The findings indicate that the entertainment industry plays a significant role in shaping participants' interests, including music (e.g., Taylor Swift, Beyoncé), gaming (e.g., Minecraft YouTubers), and digital entertainment (e.g., MrBeast).
Interestingly, one participant answered "me" when asked about popular influencers, and another gave a Chinese name for a favourite influencer (科尔比 rose). Instead of naming a specific influencer, some participants mentioned the names of sports teams, indicating that influencers may hold a collective meaning for some adolescents rather than referring to an individual figure. Additionally, some respondents mentioned that they do not follow particular influencers; one wrote, "Funny videos, teenage make-up and fashion, TikTok dances. No one in particular." This suggests that for some teenagers, influencers are not a considerable part of their online activities.
Participants’ perceptions and interactions with influencers were examined across
dimensions of self-concept, emotional attachment, content consumption, and content creation (see Table 5).
In terms of engagement with influencer content, a significant percentage of respondents interact with it. For example, 32.1% of respondents "agree" and 6.4% "strongly agree" that they view influencers' photos. For reading influencers' posts, 35.8% "agree" and 5.5% "strongly agree." Engagement is highest with video content, where 45.9% "agree" and 13.8% "strongly agree" that they watch influencers' videos. However, only 14.7% of users agree, and 5.5% strongly agree, that they comment on influencer posts. Additionally, creating stories about influencers is quite rare, as only 7.3% of users agree, and 4.6% strongly agree, with this behaviour.
These findings highlight that while participants frequently consume influencer content, content creation related to influencers is significantly lower. The survey also indicates that adolescents have limited reliance on influencers in their daily lives. Only 1.8% of respondents strongly agree that their days would not be the same without the influencers they follow. In contrast, a substantial portion of adolescents (23.9% strongly disagree and 21.1% disagree) reject the notion that influencers play a crucial role in their routines. This suggests that the majority of adolescents do not believe they depend heavily on influencers for their everyday experiences. However, some participants demonstrate moderate emotional attachment to social media influencers, as reflected in their concerns when influencers are inactive. Specifically, 15.6% of adolescents agree and 4.6% strongly agree that they feel worried if a favoured influencer has not posted for a while, suggesting a degree of dependency on their activity.
Table 5 Influencer Engagement
| Item | Strongly Disagree | Disagree | Neutral | Agree | Strongly Agree |
| By interacting publicly with influencers, I can make a good impression on others. | 8.3 | 14.7 | 29.4 | 17.4 | 2.8 |
| Part of me is defined by my interactions with the influencers I follow. | 17.4 | 16.5 | 23.9 | 11.9 | 1.8 |
| Interacting publicly with influencers allows me to convey who I am to others. | 16.5 | 16.5 | 23.9 | 11 | 4.6 |
| By interacting publicly with influencers, I can improve how others see me. | 13.8 | 16.5 | 24.8 | 15.6 | 1.8 |
| Interacting publicly with influencers allows me to portray the image of who I want to be to others. | 13.8 | 20.2 | 17.4 | 17.4 | 3.7 |
| I am excited when I interact with an influencer. | 8.3 | 8.3 | 24.8 | 23.9 | 7.3 |
| If an influencer I follow doesn’t post for some time, I get worried. | 15.6 | 19.3 | 17.4 | 15.6 | 4.6 |
| I miss the influencers I follow when they are not posting | 15.6 | 13.8 | 14.7 | 26.6 | 2.8 |
| My days wouldn’t be the same without the influencers I follow. | 23.9 | 21.1 | 19.3 | 6.4 | 1.8 |
| I often feel happy about the influencers I follow when I think of them. | 12.8 | 16.5 | 20.2 | 18.3 | 4.6 |
| I look at influencers' photos. | 5.5 | 13.8 | 14.7 | 32.1 | 6.4 |
| I read influencers' posts. | 5.5 | 12.8 | 13.8 | 35.8 | 5.5 |
| I watch influencers' videos. | 3.7 | 3.7 | 6.4 | 45.9 | 13.8 |
| I comment on influencer posts. | 21.1 | 17.4 | 14.7 | 14.7 | 5.5 |
| I comment on influencers' lives. | 24.8 | 21.1 | 16.5 | 8.3 | 2.8 |
| I create stories about influencers. | 28.4 | 21.1 | 10.1 | 7.3 | 4.6 |
| I create visual publications (photos or videos) about the influencers I follow. | 28.4 | 22 | 11.9 | 9.2 | 1.8 |
| I create text-based publications about some influencers. | 27.5 | 22 | 12.8 | 8.3 | 2.8 |
| I tag influencers in my publications (text, images, or stories). | 28.4 | 21.1 | 11 | 11 | 1.8 |
| I create posts about influencers and hope they will share them. | 25.7 | 23.9 | 11.9 | 8.3 | 2.8 |
| I create posts about influencers and hope they will like them. | 25.7 | 19.3 | 11.9 | 12.8 | 3.7 |
Adolescents' perceptions of their online self were measured with the Presentation of Online Self Scale (POSS; Fullwood et al., 2016). To assess participants' views, we asked the following question: "For the items listed below, please select the answer that best describes how you feel about yourself in the online world." Responses to all items of this scale are presented in Table 6.
Table 6 Presentation of Online Self
| Dimension | Items | Strongly Disagree (%) | Disagree (%) | Neutral (%) | Agree (%) | Strongly Agree (%) |
| Ideal Self | Being online allows me to express myself | 3.7 | 7.3 | 33.9 | 31.2 | 5.5 |
| Consistent Self | I cannot really be myself online | 9.2 | 27.5 | 30.3 | 11.9 | 2.8 |
| Consistent Self | I am always my true self online | 3.7 | 12.8 | 27.5 | 31.2 | 6.4 |
| Ideal Self | The way I am online is very different from my real life | 11.9 | 25.7 | 20.2 | 21.1 | 2.8 |
| Ideal Self | Communicating online allows me to say the things I cannot say offline | 11 | 29.4 | 15.6 | 20.2 | 5.5 |
| Consistent Self | I feel my personality online is the real me | 6.4 | 21.1 | 26.6 | 22 | 5.5 |
| Ideal Self | I like going online because it allows me to be different | 7.3 | 25.7 | 30.3 | 14.7 | 3.7 |
| Online Presentation Preference | I find it easier to communicate in face-to-face contexts | 1.8 | 11.9 | 19.3 | 31.2 | 17.4 |
| Online Presentation Preference | I find it difficult to be myself in the real world | 13.8 | 29.4 | 22.9 | 13.8 | 1.8 |
| Consistent Self | I feel I am the same person in the online world that I am in the real world | 3.7 | 7.3 | 19.3 | 39.4 | 11.9 |
| Online Presentation Preference | I prefer being online than offline | 11.9 | 22.9 | 28.4 | 15.6 | 1.8 |
| Multiple Selves | I regularly use different personas (roles/characters) online | 22.9 | 31.2 | 11.9 | 12.8 | 2.8 |
| Ideal Self | I can escape from myself online | 14.7 | 22.9 | 23.9 | 16.5 | 3.7 |
| Multiple Selves | I very often act out different personas in certain online spaces | 19.3 | 31.2 | 17.4 | 11 | 2.8 |
| Multiple Selves | Being online allows me to create a new identity | 18.3 | 25.7 | 17.4 | 16.5 | 3.7 |
| Ideal Self | I can show my best qualities online | 6.4 | 18.3 | 22 | 26.6 | 8.3 |
| Ideal Self | I can talk to people who wouldn't usually talk to me in the real world | 8.3 | 21.1 | 22.9 | 23.9 | 5.5 |
| Multiple Selves | I am a different person depending on which online space I’m in | 16.5 | 27.5 | 18.3 | 15.6 | 3.7 |
| Ideal Self | I feel more comfortable behaving how I want to online | 10.1 | 21.1 | 22 | 22.9 | 5.5 |
| Multiple Selves | I enjoy acting out different identities online | 19.3 | 30.3 | 14.7 | 12.8 | 4.6 |
| Ideal Self | I feel I can be my ideal self online | 8.3 | 19.3 | 25.7 | 21.1 | 6.4 |
Table 6 illustrates how adolescents perceive their online identities and actions. Notably, 31.2% of participants agree that the internet allows them to express themselves, while 33.9% feel neutral. This suggests that while many individuals use the internet for self-expression, some remain undecided or believe that it depends on the situation. One notable finding is that almost 40% of respondents reported feeling like the same person online as they are in the real world, indicating a preference for authenticity online. For the item "I regularly use different personas (roles/characters) online", 54.1% collectively disagreed or strongly disagreed, further demonstrating a willingness to remain authentic to themselves online. Interestingly, many adolescents feel more comfortable with face-to-face conversations: 31.2% agree, while 17.4% strongly agree, that in-person discussions are easier than online conversations, reflecting a comfort with traditional, direct communication.
Table 7 shows the percentage of correct answers among those who responded to the questions about spotting real or fake headlines. Participants showed varying awareness of real versus fake news across the eight headlines presented: four real and four fake. The data reveal a mixed ability among respondents to correctly identify real and fake news, with accuracy varying across different topics. Notably, the highest correct identification rate was observed for the fake headline about Wi-Fi causing brain damage in Irish teens, with 51.4% of respondents recognising it as fake. This result suggests that respondents were sceptical of sensational health and technology claims. Similarly, 50.5% of participants correctly identified the real news about Instagram launching a "Restrict" feature to combat bullying, indicating some familiarity with recent advancements in technology aimed at promoting social well-being.
On the other hand, the lowest correct identification rate (27.5%) occurred with the fake news about a gaming console being linked to increased violence and addiction. This may indicate that respondents are more likely to believe fake stories that align with existing societal concerns, such as the negative effects of video games. Similarly, only 30.3% correctly identified the story about young candidates in Irish elections, highlighting a potential lack of engagement or interest in political topics, which might also explain the relatively high rate of incorrect and missing responses in this category.
The difficulty in differentiating real and fake news is evident in categories such as "Irish influencers promoting harmful diet products," where only 37.6% of respondents correctly identified it as fake. This result might reflect the plausibility of such stories in today's influencer-driven culture, where the promotion of questionable products is not uncommon. Similarly, the real news about social media's link to depression among teenage girls had a correct identification rate of 50.5%, indicating some awareness of this widely discussed issue. Overall, the data highlight the variability in the ability to discern real from fake news, with respondents showing greater scepticism towards health and safety-related claims compared to technology and political stories.
Table 7 Identifying Fake and Real Headlines
| Category | Correct % |
| Real-Instagram Launches 'Restrict' Feature to Help Stop Bullying | 50.5 |
| Fake-Popular Social Media App Secretly Tracks Users' Locations for Irish Government Surveillance | 41.3 |
| Fake-Irish Influencers Paid to Promote Harmful Diet Products | 37.6 |
| Real-Young people urged to join global climate strike | 40.4 |
| Fake-New Gaming Console Linked to Increased Risk of Addiction and Violence | 27.5 |
| Real-Social Media-Linked Depression More Common in Teenage Girls | 50.5 |
| Fake-Wi-Fi Found to Cause Brain Damage in Irish Teens, Experts Warn | 51.4 |
| Real-General Election 2020: Youngest Candidates across Ireland Vying for Your Vote in Each Party | 30.3 |
This study explored the recent online experiences of children and young people in relation to AI-based tools and the phenomenon of misinformation. This section presents an interpretation and summary of the findings, along with the limitations and recommendations. It should be noted that, although we present several recommendations based on what we believe can be learned from the data below, the study faced a key limitation, described in the Limitations section. Hence, the findings and the conclusions that follow should be interpreted with caution due to the small sample and, therefore, low statistical power.
Digital tools and features: The study indicates that filters for photos and videos are the most commonly used tools, with 61.5% of adolescents reporting their use. The findings highlight the growing usage of digital tools among adolescents, revealing both opportunities and challenges. Additionally, the study shows an increasing interest in more sophisticated technologies, such as image generators and deepfakes, with 35.8% of adolescents stating they have encountered these tools. As adolescents engage with technologies like personalised feeds and deepfakes, regulatory efforts such as those of Coimisiún na Meán become more crucial. Such regulation focuses on ensuring that platforms function responsibly, protecting users from harmful content and fostering accountability.
Digital Safety Awareness: Participants generally show a good grasp of basic digital safety concepts. However, their confidence decreases when it comes to more complex technological issues. These areas necessitate comprehensive and up-to-date media literacy training. Many respondents reported moderate confidence in spotting fake content or manipulated news, highlighting the need for targeted educational initiatives to enhance their online skills, especially in identifying deepfakes and misleading narratives in the age of artificial intelligence.
Misinformation and AI Awareness: Open-ended responses allowed participants to express their understanding and awareness of misinformation and artificial intelligence. These responses showcased a wide range of comprehension, from basic definitions and misunderstandings to more nuanced perspectives on AI's abilities and limitations. This diversity highlights the need for education to cultivate a more profound and accurate grasp of AI, especially concerning its ethical implications, technical underpinnings, and societal effects. The findings indicate that focused efforts are necessary to close knowledge gaps and correct misconceptions about this innovative technology.
Fake and real headlines: Regarding the ability to distinguish between fake and real headlines, the findings show that respondents had varying levels of success in identifying real versus fake headlines, with accuracy levels differing significantly by topic. While participants were sceptical of sensational headlines, such as the false headline about "Wi-Fi causing brain damage" (51.4% correct), they found it more challenging to identify fake stories like "Gaming Console Linked to Violence" (27.5% correct). Real headlines, such as "Social Media-Linked Depression," were recognised more accurately (50.5%), suggesting a better awareness of commonly discussed issues. However, confusion persisted with ambiguous real stories like "Global Climate Strike" (40.4% correct), highlighting difficulties in interpreting less clear headlines. These findings underscore the
necessity of improving media literacy to enhance critical evaluation skills and decrease susceptibility to misinformation. This is a crucial skill for children and young people because, through the lens of media framing theory (Scheufele, 1999), the way information is presented or "framed" influences audience attitudes toward specific topics. In other words, misinformation, especially when it is sensationally framed, can shape how children and young people perceive issues and internalise misleading claims.
Influencers and User Preferences: This study suggests that influencer culture has a limited impact on how adolescents perceive themselves and their emotional ties, but it greatly influences the type of content they engage with. Although adolescents enjoy watching influencer videos and photos, they often take a passive approach, rarely commenting on or creating content about specific influencers. This trend reflects a broader influencer culture emphasising visual appeal and entertainment rather than active participation. The findings indicate that adolescents view influencers more as entertainment sources than as personal role models with whom they can connect on a deeper level: teenagers primarily interact with influencers' content for entertainment, with little emotional engagement or interaction. While influencers shape trends in adolescent culture, participants reported limited interest in influencers' role in their self-identity, and active participation was relatively minimal. This can be explained by the uses and gratifications theory of media consumption (Korhan & Ersoy, 2016), which suggests that participants tend to engage with media passively, viewing influencers as distant entertainers rather than actively interacting with them.
Adoption of Digital Tools: Participants indicated significantly higher adoption rates for practical tools, such as filters, virtual assistants, and spell-check features, which are popular due to their immediate utility and ease of use. In contrast, tools perceived as more innovative, such as image generators, show lower acceptance and less integration into daily routines. This suggests a clear preference for established technologies over those requiring more user adaptation and familiarity.
Online Self-efficacy: Adolescents demonstrate a relatively acceptable understanding of basic online safety skills, such as appropriately managing passwords and sharing information. However, there are notable gaps in their advanced digital literacy, particularly when it comes to identifying technology-based misinformation. These observations underscore the importance of implementing educational programs that emphasise advanced media literacy, AI literacy, and critical analysis skills. Such programs should be designed in participatory and experiential formats to foster deeper learning and practical application.
Media Literacy in Education: The findings underscore the importance of integrating updated media literacy into school curricula, aligning it with the latest media technology trends, particularly those related to artificial intelligence. By doing so, educational programs can better equip adolescents to navigate the rapidly evolving media landscape safely and effectively.
This study faces some limitations. Despite obtaining parental consent for a substantial number of eligible adolescents, the participation rate was lower than expected. Only around half of the eligible adolescents participated, and among those who did, many did not complete the assent form, resulting in their exclusion from further analysis. Since the number of participants was relatively small, the findings might not be fully representative or generalisable to the broader population of adolescents. Therefore, it is important to be cautious when drawing conclusions from these results.
Future studies could examine the role of adolescents' trust in news-seeking and fact-checking behaviour in relation to their perceptions of AI. Future research could also explore how adolescents' perceptions of AI and misinformation influence their trust in digital content, especially in relation to AI-generated media like deepfakes. Studies could also examine adolescents' emotional and behavioural responses to misinformation and how these impact fact-checking behaviours. Furthermore, longitudinal research could track changes in media literacy over time to assess the effectiveness of educational interventions. Additionally, investigating the role of cultural and socioeconomic factors in media literacy, as well as comparing different age groups, could offer valuable insights into targeted educational approaches. Lastly, research on AI ethics in education could help develop curricula that address both technical skills and the ethical implications of AI.
We would like to encourage children and young people to participate in future research that concerns them, so that data can be used to draw robust conclusions and to be able to better understand and support them in the future.
References
Abrha, S., Abamecha, F., Amdisa, D., Tewolde, D., & Regasa, Z. (2024). Electronic health literacy and its associated factors among university students using social network sites (SNSs) in a resource-limited setting, 2022: cross-sectional study. BMC Public Health, 24(1), 3444.
Basch, C. H., Basch, C. E., & Hillyer, G. C. (2020). The role of YouTube and the entertainment industry in saving lives by disseminating information about COVID-19. Global Health Promotion, 27(3), 10-12. https://doi.org/10.1177/1757975920937895.
Chen, Y., & Zahedi, F. M. (2016). Individuals’ internet security perceptions and behaviors. MIS Quarterly, 40(1), 205‒222. https://www.jstor.org/stable/26628390.
El Mikati, I. K., Hoteit, R., Harb, T., El Zein, O., Piggott, T., Melki, J., Mustafa, R. A., & Akl, E. A. (2023). Defining misinformation and related terms in health-related literature: scoping review. Journal of medical Internet research, 25, Article e45731. https://doi.org/10.2196/45731.
Feijóo, S., Sargioti, A., Sciacca, B. & McGarrigle, J. (2023). Bystander Behaviour Online Among Young People in Ireland. DCU Anti-Bullying Centre. ISBN: 978-1-911669-62-3.
Fullwood, C., James, B., & Chen-Wilson, J. (2016). Self-concept clarity and online self-presentation in adolescents. CyberPsychology, Behavior and Social Networking, 19(12), 716-720. https://doi.org/10.1089/cyber.2015.0623.
Guess, A. M., & Lyons, B. A. (2020). Misinformation, disinformation, and online propaganda. Social media and democracy: The state of the field, prospects for reform, 10. https://doi.org/10.1017/9781108890960.
Klopfenstein Frei, N., Wyss, V., Gnach, A., & Weber, W. (2024). “It’s a matter of age”: Four dimensions of youths’ news consumption. Journalism, 25(1), 100-121. https://doi.org/10.1177/14648849221123385
Korhan, O., & Ersoy, M. (2016). Usability and functionality factors of the social network site application users from the perspective of uses and gratification theory. Quality & quantity, 50, 1799-1816.
Levesque, N., & Pons, F. (2023). Influencer Engagement on Social Media: A Conceptual Model, the Development and Validation of a Measurement Scale. Journal of Theoretical and Applied Electronic Commerce Research, 18(4), 1741‒1763. https://doi.org/10.3390/jtaer18040088
Maertens, R., Götz, F. M., Golino, H. F., Roozenbeek, J., Schneider, C. R., Kyrychenko, Y., Kerr, J.R., Stieger, S., McClanahan, W. P., Drabot, K. & van der Linden, S. (2023). The Misinformation Susceptibility Test (MIST): A psychometrically validated measure of news veracity discernment. Behavior Research Methods, 1‒37. https://doi.org/10.3758/s13428-023-02124-2.
Maftei, A., Holman, A. C., & Merlici, I. A. (2022). Using fake news as means of cyberbullying: The link with compulsive internet use and online moral disengagement. Computers in Human Behavior, 127, 107032. https://doi.org/10.1016/j.chb.2021.107032.
Merriam-Webster. (2024). Misinformation. In Merriam-Webster.com dictionary. Retrieved 2024, from https://www.merriam-webster.com
National Advisory Council for Online Safety [NACOS] (2021). Report of a National Survey of Children, their Parents and Adults regarding Online Safety 2021. Department of Tourism, Culture, Arts, Gaeltacht, Sport and Media.
https://www.gov.ie/en/publication/1f19b-report-of-a-national-survey-of-children-theirparents-and-adults-regarding-online-safety/.
Oddone, K., & Merga, M. (2024). Evaluation strategies of school students accessing health information in social media videos: A case study investigation. Journal of Library Administration, 1‒21.
Ohiagu, O. P., & Okorie, V. O. (2014). Social media: Shaping and transmitting popular culture. Covenant Journal of Communication.
O’Higgins Norman, J., Viejo Otero, P., Canning, C., Kinehan, A., Heaney, D., & Sargioti, A. (2023). FUSE anti-bullying and online safety programme: measuring self-efficacy amongst post-primary students. Irish Educational Studies, 1‒18.
https://doi.org/10.1080/03323315.2023.2174573.
Reid Chassiakos, Y. L., Radesky, J., Christakis, D., Moreno, M. A., Cross, C., Hill, D., Ameenuddin, N., Hutchinson, J., Levine, A., Boyd, R., Mendelson, R., & Swanson, W. S. (2016). Children and adolescents and digital media. Pediatrics, 138(5). https://doi.org/10.1542/peds.2016-2593.
Scheufele, D. A. (1999). Framing as a theory of media effects. Journal of communication, 49(1), 103-122.
Søe, S. O. (2018). Algorithmic detection of misinformation and disinformation: Gricean perspectives. Journal of Documentation, 74(3), 409-421. https://doi.org/10.1108/JD-05-2017-0075.
Appendix
Self-efficacy scale items
| Question | Not at all (%) (0) | (1) | (2) | (3) | (4) | (5) |
| ...what videos should I not post online | 4.6 | .9 | 3.7 | 10.1 | 18.3 | 40.4 |
| ...when I am seeing content created by a bot | 6.4 | 2.8 | 11.9 | 33 | 11 | 12.8 |
| ...when an online profile is fake | 9.2 | 9.2 | 11.9 | 21.1 | 14.7 | 11.9 |
| ...how to distinguish between real news and fake news | 5.5 | 9.2 | 7.3 | 21.1 | 23.9 | 11 |
| ...what pictures I should not post online | 3.7 | .9 | 2.8 | 10.1 | 22.0 | 38.5 |
| ...when a real story is manipulated to trick me/clickbait me | 4.6 | 6.4 | 9.2 | 21.1 | 20.2 | 16.5 |
| ...how to respect others online | .9 | 4.6 | 24.8 | 24.8 | 43.1 | |
| ...how to keep my password safe | 1.8 | 2.8 | 3.7 | 7.3 | 25.7 | 36.7 |
| ...who not to share my password with | 1.8 | .9 | .9 | 10.1 | 12.8 | 51.4 |
| ...when real content is manipulated/photoshopped | 6.4 | 3.7 | 11 | 25.7 | 13.8 | 17.4 |
| ...who not to trust online | 3.7 | .9 | 1.8 | 12.8 | 22 | 36.7 |
| ...who to tell when something bothers me online | 2.8 | 1.8 | 4.6 | 11 | 14.7 | 43.1 |
| ...when a story is made up | 6.4 | 2.8 | 10.1 | 26.6 | 17.4 | 14.7 |
| ...what information about me I should not share | 2.8 | .9 | .9 | 8.3 | 12.3 | 52.3 |
| ...how to use my social media safely | 1.8 | 4.6 | 10.1 | 23.9 | 37.6 | |
| ...who I have to talk to when I feel uncomfortable online | 1.8 | 2.8 | 5.5 | 12.8 | 19.3 | 35.8 |
| ...when I am seeing a deepfake/AI generated content | 5.5 | 4.6 | 7.3 | 27.5 | 22 | 11 |
| ...it is dangerous to meet in person someone I met online | 2.8 | .9 | 5.5 | 10.1 | 19.1 | 48.6 |
| ...when someone pretends to be someone else online | 4.6 | 7.3 | 8.3 | 17.4 | 22.9 | 17.4 |
| ...when someone is being impersonated | 6.4 | 2.8 | 9.2 | 22.9 | 21.1 | 15.6 |
| ...when sharing an online post can negatively affect others | 3.7 | 3.7 | 4.6 | 14.7 | 22 | 29.4 |
| ...what upsets me online | 2.8 | 2.8 | 7.3 | 10.1 | 19.3 | 35.8 |
| ...who I am following online | 2.8 | .9 | 3.7 | 12.8 | 20.2 | 37.6 |
| ...I should not harm others online | .9 | 1.8 | 3.7 | 5.5 | 12.8 | 53.2 |
| ...who to ask for help to use my social media profile | 1.8 | 2.8 | 5.5 | 14.7 | 17.4 | 35.8 |
| ...when someone does not want their picture to be posted online | 1.8 | 3.7 | .9 | 10.1 | 16.5 | 45 |
| ...how to distinguish a joke or parody from a real story | 1.8 | 2.8 | 6.4 | 25.7 | 28.4 | 12.8 |