EdgeAIGuard: New AI Framework Protects Minors from Online Exploitation in Real Time

EdgeAIGuard is a novel multi-agent framework proposed to safeguard minors from online grooming and other forms of exploitation in digital spaces, especially social media. Dr Sunder Ali Khowaja worked on the research behind EdgeAIGuard alongside other international researchers.

Social media is now a central part of daily life for minors: around 84% of teenagers aged 13 to 17 use it, for an average of 4.8 hours per day. This rise in screen time, however, has brought increased risks: cyberbullying now affects about 59% of minors, and online grooming cases have risen by 82% over the past five years.

Traditional content moderation methods have largely failed to counter the evolving tactics of exploiters, underscoring the urgent need for more advanced AI-based solutions.

The proposed EdgeAIGuard tackles these issues using a multi-agent architecture based on Agentic AI and Large Language Models (LLMs), deployed at the network edge. This edge-first design enables fast, low-latency detection while enhancing privacy by keeping data on-device, ensuring compliance with regulations like GDPR and COPPA. 

The system comprises three specialised agents: a Sentinel agent for initial threat detection, a Context agent for analysing historical and contextual information, and an Intervention agent for generating appropriate protective actions. LLMs are particularly suited to this task due to their ability to interpret complex language patterns and adapt to emerging threats.
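The article describes the architecture only at a high level. As a rough illustration, the minimal Python sketch below shows how a Sentinel-plus-Context pipeline of this kind could be wired together; all class names, method names, and the keyword heuristic are assumptions made for this sketch, not the authors' implementation.

```python
# Illustrative sketch only: class names, methods, and heuristics below are
# assumptions, not code from the EdgeAIGuard paper.
from dataclasses import dataclass


@dataclass
class Assessment:
    threat_score: float      # assumed scale: 0.0 (benign) to 1.0 (severe)
    rationale: str = ""


class SentinelAgent:
    """Initial threat detection on each incoming message."""

    def assess(self, message: str) -> Assessment:
        # The real system would run an on-device LLM here; a keyword stub stands in.
        risky_terms = ("keep this secret", "meet alone", "send a photo")
        score = 0.9 if any(t in message.lower() for t in risky_terms) else 0.1
        return Assessment(score, rationale="keyword stub")


class ContextAgent:
    """Refines the score using historical and contextual information."""

    def analyse(self, assessment: Assessment, history: list[str]) -> Assessment:
        # Example heuristic: escalate slightly if earlier messages already looked risky.
        prior_hits = sum("secret" in m.lower() for m in history)
        if prior_hits and assessment.threat_score > 0.5:
            assessment.threat_score = min(1.0, assessment.threat_score + 0.05)
        return assessment


def score_message(message: str, history: list[str]) -> Assessment:
    # Both agents run locally at the network edge, so raw chat text stays on-device.
    return ContextAgent().analyse(SentinelAgent().assess(message), history)
```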

EdgeAIGuard’s intervention agent delivers proportionate responses based on threat level: low (e.g. safety prompts), moderate (e.g. pausing chats and notifying guardians or moderators), and high (e.g. terminating chats and alerting authorities).
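Continuing the sketch above, and again with invented thresholds and action names rather than values from the paper, this tiered response could look roughly like the following.

```python
# Illustrative only: thresholds and action names are assumptions, not taken
# from the EdgeAIGuard paper.
def intervene(threat_score: float) -> str:
    """Map a 0-1 threat score to a proportionate protective action."""
    if threat_score >= 0.8:       # high threat
        return "terminate_chat_and_alert_authorities"
    if threat_score >= 0.5:       # moderate threat
        return "pause_chat_and_notify_guardian_or_moderator"
    if threat_score >= 0.3:       # low threat
        return "show_safety_prompt"
    return "no_action"


print(intervene(0.2))   # -> no_action
print(intervene(0.6))   # -> pause_chat_and_notify_guardian_or_moderator
print(intervene(0.95))  # -> terminate_chat_and_alert_authorities
```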

Experimental results show that EdgeAIGuard outperforms existing moderation tools, achieving 93.4% threat-detection accuracy (3.0% higher than the next best model) and superior real-time performance, with just 98 milliseconds of latency (over 30% faster than alternatives). It also uses fewer resources (834 MB memory, 9.8 W power) and has a smaller model size (287 MB) than other LLM-based systems.

While current limitations include the lack of user-centred evaluations such as adoption rates and user perceptions, future work will involve deploying the system on edge devices to gather real-world feedback on trust, safety, and usability.

See the full research article here.