
Research Newsletter - Issue 111: Spotlight

Advancing Secure Communication and Fair, Privacy-Preserving AI

DCU researchers continue to push the boundaries of innovation in artificial intelligence. This work spans network security, the use of AI in IoT (Internet of Things) communication, and machine learning privacy, as well as ensuring that AI systems make fair decisions. This research spotlight features three exceptional research contributions that address critical challenges in these fields.

Fairness in AI Systems

As AI systems increasingly make high-stakes decisions—from predicting criminal recidivism to determining loan approvals—ensuring they treat everyone fairly has become crucial. However, researchers at DCU discovered a troubling pattern: while many AI fairness interventions appear successful when looking at overall statistics, they often mask serious harms to specific marginalized groups, particularly Black women.

 

Dr Michael Mayowa Farayola 

The DCU-led team of Dr Michael Mayowa Farayola (Business School), Dr Irina Tal (School of Computing) and Prof. Regina Connolly (Business School) evaluated 27 different AI model configurations across two major datasets, examining both criminal justice (COMPAS) and economic opportunity (Census Income) predictions.

They found that models that seemed fair overall showed significant disparities for people with multiple marginalized identities. For example, in income prediction, Black women faced drastically lower rates of positive outcomes than aggregate fairness scores suggested.
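The gap the team identified can be illustrated in a few lines of Python. The data below is synthetic and purely hypothetical: the positive-outcome rates look perfectly balanced when checked against race or sex alone, yet disaggregating by the intersection of the two reveals that one subgroup receives no positive outcomes at all.

```python
# Illustrative sketch: aggregate fairness checks can hide intersectional
# disparities. The records below are synthetic, not drawn from the study.
from collections import defaultdict

# (race, sex, positive_outcome) -- hypothetical predictions
records = [
    ("White", "M", 0), ("White", "M", 0),
    ("White", "F", 1), ("White", "F", 1),
    ("Black", "M", 1), ("Black", "M", 1),
    ("Black", "F", 0), ("Black", "F", 0),
]

def positive_rate(outcomes):
    """Fraction of positive outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def rates_by(key):
    """Positive rate per group, where key maps (race, sex) to a group label."""
    groups = defaultdict(list)
    for race, sex, y in records:
        groups[key(race, sex)].append(y)
    return {g: positive_rate(ys) for g, ys in groups.items()}

by_race = rates_by(lambda r, s: r)            # looks fair: 0.5 vs 0.5
by_sex = rates_by(lambda r, s: s)             # looks fair: 0.5 vs 0.5
by_intersection = rates_by(lambda r, s: (r, s))  # ("Black", "F") gets 0.0
```

Both single-attribute audits pass, yet the intersectional audit exposes a subgroup with a zero positive rate, which is the pattern that motivates evaluating fairness below the level of simple averages.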

Dr Irina Tal

Furthermore, they found that combining fairness techniques across data preparation, model training, and output adjustment improved overall fairness with minimal accuracy loss. However, these improvements were not distributed equally across all groups.

The same fairness approach performed differently depending on the domain. Income prediction required correcting underrepresentation, while criminal justice applications needed to guard against over-prediction bias.

Prof Regina Connolly

The study provides both empirical evidence and practical guidance for organizations deploying AI in high-stakes decisions, showing that true algorithmic fairness requires looking beyond simple averages to understand how systems affect every community.


Secure IoT Communication with Blockchain Integration

Billions of IoT devices are being deployed across healthcare, industrial automation, and smart ports. These environments require fine-grained access control that is both auditable and efficient, without exposing sensitive data to intermediaries.

Dr Iqra Mustafa, Dr Asma Salimi, and Dr Michael Scriney from the School of Computing have devised SCOPE (Secure IoT Communication and Policy Enforcement), a comprehensive framework that addresses the privacy vulnerabilities inherent in time-sensitive IoT deployments. Their work, published in the IEEE Internet of Things Journal, presents a novel approach to authorization and encryption for resource-constrained devices.

What makes SCOPE particularly innovative is its separation of on-ledger authorization from end-to-end content protection. The framework comprises four key components: a decentralized trusted authority, a broker smart contract for recording authorization decisions, stateless edge relays that verify requests without decryption, and a pairing-free authenticated encryption channel.
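A minimal sketch of that core idea, checking authorization on a ledger while keeping content protection end to end, might look like the following Python. This is not the SCOPE implementation: the dictionary stands in for the broker smart contract's ledger, and the toy XOR stream cipher stands in for SCOPE's pairing-free authenticated encryption channel; all names are illustrative.

```python
# Conceptual sketch (not the SCOPE implementation): the relay verifies an
# on-ledger authorization decision but never holds the channel key, so it
# forwards ciphertext it cannot read.
import hashlib
import os

CHANNEL_KEY = os.urandom(32)  # shared only by the device and the data consumer
ledger = {}                   # stand-in for the broker smart contract's record

def record_authorization(device_id, consumer_id):
    """Trusted authority records an authorization decision 'on-ledger'."""
    ledger[(device_id, consumer_id)] = True

def encrypt(key, nonce, message):
    """Toy XOR stream cipher for messages up to 32 bytes (illustrative only;
    a real deployment would use authenticated encryption)."""
    keystream = hashlib.sha256(key + nonce).digest()
    return bytes(m ^ k for m, k in zip(message, keystream))

def relay_forward(device_id, consumer_id, ciphertext):
    """Stateless edge relay: consults the ledger, never decrypts the payload."""
    if not ledger.get((device_id, consumer_id)):
        raise PermissionError("no on-ledger authorization")
    return ciphertext  # forwarded opaque
```

Because XOR is its own inverse, the consumer decrypts with the same `encrypt` function; the relay, lacking `CHANNEL_KEY`, can enforce policy without ever seeing the plaintext.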

 

Dr Michael Scriney

The team's prototype, evaluated on an IoT edge testbed, demonstrated sub-second end-to-end operation with on-ledger authorization within 500 milliseconds. Importantly, SCOPE achieved lower computational latency than existing pairing-based encryption approaches while maintaining robust security guarantees. The framework is designed to be portable across different ledger platforms and can even accommodate future post-quantum cryptographic upgrades without requiring changes to the policy or relay infrastructure.

 

Privacy-Preserving Machine Learning for 6G Networks

Looking ahead to sixth-generation wireless systems, Dr Sunder Ali Khowaja (School of Computing) has contributed to a comprehensive analysis in IEEE Wireless Communications on adaptive gradient methods for differentially private machine learning in resource-constrained 6G environments.

As 6G networks promise to connect billions of TinyML (Tiny Machine Learning) devices—from smart wearables to industrial sensors—the challenge of training AI models on deeply personal data while preserving privacy has never been more critical. The research traces the evolution from static gradient clipping to fully adaptive scaling methods, culminating in the DP-PSASC (Differentially Private Per-sample Adaptive Scaling Clipping) algorithm.

The traditional approach to private machine learning relied on manually tuning a fixed gradient-clipping threshold, which is impractical given the diversity of devices and data types in 6G networks. The team's adaptive methods eliminate this bottleneck while actually improving model accuracy.
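The contrast between a fixed and an adaptive clipping threshold can be sketched in a few lines of Python. This is a simplified illustration of DP-SGD-style per-sample clipping, not the DP-PSASC algorithm itself; the adaptive rule shown (clipping at the median per-sample gradient norm) is just one illustrative choice of data-dependent threshold.

```python
# Sketch of per-sample gradient clipping with Gaussian noise, as used in
# DP-SGD-style training. Simplified for illustration; DP-PSASC uses its own
# per-sample adaptive scaling rule rather than the median heuristic below.
import math
import random

def l2_norm(g):
    return math.sqrt(sum(x * x for x in g))

def clip(g, threshold):
    """Scale gradient g down so its L2 norm is at most `threshold`."""
    n = l2_norm(g)
    scale = min(1.0, threshold / n) if n > 0 else 1.0
    return [x * scale for x in g]

def adaptive_threshold(per_sample_grads):
    """One adaptive choice: the median per-sample norm (illustrative only)."""
    norms = sorted(l2_norm(g) for g in per_sample_grads)
    return norms[len(norms) // 2]

def private_mean_gradient(per_sample_grads, threshold, noise_mult, rng):
    """Clip each sample's gradient, sum, add Gaussian noise, and average."""
    clipped = [clip(g, threshold) for g in per_sample_grads]
    dim = len(per_sample_grads[0])
    summed = [sum(g[i] for g in clipped) for i in range(dim)]
    sigma = noise_mult * threshold  # noise scales with the clipping threshold
    noisy = [s + rng.gauss(0.0, sigma) for s in summed]
    return [x / len(per_sample_grads) for x in noisy]
```

The key point the code makes concrete is that the noise scale is tied to the clipping threshold: a poorly chosen fixed threshold either destroys gradient signal (too low) or adds excessive noise (too high), which is why adaptive, data-driven thresholds help.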

 

Dr Sunder Ali Khowaja

The research demonstrated accuracy improvements of up to three percentage points on challenging datasets while maintaining rigorous privacy guarantees. Critically, the team positioned their approach within the Open RAN (O-RAN) framework, showing how privacy-preserving mechanisms can be integrated as native network services rather than afterthoughts.

 

Broader Impact

Together, these three research contributions represent different facets of the same fundamental challenge: building trustworthy, efficient, and scalable intelligent systems for next-generation networks. Whether ensuring fair algorithmic decisions, securing IoT deployments, or training AI models at the edge, researchers at DCU are developing practical solutions that balance performance with fairness, privacy, and security.

These works also exemplify the interdisciplinary nature of modern computer science research. They demonstrate our institution's commitment to addressing real-world problems that will shape how billions of people interact with technology in the coming decade.