Research Newsletter - Issue 83: Spotlight

Who will write the ending?

This was the intriguing question posed by Professor Mike Sharples, Emeritus Professor of Educational Technology at the Open University, in a recent webinar. Professor Sharples was focusing specifically on ChatGPT and the AI essay, but the question is highly relevant to broader debates on the impact of AI on society. The human-machine era raises many interesting and important questions about the future of humanity in a machine age, and research in the Humanities and Social Sciences has a significant contribution to make to research, debate, policy and regulation in this era.

The Faculty of Humanities and Social Sciences at Dublin City University has a strong record of research on topics relating to AI and Society, and we seek to promote cutting-edge, interdisciplinary research. We believe that it is at the edges of disciplinary borders that new and important insights and developments can best be made. For this reason, we again ran an open competition within the Faculty this year for cross-disciplinary, cross-School PhD scholarships, this time on the topic of AI and Society. In previous years the competition focused on the themes of Climate and Society, and Conflict.

Colleagues in the Faculty team up across Schools to propose an interdisciplinary PhD topic on the selected theme. A panel then reviews the proposals and grants awards to the top-ranking ones, after which the topics are advertised and applications are invited from prospective PhD students.

The selected topics show a true interdisciplinary dimension and build on existing strengths within the Faculty and, in one case, even across Faculties. Below is a synopsis of the projects and their leaders.


AI in the Fight Against Corruption: Leveraging Digital Technologies to Enhance Media Integrity and Prevent Media Capture

The use of Artificial Intelligence (AI) to curb corruption and produce good quality journalism has many applications. These include analysing public procurement data to find possible cases of corruption, introducing automated fact-checking and text generation procedures to facilitate journalistic work, and analysing large corpora of media texts to investigate and research corruption.

How can the use of AI enhance media integrity and prevent media capture? Media capture is a situation in which news media are controlled by the government or by vested interests linked to specific political groups. It is often achieved through mechanisms such as takeovers, clientelism, and the strategic use of government or state advertising. Interest in this phenomenon has grown significantly in recent years, as it is considered central to the decline of the democratic function of journalism in both autocratic states and more established democracies. This project will study the subjects of AI, anti-corruption and media capture from an interdisciplinary perspective, drawing on the theories and empirical methods developed in the fields of political science and media studies, and using and developing the practical tools and indicators applied in computer science and economics.

The supervisors are Dr Michael Breen, from the School of Law and Government, who is an expert in the political economy of corruption, and Dr Alessio Cornia, from the School of Communications, who is an expert in political communication and journalism, with a particular focus on media coverage of corruption.


Towards a holistic approach to AI for societal good: reconciling the EU’s digital sovereignty and green transition ambitions

This project brings together the fields of EU law and policy, environmental geography and digital media. The student will investigate the role of AI for societal good by examining the tensions between the EU’s digital sovereignty and its green transition aspirations, analysing the impacts of these two ambitions on AI as a tool for societal good.


The supervisors are Dr Edoardo Celeste (School of Law and Government) and Dr Trish Morgan (School of Communications). Dr Celeste is the IRC Young Researcher of the Year (2023), whose research interests lie mainly in EU and comparative digital law, focusing in particular on digital rights and constitutionalism, privacy and data protection, and social media governance. Dr Trish Morgan’s key research interests lie in analysing the nature/society relationship through political economy, (urban) political ecology, human geography and environmental geography perspectives.


AI: Literary artistic representations and public understandings

In this project the disciplines of English Literature and Media Studies will be combined to study representations of artificial intelligence in popular culture. It will examine how prominent cultural portrayals depict the social, cultural and ethical ramifications of AI, in order to help us understand the collective ambivalence over this emergent and rapidly evolving technology.

A core assumption of the project is that representations of AI in popular culture can influence public understanding of technology through a complex, indirect and dynamic process.

The supervisors are Dr Paula Murphy from the School of English and Dr Declan Fahy from the School of Communications. Dr Murphy has published on representations of AI in literature. Dr Fahy has researched and published on journalistic coverage of science and technology, explaining the effects of such coverage on public debate and on social attitudes towards science and technology. The PhD candidate would examine the intricate interplay between scientific developments, cultural representations, and public understanding of science and technology, specifically AI.


AI, Disability and Society – Emerging Ethical, Legal and Regulatory Issues

Bringing together the fields of ethics, disability studies and law, this project will explore the emerging ethical, legal and regulatory challenges and opportunities in relation to the use of AI for persons with disabilities.

AI is likely to transform 21st-century society in profound ways: it is being employed in assisted and automated decision-making, and it informs algorithms for core services such as health, employment, finance and education. However, this raises significant ethical challenges. The negative impacts and potential misuse of such technologies are likely to be felt most keenly by the vulnerable, such as persons with disabilities, who form the world’s largest minority. Ensuring that AI does not reproduce and entrench the marginalisation, stigmatisation and neglect of persons with disabilities is a condition of justice.

This project will have three supervisors, one from the School of Theology, Philosophy, and Music (Dr Fiachra O’Brolcháin, an expert in the ethics of assistive technology), one from the School of Law and Government (Dr Aisling de Paor, an expert in legal issues surrounding genetic technologies and disability) and one from the School of Psychology (Dr Lorraine Boran, an expert in data protection and medical law).




If you are interested in finding out more about these projects, please don't hesitate to contact any of the supervisors listed above or the Faculty Associate Dean for Research, Prof. Sharon O'Brien (