Our research project explores how to combat the spread of fake news and misinformation in elections through the implementation of algorithmic reliability standards.
Our project addresses the significant threat that fake news and misinformation pose to global elections, a threat exacerbated by artificial intelligence (AI) technologies such as deepfakes.
Funded by Brunel University of London's Policy Development Fund, our research explores strategies for balancing responsible AI use with the need to mitigate the harms of misinformation.
Our goal is to guide policymakers in implementing algorithmic reliability standards to protect elections and ensure transparency.
Addressing the threat of AI-driven misinformation
In recent years, the spread of fake news and misinformation has become a major concern for democracies worldwide. Artificial intelligence (AI) technologies, particularly deepfakes, have made it easier to create and disseminate false information, which can undermine public trust in democratic institutions, manipulate voter behaviour, and destabilise societies.
With elections recently held in 77 countries, including the UK, maintaining trust in democratic processes has never been more important.
Our research examines how governments and platforms can adopt algorithmic reliability standards and regulations to combat election misinformation, addressing issues such as voter manipulation and the misuse of AI technologies.
By balancing the responsible use of AI with harm reduction, our project contributes to societal goals such as equitable access to accurate information, democratic integrity, and ethical AI governance.
We aim to help policymakers and organisations create robust frameworks that promote transparency, accountability, and informed civic participation.
Understanding and mitigating psychological harm
We focus on the psychological harm caused by fake news, particularly during elections. We examine the characteristics of psychological harm: its triggers, manifestations, and mental health impacts on individuals and groups.
Unlike previous studies, we explore the lifecycle of psychological harm: how it originates, evolves, and spreads, examining its transfer from one person or group to another.
We measure this harm through indicators like emotional distress, cognitive biases, and behavioural changes, providing a framework to assess its severity and progression.
By understanding these psychological aspects, we offer insights into how misinformation destroys trust, incites fear or anger, and polarises societies. This helps us develop strategies to reduce harm and build resilience, guiding policymakers in creating frameworks that prioritise mental wellbeing, civic trust, and societal cohesion.
Using a narrative literature review, we examine the psychological harm caused by fake news, including emotional distress, behavioural changes, and societal polarisation, and trace how that harm starts, evolves, and spreads across individuals and groups. The review also develops metrics for assessing the severity and scope of psychological harm, offering insights into its societal impact.
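To make the idea of harm metrics concrete, the sketch below shows one possible way of combining indicator scores into a single severity rating. It is illustrative only: the indicator names, weights, and severity thresholds are assumptions made for the example rather than outputs of our review, which will derive and validate its metrics from the literature.

```python
from dataclasses import dataclass

# Hypothetical weights for the example; a real framework would derive
# these from the literature review and validated psychometric instruments.
WEIGHTS = {
    "emotional_distress": 0.4,
    "cognitive_bias": 0.3,
    "behavioural_change": 0.3,
}


@dataclass
class HarmIndicators:
    """Indicator scores for one respondent, each normalised to the 0-1 range."""
    emotional_distress: float
    cognitive_bias: float
    behavioural_change: float

    def composite_score(self) -> float:
        """Weighted sum of the indicators (0 = no measurable harm, 1 = maximum)."""
        return (
            WEIGHTS["emotional_distress"] * self.emotional_distress
            + WEIGHTS["cognitive_bias"] * self.cognitive_bias
            + WEIGHTS["behavioural_change"] * self.behavioural_change
        )

    def severity_band(self) -> str:
        """Map the composite score onto coarse severity bands."""
        score = self.composite_score()
        if score < 0.33:
            return "low"
        if score < 0.66:
            return "moderate"
        return "high"


if __name__ == "__main__":
    respondent = HarmIndicators(
        emotional_distress=0.7,
        cognitive_bias=0.4,
        behavioural_change=0.5,
    )
    print(respondent.composite_score())  # 0.55
    print(respondent.severity_band())    # moderate
```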
One of our key objectives is balancing the responsible use of AI with harm reduction.
By analysing existing literature on algorithmic reliability, we propose recommendations for policymakers to create frameworks that support ethical AI usage while safeguarding democratic integrity.
We also explore how ethical AI governance can strengthen societal resilience against misinformation and promote informed civic participation. By synthesising research on AI’s effects on public trust, we examine how ethical guidelines can protect democratic institutions from manipulation.
Our research supports goals such as equitable access to accurate information, mental wellbeing, and the protection of democratic values. Supported by Brunel University of London's Policy Development Fund, our findings inform policy recommendations and regulatory frameworks to ensure responsible AI use, fostering transparency and accountability.
Dr Asieh Tabaghdehi
Dr Asieh Tabaghdehi is a Senior Lecturer in Strategy and Business Economy and Programme Lead for the BSc International Business Programme and the Trade2Grow Executive Education Programme at Brunel University of London, as well as an economist and social impact advisor for the independent NGO Social Innovation Movement.
Asieh is a well-recognised academic in digital transformation, focusing on the strategic integration of artificial intelligence (AI) and digital technologies to advance sustainable business practices and address critical social and economic challenges. From 2021 to 2022, Asieh was a co-investigator on the Digital Footprint Project, funded by UK Research and Innovation (UKRI), which explored the ethical implications of digital footprint data on value creation for SMEs. One of the project outputs was the Digital Business Auditing Framework, which has been adopted internationally for smart city initiatives.
Asieh has attained an international reputation in the ethical integration and deployment of AI, particularly in the context of smart data governance. Her research integrates industry collaboration, public engagement, and policy dialogue, with an emphasis on the social and economic dimensions of responsible smart city development and innovative healthcare systems. Asieh’s contributions have been widely cited in academic, practitioner, and policy outputs, and her research has informed national and international governments and businesses. Her research on connected technology has been presented as both written and oral evidence to the House of Commons Select Committee at the request of the Department for Digital, Culture, Media and Sport (DCMS) and has been featured in the national and international press.
She is widely published in academic peer-reviewed journals and the press, is a frequent speaker at academic and industry conferences, and has responded to a number of policy inquiries at national and international level. She is also the author of the book Business Strategies and Ethical Challenges in the Digital Ecosystem, which addresses multiple facets of the AI ecosystem, including data ethics, governance, and innovation. In collaboration with the Department for Business, Energy & Industrial Strategy (BEIS), Asieh co-designed the "Digital Adoption" module for the UK Government’s Help to Grow: Management programme, aimed at enhancing the digital capabilities of SMEs.
Asieh is a Fellow of the UK Higher Education Academy, a member of the ESRC Review College, the British Academy of Management Review College, and the Energy Institute UK. She also serves as an associate practitioner at Social Value International and as an associate member of the Big Innovation Centre and of the All-Party Parliamentary Group (APPG) on Artificial Intelligence. She is also a member of the Centre for Artificial Intelligence: Social and Digital Innovation at Brunel University of London. Currently, Asieh serves as the Impact Lead at the Brunel Centre for AI, where she leads the Future of Work capability area.
Asieh earned her PhD in Economics and Finance (2008) and MSc in International Money, Finance, and Investment (2015), both from Brunel. She joined Brunel in 2020 after holding academic positions at Regent’s University London, where she also served as Director for the BSc Global Management (Finance Pathway). Prior to her academic career, she gained industry experience as a business analyst, further enriching her interdisciplinary approach to research and teaching.