
Navigating the digital mirage: AI-generated misinformation and disinformation as a global risk

Jan 15, 2024 - Last updated at Jan 15, 2024

Artificial Intelligence (AI) has been in the headlines consistently for the past couple of years, and it seems that this will not change any time soon. As it takes the world by storm, becoming accessible to individuals and institutions alike, concerns about its pitfalls and dangers are being discussed and cannot be ignored. According to the 2024 Global Risks Report by the World Economic Forum, 53 per cent of 1,490 surveyed leaders identify AI-generated misinformation and disinformation as the second most likely global risk, following extreme weather events. The potential for misinformation and disinformation to spread applies to an unfathomable number of sectors and fields, if not all. The social, political, economic and military domains are interlinked, influencing and shaping lives globally. The sensitivity and lasting impact of these domains make AI's far-reaching effects dangerous.

AI’s ability to produce convincing yet false content poses a significant threat to trust in a social context. Deepfakes, deceptive videos or audio recordings that are often indistinguishable from reality, can spread false information, causing confusion and panic. This erosion of trust extends to social institutions, fostering an atmosphere of scepticism and pessimism as individuals grapple with the challenge of discerning fact from fiction.

In the political arena, AI-driven misinformation and disinformation have the potential to manipulate public opinion, influencing election outcomes. AI algorithms can fabricate persuasive fake news tailored to individual biases, thereby polarising societies. This distortion not only undermines democratic processes but also jeopardises political system stability by creating divisions and fueling unrest.

Economically, the dissemination of false information through AI can lead to market manipulation. AI-generated rumours about the status of companies can trigger stock price fluctuations, damaging investments and economic stability. Moreover, misinformation regarding economic policies or trade agreements can sow uncertainty, impacting global markets and international economic relations.

In the military domain, misinformation and disinformation can be used as weapons. AI’s capacity to create convincing yet false intelligence can result in misguided strategic decisions, escalating conflicts. Deceptive information may prompt military actions based on inaccurate data, potentially leading to unintended confrontations and even genocide. The potential misuse of AI to analyse extensive personal data, including information on race, religion or political affiliations, raises concerns about targeted actions against specific groups.

Addressing this global risk requires a comprehensive strategy. This encompasses the development of robust AI detection tools, the promotion of digital literacy to empower individuals in identifying fake content, and the implementation of regulations to hold creators of malicious AI content accountable.

Furthermore, collaborative efforts between governments, tech companies and civil society are imperative to establish a standardised framework for responsible AI usage.

In conclusion, as AI’s influence continues to grow, its potential to shape our reality becomes more evident, for better or worse. Balancing the benefits of AI with the risks of misinformation and disinformation is essential. The Global Risks Report 2024 serves as a stark reminder of the urgent need for proactive measures. By fostering a global ecosystem that encourages ethical AI development and usage, we can harness the power of AI to enrich our lives while safeguarding the integrity of information and the stability of our societies.

 

Nidal Bitar is the chief executive officer of the ICT Association of Jordan int@j

 
