The National Information Technology Development Agency (NITDA) has warned Nigerian internet users and professionals about potential security vulnerabilities in newly released versions of ChatGPT, cautioning that the flaws could expose sensitive data and compromise digital safety.
In a statement issued on Monday, NITDA said it had identified serious weaknesses in advanced artificial intelligence models, including GPT-4 and GPT-5, which could be exploited by cyber attackers to manipulate AI-generated outputs and gain unauthorised access to user information.
According to the agency, seven critical vulnerabilities were discovered. These include the ability to embed hidden malicious instructions in seemingly harmless online content such as social media comments, webpages, or shortened links. When processed by AI systems during routine tasks like summarising text or browsing the web, such instructions could trigger harmful actions without the user’s knowledge.
NITDA also highlighted other threats, including techniques for bypassing safety filters, concealing dangerous content through formatting and markdown loopholes, and “memory poisoning,” a method that corrupts the information an AI model retains so that its behaviour changes gradually over time.
The agency warned that these exploits could result in data leaks or unauthorised actions carried out by AI tools on a user’s behalf.
While OpenAI has announced that some of the identified issues have been addressed, NITDA noted that large language models still struggle to detect sophisticated, well-disguised malicious commands.
The agency therefore urged Nigerians to exercise caution when using AI-powered tools, stressing the need to independently verify AI-generated information and remain vigilant against suspicious online content.
NITDA reaffirmed its commitment to promoting safe and responsible use of emerging technologies and called on stakeholders to prioritise cybersecurity as artificial intelligence adoption continues to grow.