The Indian Ministry of Finance has issued a directive banning the use of AI-powered tools such as ChatGPT and DeepSeek on official government devices, citing concerns over data security and confidentiality. The circular, dated January 29, 2025, aims to mitigate potential cybersecurity risks associated with these applications.
Signed by Joint Secretary Pradeep Kumar Singh, the notice warns that AI-based applications on office computers pose a significant threat to sensitive government information. As a precautionary measure, the ministry has strictly advised all employees against using AI tools on official devices.
The directive has been approved by the Finance Secretary and circulated across key government departments, including Revenue, Economic Affairs, Expenditure, Public Enterprises, DIPAM, and Financial Services.
This move aligns with growing global concern about AI platforms handling sensitive data. Many AI tools, including ChatGPT, process user inputs on external servers, raising concerns about potential data leaks or unauthorized access. Governments and corporations worldwide have imposed similar AI restrictions to prevent unintentional exposure of confidential information.
While the directive restricts the use of AI tools on official devices, it remains unclear whether employees can use them on personal devices for work-related tasks. The ban underscores the government’s cautious stance on AI adoption, prioritizing data security over the convenience offered by AI-powered assistance.
Reasons Behind the AI Ban
The Ministry of Finance’s decision stems from multiple concerns surrounding security and compliance. The primary reasons include:
- Risk of Data Leaks: AI models like ChatGPT and DeepSeek operate on external servers, which means any sensitive government data entered into these tools could be stored, accessed, or misused. Given that government offices handle classified financial data, policy documents, and internal communications, even an inadvertent breach could have significant consequences.
- Lack of Control Over AI Models: Unlike government-approved software, AI tools are cloud-based and managed by private companies, such as OpenAI for ChatGPT. The government has no direct oversight of how these tools process or store information, increasing concerns about potential cyber threats and unauthorized foreign access.
- Compliance with Data Protection Policies: India is in the process of strengthening its data privacy framework, including the implementation of the Digital Personal Data Protection (DPDP) Act, 2023. The unregulated use of AI tools on official devices could lead to non-compliance with emerging data protection policies, making government systems vulnerable to security breaches.
As AI continues to be integrated into workplaces globally, it remains to be seen whether the Indian government will introduce structured policies for regulated AI use in the future. For now, finance ministry officials must rely on traditional, non-AI workflows to keep sensitive government data protected from potential cybersecurity threats.