
The Ministry of Electronics and Information Technology (MeitY) has unveiled a comprehensive framework aimed at regulating AI-generated and manipulated content, marking a major step in India’s efforts to establish a responsible and transparent AI ecosystem. The proposed guidelines reflect the government’s growing emphasis on ethical AI governance, balancing innovation with safeguards against misinformation, deepfakes, and misuse of generative technologies.
Under the draft framework, all AI systems and platforms operating in India will be required to visibly label synthetic or AI-altered content, including text, images, audio, and video outputs, so users can clearly distinguish between human-created and machine-generated material. The measure is designed to promote transparency and user awareness, particularly as generative AI tools become more integrated into everyday communication, media, and business operations.
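The draft does not prescribe a specific labelling mechanism. Purely as an illustration, the sketch below shows one way a platform could attach both a visible notice and a machine-readable disclosure to a generated image using the Pillow library; the file names, metadata keys, and model identifier are hypothetical and not drawn from the proposal.

```python
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_ai_image(src_path: str, dst_path: str,
                   notice: str = "AI-generated content") -> None:
    """Stamp a visible disclosure on an image and record it in its metadata."""
    img = Image.open(src_path).convert("RGB")

    # Visible label in the bottom-left corner (default bitmap font).
    draw = ImageDraw.Draw(img)
    draw.text((10, img.height - 20), notice, fill=(255, 255, 255))

    # Machine-readable disclosure stored as PNG text chunks.
    meta = PngInfo()
    meta.add_text("ai_disclosure", notice)
    meta.add_text("generator", "example-model-v1")  # hypothetical model identifier

    img.save(dst_path, format="PNG", pnginfo=meta)

# Example usage with placeholder file names.
label_ai_image("output.png", "output_labeled.png")
```

Real deployments would more likely rely on standardized provenance metadata (for example, C2PA-style content credentials) rather than an ad hoc scheme like this, but the principle of pairing a human-visible label with embedded machine-readable provenance is the same.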
The proposal further requires disclosure of AI model provenance, transparency around training datasets, and risk-based classification of AI systems, with heightened scrutiny for high-impact use cases such as healthcare, finance, and public services. These guidelines aim to ensure that AI development in India adheres to principles of accountability, safety, and fairness while aligning with international standards such as the EU AI Act and the US Executive Order on AI.
By embedding traceability and accountability into the lifecycle of AI systems, MeitY’s framework seeks to combat the rising threat of misinformation, deepfake abuse, and content manipulation—issues that have become increasingly urgent with the rapid advancement of generative models. At the same time, the ministry emphasizes that the framework is not intended to stifle innovation but to create an environment where trust and ethical use become foundational to AI-led progress.
The initiative represents a significant move toward a globally harmonized approach to AI governance, positioning India among the early adopters of structured policy frameworks for responsible AI use. Once finalized, the regulations are expected to guide not only tech companies and AI developers but also media organizations, public institutions, and digital platforms, ensuring that the deployment of AI technologies remains transparent, accountable, and beneficial to society.
