(Image source: pib.gov.in)
The Ministry of Electronics and Information Technology (MeitY) introduced the AI Governance Framework on Wednesday, offering detailed guidelines and key recommendations for policymakers. Professor Ajay Kumar Sood, Principal Scientific Adviser to the Government of India, formally released the guidelines alongside other officials. The framework recommends establishing new regulatory authorities, broadening existing laws, and amending current provisions to ensure a comprehensive approach to the emerging technology. MeitY's 68-page report lays out the fundamental principles that should guide India's policymakers in shaping AI policy: respect for human rights, fairness, safety, transparency, and non-discrimination. The government stresses that AI systems should be trustworthy and inclusive, delivering benefits to all communities, especially those currently underserved. Rather than applying uniform rules, the framework adopts a risk-based approach: the level of scrutiny an AI system faces will depend on its potential risks and impact. To put these principles into practice, the guidelines propose a phased roadmap.
In the short term, organizations deploying AI in India are urged to put internal safeguards in place: conducting risk assessments, documenting data sources, and completing bias checks and safety evaluations before launching models. The document stresses the importance of clearly communicating an AI system's purpose and capabilities wherever possible, and calls for grievance-redressal and incident-reporting mechanisms for AI systems. Over the next few years, the guidelines envision a collaborative oversight system involving multiple ministries, regulators, and public institutions, with a central governance body guiding and harmonizing regulation across sectors. For high-risk domains such as healthcare, finance, and law enforcement, the document proposes dedicated regulatory rules and compliance mechanisms. In the long term, the guidelines anticipate a shift from voluntary industry self-regulation to mandatory requirements for systems posing high or critical risks. Continuous monitoring of AI behavior in real-world deployment is expected to become standard practice, supported by a national AI incident database designed to strengthen oversight and public accountability. The plan also includes research and innovation centers, along with partnerships with international bodies to shape global standards for responsible AI.
The guidelines recommend creating new institutional structures to oversee AI across government. Chief among these is the AI Governance Group (AIGG), which would serve as the central body for aligning policies, managing risks, and coordinating across ministries. The AIGG would work with expert bodies such as the Technology & Policy Expert Committee and the AI Safety Institute, as well as sector-specific regulators, to ensure that rules for high-risk uses are coherent yet tailored to fields like healthcare, finance, and law enforcement. Finally, the guidelines emphasize strengthening India's AI capabilities through better infrastructure and resources: expanding access to advanced computing, fostering the creation of high-quality, representative datasets, and enabling the development of AI models relevant to local needs.