
Introduction
India’s AI industry is estimated to reach $17 billion by 2027, growing at over 25% per annum. From banking chatbots to facial recognition in policing, AI tools have made their way into governance and public affairs. However, India still has no dedicated law to govern these systems and is therefore facing a growing governance deficit. Existing Indian laws are not adequate to deal with the challenges or opportunities that AI brings. For the country to be at the forefront of this revolution, it will need new laws that protect privacy, uphold rights, and ensure impartiality, securing India’s digital future.
The Governance Gap
The EU has already laid out stringent regulations for high-risk AI (European Commission, 2024). India, on the other hand, still relies on outdated laws designed for privacy or cybercrime, regulations that are insufficient to address issues such as biased algorithms and the question of who is accountable when AI makes a mistake. NITI Aayog released the National Strategy for Artificial Intelligence, taking an “AI for All” approach, in 2018. The governance gap is exacerbated by the digital divide: most Indians still have only limited access to online services (Government AI Readiness Index, 2024). In 2023, the Ministry of Electronics and Information Technology (MeitY) announced plans to replace the IT Act with a new framework, tentatively called the Digital India Act. While a dedicated AI law is unlikely in the near term, the new Act is expected to cover many of the risks associated with AI.
Government decisions have at times been hasty, and supervision of AI remains decentralised: different departments are moving at their own pace, and there is no single, strong command-and-control authority.
Real-World Examples: Risks and Benefits of AI
So, what does this gap mean for the average Indian? AI is already used by the police, banks, hospitals, and many other crucial public-facing institutions, yet there are not enough safeguards in place. Consider some of the risks and benefits in these sectors:
Facial recognition in policing: The use of facial recognition in cities such as Delhi has raised privacy concerns and led to several cases of alleged discrimination. In Delhi, for example, the police rely heavily on facial recognition for investigations; after the 2020 riots, 73% (1,922) of those arrested were matched using facial recognition technology. Research also shows that such systems can produce false positives and biased or harmful outputs; some arrests, for instance, were reportedly based on matches from blurry footage. Yet there are no laws that prohibit, or even check for, discrimination by algorithms. The risk is that, left unchecked, the technology may aggravate social and economic inequality. A robust oversight framework is therefore needed until the technology matures.
Systemic lending bias: Of late, many startups have emerged in the Indian ecosystem that extend loans on the basis of AI-driven algorithms and the applicant’s digital activity on the internet and social media. This can discriminate against those who are not active internet users, particularly farmers, low-income borrowers, and rural women.
This calls for a framework that standardises credit algorithms and makes them transparent to all, so that vulnerable groups do not suffer bias or exorbitantly high interest rates.
AI in healthcare: Thanks to AI, diseases such as tuberculosis can be detected with very high accuracy, sensitivity, and specificity, enabling doctors to diagnose and treat TB more effectively. This shows that with human-AI synergy, we can create safe solutions that benefit humanity at large.
Each of these cases highlights the fact that India needs to close its governance gap to harness the benefits of AI and other emerging technologies.
If India intends to be the “AI hub” of the world, as several Indian leaders have declared, then it must bring in clear, rules-based governance with strong oversight, along with trusted infrastructure.
Global Models
Different countries have adopted diverse approaches to AI governance, reflecting their political priorities and regulatory philosophies. Examining these models provides useful insights for India, both in terms of best practices to emulate and pitfalls to avoid:
EU: Has developed creative but strict, human-centred rules to govern high-risk AI, which in turn build trust.
US: Relies more on voluntary recommendations and existing laws, allowing more innovation but putting fewer safeguards in place.
China: Takes an active, highly interventionist stance, with strict regulations and huge investment in domestic AI.
UK: Follows a strategy of business-friendly measures and a progressive approach to regulation.
India must find a solution that suits its own reality: an apt combination of innovation, protection, and democracy.
Policy Recommendations
To begin with, India should enact an updated AI law or strengthen the proposed Digital India Act. This should involve a risk-based categorisation of AI systems, mandatory algorithmic impact assessments, accessible redress mechanisms, data-governance and transparency regulations, a right to explanation for affected citizens, and proportionate sanctions for non-compliance.
AI Regulator: India must establish an independent AI regulator rather than dividing responsibility among ministries. It must have the authority to scrutinise algorithms, certify high-risk systems, and impose fines. The body would have to work with sectoral regulators such as the RBI, SEBI, and ICMR, as well as with state governments. A federated structure is possible, in which the centre issues standards and the states implement them locally under uniform rules.
Data Governance and Enforcement: A robust data-governance framework is at the heart of secure AI. The Digital Personal Data Protection Act, 2023 is a good beginning, empowering the Data Protection Board to enforce compliance and issue fines. But enforcement must extend beyond personal data protection to AI-specific requirements: legislation needs to address data quality, provenance, and minimisation, so that accuracy is guaranteed, origins are traceable, and collection is restricted to the minimum required. The government could introduce audits and certifications to check for bias and verify data sources; a minimal illustration of such a bias audit appears after these recommendations. Such measures would help build public trust that AI systems are trained on reliable and fair data.
Inclusion and Fairness: If social welfare or banking transitions to AI-driven platforms, those who are not technologically savvy might become isolated or excluded. High-risk AI, or AI in sensitive sectors, should have mandatory human oversight, essentially a human-in-the-loop to review or override decisions wherever needed. AI systems should be trained on diverse data, and there should be meaningful participation from diverse communities in setting AI standards.
Responsible Innovation: Support and encourage ethical AI startups, offer regulatory “sandboxes” for testing, and tie public contracts to the highest compliance standards.
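To make the idea of an algorithmic bias audit concrete, the short Python sketch below shows one simple check an auditor could run on a lender’s historical approval decisions: comparing approval rates across applicant groups and flagging large gaps. It is purely illustrative; the sample data, column names, and the 0.8 threshold are assumptions for demonstration and are not drawn from any existing Indian framework or law.

# Illustrative sketch only: a simple disparate-impact check an auditor might run
# on historical loan-approval decisions. The dataset, column names, and the 0.8
# threshold are assumptions for demonstration, not part of any Indian framework.

import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Approval rate for each group (share of applicants with outcome == 1)."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest group approval rate to the highest; 1.0 means parity."""
    return rates.min() / rates.max()

if __name__ == "__main__":
    # Hypothetical audit sample: loan decisions tagged by applicant segment.
    decisions = pd.DataFrame({
        "segment": ["urban", "urban", "urban", "rural", "rural", "rural", "rural"],
        "approved": [1, 1, 0, 1, 0, 0, 0],
    })
    rates = selection_rates(decisions, "segment", "approved")
    ratio = disparate_impact_ratio(rates)
    print(rates)
    print(f"Disparate impact ratio: {ratio:.2f}")
    # A commonly cited rule of thumb (assumed here) flags ratios below 0.8.
    if ratio < 0.8:
        print("Potential bias flagged for review by the auditor or regulator.")

In practice, a regulator-mandated audit would go much further, examining training data provenance, error rates across groups, and the lender’s redress process, but even a basic check of this kind illustrates how transparency requirements could be made testable.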
Conclusion
Artificial intelligence can improve the effectiveness of government services, help businesses become more efficient, and strengthen the country’s economy. However, if India fails to govern AI effectively, the technology’s risks, which include greater inequality, less trust, and fewer rights, could come to outweigh its benefits.
It will take a concerted effort from lawmakers, industry leaders, academics, and citizens to shape AI in India. The mission must be to stimulate innovation while robustly protecting the people who depend on these systems. The time to start is now.
To achieve this, action is needed on multiple fronts. The legislature must take the initiative by establishing clear rules, with MeitY guiding policy and coordination. State governments, which deliver many services directly to people, must implement these standards locally. Industry and civil society also play crucial roles in establishing ethical practices and calling out risks. Most importantly, the process should be open to consultation so that citizens can shape how AI is governed. Only through this kind of broad debate can India be assured that AI enhances its democracy rather than erodes it.

Author’s Bio:
Nayan Awasthi is currently pursuing the Post Graduate Programme at the Indian School of Business. He is a technology-driven strategist and IIT Roorkee alumnus with experience in driving product growth, GTM implementation, and operational excellence in the fintech, mobility, and edtech spaces. He has been recognised for developing effective, sustainable solutions, leading teams, establishing strategic alliances, and using automation to help businesses scale and deliver results with purpose.

Author’s Bio:
Aman Pandey is currently pursuing the Post Graduate Programme at the Indian School of Business. Prior to this, he held positions in research and consulting at KPMG and Evalueserve, where he developed experience across industries like Energy, Consumer Goods, and ESG. He has managed teams, implemented impact projects, and has also founded his own tech startup. He is passionate about public policy and the governance of new technologies, with particular interest in the way India can integrate innovation with equity and accountability in the digital era.