The spring sun was already warming Delhi when an urgent analytical report arrived at the office of the Minister of Electronics and Information Technology. It did not concern another glitch in the government services app or the spread of fake news. The focus was on artificial intelligence. Its development in the country had become too rapid, too unpredictable, and too independent for anyone to keep waiting for instructions from above.

India has long felt the duality of its digital destiny. On the one hand, it is one of the world’s largest IT powers, with more than five million specialists writing code, building platforms, and creating models. On the other hand, it is a country with a patchwork legal system, where laws are interpreted differently from one state to another, and where the technological base sometimes lags behind economic growth.

This divide has become particularly noticeable in the field of AI. Global companies have opened laboratories here and hired thousands of engineers. Indian startups have trained neural networks in Hindi, Marathi, and Telugu. But all this has taken place in an environment where there is no clear structure, no transparent rules, no uniform standards, and no basic accountability for mistakes.

This raised the central question: is it possible to govern AI development in a country where every region thinks differently and every new application immediately reaches millions of people?

Learning to Regulate Without Pressure

Strangely enough, the answer was not sought in bans. In 2023, India passed the Digital Personal Data Protection (DPDP) Act. This document was the first brick in a wall that had not existed before. It introduced the idea of a techno-regulatory approach, in which the law is embedded in the very architecture of the technology.

Companies that work with personal data must consider confidentiality at the design stage. This means that information protection is no longer treated as an external shell: it becomes part of the code, part of the logic, part of the business model.
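To make the idea concrete, here is a minimal sketch of privacy by design in a hypothetical Python service. The purpose list, the function names, and the hashing scheme are all illustrative assumptions, not anything the DPDP Act prescribes.

```python
import hashlib
from dataclasses import dataclass

# Hypothetical illustration of privacy by design: the protection rule
# lives inside the data-access code, not in an external policy document.

ALLOWED_PURPOSES = {"billing", "support"}  # assumed purposes the user consented to

@dataclass
class UserRecord:
    user_id: str
    email: str

def pseudonymize(value: str) -> str:
    # One-way hash so downstream consumers never see the raw identifier.
    return hashlib.sha256(value.encode()).hexdigest()[:16]

def fetch_for_purpose(record: UserRecord, purpose: str) -> dict:
    # Purpose limitation enforced in code: access without a consented
    # purpose fails here, before any data leaves the function.
    if purpose not in ALLOWED_PURPOSES:
        raise PermissionError(f"no consent recorded for purpose: {purpose}")
    return {"user": pseudonymize(record.user_id), "purpose": purpose}

print(fetch_for_purpose(UserRecord("u-1029", "user@example.in"), "billing"))
```

The point is architectural: bypassing the safeguard would mean rewriting the code path, not merely ignoring a policy document.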

Moreover, the law recognized the so-called “artificial juristic person.” This was a formal acknowledgment that AI systems could enter the legal field. The step paved the way for further reflection: Who is responsible if a model is wrong? Who is to blame if an algorithm influences elections, court decisions, or credit ratings?

A category of organizations with heightened obligations emerged: Significant Data Fiduciaries (SDFs). They are required to conduct audits, appoint data protection officers, and build processes focused on minimizing risk. Thus the contours of a new type of regulation began to take shape in the country: flexible, and calibrated to scale and impact.

Putting on the Brakes

In early March 2024, the government decided to act more firmly. The Ministry of Electronics and Information Technology (MeitY) issued a new advisory to AI developers. Its content provoked mixed reactions. On the one hand, it contained recommendations. On the other, it hinted that the era of tolerance for experimentation was ending.

The advisory spoke of registering all models, of obtaining government approval before deploying a model still under testing, of labeling every questionable result, and of reporting any errors to the state. The paper did not have the status of law, but the signal was clear: AI was entering a zone of control.

The response was swift. Industry representatives, lawyers, and international experts spoke out sharply: no one understood how to apply for approval, who would evaluate the models, or by what criteria.

As a result, two weeks later, the ministry returned to the topic with revised wording. The strict provisions were dropped: mandatory registration disappeared, and the approval requirement was removed. Only labeling remained, as a warning tool rather than a means of supervision. Users must be made aware that results may be unreliable and decide for themselves whether to trust them.

The emphasis shifted. The state now spoke not in threats but in the language of responsibility. Models should not undermine trust in elections. Content created by AI must be labeled, especially video or voice material that could mislead.
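What such labeling might look like in practice is left open. As a rough sketch, an AI service could attach a machine-readable disclosure to every generated artifact; the field names below are assumptions, not anything the advisory mandates.

```python
import json
from datetime import datetime, timezone

# Hypothetical sketch of output labeling: every AI-generated artifact
# carries a machine-readable disclosure alongside the content itself.

def label_output(content: str, model_name: str) -> dict:
    return {
        "content": content,
        "provenance": {
            "ai_generated": True,  # the disclosure itself
            "model": model_name,   # which system produced the content
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "notice": "Generated by AI; results may be unreliable.",
        },
    }

labeled = label_output("Sample synthetic narration ...", "demo-model-v1")
print(json.dumps(labeled, indent=2))
```

For video or voice material, the same disclosure would more plausibly travel as embedded metadata or a visible watermark rather than a JSON wrapper.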

What Will India Do with AI Next?

In the fall of 2024, a roundtable convened in Delhi. Representatives of major technology companies, professors from technical institutes, government officials, and members of civil society organizations discussed an idea that had previously circulated only in expert circles: creating an AI Safety Institute, a platform that would help the country develop its own standards and approaches.

The institute was not meant to control, fine, or ban. Its task was to research, warn, and gather facts. It was to be the place where complex questions are translated into solutions. What should be done about an algorithm that influences elections? How can a neural network operating in the judicial system be verified? Where is the line between innovation and breach of trust?

Coordination instead of control. Voluntary standards instead of regulations. And instead of fragmented opinions, a common risk map relevant to India, not one copied from another continent.

The private sector has joined this initiative. One of the first projects is a document called AIACT, now in its fourth version. It is not a law but a draft for the future. It describes risks, a classification of systems, the need for post-deployment monitoring, the importance of transparency, and even the idea of a national registry of AI use cases. These proposals provide food for thought and shape the agenda for the future institution.
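Since AIACT is a draft and defines no technical schema, the following is purely a speculative sketch of how an entry in such a national registry, with risk tiers and post-deployment monitoring records, might be modeled. Every name here is an assumption.

```python
from dataclasses import dataclass, field
from enum import Enum

# Speculative data model for a national AI use-case registry entry;
# AIACT specifies no such schema, so all fields are invented for illustration.

class RiskTier(Enum):
    MINIMAL = "minimal"
    MODERATE = "moderate"
    HIGH = "high"

@dataclass
class RegistryEntry:
    system_name: str
    deployer: str
    use_case: str
    risk_tier: RiskTier
    # Post-deployment monitoring: references to periodic audit reports.
    monitoring_reports: list[str] = field(default_factory=list)

entry = RegistryEntry(
    system_name="loan-scoring-v2",
    deployer="ExampleBank",
    use_case="consumer credit scoring",
    risk_tier=RiskTier.HIGH,
)
entry.monitoring_reports.append("audit-report-q1.pdf")
print(entry.system_name, entry.risk_tier.value, entry.monitoring_reports)
```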

Thus, India has decided to go its own way: not through repression but through balance, not through a rigid framework but through flexible guidelines. A country with billions of data points, hundreds of languages, and millions of developers is building an approach that reflects its scale, speed, and complexity.

Wrapping Up

The regulation of artificial intelligence in India is still in its infancy. The country is rejecting direct pressure and betting on cooperation. The DPDP Act, the revision of the model advisory, and the plans for a safety institute are all parts of the same process.

India is seeking a way to manage the future without fear. It aims to give companies room to operate, society confidence, and the state the tools for meaningful participation. The path India is choosing is built on a combination of practice, flexibility, and respect for reality’s complexity.