March 13
AI Summit 2026: Event in India Highlights Governance Challenges for the Future of AI
Written by Matheus Soares
While Brazilians were taking to the streets to celebrate Carnival, the face of Prime Minister Narendra Modi was everywhere on the wide, busy avenues of New Delhi, India, amid tuk-tuks and cars, on billboards announcing the fourth edition of the AI Impact Summit, the global artificial intelligence summit held from February 16 to 20.
The enthusiasm of the Indian government in hosting the event did not stem solely from an attempt to position itself as a global leader in AI discussions. It also reflected the growing importance of this gathering in the international landscape—marked by the voracious drive for growth among companies and by governments in developing countries seeking to capture their attention and capital.
The AI Summit has increasingly established itself as a key moment in the technology sector. It is no coincidence that major figures in the industry attended in person, such as Sam Altman, Sundar Pichai, and Dario Amodei, the CEOs of OpenAI, Google, and Anthropic, respectively. Global leaders were also present, including the President of France, Emmanuel Macron, and President Lula of Brazil.
In practice, however, this year’s event revealed a series of contradictions. It began with the promise of advancing the debate on AI governance based on the recognition that the technology presents risks and challenges. The AI Summit was anchored in three major pillars—planet, people, and progress—which were meant to guide how AI should be harnessed through multilateral cooperation for collective benefit.
What was observed, however, was that issues of safety and regulation were overshadowed by promises of the technology’s widespread application for economic development. India itself, in its AI governance guidelines, ruled out the possibility of moving forward with specific regulation of the technology, arguing that existing legislation would already be sufficient to protect citizens from emerging risks.
“Existing copyright laws may need to be amended, for example, to allow large-scale training of AI models, while ensuring adequate protections for copyright holders and data owners,” the document states.
In addition, while the Indian government emphasized digital sovereignty and national technological development (something that, it must be acknowledged, resulted in an agreement with Brazil), executives of big tech companies pledged billions of dollars in investments in the country, highlighting the continued entanglement between large corporations and public infrastructure.
Civil Society Must Be at the Table
Civil society was present at the event, but with far less visibility than large corporations and heads of state, which is concerning. Last year, when the AI Summit took place in Paris, France, a collective of organizations had already warned about the importance of opening and strengthening spaces for participation by these actors.
For truly fair, equitable, and trustworthy AI development (some of the terms repeatedly invoked by the event’s organizers), it is essential that the knowledge and experience of civil society organizations be genuinely taken into account in international debates on technology governance.
One does not need to look far to understand why this matters. Last week, following the outbreak of conflict between the United States and Iran, the U.S. government reportedly used AI for planning, target identification, and information processing. Around the same time, Anthropic relaxed its own safety policies, arguing that it was difficult to uphold them unilaterally in a context marked by deregulation and economic competition.
To this, we must add the ongoing debates about the use of natural and energy resources by data centers (the infrastructure essential to the functioning of AI models), as well as the protection of workers across the global technology supply chain.
At Aláfia Lab and *Desinformante, we sought to bring to the discussions in New Delhi the risks posed by AI systems, particularly generative AI, that must be considered in relation to information integrity and electoral processes. Since 2024, with the implementation of the AI in Elections Observatory, which we developed together with Data Privacy Brasil, we have been demonstrating how deepfakes, deepnudes, and other forms of synthetic content can fuel disinformation and disrupt citizens’ ability to vote freely.
We are living through a pivotal moment to collectively build the principles and foundations that will shape the future of AI. All the critical issues are already on the table. The problem is that the chairs appear to be occupied almost entirely by heads of state and executives from major corporations. Where will this path lead us?