
Regulating the use of AI in India’s telecom networks

14/03/2026 17:48:00

Artificial Intelligence (AI) is increasingly being integrated into telecom operations - from core functions like network management and traffic optimisation to user-centric areas like customer service and fraud detection.

With this has come increasing regulatory scrutiny. The Telecom Regulatory Authority of India (TRAI) has expressed public intent to regulate AI use through a ‘risk-based’ approach instead of blanket restrictions. Its recommendations on AI use in the telecom sector are currently pending consideration with the Department of Telecommunications (DoT). The Telecom Engineering Centre (TEC) has expressed a preference for a similar risk-based approach in its proposed standard for fairness assessment and risk rating of AI systems, as well as in its more recent non-binding standard on AI incident reporting.

This trend is visible across other regulated sectors, which have begun including AI-related obligations in regulations for market participants. The Ministry of Electronics and Information Technology (MeitY) has issued various AI governance guidelines and advisories, while the FREE-AI report by a Reserve Bank of India (RBI)-appointed committee discusses the future direction of AI regulation in India. Globally too, the Organisation for Economic Co-operation and Development (OECD) and the World Economic Forum have reviewed the implications of AI in regulated sectors, while the European Union has enacted a binding AI Act.

In the near- to medium-term, we could also see certification requirements similar to those that apply to physical telecom equipment under frameworks like Trusted Telecom. However, there is a more immediate challenge for telecom service providers (TSPs) – ensuring that existing AI deployments do not inadvertently breach the licence or regulatory conditions applicable to them.

Under the Unified Licence framework and the draft Main Authorisation Rules proposed under the Telecom Act, subscriber data must be stored in India. For satcom players, localisation obligations are even stricter than those for terrestrial broadband services, as certain kinds of data (including ‘Indian telecom data’ and ‘sensitive’ user information) cannot even be viewed or decrypted outside India. Here, even routine uses like a chatbot for resolving customer complaints could cause inadvertent non-compliance with critical security conditions, since these systems process large volumes of subscriber data using cloud infrastructure or vendor platforms that may store or process data on servers outside India.

TRAI has, in the past, flagged the use of chatbots as a low-risk activity, but it is something for TSPs to be mindful of as they negotiate vendor contracts to integrate such systems. Third-party AI solutions available today address residency concerns through various methods, including training models overseas and refining them locally, or deploying in-country infrastructure where feasible.

If subscriber data is used to train AI models without explicit consent, or without adequate aggregation or anonymisation, telecom providers could face penalties under both telecom regulations and the Digital Personal Data Protection (DPDP) Act once it comes into force.

Data relating to network diagrams presents similar risks. Licence conditions require that such information be shared with vendors strictly on a need-to-know basis, yet AI systems, by design, often require broad datasets to train and optimise. Here too, if network data is shared with vendors for optimisation or predictive maintenance, vendor agreements should be negotiated to explicitly limit how that data is used, stored, and accessed.

Remote access and unauthorised interception risks are particularly serious, as these are critical security conditions whose breach attracts penalties ranging from imprisonment of key officers to suspension of the licence. The Unified Licence permits remote access to networks from approved foreign locations, but only when routed through approved locations within India. Use of an overseas AI vendor, or a domestic vendor with servers abroad, creates the risk of inadvertent remote access. Similarly, if AI systems can access communications on the network as part of network management, quality assurance, or fraud detection, that could constitute unauthorised interception.

Use of AI systems for network management could also create accidental outages in breach of quality-of-service obligations, or inadvertent service availability in areas where service has been blocked on law enforcement instructions.

AI deployment is not expressly prohibited, largely because regulations have not yet evolved to the point where they actively account for AI use. But there are specific functions where regulations require the involvement of identifiable individuals with legal accountability.

For example, coordination with law enforcement on lawful interception and nodal functions cannot be automated. Appellate and advisory mechanisms under TRAI’s consumer protection regulations specifically require human decision-makers. The same applies to grievance redressal officers under intermediary regulations and designated points of contact for coordination with the Indian Computer Emergency Response Team.

While there is currently no separate framework for the regulation of AI systems, harms arising from such systems are regulated under existing laws.

The TEC reporting standard is non-binding for now, but AI incidents could nonetheless trigger reporting obligations under existing regulations – for example, the CERT-In Rules, the DPDP Act, UIDAI regulations, and the Telecom Cybersecurity Rules.

Depending on the nature of the breach (for example, if it involves unauthorised access to subscriber data or communications), incidents involving AI systems could attract significant penalties – from licence suspension to imprisonment of key officers of the telecom service provider.

Further, even as the regulatory framework continues to evolve, the telecom industry globally is working to understand how AI systems can be securely integrated into their networks. At the recently concluded Mobile World Congress 2026, multiple sessions focused on incorporating AI into telecom operations. Industry bodies like GSMA are working on frameworks for safe AI integration, while vendors such as Ericsson are developing cybersecurity systems specifically for AI solutions. There are also emerging AI risk assessment services that help telecom providers evaluate AI vendor systems.

Telecom service providers in India should take similar stock of how AI systems are being deployed and whether those deployments comply with existing licence and regulatory obligations. As a low-hanging first step, vendor agreements and terms of service should be reviewed to ensure they appropriately address the handling of user and network data.

It is also helpful at this stage to engage with regulators, so industry feedback helps them form a practical understanding of how AI systems are being deployed, and associated challenges. Proactive dialogue at this stage can help shape a path forward that balances innovation with compliance.

This article is authored by Arjun Sinha, partner, and Mriganki Nagpal, counsel, at AP&Partners.

by Hindustan Times