The Indian government doesn’t want your data. It wants something more consequential: a view inside a device that never leaves your hands, your phone. The result is a conflict between the government and global smartphone manufacturers, one often described as a technical disagreement over source code. That description is soothing, but misleading.

This is not about a few lines of software. The question is whether the state will supervise the behavior of technology from the outside, or embed itself in the technology’s design. The government is considering mobile security standards that would require phone manufacturers to store device logs for up to a year, notify authorities before major software updates, submit those updates for testing, and possibly share elements of their source code, Reuters reported. The government says talks are ongoing and denies seeking access to source code. But documents reviewed by Reuters suggest otherwise. The gap between these positions is not procedural. It is philosophical.
Source code deserves a plain explanation. It is not your messages, photos, or contacts. It is the instruction manual, written by engineers, that defines how your phone behaves. Access to source code is access to the internal logic that governs the device’s operation.
From first principles, the state’s concerns are not unreasonable. There are nearly 750 million smartphones in use in India. They are a wallet, identity tool, work device, and political megaphone rolled into one. Fraud is on the rise. Cyber vulnerabilities are real. A government that does nothing will naturally be accused of negligence.
The danger lies not in the purpose, but in the method.
Modern digital security is built on speed. Vulnerabilities emerge constantly, and fixes only work if they are deployed quickly. Systems that delay updates, even with good intentions, create windows for compromise. This is not a hypothetical risk. Once an exploit is live, a delay of days, or even hours, can be costly.
Sujit Janardhanan’s concerns begin earlier, and elsewhere. Janardhanan, chief marketing officer (CMO) at Neysa Networks, points out that even before questions of monitoring and control arise, India still struggles with internet access and affordability, even within the narrow scope of telecommunications. Technologies such as 5G were sold as platforms that would enable last-mile innovation in sectors such as agriculture and education. In practice, their impact has been limited by access, cost, and a weak ecosystem.
Against this backdrop, Janardhanan questions initiatives introduced without clarity on what specific risks they aim to address, or why these particular tools are appropriate. Broad appeals to “user data privacy” explain little and cannot be held to account, he argues. The lack of any global precedent only deepens the anxiety.
While Janardhanan sees the issue as one of trust and adoption, Apar Gupta, founding director of the Internet Freedom Foundation (IFF), sees the problem as structural and legal.
Mr. Gupta’s concern is not with isolated access to source code, but with the system built around it. Even where the state already holds surveillance powers for specific cases, access to source code, combined with pre-approval of operating system updates, advance notification of patches, long-term device logging, and restrictions on modifying the operating system, changes the nature of that power. Surveillance shifts from targeted use to built-in capability. Scale is the point.
There are also constitutional issues. Supreme Court precedent requires intrusions into privacy to satisfy tests of legality, necessity, and proportionality. Measures built into every mobile phone and applied across the entire population struggle to meet that standard, resting on unclear legal foundations and weak independent oversight. In practice, Mr. Gupta argues, such measures would be difficult to justify.
Other democracies draw this line differently. Mobile phone security is typically improved through standards, audits, and vulnerability disclosure, not by giving the state the power to delay, control, or throttle software updates, or to mandate operational logging by design.
The worst risk, according to Mr. Gupta, is not that the government will “read the code.” It is that access becomes a conduit for mandating weakening changes, delaying critical patches, and imposing compliance requirements that increase state control while reducing actual security. India’s own experience with spyware allegations and zero-click attacks has already shown how dangerous exploited vulnerabilities can be, even in the hands of nation-state actors.
The second-order effects are predictable. Devices that generate long-term behavioral logs invite self-censorship. Delayed updates weaken cybersecurity. Rules against rollbacks and modifications reduce user control and lock users into vendor- and state-approved software.
Taken together, the warning is stark. When trust is lost, users do not revolt. They retreat quietly, rationally, and en masse. So the real question is not whether India should secure smartphones. Of course it must.
The question is whether that security will be built on speed, resiliency, and accountability, or on control, pre-approval, and architectural oversight. The decision is not limited to mobile phones. It will shape how software is written, how trust is maintained, and how power is exercised in India’s digital economy.
And once a phone is designed to be controlled, the freedom it once embodied is not easily regained.