Device-Level Blocking Won’t Stop Digital Arrest Scams — The UI Is the Real Vulnerability
Last week, India’s Home Ministry issued a directive to WhatsApp: block the device IDs of accounts involved in digital arrest scams so perpetrators can’t open new accounts on the same hardware.
WhatsApp agreed. They have 30 days to submit a proposal.
It won’t work.
The reason it won’t work explains why digital arrest scams will keep growing regardless of how many technical controls we layer on top.
Device IDs Are the Wrong Target
A digital arrest scammer running their operation in India has access to hundreds of millions of cheap Android handsets. A factory reset costs nothing. A new SIM card costs under a hundred rupees. New device ID, new phone number, new WhatsApp account — in under an hour.
Device ID blocking is designed for a threat model where scammers are sophisticated actors with expensive hardware. Digital arrest scams run on volume. The operators behind them are not protecting expensive infrastructure. They burn devices and phone numbers the way spammers burn email addresses.
Blocking device IDs will inconvenience scammers for exactly as long as it takes them to buy a new phone. That’s not a security solution. That’s a speed bump.
The same directive also asked WhatsApp to expand logo detection — comparing profile photos against known law enforcement insignia and removing impersonators. That’s closer to the right solution. But device ID blocking is the headline, and it’s a distraction.
The Verification Inversion
Digital arrest scams succeed not because WhatsApp security is weak, but because the UI creates an environment where users willingly bypass every security control they have.
Here’s what actually happens. The victim receives a video call from someone in a police or CBI uniform. The caller’s profile photo shows an official badge. The video call shows an official-looking backdrop with case numbers, warrant references, and government seals. The caller creates artificial urgency: “Your Aadhaar is linked to a money laundering case. You are under digital arrest. Do not end this call or a physical warrant will be issued.”
The victim is terrified. They’re not thinking about security. They’re thinking about arrest.
The scammer then walks them through every action step by step — opening the banking app, transferring money to a “safe account,” sharing OTPs to “verify identity.” Every step is narrated as protective. Every security control the bank built becomes something the victim actively completes.
I’m calling this the Verification Inversion: the interface layer flips the purpose of security controls from protecting the user to executing the attack. The OTP isn’t protecting you. The interface has convinced you it is, so you hand it over.
The attack surface is not the device. It’s the visual trust layer on the screen.
Logo Detection Is Actually the Right Instinct
Buried in the same directive is a detail worth more attention than the device ID headline: WhatsApp has already deployed a logo detection and media matching system. It compares profile photos against law enforcement agency logos (CBI, ATS, state police forces) and removes accounts misusing official insignia.
That’s the correct direction. The scam works because the visual presentation looks authoritative. Break the authority signal and you break the scam’s opening move.
The problem is logo detection only catches the obvious impersonators. Scammers are already adapting — using AI-generated variations of official logos that match closely enough to fool users but differ enough to bypass pixel-level matching. The arms race between logo variations and detection systems is real, and the scammers are running faster than the platform.
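To make the arms race concrete, here is a minimal sketch of near-duplicate logo matching with perceptual hashing. The imagehash and Pillow libraries and the reference file paths are illustrative assumptions on my part, not a claim about how WhatsApp's actual media matching works.

```python
# Minimal sketch of near-duplicate logo matching with perceptual hashing.
# The reference file paths are hypothetical and the imagehash/Pillow choice
# is illustrative; this is not WhatsApp's actual system.
from PIL import Image
import imagehash

REFERENCE_INSIGNIA = {
    "cbi": imagehash.phash(Image.open("reference/cbi.png")),
    "state_police": imagehash.phash(Image.open("reference/state_police.png")),
}

# Hamming distance tolerance: 0 catches only exact copies; a small positive
# value catches resized or recolored logos; an AI-regenerated variant can
# land just outside whatever value you pick.
MATCH_THRESHOLD = 10

def flag_profile_photo(path: str) -> list[str]:
    """Return the official insignia this profile photo sits suspiciously close to."""
    candidate = imagehash.phash(Image.open(path))
    return [
        name
        for name, reference in REFERENCE_INSIGNIA.items()
        if candidate - reference <= MATCH_THRESHOLD  # '-' is Hamming distance for ImageHash
    ]
```

Loosen the threshold and you catch more regenerated variants but also more legitimate look-alike imagery; tighten it and the scammers' variations sail through. That trade-off is the arms race.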
Caller information display — showing users context about who’s calling them — is another step in the right direction. If every incoming video call from an unverified account shows a prominent “unverified caller” flag that persists through the call, it creates friction at exactly the right moment.
What Would Actually Work
The solutions that would reduce digital arrest scams all share one characteristic: they create friction in the interface at the moment of manipulation.
Mandatory unverified caller overlays. Any account without a verified identity calling via video should show a persistent banner that cannot be dismissed: “Unverified account. Government agencies do not contact citizens via WhatsApp video call.” Not a one-time popup. A persistent overlay visible for the entire call duration.
Screen-share lockouts on government-branded calls. Screen sharing is a common tactic for watching victims navigate their banking apps. If an account whose profile carries government-agency branding initiates screen sharing, the session should be terminated. Not warned. Terminated.
Banking app friction triggers. When a user accesses a banking app while on an active video call, the banking app should show a high-friction warning: “You are on an active call. Fraud alerts frequently occur during video calls. Are you being asked to transfer money or share a code?” This requires coordination between WhatsApp and banks, but both the data and the intent exist.
Timed warnings on prolonged calls. Digital arrest scams often keep victims on calls for hours, maintaining the psychological pressure. An automatic warning after 30 minutes of continuous video call ("You have been on this call for 30 minutes. If someone is asking you to take financial actions, this may be fraud") breaks the trance.
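Here is a rough sketch of those four interventions as rules over in-call state. Every event name, field, and threshold below is an assumption made for illustration, not any platform's real client API.

```python
# Rough sketch of the four interventions as rules evaluated during a call.
# All names and thresholds are illustrative assumptions, not a real API.
from dataclasses import dataclass

PROLONGED_CALL_SECONDS = 30 * 60  # the 30-minute "break the trance" point

@dataclass
class CallState:
    caller_verified: bool           # caller has a verified identity
    government_branded: bool        # profile photo matched official insignia
    screen_share_requested: bool    # caller has initiated screen sharing
    banking_app_foregrounded: bool  # user switched to a banking app mid-call
    elapsed_seconds: int

def interventions(call: CallState) -> list[str]:
    """Return the frictions that should be active for the current call state."""
    actions = []
    if not call.caller_verified:
        # Persistent, non-dismissable banner for the entire call duration.
        actions.append("SHOW_UNVERIFIED_CALLER_OVERLAY")
    if call.government_branded and call.screen_share_requested:
        # Screen-share lockout: terminate the session, don't just warn.
        actions.append("TERMINATE_CALL")
    if call.banking_app_foregrounded:
        # High-friction warning inside the banking app (needs bank cooperation).
        actions.append("BANKING_APP_FRAUD_WARNING")
    if call.elapsed_seconds >= PROLONGED_CALL_SECONDS:
        actions.append("PROLONGED_CALL_WARNING")
    return actions
```

The only rule that needs state to cross an app boundary is the banking-app trigger; the other three live entirely inside the calling client.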
None of these are technically complex. All of them require friction that product teams will resist because friction reduces engagement metrics. That’s the real barrier.
The Measurement Problem
The Home Ministry is measuring the wrong thing. Device ID blocks are measurable: accounts blocked, devices flagged, scammers inconvenienced. Logo detection is measurable: impersonator profiles removed, false positives reviewed.
What’s not being measured: how many users completed a video call with an unverified account and then opened their banking app within 10 minutes. How many users stayed on a video call for over 30 minutes while simultaneously accessing financial services. How many users shared their screen during a call with a government-branded profile.
These are the signals that predict a scam in progress. They’re also the signals that require real-time intervention, not post-facto account removal.
WhatsApp has the telemetry. Banks have the transaction data. The coordination layer doesn’t exist.
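If that coordination layer existed, the core computation would not be exotic. Here is a deliberately simplified sketch; every data structure and field name is invented for illustration, and in reality the call telemetry sits with the platform while the banking events sit with the banks.

```python
# Hypothetical sketch of the unmeasured signals. All data shapes are invented
# for illustration; the point is how little computation the signals require
# once the two data sources can be joined.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class VideoCall:
    user_id: str
    verified_caller: bool
    government_branded: bool   # profile photo matched official insignia
    screen_shared: bool
    start: datetime
    end: datetime

@dataclass
class BankingAppOpen:
    user_id: str
    opened_at: datetime

def scam_in_progress_signals(calls: list[VideoCall],
                             banking: list[BankingAppOpen]) -> dict[str, int]:
    """Count the three signals described above across a set of calls."""
    counts = {
        "banking_app_within_10min_of_unverified_call": 0,
        "call_over_30min_with_banking_access": 0,
        "screen_share_on_government_branded_call": 0,
    }
    for call in calls:
        opens = [e.opened_at for e in banking if e.user_id == call.user_id]

        if not call.verified_caller and any(
            call.end <= t <= call.end + timedelta(minutes=10) for t in opens
        ):
            counts["banking_app_within_10min_of_unverified_call"] += 1

        if (call.end - call.start) > timedelta(minutes=30) and any(
            call.start <= t <= call.end for t in opens
        ):
            counts["call_over_30min_with_banking_access"] += 1

        if call.government_branded and call.screen_shared:
            counts["screen_share_on_government_branded_call"] += 1
    return counts
```

Counting these after the fact is the easy half; the value is in firing on them while the call is still live.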
What I Don’t Know Yet
I don’t know how to build organizational trust in autonomous intervention systems that terminate calls or lock banking apps based on behavioral signals. The false positive rate for “user on video call + opens banking app” is probably high. Legitimate customer service calls exist. Remote financial assistance from family members exists.
The threshold between protective friction and user-hostile interference is real. I haven’t solved it. Neither has anyone else.
But I know this: device ID blocking is solving the wrong problem. The scam doesn’t live in the hardware. It lives in the 6-inch screen where a terrified user sees a uniform and hears the word “arrest” and does exactly what they’re told.
The question worth asking now is whether the platforms that control that screen are willing to break their own engagement metrics to intervene. Not in three years. This quarter.
Are we asking it? Mostly, no. We are still blocking device IDs.