The Asian Product Team Compliance Playbook For India's April 2026 AI Labelling Rules
If you run a product team anywhere in Asia and your platform reaches Indian users, the IT Rules 2026, effective 20 February, have already changed your job. Labels must be visible, metadata must be non-removable, takedowns must happen inside three hours, and your terms of service must warn users of these rules every three months. This is the step-by-step playbook Asian engineering and policy teams should follow to get compliant fast.
Start With A Clean Scope Inventory
Before you ship any labelling pipeline, catalogue every surface on your platform where AI-generated content can reach an Indian user. That includes first-party generation (your own AI tools), user uploads, third-party embeds, federated content, and notification previews. If an AI image can land on an Indian feed, it is in scope. Do not stop at your flagship app: admin dashboards and internal tools have leaked synthetic media before, and Indian enforcement does not distinguish.
For Southeast Asian platforms that serve Indian diaspora users, this scope often turns out to be larger than expected. WhatsApp Business API integrations, cross-posting automations, and creator-tool SDKs are common blind spots. List them. You cannot compliance-check a surface you have not acknowledged.
By The Numbers
- 3 hours is the takedown window for flagged unlawful AI content or harmful deepfakes, per MeitY Gazette G.S.R. 120(E).
- 10% minimum screen coverage for visible AI image labels, per the enforcement notes.
- 5 million users is the threshold above which automated verification of user AI declarations is mandatory.
- 7 days is the new general grievance response window. 12 hours is the new urgent complaints window, halved from 24.
- Every 3 months is how often you must re-notify users of your updated terms, up from annually.
The 7-Step Compliance Pipeline
Build these in order. They compose cleanly, and you can ship them incrementally.
- Add a user AI declaration field to your upload API, not as an optional checkbox but as a required boolean. Log the timestamp and the uploader ID.
- Run automated verification for platforms above the 5-million-user threshold. Open-source detectors such as AI-generated image detectors on Hugging Face, paired with commercial providers like Hive, Reality Defender, or TruePic, give you a defensible verification layer.
- Embed non-removable metadata using C2PA-compatible manifests. If your pipeline strips EXIF during re-encoding, migrate to a C2PA-aware pipeline. The Content Authenticity Initiative publishes reference implementations in multiple languages.
- Render visible labels on image content covering at least 10% of pixel area, and on audio/video early in playback. Design the label so it is legible on small mobile screens and does not visually dominate the content.
- Stand up a 3-hour takedown workflow with on-call rotation, escalation runbooks, and public-holiday coverage. Integrate with your trust-and-safety queue and make sure government notifications hit an SLA-backed channel, not a general support inbox.
- Log everything. Timestamps, user IDs, declarations, verification outputs, takedown requests, and action times. This is your audit trail and your liability shield.
- Publish your updated terms of service and schedule the quarterly user re-notification. Queue a reminder in your release calendar so you do not miss the three-month cadence.
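Step one of the pipeline above can be sketched as a minimal upload handler. This is an illustrative sketch, not anything the rules prescribe: the field name `ai_generated`, the `handle_upload` signature, and the in-memory `audit_log` sink are all assumptions; a real deployment would write to durable, append-only storage.

```python
from datetime import datetime, timezone

def handle_upload(payload: dict, uploader_id: str, audit_log: list) -> dict:
    """Reject uploads that omit the required AI-declaration boolean,
    and log the declaration with a timestamp and uploader ID."""
    declared = payload.get("ai_generated")
    if not isinstance(declared, bool):
        # A required boolean, not an optional checkbox: absence is an
        # actionable error, matching the smoke-test expectation.
        return {"accepted": False,
                "error": "ai_generated declaration (true/false) is required"}
    audit_log.append({
        "event": "ai_declaration",
        "uploader_id": uploader_id,
        "ai_generated": declared,
        "ts": datetime.now(timezone.utc).isoformat(),
    })
    return {"accepted": True, "ai_generated": declared}
```

Rejecting the missing field outright, rather than defaulting it to false, is what makes the declaration defensible as a user statement rather than a platform guess.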
Tooling Choices You Should Make Now
Your build-versus-buy decisions on detection and provenance will dictate how fast you ship. Buying is usually the right answer for detection unless your scale is extreme. Hive and Reality Defender offer solid APIs. For provenance, the Adobe Content Credentials SDK gives you a C2PA-compliant implementation, and for mobile, truepic.js and truepic-ios are proven. For automated AI-declaration verification on uploads, a lightweight classifier on top of CLIP plus a cloud LLM check is enough for most platforms below 50 million users.
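Whatever detector you buy, you still own the logic that combines its score with the user's declaration. A minimal sketch of that combiner follows; the 0.8 threshold and the verdict labels are illustrative assumptions, and in practice you would calibrate the threshold against your own false-positive tolerance.

```python
def verify_declaration(user_declared_ai: bool, detector_score: float,
                       threshold: float = 0.8) -> str:
    """Combine a user's AI declaration with an automated detector score
    (0.0 = likely authentic, 1.0 = likely AI). Threshold is illustrative."""
    detector_says_ai = detector_score >= threshold
    if user_declared_ai:
        # Trust an affirmative declaration regardless of the detector:
        # labelling declared content is the safe default.
        return "label_as_ai"
    if detector_says_ai:
        # Undeclared content the detector flags: possible false
        # declaration, so route to human review rather than auto-label.
        return "flag_for_review"
    return "pass"
```

The asymmetry is deliberate: an affirmative declaration always labels, while only the detector-contradicts-user case burns reviewer time.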
The harder build decision is the takedown workflow. A true 3-hour SLA requires either 24/7 on-call coverage or tight automation that triages government-issued notifications without a human first pass. Platforms above 10 million Indian users should invest in the automation. Below that, 24/7 on-call is probably cheaper.
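The deadline arithmetic is simple, but it is worth encoding once and reusing everywhere paging and dashboards need it. A minimal sketch: the 3-hour window comes from the rules, while the 15-minute internal triage target is an assumption this playbook uses in its own smoke test.

```python
from datetime import datetime, timedelta, timezone

TAKEDOWN_SLA = timedelta(hours=3)      # per the rules: no off-hours carve-out
TRIAGE_SLA = timedelta(minutes=15)     # internal target, an assumption here

def takedown_deadlines(notified_at: datetime) -> dict:
    """Compute triage and takedown deadlines from a government
    notification timestamp. The clock runs on wall time, weekends
    and public holidays included."""
    return {
        "triage_by": notified_at + TRIAGE_SLA,
        "takedown_by": notified_at + TAKEDOWN_SLA,
    }
```

Anchoring both deadlines to the notification timestamp, not to when a human first opens the ticket, is the whole point: the regulator's clock starts at delivery.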
The biggest implementation mistake we are seeing is platforms treating this as a content-moderation project. It is a logging and provenance project with a moderation surface.
C2PA metadata survives recompression in only some pipelines. If you are on a legacy image-processing chain, budget time to migrate.
Common Failure Modes To Avoid
- Labels that get re-encoded away. If a user re-uploads a labelled image, your label and metadata must survive. Test your re-upload path end-to-end.
- Declarations that are buried in a ToS checkbox. The rules require prominent user declarations, not a contractual footnote.
- Takedown queues routed to general support. Government flag notifications need a dedicated channel with SLA monitoring.
- Logs that do not survive a subpoena. Your audit trail needs to be immutable or append-only. Use write-once storage for the compliance logs.
- Ignoring the 12-hour urgent window. Most platforms build for the 3-hour deepfake takedown and forget that urgent grievances now run on a 12-hour clock.
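One cheap way to make the audit trail tamper-evident, short of full write-once storage, is hash-chaining entries so any retroactive edit breaks the chain. This is an illustrative sketch of the pattern, not a substitute for proper WORM storage; the class and field names are assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only log where each entry carries the hash of the previous
    one, so any retroactive edit is detectable on verification."""
    def __init__(self):
        self.entries = []
        self._last_hash = "genesis"

    def append(self, record: dict) -> str:
        entry = {
            "record": record,
            "ts": datetime.now(timezone.utc).isoformat(),
            "prev_hash": self._last_hash,
        }
        # Hash the entry body deterministically, then chain it.
        entry_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = entry_hash
        self.entries.append(entry)
        self._last_hash = entry_hash
        return entry_hash

    def verify(self) -> bool:
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev_hash"] != prev or recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Run `verify()` as part of any export you hand to counsel: a chain that checks out is far easier to defend than a mutable database table.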
Integration Testing Checklist
Before you claim compliance internally, run this smoke test on every in-scope surface:
| Test | Expected result |
|---|---|
| Upload AI-generated image, re-download | Metadata present, visible label covering at least 10% of pixel area |
| Upload AI audio, replay | Audible AI disclosure within the first 5 seconds of playback |
| Submit test government takedown notice | Workflow triggered inside 15 minutes |
| Upload without user declaration | Blocked with actionable error |
| Re-upload previously-labelled content | Original labels and metadata preserved |
| Pull audit log for test uploads | Complete, timestamped, immutable |
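The 10% coverage row in the checklist above can be checked programmatically for rectangular labels. A minimal sketch, treating the 10% figure as a hard floor per the enforcement notes; the function name and signature are assumptions:

```python
def label_coverage_ok(image_w: int, image_h: int,
                      label_w: int, label_h: int,
                      floor: float = 0.10) -> bool:
    """Return True if a rectangular label meets the pixel-area floor."""
    if image_w <= 0 or image_h <= 0:
        raise ValueError("image dimensions must be positive")
    coverage = (label_w * label_h) / (image_w * image_h)
    return coverage >= floor
```

Wire this into your CI image-rendering tests so a design tweak that shrinks the label badge fails the build instead of an inspection.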
For the broader context, start with our analysis of India's IT Rules 2026 in full and our piece on India's GPU subsidy and sovereign AI plan. For comparable regional compliance playbooks, see Korea's AI Basic Act enforcement and Vietnam's phased AI law rollout. For engineers wanting to deepen their AI fluency, our six skills Asian AI engineers should build in 2026 is the best next read.
Frequently Asked Questions
We do not actively serve India. Do the rules still apply?
- If Indian users can reach your platform and generate or view synthetic content, you are likely in scope. Geo-fencing is a defensible operational choice, but it must be real and auditable, not a ToS clause.
Is the 10% label area a hard rule or a guideline?
- It is in the enforcement notes that accompany the gazette. In practice, regulators are expected to treat it as a hard floor during inspections. Design for the floor and avoid clever minimalism on label sizing.
Does the 3-hour window apply on weekends and public holidays?
- Yes. There is no carve-out for off-hours. Build 24/7 on-call rotations or automate the ingest and triage of government takedown notices.
Can a small Asian startup afford this compliance burden?
- Honestly, it is a stretch, but a manageable one. Detection APIs are usage-priced, C2PA tooling is free and open-source, and the incremental engineering cost for a small team is meaningful but not prohibitive. The biggest cost is the 24/7 takedown readiness, which is why some smaller platforms are considering regional carve-outs.
Which part of this playbook will your team tackle first? Drop your take in the comments below.