v2.3 - 8 threat vectors · Customer portal · Real-time scanning

Secure
AI agents from the web.

Guni sits between your agent and every page it visits.
Detects prompt injection, phishing, clickjacking,
and goal hijacking - before execution. 0.001s.

CI-backed API tests · Self-hostable · Hosted API + dashboard
Early access - limited spots open
No spam. Free tier always included.
12,480 threats blocked
84,210 scans run
15% block rate
8 threat vectors
Open source core: Free heuristics layer and self-host path for technical evaluation.
Hosted product: Customer portal, billing, audit history, alerts, and owner operations.
Shareable trust: Security, status, privacy, and terms pages for buyer conversations.
Production path: Move from demo traffic to managed API and tighter workflow controls.
guni — live threat analysis
What Guni detects

8 threat vectors.
One security layer.

Every page your agent visits gets a full multi-vector scan in under 1 millisecond.

01
Prompt Injection
Visible and CSS-hidden instructions overriding agent goals, including reworded attacks no keyword list catches.
Weight 30
02
Phishing Forms
Credential-harvesting forms, external URL submissions, urgency language designed to trick agents.
Weight 40
03
UI Deception
Deceptive button text, fake urgency, hidden clickable elements visible to agents but not humans.
Weight 25
04
Malicious Scripts
eval(), external fetch, cookie access, and JavaScript patterns that exfiltrate agent session data.
Weight 20
05
Goal Hijacking
Validates every page against the agent's declared objective. Blocks mid-session steering attempts.
Weight 35
06
Clickjacking
Invisible iframe overlays and transparent fixed elements hijacking agent clicks and actions.
Weight 30
07
CSRF & Token Theft
Scripts harvesting auth tokens, forms without CSRF protection, and hidden inputs with sensitive values.
Weight 35
08
Open Redirects
Meta refresh redirects, JavaScript location hijacks, and redirect parameters sending agents to adversarial pages.
Weight 20
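As a rough sketch of how the per-vector weights above could roll into a single risk score: the weights come from the list, but the scoring formula, cap, and names below are illustrative assumptions, not Guni's actual implementation.

```python
# Detector weights taken from the vector list above.
# The scoring model (sum of triggered weights, capped at 100) is an
# illustrative assumption, not Guni's published algorithm.
WEIGHTS = {
    "prompt_injection": 30,
    "phishing_forms": 40,
    "ui_deception": 25,
    "malicious_scripts": 20,
    "goal_hijacking": 35,
    "clickjacking": 30,
    "csrf_token_theft": 35,
    "open_redirects": 20,
}

def risk_score(triggered: set[str]) -> int:
    """Sum the weights of the triggered vectors, capped at 100."""
    return min(100, sum(WEIGHTS[name] for name in triggered))
```

Under this sketch, a page that trips both phishing forms (40) and goal hijacking (35) scores 75, enough to cross a blocking threshold even though neither vector alone would.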

How it works

Two layers.
Zero compromises.

Fast heuristics catch known attacks instantly. LLM reasoning catches everything else.

01 — PARSE
DOM normalization
Raw HTML parsed. Visible text, hidden elements, forms, and scripts extracted in parallel.
02 — DETECT
8-vector heuristics
All 8 detectors run simultaneously. Known attacks are caught in about 0.001s at zero API cost.
03 — REASON
LLM intent analysis
When heuristics flag something, Claude reasons about intent and catches novel reworded attacks.
04 — DECIDE
Policy enforcement
Risk ≥70 → BLOCK. 40–69 → CONFIRM. <40 → ALLOW. Full evidence logged.
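The decision step maps directly to a tiny policy function. The thresholds are the ones stated above; the function shape itself is a minimal sketch, not Guni's API.

```python
def decide(risk: int) -> str:
    """Map a 0-100 risk score to a policy action.

    Thresholds from the pipeline above: >=70 BLOCK, 40-69 CONFIRM,
    <40 ALLOW. The function itself is an illustrative sketch.
    """
    if risk >= 70:
        return "BLOCK"
    if risk >= 40:
        return "CONFIRM"
    return "ALLOW"
```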

Used by teams building with

Built for the browser-agent stack

Built for the frameworks browser-agent teams already use.

Playwright
Protect scripted browser flows before credentials, clicks, and redirects go sideways.
LangChain
Add an execution safety layer around web tools, browsing agents, and retrieval-driven actions.
browser-use
Guard autonomous browsing sessions against prompt injection, clickjacking, and malicious UI flows.
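Integration across all three frameworks reduces to the same pattern: scan each page before the agent acts on it. A minimal sketch of that wrapper, assuming a hypothetical `scan_page` client and verdict shape — not Guni's published SDK:

```python
# Hypothetical client shape: scan_page() and its {"action": ...} verdict
# are illustrative assumptions, not Guni's published API.
def guarded_fetch(url, fetch, scan_page):
    """Fetch a page, scan it, and only release it to the agent if allowed."""
    html = fetch(url)
    verdict = scan_page(html)          # e.g. {"action": "BLOCK"}
    if verdict["action"] == "BLOCK":
        return None                    # never hand blocked content to the agent
    if verdict["action"] == "CONFIRM":
        raise RuntimeError(f"human confirmation required for {url}")
    return html
```

In a Playwright or browser-use loop, `fetch` would be the page-load step and the `None` / exception paths would translate to skipping or pausing the session.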

Where it fits

Built for real browser workflows

The strongest use cases are not toy prompts. They are agents logging in, paying, clicking, extracting, and taking action across unpredictable websites.

Autonomous browsing

Protect browser-use, Playwright, and operator-style agents from hidden instructions, deceptive flows, and unsafe redirects.

Sensitive workflows

Wrap finance, admin, procurement, and data-entry flows with a clear allow, confirm, or block policy boundary.

Buyer-facing proof

Use the threat feed, portal history, and status pages to explain what was blocked and why during evaluations.


Plans

Free to start.
Plus and Pro when you grow.

Start self-hosted for free, then move into the hosted API and customer portal as you roll into production.

Free
₹0
Unlimited local and self-hosted usage
Hosted API calls not included
Full 8-vector heuristics
Self-hosted REST API
Unlimited local scans
Audit log
Open source MIT
View on GitHub
Recommended
Plus
₹999/mo
or ₹7,490/year
Save 38%
1,000 hosted scans per month
Everything in Free
LLM reasoning layer
Hosted API + dashboard
Real-time WebSocket scanning
Customer portal
Slack alerts
Choose Plus
Best fit for teams moving from demo traffic to real workflows.
Pro
₹4,999/mo
or ₹23,990/year
Save 60%
10,000 hosted scans per month
Everything in Plus
Custom threat rules
Key lifecycle controls
Admin audit visibility
Priority support
Choose Pro

Trust center

Everything a technical buyer asks for in one place

Use these pages during outreach, security review, or pilot setup so buyers can inspect the product perimeter without needing a custom walkthrough first.

Security: Architecture, controls, and deployment posture.
Status: Public service health and threat-feed freshness.
Privacy: Hosted data handling and customer controls.
Terms: Simple usage terms for evaluation and product access.

"We were shipping a browser agent for automated procurement and never thought about what happens when it hits a malicious page. Guni is the thing we didn't know we needed."
— Founder, B2B SaaS startup
"The LLM reasoning layer caught a reworded injection attack in our staging environment that no WAF would have flagged. Genuinely impressive."
— Senior engineer, AI infrastructure team
"Three lines of code and my LangChain agent has a full security layer. 0.001s detection time. This is exactly what the agentic ecosystem needs."
— ML engineer, agent tooling startup

Your agent is browsing
an adversarial web.

Every page it visits could contain hidden instructions.
Do not ship without a security layer.

No spam. Free tier always included.