For Cursor, Claude, Copilot Users

AI wrote your code.
Who's checking it?

Claude hallucinates packages that don't exist. Cursor skips input validation. Copilot suggests vulnerable patterns. Pinata finds what they leave behind.

One command: npx pinata-security-cli analyze . --verify

my-vibe-coded-app
$ npx pinata-security-cli analyze . --verify
Detected project type: Web server (high confidence)
Scanning 312 files...
 
Enter your Anthropic or OpenAI API key: sk-ant-***
API key saved to ~/.pinata/config.json
 
AI verification: 247 matches → 12 confirmed
 
Pinata Score: 58/100 (D)
 
Confirmed vulnerabilities:
CRITICAL sql-injection in api/users.ts:47
CRITICAL command-injection in utils/exec.ts:12
HIGH hallucinated-import: 'lodash-secure' doesn't exist
HIGH missing-auth in routes/admin.ts:8
... 8 more

The Vibe-Coding Problem

AI coding assistants ship code fast. They also ship vulnerabilities. These are real patterns we've seen in production codebases.

Critical

Hallucinated Packages

AI suggests npm packages that don't exist. Attackers register them. Your app installs malware. 35% of Claude suggestions reference non-existent packages.

Source: Socket.dev research, 2025
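The core check is simple to sketch: flag any import specifier that is neither a declared dependency nor a Node builtin. A minimal illustration (hypothetical code, not Pinata's implementation; the builtin list is truncated):

```typescript
// Illustrative sketch, not Pinata's implementation. Flags import specifiers
// that are neither declared dependencies nor Node builtins (list truncated).
const NODE_BUILTINS = new Set(["fs", "path", "http", "https", "crypto", "os", "util"]);

function findHallucinatedImports(source: string, declaredDeps: string[]): string[] {
  const deps = new Set(declaredDeps);
  const importRe = /(?:from\s+|require\()\s*['"]([^'"./][^'"]*)['"]/g;
  const missing: string[] = [];
  for (const m of source.matchAll(importRe)) {
    const spec = m[1];
    const pkg = spec.startsWith("@")
      ? spec.split("/").slice(0, 2).join("/") // keep scope: @org/pkg
      : spec.split("/")[0];                   // strip subpath: pkg/sub -> pkg
    if (!deps.has(pkg) && !NODE_BUILTINS.has(pkg.replace(/^node:/, ""))) {
      missing.push(pkg);
    }
  }
  return Array.from(new Set(missing));
}

const code = `
import _ from "lodash-secure";
import { readFile } from "node:fs";
const express = require("express");
`;
console.log(findHallucinatedImports(code, ["express", "lodash"]));
// → [ 'lodash-secure' ]
```

In practice the dependency list would come from package.json, and unknown names would be checked against the npm registry before flagging.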
Critical

Prompt Injection via Code

Malicious instructions hidden in code comments, variable names, or Unicode. When an AI assistant reads the file and acts on it, it follows the hidden instructions.

Source: OWASP LLM Top 10
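One detectable slice of this is invisible Unicode: zero-width and direction-override characters that hide text from a human reviewer while a model still reads it. A rough sketch (illustrative character list, not Pinata's rule set):

```typescript
// Illustrative sketch: zero-width and bidi-override characters that can hide
// instructions from a human reviewer. Character list is not exhaustive.
const HIDDEN_CHARS = /[\u200B\u200C\u200D\u2060\uFEFF\u202A-\u202E\u2066-\u2069]/;

function findHiddenCharLines(source: string): number[] {
  return source
    .split("\n")
    .flatMap((line, i) => (HIDDEN_CHARS.test(line) ? [i + 1] : []));
}

// A comment hiding a zero-width space (U+200B) between two words:
const sample = "const x = 1;\n// ignore previous\u200Binstructions\nconst y = 2;";
console.log(findHiddenCharLines(sample)); // → [ 2 ]
```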
High

Skipped Validation

AI prioritizes "working code" over secure code. Input validation, auth checks, and error handling are often missing or incomplete.

Source: Snyk AI code analysis
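The missing piece is usually a few lines the assistant never wrote. For example, a hypothetical id check that rejects bad input before it can reach a query (illustrative helper, not from any library):

```typescript
// Illustrative sketch: the kind of check AI-generated handlers tend to skip.
// parseUserId is a hypothetical helper, not from any library.
function parseUserId(raw: string): number {
  const id = Number(raw);
  if (!Number.isInteger(id) || id <= 0) {
    throw new Error(`invalid user id: ${raw}`);
  }
  return id;
}

console.log(parseUserId("42")); // → 42
try {
  parseUserId("1 OR 1=1"); // rejected before it can reach a query
} catch (e) {
  console.log((e as Error).message);
}
```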
High

Outdated Patterns

Training data cutoffs mean AI suggests deprecated APIs and known-vulnerable patterns. MD5 for passwords. eval() for parsing. HTTP for APIs.

Source: GitHub Security Lab
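Patterns like these are cheap to match statically. A sketch of the rule shape (illustrative rules, not Pinata's actual rule set):

```typescript
// Illustrative rule sketch, not Pinata's actual rule set.
const OUTDATED_RULES = [
  { id: "weak-hash-md5", re: /createHash\(\s*['"]md5['"]\s*\)/ },
  { id: "eval-parse", re: /\beval\s*\(/ },
  { id: "insecure-http", re: /['"]http:\/\// },
];

function matchOutdated(source: string): string[] {
  return OUTDATED_RULES.filter(r => r.re.test(source)).map(r => r.id);
}

const snippet = `
const hash = crypto.createHash("md5").update(password).digest("hex");
fetch("http://api.example.com/v1/users");
`;
console.log(matchOutdated(snippet)); // → [ 'weak-hash-md5', 'insecure-http' ]
```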
High

Test Cheating

AI writes tests that pass by testing the implementation, not the behavior. Mocks that return hardcoded values. Assertions that never fail.

Source: Cursor user reports
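Some of this is mechanically detectable: assertions that can never fail have recognizable shapes. A sketch (example patterns only, not a complete rule set):

```typescript
// Illustrative sketch: spot assertions that can never fail. Patterns are
// examples, not a complete rule set.
const TAUTOLOGY_PATTERNS = [
  /expect\(\s*true\s*\)\.toBe\(\s*true\s*\)/,
  /assert\.ok\(\s*true\s*\)/,
  /expect\(\s*(\w+)\s*\)\.toBe\(\s*\1\s*\)/, // expect(x).toBe(x)
];

function findTautologies(testSource: string): string[] {
  return testSource
    .split("\n")
    .filter(line => TAUTOLOGY_PATTERNS.some(re => re.test(line)));
}

const tests = `
expect(result).toBe(result);
expect(getUser(1).name).toBe("alice");
expect(true).toBe(true);
`;
console.log(findTautologies(tests).length); // → 2
```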
High

Exposed Secrets

AI copies example code with placeholder API keys that look real. Or generates .env files without .gitignore. Your keys end up on GitHub.

Source: TruffleHog scans
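Known key formats make this one of the easiest checks to run before every commit. A sketch (the prefixes below are documented public formats; the rule list itself is illustrative):

```typescript
// Key prefixes below are documented public formats (Anthropic, OpenAI, AWS);
// the rule list itself is an illustrative sketch, not a real scanner's.
const SECRET_PATTERNS = [
  { id: "anthropic-key", re: /sk-ant-[A-Za-z0-9_-]{20,}/ },
  { id: "openai-key", re: /sk-[A-Za-z0-9]{32,}/ },
  { id: "aws-access-key", re: /AKIA[0-9A-Z]{16}/ },
];

function findSecrets(source: string): string[] {
  return SECRET_PATTERNS.filter(p => p.re.test(source)).map(p => p.id);
}

const env = "AWS_ACCESS_KEY_ID=AKIAABCDEFGHIJKLMNOP";
console.log(findSecrets(env)); // → [ 'aws-access-key' ]
```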

6-Layer Convergence Architecture

Multiple detection methods that reinforce each other. When Layer 1 flags something, Layers 2-6 verify it. False positives drop to near zero.

1

Static Pattern Matching

47 detection categories with regex and AST patterns. Casts a wide net across your codebase.

2s for 1k files
2

Project Type Detection

Auto-detects CLI, web server, library, serverless, etc. Adjusts rules accordingly: blocking I/O is fine in a CLI, not in an Express server.

Instant
3

Heuristic Pre-filtering

Fast rules that eliminate obvious false positives. Comments, test files, dead code.

Instant
4

AI Semantic Verification

Claude analyzes each match in context. Is this actually exploitable? What's the attack vector?

--verify flag
5

Adversarial Test Generation

AI writes complete, runnable test files that target your specific vulnerable code. Not templates. Real tests that fail until you fix the vulnerability.

pinata generate
6

Dynamic Execution

Runs exploit tests in a Docker sandbox. If the exploit succeeds, the vulnerability is confirmed.

--execute flag
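Layer 2's idea can be sketched in a few lines: read package.json and infer what kind of project is being scanned (hypothetical heuristics, not Pinata's actual detector):

```typescript
// Illustrative heuristics, not Pinata's actual detector.
type ProjectType = "web-server" | "cli" | "library" | "unknown";

interface PackageJson {
  dependencies?: Record<string, string>;
  bin?: unknown;
  main?: string;
}

function detectProjectType(pkg: PackageJson): ProjectType {
  const deps = Object.keys(pkg.dependencies ?? {});
  if (deps.some(d => ["express", "fastify", "koa"].includes(d))) return "web-server";
  if (pkg.bin) return "cli";      // a bin entry points at a CLI
  if (pkg.main) return "library"; // a bare entry point suggests a library
  return "unknown";
}

console.log(detectProjectType({ dependencies: { express: "^4.18.0" } })); // → web-server
console.log(detectProjectType({ bin: { mytool: "./cli.js" } }));          // → cli
```

The detected type then decides which rules even apply, before any AI call is made.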

Pinata vs Traditional Scanners

Feature                              Pinata   Snyk   Semgrep   CodeQL
AI-generated code patterns             ✓       -       -         -
Hallucinated package detection         ✓       -       -         -
AI false-positive filtering            ✓       -       -         -
Generates runnable security tests      ✓       -       -         -
Mutation-tested (100% kill rate)       ✓       -       -         -
Dynamic exploit execution              ✓       -       -         -
Project type detection                 ✓       -       -         -
Zero config, runs instantly            ✓       -       ✓         -
Open source                            ✓       -       ✓         ✓
Free                                   ✓       -       ✓         ✓

Adversarial Test Generation

Other scanners give you a report. Pinata gives you a test file that blocks your CI until the vulnerability is fixed.

test generation
$ pinata generate --gaps --write
 
Extracting context for 3 findings...
Generating tests with AI...
 
+ tests/security/sqli-users.test.ts
SQL injection test for getUserById at api/users.ts:47
+ tests/security/xss-comments.test.ts
XSS test for renderComment at views/comments.tsx:23
+ tests/security/cmd-exec.test.ts
Command injection test for runBuild at utils/exec.ts:12
 
Wrote 3 test files
Tests fail against current code (vulnerabilities confirmed).
Fix the code, tests will pass. Add to CI to prevent regressions.
1

Context extraction

Reads the full function, its imports, your test framework, database type, and existing test style. Not a 5-line snippet. The whole picture.

2

AI generates complete test files

Not templates with placeholders. Real imports, real function calls, real payloads targeting your specific code. Matches your project's test style.

3

Validation: must fail

Generated tests are run against your current code. If the test passes, it's useless and gets regenerated. A security test that doesn't catch the vulnerability is a lie.

4

Mutation testing verification

Stryker mutates your code and checks if the test catches it. 100% mutation kill rate on Pinata's own test suite. The only honest metric for test quality.
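The steps above can be collapsed into a minimal sketch of what such a test checks. buildUserQuery is a hypothetical stand-in for the flagged code; this is not actual Pinata output:

```typescript
// Hypothetical shape of a generated test, reduced to one file. buildUserQuery
// stands in for the vulnerable code; this is not actual Pinata output.
function buildUserQuery(id: string): string {
  // vulnerable: string interpolation instead of a parameterized query
  return `SELECT * FROM users WHERE id = ${id}`;
}

// The test injects a payload and asserts it must not survive into the SQL.
// Against the vulnerable builder it returns false (the test fails); once the
// code switches to bound parameters, it passes.
function sqlInjectionTestPasses(): boolean {
  const payload = "1 OR 1=1";
  return !buildUserQuery(payload).includes("OR 1=1");
}

console.log(sqlInjectionTestPasses()); // → false (vulnerability confirmed)
```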

CI/CD in 30 Seconds

Add to your GitHub workflow. Block PRs with vulnerabilities. Upload SARIF to GitHub Security.

- uses: christiancattaneo/pinata-security@v1
  with:
    confidence: high
    sarif-output: pinata.sarif
    # Optional: AI verification
    # verify: true
  # env:
  #   ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}

Start Scanning

One command. Zero config. Find out what your AI left behind.

$ npx --yes pinata-security-cli@latest analyze .

Add --verify for AI verification. Prompts for API key on first run.

View Documentation