Documentation Index
Fetch the complete documentation index at: https://zeropath.com/docs/llms.txt
Use this file to discover all available pages before exploring further.
How It Works
ZeroPath’s Static Application Security Testing (SAST) uses a multi-stage AI-powered analysis pipeline to discover vulnerabilities that traditional pattern-matching scanners miss:
Automated Scanning
Runs on push, PR, or schedule. Builds an architecture model, analyzes code, and validates every
finding with AI.
Application-Aware
Identifies services and modules in your repo (e.g., /apps/payments) so findings are tied to the right team.
AI Validated
Reviews every candidate for exploitability and context, reducing false positives by up to 75%.
End-to-end Flow
Repository Checkout
ZeroPath clones your repository and pins the exact commit, ensuring repeatable results.
Application Discovery
An AI-powered analyzer maps your codebase’s services, modules, and entry points so each finding
can be attributed to the right application and team. If you have defined
custom source or sink declarations,
ZeroPath prepares optimized summaries for each declaration at scan start so they can be
evaluated efficiently alongside built-in detections throughout the scan.
Multi-Stage Pattern Analysis
Broad static analysis rules run across all supported languages, producing an initial set of
candidate findings. Source-to-sink analysis identifies external data entry points (sources)
and sensitive operations (sinks) in your code — including any
custom sources and sinks you have
declared. Custom declarations are evaluated in the same pre-screen and deep inspection
passes as built-in detections, so custom-declared patterns receive the same analysis
depth as standard vulnerability classes.
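To make the source-to-sink idea concrete, here is a minimal, self-contained illustration (not ZeroPath code) of the kind of taint flow such analysis flags: untrusted input (the source) reaching a SQL query (the sink) via string concatenation, next to the parameterized form that breaks the flow.

```python
import sqlite3

def find_user_vulnerable(conn, username):
    # SOURCE: `username` arrives from an external caller (e.g. an HTTP request).
    # SINK: string concatenation feeds the untrusted value straight into SQL.
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()  # would be flagged: SQL injection

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the value as a literal,
    # so the tainted data never alters the query structure.
    return conn.execute("SELECT id FROM users WHERE name = ?", (username,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# An injected payload changes the vulnerable query's semantics...
print(find_user_vulnerable(conn, "x' OR '1'='1"))  # returns every row
# ...but is an ordinary (non-matching) literal in the parameterized version.
print(find_user_safe(conn, "x' OR '1'='1"))        # returns no rows
```

A custom sink declaration lets you extend this same analysis to sensitive operations specific to your codebase.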
AI Validation & False-Positive Filtering
Each candidate is reviewed in context. The AI evaluates exploitability, code flow, and
surrounding logic to filter out false positives, reducing noise by up to 75%. File paths
referenced in findings are validated against the actual repository contents, preventing
hallucinated or incorrect paths from reaching your results.
Before full validation, every non-SCA candidate undergoes a lightweight severity pre-screen. Candidates below the severity threshold are filtered out early, reducing scan time and focusing the deeper validation pass on findings that are most likely to matter.
A secondary validation pass then reviews all remaining true-positive findings before they
are recorded. This pass can remove duplicate findings that share the same root cause at
overlapping locations, mark non-exploitable findings (dead code, test fixtures, sanitized
inputs), and correct inaccurate details such as title, description, or severity — further
improving signal quality and reducing noise.
Deep Codebase Analysis
A context-aware AI agent performs deeper analysis, examining authentication flows, authorization
logic, business rules, and custom security policies. The agent can follow extended call chains
and cross-file data flows, providing thorough coverage of complex vulnerability patterns.
Custom Rule Evaluation
Any natural-language rules you’ve defined are evaluated per-application, catching violations of
organization-specific security policies.
Deduplication & Scoring
Findings are deduplicated across tools and historical scans. When a single file contains a large
number of candidate findings — for example, from extensive custom rule sets — the consolidation
step automatically splits them into manageable batches and runs a final cross-batch pass to catch
duplicates across batches, ensuring reliable deduplication regardless of finding volume.
When multiple discovery models run in parallel, findings that reference the same file,
vulnerability category, and overlapping line range with similar titles are automatically
collapsed into a single canonical finding. The most detailed description is preserved, and
supplementary descriptions from other models are appended for reference.
Cross-source deduplication (where the same vulnerability is flagged by multiple scanning
engines) now processes findings in file-boundary chunks. Each chunk is evaluated independently,
so a transient failure in one chunk does not discard deduplication results from the rest of
the scan — the affected chunk’s findings are kept as-is while successfully deduplicated
chunks proceed normally.
Each confirmed finding receives a
CVSS 4.0 severity score with AI-generated reasoning, step-by-step exploitation steps,
a set of exploitability preconditions describing deployment-context factors the scanner
could not fully verify, and a security impact label summarizing the real-world attacker
outcome (for example, “Account Takeover” or “Remote Code Execution”). The impact label is
displayed prominently in the issue detail header alongside the vulnerability class, so you can
assess real-world risk at a glance.
For full scans, deduplication is scoped per-branch so each branch maintains independent findings — resolving an issue on one branch does not affect the same issue on another branch.
For PR scans, deduplication is scoped to prevent cross-branch contamination — findings from one
pull request are only deduplicated against full scan baselines and rescans of the same PR, not
against findings from other pull requests. This ensures that each PR’s results accurately
reflect the code changes in that branch.
When an existing finding is re-detected in a subsequent scan, ZeroPath automatically refreshes
the contributor attribution and code location data to reflect the current state of the code,
keeping blame information accurate as your codebase evolves.
Findings with an unknown validation state (where exploitability cannot be determined) are
categorized as Informational rather than non-exploitable, giving you clearer signal about
which findings need further investigation.
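The parallel-model collapse rule described above can be sketched as follows. This is an illustrative simplification under assumed names (`Finding`, `same_issue`, `collapse` are hypothetical): two findings are treated as the same issue when they share a file, category, and overlapping line range, and the most detailed description becomes the canonical one.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    file: str
    category: str
    start: int        # first affected line
    end: int          # last affected line
    title: str
    description: str

def same_issue(a: Finding, b: Finding) -> bool:
    # Same file and vulnerability category, overlapping line ranges.
    # (The real pipeline additionally compares titles for similarity.)
    return (a.file == b.file
            and a.category == b.category
            and a.start <= b.end and b.start <= a.end)

def collapse(findings: list[Finding]) -> list[Finding]:
    canonical: list[Finding] = []
    for f in findings:
        match = next((c for c in canonical if same_issue(c, f)), None)
        if match is None:
            canonical.append(f)
        elif len(f.description) > len(match.description):
            # The more detailed description becomes primary; keep the other.
            match.description = f.description + "\nSupplementary: " + match.description
        else:
            match.description += "\nSupplementary: " + f.description
    return canonical

a = Finding("app/db.py", "sql-injection", 10, 20, "SQL injection", "short note")
b = Finding("app/db.py", "sql-injection", 15, 25, "SQL injection in query builder",
            "detailed description of the tainted flow")
merged = collapse([a, b])
print(len(merged))  # one canonical finding, most detailed description first
```

Description length stands in here for "most detailed"; the production heuristic is unspecified in this document.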
Running Scans
- Full Scans
- PR Scans
Comprehensive analysis of your entire codebase. Triggered manually, on schedule, or when code is pushed to monitored branches.
On subsequent scans of the same branch, ZeroPath intelligently reuses results for unchanged files and focuses deep analysis only on changed files, significantly reducing scan time.
When the changes since the last scan are small, ZeroPath can use a smart rescan mode that
evaluates only the diff. An AI agent reviews whether existing issues are still present and
whether the new code introduces any vulnerabilities, delivering results significantly faster
than a full pipeline run. The rescan agent inspects issues on a per-file basis and resolves
them by exact line range and behavior, so if one vulnerability in a file is fixed while
another remains, only the fixed issue is marked as resolved.
The differential scan engine automatically classifies changed files to determine the most
efficient scan strategy. When only code files change, ZeroPath runs the fast rescan path
alone. When only dependency manifests or lockfiles change, a lightweight SCA pass runs while
SAST findings are carried forward. When both code and dependency files change, ZeroPath runs
the rescan and SCA together — without triggering a full pipeline. Non-code files such as
lockfiles, source maps, images, and generated assets are excluded from the diff size
calculation, so large dependency updates no longer force unnecessary full scans.
In monorepo setups, the rescan agent is scoped to the current repository partition. Only
files and findings within the configured scan scope are evaluated, preventing cross-partition
interference when multiple teams share a single repository.
ZeroPath uses a three-tier regression detection pipeline to efficiently determine whether
previously reported issues are still present after code changes:
- Deterministic fast-path — if none of the files relevant to a finding changed between commits, the finding is carried forward instantly with no AI call.
- Structured diff review — when relevant files did change, a single AI review examines the bounded diff and code evidence to decide whether the issue persists or was fixed.
- Deep-agent escalation — only when the bounded evidence is insufficient does ZeroPath escalate to a full agentic investigation, keeping costs and latency low for the common case.
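The three tiers above can be sketched as a simple decision ladder. This is an illustrative sketch only; the helper names (`diff_review`, `deep_agent`) and the finding shape are assumptions, not ZeroPath's actual interfaces.

```python
def check_regression(finding, changed_files, diff_review, deep_agent):
    """Decide whether a previously reported finding is still present."""
    # Tier 1: deterministic fast-path — if no file relevant to the finding
    # changed between commits, carry it forward with no AI call.
    if not set(finding["files"]) & set(changed_files):
        return "carried_forward"
    # Tier 2: structured diff review — a single bounded AI review of the
    # diff and code evidence; returns "fixed", "persists", or None if
    # the bounded evidence is insufficient.
    verdict = diff_review(finding)
    if verdict is not None:
        return verdict
    # Tier 3: deep-agent escalation — full agentic investigation, reserved
    # for the rare case where the bounded review could not decide.
    return deep_agent(finding)
```

Because most findings touch unchanged files, the common case resolves in tier 1, which is what keeps cost and latency low.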
Vulnerability Categories
- Technical Vulnerabilities
- Business Logic Vulnerabilities
| Category | Examples |
|---|---|
| Injection | SQL injection, command injection, LDAP injection |
| Cross-Site Scripting (XSS) | Reflected XSS, stored XSS, DOM-based XSS |
| Server-Side Request Forgery (SSRF) | Internal service access, cloud metadata exfiltration |
| File Operations | Path traversal, local file inclusion, arbitrary file read/write |
| Insecure Deserialization | Object injection, remote code execution via deserialization |
| XML External Entity (XXE) | External entity expansion, SSRF via XML |
| Template Injection | Server-side template injection (SSTI) |
| Cryptography | Weak algorithms, hardcoded keys, insufficient key length |
| Open Redirect | Unvalidated redirect targets |
| CORS Misconfiguration | Overly permissive cross-origin policies |
| CSRF | Missing or bypassable cross-site request forgery protections |
| LLM Security | Prompt injection, insecure output handling, system prompt leakage, excessive agency (OWASP LLM Top 10) |
| Denial of Service | ReDoS, resource exhaustion, algorithmic complexity |
| Information Disclosure | Sensitive data in responses, error message leakage |
| Reflection | Reflected input flaws, unsafe reflection |
| Memory Safety | Buffer overflows, use-after-free, out-of-bounds access |
| Input Validation | Missing or improper input validation |
| Configuration | Insecure defaults, debug mode in production |
Severity Scoring
ZeroPath uses CVSS 4.0 as its primary scoring standard, and also provides CVSS 3.1 scores for compatibility with tools and workflows that require the older standard. Every confirmed finding receives a structured score with AI-generated reasoning tied directly to the code. Findings are also assigned one or more CWE identifiers for precise weakness classification. CVSS assessments are generated for nearly all security findings — only non-security issues like code quality or style problems are excluded.
LLM Security findings (OWASP LLM Top 10) are scored conservatively: most receive a severity of 2.0–5.0 unless the LLM output flows directly into a dangerous sink (such as eval, SQL, or a shell command) with no validation, in which case the downstream sink impact is scored. System prompt leakage and excessive agency findings are typically scored 1.0–3.0.
CVSS 4.0 Metrics Evaluated
| Metric | Values | What It Captures |
|---|---|---|
| Attack Vector | Network / Local | Whether exploitable remotely or requires local access |
| Attack Complexity | Low / High | Reliability of exploit; whether special conditions are needed |
| Attack Requirements | None / Present | Whether prerequisites (specific config/state) are required |
| Privileges Required | None / Low / High | Privilege level the attacker needs |
| User Interaction | None / Passive / Active | Whether a user must take action for exploitation |
| Confidentiality Impact | None / Low / High | Impact on data secrecy |
| Integrity Impact | None / Low / High | Impact on data or system integrity |
| Availability Impact | None / Low / High | Impact on system availability |
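The metrics in the table map onto the standard CVSS 4.0 vector notation defined by FIRST.org. As a minimal sketch, the snippet below assembles a vector string from the base metrics listed above (the full specification also defines subsequent-system impact metrics, omitted here); the metric values shown are an illustrative example, not a real finding.

```python
# Standard CVSS 4.0 base-metric abbreviations, in specification order.
METRIC_ORDER = ["AV", "AC", "AT", "PR", "UI", "VC", "VI", "VA"]

def cvss4_vector(metrics: dict) -> str:
    """Render a dict of metric abbreviations into CVSS 4.0 vector notation."""
    return "CVSS:4.0/" + "/".join(f"{m}:{metrics[m]}" for m in METRIC_ORDER)

# Example: network-reachable, low complexity, no prerequisites, no privileges,
# no user interaction, high confidentiality/integrity impact, no availability impact.
example = {"AV": "N", "AC": "L", "AT": "N", "PR": "N",
           "UI": "N", "VC": "H", "VI": "H", "VA": "N"}
print(cvss4_vector(example))
# → CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:N
```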
CVSS 3.1 Compatibility
In addition to CVSS 4.0, each finding also receives a CVSS 3.1 score derived from the same
analysis. The 3.1 vector includes the Scope metric (Unchanged or Changed), which indicates
whether an exploit can affect resources beyond the vulnerable component. This dual-scoring
approach lets you use CVSS 4.0 for modern risk assessment while retaining CVSS 3.1 compatibility
for existing dashboards, compliance frameworks, and integrations that have not yet adopted 4.0.
Older findings that were originally scored with CVSS 4.0 only are automatically backfilled with
derived CVSS 3.1 vectors and severity scores, so you can filter and sort all findings by CVSS 3.1
without gaps in coverage.
Priority Score Calculation
The overall finding priority score combines severity with confidence:
severity (0–10) × confidence (0–10) = a 0–100 scale for ranking and filtering.
This means a high-severity finding with low confidence ranks lower than a medium-severity finding with high confidence — helping you focus on issues that are both impactful and reliably identified. For example, a severity 9.0 finding at confidence 2.0 scores 18, ranking below a severity 5.0 finding at confidence 8.0, which scores 40.
Language Coverage
Python
JavaScript / TypeScript
Java
Go
Ruby
PHP
C / C++
C#
Rust
Swift / Objective-C
Kotlin
Scala
Dart
Elixir
Nim
AL (Business Central)
Key Capabilities
75% Fewer False Positives
AI validation reviews every finding in context, filtering out issues that aren’t actually
exploitable.
Novel Vulnerability Detection
Deep AI analysis discovers vulnerabilities that pattern-matching alone cannot find, including
business logic flaws.
Application-Aware Findings
Each finding is tied to the specific service or module it affects, with tech stack and
architecture context.
Data Flow Visualization
Source-to-sink traces show exactly how tainted data reaches vulnerable operations. Every
exploitable finding includes a mandatory data flow path. When a finding involves a
custom source or custom sink
declaration, the data flow details include the custom declaration name and description,
so you can see exactly which custom-defined entry point or sensitive operation was matched.
Source and sink previews display a Custom Source or Custom Sink badge when the
finding originated from a custom declaration, and you can click through directly to view
or edit the declaration from the issue detail pane.
For SCA vulnerabilities, the data flow tree includes the manifest file where the affected
dependency is declared, giving you direct visibility into which dependency file introduces
the risk. The scan explorer includes repository and scan navigation dropdowns so you can
quickly switch between repositories and compare results across recent scans without leaving
the page. Findings that are linked to an application but not to a specific source handler
appear under a Standalone Findings node in the scan explorer tree, so they are always
visible and never lost.
Attack Step Breakdown
Each finding includes a step-by-step description of how an attacker would exploit the
vulnerability, from prerequisites through impact — helping AppSec engineers quickly assess risk.
An expandable Exploit Walkthrough shows the ordered sequence of actions an attacker would take.
These attack steps are also available as structured data in the issue detail API response,
making them easy to consume in automated workflows and integrations. When exporting issues
via the API, you can include attack steps in the export by setting the
includeExploitWalkthrough SARIF option.
Exploitability Preconditions
Every finding includes a list of preconditions — factors that affect whether the
vulnerability is actually exploitable but that the scanner cannot fully verify from code alone.
Examples include whether the endpoint is internet-facing, whether a WAF or API gateway filters
malicious input, and whether authentication is enforced by middleware not visible in the scanned
code. You can expand each precondition to view the supporting scanner evidence from your
codebase, helping you quickly assess real-world risk without re-investigating the finding
yourself. Preconditions are also
available as structured data in the issue detail API response, making them easy to consume in
automated workflows and integrations. When exporting issues via the API, you can include
preconditions in the export by setting the
includePreconditions SARIF option.
Deterministic Results
Same codebase at the same commit produces the same findings, enabling reliable audits and
compliance tracking.
Auto-Fix Capable
Qualifying findings can be automatically patched with AI-generated code fixes.
Multi-Engine Secrets Detection
Secrets scanning runs multiple detection engines in parallel, cross-referencing results to
maximize coverage while deduplicating overlapping findings from different engines. Each secret
finding includes a validation state — Confirmed (verified active), Disconfirmed (verified
inactive or false positive), or Unknown (validation could not run) — along with a reason
when validation is inconclusive.
On-Demand Investigation
Request a deeper re-investigation of any finding using larger AI models for higher-confidence
validation results. You can trigger investigations from the issue detail view — including via the
Reinvestigate action in the issue actions menu — and monitor progress in real time.
Issue Chat
Ask questions about any finding directly from the issue detail view. A dedicated chat panel
scoped to the specific issue can explain how the vulnerability can be exploited, suggest fixes,
and assess reachability — all without leaving the dashboard. You can also delete chat threads
you no longer need.
Branch-Aware Issue Tracking
The global issues list displays the scan target branch for each finding, making it easy to see
which branch a vulnerability was detected on. Combined with per-branch deduplication, you can
track and filter findings across branches without confusion.
Investigation
You can request on-demand investigation of any finding to get a deeper, higher-confidence assessment. When you trigger an investigation, ZeroPath re-evaluates the finding using more powerful AI models:
- SAST findings are re-validated with a larger model that examines exploitability with more context and nuance.
- SCA direct dependency findings undergo reachability analysis to determine whether the vulnerable code path is actually reachable in your project. Results are labeled as “Likely Reachable” or “Likely Not Reachable” to reflect the probabilistic nature of the analysis, and include a detailed reachability summary explaining the reasoning.
- SCA transitive dependency findings go through a two-phase process: first a triage step determines whether exploitability can even be assessed, and if so, a full reachability analysis traces the dependency chain to see if the vulnerability is reachable through parent packages.
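The investigation flow for SCA findings described above can be sketched as a small dispatcher. This is illustrative control flow only; the function names and finding shape are hypothetical, not a documented ZeroPath interface.

```python
def investigate_sca(finding, triage, analyze_reachability):
    """Route an SCA finding through the investigation process."""
    if finding["dependency_kind"] == "direct":
        # Direct dependencies go straight to reachability analysis, which
        # returns "Likely Reachable" or "Likely Not Reachable".
        return analyze_reachability(finding)
    # Transitive dependencies use a two-phase process.
    # Phase 1: triage — can exploitability even be assessed?
    if not triage(finding):
        return "Not Assessable"
    # Phase 2: trace the dependency chain through parent packages.
    return analyze_reachability(finding)
```

The "Likely" labels reflect the probabilistic nature of the analysis: reachability is a judgment with supporting reasoning, not a proof.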
Adoption Checklist
Connect Your Repository
Add your repo via GitHub App, GitLab, Bitbucket, or direct URL. See Quick Start.
Review Findings
Browse findings in the dashboard, grouped by severity, application, and category. You can filter
findings by type — including SAST, SCA, Secrets, IaC, CI/CD, and EoL — to focus on what matters
most to your team.
Configure Thresholds
Adjust confidence filtering and PR check failure thresholds to match your team’s tolerance.
Set Up Integrations
Route findings to Jira, Linear,
Slack, or webhooks. You can export individual
findings or bulk-export multiple findings at once to Jira or Linear directly from the issues
list.
The export dialog lets you choose between Regular CSV, CASA Relevant CSV, and SARIF
formats. Before exporting, you can use granular controls to override the severity, category,
and status filters independently from your current view — for example, exporting only Critical
and High severity issues across all statuses without changing the filters on your dashboard.
A live preview shows the exact number of issues that will be included in your export. For SARIF
exports, you can optionally include preconditions and exploit walkthrough context in the output.
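As a rough sketch of what such an export configuration might look like: the two SARIF option names (includePreconditions, includeExploitWalkthrough) come from this page, but the surrounding payload shape, field names, and filter structure are assumptions for illustration — consult the API reference for the real request format.

```python
import json

# Hypothetical export request body; only the two sarifOptions keys are
# documented names, everything else is an illustrative assumption.
export_request = {
    "format": "sarif",
    "filters": {
        # Override dashboard filters independently of the current view,
        # e.g. only Critical/High issues across all statuses.
        "severity": ["Critical", "High"],
        "status": "all",
    },
    "sarifOptions": {
        "includePreconditions": True,        # attach exploitability preconditions
        "includeExploitWalkthrough": True,   # attach ordered attack steps
    },
}
print(json.dumps(export_request, indent=2))
```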
Define Custom Rules
Add natural-language rules for organization-specific security policies. See Custom
Rules.