
How It Works

ZeroPath’s Static Application Security Testing (SAST) uses a multi-stage AI-powered analysis pipeline to discover vulnerabilities that traditional pattern-matching scanners miss:

Automated Scanning

Runs on push, PR, or schedule. Builds an architecture model, analyzes code, and validates every finding with AI.

Application-Aware

Identifies services and modules in your repo (e.g., /apps/payments) so findings are tied to the right team.

AI Validated

Reviews every candidate for exploitability and context, reducing false positives by up to 75%.

End-to-end Flow

1

Repository Checkout

ZeroPath clones your repository and pins the exact commit, ensuring repeatable results.
2

Application Discovery

An AI-powered analyzer maps your codebase’s services, modules, and entry points so each finding can be attributed to the right application and team. If you have defined custom source or sink declarations, ZeroPath prepares optimized summaries for each declaration at scan start so they can be evaluated efficiently alongside built-in detections throughout the scan.
3

Multi-Stage Pattern Analysis

Broad static analysis rules run across all supported languages, producing an initial set of candidate findings. Source-to-sink analysis identifies external data entry points (sources) and sensitive operations (sinks) in your code — including any custom sources and sinks you have declared. Custom declarations are evaluated in the same pre-screen and deep inspection passes as built-in detections, so custom-declared patterns receive the same analysis depth as standard vulnerability classes.
4

AI Validation & False-Positive Filtering

Each candidate is reviewed in context. The AI evaluates exploitability, code flow, and surrounding logic to filter out false positives, reducing noise by up to 75%. File paths referenced in findings are validated against the actual repository contents, preventing hallucinated or incorrect paths from reaching your results.

Before full validation, every non-SCA candidate undergoes a lightweight severity pre-screen. Candidates below the severity threshold are filtered out early, reducing scan time and focusing the deeper validation pass on findings that are most likely to matter.

A secondary validation pass then reviews all remaining true-positive findings before they are recorded. This pass can remove duplicate findings that share the same root cause at overlapping locations, mark non-exploitable findings (dead code, test fixtures, sanitized inputs), and correct inaccurate details such as title, description, or severity, further improving signal quality and reducing noise.
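The severity pre-screen is essentially a cheap filter in front of the expensive validation pass. The sketch below is illustrative only: the field names, the threshold value, and the `prescreen` function are assumptions, not ZeroPath's actual implementation.

```python
# Illustrative sketch of a severity pre-screen: cheap filtering before
# expensive AI validation. All names and the threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class Candidate:
    title: str
    severity_estimate: float  # quick, low-cost severity guess (0-10)
    is_sca: bool = False      # SCA candidates skip the pre-screen

def prescreen(candidates, threshold=4.0):
    """Drop low-severity non-SCA candidates before deep validation."""
    return [c for c in candidates if c.is_sca or c.severity_estimate >= threshold]

candidates = [
    Candidate("Debug log of request body", 2.5),
    Candidate("SQL injection in search endpoint", 8.8),
    Candidate("Vulnerable lodash version", 3.0, is_sca=True),
]
survivors = prescreen(candidates)
# Only the SQLi candidate and the SCA candidate proceed to full validation.
```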
5

Deep Codebase Analysis

A context-aware AI agent performs deeper analysis, examining authentication flows, authorization logic, business rules, and custom security policies. The agent can follow extended call chains and cross-file data flows, providing thorough coverage of complex vulnerability patterns.
6

Custom Rule Evaluation

Any natural-language rules you’ve defined are evaluated per-application, catching organization-specific security policies.
7

Deduplication & Scoring

Findings are deduplicated across tools and historical scans. When a single file contains a large number of candidate findings, for example from extensive custom rule sets, the consolidation step automatically splits them into manageable batches and runs a final cross-batch pass to catch duplicates across batches, ensuring reliable deduplication regardless of finding volume.

When multiple discovery models run in parallel, findings that reference the same file, vulnerability category, and overlapping line range with similar titles are automatically collapsed into a single canonical finding. The most detailed description is preserved, and supplementary descriptions from the other models are appended for reference.

Cross-source deduplication (where the same vulnerability is flagged by multiple scanning engines) processes findings in file-boundary chunks. Each chunk is evaluated independently, so a transient failure in one chunk does not discard deduplication results from the rest of the scan; the affected chunk's findings are kept as-is while successfully deduplicated chunks proceed normally.

Each confirmed finding receives a CVSS 4.0 severity score with AI-generated reasoning, step-by-step exploitation steps, a set of exploitability preconditions describing deployment-context factors the scanner could not fully verify, and a security impact label summarizing the real-world attacker outcome (for example, “Account Takeover” or “Remote Code Execution”). The impact label is displayed prominently in the issue detail header alongside the vulnerability class, so you can assess real-world risk at a glance.

For full scans, deduplication is scoped per branch, so each branch maintains independent findings: resolving an issue on one branch does not affect the same issue on another branch. For PR scans, deduplication is scoped to prevent cross-branch contamination: findings from one pull request are deduplicated only against full scan baselines and rescans of the same PR, not against findings from other pull requests. This ensures that each PR's results accurately reflect the code changes in that branch.

When an existing finding is re-detected in a subsequent scan, ZeroPath automatically refreshes the contributor attribution and code location data to reflect the current state of the code, keeping blame information accurate as your codebase evolves. Findings with an unknown validation state (where exploitability cannot be determined) are categorized as Informational rather than non-exploitable, giving you clearer signal about which findings need further investigation.
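The collapse of parallel-model findings can be pictured as a simple merge keyed on file, category, and line overlap. This sketch is hypothetical (field names are invented, and it omits the title-similarity check the docs mention); it only illustrates the keep-the-most-detailed-description rule.

```python
# Illustrative sketch of collapsing findings from parallel discovery models.
# Field names are hypothetical; the real logic also compares titles.
def overlaps(a, b):
    return (a["file"] == b["file"] and a["category"] == b["category"]
            and a["start"] <= b["end"] and b["start"] <= a["end"])

def collapse(findings):
    canonical = []
    for f in findings:
        for c in canonical:
            if overlaps(c, f):
                # Keep the most detailed description as canonical;
                # append the other as supplementary context.
                if len(f["description"]) > len(c["description"]):
                    c["description"], f["description"] = f["description"], c["description"]
                c.setdefault("supplementary", []).append(f["description"])
                break
        else:
            canonical.append(f)
    return canonical

findings = [
    {"file": "app.py", "category": "xss", "start": 10, "end": 14,
     "description": "Reflected XSS in search handler via the q parameter."},
    {"file": "app.py", "category": "xss", "start": 12, "end": 13,
     "description": "XSS in search."},
    {"file": "db.py", "category": "sqli", "start": 3, "end": 5,
     "description": "SQL injection."},
]
merged = collapse(findings)
# The two overlapping XSS findings collapse into one canonical finding.
```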
8

Results Delivered

Validated findings are written atomically and surfaced in your dashboard, API, and integrations.

Running Scans

Comprehensive analysis of your entire codebase. Triggered manually, on schedule, or when code is pushed to monitored branches.

On subsequent scans of the same branch, ZeroPath intelligently reuses results for unchanged files and focuses deep analysis only on changed files, significantly reducing scan time. When the changes since the last scan are small, ZeroPath can use a smart rescan mode that evaluates only the diff. An AI agent reviews whether existing issues are still present and whether the new code introduces any vulnerabilities, delivering results significantly faster than a full pipeline run. The rescan agent inspects issues on a per-file basis and resolves them by exact line range and behavior, so if one vulnerability in a file is fixed while another remains, only the fixed issue is marked as resolved.

The differential scan engine automatically classifies changed files to determine the most efficient scan strategy. When only code files change, ZeroPath runs the fast rescan path alone. When only dependency manifests or lockfiles change, a lightweight SCA pass runs while SAST findings are carried forward. When both code and dependency files change, ZeroPath runs the rescan and SCA together, without triggering a full pipeline. Non-code files such as lockfiles, source maps, images, and generated assets are excluded from the diff size calculation, so large dependency updates do not force unnecessary full scans.

In monorepo setups, the rescan agent is scoped to the current repository partition. Only files and findings within the configured scan scope are evaluated, preventing cross-partition interference when multiple teams share a single repository.

ZeroPath uses a three-tier regression detection pipeline to efficiently determine whether previously reported issues are still present after code changes:
  1. Deterministic fast-path — if none of the files relevant to a finding changed between commits, the finding is carried forward instantly with no AI call.
  2. Structured diff review — when relevant files did change, a single AI review examines the bounded diff and code evidence to decide whether the issue persists or was fixed.
  3. Deep-agent escalation — only when the bounded evidence is insufficient does ZeroPath escalate to a full agentic investigation, keeping costs and latency low for the common case.
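The three tiers above can be sketched as a short decision cascade. This is a minimal illustration, not ZeroPath's code: the finding shape and the `diff_review` / `deep_agent` callables are hypothetical stand-ins for the bounded AI review and the full agentic investigation.

```python
# Illustrative sketch of the three-tier regression check.
# The helper callables stand in for internal AI stages and are hypothetical.
def regression_check(finding, changed_files, diff_review, deep_agent):
    # Tier 1: deterministic fast-path -- no relevant file changed,
    # so the finding is carried forward with no AI call at all.
    if not set(finding["files"]) & set(changed_files):
        return "carried_forward"
    # Tier 2: a single bounded AI review of the diff and code evidence.
    verdict = diff_review(finding)  # "persists" | "fixed" | "unclear"
    if verdict in ("persists", "fixed"):
        return verdict
    # Tier 3: escalate to a full agentic investigation only when
    # the bounded evidence was insufficient.
    return deep_agent(finding)

changed = ["src/auth.py"]
untouched = {"files": ["src/payments.py"]}
touched = {"files": ["src/auth.py"]}

r1 = regression_check(untouched, changed, lambda f: "unclear", lambda f: "persists")
r2 = regression_check(touched, changed, lambda f: "fixed", lambda f: "persists")
r3 = regression_check(touched, changed, lambda f: "unclear", lambda f: "persists")
# r1 skips AI entirely; r2 stops at the diff review; r3 escalates.
```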

Vulnerability Categories

| Category | Examples |
| --- | --- |
| Injection | SQL injection, command injection, LDAP injection |
| Cross-Site Scripting (XSS) | Reflected XSS, stored XSS, DOM-based XSS |
| Server-Side Request Forgery (SSRF) | Internal service access, cloud metadata exfiltration |
| File Operations | Path traversal, local file inclusion, arbitrary file read/write |
| Insecure Deserialization | Object injection, remote code execution via deserialization |
| XML External Entity (XXE) | External entity expansion, SSRF via XML |
| Template Injection | Server-side template injection (SSTI) |
| Cryptography | Weak algorithms, hardcoded keys, insufficient key length |
| Open Redirect | Unvalidated redirect targets |
| CORS Misconfiguration | Overly permissive cross-origin policies |
| CSRF | Missing or bypassable cross-site request forgery protections |
| LLM Security | Prompt injection, insecure output handling, system prompt leakage, excessive agency (OWASP LLM Top 10) |
| Denial of Service | ReDoS, resource exhaustion, algorithmic complexity |
| Information Disclosure | Sensitive data in responses, error message leakage |
| Reflection | Reflected input flaws, unsafe reflection |
| Memory Safety | Buffer overflows, use-after-free, out-of-bounds access |
| Input Validation | Missing or improper input validation |
| Configuration | Insecure defaults, debug mode in production |

Severity Scoring

ZeroPath uses CVSS 4.0 as its primary scoring standard, and also provides CVSS 3.1 scores for compatibility with tools and workflows that require the older standard. Every confirmed finding receives a structured score with AI-generated reasoning tied directly to the code. Findings are also assigned one or more CWE identifiers for precise weakness classification. CVSS assessments are generated for nearly all security findings — only non-security issues like code quality or style problems are excluded. LLM Security findings (OWASP LLM Top 10) are scored conservatively: most receive a severity of 2.0–5.0 unless the LLM output flows directly into a dangerous sink (such as eval, SQL, or a shell command) with no validation, in which case the downstream sink impact is scored. System prompt leakage and excessive agency findings are typically scored 1.0–3.0.
| Metric | Values | What It Captures |
| --- | --- | --- |
| Attack Vector | Network / Local | Whether exploitable remotely or requires local access |
| Attack Complexity | Low / High | Reliability of exploit; whether special conditions are needed |
| Attack Requirements | None / Present | Whether prerequisites (specific config/state) are required |
| Privileges Required | None / Low / High | Privilege level the attacker needs |
| User Interaction | None / Passive / Active | Whether a user must take action for exploitation |
| Confidentiality Impact | None / Low / High | Impact on data secrecy |
| Integrity Impact | None / Low / High | Impact on data or system integrity |
| Availability Impact | None / Low / High | Impact on system availability |
Each metric includes a written rationale tied to the specific code snippet and vulnerability context.
In addition to CVSS 4.0, each finding also receives a CVSS 3.1 score derived from the same analysis. The 3.1 vector includes the Scope metric (Unchanged or Changed), which indicates whether an exploit can affect resources beyond the vulnerable component. This dual-scoring approach lets you use CVSS 4.0 for modern risk assessment while retaining CVSS 3.1 compatibility for existing dashboards, compliance frameworks, and integrations that have not yet adopted 4.0.

Older findings that were originally scored with CVSS 4.0 only are automatically backfilled with derived CVSS 3.1 vectors and severity scores, so you can filter and sort all findings by CVSS 3.1 without gaps in coverage.
The overall finding priority score combines severity with confidence: severity (0–10) × confidence (0–10) yields a 0–100 scale for ranking and filtering. This means a high-severity finding with low confidence ranks lower than a medium-severity finding with high confidence, helping you focus on issues that are both impactful and reliably identified.
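The priority formula is straightforward to reproduce. A one-line sketch (the function name is ours, not ZeroPath's API):

```python
def priority(severity: float, confidence: float) -> float:
    """severity (0-10) x confidence (0-10) -> priority on a 0-100 scale."""
    return severity * confidence

# A high-severity, low-confidence finding ranks below a
# medium-severity, high-confidence one:
high_sev_low_conf = priority(9.0, 2.0)   # 18.0
med_sev_high_conf = priority(5.0, 9.0)   # 45.0
```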

Language Coverage

Python

JavaScript / TypeScript

Java

Go

Ruby

PHP

C / C++

C#

Rust

Swift / Objective-C

Kotlin

Scala

Dart

Elixir

Nim

AL (Business Central)

Each finding is tagged with the detected language of the affected file, enabling language-aware filtering and remediation guidance.

Key Capabilities

75% Fewer False Positives

AI validation reviews every finding in context, filtering out issues that aren’t actually exploitable.

Novel Vulnerability Detection

Deep AI analysis discovers vulnerabilities that pattern-matching alone cannot find, including business logic flaws.

Application-Aware Findings

Each finding is tied to the specific service or module it affects, with tech stack and architecture context.

Data Flow Visualization

Source-to-sink traces show exactly how tainted data reaches vulnerable operations. Every exploitable finding includes a mandatory data flow path.

When a finding involves a custom source or custom sink declaration, the data flow details include the custom declaration name and description, so you can see exactly which custom-defined entry point or sensitive operation was matched. Source and sink previews display a Custom Source or Custom Sink badge when the finding originated from a custom declaration, and you can click through directly to view or edit the declaration from the issue detail pane.

For SCA vulnerabilities, the data flow tree includes the manifest file where the affected dependency is declared, giving you direct visibility into which dependency file introduces the risk.

The scan explorer includes repository and scan navigation dropdowns so you can quickly switch between repositories and compare results across recent scans without leaving the page. Findings that are linked to an application but not to a specific source handler appear under a Standalone Findings node in the scan explorer tree, so they are always visible and never lost.

Attack Step Breakdown

Each finding includes a step-by-step description of how an attacker would exploit the vulnerability, from prerequisites through impact — helping AppSec engineers quickly assess risk. An expandable Exploit Walkthrough shows the ordered sequence of actions an attacker would take. These attack steps are also available as structured data in the issue detail API response, making them easy to consume in automated workflows and integrations. When exporting issues via the API, you can include attack steps in the export by setting the includeExploitWalkthrough SARIF option.

Exploitability Preconditions

Every finding includes a list of preconditions — factors that affect whether the vulnerability is actually exploitable but that the scanner cannot fully verify from code alone. Examples include whether the endpoint is internet-facing, whether a WAF or API gateway filters malicious input, and whether authentication is enforced by middleware not visible in the scanned code. You can expand each precondition to view the supporting scanner evidence from your codebase, helping you quickly assess real-world risk without re-investigating the finding yourself. Preconditions are also available as structured data in the issue detail API response, making them easy to consume in automated workflows and integrations. When exporting issues via the API, you can include preconditions in the export by setting the includePreconditions SARIF option.
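Since preconditions and attack steps are opt-in for SARIF exports, an export request might carry both option flags. The payload shape below is an assumption for illustration; only the option names `includePreconditions` and `includeExploitWalkthrough` come from this documentation.

```python
# Hypothetical sketch of a SARIF export request body. The payload
# structure is assumed; only the two option names are documented.
import json

payload = {
    "format": "sarif",
    "sarifOptions": {
        "includePreconditions": True,        # documented SARIF option
        "includeExploitWalkthrough": True,   # documented SARIF option
    },
}
body = json.dumps(payload)  # serialized request body for the export API
```

Consult the API reference for the actual endpoint and payload schema before relying on this shape.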

Deterministic Results

Same codebase at the same commit produces the same findings, enabling reliable audits and compliance tracking.

Auto-Fix Capable

Qualifying findings can be automatically patched with AI-generated code fixes.

Multi-Engine Secrets Detection

Secrets scanning runs multiple detection engines in parallel, cross-referencing results to maximize coverage while deduplicating overlapping findings from different engines. Each secret finding includes a validation state — Confirmed (verified active), Disconfirmed (verified inactive or false positive), or Unknown (validation could not run) — along with a reason when validation is inconclusive.

On-Demand Investigation

Request a deeper re-investigation of any finding using larger AI models for higher-confidence validation results. You can trigger investigations from the issue detail view — including via the Reinvestigate action in the issue actions menu — and monitor progress in real time.

Issue Chat

Ask questions about any finding directly from the issue detail view. A dedicated chat panel scoped to the specific issue can explain how the vulnerability can be exploited, suggest fixes, and assess reachability — all without leaving the dashboard. You can also delete chat threads you no longer need.

Branch-Aware Issue Tracking

The global issues list displays the scan target branch for each finding, making it easy to see which branch a vulnerability was detected on. Combined with per-branch deduplication, you can track and filter findings across branches without confusion.

Investigation

You can request on-demand investigation of any finding to get a deeper, higher-confidence assessment. When you trigger an investigation, ZeroPath re-evaluates the finding using more powerful AI models:
  • SAST findings are re-validated with a larger model that examines exploitability with more context and nuance.
  • SCA direct dependency findings undergo reachability analysis to determine whether the vulnerable code path is actually reachable in your project. Results are labeled as “Likely Reachable” or “Likely Not Reachable” to reflect the probabilistic nature of the analysis, and include a detailed reachability summary explaining the reasoning.
  • SCA transitive dependency findings go through a two-phase process: first a triage step determines whether exploitability can even be assessed, and if so, a full reachability analysis traces the dependency chain to see if the vulnerability is reachable through parent packages.
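The two-phase transitive-dependency flow can be summarized as triage first, reachability second. A minimal sketch under stated assumptions: the function and the `triage` / `reachability` callables are hypothetical stand-ins for the AI stages.

```python
# Illustrative sketch of the two-phase transitive-dependency investigation.
# All names are hypothetical; labels mirror the documented wording.
def investigate_transitive(finding, triage, reachability):
    # Phase 1: can exploitability even be assessed for this finding?
    if not triage(finding):
        return {"label": "Informational"}  # not yet assessable
    # Phase 2: trace the dependency chain through parent packages.
    reachable = reachability(finding)
    return {"label": "Likely Reachable" if reachable else "Likely Not Reachable"}

r_unknown = investigate_transitive({}, triage=lambda f: False, reachability=lambda f: True)
r_reach = investigate_transitive({}, triage=lambda f: True, reachability=lambda f: True)
r_not = investigate_transitive({}, triage=lambda f: True, reachability=lambda f: False)
```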
SCA findings also display the manifest file path with a direct link to view the file in your repository. When a dependency is detected from a compiled binary rather than a manifest file, this is indicated in the package details. The issue detail header for SCA findings includes direct links to the relevant security advisory and associated CVE (GHSA advisories link to deps.dev, CVE advisories link to NVD), so you can quickly access upstream vulnerability information without leaving the platform.

Investigation results update the finding's validation state (confirmed, disconfirmed, or unknown) and include a detailed security assessment explaining the reasoning. You can request investigations via the API, MCP tools, or directly from the issue detail view in the dashboard using the Reinvestigate option in the actions menu. While an investigation is in progress, the issue detail view shows a live status indicator, and results appear automatically when the analysis completes.

Findings confirmed as exploitable display a prominent banner with the full security assessment, making it easy to identify high-priority issues at a glance. Findings with an unknown validation state are labeled as Informational, indicating they match a known advisory but have not yet been investigated for exploitability. You can also request investigations in bulk across multiple issues at once, which is useful for triaging a batch of findings. When a finding cannot be automatically patched, the platform displays remediation steps with specific guidance on how to address the vulnerability manually.

Adoption Checklist

1

Connect Your Repository

Add your repo via GitHub App, GitLab, Bitbucket, or direct URL. See Quick Start.
2

Confirm SAST Is Enabled

SAST is enabled by default in scanner settings for all repositories.
3

Run Your First Scan

Trigger a full scan from the dashboard or wait for the next scheduled scan.
4

Review Findings

Browse findings in the dashboard, grouped by severity, application, and category. You can filter findings by type — including SAST, SCA, Secrets, IaC, CI/CD, and EoL — to focus on what matters most to your team.
5

Configure Thresholds

Adjust confidence filtering and PR check failure thresholds to match your team’s tolerance.
6

Enable PR Scanning

Turn on PR scanning to catch vulnerabilities before they reach your main branch.
7

Set Up Integrations

Route findings to Jira, Linear, Slack, or webhooks. You can export individual findings or bulk-export multiple findings at once to Jira or Linear directly from the issues list.

The export dialog lets you choose between Regular CSV, CASA Relevant CSV, and SARIF formats. Before exporting, you can use granular controls to override the severity, category, and status filters independently from your current view; for example, exporting only Critical and High severity issues across all statuses without changing the filters on your dashboard. A live preview shows the exact number of issues that will be included in your export. For SARIF exports, you can optionally include preconditions and exploit walkthrough context in the output.
8

Define Custom Rules

Add natural-language rules for organization-specific security policies. See Custom Rules.