Introduction
A logic gap in Apache Polaris's storage location validation allows a user with table-settings privileges to redirect metadata writes to an arbitrary storage location and subsequently receive temporary cloud storage credentials scoped to that location. For organizations running multi-tenant data lakehouses on shared cloud storage, this CVSS 9.9 vulnerability quietly turns a routine ALTER TABLE property change into a path to cross-table or cross-bucket data exposure and corruption.
Apache Polaris is an open-source metadata and catalog service purpose-built for Apache Iceberg tables in modern data lakehouse architectures. Originally donated to the Apache Software Foundation as an incubating project, it recently graduated to top-level Apache status and serves as a foundational catalog layer for organizations managing large-scale analytical data across cloud storage. Its role in credential vending and storage access control makes vulnerabilities in its validation logic particularly consequential.
Technical Information
Root Cause: Missing Property in Location Change Detection
In Apache Iceberg, metadata files are control files that tell readers which data files belong to a table and which table version to read. The `write.metadata.path` property is an optional table setting that directs Polaris where to write those metadata files.
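As a concrete illustration of how this property steers writers, here is a minimal sketch of the resolution rule. The real logic lives inside Iceberg's location providers; the helper class and method below are hypothetical and exist only to show the observable behavior:

```java
import java.util.Map;

public class MetadataPathResolver {
    // If write.metadata.path is set, it overrides the default; otherwise
    // metadata files land under "<table location>/metadata".
    public static String metadataDir(String tableLocation, Map<String, String> props) {
        String override = props.get("write.metadata.path");
        return (override != null) ? override : tableLocation + "/metadata";
    }

    public static void main(String[] args) {
        String table = "s3://lake/warehouse/db/orders";
        // Default: metadata stays inside the table's own directory tree.
        System.out.println(metadataDir(table, Map.of()));
        // Overridden: metadata writes are redirected wherever the property points.
        System.out.println(metadataDir(table,
                Map.of("write.metadata.path", "s3://victim-bucket/other-table")));
    }
}
```

The override is exactly what makes the property security-sensitive: whoever controls it controls where Polaris writes metadata.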
The vulnerability resides in the `doCommit()` method of `IcebergCatalog.java`. This method contains a conditional branch that determines whether the requested table locations have changed. When a change is detected, Polaris runs a battery of validations: location allowlist checks, overlap checks, and metadata-file-in-table-directory checks. The old code compared only two values to decide whether locations had changed:

- The table's base `location()`
- The `USER_SPECIFIED_WRITE_DATA_LOCATION_KEY` property
Critically, it did not check whether `USER_SPECIFIED_WRITE_METADATA_LOCATION_KEY` (`write.metadata.path`) had changed. Because this property was absent from the comparison, modifying it through an `ALTER TABLE`-style settings change caused the entire validation branch to be skipped. Polaris would then write new table metadata to the attacker-specified location before any location validation ran.
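The flawed two-way comparison can be sketched as follows. Class, method, and property names are simplified stand-ins for the real `doCommit()` logic, which compares `TableMetadata` objects:

```java
import java.util.Map;
import java.util.Objects;

public class LocationChangeCheck {
    static final String DATA_KEY = "write.data.path";
    static final String METADATA_KEY = "write.metadata.path";

    // Pre-patch shape of the check: only the base location and the
    // data-path property are consulted. write.metadata.path is ignored.
    public static boolean changedPrePatch(String baseLoc, Map<String, String> baseProps,
                                          String newLoc, Map<String, String> newProps) {
        return !newLoc.equals(baseLoc)
            || !Objects.equals(baseProps.get(DATA_KEY), newProps.get(DATA_KEY));
    }

    public static void main(String[] args) {
        // The metadata path moves to another bucket, yet no change is
        // detected, so the validation branch is skipped entirely.
        boolean detected = changedPrePatch(
                "s3://lake/t1", Map.of(),
                "s3://lake/t1", Map.of(METADATA_KEY, "s3://victim-bucket/"));
        System.out.println("location change detected: " + detected); // false
    }
}
```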
A secondary issue compounded the problem: `loadFileIOForTableLike()`, which refreshes storage credentials, executed before the location validation checks. This meant Polaris would issue storage credentials for potentially malicious locations before it had even evaluated them.
Attack Flow
The exploitation path proceeds through several stages:

1. **Identify a target catalog.** The attacker locates a Polaris-managed catalog where they hold permission to change table settings via an `ALTER TABLE`-style operation. No row-level INSERT, SELECT, UPDATE, or DELETE permissions are required.
2. **Modify `write.metadata.path`.** The attacker issues a property change setting `write.metadata.path` to a storage location they wish to access. This could be another table's prefix, a broader storage prefix, or even a bucket root.
3. **Bypass validation.** Because `doCommit()` does not include `write.metadata.path` in its location change detection, the entire validation branch is skipped. Polaris writes new table metadata to the attacker-specified location without checking it against the allowlist or performing overlap validation.
4. **Persist the poisoned path.** If the catalog is configured with `polaris.config.allow.unstructured.table.location=true` and `allowedLocations` is broad enough to include the attacker-chosen target, the later `updateTableLike(...)` validation also accepts the location. Polaris persists the resulting metadata path into stored table state.
5. **Credential vending.** From this point forward, table-load and credential APIs return temporary cloud storage credentials for the attacker-chosen location without revalidating it. The attacker can now read, and potentially write to, any data and metadata Polaris can reach at that storage prefix.
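The stages above can be condensed into a toy model. Every class and method name here is hypothetical, invented only to show why an unvalidated property flows straight through to the credential scope:

```java
import java.util.HashMap;
import java.util.Map;

public class ExploitFlowSketch {
    static class ToyCatalog {
        String tableLocation = "s3://lake/warehouse/db/orders";
        Map<String, String> props = new HashMap<>();

        // Mirrors the flawed doCommit(): a metadata-path-only change is
        // never recognized as a location change, so validation is skipped.
        void alterTable(String key, String value) {
            boolean locationChanged = false; // write.metadata.path not consulted
            if (locationChanged) {
                throw new IllegalStateException("validation would reject this");
            }
            props.put(key, value); // poisoned path persists into table state
        }

        // Credential vending scopes to the stored metadata path, unvalidated.
        String vendCredentialScope() {
            return props.getOrDefault("write.metadata.path", tableLocation);
        }
    }

    public static void main(String[] args) {
        ToyCatalog catalog = new ToyCatalog();
        catalog.alterTable("write.metadata.path", "s3://victim-bucket/");
        // Credentials are now scoped to the attacker-chosen prefix.
        System.out.println(catalog.vendCredentialScope());
    }
}
```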
Configuration-Dependent Impact
The severity varies based on catalog configuration:
| Configuration State | Risk Level |
|---|---|
| `allow.unstructured.table.location=true` with broad `allowedLocations` | Critical. Polaris persists the attacker-chosen path and vends credentials for it. |
| `allow.unstructured.table.location=true` with narrow `allowedLocations` | Reduced. The attacker-chosen target must fall within the restricted allowlist. |
| `allow.unstructured.table.location=false` | Moderate. The pre-write check is still skipped and metadata is written, but persistence is usually rejected by later validation. |
Public project materials confirm that `polaris.config.allow.unstructured.table.location=true` is a real, supported compatibility and layout mode, not a contrived lab-only prerequisite.
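For concreteness, here is a hypothetical catalog-creation payload sketching the high-risk configuration state. The field layout loosely follows the Polaris management API, but the exact property placement, names, and values are illustrative assumptions, not a verified configuration:

```json
{
  "name": "analytics",
  "type": "INTERNAL",
  "properties": {
    "default-base-location": "s3://lake/warehouse/",
    "polaris.config.allow.unstructured.table.location": "true"
  },
  "storageConfigInfo": {
    "storageType": "S3",
    "allowedLocations": ["s3://lake/"]
  }
}
```

The combination to watch for is the unstructured-location flag together with an `allowedLocations` entry broad enough to cover prefixes belonging to other tables or tenants.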
Scope of Exposure
The attacker-chosen area is not limited to the poisoned table's own files. If it is a broader storage prefix, another table's prefix, or even a bucket or container root, the resulting disclosure or corruption extends to any data and metadata Polaris can reach there. Even before the credential-vending step, Polaris itself performs the metadata write to the unchecked location, meaning the core defect is the skipped pre-write location check, not solely the later credential issuance.
Patch Information
The fix for CVE-2026-42812 was shipped in Apache Polaris 1.4.1 via pull request #4330 ("Improve locations handling"), authored by Robert Stupp and merged on May 1, 2026 into the release/1.4.x branch. The merge commit is d6bbcc3. It touched two files, the core catalog implementation and its test suite, with 144 additions and 30 deletions across the changeset.
The patch addresses the vulnerability in several tightly coupled ways:
1. New `requestedTableLocationsChanged()` Method
The sprawling inline comparison was extracted into a dedicated helper that now checks three properties instead of two:
```java
private boolean requestedTableLocationsChanged(TableMetadata base, TableMetadata metadata) {
  return !metadata.location().equals(base.location())
      || !Objects.equal(
          base.properties().get(IcebergTableLikeEntity.USER_SPECIFIED_WRITE_DATA_LOCATION_KEY),
          metadata.properties().get(IcebergTableLikeEntity.USER_SPECIFIED_WRITE_DATA_LOCATION_KEY))
      || !Objects.equal(
          base.properties().get(IcebergTableLikeEntity.USER_SPECIFIED_WRITE_METADATA_LOCATION_KEY),
          metadata.properties().get(IcebergTableLikeEntity.USER_SPECIFIED_WRITE_METADATA_LOCATION_KEY));
}
```
The addition of the `USER_SPECIFIED_WRITE_METADATA_LOCATION_KEY` comparison is the core of the fix. It ensures that changes to `write.metadata.path` now trigger the full location validation branch, closing the bypass.
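The effect of the added comparison can be demonstrated with a simplified stand-in for the helper (names abbreviated; the real method operates on `TableMetadata` objects and Guava's `Objects.equal`):

```java
import java.util.Map;
import java.util.Objects;

public class PatchedLocationCheck {
    static final String DATA_KEY = "write.data.path";
    static final String METADATA_KEY = "write.metadata.path";

    // Patched shape of the check: three comparisons instead of two.
    // A metadata-path-only change now registers as a location change.
    public static boolean changed(String baseLoc, Map<String, String> baseProps,
                                  String newLoc, Map<String, String> newProps) {
        return !newLoc.equals(baseLoc)
            || !Objects.equals(baseProps.get(DATA_KEY), newProps.get(DATA_KEY))
            || !Objects.equals(baseProps.get(METADATA_KEY), newProps.get(METADATA_KEY));
    }

    public static void main(String[] args) {
        // The same attack input that slipped past the old check now
        // trips the detection and routes into full validation.
        boolean detected = changed(
                "s3://lake/t1", Map.of(),
                "s3://lake/t1", Map.of(METADATA_KEY, "s3://victim-bucket/"));
        System.out.println("location change detected: " + detected); // true
    }
}
```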
2. Credential Refresh Moved After Validation
Before the patch, `loadFileIOForTableLike()` (which refreshes storage credentials) ran before the location validation checks. The patch reorders the flow so that `loadFileIOForTableLike()` now executes after all location validations pass. This is a defense-in-depth improvement, ensuring that credentials are never vended for an unapproved path.
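The corrected ordering amounts to a validate-then-vend guard, which can be sketched as follows; the names here are placeholders, not the actual Polaris call graph:

```java
import java.util.function.Predicate;

public class CommitOrdering {
    // Validation must complete before any credential refresh. If the
    // location is rejected, no storage access is ever granted.
    public static String commit(String requestedLocation, Predicate<String> isAllowed) {
        if (!isAllowed.test(requestedLocation)) {            // validate first
            throw new IllegalArgumentException("location rejected: " + requestedLocation);
        }
        return "credentials-scoped-to:" + requestedLocation; // vend only afterwards
    }

    public static void main(String[] args) {
        Predicate<String> allowlist = loc -> loc.startsWith("s3://lake/");
        System.out.println(commit("s3://lake/t1/metadata", allowlist));
        try {
            commit("s3://victim-bucket/", allowlist);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected before any credentials were issued");
        }
    }
}
```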
3. Unconditional Metadata File Location Validation
The old code ran `validateMetadataFileInTableDir()` only when `metadata.metadataFileLocation() != null`, which left a gap. The patch introduces a new `nextMetadataFileLocation()` helper:
```java
private String nextMetadataFileLocation(TableMetadata metadata) {
  return metadata.metadataFileLocation() != null
      ? metadata.metadataFileLocation()
      : metadataFileLocation(metadata, "metadata.json");
}
```
This always computes the next metadata file path, even when no explicit metadata file location has been set yet. The validation call then always runs with a concrete path, eliminating the null-based skip.
4. Refactored Validation Overload
A new overload of `validateMetadataFileInTableDir()` accepts explicit `tableLocation` and `metadataLocation` strings rather than extracting them from a `TableMetadata` object. This allows the method to validate the prospective write target rather than only the currently stored value, making it more flexible for pre-write validation.
5. Comprehensive Test Coverage
Two new tests were added to `AbstractIcebergCatalogTest.java`:
- `testUpdatePropertiesRejectsOutOfTableWriteMetadataLocation()` verifies that setting `WRITE_METADATA_LOCATION` to a path outside the table directory is properly rejected with a `BadRequestException` and that no metadata files are written to the attacker-chosen location.
- `testUpdatePropertiesAcceptsInTableWriteMetadataLocation()` confirms that setting `WRITE_METADATA_LOCATION` to a path within the table's own directory structure succeeds, the metadata file is written to the correct location, and the property is persisted.
Taken together, the patch closes the validation gap by treating `write.metadata.path` changes as location-altering operations that must pass the same security checks as any other storage path change, and it reorders credential issuance so that no storage access is granted until those checks have passed.
Affected Systems and Versions
Apache Polaris versions prior to 1.4.1 are affected. The vulnerability is present in any deployment where users have permission to modify table properties.
The severity and exploitability depend on catalog configuration:
- Deployments with
polaris.config.allow.unstructured.table.location=trueand broadallowedLocationsare at the highest risk, as the full persisted and credential vending variant is exploitable. - Deployments with
polaris.config.allow.unstructured.table.location=falsestill contain the underlying defect (skipped pre write location check) but the laterupdateTableLike(...)validation usually prevents persistence of out of tree metadata locations.
Organizations should upgrade to Apache Polaris 1.4.1, available through the official Polaris downloads page, Maven Central under the `org.apache.polaris` group, and Docker Hub under the `apache/polaris` and `apache/polaris-admin-tool` tags.
Vendor Security History
Apache Polaris 1.4.1 addresses four distinct security vulnerabilities, indicating a broader pattern of storage access control and credential scoping weaknesses in the platform:
| CVE Identifier | Description |
|---|---|
| CVE-2026-42809 | Authenticated low-privileged users can abuse staged table creation to mint broad temporary storage credentials for an attacker-chosen location. |
| CVE-2026-42810 | Polaris accepts literal star characters in namespace and table names, which are reused unescaped in S3 IAM resource patterns and prefix conditions. |
| CVE-2026-42811 | Crafted namespace or table names can cause short-lived GCS credentials to work across the entire configured bucket instead of a single table. |
| CVE-2026-42812 | No protection on `write.metadata.path`, leading to metadata-write bypass and credential vending. |
The clustering of these issues in a single release suggests that the credential-vending and storage-location-validation subsystems had not been subjected to thorough adversarial review before this cycle. The rapid turnaround on the 1.4.1 release nonetheless demonstrates a responsive security posture from the Apache Polaris community.
References
- NVD: CVE-2026-42812
- Apache Mailing List Advisory: CVE-2026-42812
- OpenWall OSS Security Notice
- Apache Polaris 1.4.1 Release Announcement
- GitHub Pull Request #4330: Improve locations handling
- Merge Commit d6bbcc3
- What is Apache Polaris? Unifying the Iceberg Ecosystem
- The Impact of Apache Polaris Graduating to Top Level Apache Project
- The Release of Apache Polaris 1.3.0 (Incubating)



