Description
Hi Checkov maintainers,
I’m opening this as feedback rather than a bug report. After running a proof-of-concept using Checkov on a real Azure infrastructure codebase, I wanted to share observations and suggest a documentation improvement to better set expectations for Azure users.
Context
Our infrastructure is defined primarily in Bicep, structured as a non-trivial repository:
- Multiple Bicep entry points
- Use of local modules
- Parameterized deployments
- Loops and conditions
- Separate parameter files (.bicepparam / ARM parameters)
This is a fairly typical enterprise Azure setup.
Observations
1. Native Bicep scanning is very limited
Checkov’s Bicep support appears to rely on parsing logic that does not support a number of commonly used Bicep constructs. In practice:
- In several cases, valid Bicep files that compile and deploy successfully have failed to parse in Checkov
- Common constructs can either cause parse errors or be silently skipped
- This is not limited to recently introduced language features
As a result, scanning a realistic Bicep repository can produce partial results or no meaningful checks, without clear failure signals.
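One way to mitigate the lack of failure signals is a CI guard that refuses to treat "zero checks executed" as a clean pass. The sketch below assumes Checkov's JSON output (`checkov -d . --framework bicep -o json`) contains a top-level "summary" object with passed/failed/parsing_errors counts; verify the exact shape against your Checkov version. The sample results file stands in for real scan output:

```shell
# Stand-in for real scan output; in CI this would come from:
#   checkov -d . --framework bicep -o json > results.json
cat > results.json <<'EOF'
{"summary": {"passed": 0, "failed": 0, "skipped": 0, "parsing_errors": 3}}
EOF

# Fail loudly when no checks actually ran, instead of passing silently.
python3 - results.json <<'EOF'
import json, sys

summary = json.load(open(sys.argv[1]))["summary"]
checks_run = summary["passed"] + summary["failed"]
if checks_run == 0:
    # Nothing was evaluated: surface it as an explicit warning/failure.
    print(f"WARNING: 0 checks executed (parsing_errors={summary['parsing_errors']})")
EOF
```

A guard like this at least makes partial results visible, even if it cannot fix the underlying parsing gaps.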
2. Bicep → ARM → Checkov is not a practical workaround
The workaround of compiling Bicep to ARM and scanning ARM templates also breaks down in practice:
- The typical Bicep compilation output is a parameterized ARM template with a separate parameters file
- In this form, Checkov may report no findings (without failing), effectively skipping validation
- Meaningful validation appears to require fully materialized ARM templates with parameters inlined into a single file
In Azure, such fully expanded templates typically only exist post-deployment or require fetching per-deployment artifacts via ARM APIs, which is impractical at scale and unsuitable for pre-deployment validation.
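For reference, the workaround flow we tried looks roughly like the sketch below. The file names (main.bicep, compiled.json) are hypothetical, and the availability guard is only there to keep the sketch self-contained; this is an illustration of the limitation, not a recommended pipeline:

```shell
if command -v az >/dev/null 2>&1 && command -v checkov >/dev/null 2>&1; then
  # Compile the Bicep entry point to a (still parameterized) ARM template.
  az bicep build --file main.bicep --outfile compiled.json
  # Scan the compiled template with the ARM framework. Because parameter
  # values remain external, parameter-dependent checks may be skipped
  # without any explicit failure being reported.
  checkov -f compiled.json --framework arm
else
  echo "az/checkov not installed; skipping demo"
fi
```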
3. Expectations vs. reality
The current documentation states that Checkov “supports Azure Bicep,” which is technically true for simple cases but misleading for more realistic usage.
In practice, Checkov may work for simple, single-file Bicep templates, but it does not work well or reliably for complex, modular, parameterized Bicep codebases.
Suggested improvement
I’m not asking for feature parity or full Bicep language support.
What would be extremely helpful is clearer documentation, for example:
- A statement describing the current maturity level of Bicep support
- Explicit limitations (modules, parameters, certain language constructs, registry modules, etc.)
- Guidance on when Checkov is not a good fit for Azure/Bicep-heavy environments
This would save users considerable time and avoid incorrect assumptions during tool selection.
Related Issues
- #5320 – Bicep parsing error on valid templates
- #5321 – Parsing failure in Bicep output loops
- #4845 – Issues scanning Bicep modules published in Azure Container Registry
- #6640 – join() function in Bicep causes parsing failure
- #6682 – Bicep framework scan fails for some valid files
- #6998 – Parsing error with multiline subscriptionResourceId function
Disclaimer: I have not verified whether any of the listed issues directly caused the behavior observed in our environment.
Some of these issues were closed due to inactivity rather than confirmed resolution.
They are included to illustrate recurring patterns reported by other users and the gap between the stated Bicep support and real-world usage.
Summary
Checkov appears to be a strong tool in other contexts, but for Azure environments that rely heavily on Bicep, the current experience does not match the documentation.
Making this clearer would significantly improve user experience, even without code changes.
Thanks for the work on the project, and for considering this feedback.