
2025 OPA Community Survey

This page presents the results of the 2025 OPA Community Survey. The survey was run online at the end of 2025 and in person at KubeCon in Atlanta.

General

This section contains questions seen by all respondents and is intended to give a high-level overview of how OPA is used.

Which of the following use cases do you have for OPA?

What stage of production is your most advanced use case for OPA?

How long have you been using OPA?

Response labels have been normalized across survey years to enable comparison (e.g., 'Just started' and '< 3 months' are treated as equivalent).

How many OPA instances do you have deployed?

How many teams use OPA within your company?

If you haven't been able to use OPA for a project or use case, why was that?

Which of the following open source policy libraries do you make use of?

I make use of the following management API features

* REST API for policy management or non-policy-evaluation use cases

My use case for OPA requires the response latency to be under...

What typical latencies do you observe for OPA responses?

I learned Rego and OPA using the following resources

How do you use OPA for GenAI workloads?

Where I use OPA

I wish Rego was more like...

Main programming languages used at my workplace

Personal text editor or IDE of choice

How did you first find out about OPA?

59 responses were grouped and summarized as follows

  • Work/Colleagues (18 responses): Discovered through workplace connections, whether introduced by "architects" or "consultants," learning from coworkers, or joining teams where OPA was already in use.
  • Kubernetes/Cloud Native Ecosystem (15 responses): Found via the cloud native community through KubeCon events, CNCF resources, Gatekeeper, or while researching "PodSecurityPolicy being deprecated by Kubernetes."
  • Research/Web Search (12 responses): Discovered independently while searching for authorization and policy solutions, from "browsing for available policy frameworks" to finding "a way to replace our home-developed authorization microservice."
  • Product Integrations (6 responses): Encountered through existing tools and documentation, including "Trino integration," "Envoy ext_authz examples," and "HashiCorp's website."

Any success and failures with generative AI tooling for Rego and OPA learning?

39 responses were summarized and grouped; generative models still struggle with Rego syntax.

  • Failures/Limitations (15 responses): LLMs struggle with Rego's unique syntax, often "hallucinating" features, generating "python-like code," or producing outdated v0 syntax like "the old name { ... } syntax without the if" despite recent language updates.
  • Mixed/Moderate Success (7 responses): AI can be helpful for specific tasks like "debugging operations," "drafting the rego rules," or "convert some pretty nasty v0 rego to v1," though manual edits are typically still required.
  • Haven't Tried/N/A (12 responses): Many respondents haven't explored AI tooling yet, with some noting they "prefer understanding what I'm doing first" or simply haven't needed to update policies since GenAI became widely available.
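The v0-versus-v1 syntax gap the responses describe is concrete: since OPA 1.0, rule bodies require the `if` keyword, which older model-generated code often omits. A minimal sketch (the `allow` rule is illustrative, not from the survey):

```rego
package example

# v0 style (pre-OPA 1.0): a rule body with no `if` keyword.
# This is the "old name { ... } syntax" and no longer parses
# under the default v1 syntax:
#
#   allow { input.user == "admin" }

# v1 style (OPA 1.0+): `if` is required before the rule body.
default allow := false

allow if input.user == "admin"
```

Models trained mostly on pre-1.0 examples tend to emit the commented-out form, which is why respondents report "outdated v0 syntax" in generated policies.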

Any missing integrations or coverage for new use cases?

32 responses were summarized as follows, showing interest across a broad range of integration options.

  • Data & State Management (7 responses): Users want better solutions for handling external data, from "on demand data ingestion" for high-volume volatile data to "a good shared cache for OPAs in same kubernetes namespace" and the ability to "load a baseline dataset from a bundle file and amend that via rest api."
  • Standards & Protocol Support (6 responses): Requests for broader protocol and API support, including "OpenID AuthZEN API support," "OPA running as gRPC," and more "first-party" integrations like "AuthZen and Rebac out-of-the-box."
  • Enterprise & Operational Features (6 responses): Need for production-ready capabilities such as "full stack OTEL integration with Grafana," "disk buffering for decision logs," "s3 export of logs," and "a builtin control plane."
  • Documentation & Tooling (4 responses): Gaps in guidance and developer experience, from "better documentation on OPA side car components" to "proper CLI installation for Linux machines" and package management where "a OPA lib can depend on remote OPA policy."

Stories to share about OPA adoption in your team and business?

28 responses were summarized, showing a broad range of experiences.

  • Successful Implementations (8 responses): Teams have found success using OPA for diverse use cases, from "fine-grained Kubernetes security policy" with Gatekeeper to "auth across multiple Trino instances" and "decoupling permissions from our legacy monolith system," with one team seeing dramatic improvements where "10000 policies causing a JWT token request timeout" dropped to "around 200ms" with OPA.
  • Learning Curve Challenges (5 responses): Adoption friction centers on Rego's declarative nature, where "teams have a hard time learning Rego" and "error handling is weird" — particularly the concept that "if some value is undefined the top rule is also undefined."
  • Architectural & Integration Hurdles (5 responses): Organizations struggle with complex integration patterns, from "how to handle 'global' or enterprise policies at the gateway level and 'local' policies at the application level" to converting "arbitrary rules to SQL for filter requests" and wanting "better GraphQL support."
  • Organizational & Trust Concerns (3 responses): Business adoption faces resistance, with "strong pushback on OPA and externalized authz in general" and a need for "supporting tooling" like control planes and management apps to ease introduction.
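The "undefined" semantics that trips teams up is worth spelling out: if any expression in a rule body is undefined, the whole rule evaluates to undefined rather than false. A small sketch (the `is_admin` rule name is illustrative, not from the survey):

```rego
package example

# If input.user.role is absent, the comparison below is undefined,
# so is_admin is undefined too -- not false.
is_admin if input.user.role == "admin"

# A default collapses "undefined" to a concrete value for callers.
default is_admin := false
```

Without the `default`, a caller checking `is_admin == false` would get no match at all for inputs missing `user.role`, which is the behavior respondents describe as "error handling is weird."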

If you want to but haven't yet, what's stopped you from contributing to OPA?

34 responses tell a clear story: lack of time, among other reasons, is what keeps people from contributing to OPA.

  • Time Constraints (13 responses): The most common barrier is simply bandwidth, with many citing "only so much time in the day," competing "priorities at work," and "other tasks with higher priorities."
  • Skills & Knowledge Gaps (7 responses): Many feel underprepared, whether "not familiar with go," still on the "learning curve," or feeling "not that advanced user yet" to know what they could contribute.
  • Unclear Entry Points (3 responses): Some want to contribute but lack direction, being "not sure what other contributions are welcome or needed," actively "looking for my first issue to contribute to now," or needing "good first issues."
  • Company Policies & Restrictions (2 responses): Organizational barriers prevent some from participating, including "company policy on open-source contribution" and workplaces "not known for open-source contributions."

Community Project Use

Survey respondents are only shown project-specific questions if they indicate they are using that project.

Gatekeeper

This section contains questions only shown to users who answered yes to using Gatekeeper and is intended to give a more detailed overview of OPA Gatekeeper users and their use cases.

How long have you been using Gatekeeper for?

How satisfied are you with using OPA Gatekeeper?

How easy or difficult was it to deploy and configure Gatekeeper?

Gatekeeper performance met our needs

How likely are you to recommend OPA Gatekeeper to a colleague or peer in the cloud-native community?

How would you describe your expertise with Kubernetes?

How would you describe your expertise with policy-as-code tools?

Which Kubernetes distributions or platforms do you use Gatekeeper with?

How would you best describe your current use of OPA Gatekeeper?

Which types of policies have you primarily used with Gatekeeper?

Which Gatekeeper features do you currently use?

Which other policy management tools have you used in the past year?

Do you have a need to extend or replace Kubernetes RBAC with more fine-grained authorization?

5 responses

  • Time-Based & Contextual Access (1 response): Users want dynamic authorization, such as granting "time based access to support engineers to 'unblock' users in emergency situations where the risk of being down outweighs the security violations."
  • Resource Segmentation (1 response): Teams need finer controls "to restrict certain use cases to segregated nodes."

What features would you like to see added to Gatekeeper?

What changes or new features would most improve your experience with Gatekeeper?

4 responses were summarized

  • Stability & Upgrades (2 responses): Users prioritize reliable version management, whether valuing "smooth upgrades" or needing to "get up to speed with the latest versions" before evaluating new features.

What, if anything, do you like about using OPA Gatekeeper?

8 responses were summarized

  • Reliability & Standardization (3 responses): Users appreciate that Gatekeeper simply works — "it works," "it just works," and it's become "the de-facto standard versus alternatives like Kyverno."
  • Functionality & Integration (3 responses): Teams value the capabilities it provides, from "advanced admission control" and "policy guardrails/enforcement" to easy "integration into our current build process."

What, if anything, do you dislike about using OPA Gatekeeper?

9 responses were summarized

  • Complexity & Usability (3 responses): Users find Gatekeeper "overcomplicated for simple use cases," with the "constraints + constraint templates" structure feeling like "a bit too much cruft" for simple tasks like enforcing a label, and "not always easy to debug policy failures."
  • Architecture Limitations (3 responses): Technical gaps frustrate users, particularly the "lack of shared lib support and hierarchical policy support" and "no simple way to embed rego policy at scale into the constraint template" while maintaining separate files for unit tests.

OPA Envoy Plugin

These questions were only shown to users who said they were using opa-envoy-plugin. They are intended to shed light on where OPA Envoy Plugin is used, and how it can be made easier to use.

How do you deploy OPA-Envoy?

What technologies do your services use alongside OPA-Envoy?

How do you manage policy for OPA-Envoy?

What challenges have you encountered when adopting OPA-Envoy?

7 responses were summarized

  • Documentation & Guidance Gaps (2 responses): Users struggle with "lack of documentation" and want clearer guidance on patterns like "leveraging SPIRE/SPIFFE identities for microservice authentication" — noting "there might be room for clearly documenting that as a good option."
  • Development & Data Challenges (2 responses): Practical hurdles include "local development without kubernetes" requiring nginx workarounds, and "getting external data into it," leading teams to build custom bundle servers.
  • Release & Policy Management (2 responses): Operational concerns include being "always behind the main OPA releases by few days" and difficulties "evaluating multiple policies."

What features would you like to see added to OPA-Envoy?

Do you find the documentation for OPA-Envoy to be sufficient?

5 responses

  • Insufficient Documentation (2 responses): Users find gaps in the current docs, noting they do "not provide details on the interfaces" and lack clarity on "the mapping between a request and a policy" or "how to evaluate multiple policies against a single request."

What additional integration documentation would you find most helpful?

Tell us anything else you'd like us to know about how we can improve OPA-Envoy

3 responses were summarized

  • Use Case Guidance (1 response): Users want documentation for complex scenarios, particularly "global and local policies" in "a multi-tenant (namespaced) cluster where there are enterprise level policies at the gateway and local policies at tenants."

Conftest

The following questions were only answered by users who said they were using the tool. They show how users typically invoke Conftest and where they source policies from.

Which conftest commands does your organization use?

How does your organization distribute and consume Rego policies used with conftest?

Regal

These questions were only shown to users of Regal, the main developer tooling stack for Rego programmers. They shed light on how different features are used and where users find value in the project.

How do you use Regal?

How often do you write Rego code in your team?

Does Regal help you find bugs?

Does Regal's documentation help you fix the bugs you find?

Do you have any other feedback or comments to share about Regal?

19 responses

  • Strong Appreciation (7 responses): Users are enthusiastic about Regal, calling it "best tool ever" that "augments Rego learning," with many finding it "the best source of documentation for idiomatic Rego" and valuing how it "defines some common guidelines for how Rego code should look."
  • Feature Requests (4 responses): Users want expanded capabilities, particularly "find references" and "LSP rename" for improved workflow, more "auto fixable rules," and wish "the examples on the OPA website were using rego that has zero violations from regal."
  • Usability Challenges (4 responses): Some encounter friction, including Regal not reloading properly in VS Code requiring full restarts, difficulty with the debugger in Zed, rules that are "hard to understand from documentation," and confusion "when migrating code pre v1."

Demographics

These questions are shown to all respondents and are intended to be used for comparisons with other questions and other survey years.

Years of professional experience

Your Role/title

Country of Residence

Company Size