Releases: StyraInc/enterprise-opa
v1.21.0
This release includes an enhancement to the Apache Kafka data source, updates the OPA version used in Enterprise OPA to v0.64.1, and brings in various dependency bumps.
Kafka data source: prometheus metrics (per-instance)
Each instance of the Kafka data plugin now contributes a set of Prometheus metrics to the global metrics endpoint, named `kafka_MOUNTPOINT_METRIC`, where MOUNTPOINT is `foo:bar` for a Kafka data plugin configured to manage `data.foo.bar`. (Prometheus metric naming restrictions forbid both "." and "/" in metric names.)
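As an illustrative sketch (not Enterprise OPA's actual code), the mountpoint-to-metric-name translation described above can be expressed as follows; the metric name `messages_consumed` is a hypothetical placeholder:

```python
def kafka_metric_name(data_path: str, metric: str) -> str:
    """Derive a Prometheus metric name for a Kafka data plugin
    mounted at data_path, e.g. "data.foo.bar".

    Prometheus metric names may not contain "." or "/", so the
    mountpoint segments are joined with ":" instead.
    """
    mountpoint = ":".join(data_path.removeprefix("data.").split("."))
    return "kafka_{}_{}".format(mountpoint, metric)

# A plugin managing data.foo.bar yields metrics named kafka_foo:bar_*:
print(kafka_metric_name("data.foo.bar", "messages_consumed"))
# kafka_foo:bar_messages_consumed
```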
Kafka data source: logging enhancements
When run with log level "debug", the low-level Kafka client logs are often overwhelming. They are now suppressed by default, and can be enabled by setting the environment variable `EOPA_KAFKA_DEBUG`, for example:
EOPA_KAFKA_DEBUG=1 eopa run -s -ldebug -c eopa.yaml transform.rego
In addition, the consumer group (if configured) is now logged when the data source plugin is initialized. New key/value log fields also make it easier to read the batch size and transformation time from the logs.
VM: builtin function `json.unmarshal` is now natively implemented
This improves performance by lowering data conversion overhead. It also benefits Kafka transforms, since they always include a `json.unmarshal` call.
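As a minimal illustration of the builtin itself (a sketch, not taken from the Enterprise OPA codebase), `json.unmarshal` parses a JSON string into a Rego value:

```rego
package example

import rego.v1

doc := json.unmarshal(`{"user": {"roles": ["admin"]}}`)

allow if "admin" in doc.user.roles
```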
v1.20.0
This release includes an enhancement to the Apache Kafka data source, updates the OPA version used in Enterprise OPA to v0.63.0, and brings in various dependency bumps.
Kafka data source: consumer group support
By providing `consumer_group: true` in the Kafka data source configuration, Enterprise OPA will register the data plugin instance as its own consumer group with the Kafka broker.
This improves observability of the Kafka data plugin, since you can now use standard Kafka tooling to determine the status of your consuming Enterprise OPA instances, including the number of messages they lag behind.
Due to the way consumer groups work, each data plugin instance will form its own one-member consumer group.
The group name includes the Enterprise OPA instance ID, which is reset on restarts.
These two measures guarantee that the message consumption behaviour isn't changed: each (re)started instance of Enterprise OPA will read all the messages of the topic, unless configured otherwise.
For details, see the Kafka data source documentation.
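A minimal configuration sketch (field names follow the pattern from the Kafka data source documentation; the broker address, topic, and transform rule are placeholders):

```yaml
plugins:
  data:
    kafka.messages:                 # mounts the data at data.kafka.messages
      type: kafka
      urls:
        - broker.example.com:9092   # placeholder broker address
      topics:
        - updates                   # placeholder topic
      rego_transform: data.e2e.transform
      consumer_group: true          # register as a one-member consumer group
```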
Kafka data source: show `print()` output on errors
When a data source Rego transform fails, it can be difficult to debug, even more so when it depends on hard-to-reproduce message batches coming in from Apache Kafka.
To help with this, any `print()` calls in Rego transforms are now emitted, even if the overall transformation fails, e.g. with an object insertion conflict.
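For example, a transform along these lines (a sketch; the package name and message shape follow the common pattern from the Kafka data source docs, where message values arrive base64-encoded under `input.incoming`) will now emit its `print()` output even when two messages produce conflicting values for the same key:

```rego
package e2e

import rego.v1

transform[key] := payload if {
	some msg in input.incoming
	payload := json.unmarshal(base64.decode(msg.value))
	print("transforming message with key:", msg.key)
	key := payload.id
}
```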
v1.19.0
This release includes a few new features around test generation, as well as a Regal version bump.
Test Stub Generation
It is now possible to quickly spin up a test suite for a policy project with Enterprise OPA, using the new test generation commands: `test bootstrap` and `test new`.
These commands generate test stubs that are pre-populated with `input` objects, based on the keys each rule body references from the input.
While the stubs usually need some customization after generation in order to match the exact policy constraints, the generation commands remove much of the initial boilerplate work required for basic test coverage.
Bootstrapping a starting set of test stubs (one test group per rule body)
Given the file `example.rego`:
package example
import rego.v1
servers := ["dev", "canary", "prod"]
default allow := false
allow if {
input.servers.names[_] == data.servers[_]
input.action == "fetch"
}
We can generate a set of basic tests for the `allow` rules using the command:
eopa test bootstrap -d example.rego example/allow
The generated tests will appear in a file called `example_test.rego`, and should look roughly like the following:
package example_test
import rego.v1
# Testcases generated from: example.rego:7
# Success case: All inputs defined.
test_success_example_allow_0 if {
test_input = {"input": {}}
data.example.allow with input as test_input
}
# Failure case: No inputs defined.
test_fail_example_allow_0_no_input if {
test_input = {}
not data.example.allow with input as test_input
}
# Failure case: Inputs defined, but wrong values.
test_fail_example_allow_0_bad_input if {
test_input = {"input": {}}
not data.example.allow with input as test_input
}
# Testcases generated from: example.rego:9
# Success case: All inputs defined.
test_success_example_allow_1 if {
test_input = {"input": {"action": "EXAMPLE", "servers": {"names": "EXAMPLE"}}}
data.example.allow with input as test_input
}
# Failure case: No inputs defined.
test_fail_example_allow_1_no_input if {
test_input = {}
not data.example.allow with input as test_input
}
# Failure case: Inputs defined, but wrong values.
test_fail_example_allow_1_bad_input if {
test_input = {"input": {"action": "EXAMPLE", "servers": {"names": "EXAMPLE"}}}
not data.example.allow with input as test_input
}
Adding new named test stubs
If we add a new rule to the policy with an OPA metadata annotation `test-bootstrap-name`:
# ...
# METADATA
# custom:
# test-bootstrap-name: allow_admin
allow if {
"admin" in input.user.roles
}
We can then add generated tests for this new rule to the test file with the command:
eopa test new -d example.rego 'allow_admin'
The new test will be appended at the end of the test file, and will look like:
# ...
# Testcases generated from: example.rego:17
# Success case: All inputs defined.
test_success_allow_admin if {
test_input = {"input": {"user": {"roles": "EXAMPLE"}}}
data.example.allow with input as test_input
}
# Failure case: No inputs defined.
test_fail_allow_admin_no_input if {
test_input = {}
not data.example.allow with input as test_input
}
# Failure case: Inputs defined, but wrong values.
test_fail_allow_admin_bad_input if {
test_input = {"input": {"user": {"roles": "EXAMPLE"}}}
not data.example.allow with input as test_input
}
The metadata annotation allows control over test naming with both the `bootstrap` and `new` commands.
If two rules have the same metadata annotation, an error message will report the locations of the conflicts.
v1.18.1
This is a security fix release for the fixes published in Go 1.22.1.
Enterprise OPA servers using `--authentication=tls` would be affected: crafted malicious client certificates could cause a panic in the server. Likewise, crafted server certificates could panic EOPA's HTTP clients in the bundle plugin, status and decision logs, and in `http.send` calls that verify TLS.
This is CVE-2024-24783 (https://pkg.go.dev/vuln/GO-2024-2598).
Note that there are other security fixes in this Go release, but whether EOPA is affected by them is less obvious. Updating is advised.
As far as features go, v1.18.1 is the same code as v1.18.0.
v1.18.0
This release includes updates to the embedded OPA and Regal versions, various bug fixes and dependency bumps, and some telemetry enhancements.
OPA v0.62.0 and Regal v0.17.0
This release updates the OPA version used in Enterprise OPA to v0.62.0, and the embedded Regal version (used with `eopa lint`) to v0.17.0.
`eopa login` and `eopa pull` prepare .styra.yaml for `eopa run`
Previously, an extra step was necessary to have `eopa run` pick up DAS libraries pulled in via `eopa pull`. Now, the generated configuration already includes all the necessary settings for a seamless workflow of:
eopa login --url https://my-tenant.styra.com
eopa pull
eopa run --server
See How to develop and test policies locally using Styra DAS libraries for details.
OPA compatibility when querying data
Previously, Enterprise OPA would include the `data.system` tree in queries for `data` -- either via the CLI (`eopa eval data`) or via the HTTP API (`GET /v1/data`). That isn't harmful, but it differs from OPA's behaviour. Now, Enterprise OPA gives the same results as OPA, omitting `data.system`.
Telemetry
Enterprise OPA now reports the type of bundles used (delta or snapshot, JSON or BJSON) to help prioritize future work.
v1.17.2
v1.17.1
v1.17.0
Regal Linting Support
Enterprise OPA now integrates the powerful Regal linter for Rego policies!
For example, if you had the example policy from the Regal docs in `policy/authz.rego`:
package authz
import future.keywords
default allow = false
deny if {
"admin" != input.user.roles[_]
}
allow if not deny
You can lint the policy with `eopa lint` as follows:
$ eopa lint policy/
Rule: not-equals-in-loop
Description: Use of != in loop
Category: bugs
Location: policy/authz.rego:8:13
Text: "admin" != input.user.roles[_]
Documentation: https://docs.styra.com/regal/rules/bugs/not-equals-in-loop
Rule: use-assignment-operator
Description: Prefer := over = for assignment
Category: style
Location: policy/authz.rego:5:1
Text: default allow = false
Documentation: https://docs.styra.com/regal/rules/style/use-assignment-operator
Rule: prefer-some-in-iteration
Description: Prefer `some .. in` for iteration
Category: style
Location: policy/authz.rego:8:16
Text: "admin" != input.user.roles[_]
Documentation: https://docs.styra.com/regal/rules/style/prefer-some-in-iteration
1 file linted. 3 violations found.
DAS Workflow Support
You can now pull down policies and libraries from a Styra DAS instance, allowing easier local testing and development.
To start the process, run `eopa login`, as in the example below.
eopa login --url https://example.styra.com
This will bring up an OAuth login screen, which will allow connecting your local Enterprise OPA instance to your company's DAS instance.
Once your Enterprise OPA instance is authenticated, you can then pull down the policies from your DAS Workspace using `eopa pull`.
eopa pull
This will store the policies and library code from DAS under a folder named `.styra/include/libraries` by default.
v1.16.0
This release updates the OPA version used in Enterprise OPA to v0.61.0, and includes telemetry enhancements, bug fixes, and various dependency updates.
Huge floats
Gigantic floating-point numbers (like `23456789012E667`) no longer cause a panic in the VM.
Telemetry
Enterprise OPA telemetry now includes the sizes of the most recently retrieved bundles, and the number of data source plugins in use, to help prioritize future work.