Releases: mongodb/mongodb-enterprise-kubernetes
MongoDB Enterprise Kubernetes Operator 1.33.0
New Features
- MongoDBOpsManager, AppDB: Introduced support for OpsManager and Application Database deployments across multiple Kubernetes clusters without requiring a Service Mesh.
- New property `spec.applicationDatabase.externalAccess` used for common service configuration or in single-cluster deployments
- Added support for the existing, but previously unused, property `spec.applicationDatabase.clusterSpecList.externalAccess`
- You can define annotations for external services managed by the operator that contain placeholders which will be automatically replaced with the proper values:
  - AppDB: `spec.applicationDatabase.externalAccess.externalService.annotations`
  - MongoDBOpsManager: placeholders are not yet supported, because its external services are configured differently
- More details can be found in the public documentation.
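As a minimal sketch of the AppDB annotation field above (this is a fragment, not a complete resource; the annotation key and the `{podIndex}` placeholder token are assumptions for illustration — consult the public documentation for the actual supported placeholders):

```yaml
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: ops-manager
spec:
  applicationDatabase:
    members: 3
    externalAccess:
      externalService:
        annotations:
          # "{podIndex}" is a hypothetical placeholder token shown for
          # illustration; the operator would substitute a per-pod value.
          example.com/service-name: "appdb-{podIndex}"
```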
Bug Fixes
- Fixed a bug where workloads in the `static` container architecture were still downloading binaries. This occurred when the operator was running with the default container architecture set to `non-static`, but the workload was deployed with the `static` architecture using the `mongodb.com/v1.architecture: "static"` annotation.
- MongoDB: The operator now correctly applies the external service customization based on the `spec.externalAccess` and `spec.mongos.clusterSpecList.externalAccess` configuration. Previously this configuration was ignored, but only for Multi-Cluster Sharded Clusters.
- OpsManager: Ops Manager resources were not properly cleaned up on deletion. The operator now ensures that all resources are removed when the Ops Manager resource is deleted.
- AppDB: Fixed an issue with wrong monitoring hostnames for the Application Database deployed in multi-cluster mode. Monitoring agents should now discover the correct hostnames and send data back to Ops Manager. The hostnames used for monitoring AppDB in Multi-Cluster deployments with a service mesh are `{resource_name}-db-{cluster_index}-{pod_index}-svc.{namespace}.svc.{cluster_domain}`. TLS certificates should be defined for these hostnames.
  - NOTE (Multi-Cluster): This bug fix will result in the loss of historical monitoring data for multi-cluster AppDB members. If retaining this data is critical, please back it up before upgrading. This only affects monitoring data for multi-cluster AppDB deployments; it does not impact single-cluster AppDBs or any other MongoDB deployments managed by this Ops Manager instance.
  - To export the monitoring data of AppDB members, please refer to the Ops Manager API reference: https://www.mongodb.com/docs/ops-manager/current/reference/api/measures/get-host-process-system-measurements/
- OpsManager: Fixed a bug where the `spec.clusterSpecList.externalConnectivity` field was documented in the MongoDBOpsManager resource specification (https://www.mongodb.com/docs/kubernetes-operator/current/reference/k8s-operator-om-specification/#mongodb-opsmgrkube-opsmgrkube.spec.clusterSpecList.externalConnectivity) but was not being used by the operator.
- OpsManager: Fixed a bug where a custom CA would always be expected when configuring Ops Manager with TLS enabled.
Breaking Change
- Images: Removed all references to the images and repository of `mongodb-enterprise-appdb-database-ubi`, as it has been deprecated since version 1.22.0. This means we will no longer rebuild images in that repository nor add `RELATED_IMAGES_*` entries for it.
MongoDB Enterprise Kubernetes Operator 1.32.0
New Features
- General Availability - Multi-Cluster Sharded Clusters: Support configuring highly available MongoDB Sharded Clusters across multiple Kubernetes clusters. `MongoDB` resources of type Sharded Cluster now support both single- and multi-cluster topologies.
  - The implementation is backwards compatible with single-cluster deployments of MongoDB Sharded Clusters, by defaulting `spec.topology` to `SingleCluster`. Existing `MongoDB` resources do not need to be modified to upgrade to this version of the operator.
  - Introduced support for Sharded deployments across multiple Kubernetes clusters without requiring a Service Mesh. This is made possible by enabling all components of such a deployment (including mongos, config servers, and mongod) to be exposed externally to the Kubernetes clusters, which enables routing via external interfaces.
- More details can be found in the public documentation.
- Added opt-out anonymized telemetry to the operator. The data does not contain any Personally Identifiable Information (PII) or even data that can be tied back to any specific customer or company. More can be read in the public documentation, which further elaborates on the following topics:
  - What data is included in the telemetry
  - How to disable telemetry
  - What RBACs are added and why they are required
- MongoDB: To ensure the correctness of scaling operations, a new validation has been added to Sharded Cluster deployments. It restricts scaling different components in two directions simultaneously within a single change to the YAML file. For example, it is not allowed to add more nodes (scaling up) to shards while simultaneously removing (scaling down) config servers or mongos. This restriction also applies to multi-cluster deployments: a change that "moves" one node from one cluster to another, without altering the total number of members, will now be blocked. It is necessary to perform a scale-up operation first and then execute a separate change for scaling down.
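To illustrate the restriction above, a single edit like the following, which scales shard members up while scaling config servers down, would now be rejected by validation (a fragment sketch using the standard sharded-cluster count fields; values are illustrative):

```yaml
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-sharded-cluster
spec:
  type: ShardedCluster
  shardCount: 3
  mongodsPerShardCount: 5   # was 3: scaling up...
  configServerCount: 1      # was 3: ...while scaling down is blocked
  mongosCount: 2
  # Apply the scale-up first, wait for it to complete, then submit the
  # config server scale-down as a separate change.
```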
Bug Fixes
- Fixed a bug where the status of `MongoDBUser` was being set to `Updated` prematurely. For example, new users were not immediately usable following `MongoDBUser` creation despite the operator reporting the `Updated` state.
- Fixed a bug causing cluster health check issues when the ordering of users and tokens differed in the Kubeconfig.
- Fixed a bug where deploying a Multi-Cluster sharded resource with an external access configuration could result in pods not being able to reach each other.
- Fixed a bug where setting `spec.fcv = AlwaysMatchVersion` and `agentAuth` to `SCRAM` caused the operator to set the auth value to `SCRAM-SHA-1` instead of `SCRAM-SHA-256`.
MongoDB Enterprise Kubernetes Operator 1.31.0
Bug Fixes
- Fixed handling of proxy environment variables in the operator pod. The environment variables [`HTTP_PROXY`, `HTTPS_PROXY`, `NO_PROXY`], when set on the operator pod, can now be propagated to the MongoDB agents by also setting the environment variable `MDB_PROPAGATE_PROXY_ENV=true`.
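The propagation described above is driven entirely by environment variables on the operator Deployment. A minimal fragment sketch (deployment/container names and the proxy URL are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-enterprise-operator   # illustrative name
spec:
  template:
    spec:
      containers:
      - name: mongodb-enterprise-operator
        env:
        - name: HTTPS_PROXY
          value: "https://proxy.internal.example:8443"
        # Opt in to forwarding HTTP_PROXY/HTTPS_PROXY/NO_PROXY to the agents.
        - name: MDB_PROPAGATE_PROXY_ENV
          value: "true"
```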
Kubernetes versions
- The minimum supported Kubernetes version for this operator is 1.30 and OpenShift 4.17.
MongoDB Enterprise Kubernetes Operator 1.30.0
New Features
- MongoDB: fixes and improvements to Multi-Cluster Sharded Cluster deployments (Public Preview)
- MongoDB: The `spec.shardOverrides` field, which was added in 1.28.0 as part of the Multi-Cluster Sharded Cluster Public Preview, is now fully supported for single-cluster topologies and is the recommended way of customizing settings for specific shards.
- MongoDB: `spec.shardSpecificPodSpec` has been deprecated. The recommended way of customizing specific shard settings is to use `spec.shardOverrides` for both Single and Multi Cluster topologies. An example of how to migrate the settings to `spec.shardOverrides` is available here.
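A fragment sketch of `spec.shardOverrides` (the `shardNames`/`members` shape is assumed from the operator's public examples; verify against the linked migration example before use):

```yaml
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-sharded-cluster
spec:
  type: ShardedCluster
  shardCount: 3
  # Assumed shape: each override selects shards by name and replaces
  # selected defaults for those shards only.
  shardOverrides:
  - shardNames: ["my-sharded-cluster-2"]   # shard names follow "<name>-<index>"
    members: 5                             # this shard gets 5 members instead of the default
```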
Bug Fixes
- MongoDB: Fixed the placeholder names for `mongos` in a Single Cluster Sharded deployment with an External Domain set. Previously they were called `mongodProcessDomain` and `mongodProcessFQDN`; now they're called `mongosProcessDomain` and `mongosProcessFQDN`.
- MongoDB, MongoDBMultiCluster, MongoDBOpsManager: In case of losing one of the member clusters, we no longer emit validation errors if the failed cluster still exists in the `clusterSpecList`. This allows easier reconfiguration of the deployments as part of a disaster recovery procedure.
Kubernetes versions
- The minimum supported Kubernetes version for this operator is 1.29 and OpenShift 4.17.
MongoDB Enterprise Kubernetes Operator 1.29.0
New Features
- AppDB: Added support for easy resize. More can be read in the 1.28.0 changelog under "automated expansion of the PVC".
Bug Fixes
- MongoDB, AppDB, MongoDBMultiCluster: Fixed a bug where specifying a fractional number for a storage volume's size, such as `1.7Gi`, could break the reconciliation loop for that resource with an error like `Can't execute update on forbidden fields`, even if the underlying Persistent Volume Claim was deployed successfully.
- MongoDB, MongoDBMultiCluster, OpsManager, AppDB: Increased stability of deployments during TLS rotations. In scenarios where the StatefulSet of the deployment was reconciling and a TLS rotation happened, the deployment would reach a broken state. Deployments now store the previous TLS certificate alongside the new one.
MongoDB Enterprise Kubernetes Operator 1.28.0
New Features
- MongoDB: public preview release of multi-Kubernetes-cluster support for sharded clusters. This can be enabled by setting `spec.topology=MultiCluster` when creating a `MongoDB` resource of `spec.type=ShardedCluster`. More details can be found here.
- MongoDB, MongoDBMultiCluster: support for automated expansion of the PVC. More details can be found here.
  Note: Expansion of the PVC is only supported if the storageClass supports expansion. Please ensure that the storageClass supports in-place expansion without data loss.
  - MongoDB: This can be done by increasing the size of the PVC in the CRD settings:
    - one PVC - increase `spec.persistence.single.storage`
    - multiple PVCs - increase `spec.persistence.multiple.(data/journal/logs).storage`
  - MongoDBMulti: This can be done by increasing the storage via the statefulSet override:
```yaml
statefulSet:
  spec:
    volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        resources:
          requests:
            storage: 2Gi # this is my increased storage
        storageClassName: <my-class-that-supports-expansion>
```
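For the single-cluster `MongoDB` case, the equivalent change is the CRD field itself, following the `spec.persistence.single.storage` path named above (a fragment sketch; the starting size is illustrative):

```yaml
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-replica-set
spec:
  persistence:
    single:
      # Increased from e.g. 1Gi; the storageClass backing this volume
      # must support in-place expansion.
      storage: 2Gi
```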
- OpsManager: Introduced support for Ops Manager 8.0.0
- MongoDB, MongoDBMulti: support for MongoDB 8.0.0
- MongoDB, MongoDBMultiCluster, AppDB: changed the default behaviour of setting the featureCompatibilityVersion (FCV) for the database.
  - When upgrading the MongoDB version, the operator sets the FCV to the prior version being upgraded from. This allows sanity checks before setting the FCV to the upgraded version. More information can be found here.
  - To keep the prior behaviour of always using the MongoDB version as the FCV, set `spec.featureCompatibilityVersion: "AlwaysMatchVersion"`.
- Docker images are now built with `ubi9` as the base image, with the exception of `mongodb-enterprise-database-ubi`, which is still based on `ubi8` to support `MongoDB` workloads < 6.0.4. The `ubi8` image is only in use for the default non-static architecture. For a full `ubi9` setup, the Static Containers architecture should be used instead.
Bug Fixes
- MongoDB, AppDB, MongoDBMultiCluster: Fixed a bug where the init container was not getting the default security context, which was flagged by security policies.
- MongoDBMultiCluster: Fixed a bug where resource validations were not performed as part of the reconcile loop.
MongoDB Enterprise Kubernetes Operator 1.27.0
New Features
- MongoDB: Added support for enabling LogRotation for MongoDB processes, the MonitoringAgent, and the BackupAgent. More can be found in the following documentation.
  - `spec.agent.mongod.logRotation` to configure the MongoDB processes
  - `spec.agent.mongod.auditLogRotation` to configure the MongoDB processes' audit logs
  - `spec.agent.backupAgent.logRotation` to configure the backup agent
  - `spec.agent.monitoringAgent.logRotation` to configure the monitoring agent
  - `spec.agent.readinessProbe.environmentVariables` to configure the environment variables the readinessProbe runs with. That also applies to settings related to logRotation; the supported environment settings can be found here.
  - The same applies for AppDB: you can configure AppDB via `spec.applicationDatabase.agent.mongod.logRotation`.
  - Please note: for ShardedCluster we only support configuring logRotation under `spec.agent` and not per process type (mongos, configsrv, etc.)
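A fragment sketch of the mongod logRotation settings above (the field names `sizeThresholdMB` and `timeThresholdHrs` are assumed from the Automation Agent's logRotate settings; verify them against the linked documentation):

```yaml
spec:
  agent:
    mongod:
      logRotation:
        # Assumed field names; see the linked documentation for the
        # authoritative list of logRotation options.
        sizeThresholdMB: 100   # rotate once a log file exceeds ~100 MB
        timeThresholdHrs: 24   # ...or once it is older than 24 hours
```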
- OpsManager: Added support for replacing the `logback.xml`, which configures general logging settings like logRotation.
  - `spec.logging.LogBackAccessRef` points at a ConfigMap/key with the logback access configuration file to mount on the Pod. The key of the ConfigMap has to be `logback-access.xml`.
  - `spec.logging.LogBackRef` points at a ConfigMap/key with the logback configuration file to mount on the Pod. The key of the ConfigMap has to be `logback.xml`.
- OpsManager: Added support for configuring `votes`, `priority`, and `tags` for application database nodes in the multi-cluster topology under the `spec.applicationDatabase.clusterSpecList[i].memberConfig` field.
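A fragment sketch of the `memberConfig` field above (one entry per member in that cluster; the exact value types, e.g. `priority` as a string, are assumptions based on replica set member options — verify against the CRD reference):

```yaml
spec:
  applicationDatabase:
    topology: MultiCluster
    clusterSpecList:
    - clusterName: cluster-1.example   # illustrative cluster name
      members: 2
      # Shape assumed from replica set member options; verify value
      # types in the MongoDBOpsManager CRD reference.
      memberConfig:
      - votes: 1
        priority: "1.5"
        tags:
          dc: "east"
      - votes: 1
        priority: "0.5"
        tags:
          dc: "east"
```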
Deprecations
- AppDB: logRotate for AppDB has been deprecated in favor of a new field:
  - `spec.applicationDatabase.agent.logRotation` has been deprecated in favor of `spec.applicationDatabase.agent.mongod.logRotation`.
Bug Fixes
- Agent launcher: under some resync scenarios we could have corrupted journal data in `/journal`. The agent now makes sure that there is no conflicting journal data and prioritizes the data from `/data/journal`.
  - To deactivate this behaviour, set the environment variable `MDB_CLEAN_JOURNAL` on the operator to any value other than 1.
- MongoDB, AppDB, MongoDBMulti: make sure external domains are used in the created connection string if configured.
- MongoDB: Removed the panic response when configuring a horizon config shorter than the number of members. The operator now signals a descriptive error in the status of the MongoDB resource.
- MongoDB: Fixed a bug where creating a resource in a new project named as a prefix of another project would fail, preventing the `MongoDB` resource from being created.
MongoDB Enterprise Kubernetes Operator 1.26.0
New Features
- Added the ability to control how many reconciles can be performed in parallel by the operator, by setting `MDB_MAX_CONCURRENT_RECONCILES` for the operator deployment or `operator.maxConcurrentReconciles` in the operator's Helm chart. If not provided, the default value is 1. This enables much better CPU utilization and vertical scaling of the operator and will lead to quicker reconciliation of all managed resources.
  - It might lead to increased load on Ops Manager and the K8s API server in the same time window. Observe the operator's resource usage and adjust `operator.resources.requests` and `operator.resources.limits` if needed.
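A minimal Helm values sketch for the setting above (the resource figures are illustrative starting points, not recommendations):

```yaml
# values.yaml fragment for the operator chart
operator:
  maxConcurrentReconciles: 4   # default is 1
  resources:
    requests:
      cpu: "1"       # illustrative; observe usage and adjust
      memory: 1Gi
    limits:
      cpu: "2"
      memory: 2Gi
```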
Helm Chart
- New `operator.maxConcurrentReconciles` parameter. It controls how many reconciles can be performed in parallel by the operator. The default value is 1.
- New `operator.webhook.installClusterRole` parameter. It controls whether to install the cluster role allowing the operator to configure admission webhooks. It should be disabled when cluster roles are not allowed. The default value is true.
Bug Fixes
- MongoDB: Fixed a bug where configuring a MongoDB resource with multiple entries in `spec.agent.startupOptions` would cause additional unnecessary reconciliations of the underlying `StatefulSet`.
- MongoDB, MongoDBMultiCluster: Fixed a bug where the operator wouldn't watch for changes in the X509 certificates configured for agent authentication.
- MongoDB: Fixed a bug where boolean flags passed to the agent couldn't be set to `false` if their default value was `true`.
MongoDB Enterprise Kubernetes Operator 1.25.0
Known Issues
- The mongodb-enterprise-openshift.yaml file released in this version is incomplete: the operator's ServiceAccount resource is missing. Please use a newer version of the operator or add the service account manually from this commit.
New Features
- MongoDBOpsManager: Added support for deploying Ops Manager Application on multiple Kubernetes clusters. See documentation for more information.
- (Public Preview) MongoDB, OpsManager: Introduced an opt-in Static Architecture (for all types of deployments) that avoids pulling any binaries at runtime.
  - This feature is recommended only for testing purposes, but will become the default in a later release.
  - You can activate this mode by setting the `MDB_DEFAULT_ARCHITECTURE` environment variable at the Operator level to `static`. Alternatively, you can annotate a specific `MongoDB` or `OpsManager` Custom Resource with `mongodb.com/v1.architecture: "static"`.
  - The Operator supports seamless migration between the Static and non-Static architectures.
  - To learn more please see the relevant documentation.
- MongoDB: Recover Resource Due to Broken Automation Configuration has been extended to all types of MongoDB resources, now including Sharded Clusters. For more information see https://www.mongodb.com/docs/kubernetes-operator/master/reference/troubleshooting/#recover-resource-due-to-broken-automation-configuration
- MongoDB, MongoDBMultiCluster: Placeholders in external services.
  - You can now define annotations for external services managed by the operator that contain placeholders which will be automatically replaced with the proper values.
  - Previously, the operator configured the same annotations for all external services created for each pod. Now, with placeholders, the operator is able to customize annotations in each service with values that are relevant to the particular pod.
  - To learn more please see the relevant documentation:
    - MongoDB: `spec.externalAccess.externalService.annotations`
    - MongoDBMultiCluster: `spec.externalAccess.externalService.annotations`
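A fragment sketch of a per-pod annotation using a placeholder (the `{mongodProcessFQDN}` token is taken from the placeholder names mentioned elsewhere in these release notes; consult the documentation for the full list of supported placeholders):

```yaml
spec:
  externalAccess:
    externalService:
      annotations:
        # "{mongodProcessFQDN}" is one of the placeholder tokens referenced
        # in these notes; the operator substitutes a per-pod value for it.
        external-dns.alpha.kubernetes.io/hostname: "{mongodProcessFQDN}"
```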
- `kubectl mongodb` plugin:
  - Added printing build info when using the plugin.
  - `setup` command:
    - Added the `--image-pull-secrets` parameter. If specified, created service accounts will reference the specified secret in the `ImagePullSecrets` field.
    - Improved handling of configurations when the operator is installed in a separate namespace than the resources it's watching and when the operator is watching more than one namespace.
    - Optimized roles and permissions setup in member clusters, using a single service account per cluster with correctly configured Role and RoleBinding (no ClusterRoles necessary) for each watched namespace.
- OpsManager: Added the `spec.internalConnectivity` field to allow overrides for the service used by the operator to ensure internal connectivity to the `OpsManager` pods.
- Extended the existing event-based reconciliation with a time-based one that is triggered every 24 hours. This ensures all Agents are always upgraded in a timely manner.
- OpenShift / OLM Operator: Removed the requirement for cluster-wide permissions. Previously, the operator needed these permissions to configure admission webhooks. Now, webhooks are automatically configured by OLM.
- Added an optional `MDB_WEBHOOK_REGISTER_CONFIGURATION` environment variable for the operator. It controls whether the operator should perform automatic admission webhook configuration. Default: true. It's set to false for OLM and OpenShift deployments.
Breaking Change
- MongoDBOpsManager: Stopped testing against Ops Manager 5.0. While it may continue to work, we no longer officially support Ops Manager 5 and customers should move to a later version.
Helm Chart
- New `operator.webhook.registerConfiguration` parameter. It controls whether the operator should perform automatic admission webhook configuration (by setting the `MDB_WEBHOOK_REGISTER_CONFIGURATION` environment variable for the operator). Default: true. It's set to false for OLM and OpenShift deployments.
- Changed the default `agent.version` to `107.0.0.8502-1`, which will change the default agent used in Helm deployments.
- Added `operator.additionalArguments` (default: []) allowing additional arguments to be passed to the operator binary.
- Added `operator.createResourcesServiceAccountsAndRoles` (default: true) to control whether to install roles and service accounts for MongoDB and Ops Manager resources. When the `kubectl mongodb` plugin is used to configure the operator for multi-cluster deployment, it installs all necessary roles and service accounts. Therefore, in some cases it is required not to install those roles via the operator's Helm chart to avoid clashes.
Bug Fixes
- MongoDBMultiCluster: The fields `spec.externalAccess.externalDomain` and `spec.clusterSpecList[*].externalAccess.externalDomains` were reported as required even though they weren't used. Validation was triggered prematurely when the structure `spec.externalAccess` was defined. Now, the uniqueness of external domains will only be checked when external domains are actually defined in `spec.externalAccess.externalDomain` or `spec.clusterSpecList[*].externalAccess.externalDomains`.
- MongoDB: Fixed a bug where, upon deleting a MongoDB resource, the `controlledFeature` policies were not unset on the related OpsManager/CloudManager instance, making cleanup in the UI impossible in the case of losing the Kubernetes operator.
- OpsManager: The `admin-key` Secret is no longer deleted when removing the OpsManager Custom Resource. This enables easier Ops Manager re-installation.
- MongoDB ReadinessProbe: Fixed the misleading error message of the readinessProbe: `"... kubelet Readiness probe failed:..."`. This affects all MongoDB deployments.
- Operator: Fixed cases where the operator sometimes skipped TLS verification while communicating with Ops Manager, even if it was activated.
Improvements
- Kubectl plugin: The released plugin binaries are now signed; the signatures are published with the release assets. Our public key is available at this address. They are also notarized for macOS.
- Released images signed: All container images published for the enterprise operator are cryptographically signed. This is visible on our Quay registry and can be verified using our public key, which is available at this address.
MongoDB Enterprise Kubernetes Operator 1.24.0
New Features
- MongoDBOpsManager: Added support for the upcoming 7.0.x series of Ops Manager Server.
Bug Fixes
- Fixed a bug that prevented terminating backups correctly.