
Install and Use Search With MongoDB Enterprise Edition

You can use the Kubernetes Operator to deploy MongoDB Search and Vector Search resources that run with MongoDB Enterprise v8.2.0 or later on a Kubernetes cluster. This procedure demonstrates how to deploy and configure the mongot process to run with a new or existing replica set in your Kubernetes cluster. The deployment uses TLS certificates to secure communication between the MongoDB nodes and the mongot search process.

To deploy MongoDB Search and Vector Search, you must have the following:

  • A running Kubernetes cluster with kubeconfig available locally.

  • Kubernetes command-line tool, kubectl, configured to communicate with your cluster.

  • Helm, the package manager for Kubernetes, to install the Kubernetes Operator.

  • Bash v5.1 or higher for running the commands in this tutorial.

  • MongoDB Ops Manager or MongoDB Cloud Manager project and API credentials.
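
Before you begin, you can optionally confirm that the required tools are available. The following is a minimal sanity check; the exact version output depends on your installation:

# Optional check of the prerequisite tooling.
kubectl version --client
helm version --short
bash --version | head -n 1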

1

Set the environment variables for use in the subsequent steps in this procedure. Copy the following commands, update the values for your environment, and then run them to load the variables:

# set it to the context name of the k8s cluster
export K8S_CTX="<local cluster context>"

# the following namespace will be created if it does not exist
export MDB_NS="mongodb"

# name of the MongoDB custom resource
export MDB_RESOURCE_NAME="mdb-rs"

export MDB_MEMBERS=3
# Ops Manager/Cloud Manager project name used to manage the MongoDB replica set
export OPS_MANAGER_PROJECT_NAME="<arbitrary project name>"

# URL of the Cloud Manager or Ops Manager instance
export OPS_MANAGER_API_URL="https://cloud.mongodb.com"

# The API key can be an Org Owner - the operator can then create the project automatically.
# The API key can also be created in a particular project that was created manually, with the Project Owner scope.
export OPS_MANAGER_API_USER="<SET API USER>"
export OPS_MANAGER_API_KEY="<SET API KEY>"
export OPS_MANAGER_ORG_ID="<SET ORG ID>"

# minimum required MongoDB version for running MongoDB Search is 8.2.0
export MDB_VERSION="8.2.0-ent"

# root admin user created for convenience; not used elsewhere in this guide
export MDB_ADMIN_USER_PASSWORD="admin-user-password-CHANGE-ME"
# regular user performing the restore and search queries on the sample mflix database
export MDB_USER_PASSWORD="mdb-user-password-CHANGE-ME"
# user that MongoDB Search uses to connect to the replica set and synchronize data from it
export MDB_SEARCH_SYNC_USER_PASSWORD="search-sync-user-password-CHANGE-ME"

export OPERATOR_HELM_CHART="mongodb/mongodb-kubernetes"
# comma-separated key=value pairs for additional parameters passed to the Helm chart installing the operator
export OPERATOR_ADDITIONAL_HELM_VALUES=""

export MDB_TLS_CERT_SECRET_PREFIX="certs"
export MDB_TLS_CA_CONFIGMAP="${MDB_RESOURCE_NAME}-ca-configmap"

export CERT_MANAGER_NAMESPACE="cert-manager"
export MDB_TLS_SELF_SIGNED_ISSUER="selfsigned-bootstrap-issuer"
export MDB_TLS_CA_CERT_NAME="my-selfsigned-ca"
export MDB_TLS_CA_SECRET_NAME="root-secret"
export MDB_TLS_CA_ISSUER="my-ca-issuer"
export MDB_TLS_SERVER_CERT_SECRET_NAME="${MDB_TLS_CERT_SECRET_PREFIX}-${MDB_RESOURCE_NAME}-cert"
export MDB_SEARCH_TLS_SECRET_NAME="${MDB_RESOURCE_NAME}-search-tls"

export MDB_CONNECTION_STRING="mongodb://mdb-user:${MDB_USER_PASSWORD}@${MDB_RESOURCE_NAME}-svc.${MDB_NS}.svc.cluster.local:27017/?replicaSet=${MDB_RESOURCE_NAME}&tls=true&tlsCAFile=/tls/ca.crt"
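
As a quick sanity check, you can confirm that the placeholder values were replaced before continuing. The following loop is a minimal sketch that only warns about empty variables:

# Warn about any required variable that is still empty.
for var in K8S_CTX MDB_NS MDB_RESOURCE_NAME OPS_MANAGER_PROJECT_NAME OPS_MANAGER_API_URL \
           OPS_MANAGER_API_USER OPS_MANAGER_API_KEY OPS_MANAGER_ORG_ID MDB_VERSION; do
  [ -n "${!var:-}" ] || echo "WARNING: ${var} is not set"
done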
2

Helm automates the deployment and management of MongoDB instances on Kubernetes. If you have already added the Helm repository that contains the Helm chart for installing the Kubernetes Operator, skip this step. Otherwise, add the Helm repository.

To add the repository, copy, paste, and run the following commands:

helm repo add mongodb https://mongodb.github.io/helm-charts
helm repo update mongodb
helm search repo mongodb/mongodb-kubernetes
"mongodb" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "mongodb" chart repository
Update Complete. ⎈Happy Helming!⎈
NAME                         CHART VERSION   APP VERSION   DESCRIPTION
mongodb/mongodb-kubernetes   1.6.0                         MongoDB Controllers for Kubernetes translate th...
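
If you need a specific operator release rather than the latest chart, you can list the published chart versions and pass your choice to the install command in the next step with the --version flag; for example:

# List available chart versions for the operator Helm chart.
helm search repo mongodb/mongodb-kubernetes --versions | head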
3

The Kubernetes Operator watches MongoDB, MongoDBOpsManager, and MongoDBSearch custom resources and manages the lifecycle of your MongoDB deployments. If you already installed the MongoDB Controllers for Kubernetes Operator, skip this step. Otherwise, install the MongoDB Controllers for Kubernetes Operator from the Helm repository you added in the previous step.

To install the MongoDB Controllers for Kubernetes Operator in the mongodb namespace, copy, paste, and run the following:

helm upgrade --install --debug --kube-context "${K8S_CTX}" \
  --create-namespace \
  --namespace="${MDB_NS}" \
  mongodb-kubernetes \
  ${OPERATOR_ADDITIONAL_HELM_VALUES:+--set ${OPERATOR_ADDITIONAL_HELM_VALUES}} \
  "${OPERATOR_HELM_CHART}"
1Release "mongodb-kubernetes" does not exist. Installing it now.
2NAME: mongodb-kubernetes
3LAST DEPLOYED: Mon Nov 17 13:22:46 2025
4NAMESPACE: mongodb
5STATUS: deployed
6REVISION: 1
7TEST SUITE: None
8USER-SUPPLIED VALUES:
9{}
10
11COMPUTED VALUES:
12agent:
13 name: mongodb-agent
14 version: 108.0.12.8846-1
15community:
16 agent:
17 name: mongodb-agent
18 version: 108.0.2.8729-1
19 mongodb:
20 imageType: ubi8
21 name: mongodb-community-server
22 repo: quay.io/mongodb
23 registry:
24 agent: quay.io/mongodb
25 resource:
26 members: 3
27 name: mongodb-replica-set
28 tls:
29 caCertificateSecretRef: tls-ca-key-pair
30 certManager:
31 certDuration: 8760h
32 renewCertBefore: 720h
33 certificateKeySecretRef: tls-certificate
34 enabled: false
35 sampleX509User: false
36 useCertManager: true
37 useX509: false
38 version: 4.4.0
39database:
40 name: mongodb-kubernetes-database
41 version: 1.6.0
42initAppDb:
43 name: mongodb-kubernetes-init-appdb
44 version: 1.6.0
45initDatabase:
46 name: mongodb-kubernetes-init-database
47 version: 1.6.0
48initOpsManager:
49 name: mongodb-kubernetes-init-ops-manager
50 version: 1.6.0
51managedSecurityContext: false
52mongodb:
53 appdbAssumeOldFormat: false
54 imageType: ubi8
55 name: mongodb-enterprise-server
56 repo: quay.io/mongodb
57multiCluster:
58 clusterClientTimeout: 10
59 clusters: []
60 kubeConfigSecretName: mongodb-enterprise-operator-multi-cluster-kubeconfig
61 performFailOver: true
62operator:
63 additionalArguments: []
64 affinity: {}
65 baseName: mongodb-kubernetes
66 createOperatorServiceAccount: true
67 createResourcesServiceAccountsAndRoles: true
68 deployment_name: mongodb-kubernetes-operator
69 enableClusterMongoDBRoles: true
70 enablePVCResize: true
71 env: prod
72 maxConcurrentReconciles: 1
73 mdbDefaultArchitecture: non-static
74 name: mongodb-kubernetes-operator
75 nodeSelector: {}
76 operator_image_name: mongodb-kubernetes
77 podSecurityContext:
78 runAsNonRoot: true
79 runAsUser: 2000
80 replicas: 1
81 resources:
82 limits:
83 cpu: 1100m
84 memory: 1Gi
85 requests:
86 cpu: 500m
87 memory: 200Mi
88 securityContext: {}
89 telemetry:
90 collection:
91 clusters: {}
92 deployments: {}
93 frequency: 1h
94 operators: {}
95 send:
96 frequency: 168h
97 tolerations: []
98 vaultSecretBackend:
99 enabled: false
100 tlsSecretRef: ""
101 version: 1.6.0
102 watchedResources:
103 - mongodb
104 - opsmanagers
105 - mongodbusers
106 - mongodbcommunity
107 - mongodbsearch
108 webhook:
109 installClusterRole: true
110 registerConfiguration: true
111opsManager:
112 name: mongodb-enterprise-ops-manager-ubi
113readinessProbe:
114 name: mongodb-kubernetes-readinessprobe
115 version: 1.0.23
116registry:
117 agent: quay.io/mongodb
118 database: quay.io/mongodb
119 imagePullSecrets: null
120 initAppDb: quay.io/mongodb
121 initDatabase: quay.io/mongodb
122 initOpsManager: quay.io/mongodb
123 operator: quay.io/mongodb
124 opsManager: quay.io/mongodb
125 pullPolicy: Always
126 readinessProbe: quay.io/mongodb
127 versionUpgradeHook: quay.io/mongodb
128search:
129 name: mongodb-search
130 repo: quay.io/mongodb
131 version: 0.55.0
132versionUpgradeHook:
133 name: mongodb-kubernetes-operator-version-upgrade-post-start-hook
134 version: 1.0.10
135
136HOOKS:
137MANIFEST:
138---
139# Source: mongodb-kubernetes/templates/database-roles.yaml
140apiVersion: v1
141kind: ServiceAccount
142metadata:
143 name: mongodb-kubernetes-appdb
144 namespace: mongodb
145---
146# Source: mongodb-kubernetes/templates/database-roles.yaml
147apiVersion: v1
148kind: ServiceAccount
149metadata:
150 name: mongodb-kubernetes-database-pods
151 namespace: mongodb
152---
153# Source: mongodb-kubernetes/templates/database-roles.yaml
154apiVersion: v1
155kind: ServiceAccount
156metadata:
157 name: mongodb-kubernetes-ops-manager
158 namespace: mongodb
159---
160# Source: mongodb-kubernetes/templates/operator-sa.yaml
161apiVersion: v1
162kind: ServiceAccount
163metadata:
164 name: mongodb-kubernetes-operator
165 namespace: mongodb
166---
167# Source: mongodb-kubernetes/templates/operator-roles-clustermongodbroles.yaml
168kind: ClusterRole
169apiVersion: rbac.authorization.k8s.io/v1
170metadata:
171 name: mongodb-kubernetes-operator-mongodb-cluster-mongodb-role
172rules:
173 - apiGroups:
174 - mongodb.com
175 verbs:
176 - '*'
177 resources:
178 - clustermongodbroles
179---
180# Source: mongodb-kubernetes/templates/operator-roles-telemetry.yaml
181# Additional ClusterRole for clusterVersionDetection
182kind: ClusterRole
183apiVersion: rbac.authorization.k8s.io/v1
184metadata:
185 name: mongodb-kubernetes-operator-cluster-telemetry
186rules:
187 # Non-resource URL permissions
188 - nonResourceURLs:
189 - "/version"
190 verbs:
191 - get
192 # Cluster-scoped resource permissions
193 - apiGroups:
194 - ''
195 resources:
196 - namespaces
197 resourceNames:
198 - kube-system
199 verbs:
200 - get
201 - apiGroups:
202 - ''
203 resources:
204 - nodes
205 verbs:
206 - list
207---
208# Source: mongodb-kubernetes/templates/operator-roles-webhook.yaml
209kind: ClusterRole
210apiVersion: rbac.authorization.k8s.io/v1
211metadata:
212 name: mongodb-kubernetes-operator-mongodb-webhook-cr
213rules:
214 - apiGroups:
215 - "admissionregistration.k8s.io"
216 resources:
217 - validatingwebhookconfigurations
218 verbs:
219 - get
220 - create
221 - update
222 - delete
223 - apiGroups:
224 - ""
225 resources:
226 - services
227 verbs:
228 - get
229 - list
230 - watch
231 - create
232 - update
233 - delete
234---
235# Source: mongodb-kubernetes/templates/operator-roles-clustermongodbroles.yaml
236kind: ClusterRoleBinding
237apiVersion: rbac.authorization.k8s.io/v1
238metadata:
239 name: mongodb-kubernetes-operator-mongodb-cluster-mongodb-role-binding
240roleRef:
241 apiGroup: rbac.authorization.k8s.io
242 kind: ClusterRole
243 name: mongodb-kubernetes-operator-mongodb-cluster-mongodb-role
244subjects:
245 - kind: ServiceAccount
246 name: mongodb-kubernetes-operator
247 namespace: mongodb
248---
249# Source: mongodb-kubernetes/templates/operator-roles-telemetry.yaml
250# ClusterRoleBinding for clusterVersionDetection
251kind: ClusterRoleBinding
252apiVersion: rbac.authorization.k8s.io/v1
253metadata:
254 name: mongodb-kubernetes-operator-mongodb-cluster-telemetry-binding
255roleRef:
256 apiGroup: rbac.authorization.k8s.io
257 kind: ClusterRole
258 name: mongodb-kubernetes-operator-cluster-telemetry
259subjects:
260 - kind: ServiceAccount
261 name: mongodb-kubernetes-operator
262 namespace: mongodb
263---
264# Source: mongodb-kubernetes/templates/operator-roles-webhook.yaml
265kind: ClusterRoleBinding
266apiVersion: rbac.authorization.k8s.io/v1
267metadata:
268 name: mongodb-kubernetes-operator-mongodb-webhook-crb
269roleRef:
270 apiGroup: rbac.authorization.k8s.io
271 kind: ClusterRole
272 name: mongodb-kubernetes-operator-mongodb-webhook-cr
273subjects:
274 - kind: ServiceAccount
275 name: mongodb-kubernetes-operator
276 namespace: mongodb
277---
278# Source: mongodb-kubernetes/templates/database-roles.yaml
279kind: Role
280apiVersion: rbac.authorization.k8s.io/v1
281metadata:
282 name: mongodb-kubernetes-appdb
283 namespace: mongodb
284rules:
285 - apiGroups:
286 - ''
287 resources:
288 - secrets
289 verbs:
290 - get
291 - apiGroups:
292 - ''
293 resources:
294 - pods
295 verbs:
296 - patch
297 - delete
298 - get
299---
300# Source: mongodb-kubernetes/templates/operator-roles-base.yaml
301kind: Role
302apiVersion: rbac.authorization.k8s.io/v1
303metadata:
304 name: mongodb-kubernetes-operator
305 namespace: mongodb
306rules:
307 - apiGroups:
308 - ''
309 resources:
310 - services
311 verbs:
312 - get
313 - list
314 - watch
315 - create
316 - update
317 - delete
318 - apiGroups:
319 - ''
320 resources:
321 - secrets
322 - configmaps
323 verbs:
324 - get
325 - list
326 - create
327 - update
328 - delete
329 - watch
330 - apiGroups:
331 - apps
332 resources:
333 - statefulsets
334 verbs:
335 - create
336 - get
337 - list
338 - watch
339 - delete
340 - update
341 - apiGroups:
342 - ''
343 resources:
344 - pods
345 verbs:
346 - get
347 - list
348 - watch
349 - delete
350 - deletecollection
351 - apiGroups:
352 - mongodbcommunity.mongodb.com
353 resources:
354 - mongodbcommunity
355 - mongodbcommunity/status
356 - mongodbcommunity/spec
357 - mongodbcommunity/finalizers
358 verbs:
359 - '*'
360 - apiGroups:
361 - mongodb.com
362 verbs:
363 - '*'
364 resources:
365 - mongodb
366 - mongodb/finalizers
367 - mongodbusers
368 - mongodbusers/finalizers
369 - opsmanagers
370 - opsmanagers/finalizers
371 - mongodbmulticluster
372 - mongodbmulticluster/finalizers
373 - mongodbsearch
374 - mongodbsearch/finalizers
375 - mongodb/status
376 - mongodbusers/status
377 - opsmanagers/status
378 - mongodbmulticluster/status
379 - mongodbsearch/status
380---
381# Source: mongodb-kubernetes/templates/operator-roles-pvc-resize.yaml
382kind: Role
383apiVersion: rbac.authorization.k8s.io/v1
384metadata:
385 name: mongodb-kubernetes-operator-pvc-resize
386 namespace: mongodb
387rules:
388 - apiGroups:
389 - ''
390 resources:
391 - persistentvolumeclaims
392 verbs:
393 - get
394 - delete
395 - list
396 - watch
397 - patch
398 - update
399---
400# Source: mongodb-kubernetes/templates/database-roles.yaml
401kind: RoleBinding
402apiVersion: rbac.authorization.k8s.io/v1
403metadata:
404 name: mongodb-kubernetes-appdb
405 namespace: mongodb
406roleRef:
407 apiGroup: rbac.authorization.k8s.io
408 kind: Role
409 name: mongodb-kubernetes-appdb
410subjects:
411 - kind: ServiceAccount
412 name: mongodb-kubernetes-appdb
413 namespace: mongodb
414---
415# Source: mongodb-kubernetes/templates/operator-roles-base.yaml
416kind: RoleBinding
417apiVersion: rbac.authorization.k8s.io/v1
418metadata:
419 name: mongodb-kubernetes-operator
420 namespace: mongodb
421roleRef:
422 apiGroup: rbac.authorization.k8s.io
423 kind: Role
424 name: mongodb-kubernetes-operator
425subjects:
426 - kind: ServiceAccount
427 name: mongodb-kubernetes-operator
428 namespace: mongodb
429---
430# Source: mongodb-kubernetes/templates/operator-roles-pvc-resize.yaml
431kind: RoleBinding
432apiVersion: rbac.authorization.k8s.io/v1
433metadata:
434 name: mongodb-kubernetes-operator-pvc-resize-binding
435 namespace: mongodb
436roleRef:
437 apiGroup: rbac.authorization.k8s.io
438 kind: Role
439 name: mongodb-kubernetes-operator-pvc-resize
440subjects:
441 - kind: ServiceAccount
442 name: mongodb-kubernetes-operator
443 namespace: mongodb
444---
445# Source: mongodb-kubernetes/templates/operator.yaml
446apiVersion: apps/v1
447kind: Deployment
448metadata:
449 name: mongodb-kubernetes-operator
450 namespace: mongodb
451spec:
452 replicas: 1
453 selector:
454 matchLabels:
455 app.kubernetes.io/component: controller
456 app.kubernetes.io/name: mongodb-kubernetes-operator
457 app.kubernetes.io/instance: mongodb-kubernetes-operator
458 template:
459 metadata:
460 labels:
461 app.kubernetes.io/component: controller
462 app.kubernetes.io/name: mongodb-kubernetes-operator
463 app.kubernetes.io/instance: mongodb-kubernetes-operator
464 spec:
465 serviceAccountName: mongodb-kubernetes-operator
466 securityContext:
467 runAsNonRoot: true
468 runAsUser: 2000
469 containers:
470 - name: mongodb-kubernetes-operator
471 image: "quay.io/mongodb/mongodb-kubernetes:1.6.0"
472 imagePullPolicy: Always
473 args:
474 - -watch-resource=mongodb
475 - -watch-resource=opsmanagers
476 - -watch-resource=mongodbusers
477 - -watch-resource=mongodbcommunity
478 - -watch-resource=mongodbsearch
479 - -watch-resource=clustermongodbroles
480 command:
481 - /usr/local/bin/mongodb-kubernetes-operator
482 resources:
483 limits:
484 cpu: 1100m
485 memory: 1Gi
486 requests:
487 cpu: 500m
488 memory: 200Mi
489 env:
490 - name: OPERATOR_ENV
491 value: prod
492 - name: MDB_DEFAULT_ARCHITECTURE
493 value: non-static
494 - name: NAMESPACE
495 valueFrom:
496 fieldRef:
497 fieldPath: metadata.namespace
498 - name: WATCH_NAMESPACE
499 valueFrom:
500 fieldRef:
501 fieldPath: metadata.namespace
502 - name: MDB_OPERATOR_TELEMETRY_COLLECTION_FREQUENCY
503 value: "1h"
504 - name: MDB_OPERATOR_TELEMETRY_SEND_FREQUENCY
505 value: "168h"
506 - name: CLUSTER_CLIENT_TIMEOUT
507 value: "10"
508 - name: IMAGE_PULL_POLICY
509 value: Always
510 # Database
511 - name: MONGODB_ENTERPRISE_DATABASE_IMAGE
512 value: quay.io/mongodb/mongodb-kubernetes-database
513 - name: INIT_DATABASE_IMAGE_REPOSITORY
514 value: quay.io/mongodb/mongodb-kubernetes-init-database
515 - name: INIT_DATABASE_VERSION
516 value: "1.6.0"
517 - name: DATABASE_VERSION
518 value: "1.6.0"
519 # Ops Manager
520 - name: OPS_MANAGER_IMAGE_REPOSITORY
521 value: quay.io/mongodb/mongodb-enterprise-ops-manager-ubi
522 - name: INIT_OPS_MANAGER_IMAGE_REPOSITORY
523 value: quay.io/mongodb/mongodb-kubernetes-init-ops-manager
524 - name: INIT_OPS_MANAGER_VERSION
525 value: "1.6.0"
526 # AppDB
527 - name: INIT_APPDB_IMAGE_REPOSITORY
528 value: quay.io/mongodb/mongodb-kubernetes-init-appdb
529 - name: INIT_APPDB_VERSION
530 value: "1.6.0"
531 - name: OPS_MANAGER_IMAGE_PULL_POLICY
532 value: Always
533 - name: AGENT_IMAGE
534 value: "quay.io/mongodb/mongodb-agent:108.0.12.8846-1"
535 - name: MDB_AGENT_IMAGE_REPOSITORY
536 value: "quay.io/mongodb/mongodb-agent"
537 - name: MONGODB_IMAGE
538 value: mongodb-enterprise-server
539 - name: MONGODB_REPO_URL
540 value: quay.io/mongodb
541 - name: MDB_IMAGE_TYPE
542 value: ubi8
543 - name: PERFORM_FAILOVER
544 value: 'true'
545 - name: MDB_MAX_CONCURRENT_RECONCILES
546 value: "1"
547 - name: POD_NAME
548 valueFrom:
549 fieldRef:
550 fieldPath: metadata.name
551 - name: OPERATOR_NAME
552 value: mongodb-kubernetes-operator
553 # Community Env Vars Start
554 - name: MDB_COMMUNITY_AGENT_IMAGE
555 value: "quay.io/mongodb/mongodb-agent:108.0.2.8729-1"
556 - name: VERSION_UPGRADE_HOOK_IMAGE
557 value: "quay.io/mongodb/mongodb-kubernetes-operator-version-upgrade-post-start-hook:1.0.10"
558 - name: READINESS_PROBE_IMAGE
559 value: "quay.io/mongodb/mongodb-kubernetes-readinessprobe:1.0.23"
560 - name: MDB_COMMUNITY_IMAGE
561 value: "mongodb-community-server"
562 - name: MDB_COMMUNITY_REPO_URL
563 value: "quay.io/mongodb"
564 - name: MDB_COMMUNITY_IMAGE_TYPE
565 value: "ubi8"
566 # Community Env Vars End
567 - name: MDB_SEARCH_REPO_URL
568 value: "quay.io/mongodb"
569 - name: MDB_SEARCH_NAME
570 value: "mongodb-search"
571 - name: MDB_SEARCH_VERSION
572 value: "0.55.0"
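
Before moving on, you can optionally confirm that the operator Deployment is available and that the MongoDB custom resource definitions are registered. A minimal check might look like this:

# Wait for the operator Deployment created by the Helm chart to become available.
kubectl --context "${K8S_CTX}" -n "${MDB_NS}" rollout status deployment/mongodb-kubernetes-operator --timeout=120s

# List the MongoDB-related CRDs installed by the chart.
kubectl --context "${K8S_CTX}" get crds | grep mongodb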
4

If you have already deployed MongoDB Enterprise, skip to the next step. Otherwise, deploy the MongoDB Enterprise resource.

To deploy MongoDB Enterprise, complete the following steps:

  1. Create the ConfigMap and secret for your Ops Manager or Cloud Manager project.

    To store the project configuration and the API credentials that the Kubernetes Operator uses to connect to Ops Manager or Cloud Manager, copy, paste, and run the following commands:

    kubectl --context "${K8S_CTX}" -n "${MDB_NS}" create configmap om-project \
      --from-literal=projectName="${OPS_MANAGER_PROJECT_NAME}" --from-literal=baseUrl="${OPS_MANAGER_API_URL}" \
      --from-literal=orgId="${OPS_MANAGER_ORG_ID:-}"

    kubectl --context "${K8S_CTX}" -n "${MDB_NS}" create secret generic om-credentials \
      --from-literal=publicKey="${OPS_MANAGER_API_USER}" \
      --from-literal=privateKey="${OPS_MANAGER_API_KEY}"
  2. Create a MongoDB custom resource named mdb-rs.

    The resource defines CPU and memory resources for the mongod and mongodb-agent containers and instructs the Kubernetes Operator to configure a MongoDB replica set with three members.

    To deploy MongoDB Enterprise, copy, paste, and run the following commands in the namespace where you installed the Kubernetes Operator:

    kubectl apply --context "${K8S_CTX}" -n "${MDB_NS}" -f - <<EOF
    apiVersion: mongodb.com/v1
    kind: MongoDB
    metadata:
      name: ${MDB_RESOURCE_NAME}
    spec:
      members: ${MDB_MEMBERS}
      version: ${MDB_VERSION}
      type: ReplicaSet
      opsManager:
        configMapRef:
          name: om-project
      credentials: om-credentials
      security:
        authentication:
          enabled: true
          ignoreUnknownUsers: true
          modes:
            - SCRAM
        certsSecretPrefix: ${MDB_TLS_CERT_SECRET_PREFIX}
        tls:
          enabled: true
          ca: ${MDB_TLS_CA_CONFIGMAP}
      agent:
        logLevel: INFO
      podSpec:
        podTemplate:
          spec:
            containers:
              - name: mongodb-enterprise-database
                resources:
                  limits:
                    cpu: "2"
                    memory: 2Gi
                  requests:
                    cpu: "1"
                    memory: 1Gi
    EOF
  3. Wait for the MongoDB resource deployment to complete.

    When you apply the MongoDB custom resource, the Kubernetes Operator begins deploying the MongoDB nodes (pods). This step pauses execution until the mdb-rs resource's status phase is Running, which indicates that the MongoDB replica set is operational. If the resource doesn't reach the Running phase within the timeout, see the status-inspection sketch after the sample output below.

    echo "Waiting for MongoDB resource to reach Running phase..."
    kubectl --context "${K8S_CTX}" -n "${MDB_NS}" wait --for=jsonpath='{.status.phase}'=Running "mdb/${MDB_RESOURCE_NAME}" --timeout=400s
    echo; echo "MongoDB resource"
    kubectl --context "${K8S_CTX}" -n "${MDB_NS}" get "mdb/${MDB_RESOURCE_NAME}"
    echo; echo "Pods running in cluster ${K8S_CTX}"
    kubectl --context "${K8S_CTX}" -n "${MDB_NS}" get pods
    Waiting for MongoDB resource to reach Running phase...
    mongodb.mongodb.com/mdb-rs condition met

    MongoDB resource
    NAME     PHASE     VERSION
    mdb-rs   Running   8.2.0-ent

    Pods running in cluster minikube
    NAME                                           READY   STATUS    RESTARTS   AGE
    mdb-rs-0                                       2/2     Running   0          2m30s
    mdb-rs-1                                       2/2     Running   0          82s
    mdb-rs-2                                       2/2     Running   0          38s
    mongodb-kubernetes-operator-5776c8b4df-cppnf   1/1     Running   0          7m37s
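
    If the resource stays in a Pending or Reconciling phase instead, you can inspect the status message reported by the operator and the events for the replica set pods. This is a minimal troubleshooting sketch:

    # Print the current phase and any status message reported by the operator.
    kubectl --context "${K8S_CTX}" -n "${MDB_NS}" get "mdb/${MDB_RESOURCE_NAME}" \
      -o jsonpath='{.status.phase}{"\n"}{.status.message}{"\n"}'

    # Inspect events for the first replica set pod.
    kubectl --context "${K8S_CTX}" -n "${MDB_NS}" describe pod "${MDB_RESOURCE_NAME}-0"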
5

cert-manager is required for managing TLS certificates. If you already have cert-manager installed in your cluster, skip this step. Otherwise, install cert-manager by using Helm.

To install cert-manager in the cert-manager namespace, run the following commands in your terminal:

helm upgrade --install \
  cert-manager \
  oci://quay.io/jetstack/charts/cert-manager \
  --kube-context "${K8S_CTX}" \
  --namespace "${CERT_MANAGER_NAMESPACE}" \
  --create-namespace \
  --set crds.enabled=true

for deployment in cert-manager cert-manager-cainjector cert-manager-webhook; do
  kubectl --context "${K8S_CTX}" \
    -n "${CERT_MANAGER_NAMESPACE}" \
    wait --for=condition=Available "deployment/${deployment}" --timeout=300s
done

echo "cert-manager is ready in namespace ${CERT_MANAGER_NAMESPACE}."
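
Optionally, you can confirm that the cert-manager API resources are registered before creating issuers; a quick check might be:

# The cert-manager.io API group should list resources such as certificates and clusterissuers.
kubectl --context "${K8S_CTX}" api-resources --api-group=cert-manager.io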
6

Create the certificate authority infrastructure that will issue TLS certificates for MongoDB and MongoDBSearch resources. The commands perform the following actions:

  • Create a self-signed ClusterIssuer.

  • Generate a CA certificate.

  • Publish a cluster-wide CA issuer that all namespaces can use.

  • Expose the CA bundle through a ConfigMap so MongoDB resources can use it.

# Bootstrap a self-signed ClusterIssuer to mint the CA secret consumed by application workloads.
kubectl apply --context "${K8S_CTX}" -f - <<EOF_MANIFEST
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: ${MDB_TLS_SELF_SIGNED_ISSUER}
spec:
  selfSigned: {}
EOF_MANIFEST

kubectl --context "${K8S_CTX}" wait --for=condition=Ready clusterissuer "${MDB_TLS_SELF_SIGNED_ISSUER}"

kubectl apply --context "${K8S_CTX}" -f - <<EOF_MANIFEST
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ${MDB_TLS_CA_CERT_NAME}
  namespace: ${CERT_MANAGER_NAMESPACE}
spec:
  isCA: true
  commonName: ${MDB_TLS_CA_CERT_NAME}
  secretName: ${MDB_TLS_CA_SECRET_NAME}
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: ${MDB_TLS_SELF_SIGNED_ISSUER}
    kind: ClusterIssuer
EOF_MANIFEST

kubectl --context "${K8S_CTX}" wait --for=condition=Ready -n "${CERT_MANAGER_NAMESPACE}" certificate "${MDB_TLS_CA_CERT_NAME}"

kubectl apply --context "${K8S_CTX}" -f - <<EOF_MANIFEST
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: ${MDB_TLS_CA_ISSUER}
spec:
  ca:
    secretName: ${MDB_TLS_CA_SECRET_NAME}
EOF_MANIFEST

kubectl --context "${K8S_CTX}" wait --for=condition=Ready clusterissuer "${MDB_TLS_CA_ISSUER}"

TMP_CA_CERT="$(mktemp)"
trap 'rm -f "${TMP_CA_CERT}"' EXIT

kubectl --context "${K8S_CTX}" get secret "${MDB_TLS_CA_SECRET_NAME}" -n "${CERT_MANAGER_NAMESPACE}" -o jsonpath="{.data['ca\\.crt']}" | base64 --decode > "${TMP_CA_CERT}"

kubectl --context "${K8S_CTX}" create configmap "${MDB_TLS_CA_CONFIGMAP}" -n "${MDB_NS}" \
  --from-file=ca-pem="${TMP_CA_CERT}" --from-file=mms-ca.crt="${TMP_CA_CERT}" \
  --from-file=ca.crt="${TMP_CA_CERT}" \
  --dry-run=client -o yaml | kubectl --context "${K8S_CTX}" apply -f -
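
To confirm that the CA bundle was published correctly, you can decode the certificate stored in the ConfigMap. This is a minimal sketch, assuming openssl is installed locally:

# Print the subject and expiry of the CA certificate stored in the ConfigMap.
kubectl --context "${K8S_CTX}" -n "${MDB_NS}" get configmap "${MDB_TLS_CA_CONFIGMAP}" \
  -o jsonpath="{.data['ca\\.crt']}" | openssl x509 -noout -subject -enddate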
7

Issue TLS certificates for both the MongoDB server (${MDB_RESOURCE_NAME}-server-tls) and the MongoDBSearch service (${MDB_RESOURCE_NAME}-search-tls). The MongoDB server certificate includes all of the DNS names required for pod and service communication. Both certificates support server and client authentication.

server_certificate="${MDB_RESOURCE_NAME}-server-tls"
search_certificate="${MDB_RESOURCE_NAME}-search-tls"

mongo_dns_names=()
for ((member = 0; member < MDB_MEMBERS; member++)); do
  mongo_dns_names+=("${MDB_RESOURCE_NAME}-${member}")
  mongo_dns_names+=("${MDB_RESOURCE_NAME}-${member}.${MDB_RESOURCE_NAME}-svc.${MDB_NS}.svc.cluster.local")
done
mongo_dns_names+=(
  "${MDB_RESOURCE_NAME}-svc.${MDB_NS}.svc.cluster.local"
  "*.${MDB_RESOURCE_NAME}-svc.${MDB_NS}.svc.cluster.local"
)

search_dns_names=(
  "${MDB_RESOURCE_NAME}-search-svc.${MDB_NS}.svc.cluster.local"
)

render_dns_list() {
  local dns_list=("$@")
  for dns in "${dns_list[@]}"; do
    printf "  - \"%s\"\n" "${dns}"
  done
}

kubectl apply --context "${K8S_CTX}" -n "${MDB_NS}" -f - <<EOF_MANIFEST
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ${server_certificate}
  namespace: ${MDB_NS}
spec:
  secretName: ${MDB_TLS_SERVER_CERT_SECRET_NAME}
  issuerRef:
    name: ${MDB_TLS_CA_ISSUER}
    kind: ClusterIssuer
  duration: 240h0m0s
  renewBefore: 120h0m0s
  usages:
    - digital signature
    - key encipherment
    - server auth
    - client auth
  dnsNames:
$(render_dns_list "${mongo_dns_names[@]}")
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ${search_certificate}
  namespace: ${MDB_NS}
spec:
  secretName: ${MDB_SEARCH_TLS_SECRET_NAME}
  issuerRef:
    name: ${MDB_TLS_CA_ISSUER}
    kind: ClusterIssuer
  duration: 240h0m0s
  renewBefore: 120h0m0s
  usages:
    - digital signature
    - key encipherment
    - server auth
    - client auth
  dnsNames:
$(render_dns_list "${search_dns_names[@]}")
EOF_MANIFEST

kubectl --context "${K8S_CTX}" -n "${MDB_NS}" wait --for=condition=Ready certificate "${server_certificate}" --timeout=300s
kubectl --context "${K8S_CTX}" -n "${MDB_NS}" wait --for=condition=Ready certificate "${search_certificate}" --timeout=300s
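
You can optionally confirm that cert-manager created the two TLS secrets that the MongoDB and MongoDBSearch resources reference:

# Both secrets should exist and be of type kubernetes.io/tls.
kubectl --context "${K8S_CTX}" -n "${MDB_NS}" get secret \
  "${MDB_TLS_SERVER_CERT_SECRET_NAME}" "${MDB_SEARCH_TLS_SECRET_NAME}"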
8

MongoDB requires authentication for secure access. In this step, you create three Kubernetes secrets:

  • mdb-admin-user-password: Credentials for the MongoDB administrator.

  • mdb-user-password: Credentials for the user authorized to perform search queries.

  • mdb-rs-search-sync-source-password: Credentials for a dedicated search user that the mongot process uses internally to synchronize data and manage indexes.

The Kubernetes Operator mounts these secrets into the MongoDB pods.

To create the secrets, copy, paste, and run the following in the namespace where you deployed MongoDB Server and plan to deploy MongoDB Search and Vector Search:

# admin user with the root role
kubectl --context "${K8S_CTX}" --namespace "${MDB_NS}" \
  create secret generic mdb-admin-user-password \
  --from-literal=password="${MDB_ADMIN_USER_PASSWORD}"

kubectl apply --context "${K8S_CTX}" -n "${MDB_NS}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBUser
metadata:
  name: mdb-admin
spec:
  username: mdb-admin
  db: admin
  mongodbResourceRef:
    name: ${MDB_RESOURCE_NAME}
  passwordSecretKeyRef:
    name: mdb-admin-user-password
    key: password
  roles:
    - name: root
      db: admin
EOF

# user that MongoDB Search uses to connect to the MongoDB database and synchronize data from it.
# For MongoDB versions earlier than 8.2, the operator creates the searchCoordinator custom role automatically.
# From MongoDB 8.2, searchCoordinator is a built-in role.
kubectl --context "${K8S_CTX}" --namespace "${MDB_NS}" \
  create secret generic "${MDB_RESOURCE_NAME}-search-sync-source-password" \
  --from-literal=password="${MDB_SEARCH_SYNC_USER_PASSWORD}"
kubectl apply --context "${K8S_CTX}" -n "${MDB_NS}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBUser
metadata:
  name: search-sync-source-user
spec:
  username: search-sync-source
  db: admin
  mongodbResourceRef:
    name: ${MDB_RESOURCE_NAME}
  passwordSecretKeyRef:
    name: ${MDB_RESOURCE_NAME}-search-sync-source-password
    key: password
  roles:
    - name: searchCoordinator
      db: admin
EOF

# user performing search queries
kubectl --context "${K8S_CTX}" --namespace "${MDB_NS}" \
  create secret generic mdb-user-password \
  --from-literal=password="${MDB_USER_PASSWORD}"
kubectl apply --context "${K8S_CTX}" -n "${MDB_NS}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBUser
metadata:
  name: mdb-user
spec:
  username: mdb-user
  db: admin
  mongodbResourceRef:
    name: ${MDB_RESOURCE_NAME}
  passwordSecretKeyRef:
    name: mdb-user-password
    key: password
  roles:
    - name: readWrite
      db: sample_mflix
EOF
secret/mdb-admin-user-password created
secret/mdb-rs-search-sync-source-password created
secret/mdb-user-password created
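
After the operator reconciles these resources, you can list them to confirm that each user was created; for example:

# Each MongoDBUser resource reports a status phase once the operator has processed it.
kubectl --context "${K8S_CTX}" -n "${MDB_NS}" get mongodbusers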
9

You can deploy a single MongoDB Search instance without any load balancing. To deploy it, complete the following steps:

  1. Create a MongoDBSearch custom resource named mdb-rs.

    This resource specifies the CPU and memory resource requirements for the search nodes. To learn more about the settings in this custom resource, see MongoDB Search and Vector Search Settings.

    kubectl apply --context "${K8S_CTX}" -n "${MDB_NS}" -f - <<EOF
    apiVersion: mongodb.com/v1
    kind: MongoDBSearch
    metadata:
      name: ${MDB_RESOURCE_NAME}
    spec:
      # There is no need to specify source.mongodbResourceRef if the MongoDBSearch CR has the same name as the MongoDB CR;
      # the operator infers it automatically.
      security:
        tls:
          certificateKeySecretRef:
            name: ${MDB_SEARCH_TLS_SECRET_NAME}
      resourceRequirements:
        limits:
          cpu: "3"
          memory: 5Gi
        requests:
          cpu: "2"
          memory: 3Gi
    EOF
  2. Wait for the MongoDBSearch resource deployment to complete.

    When you apply the MongoDBSearch custom resource, the Kubernetes Operator begins deploying the search nodes (pods). This step pauses execution until the MongoDBSearch resource's status phase is Running, which indicates that the search nodes are operational. You can then optionally check the search pod and service as shown in the sketch after the following commands.

    echo "Waiting for MongoDBSearch resource to reach Running phase..."
    kubectl --context "${K8S_CTX}" -n "${MDB_NS}" wait --for=jsonpath='{.status.phase}'=Running "mdbs/${MDB_RESOURCE_NAME}" --timeout=300s
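
    To verify the MongoDB Search deployment itself, you can check the search pod and its service. The names below follow the pattern shown in the sample output later in this procedure:

    # The search deployment creates a pod named <resource>-search-0 and a <resource>-search-svc service.
    kubectl --context "${K8S_CTX}" -n "${MDB_NS}" get pod "${MDB_RESOURCE_NAME}-search-0"
    kubectl --context "${K8S_CTX}" -n "${MDB_NS}" get svc "${MDB_RESOURCE_NAME}-search-svc"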
10

Ensure that the MongoDB resource deployment with MongoDBSearch was successful.

echo "Waiting for MongoDB resource to reach Running phase..."
kubectl --context "${K8S_CTX}" -n "${MDB_NS}" wait --for=jsonpath='{.status.phase}'=Running "mdb/${MDB_RESOURCE_NAME}" --timeout=400s
11

View all the running pods in your namespace: the MongoDB replica set members, the MongoDB Controllers for Kubernetes Operator, and the search nodes.

echo; echo "MongoDB resource"
kubectl --context "${K8S_CTX}" -n "${MDB_NS}" get "mdb/${MDB_RESOURCE_NAME}"
echo; echo "MongoDBSearch resource"
kubectl --context "${K8S_CTX}" -n "${MDB_NS}" get "mdbs/${MDB_RESOURCE_NAME}"
echo; echo "Pods running in cluster ${K8S_CTX}"
kubectl --context "${K8S_CTX}" -n "${MDB_NS}" get pods
MongoDB resource
NAME     PHASE     VERSION     TYPE         AGE
mdb-rs   Running   8.2.0-ent   ReplicaSet   4m7s

MongoDBSearch resource
NAME     PHASE     VERSION   AGE
mdb-rs   Running   0.55.0    93s

Pods running in cluster kind-kind
NAME                                          READY   STATUS    RESTARTS      AGE
mdb-rs-0                                      1/1     Running   0             4m6s
mdb-rs-1                                      1/1     Running   0             3m42s
mdb-rs-2                                      1/1     Running   0             3m2s
mdb-rs-search-0                               1/1     Running   3 (52s ago)   93s
mongodb-kubernetes-operator-8d9b999b7-859gc   1/1     Running   0             4m25s
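
If you want to confirm that the mongot process started cleanly, you can inspect the logs of the search pod; for example:

# Tail the logs of the MongoDB Search pod.
kubectl --context "${K8S_CTX}" -n "${MDB_NS}" logs "${MDB_RESOURCE_NAME}-search-0" --tail=50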

Now that you've successfully deployed MongoDB Search and Vector Search to use with MongoDB Enterprise Edition, you can add data to your MongoDB cluster, create MongoDB Search and Vector Search indexes, and run queries against your data. To learn more, see MongoDB Search and Vector Search Settings.
