
Milvus vector database on Kubernetes: Helm deployment wired to external components (etcd / S3 / Pulsar)

Source: https://blog.csdn.net/IT_Octopus/article/details/140443992

Background: I ran into plenty of pitfalls along the way, from trivial to nasty, and this article tries to lay out as many of them as possible, in the order they show up in the installation and deployment flow.
If you are reading this around mid-July 2024, you will find that pulling images from Docker Hub simply fails. There is no clean fix for this; route your image pulls through a proxy.
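The usual way to do that on a Docker host is to point the Docker daemon at an HTTP(S) proxy. A minimal sketch, assuming Docker runs under systemd; the proxy address http://127.0.0.1:7890 is a placeholder you must replace with your own:

# Route the Docker daemon through an HTTP(S) proxy so `docker pull` can reach Docker Hub.
# The proxy URL below is a placeholder -- substitute your own.
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://127.0.0.1:7890"
Environment="HTTPS_PROXY=http://127.0.0.1:7890"
Environment="NO_PROXY=localhost,127.0.0.1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker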

Official deployment guide: https://milvus.io/docs/install_cluster-helm.md
1. If you just want to deploy as-is, without wiring in external components, follow the online deployment directly. But keep the issue above in mind: pull the required images through the proxy first (a pre-pull sketch follows the image list below).
The images are:

milvusdb/milvus:v2.4.5
milvusdb/milvus-config-tool:v0.1.2
docker.io/milvusdb/etcd:3.5.5-r4
zilliz/attu:v2.3.10

(The tags are the ones referenced in the values.yaml below.)
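A minimal pre-pull sketch, assuming you pull on one proxy-enabled machine and then ship the images to the Kubernetes nodes; node1 is a placeholder for your actual node names (repeat for each node, and use docker load instead of ctr if your nodes run Docker rather than containerd):

# Pull the images with the tags referenced by values.yaml.
for img in milvusdb/milvus:v2.4.5 \
           milvusdb/milvus-config-tool:v0.1.2 \
           milvusdb/etcd:3.5.5-r4 \
           zilliz/attu:v2.3.10; do
  docker pull "$img"
done

# Export them once, then import them on each Kubernetes node (containerd runtime shown).
docker save milvusdb/milvus:v2.4.5 milvusdb/milvus-config-tool:v0.1.2 \
            milvusdb/etcd:3.5.5-r4 zilliz/attu:v2.3.10 -o milvus-images.tar
scp milvus-images.tar node1:/tmp/
ssh node1 'ctr -n k8s.io images import /tmp/milvus-images.tar'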

values.yaml

## Enable or disable Milvus Cluster mode
cluster:
  enabled: true

image:
  all:
    repository: milvusdb/milvus
    tag: v2.4.5
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ##
    # pullSecrets:
    #   - myRegistryKeySecretName
  tools:
    repository: milvusdb/milvus-config-tool
    tag: v0.1.2
    pullPolicy: IfNotPresent

# Global node selector
# If set, this will apply to all milvus components
# Individual components can be set to a different node selector
nodeSelector: {}

# Global tolerations
# If set, this will apply to all milvus components
# Individual components can be set to a different tolerations
tolerations: []

# Global affinity
# If set, this will apply to all milvus components
# Individual components can be set to a different affinity
affinity: {}

# Global labels and annotations
# If set, this will apply to all milvus components
labels: {}
annotations: {}

# Extra configs for milvus.yaml
# If set, this config will merge into milvus.yaml
# Please follow the config structure in the milvus.yaml
# at https://github.com/milvus-io/milvus/blob/master/configs/milvus.yaml
# Note: this config will be the top priority which will override the config
# in the image and helm chart.
extraConfigFiles:
  user.yaml: |+
    #    For example enable rest http for milvus proxy
    #    proxy:
    #      http:
    #        enabled: true
    #      maxUserNum: 100
    #      maxRoleNum: 10
    ##  Enable tlsMode and set the tls cert and key
    #  tls:
    #    serverPemPath: /etc/milvus/certs/tls.crt
    #    serverKeyPath: /etc/milvus/certs/tls.key
    #   common:
    #     security:
    #       tlsMode: 1

## Expose the Milvus service to be accessed from outside the cluster (LoadBalancer service).
## or access it from within the cluster (ClusterIP service). Set the service type and the port to serve it.
## ref: http://kubernetes.io/docs/user-guide/services/
##
service:
  type: NodePort
  port: 19530
  portName: milvus
  nodePort: ""
  annotations: {}
  labels: {}
  ## List of IP addresses at which the Milvus service is available
  ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
  ##
  externalIPs: []
  #   - externalIp1
  # LoadBalancerSourcesRange is a list of allowed CIDR values, which are combined with ServicePort to
  # set allowed inbound rules on the security group assigned to the master load balancer
  loadBalancerSourceRanges:
  - 0.0.0.0/0
  # Optionally assign a known public LB IP
  # loadBalancerIP: 1.2.3.4

ingress:
  enabled: false
  annotations:
    # Annotation example: set nginx ingress type
    # kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: GRPC
    nginx.ingress.kubernetes.io/listen-ports-ssl: '[19530]'
    nginx.ingress.kubernetes.io/proxy-body-size: 4m
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  labels: {}
  rules:
    - host: "milvus-example.local"
      path: "/"
      pathType: "Prefix"
    # - host: "milvus-example2.local"
    #   path: "/otherpath"
    #   pathType: "Prefix"
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - milvus-example.local

serviceAccount:
  create: false
  name:
  annotations:
  labels:

metrics:
  enabled: true
  serviceMonitor:
    # Set this to `true` to create ServiceMonitor for Prometheus operator
    enabled: false
    interval: "30s"
    scrapeTimeout: "10s"
    # Additional labels that can be used so ServiceMonitor will be discovered by Prometheus
    additionalLabels: {}

livenessProbe:
  enabled: true
  initialDelaySeconds: 90
  periodSeconds: 30
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 5

readinessProbe:
  enabled: true
  initialDelaySeconds: 90
  periodSeconds: 10
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 5

log:
  level: "info"
  file:
    maxSize: 300    # MB
    maxAge: 10    # day
    maxBackups: 20
  format: "text"    # text/json
  persistence:
    mountPath: "/milvus/logs"
    ## If true, create/use a Persistent Volume Claim
    ## If false, use emptyDir
    ##
    enabled: false
    annotations:
      helm.sh/resource-policy: keep
    persistentVolumeClaim:
      existingClaim: ""
      ## Milvus Logs Persistent Volume Storage Class
      ## If defined, storageClassName: <storageClass>
      ## If set to "-", storageClassName: "", which disables dynamic provisioning
      ## If undefined (the default) or set to null, no storageClassName spec is
      ##   set, choosing the default provisioner.
      ## ReadWriteMany access mode required for milvus cluster.
      ##
      storageClass:
      accessModes: ReadWriteMany
      size: 10Gi
      subPath: ""

## Heaptrack traces all memory allocations and annotates these events with stack traces.
## See more: https://github.com/KDE/heaptrack
## Enable heaptrack in production is not recommended.
heaptrack:image:repository: milvusdb/heaptracktag: v0.1.0pullPolicy: IfNotPresentstandalone:replicas: 1  # Run standalone mode with replication disabledresources: {}# Set local storage size in resources# resources:#   limits:#     ephemeral-storage: 100GinodeSelector: {}affinity: {}tolerations: []extraEnv: []heaptrack:enabled: falsedisk:enabled: truesize:enabled: false  # Enable local storage size limitprofiling:enabled: false  # Enable live profiling## Default message queue for milvus standalone## Supported value: rocksmq, natsmq, pulsar and kafkamessageQueue: rocksmqpersistence:mountPath: "/var/lib/milvus"## If true, alertmanager will create/use a Persistent Volume Claim## If false, use emptyDir##enabled: trueannotations:helm.sh/resource-policy: keeppersistentVolumeClaim:existingClaim: ""## Milvus Persistent Volume Storage Class## If defined, storageClassName: <storageClass>## If set to "-", storageClassName: "", which disables dynamic provisioning## If undefined (the default) or set to null, no storageClassName spec is##   set, choosing the default provisioner.##storageClass: "csi-driver-s3"accessModes: ReadWriteOncesize: 50GisubPath: ""proxy:enabled: true# You can set the number of replicas to -1 to remove the replicas field in case you want to use HPAreplicas: 1resources: {}nodeSelector: {}affinity: {}tolerations: []extraEnv: []heaptrack:enabled: falseprofiling:enabled: false  # Enable live profilinghttp:enabled: true  # whether to enable http rest serverdebugMode:enabled: false# Mount a TLS secret into proxy podtls:enabled: false
## when enabling proxy.tls, all items below should be uncommented and the key and crt values should be populated.
#    enabled: true
#    secretName: milvus-tls
## expecting base64 encoded values here: i.e. $(cat tls.crt | base64 -w 0) and $(cat tls.key | base64 -w 0)
#    key: LS0tLS1CRUdJTiBQU--REDUCT
#    crt: LS0tLS1CRUdJTiBDR--REDUCT
#  volumes:
#  - secret:
#      secretName: milvus-tls
#    name: milvus-tls
#  volumeMounts:
#  - mountPath: /etc/milvus/certs/
#    name: milvus-tls# Deployment strategy, default is RollingUpdate# Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deploymentstrategy: {}rootCoordinator:enabled: true# You can set the number of replicas greater than 1, only if enable active standbyreplicas: 1  # Run Root Coordinator mode with replication disabledresources: {}nodeSelector: {}affinity: {}tolerations: []extraEnv: []heaptrack:enabled: falseprofiling:enabled: false  # Enable live profilingactiveStandby:enabled: true  # Enable active-standby when you set multiple replicas for root coordinator# Deployment strategy, default is RollingUpdate# Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deploymentstrategy: {}service:port: 53100annotations: {}labels: {}clusterIP: ""queryCoordinator:enabled: true# You can set the number of replicas greater than 1, only if enable active standbyreplicas: 1  # Run Query Coordinator mode with replication disabledresources: {}nodeSelector: {}affinity: {}tolerations: []extraEnv: []heaptrack:enabled: falseprofiling:enabled: false  # Enable live profilingactiveStandby:enabled: true  # Enable active-standby when you set multiple replicas for query coordinator# Deployment strategy, default is RollingUpdate# Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deploymentstrategy: {}service:port: 19531annotations: {}labels: {}clusterIP: ""queryNode:enabled: true# You can set the number of replicas to -1 to remove the replicas field in case you want to use HPAreplicas: 1resources: {}# Set local storage size in resources# resources:#   limits:#     ephemeral-storage: 100GinodeSelector: {}affinity: {}tolerations: []extraEnv: []heaptrack:enabled: falsedisk:enabled: true  # Enable querynode load disk index, and search on disk indexsize:enabled: false  # Enable local storage size limitprofiling:enabled: false  # Enable live profiling# Deployment strategy, default is RollingUpdate# Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deploymentstrategy: {}indexCoordinator:enabled: true# You can set the number of replicas greater than 1, only if enable active standbyreplicas: 1   # Run Index Coordinator mode with replication disabledresources: {}nodeSelector: {}affinity: {}tolerations: []extraEnv: []heaptrack:enabled: falseprofiling:enabled: false  # Enable live profilingactiveStandby:enabled: true  # Enable active-standby when you set multiple replicas for index coordinator# Deployment strategy, default is RollingUpdate# Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deploymentstrategy: {}service:port: 31000annotations: {}labels: {}clusterIP: ""indexNode:enabled: true# You can set the number of replicas to -1 to remove the replicas field in case you want to use HPAreplicas: 1resources: {}# Set local storage size in resources# limits:#    ephemeral-storage: 100GinodeSelector: {}affinity: {}tolerations: []extraEnv: []heaptrack:enabled: falseprofiling:enabled: false  # Enable live profilingdisk:enabled: true  # Enable index node build disk vector indexsize:enabled: false  # Enable local storage size limit# Deployment strategy, default is RollingUpdate# Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deploymentstrategy: {}dataCoordinator:enabled: true# You can set the number of replicas greater than 1, only if enable active standbyreplicas: 1           # Run Data Coordinator 
mode with replication disabledresources: {}nodeSelector: {}affinity: {}tolerations: []extraEnv: []heaptrack:enabled: falseprofiling:enabled: false  # Enable live profilingactiveStandby:enabled: true  # Enable active-standby when you set multiple replicas for data coordinator# Deployment strategy, default is RollingUpdate# Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deploymentstrategy: {}service:port: 13333annotations: {}labels: {}clusterIP: ""dataNode:enabled: true# You can set the number of replicas to -1 to remove the replicas field in case you want to use HPAreplicas: 1resources: {}nodeSelector: {}affinity: {}tolerations: []extraEnv: []heaptrack:enabled: falseprofiling:enabled: false  # Enable live profiling# Deployment strategy, default is RollingUpdate# Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deploymentstrategy: {}## mixCoordinator contains all coord
## If you want to use mixcoord, enable this and disable all of other coords
mixCoordinator:enabled: false# You can set the number of replicas greater than 1, only if enable active standbyreplicas: 1           # Run Mixture Coordinator mode with replication disabledresources: {}nodeSelector: {}affinity: {}tolerations: []extraEnv: []heaptrack:enabled: falseprofiling:enabled: false  # Enable live profilingactiveStandby:enabled: true  # Enable active-standby when you set multiple replicas for Mixture coordinator# Deployment strategy, default is RollingUpdate# Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deploymentstrategy: {}service:annotations: {}labels: {}clusterIP: ""attu:enabled: truename: attuimage:repository: zilliz/attutag: v2.3.10pullPolicy: IfNotPresentservice:annotations: {}labels: {}type: NodePortport: 3000# loadBalancerIP: ""resources: {}podLabels: {}ingress:enabled: falseannotations: {}# Annotation example: set nginx ingress type# kubernetes.io/ingress.class: nginxlabels: {}hosts:- milvus-attu.localtls: []#  - secretName: chart-attu-tls#    hosts:#      - milvus-attu.local## Configuration values for the minio dependency
## ref: https://github.com/zilliztech/milvus-helm/blob/master/charts/minio/README.md
##minio:enabled: falsename: miniomode: distributedimage:tag: "RELEASE.2023-03-20T20-16-18Z"pullPolicy: IfNotPresentaccessKey: minioadminsecretKey: minioadminexistingSecret: ""bucketName: "milvus-bucket"rootPath: fileuseIAM: falseiamEndpoint: ""region: ""useVirtualHost: falsepodDisruptionBudget:enabled: falseresources:requests:memory: 2Giservice:type: ClusterIPport: 9000persistence:enabled: trueexistingClaim: ""storageClass: "csi-driver-s3"accessMode: ReadWriteOncesize: 500GilivenessProbe:enabled: trueinitialDelaySeconds: 5periodSeconds: 5timeoutSeconds: 5successThreshold: 1failureThreshold: 5readinessProbe:enabled: trueinitialDelaySeconds: 5periodSeconds: 5timeoutSeconds: 1successThreshold: 1failureThreshold: 5startupProbe:enabled: trueinitialDelaySeconds: 0periodSeconds: 10timeoutSeconds: 5successThreshold: 1failureThreshold: 60## Configuration values for the etcd dependency
## ref: https://artifacthub.io/packages/helm/bitnami/etcd
##etcd:enabled: falsename: etcdreplicaCount: 3pdb:create: falseimage:repository: "milvusdb/etcd"tag: "3.5.5-r4"pullPolicy: IfNotPresentservice:type: ClusterIPport: 2379peerPort: 2380auth:rbac:enabled: falsepersistence:enabled: truestorageClass: "csi-driver-s3"accessMode: ReadWriteOncesize: 10Gi## Change default timeout periods to mitigate zoobie probe processlivenessProbe:enabled: truetimeoutSeconds: 10readinessProbe:enabled: trueperiodSeconds: 20timeoutSeconds: 10## Enable auto compaction## compaction by every 1000 revision##autoCompactionMode: revisionautoCompactionRetention: "1000"## Increase default quota to 4G##extraEnvVars:- name: ETCD_QUOTA_BACKEND_BYTESvalue: "4294967296"- name: ETCD_HEARTBEAT_INTERVALvalue: "500"- name: ETCD_ELECTION_TIMEOUTvalue: "2500"## Configuration values for the pulsar dependency
## ref: https://github.com/apache/pulsar-helm-chart
##pulsar:enabled: falsename: pulsarfullnameOverride: ""persistence: truemaxMessageSize: "5242880"  # 5 * 1024 * 1024 Bytes, Maximum size of each message in pulsar.rbac:enabled: falsepsp: falselimit_to_namespace: trueaffinity:anti_affinity: false## enableAntiAffinity: nocomponents:zookeeper: truebookkeeper: true# bookkeeper - autorecoveryautorecovery: truebroker: truefunctions: falseproxy: truetoolset: falsepulsar_manager: falsemonitoring:prometheus: falsegrafana: falsenode_exporter: falsealert_manager: falseimages:broker:repository: apachepulsar/pulsarpullPolicy: IfNotPresenttag: 2.8.2autorecovery:repository: apachepulsar/pulsartag: 2.8.2pullPolicy: IfNotPresentzookeeper:repository: apachepulsar/pulsarpullPolicy: IfNotPresenttag: 2.8.2bookie:repository: apachepulsar/pulsarpullPolicy: IfNotPresenttag: 2.8.2proxy:repository: apachepulsar/pulsarpullPolicy: IfNotPresenttag: 2.8.2pulsar_manager:repository: apachepulsar/pulsar-managerpullPolicy: IfNotPresenttag: v0.1.0zookeeper:resources:requests:memory: 1024Micpu: 0.3configData:PULSAR_MEM: >-Xms1024m-Xmx1024mPULSAR_GC: >-Dcom.sun.management.jmxremote-Djute.maxbuffer=10485760-XX:+ParallelRefProcEnabled-XX:+UnlockExperimentalVMOptions-XX:+DoEscapeAnalysis-XX:+DisableExplicitGC-XX:+PerfDisableSharedMem-Dzookeeper.forceSync=nopdb:usePolicy: falsebookkeeper:replicaCount: 3volumes:journal:name: journalsize: 100Giledgers:name: ledgerssize: 200Giresources:requests:memory: 2048Micpu: 1configData:PULSAR_MEM: >-Xms4096m-Xmx4096m-XX:MaxDirectMemorySize=8192mPULSAR_GC: >-Dio.netty.leakDetectionLevel=disabled-Dio.netty.recycler.linkCapacity=1024-XX:+UseG1GC -XX:MaxGCPauseMillis=10-XX:+ParallelRefProcEnabled-XX:+UnlockExperimentalVMOptions-XX:+DoEscapeAnalysis-XX:ParallelGCThreads=32-XX:ConcGCThreads=32-XX:G1NewSizePercent=50-XX:+DisableExplicitGC-XX:-ResizePLAB-XX:+ExitOnOutOfMemoryError-XX:+PerfDisableSharedMem-XX:+PrintGCDetailsnettyMaxFrameSizeBytes: "104867840"pdb:usePolicy: falsebroker:component: brokerpodMonitor:enabled: falsereplicaCount: 1resources:requests:memory: 4096Micpu: 1.5configData:PULSAR_MEM: >-Xms4096m-Xmx4096m-XX:MaxDirectMemorySize=8192mPULSAR_GC: >-Dio.netty.leakDetectionLevel=disabled-Dio.netty.recycler.linkCapacity=1024-XX:+ParallelRefProcEnabled-XX:+UnlockExperimentalVMOptions-XX:+DoEscapeAnalysis-XX:ParallelGCThreads=32-XX:ConcGCThreads=32-XX:G1NewSizePercent=50-XX:+DisableExplicitGC-XX:-ResizePLAB-XX:+ExitOnOutOfMemoryErrormaxMessageSize: "104857600"defaultRetentionTimeInMinutes: "10080"defaultRetentionSizeInMB: "-1"backlogQuotaDefaultLimitGB: "8"ttlDurationDefaultInSeconds: "259200"subscriptionExpirationTimeMinutes: "3"backlogQuotaDefaultRetentionPolicy: producer_exceptionpdb:usePolicy: falseautorecovery:resources:requests:memory: 512Micpu: 1proxy:replicaCount: 1podMonitor:enabled: falseresources:requests:memory: 2048Micpu: 1service:type: ClusterIPports:pulsar: 6650configData:PULSAR_MEM: >-Xms2048m -Xmx2048mPULSAR_GC: >-XX:MaxDirectMemorySize=2048mhttpNumThreads: "100"pdb:usePolicy: falsepulsar_manager:service:type: ClusterIPpulsar_metadata:component: pulsar-initimage:# the image used for running `pulsar-cluster-initialize` jobrepository: apachepulsar/pulsartag: 2.8.2## Configuration values for the kafka dependency
## ref: https://artifacthub.io/packages/helm/bitnami/kafka
##kafka:enabled: falsename: kafkareplicaCount: 3image:repository: bitnami/kafkatag: 3.1.0-debian-10-r52## Increase graceful termination for kafka graceful shutdownterminationGracePeriodSeconds: "90"pdb:create: false## Enable startup probe to prevent pod restart during recoveringstartupProbe:enabled: true## Kafka Java Heap sizeheapOpts: "-Xmx4096m -Xms4096m"maxMessageBytes: _10485760defaultReplicationFactor: 3offsetsTopicReplicationFactor: 3## Only enable time based log retentionlogRetentionHours: 168logRetentionBytes: _-1extraEnvVars:- name: KAFKA_CFG_MAX_PARTITION_FETCH_BYTESvalue: "5242880"- name: KAFKA_CFG_MAX_REQUEST_SIZEvalue: "5242880"- name: KAFKA_CFG_REPLICA_FETCH_MAX_BYTESvalue: "10485760"- name: KAFKA_CFG_FETCH_MESSAGE_MAX_BYTESvalue: "5242880"- name: KAFKA_CFG_LOG_ROLL_HOURSvalue: "24"persistence:enabled: truestorageClass:accessMode: ReadWriteOncesize: 300Gimetrics:## Prometheus Kafka exporter: exposes complimentary metrics to JMX exporterkafka:enabled: falseimage:repository: bitnami/kafka-exportertag: 1.4.2-debian-10-r182## Prometheus JMX exporter: exposes the majority of Kafkas metricsjmx:enabled: falseimage:repository: bitnami/jmx-exportertag: 0.16.1-debian-10-r245## To enable serviceMonitor, you must enable either kafka exporter or jmx exporter.## And you can enable them bothserviceMonitor:enabled: falseservice:type: ClusterIPports:client: 9092zookeeper:enabled: truereplicaCount: 3###################################
# External S3
# - these configs are only used when `externalS3.enabled` is true
###################################
externalS3:
  enabled: true
  host: "172.20.1.124"
  port: "9000"
  accessKey: "minioadmin"
  secretKey: "minioadmin"
  useSSL: false
  bucketName: "milvus-dev"
  rootPath: ""
  useIAM: false
  cloudProvider: "aws"
  iamEndpoint: ""
  region: ""
  useVirtualHost: false

###################################
# GCS Gateway
# - these configs are only used when `minio.gcsgateway.enabled` is true
###################################
externalGcs:
  bucketName: ""

###################################
# External etcd
# - these configs are only used when `externalEtcd.enabled` is true
###################################
externalEtcd:
  enabled: true
  ## the endpoints of the external etcd
  ##
  endpoints:
    - xxxx:23790

###################################
# External pulsar
# - these configs are only used when `externalPulsar.enabled` is true
###################################
externalPulsar:
  enabled: true
  host: "xxx"
  port: 30012
  maxMessageSize: "5242880"  # 5 * 1024 * 1024 Bytes, Maximum size of each message in pulsar.
  tenant: "xx"
  namespace: "xxx"
  authPlugin: "org.apache.pulsar.client.impl.auth.AuthenticationToken"
  authParams: token:"xxx"

###################################
# External kafka
# - these configs are only used when `externalKafka.enabled` is true
# - note that the following are just examples, you should confirm the
#   value of brokerList and mechanisms according to the actual external
#   Kafka configuration. E.g. If you select the AWS MSK, the configuration
#   should look something like this:
#   externalKafka:
#     enabled: true
#     brokerList: "xxxx:9096"
#     securityProtocol: SASL_SSL
#     sasl:
#       mechanisms: SCRAM-SHA-512
#       password: "xxx"
#       username: "xxx"
###################################
externalKafka:
  enabled: false
  brokerList: localhost:9092
  securityProtocol: SASL_SSL
  sasl:
    mechanisms: PLAIN
    username: ""
    password: ""
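With this values.yaml in hand, the chart can be installed, or rendered to a static manifest like the one in the next section. A sketch, assuming the release name my-release (matching the manifest below) and an arbitrary milvus namespace; the chart repository URL is the one given in the Milvus docs:

# Add the Milvus Helm repository and install with the customized values.yaml.
helm repo add milvus https://zilliztech.github.io/milvus-helm/
helm repo update

# Option A: install straight from the chart.
helm install my-release milvus/milvus -f values.yaml --namespace milvus --create-namespace

# Option B: render a static manifest first, then apply it with kubectl
# (the milvus_manifest.yaml in the next section is such a rendered output).
helm template my-release milvus/milvus -f values.yaml > milvus_manifest.yaml
kubectl apply -f milvus_manifest.yaml -n milvus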

The deployable Kubernetes manifest, milvus_manifest.yaml:

---
# Source: milvus/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-release-milvus
data:
  default.yaml: |+
    # Copyright (C) 2019-2021 Zilliz. All rights reserved.
    #
    # Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance
    # with the License. You may obtain a copy of the License at
    #
    # http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software distributed under the License
    # is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
    # or implied. See the License for the specific language governing permissions and limitations under the License.
    etcd:
      endpoints:
        - xxxx:23790
    metastore:
      type: etcd
    minio:
      address: xxxx
      port: 9000
      accessKeyID: minioadmin
      secretAccessKey: minioadmin
      useSSL: false
      bucketName: milvus-dev
      rootPath:
      useIAM: false
      cloudProvider: aws
      iamEndpoint:
      region:
      useVirtualHost: false
    mq:
      type: pulsar
    messageQueue: pulsar
    pulsar:
      address: xxx
      port: 6650
      maxMessageSize: 5242880
      tenant: "my-tenant"
      namespace: my-namespace
    rootCoord:
      address: my-release-milvus-rootcoord
      port: 53100
      enableActiveStandby: true  # Enable rootcoord active-standby
    proxy:
      port: 19530
      internalPort: 19529
    queryCoord:
      address: my-release-milvus-querycoord
      port: 19531
      enableActiveStandby: true  # Enable querycoord active-standby
    queryNode:
      port: 21123
      enableDisk: true # Enable querynode load disk index, and search on disk index
    indexCoord:
      address: my-release-milvus-indexcoord
      port: 31000
      enableActiveStandby: true  # Enable indexcoord active-standby
    indexNode:
      port: 21121
      enableDisk: true # Enable index node build disk vector index
    dataCoord:
      address: my-release-milvus-datacoord
      port: 13333
      enableActiveStandby: true  # Enable datacoord active-standby
    dataNode:
      port: 21124
    log:
      level: info
      file:
        rootPath: ""
        maxSize: 300
        maxAge: 10
        maxBackups: 20
      format: text
  user.yaml: |-
    #    For example enable rest http for milvus proxy
    #    proxy:
    #      http:
    #        enabled: true
    #      maxUserNum: 100
    #      maxRoleNum: 10
    ##  Enable tlsMode and set the tls cert and key
    #  tls:
    #    serverPemPath: /etc/milvus/certs/tls.crt
    #    serverKeyPath: /etc/milvus/certs/tls.key
    #   common:
    #     security:
    #       tlsMode: 1
---
# Source: milvus/templates/attu-svc.yaml
apiVersion: v1
kind: Service
metadata:name: my-release-milvus-attulabels:helm.sh/chart: milvus-4.1.34app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releaseapp.kubernetes.io/version: "2.4.5"app.kubernetes.io/managed-by: Helmcomponent: "attu"
spec:type: NodePortports:- name: attuprotocol: TCPport: 3000targetPort: 3000selector:app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releasecomponent: "attu"
---
# Source: milvus/templates/datacoord-svc.yaml
apiVersion: v1
kind: Service
metadata:name: my-release-milvus-datacoordlabels:helm.sh/chart: milvus-4.1.34app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releaseapp.kubernetes.io/version: "2.4.5"app.kubernetes.io/managed-by: Helmcomponent: "datacoord"
spec:type: ClusterIPports:- name: datacoordport: 13333protocol: TCPtargetPort: datacoord- name: metricsprotocol: TCPport: 9091targetPort: metricsselector:app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releasecomponent: "datacoord"
---
# Source: milvus/templates/datanode-svc.yaml
apiVersion: v1
kind: Service
metadata:name: my-release-milvus-datanodelabels:helm.sh/chart: milvus-4.1.34app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releaseapp.kubernetes.io/version: "2.4.5"app.kubernetes.io/managed-by: Helmcomponent: "datanode"
spec:type: ClusterIPclusterIP: Noneports:- name: metricsprotocol: TCPport: 9091targetPort: metricsselector:app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releasecomponent: "datanode"
---
# Source: milvus/templates/indexcoord-svc.yaml
apiVersion: v1
kind: Service
metadata:name: my-release-milvus-indexcoordlabels:helm.sh/chart: milvus-4.1.34app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releaseapp.kubernetes.io/version: "2.4.5"app.kubernetes.io/managed-by: Helmcomponent: "indexcoord"
spec:type: ClusterIPports:- name: indexcoordport: 31000protocol: TCPtargetPort: indexcoord- name: metricsprotocol: TCPport: 9091targetPort: metricsselector:app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releasecomponent: "indexcoord"
---
# Source: milvus/templates/indexnode-svc.yaml
apiVersion: v1
kind: Service
metadata:name: my-release-milvus-indexnodelabels:helm.sh/chart: milvus-4.1.34app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releaseapp.kubernetes.io/version: "2.4.5"app.kubernetes.io/managed-by: Helmcomponent: "indexnode"
spec:type: ClusterIPclusterIP: Noneports:- name: metricsprotocol: TCPport: 9091targetPort: metricsselector:app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releasecomponent: "indexnode"
---
# Source: milvus/templates/querycoord-svc.yaml
apiVersion: v1
kind: Service
metadata:name: my-release-milvus-querycoordlabels:helm.sh/chart: milvus-4.1.34app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releaseapp.kubernetes.io/version: "2.4.5"app.kubernetes.io/managed-by: Helmcomponent: "querycoord"
spec:type: ClusterIPports:- name: querycoordport: 19531protocol: TCPtargetPort: querycoord- name: metricsprotocol: TCPport: 9091targetPort: metricsselector:app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releasecomponent: "querycoord"
---
# Source: milvus/templates/querynode-svc.yaml
apiVersion: v1
kind: Service
metadata:name: my-release-milvus-querynodelabels:helm.sh/chart: milvus-4.1.34app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releaseapp.kubernetes.io/version: "2.4.5"app.kubernetes.io/managed-by: Helmcomponent: "querynode"
spec:type: ClusterIPclusterIP: Noneports:- name: metricsprotocol: TCPport: 9091targetPort: metricsselector:app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releasecomponent: "querynode"
---
# Source: milvus/templates/rootcoord-svc.yaml
apiVersion: v1
kind: Service
metadata:name: my-release-milvus-rootcoordlabels:helm.sh/chart: milvus-4.1.34app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releaseapp.kubernetes.io/version: "2.4.5"app.kubernetes.io/managed-by: Helmcomponent: "rootcoord"
spec:type: ClusterIPports:- name: rootcoordport: 53100protocol: TCPtargetPort: rootcoord- name: metricsprotocol: TCPport: 9091targetPort: metricsselector:app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releasecomponent: "rootcoord"
---
# Source: milvus/templates/service.yaml
apiVersion: v1
kind: Service
metadata:name: my-release-milvuslabels:helm.sh/chart: milvus-4.1.34app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releaseapp.kubernetes.io/version: "2.4.5"app.kubernetes.io/managed-by: Helmcomponent: "proxy"
spec:type: NodePortports:- name: milvusport: 19530protocol: TCPtargetPort: milvus- name: metricsprotocol: TCPport: 9091targetPort: metricsselector:app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releasecomponent: "proxy"
---
# Source: milvus/templates/attu-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:name: my-release-milvus-attulabels:helm.sh/chart: milvus-4.1.34app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releaseapp.kubernetes.io/version: "2.4.5"app.kubernetes.io/managed-by: Helmcomponent: "attu"spec:replicas: 1selector:matchLabels:app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releasecomponent: "attu"template:metadata:labels:app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releasecomponent: "attu"spec:containers:- name: attuimage: zilliz/attu:v2.3.10imagePullPolicy: IfNotPresentports:- name: attucontainerPort: 3000protocol: TCPenv:- name: MILVUS_URLvalue: http://my-release-milvus:19530resources:{}
---
# Source: milvus/templates/datacoord-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:name: my-release-milvus-datacoordlabels:helm.sh/chart: milvus-4.1.34app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releaseapp.kubernetes.io/version: "2.4.5"app.kubernetes.io/managed-by: Helmcomponent: "datacoord"annotations:spec:replicas: 1selector:matchLabels:app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releasecomponent: "datacoord"template:metadata:labels:app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releasecomponent: "datacoord"annotations:checksum/config: 4d919a6f7279f31d3f04198e9626ab7a0dec59a9e2d63b9b0758840233e77b8fspec:serviceAccountName: defaultinitContainers:- name: configcommand:- /cp- /run-helm.sh,/merge- /milvus/tools/run-helm.sh,/milvus/tools/mergeimage: "milvusdb/milvus-config-tool:v0.1.2"imagePullPolicy: IfNotPresentvolumeMounts:- mountPath: /milvus/toolsname: toolscontainers:- name: datacoordimage: "milvusdb/milvus:v2.4.5"imagePullPolicy: IfNotPresentargs: [ "/milvus/tools/run-helm.sh", "milvus", "run", "datacoord" ]env:ports:- name: datacoordcontainerPort: 13333protocol: TCP- name: metricscontainerPort: 9091protocol: TCPlivenessProbe:httpGet:path: /healthzport: metricsinitialDelaySeconds: 90periodSeconds: 30timeoutSeconds: 5successThreshold: 1failureThreshold: 5readinessProbe:httpGet:path: /healthzport: metricsinitialDelaySeconds: 90periodSeconds: 10timeoutSeconds: 5successThreshold: 1failureThreshold: 5resources:{}volumeMounts:- name: milvus-configmountPath: /milvus/configs/default.yamlsubPath: default.yamlreadOnly: true- name: milvus-configmountPath: /milvus/configs/user.yamlsubPath: user.yamlreadOnly: true- mountPath: /milvus/toolsname: toolsvolumes:- name: milvus-configconfigMap:name: my-release-milvus- name: toolsemptyDir: {}
---
# Source: milvus/templates/datanode-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:name: my-release-milvus-datanodelabels:helm.sh/chart: milvus-4.1.34app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releaseapp.kubernetes.io/version: "2.4.5"app.kubernetes.io/managed-by: Helmcomponent: "datanode"annotations:spec:replicas: 1selector:matchLabels:app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releasecomponent: "datanode"template:metadata:labels:app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releasecomponent: "datanode"annotations:checksum/config: 4d919a6f7279f31d3f04198e9626ab7a0dec59a9e2d63b9b0758840233e77b8fspec:serviceAccountName: defaultinitContainers:- name: configcommand:- /cp- /run-helm.sh,/merge- /milvus/tools/run-helm.sh,/milvus/tools/mergeimage: "milvusdb/milvus-config-tool:v0.1.2"imagePullPolicy: IfNotPresentvolumeMounts:- mountPath: /milvus/toolsname: toolscontainers:- name: datanodeimage: "milvusdb/milvus:v2.4.5"imagePullPolicy: IfNotPresentargs: [ "/milvus/tools/run-helm.sh", "milvus", "run", "datanode" ]env:ports:- name: datanodecontainerPort: 21124protocol: TCP- name: metricscontainerPort: 9091protocol: TCPlivenessProbe:httpGet:path: /healthzport: metricsinitialDelaySeconds: 90periodSeconds: 30timeoutSeconds: 5successThreshold: 1failureThreshold: 5readinessProbe:httpGet:path: /healthzport: metricsinitialDelaySeconds: 90periodSeconds: 10timeoutSeconds: 5successThreshold: 1failureThreshold: 5resources:{}volumeMounts:- name: milvus-configmountPath: /milvus/configs/default.yamlsubPath: default.yamlreadOnly: true- name: milvus-configmountPath: /milvus/configs/user.yamlsubPath: user.yamlreadOnly: true- mountPath: /milvus/toolsname: toolsvolumes:- name: milvus-configconfigMap:name: my-release-milvus- name: toolsemptyDir: {}
---
# Source: milvus/templates/indexcoord-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:name: my-release-milvus-indexcoordlabels:helm.sh/chart: milvus-4.1.34app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releaseapp.kubernetes.io/version: "2.4.5"app.kubernetes.io/managed-by: Helmcomponent: "indexcoord"annotations:spec:replicas: 1selector:matchLabels:app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releasecomponent: "indexcoord"template:metadata:labels:app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releasecomponent: "indexcoord"annotations:checksum/config: 4d919a6f7279f31d3f04198e9626ab7a0dec59a9e2d63b9b0758840233e77b8fspec:serviceAccountName: defaultinitContainers:- name: configcommand:- /cp- /run-helm.sh,/merge- /milvus/tools/run-helm.sh,/milvus/tools/mergeimage: "milvusdb/milvus-config-tool:v0.1.2"imagePullPolicy: IfNotPresentvolumeMounts:- mountPath: /milvus/toolsname: toolscontainers:- name: indexcoordimage: "milvusdb/milvus:v2.4.5"imagePullPolicy: IfNotPresentargs: [ "/milvus/tools/run-helm.sh", "milvus", "run", "indexcoord" ]env:ports:- name: indexcoordcontainerPort: 31000protocol: TCP- name: metricscontainerPort: 9091protocol: TCPlivenessProbe:httpGet:path: /healthzport: metricsinitialDelaySeconds: 90periodSeconds: 30timeoutSeconds: 5successThreshold: 1failureThreshold: 5readinessProbe:httpGet:path: /healthzport: metricsinitialDelaySeconds: 90periodSeconds: 10timeoutSeconds: 5successThreshold: 1failureThreshold: 5resources:{}volumeMounts:- name: milvus-configmountPath: /milvus/configs/default.yamlsubPath: default.yamlreadOnly: true- name: milvus-configmountPath: /milvus/configs/user.yamlsubPath: user.yamlreadOnly: true- mountPath: /milvus/toolsname: toolsvolumes:- name: milvus-configconfigMap:name: my-release-milvus- name: toolsemptyDir: {}
---
# Source: milvus/templates/indexnode-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:name: my-release-milvus-indexnodelabels:helm.sh/chart: milvus-4.1.34app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releaseapp.kubernetes.io/version: "2.4.5"app.kubernetes.io/managed-by: Helmcomponent: "indexnode"annotations:spec:replicas: 1selector:matchLabels:app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releasecomponent: "indexnode"template:metadata:labels:app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releasecomponent: "indexnode"annotations:checksum/config: 4d919a6f7279f31d3f04198e9626ab7a0dec59a9e2d63b9b0758840233e77b8fspec:serviceAccountName: defaultinitContainers:- name: configcommand:- /cp- /run-helm.sh,/merge- /milvus/tools/run-helm.sh,/milvus/tools/mergeimage: "milvusdb/milvus-config-tool:v0.1.2"imagePullPolicy: IfNotPresentvolumeMounts:- mountPath: /milvus/toolsname: toolscontainers:- name: indexnodeimage: "milvusdb/milvus:v2.4.5"imagePullPolicy: IfNotPresentargs: [ "/milvus/tools/run-helm.sh", "milvus", "run", "indexnode" ]env:ports:- name: indexnodecontainerPort: 21121protocol: TCP- name: metricscontainerPort: 9091protocol: TCPlivenessProbe:httpGet:path: /healthzport: metricsinitialDelaySeconds: 90periodSeconds: 30timeoutSeconds: 5successThreshold: 1failureThreshold: 5readinessProbe:httpGet:path: /healthzport: metricsinitialDelaySeconds: 90periodSeconds: 10timeoutSeconds: 5successThreshold: 1failureThreshold: 5resources:{}volumeMounts:- name: milvus-configmountPath: /milvus/configs/default.yamlsubPath: default.yamlreadOnly: true- name: milvus-configmountPath: /milvus/configs/user.yamlsubPath: user.yamlreadOnly: true- mountPath: /milvus/toolsname: tools- mountPath: /var/lib/milvus/dataname: diskvolumes:- name: milvus-configconfigMap:name: my-release-milvus- name: toolsemptyDir: {}- name: diskemptyDir: {}
---
# Source: milvus/templates/proxy-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:name: my-release-milvus-proxylabels:helm.sh/chart: milvus-4.1.34app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releaseapp.kubernetes.io/version: "2.4.5"app.kubernetes.io/managed-by: Helmcomponent: "proxy"annotations:spec:replicas: 1selector:matchLabels:app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releasecomponent: "proxy"template:metadata:labels:app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releasecomponent: "proxy"annotations:checksum/config: 4d919a6f7279f31d3f04198e9626ab7a0dec59a9e2d63b9b0758840233e77b8fspec:serviceAccountName: defaultinitContainers:- name: configcommand:- /cp- /run-helm.sh,/merge- /milvus/tools/run-helm.sh,/milvus/tools/mergeimage: "milvusdb/milvus-config-tool:v0.1.2"imagePullPolicy: IfNotPresentvolumeMounts:- mountPath: /milvus/toolsname: toolscontainers:- name: proxyimage: "milvusdb/milvus:v2.4.5"imagePullPolicy: IfNotPresentargs: [ "/milvus/tools/run-helm.sh", "milvus", "run", "proxy" ]env:ports:- name: milvuscontainerPort: 19530protocol: TCP- name: metricscontainerPort: 9091protocol: TCPlivenessProbe:httpGet:path: /healthzport: metricsinitialDelaySeconds: 90periodSeconds: 30timeoutSeconds: 5successThreshold: 1failureThreshold: 5readinessProbe:httpGet:path: /healthzport: metricsinitialDelaySeconds: 90periodSeconds: 10timeoutSeconds: 5successThreshold: 1failureThreshold: 5resources:{}volumeMounts:- name: milvus-configmountPath: /milvus/configs/default.yamlsubPath: default.yamlreadOnly: true- name: milvus-configmountPath: /milvus/configs/user.yamlsubPath: user.yamlreadOnly: true- mountPath: /milvus/toolsname: toolsvolumes:- name: milvus-configconfigMap:name: my-release-milvus- name: toolsemptyDir: {}
---
# Source: milvus/templates/querycoord-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:name: my-release-milvus-querycoordlabels:helm.sh/chart: milvus-4.1.34app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releaseapp.kubernetes.io/version: "2.4.5"app.kubernetes.io/managed-by: Helmcomponent: "querycoord"annotations:spec:replicas: 1selector:matchLabels:app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releasecomponent: "querycoord"template:metadata:labels:app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releasecomponent: "querycoord"annotations:checksum/config: 4d919a6f7279f31d3f04198e9626ab7a0dec59a9e2d63b9b0758840233e77b8fspec:serviceAccountName: defaultinitContainers:- name: configcommand:- /cp- /run-helm.sh,/merge- /milvus/tools/run-helm.sh,/milvus/tools/mergeimage: "milvusdb/milvus-config-tool:v0.1.2"imagePullPolicy: IfNotPresentvolumeMounts:- mountPath: /milvus/toolsname: toolscontainers:- name: querycoordimage: "milvusdb/milvus:v2.4.5"imagePullPolicy: IfNotPresentargs: [ "/milvus/tools/run-helm.sh", "milvus", "run", "querycoord" ]env:ports:- name: querycoordcontainerPort: 19531protocol: TCP- name: metricscontainerPort: 9091protocol: TCPlivenessProbe:httpGet:path: /healthzport: metricsinitialDelaySeconds: 90periodSeconds: 30timeoutSeconds: 5successThreshold: 1failureThreshold: 5readinessProbe:httpGet:path: /healthzport: metricsinitialDelaySeconds: 90periodSeconds: 10timeoutSeconds: 5successThreshold: 1failureThreshold: 5resources:{}volumeMounts:- name: milvus-configmountPath: /milvus/configs/default.yamlsubPath: default.yamlreadOnly: true- name: milvus-configmountPath: /milvus/configs/user.yamlsubPath: user.yamlreadOnly: true- mountPath: /milvus/toolsname: toolsvolumes:- name: milvus-configconfigMap:name: my-release-milvus- name: toolsemptyDir: {}
---
# Source: milvus/templates/querynode-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:name: my-release-milvus-querynodelabels:helm.sh/chart: milvus-4.1.34app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releaseapp.kubernetes.io/version: "2.4.5"app.kubernetes.io/managed-by: Helmcomponent: "querynode"annotations:spec:replicas: 1selector:matchLabels:app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releasecomponent: "querynode"template:metadata:labels:app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releasecomponent: "querynode"annotations:checksum/config: 4d919a6f7279f31d3f04198e9626ab7a0dec59a9e2d63b9b0758840233e77b8fspec:serviceAccountName: defaultinitContainers:- name: configcommand:- /cp- /run-helm.sh,/merge- /milvus/tools/run-helm.sh,/milvus/tools/mergeimage: "milvusdb/milvus-config-tool:v0.1.2"imagePullPolicy: IfNotPresentvolumeMounts:- mountPath: /milvus/toolsname: toolscontainers:- name: querynodeimage: "milvusdb/milvus:v2.4.5"imagePullPolicy: IfNotPresentargs: [ "/milvus/tools/run-helm.sh", "milvus", "run", "querynode" ]env:ports:- name: querynodecontainerPort: 21123protocol: TCP- name: metricscontainerPort: 9091protocol: TCPlivenessProbe:httpGet:path: /healthzport: metricsinitialDelaySeconds: 90periodSeconds: 30timeoutSeconds: 5successThreshold: 1failureThreshold: 5readinessProbe:httpGet:path: /healthzport: metricsinitialDelaySeconds: 90periodSeconds: 10timeoutSeconds: 5successThreshold: 1failureThreshold: 5resources:{}volumeMounts:- name: milvus-configmountPath: /milvus/configs/default.yamlsubPath: default.yamlreadOnly: true- name: milvus-configmountPath: /milvus/configs/user.yamlsubPath: user.yamlreadOnly: true- mountPath: /milvus/toolsname: tools- mountPath: /var/lib/milvus/dataname: diskvolumes:- name: milvus-configconfigMap:name: my-release-milvus- name: toolsemptyDir: {}- name: diskemptyDir: {}
---
# Source: milvus/templates/rootcoord-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:name: my-release-milvus-rootcoordlabels:helm.sh/chart: milvus-4.1.34app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releaseapp.kubernetes.io/version: "2.4.5"app.kubernetes.io/managed-by: Helmcomponent: "rootcoord"annotations:spec:replicas: 1selector:matchLabels:app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releasecomponent: "rootcoord"template:metadata:labels:app.kubernetes.io/name: milvusapp.kubernetes.io/instance: my-releasecomponent: "rootcoord"annotations:checksum/config: 4d919a6f7279f31d3f04198e9626ab7a0dec59a9e2d63b9b0758840233e77b8fspec:serviceAccountName: defaultinitContainers:- name: configcommand:- /cp- /run-helm.sh,/merge- /milvus/tools/run-helm.sh,/milvus/tools/mergeimage: "milvusdb/milvus-config-tool:v0.1.2"imagePullPolicy: IfNotPresentvolumeMounts:- mountPath: /milvus/toolsname: toolscontainers:- name: rootcoordimage: "milvusdb/milvus:v2.4.5"imagePullPolicy: IfNotPresentargs: [ "/milvus/tools/run-helm.sh", "milvus", "run", "rootcoord" ]env:ports:- name: rootcoordcontainerPort: 53100protocol: TCP- name: metricscontainerPort: 9091protocol: TCPlivenessProbe:httpGet:path: /healthzport: metricsinitialDelaySeconds: 90periodSeconds: 30timeoutSeconds: 5successThreshold: 1failureThreshold: 5readinessProbe:httpGet:path: /healthzport: metricsinitialDelaySeconds: 90periodSeconds: 10timeoutSeconds: 5successThreshold: 1failureThreshold: 5resources:{}volumeMounts:- name: milvus-configmountPath: /milvus/configs/default.yamlsubPath: default.yamlreadOnly: true- name: milvus-configmountPath: /milvus/configs/user.yamlsubPath: user.yamlreadOnly: true- mountPath: /milvus/toolsname: toolsvolumes:- name: milvus-configconfigMap:name: my-release-milvus- name: toolsemptyDir: {}
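Once the manifest is applied, a quick sanity check, assuming the milvus namespace used above; the NodePort numbers have to be read from the live Services because values.yaml leaves nodePort empty:

# Wait until every Milvus pod is Running/Ready.
kubectl get pods -n milvus -w

# Look up the NodePorts assigned to the proxy (19530) and Attu (3000) Services.
kubectl get svc my-release-milvus my-release-milvus-attu -n milvus

# Or skip NodePorts and port-forward locally instead:
kubectl port-forward svc/my-release-milvus 19530:19530 -n milvus &
kubectl port-forward svc/my-release-milvus-attu 3000:3000 -n milvus &
# Milvus gRPC is then reachable at localhost:19530 and the Attu UI at http://localhost:3000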
