Software versions used in this walkthrough:
OS: CentOS 7
KubeSphere: v3.4.1
Kubernetes: v1.28.8
KubeKey: v3.1.7
Redis: 6.2.16
Deployment overview diagram:
1. Deploy the Redis Service
1.1 Create the ConfigMap
-
Go to Project -> Configuration -> ConfigMaps
ConfigMap name: redis-cluster-config
-
Add a key-value pair
Key: redis-cluster.conf
Value:
# Bind address; the default is 127.0.0.1
bind 0.0.0.0
# Set the password (replace YourPassword with your own)
requirepass YourPassword
masterauth YourPassword
# If neither bind nor a password were configured, enabling this option would restrict Redis to local access only. Since both bind and a password are set here, it can stay enabled; otherwise it is best to set it to no.
protected-mode yes
# Port number
port 6379
tcp-keepalive 300
always-show-logo no
set-proc-title yes
proc-title-template "{title} {listen-addr} {server-mode}"
rdb-del-sync-files no
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-diskless-load disabled
repl-disable-tcp-nodelay no
replica-priority 100
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
lazyfree-lazy-user-del yes
lazyfree-lazy-user-flush no
oom-score-adj no
oom-score-adj-values 0 200 800
disable-thp yes
aof-use-rdb-preamble yes
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
jemalloc-bg-thread yes
maxmemory 134217728
maxmemory-policy allkeys-random
# Matches the kernel parameter /proc/sys/net/core/somaxconn: Redis defaults to 511 while the kernel defaults to 128. For high-concurrency workloads raise this value and the kernel parameter together.
tcp-backlog 1024
# Close a connection after the client has been idle for this many seconds; 0 means the server never closes idle connections
#timeout 0
# Whether to run in the background
#daemonize yes
supervised no
# Redis PID file
pidfile /data/redis.pid
# Log level: debug, verbose, notice (the default, suitable for production), or warning (only very important messages)
loglevel notice
# Log file path
logfile /data/redis_log
# Number of databases; DB 0 is used by default, and a database can be selected with the SELECT command
databases 16
# -------------------- SLOW LOG --------------------
# The slow log records queries whose execution time exceeds slowlog-log-slower-than, in microseconds (1000000 = 1 second)
slowlog-log-slower-than 1000
# Maximum length of the slow log. When a new command is logged, the oldest entry is dropped. There is no hard limit on the length as long as there is enough memory; SLOWLOG RESET frees the memory.
slowlog-max-len 128
# -------------------- RDB Persistence --------------------
# Flush to disk after 900 seconds if at least 1 key changed
save 900 1
# Flush to disk after 300 seconds if at least 10 keys changed
save 300 10
# Flush to disk after 60 seconds if at least 10000 keys changed
save 60 10000
# Whether to keep accepting writes when RDB persistence hits an error
stop-writes-on-bgsave-error yes
# Compress RDB files; compression costs some CPU, while disabling it uses more disk space
rdbcompression yes
## Whether to checksum RDB files; checksumming costs roughly 10% in performance
#rdbchecksum yes
## RDB file name
dbfilename dump.rdb
## Data directory; database writes, RDB files, and AOF files all go here
dir /data
# -------------------- AOF Persistence --------------------
# Append Only File is an alternative persistence mode with better durability guarantees. Redis appends every write it receives to appendonly.aof and, on startup, loads that file into memory first, in preference to the RDB file.
appendonly yes
# AOF file name
appendfilename "appendonly.aof"
# AOF fsync policy: no means never call fsync and let the OS flush data to disk, which is fastest;
# always means fsync on every write, guaranteeing data reaches disk;
# everysec means fsync once per second, which may lose up to 1 second of data
appendfsync everysec
# yes means new writes are not fsynced during an AOF rewrite and are kept in memory until the rewrite finishes. The default no is safest; yes is sometimes recommended, but then Linux's default flush interval of 30 seconds applies, so up to 30 seconds of data may be lost.
no-appendfsync-on-rewrite no
# Automatic AOF rewrite: start a new rewrite when the current AOF file reaches twice (100%) the size recorded after the last rewrite
auto-aof-rewrite-percentage 100
# Minimum AOF file size before a rewrite is allowed, so small files are not rewritten even when the percentage threshold is reached
auto-aof-rewrite-min-size 64mb
# The AOF file may be truncated at the tail. With yes, a truncated AOF is still loaded on startup and a log message is published to clients
aof-load-truncated yes
# If a script exceeds this time limit (milliseconds), Redis logs it and returns an error. Once a script passes the limit, only SCRIPT KILL and SHUTDOWN NOSAVE work: the former kills scripts that have not yet issued a write; once a script has written, only the latter can stop it.
lua-time-limit 5000
# -------------------- REDIS CLUSTER --------------------
## Cluster switch; cluster mode is disabled by default
cluster-enabled yes
# Name of the cluster configuration file. Every node keeps such a file to persist cluster state. It does not need to be edited by hand; Redis generates and updates it. Each cluster node needs its own file, so make sure the name does not clash with another instance running on the same system.
cluster-config-file /data/nodes.conf
# Node interconnection timeout threshold: cluster node timeout in milliseconds
cluster-node-timeout 5000
# During a failover, every slave asks to be promoted to master, but a slave that has been disconnected from its master for a while holds data that is too stale and should not be promoted.
## For example, with a node timeout of 30 seconds, slave-validity-factor of 10, and the default repl-ping-slave-period of 10 seconds, a slave will not attempt a failover after being disconnected for more than 310 seconds.
cluster-slave-validity-factor 10
# When a master loses all of its slaves, a spare slave is migrated over from a master that has extras, so every master keeps at least one slave.
# Migration is only triggered if the donor master would still retain migration-barrier slaves afterwards. The default is 1; keep the default in production to give the cluster the best chance of staying stable.
cluster-migration-barrier 1
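For command-line deployments, the same ConfigMap can be written as a manifest and applied with kubectl apply -f. This is a sketch, not what the console generates verbatim; the namespace hgjg-common is assumed from the commands used later in this guide, and the value is truncated to its first few directives:

```yaml
# Hypothetical manifest equivalent of the console steps above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-cluster-config
  namespace: hgjg-common   # assumed; use your own project namespace
data:
  redis-cluster.conf: |
    bind 0.0.0.0
    requirepass YourPassword
    masterauth YourPassword
    port 6379
    cluster-enabled yes
    cluster-config-file /data/nodes.conf
    cluster-node-timeout 5000
    # ... remaining directives from the value above ...
```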
1.2 Create the Redis StatefulSet
Application Workloads -> Workloads -> StatefulSets
-
Create a StatefulSet
Name: redis-cluster
-
Container settings:
Set the image registry
Set the ports
Set the start command
Set the pod replica count to 6, for three masters and three slaves
-
Storage settings:
If no PVC was created in advance, a volume template can be created here, which creates the PVCs at the same time. (This approach is strongly recommended for future PVCs: when the Redis pods are scaled up or down, each pod automatically gains or releases its own PVC, instead of multiple pods sharing a single PVC.)
Add the persistent volume.
With that, the Redis data volume is mounted; next, mount the Redis configuration.
Select the ConfigMap.
Review the full configuration and click Create.
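For reference, a minimal StatefulSet sketch of what the console steps above produce. The image tag, mount paths, storage size, and namespace are assumptions; the ConfigMap name and the replica count of 6 come from the sections above:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
  namespace: hgjg-common           # assumed namespace
spec:
  serviceName: redis-cluster
  replicas: 6                      # three masters and three slaves
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
        - name: redis
          image: redis:6.2.16      # version from the environment table above
          command: ["redis-server", "/etc/redis/redis-cluster.conf"]
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: data
              mountPath: /data
            - name: config
              mountPath: /etc/redis
      volumes:
        - name: config
          configMap:
            name: redis-cluster-config
  volumeClaimTemplates:            # one PVC per pod, created and scaled automatically
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi           # assumed size; pick your own
```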
1.3 Create the Redis Services
Two kinds of Redis services are created here:
- A ClusterIP service for access inside the cluster
- A NodePort service for access from outside the cluster
1.3.1 Create the in-cluster service
Select the workload to target.
When done, click Next.
With that, the in-cluster Redis service is created.
1.3.2 Create the external service
This time, in the step above, choose the option to assign a virtual IP to the service.
Next, check External Access, select NodePort as the access mode, and click Create.
The IP is the externally reachable IP of any cluster node.
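Both services can likewise be sketched as manifests. The namespace, the service names, and the NodePort value 31379 (taken from the benchmark commands later in this guide) are assumptions; adjust them to your environment:

```yaml
# In-cluster access (ClusterIP)
apiVersion: v1
kind: Service
metadata:
  name: redis-cluster
  namespace: hgjg-common           # assumed namespace
spec:
  selector:
    app: redis-cluster
  ports:
    - port: 6379
      targetPort: 6379
---
# External access (NodePort), reachable via any node's IP
apiVersion: v1
kind: Service
metadata:
  name: redis-cluster-nodeport    # hypothetical name
  namespace: hgjg-common
spec:
  type: NodePort
  selector:
    app: redis-cluster
  ports:
    - port: 6379
      targetPort: 6379
      nodePort: 31379             # assumed; must be in the cluster's NodePort range
```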
3. Create the Redis Cluster
After the Redis pods are created, the Redis cluster is not formed automatically; a cluster initialization command has to be run by hand. There are two ways, automatic and manual; pick one, and automatic is recommended.
3.1 Create the Redis Cluster Automatically
Run the command below to automatically create a cluster of 3 masters and 3 slaves; it prompts once for a yes along the way.
Run:
kubectl exec -it redis-cluster-0 -n hgjg-common -- redis-cli -a YourPassword --cluster create --cluster-replicas 1 $(kubectl get pods -n hgjg-common -l app=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379 {end}')
When it runs correctly, the output looks like this:
[root@master1 ~]# kubectl exec -it redis-cluster-0 -n hgjg-common -- redis-cli -a YourPassword --cluster create --cluster-replicas 1 $(kubectl get pods -n hgjg-common -l app=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379 {end}')
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 10.233.104.16:6379 to 10.233.87.61:6379
Adding replica 10.233.103.22:6379 to 10.233.114.72:6379
Adding replica 10.233.116.13:6379 to 10.233.119.14:6379
M: 61045e3bb67e14307ab422373cbedb3e1125e352 10.233.87.61:6379
   slots:[0-5460] (5461 slots) master
M: 4228fc625d30a176f07f09679d735bcdb3080777 10.233.114.72:6379
   slots:[5461-10922] (5462 slots) master
M: 52d3903485f3cb003ab1eb42a5d7bf10afa49a24 10.233.119.14:6379
   slots:[10923-16383] (5461 slots) master
S: d98c6f81f1c3f16b8098d40771288ce23fdbf6c9 10.233.116.13:6379
   replicates 52d3903485f3cb003ab1eb42a5d7bf10afa49a24
S: 317794f5fc04984a952b13f0d19ce0d342d99bff 10.233.104.16:6379
   replicates 61045e3bb67e14307ab422373cbedb3e1125e352
S: 8df06e20004fbb161f701f7d16a1a4128987f0d4 10.233.103.22:6379
   replicates 4228fc625d30a176f07f09679d735bcdb3080777
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 10.233.87.61:6379)
M: 61045e3bb67e14307ab422373cbedb3e1125e352 10.233.87.61:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: d98c6f81f1c3f16b8098d40771288ce23fdbf6c9 10.233.116.13:6379
   slots: (0 slots) slave
   replicates 52d3903485f3cb003ab1eb42a5d7bf10afa49a24
S: 317794f5fc04984a952b13f0d19ce0d342d99bff 10.233.104.16:6379
   slots: (0 slots) slave
   replicates 61045e3bb67e14307ab422373cbedb3e1125e352
M: 4228fc625d30a176f07f09679d735bcdb3080777 10.233.114.72:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 8df06e20004fbb161f701f7d16a1a4128987f0d4 10.233.103.22:6379
   slots: (0 slots) slave
   replicates 4228fc625d30a176f07f09679d735bcdb3080777
M: 52d3903485f3cb003ab1eb42a5d7bf10afa49a24 10.233.119.14:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
3.2 Create the Redis Cluster Manually
Manually configure a cluster of 3 masters and 3 slaves (this step only records the manual procedure; in a real environment the automatic method is recommended).
Six Redis pods were created in total; the master -> slave pairing rule is 0->3, 1->4, 2->5.
Because the resulting command would be very long, the pod IPs are looked up by hand here instead of being fetched automatically.
- Look up the IPs assigned to the Redis pods
kubectl get pods -n opsxlab -o wide | grep redis
redis-cluster-0 1/1 Running 0 18s 10.233.94.233 ksp-worker-1 <none> <none>
redis-cluster-1 1/1 Running 0 16s 10.233.96.29 ksp-worker-3 <none> <none>
redis-cluster-2 1/1 Running 0 13s 10.233.68.255 ksp-worker-2 <none> <none>
redis-cluster-3 1/1 Running 0 11s 10.233.94.209 ksp-worker-1 <none> <none>
redis-cluster-4 1/1 Running 0 8s 10.233.96.23 ksp-worker-3 <none> <none>
redis-cluster-5 1/1 Running 0 5s 10.233.68.4 ksp-worker-2 <none> <none>
- Create a cluster from the 3 master nodes
# In the command below, the three IP addresses belong to redis-cluster-0, redis-cluster-1, and redis-cluster-2; a single yes must be entered along the way
$ kubectl exec -it redis-cluster-0 -n opsxlab -- redis-cli -a PleaseChangeMe2024 --cluster create 10.233.94.233:6379 10.233.96.29:6379 10.233.68.255:6379
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing hash slots allocation on 3 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
M: 1f4df418ac310b6d14a7920a105e060cda58275a 10.233.94.233:6379
   slots:[0-5460] (5461 slots) master
M: bd1a8e265fa78e93b456b9e59cbefc893f0d2ab1 10.233.96.29:6379
   slots:[5461-10922] (5462 slots) master
M: 149ffd5df2cae9cfbc55e3aff69c9575134ce162 10.233.68.255:6379
   slots:[10923-16383] (5461 slots) master
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 10.233.94.233:6379)
M: 1f4df418ac310b6d14a7920a105e060cda58275a 10.233.94.233:6379
   slots:[0-5460] (5461 slots) master
M: bd1a8e265fa78e93b456b9e59cbefc893f0d2ab1 10.233.96.29:6379
   slots:[5461-10922] (5462 slots) master
M: 149ffd5df2cae9cfbc55e3aff69c9575134ce162 10.233.68.255:6379
   slots:[10923-16383] (5461 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
- Add a slave to each master (three groups in total)
# Group 1: redis0 -> redis3
kubectl exec -it redis-cluster-0 -n opsxlab -- redis-cli -a PleaseChangeMe2024 --cluster add-node 10.233.94.209:6379 10.233.94.233:6379 --cluster-slave --cluster-master-id 1f4df418ac310b6d14a7920a105e060cda58275a
# Parameter notes:
# 10.233.94.233:6379  the IP address of any master node; redis-cluster-0's IP is typically used
# 10.233.94.209:6379  the IP address of the slave node being added
# --cluster-master-id  the ID of the master the slave is attached to; if omitted, the slave is assigned to a random master
- When it runs correctly, the output looks like this (group 1, 0->3, shown as the example):
$ kubectl exec -it redis-cluster-0 -n opsxlab -- redis-cli -a PleaseChangeMe2024 --cluster add-node 10.233.94.209:6379 10.233.94.233:6379 --cluster-slave --cluster-master-id 1f4df418ac310b6d14a7920a105e060cda58275a
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Adding node 10.233.94.209:6379 to cluster 10.233.94.233:6379
>>> Performing Cluster Check (using node 10.233.94.233:6379)
M: 1f4df418ac310b6d14a7920a105e060cda58275a 10.233.94.233:6379
   slots:[0-5460] (5461 slots) master
M: bd1a8e265fa78e93b456b9e59cbefc893f0d2ab1 10.233.96.29:6379
   slots:[5461-10922] (5462 slots) master
M: 149ffd5df2cae9cfbc55e3aff69c9575134ce162 10.233.68.255:6379
   slots:[10923-16383] (5461 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 10.233.94.209:6379 to make it join the cluster.
Waiting for the cluster to join
>>> Configure node as replica of 10.233.94.233:6379.
[OK] New node added correctly.
Run the other two groups in the same way (output omitted):
# Group 2: redis1 -> redis4
kubectl exec -it redis-cluster-0 -n opsxlab -- redis-cli -a PleaseChangeMe2024 --cluster add-node 10.233.96.23:6379 10.233.94.233:6379 --cluster-slave --cluster-master-id bd1a8e265fa78e93b456b9e59cbefc893f0d2ab1
# Group 3: redis2 -> redis5
kubectl exec -it redis-cluster-0 -n opsxlab -- redis-cli -a PleaseChangeMe2024 --cluster add-node 10.233.68.4:6379 10.233.94.233:6379 --cluster-slave --cluster-master-id 149ffd5df2cae9cfbc55e3aff69c9575134ce162
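The three add-node invocations follow one pattern (replica i+3 is attached to master i), so they can be generated with a small loop. This dry-run sketch only prints the commands, using the IPs and master IDs from the output above; pipe its output to sh to actually run them:

```shell
#!/bin/sh
# Dry-run sketch: print the add-node command for each master/replica pair
# (0->3, 1->4, 2->5) without executing anything.
NS=opsxlab
PASS=PleaseChangeMe2024
ENTRY=10.233.94.233:6379   # any existing cluster node; redis-cluster-0 here

print_add_node_cmds() {
  # Each entry is "<replica pod IP> <target master ID>", from the sections above
  for pair in \
    "10.233.94.209 1f4df418ac310b6d14a7920a105e060cda58275a" \
    "10.233.96.23 bd1a8e265fa78e93b456b9e59cbefc893f0d2ab1" \
    "10.233.68.4 149ffd5df2cae9cfbc55e3aff69c9575134ce162"
  do
    replica_ip=${pair% *}   # text before the space: the replica pod IP
    master_id=${pair#* }    # text after the space: the target master's ID
    echo "kubectl exec -it redis-cluster-0 -n $NS --" \
         "redis-cli -a $PASS --cluster add-node $replica_ip:6379 $ENTRY" \
         "--cluster-slave --cluster-master-id $master_id"
  done
}

print_add_node_cmds
```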
3.3 Verify the Cluster Status
Check the status:
kubectl exec -it redis-cluster-0 -n hgjg-common -- redis-cli -a YourPassword --cluster check $(kubectl get pods -n hgjg-common -l app=redis-cluster -o jsonpath='{range.items[0]}{.status.podIP}:6379{end}')
[root@master1 ~]# kubectl exec -it redis-cluster-0 -n hgjg-common -- redis-cli -a YourPassword --cluster check $(kubectl get pods -n hgjg-common -l app=redis-cluster -o jsonpath='{range.items[0]}{.status.podIP}:6379{end}')
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
10.233.87.61:6379 (61045e3b...) -> 0 keys | 5461 slots | 1 slaves.
10.233.114.72:6379 (4228fc62...) -> 0 keys | 5462 slots | 1 slaves.
10.233.119.14:6379 (52d39034...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 10.233.87.61:6379)
M: 61045e3bb67e14307ab422373cbedb3e1125e352 10.233.87.61:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: d98c6f81f1c3f16b8098d40771288ce23fdbf6c9 10.233.116.13:6379
   slots: (0 slots) slave
   replicates 52d3903485f3cb003ab1eb42a5d7bf10afa49a24
S: 317794f5fc04984a952b13f0d19ce0d342d99bff 10.233.104.16:6379
   slots: (0 slots) slave
   replicates 61045e3bb67e14307ab422373cbedb3e1125e352
M: 4228fc625d30a176f07f09679d735bcdb3080777 10.233.114.72:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 8df06e20004fbb161f701f7d16a1a4128987f0d4 10.233.103.22:6379
   slots: (0 slots) slave
   replicates 4228fc625d30a176f07f09679d735bcdb3080777
M: 52d3903485f3cb003ab1eb42a5d7bf10afa49a24 10.233.119.14:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@master1 ~]#
4. Cluster Function Tests
Stress Test
Use the benchmark tool that ships with Redis to verify that the cluster works and to take a rough performance measurement.
Testing the set scenario:
Send 100000 set requests from 20 concurrent clients, each request carrying one key-value pair with a randomly generated key and a 100-byte value.
kubectl exec -it redis-cluster-0 -n hgjg-common -- redis-benchmark -h 10.233.10.238 -p 6379 -a YourPassword -t set -n 100000 -c 20 -d 100 --cluster
Result:
[root@master1 ~]# kubectl exec -it redis-cluster-0 -n hgjg-common -- redis-benchmark -h 10.233.10.238 -p 6379 -a YourPassword -t set -n 100000 -c 20 -d 100 --cluster
Cluster has 3 master nodes:

Master 0: 4228fc625d30a176f07f09679d735bcdb3080777 10.233.114.72:6379
Master 1: 61045e3bb67e14307ab422373cbedb3e1125e352 10.233.87.61:6379
Master 2: 52d3903485f3cb003ab1eb42a5d7bf10afa49a24 10.233.119.14:6379

====== SET ======
  100000 requests completed in 1.51 seconds
  20 parallel clients
  100 bytes payload
  keep alive: 1
  cluster mode: yes (3 masters)
  node [0] configuration:
    save: 900 1 300 10 60 10000
    appendonly: yes
  node [1] configuration:
    save: 900 1 300 10 60 10000
    appendonly: yes
  node [2] configuration:
    save: 900 1 300 10 60 10000
    appendonly: yes
  multi-thread: yes
  threads: 3

Latency by percentile distribution:
0.000% <= 0.039 milliseconds (cumulative count 6)
50.000% <= 0.223 milliseconds (cumulative count 51217)
75.000% <= 0.303 milliseconds (cumulative count 75873)
87.500% <= 0.359 milliseconds (cumulative count 88317)
93.750% <= 0.415 milliseconds (cumulative count 94274)
96.875% <= 0.479 milliseconds (cumulative count 97026)
98.438% <= 0.615 milliseconds (cumulative count 98480)
99.219% <= 0.879 milliseconds (cumulative count 99224)
99.609% <= 1.215 milliseconds (cumulative count 99611)
99.805% <= 1.831 milliseconds (cumulative count 99805)
99.902% <= 3.223 milliseconds (cumulative count 99903)
99.951% <= 4.215 milliseconds (cumulative count 99952)
99.976% <= 7.015 milliseconds (cumulative count 99976)
99.988% <= 12.079 milliseconds (cumulative count 99988)
99.994% <= 12.287 milliseconds (cumulative count 99994)
99.997% <= 12.471 milliseconds (cumulative count 99997)
99.998% <= 12.535 milliseconds (cumulative count 99999)
99.999% <= 12.591 milliseconds (cumulative count 100000)
100.000% <= 12.591 milliseconds (cumulative count 100000)

Cumulative distribution of latencies:
5.921% <= 0.103 milliseconds (cumulative count 5921)
44.835% <= 0.207 milliseconds (cumulative count 44835)
75.873% <= 0.303 milliseconds (cumulative count 75873)
93.669% <= 0.407 milliseconds (cumulative count 93669)
97.491% <= 0.503 milliseconds (cumulative count 97491)
98.433% <= 0.607 milliseconds (cumulative count 98433)
98.842% <= 0.703 milliseconds (cumulative count 98842)
99.081% <= 0.807 milliseconds (cumulative count 99081)
99.261% <= 0.903 milliseconds (cumulative count 99261)
99.418% <= 1.007 milliseconds (cumulative count 99418)
99.521% <= 1.103 milliseconds (cumulative count 99521)
99.606% <= 1.207 milliseconds (cumulative count 99606)
99.667% <= 1.303 milliseconds (cumulative count 99667)
99.718% <= 1.407 milliseconds (cumulative count 99718)
99.737% <= 1.503 milliseconds (cumulative count 99737)
99.762% <= 1.607 milliseconds (cumulative count 99762)
99.777% <= 1.703 milliseconds (cumulative count 99777)
99.800% <= 1.807 milliseconds (cumulative count 99800)
99.819% <= 1.903 milliseconds (cumulative count 99819)
99.827% <= 2.007 milliseconds (cumulative count 99827)
99.833% <= 2.103 milliseconds (cumulative count 99833)
99.899% <= 3.103 milliseconds (cumulative count 99899)
99.939% <= 4.103 milliseconds (cumulative count 99939)
99.961% <= 5.103 milliseconds (cumulative count 99961)
99.967% <= 6.103 milliseconds (cumulative count 99967)
99.978% <= 7.103 milliseconds (cumulative count 99978)
99.987% <= 8.103 milliseconds (cumulative count 99987)
99.989% <= 12.103 milliseconds (cumulative count 99989)
100.000% <= 13.103 milliseconds (cumulative count 100000)

Summary:
  throughput summary: 66225.16 requests per second
  latency summary (msec):
          avg       min       p50       p95       p99       max
        0.250     0.032     0.223     0.431     0.775    12.591
Other scenarios (output omitted):
ping
kubectl exec -it redis-cluster-0 -n opsxlab -- redis-benchmark -h 192.168.9.91 -p 31379 -a PleaseChangeMe2024 -t ping -n 100000 -c 20 -d 100 --cluster
get
kubectl exec -it redis-cluster-0 -n opsxlab -- redis-benchmark -h 192.168.9.91 -p 31379 -a PleaseChangeMe2024 -t get -n 100000 -c 20 -d 100 --cluster