
Supports 100K-Scale Scheduling! A Fresh SnailJob Performance Benchmark Report

wayn · Published 2025-11-13

SnailJob: a flexible, reliable, and efficient distributed task scheduling and retry platform

Modern enterprise systems are complex, with a constant stream of requirements around task scheduling, failure retry, access control, and monitoring/alerting. Many traditional solutions suffer from complicated integration, high scaling costs, and simplistic retry mechanisms. SnailJob was created precisely to solve these problems.

Platform Overview

SnailJob is a platform focused on distributed task scheduling and retry. Its partition-and-bucket architecture gives it high scalability and fault tolerance; it delivers second-level scheduling and sophisticated retry strategies without depending on external middleware, and ships with a modern UI plus complete permission and alerting mechanisms.
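
To make "partition-and-bucket" concrete: the common pattern behind such architectures is to hash every task into a fixed set of buckets and let each alive server node own a share of those buckets, so each node can scan and fire "its" tasks independently. The Java sketch below illustrates that generic pattern only; it is not SnailJob's actual code, and all names in it are hypothetical.

import java.util.List;
import java.util.stream.IntStream;

// Generic partition-and-bucket routing sketch (illustrative, not SnailJob's real code).
public class BucketRouter {
    private final int totalBuckets;

    public BucketRouter(int totalBuckets) {
        this.totalBuckets = totalBuckets;
    }

    // Every task deterministically maps to exactly one bucket.
    public int bucketOf(long taskId) {
        return Math.floorMod(taskId, totalBuckets);
    }

    // Each alive server node owns an even share of the buckets, so nodes
    // can scan their own tasks without per-task coordination.
    public List<Integer> bucketsOwnedBy(int nodeIndex, int aliveNodes) {
        return IntStream.range(0, totalBuckets)
                .filter(b -> b % aliveNodes == nodeIndex)
                .boxed()
                .toList();
    }

    public static void main(String[] args) {
        BucketRouter router = new BucketRouter(128);
        System.out.println("task 42 -> bucket " + router.bucketOf(42));
        System.out.println("node 0 of 4 owns "
                + router.bucketsOwnedBy(0, 4).size() + " buckets");
    }
}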

SnailJob Performance Benchmark Report

  • Report date: 2025-08-25
  • Version: 1.7.2
  • Provided by: rpei

Test Objective

The goal of this benchmark was to verify the maximum number of scheduled tasks a single SnailJob server node can sustain under stable conditions, and to evaluate the system's overall performance under highly concurrent task dispatching.

Test Environment

🔹 Database

  • Type: Alibaba Cloud RDS MySQL 8.0
  • Instance spec: mysql.n2.xlarge.1 (8 vCPU, 16 GB RAM)
  • Storage: 100 GB, InnoDB engine
  • Version: MySQL_InnoDB_8.0_Default

🔹 Application Deployment

  • Server: Alibaba Cloud ECS g6.4xlarge
  • SnailJob Server: single instance (4 vCPU, 8 GB RAM)
  • SnailJob Client: 16 instances (1 vCPU, 1 GB RAM each)

Server Configuration

Pekko configuration (snail-job-server-starter/src/main/resources/snailjob.conf)

pekko {
  actor {
    common-log-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 16
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    common-scan-task-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 64
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    netty-receive-request-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 128
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    retry-task-executor-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 32
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    retry-task-executor-call-client-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 32
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    retry-task-executor-result-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 32
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    job-task-prepare-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 128
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    job-task-executor-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 160
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    job-task-executor-call-client-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 160
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    job-task-executor-result-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 160
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    workflow-task-prepare-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 4
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    workflow-task-executor-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 4
        core-pool-size-factor = 1.0
        core-pool-size-max = 512
      }
      throughput = 10
    }
  }
}
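
Each block above defines a named Pekko dispatcher backed by its own bounded thread pool, so that, for example, log writing or client RPC callbacks cannot starve job dispatching. As a minimal illustration of how a named dispatcher like these gets attached to a Pekko classic actor (the demo actor and system name below are hypothetical; SnailJob wires its own actors internally):

import com.typesafe.config.ConfigFactory;
import org.apache.pekko.actor.AbstractActor;
import org.apache.pekko.actor.ActorRef;
import org.apache.pekko.actor.ActorSystem;
import org.apache.pekko.actor.Props;

// Hypothetical demo; SnailJob's real actors live in the server module.
public class DispatcherDemo {

    static class LogActor extends AbstractActor {
        @Override
        public Receive createReceive() {
            return receiveBuilder()
                    .matchAny(msg -> System.out.println("got: " + msg))
                    .build();
        }
    }

    public static void main(String[] args) {
        // Load snailjob.conf explicitly (plain ConfigFactory.load() would read application.conf).
        ActorSystem system = ActorSystem.create("snail-job", ConfigFactory.load("snailjob"));

        // The dispatcher id is the full config path of the block defined above.
        ActorRef logActor = system.actorOf(
                Props.create(LogActor.class)
                        .withDispatcher("pekko.actor.common-log-dispatcher"),
                "logActor");

        logActor.tell("hello", ActorRef.noSender());
    }
}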

System configuration (snail-job-server-starter/src/main/resources/application.yml)

server:
  port: 8080
  servlet:
    context-path: /snail-job


spring:
  main:
    banner-mode: off
  profiles:
    active: dev
  datasource:
    name: snail_job
    ## mysql
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://ex-snailjob-mysql-svc:3306/snail_job?useSSL=false&characterEncoding=utf8&useUnicode=true
    username: root
    password: Ab1234567
    type: com.zaxxer.hikari.HikariDataSource
    hikari:
      connection-timeout: 30000
      minimum-idle: 16
      maximum-pool-size: 256
      auto-commit: true
      idle-timeout: 30000
      pool-name: snail_job
      max-lifetime: 1800000
  web:
    resources:
      static-locations: classpath:admin/


mybatis-plus:
  typeAliasesPackage: com.aizuda.snailjob.template.datasource.persistence.po
  global-config:
    db-config:
      where-strategy: NOT_EMPTY
      capital-mode: false
      logic-delete-value: 1
      logic-not-delete-value: 0
  configuration:
    map-underscore-to-camel-case: true
    cache-enabled: true
logging:
  config: /usr/snailjob/config/logback.xml
snail-job:
  retry-pull-page-size: 2000 # batch size for pulling retry data
  job-pull-page-size: 2000 # batch size for pulling job data
  server-port: 17888  # server RPC port
  log-storage: 7 # log retention (unit: days)
  rpc-type: grpc
  summary-day: 0
  server-rpc:
    keep-alive-time: 45s                # heartbeat interval: 45 s
    keep-alive-timeout: 15s             # heartbeat timeout: 15 s
    permit-keep-alive-time: 30s         # minimum permitted heartbeat interval: 30 s
    dispatcher-tp:                      # dispatch thread pool
      core-pool-size: 100
      maximum-pool-size: 100


  client-rpc:
    keep-alive-time: 45s                # heartbeat interval: 45 s
    keep-alive-timeout: 15s             # heartbeat timeout: 15 s
    client-tp:                          # client thread pool
      core-pool-size: 100
      maximum-pool-size: 100
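
For context on what the scheduled workload looks like, the 16 client instances run ordinary annotation-based job executors. The sketch below follows the executor style shown in SnailJob's documentation; the package imports and all names are assumptions that may differ across client versions, so verify them against your snail-job client jar.

// A minimal SnailJob client-side job executor, following the documented
// annotation style. Package paths are assumptions and may differ by version.
import com.aizuda.snailjob.client.job.core.annotation.JobExecutor;
import com.aizuda.snailjob.client.job.core.dto.JobArgs;
import com.aizuda.snailjob.client.model.ExecuteResult;
import org.springframework.stereotype.Component;

@Component
@JobExecutor(name = "benchmarkJobExecutor") // hypothetical name registered on the server side
public class BenchmarkJobExecutor {

    // Invoked by the SnailJob client runtime on each dispatch; in this
    // benchmark each execution averaged roughly 200 ms.
    public ExecuteResult jobExecute(JobArgs jobArgs) {
        // ... business logic ...
        return ExecuteResult.success("done");
    }
}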

Test Scenario

  • Execution period of each scheduled task: 60 seconds
  • Average execution time per task: 200 milliseconds
  • Objective: measure how many tasks a single SnailJob Server node can schedule stably (a quick load estimate follows below)
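
Combining these figures with the 30,000-task result reported below gives a back-of-the-envelope sense of the load involved; the Little's-law arithmetic here is our own, not part of the report.

// Rough load estimate for the benchmark scenario (derived, not measured).
public class LoadEstimate {
    public static void main(String[] args) {
        int tasks = 30_000;       // tasks the single node sustained (results section)
        double periodSec = 60.0;  // each task fires once per 60 s
        double execSec = 0.2;     // average execution time: 200 ms
        int clients = 16;         // client instances in the test environment

        double dispatchRate = tasks / periodSec;      // 500 dispatches per second
        double avgInFlight = dispatchRate * execSec;  // ~100 executions in flight (Little's law)
        double perClient = avgInFlight / clients;     // ~6.25 concurrent executions per client

        System.out.printf("dispatch rate: %.0f/s, in flight: %.0f, per client: %.1f%n",
                dispatchRate, avgInFlight, perClient);
    }
}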

Test Results

On a single node (4C/8G), SnailJob Server stably sustained 30,000 scheduled tasks, with every task executing on time within its 60-second period. Database load was only about 20% at that point, indicating good headroom for scaling. By horizontally scaling server nodes, the platform can in principle support 100,000+ scheduled tasks, covering the vast majority of enterprise scenarios. In addition, SnailJob Pro introduces a Redis-based cache layer and offloads logs to MongoDB storage, further improving scheduling capacity and stability.

Resource consumption (screenshots cannot be shared due to company confidentiality; only the benchmark result figures are listed here):

  • SnailJob server CPU usage: average 71%, peak 82%
  • SnailJob server memory usage: about 32%
  • Database instance IOPS usage: peak 40% at a 5-second sampling interval, peak 50% at a 30-second sampling interval
  • Database instance CPU usage: about 20%
  • Database instance memory usage: about 55%

Summary

SnailJob's performance bottleneck comes mainly from database storage. Scheduling produces a large volume of task-batch and log writes, which puts considerable pressure on database IOPS. When deploying SnailJob, we therefore recommend:

  • Deploy the database on a dedicated instance rather than sharing it with other business services;
  • Prefer high-performance disks to improve write throughput;
  • Enable asynchronous disk flushing to further reduce database write latency (at the MySQL layer this is commonly done by relaxing settings such as innodb_flush_log_at_trx_commit, trading some durability for speed).