- build a delay service whose /delay/ms/{time} endpoint responds after {time} ms (see the sketch below)
- the target server invokes the delay endpoints and returns once they respond
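As a rough sketch (not the repository's actual code), the blocking Spring Web variant of such a delay endpoint could look like this; the controller name is illustrative and it assumes a standard Spring Boot MVC setup:

```kotlin
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.PathVariable
import org.springframework.web.bind.annotation.RestController

@RestController
class DelayController {

    // Blocks the request thread for {time} ms, then returns the delay value.
    @GetMapping("/delay/ms/{time}")
    fun delay(@PathVariable("time") time: Long): Long {
        Thread.sleep(time)
        return time
    }
}
```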
- AWS t2.xlarge (4 cores, 16 GB) -> Docker, Spring Web (Delay Service)
- AWS t2.xlarge (4 cores, 16 GB) -> Docker, Spring Web, Spring Web Async
- AWS t2.xlarge (4 cores, 16 GB) -> Docker, Spring Reactive Web, Spring Reactive Web Coroutine
- AWS t2.xlarge (4 cores, 16 GB) -> Docker, Vert.x Verticle
- AWS t2.xlarge (4 cores, 16 GB) -> Docker, Vert.x Coroutine Verticle
- AWS t2.xlarge (4 cores, 16 GB) -> Docker, Ktor
- AWS t2.2xlarge (8 cores, 32 GB) -> openjdk-11-jdk, Apache-JMeter-5.4.1
- the target server receives a request, invokes the /delay/ms/500, /delay/ms/800 and /delay/ms/1000 endpoints concurrently, then returns once all three complete (see the fan-out sketch below)
- JMeter uses a Constant Throughput Timer to hold the load at 20 RPS
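The fan-out in the coroutine-based variants could look roughly like the sketch below, which uses Spring's WebClient with kotlinx.coroutines; the function name, return type, and client wiring are assumptions for illustration, and each target server in the list above implements the same fan-out with its own client:

```kotlin
import kotlinx.coroutines.async
import kotlinx.coroutines.awaitAll
import kotlinx.coroutines.coroutineScope
import org.springframework.web.reactive.function.client.WebClient
import org.springframework.web.reactive.function.client.awaitBody

// Calls /delay/ms/500, /delay/ms/800 and /delay/ms/1000 concurrently,
// so the response time is bounded by the slowest call (~1000 ms)
// rather than the sum of the delays (~2300 ms).
suspend fun callDelays(client: WebClient): List<String> = coroutineScope {
    listOf(500, 800, 1000)
        .map { ms ->
            async {
                client.get()
                    .uri("/delay/ms/{time}", ms)
                    .retrieve()
                    .awaitBody<String>()
            }
        }
        .awaitAll()
}
```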
- for demo-delay-service
server.tomcat.threads.max=800
-Dreactor.netty.ioWorkerCount=1000 -Dreactor.netty.pool.maxConnections=8192
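// deploy as many VertxVerticle instances as the default event loop pool size (2 x CPU cores)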
vertx.deployVerticle(
    "com.example.demo.vertx.VertxVerticle",
    DeploymentOptions().setInstances(VertxOptions.DEFAULT_EVENT_LOOP_POOL_SIZE)
)
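// deploy ServiceVerticle instances as blocking (worker) verticles with a worker pool size of 1000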
vertx.deployVerticle(
    "com.example.demo.vertx.ServiceVerticle",
    DeploymentOptions()
        .setInstances(VertxOptions.DEFAULT_EVENT_LOOP_POOL_SIZE)
        .setWorker(true)
        .setWorkerPoolSize(1000)
)
ktor {
    deployment {
        callGroupSize = 1000
        connectionGroupSize = 1000
        workerGroupSize = 1000
    }
}
- warm up twice, then run the measured test once