flood script #8
Conversation
feat: EigenDA support
Epociask feat larger batches
chore: Use open-source layr-labs/nitro-contracts
1368e4e to d252289 (compare)
scripts/flood.ts
Outdated
import { runStress } from './stress';
import { ethers } from 'ethers';
import { namedAccount, namedAddress } from './accounts';

function randomInRange(maxSize: number): number {
    return Math.ceil(Math.random() * maxSize);
}
Entropy should maybe be guarded behind a boolean feature flag fed via the CLI. If I'm not mistaken, the send tx function iterates over a set of rounds, where each round (R) is a random transaction dispersal event that moves funds between two arbitrary actors (from, to) across T threads, K times. AFAIK the total # of txs per round would equal T*K, but the actual byte amount used for each dispersal tx would be unknown and only assumed to lie in the range [0, max_data_size]. Having a way to explicitly set the expected byte rate for incoming user tx traffic would be key for better quantifying maximum throughput for layr-labs/nitro.
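For illustration only, a minimal sketch of gating the payload entropy behind a CLI boolean (the `randomSize` flag and the yargs wiring here are assumptions, not part of this PR):

```ts
// Sketch only: gate random payload sizing behind a CLI boolean so the byte
// rate of the generated traffic can be made deterministic for benchmarking.
import yargs from 'yargs';

const argv = yargs(process.argv.slice(2))
    .options({
        randomSize: { type: 'boolean', default: true },     // assumed flag, not in this PR
        maxTxDataSize: { type: 'number', default: 100000 },
    })
    .parseSync();

function randomInRange(maxSize: number): number {
    return Math.ceil(Math.random() * maxSize);
}

// Payload per dispersal tx: random in (0, maxTxDataSize] when randomSize is
// set, otherwise exactly maxTxDataSize bytes, so each round lands a known
// T*K*maxTxDataSize bytes and throughput can be computed directly.
function txDataSize(): number {
    return argv.randomSize ? randomInRange(argv.maxTxDataSize) : argv.maxTxDataSize;
}
```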
I added a targetThroughput arg that disregards the data size arg and instead sets the data size based on the number of threads, removed the wait on sending txs, and made each round tick every 1 second.
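Roughly, the idea looks like the sketch below (the names and the sendTx placeholder are illustrative, not the exact code in flood.ts):

```ts
// Illustrative sketch: spread a target byte rate across the threads fired in
// each one-second round, instead of picking a random payload size per tx.
declare function sendTx(dataSize: number): Promise<void>; // stand-in for the real sender

async function runRounds(rounds: number, threads: number, targetThroughput: number) {
    // targetThroughput is bytes/second; `threads` txs go out every second,
    // so each tx carries an equal share of the per-round byte budget.
    const dataSize = Math.floor(targetThroughput / threads);
    for (let round = 0; round < rounds; round++) {
        for (let t = 0; t < threads; t++) {
            void sendTx(dataSize); // fire-and-forget: no wait on individual sends
        }
        await new Promise((resolve) => setTimeout(resolve, 1000)); // 1s round tick
    }
}
```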
console.log(`start sending transactions`)
const max_time = argv.times
const max_thread = argv.threads
for (let i = 0; i < argv.rounds; i++) {
would there be any value in allowing > 1 from:to edges to exist per round?
Are you referring to times? It runs the same from:to edge tx multiple times, acting as a multiplier on throughput. That was more valuable for single send commands, so I think it makes sense to remove it so the calculation is easier; I'll go ahead and remove it.
@@ -330,7 +332,7 @@ services:
     entrypoint: /usr/local/bin/relay
     ports:
       - "127.0.0.1:9652:9652"
-    command: --chain.id 412346 --node.feed.input.url ws://sequencer:9642 --node.feed.output.port 9652
+    command: --chain.id 412346 --node.feed.input.url ws://sequencer:9642 --node.feed.output.port 9652 --metrics --pprof --metrics-server.addr 0.0.0.0 --pprof-cfg.addr 0.0.0.0
Should these be feature guarded? I.e., behind some flags like:

--profile (i.e., pprof)
--observe (i.e., prometheus/grafana)
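As a sketch of the shape this could take (the --profile/--observe names come from this comment; assembling the relay command in TypeScript is just an illustration, since the real wiring lives in docker-compose.yaml):

```ts
// Sketch only: append the metrics / pprof flags to the relay command only
// when the corresponding feature flags are enabled.
function relayCommand(opts: { profile: boolean; observe: boolean }): string[] {
    const cmd = [
        '--chain.id', '412346',
        '--node.feed.input.url', 'ws://sequencer:9642',
        '--node.feed.output.port', '9652',
    ];
    if (opts.observe) {
        cmd.push('--metrics', '--metrics-server.addr', '0.0.0.0'); // prometheus/grafana
    }
    if (opts.profile) {
        cmd.push('--pprof', '--pprof-cfg.addr', '0.0.0.0'); // pprof
    }
    return cmd;
}
```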
7a08cab to 8fce867 (compare)
…into hope/flood-script
…board chore: Update dashboard and wire metrics for other services
Run docker compose run scripts flood to simulate uniformly random network traffic. The parameters users, ethamount, rounds, avgTxDataSize, maxTxDataSize, threads, times, delay, serial, and wait all have defaults.

Example: docker compose run scripts flood --serial true --rounds 1000 --users 20 --threads 10
Example dashboard view