
flood script #8

Merged: 31 commits merged into eigenda-v3.2.1 from hope/flood-script on Oct 2, 2024

Conversation

@hopeyen (Contributor) commented Aug 23, 2024

  • add metric services to docker compose
  • allow running commands such as docker compose run scripts flood to simulate uniformly random network traffic (a rough sketch of the loop is included below)
    • optional args: users, ethamount, rounds, avgTxDataSize, maxTxDataSize, threads, times, delay, serial, wait; all have defaults
    • example usage: docker compose run scripts flood --serial true --rounds 1000 --users 20 --threads 10
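A minimal sketch of what such a uniformly random flood loop could look like (illustrative only: the option names mirror the CLI flags above, but the rpcUrl/fundedKeys wiring is a stand-in rather than the actual scripts/flood.ts implementation, which goes through runStress and the test node's named accounts; assumes ethers v6):

// flood-sketch.ts: hedged illustration of a uniformly random traffic generator
import { ethers } from 'ethers';

// pick a random calldata size of up to maxSize bytes
function randomInRange(maxSize: number): number {
  return Math.ceil(Math.random() * maxSize);
}

async function flood(opts: {
  rpcUrl: string;        // local sequencer endpoint (assumed)
  fundedKeys: string[];  // pre-funded private keys acting as the users (assumed)
  rounds: number;        // number of dispersal rounds
  maxTxDataSize: number; // upper bound on random calldata size, in bytes
  ethAmount: string;     // value transferred per tx, in ether
}) {
  const provider = new ethers.JsonRpcProvider(opts.rpcUrl);
  const wallets = opts.fundedKeys.map((k) => new ethers.Wallet(k, provider));

  for (let round = 0; round < opts.rounds; round++) {
    // each round moves funds between two arbitrary actors with random-sized calldata
    const from = wallets[Math.floor(Math.random() * wallets.length)];
    const to = wallets[Math.floor(Math.random() * wallets.length)];
    const tx = await from.sendTransaction({
      to: to.address,
      value: ethers.parseEther(opts.ethAmount),
      data: ethers.hexlify(ethers.randomBytes(randomInRange(opts.maxTxDataSize))),
    });
    await tx.wait();
  }
}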

Example dashboard view: [screenshot taken 2024-08-23 at 12:01 PM]

@hopeyen hopeyen force-pushed the hope/flood-script branch from 1368e4e to d252289 Compare August 23, 2024 16:00
scripts/flood.ts (outdated), review comment on lines 1 to 8:
import { runStress } from './stress';
import { ethers } from 'ethers';
import { namedAccount, namedAddress } from './accounts';

// pick a random size of up to maxSize
function randomInRange(maxSize: number): number {
  return Math.ceil(Math.random() * maxSize);
}

Collaborator:

Entropy should maybe be guarded behind a boolean feature flag fed via CLI. If I'm not mistaken, the send-tx function iterates over a set of rounds, where each round (R) is a random transaction dispersal event that moves funds between two arbitrary actors (from, to) across T threads, K times. AFAIK the total # of txs per round would equal T*K, but the actual byte amount used for each dispersal tx would be unknown and assumed to be in the range [0, max_data_size]. Having a way to explicitly set the expected byte rate for incoming user tx traffic would be key for better quantifying maximum throughput for layr-labs/nitro.
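In other words, a back-of-the-envelope illustration (not repo code; the parameter names simply mirror the CLI flags discussed here):

// With T threads each sending K txs per round and per-tx calldata roughly uniform on
// [0, maxDataSize], the expected calldata pushed per round is T * K * maxDataSize / 2.
function expectedBytesPerRound(threads: number, times: number, maxDataSize: number): number {
  return threads * times * (maxDataSize / 2);
}

// e.g. 10 threads * 5 repeats * (100_000 / 2) bytes = 2,500,000 bytes (~2.5 MB) per round
console.log(expectedBytesPerRound(10, 5, 100_000));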

Contributor (author):

I added a targetThroughput arg that disregards the data size args and instead sets the data size based on the number of threads, removed the wait when sending txs, and made each round tick every 1 second.
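Roughly, the sizing implied by that change looks like the following (illustrative sketch only; the actual flood.ts computation may differ):

// With one round per second and `threads` txs sent per round, hitting a target byte
// rate means sizing each tx's calldata as targetThroughput / threads.
function dataSizePerTx(targetThroughputBytesPerSec: number, threads: number): number {
  return Math.floor(targetThroughputBytesPerSec / threads);
}

// e.g. a 100 kB/s target spread across 10 threads -> 10,000 bytes of calldata per tx
console.log(dataSizePerTx(100_000, 10));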

console.log(`start sending transactions`)
const max_time = argv.times
const max_thread = argv.threads
// each round dispatches transfers across up to max_thread threads, up to max_time times each
for (let i = 0; i < argv.rounds; i++) {
Collaborator:

would there be any value in allowing more than one from:to edge to exist per round?

Contributor (author):

Are you referring to times? That's for running the same from:to edge tx multiple times, acting as a multiplier on throughput. It was more valuable for single send commands, so I think it makes sense to remove it to keep the calculation simpler; I'll go ahead and remove it.

@@ -330,7 +332,7 @@ services:
     entrypoint: /usr/local/bin/relay
     ports:
       - "127.0.0.1:9652:9652"
-    command: --chain.id 412346 --node.feed.input.url ws://sequencer:9642 --node.feed.output.port 9652
+    command: --chain.id 412346 --node.feed.input.url ws://sequencer:9642 --node.feed.output.port 9652 --metrics --pprof --metrics-server.addr 0.0.0.0 --pprof-cfg.addr 0.0.0.0
Collaborator:

should these be feature guarded? i.e., behind some flags like:
--profile (i.e., pprof)
--observe (i.e., prometheus/grafana)

Resolved review threads on outdated code: test-node.bash (×2), scripts/flood.ts (×3), scripts/index.ts
@hopeyen hopeyen requested a review from epociask September 5, 2024 16:18
@hopeyen hopeyen changed the base branch from eigenda--v3.0.3 to eigenda-v3.1.2 September 10, 2024 20:53
@epociask epociask changed the base branch from eigenda-v3.1.2 to eigenda-v3.2.1 September 30, 2024 21:44
@epociask epociask merged commit 6ad9022 into eigenda-v3.2.1 Oct 2, 2024
1 check failed