AU doesn't meet expectation #55
Hi. I am also a user who is iterating by trial and error, changing parameter values.
@shp776 thanks. I configured 1 accelerator and ran on 1 node; here is my run command: ./benchmark.sh run -s localhost -w unet3d -g h100 -n 1 -r resultsdir -p dataset.num_files_train=1200 -p dataset.data_folder=unet3d_data
[METRIC] ==========================================================
read_threads=6 |
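The run command above, split across lines for readability; the flag descriptions in the comments are my reading of benchmark.sh and may differ between benchmark versions:

```bash
# Same invocation as in the comment above, split for readability.
# Flag meanings below are assumptions based on how the command is used here.
./benchmark.sh run \
  -s localhost \
  -w unet3d \
  -g h100 \
  -n 1 \
  -r resultsdir \
  -p dataset.num_files_train=1200 \
  -p dataset.data_folder=unet3d_data
# -s host(s) to run on, -w workload, -g accelerator type, -n number of
# accelerators, -r results directory, -p per-run parameter overrides
```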
Hi @hanyunfan, I want to know what value you set for the parameter below in the second step (datagen):
-n, --num-parallel Number of parallel jobs used to generate the dataset
Thank you!
Hi, since the benchmark score does not include the time it takes to generate the dataset files, you can set that parameter to anything you want. I like 16 or 32, for example, because it usually makes the generation phase take less time.
Thanks,
Curtis
This number doesn't really matter; you can use the default one. It just opens 8 or 16 parallel threads to process the data.
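For concreteness, a sketch of the datagen step being discussed. Per the help text quoted above, -n/--num-parallel controls the number of parallel generation jobs; the other flags are assumed to mirror the run command earlier in the thread and may differ between benchmark versions:

```bash
# Sketch of the datagen step (illustrative only). -n follows the advice above
# (16 parallel jobs); the remaining flags are assumptions carried over from
# the run command and may not match your benchmark version exactly.
./benchmark.sh datagen \
  -s localhost \
  -w unet3d \
  -g h100 \
  -n 16 \
  -p dataset.num_files_train=1200 \
  -p dataset.data_folder=unet3d_data
```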
@FileSystemGuy, @hanyunfan Thank you very much, your advice was very helpful to me. About this parameter:
-m, --client-host-memory-in-gb Memory available in the client where the benchmark is run
I want to know whether its value should be set as close as possible to my DRAM size in order to maximize storage performance. From what I tested, the larger this value, the larger the resulting dataset.num_files_train reported by the datasize stage (step 1). Is there anything you can tell me about this? :)
@shp776 Looks like that's the design: you should set it to roughly your test system's memory size. If you set it larger, more (or larger) files will be generated to meet the 5x rule, so you will see more files; this is expected. Final results for anything larger than 5x the memory size should be similar, because they all eliminate the client cache effect. So setting a larger value only increases your testing time, not the throughput at the end, which doesn't seem worth it.
The relevant code is the comment "# calculate required minimum samples given host memory to eliminate client-side caching effects" (line 217 at commit 88e4f59), and $HOST_MEMORY_MULTIPLIER defaults to 5 (line 51 at commit 88e4f59), so the benchmark generates 5x the host memory worth of data.
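A minimal sketch of that sizing rule, assuming the required dataset size is simply host memory times $HOST_MEMORY_MULTIPLIER and using an illustrative per-file size (the real calculation in the benchmark works on samples, not whole files):

```bash
# Sketch of the 5x sizing rule described above (illustrative values only).
HOST_MEMORY_MULTIPLIER=5          # default multiplier, per the comment above
CLIENT_HOST_MEMORY_IN_GB=512      # example value passed via -m
FILE_SIZE_IN_GB=1                 # assumed average size of one training file

REQUIRED_DATASET_GB=$(( CLIENT_HOST_MEMORY_IN_GB * HOST_MEMORY_MULTIPLIER ))
MIN_NUM_FILES=$(( (REQUIRED_DATASET_GB + FILE_SIZE_IN_GB - 1) / FILE_SIZE_IN_GB ))

echo "dataset must be at least ${REQUIRED_DATASET_GB} GB (~${MIN_NUM_FILES} files)"
```

This is why a larger -m value produces a larger recommended dataset.num_files_train in the datasize step, while steady-state throughput stays roughly the same once the client cache effect is eliminated.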
Hi Huihuo @zhenghh04, when we test 8 GPUs in one client, the AU starts to fail again even with 128 read threads. Besides reader threads, are there any other parameters we can adjust to keep the GPUs busy?
Which workload were you testing?
You might also need to check the CPU affinity of your threads, usually with the --cpu-bind option in your MPI command.
Huihuo
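As an illustration of the binding suggestion (the launcher flags below are standard MPICH and Slurm options, but how the benchmark invokes MPI internally may differ, so adapt them to your actual MPI command line):

```bash
# Illustrative only: requesting CPU binding at the MPI launcher level for an
# 8-process run. Replace ./your_benchmark_command with your actual command.

# MPICH/Hydra style binding:
mpiexec -n 8 -bind-to core ./your_benchmark_command

# Slurm style binding:
srun -n 8 --cpu-bind=cores ./your_benchmark_command
```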
It is unet3d, thanks a lot. I will give --cpu-bind a try; I'm not really familiar with mpich, but I will try it. Thanks again. -Frank
I am new; I just ran the storage benchmark for the first time and got this line:
Training_au_meet_expectation = fail.
My questions are: