About nuScenes data conversion #15
Comments
Yes, the inference speed of LLaVA-1.6-34B is relatively slow, and each sample requires generating multiple QA responses. If you need to speed up data production, you might consider using 4-bit or 8-bit inference here. If you have sufficient GPU resources, you can also consider splitting the dataset and generating data in parallel; a rough sketch of both ideas follows.
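As a minimal sketch (not the repo's actual conversion script), here is how LLaVA-1.6-34B could be loaded in 4-bit via Hugging Face Transformers + bitsandbytes, with the nuScenes samples sliced into shards so each GPU/process converts its own subset. The model ID, prompt template, and shard arguments are assumptions.

```python
# Hypothetical sketch: 4-bit LLaVA-1.6-34B inference plus naive dataset sharding.
# The model ID, prompt template, and CLI-style shard arguments are assumptions,
# not taken from the repository's conversion code.
import torch
from transformers import (
    LlavaNextProcessor,
    LlavaNextForConditionalGeneration,
    BitsAndBytesConfig,
)

MODEL_ID = "llava-hf/llava-v1.6-34b-hf"  # assumed HF checkpoint name

# 4-bit NF4 quantization roughly quarters the weight memory versus fp16,
# which lets the 34B model fit on (and run faster on) a single consumer GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

processor = LlavaNextProcessor.from_pretrained(MODEL_ID)
model = LlavaNextForConditionalGeneration.from_pretrained(
    MODEL_ID,
    quantization_config=bnb_config,
    device_map="auto",
)

def answer(image, question: str, max_new_tokens: int = 256) -> str:
    """Generate one QA response for a single camera frame."""
    # Assumed chat template for the 34B checkpoint; adjust to the model card.
    prompt = (
        "<|im_start|>system\nAnswer the questions.<|im_end|>"
        f"<|im_start|>user\n<image>\n{question}<|im_end|>"
        "<|im_start|>assistant\n"
    )
    inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return processor.decode(out[0], skip_special_tokens=True)

def shard(samples: list, shard_id: int, num_shards: int) -> list:
    """Naive parallelism: launch the script once per GPU with a different
    shard_id and let each process convert only its slice of the samples."""
    return samples[shard_id::num_shards]
```

With 8 GPUs you would run eight processes with `num_shards=8` and `shard_id` 0 through 7, then merge the generated QA files afterwards.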
@AMzhanghan were you able to convert the data successfully? Can you please share the converted data?
The data conversion is so slow that it would take more than 100 days to finish on 8×3090 GPUs. Can anyone share the converted data? Many thanks.
Same question.
The data conversion is very slow, about 6 minutes per frame. Why is it so slow?
