Hi,
I am currently using the API to train models for submission to the Argoverse 2 prediction challenge leaderboard, and I'm also interested in applying these techniques to real-world data. I'm seeking clarification on how tracks are labeled within the dataset. My current understanding is listed below, followed by a short sketch of how I read these categories from the devkit:
Track Fragment: These are lower quality tracks that may contain only a few timestamps of observations.
Unscored Track: These tracks are used for contextual input and are not scored.
Scored Track: These are high-quality tracks relevant to the Autonomous Vehicle (AV) and are scored in the multi-agent prediction challenge.
Focal Track: This is the primary track of interest in a given scenario and is scored in the single-agent prediction challenge.
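For context, here is roughly how I am reading these categories out of a scenario. This is only a minimal sketch, assuming the motion forecasting parquet scenarios, scenario_serialization, and the TrackCategory enum from av2.datasets.motion_forecasting.data_schema; the scenario path is a placeholder for one of your own files.

```python
from collections import Counter
from pathlib import Path

from av2.datasets.motion_forecasting import scenario_serialization
from av2.datasets.motion_forecasting.data_schema import TrackCategory

# Placeholder path: point this at any scenario parquet file from the
# motion forecasting dataset.
scenario_path = Path("path/to/scenario_<scenario_id>.parquet")
scenario = scenario_serialization.load_argoverse_scenario_parquet(scenario_path)

# How many tracks fall into each category in this scenario?
print(Counter(track.category.name for track in scenario.tracks))

# Compare observation lengths of focal/scored tracks against the rest,
# to see whether duration alone separates the categories.
for track in scenario.tracks:
    if track.category in (TrackCategory.FOCAL_TRACK, TrackCategory.SCORED_TRACK):
        print(track.track_id, track.category.name, len(track.object_states))
```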
My main question revolves around the criteria for selecting these labels, especially:
Is the focal track chosen because it has a longer duration of observed data?
Besides duration, are scored tracks also selected based on their proximity to the AV, or on other factors?
I'm considering using a similar methodology to label and utilize real-world data to achieve comparable performance. Thank you for any insights or advice on this approach.