
Scheduler


The Smartfin is intended as a modular data acquisition platform. As noted in https://github.com/UCSD-E4E/smartfin-fw3/wiki/Smartfin-Publish-Data-Format, there are a number of different data ensemble types. We anticipate there being more ensemble types as we add more sensors, and as we develop more use cases for the Smartfin platform.

Each ensemble type has its own rate/period. For instance, we may only need Battery status once every 10 seconds, while we would like Temperature once every second, and IMU data 10 times a second. Some ensembles may also require some amount of averaging over multiple measurements.

The schedule will be defined as a static table in RAM. Each element in the table will be a data structure encoding the following information (see the sketch after this list):

  • Ensemble Delay (how long to wait after deployment start to execute the first ensemble)
  • Sample Interval (how long between consecutive measurements)
  • Measurements to Accumulate (how many measurements to accumulate before emitting an ensemble)
  • Total Ensembles (how many ensembles to emit per deployment)
  • Initialization Function
  • Measurement Function
  • Processing Function
  • State Information (last measurement timestamp, deployment start timestamp, measurement/ensemble counter, ensemble buffer, etc.)
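
A minimal sketch of what one table entry might look like is shown below. All type and field names here (ScheduleTableEntry_t, EnsembleState_t, the 64-byte buffer size, millisecond units) are illustrative assumptions, not the firmware's actual definitions.

```c++
#include <cstdint>

// Hypothetical function pointer types for the per-ensemble hooks.
typedef void (*EnsembleInitFn)(void *pState);
typedef void (*EnsembleMeasureFn)(void *pState);
typedef void (*EnsembleProcessFn)(void *pState);

// Per-ensemble runtime state (illustrative field names).
typedef struct EnsembleState_
{
    uint32_t deploymentStartTime;     // timestamp of deployment start
    uint32_t lastMeasurementTime;     // timestamp of the most recent measurement
    uint32_t measurementCount;        // measurements accumulated toward the current ensemble
    uint32_t ensembleCount;           // ensembles emitted so far this deployment
    uint8_t  ensembleBuffer[64];      // accumulation buffer (size is a placeholder)
} EnsembleState_t;

// One row of the static schedule table.
typedef struct ScheduleTableEntry_
{
    uint32_t ensembleDelay;           // ms to wait after deployment start before the first ensemble
    uint32_t sampleInterval;          // ms between consecutive measurements
    uint32_t measurementsPerEnsemble; // measurements to accumulate before emitting an ensemble
    uint32_t totalEnsembles;          // ensembles to emit per deployment (0 = unlimited)
    EnsembleInitFn    init;           // initialization function
    EnsembleMeasureFn measure;        // measurement function
    EnsembleProcessFn process;        // processing function
    EnsembleState_t   state;          // mutable runtime state
} ScheduleTableEntry_t;
```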

The scheduling module will also provide two functions:

  • initializeSchedule(scheduleTable, startTime)
  • getNextEvent(scheduleTable, p_nextEvent, p_nextTime)

initializeSchedule takes the scheduleTable and, for each element in the scheduleTable, sets the deployment start timestamp to startTime, resets the internal state variables, and executes the initialization function.

getNextEvent takes the scheduleTable and computes the next element that should be run. The selected element is placed in p_nextEvent, and the timestamp at which that element should be executed is placed in p_nextTime.
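
A sketch of how these two functions might behave is given below. It reuses the hypothetical ScheduleTableEntry_t from the sketch above, assumes the table is terminated by an entry with a null measurement function, and uses the deployment-relative ("Increment") timing discussed below; this is illustrative, not the firmware's implementation.

```c++
#include <cstdint>
#include <cstring>

// Assumes the hypothetical ScheduleTableEntry_t / EnsembleState_t sketched above.

void initializeSchedule(ScheduleTableEntry_t *scheduleTable, uint32_t startTime)
{
    for (ScheduleTableEntry_t *pEntry = scheduleTable; pEntry->measure != nullptr; pEntry++)
    {
        // Reset per-entry state and record the deployment start.
        memset(&pEntry->state, 0, sizeof(EnsembleState_t));
        pEntry->state.deploymentStartTime = startTime;
        if (pEntry->init)
        {
            pEntry->init(&pEntry->state);
        }
    }
}

void getNextEvent(ScheduleTableEntry_t *scheduleTable,
                  ScheduleTableEntry_t **p_nextEvent,
                  uint32_t *p_nextTime)
{
    *p_nextEvent = nullptr;
    for (ScheduleTableEntry_t *pEntry = scheduleTable; pEntry->measure != nullptr; pEntry++)
    {
        EnsembleState_t *pState = &pEntry->state;
        if (pEntry->totalEnsembles != 0 && pState->ensembleCount >= pEntry->totalEnsembles)
        {
            continue;   // this entry has emitted all of its ensembles
        }
        uint32_t nMeasurements = pState->ensembleCount * pEntry->measurementsPerEnsemble +
                                 pState->measurementCount;
        // Timing is computed from the deployment start ("Increment" scheme), so a late
        // measurement does not shift every subsequent measurement.
        uint32_t nextTime = pState->deploymentStartTime + pEntry->ensembleDelay +
                            nMeasurements * pEntry->sampleInterval;
        // Earliest deadline wins; ties resolve in table order.
        if (*p_nextEvent == nullptr || nextTime < *p_nextTime)
        {
            *p_nextEvent = pEntry;
            *p_nextTime = nextTime;
        }
    }
}
```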

Currently, the logic as implemented in https://github.com/UCSD-E4E/smartfin-fw2/blob/324b1749315e2d02d8b2e58c57178e95019c6c31/src/scheduler.cpp#L15 prioritizes the interval between measurements. That is, if the previous measurement was late, all subsequent measurements will continue to be late. This results in an accumulating phase shift when sampling periodic signals.

The original logic as implemented in https://github.com/UCSD-E4E/smartfin-fw/blob/159aee826ef3426e9f5db051da91fcaa677a1cc4/src/util.h#L11 prioritizes the interval from the initial measurement. That is, if the previous measurement was late, subsequent measurements are still timed from the initial measurement.

Analysis in https://github.com/UCSD-E4E/smartfin-fw3/blob/main/docs/sampling_algorithm.ipynb shows that the original logic (labeled Increment) introduces fewer spectral artifacts in the sampled signal, and should be used.
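
The difference between the two schemes amounts to how the next measurement time is computed. Roughly (hypothetical helper names, fixed sampleInterval per entry):

```c++
#include <cstdint>

// fw2-style: schedule relative to when the previous measurement actually ran.
// Any lateness is carried forward, so phase error accumulates.
uint32_t nextTimeIntervalPriority(uint32_t lastActualMeasurementTime, uint32_t sampleInterval)
{
    return lastActualMeasurementTime + sampleInterval;
}

// Original ("Increment") style: schedule relative to the deployment start.
// A late measurement does not shift the timing of later measurements.
uint32_t nextTimeIncrement(uint32_t deploymentStartTime, uint32_t measurementIndex,
                           uint32_t sampleInterval)
{
    return deploymentStartTime + (measurementIndex + 1) * sampleInterval;
}
```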

Some thought needs to be put into the scheduler design depending on whether it runs in a single-threaded or multi-threaded context. Processing functions may be required if we need to do on-board spectral analysis, multi-measurement averaging, etc. For the initial MVP, we can assume trivial or no processing for any given ensemble. However, future product revisions may require complex processing, which may take more time than is available between measurements. In that case, single-threaded schedule execution is not practical, and multi-threaded schedule execution is required.

Even in a single-threaded environment, there exists the possibility that multiple ensemble measurements may be scheduled for the same time slice. It is equally possible that a single sensor measurement may be scheduled for a time slice that serves multiple ensembles. In the version 2 logic, these cases are ignored, and measurements are taken sequentially in table order.

The elegant solution is to do schedule execution and sensor timing in a multi-threaded context. In principle, you would have the following threads:

  1. Scheduler (Priority 1)
  2. Sensor 1 Acquisition (Priority 2)
  3. Sensor 2 Acquisition (Priority 2)
  4. Sensor N Acquisition (Priority 2)
  5. Ensemble 1 Accumulator (Priority 3)
  6. Ensemble 2 Accumulator (Priority 3)
  7. Ensemble M Accumulator (Priority 3)
  8. Ensemble 1 Processor (Priority 4)
  9. Ensemble 2 Processor (Priority 4)
  10. Ensemble M Processor (Priority 4)

Having the ensemble accumulator and processor in separate threads allows us to delay ensemble processing and lets the thread scheduler better interleave processing and idle time. However, this has the side effect of allowing ensembles scheduled close to one another to be emitted out of order. It is therefore critical that each ensemble carry its own independent timestamp so that ensembles can be properly resequenced. The other option is to make the Recorder aware of ensembles as well as the schedule so that it can reorder ensembles; however, a stalled ensemble pipeline would then cause the Recorder to back up and possibly overflow its buffer.
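
As a rough illustration of the acquisition-to-accumulation hand-off, a sketch using standard C++ threads and a shared queue is shown below. This is not tied to any particular RTOS; std::thread has no priority parameter, so the priorities listed above would be assigned through whatever threading API the Device OS provides, and all names here are hypothetical.

```c++
#include <chrono>
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <queue>
#include <thread>

// Hypothetical raw sample passed from an acquisition thread to an accumulator thread.
struct RawSample
{
    uint32_t timestamp; // independent timestamp so ensembles can be resequenced later
    float value;
};

std::queue<RawSample> sampleQueue;
std::mutex queueMutex;
std::condition_variable queueCv;

// Sensor N acquisition thread: read the sensor and hand the sample off immediately.
void sensorAcquisitionThread()
{
    for (uint32_t i = 0; i < 100; i++) // placeholder loop; real code runs for the deployment
    {
        RawSample sample{i, 0.0f /* read sensor here */};
        {
            std::lock_guard<std::mutex> lock(queueMutex);
            sampleQueue.push(sample);
        }
        queueCv.notify_one();
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
}

// Ensemble M accumulator thread: collect samples until an ensemble is complete,
// then hand it to the (lower-priority) processor thread.
void ensembleAccumulatorThread()
{
    int accumulated = 0;
    while (accumulated < 100)
    {
        std::unique_lock<std::mutex> lock(queueMutex);
        queueCv.wait(lock, [] { return !sampleQueue.empty(); });
        RawSample sample = sampleQueue.front();
        sampleQueue.pop();
        lock.unlock();
        accumulated++;
        // ...append to the ensemble buffer; emit/enqueue for processing when full...
        (void)sample;
    }
}

int main()
{
    std::thread acquisition(sensorAcquisitionThread);
    std::thread accumulator(ensembleAccumulatorThread);
    acquisition.join();
    accumulator.join();
    return 0;
}
```

The per-sample timestamp carried through the pipeline is what allows downstream resequencing when ensembles complete out of order.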
