
2015.01.23: Scheduler Refactor

A project log for Digitabulum: The last motion-capture glove

Digitabulum is an open motion-capture glove that was intended to be a full-featured, hacker-friendly user-input and sensor platform.

J. Ian Lindsay, 01/27/2015 at 07:58

Original Post

The asynchronous machinery underwent a massive refactor over the past two days. Many desired features were added. The change list follows:

------------

#### PriorityQueue

The PriorityQueue template has a new method called recycle(). This method takes the head of the queue and returns it to the caller while demoting the element to the end of the queue. This solves a major efficiency gripe when traversing the queue. Previously, traversal had to be done either iteratively outside PriorityQueue (with O(n^2) run-time), or via a dequeue()/insert() sequence (which results in needless heap thrash). The recycle() method allows for linear run-time with no heap thrash. The unit test for this data structure was updated to cover the new addition, and is valgrind-clean.
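To make the idea concrete, here is a minimal sketch of a recycle()-style operation on a singly-linked queue. This is not the project's actual PriorityQueue (priority ordering and most bookkeeping are omitted, and the class and member names are assumptions); it only shows why head-to-tail recycling avoids both re-allocation and quadratic traversal.

```cpp
// Simplified illustration only; not the real PriorityQueue class.
template <class T> class RecyclableQueue {
  private:
    struct Node {
      Node* next;
      T     data;
    };
    Node* head = nullptr;
    Node* tail = nullptr;

  public:
    // Append at the tail. Priority ordering is omitted for brevity.
    void insert(T d) {
      Node* node = new Node{nullptr, d};
      if (nullptr == head) { head = node; }
      else                 { tail->next = node; }
      tail = node;
    }

    // Return the head element and demote it to the tail in one step.
    // Nothing is freed or allocated, so visiting every element by calling
    // recycle() n times costs O(n) total and causes no heap thrash.
    T recycle() {
      if (nullptr == head) return T();      // Empty queue.
      Node* node = head;
      if (head != tail) {                   // More than one element: move it.
        head       = node->next;
        node->next = nullptr;
        tail->next = node;
        tail       = node;
      }
      return node->data;
    }
};
```

A full traversal then becomes nothing more than calling recycle() once per element and acting on each returned item.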

------------

#### Scheduler

* The linked list that was previously at the core of the scheduler was eliminated in favor of the PriorityQueue template.

* Class-based callbacks are now supported under the EventManager interface. Client classes will be refactored over time to eliminate callback spaghetti, thereby realizing slightly faster service times and smaller code size.

* The execution queue is no longer encumbered by memory management. This allows many copies of the same schedule to fire within a single millisecond, and also enforces a natural concurrency guard. It also speeds up the service function: instead of idle service time (the nominal case) being linear with respect to the total number of schedules, it is now linear with respect to the number of schedules waiting to be fired. Future work will expand on this idea to maintain a third PriorityQueue for schedules that are active.

* The ScheduleItem struct is now a proper class with all the encapsulation benefits. This has yet to be fully leveraged, but code readability is already dramatically improved.

* A new method named fireSchedule() allows an existing schedule to be manually fired ahead of its scheduled time. This will greatly simplify one-shots and tasks such as bus timeouts (see the sketch after this list).

* The profiler struct that was in use was removed in favor of the more-capable class that was written to support the EventManager. This cut down on total lines-of-code and added better profiling support to the scheduler.
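As a rough sketch of the early-fire idea mentioned above: the class below is not the real Scheduler. The names (TinyScheduler, createSchedule(), fireSchedule(), advance()) and their signatures are assumptions, and a plain vector stands in for the PriorityQueue.

```cpp
// Illustration only; class, method names, and signatures are assumptions.
#include <cstdint>
#include <vector>

typedef void (*ScheduleCallback)();

struct SketchSchedule {
  uint32_t         pid;
  uint32_t         period_ms;
  uint32_t         ms_remaining;
  bool             enabled;
  ScheduleCallback callback;
};

class TinyScheduler {
  public:
    uint32_t createSchedule(uint32_t period_ms, ScheduleCallback cb) {
      schedules.push_back({next_pid, period_ms, period_ms, true, cb});
      return next_pid++;
    }

    // Fire an existing schedule ahead of its scheduled time. This is the
    // sort of call a bus driver could make to resolve a timeout early.
    void fireSchedule(uint32_t pid) {
      for (auto& s : schedules) {
        if (s.pid == pid) { s.ms_remaining = 0; break; }
      }
    }

    // Called from the millisecond tick; runs anything that is due.
    void advance(uint32_t ms_elapsed) {
      for (auto& s : schedules) {
        if (!s.enabled) continue;
        s.ms_remaining = (s.ms_remaining > ms_elapsed) ? (s.ms_remaining - ms_elapsed) : 0;
        if (0 == s.ms_remaining) {
          s.callback();
          s.ms_remaining = s.period_ms;   // Re-arm; one-shot handling omitted.
        }
      }
    }

  private:
    std::vector<SketchSchedule> schedules;
    uint32_t next_pid = 1;
};
```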

------------

#### EventManager

* Better error-checking on event insertion.

* Implemented a new member in the Event class that allows direct notify() calls. The EventManager will no longer bother informing other classes about an event if that event is so targeted.

* Events are now fed to the system from a stack-resident preallocation buffer whenever possible; heap allocation only happens when the event load is higher than the preallocation buffer can sustain. This cut down on heap thrash dramatically.

* The memory-management strategy for Events is now decided and vetted. Client classes may now allocate and reap (or not) their own events. High-traffic events are now allocated on-stack in the classes that emit them.

* The EventManager now respects the flags in such events, and only returns self-allocated events to the preallocation pool. (It formerly returned _any_ event to this pool if the queue depth was below the watermark. This improved heap thrash, but led to heap fragmentation that could only be resolved by putting the event system under heavy load or by flushing the preallocation buffer.)

* A flag was added to the MessageDef class that tags certain types of events as idempotent in the queue. This allows the EventManager to discard events that already have a pending execution. The performance gains from this are minor at the moment, but will be essential when the event system is under high load from events that _should_ be treated as idempotent, such as bus service events. A sketch of how these pieces fit together follows this list.
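The compressed sketch below only mirrors the behavior described above: a preallocation pool, an ownership flag that decides whether a reclaimed event returns to the pool or is freed, and an idempotency check at insertion. None of the names (SketchEvent, acquire(), raise(), reclaim(), EVENT_POOL_SIZE) are the real EventManager API; they are assumptions for illustration.

```cpp
// Illustration only; types, names, and flag layout are assumptions.
#include <cstdint>
#include <list>

#define EVENT_POOL_SIZE 8

struct SketchEvent {
  uint16_t message_code = 0;
  bool preallocated = false;  // True if this event came from the static pool.
  bool idempotent   = false;  // At most one pending copy allowed in the queue.
  bool in_use       = false;
};

class SketchEventManager {
  public:
    // Prefer the preallocation pool; fall back to the heap only when the
    // event load exceeds what the pool can sustain.
    SketchEvent* acquire(uint16_t code) {
      for (auto& ev : pool) {
        if (!ev.in_use) {
          ev.message_code = code;
          ev.preallocated = true;
          ev.in_use       = true;
          return &ev;
        }
      }
      SketchEvent* ev = new SketchEvent();
      ev->message_code = code;
      ev->in_use       = true;
      return ev;
    }

    // Discard an idempotent event if a copy is already pending.
    bool raise(SketchEvent* ev) {
      if (ev->idempotent) {
        for (auto* pending : queue) {
          if (pending->message_code == ev->message_code) {
            reclaim(ev);
            return false;
          }
        }
      }
      queue.push_back(ev);
      return true;
    }

    // Respect ownership: only pool-owned events return to the pool.
    void reclaim(SketchEvent* ev) {
      if (ev->preallocated) ev->in_use = false;
      else                  delete ev;
    }

  private:
    SketchEvent pool[EVENT_POOL_SIZE];
    std::list<SketchEvent*> queue;
};
```

In this sketch, a client would acquire() an event, optionally mark it idempotent, raise() it, and let the manager reclaim() it after execution.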

All of these changes together had a net code size *increase* of about 3KB, and improved average event execution time by ~5x for events that fully leveraged the new features (SPI_QUEUE_READY was ~250 µs, and is now ~50 µs).

------------

#### SPIBusOp

* Underwent a conversion to stack-preallocation that closely mirrors the changes in EventManager. It would be nice, in the future, to consolidate this strategy into a single class and implement it by composition in client classes.

* Callback execution is now handled in a queue separate from the one that services actual bus operations (sketched below). This allows the bus to come closer to saturation: callbacks on bus operations can be deferred so that bus access happens concurrently with callbacks from previously-completed bus operations, rather than blocking the bus queue.
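A sketch of the two-queue arrangement, assuming hypothetical names (SketchBusOp, SketchSpiDriver) and glossing over the hardware details. The real SPIBusOp and CPLD driver differ; the point is only that finished jobs wait in a separate callback queue so the next transfer can start immediately.

```cpp
// Illustration only; the real SPIBusOp and CPLD driver differ.
#include <queue>

struct SketchBusOp {
  int opcode;
  void (*callback)(SketchBusOp*);
};

class SketchSpiDriver {
  public:
    void queueJob(SketchBusOp* op) { work_queue.push(op); }

    // Hardware says a transfer finished: start the next job right away and
    // defer the completed job's callback instead of running it inline.
    void onTransferComplete(SketchBusOp* finished) {
      callback_queue.push(finished);
      startNextTransfer();
    }

    // Run deferred callbacks while the bus is busy with the next transfer.
    void serviceCallbacks() {
      while (!callback_queue.empty()) {
        SketchBusOp* op = callback_queue.front();
        callback_queue.pop();
        if (op->callback) op->callback(op);
      }
    }

  private:
    void startNextTransfer() {
      if (work_queue.empty()) return;
      SketchBusOp* next = work_queue.front();
      work_queue.pop();
      (void) next;   // Hand `next` to the SPI/DMA hardware here.
    }

    std::queue<SketchBusOp*> work_queue;
    std::queue<SketchBusOp*> callback_queue;
};
```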

------------

### Short term goals

* Fix the bus faults that are causing only the first two bytes to be read from the SPI bus.

* Add a timeout schedule to the CPLDDriver class to allow hung SPI jobs to be gracefully cleared or retried rather than locking the bus.

---J. Ian Lindsay
