Back in April we discussed the concept of a wrapper and mentioned how it can be used to separate the core application layers from the system services, which implies some sort of operating system. So today we’ll talk about a few operating system architectures and why you’d use one over another.
In the embedded world, when we refer to the Operating System (OS), as shown in Figure 2, we are referring to the core services that control program execution flow and coordinate the activity of each task. These services include creating and running tasks, semaphores, message queues, and memory allocation, to name a few common ones; we’ll discuss them in more detail in our tutorial articles.
When selecting a suitable operating system for a project, I like to work out the big features of the application and relate them to the hardware candidates. Normally that will whittle down the options very quickly. The first step is to understand the broad classes of operating system choices we have:
- No Operating System
- Pre-emptive Kernel (embedded)
- Real-Time Operating System (RTOS)
- General-Purpose Operating System
Within each of these broad classes there are a number of options or variations to consider. Today we’ll discuss the first choice (No Operating System) and when you’d use it, and then follow up with articles on the other three.
No Operating System (NOS)
For many smaller systems no kernel is needed, but there may still be value in having a wrapper and making it possible to use a kernel. For instance, if the application layers already exist from a larger system, then using a NOS to maintain the same wrapper layer makes a great deal of sense. This class also includes a number of variants, typically with interrupts providing the real-time responsiveness needed by the application:
- Simple Main Loop (SML) – Just a main loop that does all the task-level work. Typically the main initializes things and then enters an infinite loop where it calls the task-level activity repeatedly (a rough C sketch follows the outline below).
MAIN:
Initialize hardware
Initialize application code
LOOP_FOREVER:
Call each application function
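Here’s roughly what an SML looks like in C. Take it as a sketch only; the function names (hardware_init, read_sensors and friends) are made-up placeholders for whatever your application actually does:

/* Simple Main Loop: initialize once, then call each task function forever. */
static void hardware_init(void)  { /* clocks, GPIO, peripherals */ }
static void app_init(void)       { /* application state         */ }
static void read_sensors(void)   { /* poll inputs               */ }
static void update_outputs(void) { /* drive outputs             */ }
static void service_comms(void)  { /* handle serial traffic     */ }

int main(void)
{
    hardware_init();
    app_init();

    for (;;) {                   /* LOOP_FOREVER */
        read_sensors();
        update_outputs();
        service_comms();
    }
}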
- Big Frickin’ Loop (BFL) – Typically involves the main loop cycling through a bunch of state machines. The BFL may use a variety of flags, queues, or timers to decide when a state machine runs. This is a primitive form of Cooperative Multitasking, but without a formal context switch (again, a C sketch follows the outline below).
MAIN:
Initialize hardware
Initialize each state machine
Initialize application code
LOOP_FOREVER:
Call each state machine function
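A rough C sketch of a BFL is shown below. The two state machines and the flags they watch are invented for illustration; the interrupt handlers that set tick_flag and rx_flag are assumed to live elsewhere in the system:

#include <stdbool.h>

/* Flags set by interrupt handlers elsewhere in the system. */
static volatile bool tick_flag;   /* set by a periodic timer interrupt */
static volatile bool rx_flag;     /* set by a UART receive interrupt   */

/* One state machine per major activity; each decides for itself
 * whether it has anything to do on this pass through the loop.   */
static void display_state_machine(void)
{
    static enum { DISP_IDLE, DISP_REDRAW } state = DISP_IDLE;
    switch (state) {
    case DISP_IDLE:
        if (tick_flag) { tick_flag = false; state = DISP_REDRAW; }
        break;
    case DISP_REDRAW:
        /* update the display, then wait for the next tick */
        state = DISP_IDLE;
        break;
    }
}

static void comms_state_machine(void)
{
    static enum { COMM_IDLE, COMM_PROCESS } state = COMM_IDLE;
    switch (state) {
    case COMM_IDLE:
        if (rx_flag) { rx_flag = false; state = COMM_PROCESS; }
        break;
    case COMM_PROCESS:
        /* parse the received data and queue any reply */
        state = COMM_IDLE;
        break;
    }
}

int main(void)
{
    /* hardware and state machine initialization would go here */
    for (;;) {                    /* LOOP_FOREVER */
        display_state_machine();
        comms_state_machine();
    }
}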
- Stack Kernel Loop (SKL) – Usually called Cooperative Multitasking; uses a stack (typically one per task) as the way of switching contexts between tasks, and typically requires some sort of routine to push the current context onto its stack and pop off the context of the next ready-to-run task (a skeleton of such a scheduler follows the outline below).
MAIN:
Initialize hardware
Initialize each task context
Initialize application code
LOOP_FOREVER:
Run the cooperative scheduler
TASK_FUNCTION(N):
Call application functions as needed
Run scheduler
SCHEDULER:
Implement scheduling algorithm for N tasks (e.g. Round robin)
Run next TASK_FUNCTION
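Below is a skeleton of what the SKL bookkeeping and scheduler might look like in C. Everything here (task_t, task_yield, port_context_switch) is illustrative rather than taken from any particular kernel, and the actual context push/pop is a few lines of platform-specific code (usually assembly) that is only declared as a stub. Task creation, which builds an initial frame on each private stack, is not shown either:

#include <stdint.h>

#define MAX_TASKS   4
#define STACK_WORDS 128

typedef struct {
    uint32_t *sp;                  /* saved stack pointer for this task */
    uint32_t  stack[STACK_WORDS];  /* private stack for this task       */
    volatile uint8_t ready;        /* nonzero when the task has work    */
} task_t;

static task_t  tasks[MAX_TASKS];
static uint8_t current;            /* index of the running task         */

/* Platform-specific: save the current stack pointer into *old_sp and
 * resume execution on *new_sp.  On a small MCU this is a handful of
 * assembly instructions; it is only declared here.                    */
extern void port_context_switch(uint32_t **old_sp, uint32_t **new_sp);

/* Round-robin pick of the next ready task, then switch to it. */
void scheduler(void)
{
    uint8_t next = current;
    do {
        next = (uint8_t)((next + 1u) % MAX_TASKS);
    } while (!tasks[next].ready && next != current);

    if (next != current) {
        uint8_t previous = current;
        current = next;
        port_context_switch(&tasks[previous].sp, &tasks[next].sp);
    }
}

/* A task calls this whenever it is willing to give up the CPU. */
void task_yield(void)
{
    scheduler();
}

Each TASK_FUNCTION then calls task_yield() at its natural break points, which is the “Run scheduler” step in the outline above.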
Often the LOOP_FOREVER is built into the scheduler, and the main just calls the scheduler and never returns. A stack or cooperative kernel can handle surprisingly complex systems with careful tuning of the scheduler algorithm and by breaking up the task functions so that they don’t exceed some desired time slice of CPU. The downside is that the code will be less portable because it has to call the scheduler explicitly, which is why pre-emptive systems are popular. Using third-party libraries will pose a problem if the foreign code doesn’t play nice.
The benefit is that the code is easy to follow and there are fewer synchronization gotchas to watch out for, and the ones that do exist are easier to debug because everything is happening in the same processor context. In other words, the task context is very lightweight, much like a thread in Unix versus a process.
If you have very limited space and the functionality is pretty one-dimensional, then either the SML or BFL approach works best because they are simple to implement, even by junior developers. The time to get up to speed is very short and the hardware can be very small. As the complexity of the tasks grows, breaking down the application problem becomes more of a chore, and as the development team gets larger, having a kernel becomes advantageous.
In all of these loop models any real-time requirements are typically handled in the drivers and are typically written using interrupts. Any polled IO should be restricted to interfaces where the timing can be really sloppy. The stack kernel (SKL) method has another advantage in that you can implement an event-driven system much more easily and eliminate the wasteful polling activity that occurs in the other two methods. To do this properly the scheduler should only run a task when something signals that it needs to run. For instance, data comes in on a serial port and is written into a serial buffer. Writing into the serial buffer signals the kernel that the serial task can now process the data in the buffer. When the serial task sees something interesting in the serial data it can message another task to perform the request, and that second task gets started once the serial task relinquishes the CPU.
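As a sketch of that flow in C (with invented names throughout; uart_rx_isr, task_set_ready, SERIAL_TASK and PROTOCOL_TASK are all hypothetical), the interrupt does nothing but buffer the byte and mark the serial task ready, so nobody has to poll:

#include <stdint.h>

#define RX_BUF_SIZE 64

static volatile uint8_t rx_buf[RX_BUF_SIZE];
static volatile uint8_t rx_head, rx_tail;

/* Provided by the cooperative kernel: mark a task runnable. */
extern void task_set_ready(uint8_t task_id);
#define SERIAL_TASK   1
#define PROTOCOL_TASK 2

/* UART receive interrupt: store the byte and wake the serial task.
 * (Buffer-overflow handling is omitted to keep the sketch short.)  */
void uart_rx_isr(uint8_t byte)
{
    rx_buf[rx_head] = byte;
    rx_head = (uint8_t)((rx_head + 1u) % RX_BUF_SIZE);
    task_set_ready(SERIAL_TASK);       /* event, not polling */
}

/* Serial task: the scheduler only runs this when data has arrived. */
void serial_task(void)
{
    while (rx_tail != rx_head) {
        uint8_t byte = rx_buf[rx_tail];
        rx_tail = (uint8_t)((rx_tail + 1u) % RX_BUF_SIZE);
        /* parse the byte; when a complete request is recognized,
         * hand it off: task_set_ready(PROTOCOL_TASK);           */
        (void)byte;
    }
}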
From Figure 3 you can see that the Protocol Layer task only runs when the Serial Task sees that it must do something and sends that task a message. Otherwise it does nothing.
One last feature of all three models is that they assume run-to-completion for each function called. In other words, no other task code will run until the current code has finished doing its thing. This makes the code simpler, but it means that every task will impact the execution time of every other task, and for larger systems this becomes a problem, as no one person knows everything, nor do we control all the code that might be in the system. It may also make it hard to meet real-time requirements. Enter the pre-emptive kernel, which we’ll talk about next time.
So, as you can see, even in this most primitive task architecture there’s considerable variation in complexity, from almost nothing to a fully functional kernel with full messaging and context switching.
In general, SML and BFL are great for deeply embedded hardware with very limited resources, where only a few well-defined tasks are needed to implement the project or product. If there is a need for a user interface, a variety of devices that may be plugged in at different times, or code that must run in parallel, then the SKL method works well. Hope that helps with the decision making on your next project. Next time we’ll look at more complex systems and see if you need to step up to a full pre-emptive kernel.
Cheers!