
Real-time operating system


A real-time operating system, or RTOS, is an operating system that has been developed for real-time applications. Typically used in embedded applications, these systems usually have the following characteristics:

  • Small footprint (doesn't use much memory)
  • Pre-emptable (any hardware event can cause a task to run)
  • Multi-architecture (code ports to another type of CPU)
  • Many have predictable response times to electronic events

Debate exists about what actually constitutes real-time.

Many real-time operating systems have scheduler and hardware driver designs that minimize the periods for which interrupts are disabled, a number sometimes called the interrupt latency.
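
As a minimal sketch of that idea (the names irq_save, irq_restore and tick_count are invented for illustration, not taken from any particular RTOS), interrupts are disabled only around the few instructions that touch shared state:

#include <stdint.h>

/* Stand-ins for the port-specific interrupt control; a real port would read
 * and write the CPU's interrupt mask here. */
static uint32_t irq_save(void)            { return 0; }
static void     irq_restore(uint32_t f)   { (void)f; }

static volatile uint32_t tick_count;      /* shared between a task and the clock ISR */

uint32_t read_and_clear_ticks(void)
{
    uint32_t flags = irq_save();          /* interrupts off ... */
    uint32_t ticks = tick_count;
    tick_count = 0;
    irq_restore(flags);                   /* ... and back on a few instructions later */
    return ticks;
}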

Many also include special forms of memory management that limit the possibility of memory fragmentation, and assure a minimal upper bound on memory allocation and deallocation times.

It is a fallacy to believe that this type of operating system is efficient in the sense of having high throughput: the specialized scheduling algorithm, and sometimes a high clock-interrupt rate, both interfere with throughput.

There are two basic designs. An event-driven operating system only changes tasks when an event requires service. A time-sharing design switches tasks on a clock interrupt, as well as on events.

The time-sharing design wastes more CPU time on unnecessary task switches. However, it also gives a better illusion of multitasking.
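
The difference between the two designs can be sketched as follows (hypothetical names; an illustration, not the design of any particular system):

/* Event-driven: schedule() is called only when an event readies a task.
 * Time-sharing: the periodic clock interrupt below additionally requests a
 * scheduling pass, so equal-priority tasks take turns even with no events. */
extern void schedule(void);               /* pick the highest-priority ready task */

static volatile int reschedule_needed;

void clock_tick_isr(void)
{
    reschedule_needed = 1;                /* only the time-sharing design does this */
}

void interrupt_exit(void)                 /* common exit path of every interrupt */
{
    if (reschedule_needed) {
        reschedule_needed = 0;
        schedule();
    }
}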

Design: scheduling

In typical designs, a task has three states: running, ready and blocked. Most tasks are blocked, most of the time. Only one task per CPU is running. The ready list is usually short, two or three tasks at most.
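
One possible shape for the per-task bookkeeping, purely as an illustration (the field names are invented, not drawn from any specific RTOS):

#include <stdint.h>

/* Illustrative task control block: each task is in exactly one of the three
 * states, and the scheduler links READY tasks into the ready list. */
enum task_state { TASK_RUNNING, TASK_READY, TASK_BLOCKED };

struct task {
    enum task_state state;
    uint8_t         priority;     /* assumption: 0 is the highest priority */
    struct task    *next, *prev;  /* links in the ready list or a wait queue */
    void           *saved_sp;     /* stack pointer saved while not running */
};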

The real trick is designing the scheduler. Usually the data structure of the ready list in the scheduler is designed so that search, insertion and deletion require locking interrupts only for small periods of time, when looking at precisely defined parts of the list.

This means that other tasks can operate on the list asynchronously while it is being searched. A typical successful design is a bidirectional linked list of ready tasks, sorted by priority. Note that this is not fast to search, but it is deterministic. Most ready lists are only two or three entries long, so a sequential search is usually the fastest, because it requires so little set-up time.
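
An insertion into such a list might look like the following sketch, which reuses the task structure above and the same hypothetical interrupt-control routines; for brevity it keeps interrupts masked for the whole (very short) walk:

#include <stdint.h>

extern uint32_t irq_save(void);           /* as in the earlier sketch */
extern void     irq_restore(uint32_t flags);

static struct task *ready_head;           /* highest-priority ready task first */

void ready_insert(struct task *t)
{
    uint32_t flags = irq_save();          /* short, bounded critical section */

    struct task *p = ready_head, *prev = NULL;
    while (p && p->priority <= t->priority) {   /* sequential search by priority */
        prev = p;
        p = p->next;
    }
    t->next = p;
    t->prev = prev;
    if (p)    p->prev = t;
    if (prev) prev->next = t;
    else      ready_head = t;
    t->state = TASK_READY;

    irq_restore(flags);
}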

The critical response time, sometimes called the flyback time, is the time it takes to queue a new ready task and restore the state of the highest-priority task.

In a well-designed RTOS, readying a new task will take 3-20 instructions per ready queue entry, and restoration of the highest-priority ready task will take 5-30 instructions.

On a 20 MHz 68000 processor, task-switch times run about 20 microseconds with two tasks ready. Hundred-MIPS ARM CPUs switch in a few microseconds.

Design: task interfaces to each other

There is really only one problem that multitasking systems have to solve: tasks cannot use the same data or hardware at the same time. There are two notably successful designs for coping with this problem.

One design uses semaphores. Basically, a semaphore is either locked or unlocked. When it is locked, a queue of tasks waits for the semaphore.
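
A sketch of such a semaphore (names invented; the scheduler calls are only declared, and a real implementation would also protect these fields with a short interrupt-disabled section):

#include <stddef.h>

struct task;                                /* as sketched above */

struct semaphore {
    int          locked;                    /* 0 = unlocked, 1 = locked */
    struct task *wait_head;                 /* queue of tasks waiting here */
};

extern void block_current_task(struct task **queue);   /* enqueue caller, switch away */
extern struct task *dequeue_first(struct task **queue);
extern void make_ready(struct task *t);

void sem_take(struct semaphore *s)
{
    if (!s->locked)
        s->locked = 1;                      /* uncontended: just lock it */
    else
        block_current_task(&s->wait_head);  /* contended: wait in the queue */
}

void sem_give(struct semaphore *s)
{
    struct task *t = dequeue_first(&s->wait_head);
    if (t)
        make_ready(t);                      /* hand the semaphore straight to the waiter */
    else
        s->locked = 0;
}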

Problems with semaphore designs are well known: priority inversion and deadlocks.

In priority inversion, a high-priority task waits because a low-priority task holds a semaphore. A typical solution is to have the task that holds the semaphore run at the priority of the highest waiting task.
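
That solution is usually called priority inheritance. A sketch, assuming the semaphore records which task holds it (all names invented; nesting, error handling and hand-off to the next waiter are omitted):

#include <stdint.h>

struct task;                                           /* as sketched above */

struct owned_semaphore {
    struct task *owner;                                /* NULL when free */
};

extern struct task *current_task;
extern uint8_t task_priority(struct task *t);          /* current (maybe boosted) priority */
extern uint8_t task_base_priority(struct task *t);     /* priority the task normally has */
extern void set_task_priority(struct task *t, uint8_t prio);
extern void wait_for(struct owned_semaphore *s);       /* block until the owner releases */

void osem_take(struct owned_semaphore *s)
{
    if (s->owner == NULL) {
        s->owner = current_task;
    } else {
        if (task_priority(current_task) < task_priority(s->owner))
            set_task_priority(s->owner, task_priority(current_task));  /* boost the holder */
        wait_for(s);
    }
}

void osem_give(struct owned_semaphore *s)
{
    set_task_priority(s->owner, task_base_priority(s->owner));  /* drop any inherited boost */
    s->owner = NULL;
}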

In a deadlock, two tasks acquire two semaphores, but in the opposite order. This is usually solved by careful design, by implementing queues, or by having floored semaphores, which pass control of a semaphore to the higher-priority task on defined conditions.
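
The opposite-order case can be made concrete with a small sketch that reuses the semaphore operations above (sem_a and sem_b are hypothetical shared resources):

extern struct semaphore sem_a, sem_b;
extern void sem_take(struct semaphore *s);
extern void sem_give(struct semaphore *s);

void task_1(void)
{
    sem_take(&sem_a);        /* holds A ... */
    sem_take(&sem_b);        /* ... and now waits for B, which task_2 holds */
    /* ... use both resources ... */
    sem_give(&sem_b);
    sem_give(&sem_a);
}

void task_2(void)
{
    sem_take(&sem_b);        /* holds B ... */
    sem_take(&sem_a);        /* ... and now waits for A, which task_1 holds: deadlock */
    /* ... use both resources ... */
    sem_give(&sem_a);
    sem_give(&sem_b);
}

/* The careful-design fix is simply to make every task take the semaphores in
 * one agreed order (always A before B), so the circular wait cannot form. */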

The other solution is to have tasks send messages to each other. These have exactly the same problems: priority inversion occurs when a task is working on a low-priority message and ignores a higher-priority message in its in-box. Deadlocks happen when two tasks each wait for the other to respond.
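
A rough sketch of the message-passing alternative (names invented): each task owns an in-box, other tasks append to it, and the owner takes messages out one at a time. A real RTOS would protect the queue with a short interrupt-disabled section and block the receiver when the queue is empty.

#include <stddef.h>

struct message {
    struct message *next;
    int             type;       /* could carry a priority as well */
    void           *payload;
};

struct mailbox {
    struct message *head, *tail;
};

void mbox_send(struct mailbox *mb, struct message *m)
{
    m->next = NULL;
    if (mb->tail) mb->tail->next = m;
    else          mb->head = m;
    mb->tail = m;
    /* ... then wake the task that owns the mailbox (omitted) */
}

struct message *mbox_receive(struct mailbox *mb)
{
    struct message *m = mb->head;
    if (m) {
        mb->head = m->next;
        if (mb->head == NULL) mb->tail = NULL;
    }
    /* if m is NULL, a real system would block the caller here */
    return m;
}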

Although their real-time behavior is less crisp than that of semaphore systems, message-based systems usually unstick themselves, and are generally better behaved than semaphore systems.

Design: interrupt interface to the scheduler

Typically, the interrupt does a few things that it must do to keep the electronics happy, then it unlocks a semaphore blocking a driver task, or sends a message to a waiting driver task.
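
For example, a receive interrupt for a serial port might be split like this (all names, registers and calls are hypothetical):

#include <stdint.h>

struct semaphore;                         /* as sketched above */
extern struct semaphore uart_rx_sem;
extern void sem_give_from_isr(struct semaphore *s);   /* readies the waiting task */
extern void sem_take(struct semaphore *s);

extern volatile uint8_t UART_DATA;        /* hypothetical memory-mapped receive register */

static volatile uint8_t rx_byte;

void uart_rx_isr(void)                    /* runs with interrupts disabled */
{
    rx_byte = UART_DATA;                  /* keep the hardware happy: read the byte */
    sem_give_from_isr(&uart_rx_sem);      /* unblock the driver task and return */
}

void uart_driver_task(void)
{
    for (;;) {
        sem_take(&uart_rx_sem);           /* sleep until the ISR signals */
        /* process rx_byte at task level, e.g. push it into a line buffer */
    }
}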

Memory allocation

There are two problems.

Speed of allocation is important. A standard memory allocation scheme scans a linked list of indeterminate length to find a suitable free block. This is simply unacceptable: a memory allocation has to occur in a fixed amount of time.

Another problem is that free memory can fragment into unusably small pieces separated by blocks that are still in use. This can cause a program to stall, unable to get memory, even though there is theoretically enough available.

One solution is to have a linked LIFO list of fixed-size blocks of memory. This works astonishingly well for a simple system.
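
A sketch of the fixed-block scheme (names invented): the free blocks are threaded into a LIFO list, so both allocation and freeing are a couple of pointer operations and take constant time. There is no fragmentation because every block is the same size; a real RTOS would wrap both operations in a short critical section.

#include <stddef.h>
#include <stdint.h>

#define BLOCK_SIZE   64
#define BLOCK_COUNT  32

union block {
    union block *next;             /* link while the block is free */
    uint8_t      data[BLOCK_SIZE]; /* payload while the block is in use */
};

static union block  mem_pool[BLOCK_COUNT];
static union block *free_list;

void pool_init(void)
{
    for (size_t i = 0; i < BLOCK_COUNT; i++) {   /* thread all blocks onto the list */
        mem_pool[i].next = free_list;
        free_list = &mem_pool[i];
    }
}

void *pool_alloc(void)
{
    union block *b = free_list;        /* pop the head: constant time */
    if (b) free_list = b->next;
    return b;
}

void pool_free(void *p)
{
    union block *b = p;                /* push back on the head: constant time */
    b->next = free_list;
    free_list = b;
}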

Another solution is to have a binary buddy-block allocator. In this system, memory is allocated from a large block of memory whose size is a power of two. If the block is more than twice as large as desired, it is broken in two. One of the halves is then examined, and the process repeats until the block is just large enough.

All the buddies of a particular size are kept in a sorted linked list or tree. When a block is freed, it is compared to its buddy. If they are both free, they are combined and placed in the next-larger buddy-block list.
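
A much reduced sketch of the allocation side (names invented; for brevity this version keeps one unsorted free list per block size and only describes coalescing in a comment):

#include <stddef.h>
#include <stdint.h>

#define MIN_ORDER 5                        /* smallest block: 32 bytes */
#define MAX_ORDER 12                       /* whole pool:   4096 bytes */

struct free_block { struct free_block *next; };

static _Alignas(struct free_block) uint8_t buddy_pool[1u << MAX_ORDER];
static struct free_block *free_lists[MAX_ORDER + 1];   /* one list per power of two */

void buddy_init(void)
{
    free_lists[MAX_ORDER] = (struct free_block *)buddy_pool;  /* one free block: the whole pool */
    free_lists[MAX_ORDER]->next = NULL;
}

void *buddy_alloc(size_t size)
{
    /* smallest order whose block size holds the request */
    unsigned order = MIN_ORDER;
    while (order <= MAX_ORDER && ((size_t)1 << order) < size)
        order++;

    /* first non-empty list at or above that order */
    unsigned o = order;
    while (o <= MAX_ORDER && free_lists[o] == NULL)
        o++;
    if (o > MAX_ORDER)
        return NULL;                       /* nothing large enough is free */

    struct free_block *b = free_lists[o];
    free_lists[o] = b->next;

    /* split until the block is just large enough, keeping the unused halves free */
    while (o > order) {
        o--;
        struct free_block *half = (struct free_block *)((uint8_t *)b + ((size_t)1 << o));
        half->next = free_lists[o];
        free_lists[o] = half;
    }
    return b;
}

/* buddy_free(p, order) would compute the buddy's offset as
 *     ((uint8_t *)p - buddy_pool) ^ ((size_t)1 << order)
 * and, if that buddy is also free, remove it from its list, merge the two,
 * and repeat one order higher. */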

Example RTOSs: