Sun Microsystems, Inc.
9. Real-time Programming and Administration

Utilities That Control Scheduling: dispadmin(1M)

-l

    Lists the scheduler classes currently configured

-c

    Specifies the class whose parameters are to be displayed or changed

-g

    Gets the dispatch parameters for the specified class

-r

    Used with -g, specifies the time quantum resolution

-s

    Specifies a file where values can be located

A class-specific file containing the dispatch parameters can also be loaded at runtime. Use this file to establish a new set of priorities that replace the default values established at boot time. The file must specify its arguments in the same format that the -g option produces. Parameters for the RT class are described in rt_dptbl(4) and are listed in Example 9-1.

To add an RT class file to the system, the following modules must be present:

  • An rt_init() routine in the class module that loads the rt_dptbl(4)

  • An rt_dptbl(4) module that provides the dispatch parameters and a routine to return pointers to config_rt_dptbl

  • The dispadmin(1M) executable

The following steps install an RT class dispatch table:

  1. Load the class-specific module with the following command, where module_name is the class-specific module:

    # modload /kernel/sched/module_name
  2. Invoke the dispadmin command:

    # dispadmin -c RT -s file_name

    The file must describe a table with the same number of entries as the table that is being overwritten.
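For illustration, an input file for -s follows the format produced by dispadmin -c RT -g. The fragment below is a hypothetical, abbreviated sketch of that format: RES states the resolution in which the quanta are expressed (here milliseconds), and each subsequent line gives one time quantum with its priority level in a trailing comment.

```
# Real Time Dispatcher Configuration
RES=1000

# TIME QUANTUM                    PRIORITY
# (rt_quantum)                      LEVEL
       100                    #        0
       100                    #        1
#   ... one line per priority level, same count as the table being replaced ...
        10                    #       59
```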

Configuring Scheduling

Each scheduling class has an associated parameter table: rt_dptbl(4) for the real-time class and ts_dptbl(4) for the time-sharing class. These tables are configurable through a loadable module at boot time, or with dispadmin(1M) at runtime.

Dispatcher Parameter Table

The in-core table for real-time establishes the properties for RT scheduling. The rt_dptbl(4) structure consists of an array of parameters, struct rt_dpent_t, one for each of the n priority levels. The properties of a given priority level are specified by the ith parameter structure in the array, rt_dptbl[i].

A parameter structure consists of the following members, which are also described in the /usr/include/sys/rt.h header file.


rt_globpri

    The global scheduling priority associated with this priority level. The rt_globpri values cannot be changed with dispadmin(1M).


rt_quantum

    The length of the time quantum allocated to processes at this level, in ticks (see "Timestamp Interfaces"). The time quantum value is only a default or starting value for processes at a particular level. The time quantum of a real-time process can be changed by using the priocntl(1) command or the priocntl(2) system call.

Reconfiguring config_rt_dptbl

A real-time administrator can change the behavior of the real-time portion of the scheduler by reconfiguring the config_rt_dptbl at any time. One method is described in the rt_dptbl(4) man page, in the section titled "Replacing the rt_dptbl Loadable Module."

A second method for examining or modifying the real-time parameter table on a running system is through the dispadmin(1M) command. Invoking dispadmin(1M) for the real-time class enables retrieval of the current rt_quantum values in the current config_rt_dptbl configuration from the kernel's in-core table. When overwriting the current in-core table, the configuration file used for input to dispadmin(1M) must conform to the specific format described in the rt_dptbl(4) man page.

Following is an example of the rtdpent_t parameter entries, showing priority levels and their associated time quantum values, as they might appear in config_rt_dptbl[].

Example 9-1 RT Class Dispatch Parameters

rtdpent_t  rt_dptbl[] = {
	/* prilevel  Time quantum */
	100,    100,
	101,    100,
	102,    100,
	103,    100,
	104,    100,
	105,    100,
	106,    100,
	107,    100,
	108,    100,
	109,    100,
	110,    80,
	111,    80,
	112,    80,
	113,    80,
	114,    80,
	115,    80,
	116,    80,
	117,    80,
	118,    80,
	119,    80,
	120,    60,
	121,    60,
	122,    60,
	123,    60,
	124,    60,
	125,    60,
	126,    60,
	127,    60,
	128,    60,
	129,    60,
	130,    40,
	131,    40,
	132,    40,
	133,    40,
	134,    40,
	135,    40,
	136,    40,
	137,    40,
	138,    40,
	139,    40,
	140,    20,
	141,    20,
	142,    20,
	143,    20,
	144,    20,
	145,    20,
	146,    20,
	147,    20,
	148,    20,
	149,    20,
	150,    10,
	151,    10,
	152,    10,
	153,    10,
	154,    10,
	155,    10,
	156,    10,
	157,    10,
	158,    10,
	159,    10,
};

Memory Locking

Locking memory is one of the most important issues for real-time applications. In a real-time environment, a process must be able to guarantee continuous memory residence to reduce latency and to prevent paging and swapping.

This section describes the memory locking mechanisms available to real-time applications in SunOS.

Under SunOS, the memory residency of a process is determined by its current state, the total available physical memory, the number of active processes, and the processes' demand for memory. This residency is appropriate in a time-share environment, but it is often unacceptable for a real-time process. In a real-time environment, a process must be able to guarantee memory residence for all or part of itself to reduce its memory access and dispatch latency.

For real-time in SunOS, memory locking is provided by a set of library routines that allow a process running with superuser privileges to lock specified portions of its virtual address space into physical memory. Pages locked in this manner are exempt from paging until they are unlocked or the process exits.

The operating system has a system-wide limit on the number of pages that can be locked at any time. This is a tunable parameter whose default value is calculated at boot time. The default value is based on the number of page frames minus a percentage of those pages, currently set at ten percent.

Locking a Page

A call to mlock(3C) requests that one segment of memory be locked into the system's physical memory. The pages that make up the specified segment are faulted in and the lock count of each is incremented. Any page with a lock count greater than 0 is exempt from paging activity.

A particular page can be locked multiple times by multiple processes through different mappings. If two different processes lock the same page, the page remains locked until both processes remove their locks. However, within a given mapping, page locks do not nest: multiple calls to the locking interfaces on the same address by the same process are all removed by a single unlock request.

If the mapping through which a lock has been performed is removed, the memory segment is implicitly unlocked. When a page is deleted through closing or truncating the file, the page is also implicitly unlocked.

Locks are not inherited by a child process after a fork(2) call. If a process that has some memory locked forks a child, the child must perform a memory locking operation on its own behalf to lock its own pages. Otherwise, the child process incurs copy-on-write page faults, which are the usual penalties associated with forking a process.
