STREAMS Framework - Kernel Level
Because the STREAMS subsystem of UNIX® provides a framework on which communications services can be built, it is often called the STREAMS framework. This framework consists of the stream head and a series of utilities (put, putnext), kernel structures (mblk, dblk), and linkages (queues) that facilitate the interconnections between modules, drivers, and basic system calls. This chapter describes the STREAMS components from the kernel developer's perspective.
Overview of Streams in Kernel Space
Chapter 1, Overview of STREAMS describes a stream as a full-duplex processing and data transfer path between a STREAMS driver in kernel space and a process in user space. In the kernel, a stream consists of a stream head, a driver, and zero or more modules between the driver and the stream head.
The stream head is the end of the stream nearest the user process. All system calls made by user-level applications on a stream are processed by the stream head.
Messages are the containers in which data and control information is passed between the stream head, modules, and drivers. The stream head is responsible for translating the appropriate messages between the user application and the kernel. Messages are simply pointers to structures (mblk, dblk) that describe the data contained in them. Messages are categorized by type according to the purpose and priority of the message.
Queues are the basic elements by which the stream head, modules, and drivers are connected. Queues identify the open, close, put, and service entry points. Additionally, queues specify parameters and private data for use by modules and drivers, and are the repository for messages destined for deferred processing.
In the rest of this chapter, the word "modules" refers to modules, drivers, or multiplexers, except where noted.
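As an illustration of how queues identify the open, close, put, and service entry points, the following is a minimal sketch of the declarations a hypothetical module named xxstr might supply. The structure layouts follow qinit(9S), module_info(9S), and streamtab(9S); the module name, id number, packet sizes, and water marks are assumptions made for this example.

#include <sys/stream.h>
#include <sys/stropts.h>

/* Hypothetical module "xxstr": forward declarations of its entry points. */
static int xx_open(queue_t *, dev_t *, int, int, cred_t *);
static int xx_close(queue_t *, int, cred_t *);
static int xx_rput(queue_t *, mblk_t *);
static int xx_wput(queue_t *, mblk_t *);
static int xx_rsrv(queue_t *);
static int xx_wsrv(queue_t *);

/* Per-module parameters: id, name, packet sizes, and flow-control marks. */
static struct module_info xx_minfo = {
        0x6699,         /* mi_idnum: module id number (example value) */
        "xxstr",        /* mi_idname: module name */
        0,              /* mi_minpsz: minimum packet size */
        INFPSZ,         /* mi_maxpsz: maximum packet size */
        512,            /* mi_hiwat: high-water mark */
        128             /* mi_lowat: low-water mark */
};

/* Read-side and write-side queue initializations: put, service, open, close. */
static struct qinit xx_rinit = {
        xx_rput, xx_rsrv, xx_open, xx_close, NULL, &xx_minfo, NULL
};
static struct qinit xx_winit = {
        xx_wput, xx_wsrv, NULL, NULL, NULL, &xx_minfo, NULL
};

/* The streamtab ties the two sides together for the STREAMS framework. */
static struct streamtab xx_streamtab = {
        &xx_rinit, &xx_winit, NULL, NULL
};

The open and close entry points appear only in the read-side qinit because the framework calls them through the read queue.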
Stream Head
The stream head provides the interface between applications in user space and the rest of the stream in kernel space. The stream head is responsible for configuring the plumbing of the stream through open, close, push, pop, link, and unlink operations. It also translates user data into messages to be passed down the stream, and translates messages that arrive at the stream head into user data. Any characteristics of the stream that can be modified by the user application or the underlying stream are controlled by the stream head, which also informs users of data arrival and events such as error conditions.
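As a user-space sketch of the plumbing operations the stream head services, the fragment below opens a stream, pushes a module, pops it again, and closes the stream. The device path /dev/xxdrv and the module name xxstr are hypothetical.

#include <fcntl.h>
#include <stropts.h>
#include <unistd.h>

/* User-space sketch: the device path and module name are hypothetical. */
int
plumb_example(void)
{
        int fd;

        /* open() builds the stream: stream head plus the driver. */
        if ((fd = open("/dev/xxdrv", O_RDWR)) < 0)
                return (-1);

        /* I_PUSH asks the stream head to insert a module below it. */
        if (ioctl(fd, I_PUSH, "xxstr") < 0) {
                (void) close(fd);
                return (-1);
        }

        /* I_POP removes the module nearest the stream head. */
        (void) ioctl(fd, I_POP, 0);

        /* close() dismantles the stream. */
        (void) close(fd);
        return (0);
}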
If an application makes a system call with a STREAMS file descriptor, the stream head routines are invoked, resulting in data copying, message generation, or control operations. Only the stream head can copy data between the user space and kernel space. All other parts of the stream pass data by way of messages and are thus isolated from direct interaction with users of the stream.
Kernel-Level Messages
Chapter 3, STREAMS Application-Level Mechanisms discusses messages from the application perspective. The following sections discuss message types, message structure and linkage; how messages are sent and received; and message queues and priority from the kernel perspective.
Message Types
The STREAMS message types differ in their purposes and queueing priorities. The message types are briefly described and classified, according to their queueing priority, in Table 7-1 and Table 7-2. A detailed discussion of message types is in Chapter 8, STREAMS Kernel-Level Mechanisms.
Some message types are defined as high-priority types. Ordinary (also called normal) messages have a priority of 0, or a priority (also called a band) from 1 to 255.
Table 7-1 Ordinary Messages, Description of Communication Flow

Table 7-2 High-Priority Messages, Description of Communication Flow

Message Type | Direction
---|---
M_COPYIN | Upstream
M_COPYOUT | Upstream
M_ERROR | Upstream
M_FLUSH | Bidirectional
M_HANGUP | Upstream
M_UNHANGUP | Upstream
M_IOCACK | Upstream
M_IOCDATA | Downstream
M_IOCNAK | Upstream
M_PCPROTO | Bidirectional
M_PCSIG | Upstream
M_READ | Downstream
M_START | Downstream
M_STARTI | Downstream
M_STOP | Downstream
M_STOPI | Downstream
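As one example of how a module recognizes and reacts to a high-priority type from Table 7-2, the sketch below shows a hypothetical write-side put routine handling M_FLUSH: it flushes its own queue and, by convention, clears the FLUSHW bit and turns the message around with qreply(9F) so the read side is flushed as well. Other message types are simply passed along. The routine name is an assumption for this example.

#include <sys/stream.h>

/* Hypothetical write-side put routine: handle M_FLUSH, pass the rest on. */
static int
xx_wput(queue_t *q, mblk_t *mp)
{
        switch (mp->b_datap->db_type) {
        case M_FLUSH:
                /* The first data byte of an M_FLUSH says which side(s) to flush. */
                if (*mp->b_rptr & FLUSHW)
                        flushq(q, FLUSHALL);
                if (*mp->b_rptr & FLUSHR) {
                        flushq(RD(q), FLUSHALL);
                        *mp->b_rptr &= ~FLUSHW; /* don't flush the write side again */
                        qreply(q, mp);          /* turn it around, send upstream */
                } else {
                        freemsg(mp);
                }
                break;
        default:
                putnext(q, mp);                 /* not handled here: pass along */
                break;
        }
        return (0);
}

Handling M_FLUSH this way keeps both halves of the module's queue pair consistent when a flush request arrives from either direction.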
Message Structure
A STREAMS message in its simplest form contains three elements--a message block, a data block, and a data buffer. The data buffer is the location in memory where the actual data of the message is stored. The data block (datab(9S)) describes the data buffer--where it starts, where it ends, the message type, and how many message blocks reference it. The message block (msgb(9S)) describes the data block and how the data is used.
The data block has a typedef of dblk_t and has the following public elements:
struct datab {
        unsigned char   *db_base;       /* first byte of buffer */
        unsigned char   *db_lim;        /* last byte+1 of buffer */
        unsigned char   db_ref;         /* msg count ptg to this blk */
        unsigned char   db_type;        /* msg type */
};
typedef struct datab dblk_t;
The datab structure specifies the data buffer's fixed limits (db_base and db_lim), a reference count field (db_ref), and the message type field (db_type). db_base points to the address where the data buffer starts, db_lim points one byte beyond where the data buffer ends, and db_ref maintains a count of the number of message blocks sharing the data buffer.
Caution - db_base, db_lim, and db_ref should not be modified directly. db_type can be modified, but only under carefully controlled conditions, such as changing the message type to reuse the message block.
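For example, rather than manipulating db_ref itself, a module that needs to modify the data in a possibly shared block can take a private copy first. The sketch below shows one way to do this with copymsg(9F); the helper name is made up for illustration.

#include <sys/stream.h>

/*
 * Sketch: obtain a writable copy of a message whose data block may be
 * shared (db_ref > 1) before modifying the data in place.
 */
static mblk_t *
make_writable(mblk_t *mp)
{
        mblk_t *nmp;

        if (mp->b_datap->db_ref > 1) {
                /* copymsg(9F) copies both the message blocks and the data buffers. */
                if ((nmp = copymsg(mp)) == NULL)
                        return (NULL);          /* allocation failed; caller decides */
                freemsg(mp);                    /* drop our reference to the shared copy */
                return (nmp);
        }
        return (mp);                            /* already private; safe to modify */
}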
In a simple message, a single message block references a single data block. The message block's read and write pointers identify where the message data begins and ends in the data buffer; these addresses must lie within the confines of the buffer, so that db_base <= b_rptr <= b_wptr <= db_lim. For ordinary messages, a priority band can be indicated, and this band is used if the message is queued.
Figure 7-1 shows the linkages between msgb, datab, and the data buffer in a simple message.
Figure 7-1 Simple Message Referencing the Data Block
The message block (see msgb(9S)) has a typedef of mblk_t and has the following public elements:
struct msgb {
        struct msgb     *b_next;        /* next msg in queue */
        struct msgb     *b_prev;        /* previous msg in queue */
        struct msgb     *b_cont;        /* next msg block of message */
        unsigned char   *b_rptr;        /* 1st unread byte in bufr */
        unsigned char   *b_wptr;        /* 1st unwritten byte in bufr */
        struct datab    *b_datap;       /* data block */
        unsigned char   b_band;         /* message priority */
        unsigned short  b_flag;         /* message flags */
};
The STREAMS framework uses the b_next and b_prev fields to link messages into queues. b_rptr and b_wptr specify the current read and write positions, respectively, in the data buffer described by the data block that b_datap points to. The b_rptr and b_wptr fields are maintained by drivers and modules.
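The following sketch shows a typical use of these pointers and of the b_cont linkage: it walks the blocks of one message and totals the readable bytes in each (for a message made up of M_DATA blocks, this is the value msgdsize(9F) reports). The helper name is hypothetical.

#include <sys/stream.h>

/* Sketch: total the readable bytes in every block of one message. */
static size_t
message_length(const mblk_t *mp)
{
        size_t len = 0;
        const mblk_t *bp;

        /* b_cont links the blocks of one message; b_next links separate messages. */
        for (bp = mp; bp != NULL; bp = bp->b_cont)
                len += bp->b_wptr - bp->b_rptr; /* readable bytes in this block */

        return (len);
}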
The field b_band specifies a priority band where the message is placed when it is queued using the STREAMS utility routines. This field has no meaning for high-priority messages and is set to zero for these messages. When a message is allocated using allocb(9F), the b_band field is initially set to zero. Modules and drivers can set this field to a value from 0 to 255 depending on the number of priority bands needed. Lower numbers represent lower priority. The kernel incurs overhead in maintaining bands if nonzero numbers are used.
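Putting these fields together, the sketch below allocates a message with allocb(9F), copies data into its buffer, advances b_wptr past the data, and assigns a priority band. The helper name, source buffer, and band value are assumptions for illustration.

#include <sys/stream.h>
#include <sys/ddi.h>
#include <sys/sunddi.h>

/*
 * Sketch: build an ordinary M_DATA message from a local buffer and give
 * it a priority band.
 */
static mblk_t *
build_data_msg(const unsigned char *src, size_t len, uchar_t band)
{
        mblk_t *mp;

        /* allocb(9F) returns a message with b_rptr == b_wptr == db_base. */
        if ((mp = allocb(len, BPRI_MED)) == NULL)
                return (NULL);

        bcopy(src, mp->b_wptr, len);    /* copy data into the buffer */
        mp->b_wptr += len;              /* advance write pointer past the data */
        mp->b_datap->db_type = M_DATA;  /* allocb already sets M_DATA by default */
        mp->b_band = band;              /* nonzero band => banded queueing */

        return (mp);
}

A nonzero b_band takes effect only when the message is placed on a queue by the STREAMS utility routines, such as putq(9F).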
Caution - Modules and drivers must not modify b_next, b_prev, or b_datap. The first two fields are maintained by utility routines such as putq(9F) and getq(9F). Modules and drivers can modify b_cont, b_rptr, b_wptr, b_band (for ordinary message types), and b_flag.
The SunOS environment places b_band in the msgb structure; some other STREAMS implementations place b_band in the datab structure. The SunOS implementation is more flexible because each message block is independent: when a data block is shared, the b_band values of the messages referencing it can differ in the SunOS implementation, but not in those other implementations.
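To illustrate the difference, the sketch below uses dupmsg(9F) to create a second message block that shares the original data block (raising db_ref) and then assigns the two blocks different b_band values, which is possible only because b_band lives in the msgb. The helper name and band values are made up for this example.

#include <sys/stream.h>

/*
 * Sketch: dupmsg(9F) creates a second message block that shares the same
 * data block (db_ref is incremented).  Because b_band is kept in the
 * message block, the two copies can be queued at different priorities.
 */
static int
dup_with_bands(mblk_t *mp, mblk_t **copyp)
{
        mblk_t *dup;

        if ((dup = dupmsg(mp)) == NULL)
                return (-1);            /* allocation failure */

        mp->b_band = 0;                 /* original stays in the normal band */
        dup->b_band = 1;                /* duplicate uses priority band 1 */

        *copyp = dup;
        return (0);
}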