Effects of the -public Option and NFS URLs When Mounting

Using the -public option can create conditions that cause a mount to fail. Combining the option with an NFS URL can further complicate the result. The following list describes how a file system is mounted when you use these options; example mount commands follow the list.

  • Public option with NFS URL - Forces the use of the public file handle. The mount fails if the public file handle is not supported.

  • Public option with regular path - Forces the use of the public file handle. The mount fails if the public file handle is not supported.

  • NFS URL only - Uses the public file handle if it is enabled on the NFS server. If the mount fails with the public file handle, the mount is retried with the MOUNT protocol.

  • Regular path only - Does not use the public file handle. The MOUNT protocol is used.
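
The following commands sketch each of these cases. The server name bee, the exported path, and the mount point are placeholders rather than part of any particular configuration.

# mount -F nfs -o public nfs://bee/export/share/local /mnt/temp
# mount -F nfs -o public bee:/export/share/local /mnt/temp
# mount -F nfs nfs://bee/export/share/local /mnt/temp
# mount -F nfs bee:/export/share/local /mnt/temp

The first two commands force the public file handle and fail if the server does not support it. The third command tries the public file handle and falls back to the MOUNT protocol. The last command uses only the MOUNT protocol.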

Client-Side Failover

By using client-side failover, an NFS client can switch to another server if the server that supports a replicated file system becomes unavailable. The file system can become unavailable if the server that it is connected to crashes, if the server is overloaded, or if a network fault occurs. Under these conditions, the failover is normally transparent to the user. After the replicated mount is established, failover can occur at any time without disrupting the processes that are running on the client.

Failover requires that the file system be mounted read-only. The file systems must be identical for the failover to occur successfully. See "What Is a Replicated File System?" for a description of what makes a file system identical. A static file system or one that is not changed often is the best candidate for failover.

You cannot use file systems that are mounted by using CacheFS with failover. Extra information is stored for each CacheFS file system. This information cannot be updated during failover, so only one of these two features can be used when mounting a file system.

The number of replicas that need to be established for each file system depends on many factors. In general, the ideal is to have a minimum of two servers, with each supporting multiple subnets, rather than to have a unique server on each subnet. Because the mount process checks each server in the list, listing more servers makes each mount slower.
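
For example, a read-only failover mount that lists two hypothetical replica servers, bee and wasp, might look like the following. Both servers are assumed to export an identical /export/share/local file system.

# mount -F nfs -o ro bee,wasp:/export/share/local /usr/local/shared

If bee stops responding, the client can remap the active files to wasp without disrupting the processes that use them.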

Failover Terminology

To fully comprehend the process, you need to understand two terms.

  • failover - The process of selecting a server from a list of servers that support a replicated file system. Normally, the next server in the sorted list is used, unless it fails to respond.

  • remap - Making use of a new server. Through normal use, the clients store the path name for each active file on the remote file system. During the remap, these path names are evaluated to locate the files on the new server.

What Is a Replicated File System?

For the purposes of failover, a file system can be called a replica when each file is the same size and has the same vnode type as the original file system. Permissions, creation dates, and other file attributes are not considered. If the file size or vnode types are different, the remap fails and the process hangs until the old server becomes available.

You can maintain a replicated file system by using rdist, cpio, or another file transfer mechanism. Because updating the replicated file systems causes inconsistency, follow these suggestions for best results (a sample update is sketched after the list):

  • Rename the old version of the file before installing a new one.

  • Run the updates at night when client usage is low.

  • Keep the updates small.

  • Minimize the number of copies.
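
A minimal sketch of an update that follows the first suggestion, assuming the replica is reachable through the automounter at /net/wasp and that index.html is the file being refreshed:

# cd /net/wasp/export/share/local
# mv index.html index.html.old
# cp /export/share/local/index.html index.html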

Failover and NFS Locking

Some software packages require read locks on files. To prevent these products from breaking, read locks on read-only file systems are allowed, but are visible to the client side only. The locks persist through a remap because the server doesn't "know" about them. Because the files should not change, you do not need to lock the file on the server side.

Large Files

Starting with the 2.6 release, Solaris supports files that are over 2 Gbytes. By default, UFS file systems are mounted with the -largefiles option to support this functionality. Previous releases cannot handle files of this size. See "How to Disable Large Files on an NFS Server" for instructions.

If the file system on the server is mounted with the -largefiles option, a Solaris 2.6 NFS client needs no changes to access a large file. However, not all Solaris 2.6 commands can handle these large files. See largefile(5) for a list of the commands that can handle large files. Clients that cannot support the NFS version 3 protocol with the large file extensions cannot access any large files. Although clients that run the Solaris 2.5 release can use the NFS version 3 protocol, large file support was not included in that release.
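
The option is set when the UFS file system is mounted on the server. In the following sketch, the device name and mount point are placeholders; because -largefiles is the default, the option is shown only for clarity, and mounting with -nolargefiles instead refuses large files.

# mount -F ufs -o largefiles /dev/dsk/c0t3d0s1 /export/home2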

How NFS Server Logging Works

NFS server logging provides records of NFS reads and writes, as well as operations that modify the file system. This data can be used to track access to information. In addition, the records can provide a quantitative way to measure interest in the information.

When a file system with logging enabled is accessed, the kernel writes raw data into a buffer file. This data includes the following:

  • A timestamp

  • The client IP address

  • The UID of the requester

  • The file handle of the file or directory object that is being accessed

  • The type of operation that occurred

The nfslogd daemon converts this raw data into ASCII records that are stored in log files. During the conversion, the IP addresses are resolved to host names and the UIDs to login names if the name service that is enabled can find matches. The file handles are also converted into path names. To accomplish the conversion, the daemon tracks the file handles and stores the information in a separate file handle-to-path table. That way, the path does not have to be re-identified each time a file handle is accessed. Because no changes to the mappings are made in the file handle-to-path table if nfslogd is turned off, you must keep the daemon running.
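
Logging is enabled for each shared file system. As a sketch, assuming the default global tag in /etc/nfs/nfslog.conf is acceptable and /export/ftp is the file system to be monitored, the share and the daemon could be started as follows.

# share -F nfs -o ro,log=global /export/ftp
# /usr/lib/nfs/nfslogd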

How the WebNFS Service Works

The WebNFS service makes files in a directory available to clients by using a public file handle. A file handle is an address that is generated by the kernel that identifies a file for NFS clients. The public file handle has a predefined value, so the server does not need to generate a file handle for the client. The ability to use this predefined file handle reduces network traffic by eliminating the MOUNT protocol and should accelerate processes for the clients.

By default, the public file handle on an NFS server is established on the root file system. This default provides WebNFS access to any clients that already have mount privileges on the server. You can change the public file handle to point to any file system by using the share command.
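
For example, to move the public file handle from the root file system to the /export/ftp file system (the file system used in the NFS URL example later in this section), share that file system with the public option.

# share -F nfs -o ro,public /export/ftp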

When the client has the file handle for the file system, a LOOKUP is run to determine the file handle for the file to be accessed. The NFS protocol allows the evaluation of only one path name component at a time. Each additional level of directory hierarchy requires another LOOKUP. A WebNFS server can evaluate an entire path name with a single transaction, called multicomponent lookup, when the LOOKUP is relative to the public file handle. With multicomponent lookup, the WebNFS server can deliver the file handle to the desired file without having to exchange the file handles for each directory level in the path name.

In addition, an NFS client can initiate concurrent downloads over a single TCP connection. This connection provides quick access without the additional load on the server that is caused by setting up multiple connections. Although Web browser applications support concurrent downloading of multiple files, each file has its own connection. By using one connection, the WebNFS software reduces the overhead on the server.

If the final component in the path name is a symbolic link to another file system, the client can access the file if the client already has access through normal NFS activities.

Normally, an NFS URL is evaluated relative to the public file handle. The evaluation can be changed to be relative to the server's root file system by adding an additional slash to the beginning of the path. For example, the following two NFS URLs are equivalent if the public file handle has been established on the /export/ftp file system.

nfs://server/junk
nfs://server//export/ftp/junk

How WebNFS Security Negotiation Works

The Solaris 8 release includes a new protocol so a WebNFS client can negotiate a selected security mechanism with a WebNFS server. The new protocol uses security negotiation multicomponent lookup, which is an extension to the multicomponent lookup that was used in earlier versions of the WebNFS protocol.

The WebNFS client initiates the process by making a regular multicomponent lookup request by using the public file handle. Because the client has no knowledge of how the path is protected by the server, the default security mechanism is used. If the default security mechanism is not sufficient, the server replies with an AUTH_TOOWEAK error, indicating that the default mechanism is not valid and the client needs to use a stronger one.

When the client receives the AUTH_TOOWEAK error, it sends a request to the server to determine which security mechanisms are required. If the request succeeds, the server responds with an array of security mechanisms that are required for the specified path. Depending on the size of the array of security mechanisms, the client might have to make more requests to obtain the complete array. If the server does not support WebNFS security negotiation, the request fails.

After a successful request, the WebNFS client selects the first security mechanism from the array that it supports. The client then issues a regular multicomponent lookup request by using the selected security mechanism to acquire the file handle. All subsequent NFS requests are made by using the selected security mechanism and the file handle.
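
The negotiation is driven by how the server shares the path rather than by client configuration. As a sketch, a server that requires Diffie-Hellman authentication for a hypothetical /export/secure file system could share it as follows; a client that starts with the default AUTH_SYS mechanism then receives AUTH_TOOWEAK and renegotiates as described above.

# share -F nfs -o sec=dh,ro /export/secure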

 
 
 