Naming schemes
Simple, efficient naming schemes make the difference between a filesystem that is well organized and a pleasure to use, and one that you are constantly fighting against. In this section, we'll look at ways of using mount points and symbolic links to create simple, consistent naming schemes on all NFS clients. NFS provides the mechanism for making distributed filesystems transparent to the user, but it has no inherent guidelines for creating filesystem hierarchies that are easy to use and easy to manage. There are few global rules; each network adopts conventions based on the number of servers, the kinds of files handled by those servers, and any peculiar naming requirements of locally developed or third-party software.

Note that this section assumes you will not be using the automounter (see "The Automounter"). You are strongly advised to use the automounter, because every issue raised and solved here is solved far more easily with it.

As a system administrator, you should first decide how the various NFS fileservers fit together on a client before assigning filesystem names and filling them with software or users. Here are some ideas and suggestions for choosing NFS naming schemes:

- Avoid NFS mounts on directories directly under the root (/) directory of each NFS client. If an NFS server crashes, any attempt to access the mounted directory hangs the application, even if the application has no interest in the NFS mount point itself. This can happen when an application invokes the library equivalent of the pwd command: getcwd().[9]
[9]The getcwd() routine builds the pathname of the current working directory by searching upward via the ".." directory, reading each parent directory to find the entry whose file ID number matches that of the current working directory. Getting a file ID requires invoking the stat() system call on the directory. If one of those directories is served by an NFS server that is unavailable, then stat() -- and hence getcwd() and the application -- will hang indefinitely.
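To make the failure mode concrete, here is a rough Bourne shell rendition of that upward search (our own illustrative sketch, not the actual libc code). Every ls -di below stands in for the stat() call that can hang:

#!/bin/sh
# Illustrative sketch of the getcwd() walk; the real getcwd() also
# compares device numbers, which this sketch omits. Each "ls -di"
# stats a directory: if that directory lives on a dead NFS server,
# the loop -- like getcwd() -- hangs right there.
dir=. path=
while :; do
    ino=`ls -di "$dir" | awk '{print $1}'`       # file ID of current dir
    par=`ls -di "$dir/.." | awk '{print $1}'`    # file ID of its parent
    [ "$ino" = "$par" ] && break                 # "." == ".." only at /
    # read the parent directory, looking for the entry with our file ID
    name=`ls -ia "$dir/.." | awk '$1 == '"$ino"' && $2 != "." && $2 != ".." {print $2; exit}'`
    path=/$name$path
    dir=$dir/..
done
echo "${path:-/}"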
- Pick a common directory on each client under which you will mount each user's home directory. For example, if you pick /users, then each user's home directory is accessed via the /users/username naming scheme.[10]
[10]The example uses /users and not /home, because the automounter in Solaris reserves /home. While you can modify each Solaris client to remove the reservation, doing that on every client is tedious. A common error is to use vfstab or the mount command to mount onto /home; if the automounter has reserved /home, things will fail in odd ways.
This makes it easier to deal with servers that have several filesystems of home directories. The disadvantage of this approach is that it requires a larger /etc/vfstab file, with one entry for each user's home directory (a sample set of entries appears after this list). If you use the NFS automounter, this naming scheme is more easily managed than a hostname-oriented one (and the automounter has a /home/username scheme preconfigured). Directories that follow any regular naming scheme are easily managed by the automounter, as discussed in "The Automounter".

- Do not let the physical location of the files on the server dictate the pathnames used on the client. For example, if the software tools directory is on wahoo:/export/home/toolbox, then instead of mounting it onto each client's /export/home/toolbox directory, use something more user friendly, like /software/toolbox:
# mount wahoo:/export/home/toolbox /software/toolbox
Normally you don't want people running applications on hosts that are also NFS servers. However, if you allow this, and if you want users on the NFS server to be able to access the toolbox as /software/toolbox, then you can either create a symbolic link from /software/toolbox to /export/home/toolbox, or use the loopback filesystem in Solaris to accomplish the same thing without the overhead of a symbolic link:
# mount -F lofs /export/home/toolbox /software/toolbox
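The symbolic link alternative is a one-time setup on the NFS server itself; a minimal sketch, assuming /software does not already exist there:

# mkdir -p /software
# ln -s /export/home/toolbox /software/toolbox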
- Keep growth in mind. Having a single third-party software filesystem may be the most effective (or only) solution immediately, but over the next year you may need to add a second or third filesystem to make room for more tools. To provide adequate performance, you may want to put each filesystem on a different server, distributing the load. If you choose a naming scheme that cannot be extended, you will end up renaming things later on and having to support the "old style" names.
Before: single tools depository
# mount toolbox:/export/home/tools /software/tools

After: multiple filesystems
# mount toolbox:/export/home/epubs /software/tools/epubs
# mount backpack:/export/home/case /software/tools/cae
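For the /users/username scheme described above, each NFS client's /etc/vfstab carries one line per user. Here is a sketch with hypothetical usernames jan and raj, whose home directories live on server wahoo; the vfstab fields are device to mount, device to fsck, mount point, FS type, fsck pass, mount at boot, and mount options:

wahoo:/export/home/jan  -  /users/jan  nfs  -  yes  rw,hard,intr
wahoo:/export/home/raj  -  /users/raj  nfs  -  yes  rw,hard,intr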
Solving the /usr/local puzzle
Let's assume you have a network with many different kinds of workstations: SPARC workstations, PowerPC-based workstations, Unix PCs, and so on. Of course, each kind of workstation has its own set of executables. The executables may be built from the same source files, but you need a different binary for each machine architecture. How do you arrange the filesystem so that each system has a /usr/local/bin directory (and, by extension, other executable directories) that contains only the executables appropriate for its architecture? How do you "hide" the executables that aren't appropriate, so there's no chance that a user will mistakenly try to execute them? This is the /usr/local puzzle: creating an "architecture-neutral" executable directory.

Implementing an architecture-neutral /usr/local/bin is probably one of the first challenges posed to the system administrator of a heterogeneous network. Everybody wants the standard set of tools, such as emacs, PostScript filters, mail-pretty printers, and the requisite telephone list utility. Ideally, there should be one bin directory for each architecture, and when a user looks in /usr/local/bin on any machine, he or she should find the proper executables.

Hiding the machine architecture is a good job for symbolic links. One solution is to name the individual binary directories with the machine type as a suffix and then mount the proper one on /usr/local/bin:

On server toolbox:
# cd /export/home/local
# ls
bin.mips bin.sun3 bin.sun4 bin.vax

On client:
# mount toolbox:/export/home/local/bin.`arch` /usr/local/bin
The mount command determines the architecture of the local host and grabs the correct binary directory from the server. This scheme is sufficient if you only have binaries in your local depository, but most sites add manual pages, source code, and other ASCII files that are shared across client architectures. There is no need to maintain multiple copies of these files. To accommodate a mixture of shared ASCII and binary files, use two mounts of the same filesystem: the first mount sets up the framework of directories, and puts the shared file directories in their proper place. The second mount deposits the proper binary directory on top of /usr/local/bin:
On server toolbox:
# cd /export/home/local
# ls
bin
bin.mips bin.sun3 bin.sun4 bin.vax man share src

On client:
# mount toolbox:/export/home/local /usr/local
# mount toolbox:/export/home/local/bin.`arch` /usr/local/bin
At first glance, the previous example appears to violate the NFS rule prohibiting the export of both a directory and any of its subdirectories. However, there is only one exported filesystem on server toolbox, namely /export/home. The clients mount different parts of this exported filesystem on top of one another; NFS allows a client to mount any part of an exported filesystem, on any directory.

To save disk space with the two-mount approach, populate /export/home/local/bin on the server with the executables for the server's own architecture, and make the matching bin.arch directory a symbolic link to bin (see the sketch at the end of this section). This allows clients of the same architecture as the server to get by with only one mount.

If you keep all executables -- scripts and compiled applications -- in the bin directories, you still have a problem with duplication. At some sites, scripts may account for more than half of the tools in /usr/local/bin, and having to copy them into each architecture-specific bin directory makes this solution less pleasing. A more robust solution is to divide shell scripts and compiled executables into two directories: scripts go in /usr/local/share, while compiled executables live in the familiar /usr/local/bin. This makes share a peer of the /usr/local/man and src directories, both of which contain architecture-neutral ASCII files. To adapt to the fully architecture-neutral /usr/local/bin, users need to put both /usr/local/bin and /usr/local/share in their search paths, a small price to pay for the guarantee that all tools are accessible from all systems.

There is one problem with mounting one filesystem on top of another: if the server for these filesystems goes down, you will not be able to unmount them until the server recovers. When you unmount a filesystem, the unmount operation gets information about all of the directories above the mount point. If the filesystem is not mounted on top of another NFS filesystem, this isn't a problem: all of the directory information is on the NFS client. The hierarchy of mounts used in the /usr/local/bin example, however, presents a problem: one of the directories that the unmount operation needs to check is located on the server that crashed. An attempt to unmount /usr/local/bin hangs because it tries to get information about the /usr/local mount point -- and the server for that mount point is the one that crashed. Similarly, an attempt to unmount the /usr/local filesystem fails because the /usr/local/bin directory is in use: it has a filesystem mounted on it.
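When the server is up, the layering simply dictates the order of operations: the mount on top must come off first.

# umount /usr/local/bin
# umount /usr/local

Reversing the two umount commands fails, because /usr/local stays busy for as long as /usr/local/bin is mounted on top of it.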
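Finally, to close the loop on the disk-space optimization mentioned earlier: assuming for illustration that server toolbox is a sun4 and that /export/home/local/bin already holds the sun4 executables, the link takes one command:

# cd /export/home/local
# ln -s bin bin.sun4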