Publicly available diagnostics
Only a handful of publicly available NFS diagnostic tools exist at the time of this writing. The ethereal/tethereal network analyzer introduced in "Network Diagnostic and Administrative Tools" provides detailed information for diagnosis of NFS problems at the protocol level. The NFSWATCH utility is mainly used to monitor NFS traffic over the network. The nfsbug and SATAN utilities are used to report potential security problems on NFS servers.

ethereal / tethereal
As described in "Network Diagnostic and Administrative Tools", ethereal/tethereal can be used to capture network traffic and decode it to a great level of detail. Since ethereal/tethereal can decode NFS Version 2 and NFS Version 3 packets, it can be used to debug NFS communication, permissions, performance, and data corruption problems. It is very similar in functionality to snoop, but it provides powerful filtering and is available for a diverse set of platforms where snoop is not. Consider the example presented in the previous snoop section, where the NFS client rome attempts to access the contents of the filesystems exported by the server zeus through the /net automounter path:

rome% ls -la /net/zeus/export
total 5
dr-xr-xr-x   3 root     root           3 Jul 31 22:51 .
dr-xr-xr-x   2 root     root           2 Jul 31 22:40 ..
drwxr-xr-x   3 root     other        512 Jul 28 16:48 eng
dr-xr-xr-x   1 root     root           1 Jul 31 22:51 home
rome% ls /net/zeus/export/home
/net/zeus/export/home: Permission denied
The network traffic is captured into the /tmp/ethereal.cap file concurrently with the operation. Note that only traffic between rome and zeus is captured:
rome# tethereal -w /tmp/ethereal.cap host rome and host zeus
46 ^C
rome# tethereal -r /tmp/ethereal.cap
  1 0.000000 rome -> zeus PORTMAP V2 GETPORT Call XID 0x398fd3ea
  2 0.003138 zeus -> rome PORTMAP V2 GETPORT Reply XID 0x398fd3ea
  3 0.003328 rome -> zeus NFS V3 NULL Call XID 0x398fd3eb
  4 0.004613 zeus -> rome NFS V3 NULL Reply XID 0x398fd3eb
  5 0.005823 rome -> zeus PORTMAP V2 GETPORT Call XID 0x398fca35
  6 0.008871 zeus -> rome PORTMAP V2 GETPORT Reply XID 0x398fca35
  7 0.009823 rome -> zeus TCP 49699 > 33168 [SYN] Seq=1251769928 Ack=0 Win=24820 Len=0
  8 0.011067 zeus -> rome TCP 33168 > 49699 [SYN, ACK] Seq=3939269366 Ack=1251769929 Win=24820 Len=0
  9 0.011100 rome -> zeus TCP 49699 > 33168 [ACK] Seq=1251769929 Ack=3939269367 Win=24820 Len=0
 10 0.011339 rome -> zeus MOUNT V1 EXPORT Call XID 0x398f20d9
 11 0.012102 zeus -> rome TCP 33168 > 49699 [ACK] Seq=3939269367 Ack=1251769973 Win=24776 Len=0
 12 0.018302 zeus -> rome MOUNT V1 EXPORT Reply XID 0x398f20d9
 13 0.018332 rome -> zeus TCP 49699 > 33168 [ACK] Seq=1251769973 Ack=3939269463 Win=24820 Len=0
 14 0.018588 rome -> zeus TCP 49699 > 33168 [FIN, ACK] Seq=1251769973 Ack=3939269463 Win=24820 Len=0
 15 0.019245 zeus -> rome TCP 33168 > 49699 [ACK] Seq=3939269463 Ack=1251769974 Win=24820 Len=0
 16 0.020104 zeus -> rome TCP 33168 > 49699 [FIN, ACK] Seq=3939269463 Ack=1251769974 Win=24820 Len=0
 17 0.020143 rome -> zeus TCP 49699 > 33168 [ACK] Seq=1251769974 Ack=3939269464 Win=24820 Len=0
 18 0.020661 rome -> zeus PORTMAP V2 GETPORT Call XID 0x398f0440
 19 0.024550 zeus -> rome PORTMAP V2 GETPORT Reply XID 0x398f0440
 20 0.024731 rome -> zeus MOUNT V3 NULL Call XID 0x398f0441
 21 0.026323 zeus -> rome MOUNT V3 NULL Reply XID 0x398f0441
 22 0.026881 rome -> zeus MOUNT V3 MNT Call XID 0x398f0442
 23 0.179757 zeus -> rome MOUNT V3 MNT Reply XID 0x398f0442
The explanation given in the snoop section describing each packet applies to the tethereal capture as well. The main difference is that tethereal lists only the XID next to the operation type, which is less intuitive than expanding the arguments to the call as snoop does. We suspect this will be addressed in the future. The reason for the failure is not obvious from this output format alone. Fortunately, tethereal has extensive filtering capabilities, and we can request all mount operations that failed. Using the mount.status filter, we determine that packet 23 returned a failure. We can then print the protocol tree for packet 23 alone and verify that it indeed failed with ERR_ACCESS:
rome# tethereal -r /tmp/ethereal.cap -R "mount.status != 0"
 23 0.179757 zeus -> rome MOUNT V3 MNT Reply XID 0x398f0442
rome# tethereal -r /tmp/ethereal.cap -V -R "frame.number == 23"
...
Remote Procedure Call
    XID: 0x398f0442 (965674050)
    Message Type: Reply (1)
    Program: MOUNT (100005)
    Program Version: 3
    Procedure: MNT (1)
    Reply State: accepted (0)
    Verifier
        Flavor: AUTH_NULL (0)
        Length: 0
    Accept State: RPC executed successfully (0)
Mount Service
    Program Version: 3
    Procedure: MNT (1)
    Status: ERR_ACCESS (13)
For simplicity, only the RPC and Mount portions of the packet are shown. The RPC header decodes the transaction ID; the message type, indicating that this is a reply; the program and version number; and the procedure invoked. The credential verifier is also decoded, indicating that the server used no verifier in its reply (since the call did not specify one to begin with). A nice feature of snoop that tethereal does not yet have is the ability to indicate the frame for which this packet is a reply. As expected, the status field of the mount service reply reports an error.
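In the meantime, the rpc.xid read filter (described under "Useful filters" below) provides a workaround: given the XID printed next to a reply, it displays both the call and the reply for that transaction. Applied to the capture file above, it should produce output along these lines:

rome# tethereal -r /tmp/ethereal.cap -R "rpc.xid == 0x398f0442"
 22 0.026881 rome -> zeus MOUNT V3 MNT Call XID 0x398f0442
 23 0.179757 zeus -> rome MOUNT V3 MNT Reply XID 0x398f0442

Packet 12 contains the results of the export information request: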
rome# tethereal -r /tmp/ethereal.cap -V -R "frame.number == 12"
...
Remote Procedure Call
    Last Fragment: Yes
    Fragment Length: 92
    XID: 0x398f20d9 (965681369)
    Message Type: Reply (1)
    Program: MOUNT (100005)
    Program Version: 1
    Procedure: EXPORT (5)
    Reply State: accepted (0)
    Verifier
        Flavor: AUTH_NULL (0)
        Length: 0
    Accept State: RPC executed successfully (0)
Mount Service
    Program Version: 1
    Procedure: EXPORT (5)
    Data (68 bytes)

 0  0000 0001 0000 000b 2f65 7870 6f72 742f   ......../export/
10  656e 6700 0000 0000 0000 0001 0000 000c   eng.............
20  2f65 7870 6f72 742f 686f 6d65 0000 0001   /export/home....
30  0000 0006 7665 726f 6e61 0000 0000 0000   ....verona......
40  0000 0000                                 ....
The Data field of the Mount packet shows a hex dump of the export list; the interpreted text appears in the far right column. We can see how the export list is encoded into the packet as a set of exported directories, each followed by the list of hosts (or groups of hosts) that are given access to it. In this dump, /export/eng is followed by an empty list, meaning it is exported to everyone, while /export/home is exported only to verona.
Useful filters
Read filters help you remove the noise from a packet trace and let you see only the packets that interest you. If a packet meets the requirements expressed in the read filter, then it is printed. Read filters let you compare the fields within a protocol against a specific value, compare fields against other fields, and check for the existence of specified fields or protocols altogether. One of the main strengths of tethereal is its powerful filters. You are encouraged to learn more about them from the tethereal documentation. The following list includes some of the read filters you are most likely to use when analyzing NFS-related traffic:

- nfs
- Displays NFS traffic regardless of the version. Note that MOUNT, NLM, and portmapper traffic is not displayed. Useful once the mount has already occurred. The following example displays all NFS protocol traffic involving the host rome:
# tethereal -R "nfs and ip.addr == rome"
- nfs.status
- Displays replies to successful NFS calls when nfs.status == 0, or replies to unsuccessful NFS calls otherwise. The originating call can be obtained using the rpc.xid filter. The following example displays all NFS failures:
# tethereal -R "nfs.status != 0"
- rpc
- Displays all RPC traffic regardless of the program number. The following example displays all RPC traffic on the wire:
# tethereal -R "rpc"
- rpc.xid
- Displays the RPC call or reply matching a given transaction ID. This is useful when the call packet is available and the matching reply is needed, or vice versa. The following example finds the RPC call and reply with transaction ID equal to 0x398f0441:
# tethereal -R "rpc.xid == 0x398f0441"
- tcp.port == 111 or udp.port == 111
- Displays rpcbind and portmapper traffic. Useful during filesystem mount negotiation. The following example displays all rpcbind traffic on the network:
# tethereal -R "tcp.port == 111 or udp.port == 111"
- rpc.program, rpc.programversion, rpc.procedure
- Use rpc.program == 100005 to display MOUNT protocol-related traffic. Useful during the mount process. The following example displays all MOUNT protocol traffic between the hosts zeus and rome:
# tethereal -R "rpc.program == 100005 and ip.addr == zeus \
and ip.addr == rome"
Use rpc.program == 100021 to display NLM traffic. Useful for tracking lock manager-related traffic. The following example displays all NFS Version 3 Network Lock Manager traffic between hosts zeus and rome. Note that NLM Version 4 is used with NFS Version 3:
# tethereal -R "rpc.program == 100021 and rpc.programversion == 4 \
and ip.addr == rome and ip.addr == zeus"
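Read filters can be combined with the usual boolean operators to answer broader questions in one pass. As a sketch (run against the capture file from the earlier example; 100003 is the NFS program number), the following command should isolate the entire mount negotiation between rome and zeus by selecting portmapper, MOUNT, and NFS traffic together:

# tethereal -r /tmp/ethereal.cap -R "(tcp.port == 111 or udp.port == 111 \
or rpc.program == 100005 or rpc.program == 100003) \
and ip.addr == rome and ip.addr == zeus"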
NFSWATCH
NFSWATCH was developed by David Curry of Purdue University in the late 1980s, with some improvements to the basic framework provided by Jeff Mogul of Digital Equipment Corporation (now Compaq). It is mainly used to monitor NFS activity on a given server, or NFS activity on the local network. NFSWATCH gathers its data by monitoring the network interface of the system where it is invoked. NFSWATCH 4.3 is the most recent version at the time of this writing, and it only supports NFS Version 2 over UDP. Be aware that at the time of this writing, a bug in the tool causes NFS Version 3 traffic to the server to incorrectly increment the NFS Version 2 counters, because the tool does not check the NFS version number of the packets it receives. Regardless of its current limitations, NFSWATCH is still a very useful tool whose main features are worth mentioning:

- The tool categorizes the incoming network traffic and continuously updates the statistics on the display. You can also instruct the tool to create a more detailed log file of the network traffic.
- It allows you to log statistics for every NFS operation, for every exported filesystem, for files for which you specify particular interest, or for NFS clients that access your server.
- It reports usage of NFS clients and users of the filesystems.
- It can be run interactively or remotely (via rsh), or it can be scheduled to run from cron.
- Total runtime can be specified for unsupervised traffic monitoring.
# NFSwatch log file
# Packets from: all hosts
# Packets to:   zeus
#
# begin
#
Date: Tue Aug  1 16:31:22 2000
Cycle Time: 5
Elapsed Time:
#
#   total packets      network    to host    dropped
#
Interval Packets:         2371       2371          0
Total Packets:            2371       2371          0
#
# packet counters        int    pct     total
#
ND Read:                   0     0%         0
ND Write:                  0     0%         0
NFS Read:                166     7%       166
NFS Write:               346    15%       346
NFS Mount:                 0     0%         0
YP/NIS/NIS+:               0     0%         0
RPC Authorization:         0     0%         0
Other RPC Packets:      1844    78%      1844
TCP Packets:               2     0%         2
UDP Packets:            2358    99%      2358
ICMP Packets:              1     0%         1
Routing Control:           2     0%         2
Address Resolution:       10     0%        10
Reverse Addr Resol:        0     0%         0
Ethernet/FDDI Bdcst:      13     1%        13
Other Packets:             0     0%         0
#
# nfs counters           int    pct     total
#
/export/home:            512   100%       512
    (0/0/5/0/12/0/154/0/335/2/0/0/0/0/3/1/0/0)
#
# file counters          int    pct     total
#
#
# nfs procs
#
Procedure        int   pct   total   completed   ave.resp   var.resp   max.resp
CREATE             2    0%       2
GETATTR            0    0%       0
GETROOT            0    0%       0
LINK               0    0%       0
LOOKUP            12    2%      12
MKDIR              3    1%       3
NULLPROC           0    0%       0
READ             154   30%     154
READDIR            0    0%       0
READLINK           0    0%       0
REMOVE             0    0%       0
RENAME             0    0%       0
RMDIR              1    0%       1
SETATTR            5    1%       5
STATFS             0    0%       0
SYMLINK            0    0%       0
WCACHE             0    0%       0
WRITE            335   65%     335
The NFSWATCH log shows the distribution of NFS READ, NFS WRITE, NFS MOUNT, NIS, and RPC AUTHORIZATION packets, among others. The nfs counters section indicates the total number of NFS operations per exported filesystem (one in this case) during the interval. The operation distribution denoted by (0/0/5/0/12/0/154/0/335/2/0/0/0/0/3/1/0/0) breaks that total down by procedure, in NFS Version 2 procedure order; it matches the per-procedure counts in the nfs procs section (for example, 5 SETATTR, 12 LOOKUP, 154 READ, and 335 WRITE calls). The packet counters and nfs procs sections indicate that there were close to twice as many writes as reads. The low LOOKUP count leads us to believe that most writes occurred to the same file.
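Because the log is plain text, quick checks like this are easy to script. As an illustration (assuming the log was saved as nfswatch.log; the filename is just an example), egrep pulls the relevant per-procedure totals out of the nfs procs section:

rome% egrep '^(READ|WRITE|LOOKUP)' nfswatch.log
LOOKUP            12    2%      12
READ             154   30%     154
READDIR            0    0%       0
READLINK           0    0%       0
WRITE            335   65%     335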
nfsbug
The nfsbug utility was written by Leendert van Doorn in the mid-1990s to test hosts for well-known NFS problems and bugs. nfsbug is available at http://www.cs.vu.nl/~leendert. Use it to identify (and consequently correct) the following problems:

- Find worldwide exportable filesystems (see the showmount example after this list). This is a common occurrence in large organizations with hundreds or thousands of NFS clients: system administrators choose to export filesystems to all clients instead of grouping the hosts into netgroups and exporting the filesystems only to the netgroups that really need access to them.
- Determine the effectiveness of the export list.
- Determine if filesystems can be mounted through the portmapper.
- Attempt to guess filehandles and obtain access to filesystems not exported to the test client.
- Exercise the system for well-known bugs.[34]
[34]According to Leendert's web page, the tool has not been updated in recent years, although he still plans to get to it at some point.
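A quick way to spot the first problem by hand, before running nfsbug, is to ask the server's mountd for its export list with showmount -e; an entry exported to "(everyone)" is a candidate for tightening. Against the server from the earlier tethereal example, the output would look something like this:

rome% showmount -e zeus
export list for zeus:
/export/eng  (everyone)
/export/home verona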
SATAN
SATAN is a tool used to find well-known security holes in Unix systems. SATAN stands for Security Administrator's Tool for Analyzing Networks. At the time of this writing, none of the problems SATAN probes for are new: each one has already been discussed in CERT bulletins, and each can be countered either by installing the appropriate patch or by fixing a system configuration flaw. SATAN is available at http://www.fish.com/satan. SATAN was written by Dan Farmer and Wietse Venema and first released for general availability in April 1995. The tool is intended to help system administrators identify several common network-related security problems, hopefully before someone else has a chance to exploit them. The tool provides a description of each problem, explains the consequences if no action is taken, and indicates how to correct it. Note that the tool itself does not exploit the security holes it finds. At the time of this writing, SATAN can identify the following problems related to NFS and NIS:

- NFS filesystems exported to arbitrary hosts
- NFS filesystems exported to unprivileged programs
- NFS filesystems exported via the portmapper
- NIS password file access from arbitrary hosts
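The portmapper-related export problem arises because some mount daemons honor requests forwarded through the portmapper's indirect-call facility, making the request appear to originate from the server itself and bypassing the export list. While SATAN automates the probe, you can at least see which RPC services a server registers with rpcinfo -p; the output below is a sketch, and the mountd port numbers are illustrative, since mountd typically registers on a dynamic port:

rome% rpcinfo -p zeus
   program vers proto   port  service
    100000    2   udp    111  rpcbind
    100005    1   udp  33271  mountd
    100005    3   udp  33271  mountd
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs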