Automounter

An automounter is any program or software facility that automatically mounts filesystems in response to access operations by user programs. An automounter system utility (a daemon under Unix), when notified of file and directory access attempts under selectively monitored subdirectory trees, dynamically and transparently makes local or remote devices accessible.

An automounter conserves local system resources and reduces the coupling between systems that share filesystems with a number of servers. For example, a mid-sized to large organization might have hundreds of file servers and thousands of workstations or other nodes accessing files from any number of those servers at any time. Usually, only a relatively small number of remote filesystems (exports) will be active on any given node at any given time. Deferring the mounting of such a filesystem until a process actually needs to access it reduces the need to track such mounts, which improves reliability, flexibility and performance.

Frequently, one or more file servers will become inaccessible (down for maintenance, on a remote and temporarily disconnected network, or reachable only via a congested link). Administrators also often find it necessary to relocate data from one file server to another, for example to resolve capacity issues or to balance load. Automating data mount-points makes it easier to reconfigure client systems in such cases.

These factors combine to pose challenges to older "static" methods of managing filesystem mount tables (the fstab files on Unix systems). Automounter utilities address these challenges and allow system administrators to consolidate and centralize the associations of mountpoints (directory names) with exports. When done properly, users can transparently access files and directories as if all of their workstations and other nodes were attached to a single enterprise-wide filesystem.
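
For illustration, this centralization usually takes the form of a small set of automounter "maps" that can be distributed (or served via NIS or LDAP) to every client in place of per-client fstab entries. The following sketch uses Linux autofs syntax; the server and export names are examples only:

    # /etc/auto.master - delegate the /projects directory to a map,
    # replacing static fstab entries such as:
    #   fileserver1:/export/projects/alpha  /projects/alpha  nfs  defaults  0 0
    /projects    /etc/auto.projects

    # /etc/auto.projects - key (subdirectory under /projects), options, export
    alpha    -rw,soft    fileserver1:/export/projects/alpha
    beta     -ro         fileserver2:/export/projects/beta

Relocating "alpha" to another server then requires editing only the map, not every client's mount table.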

One can also use automounters to define multiple repositories for read-only data; client systems can automatically choose which repository to mount based on availability, file-server load, or proximity on the network.

Home directories

Many establishments have a number of file servers which host the home directories of various users. All workstations and other nodes internal to such organizations (typically all those behind a common firewall separating them from the Internet) are configured with automounter services so that any user logging into any node implicitly triggers access to his or her own home directory, which is consequently mounted at a common mountpoint such as /home/user. This allows users to access their own files from anywhere in the enterprise, which is extremely useful in UNIX environments, where users may frequently invoke commands on many remote systems via remote-access commands such as ssh, telnet, rsh or rlogin, or via the X11 or VNC protocols.
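
A common way to express this with Linux autofs is a wildcard map, where "*" matches the name being looked up under /home and "&" substitutes that same name into the server-side path (the server name "homesrv" below is an example only):

    # /etc/auto.master
    /home    /etc/auto.home

    # /etc/auto.home - "*" matches the directory requested under /home,
    # "&" expands to that same key on the server side
    *    -rw,soft    homesrv:/export/home/&

With such a map, a login by user "alice" on any client transparently mounts homesrv:/export/home/alice at /home/alice.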

/net

A very common default automounter local path is of the form /net/hostname/nfspath where hostname is the host name of the remote machine and nfspath is the path that is exported over NFS on the remote machine. This notation generally frees the system manager from having to manage each exported path explicitly via a central automounter map.
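
In the Solaris and Linux automounters, this behaviour is typically provided by the built-in "-hosts" map, which looks up the named host's exports on demand (the host name "fileserver" below is an example only):

    # /etc/auto.master
    /net    -hosts

    # Referencing a path is enough to trigger the mount, e.g.:
    #   ls /net/fileserver/export/data

No per-host or per-export configuration is needed on the client; the map consults the server's export list at access time.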

Software shares and repositories

In some computing environments, user workstations and computing nodes do not host installations of the full range of software that users might want to access. Systems may be "imaged" with a minimal or typical cross-section of the most commonly used software. Also, in some environments, users might require specialized or occasional access to older versions of software (for instance, developers may need to perform bug fixes and regression testing, or some users may need access to archived data using outdated tools).

Commonly, organizations will provide repositories or "depots" of such software, ready for installation as required. These may also include full copies of the system images from which machines have their operating systems initially installed, or which remain available for repairing system files that become corrupted during a machine's lifecycle.

Some software may require quite substantial storage space or might be undergoing rapid (perhaps internal) development. In those cases the software may be installed on, and configured to be run directly from, the file servers.

Dynamically variant automounts

In the simplest case, a fileserver houses data and perhaps scripts which can be accessed by any system in an environment. However, certain types of files (executable binaries and shared libraries, in particular) can only be used by specific types of hardware or specific versions of specific operating systems.

For situations like this, automounter utilities generally support some means of "mapping" or "interpolating" variable data into the mount arguments.

For example, an organization with a mixture of Linux and Solaris systems might host its software package repositories for each on a common file server using export names like depot:/export/linux and depot:/export/solaris respectively. Under those, it might have directories for each of the OS versions that it supports. Using the dynamic variation features in their automounter, administrators might then configure all their systems so that anyone on any machine in the enterprise could access available software updates under /software/updates. A user on a Solaris system would find packages compiled for Solaris under /software/updates, while a Linux user would find RPMs, DEBs, or other packages for their particular OS version there. Moreover, a Solaris user on a SPARC workstation would have /software/updates mapped to an appropriate export for that system's architecture, while a Solaris user on an x86 PC would transparently find a /software/updates directory containing packages suited to that system. Some software (written in scripting languages such as Perl or Python) can be installed and/or run on any supported platform without porting, recompilation or re-packaging of any sort; a systems administrator might locate such software in a /software/common export.
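
With Linux autofs, such variation is usually expressed through map variables such as ${OSNAME} and ${ARCH}, which the automounter fills in from the client's own platform at lookup time. The following is only a sketch; the server "depot" and its export layout are hypothetical:

    # /etc/auto.master
    /software    /etc/auto.software

    # /etc/auto.software - ${OSNAME} expands to e.g. "Linux" or "SunOS",
    # ${ARCH} to e.g. "x86_64" or "sparc", so each client mounts the
    # repository that matches its own platform
    updates    -ro    depot:/export/${OSNAME}/${ARCH}/updates
    common     -ro    depot:/export/common

The Solaris automounter supports a similar set of substitution variables (ARCH, OSNAME, OSREL and others), so a comparable map can often be shared across both platforms.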

In some cases, organizations may also use regional or location-based variable/dynamic mappings, so that users in one building or site are directed to a closer file server which hosts replicas of the resources hosted at other locations.

In all of these cases, automounter utilities allow users to access files and directories without regard for their actual physical location. With an automounter, users and systems administrators can usually look for files where they are "supposed to be" and find them there.

Software

Tom Lyon developed the original automount software at Sun Microsystems: SunOS 4.0 made automounting available in 1988.[1] Sun Microsystems eventually licensed this implementation to other commercial UNIX distributions. Solaris 2.0, first released in 1992, implemented its automounter with a pseudofilesystem called autofs, which communicates with a user-mode daemon that performs the mounts.[2][3] Other Unix-like systems, including AIX, HP-UX, and Mac OS X 10.5 and later, have adopted that implementation of the automounter.

In December 1989 Jan-Simon Pendry released Amd, an automounter "based in spirit" on the SunOS automount program.[4] Amd has also become known as the Berkeley Automounter.

Linux has an independent implementation of an autofs-based automounter; version 5 of that automounter generally operates compatibly with the Solaris automounter.

FreeBSD used to provide Amd; starting with FreeBSD 10.1, it includes a new automounter very similar to the Solaris one.[5] This automounter has subsequently been ported to DragonFly BSD[6] and NetBSD.[7]

Some operating systems also support automatic mounting of external drives (such as disk drives or flash drives that use FireWire or USB connections) and removable media (such as CDs and DVDs). This differs from the automounting described here; it involves mounting local media when the user attaches them to or inserts them into the system, rather than mounting directories from remote file servers when a reference is made to them. Linux currently (as of Linux 2.6) uses the user-space program udev for this form of automounting. Some automounting functions were implemented in the separate program HAL but, as of 2010, were being merged into udev. OpenBSD has hotplugd(8), which triggers special scripts on attach or detach of removable devices, so that users can easily add mounting of removable drives. In macOS, diskarbitrationd carries out this form of automatic mounting. In FreeBSD, removable media can be handled by the automounter, just as network shares are.[8][9]
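
As an illustration of the FreeBSD approach described in the Handbook section cited above, enabling it amounts to roughly the following (exact options vary by release):

    # /etc/auto_master - enable the special "-media" map
    /media    -media    -nosuid

    # /etc/rc.conf
    autofs_enable="YES"

After that, inserting media and referencing a path under /media triggers the mount through the same autofs machinery used for network shares.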

Disadvantages and caveats

While automounter utilities (and remote filesystems in general) can provide centrally managed, consistent and largely transparent access to an organization's storage services, they also have downsides:

  • Access to automounted directories can trigger delays while the automounter resolves the mapping and mounts the export into place.
  • Timeouts can cause the unmounting of idle mounted directories, which can result in mount delays upon the next attempted access.
  • The mapping of mountpoint to export arguments is usually done via some directory service such as LDAP or NIS, which constitutes another dependency (potential point of failure).
  • When some systems require frequent access to some resources while others need only occasional access, it can be difficult or impossible to implement a consistent, enterprise-wide mixture of locally "mirrored" (replicated) and automounted directories.
  • When data is migrated from one file server (export) to another, there can be an indeterminate number of systems which, for various reasons, still have an active mount on the old location ("stale NFS mounts"); these can cause issues which may even require the reboot of otherwise perfectly stable hosts.
  • Organizations can find that they've created a "spaghetti" of mappings which can entail considerable management overhead and sometimes quite a bit of confusion among users and administrators.
  • Users can become so accustomed to the transparency of automounted resources that they neglect to consider some of the differences in access semantics that may apply to networked filesystems, as compared to locally mounted devices. In particular, programmers may attempt to use "locking" techniques which are safe and provide the desired atomicity guarantees on local filesystems, but which are documented as inherently vulnerable to race conditions when used on NFS.

References

  1. ^ Callaghan, Brent (2000) [1999]. NFS Illustrated. Addison-Wesley. pp. 322–323. ISBN 0-201-32570-5. Retrieved 2007-12-23.
  2. ^ Callaghan, Brent; Singh, Satinder (June 21–25, 1993). The Autofs Automounter. USENIX Summer 1993 Technical Conference. Cincinnati, Ohio.
  3. ^ Labiaga, Ricardo (November 7–12, 1999). Enhancements to the Autofs Automounter. 1999 LISA XIII. Seattle, Washington.
  4. ^ Jan-Simon Pendry (1989-12-01). "Amd - An Automounter". Newsgroup: comp.unix.wizards. Retrieved 2007-12-23.
  5. ^ Edward Tomasz Napierała (2014-07-30). "Autofs" (PDF). Archived (PDF) from the original on 7 June 2021.
  6. ^ Tomohiro Kusumi (2016-06-02). "git: autofs: Port autofs from FreeBSD". [email protected] (Mailing list). DragonFly BSD. Retrieved 2019-11-13.
  7. ^ "New automounter". NetBSD Wiki. Archived from the original on 7 June 2021.
  8. ^ "FreeBSD Handbook, section 17.4.2. Automounting Removable Media". Archived from the original on 7 June 2021.
  9. ^ Dickison, Anne (2015-03-13). "FreeBSD From the Trenches: Using autofs(5) to Mount Removable Media". FreeBSD Foundation. Archived from the original on 7 June 2021. Retrieved 2019-11-13.