The Andrew File System (AFS) is a distributed networked filesystem developed at Carnegie Mellon University under the direction of Mahadev Satyanarayanan as part of the Andrew Project. It is named for Andrew Carnegie and Andrew Mellon. Its primary use is in distributed computing.
AFS has several benefits over traditional networked filesystems, particularly in the areas of security and scalability. It is not uncommon for enterprise AFS cells to exceed 50,000 clients. AFS uses Kerberos for authentication, and implements access control lists on directories for users and groups. AFS's client-level caching improves filesystem performance, and allows limited filesystem access in the event of a server crash or a network outage:
AFS is a location-independent file system that uses a local cache to reduce the workload and increase the performance of a distributed computing environment. A first request for data from a workstation is satisfied by the server and placed in the local cache; a second request for the same data is satisfied from that cache. (Source: searchStorage.com)
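The read-through behavior described above can be sketched in a few lines. This is an illustrative model only, not the real AFS Cache Manager (which caches whole files on local disk); the class and path names here are hypothetical.

```python
# Illustrative sketch of AFS-style client-side caching: the first read of a
# file goes to the file server, later reads are satisfied from the cache.
class FileServer:
    def __init__(self, files):
        self.files = files
        self.fetch_count = 0  # counts simulated round trips to the server

    def fetch(self, path):
        self.fetch_count += 1
        return self.files[path]

class CacheManager:
    def __init__(self, server):
        self.server = server
        self.cache = {}

    def read(self, path):
        # First request: fetch from the server and keep a local copy.
        # Subsequent requests: satisfied entirely from the local cache.
        if path not in self.cache:
            self.cache[path] = self.server.fetch(path)
        return self.cache[path]

server = FileServer({"/afs/cell/user/readme": b"hello"})
cm = CacheManager(server)
cm.read("/afs/cell/user/readme")   # hits the server
cm.read("/afs/cell/user/readme")   # served from the local cache
print(server.fetch_count)          # -> 1
```

If the server becomes unreachable after the first read, this model still serves the cached copy, which is the sense in which caching permits limited access during an outage.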
A significant feature of AFS is the volume: a tree of files and subdirectories. Volumes are created by administrators and linked at a specific named path in an AFS cell. Once a volume is created, users of the filesystem may create directories and files as usual without concern for its physical location. When needed, AFS administrators can move a volume to another server and disk location without notifying users; the move can even occur while files in the volume are being used.
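This location transparency comes from a level of indirection: a path resolves to a volume name, and a separate location database records where that volume currently lives. The following is a hedged sketch of that idea; the dictionary names, hostnames, and partition names are invented for illustration (real AFS uses the Volume Location Database and `/vicep*` partitions).

```python
# Hypothetical sketch of AFS volume location indirection: moving a volume
# updates only the location database, never the user-visible path.
mounts = {"/afs/cell/user/alice": "user.alice"}        # path -> volume name
vldb = {"user.alice": ("fs1.example.com", "/vicepa")}  # volume -> location

def locate(path):
    """Resolve a user-visible path to the volume's current server/partition."""
    volume = mounts[path]
    return vldb[volume]

before = locate("/afs/cell/user/alice")
# An administrator moves the volume to another server and disk;
# users keep using the same path, unaware anything changed.
vldb["user.alice"] = ("fs2.example.com", "/vicepb")
after = locate("/afs/cell/user/alice")
print(before, after)
```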
Volumes can also be replicated as up to eleven read-only copies. When accessing a file in a read-only volume, a client retrieves the data from one particular read-only copy; if that copy becomes unavailable, the client falls back to any of the remaining copies. Again, users of the data are unaware of a copy's location; administrators can create and relocate such copies as needed. The AFS command suite guarantees that all read-only volumes contain exact copies of the original read-write volume.
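The failover behavior can be sketched as a simple preference-ordered scan over replica sites. This is an illustrative model under assumed names, not the actual AFS client logic.

```python
# Sketch of read-only replica selection with failover: try a preferred
# copy first, then fall back to any remaining reachable copy.
def read_from_replicas(replicas, reachable, path):
    """Return the replica server that will serve `path`, or fail if none."""
    for server in replicas:
        if server in reachable:
            return server  # client fetches the file from this copy
    raise OSError("no read-only copy of %s is reachable" % path)

replicas = ["ro1", "ro2", "ro3"]
print(read_from_replicas(replicas, {"ro1", "ro2", "ro3"}, "/afs/cell/sw"))  # ro1
print(read_from_replicas(replicas, {"ro2", "ro3"}, "/afs/cell/sw"))         # ro2
```

Because every read-only copy is guaranteed to be identical, the client can switch copies without any risk of reading stale or divergent data.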
The file name space on an Andrew workstation is partitioned into a shared and a local name space. The shared name space is identical on all workstations; the local name space is unique to each workstation and contains only temporary files needed for workstation initialization. Both name spaces are hierarchically structured. Each subtree in the shared name space is assigned to a single server, called its custodian. Files in the shared name space are cached on demand on the local workstation, and read and write operations on an open file are directed to the cached copy. If a cached file is modified, it is copied back to the custodian when the file is closed. Cache consistency is maintained by a mechanism called callback: when a file is cached, the custodian makes a note of this and promises to inform the client if the file is updated by someone else.
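The callback mechanism can be sketched as follows. This is a minimal model of the idea, with invented class names: the custodian records which clients hold a cached copy, and "breaks the callback" by notifying them when the file changes, so a client can trust its cache until told otherwise.

```python
# Hedged sketch of AFS callback-based cache consistency (illustrative only).
class Custodian:
    def __init__(self):
        self.files = {}
        self.callbacks = {}  # path -> set of clients promised a callback

    def fetch(self, path, client):
        # Record the callback promise alongside the data.
        self.callbacks.setdefault(path, set()).add(client)
        return self.files.get(path, b"")

    def store(self, path, data):
        self.files[path] = data
        # Break callbacks: notify every client holding a cached copy.
        for client in self.callbacks.pop(path, set()):
            client.invalidate(path)

class Client:
    def __init__(self, custodian):
        self.custodian, self.cache = custodian, {}

    def open_read(self, path):
        # While the callback promise stands, the cache is trusted as-is.
        if path not in self.cache:
            self.cache[path] = self.custodian.fetch(path, self)
        return self.cache[path]

    def close_write(self, path, data):
        # Modified files are copied back to the custodian on close.
        self.cache[path] = data
        self.custodian.store(path, data)

    def invalidate(self, path):
        self.cache.pop(path, None)

srv = Custodian()
srv.files["/afs/doc"] = b"v1"
a, b = Client(srv), Client(srv)
print(a.open_read("/afs/doc"))    # b'v1', cached with a callback promise
b.close_write("/afs/doc", b"v2")  # custodian breaks a's callback
print(a.open_read("/afs/doc"))    # a refetches -> b'v2'
```

The key property this models is that clients need not contact the server on every read to check for staleness; the server takes on the burden of notifying them, which is central to AFS's scalability.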