Ceph Distributed File System
============================
Ceph is a distributed network file system designed to provide good
performance, reliability, and scalability.
Basic features include:

 * High availability and reliability.  No single point of failure.
In contrast to cluster filesystems like GFS, OCFS2, and GPFS that rely
on symmetric access by all clients to shared block devices, Ceph
separates data and metadata management into independent server
clusters, similar to Lustre.  Unlike Lustre, however, metadata and
storage nodes run entirely as user space daemons.  File data is striped
across storage nodes in large chunks to distribute workload and
facilitate high throughputs.  When storage nodes fail, data is
re-replicated in a distributed fashion by the storage nodes themselves
(with some minimal coordination from a cluster monitor), making the
system extremely efficient and scalable.
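A file's striping parameters are visible from the client as a virtual
extended attribute.  For example (the path, pool name, and values here
are illustrative, and the exact attribute name may vary by kernel
version):

	$ getfattr -n ceph.file.layout /mnt/ceph/somefile
	# file: mnt/ceph/somefile
	ceph.file.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304 pool=data"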
Metadata servers effectively form a large, consistent, distributed
in-memory cache above the file namespace that is extremely scalable,
dynamically redistributes metadata in response to workload changes,
and can tolerate arbitrary (well, non-Byzantine) node failures.  Inodes
with only a single link are embedded in directories, allowing entire
directories of dentries and inodes to be loaded into the cache with a
single I/O operation.  The contents of extremely large directories can
be fragmented and managed by independent metadata servers, allowing
scalable concurrent access.
When the file system approaches full, new nodes can be easily added
and things will "just work."
A user can create a snapshot on any subdirectory (and its nested
contents) in the system.  Snapshot creation and deletion are as simple
as 'mkdir .snap/foo' and 'rmdir .snap/foo'.
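For example, from within any directory in the file system (the
directory and snapshot names here are illustrative):

	$ cd /mnt/ceph/mydir
	$ mkdir .snap/one          # snapshot mydir as 'one'
	$ ls .snap
	one
	$ ls .snap/one             # read-only view of mydir at snapshot time
	$ rmdir .snap/one          # discard the snapshot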
Ceph also provides some recursive accounting on directories for nested
files and bytes.  That is, a 'getfattr -d foo' on any directory in the
system will reveal the total number of nested regular files and
subdirectories, and a summation of all nested file sizes.  This makes
the identification of large disk space consumers relatively quick, as
no 'du' or similar recursive scan of the file system is required.
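A session might look like the following (the values are illustrative,
and the exact set of ceph.dir.* attributes reported depends on the
kernel version; newer clients may require the attributes to be
requested by name or pattern rather than plain 'getfattr -d'):

	$ getfattr -d -m 'ceph.dir.*' /mnt/ceph/mydir
	# file: mnt/ceph/mydir
	ceph.dir.rbytes="9031056"
	ceph.dir.rfiles="52"
	ceph.dir.rsubdirs="3"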
Mount Options
=============

  ip=A.B.C.D[:N]
	Specify the IP and/or port the client should bind to locally.
	There is normally not much reason to do this.
  dcache
	Use the dcache contents to perform negative lookups and
	readdir when the client has the entire directory contents in
	its cache.  (A sample mount command using these options appears
	below.)
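As a sketch, assuming a monitor at 192.168.1.1 and a client that
should bind to 192.168.1.2 (all addresses and the mount point are
illustrative), the options above combine as:

	# mount -t ceph 192.168.1.1:/ /mnt/ceph -o ip=192.168.1.2,dcache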
When CONFIG_DEBUG_FS is enabled, the kernel client also exports its
internal state (including in-flight requests to the monitor, metadata,
and storage servers) under /sys/kernel/debug/ceph/ and is useful for
tracking down bugs.
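For instance, pending metadata server requests can be listed with the
following (the 'mdsc' file name reflects current kernel clients and
may differ by version):

	# cat /sys/kernel/debug/ceph/*/mdsc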
More Information
================

For more information on Ceph, see the home page at
	http://ceph.newdream.net/

The Linux kernel client source tree is available at
	git://ceph.newdream.net/git/ceph-client.git

and the source for the full system is at
	git://ceph.newdream.net/git/ceph.git