Solaris:
Code: Select all
% /usr/bin/df -F ufs -o i
Filesystem             iused    ifree  %iused  Mounted on
/dev/md/dsk/d0          6546   251054      3%  /
/dev/md/dsk/d5        102904  1434632      7%  /usr
/dev/md/dsk/d3         34943   309505     10%  /var
/dev/md/dsk/d4         28176   360432      7%  /opt
/dev/md/dsk/d30        27478  1217834      2%  /www
/dev/md/dsk/d33         3611  1241701      0%  /libs
/dev/md/dsk/d34         8548  1236764      1%  /logs
/dev/md/dsk/d31       567480  5635016      9%  /apps
/dev/md/dsk/d35       481883  6496677      7%  /apps2
/dev/md/dsk/d6        123652  1537788      7%  /export/home
http://www.sun.com/bigadmin/features/articles/zfs_part1.scalable.jsp wrote:
ZFS Scalability
While data security and integrity are paramount, a file system must also perform well and stand the test of time; otherwise, it won't see much use. The designers of ZFS have removed or greatly increased the limits imposed by modern file systems by using a 128-bit architecture and making all metadata dynamic. ZFS also implements data pipelining, dynamic block sizing, intelligent prefetch, dynamic striping, and built-in compression to improve performance.
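Most of those features work automatically, but compression, for example, is just a per-dataset property you can flip at run time. A minimal sketch, using hypothetical pool and dataset names (tank/logs):
Code: Select all
# hypothetical names; assumes an existing ZFS pool called "tank"
% zfs set compression=on tank/logs    # enable built-in compression for this dataset
% zfs get compression tank/logs       # confirm the property; only new writes are compressed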
The 128-Bit Architecture
Current trends in the industry show that disk drive capacity roughly doubles every nine months to a year. If this trend continues, file systems will require 64-bit addressability in about 10 to 15 years. Instead of planning on 64-bit requirements, the designers of ZFS have taken the long view and implemented a 128-bit file system. This means that ZFS delivers more than 16 billion billion times the capacity of current 64-bit systems. According to Jeff Bonwick, the ZFS chief architect, in ZFS: the last word in file systems, "Populating 128-bit file systems would exceed the quantum limits of earth-based storage. You couldn't fill a 128-bit storage pool without boiling the oceans." Jeff also discussed the mathematics behind this statement in his blog entry on 128-bit storage. Since we don't yet have the technology to produce that kind of energy for the mass market, we might be safe for a while.
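The ratio behind that claim is easy to check with bc (plain arithmetic, nothing ZFS-specific): 2^128 divided by 2^64 is itself 2^64, roughly 1.8 x 10^19, which is where the "16 billion billion" figure comes from.
Code: Select all
% echo '2^128 / 2^64' | bc
18446744073709551616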
Dynamic Metadata
In addition to the 128-bit architecture, ZFS metadata is 100 percent dynamic. Because of this, creation of new storage pools and file systems is extremely fast. Only 1 to 2 percent of writes to disk are metadata, which results in a big initial overhead savings. There are, for example, no static inodes, so the only restriction is the number of inodes that will fit on the disks in the storage pool.
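You can see that in practice when you build a pool. A rough sketch, assuming a spare disk (c1t1d0 and the names below are hypothetical): both commands return in seconds because no inode tables or other static metadata are laid out up front.
Code: Select all
# hypothetical device and names
% zpool create tank c1t1d0    # whole pool, usable almost immediately
% zfs create tank/home        # new file system inside the pool, just as fast
% df -o i /tank/home          # inode figures here grow with the pool, not a preallocated table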
The 128-bit architecture also means that there are no practical limits on the number of files, directories, and so on. Here are some theoretical limits that might, if you can conceive of the scope, knock your socks off (a quick bc check after the list puts a couple of the exponents into plain numbers):
* 2^48 snapshots in any file system
* 2^48 files in any individual file system
* 16 exabyte file systems
* 16 exabyte files
* 16 exabyte attributes
* 3x10^23 petabyte storage pools
* 2^48 attributes for a file
* 2^48 files in a directory
* 2^64 devices in a storage pool
* 2^64 storage pools per system
* 2^64 file systems per storage pool
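For a sense of scale, the same bc trick as above turns a couple of those exponents into plain numbers; 2^64 bytes (16 x 2^60) is the 16 exabyte limit quoted in the list.
Code: Select all
% echo '2^48' | bc        # files or snapshots per file system
281474976710656
% echo '2^64' | bc        # bytes in a 16 exabyte file
18446744073709551616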
Linux:
Code: Select all
% df -i
Filesystem                   Inodes     IUsed      IFree IUse% Mounted on
/dev/sda3                    131616     27553     104063   21% /
/dev/sda1                    130560        51     130509    1% /boot
/dev/sda7                  35962880   1501190   34461690    5% /data
none                         498761         1     498760    1% /dev/shm
/dev/sda5                   1987136    187122    1800014   10% /usr
/dev/sda6                    656000      2800     653200    1% /var
appint02:/www/pint-portal01
                            1245312     27478    1217834    3% /www/pint-portal01
appint02:/www/pint-store01
                            1245312     27478    1217834    3% /www/pint-store01
appint04:/apps/weblogic
                          135284408    218806  135065602    1% /apps/weblogic