Subject: Re: df output with a really large filesystems
To: enami tsugutomo <enami@but-b.or.jp>
From: Bill Studenmund <wrstuden@netbsd.org>
List: tech-userlevel
Date: 03/29/2004 18:38:25
On Fri, Mar 26, 2004 at 12:17:59AM +0900, enami tsugutomo wrote:
> > BTW, our filer only has one shelf of disks (to make the 1.1T fs).
> > i guess if i had more shelves i would have a problem?
>
> Ya, if the # of blocks doesn't fit in the `long' variable, it is
> impossible for a userland program to calculate the correct value.
> For nfs, the limit is 2T since it uses 512 as the block size (f_bsize).
Interesting. Looks like statfs got missed. stat(2) already has:
        int64_t st_blocks;      /* blocks allocated for file */
So we're ok if a file is big, but not if the filesystem is.
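As a back-of-the-envelope check of that 2T figure (just a sketch; the
f_bsize/f_blocks names below only mirror the statfs fields enami
mentions, the rest is made up for illustration):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
            uint64_t fs_bytes = 2ULL << 40;          /* a 2T filesystem */
            uint64_t f_bsize  = 512;                 /* nfs block size */
            uint64_t blocks   = fs_bytes / f_bsize;  /* 2^32 blocks */

            /* What a 32-bit `long' f_blocks ends up holding: */
            uint32_t f_blocks32 = (uint32_t)blocks;  /* wraps to 0 */

            printf("true block count: %" PRIu64 "\n", blocks);
            printf("32-bit f_blocks:  %" PRIu32 "\n", f_blocks32);
            printf("df would see:     %" PRIu64 " bytes\n",
                (uint64_t)f_blocks32 * f_bsize);
            return 0;
    }

At exactly 2T the truncated count wraps to zero, and anything bigger
only keeps the low 32 bits, so df prints garbage.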
Someone want to fix this fast so it makes 2.0? :-)
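For illustration only (not a claim about what the eventual fix in the
tree will look like), giving the filesystem-level counts the same
treatment st_blocks already got would mean something along these lines:

        /* hypothetical widened fields, sketch only */
        uint64_t f_blocks;      /* total data blocks in file system */
        uint64_t f_bfree;       /* free blocks in file system */
        uint64_t f_bavail;      /* free blocks avail to non-superuser */

with df(1) then doing its arithmetic in 64 bits throughout.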
Take care,
Bill