This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: [PATCH 1/6] xstat: Add a pair of system calls to make extended file stats available
- From: David Howells <dhowells at redhat dot com>
- To: Steve French <smfrench at gmail dot com>
- Cc: dhowells at redhat dot com, "J. Bruce Fields" <bfields at fieldses dot org>, linux-fsdevel at vger dot kernel dot org, linux-nfs at vger dot kernel dot org, linux-cifs at vger dot kernel dot org, samba-technical at lists dot samba dot org, linux-ext4 at vger dot kernel dot org, wine-devel at winehq dot org, kfm-devel at kde dot org, nautilus-list at gnome dot org, linux-api at vger dot kernel dot org, libc-alpha at sourceware dot org
- Date: Thu, 26 Apr 2012 14:45:54 +0100
- Subject: Re: [PATCH 1/6] xstat: Add a pair of system calls to make extended file stats available
- References: <CAH2r5muMb8m9-fMc_tcfn3ku_s55q9EEbc-vzvoFjPnsDdq1gA@mail.gmail.com> <20120419140558.17272.74360.stgit@warthog.procyon.org.uk> <20120419140612.17272.57774.stgit@warthog.procyon.org.uk> <20120424212911.GA26073@fieldses.org>
Steve French <smfrench@gmail.com> wrote:
> I also would prefer that we simply treat the time granularity as part
> of the superblock (mounted volume) ie returned on fstat rather than on
> every stat of the filesystem. For cifs mounts we could conceivably
> have different time granularity (1 or 2 second) on mounts to old
> servers rather than 100 nanoseconds.
The question is whether you want to have to do a statfs in addition to a stat.
I suppose you could potentially cache the statfs result based on the device number.
That said, there are cases where caching filesystem-level info based on i_dev
doesn't work. OpenAFS springs to mind, as it has only one superblock, and
thus one set of device numbers, but keeps there the inodes for all the
different volumes it may have mounted.
I don't know whether this would be a problem for CIFS too - say, on a Windows
server, you fabricate P: by joining together several filesystems (with
junctions?). How does this appear on a Linux client when it steps from one
filesystem to another within a mounted share?
David