This is the mail archive of the cygwin mailing list for the Cygwin project.


Re: NTFS fragmentation redux

Vladimir Dergachev wrote:
This is curious - how do you find out the fragmentation of an ext3 file? I do not know of a utility to tell me that.
	There's a debugfs for ext2/ext3 that allows you to dump all of the
segments associated with an inode.  "ls -i" dumps the inode number.
A quick hack (attached) displays segments for either extX or (using xfs_bmap) xfs.
I couldn't find a similar tool for jfs or reiser (at least not in my distro).
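The "ls -i" lookup mentioned above can be seen in isolation; a minimal sketch (the file path is made up, and awk is used instead of cut so a padded inode column still parses):

```shell
# "ls -i" prefixes each name with its inode number; awk pulls out
# the first field even when ls pads it with leading spaces
f=/tmp/inode-demo.$$
touch "$f"
inode=$(ls -i "$f" | awk '{print $1}')
echo "inode of $f is $inode"
rm -f "$f"
```

That inode number is what gets handed to debugfs as "stat <inode>" in the attached script.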

From indirect observation, ext3 does not have fragmentation nearly that bad until the filesystem is close to full; otherwise I would not be able to reach sequential read speeds (the all-seeks speed is about 6 MB/sec for me, and I was getting 40-50 MB/sec). This was on much larger files, though.
On an empty partition, I created a deterministic pathological case: lots
of little files, all separated by holes.  ext3 (default mount) just
allocated 4k blocks in a first-come-first-served manner.  XFS apparently
looked for larger allocation units once the file grew past 4K.
In that regard, it's similar to NT.  I believe both use a form of B-tree
to manage free space.
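The pathological case described above can be reproduced with a short loop; a sketch, with made-up paths and sizes - interleaving appends to two files means a first-come-first-served allocator hands their blocks out alternately:

```shell
# interleave 4k appends to two files so that, on an allocator like
# ext3's default, their blocks end up alternating on disk
dir=${FRAGDIR:-/tmp/fragtest}   # any writable test directory
mkdir -p "$dir"
rm -f "$dir/a" "$dir/b"
i=0
while [ $i -lt 64 ]; do
	dd if=/dev/zero bs=4k count=1 >> "$dir/a" 2>/dev/null
	dd if=/dev/zero bs=4k count=1 >> "$dir/b" 2>/dev/null
	i=$((i + 1))
done
echo "wrote two interleaved 256k files in $dir"
```

Running the attached fragment-counting script on the results should show each file in many small fragments on ext3 but fewer on XFS.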

Which journal option was the filesystem mounted with ?
	I can't see how that would matter, but the default. For speed of
testing, I mounted both with noatime,async & xfs also got
nodiratime and logbufs=8 (or deletes take way too long).

I actually implemented a workaround that calls "fsutil file createnew FILESIZE" to preallocate space and then write data in append mode
(after doing seek 0).
	I wonder if it does the same thing as dd or if it uses
the special call to tell the OS what to expect.  FWIW,
"cp" used some smallish number of blocks (4 or 8, I think), so
it is almost guaranteed to give you just about the worst possible
fragmented file! :-)  Most likely the other file utils will
give similarly (not so good) allocation performance.
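A rough user-space analogue of that workaround (preallocate the full size first, then rewrite from offset 0) can be done with dd alone; a sketch with made-up filenames and sizes - the point is conv=notrunc, which rewrites in place instead of truncating, so the original allocation survives:

```shell
# 1. preallocate the full file size up front in one large write,
#    giving the allocator a chance to pick contiguous blocks
dd if=/dev/zero of=/tmp/prealloc.dat bs=4k count=16 2>/dev/null
# 2. create some stand-in "real" data
printf 'hello' > /tmp/realdata.dat
# 3. rewrite from offset 0 without truncating; the preallocated
#    blocks are reused rather than freed and reallocated
dd if=/tmp/realdata.dat of=/tmp/prealloc.dat bs=4k conv=notrunc 2>/dev/null
```

Note this only reserves blocks by actually writing them; whether fsutil instead uses a special "tell the OS the final size" call, as wondered above, is a separate question.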



#!/bin/bash
export debugfs=$(which debugfs)

if (($# < 1)); then echo "need filename" >&2; exit 1; fi

while [[ -n "$1" ]]; do
	if [ ! -e "$1" ]; then
		echo "Name \"$1\" doesn't exist. Ignoring" >&2
	else
		inode=$(ls -i "$1" | cut -d\  -f1)
		echo -n "$1: "
		PAGER=cat $debugfs /dev/hdc1 -R "stat <$inode>" 2>/dev/null | fragfilt.ext
	fi
	shift
done

# vim:ts=4:sw=4:
#!/usr/bin/perl -w
# fragfilt.ext: count fragments in debugfs "stat <inode>" output

use strict;

my @blocknums;

while (<>) {
	# only look at BLOCKS lines, e.g. "(0-11):24576-24587, (12):24600"
	/^ \( \d+ [^\)]* \) : \d+ [^,]* ,?/x or next;
	foreach my $range (split /,\s*/) {
		# pull the physical block range after the colon; a single
		# block has no "-end" part
		$range =~ /\( \d+ [^\)]* \) : (\d+) (?: - (\d+) )?/x or next;
		my ($first, $last) = ($1, defined $2 ? $2 : $1);
		push @blocknums, ($first .. $last);
	}
}

if (@blocknums == 0) {
	print "No fragments in file, (length = 0?)\n";
	exit 1;
}

print scalar(@blocknums), " blocks";

# adjacent physical block numbers belong to the same fragment;
# every discontinuity starts a new one
my $frags = 1;
my $bn = $blocknums[0];
for (my $i = 1; $i <= $#blocknums; ++$i) {
	my $nbn = $blocknums[$i];
#	print "bn=$bn, nbn=$nbn; ";
	++$frags if ($bn + 1 != $nbn);
	$bn = $nbn;
}

if ($frags == 1) {
	print ", fully defragmented\n";
} else {
	print " in $frags fragments\n";
}

# vim:ts=4:sw=4