This is the mail archive of the ecos-discuss@sources.redhat.com mailing list for the eCos project.
Re: The performance issue of FAT16 file system?
- From: Savin Zlobec <savin dot zlobec at email dot si>
- To: jiaoxf at hellocq dot net
- Cc: nickg at ecoscentric dot com, ecos-discuss at sources dot redhat dot com
- Date: Sun, 22 Feb 2004 10:01:40 +0100
- Subject: [ECOS] Re: The performance issue of FAT16 file system?
Nick Garnett wrote:
> "James Jiao" <jiaoxf@hellocq.net> writes:
>
> > Hello,
> >
> > I am a user of eCos FAT16 file system package, we ported the V85X CF
> > disk package to another platform. But strange problems occur:
> >
> > 1. We run the fileio1 test for the FAT file system, the chdir(".")
> > test failed!
>
> How does it fail? Does the chdir() itself fail, or is it the
> checkcwd()? Do the subsequent operations work?
>
> > 2. When doing the maxfile() test, the file writing performance
> > degrades quickly, that is to say, as the file length increases the
> > writing speed decreases quickly! At about 8 megabytes of file
> > length it costs the system about 5 seconds to write 100 kilobytes,
> > while at the beginning of the file's creation it took the system
> > less than 1 second to write 100 kilobytes.
>
> This is a consequence of the FAT filesystem format. File clusters are
> chained linearly through the FAT table. Each time a new cluster is
> added, the chain must be followed down to the end. Beyond a certain
> size there is not enough space in the cache to contain the entire FAT
> table and it starts to turn over the entire cache each time.
>
> You can try increasing the size of the cache, but this just delays the
> point at which thrashing occurs rather than eliminating it.
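To illustrate the cost Nick describes, here is a minimal C sketch of
appending one cluster to a FAT16 chain. The helper names
(fat_read_entry, fat_write_entry) and the end-of-chain constant are
assumptions for illustration only, not the actual eCos fatfs code; the
point is just that every append walks the chain from the file's first
cluster.

#include <stdint.h>

#define FAT16_EOC  0xFFF8u   /* entries >= this mark end of chain */

extern uint16_t fat_read_entry(uint16_t cluster);                /* assumed */
extern void     fat_write_entry(uint16_t cluster, uint16_t val); /* assumed */

/* Append new_cluster to the chain starting at first_cluster. */
static void
fat_chain_append(uint16_t first_cluster, uint16_t new_cluster)
{
    uint16_t c = first_cluster;
    uint16_t next;

    /* Walk the whole chain to find its last cluster - O(file size).
       For a large file this touches (and may evict) many FAT cache
       blocks on every single append, which is the thrashing Nick
       describes. */
    while ((next = fat_read_entry(c)) < FAT16_EOC)
        c = next;

    fat_write_entry(c, new_cluster);        /* link the new cluster in */
    fat_write_entry(new_cluster, 0xFFFFu);  /* mark it as end of chain */
}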
Part of the problem is the current file cluster chain cache, which is not
well suited to large files. The FS tries to keep all of a file's cluster
numbers, from the first up to the last one accessed, in the cache. When we
need to access a specific file position on the disk, this cache is used to
find the cluster number. If the cluster number is not found in the cache,
the FAT table is searched from the last cluster number in the cache to the
requested one (and the cache is updated, if there is any space left). Once
you run out of cache space, the FAT table is walked every time you access
data from a non-cached cluster number.
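For reference, a rough sketch of how that lookup behaves. The structure,
names and cache size are illustrative assumptions rather than the real
fatfs code, but they show why every access past the cached range forces
another walk of the FAT.

#include <stdint.h>

#define CHAIN_CACHE_SIZE 64          /* assumed cache capacity */

extern uint16_t fat_read_entry(uint16_t cluster);   /* assumed helper */

struct chain_cache {
    uint16_t cluster[CHAIN_CACHE_SIZE]; /* cluster[i] = i-th cluster of file */
    int      len;                       /* number of valid entries           */
};

/* Return the cluster number holding the pos-th cluster of the file.
   Assumes the file's first cluster is always cached (len >= 1). */
static uint16_t
chain_cache_lookup(struct chain_cache *cc, int pos)
{
    uint16_t c;
    int i;

    if (pos < cc->len)
        return cc->cluster[pos];        /* fast path: found in cache */

    /* Slow path: walk the FAT from the last cached cluster up to the
       requested one, caching new entries while there is space left.
       Once the cache is full, this walk is repeated on every access
       beyond the cached range. */
    c = cc->cluster[cc->len - 1];
    for (i = cc->len - 1; i < pos; i++) {
        c = fat_read_entry(c);
        if (cc->len < CHAIN_CACHE_SIZE)
            cc->cluster[cc->len++] = c;
    }
    return c;
}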
One solution would be to keep some additional data for each open file,
such as the current cluster number and perhaps the next N clusters. Since
most file accesses are sequential in nature, this could speed things up
quite a bit for large files.
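A minimal sketch of what that per-open-file data might look like, with a
hypothetical window of N next clusters (the names and the value of N are
made up, and window initialisation and end-of-chain handling are omitted
for brevity):

#include <stdint.h>

#define NEXT_CLUSTERS 8                  /* assumed window size N */

extern uint16_t fat_read_entry(uint16_t cluster);  /* assumed helper */

struct file_chain_pos {
    int      cur_pos;                    /* file position (in clusters)  */
    uint16_t cur_cluster;                /* cluster number at cur_pos    */
    uint16_t next[NEXT_CLUSTERS];        /* next N clusters in the chain */
};

/* Advance to the next cluster of the file: one FAT entry read per
   step, and never a rewalk of the chain from the start of the file. */
static uint16_t
file_chain_next(struct file_chain_pos *fp)
{
    int i;

    fp->cur_cluster = fp->next[0];
    fp->cur_pos++;

    /* Slide the window and refill the last slot from the FAT. */
    for (i = 0; i < NEXT_CLUSTERS - 1; i++)
        fp->next[i] = fp->next[i + 1];
    fp->next[NEXT_CLUSTERS - 1] =
        fat_read_entry(fp->next[NEXT_CLUSTERS - 2]);

    return fp->cur_cluster;
}

With something like this, sequential reads and writes would cost at most
one FAT entry lookup per cluster advance, instead of degrading as the file
grows.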
savin
____________________
http://www.email.si/
--
Before posting, please read the FAQ: http://ecos.sourceware.org/fom/ecos
and search the list archive: http://ecos.sourceware.org/ml/ecos-discuss