This is the mail archive of the libc-help@sourceware.org mailing list for the glibc project.



Re: Asking for Help on Seeking to End of File


Thank you for the important messages and the patient explanation! We will
look into the detailed tweaks when time allows.

On Tue, Jul 22, 2014 at 5:56 AM, Linda A. Walsh <gnu@tlinx.org> wrote:
> Linlin Yan wrote:
>>
>> mount parameters:
>>
>> rw,noatime,usrquota,grpquota,logbsize=256k,logbufs=8,allocsize=1073741824,largeio
>> which was inherited from other servers maintained by another
>> ex-technician.
>>
>> "allocsize" was set to about 256M in order to make it
>> cache as much as possible to improve the I/O speed. I enlarged "allosize"
>> because [of] larger memory...
>
> ----
>    Forgive my intrusion, again, but I wasn't sure if you understood the
> "allocsize" parameter.
>
>    allocsize controls the *size* of space *alloc*ated on disk for a file.
> When you create a file, or when the previous free region is full, xfs
> will try to find a contiguous region on disk of allocsize.
>    Note: allocsize and the raid params (su/sw, etc.) are
> *ignored* if "largeio" is not turned on.
>    Largeio tells xfs that you want to be able to do large reads
> and/or large writes in one operation if possible. The allocsize and
> raid params allow you to specify the size and alignment of those
> reads/writes, but without largeio they are ignored.  They only
> affect how space is allocated; neither affects caching.
> The only mount params that affect caching are
> logbufs and logbsize, which only affect xfs's metadata
> log, not the actual file data.
> By default, Linux will use all free memory for file buffering (and release
> it automatically if a program needs it).
> There are various "tweaks" in the /proc/sys/fs/xfs dir that control xfs and
> how often it writes some things to disk, and tweaks for all file systems
> in /proc/sys/vm; these are documented in the kernel source under
> Documentation/filesystems, in the xfs*.txt files and proc.txt.
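> As a quick sanity check, the mount options actually in effect (largeio in
> particular) can be read back from /proc/mounts.  A rough, illustrative C
> sketch, assuming a Linux system with glibc (the " xfs " filter is only for
> convenience, not something required):
>
>   #include <stdio.h>
>   #include <string.h>
>
>   /* Print the mount entries for xfs filesystems so the options in
>    * effect (largeio, allocsize, logbufs, ...) can be checked against
>    * what was intended in /etc/fstab. */
>   int main(void)
>   {
>       FILE *f = fopen("/proc/mounts", "r");
>       char line[1024];
>
>       if (!f) {
>           perror("/proc/mounts");
>           return 1;
>       }
>       while (fgets(line, sizeof line, f)) {
>           /* format: device mountpoint fstype options dump pass */
>           if (strstr(line, " xfs "))
>               fputs(line, stdout);
>       }
>       fclose(f);
>       return 0;
>   }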
>
> The biggest thing affecting I/O performance is the size of the
> reads/writes.  It depends on the disk configuration, but on a RAID
> system, reading or writing 256MB in one call is more than enough.
> Most would be fine with 16MB.
>
> However, in your case, your files use "odd" size values -- not a multiple
> of any power of 2.  That suggests your application may be writing small,
> unaligned bits of data (unaligned = NOT a multiple of the system allocation
> size, 4K by default with xfs).  Doing odd-size I/O's will significantly slow
> down file I/O.  Even doing *small* file I/O's (<128K) on most disks will
> cause problems.
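> For illustration, one way to keep sizes aligned is to read the filesystem's
> preferred I/O block size from st_blksize and round the application's write
> size up to a multiple of it.  A minimal sketch (the 1 MiB target size is
> just an example, not something taken from your setup):
>
>   #include <stdio.h>
>   #include <sys/stat.h>
>
>   /* Round a desired buffer size up to a multiple of the filesystem's
>    * preferred block size (st_blksize, typically 4K on xfs), so writes
>    * stay aligned instead of "odd" sized. */
>   int main(int argc, char **argv)
>   {
>       struct stat st;
>       size_t want = 1 << 20;                 /* e.g. aim for ~1 MiB writes */
>
>       if (argc < 2) {
>           fprintf(stderr, "usage: %s <file-or-dir-on-the-xfs-fs>\n", argv[0]);
>           return 1;
>       }
>       if (stat(argv[1], &st) != 0) {
>           perror("stat");
>           return 1;
>       }
>       size_t blk = (size_t)st.st_blksize;
>       size_t io  = ((want + blk - 1) / blk) * blk;  /* round up to a block multiple */
>       printf("preferred block size: %zu, suggested write size: %zu\n", blk, io);
>       return 0;
>   }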
> For comparison, I used dd with "direct" to bypass the buffer cache and
> measure the time to do the I/O on disk.
> Here are results for reading and writing 1GB, 1MB at a time vs. 4KB at a time:
>
> 1MB R+W:
> read: 2.08161 s, 516 MB/s,    write: 1.5624 s, 687 MB/s
>
> 4K R+W:
> read:  37.2781 s, 28.8 MB/s,  write: 43.0325 s, 25.0 MB/s
>
> Using small but *even* (power-of-2, here 4K) size I/O's is 20-30 times
> slower than the 1MB case.
>
> Using 1K I/O is about 4 times slower than the 4K case.
>
> If they were odd sizes, I don't know exactly, but likely 10x slower than the
> 4K case, or greater than 200X slower than the 1MB case.
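> If you want to reproduce that kind of measurement from a program rather than
> dd, the same comparison can be sketched in C with O_DIRECT.  This is only a
> rough sketch, not the exact test I ran; the file name "testfile" and the 4K
> alignment are placeholders, and O_DIRECT needs the buffer and I/O size
> aligned, hence posix_memalign:
>
>   #define _GNU_SOURCE              /* for O_DIRECT */
>   #include <fcntl.h>
>   #include <stdio.h>
>   #include <stdlib.h>
>   #include <string.h>
>   #include <time.h>
>   #include <unistd.h>
>
>   /* Write `total` bytes to `path` in chunks of `bsize` bytes using
>    * O_DIRECT (bypassing the page cache) and report the elapsed time.
>    * bsize should be a multiple of 4K or the open/write may fail. */
>   static void timed_write(const char *path, size_t bsize, size_t total)
>   {
>       void *buf;
>       if (posix_memalign(&buf, 4096, bsize) != 0) { perror("posix_memalign"); return; }
>       memset(buf, 0xab, bsize);
>
>       int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC | O_DIRECT, 0644);
>       if (fd < 0) { perror("open"); free(buf); return; }
>
>       struct timespec t0, t1;
>       clock_gettime(CLOCK_MONOTONIC, &t0);
>       for (size_t done = 0; done < total; done += bsize)
>           if (write(fd, buf, bsize) != (ssize_t)bsize) { perror("write"); break; }
>       clock_gettime(CLOCK_MONOTONIC, &t1);
>
>       double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
>       printf("%8zu-byte writes: %.2f s, %.1f MB/s\n",
>              bsize, secs, total / secs / 1e6);
>       close(fd);
>       free(buf);
>   }
>
>   int main(void)
>   {
>       const size_t total = 1UL << 30;           /* 1 GB, as in the dd test */
>       timed_write("testfile", 1 << 20, total);  /* 1 MB at a time */
>       timed_write("testfile", 4096,    total);  /* 4 KB at a time */
>       return 0;
>   }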
>
> You should look at how the application does I/O if that's what is causing
> the slowdown; changing kernel or file system params is not likely to do
> much good in such a case.
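> If the application goes through stdio and emits lots of tiny records, one
> cheap experiment is to give the stream a bigger buffer with setvbuf(), so
> glibc hands the kernel large, evenly sized writes instead of many odd little
> ones.  A minimal sketch (the file name and the 1 MiB buffer size are just
> placeholders):
>
>   #include <stdio.h>
>
>   /* Give the FILE stream a 1 MiB buffer so thousands of small fwrite/
>    * fprintf calls are flushed to the kernel as large writes instead of
>    * many small, oddly sized ones. */
>   int main(void)
>   {
>       static char buf[1 << 20];
>       FILE *f = fopen("output.dat", "w");
>
>       if (!f) { perror("fopen"); return 1; }
>       setvbuf(f, buf, _IOFBF, sizeof buf);   /* must precede any I/O on f */
>
>       for (int i = 0; i < 100000; i++)
>           fprintf(f, "record %d: some small, odd-sized payload\n", i);
>
>       fclose(f);
>       return 0;
>   }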
>
> Hope this helps!
> Linda

