This is the mail archive of the
libc-help@sourceware.org
mailing list for the glibc project.
Re: Asking for Help on Seeking to End of File
- From: Linlin Yan <yanll at mail dot cbi dot pku dot edu dot cn>
- To: "Linda A. Walsh" <gnu at tlinx dot org>
- Cc: Ángel González <keisial at gmail dot com>, Siddhesh Poyarekar <siddhesh dot poyarekar at gmail dot com>, Godmar Back <godmar at gmail dot com>, "libc-help at sourceware dot org" <libc-help at sourceware dot org>
- Date: Tue, 22 Jul 2014 22:39:50 +0800
- Subject: Re: Asking for Help on Seeking to End of File
- Authentication-results: sourceware.org; auth=none
- References: <CA+YjnUuUj_LeGTtGbuuOQy=YFR3HQ7ZHJ7h8XP1Y1ssEQA1Ryw at mail dot gmail dot com> <CAB4+JY+2Dhg1uo-+jputmfSjFtCMBFc7WQ1E3y1WXFYqZadiJQ at mail dot gmail dot com> <CA+YjnUs_CX4E14-JyrHXg8QeoL0YuaGpYFgRYVs7PJ3TCqN3cg at mail dot gmail dot com> <CAAHN_R0t8X7uN=J-JfLVZbt+_dB+uG0hybX2iq5VF_sZprV4bQ at mail dot gmail dot com> <5390EF18 dot 7010306 at gmail dot com> <CA+YjnUs7bfSaAUyvLJCd0mkx9mzpSq2X==BqB6K_7mpYYpO74g at mail dot gmail dot com> <CA+YjnUtbnkr6e4FSsBJ3DXhYkNPGOOaqquX9DipsX7csnQMfiA at mail dot gmail dot com> <53CB3462 dot 5070401 at tlinx dot org> <CA+YjnUvitjGNriwJ1_gp70nRx4qZD_KZCMKm-XUSpL90U7nR2Q at mail dot gmail dot com> <53CD8C89 dot 3050409 at tlinx dot org>
Thank you for the important messages and the patient explanation! We
will look into these tweaks in detail when time is available.
On Tue, Jul 22, 2014 at 5:56 AM, Linda A. Walsh <gnu@tlinx.org> wrote:
> Linlin Yan wrote:
>>
>> mount parameters:
>>
>> rw,noatime,usrquota,grpquota,logbsize=256k,logbufs=8,allocsize=1073741824,largeio
>> which was inherited from other servers maintained by another
>> ex-technician.
>>
>> "allocsize" was set to about 256M in order to make the system
>> cache as much as possible to improve the I/O speed. I enlarged
>> "allocsize" because of the larger memory...
>
> ----
> Forgive my intrusion again, but I wasn't sure if you understood
> the "allocsize" parameter.
>
> allocsize controls the *size* of the space *alloc*ated on disk for a
> file. When you create a file, or when the previous free region is
> full, xfs will try to find a contiguous region on disk of allocsize
> bytes.
> Note: allocsize and the RAID params (su/sw, etc.) are
> *ignored* if "largeio" is not turned on.
> largeio tells xfs you want to be able to do large reads
> and/or large writes in one operation where possible. allocsize and
> the RAID params let you specify the size and alignment of those
> reads/writes, but without largeio they are ignored. They only
> affect how space is allocated; neither affects caching.
> The only mount params that affect the cache are logbufs and
> logbsize, and those only affect xfs's meta-data log, not the
> actual file data.
> By default, linux will use all free memory for file buffering (and
> will release it automatically if a program needs it).
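For reference, a mount-options line combining the settings discussed above might look like the following /etc/fstab sketch (the device and mount point are placeholders, not from the original setup):

```
# largeio must be present for allocsize (and any su/sw stripe params)
# to take effect; logbufs/logbsize only size the metadata log buffers,
# and none of these options control file-data caching.
/dev/sdb1  /data  xfs  rw,noatime,largeio,allocsize=1073741824,logbufs=8,logbsize=256k  0 0
```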
> There are various "tweaks" in the /proc/sys/fs/xfs dir controlling
> xfs and how often it writes certain things to disk, and for all file
> systems in /proc/sys/vm; they are documented in the kernel source
> under Documentation/filesystems, in the xfs*.txt files (there is
> more than one) and proc.txt.
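Those tunables can be inspected from the shell; a minimal sketch (the paths are the standard sysctl locations, but which files exist varies by kernel build):

```shell
#!/bin/sh
# List the xfs-specific sysctls (present only if this kernel has xfs)
# and a couple of the VM writeback knobs documented in proc.txt.
if [ -d /proc/sys/fs/xfs ]; then
    grep -r . /proc/sys/fs/xfs 2>/dev/null
else
    echo "no /proc/sys/fs/xfs on this kernel"
fi
for f in /proc/sys/vm/dirty_ratio /proc/sys/vm/dirty_background_ratio; do
    if [ -r "$f" ]; then
        echo "$f = $(cat "$f")"
    fi
done
```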
>
> The biggest thing affecting I/O performance is the size of the
> reads/writes. It depends on the disk configuration, but on a RAID
> system, 256MB per *read* or *write* call is more than enough;
> most setups would be fine with 16MB.
>
> However, in your case, your files have "odd" sizes -- not a multiple
> of any power of 2. That suggests your application may be writing
> small, unaligned bits of data (unaligned = NOT a multiple of the
> filesystem allocation size, 4K by default with xfs). Doing odd-size
> I/Os will significantly slow down file I/O. Even doing *small*
> I/Os (<128K) will cause problems on most disks.
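Whether a file has such an "odd" size is easy to check by taking its size modulo the 4K allocation unit; a small sketch (the demo file and its 6000-byte size are made up for illustration):

```shell
#!/bin/sh
# Create a deliberately odd-sized demo file (6000 bytes) and check
# whether its size is a multiple of the default 4 KiB xfs block size.
demo=/tmp/oddsize_demo.bin
dd if=/dev/zero of="$demo" bs=1 count=6000 2>/dev/null
sz=$(stat -c %s "$demo")
if [ $((sz % 4096)) -eq 0 ]; then
    echo "$demo: $sz bytes, 4K-aligned"
else
    echo "$demo: $sz bytes, NOT 4K-aligned (remainder $((sz % 4096)))"
fi
rm -f "$demo"
```

For 6000 bytes this reports a remainder of 1904, i.e. the last partial block forces an unaligned tail write.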
> For comparison (I used dd with "direct" to bypass the buffer cache
> and measure the time to do the I/O on disk), here are reads and
> writes of 1GB, 1MB at a time vs. 4KB at a time:
>
> 1MB R+W:
> read: 2.08161 s, 516 MB/s, write: 1.5624 s, 687 MB/s
>
> 4K R+W:
> read: 37.2781 s, 28.8 MB/s, write: 43.0325 s, 25.0 MB/s
>
> Using small but *even* (power-of-2, 4K) I/O sizes is 20-30 times
> slower.
>
> Using 1K I/Os is about 4 times slower again.
>
> If the sizes were odd, I don't know exactly -- but likely 10x slower
> than the 4K case, or more than 200x slower than the 1MB case.
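The kind of measurement described above can be reproduced with something along these lines (a smaller 16 MB sketch; the file path is a placeholder, and oflag=direct/iflag=direct will simply fail on filesystems without O_DIRECT support, such as tmpfs):

```shell
#!/bin/sh
# Time uncached (O_DIRECT) writes and reads at two request sizes.
# dd prints its throughput summary on stderr; tail -1 keeps that line.
f=/tmp/ddbench.bin

echo "1M requests:"
dd if=/dev/zero of="$f" bs=1M count=16 oflag=direct 2>&1 | tail -1
dd if="$f" of=/dev/null bs=1M iflag=direct 2>&1 | tail -1

echo "4K requests:"
dd if=/dev/zero of="$f" bs=4k count=4096 oflag=direct 2>&1 | tail -1
dd if="$f" of=/dev/null bs=4k iflag=direct 2>&1 | tail -1

rm -f "$f"
```

On a disk-backed filesystem the 4K run should report markedly lower MB/s than the 1M run, mirroring the numbers quoted above.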
>
> You should look at how the application does its I/O; if that is
> what's causing the slowdown, changing kernel or file-system params
> is not likely to do much good.
>
> Hope this helps!
> Linda