This is the mail archive of the newlib mailing list for the newlib project.
Re: [PATCH] Do not break buffers in fvwrite for unbuffered files
- From: Federico Terraneo <fede dot tft at hotmail dot it>
- To: newlib at sourceware dot org
- Date: Mon, 21 Oct 2013 14:19:11 +0200
- Subject: Re: [PATCH] Do not break buffers in fvwrite for unbuffered files
- Authentication-results: sourceware.org; auth=none
- References: <BLU0-SMTP3706870746E06F88A8C1B14F9010 at phx dot gbl> <20131021115449 dot GB2480 at calimero dot vinschen dot de>
On 10/21/2013 01:54 PM, Corinna Vinschen wrote:
> On Oct 21 10:36, Federico wrote:
>> -	w = fp->_write (ptr, fp->_cookie, p, MIN (len, BUFSIZ));
>> +	w = fp->_write (ptr, fp->_cookie, p, len);
> As noted in my other reply, len is size_t but the parameter to
> _write may be int, even after my _READ_WRITE_BUFSIZE_TYPE patch has
> been applied.
> Therefore the type size difference still has to be accounted for.
> Maybe something like this is sufficient:
> w = fp->_write (ptr, fp->_cookie, p, MIN (len, INT_MAX));
> Thanks, Corinna
First of all, I'm happy to know that my profiling work is useful to
other targets as well.
Thanks for the patch review; I didn't notice the type problem, as my
target still uses int for file offsets.
Your solution using INT_MAX is more than satisfactory, as it would
only split *really* large buffers into 2 GByte chunks. On my target
(microcontrollers with a limited amount of RAM) this will simply never
happen, while on other targets the overhead of one extra syscall every
2 GByte is probably of no concern.
I'm wondering whether it makes sense to avoid splitting writes (and
reads) for buffered files as well. glibc appears to do so, and outside
the RAM-constrained embedded world few programmers bother to add a
setbuf(f, NULL); when they are going to read or write in large chunks.