[PATCH] libelf: Fix some 32bit offset/size issues that break updating 4G+ files.
Thu Jun 20 07:29:00 GMT 2019
On Thu, 2019-06-20 at 04:54 +0300, Dmitry V. Levin wrote:
> On Thu, Jun 20, 2019 at 01:10:53AM +0200, Mark Wielaard wrote:
> > +# Make sure the disk is reasonably fast, should be able to write 100MB/s
> > +fast_disk=1
> > +timeout -s9 10s dd conv=fsync if=/dev/urandom of=tempfile bs=1M count=1K \
> > +  || fast_disk=0; rm tempfile
> Why /dev/urandom? I suggest using /dev/zero instead.
Good question. In early testing I noticed some file systems seemed to
optimize away the whole writing of zeros and dd would finish almost
immediately. So I used /dev/urandom to get some "real bits" in the
file. But even that didn't always show the true write-through speed.
Then I added conv=fsync, which makes sure the output file data is
physically written before dd finishes. That seems to show the actual
write speed with either /dev/urandom or /dev/zero. So I'll change it
back to /dev/zero. Thanks.
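Applying the change described above, the quoted snippet would look roughly
like this (a sketch; the tempfile name and the 100MB/s threshold come from
the quoted patch, the rest follows the reasoning in this mail):

```shell
# Make sure the file system is reasonably fast: writing 1G in under
# 10 seconds means at least ~100MB/s.  conv=fsync forces the data to
# be physically written before dd exits, so a file system cannot
# optimize away the write of zeros.
fast_disk=1
timeout -s9 10s dd conv=fsync if=/dev/zero of=tempfile bs=1M count=1K \
  || fast_disk=0
rm -f tempfile
```

If the timeout fires, timeout kills dd with SIGKILL, dd exits non-zero,
and fast_disk ends up 0.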
> Also, the check itself is quite expensive: it writes 1G into the file system.
> I suggest moving it after the mem_available check.
> > +if test $fast_disk -eq 0; then
> > + echo "Disk not fast enough, need at least 100MB/s"
> It isn't necessarily a disk; I'd say that the file system is not fast enough.
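Reordering the checks as suggested might look like the following sketch.
The mem_available name comes from the review above; the 4G threshold, the
/proc/meminfo probe, and the skip_test variable are illustrative
assumptions, not the actual patch:

```shell
# Run the cheap memory check first, so the expensive 1G write is only
# attempted when the test could run at all.  /proc/meminfo is
# Linux-specific; elsewhere mem_available stays empty and we skip.
skip_test=0

mem_available=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo 2>/dev/null)
if test -z "$mem_available" || test "$mem_available" -lt $((4 * 1024 * 1024)); then
  skip_test=1
else
  # Only now pay for the expensive write-speed probe.
  fast_disk=1
  timeout -s9 10s dd conv=fsync if=/dev/zero of=tempfile bs=1M count=1K \
    || fast_disk=0
  rm -f tempfile
  if test "$fast_disk" -eq 0; then
    echo "File system not fast enough, need at least 100MB/s"
    skip_test=1
  fi
fi
```

In an automake-based test suite the script would typically exit 77 when
skip_test is set, marking the test as skipped rather than failed.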