


Re: [PATCH] libelf: Fix some 32bit offset/size issues that break updating 4G+ files.


On Thu, 2019-06-20 at 04:54 +0300, Dmitry V. Levin wrote:
> On Thu, Jun 20, 2019 at 01:10:53AM +0200, Mark Wielaard wrote:
> > +# Make sure the disk is reasonably fast, should be able to write 100MB/s
> > +fast_disk=1
> > +timeout -s9 10s dd conv=fsync if=/dev/urandom of=tempfile bs=1M count=1K \
> > +  || fast_disk=0; rm tempfile
> 
> Why /dev/urandom?  I suggest using /dev/zero instead.

Good question. In early testing I noticed that some file systems seemed
to optimize away the writing of zeros entirely, so dd would finish
almost immediately. I used /dev/urandom to get some "real bits" into
the file, but even that didn't always show the true write-through
speed. Then I added conv=fsync, which makes dd physically write the
output file data before finishing. With that, the actual write speed
shows up with either /dev/urandom or /dev/zero, so I'll change it back
to /dev/zero. Thanks.
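
For illustration, a minimal sketch of the difference conv=fsync makes
(file name and sizes are just placeholders):

  # Without conv=fsync, dd may only measure how fast the page cache
  # accepts the data; the reported rate can be far above the real
  # device speed.
  dd if=/dev/zero of=tempfile bs=1M count=1K

  # With conv=fsync, the output file data is physically flushed before
  # dd exits, so the reported rate reflects the actual write-through
  # speed, even for all-zero input.
  dd conv=fsync if=/dev/zero of=tempfile bs=1M count=1K
  rm -f tempfile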

> Also, the check itself is quite expensive: it writes 1G into
> tempfile. I suggest moving it after the mem_available check.

OK. Moved.

> > +if test $fast_disk -eq 0; then
> > +  echo "Disk not fast enough, need at least 100MB/s"
> 
> It isn't necessarily a disk; I'd say that the file system is not
> fast enough.

Changed.
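
Taken together, the revised check might look something like the
following sketch (the mem_available check is the existing one from the
test and is only indicated here; the exit 77 skip is an assumption
based on the usual automake/elfutils test convention):

  # (existing mem_available check runs here, so the expensive 1G write
  #  below is skipped when the test could not run anyway)

  # Make sure the file system is reasonably fast, should be able to
  # write 100MB/s.
  fast_disk=1
  timeout -s9 10s dd conv=fsync if=/dev/zero of=tempfile bs=1M count=1K \
    || fast_disk=0; rm tempfile

  if test $fast_disk -eq 0; then
    echo "File system not fast enough, need at least 100MB/s"
    exit 77  # assumed: automake's SKIP status, as elfutils tests use
  fi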

Thanks,

Mark

