GLIBC Localedata out of memory
Stafford Horne
shorne@gmail.com
Sat May 29 10:08:14 GMT 2021
On Wed, Feb 03, 2021 at 06:40:34AM +0900, Stafford Horne wrote:
> On Tue, Feb 02, 2021 at 10:40:02AM -0300, Adhemerval Zanella wrote:
> >
> >
> > On 30/01/2021 20:50, Stafford Horne via Libc-alpha wrote:
> > > Hello,
> > >
> > > I am working on testing the new OpenRISC port and getting all of the tests
> > > passing. I am having an issue with generating localedata on my board running
> > > out of memory. The process is getting killed by the Linux OOM killer. The
> > > board has 256MB of RAM, and it seems to not be enough; a lot of RAM is being
> > > used by my rootfs.
> >
> > I usually run a full make check on a sh4 board that has 512MB of RAM and is
> > shared with the Debian buildd process, and I haven't seen any issue. The machine
> > does have 8GiB of swap enabled, so that might be mitigating it.
> >
> > I checked the memory requirements of localedata generation for the tests (using
> > valgrind massif), and it seems that only one (cmn_TW.UTF-8) consumes about ~190MB;
> > most use about ~150MB (as below). So I think using a small swap should help you here.
> >
> > az_AZ.UTF-8 : 143.4 MB
> > bg_BG.UTF-8 : 145.29 MB
> > am_ET.UTF-8 : 145.62 MB
> > ber_DZ.UTF-8 : 145.28 MB
> > ber_MA.UTF-8 : 145.26 MB
> > be_BY.UTF-8 : 145.27 MB
> > br_FR.UTF-8 : 145.26 MB
> > bs_BA.UTF-8 : 145.27 MB
> > ckb_IQ.UTF-8 : 144.6 MB
> > cmn_TW.UTF-8 : 190.72 MB
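> >
> > For reference, a single locale's generation can be measured with something
> > along these lines (the localedef arguments here are only an example):
> >
> >   valgrind --tool=massif --massif-out-file=massif.out \
> >       localedef --no-archive -i cmn_TW -f UTF-8 ./cmn_TW.UTF-8
> >   ms_print massif.out | head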
>
> Actually, I was able to get the locale generation working yesterday by shrinking
> my rootfs as much as possible. Next I found that the sort-test is failing
> (localedata/xfrm-test with -nocache). The -nocache option makes the program
> allocate 4096 bytes per locale line to keep strxfrm from taking a cached
> path. Oh boy!
>
> > >
> > > Previously I was running tests on a simulator where I could adjust the RAM to
> > > anything I liked. But now I am running tests on my ARTY FPGA dev board.
> > >
> > > Is it OK to skip locale generation in the test results? Or any suggestions?
> >
> > You will need the locales to get a full result, otherwise some tests will
> > fail. I think it should be possible to build a native localedef and
> > generate the locales on the host for a cross-compilation. You will need to
> > handle a possible endianness mismatch though, and there is the potential
> > issue of the tests using a conversion library that the host does not support.
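> >
> > Roughly, I would expect that to look something like this on the host (an
> > untested sketch; pick --big-endian or --little-endian to match the target,
> > and the paths and locale name are just examples):
> >
> >   localedef --little-endian --no-archive \
> >       --prefix=/path/to/target-rootfs \
> >       -i cmn_TW -f UTF-8 cmn_TW.UTF-8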
>
> I think that I'm going to need to solve the ram issue by giving it some swap.
>
> I tried swap over NFS but that resulted in write errors (Write error on dio
> swapfile):
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/mm/page_io.c?h=v5.11-rc6#n351
>
> I will try to use an SD card.
So, this took a bit longer than I expected, but I now have swap working over an
SD card and I no longer have a memory issue when generating locale data.
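The swap setup itself is nothing special, roughly the usual (the device name
below is just whichever partition on the card I gave over to swap):

  mkswap /dev/mmcblk0p2
  swapon /dev/mmcblk0p2
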
Interesting chain of events:
- Had to order an SD card Pmod for my board, which took a few days.
- Next, I found my SoC had broken hardware (OK, it's an FPGA, but it still
  took about a month to fix).
- After fixing that, we found some issues with the kernel driver, but those
  are now fixed and everything works fine.
- My workstation died, so I had to rebuild it; now I have a fancy 6-core
  processor.
Anyway, another question. Do most glibc embedded developers build and run tests
on embedded boards using a cross-compiler and 'scripts/cross-test-ssh.sh'?
I have been running tests like this using a buildroot rootfs on my board.
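Concretely, "like this" means having the glibc build and source trees visible
on the board at the same paths as on the build machine (e.g. over NFS), and then
running something along these lines from the build directory (the hostname is
just a placeholder):

  make test-wrapper='scripts/cross-test-ssh.sh root@arty-board' check
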
I now have 58 failures, but looking into them, some seem related to handling of
stdout, and it seems like that may be related to running over SSH.
If this is what others are doing then I will continue with SSH and figure out
these issues. But if others do it another way, i.e. with a native compiler on
the board, then I will look into that. In that case I will probably have to
start building Debian or Fedora, as buildroot doesn't allow for creating
a native compiler.
-Stafford