[ECOS] JFFS2 "eating memory" when creating files
Wed Mar 10 14:16:00 GMT 2004
> > After making the attached changes to fs-ecos.c, things are looking much
> > better, but JFFS2 still "eats" 24 bytes every time I open, write and
> > close a file.
> That is most likely because a new node has to be created in the log, which
> is represented in RAM as a struct raw_node_ref. You have probably configured
> the memory pool size (CYGNUM_FS_JFFS2_RAW_NODE_REF_CACHE_POOL_SIZE) as
> zero.
I do indeed have CYGNUM_FS_JFFS2_RAW_NODE_REF_CACHE_POOL_SIZE=0.
> It then uses malloc() to allocate new nodes, which has a granularity
> of 24 bytes (if using the standard dlmalloc implementation). Using an
> allocation pool saves some memory, because the node size is actually only
> 16 bytes.
> Of course, you will have to estimate the maximum number of nodes
> you will ever use.
I shudder at the thought of estimating a fixed limit for something that
I don't understand.
> This figure depends on the size of your file system, and
> how your files are written (buffer size). Increasing the page size in the
> Linux compatibility package tends to reduce the number of nodes required,
> because you can write bigger, and hence fewer, data nodes at the expense
> of the file system allocating larger static buffers.
For now we are rebooting the unit to free up these caches. Not very
elegant, and obviously I'd like to find some way to make JFFS2 free them
when no files are open.
Before posting, please read the FAQ: http://ecos.sourceware.org/fom/ecos
and search the list archive: http://ecos.sourceware.org/ml/ecos-discuss