This is the mail archive of the gsl-discuss@sourceware.org mailing list for the GSL project.



Re: switched sparse format to binary trees


On 04/29/2014 02:39 AM, Frank Reininghaus wrote:

> you can create an array of many tree nodes in a single allocation
> though. The allocator struct could then store a pointer to this array,
> the total size of the array, and the index of the next node that is
> not in use yet.

Right. This is what I meant by a pool allocator. Just to clarify, if
GSL provides an allocator mechanism, it should be more general than
this, but it is a good idea to provide an implementation of something
like this that a client can just invoke if they want. It also gives
clients an example implementation that they can use to create their
own variations when necessary.
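A minimal sketch of such a pool allocator, assuming the bookkeeping described in the quote (a single array of nodes, its capacity, and the index of the next unused node). The names (`node_pool`, `tree_node`) are hypothetical and not part of GSL or libavl:

```c
#include <stdlib.h>

typedef struct tree_node {
    struct tree_node *left, *right;
    size_t index;      /* e.g. the key of a sparse element */
    double value;
} tree_node;

typedef struct {
    tree_node *nodes;  /* one big allocation for all nodes */
    size_t capacity;   /* total number of slots in the array */
    size_t next;       /* index of the next unused slot */
} node_pool;

static node_pool *node_pool_alloc(size_t capacity)
{
    node_pool *p = malloc(sizeof(*p));
    if (!p) return NULL;
    p->nodes = malloc(capacity * sizeof(tree_node));
    if (!p->nodes) { free(p); return NULL; }
    p->capacity = capacity;
    p->next = 0;
    return p;
}

/* Hand out the next node from the pool; NULL when exhausted. */
static tree_node *node_pool_get(node_pool *p)
{
    return (p->next < p->capacity) ? &p->nodes[p->next++] : NULL;
}

/* "Delete" all nodes at once by resetting the counter. */
static void node_pool_reset(node_pool *p) { p->next = 0; }

static void node_pool_free(node_pool *p)
{
    free(p->nodes);
    free(p);
}
```

Since all nodes live in one contiguous array, the client controls the memory layout completely, and the all-at-once deletion case mentioned below is a single counter reset.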

This is why the avl library provides such a mechanism itself: its
authors know that some clients really need it.
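For reference, libavl's hook is a small struct of function pointers that every tree-creation function accepts. The sketch below reproduces its shape from memory (check the library's header for the authoritative declaration), together with a trivial pass-through implementation equivalent to the default:

```c
#include <stdlib.h>

/* The allocation hook used by GNU libavl, reproduced from memory.
   A client can route all node allocations through its own pool by
   supplying these two functions. */
struct libavl_allocator {
    void *(*libavl_malloc)(struct libavl_allocator *allocator, size_t size);
    void (*libavl_free)(struct libavl_allocator *allocator, void *block);
};

/* A trivial pass-through implementation: just defer to malloc/free. */
static void *my_malloc(struct libavl_allocator *a, size_t size)
{
    (void) a;  /* the allocator state is unused here */
    return malloc(size);
}

static void my_free(struct libavl_allocator *a, void *block)
{
    (void) a;
    free(block);
}

struct libavl_allocator my_allocator = { my_malloc, my_free };
```

A pool-backed variant would store its pool state behind the `struct libavl_allocator *` argument (e.g. by embedding the struct in a larger one) and serve nodes from the pool instead.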

> And saving memory for each individual node also means
> that the entire tree will become more cache-friendly
> and thus possibly more performant.

Even more important than the memory savings is the control that the
client gains over cache locality and other memory layout issues.
The main lossage with malloc is that there is only one heap, so
allocations of different kinds of objects can get interleaved, and it
may become impossible to control your layout and caching. If the
allocations are not exposed, then the client is wedged.

> If individual nodes could be deleted at some point, then the
> allocator would need some bookkeeping of reusable nodes of its
> own, but if the only node deletion that can take place is that
> all nodes are deleted together, then this would probably be quite
> straightforward to implement.

In practice you are probably right: the nodes would all
be deleted together, so we do not have to worry about it
too much.

For the sake of completeness though, it is worth pointing
out at least one simple way to handle the general case.
An allocator can implement a lazy-deletion strategy, where
the pool is allowed to fragment until the client requests a
repacking. You can also trigger a repacking automatically when
a certain fragmentation level is reached, although this kind of
approach (garbage collection) is somewhat dangerous if the
client loses control of when it happens. In any case, it is
not hard to implement this sort of thing.
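The lazy-deletion bookkeeping can be sketched as follows. This is only an illustration (the names are hypothetical): a "delete" marks the slot dead and bumps a counter, and the client checks the fragmentation level to decide when to repack, e.g. by rebuilding the tree into a fresh pool.

```c
#include <stddef.h>

typedef struct {
    unsigned char *dead;   /* one flag per node slot */
    size_t capacity;       /* total slots in the pool */
    size_t used;           /* slots handed out so far */
    size_t n_dead;         /* slots marked dead */
} lazy_pool;

/* Mark a slot dead instead of freeing it; idempotent. */
static void lazy_pool_delete(lazy_pool *p, size_t slot)
{
    if (!p->dead[slot]) {
        p->dead[slot] = 1;
        ++p->n_dead;
    }
}

/* Let the client decide when to repack, e.g. above 25% waste.
   An automatic collector would call this inside the allocator
   instead, with the loss of control noted above. */
static int lazy_pool_needs_repack(const lazy_pool *p, double max_frac)
{
    return p->used > 0 && (double) p->n_dead / (double) p->used > max_frac;
}
```

The repack itself is just a walk over the live slots, copying them into a new contiguous pool and rebuilding the tree links.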

Anyway, the main point is that once a general mechanism
exists, the client can create any arbitrarily complicated
scheme for themselves. They get to take control of
allocations, if they need to.


On 04/28/2014 07:13 PM, Patrick Alken wrote:

> Anyway the study found that AVL trees were much superior to hash
> tables in terms of speed. The study is here:
>
> http://link.springer.com/chapter/10.1007%2F978-3-540-25944-2_135

It's a shame they don't explain why this is happening.
Those large factors are kind of nuts.

--
G. Jungman

