This is the mail archive of the gsl-discuss@sourceware.org mailing list for the GSL project.



Re: switched sparse format to binary trees


Hi,

2014-04-29 3:13 GMT+02:00 Patrick Alken:
> On 04/28/2014 04:35 PM, Gerard Jungman wrote:
>>
>> On 04/27/2014 07:32 PM, Patrick Alken wrote:
>>
>>> This actually makes a huge speed improvement in duplicate detection and
>>> element retrieval for large sparse matrices. There may however be a
>>> performance hit since a new tree node must be allocated each time an
>>> element is added to the matrix. But I believe the tradeoff is worth it.
>>
>>
>> For what it's worth, the right thing to do is probably
>> to expose the avl allocation strategy to the GSL client
>> in some way. It could be as simple as an extra argument
>> to gsl_spmatrix_alloc(), which could be a pointer to a
>> generic allocator strategy struct (similar or identical
>> to 'struct libavl_allocator'). This is the most general
>> solution, which requires that clients who care be able
>> to write their own allocator.
>
> Interesting idea...I'll think a bit more on it. Unfortunately I don't
> think there is any easy way to preallocate a binary tree. Even if the
> user knows ahead of time how many elements they want to add to the
> sparse matrix, it's not so easy to preallocate the whole tree, since
> after each insertion, the tree rebalances itself to minimize its height.

You can create an array of many tree nodes in a single allocation,
though. The allocator struct could then store a pointer to this array,
the total size of the array, and the index of the next node that is
not yet in use.

When a new node is inserted into the tree, one would just take a
pointer to the next unused node instead of calling malloc, increment
the index in the allocator struct, and perform a new allocation only
when all nodes are in use. The advantage is that the amortized cost of
this approach is much lower than calling malloc for each node, and it
also saves memory, because malloc adds a certain amount of
per-allocation overhead for its internal bookkeeping. Saving memory on
each individual node also makes the entire tree more cache-friendly
and thus possibly faster.

If individual nodes could be deleted at some point, the allocator
would need some bookkeeping of its own to track reusable nodes. But
since the only node deletion that can take place here is deleting all
nodes together, this would probably be quite straightforward to
implement.
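To make the idea concrete, here is a minimal sketch of such a node
pool in C. This is not GSL or libavl code; the type and function names
(node_pool, pool_node, etc.) are made up for illustration, and the
node layout is a placeholder for whatever the tree actually stores.
Blocks are chained rather than realloc'd, so that growing the pool
never moves nodes that have already been handed out to the tree:

```c
#include <stdlib.h>
#include <stddef.h>

/* Placeholder node type standing in for the real AVL tree node. */
typedef struct pool_node
{
  struct pool_node *left, *right;
  double value;
} pool_node;

/* One contiguous block of nodes; blocks are chained so that growing
   the pool never invalidates pointers to nodes already in use. */
typedef struct pool_block
{
  struct pool_block *next;
  size_t used;              /* nodes handed out from this block */
  size_t capacity;          /* total nodes in this block */
  pool_node nodes[];        /* flexible array member (C99) */
} pool_block;

typedef struct node_pool
{
  pool_block *head;         /* most recently allocated block */
  size_t block_size;        /* nodes per block */
} node_pool;

static pool_block *
pool_block_new (size_t capacity, pool_block *next)
{
  pool_block *b = malloc (sizeof (*b) + capacity * sizeof (pool_node));
  if (b == NULL)
    return NULL;
  b->next = next;
  b->used = 0;
  b->capacity = capacity;
  return b;
}

int
node_pool_init (node_pool *p, size_t block_size)
{
  p->block_size = block_size;
  p->head = pool_block_new (block_size, NULL);
  return (p->head != NULL) ? 0 : -1;
}

/* Hand out the next unused node; malloc is called only when the
   current block is exhausted, so the amortized cost per node is low. */
pool_node *
node_pool_alloc (node_pool *p)
{
  if (p->head->used == p->head->capacity)
    {
      pool_block *b = pool_block_new (p->block_size, p->head);
      if (b == NULL)
        return NULL;
      p->head = b;
    }
  return &p->head->nodes[p->head->used++];
}

/* All nodes are released together, so freeing is just walking the
   block chain -- no per-node bookkeeping is needed. */
void
node_pool_free_all (node_pool *p)
{
  pool_block *b = p->head;
  while (b != NULL)
    {
      pool_block *next = b->next;
      free (b);
      b = next;
    }
  p->head = NULL;
}
```

Such a pool could then be plugged in behind the function-pointer
interface Gerard mentioned (a 'struct libavl_allocator'-style pair of
allocate/free callbacks), so the tree code itself never needs to know
where its nodes come from.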

Best regards,
Frank

