This is the mail archive of the
`gsl-discuss@sourceware.org`
mailing list for the GSL project.


*From*: Gerard Jungman <jungman at lanl dot gov>
*To*: gsl-discuss at sourceware dot org
*Date*: Mon, 05 Oct 2009 16:56:07 -0600
*Subject*: Re: containers tentative design summary
*References*: <1254708349.18519.4.camel@ForbiddenPlanet> <7f1eaee30910050750l738876b1p41e6bd8ae5aa6d16@mail.gmail.com>

On Mon, 2009-10-05 at 10:50 -0400, James Bergstra wrote:

> Two comments:
>
> I'm a bit rusty with my C structs... but you would need two distinct
> static classes to have const and non-const data pointers for your view,
> right?

Yes, sorry. I left that out.

> Also, it sounds like writing code that will work for a tensor of any
> rank (e.g. add two tensors together) might be either tedious or
> impossible. I recognize that part of the problem is the lack of
> templating and polymorphism, but it would at least be comforting to
> see just how bad the situation is via a use case or two in the design
> documentation. I (naively?) fear that to get good performance will
> require a whole library of functions for even the most basic of
> operations.
>
>     gsl_marray_add_1_0( gsl_marray_1, double );
>     gsl_marray_add_1_1( gsl_marray_1, gsl_marray_1 );
>     gsl_marray_add_1_2( gsl_marray_1, gsl_marray_2 );
>     gsl_marray_add_2_2( ... )
>     ...
>
>     gsl_marray_sub_1_0( ... )
>
> Maybe a system of macros could be designed to help here, but it sounds
> like it will never be as easy as writing a couple of for-loops.

First, I want to be sure that we distinguish the multi-array case from the vector/matrix/tensor case. Algebra for vectors and matrices is well-defined and already in place; we only need to re-write some wrappers, etc. It is handled by BLAS calls. As I mentioned, this places a constraint on matrices: the fast traversals must be of unit stride. It seems we just have to live with that constraint for the matrix type.

Addition, subtraction, and scaling of multi-arrays are not hard, because these operations are only defined within the same rank. So there is only linear complexity in the interfaces, and no combinatorial explosion, for these operations. Of course, there are issues of shape conformance at run-time, but that is also true for vectors and matrices.

That leaves multi-array algebra as an open question.
By algebra, I mean technically the "cross-rank" operations, which form some kind of general multi-linear algebra. Sounds pretty hairy, as you suspect.

First Idea: In fact, none of these operations are required from the library. If you have a good (fast) indexing scheme, then the user can implement whatever they want. This is the boost solution; boost::multi_array has no support for algebra operations. So we just punt on this. This was my implicit choice in the design summary.

Second Idea: Implement as much as seems reasonable, in a way which is equivalent to what a user would do with some loops. I am not sure that efficiency is an issue, since I don't see how you could do better than some loops in the general case. Higher efficiency can be obtained for the vector and matrix types, using a good BLAS, but then it should be clear that the vector and matrix types are what you want, and not the general multi-array types.

Third Idea: Figure out a way to generate everything automatically. Hmmm. Not likely. And the interface explosion would be ridiculous.

Finally, we come to tensors. As defined in the design summary, tensors are multi-index objects with the same dimension at each place. We definitely do want to support the usual tensor algebra operations: tensor product and contraction. The question seems to be: how much simplicity do we gain in going from the general multi-array case to the restricted tensor case? If the interfaces for doing the tensor stuff are no less complicated than the general case, then we might as well go back and solve it for general multi-arrays.

In any case, I think multi-array support would be a good thing, even without multi-linear algebra. Fixing up vectors and matrices so that the view types make sense is also a good thing. These are mostly orthogonal tasks; I just want to get them both clear in my head so I understand the limited ways in which they will interact.
Right now I think the interaction is limited to some light-weight conversion functions between them and a common slicing mechanism. Tensors are another mostly orthogonal task. Again, they will benefit from the generic slicing mechanism, and there will be some light-weight conversion functions. But we can solve the problems here as we come to them.

Does this make sense?

--
G. Jungman

**Follow-Ups**:

- **Re: containers tentative design summary**, *From:* James Bergstra
- **Re: containers tentative design summary**, *From:* Brian Gough

**References**:

- **containers tentative design summary**, *From:* Gerard Jungman
- **Re: containers tentative design summary**, *From:* James Bergstra
