This is the mail archive of the gsl-discuss@sources.redhat.com mailing list for the GSL project.



Re: Byte ordering


> > I have a scientific calculation program (which uses GSL) but most
> > importantly it uses doubles. I need to transfer those data to another
> > computer. The problem is that if the two computers have different byte
> > ordering, I have to do something special about the data. I understand
> > that there is a network standard for shorts and for ints to serve this
> > purpose. I could not find any standards to transfer doubles/floats over
> > net. In the past I could live with printf()-type things and convert all
> > doubles to strings and pass strings, since ASCII is more universal.
> > This, however, increases the data size by a factor of three.

not if you compress it, typically. But then the CPU time to process
it is horrible: you first have to decompress, and then spend time
to scanf() the data. Stick to a smart binary format (XDR comes to
mind, or look at NCSA's HDF(5) library).

> > The current problem I am working on has data output rate of about
> > 5-15MBytes/sec and increasing it by factor of three is not feasible.
> > 
> > Could someone, please, point me in the right direction? Should I
> > give up on portability of my code and assume/hope that both ends
> > use the same byte ordering?

you could also mark your data: type tag it, dimension tag it, and perhaps
even name tag it. That way, at the other end you could be lucky and not
have to byteswap at all (if the endianism tag matches). On the other hand,
you could always assume the same (usually big) endianism, i.e. network
order, for your data. But since most high performance machines these days
are arguably little endian, that carries a penalty, which is why i'm
partial to storing the data in native endianism and hoping the receiver
uses the same.


> void byteswap_doubles(double *a)
> {
>         unsigned char b[8],c[8];
>         (void) memcpy(b,a,8); 
>         c[0]=b[7]; /* swap data around */
>         c[1]=b[6];
>         c[2]=b[5];
>         c[3]=b[4];
>         c[4]=b[3];
>         c[5]=b[2];
>         c[6]=b[1];
>         c[7]=b[0];
>         (void) memcpy(a,c,8);
> }


i'd take away the memcpy there and swap in place. I saw the speed of this
routine go from 40 to 116 Mswaps/sec on my 1.6 GHz machine, even with a
forced -g debug compile option. -O2 gave me 266 Mswaps/sec, and some SSE
instructions got the speed to 314 Mswaps/sec. I'm sure some hand-coded
assembly would get you more. If your code is at all CPU intensive, I can't
imagine that a bit of byteswapping would be holding you up: at a data rate
of 15 MB/sec of double precision data, that's about 2 M doubles per second,
which at a nominal 200 Mswaps/sec would take about 0.01 seconds. My machine
was only 1.6 GHz, so a modern 3 GHz machine cuts that roughly in half, to
about 5 ms.

The NEMO library (which I co-author) has a routine called bswap that
supposedly does this efficiently. See
	http://bima.astro.umd.edu/nemo/man_html/bswap.1.html
for background and the program wrapper.

- peter


