This is the mail archive of the docbook-apps@lists.oasis-open.org mailing list.
Re: Re: speedy XSLT processor for win
- From: Daniel Veillard <veillard at redhat dot com>
- To: Norman Walsh <ndw at nwalsh dot com>
- Cc: Bob Stayton <bobs at caldera dot com>, Gabor Hojtsy <goba at php dot net>, DocBook list <docbook-apps at lists dot oasis-open dot org>
- Date: Thu, 22 Aug 2002 08:22:34 -0400
- Subject: Re: DOCBOOK-APPS: Re: speedy XSLT processor for win
- References: <003a01c23497$c7376e80$9137a3d5@mia> <20020730023820.A12510@caldera.com> <871y8r9l8z.fsf@nwalsh.com>
- Reply-to: veillard at redhat dot com
On Thu, Aug 22, 2002 at 07:38:20AM -0400, Norman Walsh wrote:
> / Bob Stayton <bobs@caldera.com> was heard to say:
> | I'll bet you are chunking out a lot of files. If so, then
> | you are probably I/O bound. I get similar results on
>
> I/O bound, or just working really hard to calculate all of the
> navigational links.
>
> Chunking really big documents requires some potentially expensive
> operations over the document tree.
Basically, the next and previous links are recomputed *every time* for
each chunk, regardless of the fact that next->prev is the current start node.
At least minimal caching here would help; okay, I know one cannot
override variables in XSLT, but still, what a waste of power...
Can't you just compute the chunk boundaries once, store them with a key(),
and get back to linear cost for this computation?
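To illustrate the idea, here is a minimal XSLT 1.0 sketch of computing the
chunk set once and doing prev/next by position lookup instead of re-walking
the tree per chunk. This is not the actual DocBook XSL code; the element list
(chapter|appendix|sect1), the variable name $chunks, and the template name
chunk.prev are all made up for the example:

```xml
<!-- Sketch only: assumes chunkable elements are chapter, appendix, sect1.
     The node-set is evaluated once, globally, in document order. -->
<xsl:variable name="chunks" select="//chapter | //appendix | //sect1"/>

<!-- Hypothetical helper: find the chunk preceding $node by its index
     in $chunks, an O(n) scan per lookup instead of a full tree walk. -->
<xsl:template name="chunk.prev">
  <xsl:param name="node" select="."/>
  <xsl:variable name="pos">
    <xsl:for-each select="$chunks">
      <xsl:if test="generate-id(.) = generate-id($node)">
        <xsl:value-of select="position()"/>
      </xsl:if>
    </xsl:for-each>
  </xsl:variable>
  <!-- index below 1 yields an empty node-set: no previous chunk -->
  <xsl:copy-of select="$chunks[number($pos) - 1]"/>
</xsl:template>
```

In XSLT 1.0 the $chunks variable cannot be updated, but since it is computed
once at the top level, every navigational link can reuse it, which is the
kind of minimal caching suggested above.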
Daniel
--
Daniel Veillard | Red Hat Network https://rhn.redhat.com/
veillard@redhat.com | libxml GNOME XML XSLT toolkit http://xmlsoft.org/
http://veillard.com/ | Rpmfind RPM search engine http://rpmfind.net/