This is the mail archive of the mailing list for the glibc project.

Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]

Re: [PATCH v2] Improve performance of strstr

Hi Rich,

> On Tue, Apr 16, 2019 at 11:49:43PM +0000, Wilco Dijkstra wrote:
> > Hi Rich,
> > > > Ultimately the 3 key questions are:
> > > > 
> > > > 1. Is it linear time even on huge inputs?
> > > > 2. Is it faster than Two-way in the average case?
> > > > 3. Is it within a small factor even in the worst case?
> > > > 
> > > > The first 2 were never in doubt, and your result proves 3 as well.
> > > His result was up to 40x slower, which refutes #3.
> > No that was incorrectly comparing a fast case with a slow case. You can
> > easily show 1000x difference between fast and slow cases, but what we're
> > interested in is how the slow cases compare.

> No, the 40x was comparing the same input -- that is, there's an input
> for which your patch makes it 40x slower.

The same input can be fast for one algorithm and slow for another. Is that
hard to understand? Like I said, the difference can easily be a factor of
1000, and it goes both ways, so comparing a single input across algorithms
doesn't tell you anything.

> You can find some inputs which are mildly slow for the current two-way
> implementation (likely due to a mix of fundamental reasons and
> implementation flaws), and normalized for size (a poor normalization
> but whatever), searching for them takes significantly more than 1/40
> the time that your code takes on its worst-case. This is really not a

Yes, it's very common to get bad factorizations, but every now and again it
gets one that starts with an uncommon character and ends up really fast.
That huge disparity between cases is exactly what makes it a bad algorithm.

Yes, the implementation is backwards, with the large- and small-needle code
reversed. For small needles you simply want to use a skip table, since
factorization can't do anything useful anyway. For large needles you can get
reasonable factorizations, so there a skip table actually slows things down.
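To make the skip-table idea concrete, here is a minimal Horspool-style sketch (my own illustration, not the glibc code): on each mismatch it shifts by a precomputed distance for the last byte of the current window, so bytes that never occur in the needle skip a full needle length at once.

```c
#include <stddef.h>
#include <string.h>

/* Illustrative sketch of a bad-character skip table (Horspool style).
   Not the glibc implementation -- just the technique the text refers to
   as the right choice for small needles.  */
static const char *
skip_table_search (const char *hs, const char *ne)
{
  size_t hs_len = strlen (hs);
  size_t ne_len = strlen (ne);

  if (ne_len == 0)
    return hs;
  if (hs_len < ne_len)
    return NULL;

  /* Shift distance for each possible last byte of a window.  Bytes not
     in the needle allow skipping the full needle length.  */
  size_t shift[256];
  for (size_t i = 0; i < 256; i++)
    shift[i] = ne_len;
  for (size_t i = 0; i < ne_len - 1; i++)
    shift[(unsigned char) ne[i]] = ne_len - 1 - i;

  for (size_t i = 0; i + ne_len <= hs_len; )
    {
      if (memcmp (hs + i, ne, ne_len) == 0)
        return hs + i;
      i += shift[(unsigned char) hs[i + ne_len - 1]];
    }
  return NULL;
}
```

On random text most iterations take the full `ne_len` shift, which is why this beats factorization-based searching for short needles; its weakness is periodic inputs, where the shifts degenerate to small constants.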

> meaningful comparison. What's meaningful is that some operations
> become 40x slower for no good reason except your irrational refusal to
> improve the implementation of the good algorithm rather than reverting
> to something backwards.

A 40x slowdown versus a rare lucky fast case for Two-way is not meaningful
in any way. Being 11 times faster than the optimized Two-way in the average
case certainly is. What is irrational is insisting that Two-way is somehow
better when it obviously isn't.

