This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]

Re: [PATCH v2] Make bindresvport() function to multithread-safe


On Fri, Sep 28, 2012 at 5:26 AM, Peng Haitao <penght@cn.fujitsu.com> wrote:
>
> On 09/24/2012 11:08 PM, Carlos O'Donell wrote:
>> Define a macro in an internal header that, for generic Unix, uses __getpid;
>> for Linux it is overridden by a Linux-specific header that defines that
>> macro in terms of INTERNAL_SYSCALL or INLINE_SYSCALL.
>>
>
> Thanks.
> I have sent v4 patch, please review.
>
>> The whole point of that code is to create a hash function from which
>> to start looking for free ports. If the hash function has poor spread
>> because you only have __getpid in your OS, then that's not a serious
>> issue.
>>
>> You will still need to present performance data showing how these changes
>> affect the performance relative to the original non-thread-safe version.
>>
>
> The single-thread test result is as follows:
> Before the patch, the test program was run 100 times:
>
> # perf stat -r 100 -e instructions -- ./bindresvport_test > /dev/null
>
>  Performance counter stats for './bindresvport_test' (100 runs):
>
>        106,594,501 instructions              #    0.00  insns per cycle          ( +-  0.35% )
>
>        0.028158698 seconds time elapsed                                          ( +-  0.88% )
>
>
> After the patch, the test program was run 100 times:
>
> # perf stat -r 100 -e instructions -- ./bindresvport_test > /dev/null
>
>  Performance counter stats for './bindresvport_test' (100 runs):
>
>        104,938,026 instructions              #    0.00  insns per cycle          ( +-  0.44% )
>
>        0.027621324 seconds time elapsed                                          ( +-  0.94% )
>
>
>
> The multi-threaded test result is as follows:
> Before the patch, the test program was run 100 times:
>
> # perf stat -r 100 -e instructions -- ./bindresvport_mul_test > /dev/null
> bindresvport: Address already in use
> ...
>
>  Performance counter stats for './bindresvport_mul_test' (100 runs):
>
>        116,481,478 instructions              #    0.00  insns per cycle          ( +-  0.25% )
>
>        0.023839225 seconds time elapsed                                          ( +-  2.04% )
>
>
> After the patch, the test program was run 100 times:
>
> # perf stat -r 100 -e instructions -- ./bindresvport_mul_test > /dev/null
> bindresvport: Address already in use
> bindresvport: Address already in use
> ...
>
>  Performance counter stats for './bindresvport_mul_test' (100 runs):
>
>        124,069,053 instructions              #    0.00  insns per cycle          ( +-  0.32% )
>
>        0.021486935 seconds time elapsed                                          ( +-  0.84% )
>
>
> The single-thread test program is bindresvport_test.c
> The multi-thread test program is bindresvport_mul_test.c

How do you justify the performance gain?

Cheers,
Carlos.

