Re: [PATCH][BZ #13065] New pthread_barrier algorithm to fulfill barrier destruction requirements.



On 12/21/2015 02:34 PM, Torvald Riegel wrote:
> On Fri, 2015-12-18 at 17:47 -0600, Paul E. Murphy wrote:
>>
>> On 12/18/2015 10:42 AM, Torvald Riegel wrote:
>> +	  if (i <= cr)
>> +	    goto ready_to_leave;
>> +	  else
>> +	    break;
>>
>> Is the else here only hit if the number of participating threads is
>> greater than the barrier count?
> 
> Yes.  The surrounding block is only run if we finish previous rounds or
> the current one, and if the CAS used for finishing it succeeds; if we
> finished the current round (i <= cr), we're ready to leave; otherwise,
> we finished a previous round, which in turn means that there must be
> more threads trying to enter the barrier than the barrier count (which
> isn't disallowed by POSIX).
> 
> Would you like to see a clarifying comment regarding this?

In hindsight, I believe the question is answered in the comment ahead of
it. I don't think an extra comment is necessary.
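
To restate my understanding as a sketch (i and cr as in the patch;
cas_succeeded is a hypothetical stand-in for the CAS, not the actual
code):

  if (cas_succeeded)
    {
      if (i <= cr)
        /* We finished the current round, so we are ready to leave.  */
        goto ready_to_leave;
      else
        /* We finished a previous round: more threads are trying to
           enter the barrier than the barrier count, which POSIX does
           not disallow.  */
        break;
    }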

>> Otherwise, it looks good to me, and seems like a good improvement to
>> have. Though, a more experienced reviewer may have more to say. This
>> is a bit more complicated than its predecessor. I'll test it on PPC
>> next week.
> 
> Thanks!

Tested out fine on POWER8/PPC64LE. I was curious what the performance
difference might be, so I slapped together the attached program. It
showed about a 25% improvement with 64 threads, a barrier count of 64,
and 100000 iterations on a 16-core machine.
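
For reference, I built and ran it like this (arguments are number of
threads, barrier count, and iterations):

  gcc barrier.c -o barrier -lm -pthread
  ./barrier 64 64 100000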

-Paul
/* A quick and hokey "performance test" for pthread barriers.  The
 * term performance is used extremely loosely.
 *
 * This is just a quick and dirty test.  It has not been well tested,
 * and may not actually work correctly. Yada yada yada.
 *
 * Compile with: gcc barrier.c -o barrier -lm -pthread
 *
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */

#include <pthread.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Shared by all worker threads.  */
pthread_barrier_t barrier;
long long int niter;

/* Worker thread: run niter barrier rounds, doing a token amount of
   floating-point work between waits, then report completion by
   writing one byte to the socket passed in via A.  */
void * wfunc(void *a)
{
  static __thread double d = 2;
  long long int i;

  for (i = 0; i < niter; i++)
  {
    pthread_barrier_wait(&barrier);
    d = log(d);
    d = exp(d);
  }

  int fd = *(int *)a;
  write(fd, "d", 1);
  return &d;
}

int main(int a, char **v)
{
  if (a < 4) { printf("usage: ./barrier [num threads] [barrier count] [iterations]\n"); exit(1); }
  int nthr = strtol(v[1], NULL, 0);
  int nbar = strtol(v[2], NULL, 0);
  niter = strtoll(v[3], NULL, 0);
  if (nthr < nbar) { printf("Fewer threads than the barrier count; bailing.\n"); exit(1); }

  printf("Testing with threads=%d barrier count=%d iter=%lld\n", nthr, nbar, niter);
  pthread_t threads[nthr];
  pthread_barrier_init(&barrier, NULL, nbar);

  /* The workers report completion over this socket pair, one byte
     per finished thread.  */
  int i;
  int fd[2];
  if (socketpair(AF_UNIX, SOCK_STREAM, 0, fd)) { perror("socketpair"); exit(1); }
  for (i = 0; i < nthr; i++)
    pthread_create(&threads[i], NULL, wfunc, &fd[1]);

  /* Note: for nthr > nbar, some threads may never terminate, so wait
     for only nbar completion bytes before exiting.  */
  char b[128];
  i = 0;
  while (i < nbar)
  {
    int n = read(fd[0], b, sizeof b);
    i += n > 0 ? n : 0;
  }
  return 0;
}
