This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.



Re: [PATCH 1/2] MIPS: Compressed PLT/stubs support


Looks good.

"Maciej W. Rozycki" <macro@codesourcery.com> writes:
> @@ -3215,25 +3325,19 @@ static bfd_vma
>  mips_elf_gotplt_index (struct bfd_link_info *info,
>  		       struct elf_link_hash_entry *h)
>  {
> -  bfd_vma plt_index, got_address, got_value;
> +  bfd_vma got_address, got_value;
>    struct mips_elf_link_hash_table *htab;
>  
>    htab = mips_elf_hash_table (info);
>    BFD_ASSERT (htab != NULL);
>  
> -  BFD_ASSERT (h->plt.offset != (bfd_vma) -1);
> -
> -  /* This function only works for VxWorks, because a non-VxWorks .got.plt
> -     section starts with reserved entries.  */
> -  BFD_ASSERT (htab->is_vxworks);
> -
> -  /* Calculate the index of the symbol's PLT entry.  */
> -  plt_index = (h->plt.offset - htab->plt_header_size) / htab->plt_entry_size;
> +  BFD_ASSERT (h->plt.plist != NULL);
> +  BFD_ASSERT (h->plt.plist->gotplt_index != MINUS_ONE);
>  
>    /* Calculate the address of the associated .got.plt entry.  */
>    got_address = (htab->sgotplt->output_section->vma
>  		 + htab->sgotplt->output_offset
> -		 + plt_index * 4);
> +		 + h->plt.plist->gotplt_index * 4);
>  
>    /* Calculate the value of _GLOBAL_OFFSET_TABLE_.  */
>    got_value = (htab->root.hgot->root.u.def.section->output_section->vma

If we remove the is_vxworks assert, I think we should use MIPS_ELF_GOT_SIZE
instead of 4.
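
A tiny illustration of why the hard-coded 4 goes wrong once the assert is
gone (the slot size is the ABI pointer size, which is what MIPS_ELF_GOT_SIZE
evaluates to; the addresses below are made up):

```c
#include <stdint.h>

/* Illustration only: the address of a .got.plt slot depends on the
   per-entry size, which is the ABI pointer size (4 for the 32-bit ABIs
   and VxWorks, 8 for n64), not always 4.  got_size plays the role of
   MIPS_ELF_GOT_SIZE here.  */
static uint64_t
gotplt_slot_address (uint64_t gotplt_vma, uint64_t index,
		     unsigned int got_size)
{
  return gotplt_vma + index * got_size;
}
```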

The patch updates most uses of plt.offset, but we still have:

      else if (htab->is_vxworks
	       && h->got_only_for_calls
	       && h->root.plt.offset != MINUS_ONE)
	/* On VxWorks, calls can refer directly to the .got.plt entry;
	   they don't need entries in the regular GOT.  .got.plt entries
	   will be allocated by _bfd_mips_elf_adjust_dynamic_symbol.  */
	h->global_got_area = GGA_NONE;

Shouldn't that be using plt.plist instead?

> @@ -5124,13 +5231,63 @@ mips_elf_calculate_relocation (bfd *abfd
>  		|| h->root.root.type == bfd_link_hash_defweak)
>  	       && h->root.root.u.def.section)
>  	{
> -	  sec = h->root.root.u.def.section;
> -	  if (sec->output_section)
> -	    symbol = (h->root.root.u.def.value
> -		      + sec->output_section->vma
> -		      + sec->output_offset);
> +	  if (h->has_plt_entry)
> +	    {
> +	      bfd_boolean micromips_p = MICROMIPS_P (abfd);
> +	      unsigned int other;
> +	      bfd_vma plt_offset;
> +	      bfd_vma isa_bit;
> +	      bfd_vma val;
> +
> +	      BFD_ASSERT (h->root.plt.plist != NULL);
> +	      BFD_ASSERT (h->root.plt.plist->mips_offset != MINUS_ONE
> +			  || h->root.plt.plist->comp_offset != MINUS_ONE);
> +
> +	      plt_offset = htab->plt_header_size;
> +	      if (h->root.plt.plist->comp_offset == MINUS_ONE
> +		  || (h->root.plt.plist->mips_offset != MINUS_ONE
> +		      && r_type != R_MIPS16_26 && r_type != R_MICROMIPS_26_S1))
> +		{
> +		  isa_bit = 0;
> +		  target_is_16_bit_code_p = FALSE;
> +		  target_is_micromips_code_p = FALSE;
> +		  plt_offset += h->root.plt.plist->mips_offset;
> +		}
> +	      else
> +		{
> +		  isa_bit = 1;
> +		  target_is_16_bit_code_p = !micromips_p;
> +		  target_is_micromips_code_p = micromips_p;
> +		  plt_offset += (htab->plt_mips_offset
> +				 + h->root.plt.plist->comp_offset);
> +		}

When can we have a microMIPS call to a PLT without a microMIPS PLT for
it to call?  I was expecting something like:

    if (r_type == R_MIPS16_26 || r_type == R_MICROMIPS_26_S1)
      {
	BFD_ASSERT (h->root.plt.plist->comp_offset != MINUS_ONE);
	...
      }
    else
      {
	BFD_ASSERT (h->root.plt.plist->mips_offset != MINUS_ONE);
	...
      }

although see the comment about MIPS16 stubs below.

> +	      BFD_ASSERT (plt_offset <= htab->splt->size);
> +
> +	      sec = htab->splt;
> +	      val = plt_offset + isa_bit;
> +	      symbol = sec->output_section->vma + sec->output_offset + val;
> +
> +	      /* Set the symbol's value in the symbol table to the address
> +	         of the stub too.  Prefer the standard MIPS one.  */
> +	      other = 0;
> +	      if (h->root.plt.plist->mips_offset != MINUS_ONE)
> +		val = htab->plt_header_size + h->root.plt.plist->mips_offset;
> +	      else
> +		other = micromips_p ? STO_MICROMIPS : STO_MIPS16;
> +	      h->root.root.u.def.section = sec;
> +	      h->root.root.u.def.value = val;
> +	      h->root.other = other;

calculate_relocation doesn't feel like the right place to do this.
We should either do it in size_dynamic_sections (where we also set
_PROCEDURE_LINKAGE_TABLE_)
or finish_dynamic_symbol.  Probably the former.  E.g. we could do it in
mips_elf_allocate_lazy_stub, renaming that and mips_elf_lay_out_lazy_stubs
to something more general.

> @@ -5242,7 +5393,7 @@ mips_elf_calculate_relocation (bfd *abfd
>  	       || (local_p
>  		   && mips_elf_tdata (input_bfd)->local_call_stubs != NULL
>  		   && mips_elf_tdata (input_bfd)->local_call_stubs[r_symndx] != NULL))
> -	   && !target_is_16_bit_code_p)
> +	   && (!target_is_16_bit_code_p || (h != NULL && h->has_plt_entry)))
>      {
>        if (local_p)
>  	sec = mips_elf_tdata (input_bfd)->local_call_stubs[r_symndx];

It'd be worth describing this in the comment (and getting rid of the weird
"64-bit" thing that seems to be there now :-)).  Something like:

  /* If this is a MIPS16 call that has an associated call stub, we need to
     use that stub when calling non-MIPS16 functions or PLTs (which themselves
     are MIPS16, but the target might not be).  Note that we specifically
     exclude R_MIPS16_CALL16 from this behavior; indirect calls should
     use an indirect stub instead.  */

if that's accurate.  Although, if it is, why are we creating and choosing
a MIPS16 PLT in the first place?  It looks like you rightly try to avoid
that further down.

> +/* Make a new PLT record to keep internal data.  */
> +
> +static void
> +mips_elf_make_plt_record (bfd *abfd, struct plt_entry **plist)
> +{
> +  struct plt_entry *entry;
> +
> +  BFD_ASSERT (plist);
> +  entry = bfd_alloc (abfd, sizeof (**plist));
> +  if (entry == NULL)
> +    return;
> +  entry->need_mips = FALSE;
> +  entry->need_comp = FALSE;

Please use zalloc and remove the FALSE stuff.

I think it would be cleaner to return a boolean success code.

> @@ -8201,6 +8349,40 @@ _bfd_mips_elf_check_relocs (bfd *abfd, s
>  	  break;
>  	}
>  
> +      /* Record the need for a PLT entry.  At this point we don't know
> +         yet if we are going to create a PLT in the first place, but
> +         we only record whether the relocation requires a standard MIPS
> +         or a compressed code entry anyway.  If we don't make a PLT after
> +         all, then we'll just ignore these arrangements.  Likewise if
> +         a PLT entry is not created because the symbol is satisfied
> +         locally.  */
> +      if (h != NULL
> +	  && !info->shared
> +	  && jal_reloc_p (r_type)
> +	  && !SYMBOL_CALLS_LOCAL (info, h)
> +	  && !(ELF_ST_VISIBILITY (h->other) != STV_DEFAULT
> +	       && h->root.type == bfd_link_hash_undefweak))

It feels strange to check the last condition in check_relocs.  I realise
you added it because the same condition is used in adjust_dynamic_symbol
(and is boilerplate elsewhere too, across targets), but I think it'd be
better to abstract away:

	   && htab->use_plts_and_copy_relocs
	   && !SYMBOL_CALLS_LOCAL (info, h)
	   && !(ELF_ST_VISIBILITY (h->other) != STV_DEFAULT
		&& h->root.type == bfd_link_hash_undefweak))

(I think we should be using use_plts_and_copy_relocs here too.)

> +
> +	  /* Now work out the sizes of individual PLT entries.  */
> +	  if (htab->is_vxworks && info->shared)
> +	    htab->plt_mips_entry_size
> +	      = 4 * ARRAY_SIZE (mips_vxworks_shared_plt_entry);
> +	  else if (htab->is_vxworks)
> +	    htab->plt_mips_entry_size
> +	      = 4 * ARRAY_SIZE (mips_vxworks_exec_plt_entry);
> +	  else if (newabi_p)
> +	    htab->plt_mips_entry_size
> +	      = 4 * ARRAY_SIZE (mips_exec_plt_entry);
> +	  else if (micromips_p)
> +	    {
> +	      htab->plt_mips_entry_size
> +		= 4 * ARRAY_SIZE (mips_exec_plt_entry);
> +	      htab->plt_comp_entry_size
> +		= 2 * ARRAY_SIZE (micromips_o32_exec_plt_entry);
> +	    }
> +	  else
> +	    {
> +	      htab->plt_mips_entry_size
> +		= 4 * ARRAY_SIZE (mips_exec_plt_entry);
> +	      htab->plt_comp_entry_size
> +		= 2 * ARRAY_SIZE (mips16_o32_exec_plt_entry);
> +	    }
>  	}
>  
> -      /* Assign the next .plt entry to this symbol.  */
> -      h->plt.offset = htab->splt->size;
> -      htab->splt->size += htab->plt_entry_size;
> +      /* See if we preallocated any PLT entries for this symbol.  If none,
> +         then if microMIPS code is built, then make a compressed entry or
> +         if no microMIPS code is built, then make a standard entry.  Also
> +         if a call stub is used, then it is the call stub's standard MIPS
> +         code that jumps to the PLT entry, so we may not be able to use a
> +         MIPS16 entry in case the stub tail-jumps to it, and in any case
> +         we would not benefit from using one, so revert to a standard one
> +         in this case too.  Lastly, NewABI and VxWorks targets never use
> +         compressed entries.  */
> +      if (h->plt.plist == NULL)
> +	mips_elf_make_plt_record (dynobj, &h->plt.plist);
> +      if (h->plt.plist == NULL)
> +	return FALSE;
> +      if (h->plt.plist->need_comp && (hmips->call_stub || hmips->call_fp_stub))
> +	h->plt.plist->need_comp = FALSE;
> +      if (newabi_p || htab->is_vxworks)
> +	h->plt.plist->need_mips = !(h->plt.plist->need_comp = FALSE);
> +      if (!h->plt.plist->need_mips && !h->plt.plist->need_comp)
> +	h->plt.plist->need_mips = !(h->plt.plist->need_comp = micromips_p);

Suggest:

      if (!h->plt.plist && !mips_elf_make_plt_record (dynobj, &h->plt.plist))
	return FALSE;

      /* There are no defined microMIPS PLT entries for VxWorks, n32 or n64,
	 so always use a standard entry there.

	 If the symbol has a MIPS16 call stub, then all MIPS16 calls
	 should go via that stub, and there is no benefit to having a
	 MIPS16 entry.  */
      if (newabi_p
	  || htab->is_vxworks
	  || hmips->call_stub
	  || hmips->call_fp_stub)
	{
	  h->plt.plist->need_mips = TRUE;
	  h->plt.plist->need_comp = FALSE;
	}

      /* Otherwise, if there are no direct calls to the function, we
	 have a free choice of whether to use standard or compressed
	 entries.  Prefer microMIPS entries if the object is known to
	 contain microMIPS code, so that it becomes possible to create
	 pure microMIPS binaries.  Prefer standard entries otherwise,
	 because MIPS16 ones are no smaller and are usually slower.  */
      if (!h->plt.plist->need_mips && !h->plt.plist->need_comp)
	{
	  if (micromips_p)
	    h->plt.plist->need_comp = TRUE;
	  else
	    h->plt.plist->need_mips = TRUE;
	}

> +      if (h->plt.plist->need_mips && h->plt.plist->mips_offset == MINUS_ONE)
> +	{
> +	  bfd_vma offset;
>  
> -      /* If the output file has no definition of the symbol, set the
> -	 symbol's value to the address of the stub.  */
> +	  h->plt.plist->mips_offset = offset = htab->plt_mips_offset;
> +	  htab->plt_mips_offset = offset + htab->plt_mips_entry_size;
> +	}

When is mips_offset not MINUS_ONE?

The offset variable seems a bit obfuscating.  I'd prefer:

	  h->plt.plist->mips_offset = htab->plt_mips_offset;
	  htab->plt_mips_offset += htab->plt_mips_entry_size;

> +      if (h->plt.plist->need_comp && h->plt.plist->comp_offset == MINUS_ONE)
> +	{
> +	  bfd_vma offset;
> +
> +	  h->plt.plist->comp_offset = offset = htab->plt_comp_offset;
> +	  htab->plt_comp_offset = offset + htab->plt_comp_entry_size;
> +	}

Same comments here.

> +
> +      /* Reserve the corresponding .got.plt entry now too.  */
> +      if (h->plt.plist->gotplt_index == MINUS_ONE)
> +	{
> +	  bfd_vma gpindex;
> +
> +	  h->plt.plist->gotplt_index = gpindex = htab->plt_got_index;
> +	  htab->plt_got_index = gpindex + 1;
> +	}

Here too.

> +
> +      /* If the output file has no definition of the symbol, we'll use
> +         the address of the stub.
> +
> +         For VxWorks, point at the PLT load stub rather than the lazy
> +         resolution stub; this stub will become the canonical function
> +         address.
> +
> +         Otherwise we cannot determine the address of the stub yet, so
> +         just record that we'll create a PLT entry for this symbol.  */

I'd prefer we set the address in the same place for VxWorks.  Also:

>  	{
> -	  h->root.u.def.section = htab->splt;
> -	  h->root.u.def.value = h->plt.offset;
> -	  /* For VxWorks, point at the PLT load stub rather than the
> -	     lazy resolution stub; this stub will become the canonical
> -	     function address.  */
>  	  if (htab->is_vxworks)
> -	    h->root.u.def.value += 8;
> +	    {
> +	      h->root.u.def.section = htab->splt;
> +	      h->root.u.def.value = h->plt.offset + 8;
> +	    }
> +	  else
> +	    hmips->has_plt_entry = TRUE;
>  	}

shouldn't the plt.offset be plt.plist->...?

> @@ -8923,13 +9190,27 @@ static bfd_boolean
>  mips_elf_allocate_lazy_stub (struct mips_elf_link_hash_entry *h, void **data)
>  {
>    struct mips_elf_link_hash_table *htab;
> +  struct bfd_link_info *info;
> +
> +  info = (struct bfd_link_info *) data;
> +  htab = mips_elf_hash_table (info);
> +  BFD_ASSERT (htab != NULL);
>  
> -  htab = (struct mips_elf_link_hash_table *) data;
>    if (h->needs_lazy_stub)
>      {
> +      bfd_boolean micromips_p = MICROMIPS_P (info->output_bfd);
> +      unsigned int other = micromips_p ? STO_MICROMIPS : 0;
> +      bfd_vma isa_bit = micromips_p;
> +
> +      BFD_ASSERT (htab->root.dynobj != NULL);
> +      if (h->root.plt.plist == NULL)
> +	mips_elf_make_plt_record (htab->sstubs->owner, &h->root.plt.plist);
> +      if (h->root.plt.plist == NULL)
> +	return FALSE;

This won't propagate the error up.  Other callbacks typically handle
this by having data be a pointer to a pointer that gets nullified
on error.  Or you could simply allocate the PLT record when setting
needs_lazy_stub in adjust_dynamic_symbol, I don't mind which.

Also, preexisting problem, but it should be "void *data" rather than
"void **data".
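
For reference, the nullify-on-error convention looks like this (all names
here are illustrative, not BFD's):

```c
#include <stdbool.h>
#include <stddef.h>

struct fake_info { int seen; };

/* The traversal passes a pointer to an info pointer; a callback
   nullifies it on failure so the caller can detect the error after
   the walk completes.  */
static bool
fake_callback (int entry, void *data)
{
  struct fake_info **infop = (struct fake_info **) data;

  if (entry < 0)		/* Simulated allocation failure.  */
    {
      *infop = NULL;		/* Signal the error to the caller.  */
      return false;		/* Stop the traversal.  */
    }
  (*infop)->seen++;
  return true;
}

static bool
fake_traverse (const int *entries, size_t n, struct fake_info *info)
{
  size_t i;

  for (i = 0; i < n; i++)
    if (!fake_callback (entries[i], &info))
      break;
  return info != NULL;		/* info was nullified on error.  */
}
```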

> @@ -8985,18 +9282,55 @@ _bfd_mips_elf_size_dynamic_sections (bfd
>  	    = (bfd_byte *) ELF_DYNAMIC_INTERPRETER (output_bfd);
>  	}
>  
> -      /* Create a symbol for the PLT, if we know that we are using it.  */
> -      if (htab->splt && htab->splt->size > 0 && htab->root.hplt == NULL)
> +      /* Figure out the size of the PLT header if we know that we
> +         are using it.  For the sake of cache alignment always use
> +         a standard header whenever any standard entries are present
> +         even if microMIPS entries are present as well.  This also
> +         lets the microMIPS header rely on the value of $v0 only set
> +         by microMIPS entries, for a small size reduction.
> +
> +         Also create the _PROCEDURE_LINKAGE_TABLE_ symbol if we
> +         haven't already in _bfd_elf_create_dynamic_sections.  */
> +      if (htab->splt && htab->splt->size > 0)
>  	{
> +	  bfd_boolean micromips_p = MICROMIPS_P (output_bfd);
> +	  bfd_boolean std_mips_p = !micromips_p || htab->plt_mips_offset;
> +	  unsigned int other = std_mips_p ? 0 : STO_MICROMIPS;
> +	  bfd_vma isa_bit = !std_mips_p;
>  	  struct elf_link_hash_entry *h;
> +	  bfd_vma size;

I'd prefer this as:

	  bfd_boolean micromips_p = (MICROMIPS_P (output_bfd)
				     && htab->plt_mips_offset == 0);
	  unsigned int other = micromips_p ? STO_MICROMIPS : 0;
	  bfd_vma isa_bit = micromips_p;

which also matches what you did earlier.

> @@ -9835,68 +10169,135 @@ _bfd_mips_elf_finish_dynamic_symbol (bfd
>  
>    BFD_ASSERT (!htab->is_vxworks);
>  
> -  if (h->plt.offset != MINUS_ONE && hmips->no_fn_stub)
> +  if (h->plt.plist != NULL && hmips->no_fn_stub
> +      && (h->plt.plist->mips_offset != MINUS_ONE
> +	  || h->plt.plist->comp_offset != MINUS_ONE))

The old code was written that way because plt.offset was used for both PLTs
and lazy stubs.  I don't think we need or want to test hmips->no_fn_stub now.
By the same token, the "else if" should probably now be "if".

> +      /* Now handle the PLT itself.  First the standard entry (the order
> +         does not matter, we just have to pick one).  */
> +      if (h->plt.plist->mips_offset != MINUS_ONE)
> +	{
> +	  const bfd_vma *plt_entry;
> +	  bfd_vma plt_offset;
>  
> -      /* Pick the load opcode.  */
> -      load = MIPS_ELF_LOAD_WORD (output_bfd);
> +	  plt_offset = htab->plt_header_size + h->plt.plist->mips_offset;
>  
> -      /* Fill in the PLT entry itself.  */
> -      plt_entry = mips_exec_plt_entry;
> -      bfd_put_32 (output_bfd, plt_entry[0] | got_address_high, loc);
> -      bfd_put_32 (output_bfd, plt_entry[1] | got_address_low | load, loc + 4);
> +	  BFD_ASSERT (plt_offset <= htab->splt->size);
>  
> -      if (! LOAD_INTERLOCKS_P (output_bfd))
> -	{
> -	  bfd_put_32 (output_bfd, plt_entry[2] | got_address_low, loc + 8);
> -	  bfd_put_32 (output_bfd, plt_entry[3], loc + 12);
> +	  /* Find out where the .plt entry should go.  */
> +	  loc = htab->splt->contents + plt_offset;
> +
> +	  /* Pick the load opcode.  */
> +	  load = MIPS_ELF_LOAD_WORD (output_bfd);
> +
> +	  /* Fill in the PLT entry itself.  */
> +	  plt_entry = mips_exec_plt_entry;
> +	  bfd_put_32 (output_bfd, plt_entry[0] | got_address_high, loc);
> +	  bfd_put_32 (output_bfd, plt_entry[1] | got_address_low | load,
> +		      loc + 4);
> +
> +	  if (! LOAD_INTERLOCKS_P (output_bfd))
> +	    {
> +	      bfd_put_32 (output_bfd, plt_entry[2] | got_address_low, loc + 8);
> +	      bfd_put_32 (output_bfd, plt_entry[3], loc + 12);
> +	    }
> +	  else
> +	    {
> +	      bfd_put_32 (output_bfd, plt_entry[3], loc + 8);
> +	      bfd_put_32 (output_bfd, plt_entry[2] | got_address_low,
> +			  loc + 12);
> +	    }
>  	}
> -      else
> +
> +      /* Now the compressed entry.  They come after any standard ones.  */
> +      if (h->plt.plist->comp_offset != MINUS_ONE)
>  	{
> -	  bfd_put_32 (output_bfd, plt_entry[3], loc + 8);
> -	  bfd_put_32 (output_bfd, plt_entry[2] | got_address_low, loc + 12);
> +	  bfd_vma plt_offset;
> +
> +	  plt_offset = (htab->plt_header_size + htab->plt_mips_offset
> +			+ h->plt.plist->comp_offset);
> +
> +	  BFD_ASSERT (plt_offset <= htab->splt->size);
> +
> +	  /* Find out where the .plt entry should go.  */
> +	  loc = htab->splt->contents + plt_offset;
> +
> +	  /* Fill in the PLT entry itself.  */
> +	  if (MICROMIPS_P (output_bfd))
> +	    {
> +	      const bfd_vma *plt_entry = micromips_o32_exec_plt_entry;
> +	      bfd_vma gotpc_offset;
> +	      bfd_vma loc_address;
> +
> +	      BFD_ASSERT (got_address % 4 == 0);
> +
> +	      loc_address = (htab->splt->output_section->vma
> +			     + htab->splt->output_offset + plt_offset);
> +	      gotpc_offset = got_address - ((loc_address | 3) ^ 3);
> +
> +	      /* ADDIUPC has a span of +/-16MB, check we're in range.  */
> +	      if (gotpc_offset + 0x1000000 >= 0x2000000)
> +		return FALSE;

We shouldn't just return false without reporting an error.
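
(For anyone else reading: two idioms in the hunk above are worth spelling
out.  `(loc_address | 3) ^ 3` rounds the PC down to a word boundary, which
also strips the microMIPS ISA bit, and the unsigned compare checks the
signed +/-16MB ADDIUPC range in one test.  A self-contained sketch, with
uint64_t standing in for bfd_vma:)

```c
#include <stdbool.h>
#include <stdint.h>

/* (x | 3) ^ 3 rounds an address down to a 4-byte boundary, clearing
   the microMIPS ISA bit in the process; it is equivalent to x & ~3.  */
static uint64_t
align_down_4 (uint64_t x)
{
  return (x | 3) ^ 3;
}

/* ADDIUPC reaches +/-16MB.  With unsigned arithmetic, offsets in
   [-0x1000000, 0xffffff] map to [0, 0x2000000) after adding 0x1000000,
   so a single compare checks both bounds (negative offsets having
   wrapped to large unsigned values).  */
static bool
addiupc_in_range (uint64_t gotpc_offset)
{
  return gotpc_offset + 0x1000000 < 0x2000000;
}
```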

> @@ -9905,21 +10306,39 @@ _bfd_mips_elf_finish_dynamic_symbol (bfd
>  	 binary where pointer equality matters.  */
>        sym->st_shndx = SHN_UNDEF;
>        if (h->pointer_equality_needed)
> -	sym->st_other = STO_MIPS_PLT;
> +	{
> +	  if (ELF_ST_IS_MIPS16 (sym->st_other))
> +	    sym->st_other
> +	      = ELF_ST_SET_MIPS16 (ELF_ST_SET_MIPS_PLT (sym->st_other));
> +	  else
> +	    sym->st_other = ELF_ST_SET_MIPS_PLT (sym->st_other);
> +	}

Please update the definition of ELF_ST_SET_MIPS_PLT instead, so that
STO_MIPS16 gets preserved in the same way as STO_MICROMIPS.
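
Something like the following, say (a sketch, not the macro as it should be
committed; the flag values are as in include/elf/mips.h, where STO_MIPS16
masks the whole compressed-ISA nibble and STO_MICROMIPS is one encoding
within it):

```c
/* st_other annotation values from include/elf/mips.h.  */
#define STO_MIPS16	0xf0
#define STO_MICROMIPS	0x80
#define STO_MIPS_PLT	0x08

/* Hypothetical reworking of ELF_ST_SET_MIPS_PLT: set the PLT bit while
   leaving the MIPS16/microMIPS annotation in the top nibble alone.  */
#define SET_MIPS_PLT_SKETCH(other) \
  (((other) & STO_MIPS16) | STO_MIPS_PLT)
```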

> +  else if (h->plt.plist != NULL && h->plt.plist->stub_offset != MINUS_ONE)
>      {
>        /* We've decided to create a lazy-binding stub.  */
> +      bfd_boolean micromips_p = MICROMIPS_P (output_bfd);
> +      bfd_vma stub_size = htab->function_stub_size;
>        bfd_byte stub[MIPS_FUNCTION_STUB_BIG_SIZE];
> +      bfd_vma stub_big_size;
> +      unsigned int other;
> +      bfd_vma isa_bit;
> +
> +      if (micromips_p)
> +	stub_big_size = MICROMIPS_FUNCTION_STUB_BIG_SIZE;
> +      else
> +	stub_big_size = MIPS_FUNCTION_STUB_BIG_SIZE;
>  
>        /* This symbol has a stub.  Set it up.  */
>  
>        BFD_ASSERT (h->dynindx != -1);
>  
> -      BFD_ASSERT ((htab->function_stub_size == MIPS_FUNCTION_STUB_BIG_SIZE)
> -                  || (h->dynindx <= 0xffff));
> +      BFD_ASSERT (stub_size == stub_big_size || h->dynindx <= 0xffff);
>  
>        /* Values up to 2^31 - 1 are allowed.  Larger values would cause
>  	 sign extension at runtime in the stub, resulting in a negative
> @@ -9928,35 +10347,80 @@ _bfd_mips_elf_finish_dynamic_symbol (bfd
>  	return FALSE;
>  
>        /* Fill the stub.  */
> -      idx = 0;
> -      bfd_put_32 (output_bfd, STUB_LW (output_bfd), stub + idx);
> -      idx += 4;
> -      bfd_put_32 (output_bfd, STUB_MOVE (output_bfd), stub + idx);
> -      idx += 4;
> -      if (htab->function_stub_size == MIPS_FUNCTION_STUB_BIG_SIZE)
> -        {
> -          bfd_put_32 (output_bfd, STUB_LUI ((h->dynindx >> 16) & 0x7fff),
> -                      stub + idx);
> -          idx += 4;
> -        }
> -      bfd_put_32 (output_bfd, STUB_JALR, stub + idx);
> -      idx += 4;
> +      if (micromips_p)
> +	{
[...]
> +	  isa_bit = 1;
> +	  other = STO_MICROMIPS;

Minor consistency thing, but please use the same idiom as you used elsewhere.

> @@ -10338,14 +10809,40 @@ mips_finish_exec_plt (bfd *output_bfd, s
>  
>    /* Install the PLT header.  */
>    loc = htab->splt->contents;
> -  bfd_put_32 (output_bfd, plt_entry[0] | gotplt_value_high, loc);
> -  bfd_put_32 (output_bfd, plt_entry[1] | gotplt_value_low, loc + 4);
> -  bfd_put_32 (output_bfd, plt_entry[2] | gotplt_value_low, loc + 8);
> -  bfd_put_32 (output_bfd, plt_entry[3], loc + 12);
> -  bfd_put_32 (output_bfd, plt_entry[4], loc + 16);
> -  bfd_put_32 (output_bfd, plt_entry[5], loc + 20);
> -  bfd_put_32 (output_bfd, plt_entry[6], loc + 24);
> -  bfd_put_32 (output_bfd, plt_entry[7], loc + 28);
> +  if (plt_entry == micromips_o32_exec_plt0_entry)
> +    {
> +      bfd_vma gotpc_offset;
> +      bfd_vma loc_address;
> +      size_t i;
> +
> +      BFD_ASSERT (gotplt_value % 4 == 0);
> +
> +      loc_address = (htab->splt->output_section->vma
> +		     + htab->splt->output_offset);
> +      gotpc_offset = gotplt_value - ((loc_address | 3) ^ 3);
> +
> +      /* ADDIUPC has a span of +/-16MB, check we're in range.  */
> +      if (gotpc_offset + 0x1000000 >= 0x2000000)
> +	return FALSE;

Here too we shouldn't just return false without reporting an error.

> @@ -10452,6 +10949,7 @@ _bfd_mips_elf_finish_dynamic_sections (b
>    asection *sgot;
>    struct mips_got_info *gg, *g;
>    struct mips_elf_link_hash_table *htab;
> +  bfd_boolean ok = TRUE;
>  
>    htab = mips_elf_hash_table (info);
>    BFD_ASSERT (htab != NULL);
> @@ -10867,10 +11365,10 @@ _bfd_mips_elf_finish_dynamic_sections (b
>        else
>  	{
>  	  BFD_ASSERT (!info->shared);
> -	  mips_finish_exec_plt (output_bfd, info);
> +	  ok = mips_finish_exec_plt (output_bfd, info);
>  	}
>      }
> -  return TRUE;
> +  return ok;
>  }

No need for the ok variable.  Just return early on error.

> @@ -12860,6 +13358,10 @@ _bfd_mips_elf_link_hash_table_create (bf
>        free (ret);
>        return NULL;
>      }
> +  ret->root.init_plt_refcount.refcount = 0;
> +  ret->root.init_plt_refcount.plist = NULL;
> +  ret->root.init_plt_offset.offset = 0;
> +  ret->root.init_plt_offset.plist = NULL;

These are unions, so we should only initialise one field each.

> +      for (i = 0, p = relplt->relocation;
> +	   i < count && p->address != gotplt_addr;
> +	   i++, p += bed->s->int_rels_per_ext_rel);

This looks needlessly quadratic.  If we start the search from the previous
match, won't a well-formed .plt only need two walks of the relocs?

> +      if (i < count)
> +	{
> +	  size_t namelen;
> +	  size_t len;
> +
> +	  *s = **p->sym_ptr_ptr;
> +	  /* Undefined syms won't have BSF_LOCAL or BSF_GLOBAL set.  Since
> +	     we are defining a symbol, ensure one of them is set.  */
> +	  if ((s->flags & BSF_LOCAL) == 0)
> +	    s->flags |= BSF_GLOBAL;
> +	  s->flags |= BSF_SYNTHETIC;
> +	  s->section = plt;
> +	  s->value = plt_offset;
> +	  s->name = names;
> +	  s->udata.i = other;
> +
> +	  len = strlen ((*p->sym_ptr_ptr)->name);
> +	  namelen = len + (p->addend != 0 ? addlen : 0) + suffixlen;
> +	  if (names + namelen > nend)
> +	    break;
> +
> +	  memcpy (names, (*p->sym_ptr_ptr)->name, len);
> +	  names += len;
> +	  if (p->addend != 0)
> +	    {
> +	      char buf[30], *a;
> +
> +	      memcpy (names, "+0x", sizeof ("+0x") - 1);
> +	      names += sizeof ("+0x") - 1;
> +	      bfd_sprintf_vma (abfd, buf, p->addend);
> +	      for (a = buf; *a == '0'; ++a)
> +		;
> +	      len = strlen (a);
> +	      memcpy (names, a, len);
> +	      names += len;
> +	    }

Maybe I'm showing my ignorance, but when do R_MIPS_JUMP_SLOTs have addends?
If they shouldn't, let's just ignore the addend or skip entries with nonzero
addends.

Rather than making a general assumption that udata.i == st_other for
BSF_SYNTHETIC, I think we should just do it for MIPS, and only (for now)
when the defined symbol is in a section called .plt.  What do others think?

Thanks,
Richard

