This is the mail archive of the binutils@sourceware.org mailing list for the binutils project.



Re: [gold][aarch64]Patch for Relaxation


Hi Cary, thanks. All done as described below.

Attached is the modified patch - relaxation.patch2. Also attached is the
diff from patch2 to patch1 for ease of review.

Thanks,
Han

On Fri, Oct 10, 2014 at 4:28 PM, Cary Coutant <ccoutant@google.com> wrote:
>
> > Here we have the patch for gold aarch64 backend to support relaxation.
> >
> > In short, relaxation is the linker's generation of stubs that fix the
> > out-of-range jumps/branches in the original object file.
> >
> > With this implementation, we are able to link a 456MB aarch64 application
> > (correctness of the result file, though, hasn't been verified.)
>
> Thanks. Here are my comments...
>
> -cary
>
>
> +      struct
> +      {
> + // For a global symbol, the symbol itself.
> + Symbol* symbol;
> +      } global;
> +      struct
> +      {
> + // For a local symbol, the object defining object.
>
> I know this is actually unchanged code from before, but this
> comment looks wrong. Should probably be "the defining object"
> or "the object defining the symbol".


Done
>
>
> +  // Do not change the value of the enums, they are used to index into
> +  // stub_insns array.
>
> In that case, you should probably assign values explicitly to each
> enum constant. Enum constants like this should generally be all
> upper case (like macros and static constants).


Done - Changed to all upper case and explicitly assigned values to them.
>
>
> +  // Determine the stub type for a certain relocation or st_none, if no stub is
> +  // needed.
> +  static Stub_type
> +  stub_type_for_reloc(unsigned int r_type, AArch64_address address,
> +      AArch64_address target);
> +
> + public:
>
> Unnecessary: everything above this was also public.


Done
>
>
> +  // The key class used to index the stub instance in the stub
> table's stub map.
> +  class Key
> +  {
> +  public:
>
> "public" should be indented one space.


Done
>
>
> +    // Return a hash value.
> +    size_t
> +    hash_value() const
> +    {
> +      return (this->stub_type_
> +      ^ this->r_sym_
> +      ^ gold::string_hash<char>(
> +    (this->r_sym_ != Reloc_stub::invalid_index)
> +    ? this->u_.relobj->name().c_str()
> +    : this->u_.symbol->name())
> +      ^ this->addend_);
>
> I know this is just copied from arm.cc, but it looks to me like
> this hash function might be a bit weak. In particular,
> stub_type_, r_sym_, and addend_ might easily cancel one another
> out, causing some preventable collisions. r_sym_ is also
> unnecessary (but harmless, I guess), when it's equal to
> invalid_index. I'd suggest at least shifting stub_type_, r_sym_,
> and addend_ by different amounts, or applying a different
> multiplier to each to distribute the bits around more, and
> applying r_sym_ only when it's a local symbol index.


Done by placing the name hash at bits 0-19, stub_type at bits 20-21, r_sym at
bits 22-26 (even if it's an invalid_index) and addend at bits 27-31.
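
To illustrate the weakness Cary points out, here is a small standalone toy
example (not part of the patch; the field values are made up):

  // With a plain XOR of the fields, two different keys can hash identically
  // because the fields cancel each other out.
  #include <cassert>
  #include <cstddef>

  int main()
  {
    // Key A: stub_type = 1, r_sym = 3, addend = 2
    // Key B: stub_type = 3, r_sym = 1, addend = 2
    std::size_t name_hash = 0x1234;   // same symbol name in both keys
    std::size_t a = name_hash ^ 1 ^ 3 ^ 2;
    std::size_t b = name_hash ^ 3 ^ 1 ^ 2;
    assert(a == b);                   // distinct keys, same hash value
    // Packing the fields into disjoint bit ranges, as patch2 now does,
    // avoids this particular kind of cancellation.
    std::size_t a2 = (name_hash & 0xFFFFF) | (1u << 20) | (3u << 22) | (2u << 27);
    std::size_t b2 = (name_hash & 0xFFFFF) | (3u << 20) | (1u << 22) | (2u << 27);
    assert(a2 != b2);
    return 0;
  }
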
>
>
> +    struct equal_to
> +    {
> +      bool
> +      operator()(const Key& k1, const Key& k2) const
> +      { return k1.eq(k2); }
> +    };
> +
> +  private:
>
> "private" should be indented one space.


Done.
>
>
> +    // Initialize look-up tables.
> +    Stub_table_list empty_stub_table_list(this->shnum(), NULL);
> +    this->stub_tables_.swap(empty_stub_table_list);
>
> It's simpler to just use resize(this->shnum()). The "swap" idiom
> is useful when you need to release memory, but not necessary for
> initialization.

Done
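
For reference, a minimal sketch contrasting the two idioms (the names below
are placeholders, not the actual gold types):

  #include <cstddef>
  #include <vector>

  void init_with_swap(std::vector<int*>& stub_tables, unsigned int shnum)
  {
    // The swap idiom builds a correctly sized vector and swaps it in; it is
    // useful when the old vector's memory should also be released.
    std::vector<int*> empty_list(shnum, NULL);
    stub_tables.swap(empty_list);
  }

  void init_with_resize(std::vector<int*>& stub_tables, unsigned int shnum)
  {
    // For plain initialization, resize() is enough; new elements are
    // value-initialized, i.e. NULL for pointers.
    stub_tables.resize(shnum);
  }
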
>
>
> +  // Call parent to relocate sections.
> +  Sized_relobj_file<size, big_endian>::do_relocate_sections(symtab, layout,
> +   pshdrs, of, pviews);
>
> Indentation is off by one.


Done.
>
>
> +  unsigned int reloc_size;
> +  if (sh_type == elfcpp::SHT_RELA)
> +    reloc_size = elfcpp::Elf_sizes<size>::rela_size;
>
> For REL sections, reloc_size will be left uninitialized here.
> If you don't expect REL sections, you should assert that.


Done.
>
>
> +  const unsigned int shdr_size = elfcpp::Elf_sizes<size>::shdr_size;
> +  const elfcpp::Shdr<size, big_endian> text_shdr(pshdrs + index * shdr_size);
> +  return this->section_is_scannable(text_shdr, index,
> +    out_sections[index], symtab);
>
> The naming doesn't really make it clear that
> section_needs_reloc_stub_scanning is looking at the REL/RELA
> section, while section_is_scannable is looking at the PROGBITS
> section that is being relocated. You've used text_shdr for the
> section header, so I'd suggest using text_shndx instead of index,
> and text_section_is_scannable for the function name.


Done.
>
>
> +      // Currently this only happens for a relaxed section.
> +      const Output_relaxed_input_section* poris =
> +      out_sections[index]->find_relaxed_input_section(this, index);
>
> This continuation line needs indentation (4 spaces).


Done.
>
>
> +  // Get the section contents. This does work for the case in which
> +  // we modify the contents of an input section.  We need to pass the
> +  // output view under such circumstances.
>
> Should the comment read "This does *not* work ..."?
>
> I don't see you handling the case where you modify the contents,
> so maybe that last sentence is not necessary?


Done by removing the confusing last 2 sentences.
>
>
> +// Create a stub group for input sections from BEGIN to END.  OWNER
> +// points to the input section to be the owner a new stub table.
>
> "OWNER points to the input section that will be the owner of the
> stub table."


Done.
>
>
> +  // Currently we convert ordinary input sections into relaxed sections only
> +  // at this point but we may want to support creating relaxed input section
> +  // very early.  So we check here to see if owner is already a relaxed
> +  // section.
> +  gold_assert(!owner->is_relaxed_input_section());
> +
> +  The_aarch64_input_section* input_section;
> +  if (owner->is_relaxed_input_section())
> +    {
> +      input_section = static_cast<The_aarch64_input_section*>(
> +  owner->relaxed_input_section());
> +    }
>
> Given the assert above, this code can't be reached. If you want to
> leave it in as a stub for the future, I'd suggest removing the above
> assert, and replacing the then clause with gold_unreachable().


Done by replacing gold_assert with gold_unreachable inside the if clause;
also updated the comments.
>
>
> +  do
> +    {
> +      if (p->is_input_section() || p->is_relaxed_input_section())
> + {
> +  // The stub table information for input sections live
> +  // in their objects.
> +  The_aarch64_relobj* aarch64_relobj =
> +      static_cast<The_aarch64_relobj*>(p->relobj());
> +  aarch64_relobj->set_stub_table(p->shndx(), stub_table);
> + }
> +      prev_p = p++;
> +    }
> +  while (prev_p != end);
>
> Suggest renaming begin/end to maybe first/last? It took me a minute
> to understand that "end" doesn't refer to the same "end" that an
> iterator would test against, and that you intended this to be an
> inclusive range.


Done renaming.
>
>
> Why not just "do { ... } while (p++ != last);"?

Done.
>
>
> +// Group input sections for stub generation.  GROUP_SIZE is roughly the limit
> +// of stub groups.  We grow a stub group by adding input section until the
> +// size is just below GROUP_SIZE.  The last input section will be converted
> +// into a stub table.  If STUB_ALWAYS_AFTER_BRANCH is false, we also add
> +// input section after the stub table, effectively double the group size.
>
> Do you mean "the last input section will be converted into a stub
> table *owner*"?
>
> And "we also add input section*s* after the stub table, effectively
> *doubling* the group size."


Done.
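
For the record, a simplified sketch of the growth rule that comment describes
(this is not the actual group_sections() implementation; Section and
attach_stub_table are hypothetical stand-ins):

  #include <stdint.h>
  #include <stddef.h>
  #include <vector>

  struct Section { uint64_t size; };

  void attach_stub_table(size_t /* owner_index */)
  { /* real code: convert the owner section and create its stub table */ }

  void group_sections_sketch(const std::vector<Section>& sections,
                             uint64_t group_size)
  {
    uint64_t accumulated = 0;
    size_t group_begin = 0;
    for (size_t i = 0; i < sections.size(); ++i)
      {
        if (accumulated + sections[i].size > group_size && i > group_begin)
          {
            // Adding section i would push the group past GROUP_SIZE, so the
            // previous section becomes the owner of a new stub table.  (When
            // STUB_ALWAYS_AFTER_BRANCH is false, the real code keeps adding
            // sections after the owner until a second GROUP_SIZE budget is
            // used, which is why a group can roughly double in size.)
            attach_stub_table(i - 1);
            group_begin = i;
            accumulated = 0;
          }
        accumulated += sections[i].size;
      }
    if (group_begin < sections.size())
      attach_stub_table(sections.size() - 1);  // close the trailing group
  }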
>
>
> + case HAS_STUB_SECTION:
> +  // Adding this section makes the post stub-section group larger
> +  // than GROUP_SIZE.
> +  gold_assert(false);
>
> You can just use gold_unreachable() here.
>
> +  // ET_EXEC files are valid input for --just-symbols/-R,
> +  // and we treat them as relocatable objects.
>
> For --just-symbols, shouldn't you use the base implementation
> of do_make_elf_object?


Done by invoking the parent implementation.
>
>
> +// This function scans a relocation sections for stub generation.
>
> s/sections/section/


Done.
>
>
> + // An error should never aroused in the above step. If so, please
> + // An error should never arouse, it is an "_NC" relocation.
>
> "arise".
>
> +    gold_error(_("Stub is too far away, try use a smaller value "
> + "for '--stub-group-size'. For example, 0x2000000."));
>
> Either "try using" or "try to use" or just "try a smaller value".


Done.



-- 
Han Shen |  Software Engineer |  shenhan@google.com |  +1-650-440-3330

Attachment: relaxation.patch2
Description: Binary data

diff --git a/gold/aarch64.cc b/gold/aarch64.cc
index 79f153a..a1b4011 100644
--- a/gold/aarch64.cc
+++ b/gold/aarch64.cc
@@ -286,11 +286,11 @@ class Output_data_got_aarch64 : public Output_data_got<size, big_endian>
 	// For a global symbol, the symbol itself.
 	Symbol* symbol;
       } global;
       struct
       {
-	// For a local symbol, the object defining object.
+	// For a local symbol, the object defining the symbol.
 	Sized_relobj_file<size, big_endian>* relobj;
 	// For a local symbol, the symbol index.
 	unsigned int index;
       } local;
     } u_;
@@ -319,24 +319,24 @@ class Reloc_stub
 
   // Do not change the value of the enums, they are used to index into
   // stub_insns array.
   typedef enum
   {
-    st_none,
+    ST_NONE = 0,
 
     // Using adrp/add pair, 4 insns (including alignment) without mem access,
     // the fastest stub. This has a limited jump distance, which is tested by
     // aarch64_valid_for_adrp_p.
-    st_adrp_branch,
+    ST_ADRP_BRANCH = 1,
 
     // Using ldr-absolute-address/br-register, 4 insns with 1 mem access,
     // unlimited in jump distance.
-    st_long_branch_abs,
+    ST_LONG_BRANCH_ABS = 2,
 
     // Using ldr/calculate-pcrel/jump, 8 insns (including alignment) with 1 mem
     // access, slowest one. Only used in position independent executables.
-    st_long_branch_pcrel,
+    ST_LONG_BRANCH_PCREL = 3,
 
   } Stub_type;
 
   // Branch range. This is used to calculate the section group size, as well as
   // determine whether a stub is needed.
@@ -363,17 +363,16 @@ class Reloc_stub
     typedef AArch64_relocate_functions<size, big_endian> Reloc;
     int64_t adrp_imm = (Reloc::Page(dest) - Reloc::Page(location)) >> 12;
     return adrp_imm >= MIN_ADRP_IMM && adrp_imm <= MAX_ADRP_IMM;
   }
 
-  // Determine the stub type for a certain relocation or st_none, if no stub is
+  // Determine the stub type for a certain relocation or ST_NONE, if no stub is
   // needed.
   static Stub_type
   stub_type_for_reloc(unsigned int r_type, AArch64_address address,
 		      AArch64_address target);
 
- public:
   Reloc_stub(Stub_type stub_type)
     : stub_type_(stub_type), offset_(invalid_offset),
       destination_address_(invalid_address)
   { }
 
@@ -440,11 +439,11 @@ class Reloc_stub
   { this->do_write(view, view_size); }
 
   // The key class used to index the stub instance in the stub table's stub map.
   class Key
   {
-  public:
+   public:
     Key(Stub_type stub_type, const Symbol* symbol, const Relobj* relobj,
 	unsigned int r_sym, int32_t addend)
       : stub_type_(stub_type), addend_(addend)
     {
       if (symbol != NULL)
@@ -497,17 +496,26 @@ class Reloc_stub
 
     // Return a hash value.
     size_t
     hash_value() const
     {
-      return (this->stub_type_
-	      ^ this->r_sym_
-	      ^ gold::string_hash<char>(
-		    (this->r_sym_ != Reloc_stub::invalid_index)
-		    ? this->u_.relobj->name().c_str()
-		    : this->u_.symbol->name())
-	      ^ this->addend_);
+      // This will occupy the lowest 20 bits in the final hash value.
+      size_t name_hash_value = 0xFFFFF & gold::string_hash<char>(
+	  (this->r_sym_ != Reloc_stub::invalid_index)
+	  ? this->u_.relobj->name().c_str()
+	  : this->u_.symbol->name());
+      // We only have 4 stub types, so we allocate 2 bits for it in the final
+      // hash value, and they will occupy bits 20 - 21.
+      size_t stub_type_hash_value = 0x03 & this->stub_type_;
+      // 5 bits for r_sym_, even if it's a invalid_index.
+      size_t r_sym_hash_value = this->r_sym_ & 0x1F;
+      // 5 bits for addend_.
+      size_t addend_hash_value = this->addend_ & 0x1F;
+      return name_hash_value
+	| (stub_type_hash_value << 20)
+	| (r_sym_hash_value << 22)
+	| (addend_hash_value << 27);
     }
 
     // Functors for STL associative containers.
     struct hash
     {
@@ -521,11 +529,11 @@ class Reloc_stub
       bool
       operator()(const Key& k1, const Key& k2) const
       { return k1.eq(k2); }
     };
 
-  private:
+ private:
     // Stub type.
     const Stub_type stub_type_;
     // If this is a local symbol, this is the index in the defining object.
     // Otherwise, it is invalid_index for a global symbol.
     unsigned int r_sym_;
@@ -586,34 +594,34 @@ template<int size, bool big_endian>
 const uint32_t
 Reloc_stub<size, big_endian>::stub_insns_[][10] =
   {
     // The first element of each group is the num of the insns.
 
-    // st_none
+    // ST_NONE
     {0, 0},
 
-    // st_adrp_branch
+    // ST_ADRP_BRANCH
     {
 	4,
 	0x90000010,	/*	adrp	ip0, X		   */
 			/*	  ADR_PREL_PG_HI21(X)	   */
 	0x91000210,	/*	add	ip0, ip0, :lo12:X  */
 			/*	  ADD_ABS_LO12_NC(X)	   */
 	0xd61f0200,	/*	br	ip0		   */
 	0x00000000,	/*	alignment padding	   */
     },
 
-    // st_long_branch_abs
+    // ST_LONG_BRANCH_ABS
     {
 	4,
 	0x58000050,	/*	ldr   ip0, 0x8		   */
 	0xd61f0200,	/*	br    ip0		   */
 	0x00000000,	/*	address field		   */
 	0x00000000,	/*	address fields		   */
     },
 
-    // st_long_branch_pcrel
+    // ST_LONG_BRANCH_PCREL
     {
       8,
 	0x58000090,	/*	ldr   ip0, 0x10            */
 	0x10000011,	/*	adr   ip1, #0		   */
 	0x8b110210,	/*	add   ip0, ip0, ip1	   */
@@ -624,11 +632,11 @@ Reloc_stub<size, big_endian>::stub_insns_[][10] =
 	0x00000000,	/*	alignment padding	   */
     }
   };
 
 
-// Determine the stub type for a certain relocation or st_none, if no stub is
+// Determine the stub type for a certain relocation or ST_NONE, if no stub is
 // needed.
 
 template<int size, bool big_endian>
 inline
 typename Reloc_stub<size, big_endian>::Stub_type
@@ -645,20 +653,20 @@ Reloc_stub<size, big_endian>::stub_type_for_reloc(
     default:
       gold_assert(false);
     }
 
   if (aarch64_valid_branch_offset_p(branch_offset))
-    return st_none;
+    return ST_NONE;
 
   if (aarch64_valid_for_adrp_p(location, dest))
-    return st_adrp_branch;
+    return ST_ADRP_BRANCH;
 
   if (parameters->options().output_is_position_independent()
       && parameters->options().output_is_executable())
-    return st_long_branch_pcrel;
+    return ST_LONG_BRANCH_PCREL;
 
-  return st_long_branch_abs;
+  return ST_LONG_BRANCH_ABS;
 }
 
 // A class to hold stubs for the ARM target.
 
 template<int size, bool big_endian>
@@ -919,12 +927,12 @@ class AArch64_relobj : public Sized_relobj_file<size, big_endian>
   scan_sections_for_stubs(The_target_aarch64*, const Symbol_table*,
 			  const Layout*);
 
   // Whether a section is a scannable text section.
   bool
-  section_is_scannable(const elfcpp::Shdr<size, big_endian>&, unsigned int,
-		       const Output_section*, const Symbol_table*);
+  text_section_is_scannable(const elfcpp::Shdr<size, big_endian>&, unsigned int,
+			    const Output_section*, const Symbol_table*);
 
   // Convert regular input section with index SHNDX to a relaxed section.
   void
   convert_input_section_to_relaxed_section(unsigned /* shndx */)
   {
@@ -940,12 +948,11 @@ class AArch64_relobj : public Sized_relobj_file<size, big_endian>
   {
     // Call parent's setup method.
     Sized_relobj_file<size, big_endian>::do_setup();
 
     // Initialize look-up tables.
-    Stub_table_list empty_stub_table_list(this->shnum(), NULL);
-    this->stub_tables_.swap(empty_stub_table_list);
+    this->stub_tables_.resize(this->shnum());
   }
 
   virtual void
   do_relocate_sections(
       const Symbol_table* symtab, const Layout* layout,
@@ -973,11 +980,11 @@ AArch64_relobj<size, big_endian>::do_relocate_sections(
     const unsigned char* pshdrs, Output_file* of,
     typename Sized_relobj_file<size, big_endian>::Views* pviews)
 {
   // Call parent to relocate sections.
   Sized_relobj_file<size, big_endian>::do_relocate_sections(symtab, layout,
-							   pshdrs, of, pviews);
+							    pshdrs, of, pviews);
 
   // We do not generate stubs if doing a relocatable link.
   if (parameters->options().relocatable())
     return;
 
@@ -1027,37 +1034,33 @@ AArch64_relobj<size, big_endian>::do_relocate_sections(
 // section for the input section and SYMTAB is the global symbol table used to
 // look up ICF information.
 
 template<int size, bool big_endian>
 bool
-AArch64_relobj<size, big_endian>::section_is_scannable(
-    const elfcpp::Shdr<size, big_endian>& shdr,
-    unsigned int shndx,
+AArch64_relobj<size, big_endian>::text_section_is_scannable(
+    const elfcpp::Shdr<size, big_endian>& text_shdr,
+    unsigned int text_shndx,
     const Output_section* os,
     const Symbol_table* symtab)
 {
   // Skip any empty sections, unallocated sections or sections whose
   // type are not SHT_PROGBITS.
-  if (shdr.get_sh_size() == 0
-      || (shdr.get_sh_flags() & elfcpp::SHF_ALLOC) == 0
-      || shdr.get_sh_type() != elfcpp::SHT_PROGBITS)
+  if (text_shdr.get_sh_size() == 0
+      || (text_shdr.get_sh_flags() & elfcpp::SHF_ALLOC) == 0
+      || text_shdr.get_sh_type() != elfcpp::SHT_PROGBITS)
     return false;
 
   // Skip any discarded or ICF'ed sections.
-  if (os == NULL || symtab->is_section_folded(this, shndx))
+  if (os == NULL || symtab->is_section_folded(this, text_shndx))
     return false;
 
-  // If this requires special offset handling, check to see if it is
-  // a relaxed section.  If this is not, then it is a merged section that
-  // we cannot handle.
-  if (this->is_output_section_offset_invalid(shndx))
-    {
-      const Output_relaxed_input_section* poris =
-	  os->find_relaxed_input_section(this, shndx);
-      if (poris == NULL)
-	return false;
-    }
+  // Skip exception frame.
+  if (strcmp(os->name(), ".eh_frame") == 0)
+    return false;
+
+  gold_assert(!this->is_output_section_offset_invalid(text_shndx) ||
+	      os->find_relaxed_input_section(this, text_shndx) != NULL);
 
   return true;
 }
 
 
@@ -1084,29 +1087,29 @@ AArch64_relobj<size, big_endian>::section_needs_reloc_stub_scanning(
   // Ignore reloc section with unexpected symbol table.  The
   // error will be reported in the final link.
   if (this->adjust_shndx(shdr.get_sh_link()) != this->symtab_shndx())
     return false;
 
-  unsigned int reloc_size;
-  if (sh_type == elfcpp::SHT_RELA)
-    reloc_size = elfcpp::Elf_sizes<size>::rela_size;
+  gold_assert(sh_type == elfcpp::SHT_RELA);
+  unsigned int reloc_size = elfcpp::Elf_sizes<size>::rela_size;
 
   // Ignore reloc section with unexpected entsize or uneven size.
   // The error will be reported in the final link.
   if (reloc_size != shdr.get_sh_entsize() || sh_size % reloc_size != 0)
     return false;
 
   // Ignore reloc section with bad info.  This error will be
   // reported in the final link.
-  unsigned int index = this->adjust_shndx(shdr.get_sh_info());
-  if (index >= this->shnum())
+  unsigned int text_shndx = this->adjust_shndx(shdr.get_sh_info());
+  if (text_shndx >= this->shnum())
     return false;
 
   const unsigned int shdr_size = elfcpp::Elf_sizes<size>::shdr_size;
-  const elfcpp::Shdr<size, big_endian> text_shdr(pshdrs + index * shdr_size);
-  return this->section_is_scannable(text_shdr, index,
-				    out_sections[index], symtab);
+  const elfcpp::Shdr<size, big_endian> text_shdr(pshdrs +
+						 text_shndx * shdr_size);
+  return this->text_section_is_scannable(text_shdr, text_shndx,
+					 out_sections[text_shndx], symtab);
 }
 
 
 // Scan relocations for stub generation.
 
@@ -1154,23 +1157,21 @@ AArch64_relobj<size, big_endian>::scan_sections_for_stubs(
 	    }
 	  else
 	    {
 	      // Currently this only happens for a relaxed section.
 	      const Output_relaxed_input_section* poris =
-	      out_sections[index]->find_relaxed_input_section(this, index);
+		  out_sections[index]->find_relaxed_input_section(this, index);
 	      gold_assert(poris != NULL);
 	      output_address = poris->address();
 	    }
 
 	  // Get the relocations.
 	  const unsigned char* prelocs = this->get_view(shdr.get_sh_offset(),
 							shdr.get_sh_size(),
 							true, false);
 
-	  // Get the section contents.	This does work for the case in which
-	  // we modify the contents of an input section.  We need to pass the
-	  // output view under such circumstances.
+	  // Get the section contents.
 	  section_size_type input_view_size = 0;
 	  const unsigned char* input_view =
 	      this->section_contents(index, &input_view_size, false);
 
 	  relinfo.reloc_shndx = i;
@@ -1431,34 +1432,27 @@ class AArch64_output_section : public Output_section
 		    std::vector<Output_relaxed_input_section*>&,
 		    const Task*);
 };  // End of AArch64_output_section
 
 
-// Create a stub group for input sections from BEGIN to END.  OWNER
-// points to the input section to be the owner a new stub table.
+// Create a stub group for input sections from FIRST to LAST. OWNER points to
+// the input section that will be the owner of the stub table.
 
 template<int size, bool big_endian> void
 AArch64_output_section<size, big_endian>::create_stub_group(
-    Input_section_list::const_iterator begin,
-    Input_section_list::const_iterator end,
+    Input_section_list::const_iterator first,
+    Input_section_list::const_iterator last,
     Input_section_list::const_iterator owner,
     The_target_aarch64* target,
     std::vector<Output_relaxed_input_section*>& new_relaxed_sections,
     const Task* task)
 {
   // Currently we convert ordinary input sections into relaxed sections only
-  // at this point but we may want to support creating relaxed input section
-  // very early.  So we check here to see if owner is already a relaxed
-  // section.
-  gold_assert(!owner->is_relaxed_input_section());
-
+  // at this point.
   The_aarch64_input_section* input_section;
   if (owner->is_relaxed_input_section())
-    {
-      input_section = static_cast<The_aarch64_input_section*>(
-	  owner->relaxed_input_section());
-    }
+    gold_unreachable();
   else
     {
       gold_assert(owner->is_input_section());
       // Create a new relaxed input section.  We need to lock the original
       // file.
@@ -1472,34 +1466,32 @@ AArch64_output_section<size, big_endian>::create_stub_group(
   The_stub_table* stub_table =
       target->new_stub_table(input_section);
 
   input_section->set_stub_table(stub_table);
 
-  Input_section_list::const_iterator p = begin;
-  Input_section_list::const_iterator prev_p;
-  // Look for input sections or relaxed input sections in [begin ... end].
+  Input_section_list::const_iterator p = first;
+  // Look for input sections or relaxed input sections in [first ... last].
   do
     {
       if (p->is_input_section() || p->is_relaxed_input_section())
 	{
 	  // The stub table information for input sections live
 	  // in their objects.
 	  The_aarch64_relobj* aarch64_relobj =
 	      static_cast<The_aarch64_relobj*>(p->relobj());
 	  aarch64_relobj->set_stub_table(p->shndx(), stub_table);
 	}
-      prev_p = p++;
     }
-  while (prev_p != end);
+  while (p++ != last);
 }
 
 
-// Group input sections for stub generation.  GROUP_SIZE is roughly the limit
-// of stub groups.  We grow a stub group by adding input section until the
-// size is just below GROUP_SIZE.  The last input section will be converted
-// into a stub table.  If STUB_ALWAYS_AFTER_BRANCH is false, we also add
-// input section after the stub table, effectively double the group size.
+// Group input sections for stub generation. GROUP_SIZE is roughly the limit of
+// stub groups. We grow a stub group by adding input sections until the size is
+// just below GROUP_SIZE. The last input section will be converted into a stub
+// table owner. If STUB_ALWAYS_AFTER_BRANCH is false, we also add input sections
+// after the stub table, effectively doubling the group size.
 //
 // This is similar to the group_sections() function in elf32-arm.c but is
 // implemented differently.
 
 template<int size, bool big_endian>
@@ -1567,12 +1559,12 @@ void AArch64_output_section<size, big_endian>::group_sections(
 	    break;
 
 	case HAS_STUB_SECTION:
 	  // Adding this section makes the post stub-section group larger
 	  // than GROUP_SIZE.
-	  gold_assert(false);
-	  // NOT SUPPORTED YET.
+	  gold_unreachable();
+	  // NOT SUPPORTED YET. For completeness only.
 	  if (section_end_offset - stub_table_end_offset >= group_size)
 	   {
 	     gold_assert(group_end != this->input_sections().end());
 	     this->create_stub_group(group_begin, group_end, stub_table,
 				     target, new_relaxed_sections, task);
@@ -2469,12 +2461,14 @@ Target_aarch64<size, big_endian>::do_make_elf_object(
     off_t offset, const elfcpp::Ehdr<size, big_endian>& ehdr)
 {
   int et = ehdr.get_e_type();
   // ET_EXEC files are valid input for --just-symbols/-R,
   // and we treat them as relocatable objects.
-  if (et == elfcpp::ET_REL
-      || (et == elfcpp::ET_EXEC && input_file->just_symbols()))
+  if (et == elfcpp::ET_EXEC && input_file->just_symbols())
+    return Sized_target<size, big_endian>::do_make_elf_object(
+	name, input_file, offset, ehdr);
+  else if (et == elfcpp::ET_REL)
     {
       AArch64_relobj<size, big_endian>* obj =
 	new AArch64_relobj<size, big_endian>(name, input_file, offset, ehdr);
       obj->setup();
       return obj;
@@ -2545,11 +2539,11 @@ Target_aarch64<size, big_endian>::scan_reloc_for_stub(
       gold_assert(false);
     }
 
   typename The_reloc_stub::Stub_type stub_type = The_reloc_stub::
       stub_type_for_reloc(r_type, address, destination);
-  if (stub_type == The_reloc_stub::st_none)
+  if (stub_type == The_reloc_stub::ST_NONE)
     return ;
 
   The_stub_table* stub_table = aarch64_relobj->stub_table(relinfo->data_shndx);
   gold_assert(stub_table != NULL);
 
@@ -2562,11 +2556,11 @@ Target_aarch64<size, big_endian>::scan_reloc_for_stub(
     }
   stub->set_destination_address(destination);
 }  // End of Target_aarch64::scan_reloc_for_stub
 
 
-// This function scans a relocation sections for stub generation.
+// This function scans a relocation section for stub generation.
 // The template parameter Relocate must be a class type which provides
 // a single function, relocate(), which implements the machine
 // specific part of a relocation.
 
 // BIG_ENDIAN is the endianness of the data.  SH_TYPE is the section type:
@@ -2812,37 +2806,37 @@ relocate_stub(The_reloc_stub* stub,
 
   Address dest = stub->destination_address();
 
   switch(stub->stub_type())
     {
-    case The_reloc_stub::st_adrp_branch:
+    case The_reloc_stub::ST_ADRP_BRANCH:
       {
 	// 1st reloc is ADR_PREL_PG_HI21
 	The_reloc_functions_status status =
 	    The_reloc_functions::adrp(view, dest, address);
-	// An error should never aroused in the above step. If so, please
+	// An error should never arise in the above step. If so, please
 	// check 'aarch64_valid_for_adrp_p'.
 	gold_assert(status == The_reloc_functions::STATUS_OKAY);
 
 	// 2nd reloc is ADD_ABS_LO12_NC
 	const AArch64_reloc_property* arp =
 	    aarch64_reloc_property_table->get_reloc_property(
 		elfcpp::R_AARCH64_ADD_ABS_LO12_NC);
 	gold_assert(arp != NULL);
 	status = The_reloc_functions::template
 	    rela_general<32>(view + 4, dest, 0, arp);
-	// An error should never arouse, it is an "_NC" relocation.
+	// An error should never arise, it is an "_NC" relocation.
 	gold_assert(status == The_reloc_functions::STATUS_OKAY);
       }
       break;
 
-    case The_reloc_stub::st_long_branch_abs:
+    case The_reloc_stub::ST_LONG_BRANCH_ABS:
       // 1st reloc is R_AARCH64_PREL64, at offset 8
       elfcpp::Swap<64,big_endian>::writeval(view + 8, dest);
       break;
 
-    case The_reloc_stub::st_long_branch_pcrel:
+    case The_reloc_stub::ST_LONG_BRANCH_PCREL:
       {
 	// "PC" calculation is the 2nd insn in the stub.
 	uint64_t offset = dest - (address + 4);
 	// Offset is placed at offset 4 and 5.
 	elfcpp::Swap<64,big_endian>::writeval(view + 16, offset);
@@ -4084,11 +4078,11 @@ maybe_apply_stub(unsigned int r_type,
 
   typename elfcpp::Elf_types<size>::Elf_Swxword addend = rela.get_r_addend();
   Address branch_target = psymval->value(object, 0) + addend;
   The_reloc_stub_type stub_type = The_reloc_stub::
     stub_type_for_reloc(r_type, address, branch_target);
-  if (stub_type == The_reloc_stub::st_none)
+  if (stub_type == The_reloc_stub::ST_NONE)
     return false;
 
   const The_aarch64_relobj* aarch64_relobj =
       static_cast<const The_aarch64_relobj*>(object);
   The_stub_table* stub_table = aarch64_relobj->stub_table(relinfo->data_shndx);
@@ -4106,11 +4100,11 @@ maybe_apply_stub(unsigned int r_type,
       aarch64_reloc_property_table->get_reloc_property(r_type);
   gold_assert(arp != NULL);
   This::Status status = This::template
       rela_general<32>(view, branch_offset, 0, arp);
   if (status != This::STATUS_OKAY)
-    gold_error(_("Stub is too far away, try use a smaller value "
+    gold_error(_("Stub is too far away, try a smaller value "
 		 "for '--stub-group-size'. For example, 0x2000000."));
   return true;
 }
 
 
@@ -5439,18 +5433,21 @@ Target_aarch64<size, big_endian>::Relocate::relocate(
       break;
 
     case elfcpp::R_AARCH64_PREL64:
       reloc_status = Reloc::template pcrela_ua<64>(
 	view, object, psymval, addend, address, reloc_property);
+      break;
 
     case elfcpp::R_AARCH64_PREL32:
       reloc_status = Reloc::template pcrela_ua<32>(
 	view, object, psymval, addend, address, reloc_property);
+      break;
 
     case elfcpp::R_AARCH64_PREL16:
       reloc_status = Reloc::template pcrela_ua<16>(
 	view, object, psymval, addend, address, reloc_property);
+      break;
 
     case elfcpp::R_AARCH64_ADR_PREL_PG_HI21_NC:
     case elfcpp::R_AARCH64_ADR_PREL_PG_HI21:
       reloc_status = Reloc::adrp(view, object, psymval, addend, address,
 				 reloc_property);
