This is the mail archive of the gdb@sourceware.org mailing list for the GDB project.



Re: Bad performance in updating JIT debug symbols


On 09/18/2016 02:54 PM, Yichao Yu wrote:

> Ref https://sourceware.org/ml/gdb/2016-03/msg00038.html
> 
> With some of my profiles, and a patch that helps somewhat (there's
> still bad scaling) but also causes a functional regression.
> 

Back then, I did a bunch more work to fix the performance issues.

One of the earliest things I did was to get rid of the section
sorting with qsort and use an incrementally updated addrmap
instead; that qsort sounds like what Fredrik is stumbling on.
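
To illustrate the shape of that change, here is a rough standalone
sketch (not the actual GDB code: the real patch maps address ranges to
vectors of obj_sections with a mutable splay-tree addrmap, and the
names below are made up; a flat array of pages just keeps the model
small):

/* Toy model of the incremental section map.  Each "page" holds the
   value for that part of the address space; a real addrmap stores one
   value per range in a splay tree, so updates are O(log n) rather
   than O(pages).  */

#include <stdio.h>
#include <stdlib.h>

#define NPAGES 16

/* Per-page value.  The real map stores a sorted vector of obj_section
   pointers; a counter is enough to show the add/remove callbacks.  */
struct page_val
{
  int nsections;
};

static void *page_map[NPAGES];  /* NULL means no section covers the page.  */

typedef void *(update_cb_ftype) (void *cb_data, void *curr_val);

/* Like addrmap_subregion_update in the patch: for every page in
   [START, END], hand the current value to CB and store whatever it
   returns.  Only the touched range is visited; nothing is re-sorted.  */

static void
subregion_update (int start, int end, update_cb_ftype *cb, void *cb_data)
{
  for (int p = start; p <= end; p++)
    page_map[p] = cb (cb_data, page_map[p]);
}

/* Callback for a section load: create the value if the region was
   empty, otherwise record one more overlapping section.  */

static void *
add_section_cb (void *cb_data, void *curr_val)
{
  struct page_val *v = curr_val;

  if (v == NULL)
    v = calloc (1, sizeof *v);
  v->nsections++;
  return v;
}

/* Callback for a section unload: drop the value once nothing covers
   the region any more.  */

static void *
remove_section_cb (void *cb_data, void *curr_val)
{
  struct page_val *v = curr_val;

  if (--v->nsections == 0)
    {
      free (v);
      return NULL;
    }
  return v;
}

int
main (void)
{
  subregion_update (2, 9, add_section_cb, NULL);      /* load section 1 */
  subregion_update (6, 12, add_section_cb, NULL);     /* load section 2 */
  subregion_update (2, 9, remove_section_cb, NULL);   /* unload section 1 */

  for (int p = 0; p < NPAGES; p++)
    {
      struct page_val *v = page_map[p];
      printf ("page %2d: %d section(s)\n", p,
              v != NULL ? v->nsections : 0);
    }
  return 0;
}

Each JIT load/unload only touches the regions it covers, instead of
appending to a big array and re-sorting the whole thing.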

Of course, fixing that bottleneck exposes another, and so on.

I fixed a bunch of such cascading bottlenecks back then, but never
managed to find the time to clean it up and post it to the list.

I've now rebased and force-pushed part of what I had to the
users/palves/jit-speedup branch.  It's regression-free for me
on x86_64 Fedora 23.  I'm also attaching the qsort patch below for
reference.

If you don't have any breakpoints set, then with that branch,
JIT debugging is snappy.  The problem is that the next bottleneck
triggers as soon as you have a breakpoint set.  That bottleneck
is the all-too-familiar breakpoint_re_set problem: whenever
new symbols appear, GDB deletes all breakpoint locations and
recreates all breakpoints from scratch, which involves walking
all objfiles...  GDB needs to be adjusted to be smarter here.  The
branch has some preliminary code for some kinds of internal
breakpoints, but user breakpoints need a bunch more work.
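
To put rough numbers on that (a back-of-the-envelope model in plain C,
not GDB code; the constants are invented):

/* Toy cost model: re-creating every breakpoint on every JIT event
   (today's behavior) vs. only checking each breakpoint against the
   objfile that just appeared.  Purely illustrative.  */

#include <stdio.h>

int
main (void)
{
  const long breakpoints = 10;
  const long jit_events = 5000;   /* one new objfile per event */
  long objfiles = 0;
  long full_reset = 0, incremental = 0;

  for (long e = 0; e < jit_events; e++)
    {
      objfiles++;

      /* breakpoint_re_set today: every breakpoint is deleted and
         re-created, walking all objfiles to find its locations.  */
      full_reset += breakpoints * objfiles;

      /* A smarter re-set would only look for new locations in the
         objfile that just appeared.  */
      incremental += breakpoints;
    }

  printf ("full re-set:  %ld location lookups\n", full_reset);
  printf ("incremental:  %ld location lookups\n", incremental);
  return 0;
}

With those numbers the full re-set does about 125 million lookups over
the session versus 50 thousand for the incremental approach, which is
why breakpoints kill the gains from the section map work.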

I did work on that, building on the experience of Keith's
earlier attempt at fixing it
(https://sourceware.org/gdb/wiki/BreakpointReset).  My version
is different from Keith's, but I'm not sure I like it that much.
I'd need more time to work on it, and I'm not likely to find that
time in the next few weeks, at least... :-/



From 5cd42e9fb13d25febe3da26595d044a57150cee5 Mon Sep 17 00:00:00 2001
From: Pedro Alves <palves@redhat.com>
Date: Fri, 1 Apr 2016 01:14:30 +0100
Subject: [PATCH] Get rid of sections sorting with qsort and use an
 incrementally updated addrmap instead

This gives a massive speed up.  The problem with the qsort is that we
re-run it for every one of the thousands of JIT loads/unloads, and when
you have thousands of objfiles, that gets very slow.  In this scenario,
we're constantly adding/removing a handful of obj_sections to/from a set
of thousands of already-sorted obj_sections.  It's much cheaper to do
an incremental update.

I'm using a mutable addrmap for this, but I needed to add a new
primitive that allows updating a region's object, to handle the case
of overlapping sections.  The only primitive previously available
allows setting a value only for a currently-NULL region.
---
 gdb/addrmap.c    |  94 ++++++++-----
 gdb/addrmap.h    |  45 +++++--
 gdb/objfiles.c   | 397 +++++++++++++++++++++++++------------------------------
 gdb/objfiles.h   |  30 ++---
 gdb/solib-svr4.c |  17 ---
 gdb/symfile.c    |  15 ++-
 6 files changed, 297 insertions(+), 301 deletions(-)

diff --git a/gdb/addrmap.c b/gdb/addrmap.c
index 6cdad38..7a94545 100644
--- a/gdb/addrmap.c
+++ b/gdb/addrmap.c
@@ -29,9 +29,10 @@
    implementation.  */
 struct addrmap_funcs
 {
-  void (*set_empty) (struct addrmap *self,
-                     CORE_ADDR start, CORE_ADDR end_inclusive,
-                     void *obj);
+  void (*subregion_update) (struct addrmap *self,
+			    CORE_ADDR start, CORE_ADDR end_inclusive,
+			    addrmap_subregion_update_callback_ftype *cb,
+			    void *cb_data);
   void *(*find) (struct addrmap *self, CORE_ADDR addr);
   struct addrmap *(*create_fixed) (struct addrmap *self,
                                    struct obstack *obstack);
@@ -47,11 +48,12 @@ struct addrmap
 
 
 void
-addrmap_set_empty (struct addrmap *map,
-                   CORE_ADDR start, CORE_ADDR end_inclusive,
-                   void *obj)
+addrmap_subregion_update (struct addrmap *map,
+			  CORE_ADDR start, CORE_ADDR end_inclusive,
+			  addrmap_subregion_update_callback_ftype *cb,
+			  void *cb_data)
 {
-  map->funcs->set_empty (map, start, end_inclusive, obj);
+  map->funcs->subregion_update (map, start, end_inclusive, cb, cb_data);
 }
 
 
@@ -83,6 +85,37 @@ addrmap_foreach (struct addrmap *map, addrmap_foreach_fn fn, void *data)
 {
   return map->funcs->foreach (map, fn, data);
 }
+
+
+
+/* addrmap_subregion_update callback for addrmap_set_empty.  Sets all
+   regions that are currently mapped to NULL to CB_DATA instead.  */
+
+static void *
+addrmap_set_empty_callback (void *cb_data, void *curr_obj)
+{
+  if (curr_obj == NULL)
+    return cb_data;
+  return curr_obj;
+}
+
+/* See addrmap.h.  */
+
+void
+addrmap_set_empty (struct addrmap *self,
+		   CORE_ADDR start, CORE_ADDR end_inclusive,
+		   void *obj)
+{
+  /* If we're being asked to set all empty portions of the given
+     address range to empty, then probably the caller is confused.
+     (If that turns out to be useful in some cases, then we can change
+     this to simply return, since overriding NULL with NULL is a
+     no-op.)  */
+  gdb_assert (obj != NULL);
+
+  addrmap_subregion_update (self, start, end_inclusive,
+			    addrmap_set_empty_callback, obj);
+}
 
 /* Fixed address maps.  */
 
@@ -113,12 +146,13 @@ struct addrmap_fixed
 
 
 static void
-addrmap_fixed_set_empty (struct addrmap *self,
-                   CORE_ADDR start, CORE_ADDR end_inclusive,
-                   void *obj)
+addrmap_fixed_subregion_update (struct addrmap *self,
+				CORE_ADDR start, CORE_ADDR end_inclusive,
+				addrmap_subregion_update_callback_ftype *cb,
+				void *cb_data)
 {
   internal_error (__FILE__, __LINE__,
-                  "addrmap_fixed_set_empty: "
+                  "addrmap_fixed_subregion_update: "
                   "fixed addrmaps can't be changed\n");
 }
 
@@ -197,7 +231,7 @@ addrmap_fixed_foreach (struct addrmap *self, addrmap_foreach_fn fn,
 
 static const struct addrmap_funcs addrmap_fixed_funcs =
 {
-  addrmap_fixed_set_empty,
+  addrmap_fixed_subregion_update,
   addrmap_fixed_find,
   addrmap_fixed_create_fixed,
   addrmap_fixed_relocate,
@@ -330,21 +364,15 @@ force_transition (struct addrmap_mutable *self, CORE_ADDR addr)
 
 
 static void
-addrmap_mutable_set_empty (struct addrmap *self,
-                           CORE_ADDR start, CORE_ADDR end_inclusive,
-                           void *obj)
+addrmap_mutable_subregion_update (struct addrmap *self,
+				  CORE_ADDR start, CORE_ADDR end_inclusive,
+				  addrmap_subregion_update_callback_ftype *cb,
+				  void *cb_data)
 {
   struct addrmap_mutable *map = (struct addrmap_mutable *) self;
   splay_tree_node n, next;
   void *prior_value;
 
-  /* If we're being asked to set all empty portions of the given
-     address range to empty, then probably the caller is confused.
-     (If that turns out to be useful in some cases, then we can change
-     this to simply return, since overriding NULL with NULL is a
-     no-op.)  */
-  gdb_assert (obj);
-
   /* We take a two-pass approach, for simplicity.
      - Establish transitions where we think we might need them.
      - First pass: change all NULL regions to OBJ.
@@ -360,8 +388,11 @@ addrmap_mutable_set_empty (struct addrmap *self,
        n && addrmap_node_key (n) <= end_inclusive;
        n = addrmap_splay_tree_successor (map, addrmap_node_key (n)))
     {
-      if (! addrmap_node_value (n))
-        addrmap_node_set_value (n, obj);
+      void *value;
+
+      value = addrmap_node_value (n);
+      value = cb (cb_data, value);
+      addrmap_node_set_value (n, value);
     }
 
   /* Walk the area again, removing transitions from any value to
@@ -386,12 +417,15 @@ addrmap_mutable_set_empty (struct addrmap *self,
 static void *
 addrmap_mutable_find (struct addrmap *self, CORE_ADDR addr)
 {
-  /* Not needed yet.  */
-  internal_error (__FILE__, __LINE__,
-                  _("addrmap_find is not implemented yet "
-                    "for mutable addrmaps"));
-}
+  struct addrmap_mutable *map = (struct addrmap_mutable *) self;
+
+  splay_tree_node n
+    = addrmap_splay_tree_lookup (map, addr);
 
+  if (n == NULL)
+    n = addrmap_splay_tree_predecessor (map, addr);
+  return n != NULL ? addrmap_node_value (n) : NULL;
+}
 
 /* A function to pass to splay_tree_foreach to count the number of nodes
    in the tree.  */
@@ -504,7 +538,7 @@ addrmap_mutable_foreach (struct addrmap *self, addrmap_foreach_fn fn,
 
 static const struct addrmap_funcs addrmap_mutable_funcs =
 {
-  addrmap_mutable_set_empty,
+  addrmap_mutable_subregion_update,
   addrmap_mutable_find,
   addrmap_mutable_create_fixed,
   addrmap_mutable_relocate,
diff --git a/gdb/addrmap.h b/gdb/addrmap.h
index 5df3aa3..0f7f09c 100644
--- a/gdb/addrmap.h
+++ b/gdb/addrmap.h
@@ -54,31 +54,48 @@ struct addrmap *addrmap_create_mutable (struct obstack *obstack);
    let the caller construct more complicated operations from that,
    along with some others for traversal?
 
-   It turns out this is the mutation operation we want to use all the
-   time, at least for now.  Our immediate use for address maps is to
-   represent lexical blocks whose address ranges are not contiguous.
-   We walk the tree of lexical blocks present in the debug info, and
-   only create 'struct block' objects after we've traversed all a
-   block's children.  If a lexical block declares no local variables
-   (and isn't the lexical block for a function's body), we omit it
-   from GDB's data structures entirely.
+   It turns out this is the mutation operation we want to use most of
+   the time.  One use for address maps is to represent lexical blocks
+   whose address ranges are not contiguous.  We walk the tree of
+   lexical blocks present in the debug info, and only create 'struct
+   block' objects after we've traversed all a block's children.  If a
+   lexical block declares no local variables (and isn't the lexical
+   block for a function's body), we omit it from GDB's data structures
+   entirely.
 
    However, this means that we don't decide to create a block (and
    thus record it in the address map) until after we've traversed its
    children.  If we do decide to create the block, we do so at a time
    when all its children have already been recorded in the map.  So
    this operation --- change only those addresses left unset --- is
-   actually the operation we want to use every time.
+   actually the operation we want to use then.
 
-   It seems simpler to let the code which operates on the
-   representation directly deal with the hair of implementing these
-   semantics than to provide an interface which allows it to be
-   implemented efficiently, but doesn't reveal too much of the
-   representation.  */
+   If you need more flexibility, use addrmap_subregion_update instead.  */
 void addrmap_set_empty (struct addrmap *map,
                         CORE_ADDR start, CORE_ADDR end_inclusive,
                         void *obj);
 
+/* The type of the CALLBACK parameter of addrmap_subregion_update.
+   CB_DATA is the CB_DATA argument passed to addrmap_subregion_update.
+   CURR_VAL is the current value of a region.  */
+typedef void *(addrmap_subregion_update_callback_ftype) (void *cb_data,
+							 void *curr_val);
+
+/* In the mutable address map MAP, create a new region for addresses
+   from START to END_INCLUSIVE.  Then for each transition/subregion
+   within the new region, call CALLBACK with the existing value, and
+   replace the value with the one the callback returns.  Contiguous
+   subregions with the same value are compacted afterwards.
+
+   As the name suggests, END_INCLUSIVE is inclusive.  This convention
+   is unusual, but it allows callers to accurately specify ranges that
+   abut the top of the address space, and ranges that cover the entire
+   address space.  */
+void addrmap_subregion_update (struct addrmap *map,
+			       CORE_ADDR start, CORE_ADDR end_inclusive,
+			       addrmap_subregion_update_callback_ftype *cb,
+			       void *cb_data);
+
 /* Return the object associated with ADDR in MAP.  */
 void *addrmap_find (struct addrmap *map, CORE_ADDR addr);
 
diff --git a/gdb/objfiles.c b/gdb/objfiles.c
index f022d10..0a708bb 100644
--- a/gdb/objfiles.c
+++ b/gdb/objfiles.c
@@ -58,23 +58,13 @@
 
 DEFINE_REGISTRY (objfile, REGISTRY_ACCESS_FIELD)
 
-/* Externally visible variables that are owned by this module.
-   See declarations in objfile.h for more info.  */
+typedef struct obj_section *obj_section_p;
+DEF_VEC_P (obj_section_p);
 
 struct objfile_pspace_info
 {
-  struct obj_section **sections;
-  int num_sections;
-
-  /* Nonzero if object files have been added since the section map
-     was last updated.  */
-  int new_objfiles_available;
-
-  /* Nonzero if the section map MUST be updated before use.  */
-  int section_map_dirty;
-
-  /* Nonzero if section map updates should be inhibited if possible.  */
-  int inhibit_updates;
+  struct obstack obj_section_map_obstack;
+  struct addrmap *obj_section_map;
 };
 
 /* Per-program-space data key.  */
@@ -85,7 +75,7 @@ objfiles_pspace_data_cleanup (struct program_space *pspace, void *arg)
 {
   struct objfile_pspace_info *info = (struct objfile_pspace_info *) arg;
 
-  xfree (info->sections);
+  obstack_free (&info->obj_section_map_obstack, NULL);
   xfree (info);
 }
 
@@ -103,6 +93,10 @@ get_objfile_pspace_data (struct program_space *pspace)
     {
       info = XCNEW (struct objfile_pspace_info);
       set_program_space_data (pspace, objfiles_pspace_data, info);
+
+      obstack_init (&info->obj_section_map_obstack);
+      info->obj_section_map
+	= addrmap_create_mutable (&info->obj_section_map_obstack);
     }
 
   return info;
@@ -448,9 +442,6 @@ allocate_objfile (bfd *abfd, const char *name, int flags)
   /* Save passed in flag bits.  */
   objfile->flags |= flags;
 
-  /* Rebuild section map next time we need it.  */
-  get_objfile_pspace_data (objfile->pspace)->new_objfiles_available = 1;
-
   return objfile;
 }
 
@@ -634,6 +625,10 @@ free_objfile (struct objfile *objfile)
   /* First notify observers that this objfile is about to be freed.  */
   observer_notify_free_objfile (objfile);
 
+  /* Do this before deleting the bfd, as this references the bfd
+     sections.  */
+  obj_section_map_remove_objfile (objfile);
+
   /* Free all separate debug objfiles.  */
   free_objfile_separate_debug (objfile);
 
@@ -742,9 +737,6 @@ free_objfile (struct objfile *objfile)
   psymbol_bcache_free (objfile->psymbol_cache);
   obstack_free (&objfile->objfile_obstack, 0);
 
-  /* Rebuild section map next time we need it.  */
-  get_objfile_pspace_data (objfile->pspace)->section_map_dirty = 1;
-
   /* Free the map for static links.  There's no need to free static link
      themselves since they were allocated on the objstack.  */
   if (objfile->static_links != NULL)
@@ -897,6 +889,8 @@ objfile_relocate1 (struct objfile *objfile,
   if (objfile->sf)
     objfile->sf->qf->relocate (objfile, new_offsets, delta);
 
+  obj_section_map_remove_objfile (objfile);
+
   {
     int i;
 
@@ -904,8 +898,7 @@ objfile_relocate1 (struct objfile *objfile,
       (objfile->section_offsets)->offsets[i] = ANOFFSET (new_offsets, i);
   }
 
-  /* Rebuild section map next time we need it.  */
-  get_objfile_pspace_data (objfile->pspace)->section_map_dirty = 1;
+  obj_section_map_add_objfile (objfile);
 
   /* Update the table in exec_ops, used to read memory.  */
   ALL_OBJFILE_OSECTIONS (objfile, s)
@@ -1130,37 +1123,43 @@ have_minimal_symbols (void)
   return 0;
 }
 
-/* Qsort comparison function.  */
+static struct obj_section *preferred_obj_section (struct obj_section *a,
+						  struct obj_section *b);
+
+/* Returns true if the first argument is strictly less than the
+   second, useful for VEC_lower_bound.  We keep sections sorted by
+   starting address.  */
 
 static int
-qsort_cmp (const void *a, const void *b)
+obj_section_lessthan (struct obj_section *sect1, struct obj_section *sect2)
 {
-  const struct obj_section *sect1 = *(const struct obj_section **) a;
-  const struct obj_section *sect2 = *(const struct obj_section **) b;
+  const struct objfile *const objfile1 = sect1->objfile;
+  const struct objfile *const objfile2 = sect2->objfile;
   const CORE_ADDR sect1_addr = obj_section_addr (sect1);
   const CORE_ADDR sect2_addr = obj_section_addr (sect2);
 
-  if (sect1_addr < sect2_addr)
-    return -1;
-  else if (sect1_addr > sect2_addr)
-    return 1;
-  else
+  gdb_assert (objfile1->pspace == objfile2->pspace);
+
+  /* We use VEC_lower_bound to find the element's index, in order to
+     remove it.  Avoid falling into the degenerate/slow "Case B"
+     below.  */
+  if (sect1 == sect2)
+    return 0;
+
+  if (sect1_addr == sect2_addr)
     {
       /* Sections are at the same address.  This could happen if
 	 A) we have an objfile and a separate debuginfo.
 	 B) we are confused, and have added sections without proper relocation,
 	 or something like that.  */
 
-      const struct objfile *const objfile1 = sect1->objfile;
-      const struct objfile *const objfile2 = sect2->objfile;
-
       if (objfile1->separate_debug_objfile == objfile2
 	  || objfile2->separate_debug_objfile == objfile1)
 	{
-	  /* Case A.  The ordering doesn't matter: separate debuginfo files
-	     will be filtered out later.  */
-
-	  return 0;
+	  /* Order "better" sections first.  We prefer the one that came
+	     from the real object, rather than the one from separate
+	     debuginfo.  */
+	  return preferred_obj_section (sect1, sect2) == sect1;
 	}
 
       /* Case B.  Maintain stable sort order, so bugs in GDB are easier to
@@ -1180,9 +1179,9 @@ qsort_cmp (const void *a, const void *b)
 
 	  ALL_OBJFILE_OSECTIONS (objfile1, osect)
 	    if (osect == sect1)
-	      return -1;
-	    else if (osect == sect2)
 	      return 1;
+	    else if (osect == sect2)
+	      return 0;
 
 	  /* We should have found one of the sections before getting here.  */
 	  gdb_assert_not_reached ("section not found");
@@ -1193,24 +1192,22 @@ qsort_cmp (const void *a, const void *b)
 
 	  const struct objfile *objfile;
 
-	  ALL_OBJFILES (objfile)
+	  ALL_PSPACE_OBJFILES (objfile1->pspace, objfile)
 	    if (objfile == objfile1)
-	      return -1;
-	    else if (objfile == objfile2)
 	      return 1;
+	    else if (objfile == objfile2)
+	      return 0;
 
 	  /* We should have found one of the objfiles before getting here.  */
 	  gdb_assert_not_reached ("objfile not found");
 	}
     }
 
-  /* Unreachable.  */
-  gdb_assert_not_reached ("unexpected code path");
-  return 0;
+  return sect1_addr < sect2_addr;
 }
 
-/* Select "better" obj_section to keep.  We prefer the one that came from
-   the real object, rather than the one from separate debuginfo.
+/* Select which obj_section is "better".  We prefer the one that came
+   from the real object, rather than the one from separate debuginfo.
    Most of the time the two sections are exactly identical, but with
    prelinking the .rel.dyn section in the real object may have different
    size.  */
@@ -1251,72 +1248,44 @@ insert_section_p (const struct bfd *abfd,
   return 1;
 }
 
-/* Filter out overlapping sections where one section came from the real
-   objfile, and the other from a separate debuginfo file.
-   Return the size of table after redundant sections have been eliminated.  */
-
-static int
-filter_debuginfo_sections (struct obj_section **map, int map_size)
-{
-  int i, j;
 
-  for (i = 0, j = 0; i < map_size - 1; i++)
-    {
-      struct obj_section *const sect1 = map[i];
-      struct obj_section *const sect2 = map[i + 1];
-      const struct objfile *const objfile1 = sect1->objfile;
-      const struct objfile *const objfile2 = sect2->objfile;
-      const CORE_ADDR sect1_addr = obj_section_addr (sect1);
-      const CORE_ADDR sect2_addr = obj_section_addr (sect2);
-
-      if (sect1_addr == sect2_addr
-	  && (objfile1->separate_debug_objfile == objfile2
-	      || objfile2->separate_debug_objfile == objfile1))
-	{
-	  map[j++] = preferred_obj_section (sect1, sect2);
-	  ++i;
-	}
-      else
-	map[j++] = sect1;
-    }
-
-  if (i < map_size)
-    {
-      gdb_assert (i == map_size - 1);
-      map[j++] = map[i];
-    }
-
-  /* The map should not have shrunk to less than half the original size.  */
-  gdb_assert (map_size / 2 <= j);
-
-  return j;
-}
+/* Overlapping sections, e.g.:
+| A                       |
+               |  B |
+                      |  C   |
+*/
 
-/* Filter out overlapping sections, issuing a warning if any are found.
-   Overlapping sections could really be overlay sections which we didn't
-   classify as such in insert_section_p, or we could be dealing with a
-   corrupt binary.  */
+/* Issue a complaint about overlapping sections.  Overlapping sections
+   could really be overlay sections which we didn't classify as such
+   in insert_section_p, or we could be dealing with a corrupt
+   binary.  */
 
-static int
-filter_overlapping_sections (struct obj_section **map, int map_size)
+static void
+complaint_overlapping_sections (struct obj_section **map, int map_size)
 {
-  int i, j;
+  int i;
 
-  for (i = 0, j = 0; i < map_size - 1; )
+  for (i = 0; i < map_size - 1; )
     {
       int k;
 
-      map[j++] = map[i];
       for (k = i + 1; k < map_size; k++)
 	{
 	  struct obj_section *const sect1 = map[i];
 	  struct obj_section *const sect2 = map[k];
+	  const struct objfile *const objfile1 = sect1->objfile;
+	  const struct objfile *const objfile2 = sect2->objfile;
 	  const CORE_ADDR sect1_addr = obj_section_addr (sect1);
 	  const CORE_ADDR sect2_addr = obj_section_addr (sect2);
 	  const CORE_ADDR sect1_endaddr = obj_section_endaddr (sect1);
 
 	  gdb_assert (sect1_addr <= sect2_addr);
 
+	  if (sect1_addr == sect2_addr
+	      && (objfile1->separate_debug_objfile == objfile2
+		  || objfile2->separate_debug_objfile == objfile1))
+	    continue;
+
 	  if (sect1_endaddr <= sect2_addr)
 	    break;
 	  else
@@ -1348,85 +1317,126 @@ filter_overlapping_sections (struct obj_section **map, int map_size)
 	}
       i = k;
     }
+}
 
-  if (i < map_size)
-    {
-      gdb_assert (i == map_size - 1);
-      map[j++] = map[i];
-    }
+/* Value stored in the section addrmap: the obj_sections that cover a
+   given address range, kept sorted by obj_section_lessthan.  */
+
+struct obj_section_map_addrmap_value
+{
+  VEC (obj_section_p) *sections;
+};
 
-  return j;
-}
+/* addrmap_subregion_update callback for removing a section: drop
+   CB_DATA (an obj_section) from the region's sorted vector, and clear
+   the region's value once the vector becomes empty.  */
+
+static void *
+obj_section_map_addrmap_remove_section_cb (void *cb_data, void *curr_val)
+{
+  struct obj_section_map_addrmap_value *sect_map_val
+    = (struct obj_section_map_addrmap_value *) curr_val;
+  struct obj_section *s = (struct obj_section *) cb_data;
+  struct obj_section *removed;
+  int i;
 
+  gdb_assert (sect_map_val != NULL);
 
-/* Update PMAP, PMAP_SIZE with sections from all objfiles, excluding any
-   TLS, overlay and overlapping sections.  */
+  i = VEC_lower_bound (obj_section_p, sect_map_val->sections, s,
+		       obj_section_lessthan);
+  removed = VEC_ordered_remove (obj_section_p, sect_map_val->sections, i);
+  gdb_assert (removed == s);
 
-static void
-update_section_map (struct program_space *pspace,
-		    struct obj_section ***pmap, int *pmap_size)
+  if (VEC_empty (obj_section_p, sect_map_val->sections))
+    {
+      VEC_free (obj_section_p, sect_map_val->sections);
+      return NULL;
+    }
+  return sect_map_val;
+}
+
+/* Remove OBJFILE's sections from its program space's section map.
+   This is a no-op if the objfile was never added to the map.  */
+
+void
+obj_section_map_remove_objfile (struct objfile *objfile)
 {
   struct objfile_pspace_info *pspace_info;
-  int alloc_size, map_size, i;
-  struct obj_section *s, **map;
-  struct objfile *objfile;
+  struct obj_section *s;
 
-  pspace_info = get_objfile_pspace_data (pspace);
-  gdb_assert (pspace_info->section_map_dirty != 0
-	      || pspace_info->new_objfiles_available != 0);
+  if ((objfile->flags & OBJF_IN_SECTIONS_MAP) == 0)
+    return;
 
-  map = *pmap;
-  xfree (map);
+  pspace_info = get_objfile_pspace_data (objfile->pspace);
 
-  alloc_size = 0;
-  ALL_PSPACE_OBJFILES (pspace, objfile)
-    ALL_OBJFILE_OSECTIONS (objfile, s)
-      if (insert_section_p (objfile->obfd, s->the_bfd_section))
-	alloc_size += 1;
+  ALL_OBJFILE_OSECTIONS (objfile, s)
+    if (insert_section_p (objfile->obfd, s->the_bfd_section))
+      {
+	CORE_ADDR addr = obj_section_addr (s);
+	CORE_ADDR endaddr = obj_section_endaddr (s);
+
+	if (addr != endaddr)
+	  addrmap_subregion_update (pspace_info->obj_section_map,
+				    addr, endaddr - 1,
+				    obj_section_map_addrmap_remove_section_cb,
+				    s);
+      }
+}
 
-  /* This happens on detach/attach (e.g. in gdb.base/attach.exp).  */
-  if (alloc_size == 0)
-    {
-      *pmap = NULL;
-      *pmap_size = 0;
-      return;
-    }
+/* addrmap_subregion_update callback for adding a section: insert
+   CB_DATA (an obj_section) into the region's sorted vector, creating
+   the vector if the region was previously empty.  */
+
+static void *
+obj_section_map_addrmap_add_section_cb (void *cb_data, void *curr_val)
+{
+  struct obj_section_map_addrmap_value *sect_map_val
+    = (struct obj_section_map_addrmap_value *) curr_val;
+  struct obj_section *new_sect = (struct obj_section *) cb_data;
+  size_t sections_size;
+  int i;
 
-  map = XNEWVEC (struct obj_section *, alloc_size);
+  if (sect_map_val == NULL)
+    sect_map_val = XCNEW (struct obj_section_map_addrmap_value);
 
-  i = 0;
-  ALL_PSPACE_OBJFILES (pspace, objfile)
-    ALL_OBJFILE_OSECTIONS (objfile, s)
-      if (insert_section_p (objfile->obfd, s->the_bfd_section))
-	map[i++] = s;
+  i = VEC_lower_bound (obj_section_p, sect_map_val->sections,
+		       new_sect, obj_section_lessthan);
+  VEC_safe_insert (obj_section_p, sect_map_val->sections, i, new_sect);
 
-  qsort (map, alloc_size, sizeof (*map), qsort_cmp);
-  map_size = filter_debuginfo_sections(map, alloc_size);
-  map_size = filter_overlapping_sections(map, map_size);
+  /* Check that looking up the obj_section with VEC_lower_bound finds
+     it again correctly.  This is a debugging aid, disabled by
+     default, because it isn't free.  */
+  if (0)
+    {
+      struct obj_section *found;
+      int j;
 
-  if (map_size < alloc_size)
-    /* Some sections were eliminated.  Trim excess space.  */
-    map = XRESIZEVEC (struct obj_section *, map, map_size);
-  else
-    gdb_assert (alloc_size == map_size);
+      j = VEC_lower_bound (obj_section_p, sect_map_val->sections, new_sect,
+			   obj_section_lessthan);
+      found = VEC_index (obj_section_p, sect_map_val->sections, j);
+      gdb_assert (found == new_sect);
+    }
 
-  *pmap = map;
-  *pmap_size = map_size;
-}
+  sections_size = VEC_length (obj_section_p, sect_map_val->sections);
+  if (sections_size > 1)
+    {
+      struct obj_section **sections;
 
-/* Bsearch comparison function.  */
+      sections = VEC_address (obj_section_p, sect_map_val->sections);
+      complaint_overlapping_sections (sections, sections_size);
+    }
 
-static int
-bsearch_cmp (const void *key, const void *elt)
+  return sect_map_val;
+}
+
+/* Add OBJFILE's sections to its program space's section map.  */
+
+void
+obj_section_map_add_objfile (struct objfile *objfile)
 {
-  const CORE_ADDR pc = *(CORE_ADDR *) key;
-  const struct obj_section *section = *(const struct obj_section **) elt;
+  struct objfile_pspace_info *pspace_info;
+  struct obj_section *s;
 
-  if (pc < obj_section_addr (section))
-    return -1;
-  if (pc < obj_section_endaddr (section))
-    return 0;
-  return 1;
+  pspace_info = get_objfile_pspace_data (objfile->pspace);
+
+  objfile->flags |= OBJF_IN_SECTIONS_MAP;
+
+  ALL_OBJFILE_OSECTIONS (objfile, s)
+    if (insert_section_p (objfile->obfd, s->the_bfd_section))
+      {
+	CORE_ADDR addr = obj_section_addr (s);
+	CORE_ADDR endaddr = obj_section_endaddr (s);
+
+	if (addr != endaddr)
+	  addrmap_subregion_update (pspace_info->obj_section_map,
+				    addr, endaddr - 1,
+				    obj_section_map_addrmap_add_section_cb,
+				    s);
+      }
 }
 
 /* Returns a section whose range includes PC or NULL if none found.   */
@@ -1435,7 +1445,8 @@ struct obj_section *
 find_pc_section (CORE_ADDR pc)
 {
   struct objfile_pspace_info *pspace_info;
-  struct obj_section *s, **sp;
+  struct obj_section *s;
+  struct obj_section_map_addrmap_value *sect_map_val;
 
   /* Check for mapped overlay section first.  */
   s = find_pc_mapped_section (pc);
@@ -1443,35 +1454,21 @@ find_pc_section (CORE_ADDR pc)
     return s;
 
   pspace_info = get_objfile_pspace_data (current_program_space);
-  if (pspace_info->section_map_dirty
-      || (pspace_info->new_objfiles_available
-	  && !pspace_info->inhibit_updates))
-    {
-      update_section_map (current_program_space,
-			  &pspace_info->sections,
-			  &pspace_info->num_sections);
-
-      /* Don't need updates to section map until objfiles are added,
-         removed or relocated.  */
-      pspace_info->new_objfiles_available = 0;
-      pspace_info->section_map_dirty = 0;
-    }
 
-  /* The C standard (ISO/IEC 9899:TC2) requires the BASE argument to
-     bsearch be non-NULL.  */
-  if (pspace_info->sections == NULL)
+  sect_map_val = ((struct obj_section_map_addrmap_value *)
+		  addrmap_find (pspace_info->obj_section_map, pc));
+  if (sect_map_val != NULL)
     {
-      gdb_assert (pspace_info->num_sections == 0);
-      return NULL;
-    }
+      int ix;
+
+      for (ix = 0;
+	   VEC_iterate (obj_section_p, sect_map_val->sections, ix, s);
+	   ++ix)
+	if (obj_section_addr (s) <= pc && pc < obj_section_endaddr (s))
+	  return s;
 
-  sp = (struct obj_section **) bsearch (&pc,
-					pspace_info->sections,
-					pspace_info->num_sections,
-					sizeof (*pspace_info->sections),
-					bsearch_cmp);
-  if (sp != NULL)
-    return *sp;
+      gdb_assert_not_reached ("section not found");
+    }
   return NULL;
 }
 
@@ -1493,40 +1490,6 @@ pc_in_section (CORE_ADDR pc, char *name)
 }
 
 
-/* Set section_map_dirty so section map will be rebuilt next time it
-   is used.  Called by reread_symbols.  */
-
-void
-objfiles_changed (void)
-{
-  /* Rebuild section map next time we need it.  */
-  get_objfile_pspace_data (current_program_space)->section_map_dirty = 1;
-}
-
-/* See comments in objfiles.h.  */
-
-void
-inhibit_section_map_updates (struct program_space *pspace)
-{
-  get_objfile_pspace_data (pspace)->inhibit_updates = 1;
-}
-
-/* See comments in objfiles.h.  */
-
-void
-resume_section_map_updates (struct program_space *pspace)
-{
-  get_objfile_pspace_data (pspace)->inhibit_updates = 0;
-}
-
-/* See comments in objfiles.h.  */
-
-void
-resume_section_map_updates_cleanup (void *arg)
-{
-  resume_section_map_updates ((struct program_space *) arg);
-}
-
 /* Return 1 if ADDR maps into one of the sections of OBJFILE and 0
    otherwise.  */
 
diff --git a/gdb/objfiles.h b/gdb/objfiles.h
index 59b73f1..855f831 100644
--- a/gdb/objfiles.h
+++ b/gdb/objfiles.h
@@ -135,13 +135,13 @@ struct obj_section
 
 /* The memory address of section S (vma + offset).  */
 #define obj_section_addr(s)				      		\
-  (bfd_get_section_vma ((s)->objfile->obfd, s->the_bfd_section)		\
+  (bfd_get_section_vma ((s)->objfile->obfd, (s)->the_bfd_section)	\
    + obj_section_offset (s))
 
 /* The one-passed-the-end memory address of section S
    (vma + size + offset).  */
 #define obj_section_endaddr(s)						\
-  (bfd_get_section_vma ((s)->objfile->obfd, s->the_bfd_section)		\
+  (bfd_get_section_vma ((s)->objfile->obfd, (s)->the_bfd_section)	\
    + bfd_get_section_size ((s)->the_bfd_section)			\
    + obj_section_offset (s))
 
@@ -489,6 +489,11 @@ struct objfile
 
 #define OBJF_NOT_FILENAME (1 << 6)
 
+/* The sections of this objfile have been added to its program space's
+   section map (see obj_section_map_add_objfile).  */
+
+#define OBJF_IN_SECTIONS_MAP (1 << 7)
+
 /* Declarations for functions defined in objfiles.c */
 
 extern struct objfile *allocate_objfile (bfd *, const char *name, int);
@@ -536,8 +541,6 @@ extern int have_full_symbols (void);
 extern void objfile_set_sym_fns (struct objfile *objfile,
 				 const struct sym_fns *sf);
 
-extern void objfiles_changed (void);
-
 extern int is_addr_in_objfile (CORE_ADDR addr, const struct objfile *objfile);
 
 /* Return true if ADDRESS maps into one of the sections of a
@@ -575,22 +578,6 @@ in_plt_section (CORE_ADDR pc)
    modules.  */
 DECLARE_REGISTRY(objfile);
 
-/* In normal use, the section map will be rebuilt by find_pc_section
-   if objfiles have been added, removed or relocated since it was last
-   called.  Calling inhibit_section_map_updates will inhibit this
-   behavior until resume_section_map_updates is called.  If you call
-   inhibit_section_map_updates you must ensure that every call to
-   find_pc_section in the inhibited region relates to a section that
-   is already in the section map and has not since been removed or
-   relocated.  */
-extern void inhibit_section_map_updates (struct program_space *pspace);
-
-/* Resume automatically rebuilding the section map as required.  */
-extern void resume_section_map_updates (struct program_space *pspace);
-
-/* Version of the above suitable for use as a cleanup.  */
-extern void resume_section_map_updates_cleanup (void *arg);
-
 extern void default_iterate_over_objfiles_in_search_order
   (struct gdbarch *gdbarch,
    iterate_over_objfiles_in_search_order_cb_ftype *cb,
@@ -762,4 +749,7 @@ extern void objfile_register_static_link
 extern const struct dynamic_prop *objfile_lookup_static_link
   (struct objfile *objfile, const struct block *block);
 
+extern void obj_section_map_add_objfile (struct objfile *obj);
+extern void obj_section_map_remove_objfile (struct objfile *obj);
+
 #endif /* !defined (OBJFILES_H) */
diff --git a/gdb/solib-svr4.c b/gdb/solib-svr4.c
index fe36d45..2bb9dba 100644
--- a/gdb/solib-svr4.c
+++ b/gdb/solib-svr4.c
@@ -1947,20 +1947,6 @@ svr4_handle_solib_event (void)
       return;
     }
 
-  /* evaluate_probe_argument looks up symbols in the dynamic linker
-     using find_pc_section.  find_pc_section is accelerated by a cache
-     called the section map.  The section map is invalidated every
-     time a shared library is loaded or unloaded, and if the inferior
-     is generating a lot of shared library events then the section map
-     will be updated every time svr4_handle_solib_event is called.
-     We called find_pc_section in svr4_create_solib_event_breakpoints,
-     so we can guarantee that the dynamic linker's sections are in the
-     section map.  We can therefore inhibit section map updates across
-     these calls to evaluate_probe_argument and save a lot of time.  */
-  inhibit_section_map_updates (current_program_space);
-  usm_chain = make_cleanup (resume_section_map_updates_cleanup,
-			    current_program_space);
-
   TRY
     {
       val = evaluate_probe_argument (pa->probe, 1, frame);
@@ -2021,9 +2007,6 @@ svr4_handle_solib_event (void)
 	action = FULL_RELOAD;
     }
 
-  /* Resume section map updates.  */
-  do_cleanups (usm_chain);
-
   if (action == UPDATE_OR_RELOAD)
     {
       if (!solist_update_incremental (info, lm))
diff --git a/gdb/symfile.c b/gdb/symfile.c
index 7eb6cdc..bee7cdb 100644
--- a/gdb/symfile.c
+++ b/gdb/symfile.c
@@ -1098,6 +1098,8 @@ syms_from_objfile (struct objfile *objfile,
 static void
 finish_new_objfile (struct objfile *objfile, int add_flags)
 {
+  obj_section_map_add_objfile (objfile);
+
   /* If this is the main symbol file we have to clean up all users of the
      old main symbol file.  Otherwise it is sufficient to fixup all the
      breakpoints that may have been redefined by this symbol file.  */
@@ -2503,6 +2505,14 @@ reread_symbols (void)
 	  printf_unfiltered (_("`%s' has changed; re-reading symbols.\n"),
 			     objfile_name (objfile));
 
+	  /* Remove these before wiping them and before removing
+	     separate debug info files (the address map relies on
+	     those).  */
+	  obj_section_map_remove_objfile (objfile);
+	  /* In case we delete the objfile through a cleanup, don't
+	     try removing these again from the address map.  */
+	  objfile->sections = NULL;
+
 	  /* There are various functions like symbol_file_add,
 	     symfile_bfd_open, syms_from_objfile, etc., which might
 	     appear to do what we want.  But they have various other
@@ -2639,6 +2649,8 @@ reread_symbols (void)
 		  SIZEOF_N_SECTION_OFFSETS (num_offsets));
 	  objfile->num_sections = num_offsets;
 
+	  obj_section_map_add_objfile (objfile);
+
 	  /* What the hell is sym_new_init for, anyway?  The concept of
 	     distinguishing between the main file and additional files
 	     in this way seems rather dubious.  */
@@ -2685,9 +2697,6 @@ reread_symbols (void)
     {
       int ix;
 
-      /* Notify objfiles that we've modified objfile sections.  */
-      objfiles_changed ();
-
       clear_symtab_users (0);
 
       /* clear_objfile_data for each objfile was called before freeing it and
-- 
2.5.5


