[PATCH 10/12] Merge optimized_out into unavailable vector.
- From: "Andrew Burgess" <aburgess at broadcom dot com>
- To: gdb-patches at sourceware dot org
- Date: Mon, 12 Aug 2013 13:31:37 +0100
- Subject: [PATCH 10/12] Merge optimized_out into unavailable vector.
- References: <5208D1DF dot 1090201 at broadcom dot com>
Finally! Store the optimized-out state in the unavailable vector.
This removes the separate value_optimized_out interface and the last
few uses of it, and updates the code within value.c to just use the
unavailable vector.

The code that maintains the unavailable vector has had to change a
bit; patch #12 adds a new mechanism for unit testing this code. As
that mechanism is not required for functionality, I split it into a
separate patch.
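
Since the description above is fairly abstract, here is a minimal
standalone sketch of the core idea (simplified names, my own code
rather than the patch's; the real insert_into_bit_range_vector below
also handles merging, fragmentation and full replacement):

#include <assert.h>
#include <stdio.h>

/* Each range in the unavailable vector now records why the bits are
   missing; higher enum values take precedence on overlap.  */
enum reason { bit_range_unavailable = 1, bit_range_optimized_out = 2 };

struct range { int offset, length; enum reason reason; };

/* Shorten lower-precedence range A where higher-precedence range B
   overlaps its tail.  */
static void
truncate_lower (struct range *a, const struct range *b)
{
  assert (a->reason < b->reason);
  if (a->offset < b->offset && a->offset + a->length > b->offset)
    a->length = b->offset - a->offset;
}

int
main (void)
{
  struct range unavail = { 0, 16, bit_range_unavailable };
  struct range optout = { 12, 8, bit_range_optimized_out };

  truncate_lower (&unavail, &optout);
  /* Prints: unavailable [0,12), optimized out [12,20).  */
  printf ("unavailable [%d,%d), optimized out [%d,%d)\n",
          unavail.offset, unavail.offset + unavail.length,
          optout.offset, optout.offset + optout.length);
  return 0;
}

The point is that a single sorted vector of non-overlapping ranges can
answer both "is this optimized out?" and "is this unavailable?", which
is what the new value_availability_flags does by scanning the reasons.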
OK to apply?
Thanks,
Andrew
gdb/ChangeLog
2013-08-08 Andrew Burgess <aburgess@broadcom.com>
* ada-lang.c (coerce_unspec_val_to_type): Switch to using
value_contents_copy_raw.
* findvar.c (read_frame_register_value): Remove special handling
of optimized out registers.
* python/py-value.c (valpy_get_is_optimized_out): Update mechanism
to get optimized out flag.
* stack.c (read_frame_arg): Remove optimized out check, ensure
values are not lazy.
* tui/tui-regs.c (tui_get_register): Remove special check for
optimized out registers.
* value.c (enum unavailability_reason): Add.
(struct range): Add reason field.
(struct value): Remove optimized_out field.
(value_bits_available): Remove optimized_out checks, just check
unavailable vector.
(value_entirely_available): Likewise.
(value_entirely_unavailable): Likewise.
(insert_into_bit_range_vector): Add reason parameter, rewrite to
deal with reason field.
(value_availability_flags): Look in unavailability vector for
answer.
(allocate_value_lazy): Remove setting of optimized_out.
(require_not_optimized_out): Remove.
(require_available): Check both available and optimized out.
(value_contents_all): Remove call to require_not_optimized_out.
(value_contents_copy_raw): Maintain unavailability reason.
(value_contents_copy): Update comment, remove call to
require_not_optimized_out.
(value_contents): Remove call to require_not_optimized_out.
(value_optimized_out): Remove.
(mark_value_bit_optimized_out): Store optimized out state in
unavailability vector.
(value_copy): Remove use of optimized_out.
(value_primitive_field): Remove special handling of optimized out.
(value_fetch_lazy): Add assert that lazy values should have no
unavailable regions. Remove some special handling for optimized
out values.
* value.h (value_optimized_out): Remove.
gdb/testsuite/ChangeLog
2013-08-08 Andrew Burgess <aburgess@broadcom.com>
* gdb.dwarf2/dw2-op-out-param.exp: Tests now pass.
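
Before reading the new insert_into_bit_range_vector in the value.c
diff below, here is a hand-worked illustration of the precedence rules
(my own trace, offsets in bits; it is not part of the patch or its
tests). Starting from an empty vector, inserting [0,16) with reason
bit_range_unavailable and [12,20) with reason bit_range_optimized_out
should, in either order, leave the vector as:

  { [0,12)  bit_range_unavailable,
    [12,20) bit_range_optimized_out }

The optimized-out range wins the overlapping bits [12,16); contiguous
ranges are merged only when their reasons match.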
diff --git a/gdb/ada-lang.c b/gdb/ada-lang.c
index 20a0f02..b0b310d 100644
--- a/gdb/ada-lang.c
+++ b/gdb/ada-lang.c
@@ -574,10 +574,7 @@ coerce_unspec_val_to_type (struct value *val, struct type *type)
else
{
result = allocate_value (type);
- memcpy (value_contents_raw (result), value_contents (val),
- TYPE_LENGTH (type));
- if (value_optimized_out (val))
- mark_value_bytes_optimized_out (result, 0, TYPE_LENGTH (type));
+ value_contents_copy_raw (result, 0, val, 0, TYPE_LENGTH (type));
}
set_value_component_location (result, val);
set_value_bitsize (result, value_bitsize (val));
diff --git a/gdb/findvar.c b/gdb/findvar.c
index f2bb247..c92915c 100644
--- a/gdb/findvar.c
+++ b/gdb/findvar.c
@@ -711,16 +711,6 @@ read_frame_register_value (struct value *value, struct frame_info *frame)
struct value *regval = get_frame_register_value (frame, regnum);
int reg_len = TYPE_LENGTH (value_type (regval)) - reg_offset;
- if (value_optimized_out (regval))
- {
- /* If any one of the component registers is marked optimized out
- then we just mark the whole composite register as optimized
- out. We could do better, but this style of composite register
- passing is not standard, and is only used on a few targets. */
- mark_value_bytes_optimized_out (value, 0, TYPE_LENGTH (value_type (value)));
- break;
- }
-
/* If the register length is larger than the number of bytes
remaining to copy, then only copy the appropriate bytes. */
if (reg_len > len)
diff --git a/gdb/python/py-value.c b/gdb/python/py-value.c
index 0d87219..426ac0c 100644
--- a/gdb/python/py-value.c
+++ b/gdb/python/py-value.c
@@ -674,7 +674,12 @@ valpy_get_is_optimized_out (PyObject *self, void *closure)
TRY_CATCH (except, RETURN_MASK_ALL)
{
- opt = value_optimized_out (value);
+ if (!value_entirely_available (value))
+ {
+ int unavailablep;
+
+ value_availability_flags (value, &opt, &unavailablep);
+ }
}
GDB_PY_HANDLE_EXCEPTION (except);
diff --git a/gdb/stack.c b/gdb/stack.c
index cec5df5..084ed7a 100644
--- a/gdb/stack.c
+++ b/gdb/stack.c
@@ -384,9 +384,13 @@ read_frame_arg (struct symbol *sym, struct frame_info *frame,
{
struct type *type = value_type (val);
- if (!value_optimized_out (val)
- && value_available_contents_eq (val, 0, entryval, 0,
- TYPE_LENGTH (type)))
+ if (value_lazy (val))
+ value_fetch_lazy (val);
+ if (value_lazy (entryval))
+ value_fetch_lazy (entryval);
+
+ if (value_available_contents_eq (val, 0, entryval, 0,
+ TYPE_LENGTH (type)))
{
/* Initialize it just to avoid a GCC false warning. */
struct value *val_deref = NULL, *entryval_deref;
diff --git a/gdb/testsuite/gdb.dwarf2/dw2-op-out-param.exp b/gdb/testsuite/gdb.dwarf2/dw2-op-out-param.exp
index 5e4ca01..9c60236 100644
--- a/gdb/testsuite/gdb.dwarf2/dw2-op-out-param.exp
+++ b/gdb/testsuite/gdb.dwarf2/dw2-op-out-param.exp
@@ -50,36 +50,12 @@ gdb_test "bt" "#0 ($hex in )?breakpt \\(\\)\r\n#1 ($hex in )?int_param_single_
# (2) struct_param_single_reg_loc
gdb_continue_to_breakpoint "Stop in breakpt for
struct_param_single_reg_loc"
-set test "Backtrace for test struct_param_single_reg_loc"
-gdb_test_multiple "bt" "$test" {
- -re "#0 ($hex in )?breakpt \\(\\)\r\n#1 ($hex in
)?struct_param_single_reg_loc \\(operand0={a = 0xdeadbe00deadbe01, b =
<optimized out>}, operand1={a = <optimized out>, b =
0xdeadbe04deadbe05}, operand2=<optimized out>\\)\r\n#2 ($hex in )?main
\\(\\)\r\n$gdb_prompt $" {
- xpass $test
- }
- -re "#0 ($hex in )?breakpt \\(\\)\r\n#1 ($hex in
)?struct_param_single_reg_loc \\(operand0=<optimized out>,
operand1=<optimized out>, operand2=<optimized out>\\)\r\n#2 ($hex in
)?main \\(\\)\r\n$gdb_prompt $" {
- kfail "symtab/14604" $test
- }
-}
+gdb_test "bt" "#0 ($hex in )?breakpt \\(\\)\r\n#1 ($hex in
)?struct_param_single_reg_loc \\(operand0={a = 0xdeadbe00deadbe01, b =
<optimized out>}, operand1={a = <optimized out>, b =
0xdeadbe04deadbe05}, operand2=<optimized out>\\)\r\n#2 ($hex in )?main
\\(\\)" "Backtrace for test struct_param_single_reg_loc"
# (3) struct_param_two_reg_pieces
gdb_continue_to_breakpoint "Stop in breakpt for
struct_param_two_reg_pieces"
-set test "Backtrace for test struct_param_two_reg_pieces"
-gdb_test_multiple "bt" "$test" {
- -re "#0 ($hex in )?breakpt \\(\\)\r\n#1 ($hex in
)?struct_param_two_reg_pieces \\(operand0={a = 0xdeadbe04deadbe05, b =
<optimized out>}, operand1={a = <optimized out>, b =
0xdeadbe00deadbe01}, operand2=<optimized out>\\)\r\n#2 ($hex in )?main
\\(\\)\r\n$gdb_prompt $" {
- xpass $test
- }
- -re "#0 ($hex in )?breakpt \\(\\)\r\n#1 ($hex in
)?struct_param_two_reg_pieces \\(operand0=.*, operand1=.*,
operand2=.*\\)\r\n#2 ($hex in )?main \\(\\)\r\n$gdb_prompt $" {
- kfail "symtab/14605" $test
- }
-}
+gdb_test "bt" "#0 ($hex in )?breakpt \\(\\)\r\n#1 ($hex in
)?struct_param_two_reg_pieces \\(operand0={a = 0xdeadbe04deadbe05, b =
<optimized out>}, operand1={a = <optimized out>, b =
0xdeadbe00deadbe01}, operand2=<optimized out>\\)\r\n#2 ($hex in )?main
\\(\\)" "Backtrace for test struct_param_two_reg_pieces"
# (4) int_param_two_reg_pieces
gdb_continue_to_breakpoint "Stop in breakpt for int_param_two_reg_pieces"
-set test "Backtrace for test int_param_two_reg_pieces"
-gdb_test_multiple "bt" "$test" {
- -re "#0 ($hex in )?breakpt \\(\\)\r\n#1 ($hex in
)?int_param_two_reg_pieces \\(operand0=<optimized out>,
operand1=<optimized out>, operand2=<optimized out>\\)\r\n#2 ($hex in
)?main \\(\\)\r\n$gdb_prompt $" {
- xpass $test
- }
- -re "#0 ($hex in )?breakpt \\(\\)\r\n#1 ($hex in
)?int_param_two_reg_pieces \\(operand0=.*, operand1=.*,
operand2=.*\\)\r\n#2 ($hex in )?main \\(\\)\r\n$gdb_prompt $" {
- kfail "symtab/14605" $test
- }
-}
+gdb_test "bt" "#0 ($hex in )?breakpt \\(\\)\r\n#1 ($hex in
)?int_param_two_reg_pieces \\(operand0=<optimized out>,
operand1=<optimized out>, operand2=<optimized out>\\)\r\n#2 ($hex in
)?main \\(\\)" "Backtrace for test int_param_two_reg_pieces"
diff --git a/gdb/tui/tui-regs.c b/gdb/tui/tui-regs.c
index bb72340..102beb6 100644
--- a/gdb/tui/tui-regs.c
+++ b/gdb/tui/tui-regs.c
@@ -737,9 +737,8 @@ tui_get_register (struct frame_info *frame,
if (value_lazy (old_val))
value_fetch_lazy (old_val);
- if (value_optimized_out (data->value) != value_optimized_out (old_val)
- || !value_available_contents_eq (data->value, 0,
- old_val, 0, size))
+ if (!value_available_contents_eq (data->value, 0,
+ old_val, 0, size))
*changedp = TRUE;
}
diff --git a/gdb/value.c b/gdb/value.c
index c41e6e1..ec46863 100644
--- a/gdb/value.c
+++ b/gdb/value.c
@@ -63,6 +63,16 @@ struct internal_function
void *cookie;
};
+/* Used to describe the different reasons why part of a value
+ might not be available to gdb. */
+
+enum unavailability_reason
+{
+ /* Leave value 0 as invalid. */
+ bit_range_unavailable = 1,
+ bit_range_optimized_out = 2
+};
+
/* Defines an [OFFSET, OFFSET + LENGTH) range. */
struct range
@@ -72,6 +79,9 @@ struct range
/* Length of the range. */
int length;
+
+ /* The reason this range is unavailable. */
+ enum unavailability_reason reason;
};
typedef struct range range_s;
@@ -197,10 +207,6 @@ struct value
reset, be sure to consider this use as well! */
unsigned int lazy : 1;
- /* If nonzero, this is the value of a variable which does not
- actually exist in the program. */
- unsigned int optimized_out : 1;
-
/* If value is a variable, is it initialized or not. */
unsigned int initialized : 1;
@@ -345,15 +351,7 @@ value_bits_available (const struct value *value, int offset, int length)
{
gdb_assert (!value->lazy);
- if (ranges_contain (value->unavailable, offset, length))
- return 0;
- if (!value->optimized_out)
- return 1;
- if (value->lval != lval_computed
- || !value->location.computed.funcs->check_validity)
- return 0;
- return value->location.computed.funcs->check_validity (value, offset,
- length);
+ return (!ranges_contain (value->unavailable, offset, length));
}
int
@@ -364,65 +362,80 @@ value_entirely_available (struct value *value)
if (value->lazy)
value_fetch_lazy (value);
- if (VEC_empty (range_s, value->unavailable)
- && !value->optimized_out)
- return 1;
- return 0;
+ return VEC_empty (range_s, value->unavailable);
}
int
value_entirely_unavailable (struct value *value)
{
+ struct range *r;
+ int i, expected_offset;
+ struct type *type;
+
if (value->lazy)
value_fetch_lazy (value);
- /* Check if the unavailable vector covers the entire value. As we merge
- entries in the vector, if the entire value is covered then we'll
- have a single entry starting at offset 0 and length as long as the
- type. */
- if (VEC_length (range_s, value->unavailable) == 1)
+ /* Short cut the common case, no bits are marked unavailable. */
+ if (VEC_empty (range_s, value->unavailable))
+ return 0;
+
+ expected_offset = 0;
+ for (i = 0; VEC_iterate (range_s, value->unavailable, i, r); i++)
{
- const range_s *r;
- struct type *type;
+ if (r->offset != expected_offset)
+ return 0; /* Not entirely unavailable. */
- type = check_typedef (value_type (value));
- r = VEC_index (range_s, value->unavailable, 0);
- if (r->offset == 0 && r->length >= TARGET_CHAR_BIT * TYPE_LENGTH (type))
- return 1;
+ expected_offset += r->length;
}
- /* At least some of the value contents are NOT covered by the unavailable
- vector, fall back to the optimized out heuristic. */
- if (!value->optimized_out)
- return 0;
- if (value->lval != lval_computed
- || !value->location.computed.funcs->check_any_valid)
+ /* We've now looked at every entry in the vector of unavailable bits
+ and not found any gaps. If the EXPECTED_OFFSET is the same as the
+ type length then the whole value is no longer available. */
+ type = check_typedef (value_type (value));
+ if (expected_offset >= TYPE_LENGTH (type) * TARGET_CHAR_BIT)
return 1;
- return !value->location.computed.funcs->check_any_valid (value);
+
+ /* There are a few bits available at the very end of the value. */
+ return 0;
}
/* Insert into the vector pointed to by VECTORP the bit range starting at
OFFSET bits, and extending for the next LENGTH bits. */
static void
-insert_into_bit_range_vector (VEC(range_s) **vectorp, int offset, int length)
+insert_into_bit_range_vector (VEC(range_s) **vectorp,
+ int offset, int length,
+ enum unavailability_reason reason)
{
- range_s newr;
- int i;
+ gdb_assert (vectorp != NULL);
+
+ while (length > 0)
+ {
+ int i;
+ range_s newr;
+
+ /* All the ranges of unavailable bits are stored in order of
+ increasing offset within the value. Contiguous or overlapping
+ ranges with the same unavailability reason will be merged, while
+ ranges with different unavailability reasons that overlap will
+ result in one of the ranges being truncated to accommodate the
+ other; no two ranges in the vector may overlap. */
- /* Insert the range sorted. If there's overlap or the new range
- would be contiguous with an existing range, merge. */
+ newr.offset = offset;
+ newr.length = length;
+ newr.reason = reason;
- newr.offset = offset;
- newr.length = length;
+ /* If we only consider the insertion of ranges with the same
+ unavailability reason, then the following logic is used to manage
+ the ranges in the vector:
- /* Do a binary search for the position the given range would be
- inserted if we only considered the starting OFFSET of ranges.
- Call that position I. Since we also have LENGTH to care for
- (this is a range afterall), we need to check if the _previous_
- range overlaps the I range. E.g., calling R the new range:
+ Do a binary search for the position the given range would be
+ inserted if we only considered the starting OFFSET of ranges.
+ Call that position I. Since we also have LENGTH to care for
+ (this is a range afterall), we need to check if the _previous_
+ range overlaps the I range. E.g., calling R the new range:
- #1 - overlaps with previous
+ #1 - overlaps with previous
R
|-...-|
@@ -431,15 +444,15 @@ insert_into_bit_range_vector (VEC(range_s) **vectorp, int offset, int length)
I=1
- In the case #1 above, the binary search would return `I=1',
- meaning, this OFFSET should be inserted at position 1, and the
- current position 1 should be pushed further (and become 2). But,
- note that `0' overlaps with R, so we want to merge them.
+ In the case #1 above, the binary search would return `I=1',
+ meaning, this OFFSET should be inserted at position 1, and the
+ current position 1 should be pushed further (and become 2). But,
+ note that `0' overlaps with R, so we want to merge them.
- A similar consideration needs to be taken if the new range would
- be contiguous with the previous range:
+ A similar consideration needs to be taken if the new range would
+ be contiguous with the previous range:
- #2 - contiguous with previous
+ #2 - contiguous with previous
R
|-...-|
@@ -448,44 +461,44 @@ insert_into_bit_range_vector (VEC(range_s) **vectorp, int offset, int length)
I=1
- If there's no overlap with the previous range, as in:
+ If there's no overlap with the previous range, as in:
- #3 - not overlapping and not contiguous
+ #3 - not overlapping and not contiguous
- R
- |-...-|
- |--| |---| |------| ... |--|
- 0 1 2 N
+ R
+ |-...-|
+ |--| |---| |------| ... |--|
+ 0 1 2 N
I=1
- or if I is 0:
+ or if I is 0:
- #4 - R is the range with lowest offset
+ #4 - R is the range with lowest offset
- R
+ R
|-...-|
|--| |---| |------| ... |--|
0 1 2 N
I=0
- ... we just push the new range to I.
+ ... we just push the new range to I.
- All the 4 cases above need to consider that the new range may
- also overlap several of the ranges that follow, or that R may be
- contiguous with the following range, and merge. E.g.,
+ All the 4 cases above need to consider that the new range may
+ also overlap several of the ranges that follow, or that R may be
+ contiguous with the following range, and merge. E.g.,
- #5 - overlapping following ranges
+ #5 - overlapping following ranges
- R
+ R
|------------------------|
|--| |---| |------| ... |--|
0 1 2 N
I=0
- or:
+ or:
R
|-------|
@@ -494,79 +507,173 @@ insert_into_bit_range_vector (VEC(range_s) **vectorp, int offset, int length)
I=1
- */
+ When we consider ranges with different unavailability reasons, the
+ following additional rules are used.
- i = VEC_lower_bound (range_s, *vectorp, &newr, range_lessthan);
- if (i > 0)
- {
- struct range *bef = VEC_index (range_s, *vectorp, i - 1);
+ Each unavailability reason has a value; see the enum
+ unavailability_reason. The higher value reasons have greater
+ precedence; they are considered a "better" reason to explain why
+ some bits of a value are not available.
- if (ranges_overlap (bef->offset, bef->length, offset, length))
- {
- /* #1 */
- ULONGEST l = min (bef->offset, offset);
- ULONGEST h = max (bef->offset + bef->length, offset + length);
+ When two ranges overlap, as in case #1 above, the range with the
+ lowest precedence is shortened, possibly to the point of
+ non-existence, to accommodate the range with greater precedence.
- bef->offset = l;
- bef->length = h - l;
- i--;
- }
- else if (offset == bef->offset + bef->length)
- {
- /* #2 */
- bef->length += length;
- i--;
- }
- else
- {
- /* #3 */
- VEC_safe_insert (range_s, *vectorp, i, &newr);
- }
- }
- else
- {
- /* #4 */
- VEC_safe_insert (range_s, *vectorp, i, &newr);
- }
+ Two contiguous ranges, as in case #2 above, will not be merged if
+ they have different reasons.
- /* Check whether the ranges following the one we've just added or
- touched can be folded in (#5 above). */
- if (i + 1 < VEC_length (range_s, *vectorp))
- {
- struct range *t;
- struct range *r;
- int removed = 0;
- int next = i + 1;
+ Cases #3 and #4 above are unchanged.
- /* Get the range we just touched. */
- t = VEC_index (range_s, *vectorp, i);
- removed = 0;
+ When inserting a range that overlaps many existing ranges, and in
+ case #5 above, all overlapped ranges with a lower precedence
+ reason will be deleted, effectively replaced by the new range. A
+ range of higher precedence will cause the newly inserted range to
+ fragment around the existing, higher precedence range.
+ */
- i = next;
- for (; VEC_iterate (range_s, *vectorp, i, r); i++)
- if (r->offset <= t->offset + t->length)
- {
- ULONGEST l, h;
+ i = VEC_lower_bound (range_s, *vectorp, &newr, range_lessthan);
+ if (i > 0)
+ {
+ struct range *bef = VEC_index (range_s, *vectorp, i - 1);
- l = min (t->offset, r->offset);
- h = max (t->offset + t->length, r->offset + r->length);
+ if (ranges_overlap (bef->offset, bef->length, offset, length))
+ {
+ /* #1 */
+ if (bef->reason == reason)
+ {
+ ULONGEST l = min (bef->offset, offset);
+ ULONGEST h = max (bef->offset + bef->length, offset + length);
+
+ bef->offset = l;
+ bef->length = h - l;
+ i--;
+ }
+ else if (bef->reason < reason)
+ {
+ /* Reduce the previous range. */
+ bef->length -= (bef->offset + bef->length) - offset;
+
+ if (bef->length == 0)
+ VEC_block_remove (range_s, *vectorp, (i - 1), 1);
+
+ VEC_safe_insert (range_s, *vectorp, i, &newr);
+ }
+ else
+ {
+ /* Reduce the new range. */
+ if ((bef->offset + bef->length) >= offset + length)
+ return; /* We're completely within previous range. */
- t->offset = l;
- t->length = h - l;
+ newr.length -= (bef->offset + bef->length) - newr.offset;
+ newr.offset = bef->offset + bef->length;
- removed++;
- }
- else
- {
- /* If we couldn't merge this one, we won't be able to
- merge following ones either, since the ranges are
- always sorted by OFFSET. */
- break;
- }
+ VEC_safe_insert (range_s, *vectorp, i, &newr);
+ }
+ }
+ else if (offset == bef->offset + bef->length)
+ {
+ /* #2 */
+ if (bef->reason == reason)
+ {
+ bef->length += length;
+ i--;
+ }
+ else
+ VEC_safe_insert (range_s, *vectorp, i, &newr);
+ }
+ else
+ {
+ /* #3 */
+ VEC_safe_insert (range_s, *vectorp, i, &newr);
+ }
+ }
+ else
+ {
+ /* #4 */
+ VEC_safe_insert (range_s, *vectorp, i, &newr);
+ }
- if (removed != 0)
- VEC_block_remove (range_s, *vectorp, next, removed);
+ /* Check whether the ranges following the one we've just added or
+ touched can be folded in (#5 above). */
+ if (i + 1 < VEC_length (range_s, *vectorp))
+ {
+ struct range *t;
+ struct range *r;
+ int removed = 0;
+ int next = i + 1;
+
+ /* Get the range we just touched. */
+ t = VEC_index (range_s, *vectorp, i);
+ removed = 0;
+
+ i = next;
+ for (; VEC_iterate (range_s, *vectorp, i, r); i++)
+ if (r->offset <= t->offset + t->length
+ && r->reason <= t->reason)
+ {
+ ULONGEST h;
+
+ gdb_assert (t->offset <= r->offset);
+
+ if (r->offset + r->length >= t->offset + t->length)
+ {
+ if (r->reason == t->reason)
+ {
+ t->length += ((r->offset + r->length)
+ - (t->offset + t->length));
+ removed++;
+ }
+ else
+ {
+ r->length = ((r->offset + r->length)
+ - (t->offset + t->length));
+ r->offset = t->offset + t->length;
+ }
+ }
+ else
+ {
+ removed++;
+ }
+ }
+ else
+ {
+ /* If we couldn't merge this one, we won't be able to
+ merge following ones either, since the ranges are
+ always sorted by OFFSET. */
+ break;
+ }
+
+ if (removed != 0)
+ VEC_block_remove (range_s, *vectorp, next, removed);
+
+ /* If the next range overlaps with the range we just touched, but has
+ a more important reason, then we need to skip over the existing
+ range and create a new range on the other side. */
+ if (r && r->offset <= t->offset + t->length)
+ {
+ /* If the next one overlaps, its reason should be greater than
+ ours; otherwise we should have folded it in above. */
+ gdb_assert (r->reason > t->reason);
+
+ if (r->offset + r->length >= t->offset + t->length)
+ {
+ t->length -= (t->offset + t->length) - r->offset;
+ return;
+ }
+
+ offset = r->offset + r->length;
+ length = (t->offset + t->length) - (r->offset + r->length);
+ t->length = r->offset - t->offset;
+ }
+ else
+ return; /* All inserted. */
+ }
+ else
+ return; /* All inserted. */
}
+
+ /* Strange, was not expecting to get here. */
+ internal_error (__FILE__, __LINE__, "error during vector insert");
}
void
@@ -590,8 +697,21 @@ value_availability_flags (const struct value *value, int *optimizedp,
int *unavailablep)
{
- *optimizedp = value->optimized_out;
- *unavailablep = !VEC_empty (range_s, value->unavailable);
+ int i;
+ struct range *r;
+
+ gdb_assert (!value->lazy);
+
+ *optimizedp = *unavailablep = 0;
+
+ /* Look through every entry in the unavailable vector. */
+ for (i = 0; VEC_iterate (range_s, value->unavailable, i, r); i++)
+ {
+ if (r->reason == bit_range_optimized_out)
+ *optimizedp = 1;
+ else if (r->reason == bit_range_unavailable)
+ *unavailablep = 1;
+ }
}
/* Find the first range in RANGES that overlaps the range defined by
@@ -755,7 +875,6 @@ allocate_value_lazy (struct type *type)
val->bitsize = 0;
VALUE_REGNUM (val) = -1;
val->lazy = 1;
- val->optimized_out = 0;
val->embedded_offset = 0;
val->pointed_to_offset = 0;
val->modifiable = 1;
@@ -967,17 +1086,18 @@ value_actual_type (struct value *value, int resolve_simple_types,
}
static void
-require_not_optimized_out (const struct value *value)
-{
- if (value->optimized_out)
- error (_("value has been optimized out"));
-}
-
-static void
require_available (const struct value *value)
{
if (!VEC_empty (range_s, value->unavailable))
- throw_error (NOT_AVAILABLE_ERROR, _("value is not available"));
+ {
+ int optimizedp, unavailablep;
+
+ value_availability_flags (value, &optimizedp, &unavailablep);
+ if (optimizedp)
+ throw_error (OPTIMIZED_OUT_ERROR, _("value has been optimized out"));
+ else
+ throw_error (NOT_AVAILABLE_ERROR, _("value is not available"));
+ }
}
const gdb_byte *
@@ -999,7 +1119,6 @@ const gdb_byte *
value_contents_all (struct value *value)
{
const gdb_byte *result = value_contents_for_printing (value);
- require_not_optimized_out (value);
require_available (value);
return result;
}
@@ -1052,7 +1171,7 @@ value_contents_copy_raw (struct value *dst, int dst_offset,
if (l < h)
insert_into_bit_range_vector (&dst->unavailable,
dst_bit_offset + (l - src_bit_offset),
- h - l);
+ h - l, r->reason);
}
}
@@ -1061,8 +1180,7 @@ value_contents_copy_raw (struct value *dst, int dst_offset,
(all) contents, starting at DST_OFFSET. If unavailable contents
are being copied from SRC, the corresponding DST contents are
marked unavailable accordingly. DST must not be lazy. If SRC is
- lazy, it will be fetched now. If SRC is not valid (is optimized
- out), an error is thrown.
+ lazy, it will be fetched now.
It is assumed the contents of DST in the [DST_OFFSET,
DST_OFFSET+LENGTH) range are wholly available. */
@@ -1071,8 +1189,6 @@ void
value_contents_copy (struct value *dst, int dst_offset,
struct value *src, int src_offset, int length)
{
- require_not_optimized_out (src);
-
if (src->lazy)
value_fetch_lazy (src);
@@ -1107,7 +1223,6 @@ const gdb_byte *
value_contents (struct value *value)
{
const gdb_byte *result = value_contents_writeable (value);
- require_not_optimized_out (value);
require_available (value);
return result;
}
@@ -1139,17 +1254,6 @@ value_contents_equal (struct value *val1, struct value *val2)
TYPE_LENGTH (type1)) == 0);
}
-int
-value_optimized_out (struct value *value)
-{
- /* We can only know if a value is optimized out once we have tried to
- fetch it. */
- if (!value->optimized_out && value->lazy)
- value_fetch_lazy (value);
-
- return value->optimized_out;
-}
-
/* Mark contents of VALUE as optimized out, starting at OFFSET bytes, and
the following LENGTH bytes. */
@@ -1166,12 +1270,12 @@ mark_value_bytes_optimized_out (struct value *value, int offset, int length)
void
mark_value_bits_optimized_out (struct value *value,
- int offset ATTRIBUTE_UNUSED,
- int length ATTRIBUTE_UNUSED)
+ int offset,
+ int length)
{
- /* For now just set the optimized out flag to indicate that part of the
- value is optimized out, this will be expanded upon in later
- patches. */
- value->optimized_out = 1;
+ insert_into_bit_range_vector (&value->unavailable,
+ offset, length,
+ bit_range_optimized_out);
}
int
@@ -1479,7 +1583,6 @@ value_copy (struct value *arg)
VALUE_FRAME_ID (val) = VALUE_FRAME_ID (arg);
VALUE_REGNUM (val) = VALUE_REGNUM (arg);
val->lazy = arg->lazy;
- val->optimized_out = arg->optimized_out;
val->embedded_offset = value_embedded_offset (arg);
val->pointed_to_offset = arg->pointed_to_offset;
val->modifiable = arg->modifiable;
@@ -2730,24 +2833,19 @@ value_primitive_field (struct value *arg1, int offset,
int bitpos = TYPE_FIELD_BITPOS (arg_type, fieldno);
int container_bitsize = TYPE_LENGTH (type) * 8;
- if (arg1->optimized_out)
- v = allocate_optimized_out_value (type);
+ v = allocate_value_lazy (type);
+ v->bitsize = TYPE_FIELD_BITSIZE (arg_type, fieldno);
+ if ((bitpos % container_bitsize) + v->bitsize <= container_bitsize
+ && TYPE_LENGTH (type) <= (int) sizeof (LONGEST))
+ v->bitpos = bitpos % container_bitsize;
else
- {
- v = allocate_value_lazy (type);
- v->bitsize = TYPE_FIELD_BITSIZE (arg_type, fieldno);
- if ((bitpos % container_bitsize) + v->bitsize <= container_bitsize
- && TYPE_LENGTH (type) <= (int) sizeof (LONGEST))
- v->bitpos = bitpos % container_bitsize;
- else
- v->bitpos = bitpos % 8;
- v->offset = (value_embedded_offset (arg1)
- + offset
- + (bitpos - v->bitpos) / 8);
- set_value_parent (v, arg1);
- if (!value_lazy (arg1))
- value_fetch_lazy (v);
- }
+ v->bitpos = bitpos % 8;
+ v->offset = (value_embedded_offset (arg1)
+ + offset
+ + (bitpos - v->bitpos) / 8);
+ set_value_parent (v, arg1);
+ if (!value_lazy (arg1))
+ value_fetch_lazy (v);
}
else if (fieldno < TYPE_N_BASECLASSES (arg_type))
{
@@ -2760,37 +2858,29 @@ value_primitive_field (struct value *arg1, int offset,
if (VALUE_LVAL (arg1) == lval_register && value_lazy (arg1))
value_fetch_lazy (arg1);
- /* The optimized_out flag is only set correctly once a lazy value is
- loaded, having just loaded some lazy values we should check the
- optimized out case now. */
- if (arg1->optimized_out)
- v = allocate_optimized_out_value (type);
+ /* We special case virtual inheritance here because this
+ requires access to the contents, which we would rather avoid
+ for references to ordinary fields of unavailable values. */
+ if (BASETYPE_VIA_VIRTUAL (arg_type, fieldno))
+ boffset = baseclass_offset (arg_type, fieldno,
+ value_contents (arg1),
+ value_embedded_offset (arg1),
+ value_address (arg1),
+ arg1);
else
- {
- /* We special case virtual inheritance here because this
- requires access to the contents, which we would rather avoid
- for references to ordinary fields of unavailable values. */
- if (BASETYPE_VIA_VIRTUAL (arg_type, fieldno))
- boffset = baseclass_offset (arg_type, fieldno,
- value_contents (arg1),
- value_embedded_offset (arg1),
- value_address (arg1),
- arg1);
- else
- boffset = TYPE_FIELD_BITPOS (arg_type, fieldno) / 8;
+ boffset = TYPE_FIELD_BITPOS (arg_type, fieldno) / 8;
- if (value_lazy (arg1))
- v = allocate_value_lazy (value_enclosing_type (arg1));
- else
- {
- v = allocate_value (value_enclosing_type (arg1));
- value_contents_copy_raw (v, 0, arg1, 0,
- TYPE_LENGTH (value_enclosing_type (arg1)));
- }
- v->type = type;
- v->offset = value_offset (arg1);
- v->embedded_offset = offset + value_embedded_offset (arg1) + boffset;
+ if (value_lazy (arg1))
+ v = allocate_value_lazy (value_enclosing_type (arg1));
+ else
+ {
+ v = allocate_value (value_enclosing_type (arg1));
+ value_contents_copy_raw (v, 0, arg1, 0,
+ TYPE_LENGTH (value_enclosing_type (arg1)));
}
+ v->type = type;
+ v->offset = value_offset (arg1);
+ v->embedded_offset = offset + value_embedded_offset (arg1) + boffset;
}
else
{
@@ -2804,9 +2894,7 @@ value_primitive_field (struct value *arg1, int offset,
/* The optimized_out flag is only set correctly once a lazy value is
loaded, having just loaded some lazy values we should check for
the optimized out case now. */
- if (arg1->optimized_out)
- v = allocate_optimized_out_value (type);
- else if (value_lazy (arg1))
+ if (value_lazy (arg1))
v = allocate_value_lazy (type);
else
{
@@ -3497,6 +3585,9 @@ value_fetch_lazy (struct value *val)
{
gdb_assert (value_lazy (val));
allocate_value_contents (val);
+ /* Right now a value is either lazy, or fully fetched. The availability
+ is only established as we try to fetch a value. */
+ gdb_assert (VEC_empty (range_s, val->unavailable));
if (value_bitsize (val))
{
/* To read a lazy bitfield, read the entire enclosing value. This
@@ -3581,16 +3672,11 @@ value_fetch_lazy (struct value *val)
if (value_lazy (new_val))
value_fetch_lazy (new_val);
- /* If the register was not saved, mark it optimized out. */
- if (value_optimized_out (new_val))
- mark_value_bytes_optimized_out (val, 0, TYPE_LENGTH (value_type (val)));
- else
- {
- set_value_lazy (val, 0);
- value_contents_copy (val, value_embedded_offset (val),
- new_val, value_embedded_offset (new_val),
- TYPE_LENGTH (type));
- }
+ /* Copy the contents and any unavailability from NEW_VAL to VAL. */
+ set_value_lazy (val, 0);
+ value_contents_copy (val, value_embedded_offset (val),
+ new_val, value_embedded_offset (new_val),
+ TYPE_LENGTH (type));
if (frame_debug)
{
@@ -3643,11 +3729,6 @@ value_fetch_lazy (struct value *val)
else if (VALUE_LVAL (val) == lval_computed
&& value_computed_funcs (val)->read != NULL)
value_computed_funcs (val)->read (val);
- /* Don't call value_optimized_out on val, doing so would result in a
- recursive call back to value_fetch_lazy, instead check the
- optimized_out flag directly. */
- else if (val->optimized_out)
- /* Keep it optimized out. */;
else
internal_error (__FILE__, __LINE__, _("Unexpected lazy value type."));
diff --git a/gdb/value.h b/gdb/value.h
index 8226a0d..d2b7f18 100644
--- a/gdb/value.h
+++ b/gdb/value.h
@@ -318,11 +318,6 @@ extern const gdb_byte *
extern int value_fetch_lazy (struct value *val);
extern int value_contents_equal (struct value *val1, struct value *val2);
-/* If nonzero, this is the value of a variable which does not actually
- exist in the program, at least partially. If the value is lazy,
- this may fetch it now. */
-extern int value_optimized_out (struct value *value);
-
/* Mark VALUE's content bytes starting at OFFSET and extending for
LENGTH bytes as optimized out. */