[PATCH 08/16] linux, btrace: perf_event based branch tracing
Metzger, Markus T
markus.t.metzger@intel.com
Thu May 31 15:34:00 GMT 2012
> -----Original Message-----
> From: Jan Kratochvil [mailto:jan.kratochvil@redhat.com]
> Sent: Wednesday, May 30, 2012 10:43 PM
> To: Metzger, Markus T
Thanks for your review!
[...]
> > +typedef unsigned char byte;
>
> Two spaces. There already exists gdb_byte.
The gdb_byte typedef is not available in gdb/common/.
Is it OK to put the typedef into gdb/common/common-utils.h?
> > +
> > +/* A branch trace record in perf_event.  */
> > +struct perf_event_bts
> > +{
> > + unsigned long long from;
> > + unsigned long long to;
>
> 'long long' is forbidden in GDB, it will produce ARI script warnings.
> uint64_t is OK.
Fixed.
Could we put this ARI script and maybe some style checker scripts into the
gdb source tree?
> > +};
> > +
> > +/* A perf_event branch trace sample.  */
> > +struct perf_event_sample
> > +{
> > + struct perf_event_header header;
> > + struct perf_event_bts bts;
>
> GNU Coding Style forbids such alignment, just use:
> struct perf_event_header header;
> struct perf_event_bts bts;
Fixed.
> > +};
> > +
> > +static inline volatile struct perf_event_mmap_page *
> > +perf_event_header (struct btrace_target_info* tinfo)
> > +{
> > + return tinfo->buffer;
> > +}
> > +
> > +static inline size_t
> > +perf_event_mmap_size (const struct btrace_target_info *tinfo)
> > +{
> > + /* The branch trace buffer is preceded by a configuration page. */
> > + return ((tinfo->size + 1) * PAGE_SIZE);
>
> Excessive out parentheses. Multiple times.
Fixed.
> > +}
> > +
> > +static inline size_t
> > +perf_event_buffer_size (struct btrace_target_info* tinfo)
> > +{
> > + return (tinfo->size * PAGE_SIZE);
> > +}
> > +
> > +static inline const byte *
> > +perf_event_buffer_begin (struct btrace_target_info* tinfo)
> > +{
> > +  return ((const byte *) tinfo->buffer) + PAGE_SIZE;
> > +}
> > +
> > +static inline const byte *
> > +perf_event_buffer_end (struct btrace_target_info* tinfo)
> > +{
> > +  return perf_event_buffer_begin (tinfo) + perf_event_buffer_size (tinfo);
> > +}
> > +
> > +static inline int
> > +perf_event_skip_record (struct btrace_target_info* tinfo,
>
> Formatting:
> struct btrace_target_info *tinfo,
Fixed.
> > +                        const struct perf_event_bts *bts)
> > +{
> > + if (tinfo->ptr_bits)
> > + {
> > + int shift = tinfo->ptr_bits - 1;
> > +
> > +      /* Branch trace records branches from kernel space to user space.  */
> > + if (bts->from & (1ull << shift))
>
> long long again.
Replaced with an explicit type cast.
> > + return 1;
> > +
> > +      /* Branch trace records branches from user space to kernel space.  */
> > + if (bts->to & (1ull << shift))
> > + return 1;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +static inline int
> > +perf_event_check_sample (const struct perf_event_sample *sample)
> > +{
> > + if (sample->header.type != PERF_RECORD_SAMPLE)
> > + return EINVAL;
>
> Here everywhere some error with description would be easier than the error
> codes passing in the callers.
I'll fix this once it is clear whether this will also work with gdbserver.
> > +
> > + if (sample->header.size != sizeof (*sample))
> > + return EINVAL;
> > +
> > + return 0;
> > +}
> > +
> > +/* Branch trace is collected in a circular buffer [begin; end) as pairs of
> > +   from and to addresses (plus some header).
> > +
> > + Start points into that buffer at the next sample position.
> > + We read the collected samples backwards from start.
> > +
> > +   While reading the samples, we convert the information into a list of
> > +   blocks.  For two adjacent samples s1 and s2, we form a block b such
> > +   that b.begin = s1.to and b.end = s2.from.
> > +
> > + In case the buffer overflows during sampling, samples may be
> > + split. */
>
> Empty line.
Fixed.
> > +static int
> > +perf_event_read_bts (struct btrace_target_info* tinfo,
> > +                     const byte *begin, const byte *end, const byte *start,
> > + int (*fun) (struct linux_btrace_block *, void *),
> > + void *arg)
> > +{
> > + struct perf_event_sample sample;
> > + int read = 0, size = (end - begin), errcode = 0;
> > + struct linux_btrace_block block = { 0, 0 };
> > +
> > + if (start < begin)
> > + return EINVAL;
> > +
> > + if (end < start)
> > + return EINVAL;
> > +
> > +  /* The buffer may contain a partial record as its last entry (i.e. when
> > +     the buffer size is not a multiple of the sample size).  */
> > +  read = sizeof (sample) - 1;
> > +
> > + for (; read < size; read += sizeof (sample))
> > + {
> > + const struct perf_event_sample *psample;
> > +
> > + /* Find the next perf_event sample. */
> > + start -= sizeof (sample);
> > + if (begin <= start)
> > + psample = (const struct perf_event_sample *) start;
> > + else
> > + {
> > + int missing = (begin - start);
> Empty line.
Fixed.
[...]
> > +int
> > +linux_btrace_has_changed (struct btrace_target_info *tinfo)
> > +{
> > +  volatile struct perf_event_mmap_page *header = perf_event_header (tinfo);
> Empty line.
Fixed.
> > + if (!header)
> > + return 0;
> > +
> > +  return (header->data_head != tinfo->data_head);
> > +}
> > +
> > +struct btrace_target_info *
> > +linux_enable_btrace (ptid_t ptid)
> > +{
> > + struct btrace_target_info *tinfo;
> > + int pid;
> > +
> > + tinfo = xzalloc (sizeof (*tinfo));
> > + if (!tinfo)
>
> Remove, xzalloc can never return NULL.
Fixed.
[...]
> > +#else /* HAVE_LINUX_PERF_EVENT_H */
>
> GNU Coding Standards say here should be:
> #else /* !HAVE_LINUX_PERF_EVENT_H */
> (they say "not" but GDB uses !)
Fixed.
[...]
> > +#endif /* HAVE_LINUX_PERF_EVENT_H */
>
> And !HAVE_LINUX_PERF_EVENT_H again.
Fixed.
[...]
> > + /* The mmap configuration mapping the branch trace perf_event buffer.
> > +
> > + file .. the file descriptor
> > + buffer .. the mmapped memory buffer
> > + size .. the buffer's size in pages without the configuration
> > page
> > + data_head .. the data head from the last read */
> > + int file;
> > + void *buffer;
> > + size_t size;
>
> GNU Coding Style - no indentation of the names.
Fixed.
> > + unsigned long data_head;
> > +#endif /* HAVE_LINUX_PERF_EVENT_H */
> > +
> > + /* The size of a pointer in bits for this thread.
> > + The information is used to identify kernel addresses in order to
> > skip
> > + records from/to kernel space. */
> > + int ptr_bits;
> > +};
> > +
> > +extern int linux_supports_btrace (void);
> > +extern struct btrace_target_info *linux_enable_btrace (ptid_t);
> > +extern int linux_disable_btrace (struct btrace_target_info *);
> > +extern int linux_btrace_has_changed (struct btrace_target_info *);
> > +extern int linux_read_btrace (struct btrace_target_info *,
> > +                              int (*) (struct linux_btrace_block *, void *),
> > +                              void *);
>
> Tabs, not spaces, everywhere.
Fixed.
Regards,
Markus.