A new strategy for internals documentation

John Gilmore gnu@toad.com
Sat Aug 10 02:24:00 GMT 2013


> The ironic thing is that no one working on a new free software project
> today would react to that situation by saying "we need an internals
> manual".  The project would add a wiki page, send an email, or maybe a
> blog posting, or a long comment block in the source code.  

Or a README file in the source code.  But the info in gdbint is too
voluminous for that -- or for an email, a blog posting, or a long
comment block -- because it aggregates info from a lot of emails
and blog postings and comment blocks.

I mean, we could paste the info about symbol tables from gdbint into
symtab.h, as a long multipage comment block.  I doubt that that would
encourage people to update it any more than they do now.  Maybe more
people would notice it because they would find it in grep results,
since it'd be in the main source directory rather than in the doc
directory.

I was a long-term volunteer for One Laptop Per Child.  OLPC used a
Wiki for its documentation.  You can see that the older stuff got well
documented, partly by staff and partly by volunteers like me.  It fell
rapidly out of date as soon as the staff shrank and most volunteers
went on to other things.  You can see it today at
http://wiki.laptop.org .  It doesn't even mention the latest OLPC
hardware or software (the XO Tablet on sale at Walmart)!

And if you want the Wiki pages that match your OLPC hardware (they
had four different vintages), operating system release (they've gone
through about 13), etc., you are SOL -- you have to find them manually,
possibly digging through the revision history of particular pages.

And when wiki.laptop.org goes down, which it will one day, then
only the Internet Archive will have a copy -- if we're lucky.  At
least the GDB documentation is in the source code, so if GDB survives,
the doc also survives.

> If there were a half-dozen files to edit in sync, these days there
> is more likely to be intense pressure to refactor that code and
> bring it back down to one place to edit, which would then be the
> place for the documentation about it.

That's a lovely concept.  But I don't think even the amazing all-powerful
C++ has managed to eliminate the requirement to add things to both header
files and source files.  And if you change the classic way from:

  *  Add definitions for a new something to a struct or enum in category.h
  *  Add code to use that new something in category.c
  *  Add a new source module something.c that implements the something
  *  Add something.c to the Makefile

to:

  *  Add something.c to the directory, the Makefile builds and links in
     every .c automatically, initializers build a dynamic data
     structure that enumerates all things of this category, etc...

... then you have scattered the info about "things of this category"
into X number of files, rather than having a central place to ask,
"How many things fall into this category?  Which are they?"

The classic problem with object-oriented code like this is that the
information that would let you wrap your head around a concept (like a
"target" or an "architecture" or a "symtab") is scattered among dozens
of source files, and even merely instantiating one of them results in
code running in a dozen files that inherit from each other in obscure
ways.

	John
