OSS-QM global patch repository

Enrico Weigelt weigelt@metux.de
Tue Mar 13 07:40:00 GMT 2007

* Robert Schwebel <r.schwebel@pengutronix.de> wrote:


> Yes, sure, we do it the same way. 


> But once you have it in your patch stack, the package works. 
> Everything else is extra effort that nobody is really 
> interested in.

In short: yes.

In the long run, maybe not (I hope so ;o): if our patches don't 
make it upstream, we often have to fix things for every new 
release. That sucks and eats up resources. It's really frustrating 
when really obvious fixes (e.g. just prepending $(DESTDIR) to 
install dirs, or replacing hardcoded install dirs with proper 
variables) take months or even years to get into upstream. Some 
projects are really ugly about that (e.g. apache), others are very 
courteous and quick (e.g. zlib - they even added the build rules 
for treebuild, my own buildsystem :))

The big trick is to find the optimum: invest no more than 
necessary, but enough to cooperate with those projects that are 
worth it. I've already had to learn the lesson that some projects, 
e.g. mplayer or gtk, simply aren't interested in my work, so I 
don't bother with them at all anymore.

> > So, we're doing quite the same, we want quite the same, now let's
> > get our repositories compatible. 
> > 
> > Mine has a quite simple structure: one directory per package
> ditto here (although we have a directory "generic" inside, which is
> historic and could be removed some day).

Okay, I'll have a look at your repository. I'm now reorganizing mine: 
raw patches and per-release patchlists will be completely separated. 
There's now one dir, with per-package subdirs, containing all the 
collected patches. Each vendor (e.g. pengutronix, metux, ...) gets 
its own dirs containing the list files, one per package. These 
listfiles have the same structure as the old (per-package) 
"patches.db" files: <release-version>+":"+<list of patch names>
This way we can both use the same patch repositories while making
our own decisions about which patches to use in a particular release.
All vendors should put their listfiles into the global repository,
so others can easily see what a given vendor has done.
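To make that concrete, here's a tiny sketch (the package name, patch
names and file layout are made up for illustration):

```shell
# Hypothetical listfile "zlib.list" in a vendor dir; each line is
# <release-version>:<space-separated patch names>, same structure
# as the old per-package patches.db files.
cat > zlib.list <<'EOF'
1.2.2:zlib-1.2.2-destdir.diff
1.2.3:zlib-1.2.3-destdir.diff zlib-1.2.3-cflags.diff
EOF

# Look up which patches this vendor selected for one release:
release="1.2.3"
line=$(grep "^${release}:" zlib.list)
patches="${line#*:}"
echo "$patches"
```

A buildsystem would then just fetch those patch files from the shared
per-package patch dir.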

> > (names probably normalized -- see oss-qm wiki)
> You cannot enforce naming policies; we use the normal package name 
> for that, which is usually what you get when you extract a package. 

Well, I don't really have a naming policy ;-o 
But sometimes some (individual) renamings are necessary,
e.g. on collisions or really stupid names. In 99.99% of cases the
package name is taken from upstream - in fact, it's the tarball name.

> There are activities under way to restructure our packages to 
> have something like
> memedit-1.2.3/
> memedit-1.2.3/patches/
> memedit-1.2.3/patches/memedit-1.2.3-fix-something.diff
> memedit-1.2.3/memedit.make
> memedit-1.2.3/memedit.in

What do your *.make and *.in files do?
For configuring / building the package?

> Regarding the patch format, we have our own header on top of it, 
> to specify things like the upstream status, the source, the error, etc. 

This looks fine. I'm going to use it, too.
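Just to sketch what I have in mind (the field names here are purely
illustrative, not your actual header format):

```
# Upstream-Status: submitted
# Source: metux
# Error: "make install DESTDIR=..." still writes into /usr
# Description: prepend $(DESTDIR) to the install directories
```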

> In the future it will most probably be the canonical patch format 
> used for Linux.

What does this look like?

> > there's a patches.db file which lists patches per (normalized)
> > release, and the patches are sitting within the src subdir. 
> The community tool for getting patches in order is quilt; so we use
> quilt-like series files in our patch repositories. It also makes
> patch development easy: just link the patch dir into a breaking project
> and use quilt to maintain the patch.

Hmm, I haven't had the time to read the docs yet. Perhaps you could 
give some quick examples. In the end I need exactly one patch per
release, at some normalized location. (My buildsystem does not care
about individual patches, it just applies one single patch.)
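A quick sketch of how the quilt model could map onto that (quilt itself
isn't needed for this; the file names are made up): a quilt "series"
file just lists patch names in apply order, so a series can be
flattened into the one-patch-per-release form by concatenation:

```shell
# Build a tiny example patch dir with a quilt-style series file.
mkdir -p patches
printf 'fix-destdir.diff\nfix-cflags.diff\n' > patches/series
printf -- '--- a/Makefile  (destdir fix)\n' > patches/fix-destdir.diff
printf -- '--- a/Makefile  (cflags fix)\n'  > patches/fix-cflags.diff

# Concatenate in series order into one combined patch:
while read -r p; do
    cat "patches/$p"
done < patches/series > combined.patch
```

With quilt installed, the same series is applied with "quilt push -a";
the flattening above is only for buildsystems that want one patch.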


> > And we also have tools that automatically notify the package 
> > maintainers.
> Forget it. Patch feeding is communication, and it needs _active_ work
> with upstream. 

Well, at least the initial mail, opening a bug, etc. can be done 
automatically. I've written a small tool which can open bugs from 
the command line (currently on bugzilla, but other issue trackers
can be added easily).


> We agree that libtool has problems. But it is the same with libtool as
> with for example cmake: people come and want to do everything better
> than what has been done by skilled developers for years. They make
> something new, and in the end it fixes the deficiencies 100% perfectly,
> but on the way 50% of the old functionality is lost. 

Well, my unitool (and its libtool-like frontend) works quite well
for me - at least much better than libtool. Thanks to the frontend
(and a minor patch to autoconf, which just puts a call to lt-unitool 
into libtool.sh), it works as a drop-in replacement.

Yes, I sometimes belong to those folks who want to make the whole
world better, but of course I know this is only possible for 
dedicated things and in small steps. (That's the reason why I 
developed unitool and lt-unitool instead of trying to port all
packages to treebuild ;-O)

> CMake for example still cannot cross compile as far as I know, and it 
> breaks the known-for-decades way of building packages with ./configure 
> && make && make install, together with well-established methods like 
> --with-foobar=blub and --enable-baz.

Yes, that's one reason why I don't like it. Although I don't like 
the autotools very much, at least the configure syntax is quite 
handy and should be provided by non-autotools packages, too.

> So IMHO the only solution is: fix libtool.

I tried, but I didn't see any realistic chance - it would have
ended up as a complete rewrite. So my decision was to write a
drop-in replacement as a unitool frontend. 

Unitool follows a completely different philosophy than the whole 
autotools family. It has a quite strictly defined and platform 
independent syntax. Libtool, for example, intercepts the (platform
dependent) command line and rewrites things there - a total no-go
for unitool. You have to tell unitool what you want, e.g. "compile 
this .c file to .o" or "link these objects together into a lib / 
executable", and unitool handles all the platform and toolchain
dependent stuff and calls the actual tools. Of course this needs a
customized unitool for each platform, but that only has to be done 
*once*.
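Just to illustrate the idea (the request syntax and names here are
made up, not the actual unitool interface): a tiny dispatcher that
turns declarative requests into concrete toolchain commands, instead
of rewriting an intercepted command line:

```shell
# Toy dispatcher: abstract, platform-independent requests go in,
# concrete toolchain commands come out. A real tool would execute
# them; here we just print what would be run.
unitool_sketch() {
    case "$1" in
        compile)  echo "cc -c $2 -o $3" ;;      # one .c -> one .o
        link-exe) out="$2"; shift 2
                  echo "cc -o $out $*" ;;       # objects -> executable
        *)        echo "unknown request: $1" >&2; return 1 ;;
    esac
}

unitool_sketch compile foo.c foo.o
unitool_sketch link-exe prog foo.o bar.o
```

The real work, of course, sits in the per-platform mapping behind
each request.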

The really ugly part is the libtool-like frontend: it has to guess
what is actually meant. That took the most time to get running. 
At least on gnu+x86 it seems to work for now. 


> So for the moment I'd say that we are generally interested in such a
> project, given that it is structured in a way that we can discuss a
> generic patch + documentation format and build automatisms to push
> patches from the build systems into a QM project. Let's do some more
> thinking about how to structure it correctly...

Great :)

So, what do you think about my new patch dir structure (which I
posted recently), combined with your headers?

To work in a global repository without conflicts I propose:

* each vendor has its own subdir within each package dir. 
* the vendor is free to do whatever he wants in this subdir,
  but should still be careful. 
* the oss-qm group is also a vendor. Before patches go in here, 
  they have to be checked very carefully and approved by all 
  core vendors, and are then considered production-stable.
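Under that proposal, a package dir could look roughly like this (the
names are just illustrative):

```
zlib/
zlib/patches/                  all collected raw patches
zlib/metux/zlib.list           metux' per-release patch selection
zlib/pengutronix/zlib.list     pengutronix' selection
zlib/oss-qm/zlib.list          checked + approved, production-stable
```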

BTW: we could add some stability grading to the patch headers.


> And, of course, it would have to happen under a neutral organisation. 
> I also wouldn't mind letting this thing be quality.pengutronix.de to 
> get the marketing effect ;) but if we want it to be a success, it must 
> be strictly vendor-neutral. 

Well, a neutral organisation is good, but I don't see why we 
should abstain from a little bit of "marketing side effect" ;-o
At the moment this group would consist of pengutronix + metux,
hopefully more in the future. We're doing a good job and the 
world should know who we are :)

Of course we have to make clear that this project itself is
not a commercial one. Other companies host and fund OSS
projects too, and (IMHO) that works well. 

> LTP could be a candidate, or the Linux Foundation. Does anybody have contacts?

Well, if such a foundation someday wants to take over the project,
we'll see. But for now I don't see any value in investing time
in that. 

(Yeah, some time ago I also believed in those ideals, but that
brought absolutely nothing, so I decided to drop them ;-o)

 Enrico Weigelt    ==   metux IT service - http://www.metux.de/
 Please visit the OpenSource QM Taskforce:
 Patches / Fixes for dozens of packages in dozens of versions:

