Subject: Re: SoC Part I: pbulk
To: None <tech-pkg@netbsd.org>
From: Joerg Sonnenberger <joerg@britannica.bec.de>
List: tech-pkg
Date: 05/18/2007 12:38:07
On Wed, May 16, 2007 at 08:44:32PM +0200, Hubert Feyrer wrote:
> >The pbulk system is modular and allows customisation of each phase. This
> >is used to handle full vs. limited bulk builds, but also to handle
> >environmental differences for parallel builds.
>
> Will/can any of the existing code be reused?
It is hard to do so. I want to get rid of the Perl parts for obvious
reasons. The post-phase works quite a bit differently; e.g. the check
for restricted packages uses the output of the scan phase. The build
phase has the logic for the up-to-date check and dependency installation
interwoven, but the way it works has a few issues. E.g. (horribly
broken) packages like p5-math-pari can rebuild half of Xorg when not
using the modular code.
> Why would a package have missing dependencies?
> (Guesswork: is this to work around broken/inconsistent pkgsrc, or does one
> have to list all the dependencies for a partial build?)
> (What problem are you trying to solve here?)
For partial builds, the list is created automatically; see below. This
is to ensure that the view of the tree from the scanning phase is as
consistent as possible with the build view. We had some bad interactions
of installation order and builtin.mk before; I want to error out on at
least one side of those. Another example is what happened with
p5-math-pari -- building a different set of dependencies was
historically necessary to get e.g. Python packages that don't support
the default version to work, but that's done a lot more cleanly now.
In short, both to find broken and inconsistent parts of the build.
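In (made-up) Python, the kind of consistency check I have in mind could
look like this; the names and data layout are invented for illustration:

    # Sketch: error out when the dependency list seen at build time
    # differs from what the scan phase recorded for the same package.
    def check_consistency(pkgname, scanned_depends, build_depends):
        scanned, actual = set(scanned_depends), set(build_depends)
        if scanned != actual:
            raise RuntimeError(
                "%s: scan/build mismatch (missing: %s, extra: %s)"
                % (pkgname, sorted(scanned - actual),
                   sorted(actual - scanned)))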
>
>
> >_ALL_DEPENDS are used as hints, but can be overridden.
> >
> >For partial builds, two different mechanisms could be used; I'm not sure
> >which is better. For both, a list of directories to scan is given.
> >For the first idea, pbulk-index is called, which gives all possible
> >packages to create. Those are filtered by a pattern. The second approach
> >is to list the options directly and call pbulk-index-item instead.
>
> (pbulk-index?)
>
> What is that filtering pattern - an addition to the list of directories in
> pkgsrc for pkgs to build, defined by the pkg builder?
Variant I: www/ap-php PKG_APACHE=ap13
Variant II: www/ap-php ap13-*
Both say "don't build all variants in www/ap-php, but only those
specified".
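For Variant II the filtering is just a glob match on the PKGNAMEs that
pbulk-index reports; a rough Python sketch (function name and example
versions are made up):

    import fnmatch

    # Sketch: keep only those packages from the pbulk-index output of
    # one directory whose PKGNAME matches the given pattern.
    def filter_index(index_entries, pattern):
        return [pkg for pkg in index_entries
                if fnmatch.fnmatch(pkg, pattern)]

    # filter_index(["ap13-php-5.2.1", "ap2-php-5.2.1"], "ap13-*")
    # -> ["ap13-php-5.2.1"]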
> >Dependencies are resolved for partial builds as well, but missing
> >dependencies are searched for by calling pbulk-index in the given
> >directories. Those that fulfill the patterns are added to the list and
> >the process is repeated.
>
> I'm not 100% sure which depends you mean here - if it's in pkgsrc, it was
> either already built and is available as a binary pkg that can be
> pkg_add'ed, or it can be built. What is that pattern for, and is it
> something different than the one mentioned above?
I have listed www/ap-php in the to-build list. It now needs to figure
out how to get Apache, right? It gets a directory (later possibly a list
of directories, see pkgsrcCon) and calls pbulk-index to find what can be
built. The first package that can fulfill the dependency of ap-php gets
added to the to-build list.
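Roughly, the resolution loop looks like this; a Python sketch with
invented helpers (pbulk_index stands in for running the real
pbulk-index, satisfies for pkgsrc's dependency pattern matching):

    # Sketch: for every unsatisfied dependency, scan the given
    # directories and add the first package that can fulfill it,
    # then repeat the process for the newly added packages.
    def resolve(to_build, scan_dirs, pbulk_index, satisfies):
        work = list(to_build)
        while work:
            pkg = work.pop()
            for pattern in pkg.depends:
                if any(satisfies(p, pattern) for p in to_build):
                    continue        # already on the to-build list
                for d in scan_dirs:
                    cand = next((p for p in pbulk_index(d)
                                 if satisfies(p, pattern)), None)
                    if cand is not None:
                        to_build.append(cand)
                        work.append(cand)
                        break
        return to_build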
> Nuking $PREFIX is fine & fast, but please consider update builds, e.g.
> from a pkgsrc stable branch. No need to rebuild everything there (which
> was about THE design criterion for the current bulk build code :).
Incremental builds are not affected. The Python version currently has
the following rules:
- check if the list of dependencies changed -> rebuild
- check if any of the depending packages changed -> rebuild
- check if any of the recorded RCS IDs changed -> rebuild
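As a Python sketch (attribute names invented), the decision is just:

    # Sketch: a package is rebuilt when its dependency list, any
    # package it depends on, or any recorded RCS ID has changed.
    def needs_rebuild(old, new, rebuilt_pkgs):
        if old.depends != new.depends:
            return True     # list of dependencies changed
        if any(dep in rebuilt_pkgs for dep in new.depends):
            return True     # a package we depend on was rebuilt
        if old.rcs_ids != new.rcs_ids:
            return True     # a pkgsrc file of this package changed
        return False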
> BTW, do you take any precaution to make sure the build is done on a
> "clean" system, e.g. one with a freshly installed $OS release in a chroot?
No, that's outside the scope.
> Also: will the bootstrap kit be mandatory on NetBSD systems, too?
> It should have all this in base, and while it's probably fast to build
> compared to the bulk build's time, for a selective build it seems like
> overkill to me.
Given the issues with requiring newer pkg_install versions, my answer is
"it most likely will be". If not, it is easy to prepare a tarball
without it. I haven't worried about that part too much yet.
> How does this FTP/rsyncing before (yumm) play with the distributed
> environment (NFS) mentioned in answers to the proposal? Or is this for a
> setup with no common file system? (guessing)
The latter. The way I'm currently running concurrent builds is with an
NFS share for each of PACKAGES, DISTDIR and the log area.
> The first and last part of your proposal look pretty much the same as the
> current system, and the "job scheduling" in the middle seems to be the
> interesting part. Guessing that you already have this running, I wonder
> what the reasons are to not reuse the current code. (Just curious.)
>
> BTW, did you take SMP machines into account in any way?
This project started to solve two major issues with the current code:
- build more than one package from one location in the tree.
- build more than one job in parallel.
Both are very hard to do in the current structure. E.g. you have to use
a PKGNAME-centered view for the first to work, which would have meant
quite a bit of reorganisation.
Doing the scheduling within make would be possible, but only if the
number of parallel jobs is known in advance, etc.
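To illustrate the difference, a Python sketch of the scheduling idea
(all names invented): packages are handed out as soon as their
dependencies are built, without fixing the number of jobs up front.

    # Sketch: compute the set of buildable packages on the fly instead
    # of encoding a fixed job count into a static makefile; a real
    # scheduler would hand the ready packages to parallel clients.
    def schedule(packages, build):
        built, failed = set(), set()
        pending = list(packages)
        while pending:
            ready = [p for p in pending
                     if all(d in built for d in p.depends)]
            if not ready:
                failed.update(pending)  # deps of the rest have failed
                break
            for pkg in ready:
                pending.remove(pkg)
                (built if build(pkg) else failed).add(pkg)
        return built, failed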
Joerg