Angstrom / BeagleBoard image build practices

I am new to bitbake/openembedded etc and would like to know what is
the intended approach to building the software.

I have been able to build the Angstrom images from the source code -
but was wondering how I should keep the source up-to-date.

Should I be running:

'git pull' followed by 'bitbake beagleboard-demo-image' to make sure I
am running the latest? This appeared to cause me some build problems
when I last did it, so I ended up deleting the temp files and having to
recompile everything (which worked, although it took a long time).
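Concretely, the cycle I have been running looks like this (the directory
names are just my layout, not anything canonical):

```shell
# Update the OpenEmbedded metadata (directory names are examples):
cd ~/oe/org.openembedded.dev
git pull

# Rebuild the image; bitbake should only rerun recipes that changed:
cd ~/oe/build
bitbake beagleboard-demo-image

# Last resort after a broken update: wipe the build output and start
# over (everything gets recompiled, which is what took so long):
# rm -rf tmp
```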

There are also a number of different kernel branches available to
build. Would you normally check out a branch within the build tree and
rebuild, or do you build these branches separately, outside of bitbake?

Thanks.

Here's my approach: I'm doing user-space development, so I don't need
to rebuild the kernel. I just download Koen's Ångström demo binary
image and file systems as needed. (See http://elinux.org/BeagleBoard#Binaries.)
That way I'm using a stable release. (By "as needed", I mean that if
I can't get something to work and there is a new demo available, it's
worth trying to see if it's fixed in the new demo.)

I've never done Linux kernel development, but my understanding is that
you either take a snapshot and work from there or keep up to date with
the expectation that things may break unpredictably. You should
monitor the IRC channel (http://beagleboard.org/discuss) to see what's
happening.

Hi,

pjgeary@hotmail.com wrote:

> I am new to bitbake/openembedded etc and would like to know what is
> the intended approach to building the software.
>
> I have been able to build the Angstrom images from the source code -
> but was wondering how I should keep the source up-to-date.
>
> Should I be running:
>
> 'git pull' followed by 'bitbake beagleboard-demo-image' to make sure I
> am running the latest? This appeared to cause me some build problems
> when I last did it, so I ended up deleting the temp files and having to
> recompile everything (which worked, although it took a long time).

Normally this is the right way to do it. When someone updates a build
recipe in OE, the package revision (the PR variable in the recipe) is
increased. This is what causes the package to be rebuilt.

Sometimes, however, packages which depend on (i.e. build against) the
one whose revision was increased must be rebuilt as well. This behaviour
is not the default, but AFAIK there is an option you can add to your
local.conf to enable it.
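To illustrate the PR mechanism, a recipe bump looks like this (a
hypothetical recipe fragment; the package name and version are made up):

```
# somepackage_1.0.bb (hypothetical recipe)
DESCRIPTION = "Example package"

# Bumped from "r1" to "r2"; on the next build bitbake sees the new
# revision and rebuilds (and repackages) this recipe.
PR = "r2"
```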

@Koen: Can you help here?

Another useful option that you should have in your local.conf is
packaged staging. It helps tremendously in preventing the staging
directory from being littered with junk from older recipes. If you have
not already done so, enable it by adding INHERIT += "packaged-staging".
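For reference, the line in question is just a config fragment in your
local.conf (the path below assumes the usual build layout):

```
# conf/local.conf
INHERIT += "packaged-staging"
```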

Regards
Robert

Angstrom has had that turned on by default for months now :-)

Hi Koen,

Joining the thread to ask about a point on much the same subject.

To develop on an Angstrom release, I only need the header files to
compile my own packages. So I also have to run the heavy bitbake build
just to produce these .h files. I understand the need to shrink the
distribution, but why not make a separate archive of all the installed
header (.h) files for developers? It would be a sort of Angstrom-devel
package.

Regards,

Laurent

There are *-dev packages, which export useful header files, e.g. libz-dev, ncurses-dev etc. So, if you want to develop on the target, you can install those packages. They are in the deploy directory of your build.
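For example, to locate those packages in the build tree (the exact path
under deploy depends on your DISTRO/MACHINE settings, so treat this as a
sketch):

```shell
# Run from your build directory; the deploy layout varies per setup.
find tmp/deploy -name '*-dev_*.ipk'

# Then copy a package to the target and install it there with opkg, e.g.:
#   scp tmp/deploy/glibc/ipk/libz-dev_*.ipk root@beagleboard:/tmp/
#   opkg install /tmp/libz-dev_*.ipk
```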

PS. Sorry for top-posting - too lazy to reconfigure my Outlook :-)

Does the org.openembedded.dev directory contain any build information
on what has been built in the system directory?

I would like to keep a copy of the org.openembedded.dev directory
before updating it with anything new, so that I can recover to
something 'working' in case I encounter build problems. I can also
diff changes between builds to keep track of what is happening.

Should I just move my org.openembedded.dev directory to another
location and do a fresh clone of the tree each time? Or copy the
directory to another location and do a git pull?

Are there any actual differences between the two options as far as
what has already been built in the system directory is concerned? I
don't want to rebuild packages for no reason if cloning into a clean
directory causes packages that are already built to be built again.
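For what it's worth, a lightweight alternative I'm considering is to
let git itself record the known-good state instead of copying the whole
directory (a sketch; the tag name is made up):

```shell
cd org.openembedded.dev

# Record the currently-working revision before pulling anything new:
git tag known-good-$(date +%Y%m%d)

# Update; if the new tree breaks the build, check the tag out again
# (this leaves you on a detached HEAD at the recorded revision):
git pull
# git checkout known-good-20090101

# To diff what changed between builds:
git log --oneline known-good-$(date +%Y%m%d)..HEAD
```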

Thanks,

Paul