new beaglebone image-builder script fails when downloading Cloud9

I tried to use the image builder linked at the end of http://www.beagleboard.org/blog/2014-01-04-happy-new-year/ to build a new Debian based image for a BBB. I used the following commands in my home directory to download and run the image builder script.

mkdir BBB
cd BBB
git clone https://github.com/beagleboard/image-builder
cd image-builder
./beagleboard.org_image.sh

The script worked until it tried to clone the Cloud9 git repository. There were many subsequent errors after that, as copied below.

Cloning into '/opt/cloud9'...
error: Problem with the SSL CA cert (path? access rights?) while accessing https://github.com/ajaxorg/cloud9.git/info/refs
fatal: HTTP request failed
/bin/chown: invalid user: `debian:debian'
Cloning into '/var/www'...
error: Problem with the SSL CA cert (path? access rights?) while accessing https://github.com/beagleboard/bone101/info/refs
fatal: HTTP request failed
Cloning into '/var/lib/cloud9'...
error: Problem with the SSL CA cert (path? access rights?) while accessing https://github.com/beagleboard/bonescript/info/refs
fatal: HTTP request failed
/bin/chown: invalid user: `debian:debian'
Cloning into '/opt/source/libsoc'...
error: Problem with the SSL CA cert (path? access rights?) while accessing https://github.com/jackmitch/libsoc/info/refs
fatal: HTTP request failed
final.sh: 160: cd: can't cd to /opt/source/libsoc/
final.sh: 161: final.sh: ./autogen.sh: not found
final.sh: 162: final.sh: ./configure: not found
final.sh: 163: final.sh: make: not found
final.sh: 164: final.sh: make: not found
final.sh: 165: final.sh: make: not found
Cloning into '/opt/source/Userspace-Arduino'...
error: Problem with the SSL CA cert (path? access rights?) while accessing https://github.com/prpplague/Userspace-Arduino/info/refs
fatal: HTTP request failed
/bin/sed: can't read /etc/ssh/sshd_config: No such file or directory
/bin/sed: can't read /etc/ssh/sshd_config: No such file or directory

Does anyone know what went wrong, or more importantly, what I need to do to fix it?

TIA
Dennis Cote

I tried to use the image builder linked at the end of
http://www.beagleboard.org/blog/2014-01-04-happy-new-year/ to build a new
Debian based image for a BBB. I used the following commands in my home
directory to download and run the image builder script.

mkdir BBB
cd BBB
git clone https://github.com/beagleboard/image-builder
cd image-builder
./beagleboard.org_image.sh

The script worked until it tried to clone the Cloud9 git repository. There
were many subsequent errors after that, as copied below.

I'm guessing this was on x86?

So what OS and version, and what version of qemu-arm-static?

qemu-arm-static -version
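
It's also worth confirming that the binfmt registration for qemu is in place; something along these lines should work (the "qemu-arm" entry name is the usual one on Debian/Ubuntu with qemu-user-static installed, but may differ):

ls /proc/sys/fs/binfmt_misc/              # should include a qemu-arm entry
cat /proc/sys/fs/binfmt_misc/qemu-arm     # shows "enabled" plus the interpreter path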

Cloning into '/opt/cloud9'...
error: Problem with the SSL CA cert (path? access rights?) while accessing
https://github.com/ajaxorg/cloud9.git/info/refs
fatal: HTTP request failed
/bin/chown: invalid user: `debian:debian'

I'm not 100% sure if this is qemu to blame or the network went down..

BTW: due to issues with qemu across the board, I only run this script
on native arm hardware (Debian Jessie)..

Regards,

I’m guessing this was on x86?

So what OS and what version and

Yes, you are correct. A Ubuntu 13.10 VM running on virtualbox.

Linux dennis-VirtualBox 3.11.0-15-generic #23-Ubuntu SMP Mon Dec 9 18:17:04 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

what version of qemu-arm-static

qemu-arm-static -version

qemu-arm version 1.5.0 (Debian 1.5.0+dfsg-3ubuntu5.2), Copyright (c) 2003-2008 Fabrice Bellard

I’m not 100% sure if this is qemu to blame or the network went down…

BTW: due to issues with qemu across the board, I only run this script
on native arm hardware (Debian Jessie)…

OK I can try to build with a Ubuntu 13.10 install I have running on an SD card on the BBB.

Linux ubuntu-armhf 3.8.13-bone30 #1 SMP Thu Nov 14 06:23:24 UTC 2013 armv7l armv7l armv7l GNU/Linux

How do you get the image off the SD card used to build it and onto another SD card to test booting? I guess I’ll copy it up to another computer (like my Ubuntu PC) and write the image to an SD card there.

Thanks.

Dennis Cote

Yes, you are correct. A Ubuntu 13.10 VM running on virtualbox.

Linux dennis-VirtualBox 3.11.0-15-generic #23-Ubuntu SMP Mon Dec 9 18:17:04
UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

what version of qemu-arm-static

qemu-arm-static -version

qemu-arm version 1.5.0 (Debian 1.5.0+dfsg-3ubuntu5.2), Copyright (c)
2003-2008 Fabrice Bellard

For me.. Sometimes this version works..

voodoo@work-e6400:~$ qemu-arm-static -version
qemu-arm version 1.7.0 (Debian 1.7.0+dfsg-2), Copyright (c) 2003-2008
Fabrice Bellard

depending on mode the git calls just fail..

I'm not 100% sure if this is qemu to blame or the network went down..

BTW: due to issues with qemu across the board, I only run this script
on native arm hardware (Debian Jessie)..

btw, this should help the script at least complete on a total
qemu/git failure..

OK I can try to build with a Ubuntu 13.10 install I have running on an SD
card on the BBB.

Linux ubuntu-armhf 3.8.13-bone30 #1 SMP Thu Nov 14 06:23:24 UTC 2013 armv7l
armv7l armv7l GNU/Linux

How do you get the image off the SD card used to build it and onto another
SD card to test booting? I guess I'll copy it up to another computer (like
my Ubuntu PC) and write the image to an SD card there.

Just copy it via any means, rsync/apache/etc.
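
For example, from the PC side, something like this should do it (the file name is just an example from a later run in this thread, the host and paths are placeholders, and /dev/sdX must be double-checked before writing, since dd will happily overwrite the wrong disk):

rsync -avP ubuntu@<bbb-ip>:image-builder/deploy/bone-debian-7.3-2014-01-14-4gb.img.xz .
xzcat bone-debian-7.3-2014-01-14-4gb.img.xz | sudo dd of=/dev/sdX bs=1M
sync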

When you run ./beagleboard.org_image.sh, your final "base" image
will be under the deploy directory.

You'll also see a "ship.sh" script, this created the final 3 images i
uploaded to here:

http://rcn-ee.net/deb/testing/2014-01-10/

Normally i copy these 2 files (ship.sh/debian*-.tar) to an x86 and
then run ./ship.sh as the xz compression takes a lot of resources
before uploading..

Regards,

As a point of reference, I have run all the MachineKit builds on a
fairly potent amd64 Debian Wheezy box (16-core Opteron 6320) with the
stock Debian qemu version (1.1.2+dfsg-6a).

I haven't had any issues completing the Debian install inside the
chroot, and I even build the user-mode Xenomai tools and LinuxCNC from
source (pulled via git running in the ARM chroot environment).

So...qemu works fine here, but then I haven't tried playing with Cloud9.
I'm still running the rcn-ee_image.sh script, and building just the
MachineKit release.

Maybe I'm just lucky?!? If so, that would be a first... :-)

My build on the BBB itself just failed in exactly the same fashion. I got the same error after starting to clone Cloud9, and also the same follow-on errors, since I hadn't fixed the script as you did in your patch.

So this does not look like it is caused by qemu.

Interestingly, I think this build completed slightly faster than the x86 build. I think it was mostly because the network speed for the downloads was better. I suspect the network translation slows down the virtual machine build enough to make a difference. On the other hand, it could have been the time of day or something else. Ultimately the downloads were about twice the speed on the BBB as on my VM, and both eventually led to the same ISP.

In any case, the native build is definitely not radically slower than the PC build (as I had expected).

For me… Sometimes this version works…

voodoo@work-e6400:~$ qemu-arm-static -version
qemu-arm version 1.7.0 (Debian 1.7.0+dfsg-2), Copyright (c) 2003-2008
Fabrice Bellard

depending on mode the git calls just fail…

I’m not 100% sure if this is qemu to blame or the network went down…

BTW: due to issues with qemu across the board, I only run this script
on native arm hardware (Debian Jessie)…

btw, this should help the script at least complete on a total
qemu/git failure…

https://github.com/beagleboard/image-builder/commit/7e308e293e6d524e00406590a8cefc57d2173115

How do I use this link to update the script? I’m quite new to git.

Just copy it via any means, rsync/apache/etc.

When you run ./beagleboard.org_image.sh, your final “base” image
will be under the deploy directory.

You’ll also see a “ship.sh” script, this created the final 3 images i
uploaded to here:

http://rcn-ee.net/deb/testing/2014-01-10/

Normally i copy these 2 files (ship.sh/debian*-.tar) to an x86 and
then run ./ship.sh as the xz compression takes a lot of resources
before uploading…

Thanks for the tips. I’ll do that when I can get the build to complete.

Dennis Cote

My build on the BBB itself just failed in *exactly* the same fashion. I got
the same error after starting to clone Cloud9. Also, the same follow on
errors since I hadn't fixed the script as you did in your patch.

I think this was the real issue..

I had re-ordered the pkg list to make things easier for another user,
but missed a space, so the default package set never got installed.
Running a full rerun right now and so far it looks good..

So this does not look like it is caused by qemu.

Interestingly, I think this build completed slightly faster than the x86
build. I think it was mostly because the network speed for the downloads was
better. I suspect the network translation slows down the virtual machine
build enough to make a difference. On the other hand it could have been the
time of day or something else. Ultimately the downloads were about twice the
speed on the BBB as on my VM, and both eventually led to the same ISP.

In any case, the native build is definitely not radically slower than the PC
build (as I had expected).

Regards,

Google answered that one I think. I did

git pull

in the directory I had cloned from github. It updated the correct file as expected.
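
For what it's worth, listing the last few commits is an easy way to confirm the fix actually landed (the commit id is just the one from the link above):

cd ~/BBB/image-builder
git pull
git log --oneline -5    # 7e308e2... should show up near the top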

Dennis Cote

OH sorry, missed that.. Yeah, just "git pull"; we don't normally bother
to create any branches/etc, this repo is pretty linear..

Just a follow-up to my previous message: it just finished creating the
image fine on my native quad/a9.. So you shouldn't have any more
issues with the base script..

Regards,

How do I use this link to update the script? I'm quite new to git.

Google answered that one I think. I did

git pull

in the directory I had cloned from github. It updated the correct file
as
expected.

OH sorry missed that.. Yeah, just "git pull" we don't normally bother
to create any branches/etc this repo is pretty linear..

Just a follow on my previous message, it just finished creating the
image fine on my native quad/a9.. So you shouldn't have any more
issues with the base script..

Hi Robert,

What is the quad/a9?

BTW, have you done any work on OMAP5?

Regards,
John

Hi Robert,

What is the quad/a9?

It's just the imx6 1Ghz/quad/2GB/sata system from boundary devices..
It's nice and fast when doing native builds. (such as chromium)

BTW, have you done any work on OMAP5?

I have one.. It doesn't really like to boot at the moment. There are a
few nice git repos queued for v3.14-rc0, so I'll be revisiting it in a
week or two.

Regards,

Hi Robert,

What is the quad/a9?

It's just the imx6 1Ghz/quad/2GB/sata system from boundary devices..
It's nice and fast when doing native builds. (such as chromium)

Nice.

BTW, have you done any work on OMAP5?

I have one.. It doesn't really like to boot at the moment. There are a
few nice git repos queued for v3.14-rc0, so I'll be revisiting it in a
week or two.

Hi Robert,

I have one also. I just need the time to work on it. I've managed to run code on
the dual Cortex-M4s using GEL scripts, and next I want to get RPMSG
working. My goal is to do all real-time I/O, comms and time sync via the
Cortex-M4s and leave the Cortex-A15s to run apps.

Regards,
John

Hi Robert,

I haven’t had any luck yet. The native build on the BBB had a panic while building node.js. The cross build on my PC is still running (it was waiting for a password when I returned this morning).

Putty wouldn’t let me copy the text from the BBB console, so I had to do a screen capture. See file attached.

It looks like something went wrong with the SD card driver. I thought the root partition may have run out of space, but it had 1.9GB free after the reset.

I am using an 8GB SD card with ubuntu 13.10. Do you know how much space is required to run the native image builder script?

Also, is there a way to restart the script without redoing all the downloads etc again?

Thanks again.

Dennis Cote

panic.PNG

Hi Robert,

I haven't had any luck yet. The native build on the BBB had a panic while
building node.js. The cross build on my PC is still running (it was waiting
for a password when I returned this morning).

Putty wouldn't let me copy the text from the BBB console, so I had to do a
screen capture. See file attached.

Humm, i thought we fixed that.. what kernel version on your BBB?

It looks like something went wrong with the SD card driver. I thought the
root partition may have run out of space, but it had 1.9GB free after the
reset.

I am using an 8GB SD card with ubuntu 13.10. Do you know how much space is
required to run the native image builder script?

Humm, honestly I'm not sure on the "minimum" specs. I really don't
mess around with the script, using a quad A9 with 2GB of ram and a
fast sata drive..

I know python needs 256MB of dedicated memory when building nodejs..
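
On a 512MB board, a temporary swap file before kicking off the build is one common workaround (just a sketch, the script doesn't do this for you, and swapping to the same SD card will be slow and hard on the card):

sudo dd if=/dev/zero of=/var/swapfile bs=1M count=512   # create a 512MB swap file
sudo chmod 600 /var/swapfile
sudo mkswap /var/swapfile
sudo swapon /var/swapfile
# ...run the build, then clean up:
sudo swapoff /var/swapfile
sudo rm /var/swapfile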

The source *.tar is around 1.2G

The final image (in uncompressed form) is around 1.2G

Just started a fresh run right now, so i'll get those numbers..

Also, is there a way to restart the script without redoing all the downloads
etc again?

No on restarting..

But if you build a lot, look at setting up "apt-cacher-ng" on a local server..

then just set the apt-proxy variable

like: https://github.com/beagleboard/image-builder/blob/master/host/rcn-ee-host.sh#L12

Then on the 2nd run everything is fetched from the local cache server...
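
Roughly like this (the package name and port 3142 are the apt-cacher-ng defaults; copy the exact variable name and format from the rcn-ee-host.sh line linked above, the value here is only a placeholder):

sudo apt-get install apt-cacher-ng            # on whatever box acts as the cache
# then point the image-builder host config at it, e.g.:
apt_proxy="cache-server.local:3142/"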

Regards,

Humm, i thought we fixed that… what kernel version on your BBB?

ubuntu@ubuntu-armhf:~$ uname -a
Linux ubuntu-armhf 3.8.13-bone30 #1 SMP Thu Nov 14 06:23:24 UTC 2013 armv7l armv7l armv7l GNU/Linux
ubuntu@ubuntu-armhf:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 13.10
Release: 13.10
Codename: saucy

Humm, honestly I’m not sure on the “minimum” spec’s. I really don’t
mess around with the script, using a quad A9 with 2GB of ram and a
fast sata drive…

Just started a fresh run right now, so i’ll get those numbers…

OK. I’ll wait until then to try the native build again.

The good news is the cross build on my PC completed successfully. So I can try using your Wheezy image next time.

No on restarting…

But if you build a lot, look at setting up “apt-cacher-ng” on a local server…

then just set the apt-proxy variable

like: https://github.com/beagleboard/image-builder/blob/master/host/rcn-ee-host.sh#L12

Then on the 2nd run everything is fetched from the local cache server…

Again, thanks for the tips.

Dennis Cote

ubuntu@ubuntu-armhf:~$ uname -a
Linux ubuntu-armhf 3.8.13-bone30 #1 SMP Thu Nov 14 06:23:24 UTC 2013 armv7l
armv7l armv7l GNU/Linux
ubuntu@ubuntu-armhf:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 13.10
Release: 13.10
Codename: saucy

Hum, that should have been fine.. dpkg really stresses out mmc cards...

Humm, honestly I'm not sure on the "minimum" spec's. I really don't
mess around with the script, using a quad A9 with 2GB of ram and a
fast sata drive..

Just started a fresh run right now, so i'll get those numbers..

OK. I'll wait until then to try the native build again.

The good news is the cross build on my PC completed successfully. So I can
try using your Wheezy image next time.

2.8GB of space was needed, right as my sata drive took a dive in the lab.
(This drive has been abused, especially with sata bring-up on the
imx53/6, so it's not exactly 100% anymore..)

btw, you can also nuke the ./ignore directory as not everything gets
cleaned up before runs...
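
i.e. from the top of the image-builder checkout:

rm -rf ./ignore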

Regards,

Robert,

When the build on my PC was done, I ran the ship.sh script in the deploy directory. When I look at the files I see the following:

dennis@dennis-VirtualBox:~/BBB/image-builder/deploy$ ls -hl
total 3.1G
-rw-r--r-- 1 root root 1.2G Jan 15 09:02 armhf-rootfs-debian-wheezy-2014-01-14-src.tar
-rw-r--r-- 1 root root 1.7G Jan 15 09:46 BBB-eMMC-flasher-debian-7.3-2014-01-14-2gb.img
-rw-r--r-- 1 dennis dennis 254K Jan 15 09:38 BBB-eMMC-flasher-debian-7.3-2014-01-14-2gb.img.xz
-rw-r--r-- 1 dennis dennis 362M Jan 15 09:48 bone-debian-7.3-2014-01-14-4gb.img.xz
-rw-r--r-- 1 dennis dennis 300M Jan 15 09:05 debian-7.3-lxde-armhf-2014-01-14.tar.xz
-rwxr-xr-x 1 dennis dennis 689 Jan 15 09:05 ship.sh

I notice that the file sizes are different than your website which has:

      BBB-eMMC-flasher-debian-7.3-2014-01-10-2gb.img.xz   10-Jan-2014 10:52  358M
      bone-debian-7.3-2014-01-10-4gb.img.xz                10-Jan-2014 10:54  358M
      debian-7.3-console-armhf-2014-01-10.tar.xz           10-Jan-2014 10:36  302M

My flasher image seems to be substantially smaller than yours. Too small to be correct.

My debian… file has “lxde” in the name rather than “console”, and is about the same size even though lxde would imply an X11 based system as opposed to a console only system.

My 4G bone… image file is about the same size as yours.

Where does the armhf… file come from? I tried tracing through the build scripts, but I couldn’t see where it was created.

Do you know why any of these differences exist?

BTW I noticed that the tar command in ship.sh has no “-” before the xf options. The tar help says it should be there, but perhaps it is not required.
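
For what it's worth, tar accepts the traditional "old-style" bundled options without a leading dash, so both of these do the same thing (the file name is just a placeholder):

tar xf some-archive.tar     # old-style options, no dash
tar -xf some-archive.tar    # equivalent short-option form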

Thanks again.

Dennis Cote

When you run ./beagleboard.org_image.sh, your final "base" image
will be under the deploy directory.

You'll also see a "ship.sh" script, this created the final 3 images i
uploaded to here:

http://rcn-ee.net/deb/testing/2014-01-10/

Normally i copy these 2 files (ship.sh/debian*-.tar) to an x86 and
then run ./ship.sh as the xz compression takes a lot of resources
before uploading..

Robert,

When my PC was done I ran the ship.sh script in the deploy directory. When I
look at the files I see the following:

dennis@dennis-VirtualBox:~/BBB/image-builder/deploy$ ls -hl
total 3.1G
-rw-r--r-- 1 root root 1.2G Jan 15 09:02
armhf-rootfs-debian-wheezy-2014-01-14-src.tar
-rw-r--r-- 1 root root 1.7G Jan 15 09:46

BBB-eMMC-flasher-debian-7.3-2014-01-14-2gb.img
-rw-r--r-- 1 dennis dennis 254K Jan 15 09:38
BBB-eMMC-flasher-debian-7.3-2014-01-14-2gb.img.xz
-rw-r--r-- 1 dennis dennis 362M Jan 15 09:48

It looks like xz just died, it should remove
"BBB-eMMC-flasher-debian-7.3-2014-01-14-2gb.img" when done..

bone-debian-7.3-2014-01-14-4gb.img.xz
-rw-r--r-- 1 dennis dennis 300M Jan 15 09:05
debian-7.3-lxde-armhf-2014-01-14.tar.xz
-rwxr-xr-x 1 dennis dennis 689 Jan 15 09:05 ship.sh

I notice that the file sizes are different than your website which has:

      BBB-eMMC-flasher-deb..> 10-Jan-2014 10:52 358M
      bone-debian-7.3-2014..> 10-Jan-2014 10:54 358M
      debian-7.3-console-a..> 10-Jan-2014 10:36 302M

My flasher image seems to be substantially smaller than yours. Too small to
be correct.

My debian... file has "lxde" in the name rather than "console", and is about
the same size even though lxde would imply an X11 based system as opposed to
a console only system.

I changed that this week; it seemed kinda silly calling it "console" when
it's obviously an 'lxde' image..

My 4G bone... image file is about the same size as yours.

Where does the armhf... file come from? I tried tracing through the build
scripts, but I couldn't see where it was created.

The "armhf" is the name of the debian port we are using..

https://wiki.debian.org/ArmHardFloatPort

Default gcc compile settings: --with-arch=armv7-a --with-fpu=vfpv3-d16
--with-float=hard --with-mode=thumb

This script also supports the older "armel" and plans for the "arm64"
archs.. You could do x86, but that would be silly..
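
If you ever want to double-check which port a given rootfs was built from, asking dpkg inside the image/chroot is enough:

dpkg --print-architecture    # should print: armhf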

Do you know why any of these differences exist?

BTW I noticed that the tar command in ship.sh has no "-" before the xf
options. The tar help says it should be there, but perhaps it is not
required.

The ship.sh file came out of me being lazy about a week ago; it's not
really tested beyond Debian Jessie..

Regards,

I noticed that the timestamp of the compressed flasher image was before the timestamp of the uncompressed image. This file was probably created the first time I ran the ship.sh script. It did some operations and then told me I was missing some dependencies. I installed the missing tools and then reran the script, which produced the files above.

It seems like I should be able to delete the output files and rerun the script, but it is not clear to me which files are the outputs at this point. Should I delete the two BBB… flasher image files and the *-4gb.img.xz file, and uncompress the debian…tar.xz file to get back to square one before running the ship.sh script again?

Thanks
Dennis Cote

This will work around that.. ;-)

https://github.com/beagleboard/image-builder/commit/d856d07c209f6af4022c95fd8fecf9d1f86514c6

essentially leaving "debian-7.3-lxde-armhf-2014-01-14.tar.xz" as is, since
it is already compressed after the first run, and building all the final
images off it..
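
So getting back to square one by hand would roughly be (a sketch based on the file list above: keep the source .tar and the .tar.xz, delete only the generated images, then rerun the updated ship.sh):

cd ~/BBB/image-builder/deploy
rm -f BBB-eMMC-flasher-debian-7.3-2014-01-14-2gb.img \
      BBB-eMMC-flasher-debian-7.3-2014-01-14-2gb.img.xz \
      bone-debian-7.3-2014-01-14-4gb.img.xz
./ship.sh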

Regards,