In this series of posts, we describe how to create a simple, fully open source CI/CD pipeline that you can use for embedded Linux device development on GitLab, taking advantage of the power of Pantacor Hub and Pantavisor Linux.
Table of Contents (4 part series)
Part 1: Automating CICD Pipelines for Embedded Linux Projects
Part 2: Generate Flashable Images with CI/CD Pipelines
Part 3: Templated CICD Pipelines for Embedded Linux Devices
Part 4: Customizing CICD Pipelines for Embedded Linux Projects
In the previous post, we went through the basic setup to keep a couple of devices automatically up to date with the latest and stable versions of your project’s code base.
In this second part, we will continue where we left off and add the ability to generate and store flashable images with the stable version of your project.
Background
If you completed Part 1, you can now keep several devices up to date with the latest and stable versions of your software. That helps during development, since you always have access to devices you can test and manipulate, with the assurance that they are running the current version of the code.
An immediate need after testing a working version might be replication. Devices that already have Pantavisor installed can simply be updated remotely. But how do you propagate these changes to new devices when you need to prepare the hardware for production?

This CI pipeline can help you with that: it keeps a list of stable factory images that you can use to flash new devices.
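To give an idea of the end result, once the pipeline has published an image, preparing a new device is typically a matter of downloading that image and writing it to an SD card. Here is a minimal sketch, assuming your board boots from an SD card and the image is gzip-compressed; the URL, image name and device node are placeholders for your own values:

```bash
# Download the stable image published by the pipeline (placeholder URL and name)
curl -LO "https://<AWS_BUCKET>.s3.amazonaws.com/<AWS_PROJECT_PATH>/<image-name>.img.gz"

# Write it to the SD card; double-check the device node, dd overwrites it entirely
gunzip -c <image-name>.img.gz | sudo dd of=/dev/sdX bs=4M status=progress conv=fsync
```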
Set up the AWS bucket
We’ll use an AWS S3 bucket to store the flashable images. Register for a free trial here if you don’t have an account yet.
After that, create a bucket to store the images. You will also need an IAM user with write permissions on the bucket, and an access key for that user so the device-ci pipeline can upload the images.
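If you prefer the command line over the AWS console, the same setup can be sketched with the AWS CLI. The bucket and user names below are placeholders, and you may want to scope the policy more tightly for production:

```bash
# Create the bucket that will hold the flashable images
aws s3 mb s3://my-device-images

# Create a dedicated IAM user for the pipeline
aws iam create-user --user-name device-ci

# Allow that user to read and write objects in the bucket
aws iam put-user-policy \
  --user-name device-ci \
  --policy-name device-ci-s3-write \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::my-device-images", "arn:aws:s3:::my-device-images/*"]
    }]
  }'

# Generate the access key the pipeline will use to upload images
aws iam create-access-key --user-name device-ci
```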
Configure the GitLab project
The pipeline reads the AWS credentials created in the previous section from a set of variables that you must define: AWS_BUCKET, AWS_PROJECT_PATH, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. Set them up as GitLab CI variables in your device-ci project, and don’t forget to mask AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY!
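You can add these variables in the GitLab UI under Settings > CI/CD > Variables, or script it against the GitLab project variables API. A minimal sketch, assuming a personal access token with the api scope and a placeholder project ID:

```bash
GITLAB_TOKEN="<personal-access-token>"   # needs the "api" scope
PROJECT_ID="<device-ci-project-id>"      # numeric ID shown on the project page

# Plain configuration variables
for VAR in AWS_BUCKET AWS_PROJECT_PATH; do
  curl --request POST \
       --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
       --form "key=$VAR" --form "value=<value>" \
       "https://gitlab.com/api/v4/projects/$PROJECT_ID/variables"
done

# Credentials: mark them as masked so they never show up in job logs
for VAR in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY; do
  curl --request POST \
       --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
       --form "key=$VAR" --form "value=<value>" --form "masked=true" \
       "https://gitlab.com/api/v4/projects/$PROJECT_ID/variables"
done
```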

Test it
Select the commit to tag from the CI project you created in the previous part of the tutorial. In the device-ci pipeline, each commit corresponds to a different state of the device.
In this case, we can physically see that the latest device (on the right) is working as desired for our next stable version, which includes the blinking LED feature from Part 1.

The latest version corresponds to the latest commit in the GitLab project, provided the pipeline finished without errors. Go ahead and tag that one with the name 001.
In the git log, we can check that the changes contained in the commit we just tagged are good and match the version of the device we want to release. After that, push the tag.
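From a local checkout of the device-ci repository, the tag-and-push step looks roughly like this (the commit hash is a placeholder for the one you picked):

```bash
# Inspect the recent history and confirm the commit you want to release
git log --oneline -n 5

# Tag the chosen commit as the first stable release...
git tag 001 <commit-sha>

# ...and push the tag so GitLab starts the stable jobs
git push origin 001
```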

You can see in GitLab that the stable job started right after the push.

The pipeline successfully updated the stable device (on the left) with the version that corresponds to the tagged commit!

After the update, the CI checks that the upgrade went well with a verify job that raises an error in case the new version breaks the board’s boot-up process.
In the GitLab CI log, let’s check that the pipeline passed and that the image was uploaded to AWS.

Check updates in AWS S3 Bucket
The updated stable device name, revision, generated image download link and other relevant metadata can be consulted in a JSON file at the root of your project path in AWS.
The URL of that file will vary depending on the values of the variables you set in the configuration section:
https://<AWS_BUCKET>.s3.amazonaws.com/<AWS_PROJECT_PATH>/stable.json
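If you want to inspect that metadata from a terminal instead of a browser, you can fetch the file and pretty-print it, assuming curl and jq are installed and the object is publicly readable:

```bash
curl -s "https://<AWS_BUCKET>.s3.amazonaws.com/<AWS_PROJECT_PATH>/stable.json" | jq .
```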
In this case, the JSON file containing the metadata for the example prepared for this tutorial looks like this:

What’s next?
In Part 3 of this tutorial series, we will explore the internals of our templates, and in Part 4 we’ll build on that knowledge to override jobs and make the pipeline fully customizable.