CI/CD for Embedded with VS Code, Docker and GitHub Actions

“Our highest priority is to satisfy the customer
through early and continuous delivery
of valuable software.”

Agile Manifesto, https://agilemanifesto.org/principles.html

It is interesting to see that modern tools and agile development workflows are making their way more and more into the embedded world. CI/CD is a strategy where code changes to an application get automatically integrated, tested and released into a production environment.

VS Code with CI/CD

Outline

In this article I’ll show how to set up a CI/CD pipeline, with the example of the NXP LPC55S16-EVK, using Visual Studio Code, Docker and GitHub Actions.

I show how to build a simple ‘pipeline’ which gets triggered by a git action, builds the software and releases the firmware binary on GitHub.

For this, I’m using the NXP LPC55S16-EVK board with the NXP MCUXpresso SDK, CMake and VS Code. You can find all the sources on GitHub.

NXP LPC55S16-EVK

What is CI/CD?

CI/CD (Continuous Integration/Continuous Delivery) means automating the development process from integrating changes, through tests, up to delivery of the software product.

CI/CD is an important part of DevOps, a combination of development (dev) and operations (ops), where people are working together to build and deliver products.

DevOps chain (Source: Wikipedia)

Continuous Integration (CI) means that developers check in their changes into a version control system. This triggers an automated build of the system on a server, which then reports the status back. This ensures that the change by the developer does not break anything.

Continuous Integration (Source: Wikipedia)

The whole process can be seen as a ‘pipeline’ of automated steps:

CI/CD Pipeline

The idea is to ensure that software can be released at any time, ideally in a fully automated way.

Software and Tools

A key component of the workflow is a version control system (e.g. git). It requires a way to automatically build the software (command line build tools, cmake/make/ninja, …) and a testing framework (CTest, unity, …), plus a controlled build environment (e.g. Docker). VS Code is actually only used here because it nicely integrates with git, CMake and Docker through extensions: any other editor will do too.
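To make this concrete, here is a minimal sketch of the command line steps such a pipeline automates for a CMake/Ninja project like this one; the ctest step only applies if the project defines tests with CTest:

cmake -G"Ninja" . -B build     # configure the project for the Ninja build system
cmake --build build            # build the firmware
ctest --test-dir build         # run the tests (only if tests are defined)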

So here is what I use in this article:

As the project, I’m using a ‘blinky’ for the LPC55S16-EVK board. You can find the project on GitHub here: https://github.com/ErichStyger/MCUXpresso_LPC55S16_CI_CD

Using Docker

We are going to use Docker both locally and on a GitHub server. It makes sense to test things out locally first, so we have to install it locally too. I have installed Docker Desktop for Windows: https://www.docker.com/products/docker-desktop/, but I am using it on the command line.
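A quick way to verify the local installation from the command line (a sketch, the version numbers will differ):

docker --version          # prints the installed Docker version
docker run hello-world    # pulls and runs a tiny test image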

Docker uses ‘images’ (think of a blueprint) and ‘containers’ (think of a kind of virtual machine). To create an image, we need to create a file with the instructions for how to build it.

The next steps go through the process of creating an image and a container, plus inspecting it.

In the ‘blinky’ project root, I have such a file, named Dockerfile:

# Fetch ubuntu image
FROM ubuntu:22.04

# Install prerequisites
RUN \
  apt update && \
  apt install -y cmake gcc-arm-none-eabi libnewlib-arm-none-eabi build-essential ninja-build

# create a directory for the project
RUN \
  mkdir -p /project/

# Copy project sources into image
COPY CMakeLists.txt /project/
COPY arm-none-eabi-gcc.cmake /project/
COPY sdk /project/sdk/
COPY src /project/src/
COPY McuLib /project/McuLib/

# Build project
RUN \
  cd /project && \
  cmake -G"Ninja" . -B build && \
  cmake --build build

# Command that will be invoked when the container starts
ENTRYPOINT ["/bin/bash"]

What it does is:

  1. Use an Ubuntu ‘base’ image
  2. Perform an update and install the necessary tools (cmake, gcc for ARM Embedded, libraries, build tools, ninja)
  3. Create a directory in the image for the project
  4. Copy the project source files and CMake files into that project folder
  5. Initialize the project (for using Ninja) and build the project
  6. Specify that the bash shell is invoked when the container starts (ENTRYPOINT)

To create the docker image, open a console/shell (e.g. in VS Code) and ‘cd’ to where the Dockerfile is located.

Run the following command to build the image:

docker build -t lpc55s16-image .
Docker Image building

The option ‘-t’ tags the image with a name, and ‘.’ uses the Dockerfile in the current directory.
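If the image ever needs to be rebuilt from scratch (for example after changing the installed tools), Docker’s layer cache can be bypassed; a sketch:

docker build --no-cache -t lpc55s16-image .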

Hint: To view the available images, use:

docker images

Hint: To delete an image:

docker rmi lpc55s16-image

To create a container from our image:

docker create -i -t --name lpc55s16-container lpc55s16-image

‘-i’ creates a container for interactive mode, ‘-t’ adds a pseudo-terminal, and ‘--name’ gives the container a name.
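As an alternative sketch, ‘docker run’ creates and starts a throw-away container in one step (‘--rm’ removes it again on exit); the container name ‘lpc55s16-tmp’ is just an example:

docker run -it --rm --name lpc55s16-tmp lpc55s16-image    # drops into the bash ENTRYPOINT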

Hint: to list the available containers:

docker container ls -a

Hint: to remove the container:

docker rm lpc55s16-container

Hint: to copy a file from the host to the container, use the following:

docker cp <filename> <container>:<pathInContainer>
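The same command works in the other direction too; for example, copying the generated hex file out of the container to the host, which is what the GitHub workflow shown later does (the file name assumes the project name LPC55S16_Blinky):

docker cp lpc55s16-container:/project/build/LPC55S16_Blinky.hex .   # copy build artifact to the host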

Now let’s start the container and log into it:

docker start -i lpc55s16-container
Running the container

Here I can inspect the files and see that the project has been built, with the expected output files in the build folder. Later we want to run Docker on GitHub, let it build the output files and publish them as a release.
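For example, a quick check inside the container (a sketch; the exact artifact names depend on the project() name in CMakeLists.txt):

ls /project/build              # list the generated build artifacts
arm-none-eabi-gcc --version    # verify the cross-compiler installed in the image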

To leave the container, use:

exit

So now we have the Docker environment; the next step is to build it on the GitHub server using GitHub Actions.

GitHub Actions

GitHub offers a workflow feature called ‘Actions’: for example, it can trigger an event if someone pushes new content or pushes a tag to the repository. And this is what I want here: if I push a tag labeled e.g. “v1.0.0”, then it shall create an event to build the project and publish the resulting binary on GitHub as a release.

To use GitHub workflows, I need to have a special directory on GitHub/in my project: .github/workflows, where I put my actions as YAML files:

GitHub Workflow Directory

Into that folder, I can put workflow files with a .yml extension. The deploy.yml has the following content:

# Name of the workflow
name: Deploy new version

# Define the trigger event(s)
# Only deploy when a new tag is pushed ('push:tags:') or manually (with 'workflow_dispatch:')
# If pushing tag, it has to match "v*.*.*"
on:
  workflow_dispatch:
  push:
    tags:
      - "v*.*.*"

# Must match the project() name in CMakeLists.txt, variable used below to copy .hex file
env:
  APP_NAME: LPC55S16_Blinky

# Allow this workflow to write back to the repository
permissions:
  contents: write

# Jobs run in parallel by default, each runs steps in sequence
jobs:
  # Build binary and send to releases
  build-and-deploy:
    runs-on: ubuntu-latest
    name: Build and deploy
    steps:
      - name: Check out this repository
        uses: actions/checkout@v3
      - name: Build Docker image
        run: docker build -t lpc55s16-image .
      - name: Create Docker container
        run: docker create --name lpc55s16-container lpc55s16-image
      - name: Copy out Intel Hex file
        run: docker cp lpc55s16-container:/project/build/${APP_NAME}.hex ./${APP_NAME}.hex
      - name: Put environment variable into the env context
        run: echo "app_name=$APP_NAME" >> $GITHUB_ENV
      # for the push, we need the tag! This step is skipped if we run it manually
      - name: Push to release
        uses: softprops/action-gh-release@v1
        if: startsWith(github.ref, 'refs/tags/')
        with:
          files: ${{ env.app_name }}.hex
          body_path: CHANGELOG.md

The workflow has a single job (build-and-deploy); it would be possible to run multiple jobs in parallel. The job runs on an Ubuntu machine (runs-on: ubuntu-latest). It sets an environment variable with the name of the application, so it can be used later when pushing to the ‘release’ section of the repository.
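The hand-over of that variable between steps uses the $GITHUB_ENV mechanism shown in the ‘Put environment variable into the env context’ step above; in a run: step it boils down to:

echo "app_name=$APP_NAME" >> $GITHUB_ENV    # later steps can reference it as ${{ env.app_name }}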

The push to the release section is made with an open source GitHub Action (action-gh-release, https://github.com/softprops/action-gh-release): it creates a release for the pushed git tag, uses the CHANGELOG.md file as the release notes (body_path) and publishes the artifacts.

Because the workflow has the ‘workflow_dispatch’ trigger specified, I can run it manually too:

Run Workflow manually

To trigger the workflow, I push a new tag with a version, for example:

git tag v0.0.1
git push origin v0.0.1

On GitHub I can then monitor the running action:

running GitHub Action

Clicking on the action I can get more details:

Action Details

It has pushed the assets to the ‘Releases’ section, with the Change-Log content shown as the release notes:

New release

Congratulations, you have completed a pipeline :-).

Summary

It is not that hard to build a CI/CD pipeline even for an embedded application, with the help of GitHub Actions and Docker. VS Code can nicely be used for the task, as it comes with the necessary support for CMake, Docker and GitHub. The presented framework can be extended with additional steps like static code analysis or automated testing, as sketched below.
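As a sketch of such an extension (not part of the presented project): a static analysis step could run in the same Ubuntu container, for example with cppcheck on the project sources:

apt update && apt install -y cppcheck            # add the analysis tool to the container
cppcheck --enable=warning --error-exitcode=1 /project/src    # fail the pipeline on findings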

Happy integrating and delivering 🙂

Links

Project on GitHub: https://github.com/ErichStyger/MCUXpresso_LPC55S16_CI_CD
Docker Desktop: https://www.docker.com/products/docker-desktop/
action-gh-release GitHub Action: https://github.com/softprops/action-gh-release
Agile Manifesto principles: https://agilemanifesto.org/principles.html

16 thoughts on “CI/CD for Embedded with VS Code, Docker and GitHub Actions”

  1. This is great stuff! We use something similar with Docker, GitLab and some dedicated hardware, but it’s still a work in progress. I will be reviewing this article in-depth to see if it can enhance our existing setup. Many thanks for the article Erich.


    • Hi John,
      If time permits, I want to write some follow-ups to this article, as there are many more possibilities. We do use GitLab too, hosted inside our university network, because CI/CD and machine time is limited on a public git like GitHub. In any case, if someone wants to do it on a larger scale and for company products, self-hosting the servers and the repos can make sense.


        • A private git server is a must for anything non-open-source imho. And I don’t trust ‘private’ repos of cloud providers, as potentially they can end up in the open too.
          Having said that: if the infrastructure is local, it has to be protected from the outside and the inside too, so it does not become a victim of a ransomware attack.


  2. This is a very helpful article, thank you. Is Docker always the way to go? Or can one expect to find standard tools such as CMake and gcc on the target machine?


  3. Erich,

    My software team at Cypress was 100s of people… the last program I was responsible for, Modus Toolbox, had more than 200 people.

    My experience with Agile… with the right people… who were disciplined was great

    BUT… most people that say they want to do agile really just don’t want to have any rules and they just want to hack and go.

    To do Agile right… you have to do those other things (e.g. CI/CD) … and it turns out that most people are just not disciplined enough… even “professional” developers.

    Alan



    • Hi Naga,
      It looks nice, it is not free, and for background: I purchased it about two years ago and have used it. But I have not used it for about a year or so, because it did not fit my needs and was getting more in the way. The VisualGDB extension imho is great for you if you come from the Arduino world. It certainly had some nice features (and still has) at the time when VS Code had much fewer embedded features. Since then, Microsoft itself added many embedded features (terminal, register view, assembly stepping, …) and so did Cortex-Debug, making VisualGDB less relevant for VS Code users. More of a personal thing: I like the clean and consistent UI in VS Code, which SysProgs did not follow, making it look kind of strange. This is more of a ‘design’ and ‘UI’ thing, but I have found that odd.
      That it is not free is not a big thing, because I value good software and tools, and I’m willing to pay for them. It had some value for me back then, but not any more.
      Finally, the VisualGDB pitch is about vendor independence: well, there is not really such a thing, you just get dependent on yet another vendor. What I prefer is not to get into another dependency, but instead to base my environment on open and, if possible, free tools which I can easily swap out and replace. So for example STM, NXP and Espressif have some useful extensions, but I’m keeping them separate in profiles, only using the parts of an extension that really make sense. For example I’m using the Cortex-Debug extension, even if the vendor is offering its own one, because that extension is open, more powerful and lets me not depend on Microsoft or a vendor extension. And having all these little building blocks (CMake, Ninja, VS Code editor with a selection of extensions, toolchain, SDK, …) individually chosen, I know every detail of the whole environment and can easily change or replace it if needed, while with something like VisualGDB I get tied into a solution with much less flexibility. Nothing wrong with VisualGDB and others in the market making the same pitch, but it is simply not a sound and sustainable solution for me.

      I hope that makes sense?


  4. Pingback: CI/CD for embedded with VS Code, Docker and GitHub Actions @McuOnEclipse « Adafruit Industries – Makers, hackers, artists, designers and engineers!

  5. Great article, thank you Erich! Like the other people posting, I am setting up CI/CD for my project now.

    I see that you are using the standard gcc-arm-none-eabi rather than the one that NXP puts in MCUxpresso (which seems to be a patched earlier version). I have worried about this choice. Do you think using latest gcc rather than a patched version that NXP built is the better way to go (I guess, yes)? Did you give the question any thought when setting up your CI?

    Thanks!
    Gavin

