19 December 2021

How to create your own custom Docker images


In a previous article we covered the basics of using docker images. But there we used images built by others. That's great if you find the image you are looking for, but what happens if none fits your needs?

In this article we are going to explain how to create your own images and upload them to Docker Hub, so they can be easily downloaded in your project environments and shared with other developers.


To cook you need a recipe

Provided that you followed the tutorial previously linked, you'll have docker already installed on your linux box. Once docker is installed you need a recipe to tell it how to "cook" your image. That recipe is a file called Dockerfile that you'll create in a folder of your choice, along with any files you want to include in your image.

What do you cook with that recipe? An image. But what is a docker image? A docker image is a ready to use virtual linux operating system with a specific configuration and set of dependencies installed. How you use that image is up to you. Some images are provided "as-is", as a base point where you are supposed to install your application and run it inside. Other images are more specific and contain an app that is executed as soon as you run that image. Just keep in mind that a docker image is like what an OOP language calls a "class", while a container is an instance of that class (an OOP language would call it an "object"). You may have many containers running after being started from the same image.

To understand how a Dockerfile works, we are going to assess some examples.

Let's start with this Dockerfile from the vdist project. The image you build with that Dockerfile is intended to compile a python distribution and run an application called fpm over a folder that is created at runtime when a container is started from the image. So, this Dockerfile is supposed to install every dependency needed to compile python and run fpm.

As a first step, every Dockerfile starts by defining which image you are going to use as a starting point. Following the OOP metaphor, our image being a class, we need to define which class it inherits from.
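
In vdist's case that first line boils down to something like this (a minimal sketch; check the real Dockerfile in the repository for the exact tag):

FROM ubuntu:latest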

In this case, our image is derived from the ubuntu:latest image, but you can use any image available at Docker Hub. Just take care to check that image's Docker Hub page to find out which tag to use. Once you upload your image to Docker Hub, others may use it as a base point for their respective images.

Every piece of art must be signed. Your image is no different, so you should define some metadata for it:
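
A hedged sketch of that metadata block (the exact values in vdist's Dockerfile may differ):

LABEL maintainer="dante-signal31 (dante.signal31@gmail.com)"
LABEL description="Image to compile python and run fpm for vdist."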

The real meat comes with RUN commands. Those are what you use to configure your image.

Some people misunderstand what RUN commands are intended for the first time they try to build a docker image. RUN commands are not executed when a container is created from an image with the "docker run" command. Instead, they are run only once, by "docker build", to create an image from a Dockerfile.

The exact set of RUN commands will vary depending on your project's needs. In this example the RUN commands check that the apt sources list is OK, update the apt database and install a bunch of dependencies using both "apt-get" and "gem install".

However, you'd better start your bunch of RUN commands with a "RUN set -e". This will make your entire image build fail if any RUN command returns an error. It may seem an extreme measure, but that way you are sure you are not uploading an image with unnoticed errors.

Besides, when you review Dockerfiles from other projects you will find that many of them include several shell commands inside the same RUN command, as our example does in lines 14-16. Docker people recommend grouping related shell commands inside the same RUN command (i.e. if two commands make no sense executed without each other, they should be run inside the same RUN command). That's because of how images are built, using a layer structure where every RUN command becomes a separate layer. If you follow Docker's advice, your images should be quicker to rebuild when you make any change to their Dockerfile. To include several shell commands inside the same RUN command, put each command on its own line and end every line with "&& \" to chain them (except the last line, as the example shows).
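
A sketch of that chaining pattern (the package list here is illustrative, not vdist's exact one):

RUN apt-get update && \
    apt-get install -y build-essential ruby ruby-dev && \
    gem install fpm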

Apart from RUN commands, there are others you should know. Let's review the Dockerfile from my project markdown2man. That image is intended to run a python script that uses Pandoc to convert a file, with arguments passed by the user when a container is started from the image. So, in that Dockerfile you can find some already familiar commands:
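
Those familiar commands, as they appear in the build output shown later in this article, boil down to something like this:

FROM python:3.8

LABEL maintainer="dante-signal31 (dante.signal31@gmail.com)"
LABEL description="Image to run markdown2man GitHub Action."
LABEL homepage="https://github.com/dante-signal31/markdown2man"

RUN set -e
RUN apt-get update \
    && apt-get install pandoc -y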


But from there, things get interesting with some new commands.

With ENV commands you can create environment variables to be used in subsequent build commands, so you don't have to repeat the same string over and over and modifications become simpler:
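
In markdown2man's case, that variable is the folder where the script will live (as seen in the build output below):

ENV SCRIPT_PATH /script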


Nevertheless, be aware that environment variables created with ENV commands outlast the build phase and persist when a container is created from that image. That can provoke collisions with other environment variables created later. If you only need the environment variable for the build phase and want it removed when a container is created from the image, then use ARG commands instead of ENV ones.

To include your application files inside the image, use COPY commands:
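
For instance, the step that copies the requirements file looks like this:

COPY requirements.txt $SCRIPT_PATH/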


Those commands copy a source file, relative to the Dockerfile location, from the host where you are building the image to a path inside that image. In this example, we are copying requirements.txt, which is located alongside the Dockerfile, to a folder called /script (as the SCRIPT_PATH environment variable defines) inside the image.

Last, but not least, we find another new command: ENTRYPOINT.
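
In markdown2man's Dockerfile it is just the script name, previously linked into the system path:

ENTRYPOINT ["markdown2man"]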


An ENTRYPOINT defines which command to run when a container is started from this image, so arguments passed to "docker run" are actually passed to this command. The container will stay alive until the command defined at the ENTRYPOINT returns.

ENTRYPOINTs are great for using docker containers to run commands without polluting your system with the packages those commands need.

Time to cook

Once your recipe is ready, you must cook something with it.

When your Dockerfile is ready, use the docker build command to create an image from it. Provided you are in the same folder as your Dockerfile:

dante@Camelot:~/Projects/markdown2man$ docker build -t dantesignal31/markdown2man:latest .
Sending build context to Docker daemon  20.42MB
Step 1/14 : FROM python:3.8
 ---> 67ec76d9f73b
Step 2/14 : LABEL maintainer="dante-signal31 (dante.signal31@gmail.com)"
 ---> Using cache
 ---> ca94c01e56af
Step 3/14 : LABEL description="Image to run markdown2man GitHub Action."
 ---> Using cache
 ---> b749bd5d4bab
Step 4/14 : LABEL homepage="https://github.com/dante-signal31/markdown2man"
 ---> Using cache
 ---> 0869d30775e0
Step 5/14 : RUN set -e
 ---> Using cache
 ---> 381750ae4a4f
Step 6/14 : RUN apt-get update     && apt-get install pandoc -y
 ---> Using cache
 ---> 8538fe6f0c06
Step 7/14 : ENV SCRIPT_PATH /script
 ---> Using cache
 ---> 25b4b27451c6
Step 8/14 : COPY requirements.txt $SCRIPT_PATH/
 ---> Using cache
 ---> 03c97cc6fce4
Step 9/14 : RUN pip install --no-cache-dir -r $SCRIPT_PATH/requirements.txt
 ---> Using cache
 ---> ccb0ee22664d
Step 10/14 : COPY src/lib/* $SCRIPT_PATH/lib/
 ---> d447ceaa00db
Step 11/14 : COPY src/markdown2man.py $SCRIPT_PATH/
 ---> 923dd9c2c1d0
Step 12/14 : RUN chmod 755 $SCRIPT_PATH/markdown2man.py
 ---> Running in 30d8cf7e0586
Removing intermediate container 30d8cf7e0586
 ---> f8386844eab5
Step 13/14 : RUN ln -s $SCRIPT_PATH/markdown2man.py /usr/bin/markdown2man
 ---> Running in aa612bf91a2a
Removing intermediate container aa612bf91a2a
 ---> 40da567a99b9
Step 14/14 : ENTRYPOINT ["markdown2man"]
 ---> Running in aaa4774f9a1a
Removing intermediate container aaa4774f9a1a
 ---> 16baba45e7aa
Successfully built 16baba45e7aa
Successfully tagged dantesignal31/markdown2man:latest

dante@Camelot:~$

If you weren't in the same folder as the Dockerfile, you should replace that ".", at the end of the command, with the path to the Dockerfile folder.

The "-t" parameter is used to give a proper name (a.k.a. tag) to your image. If you want to upload your image to Docker Hub try to follow its naming conventions. For an image to be uploaded to Docker Hub its name should be composed like: <docker-hub-user>/<repository>:<version>. You can see in the last console example that docker-hub-user parameter was dantesignal31 while repository was markdown2man and version was latest.

Upload your image to Docker Hub

If the build process ended correctly, you should be able to find your image registered in your system.

dante@Camelot:~/Projects/markdown2man$ docker images
REPOSITORY                   TAG                 IMAGE ID       CREATED          SIZE
dantesignal31/markdown2man   latest              16baba45e7aa   15 minutes ago   1.11GB


dante@Camelot:~$

But an image that is only available locally has limited use. To make it globally available you should upload it to Docker Hub. To do that you first need an account at Docker Hub. The process to sign up for a new account is similar to any other online service.

Once the sign-up process is done, log in to Docker Hub with your new account. Once logged in, create a new repository. Remember that whatever name you give your repository, it will be prefixed with your username to form the repo's full name.


In this example, the full name of the repository would be mobythewhale/my-private-repo. Unless you are using a paid account you'll probably set a "Public" repository.

Remember to tag your image with a repository name according to what you created at Docker Hub.
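
If you built the image under another name, you can add the proper tag afterwards with "docker tag" (a sketch reusing the image ID and repository name from the examples above):

dante@Camelot:~/Projects/markdown2man$ docker tag 16baba45e7aa dantesignal31/markdown2man:latest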

Before you can push your image you have to log in to Docker Hub from your console with "docker login":

dante@Camelot:~/Projects/markdown2man$ docker login
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /home/dante/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
dante@Camelot:~$

The first time you log in you will be asked for your username and password.

Once logged in, you can upload your image with "docker push":

dante@Camelot:~/Projects/markdown2man$ docker push dantesignal31/markdown2man:latest
The push refers to repository [docker.io/dantesignal31/markdown2man]
05271d7777b6: Pushed 
67b7e520b6c7: Pushed 
90ccec97ca8c: Pushed 
f8ffd19ea9bf: Pushed 
d637246b9604: Pushed 
16c591e22029: Pushed 
1a4ca4e12765: Pushed 
e9df9d3bdd45: Mounted from library/python 
1271cc224a6b: Mounted from library/python 
740ef99eafe1: Mounted from library/python 
b7b662b31e70: Mounted from library/python 
6f5234c0aacd: Mounted from library/python 
8a5844586fdb: Mounted from library/python 
a4aba4e59b40: Mounted from library/python 
5499f2905579: Mounted from library/python 
a36ba9e322f7: Mounted from library/debian 
latest: digest: sha256:3e2a65f043306cc33e4504a5acb2728f675713c2c24de0b1a4bc970ae79b9ec8 size: 3680

dante@Camelot:~$

Your image is now available at Docker Hub, ready to be used by anyone, like any other image.

Conclusion

We reviewed the very basics of how to build a docker image. From here, the only way forward is practice: create increasingly complex images, starting from the simpler ones. Fortunately Docker has great documentation, so it should be easy to solve any blocker you find along the way.

Hopefully all this process will be greatly simplified when Docker Desktop is finally available for linux, as it already is for Windows and Mac.

17 December 2021

How to create your own custom Actions for GitHub Actions


In my article about GitHub Actions we reviewed how you can build entire workflows just using premade bricks, called Actions, that you can find at the GitHub Marketplace.

That marketplace has many ready to use Actions, but chances are that sooner or later you'll need to do something that has no action at the marketplace. Sure, you can still use your own scripts. In that article I used a custom script (./ci_scripts/get_version.py) in the section "Sharing data between steps and jobs". The problem with that approach is that if you want to do the same task in another project you need to copy your scripts between project repositories and adapt them. You'd better transform your scripts into Actions so they are easily reusable, not only in your projects but publicly in other people's projects.

Every Action you can find at the marketplace is made in one of the ways I'm going to explain here. Actually, if you enter any Action's marketplace page you will find a link, at the right hand side, to that Action's repository, so you can assess it and learn how it works.

There are 3 main methods to build your own GitHub Actions:

  • Composite Actions: They are the simplest and quickest, but the condition to use this approach is that your Action must be based on a self-sufficient script that needs no additional dependencies installed. It should run with only what a standard linux distribution offers.
  • Docker Actions: If you need any dependency to make your script work, then you'll need to follow this way.
  • Javascript Actions: Well... you can write your own Actions with javascript, but I don't like that language so I'm not going to include it in this article.

The problem with your Action's dependencies is that they can pollute the workflow environment where your Action is going to be used. Your action dependencies can even collide with those of the app being built. That's why we need to encapsulate our Action and its dependencies to be independent of the environment of the app being built. Of course, this problem does not apply if your Action is intended to set up the workflow environment by installing something. There are Actions for, for example, installing and setting up Pandoc to be used by the workflow. The problem arises when your Action is intended to do one specific task not related to installing something (for example copying files) and it installs something under the table, as that can compromise the workflow environment. So, the best option if you need to install anything to make your Action work is to install it in a docker container and make your Action script run from inside that container, entirely independent of the workflow environment.

Composite Actions

If your Action just needs a bunch of bash commands, or a python script that exclusively uses the built-in standard library, then a composite Action is your way to go.

As an example of a composite action, we are going to review how my Action rust-app-version works. That Action looks for a rust Cargo.toml configuration file and reads which version is set there for the rust app. That version string is offered as the Action output, and you can use that output in your workflow, for instance, to tag a new release at GitHub. This action only uses modules available in the standard python distribution. It does not need to install anything at the user's runner. It's true that there is a requirements.txt at the rust-app-version repository, but those are only dependencies for unit testing.

To have your own composite Action you first need a GitHub repository to host it. There you can place the few files really needed for your action.

At the very root of your repository you need a file called "action.yml". This file is really important as it models your Action. Your users should be able to think about your action as a black box: something where you put some inputs in and receive some outputs. Those inputs and outputs are defined in action.yml.

If we read the action.yml file at rust-app-version we can see that this action only needs an input called "cargo_toml_folder", and actually that input is optional, as it defaults to "." if it is omitted when this action is called:
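
A sketch of how that input section looks (the description text here is illustrative, not the literal one from the repository):

inputs:
  cargo_toml_folder:
    description: 'Folder where the Cargo.toml file is located'
    required: false
    default: '.'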



Outputs are somewhat different, as they must refer to the output of a specific step in your action:
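
Something like this (again a sketch; the description is illustrative):

outputs:
  app_version:
    description: 'Version set in Cargo.toml'
    value: ${{ steps.get-version.outputs.version }}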


In that last section we specify that this action is going to have just one output called "app_version", and that output is going to be the output called "version" of a step with an id of "get-version".

Those inputs and outputs define what your action consumes and offers, i.e. what your action does. How your action does it is defined under the "runs:" tag. There you state that your Action is a composite one and you call a sequence of steps. This particular example only has one step, but you can have as many steps as you need:
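
A hedged sketch of that section (the exact script invocation may differ from the real rust-app-version repository):

runs:
  using: "composite"
  steps:
    - name: Get app version
      id: get-version
      shell: bash
      run: echo "::set-output name=version::$(python3 ${{ github.action_path }}/rust_app_version/get_version.py ${{ inputs.cargo_toml_folder }})"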



Take note of line 22, where that step receives its id: "get-version". That id is important to refer to this step from the outputs configuration.

Line 24 is where your command is run. I only executed one command. If you needed multiple commands to be executed inside the same step, then you should use a pipe after run: "run: |". With that pipe you mark that the next few lines (indented under the "run:" tag) are separate commands to be executed sequentially.

The command at line 24 is interesting because of 3 points:
  • It calls a script located at our Action repository. To refer to our Action repository root, use the github.action_path context variable. The great thing is that although our script is hosted in its own repository, GitHub runs it so that it can see the repository of the workflow from where it is called. Our script will see the workflow repository files as if it were run from its root.
  • At the end of the line you may see how inputs are used through the inputs context.
  • The weirdest thing about that line is how you set up that step output. You set a bash step output by doing an echo "::set-output name=<output_name>::<output_value>". In this case the name is version and its value is what get_version.py prints to the console. Be aware that output_name is used to retrieve that output after the step ends, through ${{ steps.<id>.outputs.<output_name> }}, in this case ${{ steps.get-version.outputs.version }}.
Apart from that, you only need to set up your Action metadata. That is done in the first few lines:
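
A sketch of that metadata (icon and color are illustrative; check the real action.yml for the exact values):

name: 'Rust app version'
description: 'Get the version set in a rust app Cargo.toml file'
branding:
  icon: 'package'
  color: 'orange'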


Be aware that "name:" is the name your action will have at GitHub Marketplace. The another parameter, "description:", its the short explanation that will be shown along name in the search results at Markeplace. And "branding:" is only the icon (from Feather icon suite) and color that will represent your action at Markeplace.

With those 24 lines at action.yml and your script at its respective path (here at the rust_app_version/ subfolder), you can use your action. You just need to push the button that will appear in your repository to publish your action at the Marketplace. Nevertheless, you'd better read this article to the end, because I have some recommendations that may be helpful for you.

Once published, it becomes visible for other GitHub users and a Marketplace page is created for your action. To use an Action like this you only need to include in your workflow a configuration like this:
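
A hedged sketch of such a configuration (the version tag and folder are illustrative):

- name: Get app version
  id: rust_app_version
  uses: dante-signal31/rust-app-version@v1.0.0
  with:
    cargo_toml_folder: rust_app/

- name: Use the version
  run: echo "Version is ${{ steps.rust_app_version.outputs.app_version }}"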



Docker actions

If your Action needs to install any dependency, then you should package that Action inside a docker container. That way your Action dependencies won't mess with your user's workflow dependencies.

As an example of a docker action, we are going to review how my Action markdown2man works. That action takes a README.md file and converts it to a man page. Using it, you don't have to keep two sources to document your console application usage. Instead, you can document your app usage only in README.md and convert that file to a man page.

To do that conversion markdown2man needs the Pandoc package installed. But Pandoc has its own dependencies, so installing them at the user's runner may break their workflow. Instead, we are going to install those dependencies in a docker image and run our script from that image. Remember that docker lets you execute scripts from a container that interact with host files.

As with composite Actions, we need to create an action.yml at the Action repository root. There we set our metadata, inputs and outputs like we do with composite actions. The difference here is that this specific markdown2man Action does not emit any output, so that section is omitted. The "runs:" section is different too:
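
A hedged sketch of that section (apart from the Dockerfile reference, the input names are illustrative, not necessarily markdown2man's literal ones):

runs:
  using: 'docker'
  image: 'Dockerfile'
  args:
    - ${{ inputs.markdown_file }}
    - ${{ inputs.manpage_folder }}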


In that section we specify this Action is a docker one (at "using:"). There are two ways to use a docker image in your action:
  • Generate a specific image for that action and store it at the GitHub docker registry. In that case you use the "image: Dockerfile" tag.
  • Use a prebuilt image from the Docker Hub registry. To do that you use a tag like "image: docker://<dockerhub_user>/<image>:<tag>".

If the image you are going to build is exclusively intended to be used at GitHub Actions, I would follow the Dockerfile option. Here, with markdown2man, we follow the Dockerfile approach, so a docker image is built any time the Action is run after a Dockerfile update. The generated image is cached at the GitHub registry to be served quicker to further Action runs. Remember that a Dockerfile is a kind of recipe to "cook" an image, so the commands that file contains are only executed when the image is built ("cooked"). Once built, the only command that is run is the one you set at the entrypoint tag, passing in the arguments set at "docker run".

The "args:" tag has every parameter to be passed to our script in the container. You will probably use your inputs here to be passed to our script. Be aware that, as happened with composite Actions, here the user repository files are visible to our container.

As you may suspect by now, docker actions are more involved than composite Actions because of the added complexity of creating the Dockerfile. The Dockerfile for markdown2man is pretty simple. As the markdown2man script is a python one, we make our image derive from the official python docker image for version 3.8:



Afterwards, we set image metadata:


To configure your image, for example installing things, you use RUN commands.


The ENV command generates environment variables to be used in your Dockerfile commands:


You use the COPY command to copy your requirements.txt from your repository and include it in the generated image. Your scripts are copied from your Action repository to the container following the same approach:
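
Those COPY lines, as shown in the docker build output earlier, look like this in markdown2man's Dockerfile:

COPY requirements.txt $SCRIPT_PATH/
COPY src/lib/* $SCRIPT_PATH/lib/
COPY src/markdown2man.py $SCRIPT_PATH/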


After the script files are copied, I like to make them executable and link them from the /usr/bin/ folder to include them in the system path:
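
Those two lines in markdown2man's Dockerfile are:

RUN chmod 755 $SCRIPT_PATH/markdown2man.py
RUN ln -s $SCRIPT_PATH/markdown2man.py /usr/bin/markdown2man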


After that, you set your script as the image entrypoint, so this script is run once the image is started and is provided with the arguments you set at the "args:" tag in the action.yml file.



You can try that image on your computer by building it from the Dockerfile and running it as a container:

dante@Camelot:~/$ docker run -ti -v ~/your_project/:/work/ dantesignal31/markdown2man:latest /work/README.md mancifra


dante@Camelot:~$

For local testing you need to mount your project folder as a volume (-v flag) if your script has to process any file from that repository. The last two arguments in the example (/work/README.md and mancifra) are the arguments that are passed to the entrypoint.

And that's all. Once you have tested everything you can publish your Action and use it in your workflows:
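
A hedged sketch of such a call (apart from manpage_folder, the input names and version tag are illustrative, not necessarily markdown2man's real ones):

- name: Generate man page
  uses: dante-signal31/markdown2man@v1.0.0
  with:
    markdown_file: README.md
    manpage_name: cifra
    manpage_section: 2
    manpage_folder: man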


With a call like that, a man file called cifra.2.gz should be created in the man folder. If manpage_folder does not exist, then markdown2man creates it for you.


Your Actions are first class code

Although your Action will likely be small, you should take care of it as you would with your full-blown apps. Be aware that many people will find and use your Actions through the Marketplace in their workflows. An error in your Action can break many workflows, so be diligent and test your Action as you would any other app.

So, with my Actions I follow the same approach as in other projects and I set up a GitHub workflow to run tests against any push to a staging branch. Only once those tests succeed do I merge staging into the main branch and generate a new release of the Action.

Let's use the markdown2man workflow as an example. There you can read that we have two test types:

  • Unit tests: They check the python script markdown2man is based on.

  • Integration tests: They check markdown2man's behaviour as a GitHub Action. Although your Action is not published yet, you can install it from a workflow in the same repository (lines 42-48). So, what I do is call the Action from the very same staging branch we are testing, and I use that Action with a markdown file I have ready at the test folder. If a proper man page file is generated, then the integration test passes (line 53). Having the chance to test an Action against its own repository is great, as it lets you test your Action as people would use it, without needing to publish it.

In addition to testing it, you should write a README.md for your action in order to explain in detail how to use your Action. In that document you should include at least this information:

  • A description of what the action does.
  • Required input and output arguments.
  • Optional input and output arguments.
  • Any secret your action needs.
  • Any environment variable your action uses.
  • An example of how to use your action in a workflow.

You should also add a LICENSE file explaining the usage terms for your Action.


Conclusion

The strong point of GitHub Actions is the high degree of reusability and sharing it promotes. Every time you find yourself repeating the same bunch of commands, you are encouraged to make an Action with those commands and share it through the Marketplace. That way you get a piece of functionality that is easier to use throughout your workflows than copy-pasting commands, and you contribute to improving the Marketplace so that others can benefit from that Action too.

Thanks to this philosophy, the GitHub Marketplace has grown to a huge number of Actions, ready to use and to save you from implementing that functionality on your own.

07 November 2021

How to use GitHub Actions for continuous integration and deployment


In a previous article I explained how to use Travis CI to get continuous integration in your project. The problem is that Travis changed its usage terms in the last year and now it's not so comfortable for open source projects. They keep saying they are still free for open source projects, but actually you have to beg for free credits every time you spend them, and they make you prove you still comply with what they define as an open source project (I've heard about cases where developers were denied free credits just because they had GitHub sponsors).

So, although I've kept my vdist project at Travis (for the moment), I've been searching for alternatives for my projects. As I use GitHub for my open source repositories, it's natural to try its continuous integration and deployment framework: GitHub Actions.

Conceptually, using GitHub Actions is pretty much the same as Travis CI, so I'm not going to repeat myself. If you want to learn why to use a continuous integration and deployment framework, you can read the article linked at the start of this one.

So, let's focus on how to use GitHub Actions.

As with Travis CI, what you do with GitHub Actions is centered on yaml files you place in the .github/workflows folder of your repository. GitHub will look at that folder to execute CI/CD workflows. In that folder you can have many workflows, to be executed depending on different GitHub events (pushes to branches, pull requests, issue filing and a long etcetera).

But I find GitHub Actions way better than Travis CI. GitHub promotes massive reuse. Every single step can be encapsulated and shared with your other workflows and with other people at GitHub. With so many people developing and sharing at GitHub, you'll find yourself reusing others' tasks (a.k.a. actions) more than implementing them yourself. Unless you are automating something really weird, chances are that somebody else has already implemented it and shared it. In this article we'll use others' actions and implement our own custom steps. Besides, you can reuse your own workflows (or share them with others), so if you have a working workflow for your project you can reuse it in a similar project without reimplementing its workflow from scratch.

For this article we'll focus on a typical "test -> package -> deploy" workflow, which we're going to call test_and_deploy.yaml (if you feel creative you can call it whatever you like, but try to be expressive). If you want this article's full code you can find it in this commit of my Cifra-rust project at GitHub.

To create that file you have two options: create it in your IDE and push it like any other file, or write it using the built-in GitHub web editor. For your first time my advice is to use the web editor, as it guides you better to get your first yaml up and working. So go to your GitHub repo page and click the Actions tab:


There, you are prompted to create your first workflow. When you choose to create a workflow you're offered a predefined template (GitHub has many for different tasks and languages) or to set up a workflow yourself. For the sake of this article choose "set up a workflow yourself". Then you will enter the web editor. An initial example will already be loaded to give you an initial scaffolding.

Now let's check the Cifra-rust yaml file to learn what we can do with GitHub Actions.


Header

In the very first few lines (from line 1 to 12) you can see we name this workflow (the "name" tag). Use an expressive name, to quickly identify what this workflow does. You will use this name to reuse this workflow from other repositories.

The "on" tag defines which events trigger this workflow. In my case, this workflow is triggered by pushes and pull requests over staging branch. There're many more events you can use.

The "workflow_dispatch" tag allows you to trigger this workflow manually from GitHub web interface. I use to set it, it doesn't harm to have that option.



Jobs

Next is the "jobs" tag (line 15) and there is where the "nuts and guts" of workflow begins. A workflow is composed of jobs. Every job is runned in a separate virtual machine (the runner) so each job has its dependencies encapsulated. That encapsulation is good to avoid jobs messing dependencies and filesystems of others jobs. Try to focus every job in just one task. 

Jobs are run in parallel by default unless you explicitly set dependencies between them. If you need a job B to run after a job A completes successfully, you need to use the "needs" tag to state that B needs A to be completed before starting. In the Cifra-rust example, the jobs "merge_staging_and_master", "deploy_crates_io" and "generate_deb_package" need the "tests" job to finish successfully before they start. You can see an example of "needs" tag usage at line 53:
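
A minimal sketch of that dependency declaration (job bodies reduced to placeholders):

jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: echo "Run the test suite here"

  deploy_crates_io:
    needs: tests
    runs-on: ubuntu-latest
    steps:
      - run: echo "Deploy to crates.io here"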

As "deploy_debian_package" respectively needs "generate_deb_package" to be finished before, you end with an execution tree like this:


Actions

Every job is composed of one or multiple steps. A step is a sequence of shell commands. You can run native shell commands or scripts you have included in your repository. From line 112 to 115 we have one of those steps:

There we are calling a script stored in a folder called ci_scripts in my repository. Note the pipe (" | ") next to the "run" tag. Without that pipe you can include just one command in the run tag, but the pipe allows you to include multiple commands to be executed, each on a separate line (like the step in lines 44 to 46).
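
A hedged sketch of a multi-command step using that pipe (the exact commands are illustrative):

- name: Get version
  run: |
    echo "Getting current version"
    ci_scripts/get_version.py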

If you find yourself repeating the same commands across multiple workflows, then you have a good candidate to encapsulate those commands in an action. As you can guess, Actions are the keystones of GitHub Actions. An action is a set of commands packaged to be shared and reused in many workflows (yours or other users'). An action has inputs and outputs, and what happens inside is not your problem as long as it works as intended. You can develop your own actions and share them with others, but that deserves an article of its own. In the next article I will convert that man page generation step into a reusable action.

At the right hand side, the GitHub Actions web editor has a search box to find actions suitable for the task you want to perform. Say you want to install a Rust toolchain; you can do this search:


Although the GitHub web editor is really complete and useful to find mistakes in your yaml files, its search box lacks a way to filter or reorder its results. Nevertheless, that search box is your best option to find shared actions for your workflows.

Once you click on any search result you are shown a summary of which text to include in your yaml file to use that action. You can get more detailed instructions by clicking the "View full Marketplace listing" link.

As with any other step, an action uses a "name" tag to document which task it is intended to do, and an "id" if that step must be referenced from other places in the workflow (see an example at line 42). What makes an action different from a command step is the "uses" tag. That tag links to the action we want to use. The text to use in that tag differs for every action, but you can find what to write there in the search result instructions. Those instructions also describe which inputs the action accepts. Those inputs are included in the "with" tag. For instance, in lines 23 to 27 I used an action to install the Rust building framework:
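
A hedged sketch of such a step, using the popular actions-rs/toolchain action (the actual action and inputs in Cifra-rust's workflow may differ; check its Marketplace listing for the exact snippet):

- name: Install Rust toolchain
  uses: actions-rs/toolchain@v1
  with:
    toolchain: stable
    profile: minimal
    override: true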

As you can see, in "uses" tag you can include which version of given action to use. There are wildcards to use latest version but you'd better set an specific version to avoid your workflows get broken by actions updates.

You build a job by chaining actions as steps. Every step in a job is executed sequentially. If any of them fails, the entire workflow fails.


Sharing data between steps and jobs

Although the steps of a job are executed in the same virtual machine, they cannot share bash environment variables because every step spawns a different bash process. If you set an environment variable in a step that needs to be used in another step of the same job, you have two options:

  • Quick and dirty: Append that environment variable to $GITHUB_ENV so that variable can be accessed later using the env context. For instance, at line 141 we create the DEB_PACKAGE environment variable:

That environment variable is accessed at line 146, in the next step:
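
A hedged sketch of that pattern (the package name is illustrative):

- name: Set package name
  run: echo "DEB_PACKAGE=cifra_1.0.0_amd64.deb" >> $GITHUB_ENV

- name: Use package name
  run: echo "Package to deploy is ${{ env.DEB_PACKAGE }}"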


  • Set step outputs: The problem with the last method is that although you can share data between steps of the same job, you cannot do it across different jobs. Setting step outputs is a bit more involved, but it leaves your step ready to share data not only with other steps of the same job but with any step of the workflow. To set an environment variable as a step output you need to echo that variable to ::set-output, giving a name to that variable followed by its value after a double colon ("::"). You have an example at line 46:

Note that the step must be identified with an "id" tag to retrieve the shared variable later. As that step is identified as "version_tag", the created "package_tag" variable can later be retrieved from another step of the same job using:

${{ steps.version_tag.outputs.package_tag }}

Actually, that method is used at line 48 to prepare that variable to be recovered from another job. Remember that, so far, this method only helps you pass data to steps in the same job. To export data from a job to be used by another job, you first have to declare it as a job output (lines 47-48):


Note that in the last screenshot the "outputs:" indentation must be at the same level as the "steps" tag to properly set package_tag as a job output.

To retrieve that output from another job, the receiving job must declare the giving job in its "needs" tag. After that, the shared value can be retrieved using the following format:

${{ needs.<needed_job>.outputs.<output_variable_name> }}
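
Putting all those pieces together, a minimal sketch of sharing a value from one job to another could look like this (values are illustrative):

jobs:
  tests:
    runs-on: ubuntu-latest
    outputs:
      package_tag: ${{ steps.version_tag.outputs.package_tag }}
    steps:
      - id: version_tag
        run: echo "::set-output name=package_tag::v1.0.0"

  deploy_debian_package:
    needs: tests
    runs-on: ubuntu-latest
    steps:
      - run: echo "Deploying version ${{ needs.tests.outputs.package_tag }}"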

In our example, "deploy_debian_package" needs the value exported at line 48 so it declares its job (tests) as needed in line 131:

After that, it can get and use that value at line 157:



Passing files between jobs

Sometimes passing a variable is not enough because you need to produce files in a job to be consumed in another job.

You can share files between steps in the same job because those steps share the same virtual machine. But between jobs you need to transfer files from one virtual machine to another.

When you generate a file (an artifact) in a job, you can upload it to a temporary shared storage to allow other jobs in the same workflow to get that artifact. To upload and download artifacts to and from that temporary storage you have two predefined actions: upload-artifact and download-artifact.

In our example, "generate_deb_package" job generates a debian package that is needed by "deploy_debian_package". So, in lines 122 to 126 "generated_deb_package" uploads that package:

On the other side "deploy_debian_package" downloads saved artifact in lines 132 to 136:



Using your repository source code

By default you start every job with a clean virtual machine. To have your source code downloaded to that virtual machine you use an action called checkout. In our example it is used as the first step of the "tests" job (line 21) to allow the code to be built and tested:
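
A sketch of that step (the pinned version is an example):

- name: Checkout code
  uses: actions/checkout@v2
  # To force a specific branch you could add:
  # with:
  #   ref: staging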

You can set any branch to be downloaded, but if you don't set any, the one related to the triggering event is used. In our example, the staging branch is the one used.


Running your workflow

You have two options to trigger your workflow to try it: you can perform the triggering event (in our example, pushing code to the staging branch) or you can launch the workflow manually from the GitHub web interface (assuming you set the "workflow_dispatch" tag in your yml file as I advised).

To do a manual trigger, go to the repository Actions tab and select the workflow you want to launch. Then you'll see the "Run workflow" button at the right hand side:


Once you push that button, the workflow will start and that tab's list will show the workflow as active. Selecting that workflow in the list shows an execution tree like the one I showed earlier. Clicking on any of the boxes shows the running logs of every step of that job. It is extremely easy to read through the generated logs.


Conclusion

I've found GitHub Actions really enjoyable. Its focus on reusability and sharing makes creating complex workflows really easy and fast. Unless you're working on a really weird workflow, chances are that most of your workflow components (if not all) are already implemented and shared as actions, so designing a complex workflow becomes an easy task of joining already available pieces.

GitHub Actions documentation is great, and its popularity makes it easy to find answers online for any problem you meet.

Besides that, I've found the yml file structure coherent and logical, so it's easy to grasp the concepts and gain a good level really quickly.

Being free and unlimited for open source repositories, I guess I'm going to migrate all my CI/CD workflows from Travis CI to GitHub Actions.

23 October 2021

How to use PackageCloud to distribute your packages


Recently I wrote an article explaining how to set up a JFrog Artifactory account to host Debian and RPM repositories.

Since then, my interest in Artifactory has weakened because they have a policy of requiring continuous activity to keep your account alive. If you have periods with no package uploads/downloads they suspend your account and you have to reactivate it manually. That is extremely unpleasant and uncomfortable, and it happens frequently (I've been receiving suspensions every few weeks) when you're like me, a hobbyist developer in his spare time who can't keep up a continuous pace in his projects. That, and the extremely complex setup needed to run a package repository, made me look for another option.

Searching through the web I found PackageCloud, and so far it happens to be an appealing alternative to Artifactory. PackageCloud has a free tier with 2GB storage and 10GB of monthly transfer. For a hobbyist like me, I think it is enough.

Creating a repository

Once you register at PackageCloud you access an extremely clean dashboard. Compared to Artifactory, everything is simple and easy. At the upper-right corner of the "Home" page you have a big "Create a repository" button. You can use it to create a repository for every application you have.

 


A wonderful feature of PackageCloud is that a single repository can host many package types simultaneously. So you don't have to create a repository for every package type you want to host for your application; instead you have a single repository for your application and you upload there the deb, rpm, gem, etc. packages you build.

Uploading packages

As you use PackageCloud you realize its developers have made a great effort to guide you at every step. Once you create a new repository you are given immediate guidance about how to upload packages through the console (although you also have a nice blue button to upload them through the web interface if you like):

As a first approach we will try the web interface to upload a package, and afterwards we will test the console approach.

If you select your recently created repository at the Home page you will enter the same screen I posted before. To use the web interface to upload packages, push the "Upload a package" button. That button will trigger the next pop-up window:

As you can see in the last screenshot, I've pushed the "Select a package" button and selected vdist_2.1.0:amd64.deb. After that, you have to select your target distribution in the given combo box. I'd suggest selecting a generally compatible distribution to widen your audience. For example, I develop on a Linux Mint box, and although Linux Mint is present in the combo box I prefer to select Ubuntu Focal as a wider equivalent (being aware that my Linux Mint Uma is based on Ubuntu Focal). After selecting the package and target distribution, the "upload" button will be enabled. The upload will start when you push that button. You will be informed that the upload has ended with a green boxed "Upload Successful!".

If you prefer the console, you can use the PackageCloud console application. That application is developed in Ruby, so you have to be sure you have Ruby installed:

dante@Camelot:~/$ sudo apt install ruby ruby-dev g++
[sudo] password for dante:
[...]

dante@Camelot:~$

Ruby package gives you "gem" command to install PackageCloud application:

dante@Camelot:~/$ sudo gem install package_cloud
Building native extensions. This could take a while...
Successfully installed unf_ext-0.0.8
Successfully installed unf-0.1.4
Successfully installed domain_name-0.5.20190701
Successfully installed http-cookie-1.0.4
Successfully installed mime-types-data-3.2021.0901
Successfully installed mime-types-3.3.1
Successfully installed netrc-0.11.0
Successfully installed rest-client-2.1.0
Successfully installed json_pure-1.8.1
Building native extensions. This could take a while...
Successfully installed rainbow-2.2.2
Successfully installed package_cloud-0.3.08
Parsing documentation for unf_ext-0.0.8
Installing ri documentation for unf_ext-0.0.8
Parsing documentation for unf-0.1.4
Installing ri documentation for unf-0.1.4
Parsing documentation for domain_name-0.5.20190701
Installing ri documentation for domain_name-0.5.20190701
Parsing documentation for http-cookie-1.0.4
Installing ri documentation for http-cookie-1.0.4
Parsing documentation for mime-types-data-3.2021.0901
Installing ri documentation for mime-types-data-3.2021.0901
Parsing documentation for mime-types-3.3.1
Installing ri documentation for mime-types-3.3.1
Parsing documentation for netrc-0.11.0
Installing ri documentation for netrc-0.11.0
Parsing documentation for rest-client-2.1.0
Installing ri documentation for rest-client-2.1.0
Parsing documentation for json_pure-1.8.1
Installing ri documentation for json_pure-1.8.1
Parsing documentation for rainbow-2.2.2
Installing ri documentation for rainbow-2.2.2
Parsing documentation for package_cloud-0.3.08
Installing ri documentation for package_cloud-0.3.08
Done installing documentation for unf_ext, unf, domain_name, http-cookie, mime-types-data, mime-types, netrc, rest-client, json_pure, rainbow, package_cloud after 8 seconds
11 gems installed


dante@Camelot:~$

That package_cloud command lets you do many things, even create repositories from the console, but let's focus on package uploading. To upload a package to an existing repository just use the "push" verb:

dante@Camelot:~/$ package_cloud push dante-signal31/vdist/ubuntu/focal vdist_2.2.0post1_amd64.deb 
Email:
dante.signal31@gmail.com
Password:

/var/lib/gems/2.7.0/gems/json_pure-1.8.1/lib/json/common.rb:155: warning: Using the last argument as keyword parameters is deprecated
Got your token. Writing a config file to /home/dante/.packagecloud... success!
/var/lib/gems/2.7.0/gems/json_pure-1.8.1/lib/json/common.rb:155: warning: Using the last argument as keyword parameters is deprecated
Looking for repository at dante-signal31/vdist... /var/lib/gems/2.7.0/gems/json_pure-1.8.1/lib/json/common.rb:155: warning: Using the last argument as keyword parameters is deprecated
success!
Pushing vdist_2.2.0post1_amd64.deb... success!

dante@Camelot:~$

URL you pass to "package_cloud push" is always of the form <username>/<repository>/<distribution>/<distribution version>.

After that you can see the new package registered in the web interface:

Installing packages

Ok, so far you know everything you need from the publisher side, but now you need to learn what your users have to do to install your packages.

The thing could not be simpler. Every repository has a button in its packages section called "Quick install instructions for:". Push that button and you will get a pop-up window like this:


Just copy the given command text (or use the copy button) and have your users paste that command in their consoles (i.e. include that command in your installation instructions documentation):

dante@Camelot:~/$ curl -s https://packagecloud.io/install/repositories/dante-signal31/vdist/script.deb.sh | sudo bash
[sudo] password for dante:
Detected operating system as LinuxMint/uma.
Checking for curl...
Detected curl...
Checking for gpg...
Detected gpg...
Running apt-get update... done.
Installing apt-transport-https... done.
Installing /etc/apt/sources.list.d/dante-signal31_vdist.list...done.
Importing packagecloud gpg key... done.
Running apt-get update... done.

The repository is setup! You can now install packages.

dante@Camelot:~$

That command registers your PackageCloud repository as one of the system's authorised package sources. Theoretically your user could now do a "sudo apt update" and get your package listed, but here comes the only gotcha of this process. Recall when I said that I develop on Linux Mint but I set the repository to Ubuntu/Focal? The point is that the last command detected my system and set my source as if it were for Linux Mint:

dante@Camelot:~/$ cat /etc/apt/sources.list.d/dante-signal31_vdist.list 
# this file was generated by packagecloud.io for
# the repository at https://packagecloud.io/dante-signal31/vdist

deb https://packagecloud.io/dante-signal31/vdist/linuxmint/ uma main
deb-src https://packagecloud.io/dante-signal31/vdist/linuxmint/ uma main


dante@Camelot:~$

The incorrect part is the linuxmint/uma path. If you insist on updating apt at this point you get the following error:

dante@Camelot:~/$ sudo apt update
Hit:1 http://archive.canonical.com/ubuntu focal InRelease
Hit:2 https://download.docker.com/linux/ubuntu focal InRelease
Hit:3 http://archive.ubuntu.com/ubuntu focal InRelease
Get:4 http://archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]
Ign:5 http://packages.linuxmint.com uma InRelease
Get:6 http://security.ubuntu.com/ubuntu focal-security InRelease [114 kB]
Get:7 http://archive.ubuntu.com/ubuntu focal-backports InRelease [101 kB]
Hit:8 http://packages.linuxmint.com uma Release
Ign:11 https://packagecloud.io/dante-signal31/vdist/linuxmint uma InRelease
Hit:10 https://packagecloud.io/dante-signal31/cifra-rust/ubuntu focal InRelease
Err:12 https://packagecloud.io/dante-signal31/vdist/linuxmint uma Release
404 Not Found [IP: 52.52.239.191 443]
Reading package lists... Done
E: The repository 'https://packagecloud.io/dante-signal31/vdist/linuxmint uma Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.


dante@Camelot:~$

So, in my case I have to correct it manually to leave it this way:

dante@Camelot:~/$ cat /etc/apt/sources.list.d/dante-signal31_vdist.list 
# this file was generated by packagecloud.io for
# the repository at https://packagecloud.io/dante-signal31/vdist

deb https://packagecloud.io/dante-signal31/vdist/ubuntu/ focal main
deb-src https://packagecloud.io/dante-signal31/vdist/ubuntu/ focal main

dante@Camelot:~$

Now "apt update" will work and you'll find our package:

dante@Camelot:~/$ sudo apt update
Hit:1 https://download.docker.com/linux/ubuntu focal InRelease
Hit:2 http://archive.ubuntu.com/ubuntu focal InRelease
Hit:3 http://archive.canonical.com/ubuntu focal InRelease
Get:4 http://archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]
Ign:5 http://packages.linuxmint.com uma InRelease
Get:6 http://security.ubuntu.com/ubuntu focal-security InRelease [114 kB]
Get:7 http://archive.ubuntu.com/ubuntu focal-backports InRelease [101 kB]
Hit:8 http://packages.linuxmint.com uma Release
Hit:10 https://packagecloud.io/dante-signal31/cifra-rust/ubuntu focal InRelease
Get:11 https://packagecloud.io/dante-signal31/vdist/ubuntu focal InRelease [24,4 kB]
Get:12 https://packagecloud.io/dante-signal31/vdist/ubuntu focal/main amd64 Packages [963 B]
Fetched 353 kB in 3s (109 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
All packages are up to date.
dante@Camelot:~/Downloads$ sudo apt show vdist
Package: vdist
Version: 2.2.0post1
Priority: extra
Section: net
Maintainer: dante.signal31@gmail.com
Installed-Size: 214 MB
Depends: libssl1.0.0, docker-ce
Homepage: https://github.com/dante-signal31/vdist
Download-Size: 59,9 MB
APT-Sources: https://packagecloud.io/dante-signal31/vdist/ubuntu focal/main amd64 Packages
Description: vdist (Virtualenv Distribute) is a tool that lets you build OS packages from your Python applications, while aiming to build an isolated environment for your Python project by utilizing virtualenv. This means that your application will not depend on OS provided packages of Python modules, including their versions.

N: There is 1 additional record. Please use the '-a' switch to see it

dante@Camelot:~$

Obviously, if your user's system matches your repository target there will be nothing to fix, but chances are that your users have derivative distributions, so they'll need to apply this fix; make sure to include it in your documentation.

At this point, your users will be able to install your package like any other:

dante@Camelot:~/$ sudo apt install vdist
[...]

dante@Camelot:~$

Conclusion

PackageCloud makes deploying your packages extremely easy. Compared with Bintray or Artifactory, its setup is a charm. I have to check how things go in the long term, but at first glance it seems a promising service.