
20 January 2022

How to parse console arguments in your Rust application with Clap

In a previous article I explained how to use the ArgParse module to read console arguments in your Python applications. Rust has its own crates to parse console arguments too. In this article I'm going to explain how to use the clap crate for that.

One wonderful thing about Rust is that inside its hard rustacean shell it often has a pythonista heart. Using clap you'll find many ArgParse features. Actually the concepts are quite similar: what ArgParse calls subparsers, clap calls subcommands, and ArgParse arguments are simply called args in clap. So if you're used to ArgParse you'll likely feel at home using clap. I'm going to assume you have read my ArgParse article, so I won't repeat the same concepts here.

Like any other Rust crate you need to include clap in your Cargo.toml file:
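
A minimal sketch of that dependency entry (the version number here is only illustrative; use whatever clap version your project targets):

[dependencies]
clap = "3"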

After that you can use clap in your source code. To illustrate the explanations, I'm going to use as an example the command parsing I do in my project cifra-rust.

As you can see there, you can use clap directly in your main() function, but I like to abstract it into a generic function that returns my own Configuration struct type. That way, if I switch from clap to any other parser crate, the change will be smoother for my app (it reduces coupling). So, as I do in Python, I define a parse_arguments() function that returns a Configuration type.

There you can see that the root parser is defined in clap using App::new(). Like other Rust crates, clap makes heavy use of the builder pattern to configure its parsing. That way you can configure the command version, author or long description ("long_about"), among other options, as you can see from lines 277 to 280:
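
A minimal sketch of what that builder chain looks like, assuming clap 3's App API (the description text is illustrative, not the literal cifra-rust source):

use clap::App;

let parser = App::new("cifra")
    .version("1.0.0")
    .author("dante.signal31@gmail.com")
    .long_about("Console command to crypt and decrypt texts using classic methods.");
    // setting(), subcommand() and get_matches_from() calls are chained later.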


Clap behaviour can be customized with the setting() call. A typical parameter for that call is AppSettings::ArgRequiredElseHelp, to show the help if the command is called with no arguments:
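
For instance, a sketch assuming clap 3's AppSettings:

use clap::AppSettings;

App::new("cifra")
    .setting(AppSettings::ArgRequiredElseHelp)
    // ...rest of the configuration...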


A subparser is created by calling subcommand() and passing it a new App instance:
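
Something like this sketch (the subcommand name and description are illustrative):

App::new("cifra")
    .subcommand(App::new("dictionary")
        .about("Manage dictionaries to attack ciphered texts."))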


With the about() call you can define a short description of the command or argument that will appear when --help is called.

Usually parsers (and subparsers) will need arguments. Arguments are defined by calling arg() on the parser they belong to. Here you have an example:
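
A sketch of such a definition, assuming clap 3's Arg API (the line numbers mentioned below refer to the original cifra-rust source; help strings here are illustrative):

use clap::{App, Arg};

App::new("create")
    .about("Create a dictionary of unique words.")
    .arg(Arg::new("dictionary_name")
        .index(1)
        .required(true)
        .takes_value(true)
        .value_name("NEW_DICTIONARY_NAME")
        .help("Name for the dictionary to create."))
    .arg(Arg::new("initial_words_file")
        .long("initial_words_file")
        .short('i')
        .takes_value(true)
        .value_name("PATH_TO_FILE_WITH_WORDS")
        .help("Optionally load into the dictionary the words from a text file."))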


In the last example you can see that the subparser create (line 285) has two arguments: dictionary_name (line 287) and initial_words_file (line 292). Note that every arg() call takes an Arg instance as a parameter. Argument configuration is done using the builder pattern over Arg instances.

The argument dictionary_name is required because it is configured with required(true) at line 288. Be aware that although you could use a flag here, you are discouraged to do so. As a rule, all required arguments should be positional (i.e. require no flag); the only time a flag makes sense for a required argument is when the requested operation is destructive and the user is required to prove he knows what he is doing by providing that extra flag. When a positional argument is used you may call index() to specify the position of this argument relative to other positional arguments. Actually, I've found out later that you can leave off index(), and in that case indexes are assigned in order of evaluation. What index() allows is setting indexes out of order.

When you call takes_value(true) on an argument, the value provided by the user is stored in a key named like the argument. For instance, the takes_value(true) at line 290 makes the provided value be stored in a key called dictionary_name. If you are using an optional flag with no value (i.e. a boolean flag) you can call takes_value(false) or just omit it.

The call to value_name() is equivalent to the metavar parameter of Python's argparse. It lets you define the string used to represent this parameter when the help is shown with the --help argument.

Oddly, arguments don't use about() to define their help strings but help() instead.

You can find an optional argument definition from lines 292 to 298. There, the long version of the flag is defined with long() and the short one with a short() call. In this example, this argument can be given both as "--initial_words_file <PATH_TO_FILE_WITH_WORDS>" and as "-i <PATH_TO_FILE_WITH_WORDS>".

It can be useful to call validator(), as it lets you define a function to be called over the provided argument to assert it is what you expect to receive. The provided function receives a string parameter with the provided argument and returns a Result<(), String>. If you make your checks and find the argument correct, the function should return Ok(()); otherwise it should return an Err("Here you write your error message").
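
A sketch of such a validator (note the exact parameter type depends on the clap version: clap 3 passes &str while clap 2 passes an owned String):

fn file_exists(path: &str) -> Result<(), String> {
    if std::path::Path::new(path).exists() {
        Ok(())
    } else {
        Err(format!("File {} does not exist", path))
    }
}

// Attached to the argument definition:
// Arg::new("initial_words_file").validator(file_exists)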

Chaining methods on nested arguments and subcommands you can define an entire command tree. Once you finish, you have to make one final call to get_matches_from() on the root parser. You pass a vector of strings to that method, with every individual command argument:
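
A simplified sketch of that final call; it returns clap's ArgMatches directly, while the article's real parse_arguments() goes on to build its own Configuration type from it:

use clap::{App, ArgMatches};

fn parse_arguments(arg_vec: Vec<String>) -> ArgMatches {
    App::new("cifra")
        // ...all the builder calls shown in the previous snippets...
        .get_matches_from(arg_vec)
}

// In production code you would pass the real console arguments:
// let matches = parse_arguments(std::env::args().collect());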



Usually you'll pass the vector of strings obtained from args(), which returns every command argument given by the user when the application was called from the console.



Note that my main() function is almost empty. That way I can call _main() (note the underscore) from my integration tests, passing in my own argument vector, to simulate a user calling the application from the console.

What get_matches_from() returns is an ArgMatches value. That type returns every argument value if you search for it by its key name. From line 149 to 245 we implement a method to create a Configuration type using the ArgMatches contents.

Be aware that when you only have one parser you can get values directly using the value_of() method. But if you have subcommands, you first have to get the specific ArgMatches for that subcommand branch with a call to subcommand_matches():
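
A sketch of that retrieval, using the argument names from the previous snippets (the line numbers in the list below refer to the original cifra-rust source):

if let Some(dictionary_matches) = matches.subcommand_matches("dictionary") {
    if let Some(create_matches) = dictionary_matches.subcommand_matches("create") {
        // Mandatory positional argument, safe to unwrap.
        let dictionary_name = create_matches.value_of("dictionary_name").unwrap();
        // Optional flag, check its presence before reading it.
        if create_matches.is_present("initial_words_file") {
            let words_file = create_matches.value_of("initial_words_file").unwrap();
            // ...store both values in the Configuration being built...
        }
    }
}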


In the last example you can see the usual workflow:
  • You go deeper, getting the ArgMatches of the branch you are interested in (lines 160-161 of the cifra-rust source).
  • Once you have the ArgMatches you want, you use value_of() to get a specific argument value (line 164).
  • For optional parameters you can make an is_present() call to check whether that one was provided or not (line 165).
Using those methods, you can retrieve every provided value and build up your configuration to run your application.

As you can see, you can build a really powerful command parser with clap, getting every feature you are used to from ArgParse but in a rustacean environment.

17 December 2021

How to create your own custom Actions for GitHub Actions


In my article about GitHub Actions we reviewed how you can build entire workflows just using premade bricks, called Actions, that you can find at the GitHub Marketplace.

That marketplace has many ready-to-use Actions, but chances are that sooner or later you'll need to do something that has no action at the marketplace. Sure, you can still use your own scripts. In that article I used a custom script (./ci_scripts/get_version.py) in the section "Sharing data between steps and jobs". The problem with that approach is that if you want to do the same task in another project you need to copy your scripts between project repositories and adapt them. You'd better transform your scripts into Actions so they can be easily reused, not only in your projects but publicly in other people's projects.

Every Action you can find at the marketplace is made in one of the ways I'm going to explain here. Actually, if you enter any Action's marketplace page you will find a link, at the right hand side, to that Action's repository so you can assess it and learn how it works.

There are 3 main methods to build your own GitHub Actions:

  • Composite Actions: They are the simplest and quickest, but a condition to use this way is that your Action should be based on a self-sufficient script that needs no additional dependency to be installed. It should run only with what a standard Linux distribution offers.
  • Docker Actions: If you need any dependency to make your script work, then you'll need to follow this way.
  • Javascript Actions: Well... you can write your own Actions with JavaScript, but I don't like that language so I'm not going to include it in this article.
The problem with your Action's dependencies is that they can pollute the workflow environment where your Action is going to be used. Your action dependencies can even collide with those of the app being built. That's why we need to encapsulate our Action and its dependencies so they are independent of the environment of the app being built. Of course, this problem does not apply if your Action is intended to set up the workflow environment by installing something. There are Actions to, for example, install and set up Pandoc to be used by the workflow. The problem arises when your Action is intended to do one specific task not related to installing something (for example copying files) and it installs something under the table, as that can compromise the workflow environment. So, the best option if you need to install anything to make your Action work is installing it in a docker container and making your Action script run from inside that container, entirely independent of the workflow environment.

Composite Actions

If your Action just needs a bunch of bash commands or a Python script exclusively using the built-in standard library, then composite Actions are your way to go.

As an example of a composite action, we are going to review how my Action rust-app-version works. That Action looks for a Rust Cargo.toml configuration file and reads which version is set there for the Rust app. That version string is offered as the Action output and you can use that output in your workflow, for instance, to tag a new release at GitHub. This action only uses modules available in the standard Python distribution. It does not need to install anything on the user's runner. It's true that there is a requirements.txt at the rust-app-version repository, but those are only dependencies for unit testing.

To have your own composite Action you first need a GitHub repository to host it. There you can place the few files really needed for your action.

At the very root of your repository you need a file called "action.yml". This file is really important as it models your Action. Your users should be able to think about your action as a black box: something where you enter some inputs and you receive some outputs. Those inputs and outputs are defined in action.yml.

If we read the action.yml file at rust-app-version, we can see that this action only needs an input called "cargo_toml_folder", and actually that input is optional, as it defaults to "." if it is omitted when this action is called:
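
A sketch of how such an inputs section looks in action.yml (the description text is illustrative):

inputs:
  cargo_toml_folder:
    description: 'Folder where the Cargo.toml file can be found.'
    required: false
    default: '.'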



Outputs are somewhat different, as they must refer to the output of a specific step in your action:
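
A sketch of that outputs section, wired to a step id as described below:

outputs:
  app_version:
    description: 'Version string found in Cargo.toml.'
    value: ${{ steps.get-version.outputs.version }}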


In the last section we specify that this action is going to have just one output called "app_version", and that output is going to be the output called "version" of a step with an id value of "get-version".

Those inputs and outputs define what your action consumes and offers, i.e. what your action does. How your action does it is defined under the "runs:" tag. There you set that your Action is a composite one and you call a sequence of steps. This particular example only has one step, but you can have as many steps as you need:
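
A sketch of that runs section (the script path is illustrative, and the line numbers mentioned below refer to the original action.yml in the rust-app-version repository). Note that composite run steps must declare their shell:

runs:
  using: "composite"
  steps:
    - name: Get version
      id: get-version
      shell: bash
      run: echo "::set-output name=version::$(python3 ${{ github.action_path }}/rust_app_version/get_version.py ${{ inputs.cargo_toml_folder }})"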



Take note of line 22, where that step receives its id: "get-version". That id is important to refer to this step from the outputs configuration.

Line 24 is where your command is run. I only execute one command. If you needed multiple commands to be executed inside the same step, then you should use a pipe after run ("run: |"). With that pipe you mark that the next few lines (indented under the "run:" tag) are separate commands to be executed sequentially.

The command at line 24 is interesting because of 3 points:
  • It calls a script located in our Action repository. To refer to our Action repository root, use the github.action_path variable. The great thing is that, although our script is hosted in its own repository, GitHub runs it so that it can see the repository of the workflow from where it is called. Our script will see the workflow repository files as if it were run from its root.
  • At the end of the line you can see how inputs are used through the inputs context.
  • The weirdest thing about that line is how you set up that step's output. You set a bash step output doing an echo "::set-output name=<output_name>::<output_value>". In this case the name is version and its value is what get_version.py prints to the console. Be aware that output_name is used to retrieve that output after the step ends through ${{ steps.<id>.outputs.<output_name> }}, in this case ${{ steps.get-version.outputs.version }}.
Apart from that, you only need to set up your Action metadata. That is done in the first few lines:
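
Those first lines look something like this sketch (the icon and color values are illustrative):

name: 'Rust app version'
description: 'Read the version string from a Rust Cargo.toml file.'
branding:
  icon: 'box'
  color: 'orange'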


Be aware that "name:" is the name your action will have at GitHub Marketplace. The another parameter, "description:", its the short explanation that will be shown along name in the search results at Markeplace. And "branding:" is only the icon (from Feather icon suite) and color that will represent your action at Markeplace.

With those 24 lines at action.yml and your script at its respective path (here in the rust_app_version/ subfolder), you can use your action. You just need to push the button that will appear in your repository to publish your action at the Marketplace. Nevertheless, you'd better read this article to the end because I have some recommendations that may be helpful for you.

Once published, it becomes visible to other GitHub users and a Marketplace page is created for your action. To use an Action like this you only need to include in your workflow a configuration like this:
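
A sketch of such a workflow step (the version tag here is just a placeholder; check the Marketplace page for the exact reference). Afterwards you could read the result as ${{ steps.rust_app_version.outputs.app_version }}:

- name: Get Rust app version
  id: rust_app_version
  uses: dante-signal31/rust-app-version@v1
  with:
    cargo_toml_folder: '.'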



Docker actions

If your Action needs to install any dependency, then you should package that Action inside a docker container. That way your Action dependencies won't mess with your user's workflow dependencies.

As an example of a docker action, we are going to review how my Action markdown2man works. That action takes a README.md file and converts it to a man page. Using it you don't have to keep two sources to document your console application usage. Instead, you can document your app usage only in README.md and convert that file to a man page.

To do that conversion markdown2man needs the Pandoc package installed. But Pandoc has its respective dependencies, so installing them on the user's runner may break their workflow. Instead, we are going to install those dependencies in a docker image and run our script from that image. Remember that docker lets you execute scripts from a container that interact with the host files.

As with composite Actions, we need to create an action.yml at the Action repository root. There we set our metadata, inputs and outputs like we do with composite actions. The difference here is that this specific markdown2man Action does not emit any output, so that section is omitted. The section for "runs:" is different too:
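
A sketch of that runs section for a docker action (the input names here are hypothetical, chosen only to show how inputs reach the container as arguments):

runs:
  using: "docker"
  image: "Dockerfile"
  args:
    - ${{ inputs.markdown_file }}
    - ${{ inputs.manpage_name }}
    - ${{ inputs.manpage_folder }}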


In that section we specify this Action is a docker one (in "using:"). There are two ways to use a docker image in your action:
  • Generate a specific image for that action and store it at the GitHub docker registry. In that case you use the "image: Dockerfile" tag.
  • Use a prebuilt image from the DockerHub registry. To do that you use something like "image: docker://<dockerhub_user>/<image>:<tag>".
If the image you are going to build is exclusively intended to be used as a GitHub Action, I would follow the Dockerfile option. Here, with markdown2man, we follow the Dockerfile approach, so a docker image is built any time the Action is run after a Dockerfile update. The generated image is cached at the GitHub registry to be offered more quickly to further Action runs. Remember a Dockerfile is a kind of recipe to "cook" an image, so the commands that file contains are only executed when the image is built ("cooked"). Once built, the only command that is run is the one you set at the entrypoint tag, passing in the arguments set at "docker run".

The "args:" tag has every parameter to be passed to our script in the container. You will probably use your inputs here to be passed to our script. Be aware that, as happened with the composite action, here the user's repository files are visible to our container.

As you may suspect by now, docker actions are more involved than composite Actions because of the added complexity of creating the Dockerfile. The Dockerfile for markdown2man is pretty simple. As the markdown2man script is a Python one, we make our image derive from the official Python docker image for version 3.8:
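
That first line is the usual FROM instruction:

# Base image with Python 3.8 preinstalled.
FROM python:3.8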



Afterwards, we set image metadata:


To configure your image, for example to install things, you use RUN commands:
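
For instance, a sketch of installing Pandoc with a RUN instruction (the exact package list is illustrative):

# Install Pandoc, the tool the conversion script relies on.
RUN apt-get update && apt-get install -y pandoc && rm -rf /var/lib/apt/lists/*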


The ENV command defines environment variables to be used in the following Dockerfile commands:


You use the COPY command to copy your requirements.txt from your repository and include it in your generated image. Your scripts are copied from your Action repository into the container following the same approach:
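
A sketch of those COPY instructions (paths are illustrative, and the pip install step is my assumption about what you would typically do with the copied requirements file):

# Copy the Python dependency list and install it inside the image.
COPY requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
# Copy the conversion script from the Action repository into the image.
COPY markdown2man.py /markdown2man.py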


After the script files are copied, I like to make them executable and link them from the /usr/bin/ folder to include them in the system path:


After that, you set your script as the image entrypoint, so this script is run once the image is started and is provided with the arguments you set in the "args:" tag of the action.yml file:
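
A sketch of that final instruction (the script path matches the previous sketch, not necessarily the real Dockerfile):

# Arguments from the "args:" tag in action.yml are appended to this entrypoint.
ENTRYPOINT ["/markdown2man.py"]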



You can try that image on your computer by building it from the Dockerfile and running it as a container:

dante@Camelot:~/$ docker run -ti -v ~/your_project/:/work/ dantesignal31/markdown2man:latest /work/README.md mancifra


dante@Camelot:~$

For local testing you need to mount your project folder as a volume (-v flag) if your scripts have to process any file from that repository. The last two arguments in the example (/work/README.md and mancifra) are the arguments passed to the entrypoint.

And that's all. Once you have tested everything you can publish your Action and use it in your workflows:
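
A sketch of such a call (input names other than manpage_folder are hypothetical; check the action's README for the real ones):

- name: Generate man page
  uses: dante-signal31/markdown2man@v1
  with:
    markdown_file: README.md
    manpage_name: cifra
    manpage_section: '2'
    manpage_folder: man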


With a call like that, a man file called cifra.2.gz should be created in the man folder. If the manpage_folder folder does not exist, then markdown2man creates it for you.


Your Actions are first class code

Although your Actions will likely be small, you should take care of them as you would with your full-blown apps. Be aware that many people will find your Actions through the Marketplace and use them in their workflows. An error in your Action can break many workflows, so be diligent and test your Action as you would any other app.

So, with my Actions I follow the same approach as in my other projects and I set up a GitHub workflow to run tests against any push to a staging branch. Only once those tests succeed do I merge staging into the main branch and generate a new release for the Action.

Let's use the markdown2man workflow as an example. There you can see that we have two test types:

  • Unit tests: They check the Python script that markdown2man is based on.

  • Integration tests: They check markdown2man's behaviour as a GitHub Action. Although your Action is not published yet, you can use it from a workflow in the same repository (lines 42-48). So, what I do is call the Action from the very same staging branch we are testing and use that Action with a markdown file I have ready in the test folder. If a proper man page file is generated, then the integration test passes (line 53). Having the chance to test an Action against its own repository is great, as it lets you test your Action as people would use it, without needing to publish it.

In addition to testing it, you should write a README.md for your Action in order to explain in detail how to use it. In that document you should include at least this information:

  • A description of what the action does.
  • Required input and output arguments.
  • Optional input and output arguments.
  • Any secret your action needs.
  • Any environment variable your action uses.
  • An example of how to use your action in a workflow.

And you should also add a LICENSE file explaining the usage terms for your Action.


Conclusion

The strong point of GitHub Actions is the high degree of reusability and sharing it promotes. Every time you find yourself repeating the same bunch of commands, you are encouraged to make an Action with those commands and share it through the Marketplace. That way you get a piece of functionality that is easier to use throughout your workflows than copy-pasting commands, and you contribute to improving the Marketplace so that others can benefit from that Action too.

Thanks to this philosophy, the GitHub Marketplace has grown to a huge number of Actions, ready to use and to save you from implementing that functionality on your own.

07 November 2021

How to use GitHub Actions for continuous integration and deployment


In a previous article I explained how to use Travis CI to get continuous integration in your project. The problem is that Travis has changed its usage terms in the last year and now it's not so comfortable for open source projects. They keep saying they are still free for open source projects, but actually you have to beg for free credits every time you spend them, and they make you prove you still comply with what they define as an open source project (I've heard about cases where developers were denied free credits just because they had GitHub sponsors).

So, although I've kept my vdist project at Travis (for the moment), I've been searching for alternatives for my projects. As I use GitHub for my open source repositories, it's natural to try its continuous integration and deployment framework: GitHub Actions.

Conceptually, using GitHub Actions is pretty much the same as Travis CI, so I'm not going to repeat myself. If you want to learn why to use a continuous integration and deployment framework, you can read the article I linked at the start of this one.

So, let's focus on how to use GitHub Actions.

As with Travis CI, what you do with GitHub Actions is centered on yaml files you place in the .github/workflows folder of your repository. GitHub will look at that folder to execute CI/CD workflows. In that folder you can have many workflows to be executed depending on different GitHub events (pushes over branches, pull requests, issue filing and a long etcetera).

But I find GitHub Actions way better than Travis CI. GitHub promotes massive reuse. Every single step can be encapsulated and shared with your other workflows and with other people at GitHub. With so many people developing and sharing at GitHub, you'll find yourself reusing others' tasks (a.k.a. actions) rather than implementing them yourself. Unless you are automating something really weird, chances are that someone else has implemented it and shared it. In this article we'll use others' actions and implement our own custom steps. Besides, you can reuse your own workflows (or share them with others), so if you have a working workflow for your project you can reuse it in a similar project without reimplementing its workflow from scratch.

For this article we'll focus on a typical "test -> package -> deploy" workflow which we're going to call test_and_deploy.yaml (if you feel creative you can call it as you like, but try to be expressive). If you want this article's full code you can find it in this commit of my Cifra-rust project at GitHub.

To create that file you have two options: create it in your IDE and push it like any other file, or write it using the built-in GitHub web editor. For the first time my advice is to use the web editor, as it guides you better to get your first yaml up and working. So go to your GitHub repo page and click on the Actions tab:


There, you are prompted to create your first workflow. When you choose to create a workflow you're offered the option to use a predefined template (GitHub has many for different tasks and languages) or to set up a workflow yourself. For the sake of this article choose "set up a workflow yourself". Then you will enter the web editor. An initial example will already be loaded to give you an initial scaffolding.

Now let's check the Cifra-rust yaml file to learn what we can do with GitHub Actions.


Header

In the very first few lines (from line 1 to 12) you can see we name this workflow ("name" tag). Use an expressive name, to identify quickly what this workflow does. You will use this name to reuse this workflow from other repositories.

The "on" tag defines which events trigger this workflow. In my case, this workflow is triggered by pushes and pull requests over staging branch. There're many more events you can use.

The "workflow_dispatch" tag allows you to trigger this workflow manually from GitHub web interface. I use to set it, it doesn't harm to have that option.



Jobs

Next is the "jobs" tag (line 15) and there is where the "nuts and guts" of workflow begins. A workflow is composed of jobs. Every job is runned in a separate virtual machine (the runner) so each job has its dependencies encapsulated. That encapsulation is good to avoid jobs messing dependencies and filesystems of others jobs. Try to focus every job in just one task. 

Jobs are run in parallel by default unless you explicitly set dependencies between them. If you need a job B to run after a job A has completed successfully, you need to use the "needs" tag to state that B needs A to be completed before starting. In the Cifra-rust example, the jobs "merge_staging_and_master", "deploy_crates_io" and "generate_deb_package" need the "tests" job to be successfully finished before starting. You can see an example of "needs" tag usage at line 53:
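
A sketch of that kind of dependency declaration:

jobs:
  tests:
    runs-on: ubuntu-latest
    # ...test steps...
  deploy_crates_io:
    needs: tests
    runs-on: ubuntu-latest
    # ...deploy steps...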

As "deploy_debian_package" respectively needs "generate_deb_package" to be finished before, you end with an execution tree like this:


Actions

Every job is composed of one or multiple steps. A step is a sequence of shell commands. You can run native shell commands or scripts you have included in your repository. From line 112 to 115 we have one of those steps:

There we are calling a script stored in a folder called ci_scripts in my repository. Note the pipe ("|") next to the "run" tag. Without that pipe you can include just one command in the run tag, but that pipe lets you include multiple commands to be executed on separate lines (like the step in lines 44 to 46).
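
A sketch of a multi-command step using that pipe (the script name my_script.py is hypothetical; only the ci_scripts folder comes from the article):

- name: Run a repository script
  run: |
    chmod +x ci_scripts/my_script.py
    ./ci_scripts/my_script.py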

If you find yourself repeating the same commands across multiple workflows, then you have a good candidate to encapsulate those commands in an action. As you can guess, Actions are the keystones of GitHub Actions. An action is a set of commands packaged to be shared and reused in many workflows (yours or other users'). An action has inputs and outputs, and what happens inside is not your problem as long as it works as intended. You can develop your own actions and share them with others, but that deserves an article of its own. In the next article I will convert that man page generation step into a reusable action.

At the right hand side, the GitHub Actions web editor has a search box to find actions suitable for the task you want to perform. Say you want to install a Rust toolchain; you can do this search:


Although the GitHub web editor is really complete and useful to find mistakes in your yaml files, its search box lacks a way to filter or reorder its results. Nevertheless, that search box is your best option to find shared actions for your workflows.

Once you click on any search result you are shown a summary of the text to include in your yaml file to use that action. You can get more detailed instructions by clicking on the "View full Marketplace listing" link.

Like any other step, an action uses a "name" tag to document which task it is intended to do, and an "id" if that step must be referenced from other workflow places (see an example at line 42). What makes an action different from a command step is the "uses" tag. That tag links to the action we want to use. The text to use in that tag differs for every action, but you can find what to write there in the search result instructions. Those instructions also describe which inputs the action accepts. Those inputs are included in the "with" tag. For instance, in lines 23 to 27 I used an action to install the Rust building framework:
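
A sketch of such a step; I'm assuming the popular actions-rs/toolchain action here, which may not be the exact one used in the original workflow:

- name: Install Rust toolchain
  uses: actions-rs/toolchain@v1
  with:
    toolchain: stable
    profile: minimal
    override: true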

As you can see, in "uses" tag you can include which version of given action to use. There are wildcards to use latest version but you'd better set an specific version to avoid your workflows get broken by actions updates.

You build a job by chaining actions as steps. Every step in a job is executed sequentially. If any of them fails, the entire workflow fails.


Sharing data between steps and jobs

Although the steps of a workflow are executed in the same virtual machine, they cannot share bash environment variables because every step spawns a different bash process. If you set an environment variable in a step that needs to be used in another step of the same job, you have two options:

  • Quick and dirty: Append that environment variable to $GITHUB_ENV so that the variable can be accessed later using the env context. For instance, at line 141 we create the DEB_PACKAGE environment variable:

That environment variable is accessed at line 146, in the next step:
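
A sketch of both steps (the file-name pattern is illustrative):

- name: Create DEB_PACKAGE environment variable
  run: echo "DEB_PACKAGE=$(ls target/debian/*.deb)" >> $GITHUB_ENV

- name: Use DEB_PACKAGE in a later step
  run: echo "Generated package: ${{ env.DEB_PACKAGE }}"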


  • Set step outputs: The problem with the last method is that, although you can share data between steps of the same job, you cannot do it across different jobs. Setting step outputs is a bit slower but leaves your step ready to share data not only with other steps of the same job but with any step of the workflow. To set an environment variable as a step output you need to echo "::set-output", giving the variable a name followed by its value after a double colon ("::"). You have an example at line 46:

Note that the step must be identified with an "id" tag to later retrieve the shared variable. As that step is identified as "version_tag", the created "package_tag" variable can be later retrieved from another step of the same job using:

${{ steps.version_tag.outputs.package_tag }}

Actually that method is used at line 48 to prepare that variable to be recovered from another job. Remember that this method so far only helps you pass data to steps in the same job. To export data from a job so it can be used by another job, you have to declare it first as a job output (lines 47-48):


Note that in the last screenshot the "outputs" indentation level should be at the same level as the "steps" tag to properly set package_tag as a job output.
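
Putting both pieces together, a sketch of a job exporting a step output as a job output (the get_version.py script name is taken from the article's earlier example and is only illustrative here):

jobs:
  tests:
    runs-on: ubuntu-latest
    outputs:
      package_tag: ${{ steps.version_tag.outputs.package_tag }}
    steps:
      - name: Get new version tag
        id: version_tag
        run: echo "::set-output name=package_tag::$(python3 ci_scripts/get_version.py)"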

To retrieve that output from another job, the receiving job must declare the giving job as needed in its "needs" tag. After that, the shared value can be retrieved using the following format:

${{ needs.<needed_job>.outputs.<output_variable_name> }}

In our example, "deploy_debian_package" needs the value exported at line 48 so it declares its job (tests) as needed in line 131:

After that, it can get and use that value at line 157:



Passing files between jobs

Sometimes passing a variable is not enough because you need to produce files in a job to be consumed in another job.

You can share files between steps in the same job because those steps share the same virtual machine. But between jobs you need to transfer files from one virtual machine to another.

When you generate a file (an artifact) in a job, you can upload it to temporary shared storage to allow other jobs in the same workflow to get that artifact. To upload and download artifacts to and from that temporary storage you have two predefined actions: upload-artifact and download-artifact.

In our example, "generate_deb_package" job generates a debian package that is needed by "deploy_debian_package". So, in lines 122 to 126 "generated_deb_package" uploads that package:

On the other side, "deploy_debian_package" downloads the saved artifact in lines 132 to 136:
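
A sketch of both sides of that exchange, using the upload-artifact and download-artifact actions (version tags and paths are illustrative):

# In the generate_deb_package job:
- name: Upload debian package
  uses: actions/upload-artifact@v2
  with:
    name: debian_package
    path: target/debian/*.deb

# In the deploy_debian_package job:
- name: Download debian package
  uses: actions/download-artifact@v2
  with:
    name: debian_package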



Using your repository source code

By default you start every job with a clean virtual machine. To have your source code downloaded to that virtual machine you use an action called checkout. In our example it is used as the first step of the "tests" job (line 21) to allow the code to be built and tested:
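
A sketch of that first step:

- name: Checkout source code
  uses: actions/checkout@v2
  # Add a "ref:" input under "with:" if you need a branch other than the triggering one.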

You can set any branch to be downloaded, but if you don't set one, the branch related to the triggering event is used. In our example, the staging branch is the one used.


Running your workflow

You have two options to trigger your workflow to try it: you can perform the triggering event (in our example pushing code to the staging branch) or you can launch the workflow manually from the GitHub web interface (assuming you set the "workflow_dispatch" tag in your yml file as I advised).

To do a manual trigger, go to the repository Actions tab and select the workflow you want to launch. Then you'll see the "Run workflow" button at the right hand side:


Once you push that button, the workflow will start and that tab's list will show the workflow as active. Selecting that workflow in the list shows an execution tree like the one I showed earlier. Clicking on any of the boxes shows running logs for every step of that job. It is extremely easy to read through the generated logs.


Conclusion

I've found GitHub Actions really enjoyable. Its focus on reusability and sharing makes creating complex workflows really easy and fast. Unless you're working on a really weird workflow, chances are that most of your workflow components (if not all) are already implemented and shared as actions, so designing a complex workflow becomes an easy task of joining already available pieces.

GitHub Actions documentation is great, and its popularity makes it easy to find answers online for any problem you meet.

Besides that, I've found the yml file structure coherent and logical, so it's easy to grasp the concepts and gain a good level really quickly.

Being free and unlimited for open source repositories, I guess I'm going to migrate all my CI/CD workflows from Travis CI to GitHub Actions.

23 October 2021

How to use PackageCloud to distribute your packages


Recently I wrote an article explaining how to set up a JFrog Artifactory account to host Debian and RPM repositories.

Since then, my interest in Artifactory has weakened because they have a policy of continuous activity to keep your account alive. If you have periods with no package uploads/downloads they suspend your account and you have to reactivate it manually. That is extremely unpleasant and uncomfortable, and it happens frequently (I've been receiving suspensions every few weeks) when you're like me, a hobbyist developer in his spare time who can't keep up a continuous pace in his projects. That, and the extremely complex setup needed to run a package repository, made me look for another option.

Searching through the web I've found PackageCloud, and so far it happens to be an appealing alternative to Artifactory. PackageCloud has a free tier with 2GB of storage and 10GB of monthly transfer. For a hobbyist like me I think that is enough.

Creating a repository

Once you register at PackageCloud you access an extremely clean dashboard. Compared to Artifactory, everything is simple and easy. At the top-right corner of the "Home" page you have a big button to "Create a repository". You can use it to create a repository for every application you have.

 


A wonderful feature of PackageCloud is that a single repository can host many package types simultaneously. So you don't have to create a repository for every package type you want to host for your application; instead, you have a single repository for your application and you upload there the deb, rpm, gem, etc. packages you build.

Uploading packages

While you use PackageCloud you realize its developers have made a great effort to guide you at every step. Once you create a new repository you are given immediate guidance about how to upload packages through the console (although you also have a nice blue button to upload them through the web interface if you like):

As a first approach we will try the web interface to upload a package, and afterwards we will test the console approach.

If you select your recently created repository at the Home page you will enter the same screen I posted before. To use the web interface to upload packages, push the "Upload a package" button. That button will trigger the next pop-up window:

As you can see in the last screenshot, I've pushed the "Select a package" button and selected vdist_2.1.0:amd64.deb. After that you have to select your target distribution in the given combo box. I'd suggest selecting a generally compatible distribution to widen your audience. For example, I develop on a Linux Mint box, and although Linux Mint is present in the combo box I prefer to select Ubuntu Focal as a wider equivalent (being aware that my Linux Mint Uma is based on Ubuntu Focal). After selecting the package and the target distribution, the "upload" button will be enabled. The upload will start when you push that button. You will be informed that the upload has ended with a green boxed "Upload Successful!".

If you prefer the console you can use the PackageCloud console application. That application is developed in Ruby, so you have to be sure you have Ruby installed:

dante@Camelot:~/$ sudo apt install ruby ruby-dev g++
[sudo] password for dante:
[...]

dante@Camelot:~$

The Ruby package gives you the "gem" command to install the PackageCloud application:

dante@Camelot:~/$ sudo gem install package_cloud
Building native extensions. This could take a while...
Successfully installed unf_ext-0.0.8
Successfully installed unf-0.1.4
Successfully installed domain_name-0.5.20190701
Successfully installed http-cookie-1.0.4
Successfully installed mime-types-data-3.2021.0901
Successfully installed mime-types-3.3.1
Successfully installed netrc-0.11.0
Successfully installed rest-client-2.1.0
Successfully installed json_pure-1.8.1
Building native extensions. This could take a while...
Successfully installed rainbow-2.2.2
Successfully installed package_cloud-0.3.08
Parsing documentation for unf_ext-0.0.8
Installing ri documentation for unf_ext-0.0.8
Parsing documentation for unf-0.1.4
Installing ri documentation for unf-0.1.4
Parsing documentation for domain_name-0.5.20190701
Installing ri documentation for domain_name-0.5.20190701
Parsing documentation for http-cookie-1.0.4
Installing ri documentation for http-cookie-1.0.4
Parsing documentation for mime-types-data-3.2021.0901
Installing ri documentation for mime-types-data-3.2021.0901
Parsing documentation for mime-types-3.3.1
Installing ri documentation for mime-types-3.3.1
Parsing documentation for netrc-0.11.0
Installing ri documentation for netrc-0.11.0
Parsing documentation for rest-client-2.1.0
Installing ri documentation for rest-client-2.1.0
Parsing documentation for json_pure-1.8.1
Installing ri documentation for json_pure-1.8.1
Parsing documentation for rainbow-2.2.2
Installing ri documentation for rainbow-2.2.2
Parsing documentation for package_cloud-0.3.08
Installing ri documentation for package_cloud-0.3.08
Done installing documentation for unf_ext, unf, domain_name, http-cookie, mime-types-data, mime-types, netrc, rest-client, json_pure, rainbow, package_cloud after 8 seconds
11 gems installed


dante@Camelot:~$

That package_cloud command lets you do many things, even create repositories from the console, but let's focus on package uploading. To upload a package to an existing repository just use the "push" verb:

dante@Camelot:~/$ package_cloud push dante-signal31/vdist/ubuntu/focal vdist_2.2.0post1_amd64.deb 
Email:
dante.signal31@gmail.com
Password:

/var/lib/gems/2.7.0/gems/json_pure-1.8.1/lib/json/common.rb:155: warning: Using the last argument as keyword parameters is deprecated
Got your token. Writing a config file to /home/dante/.packagecloud... success!
/var/lib/gems/2.7.0/gems/json_pure-1.8.1/lib/json/common.rb:155: warning: Using the last argument as keyword parameters is deprecated
Looking for repository at dante-signal31/vdist... /var/lib/gems/2.7.0/gems/json_pure-1.8.1/lib/json/common.rb:155: warning: Using the last argument as keyword parameters is deprecated
success!
Pushing vdist_2.2.0post1_amd64.deb... success!

dante@Camelot:~$

The URL you pass to "package_cloud push" is always of the form <username>/<repository>/<distribution>/<distribution version>.

After that you can see the new package registered at the web interface:

Installing packages

Ok, so far you know everything you need from the publisher side, but now you need to learn what your users have to do to install your packages.

The thing couldn't be simpler. Every repository has a button in its packages section called "Quick install instructions for:". Push that button and you will get a pop-up window like this:


Just copy the given command text (or use the copy button) and make your users paste that command in their consoles (i.e. include that command in your installation instructions documentation):

dante@Camelot:~/$ curl -s https://packagecloud.io/install/repositories/dante-signal31/vdist/script.deb.sh | sudo bash
[sudo] password for dante:
Detected operating system as LinuxMint/uma.
Checking for curl...
Detected curl...
Checking for gpg...
Detected gpg...
Running apt-get update... done.
Installing apt-transport-https... done.
Installing /etc/apt/sources.list.d/dante-signal31_vdist.list...done.
Importing packagecloud gpg key... done.
Running apt-get update... done.

The repository is setup! You can now install packages.

dante@Camelot:~$

That command registered your PackageCloud repository as one of the system's authorised package sources. Theoretically your user could now do a "sudo apt update" and get your package listed, but here comes the only gotcha of this process. Recall when I said that I develop on Linux Mint but I set the repository to Ubuntu/Focal? The point is that the last command detected my system and set the source as if it were for Linux Mint:

dante@Camelot:~/$ cat /etc/apt/sources.list.d/dante-signal31_vdist.list 
# this file was generated by packagecloud.io for
# the repository at https://packagecloud.io/dante-signal31/vdist

deb https://packagecloud.io/dante-signal31/vdist/linuxmint/ uma main
deb-src https://packagecloud.io/dante-signal31/vdist/linuxmint/ uma main


dante@Camelot:~$

I've marked the incorrect path in red. If you insist on updating apt at this moment you get the following error:

dante@Camelot:~/$ sudo apt update
Hit:1 http://archive.canonical.com/ubuntu focal InRelease
Hit:2 https://download.docker.com/linux/ubuntu focal InRelease
Hit:3 http://archive.ubuntu.com/ubuntu focal InRelease
Get:4 http://archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]
Ign:5 http://packages.linuxmint.com uma InRelease
Get:6 http://security.ubuntu.com/ubuntu focal-security InRelease [114 kB]
Get:7 http://archive.ubuntu.com/ubuntu focal-backports InRelease [101 kB]
Hit:8 http://packages.linuxmint.com uma Release
Ign:11 https://packagecloud.io/dante-signal31/vdist/linuxmint uma InRelease
Hit:10 https://packagecloud.io/dante-signal31/cifra-rust/ubuntu focal InRelease
Err:12 https://packagecloud.io/dante-signal31/vdist/linuxmint uma Release
404 Not Found [IP: 52.52.239.191 443]
Reading package lists... Done
E: The repository 'https://packagecloud.io/dante-signal31/vdist/linuxmint uma Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.


dante@Camelot:~$

So, in my case I have to correct it manually to leave it this way:

dante@Camelot:~/$ cat /etc/apt/sources.list.d/dante-signal31_vdist.list 
# this file was generated by packagecloud.io for
# the repository at https://packagecloud.io/dante-signal31/vdist

deb https://packagecloud.io/dante-signal31/vdist/ubuntu/ focal main
deb-src https://packagecloud.io/dante-signal31/vdist/ubuntu/ focal main

dante@Camelot:~$

Now "apt update" will work and you'll find our package:

dante@Camelot:~/$ sudo apt update
Hit:1 https://download.docker.com/linux/ubuntu focal InRelease
Hit:2 http://archive.ubuntu.com/ubuntu focal InRelease
Hit:3 http://archive.canonical.com/ubuntu focal InRelease
Get:4 http://archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]
Ign:5 http://packages.linuxmint.com uma InRelease
Get:6 http://security.ubuntu.com/ubuntu focal-security InRelease [114 kB]
Get:7 http://archive.ubuntu.com/ubuntu focal-backports InRelease [101 kB]
Hit:8 http://packages.linuxmint.com uma Release
Hit:10 https://packagecloud.io/dante-signal31/cifra-rust/ubuntu focal InRelease
Get:11 https://packagecloud.io/dante-signal31/vdist/ubuntu focal InRelease [24,4 kB]
Get:12 https://packagecloud.io/dante-signal31/vdist/ubuntu focal/main amd64 Packages [963 B]
Fetched 353 kB in 3s (109 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
All packages are up to date.
dante@Camelot:~/Downloads$ sudo apt show vdist
Package: vdist
Version: 2.2.0post1
Priority: extra
Section: net
Maintainer: dante.signal31@gmail.com
Installed-Size: 214 MB
Depends: libssl1.0.0, docker-ce
Homepage: https://github.com/dante-signal31/vdist
Download-Size: 59,9 MB
APT-Sources: https://packagecloud.io/dante-signal31/vdist/ubuntu focal/main amd64 Packages
Description: vdist (Virtualenv Distribute) is a tool that lets you build OS packages from your Python applications, while aiming to build an isolated environment for your Python project by utilizing virtualenv. This means that your application will not depend on OS provided packages of Python modules, including their versions.

N: There is 1 additional record. Please use the '-a' switch to see it

dante@Camelot:~$

Obviously, if your user's system matches your repository target there would be nothing to fix, but chances are that your users run derivative distributions, so they'll need to apply this fix; make sure to include it in your documentation.

At this point, your users will be able to install your package as any other:

dante@Camelot:~/$ sudo apt install vdist
[...]

dante@Camelot:~$

Conclusion

PackageCloud makes deploying your packages extremely easy. Compared with Bintray or Artifactory, its setup is a charm. I have to check how things go in the long term, but at first glance it seems a promising service.


How to package Rust applications - DEB packages


The Rust language itself is harsh. Your first contact with the compiler and the borrow checker tends to be traumatic until you realize they are actually there to protect you against yourself. Once you understand that, you begin to love the language.

But everything else apart from the language is friendly, really comfortable I'd say. With cargo, compiling, testing, documenting, profiling and even publishing to crates.io (the PyPI of Rust) is a charm. Packaging is no exception, as it is integrated with cargo, assuming some configuration we're going to explain here.

To package my Rust applications into debian packages, I use cargo-deb. To install it just type:

dante@Camelot:~/Projects/cifra-rust/$ cargo install cargo-deb
Updating crates.io index
Downloaded cargo-deb v1.32.0
Downloaded 1 crate (63.2 KB) in 0.36s
Installing cargo-deb v1.32.0
Downloaded crc v1.8.1
Downloaded build_const v0.2.2
[...]
Compiling crossbeam-deque v0.8.1
Compiling xz2 v0.1.6
Compiling toml v0.5.8
Compiling cargo_toml v0.10.1
Finished release [optimized] target(s) in 53.21s
Installing /home/dante/.cargo/bin/cargo-deb
Installed package `cargo-deb v1.32.0` (executable `cargo-deb`)


dante@Camelot:~$

Once that's done, you can start packaging simple applications. By default cargo deb obtains basic information from your Cargo.toml file. That way it loads the following fields:

  • name
  • version
  • license
  • license-file
  • description
  • readme
  • homepage
  • repository

But it seldom happens that your application has no dependencies at all. To configure more advanced use cases, create a [package.metadata.deb] section in your Cargo.toml. In that section you can configure the following fields:

  • maintainer
  • copyright
  • changelog
  • depends
  • recommends
  • enhances
  • conflicts
  • breaks
  • replaces
  • provides
  • extended-description
  • extended-description-file
  • section
  • priority
  • assets
  • maintainer-scripts

As a working example of this you can read this Cargo.toml version of my application Cifra.

There you can read the general section from which cargo deb loads its basic information:
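
A sketch of such a general section (field values are illustrative, not the literal Cifra ones):

[package]
name = "cifra"
version = "0.9.1"
authors = ["dante.signal31@gmail.com"]
edition = "2018"
description = "Console command to crypt and decrypt texts using classic methods."
homepage = "https://github.com/dante-signal31/cifra-rust"
repository = "https://github.com/dante-signal31/cifra-rust"
readme = "README.md"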

 

 

Cargo has great documentation where you can find every section and tag explained.

Be aware that every file path you include in Cargo.toml is relative to the Cargo.toml file.

The specific section for cargo-deb does not need to be long to get a working package:
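
A sketch of that section (values are illustrative; section and priority match the apt output shown later):

[package.metadata.deb]
maintainer = "dante.signal31@gmail.com"
copyright = "2021, dante.signal31@gmail.com"
extended-description = "Console command to crypt and decrypt texts using classic methods."
section = "net"
priority = "extra"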


 

The tags for this section are documented at the cargo-deb homepage.

The section and priority tags are used to classify your application in the Debian hierarchy. Although I've set them, I think they are rather useless here because official Debian repositories have higher requirements than cargo-deb can meet at the moment, so any debian package produced with cargo-deb will end up in a personal repository where the Debian hierarchy for applications is not present.

Actually, the most important tag is assets. That tag lets you set which files should be included in the package and where they should be placed at installation. The format of that tag's contents is straightforward: it is a list of tuples of three elements (see the sketch after this list):

  • Relative path to the file to be included in the package: that path is relative to the Cargo.toml location.
  • Absolute path where that file will be placed on the user's computer.
  • Permissions for that file on the user's computer.
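
A sketch of an assets entry following that format (paths and permissions are illustrative):

assets = [
    # (relative source path, absolute destination path, permissions)
    ["target/release/cifra", "/usr/bin/cifra", "755"],
    ["README.md", "/usr/share/doc/cifra/README.md", "644"],
]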

I should have included a "depends" tag to add my package dependencies. Cifra depends on SQLite3, and that is not a Rust crate but a system package, so it is a dependency of the Cifra debian package. If you want to use the "depends" tag you must use the debian dependency format, but actually it is not necessary, because cargo-deb can calculate your dependencies automatically if you don't use the "depends" tag. It does it by running ldd against your compiled artifact and then searching with dpkg which system packages provide the libraries detected by ldd.

Once you have your cargo-deb configuration in your Cargo.toml, building your debian package is as simple as:

dante@Camelot:~/Projects/cifra-rust/$ cargo deb
[...]
Finished release [optimized] target(s) in 0.17s
/home/dante/Projects/cifra-rust/target/debian/cifra_0.9.1_amd64.deb

dante@Camelot:~$

As you can see in the output, you will find your generated package in a new folder inside your project called target/debian/.

Cargo deb is a wonderful tool whose only downside is not being able to meet the Debian packaging policy needed to build packages suitable for inclusion in official repositories.