23 October 2021

How to use PackageCloud to distribute your packages


Recently I wrote an article explaining how to set up a JFrog Artifactory account to host Debian and RPM repositories. 

Since then, my interest in Artifactory has weakened because they have a policy of continuous activity to keep your account alive. If you have periods with no package uploads or downloads, they suspend your account and you have to reactivate it manually. That is extremely unpleasant and happens frequently (I've been receiving suspensions every few weeks) when you are, like me, a hobbyist developer working in his spare time who can't keep up a continuous pace in his projects. That, and the extremely complex setup needed to run a package repository, made me look for another option.

Searching the web I found PackageCloud, and so far it happens to be an appealing alternative to Artifactory. PackageCloud has a free tier with 2GB of storage and 10GB of monthly transfer. For a hobbyist like me, I think that is enough. 

Creating a repository

Once you register at PackageCloud you get access to an extremely clean dashboard. Compared to Artifactory, everything is simple and easy. At the top-right corner of the "Home" page there is a big "Create a repository" button. You can use it to create a repository for every application you have. 

A wonderful feature of PackageCloud is that a single repository can host many package types simultaneously. So you don't have to create a repository for every package type you want to host for your application; instead you have a single repository per application, and you upload there the deb, rpm, gem, etc. packages you build. 

Uploading packages

While you use PackageCloud you realize its developers have made a great effort to guide you at every step. Once you create a new repository you are given immediate guidance about how to upload packages from the console (although you also have a nice blue button to upload them through the web interface if you like):

As a first approach we will try the web interface to upload a package, and afterwards we will test the console approach.

If you select your recently created repository at the Home page you will enter the same screen I posted before. To use the web interface to upload packages, push the "Upload a package" button. That button will trigger the following pop-up window:

As you can see in the last screenshot, I've pushed the "Select a package" button and selected vdist_2.1.0_amd64.deb. Next you have to select your target distribution in the given combo box. I'd suggest selecting a broadly compatible distribution to widen your audience. For example, I develop on a Linux Mint box, and although Linux Mint is present in the combo box I prefer to select Ubuntu Focal as a wider equivalent (being aware that my Linux Mint Uma is based on Ubuntu Focal). After selecting the package and target distribution, the "upload" button will be enabled. The upload starts when you push that button, and you will be informed that it has ended with a green-boxed "Upload Successful!".

If you prefer the console you can use the PackageCloud console application. That application is developed in Ruby, so you have to make sure you have Ruby installed:

dante@Camelot:~/$ sudo apt install ruby ruby-dev g++
[sudo] password for dante:
[...]

dante@Camelot:~$

The Ruby package gives you the gem command to install the PackageCloud application:

dante@Camelot:~/$ sudo gem install package_cloud
Building native extensions. This could take a while...
Successfully installed unf_ext-0.0.8
Successfully installed unf-0.1.4
Successfully installed domain_name-0.5.20190701
Successfully installed http-cookie-1.0.4
Successfully installed mime-types-data-3.2021.0901
Successfully installed mime-types-3.3.1
Successfully installed netrc-0.11.0
Successfully installed rest-client-2.1.0
Successfully installed json_pure-1.8.1
Building native extensions. This could take a while...
Successfully installed rainbow-2.2.2
Successfully installed package_cloud-0.3.08
Parsing documentation for unf_ext-0.0.8
Installing ri documentation for unf_ext-0.0.8
Parsing documentation for unf-0.1.4
Installing ri documentation for unf-0.1.4
Parsing documentation for domain_name-0.5.20190701
Installing ri documentation for domain_name-0.5.20190701
Parsing documentation for http-cookie-1.0.4
Installing ri documentation for http-cookie-1.0.4
Parsing documentation for mime-types-data-3.2021.0901
Installing ri documentation for mime-types-data-3.2021.0901
Parsing documentation for mime-types-3.3.1
Installing ri documentation for mime-types-3.3.1
Parsing documentation for netrc-0.11.0
Installing ri documentation for netrc-0.11.0
Parsing documentation for rest-client-2.1.0
Installing ri documentation for rest-client-2.1.0
Parsing documentation for json_pure-1.8.1
Installing ri documentation for json_pure-1.8.1
Parsing documentation for rainbow-2.2.2
Installing ri documentation for rainbow-2.2.2
Parsing documentation for package_cloud-0.3.08
Installing ri documentation for package_cloud-0.3.08
Done installing documentation for unf_ext, unf, domain_name, http-cookie, mime-types-data, mime-types, netrc, rest-client, json_pure, rainbow, package_cloud after 8 seconds
11 gems installed


dante@Camelot:~$

The package_cloud command lets you do many things, even create repositories from the console, but let's focus on package uploading. To upload a package to an existing repository just use the push verb:

dante@Camelot:~/$ package_cloud push dante-signal31/vdist/ubuntu/focal vdist_2.2.0post1_amd64.deb 
Email:
dante.signal31@gmail.com
Password:

/var/lib/gems/2.7.0/gems/json_pure-1.8.1/lib/json/common.rb:155: warning: Using the last argument as keyword parameters is deprecated
Got your token. Writing a config file to /home/dante/.packagecloud... success!
/var/lib/gems/2.7.0/gems/json_pure-1.8.1/lib/json/common.rb:155: warning: Using the last argument as keyword parameters is deprecated
Looking for repository at dante-signal31/vdist... /var/lib/gems/2.7.0/gems/json_pure-1.8.1/lib/json/common.rb:155: warning: Using the last argument as keyword parameters is deprecated
success!
Pushing vdist_2.2.0post1_amd64.deb... success!

dante@Camelot:~$

The URL you pass to package_cloud push is always of the form <username>/<repository>/<distribution>/<distribution version>.
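
Since a single repository can host several package types, you can push an RPM build of the same application to the very same repository just by changing the distribution part of that path. For instance (the RPM filename here is illustrative; packagecloud uses "el" paths for RHEL/CentOS-like distributions):

dante@Camelot:~/$ package_cloud push dante-signal31/vdist/el/8 vdist-2.2.0-1.x86_64.rpm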

After that, you can see the new package registered in the web interface.

Installing packages

OK, so far you know everything you need from the publisher side, but now you need to learn what your users have to do to install your packages.

It could not be simpler. Every repository has a button in its packages section called "Quick install instructions for:". Push it and you will get a pop-up window like this:


Just copy the given command (or use the copy button) and have your users paste it in their consoles (i.e. include that command in your installation documentation):

dante@Camelot:~/$ curl -s https://packagecloud.io/install/repositories/dante-signal31/vdist/script.deb.sh | sudo bash
[sudo] password for dante:
Detected operating system as LinuxMint/uma.
Checking for curl...
Detected curl...
Checking for gpg...
Detected gpg...
Running apt-get update... done.
Installing apt-transport-https... done.
Installing /etc/apt/sources.list.d/dante-signal31_vdist.list...done.
Importing packagecloud gpg key... done.
Running apt-get update... done.

The repository is setup! You can now install packages.

dante@Camelot:~$

That command registers your PackageCloud repository as one of the system's authorized package sources. In theory your user could now do a "sudo apt update" and get your package listed, but here comes the only gotcha of this process. Recall that I develop on Linux Mint but set the repository to Ubuntu/focal? The point is that the last command detected my system and configured the source as if it were for Linux Mint:

dante@Camelot:~/$ cat /etc/apt/sources.list.d/dante-signal31_vdist.list 
# this file was generated by packagecloud.io for
# the repository at https://packagecloud.io/dante-signal31/vdist

deb https://packagecloud.io/dante-signal31/vdist/linuxmint/ uma main
deb-src https://packagecloud.io/dante-signal31/vdist/linuxmint/ uma main


dante@Camelot:~$

Note the incorrect path (linuxmint/uma instead of ubuntu/focal). If you insist on updating apt at this point you get the following error:

dante@Camelot:~/$ sudo apt update
Hit:1 http://archive.canonical.com/ubuntu focal InRelease
Hit:2 https://download.docker.com/linux/ubuntu focal InRelease
Hit:3 http://archive.ubuntu.com/ubuntu focal InRelease
Get:4 http://archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]
Ign:5 http://packages.linuxmint.com uma InRelease
Get:6 http://security.ubuntu.com/ubuntu focal-security InRelease [114 kB]
Get:7 http://archive.ubuntu.com/ubuntu focal-backports InRelease [101 kB]
Hit:8 http://packages.linuxmint.com uma Release
Ign:11 https://packagecloud.io/dante-signal31/vdist/linuxmint uma InRelease
Hit:10 https://packagecloud.io/dante-signal31/cifra-rust/ubuntu focal InRelease
Err:12 https://packagecloud.io/dante-signal31/vdist/linuxmint uma Release
404 Not Found [IP: 52.52.239.191 443]
Reading package lists... Done
E: The repository 'https://packagecloud.io/dante-signal31/vdist/linuxmint uma Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.


dante@Camelot:~$

So, in my case, I have to correct it manually to leave it this way:

dante@Camelot:~/$ cat /etc/apt/sources.list.d/dante-signal31_vdist.list 
# this file was generated by packagecloud.io for
# the repository at https://packagecloud.io/dante-signal31/vdist

deb https://packagecloud.io/dante-signal31/vdist/ubuntu/ focal main
deb-src https://packagecloud.io/dante-signal31/vdist/ubuntu/ focal main

dante@Camelot:~$

Now "apt update" will work and you'll find your package:

dante@Camelot:~/$ sudo apt update
Hit:1 https://download.docker.com/linux/ubuntu focal InRelease
Hit:2 http://archive.ubuntu.com/ubuntu focal InRelease
Hit:3 http://archive.canonical.com/ubuntu focal InRelease
Get:4 http://archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]
Ign:5 http://packages.linuxmint.com uma InRelease
Get:6 http://security.ubuntu.com/ubuntu focal-security InRelease [114 kB]
Get:7 http://archive.ubuntu.com/ubuntu focal-backports InRelease [101 kB]
Hit:8 http://packages.linuxmint.com uma Release
Hit:10 https://packagecloud.io/dante-signal31/cifra-rust/ubuntu focal InRelease
Get:11 https://packagecloud.io/dante-signal31/vdist/ubuntu focal InRelease [24,4 kB]
Get:12 https://packagecloud.io/dante-signal31/vdist/ubuntu focal/main amd64 Packages [963 B]
Fetched 353 kB in 3s (109 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
All packages are up to date.
dante@Camelot:~/Downloads$ sudo apt show vdist
Package: vdist
Version: 2.2.0post1
Priority: extra
Section: net
Maintainer: dante.signal31@gmail.com
Installed-Size: 214 MB
Depends: libssl1.0.0, docker-ce
Homepage: https://github.com/dante-signal31/vdist
Download-Size: 59,9 MB
APT-Sources: https://packagecloud.io/dante-signal31/vdist/ubuntu focal/main amd64 Packages
Description: vdist (Virtualenv Distribute) is a tool that lets you build OS packages from your Python applications, while aiming to build an isolated environment for your Python project by utilizing virtualenv. This means that your application will not depend on OS provided packages of Python modules, including their versions.

N: There is 1 additional record. Please use the '-a' switch to see it

dante@Camelot:~$

Obviously, if your users' systems match your repository target there is nothing to fix, but chances are your users run derivative distributions, so they'll need to apply this fix. Make sure to include it in your documentation.
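
If you want to spare them the manual editing, a one-liner like this can go straight into your installation docs (a sketch using this article's repository and paths; adapt the file name and distribution pair to yours):

dante@Camelot:~/$ sudo sed -i 's|/linuxmint/ uma|/ubuntu/ focal|g' /etc/apt/sources.list.d/dante-signal31_vdist.list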

At this point, your users will be able to install your package like any other:

dante@Camelot:~/$ sudo apt install vdist
[...]

dante@Camelot:~$

Conclusion

PackageCloud makes deploying your packages extremely easy. Compared with Bintray or Artifactory, its setup is a charm. I still have to check how things go in the long term, but at first glance it seems a promising service.


How to package Rust applications - DEB packages


The Rust language itself is harsh. Your first contact with the compiler and the borrow checker tends to be traumatic, until you realize they are actually there to protect you from yourself. Once you understand that, you begin to love the language.

But everything else apart from the language is kind, really comfortable I'd say. With cargo, compiling, testing, documenting, profiling and even publishing to crates.io (the PyPI of Rust) is a charm. Packaging is no exception, as it is integrated with cargo, given some configuration we're going to explain here.

To package my Rust applications into Debian packages, I use cargo-deb. To install it just type:

dante@Camelot:~/Projects/cifra-rust/$ cargo install cargo-deb
Updating crates.io index
Downloaded cargo-deb v1.32.0
Downloaded 1 crate (63.2 KB) in 0.36s
Installing cargo-deb v1.32.0
Downloaded crc v1.8.1
Downloaded build_const v0.2.2
[...]
Compiling crossbeam-deque v0.8.1
Compiling xz2 v0.1.6
Compiling toml v0.5.8
Compiling cargo_toml v0.10.1
Finished release [optimized] target(s) in 53.21s
Installing /home/dante/.cargo/bin/cargo-deb
Installed package `cargo-deb v1.32.0` (executable `cargo-deb`)


dante@Camelot:~$

Once that is done, you can start packaging simple applications. By default, cargo deb obtains basic information from your Cargo.toml file. That way it loads the following fields:

  • name
  • version
  • license
  • license-file
  • description
  • readme
  • homepage
  • repository

But it seldom happens that your application has no dependencies at all. To configure more advanced use cases, create a [package.metadata.deb] section in your Cargo.toml. In that section you can configure the following fields:

  • maintainer
  • copyright
  • changelog
  • depends
  • recommends
  • enhances
  • conflicts
  • breaks
  • replaces
  • provides
  • extended-description
  • extended-description-file
  • section
  • priority
  • assets
  • maintainer-scripts

As a working example of all this, you can read this Cargo.toml from my application Cifra.

There you can read the general section from which cargo deb loads its basic information.

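It looks something like this (a minimal sketch with illustrative values, not copied verbatim from Cifra's repository):

[package]
name = "cifra"
version = "0.9.1"
authors = ["dante-signal31 <dante.signal31@gmail.com>"]
edition = "2018"
# License, description and URLs below are illustrative placeholders.
license = "BSD-3-Clause"
description = "Console command to crypt and decrypt texts using classic methods."
readme = "README.md"
homepage = "https://github.com/dante-signal31/cifra-rust"
repository = "https://github.com/dante-signal31/cifra-rust"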

Cargo has great documentation where you can find every section and tag explained.

Be aware that every file path you include in Cargo.toml is relative to the Cargo.toml file.

The cargo-deb specific section does not need to be long to get a working package.

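A sketch of what such a [package.metadata.deb] section could look like (all values illustrative, following the asset tuple format explained below):

[package.metadata.deb]
maintainer = "dante-signal31 <dante.signal31@gmail.com>"
copyright = "2021, dante-signal31"
extended-description = "Console command to crypt and decrypt texts using classic methods."
section = "net"
priority = "extra"
assets = [
    # [source path relative to Cargo.toml, destination path, permissions]
    ["target/release/cifra", "usr/bin/", "755"],
    ["README.md", "usr/share/doc/cifra/README", "644"],
]
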
The tags for this section are documented at the cargo-deb homepage.

The section and priority tags are used to classify your application in the Debian hierarchy. Although I've set them, I think they are rather useless, because official Debian repositories have higher requirements than cargo-deb can meet at the moment, so any Debian package produced with cargo-deb will end up in a personal repository where the Debian application hierarchy is not present.

Actually, the most important tag is assets. That tag lets you set which files should be included in the package and where they should be placed at installation time. The format of its contents is straightforward: a list of tuples of three elements:

  • Relative path to the file to be included in the package: that path is relative to Cargo.toml's location.
  • Absolute path where that file will be placed on the user's computer.
  • Permissions for that file on the user's computer.

I could have included a "depends" tag to add my package's dependencies. Cifra depends on SQLite3, and that is not a Rust crate but a system package, so it is a dependency of the Cifra Debian package. If you use the "depends" tag you must follow the Debian dependency format, but it is actually not necessary, because cargo-deb can calculate your dependencies automatically if you omit that tag. It does so by running ldd against your compiled artifact and then searching with dpkg which system packages provide the libraries ldd detected.
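
You can reproduce that detection logic manually with a couple of commands (a sketch; the library name is just an example):

ldd target/release/cifra
dpkg -S libsqlite3.so.0

The first command lists the shared libraries your binary links against, and the second one tells you which installed package ships a given library file.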

Once you have your cargo-deb configuration in your Cargo.toml, building your Debian package is as simple as:

dante@Camelot:~/Projects/cifra-rust/$ cargo deb
[...]
Finished release [optimized] target(s) in 0.17s
/home/dante/Projects/cifra-rust/target/debian/cifra_0.9.1_amd64.deb

dante@Camelot:~$

As you can see in the output, you will find your generated package in a new folder inside your project called target/debian/.
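
A quick way to test the result locally, before uploading it anywhere, is installing it with dpkg (the path is taken from the output above; note that dpkg -i does not resolve missing dependencies by itself):

dante@Camelot:~/Projects/cifra-rust/$ sudo dpkg -i target/debian/cifra_0.9.1_amd64.deb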

Cargo deb is a wonderful tool whose only downside is not being able to meet the Debian packaging policy, so the packages it builds are not suitable for inclusion in the official repositories.

17 October 2021

How to write manpages with Markdown and Pandoc


No decent console application is released without a man page documenting how to use it. Man page internals are rather arcane but, being a 1971 file format, it has held up quite well.

Nowadays there are two standard ways to learn how to use a console command (Google apart ;-) ): typing the application command followed by "--help" to get a quick glance at its usage, or typing "man" followed by the application name to get detailed usage information. 

To implement the "--help" approach in your application you can parse and handle "--help" manually, or better, use an argument parsing library like Python's argparse.

The "man" approach requires you to write a man page for your application.

The standard way to produce man pages is using troff formatting commands. Linux has its own troff implementation called groff. To get a feel for the standard man page format, type the following source into a file called, for instance, corrupt.1:

.TH CORRUPT 1
.SH NAME
corrupt \- modify files by randomly changing bits
.SH SYNOPSIS
.B corrupt
[\fB\-n\fR \fIBITS\fR] [\fB\-\-bits\fR \fIBITS\fR]
.IR file ...
.SH DESCRIPTION
.B corrupt
modifies files by toggling a randomly chosen bit.
.SH OPTIONS
.TP
.BR \-n ", " \-\-bits =\fIBITS\fR
Set the number of bits to modify. Default is one bit.

Once saved, that file can be displayed by man. Assuming you are in the same folder as corrupt.1, you can type:

dante@Camelot:~/$ man -l corrupt.1

Output is: 

CORRUPT(1)                                                 General Commands Manual 

NAME
corrupt - modify files by randomly changing bits

SYNOPSIS
corrupt [-n BITS] [--bits BITS] file...

DESCRIPTION
corrupt modifies files by toggling a randomly chosen bit.

OPTIONS
-n, --bits=BITS
Set the number of bits to modify. Default is one bit.

CORRUPT(1)

You can opt to write your application's man pages directly in this format, following, for instance, this small cheat sheet.

Nevertheless, the troff/groff format is rather cumbersome, and I feel that keeping two sources of usage documentation (your README.md and your man page) is error-prone and a waste of effort. So I follow a different approach: I write and keep updated my README.md, and afterwards I convert it to a man page. Sure, you need to keep a fairly standard format for your README, and it won't be the fanciest one, but at least you avoid typing the same things twice, in two different files, with different formatting tags.

The keystone of that conversion is a tool called Pandoc. That tool is the Swiss army knife of document conversion: you can use it to convert between many document formats like Word (docx), OpenOffice (odt), EPUB... or Markdown (md) and groff man. Pandoc is usually available in your distribution's standard package manager repositories, so on an Ubuntu distribution you just need:

dante@Camelot:~/$ sudo apt install pandoc

Once Pandoc is installed, you have to observe some conventions in your README.md to make the conversion easier. To illustrate this article's explanations, we will use the README.md of my project Cifra as an example.

As you can see, the linked README.md contains GitHub badges at the very beginning, but those are going to be removed by the conversion process we are going to explain later in this article. Everything else is structured to comply with the contents a man page is supposed to have.

Maybe the most surprising line is the first one, as it's not usual to find a line like it in a GitHub README:
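
For Cifra it looks something like this (the header text is illustrative; check the actual README.md):

% CIFRA(1) | General Commands Manual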

Actually, that line is metadata for the man page reader. After the "%" comes the document title (usually the application name), the manual section, a version, a "|" separator and finally a header. The manual section is 1 for user commands, 2 for system calls and 3 for C library functions. Your applications will fit in section 1 99% of the time. I've not included a version for Cifra in that line, but you could. Finally, the header indicates which set of documentation this manual page belongs to.

After that line come the sections usually included in man pages. The ones that should be included at minimum are:

  • Name: The name of the command.
  • Synopsis: A one-liner summarizing command line arguments and options.
  • Description: Describes in detail how to use your application command.

Other sections you can add are:

  • Options: Command line options.
  • Examples: About command usage.
  • Files: Useful if your application includes configuration files.
  • Environment: Here you explain whether your application uses any environment variables.
  • Bugs: Where should detected bugs be reported? You can link to your GitHub issues page here.
  • Authors: Who is the author of this masterpiece of code?
  • See also: References to other man pages.
  • Copyright | License: A good place to include your application's license text.

Once you have decided which sections to include in the man page, you should write them following a format that pandoc can easily convert into a man page with a standard structure. That's why main sections are always marked at indentation level one ("#"). A problem with indentation is that while you can get further subheaders in markdown using the "#" character ("#" for main titles, "##" for subheaders, "###" for sections, and so on), those subheaders are only recognized by Pandoc down to sublevel 2. To get more sublevels I've opted for "pandoc lists", a format you can use in your markdown that is recognized by pandoc afterwards:

(Screenshot: excerpt of Cifra's README.md showing the sections written as pandoc lists.)

In lines 42 and 48 you have what pandoc calls "line blocks". Those are lines beginning with a vertical bar ( | ) followed by a space. The spaces between the vertical bar and the command will be kept by pandoc in the converted man page text.
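
A line block looks something like this in markdown (content illustrative):

| cifra dictionary create MY_DICTIONARY
|    Create a new words dictionary.

Pandoc preserves the line division and the leading spaces of those lines in the output.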

Everything else in that README.md is good old classic markdown.

Let's say we have written our entire README.md and we want to perform the conversion. To do that you can run a script like the following from a temporary folder:

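A minimal sketch of such a script (the badge-stripping pattern, paths and pandoc flags are assumptions over Cifra's layout; the line numbers mentioned below refer to this sketch):

#!/usr/bin/env bash
mkdir -p man
rm -f README.md
rm -f man/cifra.1.gz
cp ../README.md .
sed -i '/^\[!\[/d' README.md
pandoc README.md -s -t man -o man/cifra.1
gzip man/cifra.1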

Lines 3 and 4 clean any files left over from a previous conversion, while line 5 actually copies README.md from the source folder.

Line 6 removes any GitHub badges you could have in your README.md. 

Line 7 is where we call pandoc to perform the conversion.

At that point you already have a file that can be opened with man:

dante@Camelot:~/$ man -l man/cifra.1

But man pages are usually compressed in gzip format, as you can see with your system man pages:

dante@Camelot:~/$ ls /usr/share/man/man1

That's why the script compresses your generated man page at line 8. And although the generated man page is now a gzipped file, it is still readable by man:

dante@Camelot:~/$ man -l man/cifra.1.gz

Although these are a few steps, you can see they are easily scriptable, both as a local script and as a step of your continuous integration workflow.

As you can see, with these easy steps you only need to keep your README.md file updated, because the man page can be generated from it.