
17 December 2021

How to create your own custom Actions for GitHub Actions


In my article about GitHub Actions we reviewed how you can build entire workflows just using premade bricks, called Actions, that you can find at GitHub Marketplace.

That marketplace has many ready-to-use Actions, but chances are that sooner or later you'll need to do something that has no Action at the marketplace. Sure, you can still use your own scripts. In that article I used a custom script (./ci_scripts/get_version.py) in the section "Sharing data between steps and jobs". The problem with that approach is that if you want to do the same task in another project you need to copy your scripts between project repositories and adapt them. You'd better transform your scripts into Actions to make them easily reusable, not only in your own projects but publicly in other people's projects.

Every Action you can find at the marketplace is made in one of the ways I'm going to explain here. Actually, if you enter any Action's marketplace page you will find a link, at the right hand side, to that Action's repository so you can assess it and learn how it works.

There are 3 main methods to build your own GitHub Actions:

  • Composite Actions: They are the simplest and quickest, but a condition to use this approach is that your Action should be based on a self-sufficient script that needs no additional dependency to be installed. It should run only with what a standard linux distribution offers.
  • Docker Actions: If you need any dependency to make your script work then you'll need to follow this way.
  • Javascript Actions: Well... you can write your own Actions with JavaScript, but I don't like that language so I'm not going to include it in this article.
The problem with your Action's dependencies is that they can pollute the workflow environment where your Action is going to be used. Your Action's dependencies can even collide with those of the app being built. That's why we need to encapsulate our Action and its dependencies so they are independent of the environment of the app being built. Of course, this problem does not apply if your Action is intended to set up the workflow environment by installing something. There are Actions, for example, to install and set up Pandoc to be used by the workflow. The problem arises when your Action is intended to do one specific task not related to installing something (for example copying files) and it installs something under the table, as that can compromise the workflow environment. So, the best option if you need to install anything to make your Action work is to install it in a docker container and make your Action script run from inside that container, entirely independent of the workflow environment.

Composite Actions

If your Action just needs a bunch of bash commands or a python script exclusively using its built-in standard library, then a composite Action is your way to go.

As an example of a composite Action, we are going to review how my Action rust-app-version works. That Action looks for a rust Cargo.toml configuration file and reads which version is set there for the rust app. That version string is offered as the Action output and you can use that output in your workflow, for instance, to tag a new release at GitHub. This Action only uses modules available in the standard python distribution. It does not need to install anything on the user's runner. It's true that there is a requirements.txt at the rust-app-version repository, but those are only dependencies for unit testing.

To have your own composite Action you first need a GitHub repository to host it. There you can place the few files really needed for your action.

At the very root of your repository you need a file called "action.yml". This file is really important as it models your Action. Your users should be able to think about your Action as a black box: something where you enter some inputs and you receive some outputs. Those inputs and outputs are defined in action.yml.

If we read the action.yml file at rust-app-version we can see that this Action only needs an input called "cargo_toml_folder", and actually that input is optional, as it receives a value of "." if it is omitted when the Action is called:
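A minimal sketch of how that inputs section can look (the description text here is illustrative; check the rust-app-version repository for the real file):

inputs:
  cargo_toml_folder:
    description: 'Folder where the Cargo.toml file is located.'
    required: false
    default: '.'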



Outputs are somewhat different, as they must refer to the output of a specific step in your Action:
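A sketch of that outputs section, wired to the step id used later in this article:

outputs:
  app_version:
    description: 'Version set in Cargo.toml.'
    value: ${{ steps.get-version.outputs.version }}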


In that section we specify that this Action is going to have just one output called "app_version", and that output is going to be the output called "version" of a step with an id value of "get-version".

Those inputs and outputs define what your Action consumes and offers, i.e. what your Action does. How your Action does it is defined under the "runs:" tag. There you set that your Action is a composite one and you call a sequence of steps. This particular example only has one step, but you can have as many steps as you need:
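A sketch of that runs section (the script path inside the repository is illustrative):

runs:
  using: "composite"
  steps:
    - id: get-version
      name: Get app version
      shell: bash
      run: echo "::set-output name=version::$(python3 ${{ github.action_path }}/rust_app_version/get_version.py ${{ inputs.cargo_toml_folder }})"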



Take note of the line where that step receives its id, "get-version". That id is important to refer to this step from the outputs configuration.

The "run:" line is where your command is executed. I only executed one command. If you needed multiple commands to be executed inside the same step, then you should use a bar after run ("run: |"). With that bar you mark that the next few lines (indented under the "run:" tag) are separate commands to be executed sequentially.

That run command is interesting because of 3 points:
  • It calls a script located at our Action repository. To refer to our Action repository root use the github.action_path context variable. The great thing is that although our script is hosted in its own repository, GitHub runs it so that it can see the repository of the workflow from where it is called. Our script will see the workflow repository files as if it were run from its root.
  • At the end of the line you may see how inputs are used through the inputs context.
  • The weirdest thing about that line is how you set that step's output. You set a bash step output doing an echo "::set-output name=<output_name>::<output_value>". In this case the name is version and its value is what get_version.py prints to console. Be aware that output_name is used to retrieve that output after the step ends, through ${{ steps.<id>.outputs.<output_name> }}, in this case ${{ steps.get-version.outputs.version }}.
Apart from that, you only need to set up your Action metadata. That is done in the first few lines:
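Something along these lines (description and branding values are illustrative):

name: 'Rust app version'
description: 'Reads the version set for a rust app in its Cargo.toml file.'
branding:
  icon: 'package'
  color: 'orange'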


Be aware that "name:" is the name your action will have at GitHub Marketplace. The another parameter, "description:", its the short explanation that will be shown along name in the search results at Markeplace. And "branding:" is only the icon (from Feather icon suite) and color that will represent your action at Markeplace.

With those 24 lines at action.yml and your script at its respective path (here at the rust_app_version/ subfolder), you can use your Action. You just need to push the button that will appear in your repository to publish your Action at the Marketplace. Nevertheless, you'd better read this article to the end because I have some recommendations that may be helpful for you.

Once published, it becomes visible to other GitHub users and a Marketplace page is created for your Action. To use an Action like this you only need to include in your workflow a configuration like this:
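A hedged example of such a workflow step (the version tag and folder name are illustrative):

    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Get Rust app version
        id: rust-version
        uses: dante-signal31/rust-app-version@v1.0.0
        with:
          cargo_toml_folder: rust_app/

      - name: Use that version
        run: echo "Version is ${{ steps.rust-version.outputs.app_version }}"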



Docker actions

If your Action needs to install any dependency, then you should package that Action inside a docker container. That way your Action's dependencies won't mess with your users' workflow dependencies.

As an example of a docker Action, we are going to review how my Action markdown2man works. That Action takes a README.md file and converts it to a man page. Using it you don't have to keep two sources to document your console application usage. Instead, you may document your app usage only in README.md and then convert that file to a man page.

To do that conversion markdown2man needs the Pandoc package installed. But Pandoc has its own dependencies, so installing them on the user's runner may break their workflow. Instead, we are going to install those dependencies in a docker image and run our script from that image. Remember that docker lets you execute scripts from a container that interact with host files.

As with composite Actions, we need to create an action.yml at the Action repository root. There we set our metadata, inputs and outputs like we do with composite Actions. The difference here is that this specific markdown2man Action does not emit any output, so that section is omitted. The "runs:" section is different too:
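A sketch of what that section can look like for a docker Action (the input names here are illustrative, not necessarily the real markdown2man ones):

runs:
  using: "docker"
  image: "Dockerfile"
  args:
    - ${{ inputs.markdown_file }}
    - ${{ inputs.manpage_name }}
    - ${{ inputs.manpage_folder }}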


In that section we specify this Action is a docker one (at "using:"). There are two ways to use a docker image in your Action:
  • Generate a specific image for that Action and store it at the GitHub docker registry. In that case you use the "image: Dockerfile" tag.
  • Use a prebuilt image from the DockerHub registry. To do that you use the "image: docker://<dockerhub_user>/<docker_image>:<tag>" tag.
If the image you are going to build is exclusively intended to be used in GitHub Actions, I would follow the Dockerfile option. Here, with markdown2man, we follow the Dockerfile approach, so a docker image is built any time the Action is run after a Dockerfile update. The generated image is cached at the GitHub registry to be offered more quickly to further Action runs. Remember a Dockerfile is a kind of recipe to "cook" an image, so the commands that file contains are only executed when the image is built ("cooked"). Once built, the only command that is run is the one you set at the entrypoint tag, passing in the arguments set at "docker run".

The "args:" tag has every parameter to be passed to our script at the container. You will probably use your inputs here, to be passed to our script. Be aware that, as happened with composite Actions, here the user repository files are visible to our container.

As you may suspect by now, docker Actions are more involved than composite Actions because of the added complexity of creating the Dockerfile. The Dockerfile for markdown2man is pretty simple. As the markdown2man script is a python one, we make our image derive from the official python docker image for version 3.8:
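Assuming the official Python image, that first line would be:

FROM python:3.8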



Afterwards, we set image metadata:
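For instance (the exact values are illustrative):

LABEL maintainer="dante.signal31@gmail.com"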


To configure your image, for example to install things, you use RUN commands.
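A plausible line to get Pandoc installed inside the image (the exact packages are an assumption):

RUN apt-get update && apt-get install -y pandoc && rm -rf /var/lib/apt/lists/*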


The ENV command generates environment variables to be used in your Dockerfile commands:
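For example (the variable name and value are illustrative):

ENV SCRIPT_PATH=/script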


You use the COPY command to copy your requirements.txt from your repository and include it in your generated image. Your scripts are copied from your Action repository to the container following the same approach:
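A sketch of those COPY lines (file and folder names are illustrative):

COPY requirements.txt /requirements.txt
COPY markdown2man.py $SCRIPT_PATH/markdown2man.py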


After the script files are copied, I like to make them executable and link them from the /usr/bin/ folder to include them in the system path:
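Something like (same illustrative names as above):

RUN chmod +x $SCRIPT_PATH/markdown2man.py && \
    ln -s $SCRIPT_PATH/markdown2man.py /usr/bin/markdown2man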


After that, you set your script as the image entrypoint so this script is run once the image is started, and that script is provided with the arguments you set at the "args:" tag of the action.yml file.
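With the illustrative names used so far, that entrypoint would be:

ENTRYPOINT ["markdown2man"]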



You can try that image on your computer by building it from the Dockerfile and running it as a container:

dante@Camelot:~/$ docker run -ti -v ~/your_project/:/work/ dantesignal31/markdown2man:latest /work/README.md mancifra


dante@Camelot:~$

For local testing you need to mount your project folder as a volume (-v flag) if your script needs to process any file from that repository. The last two arguments in the example (/work/README.md and mancifra) are the arguments that must be passed to the entrypoint.

And that's all. Once you have tested everything you can publish your Action and use it in your workflows:
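A hedged example of such a workflow step (the input names are illustrative; check the action's README for the real ones):

      - name: Generate man page
        uses: dante-signal31/markdown2man@v1.0.0
        with:
          markdown_file: README.md
          manpage_name: cifra
          manpage_section: 2
          manpage_folder: man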


With a call like that, a man file called cifra.2.gz should be created at the man folder. If manpage_folder does not exist then markdown2man creates it for you.


Your Actions are first class code

Although your Action will likely be small sized, you should take care of it as you would with your full-blown apps. Be aware that many people will find your Actions through the Marketplace and use them in their workflows. An error in your Action can break many workflows, so be diligent and test your Action as you would with any other app.

So, with my Actions I follow the same approach as in other projects and I set up a GitHub workflow to run tests against any push to a staging branch. Only once those tests succeed do I merge staging into the main branch and generate a new release for the Action.

Let's use the markdown2man workflow as an example. There you can read that we have two test types:

  • Unit tests: They check the python script markdown2man is based on.

  • Integration tests: They check markdown2man behaviour as a GitHub Action. Although your Action is not published yet, you can use it from a workflow in the same repository (lines 42-48 of that workflow), as sketched below. So, what I do is call the Action from the very same staging branch we are testing, and I use that Action with a markdown file I have ready in the test folder. If a proper man page file is generated, the integration test passes (line 53). Having the chance to test an Action against its own repository is great, as it lets you test your Action as people would use it without needing to publish it.
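For instance, a workflow in the Action's own repository can call it without publishing it, roughly like this (input names are illustrative):

    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Run this very Action from its own repository
        uses: ./
        with:
          markdown_file: test/README.md
          manpage_name: cifra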

In addition to testing it, you should write a README.md for your Action in order to explain in detail how to use it. In that document you should include at least this information:

  • A description of what the action does.
  • Required input and output arguments.
  • Optional input and output arguments.
  • Any secret your action needs.
  • Any environment variable your action uses.
  • An example of how to use your action in a workflow.

And you should also add a LICENSE file explaining the usage terms for your Action.


Conclusion

The strong point of GitHub Actions is the high degree of reusability and sharing it promotes. Every time you find yourself repeating the same bunch of commands, you are encouraged to make an Action with those commands and share it through the Marketplace. That way you get a piece of functionality that is easier to use throughout your workflows than copy-pasting commands, and you contribute to improving the Marketplace so that others can benefit too from that Action.

Thanks to this philosophy GitHub Marketplace has grown to a huge number of Actions, ready to use and to save you from implementing that functionality on your own.

17 October 2021

How to write manpages with Markdown and Pandoc


No decent console application is released without a man page documenting how to use it. Man page internals are rather arcane, but being a 1971 file format it has held up quite well.

Nowadays you have two standard ways to learn how to use a console command (google apart ;-) ): typing the application command followed by "--help" to get a quick glance of application usage, or typing "man" followed by the application name to get detailed information about application usage.

To implement "--help" approach in your application you can manually include "--help" parsing and output or you'd better use an argument parsing library like Python's ArgParse.

The "man" approach needs you to write a man page for your application.

The standard way to produce man pages is using troff formatting commands. Linux has its own troff implementation called groff. To get a feel for the standard man page format, you can type the next source inside a file called, for instance, corrupt.1:

.TH CORRUPT 1
.SH NAME
corrupt \- modify files by randomly changing bits
.SH SYNOPSIS
.B corrupt
[\fB\-n\fR \fIBITS\fR]
[\fB\-\-bits\fR \fIBITS\fR]
.IR file ...
.SH DESCRIPTION
.B corrupt
modifies files by toggling a randomly chosen bit.
.SH OPTIONS
.TP
.BR \-n ", " \-\-bits =\fIBITS\fR
Set the number of bits to modify. Default is one bit.

Once saved, that file can be displayed by man. Assuming you are in the same folder as corrupt.1, you can type:

dante@Camelot:~/$ man -l corrupt.1

Output is: 

CORRUPT(1)                                                 General Commands Manual 

NAME
corrupt - modify files by randomly changing bits

SYNOPSIS
corrupt [-n BITS] [--bits BITS] file...

DESCRIPTION
corrupt modifies files by toggling a randomly chosen bit.

OPTIONS
-n, --bits=BITS
Set the number of bits to modify. Default is one bit.

CORRUPT(1)


 

You can opt to write specific man pages for your application following, for instance, this small cheat sheet.

Nevertheless, the troff/groff format is rather cumbersome, and I feel keeping two sources of usage documentation (your README.md and your man page) is error prone and a waste of effort. So, I follow a different approach: I write and keep updated my README.md and afterwards I convert it to a man page. Sure, you need to keep a quite standard format for your README, and it won't be the fanciest one, but at least you will avoid typing the same things twice, in two different files, with different formatting tags.

The keystone to make that conversion is a tool called Pandoc. That tool is the Swiss army knife of document conversion. You can use it to convert between many document formats like word (docx), openoffice (odt), epub... or markdown (md) and groff man. Pandoc is usually available at your distribution's standard package manager repositories, so on an ubuntu distribution you just need:

dante@Camelot:~/$ sudo apt install pandoc

Once Pandoc is installed, you have to observe some conventions in your README.md to make the conversion easier. To illustrate this article's explanations, we will follow as an example the README.md file of my project Cifra.

As you can see, the linked README.md contains GitHub badges at the very beginning, but those are going to be removed in the conversion process we are going to explain later in this article. Everything else is structured to comply with the contents a man page is supposed to have.

Maybe the most surprising line is the first one. It's not usual to find a line like this in a GitHub README:
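For Cifra it is something along these lines (the exact header text may differ from the real README):

% CIFRA(1) | General Commands Manual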

Actually, that line is metadata for the man page reader. After the "%" it contains the document title (usually the application name), the manual section, a version, a "|" separator and finally a header. The manual section is 1 for user commands, 2 for system calls and 3 for C library functions. Your applications will fit in section 1 99% of the time. I've not included a version for Cifra in that line, but you could have done it. Besides, the header indicates what set of documentation this manual page belongs to.

After that line come the sections usually included in man pages, but the ones that should be included at the very least are:

  • Name: The name of the command.
  • Synopsis: A one-liner summarizing command line arguments and options.
  • Description: Describes in detail how to use your application command.

Other sections you can add are:

  • Options: Command line options.
  • Examples: About command usage.
  • Files: Useful if your application includes configuration files.
  • Environment: Here you explain if your application uses any environment variable.
  • Bugs: Where should detected bugs be reported? There you can link your GitHub issues page.
  • Authors: Who is the author of this masterpiece of code?
  • See also: References to other man pages.
  • Copyright | License: A good place to include your application license text.

Once you have decided which sections to include in your manpage, you should write them following a format easily convertible by pandoc into a manpage with a standard structure. That's why main sections are always marked at level one of indentation ("#"). A problem with indentation is that, while you can get further subheaders in markdown using the "#" character ("#" for main titles, "##" for subheaders, "###" for sections, and so on), those subheaders are only recognized by Pandoc up to sublevel 2. To get more sublevels I've opted for "pandoc lists", a format you can use in your markdown that is recognized afterwards by pandoc:
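One such format is pandoc's definition lists. A hedged sketch of that construct, together with a line block (leading spaces after the bar are preserved), could be:

**subcommand**
:   Description of the subcommand, rendered one level deeper than a "##" subheader allows.

| application subcommand --option VALUE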

 

In lines 42 and 48 of that README.md you have what pandoc calls "line blocks". Those are lines beginning with a vertical bar ( | ) followed by a space. The spaces between the vertical bar and the command will be kept by pandoc in the converted man page text.

Everything else in that README.md is good classic markdown format.

Let's say we have written our entire README.md and we want to perform the conversion. To do that you can use a script like this from a temporary folder:
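A minimal sketch of such a script, matching the line-by-line description that follows (paths, file names and the badge-stripping pattern are assumptions):

#!/usr/bin/env bash
# Run from a temporary folder with a man/ output subfolder.
rm -f man/cifra.1
rm -f man/cifra.1.gz
cp ../README.md .
sed -i '/^\[!\[/d' README.md
pandoc README.md --standalone --to man --output man/cifra.1
gzip man/cifra.1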

 

Lines 3 and 4 clean any file left from a previous conversion, while line 5 actually copies README.md from the source folder.

Line 6 removes every GitHub badge you could have in your README.md. 

Line 7 is where we call pandoc to perform conversion.

At that point you have a file that can be opened with man:

dante@Camelot:~/$ man -l man/cifra.1

But man pages are usually compressed in gzip format, as you can see with your system man pages:

dante@Camelot:~/$ ls /usr/share/man/man1

That's why the script compresses your generated man page at line 8. And although the generated man page is now a gzip-compressed file, it is still readable by man:

dante@Camelot:~/$ man -l man/cifra.1.gz

Although these are several steps, you can see they are easily scriptable, both as a local script and as a step of your continuous integration workflow.

As you can see, with these easy steps you only need to keep your README.md file updated, because the man page can be generated from it.


29 August 2021

How to use Docker containers


Virtualization is all about deception.

With heavy virtualization (e.g. VMware, VirtualBox, Xen) a guest operating system is deceived into thinking it is running on dedicated hardware, while it is actually shared.

With light virtualization (e.g. Docker) an application is deceived into thinking it is using a dedicated operating system kernel, while it is actually shared too. It happens that in Linux everything but the kernel is considered an application, so with Docker you can make multiple linux distributions share the same kernel (the one from the host system). As this is a higher abstraction level of deception than VMware's kind, it consumes fewer resources; that's why it is called light virtualization. It's so light that many applications are distributed in docker packages (called containers), so the application is bundled along with an operating system and its dependencies, to be run all at once in another system without messing with that system's own dependencies.

Sure, there are things light virtualization cannot do, but they are not many. For standard "level-7" application development you won't find any limitation using Docker virtualization.

Installation

In the past you could install docker from your distribution's package repository, and nowadays you may still find a docker package there, but that is no longer the way to go. If you want to install docker on your computer you should ignore those packages available in your standard repositories and use the official Docker repositories.

Docker provides package repositories for many distributions. For instance, here you can find instructions to install docker on an ubuntu distribution. Just be aware that if you are using an Ubuntu derivative, like Linux Mint, you're going to need to customize those instructions to set your version in the apt sources list.

Once you have installed docker in your computer, you can run a Hello World app, bundled in a container, to check everything works:

dante@Camelot:~/$ sudo docker run hello-world
[sudo] password for dante:
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
b8dfde127a29: Pull complete
Digest: sha256:7d91b69e04a9029b99f3585aaaccae2baa80bcf318f4a5d2165a9898cd2dc0a1
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/

For more examples and ideas, visit:
https://docs.docker.com/get-started/

dante@Camelot:~$

Be aware that although you might be able to run docker containers, you may not be able to download and install them without sudo. If you are not comfortable working like that you can add yourself to the docker group:

dante@Camelot:~/$ sudo usermod -aG docker $USER
[sudo] password for dante:

dante@Camelot:~$

You may need to log out and log back in to activate the change. This way you can run docker commands as an unprivileged user.

Usage 

You can make your own custom containers, but that is a topic for another article. In this article we are going to use containers customized by others.

To find available containers, head to Docker Hub and type any application or Linux distribution in its search field. The output will show you many options. If you're looking for a raw linux distribution you'd better use those tagged as "Official image".

Say you want to try an app on an Ubuntu Xenial; then select Ubuntu in the search output and take a look at the "Supported tags..." section. There you can find how the different versions are named for downloading. In our case we would take note of the "xenial" or "16.04" tags.

Now that you know what to download, let's do it with docker pull:

dante@Camelot:~/$ docker pull ubuntu:xenial
xenial: Pulling from library/ubuntu
528184910841: Pull complete
8a9df81d603d: Pull complete
636d9303bf66: Pull complete
672b5bdcef61: Pull complete
Digest: sha256:6a3ac136b6ca623d6a6fa20a7622f098b2fae1ac05f0114386ef439d8ca89a4a
Status: Downloaded newer image for ubuntu:xenial
docker.io/library/ubuntu:xenial


dante@Camelot:~$

What you've done is download what is called an image. An image is a base package with a specific linux distribution and applications. From that base package you can derive your own custom packages or run instances of that base package; those instances are what we call containers.

If you want to check how many images you have locally available just run docker images:

dante@Camelot:~/$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu xenial 38b3fa4640d4 4 weeks ago 135MB
hello-world latest d1165f221234 5 months ago 13.3kB



dante@Camelot:~$

You can start instances (aka containers) from those images using docker run:

dante@Camelot:~/$ docker run --name ubuntu_container_1 ubuntu:xenial
dante@Camelot:~$

Using --name you can assign a specific name to your container to identify it among other containers started from the same image.

The problem with starting containers this way is that they close immediately, as you can see if you check your containers' status using docker ps:

dante@Camelot:~/$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c7baad2f7e56 ubuntu:xenial "/bin/bash" 12 seconds ago Exited (0) 11 seconds ago ubuntu_container_1
3ba89f1f37c6 hello-world "/hello" 9 hours ago Exited (0) 9 hours ago focused_zhukovsky

dante@Camelot:~$

I've used the -a flag to show every container, not only the active ones. That way you can see that ubuntu_container_1 ended its activity almost immediately after starting. That happens because docker containers are designed to run a specific application and close themselves when that application ends. We did not say which application to run in the container, so it just closed.

Before trying anything else let's delete previous container, using docker rm, to start from scratch:

dante@Camelot:~/$ docker rm ubuntu_container_1
ubuntu_container_1

dante@Camelot:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3ba89f1f37c6 hello-world "/hello" 10 hours ago Exited (0) 10 hours ago focused_zhukovsky
dante@Camelot:~$

Now we want to keep our container alive to access its console. One way is this:

dante@Camelot:~/$ docker run -d -ti --name ubuntu_container_1 ubuntu:xenial
a14e6bcac57dd04cc777a4eac787a8465acd5b8d379591976c56de1d0acc2798

dante@Camelot:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a14e6bcac57d ubuntu:xenial "/bin/bash" 8 seconds ago Up 7 seconds ubuntu_container_1
3ba89f1f37c6 hello-world "/hello" 10 hours ago Exited (0) 10 hours ago focused_zhukovsky
dante@Camelot:~$

We've used the -d flag to run the container in the background and -ti to start an interactive shell and keep it open. Doing so, we can see that this time the container stays up. But we are not yet in the container console; to access it we must use docker attach to connect with that container's shell:

dante@Camelot:~$ docker attach ubuntu_container_1
root@a14e6bcac57d:/#

Now you can see that the shell prompt has changed. You can leave the container console using exit, but that stops the container. To leave the container while keeping it active, use Ctrl+p followed by Ctrl+q instead.

You can pause an idle container to save resources and resume it later using docker stop and docker start:

dante@Camelot:~/$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9727742a40bf ubuntu:xenial "/bin/bash" 4 minutes ago Up 4 minutes ubuntu_container_1
3ba89f1f37c6 hello-world "/hello" 10 hours ago Exited (0) 10 hours ago focused_zhukovsky
dante@Camelot:~$ docker stop ubuntu_container_1
ubuntu_container_1
dante@Camelot:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9727742a40bf ubuntu:xenial "/bin/bash" 7 minutes ago Exited (127) 6 seconds ago ubuntu_container_1
3ba89f1f37c6 hello-world "/hello" 10 hours ago Exited (0) 10 hours ago focused_zhukovsky
dante@Camelot:~$ docker start ubuntu_container_1
ubuntu_container_1
dante@Camelot:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9727742a40bf ubuntu:xenial "/bin/bash" 8 minutes ago Up 4 seconds ubuntu_container_1
3ba89f1f37c6 hello-world "/hello" 10 hours ago Exited (0) 10 hours ago focused_zhukovsky
dante@Camelot:~$

Saving your changes

Chances are that you want to export changes made to an existing container so you can start new containers with those changes already applied.

The best way is using dockerfiles, but I'll leave that for a further article. A quick and dirty way is to commit your changes to a new image with docker commit:

dante@Camelot:~/$ docker commit ubuntu_container_1 custom_ubuntu
sha256:e2005a0ec8302f8948958a90e2abc1e8957a15155c6e6bbf7300eb1709d4ae70
dante@Camelot:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
custom_ubuntu latest e2005a0ec830 5 seconds ago 331MB
ubuntu xenial 38b3fa4640d4 4 weeks ago 135MB
ubuntu latest 1318b700e415 4 weeks ago 72.8MB
hello-world latest d1165f221234 5 months ago 13.3kB

dante@Camelot:~$

In this example we created a new image called custom_ubuntu. Using that new image we can create new instances with the changes made so far to ubuntu_container_1.

Be aware that committing changes from a running container pauses it to avoid data corruption. After the commit ends, the container is resumed.


Running services from containers

So far you have a lightweight virtual machine and you have access to its console, but that is not enough, as you'll want to offer services from that container.

Say you have configured an SSH server in your container and you want to access it from your LAN. In that case you need the container ports to be mapped by its host and offered to the LAN.

An important gotcha here is that port mapping should be configured when a container is first started with docker run, using the -p flag:

dante@Camelot:~/$ docker run -d -ti -p 8888:22 --name custom_ubuntu_container custom_ubuntu
5ea2604da8c696af397cd1b1f98d7426dd68182e5edd81a56a6b739849ba1e84

dante@Camelot:~$

Now you can access the container's SSH service through the host's port 8888.


Sharing files with containers

Our containers won't be isolated islands. They may need files from us, or they may have files to give back to us.

One way to do that file sharing is starting containers mounting a host folder as a shared folder:

dante@Camelot:~/$ docker run -d -ti -v $(pwd)/docker_share:/root/shared ubuntu:focal
ee64e7392a77b4d0b35adadad467fe5d6a105b84290945e346f82199116b7c56

dante@Camelot:~$

Here, the host folder docker_share will be accessible from the container at the /root/shared container path. Be aware that you should enter absolute paths. I use $(pwd) as a shortcut for the host's current working folder.

Once your container is started, every file placed at its /root/shared folder will be visible from the host, even after the container is stopped. The other way round, that is placing a file from the host to be seen in the container, is possible but you will need to use sudo:

dante@Camelot:~/$ cp docker.png docker_share/.
cp: cannot create regular file 'docker_share/./docker.png': Permission denied

dante@Camelot:~/$ sudo cp docker.png docker_share/.
[sudo] password for dante:

dante@Camelot:~$

Another way of sharing is using docker's built-in copy command:

dante@Camelot:~/$ docker cp docker.png vigorous_kowalevski:/root/shared 
dante@Camelot:~$

Here, we have copied docker.png host file to vigorous_kowalevski container /root/shared folder. Note that we didn't need sudo to run docker cp.

The other way round is also possible; just change the argument order:

dante@Camelot:~/$ docker cp vigorous_kowalevski:/etc/apt/sources.list sources.list 
dante@Camelot:~$

Here we copied container sources.list to host.


From here

So far you know how to deal with docker containers. Next step is creating your own custom images using dockerfiles and sharing them through Docker Hub. I'm going to explain those topics in a further article.

 

07 May 2020

Linux Firewalls by Michael Rash


Linux Firewalls: Attack Detection and Response with iptables, psad, and fwsnort is a quite interesting book about iptables features that let you configure it as a UTM device through its integration with Snort, specific iptables plugins, log correlation and traffic access control.
 
This book has many real examples and detailed configurations. I could not test those configurations in a lab, so I could not see them running, but I expect these tools and the way they are configured to evolve with time.

In my humble opinion, the tools detailed in this book are useful only for home usage and SOHO environments that can only afford free open source tools. For more demanding corporate environments, I feel these solutions are loosely integrated and difficult to maintain if your network is big and heterogeneous; besides, nowadays there are many commercial tools at a reduced price for almost every firm.

I found more interesting the explanations given by the author for many network attacks. Those explanations are very detailed but easily understandable. Nevertheless, most of those attacks have been in the public domain for a long time, so hardened security engineers won't learn anything new here.

Summarizing: this book is correct, but only really useful for those who are taking their first steps in the security engineering world.

14 June 2017

Packaging Python programs - DEB (and RPM) packages

The native way to distribute Python code is through PyPI, as I explained in a previous article. But to be honest, PyPI has some drawbacks that limit its use to development environments.

The problem with PyPI is that it does not implement proper package management, so pip uninstall doesn't work properly and there's no way to roll back to a previous state. Besides, pip installation procedures often build packages from source, which can be painfully slow for a complete virtualenv.

Deploying Python applications on production environments should be fast and should be done in a manner that always allows a clean uninstall.

Debian has a really solid and stable package management system. It can check dependencies both at installation and uninstallation, and run pre and post installation scripts. That's why it is one of the most used package management systems in the Linux ecosystem.

This article is going to cover some ways to package our Python applications in Debian packages easily installable in Debian/Ubuntu Linux distros.

Debian package format

Although other linux packaging systems rely on binary formats, Debian packages use a simple ar archive to group in a single file a pair of tar archives, compressed with gzip or bzip. One of those two archives contains a folder tree with all the application files, and the other one the package configuration files.
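You can see that structure by listing the members of any .deb with ar (the package name here is just an example, and member extensions may vary with the compression used):

dante@Camelot:~$ ar t some_package.deb
debian-binary
control.tar.gz
data.tar.gz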

Package configuration files are bare text files, so the classic way of creating Debian packages involves only a bit of folder creation, textual editing of some configuration files and, in its simplest form, just a command to be run:

dante@Camelot:~/project-directory$ debuild -us -uc


You can find a good tutorial about the topic here.

The thing is not complex, but it can be tricky and you have to create and edit many text files. It's not hard to see why developers have created so many tools to automate the task. We are going to see some of them, specialised in packaging Python applications.

STDEB

If you are already used to python packaging for PyPI it shouldn't be hard for you to grasp stdeb concepts. If you don't know what I'm talking about then you should read the article I linked at the beginning of this one.

As stdeb calls some Debian/Ubuntu tools under the hood, you have to use one of those distros or any of their derivatives. If you are on a Debian/Ubuntu, you can download stdeb from the standard linux repositories:

dante@Camelot:~$ sudo aptitude search stdeb
p   python-stdeb    - Python to Debian source package conversion utility
p   python3-stdeb   - Python to Debian source package conversion plugins for distutils
dante@Camelot:~$ sudo aptitude install python-stdeb python3-stdeb

But if you want to get a newer version you can also install stdeb from PyPI as a Python package.

The best workflow to use stdeb involves creating the necessary files for a PyPI package; that way stdeb can use your setup.py file to get the info for creating the debian package configuration files.

So, let's suppose you have your application ready to be packaged for PyPI with your setup.py already done. For this example, I've cloned this git repository.

To make stdeb generate a source debian package you just have to do:

dante@Camelot:~/geolocate$ python3 setup.py --command-packages=stdeb.command sdist_dsc

The --command-packages stuff can be cumbersome, but the thing doesn't work without it, so trust me and include it.

After some verbose output you'll realize some folders have been added to your working directory. You have to focus on the one called deb_dist. That folder contains mainly three files: a .dsc file, a .orig.tar.gz one and a .diff.gz. Those three files together are what we call a debian source package.

Inside the deb_dist folder you'll find another one named after your project with the current version appended. That folder contains all the generated data needed to compile a binary debian package, so move there and run the next command:

dante@Camelot:~/geolocate/deb_dist/glocate-1.3-0$ dpkg-buildpackage -rfakeroot -uc -us

The fakeroot thing is just a flag to be able to build a debian package without being logged in as the root user. That is the command recommended in the stdeb documentation, but I've realized that you can use the debuild command seen before:

dante@Camelot:~/geolocate/deb_dist/glocate-1.3-0$ debuild -uc -us

In the end debuild is a wrapper for dpkg-buildpackage that automates some things you should do manually otherwise.

After either of those commands you should find a compiled debian package in your deb_dist folder:

dante@Camelot:~/geolocate/deb_dist/glocate-1.3-0$ ls *.deb
python3-glocate_1.3.0-1_all.deb

Those are the "two step" method, you can reduce it to "just one step" method running next command:

dante@Camelot:~/geolocate$ python3 setup.py --command-packages=stdeb.command bdist_deb

Although both methods end with a .deb package in the deb_dist folder, you should be aware that the generated debian package is architecture dependent (although its package name includes an "_all" tag). That means that if you generate the package on an amd64 Ubuntu box, chances are that you will face problems if you try to install the resulting package on another Ubuntu box with a different architecture. To avoid this problem you have two options:
  • Use virtual machines to compile a debian package in every target debian architecture.
  • Use a PPA repository: Ubuntu offers personal hosting space for packaging projects. You just upload there your source package (the content of the deb_dist folder after running "python3 setup.py --command-packages=stdeb.command sdist_dsc"), and the Ubuntu servers compile it for each Ubuntu target architecture (mainly x86 and amd64). After compilation, the created packages are available in your personal PPA until you remove them or replace them with newer versions.
If your Python applications just use built-in packages then your packaging trip ends here, but if you use additional packages, for instance downloaded from PyPI, chances are that your compiled package doesn't include them properly.

Following our example, if you check geolocate's setup.py you should see that it depends on these additional packages:

install_requires=["geoip2>=2.1.0", "maxminddb>=1.1.1", "requests>=2.5.0", "wget>=2.2"]

Let's see if these dependencies have been included in the generated debian package metadata:

dante@Camelot:~/geolocate/deb_dist$ dpkg -I python3-glocate_1.3.0-1_all.deb
[...]
Depends: python3, python3-requests, python3:any (>= 3.3.2-2~)
[...]


Obviously they have not been included. In fact, our compiling command warned us that the dependency check failed at build time. If we check the build output we'll find this:
[...]
I: dh_python3 pydist:184: Cannot find package that provides geoip2. Please add package that provides it to Build-Depends or add "geoip2 python3-geoip2-fixme" line to debian/py3dist-overrides or add proper dependency to Depends by hand and ignore this info.
I: dh_python3 pydist:184: Cannot find package that provides maxminddb. Please add package that provides it to Build-Depends or add "maxminddb python3-maxminddb-fixme" line to debian/py3dist-overrides or add proper dependency to Depends by hand and ignore this info.
I: dh_python3 pydist:184: Cannot find package that provides wget. Please add package that provides it to Build-Depends or add "wget python3-wget-fixme" line to debian/py3dist-overrides or add proper dependency to Depends by hand and ignore this info.
[...]
The problem is that stdeb didn't identify which linux packages include those PyPI packages. So we have to set them manually.

To manually configure stdeb you have to create a file called stdeb.cfg in the same folder as setup.py. You will usually create a [DEFAULT] section where you'll put your configuration, but you can also create [package_name] sections, where package_name is specified as the name argument to the setup() command.

For instance, if we find out that the geoip2 PyPI library is included in the python3-geoip package of the Ubuntu repository, and the requests PyPI library is included in python3-requests, we could create a stdeb.cfg with this content:
[DEFAULT]
X-Python3-Version: >= 3.4
Depends3: python3-geoip (>=1.3.1), python3-requests
All tags follow the same format they would have if they were inserted in debian/control. For the Depends tag, the format specification is here.

With that configuration, the given dependencies are included in generated debian package:

dante@Camelot:~/geolocate/deb_dist$ dpkg -I python3-glocate_1.3.0-1_all.deb
[...]
Depends: python3, python3-requests, python3:any (>= 3.3.2-2~), python3-geoip (>= 1.3.1)
[...]

What I haven't found out yet is a way to change the architecture tag of generated package so it is no longer generated with "Architecture: all".

At first glance stdeb looks great, and indeed it is, but it also has some serious drawbacks.

The problem is that stdeb limits you to using only libraries and python packages from your standard linux repository. If you develop your application using PyPI libraries, chances are that when you try to find which linux package includes your PyPI library you'll find that those packages contain only older versions than the ones you downloaded from PyPI. Even worse, many PyPI libraries have not been ported to the standard linux repositories, so you won't find any package to match your dependency. For instance, geolocate needs geoip2 (v.2.1.0), which is easily downloadable from PyPI, but only Ubuntu 15.04 has an available package called python3-geoip, and that one comes with version 1.3.2 of geoip. Will geolocate work with the geoip version provided by the python3-geoip package? Probably not. Other geolocate dependencies are even missing from the standard repository, like the PyPI wget python library.
 
It's clear that if you like to use PyPI libraries, stdeb may not be your best option. But if you develop using only libraries available through your standard package manager, then stdeb will probably save your day.

FPM

FPM is a Ruby tool similar to stdeb. Its main advantage is that you can use FPM to create many kinds of package installers, not only Debian ones but currently also RPM packages (for Red Hat distros). And, what is even more interesting, FPM allows package conversions, for example from RPM to DEB.

FPM has no package in the Ubuntu main repository, so you have to download it from Ruby's equivalent of PyPI. To get that, you should first install the Ruby packages:


dante@Camelot:~/geolocate$ sudo aptitude install ruby-dev gcc make

Afterwards you can install FPM from Ruby repositories:


dante@Camelot:~/geolocate$ gem install fpm

Creating a DEB package is pretty simple, just set FPM source as python, its target as deb and give it your package setup.py file path:


dante@Camelot:~/geolocate$ fpm -s python -t deb ./setup.py

FPM takes dependency names from setup.py and prepends them with the prefix you set with the --python-package-name-prefix flag (if not set then the python prefix is used):


dante@Camelot:~/geolocate$ dpkg -I python-glocate_1.3.0_all.deb
[...]
Depends: python-geoip2 (>= 2.1.0), python-maxminddb (>= 1.1.1), python-requests (>= 2.5.0), python-wget (>= 2.2)
[...]

The problem here is similar to stdeb's: those dependencies don't exist in the standard Ubuntu repository. In case the dependencies existed but with different names than those autogenerated by FPM, you could set them manually:


dante@Camelot:~/geolocate$ fpm -s python -t deb --no-auto-depends -d "python3-geoip>=1.3.1, python3-wget" ./setup.py
[...]
dante@Camelot:~/geolocate$ dpkg -I python-glocate_1.3.0_all.deb 
[...]
Depends: python3-geoip>=1.3.1, python3-wget
[...] 


Another useful flag is "-a native". This flag sets the package architecture to that of your system, so it is no longer set to "_all" in your package name.

FPM is a great tool. It allows creating RPM packages and it is very configurable, but in my opinion it has one serious drawback: as happened with stdeb, it is useless if your application imports a library available in PyPI but not in the standard operating system repositories.

DH-VIRTUALENV

Up to this point it should be clear that the main problem when packaging a Python application is ensuring its dependencies are met at installation, because the developer may have used PyPI libraries not available through the Linux standard repositories at the user end.

The guys at Spotify developed a packager to address this problem: dh-virtualenv.

This assumes that if you are using PyPI to develop then you are likely using virtualenvs. So dh-virtualenv includes the entire virtualenv into the package, so you don't have to install the dependencies at the user end.

Nevertheless, in my humble opinion dh-virtualenv has a serious drawback: it is not as clean to use as stdeb or fpm (because you have to manually create a debian folder and a rules file) and you end up using debuild as at the beginning of the article.

VDIST

The main concepts of vdist are similar to those seen in dh-virtualenv, but vdist uses a combination of docker and fpm to create operating system standard packages. This tool lets you build linux packages from your Python applications while aiming to build an isolated environment for your Python project using virtualenv. At first glance vdist may look complex, but its documentation is really clear and helpful, and it is actually quite simple to use and automate.

If your main problem while packaging a python application is to ensure dependencies are present at the user end, vdist solves this by making your application self contained and self sufficient, so it does not depend on OS provided packages of Python modules. This means that packages generated by vdist contain your application, all the python dependencies needed by your application, and a Python interpreter. That python interpreter allows running your application with the interpreter of your choice, not with the one shipped with the OS you're deploying on.

To ensure the host used to build the package keeps its system packages intact, vdist uses docker to create a clean OS image at build time and installs there the needed dependencies before your application is packaged on top of it. Thanks to this, your build machines will always be reverted to their original state. To load your application into the docker image, vdist downloads the application source code from a git repository, so having your application in Bitbucket or Github is a good idea. The downloaded source code is placed in a virtualenv created inside your docker image. PyPI dependencies will be installed inside that virtualenv.

The main dependency for using vdist is having docker installed and its daemon running. To install it on Ubuntu you just need to do the following:


dante@Camelot:~/geolocate$ sudo aptitude install docker.io python-docker python3-docker


After installing docker, remember to add your user to docker group:


dante@Camelot:~/geolocate$ sudo usermod -a -G docker dante


You may need to restart your system to be sure the group is really updated.

The easiest way to install vdist is using your standard package tools. Vdist packages are hosted at Bintray, so to install them from there you should include Bintray in your system repositories before anything else. To do it in Ubuntu just type:


dante@Camelot:~$ sudo apt-get update
dante@Camelot:~$ sudo apt-get install apt-transport-https
dante@Camelot:~$ echo "deb [trusted=yes] https://dl.bintray.com/dante-signal31/deb generic main" | sudo tee -a /etc/apt/sources.list
dante@Camelot:~$ sudo apt-key adv --keyserver pgp.mit.edu --recv-keys 379CE192D401AB61

Once Bintray is added to your repositories you can install and update vdist like any other system package. For instance, in Ubuntu:


dante@Camelot:~$ sudo apt-get update
dante@Camelot:~$ sudo apt-get install vdist


If you are on a system where you don't have permission to install system packages, you may find it interesting to install it from the PyPI repository inside a virtualenv created ad hoc for packaging your application:


(env) dante@Camelot:~/geolocate$ pip install vdist


After installing vdist you are provided with a console command called... vdist. Just be aware that if you have installed vdist inside a virtualenv, that console command will only be available inside that virtualenv.

There are many ways to use vdist; I think the easiest one is creating a configuration file and making vdist read it. Vdist is used to package itself, so its own configuration file is a good example:

[DEFAULT]
app = vdist
version = 1.1.0
source_git = https://github.com/dante-signal31/${app}, master
fpm_args = --maintainer dante.signal31@gmail.com -a native --url
    https://github.com/dante-signal31/${app} --description
    "vdist (Virtualenv Distribute) is a tool that lets you build OS packages
     from your Python applications, while aiming to build an
     isolated environment for your Python project by utilizing virtualenv. This
     means that your application will not depend on OS provided packages of
     Python modules, including their versions."
    --license MIT --category net
requirements_path = /REQUIREMENTS.txt
compile_python = True
python_version = 3.5.3
output_folder = ./package_dist/
after_install = packaging/postinst.sh
after_remove = packaging/postuninst.sh

[Ubuntu-package]
profile = ubuntu-trusty
runtime_deps = libssl1.0.0, docker.io
build_deps =

[Centos7-package]
profile = centos7
runtime_deps = openssl, docker-ce

The vdist documentation is good enough to know what each parameter is useful for. Just note that you can keep the parameters for every package you want to build in a single configuration file. Just keep common parameters in the [DEFAULT] section and put distribution dependent parameters in separate sections (they can be called whatever you want, but you'd better use expressive names).

Once you have your configuration file you can launch vdist like this (assuming the configuration file is called configuration_file):


dante@Camelot:~/$ vdist batch configuration_file

Then you'll start to see a bunch of screen output while vdist builds your packages. The generated packages will be placed in the folder set in the output_folder configuration file parameter.

The smallest package size produced by vdist is about 50 MB, because an entire python distribution has to be included in the package. That size is what you pay for your application's self-containment. Discounting those 50 MB, all the rest is due to your application and its dependencies. At first glance it may seem big, but nowadays it is quite a usual size for any compiled application you may find out there.

I think vdist is the most complete packaging solution available to deploy python apps on Linux boxes. With it you can deploy even on linux boxes with no python installed at all, giving you valuable isolation at the client end and easing your final users' lives when installing your app.

Disclaimer: I started using vdist to write this article and I've ended up being the current main contributor to its development, so feel free to comment on any further improvement you feel could be interesting.

14 October 2011

Bastille

Local security management of an organization's computers can be rather expensive. Some organizations are constantly launching new systems, either for use as work PCs or servers. In theory, the IT Security Department should oversee the creation and configuration of these new machines, but in practice these departments are often overworked and lack personnel available to assign to this task. This is where scripts come in, to ensure a proper computer setup before passing it into production. The IT Security Department passes these scripts to the Systems Department to apply them just before the equipment goes into production. This automates the task and gains efficiency.

One design option is to program these scripts ourselves; the other is to not reinvent the wheel and use what others have created. In this sense Bastille is one of the most recognized tools. Throughout this article we will explain its use.

If your operating system is Ubuntu, Bastille installation is extremely simple since it is included in the default repositories of this distribution:


dante@dante-desktop:~$ sudo aptitude search bastille
[sudo] password for dante:
c   bastille - Security hardening tool
dante@dante-desktop:~$ sudo aptitude show bastille
Package: bastille
State: not installed
Version: 1:2.1.1-13
Priority: optional
Section: universe/admin
Maintainer: Ubuntu MOTU Developers
Uncompressed Size: 1544k
Depends: perl5, libcurses-perl
Recommends: whois, psad, bind9-host | host
Suggests: acct, perl-tk (>= 1:800.011) | libgtk-perl
Conflicts: libcurses-widgets-perl
Description: Security hardening tool
 Bastille Linux is a security hardening program for GNU/Linux. It increases the security of the system either by disabling services (if they are not necessary) or by altering their configuration.

 If run in the (recommended) Interactive mode, Bastille educates the administrator during the hardening process: in each step of the process, extensive descriptions are given of what security issues are involved. Each step is optional. If run in the Quick Automated mode, Bastille hardens the system according to the profile chosen.

 Bastille Linux works for several Linux distributions. This package has been specifically modified to work for Debian GNU/Linux.

Homepage: http://www.bastille-linux.org/

dante@dante-desktop:~$ sudo aptitude install bastille
(...)
The following NEW packages will be installed:
  bastille libbit-vector-perl libcarp-clan-perl libcurses-perl libdate-calc-perl libnetwork-ipv4addr-perl libunix-syslog-perl psad whois
0 upgraded, 9 newly installed, 0 to remove and 107 not upgraded.
Need to get 0B/1355kB of archives. After unpacking 6697kB will be used.
Do you want to continue? [Y/n/?] y
(...)
dante@dante-desktop:~$

Once installed, you can run Bastille. It is recommended to run it in interactive mode, so that the program walks you through a series of questions to ascertain the use you intend to give the computer. Depending on your answers, Bastille configures the computer to be as secure as possible. These questions are presented through a simple ncurses-based interface, so they are perfectly usable even through an SSH session. With Bastille, IT Security just has to give the Systems Department a simple checklist with the answer to each configuration step in order to leave Bastille properly configured.
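To give an idea of how a session starts, here is a minimal sketch of launching the freshly installed package; the exact flags may vary between Bastille versions, so check man bastille on your own box:

# Launch the ncurses-based interactive questionnaire (run as root).
sudo bastille -c

# If you installed perl-tk, there is also a graphical front end.
sudo bastille -x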

Bastille includes a detailed explanation next to each question, describing the cause of the issue and what will be done depending on your answer. Next we are going to overview the main Bastille configuration steps:
  1. Would you like to set more restrictive permissions on the administration utilities? [N] ---> Useful on machines with multiple user accounts. There are utilities that, in general, are only executed by the administrator of the machine, but by default regular users have access to them, or at least to part of their functionality (why would a regular user need to run management tools such as top or ifconfig?). To avoid the security problems this could lead to, Bastille can change the permissions of those applications to ensure that only the administrator can run them. If you are the only user of the machine it makes no sense to enable this option. If other users access it (e.g. to upload files to their web folders) it would be interesting to enable it.
  2. Would you like to disable SUID status for mount / umount? [Y] ---> In general, programs with the SUID attribute are very dangerous because, although they can be invoked by normal users, they run with root privileges. This would not entail any risk if these programs limited themselves to doing what they were designed for; the problem is that it is relatively common to discover bugs in these applications that allow them to be "tricked" into doing things with root privileges. If you answer yes to this question, Bastille will ensure that mount / umount can only be executed by those who know the root password, thus reducing the risk exposure of the machine (see the SUID audit sketch after this list).
  3. Would you like to disable SUID status for ping? [Y] ---> Same as above.
  4. Would you like to disable SUID status for at? [Y] ---> Same as above.
  5. Should Bastille disable clear-text r-protocols that use IP-based authentication? [Y] ---> The so-called r-tools are a set of utilities for remote management of computers. The problem with them is that they do not encrypt the data they exchange and they rely on IP addresses for authentication. This lack of confidentiality, and how easy it is to spoof source IP addresses, have made people move from the r-tools to safer alternatives.
  6. Would you like to enforce password aging? [Y] ---> Enables a password expiration period of 180 days. Before that time runs out, the user is prompted to change their password. If the deadline arrives and the user has not changed their password, the account is blocked until the administrator re-activates it (see the password-aging sketch after this list).
  7. Would you like to restrict the use of cron to administrative accounts? [Y] ---> There are certain attacks that take advantage of users' ability to use cron to launch deferred tasks on a given schedule. If your users do not need to launch scheduled tasks, it is best to let Bastille restrict the use of cron so that it is only available to the administrator.
  8. Do you want to set the default umask? [Y] ---> The umask defines the default permissions applied to the files you create. It is best to let Bastille set it to a safe value.
  9. What umask would you like to set for users on the system? [077] ---> Continuing from the previous question, the best option is to use 077, so as to ensure both the confidentiality and the integrity of our files by preventing anyone but the owner from reading or writing them (a short umask demonstration follows after this list).
  10. Should we disallow root login on all ttys? [N] ---> This option is extremely useful for computers that can be accessed via SSH with no restriction on the source IP, because they are often victims of dictionary attacks from Internet bots. These bots try the root account first, since it exists on every Unix/Linux system. If this option is enabled, Bastille will ensure that the only way to become root is to log in with a normal user account and then do "su -". The advantage is that this way a bot is forced to discover not only a password, but also the name of a registered user allowed to connect. The best thing, therefore, is to enable this option.
  11. Would you like to password-protect the GRUB prompt? [N] ---> An attacker with physical access to the computer can get a root console by rebooting it and passing certain parameters to GRUB. To avoid this, it is best to enable this option so that, even if someone can restart the computer and boot it normally, they will have to provide a password before being allowed to pass parameters to GRUB.
  12. Would you like to disable CTRL-ALT-DELETE rebooting? [N] ---> This sequence allows a user with physical access to the machine to reboot it. At first sight it might seem interesting to tell Bastille to disable this keyboard sequence. The problem is that an intruder with access to the local keyboard also has access to the power plug, so he is going to reboot the computer anyway; you have to ponder whether you prefer him to reboot it cleanly with Ctrl-Alt-Del or violently, by unplugging it (so its hard drives could be severely damaged). Bastille itself does not recommend disabling this keyboard sequence.
  13. Would you like to password protect single-user mode? [Y] ---> Single-user mode is supposed to be used by root in emergencies, so that he can access the system without a password, for example when the root password has been forgotten... Clearly, this is a double-edged sword that can allow unauthorized access to the system. Enabling this option makes Bastille ask for a password before granting single-user access... but beware: do not forget your root password or you won't be able to recover it!
  14. Would you like to set a default-deny policy on TCP Wrappers and xinetd? [N] ---> inetd services are allowed by default. If this option is enabled, Bastille changes the settings so that they are denied by default (see the TCP Wrappers sketch after this list).
  15. Should Bastille ensure the telnet service does not run on this system? [Y] ---> The telnet service is obsolete and poses a serious risk to the security of the system because it transmits data in the clear. It is recommended to use SSH for console access instead of telnet.
  16. Should Bastille ensure inetd's FTP service does not run on this system? [Y] ---> What I said about the telnet service is equally valid for FTP. In this case it is best to replace it with SCP or SFTP.
  17. Would you like to display "Authorized Use" messages at log-in time? [Y] ---> This option enables a message shown at the beginning of console sessions warning that you are accessing a restricted system and that any unauthorized access may be prosecuted by law. You can later edit the message to fit whatever your Legal Department advises.
  18. Who is responsible for granting authorization to use this machine? ---> This question is related to the previous one. Here we identify the person responsible for authorizing the different accesses to the computer.
  19. Would you like to disable the gcc compiler? [N] ---> Activating this option will prevent the C compiler from being used on this computer. That will make life much harder for those intruders who gain access to the system and want to download the source code of attack tools to compile them in situ. Furthermore, disabling the C compiler should not be a problem on computers, such as perimeter firewalls, where no kind of development is planned.
  20. Would you like to put limits on system resource usage? [N] ---> Sets limits on the number of processes and the amount of memory used per user in order to mitigate denial of service attacks.
  21. Should we restrict console access to a small group of user accounts? [N] ---> This denies console access to everyone except a select group of accounts.
  22. Would you like to add additional logging? [Y] ---> Configures your computer to increase the number of log sources and to display some of them on terminals 7 and 8 (accessible via Alt+F7 and Alt+F8, respectively).
  23. Would you like to disable printing? [N] ---> If your computer does not have a printer, leaving the printer daemon enabled is leaving a door open to disaster.
  24. Would you like to install TMPDIR / TMP scripts? [N] ---> By activating this option, Bastille will install some scripts in the users' accounts that configure the TMPDIR and TMP variables so that each user gets an individual temporary directory, instead of everyone sharing /tmp, which can be extremely dangerous in multi-user environments.
  25. Would you like to run the packet filtering script? [N] ---> Enables the native Linux firewall. Activating it is safer, but note that you will then need to configure it locally (unless you want to remotely "cut your own hands off"), either through the command line or through the corresponding GUI. So you must think about it twice before switching it on.
  26. Are you finished answering the questions, i.e. may we make changes? ---> And finally we come to the end. If you are really sure of the settings specified before, answer Yes to apply the changes to your system (a batch-mode sketch follows below).
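Regarding questions 2 to 4, here is a minimal sketch of how you could audit and remove SUID bits by hand with standard tools (Bastille's own changes may differ in detail; the paths shown are the usual ones on a Debian/Ubuntu box):

# List every SUID binary currently present on the system.
sudo find / -xdev -perm -4000 -type f -ls

# Stripping the SUID bit from mount/umount means they no longer run with
# root privileges for ordinary users, so in practice only root can use them.
sudo chmod u-s /bin/mount /bin/umount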
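For question 6, the equivalent manual controls are the password-aging fields managed by chage and /etc/login.defs; this is just a sketch with example values (the account name is the one from the transcript above):

# Show the current aging policy for an account.
sudo chage -l dante

# Force a maximum password age of 180 days for that account.
sudo chage -M 180 dante

# For accounts created from now on, the default lives in /etc/login.defs:
#   PASS_MAX_DAYS   180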
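For questions 8 and 9, this quick demonstration shows what a 077 umask means in practice; you can try it in any shell:

# With umask 077 new files are readable and writable only by their owner.
umask 077
touch private.txt
ls -l private.txt
# -rw------- 1 dante dante 0 ... private.txt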
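Question 14 boils down to a default-deny policy in TCP Wrappers. Done by hand it would look roughly like this (the network range is just an example):

# /etc/hosts.deny -- deny everything that is not explicitly allowed.
ALL: ALL

# /etc/hosts.allow -- then whitelist only what you really need,
# for instance SSH from your management network.
sshd: 192.168.1.0/255.255.255.0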
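Finally, regarding question 26: as far as I remember, Bastille stores your interactive answers in a configuration file (/etc/Bastille/config in the packaged version) and can re-apply them non-interactively, which fits nicely with the idea of IT Security handing a ready-made configuration to the Systems Department. A sketch of that workflow, assuming that path and the -b batch flag:

# On the reference machine: run the interactive session once and answer everything.
sudo bastille -c

# Copy the resulting answers file to the new machine...
scp /etc/Bastille/config newserver:/etc/Bastille/config

# ...and apply the same hardening there without answering anything again.
ssh -t newserver sudo bastille -b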