How To Install Docker on Ubuntu 18.04 & Debian 10

Docker is one of the most widely used and fastest-growing DevOps technologies in the industry today. Docker comes in two editions – Docker CE (Community Edition) and Docker EE (Enterprise Edition). If you are working on a small-scale project, or simply learning, use Docker CE.

Docker is a platform that allows you to create, deploy, and manage lightweight, stand-alone packages called containers. This tutorial teaches you How To Install Docker on Ubuntu 18.04 & Debian 10 with detailed steps, along with the post-installation instructions.

Prerequisites

To follow this tutorial, you will need the following:

  • One Ubuntu 18.04 server set up by following the Ubuntu 18.04 initial server setup guide, including a sudo non-root user and a firewall.
  • Ubuntu 18.04 64-bit operating system
  • A user account with sudo privileges
  • Command-line/terminal (CTRL-ALT-T or Applications menu > Accessories > Terminal)
  • Docker software repositories (optional)

Also Check: How To Install Docker on Windows 7/8/10 Home and Pro

Ensure you have sudo rights

First of all, you want to make sure that you have sudo (administrative) rights on your Linux instance.

Without sudo rights, you won’t be able to install the Docker packages.

To check your sudo rights, run the following command:

$ sudo -l
User devconnected may run the following commands on debian-10:
   (ALL : ALL) ALL

Now that you have sudo rights, let’s install Docker.

Steps to Install Docker using get-docker.sh (fastest)

This is the quickest way to install Docker on Ubuntu and Debian, yet few tutorials describe it.

Docker provides an installation script that detects your Linux distribution and the package management system you are using (APT, YUM) in order to install Docker properly.
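Internally, the script figures out the distribution by reading /etc/os-release. A rough sketch of that detection step (a sample os-release is embedded here so the example is self-contained; the values are illustrative):

```shell
# Sketch of distribution detection, similar in spirit to what the
# convenience script does. A real script would source /etc/os-release;
# a sample is embedded here so the example runs anywhere.
sample_os_release='NAME="Ubuntu"
ID=ubuntu
VERSION_CODENAME=bionic'

# Extract the ID= field, which identifies the distribution
distro=$(printf '%s\n' "$sample_os_release" | awk -F= '$1 == "ID" { print $2 }')
echo "Detected distribution: $distro"
```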

a – Install cURL

You will need cURL in order to download the installation script.

To install cURL on Linux, run the following commands:

$ sudo apt-get update
$ sudo apt-get install curl
$ curl --version
curl 7.64.0 (x86_64-pc-linux-gnu)

b – Download the get-docker.sh script

The script is available at https://get.docker.com. As you can see, it is a plain shell script that runs a series of commands on your system to install Docker.

By default, the “stable” version of Docker is installed.

If you want another version (nightly or test), make sure to modify the parameter in the script.


To download the get-docker.sh script, run the following commands.

$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sh get-docker.sh

The Docker installation process should start.

Docker will automatically grab the packages it needs to install (like the apt-transport-https package or ca-certificates).


Awesome, Docker is now installed on your Linux system.

c – Add the user to the docker group

In order to execute docker commands, you will need sudo rights.

However, you can add users to the docker group to avoid prefixing commands with the sudo command.

To add a user to the docker group, run the following command.

$ sudo groupadd docker
$ sudo usermod -aG docker devconnected
$ sudo reboot
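Rebooting works, but logging out and back in (or running `newgrp docker`) is enough for the new group membership to take effect. To check membership afterwards, you can inspect the output of `id -nG`; a self-contained sketch (using a sample group list so it runs anywhere):

```shell
# Check whether a user belongs to the "docker" group.
# On a real system you would use: groups_list=$(id -nG "$USER")
# A sample list is used here so the sketch is self-contained.
groups_list="adm sudo docker"

if printf '%s\n' "$groups_list" | tr ' ' '\n' | grep -qx docker; then
  result="in docker group"
else
  result="not in docker group"
fi
echo "$result"
```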

d – Get the current Docker version

To verify that everything was installed correctly, you can check your current Docker version.

$ docker -v
Docker version 19.03.1, build 74b1e89

Great!

You successfully installed Docker on Ubuntu and Debian.

Make sure to read the post-installation steps in order to customize your environment for Docker.

Steps to Install Docker from Official Repository

Here are the detailed steps to install Docker from the official repository. Follow them carefully:

a – Update Local Database

Firstly, update the local package database with the following command:

sudo apt-get update

b – Download Dependencies

In the next step, you’ll need to run these commands to enable your operating system to access the Docker repositories over HTTPS.

In the terminal window, type:

sudo apt-get install apt-transport-https ca-certificates curl software-properties-common

For more explanation, here’s a brief breakdown of each package:

  • apt-transport-https: Allows the package manager to transfer files and data over https
  • ca-certificates: Allows the system (and web browser) to check security certificates
  • curl: This is a tool for transferring data
  • software-properties-common: Adds scripts for managing software

c – Add Docker’s GPG Key

The GPG key is a security feature.

To ensure that the software you’re installing is authentic, enter:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -


d – Install the Docker Repository

Enter the following command to add the Docker repository:

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

The command “$(lsb_release -cs)” returns the codename of your Ubuntu installation – in this case, Bionic. The final word of the command – stable – is the type of Docker release.


A stable release is tested and confirmed to work, but updates are released less frequently. You may substitute edge if you’d like more frequent updates, at the cost of potential instability. There are other repositories, but they are riskier.
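The repository definition that add-apt-repository receives is just a string; here is how the codename slots into it (hard-coding bionic for illustration, where a live system would use $(lsb_release -cs)):

```shell
# Build the Docker APT repository line by hand.
# "bionic" is hard-coded for illustration; on a live system you
# would use: codename=$(lsb_release -cs)
codename="bionic"
channel="stable"
repo="deb [arch=amd64] https://download.docker.com/linux/ubuntu $codename $channel"
echo "$repo"
```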

e – Update Repositories

Update the repositories you just added:

sudo apt-get update

f – Install the Latest Version of Docker

Use the following command to install the latest version of Docker:

sudo apt-get install docker-ce

g – Install a Specific Version of Docker (Optional)

List the available versions of Docker by entering the following in a terminal window:

apt-cache madison docker-ce


The system returns a list of the available versions.

At this point, type the command:

sudo apt-get install docker-ce=[version]

Substitute [version] with the version you want to install (pulled from the list you just generated).

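For instance, you can extract the version column (the second field) from a madison line and build the pinned install command. The sample line below is illustrative; run apt-cache madison docker-ce to get real values:

```shell
# Extract the version string (second column) from a sample line of
# "apt-cache madison docker-ce" output. The line is illustrative;
# run apt-cache madison docker-ce on your system for real values.
sample='docker-ce | 5:19.03.1~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages'

version=$(printf '%s\n' "$sample" | awk -F' \\| ' '{ print $2 }')
echo "Pinned install command: sudo apt-get install docker-ce=$version"
```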

Process to Install Docker manually on your Linux system

If you are reluctant to use the get-docker script to install Docker automatically, you can still install the packages by yourself.

Here are the steps to install Docker manually.

a – Remove old installed Docker versions

First, you need to make sure that you are not running any old versions of Docker locally.

$ sudo apt remove -y docker docker-engine docker.io containerd runc

b – Set up the Docker repositories

Next, you need to set up the Docker repositories, making sure that you download packages from the secure and official Docker repos.

To do that, install the following packages.

$ sudo apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common

c – Add the official Docker GPG keys

To add the official Docker GPG keys, run the following command.

$ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -

If the command is successful, the terminal should return OK.

d – Verify the key fingerprint

In order to make sure that you grabbed the official and secure Docker key, you have to search for the fingerprint in your key.

Run the following command:

$ sudo apt-key fingerprint 0EBFCD88

pub   4096R/0EBFCD88 2017-02-22
      Key fingerprint = 9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
uid                  Docker Release (CE deb) <docker@docker.com>
sub   4096R/F273FCD8 2017-02-22

Great! As you can see, you got the key from the official Docker repositories.

e – Install Docker CE on your instance

In order to get the stable repository from Docker, you will need to run the following command.

$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"

The command $(lsb_release -cs) will return the name of the distribution.


Now that you are done, simply install docker-ce on your computer.

$ sudo apt-get update
$ sudo apt-get install docker-ce

This should install Docker as a service. To verify it, run the following command:

$ sudo systemctl status docker


Again, add the user to the docker group in order for the user to execute docker commands without sudo.

$ sudo groupadd docker
$ sudo usermod -aG docker devconnected
$ sudo reboot

And finally, check your Docker version.

$ docker -v
Docker version 19.03.1, build 74b1e89

You now have Docker installed on your instance.

Post Installation Docker instructions

In order to have a complete and functional Docker installation on your Linux system, you will need to complete a few more steps.

On Linux, docker-machine and docker-compose don’t come automatically with your docker-ce installation.

This is not a problem on Windows and macOS, where they come bundled with the Docker Community Edition binary.

a – Install docker-machine on Linux

docker-machine is a tool that lets you provision and manage Docker hosts: it creates machines (locally or on a cloud provider), installs the Docker Engine on them, and configures the Docker client to talk to them.

Docker CE, on the other hand, is a client-server architecture that allows clients to communicate with Docker servers via the Docker CLI.

To install docker-machine, run the following command.

$ sudo -i

$ curl -L https://github.com/docker/machine/releases/download/v0.16.1/docker-machine-`uname -s`-`uname -m` >/tmp/docker-machine &&
    chmod +x /tmp/docker-machine &&
    sudo cp /tmp/docker-machine /usr/local/bin/docker-machine

$ sudo chmod +x /usr/local/bin/docker-machine
$ exit

Want another version? All the docker-machine versions are available here.

Make sure that the docker-machine utility is correctly installed on your computer by running docker-machine --version.

b – Install docker-compose on Linux

Again, docker-compose is not shipped by default with Docker Community Edition.

Docker-compose is a tool that lets you “compose” Docker containers – i.e. run multiple containers in the same isolated environment.

To install docker-compose, run the following commands:

$ sudo -i

$ curl -L https://github.com/docker/compose/releases/download/1.24.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose

$ sudo chmod +x /usr/local/bin/docker-compose
$ exit

Make sure that the docker-compose tool is correctly installed on your computer by running docker-compose --version.

Awesome! Now that everything is ready, let’s start our first container.

c – Create your first container on Docker

For this tutorial, I am going to create a Debian 8 Jessie container running on my Debian 10 Buster instance.

Head over to https://hub.docker.com/ which is the place where most Docker container images are stored.


Search for Debian 8 in the search text field and click on the verified Debian option in the suggestions dropdown.


By scrolling a bit, you should see all the Debian distributions available (Buster, Jessie, Wheezy, etc.). For this tutorial, we are going to take a look at the Jessie distribution.

To grab the Jessie image, run the following command:

$ docker container run debian:jessie


The docker image was successfully downloaded from the Docker Hub repositories.

You can check it by running a simple docker images command.


The container was also successfully created by the run command, but it stopped right away: without the -it options, no terminal is attached, so the image’s default shell exits immediately.

To see all your containers, run the following command:

$ docker container ls -a


We did not choose a name for our container when downloading it, so Docker assigned a default name to the container (vigorous_kirch).

Time to go into our Jessie container. This is how to do it:

$ docker container start 61f66b78e140
61f66b78e140

$ docker exec -it 61f66b78e140 /bin/bash
root@61f66b78e140:/#
root@61f66b78e140:/# cat /etc/issue
Debian GNU/Linux 8

Awesome! We have a Jessie distribution running on a Debian 10 Buster one.

Conclusion

We hope you have learned how to install and configure Docker on Ubuntu and Debian distributions, as well as the post-installation steps you must perform for a complete Docker installation.

If you need more information, make sure to read the official Docker documentation. They provide great information about how to run commands and how to maintain your containers.

Docker Exec Command With Examples


Developers who need full-fledged information about Docker and its commands can stick to Junosnotes.com, especially this tutorial, which explains the Docker exec command with examples.

Before getting to the main topic, let’s have a look at Docker. It is a containerization platform, founded in 2010 by Solomon Hykes, that offers features to install, deploy, start, and stop containers.

The command that executes commands on running containers is the Docker exec command. It makes it possible to access a shell instance or start a CLI session to manage your servers.

Go through this tutorial to learn the docker exec command efficiently and effortlessly.

What is Docker Exec Command?

One of the most useful commands for interacting with your running Docker containers is the docker exec command. When working with Docker, you will likely need to access the shell or CLI of the containers you have deployed.

Docker Exec Syntax

In order to execute commands on running containers, you have to execute “docker exec” and specify the container name (or ID) as well as the command to be executed on this container.

$ docker exec <options> <container> <command>

As an example, let’s say that you want to execute the “ls” command on one of your containers.

The first thing that you need to do is to identify the container name (if you gave your container one) or the container ID.

In order to determine the container name or ID, you can simply execute the “docker ps” command.

$ docker ps

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              
74f86665f0fd        ubuntu:18.04        "/bin/bash"         49 seconds ago      Up 48 seconds

Note: The “docker ps” command is also used to determine whether a container is running or not.

As you can see, the container ID is the first column of the ‘docker ps’ output.

Now, to execute the “ls” command on this container, simply append the ‘ls’ command to the ID of your container.

$ docker exec 74f86665f0fd ls

bin
boot
dev
etc
home

Awesome, now that you know how you can use the “docker exec” command, let’s see some custom examples on the usage of this command.

Prerequisites

If you want to follow the examples provided on this page, you will need Docker installed and at least one running container.

Docker Exec Bash / Docker Exec -it

The most popular usage of the “docker exec” command is to launch a Bash terminal within a container.

To start a Bash shell in a Docker container, execute the “docker exec” command with the “-it” option and specify the container ID as well as the path to the bash shell.

If bash is part of your PATH, you can simply type “bash” and get a Bash terminal in your container.

$ docker exec -it <container> /bin/bash

# Use this if bash is part of your PATH

$ docker exec -it <container> bash

When executing this command, you will have an interactive Bash terminal where you can execute all the commands that you want.


Awesome, you are now running an interactive Bash terminal within your container.

As you can see, we used an option that we did not use before to execute our command: the I and T options.

What is the purpose of those options?

Docker Exec Interactive Option (-it)

If you are familiar with Linux operating systems, you have probably already heard about the concept of file descriptors.

Whenever you are executing a command, you are creating three file descriptors:

  • STDIN: also called the standard input, it is used to type and submit your commands (for example a keyboard, a terminal, etc.);
  • STDOUT: called the standard output, this is where the process output is written (the terminal itself, a file, a database, etc.);
  • STDERR: called the standard error, it is closely related to the standard output and is used to display errors.
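A quick local illustration of two of these descriptors, independent of Docker: the same command writes to stdout and stderr, and redirections let you capture each stream separately.

```shell
# The same command writes one line to stdout and one to stderr.
# Capturing stdout only (discard stderr):
msg_out=$(sh -c 'echo "to stdout"; echo "to stderr" 1>&2' 2>/dev/null)

# Capturing stderr only (send it to stdout, then discard the
# original stdout):
msg_err=$(sh -c 'echo "to stdout"; echo "to stderr" 1>&2' 2>&1 1>/dev/null)

echo "stdout carried: $msg_out"
echo "stderr carried: $msg_err"
```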

So how are file descriptors related to the “docker exec“?

When running “docker exec” with the “-i” option, you are binding the standard input of your host to the standard input of the process you are running in the container.

In order to get the results from your command, you are also binding the standard output and the standard error to the ones from your host machine.


As you are binding the standard input from your host to the standard input of your container, you are running the command “interactively”.

If you don’t specify the “-it” options, Bash will still get executed in the container but you won’t be able to submit commands to it.

Docker Exec as Root

In some cases, you are interested in running commands in your container as the root user.

In order to execute a command as root on a container, use the “docker exec” command and specify the “-u” option with a value of 0 (the root user ID).

$ docker exec -u 0 <container> <command>

For example, to make sure that we execute the command as root, let’s run a command that prints the user currently logged in within the container.

$ docker exec -u 0 74f86665f0fd whoami

root

Great, you are now able to run commands as the root user within a container with docker exec.

Docker Exec Multiple Commands

In order to execute multiple commands using the “docker exec” command, execute “docker exec” with the “bash” process and use the “-c” option to read the command as a string.

$ docker exec <container> bash -c "command1 ; command2 ; command3"

Note: Single quotes may not work in your host terminal; you may have to use double quotes to execute multiple commands.
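Since the quoted string is handed to bash unchanged, you can try the same bash -c mechanics locally, outside any container:

```shell
# bash -c reads its argument as one string and runs the commands
# in sequence -- the same mechanism docker exec relies on when it
# passes the string to the container's bash process.
out=$(bash -c "cd /tmp ; pwd ; echo done")
echo "$out"
```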

For example, let’s say that you want to change the current directory within the container and read a specific log file in your container.

To achieve that, you are going to execute two commands: “cd” to change directory and “cat” to read the file content.

$ docker exec 74f86665f0fd bash -c "cd /var/log ; cat dmesg "

(Nothing has been logged yet.)

Executing a command in a specific directory

In some cases, the purpose of executing multiple commands is to navigate to a directory in order to execute a specific command in this directory.

You can use the method we have seen before, but Docker provides a special option for this.

In order to execute a command within a specific directory in your container, use “docker exec” with the “-w” option and specify the working directory in which to execute the command.

$ docker exec -w /path/to/directory <container> <command>

Given the example we saw before, where we inspected the content of a specific log file, it can be shortened to:

$ docker exec -w /var/log 74f86665f0fd cat dmesg

(Nothing has been logged yet.)

Docker Run vs Exec

Now that we have seen multiple ways of using the “docker exec” command, you may wonder what is the difference with the “docker run” command.

The difference between “docker run” and “docker exec” is that “docker exec” executes a command on an already running container. “docker run”, on the other hand, creates a new container, executes the command in it, and stops the container when the command exits.

For example, you can execute a Bash shell using the “docker run” command but your container will be stopped when exiting the Bash shell.

$ docker run -it ubuntu:18.04 bash

root@b8d2670657e3:/# exit

$ docker ps

(No containers.)

On the other hand, if a container is already started, you can start a Bash shell in it and exit it without stopping the container.

$ docker ps

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              
74f86665f0fd        ubuntu:18.04        "/bin/bash"         49 seconds ago      Up 48 seconds  

$ docker exec -it 74f86665f0fd bash
root@74f86665f0fd:/# exit


$ docker ps

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              
74f86665f0fd        ubuntu:18.04        "/bin/bash"         58 seconds ago      Up 58 seconds

Awesome, you know the difference between “docker run” and “docker exec” now.

Set Environment Variables

Setting environment variables is crucial for Docker: you may run databases that need specific environment variables to work properly.

Famous examples are Redis, MongoDB, or MySQL databases.

In order to set environment variables, execute “docker exec” with the “-e” option and specify the environment variable name and value next to it.

$ docker exec -e var='value' <container> <command>

For example, let’s have a command that sets the “UID” environment variable and prints it within the container.

To achieve that, we use the “-e” option to set the environment variable.

$ docker exec -e UID='myuser' 74f86665f0fd printenv UID

myuser
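The “-e” option behaves much like a per-command environment assignment in a shell; the same effect can be reproduced locally with printenv:

```shell
# Set a variable for a single command, then read it back with
# printenv -- analogous to what "docker exec -e" does inside a
# container.
value=$(MYVAR='myvalue' printenv MYVAR)
echo "$value"
```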

How to Run a Docker Exec Command inside a Docker Container?

  • Make use of docker ps to get the name of the existing container.
  • Later, use the command docker exec -it <container name> /bin/bash to get a bash shell in the container.
  • Or straight away use docker exec -it <container name> <command> to execute whatever command you specify in the container.

Conclusion

In this tutorial, you learned about the “docker exec” command, which is used to execute commands on your existing running containers.

If you are interested in DevOps or Docker, we have a complete section dedicated to it on the website, so make sure to check it out!

How To Encrypt Partition on Linux

In one of our previous articles, we learnt how you can encrypt your entire root filesystem on Linux easily.

However, in some cases, you may want to encrypt one simple partition that may store some of your important files.

As you already know, encrypting your disks is crucial. If your laptop were to be stolen, you would probably lose all your personal information.

However, there is a way for you to cope with this problem: encrypting your disk partitions.

In this tutorial, you will learn all the steps necessary to encrypt an entire disk partition and secure it with a passphrase or with a key file.

For the examples, this article uses a RHEL 8 operating system, but the steps should not differ on other distributions.

Prerequisites

In order to execute most of the commands provided in this article, you need to have administrator rights.

To check whether this is the case, you can execute the “groups” command and verify that you belong to the “sudo” group for Debian-based distributions or to the “wheel” group for RedHat-based ones.

$ groups


If you don’t have such rights, you can read one of our articles on the subject about getting sudo rights for Ubuntu or CentOS distributions.

Encrypt Partition using cryptsetup

As specified in the previous articles, encrypting a partition involves formatting it entirely.

As a consequence, if you plan on encrypting a partition with existing data, you should know that your data will be erased in the process. To avoid losing anything, make a backup of your data on an external disk or in online cloud storage.

Create New Partition on disk

In order to encrypt a partition, we are first going to create a new one using the “fdisk” utility. For this example, we are going to create a new partition named “sdb1” on the “sdb” disk.

$ sudo fdisk /dev/sdb


In the fdisk utility, you can create a new partition using the “n” keyword and specify a partition size, for example 5 GB.


If you are not sure about how to use “fdisk” or how to create partitions, we have a dedicated article about this subject.

At the end of the process, you need to use the “w” keyword in order to write the changes to the disk.


Awesome, now that your partition is created, we are going to format it as a LUKS partition.

Format Disk Partition as LUKS

To encrypt the partition, we are going to use a command related to the LUKS project.

The LUKS project, short for Linux Unified Key Setup, is a specification used to encrypt storage devices with standard cryptographic protocols. As described, LUKS is only a specification; you will need a program that implements it.

In this case, we are going to use the “cryptsetup” utility. As explained in its manual page, cryptsetup is the tool used to set up encrypted volumes managed by dm-crypt.

First of all, make sure that you have the “cryptsetup” command using the “which” command.

$ which cryptsetup


If cryptsetup cannot be found on your server, make sure to install it using one of the following commands:

$ sudo apt-get install cryptsetup                     (for Debian distributions)

$ sudo yum install cryptsetup                         (for RHEL/CentOS distributions)

To create your LUKS partition, execute “cryptsetup” followed by “luksFormat” and the name of the partition to be formatted.

$ sudo cryptsetup luksFormat /dev/sdb1


First of all, you are reminded that encrypting your disk will actually format it in the process.

After typing “YES” in capital letters, you will have to choose a passphrase in order to secure your device.

LUKS supports two ways of protecting your media: using a passphrase (the one that we currently use) and using keys. For now, you can choose a safe passphrase and your partition should be formatted automatically.

Now that your partition is created, you can inspect it using the “lsblk” command: the partition should be marked as “crypto_luks“.

$ lsblk -f


Awesome! Now that the volume is formatted, we can open it and create a simple ext4 filesystem on it.

Create ext4 Filesystem on Partition

By default, your encrypted volume is closed meaning that you cannot access data that is available on it.

In order to “open”, meaning “unlock”, your volume, you have to use the “cryptsetup” command again, followed by “luksOpen” and the name of the volume.

At the end of the command, provide a name for your open volume, in this case we are going to choose “cryptpart“.

$ sudo cryptsetup luksOpen /dev/sdb1 cryptpart


As you can guess, you are asked to provide the passphrase that you chose in the previous section.

Running the “lsblk” command again, you will notice that a new volume named “cryptpart“ was created under the “sdb1” encrypted volume. The “device mapper”, one of the frameworks of the Linux kernel, did that for you.

Now that your volume is unlocked, it is time for you to create a new ext4 filesystem on it.

To create a new filesystem on your partition, use the “mkfs” command followed by the filesystem format, in this case “ext4”.

$ sudo mkfs.ext4 /dev/mapper/cryptpart


Awesome, the filesystem was created.

You can now mount it and add new files to it. Files created on this volume will automatically be encrypted.

$ mkdir -p /home/devconnected/files 

$ sudo mount /dev/mapper/cryptpart /home/devconnected/files

$ sudo chown devconnected:devconnected /home/devconnected/files


Awesome, now that your data is safe on an encrypted partition, let’s see how you can mount the encrypted partition on boot.

Modify crypttab and fstab files

Many system administrators know about the fstab file, which is used by your init process to mount drives.

However, when dealing with encrypted partitions, there is another file that comes into play: /etc/crypttab.

Similarly to the fstab file, the crypttab is read by your init process when booting. Given the information provided in it, it will either ask you to unlock the partition or read a key file in order to do it automatically.

Note: the /etc/crypttab file may not exist on your system. If it does not, you may have to create it.


The columns of the crypttab file are described below:

  • Device name: you can give your decrypted device any name that you want. It will be automatically created by the device mapper under the “/dev/mapper” path. In the previous section, we chose “cryptpart” for this column;
  • Encrypted device UUID: in order to find which partition contains the encrypted data, your system needs its UUID, meaning its unique identifier;
  • Method of authentication: as explained, you can choose “none” for the passphrase, or you can specify a path to a key file. The key method is explained in the last chapter of this article;
  • Mount options: using this column, you can specify the number of tries for a passphrase, the cipher, the encryption method and many other parameters. The complete list of options is available in the “crypttab” manual page.

$ sudo nano /etc/crypttab

# Content of the crypttab file
cryptpart    UUID=<partition_uuid>    none    luks


If you have doubts about the UUID of your encrypted partition, you can use the “blkid” command with a simple “grep” pipe.

$ sudo blkid | grep -i luks

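To avoid copying the UUID by hand, you can extract it from a blkid line and assemble the crypttab entry in one go. A sketch (the sample line and UUID below are illustrative, not from a real system):

```shell
# Pull the UUID out of a blkid line. The sample line is
# illustrative; on a real system you would pipe the output of:
#   sudo blkid | grep -i luks
sample='/dev/sdb1: UUID="c3a2b1d0-1111-2222-3333-444455556666" TYPE="crypto_LUKS"'

uuid=$(printf '%s\n' "$sample" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')
echo "crypttab entry: cryptpart UUID=$uuid none luks"
```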

Now that the “/etc/crypttab” file is modified, you will have to modify the “fstab” file to specify the mountpoint.

$ sudo blkid | grep -i ext4

$ sudo nano /etc/fstab


In the fstab columns, you have to specify:

  • The decrypted device UUID: in order to find it, you can use the “blkid” command, but make sure that you opened the device before proceeding. If the device is closed, you won’t be able to find your UUID;
  • The mount point: where the decrypted device is going to be mounted. If the path does not exist, it is going to be created automatically;
  • The filesystem type: in this case, we chose to use “ext4”, but it may be different on your system;
  • Dump and pass options: we don’t want the filesystem to be checked on boot-time, so we can keep the default values.

When you are done, save your file and you should be good to go.

Given the steps you just performed, your device is ready and it should automatically be mounted on boot.

Verify encrypted device mounting on boot

In order to verify that the device is correctly mounted, we can restart our server and wait for the initramfs module to open the encrypted device.

$ sudo reboot

When starting your server, at least on RHEL 8, you should see a prompt for the passphrase. If you provide it, your machine should be able to unlock the encrypted partition and mount it for you.

Once you are logged in to your server, you can check that the encrypted partition was correctly mounted using the “lsblk” command once again.

$ lsblk -f | grep sdb1 -A 2


Congratulations, you successfully encrypted a partition on Linux using LUKS!

Create Keys For Encrypted Partition

As explained before, LUKS handles two authentication methods, namely passphrases and key files.

In the previous section, we used a passphrase, but it can be quite handy to also have an authentication key.

First of all, create a key file and store it somewhere safe (in directories that regular users cannot navigate to, like “/boot” or “/root”).

$ echo "supersecretpass" > volume-key

$ sudo mv volume-key /boot/


As you can see, by default, the file was created with your user as the owner, and its permissions are too broad.

Using the “chown” and “chmod” commands, we can set “root” as the owner of the file and change its permissions to read-only.

$ sudo chown root:root /boot/volume-key

$ sudo chmod 0400 /boot/volume-key


Now that the file is set to read-only, we can add it as a key in one of the slots of our LUKS volume.

Add Key to LUKS Volume

In order to add a key to your LUKS volume, you need to execute the “cryptsetup” command followed by “luksAddKey”, the path to the encrypted device, and the path to the key file.

$ sudo cryptsetup luksAddKey <encrypted_device> <path_to_key>

$ sudo cryptsetup luksAddKey /dev/sdb1 /boot/volume-key


In order to perform this operation, you will be prompted for your passphrase. When provided, the key will be automatically added to your keyslots.

To verify that the key was correctly added, you can inspect your keyslots using the “luksDump” command.

$ sudo cryptsetup luksDump /dev/sdb1


Now that the key is added, you only need to modify the “/etc/crypttab” file in order for your system to find it on boot.

$ sudo nano /etc/crypttab

# Content of the crypttab file
cryptpart    UUID=<partition_uuid>    /boot/volume-key    luks

When rebooting, your encrypted partition will be mounted automatically!


Conclusion

In this article, you learnt how you can easily encrypt your partition on Linux using the LUKS project and its implementation named cryptsetup.

You also saw that you can use a “key file” in order for your partition to be unlocked automatically.

If you are interested in a full system encryption, we recently wrote an article on the subject.

Also, if you want to read more about Linux System Administration, make sure to have a look at our dedicated section on the website.

How To Flush DNS Cache on Linux

DNS, short for the Domain Name System protocol, is used on Linux systems in order to retrieve IP addresses associated with names.

For example, when you are performing a ping request, it is quite likely that you are using the DNS protocol to retrieve the server IP.
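As an illustration, you can query the system resolver yourself with the “getent” command, which goes through the same name resolution stack (hosts file, local DNS cache, remote servers) that most applications use:

```shell
# Resolve a host name using the system's name resolution stack
getent hosts localhost
```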

In most cases, the DNS requests that you perform are stored in a local cache on your operating system.

However, in some cases, you may want to flush the DNS cache of your server.

It might be because you changed the IP of a server on your network and you want the changes to be reflected immediately.

In this tutorial, you are going to learn how you can easily flush the DNS cache on Linux, whether you are using systemd or dnsmasq.

Prerequisites

In order to be able to flush your DNS cache, you have to know how DNS resolution works on your Linux system.

Depending on your distribution, you may be facing different Linux services that act as a DNS resolver.

Before you start, it is quite important for you to know how DNS resolution will actually happen on your operating system.

(Diagram: DNS resolution on Linux, from the local application down to remote DNS servers, inspired by a Wikipedia diagram)
If you are reading this article, you are looking to flush the cache of your local DNS resolver. But as the diagram shows, there are many different caches between your local application and the actual Internet DNS servers.

In this tutorial, we are going to focus on the local stub resolver implemented on every Linux system.

Finding your local DNS resolver

On most Linux systems, the DNS resolver is either “systemd-resolved” or dnsmasq. In order to know which one you are dealing with, you can execute the following command:

$ sudo lsof -i :53 -S
Note : so why are we running this command? As DNS runs on port 53, we are looking for the commands associated with the service running on port 53, which is your local DNS resolver or “stub”.

As you can see, on a recent Ubuntu 20.04 distribution, the service listening on port 53 is systemd-resolved. However, if you were to execute this command on Ubuntu 14.04, you would get a different output.

In this case, the local DNS resolver used is dnsmasq, and the commands are obviously different.


Knowing this information, you can go to the chapter you are interested in. If you get a different output on your server, make sure to leave a comment for us to update this article.

Flush DNS using systemd-resolved

The easiest way to flush the DNS on Linux, if you are using systemd-resolved, is to use the “systemd-resolve” command followed by “--flush-caches”.

Alternatively, you can use the “resolvectl” command followed by the “flush-caches” option.

$ sudo systemd-resolve --flush-caches

$ sudo resolvectl flush-caches

In order to verify that your Linux DNS cache was actually flushed, you can use the “--statistics” option that will highlight the “Current Cache Size” under the “Cache” section.

$ sudo systemd-resolve --statistics


Congratulations, you successfully flushed your DNS cache on Linux!

Flush DNS cache using signals

Another way of flushing the DNS cache can be achieved by sending a “USR2” signal to the “systemd-resolved” service that will instruct it to flush its DNS cache.

$ sudo killall -USR2 systemd-resolved

In order to check that the DNS cache was actually flushed, you can send a “USR1” signal to the systemd-resolved service. This way, it will dump its current state into the systemd journal.

$ sudo killall -USR1 systemd-resolved

$ sudo journalctl -r -u systemd-resolved


Awesome, your DNS cache was correctly flushed using signals!

Flush DNS using dnsmasq

The easiest way to flush your DNS resolver, when using dnsmasq, is to send a “SIGHUP” signal to the “dnsmasq” process with the “killall” command.

$ sudo killall -HUP dnsmasq


Similarly to systemd-resolved, you can send a “USR1” signal to the process in order for it to print its statistics to the “syslog” log file. Using a simple “tail” command on that file, we are able to verify that the DNS cache was actually flushed.

$ sudo killall -USR1 dnsmasq

$ sudo tail -f /var/log/syslog

Now what if you were to run dnsmasq as a service?

Dnsmasq running as a service

In some cases, you may run “dnsmasq” as a service on your server. In order to check whether this is the case or not, you can run the “systemctl” command, or the “service” one if you are on a SysVinit system.

$ sudo systemctl is-active dnsmasq

# On SysVinit systems
$ sudo service dnsmasq status

If you notice that dnsmasq is running as a service, you can restart it using the usual “systemctl” or “service” commands.

$ sudo systemctl restart dnsmasq

# On SysVinit systems
$ sudo service dnsmasq restart

After running those commands, always make sure that your services were correctly restarted.

$ sudo systemctl status dnsmasq

# On SysVinit systems
$ sudo service dnsmasq status

Conclusion

In this tutorial, you learnt how you can quickly and easily flush your DNS cache on Linux.

Using this article, you can easily clear the cache for the systemd and dnsmasq local resolvers. However, you should know that there is another common DNS server, named Bind, that is purposefully omitted in this article.

Another article about setting up a local DNS cache server using BIND should come in the near future.

If you are interested in DNS queries and how they are performed, you can read this very useful article from “zwischenzugs” named the Anatomy of a DNS query. It is particularly useful if you want to debug DNS queries and wonder how they are performed.

Also if you are interested in Linux System Administration, we have a complete section about it on the website, so make sure to check it out.

How To Push Git Branch To Remote


In Git, the git push command is used to upload local repository content to a remote repository. Pushing is how you transfer commits from your local repository to a remote repo. If you are working on a local branch and want to share your modifications, you will need to push your Git branch to the remote repo.

Here is the ultimate tutorial that helps beginners and experienced developers learn how to push a Git branch to a remote repo easily and solve the hurdles they face while working with local and remote repositories in Git. Also, check out the Git commands that help developers make modifications and perform other tasks on local and remote Git repositories.

Push Git Branch To Remote

In order to push a Git branch to remote, you need to execute the “git push” command and specify the remote as well as the branch name to be pushed.

$ git push <remote> <branch>

For example, if you need to push a branch named “feature” to the “origin” remote, you would execute the following command:

$ git push origin feature


If you are not already on the branch that you want to push, you can execute the “git checkout” command to switch to your branch.

If your upstream branch is not already created, you will need to create it by running the “git push” command with the “-u” option for upstream.

$ git push -u origin feature

Congratulations, you have successfully pushed your branch to your remote!

Also Refer: How To Create a Git Branch

How to push all local branches to the remote?

You won’t need to push all branches from your local very often, but if you do you can add the --all flag:

(main)$ git branch
* main
  my-feature

(main)$ git push --all
...
To github.com:johnmosesman/burner-repo.git
b7f661f..6e36148 main -> main
* [new branch] my-feature -> my-feature

Push Branch to Another Branch

In some cases, you may want to push your changes to another branch on the remote repository.

In order to push your branch to another remote branch, use the “git push” command and specify the remote name, then the name of your local branch followed by the name of the remote branch.

$ git push <remote> <local_branch>:<remote_branch>

As an example, let’s say that you have created a local branch named “my-feature”.

$ git branch

  master
* my-feature
  feature

However, you want to push your changes to the remote branch named “feature” on your repository.

In order to push your branch to the “feature” branch, you would execute the following command

$ git push origin my-feature:feature

Enumerating objects: 6, done.
Counting objects: 100% (6/6), done.
Delta compression using up to 2 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 513 bytes | 513.00 KiB/s, done.
Total 3 (delta 1), reused 0 (delta 0)
remote: Resolving deltas: 100% (1/1), completed with 1 local object.
To https://github.com/SCHKN/repo.git
   b1c4c91..9ae0aa6  my-feature -> feature

In order to push your branch to another branch, you may need to merge the remote branch to your current local branch.

In order to be merged, the tip of the remote branch cannot be behind the branch you are trying to push.

Before pushing, make sure to pull the changes from the remote branch and integrate them with your current local branch.

$ git pull

$ git checkout my-feature

$ git merge origin/feature

$ git push origin my-feature:feature

Note: When merging the remote branch, you are merging your local branch with the upstream branch of your local repository.

Congratulations, you pushed your branch to another branch on your repository!

Push Branch to Another Repository

In order to push a branch to another repository, you need to execute the “git push” command and specify the correct remote name as well as the branch to be pushed.

$ git push <remote> <branch>

In order to see the remotes defined in your repository, you have to execute the “git remote” command with the “-v” option for “verbose”.

$ git remote -v

origin  https://github.com/user/repo.git (fetch)
origin  https://github.com/user/repo.git (push)
custom  https://github.com/user/custom.git (fetch)
custom  https://github.com/user/custom.git (push)

In the previous examples, we pushed our branch to the “origin” remote but we can choose to publish it to the “custom” remote if we want.

$ git push custom feature

Awesome, you pushed your branch to another remote repository!

How to push your branch to a remote GitHub repo?

While working with feature branches on a team, it is typically not appropriate to merge your own code into master yourself. Although this is up to your team to decide, the norm is usually to open pull requests. Pull requests require that you push your branch to the remote repo.

To push the new feature branch to the remote repo, simply do the following:

$ git push origin my-new-feature-branch

As far as Git is concerned, there is no real difference between master and a feature branch, so all the same Git features apply.
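If you want to rehearse this flow without touching a real remote, a bare repository on your own machine can act as a stand-in remote. The paths, identity, and branch name below are illustrative:

```shell
# Create a bare repository that will play the role of the remote
git init --bare /tmp/demo-remote.git

# Create a local repository and register the bare one as "origin"
git init /tmp/demo-local
cd /tmp/demo-local
git config user.email "demo@example.com"
git config user.name "Demo User"
git remote add origin /tmp/demo-remote.git

# Create a feature branch, commit a file, and push the branch upstream
git checkout -b my-new-feature-branch
echo "hello" > file.txt
git add file.txt
git commit -m "Add file"
git push -u origin my-new-feature-branch
```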

Troubleshooting

In some cases, you may run into errors while trying to push a Git branch to a remote.

Failed to push some refs


The error message states that the tip of the pushed branch is behind its remote counterpart (its references are behind).

In order to fix this, you first need to pull the recent changes from your remote branches with the “git pull” command.

$ git pull

When pulling the changes, you may run into merge conflicts; resolve the conflicts and perform a commit again with the merged results.

Now that the files are merged, you may try to push your branch to the remote again.

$ git push origin feature

Enumerating objects: 6, done.
Counting objects: 100% (6/6), done.
Delta compression using up to 2 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 513 bytes | 513.00 KiB/s, done.
Total 3 (delta 1), reused 0 (delta 0)
remote: Resolving deltas: 100% (1/1), completed with 1 local object.
To https://github.com/SCHKN/repo.git
   b1c4c91..9ae0aa6  feature -> feature

Conclusion

In this tutorial, you learned how you can push a Git branch to a remote with the “git push” command.

You learned that you can easily specify your branch and your remote if you want to send your changes to other repositories.

If you are interested in Software Engineering or in Git, we have many other tutorials on the subject, so make sure to check them out!

Monitoring Linux Processes using Prometheus and Grafana


By referring to this tutorial, all Linux OS developers, system administrators, DevOps engineers, and many other technical developers can easily learn and perform Monitoring Linux Processes using Prometheus and Grafana. Follow this guide until you get familiar with the process.


One of the most difficult tasks for a Linux system administrator or a DevOps engineer is tracking performance metrics on their servers. Sometimes, instances run so slowly or become so unresponsive that they block you from running remote commands like top or htop on them. Moreover, your server may be hitting a bottleneck that you cannot identify easily and quickly.

Do you need a complete monitoring solution that tracks all these general performance issues, down to individual processes, and helps you resolve them over time? That is exactly what you will build by following this tutorial carefully.


The main objective of this tutorial is to design a complete monitoring dashboard for Linux sysadmins.

Do Check Other Monitoring Guides: 

As a result, it will showcase several panels that are entirely customizable and scalable to multiple instances for distributed architectures.

What You Will Learn

Before jumping right into this technical journey, let’s have a quick look at everything that you are going to learn by reading this article:

  • Understanding current state-of-the-art ways to monitor process performance on Unix systems;
  • Learn how to install the latest versions of Prometheus v2.9.2, Pushgateway v0.8.0, and Grafana v6.2;
  • Build a simple bash script that exports metrics to Pushgateway;
  • Build a complete Grafana dashboard including the latest panels available such as the ‘Gauge’ and the ‘Bar Gauge’;
  • Bonus: implementing ad-hoc filters to track individual processes or instances.

Now that we have an overview of everything that we are going to learn, and without further ado, let’s have an introduction to what currently exists for Unix systems.

Unix Process Monitoring Basics

When it comes to process monitoring for Unix systems, you have multiple options.

The most popular one is probably ‘top’.

Top provides a full overview of performance metrics on your system such as the current CPU usage, the current memory usage as well as metrics for individual processes.

This command is widely used among sysadmins and is probably the first command run when a performance bottleneck is detected on a system (if you can access it of course!)


The top command is already pretty readable, but there is a command that makes everything even more readable than that: htop.

Htop provides the same set of functionalities (CPU, memory, uptime..) as top, but in a colorful and pleasant way.

Htop also provides gauges that reflect current system usage.


Knowing that those two commands exist, why would we want to build yet another way to monitor processes?

The main reason would be system availability: in case of a system overload, you may have no physical or remote access to your instance.

By externalizing process monitoring, you can analyze what’s causing the outage without accessing the machine.

Another reason is that processes get created and killed all the time, often by the kernel itself.

In this case, running the top command would give you zero information as it would be too late for you to catch who’s causing performance issues on your system.

You would have to dig into kernel logs to see what has been killed.

With a monitoring dashboard, you can simply go back in time and see which process was causing the issue.

Now that you know why we want to build this dashboard, let’s have a look at the architecture put in place in order to build it.

Detailing Our Monitoring Architecture

Before having a look at the architecture that we are going to use, we want to use a solution that is:

  • Resource cheap: i.e not consuming many resources on our host;
  • Simple to put in place: a solution that doesn’t require a lot of time to instantiate;
  • Scalable: if we were to monitor another host, we can do it quickly and efficiently.

Those are the points we will keep in mind throughout this tutorial.

The detailed architecture we are going to use today is this one:


Our architecture makes use of four different components:

  • A bash script used to send periodically metrics to the Pushgateway;
  • Pushgateway: a metrics cache used by individual scripts as a target;
  • Prometheus: that instantiates a time series database used to store metrics. Prometheus will scrape Pushgateway as a target in order to retrieve and store metrics;
  • Grafana: a dashboard monitoring tool that retrieves data from Prometheus via PromQL queries and plots them.

For those who are quite familiar with Prometheus, you already know that Prometheus scrapes metrics exposed by HTTP instances and stores them.

In our case, the bash script has a very tiny lifespan and it doesn’t expose any HTTP instance for Prometheus.

This is why we have to use the Pushgateway; designed for short-lived jobs, Pushgateway will cache metrics received from the script and expose them to Prometheus.


Installing The Different Tools

Now that you have a better idea of what’s going on in our application, let’s install the different tools needed.

a – Installing Pushgateway

In order to install Pushgateway, run a simple wget command to get the latest binaries available.

wget https://github.com/prometheus/pushgateway/releases/download/v0.8.0/pushgateway-0.8.0.linux-amd64.tar.gz

Now that you have the archive, extract it, and run the executable available in the pushgateway folder.

> tar xvzf pushgateway-0.8.0.linux-amd64.tar.gz
> cd pushgateway-0.8.0.linux-amd64/   
> ./pushgateway &

As a result, your Pushgateway should start as a background process.

me@schkn-ubuntu:~/softs/pushgateway/pushgateway-0.8.0.linux-amd64$ ./pushgateway &

[1] 22806
me@schkn-ubuntu:~/softs/pushgateway/pushgateway-0.8.0.linux-amd64$ 
INFO[0000] Starting pushgateway (version=0.8.0, branch=HEAD, revision=d90bf3239c5ca08d72ccc9e2e2ff3a62b99a122e)  source="main.go:65"
INFO[0000] Build context (go=go1.11.8, user=root@00855c3ed64f, date=20190413-11:29:19)  source="main.go:66"
INFO[0000] Listening on :9091.  source="main.go:108"

Nice!

From there, Pushgateway is listening to incoming metrics on port 9091.

b – Installing Prometheus

As described in the ‘Getting Started’ section of Prometheus’s website, head over to https://prometheus.io/download/ and run a simple wget command in order to get the Prometheus archive for your OS.

wget https://github.com/prometheus/prometheus/releases/download/v2.9.2/prometheus-2.9.2.linux-amd64.tar.gz

Now that you have the archive, extract it, and navigate into the main folder:

> tar xvzf prometheus-2.9.2.linux-amd64.tar.gz
> cd prometheus-2.9.2.linux-amd64/

As stated before, Prometheus scrapes ‘targets’ periodically to gather metrics from them. Targets (Pushgateway in our case) need to be configured via Prometheus’s configuration file.

> vi prometheus.yml

In the ‘global’ section, modify the ‘scrape_interval’ property down to one second.

global:
  scrape_interval:     1s # Set the scrape interval to every 1 second.

In the ‘scrape_configs’ section, add an entry to the targets property under the static_configs section.

static_configs:
            - targets: ['localhost:9090', 'localhost:9091']

Exit vi, and finally run the Prometheus executable from the folder:

> ./prometheus

Prometheus should start within a few seconds. To make sure that everything went correctly, you can head over to http://localhost:9090/graph.

If you have access to Prometheus’s web console, it means that everything went just fine.

You can also verify that Pushgateway is correctly configured as a target in ‘Status’ > ‘Targets’ in the Web UI.


c – Installing Grafana

If you are looking for a tutorial to install Grafana on Linux, just follow the link!

Also Check: How To Create a Grafana Dashboard? (UI + API methods)

Last but not least, we are going to install Grafana v6.2. Head over to https://grafana.com/grafana/download/beta.

As done before, run a simple wget command to get it.

> wget https://dl.grafana.com/oss/release/grafana_6.2.0-beta1_amd64.deb
> sudo dpkg -i grafana_6.2.0-beta1_amd64.deb

Now that you have installed the deb package, Grafana should run as a service on your instance.

You can verify it by running the following command:

> sudo systemctl status grafana-server
● grafana-server.service - Grafana instance
   Loaded: loaded (/usr/lib/systemd/system/grafana-server.service; disabled; vendor preset: enabled)
   Active: active (running) since Thu 2019-05-09 10:44:49 UTC; 5 days ago
     Docs: http://docs.grafana.org

You can also check http://localhost:3000 which is the default address for Grafana Web UI.

Now that you have Grafana on your instance, we have to configure Prometheus as a datasource.

You can configure your datasource this way :


That’s it!

Click on ‘Save and Test’ and make sure that your datasource is working properly.

Building a bash script to retrieve metrics

Your next task is to build a simple bash script that retrieves metrics such as the CPU usage and the memory usage for individual processes.

Your script can be defined as a cron task that will run every second later on.

To perform this task, you have multiple candidates.

You could run top commands every second, parse it using sed and send the metrics to Pushgateway.

The hard part with top is that it runs on multiple iterations, providing a metrics average over time. This is not really what we are looking for.

Instead, we are going to use the ps command and more precisely the ps aux command.


This command exposes individual CPU and memory usages as well as the exact command behind it.

This is exactly what we are looking for.

But before going any further, let’s have a look at what Pushgateway is expecting as input.

Pushgateway, pretty much like Prometheus, works with key-value pairs: the key describes the metric monitored and the value is self-explanatory.

Here are some examples:

cpu_usage 13.4
cpu_usage{process="java"} 13.4

As you can tell, the first form simply describes the CPU usage, but the second one describes the CPU usage for the java process.

Adding labels is a way of specifying what your metric describes more precisely.

Now that we have this information, we can build our final script.

As a reminder, our script will perform a ps aux command, parse the result, transform it and send it to the Pushgateway via the syntax we described before.

Create a script file, give it some rights and navigate to it.

> touch better-top
> chmod u+x better-top
> vi better-top

Here’s the script:

#!/bin/bash
# Gather the CPU usage of every process and push it to the Pushgateway
z=$(ps aux | tail -n +2)   # skip the ps header line
while read -r line
do
   var=$var$(echo "$line" | awk '{print "cpu_usage{process=\""$11"\", pid=\""$2"\"}", $3}')$'\n';
done <<< "$z"
curl -X POST -H "Content-Type: text/plain" --data "$var" http://localhost:9091/metrics/job/top/instance/machine

If you want the same script for memory usage, simply change the ‘cpu_usage’ label to ‘memory_usage’ and $3 to $4.

So what does this script do?

First, it performs the ps aux command we described before.

Then, it iterates on the different lines and formats them according to the key-labeled value pair format we described before.

Finally, everything is concatenated and sent to the Pushgateway via a simple curl command.

Simple, isn’t it?
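If you want to see the awk formatting step in isolation, you can feed it a single sample line in the ps aux layout (the process values below are made up for illustration):

```shell
# Format one sample "ps aux" line into a Pushgateway key-value pair
echo "root 42 3.0 1.5 169096 13512 ? Ss 10:00 0:01 /usr/bin/myapp" \
  | awk '{print "cpu_usage{process=\""$11"\", pid=\""$2"\"}", $3}'
# -> cpu_usage{process="/usr/bin/myapp", pid="42"} 3.0
```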

As you can tell, this script gathers all metrics for our processes but it only runs one iteration.

For now, we are simply going to execute it every one second using a sleep command.

Later on, you are free to create a service to execute it every second with a timer (at least with systemd).

Interested in systemd? I made a complete tutorial about monitoring them with Chronograf

> while sleep 1; do ./better-top; done;
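As a sketch of the systemd alternative mentioned above, a service and timer pair could look like the following. The unit names and the script path are assumptions; adapt them to where you stored the script:

```
# /etc/systemd/system/better-top.service
[Unit]
Description=Push process metrics to the Pushgateway

[Service]
Type=oneshot
ExecStart=/opt/scripts/better-top

# /etc/systemd/system/better-top.timer
[Unit]
Description=Run better-top every second

[Timer]
OnBootSec=1s
OnUnitActiveSec=1s

[Install]
WantedBy=timers.target
```

You would then enable the schedule with “sudo systemctl enable --now better-top.timer”.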

Now that our metrics are sent to the Pushgateway, let’s see if we can explore them in Prometheus Web Console.

Head over to http://localhost:9090. In the ‘Expression’ field, simply type ‘cpu_usage’. You should now see all metrics in your browser.

Congratulations! Your CPU metrics are now stored in Prometheus TSDB.

processes web console

Building An Awesome Dashboard With Grafana

Now that our metrics are stored in Prometheus, we simply have to build a Grafana dashboard in order to visualize them.

We will use the latest panels available in Grafana v6.2: the vertical and horizontal bar gauges, the rounded gauges, and the classic line charts.

For your comfort, I have annotated the final dashboard with numbers from 1 to 4.

They will match the different subsections of this chapter. If you’re only interested in a certain panel, head over directly to the corresponding subsection.


1. Building Rounded Gauges

Here’s a closer view of the rounded gauges in our panel.

For now, we are going to focus on the CPU usage of our processes as it can be easily mirrored for memory usage.

With those panels, we are going to track two metrics: the current CPU usage of all our processes and the average CPU usage.

In order to retrieve those metrics, we are going to perform PromQL queries on our Prometheus instance.

So.. what’s PromQL?

PromQL is the query language designed for Prometheus.

Similar to what you can find on InfluxDB instances with InfluxQL (or IFQL), PromQL queries can aggregate data using functions such as the sum, the average, and the standard deviation.

The syntax is very easy to use as we are going to demonstrate it with our panels.

a – Retrieving the current overall CPU usage

In order to retrieve the current overall CPU usage, we are going to use the PromQL sum function.

At a given moment in time, our overall CPU usage is simply the sum of individual usages.

Here’s the cheat sheet:

sum(cpu_usage)

b – Retrieving the average CPU usage

Not much work to do for average CPU usage, you are simply going to use the  avg function of PromQL. You can find the cheat sheet below.

avg(cpu_usage)

2. Building Horizontal Gauges

Horizontal gauges are one of the latest additions of Grafana v6.2.

Our goal with this panel is to expose the top 10 most consuming processes of our system.

To do so, we are going to use the topk function that retrieves the top k elements for a metric.

Similar to what we did before, we are going to define thresholds in order to be informed when a process is consuming too many resources.

topk(10, cpu_usage)

3. Building Vertical Gauges

Vertical gauges are very similar to horizontal gauges, we only need to tweak the orientation parameter in the visualization panel of Grafana.

Also, we are going to monitor our memory usage with this panel so the query is slightly different.

Here’s the cheat sheet:

topk(10, memory_usage)

Awesome! We have made great progress so far, with one panel to go.

4. Building Line Graphs

Line graphs have been in Grafana for a long time and this is the panel that we are going to use to have a historical view of how our processes have evolved over time.

This graph can be particularly handy when:

  • You had some outages in the past and would like to investigate which processes were active at the time.
  • A certain process died but you want to have a view of its behavior right before it happened

When it comes to troubleshooting exploration, it would honestly need a whole article (especially with the recent Grafana Loki addition).

Okay, here’s the final cheat sheet!

cpu_usage

From there, we have all the panels that we need for our final dashboard.

You can arrange them the way you want or simply take some inspiration from the one we built.

Bonus: explore data using ad hoc filters

Real-time data is interesting to see – but the real value comes when you are able to explore your data.

In this bonus section, we are not going to use the ‘Explore’ function (maybe in another article?), we are going to use ad hoc filters.

With Grafana, you can define variables associated with a graph. You have many different options for variables: you can for example define a variable for your data source that would allow you to dynamically switch the datasource in a query.

In our case, we are going to use simple ad hoc filters to explore our data.


From there, simply click on ‘Variables’ in the left menu, then click on ‘New’.


As stated, ad hoc filters are automatically applied to dashboards that target the Prometheus datasource. Back to our dashboard.

Take a look at the top left corner of the dashboard.


Filters!

Now let’s say that you want the performance of a certain process in your system: let’s take Prometheus itself for example.

Simply navigate into the filters and see the dashboard updating accordingly.


Now you have a direct look at how Prometheus is behaving on your instance.

You could even go back in time and see how the process behaved, independently from its pid!

A quick word to conclude

From this tutorial, you now have a better understanding of what Prometheus and Grafana have to offer.

You now have a complete monitoring dashboard for one instance, and it is only a small step from there to scale it out and monitor an entire cluster of Unix instances.

DevOps monitoring is definitely an interesting subject – but it can turn into a nightmare if you do it wrong.

This is exactly why we write those articles and build those dashboards: to help you reach the maximum efficiency of what those tools have to offer.

We believe that great tech can be enhanced with useful showcases.

Do you?

If you agree, join the growing list of DevOps engineers who chose this path.

It is as simple as subscribing to our newsletter: get those tutorials right into your mailbox!

I have written similar articles, so if you enjoyed this one, make sure to read the others:

Until then, have fun, as always.

Input Output Redirection on Linux Explained

On Linux, input and output redirection is something you perform very often in daily work. It is one of the core concepts of Unix-based systems, and a powerful way to improve productivity on the command line.

In this tutorial, we will discuss in detail the standard input/output redirections on Linux. Most Unix system commands take input from your terminal and send the resulting output back to your terminal.

Moreover, this guide covers how the Linux kernel represents files and how processes work, so that you get a deep and complete understanding of what input and output redirection is.

If you follow this Input Output Redirection on Linux tutorial until the end, you will have a good grip on the following concepts:

  • What file descriptors are and how they relate to standard inputs and outputs;
  • How to check the standard input and output of a given process on Linux;
  • How to redirect standard input and output on Linux;
  • How to use pipelines to chain inputs and outputs for long commands.

So without further ado, let’s take a look at what file descriptors are and how files are conceptualized by the Linux kernel.

Get Ready?

What is Redirection?

On Linux, redirection is a feature that lets you change the standard input and output devices when executing a command. The basic workflow of any Linux command is that it takes an input and produces an output.

  • The standard input (stdin) device is the keyboard.
  • The standard output (stdout) device is the screen.

With redirection, the standard input/output can be changed.
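As a minimal illustration of the idea (the file name hello.txt is just an example):

```shell
# By default, the output of echo goes to the screen (stdout).
echo "hello"

# With output redirection, the same output goes to a file instead.
echo "hello" > hello.txt
cat hello.txt    # hello
```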

What are Linux processes?

Before understanding input and output on a Linux system, it is very important to have some basics about what Linux processes are and how they interact with your hardware.

If you are only interested in input and output redirection command lines, you can jump to the next sections. This section is for system administrators willing to go deeper into the subject.

a – How are Linux processes created?

You probably already heard it before, as it is a pretty popular adage, but on Linux, everything is a file.

It means that processes, devices, keyboards, and hard drives are all represented as files living on the filesystem.

The Linux Kernel may differentiate those files by assigning them a file type (a file, a directory, a soft link, or a socket for example) but they are stored in the same data structure by the Kernel.

As you probably already know, Linux processes are created as forks of existing processes which may be the init process or the systemd process on more recent distributions.

When creating a new process, the Linux kernel forks a parent process and duplicates its task structure, described in the next section.

b – How are files stored on Linux?

I believe that a diagram speaks a hundred words, so here is how files are conceptually stored on a Linux system.

As you can see, for every process created, a new task_struct is created on your Linux host.

This structure holds two references, one for filesystem metadata (called fs) where you can find information such as the filesystem mask for example.

The other one is a structure for files holding what we call file descriptors.

It also contains metadata about the files used by the process but we will focus on file descriptors for this chapter.

In computer science, file descriptors are references to files (or other I/O resources) currently opened by the kernel on behalf of a process.

But what do those files even represent?

c – How are file descriptors used on Linux?

As you probably already know, the kernel acts as an interface between your software and your hardware devices (a screen, a mouse, a CD-ROM, or a keyboard).

It means that your Kernel is able to understand that you want to transfer some files between disks, or that you may want to create a new video on your secondary drive for example.

As a consequence, the Linux Kernel is permanently moving data from input devices (a keyboard for example) to output devices (a hard drive for example).

Using this abstraction, processes are essentially a way to manipulate inputs (as read operations) to render various outputs (as write operations).

But how do processes know where data should be sent to?

Processes know where data should be sent to using file descriptors.

On Linux, the file descriptor 0 (or fd[0]) is assigned to the standard input.

Similarly the file descriptor 1 (or fd[1]) is assigned to the standard output, and the file descriptor 2 (or fd[2]) is assigned to the standard error.

This is a constant on Linux systems: for every process, the first three file descriptors are reserved for the standard input, output, and error.

Those file descriptors are mapped to devices on your Linux system.

Devices are registered when the kernel is instantiated, and they can be seen in the /dev directory of your host.

If you were to take a look at the file descriptors of a given process, let’s say a bash process for example, you can see that file descriptors are essentially soft links to real hardware devices on your host.

As you can see, when isolating the file descriptors of my bash process (that has the 5151 PID on my host), I am able to see the devices interacting with my process (or the files opened by the kernel for my process).

In this case, /dev/pts/0 represents a terminal which is a virtual device (or tty) on my virtual filesystem. In simpler terms, it means that my bash instance (running in a Gnome terminal interface) waits for inputs from my keyboard, prints them to the screen, and executes them when asked to.
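You can reproduce this observation on your own host; $$ expands to the PID of the current shell, so the following command lists the devices behind its first three file descriptors:

```shell
# /proc/<pid>/fd contains one symbolic link per open file descriptor.
# Descriptors 0, 1 and 2 are the standard input, output and error.
ls -l /proc/$$/fd/0 /proc/$$/fd/1 /proc/$$/fd/2
```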

Now that you have a clearer understanding of file descriptors and how they are used by processes, we are ready to describe how to do input and output redirection on Linux.

What is Output redirection on Linux?

Input and output redirection is a technique used in order to redirect/change standard inputs and outputs, essentially changing where data is read from, or where data is written to.

For example, if I execute a command on my Linux shell, the output might be printed directly to my terminal (a cat command for example).

However, with output redirection, I could choose to store the output of my cat command in a file for long-term storage.

a – How does output redirection work?

Output redirection is the act of redirecting the output of a process to a chosen place like files, databases, terminals, or any devices (or virtual devices) that can be written to.

As an example, let’s have a look at the echo command.

By default, the echo function will take a string parameter and print it to the default output device.

As a consequence, if you run the echo function in a terminal, the output is going to be printed in the terminal itself.

Now let’s say that I want the string to be printed to a file instead, for long-term storage.

To redirect standard output on Linux, you have to use the “>” operator.

As an example, to redirect the standard output of the echo function to a file, you should run

$ echo junosnotes > file

If the file does not exist, it will be created.

Next, you can have a look at the content of the file and see that the “junosnotes” string was correctly printed to it.

Alternatively, it is possible to redirect the output by using the “1>” syntax.

$ echo test 1> file

b – Output Redirection to files in a non-destructive way

When redirecting the standard output to a file, you probably noticed that it erases the existing content of the file.

Sometimes, it can be quite problematic as you would want to keep the existing content of the file, and just append some changes to the end of the file.

To append content to a file using output redirection, use the “>>” operator rather than the “>” operator.

Given the example we just used before, let’s add a second line to our existing file.

$ echo a second line >> file

Great!

As you can see, the content was appended to the file, rather than overwriting it completely.

c – Output redirection gotchas

When dealing with output redirection, you might be tempted to run a command on a file and redirect the output to that same file.

Redirecting to the same file

echo 'This a cool butterfly' > file
sed 's/butterfly/parrot/g' file > file

What do you expect to see in the file?

The result is that the file is completely empty.

Why?

When parsing your command, the shell sets up redirections before executing the command itself.

It means that it does not wait for the end of the sed command before opening your file for writing.

Instead, the shell opens your file, erases all the content inside it, and only then runs the sed operation.

As sed now sees an empty file (because all the content was erased by the output redirection), there is nothing to substitute.

As a consequence, nothing is written to the file, and the content stays completely empty.

In order to redirect the output to the same file, you may want to use pipes or more advanced commands such as

command … input_file > temp_file  &&  mv temp_file input_file
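Applied to the butterfly example above, here is a sketch of the temporary-file approach (file.tmp is an arbitrary name):

```shell
echo 'This a cool butterfly' > file

# Write sed's result to a temporary file first, then replace the
# original only if sed succeeded.
sed 's/butterfly/parrot/g' file > file.tmp && mv file.tmp file

cat file    # This a cool parrot
```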

Protecting a file from being overwritten

In Linux, it is possible to protect files from being overwritten by the “>” operator.

You can protect your files by setting the “noclobber” parameter on the current shell environment.

$ set -o noclobber

The short form of the same option is also available:

$ set -C

Note: to re-enable output redirection, simply run set +C

As you can see, the file cannot be overwritten when this parameter is set.

If I really want to force the overwrite, I can use the “>|” operator.
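Here is a short session illustrating both behaviors (demo-file is an arbitrary name):

```shell
set -o noclobber           # refuse to overwrite files with ">"

echo first > demo-file
echo second > demo-file    # error: cannot overwrite existing file
echo third >| demo-file    # ">|" bypasses noclobber

set +o noclobber           # restore the default behavior
```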

What is Input Redirection on Linux?

Input redirection is the act of redirecting the input of a process to a given device (or virtual device) so that it starts reading from this device and not from the default one assigned by the Kernel.

a – How does input redirection work?

For instance, when you open a terminal, you interact with it using your keyboard.

However, there are some cases where you might want to work with the content of a file because you want to programmatically send the content of the file to your command.

To redirect the standard input on Linux, you have to use the “<” operator.

As an example, let’s say that you want to use the content of a file and run a special command on them.

In this case, I am going to use a file containing domains, and the command will be a simple sort command.

In this way, domains will be sorted alphabetically.

First, let’s have a look at the content of the domains file:

$ cat domains

If I want to sort those domains, I can redirect the content of the domains file to the standard input of the sort function.

$ sort < domains

With this syntax, the content of the domains file is redirected to the input of the sort function. It is quite different from the following syntax

$ sort domains

Even if the output may be the same, in this case, the sort function takes a file as a parameter.

In the input redirection example, the sort function is called with no parameter.

As a consequence, when no file parameter is provided, the function reads from the standard input by default.

In this case, it is reading the content of the file provided.
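Related to input redirection, most shells also support here-documents: the “<<” operator feeds the lines between two delimiters to the standard input of the command, without needing a file at all. A quick sketch with sort:

```shell
# The lines between the "EOF" markers become sort's standard input.
sort << EOF
junosnotes.com
google.com
apple.com
EOF
```

The output is the three domains in alphabetical order.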

b – Redirecting standard input with a file containing multiple lines

If your file contains multiple lines, you can still redirect the standard input to your command for every single line of your file.

Let’s say for example that you want to have a ping request for every single entry in the domains file.

By default, the ping command expects a single IP or URL to be pinged.

You can, however, redirect the content of your domains file to a custom loop that will execute a ping for every entry.

$ ( while read domain; do ping -c 2 "$domain"; done ) < domains

c – Combining input redirection with output redirection

Now that you know that standard input can be redirected to a command, it is useful to mention that input and output redirection can be done within the same command.

Now that you are performing ping commands, you are getting the ping statistics for every single website on the domains list.

The results are printed on the standard output, which is in this case the terminal.

But what if you wanted to save the results to a file?

This can be achieved by combining input and output redirections on the same command.

$ ( while read ip; do ping -c 2 $ip; done ) < domains > stats.txt

Great! The results were correctly saved to a file and can be analyzed later on by other teams in your company.

d – Discarding standard output completely

In some cases, it might be handy to discard the standard output completely.

It may be because you are not interested in the standard output of a process or because this process is printing too many lines on the standard output.

To discard standard output completely on Linux, redirect the standard output to /dev/null.

Redirecting to /dev/null causes the data written to it to be discarded immediately.

$ cat file > /dev/null

Note: Redirecting to /dev/null does not erase the content of the file but it only discards the content of the standard output.
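To discard the error output as well, combine the /dev/null redirection with an error redirection (described in the next section). A sketch using ls:

```shell
# Discard only the standard output; errors still reach the terminal.
ls /etc /nonexistent > /dev/null

# Discard both the standard output and the standard error.
ls /etc /nonexistent > /dev/null 2>&1
```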

What is standard error redirection on Linux?

Finally, after input and output redirection, let’s see how standard error can be redirected.

a – How does standard error redirection work?

Very similarly to what we saw before, error redirection is redirecting errors returned by processes to a defined device on your host.

For example, if I am running a command with bad parameters, what I am seeing on my screen is an error message and it has been processed via the file descriptor responsible for error messages (fd[2]).

Note that there are no trivial ways to differentiate an error message from a standard output message in the terminal, you will have to rely on the programmer sending error messages to the correct file descriptor.

error-redirection-diagram

To redirect error output on Linux, use the “2>” operator

$ command 2> file

Let’s use the example of the ping command with an invalid hostname in order to generate an error message on the terminal.

Now let’s see a version where the error output is redirected to an error file:

$ ping invalid-host 2> error-file

Here, the “2>” operator redirects errors to the “error-file” file.

If I were to redirect only the standard output to a file, the error message would still be printed to my terminal and nothing would be added to the “normal-file” output:

$ ping invalid-host > normal-file
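More generally, you can split the two streams into two different files (out.log and err.log are arbitrary names):

```shell
# out.log receives the standard output, err.log receives the errors.
ls /etc /nonexistent-directory > out.log 2> err.log

cat out.log    # the listing of /etc
cat err.log    # the "No such file or directory" error message
```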

b – Combining standard error with standard output

In some cases, you may want to combine the error messages with the standard output and redirect it to a file.

It can be particularly handy because some programs are not only returning standard messages or error messages but a mix of two.

Let’s take the example of the find command.

If I am running a find command on the root directory without sudo rights, I might be unauthorized to access some directories, like processes that I don’t own for example.

As a consequence, there will be a mix of standard messages (the files owned by my user) and error messages (when trying to access a directory that I don’t own).

In this case, I want to have both outputs stored in a file.

To redirect the standard output as well as the error output to a file, use the “2>&1” syntax after a preceding “>”.

$ find / -user junosnotes > file 2>&1

Alternatively, you can use the “&>” syntax as a shorter way to redirect both the output and the errors.

$ find / -user junosnotes &> file

So what happened here?

When bash sees multiple redirections, it processes them from left to right.

As a consequence, the output of the find function is first redirected to the file.

Next, the second redirection is processed and redirects the standard error to the standard output (which was previously assigned to the file).
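The ordering is easy to verify: reversing the two redirections does not combine the streams (the file names below are arbitrary).

```shell
# "> file 2>&1": stdout is moved to the file first, THEN stderr is
# pointed at stdout, so both streams end up in the file.
ls /nonexistent-directory > both.log 2>&1

# "2>&1 > file": stderr is pointed at the terminal (stdout's target at
# that moment) BEFORE stdout is moved, so errors stay on the screen and
# the file receives only the standard output.
ls /nonexistent-directory 2>&1 > only-stdout.log
```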

What are pipelines on Linux?

Pipelines are a bit different from redirections.

When doing standard input or output redirection, you were essentially replacing the default input or output with a custom file.

With pipelines, you are not overwriting inputs or outputs, but you are connecting them together.

Pipelines are used on Linux systems to connect processes together, linking standard outputs from one program to the standard input of another.

Multiple processes can be linked together with pipelines (or pipes).

Pipes are heavily used by system administrators in order to create complex queries by combining simple queries together.

One of the most popular examples is probably counting the number of lines in a text file, after applying some custom filters on the content of the file.

Let’s go back to the domains file we created in the previous sections, and let’s add some .net domains to it.

Now let’s say that you want to count the numbers of .com domains in the file.

How would you perform that? By using pipes.

First, you want to filter the results to isolate only the .com domains in the file. Then, you want to pipe the result to the “wc” command in order to count them.

Here is how you would count .com domains in the file (the dot is escaped so that grep matches a literal dot, and not any character):

$ grep '\.com' domains | wc -l

To recap: grep filters the file to keep only the lines containing “.com”, and wc -l counts the lines it receives.

Awesome!
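Pipelines are not limited to two commands. As another sketch, here is a chain of four commands that counts how many accounts use each login shell declared in /etc/passwd:

```shell
# cut extracts the 7th colon-separated field (the login shell),
# sort groups identical shells together, uniq -c counts each group,
# and the final sort orders the result by count, descending.
cut -d: -f7 /etc/passwd | sort | uniq -c | sort -rn
```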

Conclusion

In today’s tutorial, you learned what input and output redirection is and how it can be effectively used to perform administrative operations on your Linux system.

You also learned about pipelines (or pipes) that are used to chain commands in order to execute longer and more complex commands on your host.

If you are curious about Linux administration, we have a whole category dedicated to it on JunosNotes, so make sure to check it out!

How To Install and Configure Blackbox Exporter for Prometheus

Whenever you are working with Prometheus, you should be familiar with the Blackbox exporter, as it helps you monitor endpoints with ease.

If you are working as a network engineer, you may need to measure DNS response times to diagnose network latency issues. In order to fix such problems, you need comprehensive monitoring of your ICMP requests to collect more data regarding network health.

Today’s tutorial covers the installation and configuration of the Blackbox exporter with Prometheus, along with the basics: what the Blackbox exporter is, and how to monitor endpoints with it.

Let’s get the prerequisites ready and start installing and configuring the Blackbox exporter with Prometheus.

What You Are Going To Learn?

By following this tutorial until the end, here are the concepts that you are going to learn about.

  • How to install Prometheus securely using authentication and TLS encryption;
  • What the Blackbox exporter is and how it differs from application instrumenting;
  • How to install the Blackbox exporter as a service;
  • How to bind the Blackbox exporter with Prometheus;
  • How to monitor your first HTTP endpoint.

That’s quite a long program, let’s head to it.

Also Check: Windows Server Monitoring using Prometheus and WMI Exporter

Installing Prometheus Securely

First, have a look at our previous tutorial on how to install Prometheus on Linux operating systems. If you followed it, your Prometheus server is currently sitting behind a reverse proxy with authentication enabled.

In this tutorial, we are going to use the https://localhost:1234 URL in order to reach Prometheus with the Blackbox exporter.

However, if you configured Prometheus on another URL, you will need to change the configuration options provided in this tutorial.

What is the Blackbox exporter with Prometheus?

The Blackbox exporter is a probing exporter: a tool that allows engineers to monitor endpoints such as HTTP, HTTPS, DNS, ICMP, or TCP endpoints.

By using the Blackbox exporter, we are able to scrape details about all the endpoints on the targets, such as response time, status code, SSL certificate expiry, and DNS lookup latency.

a – Blackbox general concepts

The Blackbox exporter provides metrics about HTTP latencies and DNS lookup latencies, as well as statistics about SSL certificate expiration.

The Blackbox exporter is mainly used to measure response times.

As a consequence, if you are looking for more detailed metrics for your application (for example a Python application), you will have to instrument your application.

It is noteworthy to say that the Blackbox exporter can be bound with the AlertManager and Prometheus in order to have detailed alerts when one endpoint goes down.

When running, the Blackbox exporter is going to expose an HTTP endpoint that can be used in order to monitor targets over the network. By default, the Blackbox exporter exposes the /probe endpoint that is used to retrieve those metrics.

For example, if my Blackbox exporter is running on port 9115, and if I query metrics for google.com, this is the endpoint that I can query from the exporter.

http://localhost:9115/probe?target=https://google.com&module=https_2xx

As you probably understand, the Blackbox exporter is a standalone tool, it does not need any other tools to run.

On the other hand, Prometheus binds to the exporter. You are going to define ‘targets’ in a dedicated Blackbox configuration section, and Prometheus will issue requests to the probe endpoint we saw earlier.

Prometheus is acting as a way to automate requests and as a way to store them for long-term storage.

b – What are the Blackbox modules?

As with most of the Prometheus ecosystem tools, the BlackBox exporter is configured with a YAML configuration file.

The Blackbox exporter configuration file is made of modules.

A module can be seen as one probing configuration for the Blackbox exporter.

As a consequence, if you choose to have an HTTP prober checking for successful HTTPS responses (2xx HTTP codes for example), the configuration will be summarized within the same module.

The documentation for modules is available here. As you can see, you have many options for all the probers that are available to you.

Monitoring an endpoint’s availability results in either a success or a failure.

As a consequence, the Blackbox exporter will report if it successfully probed the targets it was assigned (0 for a failure and 1 for success).

Given the value of the probing, you can choose to define alerts in order to be notified (on Slack for example) when an endpoint goes down.

c – How does the Blackbox exporter differ from application instrumenting?

As a reminder, application instrumenting means that you are adding client libraries to an application in order to expose metrics to Prometheus.

Client libraries are available for most programming languages and frameworks such as Go, Python, Java, Javascript, or Ruby.

The main difference between the Blackbox exporter and application instrumenting is that the Blackbox exporter only focuses on availability while instrumentations can go more into details about the performance.

With instrumentation, you can choose to monitor the performance of a single SQL query for example, or a single server function.

The Blackbox exporter will not be able to expose this granularity, only a request failure or a request success.

Now that you have some concepts about the Blackbox exporter, let’s start installing it as a service on our host.

Installing the Blackbox exporter for Prometheus

First of all, you are going to download the latest version of the Blackbox exporter available for Prometheus.

a – Downloading the Blackbox exporter

To download the Blackbox exporter, head over to the Prometheus downloads page.

Filter your results by choosing Linux as the current operating system.

Scroll down a bit, and find the blackbox exporter executable, right below the AlertManager section.

As you can see, at the time of this tutorial, the Blackbox exporter is available on the 0.14.0 version.

Click on the archive to download it. If you are more familiar with wget, copy the link and run this command.

$ wget https://github.com/prometheus/blackbox_exporter/releases/download/v0.14.0/blackbox_exporter-0.14.0.linux-amd64.tar.gz

The archive should be correctly downloaded on your host. To extract the archive, run the following command

$ tar xvzf blackbox_exporter-0.14.0.linux-amd64.tar.gz

Besides the license and the notice files, the archive contains two important files:

  • blackbox_exporter: the executable for the Blackbox exporter. This is the executable that you are going to launch via your service file in order to probe targets;
  • blackbox.yml: configuration file for the Blackbox exporter. This is where you are going to define your modules and probers.

Documentation for the blackbox_exporter executable is available when running the following command

$ cd blackbox_exporter-0.14.0.linux-amd64
$ ./blackbox_exporter -h

As you can see, the configuration for the Blackbox exporter is pretty straightforward. Those are the flags that we are going to use when defining our systemd service file.

b – Create a service file for the Blackbox exporter

As a meticulous system administrator, you are not going to launch executables from your home directory.

Instead, you are going to define service files and define the start and restart policies for them.

First, move the executable to a location where it is accessible system-wide.

$ sudo mv blackbox_exporter /usr/local/bin

Note: By moving files to the /usr/local/bin path, all users have access to the blackbox exporter binary. For the binary to be restricted to your own user account, store it in a directory that only your own PATH environment variable includes.

Next, create configuration folders for your blackbox exporter.

$ sudo mkdir -p /etc/blackbox
$ sudo mv blackbox.yml /etc/blackbox

For safety purposes, the Blackbox exporter is going to be run by its own user account (named blackbox here)

As a consequence, we need to create a user account for the Blackbox exporter.

$ sudo useradd -rs /bin/false blackbox

Then, make sure that the blackbox binary can be run by your newly created user.

$ sudo chown blackbox:blackbox /usr/local/bin/blackbox_exporter

Give the correct permissions to your configuration folders recursively.

$ sudo chown -R blackbox:blackbox /etc/blackbox

Now that everything is set, it is time for you to create your service file.

To create the Blackbox exporter service, head over to the /lib/systemd/system folder and create a service named blackbox.service

$ cd /lib/systemd/system
$ sudo touch blackbox.service

Edit your service file, and paste the following content into it.

$ sudo nano blackbox.service

[Unit]
Description=Blackbox Exporter Service
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
User=blackbox
Group=blackbox
ExecStart=/usr/local/bin/blackbox_exporter \
  --config.file=/etc/blackbox/blackbox.yml \
  --web.listen-address=":9115"

Restart=always

[Install]
WantedBy=multi-user.target

Note: As you can see, the Blackbox exporter service has no dependencies on other tools of the Prometheus ecosystem, or on Prometheus itself.

Save your service file, and make sure that your service is enabled at boot time.

$ sudo systemctl enable blackbox.service
$ sudo systemctl start blackbox.service

For now, the Blackbox exporter is not configured to scrape any targets, but we are going to add a few ones in the next section.

If your service is correctly running, you can check the metrics gathered by issuing a request to the HTTP API.

$ curl http://localhost:9115/metrics

If you navigate to the Blackbox exporter URL with a Web browser, you should see the exporter’s landing page.

Now that your Blackbox exporter is gathering metrics, it is time to bind it to Prometheus.

c – Binding the Blackbox exporter with Prometheus

Important Note: In this section, Prometheus is going to scrape the Blackbox Exporter to gather metrics about the exporter itself. To configure Prometheus to scrape HTTP targets, head over to the next sections.

To bind the Blackbox exporter with Prometheus, you need to add it as a scrape target in the Prometheus configuration file.

If you follow the Prometheus setup tutorial, your configuration file is stored at /etc/prometheus/prometheus.yml

Edit this configuration file, and amend the following changes

$ sudo nano /etc/prometheus/prometheus.yml

global:
  scrape_interval:     15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
    - targets: ['localhost:9090', 'localhost:9115']

With Prometheus, you don’t need to restart the systemd service for the configuration to update.

You can instead send a signal to the process for it to reload its configuration.

In order to send a SIGHUP signal to your Prometheus process, identify the PID of your Prometheus server.

$ ps aux | grep prometheus
schkn  4431  0.0  0.0  14856  1136 pts/0    S+   09:34   0:00 /bin/prometheus --config.file=...

The PID of my Prometheus process is 4431. Send a SIGHUP signal to this process for the configuration to be reloaded.

$ sudo kill -HUP 4431

Head over to your Prometheus targets configuration, and check that you are correctly scraping your Blackbox exporter.

Great!

Now that the Blackbox exporter is configured with Prometheus, it is time to add our first target.

As a matter of simplicity, we are going to monitor the HTTP endpoint of Prometheus itself with the Blackbox exporter.

Monitoring HTTPS endpoints with the Blackbox Exporter

In our setup, Prometheus is currently sitting behind a reverse proxy (NGINX) configured with self-signed certificates.

This is the endpoint we are going to monitor with the Blackbox Exporter.

a – Creating a Blackbox module

To monitor Prometheus, we are going to use the HTTP prober.

Head over to your Blackbox configuration file, erase its content and paste the following configuration.

modules:
  http_prometheus:
    prober: http
    timeout: 5s
    http:
      valid_http_versions: ["HTTP/1.1", "HTTP/2"]
      method: GET
      fail_if_ssl: false
      fail_if_not_ssl: true
      tls_config:
        insecure_skip_verify: true
      basic_auth:
        username: "username"
        password: "password"

Here are the details of the parameters we chose.

  • fail_if_not_ssl: As we are actively monitoring an HTTPS endpoint, we need to make sure that we are retrieving the page with SSL encryption. Otherwise, we count it as a failure;
  • insecure_skip_verify: If you followed our previous tutorial, we generated self-signed certificates. As a consequence, they cannot be verified by a certificate authority, so we skip the verification;
  • basic_auth: The reverse proxy endpoint is configured with a basic username/password authentication. The Blackbox exporter needs to be aware of those to probe the Prometheus server.
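
Before wiring it into Prometheus, you can test the module by querying the exporter’s /probe endpoint directly. The sketch below only assembles the URL (9115 is the exporter’s default port, and the target is our Prometheus endpoint); on a live host you would then fetch it with curl.

```shell
# Assemble the Blackbox exporter probe URL for our module and target.
MODULE="http_prometheus"
TARGET="https://127.0.0.1:1234"
PROBE_URL="http://localhost:9115/probe?module=${MODULE}&target=${TARGET}"
echo "$PROBE_URL"
# On a live host, fetch it to see the raw probe metrics:
#   curl -s "$PROBE_URL"
```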

Save your file, and check your configuration file with the Blackbox exporter binary itself.

$ blackbox_exporter --config.check

config-file-ok

Great!

Our configuration file seems to be correctly formatted.

Again, there is no need to restart the Blackbox Exporter service. Instead, send a simple SIGHUP signal to the process.

As with the Prometheus process, identify the Blackbox exporter PID.

$ ps aux | grep blackbox
devconnected 574  0.0  0.0  14856  1136 pts/0    S+   09:34   0:00 /usr/local/bin/blackbox_exporter --config.file=...

The PID of my Blackbox exporter process is 574.

Send a SIGHUP signal to this process for the configuration to be reloaded.

$ sudo kill -HUP 574

b – Binding the Blackbox Exporter Module in Prometheus

Now that the module is defined, it is time for Prometheus to start actively using it to monitor our target.

To do so, head over to the Prometheus configuration file, and paste the following changes.

scrape_configs:

...

  - job_name: 'blackbox'
    metrics_path: /probe
    params:
      module: [http_prometheus] 
    static_configs:
      - targets:
        - https://127.0.0.1:1234    # Target to probe with https.
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 127.0.0.1:9115  # The blackbox exporter's real hostname:port.

Save your changes, and send a SIGHUP signal to Prometheus for it to reload its configuration.

When Prometheus has reloaded, head over to https://localhost:1234/config and make sure that your changes were correctly saved.

config-pro

Great!

Prometheus will now start scraping our target.

To verify it, head over to the /graph endpoint, and issue the following PromQL request.

probe_success{instance="https://127.0.0.1:1234", job="blackbox"}

If your target is up, probe_success should return 1.

probe-success-1
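
Besides probe_success, the Blackbox exporter exposes other useful series you can graph or alert on; for example:

```promql
# Total probe duration, in seconds
probe_duration_seconds{instance="https://127.0.0.1:1234", job="blackbox"}

# HTTP status code returned by the target
probe_http_status_code{instance="https://127.0.0.1:1234", job="blackbox"}

# Unix timestamp at which the SSL certificate expires
probe_ssl_earliest_cert_expiry{instance="https://127.0.0.1:1234", job="blackbox"}
```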

Now that your Blackbox exporter is all set, let’s build a quick Grafana dashboard to visualize the results.

Visualizing HTTP metrics on Grafana

a – Installing Grafana on Linux

As always, if you need to install Grafana on your Linux host, make sure to read the dedicated tutorial.

We are not going to create the dashboard ourselves; instead, we are going to import an existing one that answers our needs.

b – Importing a Grafana dashboard

In order to import a Grafana dashboard, click on the “Plus” icon on the left menu, and click on “Import”.

import-dash 1

On the next window, select the Grafana.com dashboard option, and type the following dashboard ID: 7587.

import-dash-3

From there, Grafana should automatically detect your dashboard.

Select Prometheus as a datasource, and click on “Import“.

import-dash-2

That’s it! By clicking on “Import”, your dashboard was automatically created on Grafana.

This is what you should now see on your screen.

final-dash (1)

As you can see, this dashboard focuses on HTTP(s) metrics.

You can have metrics about the up status of your website, the current SSL status as well as the SSL expiry date.

You also have graphs showing the current latencies of your HTTP requests, as well as the average DNS lookup time.

Note that you can choose the target you want to monitor via the “Target” dropdown at the top of the dashboard. In our case, we have only one target, but this becomes quite handy when you are monitoring several hosts.

Conclusion

Congratulations, you have successfully installed the Blackbox Exporter with Prometheus. You also saw how to install Grafana and import your first dashboard.

This tutorial is only the beginning of your learning path on becoming a monitoring expert.

If you are looking for more monitoring guides, we have a complete section dedicated to them; check our website for more tutorials on various techniques & technologies.

I hope that you learned something new today.

Until then, have fun, as always.

AlertManager and Prometheus Complete Setup on Linux

This tutorial deals with the complete setup of AlertManager and Prometheus on Linux systems.

You can never tell in advance when things will go wrong, so taking safety measures ahead of time and being alerted as soon as something happens is essential when running modern monitoring solutions in your infrastructure.

You may want to receive an email when one of your hosts goes down or, for instance, get a message on your team Slack when an HTTPS certificate is about to expire.

So, setting up custom alerts is an important part of your monitoring infrastructure. In this tutorial, we are going to take a special look at the AlertManager and how it works with Prometheus, covering both the basic fundamentals and the main concepts.

Are you Ready?

What is Prometheus?

Prometheus is a monitoring tool created for recording real-time metrics in a time-series database. It is an open-source software project, written in Go. The Prometheus metrics are collected using HTTP pulls, allowing for higher performance and scalability.

What You Will Learn

If you read this tutorial until the end, you will learn all the following concepts:

  • How to install Prometheus securely, in HTTPS with authentication.
  • What the AlertManager is and how it binds with Prometheus.
  • How to install the AlertManager as a service, using HTTPS.
  • How to configure the AlertManager and create your first rules.

Again, that’s quite a long program, so let’s start working.

Installing Prometheus (with SSL and authentication)

The complete Prometheus and Grafana installation and configuration is already covered in one of our previous articles. Check out How To Install Prometheus with Docker on Ubuntu 18.04 for a better understanding.

If you followed that tutorial carefully, you should now have Prometheus configured behind a secure reverse proxy (NGINX in this case).

The authentication is also done on the reverse proxy side.

As a reference, the Prometheus Web UI was accessed using the following URL: https://localhost:1234.

When you are done, you can go to the next section.

What is the AlertManager with Prometheus?

The AlertManager is a tool that will allow you to create custom alerts with Prometheus, and define recipients for them.

a – AlertManager general concepts

The AlertManager is an alerting server that handles alerts provided by a set of clients (a Prometheus server for example) and dispatches them to a group of defined receivers (Slack, email, or Pagerduty for example).

As illustrated, the AlertManager is part of the Prometheus stack, but it is run as a standalone server aside from Prometheus.

By default, Prometheus will take care of sending alerts directly to the AlertManager if it is correctly configured as a Prometheus target.

If you are using clients different from Prometheus itself, the AlertManager exposes a set of REST endpoints that you can use to fire alerts.

The AlertManager API documentation is available here.

alert-manager-works

b – What are the AlertManager routes?

The AlertManager works with configuration files defined in YAML format.

At the top of your configuration file, you are going to define routes.

Routes are a set of paths that alerts take in order to determine which action should be associated with the alert. In short, you associate a route with a receiver.

The initial route, also called the “root route”, is a route that matches every single alert sent to the AlertManager.

A route can have siblings and children that are also routes themselves. This way, routes can be nested any number of times, each level defining a new action (or receiver) for the alert.

Each route defines receivers. Those receivers are the alert recipients: Slack, a mail service, PagerDuty, and so on.

As always, a schema is better than words.

New-Wireframe

c – How are the AlertManager routes evaluated?

Now that you have a better idea of what the AlertManager routes are, let’s see how they are evaluated.

On each route, you can define a continue attribute.

The continue attribute defines whether you want to keep evaluating sibling routes (routes belonging to the same level) once a route on that level has already matched.

Note that the continue attribute is not used to determine whether you want to go across child routes, but only sibling routes.

The AlertManager will evaluate child routes until there are no routes left or no route at a given level matches the current alert.

In that case, the AlertManager will take the configuration of the current node evaluated.

continue-attribute
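
As a sketch, here is what a small route tree could look like in the AlertManager configuration file. The receiver names and labels are hypothetical, and the matching receivers: entries would need to be defined in the same file.

```yaml
route:
  receiver: team-email            # root route: matches every alert
  routes:
    - match:
        severity: critical
      receiver: pagerduty-oncall
      continue: true              # also evaluate the next sibling route
    - match:
        team: backend
      receiver: slack-backend     # child routes could be nested here too
```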

Now that you have a better understanding of how alerts work on Prometheus, it is time for us to start configuring it on our Linux system.

Installing the AlertManager with Prometheus

First, we are going to download the latest version of the AlertManager and configure it as a server on our instance.

a – Downloading the AlertManager

In order to install the AlertManager, head over to the Prometheus downloads page.

At the top of the page, filter your results by choosing Linux as an operating system.

operating-systems

Scroll a bit, and find the AlertManager section, right below the Prometheus executable.

alert-manager-prometheus

Click on the archive to download it, or run a simple wget command to download the file.

$ wget https://github.com/prometheus/alertmanager/releases/download/v0.18.0/alertmanager-0.18.0.linux-amd64.tar.gz

You should now have the archive on your system.

Extract the files from the archive.

$ tar xvzf alertmanager-0.18.0.linux-amd64.tar.gz

alert-manager-archive

In the folder where you extracted your files, you should find the following entries:

  • amtool: the amtool is an executable that allows you to view or to modify the current state of the AlertManager. In other words, the amtool can silence alerts, expire silences, as well as import silences or query them. It can be seen as a utility to customize the AlertManager without directly modifying the configuration of your current alerts.
  • alertmanager: the executable for the alertmanager. This is the executable that you will run in order to start an AlertManager server on your instance.
  • alertmanager.yml: as its name suggests, this is the configuration file for the AlertManager. This configuration file already defines some example routes, but we will create our own alert file.

If you are interested, here’s the documentation of the amtool.

Now that you have downloaded the AlertManager, let’s see how you can launch it as a service.

b – Starting the AlertManager as a service

In order to start the AlertManager as a service, you are going to move the executables to the /usr/local/bin folder.

$ sudo mv amtool alertmanager /usr/local/bin

For the configuration files, create a new folder in /etc called alertmanager.

$ sudo mkdir -p /etc/alertmanager
$ sudo mv alertmanager.yml /etc/alertmanager

Create a data folder at the root directory, with an alertmanager folder inside.

$ sudo mkdir -p /data/alertmanager

Next, create a user for your upcoming service.

$ sudo useradd -rs /bin/false alertmanager

Give permissions to your newly created user for the AlertManager binaries.

$ sudo chown alertmanager:alertmanager /usr/local/bin/amtool /usr/local/bin/alertmanager

Give the correct permissions to those folders recursively.

$ sudo chown -R alertmanager:alertmanager /data/alertmanager /etc/alertmanager/*

Awesome! Time to create the service.

To create a Linux service (using systemd), head over to the /lib/systemd/system folder and create a service named alertmanager.service

$ cd /lib/systemd/system
$ sudo touch alertmanager.service

As with Prometheus, let’s first run the alertmanager executable with a “-h” flag to see the available options.

$ alertmanager -h

alertmanager-help

In this case, we are interested in a couple of options:

  • config.file: we need to set this option to the configuration file in the /etc/alertmanager folder;
  • storage.path: again, we defined a custom folder for data, which is /data/alertmanager;
  • web.external-url: if you followed the Prometheus setup entirely, your Prometheus instance is running behind a reverse proxy. In this case, we are going to set the URL for the AlertManager to be externally reachable. Note that this step is optional, but if you plan on reaching the AlertManager from the outside world, there is a chapter dedicated to it.

Edit your service file, and paste the following content inside.

$ sudo nano alertmanager.service

[Unit]
Description=Alert Manager
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
User=alertmanager
Group=alertmanager
ExecStart=/usr/local/bin/alertmanager \
  --config.file=/etc/alertmanager/alertmanager.yml \
  --storage.path=/data/alertmanager

Restart=always

[Install]
WantedBy=multi-user.target

Save your file, enable the service and start it.

$ sudo systemctl enable alertmanager
$ sudo systemctl start alertmanager

alertmanager-service

Now that the AlertManager is running, let’s verify that everything is running properly.

By default, the AlertManager is running on port 9093.

This is the Web UI that you should see on your instance.

alertmanager-web-ui

Not accessible on port 9093? Run a simple lsof command to determine which port the AlertManager is currently listening on.

Awesome! Our AlertManager is up and running.

Now, it is time to tell Prometheus to bind with the AlertManager.

c – Binding AlertManager with Prometheus

Go back to your Prometheus configuration file, and make the following changes.

$ sudo nano /etc/prometheus/prometheus.yml

alerting:
  alertmanagers:
  - static_configs:
    - targets:
      - localhost:9093

Restart Prometheus, and make sure that everything is running smoothly.

$ sudo systemctl restart prometheus
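
Note that the binding above only tells Prometheus where to send alerts; for anything to fire, Prometheus also needs alerting rules. As a hypothetical sketch, a minimal rule file (referenced from prometheus.yml through the rule_files section) could look like this:

```yaml
# /etc/prometheus/rules.yml -- hypothetical example rule file
groups:
  - name: availability
    rules:
      - alert: InstanceDown
        expr: up == 0          # fires when a scrape target stops responding
        for: 2m                # the condition must hold for 2 minutes
        labels:
          severity: critical
        annotations:
          summary: "Instance {{ $labels.instance }} is down"
```

With such a file in place, a firing alert would show up both in the Prometheus Web UI and on the AlertManager (port 9093).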

In the next chapter, we are going to configure it to be remotely accessible behind a reverse proxy. The next chapter is optional, so if you want to skip it, you can directly read the section on building our first alert with Slack.

Setting up a reverse proxy for the AlertManager (optional)

The reverse proxy setup with NGINX was already covered in our previous tutorial. (see Prometheus & Grafana setup)

Make sure to read this part and come back to this tutorial once you have everything ready.

a – Creating a proxy configuration file

Similar to our Prometheus proxy configuration file, we are going to create one for the AlertManager.

The conf.d directory is where we are going to create our reverse proxy configuration file for the AlertManager.

Create a new file in this directory called alertmanager.conf.

$ cd /etc/nginx/conf.d
$ sudo touch alertmanager.conf

Paste the following content in your file.

server {
    listen 3456;

    location / {
      proxy_pass           http://localhost:9093/;
    }
}

I chose port 3456 for my AlertManager, but feel free to choose a port that fits your infrastructure.

Save your configuration, and restart your NGINX server for the modifications to be applied.

$ sudo systemctl restart nginx

# Any errors? Inspect your journal
$ sudo journalctl -f -u nginx.service

This is what you should now see when browsing http://localhost:3456.

behind-proxy

Awesome, let’s have some reverse proxy authentication for the AlertManager now.

b – Setting up the reverse proxy authentication

If you want to use the Prometheus credentials file described in the previous tutorial, you can.

In this case, I am going to create a brand new authentication file dedicated to the AlertManager.

First, you are going to need the htpasswd utility in order to create the authentication file.

This utility comes with the apache2-utils package, so make sure to install it first.

$ sudo apt-get install apache2-utils
$ cd /etc/alertmanager
$ sudo htpasswd -c .credentials admin 

Again, choose a strong password as this endpoint is supposed to be accessible online.

credentials

Back to your NGINX configuration file (located at /etc/nginx/conf.d), change your server configuration to take into account the new credentials.

$ cd /etc/nginx/conf.d

server {
    listen 3456;

    location / {
      auth_basic           "AlertManager";
      auth_basic_user_file /etc/alertmanager/.credentials;
      proxy_pass           http://localhost:9093/;
    }
}

Great! You should now be asked for credentials when trying to reach http://localhost:3456.

c – Securing the AlertManager with TLS/SSL

We have already covered those steps in our Prometheus & Grafana guide, but in a similar way, we are going to create self-signed certificates and import them into NGINX.

Create Keys for the AlertManager

First, make sure that you have the gnutls packages available on your distribution.

As a reminder, here’s how to install them.

(Ubuntu & Debian)
$ sudo apt-get install gnutls-bin

Go to your /etc/ssl directory, and create a new folder for the AlertManager.

$ cd /etc/ssl
$ sudo mkdir alertmanager

Go to your new folder, and create a private key for the AlertManager.

$ sudo certtool --generate-privkey --outfile alertmanager-private-key.pem

When you are done, create a new certificate (also called the public key).

$ sudo certtool --generate-self-signed --load-privkey alertmanager-private-key.pem --outfile alertmanager-cert.pem
  # The certificate will expire in (days): 3650
  # Does the certificate belong to an authority? (Y/N): y
  # Will the certificate be used to sign other certificates? (Y/N): y
  # Will the certificate be used to sign CRLs? (y/N): y

Choose a validity that works for you, in this case, I chose 10 years.

Now that your keys are created, let’s add them to our NGINX configuration.

Reconfiguring NGINX for HTTPS

Find your configuration in the /etc/nginx/conf.d directory, and paste the following changes.

$ cd /etc/nginx/conf.d
$ sudo nano alertmanager.conf

server {
    listen 3456 ssl;
    ssl_certificate /etc/ssl/alertmanager/alertmanager-cert.pem;
    ssl_certificate_key /etc/ssl/alertmanager/alertmanager-private-key.pem;

    location / {
      auth_basic           "AlertManager";
      auth_basic_user_file /etc/alertmanager/.credentials;
      proxy_pass           http://localhost:9093/;
    }
}

When you are done, restart your NGINX server, and verify that the changes were applied.

$ sudo systemctl restart nginx

If you have any trouble launching NGINX, make sure to run the following command.

$ sudo journalctl -f -u nginx.service

Modifying the AlertManager service

Now that NGINX is configured to deliver content via HTTPS, let’s modify the service to take into account the external URL parameter.

Back to the /lib/systemd/system folder, modify the service accordingly.

$ cd /lib/systemd/system
$ sudo nano alertmanager.service

[Service]
Type=simple
User=alertmanager
Group=alertmanager
ExecStart=/usr/local/bin/alertmanager \
  --config.file=/etc/alertmanager/alertmanager.yml \
  --storage.path=/data/alertmanager \
  --web.external-url=https://localhost:3456

Save your file and restart your service.

$ sudo systemctl daemon-reload
$ sudo systemctl restart alertmanager

Make sure that your AlertManager is still accessible via the proxy.

Conclusion

In this tutorial, you had a complete overview of how to setup the AlertManager with Prometheus on Linux.

You have successfully setup an AlertManager instance running behind a reverse proxy, with authentication and encryption.

As I write more articles using the AlertManager rules and alerts, those articles will probably be linked here.

I hope that you learned something new today. Until then, have fun, as always.

How To Add A User to Sudoers On CentOS 8

Are you confused about adding a user to sudoers on CentOS? Then this tutorial on How To Add A User to Sudoers On CentOS 8 will assist you through the process and clarify all your queries regarding the concept.

Learning each and every concept right from the basics is very important and helpful. So let’s start with the definitions and their uses.

CentOS is a free and open-source enterprise Linux distribution derived from an upstream distro known as Red Hat Enterprise Linux (RHEL). CentOS is mostly used on servers and clusters.

The sudo command is a famous command available on Linux; it permits users to execute commands with the security privileges of another user, by default the root user. The /etc/sudoers file contains the security policy for system users and groups that is used by the sudo command.

Today’s tutorial covers adding a user to sudoers on CentOS 8 (the most recent CentOS distribution) and details the two ways of doing it: adding the user to the wheel group (similar to the sudo group on Debian-based distributions), or adding the user to the sudoers file.

Prerequisites

In order to grant the sudo rights to an existing user, you are going to need the sudo command on your CentOS 8 host.

First, make sure that your packages are up to date on your host and install the sudo command.

$ su -
$ yum update
$ yum install sudo

To verify that the sudo command is correctly installed, you can run the following command

$ sudo -l

Prerequisites

Procedure to add or create a sudo user on CentOS 8

  • Open the terminal application
  • For remote CentOS Server, use the ssh command and log in as the root user using either su or sudo.
  • Create a new CentOS user named tom, run: useradd tom
  • Set the password, execute: passwd tom
  • Make the tom user a sudo user on CentOS Linux 8, run: usermod -aG wheel tom
  • Verify it by running the id tom command
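
The steps above can be sketched as a small script (run as root). The username tom and the password are examples only; chpasswd is used instead of the interactive passwd so the sketch runs non-interactively, and the wheel group is created first only in case it does not already exist (on CentOS it does by default).

```shell
# Create the example user, set a password, and grant sudo rights via wheel.
getent group wheel >/dev/null || groupadd wheel   # wheel exists by default on CentOS
useradd tom
echo 'tom:Str0ngPassw0rd!' | chpasswd             # example password only
usermod -aG wheel tom
id -nG tom                                        # the output should include "wheel"
```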

Adding an existing user to the wheel group

The first way to add a user to sudoers is to add it to the wheel group.

In order to add your user to the group, you can either use the usermod or the gpasswd command.

$ sudo usermod -aG wheel <user>

Alternatively, here is the syntax using the gpasswd command.

$ sudo gpasswd -a <user> wheel
Adding user <user> to group wheel

wheel-1

Make sure that the user belongs to the wheel group with the groups command.

$ su - <user>
(enter the password for user)

$ groups
user wheel

wheel-2

Alternatively, you can run the sudo command as the user you granted administrative rights to.

$ sudo -l

Congratulations!

You have added a user to sudoers on CentOS 8.

During the CentOS 8 installation process, if you chose not to set a root password, your root account may be locked by default. As a consequence, you will need to set a password for the root user account if you need to unlock it.

Adding an existing user to the sudoers file

The other method to grant administrative rights is to add the user to the sudoers file.

By default, the sudoers file is located at /etc/sudoers.

This file contains a set of rules that are applied to determine who has administrative rights on a system, which commands they can execute with sudo privileges, and whether they should be prompted for a password or not.

However, you should not modify the sudoers file by hand: if you make any mistake in the process, you might be locked out of your host forever.

Instead of modifying the sudoers file by yourself, you are going to use visudo.

Visudo is a tool that checks the integrity and the correctness of the commands typed before saving the sudoers file.

To execute visudo, type the following command

$ sudo visudo

You should now see the following screen.

visudo

At the end of the file, add the following line.

<user>       ALL=(ALL:ALL) ALL

Here are some details about the syntax of the sudoers file.

sudoers-syntax
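
In plain words, the line breaks down as follows:

```
<user>    ALL = (ALL:ALL)    ALL
#  |       |     |   |        |
#  |       |     |   |        +-- commands the user may run (all of them)
#  |       |     |   +----------- groups the user may run commands as
#  |       |     +--------------- users the user may run commands as
#  |       +--------------------- hosts on which the rule applies
#  +----------------------------- user the rule applies to
```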

By default, the account password will be asked every five minutes to perform sudo operations.

However, if you want to remove this password verification, you can set the NOPASSWD option.

<user>       ALL=(ALL:ALL) NOPASSWD:ALL

If you want to increase the password verification time, you can modify the timestamp_timeout (expressed in minutes).

In the example shown below, you will be asked to provide your user password every thirty minutes.

# /etc/sudoers
#
# This file MUST be edited with the 'visudo' command as root.
#
# See the man page for details on how to write a sudoers file.
#

Defaults        env_reset
Defaults        mail_badpass
Defaults        secure_path = /sbin:/bin:/usr/sbin:/usr/bin
Defaults        timestamp_timeout=30

Adding a group to the sudoers file

In the sudoers file, you can add a user but you can also add an entire group which can be quite handy if you want to have specific rules for different groups.

To add a group to the sudoers file, simply add a percent symbol at the beginning of the line.

%sysadmins       ALL=(ALL:ALL) NOPASSWD:ALL

Make sure that your user is part of the designated group with the groups command.

$ su - user
$ groups
user sysadmins

Again, you can test that your changes were applied, for example by changing your password.

$ sudo passwd

Quick Steps To Create a New Sudo-enabled User on CentOS 8

The steps to follow to create a new user with sudo access on CentOS 8 are given here:

  1. Logging Into Your Server
  2. Adding a New User to the System
  3. Adding the User to the wheel Group
  4. Testing sudo Access

Conclusion

In this tutorial, you learned how you can add a user to sudoers on CentOS 8, by using the usermod command or by changing the sudoers file.

If you are interested in Linux System Administration, we have dedicated a section for it on our website, kindly check it out for more information.