RH134 9.0 Student Guide
The contents of this course and all its modules and related materials, including handouts to audience members, are ©
2023 Red Hat, Inc.
No part of this publication may be stored in a retrieval system, transmitted or reproduced in any way, including, but
not limited to, photocopy, photograph, magnetic, electronic or other record, without the prior written permission of
Red Hat, Inc.
This instructional program, including all material provided herein, is supplied without any guarantees from Red Hat,
Inc. Red Hat, Inc. assumes no liability for damages or legal action arising from the use or misuse of contents or details
contained herein.
If you believe Red Hat training materials are being used, copied, or otherwise improperly distributed, please send
email to training@redhat.com or phone toll-free (USA) +1 (866) 626-2994 or +1 (919) 754-3700.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, JBoss, OpenShift, Fedora, Hibernate, Ansible, RHCA, RHCE,
RHCSA, Ceph, and Gluster are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United
States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS® is a registered trademark of Hewlett Packard Enterprise Development LP or its subsidiaries in the United
States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is a trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open
source or commercial project.
The OpenStack word mark and the Square O Design, together or apart, are trademarks or registered trademarks
of OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's
permission. Red Hat, Inc. is not affiliated with, endorsed by, or sponsored by the OpenStack Foundation or the
OpenStack community.
Contributors: Adarsh Krishnan, David Sacco, Hemant Chauhan, Roberto Velazquez, Sajith
Eyamkuzhy, Samik Sanyal, Yuvaraj Balaraju
Document Conventions xi
Introduction xiii
Red Hat System Administration II ............................................................................... xiii
Orientation to the Classroom Environment ................................................................. xiv
Performing Lab Exercises ....................................................................................... xviii
1. Improve Command-line Productivity 1
Write Simple Bash Scripts .......................................................................................... 2
Guided Exercise: Write Simple Bash Scripts .................................................................. 6
Loops and Conditional Constructs in Scripts ................................................................ 9
Guided Exercise: Loops and Conditional Constructs in Scripts ........................................ 15
Match Text in Command Output with Regular Expressions ............................................. 17
Guided Exercise: Match Text in Command Output with Regular Expressions .................... 26
Lab: Improve Command-line Productivity .................................................................. 29
Summary ............................................................................................................... 35
2. Schedule Future Tasks 37
Schedule a Deferred User Job ................................................................................. 38
Guided Exercise: Schedule a Deferred User Job ......................................................... 40
Schedule Recurring User Jobs ................................................................................. 43
Guided Exercise: Schedule Recurring User Jobs ......................................................... 46
Schedule Recurring System Jobs ............................................................................. 49
Guided Exercise: Schedule Recurring System Jobs ...................................................... 52
Manage Temporary Files .......................................................................................... 55
Guided Exercise: Manage Temporary Files ................................................................. 58
Quiz: Schedule Future Tasks ..................................................................................... 61
Summary ............................................................................................................... 65
3. Analyze and Store Logs 67
Describe System Log Architecture ............................................................................ 68
Quiz: Describe System Log Architecture .................................................................... 70
Review Syslog Files ................................................................................................. 74
Guided Exercise: Review Syslog Files ......................................................................... 79
Review System Journal Entries .................................................................................. 81
Guided Exercise: Review System Journal Entries ......................................................... 86
Preserve the System Journal ................................................................................... 89
Guided Exercise: Preserve the System Journal ........................................................... 92
Maintain Accurate Time ........................................................................................... 95
Guided Exercise: Maintain Accurate Time ................................................................... 99
Lab: Analyze and Store Logs ................................................................................... 103
Summary .............................................................................................................. 108
4. Archive and Transfer Files 109
Manage Compressed tar Archives ............................................................................ 110
Guided Exercise: Manage Compressed tar Archives ..................................................... 115
Transfer Files Between Systems Securely ................................................................... 117
Guided Exercise: Transfer Files Between Systems Securely .......................................... 120
Synchronize Files Between Systems Securely ............................................................ 123
Guided Exercise: Synchronize Files Between Systems Securely .................................... 126
Lab: Archive and Transfer Files ................................................................................ 129
Summary .............................................................................................................. 134
5. Tune System Performance 135
Adjust Tuning Profiles ............................................................................................. 136
Guided Exercise: Adjust Tuning Profiles .................................................................... 143
Influence Process Scheduling .................................................................................. 148
RH134-RHEL9.0-en-5-20230516 vii
Guided Exercise: Influence Process Scheduling .......................................................... 152
Lab: Tune System Performance ............................................................................... 156
Summary .............................................................................................................. 162
12. Install Red Hat Enterprise Linux 357
Install Red Hat Enterprise Linux ............................................................................... 358
Guided Exercise: Install Red Hat Enterprise Linux ....................................................... 362
Automate Installation with Kickstart ......................................................................... 365
Guided Exercise: Automate Installation with Kickstart ................................................. 374
Install and Configure Virtual Machines ...................................................................... 377
Quiz: Install and Configure Virtual Machines .............................................................. 382
Lab: Install Red Hat Enterprise Linux ........................................................................ 384
Summary ............................................................................................................. 390
13. Run Containers 391
Container Concepts .............................................................................................. 392
Quiz: Container Concepts ...................................................................................... 400
Deploy Containers ................................................................................................ 402
Guided Exercise: Deploy Containers ......................................................................... 412
Manage Container Storage and Network Resources ................................................... 418
Guided Exercise: Manage Container Storage and Network Resources .......................... 428
Manage Containers as System Services ................................................................... 434
Guided Exercise: Manage Containers as System Services ........................................... 440
Lab: Run Containers ............................................................................................. 446
Summary ............................................................................................................. 453
14. Comprehensive Review 455
Comprehensive Review ......................................................................................... 456
Lab: Fix Boot Issues and Maintain Servers ................................................................ 459
Lab: Configure and Manage File Systems and Storage ............................................... 465
Lab: Configure and Manage Server Security .............................................................. 471
Lab: Run Containers ............................................................................................... 481
Document Conventions
This section describes various conventions and practices that are used
throughout all Red Hat Training courses.
Admonitions
Red Hat Training courses use the following admonitions:
References
References describe where to find external documentation that is
relevant to a subject.
Note
Notes are tips, shortcuts, or alternative approaches to the task at hand.
Ignoring a note should have no negative consequences, but you might
miss out on something that makes your life easier.
Important
Important admonitions provide details about information that is easily missed: configuration
changes that apply only to the current session, or services that need
restarting before an update applies. Ignoring these admonitions will not
cause data loss, but might cause irritation and frustration.
Warning
Warnings should not be ignored. Ignoring these admonitions will most
likely cause data loss.
Inclusive Language
Red Hat Training is currently reviewing its use of language in various areas
to help remove any potentially offensive terms. This is an ongoing process
and requires alignment with the products and services that are covered in
Red Hat Training courses. Red Hat appreciates your patience during this
process.
Introduction
In this course, the main computer system for hands-on learning activities is workstation.
Students also use two other machines for these activities: servera and serverb. All three
systems are in the lab.example.com DNS domain.
All student computer systems have a standard user account, student, which has the password
student. The root password on all student systems is redhat.
Classroom Machines
The primary function of bastion is to act as a router between the network that connects the
student machines and the classroom network. If bastion is down, then other student machines
can access only systems on the individual student network.
Note
When logging on to servera or serverb, you might see a message about
activating cockpit. You can ignore the message.
[student@serverb ~]$
Machine States
active The virtual machine is running and available. If it just started, it might
still be starting services.
stopped The virtual machine is completely shut down. On starting, the virtual
machine boots into the same state that it was in before shutdown. The
disk state is preserved.
Classroom Actions
CREATE Create the ROLE classroom. Creates and starts all the needed virtual
machines for this classroom. Creation can take several minutes to
complete.
CREATING The ROLE classroom virtual machines are being created. Wait for the
process to finish; creation can take several minutes to complete.
DELETE Delete the ROLE classroom. Destroys all virtual machines in the
classroom. All saved work on those systems' disks is lost.
Machine Actions
OPEN CONSOLE Connect to the system console of the virtual machine in a new browser
tab. You can log in directly to the virtual machine and run commands,
when required. Normally, log in to the workstation virtual machine
only, and from there, use ssh to connect to the other virtual machines.
ACTION > Shutdown Gracefully shut down the virtual machine, preserving disk contents.
ACTION > Power Off Forcefully shut down the virtual machine, while still preserving disk
contents. This action is equivalent to removing the power from a
physical machine.
ACTION > Reset Forcefully shut down the virtual machine and reset associated storage
to its initial state. All saved work on that system's disks is lost.
At the start of an exercise, if instructed to reset a single virtual machine node, then click ACTION >
Reset for only that specific virtual machine.
At the start of an exercise, if instructed to reset all virtual machines, then click ACTION > Reset on
every virtual machine in the list.
If you want to return the classroom environment to its original state at the start of the course,
then click DELETE to remove the entire classroom environment. After the lab is deleted, then click
CREATE to provision a new set of classroom systems.
Warning
The DELETE operation cannot be undone. All completed work in the classroom
environment is lost.
To adjust the timers, locate the two + buttons at the bottom of the course management page.
Click the auto-stop + button to add another hour to the auto-stop timer. Click the auto-destroy +
button to add another day to the auto-destroy timer. Auto-stop has a maximum of 11 hours,
and auto-destroy has a maximum of 14 days. Be careful to keep the timers set while you are
working, so that your environment is not unexpectedly shut down. Be careful not to set the timers
unnecessarily high, which could waste your subscription time allotment.
• A guided exercise is a hands-on practice exercise that follows a presentation section. It walks
you through a procedure to perform, step by step.
• A quiz is typically used when checking knowledge-based learning, or when a hands-on activity is
impractical for some other reason.
• An end-of-chapter lab is a gradable hands-on activity to help you to check your learning. You
work through a set of high-level steps, based on the guided exercises in that chapter, but the
steps do not walk you through every command. A solution is provided with a step-by-step walk-
through.
• A comprehensive review lab is used at the end of the course. It is also a gradable hands-on
activity, and might cover content from the entire course. You work through a specification of
what to accomplish in the activity, without receiving the specific steps to do so. Again, a solution
is provided with a step-by-step walk-through that meets the specification.
To prepare your lab environment at the start of each hands-on activity, run the lab start
command with a specified activity name from the activity's instructions. Likewise, at the end of
each hands-on activity, run the lab finish command with that same activity name to clean up
after the activity. Each hands-on activity has a unique name within a course.
The action is a choice of start, grade, or finish. All exercises support start and finish.
Only end-of-chapter labs and comprehensive review labs support grade.
start
The start action verifies the required resources to begin an exercise. It might include
configuring settings, creating resources, checking prerequisite services, and verifying
necessary outcomes from previous exercises. You can take an exercise at any time, even
without taking preceding exercises.
grade
For gradable activities, the grade action directs the lab command to evaluate your work, and
shows a list of grading criteria with a PASS or FAIL status for each. To achieve a PASS status
for all criteria, fix the failures and rerun the grade action.
finish
The finish action cleans up resources that were configured during the exercise. You can
take an exercise as many times as you want.
The lab command supports tab completion. For example, to list all exercises that you can start,
enter lab start and then press the Tab key twice.
Chapter 1
Improve Command-line Productivity
Goal: Run commands more efficiently by using advanced features of the Bash shell,
shell scripts, and various Red Hat Enterprise Linux utilities.
Chapter 1 | Improve Command-line Productivity
Objectives
Run commands more efficiently by using advanced features of the Bash shell, shell scripts, and
various Red Hat Enterprise Linux utilities.
A Bash shell script is an executable file that contains a list of commands, possibly with
programming logic to control decision-making in the overall task. When well written, a shell script
is a powerful command-line tool on its own, and you can use it with other scripts.
Shell scripting proficiency is essential for system administrators in any operational environment.
You can use shell scripts to improve the efficiency and accuracy of routine task completion.
Although you can use any text editor, advanced editors such as vim or emacs understand Bash
shell syntax and can provide color-coded highlighting. This highlighting helps to identify common
scripting errors such as improper syntax, unmatched quotes, parentheses, brackets, and braces,
and other structural mistakes.
#!/usr/bin/bash
If the script is stored in a directory that is listed in the shell's PATH environment variable, then you
can run the shell script by using only its file name, similar to running compiled commands. Because
PATH parsing runs the first matching file name that is found, always avoid using existing command
names to name your script files. If a script is not in a PATH directory, then run the script by using
its absolute path name, which you can determine by querying the file with the which command.
Alternatively, run a script in your current working directory by using the . directory prefix, such as
./scriptname.
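For illustration, the following is a minimal sketch of creating and running a script. The file name hello.sh is hypothetical, not from the course materials:

```shell
# Create a minimal script (hypothetical name: hello.sh) in the current directory.
cat > hello.sh << 'EOF'
#!/usr/bin/bash
echo "Hello from a script"
EOF

# Grant execute permission, then run the script with the ./ prefix,
# because the current working directory might not be in PATH.
chmod +x hello.sh
./hello.sh
```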
The backslash character removes the special meaning of the single character that immediately
follows the backslash. For example, to use the echo command to display the # not a comment
literal string, the # hash character must not be interpreted as a comment.
The following example shows the backslash character (\) modifying the hash character so it is not
interpreted as a comment:
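A minimal sketch of both the unescaped and the escaped forms:

```shell
# Without the backslash, everything after # is treated as a comment.
echo # not a comment          # prints an empty line
# The backslash removes the special meaning of the character that follows it.
echo \# not a comment         # prints: # not a comment
```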
To escape more than one character in a text string, either use the backslash character multiple
times, or enclose the whole string in single quotes ('') to interpret literally. Single quotes preserve
the literal meaning of all characters that they enclose. Observe the backslash character and single
quotes in these examples:
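A sketch of both techniques, which produce the same literal output:

```shell
# Escape each special character with its own backslash ...
echo \# not a comment \#      # prints: # not a comment #
# ... or enclose the whole string in single quotes to take it literally.
echo '# not a comment #'      # prints: # not a comment #
```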
Use double quotation marks to suppress globbing (file name pattern matching) and shell
expansion, but still allow command and variable substitution. Variable substitution is conceptually
the same as command substitution, but might use optional brace syntax. Observe the following
examples of various quotation mark forms.
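A sketch of double-quote behavior; the variable name var is illustrative:

```shell
# Double quotes suppress globbing but still allow substitution.
var=v1
echo "$var"                   # prints: v1
echo "${var}"                 # prints: v1 (optional brace syntax)
echo "***"                    # prints: *** -- the asterisks are not globbed
echo "Today is $(date +%A)"   # command substitution still occurs
```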
Use single quotation marks to interpret all enclosed text literally. Besides suppressing globbing
and shell expansion, single quotation marks also direct the shell to suppress command and variable
substitution. The question mark (?) is included inside the quotations, because it is a metacharacter
that also needs escaping from expansion.
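A sketch of single-quote behavior, with illustrative strings:

```shell
# Single quotes suppress globbing, shell expansion, and both command
# and variable substitution.
var=v1
echo '$var and $(date) are literal'   # prints: $var and $(date) are literal
echo 'Will this expand?'              # prints: Will this expand? -- the ? stays literal
```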
Note
This user can run hello at the prompt because the ~/bin (/home/user/bin)
directory is in the user's PATH variable and the hello script has executable
permission. The PATH parser finds the script first, if no other executable file
called hello is found in any earlier PATH directory. Your home directory's bin
subdirectory is intended to store your personal scripts.
The echo command is widely used in shell scripts to display informational or error messages.
Messages helpfully indicate a script's progress, and can be directed to standard output or
standard error, or be redirected to a log file for archiving. When you display error messages, good
programming practice is to redirect error messages to STDERR to separate them from normal
program output.
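A minimal sketch of this practice; the message text is illustrative:

```shell
# Send normal messages to standard output, and redirect error
# messages to standard error with >&2 so they can be handled separately.
echo "Backup started"                  # goes to STDOUT
echo "Error: file not found" >&2       # goes to STDERR
```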
The echo command is also helpful to debug a problematic shell script. Adding echo statements in
a script, to display variable values and other runtime information, can help to clarify how a script is
behaving.
References
bash(1), echo(1), and echo(1p) man pages
Guided Exercise
Outcomes
• Write and execute a simple Bash script.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. Log in to the servera machine as the student user.
2.1. Use the vim command to create the firstscript.sh file under your home
directory.
2.2. Insert the following text, and save the file. The number of hash signs (#) is arbitrary.
#!/usr/bin/bash
echo "This is my first bash script" > ~/output.txt
echo "" >> ~/output.txt
echo "#####################################################" >> ~/output.txt
#####################################################
3. Add more commands to the firstscript.sh script, execute it, and review the output.
3.1. Use the Vim text editor to edit the firstscript.sh script.
The following output shows the expected content of the firstscript.sh file:
#!/usr/bin/bash
#
echo "This is my first bash script" > ~/output.txt
echo "" >> ~/output.txt
echo "#####################################################" >> ~/output.txt
echo "LIST BLOCK DEVICES" >> ~/output.txt
echo "" >> ~/output.txt
lsblk >> ~/output.txt
echo "" >> ~/output.txt
echo "#####################################################" >> ~/output.txt
echo "FILESYSTEM FREE SPACE STATUS" >> ~/output.txt
echo "" >> ~/output.txt
df -h >> ~/output.txt
echo "#####################################################" >> ~/output.txt
3.2. Make the firstscript.sh file executable by using the chmod command.
#####################################################
LIST BLOCK DEVICES
#####################################################
FILESYSTEM FREE SPACE STATUS
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Objectives
Run repetitive tasks with for loops, evaluate exit codes from commands and scripts, run tests with
operators, and create conditional structures with if statements.
The loop processes the strings that you provide in LIST and exits after processing the last string
in the list. The for loop temporarily stores each list string as the value of VARIABLE, and then
executes the block of commands that use the variable. The variable name is arbitrary. Typically,
you reference the variable value with commands in the command block.
Provide the list of strings for the for loop from a list that the user enters directly, or that is
generated from shell expansion, such as variable, brace, or file name expansion, or command
substitution.
[user@host ~]$ for HOST in host1 host2 host3; do echo $HOST; done
host1
host2
host3
[user@host ~]$ for HOST in host{1,2,3}; do echo $HOST; done
host1
host2
host3
[user@host ~]$ for HOST in host{1..3}; do echo $HOST; done
host1
host2
host3
[user@host ~]$ for FILE in file{a..c}; do ls $FILE; done
filea
fileb
filec
Use the exit command with an optional integer argument between 0 and 255, which represents
an exit code. An exit code is returned to a parent process to indicate the status at exit. An exit
code value of 0 represents a successful script completion with no errors. All other nonzero values
indicate an error exit code. The script programmer defines these codes. Use unique values to
represent the different error conditions that are encountered. Retrieve the exit code of the last
completed command from the built-in $? variable, as in the following examples:
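A minimal sketch of reading $? after a command completes:

```shell
# Exit codes: 0 means success; any nonzero value signals an error.
true
echo "true returned: $?"               # prints: true returned: 0
# Inside the || branch, $? holds the exit code of the failed command.
false || echo "false returned: $?"     # prints: false returned: 1
```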
When a script's exit command is used without an exit code argument, the script returns the exit
code of the last command that was run within the script.
To see the exit status, view the $? variable immediately after executing the test command. An
exit status of 0 indicates a successful exit with nothing to report. Nonzero values indicate some
condition or failure. Use various operators to test whether a number is greater than (-gt), greater
than or equal to (-ge), less than (-lt), less than or equal to (-le), or equal (-eq) to another number.
Use operators to test whether a string of text is the same (= or ==) or not the same (!=) as
another string of text, or whether the string has zero length (-z) or has a non-zero length (-n). You
can also test whether a regular file (-f) or directory (-d) exists and has some special attributes,
such as whether the file is a symbolic link (-L), or whether the user has read permission (-r).
Note
Shell scripting uses many other operator types. The test(1) man page lists
the conditional expression operators with descriptions. The bash(1) man page
also explains operator use and evaluation, but can be complex to read. Red Hat
recommends learning shell scripting through quality books and courses that are
dedicated to shell programming.
Test by using the Bash test command syntax, [ <TESTEXPRESSION> ] or the newer extended
test command syntax, [[ <TESTEXPRESSION> ]], which provides features such as file name
globbing and regex pattern matching. In most cases, use the [[ <TESTEXPRESSION> ]] syntax.
The following examples demonstrate the Bash test command syntax and numeric comparison
operators:
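A sketch of numeric comparisons with both syntaxes; the numbers are illustrative:

```shell
# Numeric comparisons return exit status 0 (true) or nonzero (false).
[ 1 -eq 1 ] && echo "1 -eq 1 is true"
[ 1 -gt 0 ] && echo "1 -gt 0 is true"
[ 0 -ge 1 ] || echo "0 -ge 1 is false"
# The extended test syntax evaluates numeric operators the same way.
[[ 1 -lt 2 ]] && echo "1 -lt 2 is true"
```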
The following examples demonstrate Bash string unary (one argument) operators:
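A sketch of the string operators; the variable value is illustrative:

```shell
STR=''
[ -z "$STR" ] && echo "the string has zero length"
STR='rhel'
[ -n "$STR" ] && echo "the string has non-zero length"
[ "$STR" = "rhel" ] && echo "the strings are the same"
[ "$STR" != "fedora" ] && echo "the strings are not the same"
```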
Note
The space characters inside the brackets are mandatory, because they separate the
words and elements within the test expression. The shell's command parsing routine
divides the command elements into words and operators by recognizing spaces
and other metacharacters, according to built-in parsing rules. For full treatment
of this advanced concept, see the getopt(3) man page. The left square bracket
character ([) is itself a built-in alias for the test command. Shell words, whether
they are commands, subcommands, options, arguments, or other token elements,
are always delimited by spaces.
Conditional Structures
Simple shell scripts represent a collection of commands that are executed from beginning to end.
Programmers incorporate decision-making into shell scripts by using conditional structures. A
script can execute specific routines when stated conditions are met.
if <CONDITION>; then
<STATEMENT>
...
<STATEMENT>
fi
With this construct, if the script meets the given condition, then it executes the code in the
statement block. It does not act if the given condition is not met. Common test conditions in
the if/then statements include the previously discussed numeric, string, and file tests. The fi
statement at the end closes the if/then construct. The following code section demonstrates an
if/then construct to start the psacct service if it is not active:
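A runnable sketch of the construct follows; a plain variable stands in for the systemctl is-active psacct check from the course example, so the snippet works outside the classroom:

```shell
# The course's condition is 'systemctl is-active psacct'; a variable
# substitutes for the service state here.
state=inactive
if [ "$state" != "active" ]; then
  echo "service inactive - starting it"
  state=active
fi
echo "state is now: $state"     # prints: state is now: active
```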
if <CONDITION>; then
<STATEMENT>
...
<STATEMENT>
else
<STATEMENT>
...
<STATEMENT>
fi
The following code section demonstrates an if/then/else statement to start the psacct
service if it is not active, and to stop it if it is active:
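A runnable sketch of the if/then/else construct; again a variable stands in for the systemctl is-active psacct check from the course example:

```shell
state=active
if [ "$state" != "active" ]; then
  echo "starting the service"
else
  echo "stopping the service"   # this branch runs for the value above
fi
```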
if <CONDITION>; then
<STATEMENT>
...
<STATEMENT>
elif <CONDITION>; then
<STATEMENT>
...
<STATEMENT>
else
<STATEMENT>
...
<STATEMENT>
fi
In this conditional structure, Bash tests the conditions as they are ordered in the script. When a
condition is true, Bash executes the actions that are associated with the condition and then skips
the remainder of the conditional structure. If none of the conditions are true, then Bash executes
the actions in the else clause.
The following code section demonstrates an if/then/elif/then/else statement to run the mysql
client if the mariadb service is active, to run the psql client if the postgresql service
is active, or to run the sqlite3 client if both the mariadb and the postgresql services are
inactive:
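A runnable sketch of the chained structure; hypothetical state variables replace the systemctl is-active mariadb and systemctl is-active postgresql checks from the course example:

```shell
mariadb_state=inactive
postgresql_state=active
if [ "$mariadb_state" = "active" ]; then
  echo "run the mysql client"
elif [ "$postgresql_state" = "active" ]; then
  echo "run the psql client"    # this branch runs for the values above
else
  echo "run the sqlite3 client"
fi
```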
References
bash(1) man page
Guided Exercise
Outcomes
• Create a for loop to iterate through a list of items from the command line and in a shell
script.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. Use the ssh and hostname commands to print the hostname of the servera and
serverb machines to standard output.
2. Create a for loop to execute the hostname command on the servera and serverb
machines.
3. Create a shell script in the /home/student/bin directory to execute the same for loop.
Ensure that the script is included in the PATH environment variable.
3.1. Create the /home/student/bin directory to store the shell script, if the directory
does not exist.
3.2. Verify that the bin subdirectory of your home directory is in your PATH environment
variable.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Objectives
Create regular expressions to match data, apply regular expressions to text files with the grep
command, and use grep to search files and data from piped commands.
Regular expressions are a unique language, with their own syntax and rules. This section introduces
regular expression syntax as implemented in bash, with examples.
Imagine that a user is looking through the following file for all occurrences of the pattern cat:
cat
dog
concatenate
dogma
category
educated
boondoggle
vindication
chilidog
The cat string is an exact match of the c character, followed by the a and t characters with no
other characters between. Searching the file with the cat string as the regular expression returns
the following matches:
cat
concatenate
category
educated
vindication
To match only at the beginning of a line, use the caret character (^). To match only at the end of a
line, use the dollar sign ($).
With the same file as for the previous example, the ^cat regular expression would match two lines.
cat
category
The cat$ regular expression would find only one match, where the cat characters are at the end
of a line.
cat
Locate lines in the file that end with dog, by using an end-of-line anchor to create the dog$
regular expression, which matches two lines:
dog
chilidog
To locate a line that contains only the search expression exactly, use both the beginning and end-
of-line anchors. For example, to locate the word cat when it is both at the beginning and the end
of a line simultaneously, use ^cat$.
cat
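These anchor matches can be reproduced with the grep command. The following self-contained sketch builds the sample word list in a temporary file (the path is arbitrary):

```shell
#!/usr/bin/bash
# Recreate the sample file from the text, then apply the anchored patterns.
printf '%s\n' cat dog concatenate dogma category educated \
    boondoggle vindication chilidog > /tmp/words.txt

grep '^cat' /tmp/words.txt     # matches: cat, category
grep 'cat$' /tmp/words.txt     # matches: cat
grep '^cat$' /tmp/words.txt    # matches: cat
```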
One difference between basic and extended regular expressions is in the behavior of the |, +, ?, (,
), {, and } special characters. In basic regular expression syntax, these characters have a special
meaning only if they are prefixed with a backslash \ character. In extended regular expression
syntax, these characters are special unless they are prefixed with a backslash \ character. Other
minor differences apply to how the ^, $, and * characters are handled.
The grep, sed, and vim commands use basic regular expressions. The grep command -E option,
the sed command -E option, and the less command use extended regular expressions.
The period (.) is an unrestricted wildcard that matches any single character. With an
unrestricted wildcard, you cannot predict the character that matches the wildcard. To
match specific characters, replace the unrestricted wildcard with appropriate characters.
Bracket characters, as in the c[aou]t regular expression, match a pattern that starts
with a c, followed by an a, o, or u, followed by a t. The matching strings are cat, cot,
and cut.
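A short sketch demonstrates the bracket expression (the sample file path is arbitrary):

```shell
#!/usr/bin/bash
# c[aou]t matches c, then exactly one of a, o, or u, then t.
printf '%s\n' cat cot cut coat cet > /tmp/cvt.txt
grep 'c[aou]t' /tmp/cvt.txt    # matches: cat, cot, cut
```

Note that coat does not match, because the bracket expression matches exactly one character.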
Multipliers are an often-used mechanism with wildcards. Multipliers apply to the previous
character or wildcard in the regular expression. A commonly used multiplier is the
asterisk (*) character. When used in a regular expression, the asterisk multiplier
matches zero or more occurrences of the multiplied expression. You can use the asterisk
with expressions, in addition to single characters.
For example, the c[aou]*t regular expression might match coat or coot. A regular expression
of c.*t matches cat, coat, culvert, and even ct (matching zero characters between the c
and the t). Any string that starts with a c, is followed by zero or more characters, and ends with a t
must be a match.
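A sketch of the c.*t pattern (the sample file path is arbitrary):

```shell
#!/usr/bin/bash
# c.*t: a c, zero or more of any character, then a t.
printf '%s\n' cat coat culvert ct dog > /tmp/star.txt
grep 'c.*t' /tmp/star.txt    # matches every line except dog
```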
Another type of multiplier indicates a more precise number of characters in the pattern.
An example of an explicit multiplier is the 'c.\{2\}t' regular expression, which matches
any string that begins with a c, followed by exactly two of any character, and ends with
a t. The 'c.\{2\}t' expression matches two words in the following example:
cat
coat
convert
cart
covert
cypher
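A sketch confirms which of those words match (the sample file path is arbitrary):

```shell
#!/usr/bin/bash
# c.\{2\}t: a c, exactly two of any character, then a t (basic regex syntax).
printf '%s\n' cat coat convert cart covert cypher > /tmp/mult.txt
grep 'c.\{2\}t' /tmp/mult.txt    # matches: coat, cart
```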
Note
This course introduced two metacharacter text parsing mechanisms: shell pattern
matching (also known as file globbing or file name expansion), and regular
expressions. Both mechanisms use similar metacharacters, such as the asterisk
character (*), but have differences in metacharacter interpretation and rules.
Pattern matching is a shell technique to specify multiple file names on the command
line. Regular expressions represent any form or pattern in text strings, no matter
how complex. Regular expressions are internally supported by many text processing
commands, such as grep, sed, awk, python, and perl, and in many applications.
\{n,m\} or {n,m}: The preceding item is matched at least n times, but not more than
m times (basic regular expression syntax requires the backslashes; extended syntax does
not).
[:digit:]: Digits: 0 1 2 3 4 5 6 7 8 9.
[:lower:]: Lowercase letters; in the 'C' locale and ASCII character encoding:
a b c d e f g h i j k l m n o p q r s t u v w x y z.
[:space:]: Space characters; in the 'C' locale: tab, newline, vertical tab, form feed,
carriage return, and space.
[:upper:]: Uppercase letters; in the 'C' locale and ASCII character encoding:
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z.
Note
It is recommended practice to use single quotation marks to encapsulate the
regular expression to protect any shell metacharacters (such as the $, *, and {}
characters). Encapsulating the regular expression ensures that the command and
not the shell interprets the characters.
The grep command can process output from other commands by using a pipe operator character
(|). The following example shows the grep command parsing lines from the output of another
command.
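A minimal sketch of grep reading from a pipe (the sample lines are generated with printf so that the example is self-contained):

```shell
#!/usr/bin/bash
# grep filters the output of another command through a pipe.
printf 'root:x:0\ndaemon:x:1\nstudent:x:1000\n' | grep '^root'
```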
View the man pages to find other options for the grep command.
Regular expressions are case-sensitive by default. Use the grep command -i option to run a
case-insensitive search. The following example shows an excerpt of the /etc/httpd/conf/
httpd.conf configuration file.
#
# Listen: Allows you to bind Apache to specific IP addresses and/or
# ports, instead of the default. See also the <VirtualHost>
# directive.
#
# Change this to Listen on a specific IP address, but note that if
# httpd.service is enabled to run at boot time, the address may not be
# available when the service starts. See the httpd.service(8) man
# page for more information.
#
#Listen 12.34.56.78:80
Listen 80
...output omitted...
The following example searches for the serverroot regular expression in the /etc/httpd/
conf/httpd.conf configuration file.
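The following sketch reproduces a case-insensitive search; the sample line mimics the ServerRoot entry in httpd.conf (the file path here is arbitrary):

```shell
#!/usr/bin/bash
# -i ignores case, so the lowercase pattern matches the ServerRoot line.
printf 'ServerRoot "/etc/httpd"\nListen 80\n' > /tmp/httpd-sample.conf
grep -i 'serverroot' /tmp/httpd-sample.conf
```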
Use the grep command -v option to invert the search. This option displays only the lines
that do not match the regular expression.
In the following example, all lines, regardless of case, that do not contain the server regular
expression are returned.
To view a file without the distraction of comment lines, use the grep command -v option. In
the following example, the regular expression matches and excludes all the lines that begin with
a hash character (#) or a semicolon (;) character in the /etc/systemd/system/multi-
user.target.wants/rsyslog.service file. In that file, the hash character at the beginning
of a line indicates a general comment, whereas the semicolon character refers to a commented
variable value.
[Service]
Type=notify
EnvironmentFile=-/etc/sysconfig/rsyslog
ExecStart=/usr/sbin/rsyslogd -n $SYSLOGD_OPTIONS
ExecReload=/usr/bin/kill -HUP $MAINPID
UMask=0066
StandardOutput=null
Restart=on-failure
LimitNOFILE=16384
[Install]
WantedBy=multi-user.target
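The comment-stripping technique can be sketched with a small sample (the file path is arbitrary):

```shell
#!/usr/bin/bash
# Exclude whole-line comments that begin with # or ;.
printf '#comment\n;Variable=off\nType=notify\n' > /tmp/unit-sample
grep -v '^[#;]' /tmp/unit-sample    # prints only: Type=notify
```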
The grep command -e option can search for more than one regular expression at a time. The
following example, which uses a combination of the less and grep commands, locates all
occurrences of pam_unix, user root, and Accepted publickey in the /var/log/secure
log file.
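A self-contained sketch of multiple -e patterns (the sample log lines are illustrative, not taken from a real /var/log/secure file):

```shell
#!/usr/bin/bash
# Repeated -e options match lines that contain any of the patterns.
printf '%s\n' 'sshd: Accepted publickey for student' \
    'su: pam_unix session opened' 'crond: job started' > /tmp/secure-sample
grep -e 'pam_unix' -e 'Accepted publickey' /tmp/secure-sample
```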
To search for text in a file that you opened with the vim or less commands, first enter
the slash character (/) and then type the pattern to find. Press Enter to start the
search. Press n to find the next match.
References
regex(7) and grep(1) man pages
Guided Exercise
Outcomes
• Efficiently search for text in log files and configuration files.
Instructions
1. Log in to the servera machine as the student user and switch to the root user.
2. Use the grep command to find the GID and UID for the postfix and postdrop groups
and users. To do so, use the rpm -q --scripts command, which queries the information
for a specific package and shows the scripts that are used as part of the installation
process.
3. Modify the previous regular expression to display the first two messages in the /var/log/
maillog file. In this search, you do not need to use the caret character (^), because you
are not searching for the first character in a line.
4. Find the name of the queue directory for the Postfix server. Search the /etc/
postfix/main.cf configuration file for all information about queues. Use the grep
command -i option to ignore case distinctions.
5. Confirm that the postfix service writes messages to the /var/log/messages file. Use
the less command and then the slash character (/) to search the file. Press n to move to
the next entry that matches the search. Press q to quit the less command.
6. Use the ps aux command to confirm that the postfix server is currently running. Use
the grep command to limit the output to the necessary lines.
7. Confirm that the qmgr, cleanup, and pickup queues are correctly configured. Use the
grep command -e option to match multiple entries in the same file. The /etc/postfix/
master.cf file is the configuration file.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Lab
Outcomes
• Create a Bash script and redirect its output to a file.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. Create the executable /home/student/bin/bash-lab script file on the workstation
machine. The initial content in the script must use the shebang interpreter directive.
2. Edit your newly created script file to store the following information from the servera
and serverb machines on the workstation machine. The systems use SSH keys for
authentication, and therefore you do not require a password. Store the output of the listed
commands from the following table in the /home/student/output-servera and /home/
student/output-serverb files respectively on the workstation machine. Use the hash
sign (#) for differentiating the output of the successive commands in the output file.
echo "#####" Append the hash signs to differentiate the following command.
lscpu Get only the lines that start with the CPU string.
echo "#####" Append the hash signs to differentiate the following command.
/etc/selinux/config Ignore empty lines. Also, ignore lines that start with the #
character.
echo "#####" Append the hash signs to differentiate the following command.
echo "#####" Append the hash signs to differentiate the following command.
Save the required information to the output-servera and output-serverb files in the
/home/student directory on workstation.
Note
You can use the sudo command without requiring a password on the servera and
serverb hosts. Remember to use a loop to simplify your script. You can also use
multiple grep commands that are concatenated with the use of the pipe character
(|).
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Solution
Outcomes
• Create a Bash script and redirect its output to a file.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. Create the executable /home/student/bin/bash-lab script file on the workstation
machine. The initial content in the script must use the shebang interpreter directive.
1.2. Use the vim command to create and edit the /home/student/bin/bash-lab script
file.
#!/usr/bin/bash
2. Edit your newly created script file to store the following information from the servera
and serverb machines on the workstation machine. The systems use SSH keys for
authentication, and therefore you do not require a password. Store the output of the listed
commands from the following table in the /home/student/output-servera and /home/
student/output-serverb files respectively on the workstation machine. Use the hash
sign (#) for differentiating the output of the successive commands in the output file.
echo "#####" Append the hash signs to differentiate the following command.
lscpu Get only the lines that start with the CPU string.
echo "#####" Append the hash signs to differentiate the following command.
/etc/selinux/config Ignore empty lines. Also, ignore lines that start with the #
character.
echo "#####" Append the hash signs to differentiate the following command.
echo "#####" Append the hash signs to differentiate the following command.
Save the required information to the output-servera and output-serverb files in the
/home/student directory on workstation.
Note
You can use the sudo command without requiring a password on the servera and
serverb hosts. Remember to use a loop to simplify your script. You can also use
multiple grep commands that are concatenated with the use of the pipe character
(|).
2.1. Use the vim command to open and edit the /home/student/bin/bash-lab script
file.
2.2. Append the following lines to the /home/student/bin/bash-lab script file. The
number of hash signs is arbitrary.
Note
The following output is an example of how you can achieve the requested script. In
Bash scripting, you can take different approaches and obtain the same result.
#!/usr/bin/bash
USR='student'
OUT='/home/student/output'
#
for SRV in servera serverb; do
ssh ${USR}@${SRV} "hostname -f" > ${OUT}-${SRV}
Apr 1 05:42:17 serverb sshd[1257]: Failed password for invalid user sysadmin1
from 172.25.250.9 port 53496 ssh2
Apr 1 05:42:19 serverb sshd[1259]: Failed password for invalid user manager1 from
172.25.250.9 port 53498 ssh2
#####
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Summary
• Create and execute Bash scripts to accomplish administration tasks.
• Use loops to iterate through a list of items from the command line and in a shell script.
• Search for text in log and configuration files by using regular expressions and the grep
command.
Chapter 2. Schedule Future Tasks
Chapter 2 | Schedule Future Tasks
Objectives
Set up a command to run once at a future time.
These scheduled commands are called tasks or jobs, and the term deferred indicates that
these tasks run at a future time.
One available solution for Red Hat Enterprise Linux users to schedule deferred tasks is the at
command, which is installed and enabled by default. The at package provides the atd system
daemon and the at and atq commands to interact with the daemon.
Any user can queue jobs for the atd daemon by using the at command. The atd daemon
provides 26 queues, identified from a to z, where jobs in alphabetically later queues get lower
system priority (with higher nice values, as discussed in a later chapter).
The at command TIMESPEC argument accepts natural time specifications to describe when a
job should run. For example, specify a time as 02:00pm, 15:59, midnight, or even teatime,
followed by an optional date or number of days in the future.
The TIMESPEC argument expects the time and then the date, in that order. If you provide
the date but not the time, then the time defaults to the current time. If you provide the
time but not the date, then the job runs the next time that the specified time occurs:
later today if the time has not yet passed, or tomorrow otherwise.
The following example shows a job scheduled without providing the date. The at command
schedules the job for today or tomorrow, depending on whether the time has already
passed.
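Such a command might look like the following sketch (the job and time are illustrative; on submission, the at command prints the assigned job number and scheduled time):

```
[user@host ~]$ echo "date >> ~/myjob.txt" | at 21:30
```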
The man pages for the at command and other documentation sources use lowercase to write
the natural time specifications. You can use lowercase, sentence case, or uppercase. Here are
examples of time specifications that you can use:
• now +5min
• teatime tomorrow (teatime is 16:00)
• noon +4 days
• 5pm august 3 2021
For other valid time specifications, refer to the local timespec document listed in the references.
In the preceding output, every line represents a different scheduled future job. The following
description applies to the first line of the output:
• Mon May 16 05:13:00 2022 is the execution date and time for the scheduled job.
• user is the owner of the job (and the user that the job runs as).
Important
Unprivileged users can view and manage only their own jobs. The root user can
view and manage all jobs.
Use the at -c JOBNUMBER command to inspect the commands that run when the atd
daemon executes a job. This command shows the job's environment, which is set from the user's
environment when they created the job, and the command syntax to run.
References
at(1) and atd(8) man pages
/usr/share/doc/at/timespec
Guided Exercise
Outcomes
• Schedule a job to run at a specified future time.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. From workstation, open an SSH session to servera as the student user.
2. Schedule a job to run in two minutes from now. Save the output of the date command to
the /home/student/myjob.txt file.
2.1. Pass the date >> /home/student/myjob.txt string as the input to the at
command, so that the job runs in two minutes from now.
2.3. Monitor the deferred jobs queue in real time. After the atd daemon executes, it
removes the job from the queue.
The command updates the output of the atq command every two seconds, by
default. After the atd daemon removes the deferred job from the queue, press
Ctrl+c to exit the watch command and return to the shell prompt.
2.4. Verify that the contents of the /home/student/myjob.txt file match the output
of the date command.
The output matches the output of the date command, which confirms that the
scheduled job executed successfully.
3. Interactively schedule a job in the g queue that runs at teatime (16:00). The job should
print the It's teatime message to the /home/student/tea.txt file. Append the new
messages to the /home/student/tea.txt file.
4. Interactively schedule another job with the b queue that runs at 16:05. The job should print
The cookies are good message to the /home/student/cookies.txt file. Append
the new messages to the /home/student/cookies.txt file.
5.2. View the commands in the pending job number 2. Replace the job number if it
changed for you.
The job executes an echo command that appends the It's teatime message to
the /home/student/tea.txt file.
[student@servera ~]$ at -c 2
...output omitted...
echo "It's teatime" >> /home/student/tea.txt
marcinDELIMITER1d7be6a7
5.3. View the commands in the pending job number 3. Replace the job number if it
changed for you.
The job executes an echo command that appends the message The cookies are
good to the /home/student/cookies.txt file.
[student@servera ~]$ at -c 3
...output omitted...
echo "The cookies are good" >> /home/student/cookies.txt
marcinDELIMITER44662c6f
6. View the job number of a job that runs at teatime (16:00), and remove it by using the
atrm command.
7. Verify that the scheduled job to run at teatime (16:00) no longer exists.
7.1. View the list of pending jobs, and confirm that the scheduled job to run at teatime
(16:00) no longer exists.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Objectives
Schedule commands to run on a repeating schedule with a user's crontab file.
crontab filename Remove all jobs, and replace them with jobs that are read from
filename. This command uses stdin input when no file is specified.
A privileged user might use the crontab command -u option to manage jobs for another
user. The crontab command is never used to manage system jobs, and using the crontab
commands as the root user is not recommended due to the ability to exploit personal jobs that
are configured to run as root. Configure such privileged jobs as described in the later section that
describes recurring system jobs.
Standard variable settings include the SHELL variable, to declare the shell that is used for
interpreting the remaining lines of the crontab file. The MAILTO variable determines who should
receive the emailed output.
Note
The ability to send an email requires additional system configuration for a local mail
server or an SMTP relay.
• Minutes
• Hours
• Day of month
• Month
• Day of week
• Command
When both the Day of month and Day of week fields have a value other than the * character,
the command executes when either field matches. For example, to run a command on the 11th
day of every month, and also on every Friday, at 12:15 (24-hour format), use the following
job format:
15 12 11 * Fri command
The first five fields all use the same syntax rules:
• A number to specify the number of minutes or hours, a date, or a day of the week. For days of
the week, 0 equals Sunday, 1 equals Monday, 2 equals Tuesday, and so on. 7 also equals Sunday.
• Use x,y for lists. Lists might include ranges as well, for example, 5,10-13,17 in the Minutes
column, for a job to run at 5, 10, 11, 12, 13, and 17 minutes past the hour.
• The */x indicates an interval of x; for example, */7 in the Minutes column runs a job every
seven minutes.
Additionally, 3-letter English abbreviations are used for months or days of the week, for example,
Jan, Feb, and Mon, Tue.
The last field contains the full command with options and arguments to execute with the default
shell. If the command contains an unescaped percentage sign (%), then that percentage sign is
treated as a newline character, and everything after the percentage sign passes to the command
as stdin input.
For example, the following job runs the /usr/local/bin/yearly_backup command at 09:00 on
February 3 every year:
0 9 3 2 * /usr/local/bin/yearly_backup
The following job sends an email that contains the Chime word to the owner of this job every five
minutes between and including 09:00 and 16:00, but only on each Friday in July.
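The job described would look like the following crontab entry (a sketch; the echo command relies on crond mailing the job's output to the owner):

```
*/5 9-16 * Jul 5 echo Chime
```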
The preceding 9-16 range of hours means that the job timer matches every hour from the
ninth hour (09:00) through the end of the sixteenth hour (16:59). The job starts
executing at 09:00, with the last execution at 16:55, because five minutes after 16:55
is 17:00, which is beyond the given range of hours.
Note
This example job sends the output as an email, because crond recognizes that
the job allowed output to go to the STDIO channel without redirection. Because
cron jobs run in a background environment without an output device (known as a
controlling terminal), crond buffers the output and creates an email to send it to
the specified user in the configuration. For system jobs, the email is sent to the
root account.
The following job runs the /usr/local/bin/daily_report command every working day
(Monday to Friday) two minutes before midnight.
58 23 * * 1-5 /usr/local/bin/daily_report
The following job executes the mutt command to send the Checking in mail message to the
developer@example.com recipient every working day (Monday to Friday), at 9 AM.
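Such an entry might look like the following sketch (the mutt options and the use of the percentage sign to supply the message body are assumptions):

```
0 9 * * 1-5 mutt -s "Checking in" developer@example.com % Just checking in.
```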
References
crond(8), crontab(1), and crontab(5) man pages
Guided Exercise
Outcomes
• Schedule recurring jobs to run as a non-privileged user.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. Log in to the servera machine as the student user.
2. Schedule a recurring job as the student user that appends the current date and time to
the /home/student/my_first_cron_job.txt file every two minutes. Use the date
command to display the current date and time. The job must run only from one day before
to one day after the current day. The job must not run on any other day.
2.1. Use the date command to display the current date and time. Note the day of the
week, which you need for the next steps.
Note
You can use the date -d "last day" +%a command to display the day before
the current time, and the date -d "next day" +%a command to display the day
after the current time.
2.2. Open the crontab file with the default text editor.
2.3. Insert the following line. Replace the range of days from one day before to one day
after the current time:
2.4. Press Esc and type :wq to save the changes and exit the editor. When the editor
exits, you should see the following output:
...output omitted...
crontab: installing new crontab
[student@servera ~]$
3. Use the crontab -l command to list the scheduled recurring jobs. Inspect the command
that you scheduled to run as a recurring job in the preceding step.
Verify that the job runs the /usr/bin/date command and appends its output to the
/home/student/my_first_cron_job.txt file.
6. Remove all the scheduled recurring jobs for the student user.
6.1. Remove all the scheduled recurring jobs for the student user.
6.2. Verify that no recurring jobs exist for the student user.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Objectives
Schedule commands to run on a repeating schedule with the system crontab file and directories.
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
The /etc/crontab file and other files in the /etc/cron.d/ directory define the recurring
system jobs. Always create custom crontab files in the /etc/cron.d/ directory to schedule
recurring system jobs. Place the custom crontab file in the /etc/cron.d directory to prevent a
package update from overwriting the /etc/crontab file. Packages that require recurring system
jobs place their crontab files in the /etc/cron.d/ directory with the job entries. Administrators
also use this location to group related jobs into a single file.
The crontab system also includes repositories for scripts to run every hour, day, week, and month.
These repositories are placed in the /etc/cron.hourly/, /etc/cron.daily/, /etc/
cron.weekly/, and /etc/cron.monthly/ directories. These directories contain executable
shell scripts, not crontab files.
Note
Use the chmod +x script_name command to make a script executable. A script
must be executable to run.
The /etc/anacrontab file ensures that scheduled jobs always run and are not skipped
accidentally because the system was turned off or hibernating. For example, when a system
job that runs daily was not executed at its specified time because the system was
rebooting, the job runs after the system becomes ready. A delay might occur before the
job starts, if the Delay in minutes parameter is specified in the /etc/anacrontab file.
Files in the /var/spool/anacron/ directory determine the daily, weekly, and monthly jobs.
When the crond daemon starts a job from the /etc/anacrontab file, it updates the timestamps
of those files. With this timestamp, you can determine the last time that the job executed. The
syntax of the /etc/anacrontab file is different from the regular crontab configuration files.
The /etc/anacrontab file contains four fields per line, as follows.
Period in days
Defines the interval in days for the job to run on a recurring schedule. This field accepts an
integer or a macro value. For example, the macro @daily is equivalent to the 1 integer, which
executes the job daily. Similarly, the macro @weekly is equivalent to the 7 integer, which
executes the job weekly.
Delay in minutes
Defines the time that the crond daemon must wait before it starts the job.
Job identifier
Identifies the unique name of the job in the log messages.
Command
The command to be executed.
The /etc/anacrontab file also contains environment variable declarations with the
NAME=value syntax. The START_HOURS_RANGE variable specifies the time interval for the jobs
to run. Jobs do not start outside this range. When a job does not run within this time interval on a
particular day, then the job must wait until the next day for execution.
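An illustrative /etc/anacrontab excerpt shows the variable declarations and the four fields (the values shown are typical defaults and might differ on your system):

```
SHELL=/bin/sh
START_HOURS_RANGE=3-22

# period-in-days  delay-in-minutes  job-identifier  command
1         5    cron.daily     nice run-parts /etc/cron.daily
7         25   cron.weekly    nice run-parts /etc/cron.weekly
@monthly  45   cron.monthly   nice run-parts /etc/cron.monthly
```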
Systemd Timer
The systemd timer unit activates another unit of a different type (such as a service), whose unit
name matches the timer unit name. The timer unit allows timer-based activation of other units.
The systemd timer unit logs timer events in system journals for easier debugging.
...output omitted...
[Unit]
Description=Run system activity accounting tool every 10 minutes
[Timer]
OnCalendar=*:00/10
[Install]
WantedBy=sysstat.service
The OnCalendar=*:00/10 option signifies that this timer unit activates the corresponding
sysstat-collect.service unit every 10 minutes. You might specify more complex time
intervals.
For example, a 2022-04-* 12:35,37,39:16 value against the OnCalendar option causes
the timer unit to activate the corresponding service unit at the 12:35:16, 12:37:16, and
12:39:16 times, every day during April 2022. You might also specify relative timers with the
OnUnitActiveSec option. For example, with the OnUnitActiveSec=15min option, the timer
unit triggers the corresponding unit to start 15 minutes after the last time that the timer unit
activated its corresponding unit.
Important
Do not modify unit configuration files under the /usr/lib/systemd/system directory,
because updates to the provider package can override the changes in those files. Instead,
create a copy of the configuration file in the /etc/systemd/system directory, and then
modify the copy. If two files exist with the same name in the /usr/lib/systemd/system
and /etc/systemd/system directories, then systemd parses the file in the
/etc/systemd/system directory.
After you change the timer unit configuration file, use the systemctl daemon-reload
command to ensure that the systemd timer unit loads the changes.
After reloading the systemd daemon configuration, use the systemctl command to activate the
timer unit.
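Putting the preceding guidance together, the workflow looks like the following sketch; sysstat-collect.timer is used here only as an example unit name:

```
[root@host ~]# cp /usr/lib/systemd/system/sysstat-collect.timer \
/etc/systemd/system/sysstat-collect.timer
[root@host ~]# vim /etc/systemd/system/sysstat-collect.timer
[root@host ~]# systemctl daemon-reload
[root@host ~]# systemctl enable --now sysstat-collect.timer
```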
References
crontab(5), anacron(8), anacrontab(5), systemd.time(7),
systemd.timer(5), and crond(8) man pages
RH134-RHEL9.0-en-5-20230516 51
Chapter 2 | Schedule Future Tasks
Guided Exercise
Outcomes
• Schedule a recurring system job to count the number of active users.
• Update the systemd timer unit that gathers system activity data.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. Log in to the servera machine as the student user and switch to the root user.
2. Schedule a recurring system job that generates a log message to indicate the number of
active users in the system. This job must run daily and use the w -h | wc -l command to
retrieve the number of active users in the system. Use the logger command to generate
the log message of currently active users.
2.1. Create the /etc/cron.daily/usercount script file with the following content:
#!/bin/bash
USERCOUNT=$(w -h | wc -l)
logger "There are currently ${USERCOUNT} active users"
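To try the script's logic outside of cron, a hypothetical variant like the following prints the message instead of sending it to the system log. The count_users function name is an invention for this sketch, and who is used as a stand-in for w -h; both print one line per active login session:

```shell
#!/bin/bash
# Hypothetical test variant of the usercount script: it prints the message
# instead of calling logger, so the output can be inspected directly.
# 'who' stands in for 'w -h'; both print one line per active session.
count_users() {
    local n
    n=$(who | wc -l)
    echo "There are currently ${n} active users"
}

count_users
```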
3. Install the sysstat package. The timer unit must trigger the service unit every ten minutes
to collect system activity data with the /usr/lib64/sa/sa1 shell script. Change the
timer unit configuration file to collect the system activity data every two minutes.
...output omitted...
# Activates activity collector every 2 minutes
[Unit]
Description=Run system activity accounting tool every 2 minutes
[Timer]
OnCalendar=*:00/2
[Install]
WantedBy=sysstat.service
3.6. Wait until the binary file is created in the /var/log/sa directory.
The while command, ls /var/log/sa | wc -l returns 0 when the file does not
exist, or returns 1 when the file exists. The while command pauses for one second
when the file is not present. The while loop exits when the file is present.
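The loop described above can be sketched as follows. The wait_for_file name is hypothetical, and the directory is a parameter here so the sketch can be tried on any path; the exercise uses /var/log/sa:

```shell
#!/bin/bash
# Sketch of the wait loop: block until the given directory contains at
# least one entry, checking once per second.
wait_for_file() {
    local dir=$1
    while [ "$(ls "$dir" | wc -l)" -eq 0 ]; do
        sleep 1    # directory still empty; pause before checking again
    done
}
```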
3.7. Verify that the binary file in the /var/log/sa directory was modified within two
minutes.
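One way to perform such a check is with the find command's -mmin test. This is a sketch, not the exercise's exact command, and the modified_recently function name is an invention:

```shell
#!/bin/bash
# Sketch: succeed when the given path was modified within the last two
# minutes. find prints the path only when its modification time is newer
# than 2 minutes (-mmin -2), so a non-empty result means "recent".
modified_recently() {
    [ -n "$(find "$1" -mmin -2 2>/dev/null)" ]
}
```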
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Objectives
Enable and disable systemd timers, and configure a timer that manages temporary files.
Commonly, daemons and scripts operate correctly only when their expected temporary files and
directories exist. Additionally, purging temporary files on persistent storage is necessary to prevent
disk space issues or stale working data.
Red Hat Enterprise Linux includes the systemd-tmpfiles tool, which provides a structured and
configurable method to manage temporary directories and files.
At system boot, one of the first systemd service units to launch is the systemd-tmpfiles-
setup service. This service runs the systemd-tmpfiles --create --remove
command, which reads instructions from the /usr/lib/tmpfiles.d/*.conf, /run/
tmpfiles.d/*.conf, and /etc/tmpfiles.d/*.conf configuration files. These configuration
files list the files and directories that the systemd-tmpfiles-setup service creates,
deletes, or secures with permissions.
A systemd timer unit configuration has a [Timer] section to indicate how to start the service
with the same name as the timer.
Use the following systemctl command to view the contents of the systemd-tmpfiles-
clean.timer unit configuration file.
[Unit]
Description=Daily Cleanup of Temporary Directories
Documentation=man:tmpfiles.d(5) man:systemd-tmpfiles(8)
ConditionPathExists=!/etc/initrd-release
[Timer]
OnBootSec=15min
OnUnitActiveSec=1d
In the preceding configuration, the OnBootSec=15min parameter indicates that the systemd-
tmpfiles-clean.service unit gets triggered 15 minutes after the system boots up.
The OnUnitActiveSec=1d parameter indicates that any further trigger to the systemd-
tmpfiles-clean.service unit happens 24 hours after the service unit was last activated.
After you change the timer unit configuration file, use the systemctl daemon-reload
command to ensure that the systemd daemon loads the new configuration.
For detailed information about the format of the configuration files for the systemd-tmpfiles
service, see the tmpfiles.d(5) man page. The syntax consists of the following columns: Type,
Path, Mode, UID, GID, Age, and Argument. Type refers to the action for the systemd-tmpfiles
service to take; for example, d to create a directory if it does not exist, or Z to recursively restore
SELinux contexts, file permissions, and ownership.
As an example of creating files and directories, consider a rule that creates the
/run/systemd/seats directory if it does not exist, with the root user and the root group as
owners, and with permissions of rwxr-xr-x. If the directory already exists, then the rule
takes no action. The systemd-tmpfiles service does not purge this directory automatically.
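In tmpfiles.d(5) syntax, such a rule reads as follows; the trailing dash in the Age field means that no age-based cleanup applies:

```
d /run/systemd/seats 0755 root root -
```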
Create the /home/student directory if it does not exist. If it does exist, then remove all its
contents. When the system runs the systemd-tmpfiles --clean command, it removes from
the directory all files that you did not access, change, or modify for more than one day.
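A rule matching that description uses the D type, which creates the directory and empties it at boot, with an Age of 1d for cleanup runs. The mode and ownership shown here are illustrative assumptions, because the description does not state them:

```
D /home/student 0700 student student 1d
```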
Another rule creates the /run/fstablink symbolic link, which points to the /etc/fstab
file. The service never purges this entry automatically.
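Such a rule uses the L type, whose last field is the symlink target; the dash fields leave mode and age handling at their defaults:

```
L /run/fstablink - root root - /etc/fstab
```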
• /etc/tmpfiles.d/*.conf
• /run/tmpfiles.d/*.conf
• /usr/lib/tmpfiles.d/*.conf
Use the files in the /etc/tmpfiles.d/ directory to configure custom temporary locations, and
to override vendor-provided defaults. The files in the /run/tmpfiles.d/ directory are volatile
files, which normally daemons use to manage their own runtime temporary files. Relevant RPM
packages provide the files in the /usr/lib/tmpfiles.d/ directory; therefore do not edit these
files.
If a file in the /run/tmpfiles.d/ directory has the same file name as a file in the /usr/lib/
tmpfiles.d/ directory, then the service uses the file in the /run/tmpfiles.d/ directory. If
a file in the /etc/tmpfiles.d/ directory has the same file name as a file in either the /run/
tmpfiles.d/ or the /usr/lib/tmpfiles.d/ directories, then the service uses the file in the
/etc/tmpfiles.d/ directory.
Given these precedence rules, you can override vendor-provided settings by copying the relevant
file to the /etc/tmpfiles.d/ directory and then editing it. By using these configuration
locations correctly, you can manage administrator-configured settings from a central configuration
management system, and package updates do not overwrite your configured settings.
Note
When testing new or modified configurations, apply only the commands from a
single configuration file at a time. Specify the name of the single configuration file
on the systemd-tmpfiles command line.
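For example, to apply only the rules from a single file, a command like the following can be used; example.conf is a hypothetical file name:

```
[root@host ~]# systemd-tmpfiles --create /etc/tmpfiles.d/example.conf
```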
References
systemd-tmpfiles(8), tmpfiles.d(5), stat(1), stat(2), and
systemd.timer(5) man pages
Guided Exercise
Outcomes
• Configure systemd-tmpfiles to remove unused temporary files from the /tmp
directory.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. Log in to the servera system as the student user and switch to the root user.
2. Configure the systemd-tmpfiles service to remove from the /tmp directory any files
that have been unused for more than five days. Ensure that a package update does not
overwrite the configuration files.
2.2. Search for the configuration line in the /etc/tmpfiles.d/tmp.conf file that
applies to the /tmp directory. Replace the existing age of the temporary files in
that configuration line with the new age of 5 days. Remove from the file all the other
lines, including the commented lines. You can use the vim /etc/tmpfiles.d/
tmp.conf command to edit the configuration file.
In the configuration, the q type behaves like the d type, and instructs the
systemd-tmpfiles service to create the /tmp directory if it does not exist. The
directory's octal permissions must be set to 1777. Both the owning user and group of
the /tmp directory must be root. The service must remove from the /tmp directory any
temporary files that have been unused for more than five days.
The /etc/tmpfiles.d/tmp.conf file should appear as follows:
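Based on the requirements in this step, the single remaining configuration line is expected to read:

```
q /tmp 1777 root root 5d
```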
3. Add a new configuration that ensures that the /run/momentary directory exists, with
user and group ownership set to the root user. The octal permissions for the directory
must be 0700. The configuration must purge from this directory any file that remains
unused for more than 30 seconds.
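A rule matching this step, placed in the /etc/tmpfiles.d/momentary.conf file that the later steps reference, looks like the following:

```
d /run/momentary 0700 root root 30s
```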
4. Verify that the systemd-tmpfiles --clean command removes from the /run/
momentary directory any file that has been unused for more than 30 seconds, based on the
systemd-tmpfiles configuration for the directory.
4.3. After your shell prompt returns, clean stale files from the /run/momentary
directory, based on the referenced rule in the /etc/tmpfiles.d/
momentary.conf configuration file.
The command removes the /run/momentary/test file, because the file remained
unused for more than 30 seconds, which matches the rule in the /etc/
tmpfiles.d/momentary.conf configuration file.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Quiz
1. Which command displays all the user jobs that you scheduled to run as deferred jobs?
a. atq
b. atrm
c. at -c
d. at --display
2. Which command removes the deferred user job with the job number 5?
a. at -c 5
b. atrm 5
c. at 5
d. at --delete 5
3. Which command displays all the scheduled recurring user jobs for the currently
logged-in user?
a. crontab -r
b. crontab -l
c. crontab -u
d. crontab -V
6. Which configuration file defines the settings for the system jobs that run daily, weekly,
and monthly?
a. /etc/crontab
b. /etc/anacrontab
c. /etc/inittab
d. /etc/sysconfig/crond
Solution
1. Which command displays all the user jobs that you scheduled to run as deferred jobs?
a. atq
b. atrm
c. at -c
d. at --display
2. Which command removes the deferred user job with the job number 5?
a. at -c 5
b. atrm 5
c. at 5
d. at --delete 5
3. Which command displays all the scheduled recurring user jobs for the currently
logged-in user?
a. crontab -r
b. crontab -l
c. crontab -u
d. crontab -V
6. Which configuration file defines the settings for the system jobs that run daily, weekly,
and monthly?
a. /etc/crontab
b. /etc/anacrontab
c. /etc/inittab
d. /etc/sysconfig/crond
Summary
• Deferred jobs or tasks are scheduled to run once in the future.
• Recurring system jobs accomplish, on a repeating schedule, administrative tasks with system-
wide impact.
• The systemd timer units can execute both the deferred and recurring jobs.
Chapter 3
Chapter 3 | Analyze and Store Logs
Objectives
Describe the basic Red Hat Enterprise Linux logging architecture to record events.
System Logging
The operating system kernel and other processes record a log of events that happen when the
system is running. These logs are used to audit the system and to troubleshoot problems. You can
use text utilities such as the less and tail commands to inspect these logs.
Red Hat Enterprise Linux uses a standard logging system that is based on the syslog protocol to
log the system messages. Many programs use the logging system to record events and to organize
them into log files. The systemd-journald and rsyslog services handle the syslog messages
in Red Hat Enterprise Linux 9.
The systemd-journald service is at the heart of the operating system event logging
architecture. The systemd-journald service collects event messages from many sources:
• System kernel
• Output from the early stages of the boot process
• Standard output and standard error from daemons
• Syslog events
The systemd-journald service restructures the logs into a standard format and writes them
into a structured, indexed system journal. By default, this journal is stored on a file system that
does not persist across reboots.
The rsyslog service reads syslog messages from the journal as the systemd-journald
service appends them. The rsyslog service then processes the syslog events, and records
them to its log files or forwards them to other services according to its own configuration.
The rsyslog service sorts and writes syslog messages to the log files that do persist across
reboots in the /var/log directory. The service also sorts the log messages to specific log files
according to the type of program that sent each message and the priority of each syslog message.
In addition to syslog message files, the /var/log directory contains log files from other services
on the system. The following table lists some useful files in the /var/log directory.
Some applications do not use the syslog service to manage their log messages. For example, the
Apache Web Server saves log messages to files in a subdirectory of the /var/log directory.
References
systemd-journald.service(8), rsyslogd(8), and rsyslog.conf(5) man
pages
For more information, refer to the Troubleshooting Problems Using Log Files section
in the Red Hat Enterprise Linux 9 Configuring Basic System Settings guide at
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-
single/configuring_basic_system_settings/index
Quiz
1. Which log file stores most syslog messages, except for the ones about authentication,
mail, scheduled jobs, and debugging?
a. /var/log/maillog
b. /var/log/boot.log
c. /var/log/messages
d. /var/log/secure
2. Which log file stores syslog messages about security and authentication operations in
the system?
a. /var/log/maillog
b. /var/log/boot.log
c. /var/log/messages
d. /var/log/secure
3. Which service sorts and organizes syslog messages into files in the /var/log
directory?
a. rsyslog
b. systemd-journald
c. auditd
d. tuned
Solution
1. Which log file stores most syslog messages, except for the ones about authentication,
mail, scheduled jobs, and debugging?
a. /var/log/maillog
b. /var/log/boot.log
c. /var/log/messages
d. /var/log/secure
2. Which log file stores syslog messages about security and authentication operations in
the system?
a. /var/log/maillog
b. /var/log/boot.log
c. /var/log/messages
d. /var/log/secure
3. Which service sorts and organizes syslog messages into files in the /var/log
directory?
a. rsyslog
b. systemd-journald
c. auditd
d. tuned
Objectives
Interpret events in relevant syslog files to troubleshoot problems or review system status.
The following table lists the standard syslog priorities in descending order:
The rsyslog service uses the facility and priority of log messages to determine how to handle
them. Rules that match on facility and priority are configured in the /etc/rsyslog.conf file
and in any file with the .conf extension in the /etc/rsyslog.d directory. Software packages
can add rules by installing an appropriate file in the /etc/rsyslog.d directory.
Each rule that controls how to sort syslog messages has a line in one of the configuration files.
The left side of each line indicates the facility and priority of the syslog messages that the rule
matches. The right side of each line indicates which file to save the log message in (or where else
to deliver the message). An asterisk (*) is a wildcard that matches all values.
For example, the following line in a configuration file in the /etc/rsyslog.d directory would
record messages that are sent to the authpriv facility at any priority to the
/var/log/secure file:
authpriv.* /var/log/secure
Sometimes, log messages match more than one rule in the rsyslog.conf file. In such cases, the
message is stored in more than one log file. To limit duplicate storage, the none keyword in the
priority field indicates that no messages for the indicated facility are stored in the given file.
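For example, the default configuration contains a rule similar to the following, which stores informational messages from all facilities in /var/log/messages, except messages from the mail, authpriv, and cron facilities, which have their own log files:

```
*.info;mail.none;authpriv.none;cron.none    /var/log/messages
```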
Instead of being logged to a file, syslog messages can also be printed to the terminals of all
logged-in users. The rsyslog.conf file has a setting to print all the syslog messages with the
emerg priority to the terminals of all logged-in users.
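That setting is similar to the following rule, where the omusrmsg output module writes the message to the terminals of logged-in users:

```
*.emerg    :omusrmsg:*
```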
Note
The syslog subsystem has many more features beyond the scope of this course.
To explore further, refer to the rsyslog.conf(5) man page and the extensive
HTML documentation at /usr/share/doc/rsyslog/html/index.html that
the rsyslog-doc package provides.
Log files are rotated on a schedule, typically over four weeks, and after each rotation the oldest
log file is discarded to free disk space. A scheduled job runs the logrotate command daily to
determine whether any log files need rotation. Most log files rotate weekly, but the logrotate
command rotates some log files more frequently, or less frequently, or when they reach a
specific size.
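These defaults come from directives in the /etc/logrotate.conf file similar to the following illustrative fragment:

```
# rotate log files weekly
weekly

# keep 4 weeks worth of backlogs
rotate 4
```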
Mar 20 20:11:48 localhost sshd[1433]: Failed password for student from 172.25.0.10
port 59344 ssh2
For example, to monitor for failed login attempts, run the tail command in one terminal, and
then, in another terminal, run the ssh command as the root user while a user tries to log in to
the system.
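The monitoring command in the first terminal might be the following; /var/log/secure is the file that receives the authentication messages shown in the example output:

```
[root@host ~]# tail -f /var/log/secure
```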
...output omitted...
Mar 20 09:01:13 host sshd[2712]: Accepted password for root from 172.25.254.254
port 56801 ssh2
Mar 20 09:01:13 host sshd[2712]: pam_unix(sshd:session): session opened for user
root by (uid=0)
To send a message to the rsyslog service to be recorded in the /var/log/boot.log log file,
execute the following logger command:
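The /var/log/boot.log file typically receives messages for the local7 facility, so a command similar to the following records a test entry; the message text is illustrative:

```
[root@host ~]# logger -p local7.notice "Log entry created on host"
```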
References
logger(1), tail(1), rsyslog.conf(5), and logrotate(8) man pages
rsyslog Manual
Guided Exercise
Outcomes
• Configure the rsyslog service to write all log messages with the debug priority to the
/var/log/messages-debug log file.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. Log in to the servera machine as the student user and switch to the root user.
2. Configure the rsyslog service on the servera machine to log all messages with the
debug or higher priority from any service to the new /var/log/messages-debug log file,
by creating the /etc/rsyslog.d/debug.conf configuration file.
*.debug /var/log/messages-debug
This configuration line logs syslog messages with any facility and with the debug or
higher priority level:
• The wildcard (*) in the facility field of the configuration line indicates any facility of
log messages.
3. Verify that all the log messages with the debug priority appear in the /var/log/
messages-debug log file.
3.1. Generate a log message with the user type and the debug priority.
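A command matching this description, using the message text that the next step expects, is:

```
[root@servera ~]# logger -p user.debug "Debug Message Test"
```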
3.2. View the last ten log messages from the /var/log/messages-debug log file,
and verify that you see the Debug Message Test message among the other log
messages.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Objectives
Find and interpret entries in the system journal to troubleshoot problems or review system status.
Important
In Red Hat Enterprise Linux, the memory-based /run/log directory holds the
system journal by default. The contents of the /run/log directory are lost when
the system is shut down. You can change the journald directory to a persistent
location, which is discussed later in this chapter.
To retrieve log messages from the journal, use the journalctl command. You can use the
journalctl command to view all messages in the journal, or to search for specific events based
on options and criteria. If you run the command as root, then you have full access to the journal.
Although regular users can also use the journalctl command, the system restricts them from
seeing certain messages.
The journalctl command highlights important log messages; messages with the notice or
warning priority are in bold text, whereas messages with the error priority or higher are in red
text.
The key to successful use of the journal for troubleshooting and auditing is to limit journal searches
to show only relevant output.
By default, the journalctl command -n option shows the last 10 log entries. You can adjust the
number of log entries with an optional argument that specifies how many log entries to display. For
example, to review the last five log entries, you can run the following journalctl command:
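Reconstructed from the description, the command reads as follows; the prompt is illustrative and the output is omitted:

```
[root@host ~]# journalctl -n 5
```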
Similar to the tail command, the journalctl command -f option outputs the last 10 lines of
the system journal and continues to output new journal entries when the journal appends them. To
exit the journalctl command -f option, use the Ctrl+C key combination.
To help to troubleshoot problems, you can filter the output of the journal by the priority of the
journal entries. The journalctl command -p option shows the journal entries with a specified
priority level (by name or by number) or higher. The journalctl command processes the debug,
info, notice, warning, err, crit, alert, and emerg priority levels, in ascending priority order.
As an example, run the following journalctl command to list journal entries with the err
priority or higher:
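Based on the description, the command is the following; output is omitted:

```
[root@host ~]# journalctl -p err
```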
You can show messages for a specified systemd unit by using the journalctl command -u
option and the unit name.
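For example, to show messages for the sshd service; the unit name here is illustrative:

```
[root@host ~]# journalctl -u sshd.service
```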
When looking for specific events, you can limit the output to a specific time frame. To limit the
output to a specific time range, the journalctl command has the --since option and the --
until option. Both options take a time argument in the "YYYY-MM-DD hh:mm:ss" format (the
double quotation marks are required to preserve the space in the option).
The journalctl command assumes that the day starts at 00:00:00 when you omit the time
argument. The command assumes the current day when you omit the day argument. Both options
take yesterday, today, and tomorrow as valid arguments in addition to the date and time field.
As an example, run the following journalctl command to list all journal entries from today's
records:
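That command is; output omitted:

```
[root@host ~]# journalctl --since today
```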
Run the following journalctl command to list all journal entries from 2022-03-11 20:30:00
to 2022-03-14 10:00:00:
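Reconstructed from the description, the command is:

```
[root@host ~]# journalctl --since "2022-03-11 20:30:00" --until "2022-03-14 10:00:00"
```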
You can also specify all entries since a relative time to the present. For example, to specify all
entries in the last hour, you can use the following command:
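One way to express that relative range is the following; the exact form of the time argument can vary:

```
[root@host ~]# journalctl --since "-1 hour"
```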
Note
You can use other, more sophisticated time specifications with the --since and --
until options. For some examples, see the systemd.time(7) man page.
In addition to the visible content of the journal, you can view additional log entries if you turn on
the verbose output. You can use any displayed extra field to filter the output of a journal query.
The verbose output is useful to reduce the output of complex searches for certain events in the
journal.
JOB_TYPE=stop
MESSAGE_ID=9d1aaa27d60140bd96365438aad20286
_HOSTNAME=host.lab.example.com
_CMDLINE=/usr/lib/systemd/systemd --switched-root --system --deserialize 31
_SELINUX_CONTEXT=system_u:system_r:init_t:s0
UNIT=user-1000.slice
MESSAGE=Removed slice User Slice of UID 1000.
INVOCATION_ID=0e5efc1b4a6d41198f0cf02116ca8aa8
JOB_ID=3220
_SOURCE_REALTIME_TIMESTAMP=1647335432625470
lines 46560-46607/46607 (END) q
The following list shows some fields of the system journal that you can use to search for relevant
lines to a particular process or event:
You can combine multiple system journal fields to form a granular search query with the
journalctl command. For example, the following journalctl command shows all related
journal entries to the sshd.service systemd unit from a process with PID 2110.
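Reconstructed from the description, the combined query is:

```
[root@host ~]# journalctl _SYSTEMD_UNIT=sshd.service _PID=2110
```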
Note
For a list of journal fields, consult the systemd.journal-fields(7) man page.
References
journalctl(1), systemd.journal-fields(7), and systemd.time(7) man
pages
For more information refer to the Troubleshooting Problems Using Log Files section
in the Red Hat Enterprise Linux 9 Configuring Basic System Settings guide at
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-
single/configuring_basic_system_settings/index#troubleshooting-problems-using-
log-files_getting-started-with-system-administration
Guided Exercise
Outcomes
• Search the system journal for entries to record events based on different criteria.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. From the workstation machine, open an SSH session to the servera machine as the
student user.
2. Use the journalctl command _PID=1 option to display only log events that originate
from the systemd PID 1 process on the servera machine. To quit from the journalctl
command, press q. The following output is an example and might differ on your system:
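The command described in this step is the following; output omitted:

```
[student@servera ~]$ journalctl _PID=1
```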
3. Use the journalctl command _UID=81 option to display all log events that originated
from a system service with a UID of 81 on the servera machine.
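The command described in this step is:

```
[student@servera ~]$ journalctl _UID=81
```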
4. Use the journalctl command -p warning option to display log events with a warning
or higher priority on the servera machine.
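The command described in this step is:

```
[student@servera ~]$ journalctl -p warning
```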
5. Display all recorded log events in the past 10 minutes from the current time on the
servera machine.
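One way to express this query is with a relative --since value; the exact form can vary:

```
[student@servera ~]$ journalctl --since "-10 minutes"
```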
Note
Online classrooms typically run on the UTC time zone. To obtain results that start
at 9:00 AM in your local time zone, adjust your --since value by the amount of your
offset from UTC. Alternatively, ignore the local time and use a value of 9:00 to
locate journal entries that occurred since 9:00 for the servera time zone.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Objectives
Configure the system journal to preserve the record of events when a server is rebooted.
The Storage parameter in the /etc/systemd/journald.conf file controls whether system
journals are volatile or persist across reboots, with the following values:
• persistent: Stores journals in the /var/log/journal directory, which persists across
reboots. If the /var/log/journal directory does not exist, then the systemd-journald
service creates it.
• volatile: Stores journals in the volatile /run/log/journal directory. Because the /run
file system is temporary and exists only in the runtime memory, the data in it, including system
journals, does not persist across a reboot.
• auto: If the /var/log/journal directory exists, then the systemd-journald service uses
persistent storage; otherwise it uses volatile storage. This action is the default if you do not set
the Storage parameter.
• none: Do not use any storage. The system drops all logs, but you can still forward the logs.
The advantage of persistent system journals is that the historical data is available immediately
at boot. However, even with a persistent journal, the system does not keep all data forever. The
journal has a built-in log rotation mechanism that triggers monthly. In addition, the system does
not allow the journals to get larger than 10% of the file system that they are on, or leaving less
than 15% of the file system free. You can modify these values for both the runtime and persistent
journals in the /etc/systemd/journald.conf configuration file.
The systemd-journald process logs the current limits on the size of the journal when it starts.
The following command output shows the journal entries that reflect the current size limits:
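Such a query can be formed by piping the journalctl output through grep; output is omitted here:

```
[user@host ~]$ journalctl | grep -E 'Runtime Journal|System Journal'
```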
Note
In the previous grep command, the vertical bar (|) symbol acts as an or operator.
That is, the grep command matches any line with either the Runtime Journal
string or the System Journal string from the journalctl command output.
This command fetches the current size limits on the volatile (Runtime) journal store
and on the persistent (System) journal store.
[Journal]
Storage=persistent
...output omitted...
If the systemd-journald service restarts successfully, then the service creates subdirectories
in the /var/log/journal directory. These subdirectories have long hexadecimal names and
contain files with the .journal extension. The .journal binary files store the structured and
indexed journal entries.
When the system journals persist after a reboot, the journalctl command output includes
entries from the current system boot as well as from previous system boots. To limit the output
to a specific system boot, use the journalctl command -b option. The following journalctl
command retrieves the entries from the first system boot only:
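Reconstructed from the description:

```
[root@host ~]# journalctl -b 1
```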
The following journalctl command retrieves the entries from the second system boot only. The
argument is meaningful only if the system was rebooted at least twice:
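That command is:

```
[root@host ~]# journalctl -b 2
```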
You can list the system boot events that the journalctl command recognizes, by using the --
list-boots option.
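For example; output omitted:

```
[root@host ~]# journalctl --list-boots
```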
The following journalctl command retrieves the entries from the current system boot only:
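Without an argument, the -b option selects the current boot:

```
[root@host ~]# journalctl -b
```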
Note
When debugging a system crash with a persistent journal, usually you must limit
the journal query to the reboot before the crash happened. You can use the
journalctl command -b option with a negative number to indicate how many
earlier system boots to include in the output. For example, the journalctl -b -1
command limits the output to only the previous boot.
References
systemd-journald.conf(5), systemd-journald(8) man pages
For more information, refer to the Troubleshooting Problems Using Log Files section
in the Red Hat Enterprise Linux 9 Configuring Basic System Settings guide at
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-
single/configuring_basic_system_settings/index#troubleshooting-problems-using-
log-files_getting-started-with-system-administration
Guided Exercise
Outcomes
• Configure the system journal to preserve its data after a reboot.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. From the workstation machine, log in to the servera machine as the student user.
2. As the superuser, confirm that the /var/log/journal directory does not exist. Use the
ls command to list the /var/log/journal directory contents. Use the sudo command
to elevate the student user privileges. If prompted, use the student password.
You can type /Storage=auto in the vim editor command mode to search for the
Storage=auto line.
...output omitted...
[Journal]
Storage=persistent
...output omitted...
4. Verify that the systemd-journald service on the servera machine preserves its
journals so that they persist after a reboot.
The SSH connection terminates as soon as you restart the servera machine.
4.3. Verify that a subdirectory with a long hexadecimal name exists in the /var/log/
journal directory. You can find the journal files in that directory. The subdirectory
name on your system might be different.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Objectives
Maintain accurate time synchronization with Network Time Protocol (NTP) and configure the time
zone to ensure correct time stamps for events recorded by the system journal and logs.
The timedatectl command shows an overview of the current time-related system settings,
including the current time, time zone, and NTP synchronization settings of the system.
You can list a database of time zones with the timedatectl command list-timezones
option:
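For example (the list is long; output abridged):

```
[user@host ~]$ timedatectl list-timezones
Africa/Abidjan
Africa/Accra
...output omitted...
```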
The Internet Assigned Numbers Authority (IANA) provides a public time zone database, and the
timedatectl command bases the time zone names on that database. IANA names time zones
based on the continent or ocean, and then typically (not always) the largest city within the time
zone region. For example, most of the US Mountain time zone is America/Denver.
Some localities inside the time zone have different daylight saving time rules. For example, in the
US, much of the state of Arizona (US Mountain time) does not change to daylight saving time, and
is in the America/Phoenix time zone.
Use the tzselect command to identify the correct time zone name. This command interactively
prompts the user with questions about the system's location, and outputs the name of the correct
time zone. It does not change the system's time zone setting.
The root user can change the system setting to update the current time zone with the
timedatectl command set-timezone option. For example, the following timedatectl
command updates the current time zone to America/Phoenix.
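A sketch of that command:

```
[root@host ~]# timedatectl set-timezone America/Phoenix
```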
Note
You can set a server's time zone to Coordinated Universal Time (UTC). The
tzselect command does not include the name of the UTC time zone. Use the
timedatectl set-timezone UTC command to set the system's current time
zone to UTC.
Use the timedatectl command set-time option to change the system's current time. Specify
the time in the "YYYY-MM-DD hh:mm:ss" format, where you can omit either the
date or the time. For example, the following timedatectl command changes the time to
09:00:00.
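A sketch of that command:

```
[root@host ~]# timedatectl set-time 09:00:00
```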
Note
The previous example might fail with the "Failed to set time: Automatic time
synchronization is enabled" error message. In that case, first disable the automatic
time synchronization before manually setting the date or time, as explained after
this note.
The timedatectl command set-ntp option enables or disables NTP synchronization for
automatic time adjustment. The option requires either a true or a false argument to turn it on
or off. For example, the following timedatectl command turns off NTP synchronization.
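A sketch of that command:

```
[root@host ~]# timedatectl set-ntp false
```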
Note
In Red Hat Enterprise Linux 9, the timedatectl set-ntp command adjusts
whether the chronyd NTP service is enabled. Other Linux distributions might use
this setting to adjust a different NTP or a Simple Network Time Protocol (SNTP)
service.
Enabling or disabling NTP with other utilities in Red Hat Enterprise Linux, such as in
the graphical GNOME Settings application, also updates this setting.
By default, the chronyd service uses servers from the NTP Pool Project to synchronize time and
requires no additional configuration. You might need to change the NTP servers for a machine that
runs on an isolated network.
The stratum of an NTP time source indicates its quality, based on the number of hops
between the machine and a high-performance reference clock. The reference clock
is a stratum 0 time source. An NTP server that is directly attached to the reference clock is a
stratum 1 time source, and a machine that synchronizes time from that NTP server is a stratum 2
time source.
The server and the peer are the two categories of time sources that you can declare in the /
etc/chrony.conf configuration file. The server is one stratum above the local NTP server, and
the peer is at the same stratum level. You can define multiple servers and peers in the chronyd
configuration file, one per line.
The first argument of the server line is the IP address or DNS name of the NTP server. Following
the server IP address or name, you can list a series of options for the server. Red Hat recommends
using the iburst option, because then the chronyd service takes four measurements in a
short time period for a more accurate initial clock synchronization after the service starts. For
more information about the chronyd configuration file options, use the man 5 chrony.conf
command.
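A server line with the iburst option looks like this sketch (the server name is an example):

```
# /etc/chrony.conf
server classroom.example.com iburst
```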
Restart the service after pointing the chronyd service to the classroom.example.com local
time source.
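Assuming the standard chronyd unit name, the restart is:

```
[root@host ~]# systemctl restart chronyd
```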
The chronyc command acts as a client to the chronyd service. After setting up NTP
synchronization, verify that the local system is seamlessly using the NTP server to synchronize
the system clock, by using the chronyc sources command. For more verbose output with
additional explanations about the output, use the chronyc sources -v command.
  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current best, '+' = combined, '-' = not combined,
| /             'x' = may be in error, '~' = too variable, '?' = unusable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 172.25.254.254                3   6    17    26  +2957ns[+2244ns] +/-   25ms
The asterisk character (*) in the S (Source state) field indicates that the chronyd service uses
the classroom.example.com server as its time source, and that the machine is currently
synchronized to that NTP server.
References
timedatectl(1), tzselect(8), chronyd(8), chrony.conf(5), and chronyc(1)
man pages
Guided Exercise
Outcomes
• Change the time zone on a server.
• Configure the server to synchronize its time with an NTP time source.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. Log in to the servera machine as the student user.
2. For this exercise, pretend that the servera machine is relocated to Haiti and that you
need to update the time zone. Elevate the privileges of the student user to run the
timedatectl command to update the time zone.
Haiti
You can make this change permanent for yourself by appending the line
TZ='America/Port-au-Prince'; export TZ
to the file '.profile' in your home directory; then log out and log in again.
Here is that TZ value again, this time on standard output so that you
can use the /usr/bin/tzselect command in shell scripts:
America/Port-au-Prince
2.3. Verify that you correctly set the time zone to America/Port-au-Prince.
3. Configure the chronyd service on the servera machine to synchronize the system time
with the classroom.example.com server as the NTP time source.
...output omitted...
server classroom.example.com iburst
...output omitted...
3.2. Enable time synchronization on the servera machine. The command activates the
NTP server with the settings from the /etc/chrony.conf configuration file. That
command might activate either the chronyd or the ntpd service, depending on
which service is currently installed on the system.
Note
If the output shows that the clock is not synchronized, then wait for a few seconds
and rerun the timedatectl command. It takes a few seconds to successfully
synchronize the time settings with the time source.
4.2. Verify that the servera machine currently synchronizes its time settings with the
classroom.example.com time source.
The output shows an asterisk character (*) in the source state (S) field for the
classroom.example.com NTP time source. The asterisk indicates that the local
system time successfully synchronizes with the NTP time source.
  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current best, '+' = combined, '-' = not combined,
| /             'x' = may be in error, '~' = too variable, '?' = unusable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 172.25.254.254                2   6   377    33    +84us[ +248us] +/-   21ms
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Lab
Outcomes
• Update the time zone on an existing server.
• Configure a new log file to store all messages for authentication failures.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. Log in to the serverb machine as the student user.
2. Pretend that the serverb machine is relocated to Jamaica and that you must update the
time zone to America/Jamaica. Verify that you correctly set the appropriate time zone.
3. View the recorded log events in the previous 30 minutes on the serverb machine.
4. Create the /etc/rsyslog.d/auth-errors.conf file. Configure the rsyslog service to
write the Logging test authpriv.alert message to the /var/log/auth-errors
file. Use the authpriv facility and the alert priority.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Solution
Outcomes
• Update the time zone on an existing server.
• Configure a new log file to store all messages for authentication failures.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. Log in to the serverb machine as the student user.
2. Pretend that the serverb machine is relocated to Jamaica and that you must update the
time zone to America/Jamaica. Verify that you correctly set the appropriate time zone.
Jamaica
You can make this change permanent for yourself by appending the line
TZ='America/Jamaica'; export TZ
to the file '.profile' in your home directory; then log out and log in again.
Here is that TZ value again, this time on standard output so that you
can use the /usr/bin/tzselect command in shell scripts:
America/Jamaica
2.2. Elevate the student user privileges to update the time zone of the serverb server to
America/Jamaica.
2.3. Verify that you successfully set the time zone to America/Jamaica.
3. View the recorded log events in the previous 30 minutes on the serverb machine.
3.2. View the recorded log events in the previous 30 minutes on the serverb machine.
4.1. Create the /etc/rsyslog.d/auth-errors.conf file and specify the new /var/
log/auth-errors file as the destination for authentication and security messages.
authpriv.alert /var/log/auth-errors
4.2. Restart the rsyslog service to apply the configuration file changes.
4.3. Use the logger -p command to write the Logging test authpriv.alert
message to the /var/log/auth-errors file. Use the authpriv facility and the
alert priority.
4.4. Verify that the /var/log/auth-errors file contains the log entry with the Logging
test authpriv.alert message.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Summary
• The systemd-journald and rsyslog services capture and write log messages to the
appropriate files.
• Periodic rotation of log files prevents them from filling up the file-system space.
• By default, the systemd journals are temporary and do not persist across a reboot.
• The chronyd service helps to synchronize time settings with a time source.
• You can update the time zone of the server based on its location.
Chapter 4
Chapter 4 | Archive and Transfer Files
Objectives
Archive files and directories into a compressed file with tar, and extract the contents of an
existing tar archive.
Note
The original, ubiquitous zip compression and file packaging utility uses the PKZIP
(Phil Katz's ZIP for MSDOS systems) algorithm, which has evolved significantly, and
is supported on RHEL with the zip and unzip commands. Many other compression
algorithms have been developed since zip was introduced, and each has its
advantages. For creating compressed archives for general use, any tar-supported
compression algorithm is acceptable.
Archive files are used to create manageable personal backups, or to simplify transferring a set
of files across a network when other methods, such as rsync, are unavailable or might be more
complex. Archive files can be created with or without using compression to reduce the archive file
size.
On Linux, the tar utility is the common command to create, manage, and extract archives. Use
the tar command to gather multiple files into a single archive file. A tar archive is a structured
sequence of file metadata and content, from which you can list or extract individual files.
Files can be compressed during creation by using one of the supported compression algorithms.
The tar command can list the contents of an archive without extracting, and can extract original
files directly from both compressed and uncompressed archives.
• -v or --verbose : Show the files that are being archived or extracted during the tar
operation.
• -f or --file : Follow this option with the archive file name to create or open.
• -p or --preserve-permissions : Preserve the original file permissions when extracting.
• --xattrs : Enable extended attribute support, and store extended file attributes.
• --selinux : Enable SELinux context support, and store SELinux file contexts.
The following tar command compression options are used to select an algorithm:
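These are the standard tar compression flags (long options in parentheses; the pairing with algorithms matches the descriptions later in this section):

```
-z (--gzip)   filter the archive through gzip
-j (--bzip2)  filter the archive through bzip2
-J (--xz)     filter the archive through xz
```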
Note
The tar command still supports the legacy option style that does not use a dash
(-) character. You might find this syntax in legacy scripts or documentation, and the
behavior is essentially the same. For command consistency, Red Hat recommends
using the short- or long-option styles instead.
Create an Archive
To create an archive with the tar command, use the create and file options with the archive file
name as the first argument, followed by a list of files and directories to include in the archive.
The tar command recognizes absolute and relative file name syntax. By default, tar removes the
leading forward slash (/) character from absolute file names, so that files are stored internally
with relative path names. This technique is safer, because extracting files with absolute path
names can overwrite existing files at those locations. Files that are archived with relative path
names can be extracted to a new directory without overwriting existing files.
The following command creates the mybackup.tar archive to contain the myapp1.log,
myapp2.log, and myapp3.log files from the user's home directory. If a file with the same name
as the requested archive exists in the target directory, then the tar command overwrites the file.
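A runnable sketch of this operation in a scratch directory (the third file name, myapp3.log, is assumed):

```shell
# Create three empty sample log files, archive them, then list the archive.
cd "$(mktemp -d)"
touch myapp1.log myapp2.log myapp3.log
tar -cf mybackup.tar myapp1.log myapp2.log myapp3.log
tar -tf mybackup.tar
```

The -c option creates the archive and -f names the archive file; -t lists its members.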
A user must have read permissions on the target files that are being archived. For example,
creating an archive in the /etc directory requires root privileges, because only privileged users
can read all /etc files. An unprivileged user can create an archive of the /etc directory, but the
archive excludes files that the user cannot read, and directories for which the user lacks the read
and execute permissions.
In this example, the root user creates the /root/etc-backup.tar archive of the /etc
directory.
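A sketch of that command (GNU tar prints a note about stripping the leading slash):

```
[root@host ~]# tar -cf /root/etc-backup.tar /etc
tar: Removing leading `/' from member names
```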
Important
Extended file attributes, such as access control lists (ACL) and SELinux file
contexts, are not preserved by default in an archive. Use the --acls, --selinux,
and --xattrs options to include POSIX ACLs, SELinux file contexts, and other
extended attributes, respectively.
List the contents of the /root/etc.tar archive and then extract its files to the /root/
etcbackup directory:
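A sketch of the listing and extraction (member list illustrative):

```
[root@host ~]# tar -tf /root/etc.tar
etc/fstab
etc/crypttab
...output omitted...
[root@host ~]# mkdir /root/etcbackup
[root@host ~]# cd /root/etcbackup
[root@host etcbackup]# tar -xf /root/etc.tar
```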
When you extract files from an archive, the current umask is used to modify each extracted file's
permissions. Instead, use the tar command p option to preserve the original archived permissions
for extracted files. The --preserve-permissions option is enabled by default for a superuser.
• gzip compression is the oldest and fastest method, and is widely available across platforms.
• bzip2 compression creates smaller archives but is less widely available than gzip.
• xz compression is newer, and offers the best compression ratio of the available methods.
The effectiveness of any compression algorithm depends on the type of data that is compressed.
Previously compressed data files, such as picture formats or RPM files, typically do not
significantly compress further.
Create the /root/etcbackup.tar.gz archive with gzip compression from the contents of the
/etc directory:
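A sketch, using the -z option for gzip:

```
[root@host ~]# tar -czf /root/etcbackup.tar.gz /etc
tar: Removing leading `/' from member names
```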
Create the /root/logbackup.tar.bz2 archive with bzip2 compression from the contents of
the /var/log directory:
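A sketch, using the -j option for bzip2:

```
[root@host ~]# tar -cjf /root/logbackup.tar.bz2 /var/log
tar: Removing leading `/' from member names
```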
Create the /root/sshconfig.tar.xz archive with xz compression from the contents of the
/etc/ssh directory:
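A sketch, using the -J option for xz:

```
[root@host ~]# tar -cJf /root/sshconfig.tar.xz /etc/ssh
tar: Removing leading `/' from member names
```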
After creating an archive, verify its table of contents with the tar command tf options. It is not
necessary to specify the compression option when listing a compressed archive file, because
the compression type is read from the archive's header. List the archived content in the /root/
etcbackup.tar.gz file, which uses the gzip compression:
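A sketch of that listing (member list illustrative):

```
[root@host ~]# tar -tf /root/etcbackup.tar.gz
etc/fstab
etc/crypttab
...output omitted...
```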
Listing a compressed tar archive works in the same way as listing an uncompressed tar archive.
Use the tar command with the tf option to verify the content of the compressed archive before
extracting its contents:
The gzip, bzip2, and xz algorithms are also implemented as stand-alone commands for
compressing individual files without creating an archive. With these commands, you cannot create
a single compressed file of multiple files, such as a directory. As previously discussed, to create
a compressed archive of multiple files, use the tar command with your preferred compression
option. To uncompress a single compressed file or a compressed archive file without extracting its
contents, use the gunzip, bunzip2, and unxz stand-alone commands.
The gzip and xz commands provide an -l option to view the uncompressed size of a compressed
single or archive file. Use this option to verify that enough space is available before uncompressing
or extracting a file.
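A runnable sketch with a throwaway file; xz -l behaves analogously for .xz files (the exact sizes in the report vary):

```shell
# Compress a generated file, then report its sizes without uncompressing it.
cd "$(mktemp -d)"
seq 1 10000 > data.txt
gzip data.txt        # replaces data.txt with data.txt.gz
gzip -l data.txt.gz  # columns: compressed, uncompressed, ratio, name
```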
References
tar(1), gzip(1), gunzip(1), bzip2(1), bunzip2(1), xz(1), and unxz(1) man pages
Guided Exercise
Outcomes
• Archive a directory tree and extract the archive content to another location.
Instructions
1. From workstation, log in to servera as the student user and switch to the root user.
2. Create an archive of the /etc directory with gzip compression. Save the archive file as /
tmp/etc.tar.gz.
3. Verify that the etc.tar.gz archive contains the files from the /etc directory.
4. Create the /backuptest directory. Verify that the etc.tar.gz backup file is a valid
archive by decompressing the file to the /backuptest directory.
4.3. List the contents of the /backuptest directory. Verify that the directory contains
the /etc directory backup files.
[root@servera backuptest]# ls -l
total 12
drwxr-xr-x. 95 root root 8192 Feb 8 10:16 etc
[root@servera backuptest]# ls -l etc
total 1228
-rw-r--r--. 1 root root 12 Feb 24 05:25 adjtime
-rw-r--r--. 1 root root 1529 Jun 23 2020 aliases
drwxr-xr-x. 2 root root 4096 Mar 3 04:48 alternatives
...output omitted...
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Objectives
Transfer files to or from a remote system securely with SSH.
With the sftp command, specify a remote location for the source or destination of the files to
copy. For the format of the remote location, use [user@]host:/path. The user@ part of the
argument is optional; if it is missing, then the sftp command uses your current local username.
When you run the sftp command, your terminal provides an sftp> prompt.
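For example (host and user names illustrative):

```
[user@host ~]$ sftp remoteuser@remotehost
Connected to remotehost.
sftp>
```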
The interactive sftp session accepts various commands that work the same way in the remote
file system as in the local file system, such as the ls, cd, mkdir, rmdir, and pwd commands. The
put command uploads a file to the remote system. The get command downloads a file from the
remote system. The exit command exits the sftp session.
List the available sftp commands by using the help command in the sftp session:
sftp> help
Available commands:
bye Quit sftp
cd path Change remote directory to 'path'
chgrp [-h] grp path Change group of file 'path' to 'grp'
chmod [-h] mode path Change permissions of file 'path' to 'mode'
chown [-h] own path Change owner of file 'path' to 'own'
...output omitted...
In an sftp session, you might run some commands on your local host. For most available
commands, add the l character before the command. For example, the pwd command prints the
current working directory on the remote host. To print the current working directory on your local
host, use the lpwd command.
sftp> pwd
Remote working directory: /home/remoteuser
sftp> lpwd
Local working directory: /home/user
The next example uploads the local /etc/hosts file to the newly created /home/
remoteuser/hostbackup directory on the remotehost machine. The put command takes the
path of a local file; the sftp session uploads it to the current working directory on the remote
host, in this case the /home/remoteuser/hostbackup directory:
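A sketch of that session (output illustrative):

```
sftp> mkdir hostbackup
sftp> cd hostbackup
sftp> put /etc/hosts
Uploading /etc/hosts to /home/remoteuser/hostbackup/hosts
```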
To copy a whole directory tree recursively, use the sftp command -r option. The following
example recursively copies the /home/user/directory local directory to the remotehost
machine.
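A sketch, assuming that the -r flag makes the put command recursive within the session:

```
[user@host ~]$ sftp -r remoteuser@remotehost
sftp> put /home/user/directory
```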
To download the /etc/yum.conf file from the remote host to the current directory on the local
system, execute the get /etc/yum.conf command, and then exit the sftp session.
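A sketch of that session (output illustrative):

```
sftp> get /etc/yum.conf
Fetching /etc/yum.conf to yum.conf
sftp> exit
```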
To get a remote file with the sftp command on a single command line, without opening an
interactive session, use the following syntax. You cannot use single command-line syntax to put
files on a remote host.
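A sketch of the single command-line form (the remote file name is an example):

```
[user@host ~]$ sftp remoteuser@remotehost:/home/remoteuser/remotefile
```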
The scp Secure Copy command, which is also part of the OpenSSH suite, copies files from a
remote system to the local system, or from the local system to a remote system. The command
uses the SSH server to authenticate and encrypt data during transfer.
Although some scp vulnerabilities were fixed in recent years, not all of them can be fixed while
maintaining backward compatibility. For this reason, Red Hat recommends no longer
using the scp command in new applications or scripts, and instead using other
utilities, such as the sftp or rsync commands, to copy files to or from a remote
host.
You can specify a remote location for the source or destination of the files that you are copying.
As with the sftp command, the scp command uses [user@]host to identify the target system
and username. If you do not specify a user, then the command attempts to log in with your local
username as the remote username. When you run the command, your scp client authenticates
to the remote SSH server as with the ssh command, by using key-based authentication or by
prompting you for your password.
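Sketches of both directions (file names illustrative):

```
[user@host ~]$ scp /etc/hosts remoteuser@remotehost:/home/remoteuser
[user@host ~]$ scp remoteuser@remotehost:/etc/hostname /home/user
```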
References
sftp(1) and scp(1) man pages
Guided Exercise
Outcomes
• Copy files from a remote host to a directory on the local machine.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. Use the ssh command to log in to servera as the student user.
2. Use the sftp command to copy the /etc/ssh directory from the serverb machine to
the /home/student/serverbackup directory on the servera machine.
2.2. Use the sftp command to open a session to the serverb machine. Only the root
user can read all the content in the /etc/ssh directory. When prompted, enter
redhat as the password.
2.3. Change the local current directory to the newly created /home/student/
serverbackup directory.
2.4. Recursively copy the /etc/ssh directory from the serverb machine to the /home/
student/serverbackup directory on the servera machine.
2.5. Exit from the sftp session. Verify that the /etc/ssh directory from the serverb
machine is copied to the /home/student/serverbackup directory on the
servera machine.
sftp> exit
[student@servera ~]$ ls -lR ~/serverbackup
/home/student/serverbackup:
total 4
drwxr-xr-x. 4 student student 4096 Mar 21 12:01 ssh
/home/student/serverbackup/ssh:
total 600
-rw-r--r--. 1 student student 578094 Mar 21 12:01 moduli
-rw-r--r--. 1 student student 1921 Mar 21 12:01 ssh_config
drwxr-xr-x. 2 student student 52 Mar 21 12:01 ssh_config.d
-rw-------. 1 student student 3730 Mar 21 12:01 sshd_config
drwx------. 2 student student 28 Mar 21 12:01 sshd_config.d
-rw-r-----. 1 student student 505 Mar 21 12:01 ssh_host_ecdsa_key
-rw-r--r--. 1 student student 173 Mar 21 12:01 ssh_host_ecdsa_key.pub
-rw-r-----. 1 student student 399 Mar 21 12:01 ssh_host_ed25519_key
-rw-r--r--. 1 student student 93 Mar 21 12:01 ssh_host_ed25519_key.pub
-rw-r-----. 1 student student 2602 Mar 21 12:01 ssh_host_rsa_key
-rw-r--r--. 1 student student 565 Mar 21 12:01 ssh_host_rsa_key.pub
/home/student/serverbackup/ssh/ssh_config.d:
total 8
-rw-r--r--. 1 student student 36 Mar 21 12:01 01-training.conf
/home/student/serverbackup/ssh/sshd_config.d:
total 4
-rw-------. 1 student student 719 Mar 21 12:01 50-redhat.conf
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Objectives
Efficiently and securely synchronize the contents of a local file or directory with a remote server
copy.
An advantage of the rsync command is that it copies files securely and efficiently between a local
system and a remote system. Whereas an initial directory synchronization takes about the same
time as copying it, subsequent synchronizations copy only the differences over the network, which
substantially accelerates updates.
Use the rsync command -n option for a dry run. A dry run simulates what happens when the
command is executed. The dry run shows the changes that the rsync command would perform
when executing the command. Perform a dry run before the actual rsync command operation to
ensure that no critical files are overwritten or deleted.
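A sketch of a dry run (the -n option reports changes without making them):

```
[root@host ~]# rsync -n -av /var/log /tmp
```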
When synchronizing with the rsync command, two standard options are the -v and -a options.
The rsync command -v or --verbose option provides a more detailed output. This option is
helpful for troubleshooting and viewing live progress.
The rsync command -a or --archive option enables "archive mode". This option enables
recursive copying and turns on many valuable options to preserve most characteristics of the files.
Archive mode is the same as specifying the following options:
Option Description
-r, --recursive Synchronize the whole directory tree recursively
-l, --links Synchronize symbolic links
-p, --perms Preserve permissions
-t, --times Preserve time stamps
-g, --group Preserve group ownership
-o, --owner Preserve the file owner
-D Preserve device files and special files
Archive mode does not preserve hard links, because it might add significant time to the
synchronization. Use the rsync command -H option to preserve hard links too.
Note
To include extended attributes when syncing files, add these options to the rsync
command:
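These are presumably the standard rsync flags for that purpose:

```
-A, --acls    preserve POSIX ACLs
-X, --xattrs  preserve extended attributes
```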
You can use the rsync command to synchronize the contents of a local file or directory with a file
or directory on a remote machine, with either machine as the source. You can also synchronize the
contents of two local files or directories on the same machine.
Like the sftp command, the rsync command specifies remote locations in the [user@]host:/
path format. The remote location can be either the source or the destination system, provided
that one of the two machines is local.
You must be the root user on the destination system to preserve file ownership. If the destination
is remote, then authenticate as the root user. If the destination is local, then you must run the
rsync command as the root user.
In this example, synchronize the local /var/log directory to the /tmp directory on the hosta
system:
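A sketch of that command (host name from the text):

```
[root@host ~]# rsync -av /var/log hosta:/tmp
```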
In the same way, the /var/log remote directory on the hosta machine synchronizes to the /tmp
directory on the host machine:
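A sketch of the reverse direction:

```
[root@host ~]# rsync -av hosta:/var/log /tmp
```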
The following example synchronizes the contents of the /var/log directory to the /tmp
directory on the same machine:
[user@host ~]$ su -
Password: password
[root@host ~]# rsync -av /var/log /tmp
sending incremental file list
log/
log/README
log/boot.log
...output omitted...
log/tuned/tuned.log
Important
Correctly specifying a trailing slash on the source directory is important. A source
directory with a trailing slash synchronizes only the contents of the directory, without
including the directory itself; the contents are synced directly into the destination
directory. Without the trailing slash, the source directory itself is synced into the
destination directory, so the source directory's contents end up in a new subdirectory
of the destination.
In this example, the content of the /var/log/ directory is synchronized to the /tmp directory
instead of the log directory being created in the /tmp directory.
References
rsync(1) man page
Guided Exercise
Outcomes
• Use the rsync command to synchronize the contents of a local directory with a copy on a
remote server.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. On the workstation machine, use the ssh command to log in to the servera machine
as the student user, and then switch to the root user.
2. Open a new terminal window, and log in to the serverb machine as the student user.
3.2. On the servera machine, use the rsync command to synchronize the /var/log
directory tree on the servera machine to the /home/student/serverlogs
directory on the serverb machine. Only the root user can read all the /var/log
directory contents on the servera machine. Transfer all the files in the initial
synchronization.
5. Use the rsync command to securely synchronize from the /var/log directory tree on
the servera machine to the /home/student/serverlogs directory on the serverb
machine. This time, only the changed log files are transferred.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Lab
Outcomes
• Synchronize a remote directory to a local directory.
• Extract an archive.
This command prepares your environment and ensures that all required resources are
available. It also installs SSH keys on your systems so that you can transfer files without
entering passwords.
Instructions
1. On serverb, synchronize the /etc directory tree from servera to the /configsync
directory.
2. Create a configfile-backup-servera.tar.gz archive with the /configsync
directory contents.
3. Securely copy the /root/configfile-backup-servera.tar.gz archive file from
serverb to the /home/student directory on workstation.
4. On workstation, extract the contents to the /tmp/savedconfig/ directory.
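The archive and extraction steps can be sketched as follows. The paths and file names here are illustrative stand-ins for the lab's graded paths, not the lab solution itself:

```shell
# Stand-in for the /configsync directory from the lab
work=$(mktemp -d)
mkdir -p "$work/configsync"
echo 'example' > "$work/configsync/demo.conf"

# Create a gzip-compressed tar archive of the directory
tar -czf "$work/configfile-backup.tar.gz" -C "$work" configsync

# Extract the archive into a separate directory, as the lab does under /tmp/savedconfig
mkdir -p "$work/savedconfig"
tar -xzf "$work/configfile-backup.tar.gz" -C "$work/savedconfig"
ls "$work/savedconfig/configsync"    # demo.conf
```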
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Solution
Outcomes
• Synchronize a remote directory to a local directory.
• Extract an archive.
This command prepares your environment and ensures that all required resources are
available. It also installs SSH keys on your systems so that you can transfer files without
entering passwords.
Instructions
1. On serverb, synchronize the /etc directory tree from servera to the /configsync
directory.
1.1. Log in to serverb as the student user and switch to the root user.
1.2. Create the /configsync directory to store the synchronized files from servera.
1.3. Synchronize the /etc directory tree from servera to the /configsync directory on
serverb.
4.2. Create the /tmp/savedconfig directory, to store the extracted contents. Change to
the new directory.
./configsync:
total 12
drwxr-xr-x. 105 student student 8192 Mar 28 16:03 etc
...output omitted...
[student@workstation savedconfig]$ cd
[student@workstation ~]$
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Summary
• The tar command creates an archive file from a set of files and directories. This command also
extracts and lists files from an archive file.
• The tar command provides a set of compression methods to reduce archive size.
• Besides providing a secure remote shell, the SSH service also provides the sftp command to
transfer files securely to and from a remote system that runs the SSH server.
• The rsync command securely and efficiently synchronizes files between two directories, of
which either one can be on a remote system.
Chapter 5
Chapter 5 | Tune System Performance
Objectives
Optimize system performance by selecting a tuning profile that the tuned daemon manages.
Tune Systems
System administrators optimize the performance of a system by adjusting device settings based
on various use case workloads. The tuned daemon applies tuning adjustments both statically and
dynamically by using tuning profiles that reflect particular workload requirements.
For example, storage devices experience high use during startup and login, but have minimal
activity when user workloads consist of using web browsers and email clients. Similarly, CPU
and network devices experience activity increases during peak usage throughout a workday.
The tuned daemon monitors the activity of these components, and adjusts parameter settings
to maximize performance during high-activity times and to reduce settings during low activity.
Predefined tuning profiles provide performance parameters that the tuned daemon uses.
To monitor and adjust parameter settings, the tuned daemon uses modules called monitor and
tuning plug-ins, respectively.
Monitor plug-ins analyze the system and obtain information from it, which the tuning
plug-ins then use for dynamic tuning. Currently, the tuned daemon ships with three monitor
plug-ins:
• disk: Monitors the disk load based on the number of I/O operations for every disk device.
• net: Monitors the network load based on the number of transferred packets per network card.
• load: Monitors the CPU load for every CPU.
Tuning plug-ins tune the individual subsystems. They use the data from the monitor plug-ins
and the performance parameters from the predefined tuning profiles. Among others, the tuned
daemon ships with the following tuning plug-ins:
• disk: Sets different disk parameters, for example, the disk scheduler, the spin-down timeout, or
the advanced power management.
• net: Configures the interface speed and the Wake-on-LAN (WoL) functionality.
• cpu: Sets different CPU parameters, for example, the CPU governor or the latency.
By default, dynamic tuning is disabled. You can enable it by setting the dynamic_tuning
variable to 1 in the /etc/tuned/tuned-main.conf configuration file. If you enable dynamic
tuning, then the tuned daemon periodically monitors the system and adjusts the tuning settings
to runtime behavior changes. You can set the time in seconds between updates by using the
update_interval variable in the /etc/tuned/tuned-main.conf configuration file.
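For example, enabling dynamic tuning with a 10-second update interval might look like the following excerpt from /etc/tuned/tuned-main.conf; the values shown are illustrative:

```
# /etc/tuned/tuned-main.conf (excerpt)

# Dynamically tune devices; when set to 0, only static tuning is applied.
dynamic_tuning = 1

# How long to sleep before checking for events, in seconds.
update_interval = 10
```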
The tuning profiles that are distributed with Red Hat Enterprise Linux fall into two broad
categories:
• Power-saving profiles
• Performance-boosting profiles
The performance-boosting profiles include profiles that focus on the following aspects:
• Low latency for storage and network
• High throughput for storage and network
• Virtual machine performance
• Virtual machine host performance
The next table shows a list of the tuning profiles that are distributed with Red Hat Enterprise
Linux 9:
Profile Description
balanced Ideal for systems that require a compromise between power saving and performance.
desktop Derived from the balanced profile. Provides faster response of interactive applications.
throughput-performance Tunes the system for maximum throughput.
latency-performance Ideal for server systems that require low latency at the expense of power consumption.
network-latency Derived from the latency-performance profile. Enables additional network tuning parameters to provide low network latency.
network-throughput Derived from the throughput-performance profile. Applies additional network tuning parameters for maximum network throughput.
powersave Tunes the system for maximum power saving.
virtual-guest Tunes the system for running inside a virtual guest.
virtual-host Tunes the system for running virtual guests.
The tuned application stores the tuning profiles under the /usr/lib/tuned and /etc/tuned
directories. Every profile has a separate directory that contains the tuned.conf main
configuration file and, optionally, other files. For example, the following tuned.conf file
defines the virtual-guest profile:
[main]
summary=Optimize for running inside a virtual guest
include=throughput-performance
[sysctl]
# If a workload mostly uses anonymous memory and it hits this limit, the entire
# working set is buffered for I/O, and any more write buffering would require
# swapping, so it's time to throttle writes until I/O can catch up. Workloads
# that mostly use file mappings may be able to use even higher values.
#
# The generator of dirty data starts writeback at this percentage (system default
# is 20%)
vm.dirty_ratio = 30
# Filesystem I/O is usually much more efficient than swapping, so try to keep
# swapping low. It's usually safe to go even lower than this on systems with
# server-grade storage.
vm.swappiness = 30
The [main] section in the file might include a summary of the tuning profile. This section also
accepts the include parameter, for the profile to inherit all the settings from the referenced
profile.
This configuration file is useful when creating new tuning profiles, because you can use one of
the provided profiles as a basis, and then add or modify parameters as needed. To create or
modify tuning profiles, copy the tuning profile files from the /usr/lib/tuned directory to
the /etc/tuned directory and then modify them. If profile directories with the same name
exist under both the /usr/lib/tuned and /etc/tuned directories, then the one in
/etc/tuned always takes precedence. Thus, never directly modify files in the
/usr/lib/tuned system directory.
The rest of the sections in the tuned.conf file use the tuning plug-ins to modify parameters in
the system. In the previous example, the [sysctl] section modifies the vm.dirty_ratio and
vm.swappiness kernel parameters through the sysctl plug-in.
You can identify the currently active tuning profile with the tuned-adm active command.
The tuned-adm list command lists all available tuning profiles, including both built-in profiles
and the custom-created tuning profiles.
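A typical check might look like the following transcript sketch; the profile names shown are examples, and the list output is abbreviated:

```
[root@host ~]# tuned-adm active
Current active profile: virtual-guest
[root@host ~]# tuned-adm list
Available profiles:
- balanced                    - General non-specialized tuned profile
- desktop                     - Optimize for the desktop use-case
...output omitted...
Current active profile: virtual-guest
```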
Use the tuned-adm profile_info command for information about a given profile.
Profile summary:
Optimize for deterministic performance at the cost of increased power consumption,
focused on low latency network performance
...output omitted...
If no profile is specified, then the tuned-adm profile_info command shows the information
for the active tuning profile:
Profile summary:
Optimize for running inside a virtual guest
...output omitted...
Use the tuned-adm profile profilename command to switch to a different active profile
that better matches the system's current tuning requirements.
The tuned-adm recommend command can recommend a tuning profile for the system. The
system uses this mechanism to determine the default profile after its installation.
Note
The tuned-adm recommend command bases its recommendation on various
system characteristics, including whether the system is a virtual machine and other
predefined selected categories during system installation.
To revert the setting changes that the current profile applied, either switch to another profile or
deactivate the tuned daemon. Turn off the tuned application tuning activity by using the
tuned-adm off command.
You can switch to the administrative access mode in the web console by clicking the Limited
access or the Turn on administrative access buttons. Then, enter your password when prompted.
After you escalate privileges, the Limited access button changes to Administrative access. As a
security reminder, always toggle back to limited access mode after completing the system task
that requires administrative privileges.
As a privileged user, click the Overview menu option in the left navigation bar. The Performance
profile field displays the current active profile.
To select a different profile, click the active profile link. In the Change performance profile user
interface, scroll through the profile list to select one that best suits the system purpose, and click
the Change profile button.
To verify changes, return to the main Overview page, and confirm that it displays the active profile
in the Performance profile field.
References
tuned(8), tuned.conf(5), tuned-main.conf(5), and tuned-adm(1) man pages
For more information, refer to the Monitoring and Managing System Status and
Performance guide at
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-single/monitoring_and_managing_system_status_and_performance/index
Guided Exercise
Outcomes
• Configure a system to use a tuning profile.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. Log in to servera as the student user.
3. List the available tuning profiles and identify the active profile.
4. Review the tuned.conf configuration file for the current active profile, virtual-guest.
You can find the tuned.conf configuration file in the /usr/lib/tuned/virtual-guest
directory. The virtual-guest tuning profile is based on the throughput-performance
profile, but it sets different values for the vm.dirty_ratio and vm.swappiness
parameters. Verify that the virtual-guest tuning profile applies these values on your
system.
[main]
summary=Optimize for running inside a virtual guest
include=throughput-performance
[sysctl]
# If a workload mostly uses anonymous memory and it hits this limit, the entire
# working set is buffered for I/O, and any more write buffering would require
# swapping, so it's time to throttle writes until I/O can catch up. Workloads
# that mostly use file mappings may be able to use even higher values.
#
# The generator of dirty data starts writeback at this percentage (system default
# is 20%)
vm.dirty_ratio = 30
# Filesystem I/O is usually much more efficient than swapping, so try to keep
# swapping low. It's usually safe to go even lower than this on systems with
# server-grade storage.
vm.swappiness = 30
4.2. Verify that the tuning profile applies these values on your system.
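One way to check the running values is to read them from the /proc/sys tree (the sysctl command reports the same values). This read-only sketch makes no changes to the system:

```shell
# Files under /proc/sys mirror sysctl parameter names, with dots replaced by slashes
cat /proc/sys/vm/dirty_ratio    # expect 30 while the virtual-guest profile is active
cat /proc/sys/vm/swappiness     # expect 30 while the virtual-guest profile is active
```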
5. Review the tuned.conf configuration file for the virtual-guest profile's parent,
throughput-performance. You can find it in the /usr/lib/tuned/throughput-performance
directory. Notice that the throughput-performance tuning profile sets
different values for the vm.dirty_ratio and vm.swappiness parameters, although the
virtual-guest profile overrides them. Verify that the virtual-guest tuning profile
applies the value for the vm.dirty_background_ratio parameter, which it inherits from
the throughput-performance profile.
[main]
summary=Broadly applicable tuning that provides excellent performance across a
variety of common server workloads
...output omitted...
[sysctl]
# If a workload mostly uses anonymous memory and it hits this limit, the entire
# working set is buffered for I/O, and any more write buffering would require
# swapping, so it's time to throttle writes until I/O can catch up. Workloads
# that mostly use file mappings may be able to use even higher values.
#
# The generator of dirty data starts writeback at this percentage (system default
# is 20%)
vm.dirty_ratio = 40
# PID allocation wrap value. When the kernel's next PID value
# reaches this value, it wraps back to a minimum PID value.
# PIDs of value pid_max or larger are not allocated.
#
# A suggested value for pid_max is 1024 * <# of cpu cores/threads in system>
# e.g., a box with 32 cpus, the default of 32768 is reasonable, for 64 cpus,
# 65536, for 4096 cpus, 4194304 (which is the upper limit possible).
#kernel.pid_max = 65536
...output omitted...
5.2. Verify that the virtual-guest tuning profile applies the inherited
vm.dirty_background_ratio parameter.
6. Change the current active tuning profile to throughput-performance, and then confirm
the results. Verify that the vm.dirty_ratio and vm.swappiness parameters change to
the values in the throughput-performance configuration file.
6.3. Verify the values for the vm.dirty_ratio and vm.swappiness parameters.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Objectives
Prioritize or deprioritize specific processes, with the nice and renice commands.
Linux and other operating systems use a technique called time-slicing or multitasking for process
management. The operating system process scheduler rapidly switches between process threads
on each available CPU core. This behavior gives the impression that many processes are running at
the same time.
Process Priorities
Process priority sets the importance of each process. Linux implements scheduling policies that
define the rules by which processes are organized and prioritized to obtain CPU processing
time. The various Linux scheduling policies might be designed to handle interactive application
requests, or non-interactive batch application processing, or real-time application requirements.
Real-time scheduling policies still use process priorities and queues. However, current non-real-time
(normal) scheduling policies use the Completely Fair Scheduler (CFS), which instead
organizes processes that are awaiting CPU time into a binary search tree. Normal processes
use the SCHED_NORMAL policy (also called SCHED_OTHER) as the default scheduling policy.
Processes that run under the SCHED_NORMAL policy are assigned a static real-time priority of
0, to ensure that all system real-time processes have a higher priority than normal processes.
However, this static priority value is not included when organizing normal process threads for CPU
scheduling. Instead, the CFS scheduling algorithm arranges normal process threads into a time-
weighted binary tree, where the first item has the lowest previously spent CPU time, and the last
item has the most cumulative CPU time.
Nice Value
The order of the binary tree is additionally influenced by a user-modifiable, per-process nice
value, which ranges from -20 (increased priority) to 19 (decreased priority), with a default of 0.
Processes inherit their starting nice value from their parent process. Any user can raise the
nice value to decrease a process's priority, but only the root user can lower the nice value to
increase a process's priority.
A higher nice value indicates a decrease in the process priority from the default, or making the
process nicer to other processes. A lower nice value indicates an increase in the process priority
from the default, or making the process less nice to other processes.
Increasing the nice value lowers the thread's position, and decreasing the value raises the thread's
position.
Important
Generally, priorities determine only indirectly the amount of CPU time that a
process thread receives. On a non-saturated system with available CPU capacity,
every process is scheduled for immediate CPU time, for as much time as each
process wants. Relative process importance, as managed in the binary tree,
determines only which threads are selected and placed on CPUs first.
Unprivileged users can only increase the nice value on their own processes, which makes their own
processes nicer, and therefore lowers their placement in the binary tree. Unprivileged users cannot
decrease their processes' nice values to raise their importance, nor can they adjust the nice values
for another user's process.
Figure 5.3: Priorities and nice values as reported by the top command
In the preceding figure, the nice values are aligned with the priority values that are used by the top
command. The top command displays the process priority in the PR column, and the nice value in
the NI column. The top priority numbering scheme, which displays real-time process priorities as
negative numbers, is a legacy of internal priority algorithms.
The following output is the summary and a partial process listing in the top command:
The ps command displays process nice values when you request them with output formatting options.
The following ps command lists all processes with their process ID, process name, nice value,
and scheduling class. The processes are sorted in descending order by nice value. In the CLS
scheduling class column, TS stands for time sharing, which is another name for the normal
scheduling policies, including SCHED_NORMAL. Other CLS values, such as FF for first in first out
and RR for round robin, indicate real-time processes. Real-time processes are not assigned nice
values, as indicated by the dash (-) in the NI column. Advanced scheduling policies are taught in
the Red Hat Performance Tuning: Linux in Physical, Virtual, and Cloud (RH442) course.
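A command matching that description might be the following; the exact column list is an assumption, using the standard ps output specifiers nice and cls:

```shell
# List PID, command, nice value, and scheduling class, highest nice values first
ps axo pid,comm,nice,cls --sort=-nice | head -n 10
```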
The following example starts a process from the shell, and displays the process's nice value. Note
the use of the process ID with the ps command -p option to limit the output to the requested process.
Note
This command was chosen for demonstration for its low resource consumption.
All users can use the nice command to start commands with the default or a higher nice value.
Because the started process then has a lower priority than your current working shell, it is less
likely to affect your current interactive session.
The following example starts the same command as a background job with the default nice value,
and displays the process's nice value:
Use the nice command -n option to apply a user-defined nice value to the starting process. The
default is to add 10 to the process's current nice value. The following example starts a background
job with a user-defined nice value of 15 and displays the result:
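A short-lived sketch of the same idea, using sleep in place of a real workload so that the process exits quickly:

```shell
# Start a background job with a user-defined nice value of 15
nice -n 15 sleep 30 &
pid=$!

# Display the PID, nice value, and command name of the new process
ps -o pid=,ni=,comm= -p "$pid"

# Clean up the demo process
kill "$pid"
```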
You can also use the top command to change the nice value on an existing process. From the top
interactive interface, press the r key to access the renice command. Enter the process ID, and
then enter the new nice value.
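The renice command performs the same change from the shell. An unprivileged user can only raise a nice value, so this sketch raises it from 10 to 15; lowering a nice value, such as a 10-to-5 change, would require root privileges:

```shell
# Start a demo process with a nice value of 10
nice -n 10 sleep 30 &
pid=$!

# Raise the nice value to 15; raising a nice value requires no special privileges
renice -n 15 "$pid"

# Confirm the new nice value, then clean up
ps -o pid=,ni=,comm= -p "$pid"
kill "$pid"
```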
References
nice(1), renice(1), top(1), and sched_setscheduler(2) man pages
Guided Exercise
Outcomes
• Adjust scheduling priorities for processes.
Important
This exercise uses commands that perform an endless checksum on a device
file and intentionally use significant CPU resources.
Instructions
1. Use the ssh command to log in to the servera machine as the student user.
2. Determine the number of CPU cores on the servera machine, and then start two
instances of the sha1sum /dev/zero & command for each core.
2.1. Use the grep command to parse the number of existing virtual processors (CPU
cores) from the /proc/cpuinfo file.
2.2. Use a looping command to start multiple instances of the sha1sum /dev/zero &
command. Start two instances for each virtual processor that was indicated in the
previous step. In this example, a for loop creates four instances. The PID values in
your output might vary from the example.
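The two sub-steps can be sketched as follows; the workers are killed immediately at the end so that this example does not keep the CPUs busy:

```shell
# Count the virtual processors listed in /proc/cpuinfo
cpus=$(grep -c '^processor' /proc/cpuinfo)
echo "virtual processors: $cpus"

# Start two sha1sum workers per virtual processor as background jobs
for i in $(seq $((cpus * 2))); do
    sha1sum /dev/zero &
done

# Count the running workers, then terminate them right away
# (pkill -x matches the exact process name, so only sha1sum processes are affected)
pgrep -c -x sha1sum
pkill -x sha1sum
```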
3. Verify that the background jobs are running for each of the sha1sum processes.
4. Use the ps and pgrep commands to display the percentage of CPU usage for each
sha1sum process.
5. Terminate all sha1sum processes, and then verify that no jobs are running.
5.1. Use the pkill command to terminate all running processes with the sha1sum name
pattern.
6. Start multiple instances of the sha1sum /dev/zero & command, and then start one
additional instance of the sha1sum /dev/zero & command with a nice level of 10. Start
at least as many instances as the number of system virtual processors. In this example,
three regular instances are started, plus another with a higher nice level.
6.1. Use looping to start three instances of the sha1sum /dev/zero & command.
6.2. Use the nice command to start the fourth instance with a nice level of 10.
7. Use the ps and pgrep commands to display the PID, percentage of CPU usage, nice value,
and executable name for each process. The instance with the nice value of 10 displays a
lower percentage of CPU usage than the other instances.
8. Use the sudo renice command to lower the nice level of a process from the previous
step. Use the PID value of the process instance with the nice level of 10 to lower its nice
level to 5.
9. Repeat the ps and pgrep commands to display the CPU percentage and nice level.
10. Use the pkill command to terminate all running processes with the sha1sum name
pattern.
Important
Verify that you have terminated all exercise processes before leaving this exercise.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Lab
Outcomes
• Activate a specific tuning profile for a computer system.
• Adjust the CPU scheduling priority of a process.
This command prepares your environment and ensures that all required resources are
available.
Important
This lab uses commands that perform an endless checksum on a device file
and intentionally use significant CPU resources.
Instructions
1. Change the current tuning profile for the serverb machine to the balanced profile, a
general non-specialized tuned profile. List the information for the balanced tuning profile
when it is the current tuning profile.
2. Two processes on serverb are consuming a high percentage of CPU usage. Adjust each
process's nice level to 10 to allow more CPU time for other processes.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Solution
Outcomes
• Activate a specific tuning profile for a computer system.
• Adjust the CPU scheduling priority of a process.
This command prepares your environment and ensures that all required resources are
available.
Important
This lab uses commands that perform an endless checksum on a device file
and intentionally use significant CPU resources.
Instructions
1. Change the current tuning profile for the serverb machine to the balanced profile, a
general non-specialized tuned profile. List the information for the balanced tuning profile
when it is the current tuning profile.
1.4. List all available tuning profiles and their descriptions. Note that the current active
profile is virtual-guest.
1.5. Change the current active tuning profile to the balanced profile.
1.6. List summary information of the current active tuned profile. Verify that the active
profile is the balanced profile.
Profile summary:
General non-specialized tuned profile
...output omitted...
2. Two processes on serverb are consuming a high percentage of CPU usage. Adjust each
process's nice level to 10 to allow more CPU time for other processes.
2.1. Determine the top two CPU consumers on the serverb machine. The ps command
lists the top CPU consumers at the bottom of the output. CPU percentage values
might vary on your machine.
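A command fitting that description might be the following (the exact invocation is an assumption); sorting by CPU percentage in ascending order places the heaviest consumers at the bottom:

```shell
# List processes sorted by CPU usage; the top consumers print last
ps aux --sort=pcpu | tail -n 5
```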
2.2. Identify the current nice level for each of the top two CPU consumers.
2.3. Adjust the nice level for each process to 10. Use the correct PID values for your
processes from the previous command output.
2.4. Verify that the current nice level for each process is 10.
Important
Verify that you have terminated all lab processes before leaving this lab.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Summary
• The tuned service automatically modifies device settings to meet specific system needs based
on a predefined selected tuning profile.
• To revert all changes of the selected profile to the system settings, either switch to another
profile or deactivate the tuned service.
• The system assigns a relative priority to a process to determine its CPU access. This priority is
called the nice value of a process.
Chapter 6
Chapter 6 | Manage SELinux Security
Objectives
Explain how SELinux protects resources, change the current SELinux mode of a system, and set
the default SELinux mode of a system.
SELinux Architecture
Security Enhanced Linux (SELinux) is a critical security feature of Linux. Access to files, ports,
and other resources is controlled at a granular level. Processes are permitted to access only the
resources that their SELinux policy or Boolean settings specify.
File permissions control file access for a specific user or group. However, file permissions do not
prevent an authorized user with file access from using a file for an unintended purpose.
For example, if a structured data file is designed to be written by only a specific program, a
user or program with write access can still open and modify that file with other editors or
tools, which could result in corruption or a data security issue. File permissions do not stop
such undesired access, because they do not control how a file is used, but only who is allowed
to read, write, or run it.
SELinux Usage
SELinux enforces a set of access rules that explicitly define allowed actions between processes
and resources. Any action that is not defined in an access rule is not allowed. Because only defined
actions are allowed, applications with a poor security design are still protected from malicious use.
Applications or services with a targeted policy run in a confined domain, whereas an application
without a policy runs unconfined but without any SELinux protection. Individual targeted policies
can be disabled to assist with application and security policy development and debugging.
SELinux operates in one of three modes:
• Enforcing: SELinux enforces the loaded policies. This mode is the default in Red Hat
Enterprise Linux.
• Permissive: SELinux loads the policies and is active, but instead of enforcing access control
rules, it logs access violations. This mode is helpful for testing and troubleshooting applications
and rules.
• Disabled: SELinux is turned off. SELinux violations are not denied or logged. Disabling SELinux
is strongly discouraged.
Important
Starting in Red Hat Enterprise Linux 9, SELinux can be fully disabled only by using
the selinux=0 kernel parameter at boot. RHEL no longer supports setting the
SELINUX=disabled option in the /etc/selinux/config file.
For example, a web server's open firewall port allows remote anonymous access to a web client.
However, a malicious user that accesses that port might try to compromise a system through an
existing vulnerability. If an example vulnerability compromises the permissions for the apache
user and group, then a malicious user might directly access the /var/www/html document root
content, or the system's /tmp and /var/tmp directories, or other accessible files and directories.
SELinux policies are security rules that define how specific processes access relevant files,
directories, and ports. Every resource entity, such as a file, process, directory, or port, has a label
called an SELinux context. The context label matches a defined SELinux policy rule to allow a
process to access the labeled resource. By default, an SELinux policy does not allow any access
unless an explicit rule grants access. When no allow rule is defined, all access is disallowed.
SELinux labels have user, role, type, and security level fields. Targeted policy, which is
enabled in RHEL by default, defines rules by using the type context. Type context names typically
end with _t.
An Apache web server process runs with the httpd_t type context. A policy rule
permits the Apache server to access files and directories that are labeled with the
httpd_sys_content_t type context. By default, files in the /var/www/html directory have
the httpd_sys_content_t type context. A web server policy has by default no allow rules for
using files that are labeled tmp_t, such as in the /tmp and /var/tmp directories, thus disallowing
access. With SELinux enabled, a malicious user who uses a compromised Apache process would
still not have access to the /tmp directory files.
A MariaDB server process runs with the mysqld_t type context. By default, files in the
/var/lib/mysql directory have the mysqld_db_t type context. A MariaDB server can access the
mysqld_db_t labeled files, but has no rules to allow access to files for other services, such as
httpd_sys_content_t labeled files.
Many commands support the -Z option to display or set SELinux contexts. For example, the ps,
ls, cp, and mkdir commands all accept the -Z option.
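As an illustration, the following transcript is a sketch of -Z output; the exact labels vary by system and user:

```
[root@host ~]# ps -Z
LABEL                                          PID TTY      TIME     CMD
unconfined_u:unconfined_r:unconfined_t:s0 ...  ... pts/0    00:00:00 bash
[root@host ~]# ls -Z /var/www/html
system_u:object_r:httpd_sys_content_t:s0 index.html
```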
Alternatively, set the SELinux mode at boot time with a kernel parameter. Pass the enforcing=0
kernel parameter to boot the system into permissive mode, or pass enforcing=1 to boot
into enforcing mode. Disable SELinux by passing the selinux=0 kernel parameter, or pass
selinux=1 to enable SELinux.
Red Hat recommends rebooting the server when you change the SELinux mode from
Permissive to Enforcing. This reboot ensures that services that were started in permissive
mode are correctly confined after the next boot.
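The current mode can also be switched at run time with the setenforce command; a representative transcript, assuming a root shell on a system with SELinux enabled:

```
[root@host ~]# getenforce
Enforcing
[root@host ~]# setenforce 0
[root@host ~]# getenforce
Permissive
[root@host ~]# setenforce Enforcing
[root@host ~]# getenforce
Enforcing
```

The setenforce command accepts either the numeric arguments 0 and 1 or the mode names Permissive and Enforcing.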
The /etc/selinux/config file determines the default SELinux mode and policy at boot:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of these three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
The system reads this file at boot time and starts SELinux accordingly. The selinux=0|1 and
enforcing=0|1 kernel arguments override this configuration.
References
getenforce(8), setenforce(8), and selinux_config(5) man pages
Guided Exercise
Outcomes
• View and set the current SELinux mode.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. On the workstation machine, use the ssh command to log in to the servera machine
as the student user and then switch to the root user.
2.1. Use the getenforce command to verify the current SELinux mode on the servera
machine.
2.2. Use the vim /etc/selinux/config command to edit the configuration file.
Change the SELINUX parameter from enforcing to permissive mode.
2.3. Use the grep command to confirm that the SELINUX parameter displays the
permissive mode.
2.4. Use the setenforce command to change the current SELinux mode to
permissive and verify the change.
3. Change the default SELinux mode back to the enforcing mode in the configuration file.
3.1. Use the vim /etc/selinux/config command to edit the configuration file.
Change the SELINUX parameter from permissive to enforcing mode.
3.2. Use the grep command to confirm that the SELINUX parameter sets the
enforcing mode on booting.
4. Set the SELinux mode to enforcing on the command line. Reboot the servera machine
and verify the SELinux mode.
4.1. Use the setenforce command to set the current SELinux mode to the enforcing
mode. Use the getenforce command to confirm that SELinux is set to the
enforcing mode.
4.2. Reboot the servera machine.
4.3. Log in to the servera machine and verify the SELinux mode.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Objectives
Manage the SELinux policy rules that determine the default context for files and directories with
the semanage fcontext command and apply the context defined by the SELinux policy to files
and directories with the restorecon command.
When a new file's name does not match an existing labeling policy, the file inherits the same
label as the parent directory. With labeling inheritance, all files are always labeled when created,
regardless of whether an explicit policy exists for a file.
When files are created in default locations that have an existing labeling policy, or when a policy
exists for a custom location, then new files are labeled with a correct SELinux context. However,
if a file is created in an unexpected location without an existing labeling policy, then the inherited
label might not be correct for the new file's intended purpose.
Furthermore, copying a file to a new location can cause that file's SELinux context to change,
where the new context is determined by the new location's labeling policy, or from parent directory
inheritance if no policy exists. A file's SELinux context can be preserved during copying to retain
the context label that was determined for the file's original location. For example, the cp -a
command preserves all file attributes where possible, and the cp --preserve=context
command preserves only the SELinux context during copying.
Note
Copying a file always creates a file inode, and that inode's attributes, including the
SELinux context, must be initially set, as previously discussed.
However, moving a file does not typically create an inode if the move occurs within
the same file system, but instead moves the existing inode's file name to a new
location. Because the existing inode's attributes do not need to be initialized, a file
that is moved with mv preserves its SELinux context unless you set a new context on
the file with the -Z option.
After you copy or move a file, verify that it has the appropriate SELinux context and
set it correctly if necessary.
Create two files in the /tmp directory. Both files receive the user_tmp_t context type.
Move the first file, and copy the second file, to the /var/www/html directory.
• The moved file retains the file context that was labeled from the original /tmp directory.
• The copied file has a new inode and inherits the SELinux context from the destination /var/
www/html directory.
The ls -Z command displays the SELinux context of a file. Observe the label of the files that are
created in the /tmp directory.
The ls -Zd command displays the SELinux context of the specified directory. Note the label on
the /var/www/html directory and the files inside it.
Move one file from the /tmp directory to the /var/www/html directory. Copy the other file to
the same directory. Note the resulting label on each file.
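The steps above can be sketched as the following transcript; the file names are assumed for illustration:

```
[root@host ~]# touch /tmp/file1 /tmp/file2
[root@host ~]# ls -Z /tmp/file*
unconfined_u:object_r:user_tmp_t:s0 /tmp/file1
unconfined_u:object_r:user_tmp_t:s0 /tmp/file2
[root@host ~]# ls -Zd /var/www/html
system_u:object_r:httpd_sys_content_t:s0 /var/www/html
[root@host ~]# mv /tmp/file1 /var/www/html
[root@host ~]# cp /tmp/file2 /var/www/html
[root@host ~]# ls -Z /var/www/html/file*
unconfined_u:object_r:user_tmp_t:s0 /var/www/html/file1
unconfined_u:object_r:httpd_sys_content_t:s0 /var/www/html/file2
```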
The moved file retained its original label and the copied file inherited the destination directory
label. The unconfined_u is the SELinux user, object_r is the SELinux role, and s0 is the
(lowest possible) sensitivity level. Advanced SELinux configurations and features use these values.
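The four colon-separated fields of a context label can be pulled apart with standard text tools. A minimal sketch, using an example label (note that an MLS level can itself contain colons, so the last field is taken as "field 4 to the end"):

```shell
# Example SELinux label: user:role:type:level
ctx='unconfined_u:object_r:httpd_sys_content_t:s0'

# Extract each field with cut
user=$(echo "$ctx" | cut -d: -f1)    # SELinux user
role=$(echo "$ctx" | cut -d: -f2)    # SELinux role
type=$(echo "$ctx" | cut -d: -f3)    # type context
level=$(echo "$ctx" | cut -d: -f4-)  # sensitivity level (may contain colons)

echo "user=$user role=$role type=$type level=$level"
# → user=unconfined_u role=object_r type=httpd_sys_content_t level=s0
```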
The recommended method to change the context for a file is to create a file context policy by
using the semanage fcontext command, and then to apply the specified context in the policy
to the file by using the restorecon command. This method ensures that you can relabel the file
to its correct context with the restorecon command whenever necessary. The advantage of this
method is that you do not need to remember what the context is supposed to be, and you can
correct the context on a set of files.
The chcon command changes the SELinux context directly on files, without referencing the
system's SELinux policy. Although chcon is useful for testing and debugging, contexts that are
changed manually in this way are temporary: they survive a reboot, but they might be replaced if
you run restorecon to relabel the contents of the file system.
Important
When an SELinux system relabel occurs, all files on a system are labeled with their
policy defaults. When you use restorecon on a file, any context that you change
manually on the file is replaced if it does not match the rules in the SELinux policy.
The following example creates a directory with a default_t SELinux context, which it inherited
from the / parent directory.
The chcon command sets the file context of the /virtual directory to the
httpd_sys_content_t type.
Running the restorecon command resets the context to the default value of default_t. Note
the Relabeled message.
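The sequence described above can be sketched as a transcript (illustrative output; labels vary by system):

```
[root@host ~]# mkdir /virtual
[root@host ~]# ls -Zd /virtual
unconfined_u:object_r:default_t:s0 /virtual
[root@host ~]# chcon -t httpd_sys_content_t /virtual
[root@host ~]# ls -Zd /virtual
unconfined_u:object_r:httpd_sys_content_t:s0 /virtual
[root@host ~]# restorecon -v /virtual
Relabeled /virtual from unconfined_u:object_r:httpd_sys_content_t:s0 to
unconfined_u:object_r:default_t:s0
```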
When viewing policies, the most common extended regular expression is (/.*)?, which is usually
appended to a directory name. This notation is humorously called the pirate, because it looks like a
face with an eye patch and a hooked hand next to it.
This syntax describes "a group that begins with a slash, followed by any number of characters,
where the group can either exist or not exist". Stated more simply, this syntax matches
the directory itself, even when empty, and also matches almost any file name that is created within
that directory.
For example, the following rule specifies that the /var/www/cgi-bin directory, and
any files in it or in its subdirectories (and in their subdirectories, and so on), have the
system_u:object_r:httpd_sys_script_exec_t:s0 SELinux context, unless a more
specific rule overrides this one.
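The matching behavior of this pattern can be demonstrated with grep -E, entirely outside of SELinux; the paths below are illustrative:

```shell
# Print a few candidate paths and keep only those that the
# anchored pattern /var/www/cgi-bin(/.*)? matches
printf '%s\n' \
    '/var/www/cgi-bin' \
    '/var/www/cgi-bin/script.sh' \
    '/var/www/cgi-bin/sub/dir/file' \
    '/var/www/cgi-binx' |
    grep -E '^/var/www/cgi-bin(/.*)?$'
# → /var/www/cgi-bin
# → /var/www/cgi-bin/script.sh
# → /var/www/cgi-bin/sub/dir/file
```

The directory itself matches because the group is optional, every path under it matches because of `/.*`, and `/var/www/cgi-binx` does not match because the group must begin with a slash.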
Note
The all files field option from the previous example is the default file type that
semanage uses when you do not specify one. This option applies to all file types
that you can use with semanage; they are the same as the standard file types as in
the Control Access to Files chapter in the Red Hat System Administration I (RH124)
course. You can get more information from the semanage-fcontext(8) man
page.
To reset all files in a directory to the default policy context, first use the semanage fcontext
-l command to locate and verify that the correct policy exists with the intended file context.
Then, use the restorecon command on the wildcarded directory name to reset all the files
recursively. In the following example, view the file contexts before and after using the semanage
and restorecon commands.
Then, use the semanage fcontext -l command to list the default SELinux file contexts:
The semanage command output indicates that all the files and subdirectories in the /var/www/
directory have the httpd_sys_content_t context by default. Running the restorecon
command on the wildcarded directory restores the default context on all files and subdirectories.
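A representative transcript of this procedure (illustrative output; file names are placeholders):

```
[root@host ~]# semanage fcontext -l | grep '/var/www'
...output omitted...
/var/www(/.*)?         all files    system_u:object_r:httpd_sys_content_t:s0
...output omitted...
[root@host ~]# restorecon -Rv /var/www/
Relabeled /var/www/html/somefile from unconfined_u:object_r:user_tmp_t:s0 to
unconfined_u:object_r:httpd_sys_content_t:s0
```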
The following example uses the semanage command to add a context policy for a new directory.
First, create the /virtual directory with an index.html file inside it. View the SELinux context
for the file and the directory.
Next, use the semanage fcontext command to add an SELinux file context policy for the
directory.
Use the restorecon command on the wildcarded directory to set the default context on the
directory and all files within it.
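These steps can be sketched as the following transcript (illustrative output):

```
[root@host ~]# mkdir /virtual
[root@host ~]# touch /virtual/index.html
[root@host ~]# ls -Zd /virtual
unconfined_u:object_r:default_t:s0 /virtual
[root@host ~]# semanage fcontext -a -t httpd_sys_content_t '/virtual(/.*)?'
[root@host ~]# restorecon -RFv /virtual
Relabeled /virtual from unconfined_u:object_r:default_t:s0 to
system_u:object_r:httpd_sys_content_t:s0
Relabeled /virtual/index.html from unconfined_u:object_r:default_t:s0 to
system_u:object_r:httpd_sys_content_t:s0
```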
Use the semanage fcontext -l -C command to view any local customizations to the default
policy.
References
chcon(1), restorecon(8), semanage(8), and semanage-fcontext(8) man
pages
Guided Exercise
Outcomes
• Configure the Apache HTTP server to publish web content from a non-standard
document root.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. Log in to servera as the student user and switch to the root user.
2.2. Create the index.html file in the /custom directory that contains the This is
SERVERA. text.
2.3. Configure Apache to use the new directory location. Edit the Apache /etc/httpd/
conf/httpd.conf configuration file, and replace the two occurrences of the /
var/www/html directory with the /custom directory. You can use the vim /etc/
httpd/conf/httpd.conf command to do so. The following example shows the
expected content of the /etc/httpd/conf/httpd.conf file.
3. Start and enable the Apache web service and confirm that the service is running.
3.1. Start and enable the Apache web service by using the systemctl command.
5. To grant access to the index.html file on servera, you must configure the
SELinux context. Define an SELinux file context rule that sets the context type to
httpd_sys_content_t for the /custom directory and all the files under it.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Objectives
Activate and deactivate SELinux policy rules with the setsebool command, manage the
persistent value of SELinux Booleans with the semanage boolean -l command, and consult
man pages that end with _selinux to find useful information about SELinux Booleans.
SELinux Booleans
An application or service developer writes an SELinux targeted policy to define the allowed
behavior of the targeted application. A developer can include optional application behavior
in the SELinux policy that can be enabled when the behavior is allowed on a specific system.
SELinux Booleans enable or disable the SELinux policy's optional behavior. With Booleans, you can
selectively tune the behavior of an application.
These optional behaviors are application-specific, and must be discovered and selected for
each targeted application. Service-specific Booleans are documented in that service's SELinux
man page. For example, the web server httpd service has its httpd(8) man page, and an
httpd_selinux(8) man page to document its SELinux policy, including the supported process
types, file contexts, and the available Boolean-enabled behaviors. The SELinux man pages are
provided in the selinux-policy-doc package.
Use the getsebool command to list available Booleans for the targeted policies on this system,
and the current Boolean status. Use the setsebool command to enable or disable the running
state of these behaviors. The setsebool -P command option makes the setting persistent by
writing to the policy file. Only privileged users can set SELinux Booleans.
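A representative transcript of querying and toggling a Boolean (using httpd_enable_homedirs as the example):

```
[root@host ~]# getsebool httpd_enable_homedirs
httpd_enable_homedirs --> off
[root@host ~]# setsebool httpd_enable_homedirs on
[root@host ~]# getsebool httpd_enable_homedirs
httpd_enable_homedirs --> on
```

Without the -P option, this change applies only until the next reboot.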
For example, you can enable the httpd service to share home directories so that users can access
them with a browser. When this behavior is enabled, the httpd service shares home directories
that are labeled with the user_home_dir_t file context. Users can then access and manage their
home directory files from a browser.
To list only Booleans with a current setting that is different from the default setting at boot, use
the semanage boolean -l -C command. This example has the same result as the previous
example, without requiring the grep filtering.
The previous example temporarily set the current value for the httpd_enable_homedirs
Boolean to on, until the system reboots. To change the default setting, use the setsebool -P
command to make the setting persistent. The following example sets a persistent value, and then
views the Boolean's information from the policy file.
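A representative transcript of making the setting persistent and then viewing it (illustrative output; the description text may differ slightly by policy version):

```
[root@host ~]# setsebool -P httpd_enable_homedirs on
[root@host ~]# semanage boolean -l | grep httpd_enable_homedirs
httpd_enable_homedirs          (on   ,   on)  Allow httpd to read home directories
```

The two values in parentheses are the current setting and the persistent default.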
Use the semanage boolean -l -C command again. The Boolean is displayed despite
the appearance that the current and default settings are the same. However, the -C option
matches when the current setting is different from the default setting from the last boot. For this
httpd_enable_homedirs example, the original default boot setting was off.
References
booleans(8), getsebool(8), setsebool(8), semanage(8), and semanage-
boolean(8) man pages
Guided Exercise
Outcomes
• Configure the Apache web service to publish web content from the user's home directory.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. On the workstation machine, use the ssh command to log in to the servera machine
as the student user and then switch to the root user.
...output omitted...
UserDir public_html
...output omitted...
</IfModule>
4. Open another terminal window, and use the ssh command to log in to the servera
machine as the student user. Create the index.html web content file in the
~/public_html directory.
4.1. In another terminal window, use the ssh command to log in to the servera machine
as the student user.
4.4. For the Apache web service to serve the contents of the /home/student/
public_html directory, it must be allowed to share files and subdirectories in the /
home/student directory. When you created the /home/student/public_html
directory, it was automatically configured to allow anyone with home directory
permission to access its contents.
Change the /home/student directory permissions to allow the Apache web service
to access the public_html subdirectory.
5. Open a web browser on the workstation machine and enter the http://servera/
~student/index.html address. An error message states that you do not have
permission to access the file.
6. Switch to the other terminal, and use the getsebool command to see whether any
Booleans restrict access to home directories for the httpd service.
7. Use the setsebool command to enable persistent access to the home directory for the
httpd service.
8. Verify that you can now see the This is student content on SERVERA. message in
the web browser after entering the http://servera/~student/index.html address.
You might need to close and reopen your web browser to see the message.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Objectives
Use SELinux log analysis tools and display useful information during SELinux troubleshooting with
the sealert command.
Red Hat Enterprise Linux provides a stable targeted SELinux policy for almost every service in
the distribution. Therefore, it is unusual to have SELinux access problems with common RHEL
services when they are configured correctly. SELinux access problems occur when services are
implemented incorrectly, or when new applications have incomplete policies. Consider these
troubleshooting concepts before making broad SELinux configuration changes.
• Most access denials indicate that SELinux is working correctly by blocking improper actions.
• Evaluating denied actions requires some familiarity with normal, expected service actions.
• The most common SELinux issue is an incorrect context on new, copied, or moved files.
• File contexts can be fixed when an existing policy references their location.
• Optional Boolean policy features are documented in the _selinux man pages.
• Implementing Boolean features generally requires setting additional non-SELinux configuration.
• SELinux policies do not replace or circumvent file permissions or access control list restrictions.
When a common application or service fails, and the service is known to have a working SELinux
policy, first see the service's _selinux man page to verify the correct context type label. View
the affected process and file attributes to verify that the correct labels are set.
The AVC summary includes an event unique identifier (UUID). Use the sealert -l UUID
command to view comprehensive report details for the specific event. Use the sealert -
a /var/log/audit/audit.log command to view all existing events.
Consider the following example sequence of commands on a standard Apache web server. You
create /root/mypage and move it to the default Apache content directory (/var/www/html).
Then, after starting the Apache service, you try to retrieve the file content.
The web server does not display the content, and returns a permission denied error. An AVC
event is logged to the /var/log/audit/audit.log and /var/log/messages files. Note the
suggested sealert command and UUID in the /var/log/messages event message.
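The scenario can be sketched as the following transcript; the output is illustrative and the UUID is a placeholder:

```
[root@host ~]# touch /root/mypage
[root@host ~]# mv /root/mypage /var/www/html
[root@host ~]# systemctl start httpd
[root@host ~]# curl http://localhost/mypage
...output omitted...
<title>403 Forbidden</title>
[root@host ~]# tail /var/log/messages
...output omitted... setroubleshoot[...]: SELinux is preventing /usr/sbin/httpd
from getattr access on the file /var/www/html/mypage. For complete SELinux
messages run: sealert -l <UUID>
```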
The sealert output describes the event, and includes the affected process, the accessed file,
and the attempted and denied action. The output includes advice for correcting the file's label, if
appropriate. Additional advice describes how to generate a new policy to allow the denied action.
Use the given advice only when it is appropriate for your scenario.
Important
The sealert output includes a confidence rating, which indicates the level of
confidence that the given advice will mitigate the denial. However, that advice might
not be appropriate for your scenario.
For example, if the AVC denial is because the denied file is in the wrong location,
then advice that states either to adjust the file's context label, or to create a policy
for this location and action, although technically accurate, is not the correct solution
for your scenario. If the root cause is a wrong location or file name, then moving or
renaming the file and then restoring a correct file context is the correct solution
instead.
If you believe that httpd should be allowed getattr access on the mypage file by
default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'httpd' --raw | audit2allow -M my-httpd
# semodule -X 300 -i my-httpd.pp
Additional Information:
Source Context system_u:system_r:httpd_t:s0
Target Context unconfined_u:object_r:admin_home_t:s0
Target Objects /var/www/html/mypage [ file ]
Source httpd
Source Path /usr/sbin/httpd
...output omitted...
Hash: httpd,httpd_t,admin_home_t,file,getattr
In this example, the accessed file is in the correct location, but does not have the correct SELinux
file context. The Raw Audit Messages section displays information from the /var/log/
audit/audit.log event entry. Use the restorecon /var/www/html/mypage command
to set the correct context label. To correct multiple files recursively, use the restorecon -R
command on the parent directory.
Use the ausearch command to search for AVC events in the /var/log/audit/audit.log log
file. Use the -m option to specify the AVC message type and the -ts option to provide a time hint,
such as recent.
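A representative transcript; variable fields such as the PID, timestamp, and inode are elided:

```
[root@host ~]# ausearch -m AVC -ts recent
----
time->...
type=AVC msg=audit(...): avc:  denied  { getattr } for  pid=... comm="httpd"
path="/var/www/html/mypage" ... scontext=system_u:system_r:httpd_t:s0
tcontext=unconfined_u:object_r:admin_home_t:s0 tclass=file permissive=0
```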
In the web console, click the > character to display event details. Click solution details to
display all event details and advice. You can click Apply the solution.
After correcting the issue, the SELinux access control errors section should remove that event
from view. If the No SELinux alerts message appears, then you have corrected all current
SELinux issues.
References
sealert(8) man page
Guided Exercise
Outcomes
• Gain experience with SELinux troubleshooting tools.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. From a web browser on the workstation machine, open the http://servera/
index.html web page. An error message states that you do not have permission to
access the file.
2. Use the ssh command to log in to servera as the student user. Use the sudo -i
command to switch to the root user.
3. Use the less command to view the contents of the /var/log/messages file. Use
the / key to search for the sealert text. Press the n key until you reach the last
occurrence, because previous exercises might also have generated SELinux messages.
Copy the suggested sealert command so that you can use it in the next step. Use the q
key to quit the less command.
4. Run the suggested sealert command. Note the source context, the target objects, the
policy, and the enforcing mode. Find the correct SELinux context label for the file that the
httpd service tries to serve.
If you want to allow httpd to have getattr access on the index.html file
Then you need to change the label on /custom/index.html
Do
# semanage fcontext -a -t FILE_TYPE '/custom/index.html'
where FILE_TYPE is one of the following: NetworkManager_exec_t,
NetworkManager_log_t, NetworkManager_tmp_t, abrt_dump_oops_exec_t,
abrt_etc_t, abrt_exec_t, abrt_handle_event_exec_t, abrt_helper_exec_t,
abrt_retrace_coredump_exec_t, abrt_retrace_spool_t, abrt_retrace_worker_exec_t,
abrt_tmp_t, abrt_upload_watch_tmp_t, abrt_var_cache_t, abrt_var_log_t,
abrt_var_run_t, accountsd_exec_t, acct_data_t, acct_exec_t, admin_crontab_tmp_t,
admin_passwd_exec_t, afs_logfile_t, aide_exec_t, aide_log_t, alsa_exec_t,
alsa_tmp_t, amanda_exec_t, amanda_log_t, amanda_recover_exec_t, amanda_tmp_t,
amtu_exec_t, anacron_exec_t, anon_inodefs_t
...output omitted...
Additional Information:
Source Context system_u:system_r:httpd_t:s0
Target Context unconfined_u:object_r:default_t:s0
Target Objects /custom/index.html [ file ]
Source httpd
Source Path /usr/sbin/httpd
Port <Unknown>
Host servera.lab.example.com
Source RPM Packages httpd-2.4.51-7.el9_0.x86_64
Target RPM Packages
SELinux Policy RPM selinux-policy-targeted-34.1.27-1.el9.noarch
Local Policy RPM selinux-policy-targeted-34.1.27-1.el9.noarch
Selinux Enabled True
Policy Type targeted
Enforcing Mode Enforcing
Host Name servera.lab.example.com
Platform Linux servera.lab.example.com
5.14.0-70.2.1.el9_0.x86_64 #1 SMP PREEMPT Wed Mar
16 18:15:38 EDT 2022 x86_64 x86_64
Alert Count 4
First Seen 2022-04-07 04:51:38 EDT
Last Seen 2022-04-07 04:52:13 EDT
Local ID 9a96294a-239b-4568-8f1e-9f35b5fb472b
...output omitted...
4.2. Verify the SELinux context for the directory from where the httpd service serves the
content by default, /var/www/html. The httpd_sys_content_t SELinux context
is appropriate for the /custom/index.html file.
5. The Raw Audit Messages section of the sealert command contains information from
the /var/log/audit/audit.log file. Use the ausearch command to search the /
var/log/audit/audit.log file. The -m option searches on the message type. The -
ts option searches based on time. The following entry identifies the relevant process and
file that cause the alert. The process is the httpd Apache web server, the file is /custom/
index.html, and the context is system_r:httpd_t.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Lab
Outcomes
• Identify issues in system log files.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. Log in to the serverb machine as the student user and switch to the root user.
2. From a web browser on the workstation machine, view the http://serverb/lab.html
web page. You see the error message: You do not have permission to access
this resource.
3. Research and identify the SELinux issue that prevents the Apache service from serving web
content.
4. Display the SELinux context of the new HTTP document directory and the original HTTP
document directory. Resolve the SELinux issue that prevents the Apache server from serving
web content.
5. Verify that the Apache server can now serve web content.
6. Return to the workstation machine as the student user.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Solution
Outcomes
• Identify issues in system log files.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. Log in to the serverb machine as the student user and switch to the root user.
3.1. View the contents of the /var/log/messages file. Use the / key and search for the
sealert string. Use the q key to quit the less command.
3.2. Run the suggested sealert command. Note the source context, the target objects,
the policy, and the enforcing mode.
If you want to allow httpd to have getattr access on the lab.html file
Then you need to change the label on /lab-content/lab.html
Do
# semanage fcontext -a -t FILE_TYPE '/lab-content/lab.html'
where FILE_TYPE is one of the following:
...output omitted...
Additional Information:
Source Context system_u:system_r:httpd_t:s0
Target Context unconfined_u:object_r:default_t:s0
Target Objects /lab-content/lab.html [ file ]
Source httpd
Source Path /usr/sbin/httpd
Port <Unknown>
Host serverb.lab.example.com
Source RPM Packages httpd-2.4.51-7.el9_0.x86_64
Target RPM Packages
SELinux Policy RPM selinux-policy-targeted-34.1.27-1.el9.noarch
Local Policy RPM selinux-policy-targeted-34.1.27-1.el9.noarch
Selinux Enabled True
Policy Type targeted
Enforcing Mode Enforcing
Host Name serverb.lab.example.com
Platform Linux serverb.lab.example.com
5.14.0-70.2.1.el9_0.x86_64 #1 SMP PREEMPT Wed Mar
16 18:15:38 EDT 2022 x86_64 x86_64
Alert Count 8
First Seen 2022-04-07 06:14:45 EDT
Last Seen 2022-04-07 06:16:12 EDT
Local ID 35c9e452-2552-4ca3-8217-493b72ba6d0b
Hash: httpd,httpd_t,default_t,file,getattr
3.3. The Raw Audit Messages section of the sealert output contains information
from the /var/log/audit/audit.log file. Search the /var/log/audit/
audit.log file with the ausearch command. The -m option searches on the
message type. The -ts option searches based on time. The following entry identifies
the relevant process and file that caused the alert. The process is the httpd Apache
web server, the file is /lab-content/lab.html, and the context is system_r:httpd_t.
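A raw AVC record can also be dissected with standard text tools. The following sketch pulls the process name and the target path out of a denial; the record here is a reconstruction modeled on this exercise, not verbatim log output:

```shell
# Reconstructed AVC record in the shape of this exercise (not verbatim output)
avc='type=AVC msg=audit(1649326572.460:8): avc:  denied  { getattr } for pid=2540 comm="httpd" path="/lab-content/lab.html" scontext=system_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:default_t:s0 tclass=file'

# Extract the fields that matter when triaging a denial
comm=$(printf '%s\n' "$avc" | sed -n 's/.*comm="\([^"]*\)".*/\1/p')
path=$(printf '%s\n' "$avc" | sed -n 's/.*path="\([^"]*\)".*/\1/p')
echo "process=$comm target=$path"
```

On the real system, search the audit log directly with the ausearch -m AVC -ts recent command rather than parsing lines by hand.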
4. Display the SELinux context of the new HTTP document directory and the original HTTP
document directory. Resolve the SELinux issue that prevents the Apache server from serving
web content.
4.1. Compare the SELinux context for the /lab-content and /var/www/html
directories.
4.2. Create a file context rule that sets the default type to httpd_sys_content_t for
the /lab-content directory and all the files in it.
4.3. Correct the SELinux context for the files in the /lab-content directory.
5. Verify that the Apache server can now serve web content.
5.1. Use your web browser to refresh the http://serverb/lab.html link. If the content
is displayed, then your issue is resolved.
This is the html file for the SELinux final lab on SERVERB.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Summary
• Use the getenforce and setenforce commands to manage the SELinux mode of a system.
• The semanage command manages SELinux policy rules. The restorecon command applies
the context that the policy defines.
• Booleans are switches that change the behavior of the SELinux policy. You can enable or disable
them to tune the policy.
• The sealert command displays useful information to help with SELinux troubleshooting.
Chapter 7
Chapter 7 | Manage Basic Storage
Objectives
Create storage partitions, format them with file systems, and mount them for use.
Partition Disks
Disk partitioning divides a hard drive into multiple logical storage partitions. You can use partitions
to divide storage based on different requirements, and this division provides many benefits.
The MBR scheme's 2 TiB limit on disk and partition sizes is now a common and restrictive
limitation. Consequently, the legacy MBR scheme is superseded by the GUID Partition Table (GPT)
partitioning scheme.
GPT partitioning offers additional features and benefits over MBR. A GPT uses a globally unique
identifier (GUID) to identify each disk and partition. A GPT makes the partition table redundant,
with the primary GPT at the head of the disk, and a backup secondary GPT at the end of the disk.
A GPT uses a checksum to detect errors in the GPT header and partition table.
Manage Partitions
An administrator can use a partition editor program to change a disk's partitions, such as creating
and deleting partitions, and changing partition types.
The standard partition editor on the command line in Red Hat Enterprise Linux is parted. You can
use the parted partition editor with storage that uses either the MBR partitioning scheme or the
GPT partitioning scheme.
The parted command takes as its first argument the device name that represents the entire
storage device or disk to modify, followed by subcommands. The following example uses the
print subcommand to display the partition table on the disk that is the /dev/vda block device
(the first "virtualized I/O" disk detected by the system).
Use the parted command without a subcommand to open an interactive partitioning session.
(parted) quit
[root@host ~]#
By default, the parted command displays sizes in powers of 10 (KB, MB, GB). You can change the
unit size with the unit parameter, which accepts the following values:
• s for sector
• B for byte
• MiB, GiB, or TiB (powers of 2)
• MB, GB, or TB (powers of 10)
As shown in the previous example, you can also specify multiple subcommands (here, unit and
print) on the same line.
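Because both unit families appear in parted output, it helps to keep the underlying byte values straight. The following is plain shell arithmetic, independent of parted:

```shell
# Powers of 10: 1 MB = 1,000,000 bytes; 1 GB = 1,000,000,000 bytes
mb=$((1000 * 1000))
gb=$((mb * 1000))

# Powers of 2: 1 MiB = 1,048,576 bytes; 1 GiB = 1,073,741,824 bytes
mib=$((1024 * 1024))
gib=$((mib * 1024))

echo "1 GB  = $gb bytes"
echo "1 GiB = $gib bytes ($((gib - gb)) bytes larger)"
```

The roughly 7% gap between GB and GiB explains why a partition created with power-of-10 sizes never looks like a round number in power-of-2 units.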
Warning
The mklabel subcommand wipes the existing partition table. Use the mklabel
subcommand when the intent is to reuse the disk without regard to the existing
data. If a new label moves the partition boundaries, then all data in existing file
systems becomes inaccessible.
Run the parted command and specify the disk device name as an argument, to start in interactive
mode. The session displays (parted) as a subcommand prompt.
(parted) mkpart
Partition type? primary/extended? primary
Note
If you need more than four partitions on an MBR-partitioned disk, then create three
primary partitions and one extended partition. The extended partition serves as a
container within which you can create multiple logical partitions.
Indicate the file-system type to create on the partition, such as xfs or ext4. This value is only a
useful partition type label, and does not create the file system.
Start? 2048s
The s suffix specifies the value in sectors. Alternatively, use the MiB, GiB, TiB, MB, GB, or TB
suffixes. Without a suffix, the parted command assumes MB. The parted command rounds
provided values to satisfy disk constraints.
When the parted command starts, it retrieves the disk topology from the device, such as the
disk physical block size. The parted command ensures that the start position that you provide
correctly aligns the partition with the disk structure, to optimize performance. If the start position
results in a misaligned partition, then the parted command displays a warning. With most disks, a
start sector that is a multiple of 2048 is safe.
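The reason a start sector of 2048 is safe on most disks: at the common 512-byte logical sector size, it places the partition on a 1 MiB boundary, a multiple of every usual physical block size. A quick check of the arithmetic:

```shell
sector_size=512      # common logical sector size; verify yours with:
                     #   cat /sys/block/vda/queue/logical_block_size
start_sector=2048    # the start position used throughout this chapter

offset=$((start_sector * sector_size))
echo "start offset: $offset bytes = $((offset / 1024 / 1024)) MiB"
if [ $((offset % (1024 * 1024))) -eq 0 ]; then
    echo "1 MiB aligned"
fi
```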
Specify the disk sector where the new partition should end, and exit parted. You can specify the
end as a size or as an ending location.
End? 1000MB
(parted) quit
Information: You may need to update /etc/fstab.
[root@host ~]#
When you provide the end position, the parted command updates the partition table on the disk
with the new partition details.
Run the udevadm settle command. This command waits for the system to detect the new
partition and to create the associated device file in the /dev directory. The prompt returns when
the task is done.
As the root user, execute the parted command and specify the disk device name as an
argument.
Use the mkpart subcommand to begin creating the partition. With the GPT scheme, each
partition is given a name.
(parted) mkpart
Partition name? []? userdata
Indicate the file-system type to create on the partition, such as xfs or ext4. This value does not
create the file system, but is a useful partition type label.
Specify the disk sector that the new partition starts on.
Start? 2048s
Specify the disk sector for the new partition to end, and exit parted. When you provide the end
position, the parted command updates the GPT on the disk with the new partition details.
End? 1000MB
(parted) quit
Information: You may need to update /etc/fstab.
[root@host ~]#
Run the udevadm settle command. This command waits for the system to detect the new
partition and to create the associated device file in the /dev directory. The prompt returns when
the task is done.
Delete Partitions
The following instructions apply for both the MBR and GPT partitioning schemes. Specify the disk
that contains the partition to remove.
Run the parted command with the disk device as the only argument.
(parted) print
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 5369MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Delete the partition, and exit parted. The rm subcommand immediately deletes the partition
from the partition table on the disk.
(parted) rm 1
(parted) quit
Information: You may need to update /etc/fstab.
[root@host ~]#
As the root user, use the mkfs.xfs command to apply an XFS file system to a block device. For
an ext4 file system, use the mkfs.ext4 command.
You can also use the mount command to view the currently mounted file systems, their mount
points, and their options.
To configure the system to automatically mount the file system during system boot, add an entry
to the /etc/fstab file. This configuration file lists the file systems to mount at system boot.
The /etc/fstab file is a white-space-delimited file with six fields per line.
#
# /etc/fstab
# Created by anaconda on Thu Apr 5 12:05:19 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
UUID=a8063676-44dd-409a-b584-68be2c9f5570 / xfs defaults 0 0
UUID=7a20315d-ed8b-4e75-a5b6-24ff9e1f9838 /dbdata xfs defaults 0 0
The first field specifies the device. This example uses a UUID to specify the device. File systems
create and store the UUID in the partition super block at creation time. Alternatively, you could use
the device file, such as /dev/vdb1.
The second field is the directory mount point, from which the block device is accessible in the
directory structure. The mount point must exist; if not, create it with the mkdir command.
The third field contains the file-system type, such as xfs or ext4.
The fourth field is the comma-separated list of options to apply to the device. defaults is a set
of commonly used options. The mount(8) man page documents the other available options.
The fifth field is used by the dump command to back up the device. Other backup applications do
not usually use this field.
The last field, the fsck order field, determines whether to run the fsck command at system boot
to verify that the file systems are clean. The value in this field indicates the order in which fsck
should run. For XFS file systems, set this field to 0, because XFS does not use fsck to verify its
file-system status. For ext4 file systems, set it to 1 for the root file system, and to 2 for the other
ext4 file systems. By using this notation, the fsck utility processes the root file system first, and
then verifies file systems on separate disks concurrently, and file systems on the same disk in
sequence.
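These six fields can be sanity-checked mechanically. The sketch below runs awk over a sample fstab fragment (the second entry is given a deliberately wrong fsck order for illustration) and flags XFS entries whose sixth field is not 0:

```shell
# Sample entries only; the second line has a deliberately wrong fsck order
fstab='UUID=a8063676-44dd-409a-b584-68be2c9f5570 / xfs defaults 0 0
UUID=7a20315d-ed8b-4e75-a5b6-24ff9e1f9838 /dbdata xfs defaults 0 2'

# XFS does not use fsck, so field 6 should be 0 for every xfs entry
printf '%s\n' "$fstab" | awk '$3 == "xfs" && $6 != 0 {
    print "warning: " $2 " is xfs but has fsck order " $6
}'
```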
Note
An incorrect entry in /etc/fstab might render the machine non-bootable. Verify
that an entry is valid by manually unmounting the new file system and then by using
mount /mountpoint to read the /etc/fstab file, and remount the file system
with that entry's mount options. If the mount command returns an error, then
correct it before rebooting the machine.
Alternatively, use the findmnt --verify command to parse the /etc/fstab file
for partition usability.
When you add or remove an entry in the /etc/fstab file, run the systemctl daemon-reload
command, or reboot the server, to ensure that the systemd daemon loads and uses the new
configuration.
Red Hat recommends the use of UUIDs to persistently mount file systems, because block device
names can change in certain scenarios, such as if a cloud provider changes the underlying storage
layer of a virtual machine, or if disks are detected in a different order on a system boot. The block
device file name might change, but the UUID remains constant in the file-system's super block.
Use the lsblk --fs command to scan the block devices that are connected to a machine and
retrieve the file-system UUIDs.
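The following sketch extracts one UUID from that output. The table is a mock-up in the shape that lsblk --fs prints, with a sample UUID:

```shell
# Mocked-up output; on a real system run: lsblk --fs /dev/vdb
lsblk_out='NAME   FSTYPE LABEL UUID                                 MOUNTPOINTS
vdb
`-vdb1 xfs          881e856c-37b1-41e3-b009-ad526e46d987 /archive'

# With an empty LABEL column, the UUID is the third whitespace-separated field
uuid=$(printf '%s\n' "$lsblk_out" | awk '$1 ~ /vdb1/ {print $3}')
echo "UUID=$uuid /archive xfs defaults 0 0"
```

Note that the field position shifts when a file system has a label; the lsblk -no UUID /dev/vdb1 command avoids the parsing entirely.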
References
info parted (GNU Parted User Manual)
For more information, refer to the Configuring and Managing File Systems guide at
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-
single/managing_file_systems/index
Guided Exercise
Outcomes
• Use the parted, mkfs.xfs, and other commands to create a partition on a new disk,
format it, and persistently mount it.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. Log in to servera as the student user and switch to the root user.
3. Add a 1 GB primary partition. For correct alignment, start the partition at the 2048 sector.
Set the partition file-system type to XFS.
Because the partition starts at the 2048 sector, the previous command sets the end
position to 1001 MB to get a partition size of 1000 MB (1 GB).
Alternatively, you can perform the same operation with the following non-interactive
command: parted /dev/vdb mkpart primary xfs 2048s 1001MB
3.2. Verify your work by listing the partitions on the /dev/vdb device.
3.3. Run the udevadm settle command. This command waits for the system to register
the new partition, and returns when it is done.
5. Configure the new file system to mount to the /archive directory persistently.
5.2. Discover the UUID of the /dev/vdb1 device. The UUID in the output is probably
different on your system.
5.3. Add an entry to the /etc/fstab file. Replace the UUID with the one that you
discovered from the previous step.
...output omitted...
UUID=881e856c-37b1-41e3-b009-ad526e46d987 /archive xfs defaults 0 0
5.4. Update the systemd daemon for the system to register the new /etc/fstab file
configuration.
5.5. Mount the new file system with the new entry in the /etc/fstab file.
5.6. Verify that the new file system is mounted on the /archive directory.
6. Reboot servera. After the server reboots, log in and verify that the /dev/vdb1 device is
mounted on the /archive directory. When done, log out from servera.
6.2. Wait for servera to reboot and log in as the student user.
6.3. Verify that the /dev/vdb1 device is mounted on the /archive directory.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Objectives
Create and manage swap spaces to supplement physical memory.
When the memory usage on a system exceeds a defined limit, the kernel searches through RAM
to look for idle memory pages that are assigned to processes. The kernel writes the idle pages to
the swap area and reassigns the RAM pages to other processes. If a program requires access to a
page on disk, then the kernel locates another idle page of memory, writes it to disk, and recalls the
needed page from the swap area.
Because swap areas are on disk, swap is slow when compared with RAM. Although swap space
augments system RAM, do not consider swap space as a sustainable solution for insufficient RAM
for your workload.
The laptop and desktop hibernation function uses the swap space to save the RAM contents
before powering off the system. When you turn the system back on, the kernel restores the RAM
contents from the swap space and does not need a complete boot. For those systems, the swap
space must be greater than the amount of RAM.
The Knowledgebase article in References at the end of this section gives more guidance about
sizing the swap space.
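As an illustration only, a sizing helper in the spirit of that guidance. The thresholds below are a simplified assumption, not the Knowledgebase table; consult the article for the authoritative numbers:

```shell
# Illustrative thresholds only (an assumption, not the official table);
# see the Knowledgebase article referenced in this section.
recommend_swap_gib() {
    ram_gib=$1
    if [ "$ram_gib" -le 2 ]; then
        echo $((ram_gib * 2))    # small systems: twice the RAM
    elif [ "$ram_gib" -le 8 ]; then
        echo "$ram_gib"          # mid-size systems: swap equal to RAM
    else
        echo 4                   # larger systems: a fixed minimum
    fi
}

for ram in 2 8 64; do
    echo "${ram} GiB RAM -> $(recommend_swap_gib "$ram") GiB swap"
done
```

Remember that systems that hibernate need more than this, because the swap space must also hold the full RAM contents.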
(parted) mkpart
Partition name? []? swap1
File system type? [ext2]? linux-swap
Start? 1001MB
End? 1257MB
(parted) print
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 5369MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
(parted) quit
Information: You may need to update /etc/fstab.
[root@host ~]#
After creating the partition, run the udevadm settle command. This command waits for the
system to detect the new partition and to create the associated device file in the /dev directory.
The command returns only when it is finished.
Use swapon with the device as a parameter, or use swapon -a to activate all the listed swap
spaces in the /etc/fstab file. Use the swapon --show and free commands to inspect the
available swap spaces.
You can deactivate a swap space with the swapoff command. If pages are written to the swap
space, then the swapoff command tries to move those pages to other active swap spaces or
back into memory. If the swapoff command cannot write data to other places, then it fails with an
error, and the swap space stays active.
The example uses the UUID as the first field. When you format the device, the mkswap command
displays that UUID. If you lost the output of mkswap, then use the lsblk --fs command. As an
alternative, you can use the device name in the first field.
The second field is typically reserved for the mount point. However, for swap devices, which
are not accessible through the directory structure, this field takes the swap placeholder value.
The fstab(5) man page uses a none placeholder value; however, a swap value gives more
informative error messages if something goes wrong.
The third field is the file-system type. The file-system type for swap space is swap.
The fourth field is for options. The example uses the defaults option. The defaults option
includes the auto mount option, which activates the swap space automatically at system boot.
The final two fields are the dump flag and the fsck order. Swap spaces do not require backing up
or file-system checking, and so these fields should be set to zero.
When you add or remove an entry in the /etc/fstab file, run the systemctl daemon-reload
command, or reboot the server, for systemd to register the new configuration.
To set the priority, use the pri option in the /etc/fstab file. The kernel uses the swap space
with the highest priority first. The default priority is -2.
The following example shows three defined swap spaces in the /etc/fstab file. The kernel uses
the last entry first, because its priority is set to 10. When that space is full, it uses the second entry,
because its priority is set to 4. Finally, it uses the first entry, which has a default priority of -2.
Use the swapon --show command to display the swap space priorities.
When swap spaces have the same priority, the kernel writes to them in a round-robin fashion.
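The selection order the kernel applies can be simulated from the fstab options alone. The entries below mirror the three-entry example in shape; the UUIDs are placeholders:

```shell
# Placeholder UUIDs; a missing pri= option falls back to the default of -2
entries='UUID=aaaa swap swap defaults 0 0
UUID=bbbb swap swap pri=4 0 0
UUID=cccc swap swap pri=10 0 0'

# Sort by priority, highest first: the order in which the kernel fills them
order=$(printf '%s\n' "$entries" | awk '{
    pri = -2                                  # kernel default priority
    if (match($4, /pri=-?[0-9]+/))
        pri = substr($4, RSTART + 4, RLENGTH - 4)
    print pri, $1
}' | sort -k1,1nr)
echo "$order"
```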
References
mkswap(8), swapon(8), swapoff(8), mount(8), and parted(8) man pages
Knowledgebase: What Is the Recommended Swap Size for Red Hat Platforms?
https://access.redhat.com/solutions/15244
Guided Exercise
Outcomes
• Create a partition and a swap space on a disk by using the GPT partitioning scheme.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. Log in to servera as the student user and switch to the root user.
2. Inspect the /dev/vdb disk. The disk already has a partition table and uses the GPT
partitioning scheme. Also, it has an existing 1 GB partition.
3. Add a new partition of 500 MB for use as a swap space. Set the partition type to linux-swap.
3.1. Create the myswap partition. Because the disk uses the GPT partitioning scheme,
you must give a name to the partition. Notice that the start position, 1001 MB,
is the end of the existing first partition. The parted command ensures that the
new partition immediately follows the previous one, without any gap. Because the
partition starts at the 1001 MB position, the command sets the end position to
1501 MB to get a partition size of 500 MB.
3.2. Verify your work by listing the partitions on the /dev/vdb disk. The size of the
new partition is not exactly 500 MB. The difference in size is because the parted
command must align the partition with the disk layout.
3.3. Run the udevadm settle command. This command waits for the system to register
the new partition, and returns when it is done.
5.1. Verify that creating and initializing the swap space does not yet enable it for use.
6.1. Use the lsblk command with the --fs option to discover the UUID of the /dev/
vdb2 device. The UUID in the output is different on your system.
6.2. Add an entry to the /etc/fstab file. In the following command, replace the UUID
with the one that you discovered from the previous step.
...output omitted...
UUID=762735cb-a52a-4345-9ed0-e3a68aa8bb97 swap swap defaults 0 0
6.3. Update the systemd daemon for the system to register the new /etc/fstab file
configuration.
6.4. Enable the swap space by using the entry in the /etc/fstab file.
7. Reboot the servera machine. After the server reboots, log in and verify that the swap
space is enabled. When done, log out from servera.
7.2. Wait for servera to reboot and log in as the student user.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Lab
Outcomes
• Display and create partitions with the parted command.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. The serverb machine has several unused disks. On the first unused disk, create a GPT
partition label and a 2 GB GPT partition named backup.
Because it is difficult to set an exact size, a size between 1.8 GB and 2.2 GB is acceptable.
Configure the backup partition to host an XFS file system.
2. Format the 2 GB backup partition with an XFS file system and persistently mount it to the
/backup directory.
3. On the same disk, create two 512 MB GPT partitions with the swap1 and swap2 names.
A size between 460 MB and 564 MB is acceptable.
Configure the file-system types of the partitions to host swap spaces.
4. Initialize the two 512 MB partitions as swap spaces, and configure them to activate at boot.
Set the swap space on the swap2 partition to be preferred over the other. Note that 512 MB
is approximately equivalent to 488 MiB.
5. To verify your work, reboot the serverb machine. Confirm that the system automatically
mounts the first partition to the /backup directory. Also, confirm that the system activates
the two swap spaces.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Solution
Outcomes
• Display and create partitions with the parted command.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. The serverb machine has several unused disks. On the first unused disk, create a GPT
partition label and a 2 GB GPT partition named backup.
Because it is difficult to set an exact size, a size between 1.8 GB and 2.2 GB is acceptable.
Configure the backup partition to host an XFS file system.
1.1. Log in to serverb as the student user and switch to the root user.
1.2. Identify the unused disks. The first unused disk, /dev/vdb, does not have any
partitions.
1.5. Create the 2 GB backup partition with an xfs file-system type. Start the partition at
sector 2048.
1.7. Run the udevadm settle command. This command waits for the system to detect
the new partition and to create the /dev/vdb1 device file.
2. Format the 2 GB backup partition with an XFS file system and persistently mount it to the
/backup directory.
2.3. Before adding the new file system to the /etc/fstab file, retrieve its UUID. The UUID
on your system might be different.
2.4. Edit the /etc/fstab file and define the new file system.
2.6. Manually mount the /backup directory to verify your work. Confirm that the mount is
successful.
3. On the same disk, create two 512 MB GPT partitions with the swap1 and swap2 names.
A size between 460 MB and 564 MB is acceptable.
Configure the file-system types of the partitions to host swap spaces.
3.1. Retrieve the end position of the first partition by displaying the current partition table
on the /dev/vdb disk. In the next step, you use that value as the start of the swap1
partition.
Disk Flags:
3.2. Create the first 512 MB GPT partition named swap1. Set its type to linux-swap. Use
the end position of the first partition as the starting point. The end position is 2000 MB
+ 512 MB = 2512 MB.
3.3. Create the second 512 MB GPT partition named swap2. Set its type to linux-swap.
Use the end position of the previous partition as the starting point: 2512M. The end
position is 2512 MB + 512 MB = 3024 MB.
3.5. Run the udevadm settle command. The command waits for the system to register
the new partitions and to create the device files.
4. Initialize the two 512 MB partitions as swap spaces, and configure them to activate at boot.
Set the swap space on the swap2 partition to be preferred over the other. Note that 512 MB
is approximately equivalent to 488 MiB.
4.1. Use the mkswap command to initialize the swap partitions. Note the UUIDs of the
two swap spaces, because you use that information in the next step. If you clear the
mkswap output, then use the lsblk --fs command to retrieve the UUIDs.
4.2. Edit the /etc/fstab file and define the new swap spaces. To set the swap space on
the swap2 partition to be preferred over the swap1 partition, give the swap2 partition
a higher priority with the pri option.
4.4. Activate the new swap spaces. Verify the correct activation of the swap spaces.
5. To verify your work, reboot the serverb machine. Confirm that the system automatically
mounts the first partition to the /backup directory. Also, confirm that the system activates
the two swap spaces.
5.2. Wait for serverb to boot, and then log in as the student user.
5.3. Verify that the system automatically mounts the /dev/vdb1 partition to the /backup
directory.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Summary
• The parted command adds, modifies, and removes partitions on disks with the MBR or the
GPT partitioning scheme.
Chapter 8
Chapter 8 | Manage Storage Stack
Objectives
Describe logical volume manager components and concepts, and implement LVM storage and
display LVM component information.
Physical devices
Logical volumes use physical devices for storing data. These devices might be disk partitions,
whole disks, RAID arrays, or SAN disks. You must initialize the device as an LVM physical
volume. An LVM physical volume must use the entire device or partition on which it is created.
• Determine the physical devices that are used for creating physical volumes, and initialize these
devices as LVM physical volumes.
• Create the logical volumes from the available space in the volume group.
• Format the logical volume with a file system and mount it, or activate it as swap space, or pass
the raw volume to a database or storage server for advanced structures.
Note
The examples here use a /dev/vdb device name and its storage partitions. The
device names on your classroom system might be different. Use the lsblk, blkid,
or cat /proc/partitions commands to identify your system's devices.
[root@host ~]# parted /dev/vdb mklabel gpt mkpart primary 1MiB 769MiB
...output omitted...
[root@host ~]# parted /dev/vdb mkpart primary 770MiB 1026MiB
[root@host ~]# parted /dev/vdb set 1 lvm on
[root@host ~]# parted /dev/vdb set 2 lvm on
[root@host ~]# udevadm settle
This command might fail if the volume group does not have enough free physical extents. The LV
size is rounded up to the next multiple of the PE size when the requested size is not an exact match.
The lvcreate command -L option accepts sizes in units such as bytes, mebibytes (binary
megabytes, 1048576 bytes), or gibibytes (binary gigabytes). The lowercase -l option requires
sizes that are specified as a number of physical extents. The following commands are two choices
for creating the same LV with the same size:
• lvcreate -n lv01 -L 128M vg01 : create an LV of size 128 MiB, rounded to the next PE.
• lvcreate -n lv01 -l 32 vg01 : create an LV of size 32 PEs at 4 MiB each, total 128 MiB.
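The equivalence of the two forms is just arithmetic against the 4 MiB default PE size:

```shell
pe_size_mib=4    # default PE size, as reported by vgdisplay
extents=32       # the -l argument from the second command

echo "-l $extents = $((extents * pe_size_mib)) MiB, the same size as -L 128M"
```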
The Virtual Data Optimizer (VDO) provides inline block-level deduplication, compression, and
thin provisioning for storage. You can configure a VDO volume to use up to 256 TB of physical
storage. Manage VDO as a type of LVM logical volume (LV), similar to LVM thinly provisioned
volumes. An LVM VDO volume is composed of two logical volumes:
VDO pool LV
This LV stores, deduplicates, and compresses data, and sets the size of the VDO volume that
the physical device backs. VDO deduplicates and compresses each VDO LV separately,
because each VDO pool LV can hold only one VDO LV.
VDO LV
A virtual device is provisioned on top of the VDO pool LV, and sets the logical size of the VDO
volume to store the data before deduplication and compression occur.
LVM VDO presents the deduplicated storage as a regular logical volume (LV). The VDO volume
can be formatted with standard file systems, or shared as a block device, or used to build other
storage layers, the same as any normal logical volume.
To use VDO deduplication and compression, install the vdo and kmod-kvdo packages.
Verify that the selected LVM volume group has enough free storage capacity. Use the lvcreate
command with the --type vdo parameter to create a VDO LV.
Use the mkfs command to create a file system on the new logical volume.
To make the file system available persistently, add an entry to the /etc/fstab file.
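These steps can be sketched as the following command sequence. The volume group name vg01, LV name vdo01, the sizes, and the mount point are hypothetical placeholders; adjust them to your environment.

```shell
# Install the VDO kernel module and management tools.
dnf install -y vdo kmod-kvdo

# Create a VDO LV: 5 GiB of physical space from vg01, presented as a
# 50 GiB thinly provisioned logical volume (hypothetical names and sizes).
lvcreate --type vdo --name vdo01 --size 5G --virtualsize 50G vg01

# Create a file system on the new logical volume, then mount it persistently.
mkfs.xfs /dev/vg01/vdo01
mkdir -p /mnt/vdo01
echo '/dev/vg01/vdo01 /mnt/vdo01 xfs defaults 0 0' >> /etc/fstab
mount /mnt/vdo01
```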
Note
You can mount a logical volume by name or by UUID, because LVM parses the PVs
by looking for the UUID. This behavior is successful even when the VG was created
by using a name, because the PV always contains a UUID.
The associated pvs, vgs, and lvs commands are commonly used and show a subset of the status
information, with one line for each entity.
PV Size shows the physical size of the PV, including unusable space.
Free PE shows the number of free physical extents that are available to create new LVs or to
extend existing LVs.
Max PV 0
Cur PV 2
Act PV 2
VG Size 1012.00 MiB
PE Size 4.00 MiB
Total PE 253
Alloc PE / Size 75 / 300.00 MiB
Free PE / Size 178 / 712.00 MiB
VG UUID jK5M1M-Yvlk-kxU2-bxmS-dNjQ-Bs3L-DRlJNc
VG Size displays the total size of the storage pool that is available for LV allocation.
Free PE / Size shows the available space in the VG to create or extend LVs.
LV Size shows the total size of the LV. Use the file-system tools to determine the free and
used space for the LV.
Prepare the physical device and create the physical volume when not present.
The vgextend command adds the new PV to the VG. Provide the VG and PV names as
arguments to the vgextend command.
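As a brief sketch of these two steps, with hypothetical device and VG names:

```shell
# Initialize a new disk (or partition) as a physical volume.
pvcreate /dev/vdd

# Add the new PV to the existing volume group.
vgextend vg01 /dev/vdd
```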
This command increases the size of the lv01 logical volume by 500 MiB. The plus sign (+) in front
of the size means that the value is added to the existing size; without the plus sign, the value
defines the final size of the LV.
The lvextend command -l option expects a number of physical extents as the argument. The
lvextend command -L option expects sizes in bytes, mebibytes, gibibytes, and similar units.
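The command that the preceding paragraph describes can be sketched as follows; the vg01 and lv01 names are hypothetical:

```shell
# Grow lv01 by 500 MiB (the + sign adds to the current size).
lvextend -L +500M /dev/vg01/lv01

# Equivalent, as a number of 4 MiB physical extents (125 x 4 MiB = 500 MiB).
lvextend -l +125 /dev/vg01/lv01
```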
Important
Always run the xfs_growfs command after executing the lvextend command.
Alternatively, use the lvextend command -r option to run the two steps in one
operation: after extending the LV, it resizes the file system by using the fsadm
command, which supports several file systems.
The resize2fs command expands the file system to occupy the new extended LV. You can
continue to use the file system when resizing.
The primary difference between xfs_growfs and resize2fs is the argument that is passed
to identify the file system. The xfs_growfs command takes the mount point as an argument,
and the resize2fs command takes the LV name as an argument. The xfs_growfs command
supports only an online resize, whereas the resize2fs command supports both online and offline
resizing. You can resize an ext4 file system up or down, but you can resize an XFS file system only
up.
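The difference between the two commands can be sketched as follows; the mount point, VG, and LV names are hypothetical:

```shell
# XFS: grow by mount point; online resizing only, grow only.
xfs_growfs /mnt/data

# ext4: grow (online or offline) or shrink (offline) by device name.
resize2fs /dev/vg01/lv01

# Or extend the LV and resize the file system in a single step.
lvextend -r -L +500M /dev/vg01/lv01
```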
Use the swapoff command to deactivate the swap space on the LV.
Use the swapon command to activate the swap space on the LV.
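The swap extension workflow can be sketched as follows; the LV name and size are hypothetical placeholders:

```shell
# Deactivate the swap space, grow the LV, reformat it as swap, reactivate it.
swapoff /dev/vg01/swap_lv
lvextend -L +256M /dev/vg01/swap_lv
mkswap /dev/vg01/swap_lv
swapon /dev/vg01/swap_lv
```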
Warning
Before using the pvmove command, back up the data that is stored on all LVs in
the VG. An unexpected power loss during the operation might leave the VG in an
inconsistent state, which might cause a loss of data on LVs.
Important
The GFS2 and XFS file systems do not support shrinking, so you cannot reduce the
size of an LV.
Warning
Removing a logical volume destroys any data that is stored on the logical volume.
Back up or move your data before you remove the logical volume.
The LV's physical extents are freed and available to assign to existing or new LVs in the volume
group.
The VG's physical volumes are freed and available to assign to existing or new VGs on the system.
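The removal commands can be sketched as follows; the vg01, lv01, and /dev/vdd names are hypothetical, and the order matters:

```shell
lvremove /dev/vg01/lv01   # destroys any data on the LV
vgreduce vg01 /dev/vdd    # remove one PV from the VG
vgremove vg01             # remove the whole VG (after removing its LVs)
pvremove /dev/vdd         # wipe the PV metadata from the device
```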
References
fdisk(8), gdisk(8), parted(8), partprobe(8), lvm(8), pvcreate(8),
vgcreate(8), lvcreate(8), mkfs(8), pvdisplay(8), vgdisplay(8),
lvdisplay(8), vgextend(8), lvextend(8), xfs_growfs(8), resize2fs(8)
swapoff(8), mkswap(8), swapon(8), pvmove(8), vgcfgbackup(8),
vgreduce(8), lvremove(8), vgremove(8), and pvremove(8) man pages
Guided Exercise
Outcomes
• Create physical volumes, volume groups, and logical volumes with LVM tools.
• Resize the logical volume when the file system is still mounted and in use.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. Log in to the servera machine as the student user and switch to the root user.
2.1. Create two partitions of 256 MiB each and set to the Linux LVM type. Use the first
and second names for these partitions.
2.3. List the partitions on the /dev/vdb storage device. In the Number column, the 1
and 2 values correspond to the /dev/vdb1 and /dev/vdb2 device partitions. The
Flags column indicates the partition type.
4. Create the servera_group volume group by using the two new PVs.
5. Create the servera_volume logical volume with a size of 400 MiB. This command creates
the /dev/servera_group/servera_volume LV without a file system.
6.3. To persistently mount the newly created file system, add the following content in the
/etc/fstab file:
7. Verify that the mounted file system is accessible, and display the status information of the
LVM.
7.1. Verify that you can copy files to the /data directory.
7.2. View the PV status information. The output shows that the PV uses the
servera_group VG. The PV has a size of 256 MiB and a physical extent size of
4 MiB.
The PV contains 63 PEs, of which 27 PEs are available for allocation, and 36 PEs are
currently allocated to LVs. Use the following calculation to determine the allocated size
in MiB:
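The calculation multiplies the allocated PE count by the PE size. For this step's output (36 allocated PEs at 4 MiB each):

```shell
alloc_pe=36       # Allocated PE from the pvdisplay output
pe_size_mib=4     # PE Size from the pvdisplay output
echo "$((alloc_pe * pe_size_mib)) MiB allocated"   # prints: 144 MiB allocated
```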
7.3. View the VG status information of the servera_group VG. The output shows a VG
size of 508 MiB with a PE size of 4 MiB. The available size from the VG is 108 MiB.
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 2
Act PV 2
VG Size 508.00 MiB
PE Size 4.00 MiB
Total PE 127
Alloc PE / Size 100 / 400.00 MiB
Free PE / Size 27 / 108.00 MiB
VG UUID g0ahyT-90J5-iGic-nnb5-G6T9-tLdK-dX8c9M
7.4. View the status information for the servera_volume LV. The output shows the VG
name for creating the LV. It also shows an LV size of 400 MiB and an LE size of 100.
7.5. View the free disk space in human-readable units. The output shows the total size of
395 MiB with the available size of 372 MiB.
8.1. Create an additional partition of 512 MiB and set it to the Linux LVM type. Use the
third name for this partition.
9. Using the newly created disk space, extend the file system on the servera_volume to be
a total size of 700 MiB.
9.3. Extend the XFS file system by using the free space on the LV.
10. Verify that the LV size is extended, and that the contents are still present in the volume.
10.1. Verify the size of the extended LV by using the lvdisplay command.
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:0
10.2. Verify the new file-system size. Verify that the previously copied files are still present.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Objectives
Analyze the multiple storage components that make up the layers of the storage stack.
Storage Stack
Storage in RHEL is composed of multiple layers of drivers, managers, and utilities that are mature,
stable, and full of modern features. Managing storage requires familiarity with stack components,
and recognizing that storage configuration affects the boot process, application performance, and
the ability to provide needed storage features for specific application use cases.
Previous sections in the Red Hat System Administration courses presented XFS file systems,
network storage sharing, partitioning, and the Logical Volume Manager. This section shows the
bottom-to-top RHEL storage stack and introduces each layer.
This section also covers Stratis, the daemon that unifies, configures, and monitors the underlying
RHEL storage stack components, and provides automated local storage management from either
the CLI or the RHEL web console.
Block Device
Block devices are at the bottom of the storage stack, and present a stable, consistent device
protocol that enables including almost any block device transparently in a RHEL storage
configuration. Most block devices today are accessed through the RHEL SCSI device driver,
and appear as a SCSI device, including earlier ATA hard drives, solid-state devices, and common
enterprise host bus adapters (HBAs). RHEL also supports iSCSI, Fibre Channel over Ethernet
(FCoE), virtual machine driver (virtio), serial-attached SCSI (SAS), Non-Volatile Memory
Express (NVMe), and other block devices.
The Fibre Channel over Ethernet (FCoE) protocol transmits Fibre Channel frames over Ethernet
networks. Typically, each data center has dedicated LAN and Storage Area Network (SAN)
cabling, which is uniquely configured for its traffic. With FCoE, both traffic types can be combined
into a larger, converged, Ethernet network architecture. FCoE benefits include lower hardware and
energy costs.
Multipath
A path is a connection between a server and the underlying storage. Device Mapper multipath
(dm-multipath) is a RHEL native multipath tool for configuring redundant I/O paths into a
single, path-aggregated logical device. A logical device that is created by using the device mapper
(dm) appears as a unique block device in the /dev/mapper/ directory for each LUN that is
attached to the system.
You can also implement storage multipath redundancy by using network bonding when the
storage, such as iSCSI and FCoE, uses network cabling.
Partitions
A block device can be further divided into partitions. A single partition might span the entire
block device, or the device can be split into multiple partitions. You can use these partitions
to create file systems or LVM devices, or use them directly for database structures or other raw
storage.
RAID
A Redundant Array of Inexpensive Disks (RAID) is a storage virtualization technology that creates
large logical volumes from multiple physical or virtual block device components. Different forms
of RAID volumes offer data redundancy, performance improvement, or both, by implementing
mirroring or striping layouts.
LVM supports RAID levels 0, 1, 4, 5, 6, and 10. RAID logical volumes that LVM creates and manages
use the Multiple Devices (MD) kernel drivers. When not using LVM, Device Mapper RAID (dm-
raid) provides a device mapper interface to the MD kernel drivers.
You can stack LVM volumes and implement advanced features, such as encryption and
compression, for each part of the stack. Stacked LVM volumes have mandated rules and
recommended practices for practical layering in specific scenarios. You can find case-specific
recommendations in the Configuring and Managing Logical Volumes user guide.
LVM supports LUKS encryption, where a lower block device or partition is encrypted and presented
as a secure volume to create a file system on top. The practical advantage for LUKS over file-
system or file-based encryption is that a LUKS-encrypted device does not allow public visibility
or access to the file-system structure. The LUKS-encrypted device ensures that a physical device
remains secure even when removed from a computer.
LVM now incorporates VDO deduplication and compression as a configurable feature of regular
logical volumes. You can use LUKS encryption and VDO together with logical volumes, where the
LVM LUKS encryption is enabled underneath the LVM VDO volume.
Red Hat recommends XFS for most modern use cases. XFS is required when the consumer of the
storage stack is Red Hat Ceph Storage or the Stratis storage tool.
Database server applications consume storage in different ways, depending on their architecture
and size. Some smaller databases store their structures in regular files that are contained in a file
system. Because of the additional overhead or restrictions of file system access, this architecture
has scaling limits. Larger databases that bypass file system caching, and that use their own
caching mechanisms, create their database structures on raw storage. Logical volumes are
suitable for database and other raw storage use cases.
Red Hat Ceph Storage creates its own storage management metadata structures on raw devices,
to create Ceph Object Storage Devices (OSDs). In the latest Red Hat Ceph Storage versions,
Ceph uses LVM to initialize disk devices for use as OSDs. More information is available in the Cloud
Storage with Red Hat Ceph Storage (CL260) course.
Important
Stratis is currently available as a Technology Preview, but is expected to be
supported in a later RHEL 9 version. For information about Red Hat scope of
support for Technology Preview features, see the Technology Features Support
Scope [https://access.redhat.com/support/offerings/techpreview] document.
Stratis runs as a service that manages pools of physical storage devices, and transparently creates
and manages volumes for the newly created file systems.
Stratis builds file systems from shared pools of disk devices by using the thin provisioning concept.
Instead of immediately allocating physical storage space to the file system when you create
it, Stratis dynamically allocates that space from the pool as the file system stores more data.
Therefore, the file system might appear to be 1 TiB, but might have only 100 GiB of real storage
that is allocated to it from the pool.
You can create multiple pools from different storage devices. From each pool, you can create one
or more file systems. Currently, you can create up to 2^24 file systems per pool.
Stratis builds the components that make up a Stratis-managed file system from standard Linux
components. Internally, Stratis uses the Device Mapper infrastructure that LVM also uses. Stratis
formats the managed file systems with XFS.
Figure 8.3 illustrates how Stratis assembles the elements of its storage management solution.
Stratis assigns block storage devices such as hard disks or SSDs to pools. Each device contributes
some physical storage to the pool. Then, Stratis creates file systems from the pools, and maps
physical storage to each file system as needed.
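This workflow can be sketched with a few commands; the pool, file-system, device, and mount-point names are hypothetical placeholders:

```shell
# Install and start the Stratis daemon and command-line tool.
dnf install -y stratisd stratis-cli
systemctl enable --now stratisd

# Create a pool from a block device, then a thin-provisioned file system.
stratis pool create pool1 /dev/vdb
stratis filesystem create pool1 fs1

# Stratis file systems appear under /dev/stratis/<pool>/<filesystem>.
mkdir /fs1
mount /dev/stratis/pool1/fs1 /fs1
```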
Warning
Reconfigure file systems created by Stratis only with Stratis tools and commands.
Stratis uses stored metadata to recognize managed pools, volumes, and file
systems. Manually configuring Stratis file systems with non-Stratis commands can
result in overwriting that metadata, and can prevent Stratis from recognizing the file
system volumes that it previously created.
Warning
The stratis pool list command displays the storage space in use and the
available pool space. Currently, if a pool becomes full, then further data that is
written to the pool's file systems is quietly discarded.
Use the stratis pool add-data command to add block devices to a pool. Then, use the
stratis blockdev list command to verify the block devices of a pool.
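For example, using the hypothetical pool1 pool and a /dev/vdc device:

```shell
# Add another device to the pool, then list the pool's member block devices.
stratis pool add-data pool1 /dev/vdc
stratis blockdev list pool1
```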
Create a Stratis file system snapshot by using the stratis filesystem snapshot command.
Snapshots are independent of the source file systems. Stratis dynamically allocates the snapshot
storage space, and uses an initial 560 MB to store the file system's journal.
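A snapshot sketch, with hypothetical pool, file-system, and snapshot names:

```shell
# Snapshot fs1 as fs1-snap; the snapshot is itself a mountable file system.
stratis filesystem snapshot pool1 fs1 fs1-snap
mkdir /fs1-snap
mount /dev/stratis/pool1/fs1-snap /fs1-snap
```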
The following example shows an entry in the /etc/fstab file to mount a Stratis
file system persistently. This example entry is a single long line in the file. The x-
systemd.requires=stratisd.service mount option delays mounting the file system until
the systemd daemon starts the stratisd service during the boot process.
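A sketch of such an entry, shown here wrapped; the UUID and the /stratisvol mount point are placeholders:

```
UUID=<file-system-UUID>  /stratisvol  xfs
  defaults,x-systemd.requires=stratisd.service  0 0
```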
Important
If you do not include the x-systemd.requires=stratisd.service mount
option in the /etc/fstab file for each Stratis file system, then the machine fails to
start correctly, and aborts to emergency.target the next time that you reboot it.
Warning
Do not use the df command to query Stratis file system space.
The df command reports that any mounted Stratis-managed XFS file system
is 1 TiB, regardless of the current allocation. Because the file system is thinly
provisioned, a pool might not have enough physical storage to back the entire file
system. Other file systems in the pool might use up all the available storage.
Instead, always use the stratis pool list command to monitor a pool's
available storage accurately.
References
For further information, refer to Deduplicating and Compressing Logical Volumes on
RHEL at
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-
single/deduplicating_and_compressing_logical_volumes_on_rhel/index
For further information, refer to Red Hat Enterprise Linux 9 Managing File Systems
Guide at
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-
single/managing_file_systems
Stratis Storage
https://stratis-storage.github.io/
What Stratis Learned from ZFS, Btrfs, and Linux Volume Manager
https://opensource.com/article/18/4/stratis-lessons-learned
Guided Exercise
Outcomes
• Create a thin-provisioned file system by using the Stratis storage management solution.
• Verify that the Stratis volumes grow dynamically to support real-time data growth.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. Log in to the servera machine as the student user and switch to the root user.
4. Ensure that the stratispool1 Stratis pool exists on the /dev/vdb block device.
4.2. Verify the availability of the stratispool1 pool. Note the size of the pool.
5. Expand the capacity of the stratispool1 pool by adding the /dev/vdc block device.
5.2. Verify the size of the stratispool1 pool. The stratispool1 pool size increases
when you add the block device.
5.3. Verify the block devices that are currently members of the stratispool1 pool.
6.2. Verify the availability of the stratis-filesystem1 file system, and note its current
usage. The usage of the file system increases on demand in the later steps.
6.7. Obtain the UUID of the file system. The UUID would be different in your system.
6.8. Modify the /etc/fstab file to persistently mount the file system on the
/stratisvol directory. To do so, use the vim /etc/fstab command and add the
following line. Replace the UUID with the correct one for your system.
6.9. Update the systemd daemon with the new /etc/fstab configuration file.
6.10. Mount the stratisvol volume and verify that the stratis-filesystem1 volume
is mounted on the /stratisvol directory.
7. Reboot your system and verify that the file system is persistently mounted across reboots.
...output omitted...
[student@servera ~]$ sudo -i
[sudo] password for student: student
[root@servera ~]# mount
...output omitted...
/dev/mapper/stratis-1-3557...fbd3-thin-fs-d18c...b475 on /stratisvol type xfs
(rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,sunit=2048,swidth=2048,
noquota,x-systemd.requires=stratisd.service)
8.2. Create a 2 GiB file on the stratis-filesystem1 file system. It might take up to a
minute for the command to complete.
9.6. Verify that you can still access the file that you deleted from the stratis-
filesystem1 file system in the snapshot.
11. Remove the stratis-filesystem1 thin-provisioned file system and the stratis-
filesystem1-snap snapshot from the system.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Lab
Outcomes
• Resize the serverb_01_lv logical volume to 768 MiB.
• Create the serverb_02_lv logical volume with 128 MiB with an XFS file system.
This command prepares your environment and ensures that all required resources are
available.
Instructions
On the serverb machine, the serverb_01_lv logical volume that is mounted on the /
storage/data1 directory is running out of disk space, and must be extended to 768 MiB. You
must ensure that the serverb_01_lv LV remains persistently mounted on the /storage/
data1 directory.
Important
Note especially the specification of the partition size in MiB (2^20 bytes). If you
create the partition in MB (10^6 bytes), it does not satisfy the evaluation criteria,
because 1 MiB = 1.048576 MB.
Although the default unit when using the parted /dev/vdb print command
is MB, you can verify the size of the /dev/vdb device partitions in MiB units. Use
the parted /dev/vdb unit MiB print command to print the partition sizes in
MiB.
Create the serverb_02_lv LV with 128 MiB. Create the XFS file system on the newly created
volume. Mount the newly created logical volume on the /storage/data2 directory.
1. Create a 512 MiB partition on the /dev/vdb disk. Initialize this partition as a physical volume,
and extend the serverb_01_vg volume group to use this partition.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Solution
Outcomes
• Resize the serverb_01_lv logical volume to 768 MiB.
• Create the serverb_02_lv logical volume with 128 MiB with an XFS file system.
This command prepares your environment and ensures that all required resources are
available.
Instructions
On the serverb machine, the serverb_01_lv logical volume that is mounted on the /
storage/data1 directory is running out of disk space, and must be extended to 768 MiB. You
must ensure that the serverb_01_lv LV remains persistently mounted on the /storage/
data1 directory.
Important
Note especially the specification of the partition size in MiB (2^20 bytes). If you
create the partition in MB (10^6 bytes), it does not satisfy the evaluation criteria,
because 1 MiB = 1.048576 MB.
Although the default unit when using the parted /dev/vdb print command
is MB, you can verify the size of the /dev/vdb device partitions in MiB units. Use
the parted /dev/vdb unit MiB print command to print the partition sizes in
MiB.
Create the serverb_02_lv LV with 128 MiB. Create the XFS file system on the newly created
volume. Mount the newly created logical volume on the /storage/data2 directory.
1. Create a 512 MiB partition on the /dev/vdb disk. Initialize this partition as a physical volume,
and extend the serverb_01_vg volume group to use this partition.
1.1. Log in to the serverb machine as the student user and switch to the root user.
1.2. Print the partition sizes in MiB to determine where the first partition ends.
1.3. Create the 512 MiB partition and set the lvm partition type.
2.2. Extend the XFS file system to consume the remaining space on the LV.
Note
The xfs_growfs command introduces an extra step to extend the file system. An
alternative would be to use the lvextend command -r option.
3. In the existing volume group, create the serverb_02_lv logical volume with 128 MiB. Add
an XFS file system and mount it persistently on the /storage/data2 directory.
3.1. Create the serverb_02_lv LV with 128 MiB from the serverb_01_vg VG.
3.4. Add the following line to the end of the /etc/fstab file:
3.5. Update the systemd daemon with the new /etc/fstab configuration file.
4. Verify that the newly created LV is mounted with the intended size.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Summary
• You can use LVM to create flexible storage by allocating space on multiple storage devices.
• Physical volumes, volume groups, and logical volumes are managed by the pvcreate,
vgreduce, and lvextend commands.
• Logical volumes can be formatted with a file system or swap space, and they can be mounted
persistently.
• Storage can be added to volume groups, and logical volumes can be extended dynamically.
• Virtual Data Optimizer (VDO) provides deduplication and compression of data, and is managed
as a type of LVM logical volume.
• You can use Stratis to configure initial storage or to enable advanced storage features.
Chapter 9
Access Network-Attached Storage
Goal: Access network-attached storage with the NFS protocol.
Chapter 9 | Access Network-Attached Storage
Objectives
Identify NFS export information, create a directory to use as a mount point, mount an NFS export
with the mount command or by configuring the /etc/fstab file, and unmount an NFS export
with the umount command.
By default, Red Hat Enterprise Linux 9 uses NFS version 4.2. RHEL fully supports both NFSv3 and
NFSv4 protocols. NFSv3 might use either a TCP or a UDP transport protocol, but NFSv4 supports
only TCP connections.
NFS servers export directories. NFS clients mount exported directories to an existing local mount
point directory. NFS clients can mount exported directories in multiple ways: manually with the
mount command, persistently through the /etc/fstab file, or on demand with an automounter.
The automounter methods, which include the autofs service and the systemd.automount
facility, are discussed in the Automount Network-Attached Storage section. You
must install the nfs-utils package to obtain the client tools for manually mounting or
automounting exported NFS directories.
RHEL also supports mounting shared directories from Microsoft Windows systems by using the
same methods as for the NFS protocol, by using either the Server Message Block (SMB) or the
Common Internet File System (CIFS) protocols. Mounting options are protocol-specific, and
depend on your Windows Server or Samba Server configuration.
NFSv3 used the RPC protocol, which requires a file server that supports NFSv3 connections to run
the rpcbind service. An NFSv3 client connects to the rpcbind service at port 111 on the server
to request NFS service. The server responds with the current port for the NFS service. Use the
showmount command to query the available exports on an RPC-based NFSv3 server.
The NFSv4 protocol eliminated the use of the legacy RPC protocol for NFS transactions. Use of
the showmount command on a server that supports only NFSv4 times out without receiving a
response, because the rpcbind service is not running on the server. However, querying an NFSv4
server is simpler than querying an NFSv3 server.
NFSv4 introduced an export tree that contains all of the paths for the server's exported
directories. To view all of the exported directories, mount the root (/) of the server's export tree.
Mounting the export tree's root provides browseable paths for all exported directories, as children
of the tree's root directory, but does not mount ("bind") any of the exported directories.
To mount an NFSv4 export when browsing the mounted export tree, change directory to an
exported directory path. Alternatively, use the mount command with an exported directory's full
path name to mount a single exported directory. Exported directories that use Kerberos security
do not allow mounting or accessing a directory when browsing an export tree, even though
you can view the export's path name. Mounting Kerberos-protected shares requires additional
server configuration and the use of Kerberos user credentials, which are discussed in the Red Hat
Security: Identity Management and Active Directory Integration (RH362) training course.
As with local volume file systems, you mount an NFS export to access its contents. NFS shares can
be mounted temporarily or persistently, but only by a privileged user.
The -t nfs option specifies the NFS file-system type. However, when the mount command
detects the server:/export syntax, the command defaults to the NFS type. With the -o flag,
you can add a list of comma-separated options to the mount command. In the example, the
rw option specifies that the exported file system is mounted with read/write access. The sync
option specifies synchronous transactions to the exported file system. This method is strongly
recommended for all production network mounts where transactions must be completed or else
return as failed.
A manual mount command is not persistent: after the system reboots, the NFS export is no longer
mounted. Manual mounts are useful for providing temporary access to an exported directory, or
for test mounting an NFS export before persistently mounting it.
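A temporary mount can be sketched as follows, using the serverb host and /shares/public export from the guided exercise below:

```shell
# Create a local mount point, then mount the export read/write with
# synchronous writes.
mkdir -p /public
mount -t nfs -o rw,sync serverb:/shares/public /public
```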
Then, you can mount the NFS export by using only the mount point. The mount command obtains
the NFS server and mount options from the matching entry in the /etc/fstab file.
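A matching /etc/fstab entry might look like the following; the server, export, and mount-point names are hypothetical:

```
serverb:/shares/public  /public  nfs  rw,sync  0 0
```

With this entry in place, `mount /public` is enough to mount the export.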
A mounted directory can sometimes fail to unmount, and returns an error that the device is busy.
The device is busy because either an application is keeping a file open within the file system, or
some user's shell has a working directory in the mounted file-system's root directory or below it.
To resolve the error, check your own active shell windows, and use the cd command to leave the
mounted file system. If subsequent attempts to unmount the file system still fail, then use the
lsof (list open files) command to query the mount point. The lsof command returns a list of
open file names and the process which is keeping the file open.
With this information, gracefully close any processes that are using files on this file system, and
retry the unmount. In critical scenarios only, when an application cannot be closed gracefully, kill
the process to close the file. Alternatively, use the umount -f option to force the unmount, which
can cause loss of unwritten data for all open files.
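The troubleshooting sequence can be sketched as follows, with /public as a hypothetical mount point:

```shell
# Identify the processes that keep files open on the mounted file system.
lsof /public

# Leave the mounted directory in your own shell, then retry the unmount.
cd /
umount /public

# Last resort only: force the unmount, possibly losing unwritten data.
umount -f /public
```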
References
mount(8), umount(8), showmount(8), fstab(5), mount.nfs(8), nfsconf(8),
and rpcbind(8) man pages
Guided Exercise
Outcomes
• Test an NFS server with the mount command.
• Configure NFS exports in the /etc/fstab configuration file to save changes even after a
system reboots.
This command prepares your environment and ensures that all required resources are
available.
Instructions
A shipping company uses a central NFS server, serverb, to host various exported documents
and directories. Users on servera, who are all members of the admin group, need access to the
persistently mounted NFS export.
The following list provides the environment characteristics for completing this exercise:
• The serverb machine exports the /shares/public directory, which contains some text files.
• Members of the admin group (admin1, sysmanager1) have read and write access to the
/shares/public exported directory.
1. Log in to servera as the student user and switch to the root user.
1.1. Log in to servera as the student user and switch to the root user.
2. Test the NFS server on serverb with servera as the NFS client.
2.2. On servera, verify that the /shares/public NFS export from serverb
successfully mounts to the /public directory.
2.4. Explore the mount command options for the mounted NFS export.
4. After servera is finished rebooting, log in to servera as the admin1 user and test the
persistently mounted NFS export.
4.2. Test the NFS export that is mounted on the /public directory.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Objectives
Describe the benefits of using the automounter, and automount NFS exports by using direct and
indirect maps.
The automounter function was created to solve the problem that unprivileged users do not
have sufficient permissions to use the mount command. Without use of the mount command,
normal users cannot access removable media such as CDs, DVDs, and removable disk drives.
Furthermore, if a local or remote file system is not mounted at boot time by using the /etc/
fstab configuration, then a normal user cannot mount and access those unmounted file systems.
The automounter configuration files are populated with file-system mount information, in a similar
way to /etc/fstab entries. Although /etc/fstab file systems mount during system boot and
remain mounted until system shutdown or other intervention, automounter file systems do not
necessarily mount during system boot. Instead, automounter-controlled file systems mount on
demand, when a user or application attempts to enter the file-system mount point to access files.
Automounter Benefits
Resource use for automounter file systems is equivalent to file systems that are mounted at
boot, because a file system uses resources only when a program is reading and writing open files.
Mounted but idle file systems and unmounted file systems use almost no resources.
The automounter's advantage is that by unmounting the file system each time that it is no longer
in use, the file system is protected from unexpected corruption while idle. When the file system
is directed to mount again, the autofs service uses the most current mount configuration, unlike
an /etc/fstab mount, which might still use a configuration that was mounted months ago during
the last system boot. Additionally, if your NFS server configuration includes redundant servers and
paths, then the automounter can select the fastest connection each time that a new file system is
requested.
Because the automounter is a client-side configuration that uses the standard mount and umount
commands to manage file systems, automounted file systems in use exhibit the same behavior
as file systems that are mounted by using /etc/fstab. The difference is that an automounter
file system remains unmounted until the mount point is accessed, which causes the file system to
mount immediately, and to remain mounted when the file system is in use. When all files on the file
system are closed, and all users and processes leave the mount point directory, the automounter
unmounts the file system after a minimal timeout.
An indirect mount is when the mount point location is not known until the mount demand occurs.
An example of an indirect mount is the configuration for remote-mounted home directories,
where a user's home directory includes their username in the directory path. The user's remote file
system is mounted to their home directory, only after the automounter learns which user specified
to mount their home directory, and determines the mount point location to use. Although indirect
mount points appear to exist, the autofs service creates them when the mount demand occurs,
and deletes them again when the demand ends and the file system is unmounted.
These packages contain all requirements to use the automounter for NFS exports.
The name of the master map file is mostly arbitrary (although typically meaningful), and it must
have an extension of .autofs for the subsystem to recognize it. You can place multiple entries in
a single master map file; alternatively, you can create multiple master map files, each with its own
logically grouped entries.
Include the following content in the master map entry for indirectly mapped mounts:
/shares /etc/auto.demo
This entry uses the /shares directory as the base for indirect automounts. The /etc/
auto.demo file contains the mount details. Use an absolute file name. The auto.demo file must
be created before starting the autofs service.
The mapping file-naming convention is /etc/auto.name, where name reflects the content of
the map.
The format of an entry is mount point, mount options, and source location. This example shows an
indirect mapping entry. Direct maps and indirect maps that use wildcards are covered later in this
section.
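The indirect mapping entry that this example discusses has the following form, reconstructed from the mount point, options, and source location described in the surrounding text:

```
work  -rw,sync  serverb:/shares/work
```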
The mount point, known as the key in the man pages, is created and removed automatically by the
autofs service. In this case, the fully qualified mount point is /shares/work (see the master map
file). The autofs service creates and removes the /shares and /shares/work directories as
needed.
In this example, the local mount point mirrors the server's directory structure. However, this
mirroring is not required; the local mount point can have an arbitrary name. The autofs service
does not enforce a specific naming structure on the client.
Mount options start with a dash character (-) and are comma-separated with no white space.
The file-system mount options for manual mounting are also available when automounting. In this
example, the automounter mounts the export with read/write access (rw option), and the server is
synchronized immediately during write operations (sync option).
Useful automounter-specific options include -fstype= and -strict. Use fstype to specify
the file-system type, for example nfs4 or xfs, and use strict to treat errors when mounting file
systems as fatal.
The source location for NFS exports follows the host:/pathname pattern, in this example
serverb:/shares/work. For this automount to succeed, the NFS server, serverb, must
export the directory with read/write access, and the user that requests access must have standard
Linux file permissions on the directory. If serverb exports the directory with read-only access,
then the client gets read-only access even if it requested read/write access.
Continuing the previous example, if serverb:/shares exports two or more subdirectories, and
they are accessible with the same mount options, then the content for the /etc/auto.demo file
might appear as follows:
* -rw,sync serverb:/shares/&
The mount point (or key) is an asterisk character (*), and the subdirectory on the source location is
an ampersand character (&). Everything else in the entry is the same.
When a user attempts to access /shares/work, the work key matches the * wildcard and
replaces the ampersand in the source location, so serverb:/shares/work is mounted.
As with the indirect example, the autofs service creates and removes the work directory
automatically.
To use directly mapped mount points, the master map file might appear as follows:
/- /etc/auto.direct
All direct map entries use /- as the base directory. In this case, the mapping file that contains the
mount details is /etc/auto.direct.
The mount point (or key) is always an absolute path. The rest of the mapping file uses the same
structure.
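A direct mapping entry in /etc/auto.direct might appear as follows; the /mnt/docs mount point matches the example that follows, while the source location is an assumed placeholder:

```
/mnt/docs  -rw,sync  serverb:/shares/docs
```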
In this example, the /mnt directory exists, and the autofs service does not manage it. The
autofs service creates and removes the full /mnt/docs directory automatically.
The naming of the unit is based on its mount location. For example, if the mount point is
/remote/finance, then the unit file is named remote-finance.automount. The systemd
daemon mounts the file system when the /remote/finance directory is initially accessed.
This method can be simpler than installing and configuring the autofs service. However, a
systemd.automount unit can support only absolute path mount points, similar to autofs direct
maps.
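A minimal systemd.automount unit for the /remote/finance example might look like this sketch. The file name and paths follow the naming rule above; a matching remote-finance.mount unit, which supplies the actual NFS source, is also required and is not shown here:

```ini
# /etc/systemd/system/remote-finance.automount
[Unit]
Description=Automount for /remote/finance

[Automount]
Where=/remote/finance

[Install]
WantedBy=multi-user.target
```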
References
autofs(5), automount(8), auto.master(5), mount.nfs(8), and
systemd.automount(5) man pages
Guided Exercise
Outcomes
• Install required packages for the automounter.
• Configure direct and indirect automounter maps, with resources from a preconfigured
NFSv4 server.
This start script determines whether servera and serverb are reachable on the network.
The script alerts you if those servers are not available. The start script configures serverb
as an NFSv4 server, sets up permissions, and exports directories. The script also creates
users and groups that are needed on both servera and serverb.
Instructions
An internet service provider uses a central server, serverb, to host shared directories with
important documents that must be available on demand. When users log in to servera, they
need access to the automounted shared directories.
The following list provides the environment characteristics for completing this exercise:
• The serverb machine exports the /shares/indirect directory, which in turn contains the
west, central, and east subdirectories.
• The operators group consists of the operator1 and operator2 users. They have read
and write access to the /shares/indirect/west, /shares/indirect/central, and
/shares/indirect/east exported directories.
• The contractors group consists of the contractor1 and contractor2 users. They have
read and write access to the /shares/direct/external exported directory.
• The expected mount points for servera are /external and /internal.
1.1. Log in to servera as the student user and switch to the root user.
2. Configure an automounter direct map on servera with exports from serverb. Create
the direct map with files that are named /etc/auto.master.d/direct.autofs for the
master map and /etc/auto.direct for the mapping file. Use the /external directory
as the main mount point on servera.
2.1. Test the NFS server and export before you configure the automounter.
/- /etc/auto.direct
2.3. Create a direct map file named /etc/auto.direct, insert the following content,
and save the changes.
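Given the exercise environment (the /external mount point and the /shares/direct/external export), the mapping entry would be similar to:

```
/external  -rw,sync,fstype=nfs4  serverb.lab.example.com:/shares/direct/external
```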
3. Configure an automounter indirect map on servera with exports from serverb. Create
the indirect map with files that are named /etc/auto.master.d/indirect.autofs
for the master map and /etc/auto.indirect for the mapping file. Use the /internal
directory as the main mount point on servera.
3.1. Test the NFS server and export before you configure the automounter.
/internal /etc/auto.indirect
3.3. Create an indirect map file named /etc/auto.indirect, insert the following
content, and save the changes.
* -rw,sync,fstype=nfs4 serverb.lab.example.com:/shares/indirect/&
4. Start the autofs service on servera, and enable it to start automatically at boot time.
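One way to do both in a single command:

```shell
systemctl enable --now autofs
```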
5. Test the direct automounter map as the contractor1 user. When done, exit from the
contractor1 user session on servera.
5.3. Review the content and test the access to the /external mount point.
6. Test the indirect automounter map as the operator1 user. When done, log out from
servera.
Note
With an automounter indirect map, you must access each exported subdirectory
for them to mount. With an automounter direct map, after you access the mapped
mount point, you can immediately view and access the subdirectories and content in
the exported directory.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Lab
Outcomes
• Install required packages to set up the automounter.
This start script determines whether the servera and serverb systems are reachable on
the network. The start script configures serverb as an NFSv4 server, sets up permissions,
and exports directories. The script also creates users and groups that are needed on both
servera and serverb systems.
Instructions
An IT support company uses a central server, serverb, to host some exported directories
on /shares for their groups and users. Users must be able to log in and have their exported
directories mounted on demand and ready to use, in the /remote directory on servera.
The following list provides the environment characteristics for completing this exercise:
• The serverb machine is sharing the /shares directory, which in turn contains the
management, production, and operation subdirectories.
• The managers group consists of the manager1 and manager2 users. Those users have read
and write access to the /shares/management exported directory.
• The production group consists of the dbuser1 and sysadmin1 users. Those users have read
and write access to the /shares/production exported directory.
• The operators group consists of the contractor1 and consultant1 users. Those users
have read and write access to the /shares/operation exported directory.
• Use the /etc/auto.master.d/shares.autofs file as the master map file, and use the /
etc/auto.shares file as the indirect map file.
Evaluation
On the workstation machine, use the lab command to confirm success of this exercise.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Solution
Outcomes
• Install required packages to set up the automounter.
This start script determines whether the servera and serverb systems are reachable on
the network. The start script configures serverb as an NFSv4 server, sets up permissions,
and exports directories. The script also creates users and groups that are needed on both
servera and serverb systems.
Instructions
An IT support company uses a central server, serverb, to host some exported directories
on /shares for their groups and users. Users must be able to log in and have their exported
directories mounted on demand and ready to use, in the /remote directory on servera.
The following list provides the environment characteristics for completing this exercise:
• The serverb machine is sharing the /shares directory, which in turn contains the
management, production, and operation subdirectories.
• The managers group consists of the manager1 and manager2 users. Those users have read
and write access to the /shares/management exported directory.
• The production group consists of the dbuser1 and sysadmin1 users. Those users have read
and write access to the /shares/production exported directory.
• The operators group consists of the contractor1 and consultant1 users. Those users
have read and write access to the /shares/operation exported directory.
• Use the /etc/auto.master.d/shares.autofs file as the master map file, and use the /
etc/auto.shares file as the indirect map file.
1.1. Log in to servera as the student user and switch to the root user.
2. Configure an automounter indirect map on servera with exports from serverb. Create
an indirect map with files that are named /etc/auto.master.d/shares.autofs for
the master map and /etc/auto.shares for the mapping file. Use the /remote directory
as the main mount point on servera. Reboot servera to determine whether the autofs
service starts automatically.
2.1. Test the NFS server before you configure the automounter.
/remote /etc/auto.shares
2.3. Create an indirect map file named /etc/auto.shares, insert the following content,
and save the changes.
* -rw,sync,fstype=nfs4 serverb.lab.example.com:/shares/&
3. Test the autofs configuration with the various users. When done, log out from servera.
3.4. Explore the mount options for the NFS automounted export.
Evaluation
On the workstation machine, use the lab command to confirm success of this exercise.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Summary
• Mount and unmount an NFS share from the command line.
• Configure the automounter with direct and indirect maps, and describe their differences.
Chapter 10
Chapter 10 | Control the Boot Process
Objectives
Describe the Red Hat Enterprise Linux boot process, set the default target when booting, and
boot a system to a non-default target.
• The machine is powered on. The system firmware, either modern UEFI or earlier BIOS, runs a
Power On Self Test (POST) and starts to initialize the hardware.
The system BIOS or UEFI is configured by pressing a specific key combination, such as F2, early
during the boot process.
• The system firmware searches for a bootable device, which is either configured in the UEFI boot
firmware or identified by the Master Boot Record (MBR) on one of the disks.
• The system firmware reads a boot loader from disk and then passes control of the system to
the boot loader. On a Red Hat Enterprise Linux 9 system, the boot loader is the GRand Unified
Bootloader version 2 (GRUB2).
The grub2-install command installs GRUB2 as the boot loader on the disk for BIOS
systems. Do not use the grub2-install command directly to install the UEFI boot loader.
RHEL 9 provides a prebuilt /boot/efi/EFI/redhat/grubx64.efi file, which contains
the required authentication signatures for a Secure Boot system. Executing grub2-install
directly on a UEFI system generates a new grubx64.efi file without those required signatures.
You can restore the correct grubx64.efi file from the grub2-efi package.
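One way to perform that restore is to reinstall the packages that provide the signed boot loader files; the exact package names can vary by release, so treat this as a sketch:

```shell
dnf reinstall grub2-efi shim
```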
• GRUB2 loads its configuration from the /boot/grub2/grub.cfg file for BIOS, and from the
/boot/efi/EFI/redhat/grub.cfg file for UEFI, and displays a menu to select which kernel
to boot.
GRUB2 is configured by using the /etc/grub.d/ directory and the /etc/default/grub file.
The grub2-mkconfig command generates the /boot/grub2/grub.cfg or /boot/efi/
EFI/redhat/grub.cfg files for BIOS or UEFI, respectively.
• After you select a kernel, or the timeout expires, the boot loader loads the kernel and initramfs
from disk and places them in memory. An initramfs image is an archive with the kernel
modules for all the required hardware at boot, initialization scripts, and more. In Red Hat
Enterprise Linux 9, the initramfs image contains a bootable root file system with a running
kernel and the systemd daemon.
• The boot loader hands control over to the kernel, and passes in any specified options on the
kernel command line in the boot loader, and the location of the initramfs image in memory.
The boot loader is configured by using the /etc/grub.d/ directory, the /etc/default/
grub file, and the grub2-mkconfig command to generate the /boot/grub2/grub.cfg file.
• The kernel initializes all hardware for which it can find a driver in the initramfs image, and then
executes the /sbin/init script from the initramfs image as PID 1. On Red Hat Enterprise
Linux 9, the /sbin/init script is a link to the systemd executable.
• The systemd daemon from the initramfs image executes all units for the initrd.target
target. These units include mounting the root file system on disk to the /sysroot directory.
• The kernel switches (pivots) the root file system from the initramfs image to the root file
system in the /sysroot directory. The systemd daemon then re-executes itself by using the
installed copy of systemd on the disk.
• The systemd daemon looks for a default target, which is either passed in from the kernel command
line or is configured on the system. The systemd daemon then starts (and stops) units to comply
with the configuration for that target, resolving dependencies between units automatically.
A systemd target is a set of units that the system activates to reach the intended state. These
targets typically start a text-based login prompt or a graphical login screen.
The systemctl poweroff command stops all running services, unmounts all file systems (or
remounts them read-only when they cannot be unmounted), and then powers down the system.
The systemctl reboot command stops all running services, unmounts all file systems, and then
reboots the system.
You can also use the shorter version of these commands, poweroff and reboot, which are
symbolic links to their systemctl equivalents.
Note
The systemctl halt and halt commands are also available to stop the system.
Unlike the poweroff command, these commands do not power off the system;
they bring down a system to a point where it is safe to power it off manually.
Target Purpose
emergency.target This target starts the most minimal system for repairing your
system when the rescue.target unit fails to start.
A target can be a part of another target. For example, the graphical.target unit includes the
multi-user.target unit, which in turn depends on the basic.target unit and others. You
can view these dependencies with the following command:
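For example, the dependencies of the graphical target can be listed with:

```shell
systemctl list-dependencies graphical.target
```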
│ │ ├─integritysetup.target
│ │ ├─local-fs.target
...output omitted...
Isolating a target stops all services that the target does not require (and its dependencies), and
starts any required services that are not yet started.
Not all targets can be isolated. You can isolate only targets where AllowIsolate=yes is set in
their unit files. For example, you can isolate the graphical target, but not the cryptsetup target.
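For example, to switch a running system to the multi-user target:

```shell
systemctl isolate multi-user.target
```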
For example, to boot the system into a rescue shell where you can change the system
configuration with almost no services running, append the following option to the kernel command
line from the boot loader:
systemd.unit=rescue.target
This configuration change affects only a single boot, and is a useful tool to troubleshoot the boot
process.
To use this method to select a different target, use the following procedure:
2. Interrupt the boot loader menu countdown by pressing any key (except Enter, which would
initiate a normal boot).
5. Move the cursor to the line that starts with linux, which is the kernel command line.
References
info grub2 (GNU GRUB manual)
For more information, refer to the Managing Services with systemd chapter in the
Configuring Basic System Settings guide at
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-
single/configuring_basic_system_settings/index#managing-services-with-systemd
Guided Exercise
Outcomes
• Update the system default target and use a temporary target from the boot loader.
Instructions
1. On the workstation machine, open a terminal and confirm that the default target is
graphical.target.
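That check looks like the following:

```shell
systemctl get-default
# graphical.target  (the expected default in this exercise)
```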
3. Access a text-based console. Use the Ctrl+Alt+F1 key sequence by using the relevant
button or menu entry. Log in as the root user by using redhat as the password.
Note
Reminder: If you are using the terminal through a web page, then you can click the
Show Keyboard icon in the menu on the right side of the screen under your web
browser's URL bar.
4. Configure the workstation machine to automatically boot into the multi-user target,
and then reboot the workstation machine to verify. When done, change the default
systemd target back to the graphical target.
4.2. Reboot the workstation machine. After reboot, the system presents a text-based
console and not a graphical login screen.
4.4. Set the default systemd target back to the graphical target.
This step concludes the first part of the exercise, where you practice setting the
default systemd target.
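In sketch form, the two target changes in this part reduce to:

```shell
systemctl set-default multi-user.target
systemctl reboot
# ...verify the text-based console, then restore the default:
systemctl set-default graphical.target
```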
5. In this second part of the exercise, you practice by using rescue mode to recover the
system.
Access the boot loader by rebooting workstation again. From within the boot loader
menu, boot into the rescue target.
5.2. When the boot loader menu appears, press any key to interrupt the countdown
(except Enter, which would initiate a normal boot).
5.3. Use the cursor keys to highlight the default boot loader entry.
5.5. Using the cursor keys, navigate to the line that starts with linux.
5.6. Press End to move the cursor to the end of the line.
Note
If it is difficult for you to read the text in the console, then consider changing the
resolution when you edit the kernel line in the boot loader entry.
5.9. Log in to rescue mode. You might need to press Enter to get a clean prompt.
6. Confirm that in rescue mode, the root file system is in read/write mode.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Objectives
Log in to a system and change the root password when the current root password is lost.
Several methods exist to set a new root password. A system administrator could, for example,
boot the system by using a Live CD, mount the root file system from there, and edit /etc/
shadow. This section explores a method that does not require the use of external media.
On Red Hat Enterprise Linux 9, the scripts that run from the initramfs image can be paused at
certain points to provide a root shell, and then continue when that shell exits. This feature is
mostly meant for debugging, but you can also use it to reset a lost root password.
Starting from Red Hat Enterprise Linux 9, if you install your system from a DVD, then the default
kernel asks for the root password when you try to enter maintenance mode. Thus, to reset a lost
root password, you must use the rescue kernel.
3. Move the cursor to the rescue kernel entry to boot (the entry with the rescue word in its
name).
5. Move the cursor to the kernel command line (the line that starts with linux).
6. Append rd.break. With that option, the system breaks just before it hands control from the
initramfs image to the actual system.
At this point, the system presents a root shell, and the root file system on the disk is mounted
read-only on /sysroot. Because troubleshooting often requires modifying the root file
system, you must remount the root file system as read/write. The remount,rw option to the
mount command remounts the file system with the new option (rw) set.
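From the initramfs debug shell, that remount is:

```shell
mount -o remount,rw /sysroot
```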
Important
Because the system has not yet enabled SELinux, any file that you create does not
have SELinux context. Some tools, such as the passwd command, first create a
temporary file, and then replace it with the file that is intended for editing, which
effectively creates a file without SELinux context. For this reason, when you use the
passwd command with rd.break, the /etc/shadow file does not receive SELinux
context.
2. Switch into a chroot jail, where /sysroot is treated as the root of the file-system tree.
4. Ensure that all unlabeled files, including /etc/shadow at this point, get relabeled during
boot.
5. Type exit twice. The first command exits the chroot jail, and the second command exits the
initramfs debug shell.
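Put together, the recovery sequence from the debug shell looks like this sketch; the passwd step is implied by the procedure's goal of resetting the root password:

```shell
chroot /sysroot        # treat /sysroot as the root of the file-system tree
passwd root            # set the new root password
touch /.autorelabel    # relabel all files, including /etc/shadow, at boot
exit                   # leave the chroot jail
exit                   # leave the initramfs debug shell; the boot continues
```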
At this point, the system continues booting, performs a full SELinux relabeling, and then reboots
again.
The procedure to use the rd.break option to get a root shell is similar to the previously outlined
procedure, with some minor changes.
If your system was deployed from a Red Hat Enterprise Linux cloud image, then your boot menu
does not have a rescue kernel by default. However, you can use the default kernel to enter
maintenance mode by using the rd.break option without entering the root password.
The kernel prints boot messages and displays the root prompt on the system console. Prebuilt
images might have multiple console= arguments on the kernel command line in the bootloader.
Even though the system sends the kernel messages to all the consoles, the root shell that the
rd.break option sets up uses the last console that is specified on the command line. If you do
not get your prompt, then you might temporarily reorder the console= arguments when you edit
the kernel command line in the boot loader.
Inspect Logs
Looking at the logs of previously failed boots can be useful. If the system journals persist across
reboots, then you can use the journalctl tool to inspect those logs.
Remember that by default, the system journals are kept in the /run/log/journal directory, and
the journals are cleared when the system reboots. To store journals in the /var/log/journal
directory, which persists across reboots, set the Storage parameter to persistent in the /
etc/systemd/journald.conf file.
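The relevant fragment of /etc/systemd/journald.conf:

```ini
[Journal]
Storage=persistent
```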
To inspect the logs of a previous boot, use the journalctl command with the -b option. Without
an argument, the -b option displays only messages since the last boot. With a negative number
as an argument, it displays the logs of earlier boots.
This command shows all messages that are rated as an error or worse from the previous boot.
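For instance, to show messages rated error or worse from the previous boot:

```shell
journalctl -b -1 -p err
```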
Warning
Disable the debug-shell.service service when you are done debugging,
because it leaves an unauthenticated root shell open to anyone with local console
access.
Alternatively, to activate the debug shell during the boot by using the GRUB2 menu, follow these
steps:
5. Move the cursor to the kernel command line (the line that starts with linux).
6. Append systemd.debug-shell. With this parameter, the system boots into the debug
shell.
The emergency target keeps the root file system mounted read-only, while the rescue target waits
for the sysinit.target unit to complete, so that more of the system is initialized, such as the
logging service or the file systems. At this point, the root user cannot change /etc/fstab until
the drive is remounted in read/write mode with the mount -o remount,rw / command.
Administrators can use these shells to fix any issues that prevent the system from booting
normally, for example, a dependency loop between services, or an incorrect entry in /etc/fstab.
Exiting from these shells continues with the regular boot process.
References
dracut.cmdline(7), systemd-journald(8), journald.conf(5),
journalctl(1), and systemctl(1) man pages
Guided Exercise
Outcomes
• Reset the lost root user password.
This command runs a start script that determines whether the servera machine is
reachable on the network. It also resets the root password to a random string and sets a
higher time-out for the GRUB2 menu.
Instructions
1. Reboot servera, and interrupt the countdown in the boot-loader menu.
1.1. Locate the icon for the servera console, as appropriate for your classroom
environment, and then open the console.
Send Ctrl+Alt+Del to your system by using the relevant button or menu entry.
1.2. When the boot-loader menu appears, press any key to interrupt the countdown,
except Enter.
2. Edit the rescue kernel boot-loader entry, in memory, to abort the boot process just after
the kernel mounts all the file systems, but before it hands over control to systemd.
2.1. Use the cursor keys to highlight the rescue kernel entry (the one with the rescue word
in its name).
2.3. Use the cursor keys to navigate to the line that starts with linux.
2.4. Press End to move the cursor to the end of the line.
Note
If it is difficult for you to see the text in the console, then consider changing the
resolution when editing the kernel line in the boot loader entry.
3. Press Enter to perform maintenance. At the sh-5.1# prompt, remount the /sysroot
file system as read/write, and then use the chroot command to enter a chroot jail at
/sysroot.
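These two operations can look like this (the sh-5.1# prompt comes from the initramfs shell):

sh-5.1# mount -o remount,rw /sysroot
sh-5.1# chroot /sysroot

From inside the chroot jail, commands such as passwd operate on the real root file system.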
5. Configure the system to automatically perform a full SELinux relabeling after booting. This
step is necessary because the passwd command re-creates the /etc/shadow file without
an SELinux context.
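A minimal way to request the relabel is to create the flag file that systemd checks at boot:

sh-5.1# touch /.autorelabel

At the next boot, systemd detects this file, relabels the file systems, removes the file, and reboots.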
6. Type exit twice to continue booting your system as usual. The system runs an SELinux
relabel operation, and then reboots automatically. When the system is up, verify your work
by logging in as root at the console.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Objectives
Manually repair file-system configuration or corruption issues that stop the boot process.
File-system Issues
During the boot process, the systemd service mounts the persistent file systems that are defined
in the /etc/fstab file.
Errors in the /etc/fstab file or corrupted file systems can block a system from completing the
boot process. In some failure scenarios, the system breaks out of the boot process and opens an
emergency shell that requires the root user password.
The following list describes some common file-system mounting issues when parsing the /etc/
fstab file during the boot process:
Note
If the mount point is not present, then Red Hat Enterprise Linux 9 automatically
creates it during the boot process.
The next example demonstrates the boot process output when the system finds a file-system
issue and switches to the emergency target:
...output omitted...
[* ] A start job is running for /dev/vda2 (27s / 1min 30s)
[ TIME ] Timed out waiting for device /dev/vda2.
[DEPEND] Dependency failed for /mnt/mountfolder
[DEPEND] Dependency failed for Local File Systems.
[DEPEND] Dependency failed for Mark need to relabel after reboot.
...output omitted...
[ OK ] Started Emergency Shell.
[ OK ] Reached target Emergency Mode.
...output omitted...
Give root password for maintenance
(or press Control-D to continue):
The systemd daemon failed to mount the /dev/vda2 device and timed out. Because the device
is not available, the system opens an emergency shell for maintenance access.
To repair file-system issues when your system opens an emergency shell, first locate the errant
file system, then find and repair the fault, and finally reload the systemd configuration to retry
the automatic mounting.
Use the mount command to find which file systems are currently mounted by the systemd
daemon.
If the root file system is mounted with the ro (read-only) option, then you cannot edit the /etc/
fstab file. Temporarily remount the root file system with the rw (read/write) option, if necessary,
before opening the /etc/fstab file. With the remount option, an in-use file system can change
its mount parameters without unmounting the file system.
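For example, to remount the root file system as read/write:

[root@host ~]# mount -o remount,rw /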
Try to mount all the file systems that are listed in the /etc/fstab file by using the mount
command's --all option. This option starts a mount process for every file-system entry, but skips
file systems that are already mounted. The command displays any errors that occur when mounting
a file system.
In this scenario, where the /mnt/mountfolder mount directory does not exist, create the /
mnt/mountfolder directory before reattempting the mount. Other error messages can occur,
including typing errors in the entries, or wrong device names or UUIDs.
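A sketch of this scenario (error messages may vary with your util-linux version):

[root@host ~]# mount --all
mount: /mnt/mountfolder: mount point does not exist.
[root@host ~]# mkdir /mnt/mountfolder
[root@host ~]# mount --all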
After you correct all issues in the /etc/fstab file, instruct the systemd daemon to register the
new /etc/fstab file by using the systemctl daemon-reload command. Then, reattempt
mounting all the entries.
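For example:

[root@host ~]# systemctl daemon-reload
[root@host ~]# mount --all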
Note
The systemd service processes the /etc/fstab file by transforming each entry
into a .mount type systemd unit configuration and then starting the unit as a
service. The daemon-reload option requests the systemd daemon to rebuild and
reload all unit configurations.
If the mount --all command succeeds without further errors, then the final test is to verify that
file-system mounting is successful during a system boot. Reboot the system and wait for the boot
to complete normally.
For quick testing in the /etc/fstab file, use the nofail mount entry option. Using the nofail
option in an /etc/fstab entry enables the system to boot even if that file-system mount is
unsuccessful. This option must not be used with production file systems that must always mount.
With the nofail option, an application could start when its file-system data is missing, with
possibly severe consequences.
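A hypothetical /etc/fstab entry that uses the nofail option (the device name and mount point are examples only):

/dev/vdb1  /mnt/data  xfs  defaults,nofail  0 2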
References
systemd-fsck(8), systemd-fstab-generator(8), and systemd.mount(5)
man pages
Guided Exercise
Outcomes
• Diagnose /etc/fstab file issues and use emergency mode to recover the system.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. Access the servera machine console and notice that the boot process is stuck early on.
1.1. Locate the icon for the servera console, as appropriate for your classroom
environment. Open the console.
Notice that a start job does not seem to complete. Consider a possible cause for this
behavior.
1.2. Reboot the servera machine, by sending Ctrl+Alt+Del to your system by using
the relevant button or menu entry. With this boot problem, this key sequence might
not immediately abort the running job, and you might have to wait for it to time out
before the system reboots.
If you wait for the task to time out without sending Ctrl+Alt+Del, then the system
eventually spawns an emergency shell by itself.
1.3. When the boot-loader menu appears, press any key to interrupt the countdown,
except the Enter key.
2. Looking at the error from the previous boot, parts of the system still seem to be
functioning. Use redhat as the root user password to try an emergency boot.
2.1. Use the cursor keys to highlight the default boot loader entry.
2.3. Use the cursor keys to navigate to the line that starts with the linux word.
2.4. Press End to move the cursor to the end of the line.
Note
If it is difficult for you to see the text in the console, consider changing the
resolution when editing the kernel line in the boot loader entry.
4. Determine which file systems the systemd daemon currently mounts. The systemd
daemon mounts the root file system in read-only mode.
6. Try to mount all the other file systems. The --all (-a) option mounts all the file systems
that are listed in the /etc/fstab file and are not yet mounted.
7.1. Remove or comment out the incorrect line by using the vim /etc/fstab command.
7.2. Reload the systemd daemon for the system to register the new /etc/fstab file
configuration.
8. Verify that the /etc/fstab file is now correct by attempting to mount all entries.
9. Reboot the system and wait for the boot to complete. The system should now boot
normally.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Lab
Outcomes
• Reset a lost password for the root user.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. On the serverb machine, reset the password to redhat for the root user.
Locate the icon for the serverb machine console as appropriate for your classroom
environment, and then open the console.
2. In the boot-loader menu, select the default kernel boot-loader entry. The system fails to
boot, because a start job does not complete successfully. Fix the issue from the console of
the serverb machine.
3. Change the default systemd target on the serverb machine for the system to
automatically start a graphical interface when it boots.
No graphical interface is installed on the serverb machine. Set only the default target, and
do not install the packages.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Solution
Outcomes
• Reset a lost password for the root user.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. On the serverb machine, reset the password to redhat for the root user.
Locate the icon for the serverb machine console as appropriate for your classroom
environment, and then open the console.
1.1. Send Ctrl+Alt+Del to your system by using the relevant button or menu entry.
1.2. When the boot-loader menu appears, press any key to interrupt the countdown, except
the Enter key.
1.3. Use the cursor keys to highlight the rescue kernel boot-loader entry (the one with the
rescue word in its name).
1.5. Use the cursor keys to navigate the line that starts with the linux text.
1.6. Press Ctrl+e to move the cursor to the end of the line.
Note
If it is difficult for you to see the text in the console, consider changing the
resolution when editing the kernel line in the boot loader entry.
1.10. At the sh-5.1 prompt, remount the /sysroot file system as writable, and then use
the chroot command for the /sysroot directory.
1.12. Configure the system to perform a full SELinux relabeling after booting.
1.13. Exit the chroot environment and the sh-5.1 prompt. After the file system is
relabeled, the system prompts to enter maintenance mode. However, if you wait, then it
completes the reboot and shows the boot-loader menu.
2. In the boot-loader menu, select the default kernel boot-loader entry. The system fails to
boot, because a start job does not complete successfully. Fix the issue from the console of
the serverb machine.
2.1. Boot the system into emergency mode. Reboot the serverb machine by sending
Ctrl+Alt+Del to your system by using the relevant button or menu entry.
2.2. When the boot-loader menu appears, press any key to interrupt the countdown, except
Enter.
2.3. Use the cursor keys to highlight the default boot-loader entry.
2.5. Use the cursor keys to navigate the line that starts with the linux text.
2.6. Press Ctrl+e to move the cursor to the end of the line.
2.12. Edit the /etc/fstab file to remove or comment out the incorrect line that mounts the
/olddata mount point.
2.13. Update the systemd daemon for the system to register the changes in the /etc/
fstab file configuration.
2.14. Verify that the /etc/fstab file configuration is correct by attempting to mount all
entries.
2.15. Reboot the system and wait for the boot to complete. The system should now boot
normally.
3. Change the default systemd target on the serverb machine for the system to
automatically start a graphical interface when it boots.
No graphical interface is installed on the serverb machine. Set only the default target, and
do not install the packages.
3.1. Log in to the serverb machine as the student user and switch to the root user.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Summary
• The systemctl reboot and systemctl poweroff commands reboot and power down a
system, respectively.
• The systemctl get-default and systemctl set-default commands can query and set
the default target.
• You can use the rd.break option on the kernel command line to interrupt the boot process
before control is handed over from the initramfs image. The root file system is mounted
read-only under /sysroot.
Chapter 11
Chapter 11 | Manage Network Security
Objectives
Accept or reject network connections to system services with firewalld rules.
The nftables framework provides many advantages over iptables, including improved
usability and more efficient rule sets. For example, the iptables framework required a rule
for each protocol, but nftables rules can apply to both IPv4 and IPv6 traffic simultaneously.
The iptables framework required using different tools, such as iptables, ip6tables,
arptables, and ebtables, for each protocol. By contrast, the nftables framework uses the
single nft user-space utility to manage all protocols through a single interface.
Note
Convert earlier iptables configuration files into their nftables equivalents by
using the iptables-translate and ip6tables-translate utilities.
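For example, the translators print the equivalent nft command without applying it (the rule shown is illustrative):

[root@host ~]# iptables-translate -A INPUT -p tcp --dport 22 -j ACCEPT
nft add rule ip filter INPUT tcp dport 22 counter accept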
The firewalld service simplifies firewall management by classifying network traffic into zones.
A network packet's assigned zone depends on criteria such as the source IP address of the packet
or the incoming network interface. Each zone has its own list of ports and services that are either
open or closed.
Note
For laptops or other machines that often change networks, the NetworkManager
service can automatically set the firewall zone for a connection. This service is useful
when switching between home, work, and public wireless networks. A user might
want their system's sshd service to be reachable when connected to their home
or corporate networks, but not when connected to a public wireless network in the
local coffee shop.
The firewalld service checks the source address of every incoming packet into the system. If
that source address is assigned to a specific zone, then the rules for that zone apply. If the source
address is not assigned to a zone, then the firewalld service associates the packet with the
zone for the incoming network interface, and the rules for that zone apply. If the network interface
is not associated with a zone, then the firewalld service sends the packet to the default zone.
The default zone is not a separate zone, but rather a designation assigned to an existing zone.
Initially, the firewalld service designates the public zone as default, and maps the lo
loopback interface to the trusted zone.
Most zones allow traffic through the firewall if it matches a list of particular ports and protocols,
such as 631/udp, or a predefined service configuration, such as ssh. Normally, if the traffic does
not match a permitted port and protocol or service, then it is rejected. The trusted zone, which
permits all traffic by default, is an exception.
Predefined Zones
The firewalld service uses predefined zones, which you can customize. By default, all zones
allow any incoming traffic that is part of an existing session that the system initiated, and also
allow all outgoing traffic. The following table details the initial zone configuration.
drop: Drop all incoming traffic unless it is related to outgoing traffic (do not even
respond with ICMP errors).
For a list of available predefined zones and their intended use, see the firewalld.zones(5) man
page.
Predefined Services
The firewalld service includes predefined configurations for common services, to simplify
setting firewall rules. For example, instead of researching the relevant ports for an NFS server, use
the predefined nfs configuration to create rules for the correct ports and protocols. The following
table lists some predefined service configurations that might be active in your default firewalld
zone.
samba-client: Local Windows file and print sharing client. Traffic to 137/udp and
138/udp.
cockpit: Red Hat Enterprise Linux web-based interface for managing and
monitoring your local and remote systems. Traffic to 9090/tcp.
The firewalld package includes many predefined service configurations. You can list the
services with the firewall-cmd --get-services command.
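For example (the list is long; output abbreviated):

[root@host ~]# firewall-cmd --get-services
RH-Satellite-6 amanda-client amanda-k5-client amqp amqps ...output omitted...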
If the predefined service configurations are not appropriate for your scenario, then you can
manually specify the required ports and protocols. You can use the web console graphical
interface to review predefined services and manually define more ports and protocols.
Click the Networking option in the left navigation menu to display the Firewall section in the main
networking page. Click the Edit rules and zones button to navigate to the Firewall page.
The Firewall page displays active zones and their allowed services. Click the arrow (>) button to
the left of a service name to view its details. To add a service to a zone, click the Add services
button in the upper right corner of the applicable zone.
To select a service, scroll through the list or enter a selection in the Filter services text box. In the
following example, the http string filters the options to web-related services. Select the checkbox
to the left of the service to allow it through the firewall. Click the Add services button to complete
the process.
The interface returns to the Firewall page, where you can review the updated allowed services list.
The following table lists commonly used firewall-cmd commands, along with an explanation
of each. Most commands work on the runtime configuration, unless the --permanent option is
specified. If the --permanent option is specified, then you must activate the setting by also
running the firewall-cmd --reload command, which reads the current permanent
configuration and applies it as the new runtime configuration. Many of the listed commands take
the --zone=ZONE option to determine which zone they affect. Where a netmask is required, use
CIDR notation, such as 192.168.1.0/24.
--remove-source=CIDR [--zone=ZONE] Remove the rule that routes all traffic from
the zone that comes from the IP address or
network. If no --zone= option is provided,
then the default zone is used.
The following example sets the default zone to dmz, assigns all traffic coming from the
192.168.0.0/24 network to the internal zone, and opens the network ports for the mysql
service on the internal zone.
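A possible command sequence for this example (reconstructed from the description; run as root):

[root@host ~]# firewall-cmd --set-default-zone=dmz
[root@host ~]# firewall-cmd --permanent --zone=internal --add-source=192.168.0.0/24
[root@host ~]# firewall-cmd --permanent --zone=internal --add-service=mysql
[root@host ~]# firewall-cmd --reload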
As another example, to add all the incoming traffic from the 172.25.25.11 single IPv4 address
to the public zone, use the following commands:
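A possible form of these commands (reconstructed; the /32 suffix denotes a single IPv4 address):

[root@host ~]# firewall-cmd --permanent --zone=public --add-source=172.25.25.11/32
[root@host ~]# firewall-cmd --reload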
Note
For situations where the basic syntax is not enough, you can add rich-rules to write
complex rules. If even the rich-rules syntax is not enough, then you can also use
Direct Configuration rules (which use raw nft syntax mixed in with firewalld
rules). These advanced configurations are beyond the scope of this chapter.
References
firewall-cmd(1), firewalld(1), firewalld.zone(5), firewalld.zones(5),
and nft(8) man pages
Guided Exercise
Outcomes
• Configure firewall rules to control access to services.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. Log in to the servera machine as the student user and switch to the root user.
2. Install the httpd and mod_ssl packages. These packages provide the Apache web server
and the necessary extensions for the web server to serve content over SSL.
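A possible invocation (package names as stated in the step):

[root@servera ~]# dnf install -y httpd mod_ssl
...output omitted...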
3. Create the /var/www/html/index.html file. Add one line of text that reads: I am
servera.
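One way to create the file (the text is taken from the step above):

[root@servera ~]# echo 'I am servera.' > /var/www/html/index.html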
6. From workstation, try to access the web server on servera by using both the 80/TCP
clear-text port and the 443/TCP SSL encapsulated port. Both attempts should fail.
6.2. The curl command with the -k option for insecure connections should also fail.
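The failed attempts might look like this (the exact error text varies with the failure mode):

[student@workstation ~]$ curl http://servera.lab.example.com
curl: (7) Failed to connect to servera.lab.example.com port 80: No route to host
[student@workstation ~]$ curl -k https://servera.lab.example.com
curl: (7) Failed to connect to servera.lab.example.com port 443: No route to host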
8.1. Verify that the default firewall zone is set to the public zone.
8.2. If the earlier step does not return public as the default zone, then correct it with the
following command:
8.3. Add the https service to the permanent configuration for the public network zone.
Confirm your configuration.
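A possible command sequence (reconstructed; the service list shown is illustrative):

[root@servera ~]# firewall-cmd --permanent --zone=public --add-service=https
success
[root@servera ~]# firewall-cmd --reload
success
[root@servera ~]# firewall-cmd --list-services --zone=public
cockpit dhcpv6-client https ssh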
9. From workstation, open Firefox and log in to the web console that is running on
servera to verify that the https service is added to the public firewall zone.
9.3. Click Turn on administrative access and enter the student password again.
9.5. Click Edit rules and zones in the Firewall section of the Networking page.
9.6. Verify that the https service is listed in the Service column.
10. Return to a terminal on workstation, and verify your work by attempting to access the
servera web server.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from earlier
exercises do not impact upcoming exercises.
Objectives
Verify that network ports have the correct SELinux type for services to bind to them.
When a targeted process attempts to open a port for listening, SELinux verifies that the policy
includes entries that enable the binding of the process and the context. SELinux can then block a
rogue service from taking over ports that other legitimate network services use.
Typically, the targeted policy has already labeled all expected ports with the correct type. For
example, because port 8008/TCP is often used for web applications, that port is already labeled
with http_port_t, which is the default port type that a web server uses. Individual ports can be
labeled with only one port context.
Use the semanage command to list the current port label assignments.
Use the grep command to filter the SELinux port label by using the service name.
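For example, to show the port labels for FTP-related services (output may vary by release):

[root@host ~]# semanage port -l | grep ftp
ftp_data_port_t tcp 20
ftp_port_t tcp 21, 989, 990
ftp_port_t udp 989, 990
tftp_port_t udp 69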
A port label can appear in the list many times for each supported networking protocol.
Use the grep command to filter the SELinux port label by using the port number.
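For example, to find which type owns port 80 (the -w option matches the whole number; output abbreviated, and other matching lines might also appear):

[root@host ~]# semanage port -l | grep -w 80
http_port_t tcp 80, 81, 443, 488, 8008, 8009, 8443, 9000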
Important
Almost all of the services that are included in the RHEL distribution provide an
SELinux policy module, which includes that service's default port contexts. You
cannot change default port labels by using the semanage command. Instead,
you must modify and reload the targeted service's policy module. Writing and
generating policy modules is not discussed in this course.
You can label a new port with an existing port context label (type). The semanage port
command's -a option adds a new port label; the -t option denotes the type; and the -p option
denotes the protocol.
In the following example, enable the gopher service to listen on the 71/TCP port:
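A reconstruction of that command, based on the options just described (verify the type name on your system):

[root@host ~]# semanage port -a -t gopher_port_t -p tcp 71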
To view local changes to the default policy, use the semanage port command's -C option:
[root@server ~]# semanage port -l -C
gopher_port_t tcp 71
Service-specific SELinux man pages are named by using the service name plus _selinux. These
man pages include service-specific information on SELinux types, Booleans, and port types, and
are not installed by default. To view a list of all of the available SELinux man pages, install the
package and then run a man -k keyword search for the _selinux string.
To delete a port label, use the semanage command with the -d option. In the following
example, remove the binding of port 71/TCP from the gopher_port_t type:
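A reconstruction of that command:

[root@server ~]# semanage port -d -t gopher_port_t -p tcp 71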
To change a port binding when requirements change, use the -m option. This option is more
efficient than deleting the earlier binding and adding a new one.
For example, to modify port 71/TCP from gopher_port_t to http_port_t, use the following
command:
[root@server ~]# semanage port -m -t http_port_t -p tcp 71
[root@server ~]# semanage port -l -C
http_port_t tcp 71
[root@server ~]# semanage port -l | grep http
http_cache_port_t tcp 8080, 8118, 8123, 10001-10010
http_cache_port_t udp 3130
http_port_t tcp 71, 80, 81, 443, 488, 8008, 8009, 8443, 9000
pegasus_http_port_t tcp 5988
pegasus_https_port_t tcp 5989
References
semanage(8), semanage-port(8), and *_selinux(8) man pages
Guided Exercise
Outcomes
• Configure a web server that is running on servera to successfully serve content that uses
a nonstandard port.
This command determines whether the servera machine is reachable on the network,
installs the httpd service, and configures the firewall on servera to allow HTTP
connections.
Instructions
Your organization is deploying a new custom web application. The web application is running on a
nonstandard port, in this case, 82/TCP.
A junior administrator already configured the application on your servera host. However, the web
server content is not accessible.
1. Log in to servera as the student user and switch to the root user.
2. Try to fix the web content problem by restarting the httpd service.
2.2. View the status of the httpd service. Note the permission denied error.
2.3. Verify whether SELinux is blocking httpd from binding to the 82/TCP port.
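One quick check is whether the 82/TCP port appears under the http_port_t type; by default, it does not (output abbreviated):

[root@servera ~]# semanage port -l | grep '^http_port_t'
http_port_t tcp 80, 81, 443, 488, 8008, 8009, 8443, 9000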
3. Configure SELinux to allow the httpd service to bind to the 82/TCP port, and then restart
the httpd.service service.
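A possible command sequence (reconstructed from the step):

[root@servera ~]# semanage port -a -t http_port_t -p tcp 82
[root@servera ~]# systemctl restart httpd.service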
4. Verify that you can now access the web server that runs on the 82/TCP port.
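For example (the content shown depends on the deployed application):

[root@servera ~]# curl http://localhost:82
...output omitted...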
5. In a different terminal window, verify whether you can access the new web service from
workstation.
That error means that you still cannot connect to the web service from workstation.
6.1. Open the 82/TCP port in the permanent configuration, for the default zone on the
firewall, on servera.
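A possible form of these commands (reconstructed):

[root@servera ~]# firewall-cmd --permanent --add-port=82/tcp
success
[root@servera ~]# firewall-cmd --reload
success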
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Lab
Outcomes
• Configure firewall and SELinux settings on a web server host.
This command prepares your environment and ensures that all required resources are
available.
Instructions
Your company decided to run a new web application. This application listens on the 80/TCP and
1001/TCP ports. All changes that you make must persist across a reboot.
Important
The Red Hat Online Learning environment needs the 5900/TCP port to remain
available to use the graphical interface. This port is also known as the vnc-server
service. If you accidentally lock yourself out from the serverb machine,
then you can either try to recover by using the ssh command to your serverb
machine from your workstation machine, or reset your serverb machine. If you
elect to reset your serverb machine, then you must run the setup scripts for this
lab again. The configuration on your machines already includes a custom zone called
ROL that opens these ports.
1. From the workstation machine, test access to the default web server at http://
serverb.lab.example.com and to the http://serverb.lab.example.com:1001
virtual host.
2. Log in to the serverb machine to determine what is preventing access to the web servers.
3. Configure SELinux to allow the httpd service to listen on the 1001/TCP port.
4. From workstation, test access again to the default web server at http://
serverb.lab.example.com and to the http://serverb.lab.example.com:1001
virtual host.
5. Log in to the serverb machine to determine whether the correct ports are assigned to the
firewall.
6. Add the 1001/TCP port to the permanent configuration for the public network zone.
Confirm your configuration.
7. From workstation, confirm that the default web server at http://
serverb.lab.example.com returns SERVER B, and that the virtual host at http://
serverb.lab.example.com:1001 returns VHOST 1.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Solution
Outcomes
• Configure firewall and SELinux settings on a web server host.
This command prepares your environment and ensures that all required resources are
available.
Instructions
Your company decided to run a new web application. This application listens on the 80/TCP and
1001/TCP ports. All changes that you make must persist across a reboot.
Important
The Red Hat Online Learning environment needs the 5900/TCP port to remain
available to use the graphical interface. This port is also known as the
vnc-server service. If you accidentally lock yourself out from the serverb machine,
then you can either try to recover by using the ssh command to your serverb
machine from your workstation machine, or reset your serverb machine. If you
elect to reset your serverb machine, then you must run the setup scripts for this
lab again. The configuration on your machines already includes a custom zone called
ROL that opens these ports.
1. From the workstation machine, test access to the default web server at http://
serverb.lab.example.com and to the http://serverb.lab.example.com:1001
virtual host.
2. Log in to the serverb machine to determine what is preventing access to the web servers.
2.3. Enable and start the httpd service. The httpd service fails to start.
2.5. Check whether SELinux is blocking the httpd service from binding to the 1001/TCP
port.
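A sketch of how such a check might be done on serverb, assuming the setroubleshoot-server and audit packages are installed; the exact output depends on the system, so it is omitted here:

```
[root@serverb ~]# ausearch -m AVC -ts recent
[root@serverb ~]# sealert -a /var/log/audit/audit.log
```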
3. Configure SELinux to allow the httpd service to listen on the 1001/TCP port.
3.1. Use the semanage command to find the correct port type.
3.3. Confirm that the 1001/TCP port is bound to the http_port_t port type.
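A hedged sketch of the commands for these substeps; they must run as root on serverb, and the output is omitted:

```
[root@serverb ~]# semanage port -l | grep '^http_port_t'
[root@serverb ~]# semanage port -a -t http_port_t -p tcp 1001
[root@serverb ~]# semanage port -l | grep '^http_port_t'
```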
4. From workstation, test access again to the default web server at http://
serverb.lab.example.com and to the http://serverb.lab.example.com:1001
virtual host.
4.1. Test access to the http://serverb.lab.example.com web server. The web server
should return SERVER B.
5. Log in to the serverb machine to determine whether the correct ports are assigned to the
firewall.
5.2. Verify that the default firewall zone is set to the public zone.
5.3. If the previous step does not return public as the default zone, then correct it with the
following command:
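The command in question is presumably the following sketch, run as root on serverb:

```
[root@serverb ~]# firewall-cmd --set-default-zone=public
```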
5.4. Determine the open ports that are listed in the public network zone.
6. Add the 1001/TCP port to the permanent configuration for the public network zone.
Confirm your configuration.
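A hedged sketch of this step, run as root on serverb:

```
[root@serverb ~]# firewall-cmd --permanent --zone=public --add-port=1001/tcp
[root@serverb ~]# firewall-cmd --reload
[root@serverb ~]# firewall-cmd --zone=public --list-ports
```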
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Summary
• The netfilter framework enables kernel modules to inspect every packet that traverses the
system, including all incoming, outgoing, or forwarded network packets.
• The firewalld service simplifies management by classifying all network traffic into zones.
Each zone has its own list of ports and services. The public zone is set as the default zone.
• The firewalld service ships with predefined services. You can list these services by using the
firewall-cmd --get-services command.
• SELinux policy controls network traffic by labeling the network ports. For example, the
ssh_port_t label is associated with the 22/TCP port. When a process wants to listen on a
port, SELinux verifies whether the process's label is allowed to bind to the port's label.
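For example, both mechanisms above can be inspected with the following commands, run as root; the output is omitted here:

```
[root@host ~]# firewall-cmd --get-services
[root@host ~]# semanage port -l | grep ssh_port_t
```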
Chapter 12
Chapter 12 | Install Red Hat Enterprise Linux
Objectives
Install Red Hat Enterprise Linux on a server.
Installation Media
Red Hat provides different forms of installation media that you can download from the Customer
Portal website by using your active subscription.
• A binary image file in ISO 9660 format that contains the Anaconda Red Hat Enterprise Linux
installation program, and the BaseOS and AppStream package repositories. These repositories
contain the needed packages to complete the installation without additional repositories.
• A smaller "boot ISO" image file that contains Anaconda and requires a configured network to
access package repositories that are made available by using HTTP, FTP, or NFS.
• A QCOW2 image contains a prebuilt system disk that is ready to deploy as a virtual machine in
cloud or enterprise virtual environments. Red Hat uses QCOW2 as the standard image format
for KVM-based virtualization.
• Source code (human-readable programming language instructions) for Red Hat Enterprise
Linux. The source DVDs have no documentation. This image helps to compile or develop your
software according to the Red Hat Enterprise Linux version.
After downloading, create bootable installation media according to the instructions in the
reference section.
Use the composer-cli command or the Red Hat web console interface to access Image Builder.
• The manual installation interacts with the user to query how Anaconda installs and configures
the system.
• The automated installation uses a Kickstart file to direct Anaconda how to install the system.
At the WELCOME TO RED HAT ENTERPRISE LINUX 9 screen, select the language, and click
Continue. Individual users can choose a preferred language after installation.
Anaconda presents the INSTALLATION SUMMARY window, the central interface to customize
parameters before beginning the installation.
From this window, configure the installation parameters by selecting the icons in any order. Select
an item to view or to edit. In any item, click Done to return to this central screen.
Anaconda marks mandatory items with a triangle warning symbol and message. The orange status
bar at the bottom of the screen reminds you to complete the required information before the
installation begins.
• Time & Date: Select the system's location city by clicking the interactive map or selecting it
from the lists. Specify the local time zone even when using Network Time Protocol (NTP).
• Connect to Red Hat: Register the system with your Red Hat account and select the system
purpose. The system purpose feature enables the registration process to automatically attach
the most appropriate subscription to the system. You must first connect to the network by using
the Network & Host Name icon to register the system.
• Installation Source: Provide the source package location that Anaconda requires for installation.
The installation source field already refers to the DVD when using the binary DVD.
• Software Selection: Select the base environment to install, and add any add-ons. The Minimal
Install environment installs only the essential packages to run Red Hat Enterprise Linux.
• Installation Destination: Select and partition the disks for Red Hat Enterprise Linux to install
to. To complete this task, the administrator must know partitioning schemes and file-system
selection criteria. The default radio button for automatic partitioning allocates the selected
storage devices by using all available space.
• KDUMP: The kdump kernel crash dump feature collects information about the state of the
system memory when the kernel crashes. Red Hat engineers analyze a kdump file to identify the
cause of a crash. Use this Anaconda item to enable or to disable kdump.
• Network & Host Name: Detected network connections are listed on the left. Select a
connection to display its details. By default, Anaconda activates the network automatically. Click
Configure for the selected network connection.
• Root Password: The installation program prompts to set a root password. The final stage of the
installation process continues only after you define a root password.
• User Creation: Create an optional non-root account. Creating a local, general-use account is a
recommended practice. You can also create accounts after the installation is complete.
Note
When setting the root user password, Red Hat Enterprise Linux 9 enables an
option to lock root user access to the system. Red Hat Enterprise Linux 9 also
provides an option to enable password-based SSH access for the root user.
After you complete the installation configuration, and resolve all warnings, click Begin Installation.
Clicking Quit aborts the installation without applying any changes to the system.
When the installation finishes, click Reboot. Anaconda displays the Initial Setup screen when
installing a graphical desktop. Accept the license information and optionally register the system
with the subscription manager. You might skip system registration until later.
The tmux terminal provides a shell prompt in the second window in the first virtual console.
You can use the terminal to enter commands to inspect and troubleshoot the system while the
installation continues. The other windows provide diagnostic messages, logs, and additional
information.
The following table lists the keystroke combinations to access the virtual consoles and the tmux
terminal windows. In the tmux terminal, the keyboard shortcuts are performed in two actions:
press and release Ctrl+B, and then press the number key of the window to access. In the tmux
terminal, you can also use Alt+Tab to rotate the current focus between the windows.
Ctrl+B 1: In the tmux terminal, access the main information page for the installation
process.
Ctrl+B 2: In the tmux terminal, provide a root shell. Anaconda stores the installation
log files in the /tmp directory.
Ctrl+B 4: In the tmux terminal, display the contents of the /tmp/storage.log file.
Ctrl+B 5: In the tmux terminal, display the contents of the /tmp/program.log file.
Note
For compatibility with earlier Red Hat Enterprise Linux versions, the virtual consoles
from Ctrl+Alt+F2 through Ctrl+Alt+F5 also present root shells during
installation.
References
For further information, refer to Understanding the Various RHEL .iso Files at
https://access.redhat.com/solutions/104063
For further information, refer to Creating a Bootable Installation Medium for RHEL at
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/
html-single/performing_a_standard_rhel_installation/index#assembly_creating-a-
bootable-installation-medium_installing-RHEL
Guided Exercise
Outcomes
• Manually install Red Hat Enterprise Linux 9.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. Access the servera console and reboot the system into the installation media.
1.1. Locate the servera console icon in your classroom environment. Open the console.
1.2. To reboot, send Ctrl+Alt+Del to your system by using the relevant keyboard,
virtual, or menu entry.
1.3. When the boot loader menu appears, select the Install Red Hat Enterprise Linux 9
menu entry.
3.1. Enter servera.lab.example.com in the Host Name field and then click Apply.
3.2. Click Configure and then click the IPv4 Settings tab.
Address: 172.25.250.10
Netmask: 24
Gateway: 172.25.250.254
Note
Because Red Hat Enterprise Linux is already installed on the
servera.lab.example.com machine, the values for each of the required fields
exist. For a clean server, you must complete this information.
3.4. Click Save to save the network configuration, and then click Done.
Note
The /dev/vda disk already has partitions and file systems from the previous
installation. With this selection, you can wipe the disk for the new installation.
6. Click Software Selection, select Minimal Install from the Base Environment list, and then
click Done.
7.1. Click Root Password and enter redhat in the Root Password field.
7.3. Click Done twice because the password fails the dictionary check.
8.2. Enter student in the Full Name field. The User name field automatically fills
student as the username.
8.3. Select Make this user administrator to enable the student user to use the sudo
command to run commands as the root user.
8.6. Click the Done button twice because the password fails the dictionary check.
11. When the system displays the login prompt, log in as the student user.
12. After you validate the installation, reset the servera machine from the web page of the
classroom environment.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Objectives
Explain Kickstart concepts and architecture, create a Kickstart file with the Kickstart
Generator website, modify an existing Kickstart file with a text editor and check its syntax with
ksvalidator, publish a Kickstart file to the installer, and perform a network Kickstart installation.
Introduction to Kickstart
The Kickstart feature of Red Hat Enterprise Linux automates system installations. You can use
Kickstart text files to configure disk partitioning, network interfaces, package selection, and
customize the installation. The Anaconda installer uses Kickstart files for a complete installation
without user interaction. The Kickstart feature is similar to the unattended installation
answer file method for Microsoft Windows.
Kickstart files begin with a list of commands that define how to install the target machine. The
installer ignores comment lines, which start with the number sign (#) character. Additional sections
begin with a directive, which starts with the percent sign (%) character, and end on a line with
the %end directive.
The %packages section specifies which software to include on installation. Specify individual
packages by name, without versions. The at sign (@) character denotes package groups (either
by name or ID), and the @^ characters denote environment groups (groups of package groups).
Lastly, use the @module:stream/profile syntax to denote module streams.
Groups have mandatory, default, and optional components. Normally, Kickstart installs mandatory
and default components. To exclude a package or a package group from the installation, precede
it with a hyphen (-) character. Excluded packages or package groups might still install if they are
mandatory dependencies of other requested packages.
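A hypothetical %packages section that exercises the syntax above; the environment group, module, stream, and profile names are illustrative, not taken from this course:

```
%packages
@core                     # package group
@^minimal-environment     # environment group
httpd                     # individual package, no version
@nodejs:18/common         # module:stream/profile (hypothetical)
-plymouth                 # exclude a package
%end
```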
A Kickstart configuration file typically includes one or more %pre and %post sections, which
contain scripts that further configure the system. The %pre scripts execute before any disk
partitioning is done. Typically, you use %pre scripts to initialize a storage or network device that
the remainder of the installation requires. The %post scripts execute after the initial installation
is complete. Scripts within the %pre and %post sections can use any available interpreter on the
system, including Bash or Python. Avoid the use of a %pre section, because any errors that occur
within it might be difficult to diagnose.
Lastly, you can specify as many sections as you need, in any order. For example, you can have two
%post sections, and they are interpreted in order of appearance.
Note
The RHEL Image Builder is an alternative installation method to Kickstart files.
Rather than a text file that provides installation instructions, Image Builder creates
an image with all the required system changes. The RHEL Image Builder can create
images for public clouds such as Amazon Web Services and Microsoft Azure, or for
private clouds such as OpenStack or VMware. See this section's references for more
information about the RHEL Image Builder.
Installation Commands
The following Kickstart commands configure the installation source and method:
url --url="http://classroom.example.com/content/rhel9.0/x86_64/dvd/"
• repo: Specifies where to find additional packages for installation. This option must point to a
valid DNF repository.
• vnc: Enables the VNC viewer so you can access the graphical installation remotely over VNC.
vnc --password=redhat
• part: Specifies the size, format, and name of a partition. Required unless the autopart or
mount commands are present.
• autopart: Automatically creates a root partition, a swap partition, and an appropriate boot
partition for the architecture. On large enough drives (50 GB+), this command also creates a
/home partition.
• ignoredisk: Prevents Anaconda from modifying disks, and is useful alongside the autopart
command.
ignoredisk --drives=sdc
Network Commands
The following Kickstart commands configure networking-related features:
• network: Configures network information for the target system. Activates network devices in
the installer environment.
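For example, a static configuration might look like the following sketch; the device name and addresses are placeholders:

```
network --device=enp1s0 --bootproto=static --ip=172.25.250.10 \
  --netmask=255.255.255.0 --gateway=172.25.250.254 --nameserver=172.25.250.254
```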
lang en_US
keyboard --vckeymap=us
• timezone: Defines the time zone and whether the hardware clock uses UTC. Required.
• timesource: Enables or disables NTP. If you enable NTP, then you must specify NTP servers or
pools.
selinux --enforcing
• services: Modifies the default set of services to run under the default systemd target.
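Hedged examples of the timezone, timesource, and services commands; the time zone and server name are placeholders:

```
timezone America/New_York --utc
timesource --ntp-server=classroom.example.com
services --disabled=kdump,rhsmcertd --enabled=sshd,rngd,chronyd
```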
Miscellaneous Commands
The following Kickstart commands configure logging and the host power state on completion:
• logging: This command defines how Anaconda handles logging during the installation.
logging --host=loghost.example.com
• firstboot: If enabled, then the Setup Agent starts the first time that the system boots. This
command requires the initial-setup package.
firstboot --disabled
• reboot, poweroff, halt: Specify the final action when the installation completes. The default
setting is the halt option.
Note
Most Kickstart commands have multiple available options. Review the Kickstart
Commands and Options guide in this section's references for more information.
#version=RHEL9
The second part of a Kickstart file contains the %packages section, with details of which
packages and package groups to install, and which packages not to install.
%packages
@core
chrony
cloud-init
dracut-config-generic
dracut-norescue
firewalld
grub2
kernel
rsync
tar
-plymouth
%end
The last part of the Kickstart file contains a %post installation script.
%post
echo "This system was deployed using Kickstart on $(date)" > /etc/motd
%end
You can also specify a Python script with the --interpreter option.
%post --interpreter="/usr/libexec/platform-python"
%end
Note
In a Kickstart file, missing required values cause the installer to interactively prompt
for an answer or to abort the installation entirely.
2. Publish the Kickstart file so that the Anaconda installer can access it.
Creating a Kickstart file from scratch is complex, so first try to edit an existing file. Every
installation creates a /root/anaconda-ks.cfg file that contains the Kickstart directives that
are used in the installation. This file is a good starting point to create a Kickstart file.
The ksvalidator utility checks for syntax errors in a Kickstart file. It ensures that keywords
and options are correctly used, but it does not validate URL paths, individual packages, groups,
or any part of %post or %pre scripts. For example, if the firewall --disabled directive is
misspelled, then the ksvalidator command reports an error.
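For example, a sketch of running the utility against a file; the path is a placeholder, and ksvalidator is provided by the pykickstart package:

```
[user@host ~]$ ksvalidator /tmp/kickstart.cfg
```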
The ksverdiff utility displays syntax differences between different operating system versions.
For example, the following command displays the Kickstart syntax changes between RHEL 8 and
RHEL 9:
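The command referenced above is presumably the following sketch; ksverdiff is also provided by the pykickstart package:

```
[user@host ~]$ ksverdiff -f RHEL8 -t RHEL9
```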
• A network server that is available at installation time by using FTP, HTTP, or NFS.
• An available USB disk or CD-ROM.
• A local hard disk on the system.
The installer must access the Kickstart file to begin an automated installation. Usually, the
file is made available via an FTP, web, or NFS server. Network servers help with Kickstart file
maintenance, because changes can be made once, and then immediately be used for future
installations.
Providing Kickstart files on USB or CD-ROM is also convenient. The Kickstart file can be
embedded in the boot media that starts the installation. However, when the Kickstart file is
changed, you must generate new installation media.
Providing the Kickstart file on a local disk enables a quick rebuild of a system.
• inst.ks=http://server/dir/file
• inst.ks=ftp://server/dir/file
• inst.ks=nfs:server:/dir/file
• inst.ks=hd:device:/dir/file
• inst.ks=cdrom:device
For virtual machine installations by using the Virtual Machine Manager or virt-manager, the
Kickstart URL can be specified in a box under URL Options. When installing physical machines,
boot by using installation media, and press the Tab key to interrupt the boot process. Add an
inst.ks=LOCATION parameter to the installation kernel.
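For example, the boot line might be extended as follows; the server name and path are placeholders:

```
vmlinuz initrd=initrd.img inst.ks=http://server.example.com/dir/kickstart.cfg
```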
References
Kickstart Installation Basics chapter in Performing an Advanced RHEL Installation at
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/
html-single/performing_an_advanced_rhel_installation/
index#performing_an_automated_installation_using_kickstart
Kickstart Commands and Options Reference in Performing an Advanced RHEL
Installation at
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-
single/performing_an_advanced_rhel_installation/index#kickstart-commands-and-
options-reference_installing-rhel-as-an-experienced-user
Boot Options chapter in Performing an Advanced RHEL Installation at
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/
html-single/performing_an_advanced_rhel_installation/kickstart-installation-
basics_installing-rhel-as-an-experienced-user#kickstart-and-advanced-boot-
options_installing-rhel-as-an-experienced-user
Composing a Customized RHEL System Image at
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-
single/composing_a_customized_rhel_system_image/index
Guided Exercise
Outcomes
• Create a kickstart file.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. Log in to servera as the student user.
#reboot
3.2. Modify the repo commands to specify the classroom server's BaseOS and
AppStream repositories:
3.3. Modify the url command to specify the classroom server's HTTP installation
source:
url --url="http://classroom.example.com/content/rhel9.0/x86_64/dvd/"
3.5. Modify the rootpw command to set the root user's password to redhat.
3.6. Modify the authselect command to set the sssd service as the identity and
authentication source.
3.8. Comment out the part commands and add the autopart command:
3.9. Delete all of the content between the %post section and its %end directive. Add the
echo "Kickstarted on $(date)" >> /etc/issue line.
%post --erroronfail
echo "Kickstarted on $(date)" >> /etc/issue
%end
3.10. Modify the %packages section to include only the following content:
%packages
@core
chrony
dracut-config-generic
dracut-norescue
firewalld
grub2
kernel
rsync
tar
httpd
-plymouth
%end
4. Validate the Kickstart file for syntax errors. If no errors are shown, then the command has
no output.
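A sketch of the validation, assuming the pykickstart package is installed on servera:

```
[student@servera ~]$ ksvalidator ~/kickstart.cfg
```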
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Objectives
Install a virtual machine on your Red Hat Enterprise Linux server with the web console.
Red Hat Enterprise Linux supports KVM (Kernel-based Virtual Machine), a full virtualization
solution that is built into the standard Linux kernel. KVM can run multiple Windows and Linux guest
operating systems.
In Red Hat Enterprise Linux, you can manage KVM from the command line with the virsh
command, or graphically with the web console's Virtual Machines tool.
KVM virtual machine technology is available across all Red Hat products, from stand-alone
physical instances of Red Hat Enterprise Linux to the Red Hat OpenStack Platform:
• Physical hardware systems run Red Hat Enterprise Linux to provide KVM virtualization. Red Hat
Enterprise Linux is typically a thick host, a system that supports VMs and also provides other
local and network services, applications, and management functions.
• Red Hat Virtualization (RHV) provides a centralized web interface that administrators can use to
manage an entire virtual infrastructure. It includes advanced features such as KVM migration,
redundancy, and high availability. A Red Hat Virtualization Hypervisor is a tuned version of
Red Hat Enterprise Linux solely for provisioning and supporting VMs.
• Red Hat OpenStack Platform (RHOSP) provides the foundation to create, deploy, and scale a
public or a private cloud.
• Red Hat OpenShift Virtualization includes RHV components to enable running virtual machines
alongside containers.
On systems where SELinux is enabled, sVirt isolates guests and the hypervisor. Each virtual
machine process is labeled and is automatically allocated a unique level, and the associated virtual
disk files are given matching labels.
In Red Hat Enterprise Linux 8 and later versions, UEFI and Secure Boot support for virtual
machines is provided by Open Virtual Machine Firmware (OVMF), in the edk2-ovmf package.
Review the following documents to determine whether a guest operating system is supported.
• RHV, RHOSP, and OpenShift Virtualization: Certified Guest Operating Systems in Red Hat
OpenStack Platform, Red Hat Virtualization, OpenShift Virtualization and Red Hat Enterprise
Linux with KVM [https://access.redhat.com/articles/973163]
The system must pass all the validation items to operate as a KVM host.
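The validation referred to above is typically performed with the virt-host-validate command, as in this sketch; it must run as root on the candidate host, and the pass/fail listing is omitted here:

```
[root@host ~]# virt-host-validate
```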
[root@host ~]# virt-install --name demo --memory 4096 --vcpus 2 --disk size=40 \
--os-type linux --cdrom /root/rhel.iso
...output omitted...
Install the cockpit-machines package to add the Virtual Machines menu to the web console.
If the web console is not already running, then start and enable it.
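A hedged sketch of those two steps, run as root:

```
[root@host ~]# dnf install cockpit-machines
[root@host ~]# systemctl enable --now cockpit.socket
```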
To create a virtual machine with the web console, access the Virtual Machines menu. From there,
click Create VM and enter the VM configuration in the Create New Virtual Machine window. If you
are using the web console for the first time after installing the Virtual Machines plug-in, then you
must reboot your system to start the libvirt virtualization.
• Name sets a domain name for the virtual machine configuration. This name is unrelated to the
hostname that you give the virtual machine during installation.
• Installation type is the method for accessing the installation media. Choices include the local file
system, or an HTTPS, FTP, or NFS URL, or PXE.
• Operating system defines the virtual machine's operating system. The virtualization layer
presents hardware emulation to be compatible with the chosen operating system.
• Size is the disk size when creating a new volume. Associate additional disks with the VM after
installation.
• Immediately start VM indicates whether the VM immediately starts after you click Create.
Click Create to create the VM, and click Install to start the operating system installation. The web
console displays the VM console from which you can install the system.
References
For more information, refer to the Configuring and Managing Virtualization guide at
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-
single/configuring_and_managing_virtualization/index
What Is Virtualization?
https://www.redhat.com/en/topics/virtualization/what-is-virtualization
Certified Guest Operating Systems in Red Hat OpenStack Platform, Red Hat
Virtualization, OpenShift Virtualization and Red Hat Enterprise Linux with KVM
https://access.redhat.com/articles/973163
Quiz
1. Which package enables you to select the OVMF firmware for a virtual machine?
a. open-firmware
b. edk2-ovmf
c. core-ovmf
d. ovmf
e. virt-open-firmware
2. Which two components are required to configure your system as a virtualization host,
and to manage virtual machines with the web console? (Choose two.)
a. The Virtualization Host package group
b. The openstack package group
c. The cockpit-machines package
d. The Virtualization Platform package group
e. The kvm DNF module
f. The cockpit-virtualization package
4. Which two tools can you use to start and stop your virtual machines on a Red Hat
Enterprise Linux system? (Choose two.)
a. vmctl
b. libvirtd
c. virsh
d. neutron
e. the web console
Solution
1. Which package enables you to select the OVMF firmware for a virtual machine?
a. open-firmware
b. edk2-ovmf (correct)
c. core-ovmf
d. ovmf
e. virt-open-firmware
2. Which two components are required to configure your system as a virtualization host,
and to manage virtual machines with the web console? (Choose two.)
a. The Virtualization Host package group (correct)
b. The openstack package group
c. The cockpit-machines package (correct)
d. The Virtualization Platform package group
e. The kvm DNF module
f. The cockpit-virtualization package
4. Which two tools can you use to start and stop your virtual machines on a Red Hat
Enterprise Linux system? (Choose two.)
a. vmctl
b. libvirtd
c. virsh (correct)
d. neutron
e. the web console (correct)
Lab
Outcomes
• Create a kickstart file.
This command prepares your environment and ensures that all required resources are
available.
Instructions
Prepare a kickstart file on the serverb machine as specified, and provide it at the http://
serverb.lab.example.com/ks-config/kickstart.cfg address.
1. On the serverb machine, copy the /root/anaconda-ks.cfg kickstart file to the /home/
student/kickstart.cfg kickstart file to be editable for the student user.
2. Update the /home/student/kickstart.cfg kickstart file.
• Modify the repo command for the BaseOS and AppStream repositories. Modify the repo
command for the BaseOS repository to use the http://classroom.example.com/
content/rhel9.0/x86_64/dvd/BaseOS/ address. Modify the repo command for
the AppStream repository to use the http://classroom.example.com/content/
rhel9.0/x86_64/dvd/AppStream/ address.
• Change the rootpw command to set redhat as the root user password.
• Modify the authselect command to set the sssd service as the identity and
authentication source.
• Modify the services command to disable the kdump and rhsmcertd services and to
enable the sshd, rngd, and chronyd services.
• Simplify the %post section so that it runs only a script to append the text Kickstarted
on DATE at the end of the /etc/issue file. Use the date command to insert the date
with no additional options.
• Simplify the %packages section as follows: include the @core, chrony, dracut-config-generic, dracut-norescue, firewalld, grub2, kernel, rsync, tar, and httpd packages. Ensure that the plymouth package does not install.
3. Validate the syntax of the kickstart.cfg kickstart file.
4. Provide the /home/student/kickstart.cfg file at the http://
serverb.lab.example.com/ks-config/kickstart.cfg address.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Solution
Outcomes
• Create a kickstart file.
This command prepares your environment and ensures that all required resources are
available.
Instructions
Prepare a kickstart file on the serverb machine as specified, and provide it at the http://
serverb.lab.example.com/ks-config/kickstart.cfg address.
1. On the serverb machine, copy the /root/anaconda-ks.cfg kickstart file to the /home/
student/kickstart.cfg kickstart file to be editable for the student user.
1.2. On the serverb machine, copy the /root/anaconda-ks.cfg file to the /home/
student/kickstart.cfg file.
• Modify the repo command for the BaseOS and AppStream repositories. Modify the repo
command for the BaseOS repository to use the http://classroom.example.com/
content/rhel9.0/x86_64/dvd/BaseOS/ address. Modify the repo command for
the AppStream repository to use the http://classroom.example.com/content/
rhel9.0/x86_64/dvd/AppStream/ address.
• Change the rootpw command to set redhat as the root user password.
• Modify the authselect command to set the sssd service as the identity and
authentication source.
• Modify the services command to disable the kdump and rhsmcertd services and to
enable the sshd, rngd, and chronyd services.
• Simplify the %post section so that it runs only a script to append the text Kickstarted
on DATE at the end of the /etc/issue file. Use the date command to insert the date
with no additional options.
• Simplify the %packages section as follows: include the @core, chrony, dracut-config-generic, dracut-norescue, firewalld, grub2, kernel, rsync, tar, and httpd packages. Ensure that the plymouth package does not install.
#reboot
2.2. Modify the repo command for the BaseOS and AppStream repositories.
Modify the repo command for the BaseOS repository to use the http://
classroom.example.com/content/rhel9.0/x86_64/dvd/BaseOS/ address.
Modify the repo command for the AppStream repository to use the http://
classroom.example.com/content/rhel9.0/x86_64/dvd/AppStream/
address.
2.3. Change the url command to specify the HTTP installation source media that the
classroom machine provides.
url --url="http://classroom.example.com/content/rhel9.0/x86_64/dvd/"
2.5. Modify the rootpw command to set redhat as the password for the root user.
2.6. Modify the authselect command to set the sssd service as the identity and
authentication source.
2.8. Comment out the part commands and add the autopart command:
2.9. Delete all content between the %post and %end sections. Add the echo
"Kickstarted on $(date)" >> /etc/issue line.
%post --erroronfail
echo "Kickstarted on $(date)" >> /etc/issue
%end
%packages
@core
chrony
dracut-config-generic
dracut-norescue
firewalld
grub2
kernel
rsync
tar
httpd
-plymouth
%end
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Summary
• The RHEL 9 binary DVD includes Anaconda and all required repositories for installation.
• The RHEL 9 boot ISO includes the Anaconda installer, and can access repositories over the
network during installation.
• You can create Kickstart files by using the Kickstart Generator website or by copying and editing
/root/anaconda-ks.cfg.
• The Virtualization Host DNF package group provides the packages for a RHEL system to
become a virtualization host.
Chapter 13
Run Containers
Goal Obtain, run, and manage simple lightweight
services as containers on a single Red Hat
Enterprise Linux server.
Chapter 13 | Run Containers
Container Concepts
Objectives
Explain container concepts and the core technologies for building, storing, and running containers.
Container Technology
Software applications typically depend on system libraries, configuration files, or services
that their runtime environment provides. Traditionally, the runtime environment for a software
application is installed in an operating system that runs on a physical host or a virtual machine.
Administrators then install application dependencies on top of the operating system.
In Red Hat Enterprise Linux, packaging systems such as RPM help administrators to manage
application dependencies. When you install the httpd package, the RPM system ensures that the
correct libraries and other dependencies for that package are also installed.
The major drawback to traditionally deployed software applications is that these dependencies are
entangled with the runtime environment. An application might require earlier or later versions of
supporting software than the software that is provided with the operating system. Similarly, two
applications on the same system might require different and incompatible versions of the same
software.
One way to resolve these conflicts is to package and deploy the application as a container.
A container is a set of one or more processes that are isolated from the rest of the system.
Software containers provide a way to package applications and to simplify their deployment and
management.
Think of a physical shipping container. A shipping container is a standard way to package and ship
goods. It is labeled, loaded, unloaded, and transported from one location to another as a single
box. The container's contents are isolated from the contents of other containers so that they do
not affect each other. These underlying principles also apply to software containers.
Red Hat Enterprise Linux supports containers by using the following core technologies:
• Control Groups (cgroups) for resource management
• Namespaces for process isolation
• SELinux and Seccomp (Secure Computing mode) for security
Note
For a deeper discussion of container architecture and security, refer to the "Ten
Layers of Container Security" [https://www.redhat.com/en/resources/container-
security-openshift-cloud-devops-whitepaper] white paper.
Containers and virtual machines both isolate their application libraries and runtime resources from the host operating system or hypervisor, and vice versa.
Containers and virtual machines interact differently with hardware and the underlying operating system. Compared to a virtual machine, a container has the following characteristics:
• Runs directly on the host operating system, and it shares resources with all containers on the
system.
• Shares the host's kernel, but it isolates the application processes from the rest of the system.
• Requires far fewer hardware resources than virtual machines, so containers are also quicker to
start.
• Includes all dependencies, such as system and programming dependencies, and configuration
settings.
Note
Some applications might not be suitable to run as a container. For example,
applications that access low-level hardware information might need more direct
hardware access than containers generally provide.
A rootless container is not allowed to use system resources that are usually reserved for privileged
users, such as access to restricted directories, or to publish network services on restricted ports
(below 1024). This feature prevents a possible attacker from gaining root privileges on the
container host.
Although you can run containers directly as root if necessary, this scenario weakens the security
of the system if a bug enables an attacker to compromise the container.
Containers are typically temporary, or ephemeral. You can permanently save in persistent storage
the data that a running container generates, but the containers themselves usually run when
needed, and then they stop and are removed. A new container process is started the next time
that particular container is needed.
You could install a complex software application with multiple services in a single container. For
example, a web server might need to use a database and a messaging system. However, using one
container for multiple services is hard to manage.
A better design runs each component in a separate container: the web server, the database, and the messaging system. This way, updates and maintenance to individual application components do not affect other components or the application stack.
Red Hat Enterprise Linux provides container tools, including podman, skopeo, and buildah. These tools are compatible with the Open Container Initiative (OCI). With these tools, you can
manage any Linux containers that OCI-compatible container engines create, such as Podman or
Docker. These tools are designed to run containers under Red Hat Enterprise Linux on a single-
node container host.
In this chapter, you use the podman and skopeo utilities to run and manage containers and
existing container images.
Note
Using buildah to construct your own container images is beyond the scope of this
course. It is covered in the Red Hat OpenShift I: Containers & Kubernetes (DO180)
Red Hat Training course.
Container images are built according to specifications, such as the Open Container Initiative (OCI)
image format specification. These specifications define the format for container images, as well
as the metadata about the container host operating systems and hardware architectures that the
image supports.
A container registry is a repository for storing and retrieving container images. A developer pushes
or uploads container images to a container registry. You can pull or download container images
from a registry to a local system to run containers.
You might use a public registry that contains third-party images, or you might use a private
registry that your organization controls. The source of your container images matters. As with any
other software package, you must know whether you can trust the code in the container image.
Policies vary between registries about whether and how they provide, evaluate, and test container
images that are submitted to them.
Red Hat distributes certified container images through two main container registries that you can
access with your Red Hat login credentials:
• registry.redhat.io for containers that are based on official Red Hat products
• registry.connect.redhat.com for containers that are based on third-party products
Note
Red Hat provides the Universal Base Image (UBI) image as an initial layer to build
containers. The UBI image is a minimized container image that can be a first layer
for an application build.
You need a Red Hat Developer account to download an image from the Red Hat registries. You
can use the podman login command to authenticate to the registries. If you do not provide a
registry URL to the podman login command, then it authenticates to the default configured
registry.
You can also use the --username and --password-stdin options of the podman login command to specify the user and password for logging in to the registry. The --password-stdin option reads the password from stdin. Red Hat does not recommend using the --password option to provide the password directly, because this option stores the password in the log files.
To verify that you are logged in to a registry, use the podman login command with the --get-login option.
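As a sketch (the registry name is the one this classroom uses; the admin user and the passwd.txt file are hypothetical), logging in and verifying might look like:

```
[user@host ~]$ podman login --username admin --password-stdin registry.lab.example.com < passwd.txt
Login Succeeded!
[user@host ~]$ podman login --get-login registry.lab.example.com
admin
```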
unqualified-search-registries = ["registry.fedoraproject.org",
"registry.access.redhat.com", "registry.centos.org", "quay.io", "docker.io"]
# [[registry]]
# # The "prefix" field is used to choose the relevant [[registry]] TOML table;
# # (only) the TOML table with the longest match for the input image name
# # (taking into account namespace/repo/tag/digest separators) is used.
# #
# # The prefix can also be of the form: *.example.com for wildcard subdomain
# # matching.
# #
# # If the prefix field is missing, it defaults to be the same as the "location"
field.
# prefix = "example.com/foo"
#
# # If true, unencrypted HTTP as well as TLS connections with untrusted
# # certificates are allowed.
# insecure = false
#
# # If true, pulling images with matching names is forbidden.
# blocked = false
#
...output omitted...
Because Red Hat recommends using a non-privileged user to manage containers, you can
create a registries.conf file for container registries in the $HOME/.config/containers
directory. The configuration file in this directory overrides the settings in the /etc/containers/
registries.conf file, and is used when Podman runs in rootless mode.
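A minimal sketch of creating such a per-user file follows; it writes into a scratch directory that stands in for the user's home, so that the sketch does not touch real configuration:

```shell
# Stand-in for the user's home directory, so this sketch does not
# modify real configuration files.
demo_home=$(mktemp -d)

# Rootless Podman reads $HOME/.config/containers/registries.conf,
# which overrides /etc/containers/registries.conf.
mkdir -p "$demo_home/.config/containers"
cat > "$demo_home/.config/containers/registries.conf" <<'EOF'
unqualified-search-registries = ["registry.access.redhat.com"]
EOF

cat "$demo_home/.config/containers/registries.conf"
```

In a real session you would write to "$HOME/.config/containers/registries.conf" directly.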
If you do not specify the fully qualified name of the container image when using podman
commands, then the list of registries in the unqualified-search-registries section of this
file is used to search for the container image.
If you do specify the fully qualified name of the container image from the command line, then
the container utility does not search in this section. The unqualified-search-registries
section can be left blank to ensure that you use the fully qualified name of the container image.
Note
Red Hat recommends always using the fully qualified name of container images.
Configure settings for container registries in the [[registry]] sections of the file. Use a
separate [[registry]] section to configure settings for each container registry.
[[registry]]
location = "registry.lab.example.com"
insecure = true
blocked = false
• If the insecure setting is set to true, then you can use unencrypted HTTP as well as TLS
connections with untrusted certificates to access the registry.
• If the blocked setting is set to true, then images cannot be downloaded from that registry.
Note
This classroom runs a private insecure registry that is based on Red Hat Quay to
provide container images. This registry meets the classroom need; however, you
would not expect to work with insecure registries in real-world scenarios. For more
information about this software, see https://access.redhat.com/products/red-hat-
quay
The following is an example of a container file that uses the UBI image from the
registry.access.redhat.com registry, installs the python3 package, and prints the hello
string to the console.
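The container file itself is not reproduced here; a minimal sketch matching that description (the UBI version tag and the exact instructions are assumptions) could be:

```
FROM registry.access.redhat.com/ubi8/ubi
RUN dnf install -y python3
CMD ["python3", "-c", "print('hello')"]
```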
Note
Creating a container file and its usage instructions are out of scope for this course.
For more information about container files, refer to the DO180 course.
Deploying containers at scale in production requires an environment that can adapt to the
following challenges:
• The platform must ensure the availability of containers that provide essential services.
• The environment must respond to application usage spikes by increasing or decreasing the
number of running containers and by load balancing the traffic.
• The platform must detect the failure of a container or a host and react accordingly.
Red Hat provides a distribution of Kubernetes called Red Hat OpenShift. Red Hat OpenShift is a
set of modular components and services that are built on top of the Kubernetes infrastructure. It
provides additional features, such as remote web-based management, multitenancy, monitoring
and auditing, advanced security features, application lifecycle management, and self-service
instances for developers.
Red Hat OpenShift is beyond the scope of this course. You can learn more about it at https://
www.openshift.com
Note
In the enterprise, individual containers are not generally run from the command line.
Instead, it is preferable to run containers in production with a Kubernetes-based
platform, such as Red Hat OpenShift.
However, you might need to use commands to manage containers and images
manually or at a small scale. This chapter focuses on this use case to improve your
grasp of the core concepts behind containers, how they work, and how they can be
useful.
References
cgroups(7), namespaces(7), seccomp(2) man pages.
For more information, refer to the Starting with Containers chapter in the Red Hat
Enterprise Linux 9 Building, Running, and Managing Containers guide at
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-
single/building_running_and_managing_containers/index
Quiz
Container Concepts
Choose the correct answers to the following questions:
3. Which two statements are true about container images? (Choose two.)
a. Container images package an application with all of its needed runtime dependencies.
b. Container images that work with Docker cannot work with Podman.
c. Container images can run only on a container host with the same installed software
version in the image.
d. Container images serve as blueprints for creating containers.
4. Which three core technologies are used to implement containers in Red Hat Enterprise
Linux? (Choose three.)
a. Hypervisor code for hosting VMs
b. Control Groups (cgroups) for resource management
c. Namespaces for process isolation
d. Full operating system for compatibility with the container's host
e. SELinux and Seccomp for security
Solution
Container Concepts
Choose the correct answers to the following questions:
3. Which two statements are true about container images? (Choose two.)
a. Container images package an application with all of its needed runtime dependencies. (correct)
b. Container images that work with Docker cannot work with Podman.
c. Container images can run only on a container host with the same installed software
version in the image.
d. Container images serve as blueprints for creating containers. (correct)
4. Which three core technologies are used to implement containers in Red Hat Enterprise
Linux? (Choose three.)
a. Hypervisor code for hosting VMs
b. Control Groups (cgroups) for resource management (correct)
c. Namespaces for process isolation (correct)
d. Full operating system for compatibility with the container's host
e. SELinux and Seccomp for security (correct)
Deploy Containers
Objectives
Discuss container management tools for using registries to store and retrieve images, and for
deploying, querying, and accessing containers.
Podman Commands
Command Description
podman cp Copy files or directories between a container and the local file system.
For more information about a subcommand in the man pages, append the subcommand to the podman command with a hyphen between the two. For example, the podman-build man page explains the use of the podman build subcommand.
As a system administrator, you are tasked to run a container that is based on the RHEL 8 UBI
container image called python38 with the python-38 package. You are also tasked to create a
container image from a container file, and to run a container called python36 from that container
image. The container image that is created with the container file must have the python36:1.0
tag. Identify the differences between the two containers. Also, ensure that the installed Python packages in the containers do not conflict with the installed Python version on your local machine.
The container-tools meta-package provides the needed podman and skopeo utilities to
achieve the assigned tasks.
The podman search command searches for images with a matching name by using the specified list of registries in the registries.conf file. By default, Podman searches in all unqualified-search registries.
Note
The unqualified-search-registries directive is a list of registries that Podman uses to search for or pull an image when the image name is not fully qualified, for example ubi9/python-39 rather than registry.redhat.io/ubi9/python-39. You can obtain more information from the containers-registries.conf(5) man page.
Depending on the Docker distribution API that is implemented with the registry, some registries
might not support the search feature.
Use the podman search command to display a list of images on the configured registries that
contain the python-38 package.
You can use the skopeo inspect command to examine different container image formats
from a local directory or a remote registry without downloading the image. This command output
displays a list of the available version tags, exposed ports of the containerized application, and
metadata of the container image. You use the skopeo inspect command to verify that the
image contains the required python-38 package.
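A sketch of that inspection (note the docker:// transport prefix that skopeo requires for remote registries):

```
[user@host ~]$ skopeo inspect docker://registry.access.redhat.com/ubi8/python-38
...output omitted...
```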
Then, you use the podman images command to display the local images.
Normally, a container runs a process, and then exits after that process is complete. The sleep
infinity command prevents the container from exiting, because the process never completes.
You can then test, develop, and debug inside the container.
After examining the container file, you use the podman build command to build the image. The
syntax for the podman build command is as follows:
NAME
Name for the new image.
TAG
Tag for the new image. If the tag is not specified, then the image is automatically tagged as
latest.
DIR
Path to the working directory. The container file must be in the working directory. If the working directory is the current directory, then you designate it with a dot (.). Use the -f option to point to a container file in a different location.
In the following example, you use the -t option of the podman build command to provide the python36 name and the 1.0 tag for the new image. The container file is in the current directory.
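Based on that description, the build command would look like this sketch:

```
[user@host ~]$ podman build -t python36:1.0 .
...output omitted...
```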
The last line of the preceding output shows the container image ID. Most Podman commands
use the first 12 characters of the container image ID to refer to the container image. You can use
this short ID or the name of a container or a container image as arguments for most Podman
commands.
Note
If a version number is not specified in the tag, then the image is created with the
:latest tag. If an image name is not specified, then the image and tag fields show
the <none> string.
You use the podman images command to verify that the image is created with the defined name
and tag.
You then use the podman inspect command to view the low-level information of the container
image and verify that its content matches the requirements for the container.
Note
The podman inspect command output differs between the python-38 image and the python36 image, because you created the python36 image by adding a layer with changes on top of the existing registry.access.redhat.com/ubi8/ubi:latest base image, whereas the python-38 image is itself a base image.
Run Containers
Now that you have the required container images, you can use them to run containers. A container
can be in one of the following states:
Created
A container that is created but is not started.
Running
A container that is running with its processes.
Stopped
A container with its processes stopped.
Paused
A container with its processes paused. Not supported for rootless containers.
Deleted
A container with its processes in a dead state.
The podman ps command lists the running containers on the system. Use the podman ps -a
command to view all containers (that are created, stopped, paused, or running) in the machine.
You use the podman create command to create the container to run later. To create the
container, you use the ID of the localhost/python36 container image. You also use the
--name option to set a name to identify the container. The output of the command is the long ID
of the container.
Note
If you do not set a name for the container with the --name option of the podman create or podman run command, then the podman utility assigns a random name to the container.
You then use the podman ps and podman ps -a commands to verify that the container is
created but is not started. You can see information about the python36 container, such as
the short ID, name, and the status of the container, the command that the container runs when
started, and the image to create the container.
Now that you verified that the container is created correctly, you decide to start the container,
so you run the podman start command. You can use the name or the container ID to start the
container. The output of this command is the name of the container.
You use the -d option of the podman run command to run a container in detached mode, which runs
the container in the background instead of in the foreground of the session. In the example of the
python36 container, you do not need to provide a command for the container to run, because the
sleep infinity command was already provided in the container file that created the image for
that container.
To create the python38 container, you decide to use the podman run command and to refer
to the registry.access.redhat.com/ubi8/python-38 image. You also decide to use the
sleep infinity command to prevent the container from exiting.
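A sketch of that command; the long image ID that follows is the output that it prints:

```
[user@host ~]$ podman run -d --name python38 registry.access.redhat.com/ubi8/python-38 sleep infinity
```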
a60f71a1dc1b997f5ef244aaed232e5de71dd1e8a2565428ccfebde73a2f9462
[user@host ~]$ podman ps
CONTAINER ID  IMAGE                                             COMMAND               CREATED         STATUS             PORTS  NAMES
c54c7ee28158  localhost/python36:1.0                            /bin/bash -c slee...  37 minutes ago  Up 30 minutes ago         python36
a60f71a1dc1b  registry.access.redhat.com/ubi8/python-38:latest  sleep infinity        32 seconds ago  Up 33 seconds ago         python38
Important
If you run a container by using the fully qualified image name, but the image is not
yet stored locally, then the podman run command first pulls the image from the registry and then runs the container.
You first run the ps -ax command on the local machine, and the command returns an expected
result with many processes.
The podman exec command executes a command inside a running container. The command
takes the name or ID of the container as the first argument and the following arguments as
commands to run inside the container. You use the podman exec command to view the running
processes in the python36 container. The output of the ps -ax command looks different, because the container runs a different set of processes from those on the local machine.
You can use the sh -c command to encapsulate the command to execute in the container. In the
following example, the ps -ax > /tmp/process-data.log command is interpreted as the
command to be executed in the container. If you do not encapsulate the command, then your local shell might interpret the greater-than character (>) as a redirection for the podman command itself instead of passing it as an argument to podman exec.
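The quoting behavior is ordinary shell semantics, so you can see the same effect locally without a container (a sketch using plain sh instead of podman exec):

```shell
# Wrapping the whole command in sh -c keeps the redirection inside
# the command that sh runs, rather than in the calling shell:
sh -c 'ps -ax > /tmp/process-data.log'

# The log file now holds the process listing:
wc -l /tmp/process-data.log
```

With podman exec, the same quoting keeps the redirection inside the container rather than on the host.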
You decide to compare the installed Python version on the host system with the installed Python version in the containers.
You create a simple bash script that displays hello world on the terminal in the /tmp directory.
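A sketch of creating that script (the hello.sh file name matches its use with podman cp later in this section):

```shell
# Create the script in /tmp and make it executable.
cat > /tmp/hello.sh <<'EOF'
#!/bin/bash
echo "hello world"
EOF
chmod +x /tmp/hello.sh

# Running it prints: hello world
/tmp/hello.sh
```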
The /tmp/hello.sh file exists on the host machine, but does not exist on the file system inside the containers. If you try to use the podman exec command to execute the script, then it fails with an error, because the /tmp/hello.sh script does not exist in the container.
The podman cp command copies files and directories between host and container file systems.
You can copy the /tmp/hello.sh file to the python38 container with the podman cp
command.
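A sketch of that copy and a follow-up run inside the container:

```
[user@host ~]$ podman cp /tmp/hello.sh python38:/tmp/hello.sh
[user@host ~]$ podman exec python38 bash /tmp/hello.sh
hello world
```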
After the script is copied to the container file system, it can be executed from within the container.
You decide to remove the python38 container and its related image. If you try to remove the
registry.access.redhat.com/ubi8/python-38 image when the python38 container
exists, then the podman rmi command fails with an error.
You must stop the container before you can remove it. To stop a container, use the podman stop
command.
After you stop the container, use the podman rm command to remove the container.
When the container no longer exists, you can remove the registry.access.redhat.com/
ubi8/python-38 image with the podman rmi command.
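A sketch of the full cleanup sequence, reusing the container ID shown earlier:

```
[user@host ~]$ podman stop python38
python38
[user@host ~]$ podman rm python38
a60f71a1dc1b997f5ef244aaed232e5de71dd1e8a2565428ccfebde73a2f9462
[user@host ~]$ podman rmi registry.access.redhat.com/ubi8/python-38
...output omitted...
```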
References
podman(1), podman-build(1), podman-cp(1), podman-exec(1), podman-
images(1), podman-inspect(1), podman-ps(1), podman-pull(1), podman-rm(1),
podman-rmi(1), podman-run(1), podman-search(1), and podman-stop(1) man
pages
For more information, refer to the Starting with Containers chapter in the Building,
Running, and Managing Linux Containers on Red Hat Enterprise Linux 9 guide at
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/
html-single/building_running_and_managing_containers/index#starting-with-
containers_building-running-and-managing-containers
Guided Exercise
Deploy Containers
In this exercise, you use container management tools to build an image, run a container, and
query the running container environment.
Outcomes
• Configure a container image registry and create a container from an existing image.
• Copy a script from a host machine into containers and run the script.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. Log in to the servera machine as the student user.
unqualified-search-registries = ['registry.lab.example.com']
[[registry]]
location = "registry.lab.example.com"
insecure = true
blocked = false
4. Run the python38 container in detached mode from an image with the python 3.8
package and based on the ubi8 image. The image is hosted on a remote registry.
4.4. Verify that the container is downloaded to the local image repository.
5. Build a container image called python39:1.0 from a container file, and use the image to
create a container called python39.
5.3. Verify that the container image exists in the local image repository.
6. Copy the /home/student/script.py script into the /tmp directory of the running
containers, and run the script on each container.
6.1. Copy the /home/student/script.py python script into the /tmp directory in
both containers.
6.2. Run the Python script in both containers, and then run the Python script on the host.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Objectives
Provide persistent storage for container data by sharing storage from the container host, and
configure a container network.
You can also configure a container to run a service continuously, such as a database server. If you
run a service continuously, you might eventually need to add more resources to the container, such
as persistent storage or access to more networks.
You can use different strategies to configure persistent storage for containers:
• For large deployments on an enterprise container platform, such as Red Hat OpenShift, you can
use sophisticated storage solutions to provide storage to your containers without knowing the
underlying infrastructure.
• For small deployments on a single container host, and without a need to scale, you can create
persistent storage from the container host by creating a directory to mount on the running
container.
When a container, such as a web server or database server, serves content for clients outside the
container host, you must set up a communication channel for those clients to access the content
of the container. You can configure port mapping to enable communication to a container. With
port mapping, the requests that are destined for a port on the container host are forwarded to a
port inside the container.
• Configure the container port mapping and host firewall to allow traffic on port 3306/tcp.
• Configure the db01 container to use persistent storage with the appropriate SELinux context.
• Add the appropriate network configuration so that the client01 container can communicate
with the db01 container by using DNS.
You use the podman container logs command to investigate the reason for the container
status.
From the preceding output, you determine that the container did not continue to run, because
the required environment variables were not passed to the container. So you inspect the
mariadb-105 container image to find more information about the environment variables to
customize the container.
The usage label from the output provides an example of how to run the image. The url label
points to a web page in the Red Hat Container Catalog that documents environment variables and
other information about how to use the container image.
The documentation for this image shows that the container uses the 3306 port for the database
service. The documentation also shows that the following environment variables are available to
configure the database service:
Variable Description
After examining the available environment variables for the image, you use the podman run
command -e option to pass environment variables to the container, and use the podman ps
command to verify that it is running.
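A sketch of those commands follows. The db01 container name comes from later in this section, and the variable values here are illustrative only:

```shell
# Sketch: pass the documented environment variables with -e, then verify
# that the container stays up (values are illustrative, not required ones)
podman run -d --name db01 \
  -e MYSQL_USER=user1 \
  -e MYSQL_PASSWORD=redhat \
  -e MYSQL_DATABASE=items \
  -e MYSQL_ROOT_PASSWORD=redhat \
  registry.lab.example.com/rhel8/mariadb-105
podman ps --filter name=db01
```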
To persist data, you can use host file-system content in the container with the --volume (-v)
option. You must consider file-system level permissions when you use this volume type in a
container.
In the MariaDB container image, the mysql user must own the /var/lib/mysql directory, just
as if MariaDB were running on the host machine. The directory to mount into the container
must have mysql as the user and group owner (or the UID and GID of the mysql user, if MariaDB
is not installed on the host machine). If you run a container as the root user, then the UIDs and
GIDs on your host machine match the UIDs and GIDs inside the container.
The UID and GID matching configuration does not occur the same way in a rootless container. In a
rootless container, the user has root access from within the container, because Podman launches a
container inside the user namespace.
You can use the podman unshare command to run a command inside the user namespace. To
obtain the UID mapping for your user namespace, use the podman unshare cat command.
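For example, the mapping can be displayed as follows:

```shell
# Print the UID mapping of your rootless user namespace:
# columns are container ID, host ID, and range length
podman unshare cat /proc/self/uid_map
```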
The preceding output shows that in the container, the root user (UID and GID of 0) maps to your
user (UID and GID of 1000) on the host machine. In the container, the UID and GID of 1 maps to
the UID and GID of 100000 on the host machine. Every UID and GID after 1 increments by 1. For
example, the UID and GID of 30 inside a container maps to the UID and GID of 100029 on the
host machine.
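The mapping arithmetic can be checked directly. Assuming the subordinate range starts at host ID 100000 for container ID 1, as in the description above:

```shell
# Container UID/GID -> host UID/GID for a rootless mapping that assigns
# host ID 100000 to container ID 1 (container ID 0 maps to your own UID)
container_uid=30
host_uid=$((100000 + container_uid - 1))
echo "$host_uid"   # prints 100029
```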
You use the podman exec command to view the mysql user UID and GID inside the container
that is running with ephemeral storage.
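A sketch of such a check, assuming the container is named db01 as in this section:

```shell
# Look up the mysql account entry inside the running container
podman exec -it db01 grep mysql /etc/passwd
```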
You decide to mount the /home/user/db_data directory into the db01 container to provide
persistent storage on the /var/lib/mysql directory of the container. You then create the
/home/user/db_data directory, and use the podman unshare command to set the user
namespace UID and GID of 27 as the owner of the directory.
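Those steps can be sketched as follows:

```shell
# Create the host directory, then change its owner from inside the
# user namespace so that container UID/GID 27 owns it
mkdir /home/user/db_data
podman unshare chown 27:27 /home/user/db_data
```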
The UID and GID of 27 in the container maps to the UID and GID of 100026 on the host machine.
You can verify the mapping by viewing the ownership of the /home/user/db_data directory
with the ls command.
Now that the correct file-system level permissions are set, you use the podman run command -v
option to mount the directory.
The podman container logs command shows a permission error for the /var/lib/mysql/
db_data directory.
This error happens because of the incorrect SELinux context that is set on the /home/user/
db_data directory on the host machine.
You then verify that the correct SELinux context is set on the /home/user/db_data directory
with the ls command -Z option.
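One way to correct the label is the Z volume option, which instructs Podman to relabel the mounted content for the container. A sketch, with an illustrative password value:

```shell
# Re-create the container with the :Z suffix so Podman applies the
# container_file_t SELinux label to the mounted directory
podman run -d --name db01 \
  -e MYSQL_ROOT_PASSWORD=redhat \
  -v /home/user/db_data:/var/lib/mysql:Z \
  registry.lab.example.com/rhel8/mariadb-105

# The label on the host directory should now show container_file_t
ls -Z /home/user/db_data
```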
For example, you can map the 13306 port on the container host to the 3306 port on the container
for communication with the MariaDB container. Therefore, traffic that is sent to the container host
port 13306 would be received by MariaDB that is running in the container.
You use the podman run command -p option to set a port mapping from the 13306 port from
the container host to the 3306 port on the db01 container.
Use the podman port command -a option to show all container port mappings in use. You can
also use the podman port db01 command to show the mapped ports for the db01 container.
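These commands can be sketched as follows (the environment variable value is illustrative):

```shell
# Map host port 13306 to container port 3306, then inspect the mappings
podman run -d --name db01 \
  -e MYSQL_ROOT_PASSWORD=redhat \
  -p 13306:3306 \
  registry.lab.example.com/rhel8/mariadb-105
podman port -a        # port mappings for all containers
podman port db01      # port mappings for the db01 container only
```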
You use the firewall-cmd command to allow port 13306 traffic into the container host machine
to redirect to the container.
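A sketch of those commands, run as the root user or with sudo:

```shell
# Permanently open 13306/tcp on the host firewall, then apply the change
firewall-cmd --permanent --add-port=13306/tcp
firewall-cmd --reload
```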
Important
A rootless container cannot open a privileged port (a port below 1024) on the
container host. That is, the podman run -p 80:8080 command does not normally
work for a rootless container. To map a port below 1024 on the container host
to a container port, you must run Podman as root or otherwise adjust the
system.
You can map a port above 1024 on the container host to a privileged port on the
container, even if you are running a rootless container. The 8080:80 mapping works
if the container provides a service that listens on port 80.
Note
The container-tools meta-package includes the netavark and aardvark-
dns packages. If Podman was installed as a stand-alone package, or if the
container-tools meta-package was installed later, then the result of the
previous command might be cni. To change the network back end, set the
following configuration in the /usr/share/containers/containers.conf file:
[network]
...output omitted...
network_backend = "netavark"
Existing containers on the host that use the default Podman network cannot resolve each other's
hostnames, because DNS is not enabled on the default network.
Use the podman network create command to create a DNS-enabled network. You use the
podman network create command to create the network called db_net, and specify the
subnet as 10.87.0.0/16 and the gateway as 10.87.0.1.
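A sketch of those commands:

```shell
# Create the DNS-enabled db_net network with an explicit subnet and gateway
podman network create --subnet 10.87.0.0/16 --gateway 10.87.0.1 db_net

# Confirm the subnet, gateway, and that DNS is enabled for the network
podman network inspect db_net
```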
If you do not specify the --gateway or --subnet options, then they are created with the default
values.
The podman network inspect command displays information about a specific network. You
use the podman network inspect command to verify that the gateway and subnet were
correctly set and that the new db_net network is DNS-enabled.
You can add the DNS-enabled db_net network to a new container with the podman run
command --network option. You use the podman run command --network option to create
the db01 and client01 containers that are connected to the db_net network.
-e MYSQL_DATABASE=dev_data \
-e MYSQL_ROOT_PASSWORD=redhat \
-v /home/user/db_data:/var/lib/mysql:Z \
-p 13306:3306 \
--network db_net \
registry.lab.example.com/rhel8/mariadb-105
[user@host ~]$ podman run -d --name client01 \
--network db_net \
registry.lab.example.com/ubi8/ubi:latest \
sleep infinity
Because containers are designed to have only the minimum required packages, the containers
might not have the required utilities to test communication, such as the ping and ip commands.
You can install these utilities in the container by using the podman exec command.
[user@host ~]$ podman exec -it db01 dnf install -y iputils iproute
...output omitted...
[user@host ~]$ podman exec -it client01 dnf install -y iputils iproute
...output omitted...
The containers can now ping each other by container name. You test the DNS resolution with the
podman exec command. The names resolve to IPs within the subnet that was manually set for
the db_net network.
You verify that the IP addresses in each container match the DNS resolution with the podman
exec command.
You use the podman network create command to create the backend network.
You then use the podman network ls command to view all the Podman networks.
The subnet and gateway were not specified with the podman network create command
--gateway and --subnet options.
You use the podman network inspect command to obtain the IP information of the backend
network.
You can use the podman network connect command to connect additional networks to a
container when it is running. You use the podman network connect command to connect the
backend network to the db01 and client01 containers.
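Those commands can be sketched as:

```shell
# Attach the additional backend network to the running containers
podman network connect backend db01
podman network connect backend client01
```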
Important
If a network is not specified with the podman run command, then the container
connects to the default network. The default network uses the slirp4netns
network mode, and the networks that you create with the podman network
create command use the bridge network mode. If you try to connect a bridge
network to a container by using the slirp4netns network mode, then the
command fails:
You use the podman inspect command to verify that both networks are connected to each
container and to display the IP information.
The client01 container can now communicate with the db01 container on both networks.
You use the podman exec command to ping both networks on the db01 container from the
client01 container.
[user@host ~]$ podman exec -it client01 ping -c3 10.89.1.4 | grep 'packet loss'
3 packets transmitted, 3 received, 0% packet loss, time 2052ms
[user@host ~]$ podman exec -it client01 ping -c3 10.87.0.3 | grep 'packet loss'
3 packets transmitted, 3 received, 0% packet loss, time 2054ms
References
podman(1), podman-exec(1), podman-info(1), podman-network(1), podman-
network-create(1), podman-network-inspect(1), podman-network-ls(1),
podman-port(1), podman-run(1), and podman-unshare(1) man pages
For more information, refer to the Working with Containers chapter in the Building,
Running, and Managing Linux Containers on Red Hat Enterprise Linux 9 guide at
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/
html-single/building_running_and_managing_containers/assembly_working-with-
containers_building-running-and-managing-containers
Guided Exercise
Outcomes
• Create container networks and connect them to containers.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. Log in to the servera machine as the student user.
2. Create the frontend container network. Create the db_client and db_01 containers
and connect them to the frontend network.
2.1. Use the podman network create command --subnet and --gateway
options to create the frontend network with the 10.89.1.0/24 subnet and the
10.89.1.1 gateway.
2.4. Start in the background a container named db_01 that is connected to the
frontend network. Use the registry.lab.example.com/rhel8/
mariadb-105 image.
3. Troubleshoot the db_01 container and determine why it is not running. Re-create the
db_01 container by using the required environment variables.
3.1. View the container logs and determine why the container exited.
3.2. Remove the db_01 container and create it again with environment variables. Provide
the required environment variables.
4. Create persistent storage for the containerized MariaDB service, and map the local
machine 13306 port to the 3306 port in the container. Allow traffic to the 13306 port on the
servera machine.
4.2. Obtain the mysql UID and GID from the db_01 container, and then remove the db_01
container.
4.3. Run the chown command inside the container namespace, and set the user and
group owner to 27 on the /home/student/database directory.
4.4. Create the db_01 container, and mount the /home/student/databases directory
from the servera machine to the /var/lib/mysql directory inside the db_01
container. Use the Z option to apply the required SELinux context.
4.6. Create the crucial_data table in the dev_db database in the db_01 container
from the db_client container.
4.7. Allow port 13306 traffic in the firewall on the servera machine.
4.8. Open a second terminal on the workstation machine and use the MariaDB client
to connect to the servera machine on port 13306, to show tables inside the db_01
container that are stored in the persistent storage.
5. Create a second container network called backend, and connect the backend network
to the db_client and db_01 containers. Test network connectivity and DNS resolution
between the containers.
5.1. Create the backend network with the 10.90.0.0/24 subnet and the 10.90.0.1
gateway.
5.2. Connect the backend container network to the db_client and db_01 containers.
5.5. Ping the db_01 container name from the db_client container.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Objectives
Configure a container as a systemd service, and configure a container service to start at boot
time.
As a regular user, you can create a systemd unit to configure your rootless containers. You can
use this configuration to manage your container as a regular system service with the systemctl
command.
Managing containers based on systemd units is mainly useful for basic and small deployments
that do not need to scale. For more sophisticated scaling and orchestration of many container-
based applications and services, you can use an enterprise orchestration platform that is based on
Kubernetes, such as Red Hat OpenShift Container Platform.
As a system administrator, you are tasked to configure the webserver1 container that is based on
the http24 container image to start at system boot. You must also mount the /app-artifacts
directory for the web server content and map the 8080 port from the local machine to the
container. Configure the container to start and stop with systemctl commands.
By default, when you create a user account with the useradd command, the system uses the
next available ID from the regular user ID range. The system also reserves a range of IDs for
the user's containers in the /etc/subuid file. If you create a user account with the useradd
command --system option, then the system does not reserve a range for the user containers. As
a consequence, you cannot start rootless containers with system accounts.
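You can inspect the reserved ranges directly. The range shown in the comment is illustrative, because the assigned values vary from system to system:

```shell
# Show the subordinate UID range that was reserved for the account;
# output such as "appdev-adm:100000:65536" is typical but varies
grep appdev-adm /etc/subuid
```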
You decide to create a dedicated user account to manage containers. You use the useradd
command to create the appdev-adm user, and use redhat as the password.
You then use the su command to switch to the appdev-adm user, and you start to use the
podman command.
Podman is a stateless utility and requires a full login session. Podman must be used within an SSH
session, and cannot be used in a sudo or an su shell. So you exit the su shell and log in to the
machine via SSH.
You then configure the container registry and authenticate with your credentials. You run the http
container with the following command.
Note
Remember to provide the right access to the directory that you mount from the
host file system to the container. For any error when running a container, you can
use the podman container logs command for troubleshooting.
Use the podman generate systemd command to generate systemd service files for an
existing container. The podman generate systemd command uses a container as a model to
create the configuration file.
The podman generate systemd command --new option instructs the podman utility to
configure the systemd service to create the container when the service starts, and to delete the
container when the service stops.
Important
Without the --new option, the podman utility configures the service unit file to start
and stop the existing container without deleting it.
You use the podman generate systemd command with the --name option to display the
systemd service file that is modeled for the webserver1 container.
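For example:

```shell
# Print a systemd unit file modeled on the existing webserver1 container
podman generate systemd --name webserver1
```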
On start, the systemd daemon executes the podman start command to start the existing
container.
On stop, the systemd daemon executes the podman stop command to stop the container.
Notice that the systemd daemon does not delete the container on this action.
You then use the previous command with the addition of the --new option to compare the
systemd configuration.
On starting, the systemd daemon executes the podman run command to create and
then start a new container. This action uses the podman run command --rm option, which
removes the container on stopping.
On stopping, systemd executes the podman stop command to stop the container.
After systemd stops the container, systemd removes it by using the podman rm -f
command.
You verify the output of the podman generate systemd command, and run the previous
command with the --files option to create the systemd user file in the current directory.
Because the webserver1 container uses persistent storage, you choose to use the podman
generate systemd command with the --new option. You then create the ~/.config/
systemd/user/ directory and move the file to this location.
First, you reload the systemd daemon to make the systemctl command aware of the new user
file. You use the systemctl --user start command to start the webserver1 container. Use
the name of the generated systemd user file for the container.
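The whole sequence can be sketched as follows. The container-webserver1.service file name is the default that podman generate systemd derives from the container name:

```shell
# Write the unit file to the current directory, install it as a
# systemd user unit, and start the service
podman generate systemd --new --files --name webserver1
mkdir -p ~/.config/systemd/user/
mv container-webserver1.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user start container-webserver1.service
```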
Important
When you configure a container with the systemd daemon, the daemon monitors
the container status and restarts the container if it fails. Do not use the podman
command to start or stop these containers. Doing so might interfere with the
systemd daemon monitoring.
The following table summarizes the directories and commands that are used between systemd
system and user services.
System services
# systemctl daemon-reload
# systemctl start UNIT
# systemctl stop UNIT
User services
$ systemctl --user daemon-reload
$ systemctl --user start UNIT
$ systemctl --user stop UNIT
You can change this default behavior, and force your enabled services to start when the server
boots and stop during shutdown, by running the loginctl enable-linger command.
You use the loginctl command to configure the systemd user service to persist after the last
user session of the configured service closes. You then verify the successful configuration with the
loginctl show-user command.
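A sketch of those commands, assuming the account is named user as in this section's prompts:

```shell
# Keep user services running without an open login session, then verify;
# the Linger property reads "yes" once lingering is enabled
loginctl enable-linger
loginctl show-user user | grep Linger
```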
The procedure to set the service file as root is similar to the previously outlined procedure for
rootless containers, with the following exceptions:
• You manage the containers with the systemctl command without the --user option.
For a demonstration, see the YouTube video from the Red Hat Videos channel that is listed in the
References at the end of this section.
References
loginctl(1), systemd.unit(5), systemd.service(5), subuid(5), and
podman-generate-systemd(1) man pages
For more information, refer to the Running Containers as Systemd Services with
Podman chapter in the Red Hat Enterprise Linux 9 Building, Running, and Managing
Containers guide at
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-
single/building_running_and_managing_containers/index
Guided Exercise
Outcomes
• Create systemd service files to manage a container.
• Configure a user account for systemd user services to start a container when the host
machine starts.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. Log in to the servera machine as the student user.
2. Create a user account called contsvc and use redhat as the password. Use this user
account to run containers as systemd services.
2.1. Create the contsvc user. Set redhat as the password for the contsvc user.
2.2. To manage the systemd user services with the contsvc account, you must log in
directly as the contsvc user. You cannot use the su and sudo commands to create a
session with the contsvc user.
Return to the workstation machine as the student user, and then log in as the
contsvc user.
3.2. The lab script prepares the registries.conf file in the /tmp/containers-
services/ directory. Copy that file to the ~/.config/containers/ directory.
3.3. Verify that you can access the registry.lab.example.com registry. If everything
works as expected, then the command should list some images.
4.2. Create the index.html file and add the Hello World line.
4.3. Verify that the permission for others is set to r-x in the webcontent/html
directory, and is set to r-- in the index.html file. The container uses a non-
privileged user that must be able to read the index.html file.
6. Create a systemd service file to manage the webapp container with systemctl
commands. Configure the systemd service so that when you start the service, the
systemd daemon creates a container. After you finish the configuration, stop and then
delete the webapp container. Remember that the systemd daemon expects that the
container does not exist initially.
6.2. Create the unit file for the webapp container. Use the --new option so that systemd
creates a container when starting the service, and deletes the container when
stopping the service.
7. Reload the systemd daemon configuration, and then enable and start your new
container-webapp user service. Verify the systemd service configuration, stop and
start the service, and display the web server response and the container status.
Use the container ID information to confirm that the systemd daemon creates a
container when you restart the service.
7.5. Stop the container-webapp service, and confirm that the container no longer
exists. When you stop the service, the systemd daemon stops and then deletes the
container.
7.6. Start the container-webapp service, and then confirm that the container is
running.
The container ID is different, because the systemd daemon creates a container with
the start instruction, and deletes the container with the stop instruction.
8. Ensure that the services for the contsvc user start at system boot. When done, restart the
servera machine.
8.2. Confirm that the Linger option is set for the contsvc user.
8.3. Switch to the root user, and then use the systemctl reboot command to restart
servera.
[contsvc@servera user]$ su -
Password: redhat
Last login: Fri Aug 28 07:43:40 EDT 2020 on pts/0
[root@servera ~]# systemctl reboot
Connection to servera closed by remote host.
Connection to servera closed.
[student@workstation ~]$
9. When the servera machine is up again, log in to servera as the contsvc user. Verify
that the systemd daemon started the webapp container, and that the web content is
available.
Finish
On the workstation machine, run the lab finish containers-services script to
complete this exercise.
Lab
Run Containers
In this lab, you configure on your server a container that provides a MariaDB database
service, stores its database on persistent storage, and starts automatically with the server.
Outcomes
• Create detached containers.
• Configure systemd for containers to start when the host machine starts.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. On serverb, install the container-tools meta-package.
2. The container image registry at registry.lab.example.com stores the rhel8/
mariadb-103 image with several tags. Use the podsvc user to list the available tags and
note the tag with the lowest version number. Use the admin user and redhat321 password
to authenticate to the registry. Use the /tmp/registries.conf file as a template for the
registry configuration.
3. Create the /home/podsvc/db_data directory, and configure the directory so that
containers have read/write access. Then, create the inventorydb detached container. Use
the rhel8/mariadb-103 image from the registry.lab.example.com registry, and
specify the tag with the lowest version number on that image, which you found in a preceding
step. Map port 3306 in the container to port 13306 on the host. Mount the /home/podsvc/
db_data directory on the host as /var/lib/mysql/data in the container. Declare the
following variable values for the container:
Variable Value
MYSQL_USER operator1
MYSQL_PASSWORD redhat
MYSQL_DATABASE inventory
MYSQL_ROOT_PASSWORD redhat
You can copy and paste these parameters from the /home/podsvc/containers-
review/variables file on serverb. Execute the /home/podsvc/containers-
review/testdb.sh script to confirm that the MariaDB database is running.
4. Configure the systemd daemon so that the inventorydb container starts automatically
when the system boots.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Solution
Run Containers
In this lab, you configure on your server a container that provides a MariaDB database
service, stores its database on persistent storage, and starts automatically with the server.
Outcomes
• Create detached containers.
• Configure systemd for containers to start when the host machine starts.
This command prepares your environment and ensures that all required resources are
available.
Instructions
1. On serverb, install the container-tools meta-package.
2.4. Log in to the container registry with the podman login command.
Note
The repository that contains the mariadb container image is not a public
repository, and so the podman search mariadb command returns no results.
Review the note in the podman-search(1) man page about the unreliability of
using podman-search to determine the existence of an image.
Variable Value
MYSQL_USER operator1
MYSQL_PASSWORD redhat
MYSQL_DATABASE inventory
MYSQL_ROOT_PASSWORD redhat
You can copy and paste these parameters from the /home/podsvc/containers-
review/variables file on serverb. Execute the /home/podsvc/containers-
review/testdb.sh script to confirm that the MariaDB database is running.
3.1. Start the db_01 detached container to obtain the mysql UID and GID.
3.3. Obtain the mysql UID and GID from the db_01 container, and then remove the db_01
container.
3.4. Use the podman unshare command to set the user namespace UID and GID of 27 as
the owner of the directory.
4. Configure the systemd daemon so that the inventorydb container starts automatically
when the system boots.
4.1. If you used sudo or su to log in as the podsvc user, then exit serverb and use the
ssh command to log in directly to serverb as the podsvc user. Remember, the
systemd daemon requires the user to open a direct session from the console or
through SSH. Omit this step if you already logged in to the serverb machine as the
podsvc user by using SSH.
4.3. Create the systemd unit file from the running container.
4.5. Instruct the systemd daemon to reload its configuration, and then enable and start the
container-inventorydb service.
4.7. Run the loginctl enable-linger command for the user services to start
automatically when the server starts.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Summary
• Containers provide a lightweight way to distribute and run an application with its dependencies
so that it does not conflict with installed software on the host.
• Containers run from container images that you can download from a container registry or create
yourself.
• You can use container files with instructions to build a customized container image.
• Podman, which Red Hat Enterprise Linux provides, directly runs and manages containers and
container images on a single host.
• Containers can be run as root, or as non-privileged rootless containers for increased security.
• You can map network ports on the container host to pass traffic to services that run in its
containers.
• You can use environment variables to configure the software in containers at build time.
• Although container storage is temporary, you can attach persistent storage to a container by
using the contents of a directory on the container host, for example.
• You can configure a systemd unit file to automatically run containers when the system starts.
Chapter 14
Comprehensive Review
Goal Review tasks from the Red Hat System
Administration II course.
Comprehensive Review
Objectives
Demonstrate knowledge and skills learned in Red Hat System Administration II.
You can refer to earlier sections in the textbook for extra study.
• Run commands more efficiently by using advanced features of the Bash shell, shell scripts, and
various Red Hat Enterprise Linux utilities.
• Run repetitive tasks with for loops, evaluate exit codes from commands and scripts, run tests
with operators, and create conditional structures with if statements.
• Create regular expressions to match data, apply regular expressions to text files with the grep
command, and use grep to search files and data from piped commands.
• Schedule commands to run on a repeating schedule with the system crontab file and
directories.
• Enable and disable systemd timers, and configure a timer that manages temporary files.
• Describe the basic Red Hat Enterprise Linux logging architecture to record events.
• Interpret events in the relevant syslog files to troubleshoot problems or to review system status.
• Find and interpret entries in the system journal to troubleshoot problems or review system
status.
• Configure the system journal to preserve the record of events when a server is rebooted.
• Maintain accurate time synchronization with Network Time Protocol (NTP) and configure the
time zone to ensure correct time stamps for events that are recorded by the system journal and
logs.
• Archive files and directories into a compressed file with tar, and extract the contents of an
existing tar archive.
• Efficiently and securely synchronize the contents of a local file or directory with a remote server
copy.
• Optimize system performance by selecting a tuning profile that the tuned daemon manages.
• Prioritize or deprioritize specific processes, with the nice and renice commands.
• Explain how SELinux protects resources, change the current SELinux mode of a system, and set
the default SELinux mode of a system.
• Manage the SELinux policy rules that determine the default context for files and directories with
the semanage fcontext command, and apply the context defined by the SELinux policy to
files and directories with the restorecon command.
• Activate and deactivate SELinux policy rules with the setsebool command, manage the
persistent value of SELinux Booleans with the semanage boolean -l command, and consult
man pages that end with _selinux to find useful information about SELinux Booleans.
• Use SELinux log analysis tools and display useful information during SELinux troubleshooting
with the sealert command.
• Create storage partitions, format them with file systems, and mount them for use.
• Describe logical volume manager components and concepts, and implement LVM storage and
display LVM component information.
• Analyze the multiple storage components that make up the layers of the storage stack.
• Identify NFS export information, create a directory to use as a mount point, mount an NFS
export with the mount command or by configuring the /etc/fstab file, and unmount an NFS
export with the umount command.
• Describe the benefits of using the automounter, and automount NFS exports by using direct
and indirect maps.
• Describe the Red Hat Enterprise Linux boot process, set the default target when booting, and
boot a system to a non-default target.
• Log in to a system and change the root password when the current root password is lost.
• Manually repair file-system configuration or corruption issues that stop the boot process.
• Verify that network ports have the correct SELinux type for services to bind to them.
• Explain Kickstart concepts and architecture, create a Kickstart file with the Kickstart
Generator website, modify an existing Kickstart file with a text editor and check its syntax with
ksvalidator, publish a Kickstart file to the installer, and perform a Kickstart installation over the
network.
• Install a virtual machine on your Red Hat Enterprise Linux server with the web console.
• Explain container concepts and the core technologies for building, storing, and running
containers.
• Discuss container management tools for using registries to store and retrieve images, and for
deploying, querying, and accessing containers.
• Provide persistent storage for container data by sharing storage from the container host, and
configure a container network.
• Configure a container as a systemd service, and configure a container service to start at boot
time.
Lab
Note
If you plan to take the RHCSA exam, then use the following approach to
maximize the benefit of this Comprehensive Review: attempt each lab
without viewing the solution buttons or referring to the course content. Use
the grading scripts to gauge your progress as you complete each lab.
In this review, you troubleshoot and repair boot problems and update the system default
target. You also schedule tasks to run on a repeating schedule as a normal user.
Outcomes
• Diagnose issues and recover the system from emergency mode.
As the student user on the workstation machine, use the lab command to prepare your
system for this exercise.
This command prepares your environment and ensures that all required resources are
available.
Specifications
• On workstation, run the /tmp/rhcsa-break1 script. This script causes an issue with the
boot process on serverb and then reboots the machine. Troubleshoot the cause and repair the
boot issue. When prompted, use redhat as the password of the root user.
• On workstation, run the /tmp/rhcsa-break2 script. This script causes the default target
to switch from the multi-user target to the graphical target on the serverb machine
and then reboots the machine. On serverb, reset the default target to use the multi-user
target. The default target settings must persist after reboot without manual intervention. As the
student user, use the sudo command for performing privileged commands. Use student as
the password, when required.
• On serverb, schedule a recurring job as the student user that executes the /home/
student/backup-home.sh script hourly between 7 PM and 9 PM every day except
Saturday and Sunday.
• Reboot the serverb machine and wait for the boot to complete before grading.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Solution
Note
If you plan to take the RHCSA exam, then use the following approach to
maximize the benefit of this Comprehensive Review: attempt each lab
without viewing the solution buttons or referring to the course content. Use
the grading scripts to gauge your progress as you complete each lab.
In this review, you troubleshoot and repair boot problems and update the system default
target. You also schedule tasks to run on a repeating schedule as a normal user.
Outcomes
• Diagnose issues and recover the system from emergency mode.
As the student user on the workstation machine, use the lab command to prepare your
system for this exercise.
This command prepares your environment and ensures that all required resources are
available.
2. After the serverb machine boots, access the console and notice that the boot process
stopped early. Consider a possible cause for this behavior.
2.1. Locate the icon for the serverb console, as appropriate for your classroom
environment. Open the console and inspect the error. It might take a few seconds for
the error to appear.
2.2. Press Ctrl+Alt+Del to reboot the serverb machine. When the boot-loader menu
appears, press any key except Enter to interrupt the countdown.
2.3. Edit the default boot-loader entry, in memory, to boot into emergency mode. Press
e to edit the current entry.
2.4. Use the cursor keys to navigate to the line that starts with linux. Append
systemd.unit=emergency.target.
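After the edit, the linux line in the boot-loader editor looks similar to the following sketch; the kernel version and root device are placeholders, not values from this lab:

```shell
# Boot-loader entry after appending the emergency target (illustrative):
linux ($root)/vmlinuz-<version> root=<root-device> ro systemd.unit=emergency.target
```

Press Ctrl+x to boot with the modified entry.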
2.6. Log in to emergency mode. Use redhat as the root user's password.
3. Remount the / file system with read and write capabilities. Use the mount -a command to
try to mount all the other file systems.
3.1. Remount the / file system with read and write capabilities to edit the file system.
3.2. Try to mount all the other file systems. Notice that one of the file systems does not
mount.
3.3. Edit the /etc/fstab file to fix the issue. Remove or comment out the incorrect line.
3.4. Update the systemd daemon for the system to register the new /etc/fstab file
configuration.
3.5. Verify that the /etc/fstab file is now correct by attempting to mount all entries.
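The repair in steps 3.1 through 3.5 can be sketched as the following command sequence, run as root from the emergency shell; the faulty /etc/fstab entry itself varies:

```shell
mount -o remount,rw /     # make the root file system writable
mount -a                  # attempt all fstab entries; the bad one reports an error
vim /etc/fstab            # comment out or remove the incorrect entry
systemctl daemon-reload   # regenerate mount units from the updated fstab
mount -a                  # should now complete without errors
systemctl reboot
```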
3.6. Reboot serverb and wait for the boot to complete. The system should now boot
without errors.
4. On workstation, run the /tmp/rhcsa-break2 script. Wait for the serverb machine to
reboot before proceeding.
5. On serverb, set the multi-user target as the current and default target.
5.5. Reboot serverb and verify that the multi-user target is set as the default target.
5.6. After the system reboots, open an SSH session to serverb as the student user.
Verify that the multi-user target is set as the default target.
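A minimal command sketch for setting and verifying the default target:

```shell
sudo systemctl set-default multi-user.target   # persists across reboots
systemctl get-default                          # prints the current default target
```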
6. On serverb, schedule a recurring job as the student user that executes the /home/
student/backup-home.sh script hourly between 7 PM and 9 PM on all days except
Saturday and Sunday. Use the backup-home.sh script to schedule the recurring job.
Download the backup script from http://materials.example.com/labs/backup-
home.sh and make the script executable.
6.2. Open the crontab file with the default text editor.
6.4. Use the crontab -l command to list the scheduled recurring jobs.
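A crontab entry that matches this schedule (minute 0 of hours 19 through 21, Monday through Friday) would look like the following; add it with crontab -e as the student user:

```shell
# min  hour   day-of-month  month  day-of-week  command
0      19-21  *             *      1-5          /home/student/backup-home.sh
```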
7. Reboot serverb and wait for the boot to complete before grading.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Lab
Note
If you plan to take the RHCSA exam, then use the following approach to
maximize the benefit of this Comprehensive Review: attempt each lab
without viewing the solution buttons or referring to the course content. Use
the grading scripts to gauge your progress as you complete each lab.
In this review, you create a logical volume, mount a network file system, and create a swap
partition that is automatically activated at boot. You also configure directories to store
temporary files.
Outcomes
• Create a logical volume.
As the student user on the workstation machine, use the lab command to prepare your
system for this exercise.
This command prepares your environment and ensures that all required resources are
available.
Specifications
• On serverb, configure a new 1 GiB vol_home logical volume in a new 2 GiB extra_storage
volume group. Use the unpartitioned /dev/vdb disk to create the partition.
• Format the vol_home logical volume with the XFS file-system type, and persistently mount it
on the /user-homes directory.
• On serverb, persistently mount the /share network file system that servera exports on the
/local-share directory. The servera machine exports the servera.lab.example.com:/
share path.
• On serverb, create a 512 MiB swap partition on the /dev/vdc disk. Activate and
persistently mount the swap partition.
• Create the production user group. Create the production1, production2, production3,
and production4 users with the production group as their supplementary group.
• On serverb, configure the /run/volatile directory to store temporary files. If the files in
this directory are not accessed for more than 30 seconds, then the system automatically deletes
them. Set 0700 as the octal permissions for the directory. Use the /etc/tmpfiles.d/
volatile.conf file to configure the time-based deletion of the files in the /run/volatile
directory.
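A /etc/tmpfiles.d/volatile.conf entry that implements this policy could read:

```shell
# type  path           mode  uid   gid   age
d       /run/volatile  0700  root  root  30s
```

Run systemd-tmpfiles --create /etc/tmpfiles.d/volatile.conf to create the directory immediately.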
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Solution
Note
If you plan to take the RHCSA exam, then use the following approach to
maximize the benefit of this Comprehensive Review: attempt each lab
without viewing the solution buttons or referring to the course content. Use
the grading scripts to gauge your progress as you complete each lab.
In this review, you create a logical volume, mount a network file system, and create a swap
partition that is automatically activated at boot. You also configure directories to store
temporary files.
Outcomes
• Create a logical volume.
As the student user on the workstation machine, use the lab command to prepare your
system for this exercise.
This command prepares your environment and ensures that all required resources are
available.
1.1. Log in to serverb as the student user and switch to the root user.
1.4. Create the extra_storage volume group with the /dev/vdb1 partition.
2. Format the vol_home logical volume with the XFS file-system type, and persistently mount
it on the /user-homes directory.
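The storage setup described above can be sketched with the following commands as root on serverb; the partition boundaries are illustrative, and any equivalent 2 GiB partition works:

```shell
parted /dev/vdb mklabel gpt
parted /dev/vdb mkpart extra 1MiB 2GiB        # ~2 GiB partition for LVM
udevadm settle
pvcreate /dev/vdb1
vgcreate extra_storage /dev/vdb1
lvcreate -n vol_home -L 1G extra_storage
mkfs.xfs /dev/extra_storage/vol_home
mkdir /user-homes
echo '/dev/extra_storage/vol_home /user-homes xfs defaults 0 0' >> /etc/fstab
mount /user-homes
```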
3. On serverb, persistently mount the /share network file system that servera
exports on the /local-share directory. The servera machine exports the
servera.lab.example.com:/share path.
3.2. Append the appropriate entry to the /etc/fstab file to persistently mount the
servera.lab.example.com:/share network file system.
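The entry can be sketched as follows (minimal mount options shown):

```shell
# /etc/fstab entry for the NFS export
servera.lab.example.com:/share  /local-share  nfs  rw  0 0
```

Create the /local-share directory first, then run mount /local-share to verify the entry.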
4. On serverb, create a 512 MiB swap partition on the /dev/vdc disk. Activate and
persistently mount the swap partition.
4.3. Create an entry in the /etc/fstab file to persistently mount the swap space. Use the
partition's UUID to create the /etc/fstab file entry. Activate the swap space.
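A command sketch consistent with this step; replace the UUID placeholder with the value that mkswap prints:

```shell
parted /dev/vdc mklabel gpt
parted /dev/vdc mkpart swap1 linux-swap 1MiB 513MiB   # ~512 MiB
udevadm settle
mkswap /dev/vdc1           # prints the UUID of the new swap space
echo 'UUID=<uuid-from-mkswap>  swap  swap  defaults  0 0' >> /etc/fstab
swapon -a                  # activate; verify with: swapon --show
```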
5. Create the production user group. Then, create the production1, production2,
production3, and production4 users with the production group as their
supplementary group.
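The group and user creation can be sketched as follows, run as root:

```shell
groupadd production
for n in 1 2 3 4; do
    useradd -G production "production${n}"
done
id production1    # the groups= output includes production
```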
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Lab
Note
If you plan to take the RHCSA exam, then use the following approach to
maximize the benefit of this Comprehensive Review: attempt each lab
without viewing the solution buttons or referring to the course content. Use
the grading scripts to gauge your progress as you complete each lab.
In this review, you configure SSH key-based authentication, change firewall settings, adjust
the SELinux mode and an SELinux Boolean, and troubleshoot SELinux issues.
Outcomes
• Configure SSH key-based authentication.
As the student user on the workstation machine, use the lab command to prepare your
system for this exercise.
This command prepares your environment and ensures that all required resources are
available.
Specifications
• On serverb, generate an SSH key pair for the student user. Do not protect the private key
with a passphrase.
• Configure the student user on servera to accept login authentication with the SSH key pair
that you generated on the serverb machine. The student user on serverb must be able to
log in to servera via SSH without entering a password.
• On serverb, verify that the /localhome directory does not exist. Then, configure the
production5 user's home directory to mount the /user-homes/production5 network
file system. The servera.lab.example.com machine exports the file system as the
servera.lab.example.com:/user-homes/production5 NFS share. Use the autofs
service to mount the network share. Verify that the autofs service creates the /localhome/
production5 directory with the same permissions as on servera.
• On serverb, adjust the appropriate SELinux Boolean so that the production5 user may use
the NFS-mounted home directory after authenticating with an SSH key. If required, use redhat
as the password of the production5 user.
• On serverb, adjust the firewall settings to reject all connection requests from the servera
machine. Use the servera IPv4 address (172.25.250.10) to configure the firewall rule.
• On serverb, investigate and fix the issue with the failing Apache web service, which listens
on port 30080/TCP for connections. Adjust the firewall settings appropriately so that the port
30080/TCP is open for incoming connections.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Solution
Note
If you plan to take the RHCSA exam, then use the following approach to
maximize the benefit of this Comprehensive Review: attempt each lab
without viewing the solution buttons or referring to the course content. Use
the grading scripts to gauge your progress as you complete each lab.
In this review, you configure SSH key-based authentication, change firewall settings, adjust
the SELinux mode and an SELinux Boolean, and troubleshoot SELinux issues.
Outcomes
• Configure SSH key-based authentication.
As the student user on the workstation machine, use the lab command to prepare your
system for this exercise.
This command prepares your environment and ensures that all required resources are
available.
1. On serverb, generate an SSH key pair for the student user. Do not protect the private key
with a passphrase.
1.2. Use the ssh-keygen command to generate an SSH key pair. Do not protect the
private key with a passphrase.
2. Configure the student user on servera to accept login authentication with the SSH key
pair that you generated on the serverb machine. The student user on serverb must be
able to log in to servera via SSH without entering a password.
2.1. Send the public key of the newly generated SSH key pair to the student user on the
servera machine.
2.2. Verify that the student user can log in to servera from serverb without entering a
password. Do not close the connection.
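Steps 1 and 2 can be sketched as the following commands, run as the student user on serverb:

```shell
ssh-keygen              # accept the default file; leave the passphrase empty
ssh-copy-id servera     # enter the student password one last time
ssh servera hostname    # must now succeed without a password prompt
```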
3.2. Edit the /etc/sysconfig/selinux file to set the SELINUX parameter to the
permissive value.
4. On serverb, verify that the /localhome directory does not exist. Then, configure the
production5 user's home directory to mount the /user-homes/production5 network
file system. The servera.lab.example.com machine exports the file system as the
servera.lab.example.com:/user-homes/production5 NFS share. Use the
autofs service to mount the network share. Verify that the autofs service creates the
/localhome/production5 directory with the same permissions as on servera.
The auto.master map file contains the direct map entry: /- /etc/auto.production5
4.8. Verify that the autofs service creates the /localhome/production5 directory on
serverb with the same permissions as the /user-homes/production5 directory
on servera.
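A direct-map autofs configuration consistent with these steps follows; the master map file name is a conventional choice, not dictated by the lab text:

```shell
# /etc/auto.master.d/production5.autofs
/- /etc/auto.production5

# /etc/auto.production5
/localhome/production5 -rw servera.lab.example.com:/user-homes/production5
```

Install the autofs package if needed, then run systemctl enable --now autofs.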
5. On serverb, adjust the appropriate SELinux Boolean so that the production5 user may
use the NFS-mounted home directory after authenticating with an SSH key. If required, use
redhat as the password of the production5 user.
5.1. Open a new terminal window and verify from servera that the production5 user
cannot log in to serverb with SSH key-based authentication. An SELinux Boolean is
preventing the user from logging in. From workstation, open a new terminal and log
in to servera as the student user.
5.2. Switch to the production5 user. When prompted, use redhat as the password of the
production5 user.
5.4. Transfer the public key of the SSH key pair to the production5 user on the serverb
machine. When prompted, use redhat as the password of the production5 user.
5.6. On the terminal that is connected to serverb as the root user, set the
use_nfs_home_dirs SELinux Boolean to true.
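A minimal sketch of the Boolean change and its verification:

```shell
setsebool -P use_nfs_home_dirs on   # -P makes the Boolean persistent
getsebool use_nfs_home_dirs         # verify: use_nfs_home_dirs --> on
```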
5.7. Return to the terminal that is connected to servera as the production5 user, and
use SSH public key-based authentication instead of password-based authentication to
log in to serverb as the production5 user. This command should succeed.
5.8. Exit and close the terminal that is connected to serverb as the production5 user.
Keep open the terminal that is connected to serverb as the root user.
6. On serverb, adjust the firewall settings to reject all connection requests that originate from
the servera machine. Use the servera IPv4 address (172.25.250.10) to configure the
firewall rule.
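The firewall change can be sketched with a rich rule, run as root on serverb:

```shell
firewall-cmd --permanent \
    --add-rich-rule='rule family=ipv4 source address=172.25.250.10/32 reject'
firewall-cmd --reload
firewall-cmd --list-rich-rules   # confirm that the reject rule is active
```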
7. On serverb, investigate and fix the issue with the failing Apache web service, which listens
on port 30080/TCP for connections. Adjust the firewall settings appropriately so that the
port 30080/TCP is open for incoming connections.
7.1. Restart the httpd service. This command fails to restart the service.
7.2. Investigate why the httpd service is failing. A permission error indicates that the
httpd daemon failed to bind to port 30080/TCP on startup. SELinux policies can
prevent services from binding to non-standard ports.
7.3. Determine whether an SELinux policy is preventing the httpd service from binding
to the 30080/TCP port. The log messages reveal that the 30080/TCP port does not
have the appropriate http_port_t SELinux context, and so SELinux prevents the
httpd service from binding to the port. The log message also provides the syntax of
the semanage port command that you can use to fix the issue.
7.4. Set the appropriate SELinux context on the 30080/TCP port for the httpd service to
bind to it.
7.5. Restart the httpd service. This command should successfully restart the service.
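Steps 7.4 and 7.5, plus the firewall adjustment from the specifications, can be sketched as:

```shell
semanage port -a -t http_port_t -p tcp 30080   # label the port for httpd
systemctl restart httpd                        # should now start cleanly
firewall-cmd --permanent --add-port=30080/tcp
firewall-cmd --reload
```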
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Lab
Run Containers
Note
If you plan to take the RHCSA exam, then use the following approach to
maximize the benefit of this Comprehensive Review: attempt each lab
without viewing the solution buttons or referring to the course content. Use
the grading scripts to gauge your progress as you complete each lab.
Outcomes
• Create rootless detached containers.
As the student user on the workstation machine, use the lab command to prepare your
system for this exercise.
This command prepares your environment and ensures that all required resources are
available.
Specifications
• On serverb, configure the podmgr user with redhat as the password, and set up the
appropriate tools for the podmgr user to manage the containers for this comprehensive review.
Configure the registry.lab.example.com as the remote registry. Use admin as the
user and redhat321 as the password to authenticate. You can use the /tmp/review4/
registries.conf file to configure the registry.
• Create the production DNS-enabled container network. Use the 10.81.0.0/16 subnet and
10.81.0.1 as the gateway. Use this container network for the containers that you create in this
comprehensive review.
Variable Value
MYSQL_USER developer
MYSQL_PASSWORD redhat
MYSQL_DATABASE inventory
MYSQL_ROOT_PASSWORD redhat
• Create a systemd service file to manage the db-app01 container. Configure the systemd
service so that when you start the service, the systemd daemon keeps the original container.
Start and enable the container as a systemd service. Configure the db-app01 container to
start at system boot.
• Copy the /home/podmgr/db-dev/inventory.sql script into the /tmp directory of the db-
app01 container, and execute the script inside the container. If you executed the script locally,
then you would use the mysql -u root inventory < /tmp/inventory.sql command.
• Use the container file in the /home/podmgr/http-dev directory to create the http-app01
detached container in the production network. The container image name must be http-
client with the 9.0 tag. Map the 8080 port on the local machine to the 8080 port in the
container.
• Use the curl command to query the content of the http-app01 container. Verify that the
output of the command shows the container name of the client and that the status of the
database is up.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.
Solution
Run Containers
Note
If you plan to take the RHCSA exam, then use the following approach to
maximize the benefit of this Comprehensive Review: attempt each lab
without viewing the solution buttons or referring to the course content. Use
the grading scripts to gauge your progress as you complete each lab.
Outcomes
• Create rootless detached containers.
As the student user on the workstation machine, use the lab command to prepare your
system for this exercise.
This command prepares your environment and ensures that all required resources are
available.
1. On serverb, configure the podmgr user with redhat as the password and set up the
appropriate tools for the podmgr user to manage the containers for this comprehensive
review. Configure the registry.lab.example.com as the remote registry. Use admin as
the user and redhat321 as the password to authenticate. You can use the /tmp/review4/
registries.conf file to configure the registry.
1.3. Create the podmgr user and set redhat as the password for the user.
1.4. Exit the student user session. Log in to the serverb machine as the podmgr user. If
prompted, use redhat as the password.
3. Create the production DNS-enabled container network. Use the 10.81.0.0/16 subnet
and 10.81.0.1 as the gateway. Use this container network for the containers that you
create in this comprehensive review.
3.1. Create the production DNS-enabled container network. Use the 10.81.0.0/16
subnet and 10.81.0.1 as the gateway.
3.2. Verify that the DNS feature is enabled in the production network.
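Steps 3.1 and 3.2 can be sketched as follows; networks created this way have DNS enabled by default with the netavark backend:

```shell
podman network create --subnet 10.81.0.0/16 --gateway 10.81.0.1 production
podman network inspect production   # look for "dns_enabled": true
```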
Variable Value
MYSQL_USER developer
MYSQL_PASSWORD redhat
MYSQL_DATABASE inventory
MYSQL_ROOT_PASSWORD redhat
4.1. Search for the earliest version tag number of the registry.lab.example.com/
rhel8/mariadb container image.
4.2. Use the earliest version tag number from the output of the previous step to create the
detached db-app01 container in the production network. Use the /home/podmgr/
storage/database directory as persistent storage for the container. Map the 13306
port to the 3306 container port. Use the data in the table to set the environment
variables for the container.
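A podman run sketch for the container described above; the image tag and the data directory inside the container are assumptions, so use the tag found in the search step and check the image documentation for its actual data path:

```shell
podman run -d --name db-app01 --network production \
    -p 13306:3306 \
    -v /home/podmgr/storage/database:/var/lib/mysql/data:Z \
    -e MYSQL_USER=developer -e MYSQL_PASSWORD=redhat \
    -e MYSQL_DATABASE=inventory -e MYSQL_ROOT_PASSWORD=redhat \
    registry.lab.example.com/rhel8/mariadb:<earliest-tag>
```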
5. Create a systemd service file to manage the db-app01 container. Configure the systemd
service so that when you start the service, the systemd daemon keeps the original container.
Start and enable the container as a systemd service. Configure the db-app01 container to
start at system boot.
5.1. Create the ~/.config/systemd/user/ directory for the container unit file.
5.2. Create the systemd unit file for the db-app01 container, and move the unit file to the
~/.config/systemd/user/ directory.
5.4. Reload the user systemd service to use the new service unit.
5.5. Start and enable the systemd unit for the db-app01 container.
5.6. Use the loginctl command to configure the db-app01 container to start at system
boot.
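Steps 5.1 through 5.6 can be sketched as the following commands, run as the podmgr user:

```shell
mkdir -p ~/.config/systemd/user
cd ~/.config/systemd/user
podman generate systemd --name db-app01 --files  # without --new, keeps the container
systemctl --user daemon-reload
systemctl --user enable --now container-db-app01.service
loginctl enable-linger                           # start user services at system boot
```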
7. Use the container file in the /home/podmgr/http-dev directory to create the http-
app01 detached container in the production network. The container image name must be
http-client with the 9.0 tag. Map the 8080 port on the local machine to the 8080 port
in the container.
7.1. Create the http-client:9.0 image with the container file in the /home/podmgr/
http-dev directory.
7.2. Create the http-app01 detached container in the production network. Map the
8080 port from the local machine to the 8080 port in the container.
8. Query the content of the http-app01 container. Verify that it shows the container name of
the client and that the status of the database is up.
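Steps 7 and 8 can be sketched as follows, run as the podmgr user on serverb:

```shell
cd /home/podmgr/http-dev
podman build -t http-client:9.0 .
podman run -d --name http-app01 --network production -p 8080:8080 \
    localhost/http-client:9.0
curl http://localhost:8080   # response should name the client and report the DB as up
```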
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, change to the student user home directory and use the lab
command to complete this exercise. This step is important to ensure that resources from previous
exercises do not impact upcoming exercises.