
The Linux command line

this note only shows the commands' default behavior

for more info go to the specific command file.

| notation (command ...)

When three periods follow an argument in the description of a command (as in
mkdir directory... ), it means that the argument can be repeated.

1. what is the shell


The shell is a program that takes keyboard commands and passes them to the
operating system to carry out. Almost all Linux distributions supply a shell program
from the GNU Project called bash .

starting a shell session :

To start a shell session, open a terminal emulator (the GUI terminal), usually just
called a terminal.
Once it comes up, we should see something like this:

[me@linuxbox ~]$

| it's either $ or #

If the last character of the prompt is a pound sign ( # ) rather than a dollar sign ( $ ),
the terminal session has superuser privileges.

shell commands examples :

Some shell commands are:

date displays the current date and time.
cal displays a calendar.
df shows disk space usage.
free shows memory usage.

closing a shell session :

We can end a terminal session by:

closing the terminal emulator window.
entering the exit command at the shell prompt.
pressing Ctrl-d.

2. navigation
Understanding the File System Tree
Linux organizes its files in what is called a hierarchical directory structure.
The first directory in the file system is called the root directory.

| linux has only one tree

Unlike Windows, which has a separate file system tree for each storage device,
Linux always has a single file system tree, regardless of how many drives or
storage devices are attached to the computer.

| storage devices are mounted inside that tree

Storage devices are attached (or more correctly, mounted) at various points on
the tree according to the whims of the system administrator, the person (or
people) responsible for the maintenance of the system.

navigation commands
pwd prints the current working directory.
ls lists the files and directories in the current working directory.
cd changes the current directory.
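For example, a short navigation session might look like this (a minimal sketch; the directory names are just examples):

```bash
pwd             # print where we are, e.g. /home/me
cd /usr/bin     # change to an absolute path
pwd             # now /usr/bin
ls              # list its contents
cd ~            # return to our home directory (~ expands to it)
cd -            # jump back to the previous working directory
```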

3. Exploring the System


Options and Arguments
Commands are often followed by one or more options that modify their behavior, and
further, by one or more arguments:
command -options arguments

Most commands use options, which consist of a single character preceded by a dash.
Many commands, however, also support long options, consisting of a word
preceded by two dashes.
Also, many commands allow multiple short options to be strung together.

In the following example, the ls command is given two options, which are the l option
to produce long format output, and the t option to sort the result by the file's
modification time.
[me@linuxbox ~]$ ls -lt

We'll add the long option “--reverse” to reverse the order of the sort.
[me@linuxbox ~]$ ls -lt --reverse

linux file system

| directory | description |
| --- | --- |
| / | The root directory. |
| /bin | Contains binaries (programs) that must be present for the system to boot and run. |
| /boot | Contains the Linux kernel, initial RAM disk image (for drivers needed at boot time), and the boot loader. Interesting files: /boot/grub/grub.conf or menu.lst, which are used to configure the boot loader; /boot/vmlinuz (or something similar), the Linux kernel. |
| /dev | This is a special directory that contains device nodes. “Everything is a file” also applies to devices. Here is where the kernel maintains a list of all the devices it understands. |
| /etc | The /etc directory contains all of the system-wide configuration files. It also contains a collection of shell scripts that start each of the system services at boot time. Everything in this directory should be readable text. Interesting files (while everything in /etc is interesting, here are some all-time favorites): /etc/crontab, a file that defines when automated jobs will run; /etc/fstab, a table of storage devices and their associated mount points; /etc/passwd, a list of the user accounts. |
| /home | In normal configurations, each user is given a directory in /home. Ordinary users can only write files in their home directories. This limitation protects the system from errant user activity. |
| /lib | Contains shared library files used by the core system programs. These are similar to dynamic link libraries (DLLs) in Windows. |
| /lost+found | Each formatted partition or device using a Linux file system, such as ext4, will have this directory. It is used in the case of a partial recovery from a file system corruption event. Unless something really bad has happened to our system, this directory will remain empty. |
| /media | On modern Linux systems the /media directory will contain the mount points for removable media such as USB drives, CD-ROMs, etc. that are mounted automatically at insertion. |
| /mnt | On older Linux systems, the /mnt directory contains mount points for removable devices that have been mounted manually. |
| /opt | The /opt directory is used to install “optional” software. This is mainly used to hold commercial software products that might be installed on the system. |
| /proc | The /proc directory is special. It's not a real file system in the sense of files stored on the hard drive. Rather, it is a virtual file system maintained by the Linux kernel. The “files” it contains are peepholes into the kernel itself. The files are readable and will give us a picture of how the kernel sees the computer. |
| /root | This is the home directory for the root account. |
| /sbin | This directory contains “system” binaries. These are programs that perform vital system tasks that are generally reserved for the superuser. |
| /tmp | The /tmp directory is intended for the storage of temporary, transient files created by various programs. Some configurations cause this directory to be emptied each time the system is rebooted. |
| /usr | The /usr directory tree is likely the largest one on a Linux system. It contains all the programs and support files used by regular users. |
| /usr/bin | /usr/bin contains the executable programs installed by the Linux distribution. It is not uncommon for this directory to hold thousands of programs. |
| /usr/lib | The shared libraries for the programs in /usr/bin. |
| /usr/local | The /usr/local tree is where programs that are not included with the distribution but are intended for systemwide use are installed. Programs compiled from source code are normally installed in /usr/local/bin. On a newly installed Linux system, this tree exists, but it will be empty until the system administrator puts something in it. |
| /usr/sbin | Contains more system administration programs. |
| /usr/share | /usr/share contains all the shared data used by programs in /usr/bin. This includes things such as default configuration files, icons, screen backgrounds, sound files, etc. |
| /usr/share/doc | Most packages installed on the system will include some kind of documentation. In /usr/share/doc, we will find documentation files organized by package. |
| /var | With the exception of /tmp and /home, the directories we have looked at so far remain relatively static; that is, their contents don't change. The /var directory tree is where data that is likely to change is stored. Various databases, spool files, user mail, etc. are located here. |
Hard Links
Hard links are the original Unix way of creating links, compared to symbolic
links, which are more modern.
By default, every file has a single hard link that gives the file its name.
When we create a hard link, we create an additional directory entry for a file.
Hard links have two important limitations:

1. A hard link cannot reference a file outside its own file system. This means a link
cannot reference a file that is not on the same disk partition as the link itself.
2. A hard link may not reference a directory.
A hard link is indistinguishable from the file itself. Unlike a symbolic link, when
we list a directory containing a hard link, we will see no special indication of the link.
When a hard link is deleted, the link is removed but the contents of the file itself
continue to exist (that is, its space is not deallocated) until all links to the file are
deleted.
It is important to be aware of hard links because you might encounter them from
time to time, but modern practice prefers symbolic links.
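As a quick illustration (a minimal sketch; fun and fun-hard are the same example names used in the listing further below):

```bash
echo "hello" > fun     # create an ordinary file
ln fun fun-hard        # ln with no options creates a hard link

ls -li fun fun-hard    # both names share one inode number; the link count is now 2
```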

| hard links, inodes and ls -i

When thinking about hard links, it is helpful to imagine that files are made up of
two parts.

1. The data part containing the file's contents.


2. The name part that holds the file's name.
When we create hard links, we are actually creating additional name parts
that all refer to the same data part. The system assigns a chain of disk
blocks to what is called an inode, which is then associated with the name
part. Each hard link therefore refers to a specific inode containing the file's
contents.
The ls command has a way to reveal this information. It is invoked with the
-i option.
[me@linuxbox playground]$ ls -li
total 16
12353539 drwxrwxr-x 2 me me 4096 2018-01-14 16:17 dir1
12353540 drwxrwxr-x 2 me me 4096 2018-01-14 16:17 dir2
12353538 -rw-r--r-- 4 me me 1650 2018-01-10 16:33 fun
12353538 -rw-r--r-- 4 me me 1650 2018-01-10 16:33 fun-hard

In this version of the listing, the first field is the inode number and, as we can
see, both fun and fun-hard share the same inode number, which confirms they
are the same file.

Symbolic Links

lrwxrwxrwx 1 root root 11 2007-08-11 07:34 libc.so.6 -> libc-2.6.so

the first letter of the listing is l and the entry seems to have two filenames. This is a
special kind of a file called a symbolic link (also known as a soft link or symlink). In
most Unix-like systems it is possible to have a file referenced by multiple names.

Symbolic links were created to overcome the limitations of hard links.


Symbolic links work by creating a special type of file that contains a text pointer
to the referenced file or directory.
A file pointed to by a symbolic link, and the symbolic link itself are largely
indistinguishable from one another.
For example, if we write something to the symbolic link, the referenced file is
written to. However when we delete a symbolic link, only the link is deleted, not
the file itself. If the file is deleted before the symbolic link, the link will continue
to exist but will point to nothing. In this case, the link is said to be broken. In
many implementations, the ls command will display broken links in a
distinguishing color, such as red, to reveal their presence.
We can create symbolic links using an absolute path:
$ ln -s /home/beau/foo/bar.txt /home/beau/bar.txt
or using a relative path:
$ ln -s foo/bar.txt bar.txt

| relative path for symbolic links

It's important to realise that the first argument after ln -s is stored as the target
of the symlink. It can be any arbitrary string (with the only restrictions that it can't
be empty and can't contain the NUL character), but at the time the symlink is being
used and resolved, that string is understood as a relative path to the parent
directory of the symlink (when it doesn't start with / ).
So in the above example we could equally do:
$ cd foo
$ ln -s foo/bar.txt ../bar.txt
The stored target foo/bar.txt is resolved relative to the symlink's parent directory
(/home/beau), not relative to the directory we were in when we created the link.

| commands and symbolic links

One thing to remember about symbolic links is that most file operations are
carried out on the link's target, not the link itself. rm is an exception. When we
delete a link, it is the link that is deleted, not the target.

4. manipulating files and directories


cp : Copy files and directories
mv : Move/rename files and directories
mkdir : Create directories
rm : Remove files and directories
ln : Create hard and symbolic links
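A few default-behavior invocations as a quick sketch (the file and directory names are made up):

```bash
mkdir playground                                # create a directory
> playground/file1.txt                          # create an empty file (see I/O redirection below)
cp playground/file1.txt playground/file2.txt    # copy a file
mv playground/file2.txt playground/notes.txt    # move/rename a file
ln -s file1.txt playground/file1-link           # symbolic link; plain ln makes a hard link
rm -r playground                                # remove the directory and everything in it
```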

Wildcards
Using wildcards (which is also known as globbing) allows us to select filenames
based on patterns of characters.

| wildcard | meaning |
| --- | --- |
| * | Matches any characters |
| ? | Matches any single character |
| [characters] | Matches any character that is a member of the set characters |
| [!characters] | Matches any character that is not a member of the set characters |
| [[:class:]] | Matches any character that is a member of the specified class |

| character class | meaning |
| --- | --- |
| [:alnum:] | Matches any alphanumeric character |
| [:alpha:] | Matches any alphabetic character |
| [:digit:] | Matches any numeral |
| [:lower:] | Matches any lowercase letter |
| [:upper:] | Matches any uppercase letter |

examples

| Pattern | Matches |
| --- | --- |
| g* | Any file beginning with g |
| b*.txt | Any file beginning with b, followed by any characters, and ending with .txt |
| Data??? | Any file beginning with Data followed by exactly three characters |
| [abc]* | Any file beginning with either an a, a b, or a c |
| BACKUP.[0-9][0-9][0-9] | Any file beginning with BACKUP. followed by exactly three numerals |
| [[:upper:]]* | Any file beginning with an uppercase letter |
| [![:digit:]]* | Any file not beginning with a numeral |
| *[[:lower:]123] | Any file ending with a lowercase letter or the numerals “1”, “2”, or “3” |

Wildcards can be used with any command that accepts filenames as arguments.
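For example (the filenames here are hypothetical):

```bash
ls *.txt                             # list every file ending in .txt
cp BACKUP.[0-9][0-9][0-9] archive/   # copy the BACKUP.### files into archive/
rm Data???                           # remove files named Data plus exactly three characters
ls [[:upper:]]*                      # list files beginning with an uppercase letter
```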

5. working with commands


type – Indicate how a command name is interpreted
which – Display which executable program will be executed
help – Get help for shell builtins
man – Display a command's manual page
apropos – Display a list of appropriate commands
info – Display a command's info entry
whatis – Display one-line manual page descriptions
alias – Create an alias for a command

What Exactly Are Commands?


A command can be one of four different things:

An executable program
A command built into the shell itself.
bash supports a number of commands internally called shell builtins. The cd
command, for example, is a shell builtin.
A shell function.
An alias.
Aliases are commands that we can define ourselves, built from other
commands.

README and Other Program Documentation Files

Many software packages installed on our system have documentation files residing
in the /usr/share/doc directory. Most of these are stored in plain text format and
can be viewed with less. Some of the files are in HTML format and can be viewed
with a web browser. We may encounter some files ending with a “.gz” extension.
This indicates that they have been compressed with the gzip compression program.
The gzip package includes a special version of less called zless that will display
the contents of gzip-compressed text files.
Creating Our Own Commands with alias
think of a name for the command (if the name is taken, the alias will take the place
of the other command; use type to verify)
create the alias with alias name='string' where name is the name we have
chosen and string is the commands we want to alias.
the alias will vanish when we exit the shell unless we add it to the environment
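A minimal sketch (the alias name foo and the command string are just examples):

```bash
type foo                        # check that the name is not already in use
alias foo='cd /usr; ls; cd -'   # several commands joined with ";"
foo                             # run the alias
unalias foo                     # remove it again
```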

6. I/O Redirection
Standard Input, Output, and Error
Programs send results to standard output (stdout) and error/status messages
to standard error (stderr)
By default, stdout and stderr are linked to the screen, and standard input
(stdin) is linked to the keyboard

Everything is a File

Keeping with the Unix theme, programs treat stdout, stdin, and stderr as files.

Redirecting Standard Output

> redirects stdout to a file instead of screen


e.g. ls -l /usr/bin > ls-output.txt
>> appends stdout to a file instead of overwriting

Create an Empty File

Using just the > redirection operator with no command preceding it will truncate
an existing file or create a new empty file.
e.g. > empty.txt

Redirecting Standard Error


2> redirects stderr to a file
e.g. ls -l /bin/usr 2> ls-error.txt

Redirecting stdout and stderr to One File

>file 2>&1 or &>file redirects both to the same file

Order Matters for >file 2>&1

The redirection of stderr must occur after redirecting stdout or it doesn't work

Disposing of Unwanted Output

Redirect to /dev/null to discard output

Redirecting Standard Input

< redirects a file's contents as stdin


e.g. cat < file.txt

Pipelines |
Connects stdout of one command to stdin of another
e.g. ls /bin /usr/bin | sort

Difference Between > and |

> connects a command to a file


| connects stdout of one command to stdin of another

Filters
Pipelines are often used to perform complex operations on data. It is possible to put
several commands together into a pipeline. Frequently, the commands used this way
are referred to as filters. Filters take input, change it somehow, and then output it.

Commands that transform stdin to stdout, used in pipelines


sort, uniq, wc, grep, head, tail, tee.
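A couple of typical filter pipelines, as a sketch:

```bash
# merge two directory listings, sort them, drop duplicates, and count the result
ls /bin /usr/bin | sort | uniq | wc -l

# keep only the lines containing "zip" and show the first ten of them
ls /bin /usr/bin | sort | uniq | grep zip | head
```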

7. Seeing the World as the Shell Sees It


Expansion
echo - Display a Line of Text
echo is a shell builtin that prints its arguments to stdout

Pathname Expansion
Shell expands wildcards like * into matching filenames before executing
command
e.g. echo * expands to list of files in current directory

Hidden Files and Pathname Expansion

Basic echo * does not show hidden files starting with .


To expand hidden files: echo .[!.]*
Or use ls -A to list almost all files including hidden ones

Tilde Expansion
~ expands to current user's home directory
~user expands to specified user's home directory

Arithmetic Expansion

$((expression)) allows arithmetic expansion


e.g. echo $((2+2)) prints 4
Grouping in Arithmetic Expansion

Use $(( ... )) nesting to group subexpressions


e.g. $(($((5**2)) * 3)) evaluates to 75
The inner $(( )) can be replaced with plain parentheses, e.g. $(((5**2) * 3))

Brace Expansion

{A,B,C} expands to space separated list like A B C


Can specify ranges like {1..5} or {01..15}
Supports nesting like a{A{1,2},B{3,4}}b
Useful for creating lists of files/directories
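For instance (a small sketch; the year and month ranges are arbitrary):

```bash
echo a{A{1,2},B{3,4}}b          # prints: aA1b aA2b aB3b aB4b
mkdir -p {2023,2024}-{01..12}   # creates 2023-01 ... 2024-12 in one go
```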

Parameter Expansion

$var expands to value of variable var


If var is misspelled, expands to an empty string

Command Substitution

$(command) substitutes output of command


e.g. file $(ls /usr/bin/* | grep zip)
Old syntax uses backticks: `command`

Quoting
Double Quotes "
Suppresses word-splitting, pathname, tilde, brace expansions
Allows parameter, arithmetic and command substitutions

Single Quotes '


Suppresses all expansions

Escaping with \
\char escapes special meaning of char
e.g. \$5.00 prints literal $5.00

Within Single Quotes

Inside single quotes, backslash loses special meaning

Backslash Escape Sequences

\a bell, \b backspace, \n newline, etc.


echo -e enables interpretation of escape sequences
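For example (a quick sketch):

```bash
echo -e "Line one\nLine two\tindented"   # \n newline, \t tab
sleep 3; echo -e "Time's up\a"           # \a rings the terminal bell after 3 seconds
```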

8. Advanced Keyboard Tricks


Command Line Editing
Cursor Movement

Key Action
Ctrl-a Move cursor to beginning of line
Ctrl-e Move cursor to end of line
Ctrl-f Move cursor forward one char (right arrow)
Ctrl-b Move cursor backward one char (left arrow)
Alt-f Move cursor forward one word
Alt-b Move cursor backward one word
Ctrl-l Clear screen (same as clear command)

Modifying Text
Key Action
Ctrl-d Delete character at cursor
Ctrl-t Transpose (swap) character at cursor with previous
Alt-t Transpose words at cursor location
Alt-l Convert word at cursor to lowercase
Alt-u Convert word at cursor to uppercase

Cutting and Pasting (Killing and Yanking)


Text cut is stored in a buffer called the kill-ring

Key Action
Ctrl-k Kill (cut) text from cursor to end of line
Ctrl-u Kill text from cursor to start of line
Alt-d Kill text from cursor to end of current word
Alt-Backspace Kill text from cursor to start of current word
Ctrl-y Yank (paste) text from kill-ring at cursor

The Meta Key

The documentation refers to the Alt key as the "meta" key

Completion
Pressing Tab attempts to auto-complete current word
Works for paths, variables (start with $), usernames (start with ~ ), hostnames
(start with @), commands
Press Tab twice to see possible completions

Programmable Completion

Allows adding custom completion rules using shell functions, often done by
distro providers
Using History
bash maintains a history of commands that have been entered. This list of
commands is kept in our home directory in a file called .bash_history .

Viewing History

history shows previous commands


Default stores last 500 commands (often 1000 in modern distros)

Searching History
history | grep pattern searches for pattern
Can use !n to execute command #n from history
Ctrl-r begins incremental reverse search
Ctrl-r again finds next match
Ctrl-j copies match to command line

History Expansion

!! repeats last command


!n repeats command #n
!string repeats last command starting with string
!?string repeats last command containing string

Other
script command records entire terminal session to file

9. permissions
Unix/Linux Permissions and Access Control
Multi-User Systems

Unix-like operating systems are multi-user systems, allowing multiple users to
use the computer simultaneously
This requires mechanisms to protect users from each other and control access
to files and resources
Users have separate accounts with unique user IDs (UIDs) and are assigned to
groups with group IDs (GIDs)

User Identity

The id command displays information about the current user's identity:

$ id
uid=1000(username) gid=1000(username)
groups=1000(username),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),113(lpadmin),128(sambashare)

This shows the user's UID, primary GID, and supplementary groups they belong
to
User account information is stored in /etc/passwd
Group information is stored in /etc/group , or we can query it with getent group

System Users

Besides regular user accounts, there are also system users for various services
and processes

File Permissions

File permissions control who can read, write, and execute files
Permissions are set for three categories: owner, group, and others
Basic permission types:
Read (r)
Write (w)
Execute (x)

Viewing Permissions

Use ls -l to view file permissions:

$ ls -l file.txt
-rw-rw-r-- 1 user group 0 Mar 6 14:52 file.txt

The first 10 characters show the file type and permissions:


First character: file type (- for regular file, d for directory)
Next 3 characters: owner permissions
Next 3 characters: group permissions
Last 3 characters: others permissions

Changing Permissions

The chmod command changes file permissions


Two ways to specify permissions:
1. Octal notation
2. Symbolic notation

Octal Notation

Uses 3 octal digits to represent permissions for owner, group, and others
Each digit is the sum of:
4 (read)
2 (write)
1 (execute)

Example:
chmod 644 file.txt

Sets read/write for owner, read-only for group and others

Symbolic Notation

Uses letters to represent permissions:


u (user/owner)
g (group)
o (others)
a (all)
And symbols:
+ (add permission)
- (remove permission)
= (set exact permission)

Example:

chmod u+x,go-w file.txt

Adds execute for owner, removes write for group and others

Using Symbolic Notation

Symbolic notation allows changing specific permissions without affecting others

Special Permissions

setuid (octal 4000): Run executable as file owner


setgid (octal 2000): For directories, new files inherit directory's group
sticky bit (octal 1000): For directories, only file owner can delete/rename files

Default Permissions
The umask command sets default permissions for new files/directories
It specifies which permissions to remove from the default (666 for files, 777 for
directories)

Example:

umask 022

This results in 644 (rw-r--r--) for new files and 755 (rwxr-xr-x) for new directories

Changing Identities
Three ways to change user identity:

1. Log out and log in as different user


2. Use su command
3. Use sudo command

su Command

Starts a new shell as another user (default is root)


Syntax: su [-[l]] [user]
-l or - option starts a login shell (loads user's environment)

Example:

su -

This starts a root shell

sudo Command

Executes a single command with privileges of another user (usually root)


Configured by administrator in /etc/sudoers
Uses user's own password for authentication
Example:

sudo command

sudo vs su

sudo doesn't start a new shell or load the target user's environment, unlike su

Changing File Ownership

chown Command

Changes file owner and/or group


Requires superuser privileges
Syntax: chown [owner][:[group]] file...

Example:

sudo chown newuser:newgroup file.txt

chgrp Command

Changes only the group ownership of a file


Older Unix systems used this instead of chown for changing group

Practical Example: Setting Up a Shared Directory

1. Create a new group for sharing


2. Add users to the group
3. Create the shared directory
4. Set appropriate ownership and permissions:

sudo mkdir /shared/directory


sudo chown :shared_group /shared/directory
sudo chmod 775 /shared/directory
sudo chmod g+s /shared/directory

5. Set umask for users to 002 to allow group write permissions

Permanent umask Changes

Remember to make umask changes permanent by adding them to shell configuration
files

Changing Passwords
Use the passwd command to change passwords
Without arguments, changes current user's password
Superusers can change other users' passwords: sudo passwd username

Password Policies

The passwd command enforces password strength policies to prevent weak
passwords

User and Group Management Commands


adduser
High-level command for creating new user accounts
Often more user-friendly than useradd
Typically creates home directory and copies skeleton files

Example:

sudo adduser newusername

useradd
Low-level command for creating user accounts
Provides more fine-grained control over account creation
Doesn't create home directory by default (use -m option)

sudo useradd -m -s /bin/bash newusername

useradd vs adduser: On some systems, adduser is a friendlier front-end
to useradd

groupadd
Creates a new group on the system
Useful for organizing users with similar access needs

Example:

sudo groupadd newgroupname

Group Management: After creating a group, use usermod to add users to it:

sudo usermod -aG groupname username

10. Processes
How Processes Work
Linux uses processes to manage programs waiting for CPU time
Kernel initiates a few processes and launches init
init runs scripts to start system services
Many services run as daemon programs in the background
Parent processes can produce child processes
Kernel tracks information about each process:
Process ID (PID)
Memory usage
Readiness to execute
Owner and user IDs

Viewing Processes
Using ps Command

Basic usage: ps
Shows processes associated with current terminal session
Options:
x (not -x) : Show all processes owned by user
aux : Show processes for all users

ps Output

The output includes fields like PID, TTY, TIME, and CMD

Using top Command


Provides a dynamic, real-time view of running processes
Updates every 3 seconds by default
Display includes:
System summary (uptime, load average, CPU usage, memory usage)
Table of processes sorted by CPU activity

top Navigation

Use 'h' for help screen and 'q' to quit top

Controlling Processes
Background and Foreground
Run a process in the background: command &
List background jobs: jobs
Bring a background job to the foreground with fg: fg %job_number
Send a foreground process to the background:
1. Suspend it with Ctrl-z
2. Resume it in the background with bg: bg %job_number
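A short job-control sketch (xlogo is just an example of a long-running program):

```bash
xlogo &      # start in the background; the shell prints a job number and PID
jobs         # list background jobs, e.g. [1]+ Running xlogo &
fg %1        # bring job 1 to the foreground
# press Ctrl-z to suspend it, then:
bg %1        # let it continue running in the background
```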

Stopping and Killing Processes


Interrupt (often terminates) a process: Ctrl-c
Suspend a process: Ctrl-z
Terminate a process: kill PID

Signals
Operating system communicates with programs using signals
Common signals:
SIGHUP (1): Hangup
SIGINT (2): Interrupt (Ctrl-c)
SIGKILL (9): Force termination
SIGTERM (15): Terminate gracefully
SIGCONT (18): Continue after stop
SIGSTOP (19): Stop process
SIGTSTP (20): Terminal stop (Ctrl-z)

Sending Signals
Use kill command: kill [-signal] PID
Send to multiple processes: killall [-u user] [-signal] name
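For example (the PID 13601 is made up):

```bash
kill 13601           # sends SIGTERM by default
kill -SIGKILL 13601  # same as kill -9 13601; use only as a last resort
killall xlogo        # signal every process named xlogo that we own
```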

SIGKILL Usage
SIGKILL (9) should be used as a last resort as it doesn't allow the process to
clean up

System Shutdown
Commands: halt , poweroff , reboot , shutdown
shutdown allows specifying action and delay

Additional Process-Related Commands


pstree : Shows process tree
vmstat : Displays system resource usage
xload : Graphical system load display
tload : Text-based system load display

Command Line vs GUI Tools

Command line tools are preferred for process management due to their speed
and low resource usage


11. The Environment in Shell


What is the Environment?
The shell maintains information during a session called the environment
Programs use data from the environment to determine system configuration
Some programs look for environment values to adjust their behavior
We can customize our shell experience using the environment

Key Commands for Working with the Environment


printenv: Print part or all of the environment
set: Set shell options
export: Export environment to subsequently executed programs
alias: Create an alias for a command

Types of Data Stored in the Environment


1. Environment variables
2. Shell variables (specific to bash)
3. Aliases
4. Shell functions (related to shell scripting)

Examining the Environment


Use set builtin in bash to show both shell and environment variables
Use printenv to display only environment variables
Pipe output to less for easier viewing: printenv | less

Viewing Specific Variables

To see the value of a specific variable, use:


printenv VARIABLE_NAME or echo $VARIABLE_NAME

Important Environment Variables


| Variable | Contents |
| --- | --- |
| DISPLAY | Name of the display for the graphical environment |
| EDITOR | Default text editor |
| SHELL | User's default shell program |
| HOME | Pathname of user's home directory |
| LANG | Character set and collation order of the language |
| PATH | Colon-separated list of directories for executable programs |
| PS1 | Defines the shell prompt (prompt string 1) |
| USER | Username |

How the Environment is Established


Types of Shell Sessions

1. Login shell session


Prompted for username and password
Occurs when starting a virtual console session
2. Non-login shell session
Typically occurs when launching a terminal session in GUI

Startup Files for Login Shell Sessions

| File | Contents |
| --- | --- |
| /etc/profile | Global configuration for all users |
| ~/.bash_profile | User's personal startup file |
| ~/.bash_login | Read if ~/.bash_profile not found |
| ~/.profile | Read if neither ~/.bash_profile nor ~/.bash_login is found |

Startup Files for Non-Login Shell Sessions

| File | Contents |
| --- | --- |
| /etc/bash.bashrc | Global configuration for all users |
| ~/.bashrc | User's personal startup file |

Inheritance in Non-Login Shells

Non-login shells inherit the environment from their parent process, usually a
login shell.
Modifying the Environment
Which Files to Modify

For adding directories to PATH or defining additional environment variables:


Use ~/.bash_profile (or equivalent, e.g., ~/.profile on Ubuntu)
For everything else:
Use ~/.bashrc

System-wide Changes

Only modify files in /etc if you are the system administrator and need to change
defaults for all users.

Using a Text Editor

1. Create a backup of the file before editing:


cp .bashrc .bashrc.bak
2. Open the file with a text editor (e.g., nano):
nano .bashrc
3. Make desired changes
4. Save and exit the editor

Example Modifications to .bashrc

# Change umask to make directory sharing easier


umask 0002

# Ignore duplicates in command history and increase


# history size to 1000 lines
export HISTCONTROL=ignoredups
export HISTSIZE=1000

# Add some helpful aliases


alias l.='ls -d .* --color=auto'
alias ll='ls -l --color=auto'
Commenting Your Changes

Always add comments to explain your modifications. This helps you remember
the purpose of changes in the future.

Activating Changes
Changes to .bashrc won't take effect until:

1. Starting a new terminal session, or


2. Forcing bash to reread the file with source:
source ~/.bashrc


12. A Gentle Introduction to vi


Why Learn vi?
1. Availability: Almost always present on Unix-like systems
2. Lightweight and fast: Quicker to start than graphical editors
3. Efficiency: Designed for typing speed, hands never leave keyboard

POSIX Requirement

The POSIX standard requires vi to be present on Unix systems

Background
Created in 1976 by Bill Joy
Name derives from "visual" editor (vs. line editors)
Most Linux distributions use vim (Vi IMproved) by Bram Moolenaar
Starting and Stopping vi
Start: vi [filename]
Exit:
:q (quit)
:q! (quit without saving)

Lost in vi?

Press the Esc key twice to return to command mode

Editing Modes
1. Command Mode: Default mode, keys are commands
2. Insert Mode: For entering text
Enter: Press i
Exit: Press Esc

Basic Commands
Saving Work

:w - Write (save) the file


:wq or ZZ - Save and quit

Cursor Movement

| Key | Moves Cursor |
| --- | --- |
| l or right arrow | Right one character |
| h or left arrow | Left one character |
| j or down arrow | Down one line |
| k or up arrow | Up one line |
| 0 (zero) | To beginning of current line |
| ^ | To first non-whitespace character on current line |
| $ | To end of current line |
| w | To beginning of next word |
| b | To beginning of previous word |
| G | To last line of file |
| nG | To line n (e.g., 5G goes to line 5) |

Command Prefixes

Many vi commands can be prefixed with a number to repeat the action (e.g., 5j
moves down 5 lines)

Editing Text

Command Action
i Insert at cursor
A Append at end of line
o Open line below cursor
O Open line above cursor
x Delete character at cursor
dd Delete current line
yy Yank (copy) current line
p Paste after cursor
P Paste before cursor
u Undo last change

Search and Replace

/pattern - Search forward for pattern


?pattern - Search backward for pattern
n - Repeat search in same direction
N - Repeat search in opposite direction
:%s/old/new/g - Replace all occurrences of 'old' with 'new' in entire file
The %s specifies the operation. In this case, it's substitution (search-and-replace).
The g means “global” in the sense that the search-and-replace is
performed on every instance of the search string in the line. If omitted, only
the first instance of the search string on each line is replaced.
:%s/old/new/gc - Replace with confirmation

Working with Multiple Files


Open multiple files: vi file1 file2 file3
Switch between files:
:bn (next file)
:bp (previous file)
List open files: :buffers
Switch to specific file: :buffer n (where n is the buffer number)

Unsaved Changes

vi prevents switching files with unsaved changes. Use ! to force (e.g., :bn! )

Copying Between Files

1. Yank text in first file


2. Switch to second file
3. Paste text

Inserting Entire File

:r filename - Reads contents of 'filename' and inserts below cursor

Summing Up
Learning vi/vim is a valuable skill for Linux users. Its influence extends to many other
Unix programs, making the time investment worthwhile.

Practice

Regular use of vi will help solidify your skills and increase efficiency over time


13. Customizing the Shell Prompt


Anatomy of a Prompt
The default prompt typically contains username, hostname, and current working
directory
Defined by environment variable PS1 (prompt string 1)
View contents with echo $PS1

PS1 Variable

PS1 contains special backslash-escaped characters that expand to various
values

Special Characters in Prompts


| Sequence | Value Displayed |
| --- | --- |
| \a | ASCII bell (computer beep) |
| \d | Current date (e.g. "Mon May 26") |
| \h | Hostname without domain name |
| \H | Full hostname |
| \j | Number of jobs in current shell session |
| \l | Name of current terminal device |
| \n | Newline |
| \r | Carriage return |
| \s | Name of shell program |
| \t | Current time (24-hour HH:MM:SS) |
| \T | Current time (12-hour format) |
| \@ | Current time (12-hour AM/PM) |
| \A | Current time (24-hour HH:MM) |
| \u | Username of current user |
| \v | Shell version number |
| \V | Shell version and release numbers |
| \w | Current working directory |
| \W | Basename of current working directory |
| \! | History number of current command |
| \# | Command number in this shell session |
| \$ | # for root, $ for others |
| \[ | Start of non-printing characters (like \a for example) |
| \] | End of non-printing characters |

Trying Alternative Prompt Designs


1. Back up existing prompt:

ps1_old="$PS1"

2. Restore original prompt:

PS1="$ps1_old"

3. Examples of custom prompts:


Minimal prompt: PS1="\$ "
Prompt with bell: PS1="\[\a\]\$ "
Informative prompt: PS1="\A \h \$ "
Similar to original: PS1="<\u@\h \W>\$ "

Experiment

Try different combinations of special characters to create your ideal prompt

Adding Color to Prompts


Use ANSI escape codes for color control
Format: \033[attribute;text_color;background_colorm

Text Colors

Sequence Text Color Sequence Text Color


\033[0;30m Black \033[1;30m Dark Gray
\033[0;31m Red \033[1;31m Light Red
\033[0;32m Green \033[1;32m Light Green
\033[0;33m Brown \033[1;33m Yellow
\033[0;34m Blue \033[1;34m Light Blue
\033[0;35m Purple \033[1;35m Light Purple
\033[0;36m Cyan \033[1;36m Light Cyan
\033[0;37m Light Gray \033[1;37m White

Background Colors

Sequence Background Color Sequence Background Color


\033[0;40m Black \033[0;44m Blue
\033[0;41m Red \033[0;45m Purple
\033[0;42m Green \033[0;46m Cyan
\033[0;43m Brown \033[0;47m Light Gray
Example of a red prompt:

PS1="\[\033[0;31m\]<\u@\h \W>\$\[\033[0m\] "

Color Reset

Always end colored prompts with \[\033[0m\] to reset text color for user input

Moving the Cursor


Escape codes can position the cursor for advanced prompt designs:

Escape Code Action


\033[l;cH Move cursor to line l, column c
\033[nA Move cursor up n lines
\033[nB Move cursor down n lines
\033[nC Move cursor forward n characters
\033[nD Move cursor backward n characters
\033[2J Clear screen and move to upper-left
\033[K Clear from cursor to end of line
\033[s Store cursor position
\033[u Recall stored cursor position

Example of a complex prompt with a clock:

PS1="\[\033[s\033[0;0H\033[0;41m\033[K\033[1;33m\t\033[0m\033[u\]
<\u@\h \W>\$ "

Compatibility

Some terminal emulators may not support all cursor movement codes

Saving the Prompt


To make a custom prompt permanent:

1. Add the PS1 definition to your .bashrc file


2. Export the PS1 variable

Example:

PS1="\[\033[s\033[0;0H\033[0;41m\033[K\033[1;33m\t\033[0m\033[u\]
<\u@\h \W>\$ "
export PS1

Further Reading
Bash Prompt HOWTO: http://tldp.org/HOWTO/Bash-Prompt-HOWTO/
ANSI Escape Codes: http://en.wikipedia.org/wiki/ANSI_escape_code

Experimentation

Customizing the prompt can be a fun way to personalize your shell experience
and potentially increase productivity


14. Package Management in Linux


Packaging Systems
Two main camps: Debian (.deb) and Red Hat (.rpm)
Some exceptions: Gentoo, Slackware, and Arch

Packaging System Distributions (Partial Listing)


Debian Style (.deb) Debian, Ubuntu, Linux Mint, Raspbian
Red Hat Style (.rpm) Fedora, CentOS, Red Hat Enterprise Linux, OpenSUSE
How Package Systems Work
Package Files

Basic unit of software in a packaging system


Compressed collection of files comprising the software package
Includes metadata and pre/post-installation scripts
Created by package maintainers

Repositories
Central locations containing thousands of packages
Different repositories for various stages of software development (e.g., testing,
development)
Third-party repositories for legally restricted software

Third-party Repositories

These are often needed for software that can't be included in main distributions
due to legal reasons, such as encrypted DVD support in the US.

Dependencies

Programs often rely on shared libraries and other software components


Package management systems handle dependency resolution

Package Management Tools


Two types of tools:

1. Low-level tools: Install and remove package files


2. High-level tools: Perform metadata searching and dependency resolution
| Distributions | Low-Level Tools | High-Level Tools |
| --- | --- | --- |
| Debian style | dpkg | apt, apt-get, aptitude |
| Fedora, Red Hat Enterprise Linux, CentOS | rpm | yum, dnf |

Common Package Management Tasks


Finding a Package in a Repository

Style Command(s)
Debian apt-get update
apt-cache search search_string
Red Hat yum search search_string

Example: yum search emacs

Installing a Package from a Repository

Style Command(s)
Debian apt-get update
apt-get install package_name
Red Hat yum install package_name

Example: apt-get update; apt-get install emacs

Installing a Package from a Package File

Style Command(s)
Debian dpkg -i package_file
Red Hat rpm -i package_file

Dependency Resolution
Using low-level tools like rpm doesn't perform dependency resolution. If there
are missing dependencies, the installation will fail.

Removing a Package

Style Command(s)
Debian apt-get remove package_name
Red Hat yum erase package_name

Updating Packages from a Repository

Style Command(s)
Debian apt-get update; apt-get upgrade
Red Hat yum update

Upgrading a Package from a Package File

Style Command(s)
Debian dpkg -i package_file
Red Hat rpm -U package_file

Debian Upgrade

dpkg doesn't have a specific upgrade option, unlike rpm.

Listing Installed Packages

Style Command(s)
Debian dpkg -l
Red Hat rpm -qa
Determining Whether a Package is Installed

Style Command(s)
Debian dpkg -s package_name
Red Hat rpm -q package_name

Displaying Information About an Installed Package

Style Command(s)
Debian apt-cache show package_name
Red Hat yum info package_name

Finding Which Package Installed a File

Style Command(s)
Debian dpkg -S file_name
Red Hat rpm -qf file_name
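For instance, to ask which package owns a particular file (a sketch; /usr/bin/vim is just an example path):

```bash
dpkg -S /usr/bin/vim    # Debian style
rpm -qf /usr/bin/vim    # Red Hat style
```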

15. Storage Media in Linux


Key Concepts:
Linux has powerful capabilities for handling various storage devices:
Physical storage (hard disks)
Network storage
Virtual storage (RAID, LVM)
This chapter focuses on introducing key concepts and commands for managing
storage devices

Important Commands:
mount - Mount a file system
umount - Unmount a file system
fsck - Check and repair a file system
fdisk - Manipulate disk partition table
mkfs - Create a file system
dd - Convert and copy a file
genisoimage (formerly mkisofs ) - Create an ISO 9660 image file
wodim (formerly cdrecord ) - Write data to optical storage media
md5sum - Calculate an MD5 checksum

Mounting and Unmounting Storage Devices


Mounting attaches a storage device to the file system tree
Linux maintains a single file system tree with devices attached at various points
The /etc/fstab file lists devices to be mounted at boot time

Example /etc/fstab entry:

LABEL=/12 / ext4 defaults 1 1

Fields in /etc/fstab :

| Field | Contents | Description |
| --- | --- | --- |
| 1 | Device | Device file or label |
| 2 | Mount point | Directory where device is attached |
| 3 | File system type | e.g. ext4, vfat, ntfs |
| 4 | Options | Mount options |
| 5 | Frequency | Backup frequency for dump |
| 6 | Order | fsck check order |

Device Naming

Modern Linux distributions often use labels or UUIDs instead of device files in
/etc/fstab for more reliable mounting
Viewing Mounted File Systems
Use the mount command without arguments:

mount

Output format:
device on mount_point type filesystem_type (options)

Manually Mounting a CD-ROM


1. Determine device name (e.g. /dev/sdc )
2. Create mount point: mkdir /mnt/cdrom
3. Mount as root:

mount -t iso9660 /dev/sdc /mnt/cdrom

Unmounting
Use the umount command:

umount /dev/sdc

Busy Devices

A device cannot be unmounted if it's in use. Change working directory away


from mount point before unmounting.

Importance of Unmounting

Unmounting ensures all data is written to the device before removal, preventing
file system corruption.
Determining Device Names
Device files are located in /dev directory
Common naming patterns:

Pattern Device Type


/dev/fd* Floppy disk drives
/dev/hd* IDE (PATA) disks (older systems)
/dev/sd* SCSI disks, including SATA and USB storage
/dev/sr* Optical drives (CD/DVD)

To determine removable device name:

1. Run tail -f /var/log/messages or tail -f /var/log/syslog


2. Plug in the device
3. Look for kernel messages indicating device name (e.g. [sdb] )

Creating New File Systems


Two steps:

1. (Optional) Create new partition layout


2. Create new file system on the partition

Manipulating Partitions with fdisk

sudo fdisk /dev/sdb

Common fdisk commands:

p - print partition table


n - add new partition
d - delete partition
t - change partition type
w - write changes and exit
q - quit without saving

Data Loss

Be extremely careful when using fdisk. Specifying the wrong device can result in
data loss.

Creating a File System with mkfs

sudo mkfs -t ext4 /dev/sdb1

Replace ext4 with desired file system type (e.g. vfat for FAT32)

Testing and Repairing File Systems


fsck checks file system integrity
Usually run automatically at boot
Can also be run manually on unmounted file systems:

sudo fsck /dev/sdb1

Recovered Files

Recovered file fragments are placed in the lost+found directory of the file system

Moving Data Directly to/from Devices


The dd command copies data at the block level:

dd if=input_file of=output_file [bs=block_size [count=blocks]]

Examples:

Cloning a drive: dd if=/dev/sdb of=/dev/sdc


Creating drive image: dd if=/dev/sdb of=flash_drive.img

Destructive Command

Double-check input and output specifications when using dd to avoid data loss

Creating CD-ROM Images


1. Create ISO image
2. Write image to CD-ROM

Creating ISO from existing CD-ROM

dd if=/dev/cdrom of=ubuntu.iso

Creating ISO from files

genisoimage -o cd-rom.iso -R -J ~/cd-rom-files

-R adds Rock Ridge extensions (long filenames, POSIX permissions)


-J adds Joliet extensions (long filenames for Windows)

Mounting an ISO image

mkdir /mnt/iso_image
mount -t iso9660 -o loop image.iso /mnt/iso_image

Writing CD-ROM Images


Blanking a rewritable CD-RW

wodim dev=/dev/cdrw blank=fast


Writing an image to CD-R/CD-RW

wodim dev=/dev/cdrw image.iso

Common options:

-v for verbose output


-dao for disc-at-once mode (for commercial reproduction)

Verifying ISO Integrity


Use md5sum to generate and compare checksums:

md5sum image.iso

Compare result with provided checksum from distributor.

To verify written media:

md5sum /dev/cdrom

For DVDs, calculate the exact number of blocks:

md5sum dvd-image.iso; dd if=/dev/dvd bs=2048 count=$(( $(stat -c "%s" dvd-image.iso) / 2048 )) | md5sum


16. Networking in Linux


Introduction
Linux can be used to build various networking systems and appliances
Includes firewalls, routers, name servers, NAS boxes, etc.
Many commands available for configuring and controlling networks
This chapter focuses on common commands for monitoring networks and
transferring files

Key Networking Commands


ping - Send ICMP ECHO_REQUEST to network hosts
traceroute - Print route packets trace to a network host
ip - Show/manipulate routing, devices, policy routing and tunnels
netstat - Print network connections, routing tables, interface statistics, etc.
ftp - Internet file transfer program
wget - Non-interactive network downloader
ssh - OpenSSH SSH client (remote login program)

Background Knowledge

The chapter assumes basic familiarity with:

IP addresses
Host and domain names
Uniform Resource Identifiers (URIs)

Installing Commands

Some commands may require installing additional packages from your
distribution's repositories. Some may also require superuser privileges to
execute.

Examining and Monitoring a Network


ping
Most basic network command
Sends ICMP ECHO_REQUEST packet to specified host
Most network devices will reply, verifying connection
Usage: ping hostname
Continues sending packets at 1 second intervals until interrupted
After interruption, prints performance statistics

Example output:

[me@linuxbox ~]$ ping linuxcommand.org


PING linuxcommand.org (66.35.250.210) 56(84) bytes of data.
64 bytes from vhost.sourceforge.net (66.35.250.210): icmp_seq=1 ttl=43
time=107 ms
64 bytes from vhost.sourceforge.net (66.35.250.210): icmp_seq=2 ttl=43
time=108 ms
...
--- linuxcommand.org ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 6010ms
rtt min/avg/max/mdev = 105.647/107.052/108.118/0.824 ms

Network Performance

A properly performing network will show 0% packet loss. Successful pings
indicate that network interface cards, cabling, routing, and gateways are
generally working well.

Blocked ICMP Traffic

It's possible to configure network devices to ignore ICMP packets for security
reasons. Firewalls may also block ICMP traffic.

traceroute
Lists all "hops" network traffic takes to reach specified host
Some systems use similar tracepath program instead
Usage: traceroute hostname
Example output:

[me@linuxbox ~]$ traceroute slashdot.org


traceroute to slashdot.org (216.34.181.45), 30 hops max, 40 byte
packets
1 ipcop.localdomain (192.168.1.1) 1.066 ms 1.366 ms 1.720 ms
2 * * *
3 ge-4-13-ur01.rockville.md.bad.comcast.net (68.87.130.9) 14.622 ms
14.885 ms 15.169 ms
...
16 slashdot.org (216.34.181.45) 42.727 ms 42.016 ms 41.437 ms

Interpreting traceroute Output

Each line represents a router in the path


Shows hostname, IP address, and 3 time samples
Asterisks indicate no response from that router
Can sometimes overcome blocked info with -T or -I options

ip
Multi-purpose network configuration tool
Uses full range of networking features in modern Linux kernels
Replaces deprecated ifconfig program
Can examine network interfaces and routing table
Usage: ip a (to show network interfaces)

Example output:

[me@linuxbox ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
state UP group default qlen 1000
link/ether ac:22:0b:52:cf:84 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.14/24 brd 192.168.1.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::ae22:bff:fe52:cf84/64 scope link
valid_lft forever preferred_lft forever

Key Things to Check

When doing casual network diagnostics with ip a :

Look for "UP" in the first line for each interface (indicates enabled)
Check for valid IP address in "inet" field
For DHCP systems, a valid IP here verifies DHCP is working

netstat

Examines various network settings and statistics


Many options available for different views
Usage:
netstat -ie (examine network interfaces)
netstat -r (display kernel's network routing table)

Example routing table output:

[me@linuxbox ~]$ netstat -r


Kernel IP routing table
Destination Gateway Genmask Flags MSS Window
irtt Iface
192.168.1.0 * 255.255.255.0 U 0 0
0 eth0
default 192.168.1.1 0.0.0.0 UG 0 0
0 eth0

Interpreting Routing Table

Destinations ending in .0 refer to networks, not individual hosts


Gateway shows name/IP of router used to reach destination
Asterisk in Gateway means no gateway needed
"default" destination means any traffic not otherwise listed in table

Transporting Files Over a Network


ftp (File Transfer Protocol)

Classic program for downloading files over Internet


Uses File Transfer Protocol
Not secure - sends account names and passwords in cleartext
Most Internet FTP is done via anonymous FTP servers
Interactive program with various commands

Example session:

[me@linuxbox ~]$ ftp fileserver


Connected to fileserver.localdomain.
220 (vsFTPd 2.0.1)
Name (fileserver:me): anonymous
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> cd pub/cd_images/Ubuntu-18.04
250 Directory successfully changed.
ftp> ls
200 PORT command successful. Consider using PASV.
150 Here comes the directory listing.
-rw-rw-r-- 1 500 500 733079552 Apr 25 03:53 ubuntu-18.04-desktop-
amd64.iso
226 Directory send OK.
ftp> lcd Desktop
Local directory now /home/me/Desktop
ftp> get ubuntu-18.04-desktop-amd64.iso
local: ubuntu-18.04-desktop-amd64.iso remote: ubuntu-18.04-desktop-
amd64.iso
200 PORT command successful. Consider using PASV.
150 Opening BINARY mode data connection for ubuntu-18.04-desktop-
amd64.iso (733079552 bytes).
226 File send OK.
733079552 bytes received in 68.56 secs (10441.5 kB/s)
ftp> bye

Common FTP Commands:

Command Meaning
ftp fileserver Connect to FTP server
anonymous Login name for anonymous access
cd directory Change directory on remote system
ls List directory on remote system
lcd directory Change directory on local system
get file Transfer file from remote to local system
bye Log off and end FTP session

FTP Help

Type help at the ftp> prompt to see a list of supported commands

Alternative FTP Client


lftp is a more advanced FTP client with additional features like multiple
protocol support, automatic retry, background processes, and tab completion

wget
Popular command-line program for file downloading
Works with both web and FTP sites
Can download single files, multiple files, or entire sites
Usage: wget URL

Example:

[me@linuxbox ~]$ wget http://linuxcommand.org/index.php


--11:02:51-- http://linuxcommand.org/index.php
=> `index.php'
Resolving linuxcommand.org... 66.35.250.210
Connecting to linuxcommand.org|66.35.250.210|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
[ <=> ] 3,120 --.--K/s
11:02:51 (161.75 MB/s) - `index.php' saved [3120]

wget Features

wget has many useful options including:

Recursive downloading
Background downloading (continue after logging off)
Resuming partially downloaded files

Secure Communication with Remote Hosts


ssh (Secure Shell)
Provides secure encrypted communication with remote hosts
Solves two key problems:
1. Authenticates that remote host is who it claims to be
2. Encrypts all communications between local and remote hosts
Consists of SSH server (on remote host) and SSH client (on local system)
Most distributions use OpenSSH implementation

Usage: ssh remote-sys

First-time connection example:

[me@linuxbox ~]$ ssh remote-sys


The authenticity of host 'remote-sys (192.168.1.4)' can't be
established.
RSA key fingerprint is
41:ed:7a:df:23:19:bf:3c:a5:17:bc:61:b3:7f:d9:bb.
Are you sure you want to continue connecting (yes/no)?

Changed Host Key

If you see a warning about changed host key, it could indicate:

1. A potential man-in-the-middle attack (rare)


2. The remote system has been changed (e.g., OS reinstall)
Always check with the system administrator when this occurs

Connecting with Different Username

To connect as a different user on the remote system:


ssh username@remote-sys

SSH Tunneling

SSH creates an encrypted tunnel between local and remote systems


Can be used to securely transmit other network traffic
Common use: Allowing X Window system traffic

Example of running X client program remotely:

[me@linuxbox ~]$ ssh -X remote-sys


me@remote-sys's password:
Last login: Mon Sep 08 13:23:11 2016
[me@remote-sys ~]$ xload

X11 Forwarding

You may need to use -Y instead of -X on some systems for X11 forwarding

scp (Secure Copy)


Part of OpenSSH package
Uses SSH-encrypted tunnel to copy files across network
Similar to cp command, but can work with remote hosts
Usage: scp source destination

Example (copying from remote to local):

[me@linuxbox ~]$ scp remote-sys:document.txt .


me@remote-sys's password:
document.txt 100% 5581 5.5KB/s 00:00

sftp (Secure File Transfer Protocol)


Also part of OpenSSH package
Secure replacement for ftp program
Works similarly to ftp, but uses SSH encrypted tunnel
Doesn't require FTP server on remote host, only SSH server
Usage: sftp remote-sys

Example session:
[me@linuxbox ~]$ sftp remote-sys
Connecting to remote-sys...
me@remote-sys's password:
sftp> ls
ubuntu-8.04-desktop-i386.iso
sftp> lcd Desktop
sftp> get ubuntu-8.04-desktop-i386.iso
Fetching /home/me/ubuntu-8.04-desktop-i386.iso to ubuntu-8.04-desktop-
i386.iso
/home/me/ubuntu-8.04-desktop-i386.iso 100% 699MB 7.4MB/s 01:35
sftp> bye

GUI SFTP Support

Many graphical file managers in Linux support SFTP. You can often use
sftp:// URIs in GNOME or KDE file managers to access remote files securely.

Windows SSH Client

PuTTY for Windows

PuTTY is a popular SSH client for Windows:

Provides terminal window for SSH sessions


Includes scp and sftp analogs
Available at: http://www.chiark.greenend.org.uk/~sgtatham/putty/


17. Searching for Files in Linux


locate - Find Files the Easy Way
Performs rapid database search of pathnames
Outputs every name matching a given substring
Usage: locate pattern
Can combine with grep for more complex searches
Database created by updatedb program, usually run daily as cron job
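Example (a minimal sketch; the matching pathnames will vary by system):

locate bin/zip           # pathnames containing the substring "bin/zip"
locate zip | grep bin    # combine locate with grep for finer filtering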

locate variants

Common variants include slocate and mlocate


Usually accessed via symbolic link named locate
Check man page for specific version's options

find - Find Files the Hard Way


Searches given directories and subdirectories based on various attributes
More complex but powerful than locate

Basic Usage:

find directory

Lists all files/directories under specified directory
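For example, to count how many files and directories live under the home directory (a quick sketch):

find ~ | wc -l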

Tests
Used to filter results based on criteria
Common file type tests:

Type Description
b Block special device file
c Character special device file
d Directory
f Regular file
l Symbolic link
Example using file type and size tests:

find ~ -type f -name "*.JPG" -size +1M

Finds regular files ending in .JPG larger than 1 megabyte

Size Units

Use suffix letters to specify size units:


b (512-byte blocks), c (bytes), w (2-byte words),
k (kilobytes), M (megabytes), G (gigabytes)

Common Tests

Test Description
-cmin n Contents or attributes modified n minutes ago
-cnewer file Contents or attributes modified more recently than file
-ctime n Contents or attributes modified n*24 hours ago
-empty Match empty files/directories
-group name Belonging to group
-iname pattern Case-insensitive name match
-inum n Match inode number
-mmin n Contents modified n minutes ago
-mtime n Contents modified n*24 hours ago
-name pattern Match name pattern
-newer file Modified more recently than file
-nouser No valid owner
-nogroup No valid group
-perm mode Match permissions
-samefile name Same inode as file
-size n Match size
-type c Match file type
-user name Belonging to user

Operators
Used to create more complex logical expressions:

Operator Description
-and Both tests true (default, can use -a)
-or Either test true (can use -o)
-not Test is false (can use !)
() Group tests and operators

Escaping Parentheses

Parentheses must be escaped when used on command line

Example complex find command:

find ~ \( -type f -not -perm 0600 \) -or \( -type d -not -perm 0700 \)

Predefined Actions

Action Description
-delete Delete matching files
-ls Perform ls -dils on matches
-print Output full pathname (default)
-quit Quit after first match

Using -delete
Use extreme caution with -delete action
Always test first by replacing with -print

User-Defined Actions
Use -exec to specify custom commands
Format: -exec command {} ;
Use -ok for interactive prompting before execution

Example:

find ~ -type f -name 'foo*' -ok ls -l '{}' ';'

Improving Efficiency
1. Use + instead of ; with -exec to combine results:

find ~ -type f -name 'foo*' -exec ls -l '{}' +

2. Use xargs command:

find ~ -type f -name 'foo*' -print | xargs ls -l

Handling Filenames with Spaces

Use -print0 with find and --null with xargs to handle filenames containing spaces:

find ~ -iname '*.jpg' -print0 | xargs --null ls -l

Options
Control the scope of find searches:
Option Description
-depth Process files before directories
-maxdepth levels Set maximum directory depth
-mindepth levels Set minimum directory depth
-mount Don't traverse mounted filesystems
-noleaf Don't optimize for Unix-like filesystems

Useful Commands for File Operations


touch : Update file timestamps or create empty files
stat : Display detailed file information
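A small sketch combining the two (the playground/timestamp path is hypothetical):

touch playground/timestamp    # create the file, or update its timestamps if it exists
stat playground/timestamp     # display size, permissions, inode, and timestamps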


18. Archiving and Backup


Compression Programs
gzip: Compress or expand files
bzip2: A block sorting file compressor

Archiving Programs
tar: Tape archiving utility
zip: Package and compress files

File Synchronization Program


rsync: Remote file and directory synchronization

Compressing Files
Data compression removes redundancy from data
Compression algorithms fall into two categories:
Lossless: Preserves all original data
Lossy: Removes some data to allow more compression

Compression Example

A 100x100 pixel black image:

Uncompressed: 30,000 bytes (100 × 100 pixels × 3 bytes per pixel)


Compressed: Could be encoded as 10,000 (pixel count) + 0 (black color)

gzip
Replaces original file with compressed version
gunzip restores compressed files

Basic usage:

gzip foo.txt
gunzip foo.txt.gz

Key options:

Option Long Option Description

-c --stdout, --to-stdout Write to standard output, keep original files
-d --decompress, --uncompress Decompress
-f --force Force compression even if a compressed version already exists
-h --help Display usage information
-l --list List compression statistics
-r --recursive Recursively compress files in directories
-t --test Test integrity of compressed file
-v --verbose Display verbose messages
-number Set compression level (1-9)

Viewing Compressed Files

Use zcat or zless to view contents of compressed files without decompressing
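For example (reusing the foo.txt file from above):

zcat foo.txt.gz | less    # view the compressed file without expanding it on disk
zless foo.txt.gz          # same idea, wrapped directly in a pager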

bzip2
Similar to gzip but uses different algorithm
Achieves higher compression at cost of speed
Uses .bz2 extension
bunzip2 for decompression
bzcat to view contents
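Basic usage mirrors gzip (a minimal sketch):

bzip2 foo.txt             # produces foo.txt.bz2
bzcat foo.txt.bz2 | less  # view contents without decompressing
bunzip2 foo.txt.bz2       # restore foo.txt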

Avoid Compressing Already Compressed Files

Compressing an already compressed file (e.g., JPEG, MP3) usually results in a larger file due to overhead

Archiving Files
tar
Classic Unix archiving tool
Name means "tape archive"
Can archive files, directories, or both

Basic syntax:

tar mode[options] pathname...

Common modes:
Mode Description
c Create archive
x Extract archive
r Append to archive
t List contents of archive

Example usage:

tar cf playground.tar playground


tar tvf playground.tar
tar xf playground.tar

Relative vs Absolute Paths

tar uses relative paths by default, removing leading slashes

Extracting specific files:

tar xf archive.tar pathname

Using with find:

find playground -name 'file-A' -exec tar rf playground.tar '{}' '+'

Compression options:

z: Use gzip compression


j: Use bzip2 compression
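For example (a sketch reusing the playground directory; .tgz is a common extension for gzip-compressed tar files):

tar czf playground.tgz playground    # create a gzip-compressed archive
tar xzf playground.tgz               # extract it again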

Network transfer example:

ssh remote-sys 'tar cf - Documents' | tar xf -


zip
Both compression tool and archiver
Compatible with Windows .zip files
Less common on Linux than gzip/bzip2

Basic usage:

zip -r playground.zip playground


unzip playground.zip

Updating zip archives

zip updates existing archives instead of replacing them

Listing and extracting selectively:

unzip -l playground.zip
unzip playground.zip playground/dir-087/file-Z

Using standard input/output:

find playground -name "file-A" | zip -@ file-A.zip


ls -l /etc/ | zip ls-etc.zip -
unzip -p ls-etc.zip | less

Synchronizing Files and Directories


rsync
Efficient tool for synchronizing directories
Can sync local and remote directories
Uses rsync remote-update protocol to minimize data transfer

Basic syntax:
rsync options source destination

Example usage:

rsync -av playground foo

Trailing Slash Behavior

A trailing slash on the source copies only contents, not the directory itself

Practical backup example:

sudo rsync -av --delete /etc /home /usr/local /media/BigDisk/backup

Network usage:

1. With SSH:

sudo rsync -av --delete --rsh=ssh /etc /home /usr/local remote-sys:/backup

2. With rsync server:

rsync -av --delete rsync://archive.linux.duke.edu/fedora/linux/development/rawhide/Everything/x86_64/os/ fedora-devel

Creating a Backup Alias

Add this to .bashrc:


alias backup='sudo rsync -av --delete /etc /home /usr/local /media/BigDisk/backup'


19. Regular Expressions


Regular expressions are symbolic notations used to identify patterns in text. They
are supported by many command line tools and programming languages to facilitate
text manipulation tasks.

Key Concepts
Regular expressions allow matching and manipulating text patterns
They use special metacharacters to define patterns
POSIX standard defines Basic (BRE) and Extended (ERE) regular expression
syntax
Many Unix/Linux tools support regular expressions, including grep, sed, awk

grep and Regular Expressions


The grep command searches text files for lines matching a specified regular
expression:

grep [options] regex [file...]

Common grep options:

Option Description
-i Ignore case
-v Invert match - print non-matching lines
-c Print count of matching lines
-l Print names of files with matches
-n Prefix output with line numbers
-h Suppress filenames in multi-file searches

Metacharacters and Literals


Most characters in a regex are literals that match themselves
Metacharacters have special meaning: ^ $ . [ ] { } - ? * + ( ) | \
Backslash \ is used to escape metacharacters

Escaping Metacharacters

Enclose regexes containing metacharacters in quotes to prevent shell expansion

Basic Regular Expression Syntax


. (dot) - Matches any single character
^ - Matches beginning of line
$ - Matches end of line
[ ] - Matches any character in the set
[^ ] - Matches any character not in the set

Examples:

grep '^zip' file.txt # Lines starting with "zip"


grep 'zip$' file.txt # Lines ending with "zip"
grep '[bg]zip' file.txt # Lines containing "bzip" or "gzip"
grep '[^bg]zip' file.txt # "zip" not preceded by b or g

Character Classes
POSIX defines standard character classes to match sets of characters:

Class Description
[:alnum:] Alphanumeric characters
[:alpha:] Alphabetic characters
[:digit:] Digits
[:lower:] Lowercase letters
[:upper:] Uppercase letters
[:punct:] Punctuation characters
[:space:] Whitespace characters

Example:

grep '^[[:upper:]]' file.txt # Lines starting with uppercase

Extended Regular Expressions


Extended regex syntax adds more metacharacters:

| - Alternation (OR)
( ) - Grouping
? - Match 0 or 1 occurrence
* - Match 0 or more occurrences
+ - Match 1 or more occurrences
{ } - Match specific number of occurrences

To use ERE syntax:

Use egrep command


Or use grep -E option

Example:

grep -E '^(bz|gz|zip)' file.txt


Quantifiers
Specify number of matches:

? - Match 0 or 1 time
* - Match 0 or more times
+ - Match 1 or more times
{n} - Match exactly n times
{n,} - Match n or more times
{n,m} - Match between n and m times

Example:

grep -E '^[0-9]{3}-[0-9]{3}-[0-9]{4}$' file.txt # Match phone numbers

Using Regular Expressions


Validating data with grep
Example - validating phone numbers:

grep -Ev '^\([0-9]{3}\) [0-9]{3}-[0-9]{4}$' phonelist.txt

This will print lines that don't match the phone number pattern.

Finding files with find


Use -regex option to match entire path:

find . -regex '.*[^-_./0-9a-zA-Z].*'

Finds paths with characters outside allowed set.

Searching with locate


locate supports basic (-regexp) and extended (-regex) regex:

locate --regex 'bin/(bz|gz|zip)'

Searching in less and vim


In less: Type / followed by regex
In vim: Use /regex for basic regex search

vim Regex

vim uses basic regex by default. Escape special chars like: (..) {3}

20. text processing


21. Formatting Output
This chapter covers tools for formatting text output, often used to prepare text for
printing. The main programs covered are:

nl - Number lines
fold - Wrap lines to a specified length
fmt - Simple text formatter
pr - Prepare text for printing
printf - Format and print data
groff - Document formatting system

Simple Formatting Tools


nl - Number Lines
Numbers lines of files or standard input
Basic usage similar to cat -n
Supports "logical pages" with header, body, footer sections
Can reset numbering for each section
Uses markup in the text to indicate sections:

\:\:\: - Start of header


\:\: - Start of body
\: - Start of footer

Common options:

Option Meaning
-b style Set body numbering style
-f style Set footer numbering style
-h style Set header numbering style
-i number Set page numbering increment
-n format Set numbering format
-p Don't reset numbering at start of page
-s string Add separator after line numbers
-v number Set first line number
-w width Set width of line number field

Example usage:

sort -k 1,1 -k 2n distros.txt | sed -f distros-nl.sed | nl

This sorts the distros.txt file, processes it with a sed script to add markup, then
numbers the lines with nl.

fold - Wrap Lines


Wraps input lines to a specified width
Default width is 80 characters
-w option sets custom width
-s option breaks at spaces to avoid splitting words
Example:

echo "The quick brown fox jumped over the lazy dog." | fold -w 12
echo "The quick brown fox jumped over the lazy dog." | fold -w 12 -s

fmt - Simple Text Formatter


Fills and joins lines in text while preserving blank lines and indentation
Useful for formatting paragraphs
-w option sets width (default 75)
-c option prevents indenting first two lines differently

Example:

fmt -w 50 -c file.txt

Useful options:

Option Description
-c Preserve indentation of first two lines
-p string Only format lines beginning with prefix
-s Split lines only, don't join short lines
-u Do uniform spacing (1 space between words, 2 after sentences)

Formatting Code Comments

The -p option is useful for formatting comments in code:

fmt -w 50 -p '# ' code.txt

This will format only the lines starting with "# " while leaving code untouched.

pr - Format Text for Printing


Paginates text with headers and margins
Useful for creating printable output

Example:

pr -l 15 -w 65 distros.txt

This creates pages 15 lines long and 65 characters wide.

printf - Format and Print Data


Similar to C printf function
Used mainly in scripts, not pipelines
Basic syntax: printf "format" arguments
Format string can contain:
Literal text
Escape sequences (e.g. \n for newline)
Conversion specifications (e.g. %s for string)

Common conversion specifiers:

Specifier Description
d Signed decimal integer
f Floating point number
o Octal number
s String
x Lowercase hexadecimal
X Uppercase hexadecimal

Full conversion specification:

%[flags][width][.precision]conversion_specification

Flags:
Flag Description
# Use "alternate format"
0 Pad with zeros
- Left-align
(space) Add a space in front of positive numbers
+ Always show sign for numbers

Examples:

printf "%s\t%s\t%s\n" str1 str2 str3


printf "Line: %05d %15.3f Result: %+15d\n" 1071 3.14156295 32589

printf vs echo

While echo is simpler, printf offers much more control over formatting output,
making it very useful in scripts that need to produce precisely formatted text.

Document Formatting Systems


Two main families:

1. roff descendants (nroff, troff)


2. TeX typesetting system

groff
GNU implementation of troff
Uses markup language to describe formatting
Often used with macro packages for easier formatting

Example - viewing a man page source:

zcat /usr/share/man/man1/ls.1.gz | head

Rendering a man page with groff:


zcat /usr/share/man/man1/ls.1.gz | groff -mandoc -T ascii | head

Converting to PDF

You can convert groff PostScript output to PDF:

ps2pdf input.ps output.pdf

Example - creating a formatted table with tbl and groff:

1. Create a sed script to add tbl markup


2. Process the data through the pipeline:

sort -k 1,1 -k 2n distros.txt | sed -f distros-tbl.sed | groff -t -T ascii

For better output, use PostScript:

sort -k 1,1 -k 2n distros.txt | sed -f distros-tbl.sed | groff -t > output.ps

More Conversion Tools

Many command line tools exist for file format conversion, often named
format2format or formattoformat.

Try: ls /usr/bin/*[[:alpha:]]2[[:alpha:]]* to find some.

22. printing
A Brief History of Printing in Unix-like Systems
Early Days of Printing
Printers were large, expensive, and centralized in the pre-PC era
Users shared printers, with banner pages identifying print jobs
Printers used impact technology (e.g. daisy-wheel, dot-matrix)
Character-based printers used fixed character sets
Monospaced fonts were standard
Standard page size: 80 characters wide, 66 lines high

Monospaced Fonts

Monospaced fonts have fixed character widths, allowing for predictable page
layouts.

Printing Process
Data sent as simple byte stream of characters
ASCII control codes used for carriage control
Special effects like boldface achieved through overprinting

Example of overprinting for boldface:

N^HNA^HAM^HME^HE

Transition to Graphical Printing


GUI development led to graphical printing techniques
Laser printers enabled printing of proportional fonts and images
Challenge: Increased data volume (e.g. 900,000 bytes per page for 300 DPI)

Page Description Languages (PDLs)


Invented to address data transmission issues
PostScript: First major PDL by Adobe Systems
PostScript printers contained their own processor and memory
Raster Image Processor (RIP) converts PDL to bitmap
PostScript Advantages

PostScript allowed for complex layouts and font support while reducing data
transmission needs.

Modern Printing Systems

RIP moved from printers to host computers


Some printers still accept character-based streams
Many low-cost printers rely on host computer's RIP

Printing with Linux


Key Components
1. Common Unix Printing System (CUPS)
Provides print drivers and print-job management
Creates and maintains print queues
2. Ghostscript
PostScript interpreter
Acts as a Raster Image Processor (RIP)

Preparing Files for Printing


pr - Convert Text Files for Printing
Adjusts text to fit specific page sizes
Adds optional headers and margins

Common pr options:

Option Description
+first[:last] Output a range of pages
-columns Organize content into specified number of columns
-a List content horizontally in multi-column output
-d Double-space output
-D "format" Format the date in page headers
-f Use form feeds to separate pages
-h "header" Set custom page header
-l length Set page length (default: 66)
-n Number lines
-o offset Create left margin
-w width Set page width (default: 72)

Example usage:

ls /usr/bin | pr -3 -w 65 | head

Sending a Print Job to a Printer


CUPS Printing Methods
1. Berkeley/LPD: Uses lpr program
2. SysV: Uses lp program

lpr - Print Files (Berkeley Style)

Sends files to printer


Can be used in pipelines

Example:

ls /usr/bin | pr -3 | lpr

Common lpr options:


Option Description
-# number Set number of copies
-p "Pretty print" option for text files
-P printer Specify printer name
-r Delete files after printing

lp - Print Files (System V Style)


Similar to lpr, but with different options

Common lp options:

Option Description
-d printer Set destination printer
-n number Set number of copies
-o landscape Set landscape orientation
-o fitplot Scale file to fit page
-o scaling=number Scale file (100 fills page)
-o cpi=number Set characters per inch
-o lpi=number Set lines per inch
-o page-*=points Set page margins
-P pages Specify pages to print

Example with custom formatting:

ls /usr/bin | pr -4 -w 90 -l 88 | lp -o page-left=36 -o cpi=12 -o lpi=8

a2ps - Another Printing Option

Converts various file formats to PostScript


Acts as a "pretty printer"
Sends output to default printer by default
Example usage:

ls /usr/bin | pr -3 -t | a2ps -o ~/Desktop/ls.ps -L 66

a2ps Output

a2ps typically produces "two up" format, printing two pages per sheet with
headers and footers.

Common a2ps options:

Option Description
--center-title=text Set center page title
--columns=number Arrange pages into columns
--footer=text Set page footer
--guess Report file types
--line-numbers=interval Number lines of output
--pages=range Print specific page range
-B No page headers
-f size Set font size
-l number Set characters per line
-L number Set lines per page
-M name Use specific media type
-o file Send output to file
-P printer Specify printer
-R Portrait orientation
-r Landscape orientation

Alternative to a2ps

enscript is another text-to-PostScript formatter with similar capabilities, but it


only accepts text input.
Monitoring and Controlling Print Jobs
lpstat - Display Print System Status

Shows printer names and availability

Common lpstat options:

Option Description
-a [printer...] Display printer queue state
-d Show default printer
-p [printer...] Display printer status
-r Show print server status
-s Display status summary
-t Show complete status report

lpq - Display Printer Queue Status


Shows status of printer queue and print jobs

Example:

lpq

lprm / cancel - Cancel Print Jobs


Remove jobs from print queue
lprm (Berkeley style) and cancel (System V style)

Example:

cancel 603

Job Cancellation
Make sure to use the correct job ID when cancelling print jobs to avoid affecting
other users' prints.

These commands provide comprehensive control over the printing process in Unix-
like systems, from formatting and sending print jobs to managing printer queues and
cancelling jobs when needed.


23. Compiling Programs


Why Compile Software?
There are two main reasons to compile software from source code:

1. Availability - Some desired applications may not be included in distribution repositories
2. Timeliness - To get the latest version of a program that may not yet be
available pre-compiled

Distribution Repositories

Large Linux distributions like Debian maintain huge repositories of pre-compiled binary packages (over 68,000 for Debian)

What is Compiling?
Compiling is the process of translating human-readable source code into machine
language that can be executed by the computer's processor.

Key points:

Processors execute machine language - numeric binary code representing very basic operations
Assembly language replaced raw numeric codes with mnemonic codes (e.g.
MOV, CPY)
High-level programming languages like C and C++ allow focusing on problem-solving rather than processor details
Compilers convert high-level language code into machine language, sometimes
via assembly
Linkers connect compiled code to shared libraries containing common functions

Interpreted vs Compiled Languages

Some languages like Python and Ruby are interpreted rather than compiled. An
interpreter executes the code directly, which is slower but allows for faster
development cycles.

Compiling a C Program
Steps to compile a C program:

1. Obtain the source code (usually as a compressed tar file)


2. Unpack the source code
3. Examine the source tree
4. Run the configure script
5. Run make to compile
6. Run sudo make install to install the compiled program

Obtaining the Source Code


Example of downloading source code using FTP:

mkdir src
cd src
ftp ftp.gnu.org
ftp> cd gnu/diction
ftp> get diction-1.11.tar.gz
ftp> bye

Alternatively, using wget:

wget https://ftp.gnu.org/gnu/diction/diction-1.11.tar.gz

Unpacking the Source Code


Unpack the downloaded tar file:

tar xzf diction-1.11.tar.gz

Examining Tar Contents

To examine the contents of a tar file before unpacking:

tar tzvf tarfile | head

Examining the Source Tree


Key files in the source directory:

README, INSTALL, NEWS, COPYING - Documentation and license info


.c files - C source code
.h files - Header files with function/module descriptions

Building the Program


Two main steps:

1. Run ./configure to analyze the build environment


2. Run make to compile the program

The configure script creates a Makefile which instructs make how to build the program.

Makefile

The Makefile defines targets (output files), dependencies, and commands to build each target. This allows make to rebuild only what's necessary when source files change.
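A quick way to observe this behavior (a sketch; it assumes the source tree has already been configured, and diction.c is one of its source files):

make            # first run: compiles everything
make            # second run: nothing to do, typically "make: Nothing to be done for 'all'."
touch diction.c # mark one source file as newer than its compiled output
make            # rebuilds only the targets that depend on diction.c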

Installing the Program


To install the compiled program:

sudo make install

This typically installs the program to /usr/local/bin

Key Commands
configure - Analyzes build environment and creates Makefile
make - Compiles the program based on the Makefile
make install - Installs the compiled program

part 4 : shell scripting


24. Shell Scripting Basics
What are Shell Scripts?
Shell scripts are files containing a series of commands
The shell reads and executes these commands as if entered directly on the
command line
Shell is both a command line interface and a scripting language interpreter
Most things that can be done on the command line can be done in scripts, and
vice versa

Writing Your First Script


To create and run a shell script:

1. Write the script in a text editor


2. Make the script executable
3. Put the script somewhere the shell can find it

Script File Format


Basic structure of a shell script:

#!/bin/bash
# This is a comment
echo 'Hello World!'

Shebang

The first line #!/bin/bash is called a shebang. It tells the system to use bash to
interpret the script.

Comments start with #


Comments can also appear at the end of lines after a command
Every script should include the shebang as the first line

Making Scripts Executable


Use chmod to make scripts executable:

chmod 755 script_name # Everyone can execute
chmod 700 script_name # Only owner can execute

Scripts must be readable to be executed.

Script File Location


To run a script:

./script_name

Adding scripts to PATH

Place scripts in a directory in your PATH (like ~/bin) so they can be run from
anywhere

To add ~/bin to PATH, add this to ~/.bashrc:

export PATH=~/bin:"$PATH"

Good locations for scripts:

~/bin - Personal use scripts


/usr/local/bin - Scripts for all users
/usr/local/sbin - Scripts for system administrators

Formatting Best Practices


1. Use long option names for readability

ls --all --directory # More readable than ls -ad

2. Use indentation and line continuation for complex commands

find playground \
\( \
-type f \
-not -perm 0600 \
-exec chmod 0600 '{}' ';' \
\) \
-or \
\( \
-type d \
-not -perm 0700 \
-exec chmod 0700 '{}' ';' \
\)

3. Configure vim for script writing:


Enable syntax highlighting: :syntax on
Highlight search results: :set hlsearch
Set tab width: :set tabstop=4
Enable auto-indentation: :set autoindent

Make vim settings permanent

Add these settings (without colons) to your ~/.vimrc file

25. Shell Functions and Program Design


Top-Down Design
Break large, complex tasks into smaller, simpler tasks
Identify top-level steps, then break those down further
Allows tackling complex problems by solving many small, simple problems

Shell Functions
"Mini-scripts" inside other scripts
Act as autonomous programs
Two syntactic forms:

function name {
commands
return
}

# Or

name () {
commands
return
}

Function definitions must appear before they are called in the script

Local Variables
Accessible only within the shell function where defined
Cease to exist once the function terminates
Defined using the local keyword:

local variable_name

Benefits of local variables:

Prevent name conflicts with global variables


Make functions more portable and reusable

Variable Scope

Local variables are only visible within the function where they are defined.
Global variables are visible throughout the entire script.
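A minimal sketch illustrating the difference (the function and variable names are arbitrary):

#!/bin/bash
foo=0                        # global variable

funct_1 () {
    local foo                # this foo is local to funct_1
    foo=1
    echo "funct_1: foo = $foo"
}

funct_1
echo "global: foo = $foo"    # still 0; the function modified its own local copy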

Keeping Scripts Running During Development


Use "stubs" - empty function definitions - to verify logical flow early on
Add feedback in stubs to confirm execution:

report_uptime () {
echo "Function report_uptime executed."
return
}

Shell Functions in .bashrc


Can replace aliases for more complex personal commands
Example disk space function for .bashrc:

ds () {
echo "Disk Space Utilization For $HOSTNAME"
df -h
}

26. Building a System Information Script


Initial Script Structure

#!/bin/bash

TITLE="System Information Report For $HOSTNAME"


CURRENT_TIME="$(date +"%x %r %Z")"
TIMESTAMP="Generated $CURRENT_TIME, by $USER"

report_uptime () {
return
}

report_disk_space () {
return
}

report_home_space () {
return
}

cat << _EOF_


<html>
<head>
<title>$TITLE</title>
</head>
<body>
<h1>$TITLE</h1>
<p>$TIMESTAMP</p>
$(report_uptime)
$(report_disk_space)
$(report_home_space)
</body>
</html>
_EOF_

Implementing Functions
Example implementation of report_uptime :

report_uptime () {
cat <<- _EOF_
<h2>System Uptime</h2>
<pre>$(uptime)</pre>
_EOF_
return
}

Home Directory Permissions

The report_home_space function may require superuser privileges if home directories are not world-readable.

Here Documents
Embed multi-line text directly in scripts
Syntax:

command << token
text
token

Useful for generating multi-line output or input to commands

Here Document Indentation

Use <<- instead of << to allow indentation with tabs (not spaces) in the here
document.

By following these practices and structuring scripts using functions and top-down
design, you can create more maintainable and readable shell scripts.

Commands covered in this chapter: chmod, echo, ls, find, vim, df, du, uptime


27. Flow Control: Branching with if


The if Statement
The if statement allows scripts to make decisions and execute different code
based on conditions. Basic syntax:

if commands; then
commands
[elif commands; then
commands...]
[else
commands]
fi

Evaluation

The if statement evaluates the exit status of commands. A zero exit status
means success/true, while non-zero means failure/false.
Exit Status
Commands issue an exit status (0-255) when they terminate
0 indicates success, any other value indicates failure
The $? variable holds the exit status of the last executed command

Example:

ls -d /usr/bin
echo $? # Outputs 0 (success)

ls -d /bin/usr
echo $? # Outputs non-zero (failure)

True and False Commands

The true command always exits with status 0, while false always exits with
status 1. These can be used to test if statement behavior.

The test Command


The test command performs checks and comparisons. It has two equivalent forms:

test expression
[ expression ]

Command Nature

Both test and [ are actually commands, with [ requiring ] as its final
argument.

File Expressions

Expression Is True If:


-e file file exists
-f file file exists and is a regular file
-d file file exists and is a directory
-r file file exists and is readable
-w file file exists and is writable
-x file file exists and is executable
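For example, a small sketch testing an ordinary file (assuming ~/.bashrc exists):

FILE=~/.bashrc

if [ -e "$FILE" ]; then
    if [ -f "$FILE" ]; then
        echo "$FILE is a regular file."
    fi
    if [ -w "$FILE" ]; then
        echo "$FILE is writable."
    fi
else
    echo "$FILE does not exist" >&2
fi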

String Expressions

Expression Is True If:


-z string string length is zero
-n string string length is non-zero
string1 = string2 strings are equal
string1 != string2 strings are not equal

String Comparisons

The > and < operators must be quoted or escaped when used with test to
prevent shell interpretation as redirection operators.
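A short sketch using string expressions (the ANSWER value is arbitrary):

ANSWER=maybe

if [ -z "$ANSWER" ]; then
    echo "There is no answer." >&2
elif [ "$ANSWER" = "yes" ]; then
    echo "The answer is YES."
else
    echo "The answer is '$ANSWER'."
fi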

Integer Expressions

Expression Is True If:


int1 -eq int2 integers are equal
int1 -ne int2 integers are not equal
int1 -lt int2 int1 is less than int2
int1 -le int2 int1 is less than or equal to int2
int1 -gt int2 int1 is greater than int2
int1 -ge int2 int1 is greater than or equal to int2
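A short sketch using integer expressions (the INT value is arbitrary):

INT=-5

if [ "$INT" -eq 0 ]; then
    echo "INT is zero."
elif [ "$INT" -lt 0 ]; then
    echo "INT is negative."
else
    echo "INT is positive."
fi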

Modern Alternatives: [[]] and (())


The [[ ]] compound command is a more modern replacement for test :

Supports all test expressions


Adds new string expression: string =~ regex
Supports pattern matching with == operator

The (( )) compound command is designed for integer operations:

Allows natural syntax for arithmetic comparisons (e.g., < , > , == )


Recognizes variables by name without requiring $ expansion
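A small sketch showing both forms (the FILE and INT values are arbitrary):

FILE=foo.bar
if [[ "$FILE" == foo.* ]]; then
    echo "$FILE matches the pattern 'foo.*'"
fi

INT=5
if ((INT > 0)); then
    echo "INT is positive."
fi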

Combining Expressions
Logical operators can combine expressions:

Operation test [[ ]] and (( ))


AND -a &&
OR -o ||
NOT ! !

Preference

While it's important to know test for compatibility, [[ ]] is preferred for modern scripts due to its enhanced functionality and easier syntax.

Control Operators for Branching


The && (AND) and || (OR) operators provide another way to create branches:

command1 && command2 # Executes command2 only if command1 succeeds


command1 || command2 # Executes command2 only if command1 fails

These are useful for simple conditionals and error handling in scripts.

Portability
While portability to all Unix-like systems is sometimes emphasized, using bash-
specific features can lead to more readable and maintainable scripts, especially
since bash is widely available.

Commands covered in this chapter: if, test, true, false, bash


28. Reading Keyboard Input


The read Command
The read builtin command is used to read a single line of standard input. Syntax:

read [-options] [variable...]

If no variable is supplied, input is stored in the REPLY variable


Can assign input to multiple variables

Multiple Variables

If fewer values are input than variables specified, extra variables are empty. If
more values are input, the final variable contains all extra input.

Example: Integer Evaluation Script

#!/bin/bash
echo -n "Please enter an integer -> "
read int

if [[ "$int" =~ ^-?[0-9]+$ ]]; then


if [ "$int" -eq 0 ]; then
echo "$int is zero."
else
if [ "$int" -lt 0 ]; then
echo "$int is negative."
else
echo "$int is positive."
fi
if [ $((int % 2)) -eq 0 ]; then
echo "$int is even."
else
echo "$int is odd."
fi
fi
else
echo "Input value is not an integer." >&2
exit 1
fi

read Options
Option Description
-a array Assign input to array, starting with index zero
-d delimiter Use first character of delimiter string to indicate end of input
-e Use Readline to handle input, allowing editing
-i string Use string as default reply if user presses Enter (requires -e)
-n num Read num characters of input instead of entire line
-p prompt Display prompt string for input
-r Raw mode - don't interpret backslashes as escapes
-s Silent mode - don't echo characters (useful for passwords)
-t seconds Timeout after specified seconds
-u fd Use input from file descriptor fd instead of standard input
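A small sketch combining -p, -s, and -t (prompting silently for a passphrase with a 10-second timeout):

read -t 10 -sp "Enter secret passphrase > " secret_pass
if [[ -n "$secret_pass" ]]; then
    echo -e "\nThe secret passphrase is '$secret_pass'"
else
    echo -e "\nInput timed out" >&2
fi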

IFS (Internal Field Separator)


Controls word splitting on input
Default value contains space, tab, and newline
Can be modified to change field separation behavior
Changing IFS

Temporarily change IFS before read command to modify input parsing:

IFS=":" read user pw uid gid name home shell <<< "$file_info"

Here Strings
Use <<< operator to provide a string as input to a command:

read user pw uid gid name home shell <<< "$file_info"

Piping to read

You can't pipe directly to read as it runs in a subshell, losing variable assignments:

echo "foo" | read # Won't work as expected

Validating Input
Always validate user input to handle unexpected or malicious data. Example:

#!/bin/bash

invalid_input () {
echo "Invalid input '$REPLY'" >&2
exit 1
}

read -p "Enter a single item > "

[[ -z "$REPLY" ]] && invalid_input


(( "$(echo "$REPLY" | wc -w)" > 1 )) && invalid_input
if [[ "$REPLY" =~ ^[-[:alnum:]\._]+$ ]]; then
echo "'$REPLY' is a valid filename."
# Additional checks...
else
echo "The string '$REPLY' is not a valid filename."
fi

Menu-Driven Programs
Create interactive menus for user selection:

#!/bin/bash

clear
echo "
Please Select:
1. Display System Information
2. Display Disk Space
3. Display Home Space Utilization
0. Quit
"
read -p "Enter selection [0-3] > "

if [[ "$REPLY" =~ ^[0-3]$ ]]; then


case $REPLY in
0) echo "Program terminated."; exit ;;
1) echo "Hostname: $HOSTNAME"; uptime; exit ;;
2) df -h; exit ;;
3)
if [[ "$(id -u)" -eq 0 ]]; then
echo "Home Space Utilization (All Users)"
du -sh /home/*
else
echo "Home Space Utilization ($USER)"
du -sh "$HOME"
fi
exit
;;
esac
else
echo "Invalid entry." >&2
exit 1
fi

Exit Points

Multiple exit points in a program can make logic harder to follow, but work well in
menu-driven scripts.


29. Flow Control: Looping with while / until


Introduction to Looping
Looping allows portions of programs to repeat
Shell provides three compound commands for looping
This chapter covers two: while and until

The while Loop


Basic Syntax:

while commands; do
commands
done

Evaluates exit status of a list of commands


Executes loop body as long as exit status is zero

Example: Counting Script


#!/bin/bash
# while-count: display a series of numbers
count=1
while [[ "$count" -le 5 ]]; do
echo "$count"
count=$((count + 1))
done
echo "Finished."

Loop Execution

The loop continues as long as the condition [[ "$count" -le 5 ]] is true.


Once count becomes 6, the loop terminates.

Improving the Menu Program


Enclose the menu in a while loop to allow repeated selections
Use sleep command to pause between selections
Loop continues until user selects "Quit" option

User Experience

Adding a pause with sleep allows users to see results before the screen clears
for the next menu display.

Breaking Out of a Loop


Two built-in commands for loop control:

1. break: Immediately terminates the loop


2. continue: Skips remainder of current iteration, starts next iteration

Example: Enhanced Menu Program


while true; do
# Menu display code here
read -p "Enter selection [0-3] > "
if [[ "$REPLY" =~ ^[0-3]$ ]]; then
if [[ "$REPLY" == 1 ]]; then
# Option 1 code
continue
fi
# Other options...
if [[ "$REPLY" == 0 ]]; then
break
fi
else
echo "Invalid entry."
sleep "$DELAY"
fi
done

Endless Loop

Using while true creates an endless loop. The programmer must provide a
way to exit, typically with break.

The until Loop


Similar to while, but continues until it receives a zero exit status
Useful when the opposite condition is clearer to express

Example: Counting with until

#!/bin/bash
# until-count: display a series of numbers
count=1
until [[ "$count" -gt 5 ]]; do
echo "$count"
count=$((count + 1))
done
echo "Finished."

Choosing between while and until

Select the loop type that allows for the clearest test expression to be written.

Reading Files with Loops


while and until can process standard input
Allows for file processing within loops

Example: Reading a File Line by Line

#!/bin/bash
# while-read: read lines from a file
while read distro version release; do
printf "Distro: %s\tVersion: %s\tReleased: %s\n" \
"$distro" \
"$version" \
"$release"
done < distros.txt

File Redirection

The redirection operator < distros.txt is placed after the done statement to
feed the file into the loop.

Processing Piped Input

sort -k 1,1 -k 2n distros.txt | while read distro version release; do


printf "Distro: %s\tVersion: %s\tReleased: %s\n" \
"$distro" \
"$version" \
"$release"
done

Subshell Limitation

When using a pipe to feed a loop, the loop executes in a subshell. Variables
created or modified within the loop are lost when the loop terminates.

Summary
Loops are essential for repeated tasks in shell scripts
while and until provide flexible looping constructs
break and continue offer additional loop control
Loops can process files and piped input efficiently

Further Learning

Explore more complex loop structures and combine them with other flow control
techniques for advanced scripting capabilities.


30. Troubleshooting Shell Scripts


Types of Errors
Syntactic Errors
Involve mistyping shell syntax elements
Shell stops executing script when encountered
Common types:
1. Missing quotes
2. Missing or unexpected tokens
3. Unanticipated expansions
Logical Errors
Don't prevent script from running
Produce undesired results due to flawed logic
Common types:
1. Incorrect conditional expressions
2. "Off by one" errors in loops
3. Unanticipated situations/data

Syntactic Error Examples


Missing Quotes

#!/bin/bash
number=1
if [ $number = 1 ]; then
echo "Number is equal to 1.
else
echo "Number is not equal to 1."
fi

Result:

/home/me/bin/trouble: line 10: unexpected EOF while looking for matching `"'
/home/me/bin/trouble: line 13: syntax error: unexpected end of file

Quote Errors

Shell continues looking for closing quote


Can be hard to find in long scripts
Syntax highlighting in editors helps identify

Missing Tokens
#!/bin/bash
number=1
if [ $number = 1 ] then
echo "Number is equal to 1."
else
echo "Number is not equal to 1."
fi

Result:

/home/me/bin/trouble: line 9: syntax error near unexpected token `else'
/home/me/bin/trouble: line 9: `else'

Missing Semicolon

if command accepts list of commands


Missing semicolon causes unexpected parsing
Error message points to later line

Unanticipated Expansions

#!/bin/bash
number=
if [ $number = 1 ]; then
echo "Number is equal to 1."
else
echo "Number is not equal to 1."
fi

Result:

/home/me/bin/trouble: line 7: [: =: unary operator expected
Number is not equal to 1.

Quoting Variables

Always enclose variables in double quotes to prevent word splitting:

if [ "$number" = 1 ]; then

Defensive Programming
Verify assumptions
Check exit status of commands
Validate input
Handle potential errors

Example of improved file deletion script:

if [[ ! -d "$dir_name" ]]; then


echo "No such directory: '$dir_name'" >&2
exit 1
fi
if ! cd "$dir_name"; then
echo "Cannot cd to '$dir_name'" >&2
exit 1
fi
if ! rm *; then
echo "File deletion failed. Check results" >&2
exit 1
fi

Dangerous Filenames

Unix allows almost any character in filenames, including spaces and hyphens.
Use ./ before wildcards to prevent misinterpretation:

rm ./*

Testing
Release early and often for more exposure
Use stubs to verify program flow
Develop good test cases covering edge conditions
Test coverage should reflect importance of functionality

Example of testable file deletion code:

if [[ -d $dir_name ]]; then


if cd $dir_name; then
echo rm * # TESTING
else
echo "cannot cd to '$dir_name'" >&2
exit 1
fi
else
echo "no such directory: '$dir_name'" >&2
exit 1
fi
exit # TESTING

Debugging Techniques
1. Isolate problem area by commenting out sections
2. Add tracing messages
3. Use bash's built-in tracing
4. Examine variable values during execution

Tracing
Add messages:

echo "preparing to delete files" >&2


if [[ -d $dir_name ]]; then
if cd $dir_name; then
echo "deleting files" >&2
rm *
else
echo "cannot cd to '$dir_name'" >&2
exit 1
fi
else
echo "no such directory: '$dir_name'" >&2
exit 1
fi
echo "file deletion complete" >&2

Use bash's -x option:

#!/bin/bash -x

Or use set command:

set -x # Turn on tracing


# code to trace
set +x # Turn off tracing

Customizing Trace Output

Modify PS4 variable to include line numbers:

export PS4='$LINENO + '

Examining Values
Add debug echo statements:

number=1
echo "number=$number" # DEBUG

Debugging is an Art
Developed through experience
Involves knowing how to avoid bugs
Requires effective use of tracing and testing


31. Flow Control: Branching with case


Introduction
Continuation of flow control concepts
case is a special flow control mechanism for multiple-choice decisions
Useful alternative to multiple if statements

Syntax of case

case word in
[pattern [| pattern]...) commands ;;]...
esac

Syntax Explanation

word is the variable or value being tested


pattern is the condition to match against
commands are executed when a pattern matches
;; terminates each case
esac (case spelled backwards) ends the case statement

Example: Menu-Driven System Information Program


Original Implementation with if Statements:
#!/bin/bash
# read-menu: a menu driven system information program

clear
echo "
Please Select:
1. Display System Information
2. Display Disk Space
3. Display Home Space Utilization
0. Quit
"
read -p "Enter selection [0-3] > "

if [[ "$REPLY" =~ ^[0-3]$ ]]; then


if [[ "$REPLY" == 0 ]]; then
echo "Program terminated."
exit
fi
if [[ "$REPLY" == 1 ]]; then
echo "Hostname: $HOSTNAME"
uptime
exit
fi
if [[ "$REPLY" == 2 ]]; then
df -h
exit
fi
if [[ "$REPLY" == 3 ]]; then
if [[ "$(id -u)" -eq 0 ]]; then
echo "Home Space Utilization (All Users)"
du -sh /home/*
else
echo "Home Space Utilization ($USER)"
du -sh "$HOME"
fi
exit
fi
else
echo "Invalid entry." >&2
exit 1
fi

Improved Implementation with case :

#!/bin/bash
# case-menu: a menu driven system information program

clear
echo "
Please Select:
1. Display System Information
2. Display Disk Space
3. Display Home Space Utilization
0. Quit
"
read -p "Enter selection [0-3] > "

case "$REPLY" in
0) echo "Program terminated."
exit
;;
1) echo "Hostname: $HOSTNAME"
uptime
;;
2) df -h
;;
3) if [[ "$(id -u)" -eq 0 ]]; then
echo "Home Space Utilization (All Users)"
du -sh /home/*
else
echo "Home Space Utilization ($USER)"
du -sh "$HOME"
fi
;;
*) echo "Invalid entry" >&2
exit 1
;;
esac
Simplification

Using case simplifies the logic and makes the code more readable compared to
multiple if statements.

Patterns in case
Patterns in case are similar to those used in pathname expansion
Patterns are terminated with a ) character

Table of Pattern Examples:

Pattern Description
a) Matches if word equals "a"
[[:alpha:]]) Matches if word is a single alphabetic character
???) Matches if word is exactly three characters long
*.txt) Matches if word ends with the characters ".txt"
*) Matches any value of word (catch-all)

Catch-all Pattern

It's good practice to include *) as the last pattern to catch any invalid values.

Example of Patterns:

#!/bin/bash
read -p "enter word > "
case "$REPLY" in
[[:alpha:]]) echo "is a single alphabetic character." ;;
[ABC][0-9]) echo "is A, B, or C followed by a digit." ;;
???) echo "is three characters long." ;;
*.txt) echo "is a word ending in '.txt'" ;;
*) echo "is something else." ;;
esac
Combining Multiple Patterns
Use | (vertical bar) to separate multiple patterns
Creates an "or" conditional pattern
Useful for handling both uppercase and lowercase characters

Example with Combined Patterns:

#!/bin/bash
# case-menu: a menu driven system information program

clear
echo "
Please Select:
A. Display System Information
B. Display Disk Space
C. Display Home Space Utilization
Q. Quit
"
read -p "Enter selection [A, B, C or Q] > "

case "$REPLY" in
q|Q) echo "Program terminated."
exit
;;
a|A) echo "Hostname: $HOSTNAME"
uptime
;;
b|B) df -h
;;
c|C) if [[ "$(id -u)" -eq 0 ]]; then
echo "Home Space Utilization (All Users)"
du -sh /home/*
else
echo "Home Space Utilization ($USER)"
du -sh "$HOME"
fi
;;
*) echo "Invalid entry" >&2
exit 1
;;
esac

Performing Multiple Actions (bash 4.0+)


Prior to bash 4.0, case allowed only one action per successful match
bash 4.0+ introduces ;;& notation to allow multiple matches

Example of Multiple Actions:

#!/bin/bash
# case4-2: test a character
read -n 1 -p "Type a character > "
echo
case "$REPLY" in
[[:upper:]]) echo "'$REPLY' is upper case." ;;&
[[:lower:]]) echo "'$REPLY' is lower case." ;;&
[[:alpha:]]) echo "'$REPLY' is alphabetic." ;;&
[[:digit:]]) echo "'$REPLY' is a digit." ;;&
[[:graph:]]) echo "'$REPLY' is a visible character." ;;&
[[:punct:]]) echo "'$REPLY' is a punctuation symbol." ;;&
[[:space:]]) echo "'$REPLY' is a whitespace character." ;;&
[[:xdigit:]]) echo "'$REPLY' is a hexadecimal digit." ;;&
esac

Multiple Matches

The ;;& syntax allows case to continue to the next test rather than terminating
after the first match.

Summary
case is a powerful tool for handling multiple-choice decisions in bash scripts
It offers a cleaner alternative to multiple if statements
Pattern matching in case is versatile and can handle various scenarios
bash 4.0+ enhances case with the ability to perform multiple actions on a single
match

Best Practice

Use case when dealing with multiple conditions that can be expressed as
patterns, especially for menu-driven programs or command-line argument
parsing.

Further Reading
Bash Reference Manual on Conditional Constructs:
http://tiswww.case.edu/php/chet/bash/bashref.html#SEC21
Advanced Bash-Scripting Guide examples:
http://tldp.org/LDP/abs/html/testbranch.html


32. Positional Parameters in Bash


Accessing Command Line Arguments
Bash provides positional parameters named $0 through $9
$0 contains the script name/path
$1 to $9 contain the first 9 command line arguments

Example script:

#!/bin/bash
echo "
\$0 = $0
\$1 = $1
\$2 = $2
...
\$9 = $9
"

Accessing More Than 9 Parameters

Use parameter expansion with braces for arguments beyond 9:


${10}, ${11}, etc.

Determining Number of Arguments


$# variable contains the number of arguments passed

Example:

echo "Number of arguments: $#"

Using shift to Access Many Arguments


shift command moves all parameters down by one
Allows processing many arguments with just $1

Example:

while [[ $# -gt 0 ]]; do


echo "Argument = $1"
shift
done

Simple Application Example


File information script:

#!/bin/bash
PROGNAME="$(basename "$0")"
if [[ -e "$1" ]]; then
echo -e "\nFile Type:"
file "$1"
echo -e "\nFile Status:"
stat "$1"
else
echo "$PROGNAME: usage: $PROGNAME file" >&2
exit 1
fi

Using basename

basename removes the path from $0, leaving just the script name

Positional Parameters with Shell Functions


Functions can also use positional parameters
Use $FUNCNAME instead of $PROGNAME for usage messages

Handling All Positional Parameters


Two special parameters:

Parameter Description
$* Expands to all parameters as a single string
$@ Expands to all parameters as separate strings

Using "$@"

"$@" is usually the safest way to handle all parameters, as it preserves spaces
in individual arguments
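A minimal sketch of the difference (the helper functions are hypothetical):

show () { echo "received $# argument(s)"; }

show_args () {
    show $*     # unquoted: word splitting yields 4 arguments
    show "$*"   # one single string: 1 argument
    show "$@"   # original arguments preserved: 2 arguments
}

show_args "word" "words with spaces"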

Command Line Option Processing


Example option processing loop:
while [[ -n "$1" ]]; do
case "$1" in
-f | --file) shift
filename="$1"
;;
-i | --interactive) interactive=1
;;
-h | --help) usage
exit
;;
*) usage >&2
exit 1
;;
esac
shift
done

Interactive Mode Implementation

if [[ -n "$interactive" ]]; then


while true; do
read -p "Enter name of output file: " filename
if [[ -e "$filename" ]]; then
read -p "'$filename' exists. Overwrite? [y/n/q] > "
case "$REPLY" in
Y|y) break
;;
Q|q) echo "Program terminated."
exit
;;
*) continue
;;
esac
elif [[ -z "$filename" ]]; then
continue
else
break
fi
done
fi

Output File Handling

if [[ -n "$filename" ]]; then


if touch "$filename" && [[ -f "$filename" ]]; then
write_html_page > "$filename"
else
echo "$PROGNAME: Cannot write file '$filename'" >&2
exit 1
fi
else
write_html_page
fi

File Writing Check

touch and test combination ensures the file is writable and a regular file


33. Flow Control: Looping with for


Introduction
The for loop is another shell looping construct
It provides a means of processing sequences during a loop
Very useful and popular in bash scripting

Two Forms of for Loop in Bash


1. Traditional Shell Form
Syntax:

for variable [in words]; do


commands
done

variable : Name of variable that will increment during loop execution


words : Optional list of items sequentially assigned to variable
commands : Commands executed on each iteration

Example:

for i in A B C D; do echo $i; done

Output:

A
B
C
D

Word Generation Methods

There are several ways to generate the list of words for the loop:

1. Brace expansion: {A..D}


2. Pathname expansion: distros*.txt
3. Command substitution: $(command)
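For instance, brace expansion and command substitution can both supply the word list (a quick sketch; list.txt is a hypothetical file of words):

for i in {A..D}; do echo "$i"; done
for i in $(cat list.txt); do echo "$i"; done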

Guarding Against Failed Expansions

When using pathname expansion, always check if the expansion matched anything:

for i in distros*.txt; do
if [[ -e "$i" ]]; then
echo "$i"
fi
done

2. C Language Form
Syntax:

for (( expression1; expression2; expression3 )); do


commands
done

expression1 : Initializes conditions for the loop


expression2 : Determines when the loop is finished
expression3 : Carried out at the end of each iteration

Example:

for (( i=0; i<5; i=i+1 )); do


echo $i
done

Output:

0
1
2
3
4

Practical Examples
1. Finding the Longest String in a File
#!/bin/bash
# longest-word: find longest string in a file
while [[ -n "$1" ]]; do
if [[ -r "$1" ]]; then
max_word=
max_len=0
for i in $(strings "$1"); do
len="$(echo -n "$i" | wc -c)"
if (( len > max_len )); then
max_len="$len"
max_word="$i"
fi
done
echo "$1: '$max_word' ($max_len characters)"
fi
shift
done

Word Splitting

The command substitution $(strings "$1") is not surrounded by quotes to allow word splitting, generating a list of words for the loop.

2. Using Positional Parameters

#!/bin/bash
# longest-word2: find longest string in a file
for i; do
if [[ -r "$i" ]]; then
max_word=
max_len=0
for j in $(strings "$i"); do
len="$(echo -n "$j" | wc -c)"
if (( len > max_len )); then
max_len="$len"
max_word="$j"
fi
done
echo "$i: '$max_word' ($max_len characters)"
fi
done

Default Behavior

If the optional in words portion is omitted, for defaults to processing the positional parameters.

Improving the sys_info_page Script


Enhanced report_home_space function:

report_home_space () {
local format="%8s%10s%10s\n"
local i dir_list total_files total_dirs total_size user_name
if [[ "$(id -u)" -eq 0 ]]; then
dir_list=/home/*
user_name="All Users"
else
dir_list="$HOME"
user_name="$USER"
fi
echo "<h2>Home Space Utilization ($user_name)</h2>"
for i in $dir_list; do
total_files="$(find "$i" -type f | wc -l)"
total_dirs="$(find "$i" -type d | wc -l)"
total_size="$(du -sh "$i" | cut -f 1)"
echo "<H3>$i</H3>"
echo "<pre>"
printf "$format" "Dirs" "Files" "Size"
printf "$format" "----" "-----" "----"
printf "$format" "$total_dirs" "$total_files" "$total_size"
echo "</pre>"
done
return
}
This improved version:

Uses local variables


Applies conditional logic to set variables for later use
Utilizes a for loop to process directory information
Uses printf for formatted output

Superuser Privileges

The script checks for superuser privileges to determine whether to report on all
user home directories or just the current user's home directory.

34. strings and numbers


Parameter Expansion
Parameter expansion allows manipulating and expanding variables in bash scripts.

Basic Parameters
Simple form: $a
Braced form: ${a}

Braces are required when variable is adjacent to other text:

a="foo"
echo "${a}_file" # Outputs: foo_file

Expansions for Empty Variables

Expansion Description
${parameter:-word} Use default value if parameter is unset/empty
${parameter:=word} Assign default value if parameter is unset/empty
${parameter:?word} Display error if parameter is unset/empty
${parameter:+word} Use alternate value if parameter is set/non-empty

Examples:

foo=
echo ${foo:-"default value"} # Outputs: default value
echo ${foo:="new default"} # Assigns new value
echo ${foo:?"parameter empty"} # Displays error if empty
echo ${foo:+"alternate value"} # Uses alternate if set

Assignment Restriction

Positional and special parameters cannot be assigned using :=

Expansions Returning Variable Names

${!prefix*} and ${!prefix@} return the names of existing variables starting with prefix

echo ${!BASH*} # Lists all variables starting with BASH

String Operations

Expansion Description
${#parameter} String length
${parameter:offset} Extract substring from offset
${parameter:offset:length} Extract substring of given length
${parameter#pattern} Remove shortest match from start
${parameter##pattern} Remove longest match from start
${parameter%pattern} Remove shortest match from end
${parameter%%pattern} Remove longest match from end
${parameter/pattern/string} Replace first match
${parameter//pattern/string} Replace all matches
${parameter/#pattern/string} Replace match at beginning
${parameter/%pattern/string} Replace match at end

Examples:

str="This is a long string"


echo ${#str} # Outputs: 21
echo ${str:5} # Outputs: is a long string
echo ${str:5:4} # Outputs: is a

file="document.txt.bak"
echo ${file#*.} # Outputs: txt.bak
echo ${file##*.} # Outputs: bak

echo ${file%.*} # Outputs: document.txt


echo ${file%%.*} # Outputs: document

echo ${file/bak/txt} # Replaces bak with txt

Efficiency

Using parameter expansion for string manipulation can be more efficient than
external commands like sed or cut

Case Conversion

Expansion Description
${parameter,,pattern} Convert to lowercase
${parameter,pattern} Convert first char to lowercase
${parameter^^pattern} Convert to uppercase
${parameter^pattern} Convert first char to uppercase

Example:
str="aBcDeF"
echo ${str,,} # Outputs: abcdef
echo ${str,} # Outputs: aBcDeF
echo ${str^^} # Outputs: ABCDEF
echo ${str^} # Outputs: ABcDeF

Case Normalization

Case conversion is useful for normalizing input before database lookups or comparisons.

Arithmetic Evaluation and Expansion


Arithmetic expansion: $((expression))
Arithmetic evaluation: (( expression ))

Number Bases

Notation Description
number Decimal (base 10)
0number Octal (base 8)
0xnumber Hexadecimal (base 16)
base#number Arbitrary base

echo $((0xff)) # Outputs: 255 (hexadecimal)


echo $((2#11111111)) # Outputs: 255 (binary)

Operators

Operator Description
+, - Addition, subtraction
*, / Multiplication, integer division
** Exponentiation
% Modulo (remainder)

Integer Division

Shell arithmetic only works with integers. Results of division are always whole
numbers.
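
For example:

echo $((5 / 2))   # Outputs: 2 (the fractional part is discarded)
echo $((5 % 2))   # Outputs: 1 (the remainder)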

Assignment Operators

| Operator | Equivalent |
| --- | --- |
| parameter = value | Simple assignment |
| parameter += value | parameter = parameter + value |
| parameter -= value | parameter = parameter - value |
| parameter *= value | parameter = parameter * value |
| parameter /= value | parameter = parameter / value |
| parameter %= value | parameter = parameter % value |
| parameter++ | Post-increment |
| parameter-- | Post-decrement |
| ++parameter | Pre-increment |
| --parameter | Pre-decrement |

Increment/Decrement Behavior

Post-increment/decrement operators return the value before the operation, while pre-increment/decrement operators return the value after the operation.
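
For example:

foo=1
echo $((foo++))   # Outputs: 1 (the value before the increment)
echo $foo         # Outputs: 2

foo=1
echo $((++foo))   # Outputs: 2 (the value after the increment)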

Bit Operations

| Operator | Description |
| --- | --- |
| ~ | Bitwise negation |
| << | Left bitwise shift |
| >> | Right bitwise shift |
| & | Bitwise AND |
| \| | Bitwise OR |
| ^ | Bitwise XOR |
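
Since each left shift doubles a number, shifting is an easy way to produce powers of 2:

for ((i = 0; i < 8; ++i)); do echo $((1 << i)); done
# Outputs: 1 2 4 8 16 32 64 128 (one per line)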

Logical Operators

| Operator | Description |
| --- | --- |
| <= , >= | Less than or equal to, greater than or equal to |
| < , > | Less than, greater than |
| == , != | Equal to, not equal to |
| && , \|\| | Logical AND, logical OR |
| expr1 ? expr2 : expr3 | Ternary (if/else) comparison operator |

Logical Evaluation

In arithmetic context, 0 is false, non-zero is true.
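
The ternary operator acts as a compact if/else inside arithmetic. Note that assignments inside it must be wrapped in parentheses; this example toggles a between 0 and 1:

a=0
((a < 1 ? (a += 1) : (a -= 1)))
echo $a   # Outputs: 1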

bc - Arbitrary Precision Calculator


bc is an external program for complex calculations and floating-point arithmetic.

Basic usage:

bc < script.bc
bc <<< "2+2"

Interactive mode:

bc -q

Example script (loan calculation):


#!/bin/bash
# loan-calc: calculate monthly loan payments

principal=$1
interest=$2
months=$3

bc <<- EOF
scale = 10
i = $interest / 12
p = $principal
n = $months
a = p * ((i * ((1 + i) ^ n)) / (((1 + i) ^ n) - 1))
print a, "\n"
EOF
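
Assuming the script is saved as loan-calc and made executable, a hypothetical run for a 15-year, $135,000 loan at 7.75% would look roughly like this:

./loan-calc 135000 0.0775 180
# prints the monthly payment, roughly 1270.72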

bc Features

bc supports variables, loops, and user-defined functions. Refer to the man page
for full documentation.


35. arrays


Introduction to Arrays
Arrays are variables that hold multiple values
They are organized like a table with cells called elements
Each element is accessed using an index or subscript
Bash arrays are limited to a single dimension (like a spreadsheet with one
column)
Array support first appeared in bash version 2

Array Limitations
Unlike many programming languages that support multidimensional arrays, bash
arrays are limited to a single dimension.

Creating an Array
Arrays can be created in two ways:

1. Automatically when accessed:

a[1]=foo
echo ${a[1]}

2. Using the declare command:

declare -a a

Braces Usage

When accessing array elements, use braces to prevent the shell from attempting
pathname expansion:

echo ${a[1]}

Assigning Values to an Array


Two methods for assigning values:

1. Single value assignment:

name[subscript]=value

2. Multiple value assignment:


name=(value1 value2 ...)

Example with days of the week:

days=(Sun Mon Tue Wed Thu Fri Sat)

You can also assign values to specific elements:

days=([0]=Sun [1]=Mon [2]=Tue [3]=Wed [4]=Thu [5]=Fri [6]=Sat)

Accessing Array Elements


Arrays are useful for various data-management tasks. The book's example script,
hours, demonstrates how to use arrays to analyze file modification times in a
directory.

Key points from the hours script (a simplified sketch follows the list):

Initializes an array with 24 elements (one for each hour) set to 0
Uses stat to get file modification times
Increments array elements based on the hour of modification
Displays results in a formatted table
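
The real hours script is longer; the following is a minimal sketch of the same technique, assuming GNU stat (for the -c %y output format) and a simplified report:

#!/bin/bash
# hours-sketch: count file modification times by hour of the day

if [[ ! -d "$1" ]]; then
    echo "usage: ${0##*/} directory" >&2
    exit 1
fi

# one counter per hour, all starting at 0
for i in {0..23}; do hours[i]=0; done

for file in "$1"/*; do
    [[ -f "$file" ]] || continue
    mtime=$(stat -c %y "$file")   # e.g. "2024-05-01 14:07:33.000000000 +0000"
    hour=${mtime:11:2}            # characters 11-12 hold the hour
    hour=$((10#$hour))            # force base 10 so "08"/"09" are not read as octal
    ((hours[hour]++))
done

for i in {0..23}; do
    printf "%02d:00  %d\n" "$i" "${hours[i]}"
done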

Array Operations
Outputting Entire Array Contents
Use * or @ subscripts to access all elements:

animals=("a dog" "a cat" "a fish")


echo ${animals[*]}
echo ${animals[@]}

Quoting Difference
When quoted, "${animals[*]}" results in a single word, while
"${animals[@]}" preserves individual elements.

Determining Number of Array Elements


Use parameter expansion:

a[100]=foo
echo ${#a[@]}    # number of elements in the array
echo ${#a[100]}  # length of element 100 (the string "foo", so 3)

Finding Used Subscripts


To determine which elements exist:

${!array[*]}
${!array[@]}
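
For example, with a sparse array only the subscripts that actually exist are listed:

foo=([2]=a [4]=b [6]=c)
for i in "${!foo[@]}"; do echo $i; done
# Outputs: 2 4 6 (one per line)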

Adding Elements to the End


Use the += operator:

foo=(a b c)
echo "${foo[@]}"   # a b c
foo+=(d e f)
echo "${foo[@]}"   # a b c d e f

Sorting an Array
Example script for sorting:

#!/bin/bash
a=(f e d c b a)
a_sorted=($(for i in "${a[@]}"; do echo $i; done | sort))
echo "Sorted array: ${a_sorted[@]}"
Deleting an Array or Elements
Use the unset command:

unset foo        # delete the entire array
unset 'foo[2]'   # delete a single element (element 2)

Quoting Array Elements

When using unset with array elements, quote the element to prevent pathname
expansion:

unset 'foo[2]'

Associative Arrays
Supported in bash 4.0 and later
Use strings as indexes instead of integers
Must be created with declare -A

Example:

declare -A colors
colors["red"]="#ff0000"
colors["green"]="#00ff00"
colors["blue"]="#0000ff"

echo ${colors["blue"]}
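
The ${!array[@]} expansion also lists an associative array's keys, so every key/value pair can be walked (key order is not guaranteed):

for key in "${!colors[@]}"; do
    echo "$key -> ${colors[$key]}"
done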


36. exotica
Group Commands and Subshells
Group Commands
Syntax:

{ command1; command2; [command3; ...] }

Subshells
Syntax:

(command1; command2; [command3;...])

Syntax Requirements

For group commands, braces must be separated from the commands by a space, and
the last command must be terminated with a semicolon or newline before the
closing brace.

Key Differences:
1. Group commands execute in the current shell
2. Subshells execute in a child copy of the current shell

Uses:
Managing redirection for multiple commands
Combining results of several commands into a single stream for pipelines

Example:

{ ls -l; echo "Listing of foo.txt"; cat foo.txt; } > output.txt


Performance

Group commands are generally preferable to subshells as they are faster and
require less memory.
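
Another practical difference: variable assignments made inside a subshell disappear when the child process exits, while assignments inside a group command persist. A quick demonstration:

{ foo=bar; }
echo "$foo"   # bar

(foo=baz)
echo "$foo"   # still bar; the subshell's assignment was lost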

Process Substitution
Syntax for processes that produce standard output:

<(list)

Syntax for processes that intake standard input:

>(list)

Process Substitution Purpose

Process substitution allows treating the output of a subshell as an ordinary file
for redirection purposes.

Example:

read < <(echo "foo")
echo $REPLY   # Outputs: foo

Useful Application

Process substitution is often used with loops containing read, especially when
processing directory listings.
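
For example, feeding a directory listing to a read loop through process substitution keeps the loop, and therefore its variables, in the current shell. A minimal sketch:

count=0
while read -r file; do
    [[ -f "$file" ]] && ((count++))
done < <(ls)
echo "$count regular files"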

Traps
Traps allow scripts to respond to signals, ensuring proper termination and cleanup.
Syntax:

trap argument signal [signal...]

Example:

trap "echo 'I am ignoring you.'" SIGINT SIGTERM

Using Functions with Traps

It's common practice to specify a shell function as the command for a trap,
improving readability and maintainability.

Example with functions:

exit_on_signal_SIGINT () {
    echo "Script interrupted." >&2   # send the message to standard error
    exit 0
}

trap exit_on_signal_SIGINT SIGINT
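
A common pattern is to combine a trap with a cleanup function so temporary files are removed even if the script is interrupted. A small sketch (the filename template is only illustrative):

#!/bin/bash
tempfile=$(mktemp /tmp/trap-demo.$$.XXXXXXXXXX)

clean_up () {
    echo "Interrupted; removing $tempfile" >&2
    rm -f "$tempfile"
    exit 1
}

trap clean_up SIGINT SIGTERM

echo "Working with $tempfile (press Ctrl-C to interrupt)..."
sleep 30                 # stand-in for the script's real work
rm -f "$tempfile"        # normal cleanup when not interrupted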

Temporary Files

Security Concerns

When creating temporary files, especially for programs running with superuser
privileges, it's crucial to use nonpredictable filenames to avoid temp race
attacks.

Best Practices:
1. Use mktemp to create and name temporary files
2. For regular users, consider creating a temporary directory in the user's home
folder

Example using mktemp:

tempfile=$(mktemp /tmp/foobar.$$.XXXXXXXXXX)
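
A sketch of the second practice, using a hypothetical private directory and filename template:

tmpdir=$HOME/tmp
[[ -d "$tmpdir" ]] || mkdir -m 700 "$tmpdir"   # 700 = accessible only by the owner

tempfile=$(mktemp "$tmpdir/foobar.$$.XXXXXXXXXX")
echo "created $tempfile"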

Asynchronous Execution
The wait command allows a parent script to pause until a specified child process
finishes.

Example:

Parent script:

#!/bin/bash
async-child &       # launch the child script in the background
pid=$!              # $! holds the PID of the most recent background job
# ... other commands run here while the child works ...
wait "$pid"         # pause until the child exits

Child script:

#!/bin/bash
sleep 5             # stand-in for the child's real work

Named Pipes
Named pipes (FIFOs) allow communication between processes using file-like
interfaces.

Creating a Named Pipe:

mkfifo pipe1

Using Named Pipes:
Terminal 1:

ls -l > pipe1

Terminal 2:

cat < pipe1

Blocking Behavior

Writing to a named pipe blocks until another process reads from it, and vice
versa.
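
The same exchange can be tried in a single terminal by putting the writer in the background:

mkfifo pipe1
ls -l > pipe1 &   # the writer blocks until a reader attaches, so run it in the background
cat < pipe1       # the reader drains the pipe, releasing the writer
rm pipe1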

Further Reading
1. bash man page: "Compound Commands" and "EXPANSION" sections
2. Advanced Bash-Scripting Guide: Process substitution
3. Linux Journal articles on named pipes (September 1997 and March 2009)

