Operating System Practicals

Best practicals of Operating System for B.Tech students

Uploaded by

pavitrarao2004

Practical -1

Aim: Introduction to UNIX File System.


Description:
The UNIX file system is a hierarchical structure used by UNIX-based operating systems to organize
and store files and directories on storage devices such as hard drives and SSDs. It provides a way for
users and applications to access, manage, and manipulate data stored on a computer system.

The Unix file system is a logical method of organizing and storing large amounts of information in a
way that makes it easy to manage. A file is the smallest unit in which information is stored. All data
in Unix is organized into files, all files are organized into directories, and these directories form a
multi-level hierarchy known as the directory tree. At the very top of the file system is a directory
called “root”, which is represented by a “/”. All other files are “descendants” of root.

The Unix file system uses a directory hierarchy that allows for easy navigation and organization of
files. Directories can contain both files and other directories, and each file or directory has a unique
name.
Unix file system also uses a set of permissions to control access to files and directories. Each file
and directory has an owner and a group associated with it, and permissions can be set to allow or
restrict access to these entities.

Here are the key components and concepts of the UNIX file system:
1. Directory Structure:
• The file system is organized as a tree-like structure, with the root directory ("/") at the top.
• Directories can contain files and other directories, forming a hierarchy.
• Each directory (except the root) has a parent directory and can have multiple
subdirectories.

2. File Types:
• Regular files: Contain user data or program instructions.
• Directories: Containers for files and subdirectories.
• Special files: Represent devices such as printers, disks, and terminals.
• Links: Symbolic links (soft links) and hard links provide ways to reference files or
directories from multiple locations.

3. File Path:
• A file path is used to specify the location of a file or directory within the file system.
• Absolute path: Starts from the root directory (e.g., "/home/user/file.txt").
• Relative path: Relative to the current working directory (e.g., "Documents/report.doc").
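As a sketch, the two kinds of path can be compared in a scratch directory (all names below are made up for the demo):

```shell
# Build a small tree under a temporary directory.
base=$(mktemp -d)
mkdir -p "$base/home/user/Documents"
echo "quarterly report" > "$base/home/user/Documents/report.txt"

# Absolute path: spelled out from the top of the tree.
cat "$base/home/user/Documents/report.txt"

# Relative path: resolved against the current working directory.
cd "$base/home/user"
cat Documents/report.txt

cd / && rm -rf "$base"
```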

4. Permissions:
• Each file and directory has associated permissions that define who can read, write, and
execute them.
• Permissions are categorized for the owner of the file, the group associated with the file,
and others.
• The chmod command is used to modify permissions.
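A minimal chmod sketch on a throwaway file (created with mktemp, so no real file is touched):

```shell
f=$(mktemp)

chmod 640 "$f"        # owner: read/write, group: read, others: none
ls -l "$f"            # first column shows -rw-r-----

chmod u+x,g-r "$f"    # add execute for the owner, remove read from the group
ls -l "$f"            # first column now shows -rwx------

rm -f "$f"
```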

5. File System Navigation:


• Commands like cd (change directory), ls (list files), pwd (print working directory), and
mkdir (make directory) are used to navigate and manipulate the file system.
• cp (copy), mv (move), rm (remove), and touch (create empty file) are used for file
operations.
6. File System Mounting:
• UNIX systems can mount external storage devices and network shares into the file system
hierarchy.
• The mount command is used to attach a file system, and umount is used to detach it.

7. File System Types:


• UNIX supports various file system types such as ext4, XFS, Btrfs, and NFS (Network
File System), each with its own features and capabilities.

8. File System Utilities:


• Utilities like df (disk free), du (disk usage), and find (search for files) help manage and
monitor disk space and file content.
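A short sketch of these utilities (df on the root file system, du and find on a scratch directory):

```shell
# Disk space of the root file system, human-readable.
df -h /

# Size of a directory tree, and a search by name pattern inside it.
d=$(mktemp -d)
echo "some data" > "$d/notes.txt"
du -sh "$d"                 # total size of the tree
find "$d" -name '*.txt'     # prints the path of notes.txt

rm -rf "$d"
```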

UNIX-like systems, including Linux and macOS, adhere to these principles and provide powerful
tools and utilities for efficient file system management and data organization. Understanding the
UNIX file system is fundamental for system administrators, developers, and power users working
in these environments.

Directories or Files and their Description:

NAME DESCRIPTION

/ The slash / character alone denotes the root of the filesystem tree.

/bin Stands for “binaries” and contains certain fundamental utilities, such as ls or cp,
which are generally needed by all users.

/boot Contains all the files that are required for successful booting process.

/dev Stands for “devices”. Contains file representations of peripheral devices and
pseudo-devices.

/etc Contains system-wide configuration files and system databases. Originally it also
contained “dangerous maintenance utilities” such as init, but these have typically
been moved to /sbin or elsewhere.

/home Contains the home directories for the users.

/lib Contains system libraries, and some critical files such as kernel modules or device
drivers.

/media Default mount point for removable devices, such as USB sticks, media players, etc.

/mnt Stands for “mount”. Contains filesystem mount points. These are used, for
example, if the system uses multiple hard disks or hard disk partitions. It is also
often used for remote (network) filesystems, CD-ROM/DVD drives, and so on.

/proc procfs virtual filesystem showing information about processes as files.


/root The home directory for the superuser “root” – that is, the system administrator. This
account’s home directory is usually on the initial filesystem, and hence not in
/home (which may be a mount point for another filesystem) in case specific
maintenance needs to be performed, during which other filesystems are not
available. Such a case could occur, for example, if a hard disk drive suffers physical
failures and cannot be properly mounted.

/tmp A place for temporary files. Many systems clear this directory upon startup; it
might have tmpfs mounted atop it, in which case its contents do not survive a
reboot, or it might be explicitly cleared by a startup script at boot time.

/usr Originally the directory holding user home directories, but its use has changed. It
now holds executables, libraries, and shared resources that are not system critical,
like the X Window System, KDE, Perl, etc. However, on some Unix systems, some
user accounts may still have a home directory that is a direct subdirectory of /usr,
as was the default in Minix (on modern systems, these user accounts are often
related to server or system use and are not directly used by a person).

/usr/bin This directory stores all binary programs distributed with the operating system not
residing in /bin, /sbin or (rarely) /etc.

/usr/include Stores the development headers used throughout the system. Header files are
mostly used by the #include directive in C/C++ programming language.

/usr/lib Stores the required libraries and data files for programs stored within /usr or
elsewhere.

/var Short for “variable.” A place for files that may change often, especially in size; for
example e-mail sent to users on the system, or process-ID lock files.

/var/log Contains system log files.

/var/mail The place where all the incoming mails are stored. Users (other than root) can
access their own mail only. Often, this directory is a symbolic link to
/var/spool/mail.

/var/spool Spool directory. Contains print jobs, mail spools and other queued tasks.

/var/tmp A place for temporary files which should be preserved between system reboots.

Advantages of the Unix file System

• Hierarchical organization: The hierarchical structure of the Unix file system makes it easy to
organize and navigate files and directories.
• Robustness: The Unix file system is known for its stability and reliability. It can handle large
amounts of data without becoming unstable or crashing.
• Security: The Unix file system uses a set of permissions that allows administrators to control
who has access to files and directories.
• Compatibility: The Unix file system is widely used and supported, which means that files can be
easily transferred between different Unix-based systems.
Disadvantages of the Unix file System

• Complexity: The Unix file system can be complex to understand and manage, especially for
users who are not familiar with the command line interface.
• Steep Learning Curve: Users who are not familiar with Unix-based systems may find it difficult
to learn how to use the Unix file system.
• Lack of User-Friendly Interface: The Unix file system is primarily managed through the
command line interface, which may not be as user-friendly as a graphical user interface.
• Limited Support for Certain File Systems: While the Unix file system is compatible with
many file systems, there are some file systems that are not fully supported.
Practical -2
Aim: File and Directory Related Commands in UNIX.
Description:
UNIX provides a variety of commands for working with files and directories. These commands are
essential for managing files and directories, navigating the file system, and performing common
operations like copying, moving, and deleting files. Understanding their usage and options is
important for efficient file system management in UNIX-like environments.
Here are some commonly used file and directory-related commands in UNIX/Linux systems:
File Management Commands:
1. ls - List files and directories in the current directory.
• Example: ls -l (long listing), ls -a (show hidden files), ls -lh (human-readable file sizes).

2. cp - Copy files or directories.


• Example: cp file1.txt file2.txt (copy file1.txt to file2.txt), cp -r directory1 directory2 (copy
directory1 and its contents recursively).

3. mv - Move or rename files or directories.


• Example: mv oldname.txt newname.txt (rename file), mv file1.txt directory/ (move
file1.txt to directory).
4. rm - Remove (delete) files or directories.
• Example: rm file.txt (remove file), rm -r directory/ (remove directory and its contents
recursively).

5. touch - Create an empty file or update file timestamps.


• Example: touch newfile.txt (create a new empty file).

6. cat - Concatenate and display file contents.


• Example: cat file.txt (display contents of file.txt).

7. more and less - View file contents page by page.


• Example: more file.txt, less largefile.txt (useful for large files).

8. head and tail - Display the beginning or end of a file.


• Example: head file.txt (display first few lines), tail -n 10 file.txt (display last 10 lines).
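The file management commands above can be chained into one small demo (all file names are illustrative, run inside a scratch directory):

```shell
work=$(mktemp -d)
cd "$work"

touch notes.txt                 # create an empty file
echo "first line" > notes.txt   # give it some content
cp notes.txt backup.txt         # copy it
mv backup.txt archive.txt       # rename the copy
cat archive.txt                 # prints: first line
head -n 1 archive.txt           # first line of the file
rm notes.txt archive.txt        # remove both files

cd / && rm -rf "$work"
```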
Directory Management Commands:
1. pwd - Print the current working directory.
• Example: pwd.

2. cd - Change directory.
• Example: cd /path/to/directory (change to a specific directory), cd .. (move up one
directory).

3. mkdir - Create a new directory.


• Example: mkdir newdir (create a new directory named "newdir").

4. rmdir - Remove an empty directory.


• Example: rmdir emptydir (remove the empty directory "emptydir").
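The directory commands, likewise, as one short demo in a scratch directory:

```shell
work=$(mktemp -d)
cd "$work"

mkdir newdir          # create a directory
cd newdir
pwd                   # path ends in /newdir
cd ..                 # move up one level
rmdir newdir          # remove the (now empty) directory

cd / && rm -rf "$work"
```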

These commands provide a foundation for managing files and directories in a Unix environment.
They are powerful tools for navigating, organizing, and manipulating file systems from the command
line. Feel free to explore these commands further and combine them to suit your specific needs.
Practical -3
Aim: Essential UNIX Commands for working in UNIX environment.
Description:
Unix commands are essential for navigating the file system, managing processes, manipulating files,
configuring networks, and performing system administration tasks. They form the core toolkit for
working efficiently in a Unix/Linux environment, whether it's on servers, desktops, or other
computing devices running Unix-based systems like Linux.
Linux commands are a type of Unix command or shell procedure. They are the basic tools used to
interact with Linux on an individual level. Linux commands are used to perform a variety of tasks,
including displaying information about files and directories.
The Linux operating system is used on servers, desktops, and maybe even your smartphone. It has a
lot of command line tools that can be used for virtually everything on the system.

Essential Unix commands for working in a Unix environment:


1. ls command
The ls command is commonly used to identify the files and directories in the working directory.
This command can be used by itself without any arguments, and it will list the files and the
directories in the current working directory.

2. pwd command
The pwd command is mostly used to print the current working directory on your terminal.

3. mkdir command
The mkdir command allows us to create new directories from the terminal itself. The default syntax
is mkdir <directory name>, and the new directory will be created.

4. cd command
The cd command is used to navigate between directories. It requires either the full path or the
directory name, depending on your current working directory. If you run this command without
any arguments, it will take you to your home folder.
5. rmdir command
The rmdir command is used to permanently delete an empty directory.

6. cp command
The cp command of Linux is equivalent to copy-paste and cut-paste in Windows.

7. mv command
The mv command is used to move files and directories, and is also commonly used to rename files in Linux.
8. rm command
rm command in Linux is generally used to delete the files created in the directory.

9. touch command
The touch command creates an empty file; the syntax is touch <file name>.

10. cat command


The cat command is the simplest command to use when you want to see the contents of a particular
file. The only issue is that it simply dumps the entire file to your terminal.

11. clear command


The clear command is a standard command to clear the terminal screen.

12. ps command
ps command in Linux is used to check the active processes in the terminal.
13. grep command
The grep command is used to find a specific string in a series of outputs. For example, if you want
to find a string in the output of another command, you can use the syntax: <any command with
output> | grep "<string to find>"
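A minimal grep sketch (the fruit names and the temporary file are made up for the demo):

```shell
# Filter the output of another command through a pipe.
printf 'apple\nbanana\ncherry\n' | grep "an"    # prints: banana

# Search for a string directly in a file.
f=$(mktemp)
printf 'error: disk full\ninfo: all good\n' > "$f"
grep "error" "$f"                               # prints: error: disk full
rm -f "$f"
```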

14. echo command


The echo command in Linux is used to print text in the terminal.

15. whoami command


The whoami command prints the user name of the current user, which is extremely useful when
working on multiple systems. In general, if we are working with a single computer, we will not
require it as frequently as a network administrator would.

16. sort command


The sort command is generally used to sort the lines of a file or of command output, by default in
ascending order.

17. cal command


The cal command is not the most famous command in the terminal, but it displays the calendar for
a particular month (or a whole year) in the terminal.
18. whereis command
The whereis command in Linux is generally used to locate the binary, source, and manual page files
for the command typed after it.

19. df command
The df command in Linux reports the disk space usage of the mounted file systems.

20. wc command
The wc command in Linux reports the number of words, characters, lines, etc. of a file using a set of options.
• wc -w shows the number of words
• wc -l shows the number of lines
• wc -m shows the number of characters present in a file
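A quick sketch with a two-line throwaway file, so the counts are easy to verify by hand:

```shell
f=$(mktemp)
printf 'one two three\nfour five\n' > "$f"

wc -l "$f"   # 2 lines
wc -w "$f"   # 5 words
wc -m "$f"   # characters, including the newlines

rm -f "$f"
```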
Practical -4
Aim: I/O Redirection and Piping.
Description:
I/O redirection and piping are powerful concepts in Unix-like operating systems (including Linux)
that allow you to manipulate input and output streams of commands and files. Here's a brief
overview:

1. Standard Input (stdin): By default, the standard input for a command is the keyboard. You can
change this using input redirection (<) to read from a file instead. For example:
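For instance, sort can read a file through its standard input (the file is a temporary one, created just for the demo):

```shell
# Create a small unsorted file.
f=$(mktemp)
printf 'pear\napple\n' > "$f"

# < connects the file to sort's standard input instead of the keyboard.
sort < "$f"     # prints: apple, then pear
rm -f "$f"
```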

2. Standard Output (stdout): By default, the standard output for a command is the terminal. You can
redirect this output to a file using output redirection (>). For example:
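For instance, writing to a throwaway file created with mktemp:

```shell
out=$(mktemp)
echo "hello" > "$out"   # stdout goes into the file, nothing appears on the terminal
cat "$out"              # prints: hello
rm -f "$out"
```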

3. Appending Output: If you want to append output to a file instead of overwriting it, you can use
>> like this:
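A minimal sketch of the difference between > and >>:

```shell
log=$(mktemp)
echo "line 1" > "$log"    # > truncates the file, then writes
echo "line 2" >> "$log"   # >> appends to the end
wc -l < "$log"            # prints: 2
rm -f "$log"
```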

4. Standard Error (stderr): Error messages are typically sent to standard error. You can redirect
stderr separately using 2> or combine stdout and stderr using 2>&1. For example:
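A minimal sketch using a deliberately nonexistent path to produce an error message:

```shell
err=$(mktemp)
# Capture only stderr in a file; || true keeps the script going after the failure.
ls /no/such/path 2> "$err" || true
cat "$err"                                # the error message was captured here

# Send both stdout and stderr to the same place (here: discard both).
ls /no/such/path > /dev/null 2>&1 || true
rm -f "$err"
```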

5. Piping (|): Piping allows you to send the output of one command as the input to another
command. For example:
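A minimal sketch, counting the entries of a scratch directory:

```shell
# ls writes file names to stdout; the pipe feeds them to wc -l, which counts them.
d=$(mktemp -d)
touch "$d/a" "$d/b" "$d/c"
ls "$d" | wc -l    # prints: 3
rm -rf "$d"
```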

6. Combining Commands: You can combine multiple commands using pipes to create complex data
processing pipelines. For example:
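A three-stage pipeline as a sketch, with made-up input data:

```shell
# Sort the input, collapse duplicate lines, then count the distinct lines.
printf 'b\na\nb\nc\na\n' | sort | uniq | wc -l    # prints: 3
```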

These concepts are fundamental for scripting and command-line usage in Unix-like systems, as they
provide a flexible way to manipulate data and automate tasks efficiently.
Practical -6
Aim: Introduction of Processes in UNIX.
Description:
In UNIX-like operating systems, a process is a running instance of a program. It represents the
execution of a program's instructions in memory, along with its associated resources such as CPU
time, memory space, open files, and environment variables. Understanding processes is fundamental
to managing and controlling the execution of programs in UNIX systems. Here's an introduction to
processes in UNIX:

1. Process ID (PID):
• Each process is uniquely identified by a Process ID (PID), which is a non-negative
integer.
• The PID is assigned by the operating system when a process is created and remains
associated with the process until it terminates.

2. Parent and Child Processes:


• Processes in UNIX follow a hierarchical structure, where each process (except for the
initial process, often with PID 1) has a parent process.
• When a process creates another process, the new process becomes a child of the parent
process.

3. Process States:
• Processes in UNIX can be in various states during their lifecycle. Common process states
include:
o Running: The process is currently executing on the CPU.
o Stopped: The process has been paused (usually by a signal) and is not currently
executing.
o Sleeping: The process is waiting for an event to occur (e.g., I/O operation
completion) before resuming execution.
o Zombie: A terminated process that has not been fully cleaned up yet. It remains in
the process table until its parent process acknowledges its termination.

4. Process Control:
• UNIX provides several commands and tools for managing processes:
o ps: Lists information about active processes, including their PIDs, states, and
resource usage.
o top and htop: Interactive tools for monitoring system processes, CPU usage, and
memory usage.
o kill: Sends signals to processes, allowing for termination or manipulation of
process behavior.
o killall: Terminates processes by name rather than PID.
o pgrep and pkill: Find and kill processes based on various criteria such as name or
user.

5. Process Creation:
• Processes can be created in UNIX through various means:
o Running a command from the shell creates a new process for that command.
o Forking a process using system calls like fork() followed by exec() to replace the
child process with a new program.
o Background processes (& in shell) run independently of the terminal session.
o Daemons are background processes that typically provide system services.
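The shell side of process creation can be sketched directly; $! holds the PID of the most recent background job:

```shell
# Start a command in the background; the shell does not wait for it.
sleep 1 &
bgpid=$!                          # PID of the background child
echo "background PID: $bgpid"

# The parent can block until the child terminates.
wait "$bgpid"
echo "child $bgpid finished"
```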
6. Process Scheduling:
• UNIX operating systems use scheduling algorithms to manage the execution of multiple
processes on a single CPU or across multiple CPUs (in the case of multiprocessing or
multi-core systems).
• Scheduling policies may include time-sharing (round-robin), priority-based scheduling,
and real-time scheduling for time-sensitive tasks.
Understanding processes and their management is crucial for system administrators, developers, and
users working with UNIX systems to monitor system performance, troubleshoot issues, and control
program execution efficiently.
Practical- 5
Aim: Introduction to VI Editors.
Description:
Vi is a powerful and ubiquitous text editor that has been a staple in Unix and Unix-like operating
systems for decades. It is lightweight, fast, and offers a wide range of features for editing text files
directly in the terminal. An improved version of the vi editor is vim.
Modes in VI:
The vi editor has two modes:

• Command Mode: In command mode, actions are taken on the file. The vi editor starts in
command mode. Here, the typed words will act as commands in vi editor. To pass a
command, you need to be in command mode.
• Insert Mode: In insert mode, entered text will be inserted into the file. The Esc key will
take you to the command mode from insert mode.

By default, the vi editor starts in command mode. To enter text, we have to be in insert mode: just
type 'i' and we'll be in insert mode. After typing i, nothing will appear on the screen, but we'll be in
insert mode and can type anything.

To exit from insert mode, press the Esc key; we'll be directed to command mode. If we are not sure
which mode we are in, we can press the Esc key twice and we'll be in command mode.

Using VI:

The vi editor tool is an interactive tool as it displays changes made in the file on the screen while you
edit the file.
In vi editor we can insert, edit or remove a word as cursor moves throughout the file.
Commands are specified for each function like to delete it's x or dd.
The vi editor is case-sensitive. For example, p allows us to paste after the current line while P allows
us to paste before the current line.

1) VI Syntax:
To start editing a file with Vi, open your terminal and type vi filename to open an existing file or
vi to create a new file.
vi <fileName>
When you type the vi command with a file name in the terminal, the screen will clear and the
content of the file will be displayed. If there is no such file, a new file will be created and, once you
save, it will be written with the mentioned file name.

2) Basic Commands:

• Switching Modes:
• Press i to enter Insert mode (where you can start typing).
• Press Esc to exit Insert mode and return to Normal mode.
• Saving and Quitting:
• In Normal mode, type :w to save changes.
• Type :q to quit (if there are no unsaved changes).
• Combine them as :wq to save and quit in one command.

• Navigating and Editing:


• Use arrow keys or h, j, k, l in Normal mode to move left, down, up, and right,
respectively.
• x deletes the character under the cursor.
• dd deletes the entire line.
• yy yanks (copies) the current line.
• p pastes the content after the cursor.

Linux VI Example:
To start vi, open a terminal and type the vi command followed by a file name. If your file is in some
other directory, you can specify the file path. If your file doesn't exist, vi will create a new file with
the specified name at the given location.

Command Mode
This is what we'll see when we press enter after the above command. If we start typing, nothing
will appear, as we are in command mode. By default vi opens in command mode.

Insert Mode
To move to insert mode, press i. Now we can write anything; to move to the next line, press enter.
Once we are done typing, press the Esc key to return to command mode.
To save and Quit
We can save and quit the vi editor from command mode. Before writing a save or quit command,
we have to press the colon (:) key; the colon allows us to give instructions to vi.

To exit from vi, first ensure that it is in command mode. Now, type :wq and press enter; it will save
and quit vi.
If we type :q! instead, vi will quit without saving, discarding the changes made.

Thus, all our changes have been discarded.


Practical -7
Aim: Communication in UNIX and AWK.
Description:
Awk is a powerful tool for processing text-based data and is commonly used in shell scripting, data
manipulation, and text processing tasks. The awk command programming language requires no
compiling and allows the user to use variables, numeric functions, string functions, and logical
operators. Awk operates on text files, processing each line and applying specified patterns and
actions to manipulate and extract data.
Awk is a utility that enables a programmer to write tiny but effective programs in the form of
statements that define text patterns that are to be searched for in each line of a document and the
action that is to be taken when a match is found within a line. Awk is mostly used for pattern
scanning and processing. It searches one or more files to see if they contain lines that match the
specified patterns and then performs the associated actions.
Awk is abbreviated from the names of the developers – Aho, Weinberger, and Kernighan.
What Operations can AWK do?

• Scanning files line by line


• Splitting each input line into fields
• Comparing input lines and fields to patterns
• Performing specified actions on matching lines

AWK Command Usefulness:

• Changing data files


• Producing formatted reports

Programming Concepts for awk command:

• Format output lines


• Conditional and loops
• Arithmetic and string operations

AWK Syntax:

awk 'selection_criteria {action}' input-file > output-file

Here's a breakdown of each part of the AWK syntax:


• awk: This is the command to invoke AWK.

• 'selection_criteria {action}': This is the AWK program enclosed in single quotes. The
selection_criteria are the conditions or patterns that determine when the action should be
executed.

• input-file: This is the input file that AWK will process.

• > output-file: This part redirects the output of AWK to the specified output file (output-file).
If you don't specify an output file, AWK will print the output to the terminal by default.
Awk Commands:
Consider the text file employee.txt as the input file for all cases below:
$ cat > employee.txt
Rohan 26 Delhi 40000
Ajay 22 Mumbai 50000
Aman 29 Kolkata 35000
Siraj 30 Delhi 60000
Piyush 31 Chennai 27000
Taniya 26 West Bengal 30000
Priya 34 Rajasthan 40000
Vivek 35 Delhi 20000
Varun 24 Rajasthan 65000
Mansi 28 Delhi 15000

1. Printing all lines in a file:


If we wish to list all the lines and columns in a file, execute:-
awk '{print}' input-file
$ awk '{print}' employee.txt

2. Printing all lines that match a specific pattern:


To print all lines that match a specific pattern using AWK, we can use the following syntax:

$ awk '/pattern/ {print $0}' input-file

3. Splitting a Line Into Fields:


For each record i.e line, the awk command splits the record delimited by whitespace character by
default and stores it in the $n variables. If the line has 4 words, it will be stored in $1, $2, $3 and
$4 respectively. Also, $0 represents the whole line.

$ awk '{print $1 , $4}' employee.txt


4. Display Line Number:
The NR variable in AWK stands for "Number of Records" and represents the current line number
being processed. It is a built-in variable that can be used to display line numbers or perform actions
based on the line number. Remember that records are usually lines. Awk command performs the
pattern/action statements once for each record in a file.
Here's the syntax for using NR to display line numbers in AWK:

awk '{print NR, $0}' input-file

5. Display Line From 3 to 6:


Here is a general syntax for printing lines within a specific range using AWK:

awk 'NR >= start_line && NR <= end_line {print}' input-file


awk 'NR==3, NR==6 {print NR, $0}' employee.txt

6. Display Last Field:


To display the last field of each line using AWK, you can use the $NF variable, where NF
represents the number of fields in the current line. Here's the syntax:

awk '{print $NF}' input-file


7. To count the lines in a file:
To count the lines in a file using AWK, we can use the built-in NR variable, which represents the
total number of records (lines) processed by AWK. Here's the syntax:

awk 'END {print NR}' input-file

In this syntax:

• END is a special pattern in AWK that indicates the end of processing.

• {print NR} is the action block that prints the value of NR, which corresponds to the total number
of lines processed.
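The one-liners above can be exercised end to end; this sketch recreates a shortened employee.txt (first three records only) in the current directory:

```shell
cat > employee.txt <<'EOF'
Rohan 26 Delhi 40000
Ajay 22 Mumbai 50000
Aman 29 Kolkata 35000
EOF

awk '{print $1, $4}' employee.txt        # first and fourth field of every line
awk '/Delhi/ {print $0}' employee.txt    # only lines matching the pattern
awk '{print NR, $0}' employee.txt        # each line prefixed with its number
awk '{print $NF}' employee.txt           # last field of each line
awk 'END {print NR}' employee.txt        # total number of lines: 3

rm -f employee.txt
```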
Practical -8
Aim: Introduction of the concept of Shell Scripting.
Description:
Shell scripting is a powerful concept in the world of computer programming and system
administration. It refers to writing scripts or programs using a shell, which is a command-line
interpreter for Unix-like operating systems. The most common shell used for scripting is the Bourne
Again Shell (bash), although other shells like Zsh, Ksh, and Csh are also used.

Shell scripting allows users to automate repetitive tasks, execute commands in sequence, and
perform complex operations by combining multiple commands into a script file. These scripts are
written in plain text and can be executed directly by the shell.

Here are some key concepts and features of shell scripting:


1. Syntax: Shell scripts use a syntax similar to the commands entered in the shell. They include
commands, control structures (such as loops and conditionals), variables, functions, and comments.

2. Variables: Shell scripts use variables to store data or values that can be manipulated or used later
in the script. Variables can be defined, assigned values, and accessed within the script.

3. Control Structures: Shell scripts support various control structures like if statements, loops (for,
while), case statements, and functions. These structures allow for conditional execution and
looping within the script.

4. Command Execution: Shell scripts can execute system commands, external programs, and other
scripts. They can capture the output of commands and use it as input for further processing.

5. File Permissions: Shell scripts need executable permissions (chmod +x script.sh) to be run as
standalone programs. Users can execute shell scripts directly from the command line or by
specifying their path.

6. Portability: Shell scripts written in a compatible shell (such as Bash) can be executed on different
Unix-like systems without modification, promoting cross-platform compatibility.

7. Input/Output Redirection: Shell scripts can redirect input from files or other commands (<),
redirect output to files or other commands (>), and handle error output (2>).

8. Script Execution: Shell scripts can be executed directly from the command line by specifying the
script file (./script.sh) or by using the shell interpreter explicitly (bash script.sh).

9. Environment Variables: Shell scripts can access and modify environment variables, which are
variables that affect the behavior of the shell and programs running within it.

10. Error Handling: Shell scripts can handle errors using exit codes, error messages, and
error-handling mechanisms like trap for signal handling.

11. File Operations: Shell scripts can perform file operations such as reading from files, writing to
files, copying, moving, and deleting files and directories.
Why do we need shell scripts?
There are many reasons to write shell scripts:
• To avoid repetitive work and automation
• System admins use shell scripting for routine backups.
• System monitoring
• Adding new functionality to the shell etc.

Some Advantages of shell scripts:


• The command and syntax are exactly the same as those directly entered in the command
line, so programmers do not need to switch to entirely different syntax
• Writing shell scripts is much quicker
• Quick start
• Interactive debugging etc.

Some Disadvantages of shell scripts:


• Prone to costly errors; a single mistake can change the command, which might be
harmful.
• Slow execution speed
• Design flaws within the language syntax or implementation
• Not well suited for large and complex tasks
• Provides minimal data structures, unlike other scripting languages.

Shell scripting is widely used for system administration tasks, automation, batch processing, and
creating utility scripts. It provides a flexible and efficient way to interact with the operating system
and perform tasks programmatically from the command line. Learning shell scripting is beneficial
for anyone working with Unix-like systems or wanting to automate repetitive tasks efficiently.

Simple demo of Shell Scripting using bash shell:


Example 1
Open Terminal:
• Launch the Terminal application on your Ubuntu system. You can do this by searching for
"Terminal" in the application menu or by using the keyboard shortcut Ctrl + Alt + T.

Create a New File:


• Create a new file in the text editor and give it a meaningful name, such as f1_file.sh. The .sh
extension is commonly used for shell scripts.
Start the Script:
• Begin your script with a shebang line (#!) that specifies the interpreter to use. For Bash
scripts, use #!/bin/bash.
• This line tells the system to use the Bash shell to execute the script.

Write the Script Content:


• After the shebang line, you can start writing the actual script content.
• In this case, we want to print a "Hello, World!" message.
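The entire script body for this step can be as small as the following (saved as f1_file.sh, matching the walkthrough):

```shell
#!/bin/bash
# f1_file.sh — the whole script body for this practical
greeting="Hello, World!"   # the message the practical asks us to print
echo "$greeting"
```

After saving the file, `chmod +x f1_file.sh` makes it executable and `./f1_file.sh` runs it.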

Set Execution Permissions (Optional):


• If you plan to execute the script directly from the command line, you may need to set
execution permissions.
• Open your terminal and navigate to the directory where your script is located.
• Use the chmod command to make the script executable.

Execute the Script:


• Now, you can execute your script from the terminal (assuming execution permissions
are set) by using: ./f1_file.sh

Verify Output:
• After running the script, you should see the "Hello, World!" message printed to the
terminal.
Example 2
Open Terminal:
• Launch the Terminal application on your Ubuntu system.
Create a New File:
• Create a new file in the text editor, such as f3_variable.sh.

Start the Script:


• Begin your script with a shebang line (#!) that specifies the interpreter to use. For Bash
scripts, use #!/bin/bash.
• This line tells the system to use the Bash shell to execute the script.
Write the Script Content:
• After the shebang line, you can start writing the actual script content.
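A minimal body for f3_variable.sh might look like the following. The variable names and message are our own illustration, since the practical does not reproduce the exact script:

```shell
#!/bin/bash
# f3_variable.sh — hypothetical variable demo; names are our own choice
name="Ubuntu"                    # assignment: no spaces around =
greeting="Hello from $name!"     # variables expand inside double quotes
echo "$greeting"
```

Running the script prints the value stored in greeting, demonstrating assignment and expansion.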

Set Execution Permissions (Optional):


• Open your terminal and navigate to the directory where your script is located.
• Use the chmod command to make the script executable.

Execute the Script:


• Now, you can execute your script from the terminal (assuming execution permissions are set)
by using: ./f3_variable.sh

Verify Output:
• After running the script, you should see the value stored in the script's variable echoed to the terminal.
Practical -9
Aim: Decision and Iterative Statements in Shell Scripting.
Description:
A Shell script is a plain text file. This file contains different commands for step-by-step execution.
These commands can be written directly into the command line, but from a reusability perspective
it is useful to store all of the inter-related commands for a specific task in a single file. We can use
that file for executing the set of commands one or more times as per our requirements.
In shell scripting, decision-making and iterative statements are essential for controlling the flow of
your script based on certain conditions or for repeating a set of instructions multiple times.
1. Decision Statements:
• if statement: The if statement in shell scripting allows you to make decisions based on
conditions. It checks whether a specified condition is true or false and executes a block of
code only if the condition is true.
The syntax will be –

if [ condition ]; then
# Code block to execute if the condition is true
fi

• if-else statement: The if-else statement extends the functionality of if by providing an


alternative block of code to execute if the condition is false. This is useful when you need to
handle both true and false outcomes.
The syntax will be –

if [ condition ]; then
# Code block to execute if the condition is true
else
# Code block to execute if the condition is false
fi

• if-elif-else statement: The if-elif-else statement is used when you have multiple conditions to
check. It allows you to evaluate multiple conditions sequentially and execute the
corresponding block of code for the first condition that is true, or the else block if none of the
conditions are true.
The syntax will be –

if [ condition1 ]; then
# Code block to execute if condition1 is true
elif [ condition2 ]; then
# Code block to execute if condition2 is true
else
# Code block to execute if none of the conditions are true
fi
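As a concrete illustration of the syntax above (the marks value and the grade thresholds are our own example, not part of the practical):

```shell
#!/bin/bash
# Hypothetical grading example for if-elif-else
marks=72   # hard-coded here; a real script might read this from the user

if [ "$marks" -ge 80 ]; then
    grade="A"         # first true condition wins
elif [ "$marks" -ge 60 ]; then
    grade="B"         # checked only if the first condition was false
else
    grade="C"         # runs when no condition above was true
fi
echo "Grade: $grade"
```

With marks=72, the first test fails and the elif branch assigns grade B.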

2. Iterative Statements:
• for loop: A for loop in shell scripting is used to iterate over a list of items. It allows you to
perform a set of commands repeatedly for each item in the list.
The syntax will be –

for item in list; do


# Code block to execute for each item
done

• while loop: The while loop executes a block of code as long as a specified condition remains
true. It's useful when you want to repeat a set of commands until a condition changes.
The syntax will be –

while [ condition ]; do
# Code block to execute while the condition is true
done

• until loop: The until loop is similar to the while loop but executes its block of code until a
specified condition becomes true. It's useful when you want to repeat a set of commands until
a condition switches from false to true.
The syntax will be –

until [ condition ]; do
# Code block to execute until the condition becomes true
done
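None of the worked examples later in this practical uses until, so here is a small sketch; the countdown values are arbitrary:

```shell
#!/bin/bash
# Hypothetical countdown with until: the body runs while the condition is FALSE
count=3
until [ "$count" -eq 0 ]; do
    echo "count is $count"
    count=$((count - 1))      # move toward the terminating condition
done
echo "liftoff"
```

The loop prints count is 3, 2, 1 and stops once count reaches 0, the point where the condition becomes true.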

Each of these statements serves a specific purpose in controlling the flow of your shell script,
whether it's making decisions based on conditions (if statements) or repeating commands multiple
times (for, while, until loops).

String-based Condition

String-based conditions let a shell script make decisions by comparing strings as well. Here is a
descriptive table with all the operators –

Operator Description

== Returns true if the strings are equal (inside [ ], the portable form is a single =)

!= Returns true if the strings are not equal

-n Returns true if the string to be tested is not null

-z Returns true if the string to be tested is null
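The four string operators in the table can be demonstrated in a few lines (the sample strings are our own choice):

```shell
#!/bin/bash
# Hypothetical demo of the string tests from the table above
a="unix"
b=""

if [ "$a" = "unix" ]; then result1="equal"; else result1="not equal"; fi
if [ -n "$a" ]; then result2="non-empty"; fi   # -n: true for a non-null string
if [ -z "$b" ]; then result3="empty"; fi       # -z: true for a null string

echo "$result1 / $result2 / $result3"
```

Quoting the variables (e.g. "$a") keeps the test valid even when a string is empty or contains spaces.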

Arithmetic-based Condition

Arithmetic operators are used for checking arithmetic-based conditions, such as less than, greater
than, and equal to. Here is a descriptive table with all the operators –
Operator Description

-eq Equal

-ge Greater Than or Equal

-gt Greater Than

-le Less Than or Equal

-lt Less Than

-ne Not Equal

Example 1
$ vi p1_file.sh
#!/bin/bash
Country="India"
if [ "$Country" = "India" ]; then
echo "You are Indian. India is a land with diverse cultures."
fi
$ chmod +x p1_file.sh
$ ./p1_file.sh

Output

Example 2
$ vi p1_file.sh
#!/bin/bash
echo "Enter your age-"
read Age
if [ "$Age" -ge 18 ]; then
echo "You can vote"
else
echo "You cannot vote"
fi
$ chmod +x p1_file.sh
$ ./p1_file.sh

Output
Example 3
#!/bin/bash

# Function to display the meaning of a traffic signal color


traffic_signal() {
local color=$1
case $color in
1)
echo "RED: Stop"
;;
2)
echo "YELLOW: Proceed with caution"
;;
3)
echo "GREEN: Go"
;;
*)
echo "Invalid input. Please enter a valid option."
;;
esac
}
# Main script starts here
echo "Traffic Signal Simulation"
echo "Select a color:"
echo "1. RED"
echo "2. YELLOW"
echo "3. GREEN"
read -p "Enter your choice (1, 2, or 3): " choice
# Call the traffic_signal function based on user input
traffic_signal $choice

Output

Example 4
$ vi p1_file.sh
#!/bin/bash
echo "Enter a number: "
read number
# Initialize the factorial
factorial=1
while [ $number -gt 1 ]; do
factorial=$((factorial * number))
number=$((number - 1))
done
# Print the factorial
echo "The factorial is $factorial"
$ chmod +x p1_file.sh
$ ./p1_file.sh

Output

Example 5
$ vi p1_file.sh
#!/bin/bash
echo "Enter the number-"
read n
for (( i=1; i<=10; i++))
do
res=`expr $i \* $n`
echo "$n * $i = $res"
done
# end of for loop
$ chmod +x p1_file.sh
$ ./p1_file.sh

Output
Practical -10
Aim: Writing the Shell scripts for unknown problems.
Description:
Shell scripting is a powerful tool for system administrators, developers, and power users to automate
tasks, create utilities, and manage the behavior of Unix-like systems efficiently from the command
line. It's a fundamental skill for anyone working in a Unix/Linux environment. Shell scripting
involves writing a series of commands, often with control structures like loops and conditional
statements, in a plain text file with a .sh extension. These scripts are then executed by the shell
interpreter, which interprets and executes the commands in sequence.
Writing shell scripts for unknown problems involves creating a flexible and adaptable script
structure that can handle a variety of scenarios or tasks.

Here are some general guidelines and best practices for writing such scripts:
1. Define Clear Goals:
• Understand the problem domain or the types of tasks the script might encounter.
• Identify the inputs, outputs, and any intermediate steps or conditions.
2. Modularize Code:
• Use functions to modularize code and separate different parts of the script logically.
• Functions make your code more organized, reusable, and easier to maintain.
3. Use Command-Line Arguments:
• Accept command-line arguments to make the script more flexible and configurable.
• Handle command-line arguments using getopts or by directly accessing $1, $2, etc., as
needed.
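The getopts approach mentioned above can be sketched as follows; the option letters -n and -v and the simulated argument list are hypothetical choices for this demo:

```shell
#!/bin/bash
# Hypothetical getopts sketch: -n <name> takes an argument, -v is a flag
name="world"
verbose=0

parse_args() {
    local OPTIND=1 opt            # keep getopts state local to the function
    while getopts "n:v" opt; do   # "n:" means -n expects an argument
        case $opt in
            n) name=$OPTARG ;;
            v) verbose=1 ;;
            *) echo "usage: $0 [-n name] [-v]" >&2; return 1 ;;
        esac
    done
}

parse_args -n Linux -v            # simulated command line for the demo
echo "name=$name verbose=$verbose"
```

In a real script the call would be `parse_args "$@"` so the user's actual command-line arguments are parsed.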
4. Error Handling:
• Implement error handling mechanisms such as checking for valid inputs, handling
exceptions, and providing meaningful error messages.
• Use exit codes (exit 1, exit 0, etc.) to indicate success or failure of specific operations
within the script.
5. Logging and Output:
• Use echo statements or other output commands to provide informative messages and status
updates during script execution.
• Consider logging important information or errors to a log file for troubleshooting purposes.
6. Conditional Logic:
• Use conditional statements (if, case) to handle different scenarios and make decisions based
on conditions.
• Handle edge cases and unexpected inputs gracefully to prevent script failures.
7. Iterative Processing:
• Utilize loops (for, while) for iterative processing, such as looping through files, directories,
or data sets.
• Ensure proper handling of loop termination conditions to avoid infinite loops.
8. Documentation and Comments:
• Include comments and documentation within the script to explain the purpose of each
section, function, or variable.
• Document input formats, expected outputs, and any assumptions made by the script.
9. Testing and Validation:
• Test the script with different inputs and edge cases to ensure it behaves as expected.
• Validate user inputs or external data sources to prevent potential security or reliability
issues.
10. Version Control:
• If the script is part of a larger project or used collaboratively, consider using version control
(e.g., Git) to track changes and manage revisions.
Example Code
#!/bin/bash

# Function to perform arithmetic operations based on user input


perform_task() {
local operation=$1
local num1=$2
local num2=$3
local result=0

case $operation in
add)
result=$((num1 + num2))
echo "Addition result: $result"
;;
sub)
result=$((num1 - num2))
echo "Subtraction result: $result"
;;
mul)
result=$((num1 * num2))
echo "Multiplication result: $result"
;;
div)
if [ $num2 -ne 0 ]; then
result=$((num1 / num2))
echo "Division result: $result"
else
echo "Error: Division by zero"
fi
;;
*)
echo "Invalid operation"
;;
esac
}

# Main script starts here


echo "Welcome to the Arithmetic Operation Script"

# Example usage of command-line arguments


if [ $# -eq 3 ]; then
perform_task $1 $2 $3
else
# Prompt user for input
read -p "Enter operation (add/sub/mul/div): " op
read -p "Enter first number: " num1
read -p "Enter second number: " num2
perform_task $op $num1 $num2
fi

echo "Script execution completed"


Output
Sample Input 1
add
4
5

Sample Input 2
add
8
3

Sample Input 3
mul
3
4
