Operating System Practicals
The Unix File System is a logical method of organizing and storing large amounts of information in a
way that makes it easy to manage. A file is the smallest unit in which information is stored. The Unix
file system has several important features. All data in Unix is organized into files, and all files are
organized into directories. These directories are arranged into a tree-like structure called the file
system. Files in a Unix system are organized into a multi-level hierarchy known as a directory
tree. At the very top of the file system is a directory called “root”, which is represented by a “/”. All
other files are “descendants” of root.
The Unix file system uses a directory hierarchy that allows for easy navigation and organization of
files. Directories can contain both files and other directories, and each file or directory has a unique
name.
Unix file system also uses a set of permissions to control access to files and directories. Each file
and directory has an owner and a group associated with it, and permissions can be set to allow or
restrict access to these entities.
Here are the key components and concepts of the UNIX file system:
1. Directory Structure:
       • The file system is organized as a tree-like structure, with the root directory ("/") at the top.
       • Directories can contain files and other directories, forming a hierarchy.
       • Each directory (except the root) has a parent directory and can have multiple
           subdirectories.
2. File Types:
       • Regular files: Contain user data or program instructions.
       • Directories: Containers for files and subdirectories.
       • Special files: Represent devices such as printers, disks, and terminals.
       • Links: Symbolic links (soft links) and hard links provide ways to reference files or
          directories from multiple locations.
3. File Path:
       • A file path is used to specify the location of a file or directory within the file system.
       • Absolute path: Starts from the root directory (e.g., "/home/user/file.txt").
       • Relative path: Relative to the current working directory (e.g., "Documents/report.doc").
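A short terminal session illustrating the two path styles (the directory names here are made up):

```shell
# Create a small hierarchy to navigate (hypothetical names)
mkdir -p demo/docs
# An absolute path starts from the root; $PWD expands to the current directory
cd "$PWD/demo"
# A relative path is resolved against the current working directory
cd docs
pwd
```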
4. Permissions:
      • Each file and directory has associated permissions that define who can read, write, and
         execute them.
      • Permissions are categorized for the owner of the file, the group associated with the file,
         and others.
      • The chmod command is used to modify permissions.
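As a quick sketch (the file name is hypothetical), permissions can be inspected with ls -l and changed with chmod:

```shell
# Create a file and restrict it: owner read/write, group read-only, others none
touch report.txt
chmod 640 report.txt
ls -l report.txt   # the first column shows -rw-r-----
```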
 UNIX-like systems, including Linux and macOS, adhere to these principles and provide powerful
 tools and utilities for efficient file system management and data organization. Understanding the
 UNIX file system is fundamental for system administrators, developers, and power users working
 in these environments.
NAME           DESCRIPTION
/              The slash / character alone denotes the root of the filesystem tree.
/bin           Stands for “binaries” and contains certain fundamental utilities, such as
               ls or cp, which are generally needed by all users.
/boot          Contains all the files required for a successful boot process.
/dev           Stands for “devices”. Contains file representations of peripheral devices
               and pseudo-devices.
/etc           Contains system-wide configuration files and system databases. Originally
               also contained “dangerous maintenance utilities” such as init, but these
               have typically been moved to /sbin or elsewhere.
/lib           Contains system libraries, and some critical files such as kernel modules
               or device drivers.
/media         Default mount point for removable devices, such as USB sticks, media
               players, etc.
/mnt           Stands for “mount”. Contains filesystem mount points. These are used, for
               example, if the system uses multiple hard disks or hard disk partitions.
               It is also often used for remote (network) filesystems, CD-ROM/DVD
               drives, and so on.
/tmp           A place for temporary files. Many systems clear this directory upon
               startup; it might have tmpfs mounted atop it, in which case its contents
               do not survive a reboot, or it might be explicitly cleared by a startup
               script at boot time.
/usr           Originally the directory holding user home directories, but its use has
               changed. It now holds executables, libraries, and shared resources that
               are not system critical, like the X Window System, KDE, Perl, etc.
               However, on some Unix systems, some user accounts may still have a home
               directory that is a direct subdirectory of /usr, such as the default in
               Minix. (On modern systems, these user accounts are often related to
               server or system use, and not directly used by a person.)
/usr/bin       Stores all binary programs distributed with the operating system not
               residing in /bin, /sbin or (rarely) /etc.
/usr/include   Stores the development headers used throughout the system. Header files
               are mostly used by the #include directive in C/C++ programming.
/usr/lib       Stores the required libraries and data files for programs stored within
               /usr or elsewhere.
/var           Short for “variable”. A place for files that may change often, especially
               in size; for example, e-mail sent to users on the system, or process-ID
               lock files.
/var/mail      The place where all incoming mail is stored. Users (other than root) can
               access their own mail only. Often, this directory is a symbolic link to
               /var/spool/mail.
/var/spool     Spool directory. Contains print jobs, mail spools, and other queued tasks.
/var/tmp       A place for temporary files which should be preserved between system
               reboots.
    Advantages of the Unix File System
•    Hierarchical organization: The hierarchical structure of the Unix file system makes it easy to
     organize and navigate files and directories.
•    Robustness: The Unix file system is known for its stability and reliability. It can handle large
     amounts of data without becoming unstable or crashing.
•    Security: The Unix file system uses a set of permissions that allows administrators to control
     who has access to files and directories.
•    Compatibility: The Unix file system is widely used and supported, which means that files can be
     easily transferred between different Unix-based systems.
    Disadvantages of the Unix File System
•    Complexity: The Unix file system can be complex to understand and manage, especially for
     users who are not familiar with the command line interface.
•    Steep Learning Curve: Users who are not familiar with Unix-based systems may find it difficult
     to learn how to use the Unix file system.
•    Lack of User-Friendly Interface: The Unix file system is primarily managed through the
     command line interface, which may not be as user-friendly as a graphical user interface.
•    Limited Support for Certain File Systems: While the Unix file system is compatible with
     many file systems, there are some file systems that are not fully supported.
                                        Practical -2
Aim: File and Directory Related Commands in UNIX.
Description:
UNIX provides a variety of commands for working with files and directories. These commands are
essential for managing files and directories, navigating the file system, and performing common
operations like copying, moving, and deleting files. Understanding their usage and options is
important for efficient file system management in UNIX-like environments.
Here are some commonly used file and directory-related commands in UNIX/Linux systems:
File Management Commands:
1. ls - List files and directories in the current directory.
        • Example: ls -l (long listing), ls -a (show hidden files), ls -lh (human-readable file sizes).
2. cd - Change directory.
       • Example: cd /path/to/directory (change to a specific directory), cd .. (move up one
          directory).
These commands provide a foundation for managing files and directories in a Unix environment.
They are powerful tools for navigating, organizing, and manipulating file systems from the command
line. Feel free to explore these commands further and combine them to suit your specific needs.
                                       Practical -3
Aim: Essential UNIX Commands for working in the UNIX environment.
Description:
Unix commands are essential for navigating the file system, managing processes, manipulating files,
configuring networks, and performing system administration tasks. They form the core toolkit for
working efficiently in a Unix/Linux environment, whether it's on servers, desktops, or other
computing devices running Unix-based systems like Linux.
Linux commands are a type of Unix command or shell procedure. They are the basic tools used to
interact with Linux on an individual level. Linux commands are used to perform a variety of tasks,
including displaying information about files and directories.
Linux operating system is used on servers, desktops, and maybe even your smartphone. It has a lot
of command line tools that can be used for virtually everything on the system.
2. pwd command
  The pwd command is mostly used to print the current working directory on your terminal.
3. mkdir command
  The mkdir command allows us to create new directories from the terminal. The default syntax is mkdir
  <directory name> and the new directory will be created.
4. cd command
  The cd command is used to navigate between directories. It requires either the full path or the
  directory name, depending on your current working directory. If you run this command without
  any options, it will take you to your home folder.
5. rmdir command
  The rmdir command is used to permanently delete an empty directory.
6. cp command
  The cp command copies files and directories; it is the Linux equivalent of copy-and-paste in Windows.
7. mv command
  The mv command is used to move files in Linux, and is also commonly used to rename them.
8. rm command
  rm command in Linux is generally used to delete the files created in the directory.
9. touch command
  The touch command creates an empty file. The syntax is: touch <file name>
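The commands above can be combined into one short session (all the names are hypothetical):

```shell
mkdir practice              # create a working directory
cd practice
touch notes.txt             # create an empty file
cp notes.txt backup.txt     # copy it
mv backup.txt archive.txt   # rename the copy
ls                          # lists: archive.txt  notes.txt
rm notes.txt archive.txt    # delete both files
cd .. && rmdir practice     # remove the now-empty directory
```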
12. ps command
  ps command in Linux is used to check the active processes in the terminal.
13. grep command
  The grep command is used to find a specific string in a series of outputs. For example, if you want
  to find a string in a file, you can use the syntax: <any command with output> | grep "<string to
  find>"
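A minimal grep sketch on a throwaway file:

```shell
# Build a small test file
printf 'apple\nbanana\ncherry\n' > fruits.txt
# Search the file directly for the string "an"
grep "an" fruits.txt             # prints: banana
# The pipe form from the syntax above
cat fruits.txt | grep "an"       # prints: banana
```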
19. df command
  The df command in Linux reports details of file system disk space usage.
20. wc command
  wc command in Linux indicates the number of words, characters, lines, etc using a set of options.
• wc -w shows the number of words
• wc -l shows the number of lines
• wc -m shows the number of characters present in a file
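A quick demonstration of the wc options on a two-line file (the contents are made up):

```shell
printf 'one two three\nfour five\n' > sample.txt
wc -l sample.txt   # 2 sample.txt  (lines)
wc -w sample.txt   # 5 sample.txt  (words)
wc -m sample.txt   # 24 sample.txt (characters, newlines included)
```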
                                        Practical -4
 Aim: I/O Redirection and Piping.
 Description:
 I/O redirection and piping are powerful concepts in Unix-like operating systems (including Linux)
 that allow you to manipulate input and output streams of commands and files. Here's a brief
 overview:
 1. Standard Input (stdin): By default, the standard input for a command is the keyboard. You can
    change this using input redirection (<) to read from a file instead. For example: sort < names.txt
 2. Standard Output (stdout): By default, the standard output for a command is the terminal. You can
    redirect this output to a file using output redirection (>). For example: ls -l > listing.txt
 3. Appending Output: If you want to append output to a file instead of overwriting it, you can use
    >> like this: echo "new entry" >> log.txt
 4. Standard Error (stderr): Error messages are typically sent to standard error. You can redirect
    stderr separately using 2> or combine stdout and stderr using 2>&1. For example: make 2> errors.log
 5. Piping (|): Piping allows you to send the output of one command as the input to another
    command. For example: ls -l | grep ".txt"
 6. Combining Commands: You can combine multiple commands using pipes to create complex data
    processing pipelines. For example: cat access.log | grep "error" | sort | uniq -c
    (The file names in these examples are illustrative.)
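Putting the pieces together, a worked example (the log file and its contents are made up):

```shell
# Create a small log file with output redirection
printf 'error: disk full\ninfo: started\nerror: timeout\n' > app.log
# Append one more line instead of overwriting
echo 'info: done' >> app.log
# Count the error lines with a pipeline
grep "error" app.log | wc -l        # prints: 2
# Read from a file with input redirection, write the result to another file
sort < app.log > sorted.log
```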
These concepts are fundamental for scripting and command-line usage in Unix-like systems, as they
provide a flexible way to manipulate data and automate tasks efficiently.
                                     Practical -6
Aim: Introduction of Processes in UNIX.
Description:
In UNIX-like operating systems, a process is a running instance of a program. It represents the
execution of a program's instructions in memory, along with its associated resources such as CPU
time, memory space, open files, and environment variables. Understanding processes is fundamental
to managing and controlling the execution of programs in UNIX systems. Here's an introduction to
processes in UNIX:
1. Process ID (PID):
      • Each process is uniquely identified by a Process ID (PID), which is a non-negative
          integer.
      • The PID is assigned by the operating system when a process is created and remains
          associated with the process until it terminates.
3. Process States:
      • Processes in UNIX can be in various states during their lifecycle. Common process states
          include:
              o Running: The process is currently executing on the CPU.
              o Stopped: The process has been paused (usually by a signal) and is not currently
                   executing.
              o Sleeping: The process is waiting for an event to occur (e.g., I/O operation
                   completion) before resuming execution.
              o Zombie: A terminated process that has not been fully cleaned up yet. It remains in
                   the process table until its parent process acknowledges its termination.
4. Process Control:
      • UNIX provides several commands and tools for managing processes:
             o ps: Lists information about active processes, including their PIDs, states, and
                 resource usage.
             o top and htop: Interactive tools for monitoring system processes, CPU usage, and
                 memory usage.
             o kill: Sends signals to processes, allowing for termination or manipulation of
                 process behavior.
             o killall: Terminates processes by name rather than PID.
             o pgrep and pkill: Find and kill processes based on various criteria such as name or
                 user.
5. Process Creation:
      • Processes can be created in UNIX through various means:
             o Running a command from the shell creates a new process for that command.
             o Forking a process using system calls like fork() followed by exec() to replace the
                  child process with a new program.
             o Background processes (& in shell) run independently of the terminal session.
             o Daemons are background processes that typically provide system services.
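A shell-level sketch of creating and controlling a background process:

```shell
# Start a long-running command in the background with &
sleep 30 &
pid=$!                                   # $! holds the PID of the last background job
ps -p "$pid" > /dev/null && echo "process $pid is running"
kill "$pid"                              # send SIGTERM to terminate it
```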
6. Process Scheduling:
       • UNIX operating systems use scheduling algorithms to manage the execution of multiple
           processes on a single CPU or across multiple CPUs (in the case of multiprocessing or
           multi-core systems).
       • Scheduling policies may include time-sharing (round-robin), priority-based scheduling,
           and real-time scheduling for time-sensitive tasks.
Understanding processes and their management is crucial for system administrators, developers, and
users working with UNIX systems to monitor system performance, troubleshoot issues, and control
program execution efficiently.
                                         Practical -5
Aim: Introduction to VI Editors.
Description:
Vi is a powerful and ubiquitous text editor that has been a staple in Unix and Unix-like operating
systems for decades. It is lightweight, fast, and offers a wide range of features for editing text files
directly in the terminal. An improved version of the vi editor is vim.
Modes in VI:
The vi editor has two modes:
       •   Command Mode: In command mode, actions are taken on the file. The vi editor starts in
           command mode. Here, the typed words will act as commands in vi editor. To pass a
           command, you need to be in command mode.
       •   Insert Mode: In insert mode, entered text will be inserted into the file. The Esc key will
           take you to the command mode from insert mode.
By default, the vi editor starts in command mode. To enter text, we have to be in insert mode: just
type 'i' and we'll be in insert mode. Nothing will appear on the screen after typing i, but we'll be
in insert mode and can type anything.
To exit from insert mode, press the Esc key and we'll be returned to command mode. If we are not sure
which mode we are in, we can press the Esc key twice and we'll be in command mode.
Using VI:
The vi editor tool is an interactive tool as it displays changes made in the file on the screen while you
edit the file.
In vi editor we can insert, edit or remove a word as cursor moves throughout the file.
Commands are specified for each function; for example, x deletes a character and dd deletes a line.
The vi editor is case-sensitive. For example, p allows us to paste after the current line while P allows
us to paste before the current line.
1) VI Syntax:
   To start editing a file with Vi, open your terminal and type vi filename to open an existing file or
   vi to create a new file.
        vi <fileName>
   When you type the vi command with a file name in the terminal, the screen will be cleared and the
   content of the file will be displayed. If there is no such file, then a new file will be created
   and, once completed, the file will be saved with the mentioned file name.
2) Basic Commands:
   •   Switching Modes:
          • Press i to enter Insert mode (where you can start typing).
          • Press Esc to exit Insert mode and return to Normal mode.
    •   Saving and Quitting:
           • In Normal mode, type :w to save changes.
           • Type :q to quit (if there are no unsaved changes).
           • Combine them as :wq to save and quit in one command.
Linux VI Example:
To start vi, open the terminal and type the vi command followed by a file name. If your file is in some
other directory, you can specify the file path. If the file doesn't exist, vi will create a new file
with the specified name at the given location.
Command Mode
This is what we'll see when we press enter after the above command. If we start typing, nothing
will appear, as we are in command mode. By default vi opens in command mode.
Insert Mode
To move to insert mode press i. Now we can write anything. To move to the next line press enter.
Once we are done with our typing, press the Esc key to return to command mode.
To save and Quit
We can save and quit vi editor from command mode. Before writing save or quit command we have
to press colon (:). Colon allows us to give instructions to vi.
To exit from vi, first ensure that it is in command mode. Now, type :wq and press enter. It will save
and quit vi.
If we type :q! instead, vi will quit without saving, discarding the changes made.
AWK Syntax:
   awk options 'selection_criteria {action}' input-file > output-file
   •   'selection_criteria {action}': This is the AWK program enclosed in single quotes. The
       selection_criteria are the conditions or patterns that determine when the action should be
       executed.
   •   > output-file: This part redirects the output of AWK to the specified output file (output-file).
       If you don't specify an output file, AWK will print the output to the terminal by default.
Awk Commands:
Consider the text file employee.txt as the input file for all cases below:
$ cat > employee.txt
Rohan 26 Delhi 40000
Ajay 22 Mumbai 50000
Aman 29 Kolkata 35000
Siraj 30 Delhi 60000
Piyush 31 Chennai 27000
Taniya 26 West Bengal 30000
Priya 34 Rajasthan 40000
Vivek 35 Delhi 20000
Varun 24 Rajasthan 65000
Mansi 28 Delhi 15000
For example, in the command awk 'END {print NR}' employee.txt:
•    END {print NR} is the action block that prints the final value of NR, which corresponds to the
     total number of lines processed.
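Assuming a short stand-in for the employee.txt file above, the NR example can be run as:

```shell
# Recreate the first three records of employee.txt
printf 'Rohan 26 Delhi 40000\nAjay 22 Mumbai 50000\nAman 29 Kolkata 35000\n' > employee.txt
# Print the line number in front of every record
awk '{print NR, $0}' employee.txt
# Print only the total number of lines processed
awk 'END {print NR}' employee.txt    # prints: 3
```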
                                         Practical -8
 Aim: Introduction of the concept of Shell Scripting.
 Description:
 Shell scripting is a powerful concept in the world of computer programming and system
 administration. It refers to writing scripts or programs using a shell, which is a command-line
 interpreter for Unix-like operating systems. The most common shell used for scripting is the Bourne
 Again Shell (bash), although other shells like Zsh, Ksh, and Csh are also used.
 Shell scripting allows users to automate repetitive tasks, execute commands in sequence, and
 perform complex operations by combining multiple commands into a script file. These scripts are
 written in plain text and can be executed directly by the shell.
2. Variables: Shell scripts use variables to store data or values that can be manipulated or used later
   in the script. Variables can be defined, assigned values, and accessed within the script.
3. Control Structures: Shell scripts support various control structures like if statements, loops (for,
   while), case statements, and functions. These structures allow for conditional execution and
   looping within the script.
4. Command Execution: Shell scripts can execute system commands, external programs, and other
   scripts. They can capture the output of commands and use it as input for further processing.
5. File Permissions: Shell scripts need executable permissions (chmod +x script.sh) to be run as
   standalone programs. Users can execute shell scripts directly from the command line or by
   specifying their path.
6. Portability: Shell scripts written in a compatible shell (such as Bash) can be executed on different
   Unix-like systems without modification, promoting cross-platform compatibility.
7. Input/Output Redirection: Shell scripts can redirect input from files or other commands (<),
   redirect output to files or other commands (>), and handle error output (2>).
8. Script Execution: Shell scripts can be executed directly from the command line by specifying the
   script file (./script.sh) or by using the shell interpreter explicitly (bash script.sh).
9. Environment Variables: Shell scripts can access and modify environment variables, which are
   variables that affect the behavior of the shell and programs running within it.
10. Error Handling: Shell scripts can handle errors using exit codes, error messages, and error-
    handling mechanisms like trap for signal handling.
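A sketch of the trap mechanism mentioned above (the script and file names are hypothetical):

```shell
cat > cleanup_demo.sh <<'EOF'
#!/bin/bash
tmpfile=$(mktemp)
# Run the cleanup code whenever the script exits, normally or on error
trap 'rm -f "$tmpfile"; echo "cleaned up"' EXIT
echo "working with $tmpfile"
EOF
bash cleanup_demo.sh    # last line printed is: cleaned up
```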
11. File Operations: Shell scripts can perform file operations such as reading from files, writing to
    files, copying, moving, and deleting files and directories.
Why do we need shell scripts?
There are many reasons to write shell scripts:
       • To avoid repetitive work and automation
       • System admins use shell scripting for routine backups.
       • System monitoring
       • Adding new functionality to the shell etc.
Shell scripting is widely used for system administration tasks, automation, batch processing, and
creating utility scripts. It provides a flexible and efficient way to interact with the operating system
and perform tasks programmatically from the command line. Learning shell scripting is beneficial
for anyone working with Unix-like systems or wanting to automate repetitive tasks efficiently.
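The first example script is not reproduced in the source; the classic hello-world version might look like this (the file name is hypothetical):

```shell
cat > f1_hello.sh <<'EOF'
#!/bin/bash
echo "Hello, World!"
EOF
chmod +x f1_hello.sh    # make the script executable
./f1_hello.sh           # prints: Hello, World!
```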
Verify Output:
   • After running the script, you should see the "Hello, World!" message printed to the
       terminal.
Example 2
Open Terminal:
   • Launch the Terminal application on your Ubuntu system.
Create a New File:
   • Create a new file in the text editor, such as f3_variable.sh.
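The body of f3_variable.sh is not shown in the source; a sketch that stores the message in a variable might be:

```shell
cat > f3_variable.sh <<'EOF'
#!/bin/bash
greeting="Hello, World!"    # variable assignment: no spaces around =
echo "$greeting"            # expand the variable inside double quotes
EOF
chmod +x f3_variable.sh
./f3_variable.sh            # prints: Hello, World!
```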
Verify Output:
   • After running the script, you should see the "Hello, World!" message printed to the terminal.
                                        Practical -9
Aim: Decision and Iterative Statements in Shell Scripting.
Description:
A Shell script is a plain text file. This file contains different commands for step-by-step execution.
These commands can be written directly into the command line but, from a re-usability perspective,
it is useful to store all of the inter-related commands for a specific task in a single file. We can use
that file for executing the set of commands one or more times as per our requirements.
In shell scripting, decision-making and iterative statements are essential for controlling the flow of
your script based on certain conditions or for repeating a set of instructions multiple times.
1. Decision Statements:
   • if statement: The if statement in shell scripting allows you to make decisions based on
      conditions. It checks whether a specified condition is true or false and executes a block of
      code only if the condition is true.
      The syntax will be –
       if [ condition ]; then
          # Code block to execute if the condition is true
       fi
        The if-else form adds an alternative branch –
        if [ condition ]; then
           # Code block to execute if the condition is true
        else
           # Code block to execute if the condition is false
        fi
   •   if-elif-else statement: The if-elif-else statement is used when you have multiple conditions to
       check. It allows you to evaluate multiple conditions sequentially and execute the
       corresponding block of code for the first condition that is true, or the else block if none of the
       conditions are true.
       The syntax will be –
       if [ condition1 ]; then
          # Code block to execute if condition1 is true
       elif [ condition2 ]; then
          # Code block to execute if condition2 is true
       else
          # Code block to execute if none of the conditions are true
       fi
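A runnable sketch of the if-elif-else form (the variable and thresholds are made up):

```shell
marks=72
if [ "$marks" -ge 80 ]; then
  grade="A"
elif [ "$marks" -ge 60 ]; then
  grade="B"
else
  grade="C"
fi
echo "Grade $grade"    # prints: Grade B
```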
2. Iterative Statements:
   • for loop: A for loop in shell scripting is used to iterate over a list of items. It allows you to
       perform a set of commands repeatedly for each item in the list.
       The syntax will be –
       for item in list; do
         # Code block to execute for each item
       done
   •   while loop: The while loop executes a block of code as long as a specified condition remains
       true. It's useful when you want to repeat a set of commands until a condition changes.
       The syntax will be –
       while [ condition ]; do
         # Code block to execute while the condition is true
       done
   •   until loop: The until loop is similar to the while loop but executes its block of code until a
        specified condition becomes true. It's useful when you want to repeat a set of commands until
        a condition switches from false to true.
       The syntax will be –
       until [ condition ]; do
         # Code block to execute until the condition becomes true
       done
Each of these statements serves a specific purpose in controlling the flow of your shell script,
whether it's making decisions based on conditions (if statements) or repeating commands multiple
times (for, while, until loops).
String-based Condition
The string-based condition means that in shell scripting we can take decisions by doing comparisons
on strings as well. Here is a descriptive table with the common operators –
    Operator                 Description
    =                        True if the strings are equal
    !=                       True if the strings are not equal
    -z                       True if the string length is zero
    -n                       True if the string length is non-zero
Arithmetic-based Condition
Arithmetic operators are used for checking the arithmetic-based conditions, like less than, greater
than, equal to, etc. Here is a descriptive table with the common operators –
    Operator                 Description
    -eq                      Equal
    -ne                      Not equal
    -gt                      Greater than
    -ge                      Greater than or equal
    -lt                      Less than
    -le                      Less than or equal
Example 1
$ vi p1_file.sh
#!/bin/bash
Country="India"
if [ "$Country" = "India" ]; then
        echo "You are Indian. India is a land with diverse cultures."
fi
$ chmod +x p1_file.sh
$ ./p1_file.sh
Output
Example 2
$ vi p1_file.sh
#!/bin/bash
echo "Enter your age-"
read Age
if [ "$Age" -ge 18 ]; then
   echo "You can vote"
else
   echo "You cannot vote"
fi
$ chmod +x p1_file.sh
$ ./p1_file.sh
Output
Example 3
#!/bin/bash
Output
Example 4
$ vi p1_file.sh
#!/bin/bash
echo "Enter a number: "
read number
# Initialize the factorial
factorial=1
while [ $number -gt 1 ]; do
 factorial=$((factorial * number))
 number=$((number - 1))
done
# Print the factorial
echo "The factorial is $factorial"
$ chmod +x p1_file.sh
$ ./p1_file.sh
Output
Example 5
$ vi p1_file.sh
#!/bin/bash
echo "Enter the number-"
read n
for (( i=1; i<=10; i++))
do
  res=`expr $i \* $n`
   echo "$n * $i = $res"
done
# end of for loop
$ chmod +x p1_file.sh
$ ./p1_file.sh
Output
                                       Practical -10
Aim: Writing the Shell scripts for unknown problems.
Description:
Shell scripting is a powerful tool for system administrators, developers, and power users to automate
tasks, create utilities, and manage the behavior of Unix-like systems efficiently from the command
line. It's a fundamental skill for anyone working in a Unix/Linux environment. Shell scripting
involves writing a series of commands, often with control structures like loops and conditional
statements, in a plain text file with a .sh extension. These scripts are then executed by the shell
interpreter, which interprets and executes the commands in sequence.
Writing shell scripts for unknown problems involves creating a flexible and adaptable script
structure that can handle a variety of scenarios or tasks.
Here are some general guidelines and best practices for writing such scripts:
1.Define Clear Goals:
       • Understand the problem domain or the types of tasks the script might encounter.
       • Identify the inputs, outputs, and any intermediate steps or conditions.
2.Modularize Code:
       • Use functions to modularize code and separate different parts of the script logically.
       • Functions make your code more organized, reusable, and easier to maintain.
3.Use Command-Line Arguments:
       • Accept command-line arguments to make the script more flexible and configurable.
       • Handle command-line arguments using getopts or by directly accessing $1, $2, etc., as
           needed.
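A sketch of option parsing with getopts (the option letters -f and -v are hypothetical):

```shell
# Simulate command-line arguments for the demonstration
set -- -f data.txt -v
verbose=0
while getopts "f:v" opt; do
  case $opt in
    f) file=$OPTARG ;;    # -f takes an argument (the colon in "f:" says so)
    v) verbose=1 ;;       # -v is a simple on/off flag
    *) echo "Usage: $0 [-f file] [-v]" >&2; exit 1 ;;
  esac
done
echo "file=$file verbose=$verbose"    # prints: file=data.txt verbose=1
```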
4.Error Handling:
       • Implement error handling mechanisms such as checking for valid inputs, handling
           exceptions, and providing meaningful error messages.
       • Use exit codes (exit 1, exit 0, etc.) to indicate success or failure of specific operations
           within the script.
5.Logging and Output:
       • Use echo statements or other output commands to provide informative messages and status
           updates during script execution.
       • Consider logging important information or errors to a log file for troubleshooting purposes.
6.Conditional Logic:
       • Use conditional statements (if, case) to handle different scenarios and make decisions based
           on conditions.
       • Handle edge cases and unexpected inputs gracefully to prevent script failures.
7.Iterative Processing:
       • Utilize loops (for, while) for iterative processing, such as looping through files, directories,
           or data sets.
       • Ensure proper handling of loop termination conditions to avoid infinite loops.
8.Documentation and Comments:
       • Include comments and documentation within the script to explain the purpose of each
           section, function, or variable.
       • Document input formats, expected outputs, and any assumptions made by the script.
9.Testing and Validation:
       • Test the script with different inputs and edge cases to ensure it behaves as expected.
       • Validate user inputs or external data sources to prevent potential security or reliability
           issues.
10. Version Control:
       • If the script is part of a larger project or used collaboratively, consider using version control
           (e.g., Git) to track changes and manage revisions.
Example Code
#!/bin/bash
# Reads an operation name and two numbers from standard input
# (matching the sample inputs below), then performs the calculation.
calculate() {
    read operation
    read num1
    read num2
    case $operation in
      add)
         result=$((num1 + num2))
         echo "Addition result: $result"
         ;;
      sub)
         result=$((num1 - num2))
         echo "Subtraction result: $result"
         ;;
      mul)
         result=$((num1 * num2))
         echo "Multiplication result: $result"
         ;;
      div)
         if [ $num2 -ne 0 ]; then
            result=$((num1 / num2))
            echo "Division result: $result"
         else
            echo "Error: Division by zero"
         fi
         ;;
      *)
         echo "Invalid operation"
         ;;
    esac
}
calculate
Sample Input 2
add
8
3
Sample Input 3
mul
3
4