DBMS Unit 4
A database that merely has tables and constraints cannot automatically be called a relational database system, and a database that merely follows the relational data model is not necessarily a true Relational Database Management System (RDBMS). A set of rules defines what a correct RDBMS is. These rules were developed in 1985 by Dr. Edgar F. Codd (E.F. Codd), who did extensive research on the relational model of database systems. Codd proposed thirteen rules (numbered 0 to 12) against which any DBMS can be tested; if a database follows all of them, it is called a true relational database (RDBMS). These thirteen rules are popularly known as Codd's 12 rules.
Rule 0: Foundation Rule
The database must be in relational form, so that the system can handle the database through its relational capabilities.
Rule 1: Information Rule
A database contains various information, and this information must be stored in the cells of tables, in the form of rows and columns.
Rule 3: Systematic Treatment of Null Values Rule
This rule defines the systematic treatment of null values in database records. A null value can have several meanings, such as missing data, no value in a cell, inapplicable information, or unknown data. A primary key must never be null.
Rule 4: Active/Online Catalog Rule
The entire logical description of the database must be stored online in a catalog known as the data dictionary. Authorized users must be able to access this catalog using the same query language they use to access the database itself.
Rule 5: Comprehensive Data Sub-language Rule
A relational database may support several languages, but there must be at least one language with a well-defined, linear syntax, expressible as character strings, that comprehensively supports data definition, view definition, data manipulation, integrity constraints, and transaction management operations. If the database allows access to the data without any such language, it is considered a violation of this rule.
Rule 6: View Updating Rule
All views that are theoretically updatable must also be updatable by the database system in practice.
Rule 7: Relational Level Operation (High-Level Insert, Update and Delete) Rule
A database system must support high-level relational operations such as insert, update, and delete on whole sets of rows, not just on a single row at a time. It must also support set operations such as union, intersection, and minus (set difference).
Rule 8: Physical Data Independence Rule
Applications that access stored data must be physically independent of how the data is stored. If data is moved or the physical structure of the database is changed (for example, its storage structures or access methods), external applications accessing the data must not be affected.
Rule 9: Logical Data Independence Rule
This rule is similar to physical data independence: changes at the logical level (table structures) should not affect the user's view (application). For example, if a table is split into two tables, or two tables are joined to create a single table, these changes should not impact the user's view or application.
Rule 11: Distribution Independence Rule
The database must work properly even if its data is distributed across different locations and used by different end users. When a user accesses the database through an application, the user should not need to know whether the data is distributed or stored at a single site; the data should appear to be located at one site. End users must be able to run their SQL queries independently of how the data is distributed.
Rule 12: Non-Subversion Rule
If a system provides a low-level (record-at-a-time) interface in addition to its relational (SQL) language, that low-level interface must not be able to subvert or bypass the integrity rules defined through the relational language.
Functional Dependency
Functional dependency (FD) is a set of constraints between two sets of attributes in a relation. A functional dependency says that if two tuples have the same values for attributes A1, A2, ..., An, then those two tuples must also have the same values for attributes B1, B2, ..., Bn.
Functional dependency is represented by an arrow sign (→) that is, X→Y, where X functionally
determines Y. The left-hand side attributes determine the values of attributes on the right-hand
side.
Example: consider a Student relation with attributes Stu_ID, Stu_Name and Stu_Age:

Stu_ID  Stu_Name  Stu_Age
42      Abc       17
43      Pqr       18
44      Xyz       18

Here Stu_ID → Stu_Name and Stu_ID → Stu_Age hold, since each Stu_ID determines exactly one name and one age.
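As a minimal sketch (the table definition below is assumed for illustration, not taken from the source), a primary-key constraint is one way a DBMS enforces such a dependency, because a key functionally determines every other attribute:

-- The PRIMARY KEY on Stu_ID enforces Stu_ID -> Stu_Name and Stu_ID -> Stu_Age:
-- no two rows can share a Stu_ID, so each Stu_ID maps to one name and one age.
CREATE TABLE Student (
    Stu_ID   INT PRIMARY KEY,
    Stu_Name VARCHAR(50),
    Stu_Age  INT
);

INSERT INTO Student VALUES (42, 'Abc', 17);
INSERT INTO Student VALUES (43, 'Pqr', 18);
-- This would violate the dependency (duplicate Stu_ID) and be rejected:
-- INSERT INTO Student VALUES (42, 'Def', 19);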
What is Normalization?
o Normalization is the process of organizing the data in the database.
o Normalization is used to minimize redundancy in a relation or set of relations. It is also used to eliminate undesirable characteristics like insertion, update, and deletion anomalies.
o Normalization divides a larger table into smaller tables and links them using relationships.
o The normal form is used to reduce redundancy from the database table.
Data modification anomalies can be categorized into three types:
o Insertion Anomaly: An insertion anomaly occurs when one cannot insert a new tuple into a relation due to a lack of data.
o Deletion Anomaly: A deletion anomaly occurs when the deletion of some data results in the unintended loss of other important data.
o Update Anomaly: An update anomaly occurs when updating a single data value requires multiple rows of data to be updated.
NORMALIZATION PROCESS
1NF: A relation is in 1NF if every attribute contains only atomic (indivisible) values.
2NF: A relation is in 2NF if it is in 1NF and every non-key attribute is fully functionally dependent on the primary key.
3NF: A relation is in 3NF if it is in 2NF and has no transitive dependency for non-prime attributes.
BCNF: A relation is in BCNF if it is in 3NF and, for every functional dependency X → Y, X is a super key.
4NF: A relation is in 4NF if it is in Boyce-Codd normal form (BCNF) and has no multi-valued dependency.
5NF: A relation is in 5NF if it is in 4NF, contains no join dependency, and all joins are lossless.
Example (First Normal Form): the EMPLOYEE relation below is not in 1NF, because the EMP_PHONE attribute holds multiple values in a single cell.

BEFORE 1NF
EMP_ID  EMP_NAME  EMP_PHONE               EMP_STATE
14      John      7272826385, 9064738238  UP
20      Harry     8574783832              Bihar
12      Sam       7390372389, 8589830302  Punjab

AFTER 1NF
EMP_ID  EMP_NAME  EMP_PHONE   EMP_STATE
14      John      7272826385  UP
14      John      9064738238  UP
20      Harry     8574783832  Bihar
12      Sam       7390372389  Punjab
12      Sam       8589830302  Punjab
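As a hedged sketch (names taken from the example above, column types assumed), the 1NF design can be declared with a composite key so that each phone number occupies its own row:

CREATE TABLE EMPLOYEE (
    EMP_ID    INT,
    EMP_NAME  VARCHAR(50),
    EMP_PHONE VARCHAR(15),
    EMP_STATE VARCHAR(30),
    -- composite key: one row per (employee, phone) pair, every cell atomic
    PRIMARY KEY (EMP_ID, EMP_PHONE)
);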
Second Normal Form
Before we learn about the second normal form, we need to understand the following −
Prime attribute − An attribute that is part of some candidate key is known as a prime attribute.
Non-prime attribute − An attribute that is not part of any candidate key is said to be a non-prime attribute.
In second normal form, every non-prime attribute must be fully functionally dependent on each candidate key. That is, if X → A holds, then there should not be any proper subset Y of X for which Y → A also holds.
Example: consider a TEACHER relation in which the candidate key is {TEACHER_ID, SUBJECT}, but TEACHER_AGE depends only on TEACHER_ID, a partial dependency that violates 2NF:

TEACHER_ID  SUBJECT    TEACHER_AGE
25          Chemistry  30
25          Biology    30
47          English    35
83          Math       38
83          Computer   38

To convert the relation into 2NF, decompose it into TEACHER_DETAIL and TEACHER_SUBJECT:

TEACHER_DETAIL
TEACHER_ID TEACHER_AGE
25 30
47 35
83 38
TEACHER_SUBJECT
TEACHER_ID SUBJECT
25 Chemistry
25 Biology
47 English
83 Math
83 Computer
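A minimal SQL sketch of this 2NF decomposition (column types assumed for illustration):

CREATE TABLE TEACHER_DETAIL (
    TEACHER_ID  INT PRIMARY KEY,
    TEACHER_AGE INT
);

CREATE TABLE TEACHER_SUBJECT (
    TEACHER_ID INT,
    SUBJECT    VARCHAR(50),
    PRIMARY KEY (TEACHER_ID, SUBJECT),
    -- each subject row must refer to an existing teacher
    FOREIGN KEY (TEACHER_ID) REFERENCES TEACHER_DETAIL(TEACHER_ID)
);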
• A relation is in 3NF if it is in 2NF and contains no transitive dependency for non-prime attributes.
• 3NF is used to reduce data duplication and to achieve data integrity.
• If there is no transitive dependency for non-prime attributes, then the relation is in third normal form.
• A relation is in third normal form if it holds at least one of the following conditions for every non-trivial functional dependency X → Y:
• X is a super key.
• Y is a prime attribute, i.e., each element of Y is part of some candidate key.
For a relation to be in Third Normal Form, it must be in Second Normal Form, and for every non-trivial functional dependency X → A, either X is a super key or A is a prime attribute.
In the Student_detail relation (Stu_ID, Stu_Name, City, Zip), Stu_ID is the key and the only prime attribute. City can be identified by Stu_ID as well as by Zip itself. Zip is not a super key and City is not a prime attribute; additionally, Stu_ID → Zip → City, so a transitive dependency exists and the relation is not in 3NF.
To bring this relation into third normal form, we break it into two relations: Student_Detail (Stu_ID, Stu_Name, Zip) and Address (Zip, City).
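A hedged SQL sketch of this 3NF decomposition (column types assumed):

CREATE TABLE Address (
    Zip  VARCHAR(10) PRIMARY KEY,
    City VARCHAR(50)
);

CREATE TABLE Student_Detail (
    Stu_ID   INT PRIMARY KEY,
    Stu_Name VARCHAR(50),
    Zip      VARCHAR(10),
    -- City is now reached only via Zip, removing the transitive dependency
    FOREIGN KEY (Zip) REFERENCES Address(Zip)
);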
Example (Fourth Normal Form): consider the following STUDENT relation:

STU_ID  COURSE     HOBBY
21      Computer   Dancing
21      Math       Singing
34      Chemistry  Dancing
74      Biology    Cricket
59      Physics    Hockey
The given STUDENT table is in 3NF, but COURSE and HOBBY are two independent entities; there is no relationship between COURSE and HOBBY.
In the STUDENT relation, the student with STU_ID 21 takes two courses, Computer and Math, and has two hobbies, Dancing and Singing. So there is a multi-valued dependency on STU_ID, which leads to unnecessary repetition of data. To remove it, decompose the relation as follows:
STUDENT_COURSE
STU_ID COURSE
21 Computer
21 Math
34 Chemistry
74 Biology
59 Physics
STUDENT_HOBBY
STU_ID HOBBY
21 Dancing
21 Singing
34 Dancing
74 Cricket
59 Hockey
Example (Fifth Normal Form): the above relation, which combines EmpName, EmpSkills and EmpJob in one table, can be decomposed into the following three tables; therefore, it is not in 5NF −
EmployeeSkills
EmpName EmpSkills
David Java
John JavaScript
Jamie jQuery
Emma Java
EmployeeJob
EmpName EmpJob
David E145
John E146
Jamie E146
Emma E147
JobSkills
EmpSkills EmpJob
Java E145
JavaScript E146
jQuery E146
Java E147
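As an illustrative sketch (assuming the three tables above exist with these column names), a lossless decomposition means the original relation can be reconstructed by joining the projections:

-- Rejoining the three projections; for a valid 5NF decomposition this
-- join reproduces exactly the original (EmpName, EmpSkills, EmpJob) rows.
SELECT es.EmpName, es.EmpSkills, ej.EmpJob
FROM EmployeeSkills es
JOIN EmployeeJob ej ON es.EmpName   = ej.EmpName
JOIN JobSkills   js ON js.EmpJob    = ej.EmpJob
                   AND js.EmpSkills = es.EmpSkills;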
What are Integrity Constraints?
Integrity constraints in a Database Management System are rules that help keep the data in a
database accurate, consistent and reliable. They act like a set of guidelines that ensure all the
information stored in the database follows specific standards.
For example: making sure every customer has a valid email address, and ensuring that an order in the database is always linked to an existing customer.
Note: These rules prevent mistakes, such as adding incomplete or incorrect data, and make sure the database is secure and well-organized.
Types of Integrity Constraints
There are Different types of Integrity Constraints used in DBMS, these are:
1. Domain Constraints
2. Entity Integrity Constraints
3. Key Constraints
4. Referential integrity constraints
5. Assertion
6. Triggers
1. Domain Constraints
Domain constraints are a type of integrity constraint that ensure the values stored in a column (or
attribute) of a database are valid and within a specific range or domain. In simple terms, they
define what type of data is allowed in a column and restrict invalid data entry. The data types of a domain include string, character, time, integer, date, currency, etc. The value of an attribute must come from its corresponding domain.
Example: the table below demonstrates such constraints in action by enforcing rules for each column.

ID  NAME              EMAIL
1   Aniket            aniket@example.com
2   Shashwat          NULL
3   Manvendra Sharma  shashwat@example.com

Unique Values: The email column must contain unique values. aniket@example.com and shashwat@example.com are valid; adding another row with aniket@example.com would result in an error.
Allows One NULL: The email column can contain one NULL value.
Valid: NULL in the second row.
Invalid: Adding another row with NULL in email will be rejected.
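A minimal sketch of how such column rules could be declared in SQL (the table name, the age column, and its range are illustrative assumptions, not from the source):

CREATE TABLE customer (
    id    INT PRIMARY KEY,
    name  VARCHAR(50) NOT NULL,              -- domain rule: a non-null string
    email VARCHAR(100) UNIQUE,               -- no duplicate email values
    age   INT CHECK (age BETWEEN 0 AND 120)  -- value restricted to a valid range
);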
4. Referential integrity constraints
Referential integrity constraints are rules that ensure relationships between tables remain
consistent. They enforce that a foreign key in one table must either match a value in the
referenced primary key of another table or be NULL. This guarantees the logical connection
between related tables in a relational database.
Why Referential Integrity Constraints Are Important?
Maintains Consistency: Ensures relationships between tables are valid.
Prevents Orphan Records: Avoids cases where a record in a child table references a non-
existent parent record.
Enforces Logical Relationships: Strengthens the logical structure of a relational database.
Example: in the example below, an entry with Block_No 22 is not allowed in the referencing table, because 22 is not present in the referenced table:

Block_No  Location
20        Chandigarh
21        Punjab
25        Delhi
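A hedged sketch of a foreign-key declaration for this example (table and column names assumed):

CREATE TABLE Block (
    Block_No INT PRIMARY KEY,
    Location VARCHAR(50)
);

CREATE TABLE Student (
    Stu_ID   INT PRIMARY KEY,
    Block_No INT,
    -- Block_No must match an existing Block row (or be NULL);
    -- inserting Block_No = 22 here fails unless Block 22 exists.
    FOREIGN KEY (Block_No) REFERENCES Block(Block_No)
);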
5. Assertion
An assertion is a declarative mechanism in a database that ensures a specific condition or rule is
always satisfied across the entire database. It is a global integrity constraint, meaning it applies to
multiple tables or the entire database rather than being limited to a single table or column. An
assertion in SQL-92 takes the form:
create assertion <assertion-name> check <predicate>
When an assertion is made, the system tests it for validity. This testing may introduce a
significant amount of overhead; hence assertions should be used with great care.
Example of an Assertion:
CREATE ASSERTION sum_constraint
CHECK (
NOT EXISTS (
SELECT *
FROM branch
WHERE (
SELECT SUM(amount)
FROM loan
WHERE loan.branch_name = branch.branch_name
) >= (
SELECT SUM(amount)
FROM account
WHERE account.branch_name = branch.branch_name
)
)
);
Explanation:
The above SQL statement creates an assertion to ensure that the total loan amount at each branch is always less than the total account balances at the same branch.
Purpose: This assertion enforces a global business rule. The sum of all loan amounts for a
branch must always be less than the sum of all account balances in the same branch. This
prevents branches from issuing loans beyond their financial capacity.
Subqueries: The first subquery (SELECT SUM(amount) FROM loan) calculates the total
loan amount for each branch. The second subquery (SELECT SUM(amount) FROM
account) calculates the total balance of accounts for the same branch.
Condition: The NOT EXISTS clause ensures there is no branch where the loan amount is
greater than or equal to the account balance.
Behavior: If a transaction (e.g., inserting a loan or updating an account) violates this rule,
the operation will be rejected by the database.
6. Triggers
A trigger is a procedural statement in a database that is automatically executed in response to
certain events such as INSERT, UPDATE, or DELETE. Triggers are often used to enforce
complex integrity constraints or implement business rules that cannot be captured using standard
constraints like primary keys or foreign keys.
Example SQL Trigger:
CREATE TRIGGER handle_overdraft
AFTER UPDATE ON account
FOR EACH ROW
BEGIN
-- Check if the balance has become negative after the update
IF NEW.balance < 0 THEN
-- Set the account balance to zero
UPDATE account
SET balance = 0
WHERE account_number = NEW.account_number;
-- Create a loan record with the same account number as the loan number
INSERT INTO loan (loan_number, loan_amount)
VALUES (NEW.account_number, ABS(NEW.balance)); -- ABS to ensure positive loan amount
END IF;
END;
Explanation:
Trigger Type: The trigger runs after an update on the account table.
Condition: It checks if the account balance is negative after the update.
Actions: If the balance is negative, the account balance is reset to zero. A loan is created
with the same account number as the loan number, and the loan amount is the absolute
value of the negative balance. The ABS() function ensures the loan amount is positive.
Indexing in DBMS
o Indexing is used to optimize the performance of a database by minimizing the
number of disk accesses required when a query is processed.
o The index is a type of data structure. It is used to locate and access the data in a database
table quickly.
Index structure: an index entry consists of two fields, a search key (a copy of the key of the table) and a data reference (a pointer to the disk block where that key's record is stored).
Indexing Methods
Ordered indices
The indices are usually sorted to make searching faster. The indices which are sorted are known
as ordered indices.
Example: Suppose we have an employee table with thousands of records, each of which is 10 bytes long. If the IDs start from 1, 2, 3, ... and so on, and we have to search for the record with ID 543:
o In the case of a database with no index, we have to scan the disk blocks from the start until we reach 543. The DBMS will find the record after reading 543 × 10 = 5430 bytes.
o In the case of an index, assuming each index entry is 2 bytes, we search the index instead, and the DBMS will find the record after reading 542 × 2 = 1084 bytes, which is far less than in the previous case.
Primary Index
o If the index is created on the basis of the primary key of the table, then it is known as primary indexing. Since primary keys are unique, there is a 1:1 relation between index entries and records.
o As primary keys are stored in sorted order, the performance of the searching operation is
quite efficient.
o The primary index can be classified into two types: Dense index and Sparse index.
Dense index
o The dense index contains an index record for every search key value in the data file. It
makes searching faster.
o In this, the number of records in the index table is same as the number of records in the
main table.
o It needs more space to store index record itself. The index records have the search key
and a pointer to the actual record on the disk.
Sparse index
o In a sparse index, index records are created only for some of the search key values. Each index record points to a block.
o Instead of pointing to each record in the main table, the index points to records in the main table at intervals, leaving gaps.
Clustering Index
o A clustered index can be defined as an ordered data file. Sometimes the index is created
on non-primary key columns which may not be unique for each record.
o In this case, to identify the record faster, we will group two or more columns to get the
unique value and create index out of them. This method is called a clustering index.
o The records which have similar characteristics are grouped, and indexes are created for these groups.
Example: suppose a company contains several employees in each department. Suppose we use a
clustering index, where all employees which belong to the same Dept_ID are considered within a
single cluster, and index pointers point to the cluster as a whole. Here Dept_Id is a non-unique
key.
Secondary Index
In sparse indexing, as the size of the table grows, the size of the mapping also grows. These mappings are usually kept in primary memory so that address fetches are fast; the actual data is then fetched from secondary memory using the address obtained from the mapping. If the mapping grows too large, fetching an address itself becomes slow, and the sparse index is no longer efficient. To overcome this problem, secondary indexing is introduced.
In secondary indexing, another level of indexing is introduced to reduce the size of any one mapping. First, a large range of column values is chosen so that the first-level mapping stays small; each range is then further divided into smaller ranges. The first-level mapping is stored in primary memory, so that address fetches are fast. The second-level mappings and the actual data are stored in secondary memory (hard disk).
For example:
o If you want to find the record with roll number 111, the search first finds the largest first-level index entry that is smaller than or equal to 111; it gets 100 at this level.
o Then, in the second index level, it again finds the largest entry ≤ 111 and gets 110. Using the address stored with 110, it goes to the data block and scans the records one by one until it finds 111.
o This is how a search is performed in this method. Inserting, updating, or deleting is done in the same manner.
Eg:
CREATE INDEX index_name
ON table_name (column1, column2, ...);
PL/SQL introduction
PL/SQL is a block-structured language; a program can contain multiple blocks, each of which may contain nested sub-blocks.
PL/SQL provides language constructs such as conditional statements, loops, arrays, strings, exceptions, collections, records, triggers, functions, procedures, cursors, etc.
SQL stands for Structured Query Language; it is used to perform operations on the records stored in a database, such as inserting, updating, and deleting records, and creating, modifying, and dropping tables, views, etc.
Indexed Sequential Access Method (ISAM) is a file organization technique used in databases
to speed up data retrieval. Developed by IBM in the 1960s, it combines sequential and direct
access using indexes. ISAM stores records in sorted order and maintains an index to quickly
locate any record, making it efficient for both sequential processing and fast lookups.
Modern Use of ISAM
Although more advanced indexing methods such as B-trees and hash indexes have largely replaced it, ISAM's principles still strongly influence database design. Modern DBMSs often combine ISAM-like elements with other techniques to speed up sequential access and range queries. ISAM itself is largely obsolete, but it is still used in some legacy systems whose workloads suit its simplicity and efficiency.
Components of ISAM
Primary Data File: The disk file contains the actual records, ordered by one or more primary key fields. Because the records are stored in sorted order, they can be scanned sequentially, and large batches of data can be processed efficiently.
Index File: The index file contains the keys together with pointers to the corresponding record locations in the primary data file. It is typically much smaller than the data file, which reduces search time.
Overflow Area: Because records are stored contiguously, ISAM handles overflows using a separate overflow area. When a data block is full, or when new records arrive out of sequence, they are placed in the overflow area, which is itself indexed so that access remains easy.
Working Mechanism of ISAM
ISAM combines indexed lookup with sequential data storage to make data storage and access more efficient. Its functioning can be summarized in the following steps (see the sketch after this list):
Data Insertion: When a new record is inserted, it is placed in the data file in the correct sequential position (or in the overflow area), and the primary index is updated with the new record's key and its location in the data file.
Data Retrieval: During a lookup, the system scans the primary index for the search key; the matching index entry holds a pointer that leads directly to the required record in the data file.
Data Deletion: When a record is deleted, it is flagged as deleted in the data file, and the corresponding entry in the primary index is removed or updated.
Data Updating: An update involves searching the index to find the record, updating the record in the data file, and adjusting the index if the change requires it.
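MySQL's MyISAM storage engine is a descendant of ISAM. As a hedged illustration (the table definition is invented for this sketch, not taken from the source), a table can be placed on an ISAM-style engine like this:

-- MySQL: store the table using the ISAM-derived MyISAM engine.
-- The index on emp_id plays the role of ISAM's index file,
-- while the engine's data file holds the records themselves.
CREATE TABLE employee (
    emp_id   INT NOT NULL,
    emp_name VARCHAR(50),
    PRIMARY KEY (emp_id)
) ENGINE = MyISAM;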
Limitations of ISAM
Despite its advantages, ISAM has certain limitations:
Static Structure: The ISAM index structure is static; it does not reorganize itself as the data changes. As insertions, deletions, and updates accumulate, more and more lookups spill into the overflow area, making the system progressively less efficient.
Reorganization Overhead: The data and index files must be periodically reorganized to maintain performance, which is labor- and resource-intensive.
Limited Flexibility: ISAM copes less effectively than modern indexing methods with variable-length records and frequently changing data.
Applications of ISAM
ISAM still plays a role in several settings where predictable, read-efficient access matters. Some common applications include:
Legacy Systems: ISAM remains important in many older systems that rely on its dependable performance for read-intensive tasks.
Embedded Systems: ISAM's performance and simplicity make it suitable for systems with limited hardware resources.
Mainframe Databases: ISAM is used in mainframe databases where huge numbers of data files need to be accessed quickly and managed efficiently.
Comparison with Other Indexing Methods
Comparing ISAM with other indexing methods, such as B-trees and B+-trees, highlights both its shortcomings and its strengths.
B-Tree and B+-Tree
B-trees and B+-trees stay balanced by adjusting their structure dynamically in response to insertions, deletions, and updates.
B+-trees store all records at the leaf level and link the leaf nodes into a list; as a result, they support operations like range queries more efficiently.
By contrast, ISAM is a static structure that requires periodic reorganization to maintain top performance.
Hash Indexing
Hash indexing provides efficient access for equality searches, but it is far less effective for range queries.
ISAM's hierarchical index structure, by contrast, supports both equality and range queries, giving it greater flexibility in this respect.
Advantages of ISAM
ISAM offers several benefits that make it a valuable indexing method in databases:
Efficient Data Retrieval: The index gives near-direct access to data, avoiding full sequential searches.
Supports Both Access Methods: ISAM combines sequential and random access, so the data can be queried in either way.
Hierarchical Index Structure: The hierarchical arrangement of indexes keeps searches efficient even over huge databases.
Scalability: ISAM can handle large amounts of data, and its design performs well under read-heavy workloads.
A B-Tree is a specialized m-way tree designed to optimize data access, especially on disk-based
storage systems.
In a B-Tree of order m, each node can have up to m children and m-1 keys, allowing it to
efficiently manage large datasets.
The value of m is decided based on disk block and key sizes.
One of the standout features of a B-Tree is its ability to store a significant number of keys
within a single node, including large key values. It significantly reduces the tree’s height,
hence reducing costly disk operations.
B Trees allow faster data retrieval and updates, making them an ideal choice for systems
requiring efficient and scalable data management. By maintaining a balanced structure at
all times,
B-Trees deliver consistent and efficient performance for critical operations such as
search, insertion, and deletion.
Following is an example of a B-Tree of order 5.
Properties of a B-Tree
A B Tree of order m can be defined as an m-way search tree which satisfies the following
properties:
1. All leaf nodes of a B tree are at the same level, i.e. they have the same depth (height of
the tree).
2. The keys of each node of a B tree (in case of multiple keys), should be stored in the
ascending order.
3. In a B tree, all non-leaf nodes (except the root node) should have at least ⌈m/2⌉ children.
4. All nodes (except the root node) should have at least ⌈m/2⌉ − 1 keys.
5. If the root node is a leaf node (only node in the tree), then it will have no children and
will have at least one key. If the root node is a non-leaf node, then it will have at least 2
children and at least one key.
6. A non-leaf node with n-1 key values should have n non NULL children.
We can see in the above diagram that all the leaf nodes are at the same level and all non-leaf
nodes have no empty sub-tree and have number of keys one less than the number of their
children.
For a B-tree holding n keys:
1. Height when the B-tree is completely full (i.e., all nodes have the maximum m children):
hmin = ⌈log_m(n+1)⌉ − 1
2. Height when the B-tree is least filled (each node has the minimum t = ⌈m/2⌉ children):
hmax = ⌊log_t((n+1)/2)⌋
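As a quick worked example (numbers chosen for illustration): for a B-tree of order m = 5 holding n = 624 keys, hmin = ⌈log₅(625)⌉ − 1 = 4 − 1 = 3; with minimum degree t = 3 the height is at most hmax = ⌊log₃(625/2)⌋ = ⌊log₃ 312.5⌋ = 5, since 3⁵ = 243 ≤ 312.5 < 729 = 3⁶.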
Need of a B-Tree
The B-Tree data structure is essential for several reasons:
Improved Performance Over M-way Trees: While M-way trees can be either balanced
or skewed, B-Trees are always self-balanced. This self-balancing property ensures fewer
levels in the tree, significantly reducing access time compared to M-way trees. This
makes B-Trees particularly suitable for external storage systems where faster data
retrieval is crucial.
Optimized for Large Datasets: B-Trees are designed to handle millions of records
efficiently. Their reduced height and balanced structure enable faster sequential access to
data and simplify operations like insertion and deletion. This ensures efficient
management of large datasets while maintaining an ordered structure.
Operations on B-Tree
B-Trees support various operations that make them highly efficient for managing large datasets.
Below are the key operations:
Sr. No. Operation Time Complexity
1. Search O(log n)
2. Insert O(log n)
3. Delete O(log n)
4. Traverse O(n)
Search example: descending from the root, the key 120 is located in the leaf node containing 110 and 120, and the search process is complete.
What is PL/SQL
PL/SQL is a block-structured language. PL/SQL programs are logical blocks that can contain any number of nested sub-blocks. PL/SQL stands for "Procedural Language extension of SQL" and is used in Oracle. PL/SQL has been integrated with the Oracle database since version 7, and its functionality is usually extended with each release of the Oracle database. Although PL/SQL is closely integrated with the SQL language, it adds programming constructs that are not available in SQL.
PL/SQL Functionalities
PL/SQL includes procedural language elements such as conditions and loops. It allows the declaration of constants and variables, procedures and functions, types and variables of those types, and triggers. It supports arrays and can handle exceptions (runtime errors). Since version 8, the Oracle database has also included features associated with object orientation. You can create PL/SQL units such as procedures, functions, packages, types, and triggers, which are stored in the database for reuse by applications.
With PL/SQL, you can use SQL statements to manipulate Oracle data and flow of control
statements to process the data.
The PL/SQL is known for its combination of data manipulating power of SQL with data
processing power of procedural languages. It inherits the robustness, security, and portability
of the Oracle Database.
PL/SQL is not case sensitive, so you are free to use lower-case or upper-case letters, except within string and character literals. A line of PL/SQL text contains groups of characters known as lexical units, which can be classified as follows:
o Delimiters
o Identifiers
o Literals
o Comments
Example: a simple PL/SQL block and its output:

DECLARE
a integer := 30;
b integer := 40;
c integer;
f real;
BEGIN
c := a + b;
dbms_output.put_line('Value of c: ' || c);
f := 100.0/3.0;
dbms_output.put_line('Value of f: ' || f);
END;

Output:
Value of c: 70
Value of f: 33.333333333333333333
PL/SQL If
This syntax is used when you want to execute one set of statements when the condition is TRUE and a different set of statements when it is FALSE.
IF condition
THEN
{...statements to execute when condition is TRUE...}
ELSE
{...statements to execute when condition is FALSE...}
END IF;
This syntax is used when you want to execute one set of statements when condition1 is TRUE or a different set of statements when condition2 is TRUE.
IF condition1
THEN
{...statements to execute when condition1 is TRUE...}
ELSIF condition2
THEN
{...statements to execute when condition2 is TRUE...}
END IF;
The following syntax adds an ELSE branch to the ELSIF form:
IF condition1
THEN
{...statements to execute when condition1 is TRUE...}
ELSIF condition2
THEN
{...statements to execute when condition2 is TRUE...}
ELSE
{...statements to execute when both condition1 and condition2 are FALSE...}
END IF;
It is the most advanced syntax and is used when you want to execute one set of statements when condition1 is TRUE, a different set when condition2 is TRUE, or a different set when both condition1 and condition2 are FALSE.
DECLARE
a number(3) := 500;
BEGIN
-- check the boolean condition using if statement
IF( a < 20 ) THEN
-- if condition is true then print the following
dbms_output.put_line('a is less than 20 ' );
ELSE
dbms_output.put_line('a is not less than 20 ' );
END IF;
dbms_output.put_line('value of a is : ' || a);
END;
After the execution of the above code at the SQL prompt, you will get the following result:
a is not less than 20
value of a is : 500
PL/SQL procedure successfully completed.
PL/SQL Loop
The PL/SQL loops are used to repeat the execution of one or more statements for
specified number of times. These are also known as iterative control statements.
LOOP
Sequence of statements;
END LOOP;
PL/SQL for loop is used when you want to execute a set of statements a predetermined number of times. The loop is iterated between the start and end integer values. The counter is always incremented by 1, and once the counter reaches the value of the end integer, the loop ends.
BEGIN
FOR k IN 1..10 LOOP
-- note that k was not declared
DBMS_OUTPUT.PUT_LINE(k);
END LOOP;
END;
After the execution of the above code, you will get the following result:
1
2
3
4
5
6
7
8
9
10
Note: You must remember the following points while using the PL/SQL FOR Loop.
o You don't need to declare the counter variable explicitly because it is declared
implicitly in the declaration section.
o The counter variable is incremented by 1 and does not need to be incremented
explicitly.
o You can use EXIT WHEN statements and EXIT statements in FOR Loops but it is
not done often.
PL/SQL while loop is used when a set of statements has to be executed as long as a condition is TRUE. The condition is evaluated at the beginning of each iteration, and the loop continues until the condition becomes FALSE.
WHILE <condition>
LOOP statements;
END LOOP;
DECLARE
i INTEGER := 1;
BEGIN
WHILE i <= 10 LOOP
DBMS_OUTPUT.PUT_LINE(i);
i := i+1;
END LOOP;
END;
After the execution of the above code, you will get the following result:
1
2
3
4
5
6
7
8
9
10
Note: You must remember the following points while using the PL/SQL WHILE Loop.
o Initialize a variable before the loop body.
o Increment the variable in the loop.
o You can use EXIT WHEN statements and EXIT statements in While loop but it is not
done often.
PL/SQL Cursor
When an SQL statement is processed, Oracle creates a memory area known as context area. A
cursor is a pointer to this context area. It contains all information needed for processing the
statement. In PL/SQL, the context area is controlled by Cursor. A cursor contains information
on a select statement and the rows of data accessed by it.
A cursor is used by a program to fetch and process the rows returned by an SQL statement, one at a time. There are two types of cursors:
o Implicit Cursors
o Explicit Cursors
Implicit cursors are created by default to process statements when DML statements like INSERT, UPDATE, DELETE, etc. are executed.
Oracle provides attributes, known as implicit cursor attributes, to check the status of DML operations. Some of them are: %FOUND, %NOTFOUND, %ROWCOUNT and %ISOPEN.
For example: When you execute SQL statements like INSERT, UPDATE, or DELETE, the cursor attributes tell whether any rows were affected and how many. If you run a SELECT INTO statement in a PL/SQL block, the implicit cursor attributes can be used to find out whether any row was returned by the SELECT statement; it raises an error if no data is selected.
The following table specifies the status of the cursor with each of its attributes.
Attribute Description
%FOUND Its return value is TRUE if DML statements like INSERT, DELETE and UPDATE affect at least one row, or a SELECT INTO statement returns one or more rows. Otherwise it returns FALSE.
%NOTFOUND Its return value is TRUE if DML statements like INSERT, DELETE and UPDATE affect no row, or a SELECT INTO statement returns no rows. Otherwise it returns FALSE. It is the opposite of %FOUND.
%ISOPEN It always returns FALSE for implicit cursors, because the SQL cursor is automatically closed after executing its associated SQL statement.
%ROWCOUNT It returns the number of rows affected by DML statements like INSERT, DELETE, and UPDATE, or returned by a SELECT INTO statement.
Let's execute the following program to update the table and increase salary of each customer
by 5000. Here, SQL%ROWCOUNT attribute is used to determine the number of rows
affected:
Create procedure:
DECLARE
total_rows number(2);
BEGIN
UPDATE customers
SET salary = salary + 5000;
IF sql%notfound THEN
dbms_output.put_line('no customers updated');
ELSIF sql%found THEN
total_rows := sql%rowcount;
dbms_output.put_line( total_rows || ' customers updated ');
END IF;
END;
/
Output:
6 customers updated
PL/SQL procedure successfully completed.
Now, if you check the records in customer table, you will find that the rows are
updated.
Explicit cursors are defined by programmers to gain more control over the context area. An explicit cursor is defined in the declaration section of the PL/SQL block and is created on a SELECT statement that returns more than one row.
Steps: You must follow these steps while working with an explicit cursor.
1. Declare the cursor:
CURSOR cursor_name IS SELECT statement;
2. Open the cursor:
OPEN cursor_name;
3. Fetch the cursor, row by row:
FETCH cursor_name INTO variable_list;
4. Close the cursor:
CLOSE cursor_name;
Let's take an example to demonstrate the use of explicit cursor. In this example, we are using
the already created CUSTOMERS table.
Create procedure:
Execute the following program to retrieve the customer name and address.
DECLARE
c_id customers.id%type;
c_name customers.name%type;
c_addr customers.address%type;
CURSOR c_customers is
SELECT id, name, address FROM customers;
BEGIN
OPEN c_customers;
LOOP
FETCH c_customers into c_id, c_name, c_addr;
EXIT WHEN c_customers%notfound;
dbms_output.put_line(c_id || ' ' || c_name || ' ' || c_addr);
END LOOP;
CLOSE c_customers;
END;
/
Output:
1 Ramesh Allahabad
2 Suresh Kanpur
3 Mahesh Ghaziabad
4 Chandan Noida
5 Alex Paris
6 Sunita Delhi
PL/SQL procedure successfully completed.
Transaction
o A transaction is a set of logically related operations; it contains a group of tasks.
o A transaction is an action, or series of actions, performed by a single user to access the contents of the database.
Example: Suppose a bank employee transfers Rs 800 from X's account to Y's account. This small transaction involves several low-level tasks:
X's Account
1. Open_Account(X)
2. Old_Balance = X.balance
3. New_Balance = Old_Balance - 800
4. X.balance = New_Balance
5. Close_Account(X)
Y's Account
1. Open_Account(Y)
2. Old_Balance = Y.balance
3. New_Balance = Old_Balance + 800
4. Y.balance = New_Balance
5. Close_Account(Y)
Operations of Transaction:
Read(X): The read operation is used to read the value of X from the database and store it in a buffer in main memory.
Write(X): The write operation is used to write the value back to the database from the buffer.
Let's take an example of a debit transaction on an account, assuming X's initial value is 4000. The transaction consists of the following operations:
1. R(X);
2. X = X - 500;
3. W(X);
o The first operation reads X's value from the database and stores it in a buffer.
o The second operation decreases the value of X by 500, so the buffer will contain 3500.
o The third operation writes the buffer's value back to the database, so X's final value will be 3500.
But it is possible that, because of a hardware, software, or power failure, the transaction fails before finishing all the operations in the set.
For example: if the debit transaction above fails after executing operation 2, X's value will remain 4000 in the database, which is not acceptable to the bank.
Transaction property
A transaction has four properties. These are used to maintain consistency in a database before and after the transaction.
Property of Transaction
1. Atomicity
2. Consistency
3. Isolation
4. Durability
Atomicity
o It states that all operations of the transaction take place at once; if not, the transaction is aborted.
o There is no midway, i.e., the transaction cannot occur partially. Each transaction is treated as one unit and either runs to completion or is not executed at all.
Abort: If a transaction aborts then all the changes made are not visible.
Example: Let's assume a transaction T consisting of T1 and T2. Account A holds Rs 600 and account B holds Rs 300, and T transfers Rs 100 from account A to account B.

T1            T2
Read(A)       Read(B)
A := A - 100  B := B + 100
Write(A)      Write(B)

If the transaction fails after T1 completes but before T2 completes, Rs 100 is deducted from A but never added to B, leaving the database inconsistent. Atomicity therefore requires that either both T1 and T2 execute or neither does.
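A hedged PL/SQL sketch of this transfer (the table and column names are assumed for illustration), showing how COMMIT and ROLLBACK give the all-or-nothing behavior that atomicity demands:

BEGIN
    -- T1: debit account A
    UPDATE accounts SET balance = balance - 100 WHERE acc_id = 'A';
    -- T2: credit account B
    UPDATE accounts SET balance = balance + 100 WHERE acc_id = 'B';
    COMMIT;             -- both updates become permanent together
EXCEPTION
    WHEN OTHERS THEN
        ROLLBACK;       -- on any failure, undo both updates
        RAISE;
END;
/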
Consistency
o The integrity constraints are maintained so that the database is
consistent before and after the transaction.
o The execution of a transaction will leave a database in either its prior
stable state or a new stable state.
o The consistent property of database states that every transaction sees
a consistent database instance.
o The transaction is used to transform the database from one consistent
state to another consistent state.
For example: The total amount must be maintained before or after the transaction.
Isolation
o It states that the data used during the execution of one transaction cannot be used by a second transaction until the first one is completed.
o Changes made by a transaction become visible to other transactions only after it commits, so concurrent transactions behave as if they were executed one after another.
Durability
o The durability property ensures that once a transaction commits, its changes are permanently recorded in the database and survive subsequent system failures.
o The recovery subsystem of the DBMS is responsible for guaranteeing durability.
States of Transaction
Active
o The active state is the first state of every transaction; in this state, the transaction is being executed.
Partially committed
o In the partially committed state, a transaction executes its final operation, but the data is still not saved to the database.
o In the total mark calculation example, a final display of the total marks step is executed in this state.
Committed
o A transaction is said to be committed when it executes all its operations successfully, and all of its effects are permanently saved in the database.
Failed state
o If any of the checks made by the database recovery system fails, then
the transaction is said to be in the failed state.
o In the example of total mark calculation, if the database is not able to
fire a query to fetch the marks, then the transaction will fail to
execute.
Aborted
o If any of the checks fail and the transaction has reached the failed state, then the database recovery system makes sure that the database is returned to its previous consistent state; if it is not, the transaction is aborted or rolled back to bring the database into a consistent state.
o If the transaction fails in the middle of execution, all of its executed operations are rolled back, restoring the consistent state that existed before the transaction started.
o After aborting the transaction, the database recovery module will select one of the two operations:
1. Re-start the transaction
2. Kill the transaction