
100 Frequently Asked Oracle DBA Interview Questions – RAC | RMAN | Data Guard | Flashback | Data Pump | Patching

Brijesh Mehra – 11/6/25

Section 1: Oracle RAC (Real Application Cluster)

1. What is Oracle RAC and how is it different from a single instance database?
Oracle Real Application Clusters (RAC) is a high-availability solution that allows multiple instances running on different
servers (nodes) to access a single shared database. This ensures continuous service availability and load balancing. In
contrast, a single instance database has only one database instance managing the data. RAC provides fault tolerance,
scalability, and redundancy, making it suitable for mission-critical applications. RAC databases share storage using
technologies like ASM or shared filesystems, while the cluster software handles node coordination. If one node fails,
others continue serving users, which isn’t possible in single instance environments. RAC also requires more complex
networking and cluster configuration compared to single instance setups.

2. Explain the function of Cache Fusion in Oracle RAC.


Cache Fusion is a key feature in Oracle RAC that ensures data consistency between the buffer caches of different nodes
in the cluster. When an instance needs a block that is being modified by another instance, instead of reading it from
disk, the block is transferred directly from one instance’s memory to another using high-speed interconnects. This
mechanism reduces disk I/O and enhances performance. It avoids the need for disk-based pinging, which was common
in earlier Oracle versions. Cache Fusion enables the distributed database to function as if it were a single-instance
system, ensuring real-time access and consistency across all nodes.

3. What are SCAN listeners, and how do they work in RAC?


Single Client Access Name (SCAN) listeners provide a single point of access for clients connecting to an Oracle RAC
database. Instead of configuring the client with individual node IP addresses, SCAN allows the use of a single hostname
that resolves to three IP addresses via DNS. These IPs are tied to SCAN listeners, which redirect connections to the
appropriate local listeners on the RAC nodes. SCAN listeners simplify client connection configurations, support load
balancing, and ensure failover capabilities. They are independent of node additions or removals, making the RAC
environment more flexible and easier to manage from a client perspective.
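
For illustration, a minimal tnsnames.ora entry that connects through SCAN might look like the following (the hostname and service name are placeholders, not values from this document):

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl_svc)
    )
  )

Because the client references only the SCAN name, nodes can be added or removed without touching client configuration.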

4. What are the components of Oracle Clusterware?


Oracle Clusterware is the software that enables clustering in Oracle RAC. It includes several components such as the
Oracle Cluster Registry (OCR), Voting Disk, Cluster Synchronization Services (CSS), Oracle Cluster Ready Services (CRS),
and Event Manager (EVM). OCR stores cluster configuration information, while Voting Disk tracks the health of nodes.
CSS ensures node membership and synchronization, CRS manages cluster resources, and EVM handles event
management. Together, these components facilitate communication, resource management, and high availability in the
RAC environment. They ensure the cluster remains operational, and node failures or startups are handled seamlessly.

5. How do you troubleshoot node eviction in a RAC environment?


Node eviction occurs when a node is forcibly removed from the cluster due to network issues, heartbeat failures, or high
CPU utilization. To troubleshoot, review logs like crsd.log, ocssd.log, and alert.log for the database and Grid
Infrastructure. Check interconnect status, voting disk communication, and OS-level events. Using oswatcher,
diagcollection, and AWR reports helps identify system bottlenecks or spikes. RAC relies heavily on reliable interconnects,
so checking NIC errors, packet drops, and switch logs is crucial. Identifying and resolving the root cause ensures better
cluster stability and avoids repeated node reboots or evictions.

6. What is the purpose of the GES and GCS layers in RAC?


The Global Enqueue Service (GES) and Global Cache Service (GCS) are responsible for managing global resource
coordination in Oracle RAC. GES handles non-cache related resources like DDL locks, while GCS manages block-level
cache transfers between nodes. These services are integral to Cache Fusion and ensure that each RAC instance
maintains data consistency without conflicts. GES resolves lock conflicts and synchronizes access to shared resources.
GCS manages the access to cached data blocks, enabling RAC to function efficiently as a single database image across all
nodes. Together, they ensure smooth operation and concurrency in multi-node environments.

7. How do you add a new node to an existing RAC cluster?


To add a new node to an existing RAC cluster, you begin by preparing the hardware, installing the necessary OS
packages, and configuring the network, SSH equivalence, and shared storage. Run cluvfy to validate node readiness
before the addition. Next, run addnode.sh from the Grid Infrastructure home to extend the clusterware to the new
node, then run addnode.sh from the database home (or the GUI-based installer) to extend the database software. Once added,
configure services and listeners for the new node and test connectivity and failover. Always perform node addition
during a maintenance window after full backup.
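
A rough sketch of the main commands, assuming a hypothetical new node named rac3 (exact script locations vary by Grid Infrastructure version; always check the documentation for your release):

# Validate the new node before the addition
cluvfy stage -pre nodeadd -n rac3 -verbose

# From an existing node, extend the Grid Infrastructure home
$GRID_HOME/addnode/addnode.sh "CLUSTER_NEW_NODES={rac3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3-vip}"

# Extend the database home, then verify the addition
$ORACLE_HOME/addnode/addnode.sh "CLUSTER_NEW_NODES={rac3}"
cluvfy stage -post nodeadd -n rac3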

8. What is the role of OCR and Voting Disk in RAC?


The Oracle Cluster Registry (OCR) stores important cluster configuration information, such as node memberships, cluster
resource settings, and service definitions. The Voting Disk maintains node status and quorum to avoid split-brain
scenarios. Each node writes a heartbeat to the Voting Disk at regular intervals. If a node fails to update the disk for a
defined period, it is considered unhealthy and may be evicted. OCR is critical for startup and cluster management, while
Voting Disk ensures that only the majority of healthy nodes can maintain cluster operations. Loss of both can lead to
cluster instability or failure.

9. How do you check interconnect status in RAC?


Interconnects are private network links between RAC nodes and are crucial for Cache Fusion. To check their status, you
can query the gv$cluster_interconnects view for the interface names and IP addresses in use. Tools like oifcfg, ifconfig,
netstat, and ping help validate network interfaces. You can also run orachk or cluvfy comp nodecon to test interconnect
health and configuration. Monitoring network latency and packet loss is important, as degraded interconnects can lead
to performance issues or node evictions. Redundant interconnects are recommended for production systems to ensure
high availability and fault tolerance.
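
For example, the interconnect interfaces in use can be checked as follows (a sketch; column output varies by version):

-- From SQL*Plus: interfaces each instance is using
SELECT inst_id, name, ip_address, is_public FROM gv$cluster_interconnects;

# From the OS: cluster interface classification and node connectivity check
oifcfg getif
cluvfy comp nodecon -n all -verbose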

10. What happens when a node crashes in RAC?


When a node crashes in Oracle RAC, Clusterware detects the failure via missed heartbeats on the Voting Disk and
interconnect. The failed node is evicted from the cluster to maintain overall cluster health. Surviving nodes redistribute
the workload, and ongoing sessions are either failed over (if TAF is configured) or terminated. Oracle RAC ensures that
the shared database remains accessible through the remaining nodes. The crashed node may be restarted automatically
depending on its availability profile. Logs in crsd, ocssd, and OS-level logs should be reviewed to determine the root
cause of the crash.

11. How does load balancing work in RAC?


In Oracle RAC, load balancing ensures that client connections and workload are evenly distributed across all active
database instances. It works through two methods: client-side and server-side load balancing. Client-side load balancing
distributes connection requests across available listeners using multiple address entries in the TNSNAMES.ORA file.
Server-side load balancing, on the other hand, uses the SCAN listener and Load Balancing Advisory (LBA) to direct
connections to the least-loaded instance dynamically. Services and service-level goals help Oracle determine the best
target instance. Proper configuration of listeners, services, and connection strings is key to optimal load balancing.

12. How do you perform rolling patching in RAC?


Rolling patching in RAC allows applying patches to one node at a time without bringing down the entire cluster,
maintaining high availability. This process involves patching one node, restarting it, verifying functionality, and then
repeating the process on other nodes. Patches that support rolling upgrades (like PSU, RU, or interim patches marked as
rolling applicable) can be applied in this manner. Tools like opatch or opatchauto assist in applying and verifying patches.
Rolling patching is commonly used in combination with Data Guard or GoldenGate to reduce downtime and risk during
maintenance activities in production RAC environments.
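
A hedged sketch of a rolling patch on one node (the patch location is a placeholder; always follow the patch README for the exact, version-specific steps):

# Run as root on one node at a time: analyze first, then apply
$GRID_HOME/OPatch/opatchauto apply /u01/patches/12345678 -analyze
$GRID_HOME/OPatch/opatchauto apply /u01/patches/12345678

# Confirm the inventory before moving to the next node
$ORACLE_HOME/OPatch/opatch lspatches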

13. Explain Transparent Application Failover (TAF) in RAC.


Transparent Application Failover (TAF) is a high-availability feature in Oracle Real Application Clusters (RAC) that ensures
client connections remain intact even if the database instance they are connected to becomes unavailable. When a
failure occurs, TAF seamlessly redirects the client session to an alternate available instance within the RAC environment
without requiring user intervention. This automatic failover mechanism helps maintain application uptime, reduces
disruption, and ensures continued accessibility for end users.

TAF works by re-establishing connections and re-executing SELECT statements on the new database instance, allowing
read operations to continue without requiring re-authentication. While this provides smooth failover for queries, it does
not preserve active transactions, meaning in-progress updates, inserts, or deletes need to be rolled back and resubmitted
by the application. This makes TAF particularly suited for applications that primarily run stateless workloads, such as
reporting systems, analytical tools, and applications with connection pooling.

Configuration of TAF is managed through Oracle service attributes and TNS connect descriptors, which define failover
behavior and policies. Administrators can specify how failover occurs, including the level of support for query
re-execution and whether open cursors are recovered. The effectiveness of TAF depends on how well applications handle
reconnection delays and session migration, making it most useful for applications that can tolerate brief failover periods
without impacting functionality.
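
As an illustration, a connect descriptor with basic TAF configured might look like this (hostname and service name are placeholders):

SALES =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = sales_svc)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 30)(DELAY = 5))
    )
  )

TYPE = SELECT allows in-flight queries to resume on the surviving instance; TYPE = SESSION would only re-create the session.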

14. What are the different types of services in RAC?


In Oracle RAC, services are logical abstractions used to manage and route workloads across instances. There are three
primary service types: Preferred services (run primarily on specific instances), Available services (fail over when needed),
and Uniform services (run equally on all instances). Services help distribute application load, define service-level goals,
and provide failover configurations. Each service can be associated with resource groups, listeners, and TAF policies.
Services can also be managed dynamically using SRVCTL. Proper service configuration allows DBAs to implement
workload management strategies and improve performance and availability across the RAC cluster.
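
For example, a service with preferred and available instances could be created with SRVCTL as follows (database, service, and instance names are placeholders; the long-option syntax assumes 12c-style SRVCTL):

srvctl add service -db orcl -service oltp_svc -preferred orcl1,orcl2 -available orcl3 -clbgoal SHORT -rlbgoal SERVICE_TIME
srvctl start service -db orcl -service oltp_svc
srvctl status service -db orcl -service oltp_svc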

15. What is FAN and how is it used in Oracle RAC?


Fast Application Notification (FAN) is a high-availability feature in Oracle RAC that provides rapid notification of database
service status changes. FAN events are published by Oracle Clusterware and received by clients or connection pools.
These notifications help applications immediately detect and react to instance failures, load changes, or service
relocations. FAN reduces failover time and improves application responsiveness. It is especially useful in environments
using Oracle Clients, JDBC, or Universal Connection Pool (UCP). FAN works in conjunction with services and can be
integrated with load balancers and application servers to enhance end-to-end availability.

16. How do you monitor RAC performance?


RAC performance can be monitored using several Oracle tools and views. Automatic Workload Repository (AWR), Active
Session History (ASH), and Grid Infrastructure logs provide detailed insight into system activity. Views like gv$session,
gv$active_session_history, and gv$resource_limit help assess load across nodes. Cluster Health Monitor (CHM) and tools
like orachk, oswatcher, and top provide OS-level and cluster-level monitoring. Enterprise Manager Cloud Control offers a
centralized graphical interface for RAC performance metrics, alerts, and historical data. Key performance indicators
include CPU usage, interconnect latency, block contention, and load balancing efficiency.
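
Two illustrative queries (a sketch; adapt to your version and needs): the first shows active sessions per instance, the second the global cache (gc%) wait events across the cluster.

-- Active sessions per instance
SELECT inst_id, COUNT(*) AS active_sessions
FROM   gv$session
WHERE  status = 'ACTIVE'
GROUP  BY inst_id;

-- Global cache wait events, worst first
SELECT inst_id, event, total_waits, time_waited
FROM   gv$system_event
WHERE  event LIKE 'gc%'
ORDER  BY time_waited DESC;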

17. Can you have different DB versions on RAC nodes?


No, all nodes in an Oracle RAC cluster must run the same Oracle database version and patch level to ensure consistency
and stability. Mismatched versions can lead to unpredictable behavior, errors, or crashes due to internal
incompatibilities between instances. However, during patching or upgrades, Oracle supports rolling upgrades where one
node is upgraded at a time. This temporary version mismatch is allowed during the upgrade window only. Once the
upgrade is complete, all nodes should run the same version. It’s critical to maintain version uniformity to ensure proper
synchronization, communication, and resource management across the cluster.

18. What is the difference between SCAN and VIP in RAC?


SCAN (Single Client Access Name) provides a simplified and centralized way for client connections, while VIPs (Virtual IP
addresses) are assigned to each node for client failover purposes. SCAN uses three IP addresses resolved through DNS
and is independent of the number of nodes. Clients connect to SCAN, and it redirects them to the appropriate node.
VIPs, on the other hand, move between nodes during failover to ensure continued availability of services. SCAN listeners
handle connection load balancing, while VIPs help detect node availability. Both SCAN and VIP are critical for high
availability and client connectivity in RAC.

19. How does Oracle RAC ensure data consistency across nodes?
Oracle RAC ensures data consistency across all instances using Cache Fusion, which synchronizes access to data blocks in
real-time. When a node requests a data block that is being used or modified by another node, the block is transferred
via the interconnect, not the disk. Global Enqueue Services (GES) and Global Cache Services (GCS) manage locks and
block ownership to prevent conflicts. This architecture ensures that only one node at a time can modify a specific block,
avoiding corruption or inconsistency. RAC also uses a global resource directory to track block status and enforce
consistency protocols across nodes.

20. What is the difference between Oracle RAC One Node and traditional RAC?
Oracle RAC One Node is a single-instance RAC option that runs on one node at a time with the capability to relocate to
another node in case of planned or unplanned downtime. Traditional RAC allows multiple instances on different nodes
to run concurrently, providing continuous availability and scalability. RAC One Node is ideal for cost-conscious
environments that want high availability without the full complexity or licensing cost of multi-node RAC. It supports
online relocation and rolling patching. In contrast, full RAC provides parallel access to the database and is designed for
higher workloads and HA requirements.

Section 2: Oracle Data Guard

21. What is Oracle Data Guard?


Oracle Data Guard is an Oracle Database feature designed for enterprise high availability and disaster recovery. It
maintains one or more standby databases (physical, logical, or snapshot) that mirror the primary database’s data.
Redo logs are automatically transmitted from primary to standby databases to maintain synchronization. In failover
scenarios, Data Guard ensures continuity by promoting a standby to primary. It supports switchover (planned role
swap) and failover (unplanned primary loss). Multiple standby databases can exist across geographic locations for
added resilience. The Data Guard Broker automates management with DGMGRL commands. It offers protection
modes—Maximum Performance, Availability, and Protection—to balance performance and safety. Active Data Guard
extends functionality by enabling read-only reporting on standby. Data Guard reduces downtime (RTO) and data loss
(RPO). It integrates with RMAN for backup on standby. Overall, Data Guard ensures business continuity and disaster
resilience in Oracle environments.

22. Explain the differences between Physical, Logical, and Snapshot standby databases.
Physical standby is a binary replica of the primary database using Redo Apply to maintain block-level synchronization.
It supports fast failover and maintains transactional consistency. Logical standby uses SQL Apply: redo logs are
converted into SQL and reissued, allowing structural changes in standby schema. It enables reporting and local
optimizations on standby while still applying data. Snapshot standby is based on physical standby but opened in
read/write mode for testing and validation, pausing redo apply temporarily. After tasks, it is flashed back to standby
mode and resumes resynchronization. Physical standby ensures exact data replication and is best for disaster
recovery. Logical standby allows flexibility for reporting and minor changes but with certain object limitations.
Snapshot standby is ideal for test or dev without impacting production data. Each type serves distinct use cases: DR,
reporting, and testing scenarios respectively.

23. What is the role of the redo apply process in Data Guard?
Redo Apply is the mechanism used in Physical Standby to apply redo data shipped from the primary. It uses processes
like RFS to receive redo data and MRP to apply it to the datafiles of the standby database. In managed recovery mode, redo is
applied after being archived. With real-time apply, Redo Apply applies data from standby redo logs as soon as they
arrive, reducing data lag. This keeps the standby nearly current with primary and assists in minimal downtime failover
operations. Redo Apply also handles situations like delayed standby or switchovers seamlessly. The process includes
monitoring, error handling, and management via DGMGRL or SQL*Plus. It can be paused, resumed, or even switched
between managed and real-time modes dynamically. Performance tuning of Redo Apply includes configuring
multiple standby redo logs and optimizing network bandwidth. Overall, Redo Apply ensures standby databases
remain synchronized and ready for role transitions.

24. What is the difference between SYNC and ASYNC modes?


SYNC (synchronous) mode ensures that redo data is written to both primary and standby before transaction commit.
This guarantees zero data loss, making it suitable for critical systems. However, it may introduce latency if standby is
geographically distant or network latency is high. ASYNC (asynchronous) mode allows primary to commit without
waiting for standby acknowledgment, improving performance with possible minimal data loss. SYNC mode is used in
Maximum Protection and Maximum Availability configurations. ASYNC mode is common in Maximum Performance
setups. In SYNC, transactions are slower but safer; in ASYNC, faster with slight risk. SYNC requires real-time network
monitoring; ASYNC tolerates network delay. Both modes rely on the Log Network Server (LNS) process. Mode
selection depends on RTO/RPO SLAs and networking constraints. Monitoring tools log transportation mode to
confirm compliance.

25. What is Active Data Guard and how is it different from traditional standby?
Active Data Guard enhances Physical Standby by allowing read-only access while redo apply is running. Unlike
traditional standby, which remains mounted and unavailable for queries, Active Data Guard enables reporting and
backup on standby. It includes automatic block repair: standby detects damaged blocks on primary and repairs them
via fetch. Starting from Oracle 18c, it supports DML redirection for limited write operations. Backups on standby
relieve load on primary using RMAN. It enables rolling upgrades and patching with minimal downtime. Active Data
Guard enhances resource utilization by offloading read work. It maintains real-time synchronization in SYNC mode.
Licensing is required to enable Active Data Guard features. Overall, it extends standby capability to function as a
queryable, resilient database backup.

26. What are the steps to create a Physical Standby database?


First, prepare the primary database: enable FORCE LOGGING, configure ARCHIVELOG, create standby redo logs, and
setup Data Guard parameters. Then, generate a backup of the primary using RMAN or Data Pump. Transfer the
backup and control file to the standby server via secure channel. On standby, restore the backup and control file via
RMAN. Configure initialization parameters and password file. Register standby with Data Guard Broker or configure
manually. Start managed recovery using ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING
CURRENT LOGFILE DISCONNECT. Monitor apply status and validate logs apply correctly. Test role transitions via
switchover and failover. Create additional standby for redundancy if necessary. Confirm data synchronization via
queries to V$DATAGUARD_STATS and DGMGRL SHOW CONFIGURATION.
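
A condensed sketch of the key commands, assuming active duplication with RMAN (sizes are illustrative; Oracle Net, password file, and initialization parameter setup are omitted):

-- On the primary
ALTER DATABASE FORCE LOGGING;
ALTER DATABASE ADD STANDBY LOGFILE SIZE 200M;

-- From RMAN, connected to the primary (TARGET) and the standby (AUXILIARY)
DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE NOFILENAMECHECK;

-- On the standby, start redo apply
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;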

27. How do you switch over and failover in Data Guard?


Switchover is a planned role swap between primary and standby with no data loss. First, quiesce the primary, confirm
both databases are synchronized, then execute SWITCHOVER TO <standby name>. The physical standby becomes the
primary, and the old primary is restarted as a standby. Failover handles unplanned outages. Use FAILOVER TO
<standby name> when the primary is down; redo may be lost if the databases were not fully synchronized. An
immediate failover can be forced if a normal failover cannot complete. After a failover, the old primary must be
reinstated (using Flashback Database) or re-created as a new standby from the new primary. Both operations
can be managed via DGMGRL commands or SQL*Plus. Log actions, verify system health, update applications to point
to new primary (VIP, connection strings). Test and validate system before declaring success.
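
In 12c and later, a SQL*Plus switchover can be sketched roughly as follows (stby_db is a placeholder DB_UNIQUE_NAME):

-- On the primary: check readiness, then switch roles
ALTER DATABASE SWITCHOVER TO stby_db VERIFY;
ALTER DATABASE SWITCHOVER TO stby_db;

-- On the new primary
ALTER DATABASE OPEN;

-- On the former primary, now a standby (after STARTUP MOUNT)
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;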

28. How do you monitor lag in Data Guard?


Monitor transport lag (redo shipping) and apply lag (redo application). Use V$DATAGUARD_STATS,
V$ARCHIVE_DEST_STATUS, and V$STANDBY_LOG to check status and delays. Comparing the highest redo sequence
received on the standby with the highest sequence applied (from V$ARCHIVED_LOG) reveals any gap. DGMGRL SHOW
CONFIGURATION reports transport and apply lag for each database. Use real-time apply mode to minimize lag, and
watch the 'transport lag' and 'apply lag' rows in V$DATAGUARD_STATS. Alerts configured in Enterprise Manager can
notify when delays cross thresholds. Use ADRCI and log tailing to detect failures in transport or apply. Tools like OEM
and scripts can automate monitoring. Set alarms for lag beyond the acceptable RPO. Regular reviews ensure standby
health.
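
For instance, the lag metrics can be read directly on the standby (a simple sketch):

SELECT name, value, time_computed
FROM   v$dataguard_stats
WHERE  name IN ('transport lag', 'apply lag');

DGMGRL> SHOW CONFIGURATION;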

29. What is a Far Sync instance and when is it used?


A Far Sync instance is a lightweight, no-storage instance used to relay redo data to distant standby databases. It
receives redo in SYNC mode and forwards it, allowing primary to commit locally while maintaining zero data loss. It
avoids having heavy standby in distant datacenters experiencing latency. One Far Sync can support multiple
standbys. It has no datafiles, only a control file and standby redo logs; networking reliability is critical. The network link between primary and Far Sync
must be high-speed, low-latency. Far Sync improves topology flexibility: primary <-> Far Sync <-> remote standbys.
Often used in GEO distributed systems to manage failover distances. It allows synchronous commits without
impacting production network. It's configured via DGMGRL or SQL for redundancy.

30. How does Data Guard Broker help in managing configurations?


Data Guard Broker centralizes management of Data Guard configurations using DGMGRL and Enterprise Manager. It
maintains its configuration in broker configuration files on each database and keeps them synchronized automatically. It automates tasks like
switchover, failover, role transitions, and configuration validation. The Broker monitors protection modes, redo
transport status, and latency, generating alerts when thresholds are exceeded. It allows administrators to manage
complexity across multiple standby databases easily. Configuration changes are propagated automatically across the
environment. Broker can enforce rules and prevent invalid operations. It supports fast-start failover with observer. It
enhances HA by providing a single pane for monitoring and managing Data Guard.

31. Can you open a standby database in read/write mode?


A Physical Standby should remain in MOUNT mode for Redo Apply and cannot be opened in read/write mode
without resetting its role. Opening it in read/write would break synchronization with the primary and is not
supported in production. For testing, a snapshot standby can be converted from physical standby by using ALTER
DATABASE CONVERT TO SNAPSHOT STANDBY. This allows read/write operations for testing and validation. Once
ready, it can be flashed back and reconverted to Physical Standby. Logical Standby, however, can be opened in
read/write since it applies SQL changes. With Active Data Guard license, a Physical Standby can be opened read-only
for queries using ALTER DATABASE OPEN READ ONLY;. Always follow best practices and licensing rules when opening
standby databases.

32. What are the protection modes in Data Guard?


Data Guard offers three protection modes: Maximum Protection, Maximum Availability, and Maximum Performance.
Maximum Protection provides zero data loss by requiring synchronous redo writing to standby before commit
completes, but may halt if standby is unavailable. Maximum Availability also ensures zero data loss without halting; it
switches to ASYNC if standby is unreachable. Maximum Performance uses ASYNC to maximize primary performance,
accepting minimal data loss risk. The modes can be set at both primary and standby. Each mode balances availability,
performance, and protection differently. The right mode depends on business SLAs, network latency, and risk
tolerance. Changes can be made using ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE....
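
For example, the mode can be raised and confirmed as follows (a sketch, shown via SQL*Plus and the Broker; SYNC redo transport must already be configured):

-- SQL*Plus
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
SELECT protection_mode, protection_level FROM v$database;

-- Data Guard Broker
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;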

33. What is the role of LGWR and ARCH in redo transport?


LGWR is the Log Writer process on the primary that writes redo entries to online redo logs synchronously. When
Data Guard is enabled, LGWR hands redo to the Log Network Server (LNS) process, which ships it to the remote standby.
ARCH is the archiver process, responsible for archiving online redo logs. ARCH also ships completed archived redo logs to the standby if LGWR
transport isn’t configured. LGWR is used in synchronous and async configuration, ensuring minimal latency. ARCH is
essential in async/archived transport mode, slower but more robust. LGWR provides near real-time shipping, while
ARCH handles log switches and older redo. Proper configuration and monitoring of these processes are crucial to
ensure Data Guard data transfer reliability.

34. How do you perform a snapshot standby conversion?


To convert a physical standby to snapshot standby, use ALTER DATABASE CONVERT TO SNAPSHOT STANDBY; while
it’s mounted. This opens the database read/write for testing, pausing Redo Apply. Perform application tests, ETL
workloads, or schema changes. After testing, use ALTER DATABASE CONVERT TO PHYSICAL STANDBY; to flashback
changes and resume Redo Apply. Use RMAN to ensure proper flashback logs and storage retention. Ensure Flashback
is enabled on standby to support quick conversion. Validate standby config afterward using DGMGRL. Keep backups
of snapshot operations. Snapshot standby conversion enables safe testing without affecting production database.
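
A condensed sketch of the round trip (assumes Flashback Database is enabled and a fast recovery area is configured on the standby):

-- Stop redo apply, then convert and open for testing
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
ALTER DATABASE OPEN;

-- After testing: convert back, then restart in MOUNT mode and resume redo apply
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;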

35. What are FSFO (Fast-Start Failover) and Observer?


Fast-Start Failover (FSFO) provides automatic, fast failover of a primary to standby without DBA intervention. It
requires Data Guard Broker configuration and an observer process running on a separate host. The observer
monitors the primary continuously, detects failures, and initiates failover when necessary. FSFO can complete
failover within seconds to minutes depending on configuration. It relies on Protection Mode and Observer
configurations: fast-start failover must be enabled in the Broker (ENABLE FAST_START FAILOVER). Failover is automatic, but manual failover remains available.
Post-failover, the observer can automatically reinstate the former primary as a standby once it is reachable again. FSFO simplifies DR by automating failover while
maintaining minimal data loss.

36. How do you troubleshoot gaps between primary and standby?


Gaps occur when redo transport or apply lags behind. First, query V$ARCHIVED_LOG, V$ARCHIVE_DEST_STATUS, and
V$DATAGUARD_STATS to identify missing redo sequences. Query V$ARCHIVE_GAP to detect gaps. Use DGMGRL
SHOW CONFIGURATION to see overall status. Re-enable shipping of missing logs via ALTER SYSTEM SET
LOG_ARCHIVE_DEST_STATE_n='ENABLE' or copy and register the files manually. Ensure RFS and MRP processes are running on
standby. Use network diagnostic tools to determine latency/drops. Check alert logs for errors. Monitor inter-site
connectivity and storage I/O. Add standby redo logs and tune redo transport parameters for stability.
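
A quick check for gaps on the standby, and cataloging a manually copied log, might look like this (the file path is a placeholder):

-- On the standby: any gap in received redo?
SELECT thread#, low_sequence#, high_sequence# FROM v$archive_gap;

-- After copying a missing archived log manually, register it with RMAN
RMAN> CATALOG ARCHIVELOG '/u01/arch/1_1234_987654321.arc';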

37. What is the difference between real-time apply and managed recovery mode?
Managed recovery mode applies redo from archived logs only after they are archived. Real-time apply applies redo
from standby redo logs as soon as they arrive, without waiting. Real-time apply reduces recovery lag, making standby
near-synchronized. Managed recovery is simpler and more stable but slower. Real-time apply requires standby redo
logs; without them, only managed is possible. Real-time is preferred when data freshness is critical. Managed mode
may be appropriate for less mission-critical standby. Switch between modes using SQL commands. Both methods
maintain data guard synchronization; real-time is preferred for minimal RPO.

38. How do you perform role transitions using DGMGRL?


Within DGMGRL, run CONNECT / to access broker-managed config. To switchover, execute SWITCHOVER TO
<standby name>;. Monitor the operation status and open the new primary if not automated. To failover during
disaster, use FAILOVER TO <standby name>;. Use SHOW CONFIGURATION; to verify primary/standby role status.
Confirm that the new primary opens read/write and the old primary opens standby. Role transitions are
transactionally safe and automate parameter update. Post-transition, re-create standby if necessary. Ensure
connection services are updated (VIP, listener, TAF).
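
A minimal DGMGRL session for a switchover might look like the following (database names are placeholders):

dgmgrl sys@stby_db
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SWITCHOVER TO 'stby_db';
DGMGRL> SHOW CONFIGURATION;

In a disaster, connect to the surviving standby and run FAILOVER TO 'stby_db'; instead.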

39. Can a Logical Standby support all data types?


Logical standby supports most SQL data types but has limitations. It supports tables, indexes, PL/SQL, synonyms,
sequences, views, and many data types. However, it does not fully support certain types such as BFILEs, ROWIDs, user-defined object types, nested tables, or Global
Temporary Tables. Some advanced features like XMLType with certain storage options may not replicate. It cannot
replicate all DDL operations or tables with certain advanced features. Logical standby is ideal for reporting but
requires careful schema design. For unsupported features, use Physical Standby or integrate custom replication.
Assess schema compatibility before configuring logical standby.

40. How to check the Data Guard configuration status?


To check the status of an Oracle Data Guard configuration, administrators can use several methods depending on their
preferred approach. The most direct way is through DGMGRL, where the SHOW CONFIGURATION command provides an
overview of the Data Guard environment, including primary and standby roles, synchronization status, and transport lag.
Within SQL*Plus, several key dynamic views offer detailed insights, including V$DATAGUARD_STATUS for system-wide
monitoring, V$ARCHIVE_DEST_STATUS to verify log shipping, and V$DATAGUARD_STATS to assess transport and apply
rates. On the primary database, administrators can run SELECT PROTECTION_MODE, PROTECTION_LEVEL FROM
V$DATABASE to confirm the configured data protection mode, such as Maximum Performance, Maximum Availability, or
Maximum Protection. On the standby database, checking archived log application status is crucial, which can be done
using SELECT THREAD#, SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG to ensure logs are being properly applied. The
SHOW DATABASE command in DGMGRL further confirms individual database settings, roles, and synchronization health.
Monitoring alert logs regularly helps detect errors or inconsistencies, allowing for proactive issue resolution. Oracle
Enterprise Manager (OEM) provides a visual dashboard for tracking replication performance, log transfer health, and
synchronization states, simplifying Data Guard management.
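
Representative checks (a sketch; the standby name is a placeholder):

DGMGRL> SHOW CONFIGURATION;
DGMGRL> SHOW DATABASE 'stby_db';

-- On either database
SELECT database_role, protection_mode, protection_level FROM v$database;

-- On the standby: latest applied sequence per thread
SELECT thread#, MAX(sequence#) AS last_applied
FROM   v$archived_log
WHERE  applied = 'YES'
GROUP  BY thread#;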

Section 3: Oracle RMAN (Recovery Manager)

41. What is RMAN VALIDATE command and how is it used?


The RMAN VALIDATE command is used to check the integrity and usability of backup files and database files without
actually restoring them. It helps DBAs verify whether backups can be successfully restored if needed. This command
checks for block corruption, missing files, and media errors in datafiles, control files, archived logs, and backup sets.
VALIDATE BACKUPSET, VALIDATE DATABASE, and VALIDATE DATAFILE are commonly used forms. It doesn't alter any
database content and is a preventive check to detect potential recovery issues. It can also validate backup strategies
for compliance. The output includes any corrupt blocks or inaccessible files. VALIDATE can be scheduled as part of
health checks to ensure high availability of recovery options.
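
Typical invocations (illustrative; the datafile and backup set numbers are placeholders):

RMAN> VALIDATE DATABASE;
RMAN> VALIDATE DATAFILE 4;
RMAN> VALIDATE BACKUPSET 21;
RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE ARCHIVELOG ALL;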

42. How can RMAN be used to duplicate a database for testing or development?
RMAN provides the DUPLICATE DATABASE command to create a copy of the primary database on a different host or
SID, often used for testing, QA, or dev environments. It can duplicate using active connection (FROM ACTIVE
DATABASE) or using backups (BACKUP LOCATION). It recreates control files, datafiles, and redo logs with new file
paths. Optional parameters like NOFILENAMECHECK, SKIP READONLY, or SET NEWNAME help manage file structures.
It doesn’t require the use of a recovery catalog but is enhanced if one is present. Oracle Net connectivity must be set
up for the target and auxiliary. Cloning ensures consistency by using SCN or PITR options. RMAN duplication simplifies
cloning without requiring OS-level file copies or shutdowns.
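
A minimal active duplication, assuming Oracle Net is configured to both databases and the directory structure matches on the auxiliary host (hence NOFILENAMECHECK); database names are placeholders:

RMAN> CONNECT TARGET sys@prod
RMAN> CONNECT AUXILIARY sys@testdb
RMAN> DUPLICATE TARGET DATABASE TO testdb
        FROM ACTIVE DATABASE
        NOFILENAMECHECK;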

43. How do you restore a lost control file using RMAN?


To restore a lost control file, connect to RMAN and use the command RESTORE CONTROLFILE FROM AUTOBACKUP;.
This retrieves the latest backup from the configured location such as FRA, tape, or user-defined directory. After
restore, the database must be mounted using ALTER DATABASE MOUNT;. Then catalog the backups if required and
recover the database using RECOVER DATABASE;. Finally, open the database using ALTER DATABASE OPEN
RESETLOGS; if necessary. Restoring a control file requires correct initialization parameters and access to backup sets.
It is critical in crash recovery, especially when RMAN repository is unavailable. Having control file autobackup enabled
is best practice for recovery preparedness.
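
The sequence described above, condensed (assumes the autobackup is in its default location; SET DBID may be required first when no recovery catalog is used):

RMAN> STARTUP NOMOUNT;
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
RMAN> ALTER DATABASE MOUNT;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;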

44. What is the difference between backup optimization and compression in RMAN?
Backup optimization skips backing up files that are already backed up and unchanged since the last backup, provided
it meets the criteria set by RMAN policies. Compression reduces the physical size of the backup using algorithms like
BASIC, LOW, MEDIUM, or HIGH depending on CPU usage tolerance. Optimization saves time and I/O, while
compression saves space. Backup optimization is enabled using CONFIGURE BACKUP OPTIMIZATION ON;, while
compression is enabled with CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;. Both
features reduce backup overhead but serve different purposes. Together, they improve efficiency and resource
utilization in backup management.

45. What is the use of the REPORT OBSOLETE command in RMAN?


REPORT OBSOLETE identifies backups that are no longer needed according to the configured retention policy (e.g.,
redundancy or recovery window). It lists backups that can be safely deleted without affecting recoverability. The
command only reports; it doesn't delete files. To remove these files, use DELETE OBSOLETE; after confirming. The
output includes information like backup type, device type, creation date, and status. It ensures disk space is reclaimed
while maintaining policy compliance. Useful for periodic cleanup and audit readiness. Best run after CROSSCHECK to
ensure catalog and physical files are in sync. This is critical in managing space in FRA or tape libraries.

46. How do you back up only archived redo logs in RMAN?


To back up only archived logs, use BACKUP ARCHIVELOG ALL;. You can add options like DELETE INPUT to delete logs
after successful backup. You can specify FROM TIME, UNTIL TIME, or SCN to control which logs to include. RMAN
ensures only available logs are included, and logs are cataloged during backup. Use LIST ARCHIVELOG ALL; to view logs
before backup. This approach reduces space usage and ensures logs are not lost before backup. Backing up archived
logs is crucial for point-in-time recovery. Automating archived log backup every few hours minimizes data loss. Use
CROSSCHECK ARCHIVELOG ALL; to validate their existence beforehand.

47. How do you back up the SPFILE with RMAN?


Use BACKUP SPFILE; to create a standalone backup of the server parameter file. It can also be included as part of a full
database backup using BACKUP DATABASE INCLUDE CURRENT CONTROLFILE PLUS ARCHIVELOG;. Having the SPFILE
backed up ensures you can recreate initialization settings during disaster recovery. Use SHOW ALL; in RMAN to verify
if SPFILE backups exist. RMAN stores SPFILE backups as part of backup sets and can restore them using RESTORE
SPFILE FROM AUTOBACKUP;. It’s recommended to enable control file and SPFILE autobackup using CONFIGURE
CONTROLFILE AUTOBACKUP ON;. This provides full startup configurability during complete recovery scenarios.

48. How can you perform a backup when the database is in MOUNT mode?
In MOUNT mode, RMAN can perform backups of the whole database, control files, SPFILE, and archived logs, but not
tablespaces that require the database to be open. Use BACKUP DATABASE; after mounting the database using
STARTUP MOUNT;. This is typically done when the database is not open due to maintenance or recovery scenarios.
The MOUNT state is sufficient because RMAN reads file headers and doesn't need transaction-level access. Useful for
offline backups or cloning from backup. Backups taken in MOUNT mode are consistent only if the database was shut down cleanly beforehand.
Always check logs post-backup for errors or corrupt blocks reported.

49. What are the different types of RMAN restore operations?


RMAN supports full database restore, tablespace restore, datafile restore, control file restore, and SPFILE restore. Full
database restore is used after catastrophic failure. Tablespace or datafile restores are useful for localized corruption.
Control file restore is needed when all control files are lost. SPFILE restore allows recovery of parameter settings. Use
RESTORE DATABASE;, RESTORE TABLESPACE <name>;, or RESTORE DATAFILE <n>; depending on requirement. Each
restore can be followed by RECOVER to apply redo. Partial restores help reduce downtime. Use VALIDATE before
actual restore to check integrity. RMAN supports automated dependency tracking and proper restore sequence.

50. How do you restore a specific tablespace using RMAN?


To restore a specific tablespace, use RESTORE TABLESPACE <name>; followed by RECOVER TABLESPACE <name>;.
Ensure the tablespace is taken offline using ALTER TABLESPACE <name> OFFLINE IMMEDIATE;. RMAN will fetch the
necessary backup sets and archived logs to complete the restore and recovery process. Once completed, bring the
tablespace online using ALTER TABLESPACE <name> ONLINE;. Useful for handling corruption or accidental data loss in
specific business modules. You can restore from a specific backup using UNTIL TIME, UNTIL SCN, or TAG. Always test
on lower environments before performing on production. It minimizes outage impact compared to full database
recovery.

51. How do you restore specific datafiles using RMAN?


To restore specific datafiles, use RESTORE DATAFILE <datafile_number>; followed by RECOVER DATAFILE
<datafile_number>;. First, identify the affected datafile using V$DATAFILE or DBA_DATA_FILES. Then, take the
datafile offline with ALTER DATABASE DATAFILE <number> OFFLINE; if the database is open. After restore and
recovery, bring it online using ALTER DATABASE DATAFILE <number> ONLINE;. This method is ideal when corruption
or loss affects only certain datafiles, allowing targeted recovery without downtime for the whole database. RMAN
retrieves the correct backup pieces and archived logs as needed. You can also specify a time or SCN for point-in-time
recovery.

52. How do you perform RMAN recovery using a specific SCN or timestamp?
To recover to a specific SCN or timestamp, first use SET UNTIL SCN <number>; or SET UNTIL TIME 'YYYY-MM-DD
HH24:MI:SS'; before issuing restore and recover commands. For example: SET UNTIL TIME '2025-06-01 10:00:00';
RESTORE DATABASE; RECOVER DATABASE;. This method is useful during point-in-time recovery scenarios such as
logical corruption or accidental data changes. You can recover a full database or specific tablespaces/datafiles using
this approach. After successful recovery, open the database with ALTER DATABASE OPEN RESETLOGS;. Always ensure
the necessary archived logs are available before starting this process to avoid recovery failure.
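
A hedged example of a point-in-time recovery run block (the timestamp is illustrative):

RMAN> RUN {
        SET UNTIL TIME "TO_DATE('2025-06-01 10:00:00','YYYY-MM-DD HH24:MI:SS')";
        RESTORE DATABASE;
        RECOVER DATABASE;
      }
RMAN> ALTER DATABASE OPEN RESETLOGS;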

53. How do you catalog existing backups in RMAN?


Use the CATALOG command in RMAN to register existing backup pieces or image copies that RMAN doesn't currently
track. For example: CATALOG START WITH '/u01/backups/'; catalogs all backups under that path, or use CATALOG
BACKUPPIECE '/u01/backups/bkp_01.bkp'; to catalog a single file. This is essential when you move backup files or
recover a control file and need to re-register existing backups. It helps RMAN be aware of all usable backup files for
restore operations. Always run a crosscheck after cataloging to validate existence and status. Cataloging enables full
utilization of available backups for recovery.

54. How can you use RMAN to check for physical and logical corruption?
You can use the VALIDATE and BACKUP VALIDATE commands in RMAN to check for corruption. VALIDATE DATAFILE
<n>; or BACKUP VALIDATE CHECK LOGICAL DATABASE; scans blocks for both physical and logical errors without
creating a backup. Physical corruption includes checksum errors or I/O issues, while logical corruption refers to
inconsistent or invalid data block structures. The command output reports any corrupt blocks and file names. This is
useful for proactively detecting problems before they impact backup or recovery operations. Regular validation
builds confidence in data integrity, especially before planned backups or migrations.

55. What is the difference between image copy and backup set in RMAN?

An image copy in RMAN is an exact, unmodified duplicate of a datafile, control file, or archived log, created using
BACKUP AS COPY. Since it preserves the original file format, the database can use it immediately without requiring a
restore process. Image copies are ideal for scenarios where fast recovery is needed, such as switch-based recovery,
where the database can seamlessly transition to the copied files with minimal downtime.

A backup set, on the other hand, is a proprietary RMAN format created using BACKUP AS BACKUPSET. Unlike image
copies, a backup set can contain multiple files bundled together and compressed, making it more storage-efficient.
However, because backup sets are not directly usable by the database, they must go through the restore process before
they can be applied. Backup sets also support additional features like encryption, offering enhanced security for
sensitive data.

RMAN provides flexibility in choosing between these formats based on recovery goals and available system resources.
While image copies offer faster recovery due to direct usability, backup sets help conserve space and provide advanced
security features, making them suitable for long-term storage and archival purposes.

56. How does RMAN handle corruption during backup or restore?


During backup, RMAN detects corruption using block checksums and logs errors in V$BACKUP_CORRUPTION and
V$COPY_CORRUPTION. It skips physically corrupt blocks unless told otherwise. During restore, if corruption is
detected in the backup, RMAN attempts to use a different copy or backup set if available. Logical corruption is
reported only if CHECK LOGICAL is specified. For tablespace or datafile recovery, the presence of corrupt blocks may
require manual intervention. RMAN can also use DBMS_REPAIR for known corrupt blocks. Regular validation and
multiple backup copies help reduce corruption impact. Monitoring views helps track and handle corruption events.

57. What are the best practices for configuring RMAN retention policies?
Retention policies determine how long backups are retained and include REDUNDANCY and RECOVERY WINDOW.
Use CONFIGURE RETENTION POLICY TO REDUNDANCY 2; or CONFIGURE RETENTION POLICY TO RECOVERY WINDOW
OF 7 DAYS;. The choice depends on business recovery requirements. Schedule DELETE OBSOLETE regularly to free
space. Combine with BACKUP OPTIMIZATION ON to skip unnecessary backups. Monitor with REPORT OBSOLETE and
adjust policies for database growth. Always test restore procedures to confirm policies are adequate. Use FRA wisely
to ensure RMAN does not delete required files prematurely. Periodic validation ensures compliance and recovery
readiness.

58. How do you perform a tablespace point-in-time recovery (TSPITR)?


TSPITR allows recovery of specific tablespaces to a past point without affecting the rest of the database. It uses
auxiliary instance setup. In RMAN, use RECOVER TABLESPACE <name> UNTIL TIME 'YYYY-MM-DD HH24:MI:SS'
AUXILIARY DESTINATION '/tmp/aux';. RMAN creates an auxiliary instance, restores backups, applies redo, and then
copies the recovered datafiles back. Afterward, the tablespace is brought online. TSPITR is used when unwanted
changes affect only a portion of the database. Ensure space and initialization files are available for the auxiliary
instance. It's a powerful feature for selective recovery scenarios in production environments.

59. How can you use RMAN in Data Guard environments?


In Oracle Data Guard, RMAN plays a critical role in managing backups and recovery across both the primary and
standby databases. By configuring backups to run on the standby, administrators can significantly reduce the
performance impact on the primary database while maintaining a robust recovery strategy. Using a recovery catalog
and per-database persistent settings (CONFIGURE ... FOR DB_UNIQUE_NAME), standby backups can be scheduled independently, ensuring comprehensive data
protection. RMAN facilitates restoration and recovery operations from either site’s backups, allowing flexibility in
disaster recovery scenarios. Additionally, RMAN supports file restoration from the primary to the standby, ensuring
synchronized environments and minimizing downtime. Metadata and catalogs remain consistent across both
locations, enabling seamless recovery and data integrity. Standby backups not only help distribute I/O load efficiently
but also meet compliance requirements by providing an additional layer of security. Properly structured Data Guard-
aware scripts further enhance coordination between sites, automating recovery operations and ensuring optimal
synchronization. This deep integration of RMAN within Data Guard strengthens Oracle’s disaster recovery
architecture, offering resilience against failures while maintaining high availability and data consistency across all
protected instances.

60. How do you recover from a missing or deleted archived log using RMAN?
If an archived log is missing, RMAN restore will fail unless it can find another backup copy. Use LIST ARCHIVELOG
ALL; to confirm availability. If missing, check backups or use a Data Guard standby copy if available. To recover, use
RESTORE ARCHIVELOG ALL; (or a specific range with FROM SEQUENCE) and re-run RECOVER DATABASE;. If a log is permanently lost, recovery may
only be possible to the last available log (incomplete recovery). Use SET UNTIL to recover to a known good point.
Always back up archived logs frequently and use DELETE INPUT to avoid space issues. Ensure RMAN backups include
logs for disaster recovery readiness.

Section 4: Oracle Data Pump

61. What is Oracle Data Pump?


Oracle Data Pump is a high-speed utility provided by Oracle for exporting and importing database objects and data. It
allows database administrators to transfer data between Oracle databases efficiently using proprietary dump files.
Unlike traditional exp/imp, Data Pump works with server-side processes, resulting in significantly faster operations. It
supports various modes such as FULL, SCHEMA, TABLE, and TABLESPACE. With Data Pump, you can filter specific objects,
use parallel processing for better performance, and even resume interrupted jobs. It stores export files in a directory
object on the database server, improving I/O performance. The utility includes expdp for export and impdp for import.
Data Pump can be used for full database migrations, schema refreshes, and moving subsets of data. It supports
remapping of schema names and tablespaces during import. With options like compression, encryption, and job
restartability, Data Pump is highly flexible. It also integrates with Oracle Enterprise Manager for graphical job control and
monitoring.

62. What is the difference between Data Pump and traditional export/import (exp/imp)?
The key difference between Data Pump and traditional exp/imp lies in performance, architecture, and flexibility. Data
Pump uses direct path and server-side processing, which makes it faster than the older client-side exp/imp tools.
Traditional exp/imp exports data through the client session, while Data Pump runs jobs on the server using background
processes. Data Pump allows for job parallelism using the PARALLEL parameter, while exp/imp does not support this.
Another major improvement is the ability to resume Data Pump jobs after interruption, which was not possible with
exp/imp. Data Pump supports filtering via INCLUDE and EXCLUDE clauses, as well as remapping objects during import,
providing more control. It also supports transportable tablespaces, compression, and encryption. The dump files
generated by Data Pump are not compatible with traditional tools. Data Pump jobs can be monitored and controlled
dynamically using DBMS_DATAPUMP API or Enterprise Manager. Overall, Data Pump is more scalable and suited for
large data volumes and enterprise requirements.

63. How do you perform a full database export using Data Pump?
To perform a full database export using Oracle Data Pump, the expdp utility is used with the FULL=Y parameter. This
exports all database objects and data into a dump file, typically placed in a pre-defined directory object. A basic example
is: expdp system/password FULL=Y DIRECTORY=dp_dir DUMPFILE=full_db.dmp LOGFILE=full_db.log. Ensure that the
user has the EXP_FULL_DATABASE role. The DIRECTORY parameter points to a server-side OS directory object, not a
local path. This operation includes users, tablespaces, roles, and actual data across all schemas. It is often used during
migrations, environment refreshes, or full backups for logical data movement. You can also use additional options like
COMPRESSION, PARALLEL, and EXCLUDE to fine-tune the export. Exported files can be copied to another server and
imported using impdp. During full export, care should be taken to manage disk space, as dump files can grow large. Full
exports are often scheduled during low-traffic windows to minimize performance impact.

64. How can you export/import specific tables?


Exporting and importing specific tables using Data Pump is done using the TABLES parameter. For export, use a
command like: expdp user/password TABLES=table1,table2 DIRECTORY=dp_dir DUMPFILE=tables.dmp
LOGFILE=tables.log. This creates a dump file containing only the specified tables. You can use filters to select rows using
the QUERY parameter. During import, the same dump file can be used with: impdp user/password TABLES=table1,table2
DIRECTORY=dp_dir DUMPFILE=tables.dmp LOGFILE=imp_tables.log. You can remap the schema or tablespace during
import using REMAP_SCHEMA or REMAP_TABLESPACE. This feature is useful for migrating subsets of data or performing
table-level backups. Data Pump also supports wildcard patterns in table names for bulk selection. Exporting only
required tables helps reduce dump size and speeds up operations. Make sure referenced objects like indexes and
constraints are also included if needed. For partitioned tables, partitions can be exported individually using the
TABLES=table:partition syntax.
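As a hedged illustration of the row filter and partition syntax mentioned above (schema, table, partition, and column names are placeholders; shell quoting rules vary, and a parameter file avoids escaping issues entirely):

expdp hr/password TABLES=employees DIRECTORY=dp_dir DUMPFILE=emp_subset.dmp LOGFILE=emp_subset.log QUERY=employees:'"WHERE department_id = 10"'
expdp hr/password TABLES=sales:sales_q1 DIRECTORY=dp_dir DUMPFILE=sales_q1.dmp LOGFILE=sales_q1.log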

65. What are the different Data Pump modes (FULL, SCHEMA, TABLE, TABLESPACE)?
Oracle Data Pump provides several modes for exporting and importing data, allowing administrators to perform
database migrations and backups efficiently. The FULL mode enables the export of an entire database, including users,
schemas, tablespaces, and all associated objects, making it ideal for full-scale migrations or backups of production
systems. The SCHEMA mode focuses on exporting individual schemas, capturing their tables, indexes, procedures, and
other objects, which is useful when moving a specific application or user environment. The TABLE mode allows selective
export of specific tables and their dependent objects, enabling granular control over data movement while preserving
referential integrity. The TABLESPACE mode targets all objects residing within a given tablespace, providing an efficient
way to manage data stored across different tablespaces or migrate sets of related tables together. During import,
administrators must ensure they use a compatible mode to restore the exported data properly. Each mode supports
filtering options such as INCLUDE, EXCLUDE, and QUERY, allowing further customization of exported or imported objects
based on specific requirements. FULL and SCHEMA modes are commonly used for production refreshes, test
environment setups, or migrating entire application ecosystems. The TABLESPACE mode is beneficial when reorganizing
storage, transferring data to optimized tablespaces, or isolating large datasets for analysis. Understanding these modes
helps administrators select the best method depending on the scope and complexity of their data transfer needs,
ensuring an efficient and structured approach to database migration and backup management.
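The mode is selected by the leading parameter on the command line. A minimal sketch of each mode (all names are illustrative):

expdp system/password FULL=Y DIRECTORY=dp_dir DUMPFILE=full.dmp
expdp system/password SCHEMAS=hr DIRECTORY=dp_dir DUMPFILE=hr.dmp
expdp hr/password TABLES=employees,departments DIRECTORY=dp_dir DUMPFILE=tabs.dmp
expdp system/password TABLESPACES=users DIRECTORY=dp_dir DUMPFILE=users_tbs.dmp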

66. How do you use PARALLEL in Data Pump for performance?


The PARALLEL parameter in Data Pump significantly boosts performance by allowing multiple threads to process the job
simultaneously. For example: expdp user/password SCHEMAS=scott DIRECTORY=dp_dir DUMPFILE=scott_%U.dmp
PARALLEL=4 FILESIZE=1G LOGFILE=scott.log. Here, %U is a substitution variable that enables multiple dump files. Setting
PARALLEL=4 means the job will use four threads to read and write in parallel, speeding up the operation. The benefit of
parallelism is most noticeable in large data exports or imports. On import, the same parameter speeds up loading:
impdp user/password SCHEMAS=scott DIRECTORY=dp_dir DUMPFILE=scott_%U.dmp PARALLEL=4. Keep in mind, Data
Pump parallelism also requires sufficient CPU, I/O, and memory resources. Ensure dump files are spread across high-
speed disks to avoid bottlenecks. Also, more parallel threads will open more connections to the database, so check
resource limits. It is important to balance performance with resource utilization on the server.

67. How can you estimate the size of a Data Pump export?
To estimate the size of a Data Pump export without actually creating dump files, use the ESTIMATE_ONLY=Y parameter.
You can choose between BLOCKS or STATISTICS as the estimation method. For example: expdp user/password
TABLES=emp ESTIMATE_ONLY=Y ESTIMATE=BLOCKS LOGFILE=estimate.log. The BLOCKS method reads the actual
number of data blocks from the table segments, offering more accuracy. The STATISTICS method uses table statistics,
which may not reflect current data sizes if stats are outdated. This estimation is useful for capacity planning, especially
when dealing with limited disk space or large volumes of data. It helps DBAs avoid job failures due to insufficient space.
You can also use it to compare export sizes across tables or schemas. Use LOGFILE to store the size estimate for review.
The estimate output includes the total bytes expected in the dump file.

68. What is the use of the EXCLUDE and INCLUDE parameters?


The INCLUDE and EXCLUDE parameters in Oracle Data Pump help refine export and import operations by allowing
administrators to selectively include or exclude specific objects. The EXCLUDE parameter prevents certain object types
or names from being exported or imported, such as excluding a specific table with EXCLUDE=TABLE:"='EMP'".
Conversely, the INCLUDE parameter ensures only selected objects are part of the operation, such as limiting the export
to tables beginning with HR using INCLUDE=TABLE:"LIKE 'HR%'". These filters apply to various database objects,
including tables, indexes, views, packages, and other schema components. Multiple filters can be combined within a
single Data Pump command to provide granular control, reducing unnecessary data transfer and optimizing dump file
sizes. Beyond filtering by object names, these parameters can also be based on metadata attributes, allowing precise
customization of exported or imported content. This is particularly beneficial in environments with large schemas where
migrating only a subset of data is required for testing or backup purposes. However, caution must be exercised when
using INCLUDE and EXCLUDE filters, as improperly defined criteria can lead to dependency conflicts during import. It is
recommended to test complex filter configurations in a non-production environment to ensure smooth execution.
Proper use of these parameters improves efficiency, minimizes storage use, and enhances control over database
migration processes.
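Because the embedded quotes in these filters are easily mangled by the OS shell, they are often placed in a parameter file instead. A minimal sketch, with illustrative names (note that INCLUDE and EXCLUDE cannot be combined in the same job, although multiple INCLUDE or multiple EXCLUDE entries are allowed):

# hr_filter.par - a parameter file keeps the quotes intact regardless of the OS shell
SCHEMAS=hr
DIRECTORY=dp_dir
DUMPFILE=hr_filtered.dmp
LOGFILE=hr_filtered.log
INCLUDE=TABLE:"LIKE 'HR%'"
INCLUDE=PROCEDURE

expdp hr/password PARFILE=hr_filter.par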

69. What is transportable tablespace export/import?


Transportable tablespace (TTS) allows faster migration of large amounts of data between databases by copying datafiles
directly and importing metadata using Data Pump. Instead of unloading and reloading data, you simply move the
datafiles and apply their structure using impdp. TTS is ideal for moving large read-only or read-write tablespaces across
platforms with compatible endian formats. Export is initiated with expdp TRANSPORT_TABLESPACES=tbs_name and
includes datafiles copied from the OS. The corresponding metadata dump file is then imported using impdp. This
drastically reduces downtime compared to full export/import. For cross-platform transport, the RMAN CONVERT
command is used to adjust datafile format. The feature is widely used for large data migrations, especially in data
warehousing or consolidation scenarios. It requires the source tablespaces to be placed in read-only mode while the datafiles are copied and the metadata is exported. Tablespaces must be self-contained, meaning no objects in the set reference objects outside it.
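A condensed sketch of the flow, assuming a single tablespace called tbs_data (names and paths are placeholders):

ALTER TABLESPACE tbs_data READ ONLY;
expdp system/password TRANSPORT_TABLESPACES=tbs_data DIRECTORY=dp_dir DUMPFILE=tbs_meta.dmp LOGFILE=tts_exp.log

Copy the datafiles and the metadata dump file to the target server (using RMAN CONVERT if the endian format differs), then plug the tablespace in:

impdp system/password DIRECTORY=dp_dir DUMPFILE=tbs_meta.dmp TRANSPORT_DATAFILES='/u01/oradata/tbs_data01.dbf' LOGFILE=tts_imp.log
ALTER TABLESPACE tbs_data READ WRITE;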

70. How do you resume an interrupted Data Pump job?


To resume a failed or stopped Data Pump job, you can attach to the job and resume it using the ATTACH command. For
example: expdp user/password ATTACH=job_name. Once attached, type CONTINUE_CLIENT to resume job execution.
This is especially useful if a job was stopped due to network issues, power failure, or server crash. Data Pump
automatically maintains job state in the database via the master table. As long as this master table and dump files exist,
the job can be resumed. You can check running or stopped jobs using the DBA_DATAPUMP_JOBS view. In Enterprise
Manager, you can also view and resume jobs graphically. Always ensure that file locations haven’t changed before
resuming. If needed, you can kill the client session without terminating the job. Jobs can also be restarted from scratch
using the REUSE_DUMPFILES parameter if resumption fails. This feature is crucial in maintaining reliability during long-
running exports and imports.
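A minimal interactive sketch, assuming the job name reported in DBA_DATAPUMP_JOBS is SYS_EXPORT_SCHEMA_01 (an illustrative name):

expdp system/password ATTACH=SYS_EXPORT_SCHEMA_01
Export> STATUS
Export> CONTINUE_CLIENT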

71. How do you monitor Data Pump progress in real time?


You can track Data Pump progress in real time by attaching to an active job using the ATTACH parameter, allowing you
to issue commands such as STATUS to check ongoing operations. This displays details like the files being processed, the
percentage completed, and overall performance metrics. Another option is querying database views such as
DBA_DATAPUMP_JOBS, DBA_DATAPUMP_SESSIONS, and V$SESSION_LONGOPS, which provide insights into current
activity without interrupting the running job. The V$SESSION_LONGOPS view is particularly useful for estimating
completion time and measuring remaining workload. In Oracle Enterprise Manager, Data Pump jobs can be monitored
visually under the Scheduler or Data Movement section, offering a centralized way to assess job status. Additionally,
reviewing the log file generated during the export or import process provides live updates on progress, making it an
effective way to track execution if written frequently. Monitoring Data Pump jobs is especially critical when dealing with
large exports or using parallel processing, as it helps identify bottlenecks, unexpected delays, or performance
inefficiencies. By actively overseeing the progress of these operations, database administrators can optimize job
execution, reduce downtime, and quickly address any issues that arise, ensuring smooth and efficient data movement.
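For example, the following DBA queries (illustrative) give a quick view of job state and estimated progress; for Data Pump jobs, the OPNAME in V$SESSION_LONGOPS typically matches the job name:

SELECT owner_name, job_name, operation, job_mode, state FROM dba_datapump_jobs;
SELECT opname, sofar, totalwork, ROUND(sofar/totalwork*100, 1) AS pct_done
FROM v$session_longops
WHERE totalwork > 0 AND sofar <> totalwork;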

72. What is the difference between DIRECTORY and DUMPFILE parameters?


In Data Pump, the DIRECTORY parameter specifies the server-side directory object where dump files and log files will be
created or read. This directory must be pre-created using the CREATE DIRECTORY SQL command and must point to a
valid OS path. The DUMPFILE parameter, on the other hand, defines the name(s) of the dump files used to store
exported data or read during import. Example: DIRECTORY=dp_dir DUMPFILE=export.dmp. Without a valid DIRECTORY,
Data Pump cannot write files, and an error will occur. Multiple DUMPFILEs can be used to allow parallel threads to write
simultaneously for performance. You can also use %U in dumpfile names to auto-generate unique filenames during
parallel exports. DIRECTORY ensures Data Pump jobs remain secure by controlling where files are stored. Together,
these parameters define the I/O layer of Data Pump jobs. Misconfiguration can lead to failures or permission issues
during export/import.
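For example, DBA_DIRECTORIES shows where a directory object points, and %U with FILESIZE lets parallel workers write several dump files into that location (names and sizes are illustrative):

SELECT directory_name, directory_path FROM dba_directories;
expdp hr/password SCHEMAS=hr DIRECTORY=dp_dir DUMPFILE=hr_%U.dmp FILESIZE=2G PARALLEL=2 LOGFILE=hr.log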

73. How do you remap schema or tablespaces during import?


During Data Pump import, you can remap schema names and tablespaces using REMAP_SCHEMA and
REMAP_TABLESPACE parameters. This is useful when restoring data to a different user or tablespace than the original.
For example: REMAP_SCHEMA=HR:HR_NEW imports all objects from the HR schema into HR_NEW. Similarly,
REMAP_TABLESPACE=OLD_TBS:NEW_TBS changes the tablespace for all objects being imported. These remapping
options are helpful in test environments, schema refreshes, or consolidations. Multiple remap parameters can be
specified for complex imports. Ensure that the target schema and tablespaces exist before importing, or the import will
fail. You can combine remapping with filtering options like INCLUDE and EXCLUDE. These parameters make Data Pump
highly flexible and allow DBAs to customize where and how data is restored. Use the LOGFILE option to confirm
successful remaps during the import process. It is a powerful way to avoid overwriting production schema or data
structures accidentally.
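A combined example (schema, tablespace, and file names are illustrative):

impdp system/password DIRECTORY=dp_dir DUMPFILE=hr.dmp REMAP_SCHEMA=HR:HR_TEST REMAP_TABLESPACE=USERS:USERS_TEST LOGFILE=imp_hr_test.log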

74. What are common errors in Data Pump and how to resolve them?
Common Data Pump errors include ORA-39002 (invalid operation), ORA-31655 (no dump file found), and ORA-39087
(directory object not found). These usually stem from incorrect parameter usage, missing dump files, or permissions
issues. The ORA-39002 error may occur if incompatible parameters are used or if you try to resume a job with missing
files. ORA-31655 means the DUMPFILE or LOGFILE path may be invalid or inaccessible. The fix often involves ensuring
the directory exists and is correctly granted to the user with GRANT READ, WRITE ON DIRECTORY. Another common
issue is ORA-39126, which signals data or object inconsistencies during import. Compression and encryption errors can
occur if parameters like ENCRYPTION_PASSWORD are forgotten during import.

75. Can you compress export files? If yes, how?


Yes, Oracle Data Pump supports compression of export files using the COMPRESSION parameter. This helps reduce disk
space and speeds up file transfer. For example, expdp user/password SCHEMAS=HR COMPRESSION=ALL compresses
both metadata and data. Available options are NONE, METADATA_ONLY, DATA_ONLY, and ALL. METADATA_ONLY is the default and compresses only the metadata in the dump file, which provides minimal space savings. ALL compresses both data and
metadata, offering the highest space savings. Compression is done on the fly using Oracle’s internal algorithm, so no
need for post-export zipping. On import, no need to decompress — Data Pump handles it automatically. Note that
compression requires the Advanced Compression option to be licensed for some settings (e.g., DATA_ONLY). For optimal
performance, combine COMPRESSION=ALL with PARALLEL to balance CPU usage and speed. Compressed files are
especially useful for archiving and transferring across networks. Always monitor CPU usage, as compression increases
resource consumption during export.
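A complete command combining the options discussed above might look like this (names are placeholders; COMPRESSION=ALL assumes the Advanced Compression option is licensed):

expdp system/password SCHEMAS=hr DIRECTORY=dp_dir DUMPFILE=hr_comp_%U.dmp COMPRESSION=ALL PARALLEL=4 LOGFILE=hr_comp.log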

Section 5: Oracle Flashback

76. What is Oracle Flashback and why is it used?


Oracle Flashback is a powerful feature that allows you to view or revert database objects to a previous state without
traditional restore operations. It is mainly used to recover from logical errors like accidental deletions, incorrect updates,
or wrong transactions. Flashback saves time compared to full database recovery, as it avoids restoring files from
backups. It works by accessing undo data or flashback logs, depending on the operation. Flashback is especially useful in
testing, development, and production environments where uptime and data accuracy are crucial. Features like Flashback
Table, Flashback Query, and Flashback Database provide various levels of recovery options. DBAs often use Flashback as
a first line of defense before resorting to RMAN-based recovery. Flashback complements Data Guard and RMAN,
offering logical-level recovery while preserving physical structures. It increases confidence in error correction and
minimizes downtime. Overall, Flashback enhances resilience, saves time, and simplifies database administration.

77. What are the types of flashback technologies in Oracle?


Oracle offers several Flashback technologies, each serving a specific purpose: (1) Flashback Query – retrieves data from a
past point using AS OF clause. (2) Flashback Version Query – shows all versions of a row over time. (3) Flashback
Transaction Query – traces changes made by transactions. (4) Flashback Table – restores a table to a previous
timestamp. (5) Flashback Drop – recovers dropped tables from the Recycle Bin. (6) Flashback Database – rewinds the
entire database to an earlier state. (7) Guaranteed Restore Points – ensures you can revert to a specific time, used
commonly before upgrades. These features rely on undo data or Flashback Logs stored in the Fast Recovery Area. Not all
flashback options require enabling, but Flashback Database and Guaranteed Restore Points need specific configurations.
Each feature is useful depending on whether you're fixing row-level errors or full-database issues. Using these
appropriately can drastically reduce recovery efforts.
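For example, Flashback Query is just a SELECT with an AS OF clause (table, column, and interval are illustrative):

SELECT * FROM hr.employees AS OF TIMESTAMP SYSTIMESTAMP - INTERVAL '30' MINUTE WHERE employee_id = 100;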

78. How do you use Flashback Table?


To use Flashback Table, you must have row movement enabled on the target table. This is done using ALTER TABLE
<table_name> ENABLE ROW MOVEMENT;. Then, you can flash the table back using the command: FLASHBACK TABLE
<table_name> TO TIMESTAMP <time>; or TO SCN <scn_number>;. This operation reverts all data changes to the
specified time or SCN without affecting other tables. It is often used to recover from user errors like mass deletions or
updates. Flashback Table is fast and doesn’t require restoring from backups, provided sufficient undo data is available.
You can also flash back multiple tables simultaneously. It is recommended to back up or export the table before using
Flashback Table to prevent irreversible changes. Flashback privileges must be granted to the user performing the
operation. This feature is limited by undo retention time, so the older the data, the less likely a successful flashback will
occur. It offers a quick recovery without downtime or complex restore procedures.
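A minimal sketch, assuming the hr.employees table and a 15-minute-old state (adjust names, times, and SCNs):

ALTER TABLE hr.employees ENABLE ROW MOVEMENT;
FLASHBACK TABLE hr.employees TO TIMESTAMP SYSTIMESTAMP - INTERVAL '15' MINUTE;
-- or, using an SCN captured earlier
FLASHBACK TABLE hr.employees TO SCN 1234567;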

79. What is Flashback Database and how is it enabled?


Flashback Database allows the entire database to be rewound to a prior point-in-time using flashback logs, avoiding a
full restore. It is useful in cases like failed upgrades, major logical corruption, or bad application deployments. To enable
it, the database must be in ARCHIVELOG mode, and Flashback Logging must be explicitly turned on using ALTER
DATABASE FLASHBACK ON;. You must also configure the Fast Recovery Area (FRA) and ensure sufficient space for storing
flashback logs. Once enabled, Oracle automatically creates and manages flashback logs. To flashback the database,
mount it and use the command FLASHBACK DATABASE TO TIMESTAMP <time>; or TO RESTORE POINT. This operation is
faster than traditional recovery and supports point-in-time rollback. However, it consumes disk space and slightly affects
performance. Flashback Database can only rewind to a point after Flashback Logging was enabled. It's commonly used
before upgrades or large-scale changes to provide a fallback option.
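A condensed sketch of enabling and later using the feature (the timestamp is illustrative):

-- enable (database must be in ARCHIVELOG mode with an FRA configured)
ALTER DATABASE FLASHBACK ON;
-- later, to rewind: shut down, mount, flash back, then open with RESETLOGS
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO TIMESTAMP TO_TIMESTAMP('2025-06-11 09:00:00','YYYY-MM-DD HH24:MI:SS');
ALTER DATABASE OPEN RESETLOGS;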

80. How do you check if flashback is enabled for the database?


To verify flashback status, query V$DATABASE and check the FLASHBACK_ON column. It will show YES if Flashback
Database is enabled. Alternatively, run SELECT FLASHBACK_ON FROM V$DATABASE; in SQL*Plus or SQL Developer. You
can also check Flashback Log retention and space usage by querying V$RECOVERY_FILE_DEST and
V$FLASHBACK_DATABASE_LOG. The DBA_FEATURE_USAGE_STATISTICS view can show detailed Flashback feature
usage. If Flashback is not enabled, the status will read NO, and attempting Flashback commands will result in errors.
Ensure that the Fast Recovery Area (FRA) is configured, as Flashback logs are stored there. Checking flashback status is
crucial before major operations like upgrades, data loads, or schema changes. Having Flashback enabled provides a
safety net and can reduce recovery complexity in case of failure. Proper monitoring of flashback health helps DBAs
ensure system recoverability is always available when needed.
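For example:

SELECT flashback_on FROM v$database;
SELECT oldest_flashback_scn, oldest_flashback_time, retention_target FROM v$flashback_database_log;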

81. What is the difference between Flashback Database and Restore Point?
Flashback Database and Restore Point are both Oracle recovery features, but they serve different purposes and function
in distinct ways. Flashback Database provides a continuous method to rewind the entire database to any point within
the configured flashback retention window, using flashback logs that are persistently maintained in the background. This
allows administrators to revert changes across the system efficiently without relying on traditional restore processes.
Since Flashback Database operates through continuous logging, it provides flexibility for broad recovery scenarios,
making it valuable for undoing unintended operations or correcting database-wide errors. A Restore Point, however, is a
manually created marker at a specific SCN or timestamp that acts as a fixed rollback target. Restore Points can be either
normal or guaranteed, with Guaranteed Restore Points ensuring that all necessary logs are retained for a future rollback,
even if Flashback Database is disabled. Because Restore Points do not continuously log changes like Flashback Database,
they consume significantly less space and serve a more defined purpose—typically used before major upgrades or
significant modifications to allow precise recovery to a known state. One key difference is how recovery is performed.
Flashback Database enables rolling back to any point within its retention period, offering flexibility in undoing changes
incrementally. Restore Points, on the other hand, provide a predefined recovery target, limiting rollback options to that
specific SCN or timestamp. Since Restore Points do not track every change over time like Flashback Database, they are
not suitable for general point-in-time recovery unless explicitly planned. Flashback Database is more appropriate for
recovering from unexpected data loss or corruption across the system, while Restore Points are better suited for
controlled recovery scenarios where administrators anticipate potential rollbacks. Often, both features are used
together during planned maintenance, ensuring robust recovery planning while minimizing downtime and risk.

82. How to perform Flashback Drop (Recycle Bin)?


When a table is dropped in Oracle, it is moved to the Recycle Bin instead of being immediately removed, if the feature is
enabled. You can recover such a table using FLASHBACK TABLE <original_table_name> TO BEFORE DROP;. This restores
the table with its data and dependent objects like indexes. To view dropped tables, use SHOW RECYCLEBIN; or query the
DBA_RECYCLEBIN view. Flashback Drop only works for user tables, not for system or dictionary objects. If multiple
versions of a table exist in the Recycle Bin, you can specify the exact object name from the recycle bin listing. You can
also purge a table permanently using PURGE TABLE <table_name>; or clear the whole recycle bin with PURGE
RECYCLEBIN;. This feature is especially useful for accidental drops, as it provides a quick way to undo mistakes without
restore operations. Note that objects in the recycle bin can be purged automatically when their tablespace comes under space pressure, so recovery is only possible while the recycled object still exists.
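A short sketch (the table name is illustrative):

SHOW RECYCLEBIN;
FLASHBACK TABLE employees TO BEFORE DROP;
-- optionally restore under a different name if the original name is now taken
FLASHBACK TABLE employees TO BEFORE DROP RENAME TO employees_restored;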

83. How does Flashback Version Query help troubleshoot data issues?
Flashback Version Query lets you retrieve all historical versions of a specific row over a time range. It uses undo data to
show how and when changes occurred, and which transactions performed them. The syntax is SELECT VERSIONS_XID,
VERSIONS_STARTTIME, VERSIONS_ENDTIME, column_list FROM table VERSIONS BETWEEN SCN MINVALUE AND
MAXVALUE;. This is valuable for auditing, debugging application bugs, or tracking unauthorized changes. It also reveals
transaction IDs and timestamps, helping DBAs trace issues precisely. The feature depends on undo retention, so
sufficient retention time must be configured. It can be used in combination with Flashback Transaction Query to analyze
full change history. Flashback Version Query does not affect current data—it’s purely a read operation. Ideal for forensic
analysis, it aids in proactive data management and trust restoration after unwanted changes. It’s commonly used by
auditors, developers, and DBAs alike.
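A formatted version of the query pattern described above (table, column, and interval are illustrative):

SELECT versions_xid, versions_starttime, versions_endtime, versions_operation, salary
FROM hr.employees
VERSIONS BETWEEN TIMESTAMP SYSTIMESTAMP - INTERVAL '1' HOUR AND SYSTIMESTAMP
WHERE employee_id = 100;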

84. What is a Guaranteed Restore Point?


A Guaranteed Restore Point (GRP) is a crucial recovery feature in Oracle that ensures all necessary flashback logs are
retained, allowing the database to be rewound precisely to that moment, regardless of retention policies. Unlike normal
restore points, which may be automatically deleted when storage space is needed, GRPs remain intact until explicitly
removed by the administrator. This makes them particularly valuable in production environments where a safe rollback
option is required before performing system upgrades, patch installations, or major schema changes. To create a GRP,
administrators use the command CREATE RESTORE POINT rp_name GUARANTEE FLASHBACK DATABASE, which instructs
Oracle to keep all relevant logs necessary for future recovery. Since GRPs rely on flashback logs, the Fast Recovery Area (FRA) must have sufficient space to store these logs for the intended duration, ensuring smooth recovery when needed. If
storage becomes insufficient, database operations may encounter complications, so it is essential to monitor FRA usage.
When a rollback is required, Oracle allows flashing back the database to the GRP using FLASHBACK DATABASE TO
RESTORE POINT rp_name, instantly restoring it to the exact saved state. This capability is essential for mitigating risks, as
it enables administrators to reverse unintended changes or failures without relying on traditional backups and restores.
Since GRPs override normal retention targets, logs are preserved beyond standard cleanup cycles, maintaining a stable
recovery option even in dynamic database environments. However, it is advisable to manually drop the GRP after
confirming the success of an operation to prevent excessive storage consumption in FRA, which could lead to system
performance issues. Understanding the role and proper management of Guaranteed Restore Points is fundamental for
effective database recovery planning, ensuring smooth operations and minimizing downtime in critical scenarios.
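A minimal sketch (the restore point name is illustrative):

CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;
-- if a rollback is needed later
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO RESTORE POINT before_upgrade;
ALTER DATABASE OPEN RESETLOGS;
-- once the change is confirmed successful, release the space
DROP RESTORE POINT before_upgrade;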

85. Can flashback features impact performance or storage?


Yes, flashback features, especially Flashback Database and Guaranteed Restore Points, can impact both performance
and storage. Flashback Database generates flashback logs continuously, which increases disk I/O and consumes space in
the Fast Recovery Area (FRA). If the FRA becomes full, it can lead to errors or interruptions in backups and logging
operations. Flashback logging also slightly increases redo generation overhead. Features like Flashback Query or
Flashback Table have minimal performance impact as they read from undo or archived data. Recycle Bin usage can cause
tablespaces to grow if dropped tables are not purged regularly. Guaranteed Restore Points retain all logs required for
rollback, which can quickly consume large space during busy periods. It’s essential to monitor FRA usage with
V$RECOVERY_FILE_DEST and manage retention policies wisely. While these features provide excellent recovery
capabilities, they require thoughtful resource planning. Properly configured, they offer significant recovery benefits with
manageable resource costs.
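FRA pressure can be monitored with queries such as these (illustrative):

SELECT name, space_limit/1024/1024 AS limit_mb, space_used/1024/1024 AS used_mb FROM v$recovery_file_dest;
SELECT file_type, percent_space_used, percent_space_reclaimable FROM v$recovery_area_usage;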

Section 6: Patching and Maintenance

86. What are the different types of patches in Oracle?


Oracle offers a variety of patch types, each serving distinct purposes based on the nature of fixes required and the
environment in which they are applied. One-off patches are designed to address specific bugs affecting individual
systems, usually as a temporary measure until a more comprehensive update becomes available. Patch Set Updates (PSUs) provide scheduled quarterly fixes, covering security vulnerabilities and regression corrections, ensuring ongoing system stability. Critical Patch Updates (CPUs) also follow a quarterly release cycle, but they focus exclusively on security patches to protect databases from potential exploits and vulnerabilities. Release Updates (RUs) go beyond basic fixes by bundling new features, performance enhancements, and bug resolutions into a single package. When an RU introduces regressions or compatibility concerns, Oracle releases Release Update Revisions (RURs), which offer minor adjustments and security compliance improvements tailored to that RU version. Database Release Updates (DRUs) are specifically designed for long-term support (LTS) database versions, incorporating essential fixes while maintaining long-term compatibility. Additionally, specialized patches like Grid Infrastructure PSUs (GI PSUs) cater to Oracle GI environments, ensuring optimal performance and stability of cluster-based architectures. Some patches are rolling-compatible, meaning they can be applied with minimal downtime in clustered environments such as Real Application Clusters (RAC).
Others require planned outages, making scheduling an essential part of patch deployment. Understanding each patch’s
scope, timing, and compatibility is key to maintaining system integrity, reducing technical debt, and ensuring continued
vendor support. Administrators must carefully assess their business requirements, balancing security, functionality, and
operational stability when choosing the appropriate patch type. Prior to production deployment, thorough testing in
staging environments is necessary to validate patch applicability, prevent unexpected disruptions, and confirm smooth
integration within existing configurations.

87. What is the difference between PSU, CPU, RU, and RUR?
Oracle provides various patch types to maintain the health, security, and performance of its database software. These
patches differ in their contents, purposes, and applicability, and understanding them is essential for designing an
effective patching strategy.

Patch Set Updates (PSU): PSUs were introduced to include both critical security fixes and important non-security bug
fixes in a single patch bundle. They are released quarterly and are cumulative, meaning each PSU includes all the fixes
from the previous PSUs within the same patch line. PSUs aim to reduce the complexity of applying multiple individual
patches. They were designed to offer a middle ground between Critical Patch Updates (CPUs) and major release
updates. However, PSUs are now deprecated and have been replaced by RUs in modern Oracle versions (starting from
12.2 and 18c onward). Organizations using older versions (11g or early 12c) might still see PSUs in use, but Oracle
recommends transitioning to RUs for better support and coverage.

Critical Patch Updates (CPU): CPUs are Oracle's quarterly security-focused patch bundles. They contain fixes only for
known security vulnerabilities and do not include any functionality or performance-related bug fixes. CPUs are
particularly useful for organizations that follow strict security policies or have regulatory compliance requirements,
where applying non-security patches could introduce change risk. CPUs are less intrusive, making them suitable for
environments that cannot afford downtime for extensive patch testing. However, CPUs can be superseded by RUs,
which contain the same security fixes plus additional updates, making CPUs less comprehensive and often redundant in
modern patching strategies. Oracle continues to release CPUs for backward compatibility, but they are no longer the
preferred patching method for most production systems.

Release Updates (RU): RUs are the modern replacement for PSUs and represent Oracle’s current standard for patch
distribution. Released quarterly, RUs are comprehensive patch bundles that include all CPU security fixes, bug fixes, and
feature enhancements. They are cumulative like PSUs and are designed to be the main patching mechanism for actively
maintained Oracle Database versions. RUs are highly recommended for production environments because they provide
the most complete and up-to-date patch set, helping organizations avoid regressions and benefit from continuous
improvements. RUs may sometimes introduce minor behavioral changes due to feature enhancements, so thorough
testing in staging environments is advised before applying to production. They are compatible with OPatch and
OPatchauto tools and are supported in both single-instance and RAC environments.

Release Update Revisions (RUR): RURs are smaller patch sets released in between RUs to address critical regressions or
bugs introduced by a specific RU. They contain only targeted fixes without adding new features or behavioral changes,
making them safer for production environments that prefer stability over new functionality. RURs are released only for
selected RUs and typically lag behind the main RU schedule. They allow administrators to remain on a known RU
baseline while selectively patching critical issues. RURs are cumulative within their RU family but not across different
RUs. For example, RUR2 of RU 19.10 will include fixes from RUR1 of 19.10, but not from RU 19.11 or higher. This model
provides flexibility for risk-averse environments to stay stable while still getting important fixes.

Conclusion:
In modern Oracle patching strategy, RUs are the primary recommended patch type, replacing both PSUs and CPUs, as
they offer a holistic approach with security, bug fixes, and enhancements. RURs serve as an optional path for minimizing
change risk after applying an RU. CPUs still exist for specific security-only requirements but are largely redundant now.
PSUs are being phased out and should be avoided in favor of the newer model. Selecting the right patch type depends
on your environment's tolerance for change, need for new features, and security compliance requirements. A well-
documented patching policy that aligns with Oracle's lifecycle support is essential for database reliability and vendor
support.

88. How do you apply a patch using OPatch utility?


First, verify you have the correct Oracle Home and environment variables set. Shut down the database, listeners, and
Grid components if required for patching. Extract the patch zip into a temporary directory in the Oracle Home path. Run
opatch prereq to check prerequisites and conflicts. Then execute opatch apply from the patch directory and follow
prompts. OPatch backs up changed files and records patch metadata automatically. Upon completion, run opatch
lsinventory to verify patch installation. Start the services and test functionality thoroughly. If patch application fails,
consult opatch.log and resolve identified issues. Always keep the original patch files in case rollback is needed. Build a
rollback plan including service status and backup validation. Perform an integrity check post-patching. Document each
step and output for audit purposes.
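A condensed shell sketch of the sequence above (the paths and patch number are placeholders):

export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
export PATH=$ORACLE_HOME/OPatch:$PATH
cd /staging/34765931          # unzipped patch directory (illustrative number)
opatch prereq CheckConflictAgainstOHWithDetail -ph ./
opatch apply
opatch lsinventory | grep -i 34765931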

89. What is OPatchauto and when should you use it?


OPatchauto automates rolling and non-rolling patching for Grid Infrastructure and Oracle Database environments. It
orchestrates patch application across nodes, services, and components with minimal manual intervention. Use
OPatchauto for patches supporting rolling upgrades to avoid downtime. The tool handles node switchover, resource
relocation, and service restarts. It's especially useful in RAC clusters where manual patching increases risk and
complexity. OPatchauto can detect patch requirements, apply patches in correct sequence, and verify inventory
consistency. For GI patches including clusterware, it's the recommended method. It logs each step and provides
checkpoints for restartability. Be sure to back up OCR, voting disks, and config files before using it. Test OPatchauto in
staging before production to familiarize with behavior. It simplifies cluster patch management and reduces human
error.
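A typical invocation looks like the following (paths are placeholders; for GI homes, opatchauto is normally run as root):

opatchauto apply /staging/gi_ru_patch -analyze     # dry-run analysis first
opatchauto apply /staging/gi_ru_patch              # actual apply on this node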

90. How do you check patch inventory?


Execute opatch lsinventory from the Oracle Home directory to list all installed patches and components. The output
includes patch IDs, versions, descriptions, and applicable modules. When performing GI patching, run the command
from the GI Home. You can also call opatch lsinventory | grep “Patch” to focus on patch entries. For a summary, use
opatch lspatches to list patch names and dates. Record the baseline inventory before patching for comparison post-
application. Comparing inventories pre- and post-patch ensures accuracy. Inventory export can be generated using
opatch lsinventory -detail -oh $ORACLE_HOME > inventory.txt. Store this file for audit and rollback validation. If patch
IDs have changed unexpectedly, investigate potential failed or skipped steps.

91. What precautions should be taken before applying patches?


Before patching, perform a full backup of the Oracle Home, Grid Home, and database. Review the patch README,
compatibility matrix, and MOS notes for known issues. Ensure that required patch dependencies or interim bundles are
identified and applied. Use opatch prereq to detect conflicts or missing prerequisites. Plan the patch in a maintenance
window with stakeholders informed. Verify sufficient disk space in Oracle Home and Fast Recovery Area. Validate that
automatic backups and logs are running. For RAC/GI, ensure all nodes have synchronized state and healthy CRS.
Prepare and test rollback steps, including backup validation. Monitor environment after patch for alert logs and
performance metrics. Test critical applications post-patch to confirm functionality. Always use lsnrctl, crsctl, and srvctl to verify listener and resource status.
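Typical pre-checks in a RAC/GI environment might include (the database name and paths are placeholders):

crsctl check cluster -all
srvctl status database -d PRODDB
lsnrctl status
df -h $ORACLE_HOME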

92. How do you rollback a patch in Oracle?


To rollback, use the opatch rollback -id <patch_id> command in the patch directory or Oracle Home. First, shut down
related services, including databases and Grid components. Run the rollback command and follow prompts. OPatch
replaces modified files with backups created during patch application. Once complete, verify removal with opatch
lsinventory. Restart services and validate functionality to detect regressions. If 'FORCECOPY' was used, manual cleanup
may be required. For RAC, rollback must be performed node-by-node or via OPatchauto for cluster-wide consistency. If
rollback fails, manually restore Oracle Home from backup. Document rollback logs and validation results. After rollback,
reapply or apply an alternative patch if necessary. Ensure baseline inventory matches rollback state.
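A short sketch (the patch ID is a placeholder):

opatch rollback -id 34765931
opatch lsinventory | grep -i 34765931    # should return nothing once the rollback succeeds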

93. How do you apply interim patches?


Interim patches are hot fixes targeting specific bugs. First, identify the required patch from Oracle Support. Download
and validate compatibility using opatch prereq. Extract the interim patch into the Oracle Home or patch directory. For
single-instance systems, shut down the database. On RAC/GI, use OPatchauto for safer execution. Run opatch apply for
individual patches and follow prompts. Verify installation via opatch lsinventory. Restart necessary components and
confirm system health. Interim patches may require manual change control steps after validation. Monitor alert and
listener logs for issues post-application. Document interim patch IDs and purpose for future audits. Periodically assess if
interim fixes have been subsumed by standard RUs or RURs and retire them accordingly.

94. What is the process for patching in a RAC environment?


In RAC, patches must preserve cluster availability. Begin by reviewing patch type and compatibility with rolling upgrade
support. Take backups of Oracle Home, GI Home, OCR, and Voting Disk. Apply patch on one node (non-rolling), stop
listeners/services, run opatch apply, then restart components and verify functionality. If patch allows rolling mode, use
OPatchauto to sequentially patch nodes with minimal impact. Move services using SRVCTL to avoid outage. Monitor
CRS logs and resource status on each node before proceeding. Repeat on all nodes, confirm SCAN listeners, VIPs, and
services remain functional. Validate overall cluster health with crsctl check cluster -all and crsctl stat res -t. Test application connectivity across the
cluster. Record logs and ensure patch consistency across homes. Cleanup old backups only after successful patch
deployment.
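Illustrative commands for draining and checking a node during rolling patching (database, service, and instance names are placeholders):

srvctl relocate service -db PRODDB -service app_svc -oldinst PRODDB1 -newinst PRODDB2
srvctl stop instance -db PRODDB -instance PRODDB1
crsctl stat res -t
srvctl status scan_listener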

95. How do you patch Oracle Grid Infrastructure?


Start Grid Infrastructure patching by reviewing patch documentation. Backup OCR, voting disks, and GI Home. If patch
supports rolling, use OPatchauto to apply across nodes sequentially. For non-rolling patches, services and CRS must be
stopped manually first. Stop dependent services and run crsctl stop crs. Apply the patch using OPatch, or use opatchauto apply where automation is supported. Post-patch, run crsctl start crs and restart resources with srvctl. Repeat the procedure across all nodes.
Use opatch lsinventory to verify patch installation in GI Home. Check CRS logs, use crsctl stat res -t and srvctl status
listener to validate resource health. Ensure VIP and SCAN remain resolvable. Test failover and service relocation
manually. Document patch completion and schedule monitoring window post-application.
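A manual, non-rolling sketch on one node (run as root where required; paths are placeholders):

crsctl stop crs
$GRID_HOME/OPatch/opatch apply        # run from the unzipped GI patch directory
crsctl start crs
crsctl stat res -t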

96. How do you handle conflicts during patching?


Conflicts occur when files in Oracle Home have been changed manually or by other patches. Before applying, run
opatch prereq CheckConflictAgainstOH to detect conflicts. If conflicts are detected, review OUI inventory or pending
patches for corrective steps. Oracle Support may provide merged patches or instructions. Avoid forcing conflict
resolution with OPatch -force unless guided by Oracle. If patching continues, manual conflict resolution may require file
replacement, re-extraction, or Home refresh. Always backup affected files before overwrite. Re-run inventory check
after resolution. For cluster environments, ensure all nodes are synchronized. Use rollback then reapply if necessary.
Consult MOS notes or patch README for specific conflict guidelines.
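The conflict pre-check is typically run from the unzipped patch directory, for example:

opatch prereq CheckConflictAgainstOHWithDetail -ph ./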

97. What is one-off patching?


One-off patches are ad hoc fixes provided by Oracle Support for specific customer issues. They are tailored to
environments that cannot wait for general patch releases. Before usage, test one-off patches in a non-production
system. Apply using OPatch following standard procedures. Keep track of patch ID, applied date, and reason for future
audits. Because they may not be part of future RUs, one-offs could be overwritten during standard patch cycles.
Document them in patch inventory and review before each major patch deployment. Always check if the issue has been
fixed in newer RUs/RURs to avoid long-term dependencies on one-offs. Maintain communication with Oracle Support
for updates. Clean up one-off patches when no longer needed.

98. How do you automate patching for multiple environments?


Use orchestration tools like Ansible, Puppet, Chef, or shell scripts for consistent patch application. Scripts should
validate environment, backup Oracle Homes, run opatch prereq, apply patch, and verify inventory. Use OPatchauto for
RAC or GI systems within playbooks. Maintain central patch library and version control to avoid mismatches. Schedule
patch execution during approved maintenance windows. Implement logging and error handling in automation scripts.
Send alerts on failures. Automate pre- and post-patch health checks (inventory, listener, CRS, services). Use dry-run
mode for validation before live patching. Document run details and output in a central repository. Regularly review
script and patch repository for updates. Integrate with CI/CD pipelines for test environments.
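A deliberately simplified shell sketch of the idea (host names, paths, and the patch number are placeholders; a real playbook would also stop services, take backups, and add error handling and notifications):

for host in db01 db02 db03; do
  ssh oracle@"$host" 'export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
    cd /staging/34765931 || exit 1
    $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./ || exit 1
    $ORACLE_HOME/OPatch/opatch apply -silent
    $ORACLE_HOME/OPatch/opatch lspatches'
done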

99. What are patching best practices in production systems?


Carefully plan patching windows to align with business cycles and scheduled maintenance periods, ensuring minimal
impact on production environments. Before applying any patch, perform comprehensive backups of Oracle software
and database homes, including critical components such as OCR and voting disks for RAC configurations. Review all
patch documentation, including the README file, My Oracle Support (MOS) notes, and potential conflicts, to ensure
compatibility with existing configurations. Always test patches in a non-production clone environment first to identify
any unforeseen issues before deploying to the live system. For minimizing downtime in RAC and Grid Infrastructure (GI) setups, leverage rolling upgrades, Data Guard failover strategies, or OPatchauto for streamlined patch application.
Continuously monitor resource usage, logs, and system behavior both before and after patching to detect anomalies or
performance shifts. A thoroughly tested rollback plan should be prepared in advance to mitigate risks in case of failure.
After patching, verify successful completion using the opatch lsinventory command to confirm applied patches and
system integrity.

100. How often should patching be done in an Oracle environment? How do you verify if a patch was successfully
applied?
Oracle follows a structured patching cycle with quarterly updates, including Release Updates (RUs) and Release Update Revisions (RURs), which should ideally be applied every quarter to keep systems secure and optimized. Security patches known as Critical Patch Updates (CPUs) must be implemented within a month of release to mitigate vulnerabilities. In
cases where urgent fixes are necessary, interim patches may need to be deployed immediately. Patch scheduling should
align with platform and application maintenance windows to avoid unexpected downtime and disruptions. It is essential
to keep up with patch releases to prevent technical debt accumulation, maintain vendor support, and ensure
compatibility with the latest features and security enhancements. Monitoring My Oracle Support (MOS) for security alerts
and bug fixes allows administrators to stay updated on critical advisories and necessary patch actions. Before deploying
patches in production, organizations should evaluate business risks and test patches in staging environments such as
development or test servers. This reduces the likelihood of failures during live patching and ensures stability. Patch
reviews and deployments should be scheduled during designated change windows to minimize operational disruptions.
Coverage must include all environments, including standby databases and development instances, ensuring consistency
across the infrastructure.

After applying patches, administrators must validate installations using opatch lsinventory in
both Oracle and Grid Homes to confirm patch IDs and associated changes. Patch log files should be reviewed for
successful completion and absence of errors. In Real Application Clusters (RAC) and Grid Infrastructure (GI) setups, verification of Cluster Ready Services (CRS) resources can be done using crsctl stat res -t, and database listener status
should be checked with srvctl commands. Executing test transactions and queries ensures database operability and
prevents unexpected behavior post-patching. Performance metrics, alert logs, and listener sessions must be monitored
for anomalies that could indicate patch-related issues. To maintain an accurate record of patching activities,
administrators should run opatch lspatches for a summary of applied patches and compare inventory details against
baseline pre-patch logs. Documenting verification steps in post-patch change control records ensures transparency and
compliance with internal IT policies. If inconsistencies arise or patches fail to apply correctly, remediation plans should
be enacted promptly to restore system integrity and prevent disruptions. Properly managed patching cycles enhance
database performance, security, and stability while reducing risks associated with unpatched vulnerabilities and
outdated software dependencies.
