Oracle Data Pump
Data Pump: introduced in Oracle 10g
- Oracle Data Pump is a server-based technology designed for high-speed data and metadata movement between Oracle databases.
- It supports exporting data into a dump file set using the expdp utility and importing it back with impdp.
- Data Pump enables precise control over data transfers, offering features like parallel execution, filtering, and the ability to manage complex migration tasks, such as moving data between different database versions or converting databases for multitenant environments.
- It is ideal for scenarios like database upgrades, migrations, and backup management, ensuring efficient and secure data handling.
Differences between exp/imp and expdp/impdp:
- Data Pump reads and writes files on the database server, always through Oracle directory objects. Traditional export/import (exp/imp) writes files wherever the client runs and does not use directory objects.
- exp/imp store database metadata as DDL statements in the dump file; Data Pump stores metadata in XML format.
- Data Pump does not support sequential media such as tapes; traditional export does.
- expdp/impdp can use parallel execution for improved performance, whereas exp/imp run as a single stream.
- Data Pump import recreates the user, whereas the old imp utility required the DBA to create the user before importing.
- Data Pump has a powerful interactive command-line mode that lets you monitor and control export and import operations, and it allows you to disconnect from and reconnect to a running job.
- Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reattach to monitor its progress. You can also pause and resume Data Pump jobs on demand.
- Data Pump gives you the ability to pass data between two databases over a network (via a database link), without creating a dump file on disk.
Oracle Data Pump Architecture:
Background Processes and Memory Components in Data Pump Operations:
During Data Pump operations, several background processes and logical memory components play crucial roles.
Background Processes:
Client Process: The expdp and impdp sessions invoked by users are the client processes
Shadow Process: Initiated when a user starts a Data Pump job, handling the user’s connection throughout the operation.
Master (Control) Process (DMnn): Coordinates the entire Data Pump operation, managing job execution and logging.
Worker Processes (DWnn): Perform the actual data transfer tasks, such as reading data during export or writing data during import. These processes can operate in parallel to enhance performance.
Parallel Execution Processes (PXnn): Activated when the PARALLEL parameter is used, allowing the job to be split across multiple processes, significantly speeding up data transfer.
Memory and Job-Tracking Structures:
Job (Master) Table: Created in the schema of the user who starts the job (with a name such as SYS_EXPORT_SCHEMA_01), this table tracks the status of the Data Pump job, records its progress, and stores metadata for processed objects. It is also what makes restarting interrupted jobs possible.
Buffer Memory: Utilized to read data from or write data to dump files efficiently, with larger buffers improving performance during large data transfers.
Streams Pool: Specifically used when Data Pump operations involve Advanced Queuing (AQ) or network imports/exports, allowing memory allocation for processing queue tables or changes during such tasks.
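For example, while a schema export is running you can see the job's control table in the exporting user's schema; the SYS_EXPORT% naming pattern below is Data Pump's usual convention and is shown for illustration:
select table_name from user_tables where table_name like 'SYS_EXPORT%';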
Required Privileges for Data Pump Operations:
Users must be granted the following roles to execute Data Pump operations:
DATAPUMP_EXP_FULL_DATABASE: Grants permission for export operations beyond the user’s schema, allowing monitoring of jobs initiated by other users.
DATAPUMP_IMP_FULL_DATABASE: Enables the user to perform import operations, including creating necessary objects in the target database.
A directory object is needed, with READ and WRITE permissions granted to users, to specify where Data Pump dump files are stored.
Important Note: Running Data Pump jobs using the SYS user is discouraged to maintain data security and integrity.
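A minimal sketch of the grants described above, assuming an existing application user named scott (any non-SYS account works; the directory object and its READ/WRITE grants are created in the pre-checks section below):
GRANT DATAPUMP_EXP_FULL_DATABASE TO scott;
GRANT DATAPUMP_IMP_FULL_DATABASE TO scott;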
Modes of Data Pump:
- Full Export Mode (the entire database is exported)
- Schema Mode (the default mode; specific schemas are exported)
- Table Mode (a specified set of tables and their dependent objects are exported)
- Tablespace Mode (the tables in the specified tablespaces are exported)
- Transportable Tablespace Mode (only the metadata for the tables and their dependent objects within a specified set of tablespaces is exported)
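As a quick illustration, each mode maps to a key expdp parameter; the user, directory, and file names below are placeholders, and for the transportable example the tablespace must first be made read-only:
expdp system/password directory=datapump dumpfile=full.dmp full=y
expdp system/password directory=datapump dumpfile=scott.dmp schemas=SCOTT
expdp system/password directory=datapump dumpfile=emp.dmp tables=SCOTT.EMP
expdp system/password directory=datapump dumpfile=users_ts.dmp tablespaces=USERS
expdp system/password directory=datapump dumpfile=tts.dmp transport_tablespaces=USERS transport_full_check=y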
Pre-Checks Before Export and Import:
Check Object Size:
SELECT SEGMENT_NAME, BYTES/1024/1024 AS SIZE_MB FROM DBA_SEGMENTS WHERE SEGMENT_NAME = 'YOUR_TABLE_NAME';
Verify Free Space in Filesystem
Ensure that the filesystem or disk where dump files will be created has enough free space.
Use OS commands like df -h (Linux) or dir (Windows) to verify available space.
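You can also ask Data Pump itself to estimate the dump size before exporting; a sketch, assuming a directory object named datapump and the SCOTT schema:
expdp system/password directory=datapump schemas=SCOTT estimate_only=y estimate=blocks logfile=estimate_scott.log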
Create a Logical Directory:
Define a database directory object that maps a database directory to a filesystem location where the dump files will be stored.
Example:
CREATE DIRECTORY dpump_dir AS '/path/to/directory';
GRANT READ, WRITE ON DIRECTORY dpump_dir TO your_user;
Verify User Privileges:
Ensure that the user performing the export/import has the necessary privileges, such as DATAPUMP_EXP_FULL_DATABASE or DATAPUMP_IMP_FULL_DATABASE.
If exporting specific tables or schemas, ensure required grants are in place.
Query to generate grant scripts:
SELECT 'GRANT ' || PRIVILEGE || ' ON ' || OWNER || '.' || TABLE_NAME || ' TO ' || GRANTEE || ';' FROM DBA_TAB_PRIVS WHERE OWNER = 'SOURCE_SCHEMA' AND TABLE_NAME = 'TABLE_NAME';
Check Archive Log Generation:
Monitor archive log generation during the export process, especially in high-transaction environments, to ensure sufficient space in the Fast Recovery Area (FRA).
Examples to check archive log generation (database level and table level):
Archive generation of the database on a daily basis:
select trunc(COMPLETION_TIME,'DD') Day, thread#,
       round(sum(BLOCKS*BLOCK_SIZE)/1024/1024/1024) GB,
       count(*) Archives_Generated
from v$archived_log
group by trunc(COMPLETION_TIME,'DD'), thread#
order by 1;
Archive generation of the database on an hourly basis:
alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS';
select trunc(COMPLETION_TIME,'HH') Hour, thread#,
       round(sum(BLOCKS*BLOCK_SIZE)/1024/1024/1024) GB,
       count(*) Archives
from v$archived_log
group by trunc(COMPLETION_TIME,'HH'), thread#
order by 1;
Block changes for a table (from AWR segment statistics):
select dhss.snap_id, dhss.ts#, dhss.obj#, do.object_name, dhss.db_block_changes_delta
from dba_hist_seg_stat dhss
     inner join dba_objects do on do.object_id = dhss.obj#
where do.object_type in ('TABLE','INDEX')
and   do.object_name = 'YOUR_TABLE_NAME';
Verify Tablespace Consistency
Ensure that the tablespace structure in the source and target databases is consistent, particularly if using the TRANSPORTABLE option.
Check Database Version Compatibility
Verify that the database versions of the source and target systems are compatible for export and import operations.
Use the following command to check the database version:
SELECT * FROM V$VERSION;
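If the target database is an older release, the dump file can be made compatible with it using the VERSION parameter; a sketch (the schema, file names, and release number are illustrative):
expdp system/password directory=datapump dumpfile=scott_v122.dmp logfile=scott_v122.log schemas=SCOTT version=12.2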
Check Tablespace Free Space in Destination
Ensure that there is enough free space in the tablespaces of the target environment to accommodate the imported data.
Example query:
set lines 200 pages 1000
col TABLESPACE_NAME for a20
col TOTAL_FREE_SPACE_MB for 99999999999
with TABLESPACE_V as (
  select x.tablespace_name as TBLSPC, round(sum(x.bytes/1048576)) as TOTAL_SPACE_MB
  from dba_data_files x
  group by x.tablespace_name
),
SEGMENT_V as (
  select y.tablespace_name as TBLSEG, round(sum(y.bytes/1048576)) as TOTAL_SIZE_OCCUPIED
  from dba_segments y
  group by y.tablespace_name
)
select TBLSPC,
       TOTAL_SPACE_MB,
       sum(TOTAL_SPACE_MB - TOTAL_SIZE_OCCUPIED) as TOTAL_FREE_SPACE_MB,
       round(100*((TOTAL_SPACE_MB - TOTAL_SIZE_OCCUPIED) / TOTAL_SPACE_MB)) as PCT_FREE
from TABLESPACE_V a
     inner join SEGMENT_V b on (a.TBLSPC = b.TBLSEG)
group by TBLSPC, TOTAL_SPACE_MB, round(100*((TOTAL_SPACE_MB - TOTAL_SIZE_OCCUPIED) / TOTAL_SPACE_MB))
order by 4 desc;
--------------- OR ------------------
set echo off feedback off verify off pages 75
col tablespace_name format a20 head 'Tablespace Name'
col total format 999,999,999,999 head 'Total(MB)'
col used format 999,999,999,999 head 'Used(MB)'
col free format 999,999,999,999 head 'Free(MB)'
col pct format 999 head 'PercentUsed'
select tbs.tablespace_name,
       tot.bytes/1024/1024 total,
       (tot.bytes/1024/1024) - (sum(nvl(fre.bytes,0))/1024/1024) used,
       sum(nvl(fre.bytes,0))/1024/1024 free,
       (1 - sum(nvl(fre.bytes,0))/tot.bytes)*100 pct
from   dba_free_space fre,
       (select tablespace_name, sum(bytes) bytes from dba_data_files group by tablespace_name) tot,
       dba_tablespaces tbs
where  tot.tablespace_name = tbs.tablespace_name
and    fre.tablespace_name(+) = tbs.tablespace_name
group by tbs.tablespace_name, tot.bytes
union
select tsh.tablespace_name,
       dft.bytes/1024/1024 total,
       sum(nvl(tsh.bytes_used,0))/1024/1024 used,
       sum(nvl(tsh.bytes_free,0))/1024/1024 free,
       (1 - sum(nvl(tsh.bytes_free,0))/dft.bytes)*100 pct
from   v$temp_space_header tsh,
       (select tablespace_name, sum(bytes) bytes from dba_temp_files group by tablespace_name) dft
where  dft.tablespace_name = tsh.tablespace_name(+)
group by tsh.tablespace_name, dft.bytes
order by 1;
Create Data Pump Backup Directory:
Create a directory at the OS level that Oracle will use for performing exports and imports:
mkdir -p /u02/dp_exp_dir
chown oracle:oinstall /u02/dp_exp_dir
chmod 755 /u02/dp_exp_dir
Create the directory object inside the database:
create directory datapump as '/u02/dp_exp_dir';
Grant permissions on directory
SQL> grant read,write on directory datapump to muthu;
View directory information
SQL> select * from dba_directories;
Export and Import on Database Level:
Check expdp/impdp Options:
expdp help=y
impdp help=y
Export full database:
Step 1: Run Export Command
expdp directory=datapump dumpfile=fullprod.dmp logfile=fullprod.log full=y
Import full Database:
Pre-check:
- Create missing tablespaces on target
- Make sure target tablespace has enough free space
- Drop all non-oracle schemas (done during refresh)
DROP USER <username> CASCADE;
Step 1: On source machine check tablespace name list:
select name from v$tablespace;
Step 2: On target machine check tablespace name list:
select name from v$tablespace;
Step 3: Run Import Command:
impdp directory=datapump dumpfile=fullprod.dmp logfile=imp_fullprod.log full=y
Export and Import on Schema Level:
Schema level export:
expdp directory=datapump dumpfile=scott_bkp.dmp logfile=scott_bkp.log schemas='SCOTT'
Import source schema objects into same schema on target:
impdp directory=datapump dumpfile=scott_bkp.dmp logfile=imp_schema.log remap_schema='SCOTT:SCOTT'
Import source schema objects into a different schema on target:
impdp directory=datapump dumpfile=scott_bkp.dmp logfile=imp_schema.log remap_schema='SCOTT:HR'
Export and Import on Table Level:
Export the Table:
expdp directory=datapump dumpfile=emp_bkp.dmp logfile=emp_bkp.log tables='SCOTT.EMP'
Import table where source and target schema are same:
impdp directory=datapump dumpfile=emp_bkp.dmp logfile=imp_emp.log tables='EMP'
Import table to another schema:
impdp directory=datapump dumpfile=emp_bkp.dmp logfile=imp_emp.log tables='EMP' remap_schema='SCOTT:HR'
Import a table under a different name on the target (remap_table):
impdp directory=datapump dumpfile=emp_bkp.dmp logfile=imp_emp.log tables='EMP' remap_table='SCOTT.EMP:HR.EMPLOYEE'
Import only the rows from an exported table, without loading any table definitions (the table must already exist on the target):
impdp directory=datapump dumpfile=emp_bkp.dmp logfile=imp_emp.log tables='EMP' content=DATA_ONLY
Import tables to another tablespace (only in datapump):
impdp directory=datapump dumpfile=emp_bkp.dmp logfile=imp_emp.log tables='EMP' remap_schema='SCOTT:HR' remap_tablespace='USERS:MYTBS'
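If the target table already exists, impdp's TABLE_EXISTS_ACTION parameter (SKIP, APPEND, TRUNCATE, or REPLACE) controls how it is handled; a sketch reusing the same dump file:
impdp directory=datapump dumpfile=emp_bkp.dmp logfile=imp_emp.log tables='EMP' table_exists_action=TRUNCATE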
Export and Import at the Row Level:
Row level export
expdp directory=datapump dumpfile=emprows_bkp.dmp logfile=emprows_bkp.log tables='SCOTT.EMP' query=\"where deptno=10\"
Import rows where source and target schema are same
impdp directory=datapump dumpfile=emprows_bkp.dmp logfile=imp_emprows.log tables='SCOTT.EMP'
CONTENT (Optional): Specifies whether to export only the data, only the metadata, or both.
Options:
ALL : (Default setting); exports both data and metadata.
DATA_ONLY : Exports only the table rows without the object definitions.
METADATA_ONLY: Exports only object definitions without the table rows.
Example:
expdp user/password DIRECTORY=dpump_dir1 DUMPFILE=export_metadata.dmp CONTENT=METADATA_ONLY SCHEMAS=hr
PARALLEL (Optional): Specifies the number of active worker processes for the job, allowing parallelism and a faster export.
Details: This parameter can significantly reduce the time for large exports by using multiple worker processes. The number of dump files specified should be at least equal to the PARALLEL value.
Note: Choosing the right PARALLEL value: start by setting PARALLEL equal to the number of CPU cores or I/O channels on your database server. For example, if your server has 4 CPU cores, use PARALLEL=4. Monitor system load and adjust PARALLEL if needed. Increasing it can boost speed, but too high a value can cause resource contention (CPU or I/O bottlenecks).
Example:
expdp user/password DIRECTORY=dpump_dir1 DUMPFILE=export_data%U.dmp SCHEMAS=hr PARALLEL=4
Dump files in one directory:
expdp user/password DIRECTORY=dpump_dir1 DUMPFILE=export_data01.dmp,export_data02.dmp,export_data03.dmp,export_data04.dmp,export_data05.dmp SCHEMAS=hr PARALLEL=4
Dump files in different directories:
expdp user/password DUMPFILE=dpump_dir1:export_data01.dmp,dpump_dir1:export_data02.dmp,dpump_dir1:export_data03.dmp,dpump_dir2:export_data04.dmp,dpump_dir2:export_data05.dmp SCHEMAS=hr PARALLEL=4
COMPRESSION (Optional; ALL and DATA_ONLY require the Advanced Compression option): Compresses the dump file to reduce its size.
Options:
ALL: Compresses both data and metadata.
DATA_ONLY: Compresses only table rows.
METADATA_ONLY: Compresses only object definitions.
Example:
expdp user/password DIRECTORY=dpump_dir1 DUMPFILE=export_data.dmp COMPRESSION=ALL SCHEMAS=hr
INCLUDE - EXCLUDE (Optional)
INCLUDE parameter specifies which object types or specific objects should be included in the export
It is often used when you only want to export certain types of objects, such as tables, indexes, or views, from a schema
EXCLUDE parameter specifies which object types or specific objects should be excluded from the export
It is helpful when you want to export everything except certain objects, like large tables or specific object types that you don't need in the export
Examples (on a Unix shell, the quotes and parentheses in these filters usually need escaping, or can be placed in a parameter file):
expdp user/password DIRECTORY=dpump_dir1 DUMPFILE=hr_tables.dmp SCHEMAS=hr INCLUDE=TABLE,INDEX
expdp user/password DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp SCHEMAS=hr INCLUDE=TABLE:"IN ('EMPLOYEES')"
FILTER and Data Selection (OPTIONAL)
QUERY: Applies a SQL WHERE clause to filter rows from specific tables during export:
Example: Export only rows from the employees table where department_id is 10:
expdp user/password DIRECTORY=dpump_dir1 DUMPFILE=emp_dept10.dmp TABLES=employees QUERY=employees:"WHERE department_id=10"
SAMPLE: Exports a percentage of rows from tables, effectively creating a smaller sample dataset for development or analysis without exporting the entire table.
Example: Export 10% of rows from the orders table:
expdp user/password DIRECTORY=dpump_dir1 DUMPFILE=orders_sample.dmp TABLES=orders SAMPLE=10
Combining SCHEMA with QUERY: Filters data within multiple tables across a schema using specific conditions.
Example: Export rows from all tables in the hr schema where created_date is in the last year (the clause is applied to every table in the export, so each exported table must have that column):
expdp user/password DIRECTORY=dpump_dir1 DUMPFILE=hr_recent.dmp SCHEMAS=hr QUERY="WHERE created_date > SYSDATE-365"
Using Parameter File (PAR FILE)
vi expdp_params.par
DUMPFILE=export_data.dmp
DIRECTORY=dpump_dir1
SCHEMAS=hr
LOGFILE=export_log.log
PARALLEL=4
Run:
expdp user/password@database PARFILE=expdp_params.par
Handling Large Exports with EXPDP:
PARALLEL Parameter: Runs multiple worker processes concurrently, improving export speed by allowing simultaneous reading and writing of data. Choose a value based on CPU and I/O capacity to avoid bottlenecks.
FILESIZE Parameter: Controls the maximum size of each dump file, splitting the export into smaller, manageable files. This helps with storage limitations and makes file transfer over the network easier.
Combining PARALLEL and FILESIZE: Allows high-speed exports and efficient file management. Specify at least one more dump file than the PARALLEL value to prevent contention between processes (a combined sketch follows this list).
ESTIMATE Parameter: Predicts the size of the export before execution using BLOCKS or STATISTICS methods. This helps ensure there is enough space available in the target directory.
ESTIMATE_ONLY Parameter: Runs the size estimation without performing the export, allowing DBAs to plan storage needs and avoid mid-export failures due to insufficient space.
COMPRESSION: Reduces the size of dump files by compressing data during export, useful in environments with limited storage space (requires Advanced Compression license).
Dynamic File Naming with %U: Automatically creates multiple dump files with unique names, simplifying the setup for large exports and providing each parallel process a separate file for efficient data writing.
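A combined sketch of the ideas above; the user, directory, schema, and sizes are illustrative, not tuned recommendations:
expdp system/password directory=datapump dumpfile=scott_%U.dmp logfile=scott.log schemas=SCOTT parallel=4 filesize=5GB
With FILESIZE=5GB, Data Pump switches to a new %U-numbered file whenever the current one reaches 5 GB, so the number of files produced is driven by the data volume rather than fixed in advance.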
Monitoring and Managing EXPDP Jobs:
Monitoring and managing expdp jobs effectively is crucial for ensuring that export operations run smoothly and for troubleshooting any issues that may arise during the process.
STATUS parameter: Displays the progress of the export job, providing a snapshot of the job status at regular intervals.
Example: Monitor job progress every 60 seconds:
expdp user/password DIRECTORY=dpump_dir1 DUMPFILE=export_data.dmp SCHEMAS=hr STATUS=60
Monitoring with Data Pump Views:
Views like DBA_DATAPUMP_JOBS and DBA_DATAPUMP_SESSIONS provide real-time information about active Data Pump jobs.
Example: Check the status of all active Data Pump jobs:
SELECT owner_name, job_name, operation, job_mode, state FROM dba_datapump_jobs;
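To map Data Pump sessions to regular database sessions (for example, to watch them in V$SESSION), the views can be joined on the session address; a sketch:
select s.sid, s.serial#, s.username, dj.owner_name, dj.job_name, dj.state
from dba_datapump_jobs dj
     join dba_datapump_sessions ds on ds.owner_name = dj.owner_name and ds.job_name = dj.job_name
     join v$session s on s.saddr = ds.saddr;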
Interactive Command-Line Mode for Managing Data Pump Jobs:
After starting an export job, you can attach to it interactively using the ATTACH parameter to manage the job.
Once attached, you can use commands like STOP_JOB, START_JOB, KILL_JOB, and STATUS interactively to control the job.
Example: Attach to an existing Data Pump job:
expdp user/password ATTACH=hr_export_job
Pause a running export job:
Export> STOP_JOB=IMMEDIATE
Resume the job later:
Export> START_JOB
Forcefully terminate a job:
Export> KILL_JOB
Data Pump Import Over Network:
When you move large tables or schemas between two Oracle databases, a Data Pump export can consume a great deal of disk space for the dump files. A good alternative is to combine Data Pump with a database link and move the data directly from the source database into the target, without writing a dump file to disk.
Step 1: Add source database TNS entry into tnsnames.ora of the target database
Step 2: On target, we need to create a database link using the TNS entry
Example: CREATE PUBLIC DATABASE LINK link_name CONNECT TO remote_user_name IDENTIFIED BY remote_user_password USING 'remote_service_name';
Step 3: Import source schema on target database via db link.
impdp directory=MY_DUMP_DIR LOGFILE=dblink_transfer.log network_link=SOURCE_DB remap_schema=scott:hr
To import multiple schemas, run the import as a user with DATAPUMP_IMP_FULL_DATABASE privileges:
impdp system/password directory=MY_DUMP_DIR LOGFILE=dblink_transfer.log network_link=SOURCE_DB schemas=IJS,scott,hr
Create a job for a daily full database export using a shell script and cron:
Create a script file
su - oracle
vi daily_export.sh
#!/bin/bash
export DATE=$(date +%m_%d_%y_%H_%M)
export ORACLE_SID=pearl
export ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/db_1
$ORACLE_HOME/bin/expdp username/password@sid directory=export_dir dumpfile=backup_$DATE.dmp logfile=backup_$DATE.log full=y
Give execute permission to the script file:
chmod u+x daily_export.sh
crontab -e
00 20 * * * /home/oracle/daily_export.sh
The entry above runs the export script daily at 20:00.
When you run a data pump export in the background and want to know the progress status:
SELECT sid, serial#, username, context, sofar, totalwork,
       ROUND(sofar/totalwork*100,2) "%_COMPLETE"
FROM v$session_longops
WHERE totalwork != 0 AND sofar <> totalwork;
Export / Import the Schema Using DBMS_DATAPUMP:
The DBMS_DATAPUMP package in Oracle allows programmatic management of Data Pump export and import operations through PL/SQL. It provides greater control and flexibility compared to the expdp/impdp command-line utilities.
SQL*Plus (PL/SQL) block for export (run with SET SERVEROUTPUT ON to see the final message):
DECLARE
job_handle NUMBER;
BEGIN
-- Open a Data Pump Export job
job_handle := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA', job_name => 'SCHEMA_EXPORT');
-- Add a dump file to the job
DBMS_DATAPUMP.ADD_FILE(handle => job_handle, filename => 'schema_export.dmp', directory => 'DP_DIR');
-- Add a log file
DBMS_DATAPUMP.ADD_FILE(handle => job_handle, filename => 'schema_export.log', directory => 'DP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
-- Specify the schema to export
DBMS_DATAPUMP.METADATA_FILTER(handle => job_handle, name => 'SCHEMA_EXPR', value => 'IN (''MY_SCHEMA'')');
-- Start the job
DBMS_DATAPUMP.START_JOB(handle => job_handle);
-- Detach from the job
DBMS_DATAPUMP.DETACH(handle => job_handle);
DBMS_OUTPUT.PUT_LINE('Export job started successfully.');
END;
/
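Instead of detaching, a session can block until the job finishes by attaching to it and calling DBMS_DATAPUMP.WAIT_FOR_JOB; a minimal sketch, assuming the SCHEMA_EXPORT job created above is still running:
DECLARE
  job_handle NUMBER;
  job_state  VARCHAR2(30);
BEGIN
  -- Attach to the running job created by the export block above
  job_handle := DBMS_DATAPUMP.ATTACH(job_name => 'SCHEMA_EXPORT', job_owner => USER);
  -- Block until the job completes, then report its final state
  DBMS_DATAPUMP.WAIT_FOR_JOB(handle => job_handle, job_state => job_state);
  DBMS_OUTPUT.PUT_LINE('Job finished with state: ' || job_state);
END;
/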
SQL*Plus (PL/SQL) block for import (run with SET SERVEROUTPUT ON to see the final message):
DECLARE
job_handle NUMBER;
BEGIN
-- Open a Data Pump Import job
job_handle := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'SCHEMA', job_name => 'SCHEMA_IMPORT');
-- Add a dump file to the job
DBMS_DATAPUMP.ADD_FILE(handle => job_handle, filename => 'schema_export.dmp', directory => 'DP_DIR');
-- Add a log file
DBMS_DATAPUMP.ADD_FILE(handle => job_handle, filename => 'schema_import.log', directory => 'DP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
-- Specify remapping options (optional)
DBMS_DATAPUMP.METADATA_REMAP(handle => job_handle, name => 'REMAP_SCHEMA', old_value => 'MY_SCHEMA', new_value => 'NEW_SCHEMA');
-- Start the job
DBMS_DATAPUMP.START_JOB(handle => job_handle);
-- Detach from the job
DBMS_DATAPUMP.DETACH(handle => job_handle);
DBMS_OUTPUT.PUT_LINE('Import job started successfully.');
END;
/
DBMS_METADATA:
The DBMS_METADATA package in Oracle is used to extract the DDL (Data Definition Language) statements for database objects. It supports a variety of object types such as tables, indexes, views, sequences, and more.
Table: SELECT DBMS_METADATA.GET_DDL('TABLE', 'EMP', 'HR') AS ddl FROM DUAL;
Index: SELECT DBMS_METADATA.GET_DDL('INDEX', 'EMP_IDX', 'HR') AS ddl FROM DUAL;
View: SELECT DBMS_METADATA.GET_DDL('VIEW', 'EMP_VIEW', 'HR') AS ddl FROM DUAL;
Sequence: SELECT DBMS_METADATA.GET_DDL('SEQUENCE', 'EMP_SEQ', 'HR') AS ddl FROM DUAL;
Procedure: SELECT DBMS_METADATA.GET_DDL('PROCEDURE', 'EMP_PROC', 'HR') AS ddl FROM DUAL;
Function: SELECT DBMS_METADATA.GET_DDL('FUNCTION', 'EMP_FUNC', 'HR') AS ddl FROM DUAL;
Package: SELECT DBMS_METADATA.GET_DDL('PACKAGE', 'EMP_PKG', 'HR') AS ddl FROM DUAL;
Exclude Storage and Tablespace Clauses from Generated DDL (session-level transform parameters):
BEGIN
DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'STORAGE', FALSE);
DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'TABLESPACE', FALSE);
END;
/
Schema-level extraction: DBMS_METADATA.GET_DDL works on individual named objects, so there is no single GET_DDL call that returns a whole schema. To extract all objects for a schema in one pass, use the DBMS_METADATA.OPEN('SCHEMA_EXPORT') interface with SET_FILTER and FETCH_CLOB, as in the sketch below, or loop over the dictionary views as in the examples that follow.
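A minimal sketch of the heterogeneous (schema export) interface; the schema name is illustrative, and DBMS_OUTPUT is used only for demonstration (each item's DDL is truncated to 4000 characters for display):
SET SERVEROUTPUT ON
DECLARE
  h   NUMBER;
  th  NUMBER;
  ddl CLOB;
BEGIN
  -- Open a heterogeneous metadata stream for one schema
  h := DBMS_METADATA.OPEN('SCHEMA_EXPORT');
  DBMS_METADATA.SET_FILTER(h, 'SCHEMA', 'HR');
  -- Ask for DDL output instead of the default XML
  th := DBMS_METADATA.ADD_TRANSFORM(h, 'DDL');
  -- Fetch each object's DDL until the stream is exhausted
  LOOP
    ddl := DBMS_METADATA.FETCH_CLOB(h);
    EXIT WHEN ddl IS NULL;
    DBMS_OUTPUT.PUT_LINE(SUBSTR(ddl, 1, 4000));
  END LOOP;
  DBMS_METADATA.CLOSE(h);
END;
/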
To list the object types present in a schema (a useful starting point when deciding which DDL to extract):
SELECT DISTINCT OBJECT_TYPE FROM DBA_OBJECTS WHERE OWNER = 'HR';
Example to Generate All Object DDLs for a Schema
Generate DDL for All Tables:
SELECT DBMS_METADATA.GET_DDL('TABLE', TABLE_NAME, 'HR') AS ddl FROM DBA_TABLES WHERE OWNER = 'HR';
Generate DDL for All Indexes:
SELECT DBMS_METADATA.GET_DDL('INDEX', INDEX_NAME, 'HR') AS ddl FROM DBA_INDEXES WHERE OWNER = 'HR';
Generate DDL for All Views:
SELECT DBMS_METADATA.GET_DDL('VIEW', VIEW_NAME, 'HR') AS ddl FROM DBA_VIEWS WHERE OWNER = 'HR';
Generate DDL for All Sequences:
SELECT DBMS_METADATA.GET_DDL('SEQUENCE', SEQUENCE_NAME, 'HR') AS ddl FROM DBA_SEQUENCES WHERE SEQUENCE_OWNER = 'HR';
Generate DDL for All Triggers:
SELECT DBMS_METADATA.GET_DDL('TRIGGER', TRIGGER_NAME, 'HR') AS ddl FROM DBA_TRIGGERS WHERE OWNER = 'HR';
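To capture the generated DDL into a script file from SQL*Plus, the CLOB output needs a large LONG setting and clean formatting; a sketch (the spool file name is illustrative):
set long 1000000 longchunksize 1000000 pagesize 0 linesize 300 trimspool on feedback off verify off
spool hr_tables_ddl.sql
SELECT DBMS_METADATA.GET_DDL('TABLE', TABLE_NAME, 'HR') FROM DBA_TABLES WHERE OWNER = 'HR';
spool off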