In this Document
APPLIES TO:

Oracle Database Cloud Schema Service - Version N/A and later
Oracle Database Exadata Cloud Machine - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Oracle Database Exadata Express Cloud Service - Version N/A and later
Oracle Database Backup Service - Version N/A and later
Information in this document applies to any platform.

PURPOSE

This document describes how to use the TRACE parameter with the Export Data Pump (expdp) and Import Data Pump (impdp) database utilities when diagnosing incorrect behavior and/or troubleshooting Data Pump errors.

SCOPE

The article is intended for users of the database utilities Export Data Pump (expdp) and Import Data Pump (impdp) who need to troubleshoot the execution of the jobs that are generated by these utilities. These database utilities were introduced with Oracle10g. The article gives detailed information on how to use the undocumented parameter TRACE.

DETAILS

1. Introduction.

In Oracle10g, we introduced the new database utilities Export Data Pump and Import Data Pump.

% expdp <LOGIN>/<PASSWORD> DIRECTORY=<DIRECTORY_NAME> DUMPFILE=expdp_f%U.dmp \
    LOGFILE=<LOG_NAME>.log FULL=y PARALLEL=2

% ps -ef | grep expdp
oracle  8874 8704  1 07:00 pts/2 00:00:03 expdp DIRECTORY=<DIRECTORY_NAME> ...

% ps -ef | grep <SID>
oracle  8875 8874  4 07:00 ?     00:00:11 oracle<SID> (DESCRIPTION=(LOCAL=YES) ...
oracle  8879    1  3 07:00 ?     00:00:08 ora_dm00_<SID>
oracle  8881    1 94 07:00 ?     00:04:17 ora_dw01_<SID>
oracle  8893    1  3 07:00 ?     00:00:09 ora_dw02_<SID>
...

-- Obtain Data Pump process info:
set lines 150 pages 100 numwidth 7
col program for a38
col username for a10
col spid for a7
select to_char(sysdate,'YYYY-MM-DD HH24:MI:SS') "DATE", s.program, s.sid,
       s.status, s.username, d.job_name, p.spid, s.serial#, p.pid
  from v$session s, v$process p, dba_datapump_sessions d
 where p.addr=s.paddr and s.saddr=d.saddr;

DATE                PROGRAM                                    SID STATUS
------------------- -------------------------------------- ------- --------
2007-10-19 07:01:03 ude@<HOST_NAME> (TNS V1-V3)                140 ACTIVE
2007-10-19 07:01:03 oracle@<HOST_NAME> (DM00)                  152 ACTIVE
2007-10-19 07:01:03 oracle@<HOST_NAME> (DW01)                  144 ACTIVE
2007-10-19 07:01:03 oracle@<HOST_NAME> (DW02)                  159 ACTIVE

USERNAME   JOB_NAME                       SPID    SERIAL#     PID
---------- ------------------------------ ------- ------- -------
<USER>     SYS_EXPORT_FULL_01             8875          8      18
<USER>     SYS_EXPORT_FULL_01             8879         21      21
<USER>     SYS_EXPORT_FULL_01             8881          7      22
<USER>     SYS_EXPORT_FULL_01             8893         26      23

The Data Pump processes will disappear when the Data Pump job completes or is (temporarily) stopped.

Data Pump is server based, not client based. This means that most Data Pump specific defects will be fixed on the server side (changes in packages in the source and target database) and not on the client side (Export or Import Data Pump client). If a Data Pump defect that occurs during an import job is fixed in a later patchset (e.g. 10.2.0.4.0) and the target database is still on the base release (10.2.0.1.0), then the defect will still occur when importing with a 10.2.0.4.0 Import Data Pump client into this 10.2.0.1.0 target database. The same applies to export jobs.
It is therefore recommended that both the source database and the target database have the latest patchset installed. For details, see also:

2. How to create a Data Pump trace file ? Parameter: TRACE

Tracing can be enabled by specifying a 7 digit hexadecimal mask in the TRACE parameter of Export Data Pump (expdp) or Import Data Pump (impdp). The first three digits enable tracing for a specific Data Pump component, while the last four digits are usually: 0300.

TRACE = 04A0300
-- or:
TRACE=4a0300

Some rules to remember when specifying a value for the TRACE parameter:

When using the TRACE parameter, an error may occur if the Data Pump job is run with a non-privileged user, e.g.:

% expdp <LOGIN>/<PASSWORD> DIRECTORY=<DIRECTORY_NAME> DUMPFILE=<DUMP_NAME>.dmp \
    LOGFILE=<LOG_NAME>.log TABLES=<TABLE_NAME> TRACE=480300

Export: Release 10.2.0.3.0 - Production on Friday, 19 October, 2007 13:46:33
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining Scoring Engine options
ORA-31631: privileges are required

To resolve this problem, ensure that the user that connects to the database with the Export Data Pump or Import Data Pump utility is a privileged user (i.e. a user who has the DBA role, or the EXP_FULL_DATABASE resp. IMP_FULL_DATABASE role), e.g.:

-- run this Data Pump job with TRACE as a privileged user:
% expdp <LOGIN_USER>/<PASSWORD> DIRECTORY=<DIRECTORY_NAME> DUMPFILE=<DUMP_NAME>.dmp \
    LOGFILE=<LOG_NAME>.log TABLES=<SCHEMA_NAME>.<TABLE_NAME> TRACE=480300

-- or:
-- make the login user a privileged user:

CONNECT / AS SYSDBA
GRANT exp_full_database TO <LOGIN_USER>;

% expdp <LOGIN_USER>/<PASSWORD> DIRECTORY=<DIRECTORY_NAME> DUMPFILE=<DUMP_NAME>.dmp \
    LOGFILE=<LOG_NAME>.log TABLES=<TABLE_NAME> TRACE=480300

For details, see also:

Also ensure that the init.ora/spfile initialization parameter MAX_DUMP_FILE_SIZE is large enough to capture all the trace information (or set it to unlimited, which is the default value), e.g.:

-- Ensure enough trace data can be written to the trace files:
CONNECT / as sysdba

SHOW PARAMETER max_dump

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_dump_file_size                   string      10M

ALTER SYSTEM SET max_dump_file_size = unlimited SCOPE = both;

SHOW PARAMETER max_dump

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_dump_file_size                   string      UNLIMITED

The majority of errors that occur during a Data Pump job can be diagnosed by creating a trace file for the Master Control Process (MCP) and the Worker Process(es) only, e.g.:

-- To run a Data Pump job with standard tracing:
-- This results in two trace files in BACKGROUND_DUMP_DEST:
--    Master Process trace file: <SID>_dm<number>_<process_id>.trc
--    Worker Process trace file: <SID>_dw<number>_<process_id>.trc

% expdp <LOGIN>/<PASSWORD> DIRECTORY=<DIRECTORY_NAME> DUMPFILE=<DUMP_NAME>.dmp \
    LOGFILE=<LOG_NAME>.log SCHEMAS=<SCHEMA_NAME> TRACE=480300

Each Data Pump component can be specified explicitly in order to obtain tracing details of that component:

-- Summary of Data Pump trace levels:
-- ==================================

  Trace   DM   DW  ORA  Lines
  level  trc  trc  trc  in
  (hex) file file file  trace  Purpose
------- ---- ---- ---- ------  -----------------------------------------------
  10300    x    x    x  SHDW:  To trace the Shadow process (API) (expdp/impdp)
  20300    x    x    x  KUPV:  To trace Fixed table
  40300    x    x    x  'div'  To trace Process services
  80300    x             KUPM:  To trace Master Control Process (MCP) (DM)
 100300    x    x        KUPF:  To trace File Manager
 200300    x    x    x  KUPC:  To trace Queue services
 400300         x        KUPW:  To trace Worker process(es) (DW)
 800300         x        KUPD:  To trace Data Package
1000300         x        META:  To trace Metadata Package
------- 'Bit OR'
1FF0300    x    x    x  'all'  To trace all components (full tracing)

Combinations of tracing Data Pump components are possible, e.g.:

-- Example of combination (last 4 digits are usually 0300):
  40300   to trace Process services
  80300   to trace Master Control Process (MCP)
 400300   to trace Worker process(es)
-------   'Bit OR'
 4C0300   to trace Process services and Master Control and Worker processes

In order to trace all Data Pump components, level 1FF0300 can be specified:

-- Run a Data Pump job with full tracing:
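Combined levels are obtained with a bitwise OR of the component masks, which can be checked from any shell; a small sketch using the component values from the table above:

```shell
# Combine Data Pump component trace masks (values from the table above).
process_services=0x40300   # 'div'  Process services
master_control=0x80300     # KUPM:  Master Control Process (MCP)
worker=0x400300            # KUPW:  Worker process(es)

# A bitwise OR of the masks yields the combined TRACE value.
printf 'TRACE=%X\n' $(( process_services | master_control | worker ))
# prints: TRACE=4C0300
```

OR-ing all nine component masks in the same way yields 1FF0300, the full-tracing level.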
-- This results in two trace files in BACKGROUND_DUMP_DEST:
--    Master Process trace file: <SID>_dm<number>_<process_id>.trc
--    Worker Process trace file: <SID>_dw<number>_<process_id>.trc
-- And one trace file in USER_DUMP_DEST:
--    Shadow Process trace file: <SID>_ora_<process_id>.trc

% impdp <LOGIN>/<PASSWORD> DIRECTORY=<DIRECTORY_NAME> DUMPFILE=<DUMP_NAME>.dmp \
    LOGFILE=<LOG_NAME>.log FULL=y TRACE=1ff0300

Note:

10000100 - basic tracing
10000B00 - basic tracing with timing
10000F00 - extended tracing with timing

DPEX lines in the DW trace file will look like:

Example with trace=10000B00, to obtain the callout tracing and the timing information:
*** 2017-02-03 14:58:28.599

Example with trace=10000100, to obtain the callout tracing only:

*** 2017-02-03 14:38:53.775

If there are issues with the calls as seen in the trace files, individual tags can be disabled by using EXCLUDE=<TAG_NAME> on the expdp command line in full mode. The TAG_NAMEs in the example would be MATVW, RULE and AQ.

3. How to start tracing the Data Pump job ?

Depending on how the Export or Import Data Pump job was started, there are several ways to activate tracing of the Data Pump processes.

3.1. Use the TRACE parameter upon the start of the job.

% expdp <LOGIN>/<PASSWORD> DIRECTORY=<DIRECTORY_NAME> DUMPFILE=<DUMP_NAME>.dmp \
LOGFILE=<LOG_NAME>.log TABLES=<SCHEMA_NAME>.<TABLE_NAME> TRACE=480300
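Once the job above is running, the Master and Worker trace files can be picked out of the dump destination by their name patterns (<SID>_dm<number>_<process_id>.trc and <SID>_dw<number>_<process_id>.trc). A minimal sketch, using a scratch directory with sample file names as a stand-in for a real BACKGROUND_DUMP_DEST ('ORCL' is an example SID):

```shell
# Filter Data Pump Master (dm) and Worker (dw) trace files from a dump
# destination. The scratch directory and file names below are examples
# standing in for a real BACKGROUND_DUMP_DEST.
dir=$(mktemp -d)
touch "$dir/ORCL_dm00_8879.trc" "$dir/ORCL_dw01_8881.trc" "$dir/ORCL_ora_8875.trc"

# Keep only the Master/Worker traces (the _ora_ file is a Shadow process trace).
ls "$dir" | grep -E '_(dm|dw)[0-9]+_'
```

On a real system, point the listing at the actual BACKGROUND_DUMP_DEST (or, on 11g and later, at the trace directory under DIAGNOSTIC_DEST).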
3.2. Use the TRACE parameter when re-attaching to a (stopped) job.

% expdp <LOGIN>/<PASSWORD> DIRECTORY=<DIRECTORY_NAME> DUMPFILE=<DUMP_NAME>.dmp \
    LOGFILE=<LOG_NAME>.log FULL=Y

Export: Release 10.2.0.3.0 - Production on Thursday, 18 October, 2007 17:11:08
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining Scoring Engine options
FLASHBACK automatically enabled to preserve database integrity.
Starting "<LOGIN_SCHEMA>"."SYS_EXPORT_FULL_01": <LOGIN>/******** DIRECTORY=<DIRECTORY_NAME> DUMPFILE=<DUMP_NAME>.dmp LOGFILE=<LOG_NAME>.log FULL=y
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA

-- press Control-C to go to the Interactive Command mode,
-- and temporarily stop the job with the STOP_JOB command:

Export> stop
Are you sure you wish to stop this job ([yes]/no): yes

-- use the system generated job name SYS_EXPORT_FULL_01 to re-attach to the job
-- and specify the TRACE parameter with a tracing level:

% expdp <LOGIN>/<PASSWORD> ATTACH=sys_export_full_01 TRACE=480300

Export: Release 10.2.0.3.0 - Production on Thursday, 18 October, 2007 17:23:48
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining Scoring Engine options

Job: SYS_EXPORT_FULL_01
  Owner: <LOGIN_SCHEMA>
  Operation: EXPORT
  Creator Privs: FALSE
  GUID: F4E6BF997DFA46378D543F998E82653E
  Start Time: Thursday, 18 October, 2007 17:23:49
  Mode: FULL
  Instance: <INSTANCE_NAME>
  Max Parallelism: 1
  EXPORT Job Parameters:
  Parameter Name      Parameter Value:
     CLIENT_COMMAND   <LOGIN>/******** DIRECTORY=<DIRECTORY_NAME> DUMPFILE=<DUMP_NAME>.dmp LOGFILE=<LOG_NAME>.log FULL=y
  State: IDLING
  Bytes Processed: 0
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: <DIRECTORY_PATH>/<DUMP_NAME>.dmp
    bytes written: 4,096

Worker 1 Status:
  State: UNDEFINED

-- restart the job and change back from Interactive Command mode to Logging mode
-- with CONTINUE_CLIENT (note that tracing with level 480300 is now active):

Export> cont
Restarting "<LOGIN_SCHEMA>"."SYS_EXPORT_FULL_01": <LOGIN>/******** DIRECTORY=<DIRECTORY_NAME> DUMPFILE=<DUMP_NAME>.dmp LOGFILE=<LOG_NAME>.log FULL=y
Processing object type DATABASE_EXPORT/TABLESPACE
Processing object type DATABASE_EXPORT/PROFILE
...
3.3. Use a database event in the init.ora/spfile initialization parameter file.

Example when using the init.ora initialization parameter file:

- add the following line to the init.ora parameter file:
EVENT="39089 trace name context forever, level 0x300"

- Restart the database.
- Start the Export Data Pump or Import Data Pump job.

Example when using the spfile initialization parameter file:

-- when using spfile parameter file:
CONNECT / as sysdba

SHOW PARAMETER event

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
event                                string

ALTER SYSTEM SET EVENTS = '39089 trace name context forever, level 0x300' SCOPE = spfile;
SHUTDOWN immediate
STARTUP

SHOW PARAMETER event

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
event                                string      39089 trace name context forev
                                                 er, level 0x300

- Start the Export Data Pump or Import Data Pump job.

-- to remove the event(s) again:

ALTER SYSTEM RESET EVENT SCOPE = spfile SID='*';
SHUTDOWN immediate
STARTUP
-- Enable event
ALTER SYSTEM SET EVENTS = '39089 trace name context forever, level 0x300';

-- Disable event
ALTER SYSTEM SET EVENTS = '39089 trace name context off';

4. How are Data Pump trace files named, and where to find them ?

Data Pump trace files are written to the directories specified by the init.ora/spfile initialization parameters BACKGROUND_DUMP_DEST and USER_DUMP_DEST.

4.1. Data Pump Master Control Process (MCP) trace file.
4.2. Data Pump Worker Process trace file.
4.3. Data Pump Shadow Process trace file.

-- determine location of the trace files on disk (Oracle10g):
SHOW PARAMETER dump

NAME                                 TYPE        VALUE
------------------------------------ ----------- -----------------------------
background_dump_dest                 string      <background_dump_dest_location>
user_dump_dest                       string      <user_dump_dest_location>
...

-- determine location of the trace files on disk (Oracle11g):

SHOW PARAMETER diag

NAME                                 TYPE        VALUE
------------------------------------ ----------- -----------------------------
diagnostic_dest                      string      <diagnostic_dest_location>

Note that in Oracle10g the default location for the trace files is: $ORACLE_HOME/rdbms/log

5. How to get a detailed status report of a Data Pump job ? Parameter: STATUS

If a Data Pump job is started, a detailed status report of the job can be obtained with the STATUS parameter in Interactive Command mode.

% expdp <LOGIN>/<PASSWORD> DIRECTORY=<DIRECTORY_NAME> DUMPFILE=expdp_f%U.dmp \
    LOGFILE=<LOG_NAME>.log FILESIZE=2g FULL=y

Export: Release 10.2.0.3.0 - 64bit Production on Friday, 19 October, 2007 14:05:33
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Starting "<LOGIN_SCHEMA>"."SYS_EXPORT_FULL_01": <LOGIN>/******** DIRECTORY=<DIRECTORY_NAME> DUMPFILE=expdp_f%U.dmp LOGFILE=<LOG_NAME>.log FILESIZE=2g FULL=y
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA

-- press Control-C to go to the Interactive Command mode,
-- and get a STATUS report of the job:

Export> status

Job: SYS_EXPORT_FULL_01
  Operation: EXPORT
  Mode: FULL
  State: EXECUTING
  Bytes Processed: 0
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: <DIRECTORY_PATH>/expdp_f01.dmp
    bytes written: 4,096
  Dump File: <DIRECTORY_PATH>/expdp_f%u.dmp

Worker 1 Status:
  State: EXECUTING
  Object Schema: <SCHEMA_NAME>
  Object Name: <TABLE_NAME>
  Object Type: DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
  Completed Objects: 959
  Worker Parallelism: 1

-- To get a detailed STATUS report every minute while in Logging mode:

Export> stat=60
Export> cont
Total estimation using BLOCKS method: 8.437 GB
Processing object type DATABASE_EXPORT/TABLESPACE
...

Note that the status information is written only to your standard output device, not to the Data Pump log file.

6. How to get timing details on processed objects ? Parameter: METRICS

With the undocumented parameter METRICS, additional information can be obtained about the number of objects that were processed and the time it took to process them. Objects in this context are, for example, the exported system GRANT privileges, the imported tablespace quota GRANT statements, and the exported CREATE TABLE statements. Note that the METRICS parameter does not create any additional trace files.
Instead, the extra details are logged in the logfile of the Export Data Pump or Import Data Pump job.

% expdp <LOGIN>/<PASSWORD> DIRECTORY=<DIRECTORY_NAME> DUMPFILE=<DUMP_NAME>.dmp \
    LOGFILE=expdp_s.log SCHEMAS=<SCHEMA_NAME> METRICS=y

Export: Release 10.2.0.3.0 - Production on Thursday, 18 October, 2007 17:05:53
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining Scoring Engine options
FLASHBACK automatically enabled to preserve database integrity.
Starting "<LOGIN_SCHEMA>"."SYS_EXPORT_SCHEMA_01": <LOGIN>/******** DIRECTORY=<DIRECTORY_NAME> DUMPFILE=<DUMP_NAME>.dmp LOGFILE=<LOG_NAME>.log SCHEMAS=<SCHEMA_NAME> METRICS=y
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/USER
Completed 1 USER objects in 0 seconds
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Completed 2 SYSTEM_GRANT objects in 0 seconds
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Completed 2 ROLE_GRANT objects in 0 seconds
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Completed 1 DEFAULT_ROLE objects in 0 seconds
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Completed 1 TABLESPACE_QUOTA objects in 2 seconds
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Completed 1 PROCACT_SCHEMA objects in 5 seconds
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Completed 4 TABLE objects in 0 seconds
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Completed 2 INDEX objects in 1 seconds
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Completed 2 CONSTRAINT objects in 1 seconds
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Completed 1 REF_CONSTRAINT objects in 1 seconds
. . exported "<SCHEMA_NAME>"."<TABLE_NAME1>"    5.656 KB    4 rows
. . exported "<SCHEMA_NAME>"."<TABLE_NAME2>"    7.820 KB   14 rows
. . exported "<SCHEMA_NAME>"."<TABLE_NAME3>"    5.585 KB    5 rows
. .
exported "<SCHEMA_NAME>"."<TABLE_NAME4>"        0 KB    0 rows
Master table "<LOGIN_SCHEMA>"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  <DIRECTORY_PATH>/<DUMP_NAME>.dmp
Job "<LOGIN_SCHEMA>"."SYS_EXPORT_SCHEMA_01" successfully completed at 17:06:14

7. How to get SQL trace files of the Data Pump processes ?

For troubleshooting specific situations, it may be required to create a SQL trace file for an Export Data Pump or Import Data Pump job. These SQL trace files can be created by setting Event 10046 for a specific process (usually the Worker process). Note that these SQL trace files can become very large, so ensure that there is enough free space in the directory that is specified by the init.ora/spfile initialization parameter BACKGROUND_DUMP_DEST.

event 10046, level  1 = enable standard SQL_TRACE functionality
event 10046, level  4 = as level 1, plus trace the BIND values
event 10046, level  8 = as level 1, plus trace the WAITs
event 10046, level 12 = as level 1, plus trace the BIND values and the WAITs

Remarks:
When creating a level 8 or 12 SQL trace file, it is required that the init.ora/spfile initialization parameter TIMED_STATISTICS is set to TRUE before the event is set and before the Data Pump job is started. The performance impact of setting this parameter temporarily to TRUE is minimal. SQL trace files that were created with level 8 or 12 are especially useful for investigating performance problems.

-- For Event 10046, level 8 and 12: ensure we gather time related statistics:
CONNECT / as sysdba

SHOW PARAMETER timed_statistics

NAME                              TYPE        VALUE
--------------------------------- ----------- ---------------------------
timed_statistics                  string      FALSE

ALTER SYSTEM SET timed_statistics = TRUE SCOPE = memory;

-- Now set the event and start the Data Pump job

-- To set the value back to the default:
ALTER SYSTEM SET timed_statistics = FALSE SCOPE = memory;
-- Trace Worker process (400300) with standard SQL_TRACE functionality (1):
% expdp <LOGIN>/<PASSWORD> DIRECTORY=<DIRECTORY_NAME> DUMPFILE=<DUMP_NAME>.dmp \
    LOGFILE=<LOG_NAME>.log TABLES=<SCHEMA_NAME>.<TABLE_NAME> TRACE=400301

Note that this level of tracing is usually not sufficient for tracing Data Pump when an error occurs or when there is an issue with Data Pump performance. For tracing Data Pump when an error occurs, use level 4, and when there is an issue with Data Pump performance, use level 12 (see the sections below).
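The TRACE value 400301 in the example above appears to be the Worker component mask (400300) with the SQL_TRACE level OR-ed into the low digits; a quick check of the arithmetic from a shell (an illustrative sketch, assuming the mask layout described in section 2):

```shell
# OR a 10046-style SQL_TRACE level into a Data Pump component trace mask.
worker=0x400300            # KUPW: Worker process(es)
sql_trace_level=1          # 1 = standard SQL_TRACE; 4 = binds; 12 = binds + waits

printf 'TRACE=%X\n' $(( worker | sql_trace_level ))
# prints: TRACE=400301
```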
- Start the Data Pump job, e.g.:
% expdp <LOGIN>/<PASSWORD> DIRECTORY=<DIRECTORY_NAME> DUMPFILE=expdp_f%U.dmp \
    LOGFILE=<LOG_NAME>.log FILESIZE=2G FULL=y

-- In SQL*Plus, obtain Data Pump process info:

CONNECT / as sysdba
set lines 150 pages 100 numwidth 7
col program for a38
col username for a10
col spid for a7
select to_char(sysdate,'YYYY-MM-DD HH24:MI:SS') "DATE", s.program, s.sid,
       s.status, s.username, d.job_name, p.spid, s.serial#, p.pid
  from v$session s, v$process p, dba_datapump_sessions d
 where p.addr=s.paddr and s.saddr=d.saddr;

DATE                PROGRAM                                    SID STATUS
------------------- -------------------------------------- ------- --------
2007-10-19 08:58:41 ude@<HOST_NAME> (TNS V1-V3)                158 ACTIVE
2007-10-19 08:58:41 oracle@<HOST_NAME> (DM00)                  143 ACTIVE
2007-10-19 08:58:41 oracle@<HOST_NAME> (DW01)                  150 ACTIVE

USERNAME   JOB_NAME                       SPID    SERIAL#     PID
---------- ------------------------------ ------- ------- -------
<USER>     SYS_EXPORT_FULL_01             17288        29      18
<USER>     SYS_EXPORT_FULL_01             17292        50      22
<USER>     SYS_EXPORT_FULL_01             17294        17      23

In the example output above we see that the Data Pump Master process (DM00) has SID: 143 and serial#: 50, and the Data Pump Worker process (DW01) has SID: 150 and serial#: 17. These details can be used to activate SQL tracing in SQL*Plus with DBMS_SYSTEM.SET_EV, e.g.:

-- In SQL*Plus, activate SQL tracing with DBMS_SYSTEM and SID/SERIAL#:
-- Syntax: DBMS_SYSTEM.SET_EV([SID],[SERIAL#],[EVENT],[LEVEL],'')

-- Example to SQL_TRACE the Worker process with level 4 (Bind values):
execute sys.dbms_system.set_ev(150,17,10046,4,'');
-- and stop tracing:
execute sys.dbms_system.set_ev(150,17,10046,0,'');

-- Example to SQL_TRACE the Master Control process with level 8 (Waits):
execute sys.dbms_system.set_ev(143,50,10046,8,'');
-- and stop tracing:
execute sys.dbms_system.set_ev(143,50,10046,0,'');

The example output of the query above also shows that the Data Pump Master process (DM00) has OS process id: 17292 and the Data Pump Worker process (DW01) has OS process id: 17294. With this information, it is also possible to use 'oradebug' in SQL*Plus to activate SQL tracing for those processes, e.g.:

-- In SQL*Plus, activate SQL tracing with ORADEBUG and the SPID:
-- Example to SQL_TRACE the Worker process with level 4 (Bind values):
oradebug setospid 17294
oradebug unlimit
oradebug event 10046 trace name context forever, level 4
oradebug tracefile_name

-- Example to SQL_TRACE the Master Control process with level 8 (Waits):
oradebug setospid 17292
oradebug unlimit
oradebug event 10046 trace name context forever, level 8
oradebug tracefile_name

-- To stop the tracing:
oradebug event 10046 trace name context off

Either DBMS_SYSTEM.SET_EV or 'oradebug' can be used to create a Data Pump trace file.
-- Activate SQL tracing database wide.
-- Be careful: all processes will be traced!
--
-- Never do this on production outside a maintenance window.
-- Once issued on a busy production system, you may not be able to stop it in time.
-- Watch out for the trace directories filling up.
--
CONNECT / as sysdba
ALTER SYSTEM SET EVENTS '10046 trace name context forever, level 4';

- Start the Export Data Pump or Import Data Pump job, e.g.:

% expdp <LOGIN>/<PASSWORD> DIRECTORY=<DIRECTORY_NAME> DUMPFILE=<DUMP_NAME>.dmp \
    LOGFILE=<LOG_NAME>.log TABLES=<SCHEMA_NAME>.<TABLE_NAME>

-- Unset the event immediately after the Data Pump job ends:
ALTER SYSTEM SET EVENTS '10046 trace name context off';

Be careful though: the steps above will result in SQL tracing of all processes, so only use this method if no other database activity takes place (or hardly any other activity), and when the Data Pump job ends relatively quickly.
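A quick tally of the wait events recorded in a level 8 or 12 trace can point at the dominant bottleneck before a full tkprof run; a minimal sketch (the WAIT lines below are an illustrative stand-in for a real Worker trace file):

```shell
# Tally the wait events recorded in a 10046 level 8/12 SQL trace.
# The here-document stands in for a real <SID>_dw01_<spid>.trc file;
# on a real system, feed the trace file to grep instead.
grep -o "nam='[^']*'" <<'EOF' | sort | uniq -c | sort -rn
WAIT #3: nam='db file sequential read' ela= 120 p1=4 p2=9217 p3=1
WAIT #3: nam='db file sequential read' ela= 95 p1=4 p2=9345 p3=1
WAIT #5: nam='direct path write' ela= 30 p1=201 p2=12 p3=1
EOF
```

The output lists each wait event name with its occurrence count, most frequent first.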
-- create standard tkprof output files for Data Pump Master and Worker SQL traces:
% cd /oracle/admin/ORCL/BDUMP
% tkprof <SID>_dm00_17292.trc tkprof_<SID>_dm00_17292.out waits=y sort=exeela
% tkprof <SID>_dw01_17294.trc tkprof_<SID>_dw01_17294.out waits=y sort=exeela

For details about Event 10046 and tkprof, see also:

8. How to get header details of Export Data Pump dumpfiles ?

Every Export Data Pump dumpfile starts with a header (usually 4 KB in size) which contains details about the dumpfile. Starting with Oracle10g Release 2 (10.2.0.1.0), this header information can be extracted from the dumpfile by calling the procedure DBMS_DATAPUMP.GET_DUMPFILE_INFO. Header information can be extracted from any export dumpfile, even from dumpfiles created with the classic export client. Example output of a procedure that extracts information from a dumpfile header:

----------------------------------------------------------------------------
...File Version....: 2.1 (Oracle11g Release 1: 11.1.0.x)
...Master Present..: 1 (Yes)
...GUID............: AE9D4A8A85C6444F813600C00199745A
...File Number.....: 1
...Characterset ID.: 46 (WE8ISO8859P15)
...Creation Date...: Wed Mar 19 16:06:45 2008
...Flags...........: 2
...Job Name........: "<LOGIN_SCHEMA>"."SYS_EXPORT_SCHEMA_01"
...Platform........: x86_64/Linux 2.4.xx
...Instance........: <INSTANCE_NAME>
...Language........: WE8ISO8859P15
...Block size......: 4096
...Metadata Compres: 1 (Yes)
...Data Compressed.: 0 (No)
...Metadata Encrypt: 0 (No)
...Data Encrypted..: 0 (No)
...Master Piece Cnt: 1
...Master Piece Num: 1
...Job Version.....: 11.01.00.00.00
...Max Items Code..: 20
----------------------------------------------------------------------------

For more details and an example code of procedure SHOW_DUMPFILE_INFO, see:

Besides the DBMS_DATAPUMP.GET_DUMPFILE_INFO procedure, it is also possible to start an Import Data Pump job with TRACE=100300 in order to create a trace file with the dumpfile header details. The dumpfile header details are written to the Data Pump Master trace file: <SID>_dm<number>_<process_id>.trc, which can be found in the directory specified by the init.ora/spfile parameter BACKGROUND_DUMP_DEST.

-- create a SQL file with TRACE parameter value 100300 (trace file layer);
-- although this impdp job will give an error (ORA-39166: Object ... was not found)
-- a trace file will be written with the file header details we are interested in:
-- On Windows, place all impdp parameters on one single line:

D:\DataPump> impdp <LOGIN>/<PASSWORD> DIRECTORY=<DIRECTORY_NAME> DUMPFILE=<DUMP_NAME>.dmp NOLOGFILE=y SQLFILE=<SQLFILE_NAME>.sql TABLES=notexist TRACE=100300

Example output of trace file: <SID>_dm00_1696.trc
...
KUPF: 17:14:23.345: newImpFile: EXAMINE_DUMP_FILE
KUPF: 17:14:23.355: ......DB Version = 10.02.00.03.00
KUPF: 17:14:23.355: File Version Str = 1.1
KUPF: 17:14:23.355: File Version Num = 257
KUPF: 17:14:23.355: Version CapBits1 = 32775
KUPF: 17:14:23.355: ......Has Master = 1
KUPF: 17:14:23.355: ........Job Guid = 76DC6D8BC4A6479EADECB81E71FAEF93
KUPF: 17:14:23.355: Master Table Pos = 19
KUPF: 17:14:23.355: Master Table Len = 138856
KUPF: 17:14:23.375: .....File Number = 1
KUPF: 17:14:23.385: ......Charset ID = 46
KUPF: 17:14:23.385: ...Creation date = Thu Oct 18 16:51:36 2007
KUPF: 17:14:23.385: ...........Flags = 0
KUPF: 17:14:23.385: ......Media Type = 0
KUPF: 17:14:23.385: ........Job Name = "<LOGIN_SCHEMA>"."SYS_EXPORT_SCHEMA_01"
KUPF: 17:14:23.395: ........Platform = IBMPC/WIN_NT-8.1.0
KUPF: 17:14:23.395: ........Language = WE8ISO8859P15
KUPF: 17:14:23.395: .......Blocksize = 4096
KUPF: 17:14:23.405: newImpFile: file; <PATH>\<DUMP_NAME>.dmp, FID; 1
...

For more details and an example, see:

9. How to get Data Definition Language (DDL) statements ? Parameter: SQLFILE

With the Import Data Pump parameter SQLFILE, you can specify the name of a file into which all of the SQL DDL is written that Import would have executed, based on other parameters. The SQL is not actually executed, and the target system remains unchanged. The file is written to the directory object specified in the DIRECTORY parameter, unless another directory object is explicitly specified here. Any existing file that has a name matching the one specified with this parameter is overwritten.

-- create a SQL file with DDL statements:
% impdp <LOGIN>/<PASSWORD> DIRECTORY=<DIRECTORY_NAME> DUMPFILE=<DUMP_NAME>.dmp \
    NOLOGFILE=y SQLFILE=<SQLFILE_NAME>.sql FULL=y

Example output of sqlfile: <SQLFILE_NAME>.sql

-- CONNECT SYSTEM
-- new object type path is: SCHEMA_EXPORT/USER
CREATE USER "<USER>" IDENTIFIED BY VALUES '<VALUE>'
   DEFAULT TABLESPACE "<TABLESPACE>"
   TEMPORARY TABLESPACE "<TEMP_TABLESPACE>";
-- new object type path is: SCHEMA_EXPORT/SYSTEM_GRANT
GRANT CREATE TABLE TO "<USER>";
GRANT CREATE SESSION TO "<USER>";
-- new object type path is: SCHEMA_EXPORT/ROLE_GRANT
GRANT "EXP_FULL_DATABASE" TO "<USER>";
GRANT "IMP_FULL_DATABASE" TO "<USER>";
...

For more details, see:

10. How to get the DDL both as SQL statements and as XML data ?

With the classic export dumpfiles, the Data Definition Language (DDL) statements were stored inside the export dumpfile as normal (ready-to-use) SQL statements. With Data Pump, however, the data to recreate the DDL statements is stored in XML format. This XML data can be obtained together with the actual DDL statements by running an Import Data Pump job with the SQLFILE and TRACE parameters.

-- create a SQL file with DDL statements and XML data:
% impdp <LOGIN>/<PASSWORD> DIRECTORY=<DIRECTORY_NAME> DUMPFILE=<DUMP_NAME>.dmp \
    NOLOGFILE=y SQLFILE=<SQLFILE_NAME>.sql FULL=y TRACE=2

Example output of sqlfile: impdp_s.sql

-- CONNECT SYSTEM
-- new object type path is: SCHEMA_EXPORT/USER
<?xml version="1.0"?><ROWSET><ROW>
<USER_T><VERS_MAJOR>1</VERS_MAJOR><VERS_MINOR>0</VERS_MINOR>
<USER_ID>79</USER_ID><NAME><USER></NAME><TYPE_NUM>1</TYPE_NUM>
<PASSWORD>F894844C34402B67</PASSWORD><DATATS>USERS</DATATS>
<TEMPTS>TEMP</TEMPTS><CTIME>18-OCT-07</CTIME><PTIME>18-OCT-07</PTIME>
<PROFNUM>0</PROFNUM><PROFNAME>DEFAULT</PROFNAME><DEFROLE>1</DEFROLE>
<ASTATUS>0</ASTATUS><LCOUNT>0</LCOUNT>
<DEFSCHCLASS>DEFAULT_CONSUMER_GROUP</DEFSCHCLASS><SPARE1>0</SPARE1></USER_T>
</ROW></ROWSET>
CREATE USER "<USER>" IDENTIFIED BY VALUES '<VALUE>'
   DEFAULT TABLESPACE "<TABLESPACE>"
   TEMPORARY TABLESPACE "<TEMP_TABLESPACE>";
-- new object type path is: SCHEMA_EXPORT/SYSTEM_GRANT
...

Note: it is not recommended to perform an Import Data Pump job with TRACE=2 unless explicitly requested by Oracle Support.
Additional Resources

Community: Database Utilities

REFERENCES

NOTE:30824.1   - Init.ora Parameter "TIMED_STATISTICS" Reference Note
NOTE:351598.1  - Export/Import DataPump: The Minimum Requirements to Use Export DataPump and Import DataPump (System Privileges)
NOTE:422893.1  - Understanding Automatic Diagnostic Repository
NOTE:553337.1  - Export/Import DataPump Parameter VERSION - Compatibility of Data Pump Between Different Oracle Versions [Video]
BUG:5583370    - Product Enhancement: Extend Parameter TRACE to Provide Timestamp in DataPump Log
NOTE:453895.1  - Checklist For Slow Performance Of DataPump Export (expdp) And Import (impdp)
NOTE:462488.1  - How to Gather the Header Information and the Content of an Export Dumpfile ?
NOTE:1053432.1 - ORA-31631 During Export Using DataPump API
NOTE:30762.1   - Init.ora Parameter "MAX_DUMP_FILE_SIZE" Reference Note
NOTE:32951.1   - TKProf Interpretation (9i and below)
NOTE:336014.1  - How To Cleanup Orphaned DataPump Jobs In DBA_DATAPUMP_JOBS ?
NOTE:341733.1  - Export/Import DataPump Parameters INCLUDE and EXCLUDE - How to Load and Unload Specific Objects
BUG:5152186    - Expdp/Impdp Job With TRACE and Started With Non-DBA User Fails: ORA-31631
NOTE:160178.1  - How To Set EVENTS In The SPFILE
NOTE:552424.1  - Export/Import DataPump Parameter ACCESS_METHOD - How to Enforce a Method of Loading and Unloading Data ?
NOTE:199081.1  - SQL_TRACE (10046), TKProf and Explain Plan - Overview Reference
NOTE:21154.1   - EVENT: 10046 "enable SQL statement tracing (including binds/waits)"