Parameters to use
• ATTACH, CONTENT, DATA_OPTIONS, PARALLEL, REMAP_DATA, REMAP_TABLE, REMAP_TABLESPACE, REMAP_SCHEMA (the Data Pump successor to the legacy FROMUSER/TOUSER)
• PARFILE, QUERY, RESUMABLE, STATISTICS, TRANSPORT_TABLESPACES
1) REMAP_TABLE and SQLFILE
impdp dumpfile=table.dmp logfile=table.log directory=DATA_PUMP_DIR tables=emp remap_table=emp:emp1 SQLFILE=file.sql
SQLFILE writes the DDL statements the import would execute to the named file; nothing is actually imported.
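The generated file can simply be read to preview the DDL. A sketch of what it might contain (output abridged; the SCOTT schema and column list are assumptions):
[cybage@oracle dpdump]$ cat file.sql
-- new object type path: TABLE_EXPORT/TABLE/TABLE
CREATE TABLE "SCOTT"."EMP1"
   (    "EMPNO" NUMBER(4,0),
        "ENAME" VARCHAR2(10),
        ...
   );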
2) ATTACH and JOB_NAME
expdp full=y dumpfile=full.dmp logfile=full.log job_name=expdp_full directory=data_pump_dir
To view the running jobs:
select job_name, state from dba_datapump_jobs;
a) To attach to the job:
expdp ATTACH=EXPDP_FULL
b) To stop the job:
Export> STOP_JOB=IMMEDIATE
c) To restart the job:
Export> START_JOB
Export> CONTINUE_CLIENT          (to switch back to logging mode and view progress)
d) To kill the job (a killed job cannot be resumed):
Export> KILL_JOB
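Putting the interactive commands together, a sketch of a full stop/resume cycle (the SYSTEM credentials are placeholders; screen output is omitted):
[cybage@oracle dpdump]$ expdp system/manager ATTACH=EXPDP_FULL
Export> STOP_JOB=IMMEDIATE          (job now shows NOT RUNNING in dba_datapump_jobs)
[cybage@oracle dpdump]$ expdp system/manager ATTACH=EXPDP_FULL          (re-attach later)
Export> START_JOB          (resumes from where it stopped)
Export> CONTINUE_CLIENT          (back to logging mode)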
3)DATA_OPTIONS
DATA_OPTIONS = [DISABLE_APPEND_HINT | SKIP_CONSTRAINT_ERRORS | REJECT_ROWS_WITH_REPL_CHAR]
DISABLE_APPEND_HINT = load data without the APPEND hint; the conventional-path insert avoids the exclusive table (TM) lock a direct-path load would take, so other sessions can keep using the table during the import.
SKIP_CONSTRAINT_ERRORS = rows that violate non-deferred constraints are rejected and logged, and the load continues instead of failing.
REJECT_ROWS_WITH_REPL_CHAR = rows that lost data during character-set conversion (i.e. contain the default replacement character) are rejected instead of being loaded.
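For example, a sketch of appending into an existing table while tolerating constraint-violating rows (the dump file and table are from the earlier example; TABLE_EXISTS_ACTION=APPEND keeps the existing table):
impdp dumpfile=table.dmp logfile=table.log directory=DATA_PUMP_DIR tables=emp table_exists_action=append data_options=skip_constraint_errors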
4) CONTENT = ALL | DATA_ONLY | METADATA_ONLY
Exports/imports everything, only the row data, or only the object definitions (DDL), respectively.
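For example, a DDL-only export of the whole database might look like this (a sketch reusing the directory above):
expdp full=y content=metadata_only dumpfile=meta.dmp logfile=meta.log directory=data_pump_dir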
5) ESTIMATE = BLOCKS | STATISTICS
Estimates how much dump-file space the export will need, computed either from the number of database blocks the objects occupy or from their optimizer statistics.
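Combined with ESTIMATE_ONLY=YES, expdp prints the estimate without writing a dump file. A sketch (no DUMPFILE is allowed in this mode):
expdp full=y estimate=statistics estimate_only=yes logfile=estimate.log directory=data_pump_dir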
6) PARFILE
List all the parameters in a file and pass the file name through the PARFILE parameter:
[cybage@oracle dpdump]$ cat parameter.par
full=y
dumpfile=full.dmp
logfile=full.log
job_name=expdp_full
directory=data_pump_dir
[cybage@oracle dpdump]$ expdp PARFILE=parameter.par
Why use a parfile?
- When a value contains characters the OS shell would otherwise interpret, such as the '?' symbol (Oracle's shorthand for ORACLE_HOME) in a datafile path.
- To avoid retyping the same parameters with the same values for every run.
- When the command is long and would otherwise have to be continued across several lines.
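A QUERY clause is the classic case: in a parfile the quotes need no shell escaping. A sketch (the WHERE clause and file names are examples):
[cybage@oracle dpdump]$ cat query.par
tables=emp
query=emp:"WHERE deptno = 10"
dumpfile=emp10.dmp
logfile=emp10.log
directory=data_pump_dir
[cybage@oracle dpdump]$ expdp PARFILE=query.par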