Data Pump Scenarios


Source: http://myorastuff.blogspot.in/2008/08/expdp-impdp.html

Data Pump is a new feature in Oracle 10g that provides fast, parallel data load and unload. With direct path and parallel execution, Data Pump is several times faster than the traditional exp/imp utilities. Traditional exp/imp runs on the client side, but expdp/impdp runs on the server side, so we have much more control over the job than with traditional exp/imp. Compared to exp/imp, Data Pump startup time is longer, because it has to set up the jobs, queues, and master table. Also, at the end of an export operation the master table data is written to the dump file set, and at the beginning of an import job the master table is located and loaded into the schema of the user.

The following processes are involved in a Data Pump operation:

Client Process: This process is initiated by the client utility. It makes calls to the Data Pump API. Once the Data Pump job is initiated, this process is not necessary for the progress of the job.

Shadow Process: When the client logs in to the database, a foreground process is created. It services the client's Data Pump API requests. This process creates the master table and the Advanced Queuing queues used for communication. Once the client process ends, the shadow process also goes away.

Master Control Process: The MCP controls the execution of the Data Pump job. There is one MCP per job. The MCP divides the job into various metadata and data load or unload tasks and hands them over to the worker processes.

Worker Process: The MCP creates worker processes based on the value of the PARALLEL parameter. The worker processes perform the tasks requested by the MCP.

Advantages of Data Pump:

1. We can perform exports in parallel. Data Pump can also write to multiple files on different disks (specify PARALLEL=2 and two directories with file specifications, e.g. DUMPFILE=ddir1:file1.dmp,ddir2:file2.dmp).
2. We can attach to and detach from a job, and monitor the job's progress remotely.
3. There are more options to filter metadata objects, e.g. EXCLUDE and INCLUDE.
4. The ESTIMATE_ONLY option can be used to estimate disk space requirements before performing the job.
5. Data can be exported from a remote database by using a database link.
6. An explicit DB version can be specified, so only supported object types are exported.
7. During impdp we can change the target file names, schema, and tablespace, e.g. REMAP_SCHEMA, REMAP_DATAFILE, REMAP_TABLESPACE.
8. Data rows can be filtered during impdp as well. With traditional exp/imp, the filter option exists only in exp; here we have it in both expdp and impdp.
9. Data can be imported from one DB to another without writing a dump file, using the NETWORK_LINK parameter.
10. The data access method is decided automatically. In traditional exp/imp we specify a value for the DIRECT parameter; Data Pump uses direct path where possible and conventional path where direct path cannot be used.
11. Job status can be queried directly from the data dictionary (for example DBA_DATAPUMP_JOBS, DBA_DATAPUMP_SESSIONS).

Exp & expdp common parameters: the following parameters exist in both the traditional exp and the expdp utility:

FILESIZE
FLASHBACK_SCN
FLASHBACK_TIME
FULL
HELP
PARFILE
QUERY
TABLES
TABLESPACES
TRANSPORT_TABLESPACES (the exp value is Y/N; the expdp value is the name of the tablespace)

Comparing exp & expdp parameters: these exp parameters are equivalent to the listed expdp parameters.

FEEDBACK => STATUS
FILE => DUMPFILE
LOG => LOGFILE
OWNER => SCHEMAS
TTS_FULL_CHECK => TRANSPORT_FULL_CHECK

New parameters in the expdp utility:

ATTACH  Attach the client session to an existing Data Pump job.
COMPRESSION  Specifies whether to compress metadata before writing to the dump file set. It has two values (METADATA_ONLY, NONE); we can use NONE to disable compression during expdp.
CONTENT  Specify what to export (ALL, DATA_ONLY, METADATA_ONLY).
DIRECTORY  Location to write the dump file and log file.
ENCRYPTION_PASSWORD  If a table column is encrypted and this password is not specified, the column is written as clear text in the dump file set. We can define any string as the password for this parameter.
ESTIMATE  Show how much disk space each table in the export job consumes.
ESTIMATE_ONLY  Estimates the space, but does not perform the export.
EXCLUDE  List of objects to be excluded.
INCLUDE  List of objects to be included.
JOB_NAME  Name of the export job.
KEEP_MASTER  Specify Y so that the master table is not dropped after the export.
NETWORK_LINK  Specify a dblink to export from a remote database.
NOLOGFILE  Specify Y if you do not want to create a log file.
PARALLEL  Specify the maximum number of threads for the export job.
SAMPLE  Allows you to specify a percentage of data to be sampled and unloaded from the source database. The sample_percent indicates the probability that a block of rows will be selected as part of the sample.
VERSION  DB objects that are incompatible with the specified version will not be exported.
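To see several of these parameters working together, the sketch below is a hypothetical parfile (the hr schema, the file names, and reuse of the dumplocation directory object are assumptions, not from the original post) that sizes an export without actually writing a dump file:

# estimate_hr.par - hypothetical parfile: sizes the export without writing a dump file
userid=system/password@orcl
directory=dumplocation
logfile=estimate_hr.log
schemas=hr
estimate=blocks
estimate_only=y
exclude=statistics

Run it with expdp parfile=estimate_hr.par; the per-table size estimates appear on screen and in the log file. Note that ESTIMATE_ONLY=y cannot be combined with DUMPFILE, which is why no dump file is specified.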
Imp & impdp common parameters: the following parameters exist in both the traditional imp and the impdp utility:

FULL
HELP
PARFILE
QUERY
SKIP_UNUSABLE_INDEXES
TABLES
TABLESPACES

Comparing imp & impdp parameters: these imp parameters are equivalent to the listed impdp parameters.

DATAFILES => TRANSPORT_DATAFILES
DESTROY => REUSE_DATAFILES
FEEDBACK => STATUS
FILE => DUMPFILE
FROMUSER => SCHEMAS, REMAP_SCHEMA
IGNORE => TABLE_EXISTS_ACTION (SKIP, APPEND, TRUNCATE, REPLACE)
INDEXFILE, SHOW => SQLFILE
LOG => LOGFILE
TOUSER => REMAP_SCHEMA

New parameters in the impdp utility:

FLASHBACK_SCN  Performs an import operation that is consistent with the SCN specified from the source database. Valid only when the NETWORK_LINK parameter is used.
FLASHBACK_TIME  Similar to FLASHBACK_SCN, but Oracle finds the SCN closest to the time specified. To get a consistent export from the source database, we can use the FLASHBACK_SCN or FLASHBACK_TIME parameters. These two parameters are valid only when we use the NETWORK_LINK parameter.
NETWORK_LINK  Performs the import directly from a source database using the database link name specified in the parameter. No dump file is created on the server when we use this parameter.
REMAP_DATAFILE  Changes the name of a source DB data file to a different name in the target.
REMAP_SCHEMA  Loads objects into a different target schema name.
REMAP_TABLESPACE  Changes the name of the source tablespace to a different name in the target.
TRANSFORM  We can specify that the storage clause should not be generated in the DDL for import. This is useful if the storage characteristics of the source and target databases are different. The syntax is TRANSFORM=name:boolean_value[:object_type], where boolean_value is Y or N and the valid names are SEGMENT_ATTRIBUTES and STORAGE. STORAGE removes the storage clause from the CREATE statement DDL, whereas SEGMENT_ATTRIBUTES removes physical attributes, tablespace, logging, and storage attributes. For instance: TRANSFORM=storage:N:table
ENCRYPTION_PASSWORD  Required on an import operation if an encryption password was specified on the export operation.
CONTENT, INCLUDE, EXCLUDE  Same as in the expdp utility.

Prerequisite for expdp/impdp: set up the dump location in the database.

system@orcl> create directory dumplocation
  2  as 'c:/dumplocation';

Directory created.

system@orcl> grant read,write on directory dumplocation to scott;

Grant succeeded.

Let us experiment with the expdp & impdp utilities in different scenarios. We have two databases, orcl and ordb. All the scenarios below were tested on Oracle 10g R2.
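Before running the scenarios, it can save time to confirm that the directory object exists and points at the OS path you expect. A minimal data dictionary check (the output layout below is illustrative):

system@orcl> select directory_name, directory_path
  2  from dba_directories
  3  where directory_name = 'DUMPLOCATION';

DIRECTORY_NAME   DIRECTORY_PATH
---------------- --------------------
DUMPLOCATION     c:/dumplocation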
Scenario 1: Export the whole orcl database.

Export parfile content:
userid=system/password@orcl
dumpfile=expfulldp.dmp
logfile=expfulldp.log
full=y
directory=dumplocation

Scenario 2: Export the scott schema from orcl and import it into the ordb database. While importing, exclude some objects (sequence, view, package, cluster, table) and load the objects that came from the RES tablespace into the USERS tablespace in the target database.

Expdp parfile content:
userid=system/password@orcl
dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log
directory=dumplocation
schemas=scott

Impdp parfile content:
userid=system/password@ordb
dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log
directory=dumplocation
table_exists_action=replace
remap_tablespace=res:users
exclude=sequence,view,package,cluster,table:"in('LOAD_EXT')"

Scenario 3: Export the emp table from the scott schema at the orcl instance and import it into the ordb instance.

Expdp parfile content:
userid=system/password@orcl
logfile=tableexpdb.log
directory=dumplocation
tables=scott.emp
dumpfile=tableexpdb.dmp

Impdp parfile content:
userid=system/password@ordb
dumpfile=tableexpdb.dmp
logfile=tabimpdb.log
directory=dumplocation
table_exists_action=REPLACE

Scenario 4: Export only specific partitions of the part_emp table from the scott schema at orcl and import them into the ordb database.

Expdp parfile content:
userid=system/password@orcl
dumpfile=partexpdb.dmp
logfile=partexpdb.log
directory=dumplocation
tables=scott.part_emp:part10,scott.part_emp:part20

Impdp parfile content: if we want to overwrite the exported data in the target database, we first need to delete the part_emp rows for deptno in (10,20).

scott@ordb> delete part_emp where deptno=10;

786432 rows deleted.

scott@ordb> delete part_emp where deptno=20;

1310720 rows deleted.

scott@ordb> commit;

Commit complete.

userid=system/password@ordb
dumpfile=partexpdb.dmp
logfile=partimpdb.log
directory=dumplocation
table_exists_action=append

Scenario 5: Export only the tables in the scott schema at orcl and import them into the ordb database.

Expdp parfile content:
userid=system/password@orcl
dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log
directory=dumplocation
include=table
schemas=scott

Impdp parfile content:
userid=system/password@ordb
dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log
directory=dumplocation
table_exists_action=replace
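One variation on Scenario 5 worth noting (a sketch that is not in the original post; the dump file name here is hypothetical): INCLUDE also accepts object-name filters, so the export can be narrowed to specific tables rather than all tables in the schema.

userid=system/password@orcl
dumpfile=tablesubsetdb.dmp
logfile=tablesubsetdb.log
directory=dumplocation
schemas=scott
include=table:"IN ('EMP','DEPT')"

The quotes survive as-is inside a parfile; on the command line the same filter would need OS-specific escaping, which is one reason these scenarios use parfiles throughout.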
Scenario 6: Export only the rows belonging to departments 10 and 20 in the emp and dept tables from the orcl database. While importing, load only the deptno 10 rows into the target database.

Expdp parfile content:
userid=system/password@orcl
dumpfile=data_filter_expdb.dmp
logfile=data_filter_expdb.log
directory=dumplocation
content=data_only
schemas=scott
include=table:"in('EMP','DEPT')"
query="where deptno in(10,20)"

Impdp parfile content:
userid=system/password@ordb
dumpfile=data_filter_expdb.dmp
logfile=data_filter_impdb.log
directory=dumplocation
schemas=scott
query="where deptno = 10"
table_exists_action=APPEND

Scenario 7: Export the scott schema from the orcl database and split the dump file into 50MB pieces. Import the dump files into the ordb database.

Expdp parfile content:
userid=system/password@orcl
logfile=schemaexp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
filesize=50M
schemas=scott
include=table

As per the above expdp parfile, the file schemaexp_split_01.dmp is created initially. Once that file reaches 50MB, the next file, schemaexp_split_02.dmp, is created, and so on. Notice that every occurrence of the substitution variable %U is incremented each time. Let us say the total dump size is 500MB; then it creates 10 dump files, as each file size is 50MB.

Impdp parfile content:
userid=system/password@ordb
logfile=schemaimp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
table_exists_action=replace
remap_tablespace=res:users
exclude=grant

Scenario 8: Export the scott schema from the orcl database and split the dump file into four files. Import the dump files into the ordb database.

Expdp parfile content:
userid=system/password@orcl
logfile=schemaexp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
parallel=4
schemas=scott
include=table

As per the above parfile content, four files are created initially: schemaexp_split_01.dmp, schemaexp_split_02.dmp, schemaexp_split_03.dmp, schemaexp_split_04.dmp. Since there is no FILESIZE parameter, no more files will be created.

Impdp parfile content:
userid=system/password@ordb
logfile=schemaimp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
table_exists_action=replace
remap_tablespace=res:users
exclude=grant

Scenario 9: Export the scott schema from the orcl database and split the dump file across three files stored in three different locations. This method is especially useful if you do not have enough space in one file system to perform the complete expdp job. After the export is successful, import the dump files into the ordb database.

Expdp parfile content:
userid=system/password@orcl
logfile=schemaexp_split.log
directory=dumplocation
dumpfile=dump1:schemaexp_%U.dmp,dump2:schemaexp_%U.dmp,dump3:schemaexp_%U.dmp
filesize=50M
schemas=scott
include=table

As per the above parfile, expdp places the dump files in three different locations. Let us say the entire expdp dump size is 1500MB; then it creates 30 dump files (each 50MB) and places 10 files in each file system.

Impdp parfile content:
userid=system/password@ordb
logfile=schemaimp_split.log
directory=dumplocation
dumpfile=dump1:schemaexp_%U.dmp,dump2:schemaexp_%U.dmp,dump3:schemaexp_%U.dmp
table_exists_action=replace
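Scenarios 7 and 8 can also be combined (a sketch under the same assumptions as those scenarios, not shown in the original post): with both PARALLEL and FILESIZE set, up to four workers write concurrently, and the %U variable keeps generating new 50MB pieces until the data is exhausted.

userid=system/password@orcl
logfile=schemaexp_par_split.log
directory=dumplocation
dumpfile=schemaexp_par_split_%U.dmp
filesize=50M
parallel=4
schemas=scott
include=table

For PARALLEL to be effective, the DUMPFILE specification must be able to supply at least as many files as there are active workers, which the %U substitution variable guarantees.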
Scenario 10: We are on the orcl database server. Export the ordb database's data and place the dump file on the orcl database server; then import that dump file into the orcl database. This is basically exporting data from a remote database: since we run expdp on the orcl server, the expdp utility exports the ordb database's data and places the dump file on the orcl server. When we use NETWORK_LINK, the expdp user and the source database schema user should have identical privileges. If the privileges are not identical, we get the error below:

C:\impexpdp>expdp parfile=networkexp1.par

Export: Release 10.2.0.1.0 - Production on Sunday, 17 May, 2009 12:06:40

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-31631: privileges are required
ORA-39149: cannot link privileged user to non-privileged user

Expdp parfile content:
userid=scott/tiger@orcl
logfile=networkexp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
schemas=scott
include=table
network_link=ordb

Impdp parfile content:
userid=system/password@orcl
logfile=networkimp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
table_exists_action=replace

Scenario 11: Import the scott schema from orcl into ordb, but do not write a dump file on the server. Here we do not need to export the data at all: we run impdp on the ordb server with the NETWORK_LINK parameter, and it contacts the orcl database, extracts the data, and imports it directly into the ordb database without creating a dump file. If we do not have enough space in the file system to place a dump file, we can use this option to load the data.

Impdp parfile content:
userid=scott/tiger@ordb
network_link=orcl
logfile=networkimp2.log
directory=dumplocation
table_exists_action=replace

Scenario 12: Expdp the scott schema in orcl and impdp the dump file into the training schema in the ordb database.

Expdp parfile content:
userid=system/password@orcl
dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log
directory=dumplocation
schemas=scott

Impdp parfile content:
userid=system/password@ordb
dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log
directory=dumplocation
table_exists_action=replace
remap_schema=scott:training

Scenario 13: Expdp a table on the orcl database and impdp it into ordb, exporting only 20 percent of the table's data. We use the SAMPLE parameter to accomplish this task. SAMPLE allows you to export subsets of data by specifying the percentage of data to be sampled and exported. The sample_percent indicates the probability that a block of rows will be selected as part of the sample; it does not mean that the database will retrieve exactly that amount of rows from the table. The value you supply for sample_percent can be anywhere from .000001 up to, but not including, 100. If no table is specified, the sample_percent value applies to the entire export job. The SAMPLE parameter is not valid for network exports.

Expdp parfile content:
userid=system/password@orcl
dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log
directory=dumplocation
tables=scott.part_emp
SAMPLE=20

As per the above expdp parfile, it exports only 20 percent of the data in the part_emp table.

Impdp parfile content:
userid=system/password@ordb
dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log
directory=dumplocation
table_exists_action=replace
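A common way to clear the ORA-31631/ORA-39149 error from Scenario 10 is to make the privileges match on both ends of the database link. The grant below is the usual remedy when the local user holds export privileges that the user on the other end of the link lacks; treat it as a sketch, since the exact role required depends on what each user already holds.

-- run as a DBA on the source database (ordb in this scenario)
SQL> grant exp_full_database to scott;

Grant succeeded.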
Managing Data Pump jobs:

The Data Pump clients expdp and impdp provide an interactive command interface. Since each expdp and impdp operation has a job name, you can attach to that job from any computer and monitor the job or make adjustments to it. Here are the Data Pump interactive commands:

ADD_FILE  Adds another file or a file set to the DUMPFILE set.
CONTINUE_CLIENT  Changes mode from interactive client to logging mode.
EXIT_CLIENT  Leaves the client session and discontinues logging, but leaves the current job running.
KILL_JOB  Detaches all currently attached client sessions and terminates the job.
PARALLEL  Increase or decrease the number of threads.
START_JOB  Starts (or resumes) a job that is not currently running. The SKIP_CURRENT option can skip the recent failed DDL statement that caused the job to stop.
STATUS  Displays detailed status of the job; the refresh interval can be specified in seconds. The detailed status is displayed on the output screen but not written to the log file.
STOP_JOB  Stops the current job; the job can be restarted later.

Scenario 14: Let us start a job, stop it in the middle, and resume it. After some time, we will kill the job, checking the job status after every step.

We can find which jobs are currently running in the database by using the query below.

SQL> select state, job_name from dba_datapump_jobs;

STATE                          JOB_NAME
------------------------------ ------------------------------
EXECUTING                      SYS_IMPORT_FULL_01

C:\impexpdp>impdp parfile=schemaimp1.par

Import: Release 10.2.0.1.0 - Production on Sunday, 17 May, 2009 14:06:51

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01": parfile=schemaimp1.par
Processing object type SCHEMA_EXPORT/TABLE/TABLE

Import> stop_job
Are you sure you wish to stop this job ([yes]/no): yes

C:\impexpdp>

When we want to stop the job, we need to press Control-C to return to the Import> prompt, and then stop the job with the stop_job command as above. After the job is stopped, here is the job status:

SQL> select state, job_name from dba_datapump_jobs;

STATE                          JOB_NAME
------------------------------ ------------------------------
NOT RUNNING                    SYS_IMPORT_FULL_01

Now we attach to the job again:

C:\impexpdp>impdp system/password@ordb attach=SYS_IMPORT_FULL_01

Import: Release 10.2.0.1.0 - Production on Sunday, 17 May, 2009 14:17:11

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

Job: SYS_IMPORT_FULL_01
  Owner: SYSTEM
  Operation: IMPORT
  Creator Privs: FALSE
  GUID: 54AD9D6CF9B54FC4823B1AF09C2DC723
  Start Time: Sunday, 17 May, 2009 14:17:12
  Mode: FULL
  Instance: ordb
  Max Parallelism: 1
  EXPORT Job Parameters:
    CLIENT_COMMAND        parfile=schemaexp1.par
  IMPORT Job Parameters:
    Parameter Name        Parameter Value:
    CLIENT_COMMAND        parfile=schemaimp1.par
    TABLE_EXISTS_ACTION   REPLACE
  State: IDLING
  Bytes Processed: 1,086,333,016
  Percent Done: 44
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: c:/impexpdp\networkexp1.dmp

Worker 1 Status:
  State: UNDEFINED

Import>

After attaching to the job, here is the job status:

SQL> select state, job_name from dba_datapump_jobs;

STATE                          JOB_NAME
------------------------------ ------------------------------
IDLING                         SYS_IMPORT_FULL_01

Attaching to the job does not restart it. Now we resume the job again:

Import> continue_client
Job SYS_IMPORT_FULL_01 has been reopened at Sunday, 17 May, 2009 14:17
Restarting "SYSTEM"."SYS_IMPORT_FULL_01": parfile=schemaimp1.par

SQL> select state, job_name from dba_datapump_jobs;

STATE                          JOB_NAME
------------------------------ ------------------------------
EXECUTING                      SYS_IMPORT_FULL_01
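Besides DBA_DATAPUMP_JOBS, the sessions attached to a job can be tied back to database sessions. The join below on the session address is a commonly used sketch; the exact columns you select are a matter of taste.

SQL> select s.sid, s.serial#, d.job_name
  2  from v$session s, dba_datapump_sessions d
  3  where s.saddr = d.saddr;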
Now we kill the same job. Before we kill it, we need to press Control-C to return to the Import> prompt.

Import> kill_job
Are you sure you wish to stop this job ([yes]/no): yes

C:\impexpdp>

Now the job has disappeared from the database:

SQL> select state, job_name from dba_datapump_jobs;

no rows selected

Posted by Govind at 12:30 PM

3 comments:

Pratap said...
Excellent info on expdp and impdp scenarios. Thanks for sharing the knowledge. However, one of the issues I found in the article is as below: the transport_tablespaces parameter exists only in expdp, its value being the tablespace name, whereas exp has transportable_tablespace, its value being Y/N. Thanks and Regards, Pratap
July 5, 2009 at 6:49 PM

waheed said...
Excellent info on expdp and impdp scenarios. Thanks for sharing the knowledge.
September 14, 2009 at 9:23 PM

Govind said...
Hi Pratap, thank you for visiting my blog. I highly appreciate your valuable correction. I corrected the article, and thank you for your input.