Manual Database Up Gradation From 9

March 23, 2018 | Author: Prahlad Kumar Sharma | Category: Oracle Database, Pl/Sql, Sql, Database Index, Replication (Computing)


Comments



Description

Manual Database up gradation from 9.2.0 to 10.1.0 Filed under: Upgradation from 9.2.0 to 10.1.0 Manual Database up gradation from 9.2.0 to 10.1.0 in Same server Step : 1 Pre-request in the 9i Database. SQL> select name from v$database; NAME ²²² TEST SQL> select count(*) from dba_objects; COUNT(*) ²²²29511 SQL> @C:\oracle\ora92\rdbms\admin\utlrp.sql PL/SQL procedure successfully completed. Table created. Table created. Table created. Index created. Table created. Table created. View created. View created. Package created. No errors. Package body created. No errors. PL/SQL procedure successfully completed. PL/SQL procedure successfully completed. SQL> select count(*) from dba_objects; COUNT(*) ²²²29511 SQL> select count(*),object_name from dba_objects where status=¶INVALID OBJECT_NAME; no rows selected Spool the output of the below query and do the modification as mentioned after backing up the DB SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101i.sql Oracle Database 10.1 Upgrade Information Tool . ************************************************************************* Database: ²²² ±> name: TEST ±> version: 9.2.0.1.0 ±> compatibility: 9.2.0.0.0 . ************************************************************************* 08-22-2009 21:29:58 µ GROUP BY Logfiles: [make adjustments in the current environment] ²²²²²²²²²²²²²²²²²± The existing log files are adequate. No changes are required. . ************************************************************************* Tablespaces: [make adjustments in the current environment] ²²²²²²²²²²²²²²²²²²²±> SYSTEM tablespace is adequate for the upgrade. «. owner: SYS «. minimum required size: 577 MB ±> CWMLITE tablespace is adequate for the upgrade. «. owner: OLAPSYS «. minimum required size: 9 MB ±> DRSYS tablespace is adequate for the upgrade. «. owner: CTXSYS «. minimum required size: 10 MB ±> ODM tablespace is adequate for the upgrade. «. owner: ODM «. minimum required size: 9 MB ±> XDB tablespace is adequate for the upgrade. «. owner: XDB «. minimum required size: 48 MB . ************************************************************************* Options: [present in existing database] ²²²²²²²²²²²²² ±> Partitioning ±> Spatial No changes are required. ************************************************************************* Deprecated Parameters: [Update Oracle Database 10.±> OLAP ±> Oracle Data Mining WARNING: Listed option(s) must be installed with Oracle Database 10.1 . ************************************************************************* Obsolete Parameters: [Update Oracle Database 10.ora or spfile] ²²²²²²²²²²²²²²²²²²²²²²²± ± No deprecated parameters found.1 init. ************************************************************************* Update Parameters: [Update Oracle Database 10. .1 init.ora or spfile] ²²²²²²²²²²²²²²²²²²²²²²WARNING: ±> ³shared_pool_size´ needs to be increased to at least ³150944944 ±> ³pga_aggregate_target´ is already at ³25165824 calculated new value is ³25165824 ±> ³large_pool_size´ is already at ³8388608 calculated new value is ³8388608 WARNING: ±> ³java_pool_size´ needs to be increased to at least ³50331648 .ora or spfile] ²²²²²²²²²²²²²²²²²²²²²²² ±> ³hash_join_enabled´ ±> ³log_archive_start´ .1 init. ************************************************************************* . 
« ±> Oracle XDK for Java ±> Oracle Java Packages ±> Oracle XML Database ±> Oracle Workspace Manager ±> Oracle Data Mining [upgrade] VALID [upgrade] VALID [upgrade] VALID [upgrade] VALID [upgrade] [upgrade] ±> OLAP Analytic Workspace ±> OLAP Catalog ±> Oracle OLAP API ±> Oracle interMedia [upgrade] [upgrade] [upgrade] «The µOracle interMedia Image Accelerator¶ is «required to be installed from the 10g Companion CD.Components: [The following database components will be upgraded or installed] ²²²²²²²²²²²²²²²²²²²²²²²²²± ±> Oracle Catalog Views ±> Oracle Packages and Types [upgrade] VALID [upgrade] VALID ±> JServer JAVA Virtual Machine [upgrade] VALID «The µJServer JAVA Virtual Machine¶ JAccelerator (NCOMP) «is required to be installed from the 10g Companion CD. « ±> Spatial ±> Oracle Text ±> Oracle Ultra Search . ************************************************************************* [upgrade] [upgrade] VALID [upgrade] VALID . ************************************************************************* SYSAUX Tablespace: [Create tablespace in Oracle Database 10. .1 server is started and BEFORE you invoke the upgrade script. ************************************************************************* Oracle Database 10g: Changes in Default Behavior ²²²²²²²²²²²²²²²² This page describes some of the changes in the behavior of Oracle Database 10g from that of previous releases. In other cases new behaviors/requirements have been introduced that may affect current scripts or applications. ³Introduction to the Optimizer.´ in Oracle Database Performance Tuning Guide.1 environment] ²²²²²²²²²²²²²²²²²²²²²²²²± ±> New ³SYSAUX´ tablespace «. See Chapter 12. SQL OPTIMIZER The Cost Based Optimizer (CBO) is now enabled by default.. . minimum required size for database upgrade: 500 MB Please create the new SYSAUX Tablespace AFTER the Oracle Database 10. In some cases the default values of some parameters have changed. * Rule-based optimization is not supported in 10g (setting OPTIMIZER_MODE to RULE or CHOOSE is not supported). * Collection of optimizer statistics is now performed by default. More detailed information is in the documentation. When upgrading to10g. See Chapter 15.2. The only supported downgrade path is for those users who have kept COMPATIBLE=9. the minimum supported release to downgrade to is Oracle 9i R2 release 9.2. so the on disk structures that 10g writes are compatible with 9i R2 structures. Users upgrading to 10g from prior releases (such as Oracle 8.3 (or later). UPGRADE/DOWNGRADE * After upgrading to 10g. and for newly created 10g databases.2.x. Oracle 8i or 9iR1) cannot downgrade to 9i R2 unless they first install 9i R2. by default the database will remain at 9i R2 file format compatibility. and for behavior changes in SKIP_UNUSABLE_INDEXES.0. this makes it possible to downgrade to 9i R2.3 or later) executable. ³Managing Optimizer Statistics´ in Oracle Performance Tuning Guide. it is no longer possible to downgrade.2. .automatically for all schemas (including SYS).0. * See the Oracle Database Upgrade Guide for changes in behavior for the COMPUTE STATISTICS clause of CREATE INDEX.0 and have an installed 9i R2 (release 9.x).0. and the minimum value for COMPATIBLE is 9. Gathering optimizer statistics on stale objects is scheduled by default to occur daily during the maintenance window. for pre-existing databases upgraded to 10g. Once file format compatibility has been explicitly advanced to 10g (using COMPATIBLE=10. 
MANAGEABILITY * Database performance statistics are now collected by the Automatic Workload Repository (AWR) database component. of the Statspack readme (spdoc. see section 1. it reduces the number of tablespaces required by Oracle that you. The SYSAUX tablespace serves as an auxiliary tablespace to the SYSTEM tablespace. * A SYSAUX tablespace is created upon upgrade to 10g. MEMORY * Automatic PGA Memory Management is now enabled by default (unless PGA_AGGREGATE_TARGET is explicitly set to 0 or WORKAREA_SIZE_POLICY is explicitly set to MANUAL). See Chapter 5. This data is stored in the SYSAUX tablespace. and is used by the database for automatic generation of performance recommendations.See the Oracle Database Upgrade Guide. Because it is the default tablespace for many Oracle features and products that previously required their own tablespaces. as a DBA. must maintain. ³Automatic Performance Statistics´ in the Oracle Database Performance Tuning Guide. .txt in the RDBMS ADMIN directory) for directions on using Statspack in 10g to avoid conflict with the AWR. automatically upon upgrade to 10g and also for newly created 10g databases. * If you currently use Statspack for performance data gathering. ³Managing the Undo Tablespace. unless explicitly set. the number of SQL cursors cached by PL/SQL was determined by OPEN_CURSORS. a SYSAUX tablespace is . the number of cursors cached is determined by SESSION_CACHED_CURSORS. See the Oracle Database Reference manual. * Previously.PGA_AGGREGATE_TARGET is defaulted to 20% of the SGA size. but is typically 8KB (was typically 2KB in previous releases). * Auto tuning undo retention is on by default. Oracle recommends tuning the value of PGA_AGGREGATE_TARGET after upgrading. CREATE DATABASE * In addition to the SYSTEM tablespace. In 10g. * The default value of DB_BLOCK_SIZE is operating system specific. For more information. see Chapter 10. * SHARED_POOL_SIZE must increase to include the space needed for shared pool overhead.´ in the Oracle Database Administrator¶s Guide. TRANSACTION/SPACE * Dropped objects are now moved to the recycle bin. where the space is only reused when it is needed. See Chapter 14 of the Oracle Database Performance Tuning Guide. This allows µundropping¶ a table using the FLASHBACK DROP feature. See Chapter 14 of the Oracle Database Administrator¶s Guide. default is 50MB. it is not possible to downgrade this database to prior releases. PL/SQL procedure successfully completed. Minimum and default logfile sizes are larger. Minimum is now 4 MB. SQL> archive log list Database log mode Automatic archival Archive destination Oldest online log sequence Archive Mode Enabled C:\oracle\oradata\test\archive 91 Next log sequence to archive 93 Current log sequence 93 .always created at database creation. unless you are using Oracle Managed Files (OMF) when it is 100 MB. ³Creating a Database. * In 10g. and upon upgrade to 10g.´ in the Oracle Database Administrator¶s Guide. by default all new databases are created with 10g file format compatibility. The SYSAUX tablespace serves as an auxiliary tablespace to the SYSTEM tablespace. it reduces the number of tablespaces required by Oracle that you. as a DBA. See Chapter 2. This means you can immediately use all the 10g features. Because it is the default tablespace for many Oracle features and products that previously required their own tablespaces. must maintain. Once a database uses 10g compatible file formats. Database dismounted. Database opened.1. 
C:\Documents and Settings\Administrator>set oracle_sid=test C:\Documents and Settings\Administrator>sqlplus /nolog SQL*Plus: Release 9. SQL> conn /as sysdba Connected to an idle instance. SQL> startup ORACLE instance started. ORACLE instance shut down. 453492 bytes 109051904 bytes 25165824 bytes 667648 bytes . Oracle Corporation. Total System Global Area 135338868 bytes Fixed Size Variable Size Database Buffers Redo Buffers Database mounted. SQL> exit Backup complete database.0 ± Production on Sat Aug 22 21:36:52 2009 Copyright (c) 1982.2. (Cold backup) Step :2 Check the space needed and stop the listner and delete the sid.0. All rights reserved. 2002.SQL> shut immediate Database closed. SQL> select * from sm$ts_used. TABLESPACE_NAME ²²²²²²²²²² ²²²CWMLITE DRSYS EXAMPLE INDX ODM SYSTEM TOOLS UNDOTBS1 USERS XDB 10 rows selected. TABLESPACE_NAME ²²²²²²²²²² ²²²CWMLITE 9764864 BYTES 20971520 20971520 155975680 26214400 20971520 419430400 10485760 209715200 26214400 39976960 BYTES VARCHAR2(30) NUMBER .SQL> desc sm$ts_avail Name Null? Type ²²²²²²²²²²²²²± ²²± ²²²²²²²²²TABLESPACE_NAME BYTES SQL> select * from sm$ts_avail. TABLESPACE_NAME ²²²²²²²²²² ²²²CWMLITE DRSYS EXAMPLE INDX ODM SYSTEM TOOLS UNDOTBS1 USERS XDB 10 rows selected.DRSYS EXAMPLE ODM SYSTEM TOOLS UNDOTBS1 XDB 8 rows selected. 10092544 155779072 9699328 414908416 6291456 9814016 39714816 SQL> select * from sm$ts_free. SQL> ho LSNRCTL 11141120 10813440 131072 26148864 11206656 4456448 4128768 199753728 26148864 196608 BYTES . 16 sec off OFF OFF Listener Parameter File C:\oracle\ora92\network\admin\listener.2.ora Listener Log File C:\oracle\ora92\network\log\listener.2.0. error 1060. TNSLSNR for 32-bit Windows: Version 9.log Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee6e78e526295)(PORT=1521))) Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee6e78e526295)(PORT=1521))) STATUS of the LISTENER ²²²²²²²² Alias Version Start Date Uptime Trace Level Security SNMP LISTENER TNSLSNR for 32-bit Windows: Version 9.LSNRCTL> start Starting tnslsnr: please wait« Failed to open service <OracleoracleTNSListener>.0 ± Production System parameter file is C:\oracle\ora92\network\admin\listener.1.1.0 ± Production 22-AUG-2009 22:00:00 0 days 0 hr.log Listening Endpoints Summary« (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521))) Services Summary« . 0 min.0.ora Log messages written to C:\oracle\ora92\network\log\listener. ora Log messages written to C:\oracle\ora92\network\log\listener. Instance ³TEST´.Service ³TEST´ has 1 instance(s).log Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee6e78e526295)(PORT=1521))) Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee6e78e526295)(PORT=1521))) STATUS of the LISTENER ²²²²²²²² Alias Version Start Date Uptime Trace Level Security SNMP LISTENER TNSLSNR for 32-bit Windows: Version 9.2.1.0.2. has 1 handler(s) for this service« The command completed successfully LSNRCTL> stop Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee6e78e526295)(PORT=1521))) The command completed successfully LSNRCTL> start Starting tnslsnr: please wait« TNSLSNR for 32-bit Windows: Version 9. 0 min. 0 sec off OFF OFF .0.0 ± Production 22-AUG-2009 22:00:48 0 days 0 hr. status UNKNOWN.1.0 ± Production System parameter file is C:\oracle\ora92\network\admin\listener. 0.log Listening Endpoints Summary« (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee-6e78e526295)(PORT=1521))) Services Summary« Service ³TEST´ has 1 instance(s). 
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee6e78e526295)(PORT=1521))) The command completed successfully C:\Documents and Settings\Administrator>oradim -delete -sid test . SQL> exit Disconnected from Oracle9i Enterprise Edition Release 9. 2002. Instance ³TEST´. has 1 handler(s) for this service« The command completed successfully LSNRCTL> exit SQL> shut immediate Database closed.Listener Parameter File C:\oracle\ora92\network\admin\listener.0 ± Production C:\Documents and Settings\Administrator>lsnrctl stop LSNRCTL for 32-bit Windows: Version 9. Database dismounted.0 ± Production With the Partitioning. ORACLE instance shut down.2.ora Listener Log File C:\oracle\ora92\network\log\listener. Oracle Corporation.0 ± Production on 22-AUG-2009 22:03:14 copyright (c) 1991.0. OLAP and Oracle Data Mining options JServer Release 9.2.1. All rights reserved. status UNKNOWN.1.2.0.1. 1. SQL> startup pfile=¶E:\oracle\product\10.0\admin\test\pfile\init.ora.ora. SQL> startup upgrade ORACLE instance started.73200934649 .Step: 3 Install ORACLE 10g Software in different Home.0\admin\test\pfile\init. Total System Global Area 239075328 bytes Fixed Size Variable Size Database Buffers Redo Buffers 788308 bytes 212859052 bytes 25165824 bytes 262144 bytes ORA-01990: error opening password file (create password file) .1.73200934649 nomount ORACLE instance started. Total System Global Area 239075328 bytes Fixed Size Variable Size Database Buffers Redo Buffers 788308 bytes 212859052 bytes 25165824 bytes 262144 bytes SQL> create spfile from pfile=¶E:\oracle\product\10. File created. Starting the DB with 10g instance and upgradation Process. SQL> shut immediate ORA-01507: database not mounted ORACLE instance shut down. DOC>###################################################################### DOC>###################################################################### DOC># no rows selected DOC>####################################################################### DOC>####################################################################### DOC> The following statement will cause an ³ORA-01722: invalid number´ .SQL> conn /as sysdba Connected.sql.sql.txt contains sysaux tablespace script as shown below) create tablespace SYSAUX datafile µsysaux01. Tablespace created. DOC> Shutdown ABORT and use a different script or a different server.0\db_1\RDBMS\ADMIN\u0902000.txt´ (Sys. SQL> @E:\oracle\product\10.sql DOC>###################################################################### DOC>###################################################################### DOC> The following statement will cause an ³ORA-01722: invalid number´ DOC> error if the database server version is not correct for this script.dbf¶ size 70M reuse extent management local segment space management auto online. SQL> @´C:\Documents and Settings\Administrator\Desktop\sys.1. 1 to consolidate data from DOC> a number of tablespaces that were separate in prior releases. PERMANENT. DOC> Consult the Oracle Database Upgrade Guide for sizing estimates. and DOC> SEGMENT SPACE MANAGEMENT AUTO.DOC> error if the database has not been opened for UPGRADE. EXTENT MANAGEMENT LOCAL. DOC> DOC> Create the SYSAUX tablespace. DOC> DOC> create tablespace SYSAUX datafile µsysaux01. 
DOC>####################################################################### DOC>####################################################################### DOC># no rows selected DOC>####################################################################### DOC>####################################################################### DOC> The following statements will cause an ³ORA-01722: invalid number´ DOC> error if the SYSAUX tablespace does not exist or is not DOC> ONLINE for READ WRITE. DOC> DOC> The SYSAUX tablespace is used in 10. DOC> DOC> Perform a ³SHUTDOWN ABORT´ and DOC> restart using UPGRADE. for example.dbf¶ DOC> size 70M reuse . COMP_ID COMP_NAME STATUS VERSION .DOC> DOC> DOC> DOC> extent management local segment space management auto online.scripts. PL/SQL procedure successfully completed.sql script. DOC> Then rerun the u0902000.synonyms will be upgraded At last it will show the message as follows TIMESTAMP ²²²²²²²²²²²²²²²²²²²²²²²²²²± 1 row selected. The script will run according to the size of the database« All packages. DOC>####################################################################### DOC>####################################################################### DOC># no rows selected no rows selected no rows selected no rows selected no rows selected Session altered. Session altered. 1.0 10.0.2.2.1.0.1.2.2.0 Oracle Database Packages and Types VALID JServer JAVA Virtual Machine VALID 10.0 CONTEXT WK VALID VALID Oracle Ultra Search 15 rows selected.2.1.1.0.0.0 10.0.2.1.0.2.0.2.0 10.1.0 10.0.0 10. DOC> .0.0 Oracle XML Database Oracle Workspace Manager Oracle Data Mining OLAP Analytic Workspace OLAP Catalog Oracle OLAP API Oracle interMedia Spatial Oracle Text VALID VALID VALID VALID VALID VALID VALID VALID 10.0 10.1.1.1.²²²²²²²²²²²± ²²²± ²²²CATALOG CATPROC JAVAVM XML Oracle Database Catalog Views VALID 10.0 10.0.0 10.0 VALID Oracle XDK VALID CATJAVA XDB OWM ODM APS AMD XOQ ORDIM SDO Oracle Database Java Packages 10.0.²²².0.0 10.2.1.2. along with their current version and status.2.2.2.0.0 10.2. DOC>####################################################################### DOC>####################################################################### DOC> DOC> The above query lists the SERVER components in the upgraded DOC> database.1.0.0 10.2.1.0.1.1. sql to recompile any invalid application objects.2.0 10.0 Oracle Database Packages and Types VALID JServer JAVA Virtual Machine VALID 10.1.0 VALID Oracle XDK VALID CATJAVA XDB OWM ODM Oracle Database Java Packages 10.2.0.1. DOC> consult the Oracle Database Upgrade Guide for troubleshooting DOC> recommendations.1.2. restart for normal operation.0.0.0. DOC> DOC>####################################################################### DOC>####################################################################### DOC># PL/SQL procedure successfully completed.2.2.2.DOC> Please review the status and version columns and look for DOC> any errors in the spool log file.0.0 10.²²²²²²²²²²²± ²²²± ²²²CATALOG CATPROC JAVAVM XML Oracle Database Catalog Views VALID 10.0 .0 10.1. If there are errors in the spool DOC> file.2.2.1. and then DOC> run utlrp. or any components are not VALID or not the current version.1. DOC> DOC> Next shutdown immediate.0 Oracle XML Database Oracle Workspace Manager Oracle Data Mining VALID VALID VALID 10.1.0.1.0 10. COMP_ID COMP_NAME STATUS VERSION ²²².0.0. APS AMD XOQ ORDIM SDO OLAP Analytic Workspace OLAP Catalog Oracle OLAP API Oracle interMedia Spatial Oracle Text VALID VALID VALID VALID 10.0 VALID 10.0 10. 
and then DOC> run utlrp.0 10.0.0 10.1.1.2.0.0.1. or any components are not VALID or not the current version.0 10.2.0 10.2.2.0. DOC> DOC> Next shutdown immediate.2.1. DOC>####################################################################### DOC>####################################################################### DOC> DOC> The above query lists the SERVER components in the upgraded DOC> database.2.sql to recompile any invalid application objects.0. If there are errors in the spool DOC> file. DOC> . DOC> DOC> Please review the status and version columns and look for DOC> any errors in the spool log file.1. along with their current version and status.1.0.0 CONTEXT WK VALID VALID Oracle Ultra Search 15 rows selected. DOC> consult the Oracle Database Upgrade Guide for troubleshooting DOC> recommendations. restart for normal operation.1.0.2. SQL> shut immediate Database closed. Total System Global Area 239075328 bytes Fixed Size Variable Size Database Buffers Redo Buffers Database mounted. Database dismounted. COUNT(*) ²²²788308 bytes 212859052 bytes 25165824 bytes 262144 bytes . SQL> startup ORACLE instance started.DOC>####################################################################### DOC>####################################################################### DOC># TIMESTAMP ²²²²²²²²²²²²²²²²²²²²²²²²²²± COMP_TIMESTAMP DBUPG_END 2009-08-22 22:59:09 1 row selected. SQL> select count(*) from dba_objects where status=¶INVALID¶. Database opened. ORACLE instance shut down. 1.sql PL/SQL procedure successfully completed.sql .1.776 1 row selected. SQL> @E:\oracle\product\10.0\db_1\RDBMS\ADMIN\utlu101s.1 Upgrade Status Tool 22-AUG-2009 11:18:36 ±> Oracle Database Catalog Views Normal successful completion ±> Oracle Database Packages and Types Normal successful completion ±> JServer JAVA Virtual Machine ±> Oracle XDK Normal successful completion Normal successful completion Normal successful completion Normal successful completion Normal successful completion Normal successful completion Normal successful completion Normal successful completion Normal successful completion Normal successful completion Normal successful completion Normal successful completion Normal successful completion ±> Oracle Database Java Packages ±> Oracle XML Database ±> Oracle Workspace Manager ±> Oracle Data Mining ±> OLAP Analytic Workspace ±> OLAP Catalog ±> Oracle OLAP API ±> Oracle interMedia ±> Spatial ±> Oracle Text ±> Oracle Ultra Search No problems detected during upgrade PL/SQL procedure successfully completed.0\db_1\RDBMS\ADMIN\utlrp. SQL> @E:\oracle\product\10. Oracle Database 10. 1.2.1.0.TIMESTAMP ²²²²²²²²²²²²²²²²²²²²²²²²²²± COMP_TIMESTAMP UTLRP_BGN 2009-08-22 23:19:07 1 row selected.0 ± Production .1.0.2. PL/SQL procedure successfully completed. TIMESTAMP ²²²²²²²²²²²²²²²²²²²²²²²²²²± COMP_TIMESTAMP UTLRP_END 2009-08-22 23:20:13 1 row selected.0 ± Prod PL/SQL Release 10. BANNER ²²²²²²²²²²²²²²²²²²²²²Oracle Database 10g Enterprise Edition Release 10.0.0. SQL> select count(*) from dba_objects where status=¶INVALID¶. SQL> select * from V$version. PL/SQL procedure successfully completed. COUNT(*) ²²²0 1 row selected.0 Production TNS for 32-bit Windows: Version 10. PL/SQL procedure successfully completed.1.0 ± Production CORE 10.2.2. ora file and give db_name=<dbname of production> and control_files=<location where you want controlfile to be restored> 2)Startup nomount pfile=<path of init. by Deepak ² 3 Comments February 24. 
4) Issue ³alter database mount´ Make sure that backuppieces are on the same location where it were there on production db.1. 2010 Duplicate Database With RMAN Without Connecting To Target Database ± from metalink Id 732624. Just wanted to share this topic How to do duplicate database without connecting to target database using backups taken from RMAN on alternate host. Solution Follow the below steps 1)Export ORACLE_SID=<SID Name as of production> create init. 3)Connect to RMAN and issue command : RMAN>restore controlfile from µ<backuppiece of controlfile which you took on production>.1 hi.0 ± Production 5 rows selected. . If you dont have the same location. then make RMAN aware of the changed location using ³catalog´ command.2.ora>. controlfile should be restored.NLSRTL Version 10. Comment Duplicate Database With RMAN Without Connecting To Target Database Filed under: Duplicate database without connecting to target database using backups taken from RMAN on alternate host.0. Check the Database that everything is working fine. than they can be cataloged using command : RMAN>catalog start with <path where backuppieces are stored>. issue ³restore database´ command.0) ± September 2005 y y y y y Transparent Data Encryption Async commits CONNECT ROLE can not only connect Passwords for DB Links are encrypted New asmcmd utility for managing ASM storage Oracle 10g Release 1 (10. 2010 Features introduced in the various server releases Submitted by admin on Sun.1. 5) After catalogging backuppiece.2.0) y y Grid computing ± an extension of the clustering feature (Real Application Clusters) Manageability improvements (self-tuning features) . Oracle 10g Release 2 (10. If you need to restore datafiles to a location different to the one recorded in controlfile.dbf¶. This document describes the high level features introduced with each new version of the Oracle database.RMAN>catalog backuppiece <piece name and path>. Most DBA¶s and developers work with multiple versions of Oracle at any particular time. Comment Features introduced in the various Oracle server releases Filed under: Features Of Various release of Oracle Database by Deepak ² Leave a comment February 2. set newname for datafile 2 to µ/newLocation/undotbs. If there are more backuppieces. or if a upgrade is required. switch datafile all.dbf¶. 2005-10-30 14:02 This document summarizes the differences between Oracle Server releases. use SET NEWNAME command as below: run { set newname for datafile 1 to µ/newLocation/system. « restore database. It is intended to be used as a quick reference as to whether a feature can be implemented. file systems. Oracle 9i Release 2 (9. automatic failover Security Improvements ± Default Install Accounts locked.3 used inside the database (JVM) Oracle Data Guard Enhancements (SQL Apply mode ± logical copy of primary database. but can be replaced with automated System Managed Undo (SMU). Flashback query (dbms_flashback. A nameserver proxy is provided for backwards compatibility as pre-8i client cannot resolve names from an LDAP server. Migrate Users to Directory Oracle 9i Release 1 (9. The UltraSearch crawler fetch data and hand it to Oracle Text to be indexed. etc. Oracle Nameserver is still available. 
Oracle Parallel Server¶s (OPS) scalability was improved ± now called Real Application Clusters (RAC).0.0) y y y y y y y y y Locally Managed SYSTEM tablespaces Oracle Streams ± new data sharing/replication feature (can potentially replace Oracle Advance Replication and Standby Databases) XML DB (Oracle is now a standards compliant XML database) Data segment compression (compress keys in tables ± only when loading data) Cluster file system for Windows and Linux (raw devices are no longer required). AES. This feature will allow users to correct wrongly committed transactions without contacting the DBA to do a database restore.1) ± June 2001 y y y y y Traditional rollback segments (RBS) are still available. Use Oracle Ultra Search for searching databases.enable) ± one can query data as it looked at some point in the past. VPD on synonyms. Oracle will create it¶s own ³Rollback Segments´ and size them automatically without any DBA involvement.2.y y y y y y y y y y y y y Performance and scalability improvements Automated Storage Management (ASM) Automatic Workload Repository (AWR) Automatic Database Diagnostic Monitor (ADDM) Flashback operations available on row. Using SMU. Applications doesn¶t need to be cluster aware anymore. table or database level Ability to UNDROP a table from a recycle bin Ability to rename tablespaces Ability to transport tablespaces across machine types (E. .g Windows to Unix) New µdrop database¶ statement New database scheduler ± DBMS_SCHEDULER DBMS_FILE_TRANSFER Package Support for bigfile tablespaces that is up to 8 Exabytes in size Data Pump ± faster data movement with expdp and impdp. Any application can scale in a database cluster. but deprecate in favour of LDAP Naming (using the Oracle Internet Directory Server). transaction. Create logical standby databases with Data Guard Java JDK 1. Full Cache Fusion implemented. Oracle 8i (8. URI¶s. Deep data protection ± fine grained security and auditing. This eliminates the need to restart the database each time parameter changes were made. New data types for XML (XMLType). New Logical Standby databases replay SQL on standby site allowing the database to be used for normal read write operations. load) Operations ± with external tables and pipelining. available for use with Oracle Net (SQL*Net). On-line table and index reorganization. etc.6) y y y y y y PL/SQL Server Pages (PSP¶s) DBA Studio Introduced Statspack New SQL Functions (rank. Dynamic Memory Management ± Buffer Pools and shared pool can be resized on-the-fly. OLAP ± Express functionality included in the DB. The Data Guard Broker allows single step fail-over when disaster strikes.1. Put security on DB level. moving average) ALTER FREELISTS command (previously done by DROP/CREATE TABLE) Checksums always on for SYSTEM tablespace allowing many possible corruptions to be fixed before writing to disk . SQL access do not mean unrestricted access. VI (Virtual Interface) protocol support. Resumable backups and statements ± suspend statement instead of rolling back immediately. VI provides fast communications between components in a cluster. an alternative to TCP/IP.1. List Partitioning ± partitioning on a list of values. transformation. not only disk access cost as before.7) y y y y y y y y Static HTTP server included (Apache) JVM Accelerator to improve performance of Java code Java Server Pages (JSP) engine MemStat ± A new utility for analyzing Java Memory footprints OIS ± Oracle Integration Server introduced. Build in XML Developers Kit (XDK). 
PLSQL Gateway introduced for deploying PL/SQL based solutions on the Web Enterprise Manager Enhancements ± including new HTML based reporting and Advanced Replication functionality included. Data Mining ± Oracle Darwin¶s features included in the DB. PL/SQL programs can be natively compiled to binaries. Cost Based Optimizer now also consider memory and CPU.y y y y y y y y y y y y y y The Oracle Standby DB feature renamed to Oracle Data Guard. XML integrated with AQ. Oracle9i allows fetching backwards in a result set. Oracle 8i (8. New Database Character Set Migration utility included. Scrolling cursor support. ETL (eXtract. HTTP.y y y y XML Parser for Java New PLSQL encrypt/decrypt package introduced User and Schemas separated Numerous Performance Enhancements Oracle 8i (8. Standby Database ± Auto shipping and application of redo logs. Enterprise Manager v2 delivered NLS ± Euro Symbol supported Analyze tables in parallel Temporary tables supported. descending Oracle 8. elimination of tablespace fragmentation. OO4O support User Security Improvements ± more centralisation. Read Only queries on standby database allowed. tablespace information managed in tablespace (i.e moved from data dictionary) improving tablespace reliability Drop Column on table (Finally !!!!!) DBMS_DEBUG PL/SQL package. single enterprise user. HOP protocols Transportable tablespaces between databases Locally managed tablespaces ± automatic sizing of extents. Virtual private database JAVA stored procedures (Oracle Java VM) Oracle iFS Resource Management using priorities ± resource classes Hash and Composite partitioned table types SQL*Loader direct load API Copy optimizer statistics across databases to ensure same access paths across different environments. case insensitive. DBMS_SQL replaced by new EXECUTE IMMEDIATE statement Progress Monitor to track long running DML. number as in v7 SQL3 standard Call external procedures LOB >1 per table . DDL Functional Indexes ± NLS. performance.1.5) y y y y y y y y y y y y y y y y y y y y y y y y y Fast Start recovery ± Checkpoint rate auto-adjusted to meet roll forward criteria Reorganize indexes/index only tables which users accessing data ± Online index rebuilds Log Miner introduced ± Allows on-line or archived redo logs to be viewed via SQL OPS Cache Fusion introduced avoiding disk I/O during cross-node communication Advanced Queueing improvements (security. Net8 support for SSL. users/roles across multiple databases.0 ± June 1997 y y y y y Object Relational database Object Types (not just date. character. Index rebuilds db_verify introduced Context Option Spatial Data Option Tablespaces changes ± Coalesce. spatial) Backup/Recovery improvements ± Tablespace point in time recovery. backup/recover individual partitions merge/balance partitions Advanced Queuing for message handling Many performance improvements to SQL/PLSQL/OCI making more efficient use of CPU/Memory. PL/SQL replication code moved in to Oracle kernel. allow custom password scheme. V7 limits extended (e.y y y y y y y y y y y y y y y y y y y y y y Partitioned Tables and Indexes export/import individual partitions partitions in multiple tablespaces Online/offline. context. incremental backups. video.3 y y y y y y y y y y y y Partitioned Views Bitmapped Indexes Asynchronous read ahead for table scans Standby Database Deferred transaction recovery on instance startup Updatable Join Views (with restrictions) SQLDBA no longer shipped. Temporary Permanent. password profiles. User password expiry. 
transparent failover to a new node Data Cartridges introduced on database (e. image. 1000 columns/table.g. Recovery manager introduced Security Server introduced for central user administration. . Privileged database links (no need for password to be stored) Fast Refresh for complex snapshots. parallel replication. SQL*Net replaced by Net8 Reverse Key indexes Any VIEW updateable New ROWID format Oracle 7. 4000 bytes VARCHAR2) Parallel DML statements Connection Pooling ( uses the physical connection for idle users and transparently reestablishes the connection when needed) to support more concurrent users.g. time. parallel backup/recovery. Performance improvements in OPS ± global V$ views introduced across all instances. Replication manager introduced. Improved ³STAR´ Query optimizer Integrated Distributed Lock Manager in Oracle PS (as opposed to Operating system DLM in v7). Index Organized tables Deferred integrity constraint checking (deferred until end of transaction instead of end of statement). index creation. index UNRECOVERABLE Subquery in FROM clause PL/SQL wrapper PL/SQL Cursor variables Checksums ± DB_BLOCK_CHECKSUM.y y y y y y y y y Trigger compilation.2 y y y y y y y y y y y y Resizable. Antijoins Histograms Dependencies Oracle Trace Advanced Replication Object Groups PL/SQL ± UTL_FILE Oracle 7.1 y y y y y y y y ANSI/ISO SQL92 Entry Level Advanced Replication ± Symmetric Data replication Snapshot Refresh Groups Parallel Recovery Dynamic SQL ± DBMS_SQL Parallel Query Options ± query. default values) Stored procedures and functions. data loading Server Manager introduced Read Only tablespaces Oracle 7. debug Unlimited extents on STORAGE clause.0 ± June 1992 y y y y y y y y Database Integrity Constraints (primary. autoextend data files Shrink Rollback Segments manually Create table. check constraints. foreign keys.ora parameters modifiable ± TIMED_STATISTICS HASH Joins. LOG_BLOCK_CHECKSUM Parallel create table Job Queues ± DBMS_JOB DBMS_SPACE DBMS Application Info Sorting Improvements ± SORT_DIRECT_WRITES Oracle 7. Some init. procedure packages Database Triggers View compilation User defined SQL functions Role based security Multiple Redo members ± mirrored online redo log files Resource Limits ± Profiles . y y y y y y y y y Much enhanced Auditing Enhanced Distributed database functionality ± INSERTS, UPDATES,DELETES, 2PC Incomplete database recovery (e.g SCN) Cost based optimiser TRUNCATE tables Datatype changes (i.e VARCHAR2 CHAR, VARCHAR) SQL*Net v2, MTS Checkpoint process Data replication ± Snapshots Oracle 6.2 y Oracle Parallel Server Oracle 6 ± July 1988 y y y Row-level locking On-line database backups PL/SQL in the database Oracle 5.1 y Distributed queries Oracle 5.0 ± 1986 y Supporting for the Client-Server model ± PC¶s can access the DB on remote host Oracle 4 ± 1984 y Read consistency Oracle 3 ± 1981 y y y Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions) Nonblocking queries (no more read locks) Re-written in the C Programming Language Oracle 2 ± 1979 y y First public release Basic SQL functionality, queries and joins Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases Comment Schema Referesh Filed under: Schema refresh by Deepak ² 1 Comment December 15, 2009 Steps for sehema refresh Schema refresh in oracle 9i Now we are going to refresh SH schema. 
Steps for schema refresh ± before exporting Spool the output of roles and privileges assigned to the user .use the query below to view the role s and privileges and spool the out as .sql file. 1. SELECT object_type,count(*) from dba_objects where owner=¶SHTEST¶ group by object_type; 2. Verify total no of objects from above query. 3. write a dynamic query as below 4. select µgrant µ || privilege ||¶ to sh;¶ from session_privs; 5. select µgrant µ || role ||¶ to sh;¶ from session_roles; 6. query the default tablespace and size 7. select tablespace_name,sum(bytes/1024/1024) from dba_segments where owner=¶SH¶ group by tablespace_name; Export the µsh¶ schema exp µusernmae/password file=¶/location/sh_bkp.dmp¶ log=¶/location/sh_exp.log¶ owner=¶SH¶ direct=y steps to drrop and recreate schema Drop the SH schema 1. 2. 3. 4. Create the SH schema with the default tablespace and allocate quota on that tablespace. Now run the roles and privileges spooled scripts. Connect the SH and verify the tablespace, roles and privileges. then start importing Importing The µSH¶ schema Imp µusernmae/password¶ file=¶/location/sh_bkp.dmp¶ log=¶/location/sh_imp.log¶ Fromuser=¶SH¶ touser=¶SH¶ SQL> SELECT object_type,count(*) from dba_objects where owner=¶SHTEST¶ group by object_type; Compiling and analyzing SH Schema exec dbms_utility.compile_schema(µSH¶); execdbms_utility.analyze_schema(µSH¶,'ESTIMATE¶,ESTIMATE_PERCENT=>20); Now connect the SH user and check for the import data. Schema refresh by dropping objects and truncating objects Export the µsh¶ schema Take the schema full export as show above Drop all the objects in µSH¶ schema To drop the all the objects in the Schema Connect the schema Spool the output SQL>set head off SQL>spool drop_tables.sql SQL>select µdrop table µ||table_name||¶ cascade constraints purge;¶ from user_tables; SQL>spool off SQL>set head off SQL>spool drop_other_objects.sql SQL>select µdrop µ||object_type||¶ µ||object_name||¶;¶ from user_objects; SQL>spool off Now run the script all the objects will be dropped, Importing THE µSH¶ schema ESTIMATE_PERCENT=>20).count(*) from dba_objects where owner=¶SHTEST¶ group by object_type. Truncate all the objects in µSH¶ schema To truncate the all the objects in the Schema Connect the schema Spool the output SQL>set head off SQL>spool truncate_tables.compile_schema(µSH¶).¶ from user_objects.Imp µusernmae/password¶ file=¶/location/sh_bkp. SQL>spool off SQL>set head off SQL>spool truncate_other_objects.dmp¶ log=¶/location/sh_imp. execdbms_utility.sql SQL>select µtruncate µ||object_type||¶ µ||object_name||¶. Now connect the SH user and check for the import data. To enable constraints use the query below SELECT µALTER TABLE µ||TABLE_NAME||¶ENABLE CONSTRAINT µ||CONSTRAINT_NAME||¶.analyze_schema(µSH¶.sql SQL>select µtruncate table µ||table_name from user_tables.'ESTIMATE¶.'FROM USER_CONSTRAINTS WHERE STATUS=¶DISABLED¶. SQL>spool off .log¶ Fromuser=¶SH¶ touser=¶SH¶ SQL> SELECT object_type. Compiling and analyzing SH Schema exec dbms_utility. dmp directory=data_pump_dir schemas=sh Dropping the µSH¶ user Query the default tablespace and verify the space in the tablespace and drop the user. Disabiling the reference constraints If there is any constraint violation while truncating use the below query to find reference key constraints and disable them.ESTIMATE_PERCENT=>20).constraint_type.dmp¶ log=¶/location/sh_imp.compile_schema(µSH¶).analyze_schema(µSH¶.log¶ Fromuser=¶SH¶ touser=¶SH¶ SQL> SELECT object_type.'ESTIMATE¶. 
Spool the output of below query and run the script.table_name FROM ALL_CONSTRAINTS where constraint_type=¶R¶ and r_constraint_name in (select constraint_name from all_constraints where table_name=¶TABLE_NAME¶) Importing THE µSH¶ schema Imp µusernmae/password¶ file=¶/location/sh_bkp. Schema refresh in oracle 10g Here we can use Datapump Exporting the SH schema through Datapump expdp µusername/password¶ dumpfile=sh_exp. Compiling and analyzing SH Schema exec dbms_utility.count(*) from dba_objects where owner=¶SHTEST¶ group by object_type. SQL>Drop user SH cascade. exec dbms_utility. Select constraint_name. Now connect the SH user and check for the import data. .Now run the script all the objects will be truncated. which is a per-user file containing entries describing commands to execute and the time(s) to execute them.daily it will be executed once per day. 2009 CRON JOB SCHEDULING ±IN UNIX y y To run system jobs on a daily/weekly/monthly basis To allow users to setup their own schedules The system schedules are setup when the package is installed. The time that the scripts run in those system-wide directories is not something that an administration typically changes. Check for the imported objects and compile the invalid objects.dmp directory=data_pump_dir schemas=sh If you are importing to different schema use remap_schema option.Importing the SH schema through datapump impdp µusername/password¶ dumpfile=sh_exp.monthly /etc/cron. Comment JOB SCHEDULING Filed under: JOB SCHEDULING by Deepak ² Leave a comment December 15. For example if you place a script inside /etc/cron. The normal manner which people use cron is via the crontab command.daily /etc/cron. via the creation of some special directories: /etc/cron. This allows you to view or edit your crontab file. Any script which is executable and placed inside them will run at the frequency which its name suggests.hourly /etc/cron. but the times can be adjusted by editing the file /etc/crontab. these directories allow scheduling of system-wide jobs in a coarse manner. To display your file you run the following command: .d /etc/cron.weekly Except for the first one which is special. every day. The format of this file will be explained shortly. crontab -l root can view any users crontab file by adding ³-u username³.Month (1 . If you wish to change the editor used to edit the file set the EDITOR environmental variable like this: export EDITOR=/usr/bin/emacs crontab -e Now enter the following: . however they can be left as µ*¶ characters to signify any value is acceptible). 4. 6.31) | +----------.Hour (0 . The number of minutes after the hour (0 to 59) The hour in military time (24 hour) format (0 to 23) The day of the month (1 to 31) The month (1 to 12) The day of the week(0 or 7 is Sun.Day of month (1 . Now that we¶ve seen the structure we should try to ru na couple of examples. for example: crontab -u skx -l # List skx's crontab file. 5.Day of week (0-7) | | | +------. 2. Each line is a collection of six fields separated by spaces. 3. The format of these files is fairly simple to understand. or use name) The command to run More graphically they would look like this: * * * * * Command to be executed | | | | | | | | | +----. The fields are: 1.Min (0 . When you save the file and quit your editor it will be installed into the system unless it is found to contain errors.12) | | +--------.59) (Each of the first five fields contains only numbers.23) +------------. 
To edit your crontabe file run: crontab -e This will launch your default editor upon your crontab file (creating it if necessary). 2. A range of numbers indicates that every item in that range will be matched.2. and 4AM: # Use a range of hours matching 1. Now we¶ll finish with some more examples: # Run the `something` command every hour on the hour 0 * * * * /sbin/something # Run the `nightly` command at ten minutes past midnight every day 10 0 * * * /bin/nightly # Run the `monday` command every monday at 2 AM 0 2 * * 1 /usr/local/bin/monday One last tip: If you want to run something very regularly you can use an alternate syntax: Instead of using only single numbers you can use ranges or sets. 3 and 4AM * 1.0 * * * * /bin/ls When you¶ve saved the file and quit your editor you will see a message such as: crontab: installing new crontab You can verify that the file contains what you expect with : crontab -l Here we¶ve told the cron system to execute the command ³/bin/ls´ every time the minute equals 0. as follows: 0 * * * * /bin/ls >/dev/null 2&>1 This causes all output to be redirected to /dev/null ± meaning you won¶t see it.4 * * * /bin/some-hourly JOB SCHEDULING IN WINDOWS . if you use the following line you¶ll run a command at 1AM. every hour. 3 and 4AM * 1-4 * * * /bin/some-hourly A set is similar. 2AM. each item in the list will be matched. 3AM. We¶re running the command on the hour. 2.3. consisting of a collection of numbers seperated by commas. if you wish to stop this then you should cause it to be redirected. Any output of the command you run will be sent to you by email. ie. The previous example would look like this using sets: # Use a set of hours matching 1. Click next and browse your cold_bkp. 1. Give a name for the backup and schedule the timings.Cold backup ± scheduling in windows environment Create a batch file as cold_bkp.batGoto start -> control panel -> scheduled tasks. Click on add a scheduled tasks. If you don¶t reschedule it the job won¶t run. Click next and finish the scheduling. Comment Steps to switchover standby to primary Filed under: Switchover primary to standby in 10g by Deepak ² 1 Comment December 15. 2. 2009 SWITCHOVER PRIMARY TO STANDBY DATABASE Primary =PRIM . It will ask for o/s user name and password. 5. 3. So edit the scheduled tasks and enter the new password. 4.bat @echo off net stop OracleServiceDBNAME net stop OracleOraHome92TNSListener xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_\coldbackup\hrms xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp \registry\database net start OracleServiceDBNAME net start OracleOraHome92TNSListener Save the file as cold_bkp.bat file. Note: Whenever the o/s user name and password are changed reschedule the scheduled tasks. shut down and restart the former primary instance PRIM: SQL>SHUTDOWN IMMEDIATE. After step 1 finishes. Make sure the last redo data transmitted from the Primary database was applied on the standby database. . SQL>SHUTDOWN IMMEDIATE. Immediately after issuing command in step 2. Issue the following commands on Primary database and Standby database to find out: SQL>select sequence#. 2. 3. In order to apply redo data to the standby database as soon as it is received. Initiate the switchover on the primary database PRIM: SQL>connect /@PRIM as sysdba SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN. .If you are using Oracle Database 10g release 2. 4. 
Open another prompt and connect to SQLPLUS: SQL>connect /@STAN as sysdba SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY. STAN is now transitioned to the primary database role. use Real-time apply. SQL>STARTUP. II. Switch the original physical standby db STAN to primary role.If you are using Oracle Database 10g release 1. Before Switchover: 1. 4. Verify the primary database instance is open and the standby database instance is mounted.Standby = STAN I. applied from v$archvied_log. Quick Switchover Steps 1. you will have to Shut down and restart the new primary database STAN. As I always recommend. SQL>STARTUP MOUNT. you can open the new Primary database STAN: SQL>ALTER DATABASE OPEN. 2. Perform SWITCH LOGFILE if necessary. test the Switchover first on your testing systems before working on Production. Verify there are no active users connected to the databases. 3. After step 3 completes: . refer to the Oracle Data Pump Encrypted Dump File Support whitepaper. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database. TDE column encryption transparently decrypts this data back to its original clear text format.from Oracle White paper Introduction The security and compliance requirements in today¶s business world present manifold challenges. As incidences of data theft increase. the protection offered by TDE does not extend beyond the database and so this . without the need for any further user or application intervention. On the new primary database STAN. encryption and decryption are performed automatically. The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT key word. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Comment Encryption with Oracle Data Pump Filed under: Encryption with Oracle Datapump by Deepak ² Leave a comment December 14. SQL>ALTER SYSTEM SWITCH LOGFILE. Now a de facto solution in meeting regulatory compliances. data encryption is one of a number of security tools in use. Conversely. perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM. However. When an authorized user inserts new data into such a column. For information regarding the Oracle Data Pump Encrypted Dump File feature that that was released with Oracle Database 11g release 1 and that provides the ability to encrypt all exported data as it is written to the export dump file set. when the user selects the column from the database. protecting data privacy continues to be of paramount importance. TDE column encryption encrypts this data prior to storing it in the database. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. Please note that this paper does not apply to the Original Export/Import utilities. Column data encrypted using TDE remains protected while it resides in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. Once a table column is marked with this keyword.5. 
2009 Encryption with Oracle Data Pump . TDE automatically encrypts the column data using the column encryption key and then writes it to the database. Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key. 2. After verifying that the correct password has been given. using a dump file encryption key derived from a userprovided password. Keep in mind that in Oracle Data Pump 10g release 2. Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database. it uses the external tables mechanism instead of the direct path mechanism. The steps involved in importing a table with encrypted columns are as follows: 1. TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns. Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job. encryption and decryption are typically CPU intensive operations. 2. Column data encrypted using Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set. Furthermore.. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it. Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set. Although the data being processed is stored in memory buffers. As part of the INSERT operation. the ENCRYPTION_PASSWORD parameter applies only to TDE encrypted columns.protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. To load an export dump file set containing encrypted column data into a target database. The use of external tables creates a correspondence between the database table data and the export dump file while using the SQL engine to perform the data transfer. Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section. 3. . the same encryption password used at export time must be provided to Oracle Data Pump import. additional disk I/O is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks. The steps involved in exporting a table with encrypted columns are as follows: 1. the corresponding dump file decryption key is derived from this password. 3. Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column. before it is written to the export dump file set. As part of the SELECT operation. In such a case.empname VARCHAR2(100). in which the Oracle Wallet is manually closed and then the export command is re-issued.Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTERSYSTEM and CREATE TABLE statements. In the event that the password is not specified. .Next. Oracle Data Pump writes the encrypted column data as clear text in the dump file. In the following example. In the following example. Although the ENCRYPTION_PASSWORD is an optional parameter. as shown in the following example. 
The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. then TDE creates the tables column encryption key based on random data.Creating a Table with Encrypted Columns Before using TDE to create and export encrypted columns. $ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp. SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY ³wallet_pwd´ SQL> CREATE TABLE DP. a warning message (ORA-39173) is displayed.EMP (empid NUMBER(6). Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. create a table with an encrypted column. If the IDENTIFIED BY clause is omitted. it is always prudent to export encrypted columns using a password. which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump files encryption key. This is shown in the following example.dmp \ TABLES=emp ENCRYPTION_PASSWORD=dump_pwd SQL> ALTER SYSTEM SET WALLET CLOSE. Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128).salary NUMBER(8. Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. The password used below in the IDENTIFIED BY clause is optional and TDE uses it to derive the tables column encryption key.2) ENCRYPT IDENTIFIED BY ³column_pwd´ Using Oracle Data Pump to Export Encrypted Columns Oracle Data Pump can now be used to export the table. When re-encrypting encrypted column data. it is first necessary to create an Oracle Encryption Wallet. 2009 8:48:43 Copyright (c) 2003. 2009 8:21:23 Copyright (c) 2003.2. then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set.4. 2007. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter. Data Mining and Real Application Testing options ORA-39001: invalid argument value ORA-39180: unable to encrypt ENCRYPTION_PASSWORD ORA-28365: wallet is not open Restriction with Transportable Tablespace Export Mode Exporting encrypted columns is not limited to table mode exports. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error: $ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp. There is. Connected to: Oracle Database 10g Enterprise Edition Release 10.dmp TABLES=emp Export: Release 10. . transportable tablespace export mode does not support encrypted columns. Oracle. 2007. All rights reserved.0 ± Production on Wednesday.dmp TABLES=emp \ ENCRYPTION_PASSWORD=dump_pwd Export: Release 10.$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.0. however. tablespace. If a schema. Oracle.4. All rights reserved.0. one exception. 09 July.0 ± Production on Monday.0.0 ± Production With the Partitioning. or full mode export is performed.2.2.4. 09 July. as used in the previous examples. 2. Master table ³DP´. 09 July. All rights reserved.4.4.SYS_EXPORT_TABLE_01 is: /ade/jkaloger_lx9/oracle/work/emp.25 KB 3 rows ORA-39173: Encrypted data has been stored unencrypted in dump file set. 
$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 8:48:43
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options
Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "DP"."EMP"    6.25 KB    3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set.
Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
*********************************************************************
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
  /ade/jkaloger_lx9/oracle/work/emp.dmp
Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and then the export command is re-issued:

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp \
    ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 - Production on Monday, 09 July, 2009 8:21:23
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter. There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error:

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp \
    TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 - Production on Wednesday, 09 July, 2009 8:55:07
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 08:55:25
The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job of pinpointing the problem via the information in the ORA-39929 error:

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp \
    TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 - Production on Thursday, 09 July, 2009 9:09:00
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported.
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at 09:09:21

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data; otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. For example, assume that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. (In the case of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed.) The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp \
    ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 - Production on Thursday, 09 July, 2009 10:55:40
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options
Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column "EMP"."SALARY" encryption properties differ for source or target table
Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote \
    TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 - Production on Friday, 09 July, 2009 11:00:57
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options
ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

By removing the ENCRYPTION_PASSWORD parameter you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed (a sketch of the first case follows this list):

- A full metadata-only import
- A schema-mode import in which the referenced schemas do not include tables with encrypted columns
- A table-mode import in which the referenced tables do not include encrypted columns
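To illustrate the first case: a metadata-only import of the same dump file succeeds without a wallet or password, because no encrypted rows are ever read. The command below is not from the white paper; it is a sketch that reuses the dp/dp account and emp.dmp file from the earlier examples together with the standard CONTENT=METADATA_ONLY option:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp \
    CONTENT=METADATA_ONLY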
Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause. The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data. Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp).

As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified; they are determined by the column datatypes in the source table in the SELECT subquery. The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows:

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.
2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

The data in an external table can be written only once, when the CREATE TABLE ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement:

SQL> SELECT * FROM DP.XEMP;

The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.
2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if ENCRYPTION_PASSWORD is present on the command line, it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced, and with it several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption: the presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump, but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp \
    TABLES=emp ENCRYPTION_PASSWORD=dump_pwd \
    ENCRYPTION=ENCRYPTED_COLUMNS_ONLY
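Going the other way, to take advantage of the new 11g capability and protect the entire dump file set rather than just the TDE columns, the ENCRYPTION parameter accepts broader values. The following is a sketch rather than an example from the paper; it assumes the documented ENCRYPTION=ALL option of Oracle Data Pump 11g, which encrypts both data and metadata using a key derived from the supplied password:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp \
    TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ALL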
DATAPUMP
Filed under: DATAPUMP, Oracle 10g | by Deepak | December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE, see http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import.

1. Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

2. Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.

1. Create a directory. The following SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles:

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

2. After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories.

Once the directory access is granted, the user scott can export his database objects with command arguments:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

Comparison of Command-Line Parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example import of tables from scott's account to jim's account

Original Import:
> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:
> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example export of an entire database to a dump file with all GRANTS, INDEXES, and data

Original Export:
> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

Data Pump Export:
> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job. Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file.
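As a concrete illustration of EXCLUDE (this command is not from the original article; the hr schema and dump file name are chosen purely for the example, while EXCLUDE=STATISTICS and EXCLUDE=GRANT are standard object-type filters):

> expdp username/password SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_slim.dmp EXCLUDE=STATISTICS EXCLUDE=GRANT

The resulting dump file could later be imported with yet another INCLUDE or EXCLUDE list, as noted above.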
3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT, and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving Data Between Versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example:
> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export. Note that Data Pump Import cannot read dump files produced by original Export.

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump, there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously. (PARALLEL is only available in the Enterprise Edition of Oracle Database.) The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance, you should do the following:
- Make sure your system is well balanced across CPU, memory, and I/O.
- Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.
- Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.
- For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example:
> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

- REMAP_TABLESPACE: This allows you to easily import a table into a different tablespace from the one from which it was originally exported. The databases have to be 10.1 or later.

Example:
> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp

- REMAP_DATAFILE: This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example: The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:
> impdp username/password PARFILE=payroll.par

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode (see the sketch after this list):

- See the status of the job. All of the information needed to monitor the job's execution is available.
- Add more dump files if there is insufficient disk space for an export file.
- Change the default size of the dump files.
- Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).
- Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.
- Increase or decrease the number of active worker processes for the job. (Enterprise Edition only.)
- Attach to a job from a remote site (such as from home) to monitor status.
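As a sketch of an interactive session (not from the original article), assume the JOB_NAME=hr export started in the parallelism example above is still running. ATTACH, STATUS, PARALLEL, and STOP_JOB are all standard interactive-mode commands:

> expdp username/password ATTACH=hr

Export> STATUS
Export> PARALLEL=8
Export> STOP_JOB=IMMEDIATE

STATUS reports the job's progress, PARALLEL changes the number of workers on the fly, and a job stopped with STOP_JOB can be resumed later by attaching again and issuing START_JOB.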
Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link), without creating a dump file on disk. This is very useful when you are moving data between databases, like data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export also gives you the ability to export read-only databases. (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance.) This is useful when there is a need to export data from a standby database.
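A network export might look like the following. This command is not from the original article; it is a sketch in which source_db is a hypothetical database link pointing at the remote (possibly read-only) instance, and NETWORK_LINK is the documented parameter that drives the extraction over that link:

> expdp username/password SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_remote.dmp NETWORK_LINK=source_db

The dump file is written on the local instance while the rows travel over the database link.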
Generating SQLFILEs

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes, and which you could then edit to get a workable DDL script. With Data Pump, it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.

Clone Database using RMAN
Filed under: Clone database using RMAN | by Deepak | December 10, 2009

Clone database using RMAN

Target db: TEST
Clone db: CLONE

In the target database:

1. Take a full backup using RMAN.

SQL> select name from v$database;

NAME
---------
TEST

SQL> archive log list

Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            c:\oracle\ora92\RDBMS
Oldest online log sequence     14
Next log sequence to archive   16
Current log sequence           16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target

connected to target database: TEST (DBID=1972233550)

RMAN> show all;

using target database controlfile instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog;

Starting backup at 23-DEC-08
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=17 devtype=DISK
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=14 recid=1 stamp=674240935
input archive log thread=1 sequence=15 recid=2 stamp=674240997
input archive log thread=1 sequence=16 recid=3 stamp=674242208
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE4K307L0_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 23-DEC-08

Starting backup at 23-DEC-08
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=C:\ORACLE\ORADATA\TEST\SYSTEM01.DBF
input datafile fno=00002 name=C:\ORACLE\ORADATA\TEST\UNDOTBS01.DBF
input datafile fno=00003 name=C:\ORACLE\ORADATA\TEST\CWMLITE01.DBF
input datafile fno=00004 name=C:\ORACLE\ORADATA\TEST\DRSYS01.DBF
input datafile fno=00005 name=C:\ORACLE\ORADATA\TEST\EXAMPLE01.DBF
input datafile fno=00006 name=C:\ORACLE\ORADATA\TEST\INDX01.DBF
input datafile fno=00007 name=C:\ORACLE\ORADATA\TEST\ODM01.DBF
input datafile fno=00008 name=C:\ORACLE\ORADATA\TEST\TOOLS01.DBF
input datafile fno=00009 name=C:\ORACLE\ORADATA\TEST\USERS01.DBF
input datafile fno=00010 name=C:\ORACLE\ORADATA\TEST\XDB01.DBF
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE5K307L5_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:56
Finished backup at 23-DEC-08

Starting backup at 23-DEC-08
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=17 recid=4 stamp=674242270
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE6K307MU_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE
Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit

Recovery Manager complete.

2. Note the DBID of the target database:

SQL> select dbid from v$database;

DBID
----------
1972233550
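Before starting the clone, it is worth confirming that RMAN can see the backup just taken. This check is not part of the original post; it is a sketch using RMAN's standard LIST command against the same target connection:

RMAN> list backup;

Each backup set created above, plus the controlfile/SPFILE autobackup, should appear in the listing.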
In the clone database:

1. Create all the folders needed for the database.

2. Create the service and the password file, and put entries in the tnsnames.ora and listener.ora files.

3. Edit the pfile and add the following parameters (a sample pfile is sketched after this procedure):

db_file_name_convert=('<target db oradata path>','<clone db oradata path>')
log_file_name_convert=('<target db oradata path>','<clone db oradata path>')

4. Start the listener using the lsnrctl command, then start the clone database in NOMOUNT using the pfile:

SQL> conn /as sysdba
Connected to an idle instance.
SQL> startup pfile='C:\oracle\admin\clone\pfile\initclone.ora' nomount

ORACLE instance started.

Total System Global Area  135338868 bytes
Fixed Size                    453492 bytes
Variable Size              109051904 bytes
Database Buffers            25165824 bytes
Redo Buffers                  667648 bytes

SQL> ho lsnrctl status
SQL> ho lsnrctl stop
SQL> ho lsnrctl start

5. Connect RMAN to the target database:

SQL> ho rman
RMAN> connect target sys/sys@test

connected to target database: TEST (DBID=1972233550)

6. Connect to the auxiliary (clone) instance:

RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

7. Run the duplicate command. It will run for a while; scripts will be running:

RMAN> duplicate target database to 'clone';

8. When it finishes, exit from RMAN and check the state of the clone. Note that the database is already mounted:

SQL> select name from v$database;
select name from v$database
*
ERROR at line 1:
ORA-01507: database not mounted

SQL> alter database mount;
alter database mount
*
ERROR at line 1:
ORA-01100: database already mounted

9. Open the database using resetlogs:

SQL> alter database open resetlogs;

Database altered.

10. Check the name and DBID:

SQL> select name from v$database;

NAME
---------
CLONE

SQL> select dbid from v$database;

DBID
----------
1972233550
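As promised in step 3, here is a sketch of what initclone.ora might contain. It is not from the original post: the drive letters and directory names are assumptions modeled on the TEST/CLONE setup above, and the remaining parameters would simply be copied from the target's pfile:

# initclone.ora - illustrative sketch only; adjust all paths to your layout
db_name=clone
control_files='C:\oracle\oradata\clone\control01.ctl'
# map the target's datafile directory to the clone's (hypothetical paths)
db_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')
# map the target's online redo log directory to the clone's (hypothetical paths)
log_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')
# ...plus the memory, dump-destination, and compatibility parameters copied from TEST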
step by step standby database configuration in 10g
Filed under: Dataguard, creation of standby database in 10g | by Deepak | December 9, 2009

Oracle 10g - Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with the other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. The PRIMARY database and STANDBY database are located on different machines at different sites. I use a Flash Recovery Area and OMF.

I. Before you get started:

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.
2. Install Oracle database software without the starter database on the STANDBY server and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases, and that the Oracle home paths are identical.
3. Test the STANDBY Database creation on a test environment first before working on the Production database.

II. On the PRIMARY Database Side:

1. Enable forced logging on your PRIMARY database:

SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.

1) To check if a password file already exists, run the following command:

SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:

On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y

On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y

(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.

1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:

SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:

SQL> select group#, member from v$logfile;

3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, and I created 3 STANDBY redo log groups using the following commands:

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the STANDBY redo log groups creation, run the following query:

SQL> select * from v$standby_log;

4. Enable archiving on the PRIMARY.

If your PRIMARY database is not already in Archive Log mode, enable archive log mode:

SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list;
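Before moving on to the initialization parameters, you can confirm that the first steps took effect. This check is not part of the original post; it is a small sketch using the standard FORCE_LOGGING and LOG_MODE columns of V$DATABASE:

SQL> select force_logging, log_mode from v$database;

The query should return YES and ARCHIVELOG once steps 1 and 4 have been completed.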
5. Set PRIMARY database initialization parameters.

Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE).

1) Create a pfile from the spfile for the PRIMARY database:

On Windows:
SQL> create pfile='\database\pfilePRIMARY.ora' from spfile;

On UNIX:
SQL> create pfile='/dbs/pfilePRIMARY.ora' from spfile;

(Note: prefix the file name with your Oracle home path.)

2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly.)

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location:
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location:
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create the SPFILE and restart the PRIMARY database using the newly created SPFILE. (Data Guard must use an SPFILE. On UNIX, use the /dbs path instead of \database.)

SQL> shutdown immediate;
SQL> startup nomount pfile='\database\pfilePRIMARY.ora';
SQL> create spfile from pfile='\database\pfilePRIMARY.ora';
SQL> shutdown immediate;
SQL> startup;

III. On the STANDBY Database Site:

1. Create a copy of the PRIMARY database data files on the STANDBY server.

On the PRIMARY DB:
SQL> shutdown immediate;

On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE on Windows. On UNIX, create the directory accordingly.
2) Copy the data files and temp files over.
3) Create directories (multiplexing) for the online logs, for example E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.
4) Copy the online logs over.

2. Create a control file for the STANDBY database.

On the PRIMARY DB, create a control file for the STANDBY to use:

SQL> startup mount;
SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> ALTER DATABASE OPEN;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.

1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.
2) Rename it to pfileSTANDBY.ora and modify the file as follows. (Here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly. Note: not all the parameter entries are listed here.)

*.audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
*.background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
*.core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
*.user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
*.compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name='PRIMARY'
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location:
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location:
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

4. Copy the STANDBY control file 'STANDBY.ctl' from the PRIMARY to the STANDBY destinations specified in the control_files parameter.

5. Copy the PRIMARY password file to the STANDBY server and rename it to pwdSTANDBY.ora. On Windows copy it to the \database folder, and on UNIX copy it to the /dbs directory.

6. For Windows, create a Windows-based service (optional):

$ oradim -NEW -SID STANDBY -STARTMODE manual

7. Configure listeners for the PRIMARY and STANDBY databases.

1) On the PRIMARY system: use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

2) On the STANDBY server: use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener:
$ lsnrctl stop
$ lsnrctl start

8. Create Oracle Net service names.

1) On the PRIMARY system: use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

2) On the STANDBY system: use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY
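The original post drives this step through Oracle Net Manager; what Net Manager generates are entries in tnsnames.ora. Purely as an illustrative sketch (the host names and port are assumptions, not values from the post), the two service names might look like this on both servers:

# tnsnames.ora - sketch only; primary_host and standby_host are placeholders
PRIMARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary_host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = PRIMARY))
  )

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = STANDBY))
  )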
9. On the STANDBY server, set up the ORACLE_HOME and ORACLE_SID environment variables to point to the STANDBY database.

10. On the STANDBY server, create all required directories for the dump and archived log destinations: the adump, bdump, cdump, and udump directories.

11. Start up nomount the STANDBY database and generate an spfile. (Note: prefix the pfile names with your Oracle home path.)

On Windows:
SQL> startup nomount pfile='\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='\database\pfileSTANDBY.ora';

Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup mount;

On UNIX:
SQL> startup nomount pfile='/dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='/dbs/pfileSTANDBY.ora';

Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup mount;

12. Start Redo apply.

On the STANDBY database, to start redo apply:

SQL> alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:

SQL> alter database recover managed standby database cancel;

13. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply. To start real-time apply:

SQL> alter database recover managed standby database using current logfile disconnect;

14. To create multiple STANDBY databases, repeat this procedure.

IV. Verify the STANDBY database is performing properly:

1) On the STANDBY, perform a query:

SQL> select sequence#, first_time, next_time from v$archived_log;

2) On the PRIMARY, force a logfile switch:

SQL> alter system switch logfile;

3) On the STANDBY, verify that the archived redo log files were applied:

SQL> select sequence#, applied from v$archived_log order by sequence#;

V. Maintenance:

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers. I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY. For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:

$ rman target /@STANDBY
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the STANDBY server, I run the following once a month:

RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server. Refer to section II, step 2 to update/recreate the password file for the STANDBY database.
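One further check I find useful (not part of the original post): the standard V$MANAGED_STANDBY view on the STANDBY shows what the RFS and MRP processes are doing, which complements the v$archived_log queries above. A minimal sketch:

SQL> select process, status, sequence# from v$managed_standby;

An MRP0 row with status APPLYING_LOG (or WAIT_FOR_LOG) indicates that managed recovery is running.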