Administering databases in self-managed environments such as on premises or Amazon Elastic Compute Cloud (Amazon EC2) requires you to spend time and resources on database administration tasks such as provisioning, scaling, patching, backups, and configuring for high availability. Organizations are moving their self-managed Oracle databases to AWS managed database services to offload this undifferentiated heavy lifting and gain access to cloud services.
In this post, we describe the key factors you should consider for a migration and then dive into an example of performing a physical migration of a self-managed Oracle database to Amazon EC2 and Amazon Relational Database Service (Amazon RDS) Custom for Oracle using Oracle transportable tablespaces. You can migrate a self-managed Oracle database in one of several ways; depending on your environment and use case, you can choose the migration path that fits best. Transportable tablespaces provide a fast and efficient mechanism for moving large volumes of data from one database system to another, including across different platforms and operating systems.
Key factors to consider for migration
- Make sure there is enough storage allocated on the target database server, so that the source datafiles can be copied.
- The tablespaces being transported can be either dictionary managed or locally managed. Starting with Oracle 9i, the transported tablespaces are not required to be of the same block size as the destination database standard block size.
- If you’re transporting a tablespace of a different block size than the standard block size of the database receiving the tablespace set, you must first have a DB_nK_CACHE_SIZE initialization parameter entry in the receiving database parameter file. For example, if you’re transporting a tablespace with an 8 K block size into a database with a 4 K standard block size, then you must include a DB_8K_CACHE_SIZE initialization parameter entry in the parameter file. If it’s not already included in the parameter file, you can set this parameter using the ALTER SYSTEM SET statement.
- You can reduce the transportable tablespace downtime using cross-platform incremental backup. Refer to the Oracle Doc ID 2471245.1.
- Make sure there is network connectivity using the public internet, VPN, or AWS Direct Connect, and make sure there is enough bandwidth to support the data transfer between the source and target databases. Size the bandwidth according to the amount of data to be transferred and your migration window.
- The source and target Oracle Database versions and DB instance OS types must be compatible. See the Oracle documentation on transporting data for compatibility considerations.
Based on these factors, you can choose offline or online migration using physical migration, logical migration, or a combination of both physical and logical migration approaches.
In this post, we focus on the steps to migrate an Oracle database on Amazon EC2 to Amazon RDS Custom for Oracle using transportable tablespaces. You can use any supported Oracle source and Oracle target.
After you determine whether you need to convert the data before migration and check character set compatibility, you can complete the migration steps.
The following diagram illustrates the architecture for the transportable tablespace feature.
The source can be on premises or Oracle on Amazon EC2. In the following architecture, we use the transportable tablespace feature to move tablespaces from on premises to RDS Custom for Oracle. You can use Amazon Elastic File System (Amazon EFS) or Amazon Simple Storage Service (Amazon S3) as the landing zone for storing the files.
Make sure you meet the following prerequisites:
- You should have operating system access on both the source and target database host
- The source is an Oracle database running on premises or Oracle on Amazon EC2
- If the source platform and target platform are different, then refer to the section Check Endian set compatibility
- The target database is an RDS Custom for Oracle DB instance. The same steps can also be used for Oracle on Amazon EC2 as a target
- The source and target databases have identical character sets, or the destination database character set must be a superset of the source database character set
- Run the migration procedures as the OS user that owns the Oracle software installation (typically oracle)
Additionally, complete the following pre-migration checklist:
- Create a new, empty RDS Custom for Oracle DB instance
- Create the tablespaces in the target database with the same names as in the source database for each tablespace that will be transported. The new target database initially consists of just the SYSTEM, SYSAUX, UNDO, USERS, and temporary tablespaces
- If you want to import into different user schemas than the source, specify REMAP_SCHEMA in the Data Pump import command so that all database objects (such as tables and indexes) are created in the appropriate schema on the target. By default, objects are created in the same user schema as in the source database, and those users must already exist in the target database. If they don't exist, the import utility returns an error
- Compare the sizes of the SYSTEM, SYSAUX, UNDO, and temporary tablespaces of the target database with those of the source database. To avoid space issues, the best practice is to make the target tablespaces the same size as or larger than the corresponding tablespaces on the source database
- The best practice is to have the same number of log file groups and members in the target database as in the source. Also, the size of the log files for each log file group in the new target database should be the same as or larger than in the source database
- As a landing zone for data file copies, you can use Amazon Elastic Block Storage (Amazon EBS), Amazon Elastic File System (Amazon EFS), or Amazon Simple Storage Service (Amazon S3)
- Make sure you have network connectivity between the source and Amazon RDS Custom for Oracle database servers, and the DB port where the database is listening (default port is 1521) is open for communication between the two
- The database options and components used in the source database should be installed on the target database
Check Endian set compatibility
Many, but not all, platforms are supported for cross-platform data transport. You can query the V$TRANSPORTABLE_PLATFORM view to see the platforms that are supported and to determine each platform's endian format (byte ordering). The following query displays the platforms that support cross-platform data transport:
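A sketch of that query, run from SQL*Plus on either database:

```sql
-- List all platforms that support cross-platform transport,
-- along with each platform's endian format (byte ordering)
SELECT platform_id, platform_name, endian_format
FROM   v$transportable_platform
ORDER  BY platform_id;
```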
If the source platform and the target platform are of the same endianness, then the data is transported from the source platform to the target platform without any data conversion.
Refer to the section Convert data if the endian formats are different.
Check character set compatibility
Run the following commands on the source and destination databases to verify that their character sets are compatible:
On the source, use the following query:
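For example (a sketch; NLS_CHARACTERSET is the standard Oracle dictionary parameter for the database character set):

```sql
-- Source database: display the database character set
SELECT value
FROM   nls_database_parameters
WHERE  parameter = 'NLS_CHARACTERSET';
```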
On the target, use the following query:
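The same lookup applies on the target:

```sql
-- Target database: display the database character set
SELECT value
FROM   nls_database_parameters
WHERE  parameter = 'NLS_CHARACTERSET';
```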
Pick a self-contained set of tablespaces from the source database
There may be logical or physical dependencies between the database objects in the transportable set and the database objects outside of the transportable set. You can only transport a tablespace set that is self-contained, that is, none of the database objects inside a tablespace set are dependent on any of the database objects outside of that tablespace set.
Some examples of self-contained tablespace violations are:
- An index inside the set of tablespaces is for a table outside of the set of tablespaces. Note that it’s not a violation if a corresponding index for a table is outside of the set of tablespaces
- A partitioned table is partially contained in the set of tablespaces
- The tablespace set that you want to copy must contain either all partitions of a partitioned table, or none of the partitions of a partitioned table. To transport a subset of a partitioned table's partitions, you must exchange the partitions into tables
- A referential integrity constraint points to a table across a set boundary. When transporting a set of tablespaces, you can choose to include referential integrity constraints. However, doing so can affect whether a set of tablespaces is self-contained. If you decide not to transport constraints, then the constraints aren’t considered as pointers
- A table inside the set of tablespaces contains a LOB column that points to LOB columns outside the set of tablespaces or vice versa
- An XML DB schema (*.xsd) that was registered by user A imports a global schema that was registered by user B, and the following is true: the default tablespace for user A is tablespace A, the default tablespace for user B is tablespace B, and only tablespace A is included in the set of tablespaces
You can use the following statement to determine whether tablespace HR is self-contained, with referential integrity constraints taken into consideration (indicated by TRUE):
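A sketch of the check for the HR tablespace used in this post (run as a user with EXECUTE privilege on DBMS_TTS):

```sql
-- Check whether the HR tablespace set is self-contained;
-- TRUE includes referential integrity constraints in the check
EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('HR', TRUE);

-- Any violations found by the check are recorded in this view
SELECT * FROM transport_set_violations;
```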
After running the DBMS_TTS.TRANSPORT_SET_CHECK procedure, you can see all the violations by selecting from the TRANSPORT_SET_VIOLATIONS view. If the set of tablespaces is self-contained, this view is empty.
You must resolve any violations before tablespaces are transportable. As noted in the next task, one choice for bypassing the integrity constraint violation is not to export the integrity constraints.
Generate a transportable tablespace set
After ensuring that you have a self-contained set of tablespaces that you want to transport, generate a transportable tablespace set with the following steps:
- Start SQL*Plus and connect to the database as an administrator or as a user who has either the ALTER TABLESPACE or MANAGE TABLESPACE system privilege.
- Make all tablespaces in the set read-only:
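For the HR tablespace in this example:

```sql
-- Place the tablespace in read-only mode before the export
ALTER TABLESPACE hr READ ONLY;
```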
- Run the Data Pump export utility as a user with the DATAPUMP_EXP_FULL_DATABASE role and specify the tablespaces in the transportable set:
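A sketch of the export command; the credentials, directory object, and file names are placeholders for this example:

```shell
# Export the metadata for the transportable set (HR tablespace)
expdp system/password \
  DUMPFILE=expdat.dmp \
  DIRECTORY=data_pump_dir \
  TRANSPORT_TABLESPACES=hr \
  LOGFILE=tts_export.log
```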
You must always specify TRANSPORT_TABLESPACES, which indicates that the transportable option is used. This example specifies the following additional Data Pump parameters:
- DUMPFILE – Specifies the name of the structural information export dump file to be created, expdat.dmp
- DIRECTORY – Specifies the directory object that points to the operating system or Oracle Automatic Storage Management location of the dump file
- DATA_PUMP_DIR – In a non-CDB, this directory object is created automatically. However, the DATA_PUMP_DIR directory object is not created automatically in a PDB. Therefore, when exporting from or importing into a PDB, create a directory object in the PDB and specify that directory object when you run Data Pump
- LOGFILE – Specifies the log file to create for the export utility. In this example, the log file is created in the same directory as the dump file, but you can specify any other directory for storing the log file
To perform a transport tablespace operation with a strict containment check, use the TRANSPORT_FULL_CHECK parameter, as shown in the following example:
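A sketch of the export with a strict containment check; credentials, directory object, and file names are placeholders:

```shell
# TRANSPORT_FULL_CHECK=YES enforces a strict two-way containment check
expdp system/password \
  DUMPFILE=expdat.dmp \
  DIRECTORY=data_pump_dir \
  TRANSPORT_TABLESPACES=hr \
  TRANSPORT_FULL_CHECK=YES \
  LOGFILE=tts_export.log
```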
With TRANSPORT_FULL_CHECK=YES, the Data Pump export utility verifies that there are no dependencies and that the tablespace set being transported is self-contained; otherwise, the export fails. You must resolve any violations and then run this task again.
The expdp utility displays the names and paths of the dump file and the data files on the command line, as shown in the following example. These are the files that you need to transport to the target database. Also, check the log file for any errors.
Transport the export dump file to the target database
To copy or move the dump files and the data file from source to target, you can use any of the following methods.
If both the source and destination are file systems, you can use:
- Any facility for copying flat files (for example, an operating system copy utility or FTP)
- DBMS_FILE_TRANSFER package
If either the source or target is an automatic storage management (ASM) disk group, you can use:
- FTP to or from the /sys/asm virtual folder in the XML DB repository
Transport the dump file to the directory pointed to by the DATA_PUMP_DIR directory object, or to any other directory of your choosing. The new location must be accessible to the target database.
At the target database, run the following query to determine the location of DATA_PUMP_DIR:
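A sketch of that lookup:

```sql
-- Find the operating system path behind the DATA_PUMP_DIR directory object
SELECT directory_name, directory_path
FROM   dba_directories
WHERE  directory_name = 'DATA_PUMP_DIR';
```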
Move the datafiles of the transportable tablespace set
Transport the data files of the tablespaces (for this post, HR.dbf) from the source database to a directory on the target database. The directory should be part of the instance storage or Amazon EFS that is mounted on the instance.
If you’re transporting the HR tablespace to a different platform, you can run the following query on each platform:
If the query returns a row, the platform supports cross-platform tablespace transport.
The following is the query result from the source platform:
The following is the result from the target platform:
In this example, you can see that the endian formats are the same. Because no endianness conversion of the tablespaces is needed, you can transfer the files using any file transfer method.
Convert data if the endian formats are different
There are two ways to convert the data when the endian formats are different:
- Use either the GET_FILE or the PUT_FILE procedure in the DBMS_FILE_TRANSFER package to transfer the data files. These procedures convert the data files to the target platform's endian format automatically.
- RMAN can transport databases, data files, and tablespaces across platforms using backup sets. Performing cross-platform data transport with backup sets enables you to use block compression to reduce the size of backups. This improves backup performance and reduces the time taken to transport backups over the network.
For the detailed steps on using RMAN backup sets for cross-platform data transfer to a different endian platform, follow Oracle Support Doc ID 2013271.1.
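As a sketch, a DBMS_FILE_TRANSFER.PUT_FILE call pushing a data file to the target over a database link might look like the following; SRC_DIR, DEST_DIR, and target_db_link are hypothetical names for this example:

```sql
BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'SRC_DIR',        -- directory object on the source
    source_file_name             => 'hr.dbf',         -- data file to transfer
    destination_directory_object => 'DEST_DIR',       -- directory object on the target
    destination_file_name        => 'hr.dbf',
    destination_database         => 'target_db_link'  -- database link to the target
  );
END;
/
```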
Restore tablespaces to read/write mode (Optional)
Make the transported tablespaces read/write again at the source database:
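For the HR tablespace in this example:

```sql
-- Return the source tablespace to read/write mode
ALTER TABLESPACE hr READ WRITE;
```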
You can postpone this task to first ensure that the import process succeeds.
Import the tablespace set
To complete the transportable tablespaces operation, import the tablespace set.
Run the Data Pump import utility as a user with the DATAPUMP_IMP_FULL_DATABASE role to import the tablespace metadata:
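A sketch of the import command; the credentials, data file path, and the hr_new target schema are placeholders for this example:

```shell
# Plug the transported HR tablespace into the target database
impdp system/password \
  DUMPFILE=expdat.dmp \
  DIRECTORY=data_pump_dir \
  TRANSPORT_DATAFILES='/u01/app/oracle/oradata/ORCL/hr.dbf' \
  REMAP_SCHEMA=hr:hr_new \
  LOGFILE=tts_import.log
```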
This example specifies the following Data Pump parameters:
- DUMPFILE – Specifies the exported file containing the metadata for the tablespaces to be imported
- DIRECTORY – Specifies the directory object that identifies the location of the export dump file. The directory must exist and must grant read and write privileges to the user running the import
- TRANSPORT_DATAFILES – Lists all the data files containing the tablespaces to be imported. If there are many data files, you can specify the TRANSPORT_DATAFILES parameter multiple times in a parameter file specified with the PARFILE parameter
- REMAP_SCHEMA – Changes the ownership of database objects. If you don't specify REMAP_SCHEMA, all database objects (such as tables and indexes) are created in the same user schema as in the source database, and those users must already exist in the target database. If they don't exist, the import utility returns an error
- LOGFILE – Specifies the file name of the log file to be written by the import utility. In this example, the log file is written to the directory from which the dump file is read, but it can be written to a different location
After this statement runs successfully, all tablespaces in the set being copied remain in read-only mode. Check the import log file to ensure that no error has occurred.
When dealing with a large number of data files, you can use an import parameter file to make the task definition easier. For example, you can run the Data Pump import utility as follows:
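A sketch of the import using a parameter file; credentials are placeholders:

```shell
impdp system/password PARFILE=par.f
```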
The par.f parameter file contains the following:
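A sketch of the parameter file contents; the data file paths are placeholders for this example:

```text
DUMPFILE=expdat.dmp
DIRECTORY=data_pump_dir
TRANSPORT_DATAFILES='/u01/app/oracle/oradata/ORCL/hr.dbf'
TRANSPORT_DATAFILES='/u01/app/oracle/oradata/ORCL/hr02.dbf'
LOGFILE=tts_import.log
```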
Complete the following tasks to prepare the target Oracle database for use. Run the following queries on the source and target databases to verify that the data was exported and imported completely, without errors:
- View all users that exist in the database:
- View the total number of objects in the database:
- View a list of all the tables owned by the database schemas that were migrated. Log in to the individual schemas; in this example, connect as the HR user and run the following commands.
- View the exact size in MB occupied by each object in the tablespace:
- View the total space occupied in MB:
- View the size of the database:
- Switch transported tablespaces to READ WRITE mode at the destination:
- Revert tablespaces to READ WRITE mode at the source (Optional):
You can make the source tablespaces read/write after either the export or the import completes successfully. If you already made them read/write after the export, you can skip this step.
- Create and start appropriate database services or network connectivity on the new destination database.
- Redirect applications to destination database.
- Clean up the staging directories.
- Remove unneeded files from the source and destination hosts.
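The verification queries and tablespace-mode changes in the checklist above can be sketched as follows; HR is the example schema and tablespace from this post:

```sql
-- All users that exist in the database
SELECT username FROM dba_users ORDER BY username;

-- Total number of objects in the database
SELECT COUNT(*) FROM dba_objects;

-- Tables owned by the migrated schema (run while connected as HR)
SELECT table_name FROM user_tables;

-- Space occupied per object in the transported tablespace, in MB
SELECT owner, segment_name, ROUND(bytes/1024/1024, 2) AS size_mb
FROM   dba_segments
WHERE  tablespace_name = 'HR';

-- Total space occupied, in MB
SELECT ROUND(SUM(bytes)/1024/1024, 2) AS total_mb FROM dba_segments;

-- Size of the database data files, in MB
SELECT ROUND(SUM(bytes)/1024/1024, 2) AS db_mb FROM dba_data_files;

-- Switch the transported tablespace to read/write mode at the destination
ALTER TABLESPACE hr READ WRITE;
```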
Limitations on transportable tablespaces
Be aware of the following limitations for transportable tablespaces:
- You can’t include administrative tablespaces such as SYSTEM and SYSAUX in a transportable tablespace set
- When transporting a tablespace set, objects with underlying objects (such as materialized views) or contained objects (such as partitioned tables) aren’t transportable unless all of the underlying or contained objects are in the tablespace set
- Transportable tablespaces can’t transport tables with TIMESTAMP WITH TIMEZONE (TSTZ) data across platforms with different time zone file versions. The transportable tablespace operation skips these tables. You can export and import these tables conventionally
- When you transport a tablespace containing tables with TIMESTAMP WITH LOCAL TIME ZONE (TSLTZ) data between databases with different time zones, the tables with the TSLTZ data aren’t transported. You can export and import these tables conventionally
- The maximum size of a single file on RDS Custom for Oracle DB instances is 16 TiB (tebibytes)
- Tablespaces that don’t use block encryption but contain tables with encrypted columns can’t be transported. You must use Oracle Data Pump to export and import the tablespace’s schema objects. You can take advantage of Oracle Data Pump that enables you to maintain encryption for the data while it is being exported and imported
The limitations described in General Limitations on Transporting Data apply to transportable tablespaces.