Migrate Data with DBMS_CLOUD_IMPORT
DBMS_CLOUD_IMPORT enables you to import data from Oracle and non-Oracle databases into Oracle Autonomous AI Database using a simple, consistent approach.
- Migrate Data with DBMS_CLOUD_IMPORT: When working across multiple database systems, data movement can be complex due to differences in technologies, formats, and tools. DBMS_CLOUD_IMPORT simplifies this process by providing a unified approach to import data across platforms, eliminating the need to manage multiple tools or workflows.
- Prerequisites: Lists the prerequisites for performing the import into the Autonomous AI Database.
- Import Data using DBMS_CLOUD_IMPORT: Describes how to import data into the Autonomous AI Database using DBMS_CLOUD_IMPORT.
- Monitor and Diagnose Heterogeneous Import: To diagnose issues and monitor the progress of data import operations, you can query data dictionary views that provide detailed information about task status, execution progress, parallel operations, and related objects.
Parent topic: Migrate to Autonomous AI Database
Migrate Data with DBMS_CLOUD_IMPORT
When working across multiple database systems, data movement can be complex due to differences in technologies, formats, and tools. DBMS_CLOUD_IMPORT simplifies this process by providing a unified approach to import data across platforms, eliminating the need to manage multiple tools or workflows.
DBMS_CLOUD_IMPORT also supports high-performance and reliable data transfer. The import process uses parallel execution to improve throughput, and if an import is interrupted, such as during maintenance or a loss of connection to the source database, it automatically resumes from where it left off, ensuring completion without having to re-initiate the import.
You can import data from Oracle databases and supported non-Oracle databases, including MySQL, PostgreSQL, and Amazon Redshift. For Oracle sources, both data and database objects are imported. For non-Oracle sources, the import focuses on data movement with automatic data type conversion to Oracle-compatible formats.
DBMS_CLOUD_IMPORT also supports flexible data selection. You can import an entire database or a subset of data, such as selected schemas or tables, based on your requirements. Oracle also provides data dictionary views to monitor progress and track import status.
Key Benefits
- Unified data import across platforms: Import data from Oracle and non-Oracle databases without using multiple tools.
- High-performance data transfer: Move large volumes of data using parallel execution.
- Built-in resiliency: Resume import operations automatically after interruptions, including maintenance or a loss of connection to the source database.
- Flexible data selection: Import a full database or a subset of data, including specific schemas or tables.
Parent topic: Migrate Data with DBMS_CLOUD_IMPORT
Prerequisites
Lists the prerequisites for performing the import into the Autonomous AI Database.
Before starting the import, ensure that the following prerequisites are met.
- You must be logged in as the ADMIN user.
- DBMS_CLOUD_IMPORT subprograms access the source database using a database link created implicitly as part of the import task. To establish this connection, you must create a credential object with the remote database credentials and reference it when creating the import task. See CREATE_CREDENTIAL Procedure for more information.
- For non-Oracle source databases, specify longtovarchar => 'true' in the gateway_params parameter when calling DBMS_CLOUD_IMPORT.CREATE_IMPORT_TASK. This is required because metadata in non-Oracle databases consists of LONG columns. For example: gateway_params => JSON_OBJECT('db_type' VALUE <db_type>, 'longtovarchar' VALUE 'true') (expanded below, after this list).
- Ensure that the database specified by service_name can connect to and access the target schema.
- Import jobs resume from the point of interruption, such as after a planned maintenance event or an unexpected outage. Parallel processing and automatic resume are supported for Oracle, MySQL, PostgreSQL, and Amazon Redshift source databases. For non-Oracle source databases, the following prerequisites are required to enable parallel processing and automatic resume (additional prerequisites may apply depending on the source database type):
  - The source table is range-partitioned.
  - Histogram statistics exist for at least one numeric column.
  - A numeric index or primary key is present.
If these prerequisites are not met, the table is copied using CREATE TABLE AS SELECT (CTAS). In this case, parallel processing is not supported, and if the job is interrupted, it restarts from the beginning of the table instead of resuming from the point of interruption.
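As noted in the list above, non-Oracle sources require the longtovarchar setting in gateway_params. For illustration, the argument from the prerequisite list is expanded here with comments; <db_type> is a placeholder for your source database type:

gateway_params => JSON_OBJECT(
                    'db_type'       VALUE <db_type>,   -- source database type keyword
                    'longtovarchar' VALUE 'true'       -- required: non-Oracle metadata is stored in LONG columns
                  )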
Additional prerequisites may be required to support parallel processing and reliable restartability, depending on the source database type. See the following for details:
- PostgreSQL Source Database
  - Create the required views on the source PostgreSQL database. To support parallel processing and reliable restartability when migrating from PostgreSQL, the required views must exist on the source database. If the views are not present, the import uses CREATE TABLE AS SELECT (CTAS); in that mode, parallel processing is not supported, and if a table load is interrupted, it restarts from the beginning. See CREATE TABLE AS SELECT for more information.
  - You can create these views using Oracle-provided scripts available on GitHub. Review and run the scripts on the source PostgreSQL database before starting the import.
  - See PostgreSQL 18.3 Documentation for more information.
- MySQL Source Database
  - schema_list is not supported for MySQL sources because MySQL does not use schemas. When you invoke CREATE_IMPORT_TASK, specify an empty array for schema_list ([]), as shown in the sketch after this list.
  - The value you specify for service_name is used as the schema name.
  - Create the required views on the source MySQL database. To support parallel processing and reliable restartability when migrating from MySQL, the required views must exist on the source database. If the views are not present, the import uses CREATE TABLE AS SELECT (CTAS); in that mode, parallel processing is not supported, and if a table load is interrupted, it restarts from the beginning. See CREATE TABLE AS SELECT for more information.
  - You can create these views using Oracle-provided scripts available on GitHub. Review and run the scripts on the source MySQL database before starting the import.
  - See MySQL Reference for more information.
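For illustration only, a minimal sketch of how these MySQL-specific settings might appear in a CREATE_IMPORT_TASK call. Only task_name, service_name, schema_list, and gateway_params are cited in this topic; the credential parameter name, the db_type keyword, and the empty-array representation are assumptions to confirm against the CREATE_IMPORT_TASK Procedure reference:

BEGIN
  DBMS_CLOUD_IMPORT.CREATE_IMPORT_TASK(
    task_name       => 'mysql_import_job',
    credential_name => 'mysql_cred',          -- assumed parameter name for the credential object
    service_name    => '<mysql_database>',    -- used as the schema name on the target
    schema_list     => '[]',                  -- empty array, required for MySQL (representation assumed)
    gateway_params  => JSON_OBJECT('db_type' VALUE <db_type>,  -- MySQL type keyword per the reference
                                   'longtovarchar' VALUE 'true')
  );
END;
/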
- Amazon Redshift Source Database
  - Grant privileges for database link access. To read data from remote tables, you must ensure the account you use to create the database link has the required privileges, for example:

    GRANT USAGE ON SCHEMA <remote_schema> TO <dblink_user>;
    GRANT SELECT ON <remote_schema>.<remote_table> TO <dblink_user>;

  - Ensure metadata visibility for non-public schemas. If the remote schema is not public and the import needs to query metadata (for example, via pg_table_def), you must include the schema in the database link user's search_path, for example:

    ALTER USER <dblink_user> SET search_path TO <new_value>;

  - See Amazon Redshift Documentation for more information.
Parent topic: Migrate Data with DBMS_CLOUD_IMPORT
Import Data using DBMS_CLOUD_IMPORT
Describes how to import data into the Autonomous AI Database using DBMS_CLOUD_IMPORT.
To import data into Autonomous AI Database, use the DBMS_CLOUD_IMPORT.CREATE_IMPORT_TASK procedure. This procedure lets you import data from Oracle and supported non-Oracle databases by specifying the required connection details and, optionally, the objects to import.
You can import an entire database or a subset of data, such as specific schemas or tables, by specifying parameters such as schema_list or table_list. When you run this procedure, Oracle Database creates an Oracle Scheduler job to manage the import task. The job uses the provided connection details and credentials to create a database link to the source database and transfer data to Autonomous AI Database.
The behavior of the import depends on the source database type:
- Oracle-to-Oracle imports: Both data and database objects are migrated. Indexes, constraints, and partitions are automatically created in the target Autonomous AI Database. If the import is interrupted (for example, due to maintenance), it resumes from the point of interruption.
- Imports from non-Oracle databases: Only data is migrated. Keys, indexes, constraints, and other dependent objects are not created. Partitions are created only if the source table is range-partitioned. Resume capability depends on prerequisites; otherwise, the import restarts from the beginning.
You must run the import as the ADMIN user. The target Autonomous AI Database remains available during the import; however, Oracle recommends avoiding other activity on the target database until the import completes. If an import job is interrupted (for example, due to planned maintenance or an unexpected outage), it may resume from the point of interruption or restart, depending on the source database type and whether prerequisites are satisfied. You can also use suspend and resume to temporarily pause and then continue the job.
- Create an Import Task: Describes the steps to create an import task.
- Suspend and Resume Import: Describes how to suspend and resume an import operation.
- Monitor Import Progress: After you start an import task, the scheduler job runs in the background and the Autonomous AI Database records progress at both the task and table levels.
- Drop an Import Task: Describes the steps to drop an import task.
Parent topic: Migrate Data with DBMS_CLOUD_IMPORT
Create an Import Task
Describes the steps to create an import task.
Before you create and start the import task, set up authentication to the source database. DBMS_CLOUD_IMPORT uses a credential object to connect securely to the source (and to create the required database link for the task).
Follow these steps to create an import task:
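1. Create a credential object that holds the source database credentials, using DBMS_CLOUD.CREATE_CREDENTIAL:

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'db1_cred',
    username        => '<username>',
    password        => '<password>'
  );
END;
/

2. Create the import task with DBMS_CLOUD_IMPORT.CREATE_IMPORT_TASK, referencing the credential. The following is a minimal sketch: only parameters cited in this topic appear, the credential parameter name and the schema_list format are assumptions, and the authoritative signature is in the CREATE_IMPORT_TASK Procedure reference:

BEGIN
  DBMS_CLOUD_IMPORT.CREATE_IMPORT_TASK(
    task_name       => 'orcl_import_job',
    credential_name => 'db1_cred',               -- credential from step 1 (parameter name assumed)
    service_name    => '<source_service_name>',  -- service that identifies the source database
    schema_list     => '["<schema_name>"]'       -- optional subset of schemas (format assumed)
  );
END;
/

For a non-Oracle source, also pass the gateway_params argument described in the prerequisites.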
Parent topic: Import Data using DBMS_CLOUD_IMPORT
Suspend and Resume Import
Describes how to suspend and resume an import operation.
If needed, you can temporarily pause the import using the SUSPEND_IMPORT_TASK procedure:
BEGIN
DBMS_CLOUD_IMPORT.SUSPEND_IMPORT_TASK(
task_name => 'orcl_import_job'
);
END;
/

This stops the scheduler job while preserving all progress and metadata. See SUSPEND_IMPORT_TASK Procedure for more information.
To continue the import, use the RESUME_IMPORT_TASK procedure:
BEGIN
DBMS_CLOUD_IMPORT.RESUME_IMPORT_TASK(
task_name => 'orcl_import_job'
);
END;
/

The job resumes from the last checkpoint, skipping completed tables and continuing remaining work.
Resume behavior depends on the source database type and prerequisites. When the prerequisites for parallel processing and automatic resume are met, the import continues from the point of interruption. Otherwise, the import uses CREATE TABLE AS SELECT (CTAS); in this mode, parallel processing is not supported, and if the job is interrupted, it restarts from the beginning of the table.
See RESUME_IMPORT_TASK Procedure for more information.
Parent topic: Import Data using DBMS_CLOUD_IMPORT
Monitor Import Progress
After you start an import task, the scheduler job runs in the background and the Autonomous AI Database records progress at both the task and table levels.
- DBA_DATA_IMPORT_TASK_STATUS: Shows task-level progress, including overall status, percent complete, and a summary of activity (for example, tables currently in progress).
- DBA_DATA_IMPORT_TABLE_STATUS: Shows table-level progress for each table being imported, including status and error details for failed tables.
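To check overall task progress, you can query the task-level view first. The following is a minimal sketch; the IMPORT_TASK_NAME column is an assumption, by analogy with the table-level view shown below:

SELECT *
  FROM dba_data_import_task_status
 WHERE import_task_name = 'ORCL_IMPORT_JOB';  -- column name assumed by analogy with the table-level view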
For example, query DBA_DATA_IMPORT_TABLE_STATUS to view table-level progress and errors. The STATUS column indicates the current state of each table, such as SUCCEEDED, FAILED, LOADING, or STOPPED.
SELECT import_task_name, schema_name, schema_object, status
  FROM dba_data_import_table_status;

The output is similar to the following:
IMPORT_TASK_NAME   SCHEMA_NAME   SCHEMA_OBJECT   STATUS
------------------ ------------- --------------- ----------
IMPORT_JOB1        SALES         ORDERS          SUCCEEDED
IMPORT_JOB1        SALES         CUSTOMERS       SUCCEEDED
IMPORT_JOB1        HR            EMPLOYEES       LOADING
IMPORT_JOB1        HR            DEPARTMENTS     FAILED

When the import completes, the scheduler job is automatically disabled. See Monitor and Diagnose Heterogeneous Import for more information.
Parent topic: Import Data using DBMS_CLOUD_IMPORT
Drop an Import Task
Describes the steps to drop an import task.
After the import is complete and no longer needed, you can remove the import task. Dropping the task removes the scheduler job, database links, and associated metadata. It does not roll back objects or data that have already been created in the target schema and can leave an in-progress table partially loaded.
If you rerun the import for the same schema, you may need to manually clean up partially loaded tables. To identify incomplete tables, query data dictionary views such as DBA_DATA_IMPORT_TABLE_STATUS and review the table status before performing cleanup.
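For example, the following query uses the columns from the sample output in Monitor Import Progress to list tables that did not complete for a given task:

SELECT schema_name, schema_object, status
  FROM dba_data_import_table_status
 WHERE import_task_name = 'ORCL_IMPORT_JOB'
   AND status <> 'SUCCEEDED';  -- for example, FAILED, LOADING, or STOPPED

To drop the task, use the DROP_IMPORT_TASK procedure: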
BEGIN
DBMS_CLOUD_IMPORT.DROP_IMPORT_TASK(
task_name => 'orcl_import_job'
);
END;
/

This example deletes the ORCL_IMPORT_JOB import job. See DROP_IMPORT_TASK Procedure for more information.
Parent topic: Import Data using DBMS_CLOUD_IMPORT
Monitor and Diagnose Heterogeneous Import
To diagnose issues and monitor the progress of data import operations, you can query the following data dictionary views. These views provide detailed information about task status, execution progress, parallel operations, and related objects.
| View Name | Description |
|---|---|
| DBA_DATA_IMPORT_TASK_STATUS View | Provides overall task-level details, including progress information such as the number of tables processed, loaded, and successfully completed. |
| DBA_DATA_IMPORT_TABLE_STATUS View | Displays the status of each table within an import task, including any errors encountered during processing. |
Parent topic: Migrate Data with DBMS_CLOUD_IMPORT