(Reprinted) Oracle expdp/impdp: usage examples and introduction


Data Pump is a complete redesign of the export/import process. Instead of ordinary SQL statements, it uses purpose-built APIs (such as the direct path API) to load and unload data much faster.

1. Data Pump Export (expdp)
Example:
SQL> create directory dpdata1 as '/u02/dpdata1';
SQL> grant read, write on directory dpdata1 to ananda;
$ expdp ananda/abc123 tables=CASES directory=DPDATA1 dumpfile=expCASES.dmp job_name=CASES_EXPORT

$ expdp ananda/abc123 tables=CASES directory=DPDATA1 dumpfile=expCASES_%U.dmp parallel=4 job_name=Cases_Export

include/exclude examples:
include=TABLE:"IN ('DB', 'TS')"
or include=TABLE:"LIKE '%E%'"
or include=FUNCTION,PACKAGE,PROCEDURE,TABLE:"='EMP'"
or exclude=SEQUENCE,TABLE:"IN ('EMP', 'DEPT')"
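
The quotation marks in filters like these are awkward to escape on the operating-system command line, so it is often easier to put them in a parameter file. A minimal sketch, assuming a parameter file named exp_filter.par (the file and dump file names here are made up for illustration) containing:

directory=DPDATA1
dumpfile=expFILTER.dmp
include=TABLE:"IN ('DB', 'TS')"

The export is then run with:

$ expdp ananda/abc123 parfile=exp_filter.par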

2. Data Pump Import (impdp)

1) Import from a dump file produced by expdp.
2) Copy a schema from one database to another database.
3) Copy all objects of one schema into another schema within the same database.

Example:

1) Import from a DMP file exported with expdp

impdp ananda/abc123 directory=dpdata1 dumpfile=expCASES.dmp job_name=cases_import

2) Copy a schema from one database to another database.
- 1. On the target database, create a database link (to be used as the network_link) pointing to the source database
(the source user TEST needs: grant exp_full_database to TEST;):
create public database link TOLINK
connect to TEST identified by oracle
using '(DESCRIPTION =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 10.0.20.199)(PORT = 1521))
  )
  (CONNECT_DATA =
    (SERVICE_NAME = orcl)
  )
)';

- 2. Run impdp on the target database server (this also works when importing from a lower-version source into a higher-version target):
impdp network_link=TOLINK schemas=test remap_schema=test:link2
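
A slightly fuller sketch of the same import, assuming a privileged user on the target side and adding an explicit directory and log file (the credentials, directory, and log file name are assumptions, not part of the original example):

impdp system/manager network_link=TOLINK schemas=test remap_schema=test:link2 directory=dpdata1 logfile=imp_link2.log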

3) Copy all objects of one schema into another schema within the same database.

- 1. Create a database link that connects back to the database itself:

create public database link system_self connect to system identified by "system" using 'orcl';

Database link has been created.

- 2. Copy the hr schema to the test schema:

impdp system/system network_link=system_self schemas=hr remap_schema=hr:test


First, create a directory object. This command does not create an actual operating-system directory; it is best run by an administrative user such as SYSTEM.
create directory dpdata1 as 'd:\test\dump';

Second, as an administrator, check the directory objects that exist (and also verify that the directory exists on the operating system: Oracle does not check whether it exists when the object is created, and if it does not, the export will fail with an error):
select * from dba_directories;

Third, grant the scott user permission to read from and write to the specified directory; this is best done by an administrator such as SYSTEM:
grant read, write on directory dpdata1 to scott;
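
To confirm that the grant took effect, directory privileges can be checked in DBA_TAB_PRIVS, where they are recorded under the directory name (a quick sketch):

select grantee, privilege from dba_tab_privs where table_name = 'DPDATA1';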

Fourth, export data
1) Export by schema (user)
expdp scott/tiger@orcl schemas=scott dumpfile=expdp.dmp DIRECTORY=dpdata1
2) Export with parallel processes
expdp scott/tiger@orcl directory=dpdata1 dumpfile=scott3.dmp parallel=40 job_name=scott3
3) Export by table name
expdp scott/tiger@orcl TABLES=emp,dept dumpfile=expdp.dmp DIRECTORY=dpdata1
4) Export with a query filter
expdp scott/tiger@orcl directory=dpdata1 dumpfile=expdp.dmp TABLES=emp query='WHERE deptno=20'
5) Export by tablespace
expdp system/manager DIRECTORY=dpdata1 DUMPFILE=tablespace.dmp TABLESPACES=temp,example
6) Export the entire database
expdp system/manager DIRECTORY=dpdata1 DUMPFILE=full.dmp FULL=y

Fifth, import (restore) data
1) Import into the specified schema
impdp scott/tiger DIRECTORY=dpdata1 DUMPFILE=expdp.dmp SCHEMAS=scott
2) Change the owner of a table (remap schema)
impdp system/manager DIRECTORY=dpdata1 DUMPFILE=expdp.dmp TABLES=scott.dept REMAP_SCHEMA=scott:system
3) Import a tablespace
impdp system/manager DIRECTORY=dpdata1 DUMPFILE=tablespace.dmp TABLESPACES=example
4) Import the entire database
impdp system/manager DIRECTORY=dump_dir DUMPFILE=full.dmp FULL=y
5) Append data to existing tables
impdp system/manager DIRECTORY=dpdata1 DUMPFILE=expdp.dmp SCHEMAS=system TABLE_EXISTS_ACTION=append
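
impdp can also remap the tablespace of the imported objects in the same run; a minimal sketch (the tablespace names users and example are assumptions):

impdp system/manager DIRECTORY=dpdata1 DUMPFILE=expdp.dmp SCHEMAS=scott REMAP_TABLESPACE=users:example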

Expdp/Impdp related parameters
EXPDP command-line options
1. ATTACH
This option attaches the client session to an existing export job. Syntax:
ATTACH=[schema_name.]job_name
schema_name specifies the schema name and job_name specifies the name of the export job. Note that when you use the ATTACH option, the command line can contain nothing other than the connection string and the ATTACH option itself, for example:
expdp scott/tiger ATTACH=scott.export_job
2. CONTENT
This option specifies the content to export. The default value is ALL.
CONTENT={ALL | DATA_ONLY | METADATA_ONLY}
When CONTENT is set to ALL, both the object definitions and all data are exported; DATA_ONLY exports only the object data; METADATA_ONLY exports only the object definitions.
expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dump CONTENT=METADATA_ONLY
3. DIRECTORY
Specifies the directory where the dump file and log file are written.
DIRECTORY=directory_object
directory_object specifies the name of a directory object. Note that this is an object created with the CREATE DIRECTORY statement, not an OS directory.
expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dump
Create a directory object:
CREATE DIRECTORY dump AS 'd:\dump';
Query the directory objects that have been created:
SELECT * FROM dba_directories;
4. DUMPFILE
Specifies the name of the dump file; the default name is expdat.dmp.
DUMPFILE=[directory_object:]file_name [, ...]
directory_object specifies the directory object name and file_name specifies the dump file name. Note that if directory_object is not given, the export tool automatically uses the directory object specified by the DIRECTORY option.
expdp scott/tiger DIRECTORY=dump1 DUMPFILE=dump2:a.dmp
5. ESTIMATE
Specifies the method used to estimate the disk space that the exported tables will occupy. The default value is BLOCKS.
ESTIMATE={BLOCKS | STATISTICS}
When set to BLOCKS, Oracle estimates the space by multiplying the number of data blocks occupied by the target objects by the data block size; when set to STATISTICS, it estimates the space from the most recent statistics on the objects.
expdp scott/tiger TABLES=emp ESTIMATE=STATISTICS DIRECTORY=dump DUMPFILE=a.dump
6. ESTIMATE_ONLY
Specifies whether to only estimate the disk space that the export would occupy; the default value is N.
ESTIMATE_ONLY={Y | N}
When set to Y, the export only estimates the disk space occupied by the objects and does not actually perform the export; when set to N, it performs the export as well as the estimate.
expdp scott/tiger ESTIMATE_ONLY=y NOLOGFILE=y
7. EXCLUDE
This option specifies object types or specific objects to exclude from the operation.
EXCLUDE=object_type[:name_clause] [, ...]
object_type specifies the type of object to exclude and name_clause specifies the particular objects to exclude. EXCLUDE and INCLUDE cannot be used together.
expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dup EXCLUDE=VIEW
8. FILESIZE
Specifies the maximum size of each dump file; the default is 0, which means there is no limit on the file size.
9. FLASHBACK_SCN
Exports table data as it existed at a specific SCN.
FLASHBACK_SCN=scn_value
scn_value identifies the SCN. FLASHBACK_SCN and FLASHBACK_TIME cannot be used together.
expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp FLASHBACK_SCN=358523
10. FLASHBACK_TIME
Exports table data as it existed at a specific point in time.
FLASHBACK_TIME="TO_TIMESTAMP(time_value)"
expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp FLASHBACK_TIME="TO_TIMESTAMP('25-08-2004 14:35:00', 'DD-MM-YYYY HH24:MI:SS')"

11. FULL
Specifies whether to export the entire database; the default is N.
FULL={Y | N}
When set to Y, a full database export is performed.
12. HELP
Specifies whether to display help for the EXPDP command-line options; the default is N.
When set to Y, help information for the options is displayed.
expdp help=y
13. INCLUDE
Specifies the object types and related objects to include in the export.
INCLUDE=object_type[:name_clause] [, ...]
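For example, to export only tables whose names start with EMP (a sketch; the quoting shown is what a parameter file would contain, on an operating-system command line the quotes would need escaping):
expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp INCLUDE=TABLE:"LIKE 'EMP%'"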
14. JOB_NAME
Specifies the name of the export job; the default is SYS_XXX.
JOB_NAME=jobname_string
15. LOGFILE
Specifies the name of the export log file; the default name is export.log.
LOGFILE=[directory_object:]file_name
directory_object specifies the directory object name and file_name specifies the name of the export log file. If directory_object is not given, the export tool automatically uses the value of the DIRECTORY option.
expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp LOGFILE=a.log
16. NETWORK_LINK
Specifies the name of a database link; this option must be set when you want to export objects of a remote database into a local dump file.
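For example, using the TOLINK database link created earlier in this article, a sketch that pulls a remote table into a local dump file:
expdp scott/tiger DIRECTORY=dump DUMPFILE=remote_emp.dmp NETWORK_LINK=TOLINK TABLES=emp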
17. NOLOGFILE
Specifies whether to suppress generation of the export log file; the default value is N.
18. PARALLEL
Specifies the number of parallel processes that perform the export; the default value is 1.
19. PARFILE
Specifies the name of an export parameter file.
PARFILE=[directory_path]file_name
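A minimal sketch of a parameter file and how to reference it (the file name exp_emp.par is an assumption); a parameter file is also the easiest place to put quoted clauses such as QUERY or INCLUDE, because no shell escaping is needed there. Contents of exp_emp.par:
directory=dump
dumpfile=emp.dmp
logfile=emp.log
tables=emp
The export is then run with:
expdp scott/tiger parfile=exp_emp.par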
20. QUERY
Specifies a WHERE clause used to filter the exported data.
QUERY=[schema.][table_name:]query_clause
schema specifies the schema name, table_name specifies the table name, and query_clause specifies the filter condition. The QUERY option cannot be used together with options such as CONTENT=METADATA_ONLY, ESTIMATE_ONLY, or TRANSPORT_TABLESPACES.
expdp scott/tiger directory=dump dumpfile=a.dmp TABLES=emp query='WHERE deptno=20'
21. SCHEMAS
Specifies a schema-mode export; the default is the current user's schema.
22. STATUS
Specifies how often, in seconds, the detailed status of the export job is displayed; the default value is 0.
23. TABLES
Specifies a table-mode export.
TABLES=[schema_name.]table_name[:partition_name] [, ...]
schema_name specifies the schema name, table_name specifies the name of the table to export, and partition_name specifies the name of the partition to export.
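For example, exporting a single partition of a table (the table name sales and partition name sales_q1 are assumptions):
expdp scott/tiger DIRECTORY=dump DUMPFILE=sales_q1.dmp TABLES=sales:sales_q1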
24. TABLESPACES
Specifies the list of tablespaces to export.
25. TRANSPORT_FULL_CHECK
Specifies whether to check dependencies between the tablespaces being transported and the tablespaces that are not; the default is N.
When set to Y, the export checks the full two-way dependencies of the tablespace set: if the tablespace containing a table or the tablespace containing its index is in the set while the other is not, an error message is displayed. When set to N, only one-way dependencies are checked: if the tablespace containing an index is in the set but the tablespace containing its table is not, an error is displayed; if the tablespace containing the table is in the set but the tablespace containing its index is not, no error is displayed.
26. TRANSPORT_TABLESPACES
Specifies a transportable-tablespace-mode export.
27. VERSION
Specifies the database version of the objects to export; the default value is COMPATIBLE.
VERSION={COMPATIBLE | LATEST | version_string}
With COMPATIBLE, object metadata is generated according to the COMPATIBLE initialization parameter; with LATEST, it is generated according to the actual database version; version_string specifies an explicit database version string.
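VERSION is typically used when the dump file must be imported into a database of a lower version; a sketch (the version string 10.2 is an assumption about the target):
expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp VERSION=10.2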
Calling EXPDP
When using the EXPDP tool, the dump file can only be written to an OS directory that corresponds to a DIRECTORY object; you cannot specify the OS directory of the dump file directly. Therefore, before using EXPDP you must create a DIRECTORY object and grant the database user permission to use it.

The reference above is fairly complete, and examples are included throughout.

3. Monitoring the export process

While a Data Pump Export (DPE) job is running, you can press Control-C. This stops the messages from being displayed on the screen, but it does not stop the export process itself. Instead, it displays the DPE prompt (shown below); the job is now in "interactive" mode:

Export>

From this prompt you can enter commands to query and control the current DPE job.
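
For example, the standard interactive-mode commands include:

status - display detailed status of the job
continue_client - return to logging (non-interactive) mode
stop_job - stop the job; it can be restarted later by re-attaching with ATTACH and issuing start_job
exit_client - exit the client and leave the job running in the background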

4. Parallel operation (PARALLEL)

You can use the PARALLEL parameter to run the export with more than one thread and significantly speed up the operation. Each thread creates a separate dump file, so the DUMPFILE parameter should name as many files as the degree of parallelism. Instead of entering each file name explicitly, you can specify a wildcard character in the file name, for example:

expdp ananda/abc123 tables=CASES directory=DPDATA1 dumpfile=expCASES_%U.dmp parallel=4 job_name=Cases_Export

Note that the dumpfile parameter contains the wildcard character %U, which indicates that files will be created as needed, in the format expCASES_nn.dmp, where nn starts at 01 and increases as required.

In parallel mode the status screen shows four worker processes (in the default mode only one process is visible). All worker processes extract data simultaneously and show their progress on the status screen.

It is important that the data files and the dump directory sit on separate file systems with separate I/O channels; otherwise the overhead of maintaining the Data Pump job may outweigh the benefit of the parallel threads and actually reduce performance. PARALLEL is effective only when the number of tables is larger than the parallel value and the tables are large.
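
The degree of parallelism can also be changed while the job is running, from the interactive prompt described in section 3; a sketch:

Export> parallel=8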

5. Database Monitoring

You can also obtain more information about running Data Pump jobs from database views. The main view for monitoring jobs is DBA_DATAPUMP_JOBS, which tells you how many worker processes (column DEGREE) are working on the job.
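
A quick sketch of such a query (the column names are as they appear in the data dictionary):

select owner_name, job_name, operation, job_mode, state, degree
from dba_datapump_jobs;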

Another important view is DBA_DATAPUMP_SESSIONS; joined with the previous view and V$SESSION, it gives the session SID of the main foreground process.

select sid, serial#
from v$session s, dba_datapump_sessions d
where s.saddr = d.saddr;

This query shows the session of the foreground process. More useful information can be obtained from the alert log. When the job starts, the master control process (MCP) and the worker processes appear in the alert log as follows:

kupprdp: master process DM00 started with pid=23, OS
SYS.KUPM$MCP.MAIN('CASES_EXPORT', 'ANANDA');
kupprdp: worker process DW01 started with worker
SYS.KUPW$WORKER.MAIN('CASES_EXPORT', 'ANANDA');
kupprdp: worker process DW03 started with worker
SYS.KUPW$WORKER.MAIN('CASES_EXPORT', 'ANANDA');

This shows the PIDs of the sessions started for the Data Pump operation. You can find the actual SIDs with the following query:

select sid, program from v$session where paddr in
(select addr from v$process where pid in (23, 24, 25));

The PROGRAM column shows the process name corresponding to the DM (master process) or DW (worker process) name in the alert log. If a worker process uses parallel query, for example the one with SID 23, you can find it in the view V$PX_SESSION, which shows all the parallel query sessions running on behalf of the worker process with SID 23:

select sid from v$px_session where qcsid = 23;

Other useful information for predicting how long the job will take to complete can be obtained from the view V$SESSION_LONGOPS.

select sid, serial#, sofar, totalwork
from v$session_longops
where opname = 'CASES_EXPORT'
and sofar != totalwork;

The totalwork column shows the total amount of work, while the sofar column shows how much has been done so far, so you can use them to estimate how much longer the job will take.
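
Building on that, a rough percent-complete query (a sketch that simply divides sofar by totalwork):

select sid, serial#, round(sofar / totalwork * 100, 2) pct_done
from v$session_longops
where opname = 'CASES_EXPORT'
and totalwork > 0
and sofar != totalwork;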

6. Working with specific objects

Data Pump allows you to export only objects of a particular type.

For example, the following command exports only procedures and nothing else (no tables, views, or even functions):

expdp scott/tiger directory=dmpdir dumpfile=procedure.dmp include=PROCEDURE

To export only specific named objects (for example, procedure PROC1 and function FUNC1):

expdp scott/tiger directory=dmpdir dumpfile=test.dmp include=PROCEDURE:\"=\'PROC1\'\",FUNCTION:\"=\'FUNC1\'\"

You can also use an exported dmp file to generate a DDL script for the objects it contains, for later use.

The SQLFILE parameter of impdp lets you create such a DDL script file.

impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=expprocs.dmp SQLFILE=aa.sql

Used this way, impdp extracts the DDL statements of the objects from the dump file; it does not load any data into the database, it only creates a file of DDL statements.
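
SQLFILE can also be combined with INCLUDE to extract the DDL of only one object type from the dump file; a sketch (the output file name procs_ddl.sql is an assumption):

impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=expprocs.dmp SQLFILE=procs_ddl.sql INCLUDE=PROCEDURE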

These import operations are used quite often in practice.
