A Hands-on Guide to Oracle Logical Data Migration
2015-06-17 20:17 AlfredZhao
This article targets the complex case of migrating a large number of tables and a large volume of data.
If you only need to migrate a few tables, these two articles are enough to get the job done:
Examples of Simple, Common Oracle Data Pump Export/Import (expdp/impdp) Commands (Part 1)
Examples of Simple, Common Oracle Data Pump Export/Import (expdp/impdp) Commands (Part 2)
The requirement:
Migrate the table structures of a specified schema, plus data for every table. Because the schema holds roughly 10 TB of data, a full migration was unrealistic; the final decision was to migrate partial data for large tables and full data for small tables.
The development team could not define the boundary between "large" and "small" tables, so the DBA had to assess the data and draw the line.
The migration plan finally agreed on:
1. First, export all table structures:
If expdp takes too long to export the metadata, consider using classic exp with ROWS=n instead.
Export examples:
Data Pump metadata export:
expdp system/oracle DIRECTORY=jy SCHEMAS=scott CONTENT=metadata_only EXCLUDE=statistics DUMPFILE=scott_metadata_only.dmp LOGFILE=scott_metadata_only.log VERSION=10.2.0.2
Classic exp metadata export:
nohup exp scott/tiger OWNER=scott ROWS=n BUFFER=10240000 STATISTICS=none RESUMABLE=y FILE=scott_metadata_exp.dmp LOG=scott_metadata_exp.log &
Note:
When migrating from a lower version to a higher one, the VERSION option is not needed;
when migrating from a higher version to a lower one, VERSION must be specified (as in the expdp example above, which targets 10.2.0.2).
Import examples:
Data Pump metadata import:
impdp system/oracle DIRECTORY=jy SCHEMAS=scott REMAP_SCHEMA=scott:scott_new REMAP_TABLESPACE=users:dbs_d_xxx TABLE_EXISTS_ACTION=replace DUMPFILE=scott_metadata_only.dmp LOGFILE=scott_metadata_only.log
Classic imp metadata import:
nohup imp scott/tiger BUFFER=10240000 RESUMABLE=y FILE=scott_metadata_exp.dmp LOG=imp_scott_metadata_exp.log IGNORE=y FULL=y &
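All of the expdp/impdp commands in this article assume a directory object named jy already exists on both the source and the target database. If it does not, a DBA can create it first; the filesystem path below is a placeholder, not taken from the original environment:

```sql
-- Run as a privileged user; '/u01/dumpdir' is a placeholder path (assumption)
CREATE OR REPLACE DIRECTORY jy AS '/u01/dumpdir';
-- Grant access to the schema performing the export/import
GRANT READ, WRITE ON DIRECTORY jy TO scott;
```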
2. Export data for all small tables (estimated size ≤ 1000 MB per table):
Use expdp with parallelism and CONTENT=data_only.
Export example:
expdp scott/tiger PARFILE=scott_small_onlydata
Sample parameter file scott_small_onlydata:
DIRECTORY=jy
TABLES=EMP,
DEPT
CONTENT=data_only
PARALLEL=16
DUMPFILE=scott_small_onlydata%U.dmp
LOGFILE=scott_small_onlydata.log
Import example:
impdp scott/tiger DIRECTORY=jy CONTENT=data_only PARALLEL=16 DUMPFILE=scott_small_onlydata%U.dmp LOGFILE=impdp_scott_small_onlydata.log
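With hundreds of small tables, typing the TABLES list by hand is error-prone. A minimal sketch of glue code (not from the original article; file names are assumptions) that turns a plain one-table-per-line list, such as the small-table list produced by the check script in step 4, into a parameter file like the one above:

```shell
# Hypothetical helper: build an expdp parameter file from a plain list
# of table names (one per line).
build_parfile() {
  list_file=$1
  parfile=$2
  {
    echo "DIRECTORY=jy"
    # paste -sd, joins the one-per-line names into a single comma list
    echo "TABLES=$(paste -sd, "$list_file")"
    echo "CONTENT=data_only"
    echo "PARALLEL=16"
    echo "DUMPFILE=scott_small_onlydata%U.dmp"
    echo "LOGFILE=scott_small_onlydata.log"
  } > "$parfile"
}

printf 'EMP\nDEPT\nBONUS\n' > /tmp/small_tables.txt
build_parfile /tmp/small_tables.txt /tmp/scott_small_onlydata.par
```

The result can then be used directly: expdp scott/tiger PARFILE=/tmp/scott_small_onlydata.par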
3. Export partial data for large tables (estimated size > 1000 MB per table):
3.1 For partitioned large tables, uniformly export the one partition covering the current month.
Export example:
expdp scott/tiger PARFILE=scott_big_onlydata1
Sample parameter file scott_big_onlydata1:
DIRECTORY=jy
TABLES=EMP:P20150606,
DEPT:P201507
CONTENT=data_only
PARALLEL=8
DUMPFILE=scott_big_onlydata1_%U.dmp
LOGFILE=scott_big_onlydata1.log
Import example:
impdp scott/tiger DIRECTORY=jy CONTENT=data_only PARALLEL=8 DUMPFILE=scott_big_onlydata1_%U.dmp LOGFILE=impdp_scott_big_onlydata1.log
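Partition names such as P20150606 and P201507 in the example follow a date-based naming convention (the same one the check script in step 4 relies on). Assuming that convention holds, the current partition names can be derived in the shell when building the parameter file, rather than looked up by hand:

```shell
# Assumes the naming convention used in this article:
# monthly partitions P<yyyymm>, daily P<yyyymmdd>, hourly P<yyyymmddhh>
month_part="P$(date +%Y%m)"     # e.g. P201507
day_part="P$(date +%Y%m%d)"     # e.g. P20150606

# Plug the derived names into the TABLES clause of the parfile:
echo "TABLES=EMP:${day_part},"
echo "       DEPT:${month_part}"
```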
3.2 For non-partitioned large tables, export all or part of the data depending on each table's situation.
Export example:
expdp scott/tiger PARFILE=scott_big_onlydata2
Sample parameter file scott_big_onlydata2:
DIRECTORY=jy
TABLES=EMP,
DEPT
CONTENT=data_only
QUERY=EMP:"where start_time>=to_date('2015-06-01','yyyy-mm-dd') and start_time<to_date('2015-06-08','yyyy-mm-dd')",
DEPT:"where dept_no > 20"
PARALLEL=8
DUMPFILE=scott_big_onlydata2_%U.dmp
LOGFILE=scott_big_onlydata2.log
Import example:
impdp scott/tiger DIRECTORY=jy CONTENT=data_only PARALLEL=8 DUMPFILE=scott_big_onlydata2_%U.dmp LOGFILE=impdp_scott_big_onlydata2.log
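After the imports finish, it is worth sanity-checking that row counts match between source and target, at least for the fully migrated tables. A minimal sketch, assuming per-table counts have been spooled from sqlplus on each side into "TABLE COUNT" lines (the file names and format are assumptions, not from the original article):

```shell
# Compare per-table row counts captured on source and target.
# diff exits nonzero if any table's count differs.
compare_counts() {
  sort "$1" > /tmp/src_sorted.txt
  sort "$2" > /tmp/dst_sorted.txt
  diff /tmp/src_sorted.txt /tmp/dst_sorted.txt
}

# Fabricated counts for illustration; real files would come from sqlplus spools
printf 'DEPT 4\nEMP 14\n' > /tmp/src_counts.txt
printf 'EMP 14\nDEPT 4\n' > /tmp/dst_counts.txt
compare_counts /tmp/src_counts.txt /tmp/dst_counts.txt && echo "counts match"
```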
4. Pre-migration check script:
#!/bin/bash
#Usage: check information before migrating data.
#ex: sh checkinfo.sh username password
#Author: AlfredZhao
#Version: 1.0.0
sqlplus -s $1/$2 <<EOF > ./checkinfo_$1.log
prompt 0. DB version
select * from v\$version;
prompt 1. Tablespaces used by the current user: #
select tablespace_name from user_tables union
select tablespace_name from user_tab_partitions union
select tablespace_name from user_indexes union
select tablespace_name from user_ind_partitions;
prompt 2. Estimated total data volume of the current user (GB): #
select sum(bytes/1024/1024/1024) "GB" from user_segments;
--Decision rule: users whose estimated volume is under 10 GB can have structure and data exported together; for the others, export the structure first, then analyze the data volume further to settle the export plan.
prompt 3. Space usage of each object of the current user #
set linesize 140 pagesize 100
col SEGMENT_NAME for a55
select /*+ rule */ segment_name, sum(bytes/1024/1024)"Size (MB)" from user_segments where segment_type in('TABLE','TABLE PARTITION') group by segment_name order by 2 desc;
prompt Tables larger than 1000 MB
select /*+ rule */ segment_name, sum(bytes/1024/1024)"Size (MB)" from user_segments where segment_type in('TABLE','TABLE PARTITION') group by segment_name having sum(bytes/1024/1024) > 1000;
prompt Create two helper views: v_more1000M, v_1000M
--create view
create view v_more1000M as select /*+ rule */ segment_name, sum(bytes/1024/1024)"Size (MB)" from user_segments where segment_type in('TABLE','TABLE PARTITION') group by segment_name having sum(bytes/1024/1024) > 1000;
create view v_1000M as select /*+ rule */ segment_name, sum(bytes/1024/1024)"Size (MB)" from user_segments where segment_type in('TABLE','TABLE PARTITION') group by segment_name having sum(bytes/1024/1024) <= 1000;
prompt Estimated total size of tables above and below the 1000 MB line
select sum("Size (MB)") from v_more1000M;
select sum("Size (MB)") from v_1000M;
prompt Number of tables larger than 1000 MB
select count(1) FROM v_more1000M;
prompt Number of tables of 1000 MB or less
select count(1) FROM v_1000M;
prompt 4. Check whether the tables larger than 1000 MB are partitioned #
set linesize 140 pagesize 100
col SEGMENT_NAME for a55
prompt Non-partitioned tables larger than 1000 MB:
select /*+ rule */ segment_name, sum(bytes/1024/1024)"Size (MB)" from user_segments where segment_type in('TABLE','TABLE PARTITION') and partition_name is null group by segment_name having sum(bytes/1024/1024) > 1000;
--prompt Partition count of each partitioned table larger than 1000 MB:
--select TABLE_NAME, PARTITIONING_TYPE, SUBPARTITIONING_TYPE, PARTITION_COUNT from user_part_tables where table_name in (select segment_name from v_more1000M) order by 4;
prompt Partitioning granularity of the partitioned tables: DAY(P20150606), HOUR(P2015060612), MONTH(P201507)
prompt This query only works when partition names follow this naming convention
set pagesize 100
select a.table_name || ',' ||
(case length(a.partition_name)
when 9 then
'Day'
when 11 then
'Hour'
when 7 then
'MONTH'
end) tab_part_desc
from user_tab_partitions a,
(select table_name, max(partition_position) max_a
from user_tab_partitions b
group by b.table_name) c
where a.table_name = c.table_name
and a.partition_position = c.max_a
and a.table_name in (select segment_name from v_more1000M)
and a.table_name not like 'BIN%';
prompt 5. List of small tables
prompt user_small_onlydata
set pagesize 1500
select segment_name from v_1000M where segment_name not like 'BIN$%';
prompt Drop the two helper views created above: v_more1000M, v_1000M
--drop view
drop view v_more1000M;
drop view v_1000M;
EOF
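The small-table list printed in step 5 of the script can be pulled out of checkinfo_&lt;user&gt;.log to feed the step 2 parameter file. A sketch, assuming the log keeps the "user_small_onlydata" marker line and sqlplus's usual column-header and underline output (the exact log layout is an assumption):

```shell
# Grab table names between the "user_small_onlydata" marker and the
# "drop view" section, skipping sqlplus column headers and underlines.
extract_small_tables() {
  awk '/^user_small_onlydata$/ {flag=1; next}
       /^drop view/            {flag=0}
       flag && NF && $0 !~ /^SEGMENT_NAME|^-+$/ {print $1}' "$1"
}

# Fabricated log fragment for illustration
cat > /tmp/checkinfo_scott.log <<'LOG'
user_small_onlydata
SEGMENT_NAME
-------------------------------------------------------
EMP
DEPT
drop view v_more1000M,v_1000M
LOG
extract_small_tables /tmp/checkinfo_scott.log
```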
© AlfredZhao, all rights reserved. "Set sail from Oracle and enjoy the splendid world of IT."