LogMiner Utility Release 8i - 11g (Doc ID 291686.1)


APPLIES TO:

Oracle Database Exadata Cloud Machine - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Oracle Database Exadata Express Cloud Service - Version N/A and later
Oracle Database Cloud Exadata Service - Version N/A and later
Oracle Database Cloud Schema Service - Version N/A and later
Information in this document applies to any platform.
NOTE: In the images and/or the document content below, the user information and data used represents fictitious data. Any similarity to actual persons, living or dead, is purely coincidental and not intended in any manner.

PURPOSE

This document describes the simplest ways to use LogMiner and gathers complete information about LogMiner for releases 8i through 11g in one place. LogMiner provides straightforward methods for reading both online and archived (offline) redo log files. The document covers release-specific features, restrictions, detailed setup commands, test cases, and known issues/bugs, along with the available methods for analyzing redo logs across Oracle releases 8i to 11g.

SCOPE

Intended for Database Administrators

DETAILS

Table of Contents


1. Introduction to LogMiner

1.1 What is LogMiner?

The Oracle LogMiner utility enables you to query redo logs through a SQL interface.
Redo logs contain information about the history of activity on a database.

- LogMiner is a utility that helps you extract data from the redo logs (online or archived).
- When you run LogMiner, you specify the names of the redo logs that you want to analyze.
- LogMiner retrieves information from those redo logs and returns it through the V$LOGMNR_CONTENTS view.
- LogMiner only runs in Oracle version 8.1 or higher, but you can use it to analyze redo logs from a release 8.0 or later database.
- The block size (DB_BLOCK_SIZE) of the analyzing instance must be identical to the block size of the instance that produced the logs.
- If this is not the case, you will receive an error indicating that the archive log is corrupted (when it probably is not).
- By default, LogMiner returns all rows in SCN order unless you have used the COMMITTED_DATA_ONLY option.
- The COMMITTED_DATA_ONLY option specifies that only committed transactions should be retrieved.
- SCN order is the order normally applied in media recovery.

1.2 Redo Logs and Dictionary Files

Before you begin using LogMiner, it is important to understand how LogMiner works with redo logs and dictionary files.
This will help you to get accurate results and to plan the use of your system resources.

1.2.1 Redo Logs
1.2.2 Dictionary Options
1.2.3 Tracking DDL Statements

1.2.1 Redo Logs

- When you run LogMiner, you specify the names of the redo logs that you want to analyze.
- LogMiner retrieves information from those redo logs and returns it through the V$LOGMNR_CONTENTS view.
- To get correct results, you must enable at least minimal supplemental logging (see the example after this list).
- To check the mined/analyzed data, query the V$LOGMNR_CONTENTS view.
- When a SQL SELECT operation is executed against the V$LOGMNR_CONTENTS view, the redo log files are read sequentially.
- Translated information from the redo log files is returned as rows in the V$LOGMNR_CONTENTS view.
- This continues until either the filter criteria specified at startup are met or the end of the redo log file is reached.
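Supplemental logging is covered in detail in section 4.8 Supplemental Logging; as a quick sketch, you can check whether minimal supplemental logging is enabled and turn it on as follows:

SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;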

* Keep the following things in mind about redo logs:

- The redo logs must be from a release 8.0 or later Oracle database.
- However, several of the LogMiner features introduced as of release 9.0.1 only work with redo logs produced on an Oracle9i or later database.
- Support for LOB and LONG data types is available as of release 9.2, but only for redo logs generated on a release 9.2 Oracle database.
- The redo logs must use a database character set that is compatible with the character set of the database on which LogMiner is running.
- In general, the analysis of redo logs requires a dictionary that was generated from the same database that generated the redo logs.
- If you are using the online catalog as the LogMiner dictionary, you can only analyze redo logs from the database on which LogMiner is running.
- LogMiner must be running on the same hardware platform that generated the redo logs being analyzed. However, it does not have to be on the same system.
- It is important to specify the correct redo logs when running LogMiner.
- To determine which redo logs are being analyzed in the current LogMiner session, query the V$LOGMNR_LOGS view, which contains one row for each redo log. For example:
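The following sketch lists the redo logs registered with the current session (the column list is illustrative; V$LOGMNR_LOGS contains additional columns):

SQL> SELECT LOG_ID, THREAD_ID, FILENAME, LOW_SCN, NEXT_SCN FROM V$LOGMNR_LOGS;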

1.2.2 LogMiner Dictionary Options

- To analyze the contents of the redo log files completely, it is recommended to use a dictionary file, which you can create using a PL/SQL package.
- LogMiner uses a dictionary file, which is a special file that contains information about the database from which it was created and the time at which it was created.
- LogMiner requires a dictionary to translate internal object identifiers and data types to object names and external data formats.
- The dictionary file is not required, but it is recommended.
- Without a dictionary file, the equivalent SQL statements will use Oracle internal object IDs for the object name and present column values as hex data.

For example, instead of the SQL statement:

INSERT INTO emp (name, salary) VALUES ('<VALUE>', 50000);

 

LogMiner will display:

insert into Object#2581 (col#1, col#2) values (hextoraw('<HEX_VALUE>'), hextoraw('c306'));

 

- You can use this dictionary file to mine the redo logs of that database in a separate database instance without being connected to the source database.

* LogMiner gives you the following three options for supplying the dictionary:

1.2.2.1 Using the Online Catalog
1.2.2.2 Extracting a LogMiner Dictionary to the Redo Log Files
1.2.2.3 Extracting the LogMiner Dictionary to a Flat File
1.2.2.4 Decision Tree for Choosing a LogMiner Dictionary

1.2.2.1 Using the Online Catalog

You can use this option:

- When you have access to the source database from which the redo log files were created.
- When there are no changes to the column definitions of the tables involved.
- This is the most efficient and easy-to-use option.

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

1.2.2.2 Extracting a LogMiner Dictionary to the Redo log files

You can use this option:

- If you do NOT expect to have access to the source database.

SQL> EXECUTE DBMS_LOGMNR_D.BUILD (OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);

 

- This takes time, so do it during off-peak hours.
- It is faster than extracting to a flat file.
- Depending on the size of the dictionary, it may be contained in multiple redo logs. If the relevant redo log files have been archived, you can find out which redo log files contain the START and END of an extracted dictionary using the following queries:

SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN = 'YES';
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_END = 'YES';

1.2.2.3 Extracting the LogMiner Dictionary to a Flat file

As per 10g documentation:

- This option is maintained for backward compatibility.
- This option does not guarantee transactional consistency.
- Oracle recommends that you use either the online catalog or extract the dictionary to the redo log files rather than extracting the dictionary to a flat file.

SQL> EXECUTE DBMS_LOGMNR_D.BUILD ('dictionary.ora', '<PATH>/<DIRECTORY_NAME>', DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);

You could also specify a filename and location without specifying the STORE_IN_FLAT_FILE option; the result would be the same.
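For example, the same call without the options argument (a minimal sketch, using the same placeholder path as above):

SQL> EXECUTE DBMS_LOGMNR_D.BUILD ('dictionary.ora', '<PATH>/<DIRECTORY_NAME>');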

1.2.2.4 Decision tree for choosing a LogMiner Dictionary

- Please refer to the Oracle 10g Release 2 Database Utilities manual; Figure 17-2 shows a decision tree to help you select a LogMiner dictionary, depending on your situation.

1.2.3 Tracking DDL statements

- If your source dictionary is a flat file dictionary or a dictionary in the redo logs, then you can use the DDL_DICT_TRACKING option.
- By default, DDL tracking is disabled.
- To enable it, use the OPTIONS parameter to specify DDL_DICT_TRACKING when you start LogMiner.
For example:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (OPTIONS => DBMS_LOGMNR.DDL_DICT_TRACKING);
or:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (OPTIONS =>DBMS_LOGMNR.DDL_DICT_TRACKING + DBMS_LOGMNR.DICT_FROM_REDO_LOGS);

 

To see only DDL statements, use a query such as:

SQL> SELECT USERNAME, SQL_REDO FROM V$LOGMNR_CONTENTS WHERE USERNAME = 'SCOTT' AND OPERATION = 'DDL';

* Keep the following in mind when you use the DDL_DICT_TRACKING option:
- The DDL_DICT_TRACKING option is NOT valid with the DICT_FROM_ONLINE_CATALOG option.
- The DDL_DICT_TRACKING option requires that the database be open.

2. Restrictions in 8i

The following are not supported in 8i:

- Index-organized tables
- Clustered tables/indexes
- Non-scalar data types
- Chained rows
- LogMiner does not handle direct path insert operations, even though such operations are logged.

3. Enhancements in 9i

From release 9.2:

- You have the option of accessing LogMiner functionality through the Oracle LogMiner Viewer graphical user interface (GUI).
- The LogMiner Viewer is a part of Oracle Enterprise Manager.
- By default, all LogMiner tables are created in the SYSTEM tablespace. From release 9.2, you can re-create all LogMiner tables in an alternate tablespace using the following command:

SQL> EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE ('<NEW_TABLESPACE_NAME>');

 

- LogMiner returns all rows in SCN order unless you have used the COMMITTED_DATA_ONLY option to specify that only committed transactions should be retrieved. SCN order is the order normally applied in media recovery.

4. New Features in 9i and 10g

4.1 Tracking DDL Statements using option DDL_DICT_TRACKING
4.2 Executing Reconstructed SQL Statements
4.3 Formatting of Returned Data
4.4 Extracting Actual Data Values from Redo Logs
4.5 NULL Returns From the MINE_VALUE Function
4.6 Filtering Data That is Returned
- 4.6.1 Showing Only Committed Transactions
- 4.6.2 Skipping Redo Corruptions
- 4.6.3 Filtering Data By Time
- 4.6.4 Filtering Data By SCN
4.7 The summary of LogMiner settings
4.8 Supplemental Logging
- 4.8.1 Database-Level Identification Key Logging
- 4.8.2 Table-Level Identification Key Logging

4.1 Tracking DDL Statements using option DDL_DICT_TRACKING 

- DDL tracking is disabled by default.
- To enable it, use the OPTIONS parameter to specify DDL_DICT_TRACKING when you start LogMiner, for example:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (OPTIONS =>DBMS_LOGMNR.DDL_DICT_TRACKING);

 

Note: Be aware that the DDL_DICT_TRACKING option to the DBMS_LOGMNR.START_LOGMNR procedure is not supported for flat file dictionaries created prior to Oracle9i. If you attempt to use the DDL_DICT_TRACKING option with a LogMiner database extracted to a flat file prior to Oracle9i, the ORA-01330 error (problem loading a required build table) is returned.


4.2 Executing Reconstructed SQL Statements

- By default, SQL_REDO and SQL_UNDO statements end with a semicolon. You can suppress the semicolon by specifying the DBMS_LOGMNR.NO_SQL_DELIMITER option when you start LogMiner.
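For example (a sketch; combine it with whichever dictionary option you are using):

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (OPTIONS => DBMS_LOGMNR.NO_SQL_DELIMITER + DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);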

4.3 Formatting of returned data

- Sometimes a query can return a large number of columns containing reconstructed SQL statements, which can be visually busy and hard to read.

- LogMiner provides the DBMS_LOGMNR.PRINT_PRETTY_SQL option to address this problem. It formats the reconstructed SQL statements as follows, which makes them easier to read:


insert into "SCOTT"."EMP" values
"EMPNO": 5505,
"ENAME": "<VALUE>",
"SAL": 9000,
"DEPTNO": NULL;

 

 

4.4 Extracting actual data values from Redo logs

- LogMiner allows you to query data based on actual data values.
- LogMiner data extraction from redo logs is performed using two mine functions:

DBMS_LOGMNR.MINE_VALUE
DBMS_LOGMNR.COLUMN_PRESENT

 

These functions are part of the DBMS_LOGMNR package. The support for these mine functions is provided by the REDO_VALUE and UNDO_VALUE columns in the V$LOGMNR_CONTENTS view. The following is an example of how you could use the MINE_VALUE function:

WHERE OPERATION = 'UPDATE' AND
DBMS_LOGMNR.COLUMN_PRESENT (REDO_VALUE, 'SCOTT.EMP.SAL') = 1 AND
DBMS_LOGMNR.COLUMN_PRESENT (UNDO_VALUE, 'SCOTT.EMP.SAL') = 1 AND
(DBMS_LOGMNR.MINE_VALUE (REDO_VALUE, 'SCOTT.EMP.SAL') > DBMS_LOGMNR.MINE_VALUE (UNDO_VALUE, 'SCOTT.EMP.SAL'));
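A complete statement built around this predicate might look like the following sketch (the selected columns are illustrative):

SQL> SELECT SQL_REDO FROM V$LOGMNR_CONTENTS
     WHERE OPERATION = 'UPDATE' AND
     DBMS_LOGMNR.COLUMN_PRESENT (REDO_VALUE, 'SCOTT.EMP.SAL') = 1 AND
     DBMS_LOGMNR.COLUMN_PRESENT (UNDO_VALUE, 'SCOTT.EMP.SAL') = 1 AND
     DBMS_LOGMNR.MINE_VALUE (REDO_VALUE, 'SCOTT.EMP.SAL') > DBMS_LOGMNR.MINE_VALUE (UNDO_VALUE, 'SCOTT.EMP.SAL');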

 


4.5 NULL Returns from the MINE_VALUE function

If the MINE_VALUE function returns a NULL value, it means either that:

- The specified column is not present in the redo or undo portion of the data, or
- The specified column is present and has a NULL value.
- To distinguish between these two cases, use the DBMS_LOGMNR.COLUMN_PRESENT function, which returns 1 if the column is present in the redo or undo portion of the data. Otherwise, it returns 0.

Please see the following sample queries:

WHERE OPERATION = 'UPDATE' AND
DBMS_LOGMNR.MINE_VALUE (REDO_VALUE, 'HR.EMPLOYEES.SALARY') > DBMS_LOGMNR.MINE_VALUE (UNDO_VALUE, 'HR.EMPLOYEES.SALARY');
Or:
WHERE OPERATION = 'UPDATE' AND
DBMS_LOGMNR.COLUMN_PRESENT (REDO_VALUE, 'HR.EMPLOYEES.SALARY') = 1 AND DBMS_LOGMNR.COLUMN_PRESENT (UNDO_VALUE, 'HR.EMPLOYEES.SALARY') = 1;

 

 

4.6 Filtering Data that is returned

There are several methods you can use to limit the information that is returned to the V$LOGMNR_CONTENTS view, as well as the speed at which it is returned.
These options are specified when you start LogMiner:

- 4.6.1 Showing Only Committed Transactions
- 4.6.2 Skipping Redo Corruptions
- 4.6.3 Filtering Data By Time
- 4.6.4 Filtering Data By SCN

4.6.1 Showing only Committed transactions

Use this option to extract only committed rows.
This enables you to filter out rolled back transactions, transactions that are in progress and internal operations.

To enable this option, specify it when you start LogMiner, as follows:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (OPTIONS => DBMS_LOGMNR.COMMITTED_DATA_ONLY);

 

 

4.6.2 Skipping Redo Corruptions

This option helps you to skip any corruptions in the redo log files during select operations from the V$LOGMNR_CONTENTS view. For every corrupt redo record encountered, a row is returned that contains the value CORRUPTED_BLOCKS in the OPERATION column, 1343 in the STATUS column, and the number of blocks skipped in the INFO column.
Be aware that the skipped records may include changes to ongoing transactions in the corrupted blocks; such changes will not be reflected in the data returned from the V$LOGMNR_CONTENTS view.
The default is for the select operation to terminate at the first corruption it encounters in the redo log file.

The following example shows how this option works. First, add the redo log file of interest:

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE (logfilename => '<PATH>/<LOGFILE_NAME>', options => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR();
SQL> SELECT operation, status FROM V$LOGMNR_CONTENTS;

ERROR at line 3:
ORA-00368: checksum error in redo log block
ORA-00353: log corruption near block 6 change 73528 time 11/06/2002
ORA-00334: archived log: <PATH>/<LOGFILE_NAME>

 

Restart LogMiner, this time specifying the SKIP_CORRUPTION option:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (options => DBMS_LOGMNR.SKIP_CORRUPTION);
SQL> SELECT operation, status, info FROM V$LOGMNR_CONTENTS;

OPERATION                        STATUS       INFO
-------------------------------- ------------ ------------------------------------------
CORRUPTED_BLOCKS                         1343 corrupt blocks 4 to 19 skipped

 

 

4.7 The summary of LogMiner Options

The following list is a summary of LogMiner settings that you can specify with the OPTIONS parameter to DBMS_LOGMNR.START_LOGMNR:

DICT_FROM_ONLINE_CATALOG
DICT_FROM_REDO_LOGS
CONTINUOUS_MINE
COMMITTED_DATA_ONLY
SKIP_CORRUPTION
NO_SQL_DELIMITER
PRINT_PRETTY_SQL
STARTTIME, ENDTIME
STARTSCN, ENDSCN
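Note that STARTTIME/ENDTIME and STARTSCN/ENDSCN are passed as separate parameters to DBMS_LOGMNR.START_LOGMNR. For example, a time-window sketch (the placeholder values and the combination of options are illustrative):

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (STARTTIME => TO_DATE('<START_TIME>', 'DD-MON-YYYY HH24:MI:SS'), ENDTIME => TO_DATE('<END_TIME>', 'DD-MON-YYYY HH24:MI:SS'), OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY);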

 


4.8 Supplemental Logging

The data needed for instance and media recovery is automatically recorded in the redo log files.
However, a redo-based application may require that additional columns be logged in the redo log files. The process of logging these additional columns is called supplemental logging.

In Oracle release 9.0.1, supplemental logging was always on; it was not available at all in releases prior to 9.0.1.
In releases 9.2 and 10g, by default Oracle Database does not provide any supplemental logging, which means that the following LogMiner features are NOT supported by default:

- Index clusters, chained rows, and migrated rows
- Direct-path inserts (also require that ARCHIVELOG mode be enabled)
- Extracting the LogMiner dictionary into the redo log files
- DDL tracking
- Generating SQL_REDO and SQL_UNDO with identification key information
- LONG and LOB data types

Therefore, to make full use of LogMiner features, you must enable supplemental logging. The following are examples of situations in which additional columns may be needed:

An application that applies reconstructed SQL statements to a different database must identify the update statement by a set of columns that uniquely identifies the row (for example, a primary key), not by the ROWID shown in the reconstructed SQL returned by the V$LOGMNR_CONTENTS view, because the ROWID of one database will be different from, and therefore meaningless in, another database.

An application may require that the before-image of the whole row be logged, not just the modified columns, so that tracking of row changes is more efficient.

There are two types of supplemental log groups that determine when columns in the log group are logged:

* Unconditional supplemental log groups

The before-images of specified columns are logged any time a row is updated, regardless of whether the update affected any of the specified columns. This is sometimes referred to as an ALWAYS log group.

* Conditional supplemental log groups

The before-images of all specified columns are logged only if at least one of the columns in the log group is updated. Supplemental log groups can be system-generated or user-defined.

In addition to the two types of supplemental logging, there are two levels of supplemental logging, as described in the following sections:

- Database-Level Supplemental Logging
- Table-Level Supplemental Logging

* Minimal Supplemental Logging

Minimal supplemental logging logs the minimal amount of information needed for LogMiner to identify, group, and merge the redo operations associated with DML changes.
It ensures that LogMiner has sufficient information to support chained rows and various storage arrangements, such as cluster tables.
To enable minimal supplemental logging, execute the following statement:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

 

* Database-Level Identification Key Logging

ALL columns system-generated unconditional supplemental log group:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

 

PRIMARY KEY system-generated unconditional supplemental log group: if a table does not have a primary key but has one or more non-null unique index key constraints or index keys, then one of the unique index keys is chosen for logging as a means of uniquely identifying the row being updated.
If the table has neither a primary key nor a non-null unique index key, then all columns except LONG and LOB are supplementally logged; this is equivalent to specifying ALL supplemental logging for that row.
Therefore, Oracle recommends that when you use database-level primary key supplemental logging, all or most tables be defined to have primary or unique index keys.
To enable primary key logging at the database level, execute the following statement:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

 

UNIQUE index system-generated conditional supplemental log group:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;

 

FOREIGN KEY system-generated conditional supplemental log group:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;

 

* Table-Level Identification Key Logging

The following command works in 10g and later releases:

SQL> ALTER TABLE HR.EMPLOYEES ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

 

If you run the above SQL statement, then whenever any column in table HR.EMPLOYEES is changed, the entire row containing that column (except columns for LOBs, LONGs, and ADTs) will be placed in the redo log file.

* User-defined unconditional supplemental log groups

SQL> ALTER TABLE HR.EMPLOYEES ADD SUPPLEMENTAL LOG GROUP emp_parttime (EMPLOYEE_ID, LAST_NAME, DEPARTMENT_ID) ALWAYS;

 

* User-defined conditional supplemental log groups

SQL> ALTER TABLE HR.EMPLOYEES ADD SUPPLEMENTAL LOG GROUP emp_fulltime (EMPLOYEE_ID, LAST_NAME,DEPARTMENT_ID);


5. Restrictions in Release 9.2

The following are not supported in release 9.2:

- Simple and nested abstract datatypes (ADTs)
- Object REFs
- Collections (nested tables and VARRAYs)
- Index-organized tables (IOTs)
- CREATE TABLE AS SELECT of a table with a clustered key

6. Restrictions in 10.x

Support for LOB and LONG datatypes in redo log files is available only for redo log files generated on a release 9.2 or later Oracle database. LogMiner does not support the following datatypes and table storage attributes:

- BFILE datatype
- Simple and nested abstract datatypes (ADTs)
- Collections (nested tables and VARRAYs)
- Object REFs
- XMLTYPE datatype
- Index-organized tables (IOTs) with LOB columns
- Tables using table compression

7. LogMiner Views

V$LOGMNR_CONTENTS
V$LOGMNR_DICTIONARY
V$LOGMNR_LOGS
V$LOGMNR_PARAMETERS
V$LOGMNR_CONTENTS contains information such as:

- Changes made to the database, such as INSERT, UPDATE, DELETE, and DDL.
- The SCN at which a change was made (SCN column)
- The SCN at which a change was committed (COMMIT_SCN column)
- The transaction to which a change belongs (XIDUSN, XIDSLT, and XIDSQN columns)
- Table, Schema, Username
- SQL_REDO: Reconstructed SQL statement that is equivalent to the SQL used to generate the redo records.
- SQL_UNDO: Reconstructed SQL statement showing the SQL needed to undo the change. For DDL, it shows NULL.
- SEG_OWNER, SEG_NAME, ROW_ID, TABLESPACE, etc.

You can query the appropriate USER_, ALL_, or DBA_LOG_GROUPS data dictionary view to determine whether any supplemental logging has already been specified.
 
- ALL_LOG_GROUP_COLUMNS (Only applicable in 10g)
Column : LOGGING_PROPERTY VARCHAR2(6) Indicates whether the column or attribute would be supplementally logged (LOG) or not (NO LOG)
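For example, a sketch using these views (the owner filter is illustrative):

SQL> SELECT LOG_GROUP_NAME, TABLE_NAME, LOG_GROUP_TYPE, ALWAYS FROM DBA_LOG_GROUPS WHERE OWNER = '<OWNER_NAME>';
SQL> SELECT LOG_GROUP_NAME, COLUMN_NAME, LOGGING_PROPERTY FROM ALL_LOG_GROUP_COLUMNS WHERE OWNER = '<OWNER_NAME>';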


In Oracle release 9.2.x:

- GV$DATABASE Columns : SUPPLEMENTAL_LOG_DATA_FK, SUPPLEMENTAL_LOG_DATA_ALL

- SUPPLEMENTAL_LOG_DATA_MIN VARCHAR2(3)
- SUPPLEMENTAL_LOG_DATA_PK VARCHAR2(3)
- SUPPLEMENTAL_LOG_DATA_UI VARCHAR2(3)

- V$DATABASE Columns

- SUPPLEMENTAL_LOG_DATA_MIN VARCHAR2(3)
- SUPPLEMENTAL_LOG_DATA_PK VARCHAR2(3)
- SUPPLEMENTAL_LOG_DATA_UI VARCHAR2(3)

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

SQL> select SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI from v$database;

SUP SUP SUP
--- --- ---
YES NO  NO

 


ALL_LOG_GROUPS describes the log group definitions on the tables accessible to the current user.
Related views:

- USER_LOG_GROUPS
- ALL_LOG_GROUPS
- DBA_LOG_GROUPS

Column           Datatype      NULL      Description
OWNER            VARCHAR2(30)  NOT NULL  Owner of the log group definition
LOG_GROUP_NAME   VARCHAR2(30)  NOT NULL  Name of the log group definition
TABLE_NAME       VARCHAR2(30)  NOT NULL  Name of the table on which the log group is defined
LOG_GROUP_TYPE   VARCHAR2(19)            Type of the log group:
                                         - PRIMARY KEY LOGGING
                                         - UNIQUE KEY LOGGING
                                         - FOREIGN KEY LOGGING
                                         - ALL COLUMN LOGGING
                                         - USER LOG GROUP
ALWAYS           VARCHAR2(11)            Y indicates the log group is logged any time a row is updated; N indicates the log group is logged any time a member column is updated
GENERATED        VARCHAR2(14)            Indicates whether the name of the supplemental log group was system generated (GENERATED NAME) or not (USER NAME)

8. LogMiner Documentation

1) Oracle8i Administrator's Guide
Chapter-7 : Managing Archived Redo Logs
Topic : Using LogMiner to Analyze Online and Archived Redo Logs

2) Oracle9i Release 2 (9.2) Database Administrator's Guide
Chapter-9 Using LogMiner to Analyze Redo Logs

3) Oracle10g Database Utilities
Part IV : Other Utilities
Chapter : 19 Using LogMiner to Analyze Redo Logs

9. Steps in a Typical LogMiner Session

This section describes the steps in a typical LogMiner session.

9.1 Enable Supplemental Logging
9.2 Extract the LogMiner Dictionary to a Flat File
9.3 Specify the first archive file that you want to mine
9.4 Add Redo Log Files for mining
9.5 Remove a redo log that was added earlier
9.6 Start LogMiner
9.7 Query V$LOGMNR_CONTENTS
9.8 End the LogMiner Session

The LogMiner packages are owned by the SYS schema. Therefore, if you are not connected as user SYS, you must qualify the package names with SYS in your calls. For example:

SQL> EXECUTE SYS.DBMS_LOGMNR.START_LOGMNR;

 

9.1 Enable Supplemental Logging by issuing the command:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

 

You can use LogMiner in mount mode as well as in open database mode.

9.2 Extract the LogMiner Dictionary to a Flat file

SQL> EXECUTE DBMS_LOGMNR_D.BUILD ('dictionary.ora', '<PATH>/<DIRECTORY_NAME>', DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);

 

Note: The directory <PATH>/<DIRECTORY_NAME> must be included in the initialization parameter UTL_FILE_DIR.

9.3 Create a list of redo log files. Specify the NEW option of the DBMS_LOGMNR.ADD_LOGFILE procedure to signal that this is the beginning of a new list.

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '<PATH>/<LOGFILE_NAME>', OPTIONS => DBMS_LOGMNR.NEW);

 

9.4 If desired, add more redo log files by specifying the ADDFILE option of the DBMS_LOGMNR.ADD_LOGFILE procedure.

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '<PATH>/<LOGFILE_NAME>', OPTIONS => DBMS_LOGMNR.ADDFILE);

 

The OPTIONS parameter is optional when you are adding additional redo log files.
For example, you could simply enter the following:

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '<PATH>/<LOGFILE_NAME>');

 


9.5 If desired, remove redo log files by using the DBMS_LOGMNR.REMOVE_LOGFILE procedure.
For example, enter the following to remove the redo log file:

SQL> EXECUTE DBMS_LOGMNR.REMOVE_LOGFILE (LOGFILENAME => '<PATH>/<LOGFILE_NAME>');

 

9.6 Start LogMiner
After you have created a LogMiner dictionary file and specified which redo log files to analyze, you must start LogMiner.
Oracle recommends that you specify a LogMiner dictionary option. If you do not, LogMiner cannot translate internal object identifiers and datatypes to object names and external data formats. Therefore, it would return internal object IDs and present data as hex data.
Additionally, the MINE_VALUE and COLUMN_PRESENT functions cannot be used without a dictionary.

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (DICTFILENAME =>'<PATH>/dictionary.ora');

 

9.7 Now you can query the V$LOGMNR_CONTENTS view to see the mined rows.
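For example (a sketch; filter on whichever columns matter for your case):

SQL> SELECT OPERATION, USERNAME, SEG_OWNER, SEG_NAME, SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE SEG_OWNER = '<OWNER_NAME>';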

9.8 End the LogMiner session using:

SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR;

 


10. Case Studies

[10.1 Case-1 (Extracting the Dictionary to a Flat File)]

1.1 Set the init.ora parameter UTL_FILE_DIR to the directory in which you want to create the dictionary file.

Example : utl_file_dir = '<PATH>/<DIRECTORY_NAME>'

 

1.2 Now generate the dictionary file using the following command:

SQL> EXECUTE DBMS_LOGMNR_D.BUILD ('<DICTIONARY_NAME>.ORA', '<PATH>/<DIRECTORY_NAME>', DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);

 

1.3 Now prepare the list of the archive files that were generated when the unwanted changes were made.
Example :

SQL> execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '<PATH>/<LOGFILE_NAME1>', OPTIONS => DBMS_LOGMNR.NEW);
SQL> execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '<PATH>/<LOGFILE_NAME2>', OPTIONS => DBMS_LOGMNR.ADDFILE);
SQL> execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '<PATH>/<LOGFILE_NAME3>', OPTIONS => DBMS_LOGMNR.ADDFILE);
SQL> execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '<PATH>/<LOGFILE_NAME4>', OPTIONS => DBMS_LOGMNR.ADDFILE);
SQL> execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '<PATH>/<LOGFILE_NAME5>', OPTIONS => DBMS_LOGMNR.ADDFILE);

 

1.4 Now you can start LogMiner as shown below:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (DICTFILENAME => '<PATH>/<DIRECTORY_NAME>/<DICTIONARY_NAME>.ORA');

1.5 Now you can query V$LOGMNR_CONTENTS using the following queries:

SQL> select seg_owner, operation, sql_redo, sql_undo from v$logmnr_contents where SEG_NAME = '<SEGMENT_NAME>';

SQL> select username, operation, SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS where SEG_OWNER = '<OWNER_NAME>';

SQL> SELECT (XIDUSN || '.'|| XIDSLT || '.'|| XIDSQN) AS XID, USERNAME ,SQL_REDO,SQL_UNDO AS SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE SEG_OWNER = 'SCOTT';


[10.2 Case-2 (Extracting the Dictionary from ONLINE CATALOG)]

SQL> execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '<PATH>/<LOGFILE_NAME1>', OPTIONS => DBMS_LOGMNR.NEW);
SQL> execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '<PATH>/<LOGFILE_NAME2>', OPTIONS => DBMS_LOGMNR.addfile);
SQL> execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '<PATH>/<LOGFILE_NAME3>', OPTIONS => DBMS_LOGMNR.addfile);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
SQL> select username, operation, SQL_REDO,SQL_UNDO FROM V$LOGMNR_CONTENTS where SEG_OWNER = 'SCOTT';
SQL> EXECUTE DBMS_LOGMNR.end_LOGMNR;

 


11. Summary of minimum steps to analyze a redo log

1. Check the following things:

- Init.ora parameter UTL_FILE_DIR
- Archive log destination (check the destination of offline redo logs)
- Archive log mode
- Check the location of online redo logs

2. Dictionary Option

- DICT_FROM_ONLINE_CATALOG
- DICT_FROM_REDO_LOGS
- STORE_IN_FLAT_FILE

3. Adding redo log files to LogMiner

- Add the first log file:

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '<PATH>/<LOGFILE_NAME1>', OPTIONS => DBMS_LOGMNR.NEW);

- Add the next files one by one if you want to mine a few more files.

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '<PATH>/<LOGFILE_NAME2>', OPTIONS => DBMS_LOGMNR.ADDFILE);

Or:

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '<PATH>/<LOGFILE_NAME1>');

 

- If you want to remove a file from LogMiner that you added earlier:

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '<PATH>/<LOGFILE_NAME2>', OPTIONS => DBMS_LOGMNR.REMOVEFILE);

 

4. Start LogMiner
- DICT_FROM_ONLINE_CATALOG

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

Or:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (OPTIONS =>DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY);

 

- DICT_FROM_REDO_LOGS
If you specify DICT_FROM_REDO_LOGS, LogMiner expects to find a dictionary in the redo logs that you specified with the DBMS_LOGMNR.ADD_LOGFILE procedure.
- STORE_IN_FLAT_FILE

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (DICTFILENAME =>'<PATH>/<DIRECTORY_NAME>/<DICTIONARY_NAME>.ORA');

5. Filter Options (see the combined example after this list)

- COMMITTED_DATA_ONLY
- SKIP_CORRUPTION
- STARTTIME and ENDTIME
- STARTSCN and ENDSCN
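For example, the filter options can be combined with a dictionary option in a single start call (a sketch; the SCN bounds are placeholders):

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (STARTSCN => <MIN_SCN>, ENDSCN => <MAX_SCN>, OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.SKIP_CORRUPTION);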

12. Known Issues

1. How to check lots of redo generation

1. Set the following formatting parameters in SQL*Plus:

set linesize 1000
col username format a10
col OPERATION format a10
col seg_owner format a30
col seg_name format a40

 


2. select distinct username, SEG_OWNER, seg_name, operation from V$LOGMNR_CONTENTS where SEG_OWNER not in ('SYS','SYSTEM') order by username, SEG_OWNER, seg_name, operation;

3. SELECT OPERATION, COUNT(*) FROM V$LOGMNR_CONTENTS GROUP BY OPERATION;

4. select distinct SQL_REDO, SQL_UNDO from V$LOGMNR_CONTENTS where operation = 'UNSUPPORTED';

Unsupported operations could be due to one of the restrictions mentioned in the following sections:
- Please see 2. Restrictions in 8i, 5. Restrictions in Release 9.2, and 6. Restrictions in 10.x.

5. select distinct SEG_NAME FROM V$LOGMNR_CONTENTS where operation = 'UNSUPPORTED' AND SEG_OWNER not in ('SYS','SYSTEM');

- For more information, please refer to Note 300395.1 - How To Determine The Cause Of Lots Of Redo Generation Using LogMiner.

2. How to reproduce the "Version Mismatch" error?

- Create table skg (c1 number, c2 varchar2(10)).
- Insert 2-3 rows.
- Add the log file to LogMiner.
- Start LogMiner with the DICT_FROM_ONLINE_CATALOG option.
- select operation, sql_redo, sql_undo from v$logmnr_contents where SEG_OWNER = '<OWNER_NAME>';
- Insert a few more rows, or do an update or delete.
- select operation, sql_redo, sql_undo from v$logmnr_contents where SEG_OWNER = '<OWNER_NAME>';
- Alter the table.
- select operation, sql_redo, sql_undo from v$logmnr_contents where SEG_OWNER = '<OWNER_NAME>';
- Do more DML.
- select operation, sql_redo, sql_undo from v$logmnr_contents where SEG_OWNER = '<OWNER_NAME>';

Now, you will get the "Version Mismatch" errors
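The steps above can be sketched in SQL as follows (the table, the inserted values, and the log file path are illustrative; replace '<OWNER_NAME>' with the schema that owns the table):

SQL> CREATE TABLE skg (c1 NUMBER, c2 VARCHAR2(10));
SQL> INSERT INTO skg VALUES (1, 'A');
SQL> INSERT INTO skg VALUES (2, 'B');
SQL> COMMIT;

Add the redo log that received these changes (the current online log or its archived copy) and start LogMiner with the online catalog:

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '<PATH>/<LOGFILE_NAME>', OPTIONS => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
SQL> SELECT operation, sql_redo, sql_undo FROM v$logmnr_contents WHERE SEG_OWNER = '<OWNER_NAME>';

Alter the table, run more DML, and query V$LOGMNR_CONTENTS again:

SQL> ALTER TABLE skg ADD (c3 NUMBER);
SQL> INSERT INTO skg VALUES (3, 'C', 3);
SQL> COMMIT;
SQL> SELECT operation, sql_redo, sql_undo FROM v$logmnr_contents WHERE SEG_OWNER = '<OWNER_NAME>';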

3. How to extract data in SCN order from a RAC database?

If you have a RAC environment that has three instances and you want to use LogMiner to extract data in SCN order from the RAC database without missing data, follow these steps:

1. First decide your SCN range.
2. Query the following two views to find out which SEQUENCE# values belong to your SCN range.

Run the following query on each instance/node:

SQL> select GROUP#, THREAD#, sequence#, FIRST_CHANGE# from v$log where FIRST_CHANGE# >= <MIN_SCN> and FIRST_CHANGE# <= <MAX_SCN>;

SQL> SELECT THREAD#, SEQUENCE#, FIRST_CHANGE# FROM V$LOG_HISTORY where FIRST_CHANGE# >= <MIN_SCN> and FIRST_CHANGE# <= <MAX_SCN>;

Or:

Run the following queries from one instance/node to get details for all instances:

SQL> select INST_ID, GROUP#, THREAD#, sequence#, FIRST_CHANGE# from gv$log where FIRST_CHANGE# >= <MIN_SCN> and FIRST_CHANGE# <= <MAX_SCN>;

SQL> SELECT INST_ID, THREAD#, SEQUENCE#, FIRST_CHANGE# FROM GV$LOGHIST where FIRST_CHANGE# >= <MIN_SCN> and FIRST_CHANGE# <= <MAX_SCN>;

3. Add all those sequences (archive files) to LogMiner, and then either start it with the SCN range option, or start it without an SCN range and later query V$LOGMNR_CONTENTS with an ORDER BY SCN clause.

4. Known Bugs

Bug 4149779 Gen RDBMS-1010 V1010 (80)
Abstract : ORACLE10G LOGMINER DOES NOT SHOW CORRECT ROWID IF LOG CREATED IN ORACLE9I

Unpublished Bug 4096561 Gen RDBMS-9205 V9205 (80)
Abstract: LOGMINER CREATES PARTITIONS WITH NOLOGGING OPTION

Unpublished Bug 4031430 Gen RDBMS-102 V102 (80)
Abstract: ORA-600 [17092] IN LOGMINER BUILDER PROCESS

Unpublished Bug 3938909 Gen RDBMS-1010 V1010 (80)
Abstract: LOGMINER CANNOT SUPPORT CTAS FROM REDO GENERATED UNDER DIFFERENT WORDSIZE

Unpublished Bug 3928031 Gen RDBMS-1020 V1020 (80)
Abstract: LOGMINER DOES NOT PROPERLY HANDLE RESTART IN THE FIRST FILE OF A NEW THREAD

Unpublished Bug 3889289 Gen RDBMS-1020 V1020 (80)
Abstract: LOGMINER DOES NOT SUPPORT REDO GENERATED BY SQL*LOADER (SORTED INDEXES)

Unpublished Bug 3617572 Gen RDBMS-1020 V1020 (80)
Abstract: CERTAIN INDEXES USED BY LOGMINER ARE PLACED IN THE WRONG TABLESPACE

Unpublished Bug 3381899 Gen RDBMS-1010 V1010 (80)
Abstract: LOGMINER CANNOT SUPPORT MULTIPLE DDLS COMMITING AT THE SAME SCN

Unpublished Bug 3171445 Gen RDBMS-1010 V1010 (80)
Abstract: LOGMINER ALLOWS GAP IN REDO LOGS OVERLAPPING THE DICTIONARY DUMP IN RAC

Unpublished Bug 3113200 Gen RDBMS-1010 V1010 (80)
Abstract: LOGMINER DOES NOT WORK FOR INSERT /*+ APPEND */ INTO INDEX-ORGANIZED TABLE

REFERENCES

BUG:4149779 - ORACLE10G LOGMINER DOES NOT SHOW CORRECT ROWID IF LOG CREATED IN ORACLE9I
NOTE:300395.1 - How To Determine The Cause Of Lots Of Redo Generation Using LogMiner
