Log Mining

February 18, 2013

Archive Log Mining

LogMiner was introduced in Oracle8i as a tool for reading the redo records found in the redo log files through a relational interface.

To find out what Oracle is writing to the redo log files, use LogMiner. The redo log files store all the data needed to perform database recovery and record every change (INSERT, UPDATE, and DELETE) made to the database.

LogMiner can be used as a powerful data audit tool that enables post-auditing and fine-grained recovery at the transaction level, as well as sophisticated data analysis.

LogMiner enables you to query online and archived redo log files through a SQL interface. Any DML and DDL statements executed against the database can be viewed using LogMiner.

It is important to know exactly when an error was made so that you know when to initiate time-based or change-based recovery. This enables you to restore the database to the state it was in just before the corruption, as sketched below.
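
For context, incomplete recovery up to a point in time just before the error might look like the following RMAN sketch (the timestamp here is hypothetical, and a usable backup is assumed to be available to RMAN):

RMAN> RUN {
        SET UNTIL TIME "TO_DATE('17-FEB-2013 10:55:00','DD-MON-YYYY HH24:MI:SS')";
        RESTORE DATABASE;
        RECOVER DATABASE;
      }
SQL> ALTER DATABASE OPEN RESETLOGS;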

Configuring LogMiner:

======================

1) Add supplemental logging to the database.

LogMiner is a redo-based application and, as such, requires that at least minimal supplemental logging be enabled on the source database. Oracle does not enable any supplemental logging by default.

Supplemental logging must be enabled on the source database before generating the redo log files that will be analyzed by LogMiner.

Redo log files are primarily used for instance recovery and media recovery, and the data required for those operations is recorded in them automatically. A redo-based application, however, may need additional columns to be logged in the redo log files. The process of adding these additional columns to the redo log files is called supplemental logging.

SQL> SELECT supplemental_log_data_min FROM v$database ;
SUPPLEME
--------
NO

If the query returns NO, supplemental logging needs to be enabled.

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
 
OR
col "PK_LOG" for a8
col "UI_LOG" for a8
set line 200
col supplemental_log_data_min for a28
select supplemental_log_data_min as supplemental_log_data_min,
 supplemental_log_data_pk as "PK_LOG", supplemental_log_data_ui "UI_LOG"
from v$database;
 
SUPPLEMENTAL_LOG_DATA_MIN    PK_LOG   UI_LOG
---------------------------- -------- --------
YES                          NO       NO

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;

Database altered.

SQL> select supplemental_log_data_min as supplemental_log_data_min, supplemental_log_data_pk as "PK_LOG", supplemental_log_data_ui "UI_LOG" from v$database;

SUPPLEMENTAL_LOG_DATA_MIN    PK_LOG   UI_LOG
---------------------------- -------- --------
YES                          YES      YES
 
In case you need to disable supplemental logging
================================================
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;

 

2) To extract the LogMiner dictionary to a flat file, set the parameter utl_file_dir.

To extract the dictionary to a flat file, set the dictionary_filename parameter to the name of the file and the dictionary_location parameter to a directory on the database server that is listed in UTL_FILE_DIR.

SQL> DESC dbms_logmnr_d
PROCEDURE BUILD
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 DICTIONARY_FILENAME            VARCHAR2                IN     DEFAULT
 DICTIONARY_LOCATION            VARCHAR2                IN     DEFAULT
 OPTIONS                        NUMBER                  IN     DEFAULT

mkdir C:\Oracle\app\oradata\logmin

alter system set utl_file_dir='C:\Oracle\app\oradata\logmin' scope=spfile;

utl_file_dir is a static parameter, so the change requires a restart:

shutdown immediate

startup

To Unset the parameter utl_file_dir
======================================
Alter system reset utl_file_dir scope=spfile;

Set time on
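
The demo uses the table scott.test, whose definition is not shown in this post. Based on the SQL_REDO mined later, it has a single VARCHAR2 column named MESG; a hypothetical definition, if you want to follow along:

create table scott.test (mesg varchar2(100));  -- column name from the mined redo; size is assumed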

SQL> truncate table scott.test;

Table truncated.

10:53:56 SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
SUPPLEME
-------------
YES
col VALUE for a50
col name for a25
SELECT name, value FROM v$parameter WHERE name = 'utl_file_dir';
NAME                      VALUE
------------------------- -------------------------------
utl_file_dir              C:\Oracle\app\oradata\logmin

10:53:57 SQL> show parameter utl_file_dir
 NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
utl_file_dir                         string      C:\Oracle\app\oradata\logmin
10:54:15 SQL>

10:54:35 SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';

Session altered.

10:54:36 SQL>

10:54:36 SQL> alter system switch logfile;

System altered.

10:55:05 SQL>

10:55:16 SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            C:\Oracle\app\oradata\archive
Oldest online log sequence     176
Next log sequence to archive   178
Current log sequence           178  ==> changes go into this sequence; we will mine its archived log file.
10:55:18 SQL>

3) Insert two records so that we can extract them later while mining the archived logs:

==========================================================================

10:56:11 SQL> insert into scott.test values ('Data one');
1 row created.
10:56:12 SQL> commit;
Commit complete.
10:56:14 SQL>
10:56:14 SQL> insert into scott.test values ('Data two');
1 row created.
10:56:44 SQL> commit;
Commit complete.
10:56:47 SQL>
10:57:07 SQL> Select TO_CHAR(sysdate, 'DD-MON-YYYY HH24:MI:SS') timestamp from dual;
TIMESTAMP
--------------------
17-FEB-2013 10:57:08
10:57:08 SQL>

We switch the redo logs so that an archived log file is created for the inserts/transactions above.

10:57:08 SQL> alter system switch logfile;
System altered.
10:57:26 SQL> alter system switch logfile;
System altered.
10:57:29 SQL>

Get the list of archives generated for the day using the command below. From the output, identify the archive logs you are going to mine with LogMiner.

col name for a60
set line 200
set pages 50
select thread#, name, sequence#, completion_time from v$archived_log where completion_time like '17-FEB-2013%' order by completion_time desc;
 10:57:46 SQL> select thread#,name, sequence#, completion_time from v$archived_log where completion_time like '17-FEB-2013%' order by completion_time desc;
    THREAD# NAME                                                          SEQUENCE# COMPLETION_TIME
---------- ------------------------------------------------------------ ---------- --------------------
         1 C:\ORACLE\APP\ORADATA\ARCHIVE\ARC0000000179_0764607748.0001         179 17-FEB-2013 10:57:30
         1 C:\ORACLE\APP\ORADATA\ARCHIVE\ARC0000000178_0764607748.0001         178 17-FEB-2013 10:57:27
         1 C:\ORACLE\APP\ORADATA\ARCHIVE\ARC0000000177_0764607748.0001         177 17-FEB-2013 10:55:05
         1 C:\ORACLE\APP\ORADATA\ARCHIVE\ARC0000000176_0764607748.0001         176 17-FEB-2013 10:52:49
......
         1 C:\ORACLE\APP\ORADATA\ARCHIVE\ARC0000000171_0764607748.0001         171 17-FEB-2013 10:28:54

9 rows selected.

4)  Extracting the LogMiner Dictionary to a Flat File (dictionary.ora)

To extract the dictionary to a flat file, set the dictionary_filename parameter to the name of the file, the dictionary_location set to a directory on the database machine specified in UTL_FILE_DIR, and optionally set the options parameter to the constant DBMS_LOGMNR_D.STORE_IN_FLAT_FILE. Using these options, the BUILD procedure will query the data dictionary tables of the current database and create a text-based file containing the contents of the tables.

exec dbms_logmnr_d.build (dictionary_filename => 'dictionary.ora', dictionary_location => 'C:\Oracle\app\oradata\logmin');
OR
Exec  dbms_logmnr_d.build('dictionary.ora','C:\Oracle\app\oradata\logmin');

e.g.:

10:58:20 SQL> exec dbms_logmnr_d.build (dictionary_filename => 'dictionary.ora', dictionary_location => 'C:\Oracle\app\oradata\logmin');
PL/SQL procedure successfully completed.
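
You can also pass the options constant explicitly; this is equivalent to the default flat-file behavior described above (a sketch):

exec dbms_logmnr_d.build (dictionary_filename => 'dictionary.ora', dictionary_location => 'C:\Oracle\app\oradata\logmin', options => dbms_logmnr_d.store_in_flat_file);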

Now add the archive files for mining:

-------------------------------------------------------------------------------------------------------------------

Notes:
http://docs.oracle.com/cd/B28359_01/appdev.111/b28419/d_logmnr.htm
exec dbms_logmnr.add_logfile (LogFileName => 'C:\ORACLE\APP\ORADATA\ARCHIVE\ARC0000000178_0764607748.0001', Options => dbms_logmnr.new);

Or

exec  dbms_logmnr.add_logfile(LogFileName =>'C:\ORACLE\APP\ORADATA\ARCHIVE\ARC0000000179_0764607748.0001',Options => dbms_logmnr.addfile);
-------------------------------------------------------------------------------------------------------------------
10:58:35 SQL> exec dbms_logmnr.add_logfile(LogFileName =>'C:\ORACLE\APP\ORADATA\ARCHIVE\ARC0000000178_0764607748.0001',Options => dbms_logmnr.new);
PL/SQL procedure successfully completed.

5) To verify that the file has been added, we'll query the V$LOGMNR_LOGS view, which lists all log files that LogMiner will examine:

col filename for a70
Set line 200
select log_id, filename from v$logmnr_logs;
10:59:35 SQL> select log_id, filename from v$logmnr_logs;
LOG_ID FILENAME
---------- ----------------------------------------------------------------------
178 C:\ORACLE\APP\ORADATA\ARCHIVE\ARC0000000178_0764607748.0001
10:59:36 SQL>

6) Find the start and end times for the archive log containing the entries.

ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';
col filename for a70
set line 200
select FILENAME,THREAD_SQN,low_time,high_time,low_scn,next_scn from v$logmnr_logs;
10:59:56 SQL> select FILENAME,THREAD_SQN,low_time,high_time,low_scn,next_scn from v$logmnr_logs;
FILENAME                                                               THREAD_SQN LOW_TIME             HIGH_TIME               LOW_SCN   NEXT_SCN
---------------------------------------------------------------------- ---------- -------------------- -------------------- ---------- ----------
C:\ORACLE\APP\ORADATA\ARCHIVE\ARC0000000178_0764607748.0001                   178 17-FEB-2013 10:55:05 17-FEB-2013 10:57:26    1561423    1561493
10:59:56 SQL>
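
The LOW_SCN and NEXT_SCN values above offer an alternative way to bound the mining session by SCN instead of by time (a sketch using the values just shown):

exec dbms_logmnr.start_logmnr(dictfilename => 'C:\Oracle\app\oradata\logmin\dictionary.ora', -
startscn => 1561423, -
endscn   => 1561493);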

7) Start LogMiner for the time range identified above.

exec dbms_logmnr.start_logmnr(dictfilename => 'C:\Oracle\app\oradata\logmin\dictionary.ora', -
starttime => to_date('17-FEB-2013 10:55:05', 'DD-MON-YYYY HH24:MI:SS'), -
endtime   => to_date('17-FEB-2013 10:57:26', 'DD-MON-YYYY HH24:MI:SS'));
11:00:37 SQL> execute dbms_logmnr.start_logmnr(dictfilename =>'C:\Oracle\app\oradata\logmin\dictionary.ora', -
11:00:38 >                              starttime => to_date('17-FEB-2013 10:55:05', 'DD-MON-YYYY HH24:MI:SS'), -
11:00:38 >                              endtime   => to_date('17-FEB-2013 10:57:26', 'DD-MON-YYYY HH24:MI:SS'));
PL/SQL procedure successfully completed.
11:00:41 SQL>
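
START_LOGMNR also accepts an OPTIONS argument. For example, DBMS_LOGMNR.COMMITTED_DATA_ONLY restricts V$LOGMNR_CONTENTS to committed transactions, which would filter out most of the unrelated COMMIT rows seen in the output below (a sketch):

exec dbms_logmnr.start_logmnr(dictfilename => 'C:\Oracle\app\oradata\logmin\dictionary.ora', -
starttime => to_date('17-FEB-2013 10:55:05', 'DD-MON-YYYY HH24:MI:SS'), -
endtime   => to_date('17-FEB-2013 10:57:26', 'DD-MON-YYYY HH24:MI:SS'), -
options   => dbms_logmnr.committed_data_only);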
 

8) View mined data

The redo entries mined from the archived file by LogMiner are made available through the V$LOGMNR_CONTENTS view for the current LogMiner session. Of all the LogMiner views, this is the one you will use most often.

COLUMN username   FORMAT A8
COLUMN operation  FORMAT A9
COLUMN sql_redo   FORMAT A25 WORD_WRAPPED
COLUMN sql_undo   FORMAT A25 WORD_WRAPPED
COLUMN timestamp  FORMAT A20
SELECT
username,SEG_OWNER
, operation
, sql_redo
, sql_undo
, TO_CHAR(timestamp, 'DD-MON-YYYY HH24:MI:SS') timestamp
, scn
FROM
v$logmnr_contents
WHERE operation in ('INSERT','COMMIT')
and (SQL_REDO like '%TEST%' or SQL_REDO like '%commit%');
USERNAME SEG_OWNER                        OPERATION SQL_REDO                  SQL_UNDO               TIMESTAMP              SCN
-------- -------------------------------- --------- ------------------------- ------------------------- -------------------- ----------
SYS                                       COMMIT    commit;                                          17-FEB-2013 10:55:54       1561440
SYS                                       COMMIT    commit;                                          17-FEB-2013 10:55:54       1561442
SYS                                       COMMIT    commit;                                          17-FEB-2013 10:55:54       1561443
SYS                                       COMMIT    commit;                                          17-FEB-2013 10:56:12       1561453
SYS      SCOTT                            INSERT    insert into               delete from            17-FEB-2013 10:56:14       1561455
"SCOTT"."TEST"("MESG")    "SCOTT"."TEST" where
values ('Data one');      "MESG" = 'Data one' and
ROWID =
'AAAROZAAEAAAADjAAA';
SYS                                       COMMIT    commit;                                          17-FEB-2013 10:56:14       1561456
UNKNOWN                                   COMMIT    commit;                                          17-FEB-2013 10:56:37       1561465
SYS      SCOTT                            INSERT    insert into               delete from            17-FEB-2013 10:56:45       1561467
"SCOTT"."TEST"("MESG")    "SCOTT"."TEST" where
values ('Data two');      "MESG" = 'Data two' and
ROWID =
'AAAROZAAEAAAADjAAB';
SYS                                       COMMIT    commit;                                          17-FEB-2013 10:56:47       1561469
SYS                                       COMMIT    commit;                                          17-FEB-2013 10:56:54       1561473
SYS                                       COMMIT    commit;                                          17-FEB-2013 10:56:54       1561475
SYS                                       COMMIT    commit;                                          17-FEB-2013 10:56:54       1561477
12 rows selected.
11:03:03 SQL>
SELECT
username,SEG_OWNER
, operation
, sql_redo
, sql_undo
, TO_CHAR(timestamp, 'DD-MON-YYYY HH24:MI:SS') timestamp
, scn
FROM
v$logmnr_contents
WHERE SEG_OWNER = 'SCOTT'
and operation in ('INSERT','COMMIT')
--and (SQL_REDO like '%TEST%' or SQL_REDO like '%commit%')
/
USERNAME SEG_OWNER                        OPERATION SQL_REDO                  SQL_UNDO               TIMESTAMP              SCN
-------- -------------------------------- --------- ------------------------- ------------------------- -------------------- ----------
SYS      SCOTT                            INSERT    insert into               delete from            17-FEB-2013 10:56:14       1561455
"SCOTT"."TEST"("MESG")    "SCOTT"."TEST" where
values ('Data one');      "MESG" = 'Data one' and
ROWID =
'AAAROZAAEAAAADjAAA';
SYS      SCOTT                            INSERT    insert into               delete from            17-FEB-2013 10:56:45       1561467
"SCOTT"."TEST"("MESG")    "SCOTT"."TEST" where
values ('Data two');      "MESG" = 'Data two' and
ROWID =
'AAAROZAAEAAAADjAAB';
11:01:35 SQL>
SELECT
username
, operation
, sql_redo
, sql_undo
, TO_CHAR(timestamp, 'DD-MON-YYYY HH24:MI:SS') timestamp
, scn
FROM
v$logmnr_contents
WHERE operation = 'INSERT'
and SQL_REDO like '%TEST%';

9) End the LogMiner session.

Exec DBMS_LOGMNR.END_LOGMNR();
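
If you only need to drop one file from the current list and re-run the analysis, rather than end the whole session, DBMS_LOGMNR.REMOVE_LOGFILE is available in 10g and later (a sketch):

exec dbms_logmnr.remove_logfile(LogFileName => 'C:\ORACLE\APP\ORADATA\ARCHIVE\ARC0000000179_0764607748.0001');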

Other details:

Note: If you mine a large number of logs, the CREATE TABLE below will take longer to complete. Run it while the LogMiner session is still active; V$LOGMNR_CONTENTS is only queryable until END_LOGMNR is called.

Create table logmnr_tab as select * from v$logmnr_contents;


Reference:

How to use LogMiner to Extract SQL Redo for DML Against Dropped Table [ID 148937.1]

Oracle9i LogMiner New Features [ID 148616.1]

Logminer Not Returning Any Records [ID 291574.1]

How To Locate SCN Time Of DROP TABLE Using LogMiner [ID 93370.1]

http://www.stanford.edu/dept/itss/docs/oracle/10gR2/appdev.102/b14258/d_logmnr.htm
