Feed aggregator

AWS Secrets Manager

Online Apps DBA - Sun, 2021-03-07 23:04

One of the most common problems identified by code analysis tools is the presence of passwords written directly in configuration files. This can lead to security attacks and system intrusions. AWS Secrets Manager allows us to protect the confidential data needed to access our apps, services, and IT resources. We can also rotate, manage and […]

The post AWS Secrets Manager appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

TensorFlow.js Blueprint App Step by Step

Andrejus Baranovski - Sun, 2021-03-07 06:31
I describe how to prepare data that comes from an API into the TensorFlow.js Dataset structure, and how to shuffle, normalize, one-hot encode, and batch the data. Next, I move on to model training and explain why fitDataset is the recommended way to train a neural network in TensorFlow.js. Finally, I show how to run inference and print the results. This app is built with React, but the same code can be reused with any JS toolkit/framework.


Compress Historical Interval Partitions

Michael Dinh - Sat, 2021-03-06 23:20

There is a requirement to compress monthly interval partitions older than 36 months.

First, the interval partitions were renamed to more intuitive names, as described in Renaming Interval Partitions.

Even though the rename is not necessary, it does provide more clarity.

Below is a demo that finds and compresses partitions older than 36 months.

SQL> @ partition_interval_delta.sql
SQL> select
  2  partition_name, compression, high_value,
  3  TRUNC(MONTHS_BETWEEN(SYSDATE,(TO_DATE(regexp_substr(partition_name, '[[:digit:]]+'),'YYYYMM')))) months_delta
  4  from dba_tab_partitions
  5  where table_owner = UPPER('&owner')
  6  and table_name = UPPER('&table')
  7  and compression='DISABLED'
  8  and TRUNC(MONTHS_BETWEEN(SYSDATE,(TO_DATE(regexp_substr(partition_name, '[[:digit:]]+'),'YYYYMM')))) >= &months_delta
  9  order by 1
 10  ;
Enter value for owner: app
Enter value for table: interval
Enter value for months_delta: 36

PARTITION_NAME        COMPRESS HIGH_VALUE                                               MONTHS_DELTA
--------------------- -------- -------------------------------------------------------- ------------
P201702               DISABLED TO_DATE(' 2017-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           49
P201705               DISABLED TO_DATE(' 2017-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           46
P201706               DISABLED TO_DATE(' 2017-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           45
P201707               DISABLED TO_DATE(' 2017-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           44
P201708               DISABLED TO_DATE(' 2017-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           43
P201709               DISABLED TO_DATE(' 2017-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           42
P201710               DISABLED TO_DATE(' 2017-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           41
P201711               DISABLED TO_DATE(' 2017-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           40
P201712               DISABLED TO_DATE(' 2018-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           39
P201801               DISABLED TO_DATE(' 2018-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           38
P201802               DISABLED TO_DATE(' 2018-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           37
P201803               DISABLED TO_DATE(' 2018-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           36
SQL> 
SQL> set echo off

SQL> set verify on
SQL> define partition_name = P201702
SQL> alter table APP.INTERVAL modify partition &partition_name compress for OLTP;
old   1: alter table APP.INTERVAL modify partition &partition_name compress for OLTP
new   1: alter table APP.INTERVAL modify partition P201702 compress for OLTP

SQL> alter table APP.INTERVAL move partition &partition_name ONLINE UPDATE INDEXES PARALLEL 8;
old   1: alter table APP.INTERVAL move partition &partition_name ONLINE UPDATE INDEXES PARALLEL 8
new   1: alter table APP.INTERVAL move partition P201702 ONLINE UPDATE INDEXES PARALLEL 8

SQL> define partition_name = P201705
SQL> alter table APP.INTERVAL modify partition &partition_name compress for OLTP;
old   1: alter table APP.INTERVAL modify partition &partition_name compress for OLTP
new   1: alter table APP.INTERVAL modify partition P201705 compress for OLTP

SQL> alter table APP.INTERVAL move partition &partition_name ONLINE UPDATE INDEXES PARALLEL 8;
old   1: alter table APP.INTERVAL move partition &partition_name ONLINE UPDATE INDEXES PARALLEL 8
new   1: alter table APP.INTERVAL move partition P201705 ONLINE UPDATE INDEXES PARALLEL 8

SQL> @ partition_interval_delta.sql
SQL> select
  2  partition_name, compression, high_value,
  3  TRUNC(MONTHS_BETWEEN(SYSDATE,(TO_DATE(regexp_substr(partition_name, '[[:digit:]]+'),'YYYYMM')))) months_delta
  4  from dba_tab_partitions
  5  where table_owner = UPPER('&owner')
  6  and table_name = UPPER('&table')
  7  and compression='DISABLED'
  8  and TRUNC(MONTHS_BETWEEN(SYSDATE,(TO_DATE(regexp_substr(partition_name, '[[:digit:]]+'),'YYYYMM')))) >= &months_delta
  9  order by 1
 10  ;
Enter value for owner: app
Enter value for table: interval
Enter value for months_delta: 36

PARTITION_NAME        COMPRESS HIGH_VALUE                                               MONTHS_DELTA
--------------------- -------- -------------------------------------------------------- ------------
P201706               DISABLED TO_DATE(' 2017-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           45
P201707               DISABLED TO_DATE(' 2017-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           44
P201708               DISABLED TO_DATE(' 2017-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           43
P201709               DISABLED TO_DATE(' 2017-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           42
P201710               DISABLED TO_DATE(' 2017-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           41
P201711               DISABLED TO_DATE(' 2017-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           40
P201712               DISABLED TO_DATE(' 2018-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           39
P201801               DISABLED TO_DATE(' 2018-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           38
P201802               DISABLED TO_DATE(' 2018-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           37
P201803               DISABLED TO_DATE(' 2018-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           36
SQL> set echo off
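
The per-partition steps above can be generated rather than typed. Here is a minimal PL/SQL sketch (assuming the same APP.INTERVAL table, OLTP compression, and partition naming convention as the demo) that finds the qualifying partitions and runs both ALTER statements for each:

declare
  l_months_delta constant number := 36;
begin
  for p in (select partition_name
            from   dba_tab_partitions
            where  table_owner = 'APP'
            and    table_name  = 'INTERVAL'
            and    compression = 'DISABLED'
            and    trunc(months_between(sysdate,
                     to_date(regexp_substr(partition_name, '[[:digit:]]+'), 'YYYYMM'))) >= l_months_delta
            order  by partition_name)
  loop
    -- enable compression for new data, then move to compress the existing rows
    execute immediate 'alter table APP.INTERVAL modify partition '
      || p.partition_name || ' compress for OLTP';
    execute immediate 'alter table APP.INTERVAL move partition '
      || p.partition_name || ' ONLINE UPDATE INDEXES PARALLEL 8';
  end loop;
end;
/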

AWS Certified Machine Learning – Specialty: Step-by-Step Hands-On

Online Apps DBA - Fri, 2021-03-05 07:04

AWS Certified Machine Learning – Specialty [MLS-C01] Step By Step Activity Guides (Hands-On Labs): hands-on activity guides that you must perform in order to learn AWS machine learning and clear the AWS MLS-C01 certification exam. Read the blog at k21academy.com/awsml05 to get a deep understanding of the AWS MLS-C01 certification exam. The blog covers: • […]

The post AWS Certified Machine Learning – Specialty: Step-by-Step Hands-On appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

How to export data for a specific time period using the DBMS_DATAPUMP API

Tom Kyte - Fri, 2021-03-05 04:26
We have a requirement where DB dumps need to be exported using the DBMS_DATAPUMP API for a specific time period, e.g. 01-Jan-2020 to 31-March-2020. I am aware of flashback time and flashback SCN, but those export only as of a specific timestamp, not a time period. Could you please help me with this?
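
One approach worth exploring (a sketch only; the table, column, and directory names below are hypothetical): flashback gives you the data *as of* a point in time, but a date range can often be expressed as a SUBQUERY data filter on a date column:

<code>
declare
  l_handle number;
  l_state  varchar2(30);
begin
  l_handle := dbms_datapump.open(operation => 'EXPORT', job_mode => 'TABLE');
  dbms_datapump.add_file(l_handle, 'orders_q1_2020.dmp', 'DATA_PUMP_DIR');
  dbms_datapump.metadata_filter(l_handle, 'NAME_EXPR', 'IN (''ORDERS'')');
  -- keep only rows whose date column falls inside the period
  dbms_datapump.data_filter(
    handle => l_handle,
    name   => 'SUBQUERY',
    value  => 'WHERE order_date BETWEEN DATE ''2020-01-01'' AND DATE ''2020-03-31''');
  dbms_datapump.start_job(l_handle);
  dbms_datapump.wait_for_job(l_handle, l_state);
end;
/
</code>
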
Categories: DBA Blogs

Hostname for oracle database will change

Tom Kyte - Fri, 2021-03-05 04:26
We have an Oracle database, and the hostname in the listener and tnsnames files is 10.5.6.212. There are some upgrades from the infrastructure team: they will change the VLAN for the server, and the address will become 10.8.6.212. What should I do to prevent any impact on my Oracle database? Note: the OS is Windows Server.
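
In essence this is an edit of the Oracle Net configuration files plus a listener restart. A sketch (the service and alias names are hypothetical; on Windows the files live under %ORACLE_HOME%\network\admin):

<code>
# tnsnames.ora -- point the HOST at the new address
MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 10.8.6.212)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = mydb))
  )

# listener.ora -- same change, then restart the listener:
#   lsnrctl stop
#   lsnrctl start
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.8.6.212)(PORT = 1521))))
</code>

Any clients or database links that reference the old address would need the same edit.
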
Categories: DBA Blogs

cursor

Tom Kyte - Fri, 2021-03-05 04:26
I have a database and I want to print the following: the last 10 rows, the first 10 rows, the largest zipcode number, and the smallest zipcode number. This is the code I used: <code>
import pymysql
db = pymysql.connect(host="192.168.0.180", port=4000, user="maxuser", passwd="maxpwd")
cursor = db.cursor()
cursor.execute("SELECT * FROM zipcodes_one.zipcodes_one where <= MAX_ROWS")
results = cursor.fetchall()
for result in results:
    print(result)
</code> Please help. Thank you
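
For reference, a sketch of the individual queries (assuming MySQL syntax, since pymysql is used, and assuming id and zipcode columns; the names are hypothetical):

<code>
-- first 10 rows (a table has no inherent order, so order by a key)
SELECT * FROM zipcodes_one.zipcodes_one ORDER BY id ASC  LIMIT 10;

-- last 10 rows
SELECT * FROM zipcodes_one.zipcodes_one ORDER BY id DESC LIMIT 10;

-- largest and smallest zipcode
SELECT MAX(zipcode) AS largest, MIN(zipcode) AS smallest
FROM   zipcodes_one.zipcodes_one;
</code>

Each statement can be passed to cursor.execute() in place of the broken WHERE clause above.
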
Categories: DBA Blogs

Can I change an existing column to a computed column with an Alter Table statement

Tom Kyte - Fri, 2021-03-05 04:26
Test case can be found at: https://livesql.oracle.com/apex/livesql/s/lgh3hduetd3lqd74jvuolc78k Given: <code>Drop table t1; Create table t1 (c1 Int Default 0 Not Null, c2 Int Default 0 Not Null);</code> Is it then possible to: <code>Alter table t1 Modify c2 as (c1 + 1);</code> When I do I get the following result, so I am beginning to believe this cannot be done: Table dropped Table created Alter table t1 Modify c2 as (c1 + 1) ORA-54026: Real column cannot have an expression Appreciate your help as I have not been able to find any documentation that would confirm this behaviour. Thanks, Aubrey
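
For what it's worth, ORA-54026 does confirm the restriction: an existing ("real") column cannot be converted to a virtual column in place. A common workaround (a sketch) is to add a new virtual column, drop the old column, and rename:

<code>
alter table t1 add (c2_new as (c1 + 1));
alter table t1 drop column c2;
alter table t1 rename column c2_new to c2;
</code>
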
Categories: DBA Blogs

Oracle Pivot table

Tom Kyte - Fri, 2021-03-05 04:26
I have an Oracle pivot query that works with 19 values in the IN clause, but when I add a 20th, it fails with an ORA-01427 error. This works: <code>select * from (select serial_no, buyoff_user || chr(13) || to_char(buyoff_date) || chr(13) || serial_oper_status || chr(13) || NCR as UPD_BY, oper_no from db1.vw_status s where order_no = '123456') pivot (min(UPD_BY) for oper_no in ('10.0','20.0','30.0','40.0','50.0','60.0','70.0','80.0','90.0','100.0','110.0','120.0','130.0','140.0','150.0','160.0','170.0','180.0','190.0')) order by serial_no; </code> This fails: <code>select * from (select serial_no, buyoff_user || chr(13) || to_char(buyoff_date) || chr(13) || serial_oper_status || chr(13) || NCR as UPD_BY, oper_no from db1.vw_status s where order_no = '123456') pivot (min(UPD_BY) for oper_no in ('10.0','20.0','30.0','40.0','50.0','60.0','70.0','80.0','90.0','100.0','110.0','120.0','130.0','140.0','150.0','160.0','170.0','180.0','190.0','200.0')) order by serial_no;</code> I've verified that it is not a data issue, and I can get the query to run if I take out any one value in the IN clause. This works as well (leave '200.0' in and remove '60.0'): <code>select * from (select serial_no, buyoff_user || chr(13) || to_char(buyoff_date) || chr(13) || serial_oper_status || chr(13) || NCR as UPD_BY, oper_no from db1.vw_status s where order_no = '123456') pivot (min(UPD_BY) for oper_no in ('10.0','20.0','30.0','40.0','50.0','70.0','80.0','90.0','100.0','110.0','120.0','130.0','140.0','150.0','160.0','170.0','180.0','190.0','200.0')) order by serial_no;</code> Am I going crazy (probably) or is there something I am missing? Thanks.
Categories: DBA Blogs

Manual block level recovery steps on standby database using backup from primary database

Tom Kyte - Fri, 2021-03-05 04:26
Greetings, As I was reading the 19c Data Guard manual, I came across a topic about being able to use a backup taken at the primary to perform a block level recovery at its standby database. I read that block level recovery can be done automatically if we have purchased an Active Data Guard license. It also states that it can be done manually, but it doesn't give the steps on how to perform it. From the Data Guard manual: 10.2.1.6 Manual Block Media Recovery. Are you able to provide the steps to perform a manual block level recovery on a standby database using a backup from the primary database, without a catalog? It would be nice if we could simply use the "FOR SERVICE" clause. Thanks, John
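
Pending a definitive answer, a rough sketch of the manual, catalog-less approach (paths, file and block numbers are hypothetical): make the primary's backup pieces readable from the standby host, catalog them into the standby's control file, then recover the damaged blocks:

<code>
RMAN> CATALOG START WITH '/backups/primary/';
RMAN> RECOVER DATAFILE 7 BLOCK 42;
</code>
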
Categories: DBA Blogs

FAST RECOVERY AREA best practice and not recommend

Tom Kyte - Fri, 2021-03-05 04:26
Greetings, My question involves the use of the FAST RECOVERY AREA for online redo logs and even control files. Typically, the backup mount points are on slower disks. Therefore, if we were to set DB_RECOVERY_FILE_DEST to that backup mount, my assumption is that Oracle wouldn't recommend placing online redo logs on those slower backup mounts. Also, if we place the control files there, we would be decreasing fault tolerance: the storage backup admin could decide to do some maintenance on the backup system and forget to coordinate with the DBA. I'm just trying to understand when it is a good idea to use the FAST RECOVERY AREA and when it is not. I've been working with Oracle for over 20 years, and not using the FAST RECOVERY AREA has worked out great for me. "When you enable fast recovery in the init.ora file, Oracle Database writes all RMAN backups, archive logs, control file automatic backups, and database copies to the fast recovery area." Oracle 18c documentation. Thanks for your help, John
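
For context, enabling the fast recovery area is just two parameters (the size must be set before the destination; the values are illustrative only):

<code>
alter system set db_recovery_file_dest_size = 200G scope=both;
alter system set db_recovery_file_dest = '/u02/fra' scope=both;
</code>
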
Categories: DBA Blogs

Comparing Timestamps with Time Zone

Jeff Kemp - Fri, 2021-03-05 01:57

If you break out into a sweat reading the title, it probably means that, like me, you have had too little exposure to working with timestamps in Oracle.

(meme pretending that Gordon Ramsay says "I hate timezones ... in all timezones")

Until recently I never really had much to do with time zones because I lived in the (now even more so, due to covid) insular state of Western Australia. In WA most places pretend that there is no such thing as time zones – so our exposure to Oracle data types is limited to simple DATEs and TIMESTAMPs, with nary a time zone in sight. We just set the server time zone to AUSTRALIA/Perth and forget it.

Now I’ve helped build a system that needs to concurrently serve the needs of customers in any time zone – whether in the US, in Africa, or here in Australia. We therefore set the server time zone to UTC and use data types that support time zones, namely:

  • TIMESTAMP WITH TIME ZONE – for dates and times that need to include the relevant time zone;
    and
  • TIMESTAMP WITH LOCAL TIME ZONE – for dates and times of system events (e.g. record audit data) that we want to always be shown as of the session time zone (i.e. UTC), and we don’t care what time zone they were originally created in.
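
As a sketch, a hypothetical table using both types might be declared like this:

create table appointment (
    appt_id          integer primary key,
    -- business time: the relevant time zone travels with the value
    appointment_time timestamp with time zone,
    -- system audit time: stored normalized, shown in the session (UTC) time zone
    created          timestamp with local time zone default systimestamp
);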

A colleague came to me with the following issue: a business rule needed to compare an appointment date/time with the current date; if the appointment was for the prior day, an error message should be shown saying that they were too late for their appointment. A test case was failing and they couldn't see why.

Here is the code (somewhat obfuscated):

if appointment_time < trunc(current_time) then
    :p1_msg := 'This appointment was for the previous day and has expired.';
end if;

We had used TRUNC here because we want to check if the appointment time was prior to midnight of the current date, from the perspective of the relevant time zone. The values of appointment_time and current_time seemed to indicate it shouldn’t fail:

appointment_time = 05-MAR-2021 07.00.00.000000 AM AUSTRALIA/Perth
current_time     = 05-MAR-2021 06.45.00.000000 AM AUSTRALIA/Perth

We can see that the appointment time and current time are in the same time zone, and the same day – so the tester expected no error message would be shown. (Note that the “current time” here is computed using localtimestamp at the time zone of the record being compared)

After checking that our assumptions were correct (yes, both appointment_time and current_time are TIMESTAMP WITH TIME ZONEs; and yes, they had the values shown above) we ran a query on the database to start testing our assumptions about the logic being run here.

select
    to_timestamp_tz('05-MAR-2021 07.00.00.000000 AM AUSTRALIA/Perth') as appt_time,
    to_timestamp_tz('05-MAR-2021 06.45.00.000000 AM AUSTRALIA/Perth') as current_time
from dual

APPT_TIME    = '05-MAR-2021 07.00.00.000000000 AM AUSTRALIA/PERTH'
CURRENT_TIME = '05-MAR-2021 06.45.00.000000000 AM AUSTRALIA/PERTH'

So far so good. What does an ordinary comparison show for these values?

with q as (
    select
        to_timestamp_tz('05-MAR-2021 07.00.00.000000 AM AUSTRALIA/Perth') as appt_time,
        to_timestamp_tz('05-MAR-2021 06.45.00.000000 AM AUSTRALIA/Perth') as current_time
    from dual)
select
    q.appt_time,
    q.current_time,
    case when q.appt_time < q.current_time then 'FAIL' else 'SUCCESS' end test
from q;

APPT_TIME    = '05-MAR-2021 07.00.00.000000000 AM AUSTRALIA/PERTH'
CURRENT_TIME = '05-MAR-2021 06.45.00.000000000 AM AUSTRALIA/PERTH'
TEST         = 'SUCCESS'

That’s what we expected; the appointment time is not before the current time, so the test is successful. Now, let’s test the expression actually used in our failing code, where the TRUNC has been added:

with q as (
    select
        to_timestamp_tz('05-MAR-2021 07.00.00.000000 AM AUSTRALIA/Perth') as appt_time,
        to_timestamp_tz('05-MAR-2021 06.45.00.000000 AM AUSTRALIA/Perth') as current_time
    from dual)
select
    q.appt_time,
    q.current_time,
    trunc(q.current_time),
    case when q.appt_time < trunc(q.current_time) then 'FAIL' else 'SUCCESS' end test
from q;

APPT_TIME           = '05-MAR-2021 07.00.00.000000000 AM AUSTRALIA/PERTH'
CURRENT_TIME        = '05-MAR-2021 06.45.00.000000000 AM AUSTRALIA/PERTH'
TRUNC(CURRENT_TIME) = '03/05/2021'
TEST                = 'FAIL'

Good: we have reproduced the problem. Now we can try to work out why it is failing. My initial suspicion was that an implicit conversion was causing the issue – perhaps the appointment date was being converted to a DATE prior to the comparison, and was somehow being converted to the UTC time zone, which was the database time zone?

with q as (
    select
        to_timestamp_tz('05-MAR-2021 07.00.00.000000 AM AUSTRALIA/Perth') as appt_time,
        to_timestamp_tz('05-MAR-2021 06.45.00.000000 AM AUSTRALIA/Perth') as current_time
    from dual)
select
    q.appt_time,
    q.current_time,
    cast(q.appt_time as date),
    cast(q.current_time as date)
from q;

APPT_TIME                  = '05-MAR-2021 07.00.00.000000000 AM AUSTRALIA/PERTH'
CURRENT_TIME               = '05-MAR-2021 06.45.00.000000000 AM AUSTRALIA/PERTH'
CAST(APPT_TIME AS DATE)    = '03/05/2021 07:00:00 AM'
CAST(CURRENT_TIME AS DATE) = '03/05/2021 06:45:00 AM'

Nope. When cast to a DATE, both timestamps still fall on the same date. Then I thought, maybe when a DATE is compared with a TIMESTAMP, Oracle first converts the DATE to a TIMESTAMP?

with q as (
    select
        to_timestamp_tz('05-MAR-2021 07.00.00.000000 AM AUSTRALIA/Perth') as appt_time,
        to_timestamp_tz('05-MAR-2021 06.45.00.000000 AM AUSTRALIA/Perth') as current_time
    from dual)
select
    q.appt_time,
    q.current_time,
    cast(trunc(q.current_time) as timestamp with time zone),
    case when q.appt_time < trunc(q.current_time) then 'FAIL' else 'SUCCESS' end test
from q;

APPT_TIME                              = '05-MAR-2021 07.00.00.000000000 AM AUSTRALIA/PERTH'
CURRENT_TIME                           = '05-MAR-2021 06.45.00.000000000 AM AUSTRALIA/PERTH'
CAST(TRUNC(CURRENT_TIME) AS TIMESTAMP) = '05-MAR-2021 12.00.00.000000 AM +00:00'
TEST                                   = 'FAIL'

Ah! Now we can see the cause of our problem. After TRUNCating a timestamp, we have converted it to a DATE (with no timezone information); since Oracle needs to implicitly convert this back to a TIMESTAMP WITH TIME ZONE, it simply slaps the UTC time zone on it. Now, when it is compared with the appointment time, it fails the test because the time is 12am (midnight) versus 7am.

Our original requirement was only to compare the dates involved, not the time of day; if the appointment was on the previous day (in the time zone relevant to the record), the error message should appear. We therefore need to ensure that Oracle performs no implicit conversion, by first converting the appointment time to a DATE:

with q as (
    select
        to_timestamp_tz('05-MAR-2021 07.00.00.000000 AM AUSTRALIA/Perth') as appt_time,
        to_timestamp_tz('05-MAR-2021 06.45.00.000000 AM AUSTRALIA/Perth') as current_time
    from dual)
select
    q.appt_time,
    q.current_time,
    case when cast(q.appt_time as date) < trunc(q.current_time) then 'FAIL' else 'SUCCESS' end test
from q;

APPT_TIME                              = '05-MAR-2021 07.00.00.000000000 AM AUSTRALIA/PERTH'
CURRENT_TIME                           = '05-MAR-2021 06.45.00.000000000 AM AUSTRALIA/PERTH'
TEST                                   = 'SUCCESS'

Our logic therefore should be:

if cast(appointment_time as date) < trunc(current_time) then
    :p1_msg := 'This appointment was for the previous day and has expired.';
end if;

It should be noted that if the tester had done this just an hour later in the day, they would not have noticed this problem – because Perth is +08:00, and the timestamps for the test data were prior to 8am.

Lesson #1: in any system that deals with timestamps and time zones it’s quite easy for subtle bugs to survive quite a bit of testing.

Lesson #2: when writing any comparison code involving timestamps and time zones, make sure that the data types are identical – and if they aren’t, add code to explicitly convert them first.

Fetching last record from a table

Tom Kyte - Thu, 2021-03-04 10:06
I was wondering if there is some possible way that I can fetch the last record from a table. In this particular case, obviously one is unaware of the number of records in a table. All I want is to write a query which will simply fetch all the columns of the last record. I have tried using ROWNUM, but I guess it doesn't work that way. If I generate a report, then I want to base my report on the same query which will simply fetch all the columns of the last record. Are there any built-ins in Report Builder which can fetch the last record of a table? Thanks Hassan
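
For the record, a table has no inherent "last" row; a sketch assuming "last" is defined by a date or sequence column (my_table and created_date are hypothetical names):

<code>
select *
from   (select * from my_table order by created_date desc)
where  rownum = 1;
</code>

On 12c and later the same thing can be written with ORDER BY ... FETCH FIRST 1 ROW ONLY.
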
Categories: DBA Blogs

select purging in V$ARCHIVED_LOG

Tom Kyte - Thu, 2021-03-04 10:06
Hello, We would like to purge old entries in V$ARCHIVED_LOG in primary and standby databases because we have noted that some queries using V$DATABASE, V$BACKUP_SET and V$ARCHIVED_LOG, especially on standby, are sometimes slow. We have control_file_record_keep_time set to 25, but on some databases we sometimes have thousands of entries in V$ARCHIVED_LOG which are older: sometimes 2 months old and sometimes older than 1 year. Example: <code>
> show parameter control_file_record_keep_time

control_file_record_keep_time integer 25

> select sysdate - min(first_time) from v$archived_log;

SYSDATE-MIN(FIRST_TIME)
-----------------------
             705.846887
</code> Is there a way to run a selective purge of V$ARCHIVED_LOG in primary databases? I know MOS Note 845361.1, but that is a full purge. Thanks.
Categories: DBA Blogs

Insert 400 million faster from one table to another table

Tom Kyte - Thu, 2021-03-04 10:06
Hi Tom, We have trouble loading 400 million records from one table (temp table) to the main table. We were previously getting 200 million records from upstream and were able to load them in 3 hrs using the DBMS_PARALLEL_EXECUTE approach. Now the data has increased to 400 million and we expect 6 to 7 hrs to load. Is there any other approach so that the data can be inserted faster? Below is the code used currently. <code>
BEGIN
  DBMS_PARALLEL_EXECUTE.DROP_TASK (L_TASK_NAME);
EXCEPTION
  WHEN OTHERS THEN NULL;
END;

--- This create task will create a task which can be seen in USER_PARALLEL_EXECUTE_TASKS table
DBMS_PARALLEL_EXECUTE.CREATE_TASK (L_TASK_NAME);

--This statement chunks the data based on Rowid
DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID (L_TASK_NAME, 'TIREC', 'TMP_PROFILE_OFFER_ORG', TRUE, 500000);

L_SQL := 'INSERT INTO T_PROFILE_OFFER
            (PROFILE_ID, ENROLLMENT_ID, MEM_OFFER_ID, OFFER_ID, KIT_ID, OFFER_CODE,
             START_DATE, END_DATE, OFFER_PRIORITY, ACTIVATION_STATUS, STATUS, CREATE_USER, UPDATE_USER)
          SELECT PROFILE_ID, ENROLLMENT_ID, MEM_OFFER_ID, OFFER_ID, KIT_ID, OFFER_CODE,
                 START_DATE, END_DATE, OFFER_PRIORITY, ACTIVATION_STATUS, STATUS, CREATE_USER, UPDATE_USER
          FROM TMP_PROFILE_OFFER_ORG
          WHERE ROWID BETWEEN :start_id AND :end_id
          LOG ERRORS INTO ERR$_T_PROFILE_OFFER_MO (''B109: Insert'') REJECT LIMIT UNLIMITED';

--Following statement runs multiple sessions based on the parallel level supplied
DBMS_PARALLEL_EXECUTE.RUN_TASK (TASK_NAME => L_TASK_NAME, SQL_STMT => L_SQL, LANGUAGE_FLAG => DBMS_SQL.NATIVE, PARALLEL_LEVEL => L_DEGREE);

--If the task failed, retry until it is finished or 5 attempts have been made.
LOOP
  EXIT WHEN DBMS_PARALLEL_EXECUTE.TASK_STATUS (L_TASK_NAME) = DBMS_PARALLEL_EXECUTE.FINISHED
         OR L_ATTEMPTS > L_RETRIES;
  L_ATTEMPTS := L_ATTEMPTS + 1;
  DBMS_PARALLEL_EXECUTE.RESUME_TASK (L_TASK_NAME);
END LOOP;
</code> Thanks, Jaganath K
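
One alternative worth testing (a sketch, not a guaranteed win): replace the chunked conventional inserts with a single direct-path parallel insert, which bypasses the buffer cache and generates far less undo:

<code>
alter session enable parallel dml;

insert /*+ append parallel(t, 8) */ into t_profile_offer t
select /*+ parallel(s, 8) */
       profile_id, enrollment_id, mem_offer_id, offer_id, kit_id,
       offer_code, start_date, end_date, offer_priority,
       activation_status, status, create_user, update_user
from   tmp_profile_offer_org s;

commit;
</code>

Note that a direct-path insert locks the table and the loaded rows are not visible until commit, so this suits a maintenance-window load.
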
Categories: DBA Blogs

Commit point optimization and Golden Gate

Tom Kyte - Thu, 2021-03-04 10:06
Hi, We have a rather complicated cleaning process. We have implemented it so that we divide the key space into 4 separate number ranges and run them concurrently. Each of the 4 processes decides whether a row is to be deleted and, if so, deletes the row and its dependent rows by cascade delete. Because there were occasional deadlocks, each deletion is in its own transaction. The code is PL/SQL, so no waits for writing redo logs should be happening (commit point optimization). However, at some point the replication of the database was changed from materialized views to Golden Gate (volumes got much bigger and materialized views had serious difficulties handling them). Now we have encountered issues with Golden Gate: replication lags when large deletes are done. My question is: if commit point optimization is used in the source system, is it also applied in the target system? Should we try to increase the interval between commits so the number of redo log write waits in the target system gets smaller? Something else? lh
Categories: DBA Blogs

use_nl_with_index

Jonathan Lewis - Thu, 2021-03-04 09:59

One of the less well-known hints is the hint /*+ use_nl_with_index() */  (link to 19c reference manual) which appeared in the 10g timeline, where the manuals supply the following description:

The USE_NL_WITH_INDEX hint will cause the optimizer to join the specified table to another row source with a nested loops join using the specified table as the inner table but only under the following condition. If no index is specified, the optimizer must be able to use some index with at least one join predicate as the index key. If an index is specified, the optimizer must be able to use that index with at least one join predicate as the index key.

It looks like a fairly redundant hint, really, since it could easily (and with greater safety, perhaps) be replaced by the pair /*+ use_nl(alias) index(alias) */ with some refinement on the index() hint that I've shown. In fact I think I've only ever seen the hint "in the wild" once, and that was in an internal view definition where it had been used incorrectly (see this recent update to a note on one of the dynamic performance views that I wrote a few years ago).

The note I’ve just referenced prompted me to take a closer look at the hint to see how accurate the definition was. Here’s a data set I created for testing:

rem
rem     Script:         use_nl_with_index.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Mar 2021
rem
rem     Last tested 
rem             19.3.0.0
rem 

create table t1
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4    -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        mod(rownum,10)                  n10,
        mod(rownum,1000)                n1000,
        mod(rownum,2000)                n2000,
        lpad(mod(rownum,1000),10,'0')   v1000,
        lpad('x',100,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5   -- > comment to avoid WordPress format issue
;

create table t2 as
select distinct
        n10, n1000, v1000
from
        t1
;

create index t1_i1000 on t1(n1000);
create index t1_i10_1000 on t1(n10,n1000);
create index t1_i2000 on t1(n2000);
create bitmap index t1_b1 on t1(n1000, n10);

I’ve set up the data to do a join between t2 and t1, and I’m going to hint a query to force the join order t2 -> t1, and thanks to the data pattern the default path should be a hash join. Once I’ve established the default path I’m going to use the use_nl_with_index() hint to see how it behaves with respect to the various indexes I’ve created. So here’s the query with the default path:

set autotrace traceonly explain

select  
        /*+ leading(t2 t1) */
        t1.*
from    t2, t1
where
        t2.n10 = 1
and     t1.n1000 = t2.n1000
;

Execution Plan
----------------------------------------------------------
Plan hash value: 2959412835

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 10000 |  1318K|   259   (8)| 00:00:01 |
|*  1 |  HASH JOIN         |      | 10000 |  1318K|   259   (8)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL| T2   |   100 |   700 |     2   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| T1   |   100K|    12M|   252   (6)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."N1000"="T2"."N1000")
   2 - filter("T2"."N10"=1)

Note
-----
   - this is an adaptive plan

So the join order is as required, and the default is a hash join. The join predicate is t1.n1000 = t2.n1000, and if you examine the indexes I've created you'll see I've got

  • t1_i1000 on t1(n1000) – the perfect index
  • t1_i10_1000 on t1(n10, n1000) – which could be used for a skip scan
  • t1_i2000 on t1(n2000) – which doesn’t include a suitable join predicate
  • t1_b1 on t1(n1000, n10) – which is a bitmap index

So here are the first batch of tests – all rolled into a single statement with optional hints included:

select  
        /*+ 
                leading(t2 t1) 
                use_nl_with_index(t1) 
--              use_nl_with_index(t1 t1_i1000)
--              use_nl_with_index(t1(n1000))
        */
        t1.*
from    t2, t1
where
        t2.n10 = 1
and     t1.n1000 = t2.n1000
;


Execution Plan
----------------------------------------------------------
Plan hash value: 3315267048

-----------------------------------------------------------------------------------------
| Id  | Operation                    | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |          | 10000 |  1318K| 10133   (1)| 00:00:01 |
|   1 |  NESTED LOOPS                |          | 10000 |  1318K| 10133   (1)| 00:00:01 |
|   2 |   NESTED LOOPS               |          | 10000 |  1318K| 10133   (1)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL         | T2       |   100 |   700 |     2   (0)| 00:00:01 |
|*  4 |    INDEX RANGE SCAN          | T1_I1000 |   100 |       |     1   (0)| 00:00:01 |
|   5 |   TABLE ACCESS BY INDEX ROWID| T1       |   100 | 12800 |   101   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("T2"."N10"=1)
   4 - access("T1"."N1000"="T2"."N1000")

If I don’t specify an index the optimizer picks the best possible index; alternatively I can specify the index on (n1000) by name or by description and the optimizer will still use it. So what do I get if I reference the index on (n2000):

select  
        /*+ 
                leading(t2 t1) 
                use_nl_with_index(t1(n2000))
        */
        t1.*
from    t2, t1
where
        t2.n10 = 1
and     t1.n1000 = t2.n1000
;


Execution Plan
----------------------------------------------------------
Plan hash value: 2959412835

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 10000 |  1318K|   259   (8)| 00:00:01 |
|*  1 |  HASH JOIN         |      | 10000 |  1318K|   259   (8)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL| T2   |   100 |   700 |     2   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| T1   |   100K|    12M|   252   (6)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."N1000"="T2"."N1000")
   2 - filter("T2"."N10"=1)

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------
   3 -  SEL$1 / T1@SEL$1
         U -  use_nl_with_index(t1(n2000))

Note
-----
   - this is an adaptive plan


I’m back to the tablescan with hash join – and since I’m testing on 19.3.0.0 Oracle kindly tells me in the Hint Report that I have an unused hint: the one that can’t be used because the referenced index doesn’t have any columns that are join predicates.

So what about the skip scan option:

select  
        /*+ 
                leading(t2 t1) 
                use_nl_with_index(t1(n10, n1000))
--              use_nl_with_index(t1(n10))
--              index_ss(t1 (n10))
        */
        t1.*
from    t2, t1
where
        t2.n10 = 1
and     t1.n1000 = t2.n1000
;


Even though the index I’ve specified in the hint does contain a column in the join predicate the execution plan reports a full tablescan and hash join – unless I include an explicit index_ss() hint: but in that case I might as well have used the vanilla flavoured use_nl() hint. I did have a look at the 10053 (CBO) trace file for this example, and found that if I didn’t include the index_ss() hint the optimizer calculated the cost of using an index full scan (and no other option) for every single index on t1 before choosing the tablescan with hash join.

Finally, and without repeating the query, I’ll just note that when I referenced t1_b1 (n1000, n10) in the hint Oracle was happy to use the index in a nested loop join:

---------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       | 10000 |  1318K|  2182   (1)| 00:00:01 |
|   1 |  NESTED LOOPS                 |       | 10000 |  1318K|  2182   (1)| 00:00:01 |
|   2 |   NESTED LOOPS                |       | 10000 |  1318K|  2182   (1)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL          | T2    |   100 |   700 |     2   (0)| 00:00:01 |
|   4 |    BITMAP CONVERSION TO ROWIDS|       |       |       |            |          |
|*  5 |     BITMAP INDEX RANGE SCAN   | T1_B1 |       |       |            |          |
|   6 |   TABLE ACCESS BY INDEX ROWID | T1    |   100 | 12800 |  2182   (1)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("T2"."N10"=1)
   5 - access("T1"."N1000"="T2"."N1000")
       filter("T1"."N1000"="T2"."N1000")

Summary

The use_nl_with_index() hint generally works as described in the manuals – with the exception that it doesn’t consider an index skip scan as a valid option when trying to match the join predicate. That exception is one of those annoying little details that could waste a lot of your time.

Since it’s so easy to replace use_nl_with_index() with a pair of hints – including an index hint that could be an index_desc(), index_ss(), or index_combine() hint – I can’t come up with a good reason for using the use_nl_with_index() hint.

Partner Webcast – High Performance Weblogic on OCI with Oracle GraalVM Enterprise

Whether for development of external customer-facing services, or for automating internal business processes, the use of software continues to accelerate. In an endeavor to scale applications with...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Purge Database Audit Trail Table

Michael Dinh - Wed, 2021-03-03 23:12

The segments for the AUD$ and FGA_LOG$ tables reside in the SYSAUX tablespace and will be moved to AUDIT_TBS before configuring the purge.

The requirement is to purge audit records older than 7 years (366*7 = 2562 days), hence l_days NUMBER := 2562.

DEMO:

$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Wed Mar 3 18:22:35 2021

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

18:22:35 SYS@DB01 AS SYSDBA> select file_name from dba_data_files where tablespace_name='SYSAUX';

FILE_NAME
--------------------------------------------------------------------------------
/oradata/db01/datafile/sysaux_01.dbf

Elapsed: 00:00:00.01
18:22:44 SYS@DB01 AS SYSDBA> create tablespace AUDIT_TBS datafile '&location/audit_tbs_01.dbf' size 5g autoextend on maxsize 30g;
Enter value for location: /oradata/db01/datafile
old   1: create tablespace AUDIT_TBS datafile '&location/audit_tbs_01.dbf' size 5g autoextend on maxsize 30g
new   1: create tablespace AUDIT_TBS datafile '/oradata/db01/datafile/audit_tbs_01.dbf' size 5g autoextend on maxsize 30g

Tablespace created.

Elapsed: 00:00:24.68
18:24:29 SYS@DB01 AS SYSDBA> @ audit.sql
18:24:37 SYS@DB01 AS SYSDBA>
18:24:37 SYS@DB01 AS SYSDBA> select owner, segment_name, segment_type, tablespace_name, round(bytes/1024/1024,2) USED_MB
18:24:37   2  from dba_segments where segment_name IN ('AUD$','FGA_LOG$') order by 2
18:24:37   3  ;

OWNER                SEGMENT_NAME         SEGMENT_TYPE       TABLESPACE_NAME         USED_MB
-------------------- -------------------- ------------------ -------------------- ----------
SYS                  AUD$                 TABLE              SYSTEM                     2946
SYS                  FGA_LOG$             TABLE              SYSTEM                      .06

Elapsed: 00:00:00.09
18:24:37 SYS@DB01 AS SYSDBA> select MIN(TIMESTAMP), MAX(TIMESTAMP), TRUNC(systimestamp)-MIN(TIMESTAMP) diff_day, COUNT(*)
18:24:37   2  from dba_audit_trail
18:24:37   3  ;

MIN(TIMES MAX(TIMES   DIFF_DAY   COUNT(*)
--------- --------- ---------- ----------
23-SEP-14 03-MAR-21 2352.58642   20801590

Elapsed: 00:00:53.32
18:25:30 SYS@DB01 AS SYSDBA> select * from dba_audit_mgmt_config_params order by 1
18:25:30   2  ;

PARAMETER_NAME                 PARAMETER_VALUE      AUDIT_TRAIL
------------------------------ -------------------- --------------------
AUDIT FILE MAX AGE             5                    OS AUDIT TRAIL
AUDIT FILE MAX AGE             5                    XML AUDIT TRAIL
AUDIT FILE MAX SIZE            10000                OS AUDIT TRAIL
AUDIT FILE MAX SIZE            10000                XML AUDIT TRAIL
DB AUDIT CLEAN BATCH SIZE      10000                STANDARD AUDIT TRAIL
DB AUDIT CLEAN BATCH SIZE      10000                FGA AUDIT TRAIL
DB AUDIT TABLESPACE            SYSAUX               FGA AUDIT TRAIL
DB AUDIT TABLESPACE            SYSAUX               STANDARD AUDIT TRAIL
OS FILE CLEAN BATCH SIZE       1000                 XML AUDIT TRAIL
OS FILE CLEAN BATCH SIZE       1000                 OS AUDIT TRAIL

10 rows selected.

Elapsed: 00:00:00.01
18:25:30 SYS@DB01 AS SYSDBA> select * from dba_audit_mgmt_last_arch_ts
18:25:30   2  ;

no rows selected

Elapsed: 00:00:00.00
18:25:30 SYS@DB01 AS SYSDBA> exec dbms_audit_mgmt.SET_AUDIT_TRAIL_LOCATION(audit_trail_type => dbms_audit_mgmt.AUDIT_TRAIL_AUD_STD, AUDIT_TRAIL_LOCATION_VALUE => 'AUDIT_TBS')

PL/SQL procedure successfully completed.

Elapsed: 00:09:17.79
18:34:48 SYS@DB01 AS SYSDBA> ;
  1  select * from dba_audit_mgmt_last_arch_ts
  2*
18:34:48 SYS@DB01 AS SYSDBA> exec dbms_audit_mgmt.SET_AUDIT_TRAIL_LOCATION(audit_trail_type => dbms_audit_mgmt.AUDIT_TRAIL_FGA_STD, AUDIT_TRAIL_LOCATION_VALUE => 'AUDIT_TBS')

PL/SQL procedure successfully completed.

Elapsed: 00:00:07.83
18:34:56 SYS@DB01 AS SYSDBA> ;
  1  select * from dba_audit_mgmt_last_arch_ts
  2*
18:34:56 SYS@DB01 AS SYSDBA>
18:34:56 SYS@DB01 AS SYSDBA> begin
18:34:56   2    dbms_audit_mgmt.INIT_CLEANUP (
18:34:56   3      audit_trail_type         => dbms_audit_mgmt.audit_trail_all,
18:34:56   4      default_cleanup_interval => 24);
18:34:56   5  end;
18:34:56   6  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:05.21
18:35:01 SYS@DB01 AS SYSDBA>
18:35:01 SYS@DB01 AS SYSDBA> begin
18:35:01   2  if
18:35:01   3    dbms_audit_mgmt.IS_CLEANUP_INITIALIZED(dbms_audit_mgmt.AUDIT_TRAIL_ALL) then
18:35:01   4    dbms_output.put_line('******* YES *******');
18:35:01   5  else
18:35:01   6    dbms_output.put_line('******* NO *******');
18:35:01   7  end if;
18:35:01   8  end;
18:35:01   9  /
******* YES *******

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
18:35:01 SYS@DB01 AS SYSDBA>
18:35:01 SYS@DB01 AS SYSDBA> select * from dba_audit_mgmt_config_params order by 1
18:35:01   2  ;

PARAMETER_NAME                 PARAMETER_VALUE      AUDIT_TRAIL
------------------------------ -------------------- --------------------
AUDIT FILE MAX AGE             5                    XML AUDIT TRAIL
AUDIT FILE MAX AGE             5                    OS AUDIT TRAIL
AUDIT FILE MAX SIZE            10000                OS AUDIT TRAIL
AUDIT FILE MAX SIZE            10000                XML AUDIT TRAIL
DB AUDIT CLEAN BATCH SIZE      10000                STANDARD AUDIT TRAIL
DB AUDIT CLEAN BATCH SIZE      10000                FGA AUDIT TRAIL
DB AUDIT TABLESPACE            AUDIT_TBS            STANDARD AUDIT TRAIL
DB AUDIT TABLESPACE            AUDIT_TBS            FGA AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL      24                   XML AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL      24                   OS AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL      24                   STANDARD AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL      24                   FGA AUDIT TRAIL
OS FILE CLEAN BATCH SIZE       1000                 OS AUDIT TRAIL
OS FILE CLEAN BATCH SIZE       1000                 XML AUDIT TRAIL

14 rows selected.

Elapsed: 00:00:00.01
18:35:01 SYS@DB01 AS SYSDBA>
18:35:01 SYS@DB01 AS SYSDBA> begin
18:35:01   2    dbms_audit_mgmt.CREATE_PURGE_JOB(
18:35:01   3      audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
18:35:01   4      audit_trail_purge_interval => 24,
18:35:01   5      audit_trail_purge_name     => 'PURGE_ALL_AUDIT_TRAILS',
18:35:01   6      use_last_arch_timestamp    => TRUE);
18:35:01   7  end;
18:35:01   8  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.77
18:35:02 SYS@DB01 AS SYSDBA>
18:35:02 SYS@DB01 AS SYSDBA> begin
18:35:02   2    dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(
18:35:02   3      audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
18:35:02   4      last_archive_time => SYSTIMESTAMP-2562);
18:35:02   5  end;
18:35:02   6  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.05
18:35:02 SYS@DB01 AS SYSDBA>
18:35:02 SYS@DB01 AS SYSDBA> begin
18:35:02   2    DBMS_SCHEDULER.create_job (
18:35:02   3      job_name        => 'AUDIT_LAST_ARCHIVE_TIME',
18:35:02   4      job_type        => 'PLSQL_BLOCK',
18:35:02   5      job_action      => 'DECLARE
18:35:02   6                            l_days NUMBER := 2562;
18:35:02   7                          BEGIN
18:35:02   8                            dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_AUD_STD, TRUNC(SYSTIMESTAMP)-l_days);
18:35:02   9                            dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_FGA_STD, TRUNC(SYSTIMESTAMP)-l_days);
18:35:02  10                            dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_OS,      TRUNC(SYSTIMESTAMP)-l_days);
18:35:02  11                            dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_XML,     TRUNC(SYSTIMESTAMP)-l_days);
18:35:02  12                          END;',
18:35:02  13      start_date      => SYSTIMESTAMP,
18:35:02  14      repeat_interval => 'freq=daily; byhour=0; byminute=0; bysecond=0;',
18:35:02  15      end_date        => NULL,
18:35:02  16      enabled         => TRUE,
18:35:02  17      comments        => 'Automatically set audit last archive time.');
18:35:02  18  end;
18:35:02  19  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.07
18:35:02 SYS@DB01 AS SYSDBA>
18:35:02 SYS@DB01 AS SYSDBA> select * from dba_audit_mgmt_config_params order by 1
18:35:02   2  ;

PARAMETER_NAME                 PARAMETER_VALUE      AUDIT_TRAIL
------------------------------ -------------------- --------------------
AUDIT FILE MAX AGE             5                    XML AUDIT TRAIL
AUDIT FILE MAX AGE             5                    OS AUDIT TRAIL
AUDIT FILE MAX SIZE            10000                OS AUDIT TRAIL
AUDIT FILE MAX SIZE            10000                XML AUDIT TRAIL
DB AUDIT CLEAN BATCH SIZE      10000                STANDARD AUDIT TRAIL
DB AUDIT CLEAN BATCH SIZE      10000                FGA AUDIT TRAIL
DB AUDIT TABLESPACE            AUDIT_TBS            STANDARD AUDIT TRAIL
DB AUDIT TABLESPACE            AUDIT_TBS            FGA AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL      24                   XML AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL      24                   OS AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL      24                   STANDARD AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL      24                   FGA AUDIT TRAIL
OS FILE CLEAN BATCH SIZE       1000                 OS AUDIT TRAIL
OS FILE CLEAN BATCH SIZE       1000                 XML AUDIT TRAIL

14 rows selected.

Elapsed: 00:00:00.00
18:35:02 SYS@DB01 AS SYSDBA> select * from dba_audit_mgmt_last_arch_ts
18:35:02   2  ;

AUDIT_TRAIL          RAC_INSTANCE LAST_ARCHIVE_TS
-------------------- ------------ ---------------------------------------------
STANDARD AUDIT TRAIL            0 26-FEB-14 06.35.02.000000 PM +00:00

Elapsed: 00:00:00.00
18:35:02 SYS@DB01 AS SYSDBA> select job_action from dba_scheduler_jobs where job_name='PURGE_ALL_AUDIT_TRAILS'
18:35:02   2  ;

JOB_ACTION
------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(15, TRUE);  END;

Elapsed: 00:00:00.17
18:35:02 SYS@DB01 AS SYSDBA> select job_action from dba_scheduler_jobs where job_name='AUDIT_LAST_ARCHIVE_TIME'
18:35:02   2  ;

JOB_ACTION
------------------------------------------------------------------------------------------------------------------------------------------------------
DECLARE
                          l_days NUMBER := 2562;
                        BEGIN
                          dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_AUD_STD, TRUNC(SYSTIMESTAMP)-l_days);
                          dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_FGA_STD, TRUNC(SYSTIMESTAMP)-l_days);
                          dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_OS,      TRUNC(SYSTIMESTAMP)-l_days);
                          dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_XML,     TRUNC(SYSTIMESTAMP)-l_days);
                        END;


Elapsed: 00:00:00.14
18:35:02 SYS@DB01 AS SYSDBA> commit;

Commit complete.

Elapsed: 00:00:00.00
18:35:02 SYS@DB01 AS SYSDBA>
--- audit.sql
col parameter_name for a30
col parameter_value for a20
col audit_trail for a20
col owner for a20
col segment_name for a20
col tablespace_name for a20
col last_archive_ts for a45
col job_action for a150
set lines 200 pages 100 serverout on echo on

select owner, segment_name, segment_type, tablespace_name, round(bytes/1024/1024,2) USED_MB
from dba_segments where segment_name IN ('AUD$','FGA_LOG$') order by 2
;
select MIN(TIMESTAMP), MAX(TIMESTAMP), TRUNC(systimestamp)-MIN(TIMESTAMP) diff_day, COUNT(*)
from dba_audit_trail
;
select * from dba_audit_mgmt_config_params order by 1
;
select * from dba_audit_mgmt_last_arch_ts
;
exec dbms_audit_mgmt.SET_AUDIT_TRAIL_LOCATION(audit_trail_type => dbms_audit_mgmt.AUDIT_TRAIL_AUD_STD, AUDIT_TRAIL_LOCATION_VALUE => 'AUDIT_TBS')
;
exec dbms_audit_mgmt.SET_AUDIT_TRAIL_LOCATION(audit_trail_type => dbms_audit_mgmt.AUDIT_TRAIL_FGA_STD, AUDIT_TRAIL_LOCATION_VALUE => 'AUDIT_TBS')
;

begin
  dbms_audit_mgmt.INIT_CLEANUP (
    audit_trail_type         => dbms_audit_mgmt.audit_trail_all,
    default_cleanup_interval => 24);
end;
/

begin
if
  dbms_audit_mgmt.IS_CLEANUP_INITIALIZED(dbms_audit_mgmt.AUDIT_TRAIL_ALL) then
  dbms_output.put_line('******* YES *******');
else
  dbms_output.put_line('******* NO *******');
end if;
end;
/

select * from dba_audit_mgmt_config_params order by 1
;

begin
  dbms_audit_mgmt.CREATE_PURGE_JOB(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
    audit_trail_purge_interval => 24,
    audit_trail_purge_name     => 'PURGE_ALL_AUDIT_TRAILS',
    use_last_arch_timestamp    => TRUE);
end;
/

begin
  dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    last_archive_time => SYSTIMESTAMP-2562);
end;
/

begin
  DBMS_SCHEDULER.create_job (
    job_name        => 'AUDIT_LAST_ARCHIVE_TIME',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'DECLARE
                          l_days NUMBER := 2562;
                        BEGIN
                          dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_AUD_STD, TRUNC(SYSTIMESTAMP)-l_days);
                          dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_FGA_STD, TRUNC(SYSTIMESTAMP)-l_days);
                          dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_OS,      TRUNC(SYSTIMESTAMP)-l_days);
                          dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_XML,     TRUNC(SYSTIMESTAMP)-l_days);
                        END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'freq=daily; byhour=0; byminute=0; bysecond=0;',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'Automatically set audit last archive time.');
end;
/

select * from dba_audit_mgmt_config_params order by 1
;
select * from dba_audit_mgmt_last_arch_ts
;
select job_action from dba_scheduler_jobs where job_name='PURGE_ALL_AUDIT_TRAILS'
;
select job_action from dba_scheduler_jobs where job_name='AUDIT_LAST_ARCHIVE_TIME'
;
commit;
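
If a one-off purge is wanted immediately, rather than waiting for the scheduled job, the same cleanup can be invoked manually; a minimal sketch honouring the last-archive timestamps set above:

begin
  dbms_audit_mgmt.clean_audit_trail(
    audit_trail_type        => dbms_audit_mgmt.audit_trail_all,
    use_last_arch_timestamp => TRUE);
end;
/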

Find all Sundays of the year

Tom Kyte - Wed, 2021-03-03 15:50
How can I find all the Sundays of the year without writing any PL/SQL block, procedure, or function?
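
One possible answer (a sketch using a row generator; the NLS parameter pins the day abbreviation so the filter works in any session language):

<code>
select dt
from  (select trunc(sysdate, 'YYYY') + level - 1 as dt
       from   dual
       connect by trunc(sysdate, 'YYYY') + level - 1
                  < add_months(trunc(sysdate, 'YYYY'), 12))
where to_char(dt, 'DY', 'NLS_DATE_LANGUAGE=ENGLISH') = 'SUN';
</code>
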
Categories: DBA Blogs
