A lesson learnt in Oracle space management

So I hit a little bug the other day where space that was being removed from a CLOB through an automated purge routine wasn’t being reused. This led to a heated discussion with a colleague about the High Water Mark, freelists and table fragmentation. His view was that this was expected behaviour and that the database was acting normally. I asked him where the data was being added to the table and he stated quite clearly that data only gets added to the end of an extent, and that deleted space in a table doesn’t get reused, which leads to table fragmentation and is why the purge routine has an alter table move and index rebuild built in. We talked in circles for a while, then I asked him to prove his point and I went off to prove mine. While generating my proof I hit upon the following note on MOS:-
HOW TO DEMONSTRATE SPACE RELEASE / REUSE AFTER A DELETE (Doc ID 1098608.1)
This shows the expected behaviour quite nicely, I thought, so I used it as the basis of my proof, which I am blogging below.
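For context, the reorganisation step built into the purge routine is essentially the classic move/rebuild pattern. A rough sketch with made-up object names (not the actual routine) would be:

ALTER TABLE purge_target MOVE;                                              -- relocate the table and reset the high water mark
ALTER TABLE purge_target MOVE LOB (clob_col) STORE AS (TABLESPACE users);   -- a plain MOVE does not rebuild the LOB segment, so it needs its own clause
ALTER INDEX purge_target_pk REBUILD;                                        -- the MOVE leaves indexes UNUSABLE, so they must be rebuilt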

So the first thing you need to verify is that you are using Automatic Segment Space Management (ASSM) for this. So let's check the tablespace:-

SELECT tablespace_name, extent_management, segment_space_management
FROM dba_tablespaces
WHERE tablespace_name = 'USERS';

TABLESPACE_NAME                EXTENT_MAN SEGMEN
------------------------------ ---------- ------
USERS                          LOCAL      AUTO

Ok, so as we can see segment_space_management is set to AUTO – great, the tablespace is ASSM. Now let's create a user with minimal privileges to run the test.

create user testassm identified by password default tablespace users;
alter user testassm quota unlimited on users;
grant connect, create any table, create any procedure, create sequence to testassm;

As the testassm user we are going to create a table containing a clob.

CREATE TABLE  tbl_test(
idcol number, 
cola varchar2(500), 
clobcol clob) 
storage (initial 1k next 1k);
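A quick aside: the CLOB above is deliberately a BasicFile LOB (see below). If you wanted to repeat the test with a SecureFile LOB instead, which requires an ASSM tablespace like the one we have just confirmed, the DDL would look something like this sketch (hypothetical table name):

CREATE TABLE tbl_test_sf(
idcol number,
cola varchar2(500),
clobcol clob)
storage (initial 1k next 1k)
LOB (clobcol) STORE AS SECUREFILE;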

If you are following the note you will see that I'm not creating the CLOB as a SecureFile. This is because my prod doesn't use them (the aside above shows what the SecureFile version would look like). Now let's create a sequence to act as the PK for the table and insert a row so we can look at some object metrics:-

CREATE SEQUENCE seq_test START WITH 1 INCREMENT BY 1;

INSERT INTO tbl_test VALUES(seq_test.nextval, dbms_random.string('A',500), dbms_random.string('A',5000));
COMMIT;

SQL> analyze table tbl_test compute statistics;

Table analyzed.

SQL> column segment_name format a30
SQL> SELECT segment_type, segment_name, blocks, extents FROM user_segments;

SEGMENT_TYPE       SEGMENT_NAME                       BLOCKS    EXTENTS
------------------ ------------------------------ ---------- ----------
TABLE              TBL_TEST                                8          1
LOBINDEX           SYS_IL0000079224C00003$$                8          1
LOBSEGMENT         SYS_LOB0000079224C00003$$               8          1

SQL> SELECT table_name, ((blocks + empty_blocks) - blocks)/(blocks + empty_blocks) HWM,  blocks, empty_blocks, num_rows FROM user_tables;

TABLE_NAME                            HWM     BLOCKS EMPTY_BLOCKS   NUM_ROWS
------------------------------ ---------- ---------- ------------ ----------
TBL_TEST                             .375          5            3          1

So we have the number of extents and some info about used blocks for our test object. The next step is to insert a bit of data:-

DECLARE
  i number;
  j number;
  l_clob clob;
BEGIN
-- Generate clob for inserting
  FOR i IN 1 .. 1000 LOOP
    L_clob := l_clob||dbms_random.string('A',10);
  END LOOP;
-- Populate the table with data
  FOR j IN 1 .. 500 loop
    INSERT INTO tbl_test VALUES (seq_test.nextval, dbms_random.string('A',500), l_clob);
  END LOOP;
COMMIT;

END;
/

So we have inserted 500 rows; let's check the size changes for the lobsegment and the table.

SQL>  analyze table tbl_test compute statistics;

Table analyzed.

SQL> SELECT table_name, ((blocks + empty_blocks) - blocks)/(blocks + empty_blocks) HWM,  blocks, empty_blocks, num_rows FROM user_tables;

TABLE_NAME                            HWM     BLOCKS EMPTY_BLOCKS   NUM_ROWS
------------------------------ ---------- ---------- ------------ ----------
TBL_TEST                       .104166667         43            5        501

SQL>  SELECT segment_type, segment_name, blocks, extents FROM user_segments;

SEGMENT_TYPE       SEGMENT_NAME                       BLOCKS    EXTENTS
------------------ ------------------------------ ---------- ----------
TABLE              TBL_TEST                               48          6
LOBINDEX           SYS_IL0000079224C00003$$                8          1
LOBSEGMENT         SYS_LOB0000079224C00003$$            1152         24

SQL>

So from the first 500 rows we have increased the number of blocks for the table by 40 and for the lobsegment by 1144. Let's insert another 500 rows and check again.

SQL> analyze table tbl_test compute statistics;

Table analyzed.

SQL> SELECT table_name, ((blocks + empty_blocks) - blocks)/(blocks + empty_blocks) HWM,  blocks, empty_blocks, num_rows FROM user_tables;

TABLE_NAME                            HWM     BLOCKS EMPTY_BLOCKS   NUM_ROWS
------------------------------ ---------- ---------- ------------ ----------
TBL_TEST                       .090909091         80            8       1001

SQL> SELECT segment_type, segment_name, blocks, extents FROM user_segments;

SEGMENT_TYPE       SEGMENT_NAME                       BLOCKS    EXTENTS
------------------ ------------------------------ ---------- ----------
TABLE              TBL_TEST                               88         11
LOBINDEX           SYS_IL0000079224C00003$$                8          1
LOBSEGMENT         SYS_LOB0000079224C00003$$            2176         32

So again we add 40 blocks to the table and another 1024 blocks to the lobsegment. The next test is to delete some data and see the effect.

SQL> DELETE tbl_test WHERE idcol BETWEEN 250 AND 900;

651 rows deleted.

SQL> analyze table tbl_test compute statistics;

Table analyzed.

SQL> SELECT table_name, ((blocks + empty_blocks) - blocks)/(blocks + empty_blocks) HWM,  blocks, empty_blocks, num_rows FROM user_tables;

TABLE_NAME                            HWM     BLOCKS EMPTY_BLOCKS   NUM_ROWS
------------------------------ ---------- ---------- ------------ ----------
TBL_TEST                       .090909091         80            8        350

SQL>  SELECT segment_type, segment_name, blocks, extents FROM user_segments;

SEGMENT_TYPE       SEGMENT_NAME                       BLOCKS    EXTENTS
------------------ ------------------------------ ---------- ----------
TABLE              TBL_TEST                               88         11
LOBINDEX           SYS_IL0000079224C00003$$               16          2
LOBSEGMENT         SYS_LOB0000079224C00003$$            2176         32

SQL>
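As a side note, another way to see that the deleted space really is available for reuse inside the existing extents is DBMS_SPACE.SPACE_USAGE, which (for ASSM segments) breaks the blocks below the high water mark down by how full they are. A minimal sketch, run as the table owner:

SET SERVEROUTPUT ON
DECLARE
  l_unf  NUMBER; l_unf_b  NUMBER;   -- unformatted blocks/bytes
  l_fs1  NUMBER; l_fs1_b  NUMBER;   -- blocks 0-25% free
  l_fs2  NUMBER; l_fs2_b  NUMBER;   -- blocks 25-50% free
  l_fs3  NUMBER; l_fs3_b  NUMBER;   -- blocks 50-75% free
  l_fs4  NUMBER; l_fs4_b  NUMBER;   -- blocks 75-100% free
  l_full NUMBER; l_full_b NUMBER;   -- full blocks/bytes
BEGIN
  DBMS_SPACE.SPACE_USAGE('TESTASSM', 'TBL_TEST', 'TABLE',
                         l_unf, l_unf_b,
                         l_fs1, l_fs1_b, l_fs2, l_fs2_b,
                         l_fs3, l_fs3_b, l_fs4, l_fs4_b,
                         l_full, l_full_b);
  DBMS_OUTPUT.PUT_LINE('FS1='||l_fs1||' FS2='||l_fs2||' FS3='||l_fs3||
                       ' FS4='||l_fs4||' FULL='||l_full||' UNFORMATTED='||l_unf);
END;
/

Run before and after the delete, the mostly-free (FS3/FS4) block counts jump, and that is exactly the space the next inserts can go into.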

So, to test the theory that Oracle only writes to the end of a segment, let's insert another 500 rows (by re-running the same PL/SQL block as before) and check again:-

SQL> SELECT max(idcol) FROM tbl_test;

MAX(IDCOL)
-----------
       1501

SQL> analyze table tbl_test compute statistics;

Table analyzed.

SQL>  SELECT table_name, ((blocks + empty_blocks) - blocks)/(blocks + empty_blocks) HWM,  blocks, empty_blocks, num_rows FROM user_tables;

TABLE_NAME                            HWM     BLOCKS EMPTY_BLOCKS   NUM_ROWS
------------------------------ ---------- ---------- ------------ ----------
TBL_TEST                       .090909091         80            8        850

SQL>  SELECT segment_type, segment_name, blocks, extents FROM user_segments;


SEGMENT_TYPE       SEGMENT_NAME                       BLOCKS    EXTENTS
------------------ ------------------------------ ---------- ----------
TABLE              TBL_TEST                               88         11
LOBINDEX           SYS_IL0000079224C00003$$               16          2
LOBSEGMENT         SYS_LOB0000079224C00003$$            3072         39


So we can see that there has been no increase for the table: the 500 new rows have gone into the space freed up by the delete, below the high water mark, which is exactly the reuse my colleague said couldn't happen. The lobsegment, however, has grown again (and interestingly the lobindex hasn't this time). This is some guise of the bug that we were seeing in live: the table reuses deleted space as expected, but the deleted LOB space is not being reused for the new inserts.
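If you hit the same behaviour, a couple of things are worth checking before reaching for a full reorganisation (a hedged sketch against the test objects above, not a guaranteed fix). BasicFile LOBs keep old LOB data around for consistent reads according to PCTVERSION or RETENTION, so recently deleted LOB space may simply not be eligible for reuse yet:

SELECT table_name, column_name, pctversion, retention, securefile
FROM user_lobs
WHERE table_name = 'TBL_TEST';

-- one way to hand the space back explicitly (supported for BasicFile LOBs)
ALTER TABLE tbl_test MODIFY LOB (clobcol) (SHRINK SPACE);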

NOTE: For the article I am testing on 11.2.0.3.0. I have tested this on 11.2.0.4.0 and 12.1.0.1.0 vanilla without issue.


Using ORACLE Log Miner

This post is going to be a very basic tutorial on using LogMiner. As well as being used to recover from logical corruptions (user error!), LogMiner can be used for data auditing and analysis. It's this functionality that I needed to assist with the resolution of a particularly nasty blocking lock issue.

For a blocking lock I would usually perform a 10046 trace on the sessions, review the output and then advise. The problem with this site is that tracing cripples the server, so much so that it becomes unusable and users can barely log on, let alone traverse the system enough to generate the required data. As such we were at an impasse: we knew the application routine that was causing the issue, but we didn't know where in the routine the problem was being caused. We had looked at the SQL the session was reported to be running just before it failed, but this piece of SQL was fired so frequently during the process that it shed no light at all. It was at this point that I decided it was time to use LogMiner to attempt to work out where we were in the process when it failed, as this could at least assist the developers by saying where the issue wasn't. Basically: mine the logs for the transaction, work out the SQL fired and try to figure it all out with the application developers.
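For reference, the usual tracing approach (the one that was too heavy for this site) is only a couple of calls; a sketch, with hypothetical SID and SERIAL# values taken from v$session:

-- switch on a 10046-style trace (waits and binds) for the session of interest
BEGIN
  DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123,     -- hypothetical SID
                                    serial_num => 45678,   -- hypothetical SERIAL#
                                    waits      => TRUE,
                                    binds      => TRUE);
END;
/

-- reproduce the problem, then switch the trace off again
BEGIN
  DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 45678);
END;
/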

In the example below I'm not going to replicate the blocking lock issue that I worked on, but I will reproduce enough to show how this method helped and outline why it isn't as good as a session trace. Below is an outline of the steps the apps routine will complete:-

STEP1: Insert a row into tbl_test1 using sequence seq_test to populate idcol.
STEP2: Get MAX(idcol) from tbl_test2, increment by 1 and insert a row into tbl_test2.
STEP3: Select a value from tbl_mtest3 for use in the apps routine and insert a row into tbl_test3.

Below are the object definitions:-

create sequence seq_test start with 1 increment by 1 nomaxvalue order;
create table tbl_test1(idcol number(9) ,cola varchar2(30));
create table tbl_test2(idcol number(9) ,cola varchar2(30));
create unique index ind_test2_1 on tbl_test2(idcol);
create table tbl_test3(idcol number(9) ,cola varchar2(30));
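Putting those three steps together, the apps routine boils down to something like the following sketch (tbl_mtest3 is assumed to be a pre-existing reference table; the real routine obviously does a lot more). Note that the MAX(idcol)+1 pattern in step 2 is exactly what lets two concurrent sessions collide on the unique index later on:

DECLARE
  l_next_id NUMBER(9);
  l_value   VARCHAR2(30);
BEGIN
  -- STEP1: insert into tbl_test1 using the sequence
  INSERT INTO tbl_test1 VALUES (seq_test.nextval, 'Test1 First Row');

  -- STEP2: derive the next id from MAX(idcol) and insert into tbl_test2
  SELECT MAX(idcol) + 1 INTO l_next_id FROM tbl_test2;
  INSERT INTO tbl_test2 VALUES (l_next_id, 'Test2 row ' || l_next_id);

  -- STEP3: read a value from the reference table and insert into tbl_test3
  SELECT cola INTO l_value FROM tbl_mtest3 WHERE ROWNUM = 1;
  INSERT INTO tbl_test3 VALUES (l_next_id, l_value);

  COMMIT;
END;
/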

The first steps are run as SYSDBA. For this example I am on Linux 5.9, running 11.2.0.4.2, and am going to use the online logs and not an archived redo log; the principle is the same though. The first step is to check whether the database has supplemental logging enabled:-

SQL> SELECT force_logging, supplemental_log_data_min,supplemental_log_data_pk,supplemental_log_data_ui, supplemental_log_data_fk,supplemental_log_data_all, supplemental_log_data_pl
 2 FROM v$database;
FOR SUPPLEME SUP SUP SUP SUP SUP
--- -------- --- --- --- --- ---
NO  NO       NO  NO  NO  NO  NO
SQL>

If supplemental logging is not active then run the following:-

SQL> alter database add supplemental log data;

After this we need to identify the files required to be mined and present them to logminer. Once this has completed we will then start the mining process:-

AS SYSDBA 
SQL> insert into tbl_test2 values (100, 'Load');

1 row created.

SQL> commit;

Commit complete.

SQL> SELECT member FROM v$logfile;

MEMBER
--------------------------------------------------------------------------------
+DATA/ora11g23/onlinelog/group_3.263.801863059
+DATA/ora11g23/onlinelog/group_2.262.801863057
+DATA/ora11g23/onlinelog/group_1.261.801863055

SQL> BEGIN
2 DBMS_LOGMNR.ADD_LOGFILE ('+DATA/ora11g23/onlinelog/group_1.261.801863055');
3 DBMS_LOGMNR.ADD_LOGFILE ('+DATA/ora11g23/onlinelog/group_3.263.801863059');
4 DBMS_LOGMNR.ADD_LOGFILE ('+DATA/ora11g23/onlinelog/group_2.262.801863057');
5 END;
6/

PL/SQL procedure successfully completed.

SQL> BEGIN
2 DBMS_LOGMNR.START_LOGMNR(options => dbms_logmnr.dict_from_online_catalog);
3 END;
4 /

PL/SQL procedure successfully completed.

SQL>

Now we are going to start two user sessions running through the apps code. The statements will be executed in session 1 and then in session 2:-

SQL> Select dbms_transaction.local_transaction_id from dual;
SESSION 1                          SESSION 2

LOCAL_TRANSACTION_ID               LOCAL_TRANSACTION_ID
---------------------------------  ---------------------------------

SQL> insert into tbl_test1 values (seq_test.nextval,'Test1 First Row');

1 row created.                     1 row created.

SQL> Select dbms_transaction.local_transaction_id from dual;

LOCAL_TRANSACTION_ID               LOCAL_TRANSACTION_ID
---------------------------------  ----------------------------------
5.4.7995                           4.33.7691

SQL> SELECT max(idcol) from tbl_test2;

MAX(IDCOL)                         MAX(IDCOL)
----------                         ----------
       100                                100

SQL> INSERT INTO tbl_test2 values (100+1, 'Test2 row 106');

1 row created.                     ** SESSION HANGS ** 

So now we have a blocking session: session 2 is trying to insert the same IDCOL value (101) that session 1 has inserted but not yet committed, so the unique index ind_test2_1 makes it wait. Let's log in as SYSDBA and look at LogMiner for each of the sessions:-

For Session 1:-
-------------
SQL> SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID,SQL_REDO, SQL_UNDO
2 FROM V$LOGMNR_CONTENTS
3 WHERE XIDUSN=5 and XIDSLT=4 and XIDSQN=7995
4 ORDER BY timestamp ASC;
BURY
5.4.7995
set transaction read write;

BURY
5.4.7995
insert into "BURY"."TBL_TEST2"("IDCOL","COLA") values ('101','Test2 row 106');
delete from "BURY"."TBL_TEST2" where "IDCOL" = '101' and "COLA" = 'Test2 row 1
06' and ROWID = 'AAATOdAAHAAAAFTAAB';
BURY
5.4.7995
insert into "BURY"."TBL_TEST1"("IDCOL","COLA") values ('21','Test1 First Row')
;
delete from "BURY"."TBL_TEST1" where "IDCOL" = '21' and "COLA" = 'Test1 First
Row' and ROWID = 'AAATOcAAHAAAAFrAAA';

For Session 2:-
-------------
BURY
4.33.7691
set transaction read write;


BURY
4.33.7691
insert into "BURY"."TBL_TEST2"("IDCOL","COLA") values ('101','Test2 row 106');
delete from "BURY"."TBL_TEST2" where "IDCOL" = '101' and "COLA" = 'Test2 row 1
06' and ROWID = 'AAATOdAAHAAAAFVAAA';

BURY
4.33.7691
insert into "BURY"."TBL_TEST1"("IDCOL","COLA") values ('22','Test1 First Row')
;
delete from "BURY"."TBL_TEST1" where "IDCOL" = '22' and "COLA" = 'Test1 First
Row' and ROWID = 'AAATOcAAHAAAAFtAAA';

For each session we can see the insert statements in the redo, but not the selects; redo only records changes, so the SELECT MAX(idcol) that actually drives the blocking insert never appears. Also, looking at the SQL you can see that the values generated by the sequence appear as literal values. This is why using LogMiner in this way is not an exact science, more of an art. As I knew the application SQL it was easy to map the SQL_REDO back to the actual SQL used to produce each change.
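Two small additions worth knowing (a sketch, assuming the same BURY schema and test tables): if you don't know the transaction ids up front you can trawl V$LOGMNR_CONTENTS by object instead, and once you're done you should end the LogMiner session:

-- pull all mined statements against the application tables, in order
SELECT scn, timestamp, username, operation, sql_redo
FROM v$logmnr_contents
WHERE seg_owner = 'BURY'
AND table_name IN ('TBL_TEST1', 'TBL_TEST2', 'TBL_TEST3')
ORDER BY scn;

-- tidy up when finished
BEGIN
  DBMS_LOGMNR.END_LOGMNR;
END;
/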

HTH


Finding out information about ASM

So I had an odd request today. Some batch jobs were taking too long to complete, so an incident call was opened and we all went to work. From the start the issue seemed IO related, so we got the storage team on the call. Before they would start to look at it, they wanted to know the device IDs of the disks being used by the ASM diskgroups. An odd request, but not that difficult I thought, so I started to Google and didn't come up with anything. Oh, I said. Anyway, here are the scripts that I used to generate the required data.

To find the ASM disk names I wrote the following piece of SQL:-

SELECT SUBSTR(PATH,INSTR(PATH,'/',-1)+1)
FROM v$asm_disk
WHERE group_number IN 
  (
  SELECT group_number
  FROM v$asm_diskgroup
  WHERE name = 'DATA'
  );

ASM_DISK2
ASM_DISK1

I then pasted this output into a file called get_devno.input. After this I created a shell script with the required permissions called get_devno.sh, which looks like:-

#!/bin/ksh
# start with an empty output file
echo " " > get_devno.output

# for each ASM disk label in get_devno.input...
for a in `cat get_devno.input`
do
  # ...ask ASMLib which /dev/sd* device backs it...
  for b in `oracleasm querydisk -p $a | grep sd | awk -F: '{print substr($1,6)}'`
  do
    # ...and map that device back to its persistent /dev/disk/by-id name
    ls -ltr /dev/disk/by-id | grep $b >> get_devno.output
  done
done

The next step is to run the shell script and view the get_devno.output file:-

scsi-SATA_VBOX_HARDDISK_VB6f5a5b2b-9d70b284-part1
ata-VBOX_HARDDISK_VB6f5a5b2b-9d70b284-part1
scsi-SATA_VBOX_HARDDISK_VBfe0fe3e5-194821d8-part1
ata-VBOX_HARDDISK_VBfe0fe3e5-194821d8-part1

Ta-Dah! One list of disk ids
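As an aside, if all you need is the ASM disk name, its path and the owning diskgroup (without the OS device mapping), a single query along these lines will do it; a sketch using the standard v$asm_disk and v$asm_diskgroup columns:

SELECT dg.name AS diskgroup,
       d.name  AS asm_disk,
       d.path,
       d.total_mb
FROM v$asm_disk d
     JOIN v$asm_diskgroup dg ON dg.group_number = d.group_number
ORDER BY dg.name, d.name;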


PRVF-0002 : Could not retrieve local nodename

Whilst performing an Oracle 12c Grid Infrastructure installation on a new OUEL Linux box, I launched the Oracle Universal Installer (OUI) and received the following message:-

PRVF-0002 : Could not retrieve local nodename

So I checked the hostname and everything seemed to be fine:-

[root@o64-12c-n11 ~]# uname -n
o64-12c-n11.localdomain
[root@o64-12c-n11 ~]# hostname
o64-12c-n11.localdomain

So what is the problem here? Well, it appears that the OUI performs some pretty hefty checks, including a reverse lookup of the hostname.

In my case I received the error because I hadn't completed the network setup properly; it turned out that there was a typo in the hosts file.

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.56.131 o64-121-n11 o64-121-n11.localdomain node1

I corrected the typo (the entry reads o64-121-n11 where the hostname is actually o64-12c-n11) and the installation continued as normal.


ORACLE Transparent Data Encryption

What is ORACLE Transparent Data Encryption (TDE)?

ORACLE TDE is part of Oracle's Advanced Security Option (ASO), which is a chargeable addition to the Enterprise Edition RDBMS. TDE is used to safeguard sensitive data against unauthorised access from outside the database environment by encrypting the data within the datafiles. This prevents unauthorised access at the operating system level, for example by someone inspecting the contents of the datafiles directly. TDE also offers protection against the loss, theft or inadequate decommissioning of media used to house or back up the database.

It's called transparent because there is nothing an application needs to do to either encrypt or decrypt the data; it is all done at the database layer, and it's fast because it uses the Oracle database caching optimisations.

In this article we are going to look at implementing both tablespace and column level encryption and investigate how secure the data is.

Example 1 – Encrypting Tablespaces

The first step is to create an Oracle Wallet to store the encryption key details. There are several types of wallet available; we are going to create a simple file-based one. The location of the wallet for a database is determined by the first of the following that is set:-

  1. ENCRYPTION_WALLET_LOCATION parameter in the sqlnet.ora file.
  2. WALLET_LOCATION parameter in the sqlnet.ora file.
  3. The default location for the wallet ($ORACLE_BASE/admin/$ORACLE_SID/wallet).

In this case we are going to go for the default. Log in as SYSDBA and run the following command:-

SQL> alter system set encryption key identified by "M0resecure";

System altered.

At this point it is worth noting that you will need to reopen  the wallet after an instance restart with the following command:-

SQL> alter system set encryption wallet open identified by "M0resecure";

Should you need to, the wallet can also be closed with the following:-

SQL> Alter system set encryption wallet close identified by "M0resecure";
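You can check the state of the wallet at any point from v$encryption_wallet; a quick sanity check along these lines should report the wallet as OPEN once the key has been set:

SELECT wrl_type, wrl_parameter, status
FROM v$encryption_wallet;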

 

The next step is to create the encrypted tablespace. Now for this example I am going to create two tablespaces, one encrypted and the other not.

SQL> CREATE TABLESPACE encryptedts DATAFILE '/u01/app/oracle/datapump/data/encryptedts.dbf' size 10M autoextend on encryption using 'AES256' DEFAULT STORAGE(ENCRYPT);
SQL> CREATE TABLESPACE nocryptedts DATAFILE '/u01/app/oracle/datapump/data/nocryptedts.dbf' size 10M autoextend on;

The syntax for the tablespace encryptedts is very basic and there are a lot more options you can add; please review the official documentation. Please note that the datafiles are created on the local filesystem and not within ASM. This isn't a TDE requirement; I'm doing it so I can inspect the datafiles later.

SQL> SELECT tablespace_name, encrypted FROM dba_tablespaces;

TABLESPACE_NAME                ENC
------------------------------ ---
SYSTEM                         NO
SYSAUX                         NO
UNDOTBS1                       NO
TEMP                           NO
USERS                          NO
EXAMPLE                        NO
ENCRYPTEDTS                    YES
NOCRYPTEDTS                    NO

Then create a couple of users and some objects to test with:-

create USER test_encrypt identified by password;
alter user test_encrypt QUOTA UNLIMITED ON encryptedts;
alter user test_encrypt QUOTA UNLIMITED ON users;
grant create session to test_encrypt;
grant create any table to test_encrypt;

create table test_encrypt.enc_test (nocrypt  VARCHAR2(50),encrypt  VARCHAR2(50)) tablespace encryptedts;

insert into test_encrypt.enc_test values('Nocrypt column', 'This is a not encrypted!');

commit;
create USER test_nocrypt identified by password;
alter user test_nocrypt QUOTA UNLIMITED ON nocryptedts;
alter user test_nocrypt QUOTA UNLIMITED ON users;
grant create session to test_nocrypt;
grant create any table to test_nocrypt;

create table test_nocrypt.enc_test (nocrypt  VARCHAR2(50), encrypt  VARCHAR2(50)) tablespace nocryptedts;

insert into test_nocrypt.enc_test values('nocrypt column', 'This is a not encrypted!');

commit;

Now we are going to drop to the OS and inspect the datafiles. For safety's sake shut down the database, but if you're a bit more cavalier than me, just flush the buffer cache to write the data down to disc first.

SQL> connect / as sysdba
Connected.
SQL> alter system flush buffer_cache;

The next few steps are for Linux users; if you're running on Windows (why?) you can use a hex editor like UltraEdit and search the file. For the sensible in the (e-)room, you can use the Linux (or OS equivalent) strings command.

[oracle@oel6 data]$ strings nocryptedts.dbf | grep crypt
Nocrypt column 
This is a not encrypted!
[oracle@oel6 data]$ strings encryptedts.dbf | grep crypt

In the unencrypted datafile you will see the entered data as plain as the nose on your face; however, in the encrypted file you will see no matches, hence the data is now protected at the operating system layer.

Example 2 – Encrypting columns

The next step is to look at column-level encryption. For this we are going to use the test_nocrypt user and create a new table with an encrypted column.

CREATE TABLE test_nocrypt.enc_test_col (
  nocrypt  VARCHAR2(50),
  encrypt  VARCHAR2(50) encrypt NO SALT
)
TABLESPACE nocryptedts;

INSERT INTO test_nocrypt.enc_test_col VALUES ('nosecretcolln1', 'secretcol');
COMMIT;
INSERT INTO test_nocrypt.enc_test_col VALUES ('nosecretcolln2', 'secretcol');
COMMIT;
INSERT INTO test_nocrypt.enc_test_col VALUES ('nosecretcolln3', 'secretcol');
COMMIT;

Notice the NO SALT clause at the end of the definition for the encrypt column. Salt is a way to strengthen the security of encrypted data: basically it's a random string added to the data before it is encrypted, causing repeated clear-text values to look different once encrypted, which prevents attackers from matching patterns in the encrypted text. By default all encrypted columns have salt applied; however, if you wish to index the table by the encrypted column you will need to remove the salt from the column, as in the sketch below.
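To make the trade-off concrete, here is a hedged sketch (hypothetical table names): the same column declared with the default salt and with NO SALT. Only the NO SALT version can be indexed; the database will reject an index on the salted column.

-- default behaviour: salt applied, stronger, but the column cannot be indexed
CREATE TABLE test_nocrypt.enc_salted (
  id  NUMBER,
  val VARCHAR2(50) ENCRYPT          -- SALT is the default
) TABLESPACE nocryptedts;

-- NO SALT: slightly weaker, but the column can now sit behind an index
CREATE TABLE test_nocrypt.enc_nosalt (
  id  NUMBER,
  val VARCHAR2(50) ENCRYPT NO SALT
) TABLESPACE nocryptedts;

CREATE INDEX test_nocrypt.ind_enc_nosalt ON test_nocrypt.enc_nosalt (val);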

Again, either shut down or flush the buffer cache and then use strings to scan the file for 'secret':-

[oracle@oel6 data]$ strings nocryptedts.dbf | grep secret
nosecretcolln34nosecretcolln24Znosecretcolln14

From the above output we can see the same result as with the encrypted tablespace: the encrypted data is no longer visible within the datafile, while the unencrypted column values still are.

Example 3 – Encrypting an existing column

Just to prove the point we are going to alter the table test_nocrypt.enc_test and encrypt the encrypt column (poor naming, I know!). To do this run the following:-

Alter table test_nocrypt.enc_test modify(encrypt encrypt no salt);

Again shut down or flush the buffer cache and then strings the file for the word crypt:-

SQL> !strings nocryptedts.dbf | grep crypt
Nocrypt column
This is a not encrypted!

SQL> !strings nocryptedts.dbf | grep crypt
Nocrypt column
4Nocrypt column
This is a not encrypted!

What witchcraft is this? After encrypting the column, a new entry containing the unencrypted column has been added, and the value that's meant to be encrypted is still visible in the datafile. This is because the row that contained the unencrypted data was duplicated when the encryption was added to the column: the old row piece is marked as deleted but hasn't yet been overwritten, thanks to all the other free blocks in the datafile. Let's test with a table with more data. For this test I dropped the table and tablespace, recreated them, added 10 rows of data, flushed the data down to disc, encrypted the column and then flushed the data down to disc again:-

[oracle@oel6 data]$ strings nocryptedts.dbf | grep crypt
nocrypt col10
notencrypted10,
nocrypt col09
notencrypted09,
nocrypt col08
notencrypted08,
nocrypt col07
notencrypted07,
nocrypt col06
notencrypted06,
nocrypt col05
notencrypted05,
nocrypt col04
notencrypted04,
nocrypt col03
notencrypted03,
nocrypt col02
notencrypted02,
nocrypt col01
notencrypted01

[oracle@oel6 data]$ strings nocryptedts.dbf | grep crypt
nocrypt col10$
nocrypt col09$
nocrypt col08$^~B
nocrypt col07$
nocrypt col06$
nocrypt col05$
nocrypt col04$)%
nocrypt col03$
nocrypt col02$^
nocrypt col01$,
nocrypt col10
notencrypted10,
nocrypt col09
notencrypted09,
nocrypt col08
notencrypted08,
nocrypt col07
notencrypted07,
nocrypt col06
notencrypted06,
nocrypt col05
notencrypted05,
nocrypt col04
notencrypted04,
nocrypt col03
notencrypted03,
nocrypt col02
notencrypted02,
nocrypt col01
notencrypted01
[oracle@oel6 data]$

We can see that the old unencrypted data is still visible directly in the datafile. This isn't something unique to TDE either: any column-level encryption approach that updates an existing column in place is likely to leave the old clear text lying around somewhere in the datafile.

During the testing for this article I completed several tests, with the most conclusive way of removing the clear text being to migrate the data out of the tablespace, drop and recreate the tablespace, and move the data back. I also tried moving the table to a new tablespace, filling the old tablespace with junk data, dropping the junk and moving the table back. This worked for me, but in real life it would not be so practical I think.
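For reference, the "migrate, drop, recreate, move back" approach was roughly the following, sketched against the example objects (in real life you would also have indexes, LOBs, grants and downtime to worry about):

-- 1. move the data out of the affected tablespace
ALTER TABLE test_nocrypt.enc_test MOVE TABLESPACE users;

-- 2. drop and recreate the tablespace, destroying the old datafile (and the clear text in it)
DROP TABLESPACE nocryptedts INCLUDING CONTENTS AND DATAFILES;
CREATE TABLESPACE nocryptedts DATAFILE '/u01/app/oracle/datapump/data/nocryptedts.dbf' SIZE 10M AUTOEXTEND ON;

-- 3. move the data back in
ALTER TABLE test_nocrypt.enc_test MOVE TABLESPACE nocryptedts;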


ORA-28368: cannot auto-create wallet

[oracle@oel6 ~]$ sqlplus "/ as sysdba"
SQL*Plus: Release 11.2.0.3.0 Production on Mon Mar 10 13:46:53 2014
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production 
With the Partitioning, Automatic Storage Management, OLAP, Data Mining and Real Application Testing options

SQL> alter system set encryption key identified by manager;
alter system set encryption key identified by manager

ERROR at line 1:
ORA-28368: cannot auto-create wallet

The cause of this issue in my case was the location of the wallet (or rather, the lack of one). The location of the wallet for a database is determined by one of the following:-

1. ENCRYPTION_WALLET_LOCATION parameter in the sqlnet.ora file.
2. WALLET_LOCATION parameter in the sqlnet.ora file.
3. The default location for the wallet ($ORACLE_BASE/admin/$ORACLE_SID/wallet).
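If you would rather be explicit than rely on the default, option 1 looks something like this in sqlnet.ora (a sketch; adjust the directory to suit your environment):

ENCRYPTION_WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /u01/app/oracle/admin/orcl/wallet)
    )
  )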

I created the directory in the default location and reran the command.

[oracle@oel6 ~]$ sqlplus "/ as sysdba"
SQL*Plus: Release 11.2.0.3.0 Production on Mon Mar 10 13:46:53 2014
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production 
With the Partitioning, Automatic Storage Management, OLAP, Data Mining and Real Application Testing options

SQL> alter system set encryption key identified by manager;

System altered.

Resolving ORA-28113: policy predicate has error

Whilst reviewing VPD I came across many errors that get introduced through the incorrect setting up of policies and generally poor-quality implementation (I was learning). With a little thought the majority of issues were quickly solved; however, resolving some ORA-28113 errors can be baffling at times, so why not get a little help straight from the database's mouth?

While working on my blog overview for VPD (LINK) I set up an example using the employees table from the sample schema. The general idea was to write a policy to allow user 'SKING' to see the entire contents of the hr.employees table, while all other employees could only see their own row plus those of the employees that they directly managed. As such I wrote the following function to enforce the policy.

CREATE or REPLACE FUNCTION sysadmin_vpd.ac_emp_email(schema_p IN VARCHAR2,table_p  IN VARCHAR2) RETURN VARCHAR2
AS

return_val VARCHAR2 (400):= '1=2';

BEGIN
  CASE
    WHEN SYS_CONTEXT('USERENV', 'SESSION_USER') = 'SKING' THEN
      return_val :='1=1';
    ELSE
      return_val := 'EMAIL = SYS_CONTEXT(''USERENV'', ''SESSION_USER'') or manager_id = (select employee_id from hr.employees where email = SYS_CONTEXT(''USERENV'', ''SESSION_USER'')) ';
  END CASE;

  RETURN return_val;

END ac_emp_email;
/

The idea being that the subquery feeds the employee_id into the main query and generates the required data. I tested this in SQL*Plus with the desired results:-

SQL>  connect mweiss/password

Connected.
SQL> SELECT count(*) FROM hr.employees WHERE email=sys_context('USERENV','SESSION_USER') or manager_id = (SELECT employee_id FROM hr.employees WHERE email=sys_context('USERENV','SESSION_USER'));

COUNT(*)
----------
9

SQL>

However, the results of testing the policy itself proved to be not so positive. SKING sees all rows as expected, but another test user does not get very far:-

SQL> connect sking/password
Connected.
SQL> select count(*) from hr.employees;

COUNT(*)
----------
107

SQL> connect lozer/password
Connected.
SQL> select count(*) from hr.employees;
select count(*) from hr.employees
*
ERROR at line 1:
ORA-28113: policy predicate has error

SQL>

ORA-28113: policy predicate has error – WTF?

Off to the great oracle error lookup to see what a 28113 is:-

[oracle@oel6 ~]$ oerr ORA 28113
28113, 00000, "policy predicate has error"
// *Cause: Policy function generates invalid predicate.
// *Action: Review the trace file for detailed error information.

Review the trace file – hi-ho, it's off to udump we go!

A quick cat of the file shows the error quite clearly!?!:-

-------------------------------------------------------------
Error information for ORA-28113:
Logon user     : LOZER
Table/View     : HR.EMPLOYEES
Policy name    : ACCESSCTRL_EMP_EMAIL
Policy function: SYSADMIN_VPD.AC_EMP_EMAIL

RLS view  :

SELECT  "EMPLOYEE_ID","FIRST_NAME","LAST_NAME","EMAIL","PHONE_NUMBER","HIRE_DATE","JOB_ID","SALARY","COMMISSION_PCT","MANAGER_ID","DEPARTMENT_ID" FROM "HR"."EMPLOYEES"   "EMPLOYEES" WHERE (EMAIL = SYS_CONTEXT('USERENV', 'SESSION_USER') or manager_id = (select employee_id from hr.employees where email = SYS_CONTEXT('USERENV', 'SESSION_USER')) )

ORA-28108: circular security policies detected

-------------------------------------------------------------

So my ORA-28113 was actually caused by an ORA-28108 error. Off to the great book of oracle error lookup again

[oracle@oel6 ~]$ oerr ORA 28108
28108, 00000, "circular security policies detected"
// *Cause: Policies for the same object reference each other.
// *Action: Drop the policies
[oracle@oel6 ~]$

So policies for the same object reference each other – eh?

Basically the issue was caused by the subquery I was using to return the employee's direct reports: because it is a subquery against the same table, it is treated as a separate query and is therefore itself subject to the policy that is currently being evaluated, hence the circular reference. This meant that I had to use an application context to assist with my test VPD deployment. For you, well, this depends on your issue; hopefully it is as simple as mine!
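For completeness, the application context workaround looked roughly like this (a sketch, not the full deployment): capture the logged-on user's employee_id in a context at logon, then have the policy function read the context instead of querying hr.employees from inside its own predicate.

CREATE OR REPLACE CONTEXT emp_vpd_ctx USING sysadmin_vpd.pkg_emp_vpd_ctx;

CREATE OR REPLACE PACKAGE sysadmin_vpd.pkg_emp_vpd_ctx AS
  PROCEDURE set_emp_id;
END pkg_emp_vpd_ctx;
/

CREATE OR REPLACE PACKAGE BODY sysadmin_vpd.pkg_emp_vpd_ctx AS
  PROCEDURE set_emp_id IS
    l_emp_id hr.employees.employee_id%TYPE;
  BEGIN
    -- look up the employee_id for the database user once, at logon
    SELECT employee_id INTO l_emp_id
    FROM hr.employees
    WHERE email = SYS_CONTEXT('USERENV', 'SESSION_USER');
    DBMS_SESSION.SET_CONTEXT('emp_vpd_ctx', 'employee_id', l_emp_id);
  EXCEPTION
    WHEN NO_DATA_FOUND THEN
      NULL;  -- logins that aren't employees simply get no context value
  END set_emp_id;
END pkg_emp_vpd_ctx;
/

-- populate the context at logon (sysadmin_vpd needs SELECT on hr.employees
-- and the ADMINISTER DATABASE TRIGGER privilege for this)
CREATE OR REPLACE TRIGGER sysadmin_vpd.trg_set_emp_ctx
AFTER LOGON ON DATABASE
BEGIN
  sysadmin_vpd.pkg_emp_vpd_ctx.set_emp_id;
END;
/

With the context populated, the ELSE branch of the policy function can return 'EMAIL = SYS_CONTEXT(''USERENV'',''SESSION_USER'') OR manager_id = SYS_CONTEXT(''emp_vpd_ctx'',''employee_id'')', which no longer touches hr.employees inside the predicate, so the circular reference disappears.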
