Exporting a CLOB

I’m in the final phase of a clear-down process to remove data that is no longer legally required from a production database. Unfortunately for me this involves writing the data directly to disc in CSV format for reference later; not my preferred option for this piece of work, but it pays the bills, so who am I to complain? Anyway, the process has been relatively painless up to now, but I’m hitting tables with CLOBs, and large ones at that, ranging from 10,000 characters up to just over 1,000,000 in length. How the devil do I write these to a CSV?

From the start of this piece of work the best method appeared to be to export the data in a format that would make it easy to reference later via an external table, due to a potential requirement to audit the data even though its legal retention period had passed.
To do this I have written three procedures and a driving table. The driving table stores the table names, the SQL conditions for each table and processing information for the run. The first procedure generates the external table build command for a table; the second generates a SQL script used to export the data, calling the first procedure to generate the external table script; and the third runs the delete of the data. The export script follows the same basic format for each table:-

1. CREATE TABLE as SELECT * FROM table.
2. Format all table columns for fixed length field format.
3. Call procedure to generate external table.
4. Spool the data as CSV by setting the column separator to a comma.
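The driving table described above might be shaped something like this; a hypothetical sketch, with illustrative names and columns rather than the actual implementation:-

CREATE TABLE tbl_clr_driver
(table_name     VARCHAR2(30)   NOT NULL               -- table to clear down
,sql_condition  VARCHAR2(4000)                        -- WHERE clause for export/delete
,run_order      NUMBER(4)                             -- processing order
,run_status     VARCHAR2(10)   DEFAULT 'PENDING'      -- PENDING/EXPORTED/DELETED
,run_date       DATE                                  -- when the row was processed
);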

Below is example output from the export data script:-

CREATE TABLE exp_tab_name AS
  SELECT *
  FROM tab_name
  WHERE ;

COLUMN  FORMAT a20
COLUMN  FORMAT 999999999
 
SET PAGES 0
SET LINES 30
SET COLSEP ','
SPOOL tab_name.desc
EXEC proc_GenExtTableFile('', 'TABLE_NAME');
SPOOL off 
SELECT * FROM exp_tab_name

SPOOL taba.out
/
SPOOL off

For non-CLOB tables this is working like a dream, but for potentially a million-plus characters this wasn’t going to work. Well, after a little reading I found that Oracle allows you to reference a CLOB as an actual file, which meant that I no longer had to worry about multiline CLOBs ruining my CSV, but I still had the issue of getting the CLOB into the file. The next step seemed obvious but painful: I was going to have to use UTL_FILE to output the CLOB. This meant I would need to update the script-generation routine to name all columns individually and write a new routine to output the CLOB. I decided to implement this as a function that writes the CLOB to a file and outputs the CLOB file location. Let’s generate some test data and see:-

create table tbl_testclob
(idcol number
,clobcol clob
)
;

DECLARE
bigtext   clob;
BEGIN

  EXECUTE IMMEDIATE 'truncate table tbl_testclob';

  FOR i IN 1 .. 11 LOOP
    bigtext := dbms_random.string('A',4000);
    INSERT INTO tbl_testclob VALUES (i, bigtext);
    
    FOR j in 1 .. i LOOP
      IF i = j THEN
        null;
      ELSE
        UPDATE tbl_testclob set clobcol = clobcol || clobcol where idcol=i;
      END IF;
    END LOOP;
  
  END LOOP;
  commit;

END;
/

SELECT idcol,dbms_lob.getlength(clobcol)
FROM tbl_testclob;

     IDCOL DBMS_LOB.GETLENGTH(CLOBCOL)
---------- ---------------------------
         1                        4000
         2                        8000
         3                       16000
         4                       32000
         5                       64000
         6                      128000
         7                      256000
         8                      512000
         9                     1024000
        10                     2048000
        11                     4096000

You should see the same output as above. Now let’s try to output the data via UTL_FILE. The first step is to create a directory to output the data to.

CREATE DIRECTORY test_dir AS '/home/oracle/extdata/test';
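If the schema running the export is not the user that created the directory, it will also need read/write access granted on it; shown here for a hypothetical user exp_user:-

GRANT READ, WRITE ON DIRECTORY test_dir TO exp_user;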

Now let’s create the procedure to output the CLOB using UTL_FILE:-

CREATE or REPLACE PROCEDURE proc_utlclob
AS

CURSOR cur_tab
IS SELECT * FROM tbl_testclob;

lv_clobsize number(8);
lv_clobPos number(8);
csvHandler  UTL_FILE.FILE_TYPE;

BEGIN

  -- Open Cursor for tbl_testclob
  FOR c in cur_tab LOOP
    -- Get clob size
    lv_clobsize:= dbms_lob.getLength(c.clobcol);
    -- reset clob position
    lv_ClobPos:=1;
    -- open csv file Write Mode
    csvHandler := UTL_FILE.FOPEN('TEST_DIR', 'utlclob_'||to_char(c.idcol)||'.csv', 'W',32001);

    WHILE lv_ClobPos < lv_clobsize LOOP
      -- Write 32000 char to file
      UTL_FILE.put_line(csvHandler, dbms_lob.substr(c.clobcol,32000,lv_ClobPos));
      utl_file.fflush(csvHandler);
      -- Move column position
      lv_ClobPos := lv_ClobPos + 32000;
    END LOOP;
      -- Close file
      UTL_FILE.FCLOSE(csvHandler); 

  END LOOP;

END proc_utlclob;
/

Let’s execute the procedure and look at the results:-

o64-12c-n01:oracle@PROD12C1 > ls -ltr /home/oracle/extdata/test
total 16024
-rw-r--r-- 1 oracle asmadmin    4001 Oct 10 13:28 utlclob_1.csv
-rw-r--r-- 1 oracle asmadmin    8001 Oct 10 13:28 utlclob_2.csv
-rw-r--r-- 1 oracle asmadmin   16001 Oct 10 13:28 utlclob_3.csv
-rw-r--r-- 1 oracle asmadmin   32001 Oct 10 13:28 utlclob_4.csv
-rw-r--r-- 1 oracle asmadmin   64002 Oct 10 13:28 utlclob_5.csv
-rw-r--r-- 1 oracle asmadmin  128004 Oct 10 13:28 utlclob_6.csv
-rw-r--r-- 1 oracle asmadmin  256008 Oct 10 13:28 utlclob_7.csv
-rw-r--r-- 1 oracle asmadmin  512016 Oct 10 13:28 utlclob_8.csv
-rw-r--r-- 1 oracle asmadmin 1024032 Oct 10 13:28 utlclob_9.csv
-rw-r--r-- 1 oracle asmadmin 2048064 Oct 10 13:28 utlclob_10.csv
-rw-r--r-- 1 oracle asmadmin 4096128 Oct 10 13:28 utlclob_11.csv

Not quite what I was expecting. Looking at the files it appears that UTL_FILE is adding a newline character to the end of each line written, so each file contains one extra byte per 32,000-character chunk (string length / 32000 newlines). I played around with this for some time and was unable to find the answer. If you know, please inform me.
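One possible workaround, which I should stress is a sketch I have not verified here, is to open the file in byte mode and use UTL_FILE.PUT_RAW instead of PUT_LINE; PUT_RAW writes raw bytes with no line semantics, so no newlines should be appended (byte counts assume a single-byte character set, and UTL_FILE’s 32767-byte per-call limit still applies):-

-- Inside the cursor loop, in place of the PUT_LINE version
csvHandler := UTL_FILE.FOPEN('TEST_DIR', 'utlclob_raw.csv', 'wb', 32767);
WHILE lv_ClobPos <= lv_clobsize LOOP
  UTL_FILE.PUT_RAW(csvHandler,
                   utl_raw.cast_to_raw(dbms_lob.substr(c.clobcol, 32000, lv_ClobPos)),
                   TRUE);   -- autoflush each chunk
  lv_ClobPos := lv_ClobPos + 32000;
END LOOP;
UTL_FILE.FCLOSE(csvHandler);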

So although I was getting the data out, and the client wasn’t really all that bothered by the extra lines, I was unhappy with the end result, so I hit Google hard and stumbled upon the following little gem: dbms_xslprocessor. This little beauty has a built-in procedure called clob2file which conveniently outputs a CLOB to a file!

SQL> desc dbms_xslprocessor
PROCEDURE CLOB2FILE
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 CL                             CLOB                    IN
 FLOCATION                      VARCHAR2                IN
 FNAME                          VARCHAR2                IN
 CSID                           NUMBER                  IN     DEFAULT

Well blow me, Oracle hits the jackpot again! Let’s drop it into a procedure and output our test CLOBs.

CREATE or REPLACE PROCEDURE proc_xmlclob
AS

CURSOR cur_tab
IS SELECT * FROM tbl_testclob;

BEGIN

  -- Open Cursor for tbl_testclob
  FOR c in cur_tab LOOP
    dbms_xslprocessor.clob2file(c.clobcol, 'TEST_DIR', 'xmlclob_'||to_char(c.idcol)||'.csv');
  END LOOP;

END proc_xmlclob;
/

Let’s run and compare:-

o64-12c-n01:oracle@PROD12C1 > ls -ltr /home/oracle/extdata/test
total 16016
-rw-r--r-- 1 oracle asmadmin    4001 Oct 10 13:28 utlclob_1.csv
-rw-r--r-- 1 oracle asmadmin    8001 Oct 10 13:28 utlclob_2.csv
-rw-r--r-- 1 oracle asmadmin   16001 Oct 10 13:28 utlclob_3.csv
-rw-r--r-- 1 oracle asmadmin   32001 Oct 10 13:28 utlclob_4.csv
-rw-r--r-- 1 oracle asmadmin   64002 Oct 10 13:28 utlclob_5.csv
-rw-r--r-- 1 oracle asmadmin  128004 Oct 10 13:28 utlclob_6.csv
-rw-r--r-- 1 oracle asmadmin  256008 Oct 10 13:28 utlclob_7.csv
-rw-r--r-- 1 oracle asmadmin  512016 Oct 10 13:28 utlclob_8.csv
-rw-r--r-- 1 oracle asmadmin 1024032 Oct 10 13:28 utlclob_9.csv
-rw-r--r-- 1 oracle asmadmin 2048064 Oct 10 13:28 utlclob_10.csv
-rw-r--r-- 1 oracle asmadmin 4096128 Oct 10 13:28 utlclob_11.csv
-rw-r--r-- 1 oracle asmadmin    4000 Oct 10 13:28 xmlclob_1.csv
-rw-r--r-- 1 oracle asmadmin    8000 Oct 10 13:28 xmlclob_2.csv
-rw-r--r-- 1 oracle asmadmin   16000 Oct 10 13:28 xmlclob_3.csv
-rw-r--r-- 1 oracle asmadmin   32000 Oct 10 13:28 xmlclob_4.csv
-rw-r--r-- 1 oracle asmadmin   64000 Oct 10 13:28 xmlclob_5.csv
-rw-r--r-- 1 oracle asmadmin  128000 Oct 10 13:28 xmlclob_6.csv
-rw-r--r-- 1 oracle asmadmin  256000 Oct 10 13:28 xmlclob_7.csv
-rw-r--r-- 1 oracle asmadmin  512000 Oct 10 13:28 xmlclob_8.csv
-rw-r--r-- 1 oracle asmadmin 1024000 Oct 10 13:28 xmlclob_9.csv
-rw-r--r-- 1 oracle asmadmin 2048000 Oct 10 13:28 xmlclob_10.csv
-rw-r--r-- 1 oracle asmadmin 4096000 Oct 10 13:28 xmlclob_11.csv

So we can see that clob2file has produced files whose byte count exactly matches the character count for single-line CLOBs, but what about multi-line ones? Below is a data generator that embeds newlines in the CLOBs:-

DECLARE
bigtext   clob;
BEGIN

  EXECUTE IMMEDIATE 'truncate table tbl_testclob';

  FOR i IN 1 .. 5 LOOP
    bigtext := dbms_random.string('A',4000);
    INSERT INTO tbl_testclob VALUES (i, bigtext);
    
    FOR j in 1 .. i LOOP
      IF i = j THEN
        null;
      ELSE
        UPDATE tbl_testclob set clobcol = clobcol ||chr(10)|| clobcol where idcol=i;
      END IF;
    END LOOP;
  
  END LOOP;
  commit;

END;
/

SELECT idcol,dbms_lob.getlength(clobcol)
FROM tbl_testclob;
  
     IDCOL DBMS_LOB.GETLENGTH(CLOBCOL)
---------- ---------------------------
         1                        4000
         2                        8001
         3                       16003
         4                       32007
         5                       64015

Remove the old test files and run both routines again:-

o64-12c-n01:oracle@PROD12C1 > ls -ltr
total 248
-rw-r--r-- 1 oracle asmadmin  4001 Oct 10 13:37 utlclob_1.csv
-rw-r--r-- 1 oracle asmadmin  8002 Oct 10 13:37 utlclob_2.csv
-rw-r--r-- 1 oracle asmadmin 16004 Oct 10 13:37 utlclob_3.csv
-rw-r--r-- 1 oracle asmadmin 32009 Oct 10 13:37 utlclob_4.csv
-rw-r--r-- 1 oracle asmadmin 64018 Oct 10 13:37 utlclob_5.csv
-rw-r--r-- 1 oracle asmadmin  4000 Oct 10 13:38 xmlclob_1.csv
-rw-r--r-- 1 oracle asmadmin  8001 Oct 10 13:38 xmlclob_2.csv
-rw-r--r-- 1 oracle asmadmin 16003 Oct 10 13:38 xmlclob_3.csv
-rw-r--r-- 1 oracle asmadmin 32007 Oct 10 13:38 xmlclob_4.csv
-rw-r--r-- 1 oracle asmadmin 64015 Oct 10 13:38 xmlclob_5.csv

Again the clob2file output files are the exact byte size of the relevant CLOB, while the UTL_FILE versions contain the extra newline bytes added by the UTL_FILE export routine.
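The function I mentioned earlier, one that writes the CLOB to a file and outputs the file location for the CSV, can now be sketched on top of clob2file. This is a hypothetical illustration (names and the returned reference format are my own, assuming one file per row):-

CREATE OR REPLACE FUNCTION fn_clob2file
  (p_clob  IN CLOB
  ,p_dir   IN VARCHAR2
  ,p_fname IN VARCHAR2)
RETURN VARCHAR2
AS
BEGIN
  -- Write the CLOB out via dbms_xslprocessor
  dbms_xslprocessor.clob2file(p_clob, p_dir, p_fname);
  -- Return the file reference to be spooled into the CSV column
  RETURN p_dir || ':' || p_fname;
END fn_clob2file;
/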


ORACLE Edition Based Redefinition

Introduction

What can I say about Oracle Edition Based Redefinition (EBR) apart from the fact that it’s awesome, absolutely bloody awesome! As I write this I have just completed a six-month piece of work to move a client’s application to EBR, and the results are in: they completed their first application upgrade with 8 minutes of downtime. It should have been zero, but there was an issue with the ALB which required a reboot to resolve. Prior to the go-live the client ran the upgrade side by side in a pre-production environment using both the edition and non-edition approach. Both took around the same amount of time, about 20 hours, but the non-edition version would have been downtime all the way through, whereas with the edition version it was zero. I can’t lie, this wasn’t an easy process. Just to get to the point where they were able to look at editioning took a lot of back-end code changes and some considerable downtime, both to correct previous poor design/programming and to implement the changes required to enable EBR itself, but now they are there and have had their first (almost) zero-downtime upgrade they are extremely happy.

Anyway, moving on. What I intend to do in this article is give a brief overview of EBR, introduce the basics required to understand and implement it, then move on to demonstrating EBR with some simple examples using a simple test app.

EBR Overview

As of 11.2 every database has at least one edition available, the default edition ora$base. You can check this by running the following:-

SELECT property_value
FROM   database_properties
WHERE  property_name = 'DEFAULT_EDITION'; 

PROPERTY_VALUE
--------------------------------------------------------------------------
ORA$BASE

An edition is effectively a versioned copy of the database’s editionable objects. Once you have created your edition you are able to start making changes to the editionable objects in that edition without fear of causing issues for users connected to another edition.

Editionable objects

The following is a list of all objects that can be editioned:-
1. Function
2. Procedure
3. Synonym (but not public)
4. Package body/spec
5. View
6. Type (specification and body)
7. Library
8. Trigger

Note that not every object type in the database is editionable; most notable among the exceptions are tables and indexes. Working around this limitation is outlined later.

Permissions

Before you can create or drop an edition you require the CREATE ANY EDITION or DROP ANY EDITION privilege respectively. As these privileges have the potential to cause some major havoc, it is my opinion that edition management is best done from either a dedicated edition management account or another privileged user account.

CREATE USER ebrman IDENTIFIED BY ebrman DEFAULT TABLESPACE users;
GRANT CREATE SESSION, CREATE ANY EDITION, DROP ANY EDITION TO ebrman;

To create an edition you can use the following syntax:-

CREATE EDITION editionname;
CREATE EDITION editionname AS CHILD OF parenteditionname;

Each edition is limited to a single child, so by default the CREATE EDITION command creates the new edition under the only childless (leaf) edition available.

SQL> column EDITION_NAME format a20
SQL> column PARENT_EDITION_NAME format a20
SQL> select * from all_editions;

EDITION_NAME         PARENT_EDITION_NAME  USA
-------------------- -------------------- ---
ORA$BASE                                  YES

SQL> connect ebrman/ebrman

Connected.

SQL> create edition rel_v1;

Edition created.

SQL> create edition rel_v2 as child of rel_v1;

Edition created.

SQL> select * from all_editions;

EDITION_NAME         PARENT_EDITION_NAME  USA
-------------------- -------------------- ---
ORA$BASE                                  YES
REL_V1               ORA$BASE             YES
REL_V2               REL_V1               YES

Before a user can use an edition they must be granted access to it. The syntax for this is:-

GRANT USE ON EDITION {editionname} TO {user};

In the example below the ebrman user creates the edition rel_v2, sys then creates rel_v3, and ebrman then tries to create rel_v4:-

SQL> connect ebrman/ebrman
Connected.
SQL> Create edition rel_v2;

Edition created.

SQL> connect / as sysdba
Connected.
SQL> create edition rel_v3;

Edition created.

SQL> connect ebrman/ebrman
Connected.
SQL> create edition rel_v4;
create edition rel_v4
*
ERROR at line 1:
ORA-38802: edition does not exist

SQL> conn / as sysdba
Connected.
SQL> grant  use on edition rel_v3 to ebrman;

Grant succeeded.

SQL> connect ebrman/ebrman
Connected.
SQL> create edition rel_v4;

Edition created.

SQL>

Even though the ebrman user has the ability to create editions, it cannot create an edition as a child of one it doesn’t have direct access to.
As previously mentioned an edition can only have a single child; below is the error you receive when you try to create an edition off an edition that already has a child.

SQL> create edition rel_v4 as child of rel_v2;
create edition rel_v4 as child of rel_v2
*
ERROR at line 1:
ORA-38807: Implementation restriction: an edition can have only one child

For me this seems quite logical, as code builds on code. However, one feature that I would like to see introduced is the ability to merge unused parent/child editions into a single edition and hence reduce the edition hierarchy. For example, the client I helped migrate to EBR releases updates every quarter with an average system life of 5 years; that’s up to 20 different editions to review/manage, and when using things like crossedition triggers it becomes very complicated the larger the number of editions you have to consider.

Before a schema owner can start to edition objects, editioning needs to be enabled for that owner. This action is irreversible, so ensure that you want to do this before implementing it.

ALTER USER {username} ENABLE EDITIONS; 
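You can confirm the change has taken effect by checking the EDITIONS_ENABLED column of dba_users:-

SELECT username, editions_enabled
FROM   dba_users
WHERE  username = '{USERNAME}';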

Editioning views

With editioning enabled you are able to store and use multiple versions of the same piece of PL/SQL in a schema, but what if your updates require changes to a table, such as adding or modifying columns? This is where editioning views come to the rescue. Since a view is an editionable object, you introduce an editioning view as a wrapper for access to your base table, allowing each edition to see the parts of the table it should have access to. The view can only be a straight-up query of the table; it is able to display a subset of the available columns and alias them to mimic a rename.

CREATE OR REPLACE EDITIONING VIEW viewname AS
SELECT col1,
col3 AS col_alias
FROM tablename;

Editioning views are read-write by default but can be made read-only should the need arise.
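For example, the standard WITH READ ONLY view clause should apply here too (shown against the hypothetical view from above):-

CREATE OR REPLACE EDITIONING VIEW viewname AS
SELECT col1,
col3 AS col_alias
FROM tablename
WITH READ ONLY;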

Crossedition triggers

The introduction of editioning views can be a bit of a problem when DML is applied to the base tables, as columns may no longer exist, formats may have changed, etc. As such, forward crossedition triggers must be used to transform data coming from the parent edition so that it is in the correct format for the child edition, while reverse crossedition triggers do the opposite, transforming data entered in the child edition so that it is valid for the parent. This means that multiple editions can be used simultaneously without causing logical corruption of the data in the base table.

Both forward and reverse crossedition triggers are created directly on the base tables, as the worked example later shows; it is the application’s ordinary triggers that should be moved to the editioning views, which allows those triggers to be edition-specific.

Oracle guarantees that the execution plan for a query against an editioning view will be identical to the same query against the base table, so there is no performance penalty associated with the use of editioning views.

Edition Dictionary views

When working within an edition you can only see the objects that are available to that edition, but what happens if you want to review information about more than one edition at a time? A series of dictionary views has been created to assist with this:-

*_EDITIONS
*_EDITION_COMMENTS
*_OBJECTS
*_OBJECTS_AE
*_ERRORS
*_ERRORS_AE
*_USERS
*_VIEWS
*_EDITIONING_VIEWS
*_EDITIONING_VIEWS_AE
*_EDITIONING_VIEW_COLS
*_EDITIONING_VIEW_COLS_AE

The AE suffix means “all editions”, so you can see all objects regardless of the current edition; for example dba_objects_ae shows all objects across all editions, whereas dba_objects only shows those in the current edition.

Changing the edition

The last of the basics is changing editions. The first method I am going to show is how to change the default edition that the database uses. This changes the edition for EVERYONE that logs into the database after the point it is issued; it doesn’t change the edition for users already logged on. The command to change the edition at the database level is:-

ALTER DATABASE DEFAULT EDITION = rel_v1;

The alternative is to change the edition at the session level. The command for this is:-

ALTER SESSION SET EDITION = rel_v1;
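You can check which edition your session is currently using via SYS_CONTEXT:-

SELECT SYS_CONTEXT('USERENV', 'SESSION_EDITION_NAME') AS current_edition
FROM   dual;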

Worked Example

The first thing we are going to do is create a new user called my_car and set up some objects in its schema to form the basis of a simple application.

CREATE USER my_car
IDENTIFIED BY mycar
DEFAULT TABLESPACE users
QUOTA UNLIMITED ON users;

GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW, CREATE PROCEDURE, CREATE SEQUENCE, CREATE TRIGGER TO my_car;

Now connect as the my_car user and let’s create the application objects.

CREATE TABLE tbl_car (
  car_id        NUMBER(5)    NOT NULL,
  car_name      VARCHAR2(25) NOT NULL,
  car_date      DATE         NOT NULL,
  CONSTRAINT pk_emp PRIMARY KEY (car_id));

CREATE SEQUENCE seq_tblcar; 

CREATE OR REPLACE PROCEDURE my_car.proc_insertdata (p_string  IN tbl_car.car_name%TYPE) 
AS
BEGIN  
  INSERT INTO tbl_car (car_id, car_name, car_date)  VALUES (seq_tblcar.NEXTVAL, p_string, sysdate);
  commit;
END proc_insertdata;
/

Now let’s insert some test data and see the results:-

exec proc_insertdata('Ford Fiesta');
select * from tbl_car;

    CAR_ID CAR_NAME                  CAR_DATE
---------- ------------------------- ---------
         1 Ford Fiesta               08-OCT-14

Yay! Now we have our “application” in place, it’s time to set up editioning. The first step is to log in as SYS, enable editions for the schema owner and then grant access to all available editions:-

ALTER USER my_car ENABLE EDITIONS; 
GRANT USE ON EDITION rel_v1 TO my_car; 
GRANT USE ON EDITION rel_v2 TO my_car; 
GRANT USE ON EDITION rel_v3 TO my_car; 

At this point I would also change the current edition at the database level to ensure that all edition-based work is recorded within the first user-created edition. This is personal preference, not a requisite.

ALTER DATABASE DEFAULT EDITION = rel_v1;

Now it’s time to prepare the application so that it is ready for editions, which involves setting up the initial editioning views. The way that you do this is entirely up to you. You could completely rewrite your application to point to the new editioning view instead of directly at the table, costly in both development time and risk, or you could rename the tables and create the editioning views with the names of the tables: no costly development work, but downtime, and probably lots of it! In my experience downtime is the preferred option of the vendor; the client doesn’t want any downtime so wants neither option! Renaming the table is no big deal, as Oracle has worked hard to make this sort of task quick and easy:-

ALTER TABLE tbl_car RENAME TO ebt_car;

Now we have renamed the table, it is time to create the editioning view used to access it:-

CREATE OR REPLACE EDITIONING VIEW tbl_car AS
SELECT car_id, car_name, car_date
FROM   ebt_car;

Now let’s insert a second row of data into the table and view the results.

exec proc_insertdata('Vauxhall Corsa');
select * from tbl_car;

    CAR_ID CAR_NAME                  CAR_DATE
---------- ------------------------- ---------
         1 Ford Fiesta               08-OCT-14
         2 Vauxhall Corsa            08-OCT-14

select * from ebt_car;

    CAR_ID CAR_NAME                  CAR_DATE
---------- ------------------------- ---------
         1 Ford Fiesta               08-OCT-14
         2 Vauxhall Corsa            08-OCT-14

So that is the initial release complete: the application now has editioning enabled. Let’s look at the objects in the current edition and across all editions:-

COLUMN object_name FORMAT A20
COLUMN object_type FORMAT A20
COLUMN edition_name FORMAT A20

SELECT object_name, object_type, edition_name
FROM   user_objects
ORDER BY object_name;


OBJECT_NAME          OBJECT_TYPE          EDITION_NAME
-------------------- -------------------- --------------------
EBT_CAR              TABLE
PK_EMP               INDEX
PROC_INSERTDATA      PROCEDURE            REL_V1
SEQ_TBLCAR           SEQUENCE
TBL_CAR              VIEW                 REL_V1

SELECT object_name, object_type, edition_name
FROM   user_objects_ae
ORDER BY object_name;

OBJECT_NAME          OBJECT_TYPE          EDITION_NAME
-------------------- -------------------- --------------------
EBT_CAR              TABLE
PK_EMP               INDEX
PROC_INSERTDATA      PROCEDURE            REL_V1
SEQ_TBLCAR           SEQUENCE
TBL_CAR              VIEW                 REL_V1

As we can see there is no difference between the two listings, which is what we would expect as we have only been working within the rel_v1 edition.
So now it’s time for our first change to the application. We are going to add a new column to the table and update the insert procedure for it. The first thing we need to do is change the edition we are working in, to ensure any user logged on to our application isn’t affected by the changes to the table, editioning view or insert procedure.

ALTER SESSION SET EDITION = rel_v2;

Now we modify the table and add the column that we want

ALTER TABLE ebt_car ADD car_edition varchar2(20); 

Now redefine the editioning view to reflect the new column:-

CREATE OR REPLACE EDITIONING VIEW tbl_car AS
SELECT car_id, car_name, car_date, car_edition
FROM   ebt_car;

The last step is to modify the procedure to include the new functional requirement:-

CREATE OR REPLACE PROCEDURE my_car.proc_insertdata (p_string  IN tbl_car.car_name%TYPE) 
AS
BEGIN
  INSERT INTO tbl_car (car_id, car_name, car_date, car_edition)
  VALUES (seq_tblcar.NEXTVAL, p_string, sysdate, SYS_CONTEXT('USERENV', 'SESSION_EDITION_NAME')); 
  COMMIT;
END proc_insertdata;
/

Our changes are complete; let’s check the contents of the table and use the procedure to insert some data.

select * from tbl_car;

CAR_ID     CAR_NAME                  CAR_DATE  CAR_EDITION
---------- ------------------------- --------- --------------------
         1 Ford Fiesta               08-OCT-14
         2 Vauxhall Corsa            08-OCT-14
SQL> exec proc_insertdata('Peugeot 305');
SELECT * FROM tbl_car;

CAR_ID     CAR_NAME                  CAR_DATE  CAR_EDITION
---------- ------------------------- --------- --------------------
         1 Ford Fiesta               08-OCT-14
         2 Vauxhall Corsa            08-OCT-14
         3 Peugeot 305               08-OCT-14 REL_V2

SELECT * FROM ebt_car;

CAR_ID     CAR_NAME                  CAR_DATE  CAR_EDITION
---------- ------------------------- --------- --------------------
         1 Ford Fiesta               08-OCT-14
         2 Vauxhall Corsa            08-OCT-14
         3 Peugeot 305               08-OCT-14 REL_V2

So the insert works fine, but what’s happening in edition REL_V1? Let’s hop over and have a look:-

ALTER SESSION SET EDITION = rel_v1;
select * from tbl_car;

    CAR_ID CAR_NAME                  CAR_DATE
---------- ------------------------- ---------
         1 Ford Fiesta               08-OCT-14
         2 Vauxhall Corsa            08-OCT-14
         3 Peugeot 305               08-OCT-14

select * from ebt_car;

    CAR_ID CAR_NAME                  CAR_DATE  CAR_EDITION
---------- ------------------------- --------- --------------------
         1 Ford Fiesta               08-OCT-14
         2 Vauxhall Corsa            08-OCT-14
         3 Peugeot 305               08-OCT-14 REL_V2

So through the editioning view we can see the additional row, but the additional column data is still not visible to the user. We are now going to insert a row in this edition and see what happens in both editions rel_v1 and rel_v2.

exec proc_insertdata('Honda Prius');

SQL> SELECT * FROM tbl_car;
CAR_ID     CAR_NAME                  CAR_DATE
---------- ------------------------- ---------
         1 Ford Fiesta               08-OCT-14
         2 Vauxhall Corsa            08-OCT-14
         3 Peugeot 305               08-OCT-14
         4 Honda Prius               08-OCT-14

SQL> SELECT * FROM ebt_car;

    CAR_ID CAR_NAME                  CAR_DATE  CAR_EDITION
---------- ------------------------- --------- --------------------
         1 Ford Fiesta               08-OCT-14
         2 Vauxhall Corsa            08-OCT-14
         3 Peugeot 305               08-OCT-14 REL_V2
         4 Honda Prius               08-OCT-14

As expected the data is inserted but the car_edition column isn’t populated. Back in rel_v2 we can see that tbl_car is also presenting the data correctly.

ALTER SESSION SET EDITION = rel_v2;

SELECT * FROM tbl_car;

    CAR_ID CAR_NAME                  CAR_DATE  CAR_EDITION
---------- ------------------------- --------- --------------------
         1 Ford Fiesta               08-OCT-14
         2 Vauxhall Corsa            08-OCT-14
         3 Peugeot 305               08-OCT-14 REL_V2
         4 Honda Prius               08-OCT-14

So let’s have a look at the objects in rel_v2 and across all editions:-

COLUMN object_name FORMAT A20
COLUMN object_type FORMAT A20
COLUMN edition_name FORMAT A20

SELECT object_name, object_type, edition_name
FROM   user_objects
ORDER BY object_name;

OBJECT_NAME          OBJECT_TYPE          EDITION_NAME
-------------------- -------------------- --------------------
EBT_CAR              TABLE
PK_EMP               INDEX
PROC_INSERTDATA      PROCEDURE            REL_V2
SEQ_TBLCAR           SEQUENCE
TBL_CAR              VIEW                 REL_V2

SELECT object_name, object_type, edition_name
FROM   user_objects_ae
ORDER BY object_name; 

OBJECT_NAME          OBJECT_TYPE          EDITION_NAME
-------------------- -------------------- --------------------
EBT_CAR              TABLE
PK_EMP               INDEX
PROC_INSERTDATA      PROCEDURE            REL_V2
PROC_INSERTDATA      PROCEDURE            REL_V1
SEQ_TBLCAR           SEQUENCE
TBL_CAR              VIEW                 REL_V2
TBL_CAR              VIEW                 REL_V1

You can see that we have a copy of the editioning view and insert procedure for each edition that we worked in.

For the final example we are going to look at crossedition triggers. The developers have decided to split the car_name field into two distinct fields, car_manufacturer and car_model. To enable this change we start as normal by changing the edition and modifying the table:-

ALTER SESSION SET EDITION = rel_v3;

ALTER TABLE ebt_car ADD 
(car_manufacturer varchar2(25)
,car_model       varchar2(25)
);

Let’s update the editioning view to reflect the removal of the name column and the addition of the manufacturer and model columns:-

CREATE OR REPLACE EDITIONING VIEW tbl_car AS
SELECT car_id, car_manufacturer, car_model, car_date, car_edition
FROM   ebt_car;

The last step is to modify the procedure to include the new functional requirement:-

CREATE OR REPLACE PROCEDURE my_car.proc_insertdata 
    (p_manufacturer  IN tbl_car.car_manufacturer%TYPE,
     p_model    IN tbl_car.car_model%TYPE) 
AS
BEGIN
  INSERT INTO tbl_car (car_id, car_manufacturer, car_model, car_date, car_edition)
  VALUES (seq_tblcar.NEXTVAL, p_manufacturer, p_model, sysdate, SYS_CONTEXT('USERENV', 'SESSION_EDITION_NAME'));
  COMMIT;
END proc_insertdata;
/

Notice that the new version of the procedure and editioning view do not reference the old car_name column, as in this release/edition the column is no longer used. But how do we make sure that the data in car_name, car_manufacturer and car_model remains consistent between editions? Well, this is where forward and reverse crossedition triggers come into play. Each trigger is created in the child edition and is used to transform data between the parent and child: the forward crossedition trigger transforms data from the parent edition so that the child edition can use it, and the reverse crossedition trigger does the opposite, making data entered in the child edition correct for the parent. First we are going to create the forward crossedition trigger:-

CREATE OR REPLACE TRIGGER trg_xed_fwd_car
  BEFORE INSERT OR UPDATE ON ebt_car
  FOR EACH ROW
  FORWARD CROSSEDITION
  DISABLE
BEGIN
  :NEW.car_manufacturer := SUBSTR(:NEW.car_name, 1, INSTR(:NEW.car_name, ' ')-1);
  :NEW.car_model  := SUBSTR(:NEW.car_name, INSTR(:NEW.car_name, ' ')+1);
END trg_xed_fwd_car;
/

And now the reverse crossedition trigger:-

CREATE OR REPLACE TRIGGER trg_xed_rev_car
  BEFORE INSERT OR UPDATE ON ebt_car
  FOR EACH ROW
  REVERSE CROSSEDITION
  DISABLE
BEGIN
  :NEW.car_name :=  :NEW.car_manufacturer || ' ' || :NEW.car_model;
END trg_xed_rev_car;
/

Once both of the triggers have been created it’s time to enable them:-

ALTER TRIGGER trg_xed_fwd_car ENABLE;
ALTER TRIGGER trg_xed_rev_car ENABLE;
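Note that the triggers only fire for new DML, so rows inserted before this release still have empty car_manufacturer/car_model values. Oracle’s documented route for transforming existing rows is the apply_crossedition_trigger parameter of DBMS_SQL.PARSE; a simplified sketch (hedged, and untested here against this schema) using an identity update might look like:-

DECLARE
  c NUMBER := DBMS_SQL.OPEN_CURSOR();
  x NUMBER;
BEGIN
  -- Identity update touches every row, so the forward crossedition
  -- trigger fires and populates car_manufacturer/car_model
  DBMS_SQL.PARSE(
    c                          => c,
    statement                  => 'UPDATE ebt_car SET car_name = car_name',
    language_flag              => DBMS_SQL.NATIVE,
    apply_crossedition_trigger => 'TRG_XED_FWD_CAR');
  x := DBMS_SQL.EXECUTE(c);
  DBMS_SQL.CLOSE_CURSOR(c);
  COMMIT;
END;
/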

From this point forward any new data should now be populated correctly for all editions. So let's test:-

ALTER SESSION SET EDITION = rel_v3;

exec proc_insertdata('Audi','A3');

COLUMN CAR_MANUFACTURER FORMAT A15
COLUMN CAR_MODEL FORMAT A15
COLUMN CAR_EDITION FORMAT A11
SELECT * FROM tbl_car;

    CAR_ID CAR_MANUFACTURER          CAR_MODEL                 CAR_DATE  CAR_EDITION
---------- ------------------------- ------------------------- --------- --------------------
         1                                                     08-OCT-14
         2                                                     08-OCT-14
         3                                                     08-OCT-14 REL_V2
         4                                                     08-OCT-14
         5 Audi                      A3                        08-OCT-14 REL_V3

SELECT * from EBT_car;

    CAR_ID CAR_NAME                  CAR_DATE  CAR_EDITION CAR_MANUFACTURER CAR_MODEL
---------- ------------------------- --------- ----------- ---------------- ---------------
         1 Ford Fiesta               08-OCT-14
         2 Vauxhall Corsa            08-OCT-14
         3 Peugeot 305               08-OCT-14 REL_V2
         4 Honda Prius               08-OCT-14
         5 Audi A3                   08-OCT-14 REL_V3      Audi              A3                 

From this we can see that the reverse crossedition trigger has fired and populated the data for use by parent editions. Let's go to rel_v1 and see what happens:-

ALTER SESSION SET EDITION = rel_v1;

exec proc_insertdata('Porsche 911');

SELECT * FROM tbl_car;

    CAR_ID CAR_NAME                  CAR_DATE
---------- ------------------------- ---------
         1 Ford Fiesta               08-OCT-14
         2 Vauxhall Corsa            08-OCT-14
         3 Peugeot 305               08-OCT-14
         4 Honda Prius               08-OCT-14
         5 Audi A3                   08-OCT-14
         6 Porsche 911               08-OCT-14

SELECT * from EBT_car;

CAR_ID     CAR_NAME                  CAR_DATE  CAR_EDITION CAR_MANUFACTURE CAR_MODEL
---------- ------------------------- --------- ----------- --------------- ---------------
         1 Ford Fiesta               08-OCT-14
         2 Vauxhall Corsa            08-OCT-14
         3 Peugeot 305               08-OCT-14 REL_V2
         4 Honda Prius               08-OCT-14
         5 Audi A3                   08-OCT-14 REL_V3      Audi            A3
         6 Porsche 911               08-OCT-14             Porsche         911

So even when inserting from the grandparent edition the forward crossedition trigger has fired and kept the data consistent.
The next thing to address is the previously entered data – how do we backfill the gaps? Well, in the past I would have just gone to a previous edition and run an update statement as follows:-

UPDATE ebt_car
SET car_name = car_name
WHERE car_model is null;

This would use the crossedition trigger to populate the required fields. However, after reading through the 11gR2 whitepaper on Edition-Based Redefinition, it advises against this approach, as you should not be flitting between editions to get the desired results:-

“The firing rules for crossedition triggers dictate that regular DML issued by a session using
edition e will not fire forward crossedition triggers that are actual in edition e. But the
paradigm for edition-based redefinition requires that a session that is installing the upgrade
should use the post-upgrade edition. How, then, can such a session make a relevant forward
crossedition trigger fire?”

Trust me, once you have reviewed pages 26-30 you will probably be just as confused as I was after the first read (or 20). Just remember this is a simple example, and it leads back to my previous comment that unused editions should be able to be merged, to remove unnecessary levels of the edition hierarchy.

What Oracle recommends is that you use the apply_crossedition_trigger parameter of DBMS_SQL.PARSE to apply the crossedition trigger to the update statement. This means you would run the following:-

alter session set edition = rel_v3;

DECLARE
  lv_cur NUMBER := DBMS_SQL.open_cursor();
  lv_retval NUMBER;
BEGIN
  DBMS_SQL.PARSE(
    c                          => lv_cur,
    Language_Flag              => DBMS_SQL.NATIVE,
    Statement                  => 'UPDATE ebt_car SET car_name = car_name WHERE car_model is null',
    apply_crossedition_trigger => 'trg_xed_fwd_car' ); 
  lv_retval := DBMS_SQL.execute(lv_cur);
  DBMS_SQL.close_cursor(lv_cur);
  COMMIT;
END;
/

Now that we have fired the update, let's check the results:-

    CAR_ID CAR_NAME                  CAR_DATE  CAR_EDITION CAR_MANUFACTURE CAR_MODEL
---------- ------------------------- --------- ----------- --------------- ---------------
         1 Ford Fiesta               08-OCT-14             Ford            Fiesta
         2 Vauxhall Corsa            08-OCT-14             Vauxhall        Corsa
         3 Peugeot 305               08-OCT-14 REL_V2      Peugeot         305
         4 Honda Prius               08-OCT-14             Honda           Prius
         5 Audi A3                   08-OCT-14 REL_V3      Audi            A3
         6 Porsche 911               08-OCT-14             Porsche         911

Now that we have completed the backfill let's test data insertion between editions:-

alter session set edition = rel_v3;
exec proc_insertdata('Ferrari', 'Testarossa');
alter session set edition = rel_v2;
exec proc_insertdata('Skoda Fabia');
alter session set edition = rel_v1;
exec proc_insertdata('Fiat 500');

    CAR_ID CAR_NAME                  CAR_DATE  CAR_EDITION CAR_MANUFACTURE CAR_MODEL
---------- ------------------------- --------- ----------- --------------- ---------------
         1 Ford Fiesta               08-OCT-14             Ford            Fiesta
         2 Vauxhall Corsa            08-OCT-14             Vauxhall        Corsa
         3 Peugeot 305               08-OCT-14 REL_V2      Peugeot         305
         4 Honda Prius               08-OCT-14             Honda           Prius
         5 Audi A3                   08-OCT-14 REL_V3      Audi            A3
         6 Porsche 911               08-OCT-14             Porsche         911
         7 Ferrari Testarossa        08-OCT-14 REL_V3      Ferrari         Testarossa
         8 Skoda Fabia               08-OCT-14 REL_V2      Skoda           Fabia
         9 Fiat 500                  08-OCT-14             Fiat            500

As you can see all fields are populated for all editions.

Connecting to different Editions

The last thing to discuss here is connecting to different editions. I have already touched on this earlier in the article, outlining methods to change edition at the database level and session level. But what if you are accessing through an application? How do you change your edition setting then? Well, there are ways and means.
One method that can be used to manage the edition for all users is to update the edition of the service the users connect to the database with. To check the available services and editions run the following:-

COLUMN name FORMAT A20
COLUMN edition FORMAT A20

SELECT name, edition
FROM   dba_services;

NAME                 EDITION
-------------------- --------------------
SYS$BACKGROUND
SYS$USERS      
LIVE12CXDB
LIVE12C              

From this list we can see the users connect to the database using the LIVE12C service, and as no edition value is shown it uses the database default (rel_v1). To change this, use the dbms_service.modify_service procedure:-

BEGIN
  DBMS_SERVICE.modify_service(service_name => 'LIVE12C',edition  => 'REL_V2', modify_edition=> TRUE);
END;
/

SELECT name, edition
FROM   dba_services;

NAME                 EDITION
-------------------- --------------------
SYS$BACKGROUND
SYS$USERS
LIVE12CXDB
LIVE12C              REL_V2

o64-12c-n01:oracle@LIVE12C1 > sqlplus my_car/mycar@LIVE12C

SQL*Plus: Release 12.1.0.2.0 Production on Wed Oct 8 13:29:28 2014

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Wed Oct 08 2014 13:28:28 +01:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL>  select SYS_CONTEXT('USERENV', 'SESSION_EDITION_NAME') from dual;

SYS_CONTEXT('USERENV','SESSION_EDITION_NAME')
--------------------------------------------------------------------------------
REL_V2

So we can see that all new connections will ignore the database default edition and use the edition specified for the service. The only issue with this is that it applies to all users, so for, say, post-upgrade application testing it wouldn't really work. For that you could add another service specifically for testing new releases:-

srvctl add service -d LIVE12C -r "LIVE12C1" -s TEST2 -P NONE -y AUTOMATIC

Then modify the service edition as above:-

BEGIN
  DBMS_SERVICE.modify_service(
    service_name   => 'TEST2',
    edition        => 'REL_V3',
    modify_edition => TRUE);
END;
/

Once this is in place you just need to create a tnsnames entry to connect to the new service and Bob's your uncle – a testing connection.
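As a sketch, a tnsnames.ora entry for the new service might look like the following (the host name is illustrative; the service name matches the srvctl example above):

```
TESTLIVE12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = o64-12c-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = TEST2)
    )
  )
```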

o64-12c-n01:oracle@LIVE12C1 > sqlplus test/testpass@TESTLIVE12C

SQL*Plus: Release 12.1.0.2.0 Production on Wed Oct 8 13:45:46 2014

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Wed Oct 08 2014 13:28:28 +01:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL>  select SYS_CONTEXT('USERENV', 'SESSION_EDITION_NAME') from dual;

SYS_CONTEXT('USERENV','SESSION_EDITION_NAME')
--------------------------------------------------------------------------------
REL_V3

Obviously this places an overhead on the user to connect to the database using the correct service, which has the potential for issues (trust me – it seems simple enough, but not for some people!), so let's look at some methods to control access at the user level.
If you only want a small group of testing users to log in using the new release then you might want to try using a logon trigger to alter the edition at the session level, such as:-

CREATE OR REPLACE TRIGGER trg_log_on_database
AFTER LOGON ON DATABASE
ENABLE
BEGIN
  IF USER = 'TEST' THEN
    dbms_session.set_edition_deferred('rel_v2');
  END IF;
END;
/

This is a good way of enabling a few users to use a different edition on a new login. Obviously, as the number of users increases, the management overhead of this method does too. Below is an example:-

SQL> conn test/testpass
Connected.
SQL> select SYS_CONTEXT('USERENV', 'SESSION_EDITION_NAME') from dual;

SYS_CONTEXT('USERENV','SESSION_EDITION_NAME')
--------------------------------------------------------------------------------
REL_V2

A much better way of managing editions per user would be to store the edition in an application context and write a method that uses this to set the edition for the session at logon. In the application I mentioned upgrading to EBR earlier, I pushed forward a change to use application context to replace some slow variable-based logic that existed (each user had a series of fixed security IDs etc. that were continually queried throughout the entire application). After the application upgrade was complete, an update was run to move testing users to the new edition. Once the upgrade had been completed and signed off, the update was made for all users. They were then notified to return to the main application screen, where a method automatically runs to re-initialise the user's application context, and hence no logoff/logon was required to get all users onto the new upgraded version. Obviously this only works with database-code-only release drops and not application changes, but every little helps ;)!
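As a sketch of the application-context approach (the context name, attribute and mapping table are hypothetical – a real implementation would populate the context from a user/edition mapping table via its trusted package):

```sql
-- Hypothetical sketch: each user's target edition is held in an
-- application context (ctx_app), populated elsewhere from a mapping
-- table. A logon trigger reads it and sets the session edition.
CREATE OR REPLACE TRIGGER trg_log_on_database
AFTER LOGON ON DATABASE
ENABLE
DECLARE
  lv_edition VARCHAR2(128);
BEGIN
  -- ctx_app/user_edition are assumed names, not part of the article
  lv_edition := SYS_CONTEXT('ctx_app', 'user_edition');
  IF lv_edition IS NOT NULL THEN
    dbms_session.set_edition_deferred(lv_edition);
  END IF;
END;
/
```

The attraction of this design is that moving a user to a new edition is just an update to the mapping table plus a context refresh – no password resets, services or tnsnames changes are needed.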

Hopefully this has whet your appetite for EBR. Remember this is only an introduction; the 11gR2 whitepaper on Edition-Based Redefinition, the EBR chapters of the 12c Database Development Guide and other Oracle docs provide much more insight into the topics outlined here – Happy Editioning!

Posted in 11gR2, 12c, Database Editioning, Edition Based Redefinition

ORA-15056, ORA-15046 when using ASM CP to FTP to ASM location

I've been playing around with ASM FTP, copying files to the file system of the local and remote servers, then thought I'd test copying to a remote ASM file system. The format for the command is slightly different from the normal ASM cp command:-

cp <sourcefile> <asmdbuser>@<IP/name of server>.<portno>.<asm sid>:<target file> 

and hit the following error:-

ASMCMD> cp +fra/prod12c/controlfile/Current.256.858866935 sys@192.168.56.207.+ASM1:+FRA/dist12c/controlfile/Current.256.858866935
Enter password: *******
copying +fra/prod12c/controlfile/Current.256.858866935 -> 192.168.56.207:+FRA/dist12c/controlfile/Current.256.858866935
ASMCMD-8016: copy source '+fra/prod12c/controlfile/Current.256.858866935' and target '+FRA/dist12c/controlfile/Current.256.858866935' failed
ORA-15056: additional error message
ORA-15046: ASM file name '+FRA/dist12c/controlfile/Current.256.858866935' is not in single-file creation form
ORA-06512: at "SYS.X$DBMS_DISKGROUP", line 486
ORA-06512: at line 3 (DBD ERROR: OCIStmtExecute)
ASMCMD>

Basically, putting the full OMF file name down as the target when entering the command conflicted with OMF naming conventions, and hence it had to be dropped. When you use cp to FTP between servers, Oracle doesn't create the file with the name you specify; it creates it as diskgroup/ASM/file_type/file_name.#.# and then creates a symbolic link (alias) with the target name you specified, pointing to that location.

ASMCMD> ls -l
Type         Redund  Striped  Time             Sys  Name
CONTROLFILE  UNPROT  FINE     SEP 25 20:00:00  N    Current => +FRA/ASM/CONTROLFILE/Current.256.859236701
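So the working form of the command drops the OMF-style file name from the target and uses an alias name instead – a sketch based on the failing example above (the dist12c/controlfile directory tree must already exist on the target diskgroup):

```
ASMCMD> cp +fra/prod12c/controlfile/Current.256.858866935 sys@192.168.56.207.+ASM1:+FRA/dist12c/controlfile/Current
```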

Another thing to note is that the target directory structure needs to exist, else you will receive the following errors:-

ASMCMD> cp +fra/prod12c/controlfile/Current.256.858866935 sys@192.168.56.207.+ASM1:+FRA/prod12c/controlfile/Current
Enter password: *******
copying +fra/prod12c/controlfile/Current.256.858866935 -> 192.168.56.207:+FRA/prod12c/controlfile/Current
ASMCMD-8016: copy source '+fra/prod12c/controlfile/Current.256.858866935' and target '+FRA/prod12c/controlfile/Current' failed
ORA-17628: Oracle error 19505 returned by remote Oracle server
ORA-19505: failed to identify file ""
ORA-06512: at "SYS.X$DBMS_DISKGROUP", line 486
ORA-06512: at line 3 (DBD ERROR: OCIStmtExecute)
ASMCMD>

An ORA-19505 error helpfully (not!) identifying the area of the issue.
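The fix is to create the missing directory structure on the target ASM instance before the copy – a sketch using asmcmd mkdir, run against the target +ASM1 instance, with directory names taken from the example above:

```
ASMCMD> mkdir +FRA/prod12c
ASMCMD> mkdir +FRA/prod12c/controlfile
```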

Find out more about the asm cp command and others here:-

http://docs.oracle.com/database/121/OSTMG/asm_util004.htm#OSTMG94456

Posted in ASM

TNS-12547: TNS:lost contact and Linux Error: 32: Broken pipe

Yet another odd issue experienced during my 12c installation tests. After a completely successful 12.1.0.2.0 install – seriously, no errors, nothing ignored, a genuine 100% perfect installation – I went to connect to the database and received the following:-

SQL*Plus: Release 12.1.0.2.0 Production on Tue Sep 23 15:51:16 2014

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Mon Sep 22 2014 13:16:22 +01:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options



TNS-12547: TNS:lost contact

"Oh my", I think, were the words I used. Anyway, this meant little to me, so off I toddled to the listener log to see if there was any more info there:-

TNS-12518: TNS:listener could not hand off client connection
TNS-12547: TNS:lost contact
TNS-12560: TNS:protocol adapter error
TNS-00517: Lost contact
Linux Error: 32: Broken pipe

OK, so the listener couldn't hand off the connection to the database – interesting! Anyway, a little bit of googling and MOS later I hit upon note 550859.1 – Troubleshooting Guide for TNS-12518 TNS listener could not hand off client connection. From this, the issue could be due to many reasons, but it's nothing to do with the listener itself; it's more likely OS or DB resources being exhausted. I worked through the tests in the note (bequeath connection, checking database and server resources) with no resolution; however, it pointed me in the direction of note 1069517.1 – ORA-12537 / ORA-12547 or TNS-12518 if Listener (including SCAN Listener) and Database are Owned by Different OS User. Looking at this note, if you are using role separation (which I was) the error may be due to incorrect permissions on the oracle binary in the ORACLE_HOME. A quick ls -ltr on the oracle binary revealed this to be true.

o64-12c-n01:oracle@PROD12C1 > ls -ltr /u01/app/oracle/product/12.1.0/dbhome_1/bin/oracle
-rwxrwsr-x 1 oracle asmadmin 323762228 Sep 21 09:45 /u01/app/oracle/product/12.1.0/dbhome_1/bin/oracle

When the permissions as stated in the document should be:-

-rwsr-s--x 1 oracle asmadmin 184286251 Aug  9 16:25 /u01/app/oracle/product/12.1.0/dbhome_1/bin/oracle

so a quick su to root and a chmod

[root@o64-12c-n01 ~]# chmod 6751 /u01/app/oracle/product/12.1.0/dbhome_1/bin/oracle
[root@o64-12c-n01 ~]# ls -ltr /u01/app/oracle/product/12.1.0/dbhome_1/bin/oracle
-rwsr-s--x 1 oracle asmadmin 323762228 Sep 21 09:45 /u01/app/oracle/product/12.1.0/dbhome_1/bin/oracle

and I was able to connect instantly – yay !!!


SQL*Plus: Release 12.1.0.2.0 Production on Tue Sep 23 15:51:16 2014

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Mon Sep 22 2014 13:16:22 +01:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL>

BACKGROUND: When connecting bequeath to the database you are logged on as the oracle software owner, and hence can create the OS process required for the database connection (either shared or dedicated) because you bypass the listener. However, when going via TNS/EZConnect etc. you connect to the database via a listener, and the listener owner, in this case grid, needs to create an OS process as the oracle software owner for the database connection. Since the setuid bit was not set on the owner permission of $ORACLE_HOME/bin/oracle, the grid user couldn't complete the process creation required to hand the inbound connection over to the database, which gives the Linux error 32: broken pipe.

Posted in 12c, Installation, LINUX Admin, VBox install

A lesson learnt in oracle space management

So I hit a little bug the other day where space that was being removed from a CLOB through an automated purge routine wasn't being reused. This led to a heated discussion with a colleague regarding the high water mark, freelists and table fragmentation. His view was that this was expected behaviour and that the database was acting normally. I asked him where data was being added to the table and he stated quite clearly that data only gets added to the end of an extent, and that deleted space in a table doesn't get reused, which leads to table fragmentation and hence why the purge routine has an ALTER TABLE MOVE and index rebuild built in. We talked in circles for a while, then I asked him to prove his point and I went off to prove mine. While generating my proof I hit upon the following note on MOS:-
HOW TO DEMONSTRATE SPACE RELEASE / REUSE AFTER A DELETE (Doc ID 1098608.1)
Which shows the expected behaviour quite nicely I thought and so I used it as the basis of my proof which I am blogging below.

So the first thing you need to verify is that you are using Automatic Segment Space Management (ASSM). Let's check the tablespace:-

SELECT tablespace_name, extent_management, segment_space_management
FROM dba_tablespaces
WHERE tablespace_name = 'USERS';

TABLESPACE_NAME                EXTENT_MAN SEGMEN
------------------------------ ---------- ------
USERS                          LOCAL      AUTO

OK, so as we can see segment_space_management is set to AUTO – great. Now let's create a user with minimal privs to run the test:-

create user testassm identified by password default tablespace users;
alter user testassm quota unlimited on users;
grant connect, create any table, create any procedure, create sequence to testassm;

As the testassm user we are going to create a table containing a clob.

CREATE TABLE  tbl_test(
idcol number, 
cola varchar2(500), 
clobcol clob) 
storage (initial 1k next 1k);

If you are following the note you will see that I'm not creating the clob as a securefile. This is due to the fact my production system doesn't use them. Now let's create a sequence to act as the PK for the table, and insert a row so we can look at some object metrics:-

CREATE SEQUENCE seq_test START WITH 1 INCREMENT BY 1;

INSERT INTO tbl_test VALUES(seq_test.nextval, dbms_random.string('A',500), dbms_random.string('A',5000));
COMMIT;

SQL> analyze table tbl_test compute statistics;

Table analyzed.

SQL> column segment_name format a30
SQL> SELECT segment_type, segment_name, blocks, extents FROM user_segments;

SEGMENT_TYPE       SEGMENT_NAME                       BLOCKS    EXTENTS
------------------ ------------------------------ ---------- ----------
TABLE              TBL_TEST                                8          1
LOBINDEX           SYS_IL0000079224C00003$$                8          1
LOBSEGMENT         SYS_LOB0000079224C00003$$               8          1

SQL> SELECT table_name, ((blocks + empty_blocks) - blocks)/(blocks + empty_blocks) HWM,  blocks, empty_blocks, num_rows FROM user_tables;

TABLE_NAME                            HWM     BLOCKS EMPTY_BLOCKS   NUM_ROWS
------------------------------ ---------- ---------- ------------ ----------
TBL_TEST                             .375          5            3          1

So we have the number of extents and some info about used blocks for our test object. The next step is to insert a bit of data:-

DECLARE
  -- loop counters i and j are declared implicitly by the FOR loops
  l_clob clob;
BEGIN
-- Generate clob for inserting
  FOR i IN 1 .. 1000 LOOP
    L_clob := l_clob||dbms_random.string('A',10);
  END LOOP;
-- Populate the table with data
  FOR j IN 1 .. 500 loop
    INSERT INTO tbl_test VALUES (seq_test.nextval, dbms_random.string('A',500), l_clob);
  END LOOP;
COMMIT;

END;
/

So we have inserted 500 rows, lets check the size changes for the lob_segment and table.

SQL>  analyze table tbl_test compute statistics;

Table analyzed.

SQL> SELECT table_name, ((blocks + empty_blocks) - blocks)/(blocks + empty_blocks) HWM,  blocks, empty_blocks, num_rows FROM user_tables;

TABLE_NAME                            HWM     BLOCKS EMPTY_BLOCKS   NUM_ROWS
------------------------------ ---------- ---------- ------------ ----------
TBL_TEST                       .104166667         43            5        501

SQL>  SELECT segment_type, segment_name, blocks, extents FROM user_segments;

SEGMENT_TYPE       SEGMENT_NAME                       BLOCKS    EXTENTS
------------------ ------------------------------ ---------- ----------
TABLE              TBL_TEST                               48          6
LOBINDEX           SYS_IL0000079224C00003$$                8          1
LOBSEGMENT         SYS_LOB0000079224C00003$$            1152         24

SQL>

So from the first 500 rows we have increased the number of blocks for the table by 40, and by 1,144 for the lobsegment. Let's insert another 500 rows and check again.

SQL> analyze table tbl_test compute statistics;

Table analyzed.

SQL> SELECT table_name, ((blocks + empty_blocks) - blocks)/(blocks + empty_blocks) HWM,  blocks, empty_blocks, num_rows FROM user_tables;

TABLE_NAME                            HWM     BLOCKS EMPTY_BLOCKS   NUM_ROWS
------------------------------ ---------- ---------- ------------ ----------
TBL_TEST                       .090909091         80            8       1001

SQL> SELECT segment_type, segment_name, blocks, extents FROM user_segments;

SEGMENT_TYPE       SEGMENT_NAME                       BLOCKS    EXTENTS
------------------ ------------------------------ ---------- ----------
TABLE              TBL_TEST                               88         11
LOBINDEX           SYS_IL0000079224C00003$$                8          1
LOBSEGMENT         SYS_LOB0000079224C00003$$            2176         32

So again we add 40 blocks to the table and 1,024 blocks to the lobsegment. The next test is to delete some data and see the effect.

SQL> DELETE tbl_test WHERE idcol BETWEEN 250 AND 900;

651 rows deleted.

SQL> analyze table tbl_test compute statistics;

Table analyzed.

SQL> SELECT table_name, ((blocks + empty_blocks) - blocks)/(blocks + empty_blocks) HWM,  blocks, empty_blocks, num_rows FROM user_tables;

TABLE_NAME                            HWM     BLOCKS EMPTY_BLOCKS   NUM_ROWS
------------------------------ ---------- ---------- ------------ ----------
TBL_TEST                       .090909091         80            8        350

SQL>  SELECT segment_type, segment_name, blocks, extents FROM user_segments;

SEGMENT_TYPE       SEGMENT_NAME                       BLOCKS    EXTENTS
------------------ ------------------------------ ---------- ----------
TABLE              TBL_TEST                               88         11
LOBINDEX           SYS_IL0000079224C00003$$               16          2
LOBSEGMENT         SYS_LOB0000079224C00003$$            2176         32

SQL>

So, to test the theory that Oracle only writes to the end of a segment, let's insert another 500 rows (re-running the same PL/SQL block as before) and check again.

SQL> SELECT max(idcol) FROM tbl_test;

MAX(IDCOL)
-----------
       1501

SQL> analyze table tbl_test compute statistics;

Table analyzed.

SQL>  SELECT table_name, ((blocks + empty_blocks) - blocks)/(blocks + empty_blocks) HWM,  blocks, empty_blocks, num_rows FROM user_tables;

TABLE_NAME                            HWM     BLOCKS EMPTY_BLOCKS   NUM_ROWS
------------------------------ ---------- ---------- ------------ ----------
TBL_TEST                       .090909091         80            8        850

SQL>  SELECT segment_type, segment_name, blocks, extents FROM user_segments;


SEGMENT_TYPE       SEGMENT_NAME                       BLOCKS    EXTENTS
------------------ ------------------------------ ---------- ----------
TABLE              TBL_TEST                               88         11
LOBINDEX           SYS_IL0000079224C00003$$               16          2
LOBSEGMENT         SYS_LOB0000079224C00003$$            3072         39


So we can see that there has been no increase for the table – the deleted space has been reused – but the lobsegment has grown, and interestingly the lobindex hasn't increased. Anyway, this is some guise of the bug that we were seeing in live.

NOTE: For the article I am testing on 11.2.0.3.0. I have tested this on 11.2.0.4.0 and 12.1.0.1.0 vanilla without issue.

Posted in 10G, 11gR2, Performance, Uncategorized

Using ORACLE Log Miner

This post is going to be a very basic tutorial on using LogMiner. As well as being used to recover from logical corruption (user error!), LogMiner can be used for data auditing and analysis. It's this functionality that I needed to assist with the resolution of a particularly nasty blocking lock issue.

For a blocking lock I would usually perform a 10046 trace on the sessions, review the output and then advise. The problem with this site is that the tracing cripples the server, so much so that it becomes unusable and users can barely log on, let alone traverse through the system enough to generate the required data. As such we were at an impasse: we knew the application routine that was causing the issue, but we didn't know where in the routine the problem was being caused. We had looked at the SQL the session was reported to be running just before it failed, but this piece of SQL was fired so frequently during the process that it shed no light at all. It was at this point that I decided it was time to use LogMiner to attempt to work out where we were in the process when it failed, as this could at least assist the developers by saying where the issue wasn't – basically mine the logs for the transaction, work out the SQL fired and try to figure it all out with the application developers.

In the example below I'm not going to replicate the blocking lock issue that I worked on, but I will reproduce enough to show how this method helped and outline why it isn't as good as a session trace. Below is an outline of the steps the apps routine will complete:-

STEP 1: Insert a row into tbl_test1 using sequence seq_test to populate idcol.
STEP 2: Get MAX(idcol) from tbl_test2, increment by 1 and insert a row into tbl_test2.
STEP 3: Select a value from tbl_test3 for use in the apps routine and insert it into tbl_test3.

Below are the object definitions:-

create sequence seq_test start with 1 increment by 1 nomaxvalue order;
create table tbl_test1(idcol number(9) ,cola varchar2(30));
create table tbl_test2(idcol number(9) ,cola varchar2(30));
create unique index ind_test2_1 on tbl_test2(idcol);
create table tbl_test3(idcol number(9) ,cola varchar2(30));

The first steps are run as SYSDBA. For this example I am on Linux 5.9, running 11.2.0.4.2, and am going to use the online logs rather than an archived redo log; the principle is the same though. The first step is to check whether the database has supplemental logging enabled:-

SQL> SELECT force_logging, supplemental_log_data_min,supplemental_log_data_pk,supplemental_log_data_ui, supplemental_log_data_fk,supplemental_log_data_all, supplemental_log_data_pl
 2 FROM v$database;
FOR SUPPLEME SUP SUP SUP SUP SUP
--- -------- --- --- --- --- ---
NO  NO       NO  NO  NO  NO  NO
SQL>

If supplemental logging is not active then run the following:-

SQL> alter database add supplemental log data;

After this we need to identify the files required to be mined and present them to logminer. Once this has completed we will then start the mining process:-

AS SYSDBA 
SQL> insert into tbl_test2 values (100, 'Load');

1 row created.

SQL> commit;

Commit complete.

SQL> SELECT member FROM v$logfile;

MEMBER
--------------------------------------------------------------------------------
+DATA/ora11g23/onlinelog/group_3.263.801863059
+DATA/ora11g23/onlinelog/group_2.262.801863057
+DATA/ora11g23/onlinelog/group_1.261.801863055

SQL> BEGIN
2 DBMS_LOGMNR.ADD_LOGFILE ('+DATA/ora11g23/onlinelog/group_1.261.801863055');
3 DBMS_LOGMNR.ADD_LOGFILE ('+DATA/ora11g23/onlinelog/group_3.263.801863059');
4 DBMS_LOGMNR.ADD_LOGFILE ('+DATA/ora11g23/onlinelog/group_2.262.801863057');
5 END;
6/

PL/SQL procedure successfully completed.

SQL> BEGIN
2 DBMS_LOGMNR.START_LOGMNR(options => dbms_logmnr.dict_from_online_catalog);
3 END;
4 /

PL/SQL procedure successfully completed.

SQL>

Now we are going to start two user sessions running through the apps code. The code will be executed in session 1 then session 2:-

SQL> Select dbms_transaction.local_transaction_id from dual;
SESSION 1                          SESSION 2

LOCAL_TRANSACTION_ID               LOCAL_TRANSACTION_ID
---------------------------------  ---------------------------------

SQL> insert into tbl_test1 values (seq_test.nextval,'Test1 First Row');

1 row created.                     1 row created.

SQL> Select dbms_transaction.local_transaction_id from dual;

LOCAL_TRANSACTION_ID               LOCAL_TRANSACTION_ID
---------------------------------  ----------------------------------
5.4.7995                           4.33.7691

SQL> SELECT max(idcol) from tbl_test2;

MAX(IDCOL)                         MAX(IDCOL)
----------                         ----------
       100                                100

SQL> INSERT INTO tbl_test2 values (105+1, 'Test2 row 106');   

1 row created.                     ** SESSION HANGS ** 
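Before reaching for LogMiner it is worth confirming who is blocking whom; a quick sketch using the BLOCKING_SESSION column on v$session:

```sql
-- Run as SYSDBA: each waiting session reports the SID of its blocker
SELECT sid, blocking_session, event
FROM   v$session
WHERE  blocking_session IS NOT NULL;
```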

So now we have a blocking session; let's log in as SYSDBA and look at the LogMiner contents for each of the sessions:-

For Session 1:-
-------------
SQL> SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID,SQL_REDO, SQL_UNDO
2 FROM V$LOGMNR_CONTENTS
3 WHERE XIDUSN=5 and XIDSLT=4 and XIDSQN=7995
4 ORDER BY timestamp ASC;
BURY
5.4.7995
set transaction read write;

BURY
5.4.7995
insert into "BURY"."TBL_TEST2"("IDCOL","COLA") values ('101','Test2 row 106');
delete from "BURY"."TBL_TEST2" where "IDCOL" = '101' and "COLA" = 'Test2 row 106' and ROWID = 'AAATOdAAHAAAAFTAAB';

BURY
5.4.7995
insert into "BURY"."TBL_TEST1"("IDCOL","COLA") values ('21','Test1 First Row');
delete from "BURY"."TBL_TEST1" where "IDCOL" = '21' and "COLA" = 'Test1 First Row' and ROWID = 'AAATOcAAHAAAAFrAAA';

For Session 2:-
-------------
BURY
4.33.7691
set transaction read write;

BURY
4.33.7691
insert into "BURY"."TBL_TEST2"("IDCOL","COLA") values ('101','Test2 row 106');
delete from "BURY"."TBL_TEST2" where "IDCOL" = '101' and "COLA" = 'Test2 row 106' and ROWID = 'AAATOdAAHAAAAFVAAA';

BURY
4.33.7691
insert into "BURY"."TBL_TEST1"("IDCOL","COLA") values ('22','Test1 First Row');
delete from "BURY"."TBL_TEST1" where "IDCOL" = '22' and "COLA" = 'Test1 First Row' and ROWID = 'AAATOcAAHAAAAFtAAA';

For each session, as we can see, the insert statements are there but not the selects. Also, looking at the SQL, you can see that the values generated by the sequence appear as explicit literals. This is why using LogMiner in this way is not an exact science, more of an art. As I knew the application SQL it was easy to map the SQL_REDO back to the actual SQL used to produce the change.
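If the transaction IDs had not been captured up front with dbms_transaction.local_transaction_id, the same usn.slot.sqn triple can be recovered for any live transaction from v$transaction (note the column is XIDSLOT there, versus XIDSLT in V$LOGMNR_CONTENTS):

```sql
-- Map each active session to its local transaction id
SELECT s.sid,
       s.username,
       t.xidusn || '.' || t.xidslot || '.' || t.xidsqn AS xid
FROM   v$session     s
       JOIN v$transaction t ON t.addr = s.taddr;
```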

HTH

Posted in 11gR2, 12c, DB Recovery, LogMiner, RAC, SQL*Plus

Finding out information about ASM

So I had an odd request today. Some batch jobs were taking too long to complete, so an incident call was opened and we all went to work. From the start the issue seemed IO related, so we got the storage team on the call. Before they would start to look at the problem they wanted to know the device IDs of the disks being used by the ASM diskgroups. An odd request but not that difficult, I thought, so I started to Google and didn't come up with anything. Anyway, here are the scripts I used to generate the required data.

To find the ASM disk names I wrote the following piece of SQL:-

SELECT SUBSTR(PATH,INSTR(PATH,'/',-1)+1)
FROM v$asm_disk
WHERE group_number IN 
  (
  SELECT group_number
  FROM v$asm_diskgroup
  WHERE name = 'DATA'
  );

ASM_DISK2
ASM_DISK1
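The SUBSTR(PATH, INSTR(PATH, '/', -1) + 1) expression simply keeps everything after the last slash; on the command line, basename does the same job (the path below is a made-up ASMLib-style example, as v$asm_disk.PATH varies by setup):

```shell
# Hypothetical disk path; the PATH column in v$asm_disk varies by configuration
path=/dev/oracleasm/disks/ASM_DISK1

# basename strips everything up to and including the last '/'
basename "$path"
```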

I then pasted this output into a file called get_devno.input. After this I created a shell script with the required permissions called get_devno.sh, which looks like:-

#!/bin/ksh
# Start with a fresh output file
echo " " > get_devno.output

# For each ASM disk label in get_devno.input, ask ASMLib which block
# device backs it, then look that device up in /dev/disk/by-id
for a in `cat get_devno.input`
do
  for b in `oracleasm querydisk -p $a | grep sd | awk -F: '{print substr($1,6)}'`
  do
    ls -ltr /dev/disk/by-id | grep $b >> get_devno.output
  done
done
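The awk in the inner loop strips the leading /dev/ from the partition line that oracleasm querydisk -p reports; a stand-alone illustration (the sample line below mimics typical querydisk output, so treat its exact shape as an assumption about your ASMLib version):

```shell
# A line in the shape "oracleasm querydisk -p <label>" prints for a member disk
line='/dev/sdb1: LABEL="ASM_DISK1" TYPE="oracleasm"'

# Split on ':' and drop the first five characters ("/dev/"), leaving the device
echo "$line" | awk -F: '{print substr($1,6)}'
```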

The next step is to run the shell script and view the get_devno.output file:-

scsi-SATA_VBOX_HARDDISK_VB6f5a5b2b-9d70b284-part1
ata-VBOX_HARDDISK_VB6f5a5b2b-9d70b284-part1
scsi-SATA_VBOX_HARDDISK_VBfe0fe3e5-194821d8-part1
ata-VBOX_HARDDISK_VBfe0fe3e5-194821d8-part1

Ta-dah! One list of disk IDs.

Posted in 11gR2, 12c, ASM, Performance