Thursday, April 12, 2012

Parallel Information


  • The degree of parallelism (DOP) for a query can be specified:
    • With the parallel clause during table/index creation (or later using alter)
    • With a parallel hint in the SQL statement
    • With an 'alter session force parallel query parallel integer' statement
  • If none of the options above has been used to enable parallelism then, regardless of whether the table or index is partitioned, Oracle does not use parallelism in the query.
  • When using the dedicated server connection model (as opposed to the shared server model), the session's shadow process acts as the query coordinator.
  • The query uses as many parallel execution servers as the query optimizer determines, based on a variety of factors.
  • When no parallel degree is specified at the table level, in a query hint, or at the session level, but parallelism is enabled for the session (using alter session force parallel query), the optimizer uses the default degree of parallelism.
  • The default degree of parallelism (DOP) is cpu_count x parallel_threads_per_cpu.
  • If you specified parallelism by issuing an 'alter session force parallel query parallel integer' statement, then the value given for integer is used as the degree of parallelism (if a parallel hint is not specified in the query).
  • All of the above assumes that there are enough query servers available in the pool and that parallel_adaptive_multi_user does not reduce the number of parallel execution servers.
  • If inter-operation parallelism can be used, you can end up using double the number of parallel execution servers as the degree of parallelism.
  • If for some reason your query cannot be allocated enough parallel execution servers (typically when there are multiple sessions and all the parallel execution servers, gated by parallel_max_servers, are currently in use) and parallel_min_percent > 0 (0 is the default, in which case your query still runs, just without parallelism), your query will get an error and you can try again later.
For a good primer on parallel query in Oracle, please read the white paper "Oracle SQL Parallel Execution".
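The three ways of specifying the DOP listed above can be sketched like this (the table name, hint alias, and degree of 8 are just illustrative):

```sql
-- 1) At the object level (applies to future queries against the table):
alter table sales parallel 8;

-- 2) Per statement, with a hint:
select /*+ parallel(s, 8) */ count(*) from sales s;

-- 3) For the whole session:
alter session force parallel query parallel 8;
```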

What Is Being Audited

Here are some little scripts to find that out:



select * from DBA_PRIV_AUDIT_OPTS
union all
select * from DBA_STMT_AUDIT_OPTS;


Also a few audit commands I find handy:

noaudit all;
-(Turns off all statement audit options).

noaudit all privileges;
-(Turns off all privilege audit options).

noaudit all on default;
-(Turns off all default object audit options).

audit drop any table;
-(Adds auditing for any table that is dropped).

A Faster Insert


Question: I need to speed up some slow-running big inserts. What can I do? Would it help if the table was partitioned?

Well, first off, partitioning a table does not make inserts run any faster. Now, if you were trying to improve the performance of deletes, then partitioning would buy you some speed there. So if partitioning isn't the answer, what else can you do to help out your slow inserts?

Are you doing any joins within the insert code to generate the data you are using for the inserts? If you are, then that would be a good place to take a long, hard look, as that can very well be a large part of the issue. It is important to know which tables are being joined together. You might find that you are using a view or two or three within your join conditions; and views are often built on multi-table joins. You could find yourself joining a large number of tables together and not even realize it.

Speaking of joins: you can play with your explain plan (I know you examined one before just throwing some new code out there on the server and letting it run wild consuming resources) by trying things like a sort-merge or maybe a nested-loops join instead of a hash join (which doesn't always seem to be the best choice when doing inserts, as it can execute much more slowly).

Some more basic, first-line things you can do to help with insert speeds are:
1. Ensure an appropriate parallel degree setting for the table.
2. Alter session enable parallel dml.
3. Add the /*+ append */ hint. Note that this hint will increase your insert speed, but it will also increase your table fragmentation over time. Therefore, you will need to keep an eye on your table and rebuild it as needed.
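Putting those steps together, a parallel direct-path insert might look like this (the table names and the degree of 4 are hypothetical):

```sql
alter session enable parallel dml;

insert /*+ append parallel(t, 4) */ into target_tab t
select /*+ parallel(s, 4) */ *
from   source_tab s;

commit;  -- a direct-path insert must be committed before the session can query the table again
```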

I Want A Bigger SGA But My Server Won't Let Me

Well, let's say you want to increase the size of the SGA on one of your instances residing on a Linux box. So you get an updated pfile (create pfile from spfile;), edit it, increasing the sga_max_size and sga_target parameters. Then you update the spfile (create spfile from pfile='$ORACLE_HOME/dbs/init.ora';) and restart the instance. But the new size throws an error each time you try a startup.

If you find yourself here, a quick place to look is the kernel parameter shmall. Many times this tends to be the limiting factor. Find out what your shmall setting is (cat /proc/sys/kernel/shmall). Now sum the values of the sga_target or sga_max_size (whichever is greater) parameter across all the instance(s) you have on this server. Don't worry about the PGA, as it does not factor into the shmall value. Let's say your shmall value is 4194304 and the sum across all your instances is 16 GB, and you attempted to increase one of your instances' SGA by adding 22 more GB (this would have given you a new overall total SGA size of 38 GB). This of course would account for the failure during the startup attempt, as your shmall setting is too small.

Now let's see what you need to do to figure out the correct shmall setting to accommodate your desires:

First you will need to know your page size (getconf PAGE_SIZE); you issue this command and find out your page size is 4096 bytes (4 KB).

Here is the simple math to get you there:

                    (total SGA in GB x 1024 x 1024) / (page size in KB)
                    (38 x 1024 x 1024) / 4 = 9961472

Therefore your new shmall setting will need to be 9961472 to accommodate 38 GB of SGA.
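The same arithmetic, scripted as a quick sanity check (the 38 GB target and 4 KB page size are the values assumed in this example):

```shell
# shmall is expressed in pages: convert the SGA total from GB to KB,
# then divide by the page size in KB
SGA_GB=38     # assumed total SGA across all instances after the increase
PAGE_KB=4     # from: getconf PAGE_SIZE -> 4096 bytes = 4 KB
echo $(( SGA_GB * 1024 * 1024 / PAGE_KB ))   # prints 9961472
```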

Easy As Pie: Changing REDO Logfile Sizes

Below is a quick and to-the-point example of how to change your current logfile sizes.

Log in as sys

Select * from v$log; 
- (The Group# field will tell you what your current highest logfile group number in use is).


Select * from v$controlfile_record_section where type ='REDO LOG';
- (Comparing Records_Used against Records_Total will tell you how many more groups can be added).

Alter database add logfile group 5 '+DGRP_RAID5_01_0000' size 1G reuse;
Alter database add logfile group 6 '+DGRP_RAID5_01_0000' size 1G reuse;
Alter database add logfile group 7 '+DGRP_RAID5_01_0000' size 1G reuse;
Alter database add logfile group 8 '+DGRP_RAID5_01_0000' size 1G reuse;

Or


Alter database add logfile group 5 ('+DGRP_RAID5_01_0000', '+DGRP_RAID5_01_0000') size 1G reuse;
Alter database add logfile group 6 ('+DGRP_RAID5_01_0000', '+DGRP_RAID5_01_0000') size 1G reuse;
Alter database add logfile group 7 ('+DGRP_RAID5_01_0000', '+DGRP_RAID5_01_0000') size 1G reuse;
Alter database add logfile group 8 ('+DGRP_RAID5_01_0000', '+DGRP_RAID5_01_0000') size 1G reuse;



-(Add your new logfile groups).


Select * from v$log;
-(Use the Status field to ensure that none of your old logfile groups are "CURRENT").
-(If any of them are, then use the following as many times as necessary to move the "CURRENT" to a new log).


Alter system switch logfile;

Alter system checkpoint;
-(This will force a checkpoint and write all the current contents of the logs to the appropriate data file(s)).



Alter database drop logfile group 1;
Alter database drop logfile group 2;
Alter database drop logfile group 3;
Alter database drop logfile group 4;
-(This will drop the old logfile groups).


Select * from v$log;
-(To see your handy work).

Tuesday, April 10, 2012

DBD ERROR: OCISessionBegin

Here is a quick workaround you can use to get an instance back up and open for business if you are running into these types of Oracle errors and users cannot log in to the system:


ERROR:
ORA-04031: unable to allocate 4064 bytes of shared memory ("shared
pool","update seq$ set increment$=:...","sga heap(1,0)","kglsim heap")

Or maybe:


ORA-00604: error occurred at recursive SQL level 1
ORA-04031: unable to allocate 2336 bytes of shared memory ("shared pool","unknown object","sga heap(1,1)","ksv work msg") (DBD ERROR: OCISessionBegin)

SQL> ALTER SYSTEM FLUSH SHARED_POOL;

Note: Taking this action will get rid of everything stored in the shared pool. This will affect your hard/soft parse ratios.

This of course will not correct the underlying issue, but it will allow you to get the instance open quickly.  Then you can go looking for the offender(s).
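When you do go hunting for the offender(s), one starting point (just a sketch) is to list the statements holding the most sharable memory in the shared pool:

```sql
-- Top 10 statements by sharable memory
select *
from   (select sharable_mem, sql_id, sql_text
        from   v$sql
        order  by sharable_mem desc)
where  rownum <= 10;
```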

Friday, March 16, 2012

Preparing LUNs for ASM with ASMLIB

A few simple steps for preparing new LUNs for addition to ASM.

Become root.


root@svr01[inst1]:/etc/init.d> oracleasm listdisks
VOL1
VOL10
VOL11
VOL12
VOL13
VOL14
VOL2
VOL3
VOL4
VOL5
VOL6
VOL7
VOL8
VOL9

root@svr01[inst1]:/etc/init.d> oracleasm createdisk VOL15 /dev/dm-34
Writing disk header: done
Instantiating disk: done

root@svr01[inst1]:/etc/init.d> oracleasm createdisk VOL16 /dev/dm-35
Writing disk header: done
Instantiating disk: done

root@svr01[inst1]:/etc/init.d> oracleasm createdisk VOL17 /dev/dm-36
Writing disk header: done
Instantiating disk: done

root@svr01[inst1]:/etc/init.d> oracleasm listdisks
VOL1
VOL10
VOL11
VOL12
VOL13
VOL14
VOL15
VOL16
VOL17
VOL2
VOL3
VOL4
VOL5
VOL6
VOL7
VOL8
VOL9

Monday, March 05, 2012

ASM Map To Disk

Here is a quick and simple way to map your ASM disks to their physical disks.  The examples are from Linux.


SQL> select path from v$asm_disk;

PATH
----------------------------------------------
/dev/oracleasm/disks/VOL3
/dev/oracleasm/disks/VOL7
/dev/oracleasm/disks/VOL4
/dev/oracleasm/disks/VOL8
/dev/oracleasm/disks/VOL5
/dev/oracleasm/disks/VOL9
/dev/oracleasm/disks/VOL10
/dev/oracleasm/disks/VOL11
/dev/oracleasm/disks/VOL12
/dev/oracleasm/disks/VOL6
/dev/oracleasm/disks/VOL13
/dev/oracleasm/disks/VOL14
/dev/oracleasm/disks/VOL1
/dev/oracleasm/disks/VOL2

14 rows selected.

(I used the following to get the Major and Minor numbers to id the disks):

$>:/home/oracle> cd /dev/oracleasm/disks
$>:/dev/oracleasm/disks> ls -ltarF
total 0
drwxr-xr-x 4 root   root       0 Sep 23 22:30 ../
drwxr-xr-x 1 root   root       0 Sep 23 22:30 ./
brw-rw---- 1 oracle dba  253, 30 Mar  5 13:54 VOL8
brw-rw---- 1 oracle dba  253, 32 Mar  5 13:54 VOL7
brw-rw---- 1 oracle dba  253, 31 Mar  5 13:54 VOL4
brw-rw---- 1 oracle dba  253, 33 Mar  5 13:54 VOL3
brw-rw---- 1 oracle dba  253, 19 Mar  5 13:54 VOL2
brw-rw---- 1 oracle dba  253, 22 Mar  5 13:54 VOL14
brw-rw---- 1 oracle dba  253, 23 Mar  5 13:54 VOL13
brw-rw---- 1 oracle dba  253, 27 Mar  5 13:54 VOL10
brw-rw---- 1 oracle dba  253, 20 Mar  5 13:54 VOL1
brw-rw---- 1 oracle dba  253, 28 Mar  5 13:54 VOL9
brw-rw---- 1 oracle dba  253, 24 Mar  5 13:54 VOL6
brw-rw---- 1 oracle dba  253, 29 Mar  5 13:54 VOL5
brw-rw---- 1 oracle dba  253, 25 Mar  5 13:54 VOL12
brw-rw---- 1 oracle dba  253, 26 Mar  5 13:54 VOL11

$>:/home/oracle> ls -l /dev
brw-rw----  1 root root 253,     0 Sep 23 22:29 dm-0
brw-rw----  1 root root 253,     1 Sep 23 22:29 dm-1
brw-rw----  1 root root 253,    10 Sep 23 22:29 dm-10
brw-rw----  1 root root 253,    11 Sep 23 22:29 dm-11
brw-rw----  1 root root 253,    12 Sep 23 22:29 dm-12
brw-rw----  1 root root 253,    13 Sep 23 22:29 dm-13
brw-rw----  1 root root 253,    14 Sep 23 22:29 dm-14
brw-rw----  1 root root 253,    15 Sep 23 22:29 dm-15
brw-rw----  1 root root 253,    16 Sep 23 22:29 dm-16
brw-rw----  1 root root 253,    17 Sep 23 22:29 dm-17
brw-rw----  1 root root 253,    18 Sep 23 22:29 dm-18
brw-rw----  1 root root 253,    19 Sep 23 22:29 dm-19
brw-rw----  1 root root 253,     2 Sep 23 22:29 dm-2
brw-rw----  1 root root 253,    20 Sep 23 22:29 dm-20
brw-rw----  1 root root 253,    21 Sep 23 22:29 dm-21
brw-rw----  1 root root 253,    22 Sep 23 22:29 dm-22
brw-rw----  1 root root 253,    23 Sep 23 22:29 dm-23
brw-rw----  1 root root 253,    24 Sep 23 22:29 dm-24
brw-rw----  1 root root 253,    25 Sep 23 22:29 dm-25
brw-rw----  1 root root 253,    26 Sep 23 22:29 dm-26
brw-rw----  1 root root 253,    27 Sep 23 22:29 dm-27
brw-rw----  1 root root 253,    28 Sep 23 22:29 dm-28
brw-rw----  1 root root 253,    29 Sep 23 22:29 dm-29
brw-rw----  1 root root 253,     3 Sep 23 22:29 dm-3
brw-rw----  1 root root 253,    30 Sep 23 22:29 dm-30
brw-rw----  1 root root 253,    31 Sep 23 22:29 dm-31
brw-rw----  1 root root 253,    32 Sep 23 22:29 dm-32
brw-rw----  1 root root 253,    33 Sep 23 22:29 dm-33
brw-rw----  1 root root 253,     4 Sep 23 22:29 dm-4
brw-rw----  1 root root 253,     5 Sep 23 22:29 dm-5
brw-rw----  1 root root 253,     6 Sep 23 22:29 dm-6
brw-rw----  1 root root 253,     7 Sep 23 22:29 dm-7
brw-rw----  1 root root 253,     8 Sep 23 22:29 dm-8
brw-rw----  1 root root 253,     9 Sep 23 22:29 dm-9

Thx

Friday, February 03, 2012

AIX - Often Missed Parameter

I am part of an excellent DBA team that supports many VLDBs (in the multi-terabyte range) on many different platforms (primarily Unix and Linux). It was reiterated today that many people miss setting the AIXTHREAD_SCOPE parameter in the OS.

This tends to be important because Oracle recommends using system-wide contention scope, which in turn maps Oracle threads to kernel threads statically. This becomes even more critical in an Oracle RAC environment.

Setting the AIXTHREAD_SCOPE parameter in the OS provides the desirable benefit of reducing memory allocation at the Oracle process level in AIX version 5.2 or later.

The AIXTHREAD_SCOPE parameter should be set at the OS level as an environment variable for your Oracle instance owner.

“AIXTHREAD_SCOPE=S”  ...don't forget to export it.
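For example, in the oracle user's profile (the ~/.profile location is the usual default; adjust for your shell):

```shell
# Use system-wide (1:1) contention scope for Oracle processes;
# typically placed in the oracle instance owner's ~/.profile
AIXTHREAD_SCOPE=S
export AIXTHREAD_SCOPE
echo "$AIXTHREAD_SCOPE"   # prints S
```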

PS: Since we are talking about AIX, another parameter you will want to consider setting is an Oracle init one:

filesystemio_options='SETALL'



File-system caching is better suited when your workload tends to be heavily sequential with low write content. To enable caching for a Journaled File System (JFS) you would use the default file-system mount options and set the Oracle init parm filesystemio_options=ASYNCH.

On the other hand, to disable JFS caching and in turn utilize Direct I/O (DIO), which tends to benefit heavily random-access workloads, you would want to set the init parm to filesystemio_options=DIRECT.

Then again, if you want the best of both worlds, Concurrent I/O (CIO), which tends to be better for heavy update workloads, and DIO, then you will want to use the filesystemio_options='SETALL' parameter.

As an afterthought, you may want to consider increasing your init params db_cache_size and db_file_multiblock_read_count. This is because, when you are using CIO and DIO, your file-system buffer cache is not being used.
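To see what the instance is currently set to, a quick check from SQL*Plus:

```sql
-- Check the current I/O settings for the instance
show parameter filesystemio_options
show parameter disk_asynch_io
```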

Monday, January 30, 2012

Arc-Log-Space/RMAN


This is a little script I use for checking archive log space usage:

connect / as sysdba
set linesize 132
set numf 999,999,999,999,999
column name format a32

archive log list;

show parameter db_recovery_file_dest;

SELECT * FROM V$RECOVERY_FILE_DEST;

connect /


Then for manual RMAN backup and removal of log files I use the following:

ORACLE_SID=<your sid>; export ORACLE_SID
rman
connect target /
BACKUP ARCHIVELOG ALL DELETE ALL INPUT;

Thx

Friday, January 27, 2012

Alter My Table

When one of our developers was thinking of executing a couple of alter table statements they asked the following questions:

               (alter table  enable row movement; 
                alter table  shrink space cascade;)

What do I need to consider before executing?  Here are my concerns:

1  Does this command require additional tablespace to shrink the table?
2  Do I need to make sure nothing is hitting the table while running the commands?
3  Is there a better way to accomplish the same thing?
4  Can I expect only a 10% savings in TS as my test on test I performed?
5  What is the max size of table (or other limitation) I should consider before executing the command?

   Here is the response from one of our great DBAs:

1. No, it doesn't, as it uses the same table to move blocks down below the high water mark (HWM).
2. It depends. The whole operation can be broken down into two parts:
   a. The block move-down operation (ALTER TABLE table SHRINK SPACE COMPACT) causes row locks, leaving the whole table unlocked.
   b. Amending the HWM: ALTER TABLE table SHRINK SPACE (CASCADE) requires an exclusive lock on the table for a very short time.
3. Yes. Please try (as long as you have space available) CREATE TABLE table2 AS SELECT * FROM table1, then create the relevant indexes and drop the original table. This inserts all the blocks into the new table sequentially, raising the HWM together with the loaded data.
4. No, there is no rule. DBAs have the Segment Advisor tool, which can give an approximate saving for a particular table.
5. No size limitation, as the whole process is done on the original table, plus some temp space is needed. It also utilizes the PGA (Program Global Area), the memory assigned to a session, plus the I/O utilization. Also, for partitioned tables it would be wiser to shrink partitions instead of the entire table (ALTER TABLE table MODIFY PARTITION partition SHRINK SPACE).
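Putting the advice together, a typical shrink sequence looks like this (my_table is a placeholder name):

```sql
alter table my_table enable row movement;
alter table my_table shrink space compact;   -- move rows below the HWM; row locks only
alter table my_table shrink space cascade;   -- adjust the HWM; brief exclusive lock
```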


Wednesday, January 25, 2012

Twin UID's

Just today our DBA support group came across this little feller: one of our Oracle 11gR2 instances running on Linux using LDAP suddenly stopped allowing users who were logging in via PuTTY to execute CTAS statements or SQL*Loader. Oddly, however, they could use SQL*Plus with no issues. After troubleshooting the issue, it was found that an LDAP group had been created with the same ID as Oracle's. Once the new group received a unique ID, the problem went away.