Thursday, April 12, 2012

Parallel Information


  • The degree of parallelism (DOP) for a query can be specified in one of three ways (see the examples after this list)
    • By including the parallel clause when the table/index is created (or later using alter)
    • By using a parallel hint in the SQL statement
    • By using an 'alter session force parallel query parallel integer' statement
  • If none of the above options has been used to enable parallelism then, regardless of whether the table or index is partitioned, Oracle does not use parallelism in the query
  • When using the dedicated server connection model (as opposed to the shared server model), the session's shadow process acts as the query coordinator.
  • The query uses as many "parallel execution servers" as the query optimizer determines, based on a variety of factors.
  • When no parallel degree is specified at the table level, in a query hint, or at the session level, but parallelism is enabled at the session level (using alter session force parallel query), the optimizer uses the default degree of parallelism.
  • The default DOP is cpu_count x parallel_threads_per_cpu.
  • If you specified parallelism by issuing an 'alter session force parallel query parallel integer' statement, then the value given for integer is used for the degree of parallelism (if a parallel hint is not specified in the query).
  • All of the above statements assume that there are enough query servers available in the pool and that parallel_adaptive_multi_user does not reduce the number of parallel execution servers.
  • If inter-operation parallelism can be used, then you could end up using double the degree of parallelism in parallel execution servers.
  • If your query cannot be allocated enough parallel execution servers (typically when there are multiple sessions and all the parallel execution servers, capped by parallel_max_servers, are in use) and parallel_min_percent > 0 (0 is the default, so your query will still run, just with reduced or no parallelism), your query will get an error and you can try again later.
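
A minimal sketch of the three methods, using a hypothetical table named big_table:

alter table big_table parallel 4;
-(Sets a parallel degree on the table itself).

select /*+ parallel(b, 4) */ count(*) from big_table b;
-(Requests parallelism for just this statement via a hint).

alter session force parallel query parallel 4;
-(Forces a DOP of 4 for every query the session runs from here on).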
For a good primer on parallel query in Oracle, please read the white paper Oracle SQL Parallel Execution.

What Is Being Audited

Here are some little scripts to find that out:



select * from DBA_PRIV_AUDIT_OPTS
union all
select * from DBA_STMT_AUDIT_OPTS;
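
Object-level audit options live in a third view, DBA_OBJ_AUDIT_OPTS; its column layout differs from the two above (one column per audit option), so query it on its own:

select * from DBA_OBJ_AUDIT_OPTS;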


Also a few audit commands I find handy:

noaudit all;
-(Turns off all statement audit options).

noaudit all privileges;
-(Turns off all privilege audit options).

noaudit all on default;
-(Turns off all default object audit options).

audit drop any table;
-(Audits use of the DROP ANY TABLE privilege).
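
To confirm an audit option took effect, re-run the query from the top of this section; for example, after the audit statement above:

select * from DBA_PRIV_AUDIT_OPTS where privilege = 'DROP ANY TABLE';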

A Faster Insert


Question: I need to speed up some slow-running big inserts, what can I do? Would it help if the table was partitioned?

Well first off, partitioning a table does not make inserts run any faster. Now if you were trying to improve the performance of deletes, then partitioning would buy you some speed there (see the sketch below). So if partitioning isn't the answer, what else can you do to help out your slow inserts?
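
A quick sketch of why partitioning helps deletes, assuming a hypothetical range-partitioned sales table: dropping a whole partition is a single dictionary operation, while the equivalent delete must visit every row and generate undo and redo for each.

alter table sales drop partition p_2011;
-(Fast: one DDL statement removes all of that partition's rows).

delete from sales where sale_date < date '2012-01-01';
-(Slow: row-by-row delete of the same data).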

Are you doing any joins within the insert code to generate the data you are using for the inserts? If you are, then that would be a good place to take a long, hard look, as it can very well be a large part of the issue. It is important to know exactly which tables are being joined together. You might find that you are using a view or two or three within your join conditions, and views are often built on multi-table joins. You could find yourself joining a large number of tables together and not even realize it.

Speaking of joins: you can play with your explain plan (I know you executed one before just throwing some new code out there on the server and letting it run wild consuming resources) by trying things like a sort merge or maybe a nested loops join instead of a hash join (which doesn't always seem to be the best choice when feeding inserts, as it tends to execute much slower).
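
A sketch of how you might test an alternate join method, using hypothetical tables orders and customers (use_nl and use_merge are the standard Oracle join-method hints):

explain plan for
insert into order_history
select /*+ use_nl(o c) */ o.order_id, c.cust_name
from orders o, customers c
where c.cust_id = o.cust_id;

select * from table(dbms_xplan.display);
-(Shows which join method the optimizer actually chose; swap in use_merge to compare).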

Some more basic, first-line things you can do to help with insert speed (a combined sketch follows the list):
1. Ensure an appropriate parallel degree setting for the table.
2. Alter session enable parallel dml.
3. Add the /*+ append */ hint. Note that this hint will increase your insert speed, but it will also increase your table fragmentation over time. Therefore, you will need to keep an eye on your table and rebuild it as needed.
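Putting all three together, again with the hypothetical big_table and a staging_table as the source:

alter table big_table parallel 4;
alter session enable parallel dml;

insert /*+ append */ into big_table
select * from staging_table;
commit;
-(A direct-path insert must be committed before the session can read the table again).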

I Want A Bigger SGA But My Server Won't Let Me

Well let's say you want to increase the size of the SGA on one of your instances residing on a Linux box. So you get an updated pfile (create pfile from spfile;), edit it, increasing the sga_max_size and sga_target parameters. Then you update the spfile (create spfile from pfile='$ORACLE_HOME/dbs/init.ora';) and restart the instance. But the new size throws an error each time you try a startup.
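
The whole sequence, as a sketch (the init.ora path is the generic default; adjust it for your instance name):

create pfile from spfile;
-(Edit $ORACLE_HOME/dbs/init.ora, raising sga_max_size and sga_target).
create spfile from pfile='$ORACLE_HOME/dbs/init.ora';
shutdown immediate
startup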

If you find yourself here, a quick place to look is the kernel parameter shmall; many times this tends to be the limiting factor. Find out what your shmall setting is (cat /proc/sys/kernel/shmall). Now sum the values of the sga_target or sga_max_size (whichever is greater) parameter across all the instances you have on this server. Don't worry about the PGA, as it does not factor into the shmall value. Let's say your shmall value is 4194304 and the sum across all your instances is 16 GB, and you attempted to increase one of your instances' SGAs by 22 more GB (which would have given you a new overall total SGA size of 38 GB). This of course would account for the failure during the startup attempt: shmall is measured in pages, and 4194304 pages at 4 KB each covers only 16 GB, so your shmall setting is too small.
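
To get each instance's contribution to that sum, you can connect to each one in turn and check; a sketch:

show parameter sga_max_size
select round(sum(value)/1024/1024/1024, 1) as sga_gb from v$sga;
-(Total SGA actually allocated by this instance, in GB).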

Now let's see what you need to do to figure out the correct shmall setting to accommodate your desires:

First you will need to know your page size (getconf PAGE_SIZE); say you issue this command and find your page size is 4096 bytes (4 KB).

Here is the simple math to get you there (shmall is measured in pages, so convert the total SGA from GB to KB and divide by the 4 KB page size):

(total SGA in GB x 1024 x 1024) / 4
(38 x 1024 x 1024) / 4 = 9961472

Therefore your new shmall setting will need to be 9961472 to accommodate 38 GB of SGA. (To make the change survive a reboot, also set kernel.shmall in /etc/sysctl.conf.)

Easy As Pie: Changing REDO Logfile Sizes

Below is a quick, to-the-point example of how to change your current logfile sizes.

Log in as sys

Select * from v$log; 
- (The Group# field will tell you your current highest logfile group number in use).


Select * from v$controlfile_record_section where type ='REDO LOG';
- (Comparing the Records_Used against the Records_Total will tell you how many more groups can be added).

Alter database add logfile group 5 '+DGRP_RAID5_01_0000' size 1G reuse;
Alter database add logfile group 6 '+DGRP_RAID5_01_0000' size 1G reuse;
Alter database add logfile group 7 '+DGRP_RAID5_01_0000' size 1G reuse;
Alter database add logfile group 8 '+DGRP_RAID5_01_0000' size 1G reuse;

Or


Alter database add logfile group 5 ('+DGRP_RAID5_01_0000',
                                    '+DGRP_RAID5_01_0000') size 1G reuse;
Alter database add logfile group 6 ('+DGRP_RAID5_01_0000',
                                    '+DGRP_RAID5_01_0000') size 1G reuse;
Alter database add logfile group 7 ('+DGRP_RAID5_01_0000',
                                    '+DGRP_RAID5_01_0000') size 1G reuse;
Alter database add logfile group 8 ('+DGRP_RAID5_01_0000',
                                    '+DGRP_RAID5_01_0000') size 1G reuse;



-(Add your new logfile groups).


Select * from v$log;
-(Use the Status field to ensure that none of your old logfile groups are "CURRENT").
-(If any of them are, then use the following as many times as necessary to move the "CURRENT" to one of the new groups).


Alter system switch logfile;

Alter system checkpoint;
-(This will force a checkpoint and write all the current contents of the logs to the appropriate data file(s)).
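
Select group#, status from v$log;
-(One extra check worth doing: wait, or checkpoint again, until the old groups all show "INACTIVE"; dropping a group that is still "ACTIVE" will fail).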



Alter database drop logfile group 1;
Alter database drop logfile group 2;
Alter database drop logfile group 3;
Alter database drop logfile group 4;
-(This will drop the old log file groups).


Select * from v$log;
-(To see your handiwork).

Tuesday, April 10, 2012

DBD ERROR: OCISessionBegin

Here is a quick workaround you can use to get an instance back up and open for business if you are running into these types of Oracle errors and users cannot log in to the system:


ERROR:
ORA-04031: unable to allocate 4064 bytes of shared memory ("shared
pool","update seq$ set increment$=:...","sga heap(1,0)","kglsim heap")

Or maybe:


ORA-00604: error occurred at recursive SQL level 1
ORA-04031: unable to allocate 2336 bytes of shared memory ("shared pool","unknown object","sga heap(1,1)","ksv work msg") (DBD ERROR: OCISessionBegin)

SQL> ALTER SYSTEM FLUSH SHARED_POOL;

Note: Taking this action will get rid of everything stored in the shared pool, so statements will have to be hard parsed again; this will affect your hard/soft parse ratios.

This of course will not correct the underlying issue, but it will allow you to get the instance open quickly.  Then you can go looking for the offender(s).
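
A hedged starting point for the hunt: v$sgastat shows where the shared pool memory has gone, and sorting v$sql by sharable_mem points at the statements holding the most of it.

select * from (select name, bytes from v$sgastat where pool = 'shared pool' order by bytes desc) where rownum <= 10;

select sql_id, sharable_mem, sql_text from (select sql_id, sharable_mem, sql_text from v$sql order by sharable_mem desc) where rownum <= 10;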