Saturday, December 5, 2015

Exadata: Calculating Offload Efficiency

Calculating offload efficiency for a SQL statement
Querying the cell I/O statistics helps us understand the Smart Scan behavior of individual SQL statements.
First, find from v$sql the cursor value (sql_id) of the SQL statement whose offload efficiency we want to calculate.


Use the following query to calculate the offload efficiency for each cursor (SQL statement):
select    
sql_id,
 (case when io_cell_offload_eligible_bytes = 0 then 0
    else 100*(1-(io_interconnect_bytes/io_cell_offload_eligible_bytes))
 end) offload1,
 (case when phybytes = 0 then 0
    else 100*(1-(io_interconnect_bytes/phybytes))
 end) offload2
from (
 select  sql_id,
         physical_read_bytes+physical_write_bytes phybytes,
         io_cell_offload_eligible_bytes,
         io_interconnect_bytes
    from v$sql
    where sql_id='&&sqlid');

offload1 (estimated offload efficiency %) = 100 * (1 - bytes sent over the storage interconnect / bytes eligible for predicate offload)
offload2 (true offload efficiency %) = 100 * (1 - bytes sent over the storage interconnect / total physical read and write bytes for the SQL cursor)
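
As a quick worked example with made-up numbers (purely hypothetical, not from a real system): if a cursor had 40 GB eligible for predicate offload and only 5 GB came back over the interconnect, the estimated offload efficiency works out to 87.5%.

-- hypothetical values: 40 GB eligible, 5 GB returned over the interconnect
select round(100*(1 - 5/40), 2) as est_offload_pct from dual;
-- returns 87.5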


Calculating offload efficiency for a session:


select ic_bytes.sid,
            (case when offload_elig.value = 0 then 0
               else 100*(1-(ic_bytes.value/offload_elig.value))
            end) offload1,
            (case when totbytes.value = 0 then 0
               else 100*(1-(ic_bytes.value/totbytes.value))
            end) offload2
              from
               (select sess.sid,stat.name,sess.value
                from   v$sesstat sess, v$statname stat
                where  sess.sid='&&sid' and sess.statistic#=stat.statistic#
                and    stat.name='cell physical IO interconnect bytes') ic_bytes,
               (select sess.sid,stat.name,sess.value
                from v$sesstat sess, v$statname stat
                where  sess.sid='&&sid' and sess.statistic#=stat.statistic#
                and    stat.name='cell physical IO bytes eligible for predicate offload')
                    offload_elig,
               (select sess.sid,sum(sess.value) value
                from v$sesstat sess, v$statname stat
                where  sess.sid='&&sid' and sess.statistic#=stat.statistic#
                and    stat.name in ('physical read bytes','physical write bytes')
                group by sess.sid) totbytes
where ic_bytes.sid=offload_elig.sid
and ic_bytes.sid=totbytes.sid; 

---Nikhil Tatineni---
---Exadata---

Exadata: Smartscan


Smart Scan is a cell offload feature. Without it, a query against a large table (millions of rows) can force a huge volume of data, say 100 GB, to be shipped from the storage servers to the database servers, which means a lot of I/O between the database and storage layers. Storage indexes created on the storage servers and the flash cache (capable of delivering on the order of 50 GB/sec) help, but the data still has to move across the network from the storage layer to the database layer. With Smart Scan, only the relevant bytes are loaded into database memory from the storage layer.

The database server sends the query details to the storage cells via a protocol known as iDB (Intelligent Database). With this added intelligence about the query, the Exadata storage cells can search the storage disks and send only the relevant bytes, not all of the database blocks, to the database nodes; hence the term Smart Scan. Full table scans with supported functions and operators (such as =, >, and so on) in predicates, and index fast full scans, can use Smart Scan.

How to enable Smart Scan?
Smart Scan can be controlled on an Exadata machine by setting the cell_offload_processing parameter at the database or session level.

Controlling offload behavior at the database level
Disable or enable cell offload for all sessions on the database:
alter system set cell_offload_processing=false; 
alter system set cell_offload_processing=true;
At session level,
alter session set cell_offload_processing=false; 
alter session set cell_offload_processing=true; 

We can measure the amount of I/O saved at the system-wide (database) level using the following query:
SQL> select  inst.instance_name,
        b.name,
        a.value/1024/1024/1024 value
from    gv$sysstat a, gv$statname b, gv$instance inst
where   a.statistic# = b.statistic#
and     b.name in
                 ('cell physical IO bytes eligible for predicate offload',
                        'cell physical IO interconnect bytes',
                'cell physical IO interconnect bytes returned by Smart Scan')
and     inst.inst_id=a.inst_id
and     inst.inst_id=b.inst_id
order by 1,2;

At the session level:
select stat.name, round(sess.value/1024/1024/1024,2) value
from   v$mystat sess, v$statname stat
where  stat.statistic# = sess.statistic#
and    stat.name in ('cell physical IO bytes eligible for predicate offload',
                     'cell physical IO interconnect bytes',
                     'cell physical IO interconnect bytes returned by Smart Scan')
order by 1;

cell physical IO bytes eligible for predicate offload - the number of bytes of I/O that were eligible for Smart Scan (predicate offload), i.e. the I/O that could be saved by cell offload processing.
cell physical IO interconnect bytes - the number of bytes actually transferred over the storage interconnect between the storage cells and the database layer.
cell physical IO interconnect bytes returned by Smart Scan - the number of interconnect bytes returned by Smart Scan.
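
To turn these counters into a single savings figure for the current session, here is a small sketch that reuses the same v$mystat statistics shown above:

-- rough Smart Scan savings % for the current session
-- 100 * (1 - interconnect bytes / eligible bytes); only meaningful when eligible bytes > 0
select round(100*(1 - ic.value/elig.value), 2) smart_scan_savings_pct
from  (select s.value from v$mystat s, v$statname n
       where s.statistic# = n.statistic#
       and   n.name = 'cell physical IO interconnect bytes') ic,
      (select s.value from v$mystat s, v$statname n
       where s.statistic# = n.statistic#
       and   n.name = 'cell physical IO bytes eligible for predicate offload') elig
where elig.value > 0;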

--Nikhil Tatineni--
--Exadata--
--12c--

12C: parameter: temp_undo_enabled


In 11g, whenever temporary tables are updated, both undo and redo are generated: the undo tablespace holds the rollback data and the online redo logs store the change vectors, so the changes are logged in both undo and redo.

In 12c, when the parameter temp_undo_enabled is set, the undo generated by temporary objects is written to the TEMP tablespace rather than to the undo tablespace (and no redo is generated for that undo), which helps database performance.


"temp_undo_enabled" is dynamic parameter, can be enabled  and disabled  as follows at database level or at session level

sql> alter session set temp_undo_enabled=true;
Session altered.

SQL> alter system set temp_undo_enabled=false;
System altered.

-- When the temp_undo_enabled parameter is enabled and an application or user loads into or updates global temporary tables, no redo is generated for the DML performed on those global temporary tables (see the sketch below).
-- Whenever you enable this parameter, consider increasing the size of the temporary tablespace.
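
A minimal sketch to verify this, assuming a scratch schema where we can create a hypothetical global temporary table called gtt_demo; the idea is to compare the session 'redo size' statistic before and after the load:

create global temporary table gtt_demo (id number, txt varchar2(100))
on commit preserve rows;

alter session set temp_undo_enabled = true;

-- note the session redo size before the load
select s.value redo_bytes
from   v$mystat s, v$statname n
where  s.statistic# = n.statistic#
and    n.name = 'redo size';

insert into gtt_demo
select level, rpad('x', 100, 'x') from dual connect by level <= 10000;
commit;

-- re-run the redo size query; with temp undo enabled the increase should be minimal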


NOTE: To use this parameter, the database compatible setting must be greater than or equal to 12.0.0.

We can monitor temp undo usage through the view v$tempundostat.
The statistics in this view are updated every 10 minutes.
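
For example, a quick look at recent temp undo activity (a sketch; the column names are as I recall them from the 12c documentation, so verify with DESC V$TEMPUNDOSTAT in your environment):

-- TXNCOUNT = transactions that used temp undo, UNDOBLKCNT = temp undo blocks consumed
select begin_time, end_time, txncount, undoblkcnt
from   v$tempundostat
order by begin_time;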

To avoid snapshot too old errors, use v$undostat to tune the undo_retention parameter on the database.

--Thank you--
--Nikhil Tatineni--
--12c-

Sunday, November 29, 2015

Exadata: Hybrid Columnar Compression

Hybrid Columnar Compression (HCC) enables us to implement different levels of data compression, providing cost savings at the storage level and performance improvements due to reduced I/O when accessing large tables. When we implement HCC, table data is organized and stored in compression units. Within a compression unit the data is stored in columnar format and then compressed; similar values inside the compression unit are grouped together, which improves the expected compression ratio.
 
Two types of  Hybrid Columnar Compression

1) Query-optimized compression (warehouse compression) is aimed at large data workloads such as data warehouses. Since data warehouse environments hold historical data, this compression saves storage and helps analytic performance.
2) Archive compression provides the highest degree of compression and is targeted at rarely accessed data (for example, an old partition of a table) that must be kept online.

On OLTP systems, Hybrid Columnar Compression can be used to compress older, less active data, while newer, more active and update-intensive data can be compressed using Advanced Row Compression, as sketched below.
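
As a sketch of that split (hypothetical table and partition names; the Advanced Row Compression clause shown is the 12c ROW STORE COMPRESS ADVANCED syntax, equivalent to COMPRESS FOR OLTP in 11g):

-- older, rarely touched partition: HCC archive compression
alter table sales modify partition sales_2010 compress for archive high;

-- current, update-intensive partition: Advanced Row Compression
alter table sales modify partition sales_2015 row store compress advanced;

Note that changing the compression attribute of a partition only affects newly loaded data; to compress the rows already in the partition, move it with the desired compression clause (alter table ... move partition ...).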


For example, we can specify different HCC compression types while creating a table.
The HCC compression clauses available are:

compress for query low;
compress for query high;
compress for archive low;
compress for archive high;

For example, we can create a range-partitioned table and apply a different HCC level to each partition, as shown below:

create table world (id number(10))
partition by range (id)
(partition p1 values less than (100000) nocompress,
partition p2 values less than (200000) compress for archive low,
partition p3 values less than (300000)  compress for query high,
partition p4 values less than (maxvalue) compress for query low)
enable row movement;


The following query displays the compression type implemented on each partition of the table:
select partition_name, compression, compress_for from dba_tab_partitions where table_name like 'WORLD';
 


From the output above we can confirm that different HCC levels can be applied per partition, providing cost savings at the storage level and improving the performance of SQL queries.

Let's say one partition of the table is no longer used by the application; we can compress it with the HCC option "compress for archive high":

sql> alter table  world modify partition p4 compress for archive high; 

HCC is best suited to OLAP / data warehousing environments.
For OLTP we can use the row compression techniques managed by ADO (Automatic Data Optimization) introduced in 12c :)

---Nikhil Tatineni--
---Exadata---

Friday, November 27, 2015

Exadata: Storage index

Data is transferred between the storage servers and the database servers. Exadata is meant to deliver performance, so the amount of data that crosses this path should be reduced by Oracle. For this purpose Oracle implemented a new feature called the storage index.

Whenever a table is queried with different ranges of values, storage indexes are built on the storage servers automatically; no additional setup is required. A storage index keeps the minimum and maximum values of columns for each storage region within the cells. The amount of I/O saved by storage indexes can be seen with the following query.

Query to check I/O bytes saved by storage Index

col stat_value format 9,999.9999
select n.name, s.value/1024/1024 as stat_value
from   v$mystat s, v$statname n
where  s.statistic# = n.statistic#
and    n.name in (
       'cell physical IO bytes saved by storage index',
       'cell physical IO interconnect bytes returned by smart scan'
)
/

_kcfis_storageidx_disabled is a hidden parameter that controls whether storage indexes are used on the Exadata storage servers. By default it is false; if you set it to true, storage indexes are disabled.
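
For testing only, a sketch of toggling it at the session level (hidden underscore parameters should not be changed on production systems without Oracle Support guidance):

-- disable storage indexes for this session (testing only)
alter session set "_kcfis_storageidx_disabled" = true;

-- re-enable them
alter session set "_kcfis_storageidx_disabled" = false;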

--Nikhil Tatineni--
--Exadata--

Exadata: Smart Flash Logging



"Exadata Smart Flash Logging” uses flash storage in the storage server to help database  to avoid latency writing to online redo logs. This small amount of flash memory temporary record the online redologs on flashlog created on flashdisks 

cellsrv manages I/O processing on the Exadata storage servers. The Oracle database has online redo logs; whenever a transaction is committed, the log writer writes the data from the redo log buffer to the online redo logs, which reside on the storage servers. The database interacts with the Exadata server process cellsrv to write the data to the online redo logs, and cellsrv notifies the database as soon as the request is completed.

On the storage server there are 4 flash PCI cards, each consisting of 4 flash disks. We can check the flash disks on any Exadata storage server with the following command:
cellcli> list celldisk where disktype=flashdisk

Once we have the list of flash-based cell disks on the storage server, we can create a flash log on a single cell disk:
cellcli> create flashlog celldisk='<celldisk_name>', size=<size>

The following commands display the flash log available on the server:
cellcli> list flashlog
cellcli> list flashlog detail 

Or we can create the flash log on all storage cells. To do this we need to drop the existing flash cache first, create the flash log, and then recreate the flash cache:
cellcli> drop flashcache
cellcli> create flashlog all
cellcli> create flashcache all
cellcli> list flashlog detail 

References:

Exadata-Storage-Architecture



Each storage server has 12 physical hard disks and 16 flash disks by default.
When a hard disk is discovered, the operating system determines the device driver needed and creates a block device, a LUN, for application access. The hard disks act as the first layer under the LUNs, and cell disks are created from the LUNs available on the server.
    Physical Disk > LUN > Cell Disk > Grid Disk > ASM Disk

Cell disks are created from the physical disks.
We can list the physical disks using CellCLI; the following command displays the hard disks and flash disks attached to the storage server:
cellcli> list physicaldisk

LUN (Logical Unit Number)

A LUN is automatically created on each physical disk.
# On the storage server, issue the command lsscsi to list all LUNs.
These LUNs are used to create cell disks. The first 2 disks contain the system area; we can see this with fdisk by listing the partitions on the first 2 physical disks:
# fdisk -l /dev/sda
# fdisk -l /dev/sdb

CellCLI> list lun attributes name, deviceName, isSystemLun, physicalDrives, lunSize
(This output shows both the physical disks and the flash disks available on the server.)

Using the physical hard disks and flash disks, cell disks are created with CellCLI:
cellcli> create celldisk all   (creates cell disks for both the physical hard disks and the flash disks)
cellcli> list celldisk   (displays all cell disks on the storage server)

Using the cell disks, we can create grid disks with CellCLI as follows:
cellcli> create griddisk <GroupName> celldisk = <Cell-Disk-Name>, size =< >

The following command displays all grid disks available on the storage server:
cellcli> list griddisk 
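
On the database side, a quick sketch to confirm which grid disks ASM is actually using (run from the ASM instance; Exadata grid disks normally show up with an o/<cell-ip>/<griddisk-name> path):

select name, path, state, total_mb
from   v$asm_disk
order by path;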

--Nikhil Tatineni--
--Exadata--
        

Exadata-Hardware-X3-2



The X3-2 hardware is as follows.
Try to understand each hardware component before going into detail about Exadata.



X3-2 Half Rack
The X3-2 is built from slim Sun Server X3-2 boxes (formerly known as the Sun Fire X4170 M3) with Xeon E5-2690 processors.

4 Database Servers (Oracle Linux 5.8) in the X3-2 Half Rack
Each X3-2 database node has 2 x 8-core Intel Xeon E5-2690 processors
256 GB RAM per database server
4 x 300 GB 10,000 RPM disks
2 x QDR (40 Gb/sec) InfiniBand ports
4 x 1 Gb Ethernet ports (copper)
2 x 10 Gb Ethernet ports (optical)

7 Exadata Storage Servers in the X3-2 Half Rack
Each storage server has four 2.5-inch 200 GB SAS solid state disks (for database redo logs)
In total, 7 x 4 = 28 PCI flash cards provide 11.2 TB of Exadata Smart Flash Cache
Each storage server has 12 x 2.5-inch 600 GB SAS-2 high-performance hard drives (7.2 TB of raw capacity per cell)
84 CPU cores for SQL processing
3 x 36-port QDR (40 Gb/sec) InfiniBand switches


Reference:
http://www.oracle.com/technetwork/database/exadata/exadata-dbmachine-x4-2-ds-2076448.pdf

--Nikhil Tatineni--

--Exadata--

Thursday, November 26, 2015

12c: In Memory Column Store

In 11g, a relational table is kept in memory as a row store: rows are stored with their rowids in row format.
In 12c, a relational table can additionally be stored in memory in a column store.

Column representation:
At the space-management level, the on-disk representation stays the same (segment blocks on disk, in row format). When we implement the In-Memory feature, an IM column store is created in the SGA. Data is loaded into In-Memory Compression Units (IMCUs), which are created automatically, and for each IMCU a transaction journal is also created automatically. Within an IMCU each column is stored separately, so the chunks can be scanned at maximum speed by the CPU: the values of the predicate columns in an IMCU are scanned in very few CPU cycles, which makes these chunks eligible for fast processing. An IMCU can store up to about 1 million rows at run time, depending on the table's size and structure. Each IMCU also stores the minimum and maximum values for each column's range in memory, where the range corresponds to the predicate values in the WHERE clause.




When a table is loaded into memory, its IMCUs and transaction journal are maintained automatically. Whenever a value is updated in the buffer cache and the transaction is committed, the transaction journal in the IM store is updated and the corresponding entries in the IMCU are marked stale. The In-Memory coordinator background process refreshes the changed values from the transaction journal back into the IMCUs based on internal thresholds, for example when the journal runs low on memory, or roughly every 2 minutes.

Implementation: 
The In-Memory column store is a good fit when the application scans a large number of rows, queries a subset of columns in a table, joins small tables to big tables, or queries dimension tables and joins them to fact tables.

Set the following parameter on the database:
sql> alter system set inmemory_size=200M scope=spfile;
(The minimum size is 100 MB per the Oracle documentation, and changing inmemory_size requires an instance restart.)
The new view associated with the In-Memory area is "v$inmemory_area".
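
For example, a quick sketch against this view to see how the In-Memory area is carved up (verify the column list in your release):

select pool, alloc_bytes, used_bytes, populate_status
from   v$inmemory_area;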

Using the In-Memory option, tablespaces and tables will be loaded into the IM store automatically after enabling them with the following commands.
--Note: if you enable it at the tablespace level, all tables and materialized views in that tablespace will be loaded into the IM store by default.

Syntax:
sql> alter table <table_name> inmemory;
sql> alter tablespace <tablespace_name> default inmemory;

We can prioritize the tables loaded into the IM store by setting different priority levels:

sql> alter table <table_name> inmemory priority critical;

Critical - when the database is opened, the table is loaded into the IM store automatically
High - if space is available after all the critical priority objects are loaded, the high priority tables are loaded into the IM store
Medium - objects are loaded after all critical and high priority objects are loaded into the IM store
Low - objects are loaded after all critical, high, and medium priority objects are loaded into the IM store
None - objects are loaded only when they are queried for the first time and space is available in the IM store

Note - by default it is None
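
To check what has actually been populated, here is a sketch against v$im_segments (column names as I recall them; verify with DESC V$IM_SEGMENTS):

select segment_name, inmemory_priority, populate_status, bytes_not_populated
from   v$im_segments;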

Exceptions:
-- Objects owned by the SYS user and objects stored in the SYSTEM and SYSAUX tablespaces cannot be stored in the IM store
-- We cannot load IOTs (index-organized tables)
-- LONGs and LOBs are not supported in the IM store

---Nikhil Tatineni---
---Oracle in memory--- 



Thursday, November 12, 2015

12c: Oracle Goldengate wallet

Hi Guys,

I found a very cool feature in GoldenGate 12c: creating a password wallet to avoid putting the password in the parameter file of a GoldenGate process. When we implement the wallet, we add the GoldenGate user and password to an encrypted wallet credential store, and we can then use an alias instead of the username and password in the GoldenGate parameter files.

Example of paramfile without wallet credential alias 
extract ENTNEW
SETENV (ORACLE_SID=oval007p)
SETENV (ORACLE_HOME="/u01/app/oracle/product/12.1.0/dbhome_1")
userid ggsuser, password oracle
exttrail /u01/app/trails/ba
reportcount every 10 minutes, rate
Table universe.world

In the above parameter file we are using the GoldenGate username and password in clear text; our objective is to remove these user credentials from the GoldenGate parameter file.

Implementation plan for creating wallet and adding a credential store to wallet
step1: create wallet
step2: add goldengate user to the credential store 
Step3: validate using alias 

Step1: 
[oracle@tnc61 goldengate]$ ./ggsci
Oracle GoldenGate Command Interpreter for Oracle
Version 12.1.2.1.0 OGGCORE_12.1.2.1.0_PLATFORMS_140727.2135.1_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Aug  7 2014 10:21:34
Operating system character set identified as UTF-8.
Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.


GGSCI (tnc61.ffdc.com) 1> create wallet
Created wallet at location 'dirwlt'.
Opened wallet at location 'dirwlt'.

Step2:
GGSCI (tnc61.ffdc.com) 2> add credentialstore
Credential store created in ./dircrd/.

GGSCI (tnc61.ffdc.com) 3> info credentialstore
Reading from ./dircrd/:
No information found in credential store.

GGSCI (tnc61.ffdc.com) 9> alter credentialstore add user ggsuser password oracle alias ggs_user
Credential store in ./dircrd/ altered.

syntax:
alter credentialstore add user <goldengateuser> password <passwordofgguser> alias <aliasname>

Step3:
GGSCI (tnc61.ffdc.com) 10> dblogin useridalias ggs_user;
Successfully logged into database.

Finally, we can replace the GoldenGate user credentials with the alias. The GoldenGate process parameter file with the alias:

ggsci> view param entnew
extract ENTNEW
SETENV (ORACLE_SID=oval007p)
SETENV (ORACLE_HOME="/u01/app/oracle/product/12.1.0/dbhome_1")
useridalias ggs_user
exttrail /u01/app/trails/ba
logallsupcols
updaterecordformat compact
reportcount every 10 minutes, rate
Table universe.world

--Nikhil Tatineni--
--12c Goldengate--



Wednesday, November 11, 2015

Delete Unused Process in Goldengate

Scenario: When decommissioning GoldenGate, we need to purge the unused processes. To decommission a GoldenGate process, we need to perform a dblogin and delete the process from the GoldenGate home.

For example, decommissioning a replicat from the GoldenGate home (the replicat should be stopped before it is deleted):
GGSCI (tnc61.ffdc.com) 8> dblogin USERID GGSUSER, PASSWORD oracle
Successfully logged into database.

GGSCI (tnc61.ffdc.com as GGSUSER@bond07p) 9> delete REORD117
Deleted REPLICAT REORD117.


----Nikhil Tatineni--
----Oracle In-memory--





Migrating table between tablespaces


Hi Guys, Welcome back


Scenarios for migrating a table to a different tablespace:

  • To reorganize the table
  • The filesystem is full, so the table needs to move to a different tablespace

Follow these steps:


STEP 1: Identify the tables and objects to move to the new tablespace
STEP 2: Find the size of each object to migrate
STEP 3: Create the new tablespace, sized according to step 2
STEP 4: Decide the best strategy to move the tables to the tablespace created in step 3
STEP 5: Decide when to implement it

The best strategy is as follows:

a) move table from one tablespace to another tablespace
sql> alter table <Table Name> move tablespace <Tablespace_Name>;

When we move a table to another tablespace, the indexes on that table become UNUSABLE and need to be rebuilt. If we want to distribute the data across tablespaces, we can rebuild each index into a different tablespace (a query to find the affected indexes follows below).

sql> alter index <index_name> rebuild tablespace <tablespace_name>;
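
A quick sketch to find which indexes need rebuilding after the move (substitute your own table name; WORLD here is just a placeholder):

select owner, index_name, status
from   dba_indexes
where  table_name = 'WORLD'
and    status = 'UNUSABLE';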

Note ---- Don't forget to collect statistics on the table afterwards. I recommend creating histograms on large-volume tables; a sketch of the call is shown below.
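
A minimal sketch of the stats gathering call, assuming a hypothetical schema UNIVERSE and table WORLD:

begin
  dbms_stats.gather_table_stats(
    ownname    => 'UNIVERSE',                   -- hypothetical schema
    tabname    => 'WORLD',                      -- hypothetical table
    method_opt => 'FOR ALL COLUMNS SIZE AUTO',  -- let Oracle decide where histograms help
    cascade    => true);                        -- gather index stats as well
end;
/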

----Thank you --
----Nikhil Tatineni---
----Oracle In-memory---


OGG:Replicat with Handlecollisions

Hey Guys, 

I tested the handlecollisions parameter in a replicat and performed DML on the source database.
The results are as follows:

Scenario 1:
Replicat with no error-handling parameters
Test results: the replicat abends and writes the failed operation to the discard file


Scenario 2:
Replicat with handlecollisions
Test results: the replicat ignores the colliding transactions (no discard)

Scenario 3:
Replicat with the following parameters
reperror (0001,discard)
reperror (1403,discard)

BATCHSQL BATCHERRORMODE, BATCHTRANSOPS 100000
Test results: the replicat writes the failed operations to the discard file and continues :) ...


With handlecollisions we will not get notified when something is wrong with replication. The replicat ignores the colliding transactions (no discard), yet its stats still show the operations, indicating that the DML was performed on the target database, which is not true.


--Note: I never recommend using handlecollisions in your replicat parameter file.

--Thank you 



----Nikhil Tatineni---
----Oracle in memory---

Tuesday, November 10, 2015

DBMS_METADATA.GET_DDL To extract Index DDL

Scenario: Someone asks you to drop an index and re-create it. How are you going to get the DDL of the index? We can extract the index DDL using the package DBMS_METADATA.GET_DDL.

"dba_indexes" tell about what are the index's on the objects 
SQL> select dbms_metadata.get_ddl ('INDEX','INDEXNAME','SCHEMANAME') from dual;

                       (or)

If you are connected as the schema owner and want to extract the DDL of all indexes in the schema,
here we go, this is very cool. :)

select dbms_metadata.get_ddl (object_type, object_name) from user_objects where object_type='INDEX';

--Nikhil Tatineni--
--Oracle In-Memory--

Goldengate: Supplemental Logging

To configure and install GoldenGate we have to enable minimal supplemental logging at the database level. Supplemental logging can be enabled at the database level and, for individual tables, from GGSCI (the GoldenGate command-line utility). It comes in two forms: minimal logging at the database level, and table-level logging that identifies the key columns that need to be logged in the redo of the Oracle database.

How do we add supplemental logging for a table?

Depending on the application functionality and the keys on the tables, supplemental logging can vary from one table to another. When we enable supplemental logging at the table level, the primary key columns are logged into the online redo logs along with the changed columns. The extract collects these changes, along with the primary key, and writes them to the local trail. Using the transaction information logged in the trail, the replicat can build consistent SQL keyed on the primary key columns and update the target table.


Perform a dblogin from the GoldenGate command-line utility and add supplemental logging as follows:
example: 
schema: universe
table:india 

ggsci> dblogin userid ggsuser, password oracle
ggsci> add trandata universe.india 

We can check supplemental logging for the table from the GoldenGate command-line utility as follows:
ggsci> info trandata universe.india 

The GoldenGate ADD TRANDATA command is equivalent to the following SQL statement
SQL> alter table Emp add supplemental log group GGS_EMP ([COLUMN_LIST]) always;


COLUMN_LIST is either the primary key columns or unique index columns, or if there is no primary key or unique index, it will be all columns


checking supplemental logging info for a table involved in goldengate replication:


checking supplemental log groups at table level 
select log_group_name, table_name,
       decode(always, 'ALWAYS', 'Unconditional', NULL, 'Conditional') always
from   dba_log_groups
where  table_name = 'EMP';

Conditional: the before images of all the columns in the log group are logged in the online redo only if at least one of the columns in the group is updated.
Unconditional: any time the row is updated, the columns in the unconditional group are logged into the online redo (their before images are logged).

checking supplemental log group columns at table level 
select * from dba_log_group_columns where table_name='EMP';


--NOTE:
Make sure minimal supplemental logging, force logging, and table-level supplemental logging are enabled for OGG to work well. A quick check of the database-level settings is sketched below.
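
select supplemental_log_data_min, force_logging
from   v$database;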

---Nikhil Tatineni---
---Oracle In-Memory---
