Sunday, November 29, 2015

Exadata: Hybrid Columnar Compression

Hybrid Columnar Compression (HCC) lets us implement different levels of data compression, providing cost savings at the storage level and performance improvements from reduced I/O when accessing large tables. When we implement HCC, table data is organized and stored in compression units. Within a compression unit, data is stored in columnar format and then compressed; similar values inside the compression unit are grouped together, which improves the expected compression ratio.
 
There are two types of Hybrid Columnar Compression:

1) Query-optimized compression helps with fetching large data workloads such as data warehouses.
Data warehousing environments hold historical data; compressing it saves storage and helps analytic performance.
2) Archive compression provides the highest degree of compression and is targeted at rarely accessed data (for example, older partitions of a table) that must be kept online.

On OLTP systems, Hybrid Columnar Compression can be used to compress older, less active data while newer, more active and update intensive data can be compressed using Advanced Row Compression.
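For instance, a hedged sketch of mixing the two on a single table; the table and partition names here are hypothetical, and these statements only affect newly loaded data:

sql> alter table orders modify partition p_2015 row store compress advanced;
sql> alter table orders modify partition p_2010 compress for archive high;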


For example, we can choose from different HCC compression types while creating a table.
The HCC compression levels available are:

compress for query low;
compress for query high;
compress for archive low;
compress for archive high;

For example, we can create a range-partitioned table and apply a different HCC level to each partition, as shown below.

create table world (id number(10))
partition by range (id)
(partition p1 values less than (100000) nocompress,
partition p2 values less than (200000) compress for archive low,
partition p3 values less than (300000)  compress for query high,
partition p4 values less than (maxvalue) compress for query low)
enable row movement;


The following query displays the compression level implemented on each partition of the table:
select partition_name, compression, compress_for from dba_tab_partitions where table_name = 'WORLD';
 


From the output above, we can confirm that different HCC levels can be applied to provide cost savings at the storage level and to improve the performance of SQL queries.

Let's say one of the table's partitions is no longer used by the application; we can compress it with the HCC option "compress for archive high":

sql> alter table  world modify partition p4 compress for archive high; 
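Note that MODIFY PARTITION only changes the compression attribute for future (direct-path loaded) data; to recompress the rows already sitting in the partition, the segment has to be rebuilt. A minimal sketch (any indexes on the partition would need a rebuild afterwards):

sql> alter table world move partition p4 compress for archive high;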

HCC is best suited to OLAP / data warehousing environments.
For OLTP we can use Advanced Row Compression, driven by the ADO (Automatic Data Optimization) policies introduced in 12c :)

---Nikhil Tatineni--
---Exadata---

Friday, November 27, 2015

Exadata: Storage index

Data is transferred between the storage servers and the database servers. Exadata is meant to deliver performance, so the amount of data shipped over this path should be reduced as much as possible. To achieve this, Oracle implemented a feature called the storage index.

Whenever a table is queried with different ranges of values, a storage index is built on the storage server automatically; no additional setup is required. It keeps the minimum and maximum values of the columns for each storage region of the cell, so regions that cannot contain matching rows are skipped. The amount of I/O saved by storage indexes can be seen with the following query.

Query to check I/O bytes saved by storage Index

col stat_value format 9,999.9999
select
name, value/1024/1024 as stat_value from v$mystat s, v$statname n
where
s.statistic# = n.statistic#
and
n.name in (
'cell physical IO bytes saved by storage index',
'cell physical IO interconnect bytes returned by smart scan'
)
/

_kcfis_storageidx_disabled is a hidden database parameter that controls whether storage indexes are used on the Exadata storage servers. By default it is FALSE; if you set it to TRUE, storage indexes are disabled.
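A hedged sketch of toggling it at the session level for testing; hidden underscore parameters should normally only be changed when directed by Oracle Support:

sql> alter session set "_kcfis_storageidx_disabled" = true;   -- disable storage indexes for this session
sql> alter session set "_kcfis_storageidx_disabled" = false;  -- back to the default behaviour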

--Nikhil Tatineni--
--Exadata--

Exadata: Smart Flash Logging



"Exadata Smart Flash Logging” uses flash storage in the storage server to help database  to avoid latency writing to online redo logs. This small amount of flash memory temporary record the online redologs on flashlog created on flashdisks 

"cellsrv" manages I/O processing on the Exadata storage servers. The Oracle database has online redo logs; whenever a transaction is committed, the log writer writes the data from the redo log buffer to the online redo logs, which are stored on the storage servers. The database interacts with the Exadata server process "cellsrv" to write the data to the online redo logs, and cellsrv notifies the database as soon as the write request is completed (with Smart Flash Logging, redo is written to the flash log and to disk in parallel, and whichever finishes first completes the request).

On the storage server there are 4 flash PCI cards, each with 4 flash disks. We can check the flash disks on any Exadata storage server using the following command:
cellcli> list celldisk where disktype=flashdisk

Once we have the list of flash disks on the storage server, we can go ahead and create a flash log on a single flash disk:
cellcli> create flashlog celldisk='<flashdisk>', size=<size>

The following commands display the flash log available on the server:
cellcli> list flashlog
cellcli> list flashlog detail 

Or we can create it on all storage cells as follows.
If the flash cache already uses all of the flash space, we need to drop the existing flash cache, create the flash log, and then recreate the flash cache:
cellcli> drop flashcache
cellcli> create flashlog all
cellcli> create flashcache all
cellcli> list flashlog detail


Exadata-Storage-Architecture



Each storage server has 12 physical hard disks and 16 flash disks by default.
When a hard disk is found, the operating system determines the device driver needed and creates a block device called a LUN for application access. The hard disks act as the first layer underneath the LUNs; cell disks are created on the LUNs, grid disks on the cell disks, and ASM disks on the grid disks:
    Physical Disk > LUN > celldisk > griddisk > ASM disk

Cell disks are created from the physical disks.
We can list the physical disks using CellCLI; the following command displays the hard disks and flash disks attached to the storage server:
cellcli> list physicaldisk

LUN (Logical Unit Number)

A LUN is automatically created on each physical disk.
# on the storage server, issue the command lsscsi to list all LUNs
These LUNs are used to create cell disks. The first 2 disks contain the system area; we can see this information with fdisk by listing the partitions on the first 2 physical disks:
# fdisk -l /dev/sda
# fdisk -l /dev/sdb

CellCLI> list lun attributes name, deviceName, isSystemLun, physicalDrives, lunSize
(This output shows both the physical disks and the flash disks available on the server)

Using the physical hard disks and flash disks, cell disks are created with CellCLI:
cellcli>  create celldisk all ( This command will create the cell disks for both physical hard disks and flash disks)
cellcli> list celldisks (This command will display all cell disks on the storage server) 

Using the cell disks, we can create grid disks with CellCLI as follows:
cellcli> create griddisk <GroupName> celldisk = <Cell-Disk-Name>, size =< >

The following command displays all grid disks available on the storage server:
cellcli> list griddisk 
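The last step in the chain (griddisk > ASM disk) happens on the database grid side: the grid disks are discovered by ASM over the o/ path and used to build disk groups. A hedged sketch, assuming grid disks were created with the prefix DATA on every cell; the disk group name, redundancy and attribute values are illustrative:

sql> create diskgroup DATA normal redundancy
     disk 'o/*/DATA*'
     attribute 'compatible.asm' = '12.1.0.0.0',
               'compatible.rdbms' = '12.1.0.0.0',
               'cell.smart_scan_capable' = 'TRUE';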

--Nikhil Tatineni--
--Exadata--
        

Exadata-Hardware-X3-2



The X3-2 hardware is as follows.
Try to understand each hardware component before going into detail about Exadata.



x3-2 Half Rack
The X3-2 consists of skinnier Sun Server X3-2 boxes (formerly known as Sun Fire X4170 M3) with Xeon E5-2690 processors.

4 Database Servers (Oracle Linux 5.8), X3-2 Half Rack
Each X3-2 database node has 2 x 8-core Intel Xeon E5-2690 processors
256 GB RAM per database server
4 x 300 GB 10,000 RPM disks
2 x QDR (40 Gb/sec) InfiniBand ports
4 x 1 Gb Ethernet ports (copper)
2 x 10 Gb Ethernet ports (optical)

7 Exadata Storage Servers, X3-2 Half Rack
Each storage server has four PCI flash cards (400 GB each); in total 7 x 4 = 28 PCI flash cards providing 11.2 TB of Exadata Smart Flash Cache (part of which serves as the Smart Flash Log protecting database redo writes)

Each storage server has 12 x 2.5 inch 600 GB SAS-2 high-performance hard drives
7.2 TB of raw capacity per storage server
84 CPU cores across the storage servers for SQL offload processing

InfiniBand Switches
3 x 36-port QDR (40 Gb/sec) InfiniBand switches


Reference:
http://www.oracle.com/technetwork/database/exadata/exadata-dbmachine-x4-2-ds-2076448.pdf

--Nikhil Tatineni--

--Exadata--

Thursday, November 26, 2015

12c: In Memory Column Store

In 11g, a relational table lives in memory (the buffer cache) as a row store: rows are stored with their rowids in row format.
In 12c, the In-Memory option additionally stores relational tables in a column store.

Column representation:
At the space-management level, the on-disk representation is still the logical row-oriented one (segment blocks on disk). When we implement the In-Memory feature, an IM column store is created in the SGA. Data is loaded into it in chunks called In-Memory Compression Units (IMCUs), which are created automatically, and for each IMCU a transaction journal is created automatically. Each column is stored separately inside the IMCU, so the chunks can be scanned at maximum speed by the CPU; the values of the predicate columns in an IMCU are scanned in very few CPU cycles, which makes these chunks eligible for fast processing. An IMCU can store up to about 1 million rows, determined at run time based on the table size and structure. Each IMCU also stores the minimum and maximum values for the range of each column it holds, and this range is compared against the predicate values in the WHERE clause.




When a table is loaded into memory as IMCUs, its transaction journal is maintained automatically. Whenever a value is updated in the buffer cache and the transaction commits, the transaction journal in the IM store is updated and the affected rows of the IMCU are marked stale. The In-Memory coordinator (IMCO), a background process, refreshes the IMCUs from the transaction journal when internal thresholds are reached, when the journal runs low on memory, or on its regular cycle of roughly every 2 minutes.

Implementation: 
The In-Memory column store can be implemented when the application scans a large number of rows, queries a subset of columns in a table, joins small and big tables, queries dimension tables, or joins fact tables.

Set the following parameter on the database (it is a static parameter, so set it with scope=spfile and restart the instance):
sql> alter system set inmemory_size=200M scope=spfile;
(the minimum size is 100MB as per the Oracle documentation)
We have a new view associated with the In-Memory area: "v$inmemory_area"
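A hedged sketch of checking the In-Memory area once the instance is back up:

sql> select pool, alloc_bytes, used_bytes, populate_status from v$inmemory_area;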

Using the In-Memory option, tablespaces and tables will be loaded into the IM store by issuing the following commands.
--Note: if you enable it at the tablespace level, all tables and materialized views in that tablespace will be loaded into the IM store by default.

syntax:
sql> alter table <table_name> inmemory;
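At the tablespace level the clause is DEFAULT INMEMORY; a hedged sketch using a hypothetical tablespace name:

sql> alter tablespace users default inmemory;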

We can prioritize the order in which tables are loaded into the IM store by setting different priority levels:

sql> alter table <table_name> inmemory priority critical;

Critical - when the database is opened, the table is loaded into the IM store automatically
High - if space is available after all the critical priority objects are loaded, the high priority tables are loaded into the IM store
Medium - objects or tables are loaded after all critical and high priority objects are loaded into the IM store
Low - objects or tables are loaded after all critical, high and medium priority objects are loaded into the IM store
None - objects are loaded only when they are queried for the first time and space is available in the IM store

Note - the default is NONE
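Population can be verified per segment; a hedged sketch using the v$im_segments view:

sql> select segment_name, inmemory_priority, populate_status, bytes_not_populated from v$im_segments;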

Exceptions: 
-- Objects owned by the SYS user and objects stored in the SYSTEM and SYSAUX tablespaces cannot be stored in the IM store
-- IOTs (index-organized tables) cannot be loaded into the IM store
-- LONG columns and out-of-line LOB columns are not populated in the IM store

---Nikhil Tatineni---
---Oracle in memory--- 



Thursday, November 12, 2015

12c: Oracle Goldengate wallet

Hi Guys,

I found a very cool feature in GoldenGate 12c: creating a password wallet so we can avoid putting passwords in the parameter files of GoldenGate processes. When we implement the wallet, we add the GoldenGate user and password to an encrypted credential store and then use an alias instead of the username and password in the GoldenGate parameter files.

Example of a parameter file without a wallet credential alias:
extract ENTNEW
SETENV (ORACLE_SID=oval007p)
SETENV (ORACLE_HOME="/u01/app/oracle/product/12.1.0/dbhome_1")
userid ggsuser, password oracle
exttrail /u01/app/trails/ba
reportcount every 10 minutes, rate
Table universe.world

In the parameter file above we are using the GoldenGate user and password; our objective is to avoid putting user credentials in the GoldenGate parameter file.

Implementation plan for creating the wallet and adding a credential to it:
Step 1: create the wallet
Step 2: add the GoldenGate user to the credential store
Step 3: validate the login using the alias

Step1: 
[oracle@tnc61 goldengate]$ ./ggsci
Oracle GoldenGate Command Interpreter for Oracle
Version 12.1.2.1.0 OGGCORE_12.1.2.1.0_PLATFORMS_140727.2135.1_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Aug  7 2014 10:21:34
Operating system character set identified as UTF-8.
Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.


GGSCI (tnc61.ffdc.com) 1> create wallet
Created wallet at location 'dirwlt'.
Opened wallet at location 'dirwlt'.

Step2:
GGSCI (tnc61.ffdc.com) 2> add credentialstore
Credential store created in ./dircrd/.

GGSCI (tnc61.ffdc.com) 3> info credentialstore
Reading from ./dircrd/:
No information found in credential store.

GGSCI (tnc61.ffdc.com) 9> alter credentialstore add user ggsuser password oracle alias ggs_user
Credential store in ./dircrd/ altered.

syntax:
alter credentialstore add user <goldengateuser> password <passwordofgguser> alias <aliasname>

Step3:
GGSCI (tnc61.ffdc.com) 10> dblogin useridalias ggs_user;
Successfully logged into database.

Finally, we can replace the GoldenGate user credentials with the alias. The GoldenGate process parameter file with the alias:

ggsci> view param entnew
extract ENTNEW
SETENV (ORACLE_SID=oval007p)
SETENV (ORACLE_HOME="/u01/app/oracle/product/12.1.0/dbhome_1")
useridalias ggs_user
exttrail /u01/app/trails/ba
logallsupcols
updaterecordformat compact
reportcount every 10 minutes, rate
Table universe.world

--Nikhil Tatineni--
--12c Goldengate--



Wednesday, November 11, 2015

Delete Unused Process in Goldengate

Scenario: when decommissioning GoldenGate, we need to purge the unused processes. In order to decommission a GoldenGate process we need to perform a dblogin and delete the process from the GoldenGate home.

For example, decommissioning a replicat from the GoldenGate home:
GGSCI (tnc61.ffdc.com) 8> dblogin USERID GGSUSER, PASSWORD oracle
Successfully logged into database.

GGSCI (tnc61.ffdc.com as GGSUSER@bond07p) 9> delete REORD117
Deleted REPLICAT REORD117.
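Before running the delete, it may help to confirm the process is already stopped and note its checkpoint; a hedged sketch:

ggsci> info replicat REORD117
ggsci> stop replicat REORD117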


----Nikhil Tatineni--
----Oracle In-memory--





Migrating table between tablespaces


Hi Guys, Welcome back


Scenarios for migrating a table to a different tablespace:

  • In-order to reorganize table 
  • Filesystem is full, need to migrate the table to different tablespace

Follow these steps:


STEP1: Find out which tables and objects you want to move to the new tablespace
STEP2: Find out the size of each object to migrate
STEP3: Create the new tablespace, sized from step 2
STEP4: Decide the best strategy to move the tables to the tablespace created in step 3
STEP5: Decide when to implement it

The best strategy is as follows:

a) Move the table from one tablespace to another:
sql> alter table <Table Name> move tablespace <Tablespace_Name>;

When we move a table to another tablespace, the indexes on that table become unusable and we need to rebuild them. If we want to distribute the data across tablespaces, make sure each index is rebuilt into the appropriate tablespace:

Sql> alter index <index_name> rebuild tablespace <tablespacename>;
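A hedged sketch for finding the indexes that became unusable after the move (the table name here is illustrative):

sql> select index_name, status from dba_indexes where table_name = 'WORLD' and status = 'UNUSABLE';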

Note ---- Don't forget to collect stats on the table. I recommend creating histograms on large-volume tables.

----Thank you --
----Nikhil Tatineni---
----Oracle In-memory---


OGG:Replicat with Handlecollisions

Hey Guys, 

Tested the handlecollisions parameter in a replicat and performed DML on the source database.
The results are as follows ---

scenario 1: 
Replicat with no error-handling parameters
Test Results: the replicat abends, writing the failed operation to the discard file


scenario 2:
Replicat with handlecollisions
Test results: the replicat ignores the failing transactions (no discard)

scenario 3: 
Replicat with following parameters  
reperror (0001,discard)
reperror (1403,discard)

BATCHSQL BATCHERRORMODE, BATCHTRANSOPS 100000
Test Results: the replicat writes discards and keeps running :) ...


With handlecollisions we will not get notified when something is wrong with replication. The replicat ignores the failing transactions (no discard), yet its stats still show the operations as applied, implying the DML was performed on the target database, which is not true.


--Note-- I never recommend using handlecollisions in your replicat parameter file.

--Thank you 



----Nikhil Tatineni---
----Oracle in memory---

Tuesday, November 10, 2015

DBMS_METADATA.GET_DDL To extract Index DDL

Scenario: if someone asks you to drop an index and re-create it, how are you going to get the DDL of the index? We can extract the index DDL using the package "DBMS_METADATA.GET_DDL".

"dba_indexes" tell about what are the index's on the objects 
SQL> select dbms_metadata.get_ddl ('INDEX','INDEXNAME','SCHEMANAME') from dual;

                       (or)

If you know the object type in the schema and you want to extract the DDL for all objects of that type,
here we go, this is very cool. :)

select dbms_metadata.get_ddl (object_type, object_name) from user_objects where object_type='INDEX';
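The DDL comes back as a CLOB and SQL*Plus truncates long output by default; a hedged sketch of the settings usually set before running the queries above:

SQL> set long 100000
SQL> set pagesize 0
SQL> set linesize 200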

--Nikhil Tatineni--
--Oracle In-Memory--

Goldengate: Supplemental Logging

In order to configure and install GoldenGate we have to enable minimal supplemental logging at the database level. Supplemental logging can be enabled at the database level and from GGSCI (the GoldenGate command-line utility). Supplemental logging here comes in two pieces: database-level minimal logging, and table-level logging that identifies which columns (typically the primary key columns) need to be logged on the Oracle database.
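A hedged sketch of the database-level prerequisites (run as a DBA user), followed by a quick check in v$database:

SQL> alter database add supplemental log data;
SQL> alter database force logging;
SQL> select supplemental_log_data_min, force_logging from v$database;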

How do we add supplemental logging for a table?

Depending on the application functionality and the keys on the tables, supplemental logging can vary from one table to another. When we enable supplemental logging at the table level, the primary key columns are logged into the online redo logs along with the changed columns. The extract collects the changes, including the primary key, and writes them to the local trail. Using the transaction info logged in the trail, the replicat can build consistent SQL keyed on the primary key columns and update the target table.


Perform a dblogin from the GoldenGate command-line utility and add supplemental logging as follows.
example: 
schema: universe
table:india 

ggsci> dblogin userid ggsuser, password oracle
ggsci> add trandata universe.india 

We can check supplemental logging from the GoldenGate command-line utility as follows:
ggsci> info trandata universe.india 

The GoldenGate ADD TRANDATA command is equivalent to the following SQL statement
SQL> alter table Emp add supplemental log group GGS_EMP ([COLUMN_LIST]) always;


COLUMN_LIST is either the primary key columns or unique index columns; if there is no primary key or unique index, it will be all columns.


checking supplemental logging info for a table involved in goldengate replication:


checking supplemental log groups at table level 
Select log_group_name, table_name, decode (always, 'ALWAYS', 'Unconditional', NULL, 'Conditional') always from DBA_LOG_GROUPS where table_name='EMP';

Conditional: the before images of all the columns in the group are logged into the online redo only when at least one of the columns in the group is updated.
Unconditional: any time the row is updated, the columns in the group are logged into the online redo (the before images are always logged).

checking supplemental log group columns at table level 
select * from dba_log_group_columns where table_name='EMP';


--NOTE:
Make sure minimal supplemental logging, force logging, and table-level supplemental logging are all enabled for OGG to work well.

---Nikhil Tatineni---
---Oracle In-Memory---

Migrating Oracle database using Goldengate:


Perform all sanity checks before and after migrating the schema / database.

STEP1: Install and configure GoldenGate on both the source and target servers
STEP2: Configure the manager, extract and pump in the source GoldenGate home, and the manager and replicat in the target GoldenGate home
STEP3: Start the manager, extract and pump in the source GoldenGate home
STEP4: Get the current SCN from v$database
STEP5: Take a consistent backup of the database / schema being migrated using expdp (with the flashback_scn parameter set to the SCN from step 4)
STEP6: SFTP the dump file to the target server
STEP7: Load the data into the target database
STEP8: Start the replicat from the CSN from step 4 (the SCN of the database is equal to the commit sequence number), as shown in the sketch below
STEP9: When the lag is cleared, the data is in sync with the source database
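A hedged sketch of steps 4, 5 and 8; the schema, directory and replicat names are illustrative:

sql> select current_scn from v$database;

$ expdp system schemas=universe flashback_scn=<scn_from_step_4> directory=DATA_PUMP_DIR dumpfile=universe.dmp logfile=universe_exp.log

ggsci> start replicat <replicat_name>, aftercsn <scn_from_step_4>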


—Nikhil Tatineni—
—Oracle In-memory— 

Monday, November 9, 2015

Matching PID to SID & SID to PID & SQL_TEXT using SID


Identify OS process ID based on database SID

select a.sid, a.serial#,a.username, a.osuser, b.spid
from v$session a, v$process b
where a.paddr= b.addr
and a.sid='&sid'
order by a.sid;

How to find out PID using SID

select pid, sid
from v$process, v$session
where v$process.addr = v$session.paddr
and sid = ###;

How to find "sql_text" using SID

select
a.sid,
a.serial#,
b.sql_text
from
v$session a,
v$sqlarea b
where
a.sql_address=b.address
and
sid= ###;


----Nikhil Tatineni----
----Oracle in memory----

Goldengate replicat waiting for commit


Scenario: a replicat is stuck on the database with around 17 hours of lag; it is an outage in our environment :) and the data is out of sync.

REPLICAT    RUNNING     RWTEMP        0:42:30      17:34:02

On initial investigation:
Check for blocking sessions on the Oracle database
Check for long-running transactions on the Oracle database
Check that the replicat is moving along the trail using the "info <processname>" command
Check that the trail is healthy using the logdump utility
Check for abended processes (extract or pump) associated with the replicat

I performed all of these steps during the initial investigation and found that the pump writing to the target server had abended; once I started that pump, the lag on the replicat cleared.
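A hedged sketch of the GGSCI commands used during that investigation (the process name comes from the status line above):

ggsci> info all
(look for abended extract or pump processes associated with the replicat)
ggsci> info replicat RWTEMP, detail
(shows the replicat's current read position in the trail)
ggsci> send replicat RWTEMP, status
(shows what the replicat is doing right now)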

Finally I concluded that if there is an open (uncommitted) transaction feeding the replicat, the replicat waits for the commit, which creates lag on the replicat.

How to check for open transactions on the Oracle database?

SQL> SELECT COUNT(*) FROM v$transaction t, v$session s, v$mystat m WHERE t.ses_addr = s.saddr AND s.sid = m.sid AND ROWNUM = 1;

----Nikhil Tatineni---
----Oracle in memory----

Sunday, November 8, 2015

Oracle Histograms

A histogram holds data about the values within a column of a table and the number of occurrences of a specific value or range of values. Based on the predicate values used in a query, the optimizer uses the histogram to generate an execution plan; it may choose a full table scan or an index range scan. Histograms are useful when the data in a column is skewed (non-uniform). The histogram information is stored in the data dictionary view "dba_tab_col_statistics", and the METHOD_OPT parameter of DBMS_STATS controls the creation of histograms.

Creating a histogram on a table depends on which columns we are gathering statistics for; we can use different METHOD_OPT options while creating histograms on tables:

METHOD_OPT:
FOR COLUMNS SIZE AUTO: fastest; you can specify one column instead of all columns.
FOR ALL COLUMNS SIZE REPEAT: prevents deletion of histograms and collects them only for columns that already have histograms.
FOR ALL COLUMNS: collects histograms on all columns.
FOR ALL COLUMNS SIZE SKEWONLY: collects histograms for columns that have skewed values.
FOR ALL INDEXED COLUMNS: collects histograms for columns that have indexes.

When we use the AUTO option while gathering or creating histograms on a table, Oracle tracks the need for histograms based on the column usage recorded in sys.col_usage$. Using this information it internally creates buckets (the histogram) on the columns and stores the access information (the value range is stored inside each bucket).
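A hedged sketch of reviewing that recorded column usage before deciding on METHOD_OPT (the schema and table names follow the example below):

SQL> set long 100000
SQL> select dbms_stats.report_col_usage('UNIVERSE', 'INDIA') from dual;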

Exec dbms_stats.gather_table_stats
(ownname => 'UNIVERSE', tabname => 'INDIA',
estimate_percent => dbms_stats.auto_sample_size,
method_opt => 'FOR COLUMNS SIZE AUTO',
degree => 4);

For example, creating a histogram for a frequently used column (a frequency histogram :)).
If a column is repeatedly accessed by the application, we can create a histogram on it. Here we are creating a histogram on the column "STATE" of table INDIA in schema UNIVERSE.

Exec DBMS_STATS.GATHER_TABLE_STATS 
(ownname => 'UNIVERSE',
tabname => 'INDIA',
method_opt => 'FOR COLUMNS STATE',
degree => 4);

Creating a hybrid histogram in 12c:
Before creating a hybrid histogram, check the number of distinct values in the column.
With method_opt => 'FOR COLUMNS STATE size 10', 10 buckets are created and the access information (the value range) is stored inside each bucket:

Exec DBMS_STATS.GATHER_TABLE_STATS 
(ownname => 'UNIVERSE',
tabname => 'INDIA',
method_opt => 'FOR COLUMNS STATE size 10',
degree => 4);

The optimizer considers both hybrid and top frequency histograms. To qualify for a top frequency histogram, the percentage of rows occupied by the top 10 most frequent values must be equal to or greater than threshold p, where p is (1-(1/10))*100, or 90%.

Query to investigate histogram  created for column "STATE" 

select table_name,column_name,num_distinct, histogram from user_tab_col_statistics where table_name='INDIA' and column_name='STATE';

Limitations
Do not create histograms on primary key or other unique columns; since every value is distinct, a histogram provides no benefit there.

----Nikhil Tatineni---
----Oracle in memory --- 
