Monday, July 2, 2018

RMAN Block Change Tracking and “_bct_public_dba_buffer_size” Hidden Parameter

One important feature, “block change tracking”, introduced in 10g, keeps a log of the blocks changed since the last backup. During the next backup, RMAN uses the block change tracking file to identify the changed blocks instead of scanning all data files. Changed blocks are determined and written to the tracking file by the CTWR process. After enabling block change tracking, RMAN incremental backups run much more efficiently.

You can enable or disable block change tracking as below:

SQL>alter database enable block change tracking using file '/rman_bkups/change.log';

SQL>alter database disable block change tracking;
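You can check the current status of the feature and the tracking file in use via the v$block_change_tracking view:

```sql
-- Shows ENABLED/DISABLED, the tracking file path and its size
SELECT status, filename, bytes FROM v$block_change_tracking;
```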
During RMAN incremental backups you may see ‘block change tracking buffer space’ wait events. This wait event appears especially in large databases, and it affects versions between 10.2 and 11.2.

According to My Oracle Support note [ID 1311518.1], you should perform the following steps to solve this problem.

1- Do not place the “Block Change Tracking” log file on a disk with a high I/O load; that is, the tracking file should not reside on a disk holding heavily accessed data.

2- The value of LARGE_POOL_SIZE must be examined. If it is too low, it must be increased.

3- Set hidden parameter “_bct_public_dba_buffer_size”.

You can query the memory area which is allocated for change tracking:

select dba_buffer_count_public*dba_entry_count_public*dba_entry_size from x$krcstat;

Multiply the obtained value by 2 to calculate the value of the “_bct_public_dba_buffer_size” parameter.
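Putting the two steps together, a minimal sketch (the numbers are illustrative; a hidden parameter like this should only be changed under Oracle Support guidance):

```sql
-- Current public buffer allocation for change tracking
SELECT dba_buffer_count_public * dba_entry_count_public * dba_entry_size AS bct_bytes
FROM   x$krcstat;

-- If the query returned e.g. 4194304 bytes, set double that value
ALTER SYSTEM SET "_bct_public_dba_buffer_size" = 8388608 SCOPE = SPFILE;
-- An instance restart is required for the new value to take effect
```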

Output Post Processor is Down with Actual Process is 0 And Target Process is 1


If you see that the OPP is down with Actual Processes = 0 and Target Processes = 1, then do the following:
1. Shut down the concurrent managers via the adcmctl.sh script under $COMMON_TOP/admin/scripts/<context_name>

2. To ensure the concurrent managers are down, check that there is no FNDLIBR process running:
ps -ef | grep FNDLIBR
ps -ef | grep applmgr | grep FNDLIBR

3. Run adadmin to relink FNDSVC executable.

a. Invoke adadmin from command prompt
b. Choose option 2 (2. Maintain Applications Files menu)
c. Choose option 1 (1. Relink Applications programs )
d. Then type "FND" when prompted ( Enter list of products to link ('all' for all products) [all] : FND )
e. Ensure adrelink is exiting with status 0
4. Start Concurrent Managers using adcmctl.sh
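The whole procedure can be sketched as a shell session (the context name and apps credentials are placeholders for your environment):

```shell
# 1. Stop the concurrent managers
cd $COMMON_TOP/admin/scripts/<context_name>
./adcmctl.sh stop apps/<apps_password>

# 2. Verify that no FNDLIBR processes remain
ps -ef | grep FNDLIBR | grep -v grep

# 3. Relink FNDSVC through adadmin (interactive: option 2, then 1, product "FND")
adadmin

# 4. Restart the concurrent managers
./adcmctl.sh start apps/<apps_password>
```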

Concurrent Processing – R12 Output Post Processor Service Not Coming Up

Reason :

The Service Manager for the node is not running. A possible cause is that the Service Manager definition is missing under the

Concurrent -> Manager -> Define form. If the Service Manager is not present/defined for a particular node, then all the services provided by the Service Manager, such as OPP, WF etc., will not work.

1. Shutdown all the services.

------ Below step 2 will create the Service Manager "FNDSM" ------

2. Log in as applmgr
cd to $FND_TOP/patch/115/sql
Run the script: afdcm037.sql

3. Relink FNDSM and FNDLIBR executables as mentioned below:

$ adrelink.sh force=y link_debug=y "fnd FNDLIBR"
$ adrelink.sh force=y link_debug=y "fnd FNDSM"

4. Run cmclean.sql
5. Start up the managers/services

Sunday, July 1, 2018

Oracle RAC Architecture 10g and 11g

Architecture Diagram:
This is the basic architecture of Oracle RAC; we will discuss the new 11g features/additions in each section.




RAC components
Shared Disk System
Oracle clusterware
Cluster interconnects
Oracle Kernel Components
Shared Disk System
Below are the three major types of shared storage used in RAC:

Raw volumes: A raw logical volume is an area of physical and logical disk space that is under the direct control of an application such as a database, rather than under the direct control of the operating system or a file system. This was the only available option in 9i.

Cluster file system: This option is not widely used; here a cluster file system such as Oracle Cluster File System (OCFS), available for MS Windows and Linux, holds all the datafiles of the RAC database.

Automatic Storage Management (ASM): The Oracle-recommended storage option, a cluster-optimized storage manager for Oracle database files, introduced in Oracle 10g.

Oracle Clusterware
Clusterware is the mandatory piece of software required to run the Oracle RAC option. It provides basic clustering support at the OS level and enables Oracle software to run in cluster mode.
Clusterware enables the nodes to communicate with each other and forms the cluster that makes the nodes work as a single logical server.

Clusterware is managed by Cluster Ready Services (CRS) using the Oracle Cluster Registry (OCR) and the voting disk.

Cluster Ready Services (CRS)
Oracle Clusterware uses CRS for interaction between the OS and the database. CRS has five components, namely the Process Monitor daemon (OPROCd: obsolete in 11gR2), the CRS daemon (CRSd), the Oracle Cluster Synchronization Services daemon (OCSSd), the Event Manager daemon (EVMd) and the Oracle Notification Service (ONS).

CRS is installed in and runs from ORA_CRS_HOME in 10g and GRID_HOME in 11g.

Starting and Stopping CRS

crsctl stop crs      # Stops Oracle clusterware
crsctl start crs     # Starts Oracle clusterware
crsctl enable crs    # Enables Oracle clusterware
crsctl disable crs   # Disables Oracle clusterware

Note: These commands should be run as the root user (superuser).
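After starting CRS, a quick health check confirms the daemons are up (the second command is 11gR2 syntax):

```shell
crsctl check crs     # reports the status of CSS, CRS and EVM
crsctl stat res -t   # tabular status of all cluster resources (11gR2)
```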
Oracle Cluster Synchronization Services
OCSSd provides synchronization between the nodes. Failure of OCSSd causes the machine to reboot to avoid a split-brain situation. OCSSd runs as the oracle user.



Event Manager Process
This runs as the daemon process EVMd. The daemon spawns a permanent child process called evmlogger and generates events when things happen.



Oracle Notification Services
Whenever the state of a cluster resource changes, the ONS processes on each node communicate with each other and exchange high-availability information.
Oracle Process Monitor
The Oracle Process Monitor daemon (OPROCd) detects potential cluster node hangs and reboots the hung node.
Oracle Cluster Registry (OCR)
Oracle Clusterware keeps its configuration information in the cluster registry. This must be on shared storage and must be accessible to all the nodes in the cluster simultaneously. This shared storage is known as the Oracle Cluster Registry (OCR).


Oracle Logical Registry(11gR2)
Similar to the OCR, introduced in 11gR2, but it only stores information about the local node. It is not shared by the other nodes of the cluster and is used by OHASd while starting or joining a cluster.


Voting Disk
A voting disk is a shared disk that is accessed by all the member nodes of the cluster. It is used as a central reference for all the nodes and keeps the heartbeat information between the nodes. If any node is unable to ping the voting disk, the cluster immediately recognizes the communication failure and evicts the node from the cluster.


Cluster interconnect
It is the communication path used by the cluster for the synchronization of resources, and in some cases for the transfer of data from one instance to another. Typically, the interconnect is a network connection dedicated to the server nodes of the cluster (and is thus sometimes referred to as the private interconnect).

Single Client Access Name (SCAN)-11gR2
SCAN is a single network name that resolves to three different IPs registered in DNS or GNS.
Prior to 11gR2, if we added or removed a node from the cluster, the connection settings had to be changed on every client. Configuring connections using the SCAN name eliminates this problem.
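A client tnsnames.ora alias using SCAN might look like the following (the host and service names are placeholders):

```
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = mycluster-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
    )
  )
```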

Oracle Kernel Components
The set of additional background processes in each instance is known as the Oracle kernel components in a RAC environment. Since the buffer cache and shared pool become global in RAC, special handling is required to manage the resources and avoid conflicts and corruption. The additional RAC background processes and the single-instance background processes work together to achieve this.


Global Cache and Global Enqueue Services
In Oracle RAC each instance has its own cache, but an instance may need to access data blocks currently residing in another instance's cache. This management and data sharing is done by the Global Cache Service (GCS). Resources other than data blocks, such as locks and enqueue details, are shared across the instances by the Global Enqueue Service (GES).
Global Resource Directory
In a cluster group, the central repository formed by all the resources is known as the Global Resource Directory (GRD). The GRD is managed by GCS and GES.
Oracle RAC Background Processes
LMS       Global Cache Service Process
LMON   Global Enqueue Service Monitor
LMD      Global Enqueue service Daemon
LCK0     Instance Enqueue Process
LMS: Global Cache Services Process
LMS, the Lock Manager Server process, is used in Cache Fusion. It enables consistent copies of blocks to be transferred from a holding instance's buffer cache to a requesting instance's buffer cache without a disk write, under certain conditions.
It also rolls back any uncommitted transactions for blocks that are being requested for a consistent read by a remote instance.

LMON:Global Enqueue Services Monitor
LMON, the Lock Monitor process, is responsible for managing the Global Enqueue Service (GES). It maintains consistency of GCS memory in case of any process death. LMON is also responsible for cluster reconfiguration when an instance joins or leaves the cluster. It also checks for instance death and listens for local messages.

LMD: Global Enqueue Services Daemon
LMD-Lock Manager Daemon process manages Enqueue manager service requests for GCS. It also handles deadlock detection and remote resource requests.
LCK0:Instance Enqueue Process
LCK-Lock Process manages instance resource requests and cross-instance call operations for shared resources.
DIAG Process
A separate process used for diagnostic purposes; PMON restarts it in case it dies. The DIAG process does not affect any other background process.

Oracle 12C New Feature



Pluggable database:
This is the most popular feature of Oracle 12c, so I am writing at a little more length on it.

With 12c, Oracle is trying to address the problem of multitenancy. There is a radical, major change in the core database architecture through the introduction of Container Databases, also called CDBs, and Pluggable Databases (PDBs).

The memory and processes are owned by the container database. The container holds the metadata, while the PDBs hold the user data.

We can create up to 253 PDBs including the seed PDB.

In a large setup it is common to see 20 or 30 different instances running in a production environment. With that many instances, maintenance is a nightmare, as all of them have to be separately upgraded, patched, monitored, tuned, RAC-enabled, backed up and Data Guarded.

With the Pluggable Databases feature, we just have to do all this for ONE single instance. Prior to 12c, without this feature, we would have to create separate schemas, and there was always a security threat however much isolation we built into it. There were problems with namespace conflicts; there could only ever be one public synonym of a given name. With PDBs we can have a separate HR or Scott schema for each PDB, separate Emp and Dept tables and separate public synonyms. Additionally, two PDBs can talk to each other through the regular DB link feature. There is no high startup cost of creating a database any more; instead of one instance per database, the shift is to one instance for many databases. The developer community can be oblivious to all this and continue to use PDBs as if each were a traditional database, but for DBAs the world looks like it has changed a lot.

Another cool feature is, we can allocate a CPU percentage for each PDB.

The undo tablespace is common to all PDBs, but the SYSTEM, SYSAUX, USERS & TEMP tablespaces are individual to each PDB.

The container DB, on the other hand, has all of its own.

There is one data dictionary in the container DB and one in each PDB, so obj$, tab$ and source$ exist in both.

In total we can create 252 pluggable DBs.

The container DB initially contains 2 containers (the root, having its own separate data dictionary views, and the PDB seed).

Using the PDB seed as a basic template we can create a new PDB.

The root & the seed template exist initially.

If we run the DBCA command, in creation mode we have to mention the global DB name & the pluggable DB name.

Command to create PDB

SQL>CREATE PLUGGABLE DATABASE pdb1 ADMIN USER pdb1_admin IDENTIFIED BY oracle ROLES=(CONNECT) FILE_NAME_CONVERT=('/oracle/SID/data1/pdbseed','/oracle/SID/data1/pdb1');

This is where all the datafiles are present; using this we create the PDB, copying all files from the seed to pdb1.

This creates the SYSTEM, SYSAUX and TEMP tablespaces, and then you can create user tablespaces.

How to Unplug PDB1

SQL>ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;

SQL>ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO 'pdb1.xml'; (it won't work while the PDB is open read-write)

SQL>DROP PLUGGABLE DATABASE pdb1 KEEP DATAFILES; (the datafiles will remain at their location)

Now plug the unplugged PDB back into the CDB:

SQL>CREATE PLUGGABLE DATABASE pdb1 USING '/stage/pdb1.xml' NOCOPY;

SQL>ALTER PLUGGABLE DATABASE pdb1 OPEN READ WRITE;

Now cloning pdb2 from pdb1:

SQL>ALTER PLUGGABLE DATABASE pdb1 CLOSE;

SQL>ALTER PLUGGABLE DATABASE pdb1 OPEN READ ONLY;

SQL>ALTER SYSTEM SET db_create_file_dest='/oracle/SID/data1/pdb2';

SQL>CREATE PLUGGABLE DATABASE pdb2 FROM pdb1;

SQL>ALTER PLUGGABLE DATABASE pdb2 OPEN;

What is Common & Local User

Common users are created in the root container and have the same identity across all containers.

Local users are created and exist only in PDBs; they can't be created in the root.

How to create common user

SQL>CREATE USER c##sam IDENTIFIED BY sam CONTAINER=ALL; (by default CONTAINER=ALL; common user names must start with C##) -- connect / as sysdba

Now in pdb1:

Connect as user c##sam (a common user, which can create local users) -- connect c##sam/sam@pdb1

SQL>CREATE USER mgr IDENTIFIED BY mgr CONTAINER=CURRENT;

Connect as mgr (a local user created by a common user) -- connect mgr/mgr@pdb1

SQL>CREATE USER emp IDENTIFIED BY emp;

Data type size for column increase:
In 12c the VARCHAR2 limit increases up to 32K, matching the 32K limit in PL/SQL.

I remember in Oracle 7 we used to have a 253-character limit.

In Oracle 8 it went up to a 4000-character limit.

The extended character size reduces the need to resort to LOB data types. In order to enable the extended character size, we have to set the MAX_STRING_SIZE initialization parameter to EXTENDED.

The following procedure needs to be run to use the extended data types:

Shutdown the database
Restart the database in UPGRADE mode
Modify the parameter: ALTER SYSTEM SET MAX_STRING_SIZE=EXTENDED;
Execute utl32k.sql as sysdba : SQL> @?/rdbms/admin/utl32k.sql
Shutdown the database
Restart the database in READ WRITE mode
Once modified we can’t change the setting back to STANDARD
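The steps above can be sketched as a SQL*Plus session (run as SYSDBA; a non-CDB is assumed here):

```sql
SHUTDOWN IMMEDIATE;
STARTUP UPGRADE;
ALTER SYSTEM SET MAX_STRING_SIZE=EXTENDED;
@?/rdbms/admin/utl32k.sql
SHUTDOWN IMMEDIATE;
STARTUP;
-- VARCHAR2 columns up to 32767 bytes can now be declared
```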

Row Limiting Clause:
We can run a SQL query and ask for the top N rows, or for rows 100 to 120; in other words, we can select a particular number of records.

e.g.

SQL> SELECT eno
     FROM emp
     ORDER BY eno DESC
     FETCH FIRST 5 ROWS ONLY;

The following example fetches all records that tie with the Nth row. For example, if the 10th row has a salary of 5000 and there are other employees whose salary matches that value, they will also be fetched when the WITH TIES clause is mentioned.

SQL> SELECT eno,ename,sal FROM emp ORDER BY sal DESC FETCH FIRST 10 ROWS WITH TIES;

The following example offsets the first 5 rows and will display the next 5 rows from the table:

SQL> SELECT eno,ename,sal FROM emp ORDER BY SAL DESC  OFFSET 5 ROWS FETCH NEXT 5 ROWS ONLY;

Column Invisible
I remember when I was in a payments-domain IT company: where we had to hide some important data from being displayed, we used to create views.

In Oracle Database 12c, table columns can be defined as invisible either during its creation with the CREATE TABLE command or by modifying existing table columns via the ALTER TABLE statement. By default, table columns are always visible. Once a column has been set to invisible, it can be reverted back to visible using the ALTER TABLE statement.

SQL> create table sam (col1 number invisible, col2 number);

SQL> alter table sam modify col1 visible;

SQL> alter table sam modify col1 invisible;

We must explicitly refer to the invisible column name in the INSERT statement to insert data into invisible columns. A virtual column or partition column can be defined as invisible too. However, temporary tables, external tables and cluster tables don't support invisible columns.
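A quick illustration (assuming a table sam with an invisible column col1 and a visible column col2):

```sql
INSERT INTO sam (col1, col2) VALUES (1, 2);  -- the invisible column must be named explicitly
SELECT * FROM sam;                           -- shows only col2
SELECT col1, col2 FROM sam;                  -- col1 appears when referenced by name
```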

Identity Columns
In Oracle 12c, when we create a table we can populate a column automatically via a system-generated sequence by using the GENERATED [ALWAYS | BY DEFAULT [ON NULL]] AS IDENTITY clause in the CREATE TABLE statement.

We can use GENERATED AS IDENTITY with the ALWAYS, BY DEFAULT or BY DEFAULT ON NULL keywords, and that affects how and when the identity column value is populated.

By default the GENERATED AS IDENTITY clause implicitly includes the ALWAYS keyword i.e GENERATED ALWAYS AS IDENTITY.

When the ALWAYS keyword is specified, it is not possible to explicitly include values for the identity column in INSERT or UPDATE statements.
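A minimal example of the default (ALWAYS) behaviour:

```sql
CREATE TABLE t1 (
  id   NUMBER GENERATED ALWAYS AS IDENTITY,
  name VARCHAR2(30));

INSERT INTO t1 (name) VALUES ('sample');       -- id is assigned automatically
-- INSERT INTO t1 (id, name) VALUES (10, 'x'); -- fails with ORA-32795
```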

Temporary UNDO
An Oracle database contains a set of system-related tablespaces, such as SYSTEM, SYSAUX, UNDO & TEMP, each used for a different purpose within the database. Prior to Oracle 12c, undo records generated by temporary tables were stored in the undo tablespace, much like the undo records of ordinary tables. However, with the temporary undo feature in 12c, the undo records of temporary tables can now be stored in the temporary tablespace instead of the undo tablespace. The prime benefits of temporary undo include a reduction in undo tablespace usage and less redo generation, as the information is not logged in the redo logs. We have the flexibility to enable the temporary undo option either at the session level or at the database level.

Enabling temporary UNDO

For Session Level:

ALTER SESSION SET TEMP_UNDO_ENABLED = TRUE;

ALTER SESSION SET TEMP_UNDO_ENABLED = FALSE;

For System Level:

CONN sys@pdb1 AS SYSDBA



ALTER SYSTEM SET TEMP_UNDO_ENABLED = TRUE;

ALTER SYSTEM SET TEMP_UNDO_ENABLED = FALSE;

The above functionality is only available if COMPATIBLE is set to 12.0.0 or higher.

Multiple indexes on the same column
This is also a popular feature from an optimization point of view.

Prior to 12c, we could not create an index if the same column list was already indexed; it would generate the error ORA-01408: such column list already indexed.

So, if we wanted to change an index from, say, unique to non-unique, or from a B-tree index to a bitmap index, or from non-partitioned to partitioned, etc., then we had to first drop the original index and re-create it as required.

This means that for the period in which the index was being re-created (which could be a considerable period for a larger index), the column list was not covered by an index, which might prove problematic from a DB performance point of view.

In 12c we can create multiple indexes on the same column list; however, only one of them is usable/visible at a given time.

This means we can now, for example, replace the index policing the PK constraint more quickly (or convert a B-tree to a bitmap index, or a non-partitioned index to a partitioned index, etc.), as we don't have to wait for the new index to be created first:

SQL> create index emp_id_i2 on emp(emp_id) invisible;  (the replacement index is pre-created as INVISIBLE)

SQL> alter table emp drop primary key;

SQL> drop index emp_id_i1;

SQL> alter index emp_id_i2 visible;

SQL> alter table emp add constraint emp_pk primary key(emp_id);

Moving and Renaming datafile is now ONLINE
Prior to 12c, moving a datafile was always an offline task.

While the datafile is being moved, end users can perform queries, DML and DDL. Additionally, datafiles can be migrated between storage types, e.g. from non-ASM to ASM and vice versa.

Syntax:

ALTER DATABASE MOVE DATAFILE ( 'filename' | 'ASM_filename' | file_number )

 [ TO ( 'filename' | 'ASM_filename' ) ]

 [ REUSE ] [ KEEP ]

The REUSE keyword indicates that the new file should be created even if it already exists.

The KEEP keyword indicates that the original copy of the datafile should be retained.

SQL> ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/oradata/sam/system01.dbf' TO '/tmp/system01.dbf';



DDL LOGGING
In 12cR1 we can now log DDL actions into XML and log files. This is very useful for knowing when a drop or create command was executed, and by whom.

The ENABLE_DDL_LOGGING initialization parameter must be configured in order to turn on this feature. The parameter can be set at the database or session level.

When this parameter is enabled, all DDL commands are logged in an XML file and a log file under the log/ddl directory of the database's ADR home ($ORACLE_BASE/diag/rdbms/DBNAME/...).

The XML file contains information such as the DDL command, IP address, timestamp etc.

This helps to identify when a user or table was dropped, or when a DDL statement was triggered.

SQL> ALTER SYSTEM|SESSION SET ENABLE_DDL_LOGGING=TRUE;

Backup user privileges
SYSBACKUP – username SYSBACKUP – the ability to perform RMAN backup and recovery commands both from SQL and RMAN command line
SYSDG – username SYSDG – the ability to perform Data Guard operations with Data Guard Broker or the DGMGRL command line
SYSKM – username SYSKM – the ability to manage the encryption keys for Transparent Data Encryption
Table Recovery in RMAN
This is also one of the popular features of 12c.

I had been waiting for this feature since college days, wondering what would happen if anyone mistakenly dropped a table.

From 12c onwards, we can recover a particular table to a point in time or SCN from RMAN backups in the event of a table DROP or TRUNCATE.

The following actions are performed whenever table recovery is initiated:

Required backup sets are identified to recover the table/partition
An auxiliary database is temporarily configured to a point in time in the process of recovering the table/partition
The required table/partitions are then exported to a dump file using Data Pump
Optionally, we can import the table/partitions into the source database
A rename option is available during recovery
Some of the limitations of table recovery:

SYS user tables can't be recovered
Tables stored in the SYSAUX and SYSTEM tablespaces can't be recovered
Recovery of a table is not possible when the REMAP option is used to recover a table that contains NOT NULL constraints
Example

RMAN> connect target "username/password as SYSBACKUP";

RMAN> RECOVER TABLE username.tablename UNTIL TIME 'TIMESTAMP…'
        AUXILIARY DESTINATION '/sapdata1/tablerecovery'
        DATAPUMP DESTINATION '/sapdata1/dpump'
        DUMP FILE 'tab.dmp'
        NOTABLEIMPORT                                                -- this option avoids importing the table automatically
        REMAP TABLE 'username.tablename':'username.new_table_name';  -- can rename the table with this option

Restricting PGA size
Prior to the 12c database there was no hard limit for the PGA.

Although we set a certain size via the PGA_AGGREGATE_TARGET initialization parameter, Oracle could increase/reduce the size of the PGA dynamically based on the workload and requirements.

In 12c Oracle has introduced a new parameter, PGA_AGGREGATE_LIMIT, for controlling the maximum amount of PGA. The default limit is set to the greatest of these values:

2 GB of memory
200% of PGA_AGGREGATE_TARGET
3 MB times the value of the PROCESSES parameter
SQL> ALTER SYSTEM SET PGA_AGGREGATE_LIMIT=2G;

SQL> ALTER SYSTEM SET PGA_AGGREGATE_LIMIT=0; -- disables the hard limit

So what happens when the limit is reached? Oracle aborts the session holding the most untunable PGA; if the PGA is still over the limit, such sessions are terminated.

So we need to be careful not to set this parameter too low (or too high), because it doesn't look good if an important batch job is killed.

Database upgrade improvements
After reading all the above features, one question always comes to mind: how do we upgrade to 12c to utilize those features?

Oracle released the major version upgrade of Database 12c in July 2013, which also triggered the 11g version's end-of-life support cycle. The 11g support clock is ticking, with the first step of Extended Support starting in Jan 2015 (in order to get Extended Support we will have to be on Oracle Database 11.2.0.4, the terminal patch set).

Preupgrade Utility
The preupgrade utility "utlu121s.sql" is replaced with "preupgrd.sql". The new utility provides the fixup scripts "preupgrade_fixups.sql" and "postupgrade_fixups.sql" to address issues that might be present both before and after the upgrade. These fixup scripts can be executed interactively.

Upgrade Utility
The catupgrd.sql upgrade utility is replaced with the catctl.pl utility. The new utility allows us to take advantage of multiple CPUs by running the upgrade process in parallel, thereby reducing the upgrade time. The new script is used more like a wrapper on the existing one. Also, by default "catuppst.sql" is run when "catctl.pl" is executed. The new script has a lot of options; a few of them are explained below.

Option "-n" specifies the degree of parallelism; the default is 4 and the maximum is 8.

Option "-p" supports rerunning the upgrade by skipping successful steps.

Example: $ORACLE_HOME/perl/bin/perl catctl.pl -n 6 -p $ORACLE_HOME/rdbms/admin/catupgrd.sql
Other Miscellaneous Features.
Execute SQL statement in RMAN
Now we can execute any SQL and PL/SQL commands in RMAN without the need for a SQL prefix:

RMAN> SELECT username,machine FROM v$session;

RMAN> ALTER TABLESPACE users ADD DATAFILE SIZE 121m;

GATHERING STATISTICS CONCURRENTLY ON MULTIPLE TABLES
Prior to 12c, whenever we executed a DBMS_STATS procedure to gather table, index, schema or database level statistics, Oracle collected stats on one table at a time. If the table was big enough, increasing the parallelism was recommended. With 12c we can now collect stats on multiple tables, partitions and sub-partitions concurrently. Before we start using it, we must set the following at the database level to enable the feature:

SQL> ALTER SYSTEM SET RESOURCE_MANAGER_PLAN='DEFAULT_MAIN';

SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=4;

SQL> EXEC DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT', 'ALL');

SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SCOTT');



TRUNCATE TABLE CASCADE
Prior to 12c there wasn't a direct option to truncate a master table while it was referred to by child tables and child records existed. The TRUNCATE TABLE with CASCADE option in 12c truncates the records in the master table and automatically initiates a recursive truncate on the child tables too, subject to the foreign key references being defined as ON DELETE CASCADE. There is no cap on the number of recursive levels: it applies to all child, grandchild and great-grandchild tables.

This enhancement removes the prerequisite of truncating all child records before truncating a master table. The new CASCADE clause can also be applied to table partitions and sub-partitions, etc.

SQL> TRUNCATE TABLE <table_name> CASCADE;

SQL> TRUNCATE TABLE <table_name> PARTITION <partition_name> CASCADE;
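A minimal sketch (the tables are hypothetical; note the foreign key must be declared ON DELETE CASCADE):

```sql
CREATE TABLE dept (deptno NUMBER PRIMARY KEY);
CREATE TABLE emp  (empno  NUMBER PRIMARY KEY,
                   deptno NUMBER REFERENCES dept (deptno) ON DELETE CASCADE);

TRUNCATE TABLE dept CASCADE;  -- recursively truncates the child table emp as well
```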



RESOURCE role doesn’t include UNLIMITED TABLESPACE anymore.
No need to shutdown DB for enabling / disabling archive log mode.
Some new views / packages in oracle 12c
dba_pdbs

v$pdbs

cdb_data_files

dbms_pdb

dbms_qopatch

UTL_CALLSTACK

dbms_redact



Once again, sorry for the long gap before this new post.

For more than 5 years now I have also been familiar with SAP, and I recently went through SAP HANA training. Working in an SAP partner company, I keep hearing about SAP's innovations.

So I decided to write something on the latest hot topic:

SAP HANA v/s Oracle Exadata v/s IBM DB2 BLU acceleration

Hope this article helped you. I am expecting your suggestions/feedback.

It will help motivate me to write more articles….!!!!

Thanks & Regards,
Ganesh

Friday, June 29, 2018

How to Add Disk in ASM Disk Group

We have a raw disk, /dev/sde, which we are going to add to the existing disk group DATA.
 [root@stage ~]# fdisk /dev/sde
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x185ef87e.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
 Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
 WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').
 Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-8192, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-8192, default 8192):
Using default value 8192
 Command (m for help): w
The partition table has been altered!
 Calling ioctl() to re-read partition table.
Syncing disks.
 [root@stage ~]#  partprobe /dev/sde
 [root@stage ~]#  fdisk -l
Disk /dev/sdc: 1073 MB, 1073741824 bytes
34 heads, 61 sectors/track, 1011 cylinders
Units = cylinders of 2074 * 512 = 1061888 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x6dab55b0

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1        1011     1048376+  83  Linux

Disk /dev/sdd: 6442 MB, 6442450944 bytes
199 heads, 62 sectors/track, 1019 cylinders
Units = cylinders of 12338 * 512 = 6317056 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x9904376c

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1        1019     6286180   83  Linux

Disk /dev/mapper/VolGroup00-root: 24.2 GB, 24201134080 bytes
255 heads, 63 sectors/track, 2942 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
  
Disk /dev/mapper/VolGroup00-swap: 2113 MB, 2113929216 bytes
255 heads, 63 sectors/track, 257 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
 Disk /dev/sde: 8589 MB, 8589934592 bytes
64 heads, 32 sectors/track, 8192 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x185ef87e

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1        8192     8388592   83  Linux

SQL> SELECT GROUP_NUMBER,NAME,SECTOR_SIZE,BLOCK_SIZE,ALLOCATION_UNIT_SIZE,STATE,TYPE,FREE_MB FROM V$ASM_DISKGROUP;

GROUP_NUMBER NAME SECTOR_SIZE BLOCK_SIZE ALLOCATION_UNIT_SIZE STATE   TYPE   FREE_MB
------------ ---- ----------- ---------- -------------------- ------- ------ -------
           1 DATA         512       4096              1048576 MOUNTED EXTERN     438
           2 INDX         512       4096              1048576 MOUNTED EXTERN     795

[root@stage ~]# /etc/init.d/oracleasm createdisk DATA_0001 /dev/sde1
Marking disk "DATA_0001" as an ASM disk:                   [  OK  ]

[grid@stageoem1 ~]$ asmcmd -p
ASMCMD [+] > lsdsk
Path
ORCL:DATA_0000
ORCL:DATA_0001
ORCL:INDX_0000
ASMCMD [+] >
-- Identify the candidate Disks

SQL> SELECT
     NVL(a.name, '[CANDIDATE]')      disk_group_name
     , b.path                          disk_file_path
     , b.name                          disk_file_name
     , b.failgroup                     disk_file_fail_group
FROM
    v$asm_diskgroup a RIGHT OUTER JOIN v$asm_disk b USING (group_number)
ORDER BY
    a.name;
DISK_GROUP_NAME DISK_FILE_PATH  DISK_FILE_NAME DISK_FILE_FAIL_GROUP
--------------- --------------- -------------- --------------------
DATA            ORCL:DATA_0000  DATA_0000      DATA_0000
INDX            ORCL:INDX_0000  INDX_0000      INDX_0000
[CANDIDATE]     ORCL:DATA_0001


[grid@stage ~]$ sqlplus / as sysasm

-- Add disk to the group DATA (you need to login as sysasm not sysdba)
SQL> ALTER DISKGROUP DATA ADD DISK 'ORCL:DATA_0001';

Diskgroup altered.

SQL> SELECT
     NVL(a.name, '[CANDIDATE]')      disk_group_name
     , b.path                          disk_file_path
     , b.name                          disk_file_name
     , b.failgroup                     disk_file_fail_group
FROM
    v$asm_diskgroup a RIGHT OUTER JOIN v$asm_disk b USING (group_number)
ORDER BY
    a.name;

DISK_GROUP_NAME DISK_FILE_PATH  DISK_FILE_NAME DISK_FILE_FAIL_GROUP
--------------- --------------- -------------- --------------------
DATA            ORCL:DATA_0000  DATA_0000      DATA_0000
DATA            ORCL:DATA_0001  DATA_0001      DATA_0001
INDX            ORCL:INDX_0000  INDX_0000      INDX_0000

SQL> SELECT GROUP_NUMBER,NAME,SECTOR_SIZE,BLOCK_SIZE,ALLOCATION_UNIT_SIZE,STATE,TYPE,FREE_MB FROM V$ASM_DISKGROUP;

GROUP_NUMBER NAME SECTOR_SIZE BLOCK_SIZE ALLOCATION_UNIT_SIZE STATE   TYPE   FREE_MB
------------ ---- ----------- ---------- -------------------- ------- ------ -------
           1 DATA         512       4096              1048576 MOUNTED EXTERN    8627
           2 INDX         512       4096              1048576 MOUNTED EXTERN     795

SQL> SELECT MOUNT_STATUS,HEADER_STATUS,MODE_STATUS,STATE,TOTAL_MB,FREE_MB,NAME,PATH,LABEL FROM V$ASM_DISK;

MOUNT_STATUS HEADER_STATUS MODE_STATUS STATE  TOTAL_MB FREE_MB NAME      PATH           LABEL
------------ ------------- ----------- ------ -------- ------- --------- -------------- ---------
CACHED       MEMBER        ONLINE      NORMAL     2046    1720 DATA_0000 ORCL:DATA_0000 DATA_0000
CACHED       MEMBER        ONLINE      NORMAL     1023     795 INDX_0000 ORCL:INDX_0000 INDX_0000
CACHED       MEMBER        ONLINE      NORMAL     8191    6907 DATA_0001 ORCL:DATA_0001 DATA_0001
