Introduction
In this article, we cover how to set up Oracle ASM from scratch and how to add an ASM disk in an existing production environment.
- Setting up ASM from Scratch
- Adding ASM disks in an existing production environment
For the new server setup, we will cover additional steps such as installing packages and initializing Oracle ASM. This guide applies to all RPM-based Linux servers, whether Red Hat Enterprise Linux or CentOS.
To start on Red Hat Enterprise Linux or CentOS, we need to install the ASMLib packages. On Oracle Linux we can skip installing the Oracle ASM packages, because Oracle Linux servers ship with them by default.
First, let's start with the new Oracle server setup.
Steps we are going to perform for a New Server Setup
- Getting SAN disks from storage.
- Scanning for LUNs on the Linux servers.
- Installing the OracleASM packages and dependencies.
- Configuring multi-path entries and reloading the multi-path daemon.
- Verifying the created alias.
- Initializing the Oracle ASM module and creating the mount point.
- Configuring Oracle ASM with the DB user and group.
- Marking multi-path devices as Oracle ASM disks.
- Verifying the created ASM disks by listing them.
- Scanning for the existing ASM disks on secondary nodes in clustered DB servers.
User Account and Groups Required for ASM
Assume the server is already installed with Oracle Database 12c Standard or Enterprise Edition (or any other version). Since this is an existing installation, the following user and groups are already present on the server.
User: oracle
Groups: oinstall, dba, oper, backupdba, asmdba, asmadmin
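If you are building the server from scratch and these accounts do not exist yet, a minimal sketch to create them (group membership as listed above; UID/GID numbering is left to your environment's policy):
# groupadd oinstall
# groupadd dba
# groupadd oper
# groupadd backupdba
# groupadd asmdba
# groupadd asmadmin
# useradd -g oinstall -G dba,oper,backupdba,asmdba,asmadmin oracle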
Installing Oracle ASM Packages
Note: Skip this step if you are running any Oracle Linux server (OEL 6.x, 7.x, or 8.x). This step applies only to Red Hat Enterprise Linux 6.x, 7.x, and 8.x and CentOS servers.
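To confirm whether the packages are already present, for example on Oracle Linux, a quick check:
# rpm -qa | grep -i oracleasm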
Now let us begin setting up Oracle ASM. We need the packages below for creating ASM disks on RHEL-based Linux servers. If this server is already an existing DB server in production, skip this step.
Install the kernel module for oracleasm, the Oracle ASM management package, and a simple utility to scan for SAN storage disks (LUNs).
# yum install kmod-oracleasm -y
# yum install oracleasm-support -y
# yum install sg3_utils -y
The listed packages complete installation with output similar to the following.
Running transaction
Updating : sg3_utils-libs-1.37-17.el7.x86_64 1/4
Updating : sg3_utils-1.37-17.el7.x86_64 2/4
Cleanup : sg3_utils-1.37-12.el7.x86_64 3/4
Cleanup : sg3_utils-libs-1.37-12.el7.x86_64 4/4
Verifying : sg3_utils-1.37-17.el7.x86_64 1/4
Verifying : sg3_utils-libs-1.37-17.el7.x86_64 2/4
Verifying : sg3_utils-1.37-12.el7.x86_64 3/4
Verifying : sg3_utils-libs-1.37-12.el7.x86_64 4/4
Updated:
sg3_utils.x86_64 0:1.37-17.el7
Dependency Updated:
sg3_utils-libs.x86_64 0:1.37-17.el7
Initializing and Configuring OracleASM
Once the oracleasm package is installed, it cannot be used until it is initialized. The "oracleasm init" command below loads the "oracleasm" kernel module and creates the mount point under /dev/oracleasm.
# oracleasm init
[root@oradb1 ~]# oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Configuring "oracleasm" to use device physical block size
Mounting ASMlib driver filesystem: /dev/oracleasm
Configure oracleasm with the respective user and group by running "oracleasm configure -i", so that Oracle ASM starts at boot time.
The default user is oracle and the default group is oinstall; answer "y" to start oracleasm on reboot.
# oracleasm configure -i
[root@oradb1 ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
[root@oradb1 ~]#
Check the status of oracleasm.
# /etc/init.d/oracleasm status
[root@oradb1 ~]# /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[root@oradb1 ~]#
Creating ASM Disks
To create the ASM disks on multi-path devices, run the oracleasm command with the option and arguments shown below.
# /etc/init.d/oracleasm createdisk DATA_001 /dev/mapper/DATA_001
# /etc/init.d/oracleasm createdisk DATA_002 /dev/mapper/DATA_002
[root@oradb1 ~]# /etc/init.d/oracleasm createdisk DATA_001 /dev/mapper/DATA_001
Marking disk "DATA_001" as an ASM disk: [ OK ]
[root@oradb1 ~]# /etc/init.d/oracleasm createdisk DATA_002 /dev/mapper/DATA_002
Marking disk "DATA_002" as an ASM disk: [ OK ]
[root@oradb1 ~]#
Once the ASM disks are created, list them using "listdisks".
# /etc/init.d/oracleasm listdisks
[root@oradb1 ~]# /etc/init.d/oracleasm listdisks
DATA_001
DATA_002
[root@oradb1 ~]#
The created ASM disks will be under the location below.
# ls -lthr /dev/oracleasm/disks/
[root@oradb1 ~]# ls -lthr /dev/oracleasm/disks/
total 0
brw-rw---- 1 oracle oinstall 8, 16 Nov 22 02:10 DATA_001
brw-rw---- 1 oracle oinstall 8, 32 Nov 22 02:10 DATA_002
[root@oradb1 ~]#
Multinode Oracle Servers
If our setup has multiple clustered DB nodes, we just need to follow the steps below on them.
- Scan for SAN disks.
- Add the same multi-path entry (no changes to the alias and LUN ID).
- Reload the multi-path daemon.
- There is no need to create the Oracle ASM disks again; we only need to run a "scandisks" as shown below on all clustered DB nodes.
# /etc/init.d/oracleasm scandisks
# /etc/init.d/oracleasm listdisks
[root@oradb2 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@oradb2 ~]# /etc/init.d/oracleasm listdisks
DATA_001
DATA_002
[root@oradb2 ~]#
We can now list the newly created ASM disks.
Adding ASM disks in an existing Production Environment
Real-time scenarios in production Oracle Servers
In production there will be an incident or a request from the DBA team to add more space to an existing ASM disk group. Once the request reaches the Unix team, we need to increase the space in the ASM disk group, so it's time to ask the storage team to add more disks to those DB servers.
If it's a standalone DB server, we can ask the storage team to assign the required LUN to that single server. If we have two or more DB servers in HA mode, we need to request the same LUNs across all the servers.
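One way to confirm that the same LUN has been presented to every node is to compare its WWID on each server; a minimal sketch, using an example device name /dev/sdct (on RHEL/CentOS 7 the utility lives at /usr/lib/udev/scsi_id):
# /usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdct
The returned ID should be identical on all nodes for the same LUN.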
To read about how to create an ASM disk using UDEV on Linux servers, kindly go through the article linked below.
That guide only covers creating an ASM disk using UDEV. So let us see how to mark a SAN disk through multi-path for creating an ASM disk with reliability and high availability across multiple nodes.
Check the Running Kernel
First, check which kernel is currently in use. If it's a UEK kernel, we need to check for the oracleasm package.
If Oracle ASM is installed, the steps below can be carried out.
# uname -r
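A quick sketch for both checks; the kernel string shown is only a sample, and rpm -q verifies that the ASMLib packages installed earlier are present:
# uname -r
3.10.0-1062.el7.x86_64
# rpm -q kmod-oracleasm oracleasm-support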
Scanning for Attached SAN LUNs
Once we get the LUNs from the storage team, we have to scan for them manually using the rescan command below.
# rescan-scsi-bus.sh
Once the scan completes, the newly added disks appear with a fresh timestamp under the location below.
# cd /dev/disk/by-id/
# ls -lthr
Output for reference
lrwxrwxrwx 1 root root 10 Nov 25 20:55 wwn-0x6ac61751004ce7d050207ee300000129 -> ../../sdcs
lrwxrwxrwx 1 root root 10 Nov 25 20:55 scsi-36ac61751004ce7d050207f4c0000012a -> ../../sdct
lrwxrwxrwx 1 root root 10 Nov 25 20:55 wwn-0x6ac61751004ce7d050207f4c0000012a -> ../../sdct
lrwxrwxrwx 1 root root 10 Nov 25 20:55 scsi-36ac61751004ce7d050207fa10000012b -> ../../sdcu
At the same time, we can observe the new disks in the dmesg output.
# dmesg
Sample output of dmesg
scsi 1:0:1:137: Direct-Access     HUAWEI   XSG1             4303 PQ: 0 ANSI: 6
sd 1:0:1:137: Attached scsi generic sg565 type 0
sd 1:0:1:124: [sduc] Attached SCSI disk
sd 1:0:1:134: [sdum] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
sd 1:0:1:135: [sdun] Write Protect is off
sd 1:0:1:135: [sdun] Mode Sense: 97 00 00 08
sd 1:0:1:137: [sdup] 4194304 512-byte logical blocks: (2.14 GB/2.00 GiB)
Configure Multi-path for SAN LUNs
Once we have confirmed the new disks by matching them with the scanned output, move forward to configure multipath.
# vim /etc/multipath.conf
At the bottom of the file, add the scanned LUN information as follows.
wwid = LUN ID (this must match the scanned ID provided by the storage team)
alias = OCS_DATA_089 (a reference label; the name is requested by the DBA team)
multipath {
        wwid 36ac61751004ce7d050208e2900000146
        alias OCS_DATA_089
}
multipath {
        wwid 36ac61751004ce7d050208e7d00000147
        alias OCS_ARCHIVE_006
}
Save and quit the configuration file using :wq!.
Once multi-path is configured, reload the service to apply the new changes.
# sudo /etc/init.d/multipathd reload
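On systemd-based servers (RHEL/CentOS 7 and later), the same reload can be done with systemctl:
# systemctl reload multipathd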
List the newly labelled alias using "multipath -ll"; the full output will be long. To get the output for a specific alias, we can pass the alias as an argument or do a grep.
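For example, a quick sketch to pull a single alias out of the full listing (the -A line count is an assumption; adjust it to the number of paths shown for your LUNs):
# multipath -ll | grep -A 6 OCS_DATA_089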
Listing the configured Multi-path SAN Disk
# sudo multipath -ll OCS_DATA_089
The output will be similar to the listing below. We need to verify the number of paths, the path status (active), the size, the alias name, and the state (ready).
OCS_DATA_089 (36ac61751004ce7d050208e2900000146) dm-137 HUAWEI,XSG1
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 0:0:0:124 sddv 71:208  active ready running
  |- 1:0:0:124 sdor 129:368 active ready running
  |- 0:0:1:124 sdjg 8:416   active ready running
  `- 1:0:1:124 sduc 66:576  active ready running
OCS_ARCHIVE_006 (36ac61751004ce7d05020336c000000e0) dm-24 HUAWEI,XSG1
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 0:0:0:22 sdx 65:112   active ready running
  |- 1:0:0:22 sdkt 67:272  active ready running
  |- 0:0:1:22 sdfi 130:64  active ready running
  `- 1:0:1:22 sdqe 131:480 active ready running
Verify the device name and alias under the location below. Both the dm device value and the alias need to match the "multipath -ll" output above.
# ls -lthr /dev/mapper/OCS_DATA_089
lrwxrwxrwx 1 root root 9 Nov 26 04:27 /dev/mapper/OCS_DATA_089 -> ../dm-137
# ls -lthr /dev/mapper/OCS_ARCHIVE_006
lrwxrwxrwx 1 root root 8 Nov 26 04:27 /dev/mapper/OCS_ARCHIVE_006 -> ../dm-24
Create ASM Disks on Multi-path Devices
Once everything is verified and looks good, create the ASM disk using the following command.
# /etc/init.d/oracleasm createdisk OCS_DATA_089 /dev/mapper/OCS_DATA_089
- oracleasm is the command used to manage and mark ASM disks.
- createdisk is the option that marks a disk as an ASM disk.
- OCS_DATA_089 is the label of the ASM disk.
- /dev/mapper/OCS_DATA_089 is the device we are about to use for the ASM disk.
Once created, the ASM disk will appear under /dev/oracleasm/disks/ with oracle:oinstall ownership if it's a standalone node. If you run a grid setup, it may be oracle:asmadmin. The ownership will change according to your Oracle Database setup.
Check this file for the ownership information.
# cat /etc/sysconfig/oracleasm-_dev_oracleasm
Look for the ORACLEASM_UID=oracle and ORACLEASM_GID=oinstall entries.
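For reference, a representative excerpt of the file; the values reflect what was chosen during "oracleasm configure -i", and your copy may contain additional keys:
# ORACLEASM_ENABLED: 'true' means to load the driver on boot.
ORACLEASM_ENABLED=true
# ORACLEASM_UID: Default UID owning the /dev/oracleasm mount point.
ORACLEASM_UID=oracle
# ORACLEASM_GID: Default GID owning the /dev/oracleasm mount point.
ORACLEASM_GID=oinstall
# ORACLEASM_SCANBOOT: 'true' means scan for ASM disks on boot.
ORACLEASM_SCANBOOT=true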
Listing the created ASM Disks
Once the ASM disks are created, list and verify them.
# ls -lthr /dev/oracleasm/disks/OCS_DATA_089
brw-rw---- 1 oracle oinstall 252, 137 Nov 30 13:11 /dev/oracleasm/disks/OCS_DATA_089
# ls -lthr /dev/oracleasm/disks/OCS_ARCHIVE_006
brw-rw---- 1 oracle oinstall 252, 24 Nov 30 12:42 /dev/oracleasm/disks/OCS_ARCHIVE_006
List the created ASM disks by running:
# /etc/init.d/oracleasm listdisks
Note: If the ASM disk needs to be available on HA nodes (multiple servers), we just need to repeat steps 2 to 5 above and run the "scandisks" command below on those nodes.
# /etc/init.d/oracleasm scandisks
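For reference, a condensed sketch of the full per-node sequence, using the same commands covered earlier in this guide:
# rescan-scsi-bus.sh                  (scan for the new SAN LUNs)
# vim /etc/multipath.conf             (add the identical wwid/alias entries)
# sudo /etc/init.d/multipathd reload  (reload the multi-path daemon)
# /etc/init.d/oracleasm scandisks     (pick up the ASM disk labels)
# /etc/init.d/oracleasm listdisks     (confirm the disks are visible)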
That's it; we are done with creating ASM disks on single or multiple servers.
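Optionally, to double-check which backing device a label resolves to, the oracleasm "querydisk" option can be used; a quick example with the OCS_DATA_089 label from above:
# /etc/init.d/oracleasm querydisk -d -v OCS_DATA_089
It should report the label as a valid ASM disk and show the device behind it.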
If you are running your Oracle Database server on a virtual machine, follow the guide below.
Only for Virtual Environment (Virtual Machines)
If your Oracle Database server runs in a virtual environment such as VMware, Red Hat Virtualization, or Oracle OVM, we should create a partition on the disk before marking it as an ASM disk.
While running the "dmesg" command, we can notice the new disks added to the virtual machine at the end of the output.
# dmesg
[ 1395.581989] sd 3:0:0:0: [sdb] Write Protect is off
[ 1395.581992] sd 3:0:0:0: [sdb] Mode Sense: 77 00 00 08
[ 1395.583252] sd 3:0:0:0: [sdb] Write cache: disabled, read cache: disabled, doesn't support DPO or FUA
[ 1395.584558] sd 3:0:0:1: Attached scsi generic sg3 type 0
[ 1395.600922] sd 3:0:0:1: [sdc] 999948288 512-byte logical blocks: (511 GB/476 GiB)
[ 1395.601993] sd 3:0:0:1: [sdc] Write Protect is off
[ 1395.601998] sd 3:0:0:1: [sdc] Mode Sense: 77 00 00 08
[ 1395.603854] sd 3:0:0:1: [sdc] Write cache: disabled, read cache: disabled, doesn't support DPO or FUA
[ 1395.625901] sd 3:0:0:0: [sdb] Attached SCSI disk
[ 1395.631273] sd 3:0:0:1: [sdc] Attached SCSI disk
[ 1450.703076] scsi host4: iSCSI Initiator over TCP/IP
[ 1450.707262] scsi 4:0:0:0: Direct-Access     COMPAQ-HSV110 VIRTUAL-DISK   0    PQ: 0 ANSI: 4
[ 1450.708944] sd 4:0:0:0: Attached scsi generic sg4 type 0
[ 1450.709541] sd 4:0:0:0: [sdd] 999948288 512-byte logical blocks: (511 GB/476 GiB)
[ 1450.709872] sd 4:0:0:0: [sdd] Write Protect is off
[ 1450.709875] sd 4:0:0:0: [sdd] Mode Sense: 77 00 00 08
[ 1450.709939] scsi 4:0:0:1: Direct-Access     COMPAQ-HSV110 VIRTUAL-DISK   0    PQ: 0 ANSI: 4
[ 1450.711674] sd 4:0:0:0: [sdd] Write cache: disabled, read cache: disabled, doesn't support DPO or FUA
[ 1450.715162] sd 4:0:0:1: [sde] 999948288 512-byte logical blocks: (511 GB/476 GiB)
[ 1450.715533] sd 4:0:0:1: [sde] Write Protect is off
[ 1450.715536] sd 4:0:0:1: [sde] Mode Sense: 77 00 00 08
[ 1450.715768] sd 4:0:0:1: Attached scsi generic sg5 type 0
[ 1450.718097] sd 4:0:0:1: [sde] Write cache: disabled, read cache: disabled, doesn't support DPO or FUA
[ 1450.724506] sd 4:0:0:0: [sdd] Attached SCSI disk
[ 1450.732771] sd 4:0:0:1: [sde] Attached SCSI disk
[root@oradb1 ~]#
Then list the disks using the fdisk command.
# fdisk -l
Disk /dev/sdb: 512.0 GB, 511973523456 bytes
255 heads, 63 sectors/track, 62243 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdc: 512.0 GB, 511973523456 bytes
255 heads, 63 sectors/track, 62243 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Once the partitions are created, create the ASM disks on those partitions instead of pointing to a raw disk.
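The partitioning itself can be done with fdisk; a minimal sketch, assuming the new VM disk is /dev/sdb and using a hypothetical label VM_DATA_001:
# fdisk /dev/sdb          (n = new primary partition, accept the defaults, w = write)
# partprobe /dev/sdb      (re-read the partition table without a reboot)
# /etc/init.d/oracleasm createdisk VM_DATA_001 /dev/sdb1
Then continue with the same listdisks and scandisks verification shown earlier.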
Conclusion
To mark a SAN disk as an ASM disk, we need to get the disk from the storage team and assign an alias to identify it. Then reload the multi-path service and mark the SAN disk as an ASM disk. If you have any query, kindly leave a comment in the comment section below, and we will get back to you with a solution.
Thanks for this useful post.
We are building a 2-node RAC with SAN storage. Once the storage is ready, what commands should we execute on the Linux box on both servers?
Getting SAN disks from Storage.
Scanning for LUN’s in Oracle Linux servers.
Thanks,
Satish
@Satish,
I believe you have not gone through the full post. Kindly read from "Adding ASM disks in an existing Production Environment".
Thanks & regards,
Babin Lonston
Thank you for the prompt reply and valuable advice.
I have another query: since I am working on a host-based migration (migrating Oracle ASM disks from legacy SAN storage to new SAN storage), is it mandatory to create partitions on the LUNs in the new SAN for ASM disks?
If yes, which utility would you prefer, "kpartx" or "fdisk", to partition the LUNs in the new SAN for ASM disks in an Oracle RAC 12c R1 database?
Many thanks in advance
@Masooma,
It’s my pleasure to help you.
No, you don't need to create a partition on SAN LUNs. If your setup is a virtual machine, then it requires a partition.
To create a partition in a VMware environment, we use fdisk.
Thanks & Regards,
Babin Lonston
Thank you for such a wonderful article.
I am planning to move ASM disks from one SAN to another SAN, have configured DM-Multipath, and managed to create ASM disks in the new SAN. I am using the dm-x devices to create the new ASM disks (as listed below). Will it cause any issue in an Oracle RAC environment in terms of storage device naming persistence (especially after a server reboot)?
/etc/init.d/oracleasm createdisk new_data /dev/dm-26
@Masooma,
While creating a multipath device you are defining an alias in multipath.conf; it's good practice to create the ASM disk using that alias. The dm device name can change during a reboot.
Regards,
Babin Lonston
What if I don't use an alias to identify the disk, will it create any problem? I didn't do it on my production, and when I mapped it, it looks like below. Can you help?
[root@racnode3 ~]# /etc/init.d/oracleasm querydisk -d -v DSK_CRS1
Disk "DSK_CRS1" is a valid ASM disk on device /dev/dm-24[253,24]
[root@racnode4 ~]# /etc/init.d/oracleasm querydisk -d -v DSK_CRS1
Disk "DSK_CRS1" is a valid ASM disk on device /dev/sdaf[65,240]
@Mughees,
No, it won't create any problem. But once you have n number of disks, it will be easier to identify the right disk if you have an alias.
Regards,
Babin Lonston
I'd prefer using udev rules or AFD, because ASMLib is (or will be) deprecated.
@Wolfrootel,
Thanks for the feedback. Kindly find our old post related to UDEV.