Configuring OpenFiler for RAC

After the OpenFiler installation, the biggest task is to configure it as shared storage for our RAC installation. In this blog post, I will walk through that configuration.

Before you start this machine, we need to add a disk to it to use as storage.

[Screenshot 1]

Click on Machine Settings and click Add. The screen below will show up; click Yes on the administrator privilege prompt.

[Screenshot 2]

Select Hard Disk and press Next.

[Screenshot 3]

Select SCSI and press Next.

[Screenshot 4]

We want to create a new virtual disk, so select that option and press Next.

[Screenshot 5]

Give the disk a size and check “Store virtual disk as a single file”. Press Next.

[Screenshot 6]

You can keep the default name or give it any name you want. Click Finish.

[Screenshot 7]

So now we have two disks, 8 GB and 40 GB. The 8 GB disk holds the OpenFiler OS, and the 40 GB disk is the one we just added for our storage.

[Screenshot 8]

Now start the VM and connect as root to view the disk from the command line.

/dev/sdb – the disk has been added and is visible.

[Screenshot 9]
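If you want to double-check from the shell, the standard tools are enough (a quick sketch; the device name may differ on your VM):

# List every disk the kernel can see; the new 40 GB disk
# should show up as /dev/sdb next to the 8 GB OS disk.
fdisk -l | grep "^Disk /dev/sd"

# Alternatively, read the kernel's partition table directly.
cat /proc/partitions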

Click on Services to enable or disable the services required to configure this storage. For now, we only need to enable the iSCSI Target service.

[Screenshots 10 and 11]

Click System and configure the two private IP addresses:

[Screenshot 12]

[Screenshot 13]

So we configured two private IPs. The private network is responsible only for intercommunication among the nodes in the RAC cluster.

[Screenshot 14]

Now click on Volumes to finally configure the disk we attached for storage. There is nothing on the screen as of now; click on “create new physical volume”.

[Screenshot 15]

You will see the two disks attached to this storage server: one is of course the 8 GB OS disk, and the other is the 40 GB disk.

[Screenshot 16]

Click on /dev/sdb. You will see the screen below; click the Create button below to create a partition on /dev/sdb.

[Screenshot 17]

Once you are done with that, one partition of 38.14 GB (/dev/sdb1) will be created; check below. The remaining 1.85 GB is used by OpenFiler to maintain metadata about the disk.

[Screenshot 18]

Volume Group Management:

The next step is to create a volume group. We will create just one volume group on our newly created partition /dev/sdb1.

[Screenshot 19]

I simply named it “storage” and assigned the whole disk to it.

[Screenshot 20]

Once you click on “Add Volume Group”, you will see that the volume group has been created with the name “storage”.

[Screenshot 21]

Now create logical volumes in the volume group. We created CRS, DATA and FRA logical volumes; the steps below have to be repeated for each logical volume:

[Screenshot 22]

The CRS logical volume has been created:

[Screenshot 23]

The ASM01 volume has been created for DATA.

[Screenshot 24]

The last one is ASM02 for FRA.

[Screenshot 25]

Note: Strictly speaking, there is no need to create three or four logical volumes; a single logical volume is enough for testing.

So now we have three logical volumes, as shown below:

[Screenshot 26]
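Since OpenFiler builds these volumes on standard Linux LVM, you can optionally cross-check them from the OpenFiler shell (a sketch; the names are the ones we entered in the GUI):

# Show the "storage" volume group we created on /dev/sdb1.
vgdisplay storage

# List the logical volumes (crs, asm01, asm02) inside it.
lvs storage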

Now start the iSCSI Target service so that these logical volumes can be accessed from the RAC nodes.

[Screenshot 27]

So it has now been started:

[Screenshot 28]
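The same thing can be done from the OpenFiler command line if you prefer (a sketch, assuming the service is named iscsi-target as on stock OpenFiler):

# Start the iSCSI target service now...
service iscsi-target start

# ...and make it come up automatically on boot.
chkconfig --level 345 iscsi-target on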

We now have three iSCSI logical volumes. Before a client can actually start accessing them, we need to create an iSCSI target for each of these logical volumes.

Each iSCSI logical volume will be mapped to a specific iSCSI target, and network permission to that target will be granted to both Oracle nodes. If we have three or more nodes, the same steps are followed for each node.

In this example we will use a one-to-one mapping between iSCSI logical volumes and iSCSI targets.

Creating and configuring an iSCSI target is a three-step process:

1. Create a unique Target IQN (the universal name for an iSCSI target)
2. Map the iSCSI target to one iSCSI logical volume
3. Grant access to both Oracle RAC nodes

These three steps have to be repeated for each of our three iSCSI logical volumes.

Create a new Target IQN:

From the OpenFiler Storage Control Center, navigate to [Volumes] / [iSCSI Targets]. Verify that the grey sub-tab “Target Configuration” is selected. This page allows you to create a new iSCSI target. A default value is automatically generated for the name of the new iSCSI target (better known as the “Target IQN”).

Press the Add button; you will find the details below:

[Screenshots 29 and 30]

LUN Mapping:

After creating the new iSCSI target, the next step is to map the appropriate iSCSI logical volumes to it. Under the “Target Configuration” sub-tab, verify the correct iSCSI target is selected in the section “Select iSCSI Target”. If not, use the pull-down menu to select the correct iSCSI target and hit the “Change” button.

Next, click on the grey sub-tab named “LUN Mapping” (next to “Target Configuration” sub-tab). Locate the appropriate iSCSI logical volume (/dev/storage/crs in this case) and click the “Map” button. You do not need to change any settings on this page.

[Screenshot 31]

After you map it, it will be shown like below:

[Screenshot 32]

Network ACL:

Before an iSCSI client can have access to the newly created iSCSI target, it needs to be granted the appropriate permissions. A while back, we configured network access in OpenFiler for two hosts (the Oracle RAC nodes). These are the two nodes that will need to access the new iSCSI targets through the storage (private) network. We now need to grant both of the Oracle RAC nodes access to the new iSCSI target.

Click on the grey sub-tab named “Network ACL” (next to “LUN Mapping” sub-tab). For the current iSCSI target, change the “Access” for both hosts from ‘Deny’ to ‘Allow’ and click the ‘Update’ button.

[Screenshots 33 and 34]

So now both nodes have access to this LUN.

The same steps have to be followed for the other two iSCSI targets. So finally we created three IQNs, mapped them to three LUNs, and granted the proper permissions:

iqn.2006-01.com.openfiler:tsn.281dc4023cd6 – /dev/storage/crs
iqn.2006-01.com.openfiler:tsn.b71534a43aa – /dev/storage/asm01
iqn.2006-01.com.openfiler:tsn.98ca329824f2 – /dev/storage/asm02
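OpenFiler's iSCSI implementation is based on the iSCSI Enterprise Target, so you can sanity-check the target-to-LUN mapping on the storage server itself (a sketch; the proc paths assume stock OpenFiler with IET):

# Each target IQN should be listed with the LUN mapped to it.
cat /proc/net/iet/volume

# Once the RAC nodes log in, their sessions show up here.
cat /proc/net/iet/session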

Now edit /etc/initiators.deny and comment out these IQNs so that they can be accessed by the RAC nodes; otherwise they won't be reachable.

Before commenting it out:

[Screenshot 35]

After commenting it out:

[Screenshot 36]
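For reference, the change is just a leading '#' on each line. On my setup, /etc/initiators.deny ended up looking roughly like this (an illustration of the idea, not an exact dump of the file):

# /etc/initiators.deny -- commented out so the RAC nodes
# are no longer denied access to these targets
# iqn.2006-01.com.openfiler:tsn.281dc4023cd6 ALL
# iqn.2006-01.com.openfiler:tsn.b71534a43aa ALL
# iqn.2006-01.com.openfiler:tsn.98ca329824f2 ALL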

Now, how do we access these iSCSI targets from the RAC nodes?

Go to Node 1 and install the RPM from the repository, or use the yum installer:

rpm -Uvh iscsi-initiator-utils*

[Screenshot 37]
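If the node has a yum repository configured, the equivalent is (a sketch; the package and service names are the standard EL ones):

# Install the iSCSI initiator tools from the repository.
yum -y install iscsi-initiator-utils

# Start the initiator daemon and enable it on boot.
service iscsid start
chkconfig iscsid on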

Now we have to discover these targets using the commands below.

First, discover the targets: iscsiadm -m discovery -t st -p 192.168.0.150 (this is our storage IP). After you execute this command, it should display all three IQN values we created.

Next, log in to the targets using the command: iscsiadm -m node -l -p 192.168.0.150

Finally, we need to configure these IQNs to log in automatically at system startup so that we don't have to repeat this after every boot:

iscsiadm -m node -T <IQN_Value> -p 192.168.0.150 --op update -n node.startup -v automatic

[Screenshot 38]
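If you would rather script the last step than run it once per target, a small loop over the discovered IQNs works (a sketch; it assumes the storage IP 192.168.0.150 used above):

# Discover the targets and log in to all of them.
iscsiadm -m discovery -t sendtargets -p 192.168.0.150
iscsiadm -m node -l -p 192.168.0.150

# Set node.startup to automatic for every discovered IQN
# (the second field of "iscsiadm -m node" output).
for iqn in $(iscsiadm -m node | awk '{print $2}'); do
  iscsiadm -m node -T $iqn -p 192.168.0.150 --op update -n node.startup -v automatic
done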

Do the same thing on Node 2 as well and we are sorted.

[Screenshot 39]

Now check the fdisk -l output on both machines and you will be able to see the storage disks:

[root@node1 ~]# fdisk -l

Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00049bc1

Device Boot Start End Blocks Id System
/dev/sda1 * 1 26 204800 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 26 2611 20765696 8e Linux LVM

Disk /dev/mapper/VolGroup00-LogVol01: 15.0 GB, 14969470976 bytes
255 heads, 63 sectors/track, 1819 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/VolGroup00-LogVol00: 6291 MB, 6291456000 bytes
255 heads, 63 sectors/track, 764 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdb: 33.6 GB, 33554432000 bytes
64 heads, 32 sectors/track, 32000 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdd: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdc: 5234 MB, 5234491392 bytes
162 heads, 62 sectors/track, 1017 cylinders
Units = cylinders of 10044 * 512 = 5142528 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

[root@node1 ~]#

So we have got three disks: /dev/sdb, /dev/sdc and /dev/sdd.

Now we can configure these disks to be used by ASM.
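As a preview of that next step, each disk typically gets a single partition spanning the whole disk before it is handed to ASM (a sketch; repeat for /dev/sdc and /dev/sdd):

# Create one primary partition covering the whole disk.
# Inside fdisk, the keystrokes are: n, p, 1, <Enter>, <Enter>, w
fdisk /dev/sdb

# Re-read the partition table so /dev/sdb1 appears.
partprobe /dev/sdb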

Any questions, let me know…

Thanks!


7 thoughts on “Configuring OpenFiler for RAC”

  1. I followed your instructions and I have this problem I do not understand.

    [root@racnode1 ~]$ iscsiadm -m discovery -t sendtargets -p openfiler
    192.168.192.104:3260,1 iqn.2006-01.com.openfiler:tsn.fra.277069a86a08
    192.168.43.104:3260,1 iqn.2006-01.com.openfiler:tsn.fra.277069a86a08
    192.168.192.104:3260,1 iqn.2006-01.com.openfiler:tsn.data.e405525e8231
    192.168.43.104:3260,1 iqn.2006-01.com.openfiler:tsn.data.e405525e8231
    [root@racnode1 ~]$ iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.data.e405525e8231 -p openfiler:3260 -l -n node.startup -v automatic
    automatic
    Logging in to [iface: default, target: iqn.2006-01.com.openfiler:tsn.data.e405525e8231, portal: 192.168.192.104,3260] (multiple)
    Login to [iface: default, target: iqn.2006-01.com.openfiler:tsn.data.e405525e8231, portal: 192.168.192.104,3260] successful.
    [root@racnode1 ~]$ iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.fra.277069a86a08 -p openfiler:3260 -l -n node.startup -v automatic
    Logging in to [iface: default, target: iqn.2006-01.com.openfiler:tsn.fra.277069a86a08, portal: 192.168.192.104,3260] (multiple)
    Login to [iface: default, target: iqn.2006-01.com.openfiler:tsn.fra.277069a86a08, portal: 192.168.192.104,3260] successful.
    [root@racnode1 ~]$ fdisk -l | grep /dev/sd
    Disk /dev/sda: 34.4 GB, 34359738368 bytes, 67108864 sectors
    /dev/sda1 * 2048 2099199 1048576 83 Linux
    /dev/sda2 2099200 67108863 32504832 8e Linux LVM
    Disk /dev/sdb: 32.7 GB, 32749125632 bytes, 63963136 sectors
    Disk /dev/sdc: 32.7 GB, 32749125632 bytes, 63963136 sectors
    [root@racnode1 ~]$ ls -l /dev/disk/by-path/
    total 0
    lrwxrwxrwx. 1 root root 9 Jul 24 13:31 ip-192.168.192.104:3260-iscsi-iqn.2006-01.com.openfiler:tsn.data.e405525e8231-lun-0 -> ../../sdb
    lrwxrwxrwx. 1 root root 9 Jul 24 13:31 ip-192.168.192.104:3260-iscsi-iqn.2006-01.com.openfiler:tsn.fra.277069a86a08-lun-0 -> ../../sdc
    lrwxrwxrwx. 1 root root 9 Jul 24 13:28 pci-0000:00:01.1-ata-2.0 -> ../../sr0
    lrwxrwxrwx. 1 root root 9 Jul 24 13:28 pci-0000:00:07.0-ata-1.0 -> ../../sda
    lrwxrwxrwx. 1 root root 10 Jul 24 13:28 pci-0000:00:07.0-ata-1.0-part1 -> ../../sda1
    lrwxrwxrwx. 1 root root 10 Jul 24 13:28 pci-0000:00:07.0-ata-1.0-part2 -> ../../sda2
    [root@racnode1 ~]$ reboot

    [root@racnode1 ~]$ fdisk -l | grep /dev/sd
    Disk /dev/sda: 34.4 GB, 34359738368 bytes, 67108864 sectors
    /dev/sda1 * 2048 2099199 1048576 83 Linux
    /dev/sda2 2099200 67108863 32504832 8e Linux LVM
    Disk /dev/sdb: 32.7 GB, 32749125632 bytes, 63963136 sectors
    Disk /dev/sdc: 32.7 GB, 32749125632 bytes, 63963136 sectors
    Disk /dev/sdd: 32.7 GB, 32749125632 bytes, 63963136 sectors
    Disk /dev/sde: 32.7 GB, 32749125632 bytes, 63963136 sectors
    [root@racnode1 ~]$ ls -l /dev/disk/by-path/
    total 0
    lrwxrwxrwx. 1 root root 9 Jul 24 13:32 ip-192.168.192.104:3260-iscsi-iqn.2006-01.com.openfiler:tsn.data.e405525e8231-lun-0 -> ../../sde
    lrwxrwxrwx. 1 root root 9 Jul 24 13:32 ip-192.168.192.104:3260-iscsi-iqn.2006-01.com.openfiler:tsn.fra.277069a86a08-lun-0 -> ../../sdb
    lrwxrwxrwx. 1 root root 9 Jul 24 13:32 ip-192.168.43.104:3260-iscsi-iqn.2006-01.com.openfiler:tsn.data.e405525e8231-lun-0 -> ../../sdd
    lrwxrwxrwx. 1 root root 9 Jul 24 13:32 ip-192.168.43.104:3260-iscsi-iqn.2006-01.com.openfiler:tsn.fra.277069a86a08-lun-0 -> ../../sdc
    lrwxrwxrwx. 1 root root 9 Jul 24 13:32 pci-0000:00:01.1-ata-2.0 -> ../../sr0
    lrwxrwxrwx. 1 root root 9 Jul 24 13:32 pci-0000:00:07.0-ata-1.0 -> ../../sda
    lrwxrwxrwx. 1 root root 10 Jul 24 13:32 pci-0000:00:07.0-ata-1.0-part1 -> ../../sda1
    lrwxrwxrwx. 1 root root 10 Jul 24 13:32 pci-0000:00:07.0-ata-1.0-part2 -> ../../sda2

    Now I notice that ../../sdb and ../../sdc have been linked to a different iSCSI device.
    How do I get them bound to each other like:

    ip-192.168.192.104:3260-iscsi-iqn.2006-01.com.openfiler:tsn.data.e405525e8231-lun-0 -> ../../sde

    always to each other…

    Whenever I reboot the machine the assignments change.

    Cheers.
    Using OL7.3

  2. Can you tell me what is in the files /etc/scsi_id.config and /etc/iscsi/iscsid.conf?

    1. For /etc/scsi_id.config:
    If it is blank then add the following line to your /etc/scsi_id.config file:
    options=-g

    2. For /etc/iscsi/iscsid.conf config:
    node.startup = automatic

    After this, make sure you complete the steps in this link – https://getsomeoracle.wordpress.com/2016/03/08/udev-scsi-rules-configuration-for-asm/

    Then restart your machine and check. Let me know if it works for you.

    Thanks!

    1. As for points 1 and 2, they are set as you asked.
      As for the instructions in the link, I am using the OEL 7 command: /usr/lib/udev/scsi_id -g -u -d

      [root@racnode1 sdb]$ for i in `cat /proc/partitions | awk {'print $4'} | grep sd`; do echo "### $i: `/usr/lib/udev/scsi_id -g -u /dev/$i`"; done
      ### sda: 1ATA_QEMU_HARDDISK_QM00005
      ### sda1: 1ATA_QEMU_HARDDISK_QM00005
      ### sda2: 1ATA_QEMU_HARDDISK_QM00005
      ### sdb: 14f504e46494c45527762353259332d763344562d48783758
      ### sdc: 14f504e46494c45526532784a33322d6f4937312d6d574b62
      ### sdd: 14f504e46494c45525864473364742d465866502d6a377466
      ### sde: 14f504e46494c45526c4c733031612d6a776d522d4f666659
      ### sdf: 14f504e46494c4552536e7576747a2d734965792d646a6451
      ### sdg: 14f504e46494c45525450624337432d7a4430562d6c643930
      ### sdh: 14f504e46494c45524e356148394e2d304f79652d7054614f
      ### sdi: 14f504e46494c4552694b303562472d6d5830702d59304578
      ### sdj: 14f504e46494c455264726758507a2d655050312d6e6e596d
      ### sdk: 14f504e46494c45525249724237512d433279392d4e7a6d45
      [root@racnode1 sdb]$

      [root@racnode1 sdb]$ cat /etc/udev/rules.d/99-oracle-asmdevices.rules
      KERNEL=="sdb1", BUS=="scsi", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="14f504e46494c45527762353259332d763344562d48783758", NAME="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"

      [root@racnode1 sdb]$ partprobe /dev/sdb1
      Error: Could not stat device /dev/sdb1 – No such file or directory.

      => an error, but still I continue <=

      [root@racnode1 sdb]$ udevadm test /block/sdb/sdb1
      calling: test
      version 219
      This program is for debugging only, it does not run any program
      specified by a RUN key. It may show incorrect results, because
      some values may be different, or not available at a simulation run.

      === trie on-disk ===
      tool version: 219
      file size: 7259752 bytes
      header size 80 bytes
      strings 1887992 bytes
      nodes 5371680 bytes
      Load module index
      Created link configuration context.
      timestamp of '/etc/udev/rules.d' changed
      Reading rules file: /usr/lib/udev/rules.d/10-dm.rules
      …..
      …..
      Reading rules file: /etc/udev/rules.d/99-oracle-asmdevices.rules
      unknown key 'BUS' in /etc/udev/rules.d/99-oracle-asmdevices.rules:1
      invalid rule '/etc/udev/rules.d/99-oracle-asmdevices.rules:1'
      Reading rules file: /usr/lib/udev/rules.d/99-systemd.rules
      rules contain 24576 bytes tokens (2048 * 12 bytes), 13123 bytes strings
      1998 strings (24978 bytes), 1342 de-duplicated (12512 bytes), 657 trie nodes used
      unable to open device '/sys/block/sdb/sdb1'
      Unload module index
      Unloaded link configuration context.
      [root@racnode1 sdb]$

      Here I have an invalid rule: unknown key 'BUS' (BUS=="scsi" is in the line).

      I am missing something, but what?

      This is my base install; maybe it helps you help me.

      setup OL7.3
      disable kdump
      network & hostname : Hostname racnode1.localdomain
      software selection minimal install (do not change)
      installation source : local media (do not change)
      Date & Time : Europe / Amsterdam timezone
      Keyboard : (do not touch)
      Language support : (do not touch)
      Begin installation
      Root password : think of one
      user create : your choice

      ##### user root is used for the following
      ##### connection via console as ssh is not possible yet :
      nmcli con show
      nmcli con del ens19
      nmcli con add type ethernet con-name public ifname ens19 ip4 192.168.192.108/24 # adjust for racnode
      nmcli con up public
      ##### connect via ssh now
      ##### Modify public connection
      nmcli con mod public gw4 192.168.192.1 ipv4.dns "192.168.192.102" # Always the same !!!!
      ##### clear network config from default setup #####
      nmcli con del ens18
      nmcli con del ens20
      ##### create new network config #####
      nmcli con add type ethernet con-name nat ifname ens18 ip4 10.0.2.15/8 gw4 10.0.2.2 # Always the same !!!!
      nmcli con add type ethernet con-name private ifname ens20 ip4 192.168.43.108/24 # adjust for racnode
      ##### activate new network config #####
      nmcli con up nat
      nmcli con up private
      reboot
      ##### update the system
      yum -y update
      ##### disable firewall
      systemctl disable firewalld
      service firewalld stop
      ##### install needed packages
      yum -y install xorg-x11-apps xorg-x11-fonts-misc wget rpm net-tools cifs-utils ntp
      ##### iscsi stuff #####
      yum -y install iscsi-initiator-utils #device-mapper-multipath oracleasm-support kmod-oracleasm
      ##### oracle environment setup
      yum -y install oracle-rdbms-server-12cR1-preinstall.x86_64

      Many thanks in advance… it must be something I forgot.

      1. What I think is that you have forgotten to create partitions on your disks: /dev/sdb1, /dev/sdc1 and so on.

        Create partitions, give each one the whole cylinder range, and then proceed with partprobe /dev/sdb1, partprobe /dev/sdc1 and so on. I believe after the partitions are created your command will be successful.

        Check and let me know.

        Thanks!

  3. I figured it out:

    KERNEL=="sdb1", BUS=="scsi", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="14f504e46494c45527762353259332d763344562d48783758", NAME="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"

    BUS=="scsi" is not valid on OL7, but SUBSYSTEM=="block" is.
    NAME is not valid on OL7, but SYMLINK+= is.

    This makes it a working statement:
    KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="14f504e46494c45525249724237512d433279392d4e7a6d45", SYMLINK+="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"

    Thanks for all the help 🙂
