To increase the size of a GPFS mount point, we need at least one free disk on shared storage.

  • The storage disks need to be shared with every cluster node on which the mount point is mounted.
  • We can't use the available free space of the existing file system; new disks are required.
  • Create new NSD disks from the newly allocated storage disks.

      ➡  mmcrnsd

  • Assign the created NSD disks to the remaining cluster nodes.

      ➡  mmchnsd

  • Add the created NSD disks to the GPFS mount point (file system).

     ➡  mmadddisk

  • Verify the increased GPFS mount point size (a baseline check follows this list).
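
Before starting, it is worth recording the current mount point size and NSD layout as a baseline to compare against after the expansion (assuming, as in the steps below, the file system is testgpfs mounted at /testgpfs):

NODE1:# df -gt /testgpfs
NODE1:# mmlsnsd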

Step 1: Allocate new shared storage LUNs to all the cluster nodes.
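
As a rough sketch of this step on AIX (the hdisk name is an assumption and may differ in your environment), rescan the devices and confirm the new LUN is visible on every node:

NODE1+NODE2:# cfgmgr
NODE1+NODE2:# lspv | grep hdisk4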

Step 2: Change the disk attributes on all the cluster nodes.

NODE1+NODE2:# chdev -l hdisk4 -a algorithm=round_robin -a queue_depth=32 -a reserve_policy=no_reserve
NODE1+NODE2:# lsattr -El hdisk4 -a queue_depth -a algorithm -a reserve_policy

Step 3: On NODE1, create a stanza (disk descriptor) file with the details of the new hdisk4 and its storage pool properties.

NODE1:# cat>/gpfs/conf/DiskDesc-new
hdisk4:NODE1-GPFS::dataAndMetadata::nsd2::
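
For reference, my understanding of this old-style disk descriptor layout is as follows (check the mmcrnsd documentation for your GPFS release), with empty fields taking their defaults:

DiskName:ServerList::DiskUsage:FailureGroup:DesiredName:StoragePool

Here hdisk4 will be served by NODE1-GPFS, holds both data and metadata, and the resulting NSD will be named nsd2.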

Step 4: Convert the plain hdisk into a GPFS NSD disk.

mmcrnsd

❗  Specify storage pools and pool properties

❗  Assign disk devices to storage pools

❗  Assign failure groups to disk devices

NODE1:# mmcrnsd -F /gpfs/conf/DiskDesc-new
mmcrnsd: Processing disk hdisk4
mmcrnsd: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
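
To confirm the NSD-to-device mapping on each node right after creation, mmlsnsd can be run with the -m option (the exact output layout varies by GPFS release):

NODE1:# mmlsnsd -m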

Step 5: View the updated stanza file after the NSD has been created.

NODE1:# cat /gpfs/conf/DiskDesc-new
# hdisk4:NODE1-GPFS::dataAndMetadata::nsd2::
nsd2:::dataAndMetadata:-1::system

Step 6: Remove the commented '#' line from the updated stanza file and verify the new NSD entry format.

NODE1:# cat /gpfs/conf/DiskDesc-new | grep -v "#" >> /gpfs/conf/DiskDesctiensd-new
NODE1:# cat /gpfs/conf/DiskDesctiensd-new
nsd2:::dataAndMetadata:-1::system

Step 7: Create another stanza file with the NSD disk name and the NSD server node list (format: DiskName:ServerList).

NODE1:# cat>/gpfs/conf/NSDfilesystemcreation-new
nsd2:NODE1-GPFS,NODE2-GPFS

Step 8: Assign the nsd2 disk to both nodes; in my case it is a two-node GPFS cluster.

NODE1:# mmchnsd -F /gpfs/conf/NSDfilesystemcreation-new
mmchnsd: Processing disk nsd2
mmchnsd: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

Step 9: Verify whether the NSD disk was successfully assigned to both nodes.

NODE1:# mmlsnsd
File system   Disk name   NSD servers
---------------------------------------------------------
testgpfs      nsd1        NODE1-GPFS,NODE2-GPFS
(free disk)   Tiebreak1   NODE1-GPFS,NODE2-GPFS
(free disk)   nsd2        NODE1-GPFS,NODE2-GPFS

Step 10: After assigning the NSD disk to the cluster nodes, we can increase the GPFS mount point size.

The newly added disk will show up as a free disk:

NODE1:# mmlsnsd -F
File system   Disk name   NSD servers
---------------------------------------------------------
(free disk)   Tiebreak1   NODE1-GPFS,NODE2-GPFS
(free disk)   nsd2        NODE1-GPFS,NODE2-GPFS

Step 11: Reuse the stanza file prepared in Step 6 (it already contains the new NSD entry) to increase the GPFS mount point size.

NODE1:# cat /gpfs/conf/DiskDesctiensd-new
nsd2:::dataAndMetadata:-1::system

Step 12: Add the free NSD disk to the GPFS file system.

NODE1:# mmadddisk testgpfs -F /gpfs/conf/DiskDesctiensd-new
The following disks of testgpfs will be formatted on node NODE1:
nsd2: size 142606336 KB
Extending Allocation Map
Checking Allocation Map for storage pool system
Completed adding disks to file system testgpfs.
mmadddisk: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

Step 13: Verify that the newly created NSD disk has been successfully added to the file system.
NODE1:# mmlsnsd
File system   Disk name   NSD servers
---------------------------------------------------------
testgpfs      nsd1        NODE1-GPFS,NODE2-GPFS
testgpfs      nsd2        NODE1-GPFS,NODE2-GPFS
(free disk)   Tiebreak1   NODE1-GPFS,NODE2-GPFS

Step 14: The GPFS mount point size has now been successfully increased.
NODE1:# df -gt /testgpfs
Filesystem      GB blocks   Used     Free     %Used   Mounted on
/dev/testgpfs   272.00      138.60   133.40   51%     /testgpfs
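
As an optional follow-up, mmdf shows the per-NSD capacity and free space for the file system, and mmrestripefs with the -b flag can rebalance existing data across the newly added disk (the rebalance is I/O-intensive, so run it during a quiet window):

NODE1:# mmdf testgpfs
NODE1:# mmrestripefs testgpfs -b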