IBM Link: Steps to unlock a volume group



This document describes how to unlock a volume group that reports an error such as:

0516-1201 lsvg: Warning: Volume group <VG Name> is locked.


Here are some steps to find out why the volume group is locked and/or unlock it.

  1. Run "chvg -u <VG Name>" to try to unlock it.
  2. Check the complete ps -ef output for LVM processes such as chlv, chvg, or mklv that modify LVM metadata and therefore lock the volume group.

    If you instead find only query commands such as lsvg or lslv (with or without flags), for example:

    # ps -ef | grep vg

    root 14942378 19988506 0 Oct 16 - 0:00 lsvg -p rootvg
    root 15597578 12189770 0 23:55:05 - 0:00 lsvg -p rootvg
    root 4850006 15532252 0 13:45:05 pts/3 0:00 /usr/sbin/lsvg -l

    then it is safe to kill those processes (14942378, 15597578, and 4850006 in the example above), since they only query information about the volume group. Then retry step 1.
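The PID extraction in step 2 can be scripted. A minimal sketch, using the sample ps output from the example above (on a live system you would pipe ps -ef in directly):

```shell
# Sample ps -ef output, copied from the example above.
ps_sample='root 14942378 19988506 0 Oct 16 - 0:00 lsvg -p rootvg
root 15597578 12189770 0 23:55:05 - 0:00 lsvg -p rootvg
root 4850006 15532252 0 13:45:05 pts/3 0:00 /usr/sbin/lsvg -l'

# The PID is always the second column of ps -ef. Keep only read-only
# query commands (lsvg/lslv) -- never kill chvg, chlv, mklv, and the like.
pids=$(printf '%s\n' "$ps_sample" | grep -E 'lsvg|lslv' | awk '{print $2}')
echo "$pids"
```

Confirm each PID really is a query command before killing it.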

  3. Check for a maragent or navisphere process. These belong to an EMC product, and older versions have been known to lock volume groups erroneously.

    We can check using:

    # ps -ef | grep -Ei 'maragent|navisphere'

    root 6881442 17957332 0 06:44:16 pts/5 0:00 maragent

    If found, try killing the process (6881442 in the example above) and retry step 1 to unlock the volume group.

  4. Check whether a filesystem is full.

    # df -g

    Filesystem       GB blocks  Free  %Used  Iused  %Iused  Mounted on
    /dev/hd4              1.28  0.81    97%  10080     56%  /
    /dev/hd2              3.81  1.99    48%  42906      9%  /usr
    /dev/hd9var           0.69  0.38    85%   6701     67%  /var
    /dev/hd3              0.12  0.12     6%    142      1%  /tmp
    /dev/hd1              0.03  0.03     2%     25      1%  /home
    /dev/hd11admin        0.12  0.12     1%      5      1%  /admin
    /dev/hd10opt          0.28  0.10    64%   7032     23%  /opt
    /dev/livedump         0.12  0.12     1%      4      1%  /var/adm/ras/livedump

    Nearly full filesystems, such as / (97%) and /var (85%) above, may pose an issue, since the system gets choked, which can result in the volume group being locked.

    To resolve this issue, you would need to increase the size of the filesystem by running:

    # chfs -a size=+1G <filesystem to expand>

    which extends that filesystem by 1 GB; for example, "chfs -a size=+1G /" grows the root filesystem by 1 GB.
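Spotting nearly-full filesystems can be automated by filtering the df output with awk. A minimal sketch, where the 85% threshold is an assumption and the sample data is taken from the example above (on a live system, pipe real df -g output in):

```shell
# Sample df -g output, copied from the example above.
df_sample='Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/hd4 1.28 0.81 97% 10080 56% /
/dev/hd9var 0.69 0.38 85% 6701 67% /var
/dev/hd3 0.12 0.12 6% 142 1% /tmp'

# Column 4 of each data row is %Used; flag anything at or above 85%.
full=$(printf '%s\n' "$df_sample" \
  | awk 'NR > 1 { used = $4; sub(/%/, "", used); if (used + 0 >= 85) print $NF, $4 }')
echo "$full"
```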

  5. Check the /etc/vg directory for a lock file.


    # odmget -q "name=rootvg and attribute=vgserial_id" CuAt

    CuAt:
            name = "rootvg"
            attribute = "vgserial_id"
            value = "0007b47c00004c000000013b717a0988"
            type = "R"
            generic = "D"
            rep = "n"
            nls_index = 637

    The value above (0007b47c00004c000000013b717a0988) is the serial number of the volume group.

    You would then run:

    # fuser -f /etc/vg/vgXX

    where XX is the upper-case form of the value from the odmget command; in this example you would run:

    # fuser -f /etc/vg/vg0007B47C00004C000000013B717A0988

    If a process is holding the lock on the volume group, you will see a line similar to the one below:

    /etc/vg/vg0007B47C00004C000000013B717A0988: 9895958

which you can then identify:

# ps -fp 9895958

root 9895958 9896034 1 20:07:53 pts/0 0:00 lmigratelv -l 0007b47c00004c000000013b717a0988 -s 50 /tmp/mig_map198

So in this case, the process locking the volume group was the lmigratelv command. You may see a different process running against the volume group; to unlock it, troubleshoot that process (in the case above, you would wait until the migration has completed).

If you find a process that points to a third-party application, you could contact the vendor for assistance on why it is locking the volume group.
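Deriving the /etc/vg lock-file name from the serial can be scripted with tr, since the file name is simply "vg" followed by the upper-cased vgserial_id. A minimal sketch, using the serial from the example above (on a live system you would capture it from odmget or getlvodm):

```shell
# vgserial_id as reported by odmget (lower-case hex, from the example above).
vgid='0007b47c00004c000000013b717a0988'

# The lock file under /etc/vg is "vg" plus the upper-cased serial.
lockfile="/etc/vg/vg$(printf '%s' "$vgid" | tr '[:lower:]' '[:upper:]')"
echo "$lockfile"
```

This prints /etc/vg/vg0007B47C00004C000000013B717A0988, which you would then pass to fuser -f as shown above.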

  6. Check the ODM.

    # odmget -q "name=VGname and attribute=lock" CuAt

    CuAt:
            name = "VGname"
            attribute = "lock"
            value = "24892"
            type = "R"
            generic = ""
            rep = "l"

The value above (24892) is the PID of the process that has the volume group locked. You can then look at the process table to find the process name (e.g., ps -fp 24892).

Once you have the process ID, troubleshoot the process running against the volume group as in step 5.

If there is no such process, there may be a stale lock left behind in the ODM. The low-level command putlvodm can be used to clear it:

# VGID=$(getlvodm -v NAME_OF_LOCKED_VG)
# putlvodm -K ${VGID}
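Extracting the lock-holder PID from the odmget output can also be scripted. A minimal sketch, where the sample below is hypothetical but matches the output format shown above:

```shell
# Sample odmget output for the lock attribute (hypothetical volume group name).
odm_sample='CuAt:
        name = "datavg"
        attribute = "lock"
        value = "24892"
        type = "R"'

# The quoted string on the "value" line is the PID holding the lock.
pid=$(printf '%s\n' "$odm_sample" | awk -F'"' '/value/ { print $2 }')
echo "$pid"
```

You can then inspect the holder with ps -fp "$pid", or, if no such process exists, clear the stale lock with putlvodm -K as shown above.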

  7. Check errpt -a for any disk issues or failing vscsi adapter mappings from the VIO server. Hung I/O from a disk will cause LVM processes to hang waiting for the I/O to complete, and some of these processes will lock the volume group.
  8. If you are running HACMP, there was a previous defect involving LVM and HACMP that leaves a lock behind when an LVM command is killed. Check APARs for this and other lock-related issues.
  9. If none of these steps work, force a dump on the system and send it in for analysis. Alternatively, if a root cause analysis (RCA) or problem source identification (PSI) is not needed, rebooting may clear the locked volume group.