Unifying VMS Clusters with Multiple Boot Disks


Do you have a VMS Cluster with multiple boot disks or multiple CPU architectures? Do you want to maintain a common system with one user authorization file, rights list, and set of network proxy databases? This page will help you design your multiple-boot-disk VMS Cluster.

Before proceeding with the generation of your cluster, or the modification of an existing one, you should first decide where each of your boot disks will reside, and pick a logical name for each that is unique in the cluster. Then you must decide which of them will be your master boot disk. This boot disk is where your database files will reside. All nodes on the other boot disks will mount this disk and use logical names to point to all of the necessary database files.

Each boot disk, including the master, should have the proper logical names defined in the file SYS$COMMON:[SYSMGR]SYLOGICALS.COM. The only difference in this file across the boot disks is that the master boot disk has an extra logical name definition, while the non-master boot disks have a MOUNT command that mounts the master boot disk on each node that boots from a non-master boot disk.

If the master boot disk cannot be accessed directly by your non-master boot servers, then the node serving the master boot disk must be the first node in the cluster to boot, and it represents a single point of failure. Therefore, when choosing your master boot disk, it is best to select a disk that is directly accessible by all boot-serving nodes in your cluster.

In the following examples, I have selected the disk $4$DKA100: as my master boot disk. Its volume name is AXPSYS070, and I have selected the cluster-wide logical name AXPVMSSYS for it. So the file SYS$COMMON:[SYSMGR]SYLOGICALS.COM on the master boot disk has this as its first executable line:

$DEFINE/SYSTEM/EXEC/NOLOG AXPVMSSYS SYS$SYSDEVICE

The SYS$COMMON:[SYSMGR]SYLOGICALS.COM file on all other boot disks has this line as its first executable statement instead:

$MOUNT/SYSTEM $4$DKA100: AXPSYS070 AXPVMSSYS

This makes sure that the master boot disk is mounted on every node in the cluster, regardless of which system disk a node boots from.
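Incidentally, the two variants can be maintained as a single SYLOGICALS.COM shared by every boot disk, by branching on whether the booting node's system device is the master disk. The following is only a sketch, not a required step; it assumes the master boot disk is $4$DKA100: with volume label AXPSYS070, as in these examples, and that F$GETDVI's FULLDEVNAM item returns the device name with its leading underscore:

$IF F$GETDVI("SYS$SYSDEVICE:","FULLDEVNAM") .EQS. "_$4$DKA100:"
$THEN
$    DEFINE/SYSTEM/EXEC/NOLOG AXPVMSSYS SYS$SYSDEVICE
$ELSE
$    MOUNT/SYSTEM $4$DKA100: AXPSYS070 AXPVMSSYS
$ENDIF

With this approach, the file can be copied unchanged to every boot disk, which removes one opportunity for the disks to drift out of step.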

Then the SYS$COMMON:[SYSMGR]SYLOGICALS.COM file on every boot disk should contain the following lines:

$DEFINE/SYSTEM/EXEC/NOLOG QMAN$MASTER AXPVMSSYS:[VMS$COMMON.SYSEXE]
$DEFINE/SYSTEM/EXEC/NOLOG SYSUAF AXPVMSSYS:[VMS$COMMON.SYSEXE]SYSUAF.DAT
$DEFINE/SYSTEM/EXEC/NOLOG RIGHTSLIST AXPVMSSYS:[VMS$COMMON.SYSEXE]RIGHTSLIST.DAT
$DEFINE/SYSTEM/EXEC/NOLOG NETPROXY AXPVMSSYS:[VMS$COMMON.SYSEXE]NETPROXY.DAT
$DEFINE/SYSTEM/EXEC/NOLOG NET$PROXY AXPVMSSYS:[VMS$COMMON.SYSEXE]NET$PROXY.DAT
$DEFINE/SYSTEM/EXEC/NOLOG VMSMAIL_PROFILE AXPVMSSYS:[VMS$COMMON.SYSEXE]VMSMAIL_PROFILE.DATA
$DEFINE/SYSTEM/EXEC/NOLOG MAIL$SYSTEM_FLAGS 7

The last two lines are for the mail system, to ensure that the entire cluster acts as one node when sending and receiving mail, regardless of which node sends or receives it. A discussion of how to unify the mail system if you did not take these steps when you first generated your cluster is given on the page "Unifying VMS Mail In Clusters with Multiple Boot Disks".
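Once the definitions are in place and each node has rebooted, you can spot-check that the redirection took effect with SHOW LOGICAL. For example:

$SHOW LOGICAL/FULL SYSUAF
$SHOW LOGICAL/FULL QMAN$MASTER

On every node, whichever boot disk it uses, each translation should point into AXPVMSSYS:[VMS$COMMON.SYSEXE]; a translation that still points at a local SYS$SYSROOT file means that boot disk's SYLOGICALS.COM was not updated or has not yet been run.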

If you have any questions or problems with the above procedure, please feel free to eMail me with the specifics of your problem or question.


 
