Sizing the Moment

I have been running a web site on an HP Integrity server for the past few years. It runs linux; Debian Linux 3.1 (sarge), to be precise. It was initially established on a 36.4GB drive which, at the time, was thought to be more than ample drive space for many years to come. However, there is a forum on this web site and its users have a voracious appetite for posting to it. Recently, it became apparent that it would be wise to move this web site's system drive to a larger drive. Getting the bigger drive was simple; getting the system files moved over to it wasn't exactly an easy task.

An exposé of OpenVMS image backup, this is not! In the early days, when this web site was first established on the HP Integrity server, I had several 36.4GB drives. Two were earmarked for use with the web site. Once a month, I would take the web site down and put the two drives into another HP Integrity server running OpenVMS. I would then mount these two drives as foreign volumes and perform a physical backup using OpenVMS BACKUP, thus imaging the web server's system drive. This system image also made it possible for me to move the linux system drive to the larger drive without having to keep the web site off-line for an extended period. I only needed to keep it off-line long enough to effect the physical backup, roughly one hour. During backups, I would put a forwarding rule into the Cisco router to send web site requests to another web server. This would inform people visiting the web site that the site was down temporarily for maintenance.
# ip nat inside source static tcp IN.TE.RN.AL 80 EX.TE.RN.AL 80 extendable
Another web server, which just happens to be Apache running on OpenVMS, sits at the address IN.TE.RN.AL and has a virtual host defined for the web site's domain. The default index page on this server explains that the site is down for maintenance.
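
For the curious, the OpenVMS side of that monthly imaging amounted to little more than a few DCL commands along these lines; the DKA device names here are illustrative, not the actual unit numbers:

$ MOUNT/FOREIGN DKA100:              ! the linux system drive, mounted as a foreign volume
$ MOUNT/FOREIGN DKA200:              ! the backup drive, likewise
$ BACKUP/PHYSICAL DKA100: DKA200:    ! block-for-block image of the system drive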

The linux system disk was imaged and the web site was returned to operation. However, since I knew the migration to the larger drive would take some time to complete, I disabled the web site's forum. This would ensure that the contents of the migrated system would be the same as the old. The users would have to grin and bear the inconvenience of no forum access for a day or two.

My first thought was to simply make a physical backup of the existing 36.4GB linux system drive to the new 73.4GB drive. Then, make use of one of the partition tools to change the system partition to use the full extent of the new larger drive. So, I made the backup to the 73.4GB drive. Booted in the alternate HP Integrity system, the linux system on this drive came up fine. Great! We were well on our way. However, the joy was quickly quelled with dismay when the partitioning tools under linux refused to expand the existing roughly 36GB partition to roughly 73GB. That tossed a huge spanner into the planned migration effort.
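
In rough strokes, the plan had been along these lines; this is only a sketch of what I was hoping to do, with illustrative device and partition numbers, not a transcript of what was actually run:

$ sudo fdisk /dev/sda        # delete partition 3, recreate it with the same starting block but a larger end
$ sudo e2fsck -f /dev/sda3   # check the file system before attempting to resize it
$ sudo resize2fs /dev/sda3   # grow the ext3 file system to fill the enlarged partition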

I began my quest for a way to move the old system partition from the 36.4GB drive to the 73.4GB drive. I took the existing 73.4GB drive, fired up fdisk and deleted the 36GB linux partition. I then employed fdisk, once again, to create the larger 73GB partition. This accomplished, I ran mkfs to make this new 73GB partition an ext3 file system.
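
Command-wise, that came down to something like the following, with fdisk driven interactively and the drive appearing as /dev/sdc, matching the mounts shown a little further on:

$ sudo fdisk /dev/sdc          # delete the 36GB partition 3, recreate it spanning the rest of the drive, write and quit
$ sudo mkfs -t ext3 /dev/sdc3  # lay down a fresh ext3 file system on the new, larger partition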

Now, the task was to move the contents of the old root file system partition on the 36.4GB drive to the new 73.4GB drive and the larger ext3 file system partition I'd just created. This should have been a relatively straightforward operation. The general wisdom is to use the linux commands dump and restore to effect the transfer.

$ sudo mkdir /mnt/old
$ sudo mkdir /mnt/new
$ sudo mount /dev/sdb3 /mnt/old
$ sudo mount /dev/sdc3 /mnt/new
$ cd /mnt/new
$ sudo dump -0uan -f - /mnt/old | sudo restore -r -f -

So, with the old 36.4GB linux system drive installed in the SCSI 0 slot of the HP Integrity rx2600, the 73.4GB backup in SCSI 1 and the newly partitioned 73.4GB drive in SCSI 2, I set out to use dump and restore.

The first thing I encountered was that dump and restore did not exist on this system. OK. I thought that this would be simple enough to remedy. I would simply install dump and restore. I typed in:
$ sudo apt-get install dump
The system began to retrieve the files needed to install dump and restore. However, since this was an old version of Debian Linux, the files in the stable tree, which are much newer, would not install. There were too many dependencies, rendering the installation a failure. I even tried building dump and restore from source, to no avail. There were simply too many dependencies to resolve, and this old version of Debian Linux was not going to allow me to do it; at least, not without great effort and a questionable outcome.

Dead ended? Never! I simply do not give up that easily. I then decided that I would try to use tar to do what I had wanted to do with dump and restore. I did a little Googling and I found that others had reported successful drive migrations using tar, so I decided that I would give it a whirl.

$ sudo mkdir /mnt/old
$ sudo mkdir /mnt/new
$ sudo mount /dev/sdb3 /mnt/old
$ sudo mount /dev/sdc3 /mnt/new
$ cd /mnt/new
$ sudo tar cf - /mnt/old | tar xvf -

Well, it might have worked for others but, as luck would have it, it would not work for me. It did not like piping the output of the tar backup of the old drive's contents straight into the tar extraction onto the new drive. Damn! I was so looking forward to a quick and simple resolution to the migration process but, to date, it had eluded me.
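
For what it's worth, the form that others generally report success with changes directory on both ends of the pipe and preserves permissions on extraction, roughly as below; whether it would have fared any better here, I cannot say:

$ sudo tar -C /mnt/old -cf - . | sudo tar -C /mnt/new -xpf -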

Alright, I thought, if I can't do it in one process, I could certainly do it in two. So, armed with yet another 73.4GB drive, I partitioned that drive as a single ext3 linux file system. I created another mount point and mounted this drive.

$ sudo mkdir /mnt/backup
$ sudo mount /dev/sdc1 /mnt/backup

I then proceeded to tarchive the old drive to this new drive.

$ cd /mnt/backup
$ sudo tar -cvvf backup.tar /mnt/old

The backup began and filenames started scrolling up the terminal display. Judging from the rate at which they appeared, this was going to be a long and slow process. I went out for the evening with the family to a local fireworks display and, afterwards, a late evening snack and drinks at a local watering hole.

I was awake at 2:00 Sunday morning, so I decided to check on the progress of the backup. It had completed. Now it was time to reverse the process and, since I was awake at 2:00 in the morning, there was no time like the present to effect the restore. I shut down the HP Integrity and swapped the disk drives around. I pulled the 73.4GB drive holding the old system drive image and installed the drive readied with the EFI partition, swap partition and the new 73GB ext3 linux partition. I then powered up the HP Integrity.

In order to restore the old system drive backup to the new drive, I essentially had to repeat the commands used to do the backup. I first mounted the drives.

$ sudo mount /dev/sdc1 /mnt/backup
$ sudo mount /dev/sdb3 /mnt/old

With the two drives mounted, the tarchive restore could begin. Because tar strips the leading slash when creating an archive, the backup's contents were stored under mnt/old/ rather than /mnt/old; extracting them relative to the root directory puts everything back under /mnt/old, which is why the new drive was mounted at that same mount point for the restore.

$ cd /mnt/backup
$ sudo tar -xvvf backup.tar -C /

Once again, the display came to life, streaming with the filenames of the files being restored to the new drive. I sat and watched for about half an hour to make certain that the restore was going properly. I also checked on the progress by logging in from another terminal and perusing the contents being restored to /mnt/old. When I had convinced myself that all was working as it should be, I left the restore running and settled back into bed at about 3:00.

Morning — well, later that morning — came around and, armed with my first cup of coffee, I checked on the restore. It had finished. Now, the moment of truth. It was time to see if the system could be booted using this new drive.

I issued the linux shutdown and then connected to the HP Integrity's console. At the EFI shell, I edited the elilo.conf file in the fs1: (/dev/sdb1) partition to point the root to /dev/sdb3 instead of the previous /dev/sda3.
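The change itself is just the root entry in the boot stanza of elilo.conf; everything else stays as it was. The image and initrd names below are placeholders for whatever actually lives on that EFI partition:

image=vmlinuz
        label=Linux
        read-only
        initrd=initrd.img
        root=/dev/sdb3

With that edit saved, I typed in: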
fs1:\EFI\debian> elilo -C \EFI\debian\elilo.conf
Familiar looking information began to fill the terminal display. About a minute later, there was a login prompt. I entered my username and password, and I was logged in. I spent a brief few moments checking to see that everything that should have started up had indeed started. Everything looked fine. Time to put this drive into production, but not before I imaged this new drive for backup purposes. I shut down the HP Integrity and installed the OpenVMS system drive. I put a blank 73.4GB drive into one of the slots and then booted OpenVMS. When OpenVMS was up and running, I performed a physical backup of the new 73.4GB system disk to the blank 73.4GB drive. This took about three hours to complete, so I left it be and took care of the typical morning rituals.

About noon, the physical backup completed. I shut down both HP Integrity servers, the OpenVMS system and the linux system. I then pulled and marked the two 73.4GB linux system drives, and installed the new system disk into the HP Integrity web server. I issued the power-up command in the EFI console and the HP Integrity came to life. About a minute or so later, I was able to log into the web site through a web browser.

A long, slow, arduous but enlightening saga has come to its conclusion, leaving me pining for OpenVMS BACKUP on linux.

