Friday, July 27, 2007
My plans to equip the lab with older but solid equipment have been going very well thus far. It's not cheap, but it's going to be very functional. The two Netra X1 servers are doing a great job, and I'm really enjoying having a LOM. I wish my "big iron" 420R had a LOM, but a Sun serial port still beats an x86 BIOS program. And what could be cooler than accessing those serial LOM devices through a terminal server? (Yes, I suppose a modern Sun server with an Ethernet LOM would be cooler, but don't burst my bubble, ok?).
So now that I've accumulated these boxes and am beginning to use them on a regular basis, you can imagine that patching wasn't far behind. Patching is one of many activities where a console connection comes in pretty handy. To make a long story short, I quickly grew tired of trucking my laptop downstairs, attaching a serial cable to it, and then performing an elaborate contortion routine to find the LOM port in the back of my rack while pressing my face through cobwebs. Been there before? Yes. I have decided that I need a terminal server.
So, what is my ideal terminal server? Well, there are a few requirements. It must be a quiet, low power device - no giant noisy fans need apply. I need an 8-port device, but 16 would give me room to grow if the price is right. I don't care too much about security protocols - this is a home lab that sits behind a firewall, and all my systems can be reprovisioned from a flash archive in a heartbeat. Should be easy, right?
The first thing I learned is that there are a LOT of 32-48 port high-end (not old!) term servers available, primarily Cyclades devices. These look like Ferraris to me, and I dream of winning an auction for about $50 and attaching that puppy to my rack. Not going to happen... The next thing I noticed was a bunch of really old Xyplex and Perle devices. These rack up, but I read a number of horror stories, and got the idea from a few USENET postings that they are loud. I found a few other older devices, but each had something that didn't seem right to me. It was time to get drastic...
I went with plan "C". In this case, the C stands for Cisco. Turns out that with some auction patience, a properly equipped Cisco 2509 (8 port) or 2511 (16 port) can be had with cables for around $150 or less. That's right at my pain threshold, but acceptable given what it provides. This solution appears to be hit or miss when it comes to spontaneous break signals halting the SPARC machines (usually when the TS powers down), but the kbd command can be used to configure an alternate break sequence and avoid the issue.
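For the record, the Solaris side of that fix should be a one-liner. From memory (so verify against kbd(1) before trusting me):
# kbd -a alternate
That switches the running system to the alternate break sequence; setting KEYBOARD_ABORT=alternate in /etc/default/kbd should make the change survive a reboot.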
The other appealing feature seems to be that I can configure reverse-telnet. This would allow me to run a command like "telnet termserver 2001" to get to port 1. Much more convenient than authenticating to a termserver and navigating annoying menus. And finally, being a full size 19" box I can rack it up without coming up with some combination of plywood and duct-tape. Suh-weet.
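From what I've read, the IOS side of reverse-telnet boils down to a few lines on the async ports. A sketch of the idea (untested by me, and the host name and IP are made up):
line 1 8
 no exec
 transport input telnet
!
ip host netra1 2001 10.0.0.2
Reverse-telnet ports map to 2000 plus the line number, which is where the 2001 in my example comes from; the ip host entry just lets me type "telnet netra1" from the TS itself, with 10.0.0.2 standing in for the terminal server's own interface address.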
The downside? Well, ssh would be cooler than telnet, but I can swallow my pride. Who knows? Maybe there's a Cisco update that would provide this. It might be a loud device. I have no idea. Another issue which decrements the coefficient of cool: it requires an AUI adapter to convert to an Ethernet RJ45 port. On the other hand, there's probably a lot of new SAs in the world who would look at that like a vintage muscle car... "Whoa - is that a REAL AUI adapter, dude? You must be hard core." Um, yeah. Maybe not. Although the loudness and power consumption concern me, I think I can live with these issues if it works, which I'm reasonably confident it will.
Now, to set up an eBay search and begin the hunt...
Thursday, July 26, 2007
Learning to think in Z
In the traditional disk mounting world we have a device under the /dev directory that gets mounted on an (aptly named) mount point. For example:
# mount /dev/dsk/c0t2d0s0 /export/install
On a large database server you might see the common convention of mounting disks with /uXX names...
# ls -1d /u*
/u01
/u02
/u03
/u04
This is the frame of reference I brought to building my new JumpStart server. My goal was to stick as close as possible to standard mount points. The first file system would be mounted on /export/install. The second would serve as my home directory, and I didn't much care where it lived since I'd use the automounter.
The default ZFS configuration is to mount a complete pool under its pool name. I tried to be creative in coming up with a naming convention, but slipped into mediocrity with a "z##" name. Hey, I'm tired of seeing /u##; it's amazing what a difference one letter can make in spicing up a server. Having come up with my name, I created the pool from my second disk:
# zpool create z01 c0t2d0
# zfs create z01/install
# zfs create z01/home
# Hmm, why not make my home its own fs?
# zfs create z01/home/cgh
Wow. That was easy!
But now there's a sort of problem. I can't quite get past seeing the JumpStart directory under /z01. It's not intuitive there. The world of Solaris sysadmins looks for JumpStart files in /export/install. So how can we get this sweet ZFS file system to show up where I want it? Turns out this is pretty easy as well.
# zfs set mountpoint=/export/install z01/install
It even unmounts and remounts the file system for me. Oh yes, I'm a fan at this point.
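And a quick zfs get makes an easy sanity check; the output should look roughly like this:
# zfs get mountpoint z01/install
NAME         PROPERTY    VALUE            SOURCE
z01/install  mountpoint  /export/install  local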
One thing that's interesting is that once you move a mount point from its default, it can be easy to "lose" that file system. For example, if I list the contents of /z01 at this point, I only see home. "install" no longer shows up there because it's mounted on /export/install. In this example it's hard to lose anything, but on a large production server there could be many pools and many file systems. As you would expect, there's an easy command to list the file systems and their mount points:
# zfs list
NAME           USED  AVAIL  REFER  MOUNTPOINT
z01           1.61M  36.7G  26.5K  /z01
z01/home      1.49M  36.7G  1.45M  /z01/home
z01/home/cgh  35.5K  36.7G  35.5K  /z01/home/cgh
z01/install   28.5K  36.7G  28.5K  /export/install
I decided to leave z01/home in place and just repoint the automounter. From zero to "get it done!" in about 20 minutes, including some play time. I love it.
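For the curious, the repointing amounts to one wildcard map entry. Roughly this, with my JumpStart server's host name standing in as an example:
# cat /etc/auto_home
*       jumpstart:/z01/home/&
# automount -v
The & expands to the lookup key, so everything under /z01/home shows up as /home/<user> without per-user entries.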
First impressions of ZFS
If you're anything like me, you cling to that which you know while yearning for that which you haven't yet dabbled in. Tonight was a small victory for my self-discipline, and a great example of why I think I'm going to be good friends with ZFS.
I've been mentally moving forward with a new JumpStart server layout for a while now. This server would have very little need for horsepower; storage space was what I really needed. Its main purpose is to help me consistently provision lab environments here at home for projects. I ended up selecting a Netra X1, which is very inexpensive on eBay. It's a nice low-power-draw platform with plenty of capability, and one less common feature among the Sun lines: IDE (PATA) drives. Yes, I mean that in a good way.
I was able to load it up with a 40 GB boot drive and a 120 GB data disk to house install media images, flash archives, home directories, and some crude backups for the rest of the lab environment. The cost of a SCSI disk that size is insane by comparison, and would provide no advantage for the tiny demand it would be charged with. I jumpstarted the hardware from another Sun machine, then loaded the JumpStart Enterprise Toolkit (JET) and prepared to boogie.
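JET deserves a post of its own, but the core provisioning workflow is just a couple of scripts. Roughly this, with a hypothetical client name (check the JET docs for the media-copy step and the options I'm glossing over):
# /opt/SUNWjet/bin/make_template netra2
# vi /opt/SUNWjet/Templates/netra2
# /opt/SUNWjet/bin/make_client netra2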
Ahh, but now the moral dilemma rears its ugly head. How to manage that data disk? I haven't spent much time playing with Solaris Volume Manager (SVM) soft partitions, but enough to know it was a snap and would do the job. On the other hand, I've been twitching to learn ZFS, and this could be just the excuse I needed to get started.
The hard part about this decision was deciding whether I perceived ZFS to be an abyss or a simple technology. I can't count the number of times I've done something silly like saying, "Oh sure, we could write a quick Perl script to do that," only to find two months later that I'd grossly underestimated the complexity. I'm a chronic and pathological optimist.
I'm happy to report ZFS was painless and a pleasure to use. I'm still in shock from the simplicity. This is fun... I don't miss Linux at all.
Monday, July 16, 2007
Inconsistency in prtdiag output
I've been doing a lot of work recently writing Perl scripts to mine data from local Explorer repositories. It's a phenomenal resource as a sort of raw input to a configuration DB, and with Perl it's a snap to pull out data. My latest exercise was pretty trivial: I needed to yank out the memory size field from prtdiag for each system, then dump it into an XML feed that serves one of our databases.
The information resides in the prtdiag-v.out file, and looks something like this:
fooserver{sysconfig}$ more ./prtdiag-v.out
System Configuration: Sun Microsystems sun4u Sun Fire E20K
System clock frequency: 150 MHz
Memory size: 65536 Megabytes
So, we throw together a little Perl script that does this:
sub get_memory_size {
    my $explodir = shift();
    my $prtdiagfile = "$explodir/sysconfig/prtdiag-v.out";
    my $memsize;
    if ( -e $prtdiagfile ) {
        open(PRTDIAG, $prtdiagfile) or return 0;
        while (<PRTDIAG>) {
            chomp;
            if ( /^Memory size:\s/ ) {
                $memsize = $_;
                last;
            }
        }
        close(PRTDIAG);
        return 0 unless defined $memsize;  # No "Memory size" line found
        $memsize =~ s/^Memory size:\s//;   # Kill the label
        $memsize =~ s/\s+$//;              # Remove any trailing whitespace
        return $memsize;
    } else {
        # We did not find the prtdiag file.
        return 0;
    } # end if
} # end get_memory_size
No problem!
Then I put together a simple loop to check what I'd found. Something like this sketch (the repository path here is hypothetical):
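foreach my $explodir (glob("/export/explorers/*")) {
    # Each subdirectory is assumed to hold one unpacked Explorer
    printf "[%s]\n", get_memory_size($explodir);
}
Now help me understand why this can't be simple and consistent? Here's some of the variety: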
[2GB]
[6144 Megabytes]
[512MB]
Can't we just agree to use either MB or GB? Or if we're in a verbose frame of mind, Megabytes or Gigabytes. My response is to normalize the exceptions I can locate so that it comes out consistently with GB or MB, but I wonder whether this will remain a stable interface?
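My normalization pass is nothing fancy. Something along these lines (normalize_memory is a hypothetical helper; the patterns grow as new variants turn up):
sub normalize_memory {
    my $mem = shift;
    $mem =~ s/\s*Megabytes$/ MB/;    # "6144 Megabytes" -> "6144 MB"
    $mem =~ s/\s*Gigabytes$/ GB/;    # haven't seen this one yet, but just in case
    $mem =~ s/(\d)(MB|GB)$/$1 $2/;   # "512MB" -> "512 MB"
    return $mem;
}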
What I find even more entertaining is a daydream of an engineering team sitting around a table having a serious debate about changing the output from Megabytes to MB. With such a controversial topic, I'd imagine the debate was heated.