Monday, May 30, 2011

Accessing WDS information using VBScript and COM objects

Hi all

A few weeks ago I found myself looking for a way to script various WDS functions using VBScript. My searches turned up empty - nobody seemed to know of any COM objects or WMI namespaces available for the job. A search on COM objects, though, showed that there is indeed one to use - discovered by someone else via PowerShell.

The object's ProgID is WdsMgmt.WdsManager, and it lets you approve and reject computers in the pending queue and list approved and rejected machines, among other functions. Here's a script that lists the approved computers; PowerShell's Get-Member is an easy way to explore the object's other properties and methods.

Dim objWDS, objServer, objPending, objComputer, strMac

' Create the WdsMgmt COM object - here on the remote host PBORBU01 via
' DCOM - then connect to the WDS server running on that machine.
Set objWDS = CreateObject("WdsMgmt.WdsManager", "PBORBU01")
Set objServer = objWDS.GetWdsServer("localhost")
Set objPending = objServer.PendingDeviceManager.GetApprovedDevices

While not objPending.Eof

 Set objComputer=objPending.getNext()

 ' The MAC address comes back left-padded; pull the last twelve
 ' characters out as six hyphenated octet pairs.
 strMac = objComputer.MacAddress
 strMac = Mid(strMac, 21, 2) & "-" & Mid(strMac, 23, 2) & "-" & _
  Mid(strMac, 25, 2) & "-" & Mid(strMac, 27, 2) & "-" & _
  Mid(strMac, 29, 2) & "-" & Mid(strMac, 31, 2)

 Wscript.Echo "Name: " & objComputer.MachineName
 Wscript.Echo "MAC: " & strMac
 Wscript.Echo "Architecture: " & objComputer.Architecture
 Wscript.Echo "Last Changed: " & objComputer.LastChangeTime
 Wscript.Echo "Last Changed By: " & objComputer.LastChangeUser
 Wscript.Echo "Joined domain?: " & objComputer.JoinDomain
 Wscript.Echo ""

Wend

Friday, July 23, 2010

Petabytes by the penny

Hi all,

I recently found this very interesting blog post about a mass storage company that built its own SAN infrastructure using commodity hardware. I know it's old news, but I still think it's awesome - one of these things holds over three times what my organisation's EqualLogic PS6000s can, at a fraction of the cost.

A few storage experts have waded into the debate citing numerous issues. There's vibration, which is addressed with "anti-vibration sleeves" (aka rubber bands) and a large piece of foam in the top of the case; the high failure rate of the hard drives, which they acknowledge but say costs them only about one replacement a week across their entire infrastructure; poor throughput as a result of using PCI; and multiple single points of failure (the boot hard drive and the SATA cards) - which I will address here.

So what's wrong with the design? Nothing if throughput and business continuity aren't your goals (and you don't put all of your data on one machine - which these guys don't). However, there are still problems that need to be faced, especially if you intend to use only one of these units.

Consider a layout with five 2-port SATA cards instead of three 2-port cards and a 4-port card. Picture a grid in which each cell represents a drive, labelled with its SATA controller and port number, arranged in blocks of five to match the port multipliers.

So each controller gets 10 drives. Now say each column of the grid represents a RAID-6 array. What if controller 1 dies? That's 10 drives down and a RAID-6 array decimated. What about controller 2? That's two RAID arrays down the toilet in one fell swoop.

Not only that, the entire system hinges on a single, non-redundant boot hard drive. If that goes, there goes your system.

If 'twere up to me, I would run the array on EON - a version of OpenSolaris stripped down to the point where it fits in roughly 200 MB - and use a CompactFlash card instead of a boot hard drive (the image file would be backed up, so if the card failed I could just write the image to a new card, power down the pod and plug it in). I would then arrange the drives into five RAIDZ2 groups, each taking one drive from each SATA port. In that layout, the hypothetical SATA controller failure would, at worst, degrade an array.
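A shell sketch of that arrangement (the OpenSolaris-style device names are hypothetical, and the zpool command is only printed, not run, since it needs the real hardware):

```shell
#!/bin/sh
# Sketch: lay out five 10-disk raidz2 vdevs so that each vdev takes exactly
# one drive from each of the ten SATA ports (five 2-port controllers, one
# 5-drive port multiplier per port). Losing a controller then costs each
# vdev only two drives - degraded, but still online under raidz2.
cmd="zpool create tank"
for group in 0 1 2 3 4; do
  cmd="$cmd raidz2"
  for ctrl in 1 2 3 4 5; do
    for port in 0 1; do
      # drive number $group behind the multiplier on controller $ctrl, port $port
      cmd="$cmd c${ctrl}t${port}d${group}"
    done
  done
done
echo "$cmd" | tee ./zpool-cmd.txt
```

Kill controller 1 and each vdev loses only c1t0dX and c1t1dX - two drives, exactly what raidz2 tolerates.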

I hope I haven't scared anybody away from exploring cheap cloud storage - it helps greatly to know the pitfalls and how to get around them. The key to designing storage systems is not to design in failure by putting too many eggs in one basket.

Building these things is a great learning exercise, and a rewarding one at that. Good luck!

Tuesday, January 5, 2010

Creating a custom OpenBSD RAM Disk


  • One OpenBSD system with enough space to build a release and with the compilers install set installed. This guide assumes you have OpenBSD 4.6.
  • The src.tar.gz for your OpenBSD version.
  • The sys.tar.gz for your OpenBSD version.
  • The cdboot and cdbr files for your OpenBSD version.

Disclaimer (or: My lawyer made me do it)

This guide is just that – a guide. No responsibility is taken, implied or otherwise, for any liability for any damage whatsoever, be it immediate, as a residual effect or otherwise, caused by following this guide. OpenBSD’s developers will not answer any support requests regarding any issues that arise from following this guide. If it breaks, you get to keep both pieces.


  1. Unpack the src.tar.gz and sys.tar.gz into the /usr/src directory.
  2. Compile and install crunchgen:
    # cd /usr/src/usr.sbin/crunchgen
    # make && make install
  3. Compile the special tools used in the ramdisk:
    # cd /usr/src/distrib/special
    # make
  4. Edit /usr/src/distrib/i386/ramdisk_cd/list.local and add an entry for each program you wish to include. For instance, to add rsh to the ram disk, add the following alongside the existing LINK lines:
    LINK instbin                                    bin/rsh
  5. Build the ram disk:
    # cd /usr/src/distrib/i386/ramdisk_cd
    # make
  6. Build an iso image (this mirrors the official CD layout, with cdbr, cdboot and bsd.rd under 4.6/i386 and a boot.conf so cdboot loads the ram disk kernel):
    # mkdir -p isofiles/{etc,4.6/i386}
    # cp /path/to/your/cdbr isofiles/4.6/i386
    # cp /path/to/your/cdboot isofiles/4.6/i386
    # cp bsd.rd isofiles/4.6/i386
    # echo "set image /4.6/i386/bsd.rd" > isofiles/etc/boot.conf
    # mkhybrid -r -V "OpenBSD 4.6 Backup" -b 4.6/i386/cdbr \
    > -c boot.catalog -o cd46.iso isofiles/
  7. Burn the CD, or fire it up in your favourite VM.
  8. Profit!
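Incidentally, the LINK lines in step 4 work because crunchgen builds one big binary that decides which program to act as from the name it was invoked under. A quick shell sketch of the idea (file and program names hypothetical):

```shell
#!/bin/sh
# One "binary" (a script here), many hard links, dispatch on $0 - the same
# trick crunchgen's instbin uses to pack a whole userland into a ram disk.
mkdir -p ./crunchdemo
cat > ./crunchdemo/instbin <<'EOF'
#!/bin/sh
case "$(basename "$0")" in
  hello) echo "I am hello" ;;
  bye)   echo "I am bye" ;;
  *)     echo "link me as hello or bye" ;;
esac
EOF
chmod +x ./crunchdemo/instbin
ln -f ./crunchdemo/instbin ./crunchdemo/hello   # "LINK instbin bin/hello", in miniature
ln -f ./crunchdemo/instbin ./crunchdemo/bye
./crunchdemo/hello
./crunchdemo/bye
```

Running ./crunchdemo/hello and ./crunchdemo/bye gives different output from the same file, which is exactly why adding a program to the ram disk is just another LINK line.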

Other uses

  • You could also use the bsd.rd image and pxeboot from your CD and boot systems over a network using this ram disk you created.
  • You could add additional drivers to the ramdisk kernel to support devices needed to install your system. Be sure to modify /usr/src/sys/arch/i386/RAMDISK_CD to add the drivers you require. If the drivers aren't in GENERIC, this will necessitate a full release build, which is beyond the scope of this document.
  • You could modify the install scripts to perform a certain purpose. I created one that would allow the user to backup and restore their entire system to a central server.
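The network-boot idea in the first bullet can be sketched as follows (the ./tftpboot location is used here so it can be tried anywhere; a real server serves the directory tftpd is chrooted to, with DHCP pointing its filename option at pxeboot):

```shell
#!/bin/sh
# Sketch: lay out a TFTP root for netbooting the custom bsd.rd.
TFTPROOT=./tftpboot
mkdir -p "$TFTPROOT/etc"
# copy in the boot loader and the ram disk kernel built in the steps above:
# cp pxeboot bsd.rd "$TFTPROOT"/
# pxeboot fetches etc/boot.conf over TFTP; point it at the ram disk kernel
echo "set image /bsd.rd" > "$TFTPROOT/etc/boot.conf"
cat "$TFTPROOT/etc/boot.conf"
```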

Final word

Above all, you are responsible for whatever you do when you follow this guide. If it breaks, you get to keep both pieces.

In any case, good luck and have fun – that’s what it’s all about.

Tuesday, December 15, 2009

OpenBSD IPSec made easy

Hello all,

After much struggling and screaming, I now have a working IPSec configuration in a pair of VMs.

My "network" consists of three VMware host-only networks: the left LAN (192.168.120/24), the right LAN (192.168.33/24), and a transit network between the two gateway VMs (say 192.168.56.0/24, with the left gateway at .1 and the right at .2).


1) copy the left side's public key from /etc/isakmpd/ to /etc/isakmpd/pubkeys/ipv4/ on the right, named for the left gateway's IP address
2) copy the right side's public key from /etc/isakmpd/ to /etc/isakmpd/pubkeys/ipv4/ on the left, named for the right gateway's IP address
3) on left side (with local_ip, local_network, remote_ip and remote_network set in the shell to the left gateway's values):

cat >/etc/ipsec.conf <<EOF
ike esp from { $local_ip $local_network } to \
{ $remote_ip $remote_network } peer $remote_ip
ike esp from $local_ip to $remote_ip
EOF

4) on right side (with the same variables set to the right gateway's values):

cat >/etc/ipsec.conf <<EOF
ike esp from { $local_ip $local_network } to \
{ $remote_ip $remote_network } peer $remote_ip
ike esp from $local_ip to $remote_ip
EOF

5) To test, run "isakmpd -K -d", then "ipsecctl -f /etc/ipsec.conf" on each side.
6) Route each network via the other side's gateway address on the transit network (here assuming the transit network is 192.168.56.0/24 with the left gateway at .1 and the right at .2), e.g.:

obsd-ipsec-left# route add -net 192.168.33/24 192.168.56.2
obsd-ipsec-right# route add -net 192.168.120/24 192.168.56.1

7) Ping each side.
8) Fire up 'tcpdump -ni enc0' and ping each side again. If you get output, then we have succeeded.
9) Make isakmpd and IPsec start on boot (both machines):

# echo 'isakmpd_flags="-K"' >> /etc/rc.conf.local
# echo 'ipsec=YES' >> /etc/rc.conf.local

10) Make the route setting permanent (again using the example transit addresses):

obsd-ipsec-left# echo '!route add -net 192.168.33/24 192.168.56.2' \
>> /etc/hostname.vic1
obsd-ipsec-right# echo '!route add -net 192.168.120/24 192.168.56.1' \
>> /etc/hostname.vic1

11) Reboot
12) ...
13) Profit!
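To make steps 3 and 4 concrete, here's a sketch that generates one side's configuration from shell variables. All addresses below are hypothetical examples, and it writes ./ipsec.conf rather than /etc/ipsec.conf so it can be tried without touching a real system:

```shell
#!/bin/sh
# Generate the left gateway's ipsec.conf from shell variables.
local_ip=192.168.56.1
local_network=192.168.120.0/24
remote_ip=192.168.56.2
remote_network=192.168.33.0/24

cat > ./ipsec.conf <<EOF
ike esp from { $local_ip $local_network } to \
{ $remote_ip $remote_network } peer $remote_ip
ike esp from $local_ip to $remote_ip
EOF
cat ./ipsec.conf
```

Swapping the local_* and remote_* values produces the right-hand gateway's file, which is why the two sides' configs mirror each other.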

Wednesday, October 21, 2009

Bits from Bill: Free #1 Tweak to Improve Windows Performance

I was wandering aimlessly through some technology blogs and found the blog of a writer for PC Pitstop who also wrote a comprehensive tool for diagnosing Windows performance issues. In particular, he wrote a post about improving performance by emptying the Temporary Internet Files folder [1] - a very valid point, since Internet Explorer has permeated every facet of Windows ever since Windows 95 OSR2.

I posted a comment stating that there was another way to do this - something I do on every machine I build: give the swap file, the system files and your data separate partitions.

A general rule of thumb for swap partitions (a practice inherited from Unix-like operating systems) is 2 to 2.5 times the amount of RAM in your system. The system partition is trickier - you have to account not only for the system itself but for any applications you're likely to install. A good rule of thumb is 16 GB for Windows XP and 40 GB for Vista/7.

So let's put this into perspective: say we have a 500 gigabyte drive and want to install Windows 7 on a system with 4 gigabytes of RAM. Here's what we'd do:

C: (System) - 40GB (create this, install, then create the other two)
D: (Files) - ~ 415GB*
E: (Swap) - 10.5GB

* = this value will vary depending on the true size of the drive - I used 500 billion bytes as an example.
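A quick sanity check of those numbers - decimal gigabytes on the box versus the binary gigabytes Windows reports (the 10.5 GB swap figure above simply adds a little headroom over the strict 2.5x):

```shell
#!/bin/sh
# Working the example: a "500 GB" drive is 500e9 bytes, about 465 binary GB.
# Subtract the system and swap partitions to see what's left for files.
BYTES=500000000000
TOTAL=$((BYTES / 1073741824))      # bytes -> binary GB: ~465
SYSTEM=40                          # Windows 7 system partition
SWAP=$((4 * 5 / 2))                # 2.5 x 4 GB RAM = 10
FILES=$((TOTAL - SYSTEM - SWAP))
echo "C: ${SYSTEM}GB  E: ${SWAP}GB  D: ${FILES}GB" | tee ./partition-calc.txt
```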

Once you've set this up, right-click Computer in the Start menu and click Properties, then "Advanced system settings", then the Performance "Settings" button, then the Advanced tab, then "Change" under Virtual memory. Select drive C and choose "No paging file"; then select drive E and set both the minimum and maximum size to about 89% of the partition - any higher and Windows will keep whining that drive E is getting full.

The benefits are twofold. One: moving the swap file off the system drive keeps it from fragmenting as new applications are installed, so a fragmented system drive won't cause as many performance issues as it otherwise would. Two: if you need to reinstall Windows for whatever reason, there's no need to back up your data, as it will still be in the same place after installation. That said, you should back up anyway in case of any other "mishaps" that may occur from time to time.

Good luck!

[1] Bits from Bill: Free #1 Tweak to Improve Windows Performance

Thursday, September 24, 2009

Dumb security idea #3: Enumerating Badness

Hi all,

Today I was in the OpenWrt support chat room when someone asked about stopping people in his organisation from draining the available bandwidth with YouTube. OpenWrt uses dnsmasq, so it was a simple matter of blocking all domain names ending in youtube.com - and once you do that, you then only need to worry about CollegeHumor, Vimeo, Google Video, Ebaum's World...
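For reference, the dnsmasq side of that block is a single directive. A sketch (writing to ./blocklist.conf here; on OpenWrt it would go into /etc/dnsmasq.conf or a conf-dir file):

```shell
#!/bin/sh
# dnsmasq's address directive answers every name under the given domain
# with the address you specify - here, a blackhole.
echo "address=/youtube.com/127.0.0.1" > ./blocklist.conf
cat ./blocklist.conf
```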

It really highlights the stupidity of what's often referred to as Enumerating Badness, as outlined in an article about The Six Dumbest Ideas in Computer Security. In a nutshell, it's where you say "block this, block that, let everything else through". That made sense in the very early days, but it stopped making sense when the level of bad on the Internet began to vastly outweigh the level of good. It's estimated that for every legitimate program out there, there are dozens of pieces of malware, spyware, adware, trojans and viruses - millions in total these days. In the year to April 2008 alone, Symantec discovered over 711,000 new viruses. There's a good reason we pay $30 per year for our anti-virus updates - containing them all is a mammoth job.

The stupid thing is, it could be made so much simpler if we focused our attention on enumerating the good programs we use on our computers. It's a near-impossible task to track over a million bits of bad when even a simpleton could track 30 bits of good. Sadly, no operating system really supports this. Vista and Windows 7's UAC is a step in the right direction, but the problem is far from licked.

For the bandwidth problem, a far simpler solution is to look at the sorts of traffic that need priority, assign the highest priority to those streams, and set everything else at rock bottom. I would also put the remaining traffic on a throttle, just to be sure. This is far simpler than blocking every single video streaming website.
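That priority-plus-throttle idea can be sketched with Linux's htb qdisc via tc (the traffic shaper OpenWrt ships). The interface, rates and the ssh example are all hypothetical, and the commands are written to a scratch file rather than run, since they need root and a real interface:

```shell
#!/bin/sh
# "Enumerate goodness" for bandwidth: one high-priority class for known-good
# traffic, everything else defaulted into a low-priority, throttled class.
cat > ./qos-sketch.sh <<'EOF'
#!/bin/sh
tc qdisc add dev eth0 root handle 1: htb default 20
# the traffic you care about: high priority, generous rate
tc class add dev eth0 parent 1: classid 1:10 htb rate 512kbit ceil 1mbit prio 0
# everything else: rock-bottom priority AND a hard throttle
tc class add dev eth0 parent 1: classid 1:20 htb rate 64kbit ceil 128kbit prio 7
# classify interactive ssh into the high-priority class
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip dport 22 0xffff flowid 1:10
EOF
cat ./qos-sketch.sh
```

The "default 20" is what does the real work: anything you haven't explicitly classified lands in the throttled class, no enumeration of badness required.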

Wednesday, September 16, 2009

Free RAID! (Mk 2)

Hi all,

As it turns out, the NetBSD-sourced RAIDframe is being phased out of OpenBSD in favour of the internally developed softraid driver. That driver is still under development and lacks a number of userland tools (for recovery and the like) that would make it truly useful, so once it goes into production I'll blog about it here.

In the meantime, Sun has long had a very nicely working framework - ZFS - which, now that OpenSolaris has been released, is available for free. FreeBSD has integrated it into its base system, and it's available on Linux as a user-space FUSE driver. Both use the same set of tools we'll be using here.

First and foremost, we'll be using VMware again, but this time with EON (based on Solaris Express Community Edition) as our operating system - it runs entirely from memory, yet is stripped down enough to fit there and still have what you need. I recommend a small primary hard drive (2 GB is plenty) and several large SCSI drives; 512 MB of RAM will be fine. I created a 2 GB primary IDE drive and nine 500 GB SCSI drives.

Download an EON ISO image, then attach it as your VM's CD-ROM drive. Once it boots, log in as root/eonsolaris. There's no need to install just yet - we can play around here without having to install a thing.

Now, doing everything in the VMware console window makes life fun - especially when it doubles up your keystrokes like it does for me - so type ifconfig -a to get your IP address (it won't be on lo0, that's for sure - mine showed up on pcn0). SSH in using that address, but log in as admin/eonstore (the SSH server won't accept root logins, for good reason), then type su - and enter the root password to get root access.

Type format to get a list of the drives - if all goes to plan, you should see this:

Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0d0 <default>
       1. c2t0d0 <default>
       2. c2t1d0 <default>
       3. c2t2d0 <default>
       4. c2t3d0 <default>
       5. c2t4d0 <default>
       6. c2t5d0 <default>
       7. c2t6d0 <default>
       8. c2t8d0 <default>
       9. c2t9d0 <default>
Specify disk (enter its number):

Hit Ctrl+C to cancel out of this and to get back to the prompt. Now that we know what each hard drive is called, we can start creating an array. Let's start with raidz (which is similar to RAID 5 but has variable stripes to avoid the write hole):

eon:3:~#zpool create battery raidz c2t0d0p0 c2t1d0p0 c2t2d0p0 c2t3d0p0 c2t4d0p0 c2t5d0p0 c2t6d0p0 c2t8d0p0 c2t9d0p0
eon:4:~#zpool list
NAME     SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
battery  4.39T  147K  4.39T    0%  ONLINE  -
eon:5:~#zpool status
  pool: battery
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE    READ WRITE CKSUM
        battery       ONLINE      0     0     0
          raidz1      ONLINE      0     0     0
            c2t0d0p0  ONLINE      0     0     0
            c2t1d0p0  ONLINE      0     0     0
            c2t2d0p0  ONLINE      0     0     0
            c2t3d0p0  ONLINE      0     0     0
            c2t4d0p0  ONLINE      0     0     0
            c2t5d0p0  ONLINE      0     0     0
            c2t6d0p0  ONLINE      0     0     0
            c2t8d0p0  ONLINE      0     0     0
            c2t9d0p0  ONLINE      0     0     0

errors: No known data errors

And there we are. With one command, we have a 4.3TB drive array. It gets better though - with another command, we can set it up for sharing on CIFS:

eon:6:~#zfs set sharesmb=name=battery,guestok=true battery
eon:7:~#touch /battery/bob

Now, using that IP address, access the Samba share. You should see the file named "bob".

With another command, we can create an NFS export, an iSCSI target (with two commands), even set up swap space.
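For the curious, those follow-ups might look like the sketch below (OpenSolaris-era syntax; the zvol names are hypothetical, and the commands are written to a scratch file rather than run, since they need the live pool):

```shell
#!/bin/sh
# NFS, iSCSI and swap on top of the pool.
cat > ./zfs-extras.txt <<'EOF'
zfs set sharenfs=on battery                 # NFS export in one command
zfs create -V 100g battery/tgt0             # iSCSI, command 1: carve out a zvol
zfs set shareiscsi=on battery/tgt0          # iSCSI, command 2: export it as a target
zfs create -V 2g battery/swap0              # swap: another zvol...
swap -a /dev/zvol/dsk/battery/swap0         # ...attached as swap space
EOF
cat ./zfs-extras.txt
```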

And that's ZFS in a nutshell. When softraid grows up, I'll cover that in a blog as well. Later days!