Archive for the ‘Archived’ Category


Linux USB Multiboot

In Archived on January 16, 2011 by netritious

Boot Ubuntu Live CD, Server Installation, Alternate Installation, and Much More From USB Flash Drive.


  • Fix a broken computer.
  • Reduce waste, cost, and clutter by burning fewer CDs.
  • Install the latest Ubuntu Long Term Support (LTS) release of your choice with ease.
  • Powerful yet small enough to fit in your pocket.
  • Safely demonstrate Ubuntu to friends and family on their computer without making permanent changes.
  • Back up and restore files and/or entire drives to alternate media or a network file server.
  • Boot to a Live CD when Windows is broken for emergency access to internet services.
  • Remove Windows virus infections/spyware/malware even if you can’t boot into Windows!
  • Perform hardware and software diagnostics.
  • Choose from Live CD versions or Server/Alternate installs, in both i386 and amd64 releases.
  • Can be extended to support just about any distribution of Linux or Windows.
  • Works with almost any IBM-compatible PC manufactured during or after 2005.

Project Requirements

  • Intermediate computer skills, including changing BIOS boot priority options
  • IBM-compatible PC that supports booting USB-HDD
  • 8 GB USB flash drive
  • Windows XP/Vista/7 (I used Windows 7 Professional 64-bit; support for creating from Ubuntu coming soon!)*
  • Time and patience. This project can span many hours, possibly days.


  • Only works on modern computers manufactured during or after 2005, and even then it is not exactly standardized. You may do all this work just to find out you have to burn a CD anyway for some old computer that doesn’t support booting USB-HDD in the motherboard BIOS.
  • Easier to create using Windows than Ubuntu.**
  • I haven’t worked out how to do this completely from scratch using Ubuntu desktop. However, a Clonezilla image to use as a base to build on is prepared and will be linked soon.
  • This method worked for me, but that’s no guarantee it will work for you or the computer you want to boot using USB. The reasons why your particular computer won’t boot from the USB device are beyond the scope of this article. You can leave a comment if you run into trouble; someone will probably reply.

* My procedure for creating the multi-boot device from Ubuntu falls short of amazing, so the scope here is limited to Ubuntu 10.04 LTS versions/remixes and ISO-based booting. Please do leave comments about how to properly set up GRUB4DOS from the Ubuntu desktop or command line. I can’t seem to figure out the difference between syslinux.exe -maf <drive>: in Windows and syslinux -F /dev/sd<Xn>, except that the latter just doesn’t seem to work like I think it should.

** If you can avoid reinventing the wheel and save time you should. Using Windows XP-7 with “MultiBootISOs USB Creator” from is the fastest way I’ve found yet.

Steps to Create Using Windows

1. Plug in your USB flash drive that you want to make bootable. Close any prompts or windows that open concerning the drive.

2. Open Windows Explorer, right-click the flash drive, and select “Format”.

3. Change “File System” to FAT32 (if it isn’t already).

4. Uncheck the box “Quick Format” and click “Start”.

5. You will receive a Warning message about formatting. Click “Ok”.

6. When formatting is complete click “Ok”, then “Close”, then exit Windows Explorer.

7. Download MultiBootISOs USB Creator from

8. Find the downloaded MultiBootISOs- executable and double-click it to run.

9. Read the License. If you Agree click “I Agree” button. (Bottom-right)

10. Select your target drive using the list box. (It’s not A:, which is your floppy drive if you have one.)

11. Select ONLY “MemTest86+ (Memory Testing Tool)”

12. Check the box “Download the zip” and click “Yes” when prompted. This will open a web browser and prompt you to download the file. Remember where you save it.

13. Click the “Browse” button in the MultiBootISOs USB Creator window to select the zip that you downloaded in the last step.

14. Click the “Create” button. Click “Yes” when prompted both times.

15. When you see “Installation Done. Process is Complete” click “Next”.

16. When prompted with “Would you like to add more ISOs now?” :

  • Click “Yes” to add BackTrack 4 Final. Follow steps 3-15 just replacing references to Memtest with what you are actually adding.
  • Click “Yes” to add Clonezilla if you have already added BackTrack 4 Final. Follow steps 3-15 just replacing references to Memtest with what you are actually adding.
  • Click “No” to exit “MultiBootISOs USB Creator” and go to next step.

17. Download Ubuntu 10.04.1 Desktop, Server, and Alternate (i386 and amd64), plus Ubuntu Netbook Remix, using your preferred method. I recommend using uTorrent on Windows and the torrent links listed at the bottom of the download page, but if you are not familiar with torrents, use the regular http:// links at the top of the page to download with just your browser. You *CAN* save the Desktop and Netbook versions directly to the USB flash drive, but save the Server and Alternate ISOs elsewhere; Server and Alternate do not work properly in ISO form. More instructions on this follow.

18. Other ISOs to download and save to USB flash drive:

  • Ubuntu Rescue Remix 10.04 – Ubuntu Server command line Live CD. Download | Home Page
  • AVG Rescue CD – Offline Windows Spyware/Antivirus Removal. Download | Home Page
  • Darik’s Boot And Nuke (DBAN) – Hard drive eraser. Also good for preparing hard drives for Full Disk Encryption. Download | Home Page
  • Offline NT Password & Registry Editor  – Best known for resetting Windows Administrator password, and/or enabling the account. Un-zip the .iso file to flash drive. Download | Home Page

19. Create the following directories on your USB flash drive where E: is the drive letter assigned to flash drive; amend as necessary:

  • E:\server\i386
  • E:\server\amd64
  • E:\alternate\i386
  • E:\alternate\amd64

Convention: <release>= server or alternate; <arch>= i386 or amd64

20. Extract the entire contents of ubuntu-10.04.1-<release>-<arch>.iso to the corresponding directory on the flash drive, e.g. E:\<release>\<arch>

21. Rename E:\<release>\<arch>\isolinux\ to E:\<release>\<arch>\syslinux\

22. Rename E:\<release>\<arch>\syslinux\isolinux.cfg to E:\<release>\<arch>\syslinux\syslinux.cfg

23. Copy E:\ldlinux.sys to E:\<release>\<arch>\ldlinux.sys

  • If you have completed fewer than four ISO files, repeat steps 20-23 for each additional Server or Alternate ISO file not yet completed.
  • If you have completed all four ISO files, continue to step 24.

24. Copy/paste menu.lst on the USB flash drive to create a copy. Open menu.lst with notepad and replace the contents with the menu.lst here.

25. Optional: Download my customized Ubuntu themed splash.xpm.gz to the USB flash drive, replacing the one from

Reboot the computer and have fun!
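For anyone attempting steps 19-23 from Linux rather than Windows, the directory layout and renames can be scripted. This is only a hedged sketch: DRIVE is a hypothetical stand-in (a temp directory here, so it runs anywhere), and the ISO extraction itself is left as a commented step since it needs a loop mount or p7zip.

```shell
#!/bin/sh
# Sketch of steps 19-23 for all four <release>/<arch> pairs.
# DRIVE is hypothetical: point it at your mounted flash drive; a temp dir is
# used here so the renames can be exercised without real media.
DRIVE="$(mktemp -d)"
touch "$DRIVE/ldlinux.sys"   # stand-in; syslinux puts the real one on the drive

for release in server alternate; do
  for arch in i386 amd64; do
    dest="$DRIVE/$release/$arch"
    mkdir -p "$dest/isolinux"                                       # step 19
    # step 20: extract the ISO contents into $dest, e.g.
    #   7z x "ubuntu-10.04.1-$release-$arch.iso" -o"$dest"
    touch "$dest/isolinux/isolinux.cfg"       # stand-in for an extracted file
    mv "$dest/isolinux" "$dest/syslinux"                            # step 21
    mv "$dest/syslinux/isolinux.cfg" "$dest/syslinux/syslinux.cfg"  # step 22
    cp "$DRIVE/ldlinux.sys" "$dest/ldlinux.sys"                     # step 23
  done
done
ls "$DRIVE/server/i386/syslinux"
```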



Local Ubuntu Repository

In Archived on September 17, 2010 by netritious

When I first set out using Ubuntu last year, I tried apt-cacher to minimize the update traffic between my network and the Ubuntu servers.

Why? Because the experience of installing a package or updating a fresh Ubuntu installation takes a fraction of the time, saves bandwidth, frees up Ubuntu servers, and works when an internet connection is unavailable.

apt-cacher did not work out for me, probably because I was so green at the time. However, the process described in the Ubuntu Community Documentation for using debmirror is straightforward and thorough. So far, following this guide has given me good results.

Here are some tips and tricks before running the debmirror script from the community documentation:

  1. Don’t create a mirror for a single computer. It’s a complete waste of time, effort, and bandwidth if you do IMHO.
  2. Use the list at to find an official Ubuntu mirror instead of using the default <your country> mirror. Consider speed and latency of the hosts available in your country from the list. I used the Argonne National Laboratory mirror in Chicago, IL, since it was up to date, offers a fast 10 Gbps connection, and had low ping latency.
  3. Do your initial run from a selected mirror to speed up the process, but change your config in the script from the documentation back to <suitable-mirror>, and re-sync. I had over 75 HTTP 404 errors after the first run, and changing to use resolved the issue.
  4. Prepare to give up 100-200 GB of storage space somewhere on your network. As of 12pm today:
    netritious@myob:~# du -s -h /usr/local/mirror/
    89G	/usr/local/mirror/
  5. Prepare for your connection downstream to become congested during the first few syncs. If you have less than a 3Mbps connection, it will take a few days. I have a 16Mbps connection and it still took about 36 hours and the speed averaged (almost) 7Mbps.
  6. If you followed the documentation for debmirror to a ‘T’ then you will need to remove the line from the script that contains --progress so your cron-generated emails aren’t a mile long.
  7. Again, if you followed the tutorial to a ‘T’ then like myself you may be wondering what the effect of mirroring only the .debs and not deb-src is when it comes to installing packages that require building some dependency from source. To be honest, I’m uncertain whether it has any effect at all. I have installed at least 50 or so packages from my new local repository and haven’t seen any ill effects.
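As a rough sanity check on the bandwidth claim in tip 5, transfer time is just size divided by throughput. Using the ~89 GB mirror size and ~7 Mbps average from above (1 GB taken as 8000 megabits):

```shell
# 89 GB at ~7 Mbps: convert GB to megabits (x 8000), divide by rate, then by 3600 s/h
hours=$(( 89 * 8000 / 7 / 3600 ))
echo "about $hours hours"   # roughly a day, the same ballpark as the ~36 hours observed
```

The initial sync moves more data than the final on-disk size, which would account for the gap between the estimate and the observed time.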

In comparison, this is something that is very convoluted with Microsoft products. I was never happy with the amount of work involved to keep local machines updated from a local source be it from a network share or CD/DVD media. In my opinion there are fewer caveats with a locally hosted Ubuntu mirror.

Here is the final script that now runs every twelve hours.

#!/bin/sh
# Credits:

# Hypothetical example values -- substitute your own mirror settings
# (the variable names follow the debmirror Ubuntu Community Documentation):
arch=i386,amd64; section=main,restricted,universe,multiverse
server=us.archive.ubuntu.com; inPath=/ubuntu; proto=http
release=lucid,lucid-updates,lucid-security,lucid-backports
outPath=/usr/local/mirror   # matches the du output shown above

export GNUPGHOME=/home/mirrorkeyring

# The original post lost the line continuations; debmirror is one command:
debmirror -a $arch \
  -s $section \
  -h $server \
  -d $release \
  -r $inPath \
  -e $proto \
  $outPath
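The post doesn’t show the cron entry itself. As one hedged example (the wrapper script path is hypothetical), a root crontab line that runs the sync every twelve hours could look like:

```shell
# m  h    dom mon dow  command           (fires at 00:00 and 12:00 daily)
0   0,12  *   *   *    /usr/local/bin/mirror-sync.sh
```

Leaving stdout attached (no redirect to /dev/null) keeps the cron emails mentioned in tip 6.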
Donate a comment if you found this useful. ;)

Updated (22:58): Fixed grammar and typos. Added example of current mirror size. Added my adjusted script from the debmirror Ubuntu Community Documentation.


pfGuard
In Archived on June 26, 2010 by netritious

What do you get when you install pfSense on a WatchGuard FireBox x1250e? You guessed it – pfGuard.

Some before and after shots.

“pfGuard” to the best of my knowledge was coined by someone at the pfSense forums, which by the way has some mighty helpful folks.

A special thank you to “Anonymous” for donating the WatchGuard FireBox, to BSD Perimeter, LLC for pfSense, and to the members of the pfSense forums; without their posted successes and failures I wouldn’t have been able to pull this off!

Comments Off on pfGuard


Home Office

In Archived on April 25, 2010 by netritious

I did quite a bit of reorganizing, upgrading, and re-purposing at the house. Here are some pics of the effort. Pictures taken with Motorola BackFlip are a little grainy, but you get the picture. Pun intended :-p



Does pfSense Really Make Sense?

In Archived on April 12, 2010 by netritious

Far from being an expert in network security (or even a novice, for that matter), I tend to follow my instincts. The instinct in question is primal and most likely unhealthy for me physically, but it keeps my network of systems healthy: INTRUSION PARANOIA.

To the best of my knowledge I have experienced only one security breach by someone who was not authorized to test my systems, back in 2001. The breach could have been serious but wasn’t, really. Someone was able to create an arbitrarily named directory in the FTP root directory, and subdirectories within it, though no files existed in these directories. Windows 2000 Server refused to delete the directories; I suspect this was due to the naming convention used by the attacker and Windows’ inability to understand the set of characters that made up the directory names. In the end I backed up the essentials, formatted the drive, and restored a thoroughly scanned backup. Nothing was broken exactly, but it made me nervous.

What if the entire server had been compromised? What if the attacker had breached more than what was apparent? What if it was a new type of exploit that was not readily known? Let’s just say I lost a bit of sleep over it. I was so certain that my $200 SMC Barricade, which came highly recommended by a close friend knee-deep in security certifications, would keep the bad hackers out, but I was seriously mistaken. Not because of the product, but because I was too complacent to find out for myself.

Since the breach I have taken network and system security seriously. Custom system ACLs, frequent penetration testing, classes, lots of reading, and quite a bit of programming in the pursuit of understanding exactly what occurs during server/client communications and how difficult compromising that communication would be. Last but not least — better firewalls at the perimeter.

This has led me on a long journey of research, development, and testing which is winding down. On the journey I’ve discovered that I can’t keep out ill-intentioned individuals that know more than I do, but I can sure try to make it as difficult as possible and there have been and are many projects based on that particular viewpoint.

One product that I have thoroughly enjoyed testing is Astaro Security Linux. It is an administrator’s dream, and while Astaro meets my standards for ease of use, RTO, and reporting, the licensing doesn’t meet my budget. I don’t like the sales model either: free versions with basic features and/or strict usage clauses (home use, fewer than 10 LAN IP addresses, no more than 10,000 connections, etc.) and paid versions that require annual subscription renewals. To me it’s an obvious route to vendor lock-in, and no matter how much I like a vendor’s product I can’t afford $500-$1000 for features which are based on free software from open source projects.

It’s not just about being free though — how can I contribute anything other than criticism to a closed source project? I mean let’s face facts, closed source means the vendor has control of the software, so no matter how much I would like to enhance or extend the software the vendor isn’t going to let me, and in most cases it would be a violation of the vendor’s Terms of Use.

This has led me to the discovery and use of m0n0wall and a fork of the m0n0wall project called pfSense. Both are based on FreeBSD and I have some prior experience that made it seem like a comfortable idea. The drawback with m0n0wall is that it isn’t easily extensible out-of-the-box, and is so customized that I would have to seriously dedicate myself to just m0n0wall itself to do so. This is not congruent with my agenda.

pfSense, on the other hand, is meant to be extended, either through pre-packaged, downloadable software from pfSense or from the FreeBSD repositories via pkg_add. What’s strange is that, like m0n0wall, it is a highly customized installation of FreeBSD, and my previous experience involved building software using ports, not installing binary packages.

Earlier this morning I spent a good bit of time criticizing pfSense shortcomings to one of the resident security hackers who frequents the LoCoTN channel. (Name withheld because I dislike name dropping and it might not be appreciated.) One important thing that was mentioned by this generous listener was the fact that we administrators are like David and Goliath — administrators being David, the army of black hats being Goliath — and that we have come along quite nicely compared to a single person with five stones and a slingshot versus a larger than life threat. At least that was my interpretation. It was also mentioned that if pfSense is providing security with the features I’ve chosen to employ with the packages available then what exactly is the problem? What standard do I have to compare? Well Astaro of course.

But then I realized that I hadn’t mentioned anything I liked about pfSense, and that I was being short-sighted. So, to redeem myself I restate here for the world to see what it is I do like about pfSense:

Snort, stunnel, Nmap, an SSH server with support for authorized keys, Squid, SquidGuard — even cron. All features that are unavailable in m0n0wall, available in Astaro Security Linux for $$$, but free with pfSense.

At this point I have tested ZeroShell, eBox, Untangle, IPCop, SmoothWall, and others that aren’t worth mentioning. So far pfSense has the best set of features with a BSD license and runs just fine on x86 hardware. I had stability issues with a previous version in the middle of last year, but so far this round of testing hasn’t been as disastrous as the first. I won’t go into the details of last year’s hiccups, mainly because I did earlier this morning, and partly because this isn’t a critical review, but rather my personal soap box. I mean, look at how long this article is — what else could it be? :-)

As I stated earlier, the journey is about to end and it appears it might end with pfSense as the final destination for my security needs. I’m 85% certain as I still have to test for RTO. One thing I’ve discovered along the way is there is not ONE answer — not ONE particular appliance or open source project; not ONE mind set or set of instructions that will slay Goliath, but picking the right slingshot and set of stones will probably increase my chances of not having to slay anyone at all.

If you are looking to put a modest PC to work as a firewall with the best feature set available in free open source software, I recommend giving pfSense a try. You can visit the web site at



Ubuntu as VMware Guest

In Archived on November 6, 2009 by netritious

From the ACPI/piix4_smbus driver module conflict that causes an Ubuntu Server guest to hang on boot, to iptables not being compiled into linux-image-virtual, I was beginning to wonder (again) why I considered Ubuntu for various production virtual machines.

But now that Karmic has been officially released, I decided to install it on an older laptop that serves as the office jukebox and general net-top. Previously installed was Jaunty Server with GNOME added on top, intended as a lean install, so a clean install of Ubuntu Desktop was in order. (Should I really describe the pain and suffering of using a minimal server install with bare GNOME? Trust me, if you want a desktop, just install the Desktop version of Ubuntu. Besides, I’ve heard chatter on Freenode about upgrades going bad and taking a long time to do so — Jaunty->Karmic, all variants.)

It appears to run well enough. Slightly sluggish with Flash and Screenlets but that’s to be expected with older hardware. All in all I would say it works as well as Jaunty. Boot time is a bit longer by 10 seconds, but who’s counting.

This encouraged me to use Karmic Server as a VMware guest for the Ubuntu TN LoCo Team VPS I promised to contribute roughly two months ago. The first installation was default, default, default. After the initial reboot I see the tired message about ACPI and piix4_smbus conflict:

ACPI: I/O resource piix4_smbus [0x1040-0x1047] conflicts with ACPI region SMB_ [0x1040-0x104b]

I decided to reinstall using the ‘minimal_vm’ mode, and voila! The message was gone! But after a little more setup, I realized I had traded one problem for another:

# iptables -L
FATAL: Module ip_tables not found.
iptables v1.4.4: can't initialize iptables table `filter': iptables who? (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.

Long story short, iptables (and net/ipv4 modules for that matter) are not compiled into the 2.6.31-14-virtual kernel. At this point I headed over to #ubuntu-server where it was suggested to try:

sudo apt-get install linux-image-server

..and after adding some required dependencies I rebooted to see:

ACPI: I/O resource piix4_smbus [0x1040-0x1047] conflicts with ACPI region SMB_ [0x1040-0x104b]

Look familiar? Yep, it’s the initial error I received after installing Karmic Server with setup defaults, but a quick iptables test revealed:

# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

At this point I recall sending to the #ubuntu-server channel “/me may not have cake and eat it too”, but I refused to give in. I knew there had to be a way to disable the offending module (i2c_piix4) and keep iptables. I wasn’t able to get an answer in #ubuntu-server, but I did eventually track down similar bugs in Launchpad. In a comment it was suggested to ‘blacklist the driver’, which led me on a small Google expedition to find out what this meant exactly.

After some persistence Google yielded results I could use. Based on an example to blacklist an ATI driver module in Dapper, I was able to locate the module blacklist configuration file and edit it accordingly.

So the steps I used to install minimal_vm with iptables minus the nagging, boot lagging ACPI/piix4_smbus conflict:

Step 1. Install Karmic Server using minimal_vm mode. (Press F4, DOWN, DOWN, ENTER, ENTER after the Language prompt and provide defaults if just testing.)

Step 2. After the initial reboot you will /NOT/ see the ACPI/piix4_smbus driver module conflict message. Boot time should be quick with semi-modern hardware. At this point though iptables is not available as a kernel module. Resolved using:

sudo -s
apt-get update
apt-get upgrade
apt-get install linux-image-`uname -r`
apt-get install linux-image-generic-pae
apt-get install linux-image-server  # for good measure

After a second reboot the ACPI/piix4_smbus message will appear. If you test iptables using iptables -L you should not see any errors.

Step 3. Now it’s time to rid myself of that message:

sudo -s
vi /etc/modprobe.d/blacklist.conf

At the very bottom I added the following lines:

# 2009-11-06 - Fix ACPI/piix4_smbus conflict message and hang time on boot
blacklist i2c_piix4

Exit/save changes in vi and reboot:

reboot
Finished! I now have a minimal_vm install WITH iptables and no pesky driver module adding to the boot time. I’m sure there is a little cleanup left to do, but I am so happy to finally have a lean Karmic Server install in a VM, be rid of the boot error, boot up about 20 seconds faster, and have access to iptables.
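The Step 3 edit can also be done non-interactively instead of opening vi. A hedged sketch, shown here against a scratch copy so nothing on your system changes; on a real system, point conf at /etc/modprobe.d/blacklist.conf and run with sudo:

```shell
#!/bin/sh
# Append the blacklist entry without an editor.
# conf is a scratch file here; substitute /etc/modprobe.d/blacklist.conf for real.
conf="$(mktemp)"
printf '%s\n' \
  '# 2009-11-06 - Fix ACPI/piix4_smbus conflict message and hang time on boot' \
  'blacklist i2c_piix4' >> "$conf"
grep 'blacklist' "$conf"
```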

Shouts out to Reepicheep in #ubuntu-server for pointing me in a direction with apt-get install linux-image-server.


VMware Guests on DRBD

In Archived on October 29, 2009 by netritious

For the past six months I have used Heartbeat, DRBD, and VMware Server to research Highly Available Virtual Machines as a platform for new hosting business ventures.

When the platform works, it works well, but if DRBD encounters any type of problem the stability of the virtual machines is directly affected; they usually become inaccessible and unresponsive, via the host console or otherwise.

I have found the same to be true for Heartbeat. Twice now I have seen complete VMware guest breakdown from Heartbeat refusing to kill itself (sudo killall heartbeat), and not due to a failover or failback — just sporadically, like the host server’s lights are on but nobody is home. I typically have to kill the power by holding in the power button, wait a few seconds, and turn the power on again.

One of the two times this happened it resulted in DRBD split brain, and both servers were designated secondary/secondary. Because the VMware guest images are located on the unmounted DRBD volume on both servers, there is no chance of accessing them until this is resolved. Luckily the DRBD documentation is excellent (the Heartbeat documentation is not as well organized, but it is available), and I was able to recover, though not in a very short period of time. More like 2-4 hours.

The second time split brain happened DRBD was primary/primary. I had no idea which one was secondary last, so which to recover? Which is the victim and which is the survivor?

This defeats the purpose of using ha-linux (Heartbeat+DRBD) solutions…at least at the host level for virtual machine replication. It is very possible that it is due to some misconfiguration on my part but why does it work so well then sporadically crash? It could be I need a fencing device to disable the victim node, but I still don’t quite understand fencing. I know what it does, but everything I’ve read implies additional equipment which I have been unable to locate. It’s still a little unclear, but that’s my point.
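For reference, the manual split-brain recovery described in the DRBD 8 User’s Guide comes down to exactly the choice above: pick a victim and discard its changes. A hedged sketch follows; the resource name r0 is an assumption, and the run wrapper only prints the commands unless RUN=1 is set, so nothing here touches a live cluster:

```shell
#!/bin/sh
# Dry-run wrapper: prints the drbdadm commands unless RUN=1 is set on a real node.
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

# On the node whose changes you are willing to discard (the "victim"):
run drbdadm secondary r0
run drbdadm -- --discard-my-data connect r0

# On the surviving node (only needed if it also dropped to StandAlone):
run drbdadm connect r0
```

This answers the “which is the victim?” question operationally: whichever node you run the discard on loses its divergent writes, so the choice has to be made before typing anything.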

Recently I stumbled upon a KB document on the VMware web site which might solve my problem. I can’t recall exactly what my Google keyword search contained, but here is the link to a very good document on clustering solutions for ESX Server v1 which also applies to VMware Server v1.

Essentially, what’s described in the document that appeals to me is not replicating the guests, but instead deploying the guest on two host servers as a set. Replication and failover happen at the guest layer, between the guests. The host servers do just that: host the guest virtual machines.

This is in direct contrast to what I have been doing, and at first glance appears to be a more complex solution, but not really. DRBD secondary volumes cannot normally be mounted; it can be done, but it’s tricky and has the feel of a hack instead of a solution. No mount means no access to the guests.

I have a lot of work to do but I’m on the right track. Hopefully I’ll have all the necessary changes implemented this weekend including installation and configuration of two new firewalls.

Wish me luck!

