Does pfSense Really Make Sense?

In Archived on April 12, 2010 by netritious

Far from being an expert in network security (or even a novice, for that matter), I tend to follow my instincts. This instinct is primal and most likely physically unhealthy, but it keeps my network of systems healthy: INTRUSION PARANOIA.

To the best of my knowledge I have experienced only one serious security breach by someone who was not authorized to test my systems, back in 2001. The breach could have been serious but wasn’t, really. Someone was able to create an arbitrarily named directory in the FTP root directory, and subdirectories within it, though no files existed in any of them. Windows 2000 Server refused to delete these directories; I suspect it was due to the naming convention used by the attacker and Windows’ inability to understand the set of characters that made up the directory names. In the end I backed up the essentials, formatted the drive, and restored a thoroughly scanned backup. Nothing was broken exactly, but it made me nervous.

What if the entire server had been compromised? What if the attacker had breached more than what was apparent? What if it was a new type of exploit that was not yet widely known? Let’s just say I lost a bit of sleep over it. I was so certain that my $200 SMC Barricade, which came highly recommended by a close friend knee-deep in security certifications, would keep the bad hackers out, but I was seriously mistaken. Not because of the product, but because I was too complacent to find out for myself.

Since the breach I have taken network and system security seriously: custom system ACLs, frequent penetration testing, classes, lots of reading, and quite a bit of programming, all in pursuit of understanding exactly what occurs during server/client communication and how difficult compromising that communication would be. Last but not least, better firewalls at the perimeter.

This has led me on a long journey of research, development, and testing, one that is now winding down. Along the way I’ve discovered that I can’t keep out ill-intentioned individuals who know more than I do, but I can certainly try to make it as difficult as possible, and there have been, and still are, many projects based on that particular viewpoint.

One product that I have thoroughly enjoyed testing is Astaro Security Linux. It is an administrator’s dream, and while Astaro meets my standards for ease of use, RTO, and reporting, the licensing doesn’t meet my budget. I don’t like the sales model either: free versions with basic features and/or strict usage clauses (home use, fewer than 10 LAN IP addresses, no more than 10,000 connections, etc.) and paid versions that require annual subscription renewals. To me it’s an obvious route to vendor lock-in, and no matter how much I like a vendor’s product, I can’t afford $500-$1000 for features which are based on free software from open source projects.

It’s not just about being free, though. How can I contribute anything other than criticism to a closed source project? Let’s face facts: closed source means the vendor has control of the software, so no matter how much I would like to enhance or extend it, the vendor isn’t going to let me, and in most cases doing so would violate the vendor’s Terms of Use.

This has led me to the discovery and use of m0n0wall and a fork of the m0n0wall project called pfSense. Both are based on FreeBSD, with which I have some prior experience, so it seemed like a comfortable idea. The drawback with m0n0wall is that it isn’t easily extensible out of the box; it is so customized that I would have to seriously dedicate myself to m0n0wall alone to extend it. This is not congruent with my agenda.

pfSense, on the other hand, is meant to be extended, either through pre-packaged, downloadable software from pfSense or from FreeBSD repositories via pkg_add. What’s strange is that, like m0n0wall, it is a highly customized installation of FreeBSD, and my previous experience involved building software from ports, not installing binary packages.

Earlier this morning I spent a good bit of time criticizing pfSense’s shortcomings to one of the resident security hackers who frequents the LoCoTN channel. (Name withheld because I dislike name dropping and it might not be appreciated.) One important point this generous listener made was that we administrators are like David facing Goliath, administrators being David and the army of black hats being Goliath, and that we have come along quite nicely for a single person with five stones and a slingshot versus a larger-than-life threat. At least that was my interpretation. It was also pointed out that if pfSense is providing security with the features I’ve chosen to employ through the available packages, then what exactly is the problem? What standard do I have to compare against? Well, Astaro of course.

But then I realized that I hadn’t mentioned anything I liked about pfSense, and that I was being short-sighted. So, to redeem myself I restate here for the world to see what it is I do like about pfSense:

Snort, stunnel, Nmap, an SSH server with support for authorized keys, Squid, SquidGuard, even cron: all features that are unavailable in m0n0wall, available in Astaro Security Linux for $$$, but free with pfSense.

At this point I have tested ZeroShell, eBox, Untangle, IPCop, SmoothWall, and others that aren’t worth mentioning. So far pfSense has the best set of features with a BSD license, and it runs on x86 hardware just fine. I had stability issues with a previous version in the middle of last year, but this round of testing hasn’t been as disastrous as the first. I won’t go into the details of last year’s hiccups, mainly because I did earlier this morning, and partly because this isn’t a critical review but rather my personal soap box. I mean, look at how long this article is; what else could it be? :-)

As I stated earlier, the journey is about to end, and it appears it might end with pfSense as the final destination for my security needs. I’m 85% certain, as I still have to test for RTO. One thing I’ve discovered along the way is that there is not ONE answer. Not ONE particular appliance or open source project; not ONE mindset or set of instructions that will slay Goliath. But picking the right slingshot and set of stones will probably increase my chances of not having to slay anyone at all.

If you are looking to put a modest PC to work as a firewall with the best feature set available in free open source software, I recommend giving pfSense a try. You can visit the pfSense web site for downloads and documentation.



Ubuntu as VMware Guest

In Archived on November 6, 2009 by netritious

From the ACPI/piix4_smbus driver module conflict that causes an Ubuntu Server guest to hang on boot, to iptables support missing from linux-image-virtual, I was beginning to wonder (again) why I considered Ubuntu for various production virtual machines.

But now that Karmic has been officially released, I decided to install it on an older laptop that serves as the office jukebox and general net-top. Previously installed was Jaunty Server with Gnome added, intended to be a lean install, so a clean install of Ubuntu Desktop was in order. (Should I really describe the pain and suffering of using a minimal server install with bare Gnome? Trust me, if you want a desktop, just install the Desktop version of Ubuntu. Besides, I’ve heard chatter on Freenode about Jaunty-to-Karmic upgrades, all variants, going bad and taking a long time to do so.)

It appears to run well enough. It is slightly sluggish with Flash and Screenlets, but that’s to be expected with older hardware. All in all I would say it works as well as Jaunty. Boot time is longer by about 10 seconds, but who’s counting?

This encouraged me to use Karmic Server as a VMware guest for the Ubuntu TN LoCo Team VPS I promised to contribute roughly two months ago. The first installation was default, default, default. After the initial reboot I saw the tired message about the ACPI and piix4_smbus conflict:

ACPI: I/O resource piix4_smbus [0x1040-0x1047] conflicts with ACPI region SMB_ [0x1040-0x104b]

I decided to reinstall using the ‘minimal_vm’ mode, and voila! The message was gone! But after a little more setup, I realized I had traded one problem for another:

# iptables -L
FATAL: Module ip_tables not found.
iptables v1.4.4: can't initialize iptables table `filter': iptables who? (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.

Long story short, iptables (and net/ipv4 modules for that matter) are not compiled into the 2.6.31-14-virtual kernel. At this point I headed over to #ubuntu-server where it was suggested to try:

sudo apt-get install linux-image-server

…and after adding some required dependencies I rebooted to see:

ACPI: I/O resource piix4_smbus [0x1040-0x1047] conflicts with ACPI region SMB_ [0x1040-0x104b]

Look familiar? Yep, it’s the initial error I received after installing Karmic Server with setup defaults, but a quick iptables test revealed:

# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

At this point I recall sending “/me may not have cake and eat it too” to the #ubuntu-server channel, but I refused to give in. I knew there had to be a way to disable the offending module (i2c_piix4) and keep iptables. I wasn’t able to get an answer in #ubuntu-server, but I did eventually track down similar bugs on Launchpad. In a comment it was suggested to ‘blacklist the driver’, which led me on a small Google expedition to find out what exactly this meant.

After some persistence, Google yielded results I could use. Based on an example that blacklisted an ATI driver module in Dapper, I was able to locate the module blacklist configuration file and edit it accordingly.

So here are the steps I used to install minimal_vm with iptables, minus the nagging, boot-lagging ACPI/piix4_smbus conflict:

Step 1. Install Karmic Server using minimal_vm mode. (Press F4, DOWN, DOWN, ENTER, ENTER after the Language prompt and provide defaults if just testing.)

Step 2. After the initial reboot you will /NOT/ see the ACPI/piix4_smbus driver module conflict message. Boot time should be quick on semi-modern hardware. At this point, though, iptables is not available as a kernel module. This is resolved using:

sudo -s
apt-get update
apt-get upgrade
apt-get install linux-image-`uname -r`
apt-get install linux-image-generic-pae
# for good measure
apt-get install linux-image-server

After a second reboot the ACPI/piix4_smbus message will appear again. If you test iptables using iptables -L, you should not see any errors.

Step 3. Now it’s time to rid myself of that message:

sudo -s
vi /etc/modprobe.d/blacklist.conf

At the very bottom I added the following lines:

# 2009-11-06 - Fix ACPI/piix4_smbus conflict message and hang time on boot
blacklist i2c_piix4

Exit/save changes in vi and reboot:

reboot
Finished! I now have a minimal_vm install WITH iptables and no pesky driver module adding to boot time. I’m sure there is a little cleanup left to do, but I am so happy to finally have a lean Karmic Server install in a VM, be rid of the boot error, boot up about 20 seconds faster, and have access to iptables.
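As a side note, the blacklist edit from Step 3 can also be made without opening vi. Here is a small sketch using tee; TARGET points at a scratch file so the snippet is safe to run anywhere, but on a real Karmic box the file would be /etc/modprobe.d/blacklist.conf and the tee would run through sudo:

```shell
# Append the blacklist entry non-interactively. TARGET is a scratch file
# for demonstration; on a real system use /etc/modprobe.d/blacklist.conf
# and run the tee through sudo.
TARGET=$(mktemp)
echo "blacklist i2c_piix4" | tee -a "$TARGET"
# Confirm the entry landed in the file:
grep "^blacklist i2c_piix4$" "$TARGET"
```

Handy if you are scripting the whole minimal_vm build end to end.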

Shout out to Reepicheep in #ubuntu-server for pointing me in the right direction with apt-get install linux-image-server.


VMware Guests on DRBD

In Archived on October 29, 2009 by netritious

For the past six months I have used Heartbeat, DRBD, and VMware Server to research Highly Available Virtual Machines as a platform for new hosting business ventures.

When the platform works, it works well, but if DRBD encounters any type of problem, the stability of the virtual machines is directly affected; they usually become inaccessible and unresponsive via host console or otherwise.

I have found the same to be true for Heartbeat. Twice now I have encountered complete VMware guest breakdown from Heartbeat refusing to kill itself (sudo killall heartbeat), and not due to a failover or failback; it happens sporadically, as if the host server’s lights are on but nobody is home. I typically have to kill the power by holding in the power button, waiting a few seconds, and turning the power on again.

One of the two times this happened it resulted in DRBD split brain, with both servers designated secondary/secondary. Because the VMware guest images are located on the unmounted DRBD volume on both servers, there is no chance of accessing them until the split brain is resolved. Luckily the DRBD documentation is excellent (the Heartbeat documentation is not as well organized, but it is available), and I was able to recover, though not in a very short period of time. More like 2-4 hours.

The second time split brain happened, DRBD was primary/primary. I had no idea which node had been secondary last, so which one do I recover? Which is the victim and which is the survivor?

This defeats the purpose of using Linux-HA (Heartbeat+DRBD) solutions, at least at the host level for virtual machine replication. It is very possible that this is due to some misconfiguration on my part, but then why does it work so well before sporadically crashing? It could be that I need a fencing device to disable the victim node, but I still don’t quite understand fencing. I know what it does, but everything I’ve read implies additional equipment that I have been unable to locate. It’s still a little unclear, and that’s my point.

Recently I stumbled upon a KB document on the VMware web site which might solve my problem. I can’t recall exactly what my Google keyword search contained, but here is the link to a very good document on clustering solutions for ESX Server v1 which also applies to VMware Server v1.

Essentially, what the document describes that appeals to me is not replicating the guests at the host level, but instead deploying the guest on two host servers as a set. Replication and failover happen at the guest layer, between the guests. The host servers do just that: host the guest virtual machines.
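In that layout each guest pair would carry an ordinary DRBD resource of its own, inside the guests, rather than the hosts sharing one big replicated volume. A minimal sketch of what such a resource could look like; every name, device, and address below is a placeholder, not my actual configuration:

```
# Hypothetical per-guest DRBD resource (DRBD 8 syntax); all hostnames,
# devices, and addresses are placeholders for illustration only.
resource guest0 {
  protocol C;
  on guest0-a {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on guest0-b {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

The appeal is that a split brain would then be scoped to a single guest pair instead of taking down every virtual machine on the host.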

This is in direct contrast to what I have been doing, and at first glance it appears to be a more complex solution, but it isn’t really. DRBD secondary volumes cannot normally be mounted; mounting one can be done, but it’s tricky and has the feel of a hack instead of a solution. No mount means no access to the guests.

I have a lot of work to do but I’m on the right track. Hopefully I’ll have all the necessary changes implemented this weekend including installation and configuration of two new firewalls.

Wish me luck!



VMware Server + Vista != Working

In Archived on October 27, 2009 by netritious

I bought Vista Ultimate x64 the week it was released. I ran it on my PC for roughly two weeks before deciding that XP was better.

However, XP will retire soon. Maybe not in the next year or so, but it will be retired, and with it the security updates. I bothered to install Vista a couple of weeks ago, thinking that now that SP2 is available, most of the problems people experienced with Vista over the past couple of years would be a non-issue.

For the most part this is correct. However, the one tool I use every day for R&D is VMware Server, which will not run on any version of Vista and connect to localhost, which means you can only connect to a remote server.

The actual problem is that Vista has closed all methods of bypassing the signed-driver policy, and the VMware Server drivers for Vista are unsigned.

The *only* fix that works (and I have tried them all) is to press F8 on startup and select ‘Disable Driver Signature Enforcement.’ After logging in to Windows I can run VMware Server as Administrator, and this allows me to connect to localhost.

From what I’ve read so far, this is a non-issue for Windows 7, and prior to Vista SP1 you could bypass unsigned drivers without remembering to press F8 every time you rebooted your computer.

So how is this for a break up letter:

Dear Mr Money Bags,

I am about to throw in the towel. I paid you your money, now let me use the software I want without making me jump through hoops. No wonder people just keep ripping you off — you keep screwing them.

I know it’s not very professional, and I won’t waste time writing something I know won’t be read by the people who need to read it; even if they did, it would end up being the latest office joke.

I paid for the flagship version of Vista, and VMware Server is legitimate software, but I can’t use it without pressing F8 every time I start up or reboot my computer? Come on, MS, why should I spend another $200 to fix this, with no guarantee you won’t do it to me again?

And EMC, you are not off the hook with me either. If you would just have MS sign the drivers for Vista, this would be a non-issue.


VMware Server 1.0.10 Released

In Archived on October 27, 2009 by netritious

Yesterday VMware Server 1.0.10 was released for both Linux and Windows.

According to the release notes, a security vulnerability has been patched:

Exception handling privilege escalation on guest operating system
This release addresses a security vulnerability in exception handling. Improper setting of the exception code on page faults might allow for local privilege escalation on the guest. The Common Vulnerabilities and Exposures project has assigned the name CVE-2009-2267 to this issue.

Download VMware Server 1.0.10



Fixing Broken VMware Server After Ubuntu Upgrade

In Archived on October 27, 2009 by netritious

I have discovered two ways of breaking a running VMware Server installation on Ubuntu:

  1. Using apt-get autoclean or apt-get clean will remove dependencies required by VMware Server.
  2. Using apt-get dist-upgrade to upgrade the Linux kernel will break VMware on restart, since Grub will load the latest kernel version by default.

Fixing VMware Server after running into either problem is simple and straightforward using the following command:

sudo apt-get install build-essential linux-headers-`uname -r`
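The backticks are shell command substitution: `uname -r` prints the running kernel release, so the matching headers package is selected automatically no matter which kernel you booted. A harmless illustration of the expansion (it only echoes the package names, installing nothing):

```shell
# Build the package list the apt-get line above would install.
# `uname -r` expands to the running kernel release, e.g. 2.6.31-14-server,
# so linux-headers-$kver always matches the booted kernel.
kver=$(uname -r)
pkgs="build-essential linux-headers-$kver"
echo "$pkgs"
```

This is also why the command has to be re-run after every kernel upgrade: a new kernel means a new `uname -r`, and therefore a new headers package.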

Afterwards you will need to run the VMware Server configuration script.

sudo /usr/bin/

This script is created by VMware Server during the initial installation. I elected to choose the defaults, as the script is written well enough to check for the most recent VMware Server configuration.

Note: In the case of issue number two (dist-upgrade breaks VMware Server), you need to reboot first in order to load the newly installed kernel, then run the commands posted above, and reboot again.



PIVT Screenshots

In Archived on October 20, 2009 by netritious

PIVT - VMware Server Console

Ubuntu Jaunty 64-bit as Host, Ubuntu Hardy 64-bit as Guest

Ubuntu Jaunty 64-bit as Host on /dev/sda, Ubuntu Hardy 64-bit as Guest on /dev/sdb

Ubuntu Hardy 64-bit as Host on /dev/sda, Ubuntu Jaunty 64-bit as Guest on /dev/sdb

