True Consolidation

Back in 2000 I managed to acquire several retired systems to bring to Uni, including 4-5 cheap P120 machines. At the time I thought this was great; I had an OpenBSD box as my gateway, a FreeBSD box, a few Linux boxes, and likely something else that doesn’t even exist now. The school had a superfast connection with unlimited bandwidth, and I was curious. Although I didn’t really have time, I still managed to install and run all these servers from my room.


Fast forward to 2007, and my mindset had changed. In 2007 I didn’t want to have 6 servers running at once; I wanted one server running 12 servers at once! Thanks to Xen and VMware this was easily achieved. Initially using Xen, and then ESXi, I had the freedom to set up domains, tear them down, and start over. Eventually, however, I realized I was doing at home what I was being paid to do at work. That doesn’t sound like fun. I also realized that, despite picking a motherboard and processor that could shift into low power usage, I was still using more watts than I needed to. I was also spending way too much time mucking around with things - I wanted to focus on just one or two projects at a time, and I really wanted to start programming more.

Last month I finally finished the ultimate ‘consolidation’: I moved everything to a tiny embedded Linux box. While back in the U.S. I contacted WDL Systems and requested a shipping quote for a tiny embedded box. I bought the eBox-3300, built around an embedded board from ICOP, and it was promptly shipped out. After returning home to Sydney I migrated all my apps from the various virtual servers to my little box running Debian 5.0: OSSEC, Samba, Lighttpd, Asterisk and flow-tools. The little box is just perfect for what I need - a tiny home server. I still get around 8MB/sec transferring files, which suggests the network is still the bottleneck, and VOIP calls with Asterisk are still clear.

Overall, I’ve been happy with this little box. My ‘playing time’ with IT has gone down significantly, my energy usage has gone down, and I now have a server I can take with me wherever I go.

Renaming Apache Log Locations

I realized a few of my log files were growing unusually large and, even worse, logrotate was skipping them. I took a look in logrotate.d and straight away realized why: I had created silly names for the log files. logrotate looks for .log files, but I had named mine with an _log suffix – e.g. kelvinism_access_log. I wasn’t as familiar with logrotate when I set up the domains, so I set forth to get them into the rotation.
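For reference, logrotate only rotates what its config matches; the stock Apache stanza in /etc/logrotate.d globs on .log, something like this on Debian (trimmed):

/var/log/apache2/*.log {
    weekly
    missingok
    rotate 52
    compress
    notifempty
}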

Firstly, I had to rename the actual log files. So, to rename kelvinism_access_log to kelvinism_access.log, a one-liner:

for x in *_log; do mv $x `basename $x _log`.log; done;

Next, I needed to update the log file names inside each of the Apache config files. While a one-liner might be possible, I used the following tiny script:

#!/bin/sh
# Rewrite each Apache config in this directory, replacing the
# old "_log" suffix with ".log" so logrotate picks the files up.
for x in *
do
    sed 's/_log/.log/' "$x" > /tmp/tmpfile.tmp
    mv /tmp/tmpfile.tmp "$x"
done
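(With GNU sed the same edit is a one-liner, sed -i 's/_log/.log/' * , though the little script above works with any POSIX sed.)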

Alexa Thumbnail Service

Amazon offers some pretty cool services: S3, EC2, Alexa Site Thumbnail, and others. A while back I wanted to use AST with Django, so I ended up writing the Python bindings to the REST API (they didn’t previously exist). I even wrote up a quick tutorial.

Update: Amazon no longer maintains AST. I’ve decided to archive a few of the old sites, so I no longer need to take thumbnails. However, a few other thumbnail services seem to have cropped up, including SnapCasa and WebSnapr.

Charting the Hackers

A normal internet connection gets attacked, a lot. The majority of attacks are of the form “hello, anybody there?” – where most people just don’t answer. But sometimes, just sometimes, the question gets an answer. Depending on the answer, the attacker will start to explore.

A few weeks back I was a little bored and started fiddling. I wanted to play with my Cisco, but also wanted to play with OSSEC, and also had a GIS craving. In the end I decided to create a map of the people who ask, “hello”.

Take a look at the map and explanation if that sort of thing is your cup of tea.

Migrating large disks into ESXi

I recently had the need to move a rather large (450GB) VMDK file from an external hard drive into ESXi. Since ESXi doesn’t support external hard drives, this makes things quite a bit more difficult. At first I tried using SCP to copy the file over (after enabling SSH access on ESXi). However, the estimated time remaining was almost 20 hours – a tad too long!

I rethought my idea and decided to use this process:

  1. Create an NFS share on my laptop, using the external hard drive (with the VMDK) as a mount point.
  2. Use vmkfstools to move the image over.
  3. Fix any bugs I encountered.

Creating the NFS share on Linux is extremely easy. After installing NFS via whatever package management tool you choose, put this entry into your /etc/exports file:

  
/media/disk-1 192.168.1.0/24(ro,no_root_squash,async)  

This assumes your USB disk is mounted as /media/disk-1, and your local subnet is 192.168.1.0/24. Then, in ESXi, add new storage with type NFS, using your laptop’s IP as the host and /media/disk-1 as the mount point. For safety, tick read-only.
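One step that’s easy to forget: after editing /etc/exports, reload the export table so the share is actually offered (this assumes the NFS server itself is already running):

$ sudo exportfs -ra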

Next, unlock SSH if you haven’t already. Once you are in, browse to /vmfs/volumes and you will see your NFS share alongside your other datastores. Let’s say your USB virtual disk is located at /vmfs/volumes/nfs/bigdisk.vmdk, and you want to import it into your normal datastore, under a folder called ‘NAS’. Using VMware-specific tools, you can import the file like so:

  
# vmkfstools -i /vmfs/volumes/nfs/bigdisk.vmdk /vmfs/volumes/datastore1/NAS/bigdisk.vmdk  

I needed to update the hardware version of my imported disk. To do this, open up the .vmdk descriptor file (you should also have a -flat.vmdk file alongside it) and change the virtualHWVersion entry from 7 to 4. With that done, attach the disk to a virtual machine and you should be good to go.
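For reference, the entry in question in the descriptor looks like this (the surrounding ddb lines will vary with your disk):

ddb.virtualHWVersion = "4"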

An additional result I noticed was the speed at which it came over. Using SCP, the entire file was going to take 20 hours; using NFS and vmkfstools, the file was migrated in under 10 hours.

OpenFiler Permission Issue

I’ve had issues before with OpenFiler where it doesn’t update permissions, although they appear correct in the UI. To rectify that, I stumbled upon a one-liner that fixed it. Let’s say you have a group called “Trusted” that you want to have full access to your music folder. Here’s the one-liner:

[root@files data]# pwd
/mnt/openfiler/data
[root@files data]# setfacl --recursive -m u:nobody:rwx,g:Trusted:rwx music
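To double-check the ACLs actually landed, getfacl on the same directory will list the new entries:

[root@files data]# getfacl music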

Speeding Up VMware Server

I found VMware Server to have very slow I/O, and sought to improve it. Below are the tests I performed, the changes I made, and the results.

  
  
### Host OS ###
kelvin@gorilla:~$ sudo hdparm -t /dev/sdb1

/dev/sdb1:
 Timing buffered disk reads:  220 MB in  3.05 seconds =  72.17 MB/sec

kelvin@gorilla:~$ sudo hdparm -t /dev/sdb1

/dev/sdb1:
 Timing buffered disk reads:  266 MB in  3.01 seconds =  88.33 MB/sec

kelvin@gorilla:~$ sudo hdparm -t /dev/sdb1

/dev/sdb1:
 Timing buffered disk reads:  310 MB in  3.01 seconds = 102.99 MB/sec

### Before Changes ###
[root@files etc]# hdparm -t /dev/mapper/openfiler-data

/dev/mapper/openfiler-data:
 Timing buffered disk reads:    8 MB in  3.36 seconds =   2.38 MB/sec

[root@files etc]# hdparm -t /dev/mapper/openfiler-data

/dev/mapper/openfiler-data:
 Timing buffered disk reads:   24 MB in  3.63 seconds =   6.61 MB/sec

[root@files etc]# hdparm -t /dev/mapper/openfiler-data

/dev/mapper/openfiler-data:
 Timing buffered disk reads:   28 MB in  4.54 seconds =   6.16 MB/sec

I made several changes, but the changes that seemed to have the most impact are below:

vm.dirty_background_ratio = 5  
vm.dirty_ratio = 10  
vm.swappiness = 0  
  

These are Linux sysctls, so pop them into /etc/sysctl.conf on the host, apply with sysctl -p (or reboot), and off you go. One unfortunate side effect of the full set of changes is that you can no longer overload the memory (e.g. allocate more memory to the VMs than you actually have available).
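As an aside, the VMware-side knobs you’ll commonly see recommended for VMware Server I/O tuning live in the per-VM .vmx file, and look something like this (listed as pointers, not as my exact configuration):

MemTrimRate = "0"
mainMem.useNamedFile = "FALSE"
sched.mem.pshare.enable = "FALSE"

MemTrimRate stops the host constantly reclaiming guest memory, and mainMem.useNamedFile stops VMware backing guest RAM with a file on disk.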

  
  
### After Changes ###
[root@files ~]# hdparm -t /dev/mapper/openfiler-data

/dev/mapper/openfiler-data:
 Timing buffered disk reads:   52 MB in  3.13 seconds =  16.61 MB/sec

[root@files ~]# hdparm -t /dev/mapper/openfiler-data

/dev/mapper/openfiler-data:
 Timing buffered disk reads:   82 MB in  3.31 seconds =  24.75 MB/sec

[root@files ~]# hdparm -t /dev/mapper/openfiler-data

/dev/mapper/openfiler-data:
 Timing buffered disk reads:  118 MB in  3.19 seconds =  36.97 MB/sec

[root@files ~]# hdparm -t /dev/mapper/openfiler-data

/dev/mapper/openfiler-data:
 Timing buffered disk reads:  144 MB in  3.32 seconds =  43.37 MB/sec

[root@files ~]# hdparm -t /dev/mapper/openfiler-data

/dev/mapper/openfiler-data:
 Timing buffered disk reads:  160 MB in  3.10 seconds =  51.57 MB/sec

UPDATE: For those wanting all the speed who still want memory overloading, I’d suggest you give ESXi a try. So far, so good.

  
### With ESXi, same hardware ###
[root@files ~]# hdparm -t /dev/mapper/openfiler-data   
  
/dev/mapper/openfiler-data:  
 Timing buffered disk reads:  200 MB in  3.18 seconds =  62.92 MB/sec  

Integrating OSSEC with Cisco IOS

I rank OSSEC as one of my favorite pieces of open source software, and finally decided to play around with it more in my own free time. (Yup, I do this sort of stuff for fun). My goal was quite simple: send syslog packets from my Cisco to my “proxy” server, running OSSEC. I found that, although OSSEC supports Cisco IOS logging, it didn’t really work. In fact, I couldn’t find any examples or articles of anybody actually getting it to work.

I initially tried to get it to work “correctly,” and soon settled for “just getting it to work.” I implemented some rules in the local_rules.xml file, which worked, but I’m pretty stubborn and wanted to do it “correctly.” With a couple pots of tea I became much, much more familiar with OSSEC. The key (and a lot of credit) goes to Jeremy Melanson for hinting at some of the updates to the decoder.xml file that needed to take place.

The first step is to read the OSSEC + Cisco IOS wiki page. Everything on that page is pretty straightforward. I then added three explicit drop rules at the end of my Cisco’s ACL (with the catch-all ip rule last, so it doesn’t shadow the udp rule):

...

access-list 101 deny tcp any host 220.244.xxx.xxx log
access-list 101 deny udp any host 220.244.xxx.xxx log
access-list 101 deny ip any host 220.244.xxx.xxx log

(220.244.xxx.xxx is my WAN IP, and I’m sure you could figure out xxx.xxx pretty darn easily, but I’ll x them out anyways).

To reiterate, OSSEC needs to be told to listen for syslog traffic, and the Cisco needs to be told to send it. If you haven’t done either, go re-read the wiki above. The OSSEC side looks like this:

<remote>
<connection>syslog</connection>
<allowed-ips>192.168.0.1</allowed-ips>
</remote>
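On the Cisco side, the logging setup boils down to something like this (192.168.0.2 standing in for the OSSEC server’s IP, and the trap level is just an example):

Router(config)# logging trap informational
Router(config)# logging 192.168.0.2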

On or around line 1550 in /var/ossec/etc/decoder.xml I needed to update the regex that was used to detect the syslog stream.

...

<decoder name="cisco-ios">
<!--<prematch>^%\w+-\d-\w+: </prematch>-->
<prematch>^%\w+-\d-\w+: |^: %\w+-\d-\w+:</prematch>
</decoder>
 
<decoder name="cisco-ios">
<program_name>
<!--<prematch>^%\w+-\d-\w+: </prematch>-->
<prematch>^%\w+-\d-\w+: |^: %\w+-\d-\w+: </prematch>
</program_name></decoder>
 
<decoder name="cisco-ios-acl">
<parent>cisco-ios</parent>
<type>firewall</type>
<prematch>^%SEC-6-IPACCESSLOGP: |^: %SEC-6-IPACCESSLOGP: </prematch>
<regex offset="after_prematch">^list \d+ (\w+) (\w+) </regex>
<regex>(\S+)\((\d+)\) -> (\S+)\((\d+)\),</regex>
<order>action, protocol, srcip, srcport, dstip, dstport</order>
</decoder>


...

In the general OSSEC configuration file, re-order the list of rules so cisco-ios_rules.xml comes before syslog_rules.xml. I had to do this because syslog_rules.xml includes a search for “denied”, which would otherwise trigger a generic alarm first.

...
<include>telnetd_rules.xml</include>
<include>cisco-ios_rules.xml</include>
<include>syslog_rules.xml</include>
<include>arpwatch_rules.xml</include>
...
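With the decoder and rule ordering updated, restart OSSEC so the changes are picked up (path assumes a default install):

# /var/ossec/bin/ossec-control restart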

Remember that these dropped events will go into /var/ossec/logs/firewall/firewall.log. Because this is my home connection, and I don’t have any active response configured (yet!), I tightened the firewall_rules.xml file (lowering the frequency, raising the timeframe).

And in the end, I get a pretty email when somebody tries to port scan me.


OSSEC HIDS Notification.
2008 Nov 15 23:19:36
 
Received From: proxy->xxx.xxx.xxx.xxx
Rule: 4151 fired (level 10) -> "Multiple Firewall drop events from same source."
Portion of the log(s):
 
: %SEC-6-IPACCESSLOGP: list 101 denied tcp 4.79.142.206(36183) -> 220.244.xxx.xxx(244), 1 packet
: %SEC-6-IPACCESSLOGP: list 101 denied tcp 4.79.142.206(36183) -> 220.244.xxx.xxx(253), 1 packet
: %SEC-6-IPACCESSLOGP: list 101 denied tcp 4.79.142.206(36183) -> 220.244.xxx.xxx(243), 1 packet
: %SEC-6-IPACCESSLOGP: list 101 denied tcp 4.79.142.206(36183) -> 220.244.xxx.xxx(254), 1 packet
 
 
 
--END OF NOTIFICATION

Upgrading Cisco Wireless Firmware

I’m always forgetting the exact string to enter at the CLI for updating the IOS on a wireless Cisco AP, so I’ll just put it here to end my future searches:

Chimp# archive download-sw /force-reload /overwrite tftp://192.168.83.150/c1100-k9w7-tar.123-8.JEC1.tar

192.168.83.150 obviously being your TFTP server, with the .tar file sitting in the root of the TFTP server.

I suppose if you wanted to back up your IOS you could do something along the lines of:

Chimp# archive upload-sw tftp://192.168.83.150/someimage.tar

But I haven’t tried it…

Capped Internet

I’ve lived in several different parts of the world, and they all do internet differently. Back in the US I had 8Mb/sec cable (leaving just before FiOS was really an option, darn!). In New Zealand, for instance, I was paying for “high speed ADSL” rated at 1.5Mb/256k. Vrooom. Up in Taiwan I was paying half what I paid in New Zealand, but for 12Mb/1Mb. Down to Sydney, and we have a rated 24Mb/1Mb.

But there’s a catch with the plans in New Zealand and Australia: they are ‘capped’. This means you only get X GB/month – and it isn’t like Comcast capping at 250GB/month; I’m talking about 1GB, 5GB, 10GB and so forth. And there’s more – just like mobile phones, you get on-peak and off-peak times.

This all does make a bit of sense to me – there are only so many tubes going in and out of NZ and AU, and I imagine they get pretty clogged.

Either way, last month was pretty painful. Two weeks into our plan I checked our usage: 14GB of 18GB! We had only 4GB left for 15 days. That sounds like a lot, but for the two of us, and my 10 virtual servers, it isn’t. The first thing I did was look at a way to do something WSUS-like for Linux – I ended up using apt-cacher (I’m only using Ubuntu at home). BitTorrent, out; downloading any new ISOs, out; streaming music, totally out. For a while I had to VPN to home, and then VPN to a client, as our router at work didn’t seem to like letting us access one of our clients. I even disconnected from the VPN if I wouldn’t be doing work for 20 minutes!
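For anyone curious, the client side of apt-cacher is a one-line apt config dropped into /etc/apt/apt.conf.d/01proxy – the hostname is whatever your cache box is called, and 3142 is apt-cacher’s default port:

Acquire::http::Proxy "http://proxybox:3142";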

We eventually made it, and used only 2GB in two weeks. What an accomplishment!