Building Custom LiveCDs

Published on Thursday, January 26, 2006

I have a feeling we will shortly be deploying many Linux servers for specific roles. Maybe we will implement Asterisk as a VoIP interchange between locations, maybe the backup servers will be Linux based, maybe the BDCs.

One thing that could speed up implementation at remote sites is building live CDs for specific purposes. For instance, we could keep updated live CDs for certain projects on the file server in PDX: a BDC live CD, a backup-server live CD, and so on, each already set up with the most current packages (or scripts to fetch and install them). Then when we get to the site we just put the CD in, click or type "load," and poof, the server is installed and configured.

These links may be helpful (I haven't read through the whole process yet):

http://www.linuxjournal.com/article/7233

http://gentoo-wiki.com/HOWTO_build_a_LiveCD_from_scratch
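
Skimming the guides, the remaster loop they describe boils down to roughly the following. This is only a sketch, assuming a squashfs-based live CD booted with isolinux; the ISO name, squashfs filename, and package list are made up.

# Unpack an existing live CD (filenames are hypothetical)
mkdir -p /tmp/livecd
sudo mount -o loop base-livecd.iso /tmp/livecd
cp -a /tmp/livecd /tmp/newcd                              # working copy of the CD layout
sudo unsquashfs -d /tmp/root /tmp/livecd/livecd.squashfs

# Customize the root filesystem: add packages, configs, the "load" script, etc.
# (in practice you also want /proc and a resolv.conf inside the chroot first)
sudo chroot /tmp/root apt-get install -y samba rsync

# Repack the filesystem and roll it back into an ISO
sudo mksquashfs /tmp/root /tmp/newcd/livecd.squashfs -noappend
mkisofs -o custom-livecd.iso -b isolinux/isolinux.bin -c isolinux/boot.cat \
    -no-emul-boot -boot-load-size 4 -boot-info-table -R -J /tmp/newcd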


Python + Web Development

Published on Wednesday, January 25, 2006

A developer just showed me an interesting framework for producing Python-backed sites VERY quickly. This is mainly for you, Ian: it natively supports AJAX as well. Here's the link:

http://www.turbogears.org/

I watched the demo, pretty interesting.


Server Virtualization

Published on Tuesday, January 24, 2006

We don't want to have a billion servers each doing its own task -- so what can we use as a solution? Server virtualization (or semi-virtualization, or para-virtualization). This involves carving a server into mini servers that can each be fully customized. Our VPS at HostMySite is like this. So why would you want to do this? A few reasons, actually.

-Localize exploits. Let's say DNS gets exploited -- the access gained would only be for DNS, and not for mail and web and everything else.

-Easy "upgrades," backups and redundancy. Let's say we start to use MySQL more and more, but the server can't handle it. To upgrade (ignoring replication for this example) we could just turn off the virtual server (in essense lock files), move it to other server, drop it into another server that is setup to do virtualization, and turn it it on. Nearly no downtime, and you know it will work.

Anyhow, worth looking at. Here are some of the most mature linux virtualization packages out there:

http://openvz.org/ -- This is the open source version of HostMySite's VPS platform. The main difference is that it isn't set up for mass hosting (say, 1000 VPSs on a huge mainframe).

http://www.openvps.org/

http://linux-vserver.org/ -- Very plain website, but there is news that the authors are pushing for this code to be included natively in the Linux kernel.

http://www.cl.cam.ac.uk/Research/SRG/netos/xen/ -- I've also heard rumors that this is one of the most advanced.

http://www.vmware.com/ -- The one and only. This is full virtualization, so it will carry the most overhead (some of the previous packages have almost no overhead, not even 1%). Oh yeah, and this one "costs" money.


MySQL Replication

Published on Monday, January 23, 2006

Status:

The webapp server is running fine, but backups are important. Better yet, a hot spare is a great idea. To do this, I set up an older spare rackmount as a 'live' webapp server, just in case. A duplicate LAMP stack was set up, the web apps are copied over SSH via rsync on a regular basis, and the icing on the cake: MySQL replication.
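
The moving parts, roughly (a sketch assuming a standard master/slave setup; the hostnames, paths, and replication account are made up, and the log file/position come from SHOW MASTER STATUS on the master):

# Cron entry on the backup box: pull the web apps over SSH (assumes SSH keys are in place)
0 2 * * * rsync -az -e ssh webapp:/var/www/ /var/www/

# my.cnf on the master (webapp server):  [mysqld]  server-id = 1  log-bin = mysql-bin
# my.cnf on the slave (backup box):      [mysqld]  server-id = 2

# On the master: create a replication account
mysql -u root -p -e "GRANT REPLICATION SLAVE ON *.* TO 'repl'@'backup' IDENTIFIED BY 'secret';"

# On the slave: point it at the master and start replicating
mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='webapp', MASTER_USER='repl', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98;"
mysql -u root -p -e "START SLAVE;"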

So, if the dedicated webapp server dies a painful death, a quick change of the webapp server's IP in the internal DNS points everyone at the backup rackmount, and nobody will know anything happened.
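
One detail to remember on that day (continuing the sketch above): once DNS points at the spare, tell it to stop replicating from the dead master so it can take writes on its own:

mysql -u root -p -e "STOP SLAVE;"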


Hamachi

Published on

My friend Ian told me about this originally, but my pen-testing cousin just sent me the link as well. p2p VPN, w00t. Hamachi is a VPN alternative that does not have the usual router problems associated with IPsec and PPTP VPNs, which is good when you are stuck behind firewalls, NAT, and things like that.

http://www.hamachi.cc/


Linux as a TFTP Server

Published on Monday, January 16, 2006

So, you need a TFTP server for something? Cool, you must be doing something fun. I need a TFTP server to copy Cisco IOS images onto the routers; hopefully you are doing something cooler.
1) Enable TFTP in inetd.conf
Open up /etc/inetd.conf and look for the following line:
kelvin@pluto:~$ vi /etc/inetd.conf

#tftp  dgram   udp     wait    root    /usr/sbin/in.tftpd  in.tftpd -s /tftpboot -r blksize
This is on line 72 for me (hint: in vi press Esc, then type :set number to show line numbers). Uncomment it. If you don't have this line, bummer -- find where in.tftpd lives and add the line yourself:

kelvin@pluto:~$ which in.tftpd
/usr/sbin/in.tftpd
kelvin@pluto:~$

2) Create the TFTP directory
As you can see, we need the directory /tftpboot. Create it:

 kelvin@pluto:~$ sudo mkdir /tftpboot 
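
Since the routers will be pulling IOS images from this directory, drop the image in and make sure it is world-readable (the filename here is just an example):

kelvin@pluto:~$ sudo cp c2600-i-mz.123-10.bin /tftpboot/    # example image name
kelvin@pluto:~$ sudo chmod 644 /tftpboot/c2600-i-mz.123-10.bin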

3) Restart inetd
Sending it a HUP (signal 1) makes inetd re-read its configuration:

kelvin@pluto:~$ sudo kill -1 [inetd pid]

You can get the inetd pid by typing:
kelvin@pluto:~$ ps aux | grep inetd
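
Or do it in one shot (pidof ships with Debian):

kelvin@pluto:~$ sudo kill -1 $(pidof inetd)

Before dragging a router into it, you can sanity-check the server with the stock tftp client (assuming the tftp package is installed; the filename is the example image from above):

kelvin@pluto:~$ cd /tmp && tftp localhost
tftp> get c2600-i-mz.123-10.bin
tftp> quit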
Cheers.

Edit: A colleague in New Zealand was searching for something and stumbled upon this page. I gave him the tip that if you need to find the TFTP server (or any service), you can track down the process by the port it listens on:
lsof -i :69

New File and Webapp Server

Published on Friday, January 13, 2006

Status:

Time has come to upgrade a few servers in the office. An older P4 2.8 was being used as a webapp server, and that needed to go. The resource utilization wasn't too much of an issue; the computer was simply aging. Plus, it wasn't strictly built to host critical services, but since we grew so quickly, it was what was available. Additionally, the PDC was hosting user files, and with those growing in size, a dedicated file server is in order.


Oh, and Ian and I are on a strict budget, as usual.

Our trusty CDW shipped over two IBM rackmounts. They have plenty of CPU and RAM to grow into, but the key feature we needed was hardware RAID1. Once they arrived, Ian screwed them into the rack and we started working on them. Both servers had Debian slapped on, and one was then turned into a true LAMP server. On the LAMP server we also loaded up our ticketing system and several IMAP-based email accounts (good ol' Dovecot).

The other server was set up as a dedicated file server. For several reasons, including the strict budget, we joined Samba up to the 2003 PDC. Thus, all profiles (through folder redirection) are mapped to the Samba box, which does auth via Kerberos back to the PDC. Besides user profiles, several shared folders exist, and access is managed through group policy. I must admit, Samba + Windows 2003 is a very handy combo.
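
For the record, the join itself is only a couple of commands once smb.conf points at the domain (a sketch assuming Samba 3 with winbind and security = ads; Administrator stands in for whatever domain admin account you actually use):

# assumes realm/workgroup are already set in smb.conf
sudo net ads join -U Administrator      # join the 2003 domain (prompts for the password)
sudo /etc/init.d/winbind restart
net ads testjoin                        # double-check the join
wbinfo -t                               # verify the machine trust account against the PDC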