Simple Chrooted SSH

You might be asking: why would you want to chroot ssh? Why use ssh anyways? Here are the quick answers:

  • FTP usually isn’t great. Unless it’s wrapped in SSL, all information is sent in cleartext.
  • SSH is usually much better. SSH sends all data over an encrypted channel. The main drawback: you can often browse around the system, and if permissions aren’t set right, read things you shouldn’t be able to.
  • Chrooted SSH rocks. It solves both of the above problems.

So, let me tell a quick story.
When I started uni in 2001 I was a nerd. Still a nerd, I guess. I was cramped in my apartment on campus with like 5 boxes, most of them old P100s running Linux or OpenBSD. Life was good.
I started a CS degree (I later shifted into Business with a focus on IT), and we were told to use the school’s main servers to compile our programs. The other interesting thing was that all user accounts were visible when you logged in via ssh – but hey, that is just the nature of Linux. I knew this, but asked the head I.T. person, “why don’t you jail the connections?” He quickly told me to go away.
Well, shortly after I made the comment (although solutions already existed at the time), pam-chroot was released. This was right about the time students figured out they could spam everybody in the school, some 25,000 emails, quickly and easily – ’cause all the accounts were displayed. Sweet – now we can chroot individual ssh connections.
This quick demo will be on Debian, and we’ll create a pretend user named “karl” (I’ll assume you’ve already added the user before beginning these steps). The jails will live under /var/chroot/{username}.

First: Install libpam-chroot and makejail

kelvin@server ~$ sudo apt-get install libpam-chroot makejail

Second: makejail config file

Put the following in /etc/makejail/create-user.py:

# Clean the jail first
cleanJailFirst=1
preserve=["/html", "/home"]
chroot="/var/chroot/karl"
users=["root","karl"]
groups=["root","karl"]
packages=["coreutils"]

Edit: If you also need SFTP, try this config:

cleanJailFirst=1
preserve=["/html", "/home"]
chroot="/home/vhosts/karl"
forceCopy=["/usr/bin/scp", "/usr/lib/sftp-server",
    "/usr/bin/find", "/dev/null", "/dev/zero"]
users=["root","karl"]
groups=["root","karl"]
packages=["coreutils"]

As you’ll see, there is a “preserve” directive. This is so that when you “clean” the jail (if you need to refresh its files, for instance), you won’t wipe out anything important. I created /html so that the user can upload their web docs to that directory.

Third: configure libpam_chroot

Add the following to /etc/pam.d/ssh:

# Set up chrooted ssh

session required pam_chroot.so

Fourth: allow the actual user to be chrooted

Edit /etc/security/chroot.conf and add the following:

karl /var/chroot/karl

Fifth: create/chown the chrooted dir

kelvin@server ~$ sudo mkdir -p /var/chroot/karl/home

kelvin@server ~$ sudo chown karl:karl /var/chroot/karl/home
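Finally, build the jail itself with the makejail config from step two – makejail reads the config file and pulls the listed users, groups, packages, and files into the chroot:

kelvin@server ~$ sudo makejail /etc/makejail/create-user.py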

Now you should be able to log in via the new user, karl.

Layer Images Using ImageMagick

For one of my webapp projects I need to layer two images. This isn’t a problem on my laptop – I just fire up GIMP, do some copy ’n pasting, and I’m done. However, since everything needs to be automated (scripted), and run on a server – well, you get the point.
The great ImageMagick toolkit comes to the rescue. This is highly documented elsewhere, so I’m going to be brief.

Take this:

And add it to this:

I first tried to use the following technique:

convert bg.jpg -gravity center world.png -composite test.png

This generated a pretty picture, exactly what I wanted. What I didn’t want was the fact that the picture was a whopping 1.5 megs, not to mention the resource usage was a little high:

real    0m7.405s
user    0m7.064s
sys     0m0.112s

Next, I tried to just use composite.

composite -gravity center world.png bg.png output.png

Same results, although the resource usage was just a tad lower. So, what was I doing wrong? I explored a little and realized I was being a bit of a muppet: I was using a png background that was 1.2 megs (long story). I further changed the compose type to “atop,” as that appeared to have the lowest resource usage, and modified things appropriately:

composite -compose atop -gravity center world.png bg.jpg output.jpg

This also yielded acceptable resource usage.
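Since the whole point is to script this on a server, here’s a minimal sketch of wrapping that final command in Python with subprocess (same file names as above; error handling kept simple):

#!/usr/bin/env python
# Minimal sketch: run the same composite command unattended.
import subprocess

def layer_images(overlay, background, output):
    # Mirrors: composite -compose atop -gravity center overlay background output
    cmd = ["composite", "-compose", "atop", "-gravity", "center",
           overlay, background, output]
    ret = subprocess.call(cmd)
    if ret != 0:
        raise RuntimeError("composite exited with status %d" % ret)

if __name__ == "__main__":
    layer_images("world.png", "bg.jpg", "output.jpg")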

The result:

A Dying Laptop

I have the pleasure of owning an old T23 laptop. To show you how old this puppy is: the current T series is at T60, and those have been out for over a year. This laptop was made in 2001, and I picked it up somewhat discounted late in 2003. It is now March 2007, and this puppy is still rock solid.

You heard me, it is almost six years old and still working fine – that is a testament to how well this laptop was built. There are several small cracks around the case, but nothing you would notice by just walking by. This laptop has been to more countries than many people.

I had my first problem this weekend, and it isn’t even related to the laptop itself. The hard drive, a 30G I put in at some point, started to crap out on me. Bad sectors were everywhere, so some of the programs were slightly unhappy (e.g. I couldn’t boot into X).

I’m going to buy a new laptop soon, I promise, about the time my MBA goals are reached. Until then, I’ll continue to be frugal, and deal with the bad sectors. Being a good IT nerd, everything is backed up to an external hard drive (and most stuff backed up remotely).

Luckily I’m using Linux – so I was able to run fsck/smartmontools a few times in recovery mode, make the bad blocks happy, and continue as “normal.” Phew, disaster averted.

One More Point Linux

It should come as no surprise that I enjoy using Linux. For the record, the first time I booted into Linux on my own was in 1997, just before entering high school. So, while some of my tech friends played with NT, I was rumbling with the Penguin. Starting in 2000 I used Linux as my main operating system, sometimes supplemented by OS X, and only used Windows when the gaming urge surfaced. In 2004 I mostly stopped playing games, which resulted in dropping Windows – and aside from work, I haven’t used it since.

For me, I’ll admit, there are three things that Linux still lacks:

  • Simple video conferencing support
  • Video editing support
  • Gaming

I know that all of these are supported, but, in my opinion, not particularly well. I don’t care about any of these enough to actually need Windows, but it would be nice to see them improve.

So, I’m set. I’m 100% legal (I don’t steal a single piece of software), and I don’t have to be too afraid of viruses. What prompted me to write this little excerpt? A recent article at the Washington Post scared the beejeepers out of me, and makes me wish even more that Vista would either cure its security problems, or that everybody would move over to Linux. The article details the aftermath a virus can cause – not by damaging one’s computer, but by capturing information – and the author details his experience hunting down the captured data. It was one of the better articles I’ve read, and I thoroughly enjoyed the detail. If you want a little more motivation to move to Linux (or just to tighten up your machine), I suggest you take a few moments to read it as well.

The Risk in Risk Mitigation

Back in the day, the barrier to entry for the Internet was quite high. The technology required a steep learning curve, and the equipment was extremely expensive and sometimes even hard to acquire. Fast forward to 2007 and things have certainly changed. If you know any tech people you can likely get free hosting for a small website, and even more demanding websites can be hosted for not much. The cost of dedicated servers has dropped even more. And the final kicker: web services. I’ve started to think of some web services not as a service, but more like outsourced requirements.

One of the nice things about outsourced requirements is the fact that risk is mitigated. I’ll use SmugMug as an example. In summary, they moved their storage to Amazon’s S3 network, which is something I will be utilizing as well. Amazon’s S3 (and other web services) continue to drive down the barrier to entry – now you don’t even need to purchase hugely expensive servers for the sole purpose of storage! If you don’t need to purchase them, you also don’t need to manage them. Risk mitigated.

However, continuing the allusion from The Other Blog’s article on mashups, I see a slight problem with the outsourcing of requirements. While the following thought isn’t particularly innovative, mitigating risk by outsourcing requirements creates a dependency on the third party. This very dependency adds risk for a multitude of reasons, and when your entire web application platform revolves around a third party, as is the case with mashups, you incur great risk.

But, as is evidenced by the fact that I’ve had stitches nine different times, I’m still going to do some cool mashups anyway, so stay tuned.

Python, AST and SOAP

For one of my projects I need to generate thumbnails of web pages. Lots and lots and lots of them. Even though I can generate them via a Python script and a very light “gtk browser,” I would prefer to mitigate the server load. To do this I’ve decided to tap into the Alexa Site Thumbnail (AST) service. It allows two methods: REST and SOAP. After several hours of testing things out, I’ve decided to toss in the towel and settle on REST. If you can spot the error in my SOAP setup, I owe you a beer.
I’m using the ZSI module for Python.

1. wsdl2py

I pull in the needed classes using wsdl2py:

wsdl2py -b http://ast.amazonaws.com/doc/2006-05-15/AlexaSiteThumbnail.wsdl

2. Look at the code generated.

See AlexaSiteThumbnail_types.py and AlexaSiteThumbnail_client.py.

3. Write Python code to access AST over SOAP.


#!/usr/bin/env python
import sys
import datetime
import hmac
import sha
import base64
from AlexaSiteThumbnail_client import *

print 'Starting...'

AWS_ACCESS_KEY_ID = 'super-duper-access-key'
AWS_SECRET_ACCESS_KEY = 'super-secret-key'

print 'Generating signature...'

def generate_timestamp(dtime):
    return dtime.strftime("%Y-%m-%dT%H:%M:%SZ")

def generate_signature(operation, timestamp, secret_access_key):
    my_sha_hmac = hmac.new(secret_access_key, operation + timestamp, sha)
    my_b64_hmac_digest = base64.encodestring(my_sha_hmac.digest()).strip()
    return my_b64_hmac_digest

timestamp_datetime = datetime.datetime.utcnow()
timestamp_list = list(timestamp_datetime.timetuple())
timestamp_list[6] = 0
timestamp_tuple = tuple(timestamp_list)
timestamp_str = generate_timestamp(timestamp_datetime)

signature = generate_signature('Thumbnail', timestamp_str, AWS_SECRET_ACCESS_KEY)

print 'Initializing Locator...'

locator = AlexaSiteThumbnailLocator()
port = locator.getAlexaSiteThumbnailPort(tracefile=sys.stdout)

print 'Requesting thumbnails...'

request = ThumbnailRequestMsg()
request.Url = "alexa.com"
request.Signature = signature
request.Timestamp = timestamp_tuple
request.AWSAccessKeyId = AWS_ACCESS_KEY_ID
request.Request = [request.new_Request()]

resp = port.Thumbnail(request)

4. Run, and see error.


ZSI.EvaluateException: Got None for nillable(False), minOccurs(1) element
(http://ast.amazonaws.com/doc/2006-05-15/,Url),
xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:ZSI="http://www.zolera.com/schemas/ZSI/"
xmlns:ns1="http://ast.amazonaws.com/doc/2006-05-15/"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
[Element trace: /SOAP-ENV:Body/ns1:ThumbnailRequest]
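One thing I still want to try (an untested guess, so don’t hold me to it): ZSI-generated message classes usually keep element values in underscore-prefixed attributes, so the bare attribute assignments above may be silently ignored – which would explain the Url element coming through as None. Something along these lines might serialize properly:

# Untested guess: assign the underscore-prefixed attributes that the
# generated typecodes actually serialize, instead of the bare names.
request._Url = "alexa.com"
request._Signature = signature
request._Timestamp = timestamp_tuple
request._AWSAccessKeyId = AWS_ACCESS_KEY_ID
request._Request = [request.new_Request()]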

5. Conclusion

I’m not entirely certain what I’m doing wrong. I’ve also written another version that uses NPBinding to connect to the WSDL file directly. It seems to work much better, as it fully connects and I get a 200 back, but it doesn’t return the thumbnail location in the response, and I get a:

TypeError: Response is "text/plain", not "text/xml"

So, while I have things working fine with REST, I would like to get the SOAP calls working. One beer reward.

AWS in Python (REST)

As some of you may know, I have some projects cooked up. I don’t expect to make a million bucks (wish me luck!), but a few extra bills in the pocket wouldn’t hurt. Plus, I’m seriously considering further education, which will set me back a good thirty grand. That said, one of my projects will rely heavily on Amazon Web Services. Amazon has, for quite some time now, opened up their information via REST and SOAP. I’ve been trying (virtually the entire day) to get SOAP to work, but keep getting snagged on a few issues. Stay tuned.
However, in my quest to read every RTFM, I stumbled upon a post regarding Python and REST access to Alexa Web Search. After staring at Python code all day, especially trying to grasp why SOAP isn’t working, updating the outdated REST code was a 5-minute hack. So, if you are interested in using Alexa Web Search with Python via REST, look below:

websearch.py


#!/usr/bin/python

"""
Test script to run a WebSearch query on AWS via the REST interface.  Written
 originally by Walter Korman ([email protected]), based on urlinfo.pl script from 
  AWIS-provided sample code, updated to the new API by  
Kelvin Nicholson ([email protected]). Assumes Python 2.4 or greater.
"""

import base64
import datetime
import hmac
import sha
import sys
import urllib
import urllib2

AWS_ACCESS_KEY_ID = 'your-access-key'
AWS_SECRET_ACCESS_KEY = 'your-super-secret-key'

def get_websearch(searchterm):
    def generate_timestamp(dtime):
        return dtime.strftime("%Y-%m-%dT%H:%M:%SZ")
    
    def generate_signature(operation, timestamp, secret_access_key):
        my_sha_hmac = hmac.new(secret_access_key, operation + timestamp, sha)
        my_b64_hmac_digest = base64.encodestring(my_sha_hmac.digest()).strip()
        return my_b64_hmac_digest
    
    timestamp_datetime = datetime.datetime.utcnow()
    timestamp = generate_timestamp(timestamp_datetime)
    
    signature = generate_signature('WebSearch', timestamp, AWS_SECRET_ACCESS_KEY)
    
    def generate_rest_url(access_key, secret_key, query):
        """Returns the AWS REST URL to run a web search query on the specified
        query string."""
    
        params = urllib.urlencode(
            { 'AWSAccessKeyId':access_key,
              'Timestamp':timestamp,
              'Signature':signature,
              'Action':'WebSearch',
              'ResponseGroup':'Results',
              'Query':query, })
        return "http://websearch.amazonaws.com/?%s" % (params)
    
    # print "Querying '%s'..." % (query)
    url = generate_rest_url(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, searchterm)
    # print "url => %s" % (url)
    print urllib2.urlopen(url).read()

You run it like this:

>>> from websearch import get_websearch
>>> get_websearch('python')

Hamachi Rules

I’ve been playing around more with Hamachi, and have decided that it officially rules. Since I’m a big Linux guy I don’t have access to some features, but the program seems to be a gem. It is brainlessly easy to install (even when doing 20 things at once), and works quite well. Thanks to Ben and Sean for helping me test it out.

Ian the Apt

You know you are too much of a nerd when your conversations look like this. Preface: I pasted some packages I needed to upgrade into a Skype window (much better than a clipboard)….

[11:56:03] Kelvin Nicholson: sorry, needed to post that somewhere

[11:56:04] Ian Fitzpatrick: i am not apt!

[11:56:15] … you can’t pass packages to me ;)

[11:56:34] Kelvin Nicholson: blah blah blah

[11:56:43] … apt-get upgrade ian

[11:57:02] Ian Fitzpatrick: apt-get error: unmet dependency, “beer 1.0-4 not found”

[11:57:14] Kelvin Nicholson: yea, that got a good laugh

Version 3.0 Part Two

Well, I’m basically all done upgrading to Version 3.0 – I deserve a cake or something. Here’s the 411:

For the past few years I have been using Mambo, then Joomla, to manage the content on my site. It worked quite well, and was in PHP, so I could add or remove any code. Indeed, I’ve written a decent amount of PHP apps. In early 2004 I wrote a PHP platform to track adventures people had gone on, and networked people seeking to go on adventures with each other. I never marketed it, and mainly created it to learn PHP, but it was a CMS (Content Management System), and a little more. Late in 2004 I wrote another blog-esque platform for my second trip to Europe. It was pretty cool, I’ll admit: Casey and I each had a blog, and people could leave us “dares” and/or messages – and we could easily update our status. Overall, it worked great. You can also see the projects section of my site for some of the other things I’ve done in PHP.

Fast forward a few years, and here it is, early 2007. I’ve never really liked PHP all that much, but I couldn’t put my finger on why. Deciding to switch to something else, I picked up and read the book Beginning Python: From Novice to Professional. If anybody is looking for a well written book, I would highly recommend this one. Anyways, with my goal to drop PHP in mind, I debated between Django and TurboGears. I went through the demos for each, and felt like I really played around with them. Ultimately it came down to: 1) Django has obviously crazy cool caching, 2) Django has pretty darn good documentation, and a freaking online book, and 3) the “powered by” sites are quite impressive – both in the length of the list and the large amount of traffic some of these sites entertain.

So I went with Django. My friend in New Zealand, Ben Ford, has been ragging on me for two months to get my ass in gear and learn it, saying I would love it. And he is right, the framework is simply beautiful. For the last week I’ve been reading through the documentation and going through the online book (both are incomplete, in my opinion, but complement each other nicely). I think it is important to write your own code instead of just repeating examples, so my goal: transform my blog/site using just Django.

So, while some of the kinks still need to be worked out, everything is now transferred over. I’ll write up my experiences shortly, but overall: I’m very impressed.