The random rantings of a concerned programmer.

(Untitled)

April 21st, 2009 | Category: Random

I had a machine upstairs hooked up to the cable modem, acting as both a wireless router and an SMB/NFS server. Yesterday we noticed some young birds had nested on our roof and were tweeting away.

You might not think these two pieces of data are related, but they are. Those weren’t birds — the sounds were coming from the primary hard drive of our router. FUUUUUUUUUUUU. For some reason, two of the three drives in the machine failed. Sounds like some kind of mechanical failure in the primary drive, and on one of the data drives the partition table was completely screwed up (though I managed to rebuild it by hand). As I was dumping the data off this second drive, I noticed something that I had forgotten to do –

The RAID5 array that I built isn’t encrypted with GELI. So I just wiped it and layered GELI on top of the RAID5 volume — it was really incredibly easy to do. Except now I’m filling the 2.6TB volume from /dev/random before running newfs (and re-copying over all my data) and fuck this is going to take forever. But now I’ve got a secure volume for all of my totally legal material. Huzzah.
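For posterity, the whole dance is only a few commands (device names are from my setup; geli prompts for the passphrase on init and attach):

$ geli init /dev/gvinum/gv0
$ geli attach /dev/gvinum/gv0
$ dd if=/dev/random of=/dev/gvinum/gv0.eli bs=1m
$ newfs /dev/gvinum/gv0.eli

The dd is the step that takes forever.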

3 comments

(Untitled)

April 18th, 2009 | Category: Random

FFFFFFFUUUUUUUUUUUUUUUUUUUUUUUUUUUU

I just discovered a really fucking massive flaw in my shitty ORM. The ORM layer is supposed to lazily resolve foreign references, right? So if you have a schema –
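A minimal version (the real schema has more to it, but the self-referencing post_parent column is the part that matters):

CREATE TABLE post (
    post_id     INTEGER PRIMARY KEY,
    post_title  TEXT,
    post_body   TEXT,
    -- the own-table foreign key that causes all the trouble below
    post_parent INTEGER REFERENCES post (post_id)
);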

You’ll notice that the post table contains a self-reference. So the ORM layer will generate code along these lines for that table –
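Reconstructed sketch: parseSql' and DbRecord are the real names, while the record layout and the row-fetching plumbing are hand-waved for illustration.

import Database.HDBC (SqlValue, fromSql)

-- The generated record for the post table; the parent reference is a
-- lazily-resolved record rather than a raw key.
data DbRecord = PostRec
    { postId     :: Int
    , postTitle  :: String
    , postParent :: Maybe DbRecord
    }

-- parseSql' builds a record from a fetched row, recursively resolving
-- the post_parent foreign key by fetching the referenced row.
parseSql' :: (Int -> [SqlValue]) -> [SqlValue] -> DbRecord
parseSql' fetch (rawId : rawTitle : rawParent : _) =
    PostRec
        { postId     = fromSql rawId
        , postTitle  = fromSql rawTitle
        -- laziness means nothing here is evaluated until somebody looks
        -- at postParent; a cyclic parent chain yields an infinite structure
        , postParent = do pid <- fromSql rawParent
                          Just (parseSql' fetch (fetch pid))
        }
parseSql' _ _ = error "parseSql': malformed row"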

The own-table foreign key, post_parent, is the killer, because it causes parseSql' to lazily recurse forever. On its own that isn’t bad (thanks to laziness, nothing actually loops until the structure is forced), but when you throw that DbRecord at the automatically-generated JSON thingy, it tries chomping through the entire thing and never terminates.

There’s no good way to fix this, aside from tracking all the inter-table relations (a topological sort would flag the cycle up front) and having the ORM layer go “oh hey” when it starts to loop. That kind of functionality could be added either to the parseSql' function above, or to the toAscListJSON function, which converts the DbRecord into a JSON-ingestible form (and should really be replaced with something built on Data.Data instead…)
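The runtime check is easy enough to sketch, assuming the DbRecord shape from above (recordToJSON here is a stand-in for the real toAscListJSON): thread a set of already-visited primary keys through the traversal and bail when one repeats.

import qualified Data.Set as Set

-- Cycle-aware serialization: carry the set of visited primary keys
-- down the recursion and emit null instead of looping.
recordToJSON :: DbRecord -> String
recordToJSON = go Set.empty
  where
    go seen r
        | postId r `Set.member` seen = "null"   -- "oh hey", we've looped
        | otherwise =
            "{\"id\":" ++ show (postId r)
             ++ ",\"title\":" ++ show (postTitle r)
             ++ ",\"parent\":"
             ++ maybe "null" (go (Set.insert (postId r) seen)) (postParent r)
             ++ "}"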

Realistically, I should throw this shit out and do it some other way, especially since the code-generation bit has become pig-disgusting. Fuck! (Eventually I’ll rewrite it using Template Haskell or something.)

No comments

gvinum + FreeBSD

April 17th, 2009 | Category: Random

On CURRENT back in February, some changes were committed to gvinum (the frontend for vinum) that made the interface not suck balls. Before, you had to weave some weird magic configuration file and sacrifice your firstborn son:

$ gvinum printconfig
drive gvinumdrive3 device /dev/ad8
drive gvinumdrive2 device /dev/ad6
drive gvinumdrive1 device /dev/ad3
drive gvinumdrive0 device /dev/ad1
volume gv0
plex name gv0.p0 org raid5 512s vol gv0
sd name gv0.p0.s3 drive gvinumdrive3 len 6s driveoffset 265s plex gv0.p0 plexoffset 1536s
sd name gv0.p0.s2 drive gvinumdrive2 len 6s driveoffset 265s plex gv0.p0 plexoffset 1024s
sd name gv0.p0.s1 drive gvinumdrive1 len 6s driveoffset 265s plex gv0.p0 plexoffset 512s
sd name gv0.p0.s0 drive gvinumdrive0 len 6s driveoffset 265s plex gv0.p0 plexoffset 0s

But in the new release of gvinum, instead of creating that pig-disgusting configuration file and passing it to gvinum create, you can simply invoke it with

$ gvinum raid5 -n gv0 ad1 ad3 ad6 ad8

and boom, there’s your new volume to run newfs on or whatever.
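The volume shows up under /dev/gvinum, so that last step is just (with -U for soft updates, if you’re into that):

$ newfs -U /dev/gvinum/gv0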

For some reason though, the RAID5 volume is horribly slow for me:

$ dd if=/dev/zero of=test bs=1m count=100
100+0 records in
100+0 records out
104857600 bytes transferred in 9.002813 secs (11647204 bytes/sec)

11MB/s write speed is paltry next to the 33MB/s that both my laptop’s single drive and the gmirror array in the same machine manage. This might be a problem with the shitty on-board Intel controller I’m using, or it might be the disks themselves (the RAID1 and RAID5 volumes are on different disks), though the read speeds on the RAID5 are fine. I’m also only eyeballing speeds with systat, which isn’t really accurate for benchmarking, so I’ll have to tinker around with it more later.
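For a rough read number, the inverse dd works; the caveat is that the test file needs to be a lot bigger than RAM, or you’re just benchmarking the buffer cache:

$ dd if=test of=/dev/null bs=1m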

I figure I can test the disks by pulling a drive off the RAID5, snapping a drive off the RAID1, and running tests on one disk of each type. Trouble is, I don’t think there’s a way to simulate a disk failure. I first tried gvinum detach gv0.p0.s0, but just got an error code spat back at me. Next I took one of the disks and overwrote its partition table, thinking gvinum might be keying off of that. It doesn’t :D
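The overwrite itself is just a dd over the first sector (ad1 here being one of the RAID5 member disks):

$ dd if=/dev/zero of=/dev/ad1 count=1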

So now I’m rebuilding the volume with gvinum rebuildparity gv0.p0, which is spewing all kinds of parity errors to syslog; we’ll see what happens when it finishes running. I suspect the only way for me to actually break the volume is to physically yank a disk or something, which is bleh :(

2 comments

(Untitled)

January 14th, 2009 | Category: Random

First day at the new job as a Senior Application Developer, and the first thing I notice is that they use a lot of WordPress. A lot of WordPress. I sat there for a good hour going through all three machines, thumbing through services and tabbing through mostly WordPress instances. We’ve easily got 10-15 separate instances of WordPress, though I think only about 7 of them are in production use.

Most of them haven’t been updated since they were initially installed. We had one instance running v2.0.1. They’ve been limping by on mod_security and haven’t gotten rooted yet, but my god.

In the future, we’re going to be handing out more WordPress instances for whatever reason (oh hey, I want a blog too), so we’re setting up WordPress µ, which is basically a meta-WordPress that lets you maintain a whole collection of blogs from a single install. It’s the same shit WordPress.com runs on.

tl;dr: migrating a bunch of existing WordPress instances into that thing is a massive bitch, and I fat-fingered something. (With pg_dump, the -d switch is used to specify the database; with mysqldump, -d silently means “DON’T DUMP THE DATA”. I subsequently dicked up the running database and don’t have a backup!)
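For anyone about to make the same mistake, the difference is one flag (mysqldump’s -d is an alias for --no-data; the database name here is made up):

$ mysqldump -d wordpress > backup.sql   # -d = --no-data: schema only, zero rows
$ mysqldump wordpress > backup.sql      # what I actually wanted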

FUCK!

5 comments