The random rantings of a concerned programmer.

Calling a templated member function of a typedef’d template class

July 11th, 2011 | Category: Random

C++ is insane.

Assume you have a templated Object:

template <typename T>
struct Object {
        template <typename U> void func() {}
};

And you want to wrap up the instance in a Proxy object:

template <typename T>
struct Proxy {
        typedef Object<T> WrappedType;
        WrappedType obj;

        static void Func() {
                Proxy *self = new Proxy;
                self->obj.func<T>();
        }
};

Pretty straightforward, but when you actually try to invoke Proxy<T>::Func on an arbitrary T using g++ --

struct Foo {};

int main() {
        Proxy<Foo>::Func();
        return 0;
}

g++ shits itself completely:

$ g++ test1.cpp
test1.cpp: In static member function ‘static void Proxy<T>::Func()’:
test1.cpp:13: error: ‘Foo’ was not declared in this scope
test1.cpp:13: error: expected primary-expression before ‘)’ token
$ g++ --version
i686-apple-darwin10-g++-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5666) (dot 3)

Fucking fantastic.

Some tinkering reveals that the compiler is getting confused as to what the fuck obj.func is somewhere. The following implementation of Func works fine (but defeats the point of using templates) --

        static void Func() {
                Proxy *self = new Proxy;
                Object<T> bar = self->obj;
                bar.func<T>();
        }

I searched for a while and turned up jack diddly squat, then a co-worker informed me that the fix is to use the following:

        static void Func() {
                Proxy *self = new Proxy;
                self->obj.template func<T>();
        }

I don't know what the fuck this instance.template function<..>() bullshit is, but apparently MSVC implicitly puts it in there for you. I've certainly never seen it before and it's completely orthogonal to any fix I would have assumed.
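
For posterity, here's a tiny self-contained example (toy names of my own, nothing from the code above) of the places the disambiguator goes -- the same rule applies after -> and for nested member templates too:

#include <iostream>

struct Thing {
        template <typename U> void func() { std::cout << "func\n"; }
        template <typename U> struct Inner {};
};

template <typename T>
void caller(T &obj, T *ptr) {
        // obj and ptr have dependent types, so the parser can't know that
        // func names a member template; C++03 14.2/4 makes you say so.
        obj.template func<int>();               // after . on a dependent object
        ptr->template func<int>();              // after -> on a dependent pointer
        typename T::template Inner<int> nested; // nested member template
        (void)nested;
}

int main() {
        Thing t;
        caller(t, &t);
        return 0;
}

Drop any of those template keywords and g++ goes back to parsing the < as a less-than, spewing errors along the same lines as above.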

tl;dr C++ is a clusterfuck.


EDIT: A Stack Overflow post turned up whose answers cite the C++03 standard (14.2/4). fml.

4 comments

(Untitled)

May 16th, 2009 | Category: Random

ahahahaha. I know, I fail for a Monty Python reference, but I just had to try it.

WolframAlpha just won the game.

2 comments

Fuck you again, Comcast

March 20th, 2009 | Category: Random

I just got my Comcast bill today. After being pissed off that I’m paying $100/month for cable TV (internet is only $33, but IIRC that’s because I got a package deal or some shit like that), I glanced through some of the other shit they threw in there. One of them was another change to the residential service contract which included the following clause –

You agree that by using the Services, you are enabling and authorizing Comcast, its authorized agents and equipment manufacturers to send code updates to the Comcast Equipment and Customer Equipment, including, but not limited to cable modems and digital interactive televisions [...] Whether a cable modem, gateway/router, MTA or other device is owned by you or us, we have the right, but not the obligation, to upgrade or change the firmware in these devices remotely or on the Premises at any time that we determine it necessary or desirable

Seriously, what the fucking hell.

Look, I’m pissed off enough that I have to blow $80 for a worthless piece of shit consumer-grade cable modem. You could break into my house and take a hammer to the piece of shit, I don’t fucking care. (I’m going to buy a better one anyway, fuck you).

But stay the fuck away from the rest of my goddamn network. I don’t need your goddamn filthy hands all over my fucking equipment.

Moreover, the mere fact that you can do this remotely raises some serious fucking questions. What kind of security measures are there? Are you working with the fucking hardware manufacturers to put in backdoors for updates? Are the updates cryptographically signed and verified, or are you just using security through obscurity and hoping that no one RE’s the port knocking pattern?

Despite my warranted aggression, I’m still sucking your dick Comcast, because you have a fucking monopoly in Charlottesville and there are no other providers to choose from (unless I grab a beefy 802.11 repeater and piggyback off the University’s network — it’d cost less in the long run).

Goddammit.

7 comments

(Untitled)

January 30th, 2008 | Category: Random

So I got around to tinkering with my new HDDs yesterday. Setting up netboot for like the 9000th time was fun; it’s interesting to see how many mistakes and inaccuracies are in that old post of mine. I should update it one of these days…

Anyway.

So after much tinkering, it looks like everything is working, for the most part: the new IDE controller supports 48-bit LBA, so I can write to the entire disk. At least, it hasn’t crashed yet, and I’ve been dumping my entire (local) anime collection to it. Given that I’ve got a 10bT internal subnet set up, it’ll be a while before enough shit’s dumped that I’ll be completely confident that it’s correct.

When I said “everything is working”, I was lying a little bit. One of the brand new 500GB Seagate drives is a squeaker. A loud squeaker. I’m going to declare it “DEAD” and send it back for a replacement (look forward to upcoming posts on dealing with such a case!), because my god. There’s no way in hell I can trust such a broken-sounding drive.

The other problem is that fsck_ufs still breaks the thing. Well, it doesn’t lock up the entire machine anymore, it just locks up its own process in the biord state. Which kinda makes sense: since the machine is netbooted, it doesn’t need to touch the drive, like, ever.

Now, I did some light digging and biord is just a random undocumented part in the code where the process enters a sleep state. You’re supposed to be able to find it by grep’ing the source tree, but I haven’t gotten around to doing that.

The thing is, biord looks suspiciously close to “BIOS Read”, like, dumping a part of the disk from the BIOS. Which, if this is the case, is why the damn thing is locking up the drive: the BIOS doesn’t support 48-bit LBA, despite the IDE controller supporting it. Going to take some more investigation, but I wouldn’t be surprised if this is the case. I mean, how many other people are running huge drives on ancient hacked together hardware?

From my experiences with it thus far, I doubt there are many others. For good reason. lol ;P

No comments

Non-Apache Webservers for High Availability and High Load Deployment Scenarios

December 20th, 2007 | Category: Random

lol, this is a short report I wrote for my E-Commerce class. Figured I’d post it since I have nothing better to rant about today. Kind of sucks that ALL MY GOOD MACHINES ARE DOWN and won’t be back up until the end of break, because it would have been kinda fun to be able to fuck around some more with lighttpd and Squid and stuff. oh well.

When a potential customer comes into a datacenter, the first two questions he should ask are, “What kind of uptimes do you guarantee,” and “how much can I push through the pipes?” Availability and extensibility are possibly the two most important factors on the infrastructure side of E-Commerce – if your server goes down, whether the cause was a hardware failure or too many simultaneous requests, your E-Business is shot until the problem can be repaired. Every second that your business is down is money lost. The goal of this short report is to broadly discuss and evaluate several technologies to guarantee redundancy and ensure expandability, such that your E-Business stays on solid ground.

For me, this all started when looking for a lightweight web server to tinker with. I had toyed with Apache in the past, and while Apache provides possibly every feature you can dream up, it just seemed too heavyweight for the simple FastCGI chores I was using it for.

While browsing the www-ports listing on one of my FreeBSD boxes, I noticed an entry called lighttpd, and checked the package description:

“lighttpd a secure, fast, compliant and very flexible web-server which has been optimized for high-performance environments. It has a very low memory footprint compared to other webservers and takes care of cpu-load. Its advanced feature-set (FastCGI, CGI, Auth, Output-Compression, URL-Rewriting and many more) make lighttpd the perfect webserver-software for every server that is suffering load problems.”

Browsing through the lighttpd documentation, it seems like there are quite a few neat features available. The first, and probably the most interesting for me, is the ability to specify a list of remote FastCGI-running servers to offload requests to. The node running lighttpd, in this case, simply acts as a pipe connecting the user to the nodes which are running a dynamic scripting language within a FastCGI module.

The idea here is that, when running dynamic scripts, the first bottleneck often encountered is the CPU. Since most scripting languages don’t cache by default, they end up crunching a lot of the same data for each request, and this can quickly eat up CPU cycles. There are lots of ways to cache both the compiled script (when using a language which compiles to bytecode before being passed to a VM) and the script output, but in many situations this may not be ideal.
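
As a toy illustration of the output-caching idea (a sketch of my own -- not anything lighttpd or PHP actually ships), the essence is just memoizing rendered pages keyed by request URI and expiring entries after a TTL:

#include <ctime>
#include <iostream>
#include <map>
#include <string>

// Toy output cache: memoize rendered pages by URI, expire after a TTL.
// Real caches also have to key on headers/cookies and bound their memory.
class OutputCache {
        struct Entry {
                std::string body;
                std::time_t expires;
        };
        std::map<std::string, Entry> entries;
        int ttl_seconds;

public:
        explicit OutputCache(int ttl) : ttl_seconds(ttl) {}

        std::string get(const std::string &uri,
                        std::string (*render)(const std::string &)) {
                std::time_t now = std::time(0);
                std::map<std::string, Entry>::iterator it = entries.find(uri);
                if (it == entries.end() || it->second.expires < now) {
                        // Miss (or stale): do the expensive render once, reuse after.
                        Entry e;
                        e.body = render(uri);
                        e.expires = now + ttl_seconds;
                        entries[uri] = e;
                        it = entries.find(uri);
                }
                return it->second.body;
        }
};

static std::string render_page(const std::string &uri) {
        return "<html>rendered " + uri + "</html>"; // stand-in for heavy script work
}

int main() {
        OutputCache cache(30); // 30 second TTL
        std::cout << cache.get("/index", render_page) << "\n"; // miss: renders
        std::cout << cache.get("/index", render_page) << "\n"; // hit: cached copy
        return 0;
}

A hit skips the expensive render step entirely; that's the whole trick, whether it lives in the scripting runtime, the web server, or a proxy layer out front.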

Because all the heavy CPU computations are offloaded to independent FastCGI modules running on several machines, expansion of such a setup is fairly trivial – just add more machines and add them into lighttpd’s list of FastCGI nodes. This is effective until either the network bottlenecks (which can be solved by upgrading the hardware or through some channel bonding tricks), or shared resources like databases and disks become overtasked (which Google solves by using its own BigTable distributed database and GoogleFS).

On lighttpd’s homepage, there’s an impressive list of prominent sites which claim to use lighttpd, among which are Wikipedia and meebo. Not wanting to take this at face value, I decided to check out some of the headers myself. Originally I was using Wireshark, a network protocol analyzer, to get the headers, but Wireshark gets confused when the response is broken up over multiple packets. Instead, I’m using wget with the -S switch to grab the server response headers and dump the rest of the file.
So the first page I tried was Wikipedia; the headers returned were:

  HTTP/1.0 200 OK
  Date: Wed, 19 Dec 2007 22:23:28 GMT
  Server: Apache
  X-Powered-By: PHP/5.1.2
  Content-Language: en
  Vary: Accept-Encoding,Cookie
  Cache-Control: private, s-maxage=0, max-age=0, must-revalidate
  Last-Modified: Wed, 19 Dec 2007 22:17:35 GMT
  Content-Length: 51279
  Content-Type: text/html; charset=utf-8
  X-Cache: HIT from sq27.wikimedia.org
  X-Cache-Lookup: HIT from sq27.wikimedia.org:3128
  Age: 7
  X-Cache: HIT from sq30.wikimedia.org
  X-Cache-Lookup: HIT from sq30.wikimedia.org:80
  Via: 1.0 sq27.wikimedia.org:3128 (squid/2.6.STABLE16), 1.0 sq30.wikimedia.org:80 (squid/2.6.STABLE16)
  Connection: keep-alive

The first thing I noticed was, hey, they’re serving the page with Apache, not lighttpd! Looking further down though, you can see from the Via header that they’re forwarding the returned page through Squid, a caching proxy server. This in itself adds a layer of redundancy – there are several tiers of servers all running the same scripts. Should one of the Apache servers fail, Squid will simply request the page from a different working one. And, should one of the Squid servers fail, there’s at least a 2-level hierarchy to take the rest of the load.
In this case, we’re actually hitting the page caches of the Squid layer, so our request probably didn’t even go down to Apache. We probably got served the same copy of a page someone else requested a while back.

In the end though, there isn’t any lighttpd in this transaction! So I decided to take a closer look at their claims; checking out their PoweredBy page, next to the entry about Wikipedia it states that lighttpd is used for upload.wikimedia.org. Whoo, false advertising much?

Just to verify that, here’s a couple of the headers returned by wget for the index page –

  HTTP/1.0 200 OK
  Server: lighttpd/1.4.18
  X-Cache: HIT from sq10.wikimedia.org
  X-Cache-Lookup: HIT from sq10.wikimedia.org:3128
  X-Cache: MISS from sq46.wikimedia.org
  X-Cache-Lookup: MISS from sq46.wikimedia.org:80
  Via: 1.0 sq10.wikimedia.org:3128 (squid/2.6.STABLE16), 1.0 sq46.wikimedia.org:80 (squid/2.6.STABLE16)

So they are using it for something, just not the main load of the page. upload.wikimedia.org essentially serves all the static images for Wikipedia. Looking at the list of sites powered by lighttpd, it seems almost all of them use the server exclusively to serve static data, and rely on an Apache+Squid combination for load distribution.

There are many other web servers written in a variety of languages; the only other one I’d consider looking at is Yaws, a web server written entirely in Erlang. Erlang is a functional language developed by Ericsson with an emphasis on high availability. Most of Erlang’s feature set is geared toward massively multi-threaded network applications. Internally, Erlang uses a lightweight process system to manage thousands of threads with ease.

The benefit of Yaws over Apache lies in these lightweight threads – because Apache relies on the OS for threading support, its threads are inherently very heavyweight. In both servers, each incoming request is serviced within its own thread. In situations where there are many incoming requests (such as a DDoS attack), Apache will consume system resources much faster than Yaws. One study showed that Yaws can handle more than twenty times as many parallel sessions.
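
Just to make the thread-per-request cost concrete, here's a toy sketch of my own (not code from Apache or Yaws) of that model, where every connection gets its own OS thread:

#include <iostream>
#include <pthread.h>
#include <vector>

// Stand-in for servicing one request; a real server would block on socket I/O.
void *handle_connection(void *arg) {
        (void)arg;
        return NULL;
}

int main() {
        // Apache-style model: every connection gets a dedicated kernel thread,
        // and every kernel thread reserves its own stack (often megabytes).
        // That per-connection overhead is what buries the server under load.
        const int connections = 1000; // try cranking this up and watch memory
        std::vector<pthread_t> workers(connections);
        for (int i = 0; i < connections; ++i)
                pthread_create(&workers[i], NULL, handle_connection, NULL);
        for (int i = 0; i < connections; ++i)
                pthread_join(workers[i], NULL);
        std::cout << "serviced " << connections
                  << " connections, one OS thread each" << std::endl;
        return 0;
}

Erlang processes, by contrast, live inside the VM with footprints measured in hundreds of bytes rather than megabytes of stack, which is presumably where the twenty-fold parallel session numbers come from.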

High availability and maintainable performance under load, in addition to easy expansion, are key when developing fledgling E-Commerce startups. If the underlying infrastructure is not present, the entire operation is doomed. And while Apache may currently be the reigning champion of web servers, there are a variety of other solutions which are just as extensible, though much less widely used.

3 comments
