Wednesday, December 28, 2005
The T5 was a pretty exciting project--a high-performance, extremely ruggedized, Bluetooth- and 802.11b-enabled voice-controlled wearable about the size of a large mouse. Unfortunately, I can't talk about the internals I'm most proud of, but it's a damned slick piece of hardware, and probably the most fun product of my career.
So now that I have a little breathing room, I'm going to be able to think about non-work-related geekery a little more. I've become fairly smitten with TurboGears over the last few months, and it's been frustrating not to have enough time to properly follow what's developing there. With any luck, I'll be blogging more about that in the near, near future.
Edit 17 Jan 2006: finally got "real" links for VVH and T5...
Thursday, September 01, 2005
Recently, I found this chart on the Alienware web site. What's wrong with this picture?
Helpfully, the chart explains that lower numbers are better. Unhelpfully, it doesn't provide numbers. (Not to mention that percentage-based comparisons of temperature aren't terribly meaningful to start with.)
Here's another, from Logitech's web site. I was looking around at their "digital pen" technology, and noticed that they had two different versions, which differ in price by $100. Fortunately, there's a "Compare Products" link. Unfortunately, this is the chart it gives you.
Six products are listed: three cradles (all with identical "features"), two pens (each with identical "features"), and ink. The two pens each include "PC", "USB", "Optical Sensor", and "Ballpoint Pen". The ink refills, however, don't include "PC", "USB", or "Optical Sensor", and the cradles don't have "Optical Sensors" or "Ballpoint Pens". That's a bit of a relief--I would be a little concerned if I had to attach my ink to the PC in some way.
A little intelligence goes a long way...
Friday, July 15, 2005
Yes, continually rewriting the infrastructure code is probably the biggest reason it's a continually unfinished project, but on the other hand, it's become a fun sandbox for trying out different app servers, persistence layers, and template engines.
I've written a pretty large chunk of CherryPy-based plumbing, and I've liked it pretty well, but lately the HTMLTemplate-based code started getting a little too hairy. CherryTemplate didn't look particularly better, and I didn't like my previous experience with Cheetah, so I started looking around again. A few conversations with Jonathan Ellis convinced me to give Spyce a try.
Now, Spyce is more than a templating language--it implements both the app server and templating bits of the stack, and it doesn't look like CherryPy+Spyce would be a clean or efficient separation. Fortunately both frameworks make it easy to isolate non-presentation logic into pure, non-framework modules (which I have done), so much of it is probably salvageable.
So far, I like what I see. I didn't really grok tag libraries the first time I looked at them, but after getting my head around them by reading the source, I find them pretty elegant. The templating language is nicely structured and expressive (which was the whole point of the exercise) and it fits my brain better than Cheetah's.
The one gripe I have with Spyce is its lack of URL abstraction. Given the multiple framework changes, I've become a total convert to the RESTful/"Cool URI" paradigm, which avoids exposing the site implementation structure in your URLs. CherryPy and Quixote do this very well, but Spyce is centered around the "one template file=one URL" paradigm.
If and when I move beyond Spyce's internal server and onto Apache, mod_rewrite will provide an inelegant but useable way to do it, but I'd rather my framework didn't force me to hack around it at that level.
Wednesday, May 25, 2005
Python is known for its "batteries-included" nature. One of the batteries that gets too little attention, I think, is the asynchat module.

One of my pastimes is playing a certain web game, which features live chat. In a previous incarnation of the game, one of the players wrote a very useful "bot"--an automated pseudo-player that sat around in chat, and provided useful information when queried. She quit before the newest revision of the game was released, though, and some of the players were missing the bot.
So, I pulled out the asynchat module, launched Ethereal, and started reverse-engineering the chat protocol (source and some documentation are available, but the version I was talking to seems to be somewhat customized).
45 minutes later, I had a fully-functional bot. Another hour, and I had a nicely-factored module from which you could build a whole new ultra-wizzy chat client. (Teaching the bot all about the game took another eight hours, of course, but I don't think any libraries would help with that...)
Now the cooler-than-thou Pythonistas out there are probably saying, "Bah! Twisted Rules!". That's nice. Twisted may be sexy, but asynchat/asyncore has some advantages:
- It's simple. Two modules, under 1k lines of code (as opposed to a raft of modules and 80k lines of code). No surprises.
- It's documented, so I don't have to hang out in an IRC room or grovel through thousands of lines of code to figure out what's wrong.
- It's included with Python, so I know it's tested--no surprises when I try it on a new machine.
- It's written in everyday bog-standard Python, not its own framework on top of Python, so there's no prerequisite learning to do.
- I'm reasonably sure that it's not going to be drastically changed.
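For the curious, the heart of what asynchat gives you is small enough to sketch: collect incoming bytes, watch for a terminator, and hand each complete message to a handler. The class below is my own illustration of that collect_incoming_data/found_terminator pattern, not asynchat's actual code--the socket plumbing is omitted and the names of everything below the method level are mine:

```python
class LineProtocol:
    """A minimal sketch of asynchat's buffering pattern,
    without the socket plumbing."""

    def __init__(self, terminator=b"\r\n"):
        self.terminator = terminator
        self.buffer = b""
        self.messages = []

    def collect_incoming_data(self, data):
        # Network data arrives in arbitrary chunks; accumulate it and
        # peel off any complete messages.
        self.buffer += data
        while self.terminator in self.buffer:
            line, self.buffer = self.buffer.split(self.terminator, 1)
            self.found_terminator(line)

    def found_terminator(self, line):
        # A complete message: this is where a bot would parse and reply.
        self.messages.append(line.decode())

proto = LineProtocol()
proto.collect_incoming_data(b"HELLO bot\r\nWHO is on")
proto.collect_incoming_data(b"line?\r\n")
print(proto.messages)  # ['HELLO bot', 'WHO is online?']
```

The nice part is that your handler never cares how the bytes were chunked in transit--which is exactly the drudgery asynchat takes off your hands.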
Thursday, May 12, 2005
Dave admits that academic degrees and IQ tests are imperfect, but businesses have to use them to evaluate potential employees because there's no other way to do it. The "anarchic world of open-source coding" doesn't use them, because there's no evaluation to be done: "any person can contribute to the code, at any time, regardless of qualification".
On the contrary, no serious Open Source project would ever think of letting Joe Random contribute changes at will. Sure, anyone can download the source and make their own changes, but "commit privileges"--the ability to make those changes to the official codebase--are tightly controlled. That's a big distinction.
Open Source contributors start by having to submit every change as a "patch" to the existing code, rather than changing the code directly. A current developer examines the patch, then either rejects it or commits it on the contributor's behalf. After slowly establishing a track record of both good patches and the ability to work with the rest of the team, the contributor may receive the ability to directly commit his own changes.
On most projects, even programmers with years of experience and spotless reputations have to go through this process. Some projects are so conservative with commit privileges that even valued, long-time contributors still have to submit patches.
So granting commit privileges to a contributor is the Open Source equivalent of hiring an employee. Both represent serious commitments--an incompetent contributor with commit privileges is as dangerous to the project as an incompetent employee is to a business. And revoking commit privileges carries the same political and psychological baggage as firing an employee.
Businesses try to predict whether a candidate will be a good employee, while Open Source projects say, "show us you're good by doing work at no risk to us, and then maybe we'll offer you a position." It's unlikely that the software industry can get away with this--the media and medical industries do, but only for entry-level positions. So what can we do?
I think the solution is to increase our use of true "contract-to-hire" positions. Contract-to-hire gives the company the ability to bring a candidate on at low risk, then hire the candidate or decline with no repercussions. It's also far better at handling the unfortunate case of a competent employee who simply isn't a good fit for the company, because it limits the company's liability (both legal and emotional) while letting the employee avoid a resume-busting dismissal.
Yes, some companies abuse contract-to-hire. I know one programmer who was assured he would be "converted" in six months, only to spend two years in "headcount limbo" before being released with no warning. To be fair to both parties, the contract has to specify both the duration of the contract and a deadline for exercising or declining the option to hire.
The lack of benefits like health insurance for contractors is an issue, too, but it's hardly insurmountable. Contractors already command higher rates than they would get as full-time salary in order to pay for the missing benefits. When negotiating the contract terms, negotiate the proposed full-time salary (and thus the contractor's "benefit allowance") up front.
Fair and honest contract-to-hire is a win for both employers and individuals, and it's the only way I can see to achieve the hiring benefits Open Source projects enjoy. So what am I missing?
Wednesday, April 27, 2005
The recent influx of "name" talent at Google reminds me of the projections we used to make in the mid-nineties of how many years, at current growth rates, it would take until Microsoft employed all of Washington State (where I was living at the time).
Friday, April 15, 2005
He cites problems like "why doesn't my phone automatically remember a number I get via 411?" and "why can't my email system automatically read contact information from emails without an a priori, standardized format like vCard?" The complaints have a common theme: the user thinks, "why doesn't it do this very logical thing?", and the technology provider looks at the technology and thinks "given what I already know how to do, this is what I can do".
When the user says, "What I really want is to have my email automatically pull website addresses out of mail messages," the programmer's initial reaction is to write some code that pulls "http://" out of mail messages, and ship it as a feature.
Great--it did just what the user asked for. But what the user really wanted was "how can I make my email program and my browser share a brain?" The developer's logical but incorrect response leads to surgical, limited, and frustrating "fixes" for life's problems.
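Here's that surgical fix in miniature--a few lines that do exactly what was asked and nothing more. (This is my illustration, not anyone's actual mail client code:)

```python
import re

# The "surgical" feature, literally: scan a message for things that look
# like web addresses. The pattern is deliberately naive -- which is the
# point: it does just what the user asked for, not what the user meant.
URL_RE = re.compile(r"https?://[^\s>\"']+")

def extract_urls(message):
    return URL_RE.findall(message)

urls = extract_urls("See http://example.com/page and https://example.org.")
print(urls)  # note the trailing period that sneaks into the second match
```

Even this toy version has a classic surgical-fix wart: a sentence-ending period gets swallowed into the URL, because nobody asked the programmer to think about punctuation.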
An example: Microsoft introduced the Start menu in Windows 95. They found that a large segment of users didn't know where to begin without Windows 3.1's visible Program Manager, so they added an animation that slid across the taskbar and pointed to the Start button. They correctly reasoned that the animation would get annoying after seeing it a few dozen times, so after the user uses the Start menu a few times, the animation no longer occurs.
The close-up problem is "how do we tell new users to use the Start menu, and then stop telling them when they know how?" The surgical solution is a bit of text, a chunk of code, and a Registry setting.
Later, with the Office Assistant, they discovered that people who wanted to get rid of it altogether typically just hid it every time it came up. Another close-up problem, with another close-up solution: if you hide the assistant right away several times in a row, it asks if you really want to get rid of it permanently. Another small chunk of code and a Registry setting.
These are really two examples of the same problem: help the users until they don't need it anymore, then go away. But we keep solving the specific problem again and again, each time we see it come up, with specific one-off fixes.
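The recurring pattern--help the user until they demonstrate they don't need it--is small enough to sketch generically. This is a hypothetical illustration; the class name and threshold are mine, and a real implementation would persist the counter in the Registry or a config file:

```python
class RetiringHint:
    """A hint that shows itself until the user has demonstrated,
    a few times over, that they no longer need it."""

    def __init__(self, text, uses_before_retiring=3):
        self.text = text
        self.remaining = uses_before_retiring  # persist this in real code

    def maybe_show(self):
        # Return the hint text while the user still seems to need it.
        return self.text if self.remaining > 0 else None

    def feature_was_used(self):
        # The user found the feature on their own; count down to retirement.
        if self.remaining > 0:
            self.remaining -= 1

hint = RetiringHint("Click Start to begin", uses_before_retiring=2)
print(hint.maybe_show())   # 'Click Start to begin'
hint.feature_was_used()
hint.feature_was_used()
print(hint.maybe_show())   # None
```

Solve it once like this and both the Start-button animation and the Office Assistant become configurations of the same mechanism, instead of two separate chunks of code and two separate Registry settings.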
On the other hand, after seeing this one too many times, the programmers bring out the big guns: they design a large, rigid, all-singing, all-dancing specification that attempts to predict and address all possible isomorphs of the problem.
Dwight Eisenhower is credited with saying, "If a problem cannot be solved, enlarge it." I'd add, "Enlargement is a strategy—not a goal!"
This is COM. This is OLE. This is MFC's document architecture, the vCard spec, J2EE and every other "boil the ocean" design that aims to enumerate all possible needs and create a design that addresses everything in one perfect system. It solves the original problem—sort of—and then acquires a life of its own. Now, when a different problem comes up, the programmer tries to force the solution into the amazing problem-solving framework. This works great if the new problem is truly identical to the old one. That's usually not the case.
So, instead of solving real, root problems in a sustainable way, we continue to choose between doing as little as we can to solve the facet we're staring at and designing a massive architecture that's too fragile to adapt to the next challenge that arrives.
The first person who understands this, addresses it in a way that adapts to new challenges instead of trying to predict them, and then packages it in a useable and attractive form, will be a true pioneer.
Thursday, April 07, 2005
Phoneblogging was easier than I had feared, although my phone insists I am clogging.
RMS spoke at Pitt. On the one hand I was a little disappointed in the content of the talk: I was expecting more "current affairs" and state-of-the-FSF, but the bulk of the talk was right out of Free as in Freedom. My disappointment was tempered about halfway through the talk, when I realized that two thirds of the audience weren't alive when the events he was explaining occurred. The reactions from the crowd implied that for many of them, this was the first time they were hearing about the beginnings of the Free Software movement. The audience was roughly an even mix of free software cynics ("how can I ever earn a living if I give my work away for free?!") and free software advocates ("how will issue X affect free software?"), and a blessedly-small number of red-faced "oh-my-God-it's-really-him" fanboys.
I recorded the talk on an ancient minicassette recorder. It's not good enough to Oggify and put up, but I know some other people were recording; hopefully one of them will podcast it. He said a few interesting and new things--most interesting, he rattled off a list of a half-dozen or so changes that are being discussed for GPL 3, some of which I haven't heard before.
The question-and-answer period was fun. When I introduced myself as "one of the bad guys", who had the temerity to use Emacs to develop non-Free software, the room groaned, but Richard was gracious ("Using Free software to develop non-Free software doesn't make it any worse.").
What struck me most was his honesty and surprisingly, his humility. That's a word I haven't previously associated with him. He acknowledged when people pointed out "grey areas" in the philosophy (for example, the dichotomy between free software and non-free content). Before the talk began, I wanted to snap a picture of him, and asked his permission. He looked at me, paused, and said, "You know, your freedom is more important than me. Go ahead, but I'm not the important one here."
Unfortunately, that picture is the only one my phone has munged so far. Fitting, given his statement.
Wednesday, April 06, 2005
Thursday, March 31, 2005
I've just about settled on CherryPy for my web framework needs, but if Spyce is coming back, it looks like yet another thing I need to check out one of these days.
Update: looks like the svn repo is at http://svn-hosting.com/svn/spyce.
Tuesday, March 29, 2005
Monday, March 28, 2005
Thomas Hazlett of The Financial Times starts off with an interesting question: "Is Microsoft Toast?" He believes that while the anti-trust suits failed to break Microsoft's hegemony, Google, Mozilla, and Apple are examples of the rest of the market "chipping away" at the giant.
Wow. Major revelation to anyone who hasn't been online in the past five years.
It gets worse. The article is coated with a glossy sheen of outright errors:
Apple, with its tight, integrated interfaces cinching hardware to software has proven powerfully resistant to viruses and spyware, the poisonous infections of the Internet. Meanwhile, Microsoft users scramble to update their software with the latest patches, frantically downloading anti-viral software, running and re-running spyware disinfectants.
Erm, no. "Cinching hardware to software" has nothing to do with preventing malware infections, at least not until we get into the realm of things like Microsoft's "Trusted Computing" systems. Apple OSs do seem to have fewer malware problems, but hardware isn't the cause: it's a combination of marketing and software issues. Commercial malware authors wouldn't make enough money from Apple ports of their software to justify the costs, and most noncommercial (i.e., virus) authors don't run Macintoshes, so they can't program for them. Cross-platform languages don't typically lend themselves to the small size and tight OS coupling required for stealthy malware, and Apple's shift to the UNIX-based OS X means they have a better security model than either their traditional OSs or "home"-targeted Microsoft OSs. (Besides that, I have a hard time taking seriously anyone who thinks I "scramble" when I update, or that I'm "frantically downloading" random software on any of the OSs I run.)
Recall that the government’s anti-monopoly solutions focused on gaining access for multi-media software, such as that provided by RealNetworks, to piggyback on the Windows network. Yet the creation of an entirely new web-based gizmo, tied online to Apple’s iTunes, has proven the killer app. And despite the explosion in (legal) online music downloads, RealNetworks has seen its shares rise less than the Nasdaq over the past two years
Mr. Hazlett seems to be confused about who's benefitting from iTunes. This sentence falls at the end of a paragraph talking about how Apple is eating Microsoft's lunch. But then he tosses in RealNetworks. So which "entirely new web-based gizmo" is the "killer app": Apple's iTunes (which is being brutally undercut by other, cheaper services), or RealNetworks' Harmony (which Apple is trying to kill off by making their iPods incompatible with Harmony downloads)? I don't think Mr. Hazlett groks the phrase "killer app".
He also doesn't seem to understand much about Google or thin-client computing:
[T]ake the Google gambit. A company provides a new and improved search engine, splices in a few well-targeted ads, and is now capitalized at $50bn. Microsoft, despite ‘owning’ the software on which the applications run, did not get here first.
The whole idea of thin-client computing (like Google Search) is that the applications run on the server, not on the client. Microsoft doesn't own the software on which the applications run, and that's to their detriment. The software Microsoft owns is the eyepiece, not the telescope.
Apple is today on the upsurge because its personal computing systems have been vacuum-sealed, and because the company has – to the point of fetish – delighted in producing its own devices. While either was a distinct liability a decade ago, when Microsoft blew past by seizing the scale advantages of “open” operating system software, Apple’s obsessions look smarter now.
Perhaps Mr. Hazlett is unaware that the operating system that's powered Apple's comeback (at least on the non-iPod hardware) is itself based around an open-source operating system. Even so, as a developer I'm gobsmacked to hear someone describe Microsoft's operating system software as more open than Apple's. They're at about the same level of openness for an applications developer, and Apple certainly has a higher level of openness if you're looking at the OS itself (Microsoft's limited and legally-encumbered "Shared Source" program for Windows CE notwithstanding).
And then, one more non sequitur: a faint-praise swipe at Mozilla and Google themselves:
Yet, Mozilla opens its code to the world, generating robustness on pretty much the strategic polar extreme. Somehow, this seems to work in today’s marketplace, as does the Google business model, which looks a lot like the standard pre-bubble dot.com. With the exception of the revenues (Google has some).
It works for Mozilla because Mozilla is not a sold product. Financial people seem to have a hard time grasping the idea that people in open source work for non-monetary rewards.
More surprising was that offhand shot at Google. I suppose $3.19 billion qualifies as "some" revenue; in fact, I think the Financial Times would be rather happier with those numbers than with its own.
To be honest, Mr. Hazlett's hypothesis actually squares pretty well with what I see from my technological vantage point. It just makes me cringe to see a good-looking house built on questionable timbers.
Tuesday, March 22, 2005
Sunday, March 20, 2005
Firefox's "Search for <selection>" (and to a lesser extent the small, unobtrusive search box built into the interface) are good examples of creating and integrating features cheaply, without increasing clutter. They use the existing tabbed page functionality, which may not be as elegant as a more customized presentation, but adds no additional code and no additional program behavior for the user to learn. They leverage someone else's freely-available interface and program (Google's in this case), without using either a specialized integration interface or a complex general-purpose integration layer. They expose 80% of the external feature with 20% of the work. And, most importantly, they do it in a way that feels natural to the user.
The original version of "Search for <selection>" brought up the search page in the current tab. One could argue that that's the correct thing to do, but it's just more convenient for the user to put it in a new tab, even if that requires additional action from the user to view and dismiss the tab.
The moral of the story? Feature integration doesn't always have to be a high-ceremony, elegantly-scripted and standardized process, despite what the Web Services folks (among others) might say. The current implementation of "Search for <selection>" is proof enough.
Friday, March 11, 2005
Guido has been discussing the proposed any and all predicate functions. In the discussion that followed, he made an interesting comment:
What worries me a bit about doing a PEP for this simple proposal is that it might accidentally have the wrong outcome: a compromise that can carry a majority rather than the "right" solution because nobody could "sell" it.

For those not familiar with Python development, a "PEP" is a Python Enhancement Proposal. Generally, when someone proposes a new feature for the Python core or standard library, they float the proposal on Python-Dev, and if it seems promising, they write a formal PEP, which covers the reason for the feature, the potential effects, a reference implementation, etc. Then after more discussion and informal voting, GvR makes a "pronouncement" accepting, rejecting, or deferring the PEP.
When maintaining any project (but particularly a large infrastructure project like a programming language), there's a natural tension between having a single authoritative architect, and having a democratic process that tries to meet the needs of as many users as possible. If the project leans too far toward the single-architect model, it can wither because it doesn't meet the needs of enough new users; worse, if the original architect loses interest, it can die on the vine. On the other hand, if the project goes fully democratic, it can suffer from loss of focus, bureaucratic slowness, and feature bloat.
As Guido mentioned, ideas can become "sanitized" into a least-common-denominator version based on political will rather than usefulness, aesthetics, or logical correctness. For example, in The Design and Evolution of C++, Bjarne Stroustrup explains why pure virtual methods use the awkward "virtual void foo() = 0;" syntax rather than a "pure" keyword: the last standards committee meeting for release 2.0 of the spec was just around the corner, and if he'd suggested adding a "pure" or "abstract" keyword, the resulting debate would have delayed inclusion of abstract classes beyond the impending release of the spec. In other words, a short-term hack, based on the fear of bureaucratic delays, has been enshrined in the language for all time.
The need to placate and compromise is another source of problems in hyper-democratic projects. At a previous job, we were developing a coding standard by team consensus. On indentation, we were divided roughly equally into a "four spaces, dammit!" camp, a "two spaces, dammit!" camp, and a "I don't care, just make a decision, dammit!" camp. We ended up with three-space indentation, which had the alleged benefit of pleasing no one, but offending everyone equally.
Based on completely non-scientific observations over my career, more projects (both open-source and proprietary) stay coherent and successful when a single architect can make decisions. That doesn't have to mean an autocratic process: a single architect, like Linus Torvalds or Dave Cutler, can have "trusted lieutenants" to whom they delegate trust and responsibility, but a single vision tends to keep a project's design tighter and less susceptible to unproductive ratholes and non-orthogonality.
Chief architects walk a fine line, of course: they need to exert just enough control to keep the project coherent, but not so much that the project stagnates (or in the case of open source projects, people abandon the project). It's more an art than a science, though. I don't think you can describe hard-and-fast rules for how to do this; on the other hand, there's good money to be made convincing people otherwise.
Disclaimer: I'm not disparaging projects linked to here, and I'm not saying they're inherently broken. Each is successful by some measures, but each has flaws that can be traced to its location in the "autocratic-democratic" continuum.
Monday, March 07, 2005
>>> self.happiness = sum([pig.happiness for pig in slop])

I can't believe it took me so long to find out these CE development tools exist.
Among the goodies:
- A real command shell for Windows CE 4.x (ok, it's not bash, but it's better than nothing)
- A remote control app (no more stylus-mistapping-on-the-cradled-device nonsense).
Saturday, February 26, 2005
Two weights of T-shirts, plus a golf shirt for the collarly-inclined, and two long-sleeved shirts (just in case DC is a bit chilly in March).
All prices are only US$1.00 over cost; proceeds will benefit the PSF.
Incidentally, a number of people have mentioned the Adminspotting shirts. I was aware of them, but I can't say that Adminspotting directly inspired Choose Python. I'd been wanting to rent Trainspotting when I got a chance, so it was at the front of my brain at the time.
UPDATE 2: Enough people have asked about permission to use, modify, etc. that I want to make it clear and official: I hereby release the text, rendering, and design of "Choose Python" to the public domain.
Friday, February 11, 2005
In the past, I worked for a brief stint in the, ahem, online ad industry. I'm curious how a company whose motto is "don't be evil"1 goes about things, so I signed up for an AdSense account.
I'll leave it up for a little while, and after I get a good feel for it, I'll remove it (and take a good, hot shower).
This is just idle curiosity, mind you... I have precisely zero inclination to go back into that world. Ever.
1 Unfortunately, 'evil' is one of the nicer adjectives I would use to describe some of the folks I met in that industry.
Wednesday, February 09, 2005
Jeremy Cole remarked:
Basically, I always follow these basic criteria when I blog about work:
- Is it about anything sensitive in any way?
- Is it disrespectful to either your employer or any coworkers?
- Would you flinch in the slightest if your boss, his boss, all the way up to the CEO and the board of directors read it?

Those are exactly the same criteria I use, plus a fourth:
- If I were interviewing for a job, and the interviewer read this, would it present an inaccurate picture of me?
And as Jeremy concludes, you're often not left with much company-related information that's bloggable. For example, there are a lot of very cool things going on at work, but they're not yet public knowledge, so I have to bite my metaphorical tongue (and usually end up writing something about Python instead).
On the other hand, I'm still lusting after one of the new lightweight headsets that we released last year (and thus can be talked about). All the cool kids have them here.
Friday, February 04, 2005
- Thinking about what it would take to make a true, single-executable py2exe on Windows
- Exploring Karrigell and CherryPy, benchmarking them, and contrasting them with Rails
- Thinking about how to make PyPI work more like CPAN or Gems
- Checking out the update of wxPython on FreeBSD
- Checking out the new version of WingIDE and comparing/contrasting it with SPE
- Thinking about the equivalent of WTL for Python: take the bare-metal approach of venster, then apply the clean, Pythonic interface of Wax
I think there are some sprints going on at Pycon for some of the above, but I can't make it due to work obligations.
On the other hand, I seem to have acquired a free weekend sans wife and kids, so maybe I'll be able to pick one of the above and hack for a few hours.
Thursday, February 03, 2005
Truth be told, I'm not a Python-über-alles fanatic. In my day job, I write C++ for an embedded device. [Insert obligatory corporate disclaimer here]. I've successfully proven that it's possible to run a Python interpreter on said device, but you know what? Not gonna happen here... not when the footprint of the language runtime is half the footprint of the OS. When I do crank out some Python at work, it's something like a script to automate a build or test process, which runs on my workstation.
So, no, in my world, Python isn't going to prevent tsunamis, root out the terrorists, and usher in a new age of global eudaemonia.
What it has become is the "uppermost tool" in my development toolbox. When I need to try something to see if it could possibly work, when I need to automate some annoying computer task, or when I just want to hack for the sake of hacking, I usually grab Python first.
Wednesday, January 19, 2005
Tuesday, January 18, 2005
PyWebOff is a compare-and-contrast exercise to evaluate the strengths and weaknesses of some of the major Python web application frameworks.

Thank you! This is something that we Pythonistas really need.
So far, the answer to "how does Python support web programming?" has been "well, there's Zope, Webkit, CherryPy, Quixote, Woven..." There hasn't been much "If you want X, then you probably want Y." advice.
XML support in Python has the same issues. Maybe some disinterested party can do a similar analysis there.
Friday, January 07, 2005
Thursday, January 06, 2005
I like the clean and idiomatic Wax design. But when you commit to Wax, you're committing to a lot. For example, on Windows, your dependency chain looks like this: your code -> Wax -> wxPython -> wxWidgets -> Win32.
Each of those arrows is an abstraction (or for those who like big words, a "paradigm boundary transition"). And each abstraction leaks. This is true for all abstractions, because an abstraction is just a bridge between two different sets of assumptions.
One good example of abstraction leaks occurs when you wrap a procedural process like Win32 window creation in an object framework (I'll use C++ for this example). To create a window from C, you first create and register a window class structure, which includes a pointer to a window procedure (a callback function that handles all messages for windows of that class). Then you call CreateWindowEx(), specifying your window class. This creates the window and returns a unique identifier for the window (called a window handle, or HWND). Simple, right?
Object-oriented folks would want to create a C++ class for each "window class" and one instance of that C++ class for each window. So you implement the window procedure as a static class method, because Windows expects callbacks to have C linkage. Your window procedure needs to dispatch messages to the appropriate object, so you need a map of HWNDs to instances, which you populate in your object's constructor with the HWND returned by CreateWindowEx.
But there's a subtle gotcha: when CreateWindowEx creates the window it immediately sends several messages to the window procedure and processes the results before returning. When your static window procedure receives these messages, CreateWindowEx hasn't returned yet, so your object hasn't updated the mapping table yet, which means your window procedure doesn't know which object should handle the message!
Object toolkits solve this with different but equally egregious hacks. MFC and wxWidgets abuse a little-known but documented Windows feature called a CBT hook to get a notification at the moment a window is created (before messages are processed). ATL is more evil--it injects a hand-written assembly language thunk into the beginning of your window procedure to replace the HWND parameter with the address of the C++ object (of course, this means they have to hand-write assembly code for each CPU they support, but as far as Microsoft cares, "portability is for canoes").
The point is that abstractions have to do some interesting gymnastics to jump to the next paradigm. Gymnastics means code and data, and code and data mean additional performance cost. A Wax application has four levels of abstraction above the "native" environment. That's why (on my machine) a Wax version of "Hello World" has a memory footprint of many megabytes and takes about five seconds to start, while a bare-metal Win32 version written in C eats less than 50KB and starts in under a second.
wxWidgets was written over a decade ago as a cross-platform toolkit for C++ programmers, so it implements what C++ programmers in the 1990s needed (like a string class, cross-platform sockets, and the Windows CBT hook hack). wxPython is a Python binding for wxWidgets, which means it can rely on some well-used and well-tested code, but it brings along parts of wxWidgets that Python programmers don't need. And Wax is a more idiomatic API on top of wxPython, but it, in turn, has to bring along parts of wxPython that it doesn't need.
I'm not knocking any of the libraries (or their authors). Each decision to adapt the previous library was a good one, in its own context. But it adds up to a tall library stack with a big footprint.
On the other hand, if someone got the itch to take something like the Wax API, and implement it more directly (say, with ctypes interfacing to the native toolkit)... that would be cool.
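For a taste of what "more directly" could mean: ctypes can call straight into a native library with no toolkit stack in between. This sketch just binds libc's strlen (assuming a findable C library on a Unix-ish system); a Wax-alike would bind user32 and friends the same way:

```python
import ctypes
import ctypes.util

# Load the platform's C library directly -- no wrapper layers involved.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the native signature so ctypes marshals arguments correctly.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

n = libc.strlen(b"hello, native world")
print(n)  # -> 19
```

One Python file, one native call, zero intermediate toolkits: that's the shape of the stack a ctypes-based Wax could have.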
Wednesday, January 05, 2005
As usual, the essay is a mixed bag, but on his last point (getting a good internship), I agree completely. After my third year, I interned with a small startup company. This was way before the Internet bubble, so working for a small company wasn't the "in" thing to do--people favored either research internships at the university or else internships with big companies like Andersen or IBM. I didn't even find their product that interesting at the time--I was looking toward UNIX system administration as a career, and they were developing the first software-based video editor for Windows.
But it broadened my worldview. I found out why source control is important (they didn't have any). I found out why microeconomics is important (hint: make sure your company can make payroll). I found out that compilers don't give partial credit, and that real quality matters, because released software has a longer lifespan than homework assignments, and customers and magazine reviewers grade a hell of a lot harder than bored TAs.
In other words, the real world isn't your BSCS program, and I learned that a lot faster in a small company than I would have at a cushy Fortune 500 internship.
It was also the best career move I've ever made. The three-programmer and two-intern company turned into a one-programmer and one-intern company by the end of the summer (the other programmers left for greener pa$ture$). The president asked me to stay on for a year, and after we presented in the Microsoft booth at Comdex that fall, Paul Allen bought the little company, and me along with it.
So I'd add to Joel's advice: get an internship, but make sure it's one where you're not insulated from reality. Lots of companies put their interns in the corporate equivalent of a padded cell. They let interns write some low-impact code, do some random-monkey testing, or handle some scut-work that isn't cost-effective to have a more senior programmer do. But an internship at a small company that's just scraping by will expose you to the real world, and the insight and bruised knuckles you get there will give you a real advantage over your blue-suited entry-level colleagues.
I've been playing the web game Carnage Blender for about a year and a half. It's got a small but very loyal player base, and in-game items sell for hard currency just as they do in the big-name games. But the real secret to CB's success seems to be the game community. CB is the only web game I've seen that incorporates live in-game chat. That leads to real ties between players, and a real sense of community. The strong community sense lets them get away with pretty strong community behavior guidelines (even chat is kept to PG standards). There's no artificial reputation metric, but all player behavior is transparent--not only is everything logged, but every player can see what every other player is doing, so reputation really matters.
After the recent tsunami disaster, the CB community pulled together in a big way. In-game cash and items were donated, and then sold for real currency to donate to disaster aid. In less than a week, over CB$24,000,000 was converted to almost US$200, which will be sent to Oxfam for disaster relief.
This is even more amazing considering that the game world in which the donations were taken is being wound down in favor of "Carnage Blender 2", which launched at the beginning of the year. CB1's exchange rate, usually steady at US$10 per CB$1M, had dropped to about US$4 per million the previous week, but people stepped right up to buy game cash at above-market rates because the proceeds were going to charity.
Danielle Bunten Berry once told me that online games have to have two things in order to be legitimate: the game itself has to be solid, honest, and fun, and the community around the game has to gel.
Carnage Blender has both. In spades.