Wherever you go, there you are.

The Reality of Edmonton Urban Sprawl


I was listening in on the Edmonton City Council's public hearing regarding the proposed west and southeast LRT routes, when someone brought up that old bugaboo about Edmonton being quite possibly the worst major city in the world for urban density. You don't have to look far to find this claim... the first paragraph on Wikipedia will do the trick:

At 684 km², the City of Edmonton covers an area larger than Chicago, Philadelphia, Toronto, or Montreal. Edmonton has one of the lowest urban population densities in North America, about 9.4% that of New York City.

This claim has always bugged me; not because it's strictly untrue or even fundamentally invalid, but mainly because it's spiritually misleading. To show why, let's start with a naive comparison of city population density for some of the cities mentioned. This is easily determined by dividing each city's population by its area, using the numbers from Wikipedia:

City Density Area
Edmonton 1067.2/km² 684km²
Calgary 1360.2/km² 726km²
Toronto 3973.5/km² 630km²
Chicago 4707.3/km² 606km²
New York City 6887.1/km² 1214km²
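
The table above is just straightforward arithmetic, and a small Python sketch can reproduce it from population and area figures. The populations below are rough era-appropriate numbers I've plugged in for illustration, not authoritative values:

```python
# Naive "city density" comparison: population divided by municipal area.
# The populations below are approximate figures from the same era as the
# table above (assumptions for illustration, not authoritative values).
cities = {
    # name: (population, area in km²)
    "Edmonton": (730_000, 684),
    "Calgary": (988_000, 726),
    "Toronto": (2_503_000, 630),
    "Chicago": (2_853_000, 606),
    "New York City": (8_363_000, 1214),
}

def density(pop, area_km2):
    """People per square kilometre."""
    return pop / area_km2

for name, (pop, area) in sorted(cities.items(), key=lambda kv: density(*kv[1])):
    print(f"{name:15s} {density(pop, area):7.1f}/km²")
```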

This looks pretty bad; 27% of Toronto's density and only 15% of New York City's density. With numbers like that, we must all be living on acreages in the middle of downtown Edmonton! Of course, a quick look at a couple of maps will tell us why these numbers are not really that helpful for comparing density in these cities.

map of Chicago city limits map of Toronto city limits

We can see in the top two images (Chicago and Toronto respectively) that the city limits don't even come close to encompassing the actual urban population of those cities. Let's see how Edmonton compares...

map of Edmonton city limits

Hmm... those city limits seem to include quite a bit of farmland, and don't reflect the urban/suburban population distribution all that closely. You can see that the entire northeast quadrant is basically empty, not to mention the fringe of farmland around the west and south sides.

Further, you might notice the green strip of land and the river running through the heart of Edmonton. Heading on over to trusty Wikipedia, we find that between the river valley park system and local neighborhood parks, Edmonton has over 111km² of parkland. With a city area of 684km², this means more than 16% of the city area is being used for green space. In contrast, the famed Chicago Park District, the largest urban park system in the United States, is approximately 30km², or only 5% of its city area.

Edmonton river valley

A more enlightened approach might be to then compare the "metropolitan" area of Edmonton with those of other cities. Using metro numbers, we get a table that looks like:

Metro Density Area % metro pop outside city limits
Edmonton CMA 109.9/km² 9418km² 30%
Calgary CMA 224.1/km² 5083km² 13%
Toronto CMA 866.0/km² 5904km² 51%
Golden Horseshoe Core 642.5/km² 10097km² 61%
Chicago Metro 347.5/km² 28163km² 71%
New York City Metro 1092.0/km² 17405km² 56%

This seems like a perfectly valid idea, but it again falls short in implementation. The numbers used for the Canadian cities are for the "Census Metropolitan Area" (CMA), a census apparatus defined by Statistics Canada. The numbers for the US cities are basically whatever Wikipedia came up with, and I'm sure the methodologies don't line up one-to-one. In fact, even between Canadian CMAs, comparisons are difficult. The Edmonton CMA is the largest CMA in Canada by area, and yet only 30% of its population lives outside the city limits of Edmonton (which is a refreshing number to help put our sprawl issues in perspective).

map of Golden Horseshoe area

In contrast, the Toronto CMA doesn't even match the area known as the Greater Toronto Area, and arguably the more valid comparison would be the "Golden Horseshoe Core", which encompasses a total area roughly equivalent to the Edmonton CMA. The Golden Horseshoe effectively supports the Toronto urban centre, and 61% of its population is "sprawled" outside of Toronto city limits. In fact, if we were to run the numbers on the extended Golden Horseshoe, which contains about 25% of Canada's population and which is arguably essential to the city of Toronto, we see density drop off to 256.7/km².

The point of all these numbers and comparisons isn't to prove anything specific, or to argue that Edmonton doesn't have more sprawl than it probably needs. It's simply to point out that a blind comparison of density stats tells us very little about the actual density and sprawl issues of any particular city. It also reminds us that the incredibly high population densities of metro centres like Toronto or New York are artifacts of huge populations spread across entire regions, without which the central cities would simply cease to function.

As a prairie city wrapped around a river, Edmonton faces different sprawl issues from many of the cities it is often compared to. We have a strongly identified central core (well, two of them), a really good public transit system for a city of this size -- and if you don't believe this, stop comparing your city to New York or Toronto, and instead try to find a US city in the 750k range that can match Edmonton's transit -- and a surprisingly consolidated set of suburbs. From a completely personal perspective, I've lived in cities one-third the size of Edmonton with end-to-end commute times worse than anything I've encountered here.

Edmonton city hall

I certainly think Edmonton could use more density, and I'm glad to see that sprawl is a continuing concern. However, I also think that Edmonton is more dense than people realize, and that the overall situation is not as dire as some believe. Given our hyper-sensitivity to sprawl, it seems unlikely that it will ever be allowed to get truly out of hand.

On a final note, I think that one of the reasons the sprawl issue is both frustrating and exciting is that the average Edmontonian is not content with simply living in a Houston or a Denver. We compare ourselves to cultural giants of cities: Toronto, Chicago, New York. We feel our transit system is inadequate because we don't have the London Underground, or that our density is suspect because we don't have a New York skyline. I'm glad that we hold ourselves to this incredibly high standard. I hope that we never stop striving to be a world-class place to live -- it's what I love about this city.

EVE Online - Anathema


A friend of mine recently posted some screenshots of his favorite mounts, including an Amarr Anathema from EVE Online. Not to be outdone, I had to dredge up a screenshot of my Anathema as well :)

screenshot of Amarr Anathema

Fallout 3 Modding - Primer


In this post I want to spend some time talking about the fundamentals of Fallout 3 modding: the engine and how it loads mods, the relevant files and file types involved, and the various tools and their purposes. This primer is not intended to serve as a practical modding guide; it will not teach you how to do anything just yet, and we will not be building a tutorial mod in this post or even opening any of the tools. What it will do is lay the groundwork for understanding how modding works and all of the files and locations involved, and it should provide a much better place to start from when approaching the many practical tutorials already available on the web. As an added bonus, this guide will also provide a relatively comprehensive list of the tools you will need (and explanations of why you will need them), as well as links to additional resources where you can get your feet wet with practical applications.

Engines, Masters, and Plugins, Oh My!

Fallout guy holding survival kit

The engine Bethesda used for Fallout 3 is the Gamebryo engine. Fallout 3 is not the only game to use it; among other titles, Bethesda has used progressive versions of this engine for Morrowind and Oblivion as well. The details I will discuss in this series are most relevant to Fallout 3 modding, but many of them apply equally well to Oblivion modding, which uses a very similar version of the Gamebryo engine.

The first thing to know about modding Fallout 3 is how the engine loads the game world. There are two types of data files: "Master Files" with the suffix ".esm", and "Plugin Files" with the suffix ".esp". In general, the engine loads only ONE master file, and then loads any additional plugin files that are available and configured. Modders usually work with the plugin ".esp" files. It is possible for the engine to load more than one master file, and some mods do need to be masters (as in the case of a total conversion), but these cases are few and far between, so I'm not going to discuss them here.

In Fallout 3, the entire game world is loaded from a master file that came with the game called "Fallout3.esm", which is located in the "Data" directory of your Fallout 3 installation (probably something like C:\Program Files\Bethesda Softworks\Fallout 3). This file contains or references everything necessary to load the game world; all the layouts, NPCs, quests, dialog, models, items, etc. After this master file has been loaded, the engine can then load any additional plugins that it finds in this same "Data" directory (files ending in ".esp"). You can choose which plugin files the game should load using the "Data Files" option in the Fallout 3 launcher.

list of Fallout launcher data files

In essence, plugins modify the data in the master file that is first loaded. The plugin can add new elements, remove existing elements, or modify existing elements. We'll go into more detail about how this is done later, because it is important, but it suffices right now to know that the master file contains the original game world, and the plugin file you create contains all of your changes to the original game world.

The last thing to note is that the master file and the plugin files are essentially identical in content and structure. The concept of a "master file" is more logistical than technical; it just indicates that the file has a self-contained and complete description of a world, and it is the file which should be loaded first to form the base against which all additional plugins make modifications. A plugin file can also be this "self-contained" if so desired, though realistically we often want to use plugins to make changes to the existing world rather than defining an all-new world, because, hey, that would be a lot of work :)

Resource Data

You may have noticed earlier that I said the master file "contains or references" everything needed for the game world. What a master file or plugin file "contains" is the logical description of the world and its elements, but all of the "heavy hitting" items such as game models, textures, sound resources, and so on are actually stored in a different place. So the plugin file (and the original master file) may store information about, for example, an NPC -- that it exists, what its name is, where it is located -- but some of the information about the NPC in the plugin file will be references to external elements, such as which model and texture to use and which voice file to use for its spoken dialog.

These references are defined in the plugin file, but the actual model data, texture data, and sound data live in one of two places. The first place they may live is in a ".bsa" file, which probably stands for "Bethesda Softworks Archive" or something equally witty. These ".bsa" files are also located in the "Data" directory alongside the master and plugin files, and you will notice that the original Fallout 3 game comes with several of these archive files. These contain all of the original game resources, and are significantly larger than the master or plugin files.

The second place that this data can live is in subdirectories within the "Data" directory. These subdirectories contain individual resource files, but you won't see any in a fresh Fallout 3 installation because all of the original game resources live in the ".bsa" archives instead. However, you can put new resources, such as new models or textures, in subdirectories within the "Data" directory for your plugin to reference, and the game will load these just as it does the resources in the archives, allowing you to add completely new content in this way.
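
To make the lookup concrete, here is a hypothetical Python sketch of the two places a referenced resource can live. The function name and the archive-index dict are my own inventions for illustration, not the engine's real internals; note too that loose files only win over archived copies once the override mechanism discussed below is set up:

```python
from pathlib import Path

# A resource referenced by a plugin (e.g. "meshes/armor/mycoat.nif") can
# live as a loose file under the "Data" directory or packed inside a
# ".bsa" archive. This helper is purely illustrative.

def resolve_resource(data_dir, rel_path, archive_index):
    """Return (source, location) for a resource path relative to Data.

    data_dir:      the Fallout 3 "Data" directory
    rel_path:      path relative to Data, as stored in the plugin
    archive_index: dict mapping relative paths to the .bsa that packs them
    """
    loose = Path(data_dir) / rel_path
    if loose.exists():
        return ("loose file", str(loose))
    if rel_path in archive_index:
        return ("archive", archive_index[rel_path])
    return ("missing", None)

# Example with a fake one-entry archive index:
index = {"meshes/armor/mycoat.nif": "Fallout - Meshes.bsa"}
print(resolve_resource("C:/Fallout3/Data", "meshes/armor/mycoat.nif", index))
```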

example BSA resource list

In fact, if you look inside the ".bsa" archive files that come with Fallout 3, you will see that the archives are really just a collection of resource files organized in a directory tree. It is possible to extract these archives to a separate location so that you can look at all of the resources available in the game, and even use them as a basis for making new resources for your plugin, e.g. by taking an existing model and applying a new texture. You would then put the new resource into a subdirectory within the "Data" directory and reference it in your plugin. The engine would then load up the referenced resource and display it in game!

Further, it is even possible to override the resource files that are contained within the original game archive files, by placing an identically named resource file in an identically named directory tree within the "Data" directory. To make sure this is what you intend, Fallout 3 additionally requires a special ArchiveInvalidation.txt file be created in the "Data" directory that lists all of the things in the archive that you want to override with copies in the local directory tree. However, there is a tool that makes this unnecessary, so I won't describe the details of messing with the ArchiveInvalidation.txt file, but it's useful to know at least that much about that process when it comes to troubleshooting.

Finally, you might be interested in what type of resources are available. Links to tools for working with these will be mentioned later in the tools section:

  • sound formats include wav, mp3, and ogg; additionally, lip files are used for dialog lip synching
  • all textures are stored in the standard dds format, using DXT compression
  • models are stored in the nif format, Gamebryo's native model format
  • lots of other things, like skeletons and animations and other bits of data that I'm not going to enumerate here

Plugin File Structure

In theory, you rarely have to worry about the underlying structure of the plugin or master files, because this is all handled transparently by the editor tools for you. In practice, unfortunately, there are often conflicts and issues that can be most simply and easily discovered and resolved by low-level inspection and editing of the plugin files. Knowing about these fundamentals will very likely save you (and your users) hours of frustration in the future.

As mentioned earlier, master and plugin files are quite similar. In fact, when it comes to the underlying structure, they are basically identical, except for a few flags that mark a master file as a master. Simply put, these files are collections of records. A record is an entry in the file that has a type, some data, and possibly some child (or sub) records. So a master file or plugin file is simply a collection of records and subrecords, nested as deeply as needed, each containing data about the element it describes.

Every item, NPC, quest, script, location, effect, and any other element of the game world is contained and described by one of these records. The type defines what the data for that record contains: the SCPT record data contains a script, the QUST record data contains information about a quest, and so on. Some record types are quite complex and need additional subrecords to completely describe their content, because the record data itself is not sufficient or dynamic enough. This makes intuitive sense; if you have a record that describes a complex entity like a room, for example, you would expect it to have a bunch of subrecords, one for each object contained inside that room. And it does.
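
Purely as an illustration (the real files are binary, and these class and field names are mine, not the format's), the record/subrecord nesting might be modeled like this in Python:

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    rec_type: str            # four-character type code, e.g. "QUST" or "SCPT"
    data: bytes = b""        # the record's own payload
    children: list = field(default_factory=list)  # nested subrecords

# A complex element like a room: a parent record with one subrecord per
# object placed inside it (record types here are only loosely indicative).
room = Record("CELL", b"...room data...", [
    Record("REFR", b"...a placed chair..."),
    Record("REFR", b"...a placed table..."),
])

def count_records(rec):
    """Total records in a tree, parent included."""
    return 1 + sum(count_records(c) for c in rec.children)

print(count_records(room))  # → 3
```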

example of plugin record entries

Reaching back to the discussion on plugin loading, you may remember that the master file is loaded first, and then all plugin files are loaded. At a low level, this means that all of the records in the master file are loaded, and then all of the new records and modifications to existing records are loaded from the plugin files. This is how a plugin is able to modify or delete existing content, rather than just adding new content; the plugin can instruct the engine to modify or even delete records that were loaded previously.
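
A toy Python version of this "last loaded wins" behavior might look like the following; the FormIDs and record values are invented, and the real engine does far more than a dictionary merge:

```python
# Records are keyed by FormID. The master is loaded first, and each
# plugin's records then add to, replace, or delete entries in the table.

def build_world(master, plugins):
    """master/plugins are dicts of FormID -> record data (None = delete)."""
    world = dict(master)
    for plugin in plugins:
        for form_id, record in plugin.items():
            if record is None:           # a DELE-style deletion
                world.pop(form_id, None)
            else:                        # new or modified record
                world[form_id] = record
    return world

master = {0x0001A2B3: "vanilla raider", 0x0001A2B4: "vanilla pistol"}
patch  = {0x0001A2B3: "tougher raider",          # modifies an existing record
          0x010012AA: "brand-new plasma rifle"}  # adds a new one
print(build_world(master, [patch]))
```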

This record-based approach gives plugins their power and flexibility, but it can also cause some problems. If you modify the same records (i.e. elements within the master file) as a different plugin, you may have conflicts with that plugin. It is also very easy to modify or delete master file records unintentionally, which results in what is generally referred to as a "dirty" mod. This happens routinely when you are in the editor looking at existing content from the master file and accidentally delete or move an item without noticing. Not only can this cause annoyances within the original game world completely unrelated to the scope of your plugin, but it can also cause unintentional conflicts with other plugins that would not ordinarily conflict with your mod.

When this happens, the only way to undo your unintentional changes is to find the DELE records in your plugin that tell the engine to delete a record loaded from the master file, or else to find the records in your plugin that modify master file records you didn't really want to change (such as giving an item a new position or rotation, a ridiculously easy thing to do by accident in the editor).

To track down records and items for these (and other) troubleshooting purposes, it's helpful to know that every record has a FormID. The FormID is a unique hex identifier for that record. You will use this quite often in scripts or when looking through the records of your plugin for specific changes. These FormIDs are created automatically by the editor tool, though you will have to set a "prefix" so that the FormIDs in your mod are unique from the FormIDs in the master file and everyone else's plugin.
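
As an illustration of how the prefix keeps FormIDs unique, and assuming the usual Gamebryo-family convention that the top byte of the eight-hex-digit FormID identifies which file in the load order owns the record, the split might be sketched as:

```python
# Hedged sketch: treat a 32-bit FormID as a one-byte load-order prefix
# plus a 24-bit local id. The example values are made up.

def split_form_id(form_id):
    """Split a 32-bit FormID into (load-order prefix, local id)."""
    return (form_id >> 24) & 0xFF, form_id & 0x00FFFFFF

def make_form_id(prefix, local_id):
    return (prefix & 0xFF) << 24 | (local_id & 0x00FFFFFF)

fid = make_form_id(0x01, 0x0012AA)   # a record owned by the 2nd file loaded
print(f"{fid:08X}")                  # → 010012AA
print(split_form_id(fid))            # → (1, 4778)
```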

In addition to the FormID, many elements in a plugin also have an EditorID. The EditorID is something that can be set within the editor and is a useful way to uniquely identify items within scripts and references without having to figure out the FormID. I mention this mainly because it is important to know the difference when troubleshooting; some tutorials use the terms interchangeably, which can make that confusing. Interestingly enough, EditorIDs are in fact defined by an EDID record, which will be a subrecord of the item to which it applies.

Tools and Resources

One thing to note before going into the tools and resources is that the Fallout 3 engine is just a newer version of the Oblivion engine. If you cannot find a specific resource, tool, or tutorial about how to do something in Fallout 3, it may be possible to find equivalent information for doing that same task in Oblivion. Since it is (mostly) the same engine and even a similar editor, the information you learn from an Oblivion tutorial transposes quite nicely to Fallout 3.

Before installing the tools, you may want to re-install Fallout 3 to a simple base directory, like "C:\Fallout3". This makes your file paths much simpler and easier to find, and it can also prevent some known issues with Vista and UAC when operating out of the Program Files directory. This is entirely up to your discretion, however.

The Garden of Eden Construction Kit (G.E.C.K.)

Get it here.

The G.E.C.K. is the official editor for Fallout 3 and is the modder's bread and butter tool. It is where you do everything you need to edit the plugin files; create and modify items, quests, NPCs, rooms, layouts, etc.

FO3 Archive Utility

Get it here.

This archive utility lets you inspect and extract files from the ".bsa" archives, allowing you to re-use original game resources. You will almost certainly want to do this extraction before starting on your mod: even if all you want to do is re-use existing resources, the editor does not know how to select items out of the ".bsa" archives. Instead, you must select a resource from the extracted copies, and the editor then strips off the root of the resource path (everything before the "Data" directory part). When the engine loads, it uses this relative path to look up the resource in the ".bsa" files (or on disk, if a copy exists in one of the subdirectories in the "Data" directory and the necessary override has been configured). So the engine loads the resource from the archive, but the editor cannot select it from the archive, hence the need to extract even for simple re-use.

It is not recommended that you extract these to the Fallout 3 "Data" directory, as you would then potentially be overriding resources within the ".bsa" files, which you generally don't want to do (except in the specific cases where you do). Since I installed Fallout 3 to "C:\Fallout3" and put all of my tools in "C:\Fallout3Tools", it made sense for me to extract the original game archives to "C:\Fallout3Tools\Data" to mirror the original structure, but you can put them in any location that makes sense for you.

This tool will also let you create new ".bsa" archive files, which is a great way to distribute any new resources for your mod in a single self-contained way, instead of requiring the end-user to extract all your resources into individual subdirectories within the "Data" directory.

ArchiveInvalidation Invalidator

Get it here.

As you may recall from the discussion on resources, an ArchiveInvalidation.txt file is necessary if you want to override resources within the ".bsa" archives with local copies in the "Data" directory. This tool makes maintaining the ArchiveInvalidation.txt file unnecessary, and instead forces Fallout 3 to check the "Data" directory first and use anything it finds there to override the ".bsa" archive versions.

Remember that if your mod overrides built-in resources, the end-user will also need to install this tool (or update their ArchiveInvalidation.txt file) in order to use your mod properly.

Games for Windows Disabler

Get it here.

Bethesda chose to include Games for Windows Live as part of the Fallout 3 package, which has been something of a controversial issue. I'll remain safely neutral on the point here, except to point out that during mod development you will be potentially loading and reloading Fallout 3 dozens or hundreds of times. This tool will disable the use of Games for Windows Live within Fallout 3, making startup faster and trouble-shooting potentially easier.

NifSkope

Get it here.

The NIF model format is somewhat proprietary to the Gamebryo engine, though there are plugins for working with it in Blender and other tools. That said, the de facto tool for viewing and editing NIF files directly is NifSkope, which lets you preview model files and even make simple changes (such as re-texturing).

DDS plugins

For Gimp.
For Photoshop.

The DirectDraw Surface format is relatively common, and the links above provide plugins for the two most common graphics packages you would use to edit these textures, which is the most likely use-case. However, there are many more DDS tools out there -- viewers, editors, etc. -- if you need to do something else with them.

Fallout Mod Manager (FOMM)

Get it here.

Installing and managing mods is not intensely difficult, but many users like to have a tool that handles the task for them. FOMM is probably the de facto standard mod manager among end-users, so it makes sense to be familiar with its features and to make your mod FOMM-friendly.

FO3Edit

Get it here.

The G.E.C.K. editor is a great tool, but when you need to get into the nitty-gritty record-level details of your mod to resolve conflicts or clean up dirty modifications, you'll want to use the FO3Edit tool to manage the low-level details.


Posts in this series:

All of my posts about Fallout 3 modding.

Fallout 3 Modding - Introduction


Fallout 3 logo

I have recently decided to create a mod for Fallout 3. Since I am just starting out, I don't plan to talk about the details of the mod just yet. However, the project is relatively ambitious and there's no guarantee that it will ever see the light of day, so I wanted to write a couple of posts about the experience so that some utility may be gleaned from the endeavor, regardless of the eventual outcome. These posts will probably vary from simple curiosities to full-fledged tutorials; hopefully they will all share at least one thing in common, which is to be in some way helpful to other Fallout 3 modders and modder-hopefuls.

This will not be my first experience modding this particular engine, since it is the same engine used by the game Oblivion. I have already written a mod for Oblivion, which was in many ways a much more technically challenging mod due to what it was trying to accomplish and the ways in which it pushed the engine. Its coolest feature is arguable, but its second-coolest feature was definitely the fact that the mannequins introduced by the mod would disappear from the game after a few days' time, taking all of your best armor and weapons with them. I like to pretend this was intentional; that my Oblivion character was sitting on a hoard of stolen loot inside her castle keep, cackling gleefully under the moonlight as armies of magical mannequins quietly converged on her lair in the depths of the night, dumbly ferrying the stolen booty to their evil master. In reality, of course, it was just a bug, and one I never got around to fixing at that.

The point of this digression is not just to reminisce about my past glories, but also to plan out at least a few of these posts. While I should have known what I was doing going into this mod, I found myself struggling to remember most of the important details. Further, while there were many practical "how-to" tutorials on specific topics, there were very few tutorials discussing the basic details of how mods function and interact with the engine, which are things that I had to learn the hard way and which are just now slowly percolating back into my working memory. While these practical tutorials are probably very useful for some people, I find myself to be more of an abstract, theoretical learner. A step-by-step guide does very little to help me out if it does not explain the "why" behind each step.

Keeping this in mind, I want to write a primer that tackles two main points. First, I want to talk about the fundamentals behind modding this engine, because I feel this is the area most lacking in the existing knowledge-base. Knowing the basics of the file layout, structure, and data resources used by the engine makes a lot of common problems much easier to understand and resolve. This will be somewhat technical and abstract, and is intended to introduce all of the basics to someone who knows nothing about modding Oblivion or Fallout 3. Second, I want to build on that basis to provide a comprehensive list of the tools required to work with each of these fundamental areas. Instead of just randomly downloading tools and blundering through a "My First Dungeon" tutorial, I hope that this combination will give you a firm understanding of what each tool does and why you will need it.

After that, we'll just see where things go. As I make progress on the mod I will probably be making additional posts about the various things I run into. I'm sure at least some of them will be interesting :)

Posts in this series:

All of my posts about Fallout 3 modding.

Technology is Neat, or How I Learned to Love the Virtual Machine


servers into machine funnel

In my last post I talked about my current development environment, and one key aspect of this environment is the use of virtualization and virtual machines. While virtualization is not a new concept (to me or to the world), it has only recently crossed into the realm of day-to-day utility in my life, and I find the entire topic to be one of those eye-catching technological wonders that throws me back to the first time I turned on a VIC-20 or started hacking away at an Osborne. In short, it makes me smile and think, "Hey... that's really neat!".

Virtualization can mean a lot of things, but lately it's been the new, trendy way of talking about what used to be lumped in with emulation. At the technical and semantic level, they are not the same thing: in emulation, the hardware you present inside the VM is entirely abstracted and is often not the same as the physical hardware of the host (e.g. emulating a Nintendo-64 on PC hardware), whereas in virtualization the guest machine sees the architecture of the host and probably even has direct access to some of its bits and pieces. From a practical standpoint, the most common end-goal for both is the same: inside a "host" OS running on real, physical hardware, create one or more "guest virtual machines" (VMs or guests) that look just like real, physical machines to anything that runs on them. The practical upshot of all this is that one piece of physical hardware can be running multiple OS instances, not even necessarily the same OS, and they all think they are running on their own little piece of hardware without a care in the world.

Again, this is nothing new. Virtualization has existed as a practical reality since at least 1972. My first negative experience with it was VMWare, and my first positive experience with it was Xen. But only recently has it truly become a useful tool for my day-to-day tasks, for a number of different reasons.

Virtualization used to be very slow and buggy in the x86 world, which is one of the major reasons that I stayed away from it. VMWare was always such a let-down for me; what they accomplished with the hardware at hand is really pretty amazing, but for day-to-day use it was far too slow and unstable for my tastes. However, hardware support for virtualization finally entered the mainstream x86 processor market (AMD and Intel CPUs) around 2007, which has opened up a world of stable and fast virtualization options.

Another limitation had always been hardware resources, specifically memory. I've rarely had enough RAM to comfortably run my regular OS, let alone a couple of tag-along virtual machines. It hasn't always been just about price, but also the 4GB RAM barrier of a 32-bit OS. With 64-bit Vista and the current prices for RAM, I've finally been able to afford and actually use a surplus of cheap memory; far more than my OS currently needs, and more than enough to handle the requirements of several VMs simultaneously.

Lastly, there seems to be a wellspring of VM options these days. VMWare has come out with a free version, Microsoft has entered the fray, and there are a number of other choices. With all of these options, I was able to find one that met my needs: Sun's VirtualBox. It's free (hey, what can I say, I'm cheap). Setting up a VM is easy, running multiple VMs is stable, and it "just works". It supports seamless mouse and keyboard integration with the guest machine, so the window running the guest OS behaves almost identically to every other window on my desktop. It supports a network mode that gives the guest OS full visibility on my internal network through promiscuous use of my NIC (sounds kinky, I know, but it isn't). And it can share files and folders on my host OS with the guest OS through an embedded file-share device that behaves just like a network mount.

With all of these forces combined, I've suddenly started using virtualization all over the place. As I already talked about in detail, I've been using it to run a guest Linux webserver VM on my Vista host in order to serve up the web apps I'm actively developing. I also use it to run a Windows XP image in order to test with IE6.

Once I started with this setup, I found other handy uses for my little virtual machines. For instance, my day-job requires that I spend a lot of time connected to various corporate VPNs, and these have an annoying tendency to kill my regular network connectivity. Also, a lot of them don't work very well on Vista, especially not 64-bit Vista. No problem now, though; I have an XP VM dedicated just to VPNs. It has all of my VPNs configured, and I can connect anywhere I need without killing my host OS network. Even better, when I go on the road, I simply copy the VM image over to my laptop and run it there, without having to worry about maintaining two sets of configurations or about OS compatibility.

And perhaps the most endearing and important aspect of it all is that it's just really neat! Running a Linux webserver in a little window on my Vista PC as I edit the files it serves up in real-time on my host OS while simultaneously testing these changes using IE6 on another version of Windows running inside another little window... well, that's just fundamentally cool. And I don't even take advantage of all the other nifty things you can do with virtualization, like migrating a running virtual machine from one physical machine to another without interruption, or scaling physical resources on demand between multiple VMs. With all of this cool techno-wizardry going on, it's easy to see why I finally learned to love the virtual machine.

My Development Environment


My answer to a recent Stack Overflow question got me to thinking about the long and painful process I went through in creating a development environment that met all of my idiosyncratic needs.

The question asks people what kind of setup they use for their development environments. My environment for building web applications and other personal projects went through a lot of iterations before I finally settled on one that met all of my needs and did not irritate me in any discernible fashion. Some of the personal requirements that I slowly identified as I went through this process were:

  • I need Windows: First and foremost, I'm a PC gamer. Additionally, my day-job requires a variety of Windows applications, so I need a functional Windows desktop no matter what the underlying task. Also, many of my personal projects are built in Visual Studio.
  • I need Linux: I'm also an open-source web developer and so I need Linux for many of those projects too. I use a Linux server for production deployments and it helps to have a functional Linux development server that matches that environment as closely as possible.
  • I treat personal projects like real ones: For even the simplest of personal webpages, I like to have a development environment separate from my production environment. I even like to have a test environment too, but that's not generally as critical. While it may seem like overkill for a lot of basic tasks, it enforces good habits and has saved my bacon too many times to count.
  • I need to edit locally: While I have nothing against network mounts, and in fact use a really cool NAS device for a ton of handy things, I've always found the random network delays an annoyance when editing text files. I'm a compulsive saver and every slight delay drives me nuts.
  • I don't like messing with character encodings: DOS vs. UNIX newlines, UTF-8 vs. legacy ANSI code pages... I'm very picky about my newlines and character encodings. I want files deployed to my production Linux environment to have UNIX newlines and UTF-8 encoding, no exceptions.
  • I'm power-conscious: At the end of the day, I want to be able to turn off as many of my electronic devices as possible, as easily as possible.
  • I like to share: I need to be able to share up-to-the-second changes in development with my wife and co-developer, with no more hassle than looking at the changes myself.
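That newline-and-encoding requirement is easy to enforce mechanically when a stray file slips through. A minimal sketch using standard UNIX tools (the filenames are made up for illustration; `\351` is the Latin-1 byte for "é"):

```shell
# Create a sample file with DOS (CRLF) line endings in Latin-1 encoding.
# \351 is "é" in Latin-1; \r\n are DOS newlines.
printf 'caf\351\r\nbar\r\n' > sample.txt

# Re-encode Latin-1 to UTF-8, then strip the carriage returns,
# leaving UTF-8 content with plain UNIX (LF) newlines.
iconv -f ISO-8859-1 -t UTF-8 sample.txt | tr -d '\r' > sample-utf8.txt
```

Tools like `dos2unix` do the newline half of this in one step, but the `iconv | tr` pipeline handles both halves at once.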

Of course, there's no single magic bullet that resolved all of these needs at once. Many of my setups excelled in some areas while falling down in others. I had several setups that were "good enough", but the slight deficiencies constantly irritated me. In some ways these were the worst, because I always felt guilty about spending time trying to improve the setup when I should have been working on code instead. But I did, finally, settle on a setup that meets all my needs and has no apparent annoyances. Without further delay, I present it to you!

The Host OS

I tried dual-booting various versions of Windows with a Linux desktop environment for a long time -- I've toyed on and off with that kind of setup for almost a decade now. But the maintenance overhead of a dual-boot system has always been a drawback to me, as well as the complexity and potential (often realized) for boot-sector SNAFUs. I've also tried Linux as my primary desktop with virtualized Windows environments, but that's always been a problem for gaming and performance concerns.

In the end, I finally settled on Windows Vista, 64-bit. 64-bit allows me to take advantage of all 4GB of my RAM (and more soon, I hope!), and I can play all of the latest and greatest PC gaming titles without problem. Further, because it is my host OS and not just a VM, I always feel like I'm getting "all the bang for my buck" when it comes to gaming or other intensive Windows tasks, like Visual Studio compiles.

The Dev Server

Finding a good choice for my Linux development server has likewise been a long and often twisting road. I've gone from dedicated servers to using my desktop as my development environment and all the way back again, before finally settling on the option that now works for me. I run a Linux (Ubuntu) server as a virtual machine (VM) on my desktop host, using Sun's VirtualBox.

Running a local VM for my development server has solved a lot of problems for me. I can edit files locally on Windows but have the guest OS serve them up without any hassle. The VM instance has its own IP, so other computers on my network can access it just like a physical host. It's easy to start and stop at will, and it shuts down when I turn off my desktop at night. It also lets me experiment very easily; I can create a new server image (or clone my existing one) at any time to try out something risky or dramatic, with far less pain and suffering than in the past. Finally, it lets me take advantage of my desktop hardware for something other than just gaming, which makes the cost of my inevitable and continuous desktop upgrades a little easier to justify, since I can concentrate my precious hardware funds into one machine.
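For the curious, that start-and-stop workflow can be scripted with VirtualBox's command-line tool. A rough sketch, assuming a VM registered under the hypothetical name "ubuntu-dev":

```shell
# Boot the dev server with no console window (it runs in the background;
# the VM name "ubuntu-dev" is a made-up example).
VBoxManage startvm "ubuntu-dev" --type headless

# At the end of the day, send the guest an ACPI power-button event
# so it shuts down cleanly along with the desktop.
VBoxManage controlvm "ubuntu-dev" acpipowerbutton
```

Dropping those two commands into startup and shutdown scripts makes the dev server follow the desktop's power state automatically.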

The Editor

I can't even count the number of editors I've tried over the years. The one I had been using most recently was jEdit. It was powerful, had lots of optional plug-ins, and was cross-platform, which tended to come up a lot as I switched my environment around ad nauseam. But it never handled character encodings in a way that I liked, and it was always a little slow and a little clunky due to its Java implementation. I've always had problems with the damn installer too, which is a trite thing to get hung up on I suppose, but there it is.

Lately I've been using Notepad++ which is fast, lightweight, and handles character encodings exactly the way I want. I can force the default encoding to UNIX newlines and UTF-8 for all new files, and never worry about it again. It clearly indicates which encoding a file is using in case I need to convert something, and has no problem displaying any of the standard encodings I regularly come across. It has excellent syntax highlighting, but it doesn't have a lot of plug-in options, which has actually been a good thing for me. I'm no longer tempted to try all these stupid editor features which seem cool at first but ultimately become a waste of time and don't really address the problem I'm trying to solve.

So, there you have it. My ideal development environment, finally realized after almost a decade of experimentation. So... maybe now I should get on with the task of actually writing some code, eh?

What do you do with a drunken... website?


I run the Fashion Mash website, together with mezamashii, and lately I have been spending a lot of time thinking about where to go from here.

The reality is that we made a lot of mistakes during the development of Fashion Mash. We started down a promising path with a utility concept, but wandered astray into a fashion-based world that we had very strictly meant to avoid. Even the name was, in retrospect, a poor and misleading choice. This was never meant to be a fashion site but a wardrobe utility.

As a result of our many misjudgements, the site became something of an unwanted child, a poor lost project whose tastes and opinions differ from those of its parent creators. It has been neglected and left to fend for itself for many months as our attentions turned to more exciting or practical projects.

The thing is, though, that in spite of our downright abusive neglect of this poor little website, it has still managed to keep puttering along. We routinely get new signups, and there is active, though somewhat anemic, community participation. There are frankly so many things wrong with the website in its current state that I'm amazed it still manages to pick up several users a week.

All of which leaves me in a bit of a quandary. The site itself still has quite a lot of untapped potential. It may never be the next Facebook, but it is certainly a concept that people seem to find attractive and, at the moment, there's only a smattering of competition. I'm certain that with a major overhaul based on all that we've learned so far, the turnover and community participation rates could be greatly improved. Of course, that would be a lot of work, and without the interest to drive that effort it becomes a mere drudgery.

And that, I suppose, is the real problem. We've fallen out of love with the site itself, and our obligations to it feel like exactly that: a sense of forlorn duty rather than one of excited passion.

As I often do in these cases, I turned to the Lucky Honu Oracle in hope of inspiration. While I have no belief in its mystical capacity, I find the presentation of a presumptuously authoritative answer to be an excellent jump start to the introspective process. My gut reactions to its prophecies frequently tell me a lot about my true feelings on the matter.

I asked the I Ching Oracle: What should we do with Fashion Mash? Fix it up or let it lie?

37. Chia Jen - Family
Put the family or group first. Take the mother figure into account when making decisions.

What does the Oracle hold for you? http://luckyhonu.com/

Unfortunately, this hasn't yet opened up any new wellspring of insight into my problem. Do I devote energy to fixing the site and hope the spark rekindles, or let it lie dormant and let bygones be bygones?

I wonder what the legal precedent is for divorcing a website.