First, let me start by thanking you all for the very helpful responses. While I've encountered many of these ideas over the years, I'd admittedly been putting off a proper study of DSP, and this discussion really inspired me to buckle down and give it a go. I spent a good chunk of time yesterday browsing the book that Sam recommended, and it's fantastic (and free online)! I went ahead and ordered a more recent edition from Amazon, which arrived today, and I'm looking forward to sitting down with it over the weekend.

I also spent a fair amount of time experimenting with Photoshop's Gaussian Blur (in hindsight, I probably should have done this to begin with) and discovered many of the things mentioned here (the relationship between downsampling and kernel sizes, how multiple passes fit in, etc.). Interestingly, the Photoshop implementation is a bit odd: what it labels as a radius in pixels (the only input to the function) actually appears to be treated as a half-radius that is doubled behind the scenes, so you wind up with kernel sizes twice as large as the input would suggest. That one had me scratching my head as I agonized over how they were able to get results so much blurrier than mine with such small kernels :) Once I figured it out, I was able to learn a lot about its behavior under different conditions.
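To make the radius-vs-kernel-size point concrete, here's a minimal sketch in Python/NumPy. The radius-to-sigma mapping (sigma = radius/2) and the "silently doubled radius" rule are my own assumptions for illustration, not Photoshop's actual internals:

```python
import numpy as np

def gaussian_kernel(radius, sigma=None):
    # Build a normalized 1-D Gaussian kernel covering +/- radius pixels,
    # so the kernel has 2*radius + 1 taps.
    # sigma defaults to radius/2 -- a common convention, but an assumption here.
    if sigma is None:
        sigma = max(radius / 2.0, 1e-6)
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

# With a nominal radius r, the kernel spans 2*r + 1 taps.
# If an implementation doubles the radius behind the scenes (as the
# Photoshop behavior described above suggests), the kernel is nearly
# twice as wide -- and the blur correspondingly stronger:
r = 5
k_nominal = gaussian_kernel(r)       # 11 taps
k_doubled = gaussian_kernel(2 * r)   # 21 taps
print(len(k_nominal), len(k_doubled))
```

Comparing the two kernel widths side by side makes it easy to see why results from the doubled interpretation look so much blurrier for the same nominal input.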
Anyway, I think I'm all set at this stage, and I thank you all again for the insights.