Yes, unfortunately.  That may be a problem, especially if the object is high poly.  There are more complex methods that use shaders.  One of the better ones I've found is:

1. Render the scene to two textures: one RGB, the other depth.
2. Apply an edge-detection filter to the depth texture (no edge = white, edge = black).
3. Multiply the two textures together on a fullscreen quad.
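The per-pixel math of steps 2 and 3 can be sketched on the CPU with NumPy (in a real renderer this would live in a fragment shader; the function names and the Sobel threshold here are illustrative, not from any particular engine):

```python
import numpy as np

def sobel(depth):
    """Sobel gradient magnitude of a 2D depth buffer."""
    d = np.pad(depth, 1, mode="edge")  # pad so output matches input size
    gx = ( d[:-2, 2:] + 2 * d[1:-1, 2:] + d[2:, 2:]
         - d[:-2, :-2] - 2 * d[1:-1, :-2] - d[2:, :-2])
    gy = ( d[2:, :-2] + 2 * d[2:, 1:-1] + d[2:, 2:]
         - d[:-2, :-2] - 2 * d[:-2, 1:-1] - d[:-2, 2:])
    return np.hypot(gx, gy)

def edge_mask(depth, threshold=0.05):
    """Step 2: no edge -> 1.0 (white), edge -> 0.0 (black)."""
    return np.where(sobel(depth) > threshold, 0.0, 1.0)

def outline(rgb, depth, threshold=0.05):
    """Step 3: multiply, darkening pixels where depth has an edge."""
    return rgb * edge_mask(depth, threshold)[..., None]
```

The threshold needs tuning against your depth range; with a non-linear depth buffer you'd typically linearise it first so distant silhouettes still register.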

The problem here is that you'd need shaders if you want to do it in a single pass, which may be overkill (either MRT to render to both textures simultaneously, or storing the depth in the alpha channel of the colour texture).  If you want to stay fixed-function, you'd need two render-to-texture passes, one for each texture, so there wouldn't really be an advantage to this method over the original one. 
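The depth-in-alpha idea amounts to packing both outputs into one RGBA target so a single pass suffices; a trivial sketch of the packing (function names are mine, and this assumes depth is already normalised to [0, 1] so it fits an 8-bit alpha channel):

```python
import numpy as np

def pack_scene(rgb, depth):
    """One RGBA target: colour in RGB, normalised depth in A."""
    return np.dstack([rgb, depth])

def unpack_scene(rgba):
    """The edge-detection pass reads depth back out of alpha."""
    return rgba[..., :3], rgba[..., 3]
```

The obvious cost is precision: 8 bits of depth is often too coarse for clean edges, which is one reason MRT with a proper depth format is usually preferred.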

There's also normal-based edge detection, which I've tried as well.  It's simpler, though you'll probably need a shader here too.  It unfortunately suffers from inconsistent line thickness, which may or may not be a problem.  You'd only need one pass, though. 
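The core of the normal-based variant is just comparing each pixel's normal with its neighbours': where adjacent normals diverge sharply, draw an edge.  A minimal NumPy sketch of that test (again CPU-side for illustration; the cosine threshold is an assumption you'd tune):

```python
import numpy as np

def normal_edge_mask(normals, cos_threshold=0.9):
    """Edge (0.0) where a pixel's unit normal disagrees with its
    right or bottom neighbour; otherwise no edge (1.0).
    normals: H x W x 3 array, e.g. a G-buffer normal texture."""
    dot_x = np.ones(normals.shape[:2])
    dot_y = np.ones(normals.shape[:2])
    # dot product with the right-hand and below neighbours
    dot_x[:, :-1] = np.sum(normals[:, :-1] * normals[:, 1:], axis=-1)
    dot_y[:-1, :] = np.sum(normals[:-1, :] * normals[1:, :], axis=-1)
    return np.where(np.minimum(dot_x, dot_y) < cos_threshold, 0.0, 1.0)
```

Note this only catches creases where the surface orientation changes; two parallel faces at different depths won't register, which is why depth- and normal-based detection are often combined.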

HTH,
Ian