Yeah, there are somewhat better ways to do that.  The problem with the proposed solution is that it relies on non-public APIs, which can change without any deprecation period.  Instead, I would have created the figimage object with a particular transform object that would have placed it at the appropriate data points.
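
For what it's worth, here is a minimal sketch of that idea using only the public transform API: convert a data point to display (pixel) coordinates with ax.transData and hand the result to figimage.  The array `im` is a stand-in for the PNG read below, and the example data, 32x32 size, and centering offset are my own assumptions:

```python
import matplotlib
matplotlib.use("Agg")           # headless backend for this sketch
import numpy as np
import matplotlib.pyplot as plt

# Stand-in for image.imread('redX_10.png'): a 32x32 solid red RGBA square
im = np.zeros((32, 32, 4))
im[..., 0] = 1.0                # red channel
im[..., 3] = 1.0                # fully opaque

fig, ax = plt.subplots(dpi=72)
ax.plot([1, 2, 3], [1, 4, 9], "o")
fig.canvas.draw()               # make sure the transforms are up to date

# data coordinates -> display (pixel) coordinates
x_pix, y_pix = ax.transData.transform((2, 4))

# center the 32x32 image on the data point
fig.figimage(im, x_pix - 16, y_pix - 16, origin="upper")
```

The drawback is that figimage positions are fixed in pixels, so they would still have to be recomputed after a pan, zoom, or resize.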

Maybe you or someone on this list can help me understand more about this.  If I take the code that I have adapted to my purposes, I have a few questions about it:

        # constants
        dpi = 72; imageSize = (32,32)
        # read in our png file
        im = image.imread('redX_10.png')

So far, so good--just setting the dpi and getting the image.
       
        fig = self.figure
        ax = self.subplot
        ax.get_frame().set_alpha(0)

Does the current version of Matplotlib require that the frame be set fully transparent?  I need a white canvas, so I think I'd rather not do that.

        # translate point positions to pixel positions
        # figimage needs pixels not points
        line = self.line_collections_list[0][0]

"line" here is my line of datapoints from elsewhere in my app.

        line._transform_path()
        path, affine = line._transformed_path.get_transformed_points_and_affine()
        path = affine.transform_path(path)

I have no understanding of the purpose of the previous three lines.  Can someone give me a quick explanation?
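
My tentative reading (corrections welcome): those lines take the line's data, run it through the data transform, and return the path in display (pixel) coordinates, which is what figimage needs.  If that's right, the public-API equivalent for a plain line might be something like this sketch (the plotted data is made up):

```python
import matplotlib
matplotlib.use("Agg")           # headless backend for this sketch
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
(line,) = ax.plot([0, 1, 2], [0, 1, 4])
fig.canvas.draw()               # make sure the transforms are up to date

# I believe this is the public-API equivalent of the three private calls:
# transform the line's data coordinates into display (pixel) coordinates.
pixels = ax.transData.transform(line.get_xydata())
```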

        for pixelPoint in path.vertices:
            # place image at point, centering it
            fig.figimage(im,pixelPoint[0]+80,pixelPoint[1]+180,origin="upper")

This is just a way to put the image somewhere on my canvas to see it, so these offsets are just for this exercise.

I should state that if I do it this way, the images appear on the canvas but are NOT repositioned in data coordinates (and they should be)--which is probably just Ben's point, right? 
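
For completeness, here is a sketch of an alternative that stays entirely within the public API and keeps the images anchored in data coordinates through pan, zoom, and resize, using OffsetImage and AnnotationBbox.  The image array and data are again stand-ins:

```python
import matplotlib
matplotlib.use("Agg")           # headless backend for this sketch
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.offsetbox import OffsetImage, AnnotationBbox

# Stand-in for the red-X PNG
im = np.zeros((16, 16, 4))
im[..., 0] = 1.0                # red channel
im[..., 3] = 1.0                # fully opaque

fig, ax = plt.subplots()
xs, ys = [1, 2, 3], [1, 4, 9]
ax.plot(xs, ys)

# Anchor one image at each data point; AnnotationBbox keeps it there
# when the axes limits change.
for x, y in zip(xs, ys):
    ab = AnnotationBbox(OffsetImage(im), (x, y), frameon=False)
    ax.add_artist(ab)

fig.canvas.draw()
```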

Thanks,
Che