The idea is fairly straightforward, though actually supporting it may
not be. Basically, allow either a toggle
(pre-merge/post-merge) for when the filtering is applied,
or allow a filter to be specified separately for
each layer of a game in addition to the filter applied to
the 'merged' video frame. Yes, I realize this would
only be practical for AVI recording or for super-high-speed
(likely even future) computers, but it would also be a
prime candidate for making better use of multi-threaded machines,
by letting each sub-filter run in parallel.
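A minimal sketch of what the per-layer, parallel path might look like, assuming a hypothetical Layer struct holding that layer's pixel buffer and a generic Filter callback (neither name comes from the actual code base); each layer's filter runs on its own std::thread, and the merge plus the usual post-merge filter would happen afterward:

```cpp
#include <cstdint>
#include <functional>
#include <thread>
#include <vector>

// Hypothetical types -- the emulator's real layer/frame structures would differ.
struct Layer {
    std::vector<uint32_t> pixels;   // ARGB pixels for this background/sprite layer
    int width = 0, height = 0;
};

using Filter = std::function<void(Layer&)>;  // in-place filter, e.g. a smoother/scaler

// Apply a (possibly different) filter to each layer in parallel,
// then let the caller merge the filtered layers into the final frame.
void filterLayersParallel(std::vector<Layer>& layers,
                          const std::vector<Filter>& perLayerFilters)
{
    std::vector<std::thread> workers;
    workers.reserve(layers.size());

    for (size_t i = 0; i < layers.size(); ++i) {
        if (i < perLayerFilters.size() && perLayerFilters[i]) {
            workers.emplace_back(perLayerFilters[i], std::ref(layers[i]));
        }
    }
    for (auto& t : workers) {
        t.join();   // wait for every sub-filter before merging
    }
    // ...merge the filtered layers here, then optionally run the
    // existing post-merge filter on the combined frame as today.
}
```

The pre-merge/post-merge toggle would then just be a choice of whether to call something like this before compositing or to keep filtering only the merged frame as the emulator does now.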