Without doubt, gqview is a great piece of software. I
especially find the "find duplicates (similarity)" feature
very attractive.

Currently I am trying to clean up my photo collection. Over
time it has spread over various partitions. There are the
big original photos, a number of downsized versions,
various thumbnails, and the variants that resulted from
selective editing. The idea is to treat all of the smaller
photos as duplicates and keep just the big originals.

I guess this is a reasonable and fairly common task.

To do this with gqview, I start by putting a selection of
images, big and small, into the duplicates window and run
"find duplicates - similarity (low)".

Ideally (this is what I would like), I could select the
first group and move all of them to a safe place, because
they are 'the big originals'; conversely, the second group
would contain the smaller versions.

In practice, however, the images do not show any
user-visible sorting criterion *within* their respective
similarity group. The fact that the first image is
implicitly ranked at 100% and the second one at, say, 98%
appears arbitrary, because it could just as well be
expressed the other way round, giving 100% to the second
and 98% to the first. The same of course applies to the
other members of each similarity group.

To me it boils down to the question of whether the grouping
algorithm could be 'tweaked' so that it always chooses the
biggest photo of a similarity group as the parent (first
entry), to which all the children (second and further
entries) are related.

------

More conceptually, my request could be seen as a second
order search criterion (size), and other criteria are
conceivable as well, like the date of the image file, the
existence of an EXIF header, etc.
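
Again only a sketch with invented field names (RankedEntry,
mtime, has_exif), nothing taken from gqview: the single size
comparison above could become a chain of criteria, passed to
qsort() in exactly the same way.

    #include <sys/types.h>
    #include <time.h>

    /* Hypothetical entry carrying the extra attributes mentioned
     * above; the fields are made up for the sake of the example. */
    typedef struct {
        const char *path;
        off_t size;      /* first criterion: bigger file wins          */
        time_t mtime;    /* second: older file date wins               */
        int has_exif;    /* third: the entry with an EXIF header wins  */
    } RankedEntry;

    /* Chain of second-order criteria: size, then file date, then
     * presence of an EXIF header; used with qsort() just like
     * compare_by_size_desc() in the earlier sketch. */
    static int compare_ranked(const void *a, const void *b)
    {
        const RankedEntry *ea = a;
        const RankedEntry *eb = b;

        if (ea->size != eb->size)
            return (ea->size > eb->size) ? -1 : 1;
        if (ea->mtime != eb->mtime)
            return (ea->mtime < eb->mtime) ? -1 : 1;
        return eb->has_exif - ea->has_exif;
    }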