With many [pix_image] objects in a patch, Pd consumes a lot of CPU even if none of them loads an image and none is connected to any [gemhead]. The load appears to be proportional to the number of [pix_image] instances.
On a dual-core machine the consumption can exceed 100% of one core, so the problem presumably lies in some threaded part of the code; hopefully that helps track the issue down.
The attached patch (64 pix_images) eats 150% of my Intel Core 2 Duo T9300 on Ubuntu 10.04 while doing absolutely nothing.
This makes it impossible to keep a handful of pix_image objects with preloaded images and dynamically choose which one to show, to name just one thing that cannot be done and that should not be resource-demanding. It kills any project that works with images.
Anonymous
I've just realised that sending [pix_image] a "thread 0" message stops it from eating CPU.
So the threaded mode is the culprit. Note that no image actually needs to be loaded to trigger the issue: as long as threaded mode is enabled, the object wastes CPU constantly.
Confirmed on Debian Squeeze: pix_image causes a CPU load of ~4% per instance when threaded loading is enabled.
Debian Squeeze
Panasonic CF 74 with Intel integrated 945GM
Pd-extended 0.42.5
Gem 0.93
Since threaded loading is enabled by default, patches with multiple pix_image objects created on other systems (where this issue does not exist) cannot be used without modification. If the pix_image CPU-load issue is common on Linux, the object should initialize with threadMess(0) instead of threadMess(1) as long as the issue remains unsolved. I recompiled Gem with this small change and can now load arbitrary patches containing multiple pix_image objects. Until the issue is solved in a more satisfactory way, cross-platform patch sharing would be better served by source and binary distributions that use this initialisation.
Katja Vetter
The threaded loading in [pix_image] has been completely rewritten for 0.93 (current SVN), and I believe this has also fixed this issue.