The ParaView people recommend making use of multiple available CPU cores per GPU (see http://www.vtk.org/Wiki/Setting_up_a_ParaView_Server), since rendering speed is not usually the bottleneck in visualisation on modern hardware, and it would be nice if the viz-paraview script could run this way. Perhaps the easiest implementation would be for exclusively allocated GPUs, with the user specifying how many pvserver instances to start per GPU, although I suppose it could work with shared GPUs as well. To illustrate the current issue: to use all the CPU cores with viz-paraview on our system, we would need to share each GPU six ways (12-core systems with two GPUs), which is less than ideal for other uses, and in any case starting six X servers per GPU seems excessive. It would be more efficient to run a single X server per GPU, with six pvserver instances all using that X server for rendering. I think this would be a nice addition to an already very useful script.
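As a rough sketch of the arithmetic behind the request above (the function names and the display-naming convention are illustrative assumptions, not part of viz-paraview): with a single X server per GPU, the number of pvserver instances each GPU must host to keep every core busy, and the X display each instance would render on, could be computed like this:

```python
# Hypothetical sketch: distribute pvserver instances across GPUs so that
# every CPU core drives one pvserver, with a single X server per GPU.
# Assumes one X server with one screen per GPU, named ":0.<gpu_index>".

def pvservers_per_gpu(num_cores, num_gpus):
    """How many pvserver instances each GPU's X server must host
    so that all CPU cores are used (e.g. 12 cores, 2 GPUs -> 6)."""
    return num_cores // num_gpus

def display_assignments(num_cores, num_gpus):
    """Map each pvserver rank to the X display of the GPU it renders on.
    Ranks are packed: the first `share` ranks go to GPU 0, and so on."""
    share = pvservers_per_gpu(num_cores, num_gpus)
    return [":0.%d" % (rank // share) for rank in range(num_cores)]

if __name__ == "__main__":
    # The 12-core, two-GPU system described above:
    print(pvservers_per_gpu(12, 2))    # 6 pvservers per GPU
    print(display_assignments(12, 2))  # ranks 0-5 on ':0.0', 6-11 on ':0.1'
```

This is only meant to show the intended layout: six rendering processes per GPU, but one X server per GPU instead of six.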
I have added an option "-S|--share-gpu-count" to the viz-paraview script. This makes it possible to run multiple pvserver instances (and hence use multiple CPU cores) rendering onto a single allocated GPU.
GPUs can be allocated either shared or exclusive. Many high-end servers (from HP as well as other vendors) support 4 or 8 CPU sockets, and current-generation CPUs have as many as 12 cores per socket, so a system can have many more CPU cores than GPUs. For this reason, I feel it makes sense to allow GPUs to be allocated in either shared or exclusive mode.
This is implemented in Changeset 356:
https://sourceforge.net/apps/trac/vizstack/changeset/356
I have tested this change with up to 4 pvservers per GPU. Let me know whether this works for you.