View Manipulation add-on

Help
Pete
2014-05-20
2014-08-29
  • Pete
    2014-05-20

    Hi...

    Lately I have been taking a look at what I said some time last autumn: that I'd "give it a try" and write a plugin for view manipulation.

    Well.... I can make a plugin happen, I can extend and override stuff, and Ant accepts my code. Only one essential thing is missing: I don't know how to "hijack" the mouse events that are supposed to move or rotate the views.

    BR

    -P-

    EDIT: Aha. This was mentioned in an earlier post. The metaTool and altTool.... I think I got it. -- I'll let you know. :)

     
    Last edit: Pete 2014-05-20
  • Pete
    2014-05-20

    ...And I have a compiled piece that works. So far only well enough that I know it has an effect.

     
  • Pete
    2014-05-25

    Hi.

    ....And time to take back my cry for help, it seems...

    package artofillusion.viewassistant;
    
    import artofillusion.*;
    import buoy.widget.*;
    
    public class ViewAssistantPlugin implements Plugin
    { 
        public void processMessage(int message, Object args[])
        {
            if (message == Plugin.SCENE_WINDOW_CREATED) // <-- This level was missing
            {
                final LayoutWindow layout = (LayoutWindow) args[0]; //<-- The problem appeared here.
                //MenuBar menuBar = layout.getMenuBar();
            }
        }
    }
    

    Now I have an entirely different set of problems, so I'm sure I'll be back. :)

     
    Last edit: Pete 2014-05-25
  • Pete
    2014-06-09

    OK. So far I have got the basic move and rotate functions working the way I wanted, but the rest of the view handling is not working very well together with the plugin.

    Basically there are a few methods in the Camera and ViewerCanvas classes that would need to be replaced, but if I understand this right, it'd be better just to keep a list of view parameters inside the plugin and keep sending that information to the existing viewers and cameras. Though I suspect that there would be some flickering if (for example when using setOrientation()) the view first changes, then the plugin checks what changed and moves the camera to where it is supposed to be... Any comments on that?
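
    Something like the sketch below is the shape I have in mind. Only a rough sketch: the ViewState class, watchView() and the guard flag are my own plugin-side names, not existing API; if I read the source right, the canvases dispatch a ViewChangedEvent that I can hook with buoy's addEventLink().

    // Sketch: plugin-side copy of the view state, pushed back onto a canvas
    // whenever it reports a change. (Assumed imports: artofillusion.*, artofillusion.math.*)
    public class ViewState
    {
        CoordinateSystem storedCoords = new CoordinateSystem();
        double storedScale = 100.0;
        boolean applying;
    
        void watchView(final ViewerCanvas view)
        {
            view.addEventLink(ViewChangedEvent.class, new Object() {
                void processEvent()
                {
                    if (applying)
                        return;             // ignore the event caused by our own update below
                    applying = true;
                    view.getCamera().setCameraCoordinates(storedCoords.duplicate());
                    view.setScale(storedScale);   // keep the magnification matched too
                    applying = false;
                }
            });
        }
    }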

    -P-

     
  • Luke S
    2014-06-10

    Though I suspect that there would be some flickering
    Any comments on that?

    Test first. Get the rest of your functionality working, try it out with a larger scene. If it flickers, then worry about it.

    Just my $0.02:)

     
  • Peter Eastman
    2014-06-13

    Which methods?

    Peter

     
  • Pete
    2014-06-13

    At least setOrientation, setScreenParams and setScreenParamsParallel of the Camera class.

    The setOrientation has its own built-in logic about turning the camera (around a point it defines itself), so it's easy to get "lost in space" as the plugin is using a system of its own. The parameter setters are not exchanging any information (zoom in one mode, switch the mode, and the magnification changes too...). I'd like to keep the modes matched.

    In the ViewerCanvas -- well, I was hoping to find a "setCamera" so I could have extended the Camera class and overridden those methods to match the logic of the plugin, but if I'm not mistaken the camera cannot be changed? And at some point I'd probably like to make frameWithCamera animated (and the Camera class functions too), but that's not highest on the priority list now.

    And the scroll wheel zoom should also be matched with the plugin - I haven't studied that part yet.

    The list will probably get longer, but at least it looks possible to watch for ViewChanged events and then "re-perform" the changes with different code.

    BR

    -P-

     
    Last edit: Pete 2014-06-13
  • Pete
    2014-07-10

    Hi again...

    In the Camera class there are two variables, "viewDist" and "distToScreen". Somehow these two seem to be working independently of each other... even so that when a method takes one of them in as a parameter, it actually uses the other one for something.

    What exactly is the purpose of having two of those in there? Aren't they basically supposed to be the same thing?

    BR

    -P-

     
    Last edit: Pete 2014-07-10
  • Peter Eastman
    2014-07-14

    Wow, it's been years since I wrote that code. I honestly don't remember what the reason for that distinction was! It appears to me that viewDist is always 0. Anyway, I did a search for uses of setScreenParams(), and that invariably seems to be the value that gets passed.

    Peter

     
  • Pete
    2014-07-15

    How about the setScreenParamsParallel()? I think that tells a different story.

     
  • Peter Eastman
    2014-07-15

    setScreenParamsParallel() neither changes nor uses viewDist.

    Peter

     
  • Pete
    2014-07-16

    Hmmm.... I wonder what it is I think I'm remembering... Anyway, I'll go on reverse-engineering the thing some time next week. I'm pretty sure I'll be reporting back. :)

     
  • Pete
    2014-07-29

    So, out of curiosity, I tested the deprecated setDistToScreen() of Camera. The results were kind of surprising. :D .... but never mind. Now a couple of more acute questions:

    1) I'd like to add a "point to center" feature. This is something that exists in the professional software packages that I have used.

    The function is that when I point and click something on the screen, the view is then centered on that point (assuming that there was something there) -- so it should find the point on the surface of an object.

    The feature could be, for instance, on the center mouse button, or under some key-press + mouse-click combination.

    So how do I get started? I suppose it needs some mouse listener thing, probably extended from something that is already there? (I tried something with the Move or Rotate tools, but with no effect.)

    2) In the mesh editor windows I'd like to add a center (or fit view) to selection command, but it turned out to be a little bit trickier than I was hoping. The compound manipulator always seems to find the center of a selection, so how does it do that?

    Actually I thought this would be easy, as the list of what is selected is easily available, but finding the corresponding parts of the mesh turned out to be more complex. The "compass" manipulator, however, seems to know how to do this...

    I also had a look at the PolyMesh code that defines a center for a selection, but I did not quite get how it works...

    EDIT: Well, I got started with 2). I got it somehow working with vertices... But there does not seem to be a universal tool to change all other types of selections into vertices? I found the one for triangle meshes, but where does the conversion happen in the SplineMesh code?

    BR

    -P-

     
    Last edit: Pete 2014-07-31
  • Peter Eastman
    2014-08-02

    MeshEditingTool.findSelectionBounds() works out the bounding box for whatever is selected in a mesh editor. It's not very complicated, and you should be able to adapt the code directly. The trick for handling arbitrary selection modes is to call getSelectionDistance(). That always refers to vertices, whatever mode you're in.
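
    Roughly, it boils down to something like this (just a sketch; controller and mesh stand for whatever MeshEditController and Mesh object you already have at hand):

    int dist[] = controller.getSelectionDistance();   // one entry per vertex; 0 means selected
    MeshVertex vert[] = mesh.getVertices();
    BoundingBox bounds = null;
    for (int i = 0; i < vert.length; i++)
      if (dist[i] == 0)
      {
        Vec3 v = vert[i].r;
        BoundingBox b = new BoundingBox(v.x, v.x, v.y, v.y, v.z, v.z);
        bounds = (bounds == null ? b : bounds.merge(b));
      }
    // bounds == null means nothing is selected; otherwise bounds.getCenter()
    // is the point to center the view on.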

    For the UI of #1, that depends how you want it to work. A mouse listener will let you detect the click and respond to it, but you probably also want to prevent the current tool from responding to it. I'm not sure if there's really a clean, universal way to handle that. You could do it with a particular EditingTool (it would just respond to the click differently), or a particular ViewerCanvas subclass (it would implement mousePressed() differently). But something that works with arbitrary EditingTools and arbitrary ViewerCanvases is harder.

    A different option is to do it with the keyboard instead of a mouse button. For example, pressing F1 centers the view on whatever the mouse is pointing at. Then you could just create a keyboard shortcut in the standard way, and its script would call your routine. In fact, you could probably write the entire thing in the shortcut script.

    Peter

     
  • Pete
    2014-08-24

    Thanks.

    2) I got the centerToSelection working (well, after a few trials with transformation matrices).

    1) I also got a kind of a prototype for the point-to-center function... the code is in the rotation tool: when you press and release the center button without dragging, it moves the view to that point. Only that for now it does not recognize objects or their surfaces as it should.

    In the editing tools there are a lot of functions to recognize a pointed vertex, line or face -- I haven't really studied those... And I'm not sure where to look, but I think one approach would be to get a rendering mesh for the pointed object and find the nearest-to-camera triangle... and then check where on that triangle the mouse pointed. I just get the feeling that the pieces I could use as examples are scattered in quite a few different places? So, any hints where I should look?

    3) I really would like to get the automatic moves (like change of orientation, fit to selection...) animated.

    I built a prototype of an animation engine, but it is not doing what I was hoping for.... all it did was snap the view back to its initial orientation in one step. And it kept doing that even when not asked to, even with the timer cancelled... (Of course it could still be that I have used a wrong variable somewhere, but that did not seem to be all that was wrong.)

    I built that one out of java.util.Timer. I wonder if I should have used the swing timer instead? The way I think of it, I'd need only one animation method, which would look something like:

    animate(ViewerCanvas view, CoordinateSystem start, CoordinateSystem end)
    // and probably a few more parameters
    

    The current move methods are already taking care of the start and end states -- now they'd only need to send that information to the animation engine. It does not sound so difficult, but it seems that I'll need help to get it right.

     
  • Pete
    2014-08-28

    Ok -- Did some homework. Should go with the swing timer.
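
    Roughly the shape I'm aiming for (only a sketch; the step count and the linear blend of the direction vectors are placeholders -- but at least javax.swing.Timer runs its callback on the event dispatch thread, which my java.util.Timer prototype did not):

    // One run of the animation engine: step the camera from one coordinate
    // system to the other over a fixed number of timer ticks.
    void animate(final ViewerCanvas view, final CoordinateSystem start, final CoordinateSystem end)
    {
        final int STEPS = 20;                              // placeholder value
        final javax.swing.Timer timer = new javax.swing.Timer(25, null);
        timer.addActionListener(new java.awt.event.ActionListener() {
            int step = 0;
            public void actionPerformed(java.awt.event.ActionEvent ev)
            {
                double f = (double) ++step/STEPS;
                CoordinateSystem c = new CoordinateSystem();
                c.setOrigin(start.getOrigin().times(1.0-f).plus(end.getOrigin().times(f)));
                // Naive blend of the z and up directions; good enough for small turns.
                // A proper version would orthonormalize or interpolate the rotation.
                c.setOrientation(start.getZDirection().times(1.0-f).plus(end.getZDirection().times(f)),
                                 start.getUpDirection().times(1.0-f).plus(end.getUpDirection().times(f)));
                view.getCamera().setCameraCoordinates(c);
                view.repaint();                            // or whatever refresh the plugin already uses
                if (step >= STEPS)
                    timer.stop();
            }
        });
        timer.start();
    }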

    About #1: How about sending a ray from the mouse cursor tip? Shouldn't that work kind of naturally?

     
  • Peter Eastman
    2014-08-29

    Is your goal to recognize the specific point, edge, or face that was clicked on? Or just to get a location in 3D space of the surface point under the cursor?

    If the former, this is generally done by having an EditingTool return HANDLE_CLICKS from whichClicks() and then implementing mouseClickedOnHandle(). For the latter, the method you described of looking through faces in the rendering mesh would work well.
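
    A sketch of the latter, in case it helps (assuming you already have the object's RenderingMesh and have converted the ray from the click into the object's local coordinates, since that is the space the mesh vertices are in; the method name is made up):

    // Walk the rendering mesh and return the nearest point where the ray hits a
    // triangle, or null if it misses. orig/dir describe the ray in mesh coordinates.
    static Vec3 findSurfacePoint(RenderingMesh mesh, Vec3 orig, Vec3 dir)
    {
        double nearest = Double.MAX_VALUE;
        Vec3 hit = null;
        for (RenderingTriangle tri : mesh.triangle)
        {
            Vec3 v0 = mesh.vert[tri.v1], v1 = mesh.vert[tri.v2], v2 = mesh.vert[tri.v3];
            // Standard Moller-Trumbore ray/triangle intersection.
            Vec3 e1 = v1.minus(v0), e2 = v2.minus(v0);
            Vec3 p = dir.cross(e2);
            double det = e1.dot(p);
            if (Math.abs(det) < 1e-12)
                continue;                               // ray is parallel to this triangle
            Vec3 s = orig.minus(v0);
            double u = s.dot(p)/det;
            if (u < 0.0 || u > 1.0)
                continue;
            Vec3 q = s.cross(e1);
            double v = dir.dot(q)/det;
            if (v < 0.0 || u+v > 1.0)
                continue;
            double t = e2.dot(q)/det;
            if (t > 0.0 && t < nearest)
            {
                nearest = t;
                hit = orig.plus(dir.times(t));          // the point on the surface
            }
        }
        return hit;
    }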

    Peter