Hi,
I totally agree with your comments on the subject, and I apologize for my negligence; I am only a newcomer to CAD software design. Honestly, I felt that a multi-touch user interface would be a very good interface for users (while also agreeing that nothing in the near future will replace the command line). It was just a thought that I could integrate this with BRL-CAD, so I started pondering the subject, hoping to find a mentor who could help me understand CAD software much better.

First and foremost, I wanted to develop a framework (which could be integrated into MGED to work on multi-touch-enabled surfaces) in which models already built in BRL-CAD could be displayed and manipulated on a multi-touch surface.
For this, after going through the BRL-CAD tutorial series (Volume I), I started with the libdm (display manager) and librt libraries; the first step was to build them (libmultispectral and liboptical might also be of some use). I am now working through them, and through the functions that would help implement the functionality mentioned below, for which a person with insight into the code base would be very helpful.
Going through the second volume, I worked with MGED (please pardon me if I got something wrong; I am just a newbie in the field). Functions like "zoom", and viewing different angles and azimuth-elevation combinations with "ae", can be performed very naturally and smoothly with both hands, and similarly for translation with the SHIFT key and rotation with CTRL.
Another feature I considered is Boolean operations on models. In the proposed user interface, a user could take two models and perform the desired operation (union, subtraction, or intersection) by moving one model against the other, with the result displayed, say, in wireframe to help visualize the outcome.
Next, we can discuss and come up with gestures that can help in designing and rapid prototyping with multi-touch. Very fast prototyping could be done with gestures: say you define that drawing a small circle results in a sphere, and similarly for the cube, cylinder, and other primitives; then you can perform the intersection/union/subtraction by moving them with your hand, watching the result at the same time. This will definitely not be very precise, for which the command line is best, but while interacting with a model, or for quick discussion sessions, it could prove helpful.
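As a purely illustrative aside (a Python sketch under assumed inputs, not BRL-CAD code), the circle-to-sphere idea could start with a crude stroke classifier: a stroke counts as "roughly a circle" if its points are roughly equidistant from their centroid. The point format and tolerance here are assumptions:

```python
import math

def is_circle(points, tol=0.15):
    """Crude stroke classifier: are the points roughly equidistant
    from their centroid?  `tol` is the allowed relative deviation."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    mean = sum(radii) / len(radii)
    return max(abs(r - mean) for r in radii) / mean < tol

# A 12-point unit circle passes; a straight stroke does not.
circle = [(math.cos(2 * math.pi * k / 12), math.sin(2 * math.pi * k / 12))
          for k in range(12)]
```

A stroke that passes such a test could then be translated into the corresponding primitive-creation command, leaving precise dimensions to the command line.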

But it seems that this is very much a rejected proposal. Admittedly, it would only improve the user experience and would not touch the deep core of the CAD software in terms of engineering features and robustness.

Thanks for all the help. I will try to work on such an integration of CAD and multi-touch in the near future, and I hope that such a user interface will definitely come up eventually.

On Mon, Mar 23, 2009 at 7:13 PM, Christopher Sean Morrison <brlcad@mac.com> wrote:

Ashish,

Not to get pedantic, but Google Sketchup is *not* CAD software, although it is 3D modeling software (but not solid modeling software).  At least not by the most important criteria that distinguish a 3D modeling package from a CAD system in terms of engineering features, robustness, fidelity, etc., it's not CAD software.  It's about as apt a comparison as saying 3D Studio Max is CAD.  While it could potentially be used for some CAD purposes, it's not a CAD software system.  Sketchup is in a similar category.

That said, I realize your point is more to emphasize that Sketchup has some basic gesture support.  I don't think that much was ever in dispute.  My original comment, though, was with regards to being an effective and productive means of CAD modeling.  Even AutoCAD's work is still very immature and has yet to show much more than being "a neat demo".  If you have references to the contrary, please do share them.  Also, be specific about one or two of the gestures you have in mind and how it would correspond with a given action.

That's not to say it isn't an interesting project, but it would still be nice to see a more in-depth review of how this might actually be an effective interface, beyond being "a cool thing for CAD design systems" (sic).  The proposed plan sounds a whole lot to me like an excuse to continue a project you worked on last year, with still very little relation to BRL-CAD.  What is this "BRL-CAD scene manager" you speak of?  I certainly know what that 'could' translate to in our system, but I don't think you do -- we certainly don't have a traditional scene management system like the one it sounds like you are referring to.

Also, what do you mean by "to develop a CAD multi-touch application"?  If you mean a new application, that sounds like it's even further detached from BRL-CAD.  That won't work.  You need to propose how what you want to do would directly integrate with BRL-CAD.  Not high-level conceptual, you need to be a bit more specific.

Thanks again for the reply.  I look forward to seeing what you come up with.

Cheers!
Sean

 p.s. Please reply to all next time so that the developer mailing list is included.




On Mar 21, 2009, at 5:35 PM, Ashish Kumar Rai wrote:

Hi,

Thanks for your enthusiastic reply. I was working on the basic blocks of the framework so that I could come up with a concrete idea. Google Sketchup is a CAD software where 3D structures are built from 2D shapes by pulling them, etc. Such gestures would be very useful. Similarly, panning, rotation, and movement through a 3D structure will be very easy with MT gestures, just as we have seen in the MT-based Google Earth application. A lot of work is being done in this field and appearing in the news, notably by AutoCAD (http://labs.autodesk.com/technologies/multitouch/).

Here is the proposed plan:

The project envisages the development of a package to be used in conjunction with Tbeta/Touchlib and BRL-CAD to build a CAD multi-touch application for interacting with 3D models. Tbeta/Touchlib (from NUIGroup) will encapsulate the hardware of the multi-touch setup and will send touch-event data in the TUIO protocol through TCP/UDP packets. The functionality provided by the package begins at this step.
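For a feel of what that touch-event data looks like on the wire (a minimal toy sketch, not the real Tbeta/Touchlib code and not a complete TUIO implementation), a TUIO packet is an OSC message: a NUL-padded address string, a type-tag string, then big-endian arguments. A toy encoder/decoder pair:

```python
import struct

def _osc_string(s):
    """OSC strings are NUL-terminated and padded to a multiple of 4 bytes."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * ((4 - len(b) % 4) % 4)

def encode_osc(address, args):
    """Encode one OSC message (supports int, float, and str arguments)."""
    tags, body = ",", b""
    for a in args:
        if isinstance(a, int):
            tags += "i"; body += struct.pack(">i", a)
        elif isinstance(a, float):
            tags += "f"; body += struct.pack(">f", a)
        else:
            tags += "s"; body += _osc_string(a)
    return _osc_string(address) + _osc_string(tags) + body

def decode_osc(packet):
    """Decode one OSC message back into (address, args)."""
    def read_string(i):
        end = packet.index(b"\x00", i)
        s = packet[i:end].decode("ascii")
        i = end + 1
        return s, i + ((4 - i % 4) % 4)
    addr, i = read_string(0)
    tags, i = read_string(i)
    args = []
    for t in tags[1:]:  # skip the leading ','
        if t == "i":
            args.append(struct.unpack_from(">i", packet, i)[0]); i += 4
        elif t == "f":
            args.append(struct.unpack_from(">f", packet, i)[0]); i += 4
        elif t == "s":
            s, i = read_string(i)
            args.append(s)
    return addr, args
```

A /tuio/2Dcur "set" message, for example, would round-trip through such a pair; the package's first job is simply reading these messages off the UDP socket.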



In the implementation, the touch events will be received by a particular application, relayed by an MPX-type server, and then patterned according to the scene (which is divided into regions of various sizes, so that only the region in which a touch event occurs needs to be updated). The powerful collision detection of BRL-CAD will be used for this patterning.
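As a stand-in for that region patterning (a toy sketch; the real implementation would presumably lean on BRL-CAD's own spatial structures rather than the fixed grid assumed here), touch coordinates can be mapped to a region index so that only touched regions are marked for update:

```python
GRID = 4  # assumption: scene split into a fixed 4x4 grid of regions

def region_for(x, y):
    """Map a normalized TUIO coordinate (0..1) to a (row, col) region."""
    col = min(int(x * GRID), GRID - 1)
    row = min(int(y * GRID), GRID - 1)
    return row, col

dirty = set()  # regions that need redrawing

def on_touch(x, y):
    dirty.add(region_for(x, y))  # only these regions get updated

on_touch(0.1, 0.1)
on_touch(0.9, 0.2)  # two touches in different regions -> two dirty regions
```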

The MPX-based server thus developed will help multiple people collaborate on a design at the same time, whether at MT surface(s) or on systems with multiple-mouse support, which I hope will be a cool thing for CAD design systems.

According to the corresponding region, the touch-event data will be grouped and sent to the gesture recognition module in a particular format, which will then, as the application requires, pass commands to the scene manager of BRL-CAD to update the required region(s).
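The last hop, from recognized gesture to scene-manager command, might look like the sketch below. "zoom" and "ae" are real MGED commands mentioned earlier in this thread, but the gesture names, parameter dictionary, and mapping are entirely hypothetical:

```python
def gesture_to_command(gesture, params):
    """Hypothetical mapping from a recognized gesture to an MGED command."""
    if gesture == "pinch":
        return "zoom {:.2f}".format(params["scale"])
    if gesture == "two_finger_rotate":
        return "ae {} {}".format(params["azimuth"], params["elevation"])
    return None  # unrecognized gestures are ignored
```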


The whole work can be summarized in the following four sequential stages:

1. Design and testing of an ANN-based gesture recognition module, and its integration with OSCPack and the patterning.
2. Integration of the gesture recognition with the BRL-CAD scene manager.
3. Development of some basic MT primitives for BRL-CAD that can be easily used for model development.
4. Gesture recognition for designing, editing, and model building from the MT primitives.


Eagerly awaiting your comments and suggestions!

On Thu, Mar 19, 2009 at 11:42 PM, Christopher Sean Morrison <brlcad@mac.com> wrote:

Ashish,

Thanks for the message and interest in working with BRL-CAD!  Your idea of working on a touch screen interface for CAD is interesting and something several of us have been following the research literature on for many years now.  It's a very enticing concept but I have several questions about what you'd foresee being the near term gains and what you have to say about some of the criticisms that MTI's have received.


>In this regard I feel that we can have a head start in developing
>multi-touch capabilities in CAD software.
>I do feel that multi-touch (MT) technology is still in its infancy, but it has
>started to show its effect through the phenomenal success of the iPhone, and
>various big-shot companies are pouring huge money into developing in-house
>multi-touch software. Hence I would like to propose starting with the
>development of an MT-based user interface for CAD systems in open source.

You do identify one potential problem there.  MT is in its infancy, even with efforts like Surface and various phones that now support MT.  What's not been shown yet, though, is the efficacy of MT for non-casual interactions.

>It is much more intuitive and friendly way for an architect or a designer to
>develop and show models with his both hands by directly touching the models
>instead of using the mouse as a pointer device which is more abstract.

Intuitive and friendly, but not shown to necessarily be more (or less) productive either as far as I know for "real work".  It makes a great demo and probably even makes for a great visualization interface, but would you want to use it for actual architecting or designing?  Maybe.  Therein, though, is a dilemma in terms of defining a project that can be scoped with specific goals. :-)  How do you see that working?

Similarly, the various multicontact gestures for 2D imagery is easy enough but how would you go about extending that to 3D?  Is there any research you'd be using to back up a given direction?

>In the first phase I would like to develop applications and libraries to help
>in interacting with already built models in an interactive way on a
>multi-touch device.
>
>In the second phase, I will work on how to develop a user interface so that
>a designer can very easily develop models while also having a command line
>running at the same time to give fast inputs. I agree with the fact that
>it is currently much more convenient and faster to develop a model with the
>command line. It will start by taking inspiration from Google's Sketchup
>but will be much more intuitive and easy to use, and I hope will become much
>different from that.

As is often the case, the least intuitive and least user-friendly interface is often one of the most efficient interfaces.  Ignoring productivity, though, as there are plenty of folks that use operating systems that are entirely less efficient because they think they're easier/familiar, these phases you speak of would need to tie directly in with BRL-CAD.

You need to propose how you'd enhance BRL-CAD, not how you'd make something that could then be used to enhance BRL-CAD.  How does your project fit in with our tools and services specifically?  If it's some general framework that you want to work on that could just as easily be completely independent of BRL-CAD, then that would be more like an independent study project that is outside the realm of GSoC.

>own a MT surface. The project has been highly popular and is available for
>both Windows and Linux. This project was my GSoC project last summer, which I
>successfully completed under the mentorship of NUIGroup (Pawel Solyga).

Glad to hear that.  Paweł is a good guy with a lot of good insights to share.

>I have working experience with MGED and Archer and have already
>started looking into the code base and the libraries of BRL-CAD.

Awesome.  The more you can tie your work into what we already have and do, the better off your proposal will come across as being directly beneficial and applicable to our community at large.

In any regard, thanks again for the interest and I look forward to hearing more about your project and your proposal.

Cheers!
Sean





--
Cheers,

Ashish Kumar Rai
Electronics Engineering Department,
IT-BHU




--
Cheers,

Ashish Kumar Rai
Electronics Engineering Department,
IT-BHU