From: Ryan A. <rya...@gm...> - 2011-03-31 13:47:58
iOS will recognize the different gestures for you; you don't need to implement any timing functions to determine whether a gesture has happened or not. These are the gestures that you can look for:

UITapGestureRecognizer - You can look for multiple taps too.

UIPinchGestureRecognizer - I'd assume that this also handles the opposite "pull" (spread) gesture.

UIRotationGestureRecognizer - "When the user moves the fingers opposite each other in a circular motion, the underlying view should rotate in a corresponding direction and speed. Rotation is a continuous gesture. It begins when two touches have moved enough to be considered a rotation. The gesture changes when a finger moves while the two fingers are down. It ends when both fingers have lifted. At each stage in the gesture, the gesture recognizer sends its action message."

UISwipeGestureRecognizer - "Looks for swiping gestures in one or more directions. A swipe is a discrete gesture, and thus the associated action message is sent only once per gesture. Swipes can be slow or fast. A slow swipe requires high directional precision but a small distance; a fast swipe requires low directional precision but a large distance."

UIPanGestureRecognizer - You can pan with one finger or two fingers, and I believe you can recognize gestures for either. "Clients of this class can, in their action methods, query the UIPanGestureRecognizer object for the current translation of the gesture (translationInView:) and the velocity of the translation (velocityInView:). They can specify the view whose coordinate system should be used for the translation and velocity values."

UILongPressGestureRecognizer - "The user must press one or more fingers on a view for at least a specified period for the action message to be sent. In addition, the fingers may move only a specified distance for the gesture to be recognized; if they move beyond this limit the gesture fails."
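Wiring one of these up takes very little code. A minimal sketch (written in Swift for brevity; the Objective-C calls are the same) - the controller class and handler names here are my own invention, only the recognizer classes and their properties come from UIKit:

```swift
import UIKit

// Sketch: attach a pinch and a swipe recognizer to a view controller's view.
class GestureDemoViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        let pinch = UIPinchGestureRecognizer(target: self,
                                             action: #selector(handlePinch(_:)))
        view.addGestureRecognizer(pinch)

        let swipe = UISwipeGestureRecognizer(target: self,
                                             action: #selector(handleSwipe(_:)))
        swipe.direction = .left  // swipe is discrete: one action message per gesture
        view.addGestureRecognizer(swipe)
    }

    @objc func handlePinch(_ sender: UIPinchGestureRecognizer) {
        // Pinch is continuous: the action fires repeatedly as the state moves
        // through .began / .changed / .ended.
        // scale > 1 means the fingers spread apart, scale < 1 means they pinched in.
        print("pinch scale: \(sender.scale)")
    }

    @objc func handleSwipe(_ sender: UISwipeGestureRecognizer) {
        print("swiped left")
    }
}
```

So the OS does all the timing and path recognition; you just get the action messages.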
On Wed, Mar 30, 2011 at 1:21 PM, Dave Joubert <dav...@go...> wrote:
> On 30 March 2011 17:06, doug sanden <hig...@ho...> wrote:
> > Or is there a way to accommodate both? For example, is the goal to be able
> > to gesture (pinch to zoom in/out) and still be able to also touch two
> > touchsensors simultaneously when you mean to?
> >
> > I think the convention with a single mouse in freewrl - and other web3d
> > browsers - is when you aren't over a sensor node, then mouse action falls
> > through to navigation action.
>
> Yes, and I am sure it will cause us grief in the future.
>
> To add to Doug's questions:
>
> From reading the article in the original link, I got the impression that:
>
> A) Touch events and gestures are indistinguishable initially.
>
> B) The gesture will arrive 'some time later', once enough touch
> events have arrived into the OS for it to fire a gesture event.
> (Say you put two fingers down, and then sloooooowly twisted or pinched
> them. Would this be registered as a gesture?)
>
> So, my first question is: is the above true? If it is, I suppose
> one could ignore the mouse events (i.e. not act on them) in case you get
> a gesture event soon afterwards.
>
> I suppose then a 'gesture' can be defined as:
> an initial touchdown followed immediately (10th sec? 50th? 100th?) by
> some movement whose path is recognisable,
> i.e. 'towards each other on the diagonal'
>
> Dave
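Dave's proposed definition ("a touchdown followed immediately by movement along a recognisable path") could be sketched as a small classifier. Everything below is hypothetical - the type names and the thresholds (0.1 s delay, 10-point change) are made up for illustration, not anything iOS actually uses:

```swift
import Foundation

// Hypothetical two-finger gesture classifier, following Dave's definition.
struct Touch {
    var x: Double
    var y: Double
}

func distance(_ a: Touch, _ b: Touch) -> Double {
    let dx = a.x - b.x
    let dy = a.y - b.y
    return (dx * dx + dy * dy).squareRoot()
}

enum TwoFingerGesture {
    case pinch   // fingers moved towards each other
    case spread  // fingers moved apart
    case none    // too slow, or not enough movement, to call it a gesture
}

// elapsed: seconds between touchdown and this movement sample.
// maxDelay and minChange are the arbitrary thresholds (10th sec? 50th?).
func classify(start: (Touch, Touch), end: (Touch, Touch),
              elapsed: Double, maxDelay: Double = 0.1,
              minChange: Double = 10.0) -> TwoFingerGesture {
    guard elapsed <= maxDelay else { return .none }
    let change = distance(end.0, end.1) - distance(start.0, start.1)
    if change <= -minChange { return .pinch }
    if change >= minChange { return .spread }
    return .none
}
```

With something like this, touches that sit still (or move too slowly) fall through as .none - i.e. they could be passed on to the sensor-node / navigation handling instead.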