RE: [Algorithms] Re: Motion tracking from a live video source
From: Willem de B. <wd...@mi...> - 2005-10-12 08:07:52
This problem is called scene reconstruction in computer vision, and there is a lot of research on it out there. The method you mention coincides with extracting feature vectors for two frames and using the disparity between matching vectors to deduce an (affine) view transformation.

Finding the view transform, given a set of matching feature vectors between images, is fairly straightforward. I would suggest googling for RANSAC, SIFT specifically, and epipolar geometry. A good first source would be:

http://www.cs.ubc.ca/~lowe/keypoints/

Extracting feature vectors that are partially invariant to changes in illumination, as well as to affine transformations, is quite hard. SIFT seems to be very good at it, but I'm not sure if it is feasible in real time.

Cheers,

---
Willem H. de Boer
Homepage: http://www.whdeboer.com

________________________________
From: gda...@li... [mailto:gda...@li...] On Behalf Of Ken Noland
Sent: 11 October 2005 20:28
To: gda...@li...
Subject: Re: [Algorithms] Re: Motion tracking from a live video source

Actually, funny you should ask this, because I've been debating writing a small demo that uses the camera as an input to orient the perspective in a 3D game. Kind of like a big trackball, using the camera to point and shoot.

The idea I was toying with, and starting to seriously consider, was to analyze the frame to determine a grid of distinctive points, say 16x16 points with their color gradients stored, possibly with some edge detection in there to sample the most distinctive points in the 2D image. On the next frame it would search for those 16x16 points, attempt to match them against the current frame, and that *should* spit out a 3D coordinate change.

However, there are certain things that would disrupt that really basic algorithm. The first of which is moving objects in the scene.
That is why I decided to attach a weighting value to each sampled point: every frame that re-finds that particular point with a high degree of accuracy increments its weight, otherwise the weight is decremented.

Has anyone else experimented with this?

-Krad

On 10/11/05, Juhana Sadeharju <ko...@ni...> wrote:
>From: Aras Pranckevicius <ne...@gm...>
>
[ Reposting the coded message in plain ascii. Juhana ]

> At the moment, I'm trying to figure out a couple of things relating to
> webcam motion tracking.

If you can somehow segment "foreground" (moving object) and "background" (the rest), then you've got a black&white image. From there, quite many approaches exist; quite easy is MHI (motion history images): basically, just fade out the foreground along time, so you have fading "motion trails", then compute image gradients of that. Of course, all this assumes you've got background extraction working, which is hard :)

Anyway, I suggest taking a look at the OpenCV library; it has lots of vision-related algorithms already implemented:

http://sourceforge.net/projects/opencvlibrary

It won't solve your problem immediately, but at least it can provide some building blocks.

--
Aras 'NeARAZ' Pranckevicius
http://nesnausk.org/nearaz | http://nearaz.blogspot.com
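[Editor's note: to make the RANSAC idea mentioned above concrete, here is a minimal pure-Python sketch. It is not from the thread; it assumes matched point pairs are already available (e.g. from SIFT matching) and, for simplicity, fits only a 2D translation rather than a full affine or epipolar model. The function name `ransac_translation` and all parameters are illustrative.]

```python
import random

def ransac_translation(matches, iters=200, tol=2.0):
    """Estimate a 2D translation (dx, dy) between matched point pairs
    using RANSAC: repeatedly pick one match at random, hypothesise the
    translation it implies, count how many other matches agree within
    `tol` pixels, and keep the hypothesis with the largest consensus.

    `matches` is a list of ((x1, y1), (x2, y2)) pairs; outliers (e.g.
    points on independently moving objects) are rejected automatically.
    """
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = random.choice(matches)
        dx, dy = x2 - x1, y2 - y1
        inliers = [m for m in matches
                   if abs((m[1][0] - m[0][0]) - dx) < tol
                   and abs((m[1][1] - m[0][1]) - dy) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (dx, dy), inliers
    # Refine the winning hypothesis by averaging over all its inliers.
    if best_inliers:
        dx = sum(b[0] - a[0] for a, b in best_inliers) / len(best_inliers)
        dy = sum(b[1] - a[1] for a, b in best_inliers) / len(best_inliers)
        best_model = (dx, dy)
    return best_model, best_inliers
```

The same loop structure carries over to a real affine or homography model: sample the minimal number of matches the model needs (three or four instead of one), fit, count inliers, keep the best. This consensus step is also what addresses Ken's moving-object concern, since points on independently moving objects simply end up as outliers.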