This was done in the middle of September. I just played with my TV tuner
and videodev. My conclusions: it is possible to use mapped device data
as image data; blending is possible, but at 100% CPU load. On my PIII-650
I can do only 18-25 fps, and that's at 64x64 without alpha blending.
Somehow the Moxi media center crew did their job much better (I heard about
it in November). I can't say much about it; I even wonder how they put
Macromedia Flash on top of the kernel. Ask them :)
> Is there any support (or plans) for video in evas2? An interface that maps to Xv or software rendering or so...
> Yves De Muyter
From: Carsten Haitzler (The Rasterman) <raster@ra...> - 2002-12-27 01:20:38
On Tue, 24 Dec 2002 18:15:11 +0100 Yves De Muyter <yves@...> wrote:
> Is there any support (or plans) for video in evas2? An interface that maps to
> Xv or software rendering or so...
the short answer:
from the evas documentation (evas.c.in):
@todo (1.4) Add video/movie/animation objects
So the answer is...
yes I plan on adding these in: supporting mpeg, avi (divx and others), quicktime,
mpeg2, mpeg4 etc. video (and custom codec support too, somehow). I want
to make the engine as powerful as possible so the video engines can work on all
display targets, and on those that support acceleration of some sort, use it
(eg: xvideo). This isn't a small task by any means - ESPECIALLY to make it
powerful and fast.
But the end result, I hope, will be a simple api to play a movie (after you have
set up a canvas with video objects etc...):
sleep_for = evas_render_next_frame_sleep_get(evas);
the calls will tell you if you have time to go to sleep before having to decode
more frames. internally evas would decode and buffer a frame or 2 ahead. i am
wondering what to do with the audio though... and if parts of a video object are
covered by objects with alpha channels, i'd have to decode some parts of the
video in software and some in hardware to do the alpha blends. also, xvideo
normally only supports 1 accelerated video surface, so the 2nd and 3rd would
have to be software. i'm hoping to have good support in the GL engine for YUV
textures for this kind of stuff...
anyway.. it's a minefield.. but i plan on treading it at some point...
--------------- Codito, ergo sum - "I code, therefore I am" --------------------
The Rasterman (Carsten Haitzler) raster@...
Mobile Phone: +61 (0)413 451 899 Home Phone: 02 9698 8615