From: <bar...@t-...> - 2003-06-14 14:51:16
|
hallo there,

still one week to go with my diploma thesis, but i am beginning to wonder what xine's future looks like. just checked and updated the todo list and found there are only very few items left. could all developers please check the TODO list again and add/correct/remove items as needed?

next step would be to assign the remaining required tasks to individual developers. stefan and i would take care of the end-user documentation (as we're currently writing an article for a german linux user magazine anyway).

next question: what would a time frame for all this look like? it has already been mentioned that more testing is needed - anyone willing to take care of that (e.g. write up some test cases, test streams, a test protocol, try to automate them...)? maybe an rc series of releases is needed here after the beta series?

last item: maybe it is time to start thinking about xine 2. ideas are needed here, e.g. things like:

- big, radical API cleanup
- compatibility and cooperation: can xine share interfaces, plugins etc. with other media player projects?
- xine engine for embedded systems (small memory footprint)
- ...

thanks,

guenter

--
How come wrong numbers are never busy? |
From: James S. <jst...@us...> - 2003-06-24 21:28:36
|
Hi,

On Saturday 14 Jun 2003 15:50, Guenter Bartsch wrote:
> last item: maybe it is time to start thinking about xine 2. ideas are
> needed here

Two things that I would like to see in xine 2:

1) Separation of lots of audio and video processing functionality into post plugins:
- Separate plugins for software scaling, colour space conversion, postprocessing, deinterlacing, cropping, audio resampling, software volume etc. If it's convenient/efficient, plugins can do more than one thing at the same time.
- Decoders set flags suggesting what needs to be done, e.g. the ffmpeg decoder might suggest postprocessing and cropping.
- Output drivers advertise what features are supported natively and what limitations there are, e.g. scaling and maximum frame size for Xv.
- Front end informs the engine what processing the user requests, e.g. audio equaliser.
- The xine engine automatically inserts a chain of plugins for carrying out the necessary processing.

Doing this would:
- Simplify decoder and output plugins, and therefore make developing new ones much easier
- Reduce common code
- Increase the flexibility of the engine

The API should also allow the automatic insertion to be overridden/controlled for applications such as video processing.

2) A complete overhaul of the config options. xine offers a lot of configuration options; while I definitely think this is a good thing, they really need to be organised in a more user-friendly manner:
- Use more logical categories such as Network, Devices, Quality, Visualisations, Languages etc. Working out what these should be is a very difficult task in itself.
- Get rid of the "configuration experience level". I suspect that most people either have this on beginner or expert/master of the known universe. It would be simpler just to have the more commonly used options available in each section and then an "advanced" button/tab to access the more obscure/little-used options. Again, splitting the options in this way is not at all easy.
- Allow grouping of options, e.g. for vidix there is a slider for each of the rgb components of the colour key; it should be possible to specify that they belong together.
- Allow dependencies between options: if something isn't selected then certain options should be disabled, e.g. osd_timeout should be disabled when osd_enabled isn't selected.

It would also be nice to have some ordering of options, though I think this isn't possible with the dynamic nature of plugins providing configuration options.

Any thoughts?

James. |
From: Miguel F. <mi...@ce...> - 2003-06-24 22:23:58
|
Hi James,

On Tue, 2003-06-24 at 18:24, James Stembridge wrote:
> On Saturday 14 Jun 2003 15:50, Guenter Bartsch wrote:
> > last item: maybe it is time to start thinking about xine 2. ideas are
> > needed here
>
> Two things that I would like to see in xine 2:
>
> 1) Separation of lots of audio and video processing functionality into post
> plugins:
> - Separate plugins for software scaling, colour space conversion,
>   postprocessing, deinterlacing, cropping, audio resampling, software
>   volume etc. If it's convenient/efficient plugins can do more than one
>   thing at the same time.

i fully agree with you about that, i just don't think we need to wait for xine 2 for it. i have started on a new deinterlacer plugin (based on tvtime); now i'm planning to check what can be done on the image post processing front along the same lines.

> - Decoders set flags suggesting what needs to be done, e.g. ffmpeg decoder
>   might suggest postprocessing and cropping.

again, this reminds me a lot of the deinterlace case, where i added flags so it can be enabled/disabled automatically. unfortunately it seems (i'm not sure) that ffmpeg doesn't export everything we need (such as whether interlaced dct is used).

> - Output drivers advertise what features are supported natively and what
>   limitations there are, e.g. scaling and maximum frame size for Xv.

ok. (do we need any api change for that one?)

> - Front end informs engine what processing the user requests, e.g. audio
>   equaliser

we should add a post plugin for the equalizer. the front end can already build a chain of processing depending on what the user requests, right?

> - xine engine automatically inserts a chain of plugins for carrying out the
>   necessary processing.

sounds interesting... :)

> 2) A complete overhaul of the config options. xine offers a lot of
> configuration options, while I definitely think this is a good thing they
> really need to be organised in a more user friendly manner:
> - Use more logical categories such as Network, Devices, Quality,
>   Visualisations, Languages etc. Working out what these should be is a very
>   difficult task in itself.
> - Get rid of the "configuration experience level". I suspect that most
>   people either have this on beginner or expert/master of the known
>   universe. It would be simpler just to have the more commonly used options
>   available in the section and then an "advanced" button/tab to access the
>   more obscure/little used options. Again splitting the options in this way
>   is not at all easy.
> - Allow grouping of options, e.g. for vidix there is a slider for
>   each of the rgb components of the colour key, it should be possible to
>   specify that they belong together.
> - Allow dependencies between options, if something isn't selected then
>   certain options should be disabled, e.g. osd_timeout should be disabled
>   when osd_enabled isn't selected.

very good ideas (imho)! :)

so far, i haven't seen any serious problems with the current architecture that would justify a xine 2. that said, i believe this discussion is very important for thinking about future plans and requested features, and whether these are possible with xine 1 or not. imho the most important things we need now are improving the frontends, usability, desktop integration, and adding new features in the form of plugins (post plugins?). care should be taken not to create a new branch of the development (maybe api breakage) without good reasons for it.

regards,

Miguel |
From: <bar...@t-...> - 2003-06-26 19:36:22
|
hallo miguel,

> > Two things that I would like to see in xine 2:
> >
> > 1) Separation of lots of audio and video processing functionality into post
> > plugins:
> > - Separate plugins for software scaling, colour space conversion,
> >   postprocessing, deinterlacing, cropping, audio resampling, software
> >   volume etc. If it's convenient/efficient plugins can do more than one
> >   thing at the same time.
>
> i fully agree with you about that, i just don't think we need to wait for
> xine 2 for it. I have started with a new deinterlacer plugin (based on
> tvtime), now i'm planning to check what can be done on the image post
> processing front within the same idea.

oki, i also see that "xine 2" might be a bad name for future planning. of course i thought about collecting ideas for the far future as well as things for the near future under this topic. i will collect the ideas under a "xine's future" section in the TODO file.

> > - xine engine automatically inserts a chain of plugins for carrying out the
> >   necessary processing.
>
> sounds interesting... :)

i think this is the important point here: if post plugins are to be used for core media player functionality, the api must ensure they are easy to use for playback-only frontends. xine's api is, at least in some parts, way too complicated already, so let's make sure we do not force frontend developers to understand yet another concept they're not interested in.

> imho the most important things we need now are improving the frontends,
> usability,

this is also at the very top of my todo list. a lot of work has to be done here.

> desktop integration and adding new features in the form of
> plugins (post plugins?). care should be taken to not create a new branch
> of the development (maybe api breakage) without good reasons for it.

cannot agree here. while the public api should be kept stable as far as possible, creating new branches to explore the future of xine has always been supported in the project and i think we should keep it that way. whether those branches are called "xine-2" is another question.

guenter

--
The Official MBA Handbook on business cards: Avoid overly pretentious job titles such as "Lord of the Realm, Defender of the Faith, Emperor of India" or "Director of Corporate Planning." |
From: Miguel F. <mi...@ce...> - 2003-06-26 21:12:12
|
On Thu, 2003-06-26 at 16:26, Guenter Bartsch wrote:
> > desktop integration and adding new features in the form of
> > plugins (post plugins?). care should be taken to not create a new branch
> > of the development (maybe api breakage) without good reasons for it.
>
> cannot agree here. while the public api should be kept as far as
> possible, creating new branches to explore the future of xine has always
> been supported in the project and i think we should keep it that way. if
> those branches are called "xine-2" is another question.

maybe i have expressed what i think badly. what i meant is: let's not break things just for the fun of breaking and putting the pieces together again! :) let's make sure we have good reasons when we decide that it is time for radical changes...

regards,

Miguel |
From: Stephen T. <st...@sb...> - 2003-06-25 00:05:08
|
On Tue, 2003-06-24 at 16:24, James Stembridge wrote:
> Hi,
>
> On Saturday 14 Jun 2003 15:50, Guenter Bartsch wrote:
> > last item: maybe it is time to start thinking about xine 2. ideas are
> > needed here
>
> Two things that I would like to see in xine 2:
>
> 1) Separation of lots of audio and video processing functionality into post
> plugins:
> - Separate plugins for software scaling, colour space conversion,
>   postprocessing, deinterlacing, cropping, audio resampling, software
>   volume etc. If it's convenient/efficient plugins can do more than one
>   thing at the same time.

Plugins are fine. Just don't end up with so many plugins that there is no 'auto' setting that detects the best one to use for a new user.

> - Decoders set flags suggesting what needs to be done, e.g. ffmpeg decoder
>   might suggest postprocessing and cropping.

This would help my preview suggestion. I like the idea of objects passing on hints to plugins further along the chain.

> - Output drivers advertise what features are supported natively and what
>   limitations there are, e.g. scaling and maximum frame size for Xv.
> - Front end informs engine what processing the user requests, e.g. audio
>   equaliser
> - xine engine automatically inserts a chain of plugins for carrying out the
>   necessary processing.

Sounds like we are on the same wavelength here. :)

> Doing this would:
> - Simplify decoder and output plugins, and therefore make developing new
>   ones much easier
> - Reduce common code
> - Increase flexibility of the engine
> The api should also allow the automatic insertion to be overridden/controlled
> for applications such as video processing.
>
> 2) A complete overhaul of the config options. xine offers a lot of
> configuration options, while I definitely think this is a good thing they
> really need to be organised in a more user friendly manner:
> - Use more logical categories such as Network, Devices, Quality,
>   Visualisations, Languages etc. Working out what these should be is a very
>   difficult task in itself.
> - Get rid of the "configuration experience level". I suspect that most
>   people either have this on beginner or expert/master of the known
>   universe. It would be simpler just to have the more commonly used options
>   available in the section and then an "advanced" button/tab to access the
>   more obscure/little used options. Again splitting the options in this way
>   is not at all easy.

I agree here. Honestly, looking at half if not more of the configuration options leaves me in the dark as to how best to utilize them. It's easy to screw up a configuration when you're not sure what a feature does.

> - Allow grouping of options, e.g. for vidix there is a slider for
>   each of the rgb components of the colour key, it should be possible to
>   specify that they belong together.
> - Allow dependencies between options, if something isn't selected then
>   certain options should be disabled, e.g. osd_timeout should be disabled
>   when osd_enabled isn't selected.
> It would also be nice to have some ordering of options, though I think this
> isn't possible with the dynamic nature of plugins providing configuration
> options.

Stephen

--
Stephen Torri <st...@sb...> |
From: <bar...@t-...> - 2003-06-26 19:49:33
|
hallo james,

nice to see this discussion is finally taking off. with that week of silence after my first post on the mailing list i was a little worried that xine developers had become completely disillusioned about the future... i think it is very important to have an aim for the near as well as for the farther-away future.

> 1) Separation of lots of audio and video processing functionality into post
> plugins:
> - Separate plugins for software scaling, colour space conversion,
>   postprocessing, deinterlacing, cropping, audio resampling, software
>   volume etc. If it's convenient/efficient plugins can do more than one
>   thing at the same time.

efficiency, especially cache-efficiency, is the key here, so the plugin interface has to be designed very carefully. besides that, no objections - not sure how far the current post plugin infrastructure can handle this already.

> 2) A complete overhaul of the config options. xine offers a lot of
> configuration options, while I definitely think this is a good thing they
> really need to be organised in a more user friendly manner:
> - Use more logical categories such as Network, Devices, Quality,
>   Visualisations, Languages etc. Working out what these should be is a very
>   difficult task in itself.

well, just make a proposal then :) renaming config options shouldn't be too much of a problem; i think it can be done without breaking the api, so this could be done incrementally as well.

> - Get rid of the "configuration experience level". I suspect that most
>   people either have this on beginner or expert/master of the known
>   universe.

why not give this feature a chance? so far, no frontend i know of has implemented this correctly. most frontends do not implement it at all (e.g. gxine), others (e.g. xine-ui) got the concept completely wrong. anyway, i think having those experience levels shouldn't hurt anyone.

guenter

--
The Official MBA Handbook on business cards: Avoid overly pretentious job titles such as "Lord of the Realm, Defender of the Faith, Emperor of India" or "Director of Corporate Planning." |
From: Bastien N. <ha...@ha...> - 2003-06-26 21:04:18
|
On Thu, 2003-06-26 at 20:19, Guenter Bartsch wrote:
> hallo james,
>
> > - Get rid of the "configuration experience level". I suspect that most
> > people either have this on beginner or expert/master of the known
> > universe.
>
> why not give this feature a chance? so far, no frontend i know has
> implemented this correctly. most frontends do not implement it at all
> (e.g. gxine), others (e.g. xine-ui) got the concept completely wrong.

And I can tell you that after the rubbishing of the experience level stuff in Nautilus post-Eazel, I have absolutely no plans to use it.

> anyway, i think having those experience levels shouldn't hurt anyone

In my experience, it hasn't caused any problems for the front-ends that chose not to take experience into account.

--
Bastien Nocera <ha...@ha...> |
From: ChristianHJW <chr...@ma...> - 2003-06-25 06:33:26
|
Guenter Bartsch wrote:
> hallo there,
> still one week to go with my diploma thesis, but i am beginning to
> wonder what xine's future looks like.
> - big, radical API cleanup
> guenter

Hi,

i am still dreaming about somebody extending one of the many working codec/plugin APIs into something more powerful, so that we finally have a new, open-source codec plugin API to standardize on for all platforms, like UCI was once planning to do ( http://uci.sf.net ). Unfortunately the maker of UCI, a gentleman called Alex 'Foogod' Stewart, disappeared completely from the scene. I meanwhile have admin rights for the project on sf.net, together with another guy, but we could very well either drop the project and create a 'Xine Codec API Project', or try to rebuild/finish UCI starting from a powerful existing API such as a xine or FFMPEG plugin API. We had thoughts of using the GStreamer plugin API for that, but that's hardly possible without having GStreamer as the complete underlying framework, so this is not an option.

In any case, this API should be designed cross-platform, so that codec developers ( like the XviD, Lame, MPC, FAAC/FAAD people ) can release their code with one working API only, and it can be compiled for all platforms. No idea if xine could be extended for this; if so, i promise to work hard so that the complete OSS Windows world will stop using the crappy VCM/ACM stuff and support the API at least as a 2nd option. The XviD dev API4 could be used as a good example API; i don't know, but from all i heard it is pretty powerful and can be used for more than just an MPEG4 codec.

Sorry if my ideas are not feasible or simply stupid. i'm not a developer myself but only a project admin, so it's hard for me to estimate whether a general playback app API like xine's ( with its own encoder in the package also ) could be a basis for such an API.

Best regards

Christian
http://www.matroska.org |
From: Mike M. <mel...@pc...> - 2003-06-25 14:24:50
|
On Wed, 25 Jun 2003, ChristianHJW wrote:
> 'Xine Codec API Project'
>
> or try to rebuild/finish UCI starting from a powerful existing API such
> as a Xine or FFMPEG plugin API. We had thoughts to use the Gstreamer
> plugin API for that, but its hardly possible without having Gstreamer as
> complete framework underlying, so this is not an option.

Personally, I think ffmpeg/libavcodec is a much more appropriate choice for a universal codec API. I would like to see all of xine's present decoders moved there (in fact, I am moving in that direction) to facilitate cross-project collaboration.

--
-Mike Melanson |
From: ChristianHJW <chr...@ma...> - 2003-06-25 22:59:58
|
Mike Melanson wrote:
> > or try to rebuild/finish UCI starting from a powerful existing API such
> > as a Xine or FFMPEG plugin API. We had thoughts to use the Gstreamer
> > plugin API for that, but its hardly possible without having Gstreamer as
> > complete framework underlying, so this is not an option.
>
> Personally, I think ffmpeg/libavcodec is a much more appropriate
> choice for a universal codec API. I would like to see all of xine's
> present decoders moved there (in fact, I am moving in that direction) to
> facilitate cross-project collaboration.
> -Mike Melanson

This is exactly the idea behind all this: to make it easier to use open-source codecs from open-source apps. Some questions here, Mike:

- would this API be usable for audio also, or only for video ( i don't know libavcodec too well, sorry )?
- you talk about decoders, but what about encoders? I assume libavcodec has encoders coming with it also?
- most important question: is there any chance to use the libavcodec API on other platforms than Linux? If so, what would be the conditions to make it work?

Thanks for a quick answer

Christian
http://www.matroska.org

P.S. Pete 'Suxendrol' from the XviD team told me about an extension to the VfW API he has planned, to allow more advanced features that the current VCM codec API doesn't allow. But then again we would have only a solution for Windows, and not a real cross-platform open-source API... what do you all think? |
From: Mike M. <mel...@pc...> - 2003-06-25 23:11:59
|
[I refuse to help propagate the cross-posting]

On Thu, 26 Jun 2003, ChristianHJW wrote:
> - would this API be usable for audio also, or only for video ( dont know
> libavcodec too well, sorry ) ?

Yep. That's what the 'a' in libavcodec stands for.

> - you talk about decoders, but what about encoders ? I assume libavcodec
> has encoders coming with it also ?

Sure does. Here is a list of A/V formats that libavcodec can presently encode/decode:

http://ffmpeg.sourceforge.net/ffmpeg-doc.html#SEC16

> - most important question : is there any chance to use the libavcodec
> API on other platforms than Linux ? If so, what were the conditions to
> make it work ?

Yep. In fact, there is already some work in this area:

http://cutka.szm.sk/ffvfw/ffvfw.html

BTW, the main project page (ffmpeg.sf.net) has a long list of projects presently known to incorporate ffmpeg.

> P.S. Pete 'Suxendrol' from the XviD team told me about an extension to
> the VfW API he has planned to allow more advanced features that current
> VCM codec API doesnt allow, but then again we had only a solution for
> Windows, and not a real x-platform opensource API ..... what do you all
> think ?

If you can stand programming the Windows VfW API, it might be plausible. Not my proverbial cup of tea, though.

--
-Mike Melanson |
From: ChristianHJW <chr...@ma...> - 2003-06-25 23:28:50
|
Mike Melanson wrote:
> > - most important question : is there any chance to use the libavcodec
> > API on other platforms than Linux ? If so, what were the conditions to
> > make it work ?
>
> Yep. In fact, there is already some work in this area:
>
> http://cutka.szm.sk/ffvfw/ffvfw.html
>
> BTW, the main project page (ffmpeg.sf.net) has a long list of projects
> presently known to incorporate ffmpeg.
> -Mike Melanson

I know Milan Cutka's ffvfw project all too well; his ffdshow decoder is the backbone of our own little project when it comes to playback of any kind of MPEG4 stuff on DirectShow. But i am not sure if this could be helpful for what we would try to achieve? After all, Milan's work is mainly a VCM wrapper ( and this is pretty complicated ) to allow using FFMPEG via a normal VfW interface in VfW apps like VirtualDub, and this is exactly what i wanted to get rid of in the first place - the limitations of VfW ( no proper b-frame handling, etc. ) - with the new approach...

... or are you stressing this example to show me that libavcodec and its API *CAN* be used fine on Windows?

Christian
http://www.matroska.org |
From: Mike M. <mel...@pc...> - 2003-06-26 15:02:42
|
On Thu, 26 Jun 2003, ChristianHJW wrote:
> ... or are you stressing this example to show me that libavcodec and its
> API *CAN* be used fine on Windows ?

Multimedia codecs are pretty straightforward in that they take an array of bytes and transform it into another array of bytes. They don't do things like interface to specific hardware or read/write the disk/network. Their fundamental operation makes them quite portable. So, in principle, it is quite workable to use libavcodec on Windows or any other general-purpose computing platform.

--
-Mike Melanson |
From: Pamel <pa...@ms...> - 2003-06-26 20:33:55
|
"Mike Melanson" <mel...@pc...> wrote > Multimedia codecs are pretty straightforward in that they take an > array of bytes and transform them into another array of bytes. Yes, mostly. The problem with that is what the array of bytes contains. Could a codec, for instance, drop some frames, and adjust the timecodes on new frames to spread them out for a low-motion scene? Currently if a codec decides that a frame has to little movement to encode, it returns an frame that has a byte sequence indicating that there is no change in that frame. Then on decode the codec can output a duplicate frame. This is idiocy. Its a throwback from old codec API's that no one has bothered get rid of. If a codec doesn't want to code any change, it should be able to simply not produce that frame. Matroska can handle not having a frame at that timecode, and even AVI could just have a "drop frame" instead. Another issue is the 2-pass encoding method. Codecs will output dummy frames while creating a log file to use on the next pass. Why should it have to output anything? Why does the application have to perform two separate encodes? You should be able to tell the codec to perform 2 passes on the file. It could keep requesting frames without outputting anything. Once it has requested all required frames, just start requesting frames from the beginning again and output encoded frames. A codec should be able to request any frame that it wants, when it wants. All that matters is that it outputs encoded data with timecodes and references(IE P and B frames) in an order that it can receive them back in for decoding the video. Well, that is my thought on an advanced codec API. Anybody else have any thoughts? Pamel |
From: Mike M. <mel...@pc...> - 2003-06-27 01:00:23
|
On Thu, 26 Jun 2003, Pamel wrote:
> Currently if a codec decides that a frame has too little movement to encode,
> it returns a frame that has a byte sequence indicating that there is no
> change in that frame. Then on decode the codec can output a duplicate
> frame. This is idiocy. It's a throwback from old codec APIs that no one
> has bothered to get rid of. If a codec doesn't want to code any change, it
> should be able to simply not produce that frame.

Fortunately, the developers of the QuickTime format were being absolutely visionary when they incorporated variable framerates and edit lists into the format. That works around this problem nicely. There are other general-purpose multimedia formats besides AVI and Matroska.

> Another issue is the 2-pass encoding method. Codecs will output dummy
> frames while creating a log file to use on the next pass. Why should it
> have to output anything? Why does the application have to perform two
> separate encodes?

I give up. Why does an application have to perform two separate encodes?

--
-Mike Melanson |
From: Pamel <pa...@ms...> - 2003-06-27 01:44:11
|
"Mike Melanson" <mel...@pc...> wrote.. > On Thu, 26 Jun 2003, Pamel wrote: > Fortunately, the developers of the Quicktime format were being > absolutely visionary when they incorporated variable framerates and edit > lists into the format. Works around this problem nicely. There are other > general-purpose multimedia formats besides AVI and Matroska. Right, but the point is that we need an API that can support the creations of VFR on the codec level. > > Another issue is the 2-pass encoding method. Codecs will output dummy > > frames while creating a log file to use on the next pass. Why should it > > have to output anything? Why does the application have to perform two > > separate encodes? > > I give up. Why does an application have to perform two separate > encodes? Because there isn't currently a A/V API that allows the codec to request frames that it wants. So currently, to get the entire video stream twice, either you have the application encode the video twice, changing the settings in the codec's config between encodes to write a different set. Or you have a codec that caches the entire video stream(not practical). But again, the point is that you can avoid this if you design an A/V API that allows the codec to request a frame. Pamel |
From: Mike M. <mel...@pc...> - 2003-06-27 03:24:21
|
On Thu, 26 Jun 2003, Pamel wrote:
> Right, but the point is that we need an API that can support the creation
> of VFR at the codec level.

I still think the codec level shouldn't have any knowledge of the timing/sync information; that should be handled at the demuxer level.

> Because there isn't currently a A/V API that allows the codec to request
> frames that it wants. So currently, to get the entire video stream twice,
> either you have the application encode the video twice, changing the
> settings in the codec's config between encodes to write a different set. Or
> you have a codec that caches the entire video stream (not practical).
>
> But again, the point is that you can avoid this if you design an A/V API
> that allows the codec to request a frame.

I don't know much about encoding, so please give me examples of production codecs that do this. Thanks...

--
-Mike Melanson |
From: Pamel <pa...@ms...> - 2003-06-27 06:53:48
|
"Mike Melanson" <mel...@pc...> wrote... > I still think the codec level shouldn't have any knowledge of the > timing/sync information; that should be handled at the demuxer level. Its just differing philosophies. I don't believe that a codec should HAVE to handle it, but I believe that it should have the ability to alter them if it like. > > Because there isn't currently a A/V API that allows the codec to request > > frames that it wants. So currently, to get the entire video stream twice, > > either you have the application encode the video twice, changing the > > settings in the codec's config between encodes to write a different set. Or > > you have a codec that caches the entire video stream(not practical). > > > > But again, the point is that you can avoid this if you design an A/V API > > that allows the codec to request a frame. > > I don't know much about encoding, so please give me examples of > production codecs that do this. DivX 5, XviD, Real Video 9, Windows Media 9, and most other modern video codecs. The reason for this technique is, for example, to be able to average out the bitrate more exactly. On the first pass for encoding, the codec analyzes each frame and writes into a log file the compressibility level of each frame. On the second pass it analyzes the log file to determine how many bits to devote to each frame. High motion frames will receive more bits, and low motion frames will receive less bits. This allows one to predetermine the exact size of the encoded video, which becomes extremely important if you are trying to store a video on a CD. If you have 650MB for a video file, encoding to only 500MB means your video quality is going to be much lower than it could have been. If it ends up being 670MB, then you won't be able to fit it on a CD. Most codecs will let you specify a number of kbps to use for a 1-pass method. In this method the same number of bits is allocated for each frame. 
The reason that this is bad is that while low-motion scenes will look very clear, high-motion scenes will have a lot of artifacts because they won't have enough bits to encode smooth video. Pamel |
From: Robin K. <kom...@my...> - 2003-07-01 03:44:09
|
I've received some bug reports from one of the users of my xine packages.

http://140.114.32.3/usr2/ming-yuan/chuan/3d-anime/PolyHer_Mount_630i_DoF.mov

When I tried this on a Windows machine it identifies itself as a QuickTime VR file and lets me rotate the model in one degree of freedom. I suspect it's just an SVQ3 video where the angle of rotation corresponds to a video timestamp, as xine does play the first few frames before stopping. The image is, however, corrupted in a manner that suggests the SVQ3 decoder doesn't adhere to the pitches for the U and V channels.

http://www.blastwave.org/~phil/AnnaK.1stS.Deuce.Front.mov

The video plays with audio for a few seconds before it segfaults. The backtrace shows that the segfault happens inside a malloc call by xine_event_send. Heap corruption? The console output produced is listed below.

mjpeg comment: 'AppleMark'
mjpeg: unsupported coding type (ce)
mjpeg: unsupported coding type (c5)
mjpeg: unsupported coding type (cf)
mjpeg: unsupported coding type (cf)
mjpeg: unsupported coding type (c1)
mjpeg: unsupported coding type (c6)
mjpeg: unsupported coding type (c2)
mjpeg: unsupported coding type (c6)

--
Wishing you good fortune,
--Robin Kay-- (komadori) |
From: Ewald S. <ew...@ra...> - 2003-07-01 06:42:38
|
Hi Robin, > I've received some bug reports from one of the users of my xine packages. > > http://140.114.32.3/usr2/ming-yuan/chuan/3d-anime/PolyHer_Mount_630i_DoF.mo >v > > When I tried this on a Windows machine it identifies itself as a > QuickTime VR file and lets me rotate the model in one degree of freedom. > I suspect it's just an SVQ3 video where the angle of rotation corresponds > to a video timestamp, as xine does play the first few frames before > stopping. The image is, however, corrupted in a manner that suggests the > SVQ3 decoder doesn't adhere to the pitches for the U and V channels. The movie plays just fine here with the latest xine (CVS). It could be related to some of the recent SVQ3 fixes (B-frame support and various other fixes). [...] bye, ewald |
From: Robin K. <kom...@my...> - 2003-07-01 20:45:47
|
Ewald Snel wrote: > Hi Robin, > >>I've received some bug reports from one of the users of my xine packages. >> >>http://140.114.32.3/usr2/ming-yuan/chuan/3d-anime/PolyHer_Mount_630i_DoF.mo >>v >> >>When I tried this on a Windows machine it identifies itself as a >>QuickTime VR file and lets me rotate the model in one degree of freedom. >>I suspect it's just an SVQ3 video where the angle of rotation corresponds >>to a video timestamp, as xine does play the first few frames before >>stopping. The image is, however, corrupted in a manner that suggests the >>SVQ3 decoder doesn't adhere to the pitches for the U and V channels. > > The movie plays just fine here with the latest xine (CVS). It could be related > to some of the recent SVQ3 fixes (B-frame support and various other fixes). No, that was with the latest CVS as of the time of posting. I tried it again with the now latest CVS and it still doesn't work. Also, taking a snapshot produces an image with a bad pitch in the Y channel too. This doesn't happen with any other media I've tested, including other SVQ3 streams. BTW, this is with the pgx64 video output driver. When I tried it with the xshm driver it segfaults thus: (gdb) bt #0  0xfef957bc in mlib_v_VideoColorYUV2ARGB420_all_allign () from /usr/local/lib/libmlib.so.2 #1  0xfef951c0 in mlib_VideoColorYUV2ARGB420 () from /usr/local/lib/libmlib.so.2 #2  0xfc2bfd08 in mlib_yuv420_argb32 () from /usr/local/lib/xine/plugins/1.0.0/xineplug_vo_out_xshm.so #3  0xfc2c5a40 in xshm_frame_copy () from /usr/local/lib/xine/plugins/1.0.0/xineplug_vo_out_xshm.so #4  0xff2cc154 in vo_frame_driver_copy () from /usr/local/lib/libxine.so.1 #5  0xff2ccf3c in overlay_and_display_frame () from /usr/local/lib/libxine.so.1 #6  0xff2cac44 in video_out_loop () from /usr/local/lib/libxine.so.1 -- Wishing you good fortune, --Robin Kay-- (komadori) |
From: suxen_drol <sux...@ho...> - 2003-06-26 08:34:35
|
hi, On Thu, 26 Jun 2003 00:56:41 +0200 ChristianHJW <chr...@ma...> wrote: > P.S. Pete 'Suxendrol' from the XviD team told me about an extension to > the VfW API he has planned to allow more advanced features that the current > VCM codec API doesn't allow, but then again we had only a solution for > Windows, and not a real x-platform opensource API ..... what do you all > think ? yea i have lots of plans, and that vfwext was proposed some >15 months ago. recently i did experiment with an extension to make the vfw configuration window aware of the encoding fps and dimensions, but thats hardly **advanced**. developing such an api, framework or protocol is both easy and hard. it depends on the scope you want to capture within that framework. the issues surrounding frame-based encoding and decoding are well known, and developing a small api to deal with them isn't very hard (and that is what xine, mplayer & transcode have done). and i agree with mike, and have said in the past, that ffmpeg sets a very good example. problems arise when you consider features which don't easily fit in either the codec or the application layer: - configuration. how do we configure the codec? who is responsible for storing the configuration info, and what if we want to re-configure the codec during encoding or decoding (such as preprocessing)? vfw has a configure command which displays a gui configuration dialog, and some commands for getting/setting a "block" of the configuration memory. uci & ffmpeg have enum/get/set functions for enumerating through configuration options. mplayer has something similar. - two-pass management. 2pass is neither a codec layer issue nor an application layer issue, it fits somewhere in between. say i encode a complete first pass, but then want to encode only a portion of that video in the second pass. presently with xvid and divx you have to cut your 2pass stats files, and mess around with external scaling/analysis tools. 
external applications such as gordian knot (win32) help with this process, but they're not integrated. ideally there would be some way to manage all this stuff that sits between the codec and application layers. to do this, we need to define what data the codec must provide to such a "management layer". i believe this is difficult, because 2pass is still somewhat uncharted waters. - suspending/restoring codec state. the ability to suspend the encoder, reboot and resume encoding at the same position. - future: video codecs supporting multiple input streams (think: mp3 stereo). and maybe someday video segmentation will become popular. are you looking for immediate usefulness chris (run with what we know about video codecs now), or some future-proof-ness (dwell on future thoughts)? cheers, -- pete |
From: Pamel <pa...@ms...> - 2003-06-26 21:01:33
|
"suxen_drol" <sux...@ho...> wrote in message > - future: video codecs support multiple input streams (think: mp3 stereo). > and maybe someday video segmentation will emerge to be popular. I hadn't thought about that. I guess an API should be able to handle multiple input/outputs to a single source. I realize DirectShow can do this, but I hadn't thought about the need to include have that in a future API. Its pretty important. > are you looking for immediate usefulness chris (run with what we know > about video codecs now), or some future-proof-ness (dwindle on future > thoughts)? I think Christian's intention is future proof. That's part of the concept of Matroska, to be future proof for the next decade. The DirectShow concept is excellent for playback, but editing is not so good. I think that we need something designed for editing, that is future proof. Pamel |
From: Pamel <pa...@ms...> - 2003-06-26 20:59:35
|
"Pamel" <pa...@ms...> wrote in message news:bdedn4$7cn$1...@ma...... > "suxen_drol" <sux...@ho...> wrote in message I also meant to point out that I think UCI may be dead. Given that there is no development, it may be a good idea to start a new project with certain goals in mind. For instance, being able to take advantage of all of the design features of Matroska. :) One of the big issues with a new API that allowed all sorts of neat features is how to tell it, "You can't use these features when writing to AVI." or whatever other container. Pamel |