From: Peter G. <pe...@el...> - 2001-11-05 14:46:59

Hmm... I thought you didn't have refresh yet %-) I'm no X11 guru, so alas, I know nothing that would help you there. I suppose you can try to subclass the vo_instance_s and vo_frame_s coming from the libvo X11 driver in your application, and keep track of what the get_frame method returns you. Once you have a frame, you can trigger your own events in the draw method.

Peter

Kees Cook wrote:

>Actually, I meant specifically the libvo X11 window. I do my own
>refreshes by sending a whole GOP to libmpeg2, but I can't figure out how
>to capture window events from the libvo window.
>
>On Mon, Nov 05, 2001 at 09:27:08PM +0700, Peter Gubanov wrote:
>
>>Hi Kees,
>>
>>I'll use what I know for sure for reference. On Windows, DirectShow is
>>used to present video. There are two ways to keep the video window up to
>>date: either the video renderer keeps the last frame in its own memory
>>and refreshes the window with a cached copy, or it sends special requests
>>to the DirectShow filtergraph. The special request is handled by the MPEG
>>demultiplexer - the filtergraph manager sends a request to decode the
>>current frame, and it is the demultiplexer's responsibility to feed the
>>decoder with the I frame and all preceding reference frames until the
>>required frame is found.
>>But considering the latest achievements in MPEG-2 encoding, this could be
>>impossible to implement - see my previous posts today regarding
>>I-frame-less video streams. So caching a frame is the best way IMHO.
>>
>>Regards,
>>  Peter
>>
>>Kees Cook wrote:
>>
>>>Hello!
>>>
>>>I'm using libvo in an application I've written, and I have a strange
>>>problem. I'm using libvo/libmpeg2 to display 1 GOP at a time, so the image
>>>stays still most of the time. However, the window brought up by my libvo
>>>doesn't repaint itself on an uncover event. I figure it's my duty to
>>>detect and issue a repaint, but I don't know where to start on
>>>implementing that. Any ideas?
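A minimal sketch of the interposing Peter suggests - wrap the X11 vo instance so the application always holds a pointer to the last frame it handed out, and can redraw it later. The struct layouts beyond get_frame/draw (which the message itself names) are assumptions for illustration, not libmpeg2's exact vo API:

/* Hypothetical wrapper around a libvo X11 instance: remember the last
 * frame returned by get_frame so an Expose handler can repaint it
 * without re-decoding the GOP. */
typedef struct {
    vo_instance_t vo;        /* our "subclass"; castable to vo_instance_t */
    vo_instance_t * x11;     /* the real libvo X11 instance */
    vo_frame_t * last;       /* last frame handed out by get_frame */
} refresh_vo_t;

static vo_frame_t * refresh_get_frame (vo_instance_t * instance, int flags)
{
    refresh_vo_t * self = (refresh_vo_t *) instance;
    self->last = self->x11->get_frame (self->x11, flags);
    return self->last;       /* caller eventually calls last->draw() */
}

On an uncover event the application could then call self->last->draw() again (or blit a cached copy of the pixels), instead of pushing the whole GOP through libmpeg2 once more.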
From: Kees C. <mp...@ou...> - 2001-11-05 13:35:19

Hello!

I'm using libvo in an application I've written, and I have a strange problem. I'm using libvo/libmpeg2 to display 1 GOP at a time, so the image stays still most of the time. However, the window brought up by my libvo doesn't repaint itself on an uncover event. I figure it's my duty to detect and issue a repaint, but I don't know where to start on implementing that. Any ideas?

--
Kees Cook @outflux.net
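For reference, the standard Xlib pattern for the "uncover" case Kees describes is to select ExposureMask on the window and redraw a cached image on each Expose event. A minimal sketch (the cached-frame image is an assumed variable; only the Xlib calls are real):

#include <X11/Xlib.h>

/* Redraw a cached frame whenever the window is exposed.  The display,
 * window, GC and cached XImage are assumed to exist already. */
void handle_events (Display * dpy, Window win, GC gc, XImage * cached)
{
    XEvent ev;

    XSelectInput (dpy, win, ExposureMask);
    while (1) {
        XNextEvent (dpy, &ev);
        if (ev.type == Expose && ev.xexpose.count == 0)
            XPutImage (dpy, win, gc, cached, 0, 0, 0, 0,
                       cached->width, cached->height);
    }
}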
From: Peter G. <pe...@el...> - 2001-11-05 10:05:56

>The thing about 132 idct updates is in mpeg2 annex A. They also refer
>you to section 2.3 of the IEEE1180 standard. But you're right, in
>practice this is impossible to use, since you'd have to follow up
>pixels as they are motion compensated...

Aha! Here it is:

NOTES - 1 Clause 2.3 of Std 1180-1990, "Considerations of Specifying IDCT Mismatch Errors", requires the specification of periodic intra-picture coding in order to control the accumulation of mismatch errors. Every macroblock is required to be refreshed before it is coded 132 times as a predictive macroblock. Macroblocks in B-pictures (and skipped macroblocks in P-pictures) are excluded from the counting because they do not lead to the accumulation of mismatch errors. This requirement is the same as indicated in 1180-1990 for visual telephony according to ITU-T Recommendation H.261.

But fortunately, it is stated for macroblocks, not for pixels. ;-) A bit easier to care about, but I haven't ever seen an encoder which cared.

BTW, I have tested my encoding engine and generated a stream of 1000 frames. It contained 1 I frame and 999 P frames, all predicted. The mismatch error becomes rather noticeable even though I use exactly the same IDCT for encoder and decoder, and the encoder is closed loop (i.e. it reconstructs a reference picture before predicting the next).

>:)) A lot of the same here... been busy but I always hope to get more
>time.

Hope dies hard! ;-)

Peter
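Since the clause quoted above counts per macroblock, an encoder that did care could track the rule with one counter per macroblock, roughly as follows (a sketch derived from the quoted text, not code from any real encoder):

/* One refresh counter per macroblock: reset on intra coding, bump on
 * predictive coding, ignore B pictures and skipped macroblocks, per
 * the quoted clause. */
#define MAX_PRED_CODINGS 132

void account_macroblock (int * counter, int is_intra,
                         int in_b_picture, int is_skipped)
{
    if (in_b_picture || is_skipped)
        return;                 /* excluded from the count */
    if (is_intra)
        *counter = 0;           /* macroblock refreshed */
    else if (++*counter > MAX_PRED_CODINGS)
        ;  /* encoder must force an intra refresh of this macroblock */
}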
From: Michel L. <wa...@zo...> - 2001-11-05 09:02:16

On Mon, Nov 05, 2001 at 03:30:19PM +0700, Peter Gubanov wrote:
> I can demux video for you if you like. But I've looked at the code
> briefly, and I suppose there should be no problems getting the video
> stream out of the transport.

No, you don't need to demux it - mpeg2dec -t 11 on this stream works just fine :)

> Where did you get the number? And who ever managed to count the number of
> updates for a particular pixel? There is an "intra slice" flag in the MPEG
> slice header, but it is never used by decoders and rarely used by
> encoders. It is intended primarily for DSM CC. But personally I know of no
> more than 4 implementations of DSM CC...

The thing about 132 idct updates is in mpeg2 annex A. They also refer you to section 2.3 of the IEEE1180 standard. But you're right, in practice this is impossible to use, since you'd have to follow up pixels as they are motion compensated...

> Just busy too much. I know I've promised some checkins to libmpeg2, but
> there are new problems every day on my primary projects, and I waste
> time. Students, PhD thesis, junior colleagues - all take so much time...
> But it seems I will have some time to breathe freely in the near
> future ;-)

:)) A lot of the same here... been busy but I always hope to get more time.

Cheers,

--
Michel "Walken" LESPINASSE
Is this the best that god can do ? Then I'm not impressed.
From: Peter G. <pe...@el...> - 2001-11-05 08:31:12

>OK, so I'm still downloading the stream, but you make me feel glad
>already that I integrated the new TS demuxer into mpeg2dec :))

I can demux video for you if you like. But I've looked at the code briefly, and I suppose there should be no problems getting the video stream out of the transport.

>I had digital cable TV previously, and I had noticed how you can see
>the screen refresh after switching channels. Also one known case where

I've got a number of captured streams - European, US, Chinese - and all of them are regular streams with I-frames. This FOX broadcast is something unusual. No I-frames, alternate scan, 10 bit DC precision... I haven't ever seen such streams.

>they don't use I frames is videoconferencing - basically to achieve
>lower latency they don't use I frames because that would require a
>bigger stream buffer and thus more buffering latency, and they don't
>use B frames either because of the reordering latency, so they are
>stuck with only P frames and intra refresh.

Yes, this is known as "low delay" mode. They use intra slices to refresh.

>I will send this to guenter of the xine project, I suspect it will be
>a problem for them.

Heh. There are no problems for the decoder - there are problems for the users who have to watch broken frames ;-)

>I don't know how people could deal with this without stream analysis -
>one possible way would be to rely on the fact that it is invalid in
>MPEG2 to send more than 132 IDCT updates to a given pixel value, but I
>think this doesn't work because non-coded blocks are not included in
>this count...

Where did you get the number? And who ever managed to count the number of updates for a particular pixel? There is an "intra slice" flag in the MPEG slice header, but it is never used by decoders and rarely used by encoders. It is intended primarily for DSM CC. But personally I know of no more than 4 implementations of DSM CC...

>If you see other funny streams I'm always interested to see these :)

You're welcome! As Elecard implements more fixes to the decoder and demultiplexer, we receive fewer streams ;-(

>PS whats up with you lately ?

Just busy too much. I know I've promised some checkins to libmpeg2, but there are new problems every day on my primary projects, and I waste time. Students, PhD thesis, junior colleagues - all take so much time... But it seems I will have some time to breathe freely in the near future ;-)

Peter
From: Michel L. <wa...@zo...> - 2001-11-05 07:52:06

OK, so I'm still downloading the stream, but you make me feel glad already that I integrated the new TS demuxer into mpeg2dec :))

On Mon, Nov 05, 2001 at 10:27:02AM +0700, Peter Gubanov wrote:
> 6.1.1.11 Frame reordering
> When the sequence contains coded B-frames, the number of consecutive
> coded B-frames is variable and unbounded. The first coded frame after a
> sequence header shall not be a B-frame.
> A sequence may contain no coded P-frames. A sequence may also contain
> no coded I-frames in which case some care is required at the start of
> the sequence and within the sequence to effect both random access and
> error recovery.

I had digital cable TV previously, and I had noticed how you can see the screen refresh after switching channels. Also one known case where they don't use I frames is videoconferencing - basically to achieve lower latency they don't use I frames because that would require a bigger stream buffer and thus more buffering latency, and they don't use B frames either because of the reordering latency, so they are stuck with only P frames and intra refresh.

But yeah, these streams are pretty unusual, and I never saw one before that used B frames as well. I'll add it to my test suite :)

> Fortunately, libmpeg2 doesn't care whether it is capable of decoding a
> frame or not, so this is not a problem. There could be problems for
> applications that will display blocky frames. And there are no means to
> guarantee the frame contains no errors except temporal stream analysis ;-)

I will send this to guenter of the xine project, I suspect it will be a problem for them.

I don't know how people could deal with this without stream analysis - one possible way would be to rely on the fact that it is invalid in MPEG2 to send more than 132 IDCT updates to a given pixel value, but I think this doesn't work because non-coded blocks are not included in this count...

> So, put a new stream to your collection.

If you see other funny streams I'm always interested to see these :)

PS whats up with you lately ?

--
Michel "Walken" LESPINASSE
Is this the best that god can do ? Then I'm not impressed.
From: Peter G. <pe...@el...> - 2001-11-05 03:27:29

Well, I've managed to find the relevant paragraph in the MPEG spec:

6.1.1.11 Frame reordering
When the sequence contains coded B-frames, the number of consecutive coded B-frames is variable and unbounded. The first coded frame after a sequence header shall not be a B-frame.
A sequence may contain no coded P-frames. A sequence may also contain no coded I-frames in which case some care is required at the start of the sequence and within the sequence to effect both random access and error recovery.
....

Fortunately, libmpeg2 doesn't care whether it is capable of decoding a frame or not, so this is not a problem. There could be problems for applications that will display blocky frames. And there are no means to guarantee the frame contains no errors except temporal stream analysis ;-)

So, put a new stream to your collection.

Cheers,
Peter

Peter Gubanov wrote:

> Hi Michel,
>
> I've promised you to share some interesting streams long ago. But the
> streams I had turned out to be not so interesting. Finally I've got
> something for you. Download
> ftp://nimbus.elecard.net.ru/pub/mpeg/transport/CH_49-1.ts.0000 This is
> a transport stream captured from a FOX broadcast with a HiPix card. It
> contains a regular SDTV video stream. But this stream doesn't contain
> any I frames! The file is 144MB, but if you don't want to wait too
> much, 8MB of it would be enough to understand. I haven't found any
> hints like intra refresh yet, but I'll keep trying to find standard
> compliance of this stream.
>
> Have fun!
>
> Peter
From: Peter G. <pe...@el...> - 2001-11-05 02:54:41

Hi Michel,

I've promised you to share some interesting streams long ago. But the streams I had turned out to be not so interesting. Finally I've got something for you. Download ftp://nimbus.elecard.net.ru/pub/mpeg/transport/CH_49-1.ts.0000 This is a transport stream captured from a FOX broadcast with a HiPix card. It contains a regular SDTV video stream. But this stream doesn't contain any I frames! The file is 144MB, but if you don't want to wait too much, 8MB of it would be enough to understand. I haven't found any hints like intra refresh yet, but I'll keep trying to find standard compliance of this stream.

Have fun!

Peter
From: Kees C. <ke...@ou...> - 2001-11-05 01:04:44

Hey, saw the list of applications using mpeg2dec, and thought I'd drop a line about "GOPchop". I just released it; it's my stab at a GOP-accurate MPEG2 editor for Linux. It uses mpeg2dec for frame display. :)

http://outflux.net/unix/software/GOPchop/

And can I just say: thank you for such a good library! :) My task would have been much more difficult without libmpeg2. :)

(any comments, please email me directly, I'm not subscribed to this list...)

--
Kees Cook @outflux.net
From: Michel L. <wa...@zo...> - 2001-11-04 11:41:37

On Fri, Nov 02, 2001 at 07:42:25PM +0100, Gert Vervoort wrote:
> With the following patch libmpeg2 is able to decode HDTV streams.

> -    if ((width > 768) || (height > 576))
> -	return 1;	/* size restrictions for MP@ML or MPEG1 */
> -

Hmmm. Right, we don't need much to be able to decode MP@HL instead of just MP@ML. I'm about to commit a patch to CVS for this.

I'm not totally removing the size restrictions though, because we still don't support arbitrary sizes - for pictures higher than 2800 lines, we would need some special support that we don't have right now. If you need resolutions higher than 1920x1152, let me know (in which case I also need to know what the corresponding maximum VBV buffer size is - I've been taking the parameters for MP@HL as a reference there).

Cheers,

--
Michel "Walken" LESPINASSE
Is this the best that god can do ? Then I'm not impressed.
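What Michel describes amounts to relaxing the check to High Level limits rather than dropping it entirely, along these lines (a sketch of the idea, not the actual CVS commit; MP@HL caps coded pictures at 1920x1152):

/* Hypothetical replacement for the removed MP@ML check, validating
 * against MP@HL picture-size limits instead of dropping the check. */
if ((width > 1920) || (height > 1152))
    return 1;	/* beyond MP@HL size constraints */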
From: Gert V. <Ger...@wx...> - 2001-11-02 18:41:09

Hi,

With the following patch libmpeg2 is able to decode HDTV streams.

Gert

--- mpeg2dec/libmpeg2/header.c.1	Fri Nov  2 19:36:48 2001
+++ mpeg2dec/libmpeg2/header.c	Fri Nov  2 19:37:15 2001
@@ -87,9 +87,6 @@
     width = ((height >> 12) + 15) & ~15;
     height = ((height & 0xfff) + 15) & ~15;
 
-    if ((width > 768) || (height > 576))
-	return 1;	/* size restrictions for MP@ML or MPEG1 */
-
     picture->coded_picture_width = width;
     picture->coded_picture_height = height;
From: Erik W. <om...@te...> - 2001-10-29 08:57:33

Here's an attempt at a piece of user code for a basic mpeg2 player, using the lowest-level API. This is gonna look like a gstreamer plugin, and really really rough....:

int more_data(bitstream_t *bs, unsigned char **buf, int *len)
{
  mpeg2element *element = bitstream_get_private(bs);
  GstBuffer *buffer = gst_pad_pull(element->dec);

  *buf = GST_BUFFER_DATA(buffer);
  *len = GST_BUFFER_SIZE(buffer);
  // store buffer somewhere for future unref
}
// would probably need a callback to unref the buffer

dec = mpeg2_init();
bs = bitstream_init();
bitstream_set_callback(bs, more_data);
bitstream_set_private(bs, element);
mpeg2_set_bitstream(dec, bs);

mpeg2_parse_sequence(dec, seq);
while (1) {
  mpeg2_parse_picture(dec, &pic);
  while (mpeg2_parse_slice(dec, &slice)) {
    while (mpeg2_parse_macroblock(dec, &mb)) {
      mpeg2_idct(dec, &mb);
      mpeg2_motion_comp(dec, &mb);
      mpeg2_place(dec, &mb, &image);
    }
  }
}

A mainloop for something with hardware motion comp would look like:

while (1) {
  mpeg2_parse_picture(dec, &pic);
  while (mpeg2_parse_slice(dec, &slice)) {
    while (mpeg2_parse_macroblock(dec, &mb)) {
      send_mb_data_to_hardware(&slice, &mb);
    }
  }
}

An alternate use for the low-level API is when recoding to something else, say MPEG-4. You'd want to extract the motion vectors and keep track of them as hints for motion search in the MPEG-4 code, or even just steal them <g>.

For something like mp3, there are things you can do with the subband data that will save you time, like applying basic equalization. Given a low-level API for that (parse, process, synthesize), you can do whatever funky processing you need to. Totally impossible without a low-level API, unless you spend a lot more time later doing the processing the hard way (i.e. doing full motion search, or applying an expensive EQ).

     Erik Walthinsen <om...@te...> - System Administrator
       __
      /  \           GStreamer - The only way to stream!
     |    | M E G A  ***** http://gstreamer.net/ *****
     _\  /_
From: Erik W. <om...@te...> - 2001-10-29 07:57:13

vektor and taaz brought this thread to my attention, and strongly suggested I respond <g>

So as far as I can tell, Arpi, you're dead on. This is a problem I've been thinking about for 2 years now, ever since I started GStreamer and had to significantly rewrite mpg123 to make it a) reentrant, and b) loadable (results are in gstreamer cvs for those interested).

As a result of all this thinking (many many hours), I think I've come up with a set of general guidelines for codec APIs (mostly decoders so far, because they're harder). I haven't been able to explain them very well so far, but I'm gonna make another attempt. Also, I'm writing an mpeg-1 layer 1 audio decoder where I intend to put as many of these ideas to use as possible, over time. Eventually I may add layers 2 and 3 to make it a full mp3 decoder library that's fully optimized for both use and speed. I also have a totally from-scratch mpeg1 video decoder I did a year ago that I should resurrect, fix the remaining decode bug (in mcomp), and release.

Anyway, here goes another attempt:

----------------------------------------------------------------------

A good decoder API should be a 3-layered API:

1) dummy layer: feed it, get data back, totally unframed
2) frame layer: operates on specific 'frames' of data at a time
3) wizard layer: access to low-level functions specific to the codec

The top layers are generally implemented by using the lower layers, of course. An application can use any one of these layers, and depending on the conceptual overlap between them, can mix calls to the layers.

Note that this document does *not* specify an API. A decoder must be fast and optimal, and creating a general-purpose API for a codec is the best way to break both of those requirements. A higher-level solution like GStreamer is appropriate for that. Instead, this document will attempt to provide guidelines whereby an API for a given codec can be constructed to both closely match the codec's requirements, and be usable by many different types of applications.

Bitstream manipulation
======================

Another area I've spent significant time working on is bitstream manipulation. I have a library called libbitstream (duh) that attempts to provide optimal implementations of getbits and eventually putbits for various architectures. There are quite a few issues here, but for the purposes of this discussion, we can consider the API to be roughly the following:

bitstream_t *bitstream_init();
void bitstream_new_buffer(bitstream_t *bs, uint8 *data, uint32 length);
void bitstream_new_buffer_cb(bitstream_t *bs, bitstream_callback_t *cb);
uint32 bitstream_get(bitstream_t *bs, int numbits);
uint32 bitstream_peek(bitstream_t *bs, int numbits);
void bitstream_flush(bitstream_t *bs, int numbits);
void bitstream_unget(bitstream_t *bs, int numbits, uint32 data);

The callback mechanism is fairly straightforward, once you understand how the bitstream code works. At all times there are two registers with data in them: current and next. Each is paired with a bit count. The current register holds the next bits in the stream. The next register is only accessed when current runs out of bits, at which time it is copied to the current register and a new next register is acquired. Actually, that's only strictly true for _get. _peek has the option of looking non-destructively into the next buffer as needed.

A new next register is obtained from the current memory buffer. If the buffer doesn't have a full register's worth of data, that's fine, within limits (there would be problems atm if current_bits + next_bits is less than a getbits request, to be fixed). Currently there is no support for queueing multiple buffers, but this is easy to add, I just haven't gotten around to it. In callback mode, the callback will be called to retrieve the next buffer.

Wizard Layer
============

This layer consists of functions that are very very codec-specific. For instance, for mpeg video, you'd have something like:

mpeg2_t mpeg2_init();
void mpeg2_set_bitstream(mpeg2_t *dec, bitstream_t *bs);
int mpeg2_parse_sequence(mpeg2_t *dec, mpeg2_sequence_t *seq);
int mpeg2_parse_picture(mpeg2_t *dec, mpeg2_picture_t *pic);
int mpeg2_parse_slice(mpeg2_t *dec, mpeg2_slice_t *slice);
int mpeg2_parse_macroblock(mpeg2_t *dec, mpeg2_macroblock_t *mb);
[ int idct_S16_8x8(sint16 *block); ]
int mpeg2_motion_compensate(mpeg2_t *dec, mpeg2_macroblock_t *mb);
etc...

The above list is pretty rough, I'll have to look at my libmpeg1 code to get a better idea of the sequence again, mostly with regard to reference frames. But you get the idea. With this you can write your own version of decode.c, and do whatever you want with it. You can extract the motion vectors via other functions, or if you really want to expose the struct you can.

Note that each of these functions takes an mpeg2_t*. This is critical: all data must be encapsulated into 'objects', or the library is unsafe to use with other libraries or with another copy of itself. It also encapsulates things like the bitstream_t* where all the data comes from.

Each of these functions runs until it's done, getting bits from the bitstream as necessary. This means that either the bitstream has to be known to have enough data, or be prepared to get more as needed. More about this in a later section.

One variation that could be made either in general (to this recommendation) or possibly to a given function, would be to take a bitstream_t* directly. This might remove the need in some cases for the *dec argument, if that's an issue. However, the utility of this change should be debated based on usage cases, and may be very codec-specific.

Frame layer
===========

This layer is really only a function or two, as it's essentially just the contents of decode.c. It orchestrates the lower-level functions in the right order to decode a new frame, and spit one out. For mpeg2 this is pretty straightforward. For something like mp3 with frames of a determinable size, you'll want to provide a few more functions. For instance:

int mp3_decode_frame_header(mp3_t *dec, mp3_frame_header_t *head);
int mp3_bpf_from_header(mp3_t *dec, mp3_frame_header_t *head);
int mp3_decode_frame(mp3_t *dec, mp3_frame_t *frame);

This allows an application to actively go and find the data necessary, because in the case of mp3 at least, the frame size can be calculated from the 4-byte header, within half a byte (which should be documented, such that either the bpf value is padded up, or the app knows to add one).

Dummy Layer
===========

For some codecs, the dummy layer may not be significantly (or at all) different from the frame layer, simply because there's not that much difference to expose. MPEG video is a good example of this, because the real difference in the frame layer is the fact that you can tell up front how long a frame is. The API would be something like what most APIs seem to resemble these days:

int mpeg2_more_data(mpeg2dec_t *dec, unsigned char *data, int len);
mpeg2_frame *mpeg2_decode_frame(mpeg2dec_t *dec);

Code would look something like:

do {
  buf = acquire_more_data();
} while (mpeg2_more_data(dec, buf, ...));
display(mpeg2_decode_frame(dec));

Which kinda sucks, but hey, it's a *dummy* interface after all <G>

Source issues
=============

One of the main concerns is that using callbacks to get data can suck. Well, yes. But not using callbacks can suck more. And a non-callback interface can be constructed from a callback interface if necessary. In the case of mp3, the dummy layer simply doesn't call the backend calculation routines until it's gathered together (by copying, probably) at least a full frame. For something like mpeg2, the evil-nasty-slow hack of copying till the next start code has to be used. <g>

In a more reasonable environment, like, say, GStreamer <g>, the infrastructure provides a callback-friendly approach. In gst, you have a gst_pad_pull() in the callback, and everything's happy. Zero copy, demand-driven, etc.

I was thinking about a solution kinda like exceptions, except where when the bitstream callback runs out of bits and falls out to the driver function, it can be switched back in. But then I realized that it won't work without general-purpose cothreads available. However, where they are, it can be implemented quietly in the dummy layer without any visible change to the app except better performance.

----------------------------------------------------------------------

So, this isn't quite complete, but I'll work on the few more sections (like out-of-band and error conditions) RSN.

     Erik Walthinsen <om...@te...> - System Administrator
       __
      /  \           GStreamer - The only way to stream!
     |    | M E G A  ***** http://gstreamer.net/ *****
     _\  /_
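A short sketch of how the frame layer above would be driven, using the mp3 function names from Erik's proposal; everything else here (read_exact, the buffers, the samples field) is assumed for illustration:

/* Hypothetical driver for the proposed mp3 frame layer: read the
 * 4-byte header, compute bytes-per-frame, then read and decode
 * exactly one frame's worth of data. */
mp3_frame_header_t head;
mp3_frame_t frame;
int bpf;

while (read_exact(fd, header_buf, 4) == 4) {
    mp3_decode_frame_header(dec, &head);       /* parse the 4 header bytes */
    bpf = mp3_bpf_from_header(dec, &head);     /* frame size, maybe +1 pad */
    read_exact(fd, frame_buf, bpf - 4);        /* rest of this frame */
    mp3_decode_frame(dec, &frame);             /* decode to PCM */
    output(frame.samples);                     /* assumed field name */
}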
From: <vi...@po...> - 2001-10-27 05:03:01

On Fri, Oct 26, 2001 at 05:40:45PM -0700, vi...@po... wrote:
> On Fri, Oct 26, 2001 at 04:40:52PM -0700, vi...@po... wrote:
> > is_sequence_needed never gets unset. i printed the buffer i'm
> > passing into mpeg2_decode_data and i see a 0xb3 every few hundred
> > bytes. i tried mpeg2dec-0.2.1-cvs -- it failed in the same way.
> > It seems like there is something strange about mpeg1 video streams
> > that confuses decode.c:copy_chunk.
> >
> > What else can i try? Do you want to reproduce this on your box?
>
> Maybe my mpeg1 test file isn't encoded properly. What sequence
> of bytes is libmpeg2 looking for? Is 0x00 0xb3 enough or does
> it need 0x00 00 00 b3 ? What does the mpeg1 spec say?

OK, my mpeg1 test file was broken. i got a real mpeg1 and it works fine.

--
Victory to the Divine Mother!! ... after all,
http://sahajayoga.org http://why-compete.org
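To answer the byte-sequence question for the record: MPEG-1/2 start codes are the 3-byte prefix 00 00 01 followed by one code byte, so a sequence header begins with the 4-byte pattern 00 00 01 B3. A minimal scanner in the spirit of the chunk copying discussed here (illustrative only, not the actual decode.c:copy_chunk):

#include <stddef.h>

/* Return a pointer to the first 00 00 01 start-code prefix in
 * [p, end), or NULL; p[3] then holds the start code value
 * (0xB3 for a sequence header). */
static const unsigned char *find_start_code(const unsigned char *p,
                                            const unsigned char *end)
{
    for (; p + 4 <= end; p++)
        if (p[0] == 0x00 && p[1] == 0x00 && p[2] == 0x01)
            return p;
    return NULL;
}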
From: <vi...@po...> - 2001-10-27 00:40:21

On Fri, Oct 26, 2001 at 04:40:52PM -0700, vi...@po... wrote:
> is_sequence_needed never gets unset. i printed the buffer i'm
> passing into mpeg2_decode_data and i see a 0xb3 every few hundred
> bytes. i tried mpeg2dec-0.2.1-cvs -- it failed in the same way.
> It seems like there is something strange about mpeg1 video streams
> that confuses decode.c:copy_chunk.
>
> What else can i try? Do you want to reproduce this on your box?

Maybe my mpeg1 test file isn't encoded properly. What sequence of bytes is libmpeg2 looking for? Is 0x00 0xb3 enough or does it need 0x00 00 00 b3 ? What does the mpeg1 spec say?

--
Victory to the Divine Mother!! ... after all,
http://sahajayoga.org http://why-compete.org
From: <vi...@po...> - 2001-10-26 23:40:32

On Fri, Oct 26, 2001 at 03:46:57PM -0700, vi...@po... wrote:
> Here's the patch i made to verify that libmpeg2 is not getting restarted
> after a DISCONTINUOUS event : mpeg2dec->decoder->is_sequence_needed = 1;
>
> Basically, as soon as is_sequence_needed gets set, i stop getting
> vo * frame warnings even though g_warning ("mpeg2dec with %d", size);
> keeps showing more data getting pumped into libmpeg2.
>
> This only happens with mpeg1. mpeg2 works flawlessly. i'm using
> mpeg2dec-0.2.0.tar.gz and current gstreamer CVS.

is_sequence_needed never gets unset. i printed the buffer i'm passing into mpeg2_decode_data and i see a 0xb3 every few hundred bytes. i tried mpeg2dec-0.2.1-cvs -- it failed in the same way. It seems like there is something strange about mpeg1 video streams that confuses decode.c:copy_chunk.

What else can i try? Do you want to reproduce this on your box?

--
Victory to the Divine Mother!! ... after all,
http://sahajayoga.org http://why-compete.org
From: <vi...@po...> - 2001-10-26 22:46:38

Here's the patch i made to verify that libmpeg2 is not getting restarted after a DISCONTINUOUS event : mpeg2dec->decoder->is_sequence_needed = 1;

Basically, as soon as is_sequence_needed gets set, i stop getting vo * frame warnings even though g_warning ("mpeg2dec with %d", size); keeps showing more data getting pumped into libmpeg2.

This only happens with mpeg1. mpeg2 works flawlessly. i'm using mpeg2dec-0.2.0.tar.gz and current gstreamer CVS.

--
Victory to the Divine Mother!! ... after all,
http://sahajayoga.org http://why-compete.org
From: Gernot Z. <gz...@ly...> - 2001-10-23 14:15:17

Hej Bruno !

> My only problem with this is, isn't this too MPEG specific? The API should
> not be designed this way, as implementing different codecs, as mjpeg for
> example, would be a pain with it. Let's keep this in mind when we write this
> API.

This is not the task of this API. There are already a bunch of better general media frameworks (dmSDK, and GStreamer, www.gstreamer.net), and libmpeg2 is already integrated there. No need to worry about general format handling.

Servus,
  Gernot
From: Bruno R. B. <bar...@uf...> - 2001-10-23 12:44:16

On Monday 22 October 2001 11:54 pm, Billy Biggs wrote:
> Maybe Arpi you could describe how you think a minimal API could look?
>
> > What about leaving this to the players ? Just export slice_* and
> > header_* functions and leave the right to the players to implement
> > their decode.c
>
> I think this is a good idea to start with.

My only problem with this is, isn't this too MPEG specific? The API should not be designed this way, as implementing different codecs, mjpeg for example, would be a pain with it. Let's keep this in mind when we write this API.

[]'s

Bruno R. Barreyra
bar...@uf...
From: Billy B. <ve...@du...> - 2001-10-23 03:48:48

Arpi (ar...@th...):

> > IV - libmpeg2 API
> [...]
>
> I think demuxer is out of scope of the decoder. decode.c has an ES
> demuxer with internal buffering - I removed it in mplayer and replaced
> it with my very simplified version.
>
> I think you should split this. Provide a header/slice layer and a
> demuxer layer. So people who have already demuxed content can call
> the header/slice parsers. People who know nothing about mpeg streams
> just call the demuxer with the raw stream and it will do the
> ps/es/whatever parsing.

I tend to agree. As quickly as possible, let's do to libmpeg2 what was done to liba52. Strip out much of the intelligence and leave a more hardcore API, then we can start building easier-to-use APIs on top of that.

o Quicker release.
o Broader applicability.
o Move the hard work aside while we get something useable.

Maybe Arpi you could describe how you think a minimal API could look?

> What about leaving this to the players ? Just export slice_* and
> header_* functions and leave the right to the players to implement
> their decode.c

I think this is a good idea to start with.

--
Billy Biggs bb...@du...
http://www.billybiggs.com/ wb...@uw...
From: Gernot Z. <gz...@ly...> - 2001-10-22 14:12:45

Hej folks !

You find the code that I talked about under http://www.lysator.liu.se/~gz/storm.tar.gz

video_vtex.c is a minimal callback system for the app. It is accepted as a vo plugin by libmpeg2, and after it has internally converted the Y,U,V planes to YUV or RGB, it returns the frame buffers with callbacks to the application. The framebuffer pointers are also provided by the application callbacks, which allows you to have as many frame buffers as you like !

texmpeg.cc contains the high-level routines that interface with the "host application" that wishes to utilize MPEG2 playback. (For an example of the high-level calls see desert.cc, the main app.)

whirl_mpeg.cc (based on Arpi's MPlayer 0.17, but heavily rehacked - I call it a "Frankenstein version" of MPlayer and mpeg2dec ;) ) contains the playback loops that are forked away when the MPEG2 decoding starts and which are remote-controlled by texmpeg. Audio handling is still implicit inside the MPlayer code (I would fancy a similar system to the one used in video_vtex, but that will have to wait, see TODO).

You find a test screenshot on http://www.lysator.liu.se/~gz/storm_testpic.png

Servus,
  Gernot

/-----------------------------W-E-L-C-O-M-E------------------------------\
T The Austria <=> Sweden connection..... T
| E-Mail: gz...@ly... H
O Homepage: http://www.lysator.liu.se/~gz E
\------------------------------F-U-T-U-R-E-------------------------------/
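A rough sketch of the callback scheme Gernot describes - the application hands the vo plugin a buffer-request callback and a frame-done callback, so buffer ownership stays entirely on the application side. All names here are illustrative assumptions, not the actual video_vtex.c interface:

#include <stdint.h>

/* Hypothetical app-driven vo: the plugin asks the app for a buffer to
 * convert the decoded frame into, then hands the finished frame back.
 * The app can therefore keep as many frame buffers as it likes. */
typedef struct {
    uint8_t * (*get_buffer) (void *app, int width, int height); /* app allocates */
    void (*frame_done) (void *app, uint8_t *buf, double pts);   /* app displays */
    void *app;                                                  /* opaque handle */
} vtex_callbacks_t;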
From: Gernot Z. <gz...@ly...> - 2001-10-22 13:45:18

Hej Michael !

> been so responsive lately but I think its time to make a kind of
> roadmap and ask for comments about what to do and how to prioritize
> things.

well, here you go:

> I - figuring out a nice way to export all the information necessary to libvo
>
> This includes frame rate, aspect ratio, pan-scan offsets, display
> size, information about the exact color space, which field to display
> first, skipped frames or fields, etc...

Sounds good ! The thing that would be most important for me is information on the presentation time stamp, so that I don't have to calculate my own ones on the libvo side.

> II - behaviour when decoding bad streams
>
> This is a pretty huge weakness in libmpeg2 right now. After 0.2.0 I
> started fixing some of the crash sources I knew about but this is not
> complete yet.

I don't have any problems with that (I can jump forwards and backwards in the stream with Arpi's demultiplexer, and it won't crash the standard libmpeg2 I'm using) ... but I guess it is pretty important for half-damaged streams ..

> III - optimizations
>
> Since 0.2.0 I have added the altivec optimizations. It's about 90%

What I would need is a way to either parallelize the decoding (threads/processes, can be only optional) or a drop_frame capability (like Arpi has) - I am using an R10000 CPU with 250 MHz, and it is just an edge too slow to keep up with the decoding of PAL MPEG2 streams from DVDs (which is my ultimate goal over here).

It would also be very advantageous to have an mpeg2dec IDCT that decodes directly to YUV pixels (without separate layers) to speed up decoding on these CPUs even more (since graphics display is done with DMA downloads and on-the-fly YUV->RGB conversion). ... but if this is too specific to be implemented in libmpeg2, I would only need some data structs (some kind of YUV framebuffer pointer, and a variable that chooses the active mode, or possibilities to overload meminit and the IDCT itself, so I can just "abuse" the Y-plane buffer pointer), and I'll add a C IDCT by myself ... :-)

> We can always try and get more optimizations here and there, but these
> can be done in parallel with the other stuff so we don't need to plan
> for it too much right now.

Right.

> IV - libmpeg2 API
>
> This is the really hard part IMHO. On the other hand, this is
> something I've been pushing back for probably too much time already.

Yes. I think the main problem was that mpeg2dec is too simple, and doesn't show the demands and problems that most MPEG2 player developers have.

> Some say the current API is too hard to use already and that we should
> push more stuff (demuxer etc) into libmpeg2. Some say they want to

Too hard ? An MPEG2 decoder API can hardly be more simple than that ;) ... the main problem that I see is that the communication flow between libmpeg2 and the app is too limited ... all the necessary information has to be extracted by hacks (including mpeg2_internal.h) and workarounds (feeding libmpeg2 with two packets, and then stopping the process to find out about the MPEG2 stream format that was stored in the stream).

> have a get_decoded_mpeg2_frame() function and register a callback for
> when libmpeg2 needs more data. I think these solutions are not
> flexible enough for general usage by everyone. I'm not quite sure what
> the proper solution is though.

I don't even think you need such a solution ! Patric Ljung's video_vtex vo callback that he sent to you in May solves that, it allows the application to regain control for all frames that it receives. :-)

As a proof of concept I'll send you some code that I am working on (it is still on "a sandbox level" and far from general enough to be released in public) - you won't be able to compile it (currently a rather delicate process :-} ), but texmpeg and whirl_mpeg (based on Arpi's MPlayer) show how you can utilize (standard) libmpeg2 and video_vtex to get a player with A/V sync.

> so implementing an API is not that much work really - the only problem
> is deciding what API you want to implement !

I would suggest that you break down libmpeg2 into small code snippets, and let them all be called over internal function pointers that are saved in the central struct that the application receives - libjpeg implemented it this way, and I can only recommend it :-) It allows you to "overload" any function you need (and if you don't feel secure with it in the beginning, you simply base it on a copy of the internal function source code from libmpeg2).

> * libmpeg2 should not have to call libvo directly (except possibly for
> slice yuv2rgb callbacks). Instead we should pass libmpeg2 a buffer
> that it can fill. This implies that one libmpeg2 call should never
> decompress more than one full frame. (By the way I'm quite happy about
> how I did this in liba52 - but the mpeg2 case is harder of course)

Have a look at video_vtex, you'll like it, it does exactly that (and even more generally, it lets the application provide the _buffer filling functions_)...

> * I don't like the notion of read-mpeg2-stream callbacks, because if
> you need this then it means you can not tell in advance if you have
> enough data to decompress a full frame. I don't think it would be a
> good idea to block inside such a callback waiting for more data to
> become available.

well, a good player has to be multi-processing/threaded anyway, and then it won't matter if it blocks inside the read-mpeg2-stream callback.

> * It should be flexible enough to implement very diverse
> applications. For example in a video editor you might want to
> decompress the frame number 12345 first - so you'd have to find the
> previous I frame (not libmpeg2's job), decompress that I frame, the
> next P frames (skipping all B frames in the middle), and maybe one B
> frame if 12345 is a B frame. Current API is not flexible enough to do
> that kind of stuff. If you really have the flexibility you can also
> implement stuff like playing a stream backwards, etc...

Take that for later - this is going to be very complex.

> I'm still looking at how to do a proper interface that meets all these
> goals. I suspect such a generic interface will be harder to use than I
> would want though, and maybe expose too much of the mpeg internals. So

Why are you afraid of exposing them ?

> maybe in addition to that interface, we also need to build a higher
> level interface that is easier to use but maybe not as flexible ??? or

Yes, exactly. People that want to extract an MPEG2 stream straightforwardly will use mpeg2dec and its "easy" API - others will have to delve down into the API documentation to understand how libmpeg2 works.

> is it possible to build an easy to use interface, that still lets you
> do all the funky stuff if you ask for it ???

I guess so - but let that rest for later, otherwise you start with too much from the beginning :-)

> I'm a bit confused by the interface issues, I think at the very least I
> should look at the mplayer and xine interface mods, but I have not had
> time for it yet. Anyone has good suggestions for a libmpeg2 interface ?

The libjpeg struct and export of all the internal variables is a good one for the beginning, I think, and doesn't take too much work ... then you can extend it with more Get/Set functions, and other convenience functions (that are more high-level and do safety checks).

> this. I'm not sure what to do after that though, I'd like to get some
> feedback about what different projects need and also about the
> libmpeg2 API issues.

I can assist you in the design decisions if you need (and want) help, just ask here on the mailing list (or mail me directly if I unsubscribe) - you are not alone ! ;)

Servus,
  Gernot

/-----------------------------W-E-L-C-O-M-E------------------------------\
T The Austria <=> Sweden connection..... T
| E-Mail: gz...@ly... H
O Homepage: http://www.lysator.liu.se/~gz E
\------------------------------F-U-T-U-R-E-------------------------------/
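The libjpeg-style layout Gernot recommends looks roughly like this - a central struct the application receives, with the decoder's stages reachable (and overloadable) as function pointers. Names and signatures are illustrative, not an actual libmpeg2 proposal:

#include <stdint.h>

/* Sketch of the "vtable in the central struct" pattern: the app can
 * swap in its own IDCT or motion-comp routine by reassigning a
 * pointer, falling back to the library defaults otherwise. */
typedef struct mpeg2_dec_s mpeg2_dec_t;
struct mpeg2_dec_s {
    /* exported stream state */
    int width, height;
    int frame_rate_code;

    /* overloadable stages */
    void (*idct) (mpeg2_dec_t *dec, int16_t block[64],
                  uint8_t *dest, int stride);
    void (*motion_comp) (mpeg2_dec_t *dec, int mb_addr);
};

/* app side: install e.g. an IDCT that writes YUV pixels directly */
void use_custom_idct (mpeg2_dec_t *dec,
                      void (*my_idct) (mpeg2_dec_t *, int16_t[64],
                                       uint8_t *, int))
{
    dec->idct = my_idct;
}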
From: Arpi <ar...@th...> - 2001-10-22 12:41:12

Hi,

> II - behaviour when decoding bad streams
>
> This is a pretty huge weakness in libmpeg2 right now. After 0.2.0 I
> started fixing some of the crash sources I knew about but this is not
> complete yet.

Yes, if it doesn't slow things down... Currently I'm using that signal/longjmp method and it works very well.

> IV - libmpeg2 API
>
> This is the really hard part IMHO. On the other hand, this is
> something I've been pushing back for probably too much time already.
>
> Some say the current API is too hard to use already and that we should
> push more stuff (demuxer etc) into libmpeg2. Some say they want to

more? less. I think demuxer is out of scope of the decoder. decode.c has an ES demuxer with internal buffering - I removed it in mplayer and replaced it with my very simplified version.

I think you should split this. Provide a header/slice layer and a demuxer layer. So people who have already demuxed content can call the header/slice parsers. People who know nothing about mpeg streams just call the demuxer with the raw stream and it will do the ps/es/whatever parsing.

> have a get_decoded_mpeg2_frame() function and register a callback for
> when libmpeg2 needs more data. I think these solutions are not
> flexible enough for general usage by everyone. I'm not quite sure what
> the proper solution is though.

I like such callbacks MUCH more than pass-a-few-bytes and see-what-happens. I know when I need a frame, then I call the lib to decode it for me. I see no sense in guessing buffer sizes, and calling the lib in a loop until I get something useful. I know - you handle all timing and display in libvo, called from libmpeg2. That is not true for mplayer, and for other players not only using libmpeg2.

> Right now the external libmpeg2 API is implemented in
> libmpeg2/decode.c, on top of the functions header_process_*() and
> slice_process(). This current API is not very good I think, and both
> mplayer and xine have been using heavily modified versions of it. The
> good news though is that decode.c is only 265 lines of code currently,
> so implementing an API is not that much work really - the only problem
> is deciding what API you want to implement !

What about leaving this to the players ? Just export slice_* and header_* functions and leave the right to the players to implement their decode.c

> My general goals for an API are:
>
> * libmpeg2 should not have to call libvo directly (except possibly for
> slice yuv2rgb callbacks). Instead we should pass libmpeg2 a buffer

good idea.

> * I don't like the notion of read-mpeg2-stream callbacks, because if
> you need this then it means you can not tell in advance if you have
> enough data to decompress a full frame. I don't think it would be a
> good idea to block inside such a callback waiting for more data to
> become available.

disagree. you should support _both_ passing frames (without buffering/demultiplexing in libmpeg2) and callback-style stream reading.

A'rpi / Astral & ESP-team

--
mailto:ar...@th... http://esp-team.scene.hu
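A sketch of the split Arpi proposes - the library exports only the header and slice parsers, and each player owns its own equivalent of decode.c. The function names echo the header_process_*()/slice_process() naming mentioned in the thread, but the exact signatures here are assumptions; the start code values (0x00 picture, 0x01-0xAF slices, 0xB3 sequence header) are standard MPEG:

/* Hypothetical player-side decode loop built only on exported
 * header/slice parsers; chunk framing is the player's job. */
void player_decode_chunk (picture_t *picture, uint8_t code, uint8_t *buf)
{
    if (code == 0xb3)                       /* sequence header */
        header_process_sequence_header (picture, buf);
    else if (code == 0x00)                  /* picture header */
        header_process_picture_header (picture, buf);
    else if (code >= 0x01 && code <= 0xaf)  /* slice start codes */
        slice_process (picture, code, buf);
}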
From: Michel L. <wa...@zo...> - 2001-10-22 02:20:17

OK, I'm back. I've been reinstalling my system lately (hard disk crashed, reinstalled in RAID-1 so I'm not so exposed anymore), and trying to get a life, but things should be mostly in order now.

OK, so there have been many feature requests for libmpeg2 and I haven't been so responsive lately, but I think it's time to make a kind of roadmap and ask for comments about what to do and how to prioritize things. I can see several tracks of development (some of these can be done in parallel):

I - figuring out a nice way to export all the information necessary to libvo

This includes frame rate, aspect ratio, pan-scan offsets, display size, information about the exact color space, which field to display first, skipped frames or fields, etc... maybe also more mpeg-internal information like picture types and the appearance of repeated sequence headers, and maybe even qscale information and/or motion vector information to help provide the necessary information to deblocking and/or error concealment backends (if it doesn't slow down things too much, but it's most probably OK to just save the information somewhere).

I've been promising this to vektor for way too long now, and this is now my first priority. This is not complex stuff, just figuring out exactly what needs to be exported and in which format.

II - behaviour when decoding bad streams

This is a pretty huge weakness in libmpeg2 right now. After 0.2.0 I started fixing some of the crash sources I knew about but this is not complete yet. We need to finish the work that's been started (i.e. make sure we don't crash on bad MC vectors), have a test suite, and maybe also (in a second phase) make sure we exit the slice-parsing function with a known error code when we get a parsing error in the mpeg stream.

III - optimizations

Since 0.2.0 I have added the altivec optimizations. It's about 90% complete - it needs to have two small routines added to save/restore the altivec registers so we respect the altivec ABI. When I do that I will also use the restore routine to do the emms() in the x86 case, instead of the ugly small hack we have right now. For the altivec code, some ppc people told me the scheduling could be improved a lot for 7450 processors, but I think I'm mostly waiting for them to do the work instead of me.

We don't have any VIS (ultrasparc) optimizations yet. I don't care too much about ultrasparc personally, but I know some people do, and right now they rely on mlib code to get reasonable performance. The problem with mlib, though, is that it has very bad precision, and you basically can not pass the mpeg2 conformance tests if you use mlib at all. Oh, and mlib also has a licence that is not compatible with the GPL, so you can not distribute any binaries that include libmpeg2 compiled with mlib support. So, basically, mlib is something I'd really really like to get rid of sometime, and in order not to screw up the people who care about ultrasparc performance, that would mean having our own optimized routines for MC (easy) and IDCT (hard). I started to look at the VIS instruction set and I can probably write such routines, but well, it would probably take at least 2 or 3 weeks at my current (slowish) pace.

We can always try and get more optimizations here and there, but these can be done in parallel with the other stuff so we don't need to plan for it too much right now.

IV - libmpeg2 API

This is the really hard part IMHO. On the other hand, this is something I've been pushing back for probably too much time already.

Some say the current API is too hard to use already and that we should push more stuff (demuxer etc) into libmpeg2. Some say they want to have a get_decoded_mpeg2_frame() function and register a callback for when libmpeg2 needs more data. I think these solutions are not flexible enough for general usage by everyone. I'm not quite sure what the proper solution is though.

Right now the external libmpeg2 API is implemented in libmpeg2/decode.c, on top of the functions header_process_*() and slice_process(). This current API is not very good I think, and both mplayer and xine have been using heavily modified versions of it. The good news though is that decode.c is only 265 lines of code currently, so implementing an API is not that much work really - the only problem is deciding what API you want to implement !

My general goals for an API are:

* libmpeg2 should not have to call libvo directly (except possibly for slice yuv2rgb callbacks). Instead we should pass libmpeg2 a buffer that it can fill. This implies that one libmpeg2 call should never decompress more than one full frame. (By the way, I'm quite happy about how I did this in liba52 - but the mpeg2 case is harder of course.)

* I don't like the notion of read-mpeg2-stream callbacks, because if you need this then it means you can not tell in advance if you have enough data to decompress a full frame. I don't think it would be a good idea to block inside such a callback waiting for more data to become available.

* It should be flexible enough to implement very diverse applications. For example, in a video editor you might want to decompress frame number 12345 first - so you'd have to find the previous I frame (not libmpeg2's job), decompress that I frame, then the next P frames (skipping all B frames in the middle), and maybe one B frame (if 12345 is a B frame). The current API is not flexible enough to do that kind of stuff. If you really have the flexibility, you can also implement stuff like playing a stream backwards, etc...

I'm still looking at how to do a proper interface that meets all these goals. I suspect such a generic interface will be harder to use than I would want though, and maybe expose too much of the mpeg internals. So maybe in addition to that interface, we also need to build a higher level interface that is easier to use but maybe not as flexible ??? Or is it possible to build an easy to use interface that still lets you do all the funky stuff if you ask for it ???

I'm a bit confused by the interface issues. I think at the very least I should look at the mplayer and xine interface mods, but I have not had time for it yet. Anyone have good suggestions for a libmpeg2 interface ?

As for the roadmap: my highest priorities right now are to implement I and II. I think I should do I (libvo API changes) as soon as possible, and do a 0.2.1 release after that. II is important too (at least the don't-crash part - the error reporting can wait a little), so it's probably the next priority; after 0.2.1 I should probably work on this. I'm not sure what to do after that though. I'd like to get some feedback about what different projects need and also about the libmpeg2 API issues.

Cheers,

--
Michel "Walken" LESPINASSE
Of course I think I'm right. If I thought I was wrong, I'd change my mind.
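For reference, the liba52 pattern Michel says he is happy with works roughly like this: the caller does its own framing (a52_syncinfo reports the frame length from the first 7 header bytes), and each decode call handles at most one frame of audio. A condensed sketch using liba52's entry points - note the exact signatures varied a little between a52dec releases, and fill_buffer() is an assumed helper that reads exactly n bytes:

#include <a52dec/a52.h>

/* Caller-driven framing, liba52 style: sync on a frame header, read
 * exactly that many bytes, then decode the frame block by block. */
void decode_one_frame (a52_state_t *state, uint8_t *buf)
{
    int flags, sample_rate, bit_rate, i;
    sample_t level = 1, bias = 0;
    int length = a52_syncinfo (buf, &flags, &sample_rate, &bit_rate);

    if (!length)
        return;                          /* not at a sync point */
    fill_buffer (buf + 7, length - 7);   /* rest of this frame */

    flags = A52_STEREO;
    if (!a52_frame (state, buf, &flags, &level, bias))
        for (i = 0; i < 6; i++)          /* 6 blocks of 256 samples */
            if (a52_block (state))
                break;
}

The point Michel draws from it: the library never blocks waiting for input, because the caller learns up front how many bytes one frame needs.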
From: Billy B. <ve...@du...> - 2001-10-19 19:57:45

Arpi (ar...@th...):

> > > and most of our work (including postprocess, libvo) is common
> > > code, used for many codecs. postprocessing control is outside the
> > > scope of libmpeg2. libmpeg2 (and other codecs too) needs only a
> > > few lines of code to export qscale, which is a required input for
> > > the postprocessing code.
> >
> > Ok sorry, I was just talking about exporting the qscale stuff then.
> > You're correct that the actual processing code should not be inside
> > libmpeg2.
>
> ok
> but it's something used only by me. it increases cpu usage by a few
> percent and has no use for other projects...

Huh? If you have a working deblocking filter, I'd really like to get that in my player too.

> > But please scream real loud if there is something you need exported
> > from libmpeg2 that is not in the base release.
>
> i sent a few patches long time ago - all rejected. i found it's simpler
> to have my own version and apply changes immediately. and now it's too
> late to collect them and merge back to the main trunk... at least it's a
> big work and needs someone with cvs access and a lot of free time.

Well, that's a really disappointing attitude. Personally, I'm pretty ticked off with walken then for just blatantly rejecting useful patches.

Of course I know the feeling, I sent my patch to walken in July and am still waiting for some result. movietime has been waiting for a release now for at least that long. :(

So, walken, what do you have to say? I want to see this issue resolved in such a way that we get a common, featureful libmpeg2 as quickly as possible.

--
Billy Biggs ve...@du...
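The qscale export being argued over is small in practice: the decoder records one quantiser value per macroblock as it parses, and the postprocessing filter reads that map to decide how aggressively to deblock each edge. A sketch of the shape of such an export (names are illustrative, not an actual libmpeg2 interface):

#include <stdint.h>

/* Hypothetical per-picture qscale map exported alongside the decoded
 * frame; a deblocking filter applies stronger filtering where the
 * quantiser (and thus the expected blockiness) was higher. */
typedef struct {
    int mb_width, mb_height;   /* picture size in macroblocks */
    uint8_t *qscale;           /* mb_width * mb_height entries */
} qscale_map_t;

/* decoder side: called once per parsed macroblock */
static inline void export_qscale (qscale_map_t *map, int mb_x, int mb_y,
                                  int quantizer_scale)
{
    map->qscale[mb_y * map->mb_width + mb_x] = (uint8_t) quantizer_scale;
}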