From: Miguel A. Figueroa-V. <mi...@ms...> - 2006-01-05 01:23:07
I guess you guys are too fast for me... I've been trying to reply to Matt's email, only to be interrupted by incoming messages on the subject :) I like the fact that the feedback is flowing!

Amitha Perera wrote:
> 1. Parameter blocks.
>
> First, a class should not inherit from its parameter
> block. Inheritance should be an is-a relationship, and that does not
> apply. The cost is simply a five character prefix:
>
>   struct object_params {
>     int p1, p2;
>   };
>
>   class object {
>     object_params prms_;
>    public:
>     object( object_params const& p ) : prms_(p) { }
>
>     int foo() { return prms_.p1 + prms_.p2; }
>     // instead of
>     // int foo() { return p1 + p2; }
>   };

I favor object composition as well.

> Parameter blocks should have (clearly documented) default values, and
> set methods to change them.
>
>   struct video_out_params {
>     //: The width of the image (default 320).
>     int width_;
>
>     //: The height of the image (default 240).
>     int height_;
>
>     video_out_params()
>       : width_( 320 ),
>         height_( 240 )
>     { }
>
>     //: \sa width
>     video_out_params& set_width( int v ) { width_ = v; return *this; }
>
>     //: \sa height
>     video_out_params& set_height( int v ) { height_ = v; return *this; }
>   };
>
> Using the parameter block is then something like
>
>   video_out_params p;
>   p.set_width( 640 );
>   video_out ostr( p );
>
> or even
>
>   video_out ostr( video_out_params()
>                   .set_width( 640 )
>                   .set_height( 320 ) );

This is similar to the setup I have, except (as noted in my discussion with Matt) that object_params is a hierarchy of classes. Now, I think having one parameter block for all types of camera/frame grabber streams is nice, but can it accommodate all of them without cluttering the use for other devices?
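To make the hierarchy idea concrete, here is a minimal sketch of what I have in mind: common parameters in a base class with chained setters, device-specific ones in a subclass, and a virtual tag/value escape hatch for the obscure options. All names here (vidl2_params, matrox_params, set_property) are illustrative, not the actual vidl2 API.

```cpp
#include <string>

// Base block: parameters every camera/frame grabber stream shares.
struct vidl2_params
{
  int width_;
  int height_;

  vidl2_params() : width_(320), height_(240) {}
  virtual ~vidl2_params() {}

  vidl2_params& set_width(int v)  { width_ = v;  return *this; }
  vidl2_params& set_height(int v) { height_ = v; return *this; }

  // Generic escape hatch: returns false for unrecognized tags
  // (throwing would be the other option).
  virtual bool set_property(const std::string& tag, int value)
  {
    if (tag == "width")  { width_  = value; return true; }
    if (tag == "height") { height_ = value; return true; }
    return false;
  }
};

// Device-specific block: handles its own tags, then defers to the base.
struct matrox_params : public vidl2_params
{
  bool camera_lock_;

  matrox_params() : camera_lock_(false) {}

  virtual bool set_property(const std::string& tag, int value)
  {
    if (tag == "camera_lock") { camera_lock_ = (value != 0); return true; }
    return vidl2_params::set_property(tag, value);
  }
};
```

Generic code (a GUI menu, say) would only see a vidl2_params* and call set_property, so no downcast is needed to reach M_CAMERA_LOCK-style options.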
For example, for Matrox I have options like the following that might not be supported by other devices:

  M_GRAB_SCALE:                M_FILL_DESTINATION
  M_GRAB_START_MODE:           M_FIELD_START
  M_GRAB_FIELD_NUM:            1
  M_GRAB_AUTOMATIC_INPUT_GAIN: M_DISABLE
  M_CAMERA_LOCK:               M_ENABLE

That is why I thought about a params hierarchy that would abstract common parameters into a base class and leave obscure device-specific parameter handling to a subclass.

Addressing Matt's comment:

>> // then configure some settings, possibly through GUI menu items
>> params->set_xxx(...);
>> params->set_yyy(...);
>
> This only works for the parameters in the base class. You would have
> to downcast to set the other parameters, which seems to defeat the
> purpose.

I should have been clearer... You can have something like:

  params->set_property(tag, value); // virtual function

If the tag is not recognized, it can return false or throw an exception. In summary, the question that remains is: can one parameter block class accommodate all of the stream devices without cluttering the use for some devices?

> 2. Factory classes
>
> The way to get a run-time configurable input or output stream would
> be to use a factory class _for_the_whole_stream_.
>
>   vidl2_istream_sptr stream = vidl2_create_stream_from_xml( "config.xml" );
>
> The factory would then understand the available streams, and delegate
> to other classes to actually create the appropriate stream objects.
>
> Run-time changes of the input source is actually a very useful
> thing. Imagine, for example,
>
>   <reader>ffmpeg</reader>
>   <filename>test.avi</filename>
>
> vs
>
>   <reader>directshow</reader>
>   <filename>test.avi</filename>
>
> which allows the same file to be read via different mechanisms. That
> allows the user to work around known shortcomings in the various
> reader implementations.

This clearly explains why I don't like having to fix the stream type at compile time.
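The delegation step of such a factory could look like the sketch below. The XML parsing is elided; assume vidl2_create_stream_from_xml has already recovered the <reader> and <filename> strings. The stream class names and the use of a raw pointer (instead of vidl2_istream_sptr) are my simplifications, not the proposed vidl2 interface.

```cpp
#include <stdexcept>
#include <string>

// Abstract stream interface; the real proposal uses vidl2_istream_sptr.
struct vidl2_istream
{
  virtual ~vidl2_istream() {}
};

// Stand-ins for the ffmpeg- and directshow-backed implementations.
struct vidl2_ffmpeg_istream : public vidl2_istream
{
  explicit vidl2_ffmpeg_istream(const std::string& /*filename*/) {}
};

struct vidl2_dshow_istream : public vidl2_istream
{
  explicit vidl2_dshow_istream(const std::string& /*filename*/) {}
};

// The factory knows the available readers and delegates construction;
// an unknown <reader> tag is an error, never a silent fallback. The
// caller owns the returned stream.
vidl2_istream*
vidl2_create_stream(const std::string& reader, const std::string& filename)
{
  if (reader == "ffmpeg")
    return new vidl2_ffmpeg_istream(filename);
  if (reader == "directshow")
    return new vidl2_dshow_istream(filename);
  throw std::runtime_error("unknown reader: " + reader);
}
```

Because the dispatch happens on a run-time string, the same test.avi can be routed through either backend just by editing config.xml, which is exactly the workaround Amitha describes.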
Addressing Matt's comment:

>> I'm not very confident that we want the vidl2 code to automatically
>> determine the type of stream to use (as is done in vil_load). There
>> could be more than one available stream that works (such as DirectShow
>> and FFMPEG).

While I think that we shouldn't require fixing the stream type at compile time, I'm not sure we want to let the vidl2 code figure out the type automatically either. This is because, as Matt mentions, multiple toolkits could handle it. Although one could argue for letting the first toolkit in the list handle it by default... BTW, how do you specify a device in Windows (the alternative to "/dev/video0")?

More importantly, it might take a lot of effort to probe a toolkit to see whether it can handle a source (not for video files, but for frame grabbers and cameras). For example, in DirectShow you would need to allocate two enumerator COM objects just to get to the first available device. I don't know how fast this would be... and I don't know how often the default would be the one the user wants.

> 3. Memory allocation
>
> I think the default behaviour of read_frame should be to give access
> to an image that is valid until the next call to read_frame. This
> allows a framegrabber to create an image_view that simply wraps the
> framegrabber's memory, if that is possible. Providing a pre-allocated
> memory segment does not allow for this.
>
> If the user wishes to keep the image around for longer, they would
> have to call deep_copy on that. If the efficiency of that becomes a
> question, then a method read_frame_copy() could generate an image_view
> that would not be subsequently modified by the video reader, and the
> video reader could implement this as efficiently as possible.
>
> I think that video users should be aware of the efficiencies
> associated with memory allocation, and should consciously choose to
> do a deep copy.

I agree here.
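The proposed contract is worth spelling out, since it bites silently. Below is a deliberately tiny toy model (not the vil/vidl2 types): read_frame() returns a view that aliases the grabber's single internal buffer, so the view is only valid until the next call, and deep_copy() is what snapshots the pixels.

```cpp
#include <vector>

// Toy stand-in for an image view that wraps (aliases) grabber memory.
struct frame_view
{
  const std::vector<unsigned char>* pixels_;  // points into the grabber buffer

  // Snapshot the pixels so they survive subsequent read_frame() calls.
  std::vector<unsigned char> deep_copy() const { return *pixels_; }
};

// Toy stand-in for an istream backed by one internal buffer.
struct toy_istream
{
  std::vector<unsigned char> buffer_;  // the single grabber buffer
  unsigned char next_value_;           // fake "frame number" pixel fill

  toy_istream() : buffer_(4, 0), next_value_(0) {}

  frame_view read_frame()
  {
    // Overwrite the one buffer in place, as a framegrabber would.
    buffer_.assign(buffer_.size(), next_value_++);
    frame_view v;
    v.pixels_ = &buffer_;
    return v;
  }
};
```

A caller who keeps only the view across a second read_frame() finds it now shows the new frame's pixels; a caller who called deep_copy() first still has the old frame. That is the "consciously choose to deep copy" behaviour Amitha describes.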
But I think there should be an option to establish a pipeline in the stream (multi-buffering), so that read_frame gives access to an image that is valid until the next N calls to read_frame. For example:

  stream.set_pipeline(3); // or set_n_buffers or whatever else

  vil_image_resource_sptr img[3];
  img[0] = stream.read_frame();
  img[1] = stream.read_frame();
  int i = 1;
  while (++i)
  {
    img[i%3] = stream.read_frame();
    process_background_subtraction(img[(i-2)%3], img[(i-1)%3], img[i%3]);
  }

Note that here, when I call set_pipeline(3), I allocate 3 images in the vidl2_istream class and no further allocations are needed.

> 4. Asynchronous capture
>
> In general, I prefer two methods for getting images: advance() and
> current_frame(). The simple loop would then be something like
>
>   while( stream.advance() ) {
>     process( stream.current_frame() );
>   }
>
> For async capture, advance() could be divided into advance_start()
> (non-blocking), advance_wait() (blocking), and is_frame_available()
> (non-blocking).
>
> Scenario 1:
>
>   while( stream.advance_wait() ) {
>     img = stream.current_frame().deep_copy();
>     stream.advance_start();
>     do other processing
>   }
>
> Scenario 2:
>
>   while( stream.advance_wait() ) {
>     img = stream.current_frame().deep_copy();
>     stream.advance_start();
>     while( ! stream.is_frame_available() ) {
>       do idle processing
>     }
>   }

If I understand things correctly, stream.advance_wait() is unnecessary in Scenario 2, right? That is because is_frame_available() was already checked beforehand... And why the deep_copy? Am I to assume that as soon as I call advance_start(), current_frame() is no longer valid? This would be another reason for the multi-buffering explained in the previous item. Then this would work:

Scenario 2:

  stream.set_pipeline(2);
  while (stream.advance_wait())
  {
    img = stream.current_frame(); // no deep_copy
    stream.advance_start();
    process(img);
  }

--Miguel