Hello,
I can't understand why the plugin's clip descriptions should depend on the context. It seems a little strange that the plugin defines its clips at the same time as its params (in the describeInContext function). Why wouldn't the "basic" plugin always declare the same clips, regardless of the context given by the host?
I also don't understand how a plugin can have different contexts. Doesn't this idea only make sense when a filter needs multiple input clips (which are declared in the "filter" and "general" contexts)?
Best regards,
Thank you for your quick answer.
I get the idea, but I don't understand why I have to use the context from inArgs to do that. A host has no interest in calling a plugin with different contexts. Instead, the plugin could read the host properties directly to see which contexts the host declares. If there is no general context, the plugin would use only one input clip; otherwise, another kOfxImageEffectPropSupportsXXX property could be used.
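For concreteness, here is a minimal sketch of the alternative being suggested: the plugin inspects the host's declared contexts once and picks a clip layout from that. The HostDescriptor struct is a stand-in for the host's property set (real code would read kOfxImageEffectPropSupportedContexts through the property suite's propGetDimension/propGetString calls), and the input cap of 4 is an arbitrary illustrative value:

```c
#include <string.h>

/* Stand-in for the host descriptor's kOfxImageEffectPropSupportedContexts
   property; a real plugin would read it with OfxPropertySuiteV1. */
typedef struct {
    const char *supportedContexts[8];
    int         nContexts;
} HostDescriptor;

/* Scan the host's declared contexts for a match. */
static int hostSupportsContext(const HostDescriptor *h, const char *context) {
    for (int i = 0; i < h->nContexts; ++i)
        if (strcmp(h->supportedContexts[i], context) == 0)
            return 1;
    return 0;
}

/* The suggested approach: decide the clip layout up front.
   If the host never offers the general context, fall back to a
   single Source clip; the cap of 4 is purely illustrative. */
static int maxInputsForHost(const HostDescriptor *h) {
    return hostSupportsContext(h, "OfxImageEffectContextGeneral") ? 4 : 1;
}
```

This works only as a global decision, though; it cannot express "one input when instantiated as a filter, several when instantiated as a general effect" on the same host, which is what the reply below addresses.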
A plugin has to describe itself uniquely for each context it will be used in by the host. Many hosts support more than a single context; for example, Baselight from FilmLight supports the filter, general, retimer and transition contexts. So the effect needs to be described for each of those contexts, ready for use whenever the user instantiates the plugin in the appropriate way.
The easiest way to make those multiple descriptions is to call into the effect once per context the host is interested in, hence the describe-in-context call. So on something like Baselight, the example 'basic' plugin will be called once to describe itself as a filter, then again to describe itself in the general context.
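A minimal sketch of that per-context description, assuming the usual OFX clip-name conventions ("Source", "SourceFrom"/"SourceTo") and context string values; the EffectDescriptor struct is a stand-in for the real descriptor, where each defineClip line would be a clipDefine() call through OfxImageEffectSuiteV1:

```c
#include <string.h>

/* Illustrative values mirroring the OFX context constants
   (kOfxImageEffectContextFilter etc.). */
#define CONTEXT_FILTER     "OfxImageEffectContextFilter"
#define CONTEXT_GENERAL    "OfxImageEffectContextGeneral"
#define CONTEXT_TRANSITION "OfxImageEffectContextTransition"

#define MAX_CLIPS 8

/* Stand-in for an effect descriptor: records the clip names a real
   plugin would define via clipDefine(). */
typedef struct {
    const char *clips[MAX_CLIPS];
    int         nClips;
} EffectDescriptor;

static void defineClip(EffectDescriptor *d, const char *name) {
    d->clips[d->nClips++] = name;  /* real code: clipDefine(desc, name, &props) */
}

/* Sketch of the describe-in-context action: the host passes the context
   (in real OFX, via kOfxImageEffectPropContext in inArgs), and the plugin
   declares the matching set of clips. */
static void describeInContext(EffectDescriptor *d, const char *context) {
    d->nClips = 0;
    defineClip(d, "Output");                      /* every context has an output */
    if (strcmp(context, CONTEXT_FILTER) == 0) {
        defineClip(d, "Source");                  /* single input */
    } else if (strcmp(context, CONTEXT_TRANSITION) == 0) {
        defineClip(d, "SourceFrom");              /* two inputs to mix between */
        defineClip(d, "SourceTo");
    } else if (strcmp(context, CONTEXT_GENERAL) == 0) {
        defineClip(d, "Source");                  /* extra optional inputs */
        defineClip(d, "Mask");
    }
}
```

On a host like Baselight, the handler above would simply be invoked once per supported context, each call producing a different clip layout from the same effect binary.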
The reason is that not all hosts support all the different ways a plug-in can behave, or can even support a given effect at all.
The prime example is that an editing application will only allow a single-input/single-output effect, whereas a compositing application allows multiple-input, multiple-output effects. Two commercial applications that support OFX in this way are FilmMaster from DigitalVision, which supports only filters on its timeline, and NUKE, which supports multiple inputs in a node graph.
For a plug-in to work optimally, it needs to know how the application intends to use it: multi-input? filter? transition? etc.
This is the reason for contexts: they allow the same effect to be malleable and modify its behaviour according to the abilities of the host application. So in an editor, the 'basic' effect will appear as a single-input filter, and in NUKE it will appear as a multi-input effect.
Does that make sense?