Is there any way to render interlaced (for TV purposes: 1080i)?
Current solution: I'm rendering my scene every 20 ms and copying odd and even rows to create one frame every 40 ms. This "interlaced" frame then goes to a graphics engine.
The problem is that this takes a lot of copies and scanline operations.
We are building a small scene editor, but the final result must go to a TV out in interlaced mode.
PS: searching the Internet I found this:
*There's no reason to render 576 lines then discard half of them; you can just render 288 lines for each field.
One slightly tricky point is to get the projection matrix correct. The pixel centres for line i (0 <= i < 288) should be at (2i+0.5)/576 for an even field or (2i+1.5)/576 for an odd field. To achieve this, a projection which is correct for a 576-line frame should be pre-multiplied by a Y-translation of +/- 1/1152 for a 288-line field.*
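In code, I understand the suggestion as something like this (an untested sketch using the classic fixed-function GL/GLU calls; SetupFieldProjection is a made-up name, the 720x576 frame and the 45-degree FOV are placeholders, and the sign of the offset may need flipping depending on which field carries the top line):

// Untested sketch of the quoted idea (classic Winapi.OpenGL / GLU bindings).
procedure SetupFieldProjection(OddField: Boolean);
const
  FieldOffset = 1 / 1152; // half a line in normalized device coordinates for a 576-line frame
begin
  glMatrixMode(GL_PROJECTION);
  glLoadIdentity;
  // Pre-multiplying means the translation is applied after the projection,
  // i.e. in clip space; flip the sign if your even/odd convention differs.
  if OddField then
    glTranslated(0, -FieldOffset, 0)
  else
    glTranslated(0, FieldOffset, 0);
  gluPerspective(45.0, 720 / 576, 1.0, 1000.0); // projection correct for the full 576-line frame
  glMatrixMode(GL_MODELVIEW);
end;

Each 288-line field would then be rendered into a half-height viewport with this projection active.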
Does anyone know how to do this correctly, or where to find more details?
Thanks in advance.
Hi Rick
I also develop for TV.
In my experience, the suggestion to render at "half height" is not applicable.
Or rather: it depends on how the (TV) graphics engine you are using receives its graphics data.
If (as is usual) it receives a single interlaced (merged) frame every 40 ms, you have to do the "merging" job yourself in any case.
Otherwise, if your TV graphics engine can receive two separate fields (and then merges them by itself), you could deliver two distinct fields directly, either full-frame or half-frame, depending on the engine's requirements.
This second approach lets you skip the merging process, but you must be sure that your TV engine really works this way.
In my experience, rendering at half height is not much faster than full-frame rendering (I'm speaking only about the OpenGL rendering process, not the merging). So half-height rendering can improve performance "only" if your TV engine receives two separate half-height fields. Only in that case.
In any case you have to render two fields (full or half), and the times to render full or half are not very different on current hardware.
The bottleneck is the merging process, and you can avoid it only in the case I described above.
Also, rendering at half height adds some complexity to the scene projection.
If merging is needed (the usual case), my suggestion is to implement it directly in assembly or with CUDA.
My approach is this: OpenGL renders two FULL fields within 40 ms (separated in time by 20 ms, obviously). Then I hand these two full images (every 40 ms) to a software layer (a DLL we developed) that merges them (even and odd lines) and then sends the result to the TV graphics engine driver.
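Schematically, the merging step is something like this (a simplified sketch, not our actual DLL; it assumes BGRA buffers of equal size and that field A carries the even lines):

{$POINTERMATH ON}
// Interleave two full-height BGRA renders into one interlaced frame.
procedure MergeFields(FieldA, FieldB, Dest: PByte; Width, Height: Integer);
var
  y, RowBytes: Integer;
begin
  RowBytes := Width * 4; // 4 bytes per BGRA pixel
  for y := 0 to Height - 1 do
    if (y mod 2) = 0 then
      Move(FieldA[y * RowBytes], Dest[y * RowBytes], RowBytes)  // even lines from field A
    else
      Move(FieldB[y * RowBytes], Dest[y * RowBytes], RowBytes); // odd lines from field B
end;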
This approach is useful (in fact identical, not merely useful) also for "progressive" output, because OpenGL renders full frames in any case.
Massimo
Hi Massimo, what you describe is exactly what we are doing right now:
I'm rendering my scene every 20 ms and copying odd (frame A) and even (frame B) rows to create one interlaced frame every 40 ms. This frame goes to the graphics engine (the engine needs a complete frame).
What we are thinking of doing, directly in OpenGL, is to draw only the odd or the even lines (filling the lines in between with alpha) and then blend/merge frame A and frame B. This should make a huge performance difference, because this way we don't have to scan the complete frame (even and odd lines) to modify it.
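Something like this is what we have in mind, done entirely on the GPU with a scanline stencil mask (an untested sketch: ComposeFieldsOnGPU and DrawFullScreenQuad are made-up helper names, the latter drawing a textured quad over the viewport; it assumes the two field renders are already in textures and a pixel-aligned orthographic projection is active):

procedure ComposeFieldsOnGPU(TexA, TexB: GLuint; Width, Height: Integer);
var
  y: Integer;
begin
  // 1. Tag the even scanlines in the stencil buffer (color writes disabled).
  glEnable(GL_STENCIL_TEST);
  glClearStencil(0);
  glClear(GL_STENCIL_BUFFER_BIT);
  glStencilFunc(GL_ALWAYS, 1, 1);
  glStencilOp(GL_REPLACE, GL_REPLACE, GL_REPLACE);
  glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
  glBegin(GL_LINES);
  y := 0;
  while y < Height do
  begin
    glVertex2f(0, y + 0.5);
    glVertex2f(Width, y + 0.5);
    Inc(y, 2);
  end;
  glEnd;
  glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
  // 2. Draw field A where the stencil is 1 (even lines), field B elsewhere.
  glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
  glStencilFunc(GL_EQUAL, 1, 1);
  DrawFullScreenQuad(TexA);
  glStencilFunc(GL_EQUAL, 0, 1);
  DrawFullScreenQuad(TexB);
  glDisable(GL_STENCIL_TEST);
end;

In practice the stencil mask could be built once and reused; it is rebuilt here only to keep the sketch self-contained.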
If the TV engine needs a full (interlaced) frame, you have to do the merge operation in any case, even if you render only the odd and then the even lines of two fields. So you don't save any time with "half height" rendering.
I know that merging is the operation that takes a huge amount of resources (time and CPU load).
For this reason my advice is to implement it in a separate thread directly in assembly language, or (better) to implement it using CUDA (CUDA is perfect for this kind of operation).
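The threading part could be as simple as this (a sketch only, reusing the MergeFields routine sketched above; TMergeThread is an assumed name, and buffer ownership and error handling are omitted):

// Runs the interleave off the render thread, so OpenGL can start the
// next field while the previous pair is being merged.
type
  TMergeThread = class(TThread) // from System.Classes
  private
    FFieldA, FFieldB, FDest: PByte;
    FWidth, FHeight: Integer;
  protected
    procedure Execute; override;
  public
    constructor Create(AFieldA, AFieldB, ADest: PByte; AWidth, AHeight: Integer);
  end;

constructor TMergeThread.Create(AFieldA, AFieldB, ADest: PByte; AWidth, AHeight: Integer);
begin
  FFieldA := AFieldA;
  FFieldB := AFieldB;
  FDest := ADest;
  FWidth := AWidth;
  FHeight := AHeight;
  FreeOnTerminate := True;
  inherited Create(False); // start running immediately
end;

procedure TMergeThread.Execute;
begin
  MergeFields(FFieldA, FFieldB, FDest, FWidth, FHeight);
end;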
Some years ago we experimented with the "half height" way, but the problems outweighed the benefits.
About ten years ago the half-height rendering approach was quite common in TV graphics systems, but the reason was OpenGL rendering performance, not avoiding the merge.
In recent years, OpenGL cards have become fast enough to render two full frames, even in HD and even at the NTSC frame rate (two renders in roughly 33 ms). For this reason, practically no TV graphics system uses "half height" rendering any more.
Hello,
maybe it is also possible to pass the framebuffer directly to the video card? This is how we did it using a DeckLink card. Our only problem is that we are not getting a key signal out of the card, so we are not sure whether the rendering really happens with alpha in the framebuffer.
So if you are using a DeckLink card, I could offer some code.
Cheers, Tom
Hi Rick, this is all we have. If you or other guys have some experience with rendering with alpha to the framebuffer using GLScene, it would be amazing if you would share that too. ;-)
unit MainForm;

interface

uses
  Winapi.Windows, Winapi.Messages, System.SysUtils, System.Variants, System.Classes,
  Vcl.Graphics, Vcl.Controls, Vcl.Forms, Vcl.Dialogs, DeckLinkAPI_TLB_10_3_1,
  Winapi.ActiveX, Vcl.StdCtrls, Vcl.ExtCtrls, Vcl.ImgList, Vcl.ComCtrls,
  GLSimpleNavigation, GLScene, GLObjects, GLCoordinates, GLBaseClasses, GLCadencer,
  GLCrossPlatform, GLMaterial, GLWin32Viewer, GLUtils, GLGraphics, GLContext,
  OpenGLTokens, GLAsyncTimer, GLFullScreenViewer, GLFireFX;

type
  // Callback for scheduled playback; implements IDeckLinkVideoOutputCallback.
  TCallBackProc = class(TInterfacedObject, IDeckLinkVideoOutputCallback)
  protected
    m_refCount: Integer;
  public
    function ScheduledFrameCompleted(const completedFrame: IDeckLinkVideoFrame;
      res: _BMDOutputFrameCompletionResult): HResult; stdcall;
    function ScheduledPlaybackHasStopped: HResult; stdcall;
    function QueryInterface(const IID: TGUID; out Obj): HResult; stdcall;
    function _AddRef: Integer; stdcall;
    function _Release: Integer; stdcall;
  end;

  TFormMain = class(TForm)
    PanelPrev: TPanel;
    MemoLog: TMemo;
    RBKeyingOff: TRadioButton;
    RBInternalKeying: TRadioButton;
    RBExternalKeying: TRadioButton;
    StatusBarMain: TStatusBar;
    CheckBoxShowPrev: TCheckBox;
    CheckBoxDisplayVideoFrameSync: TCheckBox;
    CheckBoxExtendedLogging: TCheckBox;
    GLMaterialLibrary1: TGLMaterialLibrary;
    GLCadencer1: TGLCadencer;
    GLScene1: TGLScene;
    GLCamera1: TGLCamera;
    Sprite1: TGLSprite;
    DummyCube1: TGLDummyCube;
    Sprite2: TGLSprite;
    ScrollBox1: TScrollBox;
    GLSceneViewer1: TGLSceneViewer;
    GLMemoryViewer1: TGLMemoryViewer;
    CheckBoxRenderFromMemory: TCheckBox;
    Timer1: TTimer;
    procedure FormCreate(Sender: TObject);
    procedure FormDestroy(Sender: TObject);
    procedure RBKeyingOffClick(Sender: TObject);
    procedure RBInternalKeyingClick(Sender: TObject);
    procedure RBExternalKeyingClick(Sender: TObject);
    procedure CheckBoxDisplayVideoFrameSyncClick(Sender: TObject);
    procedure CheckBoxExtendedLoggingClick(Sender: TObject);
    procedure GLCadencer1Progress(Sender: TObject; const deltaTime, newTime: Double);
    procedure GLAsyncTimer1Timer(Sender: TObject);
    procedure GLSceneViewer1AfterRender(Sender: TObject);
    procedure GLMemoryViewer1AfterRender(Sender: TObject);
    procedure CheckBoxRenderFromMemoryClick(Sender: TObject);
    procedure GLMemoryViewer1PostRender(Sender: TObject);
    procedure CheckBoxShowPrevClick(Sender: TObject);
    procedure Timer1Timer(Sender: TObject);
  private
    m_deckLinkOutput: IDeckLinkOutput;
    m_videoFrameGDI: IDeckLinkMutableVideoFrame;
    m_DeckLinkKeyer: IDeckLinkKeyer;
    m_frameWidth, m_frameHeight: Integer;
    m_frameDuration, m_TimeScale, m_TotalFrames: Int64;
    m_UseDisplayVideoFrameSync: Boolean;
    m_ScheduledCallback: TCallBackProc;
    DoExtendedLogging: Boolean;
    DoRenderFromMemory: Boolean;
    procedure SetupDecklink(DeckLink: IDeckLink);
    procedure AddToLog(Msg: string);
  end;

var
  FormMain: TFormMain;

const
  NearClipping = 1;
  FarClipping = 1000;

implementation

{$R *.dfm}

uses
  System.Math;

procedure TFormMain.AddToLog(Msg: string);
begin
  MemoLog.Lines.Add(FormatDateTime('dd.mm.yyyy hh:nn:ss', Now) + ': ' + Msg);
  SendMessage(MemoLog.Handle, EM_LINESCROLL, 0, MemoLog.Lines.Count);
end;

procedure TFormMain.CheckBoxDisplayVideoFrameSyncClick(Sender: TObject);
var
  StopTime: Int64;
begin
  m_UseDisplayVideoFrameSync := CheckBoxDisplayVideoFrameSync.Checked;
  if not m_UseDisplayVideoFrameSync then
    m_deckLinkOutput.StartScheduledPlayback(0, m_TimeScale, 1.0)
  else
    m_deckLinkOutput.StopScheduledPlayback(0, StopTime, m_TimeScale);
end;

procedure TFormMain.CheckBoxExtendedLoggingClick(Sender: TObject);
begin
  DoExtendedLogging := CheckBoxExtendedLogging.Checked;
end;

procedure TFormMain.CheckBoxRenderFromMemoryClick(Sender: TObject);
begin
  DoRenderFromMemory := CheckBoxRenderFromMemory.Checked;
end;

procedure TFormMain.CheckBoxShowPrevClick(Sender: TObject);
begin
  GLSceneViewer1.Visible := CheckBoxShowPrev.Checked;
end;

procedure TFormMain.FormCreate(Sender: TObject);
var
  Res: HRESULT;
  DeckLinkIterator: IDeckLinkIterator;
  DeckLink: IDeckLink;
  deckLinkAPIInformation: IDeckLinkAPIInformation;
  deckLinkVersion: Int64;
  dlVerMajor, dlVerMinor, dlVerPoint: Integer;
  numDevices: Integer;
  deviceNameBSTR: WideString;
  Spr: TGLSprite;
  I: Integer;
  MediaPath: string;
begin
  SetGLSceneMediaDir;
  // Load texture for Sprite2; this is the hand-coded way using a PersistentImage.
  // Sprite1 uses a PicFileImage, so its image is automagically loaded by
  // GLScene when necessary (no code is required).
  // (Had I used two PicFileImage, I would have avoided this code.)
  GLMaterialLibrary1.TexturePaths := GetCurrentDir;
  MediaPath := GLMaterialLibrary1.TexturePaths + '\';
  Sprite1.Material.Texture.Image.LoadFromFile(MediaPath + 'test.png');
  GLMaterialLibrary1.Materials[0].Material.Texture.Image.LoadFromFile('test.png');
  // New sprites are created by duplicating the template "Sprite2"
  for I := 1 to 9 do
  begin
    Spr := TGLSprite(DummyCube1.AddNewChild(TGLSprite));
    Spr.Assign(Sprite2);
  end;
  GLCamera1.FocalLength := 1800 * 50 / 333;
  DoExtendedLogging := CheckBoxExtendedLogging.Checked;
  DoRenderFromMemory := CheckBoxRenderFromMemory.Checked;
  // Initialize the COM interface for DeckLink:
  Res := CoInitialize(nil);
  if FAILED(Res) then
  begin
    AddToLog(Format('Initialization of COM failed - result = %08x.', [Res]));
    Exit;
  end;
  // DeckLink iterator:
  Res := CoCreateInstance(CLASS_CDeckLinkIterator, nil, CLSCTX_ALL, IID_IDeckLinkIterator, DeckLinkIterator);
  if FAILED(Res) then
  begin
    AddToLog('A DeckLink iterator could not be created. The DeckLink drivers may not be installed.');
    if Assigned(DeckLinkIterator) then
      DeckLinkIterator := nil;
    Exit; // nothing more to do without an iterator
  end;
  // DeckLink API information:
  Res := DeckLinkIterator.QueryInterface(IID_IDeckLinkAPIInformation, deckLinkAPIInformation);
  if Succeeded(Res) then
  begin
    deckLinkAPIInformation.GetInt(BMDDeckLinkAPIVersion, deckLinkVersion);
    dlVerMajor := (deckLinkVersion and $FF000000) shr 24;
    dlVerMinor := (deckLinkVersion and $00FF0000) shr 16;
    dlVerPoint := (deckLinkVersion and $0000FF00) shr 8;
    AddToLog(Format('DeckLinkAPI version: %d.%d.%d', [dlVerMajor, dlVerMinor, dlVerPoint]));
  end;
  // DeckLink devices:
  numDevices := 0;
  while (DeckLinkIterator.Next(DeckLink) = S_OK) do
  begin
    // Print the model name of the DeckLink card
    Res := DeckLink.GetModelName(deviceNameBSTR);
    if Res = S_OK then
    begin
      Inc(numDevices);
      AddToLog(Format('Found Blackmagic device: %s', [deviceNameBSTR]));
      // Initialize the DeckLink device:
      SetupDecklink(DeckLink);
    end;
    if Assigned(DeckLink) then
      DeckLink := nil;
  end;
  if Assigned(DeckLinkIterator) then
    DeckLinkIterator := nil;
  if (numDevices = 0) then
    AddToLog('No Blackmagic Design devices were found.');
end;

procedure TFormMain.FormDestroy(Sender: TObject);
begin
  Application.OnIdle := nil;
  m_deckLinkOutput.DisableVideoOutput;
  m_DeckLinkKeyer := nil;
  m_deckLinkOutput := nil;
  m_ScheduledCallback := nil;
  // Uninitialize COM on this thread
  CoUninitialize;
end;

procedure TFormMain.GLAsyncTimer1Timer(Sender: TObject);
begin
  //
end;

procedure TFormMain.GLCadencer1Progress(Sender: TObject; const deltaTime, newTime: Double);
var
  i: Integer;
  a, aBase: Double;
begin
  // angular reference: 90° per second <=> 4 seconds per revolution
  aBase := 90 * newTime;
  // "pulse" the star
  a := DegToRad(aBase);
  Sprite1.SetSquareSize(4 + 0.2 * cos(3.5 * a));
  // rotate the sprites around the yellow "star"
  for i := 0 to DummyCube1.Count - 1 do
  begin
    a := DegToRad(aBase + i * 8);
    with (DummyCube1.Children[i] as TGLSprite) do
    begin
      // rotation movement
      Position.X := 4 * cos(a);
      Position.Z := 4 * sin(a);
      // undulation
      Position.Y := 2 * cos(2.1 * a);
      // sprite size change
      SetSquareSize(2 + cos(3 * a));
    end;
  end;
end;

procedure TFormMain.GLMemoryViewer1AfterRender(Sender: TObject);
var
  pFrame: Pointer;
  Res: HRESULT;
begin
  Exit; // handler currently disabled; GLMemoryViewer1PostRender is used instead
  Caption := Format('%.1f', [GLMemoryViewer1.Buffer.FramesPerSecond]);
  m_deckLinkOutput.CreateVideoFrame(m_frameWidth, m_frameHeight, m_frameWidth * 4,
    bmdFormat8bitBGRA, bmdFrameFlagFlipVertical, m_videoFrameGDI);
  m_videoFrameGDI.GetBytes(pFrame);
  GLMemoryViewer1.Buffer.RenderingContext.Activate;
  try
    // pFrame := AllocMem(m_frameWidth * m_frameHeight * 4);
    GL.ReadPixels(0, 0, m_frameWidth, m_frameHeight, GL_BGRA, GL_UNSIGNED_BYTE, pFrame);
  finally
    GLMemoryViewer1.Buffer.RenderingContext.Deactivate;
  end;
  with GLMemoryViewer1.Buffer.CreateSnapShotBitmap do
  begin
    SaveToFile('C:\test.bmp');
    Free;
  end;
  Res := m_deckLinkOutput.DisplayVideoFrameSync(m_videoFrameGDI);
  if FAILED(Res) then
  begin
    if DoExtendedLogging then
      case Res of
        E_OUTOFMEMORY: AddToLog('Too many frames are already scheduled');
        E_FAIL: AddToLog('Failure');
        E_ACCESSDENIED: AddToLog('The video output is not enabled.');
        E_INVALIDARG: AddToLog('The frame attributes are invalid.');
      else
        AddToLog('Unknown Failure');
      end;
  end
  else
  begin
    if DoExtendedLogging then
      AddToLog('OK');
    // Inc(m_TotalFrames);
  end;
  // Dispose(pFrame);
  // Application.ProcessMessages;
end;

procedure TFormMain.GLMemoryViewer1PostRender(Sender: TObject);
var
  pFrame: Pointer;
  Res: HRESULT;
begin
  m_deckLinkOutput.CreateVideoFrame(m_frameWidth, m_frameHeight, m_frameWidth * 4,
    bmdFormat8bitBGRA, bmdFrameFlagFlipVertical, m_videoFrameGDI);
  m_videoFrameGDI.GetBytes(pFrame);
  // GLMemoryViewer1.Buffer.RenderingContext.Activate;
  try
    // pFrame := AllocMem(m_frameWidth * m_frameHeight * 4);
    GL.ReadPixels(0, 0, m_frameWidth, m_frameHeight, GL_BGRA, GL_UNSIGNED_BYTE, pFrame);
  finally
    // GLMemoryViewer1.Buffer.RenderingContext.Deactivate;
  end;
  {
  with GLMemoryViewer1.Buffer.CreateSnapShotBitmap do
  begin
    SaveToFile('C:\test.bmp');
    Free;
  end;
  }
  Res := m_deckLinkOutput.DisplayVideoFrameSync(m_videoFrameGDI);
  if FAILED(Res) then
  begin
    if DoExtendedLogging then
      case Res of
        E_OUTOFMEMORY: AddToLog('Too many frames are already scheduled');
        E_FAIL: AddToLog('Failure');
        E_ACCESSDENIED: AddToLog('The video output is not enabled.');
        E_INVALIDARG: AddToLog('The frame attributes are invalid.');
      else
        AddToLog('Unknown Failure');
      end;
  end
  else
  begin
    if DoExtendedLogging then
      AddToLog('OK');
    // Inc(m_TotalFrames);
  end;
  // Dispose(pFrame);
  // Application.ProcessMessages;
end;

procedure TFormMain.GLSceneViewer1AfterRender(Sender: TObject);
var
  pFrame: Pointer;
  Res: HRESULT;
begin
  if not DoRenderFromMemory or not GLSceneViewer1.Buffer.RenderingContext.GL.W_ARB_pbuffer then
  begin
    if DoRenderFromMemory and not GLSceneViewer1.Buffer.RenderingContext.GL.W_ARB_pbuffer then
      AddToLog('W_ARB_pbuffer not supported...');
    Caption := Format('%.1f', [GLSceneViewer1.Buffer.FramesPerSecond]);
    m_deckLinkOutput.CreateVideoFrame(m_frameWidth, m_frameHeight, m_frameWidth * 4,
      bmdFormat8bitBGRA, bmdFrameFlagFlipVertical, m_videoFrameGDI);
    m_videoFrameGDI.GetBytes(pFrame);
    GLSceneViewer1.Buffer.RenderingContext.Activate;
    // GLSceneViewer1.Buffer.RenderingContext.SwapBuffers;
    try
      // pFrame := AllocMem(m_frameWidth * m_frameHeight * 4);
      GL.ReadPixels(0, 0, m_frameWidth, m_frameHeight, GL_BGRA, GL_UNSIGNED_BYTE, pFrame);
    finally
      GLSceneViewer1.Buffer.RenderingContext.Deactivate;
    end;
    Res := m_deckLinkOutput.DisplayVideoFrameSync(m_videoFrameGDI);
    if FAILED(Res) then
    begin
      if DoExtendedLogging then
        case Res of
          E_OUTOFMEMORY: AddToLog('Too many frames are already scheduled');
          E_FAIL: AddToLog('Failure');
          E_ACCESSDENIED: AddToLog('The video output is not enabled.');
          E_INVALIDARG: AddToLog('The frame attributes are invalid.');
        else
          AddToLog('Unknown Failure');
        end;
    end
    else
    begin
      if DoExtendedLogging then
        AddToLog('OK');
      // Inc(m_TotalFrames);
    end;
  end
  else
  begin
    GLMemoryViewer1.Render;
  end;
end;

procedure TFormMain.RBKeyingOffClick(Sender: TObject);
begin
  if (m_DeckLinkKeyer <> nil) then
  begin
    m_DeckLinkKeyer.Disable;
    AddToLog('Keying = off');
  end;
end;

procedure TFormMain.RBInternalKeyingClick(Sender: TObject);
begin
  if (m_DeckLinkKeyer <> nil) then
  begin
    m_DeckLinkKeyer.Enable(0);
    m_DeckLinkKeyer.SetLevel(100);
    AddToLog('Keying = internal');
  end;
end;

procedure TFormMain.RBExternalKeyingClick(Sender: TObject);
begin
  if (m_DeckLinkKeyer <> nil) then
  begin
    m_DeckLinkKeyer.Enable(1);
    m_DeckLinkKeyer.SetLevel(100);
    AddToLog('Keying = external');
  end;
end;

procedure TFormMain.SetupDecklink(DeckLink: IDeckLink);
var
  dm: _BMDDisplayMode;
  sup: _BMDDisplayModeSupport;
  modeNameBSTR: WideString;
  DeckLinkDisplayMode: IDeckLinkDisplayMode;
begin
  m_deckLinkOutput := nil;
  m_videoFrameGDI := nil;
  DeckLinkDisplayMode := nil;
  m_DeckLinkKeyer := nil;
  m_TotalFrames := 0;
  m_frameWidth := 0;
  m_frameHeight := 0;
  m_UseDisplayVideoFrameSync := True;
  m_ScheduledCallback := nil;
  // Initialize the DeckLink device:
  if (DeckLink.QueryInterface(IID_IDeckLinkOutput, m_deckLinkOutput) <> S_OK) then
  begin
    AddToLog('Could not obtain the IDeckLinkOutput interface');
    if Assigned(m_deckLinkOutput) then
      m_deckLinkOutput := nil;
  end
  else
  begin
    dm := bmdModeHD1080i50;
    m_deckLinkOutput.DoesSupportVideoMode(dm, bmdFormat8bitBGRA, bmdVideoOutputFlagDefault, sup, DeckLinkDisplayMode);
    if sup = bmdDisplayModeSupported then
    begin
      if (m_deckLinkOutput.EnableVideoOutput(dm, bmdVideoOutputFlagDefault) <> S_OK) then
      begin
        AddToLog('Could not enable Video output');
        if Assigned(m_deckLinkOutput) then
          m_deckLinkOutput := nil;
      end
      else
      begin
        // Create the DeckLink keyer:
        if DeckLink.QueryInterface(IID_IDeckLinkKeyer, m_DeckLinkKeyer) <> S_OK then
        begin
          AddToLog('Keying not available');
          if Assigned(m_DeckLinkKeyer) then
            m_DeckLinkKeyer := nil;
        end;
        m_frameWidth := DeckLinkDisplayMode.GetWidth;
        m_frameHeight := DeckLinkDisplayMode.GetHeight;
        DeckLinkDisplayMode.GetFrameRate(m_frameDuration, m_TimeScale);
        DeckLinkDisplayMode.GetName(modeNameBSTR);
        AddToLog(Format('Using video mode: %s, width: %d, height: %d, frameDuration: %d, TimeScale: %d',
          [modeNameBSTR, m_frameWidth, m_frameHeight, m_frameDuration, m_TimeScale]));
        // Get a BGRA frame
        if (m_deckLinkOutput.CreateVideoFrame(m_frameWidth, m_frameHeight, m_frameWidth * 4,
          bmdFormat8bitBGRA, bmdFrameFlagFlipVertical, m_videoFrameGDI) <> S_OK) then
        begin
          AddToLog('Could not obtain the IDeckLinkOutput CreateVideoFrame interface');
          if Assigned(m_videoFrameGDI) then
            m_videoFrameGDI := nil;
        end
        else
        begin
          //
        end;
        m_ScheduledCallback := TCallBackProc.Create;
        m_deckLinkOutput.SetScheduledFrameCompletionCallback(m_ScheduledCallback);
      end;
    end;
  end;
end;

procedure TFormMain.Timer1Timer(Sender: TObject);
begin
  Caption := Format('%.1f', [GLMemoryViewer1.Buffer.FramesPerSecond]);
  GLMemoryViewer1.Buffer.ResetPerformanceMonitor;
end;

{ TCallBackProc }

function TCallBackProc.QueryInterface(const IID: TGUID; out Obj): HResult;
const
  IID_IUnknown: TGUID = '{00000000-0000-0000-C000-000000000046}';
begin
  Result := E_NOINTERFACE;
  Pointer(Obj) := nil;
  if IsEqualGUID(IID, IID_IUnknown) then
  begin
    Pointer(Obj) := Self;
    _AddRef;
    Result := S_OK;
  end
  else if IsEqualGUID(IID, IDeckLinkVideoOutputCallback) then
  begin
    // GetInterface(IDeckLinkVideoOutputCallback, Obj);
    Pointer(Obj) := Pointer(IDeckLinkVideoOutputCallback(Self));
    _AddRef;
    Result := S_OK;
  end;
end;

function TCallBackProc.ScheduledFrameCompleted(const completedFrame: IDeckLinkVideoFrame;
  res: _BMDOutputFrameCompletionResult): HResult;
begin
  // completedFrame._Release;
  FormMain.AddToLog('');
  Result := S_OK;
end;

function TCallBackProc.ScheduledPlaybackHasStopped: HResult;
begin
  Result := S_OK;
end;

function TCallBackProc._AddRef: Integer;
begin
  Result := InterlockedIncrement(m_refCount);
end;

function TCallBackProc._Release: Integer;
var
  newRefValue: Integer;
begin
  newRefValue := InterlockedDecrement(m_refCount);
  if newRefValue = 0 then
  begin
    // Destroy;
    Free;
  end;
  Result := newRefValue;
end;

end.
Hi Massimo, I will investigate the "CUDA" way to do it.
I really appreciate your help, thanks a lot!
Regards.
Hi Torud, we are not using DeckLink cards, but maybe I can "translate" the framebuffer transfer to our SDK; that would be awesome.
Regards!