You can subscribe to this list here.

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2010 |  |  | 2 |  |  | 13 | 9 | 4 | 4 | 2 | 4 | 7 |
| 2011 | 5 | 18 | 11 | 1 | 2 |  | 3 |  | 16 | 2 | 2 | 12 |
| 2012 | 12 | 2 | 8 | 16 | 33 | 5 | 5 | 2 |  |  |  | 10 |
| 2013 |  | 4 |  | 14 | 9 |  | 8 |  | 5 | 1 |  |  |
| 2014 |  |  |  |  |  |  |  | 1 | 1 |  |  |  |
| 2015 |  |  |  |  |  | 1 |  |  |  |  |  | 1 |
| 2016 | 1 |  | 1 |  |  | 1 |  |  |  |  |  |  |
From: Médéric B. <Med...@in...> - 2011-09-20 07:37:39
|
Dear Alberto, In case of different frame rates between two layers, pictures of the lower temporal layer are displayed several times to keep the same frame rate across all layers. In a nutshell, if there are two layers (a base layer at 12.5 fps and one enhancement layer at 25 fps), the frames of the base layer are displayed twice to keep the same number of displayed frames across all layers. When decoder_svc_VideoParameter_ImgToDisplay is equal to 2, there is temporal scalability in another layer, so the frame displayed is a copy. decoder_svc_VideoParameter_ImgToDisplay can also be equal to 0 (no frame available) or to 1 (frame to display). Regards, Médéric
-- Médéric Blestel Ingénieur de Recherche / Research Engineer Institut d'Electronique et de Télécommunications de Rennes (IETR) UMR CNRS 6164 Tél : +33 (2) 23 23 85 67 Fax : +33 (2) 23 23 82 62 IETR/Groupe Image INSA DE RENNES 20 AVENUE DES BUTTES DE COESMES CS 70 839 35 708 RENNES CEDEX 7 |
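A minimal sketch, for reference, of how a caller can act on those three ImgToDisplay values. The helper names (fetch_img_to_display, show_picture) are placeholders rather than the OpenSVC Decoder API; only the meaning of 0, 1 and 2 comes from the answer above.

#include <stdio.h>

/* Placeholder decode/display helpers; NOT the OpenSVC Decoder API. */
static int  fetch_img_to_display(int i) { return i % 3; }  /* fake value source */
static void show_picture(int i)         { printf("display picture %d\n", i); }

int main(void)
{
    for (int i = 0; i < 6; i++) {
        switch (fetch_img_to_display(i)) {
        case 0:  /* no frame available yet: keep feeding NAL units */
            break;
        case 1:  /* a newly decoded frame: display it */
            show_picture(i);
            break;
        case 2:  /* duplicate inserted so a lower temporal layer keeps the
                    frame rate of the top layer: display the copy, or skip
                    it if the application paces frames itself */
            show_picture(i);
            break;
        }
    }
    return 0;
}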
From: Mickaël R. <Mic...@in...> - 2011-09-20 06:11:20
|
Hi hong Liu, We will make a new release soon. We will prepare a tar.gz of the source in this release. Hope this helps, Mickael Sent by my iPhone |
From: hong L. <hon...@ya...> - 2011-09-19 21:12:19
|
Hi, I downloaded Players_1.11.tar.bz2 and src_1.11.tar.bz2 on a machine running CentOS 6. I ran mplayer from Players_1.11.tar.bz2 to play H.264 SVC video. It can play it back, but it causes the machine to shut down. I attempted to build it from the source code, but I could not find any notes in src_1.11.tar.bz2 explaining how to build it from source. Could you give me any help with this? Thank you very much. |
From: alberto a. g. <alb...@ho...> - 2011-09-19 18:21:12
|
Thank you Mickael. I suspected the timing was left to the muxer, thank you for setting me straight. I can manage with -fps as an input for now, without bringing a container into play yet. What about the "decoder_svc_VideoParameter_ImgToDisplay" parameter? Why can it take two values, and in the case of double frames, which ones should I consider? Thanks for the quick response. Best regards Alberto Alvarez |
From: Mickaël R. <Mic...@in...> - 2011-09-19 17:49:15
|
A raw stream does not contain timing information. For any timing information, you need a container: mp4, mp2ts, Mkv. That's why it is going as fast as possible. Mickael Sent by my iPhone |
From: alberto a. g. <alb...@ho...> - 2011-09-19 17:29:53
|
Hello I'm opening a new thread with more questions. Your help has been invaluable so far. Regarding the display timing, yesterday's svn showed the video as fast as it was processed. I have come across a quick workaround which pretty much works for me. It is based on the SDL_Delay function, which forces the display to wait 1/FPS. Given the dynamic fps nature of SVC, I have thought of using the NAL indexes MaxTemporalId and TempToDisplay to compute the fraction of the original fps (the max) that is used (linked to the -tempId input). Roughly:
fps_start_time=SDL_GetTicks();
SDL_Display(16, XDIM, YDIM, Y, U, V);
current_fps=FPS>>(DqIdNextNal_Nal_o->MaxTemporalId-DqIdNextNal_Nal_o->TempToDisplay);
SDL_Delay((unsigned int)1000.0/current_fps-(SDL_GetTicks()-fps_start_time));
However, I experienced two issues. 1- I can't find the FPS of the sequence in any header of the raw sequences (is it there yet?). For now it is an input parameter. 2- With an fps reduction factor of 2 I should expect half the frames. However, it shows all the frames (repeated?). Depending on the decoder_svc_VideoParameter_ImgToDisplay value ([1,2]) there are two possible paths, and in my example the frames are displayed from those code sequences alternately. I think the frames are repeated. What are the implications of that parameter? Am I getting it right? Any help would be appreciated. Best Regards, Alberto Alvarez |
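A hedged variant of that pacing idea, for anyone copying it: the shift by (MaxTemporalId - TempToDisplay) and the SDL calls are taken from the snippet above, the function and variable names are illustrative, and the main change is that the wait is clamped so the unsigned arithmetic cannot wrap around when a frame takes longer than its slot.

#include <SDL/SDL.h>

/* Pace the display to the frame rate of the extracted temporal layer.
 * fps_max is still passed in by hand, since a raw .264 stream carries no
 * timing information. */
static void pace_frame(Uint32 frame_start, int fps_max,
                       int max_temporal_id, int temp_to_display)
{
    int fps = fps_max >> (max_temporal_id - temp_to_display);
    if (fps <= 0)
        fps = 1;                                  /* guard against a zero rate */
    Uint32 frame_duration = 1000u / (Uint32)fps;  /* ms per displayed frame */
    Uint32 elapsed = SDL_GetTicks() - frame_start;
    if (elapsed < frame_duration)                 /* skip the wait when late */
        SDL_Delay(frame_duration - elapsed);
}

In the original one-liner, (unsigned int)1000.0/current_fps-(SDL_GetTicks()-fps_start_time) wraps to a huge unsigned value as soon as decoding falls behind, and SDL_Delay then stalls playback instead of just skipping the wait.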
From: alberto a. g. <alb...@ho...> - 2011-09-15 15:12:43
|
Hello! I just noticed. It is amazing to see how active this project is. Thank you. Awesome work. I will continue working on my goals with your code. But first I need to fully understand how things work, which is taking me more than I expected. My displayed sequence seems to run a little too fast. I have been looking for timing information for the frames, but no clue to date. I am confused about SPS data and its mapping to the display. Great job! Best Regards |
From: Médéric B. <Med...@in...> - 2011-09-15 14:14:24
|
Dear Alberto, The SVN version of the Open SVC Decoder is not crashing anymore with the video available on the web site. Thank you for using Open SVC Decoder. Regards, Médéric -- Médéric Blestel Ingénieur de Recherche / Research Engineer Institut d'Electronique et de Télécommunications de Rennes (IETR) UMR CNRS 6164 Tél : +33 (2) 23 23 85 67 Fax : +33 (2) 23 23 82 62 IETR/Groupe Image INSA DE RENNES 20 AVENUE DES BUTTES DE COESMES CS 70 839 35 708 RENNES CEDEX 7 |
From: Médéric B. <Med...@in...> - 2011-09-09 13:38:23
|
Dear Alberto, In the latest SVN revision (you should update), you can find in the "Libs\SVC\libview" directory two files (extract_picture.c, WriteYUV.c) which may help you to dump the yuv video. These files can be copied into the src_1.10 version. You will be able to activate the dumping of the yuv file by enabling the macro WRITE_YUV_ on line 37 of extract_picture.c. The svn version is not crashing on my computer, but I will have a look at the problem. Kind regards, Médéric -- Médéric Blestel Ingénieur de Recherche / Research Engineer Institut d'Electronique et de Télécommunications de Rennes (IETR) UMR CNRS 6164 Tél : +33 (2) 23 23 85 67 Fax : +33 (2) 23 23 82 62 IETR/Groupe Image INSA DE RENNES 20 AVENUE DES BUTTES DE COESMES CS 70 839 35 708 RENNES CEDEX 7 |
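Independently of the WRITE_YUV_ path in extract_picture.c, dumping a decoded picture for offline checking is just three plane writes. The sketch below is not the project's WriteYUV.c: it assumes a plain 4:2:0 planar layout with tightly packed planes (no edge padding), which may differ from the decoder's internal buffers.

#include <stdio.h>

/* Append one 4:2:0 planar frame (Y, then U, then V) to an already open file.
 * width/height are the luma dimensions; each chroma plane is a quarter of
 * the luma size. Returns 0 on success, -1 on a short write. */
static int write_yuv420_frame(FILE *f,
                              const unsigned char *y,
                              const unsigned char *u,
                              const unsigned char *v,
                              int width, int height)
{
    size_t luma   = (size_t)width * (size_t)height;
    size_t chroma = luma / 4;
    if (fwrite(y, 1, luma,   f) != luma)   return -1;
    if (fwrite(u, 1, chroma, f) != chroma) return -1;
    if (fwrite(v, 1, chroma, f) != chroma) return -1;
    return 0;
}

A file written this way opens in any raw YUV viewer, which is often the quickest way to separate decoder bugs from SDL display bugs such as the green-frame issue discussed in this thread.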
From: alberto a. g. <alb...@ho...> - 2011-09-09 10:01:04
|
Hi, I have tested the svn and still have the segfault problem, as with 1.11. When debugging, it shows the first frame http://db.tt/27GmoeO and after decoding some other NALs it finally breaks with a segfault. There I am lost. Where is the dump-to-YUV utility/code? Thanks for the patience. Best regards |
From: Médéric B. <Med...@in...> - 2011-09-09 08:35:07
|
Dear Alberto, It seems that the window's width is not the correct one. However, it is normally set by the decoder. We have tried this version on several operating systems (windows, mac, ), with libsdl 1.3. I think you should use the SVN code, with which we will be able to dump the yuv file to check whether the decoding process is correct. Kind regards, Médéric -- Médéric Blestel Ingénieur de Recherche / Research Engineer Institut d'Electronique et de Télécommunications de Rennes (IETR) UMR CNRS 6164 Tél : +33 (2) 23 23 85 67 Fax : +33 (2) 23 23 82 62 IETR/Groupe Image INSA DE RENNES 20 AVENUE DES BUTTES DE COESMES CS 70 839 35 708 RENNES CEDEX 7 |
From: alberto a. g. <alb...@ho...> - 2011-09-09 08:17:25
|
Hi again, This is the screenshot of the display: http://db.tt/85lMGvM To me it seems like a sampling problem or some misconfiguration between SDL and the code. The bits per pixel are badly interpreted, so the video info is concentrated in the upper part of the display while the rest is green. I would really appreciate some hints on the matter. The system is Ubuntu 10.04 with libsdl1.2. Thanks, Alberto |
From: alberto a. g. <alb...@ho...> - 2011-09-06 09:04:34
|
Hi, I am using Ubuntu 10.04 and the 1.10 version because the latest version caused a segmentation fault for me at run time. I did not dig further into this, though. I have tried different parameters and different videos on the command line with 1.10 and I always get the green display. Maybe it is something to do with the libsdl1.2 I am using, but I am lost there. The commands are, for example, ./svc -h264 video_5.264 -layer 16 Thank you for the quick response. Best regards Alberto |
From: Médéric B. <Med...@in...> - 2011-09-06 08:09:56
|
Dear Alberto, Thank you for using OpenSVCDecoder. First, you should use the latest version of the decoder (1.11), which is more stable. The videos uploaded on the web site have been encoded by myself, so they should work with the latest version of the decoder. I have downloaded the 1.10 version of the decoder and one of the videos (video_5) from the web site. Unfortunately for you, the decoder is working fine. So, you may have done something wrong. Can you give me further information about: * your operating system? * the command line you used? (should be "-h264 video_5.h264 -layer 0" or "-h264 video_5.h264 -layer 16") Kind regards, Médéric -- Médéric Blestel Ingénieur de Recherche / Research Engineer Institut d'Electronique et de Télécommunications de Rennes (IETR) UMR CNRS 6164 Tél : +33 (2) 23 23 85 67 Fax : +33 (2) 23 23 82 62 IETR/Groupe Image INSA DE RENNES 20 AVENUE DES BUTTES DE COESMES CS 70 839 35 708 RENNES CEDEX 7 |
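A side note on the -layer values that recur in this archive (0, 16, and later 32 and 33): they follow the SVC DQId convention, which packs the dependency (spatial) id and the quality id into one number. The helper below only spells out that formula; the function name is ours, and whether a particular build of the decoder interprets -layer exactly this way should be checked against its sources.

/* SVC DQId: 16 * dependency_id + quality_id, so 16 is spatial layer 1 /
 * quality 0, 32 is spatial layer 2 / quality 0, and 33 is spatial layer 2 /
 * quality 1. */
static int dqid(int dependency_id, int quality_id)
{
    return dependency_id * 16 + quality_id;
}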
From: alberto a. g. <alb...@ho...> - 2011-09-06 07:41:24
|
Hello all, First of all, thank you for the awesome work you are doing by providing the world with an open implementation of SVC. I am working on SVC, trying to understand the standalone project (src_1.10). But I have found out that none of the reference video files available at http://sourceforge.net/projects/opensvcdecoder/files/Video%20Streams/ is working for me. The display is all green except for a row at the top with badly decoded pixels. I am quite new to this, but it seems to me that there is a mismatch between the subsampling of the sources and the subsampling used in the code. At least the display looks as if a different subsampling had been selected for a raw YUV sequence. Do you know what the subsampling of the video files is, and where the subsampling is handled in the standalone code? Maybe I am getting it all wrong. I would appreciate any advice you could give me about this matter. Best Regards, Alberto |
From: Daniel Y. <dan...@gm...> - 2011-07-25 13:02:50
|
Dear Mederic, Thank you for the config files. I used them with a 176x144 video at 15 fps and I got a speed of 749 ms/frame. I use JSVM 9.19.14. Command: ./H264AVCEncoderLibTestStatic -pf ../tests/encoder.cfg -lqp 0 45 -lqp 1 30 Here is the log I get:
SUMMARY:             bitrate   Min-bitr   Y-PSNR   U-PSNR   V-PSNR
176x144 @  0.9375    11.1394    11.1394  27.1395  37.1000  37.6895
176x144 @  1.8750    17.0529    17.0529  25.8561  36.8582  37.3022
176x144 @  3.7500    23.3585    23.3585  25.0287  36.7661  37.2075
176x144 @  7.5000    29.4000    29.4000  24.6742  36.7580  37.2050
176x144 @ 15.0000    36.3192    36.3192  24.4259  36.7827  37.2206
176x144 @  0.9375    56.7338    56.7338  38.5099  42.1073  43.4655
176x144 @  1.8750    94.2900    94.2900  36.9405  41.4466  42.5772
176x144 @  3.7500   134.6008   134.6008  35.6831  41.0227  42.1064
176x144 @  7.5000   175.9440   175.9440  34.7756  40.8292  41.9402
176x144 @ 15.0000   219.4392   219.4392  33.9399  40.7471  41.8076
Encoding speed: 749.539 ms/frame, Time: 37476.940 ms, Frames: 50
Thanks, Daniel
On 7/25/11 2:29 PM, "Médéric Blestel" <Med...@in...> wrote: > Dear Daniel, > Thank for using Open SVC Decoder. > I don't have an estimation of the encoding time, but your version seems to be very slow to me. > Which JSVM version are you using? > Kind regards, > Médéric |
From: Daniel Y. <dan...@gm...> - 2011-07-20 12:07:32
|
Hello, Thanks for sharing Open svc decoder. I tried to encode videos with 2 spatial layers (QCIF-CIF) with JSVM, but its speed is very low (about 1 fps). Can you give me an estimation of the encoding frame rate you can achieve with JSVM (for instance on the video sequences presented in your wiki page)? Can I ask you if you can send or share your configuration files? Ultimately I would be interested in achieving real-time encoding. Do you have any suggestions? Have you tried the open codec from p2p-next? http://multimediacommunication.blogspot.com/2009/06/open-source-scalable-video-coding-svc.html Thanks, regards. Daniel |
From: Médéric B. <Med...@in...> - 2011-05-18 07:34:45
|
Dear Alexis, I am sending you sample configuration files for the JSVM encoder. You can specify the IDR period for each layer (cf. IDRPeriod in layer0.cfg or layer1.cfg). Regards, Médéric |
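The attached configuration files are not preserved in this archive, so here is only a rough, unverified sketch of the point being made: each JSVM layer has its own layerN.cfg, and forcing periodic IDR pictures in every layer is what makes spatial-layer switching possible at those points. Only the IDRPeriod key is confirmed by the message above; the value 16 and the comment style are assumptions, to be checked against the example configs shipped with JSVM.

# layer0.cfg (base layer) -- sketch only, other keys omitted
IDRPeriod 16   # assumed value: one IDR every 16 pictures

# layer1.cfg (enhancement layer)
IDRPeriod 16   # keep the same period so switch points line up across layers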
From: Martin A. <ale...@al...> - 2011-05-17 14:21:31
|
Hi, I would like to encode an SVC video with the JSVM encoder that works with the OpenSVC decoder. I am not able to encode a video that allows me to change the spatial layer in Mplayer. I read that spatial layer changes occur only on IDR pictures. Could someone send me a sample configuration file for the JSVM encoder? Thanks. Best regards. Alexis |
From: Eun S. R. <hop...@gm...> - 2011-04-30 00:30:09
|
Hello, First of all, I appreciate the many researchers' great endeavors on the SVC decoder. I have been able to conduct a lot of research with it. BTW, I am now looking for a real-time SVC encoder. Though there are some commercial DSP-based SVC encoders, I couldn't find any open RT-SVC encoder. Some institutes developed one some years ago, but it was not opened to the public. Thus, it would be highly appreciated if you could give me any meaningful information about SW-based open real-time SVC encoders. Thanks, Eun Seok Ryu |
From: Mickaël R. <mic...@in...> - 2011-03-17 23:23:12
|
Siyuan, yes it is for tpcmp and it also works for mplayer, but it doesn't matter as you said since it works for you. Mickaël Le 18 mars 2011 à 00:19, Siyuan Xiang a écrit : > Dear Médéric, > > Thank you very much! It works! > I download the latest version and use PC.c instead of H264_plg.c, I do not know how to use H264_plg.c. Is this used for TCPMP? > Anyway, every layer can be decoded. :) > > Regards, > Siyuan Xiang > --- > > > > On Thu, Mar 17, 2011 at 02:19, Médéric Blestel <Med...@in...> wrote: > Dear Siyuan, > > It seems that the stream is correctly decoded with the last version of the decoder. > Can you confirm it? > > I join you the last version of the h264_plg.c file. > > Kind regards, > Médéric > > > Le 14/03/2011 20:51, Siyuan Xiang a écrit : >> I see. Thank you Mickael and Mederic. >> >> I get an access violation when decoding a bitstream, which has 6 layers. >> The configuration is 320x180 (2 quality layers), 640x360 (2 quality layers), 1280x720 (2 quality layers). Each layer has the same frame rate 24. >> >> The lower resolutions can be decoded without problem. But when I set the DqID to 32 or 33, decoding the 1280x720 resolution, I will get access violation at mb4x4_mode [mode](ptr_img, PicWidthInPix, residu -> AvailMask, locx, locy); in file 264_baseline_decoder\lib_baseline\decode_MB_I.c. >> >> It seems that mode is larger than the array size. >> >> I divide the source video into segment of 17 frames and encode them separately, then concatenate them together. The bitstream has 17x5 = 85 frames. >> >> >> >> >> Regards, >> Siyuan >> --- >> >> >> >> On Mon, Mar 14, 2011 at 02:04, Médéric Blestel <Med...@in...> wrote: >> Dear Siyuan, >> >> To clarify the ghost picture discussion. >> *ImageToDisplay is equals to 2 when a temporal layer has been detected in another layer. >> >> This mechanism helps to keep the frame rate between all layers. >> In fact, when temporal scalability is present, each layer has got different number of frames. >> >> For instance, with a 2 temporal layer (and 1 enhancement layer): >> - Base layer: 150 frames at 15 fps >> - Top layer: 300 frames at 30 fps. >> >> Each layer has to be decoded at different frame rate, between the stream duration is the same (10 s). >> Without this mechanism, the base layer would be decoded at 30 fps instead of 15 fps. So to keep the same video duration, >> *ImageToDisplay is set to 2, and a frame is displayed, and will be displayed again with *ImageToDisplay = 1. >> >> Kind regards, >> Médéric >> >> >> Le 14/03/2011 09:26, Mickaël Raulet a écrit : >> >> With a temporal layer with a lower frame rate, we are inserting ghost pictures to give the same framerate of the enhancement layers. >> Mickaël >> >> Le 14 mars 2011 à 09:23, Siyuan Xiang a écrit : >> >> Hi, all, >> >> I downloaded the latest source from svn, now all the frames can be flushed out. >> I tried a 17-picture bitstream, but found 18 pictures are displayed. The second displayed picture is a ghost picture (I checked *ImageToDisplay which is equal to 2). >> What is a ghost picture? Can I just ignore the ghost picture? >> >> By the way, I found the decoding is faster than before :) >> >> Regards, >> Siyuan >> --- >> >> ------------------------------------------------------------------------------ >> Colocation vs. Managed Hosting >> A question and answer guide to determining the best fit >> for your organization - today and in the future. 
|
From: Siyuan X. <xia...@gm...> - 2011-03-17 23:20:28
|
Dear Médéric, Thank you very much! It works! I downloaded the latest version and used PC.c instead of H264_plg.c; I do not know how to use H264_plg.c. Is this used for TCPMP? Anyway, every layer can be decoded. :) Regards, Siyuan Xiang --- |
From: Médéric B. <Med...@in...> - 2011-03-17 09:20:04
|
#include "type.h" #include "main_data.h" #include "svc_type.h" /* 2011-03-17 10:16:26, application decoder_svc, processor PC_H264_plg type=tcpmp */ #include <stdio.h> #include <stdlib.h> #define uchar unsigned char #define ushort unsigned short #define uint unsigned int #define prec_synchro int #define stream unsigned char #define image_type unsigned char #include "SvcInterface.h" #include "SVCDecoder_ietr_api.h" void init_svc_vectors(SVC_VECTORS *svc); void decode_init_vlc(VLC_TABLES *VLc ); void vector_main_init(MAIN_STRUCT_PF *pf); int readnal_without_start_code(unsigned char* nal,int nal_length,uchar *buffer); void init_int(int *tab); void init_mmo(int num_of_layers,MMO *mmo_stru); void init_slice(SLICE *slice); void init_pps(PPS *sps); void init_sps(SPS *sps); void InitListMmo(LIST_MMO *RefPicListL0); void slice_header_svc(const stream *data,SPS *sps_id,PPS *pps_id,int *entropy_coding_flag,W_TABLES *quantif,LIST_MMO *current_pic,SPS *sps,PPS *pps,int *position,SLICE *slice,MMO *mmo,LIST_MMO RefPicListL0[],LIST_MMO RefPicListL1[],NAL *nal,int *end_of_slice,int *ImgToDisplay,int *xsize,int *ysize,int *AddressPic,int *Crop); void pic_parameter_set(stream *data,uchar *ao_slice_group_id,PPS *pps,SPS *sps,const int NalBytesInNalunit); void decoderh264_init(const int pic_width,const int pic_height); void Display_tcpmp(const int xsize,const int ysize,int edge,unsigned char *Y,unsigned char *U,unsigned char *V,OPENSVCFRAME *CurrFrame); void NextNalDqIdPlayer(ConfigSVC *Buffer,NAL *NAL,int *DqIdMax); void init_nal_struct(NAL *nal,unsigned char NumOfLayer); void NalUnitSVC(stream *data_in,int *nal_unit_type,int *nal_ref_idc,NAL *Nal); void init_int(int *tab); void svc_calculate_dpb(const int total_memory,const int mv_memory,int nb_of_layers,MMO *mmo_struct,SPS *sps); void sei_rbsp(stream *data,int NalInRbsp,SPS *sps,SEI *Sei); void seq_parameter_set(stream *data,SPS *sps); void FlushSVCFrame(SPS *sps,NAL *nal,MMO *mmo,int *address_pic,int *x_size,int *y_size,int *Crop,int *img_to_display); void PrefixNalUnit(stream *data,int *NalinRbsp,NAL *nal,MMO *mmo,SPS *sps,int *EndOfSlice); void subset_sps(stream *data,int * NalInRbsp,SPS *sps,NAL *nal); void NalUnitHeader(const stream *data,int *pos,NAL *nal,int *EndOfSlice); void slice_data_in_scalable_extension_cavlc(const int size_mb,const stream *ai_pcData,int * NalInRbsp,const int *ai_piPosition,const NAL *nal,const SPS *ai_pstSps,PPS *ai_pstPps,const VLC_TABLES *vlc,uchar *ai_slice_group_id,SLICE *aio_pstSlice,uchar *aio_tiMbToSliceGroupMap,uchar *aio_tiSlice_table,DATA *aio_tstTab_block,RESIDU *residu ,int * aio_piEnd_of_slice); void SliceCabac(const int size_mb,uchar *data,int *position,int *NalBytesInNalunit,const NAL *Nal,SPS *sps,PPS *pps,uchar *ai_slice_group_id,short *mv_cabac_l0,short *mv_cabac_l1,short *ref_cabac_l0,short *ref_cabac_l1,SLICE *slice,uchar *MbToSliceGroupMap,uchar *slice_table,DATA *Tab_block,RESIDU *picture_residu,int *end_of_slice); void slice_base_layer_cavlc(const stream *ai_pcData,int * NalInRbsp,const int *ai_piPosition,const SPS *ai_pstSps,PPS *ai_pstPps,const VLC_TABLES *Vlc,uchar *ai_slice_group_id,LIST_MMO *Current_pic,LIST_MMO *RefListl1,NAL *Nal,SLICE *aio_pstSlice,uchar *aio_tiMbToSliceGroupMap,uchar *aio_tiSlice_table,DATA *aio_tstTab_block,RESIDU *picture_residu,int * aio_piEnd_of_slice,short *mv_io,short *mvl1_io,short *ref_io,short *refl1_io); void slice_base_layer_cabac(uchar *data,int *position,int *NalBytesInNalunit,SPS *sps,PPS *pps,uchar *ai_slice_group_id,LIST_MMO *Current_pic,LIST_MMO *RefListl1,NAL 
*Nal,short *mv_cabac_l0,short *mv_cabac_l1,short *ref_cabac_l0,short *ref_cabac_l1,SLICE *slice,uchar *MbToSliceGroupMap,uchar *slice_table,DATA *Tab_block,RESIDU *picture_residu,int *end_of_slice,short *mvl0_io,short *mb_l1_io,short *refl0_io,short *refl1_io); void Decode_P_avc(const SPS *ai_pstSps,const PPS *ai_pstPps,const SLICE *ai_pstSlice,const uchar *ai_tiSlice_table,const RESIDU *picture_residu,const STRUCT_PF *pf,const LIST_MMO *ai_pstRefPicListL0,const LIST_MMO *ai_pstCurrent_pic,W_TABLES *quantif_tab,NAL *Nal,short *aio_tiMv,short *aio_tiRef,uchar *aio_tucDpb_luma,uchar *aio_tucDpb_Cb,uchar *aio_tucDpb_Cr,short *Residu_Img,short *Residu_Cb ,short *Residu_Cr); void Decode_B_avc(SPS *ai_stSps,PPS *ai_stPps ,SLICE *ai_stSlice,uchar *ai_tSlice_table,RESIDU *picture_residu,MAIN_STRUCT_PF *main_vector,LIST_MMO *ai_pRefPicListL0,LIST_MMO *ai_pRefPicListL1,LIST_MMO *ai_pCurrent_pic,W_TABLES *quantif,NAL *Nal,short *aio_tMv_l0,short *aio_tMv_l1,short *aio_tref_l0,short *aio_tref_l1,uchar *aio_tDpb_luma,uchar *aio_tDpb_Cb,uchar *aio_tDpb_Cr,short *Residu_img,short *Residu_Cb ,short *Residu_Cr); void Decode_I_avc(SPS *sps,PPS *pps,SLICE *slice,uchar *slice_table,RESIDU *picture_residu,STRUCT_PF *pf,W_TABLES *quantif_tab,NAL *Nal,uchar *image,uchar *image_Cb,uchar *image_Cr); void FinishFrameSVC(const int NbMb,NAL *Nal,SPS *Sps,PPS *Pps,LIST_MMO *Current_pic,SLICE *Slice,int EndOfSlice,uchar *SliceTab,DATA *TabBlbock,RESIDU *Residu,short *MvL0,short *MvL1,short *RefL0,short *RefL1,int *Crop,int *ImgToDisplay,int *AdressPic,MMO *Mmo,unsigned char *RefY,unsigned char *RefU,unsigned char *RefV,int *xsize,int *ysize); void Decode_P_svc(const int size,const SPS *ai_pstSps,const PPS *ai_pstPps,const SLICE *ai_pstSlice,const NAL *nal,const uchar *ai_tiSlice_table,const DATA *ai_tstTab_Block,RESIDU *residu,STRUCT_PF *baseline_vector,const LIST_MMO *ai_pstRefPicListL0,const LIST_MMO *ai_pstCurrent_pic,W_TABLES *quantif_tab,SVC_VECTORS *svc,short *px,short *py,short *Upsampling_tmp,short *xk16,short *xp16,short *yk16,short* yp16,short *aio_tiMv,short *aio_tiRef,uchar *aio_tucDpb_luma,uchar *aio_tucDpb_Cb,uchar *aio_tucDpb_Cr,short *Residu_Img,short *Residu_Cb ,short *Residu_Cr); void Decode_B_svc(const int size,const SPS *ai_pstSps,const PPS *ai_pstPps,const SLICE *ai_pstSlice,const NAL *nal,const uchar *ai_tiSlice_table,const DATA *ai_tstTab_Block,RESIDU *residu,MAIN_STRUCT_PF *baseline_vector,const LIST_MMO *ai_pstRefPicListL0,const LIST_MMO *ai_pstRefPicListL1,const LIST_MMO *ai_pstCurrent_pic,W_TABLES *quantif_tab,SVC_VECTORS *svc,short *px,short *py,short *Upsampling_tmp,short *k16,short *p16,short *yk16,short *yp16,short *aio_tiMv_l0,short *aio_tMv_l1,short *aio_tiRef_l0,short *aio_tiRef_l1,uchar *aio_tucDpb_luma,uchar *aio_tucDpb_Cb,uchar *aio_tucDpb_Cr,short *Residu_Img,short *Residu_Cb ,short *Residu_Cr); void Decode_I_svc(const int size,SPS *sps,PPS *pps,SLICE *slice,NAL *nal,uchar *slice_table,DATA *Block,RESIDU *residu,STRUCT_PF *vector,LIST_MMO *Current_pic,W_TABLES *quantif,unsigned char *aio_tucImage,unsigned char *aio_tucImage_Cb,unsigned char *aio_tucImage_Cr); void Extract_tcpmp(int xsize,int ysize,int edge,int Crop,uchar *img_luma_in,uchar *img_Cb_in,uchar *img_Cr_in,int address_pic,OPENSVCFRAME *Frame); typedef struct{ image_type Display_1_Extract_1_Image_Y_o[3279368]; image_type Display_1_Extract_Image_Y_o[3279368]; NAL DqIdNextNal_Nal_o[1]; int GetNalBytes_NalUnitBytes_o_buf[1]; ConfigSVC GetNalBytes_StreamType[1]; int GetNalBytes_rbsp_o_size[1]; stream 
GetNalBytes_rbsp_o[101376]; int NumBytesInNal_buf_1[1]; short decoder_svc_MvBuffer_1_Mv[9400320]; short decoder_svc_MvBuffer_1_Ref[4700160]; short decoder_svc_MvBuffer_Mv[9400320]; short decoder_svc_MvBuffer_Ref[4700160]; int decoder_svc_NalUnit_NalRefIdc_io[1]; int decoder_svc_NalUnit_NalUnitType_io[1]; int decoder_svc_Nal_Compute_NalDecodingProcess_Set_Pos_Pos[1]; int decoder_svc_Nal_Compute_NalDecodingProcess_SliceHeader_entropy_coding_flag[1]; PPS decoder_svc_Nal_Compute_NalDecodingProcess_SliceHeader_pps_id[1]; W_TABLES decoder_svc_Nal_Compute_NalDecodingProcess_SliceHeader_quantif[1]; SPS decoder_svc_Nal_Compute_NalDecodingProcess_SliceHeader_sps_id[1]; int decoder_svc_Nal_Compute_NalDecodingProcess_Slice_type_SliceType_o[1]; short decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CABAC_mv_cabac_l0_o[261120]; short decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CABAC_mv_cabac_l1_o[261120]; short decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CABAC_ref_cabac_l0_o[32640]; short decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CABAC_ref_cabac_l1_o[32640]; int decoder_svc_Nal_Compute_SetPos_Pos[1]; int decoder_svc_Nal_Compute_SliceHeaderIDR_entropy_coding_flag[1]; PPS decoder_svc_Nal_Compute_SliceHeaderIDR_pps_id[1]; W_TABLES decoder_svc_Nal_Compute_SliceHeaderIDR_quantif[1]; SPS decoder_svc_Nal_Compute_SliceHeaderIDR_sps_id[1]; int decoder_svc_Nal_Compute_nal_unit_header_svc_ext_20_pos_o[1]; SEI decoder_svc_Nal_Compute_sei_rbsp_Sei[1]; int decoder_svc_Nal_Compute_seq_parameter_set_IdOfsps_o[1]; short decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Decode_IPB_svc_Decode_B_svc_Upsampling_tmp[2088960]; short decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Decode_IPB_svc_Decode_B_svc_px[1920]; short decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Decode_IPB_svc_Decode_B_svc_py[1088]; short decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Decode_IPB_svc_Decode_B_svc_xk16[1920]; short decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Decode_IPB_svc_Decode_B_svc_xp16[1920]; short decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Decode_IPB_svc_Decode_B_svc_yk16[1088]; short decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Decode_IPB_svc_Decode_B_svc_yp16[1088]; short decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Decode_IPB_svc_Decode_P_svc_Upsampling_tmp[2088960]; short decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Decode_IPB_svc_Decode_P_svc_px[1920]; short decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Decode_IPB_svc_Decode_P_svc_py[1088]; short decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Decode_IPB_svc_Decode_P_svc_xk16[1920]; short decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Decode_IPB_svc_Decode_P_svc_xp16[1920]; short decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Decode_IPB_svc_Decode_P_svc_yk16[1088]; short decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Decode_IPB_svc_Decode_P_svc_yp16[1088]; short decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_Layer_SliceLayerCabac_mv_cabac_l0_o[261120]; short decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_Layer_SliceLayerCabac_mv_cabac_l1_o[261120]; short decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_Layer_SliceLayerCabac_ref_cabac_l0_o[32640]; short decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_Layer_SliceLayerCabac_ref_cabac_l1_o[32640]; int decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_type_SliceType_o[1]; int 
decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_slice_header_21_entropy_coding_flag[1]; PPS decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_slice_header_21_pps_id[1]; W_TABLES decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_slice_header_21_quantif[1]; SPS decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_slice_header_21_sps_id[1]; short decoder_svc_Nal_Compute_slice_layer_main_CABAC_mv_cabac_l0_o[261120]; short decoder_svc_Nal_Compute_slice_layer_main_CABAC_mv_cabac_l1_o[261120]; short decoder_svc_Nal_Compute_slice_layer_main_CABAC_ref_cabac_l0_o[32640]; short decoder_svc_Nal_Compute_slice_layer_main_CABAC_ref_cabac_l1_o[32640]; uchar decoder_svc_PictureBuffer_RefU[8198400]; uchar decoder_svc_PictureBuffer_RefV[8198400]; uchar decoder_svc_PictureBuffer_RefY[32793600]; short decoder_svc_ResiduBuffer_RefU[3279360]; short decoder_svc_ResiduBuffer_RefV[3279360]; short decoder_svc_ResiduBuffer_RefY[13117440]; DATA decoder_svc_Residu_Block[8160]; LIST_MMO decoder_svc_Residu_Current_pic[1]; uchar decoder_svc_Residu_MbToSliceGroupMap[8160]; MMO decoder_svc_Residu_Mmo[1]; PPS decoder_svc_Residu_PPS[255]; LIST_MMO decoder_svc_Residu_RefL0[16]; LIST_MMO decoder_svc_Residu_RefL1[16]; RESIDU decoder_svc_Residu_Residu[48960]; SPS decoder_svc_Residu_SPS[32]; SLICE decoder_svc_Residu_Slice[1]; uchar decoder_svc_Residu_SliceGroupId[8160]; uchar decoder_svc_Residu_SliceTab[8160]; int decoder_svc_SetZeor_Pos[1]; SVC_VECTORS decoder_svc_Svc_Vectors_PC_H264_plg_Svc_Vectors[1]; int decoder_svc_VideoParameter_Crop[1]; int decoder_svc_VideoParameter_EndOfSlice[1]; int decoder_svc_VideoParameter_ImgToDisplay[1]; int decoder_svc_VideoParameter_address_pic_o[1]; int decoder_svc_VideoParameter_xsize_o[1]; int decoder_svc_VideoParameter_ysize_o[1]; VLC_TABLES decoder_svc_VlcTab_PC_H264_plg_o[1]; MAIN_STRUCT_PF decoder_svc_slice_main_vector_PC_H264_plg_Main_vector_o[1]; uchar read_SVC_DataFile_o[101376]; int read_SVC_pos_o[1]; SLICE *decoder_svc_Nal_Compute_slice_layer_main_CondO9_o; short *decoder_svc_Nal_Compute_slice_layer_main_CondO8_o; short *decoder_svc_Nal_Compute_slice_layer_main_CondO7_o; short *decoder_svc_Nal_Compute_slice_layer_main_CondO6_o; short *decoder_svc_Nal_Compute_slice_layer_main_CondO5_o; uchar *decoder_svc_Nal_Compute_slice_layer_main_CondO4_o; RESIDU *decoder_svc_Nal_Compute_slice_layer_main_CondO3_o; DATA *decoder_svc_Nal_Compute_slice_layer_main_CondO0_o; uchar *decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_Layer_CondO5_o; SLICE *decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_Layer_CondO4_o; int *decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_Layer_CondO3_o; DATA *decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_Layer_CondO2_o; RESIDU *decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_Layer_CondO1_o; short *decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Decode_IPB_svc_CondO6_o; short *decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Decode_IPB_svc_CondO5_o; short *decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Decode_IPB_svc_CondO4_o; short *decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Decode_IPB_svc_CondO3_o; uchar *decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Decode_IPB_svc_CondO2_o; uchar *decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Decode_IPB_svc_CondO1_o; uchar *decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Decode_IPB_svc_CondO0_o; SLICE *decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO9_o; short 
*decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO8_o; short *decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO7_o; short *decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO6_o; short *decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO5_o; uchar *decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO4_o; RESIDU *decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO3_o; int *decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO1_o; DATA *decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO0_o; uchar *decoder_svc_Nal_Compute_NalDecodingProcess_Decode_IPB_avc_CondO2_o; uchar *decoder_svc_Nal_Compute_NalDecodingProcess_Decode_IPB_avc_CondO1_o; uchar *decoder_svc_Nal_Compute_NalDecodingProcess_Decode_IPB_avc_CondO0_o; int *decoder_svc_Nal_Compute_CondO7_o; int *decoder_svc_Nal_Compute_CondO6_o; int *decoder_svc_Nal_Compute_CondO5_o; uchar *decoder_svc_Nal_Compute_CondO4_o; uchar *decoder_svc_Nal_Compute_CondO3_o; uchar *decoder_svc_Nal_Compute_CondO2_o; int *decoder_svc_Nal_Compute_CondO1_o; int *decoder_svc_Nal_Compute_CondO0_o; int *NumBytesInNal_o_1; int *GetNalBytes_NalUnitBytes_o; }RAM_tcpmp; int WINEXPORT SVCDecoder_init(void **PlayerStruct) { /* for link with C runtime boot */ RAM_tcpmp *RAM_tcpmp_alloc_ = malloc(sizeof(RAM_tcpmp)); if (!RAM_tcpmp_alloc_){ #ifdef TCPMP #ifdef WIN_32 MessageBoxA(NULL,(LPCSTR)"The decoder requieres more memory than the architecture can provide", (LPCSTR)"OpenSVCDecoder has encountered an error", MB_OK); #endif #endif exit(20); } *PlayerStruct = RAM_tcpmp_alloc_; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO9_o = RAM_tcpmp_alloc_->decoder_svc_Residu_Slice; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO8_o = RAM_tcpmp_alloc_->decoder_svc_MvBuffer_1_Ref; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO7_o = RAM_tcpmp_alloc_->decoder_svc_MvBuffer_Ref; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO6_o = RAM_tcpmp_alloc_->decoder_svc_MvBuffer_1_Mv; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO5_o = RAM_tcpmp_alloc_->decoder_svc_MvBuffer_Mv; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO4_o = RAM_tcpmp_alloc_->decoder_svc_Residu_SliceTab; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO3_o = RAM_tcpmp_alloc_->decoder_svc_Residu_Residu; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO0_o = RAM_tcpmp_alloc_->decoder_svc_Residu_Block; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_Layer_CondO5_o = RAM_tcpmp_alloc_->decoder_svc_Residu_SliceTab; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_Layer_CondO4_o = RAM_tcpmp_alloc_->decoder_svc_Residu_Slice; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_Layer_CondO3_o = RAM_tcpmp_alloc_->decoder_svc_VideoParameter_EndOfSlice; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_Layer_CondO2_o = RAM_tcpmp_alloc_->decoder_svc_Residu_Block; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_Layer_CondO1_o = RAM_tcpmp_alloc_->decoder_svc_Residu_Residu; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Decode_IPB_svc_CondO6_o = RAM_tcpmp_alloc_->decoder_svc_MvBuffer_1_Mv; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Decode_IPB_svc_CondO5_o = RAM_tcpmp_alloc_->decoder_svc_MvBuffer_1_Ref; 
RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Decode_IPB_svc_CondO4_o = RAM_tcpmp_alloc_->decoder_svc_MvBuffer_Ref; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Decode_IPB_svc_CondO3_o = RAM_tcpmp_alloc_->decoder_svc_MvBuffer_Mv; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Decode_IPB_svc_CondO2_o = RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefY; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Decode_IPB_svc_CondO1_o = RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefV; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Decode_IPB_svc_CondO0_o = RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefU; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO9_o = RAM_tcpmp_alloc_->decoder_svc_Residu_Slice; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO8_o = RAM_tcpmp_alloc_->decoder_svc_MvBuffer_1_Ref; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO7_o = RAM_tcpmp_alloc_->decoder_svc_MvBuffer_Ref; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO6_o = RAM_tcpmp_alloc_->decoder_svc_MvBuffer_1_Mv; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO5_o = RAM_tcpmp_alloc_->decoder_svc_MvBuffer_Mv; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO4_o = RAM_tcpmp_alloc_->decoder_svc_Residu_SliceTab; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO3_o = RAM_tcpmp_alloc_->decoder_svc_Residu_Residu; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO1_o = RAM_tcpmp_alloc_->decoder_svc_VideoParameter_EndOfSlice; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO0_o = RAM_tcpmp_alloc_->decoder_svc_Residu_Block; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_Decode_IPB_avc_CondO2_o = RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefY; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_Decode_IPB_avc_CondO1_o = RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefV; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_Decode_IPB_avc_CondO0_o = RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefU; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO7_o = RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ysize_o; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO6_o = RAM_tcpmp_alloc_->decoder_svc_VideoParameter_xsize_o; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO5_o = RAM_tcpmp_alloc_->decoder_svc_VideoParameter_address_pic_o; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO4_o = RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefY; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO3_o = RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefV; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO2_o = RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefU; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO1_o = RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ImgToDisplay; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO0_o = RAM_tcpmp_alloc_->decoder_svc_VideoParameter_Crop; RAM_tcpmp_alloc_->NumBytesInNal_o_1 = RAM_tcpmp_alloc_->NumBytesInNal_buf_1; RAM_tcpmp_alloc_->GetNalBytes_NalUnitBytes_o = RAM_tcpmp_alloc_->GetNalBytes_NalUnitBytes_o_buf; init_svc_vectors(RAM_tcpmp_alloc_->decoder_svc_Svc_Vectors_PC_H264_plg_Svc_Vectors); decode_init_vlc(RAM_tcpmp_alloc_->decoder_svc_VlcTab_PC_H264_plg_o); 
vector_main_init(RAM_tcpmp_alloc_->decoder_svc_slice_main_vector_PC_H264_plg_Main_vector_o); init_nal_struct(RAM_tcpmp_alloc_->DqIdNextNal_Nal_o, 6); init_int(RAM_tcpmp_alloc_->decoder_svc_VideoParameter_EndOfSlice); init_int(RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ImgToDisplay); init_int(RAM_tcpmp_alloc_->decoder_svc_VideoParameter_xsize_o); init_int(RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ysize_o); init_int(RAM_tcpmp_alloc_->decoder_svc_VideoParameter_address_pic_o); init_int(RAM_tcpmp_alloc_->decoder_svc_VideoParameter_Crop); init_slice(RAM_tcpmp_alloc_->decoder_svc_Residu_Slice); init_sps(RAM_tcpmp_alloc_->decoder_svc_Residu_SPS); init_pps(RAM_tcpmp_alloc_->decoder_svc_Residu_PPS); InitListMmo(RAM_tcpmp_alloc_->decoder_svc_Residu_RefL0); InitListMmo(RAM_tcpmp_alloc_->decoder_svc_Residu_RefL1); InitListMmo(RAM_tcpmp_alloc_->decoder_svc_Residu_Current_pic); init_mmo(6, RAM_tcpmp_alloc_->decoder_svc_Residu_Mmo); decoderh264_init(1920, 1088); RAM_tcpmp_alloc_->NumBytesInNal_o_1[0]=101376; init_int(RAM_tcpmp_alloc_->decoder_svc_SetZeor_Pos); return 1; } int WINEXPORT decodeNAL(void *PlayerStruct, unsigned char* nal, int nal_length, OPENSVCFRAME *CurrFrame, int *LayerCommand){ RAM_tcpmp *RAM_tcpmp_alloc_ = (RAM_tcpmp *) PlayerStruct; int result = 0; RAM_tcpmp_alloc_->GetNalBytes_rbsp_o_size[0] = readnal_without_start_code(nal, nal_length, RAM_tcpmp_alloc_->GetNalBytes_rbsp_o); NextNalDqIdPlayer(RAM_tcpmp_alloc_->GetNalBytes_StreamType, RAM_tcpmp_alloc_->DqIdNextNal_Nal_o, LayerCommand); NalUnitSVC(RAM_tcpmp_alloc_->GetNalBytes_rbsp_o, RAM_tcpmp_alloc_->decoder_svc_NalUnit_NalUnitType_io, RAM_tcpmp_alloc_->decoder_svc_NalUnit_NalRefIdc_io, RAM_tcpmp_alloc_->DqIdNextNal_Nal_o); init_int(RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ImgToDisplay); switch ( RAM_tcpmp_alloc_->decoder_svc_NalUnit_NalUnitType_io[0]) { /* switch_3 */ case 1 : {/* case_4 */ RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_Set_Pos_Pos[0] = 8; slice_header_svc(RAM_tcpmp_alloc_->GetNalBytes_rbsp_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_SliceHeader_sps_id, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_SliceHeader_pps_id, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_SliceHeader_entropy_coding_flag, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_SliceHeader_quantif, RAM_tcpmp_alloc_->decoder_svc_Residu_Current_pic, RAM_tcpmp_alloc_->decoder_svc_Residu_SPS, RAM_tcpmp_alloc_->decoder_svc_Residu_PPS, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_Set_Pos_Pos, RAM_tcpmp_alloc_->decoder_svc_Residu_Slice, RAM_tcpmp_alloc_->decoder_svc_Residu_Mmo, RAM_tcpmp_alloc_->decoder_svc_Residu_RefL0, RAM_tcpmp_alloc_->decoder_svc_Residu_RefL1, RAM_tcpmp_alloc_->DqIdNextNal_Nal_o, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_EndOfSlice, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ImgToDisplay, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_xsize_o, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ysize_o, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_address_pic_o, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_Crop); break; }/* case_4 */ case 5 : {/* case_5 */ svc_calculate_dpb((2186240 * (5 + 5 + 6 - 1)), (1920 * 1088 / 8 * 6 * (5 + 1)), 6, RAM_tcpmp_alloc_->decoder_svc_Residu_Mmo, RAM_tcpmp_alloc_->decoder_svc_Residu_SPS); RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_SetPos_Pos[0] = 8; break; }/* case_5 */ case 6 : {/* case_6 */ RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO7_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ysize_o; 
RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO6_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_xsize_o; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO5_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_address_pic_o; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO0_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_Crop; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO1_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ImgToDisplay; sei_rbsp(RAM_tcpmp_alloc_->GetNalBytes_rbsp_o, RAM_tcpmp_alloc_->GetNalBytes_rbsp_o_size[0], RAM_tcpmp_alloc_->decoder_svc_Residu_SPS, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_sei_rbsp_Sei); break; }/* case_6 */ case 7 : {/* case_7 */ RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO7_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ysize_o; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO6_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_xsize_o; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO5_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_address_pic_o; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO0_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_Crop; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO1_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ImgToDisplay; seq_parameter_set(RAM_tcpmp_alloc_->GetNalBytes_rbsp_o, RAM_tcpmp_alloc_->decoder_svc_Residu_SPS); break; }/* case_7 */ case 8 : {/* case_8 */ RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO7_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ysize_o; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO6_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_xsize_o; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO5_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_address_pic_o; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO0_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_Crop; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO1_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ImgToDisplay; break; }/* case_8 */ case 11 : {/* case_9 */ FlushSVCFrame(RAM_tcpmp_alloc_->decoder_svc_Residu_SPS, RAM_tcpmp_alloc_->DqIdNextNal_Nal_o, RAM_tcpmp_alloc_->decoder_svc_Residu_Mmo, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_xsize_o, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ysize_o, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_Crop, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ImgToDisplay, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_address_pic_o); RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO7_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ysize_o; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO6_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_xsize_o; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO5_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_address_pic_o; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO1_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ImgToDisplay; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO0_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_Crop; break; }/* case_9 */ case 14 : {/* case_10 */ RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO7_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ysize_o; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO6_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_xsize_o; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO5_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_address_pic_o; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO0_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_Crop; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO1_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ImgToDisplay; PrefixNalUnit(RAM_tcpmp_alloc_->GetNalBytes_rbsp_o, RAM_tcpmp_alloc_->GetNalBytes_rbsp_o_size, 
RAM_tcpmp_alloc_->DqIdNextNal_Nal_o, RAM_tcpmp_alloc_->decoder_svc_Residu_Mmo, RAM_tcpmp_alloc_->decoder_svc_Residu_SPS, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_EndOfSlice); break; }/* case_10 */ case 15 : {/* case_11 */ RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO7_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ysize_o; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO6_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_xsize_o; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO5_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_address_pic_o; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO0_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_Crop; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO1_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ImgToDisplay; subset_sps(RAM_tcpmp_alloc_->GetNalBytes_rbsp_o, RAM_tcpmp_alloc_->GetNalBytes_rbsp_o_size, RAM_tcpmp_alloc_->decoder_svc_Residu_SPS, RAM_tcpmp_alloc_->DqIdNextNal_Nal_o); break; }/* case_11 */ case 20 : {/* case_12 */ NalUnitHeader(RAM_tcpmp_alloc_->GetNalBytes_rbsp_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_nal_unit_header_svc_ext_20_pos_o, RAM_tcpmp_alloc_->DqIdNextNal_Nal_o, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_EndOfSlice); slice_header_svc(RAM_tcpmp_alloc_->GetNalBytes_rbsp_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_slice_header_21_sps_id, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_slice_header_21_pps_id, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_slice_header_21_entropy_coding_flag, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_slice_header_21_quantif, RAM_tcpmp_alloc_->decoder_svc_Residu_Current_pic, RAM_tcpmp_alloc_->decoder_svc_Residu_SPS, RAM_tcpmp_alloc_->decoder_svc_Residu_PPS, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_nal_unit_header_svc_ext_20_pos_o, RAM_tcpmp_alloc_->decoder_svc_Residu_Slice, RAM_tcpmp_alloc_->decoder_svc_Residu_Mmo, RAM_tcpmp_alloc_->decoder_svc_Residu_RefL0, RAM_tcpmp_alloc_->decoder_svc_Residu_RefL1, RAM_tcpmp_alloc_->DqIdNextNal_Nal_o, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_EndOfSlice, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ImgToDisplay, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_xsize_o, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ysize_o, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_address_pic_o, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_Crop); break; }/* case_12 */ } /* end switch_3 */ switch ( RAM_tcpmp_alloc_->decoder_svc_NalUnit_NalUnitType_io[0]) { /* switch_13 */ case 5 : {/* case_14 */ slice_header_svc(RAM_tcpmp_alloc_->GetNalBytes_rbsp_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_SliceHeaderIDR_sps_id, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_SliceHeaderIDR_pps_id, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_SliceHeaderIDR_entropy_coding_flag, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_SliceHeaderIDR_quantif, RAM_tcpmp_alloc_->decoder_svc_Residu_Current_pic, RAM_tcpmp_alloc_->decoder_svc_Residu_SPS, RAM_tcpmp_alloc_->decoder_svc_Residu_PPS, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_SetPos_Pos, RAM_tcpmp_alloc_->decoder_svc_Residu_Slice, RAM_tcpmp_alloc_->decoder_svc_Residu_Mmo, RAM_tcpmp_alloc_->decoder_svc_Residu_RefL0, RAM_tcpmp_alloc_->decoder_svc_Residu_RefL1, RAM_tcpmp_alloc_->DqIdNextNal_Nal_o, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_EndOfSlice, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ImgToDisplay, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_xsize_o, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ysize_o, 
RAM_tcpmp_alloc_->decoder_svc_VideoParameter_address_pic_o, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_Crop); break; }/* case_14 */ case 8 : {/* case_15 */ pic_parameter_set(RAM_tcpmp_alloc_->GetNalBytes_rbsp_o, RAM_tcpmp_alloc_->decoder_svc_Residu_SliceGroupId, RAM_tcpmp_alloc_->decoder_svc_Residu_PPS, RAM_tcpmp_alloc_->decoder_svc_Residu_SPS, RAM_tcpmp_alloc_->GetNalBytes_rbsp_o_size[0]); break; }/* case_15 */ case 20 : {/* case_16 */ switch ( RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_slice_header_21_entropy_coding_flag[0]) { /* switch_17 */ case 0 : {/* case_18 */ slice_data_in_scalable_extension_cavlc(1920*1088/256, RAM_tcpmp_alloc_->GetNalBytes_rbsp_o, RAM_tcpmp_alloc_->GetNalBytes_rbsp_o_size, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_nal_unit_header_svc_ext_20_pos_o, RAM_tcpmp_alloc_->DqIdNextNal_Nal_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_slice_header_21_sps_id, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_slice_header_21_pps_id, RAM_tcpmp_alloc_->decoder_svc_VlcTab_PC_H264_plg_o, RAM_tcpmp_alloc_->decoder_svc_Residu_SliceGroupId, RAM_tcpmp_alloc_->decoder_svc_Residu_Slice, RAM_tcpmp_alloc_->decoder_svc_Residu_MbToSliceGroupMap, RAM_tcpmp_alloc_->decoder_svc_Residu_SliceTab, RAM_tcpmp_alloc_->decoder_svc_Residu_Block, RAM_tcpmp_alloc_->decoder_svc_Residu_Residu, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_EndOfSlice); RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_Layer_CondO5_o=RAM_tcpmp_alloc_->decoder_svc_Residu_SliceTab; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_Layer_CondO4_o=RAM_tcpmp_alloc_->decoder_svc_Residu_Slice; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_Layer_CondO3_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_EndOfSlice; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_Layer_CondO2_o=RAM_tcpmp_alloc_->decoder_svc_Residu_Block; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_Layer_CondO1_o=RAM_tcpmp_alloc_->decoder_svc_Residu_Residu; break; }/* case_18 */ case 1 : {/* case_19 */ SliceCabac(1920*1088/256, RAM_tcpmp_alloc_->GetNalBytes_rbsp_o, RAM_tcpmp_alloc_->GetNalBytes_rbsp_o_size, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_nal_unit_header_svc_ext_20_pos_o, RAM_tcpmp_alloc_->DqIdNextNal_Nal_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_slice_header_21_sps_id, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_slice_header_21_pps_id, RAM_tcpmp_alloc_->decoder_svc_Residu_SliceGroupId, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_Layer_SliceLayerCabac_mv_cabac_l0_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_Layer_SliceLayerCabac_mv_cabac_l1_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_Layer_SliceLayerCabac_ref_cabac_l0_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_Layer_SliceLayerCabac_ref_cabac_l1_o, RAM_tcpmp_alloc_->decoder_svc_Residu_Slice, RAM_tcpmp_alloc_->decoder_svc_Residu_MbToSliceGroupMap, RAM_tcpmp_alloc_->decoder_svc_Residu_SliceTab, RAM_tcpmp_alloc_->decoder_svc_Residu_Block, RAM_tcpmp_alloc_->decoder_svc_Residu_Residu, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_EndOfSlice); 
RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_Layer_CondO5_o=RAM_tcpmp_alloc_->decoder_svc_Residu_SliceTab; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_Layer_CondO4_o=RAM_tcpmp_alloc_->decoder_svc_Residu_Slice; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_Layer_CondO3_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_EndOfSlice; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_Layer_CondO2_o=RAM_tcpmp_alloc_->decoder_svc_Residu_Block; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_Layer_CondO1_o=RAM_tcpmp_alloc_->decoder_svc_Residu_Residu; break; }/* case_19 */ } /* end switch_17 */ RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_type_SliceType_o[0]=RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_in_scalable_ext_20_Slice_Layer_CondO4_o[0].slice_type; break; }/* case_16 */ } /* end switch_13 */ switch ( RAM_tcpmp_alloc_->decoder_svc_NalUnit_NalUnitType_io[0]) { /* switch_20 */ case 1 : {/* case_21 */ switch ( RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_SliceHeader_entropy_coding_flag[0]) { /* switch_22 */ case 0 : {/* case_23 */ slice_base_layer_cavlc(RAM_tcpmp_alloc_->GetNalBytes_rbsp_o, RAM_tcpmp_alloc_->GetNalBytes_rbsp_o_size, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_Set_Pos_Pos, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_SliceHeader_sps_id, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_SliceHeader_pps_id, RAM_tcpmp_alloc_->decoder_svc_VlcTab_PC_H264_plg_o, RAM_tcpmp_alloc_->decoder_svc_Residu_SliceGroupId, RAM_tcpmp_alloc_->decoder_svc_Residu_Current_pic, RAM_tcpmp_alloc_->decoder_svc_Residu_RefL1, RAM_tcpmp_alloc_->DqIdNextNal_Nal_o, RAM_tcpmp_alloc_->decoder_svc_Residu_Slice, RAM_tcpmp_alloc_->decoder_svc_Residu_MbToSliceGroupMap, RAM_tcpmp_alloc_->decoder_svc_Residu_SliceTab, RAM_tcpmp_alloc_->decoder_svc_Residu_Block, RAM_tcpmp_alloc_->decoder_svc_Residu_Residu, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_EndOfSlice, RAM_tcpmp_alloc_->decoder_svc_MvBuffer_Mv, RAM_tcpmp_alloc_->decoder_svc_MvBuffer_1_Mv, RAM_tcpmp_alloc_->decoder_svc_MvBuffer_Ref, RAM_tcpmp_alloc_->decoder_svc_MvBuffer_1_Ref); RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO9_o=RAM_tcpmp_alloc_->decoder_svc_Residu_Slice; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO8_o=RAM_tcpmp_alloc_->decoder_svc_MvBuffer_1_Ref; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO7_o=RAM_tcpmp_alloc_->decoder_svc_MvBuffer_Ref; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO6_o=RAM_tcpmp_alloc_->decoder_svc_MvBuffer_1_Mv; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO5_o=RAM_tcpmp_alloc_->decoder_svc_MvBuffer_Mv; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO4_o=RAM_tcpmp_alloc_->decoder_svc_Residu_SliceTab; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO3_o=RAM_tcpmp_alloc_->decoder_svc_Residu_Residu; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO1_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_EndOfSlice; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO0_o=RAM_tcpmp_alloc_->decoder_svc_Residu_Block; break; }/* case_23 */ case 1 : {/* case_24 */ 
slice_base_layer_cabac(RAM_tcpmp_alloc_->GetNalBytes_rbsp_o, RAM_tcpmp_alloc_->GetNalBytes_rbsp_o_size, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_Set_Pos_Pos, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_SliceHeader_sps_id, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_SliceHeader_pps_id, RAM_tcpmp_alloc_->decoder_svc_Residu_SliceGroupId, RAM_tcpmp_alloc_->decoder_svc_Residu_Current_pic, RAM_tcpmp_alloc_->decoder_svc_Residu_RefL1, RAM_tcpmp_alloc_->DqIdNextNal_Nal_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CABAC_mv_cabac_l0_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CABAC_mv_cabac_l1_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CABAC_ref_cabac_l0_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CABAC_ref_cabac_l1_o, RAM_tcpmp_alloc_->decoder_svc_Residu_Slice, RAM_tcpmp_alloc_->decoder_svc_Residu_MbToSliceGroupMap, RAM_tcpmp_alloc_->decoder_svc_Residu_SliceTab, RAM_tcpmp_alloc_->decoder_svc_Residu_Block, RAM_tcpmp_alloc_->decoder_svc_Residu_Residu, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_EndOfSlice, RAM_tcpmp_alloc_->decoder_svc_MvBuffer_Mv, RAM_tcpmp_alloc_->decoder_svc_MvBuffer_1_Mv, RAM_tcpmp_alloc_->decoder_svc_MvBuffer_Ref, RAM_tcpmp_alloc_->decoder_svc_MvBuffer_1_Ref); RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO9_o=RAM_tcpmp_alloc_->decoder_svc_Residu_Slice; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO8_o=RAM_tcpmp_alloc_->decoder_svc_MvBuffer_1_Ref; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO7_o=RAM_tcpmp_alloc_->decoder_svc_MvBuffer_Ref; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO6_o=RAM_tcpmp_alloc_->decoder_svc_MvBuffer_1_Mv; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO5_o=RAM_tcpmp_alloc_->decoder_svc_MvBuffer_Mv; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO4_o=RAM_tcpmp_alloc_->decoder_svc_Residu_SliceTab; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO3_o=RAM_tcpmp_alloc_->decoder_svc_Residu_Residu; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO1_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_EndOfSlice; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO0_o=RAM_tcpmp_alloc_->decoder_svc_Residu_Block; break; }/* case_24 */ } /* end switch_22 */ RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_Slice_type_SliceType_o[0]=RAM_tcpmp_alloc_->decoder_svc_Residu_Slice[0].slice_type; switch ( RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_Slice_type_SliceType_o[0]) { /* switch_25 */ case 0 : {/* case_26 */ Decode_P_avc(RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_SliceHeader_sps_id, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_SliceHeader_pps_id, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO9_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO4_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO3_o, RAM_tcpmp_alloc_->decoder_svc_slice_main_vector_PC_H264_plg_Main_vector_o->baseline_vectors, RAM_tcpmp_alloc_->decoder_svc_Residu_RefL0, RAM_tcpmp_alloc_->decoder_svc_Residu_Current_pic, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_SliceHeader_quantif, 
RAM_tcpmp_alloc_->DqIdNextNal_Nal_o, &(RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO5_o[RAM_tcpmp_alloc_->decoder_svc_Residu_Current_pic->MvMemoryAddress]), &(RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO7_o[RAM_tcpmp_alloc_->decoder_svc_Residu_Current_pic->MvMemoryAddress >> 1]), RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefY, RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefU, RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefV, RAM_tcpmp_alloc_->decoder_svc_ResiduBuffer_RefY, RAM_tcpmp_alloc_->decoder_svc_ResiduBuffer_RefU, RAM_tcpmp_alloc_->decoder_svc_ResiduBuffer_RefV); RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_Decode_IPB_avc_CondO2_o=RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefY; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_Decode_IPB_avc_CondO1_o=RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefV; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_Decode_IPB_avc_CondO0_o=RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefU; break; }/* case_26 */ case 1 : {/* case_27 */ Decode_B_avc(RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_SliceHeader_sps_id, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_SliceHeader_pps_id, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO9_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO4_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO3_o, RAM_tcpmp_alloc_->decoder_svc_slice_main_vector_PC_H264_plg_Main_vector_o, RAM_tcpmp_alloc_->decoder_svc_Residu_RefL0, RAM_tcpmp_alloc_->decoder_svc_Residu_RefL1, RAM_tcpmp_alloc_->decoder_svc_Residu_Current_pic, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_SliceHeader_quantif, RAM_tcpmp_alloc_->DqIdNextNal_Nal_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO5_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO6_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO7_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO8_o, RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefY, RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefU, RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefV, RAM_tcpmp_alloc_->decoder_svc_ResiduBuffer_RefY, RAM_tcpmp_alloc_->decoder_svc_ResiduBuffer_RefU, RAM_tcpmp_alloc_->decoder_svc_ResiduBuffer_RefV); RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_Decode_IPB_avc_CondO2_o=RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefY; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_Decode_IPB_avc_CondO1_o=RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefV; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_Decode_IPB_avc_CondO0_o=RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefU; break; }/* case_27 */ case 2 : {/* case_28 */ Decode_I_avc(RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_SliceHeader_sps_id, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_SliceHeader_pps_id, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO9_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO4_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO3_o, RAM_tcpmp_alloc_->decoder_svc_slice_main_vector_PC_H264_plg_Main_vector_o->baseline_vectors, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_SliceHeader_quantif, 
RAM_tcpmp_alloc_->DqIdNextNal_Nal_o, &(RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefY[RAM_tcpmp_alloc_->decoder_svc_Residu_Current_pic->MemoryAddress]), &(RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefU[RAM_tcpmp_alloc_->decoder_svc_Residu_Current_pic->MemoryAddress>>2]), &(RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefV[RAM_tcpmp_alloc_->decoder_svc_Residu_Current_pic->MemoryAddress>>2])); RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_Decode_IPB_avc_CondO2_o=RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefY; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_Decode_IPB_avc_CondO1_o=RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefV; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_Decode_IPB_avc_CondO0_o=RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefU; break; }/* case_28 */ } /* end switch_25 */ FinishFrameSVC(1920*1088/256, RAM_tcpmp_alloc_->DqIdNextNal_Nal_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_SliceHeader_sps_id, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_SliceHeader_pps_id, RAM_tcpmp_alloc_->decoder_svc_Residu_Current_pic, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO9_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO1_o[0], RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO4_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO0_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO3_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO5_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO6_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO7_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_slice_layer_CondO8_o, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_Crop, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ImgToDisplay, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_address_pic_o, RAM_tcpmp_alloc_->decoder_svc_Residu_Mmo, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_Decode_IPB_avc_CondO2_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_Decode_IPB_avc_CondO0_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_Decode_IPB_avc_CondO1_o, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_xsize_o, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ysize_o); RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO7_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ysize_o; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO6_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_xsize_o; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO5_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_address_pic_o; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO4_o=RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_Decode_IPB_avc_CondO2_o; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO3_o=RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_Decode_IPB_avc_CondO1_o; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO2_o=RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_NalDecodingProcess_Decode_IPB_avc_CondO0_o; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO1_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ImgToDisplay; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO0_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_Crop; break; }/* case_21 */ case 5 : {/* case_29 */ switch ( RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_SliceHeaderIDR_entropy_coding_flag[0]) { /* switch_30 */ 
case 0 : {/* case_31 */ slice_base_layer_cavlc(RAM_tcpmp_alloc_->GetNalBytes_rbsp_o, RAM_tcpmp_alloc_->GetNalBytes_rbsp_o_size, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_SetPos_Pos, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_SliceHeaderIDR_sps_id, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_SliceHeaderIDR_pps_id, RAM_tcpmp_alloc_->decoder_svc_VlcTab_PC_H264_plg_o, RAM_tcpmp_alloc_->decoder_svc_Residu_SliceGroupId, RAM_tcpmp_alloc_->decoder_svc_Residu_Current_pic, RAM_tcpmp_alloc_->decoder_svc_Residu_RefL1, RAM_tcpmp_alloc_->DqIdNextNal_Nal_o, RAM_tcpmp_alloc_->decoder_svc_Residu_Slice, RAM_tcpmp_alloc_->decoder_svc_Residu_MbToSliceGroupMap, RAM_tcpmp_alloc_->decoder_svc_Residu_SliceTab, RAM_tcpmp_alloc_->decoder_svc_Residu_Block, RAM_tcpmp_alloc_->decoder_svc_Residu_Residu, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_EndOfSlice, RAM_tcpmp_alloc_->decoder_svc_MvBuffer_Mv, RAM_tcpmp_alloc_->decoder_svc_MvBuffer_1_Mv, RAM_tcpmp_alloc_->decoder_svc_MvBuffer_Ref, RAM_tcpmp_alloc_->decoder_svc_MvBuffer_1_Ref); RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO9_o=RAM_tcpmp_alloc_->decoder_svc_Residu_Slice; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO8_o=RAM_tcpmp_alloc_->decoder_svc_MvBuffer_1_Ref; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO7_o=RAM_tcpmp_alloc_->decoder_svc_MvBuffer_Ref; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO6_o=RAM_tcpmp_alloc_->decoder_svc_MvBuffer_1_Mv; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO5_o=RAM_tcpmp_alloc_->decoder_svc_MvBuffer_Mv; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO4_o=RAM_tcpmp_alloc_->decoder_svc_Residu_SliceTab; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO3_o=RAM_tcpmp_alloc_->decoder_svc_Residu_Residu; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO0_o=RAM_tcpmp_alloc_->decoder_svc_Residu_Block; break; }/* case_31 */ case 1 : {/* case_32 */ slice_base_layer_cabac(RAM_tcpmp_alloc_->GetNalBytes_rbsp_o, RAM_tcpmp_alloc_->GetNalBytes_rbsp_o_size, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_SetPos_Pos, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_SliceHeaderIDR_sps_id, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_SliceHeaderIDR_pps_id, RAM_tcpmp_alloc_->decoder_svc_Residu_SliceGroupId, RAM_tcpmp_alloc_->decoder_svc_Residu_Current_pic, RAM_tcpmp_alloc_->decoder_svc_Residu_RefL1, RAM_tcpmp_alloc_->DqIdNextNal_Nal_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CABAC_mv_cabac_l0_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CABAC_mv_cabac_l1_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CABAC_ref_cabac_l0_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CABAC_ref_cabac_l1_o, RAM_tcpmp_alloc_->decoder_svc_Residu_Slice, RAM_tcpmp_alloc_->decoder_svc_Residu_MbToSliceGroupMap, RAM_tcpmp_alloc_->decoder_svc_Residu_SliceTab, RAM_tcpmp_alloc_->decoder_svc_Residu_Block, RAM_tcpmp_alloc_->decoder_svc_Residu_Residu, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_EndOfSlice, RAM_tcpmp_alloc_->decoder_svc_MvBuffer_Mv, RAM_tcpmp_alloc_->decoder_svc_MvBuffer_1_Mv, RAM_tcpmp_alloc_->decoder_svc_MvBuffer_Ref, RAM_tcpmp_alloc_->decoder_svc_MvBuffer_1_Ref); RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO9_o=RAM_tcpmp_alloc_->decoder_svc_Residu_Slice; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO8_o=RAM_tcpmp_alloc_->decoder_svc_MvBuffer_1_Ref; 
RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO7_o=RAM_tcpmp_alloc_->decoder_svc_MvBuffer_Ref; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO6_o=RAM_tcpmp_alloc_->decoder_svc_MvBuffer_1_Mv; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO5_o=RAM_tcpmp_alloc_->decoder_svc_MvBuffer_Mv; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO4_o=RAM_tcpmp_alloc_->decoder_svc_Residu_SliceTab; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO3_o=RAM_tcpmp_alloc_->decoder_svc_Residu_Residu; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO0_o=RAM_tcpmp_alloc_->decoder_svc_Residu_Block; break; }/* case_32 */ } /* end switch_30 */ Decode_I_avc(RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_SliceHeaderIDR_sps_id, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_SliceHeaderIDR_pps_id, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO9_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO4_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO3_o, RAM_tcpmp_alloc_->decoder_svc_slice_main_vector_PC_H264_plg_Main_vector_o->baseline_vectors, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_SliceHeaderIDR_quantif, RAM_tcpmp_alloc_->DqIdNextNal_Nal_o, &(RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefY[RAM_tcpmp_alloc_->decoder_svc_Residu_Current_pic->MemoryAddress]), &(RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefU[RAM_tcpmp_alloc_->decoder_svc_Residu_Current_pic->MemoryAddress>>2]), &(RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefV[RAM_tcpmp_alloc_->decoder_svc_Residu_Current_pic->MemoryAddress>>2])); FinishFrameSVC(1920*1088/256, RAM_tcpmp_alloc_->DqIdNextNal_Nal_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_SliceHeaderIDR_sps_id, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_SliceHeaderIDR_pps_id, RAM_tcpmp_alloc_->decoder_svc_Residu_Current_pic, RAM_tcpmp_alloc_->decoder_svc_Residu_Slice, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_EndOfSlice[0], RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO4_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO0_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO3_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO5_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO6_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO7_o, RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_slice_layer_main_CondO8_o, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_Crop, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ImgToDisplay, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_address_pic_o, RAM_tcpmp_alloc_->decoder_svc_Residu_Mmo, RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefY, RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefU, RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefV, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_xsize_o, RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ysize_o); RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO7_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ysize_o; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO6_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_xsize_o; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO5_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_address_pic_o; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO4_o=RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefY; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO3_o=RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefV; 
RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO2_o=RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefU; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO1_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_ImgToDisplay; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO0_o=RAM_tcpmp_alloc_->decoder_svc_VideoParameter_Crop; break; }/* case_29 */ case 6 : {/* case_33 */ RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO4_o=RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefY; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO3_o=RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefV; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO2_o=RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefU; break; }/* case_33 */ case 7 : {/* case_34 */ RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO4_o=RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefY; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO3_o=RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefV; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO2_o=RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefU; break; }/* case_34 */ case 8 : {/* case_35 */ RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO4_o=RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefY; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO3_o=RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefV; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO2_o=RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefU; break; }/* case_35 */ case 11 : {/* case_36 */ RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO4_o=RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefY; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO3_o=RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefV; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO2_o=RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefU; break; }/* case_36 */ case 14 : {/* case_37 */ RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO4_o=RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_RefY; RAM_tcpmp_alloc_->decoder_svc_Nal_Compute_CondO3_o=RAM_tcpmp_alloc_->decoder_svc_PictureBuffer_... [truncated message content] |
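[Editorial note] For readers skimming the generated plug-in source above, the two exported entry points are SVCDecoder_init() and decodeNAL(); their signatures are taken directly from the listing. The short driver below is only an illustrative sketch of how a caller might use them, not code from the project: the demuxer stub get_next_nal(), the input file name, the direct free() of the player structure and the assumption that decodeNAL() fills the OPENSVCFRAME when a picture is complete are all hypothetical (the frame-extraction part of the file falls in the truncated portion of the message).

#include <stdio.h>
#include <stdlib.h>
#include "SvcInterface.h"            /* OPENSVCFRAME, as included by the generated file */
#include "SVCDecoder_ietr_api.h"     /* SVCDecoder_init(), decodeNAL() */

/* Stand-in for a real Annex-B or container demuxer: it should copy the next
 * NAL unit into buf and return its length, or 0 at end of stream. */
static int get_next_nal(FILE *f, unsigned char *buf, int bufsize)
{
    (void)f; (void)buf; (void)bufsize;
    return 0;   /* stub: replace with real NAL extraction */
}

int main(void)
{
    void *player = NULL;
    OPENSVCFRAME frame;
    int layer_command = 0;             /* layer/DqId control word */
    unsigned char nal[101376];         /* maximum NAL size used by the generated code */
    int nal_len;
    FILE *f = fopen("stream.264", "rb");

    if (!f)
        return 1;
    SVCDecoder_init(&player);          /* allocates the RAM_tcpmp working memory */

    while ((nal_len = get_next_nal(f, nal, sizeof nal)) > 0) {
        /* One call per NAL unit; when a picture is complete the decoder is
         * expected to fill `frame` for display. */
        decodeNAL(player, nal, nal_len, &frame, &layer_command);
    }

    fclose(f);
    free(player);                      /* SVCDecoder_init() obtained it with malloc() */
    return 0;
}

In the generated code, LayerCommand is forwarded to NextNalDqIdPlayer() as the maximum DqId, so it is presumably how the caller selects which layer to decode.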
From: Médéric B. <Med...@in...> - 2011-03-15 14:40:55
|
Dear Siyuan,

I will have a look at the stream as soon as possible.

Médéric

On 14/03/2011 20:51, Siyuan Xiang wrote:
> I see. Thank you Mickael and Mederic.
>
> I get an access violation when decoding a bitstream which has 6 layers.
> The configuration is 320x180 (2 quality layers), 640x360 (2 quality layers),
> 1280x720 (2 quality layers). Each layer has the same frame rate of 24.
>
> The lower resolutions can be decoded without problem, but when I set the DqID
> to 32 or 33 to decode the 1280x720 resolution, I get an access violation at
> mb4x4_mode[mode](ptr_img, PicWidthInPix, residu->AvailMask, locx, locy); in
> file 264_baseline_decoder\lib_baseline\decode_MB_I.c.
>
> It seems that mode is larger than the array size.
>
> I divide the source video into segments of 17 frames and encode them
> separately, then concatenate them together. The bitstream has 17x5 = 85 frames.
>
> Regards,
> Siyuan
> ---
>
> On Mon, Mar 14, 2011 at 02:04, Médéric Blestel <Med...@in...> wrote:
>
>     Dear Siyuan,
>
>     To clarify the ghost picture discussion:
>     *ImageToDisplay is equal to 2 when a temporal layer has been detected in another layer.
>
>     This mechanism helps to keep the frame rate consistent between all layers.
>     In fact, when temporal scalability is present, each layer has a different
>     number of frames.
>
>     For instance, with 2 temporal layers (and 1 enhancement layer):
>     - Base layer: 150 frames at 15 fps
>     - Top layer: 300 frames at 30 fps.
>
>     Each layer has to be decoded at a different frame rate, but the stream
>     duration is the same (10 s). Without this mechanism, the base layer would
>     be decoded at 30 fps instead of 15 fps. So, to keep the same video
>     duration, *ImageToDisplay is set to 2: the frame is displayed, and it will
>     be displayed again with *ImageToDisplay = 1.
>
>     Kind regards,
>     Médéric
>
>     On 14/03/2011 09:26, Mickaël Raulet wrote:
>
>         With a temporal layer at a lower frame rate, we insert ghost pictures
>         to give the same frame rate as the enhancement layers.
>         Mickaël
>
>         On 14 March 2011 at 09:23, Siyuan Xiang wrote:
>
>             Hi, all,
>
>             I downloaded the latest source from svn; now all the frames can be flushed out.
>             I tried a 17-picture bitstream, but found that 18 pictures are displayed.
>             The second displayed picture is a ghost picture (I checked *ImageToDisplay,
>             which is equal to 2).
>             What is a ghost picture? Can I just ignore the ghost picture?
>
>             By the way, I found the decoding is faster than before :)
>
>             Regards,
>             Siyuan
>             ---
|
From: Médéric B. <Med...@in...> - 2011-03-14 09:04:51
|
Dear Siyuan,

To clarify the ghost picture discussion:
*ImageToDisplay is equal to 2 when a temporal layer has been detected in another layer.

This mechanism helps to keep the frame rate consistent between all layers. In fact, when temporal scalability is present, each layer has a different number of frames.

For instance, with 2 temporal layers (and 1 enhancement layer):
- Base layer: 150 frames at 15 fps
- Top layer: 300 frames at 30 fps.

Each layer has to be decoded at a different frame rate, but the stream duration is the same (10 s). Without this mechanism, the base layer would be decoded at 30 fps instead of 15 fps. So, to keep the same video duration, *ImageToDisplay is set to 2: the frame is displayed, and it will be displayed again with *ImageToDisplay = 1.

Kind regards,
Médéric

On 14/03/2011 09:26, Mickaël Raulet wrote:
> With a temporal layer at a lower frame rate, we insert ghost pictures to give the same frame rate as the enhancement layers.
> Mickaël
>
> On 14 March 2011 at 09:23, Siyuan Xiang wrote:
>
>> Hi, all,
>>
>> I downloaded the latest source from svn; now all the frames can be flushed out.
>> I tried a 17-picture bitstream, but found that 18 pictures are displayed. The second displayed picture is a ghost picture (I checked *ImageToDisplay, which is equal to 2).
>> What is a ghost picture? Can I just ignore the ghost picture?
>>
>> By the way, I found the decoding is faster than before :)
>>
>> Regards,
>> Siyuan
>> ---
|
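[Editorial note] Putting the explanation above into code: a display loop built on top of the decoder could interpret the three values of *ImageToDisplay as sketched below. The names DemoFrame, display_frame() and handle_img_to_display() are purely illustrative and are not part of the decoder API; whether a player may simply skip the ghost picture (as Siyuan asks) depends on it handling frame timing itself.

#include <stdio.h>

/* Hypothetical frame holder and display routine, for illustration only. */
typedef struct { int width, height; } DemoFrame;
static void display_frame(const DemoFrame *f) { printf("show %dx%d\n", f->width, f->height); }

/* Interpret the *ImageToDisplay flag for one decoded NAL unit. */
static void handle_img_to_display(int img_to_display, const DemoFrame *frame)
{
    switch (img_to_display) {
    case 0:   /* no picture available for this NAL unit */
        break;
    case 1:   /* a new picture is ready: display it */
        display_frame(frame);
        break;
    case 2:   /* ghost picture: temporal scalability was detected in another
               * layer, so this is a repeated copy shown to keep a 15 fps base
               * layer on the same 10 s timeline as a 30 fps top layer */
        display_frame(frame);
        break;
    }
}

int main(void)
{
    DemoFrame f = { 1280, 720 };
    handle_img_to_display(2, &f);   /* the "ghost" case discussed in the thread */
    return 0;
}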