
Questions / Answers

Anonymous

Please post comments / questions and I'll try to give a good answer


Discussion

1 2 3 > >> (Page 1 of 3)
  • Anonymous

    Anonymous - 2014-09-22

    Originally posted by: DLJonsso...@gmail.com

    Summary : What do we need to do to add the FreeFrameGL "New Source Plugin" to the GLMixer menus?

    It appears we did not activate FreeFrameGL or OpenCV ?

    We do not see "New Source Plugin" or "New Source Device"

    Per previous communication: - If you have compiled with FreeFrameGL, you have the menu entry 'New Source Plugin' :

    this gives you the ability to use the freeframegl plugins (http://community.freeframe.org/plugindatabase) and to create sources from shadertoy GLSL code (see https://www.shadertoy.com/).

    - If you compiled with OpenCV, you have the menu entry 'New Source Device'

    : this allows adding webcams

    - If you compiled with Shared Memory support, you could possibly

    create a link with external programs which are generating graphics (there is example code).

    FreeFrameGL is of interest to us.

    What do we need to do to add the FreeFrameGL "New Source Plugin" to the GLMixer menus?

    Please advise

     
  • Anonymous

    Anonymous - 2014-09-22

    Originally posted by: DLJonsso...@gmail.com

    Are there any Linux example glm files that use FreeFrameGL plugins?

    Where is the example code for creating a link with external programs which are generating graphics?

     
  • Anonymous

    Anonymous - 2014-09-25

    Originally posted by: DLJonsso...@gmail.com

    What codecs are known to have the optimal quality/performance with GLMixer?

    Is there any roadmap for implementing the HAP codec in GLMixer on Linux?

    If not, how difficult do you think it would be to implement the HAP codec in GLMixer?

    In monitoring CPUs on 2 Linux dual-core workstations with Nvidia cards we see extensive CPU activity with GLMixer. It brings to mind the following questions...

    1. How are the processing resources distributed between the GPU and CPU(s) in GLMixer?

    2. Should upgrading GPU or CPU be a priority in attempting to get more capacity from GLMixer?

    AMD Phenom X2 / 8GB RAM / Seagate Hybrid 7200 RPM OS drive / 5200 RPM data drive / Ubuntu Studio 14

     
  • Anonymous

    Anonymous - 2014-09-25

    Originally posted by: DLJonsso...@gmail.com

    Should add we are running an Nvidia GT 630 on our test unit, with the Nvidia proprietary driver.

    AMD Phenom X2 3GHz / 8GB RAM / Seagate Hybrid 7200 RPM OS drive / 5200 RPM data drive / Ubuntu Studio 14

    Loading 2.5 GB across 7 AVI/MP4 files using a mix of H264 and HuffyUV codecs in constant playback, we see avg. 82% CPU usage on 2 CPUs and avg. 42% GPU utilization.

    Our 8 GB RAM is at about 40% usage in the scenario above.

    At this point the GLMixer frame loss indicator is at about 25% in the red.

    We do experience some front-end slowdown at this point in GLMixer. Adding filters at this point in a performance is not a comfortable option, due to potential lockup as the dual-core CPUs hit the ceiling. As well, rendering composites only offers us low frame rates of 9-15 FPS.

    It would be good to get more headroom out of GLMixer, but we are a little uncertain which upgrade would give us the better advantage, and how much leverage would be gained.

    For instance a 4-core CPU on the same motherboard seems an easy enough upgrade, as would a GTX 750 or GTX 780 Nvidia card.

     
  • Anonymous

    Anonymous - 2014-09-26

    Originally posted by: bruno.he...@gmail.com

    Here are some answers to the questions above:

    • Adding FreeFrameGL plugins : I added instructions in the Compilation wiki page
    • Example glm files for FreeFrameGL : I cannot provide example glm files because the path to the plugin file depends on your system
    • External program generating graphics and using shared memory : I added instructions in the Compilation wiki page for using the shared memory mechanism. Two programs are provided as example.
     
  • Anonymous

    Anonymous - 2014-09-26

    Originally posted by: bruno.he...@gmail.com

    Best codec for GLMixer: H.264 (MPEG). GLMixer uses x264, which is very fast at decoding and has quality encoding (NB: Handbrake (https://handbrake.fr/) is an excellent tool for configuring your encoding and tuning it for your needs).

    NB: you can get the list of supported video codec in menu option 'Help/Available Formats and Codecs'

    Support of the HAP codec: seems interesting considering the high playback performance (GPU decompression).

    However GLMixer is based on libavcodec (https://libav.org) which does not support this codec.

    Adding an implementation for the HAP codec:

    • The clean way : integrate the decoding into libavcodec. It seems complex though, because the codec is based on compressed packets to be decompressed on the GPU (a procedure which is not integrated into the libav pipeline). Couldn't really find an answer on the web.
    • The dirty way : create a specific source type for GLMixer. This means programming a new source class (inheriting from class Source) and implementing the multi-threading (ok, doable). The problem will be to enable the control of playback : the whole seeking & play speed control mechanisms are based on libav, and will not be available anymore.

     
  • Anonymous

    Anonymous - 2014-09-26

    Originally posted by: bruno.he...@gmail.com

    How are the processing resources distributed between the GPU and CPU/s in GLMixer?

    The reading, parsing and decoding of video files is done through libav. GLMixer has three threads for bridging libav to the openGL rendering, using asynchronous stack mechanisms to transfer data between them:

    • parsing thread : loop to get packets from file through libav.
    • decoding thread : loop to decompress packets into frames.
    • rendering timer in the main thread : loop to send the frames to display at the right time (relative speed)
      The display loop (OpenGL render) is finally updating the whole scene using the current frame for each video file.

    In addition, depending on the libav implementation of each codec, the decoding can be split into several internal threads (e.g. for x264).

    Should upgrading GPU or CPU be a priority in attempting to get more capacity from GLMixer?

    The bottlenecks for real-time decoding and rendering in GLMixer are rather

    • the transfer between CPU and GPU
    • the reading on the hard drive (e.g. SATA BUS)

    For the latter, it is a matter of compromise between file size (how much to read from the HD) and compression (read less data from the HD, but spend lots of CPU to decompress). High speed drives (e.g. SSD) are obviously better for reading many different files simultaneously. But this is rarely the cause of slow FPS or high CPU load IMHO.

    For the former, it is a question of the image size to be transferred from central memory (where frames are decompressed) to GPU memory (where they are displayed). Full-HD 1080p frames are heavy in RAM (32MB in RGBA 32), and, for instance, transferring 7 of them at 60 fps would require ~14 GB/s bandwidth. In practice, this computation is too simplistic and I do not know the details (depends on motherboard, internal bus speed, RAM speed, etc.), but this gives an idea of the limitations of your computer when looking at the fillrate and memory bandwidth columns of the Nvidia specs in http://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units ...

    The ways to solve this :

    • Use GPU decompression : transferring less data from CPU to GPU (compressed frames) and decompressing on the GPU (ok, it can handle it !!). For example the HAP codec above. I need a month of vacation to do this :)
    • You can upgrade the graphics card : the newest GeForce cards have high fillrate and bandwidth, and the Quadro series is even higher (e.g. Quadro K6000). But this kind of upgrade often requires changing the motherboard too (compatible internal BUS speed), and is very expensive :(
    • You can use lower resolution images : HD 720p images are half the size in memory. As GLMixer is automatically smoothing, and depending on the image content (small details vs. smooth surfaces), it might not be noticeable.
     
  • Anonymous

    Anonymous - 2014-09-27

    Originally posted by: bruno.he...@gmail.com

    In monitoring CPUs on 2 Linux dual-core workstations with Nvidia cards we see extensive CPU activity with GLMixer.

    High CPU activity is normal as several threads are running for the reading and decoding of videos. Dual core might indeed be a limitation. Upgrading to Quad Core CPU could help.

    Changing the graphics card should focus on the fillrate and bandwidth (as opposed to the processor speed and computational power, which is not so much used, as there is no geometrical operation, mostly raster operations). Looking at http://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units, you can see for example that the GTX 780 has a fillrate 4 times higher than the GTX 750.

    But the hardware upgrade is always the solution when one cannot optimize software :) !

    Many things can be done :

    • Setting the output resolution to 1080p (and 16:9 AR) does not require that all sources are at that resolution. Sources we want to zoom in on should be high res., sources we put smaller should be lower res., and sources with content full of blurs and soft surfaces could be low res. too.
    • shared CPU-GPU decompression could help (e.g. HAP codec) (although if you want to apply filters and plugins, better save a bit of GPU).
    • an important optimization for GLMixer could be to decouple the rendering of the interface from the rendering of the output. But this is not that simple.

    We do experience some front end slowdown at this point on the GLMixer, adding filters at this point in a performance is not a comfortable option

    Please note that the GPU filtering is already operating on your sources for the basic options, and does not require more power if using them (all gamma adjustments and color manipulation in the Mixing toolbox). Only adding filtering (e.g. erosion, dilation), and even more so the external OpenGL plugins, would add load to the GPU.

    NB: only the resolution of the source is a factor of GPU load for the filtering and plugins. The number of pixels increases fast with resolution !!

     
  • Anonymous

    Anonymous - 2014-09-27

    Originally posted by: bruno.he...@gmail.com

    Using GPU accelerated H.264 decoding with GLMixer

    Since early 2014, libav natively integrates VDPAU (http://fr.wikipedia.org/wiki/VDPAU), which uses the nvidia drivers to decode MPEG streams (MPEG1 to 4, including H.264) on the GPU.

    To check that VDPAU works on your system, install vdpauinfo and run it. For instance on my linux box (nvidia 331 native drivers) it says:

    Information string: NVIDIA VDPAU Driver Shared Library  331.38  Wed Jan  8 19:13:15 PST 2014
    
    Video surface:
    
    name   width height types
    -------------------------------------------
    420     4096  4096  NV12 YV12 
    422     4096  4096  UYVY YUYV
    
    Decoder capabilities:
    
    name               level macbs width height
    -------------------------------------------
    MPEG1                 0  8192  2048  2048
    MPEG2_SIMPLE          3  8192  2048  2048
    MPEG2_MAIN            3  8192  2048  2048
    H264_MAIN            41  8192  2048  2048
    H264_HIGH            41  8192  2048  2048
    ...
    

    It seems to be automatically enabled in GLMixer if you compile with recent libavcodec and libavformat libraries (e.g. install the libavcodec-extra-54 package). You can also compile the libav libs yourself and ensure the --enable-vdpau flag is set at the configuration stage.

     
  • Anonymous

    Anonymous - 2014-09-27

    Originally posted by: bruno.he...@gmail.com

    Are there any Linux example glm files that use FreeFrameGL plugins?

    Please check tutorial 12 to learn how to add FreeFrameGL plugins (and more): https://vimeo.com/album/2401475

     
  • Anonymous

    Anonymous - 2014-09-29

    Originally posted by: triptofa...@gmail.com

    Hello Bruno. I'm Esperanza, working with Darrell Jónsson.

    I have VDPAU support working already on my AMD/ATI Radeon (4000 series). GLMixer has been compiled with libavcodec 55.69.100 and, checking the supported codec list, it seems that VDPAU acceleration is included. However, I'm still curious to know if it's possible to check whether GLMixer is using VDPAU decoding or not while playing videos. I'm now running GLMixer from the shell, doing a test with a .mp4 with the h264 codec, but it seems that the command line doesn't show any kind of relevant output for telling if VDPAU decoding is being used or not; it just tells about h264 being used.

     
  • Anonymous

    Anonymous - 2014-10-01

    Originally posted by: triptofa...@gmail.com

    Also, during this week I recompiled GLMixer to make use of shared memory. I tried the test programs and they work fine. But now I would like to use my own programs and "link" them with GLMixer by this method; however, I'm not sure how to proceed. Do I need some kind of API or something like that?

     
  • Anonymous

    Anonymous - 2014-10-01

    Originally posted by: bruno.he...@gmail.com

    How to proceed to create a program that links with GLMixer through Shared Memory

    You should create a C++ program which links with Qt (lib Qt Core) and SharedMemoryManager.cpp. It should also include SharedMemoryManager.h. See here for details :

    https://code.google.com/p/glmixer/source/browse/trunk/SharedMemory/SharedMemoryManager.h

    Your program should :

    1) Capture a first frame or setup your image buffer with appropriate size and color depth

            QImage _image("typicalimage.png");
    

    2) Create a QSharedMemory object

        qint64 id = QCoreApplication::applicationPid();
        QString m_sharedMemoryKey = QString("programname%1").arg(id);

        QSharedMemory *m_sharedMemory = new QSharedMemory(m_sharedMemoryKey);
        // allocate the shared segment (required before writing into it)
        m_sharedMemory->create(_image.byteCount());
    

    3) Declare itself to the SharedMemoryManager

            QVariantMap processInformation;
            processInformation["program"] = "programname";
            processInformation["size"] = _image.size();
            processInformation["format"] = (int) _image.format();
            processInformation["opengl"] = false;
            processInformation["info"] = QString("My program info");
            QVariant variant = QPixmap(QString::fromUtf8(":/root/myicon.png"));
            processInformation["icon"] = variant;
            processInformation["key"] = m_sharedMemoryKey;
    
            SharedMemoryManager::getInstance()->addItemSharedMap(id, processInformation);
    

    4) Loop for filling in _image and copying it to RAM

        // e.g. you have a temporary QPixmap _buffer with your image;
        // convert it to the QImage declared above at each iteration
        _image = _buffer.toImage().convertToFormat(QImage::Format_RGB888);
    
            m_sharedMemory->lock();
            memcpy(m_sharedMemory->data(), _image.constBits(), qMin(m_sharedMemory->size(), _image.byteCount()));
            m_sharedMemory->unlock();
    

    5) Unregister from the SharedMemoryManager and free the QSharedMemory

           SharedMemoryManager::getInstance()->removeItemSharedMap(id);
           m_sharedMemory->detach();
       delete m_sharedMemory;
    

    I recommend following the example of the ScreenMix program, which also uses a QTimer for a configurable update rate. https://code.google.com/p/glmixer/source/browse/trunk/SharedMemory/ScreenMix/

     
  • Anonymous

    Anonymous - 2014-10-01

    Originally posted by: triptofa...@gmail.com

    Right now I was taking a look at the source files provided for ScreenMix and Scribble, and I was about to edit my comment to say I somehow figured out how to proceed. But your comment clears up what needs to be done, so thank you very much :)

     
  • Anonymous

    Anonymous - 2014-10-01

    Originally posted by: DLJonsso...@gmail.com

    After weeks of use all of a sudden my output and preview windows are black.

    Why would both preview and output Windows be black even if multiple sources are mixed to the center of the mixing window?

    I've tried resetting to factory/default settings but see no difference.

     
  • Anonymous

    Anonymous - 2014-10-02

    Originally posted by: bruno.he...@gmail.com

    Complete reset GLMixer

    In case of an inexplicable problem (e.g. 'all of a sudden my output and preview windows are black'), a complete removal of the GLMixer preferences should restore the program to its initial installation state:

    stop GLmixer, remove the folder ~/.config/bhbn and run GLMixer again

     
  • Anonymous

    Anonymous - 2014-10-14

    Originally posted by: DLJonsso...@gmail.com

    In preparing video sequences, you have mentioned that GLMixer ingests H.264 encoding best, which brings up another question.

    - Does it matter if the file is in an MP4, AVI or MKV (Matroska) container?

     
  • Anonymous

    Anonymous - 2014-10-14

    Originally posted by: bruno.he...@gmail.com

    To my knowledge, the container should really not matter.

     
  • Anonymous

    Anonymous - 2014-10-31

    Originally posted by: DLJonsso...@gmail.com

    What may cause GLMixer to sometimes refuse to load video clips into an existing session?

    The video clips in question are relatively small and have the same codec as the other clips in the session; also, RAM/CPU and GPU have plenty of free resources when this happens, so I am puzzled as to why the session can't load any more clips. I found a workaround by adding clips to a new session and then appending the main session, but it would be good to know why it does not load directly.

     
  • Anonymous

    Anonymous - 2014-11-01

    Originally posted by: bruno.he...@gmail.com

    I am puzzled too by the problem you report. Could you please report an issue with more details ? https://code.google.com/p/glmixer/issues/list

     
  • Anonymous

    Anonymous - 2014-11-02

    Originally posted by: DLJonsso...@gmail.com

    What may cause GLMixer to slow down and often lock up when accessing swap memory on Linux?

    We try to keep the size of our sessions to 4 or 8 GB depending on which machine we are using. Still, we find on 2 machines, in performances and rehearsals, that if GLMixer's RAM contents spill to the swap drive, even for a few seconds, slowdown and lockup usually occur.

    We are using a 7200 RPM Seagate Momentus Sata III drive on a Sata III bus - for OS and swap space. We were wondering if moving our Swap drive to another physical hard drive might resolve this, but wanted to ask here in case anyone has found this or another solution to the issue.

     
  • Anonymous

    Anonymous - 2014-11-02

    Originally posted by: DLJonsso...@gmail.com

    VDPAU on Ubuntu not enabled?

    The issues we mentioned earlier about not seeing VDPAU working with GLMixer on multiple machines continues to be a puzzle.

    Being able to use VDPAU would be a major help. Chatter on the internet remains confusing, and it appears from this expert discussion that VDPAU is not enabled on Ubuntu.

    "Ubuntu Will Not Enable Open-Source VDPAU Support" http://www.phoronix.com/scan.php?page=news_item&px=MTYwNzU

    I'm wondering:

    Have any GLMixer users tried a different distro with GLMixer, where VDPAU works more or less out of the box?

    Have any GLMixer users successfully gotten VDPAU to work with Ubuntu?

     
  • Anonymous

    Anonymous - 2014-11-02

    Originally posted by: bruno.he...@gmail.com

    VDPAU on Ubuntu not enabled?

    The answer is from what I found not about Ubuntu, but about ffmpeg / libav and the client program (i.e glmixer).

    I found some programs (like mplayer) that implemented a front-end using the (early) libav implementation of vdpau. But this is a mess... it's not a judgment, it's an observation of the authors themselves, as they push the packets from CPU to GPU for decompression, and push them back from GPU to CPU RAM for display. Not to mention the complication in data structure sharing for keeping the time stamps...

    Therefore unfortunately I couldn't easily implement vdpau for GLMixer. I want to avoid the transfer back from GPU to CPU memory and display frames directly after decoding on GPU, but I found no example code. I would need more time to dig into this. Help welcome.

    I was rather looking at NVidia native API for decoding/encoding : https://developer.nvidia.com/nvidia-video-codec-sdk. This looks easier.

     
  • Anonymous

    Anonymous - 2014-11-02

    Originally posted by: bruno.he...@gmail.com

    What may cause GLMixer to slow down and often lock up when accessing swap memory on Linux?

    First, thank you for your tolerance and understanding of these performance limitations of GLMixer: these are important issues, but they are hard to reproduce and optimize.

    Tests on my machine (8GB, i7 3.4 GHz, NV555) indeed show that once you hit the RAM limit and the system starts swapping, everything slows down, and this has a long-lasting effect. I have no idea how to optimize swap on Linux, but changing the partitions or drive location does not look like a solution.

    That said, decoding performance seems to be a lower limit than RAM/swap: on my PC GLMixer can decode and display 5 FullHD videos (1920x1080, H264) simultaneously at 60 fps and use (only) 2.3 GB RAM. One more video playing and it's down to 30 fps (and a little more RAM, not swapping).

    This limit is obviously hardware dependent. Solutions or workarounds I can see:

    • Keep the number of simultaneously playing videos below your HW limit. To keep more videos in the workspace, use the second ring of the mixing view to keep clips ready but on hold : https://vimeo.com/album/2401475/video/67089012
    • Lower the resolution of (some) videos (this is the most effective). NB: the output window can maintain the high resolution, OpenGL smoothing giving better results than video projector smoothing.
    • Put the videos in a RAM drive : http://www.jamescoyle.net/how-to/943-create-a-ram-disk-in-linux (not actually sure it helps...).

    But obviously, GLMixer should help maintain good performance and provide tools for optimization. Here are some ideas from my stack of TODOs:

    • nvidia native CUDA-Based Video Decoding plugin (freeframe source)
    • a mechanism to stop decoding videos when out of sight while keeping the clock ticking
    • multi-threading of openGL rendering / separation of GUI and output rendering
    • monitor the RAM resources in addition to fps.

    Sorry that this does not really solve your problems...

     
  • Anonymous

    Anonymous - 2014-11-03

    Originally posted by: DLJonsso...@gmail.com

    Thanks for your reply which in fact does forward us to solutions to our problems in fine tuning how we use this tool.

    Should also say for any new users here.

    Presently, with much of the VJ community using primarily 720p or lower resolution, much of this discussion may be a non-issue for immediate use, in sets primarily involving short loops and loop manipulation.

    Still, by carefully using 1080p content in sequences up to 2-10 minutes in length, we have been able with GLMixer to produce 2 shows so far for live audiences on Ubuntu.

    To do this, on a smaller laptop machine with 4GB RAM and a dual-core CPU we have mixed 720p and 1080p H264 files to good effect.

    On faster 8GB RAM machines with 2 to 6 cores we have managed using HuffyUV and lossless video. Being based primarily on experimental film transfers, we do not need 30 FPS but are satisfied with 6-24 FPS; we are not pushing FPS performance.

    Our performances have been made more robust with the following considerations...

    Noting a nearly 1 to 1 relationship between session size (the clips in it) and RAM capacity, we allow for about 10% RAM headroom.

    Meaning about 5-8 files averaging 750MB on an 8GB RAM system, or 6 files averaging 500MB on a 4GB system.

    On Linux we notice...

    Little or no tolerance for exceeding RAM capacity with session size.

    Best to stay below the CPU usage ceiling as well, to avoid lockup during live performance.

    We have been thinking of using an extra monitor for keeping a System Monitor window open, to best track resource usage.

    Limit our live composites to no more than 3 sources.

    We only use cloned sources during rehearsals and experiments. We then render these ideas into video files to be loaded as direct sources as we find cloned sources disable stopping loops, limiting our capacity to control RAM use.

    We are experimenting with using GLMixer as a live editor/rendering to RAW video for composited sequences to be used in live performances.

    Bear in mind we are pushing the envelope by the following.

    Using long sequences of 2-4GB as well as shorter loops. Our technique of long sequences (scenes) is more closely related to theatrical presentation than what is commonly thought of as strictly visual music, video art, or what has more recently become known as VJ application.

    Given all that -- GLMixer has proven a useful tool, now tested several times by ourselves and affiliates in live performance. Also as a live editing compositing tool, it is effective for fluid manipulation of images for rendering into sequences to edit or mixed with other software.

    Per the future of VDPAU, HAP and better GPU usage on Linux: that seems to be more dependent on the Linux and proprietary driver developers than on GLMixer. Perhaps it's a problem that will solve itself, and/or RAM/CPU prices will drop, making it a non-issue for most users.

     
