Hi does snowmix work on arm?
Any future plans if not?
Can you describe a scenario where Snowmix on an ARM based architecture would make sense?
What input streams would you want in terms of bandwidth, geometry and fps?
What mixer geometry and frame rate would you want?
What would you do with output?
Can you describe a scenario where Snowmix on an ARM based architecture would make sense? An NVR for IP cams, with some nice features provided by Snowmix.
What input streams would you want in terms of bandwidth, geometry and fps? 5 IP cams over network/WiFi, geometry 640x480, 15fps.
What mixer geometry and frame rate would you want? 640x480, 10fps.
What would you do with output? Output to screen or to file.
I assume NVR means Network Video Recorder. What compelling reasons would you have for using an ARM based CPU instead of an x86 based architecture?
I would doubt that an ARM based computer would be able to decode 5 streams of 640x480@15fps. Let's assume each stream is 700kbps H.264, 1Mbps MPEG-4 part 2 or 1.5Mbps MPEG-2. That's 3.5-7.5Mbps in total. An ARM could receive that, but decoding it into raw I420 video is doubtful. Ignoring the processing power needed, you need to move 640x480x15x1.5x5 bytes to get raw video in I420. That's about 34MB/sec. Then you need to convert it to ARGB. That's 640x480x15x4x5 = 92MB/sec. Moving it into Snowmix uses shm, so that's not an issue. Then inside Snowmix, you need to move 640x480x10x4 = 12MB/sec. Then you need to move it out, but that's not a problem using shm. Then you need to convert from ARGB to I420. That's an additional 12MB/sec, and then to the screen, that's an additional 4.6MB/sec.
34.6MB + 92.2MB + 12.3MB + 12.3MB + 4.6MB ≈ 156MB/sec
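As a sanity check, the per-stage figures above can be reproduced with a small shell sketch. The 1.5 and 4 bytes-per-pixel assumptions are for I420 and ARGB respectively; everything else is the 5-feed scenario described earlier:

```shell
# Rough bandwidth estimate: 5 feeds at 640x480@15fps in, mixer at 640x480@10fps.
w=640; h=480; feeds=5; fps_in=15; fps_out=10

decode_i420=$(( w * h * fps_in * 3 / 2 * feeds ))   # decoded raw input (I420, 1.5 B/px)
to_argb=$((     w * h * fps_in * 4     * feeds ))   # converted to ARGB (4 B/px)
mix=$((         w * h * fps_out * 4 ))              # moved inside Snowmix
to_i420=$((     w * h * fps_out * 4 ))              # read back for ARGB -> I420
screen=$((      w * h * fps_out * 3 / 2 ))          # I420 frames to the screen

total=$(( decode_i420 + to_argb + mix + to_i420 + screen ))
awk "BEGIN { printf \"%.1f MB/sec\\n\", $total / 1000000 }"  # prints 155.9 MB/sec
```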
I seriously doubt that most ARM architectures have that capability for moving data around, and most ARM CPUs can only decode 1-2 streams at most.
Do you have an ARM platform that can do that?
I don't have an ARM platform available for development, but if one was made available, I would quite likely consider porting Snowmix for ARM though.
Could you try to simulate Snowmix by decoding 5 streams on whatever ARM platform you have available and post the results? It should be something like 5 of these two pipelines.
gst-launch -v udpsrc ! decodebin ! shmsink
gst-launch -v shmsrc ! queue ! fakesink
How many of these sets of pipelines can you run?
I was able to simulate 16 feeds of the above, replacing udpsrc with videotestsrc, on an Odroid U2.
videotestsrc ! decodebin ! shmsink socket-path=/tmp/gX
shmsrc socket-path=/tmp/gX ! queue ! fakesink
It would be nice to port the project to ARM. Let me know if i can help in any way making that happen.
That doesn't tell you much. Decodebin in your pipeline does nothing except pass a pointer to a buffer, because videotestsrc already produces raw video. Also, you are running 320x240 by default unless you set up something in caps. You need to test with an external video stream encoded at 640x480@15fps. That will put some load on your TCP/IP stack and definitely put some load on your decoder.
Let me know how it goes.
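Following that advice, a heavier test might look something like the sketch below, which actually encodes and decodes a 640x480@15fps H.264 stream instead of passing raw buffers through decodebin. Element names here assume GStreamer 1.x (gst-launch-1.0, x264enc, avdec_h264), and the bitrate value is just the 700kbps figure mentioned earlier; on an actual ARM board a hardware decoder would likely replace avdec_h264:

```shell
# First pipeline: generate, encode and decode a realistic 640x480@15fps stream
# (a sketch; x264enc/avdec_h264 are software codecs and may need to be swapped
# for platform-specific elements on ARM).
gst-launch-1.0 videotestsrc ! \
    video/x-raw,width=640,height=480,framerate=15/1 ! \
    x264enc bitrate=700 tune=zerolatency ! h264parse ! avdec_h264 ! \
    shmsink socket-path=/tmp/gX wait-for-connection=false

# Second pipeline: drain the shared-memory segment, as in the original test.
gst-launch-1.0 shmsrc socket-path=/tmp/gX ! queue ! fakesink
```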
Is it hard to port Snowmix to ARM? With the new quad cores it is performing pretty well, and the chips come small in size, so they are easier for mobility.
We did get 5 live 720p feeds working on an Odroid U2.
Does it take too much time to port it?
It's hard for me to port because I don't have access to an ARM platform with reasonable performance. Otherwise it will not be hard to port. It'll go faster if someone can lend me a piece of hardware or give me network access to one.
However, to get any reasonable performance out of it, libcairo, libpango and GStreamer MUST have good hardware acceleration support. Even if that is the case though, I am still not convinced that your ARM platform can move enough MB of data per second between memory and CPU for it to be worth anything. But we might get there one day.
Then when you have mixed the video, you need to encode it without using 100% CPU. Does your platform have hardware support for video encoding and still leave plenty of CPU for other tasks?
Now you said you got 5 live 720p feeds working. What does that mean?
Does it take a long time to port? Probably not, but it does take some time to test and debug. Performance aside, endianness is one thing to watch: x86 is little endian, and while ARM is bi-endian and most ARM Linux systems also run little endian, there could be something with BGRA and ARGB pixel formats that needs to be cleared up. Snowmix would still work, it would just look a little weird.
Another important difference is SIMD acceleration: MMX/SSEx and so on for x86 versus NEON on ARM, but that is all hidden in libcairo/libpango and in liborc for GStreamer. The ARM hardware acceleration code may not be as mature as the x86 code, but it might work to some extent.
Thank you for your reply. I can actually provide you access to my machine, and you can do whatever you want with it for testing purposes.
I can also help in porting and assistance if needed.
Please let me know when is a good time to share the credentials.
GStreamer does provide an OMX plugin using the GPU on the board, which makes it encode/decode even better than an equivalent x86.
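For reference, a hypothetical receive-and-decode pipeline using a gst-omx hardware decoder might look like the sketch below. The element name omxh264dec and the RTP caps are assumptions here; the exact decoder element and stream setup vary per board:

```shell
# Hypothetical hardware-decode pipeline via gst-omx (element names differ per
# platform; omxh264dec is what e.g. a Raspberry Pi provides).
gst-launch-1.0 udpsrc port=5000 \
    caps="application/x-rtp,media=video,encoding-name=H264,payload=96" ! \
    rtph264depay ! h264parse ! omxh264dec ! \
    shmsink socket-path=/tmp/gX wait-for-connection=false
```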
Please send me your email address through a private mail here: http://sourceforge.net/u/pmaersk/profile/
Then we can exchange information needed.