Hm, this is giving me some food for thought. After some additional testing, it seems the bottleneck has been at both the network level and the camera level (at least, I think so). I put my 10/100 switch back in and, sure enough, things were acting slow just as they did before when I had multiple streams going. Once back on the gigabit switch, things worked significantly better. I'm not sure if it's just the switch moving the data faster that helps or what, because the cameras do indeed have 10/100 ports on them.

I did the jpg test and most images were in the 180 KB area. Using 200 KB as a "worst case scenario" type of number, that comes out to roughly 5.8 MB/s per camera at 1280x800 and 30 fps. The max possible on a 10/100 line is 12.5 MB/s, so 5.8 MB/s isn't hitting the 10/100 throttle at the camera level. However, perhaps my 10/100 switch having to handle two cameras at 5.8 MB/s each, consistently, was enough to make the Netgear seem significantly slower with the teleporting, whereas the Dell (gigabit) worked much better. During my typical Motion usage I do not capture snapshots; I simply capture 1 fps timelapses and avi files of the recorded motion, typically at 15 fps (this 30 fps kick I'm on is really just for comparisons).
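To put rough numbers on that two-camera scenario (plain shell arithmetic, using the ~200 KB worst-case frame size above):

echo "scale=1; 200 * 30 / 1024" | bc        # one camera: ~5.8 MB/s
echo "scale=1; 2 * 200 * 30 / 1024" | bc    # both cameras: ~11.7 MB/s

So while neither camera's own 10/100 port is anywhere near its limit, a single 100 Mbit link carrying both streams toward the server would be sitting just under the 12.5 MB/s ceiling, which could explain why the Netgear struggled with two streams while the gigabit Dell doesn't.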

That said, I'm finding some strange things here. Last night I had both cameras running at 30 fps with webcam_maxrate 30 and I was streaming both of them simultaneously. There was zero skipping, absolutely zero. When cars would go flying by I saw no skipping at all, both live while I was watching and even when I played the .avi files back on the server. Today things are much different and the skipping is far more apparent. Now granted, I was tinkering around with some network stuff. Perhaps the difference is night vision's black-and-white (and, I assume, lower-detail) feed versus daytime, where there's much more going on in the frame. Hard to say.

Today I decided to remove Motion from the picture, so I disabled the service and just began streaming the direct mjpg URL. There are three fields to work with when it comes to the mjpg streams: Resolution, Framerate, and Quality. We're working with 1280x800 at 30 fps here, and I have the quality set to "excellent." I've been doing some comparisons on that quality field; the available options are Medium, Standard, Good, Detailed, Excellent. At 30 fps and 1280x800, excellent has skipping. Again, no Motion here - just a direct mjpg URL streaming over cat6 lines and a gigabit switch. If I clock it back to 30 fps at good (instead of excellent), I see no skipping at all. If I swap it the other way and do 20 fps at excellent, same deal - no skipping. I guess using excellent together with the highest fps possible is what's tanking it.

Networking-wise, I've been doing some tracking based on the different settings possible. Even using excellent at 30 fps and 1280x800, I'm still barely hitting a consistent 5.0 MB/s; it's more like an average of 4.3 MB/s with an occasional bump into the 4.9 area. Even on 10/100, the max throughput given zero noise etc. is 12.5 MB/s, which I'm still far off from even when you take some headroom into consideration. I'm really beginning to lean towards the camera's processor struggling to crunch "excellent" quality at 30 fps as the root cause. Couple that with Motion and you have quite a few cooks in the kitchen who might be responsible, but since the skipping shows up even with Motion out of the picture entirely, it doesn't look like Motion really played a key role in this at all.
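For anyone who wants to reproduce those MB/s readings, a quick-and-dirty way (assuming curl and pv are installed; the address is just a placeholder for the camera's real IP) is to dump the stream and watch the transfer rate:

# -r shows the current rate, -a the running average; Ctrl-C to stop
curl -s http://ip.of.camera/video4.mjpg | pv -ra > /dev/null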

I think what I'm going to do is just throttle the fps settings back, as that tends to be a "best of both worlds" type of situation. Even at 15 fps, I'm still getting fifteen shots a second - that should be more than enough in case Mr. Burglar decides he wants my grill.
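For reference, if I'm remembering the Motion 3.x option names correctly, throttling back is just a matter of something like this in motion.conf (or in each camera's thread file):

# cap the rate Motion captures from the camera
framerate 15
# cap the rate of Motion's built-in mjpeg stream
webcam_maxrate 15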

Thanks for the discussion Bob. This got some gears turning!

On Sat, Nov 24, 2012 at 3:50 AM, Bob Bob <> wrote:

Originally they were purchased and set up as WiFi devices. As the system evolved, though, 3 of them became directly Ethernet connected.

Okay, I understand what your test setup was. I assume your network topology is gigabit from PC to switch, but are the cameras only 100Mbit or running WiFi? I also assume the Ubuntu box is running both motion and the web browser? ie you are only seeing camera-to-PC traffic. If you have two PCs in the test setup that complicates the analysis.

In terms of bandwidth calculation you have 30fps times two. What we don't know at this point is the size of each MJPG frame. That will be resolution and quality dependent. A good way to estimate is to take a saved motion image and try re-saving it at varying qualities that align with the motion config and camera specs. What is the camera resolution etc set to? If you are running 1280x1024 for example you can easily saturate a 100MBit/sec LAN at 30fps.
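For example, with ImageMagick installed, re-saving one captured frame at a few quality levels (filenames here are just placeholders) gives you a feel for the per-frame size:

# re-save one frame at several JPEG quality levels and compare the sizes
for q in 60 75 85 95; do convert frame.jpg -quality $q frame-q$q.jpg; done
ls -lh frame-q*.jpg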

I get the impression that the stage 3 100% CPU usage was probably the root cause of less Ethernet throughput when compared against stage 4! ie two cameras, two streams and teleporting. You can prove that by seeing if you get a similar looking bump in network throughput at a lower frame rate. (ie the CPU usage will be less and the network throughput will be flatter)

The RAID array? Well your best indicator is the activity LED! I don't think software RAID gets upset by high CPU usage either. I assume it will just wait in the buffer a bit longer. You can test the disk channel in sequential mode pretty easily with dd too.

dd if=/dev/zero of=FileNameOnRAID bs=1024K count=100

dd if=FileNameOnRAID of=/dev/null bs=1024k count=100

ie a 100MByte file. dd tells you the rate in MB/sec when completed. You can play with the write buffer size and elevator algorithm too. My software RAID10 (6 15KRPM SCSI320 disks) gets about 300-400MBytes/sec.

This is where you theorise about the maximum write rate that motion will do. ie assume there is a change at every frame and see if that number gets close to the disk rate. If however you only get a sporadic HDD LED flash during motion usage then that isn't the problem.

Like I said I think you are getting CPU bound and that is throttling the video input.


On 24/11/12 16:59, Jason Sauders wrote:
Bob, thanks for your response. I do have a question for you. Were all 6 of these cameras wireless, or did you have wired cameras on the LAN as well? I have to assume they were wireless; based on that plus everything else you said, yeah, I can imagine it was a bit of a headache.

I'm not sure how definitive this is or if it really helps paint a clearer picture, but I decided to conduct a quick 1 minute test here to see how certain streams compare. I utilized the System Monitor built into Ubuntu, watched it closely, and took a screenshot at the end. I wanted to see if there was any sort of performance hit when I browsed to Motion's webcam URLs. I bookmarked two pages in my toolbar: one with the custom HTML/CSS page I made, which streams both cameras on the same screen (so I can see the front and rear camera in one page), and the other a direct stream of just my rear camera, so not two cams like the first one, only one camera. That second one was pulling from the exact same URL Motion is set to (netcam_url), i.e. the camera's direct video4.mjpg stream. If anything, this would suggest that the video4.mjpg URL would have a greater chance of performing better, since it's one camera instead of two.

My plan was to break up the tests into 10 second segments. Sure, not overly scientific but I still think the results were some food for thought. My webcam_maxrate was set to 30 fps, the cameras themselves were set to 30 fps, and each thread file for each camera was set to 30 fps. Overall, 30 was the ideal target because it was the heaviest fps setting possible. The plan was this:

Stage 1 - Motion disabled, no streaming.
Stage 2 - Motion running, no streaming.
Stage 3 - Motion running, streaming Motion's webcam_url for the front and rear cameras simultaneously.
Stage 4 - Motion running, streaming the direct video4.mjpg URL of only the rear camera.
Stage 5 - Motion running, no streaming.
Stage 6 - Motion disabled, no streaming.

The System Monitor screenshot:

Based on the seconds counter just below the Network History graph (60, 50, 40, etc.) the different stages go like this:

Stage 1 - 60 to 55
Stage 2 - 55 to 45
Stage 3 - 45 to 35
Stage 4 - 35 to 15
Stage 5 - 15 to 5
Stage 6 - 5 to 0

If you look at the graph, you can see stages 2 and 3 were identical the entire time. This suggests there's no additional network traffic being pulled from the camera to handle Motion's webcam stream, as I had touched on earlier. Once stage 4 hit, you can see some additional network traffic appear on the scale, plus my RAM begins taking an odd up-and-down series of hits as well. After that it's pretty self-explanatory.

One thing I thought was interesting: last time, when I had the choppiness issue, I thought for sure the camera was getting stressed because it was pushing out two 30 fps streams. This kind of irked me because the camera has 4 stream presets, so it seemed weird that it would get bottlenecked that badly by 2x30fps. That being said, I just remembered I swapped a 10/100 switch out for a gigabit switch a few nights ago. Just now, when I tried to duplicate the skipping I noticed before, I was unable to, which suggests the gigabit switch likely solved that issue and it wasn't necessarily the camera itself. I guess because I thought it was the camera being overloaded that was causing the skipping, I had kind of forgotten that I did the switch swap.

Still, it doesn't take away from the fact that utilizing Motion's built-in web server seems to be lighter duty on the network than streaming the cameras' direct URLs, at least based on my findings. I can't even recall why I did this, but in the custom HTML page I made, both cameras were originally streaming directly from their mjpg URLs. I have since switched them to ip.of.server:8081 and ip.of.server:8082, and I just felt the performance was a bit better when using the Motion web server. It seemed to be a bit smoother when a car drove by, whereas with the video4.mjpg direct stream from the camera it certainly worked decently, but I felt as though I could notice some hesitation here and there as the feed was displayed. I have to wonder if this is Motion being smart enough to simply pass the stream it has already acquired directly to the viewer, instead of making the camera fire out a secondary stream, as happens when running the regular Motion process plus streaming the direct mjpg URL to the browser.
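For reference, those two ports are just the per-thread webcam_port values in the Motion config (the thread file names below are only examples):

# thread1.conf (front camera)
webcam_port 8081
# thread2.conf (rear camera)
webcam_port 8082

Pointing the HTML page at ip.of.server:8081 and :8082 means the browser is only ever talking to Motion rather than to the cameras directly.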

Beyond this point I still have to wonder what the next bottleneck is. I have to assume it's my software RAID array writing to the hard drives. Part of me wants to get an SSD, put the OS on it, and point Motion to write its data there as well, then once a night have a bash script move the data over to a fat RAID array in the system. That way I get the write speed of an SSD while retaining the fat RAID array for long-term storage.
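The nightly move itself could probably be as simple as a cron entry along these lines (paths are made up, and a plain mv in a bash script would work just as well as rsync):

# 3 AM every night: shift the day's captures from the SSD over to the RAID array
0 3 * * * rsync -a --remove-source-files /ssd/motion/ /raid/motion/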

Anyway, like I said, nothing overly scientific, but it brings enough of a visual to the table to suggest that gigabit is your friend and that Motion's built-in web server seems to be a bit lighter on the network than direct camera URL streaming.

As always, thanks for the insight. It's appreciated.
