From: Bob B. <bob...@bi...> - 2012-11-24 08:50:54
Jason,

Originally they were purchased and set up as WiFi devices. As the system evolved, though, 3 of them became directly Ethernet connected.

Okay, I understand what your test setup was. I assume your network topology is gigabit from PC to switch, but are the cameras only 100 Mbit or running WiFi? I also assume the Ubuntu box is running both motion and the web browser, i.e. you are only seeing camera-to-PC traffic. If you have two PCs in the test setup, that complicates analysis.

In terms of bandwidth calculation you have 30 fps times two. What we don't know at this point is the size of each MJPG frame; that will be resolution and quality dependent. A good guess is to edit a saved motion image and try saving it at varying qualities that align with the motion config and camera specs. What is the camera resolution etc. set to? If you are running 1280x1024, for example, you can easily saturate a 100 Mbit/sec LAN at 30 fps (some rough numbers at the bottom of this mail).

I get the impression that the stage 3 100% CPU usage was probably the root cause of the lower Ethernet throughput compared against stage 4, i.e. two cameras, two streams and teleporting.. You can prove that by seeing if you get a similar-looking bump in network throughput at a lower frame rate (i.e. the CPU usage will be less and the network throughput will be flatter).

The RAID array? Well, your best indicator is the activity LED! I don't think software RAID gets upset by high CPU usage either; I assume the data will just wait in the buffer a bit longer. You can test the disk channel in sequential mode pretty easily with dd too.

Writing:
  dd if=/dev/zero of=FileNameOnRAID bs=1024K count=100
Reading:
  dd if=FileNameOnRAID of=/dev/null bs=1024K count=100

i.e. a 100 MByte file. dd tells you the rate in MB/sec when completed. You can play with the write buffer size and elevator algorithm too. My software RAID10 (6 x 15K RPM SCSI-320 disks) gets about 300-400 MBytes/sec.

This is where you theorise about the maximum write rate that motion will do, i.e. assume there is a change at every frame and see if that number gets close to the disk rate (again, see the bottom of this mail). If however you only get a sporadic HDD LED flash during motion usage, then that isn't the problem.

Like I said, I think you are getting CPU bound and that is throttling the video input.

Bob

On 24/11/12 16:59, Jason Sauders wrote:
> Bob, thanks for your response. I do have a question for you. Were all 6 of these cameras wireless cameras, or did you have wired cameras on the LAN as well? I have to assume they were wireless, which, based on that plus everything else you said, yeah, I can imagine it was a bit of a headache.
>
> I'm not sure how definitive this is or if it really helps paint a clearer picture, but I decided to conduct a quick 1 minute test here to see how certain streams compare. I used the System Monitor built into Ubuntu, watched it closely, and took a screenshot at the end of it. I wanted to see if there was any sort of performance hit when I would browse to my Motion webcam URL. I bookmarked two sites in my toolbar: one with the custom html/css page I made, which streams 192.168.1.15:8081 and 192.168.1.15:8082 both in the same screen (so I can see the front and rear camera in one page), and the other a direct stream of my rear camera, so it wasn't two cams like the first one, but only one camera. It was streaming to the exact same URL Motion is set to (netcam_url), which is 192.168.1.11/video4.mjpg.
> If anything, this would suggest that the video4.mjpg URL would have a greater chance of performing better, since it's one camera instead of two.
>
> My plan was to break up the tests into 10 second segments. Sure, not overly scientific, but I still think the results were some food for thought. My webcam_maxrate was set to 30 fps, the cameras themselves were set to 30 fps, and each thread file for each camera was set to 30 fps. Overall, 30 was the ideal target because it was the heaviest fps setting possible. The plan was this:
>
> Stage 1 - Motion disabled, no streaming.
> Stage 2 - Motion running, no streaming.
> Stage 3 - Motion running, streaming webcam_url to front and rear camera simultaneously.
> Stage 4 - Motion running, streaming direct video4.mjpg url of only the rear camera.
> Stage 5 - Motion running, no streaming.
> Stage 6 - Motion disabled, no streaming.
>
> The System Monitor screenshot: http://imgur.com/NlOpf
>
> Based on the seconds counter just below the Network History graph (60, 50, 40, etc.) the different stages go like this:
>
> Stage 1 - 60 to 55
> Stage 2 - 55 to 45
> Stage 3 - 45 to 35
> Stage 4 - 35 to 15
> Stage 5 - 15 to 5
> Stage 6 - 5 to 0
>
> If you look at the graph, you can see stage 2 and 3 were identical the entire time. This suggests that there's no additional network traffic being pulled from the camera to handle the stream, as I had touched on earlier. Once stage 4 hit you can see some additional network traffic hit the scale, plus you can see my RAM begins taking an odd up-and-down series of hits as well. After that it's pretty self-explanatory.
>
> One thing I thought was interesting: last time, when I had the choppiness issue, I thought for sure the camera was getting stressed because it was pushing out two streams of 30 fps feeds. This kind of irked me because the camera has 4 stream presets, so I thought it'd be weird for it to get bottlenecked that badly by 2x30fps. That being said, I just remembered I did swap out a 10/100 switch for a gigabit switch a few nights ago. Just now when I tried to duplicate the skipping I noticed before, I was unable to, which suggests the gigabit switch likely solved the issue and it wasn't necessarily the camera itself. I guess because I thought it was the camera being overloaded that was causing the skip, I had kind of forgotten that I did the switch swap. That being said, it still doesn't take away from the fact that utilizing Motion's built-in web server seems to be lighter duty on the network than streaming to the direct URLs of the cameras, at least based on my findings. I can't even recall why I did this, but in my custom HTML page I made, both cameras were streaming directly to their mjpg URLs. After this I have since switched them to ip.of.server:8081 and ip.of.server:8082. I just felt the performance was a bit better when using the Motion web server. It just seemed to be a bit smoother when a car drove by, whereas with the video4.mjpg direct stream to the camera, it certainly worked decently, but I felt as though I could notice some hesitation here and there as the feed was displayed. I have to wonder if this is Motion being smart enough to simply pass the stream it's already acquired directly to the viewer instead of making the camera fire out a secondary stream, like with running the regular Motion process + streaming the direct mjpg URL to the browser.
>
> Beyond this point I still have to wonder what the next bottleneck is. I have to assume it's my software RAID array writing to the hard drives. Part of me wants to get an SSD, have the OS on it, and point Motion to write data to it as well. Then, once a night via bash scripts, move the data to a fat RAID array in the system. That way I have the write speed of an SSD while retaining the fat RAID array for long-term storage.
>
> Anyway, like I said, nothing overly scientific, but it brings enough of a visual to the table to suggest that gigabit is your friend and Motion's built-in web server seems to be a bit lighter on the network load than direct camera URL streaming.
>
> As always, thanks for the insight. It's appreciated.
> -J
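
PS: to put rough numbers on the bandwidth point, a 1280x1024 MJPG frame is very roughly 100-250 KB depending on JPEG quality and scene detail; that range is only a guess until you check the size of a saved image. Taking 200 KB as a working figure:

  200 KB/frame x 30 fps x 2 cameras = 12 MB/s, or roughly 96 Mbit/s

which sits right at the ceiling of a 100 Mbit/sec LAN but is comfortable on gigabit. Halving the frame rate or dropping the JPEG quality pulls it down proportionally.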
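The same 200 KB guess gives a worst-case disk figure: if motion decided to save every frame from both cameras (e.g. with output_normal on, if I have the option name right), that is still only about 12 MB/s of mostly sequential writes, which a single modern disk can sustain, never mind a RAID10 doing 300+ MB/s. So unless the array is busy with other work or stuck doing lots of small random writes, the disks look unlikely to be the choke point.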
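A caveat on the dd test above: with only 100 MB written, Linux will absorb most of it into the page cache and report a flattering number. Two easy tweaks, using standard GNU dd options and the usual kernel interface, give a more honest figure:

  dd if=/dev/zero of=FileNameOnRAID bs=1024K count=1000 conv=fdatasync
  sync; echo 3 > /proc/sys/vm/drop_caches    # as root, so the read test isn't served from cache
  dd if=FileNameOnRAID of=/dev/null bs=1024K count=1000

i.e. about a 1 GByte file, with the write flushed to disk before dd reports its rate and the cache dropped before reading it back.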
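For reference, the rate knobs you mention are split between motion.conf and the per-camera thread files. A minimal sketch with motion 3.2-era option names; the values and file names are illustrative, not your actual config:

  # motion.conf
  webcam_maxrate 30                  # cap on the fps motion will serve on 8081/8082
  thread /etc/motion/thread1.conf
  thread /etc/motion/thread2.conf

  # thread1.conf (one file per camera)
  netcam_url http://192.168.1.11/video4.mjpg   # the camera's own MJPG stream
  framerate 30                                 # fps motion pulls from that camera
  webcam_port 8081                             # port motion re-serves it on

Worth checking that the camera preset, the thread file and webcam_maxrate all agree, since the rate you actually see can't exceed the lowest of them.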
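On the SSD idea at the end: the once-a-night shuffle is simple enough from cron. A rough sketch, assuming motion's target_dir points at /ssd/motion and the array is mounted at /raid/motion; both paths (and the script name) are invented for the example:

  #!/bin/bash
  # Move recordings older than a day from the SSD to the RAID array,
  # preserving the directory layout, then remove the originals.
  SRC=/ssd/motion      # where motion writes (its target_dir)
  DST=/raid/motion     # where the RAID array is mounted

  find "$SRC" -type f -mtime +0 -print0 | while IFS= read -r -d '' f; do
      rel=${f#"$SRC"/}                     # path relative to the SSD directory
      mkdir -p "$DST/$(dirname "$rel")"    # recreate the subdirectory on the RAID
      mv "$f" "$DST/$rel"
  done

Run it from root's crontab with something like "0 3 * * * /usr/local/bin/move_recordings.sh". The age filter is there so it never touches a file motion might still be writing.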