Speed and performance complaints need a separation of concerns: separate the network transfer, the data buffering, the target-location calculations, and the actual disk writing, then examine the performance of each. I'm sure a bit of searching the web would turn up some performance-measurement models.
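As a back-of-the-envelope way to separate those concerns, here is a rough sketch in Python (mine, not anything taken from Clonezilla or partclone) that times each stage of a simple receive-and-write loop on its own. The port, chunk size, and target path are placeholders I made up; pointing the target path at a local disk, an NFS mount, or a Samba mount gives a crude comparison of transports as well.

    #!/usr/bin/env python3
    # Rough sketch (mine, not from Clonezilla/partclone): time each stage of a
    # simple receive-and-write loop separately.  Port, chunk size, and target
    # path are made-up placeholders.
    import socket
    import time

    CHUNK = 1 << 20                        # 1 MiB per read (arbitrary)
    LISTEN_PORT = 9000                     # arbitrary test port
    TARGET_PATH = "/tmp/restore-test.img"  # local disk, NFS mount, or SMB mount

    def report(label, seconds, nbytes):
        mbps = (nbytes * 8) / (seconds * 1e6) if seconds else 0.0
        print(f"{label:12s} {seconds:8.2f} s  {mbps:10.1f} Mbit/s")

    srv = socket.socket()
    srv.bind(("", LISTEN_PORT))
    srv.listen(1)
    conn, _ = srv.accept()

    net_t = buf_t = disk_t = 0.0
    total = 0
    buf = bytearray()
    with open(TARGET_PATH, "wb") as out:
        while True:
            t0 = time.perf_counter()
            chunk = conn.recv(CHUNK)       # stage 1: network transfer
            t1 = time.perf_counter()
            if not chunk:
                break
            buf[:] = chunk                 # stage 2: buffering / copy
            t2 = time.perf_counter()
            out.write(buf)                 # stage 3: disk (or NFS/SMB) write
            t3 = time.perf_counter()
            net_t += t1 - t0
            buf_t += t2 - t1
            disk_t += t3 - t2
            total += len(chunk)

    report("network", net_t, total)
    report("buffering", buf_t, total)
    report("disk write", disk_t, total)

Feed it from the other machine with something simple (nc host 9000 < image.img) and the three totals give a first indication of which stage dominates. There is no target-location calculation here because that depends on the imaging tool; it would be a fourth timed stage in a fuller breakdown.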
Acronis appears so far to be fast enough, but it's not FOSS. They do use a Linux kernel for the Acronis backup/recovery boot CD or flash drive you can make, but I have not yet seen them make the modified Linux source for that CD/flash-drive setup available.
Outside of seeing the Acronis Linux setup, I would be interested to know the Clonezilla testing model and techniques. That should give insight into how the likes of Samba, SSH, and NFS are performing, and it can also point to where effort to improve Clonezilla's data transfer and drive writing would pay off.
Testing and measuring are boring but necessary.
One area of interest is the concurrency taking place inside partclone. That could play a major role in how reliably network access performs. Once I understand that, it will be time to look at how it relates to Linux's own concurrent activity. This implies that a measurement of how the network performs in the raw can then be compared with what partclone is doing.
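I do not know partclone's internals yet, so purely as an illustration of the kind of concurrency I mean, here is a sketch of a two-thread pipeline: one thread pulling data off the wire, one writing it to disk, with a bounded queue between them. The queue depth, port, and path are invented.

    #!/usr/bin/env python3
    # Illustration only -- NOT partclone's actual design.  A two-stage pipeline:
    # one thread receives from the network, one thread writes to disk, with a
    # bounded queue between them.  Queue depth, port, and path are invented.
    import queue
    import socket
    import threading

    CHUNK = 1 << 20                 # 1 MiB per read (arbitrary)
    QUEUE_DEPTH = 8                 # bounded buffer between the two stages
    PORT = 9001                     # arbitrary test port
    TARGET = "/tmp/pipeline-test.img"

    q = queue.Queue(maxsize=QUEUE_DEPTH)

    def receiver(conn):
        # Stage 1: pull data off the wire; blocks when the writer falls behind.
        while True:
            chunk = conn.recv(CHUNK)
            q.put(chunk)            # blocks if the queue is full (back-pressure)
            if not chunk:
                return              # empty chunk signals end of stream

    def writer():
        # Stage 2: drain the queue to disk while the receiver keeps reading.
        with open(TARGET, "wb") as out:
            while True:
                chunk = q.get()
                if not chunk:
                    return
                out.write(chunk)

    srv = socket.socket()
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()

    rx = threading.Thread(target=receiver, args=(conn,))
    wr = threading.Thread(target=writer)
    rx.start(); wr.start()
    rx.join(); wr.join()

Watching the queue is a cheap diagnostic: if it sits full most of the time, the write side is the bottleneck; if it sits empty, the network side is. Whether partclone actually overlaps its reading and writing like this is exactly what I want to find out.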
A cross-platform, open-source solution for getting network performance metrics is needed even if a LAN does not use any form of imaging software.
iPerf and jPerf are both cross-platform, open-source, simple bandwidth-measurement tools.
iPerf is command-line only.
jPerf adds charts and graphs.
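For collecting those numbers from a script rather than reading them off a terminal, something like the following could work. It assumes the classic iperf 2 client and its -y C CSV reporting; the server address and test length are placeholders, and the output parsing should be checked against whatever iperf build is actually installed.

    #!/usr/bin/env python3
    # Hypothetical wrapper around the iperf 2 client.  Assumes an iperf server
    # ("iperf -s") is already running on SERVER, and that -y C produces CSV
    # output whose last field is throughput in bits/sec -- true for the iperf 2
    # builds I have seen, but verify against your own version.
    import subprocess

    SERVER = "192.168.1.10"   # placeholder: iperf server somewhere on the LAN
    DURATION = 10             # seconds per test run

    def run_iperf(server, seconds):
        """Return measured throughput in Mbit/s for one client run."""
        out = subprocess.run(
            ["iperf", "-c", server, "-t", str(seconds), "-y", "C"],
            capture_output=True, text=True, check=True,
        ).stdout
        last = [line for line in out.splitlines() if line.strip()][-1]
        return float(last.split(",")[-1]) / 1e6

    if __name__ == "__main__":
        print(f"throughput: {run_iperf(SERVER, DURATION):.1f} Mbit/s")

Start "iperf -s" on the far machine first, then run the script on the near one.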
Combined with drbl, you have the only practical method of getting real-world bandwidth measurements from disparate hardware on different LANs in real-world scenarios. Imagine a drbl server that automatically boots your LAN with some systems running Windows XP, some Windows 7, some Linux, some BSD, some Windows Server 2008 R2. Some would be servers, most clients. Some would run QoS-sensitive applications such as VoIP and HD streaming. Some would be web servers that other clients access; some would be Samba servers against which clients run scripts to mimic real-world workloads; add a MySQL database.
That way you could compare your Dell hardware connected to a 3Com switch versus a NetGear switch, upload the metrics to a common website, and compare them against somebody else's run of the same scenario. You could sort by NIC firmware version, by device driver version, by system manufacturer, by cable brand and type, by NIC manufacturer, or by switch firmware. There was a guy employed by HP who more or less had such a website, but he didn't have the automated means of measuring a LAN's performance that drbl would provide.
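To make the upload-and-sort idea concrete, each uploaded result could be a small record that carries the measurement plus the dimensions to sort on. The field names and values below are invented for illustration; they just mirror the sorting criteria listed above.

    #!/usr/bin/env python3
    # Invented record layout for the shared-comparison idea above.  None of the
    # field names or values come from an existing service; they just mirror the
    # sorting dimensions mentioned in the post.
    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class LanBenchmarkResult:
        scenario: str                   # e.g. "samba-file-copy", "voip-under-load"
        throughput_mbps: float          # measured result for the run
        system_manufacturer: str
        nic_manufacturer: str
        nic_driver_version: str
        nic_firmware_version: str
        switch_model: str
        switch_firmware_version: str
        cable_type: str                 # e.g. "Cat5e", "Cat6"

    example = LanBenchmarkResult(
        scenario="samba-file-copy",
        throughput_mbps=612.4,          # made-up number for illustration
        system_manufacturer="Dell",
        nic_manufacturer="Intel",
        nic_driver_version="e1000e 3.2.6",
        nic_firmware_version="0.5-4",
        switch_model="NetGear GS108",
        switch_firmware_version="1.0.0.8",
        cable_type="Cat5e",
    )

    # Serialize for upload to the (hypothetical) common comparison site.
    print(json.dumps(asdict(example), indent=2))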
There are many LANs that do not get anywhere near wire speed, and there can be many reasons for it. Slowdowns may come from Intel vPro NICs, non-optimal switch and NIC pairings, cables gone bad, major differences between copper rated for the same performance, wall outlets… I could go on and on.
Mr. Townley,
Thank you for your posting about networking reliability. It was good to read about the software and the background information; I visited the sites this morning. Your network descriptions are typical of what is out there. There need to be ways to gather data about and characterize networks on the LAN side, as there already are for the WAN side. You can buy Cisco equipment and get a lot of data, but that will bust the budget.
Data preservation is one of the important issues of computing. Since the commercial software is failing, it would appear to be an opportunity for FOSS to develop industry-standard recovery software that is reliable, performs well, is easy to use, and costs little.
My first proposal is to rename "backup software" to "recovery software and services." It could be a risky phrase, but FOSS is positioned to take charge of it.
Here are some additional URLs I spotted; there are two book references below the URLs.
Iperf at sourceforge: http://iperf.sourceforge.net/
xjperf at Google Code: http://code.google.com/p/xjperf/
Wikipedia: http://en.wikipedia.org/wiki/Iperf
Cooperative Association for Internet Data Analysis, CAIDA: Internet research & analysis
Research: http://www.caida.org/research/
Home: http://www.caida.org/home/
Visualizations: http://www.caida.org/publications/visualizations/
Featured images: http://www.caida.org/home/featured_images.xml
Net vs Disk performance, from Google project notes:
http://iperf.sourceforge.net/community/sc07-measurement-bof/tierney-nioperf.html
Books:
Much of what has been published focuses on the Internet/WAN view, but it can be carried over to LANs. As for online backup and recovery, those too are having performance and reliability issues. Reading reviews and forum comments about the well-known services, and there are many, I see plenty of negative remarks about slow recovery performance. Online upload seems to work; that matches the LAN upload experience I had using Acronis and partclone. Download from the server, or from the online service, is repeatedly mentioned as an issue. Why? I have yet to figure that out. Yet.
Internet Measurement; Mark Crovella, Balachander Krishnamurthy; 2006; ISBN-13: 978-0-470-01461-5; $70 new retail; …while concentrating on the Internet WAN view, it has techniques for measuring performance.
Capacity Planning for Web Services: Metrics, Models, and Methods; Daniel A. Menasce, Virgilio A. F. Almeida; 2002; ISBN-10: 0-13-065903-7; hardcover; …has client and server models that cover workload, performance, and availability.
Many other book titles are presented at book distributors' sites.
Rick Jones of HP is the main author of www.netperf.org.
They have a mailing list:
http://www.netperf.org/cgi-bin/mailman/listinfo/netperf-talk
His benchmark system may be more comprehensive.
You may want to get on the dev mailing list to talk about the next version, version 4.
Thank you again. M$ has done some tools too; I suppose I'll have to include their products as well.
There is a Java version of Iperf here: http://code.google.com/p/xjperf/