Hi (sorry for my English),

Back in 2017 I came across a few threads in various places on the web claiming that VC (and maybe TC before it?) has a problem with large volumes/containers, specifically with moving large amounts of data inside them, in the range of several terabytes. The reported problem was loss of data integrity, though I don't know whether it concerned the crypto layer or the filesystem inside the encrypted volume. In each of those reports it was explicitly NOT a problem of failing hardware.

At the time this didn't interest me. Now, in 2019, it does.

I have never seen anything in the VC readmes about such problems being found or fixed (though that may mean nothing :)).

If there is nothing about this in the readmes, do we have something like a proof of concept showing that the problem does NOT exist? For example, a one-week batch job that copies, deletes, and moves files on a 2 TB disk or a 1.5 TB container, with a write volume of no less than 1 TB, and one unmount/mount every day?
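To make the idea concrete, here is a minimal sketch in Python of what such a job could look like. Everything in it (the mount point /mnt/vc_test, the file sizes, the directory names) is a placeholder, and it assumes the test volume is already mounted; the once-a-day unmount/mount would still be done by hand or with the veracrypt command-line tool:

```python
#!/usr/bin/env python3
"""Long-running data-integrity test for a mounted VeraCrypt volume.

A rough sketch only. It assumes the volume is ALREADY mounted at
MOUNT_POINT (the path below is a placeholder); the daily unmount/mount
is done separately. Every pass writes fresh random files, copies and
moves them around, and re-verifies SHA-256 checksums, so silent
corruption in the crypto layer or the filesystem shows up as a
checksum mismatch.
"""
import hashlib
import os
import shutil

MOUNT_POINT = "/mnt/vc_test"  # placeholder: mount point of the test container
FILE_SIZE_MiB = 64            # size of each test file
FILES_PER_PASS = 16           # 16 x 64 MiB = 1 GiB per pass; scale up as needed


def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()


def one_pass(pass_no: int) -> None:
    src = os.path.join(MOUNT_POINT, f"p{pass_no}_src")
    cp = os.path.join(MOUNT_POINT, f"p{pass_no}_copy")
    mv = os.path.join(MOUNT_POINT, f"p{pass_no}_moved")
    for d in (src, cp, mv):
        os.makedirs(d, exist_ok=True)

    # 1. Write random files and remember their checksums.
    checksums = {}
    for i in range(FILES_PER_PASS):
        name = f"file_{i}.bin"
        with open(os.path.join(src, name), "wb") as f:
            for _ in range(FILE_SIZE_MiB):
                f.write(os.urandom(1024 * 1024))
        checksums[name] = sha256_of(os.path.join(src, name))

    # 2. Copy (exercises the real data path) and move (a rename on the same fs).
    for name in checksums:
        shutil.copy2(os.path.join(src, name), os.path.join(cp, name))
        shutil.move(os.path.join(cp, name), os.path.join(mv, name))

    # 3. Verify: any mismatch means integrity was lost somewhere.
    for name, expected in checksums.items():
        if sha256_of(os.path.join(mv, name)) != expected:
            raise RuntimeError(f"CORRUPTION in pass {pass_no}: {name}")

    # 4. Delete everything so space is filled and freed over and over.
    for d in (src, cp, mv):
        shutil.rmtree(d)


if __name__ == "__main__":
    n = 0
    while True:  # run for as long as the test should last (e.g. one week)
        one_pass(n)
        n += 1
        written_tib = n * FILES_PER_PASS * FILE_SIZE_MiB / (1024 * 1024)
        print(f"pass {n} OK, ~{written_tib:.3f} TiB written so far")
```

With the placeholder sizes this writes about 1 GiB per pass; raising FILE_SIZE_MiB or FILES_PER_PASS would get the daily write volume up to the 1 TB mentioned above.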
OK, the above could also be a problem with the client's hardware, mostly partly defective non-ECC RAM, or overclocking that was not prudently tested for stability.

But if not? As asked above: do we have any self-made ("internal") data integrity tests with larger volumes and relatively large data movement?

Regards
"It was in those cases each time NOT a problem with hardware haywire devices."
Mean: Not hardware defect of data carrier self.