From: Patrick Z. <pa...@na...> - 2014-08-20 08:27:11
Thanks Thijs, I'm leaning toward exporting standard files located on a ZFS file system, but I need to understand how best to use SCST-ISCSI in that case. My understanding is that if I go FIO, then the VFS and ZFS caches will be fighting against each other. And if I use BIO, can we use BIO with a file-backed "device"? I'm waiting for some more input...

Cheers,

- Patrick -

From: Thijs Cramer [mailto:thi...@gm...]
Sent: mercredi 20 août 2014 10:06
To: Patrick Zwahlen
Cc: scs...@li...
Subject: Re: [Scst-devel] How to best use RAM and SSD for caching behind scst-iscsi (ZFS)

Don't use bcache or any other (non-ZFS) caching mechanism with ZFS; it defeats the purpose of ZFS entirely (negating bit corruption and such). ZFS already has those techniques built in. If you want SSD caching, use L2ARC (for reads) and ZIL (for writes). ZFS already has a RAM caching mechanism called ARC, which you can tune (size it) via configuration.

Please understand that ZVOL performance on ZFS on Linux is not yet on par with other operating systems, so using ZFS ZVOLs as an ESXi backend is fine, but don't expect lightning performance.

- Thijs

2014-08-20 9:29 GMT+02:00 Patrick Zwahlen <pa...@na...>:

Dear all,

Some background: we have been serving ESXi datastores with scst-iSCSI (v2) for years without any issue. We use a pair of servers with DAS. The DAS is replicated with DRBD, and we export the DRBD block devices over a pair of 1 Gb Ethernet links for iSCSI multipath to ESXi. Both servers have some active block devices, so even though each resource is active/standby, all the hardware is being used. We export everything as BIO with nv_cache=1 (because of BBWC). It is all glued together with Pacemaker.

Now we're trying to figure out how to introduce some read-only (writethrough) caching, using both RAM and SSD.
Our first attempt was to go with bcache, where the backing device is the same as before (DRBD) and the caching device is either a plain SSD, or even a btier-layered RAMDISK+SSD device (!). The idea looked good, but in practice it fails too often. We then tried moving to FIO (still using the DRBD block devices as backend), but performance was much worse.

The current idea is to make an attempt with ZFS (on Linux), and I would like to know if some among you have tested it already. I would also appreciate input on how/where to combine FIO/BIO. For instance, I understand ZFS bypasses the VFS page cache and performs its own caching. So if we use scst-FIO on top of ZFS files, we might use both the VFS cache and the ZFS cache, which might not be the best option.

Can we use scst-BIO over ZFS files? Can we use scst-BIO over ZFS ZVOLs? (I think a ZVOL introduces overhead compared to standard files and might actually decrease performance, but again, input welcome.) How would you configure nv_cache in such a scenario?

As you can see, this is really an open discussion at the moment. I'm building the whole test environment on CentOS 7, and I will run tests here and try to report back to the list.

Thanks for the input, and have a nice day everyone.

- Patrick -
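[Editor's note: for concreteness, the ZFS-side pieces discussed in this thread — ARC sizing, L2ARC, ZIL, and the ZVOL-vs-file question — map to commands along the following lines. This is a sketch under assumptions: the pool name `tank`, the device paths, and the sizes are placeholders, not details from the thread.]

```shell
# Cap the ARC (ZFS's RAM read cache) at 16 GiB. The value is in bytes;
# setting it in /etc/modprobe.d/zfs.conf makes it persist across reboots.
echo "options zfs zfs_arc_max=17179869184" > /etc/modprobe.d/zfs.conf

# Add an SSD to pool "tank" as L2ARC (second-level read cache).
zpool add tank cache /dev/disk/by-id/example-read-ssd

# Add an SSD as a dedicated log (SLOG) device so the ZIL can absorb
# synchronous writes without touching the main vdevs.
zpool add tank log /dev/disk/by-id/example-log-ssd

# Two candidate SCST backends: a ZVOL (a block device) ...
zfs create -V 100G tank/vol1      # appears as /dev/zvol/tank/vol1

# ... or a plain dataset holding file-backed LUNs.
zfs create tank/luns
```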
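[Editor's note: on the FIO/BIO question, newer SCST releases (3.x) describe devices in /etc/scst.conf roughly as below; the device names, file paths, and IQN are illustrative only, and the thread's SCST v2 setup may use a different configuration syntax. Note that vdisk_blockio operates on block devices, so exporting via BIO pairs naturally with a ZVOL, while a plain ZFS file pairs with vdisk_fileio.]

```
# FIO export of a regular file on a ZFS dataset.
HANDLER vdisk_fileio {
        DEVICE zfs_file_lun {
                filename /tank/luns/lun1.img
                nv_cache 1
        }
}

# BIO export of a ZVOL (BIO needs a block device node).
HANDLER vdisk_blockio {
        DEVICE zfs_zvol_lun {
                filename /dev/zvol/tank/vol1
                nv_cache 1
        }
}

TARGET_DRIVER iscsi {
        enabled 1
        TARGET iqn.2014-08.example:zfs {
                LUN 0 zfs_file_lun
                LUN 1 zfs_zvol_lun
                enabled 1
        }
}
```

As in the poster's existing DRBD setup, nv_cache 1 tells SCST to treat the backend cache as non-volatile; whether that is safe here depends on the ZIL/SLOG arrangement rather than on BBWC.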