Re: [Hscale-user] Hscale + high write performance
From: Peter R. <pr-...@op...> - 2008-07-31 03:31:04
Hi Karel,

first of all: We are making progress towards supporting multiple backends. There is already working code in the subversion trunk. Some important things like transaction handling are still missing, but it is a beginning.

> Hi, I am looking at sharding as a means to spread the load of write
> operations. I understand that XA sharding is a 'far future' idea for
> Hscale, but I am wondering if it would make sense to use the FEDERATED
> storage engine on the sharded tables:
> The application would see 'my_huge_table', which HScale translates to
> my_huge_table[1-3] on server 0.
> --server 0--
> my_huge_table_1 ===> federated, connects to server 1
> my_huge_table_2 ===> federated, connects to server 2
> my_huge_table_3 ===> federated, connects to server 3
> Would this kind of setup make sense? It still has a one-server
> bottleneck, but at least we do not need to buy 'better/faster/bigger'
> storage for that one server, we can spread the actual storage to
> multiple servers.
> Would this help with write performance or would the FEDERATED engine
> give too much overhead on server 0?

This is a neat idea! But there are some drawbacks with the FEDERATED engine that might make it a bad choice (see http://dev.mysql.com/doc/refman/5.0/en/federated-limitations.html). First of all, FEDERATED is not transactional. The other big thing is that FEDERATED supports indexes very poorly. We used it for a while for less important data (both performance- and integrity-wise) and it worked. We had no data loss, but performance was very low.

If storage is your main concern and you are willing to sacrifice some performance, you could put the partitions into another local(!) database (so the database files live in another folder beneath /var/lib/mysql) and mount that folder via your favorite network storage system (like NFS).

The ultimate advice I can give you here is: benchmark, benchmark, benchmark ;) Or wait for hscale-0.3...

Hope this answers your question!

Greetings
Peter
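[Editor's note: for readers unfamiliar with the FEDERATED setup Karel describes above, the following is a minimal sketch of what one partition would look like on the MySQL side. Table structure, host name, database name, and credentials (server1, mydb, fed_user, fed_pass) are made up for illustration, and the CONNECTION clause requires MySQL 5.0.13 or later.]

```sql
-- On server 1: the real table that actually stores the partition's data.
CREATE TABLE my_huge_table_1 (
    id   INT NOT NULL AUTO_INCREMENT,
    data VARCHAR(255),
    PRIMARY KEY (id)
) ENGINE=MyISAM;

-- On server 0: a FEDERATED proxy table with an identical definition.
-- Reads and writes against it are forwarded to server 1 over the MySQL
-- protocol; no data is stored locally on server 0.
CREATE TABLE my_huge_table_1 (
    id   INT NOT NULL AUTO_INCREMENT,
    data VARCHAR(255),
    PRIMARY KEY (id)
) ENGINE=FEDERATED
  CONNECTION='mysql://fed_user:fed_pass@server1:3306/mydb/my_huge_table_1';
```

my_huge_table_2 and my_huge_table_3 would be defined the same way on server 0, pointing at server 2 and server 3, and HScale would rewrite application queries against my_huge_table to the matching partition table. As Peter notes above, every statement still funnels through server 0 and loses transactional guarantees and index performance on the way.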