From: jose m. <aso...@zo...> - 2010-01-26 21:50:26
On Tuesday 26 January 2010, Michał Borychowski wrote:

> Hi Jose!
>
> I'm very happy that we have users in Spain :)

* This open source project is essential for mass storage work; its features are exactly what we need.
* GlusterFS has a different feature set.
* A positive answer to question number 6 would make it unique, together with a real-time backup procedure to a second metaserver in passive mode, ready for immediate replacement.
* Low cost and the fundamental features in a 15 KB tar.gz - can you ask for more?
* We are starting the deployment to replace the replicated secondary and tertiary servers.
* Many thanks; we are putting at the project developers' disposal both privileged access to the cluster and any other network or storage resources of our institution, at their convenience and free of charge, of course.

> > 2.- Very important: can vsftpd write directly on the MFS mount point, or
> > does it need a mount --bind? A better idea? Commercial code, another
> > implementation of mfsmount, a native module?
>
> We are not sure if we fully understand the question, but we'll try to
> answer.
>
> There is no ready FTP server which would communicate directly with the
> master server and chunkservers, so the solution is to "go through" the
> normal mfsmount. It gives better results than creating special code for
> communication - the kernel supports I/O operations very efficiently
> (e.g. by using read-ahead, buffering, writing in the background, etc.).
> That's why we also use classic file systems for storing data on
> chunkservers rather than using the disk directly as a block device.

* A clarifying example: on the client, MFS is mounted at /media/mfs and the vsftpd root directory is configured as:
  local_root=/media/mfs/virtualusers/$USER
  user_sub_token=$USER
* The virtualusers directory (and virtualusers2, 3, 4, 5) physically resides in the cluster filesystem.
* vsftpd cannot write to that directory, since it is really a socket.
* I would need to mount --bind /media/mfs /media/local and change local_root=/media/local/virtualusers/$USER; with sshfs over FUSE this is necessary.
* Is it also necessary with MFS? (A sketch of the bind-mount variant follows further below.)
* In vsftpd.conf, would use_sendfile=NO be the solution? I will test it.

> > 4.- We store millions of small files; the backup software encrypts,
> > compresses and splits them into files of 5-30 MB. Any advice on which
> > file system to use (ext3/ext4), block size, kernel elevator, other
> > tuning?
>
> (Again we are not fully sure about the question)
>
> At our company we do not use any tuning, just normal ext3.
>
> How many millions of files do you have? Generally speaking 1 million
> files takes approximately 300 MiB of RAM. An installation of 25 million
> files requires about 8 GiB of RAM and 25 GiB of space on HDD.

* Command executed on all primary servers:
  find /media/volumen0/virtualusers -type f | wc -l
  Current total across all of them: 369,026,500
* More than 80% write operations, less than 20% reads (backup recovery).
* On MFS that would be x 3 copies.
* The RAM required on the metaserver is no problem.
* On the chunkservers and clients it might be.
* The intention is to use the existing hardware, or to acquire conventional hardware and add that capacity.
* Currently the primary, secondary and tertiary servers are all HP ML110, 2 GB ECC RAM, six 1 TB SATA disks each, two network links (2 Gb/s) per server, at a ridiculously low wholesale price.
* Current uptime, excluding restarts for kernel changes and replacement of failing hard drives, is 2 years, 24x7.
* openSUSE distribution, and they have plenty of headroom in both performance and RAM.
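A back-of-the-envelope check of those numbers against the file count above (my own rough arithmetic, not a measured figure):

    # ~300 MiB of master RAM per million files (rule of thumb quoted above),
    # applied to the 369,026,500 files reported by find
    echo "369.0265 * 300 / 1024" | bc -l    # ~108 GiB of RAM on the master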
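Going back to question 2, a minimal sketch of the bind-mount variant, using the same paths and vsftpd options as in the example above (whether the bind mount is really needed with mfsmount is exactly the open question):

    # expose the MFS mount under a second path via a bind mount
    mount --bind /media/mfs /media/local

    # relevant vsftpd.conf settings
    local_root=/media/local/virtualusers/$USER
    user_sub_token=$USER
    use_sendfile=NO      # still to be tested whether this alone is enough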
> > 5.- Any suggestions for the network: MTU, bandwidth aggregation, window
> > sizes, etc. in sysctl.conf?
>
> We strongly recommend using jumbo frames (MTU=9000). With a greater
> number of chunkservers the switches have to be connected through optical
> fiber or aggregated links.

* OK (example command below).

> > 6.- I want to store 3 copies of the files. Is there any possibility to
> > specify that one of them is stored in another group of chunkservers,
> > remote but connected by optical fiber on the same subnet?
> > Is this planned for the future?
>
> There is no such possibility now. But maybe it will be introduced in the
> future.

* Excellent.

> > 7.- I maintain a long retention time for deleted files. What technique
> > does MFS use? Does it keep the three copies and schedule their final
> > deletion? Does it delete two copies and keep one? If so, on which
> > machine is that copy stored?
>
> All the copies in the trash are retained. And for the files in the trash
> you can still set the goal (i.e. the number of copies), so you can easily
> write a simple script which would, for example, change the goal for all
> files in the trash from 3 to 2 (and later to 1).

* Very good. (A sketch of such a script follows below.)

> > 8.- What would be the best option on the clients for automounting or
> > handling disconnection: autofs, a guard daemon, a cron script?
>
> We have not used mfsmount with an automounter nor autofs; we think you
> just have to monitor its activity. If the master server or chunkservers
> are temporarily not available, mfsmount tries to reconnect automatically.

* Understood; then the reconnect feature of mfsmount, or a FUSE entry in /etc/fstab (sample entry below).

* Jose Maria.
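One possible way to apply the jumbo-frame recommendation from question 5 on a Linux server (the interface name is only an example; the MTU has to be supported end to end by the NICs and switches):

    # set MTU 9000 on the interface carrying MFS traffic (example: eth0)
    ip link set dev eth0 mtu 9000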
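A possible sketch of the goal-changing script mentioned in the answer to question 7. It assumes the meta filesystem is mounted with "mfsmount -m /mnt/mfsmeta" so that deleted files appear under /mnt/mfsmeta/trash, and that mfssetgoal accepts paths there; an illustration, not a tested recipe:

    #!/bin/sh
    # lower the goal of everything currently in the trash from 3 to 2
    TRASH=/mnt/mfsmeta/trash
    find "$TRASH" -type f -exec mfssetgoal 2 {} +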
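And a sample /etc/fstab line for the mfsmount mentioned in the last point (the master host name is a placeholder and the exact option names may differ between mfsmount versions):

    # /etc/fstab
    mfsmount   /media/mfs   fuse   mfsmaster=mfsmaster.example.net,_netdev   0 0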