From: Piotr R. K. <pio...@mo...> - 2015-08-18 14:02:36
Hi Bernd,

That's good :)

Best regards,

-- 
Piotr Robert Konopelko
MooseFS Technical Support Engineer | moosefs.com

> On 18 Aug 2015, at 3:44 pm, Bernd Burg <b....@ho...> wrote:
> 
> Hello Piotr Robert,
> 
> Thanks for your answer. You had already told me this in another email.
> 
> I now have 5 chunkservers and 2 masters running as separate physical machines in my setup.
> The tests so far were fine, and I am one test away from purchasing.
> 
> I will do another stress test to check a complete power-up of my cluster.
> I hope the MooseFS system behaves OK.
> 
> 
> Best regards,
> 
> Bernd Burg
> 
> ------------------------------------------------------------
> HOLDE AG
> Zum Roppertsborn 14, 66646 Marpingen
> 
> Telefon +49-6827-267988-0
> Telefax +49-6827-267988-9
> 
> Email b....@ho...
> 
> Registered office: Marpingen
> AG Saarbrücken, HRB 101630
> VAT ID No.: DE29460253
> 
> Executive Board:
> Dipl.-Ing. Bernd Burg
> 
> Supervisory Board:
> Dipl.-Ing. Axel Gaus (Chair)
> Dipl.-Ing. Andreas Krolzig
> Dipl.-Ing. Gabor Richter
> ------------------------------------------------------------
> 
> On 17.08.2015 at 19:04, Piotr Robert Konopelko wrote:
>> Hi Bernd,
>> 
>>> I have the following questions about that:
>>> 
>>> 1. Do I have to run the master software on separate hardware from the chunkservers, so that master and chunk servers are physically split systems?
>> 
>> No, you don't have to, but it is recommended to have machines dedicated to Master Servers.
>> 
>>> 2. If that is the case, how can I deal with a two-storage-room situation with two UPS systems and a master and a chunkserver on the same UPS?
>>> Do I experience the same behaviour if I lose one UPS system, and with it one server running the master service and one server running the chunkserver service?
>> 
>> The problem you encounter occurs because running MooseFS with the HA feature requires a minimum of 3 chunkservers.
>> In case of a master failure (e.g., as you said, a power loss), MooseFS uses a quorum mechanism.
>> So you need a quorum of chunkservers, i.e. a half + 1, to elect a Leader Master.
>> 
>> When you power off one of the two chunkservers you have, you lose the quorum (2/2 + 1 = 2, so the quorum for 2 chunkservers is 2).
>> 
>> 
>> PS: I'm sorry for the late answer, but our mailing list sometimes holds messages for some time.
>> 
>> 
>> Best regards,
>> 
>> -- 
>> Piotr Robert Konopelko
>> MooseFS Technical Support Engineer | moosefs.com
>> 
>>> On 23 Jul 2015, at 6:31 pm, Bernd Burg <b....@ho...> wrote:
>>> 
>>> Hello together,
>>> 
>>> First of all, thanks to sales I was able to get a test license for the Pro version 3.0.38.
>>> 
>>> I set up 2 servers as shared master and chunkserver with 5 disks each from scratch (no metalogger).
>>> I have 3 Proxmox servers which I set up with moosefs-pro clients.
>>> I changed my DNS server so that it returns:
>>> 
>>> -------------------------------------------------------------------
>>> root@proxmox-4:~# host mfsmaster
>>> mfsmaster.holde.lan has address 10.120.3.162
>>> mfsmaster.holde.lan has address 10.120.3.161
>>> -------------------------------------------------------------------
>>> 
>>> In this setup the system works as expected, with very good performance.
>>> 
>>> If I stop the leader with mfsmaster stop, the follower is promoted to leader.
>>> If I start the old leader again with mfsmaster start, that machine comes back as a follower.
>>> 
>>> So far so good, and as expected.
>>> 
>>> What I don't understand is the following behaviour.
>>> 
>>> 
>>> Status before the test:
>>> -------------------------------------------------------------------
>>> root@mfsmaster1:~# mfssupervisor
>>> 1 (10.120.3.161:9419) : FOLLOWER (leader ip: 10.120.3.162 ; meta version: 126865 ; synchronized ; duration: 1511)
>>> 2 (10.120.3.162:9419) : LEADER (meta version: 126865 ; duration: 1504)
>>> -------------------------------------------------------------------
>>> 
>>> mfsmaster1 is IP xxx.161
>>> mfsmaster2 is IP xxx.162
>>> 
>>> 
>>> If I power off the leader (machine 162), I get this result:
>>> 
>>> -------------------------------------------------------------------
>>> root@mfsmaster1:~# mfssupervisor
>>> 10.120.3.162: connection timed out
>>> master states:
>>> 1 (10.120.3.161:9419) : ELECT (meta version: 127149 ; synchronized ; duration: 96)
>>> 2 (10.120.3.162:9419) : DEAD
>>> -------------------------------------------------------------------
>>> 
>>> The CGI additionally shows the message:
>>> 
>>> -------------------------------------------------------------------
>>> Leader master server not found, but there is an elect, so make sure that all chunkservers are running - elect should became a leader soon
>>> -------------------------------------------------------------------
>>> 
>>> It stays like this until the server is powered on again. There was no change for over 15 minutes, then I powered up the machine (162) again:
>>> 
>>> -------------------------------------------------------------------
>>> 1 (10.120.3.161:9419) : LEADER (meta version: 127253 ; duration: 63)
>>> 2 (10.120.3.162:9419) : FOLLOWER (leader ip: 10.120.3.161 ; meta version: 127253 ; synchronized ; duration: 69)
>>> -------------------------------------------------------------------
>>> 
>>> This happens in both directions.
>>> 
>>> I have the following questions about that:
>>> 
>>> 1. Do I have to run the master software on separate hardware from the chunkservers, so that master and chunk servers are physically split systems?
>>> 
>>> 2. If that is the case, how can I deal with a two-storage-room situation with two UPS systems and a master and a chunkserver on the same UPS?
>>> Do I experience the same behaviour if I lose one UPS system, and with it one server running the master service and one server running the chunkserver service?
>>> 
>>> 
>>> I would be very thankful if somebody could give me a hint.
>>> 
>>> Thanks in advance
>>> 
>>> -- 
>>> 
>>> Best regards,
>>> 
>>> Bernd Burg
>>> 
>>> ------------------------------------------------------------
>>> HOLDE AG
>>> Zum Roppertsborn 14, 66646 Marpingen
>>> 
>>> Telefon +49-6827-267988-0
>>> Telefax +49-6827-267988-9
>>> 
>>> Email b....@ho...
>>> 
>>> Registered office: Marpingen
>>> AG Saarbrücken, HRB 101630
>>> VAT ID No.: DE29460253
>>> 
>>> Executive Board:
>>> Dipl.-Ing. Bernd Burg
>>> 
>>> Supervisory Board:
>>> Dipl.-Ing. Axel Gaus (Chair)
>>> Dipl.-Ing. Andreas Krolzig
>>> Dipl.-Ing. Gabor Richter
>>> ------------------------------------------------------------
>>> 
>>> ------------------------------------------------------------------------------
>>> _________________________________________
>>> moosefs-users mailing list
>>> moo...@li...
>>> https://lists.sourceforge.net/lists/listinfo/moosefs-users
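
A minimal sketch of the quorum rule Piotr describes above (half of the chunkservers plus one). This is illustrative Python only, not MooseFS code, and the function names are hypothetical:

-------------------------------------------------------------------
# Quorum rule as described above: floor(n/2) + 1 connected chunkservers.
def quorum(total_chunkservers):
    return total_chunkservers // 2 + 1

def can_elect_leader(total_chunkservers, alive_chunkservers):
    # A Leader Master can only be elected while the connected
    # chunkservers still form a quorum.
    return alive_chunkservers >= quorum(total_chunkservers)

# 2 chunkservers: quorum is 2, so losing one leaves the surviving
# master stuck in ELECT, as in the mfssupervisor output above.
print(can_elect_leader(2, 1))   # False

# 3 chunkservers: quorum is still 2, so the loss of one is tolerated.
print(can_elect_leader(3, 2))   # True
-------------------------------------------------------------------

This matches the behaviour in the test: with only two chunkservers, powering off one machine (which runs both a master and a chunkserver) leaves 1 of 2 chunkservers alive, below the quorum of 2, so the remaining master stays in ELECT until the other machine returns.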