From: Stas O. <sta...@gm...> - 2010-07-08 19:05:24

By the way, the master's init script should have the snippet from here added: http://www.moosefs.org/news-reader/items/metadata-ins-and-outs.html
This would enable it to automatically replay the logs.
From: Stas O. <sta...@gm...> - 2010-07-08 19:03:51

Hi.

> Master server flushes metadata kept in RAM to the metadata.mfs.back binary
> file every hour on the hour (xx:00). So probably the best moment would be to
> copy the metadata file every hour on the half hour (30 minutes after the
> dump). Then you would maximally lose 1.5h of data. You can choose any method
> of copying - cp, scp, rsync, etc. Metalogger receives the metadata file
> every 24 hours so it is better to copy metadata from the master server.

Can I also save the changelogs and replay them into the metadata? How close to the point of crash would that be able to recover?

> Having metadata backed up you won't see newly created files, files to
> which some data was appended would come back at their previous size, there
> would still exist deleted files, and files you renamed would come
> back under their previous names (and locations). But still you would have
> information about and access to all the files created in the past X years.
>
> If you have not read this entry, we strongly recommend it:
> http://www.moosefs.org/news-reader/items/metadata-ins-and-outs.html

Thanks, now it's much clearer. Speaking of crashes, will the master detect that it has crashed and try to replay the logs itself? Or is it recommended to use the script described at the end of the entry, to always check for replayable logs?

Regards.
From: Stas O. <sta...@gm...> - 2010-07-08 18:53:32

> It seems difficult to me, as the user it runs with is configurable. Or did
> I misunderstand your point ?

I meant the /var/mfs folder for the master and logger, which is a good location and can be prepared in advance.

> you can just run /usr/sbin/mfscgiserv

That worked great - much faster than lighttpd.
From: Stas O. <sta...@gm...> - 2010-07-08 18:51:51

Sorry, forwarding to the list as well.

---------- Forwarded message ----------

Hi.

> Generally speaking, MooseFS should not be run as the user nobody. If user
> nobody has some other services running and somebody gets control over
> one service, it can interfere with the other services run as the same user.
> We recommend just creating a user mfs and a group mfs.

This makes sense. Laurent, that would be a good addition to that RPM ;). Maybe we should also consider adding a script which would assign proper permissions to the selected data folders. The master and logger can actually pre-create and use the /var/mfs directory they are using today.

Regards.
From: Michał B. <mic...@ge...> - 2010-07-07 10:50:12

Hi!

Generally speaking, MooseFS should not be run as the user nobody. If user nobody has some other services running and somebody gets control over one service, it can interfere with the other services run as the same user. We recommend just creating a user mfs and a group mfs.

Regards
Michał

> -----Original Message-----
> From: Laurent Wandrebeck [mailto:lw...@hy...]
> Sent: Tuesday, July 06, 2010 3:23 PM
> To: Stas Oskin; moo...@li...
> Subject: Re: [Moosefs-users] Caveats to running MFS under root
>
> On Tue, 6 Jul 2010 15:30:06 +0300
> Stas Oskin <sta...@gm...> wrote:
>
> Don't forget the list™, grmbl :)
>
> > > From a security point of view, do not ever do that if you can avoid
> > > it, which is the case for every moosefs server.
> >
> > It's clear, I'm just using it in a testing environment and wanted to run
> > it quickly.
>
> you just need to put a value in the WORKING_USER = line of the config file.
> Difficult to make it quicker, isn't it ? :)
>
> > By the way, can MFS really run well under the nobody account?
>
> Don't know, I only tested under a regular user. Try and tell us ? :)
>
> > Just a suggestion: it's worth adding the rights setting for the metadata
> > folders to the RPM you made.
>
> It seems difficult to me, as the user it runs with is configurable. Or did I
> misunderstand your point ?
>
> > The storage folders on chunkservers could be set manually later.
>
> > > The cgi server does not have (yet ?) a config file to allow you to
> > > specify which user you want to run the server with, but it runs fine
> > > being launched by any regular user, as the port it uses by default
> > > (9425) is unprivileged.
> >
> > Speaking of, how are you running the CGI?
> > I've routed it through the Lighttpd CGI interface; perhaps there is another way?
>
> you can just run /usr/sbin/mfscgiserv
>
> Regards,
> --
> Laurent Wandrebeck
> HYGEOS, Earth Observation Department / Observation de la Terre
> Euratechnologies
> 165 Avenue de Bretagne
> 59000 Lille, France
> tel: +33 3 20 08 24 98
> http://www.hygeos.com
> GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
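A minimal sketch of the account setup recommended above, assuming the daemons keep their data under /var/mfs (the paths and shell are assumptions, adjust to your layout):

    # Create a dedicated, non-login account for the MooseFS daemons.
    groupadd mfs
    useradd -g mfs -d /var/mfs -s /sbin/nologin mfs
    # Hand the master/metalogger data directory over to that account.
    chown -R mfs:mfs /var/mfs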
From: Michał B. <mic...@ge...> - 2010-07-07 10:20:36

Hi!

Master server flushes metadata kept in RAM to the metadata.mfs.back binary file every hour on the hour (xx:00). So probably the best moment would be to copy the metadata file every hour on the half hour (30 minutes after the dump). Then you would lose at most 1.5h of data. You can choose any method of copying - cp, scp, rsync, etc. The metalogger receives the metadata file only every 24 hours, so it is better to copy the metadata from the master server.

Having metadata backed up, you won't see newly created files, files to which some data was appended would come back at their previous size, deleted files would still exist, and files you renamed would come back under their previous names (and locations). But you would still have information about and access to all the files created in the past X years.

If you have not read this entry, we strongly recommend it:
http://www.moosefs.org/news-reader/items/metadata-ins-and-outs.html

Kind regards
Michał

From: Stas Oskin [mailto:sta...@gm...]
Sent: Tuesday, July 06, 2010 12:07 PM
To: moo...@li...
Subject: [Moosefs-users] Backing up MFS metadata

Hi.

Is there any recommended procedure how to back up MFS metadata?

I.e.:
* Where to backup from - master or logger?
* Backup the flat files, or use some mfs tool?
* Should I stop the logger before?

Thanks!
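A minimal sketch of the hourly copy described above, run from the master's crontab; the /var/mfs path and the backuphost destination are illustrative assumptions:

    # Copy the metadata dump 30 minutes after each hourly flush; the
    # changelog.*.mfs files let you replay changes made since the dump.
    30 * * * * rsync -a /var/mfs/metadata.mfs.back /var/mfs/changelog.*.mfs backuphost:/backup/mfs/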
From: Stas O. <sta...@gm...> - 2010-07-07 08:56:15

Hi.

> This is my example (/etc/fstab):
>
> mfsmount /mnt/mfs fuse mfswritecachesize=0,mfsmaster=secintmoosemaster01,mfssubfolder=/virt-repo 0 0

Thanks for the example. Laurent, this probably means that there is no need for an mfsmount init file, as it will be mounted straight from fstab. In case mfsmount crashes, what would bring it back online? fstab?
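fstab alone will not remount a crashed mfsmount; one common workaround (an assumption, not something from this thread) is a small cron watchdog that remounts from the fstab entry:

    # Remount the MooseFS share if the fuse mount has gone away
    # (/mnt/mfs is the example mount point from this thread).
    * * * * * mountpoint -q /mnt/mfs || mount /mnt/mfs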
From: Stas O. <sta...@gm...> - 2010-07-06 13:57:43

> Reread my answer, it's already there :)

I see; what about the "mount -a -t fuse.mfs" requirement? Do you skip it somehow?
From: Scoleri, S. <Sco...@gs...> - 2010-07-06 13:55:43

The doc is vague and needs an actual example. Make sure the mfsmount command is in root's path. This is my example (/etc/fstab):

mfsmount /mnt/mfs fuse mfswritecachesize=0,mfsmaster=secintmoosemaster01,mfssubfolder=/virt-repo 0 0

From: Stas Oskin [mailto:sta...@gm...]
Sent: Tuesday, July 06, 2010 9:52 AM
To: Laurent Wandrebeck
Cc: moo...@li...
Subject: Re: [Moosefs-users] MooseFS init files

From http://www.moosefs.org/reference-guide.html:

As of 1.6.x, mfsmount may be set up in /etc/fstab to facilitate having MooseFS mounted on system startup:

MFSMASTER_IP:9421 on /mnt/mfs type fuse.mfs (rw, nosuid, nodev, allow_other, default_permissions)

And additionally, the system-dependent mount script would need to invoke something like mount -a -t fuse.mfs.
From: Stas O. <sta...@gm...> - 2010-07-06 13:52:11

From http://www.moosefs.org/reference-guide.html:

As of 1.6.x, mfsmount may be set up in /etc/fstab to facilitate having MooseFS mounted on system startup:

MFSMASTER_IP:9421 on /mnt/mfs type fuse.mfs (rw, nosuid, nodev, allow_other, default_permissions)

*And additionally, the system-dependent mount script would need to invoke something like mount -a -t fuse.mfs.*
From: Laurent W. <lw...@hy...> - 2010-07-06 13:47:24

On Tue, 6 Jul 2010 16:31:20 +0300
Stas Oskin <sta...@gm...> wrote:

> > Reread my answer, it's already there :)
>
> I see, what about "mount -a -t fuse.mfs" requirement?

If the fs is mounted via fstab, you don't need the mount command line. Or did I miss a step ?

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: Laurent W. <lw...@hy...> - 2010-07-06 13:23:09

On Tue, 6 Jul 2010 15:30:06 +0300
Stas Oskin <sta...@gm...> wrote:

Don't forget the list™, grmbl :)

> > From a security point of view, do not ever do that if you can avoid it,
> > which is the case for every moosefs server.
>
> It's clear, I'm just using it in a testing environment and wanted to run it
> quickly.

you just need to put a value in the WORKING_USER = line of the config file. Difficult to make it quicker, isn't it ? :)

> By the way, can MFS really run well under the nobody account?

Don't know, I only tested under a regular user. Try and tell us ? :)

> Just a suggestion: it's worth adding the rights setting for the metadata
> folders to the RPM you made.

It seems difficult to me, as the user it runs with is configurable. Or did I misunderstand your point ?

> The storage folders on chunkservers could be set manually later.
>
> > The cgi server does not have (yet ?) a config file to allow you to
> > specify which user you want to run the server with, but it runs fine
> > being launched by any regular user, as the port it uses by default
> > (9425) is unprivileged.
>
> Speaking of, how are you running the CGI?
> I've routed it through the Lighttpd CGI interface; perhaps there is another way?

you can just run /usr/sbin/mfscgiserv

Regards,
--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
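For reference, the config lines in question, as a sketch; the file path varies by packaging, and the mfs account follows Michał's recommendation above:

    # mfsmaster.cfg (e.g. /etc/mfsmaster.cfg) - run the daemon as a
    # dedicated account instead of nobody or root.
    WORKING_USER = mfs
    WORKING_GROUP = mfs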
From: Laurent W. <lw...@hy...> - 2010-07-06 12:59:22

On Tue, 6 Jul 2010 15:22:12 +0300
Stas Oskin <sta...@gm...> wrote:

> Thanks, it helps indeed.
> My concern is that since MFS presents an additional layer over the FS, it would take a
> long time to find out what is causing issues, ext4 or MFS.
> Moreover, as Michal says they are using ext3, so they have probably never encountered any
> issues with ext4.
>
> I'm going to convert a machine to ext4 / xfs (also testing HA in the process),
> then benchmark, and we could share the results.

That would be nice numbers to have, indeed.

> Speaking of, do you know how ext4 compares to xfs, especially for such
> clustered file systems as mfs?

I have no XFS at work these days. I may have some in a couple months, but nothing sure. All I can say is that XFS is more efficient with big files than ext* is. I tend to find XFS snappier than ext* in general, except for deletion operations. Nothing scientifically proved, anyway.

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: Laurent W. <lw...@hy...> - 2010-07-06 12:43:40

On Tue, 6 Jul 2010 13:26:26 +0300
Stas Oskin <sta...@gm...> wrote:

> Hi.
>
> Are there any known issues for running MFS under root?

From a security point of view, do not ever do that if you can avoid it, which is the case for every moosefs server. The cgi server does not have (yet ?) a config file to allow you to specify which user you want to run the server with, but it runs fine being launched by any regular user, as the port it uses by default (9425) is unprivileged.

Apart from that, I can't see any reason why you couldn't run it as root. Just don't forget it is a BAD, BAD idea™.

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: Laurent W. <lw...@hy...> - 2010-07-06 12:42:32

On Tue, 6 Jul 2010 15:12:46 +0300
Stas Oskin <sta...@gm...> wrote:

> I'm actually going to try the HA procedure outlined in the MFS docs later on.

Nice, don't forget to take notes, it'll help others :)

> The question right now is whether I can just back up the entire /var/mfs directory,
> on the master or on the logger.

I really can't answer for sure. Michal ?

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: Laurent W. <lw...@hy...> - 2010-07-06 12:36:51

On Tue, 6 Jul 2010 15:14:21 +0300
Stas Oskin <sta...@gm...> wrote:

> > Please find attached the ones I just created. TOTALLY untested.
> > It should just work I hope. Feedback welcome.
>
> > > Based on the fact that MFS daemonizes itself, it should be quite easy.
> > >
> > > The only question is how to run the mount operation together with fstab (the
> > > manual says the system needs to run "mount -a -t fuse.mfs") at some point.
> >
> > MFSMASTER_IP:9421 on /mnt/mfs type fuse.mfs (rw, nosuid, nodev,
> > allow_other, default_permissions)
>
> Thanks!
> Will give them a look.
> Just for understanding, how are you handling the on-boot mounting?

Reread my answer, it's already there :) And please don't forget to reply to the list too.

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: Laurent W. <lw...@hy...> - 2010-07-06 12:10:06

On Tue, 6 Jul 2010 13:07:28 +0300
Stas Oskin <sta...@gm...> wrote:

> Hi.
>
> Is there any recommended procedure how to back up MFS metadata?
>
> I.e.:
> * Where to backup from - master or logger?
> * Backup the flat files, or use some mfs tool?
> * Should I stop the logger before?
>
> Thanks!

Hmmmmm, I guess I'll answer a bit aside your question. I'd use drbd as a backup :) Anyway, it may be possible to deploy several metaloggers, but I don't know the code enough to know if it works. Could be a nice feature to implement, until high availability is available by default.

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: Laurent W. <lw...@hy...> - 2010-07-06 11:30:46

On Tue, 6 Jul 2010 12:53:37 +0300
Stas Oskin <sta...@gm...> wrote:

> > I bet it is due to reserved blocks for root.
> > If you run ext2/3/4 try:
> > tune2fs -m 0 /dev/bla with ext2/3, tune4fs if ext4.
> > No need to unmount or put the volume read-only.
> > Some space will be kept as used, but formatting takes a bunch of
> > blocks.
>
> Yep, that was it, thanks!
>
> Speaking of ext4, have you tried it with mfs? I've seen some bad experiences
> posted here about this.

The only mail I found is: « I tried it on a secondary cluster in ext4 and btrfs, for 3 months ok, failed after; I could not find the reason ». I'd blame btrfs for sure, as it is still experimental. I have two boxes whose moosefs volume is ext4, no problem up to now. Actually, to be more precise, the file which is loop-mounted is stored on ext4; the FS in the given file is ext3 (yes, yes, a testing deployment for now). Other than that, I have 2 classical volumes of 14TB each running ext4 without a glitch. Yes, they do not run mfs, but as mfs is in user-space, it can't be blamed for FS problems (which are kernel-space).

Hope it helps.

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: Laurent W. <lw...@hy...> - 2010-07-06 11:21:57

On Tue, 6 Jul 2010 13:00:21 +0300
Stas Oskin <sta...@gm...> wrote:

Hi,

> Hi.
>
> Thanks, I will try to write some of my own, and we could compare them.

Please find attached the ones I just created. TOTALLY untested. It should just work I hope. Feedback welcome.

> Based on the fact that MFS daemonizes itself, it should be quite easy.
>
> The only question is how to run the mount operation together with fstab (the
> manual says the system needs to run "mount -a -t fuse.mfs") at some point.

MFSMASTER_IP:9421 on /mnt/mfs type fuse.mfs (rw, nosuid, nodev, allow_other, default_permissions)

See http://www.moosefs.org/reference-guide.html

Regards,
--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
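In the same spirit as the attached (untested) scripts, a minimal init-style wrapper for the master might look like the sketch below; it simply forwards to the daemon's own start/stop/restart actions, and the binary path is an assumption:

    #!/bin/sh
    # Minimal init wrapper for the MooseFS master (sketch, untested).
    # mfsmaster daemonizes itself, so we only forward the requested action.
    case "$1" in
      start|stop|restart)
        exec /usr/sbin/mfsmaster "$1"
        ;;
      *)
        echo "Usage: $0 {start|stop|restart}" >&2
        exit 1
        ;;
    esac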
From: Stas O. <sta...@gm...> - 2010-07-06 10:26:54

Hi.

Are there any known issues for running MFS under root?

Regards.
From: Stas O. <sta...@gm...> - 2010-07-06 10:07:59

Hi.

Is there any recommended procedure how to back up MFS metadata?

I.e.:
* Where to backup from - master or logger?
* Backup the flat files, or use some mfs tool?
* Should I stop the logger before?

Thanks!
From: Michał B. <mic...@ge...> - 2010-07-06 09:10:55

MooseFS doesn't have any recommended file system to run on. You can easily use XFS; it is just your choice. Here we have 70 chunkservers running on ext3.

Kind regards
Michal

> -----Original Message-----
> From: jose maria [mailto:let...@us...]
> Sent: Monday, July 05, 2010 9:51 PM
> To: moo...@li...
> Subject: Re: [Moosefs-users] XFS support
>
> On Monday, July 05, 2010, Stas Oskin wrote:
> > Hi.
> >
> > How well does MooseFS support XFS?
> >
> > Or is the recommended file system Ext3?
> >
> > Regards.
>
> * I tried it on a secondary cluster in ext4 and btrfs; for 3 months it was ok,
> then it failed and I could not find the reason. Surely xfs can be used if
> supported by the kernel.
>
> * Safety is a matter of the stability of the file system; ext3 is stable, xfs
> is also stable. I prefer ext3.
>
> * Regards.
From: Laurent W. <lw...@hy...> - 2010-07-06 08:07:46

On Tue, 6 Jul 2010 10:56:13 +0300
Stas Oskin <sta...@gm...> wrote:

> Hi.
>
> I've installed MFS chunkservers on two boxes with 4TB each.
>
> For some reason, MFS shows 187 GiB as already used - while I haven't
> placed any files there yet, and the chunk count equals 0.
>
> Any idea why it shows such stats?

I bet it is due to reserved blocks for root. If you run ext2/3/4 try: tune2fs -m 0 /dev/bla with ext2/3, tune4fs if ext4. No need to unmount or put the volume read-only. Some space will still be reported as used, since formatting takes a bunch of blocks.

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
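To see how much the root reservation accounts for before clearing it (a generic e2fsprogs check, not from the thread; /dev/sdb1 is a placeholder device):

    # Show how many blocks are reserved for root on the chunkserver volume.
    tune2fs -l /dev/sdb1 | grep -i 'reserved block count'
    # Drop the reservation entirely, as suggested above (the default is 5%).
    tune2fs -m 0 /dev/sdb1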
From: Michał B. <mic...@ge...> - 2010-07-06 08:01:02

Please read this FAQ entry: http://www.moosefs.org/moosefs-faq.html#df

Are the disks used exclusively for MooseFS chunks, or do you have some other data there?

Regards
Michal

From: Stas Oskin [mailto:sta...@gm...]
Sent: Tuesday, July 06, 2010 9:56 AM
To: moo...@li...
Subject: [Moosefs-users] MFS reports used space

Hi.

I've installed MFS chunkservers on two boxes with 4TB each.

For some reason, MFS shows 187 GiB as already used - while I didn't place any files there yet, and the chunk count equals 0.

Any idea why it shows such stats?

Regards.
From: Stas O. <sta...@gm...> - 2010-07-06 07:56:40

Hi.

I've installed MFS chunkservers on two boxes with 4TB each.

For some reason, MFS shows 187 GiB as already used - while I didn't place any files there yet, and the chunk count equals 0.

Any idea why it shows such stats?

Regards.