Does Barman's memory usage depend on the database size? I have just switched from 1.x to version 2.3-2.pgdg80+1 and I am unable to back up a 38GB cluster because of out-of-memory errors.
I am using the rsync backup method.
The process is killed by the OOM killer after consuming 1.6GB of RAM, or it ends with this error:
Starting backup using rsync-exclusive method for server pgsql1 in /var/lib/barman/pgsql1/base/20180516T195504
Backup start at LSN: 3F/A3000028 (000000020000003F000000A3, 00000028)
Starting backup copy via rsync/SSH for 20180516T195504
Traceback (most recent call last):
  File "/usr/lib/python2.7/logging/__init__.py", line 859, in emit
    msg = self.format(record)
  File "/usr/lib/python2.7/logging/__init__.py", line 732, in format
    return fmt.format(record)
  File "/usr/lib/python2.7/logging/__init__.py", line 474, in format
    s = self._fmt % record.__dict__
MemoryError
Logged from file command_wrappers.py, line 337
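For what it's worth, the traceback above fails inside the standard library's logging Formatter, i.e. while the final log string is being built (`s = self._fmt % record.__dict__`). A minimal sketch of that code path, assuming a hypothetically huge log message (the record name and message text below are illustrative, not Barman's actual code):

```python
import logging

# Stand-in for a very large captured command output; the whole string
# must be allocated again when the record is formatted, which is the
# point where a MemoryError would surface under memory pressure.
huge_output = "x" * 10_000_000

record = logging.LogRecord(
    name="barman.command_wrappers",  # illustrative name, not from the source
    level=logging.DEBUG,
    pathname="example.py",
    lineno=0,
    msg="command output: %s",
    args=(huge_output,),
    exc_info=None,
)

# Formatter.format() builds the complete message string in memory,
# the same code path shown in the traceback above.
formatted = logging.Formatter("%(message)s").format(record)
print(len(formatted))  # prefix length + 10,000,000 characters
```

This only demonstrates where the allocation happens, not why Barman's message was that large in the first place.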
Why is barman using that much memory? The memory is allocated directly by the barman process. Example:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
29068 barman 20 0 1230964 8160 684 S 0.0 0.5 0:00.01 barman
27474 barman 20 0 1230964 6132 604 S 0.3 0.4 1:47.83 barman
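Note that in the top snippet above VIRT is about 1.2GB while RES is only a few MB, so it can help to track both numbers over time. A minimal sketch (Linux-only assumption; the `mem_info` helper name is mine) that reads the same values top reports as VIRT (VmSize) and RES (VmRSS) from /proc:

```python
import os

def mem_info(pid):
    """Return the VmSize (VIRT) and VmRSS (RES) lines for a PID, as kB strings."""
    info = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            # /proc/<pid>/status lines look like "VmRSS:      6132 kB"
            if line.startswith(("VmSize:", "VmRSS:")):
                key, value = line.split(":", 1)
                info[key] = value.strip()
    return info

# Example: inspect the current process (substitute the barman PID instead)
print(mem_info(os.getpid()))
```

Sampling this periodically for the barman PID would show whether resident memory actually grows toward the OOM kill or only the virtual size is large.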
I would have expected high memory consumption with parallel backups on version 2.2, but version 2.3 contains the fix (https://github.com/2ndquadrant-it/barman/issues/116).
Regarding the DB size: we use Barman to back up databases over 20TB without noticing any abnormal memory usage.
Could you please attach the barman diagnose output?
Last edit: Marco Nenciarini 2018-05-17
I have attached the barman diagnose output.