#419 memory leak when calling top objects with conky_parse in lua

Milestone: git
Status: open
Owner: nobody
Labels: Code (277)
Priority: 5
Updated: 2014-08-18
Created: 2012-07-14
Creator: arclance
Private: No

Calling top objects with conky_parse() in a lua script causes a memory leak.
This bug causes one of my Conky instances' memory use to grow from 7 MiB to 117 MiB in about 24 hours.

The simplest lua script I have tested so far that demonstrates this is:

function conky_main()
local topCPU_line = conky_parse("${top name 1} ${top pid 1}${top cpu 1}${top mem 1} ${top time 1}")
local topMem_line = conky_parse("${top_mem name 1} ${top_mem pid 1}${top_mem cpu 1}${top_mem mem 1} ${top_mem mem_res 1}")
return ""
end
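Until the leak is fixed, the rate of growth can be reduced by calling conky_parse() less often. The following sketch is not from this thread; it is a hypothetical stopgap that caches the parsed line and only re-parses every few updates, so the leaky ${top ...} path runs less often. It slows the leak but does not fix it. conky_parse is only defined inside conky, so it is stubbed here to let the file run standalone.

```lua
-- Hypothetical workaround sketch (not from this thread): cache the parsed
-- line and only call conky_parse() once every REFRESH updates, so the
-- leaky ${top ...} parsing path is exercised less often.

-- conky_parse is provided by conky at runtime; stub it so the file can
-- also be run standalone.
conky_parse = conky_parse or function(s) return s end

local REFRESH = 5              -- re-parse once every 5 updates
local cached_line = nil        -- last parsed result
local updates_since = REFRESH  -- force a parse on the first call

function conky_main()
    if updates_since >= REFRESH then
        cached_line = conky_parse(
            "${top name 1} ${top pid 1}${top cpu 1}${top mem 1} ${top time 1}")
        updates_since = 0
    end
    updates_since = updates_since + 1
    return cached_line
end
```

With update_interval 1.0 this drops the parse rate from once per second to once every five seconds, at the cost of the top line updating more slowly.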

using this .conkyrc:

### Position ###
alignment top_right
gap_x 1300
gap_y 780
minimum_size 619 1080
maximum_width 619
### End Position ###

### Borders ###
border_inner_margin 0
border_outer_margin 0
border_width 0
draw_borders no
draw_graph_borders yes
draw_outline no
draw_shades yes
### End Borders ###

### Window ###
own_window yes
own_window_transparent yes
own_window_argb_visual false
own_window_argb_value 0
own_window_class systemConky_test
own_window_type normal
own_window_title system_Conky_test
own_window_hints undecorated,below,sticky,skip_taskbar,skip_pager
### End Window ###

### Color ###
default_color CC9900
default_outline_color 000000
default_shade_color 8B0000
color0 EE7600
color1 8B0000
### End Color ###

### Font ###
use_xft yes
xftfont DejaVu Sans Mono:size=10
xftalpha 1
### End Font ###

### Conky Settings ###
disable_auto_reload false
double_buffer yes
background no
update_interval 1.0
cpu_avg_samples 1
net_avg_samples 1
diskio_avg_samples 1
no_buffers yes
out_to_console no
out_to_stderr no
extra_newline no
uppercase no
use_spacer none
show_graph_scale no
show_graph_range no
text_buffer_size 5000
default_bar_size 475 9
imlib_cache_size 0
top_name_width 10
if_up_strictness address
max_specials 1000
### End Conky Settings ###

### Lua ###
lua_load ~/conky_parse_bug_top_example.lua
lua_draw_hook_pre main
### End Lua ###

TEXT

conky -v

Conky 1.9.1_pre2426 compiled Wed Jul 11 19:43:34 EDT 2012 for Linux 3.4.0-4.dmz.2-liquorix-amd64 (x86_64)

Compiled in features:

System config file: /usr/local/etc/conky/conky.conf
Package library path: /usr/local/lib/conky

X11:
* Xdamage extension
* XDBE (double buffer extension)
* Xft
* ARGB visual

Music detection:

General:
* math
* hddtemp
* portmon
* wireless
* nvidia
* config-output
* Imlib2
* apcupsd
* iostats
* ncurses
* Lua

Lua bindings:
* Cairo
* Imlib2

Discussion

  • arclance

    arclance - 2012-07-14

    I attached a valgrind memcheck log from a debug build of conky 1.9.1 running my example lua script and .conkyrc.
    If you want me to run any other debugging tools on conky, just let me know.

     
  • pavelo

    pavelo - 2012-07-20
    • status: open --> closed-fixed
     
  • pavelo

    pavelo - 2012-07-20

    Fixed.
    Thanks for catching this; it was caused by my hasty fix of the previous top bug :)

     
  • arclance

    arclance - 2012-07-20

    That got most of the memory leak, but it is not completely fixed yet.

    ==00:00:20:51.421 6511== 1,887 bytes in 188 blocks are still reachable in loss record 224 of 259
    ==00:00:20:51.421 6511== at 0x4C28BED: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
    ==00:00:20:51.421 6511== by 0x739B971: strndup (strndup.c:46)
    ==00:00:20:51.421 6511== by 0x4463A0: process_parse_stat (top.c:291)
    ==00:00:20:51.421 6511== by 0x446802: calculate_stats (top.c:407)
    ==00:00:20:51.421 6511== by 0x4468EA: update_process_table (top.c:461)
    ==00:00:20:51.421 6511== by 0x447248: process_find_top (top.c:770)
    ==00:00:20:51.421 6511== by 0x40D7CC: update_top (linux.c:2212)
    ==00:00:20:51.421 6511== by 0x418C8D: run_update_callback (common.c:380)
    ==00:00:20:51.421 6511== by 0x4E35B4F: start_thread (pthread_create.c:304)

    ==00:00:20:51.421 6511== 3,008 bytes in 188 blocks are still reachable in loss record 236 of 259
    ==00:00:20:51.421 6511== at 0x4C28BED: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
    ==00:00:20:51.421 6511== by 0x445C28: hash_process (top.c:65)
    ==00:00:20:51.421 6511== by 0x445F95: new_process (top.c:185)
    ==00:00:20:51.421 6511== by 0x4468DA: update_process_table (top.c:457)
    ==00:00:20:51.421 6511== by 0x447248: process_find_top (top.c:770)
    ==00:00:20:51.421 6511== by 0x40D7CC: update_top (linux.c:2212)
    ==00:00:20:51.421 6511== by 0x418C8D: run_update_callback (common.c:380)
    ==00:00:20:51.421 6511== by 0x4E35B4F: start_thread (pthread_create.c:304)

    ==00:00:20:51.422 6511== 28,576 bytes in 188 blocks are still reachable in loss record 257 of 259
    ==00:00:20:51.422 6511== at 0x4C28BED: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
    ==00:00:20:51.422 6511== by 0x445ECA: new_process (top.c:158)
    ==00:00:20:51.422 6511== by 0x4468DA: update_process_table (top.c:457)
    ==00:00:20:51.422 6511== by 0x447248: process_find_top (top.c:770)
    ==00:00:20:51.422 6511== by 0x40D7CC: update_top (linux.c:2212)
    ==00:00:20:51.422 6511== by 0x418C8D: run_update_callback (common.c:380)
    ==00:00:20:51.422 6511== by 0x4E35B4F: start_thread (pthread_create.c:304)

     
  • pavelo

    pavelo - 2012-07-20

    Does the leak grow over time? My impression is that it is constant; if that is the case, I would leave it for now.

    I do not like leaks like this, but I do not want to expend effort on them, because I have a feeling that if I started, I would end up rewriting most of top.c. I think I'll leave this for conky 2.0, where top.c will get rewritten anyway (and I'll make sure it doesn't leak there).

    cheers,
    pavelo

     
  • arclance

    arclance - 2012-08-04

    If you are not going to fix this anytime soon, could you at least reopen the bug report (since it is not completely fixed) and assign it to yourself as a reminder?

     
  • pavelo

    pavelo - 2012-08-26
    • status: closed-fixed --> open
     
  • pavelo

    pavelo - 2012-08-26

    reopening, as requested :)

     
