From: SourceForge.net <no...@so...> - 2003-02-27 22:48:22
Bugs item #694713, was opened at 2003-02-27 22:57
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=111118&aid=694713&group_id=11118

Category: funct: header manipulation
Group: version 3.0
Status: Open
Resolution: None
Priority: 5
Submitted By: Dan Price (dp2112)
Assigned to: Nobody/Anonymous (nobody)
Summary: significant memory leak in parse_http_header()

Initial Comment:

[ I formatted this to 80 columns, which SF doesn't seem to like...
sorry if the formatting is busted ]

[dp...@en..., 2/27/2003]

I run privoxy on Solaris (I work at Sun, actually) and placed the proxy
under our newest userland memory allocator, called libumem (this will be
generally available in the next update of Solaris 9). Besides being
lightning fast for multithreaded programs that call malloc(), libumem has
a great set of debugging tools, including a debug mode which can be used
to detect memory leaks.

After running privoxy under libumem for a couple of weeks, I found that
it had grown to roughly 380 MB, at which point it crashed. I realized I
hadn't compiled it with '-g', so I rebuilt it and enabled the libumem
debugging features. Here is the leak analysis, after just a couple of
pages have been processed by privoxy:

    $ gcore `pgrep privoxy`
    gcore: core.13217 dumped
    $ mdb core.13217
    Loading modules: [ libumem.so.1 libc.so.1 libthread.so.1 ld.so.1 ]
    > ::findleaks
    CACHE     LEAKED   BUFCTL CALLER
    dd89bb2c      67 dd79c8c0 libc.so.1`strdup+0x23
    dd89b7ac       2 dd85cea0 zalloc+0xf
    ------------------------------------------------------------------
       Total      69 buffers, 6656 bytes

So we can see that the first leak to address is the one with 67 leaked
buffers (after just about a dozen page loads!!). After a couple of
weeks, this leaked 50,000 buffers. By using the 'bufctl_audit' macro, we
can find out where the leaked buffer was allocated:

    > dd79c8c0$<bufctl_audit
    0xdd79c8c0:     next            addr            slab
                    dd8a9600        dd79dd90        dd8dd900
    0xdd79c8cc:     cache           timestamp       thread
                    dd8f706c        153571213167365642
    0xdd79c8dc:     lastlog         contents        stackdepth
                    0               0               10
                    libumem.so.1`umem_cache_alloc_debug+0x140
                    libumem.so.1`umem_cache_alloc+0x583
                    libumem.so.1`umem_alloc+0x47
                    libumem.so.1`malloc+0x30
                    libc.so.1`strdup+0x23
                    parse_http_request+0x26
                    chat+0x15f
                    serve+0xc
                    libthread.so.1`_thr_setup+0x42
                    libthread.so.1`_lwp_start

You can see a stack trace of the allocation that caused the leak. The
leak happened at parse_http_request+0x26 (that is, 0x26 bytes into the
code of the function):

    jb_err parse_http_request(const char *req,
                              struct http_request *http,
                              struct client_state *csp)
    {
       ...
       memset(http, '\0', sizeof(*http));
    >>>buf = strdup(req);

So it is this buffer which leaked. But how? All of the error cases in
parse_http_request() are properly checked. But when the routine
*succeeds*, buf isn't freed:

       if ( (http->cmd == NULL)
         || (http->gpc == NULL)
         || (http->ver == NULL) )
       {
          free(buf);
    (2)>>>free_http_request(http);
          return JB_ERR_MEMORY;
       }

    (1)>>>return JB_ERR_OK;
    }

I added a free to this code path at (1) and the leak went away.

Staring at the code, there is something else which jumps out at you, at
(2). The 'http' variable is passed into this routine and should be
filled out by it. It's very, very suspect that this routine is calling
free_http_request(http), since this memory logically 'belongs' to the
caller.
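For the record, here is a sketch of what the tail of parse_http_request()
looks like with the free added at (1). It assumes, as the error path
above suggests, that http->cmd, http->gpc and http->ver are independent
copies rather than pointers into buf, so freeing buf on success is safe:

       if ( (http->cmd == NULL)
         || (http->gpc == NULL)
         || (http->ver == NULL) )
       {
          /* Allocation failure: release the working copy and any
           * partially filled-in fields before reporting the error. */
          free(buf);
          free_http_request(http);
          return JB_ERR_MEMORY;
       }

       free(buf);   /* success: the strdup()'d working copy is done with */
       return JB_ERR_OK;
    }

A cleaner structure would free buf once on a common exit path, so the
success and failure cases can't drift apart again.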
Looking at the only caller of parse_http_request(), we can see things
are a mess (jcc.c, line 841):

    (1)   parse_http_request(req, http, csp);
          freez(req);
          break;
       }

    (2)if (http->cmd == NULL)
       {
          strcpy(buf, CHEADER);
          write_socket(csp->cfd, buf, strlen(buf));
          log_error(LOG_LEVEL_CLF, "%s - - [%T] \" \" 400 0",
             csp->ip_addr_str);
          return;
       }

First of all, the caller at (1) doesn't check the return value. And, if
the parse_http_request() routine *freed* the http pointer, the caller
will immediately dereference 'http' at (2) and may crash or corrupt
data. Badness all around; this needs to be fixed. The error seems to be
repeated in parse_http_url() as well.

On to the next leak! Here is the stack trace:

    > dd85cea0$<bufctl_audit
    0xdd85cea0:     next            addr            slab
                    0               dd85df80        dd8ddd80
    0xdd85ceac:     cache           timestamp       thread
                    dd89b7ac        153581227862329331
    0xdd85cebc:     lastlog         contents        stackdepth
                    0               0               12
                    libumem.so.1`umem_cache_alloc_debug+0x140
                    libumem.so.1`umem_cache_alloc+0x583
                    libumem.so.1`umem_alloc+0x47
                    libumem.so.1`malloc+0x30
                    zalloc+0xf
                    load_one_actions_file+0x52e
                    load_actions_file+0x3f
                    run_loader+0x59
                    load_config+0x1d32
                    listen_loop+0x1b
                    main+0x5fd
                    _start+0x77

Here it looks like the problem is in the code which handles the config
file; looking at the disassembly at load_one_actions_file+0x52e shows us
that this buffer was allocated at actions.c:1185:

    cur_action = (struct action_spec *)zalloc(sizeof(*cur_action));
    if (cur_action == NULL)
    {
       fclose(fp);
       log_error(LOG_LEVEL_FATAL,
          "can't load actions file '%s': out of memory",
          csp->config->actions_file[fileid]);
       return 1; /* never get here */
    }
    init_action(cur_action);

The problem here is that freeing an action is a two-part process, as
shown in this fragment:

    if (!cur_action_used)
    {
       free_action(cur_action);
       free(cur_action);
    }
    cur_action = NULL;

But, at the bottom of the routine, we have:

       fclose(fp);
       free_action(cur_action);
    (1)
       free_alias_list(alias_list);

Inserting 'free(cur_action);' at (1) fixes the leak.

I hope this is helpful. I suspect there is a much more severe leak
lurking in privoxy, but I'm guessing that I need to install a slightly
newer version of libumem to find it. I'll file a separate bug if/when I
track it down. There are some other miscellaneous leaks I've seen which
I will track down as I have the chance, but they are rare.
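To make the return-value point concrete, here is one way the jcc.c call
site could look. 'err' is a local I've introduced for illustration, and
the loop/break structure around the original fragment is elided, so
treat this as a sketch rather than a drop-in patch:

    jb_err err = parse_http_request(req, http, csp);
    freez(req);

    if (err != JB_ERR_OK)
    {
       /* Parsing failed: tell the client and bail out. */
       strcpy(buf, CHEADER);
       write_socket(csp->cfd, buf, strlen(buf));
       log_error(LOG_LEVEL_CLF, "%s - - [%T] \" \" 400 0",
          csp->ip_addr_str);
       return;
    }

Note that on failure the sketch never dereferences 'http', which
sidesteps the question of what free_http_request() may have done to its
contents.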
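And for completeness, the bottom of load_one_actions_file() with the fix
from (1) applied, following the same two-part cleanup the routine
already uses higher up:

    fclose(fp);

    free_action(cur_action);   /* release the action's members */
    free(cur_action);          /* release the struct itself */

    free_alias_list(alias_list);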