From: Paul F. <pj...@wa...> - 2023-01-25 20:17:09
Hi,

I'd like some feedback on https://bugs.kde.org/show_bug.cgi?id=464103

This allows the user to specify that a block of memory should be histogrammed. It's intended for blocks larger than the existing 1024-byte limit. Recently at work I was profiling and the top data structure was about 1600 bytes, so I couldn't see the histogram straight away - I rebuilt DHAT with a limit of 2048 to see it.

Since this is a user request it shouldn't be too overwhelming (either for runtime memory or the size of the HTML generated). I'll do a bit of trial and error and put in some upper limit. About 2500 bytes will fit on my screen, so probably somewhere between 10k and 25k. For such large arrays someone needs to write a nice zoomable GUI.

One change I'll probably make straight away is to add an argument for the initial access count (normally 0 for malloc, or 1 for calloc or a std::vector with an initial size and value). Or possibly have two macros, like DHAT_HISTOGRAM_MEMORY_UNINIT and DHAT_HISTOGRAM_MEMORY_INIT.

I've also been thinking about an array version of these user requests. For instance, say we have 'struct Node', which is 80 bytes. If a user has

std::vector<Node> nodeVec(500);

or

struct Node *nodes = malloc(500*sizeof(struct Node));

then this would be too big for the DHAT histogram, and just increasing the limit as in the first user request isn't very useful either. So what I'm thinking of is a user request DHAT_HISTOGRAM_ARRAY that takes 3 arguments:
- pointer to the memory
- number of elements
- size of each element

That would look like this:

std::vector<Node> nodeVec(500);
DHAT_HISTOGRAM_ARRAY(nodeVec.data(), nodeVec.size(), sizeof(Node));

In DHAT this would find and throw away the old block - a bit inefficient, but I don't see an alternative - then create number_of_elements subblocks.

I did think about automating that, but the only alloc function where that would be possible is calloc; malloc, new and new[] only have the overall size available.

A+
Paul
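For reference, here is a sketch of how the user request from the bug report might be used. The macro name DHAT_HISTOGRAM_MEMORY is assumed from the patch under review, and a no-op stub stands in for <valgrind/dhat.h> so the snippet compiles outside Valgrind (client requests are no-ops when not running under the tool anyway):

```cpp
#include <cstdlib>

// Stub so the sketch is self-contained outside Valgrind; under Valgrind the
// real client request macro from <valgrind/dhat.h> would be used instead
// (name assumed from the patch under review).
#ifndef DHAT_HISTOGRAM_MEMORY
#define DHAT_HISTOGRAM_MEMORY(addr) ((void)(addr))
#endif

// A structure above DHAT's current 1024-byte histogram limit.
struct Big { char payload[1600]; };

// Allocate a block too large for DHAT's default histogram and explicitly
// request that it be histogrammed anyway.
Big* allocate_big() {
    Big* b = static_cast<Big*>(std::malloc(sizeof(Big)));
    if (b != nullptr)
        DHAT_HISTOGRAM_MEMORY(b);  // user request: histogram this block
    return b;
}
```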
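The subblock idea above can be illustrated with a small sketch. This is not DHAT's implementation - ArrayHistogram and its members are made up for illustration - but it shows the aggregation that DHAT_HISTOGRAM_ARRAY implies: every access to the block is folded onto an offset within a single element, so all elements share one elemsize-wide histogram instead of needing one covering the whole allocation:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustration only (not DHAT's implementation): fold every byte access to a
// (pointer, nelems, elemsize) block onto an offset within a single element.
struct ArrayHistogram {
    std::uintptr_t base;               // start of the tracked block
    std::size_t nelems;                // number of elements
    std::size_t elemsize;              // size of each element
    std::vector<std::uint64_t> counts; // one counter per byte of one element

    ArrayHistogram(const void* p, std::size_t n, std::size_t sz)
        : base(reinterpret_cast<std::uintptr_t>(p)),
          nelems(n), elemsize(sz), counts(sz, 0) {}

    // Record a one-byte access; accesses outside the block are ignored.
    void record(const void* addr) {
        std::uintptr_t a = reinterpret_cast<std::uintptr_t>(addr);
        if (a < base || a >= base + nelems * elemsize)
            return;
        counts[static_cast<std::size_t>(a - base) % elemsize]++;
    }
};
```

With the 80-byte struct Node example, accesses to the same field of nodes[0] and nodes[499] land on the same counter, so an 80-byte histogram summarises all 40000 bytes of the vector.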