From: Shailabh N. <na...@us...> - 2001-09-21 20:54:42
Rohit,

Could you point out the location of the patch you mentioned? I'd like to
test it out to see the performance impact.

Regarding the breaking up of the raw I/O requests into 512-byte chunks,
I'm in the process of submitting a simple hack that allows us to
experiment with different block sizes. In an earlier posting to lse-tech
today, I'd reported that by using 4K-sized chunks instead of 512-byte
ones, the number of buffer heads being used can be significantly reduced
(from about 16368 to around 2048 for 128 reads of 64K size each).

While the lower layers do combine the I/O requests, there might still be
some performance gain from reducing the number of submit_bh() calls
(though I saw the opposite effect from some rough benchmarking).
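For reference, the chunk-size arithmetic behind those buffer head counts,
as a minimal standalone sketch (plain userspace C, purely illustrative,
not kernel code; the small gap between the computed 16384 and the
observed 16368 is not accounted for here):

    /* Buffer heads needed when a raw I/O request is mapped with one
     * buffer_head per chunk: total bytes divided by chunk size. */
    #include <stdio.h>

    int main(void)
    {
            long total = 128L * 64 * 1024;  /* 128 reads of 64K = 8 MB */

            printf("512-byte chunks: %ld buffer heads\n", total / 512);
            /* -> 16384, close to the ~16368 observed */
            printf("4K chunks:       %ld buffer heads\n", total / 4096);
            /* -> 2048, matching the figure above */
            return 0;
    }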
The data posted by Bill is a partial output from kernprof. I'm appending
the explanation of the columns (it normally appears at the end of the
call graph listing).

index  % time    self  children    called      name
-----------------------------------------------
                 0.00      0.31     150/150     system_call [3]
[4]       1.4    0.00      0.31     150         sys_read [4]
                 0.00      0.29     128/128     raw_read [5]
                 0.00      0.02      18/18      proc_file_read [22]
                 0.00      0.00       4/14      generic_file_read [78]
                 0.00      0.00     151/253     fput [191]
                 0.00      0.00     150/177     fget [194]
-----------------------------------------------

(For example, sys_read [4] was called 150 times, all of them from
system_call [3], and in turn made all 128 calls to raw_read [5].)

This table describes the call tree of the program, and was sorted by the
total amount of time spent in each function and its children.

Each entry in this table consists of several lines. The line with the
index number at the left hand margin lists the current function. The
lines above it list the functions that called this function, and the
lines below it list the functions this one called. This line lists:

  index     A unique number given to each element of the table. Index
            numbers are sorted numerically. The index number is printed
            next to every function name so it is easier to look up where
            the function is in the table.

  % time    The percentage of the `total' time that was spent in this
            function and its children. Note that due to different
            viewpoints, functions excluded by options, etc, these
            numbers will NOT add up to 100%.

  self      The total amount of time spent in this function.

  children  The total amount of time propagated into this function by
            its children.

  called    The number of times the function was called. If the function
            called itself recursively, the number only includes
            non-recursive calls, and is followed by a `+' and the number
            of recursive calls.

  name      The name of the current function. The index number is
            printed after it. If the function is a member of a cycle,
            the cycle number is printed between the function's name and
            the index number.

For the function's parents, the fields have the following meanings:

  self      The amount of time that was propagated directly from the
            function into this parent.

  children  The amount of time that was propagated from the function's
            children into this parent.

  called    The number of times this parent called the function `/' the
            total number of times the function was called. Recursive
            calls to the function are not included in the number after
            the `/'.

  name      The name of the parent. The parent's index number is printed
            after it. If the parent is a member of a cycle, the cycle
            number is printed between the name and the index number.

If the parents of the function cannot be determined, the word
`<spontaneous>' is printed in the `name' field, and all the other fields
are blank.

For the function's children, the fields have the following meanings:

  self      The amount of time that was propagated directly from the
            child into the function.

  children  The amount of time that was propagated from the child's
            children to the function.

  called    The number of times the function called this child `/' the
            total number of times the child was called. Recursive calls
            by the child are not listed in the number after the `/'.

  name      The name of the child. The child's index number is printed
            after it. If the child is a member of a cycle, the cycle
            number is printed between the name and the index number.

If there are any cycles (circles) in the call graph, there is an entry
for the cycle-as-a-whole. This entry shows who called the cycle (as
parents) and the members of the cycle (as children). The `+' recursive
calls entry shows the number of function calls that were internal to the
cycle, and the calls entry for each member shows, for that member, how
many times it was called from other members of the cycle.

Shailabh Nagar
Enterprise Linux Group, IBM TJ Watson Research Center
(914) 945 2851, T/L 862 2851


"Seth, Rohit" <roh...@in...>@lists.sourceforge.net on 09/21/2001 03:50:59 PM
Sent by: lse...@li...

To:      "Carbonari, Steven" <ste...@in...>,
         "'lse...@li...'" <lse...@li...>
cc:      "Mallick, Asit K" <asi...@in...>
Subject: RE: [Lse-tech] code path of 128KB read() from raw device
         (kernprof acg)

Steve,

Thanks for forwarding this mail. Recently I have gone over the handling
of iobufs in the kernel in great detail with Andrea (the maintainer).
That was one issue resulting in a severe loss of performance starting
with the 2.4.5 kernel. If you put in Andrea's or my patch, the result on
IA-32 could go as much as 40% higher, and on IA-64 it will be about 20%
higher, depending on the workload. Most of this benefit in performance
comes directly or indirectly from better iobuf (and associated buffer
head) management.

A couple of points about the mail below. On the 2.4.9 kernel, the 128KB
blocks should go as is (without being broken into smaller chunks of
64K). The limit is 512K, so all I/O requests <=512K get submitted in one
iteration. But if a request is greater than 512K, the kernel first waits
for the first 512K to complete before submitting the next. The data
below indicates this (look at the numbers against wait_kio, for
instance). Anyway, the limitation is mainly due to the current handling
of bhs associated with iobufs. After putting in my patch, this
limitation can be completely removed without any problems (I haven't
tried that, though).
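To make that submission policy concrete, here is a minimal standalone
sketch of the behaviour described above (illustrative userspace C, with
print statements standing in for the real submit and wait_kio steps; it
is not the actual 2.4 code):

    /* Sketch of the 512K limit: requests up to 512K go out in one
     * pass; larger requests are submitted 512K at a time, waiting for
     * each batch to complete before the next one is submitted. */
    #include <stdio.h>

    #define MAX_BATCH (512 * 1024)

    static void submit_in_batches(long total)
    {
            long done = 0;

            while (done < total) {
                    long batch = total - done;

                    if (batch > MAX_BATCH)
                            batch = MAX_BATCH;

                    printf("  submit %ld bytes at offset %ld\n",
                           batch, done);
                    done += batch;

                    /* The real code waits here (this is where wait_kio
                     * shows up in the profile) before submitting more. */
                    if (done < total)
                            printf("  wait for batch to complete\n");
            }
    }

    int main(void)
    {
            printf("128K request (<=512K, one iteration):\n");
            submit_in_batches(128L * 1024);

            printf("2MB request (>512K, four batches):\n");
            submit_in_batches(2L * 1024 * 1024);
            return 0;
    }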
The other part that this mail is referring to is the breaking of
requests into 512-byte sector quantities. I have this on my todo list,
to try and see its effect in a real world situation. But currently I'm
on AIO ... that is the top priority. So, it is possible that I may not
touch this for another couple of weeks.

Having said that, I also think that the lower layer is combining these
requests broken up by the raw I/O layer. The data below kind of
indicates that too, but this needs to be properly investigated.

Could you also please explain the names of all the columns in the
following data.

rohit

-----Original Message-----
From: Carbonari, Steven
Sent: Friday, September 21, 2001 10:05 AM
To: Seth, Rohit
Cc: Mallick, Asit K; Prickett, Terry O
Subject: FW: [Lse-tech] code path of 128KB read() from raw device
(kernprof acg)

Rohit,

I thought you might be interested in seeing this, since you have looked
closely at raw I/O with the TPC-C testing. Does your patch address any
of the issues brought up below?

Steve
From: Andrea A. <an...@su...> - 2001-09-22 19:41:28
Hello!

On Fri, Sep 21, 2001 at 04:53:11PM -0400, Shailabh Nagar wrote:
> Could you point out the location of the patch you mentioned ? I'd like to

It's included in mainline, see 2.4.10pre14 (see the f_iobuf field in the
file structure). Since the latest pre patches, O_DIRECT on ext2 is
supported as well (and now that the blkdev is in the pagecache you can
even open("/dev/hda", O_DIRECT|...) instead of using /dev/raw*).

Splitting the kiobuf, and in turn integrating part of Rohit's patch
(using a slab cache rather than vmalloc), will be the next step (for
contention cases and allocation of kiobufs in fast paths).

Andrea
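For anyone wanting to try the blkdev O_DIRECT path Andrea mentions, a
minimal userspace sketch (it assumes a glibc with posix_memalign;
O_DIRECT needs _GNU_SOURCE on Linux, and the buffer, file offset and
transfer length must all be aligned to the sector size or the read
fails with EINVAL):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
            int fd = open("/dev/hda", O_RDONLY | O_DIRECT);
            void *buf;
            ssize_t n;

            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            /* Align the buffer to 512 bytes (the usual sector size). */
            if (posix_memalign(&buf, 512, 64 * 1024)) {
                    close(fd);
                    return 1;
            }
            n = read(fd, buf, 64 * 1024); /* bypasses the page cache */
            if (n < 0)
                    perror("read");
            else
                    printf("read %zd bytes\n", n);

            free(buf);
            close(fd);
            return 0;
    }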