From: Wenyuan G. <guo...@gm...> - 2010-07-21 23:26:17
|
Hi all,

With more careful profiling of the program, I found my earlier theory to be wrong: I thought the delay when switching resolution came mainly from HDD latency. That component turns out to be minor, thanks to OS caching. The following two components are much more significant:

1. Surprisingly, the librsvg function that determines the dimensions of an input SVG image file takes substantial time. I had thought it would be as simple as locating the relevant strings in the SVG file, involving only minimal parsing effort. In any case, I have built an in-memory caching table that stores the dimensions of SVG files seen before, so the penalty is paid only once per file per session.

2. The PNG function that parses an input file into an SDL surface is the second contributor to the delay, although smaller than 1. I have built another in-memory cache for these SDL surfaces, using reference copying instead of copying the actual data. This assumes that loaded SDL surfaces are never modified, which seems to be the case for tuxmath. This is both faster and more memory efficient.

With the additional caching, pressing F10 to switch resolution takes only about 0.5 s, compared with about 2 s before. Given this small delay, I have scrapped the multithreading mechanism implemented earlier, which is no longer necessary. Moreover, the multithreading functions still had mysterious bugs: too many rendering functions share global variables and conflict over I/O, and it is extremely difficult to identify all the race conditions between them at this point.

Cheers
Wenyuan
|
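The two caches described above might look roughly like the sketch below. Everything here is illustrative: the names (cached_image, cache_lookup, cache_insert, CACHE_MAX) are not taken from the tuxmath source, and eviction, error handling, and any invalidation needed when a surface has to be re-rendered at a new size are left out.

```c
/* Illustrative sketch of a per-session image cache: one entry per file,
 * storing the SVG's native dimensions and a shared (not copied) SDL surface.
 * Names and structure are hypothetical, not from the tuxmath source tree. */
#include <string.h>
#include "SDL.h"

#define CACHE_MAX 512

typedef struct {
    char filename[256];       /* key: path of the SVG/PNG file           */
    int  width, height;       /* native dimensions, queried only once    */
    SDL_Surface *surface;     /* shared pointer; callers must not modify */
} cached_image;

static cached_image cache[CACHE_MAX];
static int cache_count = 0;

/* Return the cache slot for 'filename', or NULL if it has not been seen. */
static cached_image *cache_lookup(const char *filename)
{
    int i;
    for (i = 0; i < cache_count; i++)
        if (strcmp(cache[i].filename, filename) == 0)
            return &cache[i];
    return NULL;
}

/* Record a newly loaded image so later lookups skip librsvg/libpng entirely. */
static cached_image *cache_insert(const char *filename, int w, int h,
                                  SDL_Surface *surf)
{
    cached_image *entry;
    if (cache_count >= CACHE_MAX)
        return NULL;                 /* cache full; caller loads normally     */
    entry = &cache[cache_count++];
    strncpy(entry->filename, filename, sizeof(entry->filename) - 1);
    entry->filename[sizeof(entry->filename) - 1] = '\0';
    entry->width = w;
    entry->height = h;
    entry->surface = surf;           /* reference copy, no pixel data copied  */
    return entry;
}
```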
From: Tim H. <ho...@wu...> - 2010-07-22 00:50:31
|
Hi Wenyuan,

This is good, careful work. I think your willingness to change your plans is admirable, and the half-second delay seems very livable, a big improvement on the previous 2s. Long live profiling! :-)

Best,
--Tim
|
From: Wenyuan G. <guo...@gm...> - 2010-07-22 19:53:47
|
Hi Tim,

Thank you for the compliments! Although things didn't go exactly as expected, I'm quite happy and relieved that the new problems could be solved easily and that the performance boost I promised earlier can be delivered.

Has anyone else tried the new program? Any feedback is most welcome!

Cheers
Wenyuan
|
From: David B. <dav...@gm...> - 2010-07-23 01:06:40
|
Hi,

> Has anyone else tried the new program? Any feedback is most welcome!

I built it just now on my home desktop and it works great. I also built and ran it on a Dell Mini 9 (a much slower machine), and also had excellent results. The first execution has a few seconds of delay, but after that the program responds quite quickly, both at startup and on change of resolution.

Hooray!

On a completely unrelated note, I noticed one build pitfall that is triggered if the same build directory is used for a CMake build and then an autotools build: CMake puts config.h under src, whereas autoheader puts config.h directly into the build directory. The problem is that they don't create equivalent files, and the CMake version shows up first in the include path, so a subsequent autotools build doesn't find the correct config.h. This leads to a compile error the first time something from config.h is needed, in this case PACKAGE.

The moral of the story is that when testing both build systems, keep each in its own build directory.

David
|
From: Wenyuan G. <guo...@gm...> - 2010-07-24 00:32:02
|
Hi David,

Glad to hear about your test results! It is great to know that the program runs smoothly on older machines too.

Tim: should I move on to the PNG -> SVG conversion phase, now that we have achieved most of our program optimization objectives? It might be interesting to see what problems occur as more of the images are converted to SVG. What is your opinion?

Cheers
Wenyuan
|
From: Tim H. <ho...@wu...> - 2010-07-24 09:25:55
|
Hi Wenyuan,

On Friday, July 23, 2010 07:31:56 pm Wenyuan Guo wrote:
> Tim: should I move on to the PNG -> SVG conversion phase, now that we
> have achieved most of our program optimization objectives? It might be
> interesting to see what problems occur as more of the images are
> converted to SVG. What is your opinion?

I agree that this is probably the best thing to do now. Certainly it seems likely that loading times will increase as you have more images. If that becomes problematic, one obvious solution would be to defer loading of images that are not needed immediately (e.g., menu items only when you need them, comets game images only when you need them, etc.). But assuming that's not already implemented, it seems sensible to first try the straightforward "just load everything" approach and see how long it takes; if it's too long, deal with the issues once you can profile and see where the bottlenecks are.

Exciting!
--Tim
|
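The deferred-loading suggestion above is the usual load-on-first-use pattern; a minimal sketch follows, in which load_image() and the file path are placeholders for whatever loader and asset layout tuxmath actually uses.

```c
/* Illustrative load-on-first-use accessor; load_image() and the path below
 * are assumptions, standing in for the real SVG/PNG loading code. */
#include <stddef.h>
#include "SDL.h"

extern SDL_Surface *load_image(const char *path);   /* assumed helper */

static SDL_Surface *comet_img = NULL;   /* intentionally not loaded at startup */

/* Returns the comet sprite, loading it only the first time it is requested;
 * subsequent calls are just a pointer return, so only the first comets game
 * pays the loading cost. */
SDL_Surface *get_comet_image(void)
{
    if (comet_img == NULL)
        comet_img = load_image("images/comets/comet.svg");
    return comet_img;
}
```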
From: Brendan L. <bm...@ri...> - 2010-07-25 16:45:38
|
> I agree that this is probably the best thing to do now. Certainly it seems
> likely that loading times will increase as you have more images. If that
> becomes problematic, one obvious solution would be to defer loading of
> images that are not needed immediately (e.g., menu items only when you need
> them, comets game images only when you need them, etc.).

My 2 cents: lazy loading might actually create the illusion of slowness, since the loads would happen during program use. IMO, if everything can be loaded quickly enough for us to "hide" it behind the Tux4Kids logo, that's best.
|
From: Wenyuan G. <guo...@gm...> - 2010-07-26 13:59:49
|
Hi Brendan,

On Mon, Jul 26, 2010 at 12:45 AM, Brendan Luchen <bm...@ri...> wrote:
> My 2 cents: lazy loading might actually create the illusion of slowness,
> since the loads would happen during program use. IMO, if everything can be
> loaded quickly enough for us to "hide" it behind the Tux4Kids logo, that's
> best.

I think you are right! Actually, in the worst case we could create a low-priority thread that loads the things that aren't needed immediately, while allowing the main program to proceed as soon as it is ready.

Wenyuan
|
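A rough sketch of that worst-case idea using SDL 1.2's SDL_CreateThread is below. The names load_deferred_images() and start_background_loading() are made up for illustration, SDL 1.2 offers no portable thread-priority control, and a real version would need the same care about shared state that sank the earlier multithreading attempt.

```c
/* Rough sketch of a background loader built on SDL 1.2's thread API.
 * load_deferred_images() is an assumed placeholder; "low priority" here just
 * means the thread only touches assets the main thread is not yet using. */
#include "SDL.h"
#include "SDL_thread.h"

extern void load_deferred_images(void);   /* assumed: loads non-essential art */

static SDL_Thread *loader_thread = NULL;
static volatile int loading_done = 0;     /* a real version should use an SDL_mutex */

static int loader_thread_fn(void *unused)
{
    (void)unused;
    load_deferred_images();
    loading_done = 1;
    return 0;
}

/* Call once the essential images are in and the menus can be shown.
 * At shutdown, SDL_WaitThread(loader_thread, NULL) should be called. */
void start_background_loading(void)
{
    loader_thread = SDL_CreateThread(loader_thread_fn, NULL);
    if (loader_thread == NULL)
        loader_thread_fn(NULL);   /* thread creation failed: load synchronously */
}
```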