From: <445...@qq...> - 2024-06-27 07:25:24
|
Hi, all. It is very difficult to optimize a project that consists of multiple .c files when compiling with SDCC. Could we use a separate tool to analyze multiple .c files and merge them into a single .c file (while preserving the debugging information)? Once the single .c file is generated, SDCC could compile it and do more optimization, such as packing multiple bit variables into one byte.

Has anyone some ideas about this question?

Best regards,
Chunjiang Li

月明风清 445...@qq... |
From: Oleg E. <ole...@t-...> - 2024-07-02 02:31:47
|
Hi,

On Thu, 2024-06-27 at 15:25 +0800, 月明风清 via sdcc-devel wrote:
> Hi, all.
> God! It is very much difficult work to do to optimize multiple .C files project compiling with SDCC.
> [...]

This is more or less what GCC does with LTO. The individual files are compiled and, instead of (or in addition to) the relocatable machine code, it also stores a dump of the compiler's intermediate code (a kind of precompiled code). That is passed to the binutils linker, and the linker then invokes the "linker plugin", which is again GCC itself. Or something like that. I think it's a hack to stay compatible with the traditional compilation model. But that's one way of doing it.

There's also the https://llvm-mos.org/ project. They're also doing a couple of whole-program optimizations. For example, they solve the problem of how to optimally allocate function arguments on the stack vs. using shared global variables -- a common problem on 8-bit systems. Maybe you can get some inspiration from that project.

Best regards,
Oleg Endo |
From: Benedikt F. <b.f...@gm...> - 2024-07-02 17:09:02
|
On 02.07.24 at 04:31, Oleg Endo wrote:
> This is more or less what GCC does with LTO. The individual files are
> compiled and instead (or in addition to) the relocatable machine code it
> also stores a dump of the compiler's intermediate code (kind of
> precompiled code). [...]

I still favor the AspectC++ approach to whole-program analysis because of its simplicity:

Applied to SDCC, you would basically let the compiler analyze the source code of each individual translation unit and let it write all the information (function and variable usage, constant propagation, etc.) to a common database file (XML, JSON or whatever), but skip actual code generation. You then use a specialized make tool to keep re-translating translation units until the database does not change any further.

Eventually, you re-translate every translation unit one last time, this time with global knowledge, and output optimized object files, which can be linked using the same primitive linker as before.

I believe that the necessary changes to SDCC would be considerably less invasive.

-- Benedikt |
From: Oleg E. <ole...@t-...> - 2024-07-03 00:39:47
|
On Tue, 2024-07-02 at 19:08 +0200, Benedikt Freisen via sdcc-devel wrote:
> I still favor the AspectC++ approach to whole program analysis because
> of its simplicity:
> [...]

I'm not that familiar with the details, but that step sounds essentially very similar to the LTO approach. It doesn't do the full compilation and optimization of each translation unit, but just dumps the internal compiler data structures at some set point. Since SDCC tends to slow down sooner as code size grows, I'd rather avoid any fancy interoperable database formats, which add a (wasted) data-marshaling round trip.

> You then use a specialized make tool to keep re-translating translation
> units until the database does not change any further.
> [...]

Sounds like that would require changing the build system of all the user projects in order to benefit from it. Some folks might not be using makefiles directly at all (e.g. me, usually using cmake + ninja and/or make). It might be less invasive on the SDCC side, but more invasive for all users of SDCC, unless the build tools have built-in support for this approach.

Best regards,
Oleg Endo |
From: <lic...@qq...> - 2024-07-03 00:59:28
|
Sure, there is no simple and perfect solution for optimization across multiple translation units (.c files). If we want something that does not affect the build sequence of an SDCC C project, we could leave all the cross-file optimization to link time (perhaps including regenerating the .asm/.o files when linking).

------------------ Original ------------------
From: "Development chatter about sdcc" <ole...@t-...>;
Date: Wed, Jul 3, 2024 08:39 AM
To: "Development chatter about sdcc" <sdc...@li...>;
Subject: Re: [sdcc-devel] About optmize multiple .c file compilation with SDCC

On Tue, 2024-07-02 at 19:08 +0200, Benedikt Freisen via sdcc-devel wrote:
> I still favor the AspectC++ approach to whole program analysis because
> of its simplicity:
> [...]

_______________________________________________
sdcc-devel mailing list
sdc...@li...
https://lists.sourceforge.net/lists/listinfo/sdcc-devel |