From: subramanya s. <po...@ou...> - 2013-07-12 20:42:25

Hi,

My simulations are failing because I am running out of memory. Is there any way to reduce the memory usage for large problems - problems with a large number of individual systems?

Thanks,
Subramanya
From: Kirk, B. (JSC-EG311) <ben...@na...> - 2013-07-12 20:47:06

On Jul 12, 2013, at 3:42 PM, subramanya sadasiva <po...@ou...> wrote:
> systems with a large number of individual systems?

How many systems, and how many variables per system? The nature of the data structures is such that it may be better to pack N variables into 1 system rather than have N systems with 1 variable each.

-Ben
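[For later readers of the archive: packing several fields into one system looks roughly like the sketch below. The system and variable names are made up for illustration, and the exact headers depend on your libMesh version.]

```cpp
#include "libmesh/equation_systems.h"
#include "libmesh/linear_implicit_system.h"
#include "libmesh/enum_order.h"

// Sketch: one system holding several variables, instead of one
// single-variable system per field.  Names here are illustrative.
void setup_coupled_system (libMesh::EquationSystems & es)
{
  libMesh::LinearImplicitSystem & sys =
    es.add_system<libMesh::LinearImplicitSystem> ("coupled");

  // One add_variable() call per field; each returns the variable number,
  // which the assembly routine uses to address its block.
  const unsigned int u = sys.add_variable ("u", libMesh::FIRST);
  const unsigned int c = sys.add_variable ("c", libMesh::FIRST);

  (void) u; (void) c;
  // ... attach one assembly routine that fills all the coupled blocks ...
}
```

The trade-off Ben alludes to: per-system overhead (matrix, vectors, DofMap bookkeeping) is paid once instead of N times, at the cost of a denser coupled matrix unless the coupling is restricted.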
From: Roy S. <roy...@ic...> - 2013-07-12 20:50:19

On Fri, 12 Jul 2013, subramanya sadasiva wrote:
> My simulations are failing as I am running out of memory. Is there
> any way to reduce the memory usage for large systems - systems with
> a large number of individual systems?

In increasing order (IMHO) of typical difficulty:

Use fewer MPI ranks and use threading to keep all your processor cores in use.

Use ParallelMesh.

Use a matrix-free solve.

Reduce your matrix density (with a CouplingMatrix) or size (by operator splitting).
---
Roy
From: subramanya s. <po...@ou...> - 2013-07-12 21:05:08

> How many systems, and how many variables per system?

I have 4 implicit systems and 1 explicit system. The 4 implicit systems have 1, 3, 2, and 1 variables, and the explicit system (for stresses etc.) has 14 variables. The case I ran had only 250,000 HEX20 elements. Most of the systems are solved with linear elements, except the elasticity solver, for which I use quadratic elements.

> The nature of the data structures is such that it may be better to pack N variables into 1 system rather than have N systems with 1 variable each.

Won't this slow down the solution schemes way too much?
From: Kirk, B. (JSC-EG311) <ben...@na...> - 2013-07-12 21:14:39

On Jul 12, 2013, at 4:04 PM, subramanya sadasiva <po...@ou...> wrote:
> I have 4 implicit systems and 1 explicit system. The 4 implicit systems have 1, 3, 2, and 1 variables, and the explicit system (for stresses etc.) has 14 variables.

It seems, then, that you may be allocating storage for all the matrices at the same time. You could manually clear the matrices between solves, I'd guess, and save a lot of storage.

-Ben
From: subramanya s. <po...@ou...> - 2013-07-12 21:07:41

Hi Roy,

Are there any examples where the assembly is threaded?

Thanks,
Subramanya
From: Roy S. <roy...@ic...> - 2013-07-12 21:23:24

On Fri, 12 Jul 2013, subramanya sadasiva wrote:
> Hi Roy, Are there any examples where the assembly is threaded?

Only indirectly - FEMSystem based apps (see the fem_system and the adjoints examples) get multithreaded assembly automatically via the code in fem_system.C.

But there's loads of (often simpler) multithreaded code in the library you could look at. grep for "Threads::parallel_for" to find it.

I'd look at Ben's suggestion first, though. If you've already split your implicit solves into a number of smaller systems, then clearing each matrix in between solves should do wonders for your maximum memory consumption.
---
Roy
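[For later readers: the pattern that grep turns up looks roughly like the sketch below. The range and functor shapes follow the libMesh threading headers of that era; the loop body is a placeholder, and real assembly also needs a lock or per-thread accumulation before writing into a shared matrix, as the library's own assembly code does.]

```cpp
#include "libmesh/threads.h"
#include "libmesh/elem_range.h"
#include "libmesh/mesh_base.h"
#include "libmesh/elem.h"

// Sketch of a Threads::parallel_for element loop: the functor is
// invoked on subranges of the active local elements.
struct AssemblySketch
{
  void operator() (const libMesh::ConstElemRange & range) const
  {
    for (libMesh::ConstElemRange::const_iterator it = range.begin();
         it != range.end(); ++it)
      {
        const libMesh::Elem * elem = *it;
        // per-element work goes here (compute a local matrix/vector,
        // then push it to the global system under a lock)
        (void) elem;
      }
  }
};

void threaded_assemble (const libMesh::MeshBase & mesh)
{
  libMesh::ConstElemRange range (mesh.active_local_elements_begin(),
                                 mesh.active_local_elements_end());
  libMesh::Threads::parallel_for (range, AssemblySketch());
}
```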
From: subramanya s. <po...@ou...> - 2013-07-12 21:44:16

Hi Roy,

To delete and reallocate memory at every step, I'd just need to call clear() on the matrix after the solve, then call init() and attach the dof_map to it, right?

Thanks,
Subramanya
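[One plausible shape for that, sketched for the archive - this call sequence is a guess from the thread, not something verified against the library, and whether init() alone rebuilds the sparsity pattern may depend on the libMesh version:]

```cpp
#include "libmesh/implicit_system.h"
#include "libmesh/sparse_matrix.h"
#include "libmesh/dof_map.h"

// Sketch: free a system's matrix storage after its solve, then
// rebuild it just before the next solve of that system.
void solve_and_release (libMesh::ImplicitSystem & sys)
{
  sys.solve();
  sys.matrix->clear();   // drop the backend (e.g. PETSc) storage
}

void reacquire_matrix (libMesh::ImplicitSystem & sys)
{
  // Re-attach so the DofMap can supply the sparsity pattern,
  // then reallocate the storage.
  sys.get_dof_map().attach_matrix (*sys.matrix);
  sys.matrix->init();
}
```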
From: subramanya s. <po...@ou...> - 2013-07-13 04:20:39

Hi,

Thanks for all the help. I finally figured out where the issue was. Does anybody have any idea for a good preconditioner to use for linear elasticity?

Thanks,
Subramanya

> _______________________________________________
> Libmesh-users mailing list
> Lib...@li...
> https://lists.sourceforge.net/lists/listinfo/libmesh-users
From: David K. <dkn...@se...> - 2013-07-13 04:22:09

There was a recent discussion on the list about using AMG via hypre.

On 07/13/2013 12:20 AM, subramanya sadasiva wrote:
> Does anybody have any idea for a good preconditioner to use for linear elasticity?
From: subramanya s. <sub...@gm...> - 2013-07-13 04:24:58

Hi David,

I have been trying hypre but am having a really hard time with it. I will check the discussion out.

Thanks,
Subramanya
From: David K. <dkn...@se...> - 2013-07-13 04:29:33

What's happening with hypre? It's not converging, or it's taking a lot of memory, or what?

Jens Eftang posted some hypre preconditioning options that I think worked pretty well for him for a linear elasticity problem with ~30 million dofs. But IIRC it required about 120 GB of RAM with hypre, whereas with preconditioned CG it took about 60 GB (but was much slower due to more iterations). Jens can probably give more info if he's watching the mailing list at the moment.

David

On 07/13/2013 12:24 AM, subramanya sadasiva wrote:
> I have been trying hypre but am having a really hard time with it.
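[For the archives: a commonly suggested PETSc starting point for BoomerAMG on a 3D elasticity problem is the fragment below. The option names are standard PETSc/hypre flags; the strong-threshold value is a general 3D rule of thumb, not something tuned for this problem or taken from Jens's settings.]

```
-ksp_type cg \
-pc_type hypre \
-pc_hypre_type boomeramg \
-pc_hypre_boomeramg_strong_threshold 0.7
```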
From: subramanya s. <po...@ou...> - 2013-07-13 04:31:54

Hi David,

I am running out of memory. I can send Jens an email to find out.

Thanks,
Subramanya