From: Boyce Griffith <griffith@ci...> - 2010-04-21 02:57:44

Hi, Folks --

I have a system involving two variables for which I would like to use a preconditioner which involves solvers for the two variables separately. (E.g., block Jacobi or block Gauss-Seidel using one subdomain per variable.) Is there a recommended way to do this other than making a new system for each of the variables and copying data between the systems?

Thanks,

-- Boyce
From: Jed Brown <jed@59...> - 2010-04-22 10:56:40

On Tue, 20 Apr 2010 22:37:17 -0400, Boyce Griffith <griffith@...> wrote:
> Hi, Folks --
>
> I have a system involving two variables for which I would like to use a
> preconditioner which involves solvers for the two variables separately.
> (E.g., block Jacobi or block Gauss-Seidel using one subdomain per
> variable.) Is there a recommended way to do this other than making a
> new system for each of the variables and copying data between the systems?

You can do this with PCFieldSplit, if you define index sets for each variable, then

  -pc_type fieldsplit \
  -pc_fieldsplit_type <additive,multiplicative,symmetric_multiplicative>  # Jacobi, GS, SGS

This gives you a KSP and PC under fieldsplit_0_ and fieldsplit_1_. This support is more complete with 3.1 than 3.0.

You can define a composite matrix to define the action of the full operator if you never assemble the truly coupled thing, then implement MatGetSubMatrix() to extract the blocks the preconditioner will need (this is better if memory is tight; assembling the coupled thing is slightly quicker to get going).

Jed
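[Editorial note: the additive and multiplicative fieldsplit types above correspond to block Jacobi and block Gauss-Seidel. A minimal sketch of the distinction, in plain NumPy rather than PETSc; the 2x2 blocking and matrix entries are invented for illustration:]

```python
import numpy as np

# A two-variable system J = [[A, B], [C, D]], with an exact solve per block.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[0.5, 0.0], [0.0, 0.5]])
C = np.array([[0.2, 0.0], [0.0, 0.2]])
D = np.array([[5.0, 1.0], [1.0, 4.0]])
J = np.block([[A, B], [C, D]])

def apply_additive(r):
    """Block Jacobi: each split solves against its own piece of the residual."""
    x = np.zeros(4)
    x[:2] = np.linalg.solve(A, r[:2])
    x[2:] = np.linalg.solve(D, r[2:])
    return x

def apply_multiplicative(r):
    """Block Gauss-Seidel: the second solve sees the residual updated by the first."""
    x = np.zeros(4)
    x[:2] = np.linalg.solve(A, r[:2])
    x[2:] = np.linalg.solve(D, r[2:] - C @ x[:2])
    return x
```

Either routine would be applied as the preconditioner inside a Krylov iteration; the multiplicative variant typically converges in fewer iterations, but its block solves cannot run concurrently.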
From: Boyce Griffith <griffith@ci...> - 2010-04-22 13:42:46

On 4/22/10 6:56 AM, Jed Brown wrote:
> On Tue, 20 Apr 2010 22:37:17 -0400, Boyce Griffith <griffith@...> wrote:
>> I have a system involving two variables for which I would like to use a
>> preconditioner which involves solvers for the two variables separately.
>> (E.g., block Jacobi or block Gauss-Seidel using one subdomain per
>> variable.) Is there a recommended way to do this other than making a
>> new system for each of the variables and copying data between the systems?
>
> You can do this with PCFieldSplit, if you define index sets for each
> variable, then
>
>   -pc_type fieldsplit \
>   -pc_fieldsplit_type <additive,multiplicative,symmetric_multiplicative>  # Jacobi, GS, SGS
>
> This gives you a KSP and PC under fieldsplit_0_ and fieldsplit_1_.
> This support is more complete with 3.1 than 3.0.
>
> You can define a composite matrix to define the action of the full
> operator if you never assemble the truly coupled thing, then implement
> MatGetSubMatrix() to extract the blocks the preconditioner will need
> (this is better if memory is tight; assembling the coupled thing is
> slightly quicker to get going).

How should one set up the index set? Is it just a mapping from Vec index to field index? E.g., each Vec entry is assigned to field 0 or 1 (or 2 or ...)?

For now, I am just assembling the whole system. I am also using bi/trilinear elements -- I recall you sending some emails to the list about getting better performance with unassembled systems, but that this was mainly for higher-order elements. Am I remembering correctly?

Thanks!

-- Boyce
From: Boyce Griffith <griffith@ci...> - 2010-04-22 14:21:37

On 4/22/10 9:56 AM, Jed Brown wrote:
> On Thu, 22 Apr 2010 09:32:12 -0400, Boyce Griffith <griffith@...> wrote:
>> How should one set up the index set? Is it just a mapping from Vec index
>> to field index? E.g., each Vec entry is assigned to field 0 or 1 (or 2
>> or ...)?
>
> If your variables are interlaced like
>
>   u0,v0,w0,u1,v1,w1,...

In the present context, the order of variables in memory is whatever is determined by libMesh -- and I haven't yet looked to see what that is. It's my impression that variables are not interlaced like this, but I could be mistaken.

> Then you can (a) set a block size for the matrix with MatSetBlockSize,
> (b) use BAIJ in which case the block size is built in, or (c) use
> -pc_fieldsplit_block_size. Then to define the splits, use
>
>   -pc_fieldsplit_0_fields 0,3 -pc_fieldsplit_1_fields 1 -pc_fieldsplit_2_fields 2,4,5
>
> to split six fields into three splits of sizes 2,1,3.

In this case, is field N the set of global Vec indices for which idx%6 == N?

> For something more general, use ISCreate() and PCFieldSplitSetIS(). The
> index sets should be created on the same communicator as the matrix and
> contain the global indices corresponding to the desired field. So if we
> had two processors with two nodes each (24 dofs total), the splits above
> would be given by the following index sets (where '|' separates processors):
>
>   split0: 0 3 6 9 | 12 15 18 21
>   split1: 1 7 | 13 19
>   split2: 2 4 5 8 10 11 | 14 16 17 20 22 23
>
> I hope this is clear.

Sorry for being dense, but in this case, if I were to create the IS manually using ISCreateGeneral, on process 0, I think that I would use:

  idx[0] = 0
  idx[1] = 1
  idx[2] = 2
  idx[3] = 0
  ...

and similarly on process 1:

  ...
  idx[4] = idx[16-12] = 2
  idx[5] = idx[17-12] = 2
  idx[6] = idx[18-12] = 0
  idx[7] = idx[19-12] = 1
  ...

Is this the basic idea?

Thanks,

-- Boyce
From: Jed Brown <jed@59...> - 2010-04-22 14:36:59

On Thu, 22 Apr 2010 10:21:26 -0400, Boyce Griffith <griffith@...> wrote:
> On 4/22/10 9:56 AM, Jed Brown wrote:
>> On Thu, 22 Apr 2010 09:32:12 -0400, Boyce Griffith <griffith@...> wrote:
>>> How should one set up the index set? Is it just a mapping from Vec index
>>> to field index? E.g., each Vec entry is assigned to field 0 or 1 (or 2
>>> or ...)?
>>
>> If your variables are interlaced like
>>
>>   u0,v0,w0,u1,v1,w1,...
>
> In the present context, the order of variables in memory is whatever is
> determined by libMesh -- and I haven't yet looked to see what that is.
> It's my impression that variables are not interlaced like this, but I
> could be mistaken.

Right, but Roy said this:

  Block variable ordering with "node_major_dofs" is perhaps the least
  well-documented option in libMesh, which is saying a lot, but we're
  already using it.

>> Then you can (a) set a block size for the matrix with MatSetBlockSize,
>> (b) use BAIJ in which case the block size is built in, or (c) use
>> -pc_fieldsplit_block_size. Then to define the splits, use
>>
>>   -pc_fieldsplit_0_fields 0,3 -pc_fieldsplit_1_fields 1 -pc_fieldsplit_2_fields 2,4,5
>>
>> to split six fields into three splits of sizes 2,1,3.
>
> In this case, is field N the set of global Vec indices for which idx%6 == N?

Yes

>> split0: 0 3 6 9 | 12 15 18 21
>> split1: 1 7 | 13 19
>> split2: 2 4 5 8 10 11 | 14 16 17 20 22 23
>>
>> I hope this is clear.
>
> Sorry for being dense, but in this case, if I were to create the IS
> manually using ISCreateGeneral, on process 0, I think that I would use:
>
>   idx[0] = 0
>   idx[1] = 1
>   idx[2] = 2
>   idx[3] = 0
>   ...
>
> and similarly on process 1:
>
>   ...
>   idx[4] = idx[16-12] = 2
>   idx[5] = idx[17-12] = 2
>   idx[6] = idx[18-12] = 0
>   idx[7] = idx[19-12] = 1
>   ...

To create the index set for split0:

  /* This assumes the communicator has size 2 */
  PetscInt idx[2][4] = {{0,3,6,9},{12,15,18,21}};
  int      rank;
  IS       is0;

  MPI_Comm_rank(comm,&rank);
  ISCreateGeneral(comm,4,idx[rank],&is0);

This is a parallel index set containing the global indices of the dofs in the desired split.

Jed
From: Boyce Griffith <griffith@ci...> - 2010-04-22 14:43:11

On 4/22/10 10:36 AM, Jed Brown wrote:
> On Thu, 22 Apr 2010 10:21:26 -0400, Boyce Griffith <griffith@...> wrote:
>> On 4/22/10 9:56 AM, Jed Brown wrote:
>>> On Thu, 22 Apr 2010 09:32:12 -0400, Boyce Griffith <griffith@...> wrote:
>>>> How should one set up the index set? Is it just a mapping from Vec index
>>>> to field index? E.g., each Vec entry is assigned to field 0 or 1 (or 2
>>>> or ...)?
>>>
>>> If your variables are interlaced like
>>>
>>>   u0,v0,w0,u1,v1,w1,...
>>
>> In the present context, the order of variables in memory is whatever is
>> determined by libMesh -- and I haven't yet looked to see what that is.
>> It's my impression that variables are not interlaced like this, but I
>> could be mistaken.
>
> Right, but Roy said this:
>
>   Block variable ordering with "node_major_dofs" is perhaps the least
>   well-documented option in libMesh, which is saying a lot, but we're
>   already using it.

Agh, OK. I should look into this!

>>> split0: 0 3 6 9 | 12 15 18 21
>>> split1: 1 7 | 13 19
>>> split2: 2 4 5 8 10 11 | 14 16 17 20 22 23
>>>
>>> I hope this is clear.
>>
>> Sorry for being dense, but in this case, if I were to create the IS
>> manually using ISCreateGeneral, on process 0, I think that I would use:
>>
>>   idx[0] = 0
>>   idx[1] = 1
>>   idx[2] = 2
>>   idx[3] = 0
>>   ...
>>
>> and similarly on process 1:
>>
>>   ...
>>   idx[4] = idx[16-12] = 2
>>   idx[5] = idx[17-12] = 2
>>   idx[6] = idx[18-12] = 0
>>   idx[7] = idx[19-12] = 1
>>   ...
>
> To create the index set for split0:
>
>   /* This assumes the communicator has size 2 */
>   PetscInt idx[2][4] = {{0,3,6,9},{12,15,18,21}};
>   int      rank;
>   IS       is0;
>
>   MPI_Comm_rank(comm,&rank);
>   ISCreateGeneral(comm,4,idx[rank],&is0);
>
> This is a parallel index set containing the global indices of the dofs
> in the desired split.

OK, then I am apparently confused about how PCFieldSplitSetIS is intended to work. Does one call PCFieldSplitSetIS once for each field? I was imagining that you call this function only once...

-- Boyce
From: Jed Brown <jed@59...> - 2010-04-22 14:47:57

On Thu, 22 Apr 2010 10:42:52 -0400, Boyce Griffith <griffith@...> wrote:
> OK, then I am apparently confused about how PCFieldSplitSetIS is
> intended to work. Does one call PCFieldSplitSetIS once for each field?

Call it once per split (it creates a new split each time you call it).

Jed
From: Boyce Griffith <griffith@ci...> - 2010-04-22 14:56:13

On 4/22/10 10:46 AM, Jed Brown wrote:
> On Thu, 22 Apr 2010 10:42:52 -0400, Boyce Griffith <griffith@...> wrote:
>> OK, then I am apparently confused about how PCFieldSplitSetIS is
>> intended to work. Does one call PCFieldSplitSetIS once for each field?
>
> Call it once per split (it creates a new split each time you call it).

OK, got it. This was the source of my confusion -- I think I was tripped up by the name of the function. I had incorrectly assumed that PCFieldSplitSetIS would "reset" the IS associated with the PC rather than "add" an additional IS. I should have looked at the source code!

Do you think it would be worth adding a note to this effect to the documentation for PCFieldSplitSetIS?

Thanks for your patience with me on this!

-- Boyce
From: Liang <goeasyon@gm...> - 2010-04-23 15:08:12

Hi developers and users,

Is it possible to dump the flux value at each node? Taking the thermal problem in ex10, the independent variable u (temperature) is available in the postprocessing data, but how does one obtain a dependent variable, such as the flux (gradient of the temperature)?

I have been thinking of two methods, without being sure either solves this.

One method is deriving the mixed weak form in which the flux is a second variable; then the flux could be dumped like the independent variables. The cost is having to derive the mixed form, which may lead to instability if the constraints are ill-set.

Another method is adding flux variables directly with system.add_variable("gradT_x", FIRST), then plugging additional flux-computing code into the element part. The latter is my preference, but I don't know if it is the right way. So would you experts please give me some advice?

Thanks!

Liang
From: Roy Stogner <roystgnr@ic...> - 2010-04-23 15:23:49

On Fri, 23 Apr 2010, Liang wrote:

> Is it possible to dump the flux value at each node? Taking the thermal
> problem in ex10, the independent variable u (temperature) is available in
> the postprocessing data, but how does one obtain a dependent variable,
> such as the flux (gradient of the temperature)?
>
> I have been thinking of two methods, without being sure either solves this.
>
> One method is deriving the mixed weak form in which the flux is a second
> variable; then the flux could be dumped like the independent variables.
> The cost is having to derive the mixed form, which may lead to
> instability if the constraints are ill-set.

Yes, this would work, but it's overkill if you're just looking for the output and don't have other reasons for solving with a mixed form.

> Another method is adding flux variables directly with
> system.add_variable("gradT_x", FIRST), then plugging additional
> flux-computing code into the element part.

This is almost right. Instead of adding an additional variable to your system (which would then have to be taken into account in your implicit solves), create a separate ExplicitSystem with a variable there to store the data. Then write a postprocessing loop that computes the flux of your main system and sets it into those variables in the secondary system. If you're using FIRST-order variables and you want the same approximation for your fluxes, look at Zienkiewicz-Zhu.

-- Roy
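[Editorial note: the postprocessing loop Roy describes can be sketched independently of libMesh. The 1-D mesh and the simple adjacent-element averaging rule below are illustrative only; true Zienkiewicz-Zhu recovery uses a local least-squares fit at superconvergent points rather than a plain average:]

```python
# Piecewise-linear T on a 1-D mesh; recover a nodal gradT by averaging
# the (constant) element gradients of the elements touching each node.
nodes = [0.0, 1.0, 2.0, 3.0]
T = [0.0, 1.0, 4.0, 9.0]          # nodal temperatures (T = x^2 at the nodes)

# Element gradients: one constant value per element [i, i+1].
elem_grad = [(T[i + 1] - T[i]) / (nodes[i + 1] - nodes[i])
             for i in range(len(nodes) - 1)]

# Nodal recovery: average the gradients of the adjacent elements.
grad_T = []
for i in range(len(nodes)):
    adjacent = [elem_grad[j] for j in (i - 1, i) if 0 <= j < len(elem_grad)]
    grad_T.append(sum(adjacent) / len(adjacent))
```

At the interior nodes this average reproduces dT/dx = 2x exactly for the quadratic data above, a 1-D glimpse of the superconvergence that recovery schemes exploit; the recovered values would then be written into the separate ExplicitSystem's solution vector.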
From: Liang <goeasyon@gm...> - 2010-04-23 15:41:31

> This is almost right. Instead of adding an additional variable to
> your system (which would then have to be taken into account in your
> implicit solves), create a separate ExplicitSystem with a variable
> there to store the data. Then write a postprocessing loop that
> computes the flux of your main system and sets it into those variables
> in the secondary system. If you're using FIRST-order variables and
> you want the same approximation for your fluxes, look at
> Zienkiewicz-Zhu.
>
> -- Roy

Makes sense! Put the variable in another system, then loop over it. Let me work on it this way right now. I appreciate your clear and fast answer.

Liang
From: Liang <goeasyon@gm...> - 2010-04-23 19:15:45

> This is almost right. Instead of adding an additional variable to
> your system (which would then have to be taken into account in your
> implicit solves), create a separate ExplicitSystem with a variable
> there to store the data. Then write a postprocessing loop that
> computes the flux of your main system and sets it into those variables
> in the secondary system. If you're using FIRST-order variables and
> you want the same approximation for your fluxes, look at
> Zienkiewicz-Zhu.
>
> -- Roy

Can I use the add_vector function instead of add_variable to make the code neat, as shown below:

  // by adding an "ExplicitSystem" to the EquationSystems object
  equation_systems.add_system<ExplicitSystem> ("Flux_System");
  equation_systems.get_system("Flux_System").add_vector ("gradT", FIRST);

instead of

  equation_systems.add_system<ExplicitSystem> ("Flux_System");
  equation_systems.get_system("Flux_System").add_variable ("gradT_x", FIRST);
  equation_systems.get_system("Flux_System").add_variable ("gradT_y", FIRST);
  equation_systems.get_system("Flux_System").add_variable ("gradT_z", FIRST);

If so, I found that this "add_vector" function is also called in the element assembly part, which usually goes like "force.add_vector (Fe, dof_indices)" at the end of the element code. I don't know whether the two "add_vector" functions are the same?

Thanks!

Liang
From: Roy Stogner <roystgnr@ic...> - 2010-04-26 21:05:20

On Fri, 23 Apr 2010, Liang wrote:

>> This is almost right. Instead of adding an additional variable to
>> your system (which would then have to be taken into account in your
>> implicit solves), create a separate ExplicitSystem with a variable
>> there to store the data. Then write a postprocessing loop that
>> computes the flux of your main system and sets it into those variables
>> in the secondary system. If you're using FIRST-order variables and
>> you want the same approximation for your fluxes, look at
>> Zienkiewicz-Zhu.
>
> Can I use the add_vector function instead of add_variable to make the
> code neat, as shown below:
>
>   // by adding an "ExplicitSystem" to the EquationSystems object
>   equation_systems.add_system<ExplicitSystem> ("Flux_System");
>   equation_systems.get_system("Flux_System").add_vector ("gradT", FIRST);
>
> instead of
>
>   equation_systems.add_system<ExplicitSystem> ("Flux_System");
>   equation_systems.get_system("Flux_System").add_variable ("gradT_x", FIRST);
>   equation_systems.get_system("Flux_System").add_variable ("gradT_y", FIRST);
>   equation_systems.get_system("Flux_System").add_variable ("gradT_z", FIRST);
>
> If so, I found that this "add_vector" function is also called in the
> element assembly part, which usually goes like "force.add_vector (Fe,
> dof_indices)" at the end of the element code. I don't know whether the
> two "add_vector" functions are the same?

There are two different add_vector functions here. The one being called like force.add_vector() just means that the element's DenseVector called Fe is being summed into the global NumericVector called force. You don't want that.

The System::add_vector() function stores another NumericVector in the System. You might or might not want this. It's less flexible but it might be easier to use. Here's the limitation: when you do a System::add_vector(), the new vector has the exact same structure as your solution vector. So if your solution has one component T and you need to store gradT_x, gradT_y, and gradT_z, you'd need to add three vectors for that. If your solution has a second component u and you don't need to store gradu_x, gradu_y, and gradu_z, too bad, because the vectors created by add_vector will all have space for the u variable anyway.

-- Roy
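[Editorial note: Roy's limitation can be pictured with a toy dof layout in plain Python; nothing here is libMesh API, and the node-major numbering rule is the interlaced one discussed elsewhere in the thread:]

```python
# A system with variables (T, u) on 4 nodes, node-major numbering:
# dof(node n, var v) = n * n_vars + v.
n_nodes, variables = 4, ["T", "u"]
n_vars = len(variables)
n_dofs = n_nodes * n_vars          # solution vector length: 8

# A vector cloned from the solution layout (as System::add_vector does)
# carries slots for *every* variable, even if only gradT_x is stored.
grad_T_x = [0.0] * n_dofs

# Only the T slots (variable index 0) are ever written:
for node in range(n_nodes):
    grad_T_x[node * n_vars + 0] = 1.5   # illustrative recovered value

used = n_nodes            # 4 slots hold data
wasted = n_dofs - used    # 4 slots are dead weight
```

A separate ExplicitSystem holding only the gradient variables avoids the dead slots, which is one reason for Roy's earlier recommendation.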
From: Liang <goeasyon@gm...> - 2010-04-26 22:19:32

> There are two different add_vector functions here. The one being called
> like force.add_vector() just means that the element's DenseVector
> called Fe is being summed into the global NumericVector called force.
> You don't want that.
>
> The System::add_vector() function stores another NumericVector in the
> System. You might or might not want this. It's less flexible but it
> might be easier to use. Here's the limitation: when you do a
> System::add_vector(), the new vector has the exact same structure as
> your solution vector. So if your solution has one component T and you
> need to store gradT_x, gradT_y, and gradT_z, you'd need to add three
> vectors for that. If your solution has a second component u and you
> don't need to store gradu_x, gradu_y, and gradu_z, too bad, because
> the vectors created by add_vector will all have space for the u
> variable anyway.
>
> -- Roy

Thanks! I thought System::add_vector() stored arbitrary vectors; I appreciate your explanation. So System::add_variable() will be a good choice for defining the flux components such as gradT_x and gradT_y, if I just want to dump the flux solution, right?

Liang
From: Lorenzo Botti <bottilorenzo@gm...> - 2010-04-28 10:06:54

Sorry to bother you, just to understand if I got it.

> If your variables are interlaced like
>
>   u0,v0,w0,u1,v1,w1,...
>
> Then you can (a) set a block size for the matrix with MatSetBlockSize,
> (b) use BAIJ in which case the block size is built in, or (c) use
> -pc_fieldsplit_block_size. Then to define the splits, use
>
>   -pc_fieldsplit_0_fields 0,3 -pc_fieldsplit_1_fields 1 -pc_fieldsplit_2_fields 2,4,5
>
> to split six fields into three splits of sizes 2,1,3.

If my variables are u v w p (3D Stokes equations) and I have an equal-order approximation, can I just do

  -pc_fieldsplit_0_fields 0,1,2
  -pc_fieldsplit_1_fields 3
  -pc_fieldsplit_type schur
  -pc_fieldsplit_block_size <n_dofs_per_variable>

Thank you very much

Lorenzo
From: Jed Brown <jed@59...> - 2010-04-28 14:52:19

On Wed, 28 Apr 2010 12:06:47 +0200, Lorenzo Botti <bottilorenzo@...> wrote:
> If my variables are u v w p (3D Stokes equations) and I have an
> equal-order approximation, can I just do
>
>   -pc_fieldsplit_0_fields 0,1,2
>   -pc_fieldsplit_1_fields 3
>   -pc_fieldsplit_type schur
>   -pc_fieldsplit_block_size <n_dofs_per_variable>

Yes, and you can control the solver for the velocity block under fieldsplit_0_{ksp,pc} and for the Schur complement under fieldsplit_1_{ksp,pc}. Note that with some discretizations, like stabilized Q1-Q1, you may get better results using an unsplit method (multigrid on the coupled system). This doesn't work with mixed methods unless you define custom restrictions and smoothing operators.

Jed
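[Editorial note: what a Schur-complement fieldsplit does algebraically can be sketched on a tiny dense saddle-point system. Plain NumPy with exact inner solves; the matrices are invented for illustration, and in practice the Schur complement is only applied, never formed:]

```python
import numpy as np

# Stokes-like saddle point: [[A, B], [B.T, 0]] [u; p] = [f; g]
A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0], [2.0]])
f = np.array([1.0, 0.0])
g = np.array([3.0])

# Schur complement in the pressure block: S = -B^T A^{-1} B.
S = -B.T @ np.linalg.solve(A, B)

# Block elimination: solve for p first, then back-substitute for u.
p = np.linalg.solve(S, g - B.T @ np.linalg.solve(A, f))
u = np.linalg.solve(A, f - B @ p)
```

In the fieldsplit setting, the A-solves correspond to the fieldsplit_0_ inner KSP and the S-solve to fieldsplit_1_; because S is never assembled, it needs its own preconditioner, which is the difficulty Lorenzo runs into later in the thread.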
From: Roy Stogner <roystgnr@ic...> - 2010-04-28 17:06:37

On Mon, 26 Apr 2010, Liang wrote:

> Thanks! I thought System::add_vector() stored arbitrary vectors;
> I appreciate your explanation. So System::add_variable() will be a good
> choice for defining the flux components such as gradT_x and gradT_y,
> if I just want to dump the flux solution, right?

Yes, if you use a separate ExplicitSystem. You don't want to add variables to your main System unless you're planning to solve for them.

-- Roy
From: Liang <goeasyon@gm...> - 2010-04-28 18:21:58

> Yes, if you use a separate ExplicitSystem. You don't want to add
> variables to your main System unless you're planning to solve for
> them.
>
> -- Roy

Thanks Roy!

Liang
From: Lorenzo Botti <bottilorenzo@gm...> - 2010-04-29 11:02:59

Thank you for the hint. I also had to add -pc_type fieldsplit, just in case someone else wants to give it a try...

Is there a way to monitor the convergence of the solvers?

Another question: is bs, the block size, always equal to the total number of fields? Sorry, but I'm a beginner with PETSc.

Lorenzo

2010/4/28 Jed Brown <jed@...>

> On Wed, 28 Apr 2010 12:06:47 +0200, Lorenzo Botti <bottilorenzo@...> wrote:
>> If my variables are u v w p (3D Stokes equations) and I have an
>> equal-order approximation, can I just do
>>
>>   -pc_fieldsplit_0_fields 0,1,2
>>   -pc_fieldsplit_1_fields 3
>>   -pc_fieldsplit_type schur
>>   -pc_fieldsplit_block_size <n_dofs_per_variable>
>
> Yes, and you can control the solver for the velocity block under
> fieldsplit_0_{ksp,pc} and for the Schur complement under
> fieldsplit_1_{ksp,pc}. Note that with some discretizations, like
> stabilized Q1-Q1, you may get better results using an unsplit method
> (multigrid on the coupled system). This doesn't work with mixed methods
> unless you define custom restrictions and smoothing operators.
>
> Jed
From: Jed Brown <jed@59...> - 2010-04-29 11:36:06

On Thu, 29 Apr 2010 13:02:49 +0200, Lorenzo Botti <bottilorenzo@...> wrote:
> Thank you for the hint. I also had to add -pc_type fieldsplit, just in
> case someone else wants to give it a try...
>
> Is there a way to monitor the convergence of the solvers?

The most common monitors are

  -ksp_monitor, -ksp_monitor_true_residual, -ksp_converged_reason

These can be set on inner solves with, e.g.

  -fieldsplit_0_ksp_converged_reason -fieldsplit_1_ksp_monitor_true_residual

> Another question: is bs, the block size, always equal to the total
> number of fields?

Yeah, specifying the fields with options like -pc_fieldsplit_0_fields 0,1,2 requires a collocated discretization where the fields at each node are interlaced (this gives better performance anyway). It won't work for mixed discretizations, or where the fields are not interlaced. In that case, you should set them with PCFieldSplitSetIS().

Jed
From: Lorenzo Botti <bottilorenzo@gm...> - 2010-04-29 12:20:17

> Yeah, specifying the fields with options like -pc_fieldsplit_0_fields
> 0,1,2 requires a collocated discretization where the fields at each node
> are interlaced (this gives better performance anyway). It won't work
> for mixed discretizations, or where the fields are not interlaced. In
> that case, you should set them with PCFieldSplitSetIS().

I'm not sure I understand this point. What do you mean by interlaced? Isn't the off-diagonal block realizing the inter-field coupling?

I'm looking at the source code, and PCSetUp_FieldSplit() is creating the IS according to the field splits. What kind of situation can I manage with PCFieldSplitSetIS()?

Thanks

Lorenzo
From: Jed Brown <jed@59...> - 2010-04-29 12:33:22

On Thu, 29 Apr 2010 14:18:23 +0200, Lorenzo Botti <bottilorenzo@...> wrote:
>> Yeah, specifying the fields with options like -pc_fieldsplit_0_fields
>> 0,1,2 requires a collocated discretization where the fields at each node
>> are interlaced (this gives better performance anyway). It won't work
>> for mixed discretizations, or where the fields are not interlaced. In
>> that case, you should set them with PCFieldSplitSetIS().
>
> I'm not sure I understand this point. What do you mean by interlaced?
> Isn't the off-diagonal block realizing the inter-field coupling?

Interlaced is sometimes called nodal blocking. The idea is that you have [u0,v0,p0,u1,...] instead of [u0,u1,...,v0,v1,...,p0,p1,...]. The former is only possible for non-mixed discretizations.

> I'm looking at the source code, and PCSetUp_FieldSplit() is creating
> the IS according to the field splits. What kind of situation can I
> manage with PCFieldSplitSetIS()?

Hopefully you are looking at PETSc 3.1 (or -dev). The implementation handles both the case where the split is defined in terms of the nodal blocks, and the case where the index sets were provided explicitly with PCFieldSplitSetIS(), which allows you to put any set of variables in any number of splits. Just create an IS for each field if your variables have a different numbering or if you want to order/group them differently.

The special treatment for nodal blocking is only provided for the cases where it is natural to the problem, in which case it gives you a convenient way to play around at runtime; otherwise, specifying the fields requires a problem-sized amount of information and thus cannot be specified at runtime.

Jed
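[Editorial note: the two orderings Jed contrasts differ only by a permutation; a quick sketch in plain Python of mapping a field-major numbering to the interlaced (nodally blocked) one, with labels invented for illustration:]

```python
# 2 nodes, 3 fields (u, v, p).
n_nodes, n_fields = 2, 3

# Field-major layout: [u0, u1, v0, v1, p0, p1]
field_major = ["u0", "u1", "v0", "v1", "p0", "p1"]

# Interlaced (nodal blocking): all fields of node 0, then node 1, ...
interlaced = [field_major[f * n_nodes + n]
              for n in range(n_nodes)
              for f in range(n_fields)]
```

The -pc_fieldsplit_block_size family of options assumes the interlaced form; for field-major layouts, as in mixed or dG discretizations, index sets built by hand and passed to PCFieldSplitSetIS() are the general route.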
From: Lorenzo Botti <bottilorenzo@gm...> - 2010-04-29 13:47:03

Ok, I'm starting to understand. So the command-line options won't cover the cases where you have more than one dof per node or (as in my dG code) per element. Even if I use --node_major_dofs I'm getting something like

  u0,u1,u2,u3,v0,v1,v2,v3,w0,w1,w2,w3,p0,p1,p2,p3

in each element for a first-order approximation in 3D.

I'm trying something like

  --node_major_dofs -pc_type fieldsplit -pc_fieldsplit_block_size 16
  -pc_fieldsplit_0_fields 0,1,2,3,4,5,6,7,8,9,10,11 -pc_fieldsplit_1_fields 12,13,14,15
  -pc_fieldsplit_type schur -pc_fieldsplit_schur_precondition self
  -fieldsplit_1_ksp_monitor

Does it make sense? I still need better preconditioning than nothing for my Schur complement!

Lorenzo

2010/4/29 Jed Brown <jed@...>

> On Thu, 29 Apr 2010 14:18:23 +0200, Lorenzo Botti <bottilorenzo@...> wrote:
>>> Yeah, specifying the fields with options like -pc_fieldsplit_0_fields
>>> 0,1,2 requires a collocated discretization where the fields at each node
>>> are interlaced (this gives better performance anyway). It won't work
>>> for mixed discretizations, or where the fields are not interlaced. In
>>> that case, you should set them with PCFieldSplitSetIS().
>>
>> I'm not sure I understand this point. What do you mean by interlaced?
>> Isn't the off-diagonal block realizing the inter-field coupling?
>
> Interlaced is sometimes called nodal blocking. The idea is that you
> have [u0,v0,p0,u1,...] instead of [u0,u1,...,v0,v1,...,p0,p1,...]. The
> former is only possible for non-mixed discretizations.
>
>> I'm looking at the source code, and PCSetUp_FieldSplit() is creating
>> the IS according to the field splits. What kind of situation can I
>> manage with PCFieldSplitSetIS()?
>
> Hopefully you are looking at PETSc 3.1 (or -dev). The implementation
> handles both the case where the split is defined in terms of the nodal
> blocks, and the case where the index sets were provided explicitly with
> PCFieldSplitSetIS(), which allows you to put any set of variables in any
> number of splits. Just create an IS for each field if your variables
> have a different numbering or if you want to order/group them
> differently.
>
> The special treatment for nodal blocking is only provided for the cases
> where it is natural to the problem, in which case it gives you a
> convenient way to play around at runtime; otherwise, specifying the
> fields requires a problem-sized amount of information and thus cannot be
> specified at runtime.
>
> Jed
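[Editorial note: Lorenzo's per-element blocking can be checked with a few lines of arithmetic. Plain Python, two elements shown; the dof counts follow his message, and `split_indices` is a hypothetical helper, not a PETSc function:]

```python
# One dG element: 16 dofs ordered u0..u3, v0..v3, w0..w3, p0..p3,
# i.e. block size 16 with fields 0..11 = velocity, 12..15 = pressure.
bs = 16
velocity_fields = list(range(12))
pressure_fields = list(range(12, 16))

def split_indices(fields, bs, n_blocks):
    """Global indices of the given fields over n_blocks blocks of size bs."""
    return [b * bs + f for b in range(n_blocks) for f in fields]

split0 = split_indices(velocity_fields, bs, n_blocks=2)   # velocity dofs
split1 = split_indices(pressure_fields, bs, n_blocks=2)   # pressure dofs
```

For two elements this puts dofs 12-15 and 28-31 in the pressure split, which is what the -pc_fieldsplit_1_fields 12,13,14,15 option with block size 16 expresses.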
From: Jed Brown <jed@59...> - 2010-04-22 13:58:55

On Thu, 22 Apr 2010 09:32:12 -0400, Boyce Griffith <griffith@...> wrote:
> How should one set up the index set? Is it just a mapping from Vec index
> to field index? E.g., each Vec entry is assigned to field 0 or 1 (or 2
> or ...)?

If your variables are interlaced like

  u0,v0,w0,u1,v1,w1,...

then you can (a) set a block size for the matrix with MatSetBlockSize, (b) use BAIJ in which case the block size is built in, or (c) use -pc_fieldsplit_block_size. Then to define the splits, use

  -pc_fieldsplit_0_fields 0,3 -pc_fieldsplit_1_fields 1 -pc_fieldsplit_2_fields 2,4,5

to split six fields into three splits of sizes 2,1,3.

For something more general, use ISCreate() and PCFieldSplitSetIS(). The index sets should be created on the same communicator as the matrix and contain the global indices corresponding to the desired field. So if we had two processors with two nodes each (24 dofs total), the splits above would be given by the following index sets (where '|' separates processors):

  split0: 0 3 6 9 | 12 15 18 21
  split1: 1 7 | 13 19
  split2: 2 4 5 8 10 11 | 14 16 17 20 22 23

I hope this is clear.

> For now, I am just assembling the whole system. I am also using
> bi/trilinear elements -- I recall you sending some emails to the list
> about getting better performance with unassembled systems, but that this
> was mainly for higher-order elements. Am I remembering correctly?

Yeah, unassembled doesn't pay off for Q_1 elements. But if you have a system J = [A B; C D] and use a preconditioner that needs these blocks separately, then *purely to reduce total memory usage*, you might want to define the action of J in terms of these separate blocks (rather than assembling J and extracting the blocks). It's easy enough to define this with MatShell; I think generic support is coming in the next few months (so that this choice can become a runtime option).

Jed
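[Editorial note: Jed's three splits can be reproduced mechanically. A sketch in plain Python of turning the block size plus field lists into the global index sets; `split_is` is a hypothetical helper, not part of PETSc:]

```python
# 6 fields per node, 4 nodes (2 processors x 2 nodes = 24 dofs),
# interlaced numbering: dof(node n, field f) = n * bs + f.
bs, n_nodes = 6, 4
splits = {0: [0, 3], 1: [1], 2: [2, 4, 5]}   # the -pc_fieldsplit_<k>_fields lists

def split_is(fields, bs, n_nodes):
    """Global dof indices belonging to the given fields."""
    return [n * bs + f for n in range(n_nodes) for f in fields]

index_sets = {k: split_is(f, bs, n_nodes) for k, f in splits.items()}
```

Each processor would pass its own half of each list to ISCreateGeneral(), as in the C snippet earlier in the thread: process 0 owns the first two nodes' entries, process 1 the rest.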
From: Jed Brown <jed@59...> - 2010-04-29 13:59:04

On Thu, 29 Apr 2010 15:46:54 +0200, Lorenzo Botti <bottilorenzo@...> wrote:
> Ok, I'm starting to understand. So the command-line options won't cover
> the cases where you have more than one dof per node or (as in my dG
> code) per element.

It *only* handles the case where you have one kind of node, and *all* dofs are at that node.

> Even if I use --node_major_dofs I'm getting something like
> u0,u1,u2,u3,v0,v1,v2,v3,w0,w1,w2,w3,p0,p1,p2,p3 in each element for a
> first-order approximation in 3D.

Roy, what's going on here? I thought you said that this option would do nodal blocking (aka field interlacing).

> I'm trying something like
>
>   --node_major_dofs -pc_type fieldsplit -pc_fieldsplit_block_size 16
>   -pc_fieldsplit_0_fields 0,1,2,3,4,5,6,7,8,9,10,11 -pc_fieldsplit_1_fields 12,13,14,15
>   -pc_fieldsplit_type schur -pc_fieldsplit_schur_precondition self
>   -fieldsplit_1_ksp_monitor
>
> Does it make sense?

Yes.

> I still need better preconditioning than nothing for my Schur complement!

Yeah, these usually come from semi-implicit solvers or approximate commutators (which are very closely related).

Jed