From: Buffat M. <bu...@uf...> - 2003-12-01 09:26:03
Dear Ben,

thank you very much for your fast help!  I am evaluating libmesh for a CFD
research project using a Finite Volume formulation.  I have found some
problems with the library which I have solved, and which may be of interest
to other users:

- Writing element data in the solution file (in GMV format): it is not too
  difficult, even in parallel.

- A bug in the distributed vector init function.  Here is the patch for the
  file distributed_vector.h:

440a441
> // BUG MPI ?
441a443
> std::vector<int> local_sizes_send(n_proc, 0);
443c445
< local_sizes[proc_id] = n_local;
---
> local_sizes_send[proc_id] = n_local;
445c447
< MPI_Allreduce (&local_sizes[0], &local_sizes[0], local_sizes.size(),
---
> MPI_Allreduce (&local_sizes_send[0], &local_sizes[0], local_sizes.size(),
462c464,465
< assert (sum == static_cast<int>(size()));
---
> // BUG the initialized flag is not yet set
> //assert (sum == static_cast<int>(size()));

  MPI_Allreduce needs different buffers for the send and the receive, and we
  cannot call the size() function here because the vector is not yet
  initialized!

- I also added public access to the PETSc vector in the PetscVector class, i.e.

    // access to the PETSc vector
    Vec & PetscVec() { return vec; }

  and to the PETSc matrix in the PetscMatrix class, i.e.

    // access to the PETSc matrix
    Mat & PetscMat() { return mat; }

  in order to be able to use the whole PETSc library, especially the print
  functions.  Indeed, in parallel the print function of the numeric_vector
  class works only on processor 0 and gives erroneous values on the other
  processors.

Thank you very much again,
Marc
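
A minimal standalone sketch of the buffer-aliasing issue the patch works
around, assuming only an MPI installation; the names (n_local, local_sizes,
local_sizes_send) mirror those in the patch, but the program itself is
illustrative, not libmesh code:

// Each process contributes its local size and the element-wise sum gives
// every process the local size of every other process.  Passing the same
// pointer as both send and receive buffer to MPI_Allreduce is erroneous,
// so a separate send buffer is used, as in the patch above.
#include <mpi.h>
#include <vector>
#include <cstdio>

int main (int argc, char** argv)
{
  MPI_Init (&argc, &argv);

  int n_proc = 0, proc_id = 0;
  MPI_Comm_size (MPI_COMM_WORLD, &n_proc);
  MPI_Comm_rank (MPI_COMM_WORLD, &proc_id);

  const int n_local = 100 + proc_id;           // hypothetical local size

  // Separate send buffer: only our own slot is non-zero.
  std::vector<int> local_sizes_send (n_proc, 0);
  std::vector<int> local_sizes      (n_proc, 0);
  local_sizes_send[proc_id] = n_local;

  // Send and receive buffers must be distinct.
  MPI_Allreduce (&local_sizes_send[0], &local_sizes[0], n_proc,
                 MPI_INT, MPI_SUM, MPI_COMM_WORLD);

  int sum = 0;
  for (int p = 0; p < n_proc; ++p)
    sum += local_sizes[p];

  if (proc_id == 0)
    std::printf ("global size = %d\n", sum);

  MPI_Finalize ();
  return 0;
}

With MPI-2 implementations, MPI_IN_PLACE can also be passed as the send
buffer instead of a separate array, but a distinct send buffer works with
any MPI version.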
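
And a sketch of how the proposed accessors could be used together with
PETSc's own viewers to get correct parallel output, assuming a PETSc
installation; the function and its solution/jacobian arguments are
hypothetical, and PetscVec()/PetscMat() are the additions suggested above,
not part of the stock library:

// Prints a distributed vector and matrix with PETSc's collective viewers,
// which gather output from every processor (unlike a rank-0-only print).
// VecT/MatT stand for the PetscVector/PetscMatrix classes with the proposed
// PetscVec()/PetscMat() accessors added.
#include "petscvec.h"
#include "petscmat.h"

template <class VecT, class MatT>
void print_petsc_objects (VecT& solution, MatT& jacobian)
{
  // PETSC_VIEWER_STDOUT_WORLD is a collective viewer on MPI_COMM_WORLD.
  VecView (solution.PetscVec(), PETSC_VIEWER_STDOUT_WORLD);
  MatView (jacobian.PetscMat(), PETSC_VIEWER_STDOUT_WORLD);
}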