
Elk and Superheavy elements

Elk Users
2020-06-29 – 2020-08-17
  • Miroslav Iliaš

    Miroslav Iliaš - 2020-06-29

    Hello,

    How about Elk and superheavy elements? Does anybody have species files up to Z=118?

    Miro

     
    • Miroslav Iliaš

      Miroslav Iliaš - 2020-08-10

      Hello,

      I have a system with more than 300 atoms, see the attachment.

      To be able to run at all, I had to decrease the minimum allowed muffin-tin radius in src/checkmt.f90 to 0.0005:

      do is=1,nspecies
        ! threshold lowered from the original 0.25 so that checkmt does not abort
        !if (rmt(is).lt.0.25d0) then
        !if (rmt(is).lt.0.05d0) then
        if (rmt(is).lt.0.0005d0) then

      Anyhow, the job crashed, see below. Could you please give some hints on how to run such a system?

      Thanks, Miro


      milias@lxir127.gsi.de:~/Work/qch/projects/valeria-band/MOH_on_quartz/TlOH_on_quartz/elk/prepare_input/manual_preparation_for_elk/.bash_lxir127.01

      Elk code version 6.8.04 started

      Number of MPI processes : 1
      Number of OpenMP threads per MPI process : 16
      Total number of threads : 16
      Maximum OpenMP nesting level : 4
      Number of threads at first nesting level : 16
      Number of MKL threads : 8

      Info(elk): several copies of Elk may be running in this path
      (this could be intentional, or result from a previous crash,
      or arise from an incorrect MPI compilation)

      Info(elk): current task : 5

      Info(checkmt): reduced muffin-tin radius of species 1 (Si) from 2.2000 to 0.0921

      Info(checkmt): reduced muffin-tin radius of species 3 (H) from 1.4000 to 0.2511

      Info(checkmt): reduced muffin-tin radius of species 4 (O) from 1.8000 to 0.0034

      Info(findsymcrys): atomic basis shift (lattice) :
      -0.2739101200 -0.4877726900 0.2084116300
      See GEOMETRY.OUT for new atomic positions
      forrtl: severe (174): SIGSEGV, segmentation fault occurred
      Image PC Routine Line Source
      elk 0000000000704434 for__signal_handl Unknown Unknown
      libpthread-2.19.s 00007F961FD9D890 Unknown Unknown Unknown
      elk 0000000000417566 Unknown Unknown Unknown
      elk 000000000052052C Unknown Unknown Unknown
      elk 0000000000562C35 Unknown Unknown Unknown
      elk 00000000004130B9 Unknown Unknown Unknown
      elk 000000000040B8EE Unknown Unknown Unknown
      libc-2.19.so 00007F961FA04B45 __libc_start_main Unknown Unknown
      elk 000000000040B7E9 Unknown Unknown Unknown
      forrtl: severe (174): SIGSEGV, segmentation fault occurred
      Image PC Routine Line Source
      elk 0000000000704761 for__signal_handl Unknown Unknown
      libpthread-2.19.s 00007F961FD9D890 Unknown Unknown Unknown
      libiomp5.so 00007F9620D24DA4 Unknown Unknown Unknown
      ld-2.19.so 00007F962507C218 Unknown Unknown Unknown
      libc-2.19.so 00007F961FA1AB29 Unknown Unknown Unknown
      libc-2.19.so 00007F961FA1AB75 Unknown Unknown Unknown
      elk 00000000006FFB39 for__issue_diagno Unknown Unknown
      elk 0000000000704434 for__signal_handl Unknown Unknown
      libpthread-2.19.s 00007F961FD9D890 Unknown Unknown Unknown
      elk 0000000000417566 Unknown Unknown Unknown
      elk 000000000052052C Unknown Unknown Unknown
      elk 0000000000562C35 Unknown Unknown Unknown
      elk 00000000004130B9 Unknown Unknown Unknown
      elk 000000000040B8EE Unknown Unknown Unknown
      libc-2.19.so 00007F961FA04B45 __libc_start_main Unknown Unknown
      elk 000000000040B7E9 Unknown Unknown Unknown


      Primary job terminated normally, but 1 process returned
      a non-zero exit code. Per user-direction, the job has been aborted.



      mpirun detected that one or more processes exited with non-zero status, thus causing
      the job to be terminated. The first process to do so was:

      Process name: [[33650,1],0]
      Exit code: 174


       
  • Andrew Shyichuk

    Andrew Shyichuk - 2020-08-12

    Dear Miro,

    This:
    Info(checkmt): reduced muffin-tin radius of species 1 (Si) from 2.2000 to 0.0921
    Info(checkmt): reduced muffin-tin radius of species 3 (H) from 1.4000 to 0.2511
    Info(checkmt): reduced muffin-tin radius of species 4 (O) from 1.8000 to 0.0034

    means that your geometry is very wrong.

    Your coordinates are cartesian, not fractional.
    Start by checking IADIST.OUT for the culprit; there must be some atoms there that are too close.

    Also, the third vector (0 0 50) looks like it was manually specified to add vacuum. If you simply increase the vector length, you will end up with a different (and probably wrong) geometry - because of the fractional coordinates. You would have to rescale the fractional coordinates in order to do that correctly.
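
    Just to illustrate the point (with made-up numbers, not your actual cell): fractional coordinates are the Cartesian ones expressed in the basis of the lattice vectors, so stretching the vacuum vector changes what the same fractional numbers mean in real space. A minimal numpy sketch:

    import numpy as np

    # Made-up lattice vectors (rows, bohr); the third one is the vacuum vector.
    avec = np.array([[9.28, 0.00,  0.00],
                     [0.00, 8.04,  0.00],
                     [0.00, 0.00, 50.00]])

    # Made-up Cartesian position of one atom (bohr).
    cart = np.array([[1.23, 4.56, 7.89]])

    # r_cart = frac @ avec, hence frac = r_cart @ inv(avec)
    frac = cart @ np.linalg.inv(avec)

    # If the vacuum vector is stretched from 50 to 70 bohr, the fractional
    # z-components must be rescaled, otherwise the atoms move in real space.
    frac_stretched = frac.copy()
    frac_stretched[:, 2] *= 50.0 / 70.0

    print(frac)            # lattice (fractional) coordinates in the 50 bohr cell
    print(frac_stretched)  # same real-space positions in the 70 bohr cell

    In elk.in the atomic positions are given in lattice (fractional) coordinates, so all of them have to be converted like this before you touch the cell.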

    Also, with a system that large, you might have a problem with run time. I'd recommend starting with Quantum ESPRESSO and ultrasoft pseudopotentials.

    Best regards.
    Andrew

     

    Last edit: Andrew Shyichuk 2020-08-12
    • Miroslav Iliaš

      Miroslav Iliaš - 2020-08-12

      Dear Andrew,
      thanks for your answer.

      I was trying to get Elk coordinates from xyz/cif (https://sourceforge.net/p/cif2cell/bugs/14/), but apparently without success.

      Is there a way to get proper fractional coordinates for Elk? The xyz coordinates are attached.

       
  • Andrew Shyichuk

    Andrew Shyichuk - 2020-08-12

    Dear Miro,

    I use my own scripts, which is why I do not know the other codes.
    Bugs might occur because an input convention is missed, or something similar. Your xyz has 307 in the header, while it contains 310 "atoms" if we count the lattice vectors as atoms. I changed that and named the vectors "Tv". The vector naming might matter.
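
    To give you an idea, here is only a rough sketch of that kind of conversion, not my actual script - the file name, the angstrom units and the "Tv" convention are assumptions you would need to adapt. It reads the xyz, treats the "Tv" lines as lattice vectors rather than atoms, converts to fractional coordinates and prints an elk.in-style geometry block:

    import sys
    import numpy as np

    BOHR_PER_ANGSTROM = 1.8897261246

    symbols, coords, vectors = [], [], []
    with open("TlOH_on_quartz.xyz") as f:      # hypothetical file name
        lines = f.read().splitlines()

    natoms_header = int(lines[0].split()[0])   # atom count claimed by the header
    for line in lines[2:]:                     # skip the count and comment lines
        parts = line.split()
        if len(parts) < 4:
            continue
        sym = parts[0]
        xyz = [float(p) for p in parts[1:4]]
        if sym == "Tv":                        # translation vector, not an atom
            vectors.append(xyz)
        else:
            symbols.append(sym)
            coords.append(xyz)

    if natoms_header != len(symbols):
        print("header says %d atoms, file contains %d" % (natoms_header, len(symbols)),
              file=sys.stderr)

    avec = np.array(vectors) * BOHR_PER_ANGSTROM                  # lattice vectors as rows, bohr
    frac = (np.array(coords) * BOHR_PER_ANGSTROM) @ np.linalg.inv(avec)

    print("avec")
    for v in avec:
        print("  %16.10f %16.10f %16.10f" % tuple(v))

    print("\natoms")
    species = sorted(set(symbols))
    print("  %d" % len(species))
    for sp in species:
        idx = [i for i, s in enumerate(symbols) if s == sp]
        print("  '%s.in'" % sp)
        print("  %d" % len(idx))
        for i in idx:
            # lattice coordinates followed by a muffin-tin B-field of zero
            print("  %12.8f %12.8f %12.8f  0.0 0.0 0.0" % tuple(frac[i]))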

    The cell is sane, it replicates flawlessly.

    Here's the file.

    But.
    You are going to have some trouble converging this. Vacuum alone will cost - a lot. I thus recommend ultrasoft pseudopotentials, or PAW. I made you a Quantum Espresso input too, but you gotta tweak it for your purposes.

    Or maybe, I'd go with QM/MM for this system.

    As for the species - go on and try it :)
    There is a species cooking recipe in a sticky post of this forum. I do not see a reason for it not to work.

    Good luck.
    Andrew

     
  • Miroslav Iliaš

    Miroslav Iliaš - 2020-08-16

    Dear Andrew,
    thanks for the QE input, I am already trying it.

    Concerning Elk (and other all-electron codes, like WIEN2k and FLEUR), I would also like to use superheavy elements as adsorbates - for them, we do not have pseudopotentials. So Elk (or WIEN2k) seems to be a reasonable option, in parallel mode.

    Are there any public scripts that can prepare input for Elk from xyz geometries?

    Also, my parallel run "mpirun -np 2 $ELK" crashes for unknown reasons... any help, please? I have Intel-OpenMPI, F90 = mpif90, F77 = mpif77.

    milias@lxir127.gsi.de:~/Work/qch/projects/valeria-band/MOH_on_quartz/TlOH_on_quartz/elk/prepare_input/manual_preparation_for_elk/.bash_lxir127.01 
    
    Elk code version 6.8.04 started
    
    Elk code version 6.8.04 started
    
    Number of MPI processes :      1
    Number of OpenMP threads per MPI process :   16
    Total number of threads :     16
    Maximum OpenMP nesting level :    4
    Number of threads at first nesting level :   16
    Number of MKL threads :    8
    
    Number of MPI processes :      1
    Number of OpenMP threads per MPI process :   16
    Total number of threads :     16
    Maximum OpenMP nesting level :    4
    Number of threads at first nesting level :   16
    Number of MKL threads :    8
    forrtl: No such file or directory
    
    Elk code stopped
    forrtl: severe (28): CLOSE error, unit 95, file "Unknown"
    Image              PC                Routine            Line        Source             
    elk                00000000006FE993  for__io_return        Unknown  Unknown
    elk                00000000006FC8C8  for_close             Unknown  Unknown
    elk                00000000004107BB  Unknown               Unknown  Unknown
    elk                00000000004127E9  Unknown               Unknown  Unknown
    elk                000000000040B8EE  Unknown               Unknown  Unknown
    libc-2.19.so       00007F5D400E9B45  __libc_start_main     Unknown  Unknown
    elk                000000000040B7E9  Unknown               Unknown  Unknown
    
     
  • Andrew Shyichuk

    Andrew Shyichuk - 2020-08-17

    Dear Miro,

    The fact that "Number of MPI processes :" is printed twice, each copy reporting a single process, means that your MPI configuration is wrong: mpirun started two independent serial runs instead of one run with two MPI processes.

    The run command is likely the culprit.

    See this post for more details.
    https://sourceforge.net/p/elk/discussion/897820/thread/bb25c53e70/

    However, the details depend on the system you are running on. Supercomputer centers should provide manuals.

    Best regards.
    Andrew

     
