What kind of hardware is best for ELK?

Elk Users
DDDDD
Created: 2012-09-18
Updated: 2013-06-11
  • DDDDD
    2012-09-18

    Hello,

    We're thinking about building a computer dedicated to Elk calculations, and I'm wondering what sort of hardware suits Elk best. For example, is a system with lots of RAM preferred? A system with lots of very fast single- or dual-core processors? A system with an extremely large number of cores? Are there any processors that Elk particularly prefers over others?

    Information regarding this would be much appreciated.

    Thank you

     
  • Before anyone can give you suggestions, you need to say what sort of calculations you primarily intend to use the computer for. Elk is a code capable of so many different things, and the optimal hardware very much depends on the problem.
    Best,
        Lars

     
  • Anton F.
    2012-09-21

    Hi!

    Since Elk uses both MPI and OpenMP for different types of calculations, it is best if your system supports MPI and has as many cores as possible for OpenMP performance. In some calculations, such as DOS or phonon calculations, only OpenMP is used, so I think having many cores is crucial (a minimal sketch of this hybrid MPI+OpenMP model is appended at the end of the thread). As for memory, a large amount is required if you are going to calculate large systems with tens of atoms in the unit cell.
    Best regards,
    Anton F.

     
  • Markus
    2012-09-21

    Hi!

    From my experience, I would say that fewer nodes with as many cores as possible are preferred. Not much MPI parallelism is implemented; mainly it is the k-point parallelism in the SCF loop (see the k-point sketch appended after this thread). With OpenMP you can parallelize more things, as Anton said.

    If you have to choose, get fewer, faster cores rather than more, slower ones. I have hooked up five Intel 6-core computers via Gigabit Ethernet, which is okay for systems with up to about ten atoms. However, the memory requirement will also play a role. If you are going for Bethe-Salpeter equation calculations, you will need A LOT of memory (easily several tens of gigabytes per node), even for quite small systems.

    Do some test calculations and have a look at the scaling of your runs.

    Regards,
    Markus
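
To make the hybrid parallelism discussed above a bit more concrete, here is a minimal MPI+OpenMP sketch in C. It is not Elk code (Elk itself is written in Fortran); it only assumes the common layout of one MPI rank per node and one OpenMP thread per core, and shows how ranks and threads combine.

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    /* Minimal hybrid MPI+OpenMP sketch (illustrative only, not Elk code).
     * Typical layout: one MPI rank per node, one OpenMP thread per core.
     * Build:  mpicc -fopenmp hybrid.c -o hybrid
     * Run:    OMP_NUM_THREADS=<cores per node> mpirun -np <nodes> ./hybrid
     */
    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I        */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of MPI ranks */

        /* Each rank spawns its own team of OpenMP threads on the local cores. */
        #pragma omp parallel
        {
            printf("MPI rank %d of %d, OpenMP thread %d of %d\n",
                   rank, size, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }

With, say, five nodes of six cores each (as in Markus's setup), running 5 ranks with OMP_NUM_THREADS=6 prints 30 lines, one per core.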
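
To illustrate Markus's point about the limited MPI parallelism, here is a second small sketch, again purely illustrative and not Elk's actual implementation: it assumes the k-points of the irreducible Brillouin zone are shared out round-robin over the MPI ranks, so once there are more ranks than k-points the extra ranks sit idle and the speedup saturates. This is one reason the scaling tests Markus recommends are worth doing.

    #include <mpi.h>
    #include <stdio.h>

    /* Illustrative k-point parallelism sketch (not Elk's actual code).
     * nkpt k-points are distributed round-robin over the MPI ranks;
     * ranks beyond nkpt receive no work, so pure MPI scaling for this
     * part saturates at nkpt processes.
     */
    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int nkpt = 10;   /* assumed number of k-points (example value) */
        int nlocal = 0;

        /* Rank r treats k-points r, r+size, r+2*size, ... */
        for (int ik = rank; ik < nkpt; ik += size)
            nlocal++;          /* a real code would diagonalize H(k) here */

        printf("rank %d handles %d of %d k-points\n", rank, nlocal, nkpt);

        MPI_Finalize();
        return 0;
    }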