
Usage of RAM in Karel's DNN

Help
JTDamaja
2015-05-12
2015-05-14
  • JTDamaja

    JTDamaja - 2015-05-12

    Hello everyone,

    I've been experimenting lately with GPU-enabled DBN pretraining. I tried pretraining two DBNs in parallel, but I found that each one then runs twice as slowly, even though the GPU has a lot of free memory (my GPU has 2 GB of memory, but each pretraining process requires only 300-400 MB).

    I would like to know whether this happens because the two pretraining processes compete for RAM, or whether there is another reason for the slowdown.

    Is there maybe a way to specify the amount of RAM for each pretraining process?

    Thanks, all.

     
    • Jan "yenda" Trmal

      The way you described it, it seems that the competition is about the GPU
      "cores", i.e. computing performance, not really about memory.
      y.
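One quick way to confirm this is to watch GPU utilization, not just memory, while a single pretraining job runs. A minimal sketch, assuming nvidia-smi (shipped with the NVIDIA driver) is on the PATH; this command is not from the original thread:

```shell
# Query GPU utilization and memory usage once; add "-l 5" to poll every 5 s.
# If utilization is already near 100% with one job, a second job can only
# time-share the CUDA cores, so each job runs roughly twice as slowly.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=utilization.gpu,memory.used,memory.total \
             --format=csv
else
  echo "nvidia-smi not found (no NVIDIA driver on this machine)"
fi
```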

      Sent from sourceforge.net because you indicated interest in
      <https://sourceforge.net/p/kaldi/discussion/1355348/>

      To unsubscribe from further messages, please visit
      <https://sourceforge.net/auth/subscriptions/>

       
      • JTDamaja

        JTDamaja - 2015-05-13

        Thanks for your reply.

        Maybe I did not describe it in enough detail. I have a workstation with the following specifications: Intel Core i5-3570 (4 physical cores), 12 GB RAM, and an nVidia GeForce GTX 770 with 2 GB of video memory.

        When I'm not doing DBN pretraining, the GPU occupies only 200-300 MB of video memory.

        When I'm pretraining one DBN, the GPU occupies only 500-700 MB of video memory, while almost all free RAM is occupied. Pretraining takes about 5 hours.

        It seems to me that the computing power of the GPU is not fully used...

        When I'm pretraining two DBNs, the GPU occupies only 800-1100 MB of video memory, and almost all free RAM is still occupied. Pretraining takes about 10 hours, that is, twice as slowly.

        Could this be because both processes use the same CPU core for rbm-train-cd1-f?

        Or could this be because both processes compete for a limited amount of RAM?

        Or could this be because both processes compete for CUDA cores on the GPU?

         
        • Daniel Povey

          Daniel Povey - 2015-05-13

          Likely it is competition for CUDA cores that is causing the slowdown.
          You cannot trust the reported (CPU) memory usage of processes that use
          GPUs; it tends to be very large, but that is not real memory use.
          Dan
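The effect Dan describes can be seen by comparing a process's resident size against its virtual size. A sketch using ps, with the shell's own PID standing in for a training process (this example is not from the original thread):

```shell
# RSS = resident set size: RAM actually in use, in KB.
# VSZ = virtual size: address space mapped, in KB.
# CUDA processes typically show a huge VSZ, because the driver maps
# large address ranges, while RSS (the real RAM footprint) stays
# much smaller. $$ is used here only as a stand-in PID.
ps -o rss=,vsz= -p $$
```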


           
          • JTDamaja

            JTDamaja - 2015-05-14

            Thank you for your response.

             