Raw partition?

  • Vladimir Ilnitsky

    Has anybody installed GT.M on a raw partition? Does everything work well?
    Does this variant really work faster, and by how much?

    Please answer. I need to know this to make a decision: buy a new hard disk, or would that be money thrown to the wind?

    Does anybody know how to make GT.M work as fast as possible?

    • K.S. Bhaskar

      K.S. Bhaskar - 2008-01-06

      Vladimir --

      We do not recommend installing databases on raw partitions.  Although databases on raw partitions were supported historically, we do not test GT.M on raw partitions today.

      All of our benchmarks are performed on normal file systems.  You may want to experiment with different file systems (our benchmarks have found noticeable differences depending on the workload).  Also, we have also found that good performance is dependent on having a sufficiently large file buffer cache at the operating system level (or at the SAN, if you are using one).

      Should you test / benchmark GT.M databases on raw partitions, we would be interested in hearing of your experiences.

      Also, are you journaling your databases?  What sort of journaling?  Are you using transaction processing?

      -- Bhaskar

    • Vladimir Ilnitsky

      I am having a discussion with admirers of RDBMSs about the performance of relational versus M DBMSs. I want to develop a small SQL package for GT.M and make a SELECT query run faster than ORACLE and MS SQL Server on similar machines. What do you think - do I have any chance?

      • K.S. Bhaskar

        K.S. Bhaskar - 2008-01-07

        I think you will find M to be much faster than relational databases for transaction processing.  For pure queries, a lot depends on query complexity, how much cross-reference indexing has been performed, etc. - I still think M has an edge, but there are a lot more variables involved in the comparison.

        Also there are some new goodies coming in the next month or two that may give you a bit of an advantage, but I can't say a lot more about that right now except to point you to http://socallinuxexpo.org/scale6x/conference-info/schedules/ (look for my name on the page and follow the link).

        -- Bhaskar

    • Vladimir Ilnitsky

      For now I want to find a solution to a very restricted task. I am trying to develop a program that serves SELECT queries against one table and gets the best results. If this is interesting to you and you would like to help me with any advice, I will describe all the details and my steps to you for critique. Do you agree?

    • Vladimir Ilnitsky

      One strange problem. There are 1,000,000 records; each record has 7 tab-delimited fields.

          s n=""
          f  s n=$o(^D(n)) q:n=""  d
          . s s=^(n)

          . ; -- this part of the loop takes 2 sec.

          . s s1=^D(n,1)

          . ; -- after adding the line above - 3 sec.

          . f m=1:1:$l(s,$c(9)) d
          . . s fldN="C"_m
          . . i $p(s1,$c(9),m)'="" s @fldN=$p(s1,$c(9),m) q
          . . s @fldN=$p(s,$c(9),m)

          ; -- after adding the loop above, which decomposes the record into fields - 15 sec.

      Do operations in memory take more time than disk I/O operations?
      Very strange. Can this be made faster?
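
      A minimal sketch (not from the thread) of how such a pass might be timed, assuming the same global ^D from the example above; $HOROLOG has only one-second resolution, so timings of short runs are coarse:

          ; record the start time (seconds since midnight)
          s start=$p($h,",",2)
          ; walk every record of ^D, fetching each value into s
          s n="" f  s n=$o(^D(n)) q:n=""  s s=^(n)
          ; report elapsed wall-clock seconds
          w "elapsed: ",$p($h,",",2)-start," sec.",!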

    • Vladimir Ilnitsky

      When I replace this loop (takes 5 sec.)

      . f m=1:1:$l(s,$c(9)) d
      . . s fldN="C"_m
      . . i $p(s1,$c(9),m)'="" s @fldN=$p(s1,$c(9),m) q
      . . s @fldN=$p(s,$c(9),m)

      with the straightforward

      . . s C1=$p(...
      . . s C2=$p(...

      i.e. when I don't use "@", it takes 1 sec. That is, using "@" makes performance 5 times slower!
      Is there any way to increase the performance of indirection?
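
      One possible rewrite (a sketch, not taken from the thread): storing the fields in a subscripted local C(m) instead of constructing a variable name and setting it through @fldN avoids the per-iteration name construction and indirect lookup entirely:

          ; decompose the record into a subscripted local instead of C1..Cn
          . f m=1:1:$l(s,$c(9)) d
          . . s C(m)=$p(s,$c(9),m)
          . . ; field from s1 overrides the field from s when non-empty
          . . i $p(s1,$c(9),m)'="" s C(m)=$p(s1,$c(9),m)

      Code that consumed C1, C2, ... would then read C(1), C(2), ... instead.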

      • Steven Estes

        Steven Estes - 2008-02-04


        The current code has an extra hash lookup of the variable name that it may be possible to reduce/eliminate when the variable is looked up with an indirect reference rather than a "regular" reference. I do not know if this explains the performance difference or not. How long is the string you are processing? How many variables are being set? Your example isn't complete enough for us to attempt to duplicate the issue. How big are the values? Any further information you can give would probably be useful.
