
MPD box best setup to get best performance

Help forum
Created: 2011-11-26 · Last updated: 2013-03-27
(Page 1 of 2)
  • Sami Halabi

    Sami Halabi - 2011-11-26

    Hi,

I want to use mpd 5.5 with FreeBSD to terminate L2TP/PPTP and PPPoE user sessions.

    I read somewhere that it's possible to run 6000+ sessions.

    I would like your help to know which FreeBSD version, which ports, and which sysctls should be set to get the best performance.

    Another question: I see fixed patches on SourceForge. Are they applied in the FreeBSD port, or should I apply them manually?

    Thanks in advance,
    Sami

     
  • Dmitry S. Lukhtionov

    1. Increase net.graph.maxdata and net.graph.recvspace sysctl values
    2. Get latest source from CVS manually
    3. That's all :)
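
    For step 1, a sketch of what that tuning might look like. The values below are copied from a box shown later in this thread, so they are illustrative, not recommendations; size them to your own session count. Note that on many FreeBSD versions net.graph.maxdata and net.graph.maxalloc are boot-time tunables, so they belong in /boot/loader.conf rather than /etc/sysctl.conf.

    ```conf
    # /boot/loader.conf -- boot-time tunables (illustrative values)
    net.graph.maxdata=128000
    net.graph.maxalloc=128000

    # /etc/sysctl.conf -- runtime sysctls for the netgraph control socket
    net.graph.recvspace=40960
    net.graph.maxdgram=40960
    ```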

     
  • mike tancsa

    mike tancsa - 2011-11-30

Hi, I didn't realize development was still going on. Any reason why the port in FreeBSD has not been updated? In terms of what has changed since 5.5, is there a list somewhere? Having a quick look at the CVS web repo,
    I see libpdel has been removed as a dependency
    some improvements to the CLI and web interface
    netflow v9 support
    a new 'set iface group' command? What does that do?
    new 'set iface name' and 'set iface descr' commands

    Anything else?

     
  • Dmitry S. Lukhtionov

'set iface group' adds the ability to group several interfaces into one group.
    This is useful if you use PF.
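
    A minimal sketch of how that might be used together with PF. The group name "ppp-users" and the filter rule are illustrative assumptions, not from this thread; PF can match an interface group name wherever an interface name is accepted.

    ```conf
    # mpd.conf (inside a link/bundle template) -- sketch
    set iface group ppp-users

    # pf.conf -- the group name is matched like an interface name
    pass in on ppp-users inet proto tcp from any to any port 8080
    ```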

     
  • Sami Halabi

    Sami Halabi - 2011-12-18

    Hi,

    1. Increase net.graph.maxdata and net.graph.recvspace sysctl values
    mpd2# sysctl net.graph | more
    net.graph.msg_version: 8
    net.graph.abi_version: 12
    net.graph.maxdata: 128000
    net.graph.maxalloc: 128000
    net.graph.threads: 4
    net.graph.control.proto: 2
    net.graph.data.proto: 1
    net.graph.family: 32
    net.graph.recvspace: 40960
    net.graph.maxdgram: 40960

Do I need to increase these?
    2. Get latest source from CVS manually
    How do I do that?

    3. That's all :)

     
  • Dmitry S. Lukhtionov

You can always see how much memory is allocated for netgraph by using the "vmstat -z" command.
    In its output, search for these lines:
    NetGraph items:
    NetGraph data items:
    First - net.graph.maxalloc
    Second - net.graph.maxdata
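
    A quick way to spot trouble is to filter the output for rows whose FAILURES column (the sixth comma-separated field) is nonzero. The sample lines below are copied from output shown later in this thread; on a live FreeBSD box, pipe `vmstat -z` into the awk filter instead of the here-doc.

    ```shell
    # Print vmstat -z rows with a nonzero FAILURES column.
    # On a real box:  vmstat -z | awk -F',' '$6+0 > 0'
    awk -F',' '$6+0 > 0' <<'EOF'
    64 Bucket:                536,        0,       80,        4,       80,       31
    128 Bucket:              1048,        0,      109,       83,     1791,    16143
    NetGraph items:            72,   128006,        4,      112, 300107892,        0
    EOF
    ```

    Only the two Bucket rows survive the filter here; the NetGraph row has zero failures and is dropped.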

     
  • Sami Halabi

    Sami Halabi - 2011-12-19

    Hi,
    here is the output:

    # vmstat -z
    ITEM                     SIZE     LIMIT      USED      FREE  REQUESTS  FAILURES

    UMA Kegs:                 208,        0,       95,        7,       95,        0
    UMA Zones:                256,        0,       95,       10,       95,        0
    UMA Slabs:                568,        0,     1826,        1,     6799,        0
    UMA RCntSlabs:            568,        0,      771,        6,      771,        0
    UMA Hash:                 256,        0,        2,       13,        4,        0
    16 Bucket:                152,        0,       75,        0,       75,        0
    32 Bucket:                280,        0,      119,        7,      119,        0
    64 Bucket:                536,        0,       80,        4,       80,       31
    128 Bucket:              1048,        0,      184,        2,      184,        5
    VM OBJECT:                216,        0,    43094,      322,  1800371,        0
    MAP:                      232,        0,        7,       25,        7,        0
    KMAP ENTRY:               120,    55955,       52,      165,   114785,        0
    MAP ENTRY:                120,        0,      922,      876,  3491969,        0
    DP fakepg:                120,        0,        0,        0,        0,        0
    SG fakepg:                120,        0,        0,        0,        0,        0
    mt_zone:                 2056,        0,      289,       33,      289,        0
    16:                        16,        0,     2149,      371,  9081505,        0
    32:                        32,        0,     2506,      423, 374615100,        0
    64:                        64,        0,     3923,     1173,  2216975,        0
    128:                      128,        0,    10519,      675,  1796549,        0
    256:                      256,        0,     3220,     2225,   812541,        0
    512:                      512,        0,     1514,      166,   216750,        0
    1024:                    1024,        0,      196,       56,   305451,        0
    2048:                    2048,        0,      525,      159,  5270465,        0
    4096:                    4096,        0,      675,      184,   185776,        0
    Files:                     80,        0,      330,      210,   826877,        0
    TURNSTILE:                136,        0,      229,       31,      229,        0
    umtx pi:                   96,        0,        0,        0,        0,        0
    MAC labels:                40,        0,        0,        0,        0,        0
    PROC:                    1120,        0,       50,       49,   124442,        0
    THREAD:                  1112,        0,      160,       68,    53449,        0
    SLEEPQUEUE:                80,        0,      229,       32,      229,        0
    VMSPACE:                  392,        0,       25,       65,   124403,        0
    cpuset:                    72,        0,        2,       98,        2,        0
    audit_record:             952,        0,        0,        0,        0,        0
    mbuf_packet:              256,        0,     1024,      384, 712380446,        0
    mbuf:                     256,        0,        3,      389, 1429580636,        0
    mbuf_cluster:            2048,   409600,     1408,       80,     1411,        0
    mbuf_jumbo_page:         4096,    12800,        0,       27,      102,        0
    mbuf_jumbo_9k:           9216,     6400,        0,        0,        0,        0
    mbuf_jumbo_16k:         16384,     3200,        0,        0,        0,        0
    mbuf_ext_refcnt:            4,        0,        0,        0,        0,        0
    g_bio:                    232,        0,        0,     3632,  1913357,        0
    ttyinq:                   160,        0,      135,      105,      465,        0
    ttyoutq:                  256,        0,       72,       63,      248,        0
    ata_request:              320,        0,        0,       24,       18,        0
    ata_composite:            336,        0,        0,        0,        0,        0
    cryptop:                   88,        0,        0,        0,        0,        0
    cryptodesc:                72,        0,        0,        0,        0,        0
    VNODE:                    472,        0,    59526,       74,   264616,        0
    VNODEPOLL:                112,        0,        0,        0,        0,        0
    NAMEI:                   1024,        0,        0,       36,  1643569,        0
    S VFS Cache:              108,        0,    59750,     1498,   278933,        0
    L VFS Cache:              328,        0,      125,      403,     2709,        0
    DIRHASH:                 1024,        0,     1031,       53,     1037,        0
    NFSMOUNT:                 624,        0,        0,        0,        0,        0
    NFSNODE:                  688,        0,        0,        0,        0,        0
    pipe:                     728,        0,        6,       49,   108846,        0
    ksiginfo:                 112,        0,       85,      971,     4287,        0
    itimer:                   344,        0,        0,        0,        0,        0
    bridge_rtnode:             64,        0,        0,        0,        0,        0
    KNOTE:                    128,        0,        0,      116,     1408,        0
    socket:                   680,   204804,      363,       93,    84163,        0
    ipq:                       56,    12852,        0,      252,    44266,        0
    udp_inpcb:                336,   204809,       56,       65,    69422,        0
    udpcb:                     16,   204960,       56,      448,    69422,        0
    tcp_inpcb:                336,   204809,      104,       83,     1417,        0
    tcpcb:                    880,   204800,      104,       88,     1417,        0
    tcptw:                     72,    27800,        0,      200,      169,        0
    syncache:                 144,    15366,        0,       78,     1400,        0
    hostcache:                136,    15372,        4,      108,      169,        0
    tcpreass:                  40,    25620,        0,      252,       70,        0
    sackhole:                  32,        0,        0,      303,      134,        0
    sctp_ep:                 1272,    25602,        0,        0,        0,        0
    sctp_asoc:               2240,    40000,        0,        0,        0,        0
    sctp_laddr:                48,    80064,        0,      216,        5,        0
    sctp_raddr:               616,    80004,        0,        0,        0,        0
    sctp_chunk:               136,   400008,        0,        0,        0,        0
    sctp_readq:               104,   400032,        0,        0,        0,        0
    sctp_stream_msg_out:       96,   400026,        0,        0,        0,        0
    sctp_asconf:               40,   400008,        0,        0,        0,        0
    sctp_asconf_ack:           48,   400032,        0,        0,        0,        0
    ripcb:                    336,   204809,       86,       68,     3240,        0
    unpcb:                    240,   204800,       10,       54,     4660,        0
    rtentry:                  200,        0,      153,       94,     1370,        0
    IPFW dynamic rule:        120,        0,        0,        0,        0,        0
    selfd:                     56,        0,      355,      338, 1001010507,        0
    SWAPMETA:                 288,   116519,        0,        0,        0,        0
    ip4flow:                   56,    50211,        0,        0,        0,        0
    ip6flow:                   80,    50220,        0,        0,        0,        0
    Mountpoints:              752,        0,        5,       10,        5,        0
    FFS inode:                168,        0,    59494,       60,   264556,        0
    FFS1 dinode:              128,        0,        0,        0,        0,        0
    FFS2 dinode:              256,        0,    59494,      146,   264556,        0
    NetGraph items:            72,   128006,        4,      112, 300107892,        0
    NetGraph data items:       72,   128006,        0,      203, 679841047,        0
    IpAcct:                  2032,        0,        0,        0,        0,        0

Any ideas about what I should improve?

     
  • Dmitry S. Lukhtionov

You should pay attention to the FAILURES column.

     
  • Sami Halabi

    Sami Halabi - 2011-12-26

    Hi,

    ITEM                     SIZE     LIMIT      USED      FREE  REQUESTS  FAILURES
    64 Bucket:                536,        0,       80,        4,       80,       31
    128 Bucket:              1048,        0,      184,        2,      184,        5

On another box:
    ITEM                     SIZE     LIMIT      USED      FREE  REQUESTS  FAILURES
    128 Bucket:              1048,        0,      109,       83,     1791,    16143

What can I do about these bucket failures?

    Sami

     
  • Dmitry S. Lukhtionov

    Nothing

     
  • Sami Halabi

    Sami Halabi - 2011-12-26

    :)
Which failures should I pay attention to?

    Sami

     
  • Dmitry S. Lukhtionov

No. This is the usual behavior; everything works fine despite these failures.
    I have not found a workaround for this.

     
  • Sami Halabi

    Sami Halabi - 2011-12-26

    okay, thank you.

So you think I shouldn't tune anything else (sysctls, tweaks in the source, and so on)?
    What is the best way to get the source from CVS and install it?

    Thanks again,
Merry Christmas
    Sami

     
  • Alexander Motin

    Alexander Motin - 2011-12-26

Mpd-5.6 was released in ports a few days ago, so at this moment there is no reason to take the CVS sources; they are the same now.

As for tuning, there are not many "one size fits all" recipes now. Most of the previous ones have been integrated to work out of the box. Fine tuning for a specific workload and hardware can still be done, but it requires in-place experimenting. The only thing I can recommend in general is to disable devd or make it not react to interface events.
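
    Upgrading from the ports tree would then look roughly like this sketch. The port origin net/mpd5 is the usual location, but verify it on your system, and substitute your preferred ports-management tool if you use one.

    ```conf
    # Sketch: build and install mpd 5.6 from the FreeBSD ports tree
    cd /usr/ports/net/mpd5
    make install clean
    # restart the running daemon afterwards
    /usr/local/etc/rc.d/mpd5 restart
    ```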

     
  • Sami Halabi

    Sami Halabi - 2011-12-26

    amotin,
    Thanks for your reply,

    >The only thing I can recommend now in general is to disable devd or make it to not react on interface events.
How can I do that?

    Sami

     
  • Sami Halabi

    Sami Halabi - 2011-12-26

    Hi,
I read a bit about devd; why is it bad for mpd?

    Sami

     
  • Alexander Motin

    Alexander Motin - 2011-12-26

In /etc/devd.conf, comment out the sections containing
    match "system" "IFNET";

    To disable devd completely, you may set devd_enable="NO" in /etc/rc.conf.

     
  • Alexander Motin

    Alexander Motin - 2011-12-26

Note that devd is also used for other devices, such as mice.

     
  • Alexander Motin

    Alexander Motin - 2011-12-26

devd is not bad, but it runs a number of shell scripts on interface events. That creates additional load on the system, which may be significant at times of peak load.

     
  • Sami Halabi

    Sami Halabi - 2011-12-26

    Hi,

Here is what I have in /etc/devd.conf:

    notify 0 {
            match "system"          "IFNET";
            match "type"            "ATTACH";
            action "/etc/pccard_ether $subsystem start";
    };

    notify 0 {
            match "system"          "IFNET";
            match "type"            "LINK_UP";
            media-type              "ethernet";
            action "/etc/rc.d/dhclient quietstart $subsystem";
    };

Should I disable both?

    Sami
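
    Following the earlier advice in this thread to comment out the IFNET sections, disabling both would look like this sketch (alternatively, devd_enable="NO" in /etc/rc.conf disables devd entirely, at the cost of losing it for other devices):

    ```conf
    # /etc/devd.conf -- both IFNET sections disabled by commenting (sketch)
    #notify 0 {
    #        match "system"          "IFNET";
    #        match "type"            "ATTACH";
    #        action "/etc/pccard_ether $subsystem start";
    #};

    #notify 0 {
    #        match "system"          "IFNET";
    #        match "type"            "LINK_UP";
    #        media-type              "ethernet";
    #        action "/etc/rc.d/dhclient quietstart $subsystem";
    #};
    ```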

     
  • Sami Halabi

    Sami Halabi - 2011-12-26

BTW: does mpd 5.6 introduce new features? Is that reflected in the documentation?

    Sami

     
  • Dmitry S. Lukhtionov

    Yes. You can simply find it in /usr/local/share/doc/mpd5/mpd.html

     
  • Sami Halabi

    Sami Halabi - 2011-12-27

I know where to find it :)
    The question was whether it was updated with the new features added in version 5.6…

     
  • Dmitry S. Lukhtionov

Hmm.. Look at the "Changes since version 5.5" section.

     
  • Sami Halabi

    Sami Halabi - 2011-12-27

I see basically no major functionality changes in MPD; bugfixes are the top priority. Should I upgrade immediately from 5.5 to 5.6?

Has anyone else already tested this version in production?

    Sami

     
