CPU Routers

Comments

  • florianb

    Not really first-hand experience, but I was involved in the design process for some router boards last month.

    Generally, what matters most is plain clock speed - at first glance. In the end you have to select a CPU based on the number of NICs you've got in your server. Ideally, you allocate each NIC its own CPU using smp_affinity, so that all of that NIC's IRQs are handled by one particular CPU (this is per hardware thread, and the affinity mask is written in hex - see the sketch at the end of this comment), which improves performance. You'd still have the control-plane CPU load split across all threads, but that's not really a problem, and you could limit that to one particular CPU as well to leave CPU cycles available for the NICs on the other threads.

    10Gbps on x86 is nothing special nowadays; with plain packet processing you can push a dozen Gbps or more if the setup is sophisticated enough.

    And one point that of course also matters: the actual NICs you're going to use. Offloading a lot of the packet processing to them (by making use of their ring buffers) generally gives you far more scalability across the entire system, but that involves a really customized setup and will probably also require you to adapt parts of the kernel and how the I/O queues are handled.

    TL;DR: Get a CPU with at least as many threads as the NICs you're going to fit in the server, go for the higher clock-speed models, and don't buy cheap-ass NICs.
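
    To make the smp_affinity / hex-mask part concrete, here is a minimal sketch (the IRQ number 62 and CPU 2 are placeholders - take the real IRQ numbers from /proc/interrupts on your box; writing the file needs root):

      # Pin a single IRQ to a single CPU by writing a hex bitmask
      # to /proc/irq/<N>/smp_affinity (IRQ 62 and CPU 2 are placeholders).
      IRQ = 62          # taken from /proc/interrupts
      CPU = 2           # target core

      mask = 1 << CPU   # bit N set => the IRQ may be delivered to CPU N
      with open(f"/proc/irq/{IRQ}/smp_affinity", "w") as f:
          f.write(f"{mask:x}")   # hex mask, e.g. "4" pins the IRQ to CPU 2

    Afterwards the counters for that IRQ in /proc/interrupts should only keep growing in the pinned CPU's column.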

  • rm_ IPv6 Advocate, Veteran
    edited March 2018

    florianb said: you have to select a CPU based on the number of NICs you've got in your server. Ideally, you allocate each NIC its own CPU using smp_affinity, so that all of that NIC's IRQs are handled by one particular CPU

    That's very primitive advice. I have a dual-port NIC which uses 16 IRQs to load-balance traffic (each column below is a per-CPU interrupt count):

      62:    1896542    5948223   27162675   24888443   24163521    5403079    5909803    4029454  IR-PCI-MSI 524288-edge      eth0-0
      63:       4075    1001480     616013    2592543     254938     285175     207985    1107316  IR-PCI-MSI 524289-edge      eth0-1
      64:     197073     476197    2610381    1581599     115544     456792    2367032    1137384  IR-PCI-MSI 524290-edge      eth0-2
      65:        472     543840     811196     709408     103906     550840     596107     425457  IR-PCI-MSI 524291-edge      eth0-3
      66:       2409    5424355     972116    1462395    1030481    3641199    7206461    4259615  IR-PCI-MSI 524292-edge      eth0-4
      67:       5535     265198     930528     513243     118704     643945     501805    1140134  IR-PCI-MSI 524293-edge      eth0-5
      68:      12060      31388     334306     960722     246619     524946     271478     710516  IR-PCI-MSI 524294-edge      eth0-6
      69:     344659      31796     625465     232659     129618     263416     102907     434099  IR-PCI-MSI 524295-edge      eth0-7
      71:      21205    2236294     300813    2104031   10496248    3379412   43952473    3295831  IR-PCI-MSI 526336-edge      eth1-0
      72:      99507     182759     215797     309394     111105     542341     443447     900701  IR-PCI-MSI 526337-edge      eth1-1
      73:        195     127120     531312     981886     195660     374321     548677     458949  IR-PCI-MSI 526338-edge      eth1-2
      74:     293280     949778    9030179    1523234     477791    8326407    4555267    1733807  IR-PCI-MSI 526339-edge      eth1-3
      75:       5415      66744     342149     319960     178360    1203989     366363     869589  IR-PCI-MSI 526340-edge      eth1-4
      76:        614     365066     992802   12528360     108179     587753     364028     830202  IR-PCI-MSI 526341-edge      eth1-5
      77:        303     323144     567313     317869      71399     844893     157082     932051  IR-PCI-MSI 526342-edge      eth1-6
      78:   13480646     128160     447888    1508453     436341    1011511     584349     690085  IR-PCI-MSI 526343-edge      eth1-7
    

    Most server NICs do the same (a quick way to enumerate those per-queue IRQs is sketched at the end of this comment).

    florianb said: Get a CPU with at least as many threads as the NICs you're going to fit in the server

    Soooo do I still get one thread per NIC? Or 8 per NIC? Or 16? Or did you mean per port? Feels like you're just going off something "you've read", not having actually touched any of this stuff in real life.
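
    For reference, a rough sketch of enumerating those per-queue IRQs (the interface name and the "eth0-0", "eth0-1", ... naming pattern are assumptions - some drivers label the queues differently, e.g. "eth0-TxRx-0"):

      # List the per-queue IRQ numbers a multi-queue NIC registers
      # in /proc/interrupts (queue names are driver-dependent).
      import re

      def queue_irqs(iface="eth0"):
          irqs = {}
          with open("/proc/interrupts") as f:
              for line in f:
                  m = re.match(r"\s*(\d+):", line)
                  if m and re.search(rf"{re.escape(iface)}-\d+\s*$", line):
                      irqs[line.split()[-1]] = int(m.group(1))
          return irqs   # e.g. {"eth0-0": 62, "eth0-1": 63, ...}

      print(queue_irqs("eth0"))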

  • Zerpy Member

    @rm_ said:
    Soooo do I still get one thread per NIC? Or 8 per NIC? Or 16? Or did you mean per port? Feels like you're just going off something "you've read", not having actually touched any of this stuff in real life.

    I guess he means 1 core per queue (be it RX or TX), because that's ideally what you want: assigning a core to a specific queue (roughly as in the sketch below).
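
    A rough sketch of that, reusing the hypothetical queue_irqs() helper from the comment above (it writes the affinity masks directly, so it needs root):

      # Spread one interface's queue IRQs round-robin over the cores,
      # one core per queue, via /proc/irq/<N>/smp_affinity.
      import os

      def pin_one_core_per_queue(iface="eth0"):
          cpus = os.cpu_count()
          for i, (queue, irq) in enumerate(sorted(queue_irqs(iface).items())):
              cpu = i % cpus                   # one queue -> one core
              with open(f"/proc/irq/{irq}/smp_affinity", "w") as f:
                  f.write(f"{1 << cpu:x}")     # hex bitmask with bit <cpu> set
              print(f"{queue}: IRQ {irq} -> CPU {cpu}")

    irqbalance, or the set_irq_affinity script that ships with some NIC drivers, does essentially the same job automatically.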
