
Oh no! BuyVM down?!


Comments

  • Francisco Top Host, Host Rep, Veteran

    @david said: I got a reply, and it sounds like there's still problems with "Legacy Webhosting, IPv6, US VPN, and rDNS." So the likely culprit is the USVPN setup, though I tried to disable it and still had problems.

    I'll just wait until the rest of the systems are fixed and try again later.

    RDNS shouldn't be buggering up but who knows, maybe a part of it didn't boot up properly. Try updating RDNS entries again? I'm thinking rc.d was silly and booted powerdns before SQL.
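    If it is just the start order, something like this should show it and work around it (a rough sketch; it assumes a sysvinit box where the init scripts are named pdns and mysqld):

    ls /etc/rc3.d/ | egrep 'pdns|mysqld'   # the S## prefix is the start order; pdns should come after mysqld
    service pdns restart                   # bouncing pdns once MySQL is up usually clears stale-connection weirdness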

    Francisco

  • @Francisco said: @Kris - Do you not have new V6 addresses? Old SJ IP's were 2607:f358 or something like that. New ones will be similar to that of NY, 2605:6500 or 6400.

    It seems I have the new series, but where I was getting Network Unreachable before, I now get a different error when trying to ping / get out:

    [root@e ~]# ping6 ipv6.he.net
    PING ipv6.he.net(ipv6.he.net) 56 data bytes
    From 2607:f358:1:fed5:39::3 icmp_seq=1 Destination unreachable: Address unreachable
    From 2607:f358:1:fed5:39::3 icmp_seq=2 Destination unreachable: Address unreachable

    [root@e ~]# traceroute6 ipv6.he.net
    traceroute to ipv6.he.net (2001:470:0:64::2), 30 hops max, 80 byte packets
     1  2607:f358:1:fed5:39::3 (2607:f358:1:fed5:39::3)  0.792 ms  0.026 ms  0.017 ms
     2  2607:f358:1:fed5:39::3 (2607:f358:1:fed5:39::3)  2223.469 ms !H * *

    Hope that helps. LV-node38
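    For what it's worth, the guest-side checks would look something like this (a sketch; the gateway address is a placeholder, use whatever the route output shows):

    ip -6 addr show     # confirm which v6 block is actually assigned
    ip -6 route show    # note the default gateway
    ping6 -c 3 <gateway-from-route-output>   # placeholder; tests neighbour discovery to the gateway
    ip -6 neigh show    # FAILED entries here usually mean the upstream isn't answering ND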

  • Francisco Top Host, Host Rep, Veteran

    @Kris said: Hope that helps. LV-node38

    Yep, OK.

    Anthony just has to update the host node and I need to hook up BGP.

    Francisco

  • For everybody having problems caused by the non-working IPv6, put this in your /etc/gai.conf:

    precedence ::ffff:0:0/96 100
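    That line bumps IPv4-mapped addresses above the default v6 precedence in getaddrinfo, so dual-stacked hostnames resolve to v4 first. A quick way to confirm it took (a sketch; google.com is just an example of a dual-stacked name):

    getent ahosts google.com | head   # the IPv4 addresses should now sort ahead of the AAAAs
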
  • 24khost Member
    edited January 2013

    @Francisco let me know what you find, as we seem to be experiencing random drops in speed there and want to see if it is just us.

  • Francisco Top Host, Host Rep, Veteran

    @24khost said: @Francisco let me know what you find, as we seem to be experiencing low throughputs sometimes from fiberhub, so want to see if it is just us.

    It could just be one path. I know Rob has an extra 10 gig on the way; it's just pending the fiber being run.

    I'll get some monitors running against the Brocade soon and I'll find out exactly what's up.

    It could very well have just been a stupid amount of multicast traffic. I didn't spend a lot of time dicking around with the Brocade since it's just in layer 2 mode. I'll know more tonight for sure :)

    Francisco

  • @Francisco SSH is being extremely slow to establish a connection for me. It's taking an average of 18 seconds. Once it's connected, there are no delays at all.

  • Francisco Top Host, Host Rep, Veteran
    edited January 2013

    Put 'UseDNS no' in your sshd config file and restart SSH.
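    On a stock CentOS box that's roughly (a sketch; check whether a UseDNS line already exists first):

    grep -i usedns /etc/ssh/sshd_config      # see if it's already set
    echo 'UseDNS no' >> /etc/ssh/sshd_config
    service sshd restart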

    If that helps then your connecting ISP has some slow RDNS servers. It's also possible I need to tune named some more :)

    Let me know.

  • @Francisco said: If that helps then your connecting ISP has some slow RDNS servers. It's also possible I need to tune named some more :)

    Yes nice and fast again :-)

    It was only after the move that I noticed any slowness though, perhaps a coincidence.

  • Francisco Top Host, Host Rep, Veteran

    @Nick said: Yes nice and fast again :-)

    It was only after the move that I noticed any slowness though, perhaps a coincidence.

    Unlikely :) Please log a ticket with 'Slow DNS resolvers' as the title and link this comment :) I'll check into it once I'm home. I'm fairly sure it's just BIND wanting moar RAM. It has 500MB but I'm thinking that's simply not enough.
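    Something like this should show whether named is actually butting up against that (a sketch; the config path assumes a stock BIND install):

    ps -C named -o rss,vsz,cmd               # resident memory in KB
    grep -i max-cache-size /etc/named.conf   # the options{} knob that caps the resolver cache, if it's set at all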

    Francisco

  • Wintereise Member
    edited January 2013

    Fran, the network is still really slow. @lbft confirmed that as well. Node 19, do you want an IP PMed?

  • It was really quite slow earlier today (something like 1 MB/s over HTTP from my VPS on storage01 to my Versaweb dedi in the same DC) but just now when @Wintereise mentioned it I tried again and it seems to have picked up, hitting 13 MB/s.

  • This is quite ironic. @Francisco moved out of SJ because of network performance, and we're experiencing more issues in LAS.

    I'm getting ~1.2MB/s download speeds from my VPS (node 55).

  • Francisco Top Host, Host Rep, Veteran

    @mnpeep said: This is quite ironic.

    Well it could be a few things :)

    And sure, PM me that wint.

    I've been seeing a lot of ugly things I don't like, so I'm seeing if I can clean them out of the network.

    We've been seeing more and more DNS reflection attack attempts, so I'm wondering if they're generating enough PPS to slow some things down.
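    A rough way to eyeball it in the meantime (a sketch; eth0 is a guess for the interface):

    # top sources firing DNS queries at us, over a sample of 1000 packets
    tcpdump -nn -c 1000 -i eth0 'udp dst port 53' | awk '{print $3}' | sort | uniq -c | sort -rn | head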

    I'm working on getting SNORT set up right now to clean it out.

    It could also be a configuration issue on my primary ethernet port. I've already emailed Rob asking him to confirm his MTU settings.

    I may just rush the 10Gbit port facing FH so I can just run jumbo frames on both ends.
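    For the MTU side, the sanity check would look something like this (a sketch; 10.0.0.1 stands in for the far-side router):

    ping -M do -s 8972 -c 3 10.0.0.1   # 8972 payload + 28 header bytes = a 9000-byte packet; -M do forbids fragmentation
    ping -M do -s 1472 -c 3 10.0.0.1   # same idea at a standard 1500 MTU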

    Francisco

  • I pmed ya so you can make sure, but things seem to be getting better slowly :x

  • Francisco Top Host, Host Rep, Veteran

    @Wintereise said: I pmed ya so you can make sure, but things seem to be getting better slowly :x

    I'm not changing anything; I'm chipping away at my TODO list first.

    I'm just picking out a 2nd 10gig card right now. I should have snagged it when I was in LAS, ah well.

    Francisco

  • ~$ wget http://cachefly.cachefly.net/100mb.test
    --2013-01-23 22:13:55--  http://cachefly.cachefly.net/100mb.test
    Resolving cachefly.cachefly.net... 205.234.175.175
    Connecting to cachefly.cachefly.net|205.234.175.175|:80... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 104857600 (100M) [application/octet-stream]
    Saving to: `100mb.test'
    
    100%[======================================>] 104,857,600 24.8M/s   in 4.2s
    
    2013-01-23 22:13:59 (24.1 MB/s) - `100mb.test' saved [104857600/104857600]
    

    No network problems here? lv-node-29.

  • Francisco Top Host, Host Rep, Veteran

    @mojeda said: No network problems here? lv-node-29.

    I don't think it's so much the nodes as just busy times for our ports. LAS' network is just faster, so we're getting to a point we were waiting to see. I'll hear from Rob come morning about where we can go from here. I've got a few ideas going already.

    Working on v6 right now.

    Francisco

  • @mojeda Pipe the output to /dev/null
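    i.e. something along the lines of (same CacheFly test file as above):

    wget -O /dev/null http://cachefly.cachefly.net/100mb.test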

  • mojeda Member
    edited January 2013

    @MiguelQ said: @mojeda Pipe the output to /dev/null

    /lazy

  • ith Member

    down again

  • Francisco Top Host, Host Rep, Veteran

    Nothing's down over here?

    I've been working for hours now.

    I know a node or two threw up but that's about it.

    Francisco

  • @ith said: down again

    Looks all good from http://buyvmstatus.com/

    Just that one LV KVM node seems to go up and down, possibly a non-issue.

  • CVPS_Chris Member, Patron Provider

    @Francisco said: I may just rush the 10Gbit port

    Does FiberHub even have 10Gb?

  • Francisco Top Host, Host Rep, Veteran
    edited January 2013

    @CVPS_Chris said: Does FiberHub even have 10Gb?

    They have like 60Gbit+ <_< Abovenet is due sometime in the next month or so, so that's another 10Gbit on top of that.

    I'm just waiting for Rob to let me know what connector he needs from me. I'm guessing it's just SFP+.

    When we were in there in Oct they had at least 40 gig in use.

    They've got some huge streaming guy there that accounts for a lot of it, and he's getting ready to ramp up a crapload. HE also has their Vegas POP in there, in the MMR.

    EDIT - Fixed the month we were in there previously

    Francisco

  • Francisco Top Host, Host Rep, Veteran

    V6 has been brought online in both LAS & NY.

    I just finished running a mass update script to fix all Stallion IP entries, so they should be correct now. I'll likely run another one copying over all rDNS entries in a little bit.

    Francisco

  • CVPS_Chris Member, Patron Provider

    @Francisco said: They have like 60Gbit+

    Very doubtful, but anyway, I think it was a bad move on your part as the problems you have been seeing will not go away. FiberHub alone only has 22,000 IPs, pretty much the same size as myself.

    Small time DC, small time commits = problems

  • Francisco Top Host, Host Rep, Veteran

    @CVPS_Chris said: Very doubtful, but anyway, I think it was a bad move on your part as the problems you have been seeing will not go away. FiberHub alone only has 22,000 IPs, pretty much the same size as myself.

    Small time DC, small time commits = problems

    EGI only had like 20Gbit, and of that only a percentage was for the rest of their clients. They have at least one big client that uses like 10 gig or something close to that.

    Speeds are better and the limits we're hitting now are because we need to upgrade our own port, not something on the DC's end :)

    We got more rack space (a cage now), our new brocade is online and a few extra nodes are coming up for sale.

    I wasn't happy to have to do the move but I'm happy with the end results :)

    Francisco

  • CVPS_Chris Member, Patron Provider

    @Francisco said: We got more rack space (a cage now), our new brocade is online

    If I recall right, the brocade you got doesn't even support 10Gb connectivity, and as for the cage I'm sure they were dying for someone to take the space.

    The mix of BW is worse than EGI's too.

  • Francisco Top Host, Host Rep, Veteran

    @CVPS_Chris said: If I recall right, the brocade you got doesn't even support 10Gb connectivity, and as for the cage I'm sure they were dying for someone to take the space.

    It has 2 10gig ports on it <_<

    @CVPS_Chris said: The mix of BW is worse than EGI's too.

    Sure, if we got EGI's full blend. Fact of the matter, though, is that EGI was forcing our routes to only go over HE. Whenever HE had issues we felt it heavily. We had VERY minimal nLayer routes and almost no GBLX routes.

    My usual example is this:

    • Cloudflare uses nLayer in San Jose.
    • HE & EGI both have nLayer in their mix.
    • Our route went FRANTECH->EGI->HE->NLAYER->CF.

    EGI always said they would 'look into it', but the fact of the matter is that the only reason I'd ever take HE to get to nLayer is because they were forcing my routes to go that way. All else being equal, BGP prefers the shortest AS path, and going direct to nLayer would have been the shorter route.
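    A rough way to see which ASes a path actually crosses (a sketch; it assumes the Linux traceroute that supports -A for AS lookups, and the hop IP is a placeholder):

    traceroute -A www.cloudflare.com         # -A tags each hop with its origin AS
    whois -h whois.cymru.com " -v <hop IP>"  # or map a single hop to an ASN via Team Cymru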

    Francisco
