Alright BuyVM Status, I've had just about enough...

MrOwen Member
edited March 2013 in General

I'm surprised no one else has posted this problem yet, but several monitors on BuyVM Status seem to be malfunctioning. Take, for example, http://buyvmstatus.com/info/22. I'm on that node and I have not had that kind of downtime (no downtime, actually). Promise. The same thing appears to be happening for LV9 and LV16. Anyone else notice this before?

Comments

  • The footer of the site says:

    This site is not operated by, sponsored by or affiliated with BuyVM in any way. Information may be inaccurate.

  • @Bogdacutuu said: The footer of the site says:

    This site is not operated by, sponsored by or affiliated with BuyVM in any way. Information may be inaccurate.

    Oh, I'm aware. It's still kind of annoying though.

  • Francisco Top Host, Host Rep, Veteran
    edited March 2013

    Those nodes are decom'd for the time being.

    09, 16, & 20 all have bad RAM, and depending on the reboot they'll come up with different amounts of it (anywhere from 40 to 72GB).

    As of this week Anthony pulled all the users off them to perm. new homes until I head back to Vegas and replace all the sticks in them.

    On our trip in January I simply didn't have enough time to go through and finish replacing RAM & CPU's for the upgrades so a few nodes kinda went off the deep end.

    Fear not, I have a trip this month where I'll address those amongst a few other things (a new router for instance).

    Francisco

  • @Francisco said: As of this week Anthony & I pulled all the users off them to perm. new homes until I head back to Vegas and replace all the sticks in them.

    Oh shit son, I'm on a new node! Wow, never thought the problem would have been my server moving to a different node. Thanks for the info Fran!

  • kryps Member

    @Francisco Just curious: Do you use ECC RAM?

    -- kryps

  • @kryps

    Yes, of course they do

  • Francisco Top Host, Host Rep, Veteran
    edited March 2013

    @kryps said: @Francisco Just curious: Do you use ECC RAM?

    -- kryps

    You have to on 55xx series CPU's <_<

    ECC registered.

    The ECC can only do so much before the motherboard flat out rejects the stick.

    Sad part is a few of the nodes will report the bad stick and say it's going to remove it... then doesn't.

    I'm going for almost a full week this time just to make sure I cover everything that I need.

    Also going to go through updating the RAID card drivers on our Adaptec 6805's. It seems the 28,000 series driver is pretty meh and is performing really poorly. I'm compiling a new kernel right now with the latest 29,xxx driver, so we'll see what that does. If it doesn't help, I'll simply next-day in a bunch of 5805's and drop those in quickly.

    Francisco
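For readers curious how the corrected vs. uncorrected ECC errors described above actually surface on a Linux host: the kernel's EDAC subsystem exposes per-memory-controller error counters under sysfs. A minimal sketch of reading them (the `read_edac_counts` helper name is invented for this example, and none of this is BuyVM's actual tooling):

```python
# Sketch: tally corrected (ce_count) and uncorrected (ue_count) ECC error
# counts from the Linux EDAC sysfs tree. The mc*/ce_count layout is the
# standard EDAC interface; "read_edac_counts" is a made-up helper name.
from pathlib import Path

def read_edac_counts(edac_root="/sys/devices/system/edac/mc"):
    """Return {controller_name: (corrected, uncorrected)} from EDAC sysfs."""
    counts = {}
    for mc in sorted(Path(edac_root).glob("mc*")):
        ce = int((mc / "ce_count").read_text().strip())
        ue = int((mc / "ue_count").read_text().strip())
        counts[mc.name] = (ce, ue)
    return counts

if __name__ == "__main__":
    # On a box without EDAC loaded this simply prints nothing.
    for mc, (ce, ue) in read_edac_counts().items():
        print(f"{mc}: {ce} corrected, {ue} uncorrected")
```

On a healthy node both counters should sit at 0; a steadily climbing ce_count is the usual early warning that a stick is on its way out, well before the board starts rejecting it outright.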

  • @Francisco said: RAID card drivers on our Adaptec 6805's.

    How are the Adaptec compared to LSI?

  • Francisco Top Host, Host Rep, Veteran

    @concerto49 said: How are the Adaptec compared to LSI?

    The Adaptecs are pretty decent but require a lot of love. The 5805's are really solid cards and give us 0 issues, whereas the 6805's have been a very hate/hate relationship.

    I've got some testing already in place that looks to be in our favour. If it doesn't work out, we're simply ordering a huge stack of 5805's and mass replacing.

    The 7805's are supposed to be really good but since we aren't running any pure SSD nodes it isn't an affordable option.

    Francisco

  • vld Member

    @MrOwen said: but several monitors on BuyVM Status seem to be malfunctioning

    I expect some kind of apology
    /butthurt

  • @Francisco said: The Adaptecs are pretty decent but require a lot of love.

    Thanks for the response. Been using LSI all this time and hear most others are too, so it's good to hear the Adaptecs hold up as well.

  • @Francisco said: I'm going for almost a full week this time just to make sure I cover everything that I need.

    Vegas, full week... sounds dangerous =D

  • jar Patron Provider, Top Host, Veteran

    @Francisco said: head back to Vegas

    @Francisco said: I'm going for almost a full week this time just to make sure I cover everything that I need

    Right, that's why you're going to Vegas for a week ;)

  • Damian Member
    edited March 2013

    @Francisco said: The Adaptecs are pretty decent but require a lot of love.

    Is the price break for the Adaptecs value enough to warrant the necessary love?

    (edit) That was a terrible sentence. Let's try again:

    Are the Adaptecs cheap enough versus the LSI cards that giving the extra love is not a problem?

  • Francisco Top Host, Host Rep, Veteran

    @Damian said: Is the price break for the Adaptecs value enough to warrant the necessary love?

    Well, when we bought them we paid $500/ea. If you shop around you can get 5805's for < $200, which is fine since we're still only using mechanical drives.

    You can get some big $600 LSI cards, but it's only worth it if you're doing pure SSD RAIDs or you're dropping CacheCade into it.

    For us? The 5805's work well and we do our caching in software. Our storage nodes run dual 5805's w/o any performance issues and people thrash those things all day long.

    I'm still not 100% sure this new kernel + drivers helps with it. I've rolled together a new kernel with the latest aacraid driver, so we'll see, I guess. I'm waiting to see if Anthony wants to put it in place this weekend just so we know what work we need to get done.

    Francisco
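The "28,000 series" vs. "29,xxx" distinction in this thread refers to the build number embedded in the aacraid driver's version string, as reported by `modinfo aacraid` or the boot log. A quick sketch of checking whether the loaded build is new enough; the helper names are invented, and the `1.1-5[28000]-ms` version shape is an assumption based on typical aacraid strings (the exact format can vary by kernel):

```python
# Sketch: compare aacraid driver builds by the bracketed build number in
# the version string (assumed format, e.g. "1.1-5[28000]-ms"). Helper
# names "aacraid_build" and "is_at_least" are made up for this example.
import re

def aacraid_build(version: str) -> int:
    """Extract the bracketed build number from an aacraid version string."""
    m = re.search(r"\[(\d+)\]", version)
    if not m:
        raise ValueError(f"no build number found in {version!r}")
    return int(m.group(1))

def is_at_least(version: str, target_build: int) -> bool:
    """True if the driver's build number meets or exceeds target_build."""
    return aacraid_build(version) >= target_build

if __name__ == "__main__":
    print(is_at_least("1.1-5[28000]-ms", 29000))  # older 28,000-series build
    print(is_at_least("1.1-7[29800]-ms", 29000))  # newer 29,xxx build
```

Feeding it the string from `cat /sys/module/aacraid/version` (present when the module declares a version) would tell you at a glance whether the new kernel actually picked up the newer driver.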
