
(eBay+Colocation) vs Dedicated Offers ?


Comments

  • shovenose Member, Host Rep

    @dano sounds like a great seller, who is it?

  • SP Member

    We are actually doing something similar right now. We purchased a PowerEdge 1855 blade system, and are currently stress testing to make sure all is well. So far, so good. Besides that, we are going to have a good backup plan in place and a few spare blades ready to go in.

    Unfortunately I won't be able to offer huge hard drive space with my VPSes, but we are going for low end. If this test works and we actually sell some VPSes, we are going to invest in more servers (new this time) and expand from there.

  • dano Member

    @shovenose - http://myworld.ebay.com/mobile_computer_pros/

    @Sperryman - I also did something similar, as I knew the day would come when a motherboard/RAID card/Ethernet device would die and I would have to move the procs/disks/mem to another machine in order to quickly recover from the hardware failure. I have found spare chassis/mobos for pretty good prices, have run them through burn-in, and have also been forced to press them into production at times.

  • Anyone have luck with MrRackables? He always seems to have mucho Supermicro stock:

    http://stores.ebay.com/MrRackables/_i.html?_nkw=supermicro&submit=Search&_sid=955087150

  • qps Member, Host Rep
    edited March 2013

    @bdtech said: Anyone have luck with MrRackables?

    We bought 24 servers from him. A lot of them had problems. Some of them were also missing parts.

    He shipped us replacement parts, but shipped the wrong replacement parts and then wouldn't rectify the situation.

    Within a few days, some of them developed additional problems.

    I think we ended up with about 18 working servers. The others we parted out.

    We've had a lot of success with Belmont Trading (eBay: btregv; they have a few other eBay usernames too). Their servers have been solid overall, and whenever there's been a problem they've been willing to replace or refund.

  • @BronzeByte said: E3's weren't designed for VPS nodes, E5's were

    Based on what?

  • pcan Member

    @BronzeByte said: You can build four E5 nodes with 512GB RAM for around $10k, that is overpriced Dell garbage

    E7 processors have high-availability features. The ECC that protects the RAM is more powerful: if an uncorrectable ECC error is found, the affected virtual machine is halted but the hypervisor will continue to run (this feature needs hypervisor support). The R910 server also supports RAID 1 over the RAM (memory mirroring), so you can have 2 TB of RAM without mirroring or 1 TB with mirror protection.

  • @pcan said: if an uncorrectable ECC error is found, the affected virtual machine is halted

    That's configuration, not hardware...

  • Damian Member
    edited March 2013

    @BronzeByte said: That's configuration, not hardware...

    No. Read http://www.intel.com/content/dam/www/public/us/en/documents/white-papers/xeon-e7-family-ras-server-paper.pdf from "Software-Assisted Extensibility of Machine Check Architecture (MCA) Recovery" onward.

    This is the first time this feature has been present in x86 processors.

  • erhwegesrgsr Member
    edited March 2013

    @Damian said: No. Read http://www.intel.com/content/dam/www/public/us/en/documents/white-papers/xeon-e7-family-ras-server-paper.pdf from "Software-Assisted Extensibility of Machine Check Architecture (MCA) Recovery" onward.

    From the whitepaper:

    System works in conjunction with the BIOS, firmware, and OS to recover or restart processes and continue normal operation

    That's what all ECC does

  • Damian Member
    edited March 2013

    @BronzeByte said: That's what all ECC does

    I know. What ECC does isn't what we were discussing; we were discussing that E7 processors can signal all the way to the VM's OS that an uncorrectable error has been detected:

    In a VMM, multiple virtual machines share the silicon platform's resources, with each virtual machine (VM) running an OS and applications. In systems without MCA recovery, an uncorrectable data error would cause the entire system and all of its virtual machines to crash, disrupting multiple applications.

    However, with the Intel Xeon processor E7 family, when an uncorrectable data error is detected, the system can isolate the error to only the affected VM. Here the hardware notifies the VMM, which then attempts to retire the failing memory page(s) and notify affected VMs and components.
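
    A minimal sketch (not E7-specific) of watching ECC error counts yourself on a Linux node, using the kernel's EDAC sysfs interface; this assumes an EDAC driver is loaded for your memory controller, and the paths can vary by kernel:

        # edac_counts.py - print corrected/uncorrected ECC error counters
        # exposed by the Linux EDAC subsystem. If /sys/devices/system/edac/mc
        # is absent (no EDAC driver loaded), nothing is printed.
        from pathlib import Path

        EDAC_ROOT = Path("/sys/devices/system/edac/mc")

        for mc in sorted(EDAC_ROOT.glob("mc[0-9]*")):
            ce = (mc / "ce_count").read_text().strip()  # corrected (ECC fixed it)
            ue = (mc / "ue_count").read_text().strip()  # uncorrected (data was lost)
            print(f"{mc.name}: corrected={ce} uncorrected={ue}")

    A steadily climbing corrected count on a used board is exactly the kind of thing to catch during burn-in, before an uncorrected error takes down a node (or, without MCA recovery, every VM on it).
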
  • @Damian said: What ECC does isn't what we were discussing

    Isn't it quite obvious? If it can't recover, it throws an error to the affected process; under KVM that will make the whole VM crash, under OpenVZ only a single process.

  • erhwegesrgsr Member
    edited March 2013

    Not sure which one I am, I feel stupid now

  • @BronzeByte said: I feel stupid though

    I wouldn't; there's no need.

    @Sperryman said: We are actually doing something similar right now. We purchased a PowerEdge 1855 blade system, and are currently stress testing to make sure all is well. So far, so good. Besides that, we are going to have a good backup plan in place and a few spare blades ready to go in.

    I had two of these babies and they are an excellent choice even today!

    I am glad there is a brave soul out there convinced of the validity of such a business plan.

    Good luck @Sperryman

  • SP Member

    Thanks, it's been a learning experience for sure. Bought the blade system, then had to get a DRAC and a digital KVM (bought analog to start). After upgrading the firmware, it seems to be smooth sailing. Working with a very small DC here to prove the concept, then hoping to provision our own fiber connection and host ourselves. If the concept doesn't come to fruition, we aren't in any contracts and can walk away with our equipment. No leases or payments; we just pay for what we use and keep it running. Have a spare blade already, but going to buy a couple more and another power supply for emergencies.

  • earl Member
    edited March 2013

    @Sperryman said: Have a spare blade already, but going to buy a couple more and another power supply for emergencies.

    Just wondering what type of electrical plug the blades use? Can they use the 220v outlet that is normally used for a clothes dryer? Similar to the pic:

    [image: 220v dryer-style outlet]

  • May I ask how much you paid, and where you bought the backplane and each server blade?

  • What are the pros and cons of using blades? I can't imagine it's too common in the LEB market.

  • Damian Member
    edited March 2013

    @bdtech said: What are the pros and cons of using blades? I can't imagine it's too common in the LEB market.

    A couple of pros are that you get a much higher density per rack unit, and the blades themselves are quite cheap: their resale value is low because blades only fit into blade enclosures.

    A rather severe con for the American market is that most (all?) blade enclosures run on 240v power, which, for reasons that continue to escape me, is not common in American datacenters, if available at all.

  • RyanD Member

    @Damian said: A couple of pros are that you get a much higher density per rack unit, and the blades themselves are quite cheap: their resale value is low because blades only fit into blade enclosures.

    A rather severe con for the American market is that most (all?) blade enclosures run on 240v power, which, for reasons that continue to escape me, is not common in American datacenters, if available at all.

    In the US we'll use 208v. It's very common. In fact every rack of dedicateds we deploy is 208v.
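
    (For the curious: 208v is the phase-to-phase voltage of the standard US three-phase 120v wye service, 120v × √3 ≈ 208v, which is why datacenters offer it rather than the residential 240v split-phase behind a dryer outlet.)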

    The problem with blades is that they are expensive, and power density is expensive. You will not see any cost savings just because you can increase the rack density; in fact, you'll likely pay a higher operational cost per unit because of the increased cooling charges of a high-density (power) deployment.

    Blades have been notorious for being less power-efficient than their componentized counterparts.

    Sure, they have the added advantage of centralized management and modular I/O and network modules.

    Typically the only case in which we recommend blade deployments to clients is when your physical floor space costs are significantly higher than normal.

    We instead generally recommend SuperMicro's MicroCloud or FatTwin solutions, which offer nearly the same density, much better power usage, and a wider variety of storage options.

  • earl Member

    @RyanD said: We instead generally recommend SuperMicro's MicroCloud or FatTwin solutions

    There is also the Supermicro Twin: a 1U form factor with 2 servers.

  • RyanD Member
    edited March 2013

    @earl said: There is also the Supermicro Twin: a 1U form factor with 2 servers.

    Yes, that is their first-generation 1U Twin server. It has a single, shared, 1000W non-hot-swap power supply.

    So, to do any maintenance work on either node, you have to power down both. Should the PSU die, you lose both.

    They are dirt cheap on the second-hand market, but they are power hogs: you can expect 4-6A @ 120v from one of those dual 5300/5400-series SM 1U Twins. If you use them knowing their faults, they are certainly cheap hardware to pick up.
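
    To put those amp figures in perspective, here's a back-of-the-envelope sketch (the $/kWh rate is an assumed blended power+cooling figure, not from this thread; substitute your facility's billing):

        # power_cost.py - rough monthly power cost for the 1U Twin draw above
        VOLTS = 120
        RATE_USD_PER_KWH = 0.12  # assumption: blended power+cooling rate

        for amps in (4, 6):
            watts = amps * VOLTS                 # P = I * V
            kwh_month = watts / 1000 * 24 * 30   # 720 hours/month
            cost = kwh_month * RATE_USD_PER_KWH
            print(f"{amps}A @ {VOLTS}v = {watts}W ~ {kwh_month:.0f} kWh/mo ~ ${cost:.0f}/mo")

    At that assumed rate, a single second-hand Twin lands around $40-60/month for power alone, which eats into the dirt-cheap purchase price fairly quickly.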
