
★ VirMach ★ Black Friday & Cyber Week 2018 ★ RAID 10 SSD ★ OpenVZ & KVM ★ Check inside for offers!


Comments

  • VirMach Member, Patron Provider
    edited June 2021

    Packing stuff up for Amsterdam, NYC Metro and the rest of Los Angeles. Storage node for LA still in limbo.

    We're also short on IPs in Los Angeles, and still none in Amsterdam. We found a datacenter partner that would provide IPv4 for many of the other locations as we expand, but Los Angeles and Amsterdam are with Psychz right now and IPv4 didn't work out how we initially wanted, so we need to lease them elsewhere. We should still have some capacity.

    As for RAID, the supplier for the controllers is raising the price on us 2x, and honestly we still don't know much about their reliability. RAM prices have also gone up, as have NVMe prices (at least temporarily). We may need to consider options without RAID. Feel free to provide feedback, we're all ears. Right now we're considering heavily increasing the frequency of backups and doing our own backup system, since SolusVM's ends up having issues. We'd obviously lose the benefit of not losing any data, but even with RAID a failed drive would still require that datacenter hands open up the controller and hopefully remove the correct NVMe SSD, and then we'd have to hope the controller rebuilds properly. The thing is, though, with RAID on these controllers we of course increase the chance of everyone's data being lost if the controller fails. Even though they're physical, they're not real "hardware" RAID.

    @JabJab said: @VirMach you hired new staff for support? Something changed? You involucrated tickets?

    We reduced ticket quantities by about 86% with automation, a knowledgebase, and bulk processing.

    @FAT32 said: I got a feeling our double reward wouldn't even happen after COVID ends

    I opened these up recently. I just have multiple versions from working from the office and from home, and I'm trying to locate/merge them together so I know who we already gave prizes out to; otherwise we might just have to start over and some people might get doubly doubly doubly prizes.

    @FrankZ said: I like the automated RDNS feature. It works well and updates fast.

    This might temporarily go away with Ryzens since it's currently coded to work with ColoCrossing only.

    @FrankZ said: Noticed a drive array issue the other day, wait state was averaging over 30%. Did not ticket because they have been good at catching these things on their own lately. Sure enough they fixed it within a day.

    Doing better at finding and fixing the problems quickly, and not running any crazy sales, may be the reason for the lack of 'VirMach do not response to tickets' threads.

    Yes, this is a huge part of it as well.

    Thanked by 1 FrankZ
  • FAT32 Administrator, Deal Compiler Extraordinaire

    @VirMach said:
    We're also short on IPs in Los Angeles, and still none in Amsterdam. We found a datacenter partner that would provide IPv4 for many of the other locations as we expand, but Los Angeles and Amsterdam are with Psychz right now and IPv4 didn't work out how we initially wanted, so we need to lease them elsewhere. We should still have some capacity.

    What if you rent lots of IPs from OVH first and tunnel them? I know it will be far from optimal, but it seems good enough for the transition period? (I am not sure how long their IPv4 one-time fee will last, but maybe you can temporarily use it if needed?)

    I believe all hardware prices are starting to go down now, due to the crypto crash and stuff. All the best on the transition process.

  • VirMach Member, Patron Provider

    @FAT32 said:

    @VirMach said:
    We're also short on IPs in Los Angeles, and still none in Amsterdam. We found a datacenter partner that would provide IPv4 for many of the other locations as we expand, but Los Angeles and Amsterdam are with Psychz right now and IPv4 didn't work out how we initially wanted, so we need to lease them elsewhere. We should still have some capacity.

    What if you rent lots of IPs from OVH first and tunnel them? I know it will be far from optimal, but it seems good enough for the transition period? (I am not sure how long their IPv4 one-time fee will last, but maybe you can temporarily use it if needed?)

    I believe all hardware prices are starting to go down now, due to the crypto crash and stuff. All the best on the transition process.

    We're still trying to minimize the annoyance for customers. We have some emergency backup plans should we need them, but it ends up being best to have IPs from a reliable source we know will continue leasing them to us.

    We decided today to move forward with more space outside of Psychz with the same provider we'll be using for other locations (who uses Inap.) They have enough IPv4 for us for now.

    Further Update

    Servers will most likely be shipped out this week, in the following quantity:

    • 5 to Amsterdam (2 Windows, 3 Linux)
    • 11 to Los Angeles (1 Windows, 10 Linux)
    • 14 to NYC Metro (2 Windows, 12 Linux)

    Buffalo will most definitely end up in Secaucus, as will Piscataway. Secaucus is basically 15 minutes from Times Square, so it's super close to Manhattan/NYC and it's considered part of the NYC Metro area, versus 45 minutes for Piscataway.

    Then the next batch of servers in 1-3 weeks:

    • 6 to San Jose
    • 3 to Atlanta
    • 4 to Dallas
    • 3 to Seattle

    Then we're still trying to do Chicago but are stuck on IPv4. We might do Chicago as IPv6-only as a little experiment; if that happens, customers will get an email and a price reduction if they decide to stay in Chicago.

    Japan would be later. We're also looking into bringing Phoenix back or doing Las Vegas/Colorado instead. Frankfurt might come back but don't hold your breath.

    Thanked by 3 FAT32, FrankZ, _MS_
  • FAT32 Administrator, Deal Compiler Extraordinaire

    @VirMach said:
    We decided today to move forward with more space outside of Psychz with the same provider we'll be using for other locations (who uses Inap.) They have enough IPv4 for us for now.

    Great to hear that :) So most of the IPs will be leased instead of bought? I have heard IPv4 prices have increased a lot lately.

    Further Update

    Servers will most likely be shipped out this week, in the following quantity:

    • 5 to Amsterdam (2 Windows, 3 Linux)
    • 11 to Los Angeles (1 Windows, 10 Linux)
    • 14 to NYC Metro (2 Windows, 12 Linux)

    Buffalo will most definitely end up in Secaucus, as will Piscataway. Secaucus is basically 15 minutes from Times Square, so it's super close to Manhattan/NYC and it's considered part of the NYC Metro area, versus 45 minutes for Piscataway.

    Is it colocation-based, where you own the servers, or are most of the servers leased? In addition, is there any public information about the number of nodes you have (to get a sense of the progress of the migration)? (Actually this can be guessed by looking at the Network Status, since the node numbers seem to be sequential.)

    Japan would be later. We're also looking into bringing Phoenix back or doing Las Vegas/Colorado instead. Frankfurt might come back but don't hold your breath.

    I believe it is best to focus on the migration first, and to focus on existing customers before getting more new customers.

  • VirMach Member, Patron Provider

    @FAT32 said: Great to hear that :) So most of the IPs will be leased instead of bought? I have heard IPv4 prices have increased a lot lately.

    Buying them is absolutely out of the question; everyone's just price-gouging at this point while hoarding (more than usual).

    @FAT32 said: Is it colocation-based, where you own the servers, or are most of the servers leased? In addition, is there any public information about the number of nodes you have (to get a sense of the progress of the migration)? (Actually this can be guessed by looking at the Network Status, since the node numbers seem to be sequential.)

    We have about 325 nodes. Most of them are partially empty by now, as we've locked them off/wound them down without any major sales. That means 1 Ryzen node = slightly under 2 of our current nodes. RAM per Ryzen node is 128GB, and RAM on our current nodes is 192-256GB, but all our current nodes are pretty much bottlenecked on the nearly 10-year-old CPUs and we never oversold RAM, so actual RAM usage is probably an "average" of 40%, and that average includes inactive/buffer/cache.
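
    For a rough sense of scale (back-of-the-envelope only, using the ~2:1 consolidation above and the shipment counts from the earlier post):

    ```python
    # Back-of-the-envelope consolidation math from the figures above.
    import math

    current_nodes = 325
    ryzen_target = math.ceil(current_nodes / 2)   # 1 Ryzen node ~ 2 current nodes -> 163

    first_batches = 5 + 11 + 14 + 6 + 3 + 4 + 3   # the two shipment lists above = 46
    print(f"~{ryzen_target} Ryzen nodes to replace {current_nodes} current nodes")
    print(f"first {first_batches} nodes = ~{first_batches / ryzen_target:.0%} of the migration")
    ```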

    @FAT32 said: Is it colocation-based, where you own the servers, or are most of the servers leased?

    These are owned.

    @FAT32 said: I believe it is best to focus on the migration first, and to focus on existing customers before getting more new customers.

    We still have a few people on Frankfurt, enough for one or two servers.

    Thanked by 3 FAT32, FrankZ, _MS_
  • FAT32 Administrator, Deal Compiler Extraordinaire

    @VirMach said:
    We have about 325 nodes. Most of them are partially empty by now, as we've locked them off/wound them down without any major sales. That means 1 Ryzen node = slightly under 2 of our current nodes. RAM per Ryzen node is 128GB, and RAM on our current nodes is 192-256GB, but all our current nodes are pretty much bottlenecked on the nearly 10-year-old CPUs and we never oversold RAM, so actual RAM usage is probably an "average" of 40%, and that average includes inactive/buffer/cache.

    That's a lot... but as you have mentioned, the node count will theoretically be cut down by half, so the 46 nodes coming in 3 weeks are basically 46/163 ≈ 28%, after so many months/years.

    And since you mentioned the hardware is owned, kudos for forking out so much money at the beginning for it. Hope everything goes well with the migration.


    To: VirMachGang - Guys, let's post some GIFs and encourage VirMach through this hard time

  • randomq Member

    VirMach, working late into the night on the new servers! You can do it, good job!

    Thanked by 2 FrankZ, imok
  • JabJab Member

    Thanked by 2 randomq, imok
  • FrankZ Veteran

    Thanked by 2 randomq, imok
  • Thanked by 3 FrankZ, randomq, imok
  • FAT32 Administrator, Deal Compiler Extraordinaire

    @randomq said:
    VirMach, working late into the night on the new servers! You can do it, good job!

    VirMach is a girl!? :open_mouth:

    Thanked by 1 randomq
  • FAT32 Administrator, Deal Compiler Extraordinaire

    I can try my best to help though... if you don't mind my tiny hands

    Thanked by 2 FrankZ, randomq
  • alilet Member

    So when is the next "page" update? You know, in the SEO world people wait for the new Google update; here on LET we need to wait for the next page update, when the number of posts on a page will change.

  • @FAT32 said:

    @randomq said:
    VirMach, working late into the night on the new servers! You can do it, good job!

    VirMach is a girl!? :open_mouth:

    The concept of gender is so outdated. VirMach can be anything xe wants to be!

    Thanked by 2 FAT32, storm
  • FrankZ Veteran

    @VirMach said: As for RAID ...

    IMHO there is no advantage to RAID if it is more likely to cause more problems than it solves. Personally I am fine being on a non-RAID node, given the prices that I am paying.

  • imok Member

    Hello again. Any positive news about anything?

  • randomq Member
    edited June 2021

    @VirMach Oh yeah, I can live without RAID, as long as OS reinstall works. Backups or an A/B drive setup would also be cool. Then I could choose the boot drive and sync my system as desired. Or use both for more space and no redundancy.

    Thanked by 1 FrankZ
  • VirMach Member, Patron Provider

    @randomq said: @VirMach Oh yeah, I can live without RAID, as long as OS reinstall works. Backups or an A/B drive setup would also be cool. Then I could choose the boot drive and sync my system as desired. Or use both for more space and no redundancy.

    What we're starting to do is ensure the newer chassis we order have 4 hotswap bays up front. Outside of the first dozen or so nodes, this means all of our nodes should have these available, and therefore some time in the future we'll add in hard drives, most likely 4x5TB, and use them either as some kind of additional disk space (offered for free or for a small fee) and/or for snapshots or block storage, if we ever work in that feature.

    Note: these hard drives will be independent of the hard drive that goes in each node, which will not be hotswappable and will instead be used for the daily backups.

    Later on, we still plan on also adding external backups that are done weekly or bi-weekly for disaster recovery. We haven't finalized the details yet.
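
    To give an idea of the kind of "own system" we mean (a minimal sketch only; the paths, image naming, and plain file copy here are hypothetical stand-ins, not our actual tooling):

    ```python
    #!/usr/bin/env python3
    """Minimal nightly backup sketch: copy each VM disk image to the backup
    drive and keep the last few copies. Illustration only; paths and naming
    are hypothetical. Real tooling would snapshot before copying."""
    import glob
    import os
    import shutil
    import time

    SRC_DIR = "/var/lib/vz/images"   # hypothetical VM image location
    DST_DIR = "/mnt/backup-hdd"      # the per-node (non-hotswap) backup drive
    KEEP = 3                         # retain the last 3 nightly copies

    def prune(name):
        copies = sorted(glob.glob(os.path.join(DST_DIR, f"{name}.*")))
        for old in copies[:-KEEP]:   # drop everything but the newest KEEP
            os.remove(old)

    def backup_all():
        stamp = time.strftime("%Y%m%d")
        for img in glob.glob(os.path.join(SRC_DIR, "*.qcow2")):
            name = os.path.basename(img)
            shutil.copy2(img, os.path.join(DST_DIR, f"{name}.{stamp}"))
            prune(name)

    if __name__ == "__main__":
        backup_all()
    ```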

    @FrankZ said:

    @VirMach said: As for RAID ...

    IMHO there is no advantage to RAID if it is more likely to cause more problems than it solves. Personally I am fine being on a non-RAID node, given the prices that I am paying.

    I think what really needs to happen is [A] pricing becomes affordable again, and [B] NVMe RAID gets tried and tested. For the latter, we do still have about a dozen controllers. These are the Highpoint SSD7103; the LSI 9460-16i would otherwise be required for four PCIe x4 NVMe drives, and those run about $1,000 each, not including any of the required adapters or bays, which end up running something like another $300-400 last I checked.

    Outside of the above two options, AMD-RAID is pretty terrible and also not even really supported for Linux on newer motherboards, and the other software RAID options are either proprietary and expensive and/or utilize a lot of CPU and memory just to function at some level.

    The Highpoint controller has been tested thoroughly and does appear to be "real" RAID to some degree. It has two PEX8747 chips onboard and does seem to actually reach speeds similar to the combination of the four NVMe SSDs that go in it. We still don't know what will really happen after a failure. We spoke with their engineers/tech team and didn't really get a reassuring response. I'm unsure if the system is built to be able to recover from a controller failure. It does have support for rebuilding, but again, we haven't tested failure, and the MTBF is concerning because it's significantly lower than that of the SSDs that go in it. There seems to be potential for data loss when uninstalling/re-installing drivers and/or moving it to a new system. Then there appears to be a chance of Linux falling back to its default NVMe support, which means potential data loss if, say, there are disk issues that keep the driver from loading properly.
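
    For what it's worth, that fallback is easy to spot from userspace (a rough sketch assuming a typical Linux sysfs layout; not part of any shipped tooling):

    ```python
    #!/usr/bin/env python3
    """Rough sketch: print which kernel driver is bound to each NVMe
    controller, to spot a silent fallback to the stock nvme driver.
    Assumes a typical Linux sysfs layout."""
    import glob
    import os

    for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
        # /sys/class/nvme/nvmeX/device is the underlying PCI device;
        # its 'driver' entry is a symlink to the bound driver.
        link = os.path.join(ctrl, "device", "driver")
        try:
            driver = os.path.basename(os.readlink(link))
        except OSError:
            driver = "none bound"
        print(f"{os.path.basename(ctrl)}: {driver}")
    ```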

    The drivers are open source, which is good. And they do seem to be nearly full-featured and comparable to some degree to MegaRAID. However, I have concerns about future compatibility issues after kernel version changes, driver errors after updates, or worse, potential issues with updating drivers that result in the data loss described previously.

    So, yes, it may cause more problems than it solves. However, it does give pretty kick-ass speeds assuming it is not abused. What we could potentially do is test it thoroughly and, if the opportunity arises, ship out the controllers with new NVMe SSDs in them later down the line, have them placed in servers, and migrate customers over from the NVMe SSDs already in there, which should only take an hour or two. This would most likely occur if we realize it could actually be beneficial, and it would be done before reaching an age where the NVMe SSDs may start failing in higher numbers.

    @imok said: Hello again. Any positive news about anything?

    The wood pallet is ready to be filled with the servers going out to NYC Metro. It will most likely be shipped out on Monday and get there by July 5th. Los Angeles may be taken down to the datacenter this weekend, which means alpha/beta services can be deployed very soon, at least with IPv6.

    Amsterdam will probably be shipped out by the end of next week, fingers crossed.

    San Jose, Atlanta, and Dallas are being set up. Chicago is still in the "talking" stages. For Tokyo, we are working out the networking and finalizing the timeline with Equinix; on IPv4, we are still in the early stages there (Tokyo). I'm pretty excited about that one even though it will end up costing us 3-5x more than all the other locations in terms of colocation, with nightmarish datacenter hands rates, and it will probably be a PITA logistics-wise.

    Thanked by 1 FrankZ
  • FAT32 Administrator, Deal Compiler Extraordinaire
    edited June 2021

    @VirMach said:
    San Jose, Atlanta, and Dallas are being set up. Chicago is still in the "talking" stages. For Tokyo, we are working out the networking and finalizing the timeline with Equinix; on IPv4, we are still in the early stages there (Tokyo). I'm pretty excited about that one even though it will end up costing us 3-5x more than all the other locations in terms of colocation, with nightmarish datacenter hands rates, and it will probably be a PITA logistics-wise.

    I have never really set up a server / mounted one in a rack before, but if you want some affordable remote hands (me, myself) during the setup phase, I can try to help and learn if you decide to expand to Singapore. (Just hope the datacenter won't disallow me from staying there for hours :joy:)

  • ben47955 Member

    Monday is near.

  • alilet Member

    @ben47955 said:
    Monday is near.

    cociu's Monday or regular Monday?

  • VirMach Member, Patron Provider

    We've hit a small roadblock, as I don't feel comfortable shipping these out right now after running a few burn-in tests. Certain SSD models that previously ran fine in the RAID controller are getting too hot outside of it when trying to handle a large workload by themselves.

    I'm going to either put them in a PCIe expansion card with a fan and heatsink or try beefier heatsinks instead of the stock ones. A good few are coming in on Sunday; if it works out perfectly then we're still on track, as I just need to test them quickly, record temperatures, and ensure they don't thermal throttle. I'd hate to send them out and then have to coordinate and test a bunch of changes with datacenter hands. Either way, this shouldn't be too big of a delay.
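
    The quick test itself is nothing fancy; something like this (a sketch only, assuming nvme-cli is installed; the device path and threshold are examples):

    ```python
    #!/usr/bin/env python3
    """Sketch: poll an NVMe drive's composite temperature during a burn-in
    and flag readings near a throttle threshold. Assumes the nvme-cli
    package is installed; device path and threshold are examples."""
    import re
    import subprocess
    import time

    DEVICE = "/dev/nvme0"   # example device under test
    ALERT_C = 75            # example alert threshold

    def read_temp_c(dev):
        out = subprocess.run(["nvme", "smart-log", dev],
                             capture_output=True, text=True, check=True).stdout
        # smart-log prints a line like: "temperature : 45 C"
        m = re.search(r"^temperature\s*:\s*(\d+)", out, re.MULTILINE)
        return int(m.group(1)) if m else None

    while True:
        temp = read_temp_c(DEVICE)
        flag = "  <-- running hot" if temp is not None and temp >= ALERT_C else ""
        print(f"{time.strftime('%H:%M:%S')} {DEVICE}: {temp}C{flag}")
        time.sleep(5)
    ```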

    @FAT32 said: I have never really set up a server / mounted one in a rack before, but if you want some affordable remote hands (me, myself) during the setup phase, I can try to help and learn if you decide to expand to Singapore. (Just hope the datacenter won't disallow me from staying there for hours :joy:)

    Singapore ended up getting fewer votes in the poll we sent out, as well as just being more expensive in general. We haven't given up on it, but if we do it, it'll most likely be after Japan.

    @alilet said:

    @ben47955 said:
    Monday is near.

    cociu's Monday or regular Monday?

    We have 27 Mondays left in 2021.

    Thanked by 3 FAT32, FrankZ, imok
  • VirMach Member, Patron Provider

    Looks like we're going to have to actually get one of those 2-inch-thick heatsinks for these 4TB NVMe SSDs we have... they're hitting 77-82C during normal sustained usage with an aftermarket heatsink I happened to have lying around.

    It ends up throttling itself just enough to cool down, then ramps back up; rinse, repeat.

    I'm also not very impressed with the performance once these things get full. During sequential synthetic benchmarks they basically become as slow as a hard drive after a while, but luckily it seems like in real-world situations they never perform worse than an SSD, outside of a few times where they get scary bad and freeze up. It doesn't help that it's virtually impossible to know what you're getting anymore. I could spend days researching and testing, and manufacturers don't care to correctly label what they're giving you, so we could end up with several variants.
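
    If anyone wants to see that cliff for themselves, a crude sustained-write test like this shows it (sketch only; it writes a throwaway file on the drive under test and logs MB/s per chunk, so you can watch throughput collapse once the SLC cache runs out):

    ```python
    #!/usr/bin/env python3
    """Crude sustained sequential-write test: write large chunks to a
    throwaway file and log MB/s per chunk. A collapse partway through is
    the SLC cache running out. Sketch only; adjust path and sizes."""
    import os
    import time

    PATH = "/mnt/testdrive/burn.bin"   # file on the drive under test
    CHUNK = 256 * 1024 * 1024          # 256 MiB per write
    TOTAL_GB = 64                      # total data to write

    buf = os.urandom(CHUNK)            # incompressible data
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        for i in range((TOTAL_GB * 1024**3) // CHUNK):
            t0 = time.monotonic()
            os.write(fd, buf)
            os.fsync(fd)               # force it to the device, not the page cache
            mbps = (CHUNK / 1024**2) / (time.monotonic() - t0)
            print(f"chunk {i:4d}: {mbps:8.1f} MB/s")
    finally:
        os.close(fd)
        os.remove(PATH)
    ```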

    We might have to just figure out a way to only end up using Samsung NVMes as they (and maybe WD Black) seem to be the only ones that aren't all over the place.

    Thanked by 2 FAT32, FrankZ
  • FAT32 Administrator, Deal Compiler Extraordinaire

    Honestly, all of this shows that VirMach is doing what they can to ensure the best for their customers.

    @VirMach I am kinda curious: if you are looking for the best quality / performance / endurance etc., why not price VirMach as a more premium brand? (BuyVM / NexusBytes kind of pricing, where there are little to no promos?)

    Moreover, it is all new hardware with the latest technologies possible.

  • VirMach Member, Patron Provider
    edited June 2021

    @FAT32 said: @VirMach I am kinda curious: if you are looking for the best quality / performance / endurance etc., why not price VirMach as a more premium brand? (BuyVM / NexusBytes kind of pricing, where there are little to no promos?)

    I think we've always strived to be the "best value," as in the highest quality before diminishing returns, whether or not we've achieved that.

    I personally try to run things how I would want them to be run if I were a customer of VirMach. In particular, when it comes to making a decision on the NVMe SSDs, I'm looking for basically these things:

    1. It actually performs better than SATA SSDs in RAID10.
    2. If it costs more than SATA SSDs in RAID10 or even other NVMe SSDs, it at least outperforms them proportionally to the price.
    3. It does well in average/sustained writes, especially when it comes to queue depth and performing well even when SLC cache is depleted (and without overheating.)

    So, the new WD Black SN850 is pretty insane, but so is the price: $500 for 2TB, or $1,000 for 4TB. The Samsung 970 Evo Plus 2TB, on the other hand, is $312 right now, so nearly 40% cheaper but with maybe a maximum of only a 30% decrease in performance. So: 0% decrease in capacity, a 30% decrease in performance, for 40% savings = best value (between the two). But then there's a third contender: about 35% lower performance, 0% lower capacity, and 40% lower pricing, while still meeting the three requirements above (this is the important part), as long as I can confirm with the manufacturer that they're not swapping parts around.

    Continue this past 3 products, basically doing it for every NVMe SSD in existence, and that's what I've been trying to do for a while. Now introduce the recently discovered uncertainty in the component supply chain and the throttling/overheating issue, and that's where we're at right now.
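
    The ranking part is trivial to mechanize once you trust the inputs (a toy sketch; the prices and relative performance are the rough figures from this post, and the third drive is the unnamed contender, so its numbers are approximate):

    ```python
    #!/usr/bin/env python3
    """Toy value ranking using the rough numbers from the post above.
    Performance is relative to the SN850 = 1.00; 'value' is simply
    relative performance per dollar (scaled)."""

    drives = [
        # (name,                      price_usd, rel_perf)
        ("WD Black SN850 2TB",        500,       1.00),
        ("Samsung 970 Evo Plus 2TB",  312,       0.70),  # ~30% slower
        ("Unnamed contender 2TB",     300,       0.65),  # ~35% slower, ~40% cheaper (approximate)
    ]

    for name, price, perf in sorted(drives, key=lambda d: d[2] / d[1], reverse=True):
        print(f"{name:28s} ${price:4d}  perf={perf:.2f}  value={perf / price * 1000:.2f}")
    ```

    Of course, the hard part is requirement 3; sustained-write behavior and parts-swapping don't show up in a price sheet.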

    Thanked by 3 FAT32, FrankZ, _MS_
  • VirMach Member, Patron Provider
    edited June 2021

    @FAT32 said: @VirMach I am kinda curious: if you are looking for the best quality / performance / endurance etc., why not price VirMach as a more premium brand? (BuyVM / NexusBytes kind of pricing, where there are little to no promos?)

    Moreover, it is all new hardware with the latest technologies possible.

    We may do this, though; it's not a bad idea. We've been wanting to have an "Enterprise" line of services. We actually have a set of hardware that doesn't really fit what we're going for right now, so it could end up being perfect for this. I'll share more once it solidifies.

    Thanked by 1 FAT32
  • FAT32 Administrator, Deal Compiler Extraordinaire
    edited June 2021

    @VirMach said:
    We may do this, though; it's not a bad idea. We've been wanting to have an "Enterprise" line of services. We actually have a set of hardware that doesn't really fit what we're going for right now, so it could end up being perfect for this. I'll share more once it solidifies.

    I am sorry for all the guys in this thread if VirMach decides to become more expensive 🙈 But it is sad to see him having a hard time with the migration :(

  • FrankZ Veteran
    edited June 2021

    @FAT32 said: why not price VirMach as a more premium brand? (BuyVM / NexusBytes kind of pricing, where there are little to no promos?)

    Thanked by 1 alilet
  • @FAT32 said:

    @VirMach said:
    We may do this, though; it's not a bad idea. We've been wanting to have an "Enterprise" line of services. We actually have a set of hardware that doesn't really fit what we're going for right now, so it could end up being perfect for this. I'll share more once it solidifies.

    I am sorry for all the guys in this thread if VirMach decides to become more expensive 🙈

    No, that's the Low End dream! You get a cheapo OpenVZ box on Black Friday, and then that provider does well and becomes, or gets bought out by, a premium provider. Then you are grandfathered in with a low price but on an enterprise-level VM and network!

    Thanked by 2 FAT32, FrankZ