Prometeus ?

Hi there,

I was originally considering going with iwStack, and then I found those yearly special deals at Prometeus, so now I'm a bit confused about which one to go for.
I'm sure many of you have dealt with Prometeus. Any active clients out there who can comment on their uptime and connectivity? (Can they customize a plan?)

My demands are pretty low. A 512MB Xen/KVM plan would work just fine for me. (No OpenVZ.)
I need uptime, security and good connectivity with the rest of the world. (All three are an absolute must.)

It won't be hosting any site. Occasional VPN use (for myself) and a place from which to connect to the other servers I manage.

Thanks


Comments

  • Which yearly special deals for Prometeus are you talking about? :)

  • I'm sure uptime and security are great, but you can't get excellent connectivity around the whole world regardless of whether you choose Milan or Dallas.

    http://www.prometeus.net/site/special-offers.php

    KVMSSD5 looks good.

  • @hostnoob said:
    I'm sure uptime and security is great, but you can't get excellent connectivity around the world regardless of whether you choose Milan or Dallas

    Umm... mind explaining that part?

    @0xdragon said:
    Which yearly special deals for prometeus are you talking about? :)

    KVMSSD5

  • @Umair said:
    KVMSSD5

    Yeah, I just mean that if you pick their Milan location, you'll get great connections to places all over Europe, but not to Asia or the US west coast. If you pick Dallas you'll get better connections to Asia and the USA, but not as good to Europe.

  • Umair said: Mind explaining that part?

    You can get passable performance and passable latency most of the time from a single location, but high performance and low latency require multiple locations -- choosing Dallas may mean that you won't get as good latency/speeds for visitors from Asia as you might if you'd chosen Seattle or Los Angeles, for example.
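
    If you want to compare candidate locations yourself before buying, a rough sketch like this works (Python, just wrapping the system ping; the IPs below are placeholders -- swap in the provider's published test IPs for Milan and Dallas):

        import re
        import subprocess

        # Placeholder test IPs -- replace with the provider's published looking-glass/test IPs.
        TARGETS = {
            "Milan (placeholder)": "192.0.2.10",
            "Dallas (placeholder)": "192.0.2.20",
        }

        def avg_rtt(host, count=5):
            """Run the system ping and return the average RTT in ms, or None on failure."""
            out = subprocess.run(
                ["ping", "-c", str(count), host],
                capture_output=True, text=True,
            ).stdout
            # Summary line looks like: "rtt min/avg/max/mdev = 18.1/19.2/21.0/0.9 ms"
            m = re.search(r"= [\d.]+/([\d.]+)/", out)
            return float(m.group(1)) if m else None

        for name, ip in TARGETS.items():
            print(name, avg_rtt(ip), "ms")

    Average RTT is only half the story (jitter and routing matter too), but it's a quick first filter.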

  • @hostnoob said:

    @ihatetonyy said:

    Well, like I said, I'm not going to host a site there. It's more for me connecting to the other VPSes I have (most in the US, a few in the EU). So I need SSH not to lag. (My ISP is flaky for me at times.)

    My ping to Milan is pretty good. That's why I'm considering them.

  • I'm in the UK and get a good connection (SSH) to all of Europe and the east coast of the USA, but to the west coast of the USA or Asia it's not as smooth. It's usable, but you can't avoid the fact that the data has a long way to travel, and light only travels so fast.

    I would say the UK is definitely the best for a combination of US and EU boxes.

  • If you require fail-over and more advanced control, iwStack is your best option. If you're fine with a normal machine, go for the normal products.

    Both have excellent performance and uptime, and benefit from the amazing Prometeus support!

  • Never used iwStack, however, with Prometeus machines you won't be disappointed.

  • There is a Dallas 512MB Xen SSD plan in their offers.

    Two years for just 6 euros more than the annual rate.

    See : https://www.prometeus.net/billing/cart.php?gid=18

    Here's a benchmark of my 1GB, 4-core SSD VPS from the same Prometeus node deployment in Incero Dallas:

    http://serverbear.com/benchmark/2014/07/16/1Eihd6naYKCezIzi

    The 4k random fio read on those SSD VPSes should be in 50K+ IOPS territory, according to this.

    On the network side, the Incero DC has really good transfer rates to everywhere.

    The CPU should be the super-speedy E3-1270v2 on these. The combination is greased lightning.

    I know all this because, until yesterday, I was seriously considering getting this as my long-term command-and-control VPS (Ansible et al.).

    Decided to use one of my storage VPSes as CnC instead. Too many VPSes!
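
    If you want to check that 50K figure on the box yourself, a small fio run reports 4k random read IOPS directly. A minimal sketch (assumes fio is installed; it writes a 256MB test file in the current directory):

        import json
        import subprocess

        # 4k random-read test, 30 seconds, direct I/O to bypass the page cache.
        cmd = [
            "fio", "--name=randread", "--rw=randread", "--bs=4k",
            "--size=256M", "--direct=1", "--ioengine=libaio",
            "--runtime=30", "--time_based", "--output-format=json",
        ]
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        job = json.loads(result.stdout)["jobs"][0]
        print("4k random read IOPS:", round(job["read"]["iops"]))

    Numbers on a shared node will bounce around between runs, so take a few samples.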

  • @hostnoob said:

    KVMSSD5 looks good.

    I'd asked them a while back: the KVMSSD nodes are on a RAID 5 SSD setup.

    I don't know if throwing that many unnecessary writes at SSDs is a good idea.

    I guess it doesn't matter if they are using datacenter-grade SSDs.

    Either way, have a bulletproof (tested) backup+restore plan.
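
    On the "tested" part: the simplest test is restoring last night's backup to a scratch directory and checksumming both sides. A rough sketch (Python; the two paths are placeholders for your data and the test restore):

        import hashlib
        from pathlib import Path

        def checksums(root):
            """Map relative path -> sha256 for every file under root (reads whole files; fine for a spot check)."""
            root = Path(root)
            return {
                str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
                for p in root.rglob("*") if p.is_file()
            }

        live = checksums("/srv/data")               # placeholder: the live data
        restored = checksums("/tmp/restore-test")   # placeholder: a test restore of the backup

        missing = set(live) - set(restored)
        changed = {p for p in live.keys() & restored.keys() if live[p] != restored[p]}
        print("missing from restore:", sorted(missing))
        print("differs after restore:", sorted(changed))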

  • Maounique (Host Rep, Veteran)
    edited January 2015

    vimalware said: I don't know if throwing that many unnecessary writes at SSDs is a good idea.

    The myth that HDDs are more reliable than SSDs was disproved a while ago. It matches our experience too: so far we have only had a couple of SSDs fail, perhaps 3 IIRC, but well over 10 mechanical drives; even top-of-the-range short-stroke SAS ones fail more often than SSDs.
    Not all KVM SSD setups are on RAID 5, only the ones with lots of space. RAID 10 keeps a duplicate of the data, while RAID 5 only keeps parity computed from it, used to rebuild a block if the real copy fails, so I think RAID 10 writes more bytes in all.
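
    For what it's worth, the usual small-write accounting looks like this (a back-of-the-envelope sketch of generic RAID behaviour, not a statement about how Prometeus's arrays are configured):

        # Device I/O per small (sub-stripe) 4 KiB logical write.
        # Assumptions: RAID 10 = plain mirroring; RAID 5 = classic read-modify-write
        # (read old data + old parity, write new data + new parity).
        def raid10_small_write(block_kib=4):
            reads = 0
            writes = 2 * block_kib      # the block goes to both mirror legs
            return reads, writes

        def raid5_small_write(block_kib=4):
            reads = 2 * block_kib       # old data + old parity
            writes = 2 * block_kib      # new data + new parity
            return reads, writes

        for name, fn in [("RAID 10", raid10_small_write), ("RAID 5", raid5_small_write)]:
            r, w = fn()
            print(f"{name}: {r} KiB read, {w} KiB written per 4 KiB logical write")

    So for small random writes the bytes written come out about the same (RAID 5 just adds the extra reads and latency); it's on large, full-stripe sequential writes that RAID 5 writes less overall, since parity is only one extra block per stripe.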

  • Well, I'm one of those guys who don't like SSD that much on a server. But then, my perspective is that of a client; translation: I don't know what SSD a provider uses and therefore have to assume the bad case.

    The good case is "enterprise SSDs", which are built from much higher-quality cells and do indeed have a considerably higher life expectancy. The bad case, and I dare to strongly assume that there are more than a few of those, is rather cheap, desktop-grade SSDs.
    Thanks to considerably improved firmware, drivers, OS support, and the like, those cheap SSDs may survive well enough to make them attractive to cheap cost-cutters.

    Most importantly, though, I think that while SSDs are of course a major marketing gimmick ("We have speeeeeed!!1!"), in practice they are very often next to worthless for web hosting. Simple reason: the bottleneck isn't the disk, it's dynamic content, PHP & Co.

    That said, SSDs can be worthwhile if you're dealing with a serious and well established provider and if your use case profits from faster disks.

  • Maounique (Host Rep, Veteran)

    bsdguy said: they are practically very often next to worthless for web hosting. Simple reason: The bottleneck isn't the disk, it's dynamic content, PHP & Co.

    Actually, here the "veterans" (which I read as nostalgics who still think squeezing an extra MB out of a 32 MB RAM system is an achievement worth the time invested, and the risk in case demand turns out higher than expected) need fast disks to make up for the low memory. In today's usage scenarios, caching is the solution to most situations, and RAM is not so expensive that the huge performance increase isn't worth it; but if you constrain yourself to low-RAM setups, you have to use fast disks to make up for the almost zero caching capability of a system squeezed into such inhumane conditions.
    It is a marketing gimmick, but this is one of the very few places where it is still needed, to breathe a last bit of poor, ignoble life into the real low end boxes.

  • @Maounique

    You are right, and I understand that thinking. However, let's be honest: caches are among the most perfect SSD killers. Why? Write, write, write... small pieces, usually.

    Full disclosure: I could, of course, not resist putting an (end-user type, i.e. low-quality) SSD into my desktop. It's now 3 or 4 years old and still doing fine. No hiccups whatsoever.
    My trick? No swap on the SSD.
    My SSD holds the OS and some virtual disks (OS disks) for my virtual machines. Data is on RAID 1'ed spindles.

    And btw. when I talked about well established and reputable providers, I had, among very few others, Prometeus in mind. ;)

    (Although, it seems, no provider likes to tell details like the SSD model they use)

  • henkb (Member)
    edited January 2015

    @bsdguy

    I have a Samsung 830; it has been running for 901 days and has written 34TB, and it's still going strong.
    It has the OS, swap, and newsgroup and torrent downloading on it.

    No reallocated blocks, but the wear level counter is at 600 at the moment.
    It does have 10% reserved space, and between 2-6 and 30GB of free space on the partitions.

    So a bit of writing is no problem for an SSD.

  • @henkb said: No reallocated blocks, but the wear level counter is at 600 at the moment. So a bit of writing is no problem for an SSD

    What does a "wear level counter" of 600 mean? Is that like 1% or 10% wear, or what?

  • @aglodek
    I am not sure, but I think the "max" is 3000 on a Samsung 830.
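
    If anyone wants to check the same attribute on their own drive, smartctl exposes it. A minimal sketch (assumes smartmontools is installed, the SSD is /dev/sda, and the attribute is named Wear_Leveling_Count as on Samsung drives; other vendors use different names):

        import subprocess

        # Dump SMART attributes; usually needs root.
        out = subprocess.run(
            ["smartctl", "-A", "/dev/sda"],
            capture_output=True, text=True,
        ).stdout

        for line in out.splitlines():
            # On the Samsung 830 this is attribute 177; the RAW_VALUE column is
            # roughly the average program/erase cycle count of the flash.
            if "Wear_Leveling_Count" in line:
                print(line)

    If that 600 is the raw P/E cycle count and ~3000 is what the NAND is rated for, the drive would be at roughly 20% of its rated wear, give or take.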

  • Maounique (Host Rep, Veteran)
    edited January 2015

    bsdguy said: (Although, it seems, no provider likes to tell details like the SSD model they use)

    That is because it is not easy: no long-time provider has the same model in all servers.
    We started out with Samsung 830s. Uncle had some lying around which he used to give as gifts to some collaborators and customers. We shoved a few 256 GB ones into a few servers and we had an SSD offering.
    However, it soon became clear that customers were only doing a few tests in the cache and then the SSDs sat mostly idle; the heavy lifters were still the short-stroke 10-15k SAS drives, thanks to their somewhat larger space at a similar price.
    Thus we realised that this SSD craze is a gimmick: with few exceptions, people do not really need fast disks for production, they would rather boast about their "dd speed" in the controller's cache.
    That may work for some providers, but we still have VMware clusters from 10 years ago, with disks of that era (obviously top-of-the-range quality) in service, and customers on them paying top dollar for the stability and dependability of the hardware and network. So we realised that catering to the LE market's wet dreams was not for us, and even though we expanded the SSD offer, we did it with large disks, to make it usable for serious applications too -- the ones which cannot be cached in RAM and need many IOPS -- because the real deal with SSDs is not "dd speed" but the hugely higher number of IOPS, thanks to no moving parts and true random access. The cloud was also an experiment to see what matters more, even for LE people: the SSD or the HA. We put local SSD storage at the same price as SAN storage, GB per GB, and the SSD servers are 80% empty, while we had to expand the SAN clusters 3-4 times, and they were 4-5 times larger to begin with.
    In the realm of even semi-pro users, people know what they need.
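
    On the "dd speed in the cache" point: a quick way to see how much caches inflate unsynced write numbers is to time the same write with and without an fsync. A rough sketch (Python; writes and deletes a 1 GiB "testfile" in the current directory):

        import os
        import time

        def write_speed(path, sync, total_mib=1024):
            """Write total_mib MiB of zeros and return MiB/s; with sync=True, fsync before stopping the clock."""
            buf = b"\0" * (1 << 20)              # 1 MiB buffer
            start = time.time()
            with open(path, "wb") as f:
                for _ in range(total_mib):
                    f.write(buf)
                if sync:
                    f.flush()
                    os.fsync(f.fileno())         # force the data to the device, not just the page cache
            return total_mib / (time.time() - start)

        print("no fsync  : %6.0f MiB/s  (mostly page/controller cache)" % write_speed("testfile", False))
        print("with fsync: %6.0f MiB/s  (closer to what the disk sustains)" % write_speed("testfile", True))
        os.remove("testfile")

    The gap between the two lines is the part of the headline "dd speed" that the cache, not the disk, is providing.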

  • Gunter (Member)
    edited February 2015

    Anyone else notice that Prometeus just went down in its entirety?

    My two cloud instances and the iwStack website are down.

  • Dun.

    Dun.

    Dunnnnnnnnnnn.

    My stuff is down too :(

  • telephone (Member)
    edited February 2015

    Yep, Prometeus, Iperweb, and Xenpower are also down... My guess is a large DDoS (that took down the core router?).

  • TheLinuxBug (Member)
    edited February 2015

    I would say this isn't a DDoS; it looks like they dropped their BGP announcement, as a traceroute or MTR can't get past the internal border router. My guess is their main router took a shit, or they are doing some type of emergency maintenance (or one of their upstream providers is, which is causing their announcement to fail).

    For example (mtr report columns: Loss% / Snt / Last / Avg / Best / Wrst / StDev):

    1. 123-29-200-109.gosport.uk.abpni.net          0.0%  91  212.8  82.0  0.6  246.3  83.0

    2. 229-12-200-109.rackcentre.redstation.net.uk  0.0%  91    0.3  11.0  0.3  219.8  35.8

    3. ???

    my 2 cents.

    Cheers!
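
    If you'd rather script that check than eyeball the trace, something like this prints the last hop that still answers (a sketch; assumes mtr is installed and may need root for raw sockets -- the target is just an example hostname):

        import subprocess

        TARGET = "prometeus.net"   # example target

        # -r: report mode, -c 10: ten cycles, -n: no DNS, so dead hops show as "???"
        out = subprocess.run(
            ["mtr", "-r", "-n", "-c", "10", TARGET],
            capture_output=True, text=True,
        ).stdout

        last_responding = None
        for line in out.splitlines():
            if "|--" not in line:          # skip the header lines
                continue
            hop = line.split("|--")[1].split()[0]
            if hop != "???":
                last_responding = hop

        print("last responding hop:", last_responding)

    If the last responding hop sits just outside the provider's network, the prefix most likely isn't being announced (or is blackholed) rather than the box itself being dead.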

  • Is it just me, or did Prometeus lose the link to MIX 10 minutes ago?

  • Maounique (Host Rep, Veteran)

    Everything is down; we are investigating. It looks like a network issue.

  • iwStack Dallas is working for me, but both my Xen and KVM instances in Italy are down :{

  • Yes, Italy has been completely down for almost an hour now.

  • My beloved Prometeus is down. Hopefully they fix the issue soon.

  • @marrco said:
    Yes, Italy has been completely down for almost an hour now.

    Yep, monitoring is reporting just over 1 hour of downtime.
