
Prometeus ?


Comments

  • Maounique Host Rep, Veteran

    If your old KVM instance was really old, months old, then it was on the old KVM zone we are phasing out. It does not accept new instances unless you force it via affinity.
    However, migrating tens of TB from one zone to another is too much, so we will phase it out that way: new instances go to the new zone until very few remain in the old one, so the move will not take too long.

  • vyala Member

    @Maounique,

    Does Prometeus support offloaded MySQL for WordPress? I tried WordPress with Azure and it runs very slowly. I'm looking for hosting for my business website.

  • Maounique Host Rep, Veteran

    Yes, we still offer it, but I don't know for how much longer, because we integrated the server with our shared hosting offer; that is on "regular" SAS RAID, while MySQL is on SSD.
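
    For clarity, "offloaded MySQL" here just means WordPress talking to a MySQL server on a separate host instead of localhost. As a minimal sanity check from the web instance - the hostname and credentials below are placeholders, not actual Prometeus values:

    # verify the remote MySQL server is reachable before pointing WordPress at it
    mysql -h db.example.com -u wpuser -p wpdb -e 'SELECT VERSION();'

    WordPress then uses that remote host as DB_HOST in wp-config.php instead of localhost.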

  • vyala Member

    Do you have a link? And pricing info, please?

  • Crab Member
    edited March 2015

    Maounique said: If your old KVM instance was really old, months old, then it was on the old KVM zone we are phasing out. It does not accept new instances unless you force it via affinity.

    Yes, my instance was a year old. Stability was very good, but the performance just wasn't there. Now I'm hoping stability stays the same, if not better, and performance is much better.

    Thanked by: netomx
  • Maounique Host Rep, Veteran
    edited March 2015

    @vyala said:
    Do you have a link? And pricing info, please?

    https://www.prometeus.net/billing/cart.php for all products currently on offer.

    Crab said: but the performance just wasn't there

    Initially, it was. However, we did not think the product would be so popular; it was a bit more expensive than regular VPSes, so we figured it wouldn't sell that much.
    This meant we grew the initial zone, and people were making many small disks, which created a condition on the storage where space remains unallocated yet is still marked as in use, along with poor performance of the volume groups.
    Now a lot of CPU is spent managing the storage, and it is still slow despite all the background improvements we are doing, so we decided to completely redesign the storage and use smaller pools. This was done some two months ago with outstanding success, so we are slowly phasing out the old KVM zone, and new VMs are almost exclusively started in the new pools.
    However, the Xen zone is designed from the ground up to be leaner and faster; as long as you are using the Xen tools, it will be better than even the new KVM zone on the same hardware. That is why we are putting some restrictions on smaller VMs, and the cheap storage is not available there, to keep it for more professional people who appreciate Xen and know how to manage it. Also, the templates come with predefined passwords; a professional will know to change them immediately, while on KVM we had to implement the random password feature just to give a bit more help to newbies and hobbyists. Hopefully, that will leave the Xen zone to people who know how to plan capacity, install the Xen tools, etc., and crippled machines, unable to migrate, will not pop up there. Since regular people think KVM is the best, there is a chance they will keep away from Xen, especially since the performance of KVM is increasing in the new zones.

  • netman Member

    @Maounique said:
    If your old KVM instance was really old, months old, then it was on the old KVM zone we are phasing out. It does not accept new instances unless you force it via affinity.
    However, migrating tens of TB from one zone to another is too much, so we will phase it out that way: new instances go to the new zone until very few remain in the old one, so the move will not take too long.

    As an iwStack customer in your KVM zone, how is one to understand this?

    Sorry if I sound a little "grumpy" now... :)

    But you have on several occasions in this thread mentioned iwStack's KVM offerings as something you (Prometeus) don't take quite as seriously as your other offerings.

    And said that you have much better iwStack offers now, except existing customers will not be informed about these unless they happen to read your comments here on LowEndTalk?

    I find that a little disheartening - and contrary to the otherwise excellent reputation Prometeus has earned.

    My problem is not that you come up with better solutions as experience teaches you. That's great, and we all live and learn, certainly me included.

    But as an existing customer, running things on iwStack that aren't just short-lived throwaway stuff, I'm left feeling less cared for than I'd like. :)

    Anyway, if I want to find out whether I'm in the old zone, and would like to move to the new and hopefully more stable zone, what am I supposed to do, now that I know about them? ;)

  • Maounique Host Rep, Veteran
    edited March 2015

    The old zone is not more unstable than the new one. It will take longer to recover from a catastrophe like the power cut we had, and it is also being slowly migrated away from the bugged storage. As we move more and more people away, it will no longer have issues. There is already an improvement.
    Of course, if you wish to move right away, just make a new instance in the Xen zone to be sure it ends up in the redesigned area. If you make another KVM instance, it might still land in the first zone.
    It is not that we take it less seriously, but it grew out of our experience with the corporate cloud, where people do not count pennies and do not make 1 GB disks for root+swap and then add and remove more 1 GB disks as needed. We imagined there would be smaller disks, but we didn't quite think we would have 1000+ of the 1 GB ones.
    This is how we thought about it:

    1. For business projects we set up the Xen zone with some limitations. Instances start from 1 GB of RAM and there is no cheap storage available. This, together with the more demanding Xen hypervisor, which requires the xentools to be installed (without them, instances perform poorly and are unable to migrate), will probably keep away the people who aim to spin up an instance for half a cent, do an attack, and destroy it, putting lots of pressure on the storage and deployment.
    2. We created new, smaller KVM zones with redesigned storage, where we are slowly moving everyone in order of disk size. Small disks mean lower downtime, but it is a very slow process, because we do not wish to put additional stress on the bugged old storage.
    3. We sent notifications each time we launched something new and gave people incentives to try it out by lowering the cost of credits. Of course, we will not send a new notification each time we improve something; people do not read them even when they come once in three months, let alone once a week.
    4. It is normal to expect anything new to be better, grown on newer hardware and with the experience of the past. After all, the first zone is two years old, which is a lot in terms of hardware and development in IT today.
      We do not force anyone to migrate, but progress is to be expected all the time, and everyone should expect newer zones to be better than older ones. This is natural, albeit there can be exceptions: instability in the first days of a new deployment, or mistakes in design which may lead to problems for the instances in the first few weeks; this is why we offer discounts to try those new things. Once you are satisfied that it has run stably for a month, it is probably safe to check performance and move your production VMs, after you see serious progress in performance and have had no issues at all. But that is solely your decision; we offer the information and the recommendation and create the opportunity, but we do not push you. We will terminate the old zone some time in the future, when we have either moved everyone out or only very few instances remain there, ones that are unmovable due to internal problems. We will give ample time to migrate.
    Thanked by: netman
  • vyala Member

    @Maounique, sorry if I don't understand the basics. Should I install the xentools on Ubuntu 14.04? I am in the Xen zone (DC2).

  • Maounique Host Rep, Veteran
    edited March 2015

    Xentools are needed on all VMs, including Windows. Running without them can have various consequences, from poor performance to an inability to migrate, and even silent failures in some rare cases. But it depends; they might already be installed. Some OSes detect that they run on Xen and install them automatically, and our templates have them by default.

  • vyala Member

    I have been running without the xentools for about 3+ months. How do I install them? Do you have documentation on that?

  • Maounique Host Rep, Veteran

    It depends on the OS. On Debian, and I assume Ubuntu, it is something like this: apt-get install xen-tools

    For Windows you need to mount the ISO with them, as you do for some Linux distributions which do not have working support in their repositories. We are using Citrix XenServer, so the ISO contains the specific builds optimized for our setup.
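
    As a minimal sketch of the Debian/Ubuntu route, using the package name given above (exact package names vary by release, and XenServer guests often take the ISO route instead):

    # refresh the package index, then install the Xen tools package
    sudo apt-get update
    sudo apt-get install xen-tools
    # confirm the VM is actually running under Xen
    dmesg | grep -i xen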

    Thanked by: vyala
  • netman Member

    @Maounique

    Thank you for your explanations, much of which I do not disagree with in any way. :)

    But I still think you guys should consider the information side of things (this goes for handling info during major outages as well).

    And Prometeus has been a bit unclear about what you are offering in iwStack, even if you didn't mean to be.

    On one hand, you talk in this thread about iwStack being for people spinning up instances for half a cent to do an attack, running 1 GB disk instances and so on, and say you set up the system with this kind of customer in mind.

    On the other hand, you advertise iwStack as "high performance", "high availability", "failover", and offer 16 GB / 12 CPU plans and so on.

    Forgive me for being a little confused, then - or for feeling that your explanations in this thread have painted a somewhat different picture than the one I got from www.iwstack.com when I became a customer in the first place.

    I don't mean to say I feel terribly cheated, want my money back, or anything like that.

    It's just that I set my expectations a little higher, based on your reputation and the features offered, than reality (and your writings about the iwStack project) has quite lived up to so far.

    And I say all this mostly to tell you where I think you can do better in the future: better info, and a clearer plan for what you intend iwStack to be.

    And okay, to vent a little frustration, which has hereby been done. Sorry. :)

  • dragon2611 Member

    Some of the older KVM instances on iwStack used a different storage backend that didn't like the large number of smaller files (at least that's what I understood from the previous comments in this thread). However, the newer instances should be on the newer SAN and should not have that problem.

    Thanked by: Maounique
  • Maounique Host Rep, Veteran
    edited March 2015

    netman said: On one hand, you talk in this thread about iwStack being for people spinning up instances for half a cent to do an attack

    Not at all, but there are such users, whom we suspend for ToS/AUP breaches when found. However, the point was not the attacks but the people who use the wrong service, hosting their VPNs on it, for example, and creating weird setups, like those with a 1 GB disk.
    We are taking steps to discourage them; it is not that we designed the product with them in mind. Actually, it malfunctioned BECAUSE of such users, who pay 30 EUR for a tightly monitored service to launch attacks, when they could have under-$1-a-month instances on services where nobody gives a damn and they can do whatever they want without issues. We set the initial deposit at 30 EUR to discourage them, but it did not work, at least not to the degree we wanted.

    We discontinued the 384 MB instances, which were the most abused, and we suspend people who breach the ToS/AUP, but we cannot do anything against those who simply manage to squeeze a VM into a 1 GB disk, nor if they decide to create additional 1 GB disks whenever they like. iwStack is about freedom, and we must cope with whatever people experiment with as long as they do not abuse it. We did not expect this kind of behaviour on such a large scale when we first launched, but in the meantime we have learned the lesson and adapted to the new reality, even if it takes time to phase out the old zone, for the reasons already mentioned.

    iwStack is more stable than big providers such as Microsoft and less stable than others like AWS and Google, but it was never intended to be our top-of-the-range product. We have VMware and a separate deployment for our corporate customers, shielded from abusers, on completely separate circuits and with other safeties in place (unfortunately, when power failed, those were affected too, but that was one hour of downtime in ten years). You would never consider such a service, though, as it starts from thousands per customer, and we do not offer it other than through direct discussions and contracted setups which include other services, for example dark fiber transport or even international data channels.

    dragon2611 said: newer SAN

    Not exactly. Xen in DC2 is on the newer SAN, but the problem was not the SAN; it was the LVM model we spread over it - at the logical level, not the hardware. The new KVM instances are on the same SAN, but on newer nodes with differently organized storage (at the logical level), with Uncle's secret sauce.

  • myhken Member

    @Maounique I'm also a little disappointed after this thread. I have used you for a long time now and have been a very loyal customer, like many others. But I have made a few comments in tickets about a slow system, and I can't say that I got any info, or got it in a form I understood, so that I could move my servers to a better zone.
    That was my main reason for leaving you a couple of weeks before all this happened. I'm glad I moved then and not after; otherwise it would only have looked like I moved because of the two large issues.

    On the other side, while reading this thread, I see you confirm a lot of what made me decide to leave iwStack after over a year, and Prometeus after three years or so.
    My iwStack instances were no better than DigitalOcean or Vultr, because I had no need for all the fancy stuff (since I have a DNS failover setup with multiple servers in several locations, with different hosts). But I could spin up a new server MUCH faster at DO or Vultr and do almost everything and more at the other hosts, and when your HA failed, several times, your service was no better than DO or Vultr (which only put your "cloud" VPS on a single node).

    I hope I will be back, maybe when your KVM zone is brand new and faster, your HA does not fail, etc. - when you can offer the real cloud that iwstack.com is, not the fake cloud stuff like the others. But then your real cloud actually has to be better than the "fake" ones.

  • Maounique Host Rep, Veteran
    edited March 2015

    myhken said: But then your real cloud actually has to be better than the "fake" ones.

    As I said, there are no miracles; everything can fail and will, sooner or later. We were actually better than Azure last year, much better, but, yeah, AWS was also way better than us, and so was Google.

    We took all feedback into account, and we sent letters to everyone when we launched the Xen zone, for example, together with an offer to buy cheap credits to try it, but people bought the cheap credits and used them for their old instances.
    As I said, we create the opportunity, inform people, and give incentives, but we cannot push anyone to take the new and better offer.
    Some people think KVM is way better than Xen, and there is nothing we can do to convince them of the contrary. This is why we will keep offering KVM, but it will be directed at the lower end, the semi-pro people and hobbyists who wish to see a real entry-level cloud for cheap, and we will continue to improve the product and create additional zones, more expensive and more stable, because they will have almost exclusively people who need the features and know what they are doing.
    For the others, we will create something like DO/Vultr, still on CloudStack, but with simple zones, only a shared network, and SSD local storage. It will also be more stable, because the interface will permit only basic operations (but still way more than Solus, for example) and will not allow connections to the main CloudStack UI.

    Thanked by: myhken
  • vyala Member

    @myhken, I don't have experience with DO or Vultr, but I have tried a couple of providers like AWS and Azure. I can confirm that iwStack is more stable than Azure.

    My Azure instances failed 3 times this month (just 18 days in). Before Azure, I had my deployment with iwStack at the pre-production stage. I never faced a single downtime (except the cable issue Maounique mentioned); uptime was great with iwStack. Just because we got free credit for 3 years with Azure, we thought we could save costs, being a self-funded startup.

    Maybe Maounique has given more information about internal stuff than anyone else, but I see they don't hide anything.

    They could do a slightly different model, like lowering the cost for KVM, with the reasons clearly mentioned.

  • afonic Member
    edited April 2015

    I wanted to add something that is relevant to iwStack.

    You may pay for the private network, but at least it is really private. As I mentioned earlier, I've moved my services to Linode for better uptime and support; however, something I found out is that their "Private LAN" is not really private. With a simple Nmap scan I found more than 130 IPs on my "private network" (a quick sketch of such a scan is at the end of this comment). I asked in a ticket, and it turns out their so-called "private" network is in reality just a local network for each node, where bandwidth between Linodes doesn't count. Any other user on that node can access my VPS through my "private" IP.

    That creates security issues (you might feel relaxed about servers that connect only to the private network, when you shouldn't), and I believe it falls into the extended category of "false advertising".

    So in iwStack you pay for a private network, but it really is only you who has access to it.
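
    For reference, a check like the one described above is a single nmap ping scan; the subnet below is a placeholder for whatever private range your provider assigns, not an actual iwStack or Linode value:

    # ping-scan the "private" subnet from inside one of your instances;
    # on a genuinely private network, only your own hosts should answer
    nmap -sn 192.168.1.0/24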

  • afonic Member

    Hi guys,

    Any updates regarding VENOM, and whether any actions were taken by Prometeus on both the KVM and Xen nodes?

    I tried the forum, but there seems to be an SSL-related problem; I didn't want to open a ticket.

  • Infinity Member, Host Rep

    @afonic said:
    Hi guys,

    Any updates regarding VENOM, and whether any actions were taken by Prometeus on both the KVM and Xen nodes?

    I tried the forum, but there seems to be an SSL-related problem; I didn't want to open a ticket.

    Yes, it is planned; information should be out very soon tonight. M will probably keep everyone posted if he has the time.

  • Maounique Host Rep, Veteran
    edited May 2015

    We have already sent the mails, but it may take some time, as we have thousands of them in the pipe.

    Basically, I think the VMs cannot be left up; they have to be shut down even if the nodes are patched and the VMs migrated. So, I believe we cannot avoid this even for iwStack, just to be on the safe side.

    Thanked by: afonic
  • afonic Member

    How about Xen (XenPower) - were they using QEMU?

  • Maounique Host Rep, Veteran

    Yes; you can run an arbitrary OS there, so the templates include HVM ones. (HVM guests use QEMU for device emulation, which is why they were affected as well.)

  • joepie91 Member, Patron Provider
    edited May 2015

    @Maounique Do you also offer extra traffic on iwStack, and if so, at what cost?

    EDIT: Beyond the included 1TB, I mean.

  • jvnadr Member
    edited May 2015

    @joepie91 If you have more than one instance, the bandwidth will be combined.
    Additional outgoing traffic beyond the included 1 TB per month is billed at €0.002 per GB transferred (so, for example, an extra 500 GB in a month would cost €1).

    Thanked by: Maounique
  • Maounique Host Rep, Veteran
    edited May 2015

    jvnadr said: the bandwidth will be combined.

    Indeed, this is why I recommend people split their installations over multiple instances for flexibility: DB separately, web server separately, storage, etc. Each instance, no matter how small, has the same traffic allocation, and you can upgrade them one by one, cloning and then switching the IP/NAT for uninterrupted service, as well as load balancing, etc. So one big web hosting instance with everything included might be more expensive and less flexible than a combination of three instances, for example, and the flexibility offered can be priceless in some situations.
    One instance: 1 TB of upload a month. Three instances: 3 TB of upload a month in total, no matter which of them does the actual uploading.

  • afonic Member

    @Maounique Any updates on how VENOM was dealt with? I have a XenPower VPS but didn't get any emails or see it restart.
