
Prometeus ?


Comments

  • @alepore said:

    check rage4dns: it's a project uncle Sal believes in, and it's awesome. I'm using it; it has failover backed by NewRelic or the API, plus GeoDNS, and you can change DNS in case of failure in less than 5 seconds

  • aleporealepore Member
    edited February 2015

    elbandido said: check rage4dns

    thanks, will take a look.

    Anyway, I'm realising DNS isn't really the main problem... several things need to be perfectly replicated in a typical web app of mine: PostgreSQL, Redis, Elasticsearch...

  • @Maounique said:
    Hello!

    Not all shared hosting is up; we have some legacy servers (non-biz, non-dedicated resources) which still need fixing, as well as the myoffload.
    IWStack still has issues, generated by heavy load on the master and by broken snapshots which were probably in progress when the power went down. I am also guessing at least some people have issues with the virtual routers, which are not necessarily on the same nodes as their main VMs and might have started later or not at all, leaving the private networking broken. If you have private networking, shut down all VMs which use it, wait some 10 minutes, and start one VM; test the connection and, if it works, start the others. Restarting the network with the clean up option will also help while the VMs using it are down.
    We will be solving these later on.

    Hello,
    just a gentle question: how is the iwStack node doing...
    thank you
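As a side note, the recovery steps described above (stop the VMs on the broken isolated network, restart the network with the clean up option to recreate the virtual router, then start one VM and test before starting the rest) can also be scripted against the CloudStack API, since iwStack is CloudStack-based. A rough sketch only: the endpoint and IDs below are placeholders, and real requests must additionally be signed with your API key/secret, which is omitted here.

```python
# Sketch of the recovery procedure as (unsigned) CloudStack API calls.
# Endpoint and IDs are placeholders; real calls need apiKey + signature.
from urllib.parse import urlencode

API = "https://cloud.example.com/client/api"  # placeholder endpoint

def api_url(command, **params):
    """Build the query URL for a CloudStack API command (unsigned sketch)."""
    query = {"command": command, "response": "json", **params}
    return API + "?" + urlencode(sorted(query.items()))

# 1. Stop every VM attached to the broken isolated network.
print(api_url("stopVirtualMachine", id="vm-1111"))
# 2. Restart the network with cleanup=true to recreate the virtual router.
print(api_url("restartNetwork", id="net-2222", cleanup="true"))
# 3. After ~10 minutes, start one VM, test connectivity, then start the rest.
print(api_url("startVirtualMachine", id="vm-1111"))
```

The `restartNetwork` command's `cleanup` flag is what the "clean up option" in the panel maps to: it destroys and recreates the virtual router rather than just rebooting it.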

  • @Maounique there is no mention of my package "HA VPS - VM Extra Small". So what's the update for my VPS? It was ordered at the very first start and has no control panel. Just want to make sure you are still working on fixing this VPS package (because it's been down 15+ hours now) .... thank you

  • CrabCrab Member
    edited February 2015

    @Maounique said: The Xen zone of iwstack came back immediately, as it has a very different design, built with the experience we got after 1 year of fixing bugs in CloudStack. Still, without the master, it will not work to manage VMs.

    So if people want to play safe, should they be creating instances on Xen? Are there any other differences than more expensive SAN between KVM and XEN for IWStack?

  • MaouniqueMaounique Host Rep, Veteran
    edited February 2015

    Hello!

    Crab said: So if people want to play safe, should they be creating instances on Xen? Are there any other differences than more expensive SAN between KVM and XEN for IWStack?

    The old VMs created on iwStack, when the Xen zone did not exist, were on an out-of-the-box solution with many nodes in a cluster and regular storage.
    We ran into trouble with that pretty soon, as the storage was slow and unreliable, and in certain conditions space remained allocated but unused.
    The Xen zone was designed to alleviate these problems: a new storage scheme, fewer but more powerful nodes in each cluster, Xen, etc.
    We also created a few new KVM clusters with this storage model and fewer nodes in a cluster, and it shows. We cannot move customers from the old KVM clusters without extensive downtime, but if they create a new VM, it will most likely land in the new clusters, and if they create a Xen one, it will certainly go in a Xen cluster.
    Yes, the Xen clusters are safer; we spent more than a year watching how the setup fared before coming up with the solution.
    TBH, we did not think it would be this successful. We thought people just liked the cloud hype and this would fade, but it was not so: not only corporations and serious IT departments love clouds and IaaS, but also semi-pro and hobbyist people. We have since closed the RHEV 6 cloud and moved some old corporate customers to the Xen zone. The VMware zone is the only one left exclusively for corporations.

    zidit said: my package "HA VPS - VM Extra Small"

    That is probably the last remaining one of its kind. Unfortunately it is in a cluster where I do not have access, and Salvatore most likely did not remember there are still customers with those 4+ year old packages. I flagged the ticket for him to look at when he wakes up; he went to bed at 6 am. If you would like, we can offer you a permanent discount for iwStack in compensation, so you can have a control panel like everyone else.

    elbandido said: Hello, just a gentle question: how is the iwStack node doing... thank you

    I put it in the announcement. Basically, everything is up and running, but there are issues with the isolated networks, which need to be restarted with the clean up option to recreate the virtual router: the nodes did not come up at the same time, and many people had their VMs up on one node while the VR was not yet up on another, so the network got messed up. There are also expected issues with people who left a VM booting from ISO, and snapshots stuck because they were in progress when the power went out, some of which left VMs stuck in a starting state. These problems will be fixed on a case-by-case basis, whatever our automated garbage-collector scripts did not manage to tackle.

    Thanked by 1praveen
  • Maounique said: The old VMs created on iwStack, when the Xen zone did not exist, were on an out-of-the-box solution with many nodes in a cluster and regular storage.

    We ran into trouble with that pretty soon, as the storage was slow and unreliable, and in certain conditions space remained allocated but unused.

    Why did you not inform your old customers of this before? I left you, after 14 months and with almost €200 in credit, just because of how slow my servers were, and how slow they were to turn on and off. Turning on a server easily took 5-10 minutes.
    I wrote this in at least one ticket too, but did not get any reply about maybe moving my servers. :(

    I really hope that if I come back one day, I get the best service out there. €200 is a lot to lose, but I did not want such slow servers anymore.

  • MaouniqueMaounique Host Rep, Veteran
    edited February 2015

    Hello!

    You can always ask for a refund, but we did notify everyone and offered a promo when we launched the Xen zone. We said many times over that it is the zone for business, where people have learned to depend on Xen for critical applications, while KVM will remain for the hobbyists who think it is way better than Xen. KVM was our training ground to test the market and this setup, as we previously only had experience with VMware and RHEV 6, plus some Proxmox and XenServer/XCP.
    The slow start was also due to the storage model; the new KVM zone starts in under a minute, I think.

    Here is the Xen announcement, which you should have had since the start of September last year:

    we are pleased to inform you that a new deploy zone is available in iwStack: MILANO/DC2/XEN
    This zone (as you can guess from the name itself) is located in another datacenter in the same Milan campus and is composed of XEN hypervisors only.
    The new hosts are powered by dual E5-2680 2.7 GHz CPUs and dual-port 8 Gbps fibre channel HBAs, which connect each server to a brand new, powerful enterprise SAN capable of providing more IOPS to the instances.
    Another secondary storage NAS has been deployed so that template and snapshot activities and files on the new zone are not mixed with the MILANO zone.
    
    -snip-
    
    As an incentive to create some instances and try the new zone, we are offering a 25% discount on your next iwCredits purchase using the coupon code: TRYXENIWSTACK
    The coupon is valid for a single purchase of any of the available addons (10, 30, 60, 100, 200) and the offer is valid for this month only.
    
    Thank you!
    
    Best regards
    
    Prometeus team
    

    It was clearly meant for business: the lowest instance is 1 GB, there is no cheap storage option, and there are no fool-proof safeties such as a generated password; instead there is a fixed one, which people will change because they are pros and know the risks, etc.

    Thanked by 1myhken
  • For some weird reason the HTTPS iperweb client area is giving me SSL errors, but HTTP works normally. It baffles me how it's connected to the outage, but it worked before. I was actually convinced for a long time that the website was down, since my bookmark was https :)

  • admin@home# curl -v https://my.iperweb.com/clientarea/
    * About to connect() to my.iperweb.com port 443 (#0)
    *   Trying 195.88.5.7...
    * connected
    * Connected to my.iperweb.com (195.88.5.7) port 443 (#0)
    * successfully set certificate verify locations:
    *   CAfile: none
      CApath: /etc/ssl/certs
    * SSLv3, TLS handshake, Client hello (1):
    * error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
    * Closing connection #0
    curl: (35) error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
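For what it's worth, that `unknown protocol` error from OpenSSL usually means the server answered in plain HTTP on port 443 instead of starting a TLS handshake, e.g. a web server that came back after the outage without its SSL vhost configured. A minimal local reproduction of the symptom (the throwaway server and port are made up, not iperweb's setup):

```python
# Reproduce "unknown protocol": a TLS client talking to a server that
# answers in plaintext cannot parse the reply as a TLS ServerHello.
import socket
import ssl
import threading

def plaintext_server(listener):
    """Accept one connection and answer in plain HTTP -- no TLS at all."""
    conn, _ = listener.accept()
    conn.recv(4096)  # this is actually the client's TLS ClientHello
    conn.sendall(b"HTTP/1.1 200 OK\r\n\r\nhello")
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=plaintext_server, args=(listener,), daemon=True).start()

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
try:
    with socket.create_connection(("127.0.0.1", port)) as raw:
        with ctx.wrap_socket(raw):
            result = "handshake succeeded"
except (ssl.SSLError, OSError) as exc:
    result = f"TLS handshake failed: {exc.__class__.__name__}"

print(result)
```

Depending on the OpenSSL version, the failure is reported as `UNKNOWN_PROTOCOL` or `WRONG_VERSION_NUMBER`; either way the fix is server-side (restore the TLS listener on 443), not in the client.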
    
  • MaouniqueMaounique Host Rep, Veteran

    There are still issues with the shared hosting and minor things to solve, but everyone had to take a break or there would have been big human errors due to people being tired. When you almost wipe a server, it means it is time to go to bed.

    Thanked by 1tomsfarm
  • Is that the reason why rage4.com was down for several hours?

  • MaouniqueMaounique Host Rep, Veteran
    edited February 2015

    Yes, the site, but the service was up. They have dedicated servers, not using our shared hosting.

  • marrcomarrco Member
    edited February 2015

    @Maounique, you have a PM. Sorry to bother, but after each restart my server is slowly consuming more disk space; right now it's 96% full. Please see http://i.imgur.com/VjG23ag.png
    Do you have any update for me?

    -edit: Maounique contacted me and solved the issue in a few minutes. THANKS for the terrific support.

  • MaouniqueMaounique Host Rep, Veteran
    edited February 2015

    There are multiple reports already about similar issues on pm63; I just got the third one. VMs going down with no apparent cause. This is a really strange issue, since 90% of the VMs have been up without issues on the node from the beginning, and there is no obvious sign of something being wrong.
    I am not a KVM expert, so I asked Uncle to take a look.

  • @Maounique said:
    Yes, the site, but the service was up. they have dedicated servers, not using our shared hosting.

    Yup, I meant the site only. Their service itself has been stellar, only encountered 1min of downtime in a year due to a misconfiguration on their end.

  • @Maounique are the Xen instances Xen HVM or Xen PV?

  • MaouniqueMaounique Host Rep, Veteran
    edited February 2015

    In the cloud, they can be both, depending on the template. The rest of our offering is PV only.

  • @Maounique have you released the post-mortem information yet? My client is asking why his 16 GB server was down for an entire day, and what is being done to ensure it can't happen again. He's already making noises about returning to Rackspace, having forgotten the outages we had with them!

  • MaouniqueMaounique Host Rep, Veteran
    edited February 2015

    No, not yet. Salvatore believes the switchover from UPS to generators malfunctioned for 2 buildings in the campus, but he was busy all day catching up with other things he had to do yesterday.
    We expect to be fully back up with everything tonight, or tomorrow at most. This includes the backlog of individual issues, like manually resolving VMs caught at a bad time, such as during migration or snapshotting, as well as people who were reinstalling and have to be reprovisioned, things like that.
    At this time, there is the error with the shared hosting and some legacy services we forgot about on the first day. 99% of everything is up but, as I said, there are some individual cases caught at a bad moment; we have many thousands of VMs, and it was bound to happen.

  • @Maounique Thanks - I look forward to hearing more once you get caught up with the remaining emergencies.

  • LeeLee Veteran

    From now on, every year, the 17th of February will be known as

    "The great Prometeus power down of 2015"

    Sorry, just trying to be a bit light hearted over it all now it's just about sorted.

    Thanked by 3Shade netomx Pwner
  • @W1V_Lee There was already "the great Italy power down of 2003" - http://en.wikipedia.org/wiki/2003_Italy_blackout

  • LeeLee Veteran

    @rds100 said:
    W1V_Lee There was already "the great Italy power down of 2003" - http://en.wikipedia.org/wiki/2003_Italy_blackout

    Oh, I remember that happening.

  • @Maounique urgent PM for you. Sorry to bother you here, but the new VPS went down too. It looks like there is still a problem on that node.

  • MaouniqueMaounique Host Rep, Veteran
    edited February 2015

    I have 4 complaints now. I looked at it from all the angles I know and nothing seems to be wrong, so I called Uncle to help; it is well beyond my limited KVM expertise. One of those cases when weirdness strikes. I presume all the other 50+ VMs are fine, since I only had 4 complaints for pm63.
    Once everything is working 100%, we will try to compensate everyone. At this time, a week of free service is in the cards; it remains to be seen how that will work out in the billing panels, but for iwStack we have our own billing, so it should be easy, based on the resources in use at the time of the blackout.

  • MaouniqueMaounique Host Rep, Veteran

    We may have found the issue plaguing pm63.

    Please look through the logs for "overheating" reported by sensors if your VM is shutting down by itself from time to time. This might be a KVM bug triggered by some combination of software. If that is the case, uninstall the sensors package(s), as they are not needed in a VM with no physical parts.
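The log check suggested above can be sketched as a simple scan for heat-related messages. The keywords and sample log lines below are illustrative guesses, not the exact strings your distro's kernel or sensors daemon emits:

```python
# Scan log lines for sensor-reported heat problems that could explain
# a VM shutting itself down. Keywords and sample lines are made up.
def find_thermal_events(lines):
    """Return log lines mentioning overheating/thermal/temperature."""
    keywords = ("overheat", "thermal", "temperature")
    return [line for line in lines if any(k in line.lower() for k in keywords)]

sample_log = [
    "Feb 18 03:12:01 vm kernel: CPU0: Core temperature above threshold",
    "Feb 18 03:12:02 vm sensord: overheating detected, shutting down",
    "Feb 18 03:12:03 vm sshd[712]: session closed",
]

for hit in find_thermal_events(sample_log):
    print(hit)
```

If such lines show up, the workaround described above is to remove the sensors packages from the guest (e.g. `apt-get remove lm-sensors sensord` on Debian-family systems), since a VM has no physical sensors to read.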

  • I created a XEN instance in IWStack and it is much snappier than the old KVM one I had. Time to migrate the services over!

  • MaouniqueMaounique Host Rep, Veteran
    edited February 2015

    Old instances (read: older than the last 2-3 months) are on a cluster plagued by storage problems, wasteful and slow. That has nothing to do with the hardware (except for the cheap storage option with SATA) but with the way storage is organized. We did not expect so many small volumes; there are hundreds at 1 GB, the absolute minimum, and we did not plan accordingly. That was at first mitigated by opening a Xen zone; now we have opened 2 more KVM clusters with a totally overhauled storage deployment, over the same hardware. From my tests, templates are deployed in under a minute at night.
    So, even if you insist on staying with KVM (I don't understand why; as you probably know, I have been a Xen fan from the beginning), creating a VM now will most likely land you in the new clusters, and it will fare much better.

  • jvnadrjvnadr Member
    edited February 2015

    I created a small test VM in the Milano zone (KVM) and saw a huge difference from the past. The VM deployed in seconds from a snapshot I had from older instances, had good connectivity to Greece (54 ms ping, ~70 Mbps download and ~35 Mbps upload), and expunged in 3 seconds too. Big improvement, @Maounique!

    Thanked by 2Maounique netomx