Prometeus ?

Comments

  • Maounique Host Rep, Veteran

    elbandido said: So dc2 will be up shortly?

    It should be up already. Is the master not working for you?

  • bruzli Member, LIR

    DC1 ok, DC2 still not responding

  • Instances are unreachable, restart commands are not working, the private network doesn't work, and public IPs do not appear in the master.

  • Ohhh, I love the sound of ping -a

  • Maounique Host Rep, Veteran
    edited March 2015

    Instances are unreachable, restart commands are not working, the private network doesn't work, and public IPs do not appear in the master.

    This is not what I am seeing; please log out and clear your cache.

    I am trying to reproduce it, but based on past experience, it might be that the virtual router got stuck. Restart the network with the clean-up option if this is the case, possibly with the VMs shut down if it fails with them up.
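
    (A minimal sketch of that "restart with cleanup" operation through the CloudStack API, assuming the third-party `cs` Python client; the endpoint, keys, and network name below are placeholders for illustration, not iwStack specifics:)

        from cs import CloudStack  # third-party Apache CloudStack API client ("pip install cs")

        # Placeholder endpoint and credentials -- substitute your own account's values.
        api = CloudStack(endpoint="https://cloud.example.com/client/api",
                         key="YOUR_API_KEY",
                         secret="YOUR_SECRET_KEY")

        # Find the isolated network whose virtual router appears stuck
        # ("my-network" is a hypothetical name used for illustration).
        networks = api.listNetworks().get("network", [])
        target = next(n for n in networks if n["name"] == "my-network")

        # Restart it with cleanup=True, which destroys and recreates the virtual router.
        # This is an asynchronous job; the returned job id can be polled with
        # queryAsyncJobResult.
        job = api.restartNetwork(id=target["id"], cleanup=True)
        print(job)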

  • Mine went online 5 minutes ago.

  • @Maounique said:
    This is not what I am seeing; please log out and clear your cache.

    Now it's working.

  • mi5h0 Member

    DC2 looks fine for me. Ping OK, traceroute OK, page loading speed OK.

    Thanked by: vimalware
  • Also my DC2 VMs are OK and the network is fine... @Maounique what about Salvo... is he "alive" after all these hours? :)

  • Maounique Host Rep, Veteran

    Salvatore: I've even got dirt in my underwear, I haven't eaten since yesterday, and I'm tired :)
    hope Murphy has had enough of me
    Sent at 10:23 PM on Tuesday

    Thanked by: alepore
  • One word: "GrandeSalvo!" ... warmest regards. Simone

  • vyala Member

    Two downtimes in recent months; for the first, they credited 2.1 iw for the downtime.
    Generally a good service, good people.

  • Maounique Host Rep, Veteran

    The credit is related to the services active at the time of the downtime. It is supposed to cover one week of running those active services.

  • @vyala said:
    Two downtimes in recent months; for the first, they credited 2.1 iw for the downtime.
    Generally a good service, good people.

    Forgive me if I am being cynical, but I don't care whether my hosting provider is good people. I won't go to the movies with them; I need to run my business.

    In all fairness, iwStack is honest, the system is pretty stable, you have full CloudStack-style control and dozens of features, the VMs are very fast (faster than a lot of providers claiming they have "SSD powered" VMs), and the prices are just great.

    However, a host that cannot be reached by ticket, email, phone, or Twitter for OVER 7 HOURS when a major problem occurs is not professional.

    Sadly, because I liked the servers and have spent many hours configuring them, I will migrate all my services to Linode by the end of the month. I wish them the best, and I know I will return some day (I'll probably get a Xen VPS to hold backups too).

  • Maounique Host Rep, Veteran

    I have to admit that IWStack, in spite of the HA and failover features, has had less uptime than our staple Xen plans. CloudStack is not very stable at this time, in spite of our bugfixes.
    But the last incident was impossible to predict and hit only the people who had private links between the DCs. We now have another link through another location and will adapt both DC presences to have redundant exits and links. We thought two redundant links were enough, but apparently they ran pretty close to each other, so that didn't cut it.

    I also admit this was our fault for not communicating better, and mine in particular. Also, WHMCS was down because it was "in between" DCs: we had just moved the database to the new DC and were about to replicate it in the old one, but this happened right when it was not yet replicated; only the frontends were doubled, and that was the issue.
    It was a combination of bad luck and some screw-up due to the timing of it, with me being away for the day.

    Remember, we give money back when it is our fault, so people wishing to move out can apply for a refund.

  • vyala Member

    @afonic,

    I agree with your points. I had a production deployment with iwStack but moved to Azure since they provided us a BizSpark offer for 3 years, which is enough for us. Our staging server and backups are still with iwStack; we have a good amount of credits and no plan to close the account.

    Let's give them one more chance; maybe they just had a bad time.

  • Maounique Host Rep, Veteran
    edited March 2015

    vyala said: azure

    You may wish to pick AWS or Google for better uptime:

    http://www.computerweekly.com/news/2240238379/Microsoft-Azure-had-more-downtime-than-main-cloud-rivals

    IWStack performed better in 2014 than Azure, roughly 3 times better.

    Azure Object Storage was down for 10.89 hours and Azure Virtual Machines experienced 42.94 hours of downtime globally in 2014. Azure had 241 outages in total.

    Thanked by: linuxthefish
  • Nice post @Maounique

    Thanked by: netomx
  • Did anything happen at 06:45 GMT this morning? I had an alert from Uptime Robot for one of the sites I host on iwStack; it looks like it recovered about 15 minutes later.

  • Maounique Host Rep, Veteran

    No, the only issue we had was a server needing a reboot in Dallas (OVZ), which is not part of iwStack. It could have been an issue with your VM.

  • netman Member
    edited March 2015

    My iwStack server also had a problem at that time, with no connectivity and the iwStack control panel showing it as neither running nor stopped.

    A reboot command gave an error message in the control panel, but while I was writing a ticket everything started working again (and the reboot command got executed).

    P.S. My outage also lasted about 15 minutes.

  • Maounique Host Rep, Veteran

    Please give me your details in a ticket. This might have been a case of migration to the newer storage; it is happening in the background all the time, but it should not involve downtime that long, at most a few seconds.

  • @Maounique

    I did make the ticket, but closed it again when everything started working: #441429

  • @Maounique - #796437,

    I'm not overly bothered by it as it was only a short downtime and it's not a critical site, but it's always good to know what caused it if possible.

  • Maounique Host Rep, Veteran
    edited March 2015

    That was an example of self-healing working. One node was rebooted by the orchestrator due to some problems with its storage connectivity, and the instances were started on other nodes to prevent data inconsistency. I didn't notice; I thought the reboot was part of the tests Salvatore was doing to check the health of everything after the power failure last month.
    Even though he did not initiate it, it was an interesting test and it came out as working (HA and failover).
    When we do such tests, we first empty the node of live VMs and move some test ones onto it, so I had no idea live VMs were impacted because, as I said, I thought it was Salvatore testing things.

  • vyala Member

    @Maounique,

    At Azure we have free credit; otherwise we are happy with iwStack. For our next product, we shall choose iwStack only.

    One comment on the iwStack VLAN feature: we found that it is expensive to use the virtual LAN, as there is an added cost of about 4.3 Euro per month per instance. When we talk about a cloud solution, it is also about interconnection between nodes, like replicas and web servers; with internal IPs we can exchange data without exposing ports.

    Can iwStack bring that cost down? Initially I thought we needed to pay 4.3 Euro per VLAN, but that was not the case; for every instance on the VLAN we need to pay the additional cost.

  • @vyala said:
    Maounique,

    Can iwStack bring that cost down? Initially I thought we needed to pay 4.3 Euro per VLAN, but that was not the case; for every instance on the VLAN we need to pay the additional cost.

    Are we talking about the virtual router/network? I thought that was charged per virtual network, not per machine connected.

  • Maounique Host Rep, Veteran
    edited March 2015

    It is per instance.
    Those are system VMs; the user has no access to them, and the more usage, the more load on them. We did not find a way to bill them incrementally, meaning, for example, 10 EUR for the network itself and 0.5 EUR for every VM using it. So we brought the cost down and charge per VM on the virtual network.
    This is usually an advanced feature needed by companies or advanced users and will be used only when needed, as we have the shared network alternative for people who simply need a VM to install their own ISO on, for example. For the intended target, 4 EUR per VM will be worth it, given the security advantages and the free traffic between what are usually large VMs (4-8 GB of RAM and hundreds of GB of disk usage). For a regular user wishing to host his cat pictures, it is overkill, of course, and he will not use it.
    In fact, we do not recommend IWStack to newbies, as the interface is cumbersome, but we will be launching a low-end version entirely controlled by our interface from the billing panel: no CloudStack access, separate deployment, local SSD storage, etc. Some kind of DO, but with your own ISO, free kernel choice, fast deployment and storage, etc., based on XenServer.

    We already have a high-end version: some RHEV 6 customers tried iwStack and liked it, so we deployed a business version that is not accessible from our IWStack but is sold separately. What remains is a low-end, simplified version so we can cover all needs. It will probably be marketed under the XenPower brand, possibly XenPower-C, with the C from CloudStack.

  • @Maounique It's not always used for large VMs though; some people (like me) use it simply because they'd rather have the traffic firewalled outside of the VM than rely on the VM itself.
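
    (For context: on an isolated CloudStack-style network, the virtual router sits in front of the guest and applies ingress firewall and NAT rules before traffic ever reaches the VM. A minimal sketch of that idea, assuming the third-party `cs` Python client; the endpoint, credentials, IDs, port and CIDR below are placeholders for illustration only:)

        from cs import CloudStack  # third-party Apache CloudStack API client ("pip install cs")

        # Placeholder endpoint/credentials and IDs -- substitute real values.
        api = CloudStack(endpoint="https://cloud.example.com/client/api",
                         key="YOUR_API_KEY",
                         secret="YOUR_SECRET_KEY")

        NETWORK_ID = "net-uuid"  # hypothetical isolated-network id
        VM_ID = "vm-uuid"        # hypothetical instance behind the virtual router

        # Pick the public IP that the virtual router NATs for this network.
        ip = api.listPublicIpAddresses(associatednetworkid=NETWORK_ID)["publicipaddress"][0]

        # Allow SSH only from a trusted range; anything else is dropped at the
        # virtual router and never has to be handled by the guest's own firewall.
        api.createFirewallRule(ipaddressid=ip["id"], protocol="tcp",
                               startport=22, endport=22,
                               cidrlist="203.0.113.0/24")

        # Forward the allowed traffic to the instance on the private network.
        api.createPortForwardingRule(ipaddressid=ip["id"], protocol="tcp",
                                     publicport=22, privateport=22,
                                     virtualmachineid=VM_ID)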

  • Maounique Host Rep, Veteran

    dragon2611 said: rather have the traffic firewalled outside of the VM than rely on the VM itself

    I do not understand. How do you firewall the traffic outside the VM?
