
Do you have a problem with ZxHost?


Comments

  • Mr_Tom Member, Host Rep

    Falzo said: but esp. for that case I stand by my statement that it would have helped and probably sped things up a lot if he'd asked clients first which services would be needed to try and recover and which could just have been ignored/removed. Transferring some TB or not can make quite a big difference ;-)

    I understand people want to get access to their VM, but I also know a few people have mentioned that their data isn't required.

    I actually thought about this last night - I'm not sure it would be quicker to weed through all the "yes/no" responses about who wants their data.

    Also, there would possibly be people saying they want access when it isn't really needed.

    Pretty much all the data on my second disk isn't needed, but there is some on the primary I would like to get if possible.

    Personally I'm happy to keep paying for the service while they are working on getting stuff accessible (doesn't need to be a working VM, just access) - not that my monthly amount probably justifies the work that's going into this :/

    I think how long people should wait / ZX should try depends on what's actually going on. Is reverting to an older version of Ceph a possibility? If there's some fix, or the Ceph developers actually come up with an answer, then ideally I'd say wait, but I also get that ZX don't want to spend weeks trying to recover stuff that only a handful of people want.

  • lurch Member
    edited November 2017

    Checked mine via noVNC and it's sitting at the GRUB rescue prompt.

  • chrisp Member
    edited November 2017

    Falzo said: while I understand your situation (hence I wrote he should ask and try to filter out the ones willing to drop it), how long do you expect him to try (and others to wait) before a final cut should be made? no offense, serious question.

    Yes, I agree there are two kinds of people: the ones who are keen to get their data back, and the ones who want a working service. I am certainly in the first group, because I have lots of other servers that I could use right away if I wanted to. I totally agree that he should be more active in here and involve people where it makes sense, especially if it helps to minimize the recovery effort.
    For me it's only some megabytes of SQL backups that I would need, so having access to the VM once more for an hour with degraded performance would already be awesome. And yes, I would wait many weeks to get it, and I would even keep paying for the service meanwhile. Heck, I would even pay for the recovery.

    Edit: Why is it even important for the storage people to get the service working if it will be discontinued at the end of this year anyway? Do you really need a server that will only run for one more month?

  • svmo Member
    edited November 2017

    Letting the cluster run - read only - and having the bug fixed before doing more recovery might be the best course of action ... trying to fix things while the software is still broken seems risky.

    A couple of links to make you appreciate what sort of effort might be required:
    obsidiancreeper.com/2017/05/06/Recovering-from-a-complete-node-failure/

    https://s3itwiki.uzh.ch/display/clouddoc/2016/08/08/Ceph+OSD+flapping+while+resizing+a+big+RBD+image

    Please note the last line in the last link!

    Does anybody have an idea of the scale of the problem - how large is the cluster, and just as importantly, how many customers are affected?

    Does anybody actually have a working service running on the cluster at the moment?
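
As an aside on what "read only" would mean in practice: Ceph has no single read-only switch, but client I/O and data movement can be frozen with cluster-wide OSD flags while a bug is investigated. A minimal sketch using standard `ceph` CLI flags - purely illustrative of the approach svmo describes, not what ZXHost actually ran:

```shell
# Freeze the cluster while the bug is investigated (requires admin access).
ceph osd set pause        # blocks all client reads and writes cluster-wide
ceph osd set noout        # don't mark down OSDs "out", avoiding rebalancing
ceph osd set norecover    # stop recovery I/O
ceph osd set nobackfill   # stop backfill I/O
ceph status               # the flags show up in the health/osdmap output

# Once a patched build is in place, lift the flags in reverse:
ceph osd unset nobackfill
ceph osd unset norecover
ceph osd unset noout
ceph osd unset pause
```

These flags keep the broken software from churning the data further, which matches the "don't fix things while the software is still broken" point above.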

  • svmo said: Letting the cluster run - read only - and having the bug fixed before doing more recovery might be the best course of action

    Read only for 20 minutes and I'm cool with everything that happens next ;)

  • Mr_Tom Member, Host Rep

    Brilliant update from ZX just came through.

    This looks like good news:

    1/ We have fully rebuilt and recovered the storage tier for OS storage disks; any small VMs that only have an OS disk we will work to bring online shortly.

    2/ We are working our way through the storage tier to implement the same patch as above to bring it fully 100% online; this will then allow us to boot VMs that have extra storage disks added to them.

    We are also working on bringing a new environment online to allow the continuation of service going forward and to prevent the issue from recurring.

    Thanks to ZX/Ashley for the continued efforts.

  • That sounds like he has a patch for the assertion failures? Nice to hear, if that's the case.

  • lurch Member
    edited November 2017

    @chrisp said

    Edit: Why is it even important for the storage people to get the service working if it will be discontinued at the end of this year anyway? Do you really need a server that will only run for one more month?

    It was being discontinued because of the current issue, so people would still want it working so they can retrieve their data.

    I have a couple of bits stored that would be nice to have but not critical.

  • Oh, this is good news! The main reason I would like the server back is to migrate data rather than rebuild. A Nextcloud migration is a lot easier than a rebuild :)

  • @lurch said:

    @chrisp said

    Edit: Why is it even important for the storage people to get the service working if it will be discontinued at the end of this year anyway? Do you really need a server that will only run for one more month?

    It was being discontinued because of the current issue, so people would still want it working so they can retrieve their data.

    I have a couple of bits stored that would be nice to have but not critical.

    Sure, I meant the ones saying "just wipe it, nothing important on there". So can we also assume that, with Ceph being fixed, the storage plans live on?

  • chrisp said: Sure, I meant the ones saying "just wipe it, nothing important on there". So can we also assume that, with Ceph being fixed, the storage plans live on?

    Personally I'd prefer to get a refund. I just replaced my backup location with another provider, and ZXhost's Ceph I/O speed was... emm, slow. So getting a refund now would work much better for me, but... we will see. I think refunding everyone could be an issue for Ashley.

  • @Mr_Tom said:
    Brilliant update from ZX just came through.

    This looks like good news:

    1/ We have fully rebuilt and recovered the storage tier for OS storage disks; any small VMs that only have an OS disk we will work to bring online shortly.

    2/ We are working our way through the storage tier to implement the same patch as above to bring it fully 100% online; this will then allow us to boot VMs that have extra storage disks added to them.

    We are also working on bringing a new environment online to allow the continuation of service going forward and to prevent the issue from recurring.

    Thanks to ZX/Ashley for the continued efforts.

    I didn't get this - did it come in an email this morning?

  • Mr_Tom Member, Host Rep

    Yeah.

    I've not received all of the emails mentioned in the thread mind, so perhaps it's customers on specific nodes, etc?

  • etilico Member
    edited November 2017

    @Mr_Tom said:
    Yeah.

    I've not received all of the emails mentioned in the thread mind, so perhaps it's customers on specific nodes, etc?

    Yeah, me too, no mail received and still an error "Could not create PVE2_API object" in the CP

  • Mr_Tom Member, Host Rep

    @etilico said:

    @Mr_Tom said:
    Yeah.

    I've not received all of the emails mentioned in the thread mind, so perhaps it's customers on specific nodes, etc?

    Yeah, me too, no mail received and still an error "Could not create PVE2_API object" in the CP

    I'm getting that now, but I was getting "No route to host".

    I figured they're still busy sorting things out so just waiting to see what happens :)

  • etilico said: Yeah, me too, no mail received and still an error "Could not create PVE2_API object" in the CP

    Same here, logged into the billing portal all old news and "Could not create PVE2_API object"

  • rdes Member
    edited November 2017

    This offer was too good to be true :(. I already moved my stuff to wishosting's KVM NAT storage offer; hope that one will be better.

  • emptyPD Member
    edited November 2017

    @epaslv said:

    etilico said: Yeah, me too, no mail received and still an error "Could not create PVE2_API object" in the CP

    Same here, logged into the billing portal all old news and "Could not create PVE2_API object"

    Same, two VMs with "Could not create PVE2_API object".

  • Hi guys, any news? Has anyone got their VM back? Mine is still showing the "Could not create PVE2_API object" message.

  • MikePT Moderator, Patron Provider, Veteran

    I'm really happy to see that Ashley was/is able to sort the issue. Red Hat has probably issued a patch for this specific bug. Hopefully they'll keep the business going strong; they seem to have quite a lot of loyal clients.

  • My problem with ZX Host is that they don't have the slightest mention of Sinclairs anywhere. I bet they don't have any Sinclair machines in their server farm.

  • @emptyPD said:
    hi guys, any news? someone get his vm back ? mine still with "Could not create PVE2_API object" message

    Not yet. During the last days it seemed he sent out a mail every day around noon (European timezone), though it didn't reach everyone's inbox. Let's see if he keeps this up. Most likely restoring the second cluster will take much longer than the mentioned OS storage :/

    Just for fun, do some math: the VMs get IDs in Proxmox; on his nodes they seem to start with the node number followed by two more digits, e.g. from 300 to 399, which makes me guess he could have about 100 VMs per node. Four nodes equal about 400 VMs. If on average each has at least 1TB attached, this gives an estimated size of 400 TB or more for the Ceph cluster...
    Working on or even moving such an amount of data will take time, no matter what.

    I don't think we're going to see real results before the weekend.
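
To make that back-of-envelope arithmetic explicit, here is a minimal Python sketch. The node count, the 100-IDs-per-node guess, and the 1 TB average are all assumptions from the post above, not confirmed figures from ZXHost.

```python
# Back-of-envelope estimate of the cluster size, following the reasoning
# above. Every number here is a guess from the thread, not a confirmed figure.

NODES = 4               # assumed Proxmox node count
VMS_PER_NODE = 100      # VM IDs 300-399 on one node suggest ~100 VMs per node
AVG_TB_PER_VM = 1       # assumed average attached storage per VM, in TB

total_vms = NODES * VMS_PER_NODE
estimated_tb = total_vms * AVG_TB_PER_VM

print(f"~{total_vms} VMs, ~{estimated_tb} TB in the Ceph cluster")
# → ~400 VMs, ~400 TB in the Ceph cluster
```

Even if the per-VM average is off by a factor of two, the conclusion is the same: moving hundreds of TB takes days, not hours.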

  • Mr_Tom Member, Host Rep

    Falzo said: just for fun, do some math: the VMs get IDs in Proxmox; on his nodes they seem to start with the node number followed by two more digits, e.g. from 300 to 399, which makes me guess he could have about 100 VMs per node

    Just curious, Falzo - do you work for ZX? Or do you provide their nodes?

  • @Mr_Tom said:

    Falzo said: just for fun, do some math: the VMs get IDs in Proxmox; on his nodes they seem to start with the node number followed by two more digits, e.g. from 300 to 399, which makes me guess he could have about 100 VMs per node

    Just curious, Falzo - do you work for ZX? Or do you provide their nodes?

    neither/nor.

    Just a long-time customer with some interest in the technical background, always trying to watch the details closely ;-)
    Also a heavy user of Proxmox for my own dedis and stuff, so I might just know how to spot some things that aren't obvious at first glance...

    I am often just guessing around like everyone else, but I always try to have more facts to consider and hopefully come closer to reality with that.

    I honestly can't tell if the nodes are (partially) rented from 23media and/or if Ashley is colocating there. For this whole storage setup I'd more likely assume the latter...

  • It's so weird: my instance is on cn1, but now I can't even find it in the client area.
    5 days have passed with no news. Hope those poor guys dealing with this disaster are still fine.
    I haven't decided whether to stay or move to an alternative yet. It's really a pity.

    @Falzo said:
    had a short glimpse into the status and as far as I can tell not much changed.

    the low RAM usage suggests no VMs are running; the CPU usage might be a hint of sync/rebuild activity; still one node down. It's been looking the same since yesterday...

    I think Ashley is still trying to recover something... we'll see if he gives another update via mail this morning.

    most likely it is what it is, and he already stated on the Ceph mailing list yesterday:

    However right now the cluster it self is pretty much toast due to the amount of OSD's now with this assert.

    I'd say it's time to ask people if they really want him to continue to recover data or if it's fine to make a cut and throw it all away...

  • MikePT said: I'm really happy to see that Ashley was/is able to sort the issue. Red Hat has probably issued a patch for this specific bug. Hopefully they'll keep the business going strong; they seem to have quite a lot of loyal clients.

    But nothing has been sorted out yet - all VMs are still down, and there's been no communication for 5 days.

  • johnklos said: My problem with ZX Host is that they don't have the slightest mention of Sinclairs anywhere. I bet they don't have any Sinclair machines in their server farm.

    Showing your age there!

  • Are there any alternative services close to my existing ZXHost configuration in a similar price range? I believe it is around 50 to 60 USD yearly. I just use it as a personal Plex server, so it doesn't utilize a lot of resources.
    2 VCPU Core
    1TB Disk Space
    2GB DDR3 Memory

  • dfroe Member, Host Rep

    @nachobeard said:

    Are there any alternative services close to my existing ZXHost configuration in a similar price range?

    Obviously not in that price range, but have a look at this thread for probably the best alternative options we are going to have.

    https://lowendtalk.com/discussion/129842/wanted-dirt-cheap-storage-for-secondary-backups-eu

    Basically there will be some nice Black Friday offers from HostHatch and UltraVPS (aff). But you will definitely get lower specs for the same price compared to ZXHost.

  • @Falzo said:
    had a short glimpse into the status and as far as I can tell not much changed.

    the low RAM usage suggests no VMs are running; the CPU usage might be a hint of sync/rebuild activity; still one node down. It's been looking the same since yesterday...

    I think Ashley is still trying to recover something... we'll see if he gives another update via mail this morning.

    most likely it is what it is, and he already stated on the Ceph mailing list yesterday:

    However right now the cluster it self is pretty much toast due to the amount of OSD's now with this assert.

    I'd say it's time to ask people if they really want him to continue to recover data or if it's fine to make a cut and throw it all away...

    Hi @Falzo, can you check the status again, so we can try to guess what is happening?

    Thanks :)
