microLXC Public Test

Comments

  • How to try?

  • @redvi4 said:
    How to try?

    Read the first post!

  • Neoon Community Contributor, Veteran
    edited August 2020

    Melbourne suffered total data loss, everything is toast.
    Due to a corrupted array, recovery attempts have been made, but they were not successful.

    I have rebuilt the AU node; however, due to imageserver fuckery, the deployment of new containers will be delayed for a bit.
    I will send the affected users a message as soon as it's ready.

    Thanked by: bdl
  • @Neoon said:
    Melbourne suffered total data loss, everything is toast.
    Due to a corrupted array, recovery attempts have been made, but they were not successful.

    I have rebuilt the AU node; however, due to imageserver fuckery, the deployment of new containers will be delayed for a bit.
    I will send the affected users a message as soon as it's ready.

    Can we request/change to another location? Like SG?

  • Please be patient. This is the weekend.

  • Neoon Community Contributor, Veteran

    @add_iT said:

    @Neoon said:
    Melbourne suffered total data loss, everything is toast.
    Due to a corrupted array, recovery attempts have been made, but they were not successful.

    I have rebuilt the AU node; however, due to imageserver fuckery, the deployment of new containers will be delayed for a bit.
    I will send the affected users a message as soon as it's ready.

    Can we request/change to another location? Like SG?

    Yes, I removed the dead beef from the affected accounts.
    You can log in, click Deploy on Manage, and deploy to a different location if you want.

    Since the recent patch, you can destroy & deploy on an existing account anytime, in the location you want.

    Yet there is still a 24-hour cooldown for new deployments per account; this will be reduced soon.
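
    Purely as an illustration of the cooldown mechanism described above, a minimal sketch of how a per-account deployment cooldown could be enforced; the function names, in-memory store, and window length are assumptions, not microLXC's actual code:

        # Hypothetical sketch of a per-account deployment cooldown check.
        # The in-memory store and 24-hour window are assumptions for
        # illustration; this is not microLXC's actual implementation.
        import time

        COOLDOWN_SECONDS = 24 * 60 * 60  # 24-hour cooldown per account
        last_deploy = {}  # account id -> unix timestamp of last successful deploy

        def can_deploy(account_id: str) -> bool:
            """Return True if the account is outside its cooldown window."""
            last = last_deploy.get(account_id)
            return last is None or time.time() - last >= COOLDOWN_SECONDS

        def record_deploy(account_id: str) -> None:
            """Only a successful deployment starts a new cooldown."""
            last_deploy[account_id] = time.time()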

    Thanked by: bdl
  • bdl Member

    @Neoon said:

    @add_iT said:

    @Neoon said:
    Melbourne suffered total data loss, everything is toast.
    Due to a corrupted array, recovery attempts have been made, but they were not successful.

    I have rebuilt the AU node; however, due to imageserver fuckery, the deployment of new containers will be delayed for a bit.
    I will send the affected users a message as soon as it's ready.

    Can we request/change to another location? Like SG?

    Yes, I removed the dead beef from the affected accounts.
    You can log in, click Deploy on Manage, and deploy to a different location if you want.

    Since the recent patch, you can destroy & deploy on an existing account anytime, in the location you want.

    Yet there is still a 24-hour cooldown for new deployments per account; this will be reduced soon.

    @Neoon, just wanted to say thank you for such a great service :smile:

    Thanked by: Neoon
  • Neoon Community Contributor, Veteran

    IPv6 died entirely on AU now, so I would not expect a fix before Monday.

  • bdl Member

    @Neoon said:
    IPv6 died entirely on AU now, so I would not expect a fix before Monday.

    died:beef :blush:

  • @Neoon said:

    @add_iT said:

    @Neoon said:
    Melbourne suffered total data loss, everything is toast.
    Due to a corrupted array, recovery attempts have been made, but they were not successful.

    I have rebuilt the AU node; however, due to imageserver fuckery, the deployment of new containers will be delayed for a bit.
    I will send the affected users a message as soon as it's ready.

    Can we request/change to another location? Like SG?

    Yes, I removed the dead beef from the affected accounts.
    You can log in, click Deploy on Manage, and deploy to a different location if you want.

    Since the recent patch, you can destroy & deploy on an existing account anytime, in the location you want.

    Yet there is still a 24-hour cooldown for new deployments per account; this will be reduced soon.

    Thank you

    8656-9c64-786f-e1d8

  • The sponsor banners are not all the same height. This causes the text below them to shift slightly up and down as the banners cycle. Viewed on mobile.

    59b0-5455-a943-716f Thanks!

  • Is IPv6 still not working / functional in SG?

  • Neoon Community Contributor, Veteran

    @add_iT said:
    Is IPv6 still not working / functional in SG?

    It's correctly configured; however, there is some NDP issue. Working on solving it.
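
    For background on NDP issues like this: routed IPv6 setups often need the host to proxy neighbor discovery for container addresses. A minimal sketch with a placeholder interface and a documentation-prefix address, not the actual SG configuration:

        # Hypothetical sketch: enabling NDP proxying so the host answers
        # neighbor solicitations for a container's IPv6 address.
        # Interface and address are placeholders for illustration only.
        import subprocess

        UPLINK = "eth0"                 # host's upstream interface (assumption)
        CONTAINER_V6 = "2001:db8::100"  # documentation-prefix example address

        # Allow the kernel to proxy neighbor discovery on the uplink.
        subprocess.run(["sysctl", "-w", f"net.ipv6.conf.{UPLINK}.proxy_ndp=1"], check=True)

        # Answer NDP solicitations for the container's address on the uplink.
        subprocess.run(["ip", "-6", "neigh", "add", "proxy", CONTAINER_V6, "dev", UPLINK], check=True)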

  • bdl Member

    @Neoon said:

    @add_iT said:

    @Neoon said:
    Melbourne suffered total data loss, everything is toast.
    Due to a corrupted array, recovery attempts have been made, but they were not successful.

    I have rebuilt the AU node; however, due to imageserver fuckery, the deployment of new containers will be delayed for a bit.
    I will send the affected users a message as soon as it's ready.

    Can we request/change to another location? Like SG?

    Yes, I removed the dead beef from the affected accounts.
    You can log in, click Deploy on Manage, and deploy to a different location if you want.

    Since the recent patch, you can destroy & deploy on an existing account anytime, in the location you want.

    Yet there is still a 24-hour cooldown for new deployments per account; this will be reduced soon.

    @Neoon, I'm trying to redeploy after the Melbourne node borkiness and getting a "Cooldown, wait for a bit." - would this be because the 24-hour cooldown period begins from when the dead:beef removal occurred?

  • bdl Member

    cdf9-88b5-52b9-88d7

    (looks like it started working)

  • World Veteran

    Cool project, but any plan to restock JP location?

  • bdl Member
    edited August 2020

    @World said:
    Cool project, but any plan to restock JP location?

    It was in stock an hour or so ago, but it isn't now (as you noticed). Maybe keep checking...

    Thanked by: World
  • World Veteran
    edited August 2020

    dd5e-71f6-1934-1791

    It started to work, but I'm still checking whether the JP location gets restocked. :wink:

  • cpsd Member

    8eb2-db7e-9a60-0332

    Thanks !

  • Hi Neoon! :smile:

    aedb-e9d6-013f-78fb

  • Hi Neoon,

    The container got spawned in no time, and the verification process is also pretty clever
    and worked flawlessly! I had expected manual confirmation, so respect so far :smile:

    I can connect to my container without any problems; I will set it up later today :smile:

    By the way: the website looks good and works well, which is far better than
    many websites I have visited.

  • r0xz Member

    227d-0ad9-8358-5c39

    Thanks!

  • Neoon Community Contributor, Veteran

    @bdl said:

    @World said:
    Cool project, but any plan to restock JP location?

    It was in stock an hour or so ago, but not in stock now (as you noticed). Maybe keep on checking...

    The reason why: someone killed their container.
    But there is no restock planned, since, as on NanoKVM, I won't oversell memory or disk storage yet.

    @Timtimo13 Thanks.

    Current update on SG: waiting for a resolution with Virtualizor.
    On AU, the ticket has been open for 30+ hours without a reply, sad.

    Thanked by: bdl
  • World Veteran

    1169-4f35-4fc6-c229

    After deleting the VM, it needs to verify again :smile:

  • Neoon Community Contributor, Veteran

    @World said:
    1169-4f35-4fc6-c229

    After deleting the VM, it needs to verify again :smile:

    Yes, because reasons, works as intended.

  • 2fcb-ff80-ca31-f14e

  • Neoon Community Contributor, Veteran
    edited August 2020

    Melbourne is back online.
    Additionally, the cooldown has been reduced to 2 hours per account for a new deployment.

    microLXC does feature a full API, but it's not intended to be used for instant deploy, test, and destroy cycles, hence the cooldown.

    At some point the API will feature static keys (currently it only uses dynamic keys); by then it will be publicly documented.
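
    To illustrate the distinction between the two key types, a sketch of dynamic (short-lived, claimed per session) versus static (long-lived) key usage; every URL, endpoint, and field name here is an invented placeholder, since the real API is not yet publicly documented:

        # Purely illustrative sketch of dynamic vs. static API keys.
        # The base URL, endpoint paths, and field names are placeholders;
        # the real microLXC API is not yet publicly documented.
        import requests

        BASE = "https://example.invalid/api"  # placeholder base URL

        def call_with_dynamic_key(user: str, password: str) -> dict:
            # Dynamic key: claim a short-lived token first, then use it.
            token = requests.post(f"{BASE}/login",
                                  json={"user": user, "password": password}).json()["token"]
            return requests.get(f"{BASE}/containers",
                                headers={"Authorization": f"Bearer {token}"}).json()

        def call_with_static_key(api_key: str) -> dict:
            # Static key: a long-lived credential suitable for scripting.
            return requests.get(f"{BASE}/containers",
                                headers={"Authorization": f"Bearer {api_key}"}).json()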

    Thanked by: bdl, Ganonk, World, Pwner
  • World Veteran
    edited August 2020

    @Neoon said:
    Melbourne is back online.
    Additionally, the cooldown has been reduced to 2 hours per account for a new deployment.

    microLXC does feature a full API, but it's not intended to be used for instant deploy, test, and destroy cycles, hence the cooldown.

    At some point the API will feature static keys (currently it only uses dynamic keys); by then it will be publicly documented.

    Just terminated in SG and tried to deploy in AU, but the last step returned 'LET Verification fuckup'; is that because of the cooldown or some other reason?

  • Neoon Community Contributor, Veteran

    @World said:

    @Neoon said:
    Melbourne is back online.
    Additionally, the cooldown has been reduced to 2 hours per account for a new deployment.

    microLXC does feature a full API, but it's not intended to be used for instant deploy, test, and destroy cycles, hence the cooldown.

    At some point the API will feature static keys (currently it only uses dynamic keys); by then it will be publicly documented.

    Just terminated in SG and tried to deploy in AU, but the last step returned 'LET Verification fuckup'; is that because of the cooldown or some other reason?

    Let's say the LET verification is "special".
    It requires the system to claim a new token every 30 days.

    If this goes wrong, it says 'LET verification fuckup'; if you try again, it will likely work.
    It has nothing to do with a cooldown; the cooldown only applies if you do a successful deploy.

    Reminds me to put that token refresh into a cronjob to reduce these errors, but I can never fully eliminate them.
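
    A minimal sketch of what such a cron-driven token refresh could look like; the URL, payload, and file path are assumptions for illustration only:

        # Hypothetical sketch of a token-refresh script run from cron, e.g.
        # daily, so the 30-day LET verification token never silently expires.
        # The URL, credentials, and file path are placeholders.
        import json
        import pathlib
        import requests

        TOKEN_FILE = pathlib.Path("/var/lib/microlxc/let_token.json")  # assumed path

        def refresh_token() -> None:
            # Claim a fresh token and persist it for the deploy pipeline.
            resp = requests.post("https://example.invalid/let/claim-token",
                                 json={"api_user": "microlxc"}, timeout=30)
            resp.raise_for_status()
            TOKEN_FILE.write_text(json.dumps(resp.json()))

        if __name__ == "__main__":
            refresh_token()

        # Example crontab entry (run daily at 04:00):
        #   0 4 * * * /usr/bin/python3 /opt/microlxc/refresh_let_token.py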

    Thanked by: FAT32, World, Ganonk
  • World Veteran
    edited August 2020

    @Neoon said:

    @World said:

    @Neoon said:
    Melbourne is back online.
    Additionally, the cooldown has been reduced to 2 hours per account for a new deployment.

    microLXC does feature a full API, but it's not intended to be used for instant deploy, test, and destroy cycles, hence the cooldown.

    At some point the API will feature static keys (currently it only uses dynamic keys); by then it will be publicly documented.

    Just terminated in SG and tried to deploy in AU, but the last step returned 'LET Verification fuckup'; is that because of the cooldown or some other reason?

    Let's say the LET verification is "special".
    It requires the system to claim a new token every 30 days.

    If this goes wrong, it says 'LET verification fuckup'; if you try again, it will likely work.
    It has nothing to do with a cooldown; the cooldown only applies if you do a successful deploy.

    Reminds me to put that token refresh into a cronjob to reduce these errors, but I can never fully eliminate them.

    UPDATE: After getting this error, I tried to log in again and resubmit the deploy request; sometimes it returns the same error, but sometimes it returns 'Cooldown, wait for a bit.'.
    Since the cooldown should only apply after a successful deploy, that seems strange.

    Now I've deleted the account and will wait for some time, then try to deploy again.


    Thanks for the information. I've tried three times in the last hour, but all attempts returned this error; I will try again later then.
