
microLXC Public Test


Comments

  • and SG is gone again :(

  • Neoon Community Contributor, Veteran
    edited December 2020

    @Asim said:
    and SG is gone again :(

    I did not say it's fixed; I am still working with the dev, trying to find out why LXD is acting up.
    Only the deployment is affected.

    Thanked by 1 Asim
  • Neoon Community Contributor, Veteran

    Maintenance announcement:

    • SG will be rebooted tomorrow night to help troubleshoot the ongoing deployment issues.
      All containers will be started automatically after the reboot; it should not take longer than 5 minutes.

    • NL will be physically moved to another data room next week; more information will follow.

    Thanked by 2 Asim ferri
  • @Neoon said: Only the deployment is affected.

    hopefully it will show some space tomorrow and I can get an SG

  • Neoon Community Contributor, Veteran
    edited December 2020

    @Asim said:

    @Neoon said: Only the deployment is affected.

    hopefully it will show some space tomorrow and I can get an SG

    Eh, no? There is no ETA for getting it fixed; it's unlikely that a reboot will fix this issue, but I'll try it anyway.
    Since LXD just plays dead without any error messages, it's guesswork.

  • @Neoon what timezone is this reboot going to happen in?

    Thanked by 1 yoursunny
  • Now I get "Cooldown, wait for a bit." while SG location is showing low stock

  • Neoon Community Contributor, Veteran

    Another Maintenance announcement:

    • NL will be moved into another rack on Tuesday; it will be de-racked and racked up again, so it should not take long. No exact time window on that.
    • LA, SG and NO currently have issues with the LVM backend, which results in poor I/O performance, or failed deployments in SG.
      The plan for these locations is to migrate existing containers to a new LVM pool, which should solve these issues.
      However, the operation could lead to data loss, so if you have a container in one of these locations I advise you to take a backup before it is migrated (see the minimal backup sketch at the end of this post).

    This will take place on the following days:

    LA: Friday, 16:00 GMT, approximately 1 hour
    SG: Friday, 19:00 GMT, approximately 1 hour
    NO: Sunday, 19:00 GMT, approximately 2 hours

    The downtime for each container will likely be shorter, as long as nothing comes in between.
    During the maintenance, you won't be able to control the container via microlxc.net.

    After the maintenance, stock will be available again in these locations, including NO.
    Tokyo is not affected by this; however, if we upgrade NL and AU, we will likely perform this maintenance there as well, or whenever it becomes necessary.

    CH will not be moved to another LVM backend, since I plan to discontinue it due to the network-related issues.
    However, I don't plan to discontinue CH until we have a replacement; I am still looking for one.
    I'll keep you updated on this.
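
    For anyone unsure how to take that backup from inside the container: a minimal sketch in Python is below. The directory list is only an example of what you might want to keep; adjust it to whatever your container actually runs.

    # Minimal backup sketch, run inside the container before the migration.
    # The PATHS list is an example, not a recommendation; pick the directories
    # that actually matter for your setup.
    import os
    import tarfile
    import time

    PATHS = ["/etc", "/home", "/var/www"]   # example paths only
    ARCHIVE = "/root/backup-" + time.strftime("%Y%m%d-%H%M%S") + ".tar.gz"

    with tarfile.open(ARCHIVE, "w:gz") as tar:
        for path in PATHS:
            if os.path.exists(path):
                tar.add(path)               # adds the directory recursively

    size_mib = os.path.getsize(ARCHIVE) / 1024 / 1024
    print("Wrote", ARCHIVE, "(%.1f MiB)" % size_mib)

    # Copy the archive off the container afterwards (scp/rsync), since keeping
    # it on the same LVM pool defeats the purpose.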

    Thanked by 2 ferri adly
  • Neoon Community Contributor, Veteran

    "Thursday" not Tuesday, I am sorry for that mistake.

  • Neoon Community Contributor, Veteran
    edited January 2021

    SG and LA migrations are done; total downtime was about 3 minutes plus 1 minute per container. LA took a bit longer because Virtualizor fucked it up again and broke v6.

    Next & last migration for now is NO on the weekend.

    Thanked by 2 brueggus atomi
  • ee10-bcf6-9873-6246

  • Hello

    153c-f6d6-0f63-60ab

  • @Neoon said:
    SG and LA migrations are done; total downtime was about 3 minutes plus 1 minute per container. LA took a bit longer because Virtualizor fucked it up again and broke v6.

    Next & last migration for now is NO on the weekend.

    It feels much faster now.

  • How is the service?

  • Neoon Community Contributor, Veteran

    NO maintenance is done; it took a bit longer since I needed to request an IPMI.

  • Neoon Community Contributor, Veteran

    NOVOS (Antwerpen) just announced maintenance for tonight; the network will be unreachable for a few seconds up to a few minutes.

  • Neoon Community Contributor, Veteran

    This morning NL had an emergency maintenance due to issues with some switches; this has been solved.
    If you still face any issues, lemme know.

  • Neoon Community Contributor, Veteran

    Patch Notes:

    • Zug has been removed from microLXC; existing containers have been wiped after hitting the deadline.
    • Antwerp was recently added as a replacement for Zug, thanks to Novos.be
    • Package tiny now has an increased port speed of 50Mbit instead of 25Mbit*
    • Overall I/O limits have been increased up to 100MB/sec*
    • Several nodes are now using a local image store to shorten deploy times and increase reliability
    • Removed Fedora 31 since it is EOL; Fedora 33+ will no longer be supported, since dnf runs OOM on 256MB with Fedora 33
    • Added CentOS Stream

    *New deployments

    Thanked by 4 brueggus _MS_ atomi adly
  • @Neoon said: *New deployments

    So... Terminate and request a new instance?

  • brueggus Member, IPv6 Advocate

    @_MS_ said:

    @Neoon said: *New deployments

    So... Terminate and request a new instance?

    You can delete the old and create a new instance from the panel. No need to file a request again.

  • Neoon Community Contributor, Veteran
    edited January 2021

    @_MS_ said:

    @Neoon said: *New deployments

    So... Terminate and request a new instance?

    • The I/O limit does change on reinstall, since the container is recreated
    • The port speed only changes if you delete the instance and deploy it again

    But keep in mind, it's a shared environment; these are only hard limits / peak limits which should not be used 24/7.
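
    If you want to check which I/O limit your container actually got, a crude sequential-write test like the sketch below (plain Python, nothing to install) is usually enough to tell whether you are anywhere near the 100MB/sec ceiling. Treat the number as a ballpark only; neighbours and caching skew it.

    # Crude sequential-write check, run inside the container.
    # fsync forces the data to reach the disk; without it the page cache
    # makes the number meaningless. Output is a rough estimate only.
    import os
    import time

    CHUNK = b"\0" * (4 * 1024 * 1024)    # 4 MiB per write
    TOTAL = 256 * 1024 * 1024            # 256 MiB total, fits a tiny package

    start = time.monotonic()
    with open("iotest.bin", "wb") as f:
        written = 0
        while written < TOTAL:
            f.write(CHUNK)
            written += len(CHUNK)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.monotonic() - start

    print("~%.0f MB/s sequential write" % (TOTAL / elapsed / 1024 / 1024))
    os.remove("iotest.bin")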

    @brueggus said:

    @_MS_ said:

    @Neoon said: *New deployments

    So... Terminate and request a new instance?

    You can delete the old and create a new instance from the panel. No need to file a request again.

    Indeed, since a few patches back, if you have an account you can delete your instance anytime and deploy a new one in any location you like, without verification.

    Thanked by 2 _MS_ yoursunny
  • _MS_ Member
    edited January 2021

    @brueggus said:

    @_MS_ said:

    @Neoon said: *New deployments

    So... Terminate and request a new instance?

    You can delete the old and create a new instance from the panel. No need to file a request again.

    Thanks, I knew about the Reinstall option, but I thought that maybe the instance needed to be recreated/redeployed by the admin/Neoon for the new limits to take effect.
    I was half-correct.

    @Neoon said: But keep in mind, it's a shared environment; these are only hard limits / peak limits which should not be used 24/7.

    That's totally fine :).

  • @Neoon

    Thank you for this great service. My server is in Sandefjord, and a couple of times in the past I found it stopped. Is this due to my account usage exceeding some limit? Is there such a limit?

    I use the server for development, so I run a very small web server with almost no CPU, RAM or bandwidth usage.

  • Neoon Community Contributor, Veteran
    edited January 2021

    @jamuja said:
    @Neoon

    Thank you for this great service. My server is in Sandefjord, and a couple of times in the past I found it stopped. Is this due to my account usage exceeding some limit? Is there such a limit?

    I use the server for development, so I run a very small web server with almost no CPU, RAM or bandwidth usage.

    We had a few I/O alerts recently, and if a container abuses I/O it is usually stopped.
    You should have gotten a message.

    Otherwise, containers aren't being stopped yet.

  • I forgot my password for the MicroLXC account.
    Here's a token to reset it. Thank you

    e9d1-2985-babb-2f08

  • f90b-5b9d-22ca-6548

  • b887-e5c2-02c3-0b80

  • d8c3-4a7e-68e3-cbf0

    Thanked by 1 Ganonk
  • @nyamenk said:
    d8c3-4a7e-68e3-cbf0

    deployed.

  • v3ng Member, Patron Provider

    c381-152f-b4fc-c940
