
microLXC Public Test


Comments

  • david Member
    edited April 17

    @Neoon said: Automatically with a delay.

    Thanks. The delay, when I did it, was about 10 minutes. I redeployed a 64MB server, and there's still some stock left if anybody wants one in Johannesburg.

    Edit: And now it's out of stock again. Maybe it's based on the number of slots and not the RAM specifically.

  • Neoon Community Contributor, Veteran
    edited April 18

    @david said:

    @Neoon said: Automatically with a delay.

    Thanks. The delay, when I did it, was about 10 minutes. I redeployed a 64MB server, and there's still some stock left if anybody wants one in Johannesburg.

    Edit: And now it's out of stock again. Maybe it's based on the number of slots and not the RAM specifically.

    Yea, if it suddenly goes out of stock, it's either running out of storage or memory.
    The slot limit usually drops to low first before going out of stock.
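
    As a rough illustration of that stock logic, here is a minimal sketch; the function name, fields, and thresholds are all assumptions for illustration, not microLXC's actual code:

```python
# Hypothetical sketch of a node's stock status check.
# Field names and thresholds are assumptions, not microLXC internals.

def stock_status(free_slots, free_memory_mb, free_storage_gb,
                 smallest_plan_mb=64, min_storage_gb=1):
    """Return the stock label a panel might show for a node."""
    # Out of stock when slots, memory, or storage can't fit the smallest plan.
    if (free_slots <= 0
            or free_memory_mb < smallest_plan_mb
            or free_storage_gb < min_storage_gb):
        return "out of stock"
    # The slot limit usually drops the node to "low" before full exhaustion.
    if free_slots <= 2:
        return "low"
    return "in stock"

print(stock_status(free_slots=1, free_memory_mb=512, free_storage_gb=10))  # low
print(stock_status(free_slots=5, free_memory_mb=32, free_storage_gb=10))   # out of stock
```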

    Thanked by 1david
  • dd72-2180-0fa6-5a00

    Thank you <3

  • fadedmaple Member
    edited April 18

    @Neoon said:

    @fadedmaple said:

    @Neoon said:

    @fadedmaple said:

    @Neoon said:

    @Neoon said:
    OS availability updates
    - Added Alpine 3.19 (LXC/KVM)
    - Added NixOS (LXC)

    Alpine is, as before, available from 64MB; NixOS from 128MB.

    NixOS has been disabled again, since people found issues with the template that I didn't see.
    A few Locations will be migrated/moved to Incus in the next days/weeks/months, which should make NixOS run as expected; I will post updates on this.

    That's cool, will all servers migrate to Incus in the future?

    Some will be replaced or upgraded.
    Either a new Machine in the same Location with Incus, or the existing Machine will get a clean reinstall.
    Some Locations are old with old configurations, hence the reinstall.

    I found the 128 MiB containers I created myself with LXD or Incus are exactly 128 MiB. Not complaining about missing a few MiB, just curious 😆

    Because the Packages are created as MB, not as MiB.
    Thanks for pointing it out though, I updated the Packages.

    It looks like the RAM quota limit is a little buggy after changing it to MiB. After seven 128 MiB containers, it is not possible to deploy either an eighth 128 MiB container or a 64 MiB one.

    On what Node did you try to deploy a container?
    Did you already have one or two on that node?

    Thanks for the explanation, I didn't know that. I deployed 3 containers on a single node; it's reasonable to prevent CPU abuse in this situation.
    It does cause some annoyance though. For example, with 384 MiB remaining, if I deploy two 128s first and then one 64, after I destroy a 128 only 192 remain and I can't get the previous 128 MiB back.
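
    For reference on the MB vs MiB point quoted above, the gap is just the base-1000 vs base-1024 difference; a quick check in plain Python (nothing microLXC-specific):

```python
# 128 MB (decimal, base 1000) expressed in MiB (binary, base 1024).
mb = 128
bytes_decimal = mb * 1_000_000          # MB uses powers of 1000
mib = bytes_decimal / (1024 * 1024)     # MiB uses powers of 1024
print(f"{mb} MB = {mib:.1f} MiB")       # 128 MB = 122.1 MiB
```

    So a package defined as 128 MB shows up as roughly 122 MiB inside the container, which matches the "missing a few MiB" observation.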

  • Neoon Community Contributor, Veteran

    @fadedmaple said:
    Thanks for the explanation, I didn't know that. I deployed 3 containers on a single node; it's reasonable to prevent CPU abuse in this situation.
    It does cause some annoyance though. For example, with 384 MiB remaining, if I deploy two 128s first and then one 64, after I destroy a 128 only 192 remain and I can't get the previous 128 MiB back.

    It's not for CPU abuse, it's about preventing people from filling up a single node.
    Technically, in the backend there is no difference between MiB and MB.

    LXD makes that difference for reasons.
    The backend also doesn't make anything unrecoverable.

    If you delete a container or virtual machine, once it's gone, it's gone.
    When you go to the Dashboard, it gets calculated for you again, same when you deploy.

    It could be a bug in the calculation.
    Can you drop me a PM with your current usage, including a screenshot from the Dashboard?

  • @Neoon said:
    It's not for CPU abuse, it's about preventing people from filling up a single node.

    But there are enough ports and IPv6, right? What would be under-resourced if I deployed three 128s vs one 384?

    @Neoon said:
    Technically, in the backend there is no difference between MiB and MB.

    I see. The back end is just numbers.

    @Neoon said:
    The backend also doesn't make anything unrecoverable.

    By recovery I mean redeploying a container with the same RAM.

    @Neoon said:
    When you go to the Dashboard, it gets calculated for you again, same when you deploy.
    It could be a bug in the calculation.

    It's a feature, not a bug, since 192 < 128×2 😂

  • Neoon Community Contributor, Veteran

    @fadedmaple said:

    @Neoon said:
    It's not for CPU abuse, it's about preventing people from filling up a single node.

    But there are enough ports and IPv6, right? What would be under-resourced if I deployed three 128s vs one 384?

    Running out of IPv6 would be funny, but sadly that is basically impossible.
    And before the node runs out of allocated ports, the slot limit would take it out of stock.

    Most nodes will run out of resources before that.

    @Neoon said:
    When you go to the Dashboard, it gets calculated for you again, same when you deploy.
    It could be a bug in the calculation.

    It's a feature, not a bug, since 192 < 128×2 😂

    As said before, send me a DM and I will have a look.
    You also have a progress bar on the Dashboard that shows how much memory you have free.

  • fadedmaple Member
    edited April 18

    @Neoon said:

    @fadedmaple said:

    @Neoon said:
    It's not for CPU abuse, it's about preventing people from filling up a single node.

    But there are enough ports and IPv6, right? What would be under-resourced if I deployed three 128s vs one 384?

    Running out of IPv6 would be funny, but sadly that is basically impossible.
    And before the node runs out of allocated ports, the slot limit would take it out of stock.

    Most nodes will run out of resources before that.

    @Neoon said:
    When you go to the Dashboard, it gets calculated for you again, same when you deploy.
    It could be a bug in the calculation.

    It's a feature, not a bug, since 192 < 128×2 😂

    As said before, send me a DM and I will have a look.
    You also have a progress bar on the Dashboard that shows how much memory you have free.

    Thanks for your patience. The progress bar is correct; I hadn't noticed before that it would cost double. Everything is working as intended.

    Suppose I deployed five 128 containers in different locations: 1024 − 5×128 = 384

    Then I want to deploy three 128 containers in another location, and since the third container costs double, I can only deploy 2×128 + 1×64×2 = 384

    If I terminate one 128 of these three containers: 384 − 128 − 64 = 192 < 128×2

    I hope that explains the situation.

  • Neoon Community Contributor, Veteran

    @fadedmaple said:

    @Neoon said:

    @fadedmaple said:

    @Neoon said:
    It's not for CPU abuse, it's about preventing people from filling up a single node.

    But there are enough ports and IPv6, right? What would be under-resourced if I deployed three 128s vs one 384?

    Running out of IPv6 would be funny, but sadly that is basically impossible.
    And before the node runs out of allocated ports, the slot limit would take it out of stock.

    Most nodes will run out of resources before that.

    @Neoon said:
    When you go to the Dashboard, it gets calculated for you again, same when you deploy.
    It could be a bug in the calculation.

    It's a feature, not a bug, since 192 < 128×2 😂

    As said before, send me a DM and I will have a look.
    You also have a progress bar on the Dashboard that shows how much memory you have free.

    Thanks for your patience. The progress bar is correct; I hadn't noticed before that it would cost double. Everything is working as intended.

    Suppose I deployed five 128 containers in different locations: 1024 − 5×128 = 384

    Then I want to deploy three 128 containers in another location, and since the third container costs double, I can only deploy 2×128 + 1×64×2 = 384

    If I terminate one 128 of these three containers: 384 − 128 − 64 = 192 < 128×2

    I hope that explains the situation.

    If you need more Memory, Helsinki still has the 50% discount; up to 1 GB is available.
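
    fadedmaple's arithmetic can be reproduced with a small model; note the doubling rule here (every container after the second on one node is charged at twice its RAM) is one reading of the thread, not confirmed microLXC behavior:

```python
# Hypothetical model of the quota accounting described above.
# Assumption: on a single node, every container after the second is
# charged at double its RAM against a 1024 MiB account quota.

QUOTA = 1024  # MiB per account (assumed)

def node_cost(rams):
    """Memory charged for one node's containers; third onward counts double."""
    return sum(ram * (2 if i >= 2 else 1) for i, ram in enumerate(rams))

elsewhere = 5 * 128                  # five 128 MiB containers on other nodes
node = [128, 128, 64]                # 128 + 128 + 2*64 = 384 charged
print(QUOTA - elsewhere - node_cost(node))   # 0 MiB remaining

node.remove(128)                     # terminate one of the 128s
remaining = QUOTA - elsewhere - node_cost(node)
print(remaining)                     # 192

# A new 128 would again be the third container on this node, so it
# would be charged 256, which no longer fits: 192 < 128*2.
new_128_cost = node_cost(node + [128]) - node_cost(node)
print(remaining >= new_128_cost)     # False
```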

  • Neoon Community Contributor, Veteran

    NixOS has been enabled again; I replaced the current image with a new one for LXD.
    Testing so far was fine. If you need nesting, don't forget to enable it under Settings.

    Thanked by 1ask_seek_knock
  • There seems to be no "Arch Linux" option in the reinstallation menu for KVM.
    Is that normal? Thank you.

  • Neoon Community Contributor, Veteran

    @ask_seek_knock said:
    There seems to be no "Arch Linux" option in the reinstallation menu for KVM.
    Is that normal? Thank you.

    I added the KVM images; however, I forgot to enable Arch Linux for KVM.
    Fixed.

    Thanked by 1ask_seek_knock
  • Nyr Community Contributor, Veteran

    @neoh said: Second this. Do you plan on using the native wg kernel in the following releases, @Nyr?

    This has now been addressed and the installer will properly use the kernel module if available, even when it is running inside a container.

    Thanked by 1neoh
  • neoh Member

    @Nyr said:

    @neoh said: Second this. Do you plan on using the native wg kernel in the following releases, @Nyr?

    This has now been addressed and the installer will properly use the kernel module if available, even when it is running inside a container.

    Thanks for your hard work. It works properly now.

  • Neoon Community Contributor, Veteran

    OS availability updates
    - Added Ubuntu Noble Numbat (LXC/KVM)

  • Neoon Community Contributor, Veteran
    edited April 27

    @fadedmaple said:
    I've noticed that some containers show wrong IPv6 addresses in the panel.

    Fixed, will now display correctly after deployment / reinstall.

    Thanked by 2david fadedmaple
  • Neoon Community Contributor, Veteran
    edited April 28

    This week I had a few cases of CPU abuse.

    So I wrote some code to add a simple CPU abuse detection system.
    This will notify users via email if the CPU usage has been higher than 50% for the last 30 minutes.

    The system doesn't stop or suspend anything yet.
    However, the idea would be a strike-like system.

    If you have been notified a bunch of times, your container / virtual machine will be stopped.

  • Neoon Community Contributor, Veteran

    @Neoon said:
    This week I had a few cases of CPU abuse.

    So I wrote some code to add a simple CPU abuse detection system.
    This will notify users via email if the CPU usage has been higher than 50% for the last 30 minutes.

    The system doesn't stop or suspend anything yet.
    However, the idea would be a strike-like system.

    If you have been notified a bunch of times, your container / virtual machine will be stopped.

    Smol update.

    You will be sent 3 notifications via email before the system takes action.
    That's roughly 2 hours with more than 50% CPU load.

    The 4th time you exceed the threshold, your virtual machine / container will be stopped and you will be notified via email.

    I will post here again once automatic suspension is enabled; until then, it will just send notifications.
    If you notice any bugs, feel free to let me know.
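
    The strike flow described above could be sketched like this; the function, the in-memory strike counter, and the return values are assumptions for illustration, not Neoon's actual implementation:

```python
# Hypothetical sketch of the CPU-abuse strike system described above.
# Names, counters, and return values are assumptions for illustration.

CPU_THRESHOLD = 50   # percent, averaged over one check window
MAX_NOTICES = 3      # emails sent before any action (~2h over threshold)

def check_instance(instance, avg_cpu_percent):
    """Run once per 30-minute window; returns the action that would be taken."""
    if avg_cpu_percent <= CPU_THRESHOLD:
        return "ok"
    instance["strikes"] = instance.get("strikes", 0) + 1
    if instance["strikes"] <= MAX_NOTICES:
        return f"notify ({instance['strikes']}/{MAX_NOTICES})"
    return "stop + notify"  # 4th breach: the container / VM is stopped

vm = {"name": "demo"}
for load in (80, 75, 90, 85):
    print(check_instance(vm, load))
# notify (1/3), notify (2/3), notify (3/3), stop + notify
```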

    Thanked by 3Void Carlin0 jwg29859