microLXC Public Test


Comments

  • dc3e-e46b-3fd1-7b73

    Nice, thanks a lot :)

  • Neoon Community Contributor, Veteran
    edited December 2023

    I actually wanted to do more, however it has been delayed already and the patch has been tested, sooooo here we go.

    Patchnotes
    Switching from slot-based to resource-based: instead of 1 Slot, you get 1GB of available memory that you can use as you want.
    Currently there are no rules or limits, besides your allocation and whether the location has stock.
    I will see how things develop and add rules or limits based on that.
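
    For illustration, a minimal sketch of how such a resource-based check could work (the names and figures are illustrative, not microLXC's actual code):

    # Hypothetical sketch: each user gets a 1024MB memory budget, and a deployment
    # is allowed as long as the memory of their existing containers plus the
    # requested package still fits into that budget.
    USER_BUDGET_MB = 1024

    def can_deploy(existing_mb, requested_mb, budget_mb=USER_BUDGET_MB):
        return sum(existing_mb) + requested_mb <= budget_mb

    # A user already running 512MB + 256MB containers can still add 256MB, but not 512MB:
    assert can_deploy([512, 256], 256)
    assert not can_deploy([512, 256], 512)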

    Due to switching to resource-based pools, the 50GB Plan in Norway has been limited to 10 slots, which is about half of the SSD's storage; the Package only exists because Terrahost decided to throw in a 1TB SSD.

    Enabled the 128MB Package again; I will probably add more Packages, we are already at roughly 20 right now, counting disabled and enabled ones.
    I may do regional 128MB Packages with more Storage.

    The current 128MB Package is Global.
    I haven't had time yet to test the templates on 64MB, so that's not enabled yet; the 128MB Package already does not support all operating systems.

    Increased the minimum Uplink to 100Mbit on all Packages, no matter the size. Same goes for IPv6: you get a routed /64 on everything, except Tokyo.
    Also increased the CPU caps to a minimum of 50%; Norway has been unleashed to 200%, since it's a Dedi.

  • Neoon Community Contributor, Veteran

    NZ just went dark, so if you get a deployment error, that is normal currently.

    Thanked by 1: Void
  • World Veteran
    edited December 2023

    Just created some VMs to test the new feature (resource-based limits), and noticed:
    In the SG location, deployment failed. In the NL location, the 128MB plan deployment failed, but larger plans were fine.

  • Neoon Community Contributor, Veteran

    @World said:
    Just created some VMs to test the new feature (resource-based limits), and noticed:
    In the SG location, deployment failed. In the NL location, the 128MB plan deployment failed, but larger plans were fine.

    What OS did you try?

  • World Veteran
    edited December 2023

    @Neoon said:

    @World said:
    Just created some VMs to test the new feature (resource-based limits), and noticed:
    In the SG location, deployment failed. In the NL location, the 128MB plan deployment failed, but larger plans were fine.

    What OS did you try?

    Mostly Debian (from Buster to the newest version), but I've also tried Ubuntu.

    I created VMs with the 128MB package and Debian in other locations too, and everything was fine.

  • Would it be possible to add an Alpine image? May work better on the lower resource setups

  • Neoon Community Contributor, Veteran
    edited December 2023

    @World said:

    @Neoon said:

    @World said:
    Just created some VMs to test the new feature (resource-based limits), and noticed:
    In the SG location, deployment failed. In the NL location, the 128MB plan deployment failed, but larger plans were fine.

    What OS did you try?

    Mostly Debian (from Buster to the newest version), but I've also tried Ubuntu.

    I created VMs with the 128MB package and Debian in other locations too, and everything was fine.

    Usually I get detailed reports about failed deployments, but I got none.
    There is an issue with MXRoute right now, emails are not getting delivered; I already opened a Ticket.

    From experience, NL was KVM only, which means it may be missing some LXC templates.
    Since I enabled the 128MB globally, I probably forgot about that.

    I have scripts that keep the node templates in sync, so this usually won't happen.

    SG, from experience, has had the odd network issue; it might be just a short one during a deployment, so it's not caught by monitoring.
    The current comms are running via HTTPS, which is fine; however, the LXD devs decided to clear out information about tasks really quickly.

    So if there is a network issue and LXD doesn't return anything, microLXC assumes the deployment failed. That can sometimes lead to stuck containers, which then leads to further failed deployments: the system is not aware that those containers exist, and LXD errors out because they do exist, which is to be expected.

    Ideally, you would keep the task info longer, but the devs are refusing to implement that.
    Even a configurable setting was refused, because he said he would have to write tests for that.

    He is way lazier than me.
    I have to write additional code to prevent this in the future.
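
    One way to handle that ambiguity — a rough sketch against the LXD REST API, not microLXC's actual code: when the operation record is gone, check whether the instance actually exists before treating the deployment as a failure.

    # Rough sketch, assuming HTTPS access to LXD with a client certificate
    # (cert is a (crt_path, key_path) tuple; LXD typically uses a self-signed cert).
    import requests

    def deployment_succeeded(lxd_host, operation_id, instance_name, cert):
        base = f"https://{lxd_host}:8443"
        # Wait for the background operation to finish via LXD's /wait endpoint.
        op = requests.get(f"{base}/1.0/operations/{operation_id}/wait?timeout=120",
                          cert=cert, verify=False)
        if op.status_code == 200:
            return op.json()["metadata"]["status"] == "Success"
        # Operation record already cleared out (or lost to a network blip):
        # don't assume failure, ask LXD whether the container exists.
        inst = requests.get(f"{base}/1.0/instances/{instance_name}",
                            cert=cert, verify=False)
        return inst.status_code == 200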

    Thanked by 1: World
  • Neoon Community Contributor, Veteran

    @Erisa said:
    Would it be possible to add an Alpine image? May work better on the lower resource setups

    Yes, however these templates come without anything installed, including OpenSSH.
    If I recall correctly, I had issues getting it to work reliably, hence I didn't add them.

    I will take a look again.

    Thanked by 1: Erisa
  • @Neoon said:

    @World said:

    @Neoon said:

    @World said:
    Just created some VMs to test the new feature (resource-based limits), and noticed:
    In the SG location, deployment failed. In the NL location, the 128MB plan deployment failed, but larger plans were fine.

    What OS did you try?

    Mostly Debian (from Buster to the newest version), but I've also tried Ubuntu.

    I created VMs with the 128MB package and Debian in other locations too, and everything was fine.

    Usually I get detailed reports about failed deployments, but I got none.
    There is an issue with MXRoute right now, emails are not getting delivered; I already opened a Ticket.

    From experience, NL was KVM only, which means it may be missing some LXC templates.
    Since I enabled the 128MB globally, I probably forgot about that.

    I have scripts that keep the node templates in sync, so this usually won't happen.

    SG, from experience, has had the odd network issue; it might be just a short one during a deployment, so it's not caught by monitoring.
    The current comms are running via HTTPS, which is fine; however, the LXD devs decided to clear out information about tasks really quickly.

    So if there is a network issue and LXD doesn't return anything, microLXC assumes the deployment failed. That can sometimes lead to stuck containers, which then leads to further failed deployments: the system is not aware that those containers exist, and LXD errors out because they do exist, which is to be expected.

    Ideally, you would keep the task info longer, but the devs are refusing to implement that.
    Even a configurable setting was refused, because he said he would have to write tests for that.

    He is way lazier than me.
    I have to write additional code to prevent this in the future.

    Wow!! Such a detailed breakdown of the issues, thanks for the information and all the hard work. <3

    Thanked by 1: Neoon
  • Neoon Community Contributor, Veteran
    edited December 2023

    It's not a problem with MXRoute, emails are still getting delivered.
    However, it's a problem I have to troubleshoot later.

    I thought so because I tested it on multiple addresses, but apparently my testing method wasn't good enough and DirectAdmin didn't want to show me the logs.

  • Neoon Community Contributor, Veteran
    edited December 2023

    It's not a desync issue in SG, but rather a bug in my code.
    NL, as expected, was missing the templates.

    I put SG and NL out of stock for now until it's fixed.
    Gonna fix it tomorrow though, since I am getting drunk and enjoying my weekend.

    Thanked by 1: balloon
  • bd72-daad-fc80-5557

  • Neoon Community Contributor, Veteran

    Update on SG.

    It's not even a Bug, that was actually a Feature.
    I rewrote the Network class a few months ago so it can dynamically assign a /64, in theory even bigger, to containers from the /48 I get allocated.

    In theory, I can put a nearly infinite number of containers on a Node.
    However, I put a cap on the number of prefixes it generates; that cap was way too low and got hit, which resulted in an IPv6 allocation error and caused the deployment to get stuck and fail.

    That caused a desync, which resulted in further failed deployments.
    The patch is already in git and tested; I will deploy it tomorrow and fix the stuck container on SG.

    It wasn't a network issue as initially suspected, nor is SG to blame; SG is just in high demand.
    Existing Containers are not affected by this in any way, since the network was already allocated and configured, even if you Reinstall.
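
    For context, a /48 contains 65,536 /64s, so a low generation cap can run out long before the prefix itself does. A minimal sketch of carving /64s out of a /48 with such a cap (illustrative only, not microLXC's actual code):

    import ipaddress
    from itertools import islice

    def next_free_64(prefix_48, used, cap=1024):
        # Walk the first `cap` /64 candidates of the /48 and return the first unused one.
        for subnet in islice(ipaddress.ip_network(prefix_48).subnets(new_prefix=64), cap):
            if str(subnet) not in used:
                return subnet
        # Cap exhausted even though the /48 still has space: raise a clear error
        # instead of letting the deployment hang.
        raise RuntimeError(f"no free /64 among the first {cap} candidates of {prefix_48}")

    print(next_free_64("2001:db8::/48", {"2001:db8::/64"}))  # 2001:db8:0:1::/64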

    Thanked by 3: Ganonk, balloon, World
  • Neoon Community Contributor, Veteran

    SG & NL are available again.
    Plus, the Dashboard will now show you how much Memory you have spent and how much you can still spend.

    Smol useful feature.

  • Waiting for 64MB ❤️

    Thanked by 1: yoursunny
  • harrison Member
    edited December 2023

    Does anyone know which image uses the least amount of RAM? Debian 10?
    And do newer Debian versions use more RAM than the older ones?

    @Neoon That is beyond cool. Thanks!

    Thanked by 1: Neoon
  • edited December 2023

    @harrison said: Does anyone know which image uses the least amount of RAM?

    I'm not sure, but for your reference this is the result on a newly deployed Arch Linux:

    root@lxc-sg:~#  free -h
                   total        used        free      shared  buff/cache   available
    Mem:           244Mi        30Mi       182Mi       116Ki        31Mi       213Mi
    Swap:             0B          0B          0B
    
    Thanked by 1: harrison
  • Neoon Community Contributor, Veteran

    As per request, I added Alpine 3.18 and also added Devuan Daedalus (Debian 12).

    It was a bit more work, since microLXC never saved the installed OS; it wasn't necessary after deployment.
    However, to spawn the correct shell when you use _Console, it is now needed, since Alpine uses ash instead of bash.
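
    A minimal sketch of what that shell selection could look like (hypothetical helper, not microLXC's actual code):

    # Alpine images ship BusyBox ash rather than bash, so the console has to pick
    # the shell based on the OS that was recorded at deployment time.
    ASH_ONLY = {"alpine"}

    def console_shell(installed_os):
        distro = installed_os.strip().lower().split()[0]
        return "/bin/ash" if distro in ASH_ONLY else "/bin/bash"

    print(console_shell("Alpine 3.18"))  # /bin/ash
    print(console_shell("Debian 12"))    # /bin/bash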

    Thanked by 4: balloon, 0xC7, Erisa, MMzF
  • Neoon Community Contributor, Veteran
    edited December 2023

    Stock update / Maintenance
    We have a bunch of nodes that still have spare capacity but are limited due to the current configuration.
    I plan to reboot the following nodes to increase capacity.

    • Melbourne
    • Johannesburg
    • Auckland
    • Valdivia

    At around 19:00 UTC Wednesday next week.
    Expect a few minutes of downtime while the nodes are rebooted.

    Thanked by 3: Ganonk, Carlin0, bdl
  • namhuy Member
    edited December 2023

    3740-103d-8cf2-ca6d

  • endercat Member
    edited December 2023

    These conditions are a bit harsh for me ಠ﹏ಠ

  • Neoon Community Contributor, Veteran

    Regarding Japan, I am still waiting for IPv6.
    However, I will patch microLXC for NAT only; when the IPv6 prefix becomes available, it should be possible to enable it without a reboot and add a button to the Panel so you can let microLXC assign you a /64 prefix.

  • Neoon Community Contributor, Veteran

    @Neoon said:
    Stock update / Maintenance
    We have a bunch of nodes that still have spare capacity but are limited due to the current configuration.
    I plan to reboot the following nodes to increase capacity.

    • Melbourne
    • Johannesburg
    • Auckland
    • Valdivia

    At around 19:00 UTC Wednesday next week.
    Expect a few minutes of downtime while the nodes are rebooted.

    Done, restock will happen later though.

  • 80ed-1da1-eb42-a480

  • Neoon Community Contributor, Veteran

    @hresser said:
    80ed-1da1-eb42-a480

    Bypassing the verification system is forbidden, especially giving yourself thanks just to hit the threshold.
    Your account has been banned, and so has @tunatech's.

    Thanked by 1: ask_seek_knock
  • Neoon Community Contributor, Veteran
    edited December 2023

    JP2 is available now; no IPv6 yet, only NAT. IPv6 will be available once I get the Prefix.
    Thanks to @Abd / https://webhorizon.net/

    Currently 3 Packages are available.

    • 1x Core (50%), 128MB RAM, 2.5GB ZFS, 200GB traffic with 100Mbit
    • 1x Core (50%), 256MB RAM, 3.5GB ZFS, 200GB traffic with 100Mbit
    • 1x Core (50%), 512MB RAM, 5GB ZFS, 200GB traffic with 200Mbit

    If you have any suggestions, lemme know.

  • d916-ce39-47e9-2809
    Thanks.

  • @Neoon said:
    JP2 is available now; no IPv6 yet, only NAT. IPv6 will be available once I get the Prefix.
    Thanks to @Abd / https://webhorizon.net/

    Currently 3 Packages are available.

    • 1x Core (50%), 128MB RAM, 2.5GB ZFS, 200GB traffic with 100Mbit
    • 1x Core (50%), 256MB RAM, 3.5GB ZFS, 200GB traffic with 100Mbit
    • 1x Core (50%), 512MB RAM, 5GB ZFS, 200GB traffic with 200Mbit

    If you have any suggestions, lemme know.

    I get very slow speeds to servers outside of Japan.

    [root@lxc006e0495 ~]# wget -O /dev/null https://proof.ovh.net/files/100Mb.dat
    --2023-12-23 13:24:47-- https://proof.ovh.net/files/100Mb.dat
    Resolving proof.ovh.net (proof.ovh.net)... 141.95.207.211, 2001:41d0:242:d300::
    Connecting to proof.ovh.net (proof.ovh.net)|141.95.207.211|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 104857600 (100M) [application/octet-stream]
    Saving to: ‘/dev/null’

    /dev/null 0%[ ] 407.75K 17.1KB/s eta 92m 41s^C
    [root@lxc006e0495 ~]# wget -O /dev/null https://at.edis.at/100MB.test
    --2023-12-23 13:25:53-- https://at.edis.at/100MB.test
    Resolving at.edis.at (at.edis.at)... 149.154.154.90, 2a03:f80:ed15:435a::1
    Connecting to at.edis.at (at.edis.at)|149.154.154.90|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 100000000 (95M) [application/octet-stream]
    Saving to: ‘/dev/null’

    /dev/null 0%[ ] 359.70K 17.6KB/s eta 87m 2s ^C
    [root@lxc006e0495 ~]# wget -O /dev/null https://jp.edis.at/100MB.test
    --2023-12-23 13:27:14-- https://jp.edis.at/100MB.test
    Resolving jp.edis.at (jp.edis.at)... 194.68.27.10, 2a03:f80:81:c266::1
    Connecting to jp.edis.at (jp.edis.at)|194.68.27.10|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 100000000 (95M) [application/octet-stream]
    Saving to: ‘/dev/null’

    /dev/null 100%[=============================================================>] 95.37M 11.6MB/s in 8.2s

    2023-12-23 13:27:22 (11.7 MB/s) - ‘/dev/null’ saved [100000000/100000000]

  • Neoon Community Contributor, Veteran

    @Strikerr said:

    @Neoon said:
    JP2 is available now; no IPv6 yet, only NAT. IPv6 will be available once I get the Prefix.
    Thanks to @Abd / https://webhorizon.net/

    Currently 3 Packages are available.

    • 1x Core (50%), 128MB RAM, 2.5GB ZFS, 200GB traffic with 100Mbit
    • 1x Core (50%), 256MB RAM, 3.5GB ZFS, 200GB traffic with 100Mbit
    • 1x Core (50%), 512MB RAM, 5GB ZFS, 200GB traffic with 200Mbit

    If you have any suggestions, lemme know.

    I get very slow speeds to servers outside of Japan.

    [root@lxc006e0495 ~]# wget -O /dev/null https://proof.ovh.net/files/100Mb.dat
    --2023-12-23 13:24:47-- https://proof.ovh.net/files/100Mb.dat
    Resolving proof.ovh.net (proof.ovh.net)... 141.95.207.211, 2001:41d0:242:d300::
    Connecting to proof.ovh.net (proof.ovh.net)|141.95.207.211|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 104857600 (100M) [application/octet-stream]
    Saving to: ‘/dev/null’

    /dev/null 0%[ ] 407.75K 17.1KB/s eta 92m 41s^C
    [root@lxc006e0495 ~]# wget -O /dev/null https://at.edis.at/100MB.test
    --2023-12-23 13:25:53-- https://at.edis.at/100MB.test
    Resolving at.edis.at (at.edis.at)... 149.154.154.90, 2a03:f80:ed15:435a::1
    Connecting to at.edis.at (at.edis.at)|149.154.154.90|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 100000000 (95M) [application/octet-stream]
    Saving to: ‘/dev/null’

    /dev/null 0%[ ] 359.70K 17.6KB/s eta 87m 2s ^C
    [root@lxc006e0495 ~]# wget -O /dev/null https://jp.edis.at/100MB.test
    --2023-12-23 13:27:14-- https://jp.edis.at/100MB.test
    Resolving jp.edis.at (jp.edis.at)... 194.68.27.10, 2a03:f80:81:c266::1
    Connecting to jp.edis.at (jp.edis.at)|194.68.27.10|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 100000000 (95M) [application/octet-stream]
    Saving to: ‘/dev/null’

    /dev/null 100%[=============================================================>] 95.37M 11.6MB/s in 8.2s

    2023-12-23 13:27:22 (11.7 MB/s) - ‘/dev/null’ saved [100000000/100000000]

    11.6MB/s is roughly 100Mbit.
    To reduce abuse, the containers only have 100-200Mbit as Uplink.
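
    As a quick sanity check on that number (plain arithmetic, nothing microLXC-specific):

    uplink_mbit = 100
    theoretical_mb_s = uplink_mbit / 8       # 12.5 MB/s raw line rate
    observed_mb_s = 11.6                     # from the wget run quoted above
    print(observed_mb_s / theoretical_mb_s)  # ~0.93, i.e. normal TCP/TLS/framing overhead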

    Currently the allocated traffic isn't high either, so it doesn't make sense to provide a higher uplink with 200GB of traffic.
    I can ask @Abd if he can bump up the allocated traffic, so we can increase that.
