Thanks. The delay, when I did it, was about 10 minutes. I redeployed a 64MB server, and there's still some stock left if anybody wants one in Johannesburg.
Edit: And now it's out of stock again. Maybe stock is based on the number of slots and not the RAM specifically.
Yeah, if it suddenly goes out of stock, it's either running out of storage or memory.
The slot limit usually runs low first, before the node actually goes out of stock.
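In other words, availability is gated by whichever limit is hit first. A minimal sketch of that check, assuming hypothetical slot, memory, and storage fields (none of these names come from microLXC's actual backend):

```python
from dataclasses import dataclass

@dataclass
class Node:
    slots_used: int        # containers currently deployed on the node
    slot_limit: int        # maximum containers allowed per node
    mem_free_mib: int      # unallocated memory
    storage_free_gib: int  # unallocated storage

def plan_in_stock(node: Node, mem_mib: int, storage_gib: int) -> bool:
    """A plan shows as in stock only while a slot is free and the node
    still has enough memory and storage left for it."""
    if node.slots_used >= node.slot_limit:  # slot limit is usually hit first
        return False
    return node.mem_free_mib >= mem_mib and node.storage_free_gib >= storage_gib
```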
dd72-2180-0fa6-5a00
Thank you
Thanks for the explanation, I didn't know that. I deployed 3 containers on a single node, so it seems reasonable to prevent CPU abuse in this situation.
It does cause some annoyance though. For example, with 384 MiB remaining, if I deploy two 128s first and then one 64, then after I destroy a 128 only 192 remain and I can't get the previous 128 MiB back.
It's not for CPU abuse, it's about preventing people from filling up a single node.
Technically, in the backend there is no difference between MiB and MB.
LXD makes that distinction for its own reasons.
The backend also doesn't make anything unrecoverable.
If you delete a container or virtual machine, once it's gone, it's gone.
Your usage gets recalculated when you open the Dashboard or when you deploy.
It could be a bug in the calculation.
Can you drop me a PM, with your current usage, including a screenshot from the Dashboard?
But there are enough ports and IPv6, right? What would be under-resourced if I deployed three 128s vs one 384?
I see. The backend is just numbers.
By recovery I mean redeploying a container with the same RAM.
It's a feature, not a bug, since 192 < 128×2
Running out of IPv6 would be funny, but sadly that is basically impossible.
And before the node runs out of allocated ports, the slot limit would take it out of stock.
Most nodes will run out of resources before that.
As said before, send me a DM and I will have a look.
The Dashboard also has a progress bar that shows you how much memory you have free.
Thanks for your patience. The progress bar is correct; I hadn't noticed before that it would cost double. Everything is working as intended.
Suppose I deployed five 128 containers in different locations: 1024 - 5×128 = 384.
Then I want to deploy three 128 containers at another location, and since the third container costs double, I can only deploy 2×128 + 1×64×2 = 384.
If I terminate one 128 of these three containers, 384 - 128 - 64 = 192 < 128×2.
I hope that explains the situation.
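To make the arithmetic concrete, here is a minimal sketch of the charging rule as I understand it from this thread: the first two containers at a location are charged their plain RAM, every further one double. The 1024 MiB quota and all names here are assumptions for illustration, not microLXC's actual code:

```python
QUOTA_MIB = 1024  # assumed total memory quota per user

def location_usage(rams: list[int]) -> int:
    """MiB charged at one location: the first two containers count plain,
    every further one double (rule inferred from this thread)."""
    return sum(r if i < 2 else 2 * r for i, r in enumerate(rams))

elsewhere = 5 * 128                        # five 128s in different locations
budget = QUOTA_MIB - elsewhere             # 1024 - 640 = 384 MiB left

here = [128, 128, 64]                      # the 64 is the third, charged 2*64
assert location_usage(here) == 384         # budget fully used

here.remove(128)                           # terminate one 128
assert location_usage(here) == 192         # 128 + 64, both charged plain now
remaining = budget - location_usage(here)  # 192 MiB free again

assert remaining < 2 * 128                 # a new 128 would be the third: 256 > 192
```

Under this reading the dashboard behaves consistently: destroying a 128 only frees 192, because the 64 stops being charged double once it is no longer the third container, while any new deployment there would be charged as a third again.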
If you need more Memory, Helsinki still has the 50% discount, up to 1GB is available.
NixOS has been enabled again; I replaced the current image with a new one for LXD.
Testing so far was fine. If you need nesting, don't forget to enable it under Settings.
There seems to be no "Arch Linux" option in the re-installation menu for KVM.
Is that normal? Thank you.
I added the KVM images, however I forgot to enable Arch Linux for KVM.
Fixed.
This has now been addressed and the installer will properly use the kernel module if available, even when it is running inside a container.
Thanks for your hard work. It works properly now.
OS availability updates
- Added Ubuntu Noble Numbat (LXC/KVM)
Fixed, it will now display correctly after deployment / reinstall.
This week I had a few cases of CPU abuse.
So I wrote some code to add a simple CPU abuse detection system.
This will notify users via email if the CPU usage is higher than 50% for the last 30 minutes.
The System doesn't stop or suspend anything yet.
However, the idea would be a strike-like system.
If you have been notified a bunch of times, your container / virtual machine will be stopped.
Smol update.
You will be sent 3 notifications via email before the System takes action.
That's roughly 2 hours with more than 50% CPU load.
The 4th time you exceed the threshold, your virtual machine / container will be stopped and you will be notified via email.
I will post here again once the automatic suspension is enabled, until then, it will just send notifications.
If you notice any bugs, feel free to let me know.
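For illustration, a minimal sketch of strike logic like this; the 50% / 30-minute / 3-warning numbers come from the posts above, while the names and structure are assumptions and not microLXC's actual code:

```python
CPU_THRESHOLD = 0.50           # average load that counts as abuse
WINDOW_MINUTES = 30            # each measurement covers the last 30 minutes
MAX_WARNINGS = 3               # warning emails before action is taken

strikes: dict[str, int] = {}   # instance id -> strikes accumulated so far

def check_instance(instance_id: str, avg_cpu: float) -> str:
    """Decide what happens to one instance after a measurement window."""
    if avg_cpu <= CPU_THRESHOLD:
        return "ok"                       # below threshold: nothing happens
    strikes[instance_id] = strikes.get(instance_id, 0) + 1
    if strikes[instance_id] <= MAX_WARNINGS:
        return "notify"                   # strikes 1-3: warning email only
    return "stop"                         # 4th strike: stop the VM / container
```

Four consecutive 30-minute windows above the threshold before the stop also matches the "roughly 2 hours" figure above.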
It would be nice if I could grab one, however I don't have enough likes on LET. /shrug
Maintenance Announcement
I have to carry out some changes on the backend, which will make it unavailable for roughly 1 hour or less.
Running machines are not affected, however no tasks or deployments can be done.
Will be done this week, Saturday, 11th of May, at around 20:00 GMT.
Done.
Maintenance Announcement
microLXC still has 3 nodes running on an older version of Ubuntu.
I will upgrade these 3 nodes next week; the kernel version will be bumped to 5.15 afterwards.
Affected Locations
- Sandefjord
- Melbourne
- Tokyo
Each system will only be rebooted once, so downtime should be minimal; however, expect 30-60 minutes.
Will be done next week, Saturday, 18th of May, at around 20:00 GMT.
a320-8340-b9d5-3f6f
Thanks for the free chicken, I will enjoy this KFC
Maintenance Announcement
I mentioned a while ago that once Incus becomes LTS, microLXC will slowly migrate away from LXD.
Incus is basically a fork of LXD, created a few months after Canonical took over LXD and changed the license.
Technically LXD still has support until 2029, as does Incus; however, due to the consistent issues with LXD and snap, I chose to migrate to Incus.
So far my testing has gone without any issues, hence I want to start migrating the first nodes.
The first batch of Nodes that will be migrated this Weekend are Johannesburg and Valdivia.
This will be done Sunday, 19th of May, at around 20:00 GMT.
Downtime should be minimal, expected to be 10 minutes or less, since no reboot is needed.
ngl, it's the first service provider
Just curious how the upgrade and migration works. Will a simple do-release-upgrade and lxd-to-incus command suffice?
Why do-release-upgrade? But for the rest, basically yes.
It would make no sense to write my own migration tool.
I was referring to the upgrade from 20.04 to 22.04. I'm a bit skeptical about do-release-upgrade. I have a server I'd like to migrate to 24.04, and if there's enough time I'll back up, clean install, and restore.
It works fine, however I would not touch 24.04 yet.
Done.
It's done, please lemme know if you face any issues.