Comments
Okay, but where are the issues you mentioned with the images?
I feel like it might not be an issue, just my lack of patience. After deploying 128MB Ubuntu 22.04, I couldn't connect via SSH, even though it had been showing in the panel for more than a minute. I can connect on the web shell shortly after clicking restart, but
apt update && apt upgrade
will ask me to run dpkg --configure -a. It then outputs the configuration of the ssh-server. One time I also found that there was no authorized_keys file in the ~/.ssh directory.

Ubuntu ran fine with 128MB before.
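Since the missing ~/.ssh/authorized_keys came up: a minimal shell sketch (assuming a Linux shell, and that you paste your own pubkey back in) for recreating the file with the permissions sshd expects:

```shell
# Recreate ~/.ssh/authorized_keys with the permissions sshd insists on.
# (sshd ignores the file if the directory or file is group/world writable.)
# The key line is a placeholder - substitute your own public key.
SSH_DIR="${HOME}/.ssh"
mkdir -p "$SSH_DIR"
chmod 700 "$SSH_DIR"
touch "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
# echo 'ssh-ed25519 AAAA... you@host' >> "$SSH_DIR/authorized_keys"
ls -ld "$SSH_DIR" "$SSH_DIR/authorized_keys"
```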
Something must have changed.
For some reason Ubuntu now runs out of memory on 128MB.
Both 20.04 and 22.04, so I increased the memory requirement to 192MB instead of 128MB.
Plus I added a 192MB Package just for testing, seems to work fine again.
Will add it later to other Locations too.
So images tagged lxc are LXC-only, same for KVM. Without these two tags there is no limit?

Can't believe RHEL 9 variants work on 128MB but Ubuntu doesn't.
Looks like I need to switch to debian.
Can you explain your question again?
If an OS can't install basic packages with 128MB, I won't offer the option to install that OS on 128MB.
I mean the flavours without tags actually provide both KVM and LXC images behind the scenes, right? Since LXC only needs a rootfs.
Nixos seems to be unselectable no matter which Package it's paired with.
And I think it's worth disabling PasswordAuthentication for all images, since everyone has submitted their pubkey. It can reduce some I/O (logs) and even CPU usage.
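For reference, a sketch of what that change would look like in a stock OpenSSH setup (only flip this after confirming key-based login actually works, or you lock yourself out):

```shell
# /etc/ssh/sshd_config - disable password logins once pubkey auth is confirmed:
#
#   PasswordAuthentication no
#   KbdInteractiveAuthentication no
#
# Then reload sshd; the service name varies by distro, e.g. on Debian/Ubuntu:
#   systemctl reload ssh
```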
They do, but it's more complicated than that.
Not every Image is available for LXC or KVM, different memory requirements, etc.
Was not yet announced to be available.
OS / Package availability updates
OS
Packages
Great work! Hope I'll get to participate in some of the next public tests on LET.
OS availability updates
- Added Alpine 3.19 (LXC/KVM)
- Added NixOS (LXC)
Alpine is, as before, available from 64MB; NixOS from 128MB.
NixOS has been disabled again, since people found issues with the template that I didn't see.
A few Locations will be migrated/moved to Incus in the next days/weeks/months, which should make NixOS run as expected. Will post updates on this.
That's cool, will all servers migrate to incus in the future?
I found 128MiB containers I created myself with LXD or Incus are exactly 128MiB. Not complaining about missing a few MiB, just curious.
Some will be replaced or upgraded.
Either a new Machine same Location with Incus or the existing Machine will get a clean reinstall.
Some Locations are old with old configurations hence the reinstall.
Because the Packages are created as MB, not as MiB.
Thanks for pointing it out though, I updated the Packages.
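The gap being discussed here, in numbers (plain shell integer arithmetic, so results are truncated):

```shell
# A package sized in MB (10^6 bytes) is smaller than the same number of
# MiB (2^20 bytes) - which is why a "128MB" package shows fewer than
# 128MiB inside the container.
echo $(( 128 * 1000 * 1000 / 1024 / 1024 ))   # 128 MB expressed in MiB -> 122
echo $(( 128 * 1024 * 1024 / 1000 / 1000 ))   # 128 MiB expressed in MB -> 134
```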
2b29-7964-62f2-22b1
Thanks!
c0fb-0393-0604-7f52
thanks
I noticed the DKIM on your email is failing (for the new email address verification code). Not a problem for me, but just to let you know in case you didn't notice it.
This is the DKIM signature from the email:
Sorry, I like looking at email headers.
I didn't know that MXRoute also supports DKIM out of the box.
So it wasn't enabled.
I added the DNS Records, should be fine now.
Thanks for letting me know.
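For anyone else checking their own domain: a hedged example of verifying the published record with dig (the selector and domain below are placeholders; your mail provider's panel shows the real ones):

```shell
# Query the DKIM TXT record directly (replace selector/domain with yours):
#   dig +short TXT x._domainkey.example.com
# A working record comes back as something like:
#   "v=DKIM1; k=rsa; p=MIIBIjANBg..."
# If dig returns nothing, the record isn't published (or hasn't propagated yet).
```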
I set up some Debian 128MB instances. So far so good. I did notice at these 2 locations the IPv6 address listed in the dashboard is different than the one that is automatically provisioned with the server.
Sandefjord
Groningen
It is from the /64 allocation, but not the one that's provisioned and working. Not a problem, just for your info.
^^ Sorry, I see that's already been discussed earlier.
I've got some 64MB Alpine Linux servers setup now that are working fine with shadowsocks. I had installed 128MB Debian at first, since that's what I'm used to, but Alpine Linux is ok for this, too (something new).
Here's the latency I'm seeing from my Vultr vps in Tokyo.
The best from this location are (of course) Tokyo and Singapore. And Auckland, Sandefjord, and Groningen aren't too bad. The slowest are Valdivia, Johannesburg, Helsinki, and Oradea.
These are ipv6 ping times, except for Tokyo Equinix and Helsinki that are ipv4 only. ipv4 vs ipv6 is usually close to the same for most. Oradea latency over ipv4 is a bit better at 276 ms, but both ipv4 & ipv6 have some packet loss to that location.
It looks like the RAM quota limit is a little buggy after changing it to MiB. After seven 128MiB containers, it is not possible to deploy either an eighth 128MiB one or a 64MiB one.
It has already been reported; it's a known bug. Will fix it at some point.
I don't think it's buggy; the resource allocation works differently than you think.
You can deploy more than 1 Container / VM per Node, however, if you do so, it costs you more.
This is to prevent people from putting everything on one Node.
If it's a small Node, the second container already costs you double.
If it's a bigger Node, the third container costs you double, and so on.
I agree I have to improve the error message to be more detailed.
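If I read that right, the multiplier looks roughly like this (my own sketch of the rule as described, not the provider's actual billing logic — the slot counts and the flat doubling are assumptions):

```shell
# Hypothetical model of the stacking rule described above: each Node has a
# number of "normal price" slots (1 on a small Node, 2 on a bigger one);
# containers beyond that cost double.
cost_multiplier() {
    existing="$1"   # containers you already have on this Node
    size="$2"       # "small" or "big"
    if [ "$size" = "small" ]; then free=1; else free=2; fi
    if [ "$existing" -lt "$free" ]; then echo 1; else echo 2; fi
}
cost_multiplier 0 small   # first container on a small Node -> 1 (normal price)
cost_multiplier 1 small   # second on a small Node -> 2 (double)
cost_multiplier 2 big     # third on a bigger Node -> 2 (double)
```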
a677-57df-c82f-a3e2
Thanks.
If a location is out of stock, and an instance is terminated for that location, will stock for it be automatically added?
For example, if I terminate a 128MB instance, will I be able to create a new 64MB instance in the same location, and there will still be extra stock for someone else?
Are there any issues on SG? High load (50+) and ports not reachable. I checked via shell; outbound traffic is fine, so it might be a problem with port mapping/forwarding, I guess.
Automatically with a delay.
Yeah, I see the 50+ Load; however, HetrixTools didn't report anything, so I didn't notice.
I gave it a reboot, will keep an eye on it.
I don't see any issues regarding port forwarding at all; I just did a test deploy and it went fine with all the ports. If you have an issue, check your end and please provide more information.
After you rebooted the host server, everything works fine now.
I tried it before, with a test deploy, before I rebooted it.
Port forwarding was fine, that's why I asked.
The Node was still responsive, despite the Load of 50.
No CPU abuse or I/O abuse; from the looks of it, a stuck kernel thread was causing the Load.
However I could not tell what exactly the cause was, so I rebooted it.
On what Node did you try to deploy a container?
Did you already have one or two on that node?
Hmm... When I noticed the issues, I tried tcping on different ports from different locations, and they all failed. Then I rebooted my container, but the problem persisted. Then I logged into my container via the control panel to see if it was actually "alive" - it was, and outbound networking was fine. I also tried tcping on the SSH ports of some "adjacent" containers - you know, the default port for SSH always ends with 00 - and they were accessible. I have no idea what was causing the problem.
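A quick way to run that kind of reachability test from any Linux box without installing tcping, using bash's built-in /dev/tcp pseudo-device (assumes bash and coreutils `timeout`; the host and port below are placeholders):

```shell
# Probe a TCP port: exit 0 if the connection opens, nonzero otherwise.
check_port() {
    host="$1"; port="$2"
    if timeout 3 bash -c ">/dev/tcp/${host}/${port}" 2>/dev/null; then
        echo "open"
    else
        echo "closed"
    fi
}
check_port 127.0.0.1 1                # port 1 on localhost: almost certainly closed
# check_port your.node.example 2200   # e.g. your container's mapped SSH port
```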