Comments
and SG is gone again
I did not say it's fixed; I am still working with the dev to find out why LXD is acting up.
Only the deployment is affected.
Maintenance announcement:
SG will be rebooted tomorrow night to help troubleshoot the ongoing deployment issues.
All containers will be started automatically after the reboot; it should not take longer than 5 minutes.
NL will be physically moved to another data room next week; more information will follow.
hopefully it will show some space tomorrow and I can get an SG
Eh, no? There is no ETA for getting it fixed; it's unlikely that a reboot will fix this issue, but I'll try it anyway.
Since LXD plays dead without any error messages, it's just a guess.
@Neoon what timezone is this reboot going to happen?
Now I get "Cooldown, wait for a bit." while the SG location is showing low stock
Another Maintenance announcement:
The plan for these locations is to migrate existing containers to a new LVM pool, which should solve these issues.
However, the operation could lead to data loss, so if you have a container in one of these locations, I advise taking a backup before it is migrated.
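Since most users only have access from inside their container, the simplest backup is to archive the important paths yourself and copy the archive to another machine. A minimal sketch, assuming a Linux container with `tar` available; the paths, archive location, and backup host below are placeholders, not MicroLXC specifics:

```shell
#!/bin/sh
# Minimal backup sketch (not an official MicroLXC procedure):
# archive a few important paths from inside the container, then
# copy the archive off-box before the migration window.
set -eu

BACKUP=/tmp/container-backup.tar.gz   # placeholder path

# Archive config and web data; adjust the paths to your setup.
# Errors on unreadable or missing paths are ignored rather than aborting.
tar czf "$BACKUP" /etc /var/www 2>/dev/null || true

# Copy it to another machine, e.g. (placeholder host):
# scp "$BACKUP" user@backup-host:~/backups/
echo "backup written to $BACKUP"
```

Restoring after the migration is then just copying the archive back and extracting it with `tar xzf`.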
This will take place at the following days:
LA: Friday, 16:00 GMT, approximately 1 hour
SG: Friday, 19:00 GMT, approximately 1 hour
NO: Sunday, 19:00 GMT, approximately 2 hours
The downtime will likely be shorter for each container, as long as nothing comes in between.
During the maintenance, you won't be able to control the container via microlxc.net.
After the maintenance, stock will be available again in these locations, including NO.
Tokyo is not affected by this; however, if we upgrade NL and AU, we will likely perform this maintenance there as well, or when it becomes necessary.
CH will not be moved to another LVM backend, since I plan to discontinue it due to the network-related issues.
However, I don't plan to discontinue CH until we have a replacement; I am still looking for one.
I'll keep you updated on this.
"Thursday" not Tuesday, I am sorry for that mistake.
SG and LA migrations are done; total downtime was about 3 minutes, plus 1 minute per container. LA took a bit longer because Virtualizor fucked it up again and broke v6.
Next & last migration for now is NO on the weekend.
ee10-bcf6-9873-6246
Hello
153c-f6d6-0f63-60ab
It feels much faster now.
How is the service?
NO maintenance done; it took a bit longer since I needed to request an IPMI.
NOVOS (Antwerpen) just announced a maintenance for tonight, network will be unreachable for a few seconds up to a few minutes.
This morning, NL had an emergency maintenance due to issues with some switches; this has been solved.
If you still face any issues, lemme know.
Patch Notes:
* New deployments
So... Terminate and request a new instance?
You can delete the old and create a new instance from the panel. No need to file a request again.
But keep in mind, it's a shared environment; these are only hard limits / peak limits, which should not be used 24/7.
Indeed, since a few patches back, if you have an account, you can delete the instance anytime and deploy a new one in any location you like, without verification.
Thanks, I knew about the Reinstall option, but I thought that maybe the instance needed to be recreated/redeployed by the admin/Neoon for the new limits to take effect.
I was half-correct.
That's totally fine
.
@Neoon
Thank you for this great service. My server is in Sandefjord, and a couple of times in the past I checked and it was stopped. Is this because my account usage passed some limit? Is there such a limit?
I use the server for development, so I run a very small web server with almost no CPU, RAM, bandwidth usage.
We had a few I/O alerts recently, and if a container abuses I/O, it is usually stopped.
You should have gotten a message.
Otherwise containers won't be stopped yet.
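If you want to check whether your own workload is writing heavily, Linux exposes cumulative per-process I/O counters in `/proc/<pid>/io`. A rough self-check sketch, assuming a Linux container with `/proc` mounted; this is only an approximation, not how MicroLXC measures I/O:

```shell
#!/bin/sh
# Rough self-check sketch: sum cumulative write_bytes across the
# processes you are allowed to read, using Linux's /proc/<pid>/io.
# Counters are bytes written to the block layer since each process started.
set -eu

total=0
for f in /proc/[0-9]*/io; do
  # Processes owned by other users may be unreadable; skip them.
  bytes=$(awk '/^write_bytes:/ {print $2}' "$f" 2>/dev/null) || continue
  total=$((total + ${bytes:-0}))
done
echo "approximate total write_bytes: $total"
```

Running this twice a few minutes apart and comparing the totals gives a rough write rate for your workload.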
I forgot my password for the MicroLXC account.
Here's a token to reset it. Thank you
e9d1-2985-babb-2f08
f90b-5b9d-22ca-6548
b887-e5c2-02c3-0b80
d8c3-4a7e-68e3-cbf0
deployed.
c381-152f-b4fc-c940