Comments
Right.
"ARIN issues IP addresses and AS numbers to registered organizations in ARIN's region for use in ARIN's region. So once you have a registered business name in ARIN's region you are welcome to request resources at any time."
But I assume they don't really check the usage after the range is issued. Rest assured, when they start doing this (probably very shortly), CC and numerous others will be the first to be dealt with, and BuyVM has more than enough time to re-number when it smells trouble, so no worries there.
That's stupid. Doesn't it rule out global anycast with their IPs?
Don't know about anycast. Other rules most likely apply for anycast.
Yes, they check it when you go back to ask for more IPs. If you use the IP addresses out of region, they do not consider it as in use for utilization purposes, which can prevent you from being able to justify additional IPs being issued.
Anycast is being rolled out Monday
Most of the code will be committed on Sunday but the actual switch flipping will be on Monday.
Francisco
I'm wondering if there's any real use of anycast without having 6 instances (with heartbeat, etc.)?
I have three but if a location goes down (at the OS/host level, not the whole datacentre) then that IP is dead within that region.
I guess it's somehow complementary to the non-anycast IPs: in an OS/host-down scenario, the IP needs to be removed and failed over (via DNS or, depending on the config, some other mechanism) to a non-anycast IP.
So technically this wouldn't be useful for HA but for speed (provided a host doesn't go down).
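The failure mode discussed above (anycast routes each region to its nearest announcing site, and a down host blackholes that region's traffic until the route is withdrawn or failed over) can be sketched as a toy model. The site names, regions, and distances below are made-up assumptions for illustration, not BuyVM's actual topology:

```python
# Toy model: BGP picks the "nearest" announcing site per client region.
# Routing only sees the announcement, not service health, so a site
# that is still announcing while its host is down blackholes traffic
# from the regions it is nearest to. All values here are illustrative.

def anycast_route(client_region, sites):
    """Return the site a client's traffic lands on, or None if blackholed.

    sites maps site name -> {"distance": {region: cost}, "up": bool}.
    """
    nearest = min(sites, key=lambda s: sites[s]["distance"][client_region])
    return nearest if sites[nearest]["up"] else None  # None = dead within region

sites = {
    "LV": {"distance": {"us-west": 1, "us-east": 3, "eu": 5}, "up": True},
    "NJ": {"distance": {"us-west": 3, "us-east": 1, "eu": 4}, "up": False},
    "LU": {"distance": {"us-west": 5, "us-east": 4, "eu": 1}, "up": True},
}

print(anycast_route("us-west", sites))  # LV is up -> served
print(anycast_route("us-east", sites))  # NJ still announced but down -> None
```

This is why the posters above conclude anycast here buys speed rather than HA: healthy regions get the nearest site, but a region whose nearest host is down sees the IP go dead unless something withdraws the route or fails the name over to a unicast IP.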
Correct
Still, nodes don't go down all that often. OVZ's 2.6.32 kernels are very stable these days and our KVM nodes see a year+ uptime on average. They'd be at the 2 year mark right now if we didn't move away from E3's.
We have nothing new in the maintenance chain apart from upgrading NJ plans to SSDs, but that'll be a live migration, meaning at worst your VPS is down for almost no time.
Francisco
Best maintenance page evar
So how much is it to get a box in all the locations and the Anycast IPs @Francisco
Cheapest would be $45/year using 3 x 128MB's.
The anycast addresses are free.
Francisco
Yep, my vps is running great, but Stallion is still in maintenance.
I'm curious, how long does it take to complete?
Pending a few more commits and such, and we'll bring Stallion back up.
We're just touching up some networking with the anycast subnet before we populate it into Stallion
Francisco
@Francisco saw it up and running, trying to get more instances to try out the anycasting, but it looks like everything is sold out? Is that intentional? Any way to snatch up a new VPS in LV/NJ?
It's intentional until we get the last few bugs out of the way.
Francisco
Insufficient spare anycast IP addresses. Please contact support.
I know
The subnet's fixed up, though. You can ping 198.251.86.1 to see it in action.
I'll bind off the IP addresses in a few minutes, I just have to fix IPv6 rDNS.
Francisco
Is ordering back up?
No, just working through a few bugs
About to put the anycast subnet in place.
Francisco
Okey doke, just excited
As promised, I did pick up a BuyVM VPS. I couldn't get the NJ location b/c they were sold out, so I got LU. I will say I was really excited initially... but within the first 2 days of having the service I had about 1.5-2 hours of outages in total. If that's any indication I'll probably not use the box for anything important that needs to be available, so probably won't use it at all. I'll try to use them for at least a week or two, and if the uptime is similar I'll just let it sit. Nothing wrong with them, just if I get multiple outages in a week or so... I just don't use it and cancel at the year end cause it's useless for me. :P
Just counted: at least 4 outages in a few-hour time frame (2nd day with them).
I didn't renew one of my rock solid VPS's in exchange for this... kinda sad but sometimes you get lemons. I guess I'm just bummed out but maybe it's a 'fluke' and it will have decent availability (day 1 was perfect). :P
@Xei
Had up and down alerts for lu-node1 last night.
I personally didn't feel any of those, and I've had SSH sessions open for well over a week to most of the nodes over there.
With that being said, I think the biggest issue (DNS resolution) was related to the anycast code we committed. I already got some changes in testing and I'll be merging them today.
Once I get that update in, please keep me posted and let me know if you continue to feel blips/drags or anything like that.
The network itself is fine, as is the hardware. There's been no crashes or anything like that.
Francisco
I was on Node05, didn't receive any emails regarding that one. Or do they announce that on their portal only? I don't see anything on the portal. If it was rebooted that may explain some or all of the downtime instances (and then that would be totally cool if that were the case). I'll give it two weeks at least though before moving on probably (if availability is spotty), maybe return if the NJ location ever opens up since I tended to hear good stuff about that. I'd like to stick around though so I will do my best to remain optimistic (first time I had a VPS from any provider down for that long in the first 48 hours, which I guess is why I was like oh no... what did I do). :P
I didn't pick LV because of the past issues with that location that people used to talk about. So that's why I opted for LU, but maybe LV would have been a better choice, IDK - I haven't read up on the LV location in over 2 years.
LV's issues were always because of my insistence on using software routers (BSD, Vyatta, VyOS, etc.) and were rarely DC related. Software routers were simply a better fit for what we were integrating into the network (our IDS, autonull, etc.), but ultimately we said screw it and put a MLX over there. Ever since then the location has been awesome, and the pure SSDs helped a lot.
05's not had any downtime, so it's possible it was related to the anycast stuff as well. Put in a ticket about it please so I can keep track? I'll likely leave a ping running from my home to you to see if it does it again.
Francisco
Thanks for all the insight and quick responses. I'll keep an eye on it. But yeah the instance I'm on never rebooted as you said. If it happens again I'll ticket it as well. Do you know when the anycast stuff will be completed?
It's all completed, works solid, etc. The latest thing is more an interesting bug I found last night that I'm working on a fix for. It's unrelated to the anycast code itself; it's more related to a quirk in how OpenVZ does things.
I figure I'll have the last of it bug tested in the next couple hours.
Francisco