Comments
In my experience: yes.
Thanks @fendix, much appreciated.
Yes, they usually work outside normal hours too; maybe Tim will provision your order.
He is the best support ever and a super guy.
Vouch for php-friends
Hi,
Yes, we do. We guarantee 12 hours after payment, but that's only to cover edge cases (for example, orders on Friday or Saturday night, when none of us is awake any more and no one is in the office at 9:00 the next day). In these edge cases it can, in theory, take up to 12 hours; but since verifying and provisioning an order takes, if all data is correct, only a few seconds, we keep an eye on orders outside our business hours. As long as at least one of us is awake and at his desk, it's usually a matter of minutes or a few hours.
Best Regards,
Tim
Thanks for the responses, everyone.
For everyone's information, provisioning took about 40 minutes after the invoice was paid.
Well, since the location switch, a lot of traffic seems to get dumped off via free peering at CDN77.
An MTR to a server in Dallas has 24 hops and goes via Prague.
Not only Dallas; Switzerland also goes over Prague.
A lot of stuff goes over Prague since the location switch, instead of directly from Frankfurt... sad.
Could you provide us with an example, please? I can't reproduce this.
Short statement, @PHP_Friends?
Everything, as it seems.
I guess then Avoro is to blame, not php-friends; my fault.
Avoro used php-friends' network, but I guess a lot of stuff has changed since then... for the worse.
Edit: MTR suggests it's still php-friends' network that's to blame.
For PHP-Friends, traceroute shows
Chennai --> Singapore --> Frankfurt for me.
I am glad it's data and not an airline.
As far as I can reproduce it, it’s just providers that use HE.net for transit.
Please contact our support with MTRs / traceroutes. I don't know what you expect from us when you don't say what exactly we should improve.
Also, I don't know what you mean by free peering at CDN77. We are paying for our traffic.
Probably some old CDN77 reverse hostname in the MTR/traceroute output.
I see that on my LiteServer VM too when I test from some sources.
The "prague" reverse DNS entry of a Telia router, actually located in Frankfurt (FFM), has been corrected.
The only MTR we received about this "issue" shows, compared with First Colo, an identical route to the destination, apart from the fact that it doesn't get routed via DE-CIX but via CDN77's own internet exchange, which doesn't change the latency at all. Sometimes it's even 1 ms better (which is, however, absolutely irrelevant on a path from Germany to the USA).
Let me send the routes where it goes over prag-b3-link.telia.net and back to FFM.
Is there a test IP or looking glass for PHP-Friends?
We still have not received a single report which shows a real problem. Why don't you just share the "bad" route instead of talking about it?
You can use 193.41.237.1.
Latest update from our main carrier:
I've written to Telia to update the PTR. Just for your information, the IP with the Prague record is the Telia side of the BGP session between us in Frankfurt,
so it's definitely in Frankfurt; we're not establishing BGP from Frankfurt to Telia in Prague.
btw, to make this absolutely transparent and public, this is the hop with the wrong PTR: 213.248.70.153
We are talking about a latency of 0.8 ms. Everyone can check this with a server located in or near Frankfurt. It is physically impossible for this IP to be located in Prague; that would be the best fibre in the world, leaving the speed of light behind.
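To make the physics argument concrete, here is a quick back-of-the-envelope check anyone can run; the 0.8 ms figure is from the post above, while the speed of light in fibre (roughly 200 km per ms) and the Frankfurt-Prague distance (~410 km) are my own rough assumptions:

# How far away can a hop with a 0.8 ms RTT possibly be?
C_FIBRE_KM_PER_MS = 200.0  # light in fibre travels at roughly 200,000 km/s

rtt_ms = 0.8  # measured RTT to 213.248.70.153 from Frankfurt
max_distance_km = rtt_ms / 2 * C_FIBRE_KM_PER_MS
print(f"Upper bound on distance: {max_distance_km:.0f} km")  # ~80 km

# Frankfurt-Prague is roughly 410 km in a straight line, so a router
# in Prague could never answer faster than:
min_rtt_prague_ms = 2 * 410 / C_FIBRE_KM_PER_MS
print(f"Minimum RTT if the hop were in Prague: {min_rtt_prague_ms:.1f} ms")  # ~4.1 ms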
So... This is the "bad route" in our maincubes location (indicated by our own router as the first hop; it's our ASN):
And this is the "good route" via First Colo:
A lot of trouble for absolutely nothing.
Yes, this sort of goofup happens to me too.
Sometimes latency is the only clue I have as to whether an IP is in the same city, especially if it's anycast or something.
@PHP_Friends said:
it doesn't get routed via DE-CIX but via CDN77's own internet exchange
I think this is the key to some hypothetical bad routes. My host node is not yet moved to Maincubes, but what I can reproduce is the following:
For instance from HE.net in Frankfurt:
core1.fra1.he.net> traceroute 193.41.237.1 source 216.218.252.174 numeric
Target 193.41.237.1
Hop Start 1
Hop End 30
1 7 ms 7 ms 25 ms 100ge14-1.core1.prg1.he.net (184.105.213.234)
2 25 ms 39 ms 15 ms datacamp.peering.cz (91.213.211.28)
3 29 ms 15 ms 28 ms cdn77.cr1.synlinq.de (84.17.33.51)
4 20 ms 45 ms 16 ms lt0.1.cr3.synlinq.de (45.135.200.15)
5 16 ms 16 ms 24 ms mx240-1.maincubes.as58212.net (193.41.237.1)
So this one first travels to Prague, where they peer with CDN77 on Peering.cz and then obviously back to Frankfurt.
I see the same coming from the iFog network in Zürich. Then it's Zürich, Frankfurt, Prague, Frankfurt. So not optimal.
This is caused by CDN77 not peering on DE-CIX, but only on Peering.cz in multiple locations (of which Frankfurt is one). So while CDN77 has a PoP on Peering.cz in Frankfurt, Hurricane Electric only has one in Prague. Hence the detour.
The same goes for Google; this travels to Prague as well.
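As a side note, resolving the PTR of each hop yourself is an easy way to spot such detours. A minimal sketch using only Python's standard library, with the hop IPs taken from the HE.net traceroute above (PoP codes like 'prg' and 'fra' in the names are a hint, though, as this thread shows, they can be stale):

import socket

# Hop IPs from the HE.net traceroute above; the PTR usually embeds a
# PoP code (prg = Prague, fra/ffm = Frankfurt), but always cross-check
# it against the measured RTT before trusting it.
hops = ["184.105.213.234", "91.213.211.28", "84.17.33.51",
        "45.135.200.15", "193.41.237.1"]

for ip in hops:
    try:
        ptr = socket.gethostbyaddr(ip)[0]
    except (socket.herror, socket.gaierror):
        ptr = "(no PTR)"
    print(f"{ip:16} {ptr}")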
No, that is not the case.
See, the first traceroutes I provided do indeed confirm that I was off base and didn't look at the latency.
But I did more than just these and found what I was talking about.
Before I posted here yesterday, I forwarded everything to Avoro.
You can clearly see that Telia claims the Prague hop is in Frankfurt, yes, but where do the additional 5 ms come from? It looks like the traffic actually went Frankfurt => Prague => Frankfurt.
Anyway, these PTR records are misleading and are causing confusion.
I see a round trip time Frankfurt-Zürich of 7 ms. This is actually not bad and as expected. As you already confirmed, the router that identifies itself as 'prag' is actually in Frankfurt, with 1 ms latency.
So the only problem would be the Telia router that identifies itself as 'ffm' but has a higher RTT than you'd expect. This could be caused by the return route; however, it doesn't really affect the total latency to your server in Zürich.
7 ms is ideal, but I saw routes of up to 20 ms while the route back was 7 ms, from the same machine. There has been some kind of problem since the migration.
Ping is always a round trip time, so the latency from machine A to B is always the same as from B to A. Even in a situation where the servers are located next to each other and A to B is a direct route but B to A goes over (for instance) the US, the RTT will be very high either way due to the routing.
So if the route itself hasn't changed for you, the varying ping is caused by other problems.
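To illustrate that point with made-up numbers: the two legs always add up to the same round trip, regardless of which end you ping from.

# Hypothetical one-way delays on an asymmetric path (made-up numbers):
forward_ms = 3.5   # A -> B takes the direct route
return_ms = 16.5   # B -> A takes a detour

# ping measures the sum of both legs, so both ends see the same RTT:
rtt_from_a = forward_ms + return_ms  # 20.0 ms
rtt_from_b = return_ms + forward_ms  # 20.0 ms
print(rtt_from_a == rtt_from_b)      # True: RTT is symmetric even when the paths aren't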
I am aware what RTT means.
These were just my results across multiple MTRs.
The new location probably means some hiccups need to be ironed out. When I look at routing to my locations, it's actually pretty good and direct, but there's some strange routing here and there.
I think the major issue is that they're relying on just one upstream (which itself relies on a single upstream) at the moment; a quick way to check this is sketched below.
I'm sure @PHP_Friends will eventually get this fixed, however.
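For what it's worth, here is a rough way to check the upstream situation yourself via RIPEstat's public data API (AS58212 is taken from the mx240-1.maincubes.as58212.net hop in the traceroutes; the exact response fields are from memory, so verify them before relying on this):

import json
import urllib.request

# RIPEstat lists the BGP neighbours it has seen for an ASN; "left"
# neighbours in collected AS paths are typically the upstreams.
url = "https://stat.ripe.net/data/asn-neighbours/data.json?resource=AS58212"

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

for neighbour in data["data"]["neighbours"]:
    print(neighbour["asn"], neighbour["type"])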
I have used Hetzner and some other hosts in Germany, but @PHP_Friends shows higher latency than they do from my connection.
To Hetzner:
Tracing route to hetzner.de [78.47.166.55]
over a maximum of 30 hops:
1 1 ms 1 ms 3 ms 192.168.0.1
2 3 ms 2 ms 1 ms 59.153.100.24
3 4 ms 3 ms 2 ms 172.16.16.89
4 3 ms 2 ms 2 ms 59.153.100.105
5 5 ms 5 ms 3 ms 43.224.113.69
6 5 ms 3 ms 3 ms 103.230.17.112
7 47 ms 45 ms 45 ms 180.87.39.117
8 153 ms 224 ms 153 ms 180.87.38.1
9 * * * Request timed out.
10 155 ms 154 ms 259 ms 80.231.217.2
11 268 ms 200 ms * 80.231.200.78
12 190 ms 205 ms 196 ms 195.219.87.195
13 205 ms 179 ms 221 ms 195.219.219.10
14 222 ms 201 ms 208 ms 213.239.245.34
15 204 ms 201 ms 201 ms 88.198.247.253
16 208 ms 201 ms 201 ms 213.239.203.214
17 203 ms 201 ms 201 ms 78.47.166.55
Typically I'm getting 200 ms.
To Linode:
Tracing route to speedtest.frankfurt.linode.com [139.162.130.8]
over a maximum of 30 hops:
1 4 ms 1 ms 1 ms 192.168.0.1
2 4 ms 2 ms 3 ms 59.153.100.24
3 3 ms 2 ms 3 ms 172.16.16.89
4 3 ms 3 ms 2 ms 59.153.100.105
5 4 ms 4 ms 4 ms 43.224.113.69
6 6 ms 4 ms 3 ms 103.230.17.112
7 44 ms 43 ms 45 ms 180.87.39.117
8 264 ms 200 ms 202 ms 180.87.38.1
9 * * * Request timed out.
10 168 ms 201 ms * 80.231.217.2
11 * * * Request timed out.
12 164 ms 183 ms 201 ms 195.219.87.88
13 202 ms 201 ms 201 ms 195.219.61.48
14 200 ms 178 ms 229 ms 195.219.61.30
15 230 ms 201 ms 201 ms 139.162.129.11
16 199 ms 201 ms 176 ms 139.162.130.8
About the same.
To @PHP_Friends:
Tracing route to 193.41.237.1 over a maximum of 30 hops
1 7 ms 6 ms 3 ms 192.168.0.1
2 2 ms 2 ms 2 ms 59.153.100.24
3 4 ms 2 ms 2 ms 172.16.16.89
4 6 ms 3 ms 3 ms 59.153.100.105
5 5 ms 2 ms 3 ms 43.224.113.69
6 6 ms 4 ms 6 ms 103.230.17.112
7 51 ms 52 ms 51 ms 103.230.17.51
8 53 ms 52 ms 52 ms 103.230.17.30
9 51 ms 55 ms 53 ms 116.51.26.33
10 206 ms 287 ms 201 ms 129.250.2.123
11 54 ms 55 ms 52 ms 129.250.3.83
12 297 ms 228 ms 201 ms 129.250.7.9
13 284 ms 201 ms 201 ms 129.250.6.14
14 314 ms 303 ms 303 ms 213.198.94.170
15 312 ms 299 ms 208 ms 84.17.33.59
16 313 ms 304 ms 303 ms 45.135.200.17
17 305 ms 304 ms 303 ms 193.41.237.1
That's a 100 ms increase over other hosts. Why is that?
The only thing I can spot is that the reverse route is different from your forward route. It goes through Hurricane Electric, with a little detour, but not to the point where you'd gain 100 ms.
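Since asymmetry like this is only visible when you look from both ends, the usual next step is to run mtr from each side and compare the hop lists. A small wrapper, assuming mtr is installed on both machines:

import subprocess

def mtr_report(target: str, count: int = 10) -> str:
    """Run mtr in report mode and return its text output; -b prints
    both the PTR and the IP per hop, which helps when PTRs are
    misleading (as the Telia 'prag' hop in this thread was)."""
    result = subprocess.run(
        ["mtr", "--report", "-b", "-c", str(count), target],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Run once from your machine against the server, then once from the
# server against your machine, and diff the two hop lists:
print(mtr_report("193.41.237.1"))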