Comments
Chrisy-poo, if you feel like taking a dump, please . . . do it in a toilet. Shit elsewhere and you usually look like a buffoon.
This all could have been avoided if you moved to ColoCrossing right? Just take BW from them.
Got any pictures of the Brocade? I'd like to see it.
Unfortunately I find LV slower even than SJ, which was already bad. I could expect 1-3 Mbit/s from SJ; I'm getting 0.5-0.8 Mbit/s from LV.
Not really an issue since I use it for ZNC, and though throughput is poor the ping is great, but a little disappointing for 2013.
[edit] sorry, LV not LA
I can't seem to find the imgur gallery I put together, but here's a picture of most of it:
https://fbcdn-sphotos-a-a.akamaihd.net/hphotos-ak-ash3/165029_496662910385206_1613977535_n.jpg
We ordered a few extra 24 port blades for future growth
If we were doing layer 3 with it we wouldn't want to use this unit, since it does layer 3 switching in software. But since we're using it strictly in layer 2 mode, it completely destroys things.
And then I'm paying EGI $4000/m for 2 racks. Then another $2000/m to Jon. Why should EGI get $4000/m from me for providing a crappy service? Coresite racks are not worth that price, considering they made me move half my nodes to new power strips twice because they were replacing things.
Francisco
Let me get some things tied up with the 10gig port and we'll see where we go. We were expecting to see a large surge in traffic once moved in, but figured we wouldn't need the 10 gig port till after these storage nodes.
Oh well, no biggy
Francisco
+1
Also, that Brocade looks amazing
I thought he meant moving to Colocrossing altogether? Don't they have a presence at Coresite San Jose?
They did...inside EGI's suite.
They use EGI for the couple racks they have there (far as Jon said, it's literally a couple racks just for the sake of a POP there). Bandwidth-wise they have a 10 gig or something from nLayer, and then some fallback transit with EGI.
Francisco
They do indeed:
http://www.colocrossing.com/downloads/ColoCrossing-SJ1-Datacenter-Specs.pdf
@Francisco I need to get around to adding a dual 10Gb to ours
It's a nice unit. Thanks for letting me know. I'm happy we brought it from SJ though, instead of chancing the big clusterfuck you had getting yours shipped.
Francisco
UPS can die in a fire. We never got our insurance money, either.
Why in the world would @Francisco move to Colocrossing? He already got suckered into their nest in Buffalo.
I'll never understand why @CVPS_Chris continues to bang on the Colocrossing sales drum if he's not a partner or doesn't have an interest in that business.
Plus going CC solves nothing with the San Jose location. Just adds costs and in case of outage could take both BuyVM locations out.
Surprised everyone is being so civil about the Chris comments.
I thought the Force 10 was not superior enough for you.
Nice to see a 3com switch.
Tripp Lite racks are pretty awesome too
The Force10 didn't survive shipping, took the pic before I knew that.
The 3coms are dinky little things, only for the IPMI network
now calm down skeeter they aint hurtin' nobody
Francisco
The point was to cancel everything with EGI and take 2 cabs from ColoCrossing. I don't care what you do either way, just seemed like a very bad decision and was trying to help you not make the mistake.
As for the SuperX, it's too far below par to be considered a good switch and is pretty worthless: http://www.ebay.com/sch/i.html?_trksid=p5197.m570.l1313&_nkw=foundry+superx&_sacat=0&_from=R40
I'm done here, points were made, help was offered. Good luck
Going for a datacenter that still doesn't have IPv6 in 2013 would be a very bad decision.
I know multiple large datacenters, far bigger than you or CC, that use nothing but SuperX units for all customer-facing ports and have exactly zero issues with them.
For Layer3 usage? You're 100% right. L3 is not done in hardware on them. But, we have a layer2 firmware on here and do all L3 traffic handling on our router.
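The split described there (the SuperX just forwards frames at L2, the router owns every L3 decision) boils down to the router doing longest-prefix matching. A toy sketch of that lookup in Python, with entirely made-up prefixes and next-hop names (nothing here reflects BuyVM's actual routing table):

```python
import ipaddress

# Hypothetical routing table: prefix -> next hop (illustrative values only)
routes = {
    ipaddress.ip_network("10.0.0.0/8"): "core-uplink",
    ipaddress.ip_network("10.1.0.0/16"): "rack-a",
    ipaddress.ip_network("0.0.0.0/0"): "transit",
}

def next_hop(dst: str) -> str:
    """Longest-prefix match: the most specific matching route wins."""
    addr = ipaddress.ip_address(dst)
    best = max((net for net in routes if addr in net),
               key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("10.1.2.3"))   # rack-a  (the /16 beats the /8)
print(next_hop("8.8.8.8"))    # transit (falls through to the default route)
```

Doing this per-packet in a switch CPU is exactly why software L3 boxes fall over under load, while pure L2 MAC forwarding stays in the ASIC.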
I appreciate all the help you and everyone else has offered. I've used many pointers brought up as well
Francisco
Be nice. FH doesn't have V6 right now either and I gave Rob a really odd look when he told me that.
I brought up an HE BGP tunnel for our V6 and it's running great. I took a /48 from my /32 allocation just for LAS so whenever Rob does get V6 rolled out for BGP clients, I can cut over w/o any burps.
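For anyone curious how a /48 carves out of a /32: Python's `ipaddress` module makes the math obvious. The `2001:db8::/32` prefix below is the reserved documentation range standing in for the real allocation:

```python
import ipaddress

# Documentation prefix as a stand-in for a real /32 allocation
alloc = ipaddress.ip_network("2001:db8::/32")

# A /32 contains 2**(48-32) = 65536 possible /48s
num_48s = alloc.num_addresses // ipaddress.ip_network("2001:db8::/48").num_addresses
print(num_48s)  # 65536

# Carve the first /48 off for a single site (e.g. one DC location)
site = next(alloc.subnets(new_prefix=48))
print(site)  # 2001:db8::/48
```

So a per-location /48 barely dents a /32, which is why cutting one over to an HE tunnel now and renumbering nothing later works out.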
Francisco
@CVPS_Chris said: I'm done here, points were made, help was offered. Good luck
Tip 'o the cap to you sir, I haven't laughed this hard all day. Feeble attempts, but vaguely entertaining.
Let him be, I don't think he was doing it with the intention of trolling, just he couldn't wrap his head around what we were attempting to do.
Francisco
mmm Brocade
I like my MLX other than the one time it rebooted. Still not sure how what I was doing caused a reboot, but I will never do that again T_T
We looked at an MLX but I was hating the prices on 24 port blades.
Francisco
Weird, this is what I am getting when grabbing a file from my vps (Node 59)
root@xxxx:~# wget http://xxxx.ch/400mb.test
--2013-01-24 16:53:43-- http://xxxx.ch/400mb.test
Resolving xxxx.ch... x.141.x.x
Connecting to xxxx.ch|x.141.x.x|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 419430400 (400M) [text/plain]
Saving to: "400mb.test"
100%[==================================================================================================================>] 419,430,400 26.2M/s in 19s
2013-01-24 16:54:03 (21.0 MB/s) - "400mb.test" saved [419430400/419430400]
root@xxxx:~#
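A quick sanity check on that transcript: the timestamps span roughly 20 seconds for the 400 MiB file, which lines up with wget's reported average (wget computes 21.0 MB/s over the exact transfer time, not the rounded wall-clock seconds used here):

```python
size_bytes = 419430400   # the 400 MiB test file from the transcript
seconds = 20             # 16:53:43 -> 16:54:03 per the timestamps

mb_per_s = size_bytes / seconds / (1024 * 1024)   # MiB per second
mbit_per_s = size_bytes * 8 / seconds / 1_000_000  # megabits per second

print(round(mb_per_s, 1))    # 20.0
print(round(mbit_per_s))     # 168
```

So that single transfer was pushing on the order of 170 Mbit/s, well past the earlier sub-megabit complaints.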
You aren't the only one. I had a few more people doing large file transfers over their filtered IPs and it ended up triggering some of the monitors.
Francisco
Diggin it! Looks like you fixed that issue btw.
I can attest to EGI forcing you over HE. I have a ..few.. servers with them, and routes were completely different when going over our servers versus my BuyVM VPSs. You were HE, and the server(s) with them would take a lot of nLayer. I never see much GBLX with them though.
Yep.
The same DST IP from 2 different locations within EGI went the same way.
Funny part is, I have boxes in other racks in 1090 and they all pass through the same central 1090 switch. This means they're doing the route adjustments at the edge and it isn't some screw up with gateway IPs.
Francisco
I just about had a heart attack thinking ColoCrossing might have Native v6....