Comments
It does, quite well, but you can check the link and see how the bandwidth was spiking for a few minutes, so it is clearly not a self-limitation.
However, thank you, this does settle it.
If you do put up those changes, I officially declare myself satisfied for all that I had to endure from Aldryic, and will forget the whole incident.
I would suggest you buy more bandwidth anyway; even without me pointing them out, people will still notice the problems, because they are huge and can't really be mitigated any other way.
M
root@edge01:~# ifstat -i bond0
       bond0
 KB/s in  KB/s out
43971.73  109245.6
40198.89  118167.3
47747.48  122150.3
52313.37  105391.3
46810.50  106227.5
ifstat: warning: rollover for interface bond0, reinitialising.
    0.00      0.00
54715.01  121594.1
48394.89  117885.6
48092.52  117624.5
44736.00  116098.4
root@edge01# show interfaces bonding bond0
    address 173.245.xx.xx/29
    description WAN
    hash-policy layer3+4
    mode 802.3ad
    primary eth0
[edit]
root@edge01# show interfaces ethernet
ethernet eth0 {
    bond-group bond0
    duplex auto
    hw-id 00:25:90:27:f4:2a
    smp_affinity auto
    speed auto
}
ethernet eth1 {
    bond-group bond0
    duplex auto
    hw-id 00:25:90:27:f4:2b
    smp_affinity auto
    speed auto
}
ethernet eth2 {
    bond-group bond1
    duplex auto
    hw-id 00:15:17:3c:c5:8b
    smp_affinity auto
    speed auto
}
ethernet eth3 {
    bond-group bond1
    duplex auto
    hw-id 00:15:17:3c:c5:8a
    smp_affinity auto
    speed auto
}
I assure you, we have a full bonded 2Gbit commit. These are our off-peak hours; peak is around 1.3Gbit - 1.4Gbit.
Francisco
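For anyone wanting to sanity-check a bond like this, the kernel exposes the LACP negotiation state directly; a quick look (assuming the interface names from the config above):

root@edge01:~# cat /proc/net/bonding/bond0

Look for "Bonding Mode: IEEE 802.3ad Dynamic link aggregation", "MII Status: up" on both eth0 and eth1, and a matching Aggregator ID on the two slaves. A slave sitting in its own aggregator usually means the switch side of the LACP channel doesn't match the config.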
That is not what I have seen; many people have problems, and a non-exit node that runs for such a long time does self-regulate to take the best routes.
I would suspect bad peering from your provider if everything else checks out.
Something is very wrong; you know best, though. Good luck fixing it any way you can.
M
It looks like this thread accidentally fell into a retarded time machine.
Your valuable input is highly appreciated, sir, thank you very much!
M
Something is very wrong; you know best, though. Good luck fixing it any way you can.
We've got lots of HE, so if they fran something up it's out of our hands. Before the switches we were capping at around 900Mbit/sec for anything behind the router, but we could still pull full speeds past that. It's possible those Dells just have a crappy LACP setup, I don't know. Our HP 2848 was even worse and capped at ~500Mbit with our PPS levels.
Don't get me wrong, if you do jumbo frames and all-local traffic then I'm sure the boxes handle it great, but remember that all of those ports are then getting routed through 2 ports to the router. Switches have a fixed amount of buffer memory for caching, and the Dell & HPs don't let you change it. The Force10s do, but I've not had a need to do that yet.
If people get a bad path, the best we can do is take it up with our upstream and hopefully it'll either self-correct or just not be a problem. In Atlanta we don't have any HE for IPv4 as far as I know, so hopefully those little burps won't be there at all.
Francisco
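One caveat when benchmarking a setup like this: with hash-policy layer3+4, each flow is pinned to a single slave, so a one-stream test can never show more than one link's worth of throughput. A rough multi-stream check with iperf (a sketch; 192.0.2.10 is a placeholder target):

iperf -c 192.0.2.10 -t 30        # single stream: pinned to one slave, ~1Gbit max
iperf -c 192.0.2.10 -t 30 -P 8   # 8 parallel streams: hashed across both slaves

If the parallel run doesn't climb well past a single link's speed, the hash policy or the switch-side channel config is worth a second look.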
When you apologize for all of your lies, I will be happy to amend the AUP in words simple enough for you to understand.
YOU ARE CORRECT SIR! MR. TYPO FINDER!
For that I will prove you are incompetent too.
You say you didn't allow Tor and that you police the network. I pointed you to many scripts to check things, and yet today we have a node with 119 days of uptime, clearly from before the AUP change.
You are not only rude and bullish, you are also incompetent, sir.
M
Good, remember the 'sir' when you speak to your betters.
Like I said, when you apologize for your lies (and cease with your libel campaign), then I will update the AUP in simple enough words that you can understand it. Until then, door's to your left.
non-exits (relays I guess?) don't require a SWIP
exits require a SWIP or an amendment to a user's account allowing otherwise (up to @Aldryic and the client to work out)
users that run an exit w/o permission will have their VM suspended until they address things
Francisco
Hopefully, it is out of your hands now. It is a shame how much the company had to go through because of your attitude and incompetence.
I only let it go because I admire Francisco and what BuyVM did for the community in the past. Rest assured, it has nothing to do with you.
M
please just stoooooooooooooooooooop! I back up @subigo's thinking
I am fully willing to. The outcome is satisfactory for me; it remains to be seen whether the underlying problems will also be solved, but that is no longer my concern, since it was proven it is not because of some pedophile conspiracy.
M
And what is your point? That providers should accept Tor?
Sorry for dragging this thread completely offtopic, but has the problem with free disk space been solved yet?
Francisco
No - HW RAID. I was just showing people how overselling works. We have ~45% disk space free on most nodes with tons of GB sold, and our disks aren't as big as what we've sold.
We're planning to upgrade to RAID10 with SSDs in the future.
No, my point is they should not make up reasons to block some apps just because they lead to high bandwidth usage. When those reasons paint a whole group of people in a bad light, it is even more inexcusable.
By allowing non-exit nodes, without conditions that make it unsustainable for people who pay their own money for the project, the "righteous" fabricated reasons are quashed, and I am satisfied with this.
They could revert to the old block-all policy after a reasonable amount of time, as long as it is not justified as a crusade against child porn, and I would still be satisfied.
M
Waste of money; HW RAID doesn't improve RAID1 at all.
It's a good way to destroy drives or have a complete failure on all members. There are no hardware RAID controllers that properly handle SSDs.
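For context, the software-RAID mirror being argued for here is only a few commands with mdadm (a sketch; /dev/sda2 and /dev/sdb2 are placeholder partitions):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
cat /proc/mdstat            # watch the initial resync progress
mdadm --detail /dev/md0     # confirm both members are active and in sync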
Francisco
It keeps us from hitting a lot of the issues software RAID has, which we've outlined in other threads here. (BTW, the controllers support RAID10 as well for when we upgrade.)
Thank you for that information, I'll do some research on that.
I have tested heavy IO for a whole week on a stripe of two 128 GB SSDs. That is not technically RAID, but I couldn't afford to throw away more money just for a test, and a mirror would have been moot.
It did OK, though worse than I expected, since I was one of those people who thought SSDs really are THAT fast.
M
There are no RAID cards on the market that pass TRIM through, so your drives will simply keep degrading performance-wise until they are TRIM'd, etc.
I don't think even mdadm supports it. Intel is working on a firmware update for their ICH*R onboard RAID to allow TRIM, but they'll likely only do it for the newest chipsets.
Fran
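Whether TRIM actually reaches a drive behind a given controller is easy to check from the OS (a sketch; /dev/sda is a placeholder):

hdparm -I /dev/sda | grep -i TRIM    # does the drive itself advertise TRIM?
lsblk --discard /dev/sda             # non-zero DISC-GRAN/DISC-MAX means discards pass through

If lsblk shows zeros, nothing at the filesystem level (mount -o discard, fstrim) will reach the flash.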
A RAID stripe, or striping with LVM? If it's a RAID stripe (mdadm, etc.) then performance will suffer with TRIM not passing through.
SSDs aren't about dd speeds; it's about your IOPS. Ripping 40,000 read operations a second is mind-boggling considering even super-expensive 15k SAS drives are like... 400 IOPS?
Francisco
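Since dd only exercises sequential throughput, the IOPS figure needs a random-I/O tool; fio is the usual choice (a sketch; the test file path and sizes are arbitrary):

fio --name=randread --filename=/tmp/fio.test --size=1G \
    --rw=randread --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=60 --time_based

A 15k SAS drive will report a few hundred IOPS on a run like this; a decent SSD, tens of thousands.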
@Maounique it depends on what SSD you bought >.>... they make some pretty low-end ones.
Most of the ones I've tested have pretty high dd speeds.
A hardware stripe, Intel controller and Intel SSDs. I mean, I thought SSDs could saturate the bus in that configuration; it wasn't that way, obviously.
M
Yea, you two, please don't use shitty SandForce ones. They have "compressed" speeds, so ripping a dd of zeroes will compress and "give you 500MB/sec writes", but if you write incompressible data (there are testers that do this) it drops to a humble ~260MB/sec at peak.
SandForces have a 30% failure rate, so yea.
Francisco
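The compression trick is simple to expose yourself: zeroes compress perfectly, random data doesn't, so on a SandForce controller the two timed writes below will report very different speeds (a sketch; paths and sizes are placeholders):

dd if=/dev/urandom of=/root/rand.bin bs=1M count=4096                     # prepare incompressible data first (urandom is slow; don't time this step)
dd if=/dev/zero of=/mnt/ssd/zero.bin bs=1M count=4096 oflag=direct        # compressible writes: the inflated figure
dd if=/root/rand.bin of=/mnt/ssd/rand.bin bs=1M count=4096 oflag=direct   # incompressible writes: the honest figure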
I think this calls for a hard drive thread; let me do the honors.
I am out of popcorn. We need some WHT-style emoticons here.
@Francisco I haven't tried them in HW RAID yet. Not SandForce ones.
off-topic here we gooo!