New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
#SEOPoisoning
Thanks for making me laugh
The GVH/Jon threads keep getting better and better all the time haha.
Not to mention that we've all had a spillage, right guys? Ain't no thang, sometimes that baby gravy just won't stay in the tissue/sock/pet.
Great, now I have to clean up my laptop display. And no, it's not what you think. I coughed up soda while laughing at your unexpected photo response.
Next GreenValueHost offer will be coming tomorrow.
You know too much. "We" shall die you.
@lewissue how come your HitLeap proxying is generating a load of 20?
Anyone's HitLeap proxying generates a high load when running multiple instances (on Windows). On *nix it can be optimized to use less, but that takes effort on the client's part.
HitLeap is very I/O intensive. This individual had 20 separate containers running HitLeap inside one node. As a result, and as should be expected, the entire node slowed down due to resource exhaustion. It's worth noting that while lack of I/O is often blamed on the hardware, it should frequently be attributed to poor software design and programming practices instead. Essentially, there's very little you can do on the hardware side to run that many HitLeap instances simultaneously. For reference, a load average of 20 on a quad-core, hyper-threaded CPU (8 threads) is screaming high.
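For anyone wondering why 20 is "screaming high": load average counts runnable tasks plus (on Linux) tasks stuck in uninterruptible I/O wait, so you normalise it by the number of logical CPUs. A minimal sketch (function name is mine, not from any tool mentioned here):

```python
import os

def load_per_cpu(load_avg: float, cpus: int) -> float:
    """Normalise a load average by logical CPU count.
    A value above ~1.0 means tasks are queuing for the CPU
    (or, on Linux, blocked in uninterruptible I/O wait)."""
    return load_avg / cpus

# The node in question: load average 20 on a quad-core
# with hyper-threading (8 logical threads).
print(load_per_cpu(20, 8))  # 2.5 tasks per thread, i.e. heavily overloaded

# Live check on the current box (Linux/macOS only):
one_min, _, _ = os.getloadavg()
print(load_per_cpu(one_min, os.cpu_count() or 1))
```

So even counting all 8 hardware threads, every thread had roughly 2.5 tasks competing for it, which matches the symptom of the whole node slowing down.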