Comments
Sorry, your gpp version is too small. You can borrow someone else's, or grow your own.
Gay ping pong? Girls walk away as soon as I mention NetBSD, so no hope for me.
If the current pricing continues I'd expect to soon see either overloaded (slow) servers or constant out-of-stock situations, since they're almost giving this stuff away. If that happens I'd like to see a premium line (like DO Optimized, OVH Public Cloud, etc.) with dedicated resources and higher prices. People will want to use these things as dynamic compute instances rather than just as low-utilization monthly servers that happen to have hourly billing, and that means high CPU loads.
Gonna benchmark a GCC compilation now.
Added: I uploaded an ssh pubkey through the cloud console but it didn't seem to work and I still got that email with a random root password... hmm.
Added 2: apt-get install build-essential fails:
It seems to me that the Debian 9.3 install image is not what I'm used to. (Edited:) I don't know why it wants that stuff that's normally just on the root path, but adding it fixed things. The apt-get install itself is amazingly fast, presumably thanks to the NVMe disk and the local mirror on the 10 Gbit network.
Added 3: Opened ticket 2018012303024392 about the above.
Do 2.7.2.1 for me thx
We rely on you for these things. :-)
Use TransferWise. Very easy.
Don't listen to @WSS. Do gcc49 instead. :-)
By the way, which plan did you get?
dpkg: warning: 'start-stop-daemon' not found in PATH or not executable
dpkg: error: 2 expected programs not found in PATH or not executable
Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin
E: Sub-process /usr/bin/dpkg returned an error code (2)
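For anyone hitting the same dpkg error: the fix is just to put the sbin directories back on root's PATH, exactly as the dpkg note suggests. A minimal sketch, assuming a Debian root shell:

```shell
# Prepend the sbin directories that dpkg's maintainer scripts expect;
# add the export line to root's ~/.profile to make it permanent.
export PATH="/usr/local/sbin:/usr/sbin:/sbin:$PATH"

# Verify start-stop-daemon is now resolvable (it lives in /sbin on Debian).
command -v start-stop-daemon || echo "still missing"
```

After that, the failed `apt-get install build-essential` should complete normally.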
You would think that they would have tested something as basic as an apt-get install.
.. did that seriously suggest that ROOT should have /usr/local/sbin in its path?
Yeah, Debian, you're gone.
But isn't this correct? I don't have Debian in front of me, but this is the case on both Slackware and NetBSD.
Mine is an RSA key and it didn't work either.
I'm doing gcc 7.2 on a 32 GB server. From last week's runs on other servers, I think building gcc49 might be about 25% faster.
Only certain packages fail. It's weird. Adding /sbin etc. to PATH fixes it.
Added: compilation finished,
Pretty decent, beats my i7-3770 by a little I believe. It's probably about right for the 2.1 GHz clock, and I begin to think that the Geekbench CPU benchmark is worthless.
That build is with make -j8 and --disable-multilib and I think the ~500% cpu utilization (i.e. lower than the desired 800%) is just because of how the build script works. I did this on a 16 core E5-2670 recently and got around 800% instead of 1600%.
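For anyone who wants to reproduce the run, the recipe is roughly this, wrapped in a function; the version, prefix, and thread count here are illustrative, not an exact record of my invocation:

```shell
# Sketch of a gcc-from-source build; expect it to take a while.
build_gcc() {
    ver="${1:-7.2.0}"
    wget -c "https://ftp.gnu.org/gnu/gcc/gcc-$ver/gcc-$ver.tar.xz"
    tar xf "gcc-$ver.tar.xz"
    (cd "gcc-$ver" && ./contrib/download_prerequisites)  # fetch gmp/mpfr/mpc
    mkdir -p gcc-build && cd gcc-build
    # --disable-multilib skips the 32-bit libs; -j8 matches the 8 vcores.
    "../gcc-$ver/configure" --prefix="$HOME/gcc-$ver" --disable-multilib
    time make -j8
}
```

Usage: `build_gcc 7.2.0`; the number to compare across boxes is the `real` time that `time` prints at the end.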
Added 2: compiling ffmpeg (default config) on 10 threads gets:
By comparison, 5 threads on a dedicated i5-3470S gets:
so that is pretty impressive. I think I was getting slightly under 2 minute builds on the i7-3770 but that box is still running Wheezy and ffmpeg no longer builds out of the box there.
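The ffmpeg timing is from a default-config build along these lines (official ffmpeg git URL; the function wrapper and exact thread count are just a sketch of the runs mentioned above):

```shell
# Time a default-configuration ffmpeg build, no external codecs enabled.
build_ffmpeg() {
    git clone --depth 1 https://git.ffmpeg.org/ffmpeg.git
    cd ffmpeg
    ./configure      # plain default config
    time make -j10   # 10 threads, as in the run above
}
```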
Is it possible that /proc/cpuinfo is mis-reporting the hardware clock speed on these servers, and they're really (currently) faster than advertised?
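Easy enough to peek from inside the guest, with the caveat that /proc/cpuinfo shows whatever frequency QEMU advertises, which can differ from what the host cores actually turbo to:

```shell
# What the guest kernel believes about the CPU.
grep -m1 "model name" /proc/cpuinfo
grep -m1 "cpu MHz" /proc/cpuinfo

# Steal time (the "st" column) hints at contention from neighbors.
command -v vmstat >/dev/null && vmstat 1 2 | tail -1
```

If the advertised clock is 2.1 GHz but builds run like a 3 GHz part, turbo on mostly-idle host nodes is the likely explanation.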
Yes, if random idiots can install stuff there then the system is already pwned, so it doesn't seem like a big prob. I do see /usr/local/sbin is in the root path on other Debian boxes that I have.
Wow, that's a monster server. It should be quick.
And here we go, account is disabled again without any notice via email/sms.
Just like trial period.
never experienced that... that same message after 5 minutes? seems like a browser issue.
It says there have been too many log in attempts, so that was not you?
I added the results to the earlier post. As before, the gcc build script is bottlenecked to single thread between stages, so it doesn't completely use the available parallelism and that slows it down. But it still beats my i7-3770 slightly. I compiled ffmpeg as well, and it's surprisingly fast. This seems too good to last before noisy neighbors arrive, but it's great so far.
Very impressive performance & pricing. Also the website interface is very nice, congrats to Hetzner.
It's me. I tried to log in again after failing 5 times with the same message, because I thought it was a browser error, but no.
Ran a few basic benchmarks on their NVMe SSD lineup, looks quite good actually. A bit of variance on the CX11 nodes regarding I/O so your numbers might be higher/lower:
CX11
https://serverscope.io/trials/36Ga & https://browser.geekbench.com/v4/cpu/6633598
CX21
https://serverscope.io/trials/dXYr & https://browser.geekbench.com/v4/cpu/6635614
CX41
https://serverscope.io/trials/96ZY & https://browser.geekbench.com/v4/cpu/6637564
CX51
https://serverscope.io/trials/jl1n & https://browser.geekbench.com/v4/cpu/6637881
Ironically enough, I've ignored my logged-in tab for 3 hours, then refreshed, and I'm still there.
you keeping all them or just spinning them up to do a benchmark?
@Hetzner_OL: What is the CPU usage policy?
you keeping all them or just spinning them up to do a benchmark?
Keeping the majority & most likely spinning up a few more CX31s in the next couple of days if the performance holds.
I wonder if there's a way to disable the AES crypto instructions to keep the miners away. Yeah some other applications will be affected too, but not that many and nowhere near as much.
Set it as a module, and don't load it by default?
It's a hardware instruction, so what I wonder is whether QEMU can somehow trap it.
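It can, actually. QEMU's x86 CPU model syntax lets the host mask individual CPUID feature bits, so the guest never sees AES-NI at all; software AES still works, just much slower, which is the point. A sketch, assuming a KVM host with host-passthrough CPU (the trailing arguments are elided):

```shell
qemu-system-x86_64 -enable-kvm -cpu host,-aes ...
```

With libvirt the equivalent is, if I recall the schema correctly, a `<feature policy='disable' name='aes'/>` element inside the domain's `<cpu>` block.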
Or just earmark certain nodes and migrate all the miners (based on CPU usage profiles) to those nodes and let them fight it out for CPU cycles :-)
Root shouldn't inherit local paths, which are ill-defined. This is done for ease of use and the proliferation of local ports/packages. There's no reason for this to become "standard".
Same goes for /opt, which was /usr/local, but faster to type.
But some of us are using CPU cycles for actual computing. As long as there's the usual mix of listener users, cpu users, bandwidth users etc., the average cpu load isn't all that high and cpu sharing doesn't result in people getting slowed down much. The demands balance out.
Mining changes that since there is infinite demand for free money. So as long as cycles are cheap enough, miners will suck out every last one of them. So we need a way to dissuade the miners while leaving regular users (including cpu users within reason) alone.