Comments
ARM was restocked today, so not dead yet.
I missed out. Is there a way to get an alert if it does restock? I know some sites provide alerts, etc.
https://checkservers.ovh/ ... you get emailed, at least.
LOL, I'm going through this exact scenario now with SYS support. It's hard to believe that they still refuse to even acknowledge the issue. They just keep sending me in circles and want me to waste time running unrelated tests. Lesson learned: OVH support is useless.
Wish I'd grabbed a 6TB one when I had the chance instead of a 2TB, handy little boxes.
They did that with me, and it was so bad I ended up just saying fuck it and giving up. Right when I thought I was actually getting somewhere with the network problem I had, the next ticket ended up being them starting the circle over and asking the exact same questions the first ticket asked.
Damn, missed this earlier, had been waiting for one. Oh well.
There were 2TB CA and 4TB CA dedis available yesterday. Join Ne00n's discord for notifications.
6TB are rare; the last one I saw was available almost 2 weeks ago.
Thanks, but unfortunately I don't use Discord (don't want to sign up for yet another platform).
Has anyone got Docker running on the ARM? Or Nextcloud, with instructions?
Should all work properly; the only thing is that the images you use need to be ARM-based.
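For what it's worth, a minimal sketch on Debian, assuming Docker's upstream convenience script supports armhf (treat it as untested on these particular boxes):

# install Docker via the upstream convenience script
curl -fsSL https://get.docker.com | sh
# pull an ARMv7 image explicitly; x86 images will not run on the Armada 375
docker run --rm arm32v7/debian:stable uname -m
# should print: armv7l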
I recently snapped one of these up, hoping to replace an ageing Kimsufi 2G (still running its original hard disk after 10 years of 24/7).
Unfortunately the issues with these servers are not documented very well; all I could find in places like this thread before buying was the upload-speed kernel bug. What I've run into so far:
Does not run the stock Debian armhf kernel, only an OVH custom kernel or https://github.com/kotc/ovhnas
AES acceleration, while present, requires a custom build of OpenSSL, so again more effort to keep up to date
I can see why there's talk of OVH discontinuing them now.
All that said, I'm going to continue digging and see if I can get a stock Debian kernel booting; that'd be enough for me to put up with the other issues.
That's a very rare, ancient deal; generally Kimsufi doesn't give such a discount, and currently doesn't on any of the normal or flash-sale servers.
Yes, it is. But Kimsufi's installation templates have been updated, and there's now ZFS with Proxmox.
Btw, I found a link on how to install LVM and install Debian with debootstrap on Kimsufi here: https://blog.tincho.org/posts/Setting_up_my_server:_re-installing_on_an_encripted_LVM/
You won't be able to with 'working' network drivers, as there was a regression in the driver a while back that broke it. The (#SYSarm) OVHNAS project was formed to fix this and provide a working kernel. We ALSO provide the .config file with the kernel, so you can of course build things yourself, but you will be lacking the patch for the NIC that fixes the speed issues.
The 4.5 series kernel is actually the original and likely last kernel built by Marvell directly for OVH; that is the reason it actually works decently. The subsequent updates and kernels were likely all rolled by OVH themselves, and they didn't know how to fix the driver regressions for the NIC. You can reach it (4.5.2) by netbooting the device (removing all contents from /boot) or by entering 'rescue' mode.
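If you want to try the netboot route, a rough sketch (move the files rather than deleting them, so /boot can be restored later):

# back up the contents of /boot; with it empty the box netboots the OVH 4.5.2 kernel
mkdir -p /root/boot-backup
mv /boot/* /root/boot-backup/
reboot
# to return to the installed kernel, move the files back into /boot and reboot again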
Additionally, we don't believe that OVH had the manpower or expertise to handle fixing this, which is why someone with ARM development experience was recruited to help make things work. I also believe the intention of the project is to keep supplying an updated kernel; so far we have been releasing new versions as we are able to build a compatible kernel. You can re-run the OVHnas script anytime to obtain the latest version we have available.
I can't say for certain that you could set up LVM if you wanted; their installer just doesn't provide it. However, you can place the server in 'rescue' mode (which does work and is helpful if you want to do any type of custom install or test kernels) and then build the system as you like and/or test a custom setup. Additionally, if you are skilled enough to be going this route and get stuck, come by IRC; we generally don't have an issue with trying to help people out (don't confuse this with thinking we will do it for you, though).
The ARM systems have a lot of potential if you spend the time to set them up correctly and take advantage of all the features, including the hardware crypto engines, which the OVHNAS kernel also supports. The only reason you have to recompile OpenSSL is because you are trying to use a hardware crypto function that isn't enabled by default in the software; you don't HAVE to recompile it or use the custom debs we created, since using the hardware crypto is in no way mandatory.
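If you want to see whether your current OpenSSL build exposes a hardware engine at all, a quick check (the engine name varies by build; cryptodev-based builds typically show 'cryptodev', newer stock builds 'devcrypto'):

# show which OpenSSL you are running and which engines it can load
openssl version
openssl engine -t -c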
If you are expecting any type of discount, you likely have no idea what the original prices for this product were. Just to give you an idea, the servers were originally introduced in 2016 and the prices were 2-3x as much as they are now. They also offered 2T, 4T and 8T instead of 6T. The price you are getting the SYSarm servers at is a STEAL, considering it would likely take at least a 6-month rental to get close to recovering the cost of the almost brand-new drives they had been deploying with them. So expecting any kind of discount, even for longer periods, is kinda silly since the price is already ridiculously low; there is no other storage product in its price range that provides dedicated hardware with a gigabit connection and an almost brand-new hard drive.
I am not trying to beat you up, just simply trying to get you thinking about the product a little differently as it seems you may have had some incorrect expectations.
Ohh, and if you need documentation, treat this thread as documentation, or feel free to submit some of your own and we can add it to the OVHnas GitHub (see signature) if you feel it would be helpful. Just stop by and we can help you get it added; we just don't have the time for it right now ourselves.
Come visit us on #SYSarm on Freenode if you need any help.
my 2 cents.
Cheers!
Has anyone figured out how to get Strongswan IPSec fully working on the armada375 yet?
congrats! it's not easy finding one available
that's because the network driver requires special treatment that isn't in mainline (even in 4.19/4.20, yes, I've tried), and OVH only recently provided an updated 4.9.12x (also bugged) kernel, so that's not going to fly.
Only if you need hwaccel in software using OpenSSL. Kernel drivers (LUKS, for example) don't need it and will use hwcrypto out of the box with that custom kernel.
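A quick sanity check of what the kernel-side crypto delivers, no OpenSSL rebuild involved (numbers will obviously vary per kernel and board):

# in-kernel cipher throughput as seen by dm-crypt/LUKS
cryptsetup benchmark --cipher aes-cbc --key-size 256
# list the crypto drivers the kernel actually registered (hardware ones show up here)
grep -E '^(name|driver)' /proc/crypto | head -n 40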
...or knowing a thing or two about Linux. Fixing that kernel bug wasn't easy, but since I did it, it's possible without KVM.
yup. but again, you can prepare your own kernel+ramdisk and do whatever you like
I wouldn't mind the discount as well, but heck, 10 USD for a 4TB dedi box? How low can YOU go?
we will see.
As mentioned above, it's a painful process, and I'm going to warn you to save you some pain: the stock kernel mvpp2 driver won't work with those boards, and the OVH-provided one is buggy.
Aye, I've been following the checkservers mailing list and finally got lucky after months; I think all the effort to get one raised my expectations a bit.
If it's just the network driver that doesn't work, could the patched version of that be loaded as a module to the stock debian kernel?
I'm hoping for as close as possible to full link speed with SSH and SoftEther; I think those both need the patched OpenSSL?
How did you get the debug info to do this (edit: assuming it wasn't booting)? I'm fairly comfortable with kernel dev but usually with at least a serial console
Yeah, I normally set up servers that way but couldn't get it to boot as far as producing any log output; that was without a custom/OVH kernel, which sounds like it's pretty much essential to have. I'll have to pop on IRC when I get a chance; I'm interested to see the build process for this custom kernel, especially creation of the U-Boot image.
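For reference, wrapping a zImage for U-Boot generally comes down to mkimage from u-boot-tools; the load/entry addresses below are placeholders, not the known-good values for these boards:

# build a uImage from a freshly compiled zImage (addresses are placeholders)
mkimage -A arm -O linux -T kernel -C none \
  -a 0x00008000 -e 0x00008000 \
  -n "custom-arm-kernel" -d arch/arm/boot/zImage uImage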
Aha that explains a lot, I was wondering where that 4.5.2 kernel was coming from!
Sure, but even then the stock kernel doesn't have an optimal config.
Hwaccel makes sense only for big chunks of data; for small packets the CPU wins. That's why it's only important if you want to offload the CPU for other tasks, and it requires comparing both modes before settling on one.
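An easy way to see that crossover yourself is openssl speed, which prints throughput for block sizes from 16 bytes up to 16 KB; run it with and without the engine and compare the small-block columns (the engine name here is an assumption for your particular build):

# pure software AES via the EVP interface
openssl speed -evp aes-128-cbc
# the same test through the hardware engine, if your build exposes one
openssl speed -evp aes-128-cbc -engine devcrypto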
Ramdisk + saving the interesting info to disk, then rebooting into a working kernel (even a netbooted one).
I don't have a use for it for now, so I'm not going to renew my ARM server.
I've found some tunings in another thread that might be interesting. Here are the results on my /ca SYSarm (a note on making the settings persistent follows the results):
tuning:
sysctl -w net.ipv4.tcp_rmem='32768 349520 113246208'
sysctl -w net.ipv4.tcp_wmem='32768 65536 113246208'
sysctl -w net.ipv4.tcp_mem='32768 349520 113246208'
sysctl -w net.ipv4.tcp_fin_timeout='180'
sysctl -w net.core.netdev_max_backlog='30000'
iperf3 -c ping-90ms.online.net -p 5209
before:
Connecting to host ping-90ms.online.net, port 5209
[ 4] local 54.39.62.122 port 39076 connected to 62.210.18.41 port 5209
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 5.86 MBytes 49.1 Mbits/sec 0 1.61 MBytes
[ 4] 1.00-2.00 sec 31.5 MBytes 264 Mbits/sec 0 7.51 MBytes
[ 4] 2.00-3.00 sec 40.2 MBytes 338 Mbits/sec 0 8.00 MBytes
[ 4] 3.00-4.00 sec 40.0 MBytes 335 Mbits/sec 1 8.00 MBytes
[ 4] 4.00-5.00 sec 39.4 MBytes 330 Mbits/sec 3 8.00 MBytes
[ 4] 5.00-6.00 sec 40.2 MBytes 338 Mbits/sec 0 8.00 MBytes
[ 4] 6.00-7.00 sec 36.5 MBytes 306 Mbits/sec 23 5.60 MBytes
[ 4] 7.00-8.00 sec 36.6 MBytes 307 Mbits/sec 77 3.92 MBytes
[ 4] 8.00-9.00 sec 37.1 MBytes 311 Mbits/sec 0 3.92 MBytes
[ 4] 9.00-10.00 sec 37.5 MBytes 314 Mbits/sec 0 3.92 MBytes
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 345 MBytes 289 Mbits/sec 104 sender
[ 4] 0.00-10.00 sec 344 MBytes 289 Mbits/sec receiver
after:
Connecting to host ping-90ms.online.net, port 5209
[ 4] local 54.39.62.122 port 39084 connected to 62.210.18.41 port 5209
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 602 KBytes 4.93 Mbits/sec 0 153 KBytes
[ 4] 1.00-2.00 sec 6.53 MBytes 54.8 Mbits/sec 0 1.70 MBytes
[ 4] 2.00-3.00 sec 25.0 MBytes 210 Mbits/sec 0 9.58 MBytes
[ 4] 3.00-4.00 sec 132 MBytes 1.11 Gbits/sec 0 58.3 MBytes
[ 4] 4.00-5.00 sec 209 MBytes 1.75 Gbits/sec 5 58.3 MBytes
[ 4] 5.00-6.00 sec 220 MBytes 1.85 Gbits/sec 0 58.3 MBytes
[ 4] 6.00-7.00 sec 221 MBytes 1.86 Gbits/sec 0 58.3 MBytes
[ 4] 7.00-8.00 sec 221 MBytes 1.85 Gbits/sec 0 58.3 MBytes
[ 4] 8.00-9.00 sec 220 MBytes 1.85 Gbits/sec 0 58.3 MBytes
[ 4] 9.00-10.00 sec 221 MBytes 1.85 Gbits/sec 0 58.3 MBytes
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 1.44 GBytes 1.24 Gbits/sec 5 sender
[ 4] 0.00-10.00 sec 1.41 GBytes 1.21 Gbits/sec receiver
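If the tuning helps on your box, it can be persisted across reboots with a sysctl drop-in (the file name is arbitrary):

cat > /etc/sysctl.d/99-sysarm-tuning.conf <<'EOF'
net.ipv4.tcp_rmem = 32768 349520 113246208
net.ipv4.tcp_wmem = 32768 65536 113246208
net.ipv4.tcp_mem = 32768 349520 113246208
net.ipv4.tcp_fin_timeout = 180
net.core.netdev_max_backlog = 30000
EOF
# reload all sysctl configuration files
sysctl --system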
^ Going to try it out on the FR server.
How do I get this on my ARM server too?
Doesn't change anything significantly in my case (SySARM in GRA2).
Up/down still within the same range, roughly 500/300 Mbps
Detrimental?
root@sys:~# cat iperf-before.txt
Connecting to host ping.online.net, port 5209
[ 4] local 54.37.xx.xx port 36358 connected to 62.210.18.40 port 5209
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.01 sec 144 MBytes 1.20 Gbits/sec 0 1.34 MBytes
[ 4] 1.01-2.00 sec 148 MBytes 1.24 Gbits/sec 0 1.34 MBytes
[ 4] 2.00-3.00 sec 148 MBytes 1.24 Gbits/sec 0 1.34 MBytes
[ 4] 3.00-4.00 sec 148 MBytes 1.24 Gbits/sec 0 1.34 MBytes
[ 4] 4.00-5.00 sec 149 MBytes 1.24 Gbits/sec 0 1.34 MBytes
[ 4] 5.00-6.00 sec 148 MBytes 1.24 Gbits/sec 0 1.34 MBytes
[ 4] 6.00-7.01 sec 149 MBytes 1.24 Gbits/sec 0 1.34 MBytes
[ 4] 7.01-8.00 sec 148 MBytes 1.24 Gbits/sec 0 1.34 MBytes
[ 4] 8.00-9.00 sec 169 MBytes 1.42 Gbits/sec 0 1.34 MBytes
[ 4] 9.00-10.00 sec 175 MBytes 1.47 Gbits/sec 0 1.49 MBytes
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 1.49 GBytes 1.28 Gbits/sec 0 sender
[ 4] 0.00-10.00 sec 1.49 GBytes 1.28 Gbits/sec receiver
iperf Done.
root@sys:~# cat iperf-after.txt
Connecting to host ping.online.net, port 5209
[ 4] local 54.37.xx.xx port 36380 connected to 62.210.18.40 port 5209
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 142 MBytes 1.18 Gbits/sec 0 1.55 MBytes
[ 4] 1.00-2.01 sec 149 MBytes 1.25 Gbits/sec 0 1.55 MBytes
[ 4] 2.01-3.00 sec 148 MBytes 1.24 Gbits/sec 0 1.55 MBytes
[ 4] 3.00-4.01 sec 136 MBytes 1.13 Gbits/sec 0 1.55 MBytes
[ 4] 4.01-5.00 sec 126 MBytes 1.07 Gbits/sec 0 1.55 MBytes
[ 4] 5.00-6.00 sec 120 MBytes 1.01 Gbits/sec 0 1.62 MBytes
[ 4] 6.00-7.01 sec 131 MBytes 1.09 Gbits/sec 0 1.66 MBytes
[ 4] 7.01-8.01 sec 102 MBytes 863 Mbits/sec 0 1.84 MBytes
[ 4] 8.01-9.01 sec 109 MBytes 915 Mbits/sec 0 1.84 MBytes
[ 4] 9.01-10.00 sec 145 MBytes 1.22 Gbits/sec 0 1.84 MBytes
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 1.28 GBytes 1.10 Gbits/sec 0 sender
[ 4] 0.00-10.00 sec 1.28 GBytes 1.10 Gbits/sec receiver
iperf Done.
@suricloud: just run the sysctl lines as root
@shot2: I've tried it on both boxes I control (2TB/fr/gra2/4.9.x and 4TB/ca/bhs6/4.14.x) and got quite a nice speedup on the ping-90ms.online.net iperf on both.
(I should have mentioned I'm not using iperf, just regular HTTP & rsync TCP upload/download, towards servers in Online DC2 ~5ms away and Aruba IT1 ~25ms away)
Not sure if such a tuning makes sense for the average use case...
Also, I had a typo in the post before; please try with ping-90ms.online.net, it helps with higher-latency sites apparently.
Hello guys,
Is it possible to install Docker on the SoYouStart armv7 with the Armada 375?
I tried it, but a lot of the required things are missing.
Thanks
Congrats on the first post in a year-old thread.