
Hetzner Brings Back GPU Servers

Jasonhyperhost Member, Patron Provider

Hetzner brings back GPU servers at last.

They used to offer the GTX 1080 paired with the i7-8700.

With the new Dedicated Server GEX44, you'll experience the epic performance and computing power of a GPU; its parallel processing architecture lets it handle many tasks at once, making it ideal for working through large, complex volumes of data quickly and efficiently.

The new GPU Server GEX44 houses a heavy-duty NVIDIA RTX™ 4000 SFF Ada Generation graphics card with 20 GB of GDDR6 ECC GPU memory, making it ideal for running trained AI models. You can use the GEX44, for example, to run the open-source large language model Mixtral via API. And of course, it cuts through other graphics-heavy workloads, such as 3D modelling and CAD.
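
A minimal sketch of what "run Mixtral via API" could look like in practice, assuming you serve a quantized Mixtral build on the GEX44 yourself behind an OpenAI-compatible endpoint (for example with vLLM or Ollama); the URL and model name below are illustrative assumptions, not part of Hetzner's offer:

    # Illustrative only: query a locally hosted Mixtral through an
    # OpenAI-compatible chat endpoint running on the GEX44 itself.
    # The URL and model name are assumptions, not something Hetzner provides.
    import requests

    resp = requests.post(
        "http://localhost:8000/v1/chat/completions",
        json={
            "model": "mixtral",  # a quantized build, so it fits in 20 GB of VRAM
            "messages": [{"role": "user", "content": "Summarize the GEX44 specs."}],
            "max_tokens": 200,
        },
        timeout=120,
    )
    print(resp.json()["choices"][0]["message"]["content"])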

The GEX44 wouldn't be complete without its Intel® Core™ i5-13500 processor from the 13th-generation "Raptor Lake" family, with Hyper-Threading and hardware virtualization (Intel VT), housing an impressive 6 performance cores and 8 efficiency cores. Two speedy 1.92 TB Gen3 Datacenter Edition NVMe SSDs plus 64 GB of DDR4 RAM complete this powerhouse of a GPU server.
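
As a quick sanity check, that topology lines up with the 20 logical CPUs reported in the YABS run further down: 6 hyper-threaded performance cores plus 8 single-threaded efficiency cores. A tiny illustrative Python check, to be run on the server itself:

    # Illustrative sanity check of the i5-13500 topology quoted above.
    import os

    p_cores, e_cores = 6, 8          # performance / efficiency cores
    logical = p_cores * 2 + e_cores  # P-cores are hyper-threaded, E-cores are not
    print(f"expected logical CPUs: {logical}")    # 20, matching the YABS output below
    print(f"visible to this OS   : {os.cpu_count()}")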

The GEX44, which includes an IPv4 address, can be yours for the impressively small price of just € 184.00 a month and a one-time setup fee of € 79.00.

The GEX44 will initially be reserved exclusively for existing customers and can be ordered directly via Robot.

only available for existing clients

Comments

  • Got one after an hour. Looks like it's being built to order.

  • Jasonhyperhost Member, Patron Provider

    @nanankcornering said:
    Got one after an hour. Looks like it's being built to order.

    @nanankcornering can you do a yabs?

  • Currently testing some production stuff for encoding, so it might affect the results

    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    #              Yet-Another-Bench-Script              #
    #                     v2024-01-01                    #
    # https://github.com/masonr/yet-another-bench-script #
    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    
    Tue Feb 20 16:19:28 CET 2024
    
    Basic System Information:
    ---------------------------------
    Uptime     : 0 days, 0 hours, 29 minutes
    Processor  : 13th Gen Intel(R) Core(TM) i5-13500
    CPU cores  : 20 @ 2500.000 MHz
    AES-NI     : ✔ Enabled
    VM-x/AMD-V : ✔ Enabled
    RAM        : 62.6 GiB
    Swap       : 32.0 GiB
    Disk       : 1.7 TiB
    Distro     : Ubuntu 22.04.3 LTS
    Kernel     : 5.15.0-89-generic
    VM Type    : NONE
    IPv4/IPv6  : ✔ Online / ✔ Online
    
    IPv6 Network Information:
    ---------------------------------
    ISP        : Hetzner Online GmbH
    ASN        : AS24940 Hetzner Online GmbH
    Host       : Hetzner
    Location   : Nuremberg, Bavaria (BY)
    Country    : Germany
    
    fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/md2):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 829.78 MB/s (207.4k) | 919.00 MB/s  (14.3k)
    Write      | 831.97 MB/s (207.9k) | 923.83 MB/s  (14.4k)
    Total      | 1.66 GB/s   (415.4k) | 1.84 GB/s    (28.7k)
               |                      |                     
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 1.01 GB/s     (1.9k) | 1.10 GB/s     (1.0k)
    Write      | 1.06 GB/s     (2.0k) | 1.18 GB/s     (1.1k)
    Total      | 2.07 GB/s     (4.0k) | 2.29 GB/s     (2.2k)
    
    Geekbench 6 Benchmark Test:
    ---------------------------------
    Test            | Value                         
                    |                               
    Single Core     | 2522                          
    Multi Core      | 11596                         
    Full Test       | https://browser.geekbench.com/v6/cpu/4996170
    
    YABS completed in 5 min 5 sec
    
  • Too expensive, if only it was $20/year....

  • vitobotta Member
    edited February 20

    We spend a lot of money on GPU instances on Google Cloud (for AI stuff), as the ones with A100 80GB of RAM cost a fortune. How can Hetzner sell this server for less than 200 bucks per month? Unbelievable.

  • @Moopah said:
    Too expensive, if only it was $20/year....

    Nah not really, just wait till 3109.

  • darkimmortal Member
    edited February 20

    The EX44 is in desperate need of ECC in my experience (fast DDR4 is absolute dog tier for memory errors)… I guess this is a good way to get rid of these dud machines, since most of the processing can be done away from the dodgy main memory

  • nanankcornering
    edited February 20

    Sh#t. The RTX 4000 is limited to just 70 W of power, about 2.5x less than their GTX 1080 server lineup (max power 180 W)

    I'm currently running stuff with power usage of 63W already.

  • raindog308 Administrator, Veteran

    @Jasonhyperhost said: can you do a yabs?

    Does YABS need a section to benchmark GPU?

    Thanked by 2: loay, TrendyJack
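
For reference, a minimal sketch of the kind of GPU probe such a section might run; it assumes PyTorch with CUDA support is installed on the server and is not part of YABS itself:

    # Illustrative GPU throughput probe (hypothetical; not part of YABS).
    # Assumes PyTorch with CUDA support and an NVIDIA driver are installed.
    import time
    import torch

    assert torch.cuda.is_available(), "no CUDA device found"
    print(torch.cuda.get_device_name(0))

    n, iters = 8192, 20
    a = torch.randn(n, n, device="cuda", dtype=torch.float16)
    b = torch.randn(n, n, device="cuda", dtype=torch.float16)

    for _ in range(3):          # warm-up
        torch.matmul(a, b)
    torch.cuda.synchronize()

    start = time.time()
    for _ in range(iters):
        torch.matmul(a, b)
    torch.cuda.synchronize()
    elapsed = time.time() - start

    tflops = 2 * n**3 * iters / elapsed / 1e12
    print(f"~{tflops:.1f} TFLOPS (fp16 matmul)")
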
  • lentro Member, Host Rep

    @vitobotta said: A100 80GB of RAM

    Not to steal Hetzner's spotlight here, but if you need any of these GPUs, come try us out, happy to give a few dollars of starting credit!
    https://marketplace.tensordock.com/deploy

    @vitobotta said: sell this server for less than 200 bucks per month

    If you amortize costs over a period of 3 years, the numbers aren't too ugly. The problem with the big clouds is that they will charge prices such that they can earn back the cost of the hardware in 6-12 months, which is objectively crazy.
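
To make the amortization point concrete, a rough back-of-the-envelope check; the hardware cost below is a purely hypothetical placeholder, not a quoted or known price:

    # Back-of-the-envelope amortization check for the 3-year horizon above.
    # hardware_cost is a hypothetical placeholder for illustration only.
    monthly_price = 184.00      # EUR, Hetzner's listed GEX44 price
    setup_fee = 79.00           # EUR, one-time
    hardware_cost = 4000.00     # EUR, assumed all-in cost of the box

    months = 36
    revenue = setup_fee + monthly_price * months
    print(f"3-year revenue  : {revenue:,.2f} EUR")                                 # 6,703.00
    print(f"payback (months): {(hardware_cost - setup_fee) / monthly_price:.1f}")  # ~21.3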

  • @lentro said:
    Not to steal Hetzner's spotlight here, but if you need any of these GPUs, come try us out, happy to give a few dollars of starting credit!
    https://marketplace.tensordock.com/deploy

    I would like to try your service, but with Google I can have Kubeflow installed in GKE and provision GPU nodes in the cluster on demand, automatically, with autoscaling, etc. So it's not something I can easily replace with a third-party service :(

    Thanked by 1: lentro
  • xxsl Member, LIR
    edited February 23

    @nanankcornering said:
    Sh#t. The RTX 4000 is limited to just 70 W of power, about 2.5x less than their GTX 1080 server lineup (max power 180 W)

    I'm currently running stuff with power usage of 63W already.

    The new RTX 4000 SFF Ada is designed for a 70 W maximum. It's a 4 nm chip with increased efficiency, which results in lower power consumption.
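
For anyone who wants to confirm the cap on their own box, a minimal illustrative check that shells out to nvidia-smi (assumes the NVIDIA driver and nvidia-smi are installed on the server):

    # Illustrative: read the board power limit and current draw via nvidia-smi.
    import subprocess

    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,power.limit,power.draw",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    name, limit, draw = [field.strip() for field in out.stdout.strip().split(",")]
    print(f"{name}: limit {limit}, currently drawing {draw}")
    # On the RTX 4000 SFF Ada the reported limit should be around 70 W.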

  • xxsl Member, LIR

    @vitobotta said:
    We spend a lot of money on GPU instances on Google Cloud (for AI stuff), as the ones with A100 80GB of RAM cost a fortune.

    Then why not build your own machine locally?
    The Google/Amazon clouds are always black-hearted robbers.
