TensorDock: Affordable, Easy, Hourly Cloud GPUs From $0.32/hour | Free $15 Credit!

lentro Member, Host Rep
edited December 2021 in General

TensorDock

Looking for an alternative to big, expensive cloud providers who are fleecing you of money when it comes to cloud GPUs? Meet TensorDock.

We're a small, close-knit startup based in Connecticut that sells virtual machines with dedicated GPUs attached. Our primary goal isn't to make money; it's to democratize large-scale high-performance computing (HPC) and make it accessible to everyday developers.

Why TensorDock?

1. Ridiculously Easy
Your time is money, so we've tried to make your life as easy as possible. We built our own panel, designed for the GPU use case. No WHMCS here. We did things our way. We have an API too.

When you deploy a Linux server, NVIDIA drivers, Docker, NVIDIA-Docker2, CUDA toolkit, and other basic software packages are preinstalled. For Windows, we include Parsec.

2. Ridiculously Cheap
The cheapest VM you can launch is $0.32/hour for a Quadro RTX 4000 + 2 vCPUs + 4 GB of RAM + 50 GB of NVMe storage. If you are running an hourly GPU instance at another provider, check our pricing and you'll save by switching to us. If you can commit long term, we can offer discounts of up to 40%.

Our pricing is unusual. During our experimentation phase, we purchased a ton of different servers and ended up with a heterogeneous fleet, so we decided to charge per resource. Customers are rewarded for choosing the smallest amount of CPU/RAM, and they'll be placed on the smallest host node available. Select your preferred GPU and other configuration, and you'll only be billed for what you are allocated. It's that simple.

If you are training an ML model for 5 hours on 4x NVIDIA A5000s, it'll cost you less than $20.
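A back-of-envelope check of that figure, assuming an illustrative A5000 rate of $0.90/hr (not an official price) and the per-vCPU/RAM/storage rates quoted later in this thread:

```python
# Rough cost check for the 4x A5000 example above.
# GPU_RATE is an assumption for illustration, not an official price.
GPU_RATE = 0.90        # $/hr per A5000 (assumed)
VCPU_RATE = 0.01       # $/hr per vCPU
RAM_RATE = 0.005       # $/hr per GB of RAM
STORAGE_RATE = 0.0002  # $/hr per GB of NVMe storage

def hourly_cost(gpus, vcpus, ram_gb, storage_gb):
    """Per-resource billing: each component is summed independently."""
    return (gpus * GPU_RATE + vcpus * VCPU_RATE
            + ram_gb * RAM_RATE + storage_gb * STORAGE_RATE)

# 4 GPUs, 8 vCPUs, 32 GB RAM, 100 GB storage, for 5 hours:
total = 5 * hourly_cost(4, 8, 32, 100)
print(f"${total:.2f} for 5 hours")  # $19.30 for 5 hours
```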

3. Live GPU Stock
As of this very moment, we have over 1,000 GPUs in stock, with another 5,000 available through reservation: you contact us, and we have our partner cloud providers install our host-node software stack on their idle GPUs. We can handle your computing needs, no matter how large.

The details

Because we charge per-resource, just check out our pricing:
https://tensordock.com/pricing

You can register here:
https://console.tensordock.com/register

And then deploy a server here:
https://console.tensordock.com/deploy

It's that simple.



The LET Exclusive Offer

Not everyone needs GPUs, especially on a server forum like LET. So this is more of a soft launch for us before we move on to other ML-related forums at the start of next year :)

This is only for LET users with at least 5 thanks, 5 posts/comments, and a registration date before November 15th. Don't create an account just to claim this account credit!

$5 in account credit for registering and posting your user ID

Register: https://console.tensordock.com/register
User ID: https://console.tensordock.com/home (find it under the "Your Profile" box)

Then, post:
#Cloud GPUs at https://tensordock.com/, ID [Your User ID]

E.g. if your user ID was recbob0gcd, you'd post:
#Cloud GPUs at https://tensordock.com/, ID recbob0gcd

Additional $10 in account credit for creating a server & giving feedback

Once we've given you $5 in account credit, go create a GPU server and give us some feedback on the experience. At least 2 sentences please! Again, post your user ID with this comment, and we'll give you an additional $10 in account credit. Bonus if you try using our API :)

For now, we're setting a limit of 100 users to participate. If a lot of people like it, we might do some more. Goal is to get some feedback to improve the product before we go bigger :)

~ Mark & Richard



Website: https://tensordock.com/
Contact: https://tensordock.com/contact


Questions? Feel free to ask within this thread.


Comments

  • #Cloud GPUs at https://tensordock.com/, ID rec2hmoiz3

  • lentro Member, Host Rep

    @Erisa said: Cloud GPUs at https://tensordock.com/, ID rec2hmoiz3

    Congrats on being the first! Check your account :)

    Thanked by 1Erisa
  • Cloud GPUs at https://tensordock.com/, ID rec7ggtxpo

  • lentro Member, Host Rep

    Congratulations @sanvit on being one of the top 10!

    Check your account :)

    Thanked by 1sanvit
  • Cloud GPUs at https://tensordock.com/, ID recrrswbmf

  • @lentro said:
    Congratulations @sanvit on being one of the top 10!

    Check your account :)

    Thank you, and congratulations on the new service! The dashboard looks awesome and the pricing also seems reasonable. I'll give it a spin when I get home :)

    Thanked by 1lentro
  • lentro Member, Host Rep

    Congratulations @giang on being one of the top 10!

    Check your account :)

    Thanked by 1giang
  • lentro Member, Host Rep

    @sanvit said: Thank you, and congratulations on the new service! The dashboard looks awesome and the pricing also seems reasonable. I'll give it a spin when I get home

    Thanks for being the first to give feedback! Check your account for the feedback bonus :)

  • gazmull Member
    edited December 2021

    Cloud GPUs at https://tensordock.com/, ID reck6dus4f

  • @lentro said:

    @sanvit said: Thank you, and congratulations on the new service! The dashboard looks awesome and the pricing also seems reasonable. I'll give it a spin when I get home

    Thanks for being the first to give feedback! Check your account for the feedback bonus :)

    TBH that wasn't for the bonus, it was just my first impression :) anyway, I'll come with a 'real' review soon! Good luck!

  • Cloud GPUs at https://tensordock.com/, ID recervjsdo

  • yoursunny Member, IPv6 Advocate
    edited December 2021

    #Cloud GPUs at https://tensordock.com/, ID recremmoxl

    If a physical server has multiple NUMA sockets, does the allocation algorithm ensure the CPU and GPU are on the same NUMA socket?

    I hear NVIDIA has CUDALink feature that interconnects multiple GPUs.
    If multiple GPUs are passthrough into a KVM, is CUDALink going to work?

    I see three OS choices.
    How to choose a CUDA compiler version?
    Is there a way to automatically start a provisioning command after the machine boots?
    This is a common feature in supercomputing facilities.

    In supercomputing facilities, there's usually an option to store dataset on NFS, and allocate compute nodes (CPU or GPU) on demand.
    I hope the platform could offer HDD-based NFS storage (not iSCSI block storage), so that user doesn't need to ingress and egress dataset every time they create/destroy an hourly server.

    Thanked by 2lentro HalfEatenPie
  • lentro Member, Host Rep

    @gazmull said: reck6dus4f

    @Umut said: recervjsdo

    Thanks for signing up as part of the first 10! Check your accounts :)

    Thanked by 1gazmull
  • lentro Member, Host Rep

    @sanvit said: TBH that wasn't for the bonus, it was just my first impression anyway, I'll come with a 'real' review soon! Good luck!

    Aw, thanks so much! Looking forward to hearing your real review!

  • Cloud GPUs at https://tensordock.com/, ID recyehywps

  • lentro Member, Host Rep

    @yoursunny said: #recremmoxl

    Check your account, keep up the jokes and the feedback! :)

    Thanked by 1yoursunny
  • lentro Member, Host Rep

    @rooted said: recyehywps

    Thanks for signing up as part of the first 10! Check your account :)

  • Erisa Member
    edited December 2021

    Just playing around with a cheap VM but wow I love it, the dashboard is easy and looks good and the attention to detail on the OS images is sublime. Can tell a lot of thought and dedication was put into this, so kudos. Probably will end up using it occasionally for ad-hoc use, the pricing is really impressive for that use-case!

    I sent the site and thread to my friend who was interested in something like this, they really love the look of it too but can't get the free credits because they're not on the forum :P

    Thanked by 1lentro
  • Cloud GPUs at https://tensordock.com/, ID recr4pbunq

  • lentro Member, Host Rep

    @smile93 said: recr4pbunq

    Congratulations on being part of the first 10! Check your account :)

  • Can you clearly explain what exactly you offer, and how much it costs per month, without advertising tricks?

  • lentro Member, Host Rep

    @jenkki said: Can you clearly explain what exactly you offer and how much does it cost per month

    We offer virtual machines with a GPU attached. All resources are fully dedicated.

    Pick your GPU (pricing depends on GPU selected), pick your vCPU ($0.01/hr/vCPU), pick your RAM ($0.005/hr/GB), pick your storage ($0.0002/hr/GB)

    Here are the costs:
    https://tensordock.com/pricing
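To make the per-resource sum concrete, here's a quick sketch using those rates (the $0.27/hr RTX 4000 rate is the one shown on the deploy page breakdown further down this thread):

```python
# Per-resource hourly pricing, using the rates quoted above.
VCPU_RATE = 0.01       # $/hr per vCPU
RAM_RATE = 0.005       # $/hr per GB of RAM
STORAGE_RATE = 0.0002  # $/hr per GB of storage

def hourly_cost(gpu_rate, vcpus, ram_gb, storage_gb):
    """Total hourly rate: the GPU plus each resource billed independently."""
    return (gpu_rate + vcpus * VCPU_RATE
            + ram_gb * RAM_RATE + storage_gb * STORAGE_RATE)

# Cheapest config: RTX 4000 at $0.27/hr, 2 vCPUs, 4 GB RAM, 50 GB NVMe
print(f"${hourly_cost(0.27, 2, 4, 50):.2f}/hr")  # $0.32/hr
```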

    For monthly servers and longer terms, you can email me at [email protected]. We might be able to provide some more discounts. We anticipate usage being more surge-driven: "I need 10 8x NVIDIA A100s for 24 hours".

    Eventually we'll probably launch cloud gaming. For example, storage might cost $0.01/hr and you'd pay $0.50 for each hour your VM is turned on. The sum should come out to less than $20/month for most people, which is cheaper than Shadow.

    TL;DR: not a traditional LET offer, and not made for monthly usage.

    ~Mark

  • lentro Member, Host Rep

    @yoursunny you hit a lot of really good points. I really like you.

    @yoursunny said: multiple NUMA sockets

    NUMA was a big pain to deal with, and tbh I'm not that smart, but one of our sysadmins figured out how to make it all come together. The actual provisioning system is based on k8s and designed for replication and scale.

    I hear NVIDIA has CUDALink feature that interconnects multiple GPUs.

    I haven't heard of CUDALink; do you mean NVLink? And yes, if you deploy SXM cards (V100 or A100 models only), NVLink should work. For this reason, A100 NVLink cards are in super high demand and out of stock right now, and that's also why V100 NVLink cards cost more than PCIe ones.

    How to choose a CUDA compiler version?

    We'll make more OS images, especially ones with PyTorch, Jupyter Notebook, etc. preinstalled, ahead of our actual launch for developers.

    Is there a way to automatically start a provisioning command after the machine boots?

    We use cloud-init, so I'll look into adding support.
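For context, cloud-init already supports boot-time commands through user-data; a minimal sketch of what that could look like (assuming the panel eventually exposes a user-data field; the container image name is purely illustrative):

```yaml
#cloud-config
# Runs once on the VM's first boot.
runcmd:
  - nvidia-smi > /root/gpu-check.log               # sanity-check GPU passthrough
  - docker run --gpus all -d example/train:latest  # illustrative training container
```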

    I hope the platform could offer HDD-based NFS storage (not iSCSI block storage), so that user doesn't need to ingress and egress dataset every time they create/destroy an hourly server.

    This is one of our long-term goals, for Q2 of next year if all goes well. Ideally, you upload data and it can be mounted to your compute VM, so you don't pay for compute while you upload. We'll also offer dedicated CPU compute, up to 512 GB of RAM, at prices similar to DigitalOcean's, in time.

    ~Rich

    Thanked by 1yoursunny
  • lentro Member, Host Rep

    @KENTKING said: recl1ndncn

    Congratulations on being one of the top 10! Check your account :)

  • @lentro said: We offer virtual machines with a GPU attached. All resources are fully dedicated.

    Can you explain the price per month for the simplest configuration, for those of us who are bad at maths? It's all per hour this, per hour that..

  • Cloud GPUs at https://tensordock.com/, ID recbfrprvn

    Thanked by 1lentro
  • lentro Member, Host Rep
    edited December 2021

    @jenkki said: per month

    If you go here and press "RTX 4000" in the GPU section, you'll see the cheapest config that we offer:
    https://console.tensordock.com/deploy

    $0.27/hr for GPU
    $0.02/hr for 4 GB RAM
    $0.02/hr for 2 vCPU
    $0.01/hr for 50GB 3x Replicated NVMe Storage

    --> $0.32/hr

    $0.32/hr * 730 hrs/month = $233.60/month

    The number of hours in a month varies a bit, but that's roughly the cost if you want to rent the server for a full month with it turned on the entire time.
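Or, as a quick sketch of the same arithmetic:

```python
# Monthly cost estimate for the cheapest always-on config.
hourly = 0.27 + 2 * 0.01 + 4 * 0.005 + 50 * 0.0002  # GPU + vCPU + RAM + storage
hours_per_month = 730  # 365 days * 24 hours / 12 months ~= 730
print(f"${hourly:.2f}/hr -> ${hourly * hours_per_month:.2f}/month")
# $0.32/hr -> $233.60/month
```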

  • Erisa Member
    edited December 2021

    A lot of the stuff says storage is priced at $0.01/GB/hr when actually it seems to be $0.01 for 50 GB-hours instead. Was that a mistake or am I missing something?

    Thanked by 2lentro coreflux
  • lentro Member, Host Rep

    @Erisa said: 0.01 for 50 GB-hours

    Yesss, the cost is $0.0002/GB/hr, but billed in 50 GB increments. Thanks for catching that!

    I see it on the console's deploy page, anywhere else? Awesome catch :)
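For anyone curious, that increment billing would presumably work like this (the round-up rule for partial increments is my assumption, not confirmed above):

```python
import math

STORAGE_RATE = 0.0002  # $/GB/hr, billed in 50 GB increments

def storage_hourly(gb):
    """Round the requested size up to the next 50 GB increment, then bill per GB."""
    billable = math.ceil(gb / 50) * 50
    return billable * STORAGE_RATE

print(f"{storage_hourly(50):.2f}")   # 0.01
print(f"{storage_hourly(120):.2f}")  # billed as 150 GB -> 0.03
```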

    ~Mark
