TensorDock: Affordable, Easy, Hourly Cloud GPUs From $0.32/hour | Free $15 Credit!
Looking for an alternative to the big, expensive cloud providers that fleece you on cloud GPUs? Meet TensorDock.
We're a small, close-knit startup based in Connecticut that sells virtual machines with dedicated GPUs attached. Our goal isn't to make money. Rather, our primary goal is to democratize large-scale high-performance computing (HPC) and make it accessible to everyday developers.
1. Ridiculously Easy
Your time is money, so we've tried to make your life as easy as possible. We built our own panel, designed for the GPU use case. No WHMCS here. We did things our way. We have an API too.
When you deploy a Linux server, NVIDIA drivers, Docker, NVIDIA-Docker2, CUDA toolkit, and other basic software packages are preinstalled. For Windows, we include Parsec.
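On a fresh Linux VM, you can sanity-check that preinstalled stack in a few lines (a sketch; `nvidia-smi` ships with the NVIDIA driver and `nvcc` with the CUDA toolkit, and the exact package set on an image may differ):

```python
import shutil

# Tools the post says come preinstalled on Linux images.
TOOLS = ["nvidia-smi", "docker", "nvcc"]

def check_preinstalled():
    """Return a dict mapping each tool to its path on $PATH, or None if missing."""
    return {tool: shutil.which(tool) for tool in TOOLS}

if __name__ == "__main__":
    for tool, path in check_preinstalled().items():
        print(f"{tool}: {path or 'NOT FOUND'}")
```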
2. Ridiculously Cheap
The cheapest VM you can launch is $0.32/hour: a Quadro RTX 4000 with 2 vCPUs, 4 GB of RAM, and 50 GB of NVMe storage. If you're running an hourly GPU instance at another provider, check our pricing, and you'll save by switching to us. If you can commit long term, we can give discounts of up to 40%.
Our pricing model is unique. During our experimentation phase, we purchased a ton of different servers and ended up with a heterogeneous fleet. So we decided to charge per resource. Customers are rewarded for choosing the smallest amount of CPU/RAM, and they'll be placed on the smallest host node available. Select your preferred GPU and other configuration, and you'll only be billed for what you're allocated. It's that simple.
If you are training an ML model for 5 hours on 4x NVIDIA A5000s, it'll cost you less than $20.
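As a rough sanity check of that claim, here's the per-resource arithmetic. The A5000 rate and the vCPU/RAM/storage amounts below are assumptions for illustration, not quoted prices:

```python
# Illustrative cost check for "4x A5000 for 5 hours costs under $20".
# All rates here are assumed for the sketch; see the pricing page for real numbers.
GPU_RATE = 0.45        # $/hr per A5000 (assumed)
VCPU_RATE = 0.01       # $/hr per vCPU (assumed)
RAM_RATE = 0.005       # $/hr per GB of RAM (assumed)
STORAGE_RATE = 0.0002  # $/hr per GB of storage (assumed)

def hourly_cost(gpus, vcpus, ram_gb, storage_gb):
    """Sum the per-resource charges into one hourly rate."""
    return (gpus * GPU_RATE + vcpus * VCPU_RATE
            + ram_gb * RAM_RATE + storage_gb * STORAGE_RATE)

cost = hourly_cost(gpus=4, vcpus=8, ram_gb=16, storage_gb=100) * 5  # 5 hours
print(f"${cost:.2f}")  # → $9.90 with these assumed rates
```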
3. Live GPU Stock
As of this very moment, we have over 1,000 GPUs in stock, with another 5,000 GPUs available through reservation: you contact us, and we tell our partner cloud providers to install our host node software stack on their idle GPUs. We can handle your computing needs, no matter how large.
Because we charge per-resource, just check out our pricing:
You can register here:
And then deploy a server here:
It's that simple.
The LET Exclusive Offer
Not everyone needs GPUs, especially on a server forum like LET. So this is more of a soft launch for us before we move onto other ML-related forums at the start of next year.
This offer is only for LET users with at least 5 thanks, 5 posts/comments, and a registration date before November 15th. Don't create an account just to claim this credit!
$5 in account credit for registering and posting your user ID
User ID: https://console.tensordock.com/home (find it under the "Your Profile" box)
#Cloud GPUs at https://tensordock.com/, ID [Your User ID]
E.g., if your user ID was recbob0gcd, you'd post:
#Cloud GPUs at https://tensordock.com/, ID recbob0gcd
Additional $10 in account credit for creating a server & giving feedback
Once we've given you $5 in account credit, go create a GPU server and give us some feedback on the experience. At least 2 sentences, please! Again, post your user ID with this comment, and we'll give you an additional $10 in account credit. Bonus if you try our API.
For now, we're setting a limit of 100 participants. If a lot of people like it, we might do some more. The goal is to get feedback to improve the product before we go bigger.
~ Mark & Richard
Questions? Feel free to ask within this thread.
#Cloud GPUs at https://tensordock.com/, ID rec2hmoiz3
Congrats on being the first! Check your account
Cloud GPUs at https://tensordock.com/, ID rec7ggtxpo
Congratulations @sanvit on being one of the top 10!
Check your account
Cloud GPUs at https://tensordock.com/, ID recrrswbmf
Thank you, and congratulations on the new service! The dashboard looks awesome and the pricing also seems reasonable. I'll give it a spin when I get home
Congratulations @giang on being one of the top 10!
Check your account
Thanks for being the first to give feedback! Check your account for the feedback bonus
Cloud GPUs at https://tensordock.com/, ID reck6dus4f
TBH that wasn't for the bonus, it was just my first impression. Anyway, I'll come back with a 'real' review soon! Good luck!
Cloud GPUs at https://tensordock.com/, ID recervjsdo
#Cloud GPUs at https://tensordock.com/, ID recremmoxl
If a physical server has multiple NUMA sockets, does the allocation algorithm ensure the CPU and GPU are on the same NUMA socket?
I hear NVIDIA has a CUDALink feature that interconnects multiple GPUs.
If multiple GPUs are passed through into a KVM guest, will CUDALink work?
I see three OS choices.
How do I choose a CUDA compiler version?
Is there a way to automatically start a provisioning command after the machine boots?
This is a common feature in supercomputing facilities.
In supercomputing facilities, there's usually an option to store datasets on NFS and allocate compute nodes (CPU or GPU) on demand.
I hope the platform can offer HDD-based NFS storage (not iSCSI block storage), so users don't need to ingress and egress their datasets every time they create/destroy an hourly server.
Thanks for signing up as part of the first 10! Check your accounts
Aw, thanks so much! Looking forward to hearing your real review!
Cloud GPUs at https://tensordock.com/, ID recyehywps
Check your account, keep up the jokes and the feedback!
Thanks for signing up as part of the first 10! Check your account
Just playing around with a cheap VM but wow I love it, the dashboard is easy and looks good and the attention to detail on the OS images is sublime. Can tell a lot of thought and dedication was put into this, so kudos. Probably will end up using it occasionally for ad-hoc use, the pricing is really impressive for that use-case!
I sent the site and thread to my friend who was interested in something like this, they really love the look of it too but can't get the free credits because they're not on the forum :P
Cloud GPUs at https://tensordock.com/, ID recr4pbunq
Congratulations on being part of the first 10! Check your account
Can you clearly explain what exactly you offer and how much it costs per month, without advertising tricks?
We offer virtual machines with a GPU attached. All resources are fully dedicated.
Pick your GPU (pricing depends on the GPU selected)
Pick your vCPUs ($0.01/hr per vCPU)
Pick your RAM ($0.005/hr per GB)
Pick your storage ($0.0002/hr per GB)
Here are the costs:
For monthly servers and longer terms, you can email me at [email protected] and we might be able to provide further discounts. We anticipate usage being more for surges: "I need ten 8x NVIDIA A100 servers for 24 hours."
Eventually, we'll probably launch cloud gaming. For example, storage might cost $0.01/hr, and you'd pay $0.50/hr for each hour your VM is turned on. The sum should come out to less than $20/month for most people, which is less than Shadow.
TL;DR: not a traditional LET offer, and not made for monthly usage.
Cloud GPUs at https://tensordock.com/, ID recl1ndncn
@yoursunny you hit a lot of really good points. I really like you.
NUMA was a big pain to deal with, and tbh I'm not that smart, but one of our sysadmins figured out how to make it all come together. The actual provisioning system is based on Kubernetes (k8s) and designed for replication and scale.
I haven't heard of CUDALink; do you mean NVLink? And yes, if you deploy SXM cards (V100 or A100 models only), NVLink should work. For this reason, NVLink A100s are in super high demand and out of stock right now, and that's also why NVLink V100s cost more than PCIe ones.
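If you want to check this from inside a VM, `nvidia-smi topo -m` prints the interconnect matrix: `NV1`/`NV2`-style entries indicate NVLink, `PIX`/`PHB` indicate PCIe hops, and the CPU-affinity column reflects NUMA placement. A small wrapper (a sketch that degrades gracefully when no driver is present):

```python
import shutil
import subprocess

def gpu_topology():
    """Return the output of `nvidia-smi topo -m`, or None if nvidia-smi is unavailable."""
    if shutil.which("nvidia-smi") is None:
        return None
    proc = subprocess.run(["nvidia-smi", "topo", "-m"],
                          capture_output=True, text=True)
    return proc.stdout if proc.returncode == 0 else None

topo = gpu_topology()
if topo is None:
    print("nvidia-smi not available (no GPU passthrough?)")
else:
    # NV# links mean NVLink; PHB/PIX/etc. mean PCIe paths.
    print(topo)
```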
We'll make more OS images, especially ones with PyTorch, Jupyter Notebook, etc., installed, ahead of our actual launch for developers.
We use cloud-init, so I'll look into adding support.
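For reference, cloud-init reads a `#cloud-config` user-data file at first boot, and the standard `runcmd` module runs shell commands once after boot. An illustrative snippet (generic cloud-init, not a confirmed TensorDock feature; the image and output path are placeholders):

```yaml
#cloud-config
# runcmd entries execute once, on first boot, after the network is up.
runcmd:
  - docker pull pytorch/pytorch      # pre-fetch a training image
  - nvidia-smi -L > /root/gpus.txt   # record the attached GPUs
```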
This is one of our long-term goals for Q2 of next year, if all goes well. Ideally, you upload data and it can be mounted to your compute VM, so you don't pay for compute while you upload data. We'll also offer dedicated CPU compute, up to 512 GB of RAM, at prices similar to DigitalOcean's, in some time.
Congratulations on being one of the top 10! Check your account
Can you explain the price per month for the simplest configuration, for those of us who are bad at maths? It's all per hour and per hour..
Cloud GPUs at https://tensordock.com/, ID recbfrprvn
If you go here and press "RTX 4000" in the GPU section, you'll see the cheapest config that we offer:
$0.27/hr for GPU
$0.02/hr for 4 GB RAM
$0.02/hr for 2 vCPU
$0.01/hr for 50 GB of 3x-replicated NVMe storage
$0.32/hr * 730 hrs/month = $233.60/month
The number of hours in a month varies a bit, but that's roughly the cost if you want to rent the server for a full month with it turned on the entire time.
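That breakdown can be double-checked against the per-resource rates quoted earlier in the thread ($0.01/hr per vCPU, $0.005/hr per GB of RAM, $0.0002/hr per GB of storage; the $0.27/hr GPU rate is from the list above):

```python
# Recompute the cheapest RTX 4000 configuration line by line.
gpu     = 0.27         # RTX 4000, $/hr
vcpu    = 2 * 0.01     # 2 vCPUs at $0.01/hr each
ram     = 4 * 0.005    # 4 GB RAM at $0.005/hr per GB
storage = 50 * 0.0002  # 50 GB NVMe at $0.0002/hr per GB

hourly  = gpu + vcpu + ram + storage
monthly = hourly * 730  # ~730 hours in an average month

print(f"${hourly:.2f}/hr -> ${monthly:.2f}/month")  # → $0.32/hr -> $233.60/month
```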
A lot of the stuff says storage is priced at $0.01/GB/hr, when actually it seems to be $0.01 per 50 GB per hour instead. Was that a mistake, or am I missing something?
Yesss cost is $0.0002/GB/hr but in 50 GB increments, thanks for catching that!
I see it on the console's deploy page; anywhere else? Awesome catch!