Comments
Where would I find that please?
Does it matter where the server is located for a computational workload?
I sent you a PM.
What is the application, if you don't mind my asking? The usual DL libraries like PyTorch and TensorFlow have GPU bindings. There are also Intel optimizations for NumPy/SciPy:
https://software.intel.com/en-us/blogs/python-optimized
If you're running a separate algorithm instance for each user it sounds like you can do fine with small servers, or maybe hourly VM instances from providers with lots of gear.
If you're literally just running pure Python code, then look into at least using Cython or NumPy, or spinning the heavy numerics off to an external program.
Python is great, but it's more for gluing components together than for running heavy computation directly.
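To make the point concrete, here's a rough sketch of the gap between a pure-Python loop and the equivalent NumPy call (a dot product, as a stand-in for whatever the heavy numerics actually are; the exact speedup will vary by machine):

```python
import time
import numpy as np

def dot_pure_python(a, b):
    # Pure-Python loop: every element goes through the interpreter.
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

n = 1_000_000
rng = np.random.default_rng(42)
a = rng.random(n)
b = rng.random(n)

t0 = time.perf_counter()
slow = dot_pure_python(a, b)
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
fast = float(a @ b)  # NumPy dispatches to compiled code (BLAS)
t_numpy = time.perf_counter() - t0

# Same answer, very different cost.
assert abs(slow - fast) < 1e-6 * abs(fast)
print(f"loop: {t_loop:.3f}s  numpy: {t_numpy:.4f}s")
```

The NumPy version is typically one to two orders of magnitude faster, which is why "just using NumPy properly" is often the cheapest optimization available.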
We're doing facial recognition stuff. This looks very interesting; going to look into it. We are indeed using NumPy, and also dlib.
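For that kind of pipeline, dlib's face recognition model maps each face to a 128-dimensional embedding, and its examples suggest treating two faces as the same person when the Euclidean distance between embeddings is below about 0.6. The comparison step itself is just NumPy; here is a sketch using random vectors in place of real dlib descriptors (the helper name and toy data are mine, not dlib's API):

```python
import numpy as np

MATCH_THRESHOLD = 0.6  # distance threshold used in dlib's face recognition examples

def is_same_person(emb_a, emb_b, threshold=MATCH_THRESHOLD):
    """Compare two 128-d face embeddings by Euclidean distance."""
    dist = float(np.linalg.norm(np.asarray(emb_a) - np.asarray(emb_b)))
    return dist < threshold

# Toy vectors standing in for real dlib face descriptors:
rng = np.random.default_rng(0)
base = rng.normal(size=128)
near = base + rng.normal(scale=0.01, size=128)  # slight perturbation, distance ~0.11
far = rng.normal(size=128)                      # unrelated vector, distance >> 0.6

print(is_same_person(base, near))  # True
print(is_same_person(base, far))   # False
```

The expensive part is producing the embeddings (dlib's detector plus its ResNet model), which is exactly the stage where GPU support or compiled code pays off; the matching itself is cheap.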
The same server is also serving a web API and static assets to clients in the USA (for simplicity's sake), so I would assume it does matter.
You should probably use PyTorch, which has good CUDA bindings.
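The usual PyTorch pattern is to pick the device once and create or move tensors onto it, so the same code runs on a GPU box or falls back to CPU elsewhere (a minimal sketch, assuming PyTorch is installed):

```python
import torch

# Use the GPU when CUDA is available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Create a batch of feature vectors directly on the chosen device.
x = torch.randn(4, 128, device=device)
print(device.type, x.shape)
```

A model would be moved the same way with `model.to(device)`, and inputs must live on the same device as the model.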
That is too bad; the recent drop in crypto prices has created a surplus of GPUs on the market (especially used ones). I just bought a lot of used GeForce GTX 1080 Ti video cards for under $350 per card and used them to build a half-rack cluster of data-processing power.
Anyway, if you want to build the ultimate system, buy an HP c3000 blade chassis and 8 HP ProLiant BL680c G7 blade servers with quad CPUs. Each blade will have 32 cores, for a total chassis count of 256 cores (512 threads with hyperthreading) at over 2 GHz per core. Total cost would probably be around $5k to $6k, depending on how much memory you toss into each blade.
If you want a more traditional setup and don't need that many cores, try a 4U Dell PowerEdge R910. It has four 8-core processors for a total of 32 cores (64 threads with hyperthreading) for about $600. You'll just need to toss in a hard drive.
WHAT? Where? I don't see them anywhere near that cheap around here, despite the 1180 supposedly being on its way. If they've really gotten that affordable, maybe I should buy one instead of using Paperspace.
I'm in a few cryptocurrency-mining Slack and Discord rooms, and every now and then a miner gets out of the business.
Well, OK, but that seems way below market price for those cards. They're $650+ on Craigslist around here; haven't checked eBay. Flipping them could be more profitable than mining with them.