How can file hosting sites provide 50TB/uploader storage? Are they cheating?
I have talked with an affiliate person from some file host. He said, "we provide 50TB per uploader for uploader storage." So what kind of servers do you think they use? Is it just cheating/trolling? How can they survive? Or what kind of cheap storage host provides that kind of huge storage?
Comments
ask them again
Distributed storage and overselling, or just the latter.
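For the overselling half, the back-of-envelope math is simple: almost nobody uses their full quota, so you provision for average usage, not the advertised number. A toy sketch in Python, where the 192TB box and the 0.5TB average are my own assumptions, not anything any host has published:

    # Hypothetical oversell math: how many "50 TB" accounts fit on one box
    # if most uploaders never get near their quota.
    server_capacity_tb = 192    # assumed storage server
    quota_per_user_tb = 50      # the advertised quota
    avg_real_usage_tb = 0.5     # assumed: typical uploader stores ~500 GB

    honest = server_capacity_tb / quota_per_user_tb    # ~3.8 users per box
    oversold = server_capacity_tb / avg_real_usage_tb  # 384 users per box

    print(f"honest provisioning: {honest:.1f} users/server")
    print(f"oversold: {oversold:.0f} users/server "
          f"({quota_per_user_tb / avg_real_usage_tb:.0f}x oversell)")

The few whales who actually approach 50TB get throttled or banned (see the 1 Fichier data point below).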
So you can't answer?
How should we know an unnamed file host's practices?
1 Fichier, according to Reddit sources, suspends accounts exceeding 30TB.
I can answer. They make extensive use of an advanced technique known as "extreme deduplification". Ultimately every terabyte is reduced to a gigabyte. Subsequently, each gigabyte is reduced to a megabyte, and then further down to a kilobyte. And so on.
Ultimately all data is represented by a single superbit, which is able to simulate the entire 50 TB per uploader by oscillating values of zero and one at appropriate intervals, but very quickly - much faster than the eye can see! So we never realize the illusion ....
Why not ask that affiliate person from that file host?
I mean, it would be difficult for @sibaper or for anyone here to answer your question. You haven't even told us which file host.
@sibaper they are cheating... hide your files hide your dirs.
What's your deficiency?
They don't answer because it's their secret.
I am asking this from the server side. What kind of servers provide this for cheap?
Define cheap?
Also (forgot to mention) they are using blockchain technology to leverage a synergistic distributed economy of scale. They can then pass the savings along to you!
True, but let's not forget about the advantages of multidimensional quantum disks.
@uptime Cloud-powered distributed replicated blockchain technology with DataMind™ deep learning assisted deduplication technology is really the way to go these days.
#disks!
"Hi, we replaced the poorly encoded 'Game of Thrones Complete Series' files in your account with an existing H.264-encoded set that is already in 7 million other accounts."
Hmmm .. perhaps you be making some kind of jokey-joke ...? But there is possibly a connection to be made from compression to AI: https://en.m.wikipedia.org/wiki/Hutter_Prize
More about this, from Herr Doktor Hutter's mouth, as it were: http://prize.hutter1.net/
(And the 100 MB file to compress is actually a sample of wikipedia, aka the sum of all human knowledge!)
TL;DR - "brevity is the soul of wit"
I will provide you with unlimited, infinite storage if you pay me $0.01 per kilobit of bandwidth. That's 8e+9 kilobits per TB, so $80 million per TB transferred.
John Oliver can confirm:
50TB, that's a lot. Imagine all these use cases.
I don’t think that anyone has 50 terabytes worth of GoT.
wieners!
I've participated in the Hutter Prize (no, never won) and really, there is virtually zero to do with AI there. It's just a compression contest.
It's not even a general compression contest. It's specifically for the first 100MB of the English version of wikipedia as of the date the prize started. If you can compress that further than anyone else, you win, even if the code you deliver is not applicable generally.
I think the organizers are completely wrong. It's the same fallacy once held by people working on chess engines: by figuring out how to play chess, computers would develop AI. Nope... they just develop really specific algorithms (backed by tons of speed) for chess which cannot be generalized. Same thing here.
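(For anyone wondering what "just a compression contest" looks like mechanically: the score is basically the size of your compressed output plus the decompressor. A toy illustration with a stock compressor, which the real entries beat by a wide margin:

    import lzma

    # Toy version of the metric: smaller compressed size = better score.
    sentence = ("It is a truth universally acknowledged, that a single man in "
                "possession of a good fortune, must be in want of a wife. ")
    data = (sentence * 1000).encode()

    out = lzma.compress(data, preset=9)
    print(f"original:   {len(data):,} bytes")
    print(f"compressed: {len(out):,} bytes ({len(out) / len(data):.2%})")

Repeated text like this compresses absurdly well; real English prose like the wikipedia sample is much harder, which is the whole point of the prize.)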
That changed recently:
https://www.technologyreview.com/s/609736/alpha-zeros-alien-chess-shows-the-power-and-the-peculiarity-of-ai/
I understand your point but I disagree. I interpreted the work you linked as taking a general game-solving system and applying it to chess. The original hope for chess programming was that they'd study chess (or Go in this case, originally) and from that branch out into many areas of AI.
In this case, studying a specific game (Go) led to a system that could study other generally similar games. Yes, I know both games and they're different, but they're still pieces on a board. It's not like by studying Go, the system could then turn around and play Dungeons and Dragons.
This limitation also applies to humans though. I won't know how to play Eve by studying chess.
If you are looking for a genuine big storage solution, take a look at Incero's 192TB servers:
That's about $2.50 per TB, and if you get to be a VIP, it gets down to about $2.
@vovler
So ... the (promised) 50 TB per uploader would need about $100 monthly revenue from each to pay for these monster servers (50 TB × ~$2/TB ≈ $100/month). That's a lot of pop-unders etc. (or maybe they are fibbing a bit with regards to the 50 TB per uploader, I dunno ....)
Back to our regularly scheduled deprogramming:
@raindog308 - very cool to run into someone who has actually participated in the Hutter Prize. (Noting that the long-time winner seems to be an expert in compression via more traditional maths vs. esoteric AI.)
My own reach-to-grasp ratio for this stuff is about as over-extended as most other humans, but I am a fan of the work of Juergen Schmidhuber (known for developing LSTM "Long Short-Term Memory" neural network method for time series data analysis). I believe that Schmidhuber advised Hutter's post-doctoral work - in any case later proposing an interesting paradigm named OOPS (for "Optimally Ordered Problem Solver") in a paper purporting to demonstrate a possible pathway to "strong" AI by learning to solve a 30-disk Towers of Hanoi puzzle: https://arxiv.org/abs/cs/0207097 ...
I think the compression-as-intelligence argument rings true in terms of philosophy relating to algorithmic information theory - but we are currently so far back in what will eventually be considered the dark ages (and possibly also the weirdest timeline) in the pre-history of strong AI so as to make the Hutter compression challenge painfully sparse in terms of yielding much insight as to the deeper nature of intelligence - so far. We are still very much in the early days though.
@willie - thinking a bit about recent emergence of super-human game playing performance by algorithmic approaches (ie for the successful Alpha Go effort, using Monte Carlo tree search combined with deep learning amplifying data gleaned from playing human experts - by then having the AI play more against itself) ... I would like to make some kind of analogy with equivalence of NP-complete problems, whereby a solution to the traveling salesman problem transforms into graph isomorphism and satisfiability solvers, etc. ... but I don't really know what I'm talking about so may want to simply leave it at that. For now ... Until such time when it becomes necessary to derail this wonderful thread once again ~;^)
Great price! Too bad, even without knowing OP's definition of cheap, I have a feeling it's likely still fucking expensive to him.
Edit:
Tbh I don't think OP is interested in math or facts. Looks like he has already abandoned the thread since it wasn't instantly raining signup links for $7 50TB servers.
@mksh said:
At least no deluge of aff-links for file hosters (yet). Anyway, here's to OP, bless their file-loving heart!
Hey, I'm a little bit late, but even I could provide petabytes of cloud storage for €0.01/GB/month.
How? Well, I'm running Seafile Professional Server (seafile.com) with OVH object storage as the data backend.
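For reference, the object-backend wiring lives in Seafile Pro's seafile.conf. A minimal sketch assuming OVH's S3-compatible endpoint; the endpoint, bucket names, and keys below are placeholders, not my real config:

    [commit_object_backend]
    name = s3
    bucket = seafile-commit-objects
    key_id = YOUR_KEY_ID
    key = YOUR_SECRET_KEY
    host = s3.your-ovh-region.example.net
    use_https = true

    [fs_object_backend]
    name = s3
    bucket = seafile-fs-objects
    key_id = YOUR_KEY_ID
    key = YOUR_SECRET_KEY
    host = s3.your-ovh-region.example.net
    use_https = true

    [block_backend]
    name = s3
    bucket = seafile-blocks
    key_id = YOUR_KEY_ID
    key = YOUR_SECRET_KEY
    host = s3.your-ovh-region.example.net
    use_https = true

Seafile splits files into blocks before they hit the backend, so the per-GB price of the object storage sets your cost floor.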