Looking for a Potential Sponsor for Free Toxicity API/Project
Hey everyone!
I run ModerateHatespeech -- we provide free + accessible ML-backed endpoints for detecting toxic/hateful comments online. We're primarily geared toward content moderation, but we've worked with researchers in academic settings too.
Since we're built on some relatively new ML techniques and have taken a very adaptive approach to specifically targeting content moderation w.r.t. toxicity, we're able to outperform pretty much every other platform out there in added value (including Google's Perspective, as well as commercial solutions). Did I mention we're free?
We also work directly with different communities to build custom solutions, tools, ML baselines, and a lot of other cool stuff. Mitigating unintended censorship and bias are big issues too, so obviously we do a lot of work auditing + improving those aspects.
I wanted to humbly ask whether any providers would be willing to sponsor us -- ideally with infrastructure. Since we're running ML (transformers!), a GPU instance would be awesome. We don't need an A100...just something decently fast. (Discounts are also highly appreciated if full sponsorship isn't possible!)
Happy to talk terms/joint marketing/case studies, whatnot. We help moderate > 125M people online (~500k comments a day, removing ~16k hateful comments daily) and work with UNICEF, MIT, and others, so your support is very much appreciated + impactful!
Here's some more information about us, what we do (and why), how we define "hateful," etc: https://moderatehatespeech.com/
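For anyone curious what integrating looks like in practice, here's a rough sketch of posting a comment to a toxicity-classification endpoint from Python. The endpoint URL, the `token`/`text` field names, and the response shape are placeholders for illustration, not the documented contract -- check the site above for the real API details.

```python
import json
import urllib.request

# Placeholder endpoint -- see https://moderatehatespeech.com/ for the real API.
ENDPOINT = "https://api.moderatehatespeech.com/api/v1/moderate/"

def build_payload(comment: str, token: str) -> bytes:
    """Serialize the request body (field names are assumptions)."""
    return json.dumps({"token": token, "text": comment}).encode()

def classify(comment: str, token: str) -> dict:
    """POST a comment to the moderation endpoint and return the parsed JSON."""
    req = urllib.request.Request(
        ENDPOINT,
        data=build_payload(comment, token),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

# Example usage (requires a real API token; response keys are hypothetical):
# result = classify("some user comment", token="YOUR_API_TOKEN")
# if result.get("class") == "flag":
#     ...queue the comment for human review
```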
Thanks!
Welton
(FWIW...we also have NPO status, so depending on your country in-kind donations might be tax deductible, if that's your thing).
Comments
I'd have a chat with @lentro, sounds like a good fit for their platform (TensorDock)
Oh yeah, that would be sweet. @lentro permission to send ya a message? Or however you prefer to communicate? Would love to work together
Thanks @Erisa for the mention!
At TensorDock, we run our own cloud as well as a compute marketplace, where hosts install our hypervisor software and then customers provision VMs on those.
For your use case, the marketplace would have the lowest costs; it's where providers compete against each other for the best pricing.
GPU prices are among the best in the market, starting at $0.42/hr for an A6000 (vs $0.80/hr at Lambda), $0.27/hr for a 3090 (vs $1.30/hr at Genesis), etc.
While we might not be able to provide a full sponsorship, we'd love for you to join us! DM sent
You could try to reach out to stability.ai. They are known to provide grants for research.
good luck!
Thanks! Will take a look
Talk to Twitter. I'm sure they'd be happy to replace 1000 workers with an open source free solution for content moderation.
Seriously, Tweet at Musk and see if he'll have one of his Tesla engineers look it over and tell Musk what it's about.
Edit: you can also try the "buy us before the competition does" route.
Yeah, I think that's interesting -- I would assume Twitter's got its own ML backend handling a large portion of its content moderation in addition to human reviewers.
We do track toxicity on Twitter: https://moderatehatespeech.com/research/twitter-toxicity-index/ -- there's an interesting drop right around when Elon took over. Correlation, causation, no idea, but I know they did make some platform/algorithmic changes right around then.
It's moot. They've let go so much experience, the next team will just replace it all. There was a good article about how the experience of the workers was the secret sauce and Musk just screwed the pooch letting go of nearly all their accumulated value.
There's going to be dozens and dozens of successful startups as a result of these mass layoffs at big tech.
Hmm, all the public reports implied toxicity got worse after Musk took over, citing 5x N-word use as an example. Some advertisers pulled out out of concern, and others after their ads appeared next to garbage posts.
Also, I never took statistics in high school, so I'm clueless, but doesn't your ~300k daily analyzed posts need to be a minimum/consistent % of total posts to be valid? I have NO idea whether total daily tweets are 3M, 30M or 300M... but a mass exodus of people would drop the overall total.
Yeah, totally -- it's like RedHat. Talent is value
Indeed it does! Not necessarily a minimum %, but the change needs to be statistically significant, which depends on N and the actual size of the change. You can do the math, but the sample size, when aggregated over a longer period of time like we do with the moving average, allows us to see rough trends. In this case, we can just eyeball it and say pretty confidently the drop is statistically significant because of the degree of change.
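To make that concrete, here's a quick sketch of the kind of back-of-the-envelope check involved: a trailing moving average to smooth daily noise, plus a two-proportion z-test on the toxic-comment rate before vs. after a change. The numbers are made up for illustration -- they're not our real data.

```python
import math

def moving_average(rates, window=7):
    """Trailing moving average to smooth day-to-day noise in daily rates."""
    return [sum(rates[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(rates))]

def two_proportion_z(x1, n1, x2, n2):
    """z-statistic for the difference between two proportions
    (toxic/analyzed before vs. after); |z| > 1.96 ~ significant at the 5% level."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Illustrative numbers only: ~300k posts/day, toxic share moving 2.7% -> 3.2%.
z = two_proportion_z(8_000, 300_000, 9_500, 300_000)
print(f"z = {z:.1f}")  # well past 1.96 at these sample sizes
```

With samples this large, even a half-percentage-point shift in the rate clears the significance threshold easily, which is why a big visible drop in the moving average can be called significant by eyeball.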
For those of us with less experience in this space:
"ML" stands for "machine learning".
"NPO" stands for "non-profit organization".
Well, if you're looking for exposure, you'd get media attention if you posted stats that said, "actually, Twitter isn't any more toxic after Musk". You can tweet it out and tag Musk.