Twitter confirms zero-day used to expose data of 5.4 million accounts
Twitter has confirmed a recent data breach was caused by a now-patched zero-day vulnerability used to link email addresses and phone numbers to users' accounts, allowing a threat actor to compile a list of 5.4 million user account profiles.
Last month, BleepingComputer spoke to a threat actor who said that they were able to create a list of 5.4 million Twitter account profiles using a vulnerability on the social media site.
This vulnerability allowed anyone to submit an email address or phone number, verify if it was associated with a Twitter account, and retrieve the associated account ID. The threat actor then used this ID to scrape the public information for the account.
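The flaw described above is a classic account-enumeration pattern: the endpoint's response differs depending on whether the submitted email or phone number is registered. The sketch below is purely illustrative — all names, data, and functions are invented, not Twitter's actual API — and contrasts the leaky behavior with the standard mitigation of returning an identical response either way.

```python
# Hypothetical illustration of the enumeration flaw; every name and
# record here is fabricated for the example.

ACCOUNTS = {
    "alice@example.com": 1001,  # invented sample records
    "+15550100": 1002,
}

def lookup_vulnerable(identifier: str) -> dict:
    """Leaky endpoint: confirms whether an email/phone is registered
    and returns the linked account ID, which lets an attacker iterate
    over a large list of identifiers and build exactly the kind of
    dataset described in the article."""
    if identifier in ACCOUNTS:
        return {"exists": True, "account_id": ACCOUNTS[identifier]}
    return {"exists": False}

def lookup_fixed(identifier: str) -> dict:
    """Patched behavior: respond identically whether or not the
    identifier is registered, so the response leaks nothing."""
    return {"status": "If this contact is registered, a notification was sent."}
```

An attacker only needs to loop `lookup_vulnerable` over a purchased list of emails and phone numbers to link each one to an account ID; with the fixed variant, registered and unregistered identifiers are indistinguishable from the response alone.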
Comments
Waiting for someone to put out some paper regarding bot count.
???
Hey, most social websites require you to submit a phone number. What a surprise!
ElonMusk has entered the chat
bot accounts were compromised because they used insecure passwords like: bot1#23456789, bot2#3456789, bot3#45678910
The news category is dreary
Isn't there any good news in the industry? Maybe I'm overly optimistic.
ahahahahahahahaha
NOPE! I think this affected ALL accounts on Twitter with a phone number & email. A problem like this is easy to exploit at large scale.
Problems like this are extremely old. The last time I saw something similar was around 2012 or 2013 with a few services; then the big boys fixed it. And here we go, 10 years later... WOW. Just WoW.
I kind of object to the phrase "zero-day vulnerability" when what they really mean is "bug in our own sloppily written code".
I think "zero-day vulnerability" is fair enough in something supplied by a vendor that can't be fixed immediately until the vendor provides a fix, but if it's their own stack they should just admit they messed up.
I fully agree - but as a realist I also see that "zero-day vulnerability" in most people's minds translates to "something you just can't defend against". Zero-days are perceived like a comet hitting someone's house: impossible to defend against.
"just admit they messed up"? Nuh, that would drive their share value down. (From their perspective) making it look like something one just can't avoid or defend against is much better and minimizing their damage.
Plus, hey, 5 dot something millions? In relation to the number of users and bots they have that's far less than 1%.
Besides the only real problem with this case is that they couldn't hush it up.
:face_palm:
The zero-day refers to how long they knew about the bug relative to how long it took to fix and patch. It's inherently a bug, they're not saying it wasn't.
It would be an asshole company to throw the developer under the bus. Developers make bugs. They often feel bad about making them. Anyone who thinks otherwise is delusional.
You're talking about the way the term has been changed into its current usage, and it's exactly that modern usage that I was saying is stupid.
Originally, it referred to how long from when the product was released to when it was exploited.
But using the term to describe bugs in your own software when you have a continuous release cycle is just an exercise in deflection, which is done because the public has a perception that "oh, it's a zero day, there's nothing they could have done".
From the article itself:
Clearly this was a long-standing bug resulting from careless development and code review practices. The code wasn't audited properly at the time it was deployed, and it remained vulnerable for months. They don't even know how many people exploited it and have their data; they only know about this because they rely on bounty programs instead of actual security auditing, and they are only admitting it now because someone is selling the data and they have no choice.
Imagine if this same bounty-program exploiter had been hired by the company and given access to the source; think how much quicker he could have found the issue. But they chose not to employ people to do this because it costs too much.
Firstly, this viewpoint is dangerous. Assuming a company has competent developers, there is generally a balance between time and cost of development which is related to both the scope and complexity of the project and its quality.
People have been conditioned to accept serious bugs in software, simply because they've been exposed to so many products where the company decided to prioritise its own profits over the quality of the product and the protection of its customers' PII.
In fact, this isn't really the fault of the developers (although obviously they are the ones who write the bugs) but of a corporate culture where careful code review is rejected in favour of a cursory glance and "it looks OK", and where meticulous attention to possible failure modes during implementation is rejected in favour of "we need this yesterday, can you just pull an all-nighter to get it done?"
I'm not saying the developers should be thrown under the bus. I'm saying the company should acknowledge responsibility for bugs like this, do a proper root cause analysis, and fix the processes that led to it, rather than just making some crappy statement that sounds like there was nothing they could do.
The really insidious part of this is that there are specific guidelines about best practices for handling PII. This data is ostensibly only collected by Twitter to "ensure your security", yet the result is that a ton of very sensitive data that they demanded from customers solely to "protect them" was then dumped into a system that clearly hadn't been properly audited for security during development, QA, or acceptance testing.