New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
It's a known issue that the 32-bit build of MongoDB only works with data sets under 2GB. You need 64-bit, and 64-bit is the norm these days anyway. I see no issue.
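For context, a back-of-the-envelope sketch of where the limit comes from (my own reasoning, not quoted from the MongoDB docs): MongoDB of this era memory-maps its data files, and a 32-bit process only has 2^32 bytes of virtual address space, of which roughly half is usable for mapped data:

```python
# Why ~2GB on 32-bit (assumption: the storage engine memory-maps data files
# into the process address space, as MongoDB's MMAP-based engine did).
address_space = 2 ** 32              # total 32-bit virtual address space, 4 GiB
reserved = address_space // 2        # rough share for kernel, code, heap, journal
mappable = address_space - reserved  # what's left to map data files + indexes
print(mappable // 2 ** 30, "GiB")    # -> 2 GiB
```

The "roughly half" figure is an approximation; the exact usable share depends on the OS kernel split, but it lands at about 2GB either way.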
The author's expectations are questionable. He based things on assumptions without reading.
Eh.
I started writing something using MongoDB a few days ago. I read several guides and articles about it to determine whether it was a good fit or not. Not once was the 32-bit limit mentioned. I was not even aware of it until I read this thread just now.
"64-bit is the norm these days" is complete nonsense, especially on a LEB forum. It's common to run a 32-bit version of an OS on a LEB for the simple reason that it requires less RAM to run the same stuff, and RAM is quite often tight on low-end VPSes, so every bit of saving is welcome.
The problem is that this isn't explicitly documented where it should be. The frontpage of the MongoDB site tells me "Store files of any size without complicating your stack.", which makes no mention, not even a footnote, of a limitation on 32-bit systems. Once you click through via the documentation on the MongoDB site, to the database section, to the Python driver docs, you still haven't run across documentation of this behaviour even once. A data-losing pitfall like this should be documented right at the top of whatever official documentation you start reading, and have at the very least a footnote in the features list. It is essential information for a developer.
It's not that visible, but it is mentioned here: http://www.mongodb.org/display/DOCS/Developer+Zone#DeveloperZone-MongoDBOperationalOverview
It's also there on the download page: http://www.mongodb.org/downloads#32-bit-limit
Which is not a page you end up on when visiting the documentation through the main site. And even if you did, it's such a small mention in a blob of text that you're almost guaranteed to overlook it. It should at the very least be emphasized.
I install software on a server via the package manager; I never even visited that page.
I understand your point. It's okay to make assumptions if you're just using it for fun, but when you need it for a serious project, a developer needs to dig deeper into the software, especially for emerging tech such as NoSQL.
I'd consider a few hours of reading to be enough to get an idea of the basic pitfalls of something, yet I was not aware of this issue. I don't care how 'revolutionary' or 'emerging' or 'different' something is; something that can cause data loss should be documented at every single relevant place, no exceptions. The ball is completely in MongoDB's court here. How is a developer supposed to know that he should keep reading until he finds a reference to something that silently discards his data? The whole purpose of MongoDB is to store data; how is it reasonable to expect it to do the opposite in a virtually undocumented situation, no matter how serious your project is?
It's open source. Be a contributor and fix the problem?
@joepie91 CouchDB... or just a plain old RDBMS: PostgreSQL, Oracle, MariaDB, MySQL...
@jcaleb The purpose of this thread is to make people aware of this.
Agree
Mind telling me where the documentation repository is then?
EDIT: And even if the documentation is changed, that doesn't fix the core of this problem - the fact that the MongoDB developers apparently have no desire to properly document this, and probably won't have that desire for any future pitfalls either.
Not visible? Does nobody read their logs? Every time I start mongod on a 32-bit arch I get this:
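(The original log excerpt is missing from the archived thread. For reference, the 32-bit startup warning reads roughly like this; paraphrased from memory, and the exact wording varies by MongoDB version:)

```
** NOTE: when using MongoDB 32 bit, you are limited to about
**       2 gigabytes of data
```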
I'm also sure his logs are full of errors saying the writes are failing because of the 2GB limit. He's fundamentally missed the point that Mongo writes are not guaranteed unless you ask for them to be: you need to specify a write concern if you care about the result of the actual write. There's no error to report back, because the write has been queued successfully, and that's all that's implied by the function returning.
I strongly suspect (I'm not a Python person) that if the original author had specified his write concern correctly, he would have found out about the failed writes.
It's their software; the only thing we can do is not use it if we're not happy. Although I study NoSQL, I have no desire to use it in anything serious, as I feel I'd need a lot of time to understand it well before I could trust it. Being conservative, I prefer SQL databases 100% of the time.
EDIT:
Just my opinion: if you really want to use a particular piece of open source software, you are the one who needs to adjust to the software, not the author to you.