What's the most lightweight server side language for mysql server?

Comments

  • @Benjiro said:

    @Jona4s said:
    I'm surprised someone mentioned Rust.

    If no one mentions PHP, there's still hope for humanity.

    PHP is good for fast development, but to handle 100,000 req every x seconds? No. The way PHP works, it can't handle that kind of traffic (unless you want to load-balance a few hundred servers), let alone the memory needed for every PHP instance, lol. Apache is also a bad choice for this kind of workload.

    Go, Crystal, etc are much better suited for this work because of the way they work.

    Yes. And desktop computers have a monitor, mouse and keyboard.

    Also, don't forget a mouse usually has 2 buttons. They are used for clicking and double clicking.

  • A while ago I did a wrk benchmark on a Golang server and got 1M req/s (16-core machine).

    This is consistent with TechEmpower benchmarks.

    But as noted, the DB you are using is the limiting factor.
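
    For reference, a run like that is basically just wrk pointed at the listener with a thread/connection count, something along these lines (numbers and address purely illustrative):

    # 12 client threads, 400 open connections, 30-second run
    wrk -t12 -c400 -d30s http://127.0.0.1:8080/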

  • Whatever you do, make sure to stress test your solution at the full intended load before going live.

    Thanked by (2): Detruire, vimalware
  • nem Member, Host Rep

    @Benjiro said:
    PHP is good for fast development, but to handle 100,000 req every x seconds? No. The way PHP works, it can't handle that kind of traffic (unless you want to load-balance a few hundred servers), let alone the memory needed for every PHP instance, lol. Apache is also a bad choice for this kind of workload.

    The real issue is the writes / the database anyway, and potentially HTTPS with this number of connections.

    Maybe with pthreads it's attainable, but that's another nest of issues. Implement the backend in a language you're highly competent in, then refactor into something else should the need arise.

    SSL key negotiation with every client will greatly retard throughput. It's best to put haproxy in front to handle SSL termination and send the decrypted payload to the backend. Rotate in additional haproxy SSL terminators on extra IPs, round-robin, to scale up as TPS increases. Using ECDSA instead of RSA would be wise as well: RSA is faster at verifying a signature, but slower in the handshake.
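
    Roughly, the haproxy side is just a TLS frontend handing plaintext to the app; a minimal sketch (cert path and backend address are placeholders):

    frontend https_in
        bind *:443 ssl crt /etc/haproxy/certs/example.pem
        mode http
        default_backend app_servers

    backend app_servers
        mode http
        server app1 127.0.0.1:8080 check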

  • netomx Moderator, Veteran
    edited March 2019

    Php

    Thanked by (1): Amitz
  • jsg Member, Resident Benchmarker

    @netomx said:
    Php

    and pthreads! (plus some hundred GB of memory ...)

    Thanked by (1): netomx
  • More seriously, what hosting are you using for this, and what's the budget? Maybe the CDN can handle the SSL termination, for example. DigitalOcean has a load balancer product now that might also be able to do it.

  • As I said before, I am talking about the back end. The frontend isn't an issue at all.

    If MySQL is not suited for this kind of scenario, I am open to suggestions. How does MySQL compare to MongoDB (since I am going to use JS)?

    Also, if SSL is going to be such a big deal, I can just disable it. It's not like people will be sending super-important, top-secret stuff.

  • Java or Ruby

  • willie Member
    edited April 2019

    You can get a higher update rate with MongoDB if you crank down the write safety. Keep in mind that when you see a web server claiming to take 1M requests/sec, that doesn't necessarily mean a million new TCP connections per second.

    These days we usually think of a front end as meaning HTML/CSS layout, client-side JavaScript, etc. But yes, you do have to consider whatever runs your HTTP listener.

    If you use a database (Mongo, MySQL, or whatever) you want to make sure to use persistent connections rather than opening a new one for each web request (rough sketch at the end of this post). And remember those databases all communicate with the server app through sockets, so it won't be as fast as in-process memory.

    Depending on load factors it may be faster or slower to run the db on the same machine as the web app. You should probably test it both ways and see.

    Seriously I'd check into the DO load balancer whether you use SSL or not, and use a DO-scale provider. The typical small LET host might not be able to deal with such a high connection rate regardless.

    @jar do you have any advice?
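
    Here's that persistent-connection sketch in node (assuming the mysql2 package; host, credentials, and limits are made up):

    // db.js -- create the pool once at startup and reuse it for every request,
    // instead of opening a new MySQL connection per web request.
    const mysql = require('mysql2/promise');

    const pool = mysql.createPool({
      host: '127.0.0.1',
      user: 'app',
      password: 'secret',
      database: 'votes',
      connectionLimit: 50, // tune to what the DB can actually sustain
    });

    module.exports = pool;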

  • jsg Member, Resident Benchmarker
    edited April 2019

    @yokowasis said:
    As I said before, I am talking about the back end. The frontend isn't an issue at all.

    If MySQL is not suited for this kind of scenario, I am open to suggestions. How does MySQL compare to MongoDB (since I am going to use JS)?

    Also, if SSL is going to be such a big deal, I can just disable it. It's not like people will be sending super-important, top-secret stuff.

    • you can't possibly get sensible advice re. DB/tables/... without telling us more
    • disabling/not using https will allow for dramatically more req/s -but- it will also make most modern browsers complain and scream, which again drives many users away.

    I'm sorry, things are a bit more complicated than "js or go? Mysql or mongo?". And: do not expect us to put more effort/brains into responding than you invest into asking (see e.g. point 1).

    Thanked by (1): BeardyUnixGuy
  • nem Member, Host Rep

    Next up "who's going to write this for me for $50".

    Really, the best option here is to prototype some ideas, benchmark them, and see where your bottlenecks are. SSL is guaranteed to be one that you can't ignore. Having an upstream proxy terminate SSL is necessary.

    Other concerns will be how you implement whatever the hell you're striving to do. Building out horizontally makes more sense than squeezing every drop of performance out of a single machine. Unless you're on a dedicated server, I wouldn't put much faith in hitting consistent TPS numbers on that machine. Picking a solution won't do much good if you don't know how to develop an efficient solution in that medium.

    Thanked by (1): sibaper
  • jsg said: I'm sorry, things are a bit more complicated than "js or go? Mysql or mongo?"

    But... but... node.js is bad ass rock star tech!

    Thanked by (2): nem, vimalware
  • Might as well use the opportunity to learn PostgreSQL 11. It supports everything.

  • @nem said:
    Next up "who's going to write this for me for $50".

    Really, the best option here is to prototype some ideas, benchmark them, and see where your bottlenecks are. SSL is guaranteed to be one that you can't ignore. Having an upstream proxy terminate SSL is necessary.

    Other concerns will be how you implement whatever the hell you're striving to do. Building out horizontally makes more sense than squeezing every drop of performance out of a single machine. Unless you're on a dedicated server, I wouldn't put much faith in hitting consistent TPS numbers on that machine. Picking a solution won't do much good if you don't know how to develop an efficient solution in that medium.

    I am not explaining anything else because there is nothing else to explain.

    It's just a simple AJAX submit and then:

    INSERT INTO something (a_column) VALUES ('a value');

    As a matter of fact, it could easily be done with Google Forms, but the lack of customization makes me go with a custom solution.
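
    For what it's worth, the whole handler is only a few lines in node (illustrative sketch, assuming express and a shared mysql2 pool like the one sketched earlier in the thread; table and column names are placeholders):

    const express = require('express');
    const pool = require('./db'); // the shared mysql2 pool

    const app = express();
    app.use(express.json());

    // The AJAX endpoint: one parameterised INSERT per submission.
    app.post('/submit', async (req, res) => {
      try {
        await pool.query('INSERT INTO something (a_column) VALUES (?)', [req.body.value]);
        res.sendStatus(204);
      } catch (err) {
        res.sendStatus(500);
      }
    });

    app.listen(8080);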

  • yokowasis said: INSERT INTO something (a_column) VALUES ('a value');

    Doing that 100k times/sec won't be easy. Try just appending to a file. Doing that on a bunch of different servers and gathering up the results afterwards shouldn't be too bad.

  • willie said: Try just appending to a file. Doing that on a bunch of different servers and gathering up the results afterwards shouldn't be too bad.

    That is essentially what a write-ahead log (WAL) does in a database anyway. With a file, you would still need locking to prevent file corruption with multiple concurrent writers. Facebook's MySQL fork should do OK with a write-optimized storage engine.

  • Well, if you ever consider Golang: for the DB I use FastCache (in-memory, like Redis or Memcache) and BadgerDB (an LSM store, the equivalent of RocksDB).

    Thanked by (1): rincewind
  • rincewind said: With a file, you would still need locking to prevent file corruption with multiple concurrent writers.

    You would use a separate file for each writer, and merge the files afterwards. Trying to funnel all the updates through a single database will eventually hit a wall.

  • If writing to a file is better than writing to a database, how about a file-based database, such as a Node.js local-storage database engine? Maybe not an elegant solution, but in this case, as long as it works.

  • willie Member
    edited April 2019

    You want to avoid communicating with a separate database process through sockets (local storage in node might manage that), you want to avoid commits (file fsyncs) per query (just let the Linux file cache do its thing), and you preferably want to avoid indexes, seeking, etc. Just append, that's it. There's no reason to be fancy. Simplest in node might be to write out JSON records, one per line, so you can save stuff like timestamps and IP addresses for later analysis, in a flexible way that makes it a bit easier to change the format as you go along (rough sketch at the end of this post). Then in postprocessing, read the records back in with a JSON streaming library.

    If you're doing this for a company with a budget, you might try to get a more tech-heavy developer involved.
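
    Something like this is about all the "database" you need (sketch; one append-only file per process, adapt to taste):

    const fs = require('fs');

    // One append-only file per process; merge/parse them after the event.
    const out = fs.createWriteStream(`votes-${process.pid}.jsonl`, { flags: 'a' });

    function recordVote(req, choice) {
      // One JSON record per line, with timestamp and IP kept for later analysis.
      out.write(JSON.stringify({
        t: Date.now(),
        ip: req.socket.remoteAddress,
        choice,
      }) + '\n');
    }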

    Thanked by (2): vimalware, uptime
  • For small entries, an embedded database would outperform independent files due to fewer system calls (https://www.sqlite.org/fasterthanfs.html)

    RocksDB, LMDB, Badger, and SQLite (in WAL mode) are all good options for embedded DBs for appending rows.
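
    For example, SQLite's append path is tiny (sketch, assuming the better-sqlite3 package; schema is made up):

    const Database = require('better-sqlite3');

    const db = new Database('votes.db');
    db.pragma('journal_mode = WAL'); // write-ahead logging, append-friendly
    db.exec('CREATE TABLE IF NOT EXISTS votes (choice TEXT, ts INTEGER)');

    // One prepared INSERT per request, no separate DB process or socket hop.
    const insert = db.prepare('INSERT INTO votes (choice, ts) VALUES (?, ?)');
    insert.run('vanilla', Date.now());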

    Thanked by (1): vimalware
  • That's making 100,000 files! My suggestion was 1 file per process. You'd use 1 process per server core. Buffering in the stdio library should eliminate a lot of system calls (I hope node uses it or something similar). Or you might be able to save some more using mmap. If you use mmap, it might help to pre-warm the file cache by writing a few megabytes of nulls to the file. This all presumes you really do want to retain some data from each request.

    Actually an even lighter approach might be to have your script do nothing at all. Just save the httpd logs which should be configured to record the query parameters for each http hit. Then make sure to use GET rather than POST in your voting form, so that the person's vote appears as a query parameter. Presto, no database, no append-only file, no web app, no nothing. Just send back a static page thanking the person for voting, and parse the logs later.

  • Even with one file per core, there is still a question of impedance matching, i.e., incoming connections can be handled much faster than writes to the SSD. So ideally you want a queue into which writes are pushed and a dedicated thread for flushing the queue to disk (rough sketch at the end of this post). There is also the question of blocking versus non-blocking I/O, fair queuing, and thread synchronization.

    The point is that existing key-value stores and DBs have already put in the effort to solve these problems efficiently and provide a simple API. Maybe a custom solution can outperform them, but I would think twice and benchmark **a lot**. BTW, LMDB uses mmap; I don't know about RocksDB, but they are all plenty fast.

    The biggest latency issue might be the HTTP session creation itself, but maybe HTTP/2 with TLS 1.3 and early-data handshakes would make it competitive with HTTP/1.1. Centminmod's install scripts should help with that.
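
    The queue idea in node terms, where the "dedicated thread" is really just a periodic flusher (illustrative sketch):

    const fs = require('fs');
    const out = fs.createWriteStream('votes.jsonl', { flags: 'a' });

    const queue = [];

    // Request handlers only push; they never touch the disk directly.
    function enqueue(record) {
      queue.push(JSON.stringify(record));
    }

    // A single flusher drains the queue in batches, decoupling connection
    // handling from SSD write latency.
    setInterval(() => {
      if (queue.length === 0) return;
      out.write(queue.join('\n') + '\n');
      queue.length = 0;
    }, 100);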

    Thanked by (2): vimalware, uptime
  • rincewind said: So ideally you want a queue into which writes are pushed and a dedicated thread for flushing the queue to disk.

    Yes, this is how logging usually works. Your points are all well taken, and benchmarks are important. Still, having an in-process DB rather than one that requires context switches for every query can only help, other things being equal. And if you can aggregate multiple user actions (votes) into a single disk/DB update, that helps too.

    If each of these votes uses 250 bytes of IP traffic (counting everything: TCP setup, HTTP headers...), that's about 200 Mbit/sec of network bandwidth, and that is probably a low estimate. I wish the OP good luck with their budget even without knowing what it is.

  • datanoise Member
    edited April 2019

    Maybe you already know this, but I'd recommend checking "Using hashes to abstract a very memory efficient plain key-value store on top of Redis" (https://redis.io/topics/memory-optimization). Depending on what kind of data you store, it can end up using less RAM than what MySQL indexes would have needed.
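
    The trick from that page, roughly, in node terms (sketch, assuming the ioredis package; the bucket size is arbitrary):

    const Redis = require('ioredis');
    const redis = new Redis();

    // Group many small counters into hashes instead of one key per item,
    // so redis can keep them in its compact hash encoding.
    async function countVote(optionId) {
      const bucket = `votes:${Math.floor(optionId / 1000)}`; // ~1000 fields per hash
      await redis.hincrby(bucket, optionId % 1000, 1);
    }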

  • Actually if you're just trying to estimate the relative popularity of different choices (chocolate vs vanilla or whatever) and you're getting millions of votes, a brutally simple thing you could do is just throw away 99% of the votes and record the other 1%. They should have the same distribution within confidence intervals you can figure out with standard statistical methods.
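
    In code that's nothing more than (sketch):

    // Keep roughly 1% of votes; scale the counts back up afterwards.
    function maybeRecord(vote) {
      if (Math.random() < 0.01) {
        recordVote(vote); // whatever storage you ended up with
      }
    }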

  • @willie said:
    Actually an even lighter approach might be to have your script do nothing at all. Just save the httpd logs which should be configured to record the query parameters for each http hit. Then make sure to use GET rather than POST in your voting form, so that the person's vote appears as a query parameter. Presto, no database, no append-only file, no web app, no nothing. Just send back a static page thanking the person for voting, and parse the logs later.

    Wait, is this even possible? So all I need to do is parse the log file? Can anyone comment on this? Is it really feasible?

  • Maybe. If it's something like POST /vote?id=1, then you could parse the access.log. In other words, if the data is in the URL and not in the body. If you are behind Cloudflare just make sure it doesn't get cached on the CDN. This is assuming that your access logs are enabled.
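
    Parsing it afterwards is a few lines (sketch, assuming the id ends up in the request line of a common/combined log format):

    const fs = require('fs');
    const readline = require('readline');

    // Tally /vote?id=... hits straight out of the access log.
    async function tally(path) {
      const counts = {};
      const rl = readline.createInterface({ input: fs.createReadStream(path) });
      for await (const line of rl) {
        const m = line.match(/\/vote\?id=(\d+)/);
        if (m) counts[m[1]] = (counts[m[1]] || 0) + 1;
      }
      return counts;
    }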

  • rincewind said: Maybe. If it's something like POST /vote?id=1,

    You want to use GET rather than POST to ensure that the query looks like that. Good point about the CDN.

    It occurs to me you could possibly mix or randomize the form targets (x1.yourdomain.com, x2.yourdomain.com, ...) to spread the queries across different hosts, instead of sending everything through a single URL and using a load balancer or round-robin DNS or whatever.

This discussion has been closed.