How to host custom backend

Hi, I want to host some code that should run when a request comes in and keep running for a bit, but only one instance should run at a time. I am thinking about using a Gullo NAT VPS or NearlyFreeSpeech's hosting. The code will be a Go binary using WebSockets; there will be virtually no traffic on the server, it just needs to run. I am optimizing for cost because I am in Türkiye.

Comments

  • Sounds interesting. Tell us more...

  • ralf Member

    If you're writing your own code, actually doing the "only handle one request at a time" is by far the simplest case.

    Essentially, you call listen() to bind a socket to the port address, then you call accept() which blocks until a connection is received and then returns you another file descriptor for that single connection. If you process that request in its entirety before calling accept() again, other concurrent requests will be refused. So, having a single request active at a time is very easy.

    Actually, you can specify a backlog in the listen() call, which allows that many connections to be waiting to connect instead of just being refused, but if you don't call accept() until you've handled the first request, you're still only ever processing one request at once.

    Normally, this is extended in one of two ways. The simplest and most traditional way was to call accept() to establish the connection and then call fork(); the child process handles the request while the parent process is free to call accept() again. The danger of this, of course, is that it's hard for the child processes to co-operate (unless you created some shared memory pre-fork) and you can easily be DDoSed, since you're creating a new process per connection.

    As soon as you want to try managing that state, it becomes easier to move to a single-threaded model, where you can use select() to see if your listening socket has anyone waiting and also check whether your already-connected sockets have any data. It's more effort to write this kind of system, but it scales a lot more easily and it's easier to share data between connections. You can also throw threads into the mix too.

    However, all of that was just to say that the simplest and most traditional way of handling this does exactly what you want. You'll probably find that most frameworks are set up to handle multiple connections, so, counter-intuitively, it might be simpler to do it yourself if you're only handling HTTP, not HTTPS. Personally, I would do this with an HTTP socket that only listens on localhost and then stick haproxy/nginx/apache as a reverse proxy in front of it to handle all the SSL stuff. That's probably overkill for your application, but simplifying your code so you don't have to worry about SSL is probably worth it.

    Thanked by 1: jetchirag
  • @ralf said:
    If you're writing your own code, actually doing the "only handle one request at a time" is by far the simplest case.

    [...]

    Hi. I wasn't asking about application development; https://github.com/kedihacker/webmasterserver is my code, and it works. I will optimize how it checks for new WebSockets so it can accept sockets dynamically. I am thinking of putting an nginx or Caddy proxy in front of it, but my main question is how I would host this application. I'm considering the Heroku free tier, NFSN, or Gullo's NAT VPS. The static frontend would be hosted on GitHub Pages, and for routing to the backend I will get a dynamic DNS address, probably from afraid.org. It just needs to run for 1-2 minutes after a request, but only one instance should run.
    Handling one request at a time won't be possible.
    It will be a messaging server. If any of you have questions, you can ask me. I am making a project for WebRTC file transfer, because other programs can't transfer a large number of files totalling 30GB. I ran into this problem when I tried transferring my mom's photos off her phone: the USB cable would break the connection and I had to start again. There is sendight.ml, but it can't send text easily, so I would address these problems in my project. The backend would just act as a message broker. Which hosting should I use? (I don't want to be surprised by a big bill because someone might abuse the server.)

  • sanvit Member

    tl;dr You have code to run, but need a server to run it cheaply (or for free), correct?

    Try Oracle Cloud or even AWS free tier

  • 0xbkt Member
    edited July 2022

    Just push your code to Heroku. Bandwidth is soft limited to 2TB/mo/app. Free tier should suffice.

  • @0xbkt said:
    Just push your code to Heroku. Bandwidth is soft limited to 2TB/mo/app. Free tier should suffice.

    Yes, that's what I'm going to do. Thanks, everybody.

  • You could try hax.co.id; they have free IPv6 VPSes.

  • @shelfchair said:
    You could try hax.co.id they have free IPv6 VPS.

    In Turkey IPv6 is rare; maybe I could mess with Cloudflare, but Heroku seems better with its free tier. After that I'll get some VPS.

  • cadddr Member

    You can also try fly.io. You do need a credit card, but you get 3 small VMs and a decent amount of traffic for free (good for small use cases and testing).

  • @cadddr said:
    You can also try fly.io. You do need a credit card, but you get 3 small VMs and a decent amount of traffic for free (good for small use cases and testing).

    As of today, the free tier without a payment method provided is:
    2 shared-cpu-1x VMs with 256MB RAM + 1GB storage volume
