Description
Hi there,
I'm thinking about the best way to handle a large number of connections using the fewest machines possible (probably 2 to 4 beefy servers), and I wonder if folks here have insights. With HTTP I've been using gunicorn, which makes it very easy to use many cores, but with asyncio I'm a bit lost on how to use all my cores. My application will likely just push data into Redis and do little CPU work, but with many connected clients, CPU might become an issue.
I thought about running multiple servers on multiple ports and using nginx, serving on port 80, as a load balancer + proxy_pass (is that even possible?) to reach multiple daemons running the websockets app. Each machine would then run multiple services (each under systemd), with each proxy_pass talking to one of them, something like this:
```python
import asyncio
import websockets

start_server = websockets.serve(hello, 'localhost', 8000)  # for 'proxy_pass A'
start_server = websockets.serve(hello, 'localhost', 8001)  # for 'proxy_pass B'
start_server = websockets.serve(hello, 'localhost', 8002)  # for 'proxy_pass C', etc.
```
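To make the one-process-per-port idea concrete, here's a minimal sketch using only the standard library. It's an assumption-laden stand-in: a plain asyncio TCP echo takes the place of the real WebSocket handler, and taking the port from the command line is my invention, but the process layout is the same.

```python
# Sketch: one server process per port, each run as its own systemd
# service behind one nginx proxy_pass. A plain asyncio TCP echo
# stands in for the real WebSocket handler here.
import asyncio
import sys

async def handle(reader, writer):
    # The real app would speak the WebSocket protocol and push
    # incoming data into Redis; this just echoes one line back.
    data = await reader.readline()
    writer.write(data)
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main(port):
    server = await asyncio.start_server(handle, "localhost", port)
    async with server:
        await server.serve_forever()

# Each systemd unit would start one process on its own port, e.g.:
#   python server.py 8000
# asyncio.run(main(int(sys.argv[1])))
```

With N processes (one per core), the kernel schedules each event loop on its own core, which sidesteps asyncio's single-threaded limitation.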
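For the nginx side, load-balancing WebSocket connections with proxy_pass is indeed possible: an `upstream` block lists the backends, and the `Upgrade`/`Connection` headers must be forwarded for the WebSocket handshake to go through. A sketch, assuming the three local backends above:

```nginx
upstream websocket_backends {
    server 127.0.0.1:8000;  # 'proxy_pass A'
    server 127.0.0.1:8001;  # 'proxy_pass B'
    server 127.0.0.1:8002;  # 'proxy_pass C'
}

server {
    listen 80;

    location / {
        proxy_pass http://websocket_backends;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```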
I know it's a very vague question and not really an issue, but in the absence of a better place to discuss this... here we go!
Thanks,
- Benjamin