
tcpsock:setkeepalive #664


Closed
ghost opened this issue Jan 28, 2016 · 19 comments


@ghost

ghost commented Jan 28, 2016

Hello, I'm trying to work with WebSockets and want to set an unlimited keepalive on the socket. In nginx.conf, `lua_socket_keepalive_timeout 180m;` has no effect. In code, sock:settimeout works well, but sock:setkeepalive is not a method of sock, i.e. sock:setkeepalive == nil. Why? Please also document the maximum timeout. Thanks for the great work!

@agentzh
Member

agentzh commented Jan 28, 2016

@Romaboy Please ensure your ngx_lua module or OpenResty is up to date. If it is indeed the latest, please provide a self-contained, minimal example with which we can easily reproduce the issue on our side. Thank you.

@ghost
Author

ghost commented Jan 29, 2016

I updated everything this morning, so I'm very, very happy that I can now control pub/sub with a semaphore. A 3-hour timeout is quite enough. But setkeepalive still hasn't appeared. In `content_by_lua` (MoonScript):

```moonscript
print require('inspect') ngx.req.socket true
```

describes this table as:

```
{
  <userdata 1>,
  <metatable> = <1>{
    __index = <table 1>,
    receive = <function 1>,
    receiveuntil = <function 2>,
    send = <function 3>,
    settimeout = <function 4>
  }
}
```

@agentzh
Member

agentzh commented Jan 29, 2016

@Romaboy Okay, I see what is going on here. There is no setkeepalive for downstream cosockets. The NGINX core handles downstream keepalive according to the HTTP protocol automatically.
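For contrast, setkeepalive does exist on upstream cosockets created via ngx.socket.tcp. A minimal sketch, assuming a hypothetical backend on 127.0.0.1:6379 and arbitrary pool settings (both are made-up values for illustration):

```lua
-- content_by_lua_block: upstream cosocket with connection pooling.
-- The backend address and pool numbers below are illustrative only.
local sock = ngx.socket.tcp()
local ok, err = sock:connect("127.0.0.1", 6379)
if not ok then
    ngx.log(ngx.ERR, "connect failed: ", err)
    return
end

-- ... talk to the backend here ...

-- Put the connection into the per-worker pool instead of closing it:
-- up to 60000 ms idle time, at most 10 pooled connections.
local ok, err = sock:setkeepalive(60000, 10)
if not ok then
    ngx.log(ngx.ERR, "setkeepalive failed: ", err)
end
```

The downstream socket returned by ngx.req.socket has no such method because its lifetime is tied to the current request and NGINX itself manages client-side keepalive.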

@ghost ghost closed this as completed Jan 30, 2016
@ghost ghost reopened this Jan 30, 2016
@ghost
Author

ghost commented Jan 30, 2016

@agentzh A user subscribes to some channel, and a new message in the channel may appear at any moment, maybe after an hour, maybe not at all, so how is it possible to open a suitable "upstream" socket with setkeepalive? It would be good to write a few words in the docs about "downstream" and "upstream", since it's a little confusing.

@ghost
Author

ghost commented Jan 30, 2016

No, maybe the problem is just that I didn't read carefully enough, or that my English understanding is bad; there is something about downstream in the docs.

@ghost ghost closed this as completed Jan 30, 2016
@agentzh
Member

agentzh commented Jan 31, 2016

@Romaboy These are nginx terms: "downstream" means the direction towards the client side, while "upstream" means the direction towards the backend (or origin) server.

@ghost
Author

ghost commented May 18, 2016

@agentzh I've been thinking for a long time about how to make WebSockets usable across several workers, and now I have an idea! What if we take data from the downstream connection, such as the user's IP and other necessary data, convert it into a string (JSON, or just comma-separated) and save it into a shared dict; then, in another request, when some DB row is added or changed, check for subscribers in that shared dict and, for each of them, create an ngx.socket.tcp (that is upstream, as I understand it) to the saved IPs and send the changed rows to the subscribers? Do you think this is possible? In other words, is it possible to populate an ngx.socket.tcp with data from downstream using only Lua primitives?

@ghost ghost reopened this May 18, 2016
@agentzh
Member

agentzh commented May 18, 2016

@ghost
Author

ghost commented May 18, 2016

@agentzh Yes, I made a chat with semaphores; that was a small victory, but I want more: such a chat won't run with multiple workers, and if there are eight CPUs on the server only one would be used, which is confusing... Now I see only two ways: 1) somehow serialize the connection into a string, save it to a dict, and then deserialize it back into a new connection in other requests. But that seems impossible.
2) Save the changed data, with its channel, to a dict and then somehow ask all workers to read it and send it to their subscribers. Maybe it's possible to iterate over workers? I don't understand why ngx.worker.pid, ngx.worker.count, and ngx.worker.id exist if there is no way to use them for iteration. Or switching. Or tying one location to one worker. I think a bridge between workers would be a very important feature for the whole project.

@agentzh
Member

agentzh commented May 18, 2016

@Romaboy There are two approaches here for inter-worker synchronization:

  1. Use shared memory queues to dispatch messages across nginx workers: feature: add api in shdict: lpush, lpop, rpush, rpop, llen #586

     This feature is expected to be merged into the mainline version soon. Each worker can have a background light thread (initiated by init_worker_by_lua + ngx.timer.at) that repeatedly dispatches messages between the current worker and the shared queues.

  2. Use the cosocket listen() and accept() APIs to let each worker listen on its own unix domain socket, so that the workers can talk to each other directly through such local socket pairs. I know a developer who is actively working on the listen() and accept() methods right now.

Hope it helps.
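Approach 1 can be sketched roughly as follows, assuming the shdict list API from #586 (lpush/rpop), a `lua_shared_dict queue 10m;` directive in nginx.conf, and a hypothetical application callback my_handle_message — all names here are illustrative:

```lua
-- init_worker_by_lua_block: a per-worker background poller.
-- Each worker drains its own list key in the shared dict and
-- re-arms itself with ngx.timer.at.
local queue = ngx.shared.queue

local function poll(premature)
    if premature then
        return  -- nginx is shutting down; stop re-arming
    end

    -- Drain every message currently queued for this worker.
    while true do
        local msg, err = queue:rpop("worker:" .. ngx.worker.id())
        if not msg then
            break
        end
        my_handle_message(msg)  -- hypothetical consumer
    end

    -- Re-arm the timer; 0.1 s is an arbitrary polling interval.
    local ok, err = ngx.timer.at(0.1, poll)
    if not ok then
        ngx.log(ngx.ERR, "failed to re-arm poll timer: ", err)
    end
end

ngx.timer.at(0, poll)
```

A publisher in any worker then only needs `queue:lpush("worker:" .. target_id, msg)` to hand a message to the target worker's poller.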

@ghost
Author

ghost commented May 18, 2016

@agentzh

Each worker can have a background light thread (initiated by init_worker_by_lua + ngx.timer.at) that repeatedly dispatch messages between the current worker and the shared queues.

`while true do *check dict* end`
An infinite loop, okay; best solution for now.

@ghost ghost closed this as completed May 18, 2016
@agentzh
Member

agentzh commented May 18, 2016

@Romaboy Hopefully you have some kind of ngx.sleep with adaptive sleeping intervals inside that infinite loop to avoid exhausting your CPU resources :)
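The adaptive-interval idea can be sketched like this, inside a single long-running light thread; check_dict and handle are hypothetical application functions, and the 10 ms / 1 s bounds are arbitrary:

```lua
-- Body of a light thread started from init_worker_by_lua:
-- sleep briefly while messages keep arriving, and back off
-- exponentially when the queue is idle, so an empty queue
-- does not spin the CPU.
local delay = 0.01                      -- start at 10 ms

while true do
    local msg = check_dict()            -- hypothetical: next message or nil
    if msg then
        handle(msg)                     -- hypothetical consumer
        delay = 0.01                    -- reset backoff on activity
    else
        delay = math.min(delay * 2, 1)  -- double the interval, cap at 1 s
    end
    ngx.sleep(delay)                    -- yields the thread; non-blocking
end
```

ngx.sleep yields the current light thread rather than blocking the worker process, which is what makes a loop like this acceptable inside nginx.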

@ghost
Author

ghost commented May 23, 2016

Can you please help me figure this out?
I want to write a C patch, or maybe just a module with a helper function; for now that function lives in ngx_http_lua_worker.c.
The logic is as follows: each worker saves its context (or something similar) in a global struct; as a temporary measure I added a lua_State into the ngx_process_t definition. That variable will be accessible in the per-worker loop.
But saving the lua_State will not help, because Lua states are created chaotically in many places, and I can't simply save a Lua state somewhere and then use it in another request. What variables can I globally store in nginx, so that I could construct a temporary state variable from them and reach global Lua variables?
```c
static int
ngx_http_lua_ngx_worker_my_function(lua_State *L)
{
    lua_getglobal(L, "print");  /* function to be called */
    lua_getglobal(L, "var");    /* push 1st argument */
    lua_pcall(L, 1, 1, 0);      /* call print(var) */
    return 1;
}
```
If I create a global Lua variable var and then call that function, it prints it. But only on the first request; on subsequent requests var is nil, although var hasn't disappeared from Lua.
Please help me understand; I have never seen such complicated code, but I really want to make OpenResty a little bit better.

@ghost ghost reopened this May 23, 2016
@agentzh
Member

agentzh commented May 23, 2016

@Romaboy I don't quite follow you. But I think you may find the following links interesting:

https://github.com/openresty/lua-nginx-module#data-sharing-within-an-nginx-worker

https://github.com/openresty/lua-nginx-module#lua_shared_dict

https://github.com/openresty/lua-resty-lrucache

#586

Please ensure you have read these carefully before trying to hack on the OpenResty internals.

@ghost
Author

ghost commented May 23, 2016

@agentzh "Note however that Lua global variables (note, not module-level variables) WILL NOT persist between requests because of the one-coroutine-per-request isolation design."
That is exactly why I'm trying to hack on the OpenResty internals.
One-coroutine-per-request isolation design: so each request creates a new lua_State and fills it with data that was stored before. Is it impossible to create a lua_State and fill it with data that was stored by another worker's request?
ngx_processes is an excellent example of a global variable; it prompted the idea that other data could be stored globally too.

@agentzh
Member

agentzh commented May 23, 2016

@Romaboy Oh, and this link too:

https://github.com/openresty/lua-resty-core/blob/master/lib/ngx/semaphore.md

Do not use Lua global variables; they are evil. The documentation already explains pretty well why Lua global variables are bad. Use proper mechanisms like Lua modules and Lua shared dictionaries. Again: please read through the links I've given above. The Lua global variable approach is a dead end.
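The per-worker data sharing that the first link describes can be sketched as a plain Lua module; the file name mydata.lua and the key/value API here are hypothetical:

```lua
-- mydata.lua: a plain Lua module, loaded once per worker by
-- require(). Its local table persists across requests within
-- that worker's Lua VM, unlike true Lua globals, which are
-- isolated per request coroutine.
local _M = {}

local cache = {}  -- lives for the lifetime of the worker process

function _M.set(key, value)
    cache[key] = value
end

function _M.get(key)
    return cache[key]
end

return _M
```

For sharing across workers, a lua_shared_dict remains the supported mechanism, since each worker process has its own Lua VM and module instances are not shared between them.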

@ghost
Author

ghost commented May 23, 2016

@agentzh I'm trying to write a function that will fire events for each worker. The problem is not about Lua global variables; at this stage they are just for testing. And it seems that global variables are the easiest way to store data between requests. Or even the only way.
In my function in ngx_http_lua_worker.c I add:

```c
printf("%p\n", (void *) L);  /* %p, since L is a pointer */
```

And I also add that line after each call to ngx_http_lua_get_lua_vm(...) and then after each call to ngx_http_get_module_loc_conf(...).

ngx_http_lua_ngx_worker_my_function(lua_State *L) accepts a lua_State. I can't find the place where this state is created! :c
The state pointer printed in this function is always different from all the states I found; I can't tell where it comes from.
I need the place in the code where that state is created, so that I can write a loop over the workers that creates the same lua_State and calls a global function per worker. I really need it. Please.

@agentzh
Member

agentzh commented May 23, 2016

@Romaboy You're not listening to me and I'm not following you. I'm closing this ticket. Mind you, this ticket's title is "tcpsock:setkeepalive". We should not continue off-topic discussions here.

@agentzh agentzh closed this as completed May 23, 2016
@ghost
Author

ghost commented May 24, 2016

@agentzh Sorry, I didn't want to create more tickets. I know this issue is not about OpenResty bugs; I just hoped for help in developing an event bridge between workers. Also, Google Groups are awful, but OK, I will ask there.
