
Multiple parallel subrequests #7


Closed

bengrimm opened this issue Nov 14, 2010 · 3 comments

@bengrimm

Right now location capture is done in the background, but you're limited to issuing one request at a time. If you need to pull data from multiple sources, your page is as slow as the sum of all the subrequest latencies; if the requests could be issued in parallel, that would drop to just the slowest one.

If a table were passed to location.capture like this, it would respond with a matching table of res objects in the same order.

res = ngx.location.capture({ "http://this/...", "http://that/...", "http://..." })
ngx.print( res[1].body, res[2].body, res[3].body )

Or it may be easier to use a separate function (e.g. capture_multi).
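
For comparison, here is a sketch of the serialized status quo (note that ngx.location.capture dispatches to internal locations such as /this, not full URLs; the location names are placeholders):

    -- issued one at a time: total latency ≈ sum of the three subrequests
    local r1 = ngx.location.capture("/this")
    local r2 = ngx.location.capture("/that")
    local r3 = ngx.location.capture("/other")
    ngx.print(r1.body, r2.body, r3.body)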

@agentzh
Member

agentzh commented Nov 15, 2010

Right, it's been on our TODO list :) There indeed will be a .capture_multi method. Do you have the tuits to provide a patch? ;)

Thanks!

@bengrimm
Author

Perhaps. I'll see what I can do!

@agentzh
Member

agentzh commented Feb 5, 2011

I've implemented ngx.location.capture_multi() in git HEAD. See README (and related test cases in the t/027-multi-capture.t file) for the documentation of this new function. Please try it out to see if it works for you ;)
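
A minimal usage sketch based on the documented interface (shown with the modern content_by_lua_block directive; the location names are placeholders): ngx.location.capture_multi takes a table of {uri, options?} records, issues the subrequests concurrently, and returns one response object per record.

    location = /combined {
        content_by_lua_block {
            -- all three subrequests are dispatched in parallel, so the
            -- total latency is roughly that of the slowest one
            local res1, res2, res3 = ngx.location.capture_multi{
                { "/this" },
                { "/that", { args = "k=v" } },
                { "/other" },
            }
            ngx.print(res1.body, res2.body, res3.body)
        }
    }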

Enjoy!

cs0604 mentioned this issue Sep 6, 2013
thibaultcha added a commit to thibaultcha/lua-nginx-module that referenced this issue Feb 15, 2019
This issue appeared in our EC2 test cluster, which compiles Nginx with
`-DNGX_LUA_USE_ASSERT` and `-DNGX_LUA_ABORT_AT_PANIC`.

The lua-resty-redis test:

    === TEST 1: github issue #108: ngx.locaiton.capture + redis.set_keepalive

in t/bugs.t [1] would produce core dumps in the check leak testing mode.

The backtrace for these core dumps was:

    #0  0x00007fd417bee277 in raise () from /lib64/libc.so.6
    #1  0x00007fd417bef968 in abort () from /lib64/libc.so.6
    #2  0x00007fd417be7096 in __assert_fail_base () from /lib64/libc.so.6
    #3  0x00007fd417be7142 in __assert_fail () from /lib64/libc.so.6
    #4  0x000000000050d227 in ngx_http_lua_socket_tcp_resume_conn_op (spool=c/ngx_http_lua_socket_tcp.c:3963
    #5  0x000000000050e51a in ngx_http_lua_socket_tcp_finalize (r=r@entry=0x5628) at ../../src/ngx_http_lua_socket_tcp.c:4195
    #6  0x000000000050e570 in ngx_http_lua_socket_tcp_cleanup (data=0x7fd419p_lua_socket_tcp.c:3755
    #7  0x0000000000463aa5 in ngx_http_free_request (r=r@entry=0xbfaec0, rc=http_request.c:3508
    ...

The crash was caused by the following assertion in ngx_http_lua_socket_tcp.c,
which is compiled in under `NGX_DEBUG`:

    #if (NGX_DEBUG)
        ngx_http_lua_assert(spool->connections >= 0);

Thanks to Mozilla's rr, a recorded session showed that
`spool->connections` was `-1`.

Unfortunately, this case does not seem reproducible in a self-contained test,
since the failure happens during request cleanup (`ngx_http_free_request`).
Here is an explanation of the race:

    -- thread 1
    local sock = ngx.socket.tcp()
    sock:connect()
    sock:setkeepalive() -- pool created, connections: 1

        -- thread 2
        local sock = ngx.socket.tcp()
        sock:connect() -- from pool, connections: 1

    -- thread 1
    -- sock from thread 1 idle timeout, closes, and calls
    -- ngx_http_lua_socket_tcp_finalize, connections: 0

        -- thread 2
        sock:setkeepalive() -- connections: -1
        -- ngx_http_lua_socket_tcp_resume_conn_op gets called, assertion fails

In order to avoid this race condition, we must determine whether the
socket pool exists or not, not from the
`ngx_http_lua_socket_tcp_upstream` struct, but from the Lua Registry.
This way, when thread 2's socket enters the keepalive state, it will
respect the previous call to `ngx_http_lua_socket_free_pool` (which
unset the pool from the registry).

[1]: https://github.com/openresty/lua-resty-redis/blob/master/t/bugs.t
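
To make the registry-based fix concrete, here is a toy Lua model of the idea (all names are illustrative; the actual fix lives in the module's C code): pool existence is checked in one authoritative registry table, so a keepalive entered after the pool was freed starts a fresh pool instead of decrementing a stale counter.

    -- toy model only: "registry" stands in for the Lua registry; these
    -- helpers are hypothetical, not the module's real C functions
    local registry = {}

    local function free_pool(key)
        registry[key] = nil   -- mirrors ngx_http_lua_socket_free_pool
    end

    local function enter_keepalive(key)
        local pool = registry[key]
        if not pool then
            -- the pool was freed concurrently: create a fresh one
            -- instead of trusting a stale per-upstream pointer
            pool = { connections = 0 }
            registry[key] = pool
        end
        pool.connections = pool.connections + 1
        return pool
    end
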
thibaultcha added a commit to thibaultcha/lua-nginx-module that referenced this issue Feb 16, 2019
The commit message repeats the analysis above, but replaces the "not reproducible" note with a reproducible test case:

    local sock1 = ngx.socket.tcp()
    local sock2 = ngx.socket.tcp()

    sock1:connect()
    sock2:connect()

    sock1:setkeepalive() -- pool created, connections: 1
    sock2:setkeepalive() -- connections: 1

    sock1:connect() -- connections: 1
    sock2:connect() -- connections: 1

    sock1:close() -- connections: 0
    sock2:close() -- connections: -1
    -- ngx_http_lua_socket_tcp_resume_conn_op gets called, assertion fails

zhuizhuhaomeng pushed a commit that referenced this issue Oct 19, 2021
The commit message is an AddressSanitizer report of overlapping source and destination ranges passed to memcpy:

    ==70603==ERROR: AddressSanitizer: memcpy-param-overlap: memory ranges [0x621000001500,0x621000002181) and [0x62100000187f,0x621000002500) overlap
    #0 0x7f3db1899ffe  (/lib64/libasan.so.5+0x99ffe)
    #1 0x9da926  (/usr/local/openresty-debug/nginx/sbin/nginx+0x9da926)
    #2 0x9dd1a1  (/usr/local/openresty-debug/nginx/sbin/nginx+0x9dd1a1)
    #3 0x4c89c6  (/usr/local/openresty-debug/nginx/sbin/nginx+0x4c89c6)
    #4 0x5d1e4e  (/usr/local/openresty-debug/nginx/sbin/nginx+0x5d1e4e)
    #5 0x4c89c6  (/usr/local/openresty-debug/nginx/sbin/nginx+0x4c89c6)
    #6 0x5b8583  (/usr/local/openresty-debug/nginx/sbin/nginx+0x5b8583)
    #7 0x4c89c6  (/usr/local/openresty-debug/nginx/sbin/nginx+0x4c89c6)
    #8 0x4b4419  (/usr/local/openresty-debug/nginx/sbin/nginx+0x4b4419)
    #9 0x427f16  (/usr/local/openresty-debug/nginx/sbin/nginx+0x427f16)
    #10 0x7f3daff27554  (/lib64/libc.so.6+0x22554)
    #11 0x42d537  (/usr/local/openresty-debug/nginx/sbin/nginx+0x42d537)

This issue was closed.