
Multithreading and WatchError #224


Closed

w0rse opened this issue Feb 17, 2012 · 7 comments

Comments

@w0rse

w0rse commented Feb 17, 2012

We're seeing WatchErrors in a Django cache app that uses Redis for storage. The author of that app points to possible threading issues in the redis-py client. Indeed, we recently added the multithreading option to our WSGI config.

The code generating this error is located here:
https://github.com/Suor/django-cacheops/blob/master/cacheops/query.py#L26

Our configuration, along with a call stack, is in the original issue:
Suor/django-cacheops#9
The suggested upgrade to the latest client didn't solve the issue.

@andymccurdy

This looks like a problem in django-cacheops here: https://github.com/Suor/django-cacheops/blob/master/cacheops/invalidation.py#L126

redis-py changed the way pipelines and transactions are handled in version 2.4.6. @Suor (the author of django-cacheops) had some concerns over those changes, which can be seen here: #197

I think the workarounds on line 126 are actually causing the threading issues. Specifically, WATCH is being added to the pipeline in a way that doesn't bind the Connection object to the pipeline object, and when the initial pipeline is executed on line 132, the WATCH command gets sent to the server but the Connection object is released back to the pool. Note that the WATCH is still bound to that connection, which is now completely dissociated from the client, and I suspect that causes issues later when another thread grabs the connection from the pool and uses it for something else.
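To make the failure mode concrete, here is a minimal sketch (not the actual cacheops code; the key names and thread bodies are made up) of how a WATCH issued as a plain command can leak through the connection pool to another thread:

```python
import threading
import redis

pool = redis.ConnectionPool()
r = redis.Redis(connection_pool=pool)

def thread_a():
    # The client checks a connection out of the pool, sends WATCH, and
    # immediately returns the connection to the pool -- but the server
    # still considers that connection to be watching 'some_key'.
    r.execute_command('WATCH', 'some_key')   # 'some_key' is a made-up key
    # ... no MULTI/EXEC or UNWATCH is ever sent on that same connection ...

def thread_b():
    # This thread may check the very same connection out of the pool.
    # Its MULTI/EXEC then runs with the stale WATCH attached and can fail
    # with redis.WatchError if 'some_key' changed in the meantime, even
    # though this thread never watched anything itself.
    pipe = r.pipeline()
    pipe.set('other_key', 'value')           # 'other_key' is made up too
    pipe.execute()

threading.Thread(target=thread_a).start()
threading.Thread(target=thread_b).start()
```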

@w0rse

w0rse commented Feb 21, 2012

Thank you for the thorough reply. I'll forward it to @Suor.

@Suor

Suor commented Feb 22, 2012

Most commands are added to the pipe's command stack without binding a connection and then executed in a batch upon the .execute() call. pipe is a local variable here, and so is its command stack, which is executed in a batch over a single connection fetched from the pool inside the .execute() call.

However, I see now that the whole invalidate_from_dict() routine was written before pooling went into redis-py and should be rethought: after pipe.execute() the WATCH lands on one connection, and the later redis_conn.unwatch() could go out over another connection, so the WATCH just kind of hangs.

For now I don't even know how to handle that with the new redis-py. I need to WATCH, execute some commands outside of MULTI/EXEC, and then run some code in an actual transaction. It doesn't look possible with the new redis-py version: I can assign a connection to the pipeline to make sure it's the same one, but redis_conn.execute_command() will pop a new connection from the pool every time.

Any tips?
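(For reference, a sketch of the flow being asked about, using redis-py 2.4.6+ pipeline semantics: watch() binds the pipeline to a single connection and switches it into immediate execution mode, multi() switches it back to buffering, and execute() sends the buffered commands as one MULTI/EXEC on that same connection. The function and key names below are made up for illustration and are not cacheops' actual logic.)

```python
import redis

r = redis.Redis()

def invalidate(conj_key):
    # conj_key is a made-up key name, used only to illustrate the flow
    pipe = r.pipeline()
    while True:
        try:
            # watch() binds the pipeline to one connection and puts it
            # into immediate execution mode
            pipe.watch(conj_key)
            # commands issued here run immediately, outside MULTI/EXEC,
            # on that same watched connection
            members = pipe.smembers(conj_key)
            # multi() starts buffering again; everything below is queued
            # and sent as a single MULTI ... EXEC block
            pipe.multi()
            pipe.delete(conj_key)
            for member in members:
                pipe.delete(member)
            pipe.execute()
            return
        except redis.WatchError:
            # a watched key changed between WATCH and EXEC; retry
            continue
        finally:
            pipe.reset()
```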

@Suor

Suor commented Feb 22, 2012

It looks like I need some lower-level machinery here, like choosing a connection and then conditionally running commands, pipes, watches, and transactions on it. It seems the only way I can do that now is to create a pipe, switch it into immediate mode, and then issue commands one by one, with no batched pipes or pipeline transactions possible.
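(As in the sketch above: once a pipeline has been switched into immediate mode via watch(), calling multi() switches it back to buffering, so a real MULTI/EXEC transaction on that same connection is still possible afterwards.)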

@andymccurdy

@Suor Take a look at the implementation of invalidate_from_dict I did on #197. I haven't checked whether you've updated the logic since then, but I think it should work.

@Suor

Suor commented Mar 3, 2012

This should be closed, since it's a bug on my side.
See, however, #229.

@andymccurdy

#229 Merged, closing this. Thanks!
