
weird pipeline() combined with watch() behavior #197


Closed
Suor opened this issue Sep 17, 2011 · 6 comments

@Suor

Suor commented Sep 17, 2011

I understand it's intended and documented, but it still seems so inconvenient and illogical, and it prohibits efficient use of pipelining (not MULTI).

Why does the pipe suddenly break when I call .watch()? It's still a kind of pipe, but it doesn't work as one. Maybe the old 2.2 approach wasn't the best, but it didn't cause this confusion. I used .pipeline(transaction=False) to make a plain pipeline, and with the flag on to make an atomic one. Cramming everything into one method is questionable, but the concept was clear. Now we still use .pipeline() to get an atomic pipeline until we call .watch(), at which point we suddenly lose pipelining and have to use .multi() directly to get atomicity, which makes things with MULTI very inconsistent.

My use case, implemented for redis-py 2.2:

# set up pipeline not for multi, but to avoid unneeded network roundtrips
pipe = redis_conn.pipeline(transaction=False)  
pipe.watch(version_key, *conjs_keys)
# get something from redis
pipe.get(...)
pipe.sunion(...)
...
# ignore the watch() result, gather all the others
_, result1, result2, ... = pipe.execute()

if <some condition involving result*>:
    redis_conn.unwatch()
else:
    try:
        txn = redis_conn.pipeline()
        # Change something in redis
        txn.delete(...)
        txn.execute()
    except WatchError:
        <Optimistic locking failed: just redo everything.>

Actual working example is here
https://github.com/Suor/django-cacheops/blob/master/cacheops/invalidation.py#L111
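To make the complaint concrete: in redis-py 2.4+, calling watch() flips a pipeline from buffered to immediate execution until multi() is called. A toy state machine (purely illustrative, not redis-py's real implementation) shows the switch:

```python
class ToyPipeline:
    """Illustrative stub: buffered until watch(), immediate after,
    buffered again after multi(). Not the real redis-py client."""
    def __init__(self):
        self.buffered = True   # a plain pipeline buffers commands
        self.commands = []

    def watch(self, *keys):
        self.buffered = False  # redis-py 2.4+: watch() drops to immediate mode

    def multi(self):
        self.buffered = True   # multi() restores buffering for the transaction

    def get(self, key):
        if self.buffered:
            self.commands.append(("GET", key))
            return self                 # queued; result comes from execute()
        return f"<value of {key}>"      # immediate mode: one round trip per command

pipe = ToyPipeline()
pipe.get("a")           # buffered: no round trip yet
pipe.watch("a")
print(pipe.get("a"))    # immediate: a value comes back right away
pipe.multi()
pipe.get("a")           # buffered again inside MULTI
```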

@andymccurdy

It seems like you're making some incorrect assumptions about how WATCH works.

I've attempted to rewrite the invalidate_from_dict function in your code to work with redis-py 2.4.6+. In my opinion, it's much simpler and more straightforward.

def invalidate_from_dict(model, values):
    """
    Invalidates caches that can possibly be influenced by object
    """

    def _invalidate(pipe):
        # Create a list of invalidators from list of schemes and values of object fields
        schemes = cache_schemes.schemes(model)
        conjs_keys = [conj_cache_key_from_scheme(model, scheme, values) for scheme in schemes]

        version_key = cache_schemes.get_version_key(model)

        pipe.watch(*conjs_keys)
        version = pipe.get(version_key)
        cache_keys = pipe.sunion(conjs_keys)

        if version is None:
            version = 0
        if int(version) != cache_schemes.version(model):
            cache_schemes.load_schemes(model)
            # force a manual reset of the pipeline to UNWATCH any previously set keys
            pipe.reset()
            # a WatchError will simply cause this entire function to be re-executed.
            raise WatchError()
        elif cache_keys or conjs_keys:
            pipe.multi()
            pipe.delete(*(list(cache_keys) + conjs_keys))

    redis_conn.transaction(_invalidate)
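The transaction() helper used above hides a retry loop: it re-runs the callable until execute() completes without a WatchError. A simplified sketch of that loop (not the real redis-py implementation), with a stub pipeline that fails once to show the retry:

```python
class WatchError(Exception):
    pass

def transaction(pipe_factory, func):
    """Sketch of redis-py's transaction() helper, simplified: re-run
    func until it completes without a WatchError, taking a fresh
    pipeline on each attempt."""
    while True:
        pipe = pipe_factory()
        try:
            func(pipe)             # func may call pipe.watch() / pipe.multi()
            return pipe.execute()  # EXEC fails if a watched key changed
        except WatchError:
            continue               # watched key changed: retry from scratch

class StubPipe:
    """Stand-in pipeline whose execute() fails once, then succeeds."""
    attempts = 0
    def execute(self):
        StubPipe.attempts += 1
        if StubPipe.attempts == 1:
            raise WatchError()     # simulate a concurrent write on the first try
        return ["OK"]

result = transaction(StubPipe, lambda pipe: None)
print(result, StubPipe.attempts)   # retried once: attempts == 2
```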

@Suor

Suor commented Oct 11, 2011

No, I can't get a WatchError on line 132 because that pipe is a pipeline, not a transaction, so there is no EXEC call on that line. By using the pipe I avoid two unnecessary network round trips to the redis server in:

pipe.watch(version_key, *conjs_keys)
pipe.get(version_key)
pipe.sunion(conjs_keys)

which is not only good in itself but also lessens the probability of a WatchError here https://github.com/Suor/django-cacheops/blob/master/cacheops/invalidation.py#L151

So the new watch behavior is OK with transactions; the problem is that it's incompatible with plain pipelines created with transaction=False. By the way, the .pipeline() method being a transaction by default is very confusing; it should really be two separate methods with distinct names.

@andymccurdy

You're correct about the WatchError. I saw the .execute() and just naturally assumed an EXEC was being sent.

I agree that the naming is confusing, but the names are stuck for the time being because they're in the public API. Changing them would break backwards compatibility.

@Suor

Suor commented Oct 12, 2011

You could create aliases such as .pipe() for .pipeline(transaction=False), and something for .pipeline(transaction=True) (unfortunately .transaction() is already taken), and encourage people to migrate their code to them. But this is really a separate issue.

What about .watch()? I'd really prefer the old behavior; at least it didn't have the unintuitive side effect of breaking the pipe.

@roncohen

Sorry to revisit this old thread, but is there a way to buffer up those three commands that @Suor mentioned? I'm trying to get a lot of data from watched keys and would like to avoid those round trips. I also tried to start the overall pipeline with pipeline(transaction=False), but it does not seem to be the way to go:

with redis.pipeline(transaction=False) as pipe:
    pipe.watch(*keys)
    for key in keys:
        pipe.lrange(key, 0, -1)

    data = pipe.execute()

    pipe.multi()
    # set stuff back, then:
    pipe.execute()

I get ResponseError: DISCARD without MULTI
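What's likely happening here: in redis-py 2.4+, watch() flips even a transaction=False pipeline into immediate mode, so the first execute() has nothing queued and the cleanup path ends up sending DISCARD without a preceding MULTI. The ordering the API expects is watch, then immediate reads, then multi(), then queued writes, then execute(). A runnable stub (an in-memory stand-in, not the real client) demonstrating that ordering:

```python
class StubPipe:
    """In-memory stand-in for a redis-py 2.4+ pipeline, only to show
    the watch/multi/execute ordering; not the real client."""
    def __init__(self, store):
        self.store = store
        self.immediate = False
        self.queue = []

    def watch(self, *keys):
        self.immediate = True          # watch() switches to immediate mode

    def lrange(self, key, start, end):
        if self.immediate:
            return self.store[key]     # immediate mode: result right away
        self.queue.append(("LRANGE", key))
        return self

    def multi(self):
        self.immediate = False         # back to buffering for the transaction

    def rpush(self, key, *values):
        self.queue.append(("RPUSH", key) + values)
        return self

    def execute(self):
        return list(self.queue)        # would send MULTI ... EXEC here

store = {"jobs": [1, 2, 3]}
pipe = StubPipe(store)
pipe.watch("jobs")
data = pipe.lrange("jobs", 0, -1)  # returned directly, no execute() needed
pipe.multi()
pipe.rpush("done", *data)          # queued inside MULTI
pipe.execute()
```

With the real client, the reads after watch() return their results directly, so the intermediate execute() from the snippet above is never needed.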

@github-actions

github-actions bot commented Sep 1, 2020

This issue is marked stale. It will be closed in 30 days if it is not updated.

@github-actions github-actions bot added the Stale label Sep 1, 2020
@github-actions github-actions bot closed this as completed Oct 1, 2020