Incremental lock #22

@alex88

Description

Hi there,

I currently have this code:

mutex = RedisMutex.new lock_key(feed, user), block: 300, sleep: 0.5, expire: 300
if mutex.locked?
  Rails.logger.info 'Another feed is in processing, waiting for that and returning'
  # Another feed is running, just wait for the lock to expire and end
  mutex.unlock if mutex.lock
  Rails.logger.info 'Other feed finished, returning'
else
  # Not locked, acquire the lock and process the feed
  Rails.logger.info 'No feed is processing, acquiring lock'
  mutex.with_lock do
    Rails.logger.info 'feed lock acquired, processing feed'
    teams = get_teams(feed)
    teams.each { |team| sync_team(team, feed, user) }
    Rails.logger.info 'feed finished, releasing lock'
  end
end

Running as a Sidekiq job on Heroku, this works fine, except that during a scale-down the job gets SIGKILLed because the teams.each loop doesn't exit within 10 seconds (Heroku's default kill timeout).

Is there a way, inside the with_lock block, to extend the expire time of the lock? Something like:

mutex.with_lock do
  Rails.logger.info 'feed lock acquired, processing feed'
  teams = get_teams(feed)
  teams.each do |team|
    sync_team(team, feed, user)
    mutex.lock_until(now + 10.seconds)
  end
  Rails.logger.info 'feed finished, releasing lock'
end

This would let me lower the expire time to 10 seconds and incrementally extend it, so that if the process is killed the lock stays active for at most 10 more seconds, rather than for whatever remains of the original 300 seconds.
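For discussion's sake, here is a minimal, self-contained sketch of that incremental-expiry idea. It uses a hypothetical in-memory TtlLock class (not part of this gem) so the pattern is easy to follow; with real Redis the equivalents would be SET with the NX and EX options to acquire, and EXPIRE to push the deadline forward. The class and method names here are made up for illustration only:

```ruby
# In-memory stand-in for a TTL-based lock, illustrating incremental
# expiry. Real Redis analogues: acquire -> SET key val NX EX ttl,
# extend_ttl -> EXPIRE key ttl, release -> DEL key.
class TtlLock
  # The clock is injectable so the behavior can be tested without sleeping.
  def initialize(clock = -> { Time.now.to_f })
    @clock = clock
    @expires_at = {}
  end

  # Acquire only if no unexpired holder exists (SET NX EX analogue).
  def acquire(key, ttl)
    now = @clock.call
    return false if @expires_at[key] && @expires_at[key] > now
    @expires_at[key] = now + ttl
    true
  end

  # Push the expiry forward (EXPIRE analogue); fails if the lock
  # has already expired, i.e. we no longer hold it.
  def extend_ttl(key, ttl)
    now = @clock.call
    return false unless @expires_at[key] && @expires_at[key] > now
    @expires_at[key] = now + ttl
    true
  end

  def release(key)
    @expires_at.delete(key)
  end

  # Remaining time on the lock, 0 if expired or absent.
  def ttl(key)
    exp = @expires_at[key]
    exp ? [exp - @clock.call, 0].max : 0
  end
end

# Usage: each iteration of the work loop bumps the expiry by 10s,
# so a SIGKILL leaves the lock held for at most 10 more seconds.
clock = [0.0]
lock = TtlLock.new(-> { clock[0] })
lock.acquire('feed', 10)      # held until t = 10
clock[0] = 5.0
lock.extend_ttl('feed', 10)   # now held until t = 15
```

The key property is that the worst-case stale-lock window equals the short per-iteration TTL, not the total job duration.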
