Group queue deletions on_node_down into 10 operations per transaction
When many queues are being deleted, we believe it is faster to run fewer
Mnesia transactions, so we group 10 queue deletions into a single Mnesia
transaction. The number 10 is arbitrary; we did not benchmark other
batch sizes. One Mnesia transaction per queue deletion feels like too
many transactions, while a single transaction for all queue deletions
feels like too few. Ten felt like a sensible middle ground.
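The batching described above can be sketched roughly as follows. The module name `batch_delete`, the helper `in_batches/2`, and `internal_delete/1` in the usage comment are hypothetical names for illustration; the real deletion code runs inside `rabbit_misc:execute_mnesia_transaction/1` on the `on_node_down` path.

```erlang
-module(batch_delete).
-export([in_batches/2]).

%% Split a list into sublists of at most Size elements each.
in_batches(_Size, []) ->
    [];
in_batches(Size, List) when length(List) =< Size ->
    [List];
in_batches(Size, List) ->
    {Batch, Rest} = lists:split(Size, List),
    [Batch | in_batches(Size, Rest)].

%% Intended shape of the change (not runnable here, shown for shape only):
%%
%%   [rabbit_misc:execute_mnesia_transaction(
%%        fun () -> [internal_delete(QName) || QName <- Batch] end)
%%    || Batch <- in_batches(10, QNames)]
```

With a batch size of 10, 25 queues would produce 3 transactions instead of 25.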
We cannot tell whether this is a good change because
rabbit_core_metrics:queue_deleted/1 dominates the runtime and obscures
any other observation. According to qcachegrind,
rabbit_misc:execute_mnesia_transaction/1 takes 1.8s, while
rabbit_core_metrics:queue_deleted/1 takes 132s, of which 131s is spent
in ets:select/2.
How can we optimise rabbit_core_metrics:queue_deleted/1? Rather than
calling ets:select/2 twice for every queue, we are thinking of calling
it twice for the whole set of queues being deleted; we don't know
whether this is possible. Alternatively, we might use ets:first/1 and
ets:next/2 to iterate over the entire table ONCE, removing the rows for
all deleted queues in a single pass. Thoughts @dcorbacho @michaelklishin?
For initial context, see #1513
Partner-in-crime: @essen