
Proposal: BoundedConcurrentQueue<T> #23700

@stephentoub

Description

ConcurrentQueue<T> is an unbounded, thread-safe queue whose primary operations are Enqueue and TryDequeue. It's one of the more valuable concurrent collection types. However, unbounded collections aren't always desirable. For example, consider using a concurrent queue for object pooling. If you want to ensure the pool never stores more than N objects, that is difficult or impossible to achieve efficiently with ConcurrentQueue<T>, which automatically grows its storage to fit the item being added whenever there's insufficient room.

ConcurrentQueue<T>, however, is actually implemented as a wrapper around a bounded queue, internally called ConcurrentQueue<T>.Segment:
https://github.com/dotnet/corefx/blob/9c468a08151402a68732c784b0502437b808df9f/src/System.Collections.Concurrent/src/System/Collections/Concurrent/ConcurrentQueue.cs#L820
In essence, ConcurrentQueue<T>.Segment provides the bounded-queue behavior, and ConcurrentQueue<T> layers unbounded semantics on top of it.
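
To make that layering concrete, here is a rough, illustrative sketch of how an unbounded queue can be built by chaining bounded segments. It is written against the proposed BoundedConcurrentQueue<T> API below (which doesn't exist yet), and it deliberately uses a coarse lock around segment management rather than the lock-free algorithm the real ConcurrentQueue<T> uses, so it's a conceptual picture, not the actual implementation:

using System.Collections.Concurrent;

internal sealed class ChainedQueueSketch<T>
{
    private sealed class Segment
    {
        // Each segment is a fixed-size bounded queue (32 is a power of 2 >= 2).
        internal readonly BoundedConcurrentQueue<T> Items = new BoundedConcurrentQueue<T>(32);
        internal Segment Next;
    }

    private readonly object _lock = new object();
    private Segment _head, _tail;

    public ChainedQueueSketch() { _head = _tail = new Segment(); }

    public void Enqueue(T item)
    {
        lock (_lock)
        {
            // If the tail segment is full, link a fresh segment and enqueue there.
            if (!_tail.Items.TryEnqueue(item))
            {
                var s = new Segment();
                _tail.Next = s;
                _tail = s;
                s.Items.TryEnqueue(item); // always succeeds on an empty segment
            }
        }
    }

    public bool TryDequeue(out T item)
    {
        lock (_lock)
        {
            while (true)
            {
                if (_head.Items.TryDequeue(out item))
                    return true;
                if (_head.Next == null)
                    return false;        // no more segments: the queue is empty
                _head = _head.Next;      // drained segment: advance to the next one
            }
        }
    }
}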

We should clean up the Segment API and expose it as:

namespace System.Collections.Concurrent
{
    public sealed class BoundedConcurrentQueue<T>
    {
        public BoundedConcurrentQueue(int capacity); // capacity must be a power of 2 that's >= 2

        public int Capacity { get; }
        public int Count { get; }
        public bool IsEmpty { get; }

        public bool TryEnqueue(T item);
        public bool TryDequeue(out T item);
    }
}
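
For illustration, here is roughly how such a type could be used for the object-pooling scenario above. SimplePool, its default capacity, and the factory delegate are all made up for this sketch; only the constructor, TryEnqueue, and TryDequeue come from the proposed API:

using System;
using System.Collections.Concurrent;

public sealed class SimplePool<T> where T : class
{
    private readonly BoundedConcurrentQueue<T> _items;
    private readonly Func<T> _factory;

    public SimplePool(Func<T> factory, int capacity = 32)
    {
        _factory = factory;
        _items = new BoundedConcurrentQueue<T>(capacity); // capacity: power of 2, >= 2
    }

    // Reuse a pooled instance if one is available; otherwise create a new one.
    public T Rent() => _items.TryDequeue(out T item) ? item : _factory();

    // If the pool is already at capacity, TryEnqueue returns false and the object
    // is simply dropped for the GC to collect; no growth, no blocking.
    public void Return(T item) => _items.TryEnqueue(item);
}

The key point is that Return never grows the storage: the bound is enforced simply by TryEnqueue failing when the queue is full.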

The current implementation is fast, and it achieves that speed in part by eschewing functionality that would weigh it down non-trivially, e.g. enumeration. That's why it doesn't implement interfaces like IEnumerable<T> or IReadOnlyCollection<T> that would force us to add such behavior and slow it down. This collection would be very specialized and used purely for its ability to have items enqueued and dequeued quickly.

(Methods like TryPeek, ToArray, CopyTo, GetEnumerator, etc. all require the ability to look at data in the queue without removing it. In the current implementation, that requires marking the segment as "preserving for observation", which means nothing is ever actually removed from the segment's storage: enqueues continue to be allowed until the segment is full, but because dequeues no longer free up slots, at that point nothing further can be enqueued, even if everything is dequeued. ConcurrentQueue<T> deals with this simply by creating a new segment, but that doesn't work for the segment itself.)


EDIT @stephentoub 7/6/2018: See alternate proposal at https://github.com/dotnet/corefx/issues/24365#issuecomment-403074379.
