
Opt-in use of async_io::block_on in bevy_tasks #9625

@BigWingBeat

Description


What problem does this solve or what need does it fill?

bevy_tasks currently uses the block_on function provided by futures-lite. If you are using async-io and bevy_tasks is the only async runtime in your application, async-io experiences extreme latency spikes (upwards of hundreds of milliseconds), because I/O events only get processed by the fallback thread, which uses an exponential backoff strategy. async-io provides its own block_on function, which processes I/O events while it waits, and is the intended way to avoid this issue.
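For illustration, here is a minimal sketch of the difference, assuming a standalone binary with async-io and futures-lite as direct dependencies: blocking on an async-io timer via futures_lite::future::block_on leaves the I/O reactor to the backoff-driven fallback thread, while async_io::block_on drives the reactor on the blocked thread itself.

```rust
use std::time::{Duration, Instant};

use async_io::Timer;

fn main() {
    // Blocking with futures-lite: nothing on this thread drives async-io's
    // reactor, so the timer only completes when the fallback thread polls it,
    // which can be delayed by its exponential backoff.
    let start = Instant::now();
    futures_lite::future::block_on(Timer::after(Duration::from_millis(10)));
    println!("futures_lite::future::block_on took {:?}", start.elapsed());

    // Blocking with async-io: the blocked thread drives the reactor while it
    // waits, so the timer completes close to on time.
    let start = Instant::now();
    async_io::block_on(Timer::after(Duration::from_millis(10)));
    println!("async_io::block_on took {:?}", start.elapsed());
}
```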

What solution would you like?

Add async-io as an optional dependency of bevy_tasks that, when enabled, replaces uses of futures-lite's block_on with async-io's. This approach is already used by async-global-executor to solve the same problem (quote from its readme):

async-io: if enabled, async-global-executor will use async_io::block_on instead of futures_lite::future::block_on internally. this is preferred if your application also uses async-io.
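A minimal sketch of what the opt-in could look like inside bevy_tasks, assuming a Cargo feature named async-io that enables the optional dependency (the re-export and feature wiring here are illustrative, not the actual bevy_tasks API):

```rust
// Hypothetical Cargo.toml side:
//   [dependencies]
//   async-io = { version = "1", optional = true }
//   [features]
//   async-io = ["dep:async-io"]

// When the feature is enabled, use async-io's block_on, which drives the
// I/O reactor while blocking.
#[cfg(feature = "async-io")]
pub use async_io::block_on;

// Otherwise fall back to futures-lite's executor-agnostic block_on.
#[cfg(not(feature = "async-io"))]
pub use futures_lite::future::block_on;
```

Callers that simply invoke block_on(future) would be unaffected either way, since both functions take a future and return its output; only the reactor-driving behavior changes.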

What alternative(s) have you considered?

  • Using tokio or async-std alongside bevy_tasks, and letting them fight over system resources

Additional context

I encountered this issue while adding a bevy_tasks backend to Quinn, so I could use it in Bevy without having multiple async runtimes competing with each other, which from what I understand is undesirable at best. (Quinn currently lets you choose between tokio and async-std.)


    Labels

    A-Tasks: Tools for parallel and async work
    C-Feature: A new feature, making something new possible
