Labels
A-Tasks (tools for parallel and async work), C-Bug (an unexpected or incorrect behavior), C-Docs (an addition or correction to our documentation), D-Straightforward (simple bug fixes and API improvements, docs, tests and examples), S-Ready-For-Implementation (this issue is ready for an implementation PR — go for it!)
Description
When spawning 8 blocking tasks (each calling `thread::sleep(10 ms)`), both via `bevy_tasks::ComputeTaskPool::get().scope` and via an ECS schedule with the MultiThreaded executor, the actual runtime is 80 ms+ instead of the expected ~10 ms. All tasks run strictly serially, with no real thread-pool concurrency.
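For comparison, a plain `std::thread` baseline (an illustrative sketch, not part of the original report) shows what true concurrency looks like for the same workload — 8 concurrent 10 ms sleeps should finish in roughly 10 ms, not 80 ms:

```rust
use std::thread;
use std::time::{Duration, Instant};

fn main() {
    let t0 = Instant::now();
    // Spawn 8 OS threads, each sleeping 10 ms, as a baseline for real parallelism.
    let handles: Vec<_> = (0..8)
        .map(|_| thread::spawn(|| thread::sleep(Duration::from_millis(10))))
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let elapsed = t0.elapsed();
    println!("std::thread baseline: {elapsed:?}");
    // With genuine concurrency this stays near 10 ms, far below the 80 ms serial total.
    assert!(elapsed < Duration::from_millis(80));
}
```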
`bevy_app` and `bevy_tasks` 0.17.2, default features
[Optional] Relevant system information
Rust 1.90; Windows 11 and Ubuntu 24.04; Intel Core i7-10700K (8 cores)
What you did
```rust
use std::{
    sync::Arc,
    thread,
    time::{Duration, Instant},
};

use bevy_app::{
    App, Main, TaskPoolOptions, TaskPoolPlugin, TaskPoolThreadAssignmentPolicy, Update,
};
use bevy_ecs::schedule::ExecutorKind;

const SYSTEM_COUNT: usize = 8;

fn main() {
    let mut app = App::new();
    println!("{}", bevy_tasks::available_parallelism());

    // Route every available thread to the compute pool; disable the IO and
    // async-compute pools entirely.
    let mut options = TaskPoolOptions::with_num_threads(bevy_tasks::available_parallelism());
    options.io.min_threads = 0;
    options.io.max_threads = 0;
    options.async_compute.min_threads = 0;
    options.async_compute.max_threads = 0;
    options.compute = TaskPoolThreadAssignmentPolicy {
        min_threads: 1,
        max_threads: bevy_tasks::available_parallelism(),
        percent: 1.0,
        on_thread_spawn: Some(Arc::new(|| {
            println!("spawned");
        })),
        on_thread_destroy: None,
    };
    app.add_plugins(TaskPoolPlugin {
        task_pool_options: options,
    });

    // 8 blocking tasks spawned directly on the compute task pool.
    let t0 = Instant::now();
    let _ = bevy_tasks::ComputeTaskPool::get().scope(|s| {
        for _ in 0..SYSTEM_COUNT {
            s.spawn(async {
                thread::sleep(Duration::from_millis(10));
            });
        }
    });
    println!(" taskpool time consumed: {:?}", t0.elapsed());

    // 8 blocking systems run through the multi-threaded executor.
    for _ in 0..SYSTEM_COUNT {
        app.add_systems(Update, || {
            thread::sleep(Duration::from_millis(10));
        });
    }
    app.get_schedule_mut(Update)
        .unwrap()
        .set_executor_kind(ExecutorKind::MultiThreaded);
    app.get_schedule_mut(Main)
        .unwrap()
        .set_executor_kind(ExecutorKind::MultiThreaded);

    let t0 = Instant::now();
    app.update();
    println!(" schedule time consumed: {:?}", t0.elapsed());
}
```
What went wrong
- What were you expecting?
  The time consumed should be around 10–20 ms with parallel execution.
- What actually happened?
  The output indicates the task pool and the systems run serially:
```
16
 taskpool time consumed: 80.668829ms
 schedule time consumed: 81.100603ms
```
With

```rust
println!(
    "available threads: {}",
    bevy_tasks::ComputeTaskPool::get().thread_num()
);
```

the reported thread count is 1, even though the options request all available threads. Apparently the pool defaults to a single thread, and the whole API is misleading.