Fix #95126: io::cleanup() can panic in unusual circumstances #95181

Closed · wants to merge 2 commits
13 changes: 12 additions & 1 deletion — library/std/src/io/stdio.rs

```diff
@@ -621,7 +621,18 @@ pub fn cleanup() {
         // might have leaked a StdoutLock, which would
         // otherwise cause a deadlock here.
         if let Some(lock) = Pin::static_ref(instance).try_lock() {
-            *lock.borrow_mut() = LineWriter::with_capacity(0, stdout_raw());
+            if let Ok(mut refmut) = lock.try_borrow_mut() {
+                // construct an unbuffered LineWriter
+                let unbuffered = LineWriter::with_capacity(0, stdout_raw());
+                // replace the current LineWriter with the unbuffered one
+                let mut buffered = crate::mem::replace(&mut *refmut, unbuffered);
+                // flush the old, buffered LineWriter
+                let _ = buffered.flush();
+                // leak the old, buffered LineWriter to avoid deallocation; this
+                // ensures that writes to stdout *during* cleanup (e.g., writes
+                // emitted by a custom global allocator) won't deadlock (#95126)
+                crate::mem::forget(buffered);
+            }
         }
     }
 }
```

Review comment (Member), on the `lock.try_borrow_mut()` line:

> Can you help me understand this `try_borrow_mut`? The `try_lock` above is justified by leaking a `StdoutLock`, so presumably the panic on `borrow_mut()` here would occur when someone leaks a locked `RefCell` -- but that `RefCell` is never exposed, and std shouldn't be leaking it (right?)...
>
> I think this could also happen if `cleanup()` is invoked while the `RefCell` is locked on the stack above us, but I wouldn't expect that to happen either.

Review comment (Member), on the `buffered.flush()` line:

> I would loosely expect that we should move this flushing out of the STDOUT lock.

Review comment (Member), on the `crate::mem::forget(buffered)` line:

> I'm not sure I follow why this needs to be leaked -- it seems like what we need to do is make sure that the lock has been released before we drop it, but that's something we can do easily enough (just by moving the drop outside the `lock.try_borrow_mut`).
Expand Down
68 changes: 68 additions & 0 deletions — src/test/ui/issues/issue-95126.rs

```rust
// run-pass
// no-prefer-dynamic

// regression test for #95126: io::cleanup() can panic in unusual circumstances

#[global_allocator]
static ALLOCATOR: allocator::TracingSystemAllocator = allocator::TracingSystemAllocator;

fn main() {
    let _ = std::io::stdout();
    allocator::enable_tracing();
    // panic occurs after `main()`
}

// a global allocator that prints on `alloc` and `dealloc`
mod allocator {
    use std::{
        alloc::{GlobalAlloc, Layout, System},
        cell::{RefCell, RefMut},
        panic::catch_unwind,
    };

    pub struct TracingSystemAllocator;

    unsafe impl GlobalAlloc for TracingSystemAllocator {
        unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
            let ptr = System.alloc(layout);

            let _ = catch_unwind(|| {
                maybe_with_guard(|trace_allocations| {
                    if *trace_allocations {
                        println!("alloc({:?}) = {}", layout, ptr as usize);
                    }
                })
            });

            ptr
        }

        unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
            System.dealloc(ptr, layout);

            let _ = catch_unwind(|| {
                maybe_with_guard(|trace_allocations| {
                    if *trace_allocations {
                        println!("dealloc({}, {:?})", ptr as usize, layout);
                    }
                })
            });
        }
    }

    pub fn enable_tracing() {
        maybe_with_guard(|mut trace| *trace = true)
    }

    // maybe run `f`, if a unique, mutable reference to `TRACE_ALLOCATOR` can be
    // acquired.
    fn maybe_with_guard<F>(f: F)
    where
        F: for<'a> FnOnce(RefMut<'a, bool>),
    {
        let _ = TRACE_ALLOCATOR.try_with(|guard| guard.try_borrow_mut().map(f));
    }

    // used to prevent infinitely recursive tracing
    thread_local! { static TRACE_ALLOCATOR: RefCell<bool> = RefCell::default(); }
}
```
2 changes: 1 addition & 1 deletion — src/tools/tidy/src/ui_tests.rs

```diff
@@ -8,7 +8,7 @@ use std::path::Path;
 const ENTRY_LIMIT: usize = 1000;
 // FIXME: The following limits should be reduced eventually.
 const ROOT_ENTRY_LIMIT: usize = 984;
-const ISSUES_ENTRY_LIMIT: usize = 2310;
+const ISSUES_ENTRY_LIMIT: usize = 2311;

 fn check_entries(path: &Path, bad: &mut bool) {
     let dirs = walkdir::WalkDir::new(&path.join("test/ui"))
```