rust: update Ref to use the kernel's refcount_t. #377


Merged (1 commit) on Jun 25, 2021
84 changes: 43 additions & 41 deletions drivers/android/process.rs
@@ -14,7 +14,7 @@ use kernel::{
linked_list::List,
pages::Pages,
prelude::*,
sync::{Guard, Mutex, Ref, RefCount, RefCounted},
sync::{Guard, Mutex, Ref},
user_ptr::{UserSlicePtr, UserSlicePtrReader},
Error,
};
@@ -275,7 +275,6 @@ impl ProcessNodeRefs {

pub(crate) struct Process {
ctx: Arc<Context>,
ref_count: RefCount,

// TODO: For now this a mutex because we have allocations in BTreeMap and RangeAllocator while
// holding the lock. We may want to split up the process state at some point to use a spin lock
@@ -293,22 +292,23 @@ unsafe impl Sync for Process {}

impl Process {
fn new(ctx: Arc<Context>) -> Result<Ref<Self>> {
let mut proc_ref = Ref::try_new(Self {
ref_count: RefCount::new(),
ctx,
// SAFETY: `inner` is initialised in the call to `mutex_init` below.
inner: unsafe { Mutex::new(ProcessInner::new()) },
// SAFETY: `node_refs` is initialised in the call to `mutex_init` below.
node_refs: unsafe { Mutex::new(ProcessNodeRefs::new()) },
})?;
let process = Ref::get_mut(&mut proc_ref).ok_or(Error::EINVAL)?;
// SAFETY: `inner` is pinned behind the `Arc` reference.
let pinned = unsafe { Pin::new_unchecked(&process.inner) };
kernel::mutex_init!(pinned, "Process::inner");
// SAFETY: `node_refs` is pinned behind the `Arc` reference.
let pinned = unsafe { Pin::new_unchecked(&process.node_refs) };
kernel::mutex_init!(pinned, "Process::node_refs");
Ok(proc_ref)
Ref::try_new_and_init(
Self {
ctx,
// SAFETY: `inner` is initialised in the call to `mutex_init` below.
inner: unsafe { Mutex::new(ProcessInner::new()) },
// SAFETY: `node_refs` is initialised in the call to `mutex_init` below.
node_refs: unsafe { Mutex::new(ProcessNodeRefs::new()) },
},
|process| {
// SAFETY: `inner` is pinned behind the `Ref` reference.
let pinned = unsafe { Pin::new_unchecked(&process.inner) };
kernel::mutex_init!(pinned, "Process::inner");
// SAFETY: `node_refs` is pinned behind the `Ref` reference.
let pinned = unsafe { Pin::new_unchecked(&process.node_refs) };
kernel::mutex_init!(pinned, "Process::node_refs");
},
Collaborator:
Is the support for an initialization closure only there to support easy initialization of Pinned structures? Or do you believe it's more generally useful? Just asking out of general interest.

Author:
See wedsonaf@61397b0 -- this allows us to obviate get_mut, which allows us to safely claim that Ref is in fact pinned.
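For readers unfamiliar with the pattern, the shape under discussion can be sketched in userspace Rust. The name `try_new_and_init` matches the diff, but the signature and the `Arc`-based body below are assumptions for illustration, not the kernel implementation:

```rust
use std::pin::Pin;
use std::sync::Arc;

// Stand-in for the kernel's refcount_t-backed Ref (sketch only).
struct Ref<T>(Arc<T>);

impl<T> Ref<T> {
    fn try_new_and_init(value: T, init: impl FnOnce(Pin<&mut T>)) -> Result<Ref<T>, ()> {
        let mut inner = Arc::new(value);
        // The allocation is still unique here, so `get_mut` cannot fail.
        // The closure pin-initialises the value in place *before* it is
        // ever shared, which is what lets callers drop `Ref::get_mut`.
        let unique = Arc::get_mut(&mut inner).ok_or(())?;
        // SAFETY (modelled): the value is heap-allocated and is never
        // moved once the Ref is handed out.
        init(unsafe { Pin::new_unchecked(unique) });
        Ok(Ref(inner))
    }
}
```

The key property is that the initialisation closure runs while the allocation is provably unique, so pinned fields such as mutexes can be initialised in place without exposing a `get_mut` escape hatch afterwards.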

Collaborator:
Ah, I see! Yes, getting rid of get_mut() looks very good. Not sure why you'd need a mut closure for initialization, though? Arc also provides shared references only, but we are still able to initialize the Mutexes inside of it?

Author:
See wedsonaf@e201bf1 -- allowing init on just &self (as opposed to &mut self) was an oversight that leads to unsafety (e.g., someone calling init on a synchronisation primitive that is in use).

Collaborator:
Holy smoke! I was already wondering about that, but I'm coming from profound ignorance on Pinning and everything around it.

Member:
The best way to avoid get_mut is through a wrapper type like UniqueArc https://doc.servo.org/servo_arc/struct.UniqueArc.html
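A minimal userspace sketch of that idea follows; the type and method names are assumptions modelled on servo_arc's UniqueArc, not an existing kernel API:

```rust
use std::sync::Arc;

// A wrapper that guarantees unique ownership: while it exists, `get_mut`
// always succeeds, and converting into a shared Arc consumes it, so no
// fallible `Arc::get_mut` is ever needed after sharing begins.
struct UniqueArc<T>(Arc<T>);

impl<T> UniqueArc<T> {
    fn new(value: T) -> Self {
        UniqueArc(Arc::new(value))
    }

    fn get_mut(&mut self) -> &mut T {
        // Invariant: the inner Arc holds exactly one reference until
        // `shareable` is called, so this cannot fail.
        Arc::get_mut(&mut self.0).expect("unique by construction")
    }

    // Give up uniqueness and hand out a normally shareable Arc.
    fn shareable(self) -> Arc<T> {
        self.0
    }
}
```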

Collaborator:
> allowing init on just &self (as opposed to &mut self) was an oversight that leads to unsafety

PS do we have an Issue to keep track of this?

Author:
> The best way to avoid get_mut is through a wrapper type like UniqueArc https://doc.servo.org/servo_arc/struct.UniqueArc.html

Nice, I like it.

Something for another PR though. (The reason I'm removing get_mut in this PR is that we don't have a refcount_t function with the right barriers for get_mut and I want this code to be simple wrappers around refcount_t).

> allowing init on just &self (as opposed to &mut self) was an oversight that leads to unsafety
>
> PS do we have an Issue to keep track of this?

I don't think so. Feel free to open one.

)
}

/// Attempts to fetch a work item from the process queue.
@@ -337,7 +337,7 @@ impl Process {
Either::Right(Registration::new(self, thread, &mut inner))
}

fn get_thread(&self, id: i32) -> Result<Arc<Thread>> {
fn get_thread(self: &Ref<Self>, id: i32) -> Result<Arc<Thread>> {
// TODO: Consider using read/write locks here instead.
{
let inner = self.inner.lock();
@@ -347,7 +347,7 @@ }
}

// Allocate a new `Thread` without holding any locks.
let ta = Thread::new(id, Ref::new_from(self))?;
let ta = Thread::new(id, self.clone())?;

let mut inner = self.inner.lock();

@@ -366,7 +366,7 @@ impl Process {
self.inner.lock().push_work(work)
}

fn set_as_manager(&self, info: Option<FlatBinderObject>, thread: &Thread) -> Result {
fn set_as_manager(self: &Ref<Self>, info: Option<FlatBinderObject>, thread: &Thread) -> Result {
let (ptr, cookie, flags) = if let Some(obj) = info {
(
// SAFETY: The object type for this ioctl is implicitly `BINDER_TYPE_BINDER`, so it
@@ -390,7 +390,7 @@ impl Process {
}

pub(crate) fn get_node(
&self,
self: &Ref<Self>,
ptr: usize,
cookie: usize,
flags: u32,
@@ -406,7 +406,7 @@ impl Process {
}

// Allocate the node before reacquiring the lock.
let node = Arc::try_new(Node::new(ptr, cookie, flags, Ref::new_from(self)))?;
let node = Arc::try_new(Node::new(ptr, cookie, flags, self.clone()))?;

let mut inner = self.inner.lock();
if let Some(node) = inner.get_existing_node_ref(ptr, cookie, strong, thread)? {
@@ -693,7 +693,11 @@ impl Process {
ret
}

pub(crate) fn request_death(&self, reader: &mut UserSlicePtrReader, thread: &Thread) -> Result {
pub(crate) fn request_death(
self: &Ref<Self>,
reader: &mut UserSlicePtrReader,
thread: &Thread,
) -> Result {
let handle: u32 = reader.read()?;
let cookie: usize = reader.read()?;

@@ -716,9 +720,8 @@ impl Process {
}

// SAFETY: `init` is called below.
let death = death.commit(unsafe {
NodeDeath::new(info.node_ref.node.clone(), Ref::new_from(self), cookie)
});
let death = death
.commit(unsafe { NodeDeath::new(info.node_ref.node.clone(), self.clone(), cookie) });
// SAFETY: `death` is pinned behind the `Arc` reference.
unsafe { Pin::new_unchecked(death.as_ref()) }.init();
info.death = Some(death.clone());
@@ -766,9 +769,14 @@ impl Process {
}

impl IoctlHandler for Process {
type Target = Self;

fn write(this: &Self, _file: &File, cmd: u32, reader: &mut UserSlicePtrReader) -> Result<i32> {
type Target = Ref<Process>;

fn write(
this: &Ref<Process>,
_file: &File,
cmd: u32,
reader: &mut UserSlicePtrReader,
) -> Result<i32> {
let thread = this.get_thread(unsafe { rust_helper_current_pid() })?;
match cmd {
bindings::BINDER_SET_MAX_THREADS => this.set_max_threads(reader.read()?),
@@ -782,7 +790,7 @@ impl IoctlHandler for Process {
Ok(0)
}

fn read_write(this: &Self, file: &File, cmd: u32, data: UserSlicePtr) -> Result<i32> {
fn read_write(this: &Ref<Process>, file: &File, cmd: u32, data: UserSlicePtr) -> Result<i32> {
let thread = this.get_thread(unsafe { rust_helper_current_pid() })?;
match cmd {
bindings::BINDER_WRITE_READ => thread.write_read(data, file.is_blocking())?,
@@ -795,12 +803,6 @@ }
}
}

unsafe impl RefCounted for Process {
fn get_count(&self) -> &RefCount {
&self.ref_count
}
}

impl FileOpener<Arc<Context>> for Process {
fn open(ctx: &Arc<Context>) -> Result<Self::Wrapper> {
let process = Self::new(ctx.clone())?;
@@ -893,15 +895,15 @@ impl FileOperations for Process {
}
}

fn ioctl(this: &Process, file: &File, cmd: &mut IoctlCommand) -> Result<i32> {
fn ioctl(this: &Ref<Process>, file: &File, cmd: &mut IoctlCommand) -> Result<i32> {
cmd.dispatch::<Self>(this, file)
}

fn compat_ioctl(this: &Process, file: &File, cmd: &mut IoctlCommand) -> Result<i32> {
fn compat_ioctl(this: &Ref<Process>, file: &File, cmd: &mut IoctlCommand) -> Result<i32> {
cmd.dispatch::<Self>(this, file)
}

fn mmap(this: &Process, _file: &File, vma: &mut bindings::vm_area_struct) -> Result {
fn mmap(this: &Ref<Process>, _file: &File, vma: &mut bindings::vm_area_struct) -> Result {
// TODO: Only group leader is allowed to create mappings.

if vma.vm_start == 0 {
@@ -919,7 +921,7 @@ impl FileOperations for Process {
this.create_mapping(vma)
}

fn poll(this: &Process, file: &File, table: &PollTable) -> Result<u32> {
fn poll(this: &Ref<Process>, file: &File, table: &PollTable) -> Result<u32> {
let thread = this.get_thread(unsafe { rust_helper_current_pid() })?;
let (from_proc, mut mask) = thread.poll(file, table);
if mask == 0 && from_proc && !this.inner.lock().work.is_empty() {
18 changes: 18 additions & 0 deletions rust/helpers.c
@@ -146,6 +146,24 @@ rust_helper_platform_set_drvdata(struct platform_device *pdev,
}
EXPORT_SYMBOL_GPL(rust_helper_platform_set_drvdata);

refcount_t rust_helper_refcount_new(void)
{
return (refcount_t)REFCOUNT_INIT(1);
}
EXPORT_SYMBOL_GPL(rust_helper_refcount_new);

void rust_helper_refcount_inc(refcount_t *r)
{
refcount_inc(r);
}
EXPORT_SYMBOL_GPL(rust_helper_refcount_inc);

bool rust_helper_refcount_dec_and_test(refcount_t *r)
{
return refcount_dec_and_test(r);
}
EXPORT_SYMBOL_GPL(rust_helper_refcount_dec_and_test);

/* We use bindgen's --size_t-is-usize option to bind the C size_t type
* as the Rust usize type, so we can use it in contexts where Rust
* expects a usize like slice (array) indices. usize is defined to be
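The three C helpers above are thin wrappers so Rust can call the refcount_t API. Their semantics can be modelled in userspace Rust as follows; this is a sketch only, since the real refcount_t also saturates on overflow and uses carefully audited barriers:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Userspace model of the refcount_t-backed counting used by `Ref`:
// new() starts at 1 (matching REFCOUNT_INIT(1)), inc() mirrors
// refcount_inc, and dec_and_test() mirrors refcount_dec_and_test,
// returning true exactly once, when the last reference goes away.
struct ModelRefCount(AtomicUsize);

impl ModelRefCount {
    fn new() -> Self {
        ModelRefCount(AtomicUsize::new(1))
    }

    fn inc(&self) {
        // The increment itself needs no ordering guarantees.
        self.0.fetch_add(1, Ordering::Relaxed);
    }

    fn dec_and_test(&self) -> bool {
        // Release on the decrement publishes prior writes to whichever
        // thread observes the count hit zero and frees the object.
        self.0.fetch_sub(1, Ordering::Release) == 1
    }
}
```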
1 change: 1 addition & 0 deletions rust/kernel/lib.rs
@@ -21,6 +21,7 @@
const_panic,
const_raw_ptr_deref,
const_unreachable_unchecked,
receiver_trait,
Member:
This feature is not meant to eventually be stabilized. It doesn't have a tracking issue and Receiver is marked as #[doc(hidden)].

Member:
Given that we are going to have an in-tree copy of the alloc crate, I think it's okay to use it for now. We'll need DispatchFromDyn anyway, which is also hidden. I think eventually we should push for a stable way in Rust to define custom smart pointers that are as usable as Arc.

Member:
Receiver could be stabilized if wanted. For DispatchFromDyn this is not as easy as by definition it relies on an unstable implementation detail. An incorrect implementation of it can cause an ICE or miscompilation I think. Basically for Foo to have a valid DispatchFromDyn impl, Foo<dyn Trait> must consist of only a single fat pointer with the data part being identical to the thin pointer in Foo<ConcreteType>. Rustc assumes that Foo<dyn Trait> is passed in two registers. The second register contains the vtable and is used to get the method function pointer. The first register is then passed as sole argument to the method. Turning Foo<ConcreteType> into Foo<dyn Trait> also depends on another unstable feature called CoerceUnsized, which itself depends on Unsize.
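On stable Rust, only the built-in pointer types can appear in receiver position. The sketch below shows the stable `Arc<Self>` case, which is the behaviour `receiver_trait` lets a custom smart pointer such as `Ref` opt into so that methods like `fn get_thread(self: &Ref<Self>, ...)` in this diff compile; the `Process` type here is a stand-in, not the driver's:

```rust
use std::sync::Arc;

struct Process {
    id: u32,
}

impl Process {
    // `self: Arc<Self>` works on stable Rust because Arc implements the
    // hidden `Receiver` machinery in the standard library; a custom smart
    // pointer needs the unstable `receiver_trait` feature to do the same.
    fn id_of(self: Arc<Self>) -> u32 {
        self.id
    }
}
```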

Author:
To add some more context to what Gary already said: we had a technical meeting on the alloc crate last week. The consensus was that we'll have a "fork" of alloc in the kernel tree; changes directly to it are discouraged but in the event that they are needed, we will work with the Rust community to upstream them so in steady-state our fork and upstream should be the same.

A replacement for Arc was explicitly discussed, and @joshtriplett mentioned that it may be reasonable for Rust to allow Arc-like implementations that are not so tied to the compiler, and that we should explore this as a way to minimise changes to our fork of alloc. Until that is a reality, once our fork is committed we can move this Ref implementation into our fork of alloc if we want to minimise the risk of other components accidentally relying on the unstable features required by Ref.

try_reserve
)]
#![deny(clippy::complexity)]