Commit 8461a9e

image-dragon authored and Kernel Patches Daemon committed
sched: fix some typos in include/linux/preempt.h
There are some typos in the comments of migrate in include/linux/preempt.h:

  elegible -> eligible
  it's -> its
  migirate_disable -> migrate_disable
  abritrary -> arbitrary

Just fix them.

Signed-off-by: Menglong Dong <[email protected]>
1 parent e8a9168 commit 8461a9e

File tree

1 file changed: +4 −4 lines changed


include/linux/preempt.h

Lines changed: 4 additions & 4 deletions
@@ -372,7 +372,7 @@ static inline void preempt_notifier_init(struct preempt_notifier *notifier,
 /*
  * Migrate-Disable and why it is undesired.
  *
- * When a preempted task becomes elegible to run under the ideal model (IOW it
+ * When a preempted task becomes eligible to run under the ideal model (IOW it
  * becomes one of the M highest priority tasks), it might still have to wait
  * for the preemptee's migrate_disable() section to complete. Thereby suffering
  * a reduction in bandwidth in the exact duration of the migrate_disable()
@@ -387,7 +387,7 @@ static inline void preempt_notifier_init(struct preempt_notifier *notifier,
  * - a lower priority tasks; which under preempt_disable() could've instantly
  *   migrated away when another CPU becomes available, is now constrained
  *   by the ability to push the higher priority task away, which might itself be
- *   in a migrate_disable() section, reducing it's available bandwidth.
+ *   in a migrate_disable() section, reducing its available bandwidth.
  *
  * IOW it trades latency / moves the interference term, but it stays in the
  * system, and as long as it remains unbounded, the system is not fully
@@ -399,15 +399,15 @@ static inline void preempt_notifier_init(struct preempt_notifier *notifier,
  * PREEMPT_RT breaks a number of assumptions traditionally held. By forcing a
  * number of primitives into becoming preemptible, they would also allow
  * migration. This turns out to break a bunch of per-cpu usage. To this end,
- * all these primitives employ migirate_disable() to restore this implicit
+ * all these primitives employ migrate_disable() to restore this implicit
  * assumption.
  *
  * This is a 'temporary' work-around at best. The correct solution is getting
  * rid of the above assumptions and reworking the code to employ explicit
  * per-cpu locking or short preempt-disable regions.
  *
  * The end goal must be to get rid of migrate_disable(), alternatively we need
- * a schedulability theory that does not depend on abritrary migration.
+ * a schedulability theory that does not depend on arbitrary migration.
  *
  *
  * Notes on the implementation.
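For context on the comment being fixed: it describes primitives that wrap per-cpu accesses in migrate_disable() so the "stays on this CPU" assumption survives PREEMPT_RT's preemptible locks. The following kernel-style sketch (not part of this commit; `my_percpu_counter` and `bump_my_counter()` are hypothetical names) illustrates that pattern:

```c
/*
 * Sketch only: hypothetical code illustrating the pattern described in
 * the comment. migrate_disable()/migrate_enable() and the per-cpu
 * helpers are real kernel APIs; the counter and function are made up.
 */
#include <linux/preempt.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(unsigned long, my_percpu_counter);

static void bump_my_counter(void)
{
	/*
	 * Under PREEMPT_RT this section may still be preempted, but
	 * migrate_disable() pins the task to its current CPU, so the
	 * this_cpu_* operation keeps targeting one consistent counter.
	 */
	migrate_disable();
	this_cpu_inc(my_percpu_counter);
	migrate_enable();
}
```

As the comment itself notes, this trades migration freedom for correctness and is considered a temporary work-around rather than a long-term design.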
