-rw-r--r--book/en/src/SUMMARY.md7
-rw-r--r--book/en/src/internals.md5
-rw-r--r--book/en/src/internals/access.md158
-rw-r--r--book/en/src/internals/ceilings.md79
-rw-r--r--book/en/src/internals/critical-sections.md515
-rw-r--r--book/en/src/internals/interrupt-configuration.md74
-rw-r--r--book/en/src/internals/late-resources.md115
-rw-r--r--book/en/src/internals/non-reentrancy.md84
-rw-r--r--book/en/src/internals/tasks.md398
-rw-r--r--book/en/src/internals/timer-queue.md368
10 files changed, 1798 insertions, 5 deletions
diff --git a/book/en/src/SUMMARY.md b/book/en/src/SUMMARY.md
index bde2b8d2..873410a5 100644
--- a/book/en/src/SUMMARY.md
+++ b/book/en/src/SUMMARY.md
@@ -10,6 +10,11 @@
- [Starting a new project](./by-example/new.md)
- [Tips & tricks](./by-example/tips.md)
- [Under the hood](./internals.md)
+ - [Interrupt configuration](./internals/interrupt-configuration.md)
+ - [Non-reentrancy](./internals/non-reentrancy.md)
+ - [Access control](./internals/access.md)
+ - [Late resources](./internals/late-resources.md)
+ - [Critical sections](./internals/critical-sections.md)
- [Ceiling analysis](./internals/ceilings.md)
- - [Task dispatcher](./internals/tasks.md)
+ - [Software tasks](./internals/tasks.md)
- [Timer queue](./internals/timer-queue.md)
diff --git a/book/en/src/internals.md b/book/en/src/internals.md
index 0ef55e62..15da97a8 100644
--- a/book/en/src/internals.md
+++ b/book/en/src/internals.md
@@ -4,3 +4,8 @@ This section describes the internals of the RTFM framework at a *high level*.
Low level details like the parsing and code generation done by the procedural
macro (`#[app]`) will not be explained here. The focus will be the analysis of
the user specification and the data structures used by the runtime.
+
+We highly suggest that you read the embedonomicon section on [concurrency]
+before you dive into this material.
+
+[concurrency]: https://github.com/rust-embedded/embedonomicon/pull/48
diff --git a/book/en/src/internals/access.md b/book/en/src/internals/access.md
new file mode 100644
index 00000000..513cef1b
--- /dev/null
+++ b/book/en/src/internals/access.md
@@ -0,0 +1,158 @@
+# Access control
+
+One of the core foundations of RTFM is access control. Controlling which parts
+of the program can access which static variables is instrumental to enforcing
+memory safety.
+
+Static variables are used to share state between interrupt handlers, or between
+interrupt handlers and the bottom execution context, `main`. In normal Rust
+code it's hard to have fine-grained control over which functions can access a
+static variable because static variables can be accessed from any function that
+resides in the same scope in which they are declared. Modules give some control
+over how a static variable can be accessed but they are not flexible enough.
+
+To achieve this fine-grained access control, where tasks can only access the
+static variables (resources) that they have specified in their RTFM attribute,
+the RTFM framework performs a source code level transformation. This
+transformation consists of placing the resources (static variables) specified by
+the user *inside* a `const` item and the user code *outside* the `const` item.
+This makes it impossible for the user code to refer to these static variables.
+
+Access to the resources is then given to each task using a `Resources` struct
+whose fields correspond to the resources the task has access to. There's one
+such struct per task and the `Resources` struct is initialized with either a
+mutable reference (`&mut`) to the static variables or with a resource proxy (see
+section on [critical sections](critical-sections.html)).
+
+The code below is an example of the kind of source level transformation that
+happens behind the scenes:
+
+``` rust
+#[rtfm::app(device = ..)]
+const APP: () = {
+    static mut X: u64 = 0;
+    static mut Y: bool = false;
+
+ #[init(resources = [Y])]
+ fn init(c: init::Context) {
+ // .. user code ..
+ }
+
+ #[interrupt(binds = UART0, resources = [X])]
+ fn foo(c: foo::Context) {
+ // .. user code ..
+ }
+
+ #[interrupt(binds = UART1, resources = [X, Y])]
+ fn bar(c: bar::Context) {
+ // .. user code ..
+ }
+
+ // ..
+};
+```
+
+The framework produces code like this:
+
+``` rust
+fn init(c: init::Context) {
+ // .. user code ..
+}
+
+fn foo(c: foo::Context) {
+ // .. user code ..
+}
+
+fn bar(c: bar::Context) {
+ // .. user code ..
+}
+
+// Public API
+pub mod init {
+ pub struct Context<'a> {
+ pub resources: Resources<'a>,
+ // ..
+ }
+
+ pub struct Resources<'a> {
+ pub Y: &'a mut bool,
+ }
+}
+
+pub mod foo {
+ pub struct Context<'a> {
+ pub resources: Resources<'a>,
+ // ..
+ }
+
+ pub struct Resources<'a> {
+ pub X: &'a mut u64,
+ }
+}
+
+pub mod bar {
+ pub struct Context<'a> {
+ pub resources: Resources<'a>,
+ // ..
+ }
+
+ pub struct Resources<'a> {
+ pub X: &'a mut u64,
+ pub Y: &'a mut bool,
+ }
+}
+
+/// Implementation details
+const APP: () = {
+ // everything inside this `const` item is hidden from user code
+
+ static mut X: u64 = 0;
+    static mut Y: bool = false;
+
+ // the real entry point of the program
+ unsafe fn main() -> ! {
+ interrupt::disable();
+
+ // ..
+
+ // call into user code; pass references to the static variables
+ init(init::Context {
+ resources: init::Resources {
+                Y: &mut Y,
+ },
+ // ..
+ });
+
+ // ..
+
+ interrupt::enable();
+
+ // ..
+ }
+
+ // interrupt handler that `foo` binds to
+ #[no_mangle]
+ unsafe fn UART0() {
+ // call into user code; pass references to the static variables
+ foo(foo::Context {
+ resources: foo::Resources {
+ X: &mut X,
+ },
+ // ..
+ });
+ }
+
+ // interrupt handler that `bar` binds to
+ #[no_mangle]
+ unsafe fn UART1() {
+ // call into user code; pass references to the static variables
+ bar(bar::Context {
+ resources: bar::Resources {
+ X: &mut X,
+ Y: &mut Y,
+ },
+ // ..
+ });
+ }
+};
+```
diff --git a/book/en/src/internals/ceilings.md b/book/en/src/internals/ceilings.md
index 2c645a4d..c13df537 100644
--- a/book/en/src/internals/ceilings.md
+++ b/book/en/src/internals/ceilings.md
@@ -1,3 +1,80 @@
# Ceiling analysis
-**TODO**
+A resource *priority ceiling*, or just *ceiling*, is the dynamic priority that
+any task must have to safely access the resource memory. Ceiling analysis is
+relatively simple but critical to the memory safety of RTFM applications.
+
+To compute the ceiling of a resource we must first collect a list of tasks that
+have access to the resource -- as the RTFM framework enforces access control to
+resources at compile time it also has access to this information at compile
+time. The ceiling of the resource is simply the highest logical priority among
+those tasks.
+
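+As a sketch, the computation boils down to a `max` over the priorities of those
+tasks. The `tasks_with_access` list below is a hypothetical input; the real
+analysis runs inside the procedural macro:
+
+``` rust
+/// Sketch: the ceiling of a resource is the highest logical priority among
+/// the tasks that can access it (`0` if only `idle` has access to it)
+fn ceiling(tasks_with_access: &[(&str, u8)]) -> u8 {
+    tasks_with_access.iter().map(|&(_, priority)| priority).max().unwrap_or(0)
+}
+```
+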
+`init` and `idle` are not proper tasks but they can access resources so they
+need to be considered in the ceiling analysis. `idle` is treated as a task
+with a logical priority of `0` whereas `init` is completely omitted from the
+analysis -- the reason is that `init` never uses (or needs) critical
+sections to access static variables.
+
+In the previous section we showed that a shared resource may appear as a mutable
+reference or behind a proxy depending on the task that has access to it. Which
+version is presented to the task depends on the task priority and the resource
+ceiling. If the task priority is the same as the resource ceiling then the task
+gets a mutable reference to the resource memory, otherwise the task gets a
+proxy -- this also applies to `idle`. `init` is special: it always gets a
+mutable reference to resources.
+
+An example to illustrate the ceiling analysis:
+
+``` rust
+#[rtfm::app(device = ..)]
+const APP: () = {
+ // accessed by `foo` (prio = 1) and `bar` (prio = 2)
+ // CEILING = 2
+ static mut X: u64 = 0;
+
+ // accessed by `idle` (prio = 0)
+ // CEILING = 0
+ static mut Y: u64 = 0;
+
+    #[init(resources = [X, Y])]
+ fn init(c: init::Context) {
+ // mutable reference because this is `init`
+ let x: &mut u64 = c.resources.X;
+
+ // mutable reference because this is `init`
+ let y: &mut u64 = c.resources.Y;
+
+ // ..
+ }
+
+ // PRIORITY = 0
+ #[idle(resources = [Y])]
+ fn idle(c: idle::Context) -> ! {
+ // mutable reference because priority (0) == resource ceiling (0)
+ let y: &'static mut u64 = c.resources.Y;
+
+ loop {
+ // ..
+ }
+ }
+
+ #[interrupt(binds = UART0, priority = 1, resources = [X])]
+ fn foo(c: foo::Context) {
+ // resource proxy because task priority (1) < resource ceiling (2)
+ let x: resources::X = c.resources.X;
+
+ // ..
+ }
+
+ #[interrupt(binds = UART1, priority = 2, resources = [X])]
+    fn bar(c: bar::Context) {
+ // mutable reference because task priority (2) == resource ceiling (2)
+ let x: &mut u64 = c.resources.X;
+
+ // ..
+ }
+
+ // ..
+};
+```
diff --git a/book/en/src/internals/critical-sections.md b/book/en/src/internals/critical-sections.md
new file mode 100644
index 00000000..54f02ac8
--- /dev/null
+++ b/book/en/src/internals/critical-sections.md
@@ -0,0 +1,515 @@
+# Critical sections
+
+When a resource (static variable) is shared between two, or more, tasks that run
+at different priorities some form of mutual exclusion is required to access the
+memory in a data race free manner. In RTFM we use priority-based critical
+sections to guarantee mutual exclusion (see the [Immediate Priority Ceiling
+Protocol][ipcp]).
+
+[ipcp]: https://en.wikipedia.org/wiki/Priority_ceiling_protocol
+
+The critical section consists of temporarily raising the *dynamic* priority of
+the task. While a task is within this critical section all the other tasks that
+may request the resource are *not allowed to start*.
+
+How high must the dynamic priority be to ensure mutual exclusion on a particular
+resource? The [ceiling analysis](ceilings.html) is in charge of
+answering that question and will be discussed in the next section. This section
+will focus on the implementation of the critical section.
+
+## Resource proxy
+
+For simplicity, let's look at a resource shared by two tasks that run at
+different priorities. Clearly one of the tasks can preempt the other; to prevent
+a data race the *lower priority* task must use a critical section when it needs
+to modify the shared memory. On the other hand, the higher priority task can
+directly modify the shared memory because it can't be preempted by the lower
+priority task. To enforce the use of a critical section on the lower priority
+task we give it a *resource proxy*, whereas we give a mutable reference
+(`&mut-`) to the higher priority task.
+
+The example below shows the different types handed out to each task:
+
+``` rust
+#[rtfm::app(device = ..)]
+const APP: () = {
+ static mut X: u64 = 0;
+
+ #[interrupt(binds = UART0, priority = 1, resources = [X])]
+ fn foo(c: foo::Context) {
+ // resource proxy
+ let mut x: resources::X = c.resources.X;
+
+ x.lock(|x: &mut u64| {
+ // critical section
+ *x += 1
+ });
+ }
+
+ #[interrupt(binds = UART1, priority = 2, resources = [X])]
+    fn bar(c: bar::Context) {
+ let mut x: &mut u64 = c.resources.X;
+
+ *x += 1;
+ }
+
+ // ..
+};
+```
+
+Now let's see how these types are created by the framework.
+
+``` rust
+fn foo(c: foo::Context) {
+ // .. user code ..
+}
+
+fn bar(c: bar::Context) {
+ // .. user code ..
+}
+
+pub mod resources {
+ pub struct X {
+ // ..
+ }
+}
+
+pub mod foo {
+ pub struct Resources {
+ pub X: resources::X,
+ }
+
+ pub struct Context {
+ pub resources: Resources,
+ // ..
+ }
+}
+
+pub mod bar {
+ pub struct Resources<'a> {
+ pub X: rtfm::Exclusive<'a, u64>, // newtype over `&'a mut u64`
+ }
+
+    pub struct Context<'a> {
+        pub resources: Resources<'a>,
+ // ..
+ }
+}
+
+const APP: () = {
+ static mut X: u64 = 0;
+
+ impl rtfm::Mutex for resources::X {
+ type T = u64;
+
+ fn lock<R>(&mut self, f: impl FnOnce(&mut u64) -> R) -> R {
+ // we'll check this in detail later
+ }
+ }
+
+ #[no_mangle]
+ unsafe fn UART0() {
+ foo(foo::Context {
+ resources: foo::Resources {
+ X: resources::X::new(/* .. */),
+ },
+ // ..
+ })
+ }
+
+ #[no_mangle]
+ unsafe fn UART1() {
+ bar(bar::Context {
+ resources: bar::Resources {
+ X: rtfm::Exclusive(&mut X),
+ },
+ // ..
+ })
+ }
+};
+```
+
+## `lock`
+
+Let's now zoom into the critical section itself. In this example, we have to
+raise the dynamic priority to at least `2` to prevent a data race. On the
+Cortex-M architecture the dynamic priority can be changed by writing to the
+`BASEPRI` register.
+
+The semantics of the `BASEPRI` register are as follows:
+
+- Writing a value of `0` to `BASEPRI` disables its functionality.
+- Writing a non-zero value to `BASEPRI` changes the priority level required for
+ interrupt preemption. However, this only has an effect when the written value
+  is *lower* than the priority level of the current execution context -- note
+  that a lower hardware priority level means a higher logical priority.
+
+Thus the dynamic priority at any point in time can be computed as
+
+``` rust
+dynamic_priority = max(hw2logical(BASEPRI), hw2logical(static_priority))
+```
+
+Where `static_priority` is the priority programmed in the NVIC for the current
+interrupt, or a logical `0` when the current context is `idle`.
+
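+As a sketch, the two mappings could look like this on a device with 3 priority
+bits (`NVIC_PRIO_BITS = 3`) -- `logical2hw` mirrors the function shown in the
+interrupt configuration section, and the concrete `BASEPRI` values used in this
+chapter assume this configuration:
+
+``` rust
+const NVIC_PRIO_BITS: u8 = 3;
+
+// hardware / NVIC priority -> logical priority
+fn hw2logical(hw: u8) -> u8 {
+    if hw == 0 {
+        // a BASEPRI value of `0` disables the mechanism; treat it as logical `0`
+        0
+    } else {
+        (1 << NVIC_PRIO_BITS) - (hw >> (8 - NVIC_PRIO_BITS))
+    }
+}
+
+// logical priority -> hardware / NVIC priority
+fn logical2hw(logical: u8) -> u8 {
+    ((1 << NVIC_PRIO_BITS) - logical) << (8 - NVIC_PRIO_BITS)
+}
+```
+
+For example, `logical2hw(2)` evaluates to `192`, the value written to `BASEPRI`
+in the critical section below.
+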
+In this particular example we could implement the critical section as follows:
+
+> **NOTE:** this is a simplified implementation
+
+``` rust
+impl rtfm::Mutex for resources::X {
+ type T = u64;
+
+ fn lock<R, F>(&mut self, f: F) -> R
+ where
+ F: FnOnce(&mut u64) -> R,
+ {
+ unsafe {
+ // start of critical section: raise dynamic priority to `2`
+ asm!("msr BASEPRI, 192" : : : "memory" : "volatile");
+
+ // run user code within the critical section
+ let r = f(&mut implementation_defined_name_for_X);
+
+ // end of critical section: restore dynamic priority to its static value (`1`)
+ asm!("msr BASEPRI, 0" : : : "memory" : "volatile");
+
+ r
+ }
+ }
+}
+```
+
+Here it's important to use the `"memory"` clobber in the `asm!` block. It
+prevents the compiler from reordering memory operations across it. This is
+important because accessing the variable `X` outside the critical section would
+result in a data race.
+
+It's important to note that the signature of the `lock` method prevents nested
+calls to it on the same proxy. This is required for memory safety, as nested
+calls would produce multiple mutable references (`&mut-`) to `X`, breaking
+Rust's aliasing rules. See below:
+
+``` rust
+#[interrupt(binds = UART0, priority = 1, resources = [X])]
+fn foo(c: foo::Context) {
+ // resource proxy
+ let mut res: resources::X = c.resources.X;
+
+ res.lock(|x: &mut u64| {
+ res.lock(|alias: &mut u64| {
+ //~^ error: `res` has already been mutably borrowed
+ // ..
+ });
+ });
+}
+```
+
+## Nesting
+
+Nesting calls to `lock` on the *same* resource must be rejected by the compiler
+for memory safety but nesting `lock` calls on *different* resources is a valid
+operation. In that case we want to make sure that nesting critical sections
+never results in lowering the dynamic priority, as that would be unsound, and we
+also want to optimize the number of writes to the `BASEPRI` register and
+compiler fences. To that end we'll track the dynamic priority of the task using
+a stack variable and use that to decide whether to write to `BASEPRI` or not. In
+practice, the stack variable will be optimized away by the compiler but it still
+provides extra information to the optimizer.
+
+Consider this program:
+
+``` rust
+#[rtfm::app(device = ..)]
+const APP: () = {
+ static mut X: u64 = 0;
+ static mut Y: u64 = 0;
+
+ #[init]
+    fn init(c: init::Context) {
+ rtfm::pend(Interrupt::UART0);
+ }
+
+ #[interrupt(binds = UART0, priority = 1, resources = [X, Y])]
+ fn foo(c: foo::Context) {
+ let mut x = c.resources.X;
+ let mut y = c.resources.Y;
+
+ y.lock(|y| {
+ *y += 1;
+
+        x.lock(|x| {
+            *x += 1;
+        });
+
+ *y += 1;
+ });
+
+ // mid-point
+
+ x.lock(|x| {
+ *x += 1;
+
+ y.lock(|y| {
+ *y += 1;
+ });
+
+ *x += 1;
+ })
+ }
+
+ #[interrupt(binds = UART1, priority = 2, resources = [X])]
+    fn bar(c: bar::Context) {
+ // ..
+ }
+
+ #[interrupt(binds = UART2, priority = 3, resources = [Y])]
+    fn baz(c: baz::Context) {
+ // ..
+ }
+
+ // ..
+};
+```
+
+The code generated by the framework looks like this:
+
+``` rust
+// omitted: user code
+
+pub mod resources {
+ pub struct X<'a> {
+ priority: &'a Cell<u8>,
+ }
+
+ impl<'a> X<'a> {
+ pub unsafe fn new(priority: &'a Cell<u8>) -> Self {
+ X { priority }
+ }
+
+ pub unsafe fn priority(&self) -> &Cell<u8> {
+ self.priority
+ }
+ }
+
+ // repeat for `Y`
+}
+
+pub mod foo {
+    pub struct Context<'a> {
+        pub resources: Resources<'a>,
+ // ..
+ }
+
+ pub struct Resources<'a> {
+ pub X: resources::X<'a>,
+ pub Y: resources::Y<'a>,
+ }
+}
+
+const APP: () = {
+ #[no_mangle]
+ unsafe fn UART0() {
+ // the static priority of this interrupt (as specified by the user)
+ const PRIORITY: u8 = 1;
+
+        // take a snapshot of the BASEPRI
+ let initial: u8;
+ asm!("mrs $0, BASEPRI" : "=r"(initial) : : : "volatile");
+
+ let priority = Cell::new(PRIORITY);
+ foo(foo::Context {
+ resources: foo::Resources::new(&priority),
+ // ..
+ });
+
+ // roll back the BASEPRI to the snapshot value we took before
+ asm!("msr BASEPRI, $0" : : "r"(initial) : : "volatile");
+ }
+
+ // similarly for `UART1`
+
+ impl<'a> rtfm::Mutex for resources::X<'a> {
+ type T = u64;
+
+ fn lock<R>(&mut self, f: impl FnOnce(&mut u64) -> R) -> R {
+ unsafe {
+ // the priority ceiling of this resource
+ const CEILING: u8 = 2;
+
+ let current = self.priority().get();
+ if current < CEILING {
+ // raise dynamic priority
+ self.priority().set(CEILING);
+ let hw = logical2hw(CEILING);
+ asm!("msr BASEPRI, $0" : : "r"(hw) : "memory" : "volatile");
+
+ let r = f(&mut X);
+
+ // restore dynamic priority
+ let hw = logical2hw(current);
+ asm!("msr BASEPRI, $0" : : "r"(hw) : "memory" : "volatile");
+ self.priority().set(current);
+
+ r
+ } else {
+ // dynamic priority is high enough
+ f(&mut X)
+ }
+ }
+ }
+ }
+
+ // repeat for `Y`
+};
+```
+
+In the end the compiler will optimize the function `foo` into something like
+this:
+
+``` rust
+fn foo(c: foo::Context) {
+ // NOTE: BASEPRI contains the value `0` (its reset value) at this point
+
+ // raise dynamic priority to `3`
+ unsafe { asm!("msr BASEPRI, 160" : : : "memory" : "volatile") }
+
+ // the two operations on `Y` are merged into one
+ Y += 2;
+
+ // BASEPRI is not modified to access `X` because the dynamic priority is high enough
+ X += 1;
+
+ // lower (restore) the dynamic priority to `1`
+ unsafe { asm!("msr BASEPRI, 224" : : : "memory" : "volatile") }
+
+ // mid-point
+
+ // raise dynamic priority to `2`
+ unsafe { asm!("msr BASEPRI, 192" : : : "memory" : "volatile") }
+
+ X += 1;
+
+ // raise dynamic priority to `3`
+ unsafe { asm!("msr BASEPRI, 160" : : : "memory" : "volatile") }
+
+ Y += 1;
+
+ // lower (restore) the dynamic priority to `2`
+ unsafe { asm!("msr BASEPRI, 192" : : : "memory" : "volatile") }
+
+ // NOTE: it would be sound to merge this operation on X with the previous one but
+ // compiler fences are coarse grained and prevent such optimization
+ X += 1;
+
+ // lower (restore) the dynamic priority to `1`
+ unsafe { asm!("msr BASEPRI, 224" : : : "memory" : "volatile") }
+
+ // NOTE: BASEPRI contains the value `224` at this point
+ // the UART0 handler will restore the value to `0` before returning
+}
+```
+
+## The BASEPRI invariant
+
+An invariant that the RTFM framework has to preserve is that the value of the
+BASEPRI at the start of an *interrupt* handler must be the same value it has
+when the interrupt handler returns. BASEPRI may change during the execution of
+the interrupt handler but running an interrupt handler from start to finish
+should not result in an observable change of BASEPRI.
+
+This invariant needs to be preserved to avoid raising the dynamic priority of a
+handler through preemption. This is best observed in the following example, in
+which we assume that the code generator has a bug and *forgets* to roll back
+`BASEPRI` in the `UART1` handler:
+
+``` rust
+#[rtfm::app(device = ..)]
+const APP: () = {
+ static mut X: u64 = 0;
+
+ #[init]
+    fn init(c: init::Context) {
+ // `foo` will run right after `init` returns
+ rtfm::pend(Interrupt::UART0);
+ }
+
+    #[interrupt(binds = UART0, priority = 1)]
+    fn foo(c: foo::Context) {
+ // BASEPRI is `0` at this point; the dynamic priority is currently `1`
+
+ // `bar` will preempt `foo` at this point
+ rtfm::pend(Interrupt::UART1);
+
+        // BASEPRI is `192` at this point (due to the bug); the dynamic priority is now `2`
+ // this function returns to `idle`
+ }
+
+    #[interrupt(binds = UART1, priority = 2, resources = [X])]
+    fn bar(c: bar::Context) {
+ // BASEPRI is `0` (dynamic priority = 2)
+
+        c.resources.X.lock(|x| {
+ // BASEPRI is raised to `160` (dynamic priority = 3)
+
+ // ..
+ });
+
+ // BASEPRI is restored to `192` (dynamic priority = 2)
+ }
+
+ #[idle]
+    fn idle(c: idle::Context) -> ! {
+        // BASEPRI is `192` (due to the bug); dynamic priority = 2
+
+ // this has no effect due to the BASEPRI value
+ // the task `foo` will never be executed again
+ rtfm::pend(Interrupt::UART0);
+
+ loop {
+ // ..
+ }
+ }
+
+    #[interrupt(binds = UART2, priority = 3, resources = [X])]
+    fn baz(c: baz::Context) {
+ // ..
+ }
+
+};
+```
+
+To recap, the bug we are positing is that the generated `UART1` handler
+*forgets* to roll back `BASEPRI`:
+
+``` rust
+// code generated by RTFM
+
+const APP: () = {
+ // ..
+
+ #[no_mangle]
+ unsafe fn UART1() {
+ // the static priority of this interrupt (as specified by the user)
+ const PRIORITY: u8 = 2;
+
+        // take a snapshot of the BASEPRI
+ let initial: u8;
+ asm!("mrs $0, BASEPRI" : "=r"(initial) : : : "volatile");
+
+ let priority = Cell::new(PRIORITY);
+ bar(bar::Context {
+ resources: bar::Resources::new(&priority),
+ // ..
+ });
+
+ // BUG: FORGOT to roll back the BASEPRI to the snapshot value we took before
+ // asm!("msr BASEPRI, $0" : : "r"(initial) : : "volatile");
+ }
+};
+```
+
+The consequence is that `idle` will run at a dynamic priority of `2` and in fact
+the system will never again run at a dynamic priority lower than `2`. This
+doesn't compromise the memory safety of the program but affects task scheduling:
+in this particular case tasks with a priority of `1` will never get a chance to
+run.
diff --git a/book/en/src/internals/interrupt-configuration.md b/book/en/src/internals/interrupt-configuration.md
new file mode 100644
index 00000000..b34b3089
--- /dev/null
+++ b/book/en/src/internals/interrupt-configuration.md
@@ -0,0 +1,74 @@
+# Interrupt configuration
+
+Interrupts are core to the operation of RTFM applications. Correctly setting
+interrupt priorities and ensuring they remain fixed at runtime is a
+prerequisite for the memory safety of the application.
+
+The RTFM framework exposes interrupt priorities as something that is declared at
+compile time. However, this static configuration must be programmed into the
+relevant registers during the initialization of the application. The interrupt
+configuration is done before the `init` function runs.
+
+This example gives you an idea of the code that the RTFM framework runs:
+
+``` rust
+#[rtfm::app(device = ..)]
+const APP: () = {
+ #[init]
+ fn init(c: init::Context) {
+ // .. user code ..
+ }
+
+ #[idle]
+ fn idle(c: idle::Context) -> ! {
+ // .. user code ..
+ }
+
+ #[interrupt(binds = UART0, priority = 2)]
+ fn foo(c: foo::Context) {
+ // .. user code ..
+ }
+};
+```
+
+The framework generates an entry point that looks like this:
+
+``` rust
+// the real entry point of the program
+#[no_mangle]
+unsafe fn main() -> ! {
+ // transforms a logical priority into a hardware / NVIC priority
+ fn logical2hw(priority: u8) -> u8 {
+ // this value comes from the device crate
+ const NVIC_PRIO_BITS: u8 = ..;
+
+        // the NVIC encodes priority in the higher bits of a byte
+        // also a bigger number means lower priority
+        ((1 << NVIC_PRIO_BITS) - priority) << (8 - NVIC_PRIO_BITS)
+ }
+
+ cortex_m::interrupt::disable();
+
+    let mut core = cortex_m::Peripherals::steal();
+
+ core.NVIC.enable(Interrupt::UART0);
+
+ // value specified by the user
+ let uart0_prio = 2;
+
+ // check at compile time that the specified priority is within the supported range
+    let _ = [(); (1 << NVIC_PRIO_BITS) - (uart0_prio as usize)];
+
+ core.NVIC.set_priority(Interrupt::UART0, logical2hw(uart0_prio));
+
+ // call into user code
+ init(/* .. */);
+
+ // ..
+
+ cortex_m::interrupt::enable();
+
+ // call into user code
+ idle(/* .. */)
+}
+```
diff --git a/book/en/src/internals/late-resources.md b/book/en/src/internals/late-resources.md
new file mode 100644
index 00000000..71157f28
--- /dev/null
+++ b/book/en/src/internals/late-resources.md
@@ -0,0 +1,115 @@
+# Late resources
+
+Some resources are initialized at runtime after the `init` function returns.
+It's important that these resources (static variables) are fully initialized
+before tasks are allowed to run; that is, they must be initialized while
+interrupts are disabled.
+
+The example below shows the kind of code that the framework generates to
+initialize late resources.
+
+``` rust
+#[rtfm::app(device = ..)]
+const APP: () = {
+ // late resource
+ static mut X: Thing = {};
+
+ #[init]
+    fn init(c: init::Context) -> init::LateResources {
+ // ..
+
+ init::LateResources {
+ X: Thing::new(..),
+ }
+ }
+
+    #[interrupt(binds = UART0, resources = [X])]
+ fn foo(c: foo::Context) {
+ let x: &mut Thing = c.resources.X;
+
+ x.frob();
+
+ // ..
+ }
+
+ // ..
+};
+```
+
+The code generated by the framework looks like this:
+
+``` rust
+fn init(c: init::Context) -> init::LateResources {
+ // .. user code ..
+}
+
+fn foo(c: foo::Context) {
+ // .. user code ..
+}
+
+// Public API
+pub mod init {
+ pub struct LateResources {
+ pub X: Thing,
+ }
+
+ // ..
+}
+
+pub mod foo {
+ pub struct Resources<'a> {
+ pub X: &'a mut Thing,
+ }
+
+ pub struct Context<'a> {
+ pub resources: Resources<'a>,
+ // ..
+ }
+}
+
+/// Implementation details
+const APP: () = {
+ // uninitialized static
+ static mut X: MaybeUninit<Thing> = MaybeUninit::uninit();
+
+ #[no_mangle]
+ unsafe fn main() -> ! {
+ cortex_m::interrupt::disable();
+
+ // ..
+
+ let late = init(..);
+
+ // initialization of late resources
+ X.write(late.X);
+
+ cortex_m::interrupt::enable(); //~ compiler fence
+
+ // exceptions, interrupts and tasks can preempt `main` at this point
+
+ idle(..)
+ }
+
+ #[no_mangle]
+ unsafe fn UART0() {
+ foo(foo::Context {
+ resources: foo::Resources {
+ // `X` has been initialized at this point
+ X: &mut *X.as_mut_ptr(),
+ },
+ // ..
+ })
+ }
+};
+```
+
+An important detail here is that `interrupt::enable` behaves like a *compiler
+fence*, which prevents the compiler from reordering the write to `X` to *after*
+`interrupt::enable`. If the compiler were to do that kind of reordering there
+would be a data race between that write and whatever operation `foo` performs on
+`X`.
+
+Architectures with more complex instruction pipelines may need a memory barrier
+(`atomic::fence`) instead of a compiler fence to fully flush the write operation
+before interrupts are re-enabled. The ARM Cortex-M architecture doesn't need a
+memory barrier in this single-core context.
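+
+As an illustration only (this is not code the framework currently generates),
+such a port might insert an explicit fence right before re-enabling interrupts:
+
+``` rust
+use core::sync::atomic::{fence, Ordering};
+
+unsafe fn enable_interrupts_after_init() {
+    // make the writes to the late resources visible to any interrupt
+    // handler that may preempt this context from here on
+    fence(Ordering::SeqCst);
+
+    cortex_m::interrupt::enable();
+}
+```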
diff --git a/book/en/src/internals/non-reentrancy.md b/book/en/src/internals/non-reentrancy.md
new file mode 100644
index 00000000..408a012e
--- /dev/null
+++ b/book/en/src/internals/non-reentrancy.md
@@ -0,0 +1,84 @@
+# Non-reentrancy
+
+In RTFM, task handlers are *not* reentrant. Reentering a task handler can break
+Rust aliasing rules and lead to *undefined behavior*. A task handler can be
+reentered in one of two ways: in software or by hardware.
+
+## In software
+
+To reenter a task handler in software, its underlying interrupt handler must be
+invoked using FFI (see example below). FFI requires `unsafe` code so end users
+are discouraged from directly invoking an interrupt handler.
+
+``` rust
+#[rtfm::app(device = ..)]
+const APP: () = {
+ static mut X: u64 = 0;
+
+ #[init]
+ fn init(c: init::Context) { .. }
+
+ #[interrupt(binds = UART0, resources = [X])]
+ fn foo(c: foo::Context) {
+ let x: &mut u64 = c.resources.X;
+
+ *x = 1;
+
+ //~ `bar` can preempt `foo` at this point
+
+ *x = 2;
+
+ if *x == 2 {
+ // something
+ }
+ }
+
+ #[interrupt(binds = UART1, priority = 2)]
+    fn bar(c: bar::Context) {
+ extern "C" {
+ fn UART0();
+ }
+
+ // this interrupt handler will invoke task handler `foo` resulting
+ // in mutable aliasing of the static variable `X`
+ unsafe { UART0() }
+ }
+};
+```
+
+The RTFM framework must generate the interrupt handler code that calls the user
+defined task handlers. We are careful to make these handlers `unsafe` and / or
+impossible to call from user code.
+
+The above example expands into:
+
+``` rust
+fn foo(c: foo::Context) {
+ // .. user code ..
+}
+
+fn bar(c: bar::Context) {
+ // .. user code ..
+}
+
+const APP: () = {
+ // everything in this block is not visible to user code
+
+ #[no_mangle]
+    unsafe fn UART0() {
+ foo(..);
+ }
+
+ #[no_mangle]
+    unsafe fn UART1() {
+ bar(..);
+ }
+};
+```
+
+## By hardware
+
+A task handler can also be reentered without software intervention. This can
+occur if the same handler is assigned to two or more interrupts in the vector
+table, but there's no syntax for this kind of configuration in the RTFM
+framework.
diff --git a/book/en/src/internals/tasks.md b/book/en/src/internals/tasks.md
index 85f783fb..432c2e6e 100644
--- a/book/en/src/internals/tasks.md
+++ b/book/en/src/internals/tasks.md
@@ -1,3 +1,397 @@
-# Task dispatcher
+# Software tasks
-**TODO**
+RTFM supports software tasks and hardware tasks. Each hardware task is bound to
+a different interrupt handler. On the other hand, several software tasks may be
+dispatched by the same interrupt handler -- this is done to minimize the number
+of interrupt handlers used by the framework.
+
+The framework groups `spawn`-able tasks by priority level and generates one
+*task dispatcher* per priority level. Each task dispatcher runs on a different
+interrupt handler and the priority of said interrupt handler is set to match the
+priority level of the tasks managed by the dispatcher.
+
+Each task dispatcher keeps a *queue* of tasks which are *ready* to execute; this
+queue is referred to as the *ready queue*. Spawning a software task consists of
+adding an entry to this queue and pending the interrupt that runs the
+corresponding task dispatcher. Each entry in this queue contains a tag (`enum`)
+that identifies the task to execute and a *pointer* to the message passed to the
+task.
+
+The ready queue is an SPSC (Single Producer Single Consumer) lock-free queue. The
+task dispatcher owns the consumer endpoint of the queue; the producer endpoint
+is treated as a resource shared by the tasks that can `spawn` other tasks.
+
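+As a standalone sketch of this split, using the same simplified `heapless` API
+as the listings below (the `u8` payload and the `example` function are
+illustrative):
+
+``` rust
+use heapless::{consts::U4, spsc::Queue};
+
+static mut RQ1: Queue<u8, U4> = Queue::new();
+
+unsafe fn example() {
+    // `split` hands out the producer and consumer endpoints of the queue
+    let (mut producer, mut consumer) = RQ1.split();
+
+    // the `spawn` API uses the producer endpoint
+    producer.enqueue(42).ok();
+
+    // the task dispatcher uses the consumer endpoint
+    assert_eq!(consumer.dequeue(), Some(42));
+}
+```
+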
+## The task dispatcher
+
+Let's first take a look at the code generated by the framework to dispatch tasks.
+Consider this example:
+
+``` rust
+#[rtfm::app(device = ..)]
+const APP: () = {
+ // ..
+
+ #[interrupt(binds = UART0, priority = 2, spawn = [bar, baz])]
+ fn foo(c: foo::Context) {
+        c.spawn.bar().ok();
+
+        c.spawn.baz(42).ok();
+ }
+
+ #[task(capacity = 2, priority = 1)]
+ fn bar(c: bar::Context) {
+ // ..
+ }
+
+ #[task(capacity = 2, priority = 1, resources = [X])]
+    fn baz(c: baz::Context, input: u64) {
+ // ..
+ }
+
+ extern "C" {
+ fn UART1();
+ }
+};
+```
+
+The framework produces the following task dispatcher which consists of an
+interrupt handler and a ready queue:
+
+``` rust
+fn bar(c: bar::Context) {
+ // .. user code ..
+}
+
+const APP: () = {
+ use heapless::spsc::Queue;
+ use cortex_m::register::basepri;
+
+ struct Ready<T> {
+ task: T,
+ // ..
+ }
+
+ /// `spawn`-able tasks that run at priority level `1`
+ enum T1 {
+ bar,
+ baz,
+ }
+
+ // ready queue of the task dispatcher
+ // `U4` is a type-level integer that represents the capacity of this queue
+ static mut RQ1: Queue<Ready<T1>, U4> = Queue::new();
+
+ // interrupt handler chosen to dispatch tasks at priority `1`
+ #[no_mangle]
+    unsafe fn UART1() {
+ // the priority of this interrupt handler
+ const PRIORITY: u8 = 1;
+
+ let snapshot = basepri::read();
+
+ while let Some(ready) = RQ1.split().1.dequeue() {
+ match ready.task {
+ T1::bar => {
+ // **NOTE** simplified implementation
+
+ // used to track the dynamic priority
+ let priority = Cell::new(PRIORITY);
+
+ // call into user code
+ bar(bar::Context::new(&priority));
+ }
+
+ T1::baz => {
+ // we'll look at `baz` later
+ }
+ }
+ }
+
+ // BASEPRI invariant
+ basepri::write(snapshot);
+ }
+};
+```
+
+## Spawning a task
+
+The `spawn` API is exposed to the user as the methods of a `Spawn` struct.
+There's one `Spawn` struct per task.
+
+The `Spawn` code generated by the framework for the previous example looks like
+this:
+
+``` rust
+mod foo {
+ // ..
+
+ pub struct Context<'a> {
+ pub spawn: Spawn<'a>,
+ // ..
+ }
+
+ pub struct Spawn<'a> {
+ // tracks the dynamic priority of the task
+ priority: &'a Cell<u8>,
+ }
+
+ impl<'a> Spawn<'a> {
+ // `unsafe` and hidden because we don't want the user to tamper with it
+ #[doc(hidden)]
+ pub unsafe fn priority(&self) -> &Cell<u8> {
+ self.priority
+ }
+ }
+}
+
+const APP: () = {
+ // ..
+
+    // Priority ceiling for the producer endpoint of `RQ1`
+ const RQ1_CEILING: u8 = 2;
+
+ // used to track how many more `bar` messages can be enqueued
+ // `U2` is the capacity of the `bar` task; a max of two instances can be queued
+ // this queue is filled by the framework before `init` runs
+ static mut bar_FQ: Queue<(), U2> = Queue::new();
+
+ // Priority ceiling for the consumer endpoint of `bar_FQ`
+ const bar_FQ_CEILING: u8 = 2;
+
+ // a priority-based critical section
+ //
+    // this runs the given closure `f` at a dynamic priority of at least
+    // `ceiling` and returns the closure's result
+    fn lock<R>(priority: &Cell<u8>, ceiling: u8, f: impl FnOnce() -> R) -> R {
+ // ..
+ }
+
+ impl<'a> foo::Spawn<'a> {
+ /// Spawns the `bar` task
+ pub fn bar(&self) -> Result<(), ()> {
+ unsafe {
+            match lock(self.priority(), bar_FQ_CEILING, || {
+                bar_FQ.split().1.dequeue()
+            }) {
+                Some(()) => {
+                    lock(self.priority(), RQ1_CEILING, || {
+                        // put the task in the ready queue
+                        RQ1.split().0.enqueue_unchecked(Ready {
+                            task: T1::bar,
+                            // ..
+                        })
+                    });
+
+                    // pend the interrupt that runs the task dispatcher
+                    rtfm::pend(Interrupt::UART1);
+
+                    Ok(())
+                }
+
+                None => {
+                    // maximum capacity reached; spawn failed
+                    Err(())
+                }
+            }
+ }
+ }
+ }
+};
+```
+
+Using `bar_FQ` to limit the number of `bar` tasks that can be spawned may seem
+like an artificial limitation but it will make more sense when we talk about
+task capacities.
+
+## Messages
+
+We have omitted how message passing actually works so let's revisit the `spawn`
+implementation, but this time for the task `baz`, which receives a `u64` message.
+
+``` rust
+fn baz(c: baz::Context, input: u64) {
+ // .. user code ..
+}
+
+const APP: () = {
+ // ..
+
+ // Now we show the full contents of the `Ready` struct
+    struct Ready<T> {
+        task: T,
+ // message index; used to index the `INPUTS` buffer
+ index: u8,
+ }
+
+ // memory reserved to hold messages passed to `baz`
+ static mut baz_INPUTS: [MaybeUninit<u64>; 2] =
+ [MaybeUninit::uninit(), MaybeUninit::uninit()];
+
+ // the free queue: used to track free slots in the `baz_INPUTS` array
+ // this queue is initialized with values `0` and `1` before `init` is executed
+ static mut baz_FQ: Queue<u8, U2> = Queue::new();
+
+ // Priority ceiling for the consumer endpoint of `baz_FQ`
+ const baz_FQ_CEILING: u8 = 2;
+
+ impl<'a> foo::Spawn<'a> {
+ /// Spawns the `baz` task
+ pub fn baz(&self, message: u64) -> Result<(), u64> {
+ unsafe {
+ match lock(self.priority(), baz_FQ_CEILING, || {
+ baz_FQ.split().1.dequeue()
+ }) {
+                Some(index) => {
+                    // NOTE: `index` is an owning pointer into this buffer
+                    baz_INPUTS[index as usize].write(message);
+
+                    lock(self.priority(), RQ1_CEILING, || {
+                        // put the task in the ready queue
+                        RQ1.split().0.enqueue_unchecked(Ready {
+                            task: T1::baz,
+                            index,
+                        });
+                    });
+
+                    // pend the interrupt that runs the task dispatcher
+                    rtfm::pend(Interrupt::UART1);
+
+                    Ok(())
+                }
+
+ None => {
+ // maximum capacity reached; spawn failed
+ Err(message)
+ }
+ }
+ }
+ }
+ }
+};
+```
+
+And now let's look at the real implementation of the task dispatcher:
+
+``` rust
+const APP: () = {
+ // ..
+
+ #[no_mangle]
+    unsafe fn UART1() {
+ const PRIORITY: u8 = 1;
+
+ let snapshot = basepri::read();
+
+ while let Some(ready) = RQ1.split().1.dequeue() {
+ match ready.task {
+                T1::baz => {
+                    // NOTE: `index` is an owning pointer into this buffer
+ let input = baz_INPUTS[ready.index as usize].read();
+
+ // the message has been read out so we can return the slot
+ // back to the free queue
+ // (the task dispatcher has exclusive access to the producer
+ // endpoint of this queue)
+ baz_FQ.split().0.enqueue_unchecked(ready.index);
+
+ let priority = Cell::new(PRIORITY);
+ baz(baz::Context::new(&priority), input)
+ }
+
+                T1::bar => {
+ // looks just like the `baz` branch
+ }
+
+ }
+ }
+
+ // BASEPRI invariant
+ basepri::write(snapshot);
+ }
+};
+```
+
+`INPUTS` plus `FQ`, the free queue, is effectively a memory pool. However,
+instead of using the usual *free list* (linked list) to track empty slots
+in the `INPUTS` buffer we use an SPSC queue; this lets us reduce the number of
+critical sections. In fact, thanks to this choice the task dispatching code is
+lock-free.
+
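+Condensed into a standalone sketch with illustrative names -- the raw pointer
+read below stands in for the `read` operation used in the listings above:
+
+``` rust
+use core::mem::MaybeUninit;
+use heapless::{consts::U2, spsc::Queue};
+
+static mut INPUTS: [MaybeUninit<u64>; 2] = [MaybeUninit::uninit(), MaybeUninit::uninit()];
+// holds the indices of the free slots; filled with `0` and `1` at startup
+static mut FQ: Queue<u8, U2> = Queue::new();
+
+// `spawn` side: claim a slot and move the message into it
+unsafe fn claim(message: u64) -> Option<u8> {
+    let index = FQ.split().1.dequeue()?;
+    INPUTS[index as usize].write(message);
+    Some(index)
+}
+
+// dispatcher side: move the message out and return the slot to the pool
+unsafe fn release(index: u8) -> u64 {
+    let message = INPUTS[index as usize].as_ptr().read();
+    FQ.split().0.enqueue_unchecked(index);
+    message
+}
+```
+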
+## Queue capacity
+
+The RTFM framework uses several queues like ready queues and free queues. When
+the free queue is empty trying to `spawn` a task results in an error; this
+condition is checked at runtime. Not all the operations performed by the
+framework on these queues check if the queue is empty / full. For example,
+returning a slot to the free queue (see the task dispatcher) is unchecked
+because there's a fixed number of such slots circulating in the system that's
+equal to the capacity of the free queue. Similarly, adding an entry to the ready
+queue (see `Spawn`) is unchecked because of the queue capacity chosen by the
+framework.
+
+Users can specify the capacity of software tasks; this capacity is the maximum
+number of messages one can post to said task from a higher priority task before
+`spawn` returns an error. This user-specified capacity is the capacity of the
+free queue of the task (e.g. `foo_FQ`) and also the size of the array that holds
+the inputs to the task (e.g. `foo_INPUTS`).
+
+The capacity of the ready queue (e.g. `RQ1`) is chosen to be the *sum* of the
+capacities of all the different tasks managed by the dispatcher -- in our
+running example this is `2 + 2 = 4`, the `U4` used to declare `RQ1`. This sum
+is also the number of messages the queue will hold in the worst case scenario of
+all possible messages being posted before the task dispatcher gets a chance to
+run. For this reason, getting a slot from the free queue in any `spawn`
+operation implies that the ready queue is not yet full so inserting an entry
+into the ready queue can omit the "is it full?" check.
+
+In our running example the task `bar` takes no input so we could have omitted
+both `bar_INPUTS` and `bar_FQ` and let the user post an unbounded number of
+messages to this task, but if we did that it would not have been possible to
+pick a capacity for `RQ1` that lets us omit the "is it full?" check when spawning a
+`baz` task. In the section about the [timer queue](timer-queue.html) we'll see
+how the free queue is used by tasks that have no inputs.
+
+## Ceiling analysis
+
+The queues internally used by the `spawn` API are treated like normal resources
+and included in the ceiling analysis. It's important to note that these are SPSC
+queues and that only one of the endpoints is behind a resource; the other
+endpoint is owned by a task dispatcher.
+
+Consider the following example:
+
+``` rust
+#[rtfm::app(device = ..)]
+const APP: () = {
+ #[idle(spawn = [foo, bar])]
+ fn idle(c: idle::Context) -> ! {
+ // ..
+ }
+
+ #[task]
+ fn foo(c: foo::Context) {
+ // ..
+ }
+
+ #[task]
+ fn bar(c: bar::Context) {
+ // ..
+ }
+
+ #[task(priority = 2, spawn = [foo])]
+ fn baz(c: baz::Context) {
+ // ..
+ }
+
+ #[task(priority = 3, spawn = [bar])]
+ fn quux(c: quux::Context) {
+ // ..
+ }
+};
+```
+
+This is how the ceiling analysis would go:
+
+- `idle` (prio = 0) and `baz` (prio = 2) contend for the consumer endpoint of
+ `foo_FQ`; this leads to a priority ceiling of `2`.
+
+- `idle` (prio = 0) and `quux` (prio = 3) contend for the consumer endpoint of
+ `bar_FQ`; this leads to a priority ceiling of `3`.
+
+- `idle` (prio = 0), `baz` (prio = 2) and `quux` (prio = 3) all contend for the
+  producer endpoint of `RQ1`; this leads to a priority ceiling of `3`.
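+
+In other words, the framework would end up emitting constants along these lines
+(a sketch of the result of the analysis, not of the generated source):
+
+``` rust
+const foo_FQ_CEILING: u8 = 2; // max(idle = 0, baz = 2)
+const bar_FQ_CEILING: u8 = 3; // max(idle = 0, quux = 3)
+const RQ1_CEILING: u8 = 3;    // max(idle = 0, baz = 2, quux = 3)
+```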
diff --git a/book/en/src/internals/timer-queue.md b/book/en/src/internals/timer-queue.md
index 70592852..436f421d 100644
--- a/book/en/src/internals/timer-queue.md
+++ b/book/en/src/internals/timer-queue.md
@@ -1,3 +1,369 @@
# Timer queue
-**TODO**
+The timer queue functionality lets the user schedule tasks to run at some time
+in the future. Unsurprisingly, this feature is also implemented using a queue:
+a priority queue where the scheduled tasks are kept sorted by earliest scheduled
+time. This feature requires a timer capable of setting up timeout interrupts.
+The timer is used to trigger an interrupt when the scheduled time of a task is
+up; at that point the task is removed from the timer queue and moved into the
+appropriate ready queue.
+
+Let's see how this is implemented in code. Consider the following program:
+
+``` rust
+#[rtfm::app(device = ..)]
+const APP: () = {
+ // ..
+
+ #[task(capacity = 2, schedule = [foo])]
+ fn foo(c: foo::Context, x: u32) {
+ // schedule this task to run again in 1M cycles
+ c.schedule.foo(c.scheduled + Duration::cycles(1_000_000), x + 1).ok();
+ }
+
+ extern "C" {
+ fn UART0();
+ }
+};
+```
+
+## `schedule`
+
+Let's first look at the `schedule` API.
+
+``` rust
+mod foo {
+ pub struct Schedule<'a> {
+ priority: &'a Cell<u8>,
+ }
+
+ impl<'a> Schedule<'a> {
+ // unsafe and hidden because we don't want the user to tamper with this
+ #[doc(hidden)]
+ pub unsafe fn priority(&self) -> &Cell<u8> {
+ self.priority
+ }
+ }
+}
+
+const APP: () = {
+ use rtfm::Instant;
+
+ // all tasks that can be `schedule`-d
+ enum T {
+ foo,
+ }
+
+ struct NotReady {
+ index: u8,
+ instant: Instant,
+ task: T,
+ }
+
+ // The timer queue is a binary (min) heap of `NotReady` tasks
+ static mut TQ: TimerQueue<U2> = ..;
+ const TQ_CEILING: u8 = 1;
+
+ static mut foo_FQ: Queue<u8, U2> = Queue::new();
+ const foo_FQ_CEILING: u8 = 1;
+
+ static mut foo_INPUTS: [MaybeUninit<u32>; 2] =
+ [MaybeUninit::uninit(), MaybeUninit::uninit()];
+
+ static mut foo_INSTANTS: [MaybeUninit<Instant>; 2] =
+ [MaybeUninit::uninit(), MaybeUninit::uninit()];
+
+ impl<'a> foo::Schedule<'a> {
+ fn foo(&self, instant: Instant, input: u32) -> Result<(), u32> {
+ unsafe {
+ let priority = self.priority();
+ if let Some(index) = lock(priority, foo_FQ_CEILING, || {
+ foo_FQ.split().1.dequeue()
+ }) {
+ // `index` is an owning pointer into these buffers
+ foo_INSTANTS[index as usize].write(instant);
+ foo_INPUTS[index as usize].write(input);
+
+ let nr = NotReady {
+ index,
+ instant,
+ task: T::foo,
+ };
+
+                lock(priority, TQ_CEILING, || {
+                    TQ.enqueue_unchecked(nr);
+                });
+
+                Ok(())
+            } else {
+                // No space left to store the input / instant
+                Err(input)
+            }
+ }
+ }
+ }
+};
+```
+
+This looks very similar to the `Spawn` implementation. In fact, the same
+`INPUTS` buffer and free queue (`FQ`) are shared between the `spawn` and
+`schedule` APIs. The main difference between the two is that `schedule` also
+stores the `Instant` at which the task was scheduled to run in a separate buffer
+(`foo_INSTANTS` in this case).
+
+`TimerQueue::enqueue_unchecked` does a bit more work than just adding the entry
+into a min-heap: it also pends the system timer interrupt (`SysTick`) if the new
+entry ended up first in the queue.
+
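+A sketch of that behavior, assuming `TimerQueue` wraps a `heapless` binary
+min-heap and that the `SysTick` handler is pended through the `SCB` peripheral
+(this is not the exact `rtfm` implementation):
+
+``` rust
+impl TimerQueue {
+    unsafe fn enqueue_unchecked(&mut self, nr: NotReady) {
+        // will the new entry be the earliest one in the queue?
+        let is_new_head = self
+            .heap
+            .peek()
+            .map(|head| nr.instant < head.instant)
+            .unwrap_or(true);
+
+        // the free queue guarantees there's capacity left so this can't fail
+        self.heap.push(nr).ok();
+
+        if is_new_head {
+            // let the `SysTick` handler set up the new timeout
+            cortex_m::peripheral::SCB::set_pendst();
+        }
+    }
+}
+```
+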
+## The system timer
+
+The system timer interrupt (`SysTick`) takes care of two things: moving tasks
+that have become ready from the timer queue into the right ready queue and
+setting up a timeout interrupt to fire when the scheduled time of the next task
+is up.
+
+Let's see the associated code.
+
+``` rust
+const APP: () = {
+ #[no_mangle]
+    unsafe fn SysTick() {
+ const PRIORITY: u8 = 1;
+
+ let priority = &Cell::new(PRIORITY);
+ while let Some(ready) = lock(priority, TQ_CEILING, || TQ.dequeue()) {
+ match ready.task {
+ T::foo => {
+ // move this task into the `RQ1` ready queue
+ lock(priority, RQ1_CEILING, || {
+ RQ1.split().0.enqueue_unchecked(Ready {
+ task: T1::foo,
+ index: ready.index,
+ })
+ });
+
+ // pend the task dispatcher
+ rtfm::pend(Interrupt::UART0);
+ }
+ }
+ }
+ }
+};
+```
+
+This looks similar to a task dispatcher except that instead of running the
+ready task this only places the task in the corresponding ready queue; that
+way it will run at the right priority.
+
+`TimerQueue::dequeue` will set up a new timeout interrupt when it returns
+`None`. This ties in with `TimerQueue::enqueue_unchecked`, which pends this
+handler; basically, `enqueue_unchecked` delegates the task of setting up a new
+timeout interrupt to the `SysTick` handler.
+
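+Again as a sketch, assuming the timer queue also owns the `SYST` peripheral:
+
+``` rust
+impl TimerQueue {
+    unsafe fn dequeue(&mut self) -> Option<NotReady> {
+        if let Some(head) = self.heap.peek() {
+            // a signed (`i32`) difference; see the next section
+            let diff = head.instant - Instant::now();
+
+            if diff <= 0 {
+                // the scheduled time of the earliest task is up
+                return self.heap.pop();
+            }
+
+            // not ready yet: program a timeout, clamped to the 24-bit range
+            // of the SysTick counter
+            self.syst.set_reload(core::cmp::min(diff as u32, 0x00ff_ffff));
+            self.syst.clear_current();
+            self.syst.enable_counter();
+        }
+
+        None
+    }
+}
+```
+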
+## Resolution and range of `Instant` and `Duration`
+
+In the current implementation the `DWT`'s (Data Watchpoint and Trace) cycle
+counter is used as a monotonic timer. `Instant::now` returns a snapshot of this
+timer; these DWT snapshots (`Instant`s) are used to sort entries in the timer
+queue. The cycle counter is a 32-bit counter clocked at the core clock
+frequency. This counter wraps around every `(1 << 32)` clock cycles; there's no
+interrupt associated with this counter so nothing worth noting happens when it
+wraps around.
+
+To order `Instant`s in the queue we need to compare two 32-bit integers. To
+account for the wrap-around behavior we use the difference between two
+`Instant`s, `a - b`, and treat the result as a 32-bit signed integer. If the
+result is less than zero then `b` is a later `Instant`; if the result is greater
+than zero then `b` is an earlier `Instant`. This means that scheduling a task at
+an `Instant` that's more than `(1 << 31) - 1` cycles beyond the scheduled time
+(`Instant`) of the first (earliest) entry in the queue will cause the task to be
+inserted at the wrong place in the queue. There are some debug assertions in place
+to prevent this user error but it can't be avoided because the user can write
+`(instant + duration_a) + duration_b` and overflow the `Instant`.
+
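+A sketch of that wrapping comparison, assuming `Instant` is a newtype over the
+raw 32-bit cycle count:
+
+``` rust
+use core::cmp::Ordering;
+
+#[derive(Clone, Copy, Eq, PartialEq)]
+struct Instant(u32);
+
+impl Ord for Instant {
+    fn cmp(&self, other: &Instant) -> Ordering {
+        // interpret the wrapping difference as a signed number: negative
+        // means `self` is earlier, positive means `self` is later
+        (self.0.wrapping_sub(other.0) as i32).cmp(&0)
+    }
+}
+
+impl PartialOrd for Instant {
+    fn partial_cmp(&self, other: &Instant) -> Option<Ordering> {
+        Some(self.cmp(other))
+    }
+}
+```
+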
+The system timer, `SysTick`, is a 24-bit counter also clocked at the core clock
+frequency. When the next scheduled task is more than `1 << 24` clock cycles in
+the future an interrupt is set to go off in `1 << 24` cycles. This process may
+need to happen several times until the next scheduled task is within the range
+of the `SysTick` counter.
+
+In conclusion, both `Instant` and `Duration` have a resolution of 1 core clock
+cycle and `Duration` effectively has a (half-open) range of `0..(1 << 31)` (end
+not included) core clock cycles.
+
+## Queue capacity
+
+The capacity of the timer queue is chosen to be the sum of the capacities of all
+`schedule`-able tasks. Like in the case of the ready queues, this means that
+once we have claimed a free slot in the `INPUTS` buffer we are guaranteed to be
+able to insert the task in the timer queue; this lets us omit runtime checks.
+
+## System timer priority
+
+The priority of the system timer can't be set by the user; it is chosen by the
+framework. To ensure that lower priority tasks don't prevent higher priority
+tasks from running we choose the priority of the system timer to be the maximum
+priority of all the `schedule`-able tasks.
+
+To see why this is required consider the case where two previously scheduled
+tasks with priorities `2` and `3` become ready at about the same time but the
+lower priority task is moved into the ready queue first. If the system timer
+priority was, for example, `1` then after moving the lower priority (`2`) task
+it would run to completion (due to it being higher priority than the system
+timer), delaying the execution of the higher priority (`3`) task. To prevent
+scenarios like these the priority of the system timer must match the highest
+priority of the `schedule`-able tasks; in this example that would be `3`.
+
+## Ceiling analysis
+
+The timer queue is a resource shared between all the tasks that can `schedule` a
+task and the `SysTick` handler. Also the `schedule` API contends with the
+`spawn` API over the free queues. All this must be considered in the ceiling
+analysis.
+
+To illustrate, consider the following example:
+
+``` rust
+#[rtfm::app(device = ..)]
+const APP: () = {
+ #[task(priority = 3, spawn = [baz])]
+ fn foo(c: foo::Context) {
+ // ..
+ }
+
+ #[task(priority = 2, schedule = [foo, baz])]
+ fn bar(c: bar::Context) {
+ // ..
+ }
+
+ #[task(priority = 1)]
+ fn baz(c: baz::Context) {
+ // ..
+ }
+};
+```
+
+The ceiling analysis would go like this:
+
+- `foo` (prio = 3) and `baz` (prio = 1) are `schedule`-able tasks so the
+  `SysTick` handler must run at the highest priority between the two, that is `3`.
+
+- `foo::Spawn` (prio = 3) and `bar::Schedule` (prio = 2) contend over the
+  consumer endpoint of `baz_FQ`; this leads to a priority ceiling of `3`.
+
+- `bar::Schedule` (prio = 2) has exclusive access over the consumer endpoint of
+  `foo_FQ`; thus the priority ceiling of `foo_FQ` is effectively `2`.
+
+- `SysTick` (prio = 3) and `bar::Schedule` (prio = 2) contend over the timer
+ queue `TQ`; this leads to a priority ceiling of `3`.
+
+- `SysTick` (prio = 3) and `foo::Spawn` (prio = 3) both have lock-free access to
+ the ready queue `RQ3`, which holds `foo` entries; thus the priority ceiling of
+ `RQ3` is effectively `3`.
+
+- The `SysTick` has exclusive access to the ready queue `RQ1`, which holds `baz`
+ entries; thus the priority ceiling of `RQ1` is effectively `3`.
+
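+Summarized as the constants the framework would emit for this example (a
+sketch):
+
+``` rust
+const SYSTICK_PRIORITY: u8 = 3; // max(foo = 3, baz = 1)
+const baz_FQ_CEILING: u8 = 3;   // foo::Spawn (3) vs bar::Schedule (2)
+const foo_FQ_CEILING: u8 = 2;   // only bar::Schedule (2)
+const TQ_CEILING: u8 = 3;       // SysTick (3) vs bar::Schedule (2)
+```
+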
+## Changes in the `spawn` implementation
+
+When the "timer-queue" feature is enabled the `spawn` implementation changes a
+bit to track the baseline of tasks. As you saw in the `schedule` implementation
+there's an `INSTANTS` buffer used to store the time at which a task was
+scheduled to run; this `Instant` is read in the task dispatcher and passed to
+the user code as part of the task context.
+
+``` rust
+const APP: () = {
+ // ..
+
+ #[no_mangle]
+    unsafe fn UART1() {
+ const PRIORITY: u8 = 1;
+
+ let snapshot = basepri::read();
+
+ while let Some(ready) = RQ1.split().1.dequeue() {
+ match ready.task {
+                T1::baz => {
+ let input = baz_INPUTS[ready.index as usize].read();
+ // ADDED
+ let instant = baz_INSTANTS[ready.index as usize].read();
+
+ baz_FQ.split().0.enqueue_unchecked(ready.index);
+
+ let priority = Cell::new(PRIORITY);
+                    // CHANGED the instant is passed as part of the task context
+ baz(baz::Context::new(&priority, instant), input)
+ }
+
+                T1::bar => {
+ // looks just like the `baz` branch
+ }
+
+ }
+ }
+
+ // BASEPRI invariant
+ basepri::write(snapshot);
+ }
+};
+```
+
+Conversely, the `spawn` implementation needs to write a value to the `INSTANTS`
+buffer. The value to be written is stored in the `Spawn` struct and it's either
+the `start` time of the hardware task or the `scheduled` time of the software
+task.
+
+``` rust
+mod foo {
+ // ..
+
+ pub struct Spawn<'a> {
+ priority: &'a Cell<u8>,
+ // ADDED
+ instant: Instant,
+ }
+
+ impl<'a> Spawn<'a> {
+ pub unsafe fn priority(&self) -> &Cell<u8> {
+            self.priority
+ }
+
+ // ADDED
+ pub unsafe fn instant(&self) -> Instant {
+ self.instant
+ }
+ }
+}
+
+const APP: () = {
+ impl<'a> foo::Spawn<'a> {
+ /// Spawns the `baz` task
+ pub fn baz(&self, message: u64) -> Result<(), u64> {
+ unsafe {
+ match lock(self.priority(), baz_FQ_CEILING, || {
+ baz_FQ.split().1.dequeue()
+ }) {
+ Some(index) => {
+ baz_INPUTS[index as usize].write(message);
+ // ADDED
+ baz_INSTANTS[index as usize].write(self.instant());
+
+                        lock(self.priority(), RQ1_CEILING, || {
+                            RQ1.split().0.enqueue_unchecked(Ready {
+                                task: T1::baz,
+                                index,
+                            });
+                        });
+
+                        rtfm::pend(Interrupt::UART1);
+
+                        Ok(())
+                    }
+
+ None => {
+ // maximum capacity reached; spawn failed
+ Err(message)
+ }
+ }
+ }
+ }
+ }
+};
+```