Diffstat (limited to 'book/en/src')
24 files changed, 2671 insertions, 267 deletions
diff --git a/book/en/src/SUMMARY.md b/book/en/src/SUMMARY.md index 051d1acc..e1a4a330 100644 --- a/book/en/src/SUMMARY.md +++ b/book/en/src/SUMMARY.md @@ -1,16 +1,25 @@ # Summary [Preface](./preface.md) -- [RTFM by example](./by-example.md) + +- [RTIC by example](./by-example.md) - [The `app` attribute](./by-example/app.md) - [Resources](./by-example/resources.md) - - [Tasks](./by-example/tasks.md) + - [Software tasks](./by-example/tasks.md) - [Timer queue](./by-example/timer-queue.md) - - [Singletons](./by-example/singletons.md) - [Types, Send and Sync](./by-example/types-send-sync.md) - [Starting a new project](./by-example/new.md) - [Tips & tricks](./by-example/tips.md) +- [Migration Guides](./migration.md) + - [v0.5.x to v0.6.x](./migration/migration_v5.md) + - [v0.4.x to v0.5.x](./migration/migration_v4.md) + - [RTFM to RTIC](./migration/migration_rtic.md) - [Under the hood](./internals.md) + - [Interrupt configuration](./internals/interrupt-configuration.md) + - [Non-reentrancy](./internals/non-reentrancy.md) + - [Access control](./internals/access.md) + - [Late resources](./internals/late-resources.md) + - [Critical sections](./internals/critical-sections.md) - [Ceiling analysis](./internals/ceilings.md) - - [Task dispatcher](./internals/tasks.md) + - [Software tasks](./internals/tasks.md) - [Timer queue](./internals/timer-queue.md) diff --git a/book/en/src/by-example.md b/book/en/src/by-example.md index e19f0749..e4441fd9 100644 --- a/book/en/src/by-example.md +++ b/book/en/src/by-example.md @@ -1,15 +1,15 @@ -# RTFM by example +# RTIC by example -This part of the book introduces the Real Time For the Masses (RTFM) framework +This part of the book introduces the Real-Time Interrupt-driven Concurrency (RTIC) framework to new users by walking them through examples of increasing complexity. All examples in this part of the book can be found in the GitHub [repository] of the project, and most of the examples can be run on QEMU so no special hardware is required to follow along. -[repository]: https://github.com/japaric/cortex-m-rtfm +[repository]: https://github.com/rtic-rs/cortex-m-rtic -To run the examples on your laptop / PC you'll need the `qemu-system-arm` +To run the examples on your computer you'll need the `qemu-system-arm` program. Check [the embedded Rust book] for instructions on how to set up an embedded development environment that includes QEMU. diff --git a/book/en/src/by-example/app.md b/book/en/src/by-example/app.md index 996b8c16..ab6f4524 100644 --- a/book/en/src/by-example/app.md +++ b/book/en/src/by-example/app.md @@ -1,50 +1,44 @@ # The `app` attribute -This is the smallest possible RTFM application: +This is the smallest possible RTIC application: ``` rust {{#include ../../../../examples/smallest.rs}} ``` -All RTFM applications use the [`app`] attribute (`#[app(..)]`). This attribute -must be applied to a `const` item that contains items. The `app` attribute has +All RTIC applications use the [`app`] attribute (`#[app(..)]`). This attribute +must be applied to a `mod`-item. The `app` attribute has a mandatory `device` argument that takes a *path* as a value. This path must point to a *peripheral access crate* (PAC) generated using [`svd2rust`] -**v0.14.x**. The `app` attribute will expand into a suitable entry point so it's -not required to use the [`cortex_m_rt::entry`] attribute. +**v0.14.x** or newer. The `app` attribute will expand into a suitable entry +point so it's not required to use the [`cortex_m_rt::entry`] attribute. 
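The `smallest.rs` example above is pulled in from the repository and is not visible in this diff. As a rough sketch only (assuming the `lm3s6965` PAC used by the book's other examples), a minimal application has this shape:

``` rust
// Sketch, not the literal `smallest.rs` example.
#![no_main]
#![no_std]

use panic_semihosting as _; // panic handler

#[rtic::app(device = lm3s6965)]
mod app {
    // the `device` argument points at an svd2rust-generated PAC;
    // a mandatory `init` function (described below) completes the app
    #[init]
    fn init(_: init::Context) {}
}
```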
-[`app`]: ../../api/cortex_m_rtfm_macros/attr.app.html +[`app`]: ../../../api/cortex_m_rtic_macros/attr.app.html [`svd2rust`]: https://crates.io/crates/svd2rust -[`cortex_m_rt::entry`]: ../../api/cortex_m_rt_macros/attr.entry.html - -> **ASIDE**: Some of you may be wondering why we are using a `const` item as a > module and not a proper `mod` item. The reason is that using attributes on > modules requires a feature gate, which requires a nightly toolchain. To make > RTFM work on stable we use the `const` item instead. When more parts of macros > 1.2 are stabilized we'll move from a `const` item to a `mod` item and > eventually to a crate level attribute (`#![app]`). +[`cortex_m_rt::entry`]: ../../../api/cortex_m_rt_macros/attr.entry.html ## `init` -Within the pseudo-module the `app` attribute expects to find an initialization +Within the `app` module the attribute expects to find an initialization function marked with the `init` attribute. This function must have signature -`[unsafe] fn()`. +`fn(init::Context) [-> init::LateResources]` (the return type is not always +required). This initialization function will be the first part of the application to run. The `init` function will run *with interrupts disabled* and has exclusive access -to Cortex-M and device specific peripherals through the `core` and `device` -variables, which are injected in the scope of `init` by the `app` attribute. Not -all Cortex-M peripherals are available in `core` because the RTFM runtime takes -ownership of some of them -- for more details see the [`rtfm::Peripherals`] -struct. +to the Cortex-M core peripherals and, optionally, to the device specific peripherals +through the `core` and `device` fields of `init::Context`; a `bare_metal::CriticalSection` +token is also available as `cs`. `static mut` variables declared at the beginning of `init` will be transformed into `&'static mut` references that are safe to access. -[`rtfm::Peripherals`]: ../../api/rtfm/struct.Peripherals.html +[`rtic::Peripherals`]: ../../api/rtic/struct.Peripherals.html -The example below shows the types of the `core` and `device` variables and -showcases safe access to a `static mut` variable. +The example below shows the types of the `core`, `device` and `cs` fields, and +showcases safe access to a `static mut` variable. The `device` field is only +available when the `peripherals` argument is set to `true` (it defaults to +`false`). ``` rust {{#include ../../../../examples/init.rs}} ``` @@ -55,51 +49,104 @@ process. ``` console $ cargo run --example init -{{#include ../../../../ci/expected/init.run}}``` +{{#include ../../../../ci/expected/init.run}} +``` ## `idle` A function marked with the `idle` attribute can optionally appear in the -pseudo-module. This function is used as the special *idle task* and must have -signature `[unsafe] fn() -> !`. +module. This function is used as the special *idle task* and must have +signature `fn(idle::Context) -> !`. When present, the runtime will execute the `idle` task after `init`. Unlike `init`, `idle` will run *with interrupts enabled* and it's not allowed to return -so it runs forever. +so it must run forever. When no `idle` function is declared, the runtime sets the [SLEEPONEXIT] bit and then sends the microcontroller to sleep after running `init`.
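Since the included `init.rs` and `idle.rs` examples are likewise not visible in this diff, here is a hedged sketch of the signatures described above:

``` rust
// Sketch only: `init` runs first with interrupts disabled; `idle` runs
// afterwards at priority 0 and never returns.
#[init]
fn init(cx: init::Context) {
    let _core = cx.core; // Cortex-M core peripherals
}

#[idle]
fn idle(_: idle::Context) -> ! {
    loop {
        // keep the loop non-empty (see the note below about empty loops)
        cortex_m::asm::nop();
    }
}
```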
-[SLEEPONEXIT]: https://developer.arm.com/products/architecture/cpu-architecture/m-profile/docs/100737/0100/power-management/sleep-mode/sleep-on-exit-bit +[SLEEPONEXIT]: https://developer.arm.com/docs/100737/0100/power-management/sleep-mode/sleep-on-exit-bit Like in `init`, `static mut` variables will be transformed into `&'static mut` references that are safe to access. The example below shows that `idle` runs after `init`. +**Note:** The `loop {}` in idle cannot be empty as this will crash the microcontroller due to a bug +in LLVM which mis-optimizes empty loops to a `UDF` instruction in release mode. + ``` rust {{#include ../../../../examples/idle.rs}} ``` ``` console $ cargo run --example idle -{{#include ../../../../ci/expected/idle.run}}``` +{{#include ../../../../ci/expected/idle.run}} +``` + +## Hardware tasks -## `interrupt` / `exception` +To declare interrupt handlers the framework provides a `#[task]` attribute that +can be attached to functions. This attribute takes a `binds` argument whose +value is the name of the interrupt to which the handler will be bound; the +function adorned with this attribute becomes the interrupt handler. Within the +framework these types of tasks are referred to as *hardware* tasks, because they +start executing in reaction to a hardware event. -Just like you would do with the `cortex-m-rt` crate you can use the `interrupt` -and `exception` attributes within the `app` pseudo-module to declare interrupt -and exception handlers. In RTFM, we refer to interrupt and exception handlers as -*hardware* tasks. +The example below demonstrates the use of the `#[task]` attribute to declare an +interrupt handler. Like in the case of `#[init]` and `#[idle]`, local `static +mut` variables are safe to use within a hardware task. ``` rust -{{#include ../../../../examples/interrupt.rs}} +{{#include ../../../../examples/hardware.rs}} ``` ``` console -$ cargo run --example interrupt -{{#include ../../../../ci/expected/interrupt.run}}``` +$ cargo run --example hardware +{{#include ../../../../ci/expected/hardware.run}} +``` + +So far all the RTIC applications we have seen look no different than the +applications one can write using only the `cortex-m-rt` crate. From this point +we start introducing features unique to RTIC. + +## Priorities + +The static priority of each handler can be declared in the `task` attribute +using the `priority` argument. Tasks can have priorities in the range `1..=(1 << +NVIC_PRIO_BITS)` where `NVIC_PRIO_BITS` is a constant defined in the `device` +crate. When the `priority` argument is omitted, the priority is assumed to be +`1`. The `idle` task has a non-configurable static priority of `0`, the lowest +priority. + +When several tasks are ready to be executed the one with *highest* static +priority will be executed first. Task prioritization can be observed in the +following scenario: an interrupt signal arrives during the execution of a low +priority task; the signal puts the higher priority task in the pending state. +The difference in priority results in the higher priority task preempting the +lower priority one: the execution of the lower priority task is suspended and +the higher priority task is executed to completion. Once the higher priority +task has terminated the lower priority task is resumed. + +The following example showcases the priority based scheduling of tasks.
+ +``` rust +{{#include ../../../../examples/preempt.rs}} +``` + +``` console +$ cargo run --example preempt +{{#include ../../../../ci/expected/preempt.run}} +``` -So far all the RTFM applications we have seen look no different that the -applications one can write using only the `cortex-m-rt` crate. In the next -section we start introducing features unique to RTFM. +Note that the task `gpiob` does *not* preempt task `gpioc` because its priority +is the *same* as `gpioc`'s. However, once `gpioc` terminates, the execution of +task `gpiob` is prioritized over `gpioa` due to its higher priority. `gpioa` +is resumed only after `gpiob` terminates. + +One more note about priorities: choosing a priority higher than what the device +supports (that is `1 << NVIC_PRIO_BITS`) will result in a compile error. Due to +limitations in the language, the error message is currently far from helpful: it +will say something along the lines of "evaluation of constant value failed" and +the span of the error will *not* point to the problematic interrupt value -- +we are sorry about this! diff --git a/book/en/src/by-example/new.md b/book/en/src/by-example/new.md index ae49ef21..866a9fa5 100644 --- a/book/en/src/by-example/new.md +++ b/book/en/src/by-example/new.md @@ -1,6 +1,6 @@ # Starting a new project -Now that you have learned about the main features of the RTFM framework you can +Now that you have learned about the main features of the RTIC framework you can try it out on your hardware by following these instructions. 1. Instantiate the [`cortex-m-quickstart`] template. @@ -36,20 +36,19 @@ $ cargo add lm3s6965 --vers 0.1.3 $ rm memory.x build.rs ``` -3. Add the `cortex-m-rtfm` crate as a dependency and, if you need it, enable the - `timer-queue` feature. +3. Add the `cortex-m-rtic` crate as a dependency. ``` console -$ cargo add cortex-m-rtfm +$ cargo add cortex-m-rtic --allow-prerelease ``` -4. Write your RTFM application. +4. Write your RTIC application. -Here I'll use the `init` example from the `cortex-m-rtfm` crate. +Here I'll use the `init` example from the `cortex-m-rtic` crate. ``` console $ curl \ - -L https://github.com/japaric/cortex-m-rtfm/raw/v0.4.0/examples/init.rs \ + -L https://github.com/rtic-rs/cortex-m-rtic/raw/v0.5.3/examples/init.rs \ > src/main.rs ``` @@ -64,4 +63,5 @@ $ cargo add panic-semihosting ``` console $ # NOTE: I have uncommented the `runner` option in `.cargo/config` $ cargo run -{{#include ../../../../ci/expected/init.run}}``` +{{#include ../../../../ci/expected/init.run}} +``` diff --git a/book/en/src/by-example/resources.md b/book/en/src/by-example/resources.md index 17f4d139..d082dfc1 100644 --- a/book/en/src/by-example/resources.md +++ b/book/en/src/by-example/resources.md @@ -1,22 +1,29 @@ ## Resources -One of the limitations of the attributes provided by the `cortex-m-rt` crate is -that sharing data (or peripherals) between interrupts, or between an interrupt -and the `entry` function, requires a `cortex_m::interrupt::Mutex`, which -*always* requires disabling *all* interrupts to access the data. Disabling all -the interrupts is not always required for memory safety but the compiler doesn't -have enough information to optimize the access to the shared data. - -The `app` attribute has a full view of the application thus it can optimize -access to `static` variables. In RTFM we refer to the `static` variables -declared inside the `app` pseudo-module as *resources*.
To access a resource the -context (`init`, `idle`, `interrupt` or `exception`) must first declare the -resource in the `resources` argument of its attribute. - -In the example below two interrupt handlers access the same resource. No `Mutex` -is required in this case because the two handlers run at the same priority and -no preemption is possible. The `SHARED` resource can only be accessed by these -two handlers. +The framework provides an abstraction to share data between any of the contexts +we saw in the previous section (task handlers, `init` and `idle`): resources. + +Resources are data visible only to functions declared within the `#[app]` +module. The framework gives the user complete control over which context +can access which resource. + +All resources are declared as a single `struct` within the `#[app]` +module. Each field in the structure corresponds to a different resource. +The `struct` must be annotated with the following attribute: `#[resources]`. + +Resources can optionally be given an initial value using the `#[init]` +attribute. Resources that are not given an initial value are referred to as +*late* resources and are covered in more detail in a follow-up section in this +page. + +Each context (task handler, `init` or `idle`) must declare the resources it +intends to access in its corresponding metadata attribute using the `resources` +argument. This argument takes a list of resource names as its value. The listed +resources are made available to the context under the `resources` field of the +`Context` structure. + +The example application shown below contains two interrupt handlers that share +access to a resource named `shared`. ``` rust {{#include ../../../../examples/resource.rs}} @@ -24,42 +31,42 @@ two handlers. ``` console $ cargo run --example resource -{{#include ../../../../ci/expected/resource.run}}``` +{{#include ../../../../ci/expected/resource.run}} +``` + +Note that the `shared` resource cannot be accessed from `idle`. Attempting to do +so results in a compile error. -## Priorities +## `lock` -The priority of each handler can be declared in the `interrupt` and `exception` -attributes. It's not possible to set the priority in any other way because the -runtime takes ownership of the `NVIC` peripheral; it's also not possible to -change the priority of a handler / task at runtime. Thanks to this restriction -the framework has knowledge about the *static* priorities of all interrupt and -exception handlers. +In the presence of preemption critical sections are required to mutate shared +data in a data race free manner. As the framework has complete knowledge over +the priorities of tasks and which tasks can access which resources it enforces +that critical sections are used where required for memory safety. -Interrupts and exceptions can have priorities in the range `1..=(1 << -NVIC_PRIO_BITS)` where `NVIC_PRIO_BITS` is a constant defined in the `device` -crate. The `idle` task has a priority of `0`, the lowest priority. +Where a critical section is required the framework hands out a resource proxy +instead of a reference. This resource proxy is a structure that implements the +[`Mutex`] trait. The only method on this trait, [`lock`], runs its closure +argument in a critical section. -Resources that are shared between handlers that run at different priorities -require critical sections for memory safety. 
The framework ensures that critical -sections are used but *only where required*: for example, no critical section is -required by the highest priority handler that has access to the resource. +[`Mutex`]: ../../../api/rtic/trait.Mutex.html +[`lock`]: ../../../api/rtic/trait.Mutex.html#method.lock -The critical section API provided by the RTFM framework (see [`Mutex`]) is -based on dynamic priorities rather than on disabling interrupts. The consequence -is that these critical sections will prevent *some* handlers, including all the -ones that contend for the resource, from *starting* but will let higher priority -handlers, that don't contend for the resource, run. +The critical section created by the `lock` API is based on dynamic priorities: +it temporarily raises the dynamic priority of the context to a *ceiling* +priority that prevents other tasks from preempting the critical section. This +synchronization protocol is known as the [Immediate Ceiling Priority Protocol +(ICPP)][icpp]. -[`Mutex`]: ../../api/rtfm/trait.Mutex.html +[icpp]: https://en.wikipedia.org/wiki/Priority_ceiling_protocol In the example below we have three interrupt handlers with priorities ranging from one to three. The two handlers with the lower priorities contend for the -`SHARED` resource. The lowest priority handler needs to [`lock`] the -`SHARED` resource to access its data, whereas the mid priority handler can -directly access its data. The highest priority handler is free to preempt -the critical section created by the lowest priority handler. - -[`lock`]: ../../api/rtfm/trait.Mutex.html#method.lock +`shared` resource. The lowest priority handler needs to `lock` the +`shared` resource to access its data, whereas the mid priority handler can +directly access its data. The highest priority handler, which cannot access +the `shared` resource, is free to preempt the critical section created by the +lowest priority handler. ``` rust {{#include ../../../../examples/lock.rs}} @@ -67,35 +74,26 @@ the critical section created by the lowest priority handler. ``` console $ cargo run --example lock -{{#include ../../../../ci/expected/lock.run}}``` - -One more note about priorities: choosing a priority higher than what the device -supports (that is `1 << NVIC_PRIO_BITS`) will result in a compile error. Due to -limitations in the language the error is currently far from helpful: it will say -something along the lines of "evaluation of constant value failed" and the span -of the error will *not* point out to the problematic interrupt value -- we are -sorry about this! +{{#include ../../../../ci/expected/lock.run}} +``` ## Late resources -Unlike normal `static` variables, which need to be assigned an initial value -when declared, resources can be initialized at runtime. We refer to these -runtime initialized resources as *late resources*. Late resources are useful for -*moving* (as in transferring ownership) peripherals initialized in `init` into -interrupt and exception handlers. +Late resources are resources that are not given an initial value at compile time +using the `#[init]` attribute but instead are initialized at runtime using the +`init::LateResources` values returned by the `init` function. -Late resources are declared like normal resources but that are given an initial -value of `()` (the unit value). `init` must return the initial values of all -late resources packed in a `struct` of type `init::LateResources`. 
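A rough sketch of the late resource shape just described, using the `#[resources]` struct syntax from earlier on this page (the `read_serial_number` helper is hypothetical; the real `late.rs` example is included below):

``` rust
// Sketch only. `shared` gets a compile time initial value; `serial` is a
// late resource that `init` must initialize at runtime.
#[resources]
struct Resources {
    #[init(0)]
    shared: u32,
    serial: u32, // late resource: no `#[init]` attribute
}

#[init]
fn init(_: init::Context) -> init::LateResources {
    let serial = read_serial_number(); // hypothetical helper
    init::LateResources { serial }
}
```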
+Late resources are useful for *moving* (as in transferring the ownership of) +peripherals initialized in `init` into interrupt handlers. -The example below uses late resources to stablish a lockless, one-way channel -between the `UART0` interrupt handler and the `idle` function. A single producer +The example below uses late resources to establish a lockless, one-way channel +between the `UART0` interrupt handler and the `idle` task. A single producer single consumer [`Queue`] is used as the channel. The queue is split into consumer and producer end points in `init` and then each end point is stored in a different resource; `UART0` owns the producer resource and `idle` owns the consumer resource. -[`Queue`]: ../../api/heapless/spsc/struct.Queue.html +[`Queue`]: ../../../api/heapless/spsc/struct.Queue.html ``` rust {{#include ../../../../examples/late.rs}} @@ -103,24 +101,36 @@ the consumer resource. ``` console $ cargo run --example late -{{#include ../../../../ci/expected/late.run}}``` +{{#include ../../../../ci/expected/late.run}} +``` -## `static` resources +## Only shared access -`static` variables can also be used as resources. Tasks can only get `&` -(shared) references to these resources but locks are never required to access -their data. You can think of `static` resources as plain `static` variables that -can be initialized at runtime and have better scoping rules: you can control -which tasks can access the variable, instead of the variable being visible to -all the functions in the scope it was declared in. +By default the framework assumes that all tasks require exclusive access +(`&mut-`) to resources but it is possible to specify that a task only requires +shared access (`&-`) to a resource using the `&resource_name` syntax in the +`resources` list. -In the example below a key is loaded (or created) at runtime and then used from -two tasks that run at different priorities. +The advantage of specifying shared access (`&-`) to a resource is that no locks +are required to access the resource even if the resource is contended by several +tasks running at different priorities. The downside is that the task only gets a +shared reference (`&-`) to the resource, limiting the operations it can perform +on it, but where a shared reference is enough this approach reduces the number +of required locks. + +Note that in this release of RTIC it is not possible to request both exclusive +access (`&mut-`) and shared access (`&-`) to the *same* resource from different +tasks. Attempting to do so will result in a compile error. + +In the example below a key (e.g. a cryptographic key) is loaded (or created) at +runtime and then used from two tasks that run at different priorities without +any kind of lock. ``` rust -{{#include ../../../../examples/static.rs}} +{{#include ../../../../examples/only-shared-access.rs}} ``` ``` console -$ cargo run --example static -{{#include ../../../../ci/expected/static.run}}``` +$ cargo run --example only-shared-access +{{#include ../../../../ci/expected/only-shared-access.run}} +``` diff --git a/book/en/src/by-example/singletons.md b/book/en/src/by-example/singletons.md deleted file mode 100644 index 0823f057..00000000 --- a/book/en/src/by-example/singletons.md +++ /dev/null @@ -1,26 +0,0 @@ -# Singletons - -The `app` attribute is aware of [`owned-singleton`] crate and its [`Singleton`] -attribute. 
When this attribute is applied to one of the resources the runtime -will perform the `unsafe` initialization of the singleton for you, ensuring that -only a single instance of the singleton is ever created. - -[`owned-singleton`]: ../../api/owned_singleton/index.html -[`Singleton`]: ../../api/owned_singleton_macros/attr.Singleton.html - -Note that when using the `Singleton` attribute you'll need to have the -`owned_singleton` in your dependencies. - -Below is an example that uses the `Singleton` attribute on a chunk of memory -and then uses the singleton instance as a fixed-size memory pool using one of -the [`alloc-singleton`] abstractions. - -[`alloc-singleton`]: https://crates.io/crates/alloc-singleton - -``` rust -{{#include ../../../../examples/singleton.rs}} -``` - -``` console -$ cargo run --example singleton -{{#include ../../../../ci/expected/singleton.run}}``` diff --git a/book/en/src/by-example/tasks.md b/book/en/src/by-example/tasks.md index edcdbed0..ba164048 100644 --- a/book/en/src/by-example/tasks.md +++ b/book/en/src/by-example/tasks.md @@ -1,22 +1,23 @@ # Software tasks -RTFM treats interrupt and exception handlers as *hardware* tasks. Hardware tasks -are invoked by the hardware in response to events, like pressing a button. RTFM -also supports *software* tasks which can be spawned by the software from any -execution context. - -Software tasks can also be assigned priorities and are dispatched from interrupt -handlers. RTFM requires that free interrupts are declared in an `extern` block -when using software tasks; these free interrupts will be used to dispatch the -software tasks. An advantage of software tasks over hardware tasks is that many -tasks can be mapped to a single interrupt handler. - -Software tasks are declared by applying the `task` attribute to functions. To be -able to spawn a software task the name of the task must appear in the `spawn` -argument of the context attribute (`init`, `idle`, `interrupt`, etc.). +In addition to hardware tasks, which are invoked by the hardware in response to +hardware events, RTIC also supports *software* tasks which can be spawned by the +application from any execution context. + +Software tasks can also be assigned priorities and, under the hood, are +dispatched from interrupt handlers. RTIC requires that free interrupts are +declared in an `extern` block when using software tasks; some of these free +interrupts will be used to dispatch the software tasks. An advantage of software +tasks over hardware tasks is that many tasks can be mapped to a single interrupt +handler. + +Software tasks are also declared using the `task` attribute but the `binds` +argument must be omitted. To be able to spawn a software task from a context +the name of the task must appear in the `spawn` argument of the context +attribute (`init`, `idle`, `task`, etc.). The example below showcases three software tasks that run at 2 different -priorities. The three tasks map to 2 interrupts handlers. +priorities. The three software tasks are mapped to 2 interrupt handlers. ``` rust {{#include ../../../../examples/task.rs}} ``` ``` console $ cargo run --example task -{{#include ../../../../ci/expected/task.run}}``` +{{#include ../../../../ci/expected/task.run}} +``` ## Message passing @@ -40,19 +42,22 @@ The example below showcases three tasks, two of them expect a message.
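The three-task message passing example itself is included from the repository (its output follows). As a rough sketch of the API, with illustrative task names and an illustrative `i32` payload, the receiving task declares the message as an extra parameter:

``` rust
// Sketch only: `foo` posts a message to `bar`.
#[task(spawn = [bar])]
fn foo(cx: foo::Context) {
    cx.spawn.bar(42).unwrap();
}

#[task]
fn bar(_cx: bar::Context, payload: i32) {
    // use `payload` here
}
```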
``` console $ cargo run --example message -{{#include ../../../../ci/expected/message.run}}``` +{{#include ../../../../ci/expected/message.run}} +``` ## Capacity -Task dispatchers do *not* use any dynamic memory allocation. The memory required -to store messages is statically reserved. The framework will reserve enough -space for every context to be able to spawn each task at most once. This is a -sensible default but the "inbox" capacity of each task can be controlled using -the `capacity` argument of the `task` attribute. +RTIC does *not* perform any form of heap-based memory allocation. The memory +required to store messages is statically reserved. By default the framework +minimizes the memory footprint of the application so each task has a message +"capacity" of 1: meaning that at most one message can be posted to the task +before it gets a chance to run. This default can be overridden for each task +using the `capacity` argument. This argument takes a positive integer that +indicates how many messages the task message buffer can hold. The example below sets the capacity of the software task `foo` to 4. If the capacity is not specified then the second `spawn.foo` call in `UART0` would -fail. +fail (panic). ``` rust {{#include ../../../../examples/capacity.rs}} @@ -60,4 +65,56 @@ fail. ``` console $ cargo run --example capacity -{{#include ../../../../ci/expected/capacity.run}}``` +{{#include ../../../../ci/expected/capacity.run}} +``` + +## Error handling + +The `spawn` API returns the `Err` variant when there's no space to send the +message. In most scenarios spawning errors are handled in one of two ways: + +- Panicking, using `unwrap`, `expect`, etc. This approach is used to catch the + programmer error (i.e. bug) of selecting a capacity that was too small. When + this panic is encountered during testing choosing a bigger capacity and + recompiling the program may fix the issue but sometimes it's necessary to dig + deeper and perform a timing analysis of the application to check if the + platform can deal with peak payload or if the processor needs to be replaced + with a faster one. + +- Ignoring the result. In soft real-time and non real-time applications it may + be OK to occasionally lose data or fail to respond to some events during event + bursts. In those scenarios silently letting a `spawn` call fail may be + acceptable. + +It should be noted that retrying a `spawn` call is usually the wrong approach as +this operation will likely never succeed in practice. Because there are only +context switches towards *higher* priority tasks retrying the `spawn` call of a +lower priority task will never let the scheduler dispatch said task meaning that +its message buffer will never be emptied. This situation is depicted in the +following snippet: + +``` rust +#[rtic::app(..)] +mod app { + #[init(spawn = [foo, bar])] + fn init(cx: init::Context) { + cx.spawn.foo().unwrap(); + cx.spawn.bar().unwrap(); + } + + #[task(priority = 2, spawn = [bar])] + fn foo(cx: foo::Context) { + // .. + + // the program will get stuck here + while cx.spawn.bar(payload).is_err() { + // retry the spawn call if it failed + } + } + + #[task(priority = 1)] + fn bar(cx: bar::Context, payload: i32) { + // .. 
+ } +} +``` diff --git a/book/en/src/by-example/timer-queue.md b/book/en/src/by-example/timer-queue.md index 167939ce..482aebc1 100644 --- a/book/en/src/by-example/timer-queue.md +++ b/book/en/src/by-example/timer-queue.md @@ -1,37 +1,47 @@ # Timer queue -When the `timer-queue` feature is enabled the RTFM framework includes a *global -timer queue* that applications can use to *schedule* software tasks to run at -some time in the future. - -> **NOTE**: The timer-queue feature can't be enabled when the target is -> `thumbv6m-none-eabi` because there's no timer queue support for ARMv6-M. This -> may change in the future. - -> **NOTE**: When the `timer-queue` feature is enabled you will *not* be able to -> use the `SysTick` exception as a hardware task because the runtime uses it to -> implement the global timer queue. - -To be able to schedule a software task the name of the task must appear in the -`schedule` argument of the context attribute. When scheduling a task the -[`Instant`] at which the task should be executed must be passed as the first -argument of the `schedule` invocation. - -[`Instant`]: ../../api/rtfm/struct.Instant.html - -The RTFM runtime includes a monotonic, non-decreasing, 32-bit timer which can be -queried using the `Instant::now` constructor. A [`Duration`] can be added to -`Instant::now()` to obtain an `Instant` into the future. The monotonic timer is -disabled while `init` runs so `Instant::now()` always returns the value -`Instant(0 /* clock cycles */)`; the timer is enabled right before the -interrupts are re-enabled and `idle` is executed. - -[`Duration`]: ../../api/rtfm/struct.Duration.html +In contrast with the `spawn` API, which immediately spawns a software task onto +the scheduler, the `schedule` API can be used to schedule a task to run some +time in the future. + +To use the `schedule` API a monotonic timer must be first defined using the +`monotonic` argument of the `#[app]` attribute. This argument takes a path to a +type that implements the [`Monotonic`] trait. The associated type, `Instant`, of +this trait represents a timestamp in arbitrary units and it's used extensively +in the `schedule` API -- it is suggested to model this type after [the one in +the standard library][std-instant]. + +Although not shown in the trait definition (due to limitations in the trait / +type system) the subtraction of two `Instant`s should return some `Duration` +type (see [`core::time::Duration`]) and this `Duration` type must implement the +`TryInto<u32>` trait. The implementation of this trait must convert the +`Duration` value, which uses some arbitrary unit of time, into the "system timer +(SYST) clock cycles" time unit. The result of the conversion must be a 32-bit +integer. If the result of the conversion doesn't fit in a 32-bit number then the +operation must return an error, any error type. + +[`Monotonic`]: ../../../api/rtic/trait.Monotonic.html +[std-instant]: https://doc.rust-lang.org/std/time/struct.Instant.html +[`core::time::Duration`]: https://doc.rust-lang.org/core/time/struct.Duration.html + +For ARMv7+ targets the `rtic` crate provides a `Monotonic` implementation based +on the built-in CYCle CouNTer (CYCCNT). Note that this is a 32-bit timer clocked +at the frequency of the CPU and as such it is not suitable for tracking time +spans in the order of seconds. + +To be able to schedule a software task from a context the name of the task must +first appear in the `schedule` argument of the context attribute. 
When +scheduling a task the (user-defined) `Instant` at which the task should be +executed must be passed as the first argument of the `schedule` invocation. + +Additionally, the chosen `monotonic` timer must be configured and initialized +during the `#[init]` phase. Note that this is *also* the case if you choose to +use the `CYCCNT` provided by the `cortex-m-rtic` crate. The example below schedules two tasks from `init`: `foo` and `bar`. `foo` is scheduled to run 8 million clock cycles in the future. Next, `bar` is scheduled -to run 4 million clock cycles in the future. `bar` runs before `foo` since it -was scheduled to run first. +to run 4 million clock cycles in the future. Thus `bar` runs before `foo` since +it was scheduled to run first. > **IMPORTANT**: The examples that use the `schedule` API or the `Instant` > abstraction will **not** properly work on QEMU because the Cortex-M cycle @@ -41,12 +51,19 @@ was scheduled to run first. {{#include ../../../../examples/schedule.rs}} ``` -Running the program on real hardware produces the following output in the console: +Running the program on real hardware produces the following output in the +console: ``` text {{#include ../../../../ci/expected/schedule.run}} ``` +When the `schedule` API is being used the runtime internally uses the `SysTick` +interrupt handler and the system timer peripheral (`SYST`) so neither can be +used by the application. This is accomplished by changing the type of +`init::Context.core` from `cortex_m::Peripherals` to `rtic::Peripherals`. The +latter structure contains all the fields of the former minus the `SYST` one. + ## Periodic tasks Software tasks have access to the `Instant` at which they were scheduled to run @@ -80,9 +97,10 @@ the task. Depending on the priority of the task and the load of the system the What do you think will be the value of `scheduled` for software tasks that are *spawned* instead of scheduled? The answer is that spawned tasks inherit the *baseline* time of the context that spawned it. The baseline of hardware tasks -is `start`, the baseline of software tasks is `scheduled` and the baseline of -`init` is `start = Instant(0)`. `idle` doesn't really have a baseline but tasks -spawned from it will use `Instant::now()` as their baseline time. +is their `start` time, the baseline of software tasks is their `scheduled` time +and the baseline of `init` is the system start time or time zero +(`Instant::zero()`). `idle` doesn't really have a baseline but tasks spawned +from it will use `Instant::now()` as their baseline time. The example below showcases the different meanings of the *baseline*. diff --git a/book/en/src/by-example/tips.md b/book/en/src/by-example/tips.md index c0bfc56e..d8264c90 100644 --- a/book/en/src/by-example/tips.md +++ b/book/en/src/by-example/tips.md @@ -2,10 +2,21 @@ ## Generics -Resources shared between two or more tasks implement the `Mutex` trait in *all* -contexts, even on those where a critical section is not required to access the -data. This lets you easily write generic code that operates on resources and can -be called from different tasks. Here's one such example: +Resources may appear in contexts as resource proxies or as unique references +(`&mut-`) depending on the priority of the task. Because the same resource may +appear as *different* types in different contexts one cannot refactor a common +operation that uses resources into a plain function; however, such refactor is +possible using *generics*. + +All resource proxies implement the `rtic::Mutex` trait. 
On the other hand, +unique references (`&mut-`) do *not* implement this trait (due to limitations in +the trait system) but one can wrap these references in the [`rtic::Exclusive`] +newtype which does implement the `Mutex` trait. With the help of this newtype +one can write a generic function that operates on generic resources and call it +from different tasks to perform some operation on the same set of resources. +Here's one such example: + +[`rtic::Exclusive`]: ../../../api/rtic/struct.Exclusive.html ``` rust {{#include ../../../../examples/generics.rs}} ``` ``` console $ cargo run --example generics -{{#include ../../../../ci/expected/generics.run}}``` +{{#include ../../../../ci/expected/generics.run}} +``` -This also lets you change the static priorities of tasks without having to -rewrite code. If you consistently use `lock`s to access the data behind shared -resources then your code will continue to compile when you change the priority -of tasks. +Using generics also lets you change the static priorities of tasks during +development without having to rewrite a bunch of code every time. ## Conditional compilation -You can use conditional compilation (`#[cfg]`) on resources (`static [mut]` -items) and tasks (`fn` items). The effect of using `#[cfg]` attributes is that -the resource / task will *not* be injected into the prelude of tasks that use -them (see `resources`, `spawn` and `schedule`) if the condition doesn't hold. +You can use conditional compilation (`#[cfg]`) on resources (the fields of +`struct Resources`) and tasks (the `fn` items). The effect of using `#[cfg]` +attributes is that the resource / task will *not* be available through the +corresponding `Context` `struct` if the condition doesn't hold. The example below logs a message whenever the `foo` task is spawned, but only if the program has been compiled using the `dev` profile. @@ -34,10 +44,17 @@ the program has been compiled using the `dev` profile. {{#include ../../../../examples/cfg.rs}} ``` +``` console +$ cargo run --example cfg --release + +$ cargo run --example cfg +{{#include ../../../../ci/expected/cfg.run}} +``` + ## Running tasks from RAM -The main goal of moving the specification of RTFM applications to attributes in -RTFM v0.4.x was to allow inter-operation with other attributes. For example, the +The main goal of moving the specification of RTIC applications to attributes in +RTIC v0.4.0 was to allow inter-operation with other attributes. For example, the `link_section` attribute can be applied to tasks to place them in RAM; this can improve performance in some cases. @@ -63,31 +80,101 @@ Running this program produces the expected output. ``` console $ cargo run --example ramfunc -{{#include ../../../../ci/expected/ramfunc.run}}``` +{{#include ../../../../ci/expected/ramfunc.run}} +``` One can look at the output of `cargo-nm` to confirm that `bar` ended in RAM (`0x2000_0000`), whereas `foo` ended in Flash (`0x0000_0000`).
``` console $ cargo nm --example ramfunc --release | grep ' foo::' -{{#include ../../../../ci/expected/ramfunc.grep.foo}}``` +{{#include ../../../../ci/expected/ramfunc.grep.foo}} +``` ``` console $ cargo nm --example ramfunc --release | grep ' bar::' -{{#include ../../../../ci/expected/ramfunc.grep.bar}}``` +{{#include ../../../../ci/expected/ramfunc.grep.bar}} +``` + +## Indirection for faster message passing + +Message passing always involves copying the payload from the sender into a +static variable and then from the static variable into the receiver. Thus +sending a large buffer, like a `[u8; 128]`, as a message involves two expensive +`memcpy`s. To minimize the message passing overhead one can use indirection: +instead of sending the buffer by value, one can send an owning pointer into the +buffer. + +One can use a global allocator to achieve indirection (`alloc::Box`, +`alloc::Rc`, etc.), which requires using the nightly channel as of Rust v1.37.0, +or one can use a statically allocated memory pool like [`heapless::Pool`]. + +[`heapless::Pool`]: https://docs.rs/heapless/0.5.0/heapless/pool/index.html + +Here's an example where `heapless::Pool` is used to "box" buffers of 128 bytes. + +``` rust +{{#include ../../../../examples/pool.rs}} +``` +``` console +$ cargo run --example pool +{{#include ../../../../ci/expected/pool.run}} +``` + +## Inspecting the expanded code + +`#[rtic::app]` is a procedural macro that produces support code. If for some +reason you need to inspect the code generated by this macro you have two +options: -## `binds` +You can inspect the file `rtic-expansion.rs` inside the `target` directory. This +file contains the expansion of the `#[rtic::app]` item (not your whole program!) +of the *last built* (via `cargo build` or `cargo check`) RTIC application. The +expanded code is not pretty printed by default so you'll want to run `rustfmt` +over it before you read it. -**NOTE**: Requires RTFM ~0.4.2 +``` console +$ cargo build --example foo + +$ rustfmt target/rtic-expansion.rs -You can give hardware tasks more task-like names using the `binds` argument: you -name the function as you wish and specify the name of the interrupt / exception -in the `binds` argument. Types like `Spawn` will be placed in a module named -after the function, not the interrupt / exception. Example below: +$ tail target/rtic-expansion.rs +``` ``` rust -{{#include ../../../../examples/binds.rs}} +#[doc = r" Implementation details"] +mod app { + #[doc = r" Always include the device crate which contains the vector table"] + use lm3s6965 as _; + #[no_mangle] + unsafe extern "C" fn main() -> ! { + rtic::export::interrupt::disable(); + let mut core: rtic::export::Peripherals = core::mem::transmute(()); + core.SCB.scr.modify(|r| r | 1 << 1); + rtic::export::interrupt::enable(); + loop { + rtic::export::wfi() + } + } +} ``` + +Or, you can use the [`cargo-expand`] sub-command. This sub-command will expand +*all* the macros, including the `#[rtic::app]` attribute, and modules in your +crate and print the output to the console. + +[`cargo-expand`]: https://crates.io/crates/cargo-expand + ``` console -$ cargo run --example binds -{{#include ../../../../ci/expected/binds.run}}``` +$ # produces the same output as before +$ cargo expand --example smallest | tail +``` + +## Resource de-structure-ing + +When having a task taking multiple resources it can help in readability to split +up the resource struct. 
Here are two examples of how this can be done: + +``` rust +{{#include ../../../../examples/destructure.rs}} +``` diff --git a/book/en/src/by-example/types-send-sync.md b/book/en/src/by-example/types-send-sync.md index da53cf96..9cdb8894 100644 --- a/book/en/src/by-example/types-send-sync.md +++ b/book/en/src/by-example/types-send-sync.md @@ -1,14 +1,13 @@ # Types, Send and Sync -The `app` attribute injects a context, a collection of variables, into every -function. All these variables have predictable, non-anonymous types so you can -write plain functions that take them as arguments. +Every function within the `app` module has a `Context` structure as its +first parameter. All the fields of these structures have predictable, +non-anonymous types so you can write plain functions that take them as arguments. The API reference specifies how these types are generated from the input. You can also generate documentation for your binary crate (`cargo doc --bin <name>`); in the documentation you'll find `Context` structs (e.g. `init::Context` and -`idle::Context`) whose fields represent the variables injected into each -function. +`idle::Context`). The example below shows the different types generated by the `app` attribute. @@ -19,10 +18,10 @@ The example below shows the different types generated by the `app` attribute. ## `Send` [`Send`] is a marker trait for "types that can be transferred across thread -boundaries", according to its definition in `core`. In the context of RTFM the +boundaries", according to its definition in `core`. In the context of RTIC the `Send` trait is only required where it's possible to transfer a value between -tasks that run at *different* priorities. This occurs in a few places: in message -passing, in shared `static mut` resources and in the initialization of late +tasks that run at *different* priorities. This occurs in a few places: in +message passing, in shared resources and in the initialization of late resources. [`Send`]: https://doc.rust-lang.org/core/marker/trait.Send.html @@ -31,7 +30,7 @@ The `app` attribute will enforce that `Send` is implemented where required so you don't need to worry much about it. It's more important to know where you do *not* need the `Send` trait: on types that are transferred between tasks that run at the *same* priority. This occurs in two places: in message passing and in -shared `static mut` resources. +shared resources. The example below shows where a type that doesn't implement `Send` can be used. @@ -39,19 +38,34 @@ The example below shows where a type that doesn't implement `Send` can be used. {{#include ../../../../examples/not-send.rs}} ``` +It's important to note that late initialization of resources is effectively a +send operation where the initial value is sent from the background context, +which has the lowest priority of `0`, to a task, which will run at a priority +greater than or equal to `1`. Thus all late resources need to implement the +`Send` trait, except for those exclusively accessed by `idle`, which runs at a +priority of `0`. + +Sharing a resource with `init` can be used to implement late initialization, see +the example below. For that reason, resources shared with `init` must also implement +the `Send` trait. + +``` rust +{{#include ../../../../examples/shared-with-init.rs}} +``` + ## `Sync` Similarly, [`Sync`] is a marker trait for "types for which it is safe to share references between threads", according to its definition in `core`.
In the -context of RTFM the `Sync` trait is only required where it's possible for two, -or more, tasks that run at different priority to hold a shared reference to a -resource. This only occurs with shared `static` resources. +context of RTIC the `Sync` trait is only required where it's possible for two, +or more, tasks that run at different priorities to get a shared reference +(`&-`) to a resource. This only occurs with shared access (`&-`) resources. [`Sync`]: https://doc.rust-lang.org/core/marker/trait.Sync.html The `app` attribute will enforce that `Sync` is implemented where required but -it's important to know where the `Sync` bound is not required: in `static` -resources shared between tasks that run at the *same* priority. +it's important to know where the `Sync` bound is not required: shared access +(`&-`) resources contended by tasks that run at the *same* priority. The example below shows where a type that doesn't implement `Sync` can be used. diff --git a/book/en/src/internals.md b/book/en/src/internals.md index 0ef55e62..3b570248 100644 --- a/book/en/src/internals.md +++ b/book/en/src/internals.md @@ -1,6 +1,11 @@ # Under the hood -This section describes the internals of the RTFM framework at a *high level*. +This section describes the internals of the RTIC framework at a *high level*. Low level details like the parsing and code generation done by the procedural macro (`#[app]`) will not be explained here. The focus will be the analysis of the user specification and the data structures used by the runtime. + +We highly suggest that you read the embedonomicon section on [concurrency] +before you dive into this material. + +[concurrency]: https://github.com/rust-embedded/embedonomicon/pull/48 diff --git a/book/en/src/internals/access.md b/book/en/src/internals/access.md new file mode 100644 index 00000000..3894470c --- /dev/null +++ b/book/en/src/internals/access.md @@ -0,0 +1,158 @@ +# Access control + +One of the core foundations of RTIC is access control. Controlling which parts +of the program can access which static variables is instrumental to enforcing +memory safety. + +Static variables are used to share state between interrupt handlers, or between +interrupt handlers and the bottom execution context, `main`. In normal Rust +code it's hard to have fine-grained control over which functions can access a +static variable because static variables can be accessed from any function that +resides in the same scope in which they are declared. Modules give some control +over how a static variable can be accessed but they are not flexible enough. + +To achieve the fine-grained access control where tasks can only access the +static variables (resources) that they have specified in their RTIC attribute +the RTIC framework performs a source code level transformation. This +transformation consists of placing the resources (static variables) specified by +the user *inside* a module and the user code *outside* the module. +This makes it impossible for the user code to refer to these static variables. + +Access to the resources is then given to each task using a `Resources` struct +whose fields correspond to the resources the task has access to. There's one +such struct per task and the `Resources` struct is initialized with either a +unique reference (`&mut-`) to the static variables or with a resource proxy (see +section on [critical sections](critical-sections.html)).
+ +The code below is an example of the kind of source level transformation that +happens behind the scenes: + +``` rust +#[rtic::app(device = ..)] +mod app { + static mut X: u64 = 0; + static mut Y: bool = false; + + #[init(resources = [Y])] + fn init(c: init::Context) { + // .. user code .. + } + + #[interrupt(binds = UART0, resources = [X])] + fn foo(c: foo::Context) { + // .. user code .. + } + + #[interrupt(binds = UART1, resources = [X, Y])] + fn bar(c: bar::Context) { + // .. user code .. + } + + // .. +} +``` + +The framework produces code like this: + +``` rust +fn init(c: init::Context) { + // .. user code .. +} + +fn foo(c: foo::Context) { + // .. user code .. +} + +fn bar(c: bar::Context) { + // .. user code .. +} + +// Public API +pub mod init { + pub struct Context<'a> { + pub resources: Resources<'a>, + // .. + } + + pub struct Resources<'a> { + pub Y: &'a mut bool, + } +} + +pub mod foo { + pub struct Context<'a> { + pub resources: Resources<'a>, + // .. + } + + pub struct Resources<'a> { + pub X: &'a mut u64, + } +} + +pub mod bar { + pub struct Context<'a> { + pub resources: Resources<'a>, + // .. + } + + pub struct Resources<'a> { + pub X: &'a mut u64, + pub Y: &'a mut bool, + } +} + +/// Implementation details +mod app { + // everything inside this module is hidden from user code + + static mut X: u64 = 0; + static mut Y: bool = false; + + // the real entry point of the program + unsafe fn main() -> ! { + interrupt::disable(); + + // .. + + // call into user code; pass references to the static variables + init(init::Context { + resources: init::Resources { + Y: &mut Y, + }, + // .. + }); + + // .. + + interrupt::enable(); + + // .. + } + + // interrupt handler that `foo` binds to + #[no_mangle] + unsafe fn UART0() { + // call into user code; pass references to the static variables + foo(foo::Context { + resources: foo::Resources { + X: &mut X, + }, + // .. + }); + } + + // interrupt handler that `bar` binds to + #[no_mangle] + unsafe fn UART1() { + // call into user code; pass references to the static variables + bar(bar::Context { + resources: bar::Resources { + X: &mut X, + Y: &mut Y, + }, + // .. + }); + } +} +``` diff --git a/book/en/src/internals/ceilings.md b/book/en/src/internals/ceilings.md index 2c645a4d..07bd0add 100644 --- a/book/en/src/internals/ceilings.md +++ b/book/en/src/internals/ceilings.md @@ -1,3 +1,84 @@ # Ceiling analysis -**TODO** +A resource *priority ceiling*, or just *ceiling*, is the dynamic priority that +any task must have to safely access the resource memory. Ceiling analysis is +relatively simple but critical to the memory safety of RTIC applications. + +To compute the ceiling of a resource we must first collect a list of tasks that +have access to the resource -- as the RTIC framework enforces access control to +resources at compile time it also has access to this information at compile +time. The ceiling of the resource is simply the highest logical priority among +those tasks. + +`init` and `idle` are not proper tasks but they can access resources so they +need to be considered in the ceiling analysis. `idle` is considered as a task +that has a logical priority of `0` whereas `init` is completely omitted from the +analysis -- the reason for that is that `init` never uses (or needs) critical +sections to access static variables. + +In the previous section we showed that a shared resource may appear as a unique +reference (`&mut-`) or behind a proxy depending on the task that has access to +it.
Which version is presented to the task depends on the task priority and the +resource ceiling. If the task priority is the same as the resource ceiling then +the task gets a unique reference (`&mut-`) to the resource memory, otherwise the +task gets a proxy -- this also applies to `idle`. `init` is special: it always +gets a unique reference (`&mut-`) to resources. + +An example to illustrate the ceiling analysis: + +``` rust +#[rtic::app(device = ..)] +mod app { + struct Resources { + // accessed by `foo` (prio = 1) and `bar` (prio = 2) + // -> CEILING = 2 + #[init(0)] + x: u64, + + // accessed by `idle` (prio = 0) + // -> CEILING = 0 + #[init(0)] + y: u64, + } + + #[init(resources = [x, y])] + fn init(c: init::Context) { + // unique reference because this is `init` + let x: &mut u64 = c.resources.x; + + // unique reference because this is `init` + let y: &mut u64 = c.resources.y; + + // .. + } + + // PRIORITY = 0 + #[idle(resources = [y])] + fn idle(c: idle::Context) -> ! { + // unique reference because priority (0) == resource ceiling (0) + let y: &'static mut u64 = c.resources.y; + + loop { + // .. + } + } + + #[interrupt(binds = UART0, priority = 1, resources = [x])] + fn foo(c: foo::Context) { + // resource proxy because task priority (1) < resource ceiling (2) + let x: resources::x = c.resources.x; + + // .. + } + + #[interrupt(binds = UART1, priority = 2, resources = [x])] + fn bar(c: bar::Context) { + // unique reference because task priority (2) == resource ceiling (2) + let x: &mut u64 = c.resources.x; + + // .. + } + + // .. +} +``` diff --git a/book/en/src/internals/critical-sections.md b/book/en/src/internals/critical-sections.md new file mode 100644 index 00000000..a064ad09 --- /dev/null +++ b/book/en/src/internals/critical-sections.md @@ -0,0 +1,523 @@ +# Critical sections + +When a resource (static variable) is shared between two, or more, tasks that run +at different priorities some form of mutual exclusion is required to mutate the +memory in a data race free manner. In RTIC we use priority-based critical +sections to guarantee mutual exclusion (see the [Immediate Ceiling Priority +Protocol][icpp]). + +[icpp]: https://en.wikipedia.org/wiki/Priority_ceiling_protocol + +The critical section consists of temporarily raising the *dynamic* priority of +the task. While a task is within this critical section all the other tasks that +may request the resource are *not allowed to start*. + +How high must the dynamic priority be to ensure mutual exclusion on a particular +resource? The [ceiling analysis](ceilings.html) is in charge of +answering that question and will be discussed in the next section. This section +will focus on the implementation of the critical section. + +## Resource proxy + +For simplicity, let's look at a resource shared by two tasks that run at +different priorities. Clearly one of the tasks can preempt the other; to prevent +a data race the *lower priority* task must use a critical section when it needs +to modify the shared memory. On the other hand, the higher priority task can +directly modify the shared memory because it can't be preempted by the lower +priority task. To enforce the use of a critical section on the lower priority +task we give it a *resource proxy*, whereas we give a unique reference +(`&mut-`) to the higher priority task.

The example below shows the different types handed out to each task:

``` rust
#[rtic::app(device = ..)]
mod app {
    struct Resources {
        #[init(0)]
        x: u64,
    }

    #[interrupt(binds = UART0, priority = 1, resources = [x])]
    fn foo(c: foo::Context) {
        // resource proxy
        let mut x: resources::x = c.resources.x;

        x.lock(|x: &mut u64| {
            // critical section
            *x += 1
        });
    }

    #[interrupt(binds = UART1, priority = 2, resources = [x])]
    fn bar(c: bar::Context) {
        let x: &mut u64 = c.resources.x;

        *x += 1;
    }

    // ..
}
```

Now let's see how these types are created by the framework.

``` rust
fn foo(c: foo::Context) {
    // .. user code ..
}

fn bar(c: bar::Context) {
    // .. user code ..
}

pub mod resources {
    pub struct x {
        // ..
    }
}

pub mod foo {
    pub struct Resources {
        pub x: resources::x,
    }

    pub struct Context {
        pub resources: Resources,
        // ..
    }
}

pub mod bar {
    pub struct Resources<'a> {
        pub x: &'a mut u64,
    }

    pub struct Context {
        pub resources: Resources,
        // ..
    }
}

mod app {
    static mut x: u64 = 0;

    impl rtic::Mutex for resources::x {
        type T = u64;

        fn lock<R>(&mut self, f: impl FnOnce(&mut u64) -> R) -> R {
            // we'll check this in detail later
        }
    }

    #[no_mangle]
    unsafe fn UART0() {
        foo(foo::Context {
            resources: foo::Resources {
                x: resources::x::new(/* .. */),
            },
            // ..
        })
    }

    #[no_mangle]
    unsafe fn UART1() {
        bar(bar::Context {
            resources: bar::Resources {
                x: &mut x,
            },
            // ..
        })
    }
}
```

## `lock`

Let's now zoom into the critical section itself. In this example, we have to
raise the dynamic priority to at least `2` to prevent a data race. On the
Cortex-M architecture the dynamic priority can be changed by writing to the
`BASEPRI` register.

The semantics of the `BASEPRI` register are as follows:

- Writing a value of `0` to `BASEPRI` disables its functionality.
- Writing a non-zero value to `BASEPRI` changes the priority level required for
  interrupt preemption. However, this only has an effect when the written value
  is *lower* than the priority level of the current execution context -- note
  that a lower hardware priority level means a higher logical priority.

Thus the dynamic priority at any point in time can be computed as

``` rust
dynamic_priority = max(hw2logical(BASEPRI), hw2logical(static_priority))
```

Where `static_priority` is the priority programmed in the NVIC for the current
interrupt, or a logical `0` when the current context is `idle`.

In this particular example we could implement the critical section as follows:

> **NOTE:** this is a simplified implementation

``` rust
impl rtic::Mutex for resources::x {
    type T = u64;

    fn lock<R, F>(&mut self, f: F) -> R
    where
        F: FnOnce(&mut u64) -> R,
    {
        unsafe {
            // start of critical section: raise dynamic priority to `2`
            asm!("msr BASEPRI, 192" : : : "memory" : "volatile");

            // run user code within the critical section
            let r = f(&mut x);

            // end of critical section: restore dynamic priority to its static value (`1`)
            asm!("msr BASEPRI, 0" : : : "memory" : "volatile");

            r
        }
    }
}
```

Here it's important to use the `"memory"` clobber in the `asm!` block. It
prevents the compiler from reordering memory operations across it. This is
important because accessing the variable `x` outside the critical section would
result in a data race.

It's important to note that the signature of the `lock` method prevents nesting
calls to it.
This is required for memory safety, as nested calls would produce
multiple unique references (`&mut-`) to `x`, breaking Rust aliasing rules. See
below:

``` rust
#[interrupt(binds = UART0, priority = 1, resources = [x])]
fn foo(c: foo::Context) {
    // resource proxy
    let mut res: resources::x = c.resources.x;

    res.lock(|x: &mut u64| {
        res.lock(|alias: &mut u64| {
            //~^ error: `res` has already been uniquely borrowed (`&mut-`)
            // ..
        });
    });
}
```

## Nesting

Nesting calls to `lock` on the *same* resource must be rejected by the compiler
for memory safety, but nesting `lock` calls on *different* resources is a valid
operation. In that case we want to make sure that nesting critical sections
never results in lowering the dynamic priority, as that would be unsound, and we
also want to optimize the number of writes to the `BASEPRI` register and
compiler fences. To that end we'll track the dynamic priority of the task using
a stack variable and use that to decide whether to write to `BASEPRI` or not. In
practice, the stack variable will be optimized away, but it still provides
useful information to the compiler.

Consider this program:

``` rust
#[rtic::app(device = ..)]
mod app {
    struct Resources {
        #[init(0)]
        x: u64,
        #[init(0)]
        y: u64,
    }

    #[init]
    fn init() {
        rtic::pend(Interrupt::UART0);
    }

    #[interrupt(binds = UART0, priority = 1, resources = [x, y])]
    fn foo(c: foo::Context) {
        let mut x = c.resources.x;
        let mut y = c.resources.y;

        y.lock(|y| {
            *y += 1;

            x.lock(|x| {
                *x += 1;
            });

            *y += 1;
        });

        // mid-point

        x.lock(|x| {
            *x += 1;

            y.lock(|y| {
                *y += 1;
            });

            *x += 1;
        })
    }

    #[interrupt(binds = UART1, priority = 2, resources = [x])]
    fn bar(c: bar::Context) {
        // ..
    }

    #[interrupt(binds = UART2, priority = 3, resources = [y])]
    fn baz(c: baz::Context) {
        // ..
    }

    // ..
}
```

The code generated by the framework looks like this:

``` rust
// omitted: user code

pub mod resources {
    pub struct x<'a> {
        priority: &'a Cell<u8>,
    }

    impl<'a> x<'a> {
        pub unsafe fn new(priority: &'a Cell<u8>) -> Self {
            x { priority }
        }

        pub unsafe fn priority(&self) -> &Cell<u8> {
            self.priority
        }
    }

    // repeat for `y`
}

pub mod foo {
    pub struct Context {
        pub resources: Resources,
        // ..
    }

    pub struct Resources<'a> {
        pub x: resources::x<'a>,
        pub y: resources::y<'a>,
    }
}

mod app {
    use cortex_m::register::basepri;

    #[no_mangle]
    unsafe fn UART1() {
        // the static priority of this interrupt (as specified by the user)
        const PRIORITY: u8 = 2;

        // take a snapshot of the BASEPRI
        let initial = basepri::read();

        let priority = Cell::new(PRIORITY);
        bar(bar::Context {
            resources: bar::Resources::new(&priority),
            // ..
        });

        // roll back the BASEPRI to the snapshot value we took before
        basepri::write(initial); // same as the `asm!` block we saw before
    }

    // similarly for `UART0` / `foo` and `UART2` / `baz`

    impl<'a> rtic::Mutex for resources::x<'a> {
        type T = u64;

        fn lock<R>(&mut self, f: impl FnOnce(&mut u64) -> R) -> R {
            unsafe {
                // the priority ceiling of this resource
                const CEILING: u8 = 2;

                let current = self.priority().get();
                if current < CEILING {
                    // raise dynamic priority
                    self.priority().set(CEILING);
                    basepri::write(logical2hw(CEILING));

                    let r = f(&mut x);

                    // restore dynamic priority
                    basepri::write(logical2hw(current));
                    self.priority().set(current);

                    r
                } else {
                    // dynamic priority is high enough
                    f(&mut x)
                }
            }
        }
    }

    // repeat for resource `y`
}
```

At the end the compiler will optimize the function `foo` into something like
this:

``` rust
fn foo(c: foo::Context) {
    // NOTE: BASEPRI contains the value `0` (its reset value) at this point

    // raise dynamic priority to `3`
    unsafe { basepri::write(160) }

    // the two operations on `y` are merged into one
    y += 2;

    // BASEPRI is not modified to access `x` because the dynamic priority is high enough
    x += 1;

    // lower (restore) the dynamic priority to `1`
    unsafe { basepri::write(224) }

    // mid-point

    // raise dynamic priority to `2`
    unsafe { basepri::write(192) }

    x += 1;

    // raise dynamic priority to `3`
    unsafe { basepri::write(160) }

    y += 1;

    // lower (restore) the dynamic priority to `2`
    unsafe { basepri::write(192) }

    // NOTE: it would be sound to merge this operation on `x` with the previous one but
    // compiler fences are coarse-grained and prevent such an optimization
    x += 1;

    // lower (restore) the dynamic priority to `1`
    unsafe { basepri::write(224) }

    // NOTE: BASEPRI contains the value `224` at this point
    // the UART0 handler will restore the value to `0` before returning
}
```

## The BASEPRI invariant

An invariant that the RTIC framework has to preserve is that the value of
BASEPRI at the start of an *interrupt* handler must be the same value it has
when the interrupt handler returns. BASEPRI may change during the execution of
the interrupt handler but running an interrupt handler from start to finish
should not result in an observable change of BASEPRI.

This invariant needs to be preserved to avoid raising the dynamic priority of a
handler through preemption. This is best observed in the following example:

``` rust
#[rtic::app(device = ..)]
mod app {
    struct Resources {
        #[init(0)]
        x: u64,
    }

    #[init]
    fn init() {
        // `foo` will run right after `init` returns
        rtic::pend(Interrupt::UART0);
    }

    #[task(binds = UART0, priority = 1)]
    fn foo() {
        // BASEPRI is `0` at this point; the dynamic priority is currently `1`

        // `bar` will preempt `foo` at this point
        rtic::pend(Interrupt::UART1);

        // BASEPRI is `192` at this point (due to a bug); the dynamic priority is now `2`
        // this function returns to `idle`
    }

    #[task(binds = UART1, priority = 2, resources = [x])]
    fn bar() {
        // BASEPRI is `0` (dynamic priority = 2)

        x.lock(|x| {
            // BASEPRI is raised to `160` (dynamic priority = 3)

            // ..
        });

        // BASEPRI is restored to `192` (dynamic priority = 2)
    }

    #[idle]
    fn idle() -> !
    {
        // BASEPRI is `192` (due to a bug); dynamic priority = 2

        // this has no effect due to the BASEPRI value
        // the task `foo` will never be executed again
        rtic::pend(Interrupt::UART0);

        loop {
            // ..
        }
    }

    #[task(binds = UART2, priority = 3, resources = [x])]
    fn baz() {
        // ..
    }
}
```

IMPORTANT: let's say we *forget* to roll back `BASEPRI` in `UART1` -- this would
be a bug in the RTIC code generator.

``` rust
// code generated by RTIC

mod app {
    // ..

    #[no_mangle]
    unsafe fn UART1() {
        // the static priority of this interrupt (as specified by the user)
        const PRIORITY: u8 = 2;

        // take a snapshot of the BASEPRI
        let initial = basepri::read();

        let priority = Cell::new(PRIORITY);
        bar(bar::Context {
            resources: bar::Resources::new(&priority),
            // ..
        });

        // BUG: we FORGOT to roll back BASEPRI to the snapshot value we took
        // before -- the `basepri::write(initial)` call is missing here
    }
}
```

The consequence is that `idle` will run at a dynamic priority of `2` and in fact
the system will never again run at a dynamic priority lower than `2`. This
doesn't compromise the memory safety of the program but affects task scheduling:
in this particular case tasks with a priority of `1` will never get a chance to
run.
diff --git a/book/en/src/internals/interrupt-configuration.md b/book/en/src/internals/interrupt-configuration.md
new file mode 100644
index 00000000..7aec9c9f
--- /dev/null
+++ b/book/en/src/internals/interrupt-configuration.md
@@ -0,0 +1,73 @@
+# Interrupt configuration
+
+Interrupts are core to the operation of RTIC applications. Correctly setting
interrupt priorities and ensuring they remain fixed at runtime is a requisite
for the memory safety of the application.

The RTIC framework exposes interrupt priorities as something that is declared at
compile time. However, this static configuration must be programmed into the
relevant registers during the initialization of the application. The interrupt
configuration is done before the `init` function runs.

This example gives you an idea of the code that the RTIC framework runs:

``` rust
#[rtic::app(device = lm3s6965)]
mod app {
    #[init]
    fn init(c: init::Context) {
        // .. user code ..
    }

    #[idle]
    fn idle(c: idle::Context) -> ! {
        // .. user code ..
    }

    #[interrupt(binds = UART0, priority = 2)]
    fn foo(c: foo::Context) {
        // .. user code ..
    }
}
```

The framework generates an entry point that looks like this:

``` rust
// the real entry point of the program
#[no_mangle]
unsafe fn main() -> ! {
    // transforms a logical priority into a hardware / NVIC priority
    fn logical2hw(priority: u8) -> u8 {
        use lm3s6965::NVIC_PRIO_BITS;

        // the NVIC encodes priority in the higher bits of a byte
        // also a bigger number means a lower priority
        ((1 << NVIC_PRIO_BITS) - priority) << (8 - NVIC_PRIO_BITS)
    }

    cortex_m::interrupt::disable();

    let mut core = cortex_m::Peripherals::steal();

    core.NVIC.enable(Interrupt::UART0);

    // value specified by the user
    let uart0_prio = 2;

    // check at compile time that the specified priority is within the supported range
    let _ = [(); (1 << NVIC_PRIO_BITS) - (uart0_prio as usize)];

    core.NVIC.set_priority(Interrupt::UART0, logical2hw(uart0_prio));

    // call into user code
    init(/* .. */);

    // ..

    cortex_m::interrupt::enable();

    // call into user code
    idle(/* .. */)
}
```
diff --git a/book/en/src/internals/late-resources.md b/book/en/src/internals/late-resources.md
new file mode 100644
index 00000000..f3a0b0ae
--- /dev/null
+++ b/book/en/src/internals/late-resources.md
@@ -0,0 +1,116 @@
+# Late resources
+
+Some resources are initialized at runtime after the `init` function returns.
It's important that these resources (static variables) are fully initialized
before tasks are allowed to run; that is, they must be initialized while
interrupts are disabled.

The example below shows the kind of code that the framework generates to
initialize late resources.

``` rust
#[rtic::app(device = ..)]
mod app {
    struct Resources {
        x: Thing,
    }

    #[init]
    fn init() -> init::LateResources {
        // ..

        init::LateResources {
            x: Thing::new(..),
        }
    }

    #[task(binds = UART0, resources = [x])]
    fn foo(c: foo::Context) {
        let x: &mut Thing = c.resources.x;

        x.frob();

        // ..
    }

    // ..
}
```

The code generated by the framework looks like this:

``` rust
fn init(c: init::Context) -> init::LateResources {
    // .. user code ..
}

fn foo(c: foo::Context) {
    // .. user code ..
}

// Public API
pub mod init {
    pub struct LateResources {
        pub x: Thing,
    }

    // ..
}

pub mod foo {
    pub struct Resources<'a> {
        pub x: &'a mut Thing,
    }

    pub struct Context<'a> {
        pub resources: Resources<'a>,
        // ..
    }
}

/// Implementation details
mod app {
    // uninitialized static
    static mut x: MaybeUninit<Thing> = MaybeUninit::uninit();

    #[no_mangle]
    unsafe fn main() -> ! {
        cortex_m::interrupt::disable();

        // ..

        let late = init(..);

        // initialization of late resources
        x.as_mut_ptr().write(late.x);

        cortex_m::interrupt::enable(); //~ compiler fence

        // exceptions, interrupts and tasks can preempt `main` at this point

        idle(..)
    }

    #[no_mangle]
    unsafe fn UART0() {
        foo(foo::Context {
            resources: foo::Resources {
                // `x` has been initialized at this point
                x: &mut *x.as_mut_ptr(),
            },
            // ..
        })
    }
}
```

An important detail here is that `interrupt::enable` behaves like a *compiler
fence*, which prevents the compiler from reordering the write to `x` to *after*
`interrupt::enable`. If the compiler were to do that kind of reordering there
would be a data race between that write and whatever operation `foo` performs on
`x`.

Architectures with more complex instruction pipelines may need a memory barrier
(`atomic::fence`) instead of a compiler fence to fully flush the write operation
before interrupts are re-enabled. The ARM Cortex-M architecture doesn't need a
memory barrier in a single-core context.
diff --git a/book/en/src/internals/non-reentrancy.md b/book/en/src/internals/non-reentrancy.md
new file mode 100644
index 00000000..17b34d0c
--- /dev/null
+++ b/book/en/src/internals/non-reentrancy.md
@@ -0,0 +1,80 @@
+# Non-reentrancy
+
+In RTIC, task handlers are *not* reentrant. Reentering a task handler can break
Rust aliasing rules and lead to *undefined behavior*. A task handler can be
reentered in one of two ways: in software or by hardware.

## In software

To reenter a task handler in software, its underlying interrupt handler must be
invoked using FFI (see the example below). FFI requires `unsafe` code, so end
users are discouraged from directly invoking an interrupt handler.

``` rust
#[rtic::app(device = ..)]
mod app {
    #[init]
    fn init(c: init::Context) {
        // ..
    }

    #[interrupt(binds = UART0)]
    fn foo(c: foo::Context) {
        static mut X: u64 = 0;

        let x: &mut u64 = X;

        // ..

        //~ `bar` can preempt `foo` at this point

        // ..
    }

    #[interrupt(binds = UART1, priority = 2)]
    fn bar(c: bar::Context) {
        extern "C" {
            fn UART0();
        }

        // this interrupt handler will invoke task handler `foo` resulting
        // in aliasing of the static variable `X`
        unsafe { UART0() }
    }
}
```

The RTIC framework must generate the interrupt handler code that calls the
user-defined task handlers. We are careful to make these handlers impossible to
call from user code.

The above example expands into:

``` rust
fn foo(c: foo::Context) {
    // .. user code ..
}

fn bar(c: bar::Context) {
    // .. user code ..
}

mod app {
    // everything in this block is not visible to user code

    #[no_mangle]
    unsafe fn UART0() {
        foo(..);
    }

    #[no_mangle]
    unsafe fn UART1() {
        bar(..);
    }
}
```

## By hardware

A task handler can also be reentered without software intervention. This can
occur if the same handler is assigned to two or more interrupts in the vector
table, but there's no syntax for this kind of configuration in the RTIC
framework.
diff --git a/book/en/src/internals/tasks.md b/book/en/src/internals/tasks.md
index 85f783fb..a533dc0c 100644
--- a/book/en/src/internals/tasks.md
+++ b/book/en/src/internals/tasks.md
@@ -1,3 +1,397 @@
-# Task dispatcher
+# Software tasks

-**TODO**
+RTIC supports software tasks and hardware tasks. Each hardware task is bound to
a different interrupt handler. On the other hand, several software tasks may be
dispatched by the same interrupt handler -- this is done to minimize the number
of interrupt handlers used by the framework.

The framework groups `spawn`-able tasks by priority level and generates one
*task dispatcher* per priority level. Each task dispatcher runs on a different
interrupt handler and the priority of said interrupt handler is set to match the
priority level of the tasks managed by the dispatcher.

Each task dispatcher keeps a *queue* of tasks which are *ready* to execute; this
queue is referred to as the *ready queue*. Spawning a software task consists of
adding an entry to this queue and pending the interrupt that runs the
corresponding task dispatcher. Each entry in this queue contains a tag (`enum`)
that identifies the task to execute and a *pointer* to the message passed to the
task.

The ready queue is an SPSC (Single Producer Single Consumer) lock-free queue. The
task dispatcher owns the consumer endpoint of the queue; the producer endpoint
is treated as a resource contended by the tasks that can `spawn` other tasks.

## The task dispatcher

Let's first take a look at the code generated by the framework to dispatch
tasks. Consider this example:

``` rust
#[rtic::app(device = ..)]
mod app {
    // ..

    #[interrupt(binds = UART0, priority = 2, spawn = [bar, baz])]
    fn foo(c: foo::Context) {
        c.spawn.bar().ok();

        c.spawn.baz(42).ok();
    }

    #[task(capacity = 2, priority = 1)]
    fn bar(c: bar::Context) {
        // ..
    }

    #[task(capacity = 2, priority = 1, resources = [X])]
    fn baz(c: baz::Context, input: u64) {
        // ..
    }

    extern "C" {
        fn UART1();
    }
}
```

The framework produces the following task dispatcher which consists of an
interrupt handler and a ready queue:

``` rust
fn bar(c: bar::Context) {
    // .. user code ..
}

mod app {
    use heapless::spsc::Queue;
    use cortex_m::register::basepri;

    struct Ready<T> {
        task: T,
        // ..
    }

    /// `spawn`-able tasks that run at priority level `1`
    enum T1 {
        bar,
        baz,
    }

    // ready queue of the task dispatcher
    // `U4` is a type-level integer that represents the capacity of this queue
    static mut RQ1: Queue<Ready<T1>, U4> = Queue::new();

    // interrupt handler chosen to dispatch tasks at priority `1`
    #[no_mangle]
    unsafe fn UART1() {
        // the priority of this interrupt handler
        const PRIORITY: u8 = 1;

        let snapshot = basepri::read();

        while let Some(ready) = RQ1.split().1.dequeue() {
            match ready.task {
                T1::bar => {
                    // **NOTE** simplified implementation

                    // used to track the dynamic priority
                    let priority = Cell::new(PRIORITY);

                    // call into user code
                    bar(bar::Context::new(&priority));
                }

                T1::baz => {
                    // we'll look at `baz` later
                }
            }
        }

        // BASEPRI invariant
        basepri::write(snapshot);
    }
}
```

## Spawning a task

The `spawn` API is exposed to the user as the methods of a `Spawn` struct.
There's one `Spawn` struct per task.

The `Spawn` code generated by the framework for the previous example looks like
this:

``` rust
mod foo {
    // ..

    pub struct Context<'a> {
        pub spawn: Spawn<'a>,
        // ..
    }

    pub struct Spawn<'a> {
        // tracks the dynamic priority of the task
        priority: &'a Cell<u8>,
    }

    impl<'a> Spawn<'a> {
        // `unsafe` and hidden because we don't want the user to tamper with it
        #[doc(hidden)]
        pub unsafe fn priority(&self) -> &Cell<u8> {
            self.priority
        }
    }
}

mod app {
    // ..

    // Priority ceiling for the producer endpoint of `RQ1`
    const RQ1_CEILING: u8 = 2;

    // used to track how many more `bar` messages can be enqueued
    // `U2` is the capacity of the `bar` task; a max of two instances can be queued
    // this queue is filled by the framework before `init` runs
    static mut bar_FQ: Queue<(), U2> = Queue::new();

    // Priority ceiling for the consumer endpoint of `bar_FQ`
    const bar_FQ_CEILING: u8 = 2;

    // a priority-based critical section
    //
    // this runs the given closure `f` at a dynamic priority of at least
    // `ceiling` and returns the closure's result
    fn lock<R>(priority: &Cell<u8>, ceiling: u8, f: impl FnOnce() -> R) -> R {
        // ..
    }

    impl<'a> foo::Spawn<'a> {
        /// Spawns the `bar` task
        pub fn bar(&self) -> Result<(), ()> {
            unsafe {
                match lock(self.priority(), bar_FQ_CEILING, || {
                    bar_FQ.split().1.dequeue()
                }) {
                    Some(()) => {
                        lock(self.priority(), RQ1_CEILING, || {
                            // put the task in the ready queue
                            // (`split().0` is the producer endpoint)
                            RQ1.split().0.enqueue_unchecked(Ready {
                                task: T1::bar,
                                // ..
                            })
                        });

                        // pend the interrupt that runs the task dispatcher
                        rtic::pend(Interrupt::UART1);

                        Ok(())
                    }

                    None => {
                        // maximum capacity reached; spawn failed
                        Err(())
                    }
                }
            }
        }
    }
}
```

Using `bar_FQ` to limit the number of `bar` tasks that can be spawned may seem
like an artificial limitation, but it will make more sense when we talk about
task capacities.

## Messages

We have omitted how message passing actually works, so let's revisit the `spawn`
implementation, this time for the task `baz`, which receives a `u64` message.

``` rust
fn baz(c: baz::Context, input: u64) {
    // .. user code ..
}

mod app {
    // ..

    // Now we show the full contents of the `Ready` struct
    struct Ready<T> {
        task: T,
        // message index; used to index the `INPUTS` buffer
        index: u8,
    }

    // memory reserved to hold messages passed to `baz`
    static mut baz_INPUTS: [MaybeUninit<u64>; 2] =
        [MaybeUninit::uninit(), MaybeUninit::uninit()];

    // the free queue: used to track free slots in the `baz_INPUTS` array
    // this queue is initialized with values `0` and `1` before `init` is executed
    static mut baz_FQ: Queue<u8, U2> = Queue::new();

    // Priority ceiling for the consumer endpoint of `baz_FQ`
    const baz_FQ_CEILING: u8 = 2;

    impl<'a> foo::Spawn<'a> {
        /// Spawns the `baz` task
        pub fn baz(&self, message: u64) -> Result<(), u64> {
            unsafe {
                match lock(self.priority(), baz_FQ_CEILING, || {
                    baz_FQ.split().1.dequeue()
                }) {
                    Some(index) => {
                        // NOTE: `index` is an owning pointer into this buffer
                        baz_INPUTS[index as usize].write(message);

                        lock(self.priority(), RQ1_CEILING, || {
                            // put the task in the ready queue
                            RQ1.split().0.enqueue_unchecked(Ready {
                                task: T1::baz,
                                index,
                            });
                        });

                        // pend the interrupt that runs the task dispatcher
                        rtic::pend(Interrupt::UART1);

                        Ok(())
                    }

                    None => {
                        // maximum capacity reached; spawn failed
                        Err(message)
                    }
                }
            }
        }
    }
}
```

And now let's look at the real implementation of the task dispatcher:

``` rust
mod app {
    // ..

    #[no_mangle]
    unsafe fn UART1() {
        const PRIORITY: u8 = 1;

        let snapshot = basepri::read();

        while let Some(ready) = RQ1.split().1.dequeue() {
            match ready.task {
                T1::baz => {
                    // NOTE: `index` is an owning pointer into this buffer
                    let input = baz_INPUTS[ready.index as usize].read();

                    // the message has been read out so we can return the slot
                    // back to the free queue
                    // (the task dispatcher has exclusive access to the producer
                    // endpoint of this queue)
                    baz_FQ.split().0.enqueue_unchecked(ready.index);

                    let priority = Cell::new(PRIORITY);
                    baz(baz::Context::new(&priority), input)
                }

                T1::bar => {
                    // looks just like the `baz` branch
                }
            }
        }

        // BASEPRI invariant
        basepri::write(snapshot);
    }
}
```

`INPUTS` plus `FQ`, the free queue, is effectively a memory pool. However,
instead of using the usual *free list* (linked list) to track empty slots
in the `INPUTS` buffer we use an SPSC queue; this lets us reduce the number of
critical sections. In fact, thanks to this choice the task dispatching code is
lock-free.

## Queue capacity

The RTIC framework uses several queues, like ready queues and free queues. When
the free queue is empty, trying to `spawn` a task results in an error; this
condition is checked at runtime. Not all the operations performed by the
framework on these queues check if the queue is empty / full. For example,
returning a slot to the free queue (see the task dispatcher) is unchecked
because there's a fixed number of such slots circulating in the system that's
equal to the capacity of the free queue. Similarly, adding an entry to the ready
queue (see `Spawn`) is unchecked because of the queue capacity chosen by the
framework.

Users can specify the capacity of software tasks; this capacity is the maximum
number of messages one can post to said task from a higher priority task before
`spawn` returns an error. This user-specified capacity is the capacity of the
free queue of the task (e.g. `foo_FQ`) and also the size of the array that holds
the inputs to the task (e.g. `foo_INPUTS`).
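For example, under the conventions shown above, a task declared with a capacity
of `4` gets a 4-slot free queue and a 4-element input buffer. A hypothetical
expansion -- the task name `corge` and its `u32` payload are made up for
illustration:

``` rust
// Hypothetical statics generated for:
// `#[task(capacity = 4)] fn corge(c: corge::Context, input: u32)`
// (following the `*_FQ` / `*_INPUTS` naming convention used above)

// free queue: tracks the 4 free slots of the `corge_INPUTS` array;
// it is filled with the indices `0..4` before `init` runs
static mut corge_FQ: Queue<u8, U4> = Queue::new();

// memory reserved to hold up to 4 pending `u32` messages for `corge`
static mut corge_INPUTS: [MaybeUninit<u32>; 4] = [
    MaybeUninit::uninit(),
    MaybeUninit::uninit(),
    MaybeUninit::uninit(),
    MaybeUninit::uninit(),
];
```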

The capacity of the ready queue (e.g. `RQ1`) is chosen to be the *sum* of the
capacities of all the different tasks managed by the dispatcher; this sum is
also the number of messages the queue will hold in the worst case scenario of
all possible messages being posted before the task dispatcher gets a chance to
run. For this reason, getting a slot from the free queue in any `spawn`
operation implies that the ready queue is not yet full, so inserting an entry
into the ready queue can omit the "is it full?" check.

In our running example the task `bar` takes no input, so we could have omitted
both `bar_INPUTS` and `bar_FQ` and let the user post an unbounded number of
messages to this task, but if we did that it would not have been possible to pick
a capacity for `RQ1` that lets us omit the "is it full?" check when spawning a
`baz` task. In the section about the [timer queue](timer-queue.html) we'll see
how the free queue is used by tasks that have no inputs.

## Ceiling analysis

The queues internally used by the `spawn` API are treated like normal resources
and included in the ceiling analysis. It's important to note that these are SPSC
queues and that only one of the endpoints is behind a resource; the other
endpoint is owned by a task dispatcher.

Consider the following example:

``` rust
#[rtic::app(device = ..)]
mod app {
    #[idle(spawn = [foo, bar])]
    fn idle(c: idle::Context) -> ! {
        // ..
    }

    #[task]
    fn foo(c: foo::Context) {
        // ..
    }

    #[task]
    fn bar(c: bar::Context) {
        // ..
    }

    #[task(priority = 2, spawn = [foo])]
    fn baz(c: baz::Context) {
        // ..
    }

    #[task(priority = 3, spawn = [bar])]
    fn quux(c: quux::Context) {
        // ..
    }
}
```

This is how the ceiling analysis would go:

- `idle` (prio = 0) and `baz` (prio = 2) contend for the consumer endpoint of
  `foo_FQ`; this leads to a priority ceiling of `2`.

- `idle` (prio = 0) and `quux` (prio = 3) contend for the consumer endpoint of
  `bar_FQ`; this leads to a priority ceiling of `3`.

- `idle` (prio = 0), `baz` (prio = 2) and `quux` (prio = 3) all contend for the
  producer endpoint of `RQ1`; this leads to a priority ceiling of `3`.
diff --git a/book/en/src/internals/timer-queue.md b/book/en/src/internals/timer-queue.md
index 70592852..fcd345c5 100644
--- a/book/en/src/internals/timer-queue.md
+++ b/book/en/src/internals/timer-queue.md
@@ -1,3 +1,368 @@
 # Timer queue

-**TODO**
+The timer queue functionality lets the user schedule tasks to run at some time
in the future. Unsurprisingly, this feature is also implemented using a queue:
a priority queue where the scheduled tasks are kept sorted by earliest scheduled
time. This feature requires a timer capable of setting up timeout interrupts.
The timer is used to trigger an interrupt when the scheduled time of a task is
up; at that point the task is removed from the timer queue and moved into the
appropriate ready queue.

Let's see how this is implemented in code. Consider the following program:

``` rust
#[rtic::app(device = ..)]
mod app {
    // ..

    #[task(capacity = 2, schedule = [foo])]
    fn foo(c: foo::Context, x: u32) {
        // schedule this task to run again in 1M cycles
        c.schedule.foo(c.scheduled + Duration::cycles(1_000_000), x + 1).ok();
    }

    extern "C" {
        fn UART0();
    }
}
```

## `schedule`

Let's first look at the `schedule` API.

``` rust
mod foo {
    pub struct Schedule<'a> {
        priority: &'a Cell<u8>,
    }

    impl<'a> Schedule<'a> {
        // unsafe and hidden because we don't want the user to tamper with this
        #[doc(hidden)]
        pub unsafe fn priority(&self) -> &Cell<u8> {
            self.priority
        }
    }
}

mod app {
    type Instant = <path::to::user::monotonic::timer as rtic::Monotonic>::Instant;

    // all tasks that can be `schedule`-d
    enum T {
        foo,
    }

    struct NotReady {
        index: u8,
        instant: Instant,
        task: T,
    }

    // The timer queue is a binary (min) heap of `NotReady` tasks
    static mut TQ: TimerQueue<U2> = ..;
    const TQ_CEILING: u8 = 1;

    static mut foo_FQ: Queue<u8, U2> = Queue::new();
    const foo_FQ_CEILING: u8 = 1;

    static mut foo_INPUTS: [MaybeUninit<u32>; 2] =
        [MaybeUninit::uninit(), MaybeUninit::uninit()];

    static mut foo_INSTANTS: [MaybeUninit<Instant>; 2] =
        [MaybeUninit::uninit(), MaybeUninit::uninit()];

    impl<'a> foo::Schedule<'a> {
        fn foo(&self, instant: Instant, input: u32) -> Result<(), u32> {
            unsafe {
                let priority = self.priority();
                if let Some(index) = lock(priority, foo_FQ_CEILING, || {
                    foo_FQ.split().1.dequeue()
                }) {
                    // `index` is an owning pointer into these buffers
                    foo_INSTANTS[index as usize].write(instant);
                    foo_INPUTS[index as usize].write(input);

                    let nr = NotReady {
                        index,
                        instant,
                        task: T::foo,
                    };

                    lock(priority, TQ_CEILING, || {
                        TQ.enqueue_unchecked(nr);
                    });

                    Ok(())
                } else {
                    // No space left to store the input / instant
                    Err(input)
                }
            }
        }
    }
}
```

This looks very similar to the `Spawn` implementation. In fact, the same
`INPUTS` buffer and free queue (`FQ`) are shared between the `spawn` and
`schedule` APIs. The main difference between the two is that `schedule` also
stores the `Instant` at which the task was scheduled to run in a separate buffer
(`foo_INSTANTS` in this case).

`TimerQueue::enqueue_unchecked` does a bit more work than just adding the entry
into a min-heap: it also pends the system timer interrupt (`SysTick`) if the new
entry ended up first in the queue.

## The system timer

The system timer interrupt (`SysTick`) takes care of two things: moving tasks
that have become ready from the timer queue into the right ready queue, and
setting up a timeout interrupt to fire when the scheduled time of the next task
is up.

Let's see the associated code.

``` rust
mod app {
    #[no_mangle]
    unsafe fn SysTick() {
        const PRIORITY: u8 = 1;

        let priority = &Cell::new(PRIORITY);
        while let Some(ready) = lock(priority, TQ_CEILING, || TQ.dequeue()) {
            match ready.task {
                T::foo => {
                    // move this task into the `RQ1` ready queue
                    lock(priority, RQ1_CEILING, || {
                        RQ1.split().0.enqueue_unchecked(Ready {
                            task: T1::foo,
                            index: ready.index,
                        })
                    });

                    // pend the task dispatcher
                    rtic::pend(Interrupt::UART0);
                }
            }
        }
    }
}
```

This looks similar to a task dispatcher, except that instead of running the
ready task it only places the task in the corresponding ready queue; that way
the task will run at the right priority.

`TimerQueue::dequeue` will set up a new timeout interrupt when it returns
`None`. This ties in with `TimerQueue::enqueue_unchecked`, which pends this
handler; basically, `enqueue_unchecked` delegates the task of setting up a new
timeout interrupt to the `SysTick` handler.

## Resolution and range of `cyccnt::Instant` and `cyccnt::Duration`

RTIC provides a `Monotonic` implementation based on the `DWT`'s (Data Watchpoint
and Trace) cycle counter.
`Instant::now` returns a snapshot of this timer; these
DWT snapshots (`Instant`s) are used to sort entries in the timer queue. The
cycle counter is a 32-bit counter clocked at the core clock frequency. This
counter wraps around every `(1 << 32)` clock cycles; there's no interrupt
associated with this counter so nothing worth noting happens when it wraps
around.

To order `Instant`s in the queue we need to compare two 32-bit integers. To
account for the wrap-around behavior we use the difference between two
`Instant`s, `a - b`, and treat the result as a 32-bit signed integer. If the
result is less than zero then `b` is a later `Instant`; if the result is greater
than zero then `b` is an earlier `Instant`. This means that scheduling a task at
an `Instant` that's `(1 << 31) - 1` cycles greater than the scheduled time
(`Instant`) of the first (earliest) entry in the queue will cause the task to be
inserted at the wrong place in the queue. There are some debug assertions in
place to prevent this user error, but it can't be fully avoided because the user
can write `(instant + duration_a) + duration_b` and overflow the `Instant`.

The system timer, `SysTick`, is a 24-bit counter also clocked at the core clock
frequency. When the next scheduled task is more than `1 << 24` clock cycles in
the future an interrupt is set to go off in `1 << 24` cycles. This process may
need to happen several times until the next scheduled task is within the range
of the `SysTick` counter.

In conclusion, both `Instant` and `Duration` have a resolution of 1 core clock
cycle, and `Duration` effectively has a half-open range of `0..(1 << 31)` core
clock cycles.

## Queue capacity

The capacity of the timer queue is chosen to be the sum of the capacities of all
`schedule`-able tasks. As in the case of the ready queues, this means that
once we have claimed a free slot in the `INPUTS` buffer we are guaranteed to be
able to insert the task in the timer queue; this lets us omit runtime checks.

## System timer priority

The priority of the system timer can't be set by the user; it is chosen by the
framework. To ensure that lower priority tasks don't prevent higher priority
tasks from running, we choose the priority of the system timer to be the maximum
priority of all the `schedule`-able tasks.

To see why this is required, consider the case where two previously scheduled
tasks with priorities `2` and `3` become ready at about the same time, but the
lower priority task is moved into the ready queue first. If the system timer
priority was, for example, `1`, then after moving the lower priority (`2`) task
it would run to completion (due to it being higher priority than the system
timer), delaying the execution of the higher priority (`3`) task. To prevent
scenarios like these the system timer must match the highest priority of the
`schedule`-able tasks; in this example that would be `3`.

## Ceiling analysis

The timer queue is a resource shared between all the tasks that can `schedule` a
task and the `SysTick` handler. Also the `schedule` API contends with the
`spawn` API over the free queues. All this must be considered in the ceiling
analysis.

To illustrate, consider the following example:

``` rust
#[rtic::app(device = ..)]
mod app {
    #[task(priority = 3, spawn = [baz])]
    fn foo(c: foo::Context) {
        // ..
    }

    #[task(priority = 2, schedule = [foo, baz])]
    fn bar(c: bar::Context) {
        // ..
    }

    #[task(priority = 1)]
    fn baz(c: baz::Context) {
        // ..
+ } +} +``` + +The ceiling analysis would go like this: + +- `foo` (prio = 3) and `baz` (prio = 1) are `schedule`-able task so the + `SysTick` must run at the highest priority between these two, that is `3`. + +- `foo::Spawn` (prio = 3) and `bar::Schedule` (prio = 2) contend over the + consumer endpoint of `baz_FQ`; this leads to a priority ceiling of `3`. + +- `bar::Schedule` (prio = 2) has exclusive access over the consumer endpoint of +`foo_FQ`; thus the priority ceiling of `foo_FQ` is effectively `2`. + +- `SysTick` (prio = 3) and `bar::Schedule` (prio = 2) contend over the timer + queue `TQ`; this leads to a priority ceiling of `3`. + +- `SysTick` (prio = 3) and `foo::Spawn` (prio = 3) both have lock-free access to + the ready queue `RQ3`, which holds `foo` entries; thus the priority ceiling of + `RQ3` is effectively `3`. + +- The `SysTick` has exclusive access to the ready queue `RQ1`, which holds `baz` + entries; thus the priority ceiling of `RQ1` is effectively `3`. + +## Changes in the `spawn` implementation + +When the `schedule` API is used the `spawn` implementation changes a bit to +track the baseline of tasks. As you saw in the `schedule` implementation there's +an `INSTANTS` buffers used to store the time at which a task was scheduled to +run; this `Instant` is read in the task dispatcher and passed to the user code +as part of the task context. + +``` rust +mod app { + // .. + + #[no_mangle] + unsafe UART1() { + const PRIORITY: u8 = 1; + + let snapshot = basepri::read(); + + while let Some(ready) = RQ1.split().1.dequeue() { + match ready.task { + Task::baz => { + let input = baz_INPUTS[ready.index as usize].read(); + // ADDED + let instant = baz_INSTANTS[ready.index as usize].read(); + + baz_FQ.split().0.enqueue_unchecked(ready.index); + + let priority = Cell::new(PRIORITY); + // CHANGED the instant is passed as part the task context + baz(baz::Context::new(&priority, instant), input) + } + + Task::bar => { + // looks just like the `baz` branch + } + + } + } + + // BASEPRI invariant + basepri::write(snapshot); + } +} +``` + +Conversely, the `spawn` implementation needs to write a value to the `INSTANTS` +buffer. The value to be written is stored in the `Spawn` struct and its either +the `start` time of the hardware task or the `scheduled` time of the software +task. + +``` rust +mod foo { + // .. + + pub struct Spawn<'a> { + priority: &'a Cell<u8>, + // ADDED + instant: Instant, + } + + impl<'a> Spawn<'a> { + pub unsafe fn priority(&self) -> &Cell<u8> { + &self.priority + } + + // ADDED + pub unsafe fn instant(&self) -> Instant { + self.instant + } + } +} + +mod app { + impl<'a> foo::Spawn<'a> { + /// Spawns the `baz` task + pub fn baz(&self, message: u64) -> Result<(), u64> { + unsafe { + match lock(self.priority(), baz_FQ_CEILING, || { + baz_FQ.split().1.dequeue() + }) { + Some(index) => { + baz_INPUTS[index as usize].write(message); + // ADDED + baz_INSTANTS[index as usize].write(self.instant()); + + lock(self.priority(), RQ1_CEILING, || { + RQ1.split().1.enqueue_unchecked(Ready { + task: Task::foo, + index, + }); + }); + + rtic::pend(Interrupt::UART0); + } + + None => { + // maximum capacity reached; spawn failed + Err(message) + } + } + } + } + } +} +``` diff --git a/book/en/src/migration.md b/book/en/src/migration.md new file mode 100644 index 00000000..08feb81e --- /dev/null +++ b/book/en/src/migration.md @@ -0,0 +1,4 @@ +# Migration Guides + +This section describes how to migrate between different version of RTIC. 
It also acts as a reference for comparing versions.
diff --git a/book/en/src/migration/migration_rtic.md b/book/en/src/migration/migration_rtic.md
new file mode 100644
index 00000000..555f1bb7
--- /dev/null
+++ b/book/en/src/migration/migration_rtic.md
@@ -0,0 +1,54 @@
+# Migrating from RTFM to RTIC
+
+This section covers how to upgrade an application written against RTFM v0.5.x to
the same version of RTIC. This applies since the renaming of the framework as
per [RFC #33].

**Note:** There are no code differences between RTFM v0.5.3 and RTIC v0.5.3; it
is purely a name change.

[RFC #33]: https://github.com/rtic-rs/rfcs/pull/33

## `Cargo.toml`

First, the `cortex-m-rtfm` dependency needs to be updated to
`cortex-m-rtic`.

``` toml
[dependencies]
# change this
cortex-m-rtfm = "0.5.3"

# into this
cortex-m-rtic = "0.5.3"
```

## Code changes

The only code change that needs to be made is that any reference to `rtfm` now
needs to point to `rtic`, as follows:

``` rust
//
// Change this
//

#[rtfm::app(/* .. */, monotonic = rtfm::cyccnt::CYCCNT)]
const APP: () = {
    // ...
};

//
// Into this
//

#[rtic::app(/* .. */, monotonic = rtic::cyccnt::CYCCNT)]
const APP: () = {
    // ...
};
```
diff --git a/book/en/src/migration/migration_v4.md b/book/en/src/migration/migration_v4.md
new file mode 100644
index 00000000..2c4e3ade
--- /dev/null
+++ b/book/en/src/migration/migration_v4.md
@@ -0,0 +1,232 @@
+# Migrating from v0.4.x to v0.5.0
+
+This section covers how to upgrade an application written against RTIC v0.4.x to
version v0.5.0 of the framework.

### `Cargo.toml`

First, the version of the `cortex-m-rtic` dependency needs to be updated to
`"0.5.0"`. The `timer-queue` feature needs to be removed.

``` toml
[dependencies.cortex-m-rtic]
# change this
version = "0.4.3"

# into this
version = "0.5.0"

# and remove this Cargo feature
features = ["timer-queue"]
#           ^^^^^^^^^^^^^
```

### `Context` argument

All functions inside the `#[rtic::app]` item need to take as first argument a
`Context` structure. This `Context` type will contain the variables that were
magically injected into the scope of the function by version v0.4.x of the
framework: `resources`, `spawn`, `schedule` -- these variables will become
fields of the `Context` structure. Each function within the `#[rtic::app]` item
gets a different `Context` type.

``` rust
#[rtic::app(/* .. */)]
const APP: () = {
    // change this
    #[task(resources = [x], spawn = [a], schedule = [b])]
    fn foo() {
        resources.x.lock(|x| /* .. */);
        spawn.a(message);
        schedule.b(baseline);
    }

    // into this
    #[task(resources = [x], spawn = [a], schedule = [b])]
    fn foo(mut cx: foo::Context) {
        // ^^^^^^^^^^^^^^^^^^^^

        cx.resources.x.lock(|x| /* .. */);
        // ^^^

        cx.spawn.a(message);
        // ^^^

        cx.schedule.b(baseline);
        // ^^^
    }

    // change this
    #[init]
    fn init() {
        // ..
    }

    // into this
    #[init]
    fn init(cx: init::Context) {
        //  ^^^^^^^^^^^^^^^^^
        // ..
    }

    // ..
};
```

### Resources

The syntax used to declare resources has been changed from `static mut`
variables to a `struct Resources`.

``` rust
#[rtic::app(/* .. */)]
const APP: () = {
    // change this
    static mut X: u32 = 0;
    static mut Y: u32 = (); // late resource

    // into this
    struct Resources {
        #[init(0)] // <- initial value
        X: u32, // NOTE: we suggest changing the naming style to `snake_case`

        Y: u32, // late resource
    }

    // ..
+}; +``` + +### Device peripherals + +If your application was accessing the device peripherals in `#[init]` through +the `device` variable then you'll need to add `peripherals = true` to the +`#[rtic::app]` attribute to continue to access the device peripherals through +the `device` field of the `init::Context` structure. + +Change this: + +``` rust +#[rtic::app(/* .. */)] +const APP: () = { + #[init] + fn init() { + device.SOME_PERIPHERAL.write(something); + } + + // .. +}; +``` + +Into this: + +``` rust +#[rtic::app(/* .. */, peripherals = true)] +// ^^^^^^^^^^^^^^^^^^ +const APP: () = { + #[init] + fn init(cx: init::Context) { + // ^^^^^^^^^^^^^^^^^ + cx.device.SOME_PERIPHERAL.write(something); + // ^^^ + } + + // .. +}; +``` + +### `#[interrupt]` and `#[exception]` + +The `#[interrupt]` and `#[exception]` attributes have been removed. To declare +hardware tasks in v0.5.x use the `#[task]` attribute with the `binds` argument. + +Change this: + +``` rust +#[rtic::app(/* .. */)] +const APP: () = { + // hardware tasks + #[exception] + fn SVCall() { /* .. */ } + + #[interrupt] + fn UART0() { /* .. */ } + + // software task + #[task] + fn foo() { /* .. */ } + + // .. +}; +``` + +Into this: + +``` rust +#[rtic::app(/* .. */)] +const APP: () = { + #[task(binds = SVCall)] + // ^^^^^^^^^^^^^^ + fn svcall(cx: svcall::Context) { /* .. */ } + // ^^^^^^ we suggest you use a `snake_case` name here + + #[task(binds = UART0)] + // ^^^^^^^^^^^^^ + fn uart0(cx: uart0::Context) { /* .. */ } + + #[task] + fn foo(cx: foo::Context) { /* .. */ } + + // .. +}; +``` + +### `schedule` + +The `timer-queue` feature has been removed. To use the `schedule` API one must +first define the monotonic timer the runtime will use using the `monotonic` +argument of the `#[rtic::app]` attribute. To continue using the cycle counter +(CYCCNT) as the monotonic timer, and match the behavior of version v0.4.x, add +the `monotonic = rtic::cyccnt::CYCCNT` argument to the `#[rtic::app]` attribute. + +Also, the `Duration` and `Instant` types and the `U32Ext` trait have been moved +into the `rtic::cyccnt` module. This module is only available on ARMv7-M+ +devices. The removal of the `timer-queue` also brings back the `DWT` peripheral +inside the core peripherals struct, this will need to be enabled by the application +inside `init`. + +Change this: + +``` rust +use rtic::{Duration, Instant, U32Ext}; + +#[rtic::app(/* .. */)] +const APP: () = { + #[task(schedule = [b])] + fn a() { + // .. + } +}; +``` + +Into this: + +``` rust +use rtic::cyccnt::{Duration, Instant, U32Ext}; +// ^^^^^^^^ + +#[rtic::app(/* .. */, monotonic = rtic::cyccnt::CYCCNT)] +// ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +const APP: () = { + #[init] + fn init(cx: init::Context) { + cx.core.DWT.enable_cycle_counter(); + // optional, configure the DWT run without a debugger connected + cx.core.DCB.enable_trace(); + } + #[task(schedule = [b])] + fn a(cx: a::Context) { + // .. + } +}; +``` diff --git a/book/en/src/migration/migration_v5.md b/book/en/src/migration/migration_v5.md new file mode 100644 index 00000000..749ddecd --- /dev/null +++ b/book/en/src/migration/migration_v5.md @@ -0,0 +1,96 @@ +# Migrating from v0.5.x to v0.6.0 + +This section describes how to upgrade from v0.5.x to v0.6.0 of the RTIC framework. + +### `Cargo.toml` - version bump + +Change the version of `cortex-m-rtic` to `"0.6.0"`. + +### Module instead of Const + +With the support of attributes on modules the `const APP` workaround is not needed. + +Change + +``` rust +#[rtic::app(/* .. 
const APP: () = {
    [code here]
};
```

into

``` rust
#[rtic::app(/* .. */)]
mod app {
    [code here]
}
```

Now that a regular Rust module is used, it is possible to have custom user code
within that module. Additionally, it means that `use`-statements for resources
etc. may be required.

### Init always returns late resources

In order to make the API more symmetric, the `#[init]` task always returns late
resources.

From this:

``` rust
#[rtic::app(device = lm3s6965)]
mod app {
    #[init]
    fn init(_: init::Context) {
        rtic::pend(Interrupt::UART0);
    }

    [more code]
}
```

to this:

``` rust
#[rtic::app(device = lm3s6965)]
mod app {
    #[init]
    fn init(_: init::Context) -> init::LateResources {
        rtic::pend(Interrupt::UART0);

        init::LateResources {}
    }

    [more code]
}
```

### Resources struct - #[resources]

Previously the RTIC resources had to be in a struct named exactly "Resources":

``` rust
struct Resources {
    // Resources defined in here
}
```

With RTIC v0.6.0 the resources struct is annotated similarly to `#[task]`,
`#[init]` and `#[idle]`: with the `#[resources]` attribute:

``` rust
#[resources]
struct Resources {
    // Resources defined in here
}
```

In fact, the name of the struct is now up to the developer:

``` rust
#[resources]
struct whateveryouwant {
    // Resources defined in here
}
```

would work equally well.
diff --git a/book/en/src/preface.md b/book/en/src/preface.md
index d8f64fd4..041b3bd4 100644
--- a/book/en/src/preface.md
+++ b/book/en/src/preface.md
@@ -1,16 +1,23 @@
-<h1 align="center">Real Time For the Masses</h1>
+<h1 align="center">Real-Time Interrupt-driven Concurrency</h1>

-<p align="center">A concurrency framework for building real time systems</p>
+<p align="center">A concurrency framework for building real-time systems</p>

 # Preface

-This book contains user level documentation for the Real Time For the Masses
-(RTFM) framework. The API reference can be found [here](../api/rtfm/index.html).
+This book contains user level documentation for the Real-Time Interrupt-driven Concurrency
+(RTIC) framework. The API reference can be found [here](../../api/).
+
+Formerly known as Real-Time For the Masses.

 There is a translation of this book in [Russian].

 [Russian]: ../ru/index.html

-{{#include ../../../README.md:5:46}}
+This is the documentation of v0.6.x of RTIC; for the documentation of version
+
+* v0.5.x go [here](/0.5).
+* v0.4.x go [here](/0.4).
+
+{{#include ../../../README.md:7:46}}
 {{#include ../../../README.md:52:}}