741: Docs 2 r=korken89 a=datdenkikniet

Working on the migration guide and other docs

TODO:
- [x] Migration guide
- [x] Hardcoded examples should link to example code that is tested (this was already done, AFAICT)
- [x] Address #699 
- [x] Discuss: should we remove references to non-v2, apart from the migration guide and link to the book for v1? (Off-github conclusion: yes)
- [x] RTIC {vs,and} Embassy (important: distinction between embassy runtime & HALs)
- [x] More descriptive docs on how to implement & PR implementations of `Monotonic` to `rtic-monotonics` 


Co-authored-by: datdenkikniet <jcdra1@gmail.com>
bors[bot] 2023-05-23 06:26:28 +00:00 committed by GitHub
commit 62162241d4
54 changed files with 587 additions and 262 deletions

1
.gitignore vendored
View file

@ -5,3 +5,4 @@
/target
Cargo.lock
*.hex
book-target/

View file

@ -34,7 +34,7 @@ This activates the monotonics making it possible to use them.
See the following example:
``` rust
``` rust,noplayground
{{#include ../../../../examples/schedule.rs}}
```
@ -54,7 +54,7 @@ which allows canceling or rescheduling of the task scheduled to run in the futur
If `cancel` or `reschedule_at`/`reschedule_after` returns an `Err` it means that the operation was
too late and that the task is already sent for execution. The following example shows this in action:
``` rust
``` rust,noplayground
{{#include ../../../../examples/cancel-reschedule.rs}}
```

View file

@ -11,8 +11,8 @@ improve performance in some cases.
The example below shows how to place the higher priority task, `bar`, in RAM.
``` rust
{{#include ../../../../rtic/examples/ramfunc.rs}}
``` rust,noplayground
{{#include ../../../../../rtic/examples/ramfunc.rs}}
```
Running this program produces the expected output.
@ -22,7 +22,7 @@ $ cargo run --target thumbv7m-none-eabi --example ramfunc
```
``` console
{{#include ../../../../rtic/ci/expected/ramfunc.run}}
{{#include ../../../../../rtic/ci/expected/ramfunc.run}}
```
One can look at the output of `cargo-nm` to confirm that `bar` ended in RAM
@ -33,7 +33,7 @@ $ cargo nm --example ramfunc --release | grep ' foo::'
```
``` console
{{#include ../../../../rtic/ci/expected/ramfunc.run.grep.foo}}
{{#include ../../../../../rtic/ci/expected/ramfunc.run.grep.foo}}
```
``` console
@ -41,5 +41,5 @@ $ cargo nm --example ramfunc --target thumbv7m-none-eabi --release | grep '*bar
```
``` console
{{#include ../../../../rtic/ci/expected/ramfunc.run.grep.bar}}
{{#include ../../../../../rtic/ci/expected/ramfunc.run.grep.bar}}
```

View file

@ -27,7 +27,7 @@ cortex-m-rtic = "0.5.3"
The only code change that needs to be made is that any reference to `rtfm` before now need to point
to `rtic` as follows:
``` rust
``` rust,noplayground
//
// Change this
//

View file

@ -42,7 +42,7 @@ framework: `resources`, `spawn`, `schedule` -- these variables will become
fields of the `Context` structure. Each function within the `#[rtfm::app]` item
gets a different `Context` type.
``` rust
``` rust,noplayground
#[rtfm::app(/* .. */)]
const APP: () = {
// change this
@ -90,7 +90,7 @@ const APP: () = {
The syntax used to declare resources has changed from `static mut`
variables to a `struct Resources`.
``` rust
``` rust,noplayground
#[rtfm::app(/* .. */)]
const APP: () = {
// change this
@ -118,7 +118,7 @@ the `device` field of the `init::Context` structure.
Change this:
``` rust
``` rust,noplayground
#[rtfm::app(/* .. */)]
const APP: () = {
#[init]
@ -132,7 +132,7 @@ const APP: () = {
Into this:
``` rust
``` rust,noplayground
#[rtfm::app(/* .. */, peripherals = true)]
// ^^^^^^^^^^^^^^^^^^
const APP: () = {
@ -155,7 +155,7 @@ attribute with the `binds` argument instead.
Change this:
``` rust
``` rust,noplayground
#[rtfm::app(/* .. */)]
const APP: () = {
// hardware tasks
@ -175,7 +175,7 @@ const APP: () = {
Into this:
``` rust
``` rust,noplayground
#[rtfm::app(/* .. */)]
const APP: () = {
#[task(binds = SVCall)]
@ -212,7 +212,7 @@ ensure it is enabled by the application inside `init`.
Change this:
``` rust
``` rust,noplayground
use rtfm::{Duration, Instant, U32Ext};
#[rtfm::app(/* .. */)]
@ -226,7 +226,7 @@ const APP: () = {
Into this:
``` rust
``` rust,noplayground
use rtfm::cyccnt::{Duration, Instant, U32Ext};
// ^^^^^^^^

View file

@ -12,7 +12,7 @@ With the support of attributes on modules the `const APP` workaround is not need
Change
``` rust
``` rust,noplayground
#[rtic::app(/* .. */)]
const APP: () = {
[code here]
@ -21,7 +21,7 @@ const APP: () = {
into
``` rust
``` rust,noplayground
#[rtic::app(/* .. */)]
mod app {
[code here]
@ -75,7 +75,7 @@ mod app {
Change
``` rust
``` rust,noplayground
#[rtic::app(/* .. */)]
const APP: () = {
[code here]
@ -92,7 +92,7 @@ const APP: () = {
into
``` rust
``` rust,noplayground
#[rtic::app(/* .. */, dispatchers = [SSI0, QEI0])]
mod app {
[code here]
@ -106,7 +106,7 @@ This works also for ram functions, see examples/ramfunc.rs
Previously the RTIC resources had to be in in a struct named exactly "Resources":
``` rust
``` rust,noplayground
struct Resources {
// Resources defined in here
}
@ -115,7 +115,7 @@ struct Resources {
With RTIC v1.0.0 the resources structs are annotated similarly like
`#[task]`, `#[init]`, `#[idle]`: with the attributes `#[shared]` and `#[local]`
``` rust
``` rust,noplayground
#[shared]
struct MySharedResources {
// Resources shared between tasks are defined here
@ -136,7 +136,7 @@ In v1.0.0 resources are split between `shared` resources and `local` resources.
In v0.5.x:
``` rust
``` rust,noplayground
struct Resources {
local_to_b: i64,
shared_by_a_and_b: i64,
@ -151,7 +151,7 @@ fn b(_: b::Context) {}
In v1.0.0:
``` rust
``` rust,noplayground
#[shared]
struct Shared {
shared_by_a_and_b: i64,
@ -176,7 +176,7 @@ to be used for all `shared` resource access.
In old code one could do the following as the high priority
task has exclusive access to the resource:
``` rust
``` rust,noplayground
#[task(priority = 2, resources = [r])]
fn foo(cx: foo::Context) {
cx.resources.r = /* ... */;
@ -190,7 +190,7 @@ fn bar(cx: bar::Context) {
And with symmetric locks one needs to use locks in both tasks:
``` rust
``` rust,noplayground
#[task(priority = 2, shared = [r])]
fn foo(cx: foo::Context) {
cx.shared.r.lock(|r| r = /* ... */);
@ -211,7 +211,7 @@ This is still possible in 1.0: the `#[shared]` resource must be annotated with t
v0.5 code:
``` rust
``` rust,noplayground
struct Resources {
counter: u64,
}
@ -229,7 +229,7 @@ fn b(cx: b::Context) {
v1.0 code:
``` rust
``` rust,noplayground
#[shared]
struct Shared {
#[lock_free]
@ -254,7 +254,7 @@ Instead of that syntax, use the `local` argument in `#[init]`.
v0.5.x code:
``` rust
``` rust,noplayground
#[init]
fn init(_: init::Context) {
static mut BUFFER: [u8; 1024] = [0; 1024];
@ -264,7 +264,7 @@ fn init(_: init::Context) {
v1.0.0 code:
``` rust
``` rust,noplayground
#[init(local = [
buffer: [u8; 1024] = [0; 1024]
// type ^^^^^^^^^^^^ ^^^^^^^^^ initial value
@ -282,7 +282,7 @@ In order to make the API more symmetric the #[init]-task always returns a late r
From this:
``` rust
``` rust,noplayground
#[rtic::app(device = lm3s6965)]
const APP: () = {
#[init]
@ -296,7 +296,7 @@ const APP: () = {
to this:
``` rust
``` rust,noplayground
#[rtic::app(device = lm3s6965)]
mod app {
#[shared]
@ -321,7 +321,7 @@ mod app {
With the new spawn/spawn_after/spawn_at interface,
old code requiring the context `cx` for spawning such as:
``` rust
``` rust,noplayground
#[task(spawn = [bar])]
fn foo(cx: foo::Context) {
cx.spawn.bar().unwrap();
@ -335,7 +335,7 @@ fn bar(cx: bar::Context) {
Will now be written as:
``` rust
``` rust,noplayground
#[task]
fn foo(_c: foo::Context) {
bar::spawn().unwrap();

View file

@ -2,31 +2,40 @@
[Preface](./preface.md)
---
- [Starting a new project](./starting_a_project.md)
- [RTIC by example](./by-example.md)
- [The `app`](./by-example/app.md)
- [Hardware tasks & `pend`](./by-example/hardware_tasks.md)
- [Hardware tasks](./by-example/hardware_tasks.md)
- [Software tasks & `spawn`](./by-example/software_tasks.md)
- [Resources](./by-example/resources.md)
- [The init task](./by-example/app_init.md)
- [The idle task](./by-example/app_idle.md)
- [Channel based communication](./by-example/channel.md)
- [Delay and Timeout](./by-example/delay.md)
- [Starting a new project](./by-example/starting_a_project.md)
- [Delay and Timeout using Monotonics](./by-example/delay.md)
- [The minimal app](./by-example/app_minimal.md)
- [Tips & Tricks](./by-example/tips.md)
- [Implementing Monotonic](./by-example/tips_monotonic_impl.md)
- [Resource de-structure-ing](./by-example/tips_destructureing.md)
- [Avoid copies when message passing](./by-example/tips_indirection.md)
- [`'static` super-powers](./by-example/tips_static_lifetimes.md)
- [Inspecting generated code](./by-example/tips_view_code.md)
<!-- - [Running tasks from RAM](./by-example/tips_from_ram.md) -->
<!-- - [`#[cfg(..)]` support](./by-example/tips.md) -->
- [Tips & Tricks](./by-example/tips/index.md)
- [Resource de-structure-ing](./by-example/tips/destructureing.md)
- [Avoid copies when message passing](./by-example/tips/indirection.md)
- [`'static` super-powers](./by-example/tips/static_lifetimes.md)
- [Inspecting generated code](./by-example/tips/view_code.md)
- [Monotonics & the Timer Queue](./monotonic_impl.md)
- [RTIC vs. the world](./rtic_vs.md)
- [RTIC and Embassy](./rtic_and_embassy.md)
- [Awesome RTIC examples](./awesome_rtic.md)
<!-- - [Migration Guides](./migration.md)
- [v0.5.x to v1.0.x](./migration/migration_v5.md)
- [v0.4.x to v0.5.x](./migration/migration_v4.md)
- [RTFM to RTIC](./migration/migration_rtic.md) -->
---
- [Migrating from v1.0.x to v2.0.0](./migration_v1_v2.md)
- [Rust Nightly & features](./migration_v1_v2/nightly.md)
- [Migrating to `rtic-monotonics`](./migration_v1_v2/monotonics.md)
- [Software tasks must now be `async`](./migration_v1_v2/async_tasks.md)
- [Using and understanding `rtic-sync`](./migration_v1_v2/rtic-sync.md)
- [A code example on migration](./migration_v1_v2/complete_example.md)
---
- [Under the hood](./internals.md)
- [Cortex-M architectures](./internals/targets.md)
<!--- [Interrupt configuration](./internals/interrupt-configuration.md)-->

View file

@ -1,8 +1,7 @@
# Awesome RTIC examples
See the [`rtic-rs/rtic-examples`][rticexamples] repository for community
provided complete examples.
See the [`rtic-rs/rtic/examples`][rticexamples] directory for complete examples.
Pull-requests to this repo are welcome!
Pull-requests are welcome!
[rticexamples]: https://github.com/rtic-rs/rtic-examples
[rticexamples]: https://github.com/rtic-rs/rtic/tree/master/examples

View file

@ -2,7 +2,9 @@
## Requirements on the `app` attribute
All RTIC applications use the [`app`] attribute (`#[app(..)]`). This attribute only applies to a `mod`-item containing the RTIC application. The `app` attribute has a mandatory `device` argument that takes a *path* as a value. This must be a full path pointing to a *peripheral access crate* (PAC) generated using [`svd2rust`] **v0.14.x** or newer.
All RTIC applications use the [`app`] attribute (`#[app(..)]`). This attribute only applies to a `mod`-item containing the RTIC application.
The `app` attribute has a mandatory `device` argument that takes a *path* as a value. This must be a full path pointing to a *peripheral access crate* (PAC) generated using [`svd2rust`] **v0.14.x** or newer.
The `app` attribute will expand into a suitable entry point and thus replaces the use of the [`cortex_m_rt::entry`] attribute.
@ -12,21 +14,33 @@ The `app` attribute will expand into a suitable entry point and thus replaces th
## Structure and zero-cost concurrency
An RTIC `app` is an executable system model for single-core applications, declaring a set of `local` and `shared` resources operated on by a set of `init`, `idle`, *hardware* and *software* tasks. In short the `init` task runs before any other task returning the set of `local` and `shared` resources. Tasks run preemptively based on their associated static priority, `idle` has the lowest priority (and can be used for background work, and/or to put the system to sleep until woken by some event). Hardware tasks are bound to underlying hardware interrupts, while software tasks are scheduled by asynchronous executors (one for each software task priority).
An RTIC `app` is an executable system model for single-core applications, declaring a set of `local` and `shared` resources operated on by a set of `init`, `idle`, *hardware* and *software* tasks.
* `init` runs before any other task, and returns the `local` and `shared` resources.
* Tasks (both hardware and software) run preemptively based on their associated static priority.
* Hardware tasks are bound to underlying hardware interrupts.
* Software tasks are scheduled by a set of asynchronous executors, one for each software task priority.
* `idle` has the lowest priority, and can be used for background work, and/or to put the system to sleep until it is woken by some event.
At compile time the task/resource model is analyzed under the Stack Resource Policy (SRP) and executable code generated with the following outstanding properties:
- guaranteed race-free resource access and deadlock-free execution on a single-shared stack
- hardware task scheduling is performed directly by the hardware, and
- software task scheduling is performed by auto generated async executors tailored to the application.
- Guaranteed race-free resource access and deadlock-free execution on a single-shared stack.
- Hardware task scheduling is performed directly by the hardware.
- Software task scheduling is performed by auto generated async executors tailored to the application.
Overall, the generated code incurs no additional overhead in comparison to a hand-written implementation; thus, in Rust terms, RTIC offers a zero-cost abstraction to concurrency.
## Priority
Priorities in RTIC are specified using the `priority = N` (where N is a positive number) argument passed to the `#[task]` attribute. All `#[task]`s can have a priority. If the priority of a task is not specified, it is set to the default value of 1.
Priorities in RTIC follow a higher value = more important scheme. For example, a task with priority 2 will preempt a task with priority 1.
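For instance, a minimal sketch (hypothetical task names):
``` rust,noplayground
#[task(priority = 2)]
async fn high(_: high::Context) {
    // Preempts `low` as soon as it is ready to run.
}

#[task] // `priority` not specified, so this task gets the default priority 1
async fn low(_: low::Context) {
    // Runs only while no higher-priority task is ready.
}
```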
## An RTIC application example
To give a flavour of RTIC, the following example contains commonly used features.
To give a taste of RTIC, the following example contains commonly used features.
In the following sections we will go through each feature in detail.
``` rust
``` rust,noplayground
{{#include ../../../../rtic/examples/common.rs}}
```

View file

@ -11,7 +11,7 @@ Like in `init`, locally declared resources will have `'static` lifetimes that ar
The example below shows that `idle` runs after `init`.
``` rust
``` rust,noplayground
{{#include ../../../../rtic/examples/idle.rs}}
```
@ -38,7 +38,7 @@ The following example shows how to enable sleep by setting the
[WFI]: https://developer.arm.com/documentation/dui0662/b/The-Cortex-M0--Instruction-Set/Miscellaneous-instructions/WFI
[NOP]: https://developer.arm.com/documentation/dui0662/b/The-Cortex-M0--Instruction-Set/Miscellaneous-instructions/NOP
``` rust
``` rust,noplayground
{{#include ../../../../rtic/examples/idle-wfi.rs}}
```

View file

@ -16,7 +16,7 @@ The example below shows the types of the `core`, `device` and `cs` fields, and s
The `device` field is only available when the `peripherals` argument is set to the default value `true`.
In the rare case you want to implement an ultra-slim application you can explicitly set `peripherals` to `false`.
``` rust
``` rust,noplayground
{{#include ../../../../rtic/examples/init.rs}}
```

View file

@ -2,7 +2,7 @@
This is the smallest possible RTIC application:
``` rust
``` rust,noplayground
{{#include ../../../../rtic/examples/smallest.rs}}
```

View file

@ -33,7 +33,7 @@ Task Priority
The following example showcases the priority based scheduling of tasks:
``` rust
``` rust,noplayground
{{#include ../../../../rtic/examples/preempt.rs}}
```

View file

@ -1,8 +1,8 @@
# Communication over channels.
Channels can be used to communicate data between running *software* tasks. The channel is essentially a wait queue, allowing tasks with multiple producers and a single receiver. A channel is constructed in the `init` task and backed by statically allocated memory. Send and receive endpoints are distributed to *software* tasks:
Channels can be used to communicate data between running tasks. The channel is essentially a wait queue, allowing multiple producers and a single receiver. A channel is constructed in the `init` task and backed by statically allocated memory. Send and receive endpoints are distributed to *software* tasks:
``` rust
``` rust,noplayground
...
const CAPACITY: usize = 5;
#[init]
@ -16,11 +16,13 @@ const CAPACITY: usize = 5;
In this case the channel holds data of `u32` type with a capacity of 5 elements.
Channels can also be used from *hardware* tasks, but only in a non-`async` manner using the [Try API](#try-api).
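As a hedged sketch of how such a channel might be constructed in `init` and its endpoints handed out (assuming the `make_channel!` macro and the `Sender`/`Receiver` types from `rtic-sync`; the complete, tested example is included further below):
``` rust,noplayground
use rtic_sync::{channel::{Receiver, Sender}, make_channel};

const CAPACITY: usize = 5;

#[init]
fn init(_: init::Context) -> (Shared, Local) {
    // `make_channel!` statically allocates the channel's backing storage.
    let (sender, receiver) = make_channel!(u32, CAPACITY);
    // Distribute the endpoints to the software tasks as spawn arguments.
    sender1::spawn(sender.clone()).unwrap();
    receiver::spawn(receiver).unwrap();
    (Shared {}, Local {})
}
```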
## Sending data
The `send` method posts a message on the channel, as shown below:
``` rust
``` rust,noplayground
#[task]
async fn sender1(_c: sender1::Context, mut sender: Sender<'static, u32, CAPACITY>) {
hprintln!("Sender 1 sending: 1");
@ -32,7 +34,7 @@ async fn sender1(_c: sender1::Context, mut sender: Sender<'static, u32, CAPACITY
The receiver can `await` incoming messages:
``` rust
``` rust,noplayground
#[task]
async fn receiver(_c: receiver::Context, mut receiver: Receiver<'static, u32, CAPACITY>) {
while let Ok(val) = receiver.recv().await {
@ -46,7 +48,7 @@ Channels are implemented using a small (global) *Critical Section* (CS) for prot
For a complete example:
``` rust
``` rust,noplayground
{{#include ../../../../rtic/examples/async-channel.rs}}
```
@ -62,7 +64,7 @@ Also sender endpoint can be awaited. In case the channel capacity has not yet be
In the following example the `CAPACITY` has been reduced to 1, forcing sender tasks to wait until the data in the channel has been received.
``` rust
``` rust,noplayground
{{#include ../../../../rtic/examples/async-channel-done.rs}}
```
@ -79,7 +81,7 @@ $ cargo run --target thumbv7m-none-eabi --example async-channel-done --features
In case all senders have been dropped, `await`-ing on an empty receiver channel results in an error. This makes it possible to gracefully implement different types of shutdown operations.
``` rust
``` rust,noplayground
{{#include ../../../../rtic/examples/async-channel-no-sender.rs}}
```
@ -95,7 +97,7 @@ Similarly, `await`-ing on a send channel results in an error in case the receive
The resulting error returns the data back to the sender, allowing the sender to take appropriate action (e.g., storing the data to later retry sending it).
``` rust
``` rust,noplayground
{{#include ../../../../rtic/examples/async-channel-no-receiver.rs}}
```
@ -107,13 +109,13 @@ $ cargo run --target thumbv7m-none-eabi --example async-channel-no-receiver --fe
{{#include ../../../../rtic/ci/expected/async-channel-no-receiver.run}}
```
## Try API
In cases you wish the sender to proceed even in case the channel is full. To that end, a `try_send` API is provided.
Using the Try API, you can send or receive data from or to a channel without requiring that the operation succeeds, and in non-`async` contexts.
``` rust
This API is exposed through `Receiver::try_recv` and `Sender::try_send`.
``` rust,noplayground
{{#include ../../../../rtic/examples/async-channel-try.rs}}
```

View file

@ -1,24 +1,23 @@
# Tasks with delay
A convenient way to express *miniminal* timing requirements is by means of delaying progression.
A convenient way to express minimal timing requirements is by delaying progression.
This can be achieved by instantiating a monotonic timer:
This can be achieved by instantiating a monotonic timer (for implementations, see [`rtic-monotonics`]):
``` rust
[`rtic-monotonics`]: https://github.com/rtic-rs/rtic/tree/master/rtic-monotonics
[`rtic-time`]: https://github.com/rtic-rs/rtic/tree/master/rtic-time
[`Monotonic`]: https://docs.rs/rtic-time/latest/rtic_time/trait.Monotonic.html
[Implementing a `Monotonic`]: ../monotonic_impl.md
``` rust,noplayground
...
rtic_monotonics::make_systick_handler!();
#[init]
fn init(cx: init::Context) -> (Shared, Local) {
hprintln!("init");
Systick::start(cx.core.SYST, 12_000_000);
...
{{#include ../../../../rtic/examples/async-timeout.rs:init}}
...
```
A *software* task can `await` the delay to expire:
``` rust
``` rust,noplayground
#[task]
async fn foo(_cx: foo::Context) {
...
@ -28,13 +27,10 @@ async fn foo(_cx: foo::Context) {
```
Technically, the timer queue is implemented as a list-based priority queue, where list-nodes are statically allocated as part of the underlying task `Future`. Thus, the timer queue is infallible at run-time (its size and allocation are determined at compile time).
<details>
<summary>A complete example</summary>
Similarly to the channels implementation, the timer-queue implementation relies on a global *Critical Section* (CS) for race protection. For the examples, a CS implementation is provided by adding `--features test-critical-section` to the build options.
For a complete example:
``` rust
``` rust,noplayground
{{#include ../../../../rtic/examples/async-delay.rs}}
```
@ -46,75 +42,63 @@ $ cargo run --target thumbv7m-none-eabi --example async-delay --features test-cr
{{#include ../../../../rtic/ci/expected/async-delay.run}}
```
</details>
> Interested in contributing new implementations of [`Monotonic`], or in more information about the inner workings of monotonics?
> Check out the [Implementing a `Monotonic`] chapter!
## Timeout
Rust `Futures` (underlying Rust `async`/`await`) are composable. This makes it possible to `select` in between `Futures` that have completed.
Rust [`Future`]s (underlying Rust `async`/`await`) are composable. This makes it possible to `select` between `Future`s and continue with whichever completes first.
A common use case is transactions with associated timeout. In the examples shown below, we introduce a fake HAL device which performs some transaction. We have modelled the time it takes based on the input parameter (`n`) as `350ms + n * 100ms)`.
[`Future`]: https://doc.rust-lang.org/std/future/trait.Future.html
A common use case is transactions with an associated timeout. In the examples shown below, we introduce a fake HAL device that performs some transaction. We have modelled the time it takes based on the input parameter (`n`) as `350ms + n * 100ms`.
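As a rough, hypothetical model of this fake HAL call (the tested version lives in the example included below; exact duration types depend on the chosen monotonic):
``` rust,noplayground
// Completes after 350ms + n * 100ms and then returns a value.
async fn hal_get(n: u32) -> u32 {
    Systick::delay((350 + n * 100).millis()).await;
    n
}
```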
Using the `select_biased` macro from the `futures` crate it may look like this:
``` rust
// Call hal with short relative timeout using `select_biased`
select_biased! {
v = hal_get(1).fuse() => hprintln!("hal returned {}", v),
_ = Systick::delay(200.millis()).fuse() => hprintln!("timeout", ), // this will finish first
}
``` rust,noplayground
{{#include ../../../../rtic/examples/async-timeout.rs:select_biased}}
```
Assuming the `hal_get` will take 450ms to finish, a short timeout of 200ms will expire.
Assuming the `hal_get` will take 450ms to finish, a short timeout of 200ms will expire before `hal_get` can complete.
``` rust
// Call hal with long relative timeout using `select_biased`
select_biased! {
v = hal_get(1).fuse() => hprintln!("hal returned {}", v), // hal finish first
_ = Systick::delay(1000.millis()).fuse() => hprintln!("timeout", ),
}
Extending the timeout to 1000ms would cause `hal_get` to complete first.
Using `select_biased`, any number of futures can be combined, so it's very powerful. However, as the timeout pattern is frequently used, more ergonomic support is baked into RTIC, provided by the [`rtic-monotonics`] and [`rtic-time`] crates.
Rewriting the second example from above using `timeout_after` gives:
``` rust,noplayground
{{#include ../../../../rtic/examples/async-timeout.rs:timeout_at_basic}}
```
By extending the timeout to 1000ms, the `hal_get` will finish first.
Using `select_biased` any number of futures can be combined, so its very powerful. However, as the timeout pattern is frequently used, it is directly supported by the RTIC [rtc-monotonics] and [rtic-time] crates. The second example from above using `timeout_after`:
``` rust
// Call hal with long relative timeout using monotonic `timeout_after`
match Systick::timeout_after(1000.millis(), hal_get(1)).await {
Ok(v) => hprintln!("hal returned {}", v),
_ => hprintln!("timeout"),
}
```
In cases you want exact control over time without drift. For this purpose we can use exact points in time using `Instance`, and spans of time using `Duration`. Operations on the `Instance` and `Duration` types are given by the [fugit] crate.
In cases where you want exact control over time without drift, you can use exact points in time via `Instant`, and spans of time via `Duration`. Operations on the `Instant` and `Duration` types come from the [`fugit`] crate.
[fugit]: https://crates.io/crates/fugit
``` rust
// get the current time instance
let mut instant = Systick::now();
``` rust,noplayground
// do this 3 times
for n in 0..3 {
// absolute point in time without drift
instant += 1000.millis();
Systick::delay_until(instant).await;
{{#include ../../../../rtic/examples/async-timeout.rs:timeout_at}}
// absolute point it time for timeout
let timeout = instant + 500.millis();
hprintln!("now is {:?}, timeout at {:?}", Systick::now(), timeout);
match Systick::timeout_at(timeout, hal_get(n)).await {
Ok(v) => hprintln!("hal returned {} at time {:?}", v, Systick::now()),
_ => hprintln!("timeout"),
}
}
```
`instant = Systick::now()` gives the baseline (i.e., the absolute current point in time). We want to call `hal_get` after 1000ms relative to this absolute point in time. This can be accomplished by `Systick::delay_until(instant).await;`. We define the absolute point in time for the `timeout`, and call `Systick::timeout_at(timeout, hal_get(n)).await`. For the first loop iteration `n == 0`, and the `hal_get` will take 350ms (and finishes before the timeout). For the second iteration `n == 1`, and `hal_get` will take 450ms (and again succeeds to finish before the timeout). For the third iteration `n == 2` (`hal_get` will take 5500ms to finish). In this case we will run into a timeout.
`let mut instant = Systick::now()` sets the starting time of execution.
We want to call `hal_get` after 1000ms relative to this starting time. This can be accomplished by using `Systick::delay_until(instant).await`.
The complete example:
Then, we define a point in time called `timeout`, and call `Systick::timeout_at(timeout, hal_get(n)).await`.
``` rust
For the first iteration of the loop, with `n == 0`, the `hal_get` will take 350ms (and finishes before the timeout).
For the second iteration, with `n == 1`, the `hal_get` will take 450ms (and again succeeds to finish before the timeout).
For the third iteration, with `n == 2`, `hal_get` will take 550ms to finish, in which case we will run into a timeout.
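A hedged sketch of the loop just described, assuming the `Systick` monotonic and `hprintln!` from `cortex-m-semihosting` (the complete, tested example follows below):
``` rust,noplayground
let mut instant = Systick::now();        // `Instant`: the absolute starting time
for n in 0..3 {
    instant += 1000.millis();            // `Instant` + `Duration` = `Instant`
    Systick::delay_until(instant).await; // sleep until the absolute deadline
    let timeout = instant + 500.millis();
    match Systick::timeout_at(timeout, hal_get(n)).await {
        Ok(v) => hprintln!("hal returned {} at time {:?}", v, Systick::now()),
        _ => hprintln!("timeout"),
    }
}
```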
<details>
<summary>A complete example</summary>
``` rust,noplayground
{{#include ../../../../rtic/examples/async-timeout.rs}}
```
@ -125,3 +109,4 @@ $ cargo run --target thumbv7m-none-eabi --example async-timeout --features test-
``` console
{{#include ../../../../rtic/ci/expected/async-timeout.run}}
```
</details>

View file

@ -1,8 +1,6 @@
# Hardware tasks
At its core RTIC is using a hardware interrupt controller ([ARM NVIC on cortex-m][NVIC]) to schedule and start execution of tasks. All tasks except `pre-init`, `#[init]` and `#[idle]` run as interrupt handlers.
Hardware tasks are explicitly bound to interrupt handlers.
At its core RTIC is using a hardware interrupt controller ([ARM NVIC on cortex-m][NVIC]) to schedule and start execution of tasks. All tasks except `pre-init` (a hidden "task"), `#[init]` and `#[idle]` run as interrupt handlers.
To bind a task to an interrupt, use the `#[task]` attribute argument `binds = InterruptName`. This task then becomes the interrupt handler for this hardware interrupt vector.
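For instance (the interrupt and task names below are hypothetical and depend on your PAC):
``` rust,noplayground
// `UART0` is the interrupt vector; `on_uart0` becomes its handler.
#[task(binds = UART0, priority = 2)]
fn on_uart0(_cx: on_uart0::Context) {
    // Handle the interrupt; hardware tasks run to completion.
}
```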
@ -17,9 +15,11 @@ Beware of using interrupt vectors that are used internally by hardware features;
[pacorhal]: https://docs.rust-embedded.org/book/start/registers.html
[NVIC]: https://developer.arm.com/documentation/100166/0001/Nested-Vectored-Interrupt-Controller/NVIC-functional-description/NVIC-interrupts
## Example
The example below demonstrates the use of the `#[task(binds = InterruptName)]` attribute to declare a hardware task bound to an interrupt handler.
``` rust
``` rust,noplayground
{{#include ../../../../rtic/examples/hardware.rs}}
```

View file

@ -3,14 +3,14 @@
Software tasks support message passing; this means that software tasks can be spawned
with an argument: `foo::spawn(1)` will run the task `foo` with the argument `1`.
Capacity sets the size of the spawn queue for the task, if not specified capacity defaults to 1.
Capacity sets the size of the spawn queue for the task. If it is not specified, the capacity defaults to 1.
In the example below, the capacity of task `foo` is `3`, allowing three simultaneous
pending spawns of `foo`. Exceeding this capacity is an `Error`.
The number of arguments to a task is not limited:
``` rust
``` rust,noplayground
{{#include ../../../../examples/message_passing.rs}}
```

View file

@ -25,7 +25,7 @@ Types of `#[local]` resources must implement a [`Send`] trait as they are being
The example application shown below contains three tasks `foo`, `bar` and `idle`, each having access to its own `#[local]` resource.
``` rust
``` rust,noplayground
{{#include ../../../../rtic/examples/locals.rs}}
```
@ -51,7 +51,7 @@ Types of `#[task(local = [..])]` resources have to be neither [`Send`] nor [`Syn
In the example below the different uses and lifetimes are shown:
``` rust
``` rust,noplayground
{{#include ../../../../rtic/examples/declared_locals.rs}}
```
@ -76,7 +76,7 @@ The critical section created by the `lock` API is based on dynamic priorities: i
In the example below we have three interrupt handlers with priorities ranging from one to three. The two handlers with the lower priorities contend for a `shared` resource and need to succeed in locking the resource in order to access its data. The highest priority handler, which does not access the `shared` resource, is free to preempt a critical section created by the lowest priority handler.
``` rust
``` rust,noplayground
{{#include ../../../../rtic/examples/lock.rs}}
```
@ -94,7 +94,7 @@ Types of `#[shared]` resources have to be [`Send`].
As an extension to `lock`, and to reduce rightward drift, locks can be taken as tuples. The following examples show this in use:
``` rust
``` rust,noplayground
{{#include ../../../../rtic/examples/multilock.rs}}
```
@ -116,7 +116,7 @@ Note that in this release of RTIC it is not possible to request both exclusive a
In the example below a key (e.g. a cryptographic key) is loaded (or created) at runtime (returned by `init`) and then used from two tasks that run at different priorities without any kind of lock.
``` rust
``` rust,noplayground
{{#include ../../../../rtic/examples/only-shared-access.rs}}
```
@ -142,7 +142,7 @@ To adhere to the Rust [aliasing] rule, a resource may be either accessed through
Using `#[lock_free]` on resources shared by tasks running at different priorities will result in a *compile-time* error -- not using the `lock` API would violate the aforementioned alias rule. Similarly, for each priority there can be only a single *software* task accessing a shared resource (as an `async` task may yield execution to other *software* or *hardware* tasks running at the same priority). However, under this single-task restriction, we make the observation that the resource is in effect no longer `shared` but rather `local`. Thus, using a `#[lock_free]` shared resource will result in a *compile-time* error -- where applicable, use a `#[local]` resource instead.
``` rust
``` rust,noplayground
{{#include ../../../../rtic/examples/lock-free.rs}}
```

View file

@ -1,7 +1,6 @@
# Software tasks & spawn
The RTIC concept of a software task shares a lot with that of [hardware tasks](./hardware_tasks.md) with the core difference that a software task is not explicitly bound to a specific
interrupt vector, but rather bound to a “dispatcher” interrupt vector running at the intended priority of the software task (see below).
The RTIC concept of a software task shares a lot with that of [hardware tasks](./hardware_tasks.md). The core difference is that a software task is not explicitly bound to a specific interrupt vector, but rather bound to a “dispatcher” interrupt vector running at the intended priority of the software task (see below).
Similarly to *hardware* tasks, the `#[task]` attribute used on a function declares it as a task. The absence of a `binds = InterruptName` argument to the attribute declares the function as a *software task*.
@ -9,11 +8,11 @@ The static method `task_name::spawn()` spawns (starts) a software task and given
The *software* task itself is given as an `async` Rust function, which allows the user to optionally `await` future events. This makes it possible to blend reactive programming (by means of *hardware* tasks) with sequential programming (by means of *software* tasks).
Whereas, *hardware* tasks are assumed to run-to-completion (and return), *software* tasks may be started (`spawned`) once and run forever, with the side condition that any loop (execution path) is broken by at least one `await` (yielding operation).
While *hardware* tasks are assumed to run-to-completion (and return), *software* tasks may be started (`spawned`) once and run forever, on the condition that any loop (execution path) is broken by at least one `await` (yielding operation).
All *software* tasks at the same priority level shares an interrupt handler acting as an async executor dispatching the software tasks.
## Dispatchers
This list of dispatchers, `dispatchers = [FreeInterrupt1, FreeInterrupt2, ...]` is an argument to the `#[app]` attribute, where you define the set of free and usable interrupts.
All *software* tasks at the same priority level share an interrupt handler acting as an async executor dispatching the software tasks. The list of dispatchers, `dispatchers = [FreeInterrupt1, FreeInterrupt2, ...]`, is an argument to the `#[app]` attribute, where you define the set of free and usable interrupts.
Each interrupt vector acting as a dispatcher gets assigned to one priority level, meaning that the list of dispatchers needs to cover all priority levels used by software tasks.
@ -23,7 +22,7 @@ The framework will give a compilation error if there are not enough dispatchers
See the following example:
``` rust
``` rust,noplayground
{{#include ../../../../rtic/examples/spawn.rs}}
```
@ -40,7 +39,7 @@ In the below example, we `spawn` the *software* task `foo` from the `idle` task.
Technically the async executor will `poll` the `foo` *future* which in this case leaves the *future* in a *completed* state.
``` rust
``` rust,noplayground
{{#include ../../../../rtic/examples/spawn_loop.rs}}
```
@ -56,7 +55,7 @@ An attempt to `spawn` an already spawned task (running) task will result in an e
Technically, a `spawn` to a *future* that is not in *completed* state is considered an error.
``` rust
``` rust,noplayground
{{#include ../../../../rtic/examples/spawn_err.rs}}
```
@ -71,7 +70,7 @@ $ cargo run --target thumbv7m-none-eabi --example spawn_err
## Passing arguments
You can also pass arguments at spawn as follows.
``` rust
``` rust,noplayground
{{#include ../../../../rtic/examples/spawn_arguments.rs}}
```
@ -92,7 +91,7 @@ Conceptually, one can see such tasks as running in the `main` thread of the appl
[Send]: https://doc.rust-lang.org/nomicon/send-and-sync.html
``` rust
``` rust,noplayground
{{#include ../../../../rtic/examples/zero-prio-task.rs}}
```

View file

@ -3,13 +3,13 @@
Destructuring task resources might help readability if a task takes multiple
resources. Here are two examples of how to split up the resource struct:
``` rust
{{#include ../../../../rtic/examples/destructure.rs}}
``` rust,noplayground
{{#include ../../../../../rtic/examples/destructure.rs}}
```
``` console
$ cargo run --target thumbv7m-none-eabi --example destructure
```
``` console
{{#include ../../../../rtic/ci/expected/destructure.run}}
{{#include ../../../../../rtic/ci/expected/destructure.run}}
```

View file

@ -7,14 +7,14 @@ Indirection can minimize message passing overhead: instead of sending the buffer
One can use a global memory allocator to achieve indirection (`alloc::Box`, `alloc::Rc`, etc.), which requires using the nightly channel as of Rust v1.37.0, or one can use a statically allocated memory pool like [`heapless::Pool`].
[`heapless::Pool`]: https://docs.rs/heapless/0.5.0/heapless/pool/index.html
[`heapless::Pool`]: https://docs.rs/heapless/latest/heapless/pool/index.html
As this approach goes completely outside of the RTIC resource model of `shared` and `local` resources, the program relies on the correctness of the memory allocator, in this case `heapless::Pool`.
Here's an example where `heapless::Pool` is used to "box" buffers of 128 bytes.
``` rust
{{#include ../../../../rtic/examples/pool.rs}}
``` rust,noplayground
{{#include ../../../../../rtic/examples/pool.rs}}
```
``` console
@ -22,5 +22,5 @@ $ cargo run --target thumbv7m-none-eabi --example pool
```
``` console
{{#include ../../../../rtic/ci/expected/pool.run}}
{{#include ../../../../../rtic/ci/expected/pool.run}}
```

View file

@ -8,8 +8,8 @@ In the following example two different tasks share a [`heapless::spsc::Queue`] f
[`heapless::spsc::Queue`]: https://docs.rs/heapless/0.7.5/heapless/spsc/struct.Queue.html
``` rust
{{#include ../../../../rtic/examples/static.rs}}
``` rust,noplayground
{{#include ../../../../../rtic/examples/static.rs}}
```
Running this program produces the expected output.
@ -19,5 +19,5 @@ $ cargo run --target thumbv7m-none-eabi --example static
```
``` console
{{#include ../../../../rtic/ci/expected/static.run}}
{{#include ../../../../../rtic/ci/expected/static.run}}
```

View file

@ -16,7 +16,7 @@ $ rustfmt target/rtic-expansion.rs
$ tail target/rtic-expansion.rs
```
``` rust
``` rust,noplayground
#[doc = r" Implementation details"]
mod app {
#[doc = r" Always include the device crate which contains the vector table"]

View file

@ -1,29 +0,0 @@
# Implementing a `Monotonic` timer for scheduling
The framework is flexible because it can use any timer which has compare-match and optionally supporting overflow interrupts for scheduling. The single requirement to make a timer usable with RTIC is implementing the [`rtic-time::Monotonic`] trait.
For RTIC 1.0 and 2.0 we instead assume the user has a time library, e.g. [`fugit`] or [`embedded_time`], as the basis for all time-based operations when implementing `Monotonic`. These libraries make it much easier to correctly implement the `Monotonic` trait, allowing the use of
almost any timer in the system for scheduling.
The trait documents the requirements for each method, and for inspiration
there is a reference implementation based on the `SysTick` timer available on all ARM Cortex M MCUs.
- [`Systick based`], runs at a fixed interrupt (tick) rate - with some overhead but simple and provides support for large time spans
Here is a list of `Monotonic` implementations for RTIC 1.0:
- [`STM32F411 series`], implemented for the 32-bit timers
- [`Nordic nRF52 series Timer`], implemented for the 32-bit timers
- [`Nordic nRF52 series RTC`], implemented for the RTCs
- [`DWT and Systick based`], a more efficient (tickless) implementation - requires both `SysTick` and `DWT`, supports both high resolution and large time spans
If you know of more implementations feel free to add them to this list.
[`rtic_time::Monotonic`]: https://docs.rs/rtic_time/
[`fugit`]: https://docs.rs/fugit/
[`embedded_time`]: https://docs.rs/embedded_time/
[`STM32F411 series`]: https://github.com/kalkyl/f411-rtic/blob/a696fce7d6d19fda2356c37642c4d53547982cca/src/mono.rs
[`Nordic nRF52 series Timer`]: https://github.com/kalkyl/nrf-play/blob/47f4410d4e39374c18ff58dc17c25159085fb526/src/mono.rs
[`Nordic nRF52 series RTC`]: https://gist.github.com/korken89/fe94a475726414dd1bce031c76adc3dd
[`Systick based`]: https://github.com/rtic-monotonics
[`DWT and Systick based`]: https://github.com/rtic-rs/dwt-systick-monotonic

View file

@ -27,7 +27,7 @@ section on [critical sections](critical-sections.html)).
The code below is an example of the kind of source level transformation that
happens behind the scenes:
``` rust
``` rust,noplayground
#[rtic::app(device = ..)]
mod app {
static mut X: u64: 0;
@ -54,7 +54,7 @@ mod app {
The framework produces codes like this:
``` rust
``` rust,noplayground
fn init(c: init::Context) {
// .. user code ..
}

View file

@ -26,7 +26,7 @@ gets a unique reference (`&mut-`) to resources.
An example to illustrate the ceiling analysis:
``` rust
``` rust,noplayground
#[rtic::app(device = ..)]
mod app {
struct Resources {

View file

@ -30,7 +30,7 @@ task we give it a *resource proxy*, whereas we give a unique reference
The example below shows the different types handed out to each task:
``` rust
``` rust,noplayground
#[rtic::app(device = ..)]
mut app {
struct Resources {
@ -62,7 +62,7 @@ mut app {
Now let's see how these types are created by the framework.
``` rust
``` rust,noplayground
fn foo(c: foo::Context) {
// .. user code ..
}
@ -149,7 +149,7 @@ The semantics of the `BASEPRI` register are as follows:
Thus the dynamic priority at any point in time can be computed as
``` rust
``` rust,noplayground
dynamic_priority = max(hw2logical(BASEPRI), hw2logical(static_priority))
```
@ -160,7 +160,7 @@ In this particular example we could implement the critical section as follows:
> **NOTE:** this is a simplified implementation
``` rust
``` rust,noplayground
impl rtic::Mutex for resources::x {
type T = u64;
@ -194,7 +194,7 @@ calls to it. This is required for memory safety, as nested calls would produce
multiple unique references (`&mut-`) to `x` breaking Rust aliasing rules. See
below:
``` rust
``` rust,noplayground
#[interrupt(binds = UART0, priority = 1, resources = [x])]
fn foo(c: foo::Context) {
// resource proxy
@ -223,7 +223,7 @@ provides extra information to the compiler.
Consider this program:
``` rust
``` rust,noplayground
#[rtic::app(device = ..)]
mod app {
struct Resources {
@ -282,7 +282,7 @@ mod app {
The code generated by the framework looks like this:
``` rust
``` rust,noplayground
// omitted: user code
pub mod resources {
@ -374,7 +374,7 @@ mod app {
At the end the compiler will optimize the function `foo` into something like
this:
``` rust
``` rust,noplayground
fn foo(c: foo::Context) {
// NOTE: BASEPRI contains the value `0` (its reset value) at this point
@ -428,7 +428,7 @@ should not result in an observable change of BASEPRI.
This invariant needs to be preserved to avoid raising the dynamic priority of a
handler through preemption. This is best observed in the following example:
``` rust
``` rust,noplayground
#[rtic::app(device = ..)]
mod app {
struct Resources {
@ -490,7 +490,7 @@ mod app {
IMPORTANT: let's say we *forget* to roll back `BASEPRI` in `UART1` -- this would
be a bug in the RTIC code generator.
``` rust
``` rust,noplayground
// code generated by RTIC
mod app {

View file

@ -11,7 +11,7 @@ configuration is done before the `init` function runs.
This example gives you an idea of the code that the RTIC framework runs:
``` rust
``` rust,noplayground
#[rtic::app(device = lm3s6965)]
mod app {
#[init]
@ -33,7 +33,7 @@ mod app {
The framework generates an entry point that looks like this:
``` rust
``` rust,noplayground
// the real entry point of the program
#[no_mangle]
unsafe fn main() -> ! {

View file

@ -8,7 +8,7 @@ interrupts are disabled.
The example below shows the kind of code that the framework generates to
initialize late resources.
``` rust
``` rust,noplayground
#[rtic::app(device = ..)]
mod app {
struct Resources {
@ -39,7 +39,7 @@ mod app {
The code generated by the framework looks like this:
``` rust
``` rust,noplayground
fn init(c: init::Context) -> init::LateResources {
// .. user code ..
}

View file

@ -10,7 +10,7 @@ To reenter a task handler in software its underlying interrupt handler must be
invoked using FFI (see example below). FFI requires `unsafe` code so end users
are discouraged from directly invoking an interrupt handler.
``` rust
``` rust,noplayground
#[rtic::app(device = ..)]
mod app {
#[init]
@ -48,7 +48,7 @@ call from user code.
The above example expands into:
``` rust
``` rust,noplayground
fn foo(c: foo::Context) {
// .. user code ..
}

View file

@ -29,7 +29,7 @@ Table 1 below shows a list of Cortex-m processors and which type of critical sec
## Priority Ceiling
This is covered by the [Resources][resources] page of this book.
This is covered by the [Resources](../by-example/resources.html) page of this book.
## Source Masking

View file

@ -26,7 +26,7 @@ is treated as a resource contended by the tasks that can `spawn` other tasks.
Let's first take a look the code generated by the framework to dispatch tasks.
Consider this example:
``` rust
``` rust,noplayground
#[rtic::app(device = ..)]
mod app {
// ..
@ -57,7 +57,7 @@ mod app {
The framework produces the following task dispatcher which consists of an
interrupt handler and a ready queue:
``` rust
``` rust,noplayground
fn bar(c: bar::Context) {
// .. user code ..
}
@ -121,7 +121,7 @@ There's one `Spawn` struct per task.
The `Spawn` code generated by the framework for the previous example looks like
this:
``` rust
``` rust,noplayground
mod foo {
// ..
@ -206,7 +206,7 @@ task capacities.
We have omitted how message passing actually works so let's revisit the `spawn`
implementation but this time for task `baz` which receives a `u64` message.
``` rust
``` rust,noplayground
fn baz(c: baz::Context, input: u64) {
// .. user code ..
}
@ -268,7 +268,7 @@ mod app {
And now let's look at the real implementation of the task dispatcher:
``` rust
``` rust,noplayground
mod app {
// ..
@ -355,7 +355,7 @@ endpoint is owned by a task dispatcher.
Consider the following example:
``` rust
``` rust,noplayground
#[rtic::app(device = ..)]
mod app {
#[idle(spawn = [foo, bar])]

View file

@ -10,7 +10,7 @@ appropriate ready queue.
Let's see how this is implemented in code. Consider the following program:
``` rust
``` rust,noplayground
#[rtic::app(device = ..)]
mod app {
// ..
@ -31,7 +31,7 @@ mod app {
Let's first look at the `schedule` API.
``` rust
``` rust,noplayground
mod foo {
pub struct Schedule<'a> {
priority: &'a Cell<u8>,
@ -122,7 +122,7 @@ is up.
Let's see the associated code.
``` rust
``` rust,noplayground
mod app {
#[no_mangle]
fn SysTick() {
@ -220,7 +220,7 @@ analysis.
To illustrate, consider the following example:
``` rust
``` rust,noplayground
#[rtic::app(device = ..)]
mod app {
#[task(priority = 3, spawn = [baz])]
@ -269,7 +269,7 @@ an `INSTANTS` buffers used to store the time at which a task was scheduled to
run; this `Instant` is read in the task dispatcher and passed to the user code
as part of the task context.
``` rust
``` rust,noplayground
mod app {
// ..
@ -311,7 +311,7 @@ buffer. The value to be written is stored in the `Spawn` struct and its either
the `start` time of the hardware task or the `scheduled` time of the software
task.
``` rust
``` rust,noplayground
mod foo {
// ..

View file

@ -0,0 +1,18 @@
# Migrating from v1.0.x to v2.0.0
Migrating a project from RTIC `v1.0.x` to `v2.0.0` involves the following steps:
1. `v2.0.0` requires Rust Nightly and the [`#![feature(type_alias_impl_trait)]`](https://github.com/rust-lang/rust/issues/63063) feature.
2. Migrating from the monotonics included in `v1.0.x` to `rtic-time` and `rtic-monotonics`, replacing `spawn_after`, `spawn_at`.
3. Making software tasks `async`, and using them correctly.
4. Understanding and using data types provided by `rtic-sync`.
For a detailed description of the changes, refer to the subchapters.
If you wish to see a code example of changes required, you can check out [the full example migration page](./migration_v1_v2/complete_example.md).
#### TL;DR (Too Long; Didn't Read)
1. Add `#![feature(type_alias_impl_trait)]` to your crate, and use `cargo +nightly`.
2. Instead of `spawn_after` and `spawn_at`, you now use the `async` functions `delay`, `delay_until` (and related) with impls provided by `rtic-monotonics` (see the sketch after this list).
3. Software tasks _must_ be `async fn`s now. Not returning from a task is allowed so long as there is an `await` in the task. You can still `lock` shared resources.
4. Use `rtic_sync::Arbiter` to `await` access to a shared resource, and the channels provided by `rtic-sync` to communicate between tasks instead of `spawn`-ing new ones.
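As a hedged sketch of points 2 and 3 (the task name is hypothetical; `Systick` is the SysTick-based monotonic from `rtic-monotonics`), a periodic task now delays itself instead of re-`spawn`ing:
``` rust,noplayground
#[task]
async fn blink(_cx: blink::Context) {
    loop {
        // Replaces `blink::spawn_after(1000.millis())` from v1:
        Systick::delay(1000.millis()).await;
        // ... do the periodic work (e.g. toggle an LED) ...
    }
}
```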

View file

@ -0,0 +1,55 @@
# Using `async` software tasks.
There have been a few changes to software tasks. They are outlined below.
### Software tasks must now be `async`.
All software tasks are now required to be `async`.
#### Required changes.
All of the tasks in your project that do not bind to an interrupt must now be an `async fn`. For example:
``` rust,noplayground
#[task(
local = [ some_resource ],
shared = [ my_shared_resource ],
priority = 2
)]
fn my_task(cx: my_task::Context) {
cx.local.some_resource.do_trick();
cx.shared.my_shared_resource.lock(|s| s.do_shared_thing());
}
```
becomes
``` rust,noplayground
#[task(
local = [ some_resource ],
shared = [ my_shared_resource ],
priority = 2
)]
async fn my_task(cx: my_task::Context) {
cx.local.some_resource.do_trick();
cx.shared.my_shared_resource.lock(|s| s.do_shared_thing());
}
```
## Software tasks may now run forever
The new `async` software tasks are allowed to run forever, on one precondition: **there must be an `await` within the infinite loop of the task**. An example of such a task:
``` rust,noplayground
#[task(local = [ my_channel ] )]
async fn my_task_that_runs_forever(cx: my_task_that_runs_forever::Context) {
loop {
let value = cx.local.my_channel.recv().await;
do_something_with_value(value);
}
}
```
## `spawn_after` and `spawn_at` have been removed.
As discussed in the [Migrating to `rtic-monotonics`](./monotonics.md) chapter, `spawn_after` and `spawn_at` are no longer available.

View file

@ -0,0 +1,169 @@
# A complete example of migration
Below you can find the code for the implementation of the `stm32f3_blinky` example for v1.0.x and for v2.0.0. Further down, a diff is displayed.
# v1.0.x
```rust
#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]
use panic_rtt_target as _;
use rtic::app;
use rtt_target::{rprintln, rtt_init_print};
use stm32f3xx_hal::gpio::{Output, PushPull, PA5};
use stm32f3xx_hal::prelude::*;
use systick_monotonic::{fugit::Duration, Systick};
#[app(device = stm32f3xx_hal::pac, peripherals = true, dispatchers = [SPI1])]
mod app {
use super::*;
#[shared]
struct Shared {}
#[local]
struct Local {
led: PA5<Output<PushPull>>,
state: bool,
}
#[monotonic(binds = SysTick, default = true)]
type MonoTimer = Systick<1000>;
#[init]
fn init(cx: init::Context) -> (Shared, Local, init::Monotonics) {
// Setup clocks
let mut flash = cx.device.FLASH.constrain();
let mut rcc = cx.device.RCC.constrain();
let mono = Systick::new(cx.core.SYST, 36_000_000);
rtt_init_print!();
rprintln!("init");
let _clocks = rcc
.cfgr
.use_hse(8.MHz())
.sysclk(36.MHz())
.pclk1(36.MHz())
.freeze(&mut flash.acr);
// Setup LED
let mut gpioa = cx.device.GPIOA.split(&mut rcc.ahb);
let mut led = gpioa
.pa5
.into_push_pull_output(&mut gpioa.moder, &mut gpioa.otyper);
led.set_high().unwrap();
// Schedule the blinking task
blink::spawn_after(Duration::<u64, 1, 1000>::from_ticks(1000)).unwrap();
(
Shared {},
Local { led, state: false },
init::Monotonics(mono),
)
}
#[task(local = [led, state])]
fn blink(cx: blink::Context) {
rprintln!("blink");
if *cx.local.state {
cx.local.led.set_high().unwrap();
*cx.local.state = false;
} else {
cx.local.led.set_low().unwrap();
*cx.local.state = true;
}
blink::spawn_after(Duration::<u64, 1, 1000>::from_ticks(1000)).unwrap();
}
}
```
# v2.0.0
``` rust,noplayground
{{ #include ../../../../examples/stm32f3_blinky/src/main.rs }}
```
## A diff between the two projects
_Note_: This diff may not be 100% accurate, but it displays the important changes.
``` diff
#![no_main]
#![no_std]
+#![feature(type_alias_impl_trait)]
use panic_rtt_target as _;
use rtic::app;
use stm32f3xx_hal::gpio::{Output, PushPull, PA5};
use stm32f3xx_hal::prelude::*;
-use systick_monotonic::{fugit::Duration, Systick};
+use rtic_monotonics::Systick;
#[app(device = stm32f3xx_hal::pac, peripherals = true, dispatchers = [SPI1])]
mod app {
@@ -20,16 +21,14 @@ mod app {
state: bool,
}
- #[monotonic(binds = SysTick, default = true)]
- type MonoTimer = Systick<1000>;
-
#[init]
fn init(cx: init::Context) -> (Shared, Local, init::Monotonics) {
// Setup clocks
let mut flash = cx.device.FLASH.constrain();
let mut rcc = cx.device.RCC.constrain();
- let mono = Systick::new(cx.core.SYST, 36_000_000);
+ let mono_token = rtic_monotonics::create_systick_token!();
+ let mono = Systick::new(cx.core.SYST, 36_000_000, mono_token);
let _clocks = rcc
.cfgr
@@ -46,7 +45,7 @@ mod app {
led.set_high().unwrap();
// Schedule the blinking task
- blink::spawn_after(Duration::<u64, 1, 1000>::from_ticks(1000)).unwrap();
+ blink::spawn().unwrap();
(
Shared {},
@@ -56,14 +55,18 @@ mod app {
}
#[task(local = [led, state])]
- fn blink(cx: blink::Context) {
- rprintln!("blink");
- if *cx.local.state {
- cx.local.led.set_high().unwrap();
- *cx.local.state = false;
- } else {
- cx.local.led.set_low().unwrap();
- *cx.local.state = true;
- blink::spawn_after(Duration::<u64, 1, 1000>::from_ticks(1000)).unwrap();
- }
+ async fn blink(cx: blink::Context) {
+ loop {
+ // A task is now allowed to run forever, provided that
+ // there is an `await` somewhere in the loop.
+ Systick::delay(1000.millis()).await;
+ rprintln!("blink");
+ if *cx.local.state {
+ cx.local.led.set_high().unwrap();
+ *cx.local.state = false;
+ } else {
+ cx.local.led.set_low().unwrap();
+ *cx.local.state = true;
+ }
+ }
+ }
}
```

View file

@ -0,0 +1,13 @@
# Migrating to `rtic-monotonics`
In previous versions of `rtic`, monotonics were an integral, tightly coupled part of the `#[rtic::app]`. In this new version, [`rtic-monotonics`] provides them in a more decoupled way.
The `#[monotonic]` attribute is no longer used. Instead, you use a `create_X_token` macro from [`rtic-monotonics`]. An invocation of this macro returns an interrupt registration token, which can be used to construct an instance of your desired monotonic.
`spawn_after` and `spawn_at` are no longer available. Instead, you use the async functions `delay` and `delay_until` provided by implementations of the `rtic_time::Monotonic` trait, available through [`rtic-monotonics`].
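As a hedged sketch (mirroring the complete example linked below; the exact constructor and clock frequency are assumptions and may differ between `rtic-monotonics` versions), creating and registering a SysTick-based monotonic in `init` looks roughly like this:
``` rust,noplayground
#[init]
fn init(cx: init::Context) -> (Shared, Local) {
    // Create the interrupt registration token and hand it to the monotonic.
    let mono_token = rtic_monotonics::create_systick_token!();
    let _mono = Systick::new(cx.core.SYST, 36_000_000, mono_token); // 36 MHz sysclk is an assumption

    // Tasks now call e.g. `Systick::delay(1000.millis()).await`
    // instead of `spawn_after`/`spawn_at`.
    (Shared {}, Local {})
}
```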
Check out the [code example](./complete_example.md) for an overview of the required changes.
For more information on current monotonic implementations, see [the `rtic-monotonics` documentation](https://docs.rs/rtic-monotonics), and [the examples](https://github.com/rtic-rs/rtic/tree/master/examples).
[`rtic-monotonics`]: https://github.com/rtic-rs/rtic/tree/master/rtic-monotonics

View file

@ -0,0 +1,5 @@
# RTIC now requires Rust Nightly
The new `async` features require that you use a nightly compiler, and that the feature `type_alias_impl_trait` is enabled for your applications.
To enable this feature, you must add the line `#![feature(type_alias_impl_trait)]` to the root file of your project, next to where `#![no_std]` and `#![no_main]` are defined.
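For example (mirroring the migrated blinky example later in this guide), the top of the crate root might look like this:
``` rust,noplayground
#![no_main]
#![no_std]
// Nightly feature required by RTIC 2.0's async software tasks:
#![feature(type_alias_impl_trait)]
```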

View file

@ -0,0 +1,9 @@
# Using `rtic-sync`
`rtic-sync` provides primitives that can be used for message passing and resource sharing in `async` contexts.
The important structs are:
* The `Arbiter`, which allows you to await access to a shared resource in async contexts without using `lock`.
* `Channel`, which allows you to communicate between tasks (both `async` and non-`async`).
For more information on these structs, see the [`rtic-sync` docs](https://docs.rs/rtic-sync).
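A minimal, hedged sketch of the `Arbiter` (assuming `Arbiter::access` as described in the `rtic-sync` docs; channel usage looks like the examples in the channel chapter):
``` rust,noplayground
use rtic_sync::arbiter::Arbiter;

// An `async` task awaits exclusive access instead of taking a `lock`.
async fn bus_user(bus: &Arbiter<u32>) {
    let mut guard = bus.access().await; // resolves once no other task holds the bus
    *guard += 1;
    // `guard` is dropped here, releasing the `Arbiter` to the next waiter.
}
```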

View file

@ -0,0 +1,38 @@
# The magic behind Monotonics
Internally, all monotonics use a [Timer Queue](#the-timer-queue), which is a priority queue with entries describing the time at which their respective `Future`s should complete.
## Implementing a `Monotonic` timer for scheduling
The [`rtic-time`] framework is flexible because it can use any timer that has compare-match and, optionally, overflow interrupts for scheduling. The single requirement to make a timer usable with RTIC is implementing the [`rtic-time::Monotonic`] trait.
For RTIC 2.0, we assume that the user has a time library, e.g. [`fugit`], as the basis for all time-based operations when implementing [`Monotonic`]. These libraries make it much easier to correctly implement the [`Monotonic`] trait, allowing the use of almost any timer in the system for scheduling.
The trait documents the requirements for each method. There are reference implementations available in [`rtic-monotonics`] that can be used for inspiration.
- [`Systick based`], runs at a fixed interrupt (tick) rate - with some overhead but simple and provides support for large time spans
- [`RP2040 Timer`], a "proper" implementation with support for waiting for long periods without interrupts. Clearly demonstrates how to use the [`TimerQueue`] to handle scheduling.
- [`nRF52 timers`] implements monotonic & Timer Queue for the RTC and normal timers in nRF52's
## Contributing
Contributing new implementations of `Monotonic` can be done in multiple ways:
* Implement the trait behind a feature flag in [`rtic-monotonics`], and create a PR for them to be included in the main RTIC repository. This way, the implementations are in-tree, RTIC can guarantee their correctness, and they can be updated whenever a new release is made.
* Implement the changes in an external repository. Doing so will not have them included in [`rtic-monotonics`], but may make it easier to do so in the future.
[`rtic-monotonics`]: https://github.com/rtic-rs/rtic/tree/master/rtic-monotonics/
[`fugit`]: https://docs.rs/fugit/
[`Systick based`]: https://github.com/rtic-rs/rtic/blob/master/rtic-monotonics/src/systick.rs
[`RP2040 Timer`]: https://github.com/rtic-rs/rtic/blob/master/rtic-monotonics/src/rp2040.rs
[`nRF52 timers`]: https://github.com/rtic-rs/rtic/blob/master/rtic-monotonics/src/nrf.rs
[`rtic-time`]: https://docs.rs/rtic-time/latest/rtic_time
[`rtic-time::Monotonic`]: https://docs.rs/rtic-time/latest/rtic_time/trait.Monotonic.html
[`Monotonic`]: https://docs.rs/rtic-time/latest/rtic_time/trait.Monotonic.html
[`TimerQueue`]: https://docs.rs/rtic-time/latest/rtic_time/struct.TimerQueue.html
## The timer queue
The timer queue is implemented as a list-based priority queue, where the list nodes are statically allocated as part of the `Future` created when `await`-ing a delay or timeout on the monotonic. Thus, the timer queue is infallible at run-time (its size and allocation are determined at compile time).
Similarly to the channels implementation, the timer-queue implementation relies on a global *Critical Section* (CS) for race protection. For the examples, a CS implementation is provided by adding `--features test-critical-section` to the build options.
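As a purely conceptual sketch (these are not the actual `rtic-time` types), each pending delay can be thought of as owning a node that is linked into the queue in deadline order:

``` rust,noplayground
use core::task::Waker;

// Illustrative only: one node per pending `delay`/`delay_until` future.
// The node lives inside the future itself, so no heap or pool is needed.
struct QueueNode<Instant> {
    release_at: Instant, // when the waiting future should be woken
    waker: Waker,        // used to wake the task once the deadline has passed
}

// Conceptually, inserting a node links it into a sorted list inside a critical
// section, and the timer's compare-match is programmed for the earliest
// `release_at` currently in the queue.
```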

View file

@ -0,0 +1,17 @@
# RTIC vs. Embassy
## Differences
Embassy provides both Hardware Abstraction Layers (HALs) and an executor/runtime, while RTIC aims to only provide an execution framework. For example, Embassy provides `embassy-stm32` (a HAL) and `embassy-executor` (an executor), whereas RTIC provides the framework in the form of [`rtic`], and the user is responsible for providing a PAC and HAL implementation (generally from the [`stm32-rs`] project).
Additionally, RTIC aims to provide exclusive access to resources at as low a level as possible, ideally guarded by some form of hardware protection. This allows hardware to be accessed without necessarily requiring locking mechanisms at the software level.
## Mixing use of Embassy and RTIC
Since most Embassy and RTIC libraries are runtime agnostic, many components from one project can be used in the other. For example, using [`rtic-monotonics`] in an `embassy-executor` powered project works, as does using [`embassy-sync`] (though [`rtic-sync`] is recommended) in an RTIC project.
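As an illustration, the sketch below drives the `Systick` monotonic from [`rtic-monotonics`] under `embassy-executor` (assumptions: a Cortex-M target, the matching `embassy-executor` arch/executor features enabled, and a panic handler provided elsewhere):

``` rust,noplayground
#![no_std]
#![no_main]
#![feature(type_alias_impl_trait)]

use embassy_executor::Spawner;
use rtic_monotonics::systick::*;

#[embassy_executor::main]
async fn main(_spawner: Spawner) {
    // Hand the SysTick peripheral to the rtic-monotonics timer.
    let cp = cortex_m::Peripherals::take().unwrap();
    let systick_token = rtic_monotonics::create_systick_token!();
    Systick::start(cp.SYST, 12_000_000, systick_token);

    loop {
        // The monotonic is runtime agnostic: this await works under
        // embassy-executor just as it would inside an RTIC task.
        Systick::delay(1000.millis()).await;
    }
}
```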
[`stm32-rs`]: https://github.com/stm32-rs
[`rtic`]: https://docs.rs/rtic/latest/rtic/
[`rtic-monotonics`]: https://docs.rs/rtic-monotonics/latest/rtic_monotonics/
[`embassy-sync`]: https://docs.rs/embassy-sync/latest/embassy_sync/
[`rtic-sync`]: https://docs.rs/rtic-sync/latest/rtic_sync/

View file

@ -4,8 +4,6 @@ RTIC aims to provide the lowest level of abstraction needed for developing robus
It provides a minimal set of required mechanisms for safe sharing of mutable resources among interrupts and asynchronously executing tasks. The scheduling primitives leverage the underlying hardware for unparalleled performance and predictability; in effect, RTIC provides, in Rust terms, a zero-cost abstraction to concurrent real-time programming.
## Comparison regarding safety and security
Comparing RTIC to a traditional Real-Time Operating System (RTOS) is hard. Firstly, a traditional RTOS typically comes with no guarantees regarding system safety, not even the most hardened kernels like the formally verified [seL4] kernel. Their claims to integrity, confidentiality, and availability regard only the kernel itself (and, under additional assumptions, its configuration and environment). They even state:
@ -16,7 +14,7 @@ Comparing RTIC to traditional a Real-Time Operating System (RTOS) is hard. First
[seL4]: https://sel4.systems/
### Security by design
## Security by design
In the world of information security we commonly find:

View file

@ -3,19 +3,15 @@
A recommendation when starting a RTIC project from scratch is to
follow RTIC's [`defmt-app-template`].
If you are targeting ARMv6-M or ARMv8-M-base architecture, check out the section [Target Architecture](../internals/targets.md) for more information on hardware limitations to be aware of.
If you are targeting ARMv6-M or ARMv8-M-base architecture, check out the section [Target Architecture](./internals/targets.md) for more information on hardware limitations to be aware of.
[`defmt-app-template`]: https://github.com/rtic-rs/defmt-app-template
This will give you an RTIC application with support for RTT logging with [`defmt`] and stack overflow
protection using [`flip-link`]. There is also a multitude of examples provided by the community:
For inspiration, you may look at the below resources. For now, they cover RTIC v1.x, but will be updated with RTIC v2.x examples over time.
- [`rtic-examples`] - Multiple projects
- [https://github.com/kalkyl/f411-rtic](https://github.com/kalkyl/f411-rtic)
- ... More to come
For inspiration, you may look at the [rtic examples].
[`defmt`]: https://github.com/knurling-rs/defmt/
[`flip-link`]: https://github.com/knurling-rs/flip-link/
[`rtic-examples`]: https://github.com/rtic-rs/rtic-examples
[rtic examples]: https://github.com/rtic-rs/rtic/tree/master/examples

18
check-book.sh Executable file
View file

@ -0,0 +1,18 @@
#!/bin/sh
set -e
cd book/en/
mdbook build
cd ../../
cargo doc --features thumbv7-backend
mkdir -p book-target/book/
cp -r book/en/book/ book-target/book/en/
cp LICENSE-* book-target/book/en
cp -r target/doc/ book-target/api/
lychee --offline --format detailed book-target/book/en/
rm -rf book-target/

View file

@ -38,8 +38,3 @@ target = "thumbv6m-none-eabi" # Cortex-M0 and Cortex-M0+
# target = "thumbv8m.base-none-eabi" # Cortex-M23
# target = "thumbv8m.main-none-eabi" # Cortex-M33 (no FPU)
# target = "thumbv8m.main-none-eabihf" # Cortex-M33 (with FPU)
# thumbv7m-none-eabi is not coming with core and alloc, compile myself
[unstable]
mtime-on-use = true
build-std = ["core", "alloc"]

View file

@ -433,7 +433,7 @@ dependencies = [
[[package]]
name = "rtic-monotonics"
version = "1.0.0-alpha.1"
version = "1.0.0-alpha.2"
dependencies = [
"atomic-polyfill",
"cfg-if",

View file

@ -38,8 +38,3 @@ target = "thumbv7m-none-eabi" # Cortex-M3
# target = "thumbv8m.base-none-eabi" # Cortex-M23
# target = "thumbv8m.main-none-eabi" # Cortex-M33 (no FPU)
# target = "thumbv8m.main-none-eabihf" # Cortex-M33 (with FPU)
# thumbv7m-none-eabi is not coming with core and alloc, compile myself
[unstable]
mtime-on-use = true
build-std = ["core", "alloc"]

View file

@ -455,7 +455,7 @@ dependencies = [
[[package]]
name = "rtic-monotonics"
version = "1.0.0-alpha.1"
version = "1.0.0-alpha.2"
dependencies = [
"atomic-polyfill",
"cfg-if",

View file

@ -18,7 +18,9 @@ mod app {
struct Shared {}
#[local]
struct Local {}
struct Local {
sender: Sender<'static, u32, CAPACITY>,
}
const CAPACITY: usize = 1;
#[init]
@ -28,7 +30,7 @@ mod app {
receiver::spawn(r).unwrap();
sender1::spawn(s.clone()).unwrap();
(Shared {}, Local {})
(Shared {}, Local { sender: s.clone() })
}
#[task]
@ -45,4 +47,11 @@ mod app {
hprintln!("Sender 1 try sending: 2 {:?}", sender.try_send(2));
debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator
}
// This interrupt is never triggered, but is used to demonstrate that
// one can (try to) send data into a channel from a hardware task.
#[task(binds = GPIOA, local = [sender])]
fn hw_task(cx: hw_task::Context) {
cx.local.sender.try_send(3).ok();
}
}

View file

@ -23,12 +23,14 @@ mod app {
#[local]
struct Local {}
// ANCHOR: init
#[init]
fn init(cx: init::Context) -> (Shared, Local) {
hprintln!("init");
let systick_token = rtic_monotonics::create_systick_token!();
Systick::start(cx.core.SYST, 12_000_000, systick_token);
// ANCHOR_END: init
foo::spawn().ok();
@ -37,6 +39,7 @@ mod app {
#[task]
async fn foo(_cx: foo::Context) {
// ANCHOR: select_biased
// Call hal with short relative timeout using `select_biased`
select_biased! {
v = hal_get(1).fuse() => hprintln!("hal returned {}", v),
@ -48,13 +51,17 @@ mod app {
v = hal_get(1).fuse() => hprintln!("hal returned {}", v), // hal finish first
_ = Systick::delay(1000.millis()).fuse() => hprintln!("timeout", ),
}
// ANCHOR_END: select_biased
// ANCHOR: timeout_after_basic
// Call hal with long relative timeout using monotonic `timeout_after`
match Systick::timeout_after(1000.millis(), hal_get(1)).await {
Ok(v) => hprintln!("hal returned {}", v),
_ => hprintln!("timeout"),
}
// ANCHOR_END: timeout_after_basic
// ANCHOR: timeout_at
// get the current time instance
let mut instant = Systick::now();
@ -73,6 +80,7 @@ mod app {
_ => hprintln!("timeout"),
}
}
// ANCHOR_END: timeout_at
debug::exit(debug::EXIT_SUCCESS);
}

View file

@ -36,6 +36,8 @@ mod app {
hprintln!("idle");
// Some backends provide a manual way of pending an
// interrupt.
rtic::pend(Interrupt::UART0);
loop {