Real-Time Interrupt-driven Concurrency
A concurrency framework for building real-time systems
Preface
This book contains the user-level documentation for the Real-Time Interrupt-driven Concurrency (RTIC) framework. The API reference is available separately.
Formerly known as Real-Time For the Masses.
This is the documentation for v1.0.x of RTIC.
Is RTIC an RTOS?
A common question is whether RTIC is an RTOS or not, and depending on your background the answer may vary. From the RTIC developers' point of view, RTIC is a hardware-accelerated RTOS that uses the NVIC on Cortex-M MCUs to perform scheduling, rather than a more classical software kernel.
Another common view from the community is that RTIC is a concurrency framework as there is no software kernel and that it relies on external HALs.
Features
- Tasks as the unit of concurrency [1]. Tasks can be event triggered (fired in response to asynchronous stimuli) or spawned by the application on demand.
- Message passing between tasks. Specifically, messages can be passed to software tasks at spawn time.
- A timer queue [2]. Software tasks can be scheduled to run at some time in the future. This feature can be used to implement periodic tasks.
- Support for prioritization of tasks and, thus, preemptive multitasking.
- Efficient and data-race-free memory sharing through fine-grained, priority-based critical sections [1].
- Deadlock-free execution guaranteed at compile time. This is a stronger guarantee than the one provided by the standard `Mutex` abstraction.
- Minimal scheduling overhead. The task scheduler has a minimal software footprint; the hardware does the bulk of the scheduling.
- Highly efficient memory usage: all tasks share a single call stack and there is no hard dependency on a dynamic memory allocator.
- All Cortex-M devices are fully supported.
- This task model is amenable to known WCET (Worst Case Execution Time) analysis and scheduling analysis techniques.
Crate `cortex-m` 0.6 vs 0.7 in RTIC 0.5.x
The crate `cortex-m` 0.7 started using the trait `InterruptNumber` for interrupts instead of `Nr` from `bare-metal`. In order to preserve backwards compatibility, RTIC 0.5.x keeps using `cortex-m` 0.6 by default. `cortex-m` 0.7 can be enabled using the feature `cortex-m-7` and disabling default features:

```toml
cortex-m-rtic = { version = "0.5.8", default-features = false, features = ["cortex-m-7"] }
```

RTIC 1.0.0 already uses `cortex-m` 0.7 by default.
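For comparison, a minimal sketch of the corresponding RTIC 1.0 dependency declaration (assuming the crate version published on crates.io; no feature flags are needed since `cortex-m` 0.7 is the default):

```toml
[dependencies]
# RTIC 1.0 pulls in cortex-m 0.7 by default; no extra features required
cortex-m-rtic = "1.0"
```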
User documentation
Documentation for the development version.
API reference
Community provided examples repo
Chat
Join us and talk about RTIC in the Matrix room.
Weekly meeting notes can be found over at HackMD
Contributing
New features and big changes should go through the RFC process in the dedicated RFC repository.
Running tests locally
To check all run-pass tests locally for your `thumbv6m-none-eabi` or `thumbv7m-none-eabi` target device, run

```console
$ cargo xtask --target <your target>
#                      ^^^^^^^^^^^^^
#                      e.g. thumbv7m-none-eabi
```
Acknowledgments
This crate is based on the Real-Time For the Masses language created by the Embedded Systems group at Luleå University of Technology, led by Prof. Per Lindgren.
References
1. Eriksson, J., Häggström, F., Aittamaa, S., Kruglyak, A., & Lindgren, P. (2013, June). Real-time for the masses, step 1: Programming API and static priority SRP kernel primitives. In Industrial Embedded Systems (SIES), 2013 8th IEEE International Symposium on (pp. 110-113). IEEE.
2. Lindgren, P., Fresk, E., Lindner, M., Lindner, A., Pereira, D., & Pinho, L. M. (2016). Abstract timers and their implementation onto the ARM Cortex-M family of MCUs. ACM SIGBED Review, 13(1), 48-53.
License
All source code (including code snippets) is licensed under either of
- Apache License, Version 2.0 (LICENSE-APACHE or https://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or https://opensource.org/licenses/MIT)
at your option.
The written prose contained within the book is licensed under the terms of the Creative Commons CC-BY-SA v4.0 license (LICENSE-CC-BY-SA or https://creativecommons.org/licenses/by-sa/4.0/legalcode).
Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be licensed as above, without any additional terms or conditions.
RTIC by example
This part of the book introduces the Real-Time Interrupt-driven Concurrency (RTIC) framework to new users by walking them through examples of increasing complexity.
All examples in this part of the book are accessible at the GitHub repository. The examples are runnable in QEMU (emulating a Cortex-M3 target), so no special hardware is required to follow along.
To run the examples with QEMU you will need the `qemu-system-arm` program. Check the embedded Rust book for instructions on how to set up an embedded development environment that includes QEMU.
To run the examples found in `examples/` locally, cargo needs a supported target and either `--examples` (run all examples) or `--example NAME` to run a specific example.
Assuming the dependencies are in place, running:

```console
$ cargo run --target thumbv7m-none-eabi --example locals
```

yields this output:

```console
foo: local_to_foo = 1
bar: local_to_bar = 1
idle: local_to_idle = 1
```

NOTE: You can choose the target device by passing a target triple to cargo (e.g. `cargo run --example init --target thumbv7m-none-eabi`) or configure a default target in `.cargo/config.toml`. For running the examples, we use a Cortex-M3 emulated in QEMU, so the target is `thumbv7m-none-eabi`.
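As a sketch, using the standard cargo configuration keys, a `.cargo/config.toml` that sets this default target could contain:

```toml
[build]
# Build for the Cortex-M3 target emulated by QEMU by default
target = "thumbv7m-none-eabi"
```

With this in place, `cargo run --example locals` builds for the default target without the `--target` flag.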
The `#[app]` attribute and an RTIC application
Requirements on the `app` attribute
All RTIC applications use the `app` attribute (`#[app(..)]`). This attribute only applies to a `mod`-item containing the RTIC application. The `app` attribute has a mandatory `device` argument that takes a path as a value. This must be a full path pointing to a peripheral access crate (PAC) generated using `svd2rust` v0.14.x or newer.
The `app` attribute will expand into a suitable entry point and thus replaces the use of the `cortex_m_rt::entry` attribute.
An RTIC application example
To give a flavour of RTIC, the following example contains commonly used features. In the following sections we will go through each feature in detail.
```rust
//! examples/common.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![deny(missing_docs)]
#![no_main]
#![no_std]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965, dispatchers = [SSI0, QEI0])]
mod app {
    use cortex_m_semihosting::{debug, hprintln};
    use systick_monotonic::*; // Implements the `Monotonic` trait

    // A monotonic timer to enable scheduling in RTIC
    #[monotonic(binds = SysTick, default = true)]
    type MyMono = Systick<100>; // 100 Hz / 10 ms granularity

    // Resources shared between tasks
    #[shared]
    struct Shared {
        s1: u32,
        s2: i32,
    }

    // Local resources to specific tasks (cannot be shared)
    #[local]
    struct Local {
        l1: u8,
        l2: i8,
    }

    #[init]
    fn init(cx: init::Context) -> (Shared, Local, init::Monotonics) {
        let systick = cx.core.SYST;

        // Initialize the monotonic (SysTick rate in QEMU is 12 MHz)
        let mono = Systick::new(systick, 12_000_000);

        // Spawn the task `foo` directly after `init` finishes
        foo::spawn().unwrap();

        // Spawn the task `bar` 1 second after `init` finishes, this is enabled
        // by the `#[monotonic(..)]` above
        bar::spawn_after(1.secs()).unwrap();

        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator

        (
            // Initialization of shared resources
            Shared { s1: 0, s2: 1 },
            // Initialization of task local resources
            Local { l1: 2, l2: 3 },
            // Move the monotonic timer to the RTIC run-time, this enables
            // scheduling
            init::Monotonics(mono),
        )
    }

    // Background task, runs whenever no other tasks are running
    #[idle]
    fn idle(_: idle::Context) -> ! {
        loop {
            continue;
        }
    }

    // Software task, not bound to a hardware interrupt.
    // This task takes the task local resource `l1`.
    // The resources `s1` and `s2` are shared between all other tasks.
    #[task(shared = [s1, s2], local = [l1])]
    fn foo(_: foo::Context) {
        // This task is only spawned once in `init`, hence this task will run
        // only once
        hprintln!("foo");
    }

    // Software task, also not bound to a hardware interrupt.
    // This task takes the task local resource `l2`.
    // The resources `s1` and `s2` are shared between all other tasks.
    #[task(shared = [s1, s2], local = [l2])]
    fn bar(_: bar::Context) {
        hprintln!("bar");

        // Run `bar` once per second
        bar::spawn_after(1.secs()).unwrap();
    }

    // Hardware task, bound to a hardware interrupt.
    // The resources `s1` and `s2` are shared between all other tasks.
    #[task(binds = UART0, priority = 3, shared = [s1, s2])]
    fn uart0_interrupt(_: uart0_interrupt::Context) {
        // This task is bound to the interrupt `UART0` and will run
        // whenever the interrupt fires.
        // Note that RTIC does NOT clear the interrupt flag, this is up to the
        // user.
        hprintln!("UART0 interrupt!");
    }
}
```
Resource usage
The RTIC framework manages shared and task local resources, allowing persistent data storage and safe access without the use of `unsafe` code.
RTIC resources are visible only to functions declared within the `#[app]` module, and the framework gives the user complete control (on a per-task basis) over resource accessibility.
Declaration of system-wide resources is done by annotating two `struct`s within the `#[app]` module with the attributes `#[local]` and `#[shared]`. Each field in these structures corresponds to a different resource (identified by field name). The difference between these two sets of resources will be covered below.
Each task must declare the resources it intends to access in its corresponding metadata attribute using the `local` and `shared` arguments. Each argument takes a list of resource identifiers. The listed resources are made available to the context under the `local` and `shared` fields of the `Context` structure.
The `init` task returns the initial values for the system-wide (`#[shared]` and `#[local]`) resources, and the set of initialized timers used by the application. The monotonic timers will be further discussed in Monotonic & spawn_{at/after}.
`#[local]` resources
`#[local]` resources are locally accessible to a specific task, meaning that only that task can access the resource, and it does so without locks or critical sections. This allows for resources, commonly drivers or large objects, to be initialized in `#[init]` and then passed to a specific task.
Thus, a task `#[local]` resource can only be accessed by one single task. Attempting to assign the same `#[local]` resource to more than one task is a compile-time error.
Types of `#[local]` resources must implement the `Send` trait, as they are sent from `init` to a target task and thus cross a thread boundary.
The example application shown below contains two tasks where each task has access to its own `#[local]` resource; the `idle` task has its own `#[local]` as well.
```rust
//! examples/locals.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![deny(missing_docs)]
#![no_main]
#![no_std]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965, dispatchers = [UART0, UART1])]
mod app {
    use cortex_m_semihosting::{debug, hprintln};

    #[shared]
    struct Shared {}

    #[local]
    struct Local {
        /// Local foo
        local_to_foo: i64,
        /// Local bar
        local_to_bar: i64,
        /// Local idle
        local_to_idle: i64,
    }

    // `#[init]` cannot access locals from the `#[local]` struct as they are initialized here.
    #[init]
    fn init(_: init::Context) -> (Shared, Local, init::Monotonics) {
        foo::spawn().unwrap();
        bar::spawn().unwrap();

        (
            Shared {},
            // initial values for the `#[local]` resources
            Local {
                local_to_foo: 0,
                local_to_bar: 0,
                local_to_idle: 0,
            },
            init::Monotonics(),
        )
    }

    // `local_to_idle` can only be accessed from this context
    #[idle(local = [local_to_idle])]
    fn idle(cx: idle::Context) -> ! {
        let local_to_idle = cx.local.local_to_idle;
        *local_to_idle += 1;

        hprintln!("idle: local_to_idle = {}", local_to_idle);

        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator

        // error: no `local_to_foo` field in `idle::LocalResources`
        // _cx.local.local_to_foo += 1;

        // error: no `local_to_bar` field in `idle::LocalResources`
        // _cx.local.local_to_bar += 1;

        loop {
            cortex_m::asm::nop();
        }
    }

    // `local_to_foo` can only be accessed from this context
    #[task(local = [local_to_foo])]
    fn foo(cx: foo::Context) {
        let local_to_foo = cx.local.local_to_foo;
        *local_to_foo += 1;

        // error: no `local_to_bar` field in `foo::LocalResources`
        // cx.local.local_to_bar += 1;

        hprintln!("foo: local_to_foo = {}", local_to_foo);
    }

    // `local_to_bar` can only be accessed from this context
    #[task(local = [local_to_bar])]
    fn bar(cx: bar::Context) {
        let local_to_bar = cx.local.local_to_bar;
        *local_to_bar += 1;

        // error: no `local_to_foo` field in `bar::LocalResources`
        // cx.local.local_to_foo += 1;

        hprintln!("bar: local_to_bar = {}", local_to_bar);
    }
}
```
Running the example:

```console
$ cargo run --target thumbv7m-none-eabi --example locals
foo: local_to_foo = 1
bar: local_to_bar = 1
idle: local_to_idle = 1
```

Local resources in `#[init]` and `#[idle]` have `'static` lifetimes. This is safe since both tasks are not re-entrant.
Task local initialized resources
Local resources can also be specified directly in the resource claim like so: `#[task(local = [my_var: TYPE = INITIAL_VALUE, ...])]`; this allows for creating locals which do not need to be initialized in `#[init]`.
Types of `#[task(local = [..])]` resources do not have to be `Send` or `Sync`, as they never cross a thread boundary.
In the example below the different uses and lifetimes are shown:
```rust
//! examples/declared_locals.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![deny(missing_docs)]
#![no_main]
#![no_std]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965, dispatchers = [UART0])]
mod app {
    use cortex_m_semihosting::debug;

    #[shared]
    struct Shared {}

    #[local]
    struct Local {}

    #[init(local = [a: u32 = 0])]
    fn init(cx: init::Context) -> (Shared, Local, init::Monotonics) {
        // Locals in `#[init]` have 'static lifetime
        let _a: &'static mut u32 = cx.local.a;

        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator

        (Shared {}, Local {}, init::Monotonics())
    }

    #[idle(local = [a: u32 = 0])]
    fn idle(cx: idle::Context) -> ! {
        // Locals in `#[idle]` have 'static lifetime
        let _a: &'static mut u32 = cx.local.a;

        loop {}
    }

    #[task(local = [a: u32 = 0])]
    fn foo(cx: foo::Context) {
        // Locals in `#[task]`s have a local lifetime
        let _a: &mut u32 = cx.local.a;

        // error: explicit lifetime required in the type of `cx`
        // let _a: &'static mut u32 = cx.local.a;
    }
}
```
`#[shared]` resources and `lock`
Critical sections are required to access `#[shared]` resources in a data-race-free manner, and to achieve this the `shared` field of the passed `Context` implements the `Mutex` trait for each shared resource accessible to the task. This trait has only one method, `lock`, which runs its closure argument in a critical section.
The critical section created by the `lock` API is based on dynamic priorities: it temporarily raises the dynamic priority of the context to a ceiling priority that prevents other tasks from preempting the critical section. This synchronization protocol is known as the Immediate Ceiling Priority Protocol (ICPP), and complies with the Stack Resource Policy (SRP) based scheduling of RTIC.
In the example below we have three interrupt handlers with priorities ranging from one to three. The two handlers with the lower priorities contend for the `shared` resource and must lock the resource in order to access its data. The highest priority handler, which does not access the `shared` resource, is free to preempt a critical section created by the lowest priority handler.
```rust
//! examples/lock.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![deny(missing_docs)]
#![no_main]
#![no_std]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965, dispatchers = [GPIOA, GPIOB, GPIOC])]
mod app {
    use cortex_m_semihosting::{debug, hprintln};

    #[shared]
    struct Shared {
        shared: u32,
    }

    #[local]
    struct Local {}

    #[init]
    fn init(_: init::Context) -> (Shared, Local, init::Monotonics) {
        foo::spawn().unwrap();

        (Shared { shared: 0 }, Local {}, init::Monotonics())
    }

    // when omitted priority is assumed to be `1`
    #[task(shared = [shared])]
    fn foo(mut c: foo::Context) {
        hprintln!("A");

        // the lower priority task requires a critical section to access the data
        c.shared.shared.lock(|shared| {
            // data can only be modified within this critical section (closure)
            *shared += 1;

            // bar will *not* run right now due to the critical section
            bar::spawn().unwrap();

            hprintln!("B - shared = {}", *shared);

            // baz does not contend for `shared` so it's allowed to run now
            baz::spawn().unwrap();
        });

        // critical section is over: bar can now start
        hprintln!("E");

        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator
    }

    #[task(priority = 2, shared = [shared])]
    fn bar(mut c: bar::Context) {
        // the higher priority task does still need a critical section
        let shared = c.shared.shared.lock(|shared| {
            *shared += 1;

            *shared
        });

        hprintln!("D - shared = {}", shared);
    }

    #[task(priority = 3)]
    fn baz(_: baz::Context) {
        hprintln!("C");
    }
}
```
```console
$ cargo run --target thumbv7m-none-eabi --example lock
A
B - shared = 1
C
D - shared = 2
E
```

Types of `#[shared]` resources have to be `Send`.
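The `Send` requirement can be illustrated with a small host-side sketch (plain Rust outside RTIC; `assert_send` is a hypothetical helper, not part of the framework). A type qualifies as a shared resource only if the compiler accepts it for such a `Send` bound:

```rust
// A generic function whose only purpose is to check a `Send` bound at
// compile time, mimicking the bound RTIC places on `#[shared]` resources.
fn assert_send<T: Send>() {}

fn main() {
    assert_send::<u32>();     // plain data is `Send`
    assert_send::<Vec<u8>>(); // owned heap data is `Send`
    // assert_send::<std::rc::Rc<u8>>(); // error: `Rc` is not `Send`
    println!("ok");
}
```

If a resource type fails this bound, the `#[app]` macro expansion produces a compile error rather than a runtime fault.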
Multi-lock
As an extension to `lock`, and to reduce rightward drift, locks can be taken as tuples. The following example shows this in use:
```rust
//! examples/multilock.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![deny(missing_docs)]
#![no_main]
#![no_std]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965, dispatchers = [GPIOA])]
mod app {
    use cortex_m_semihosting::{debug, hprintln};

    #[shared]
    struct Shared {
        shared1: u32,
        shared2: u32,
        shared3: u32,
    }

    #[local]
    struct Local {}

    #[init]
    fn init(_: init::Context) -> (Shared, Local, init::Monotonics) {
        locks::spawn().unwrap();

        (
            Shared {
                shared1: 0,
                shared2: 0,
                shared3: 0,
            },
            Local {},
            init::Monotonics(),
        )
    }

    // when omitted priority is assumed to be `1`
    #[task(shared = [shared1, shared2, shared3])]
    fn locks(c: locks::Context) {
        let s1 = c.shared.shared1;
        let s2 = c.shared.shared2;
        let s3 = c.shared.shared3;

        (s1, s2, s3).lock(|s1, s2, s3| {
            *s1 += 1;
            *s2 += 1;
            *s3 += 1;

            hprintln!("Multiple locks, s1: {}, s2: {}, s3: {}", *s1, *s2, *s3);
        });

        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator
    }
}
```
```console
$ cargo run --target thumbv7m-none-eabi --example multilock
Multiple locks, s1: 1, s2: 1, s3: 1
```
Only shared (`&-`) access
By default, the framework assumes that all tasks require exclusive access (`&mut-`) to resources, but it is possible to specify that a task only requires shared access (`&-`) to a resource using the `&resource_name` syntax in the `shared` list.
The advantage of specifying shared access (`&-`) to a resource is that no locks are required to access the resource, even if the resource is contended by more than one task running at different priorities. The downside is that the task only gets a shared reference (`&-`) to the resource, limiting the operations it can perform on it, but where a shared reference is enough this approach reduces the number of required locks. In addition to simple immutable data, this shared access can be useful where the resource type safely implements interior mutability, with appropriate locking or atomic operations of its own.
Note that in this release of RTIC it is not possible to request both exclusive access (`&mut-`) and shared access (`&-`) to the same resource from different tasks. Attempting to do so will result in a compile error.
In the example below a key (e.g. a cryptographic key) is loaded (or created) at runtime and then used from two tasks that run at different priorities without any kind of lock.
```rust
//! examples/only-shared-access.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![deny(missing_docs)]
#![no_main]
#![no_std]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965, dispatchers = [UART0, UART1])]
mod app {
    use cortex_m_semihosting::{debug, hprintln};

    #[shared]
    struct Shared {
        key: u32,
    }

    #[local]
    struct Local {}

    #[init]
    fn init(_: init::Context) -> (Shared, Local, init::Monotonics) {
        foo::spawn().unwrap();
        bar::spawn().unwrap();

        (Shared { key: 0xdeadbeef }, Local {}, init::Monotonics())
    }

    #[task(shared = [&key])]
    fn foo(cx: foo::Context) {
        let key: &u32 = cx.shared.key;
        hprintln!("foo(key = {:#x})", key);

        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator
    }

    #[task(priority = 2, shared = [&key])]
    fn bar(cx: bar::Context) {
        hprintln!("bar(key = {:#x})", cx.shared.key);
    }
}
```
```console
$ cargo run --target thumbv7m-none-eabi --example only-shared-access
bar(key = 0xdeadbeef)
foo(key = 0xdeadbeef)
```
Lock-free resource access of shared resources
A critical section is not required to access a `#[shared]` resource that's only accessed by tasks running at the same priority. In this case, you can opt out of the `lock` API by adding the `#[lock_free]` field-level attribute to the resource declaration (see example below). Note that this is merely a convenience to reduce needless resource-locking code, because even if the `lock` API is used, at runtime the framework will not produce a critical section due to how the underlying resource-ceiling preemption works.
Also worth noting: using `#[lock_free]` on resources shared by tasks running at different priorities will result in a compile-time error -- not using the `lock` API would be a data race in that case.
```rust
//! examples/lock-free.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![deny(missing_docs)]
#![no_main]
#![no_std]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965, dispatchers = [GPIOA])]
mod app {
    use cortex_m_semihosting::{debug, hprintln};

    #[shared]
    struct Shared {
        #[lock_free] // <- lock-free shared resource
        counter: u64,
    }

    #[local]
    struct Local {}

    #[init]
    fn init(_: init::Context) -> (Shared, Local, init::Monotonics) {
        foo::spawn().unwrap();

        (Shared { counter: 0 }, Local {}, init::Monotonics())
    }

    #[task(shared = [counter])] // <- same priority
    fn foo(c: foo::Context) {
        bar::spawn().unwrap();

        *c.shared.counter += 1; // <- no lock API required
        let counter = *c.shared.counter;
        hprintln!("  foo = {}", counter);
    }

    #[task(shared = [counter])] // <- same priority
    fn bar(c: bar::Context) {
        foo::spawn().unwrap();

        *c.shared.counter += 1; // <- no lock API required
        let counter = *c.shared.counter;
        hprintln!("  bar = {}", counter);

        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator
    }
}
```
```console
$ cargo run --target thumbv7m-none-eabi --example lock-free
  foo = 1
  bar = 2
```
App initialization and the `#[init]` task
An RTIC application requires an `init` task setting up the system. The corresponding `init` function must have the signature `fn(init::Context) -> (Shared, Local, init::Monotonics)`, where `Shared` and `Local` are resource structures defined by the user.
The `init` task executes after system reset, after an optionally defined `pre-init` code section and an always occurring internal RTIC initialization.
The `init` and optional `pre-init` tasks run with interrupts disabled and have exclusive access to the Cortex-M core (the `bare_metal::CriticalSection` token is available as `cs`).
Device specific peripherals are available through the `core` and `device` fields of `init::Context`.
Example
The example below shows the types of the `core`, `device` and `cs` fields, and showcases the use of a `local` variable with `'static` lifetime. Such variables can be delegated from the `init` task to other tasks of the RTIC application.
The `device` field is only available when the `peripherals` argument is set to the default value `true`. In the rare case you want to implement an ultra-slim application you can explicitly set `peripherals` to `false`.
```rust
//! examples/init.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![deny(missing_docs)]
#![no_main]
#![no_std]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965, peripherals = true)]
mod app {
    use cortex_m_semihosting::{debug, hprintln};

    #[shared]
    struct Shared {}

    #[local]
    struct Local {}

    #[init(local = [x: u32 = 0])]
    fn init(cx: init::Context) -> (Shared, Local, init::Monotonics) {
        // Cortex-M peripherals
        let _core: cortex_m::Peripherals = cx.core;

        // Device specific peripherals
        let _device: lm3s6965::Peripherals = cx.device;

        // Locals in `init` have 'static lifetime
        let _x: &'static mut u32 = cx.local.x;

        // Access to the critical section token,
        // to indicate that this is a critical section
        let _cs_token: bare_metal::CriticalSection = cx.cs;

        hprintln!("init");

        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator

        (Shared {}, Local {}, init::Monotonics())
    }
}
```
Running the example will print `init` to the console and then exit the QEMU process.

```console
$ cargo run --target thumbv7m-none-eabi --example init
init
```
The background task `#[idle]`
A function marked with the `idle` attribute can optionally appear in the module. This becomes the special idle task and must have the signature `fn(idle::Context) -> !`.
When present, the runtime will execute the `idle` task after `init`. Unlike `init`, `idle` will run with interrupts enabled and must never return, as the `-> !` function signature indicates. The Rust type `!` means "never".
Like in `init`, locally declared resources will have `'static` lifetimes that are safe to access.
The example below shows that `idle` runs after `init`.
```rust
//! examples/idle.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![deny(missing_docs)]
#![no_main]
#![no_std]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965)]
mod app {
    use cortex_m_semihosting::{debug, hprintln};

    #[shared]
    struct Shared {}

    #[local]
    struct Local {}

    #[init]
    fn init(_: init::Context) -> (Shared, Local, init::Monotonics) {
        hprintln!("init");

        (Shared {}, Local {}, init::Monotonics())
    }

    #[idle(local = [x: u32 = 0])]
    fn idle(cx: idle::Context) -> ! {
        // Locals in idle have lifetime 'static
        let _x: &'static mut u32 = cx.local.x;

        hprintln!("idle");

        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator

        loop {
            cortex_m::asm::nop();
        }
    }
}
```
```console
$ cargo run --target thumbv7m-none-eabi --example idle
init
idle
```
By default, the RTIC `idle` task does not try to optimize for any specific targets. A common useful optimization is to enable the SLEEPONEXIT bit and allow the MCU to enter sleep when reaching `idle`.
Caution: some hardware, unless configured otherwise, disables the debug unit during sleep mode. Consult your hardware-specific documentation as this is outside the scope of RTIC.
The following example shows how to enable sleep by setting the SLEEPONEXIT bit and providing a custom `idle` task, replacing the default `nop()` with `wfi()`.
```rust
//! examples/idle-wfi.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![deny(missing_docs)]
#![no_main]
#![no_std]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965)]
mod app {
    use cortex_m_semihosting::{debug, hprintln};

    #[shared]
    struct Shared {}

    #[local]
    struct Local {}

    #[init]
    fn init(mut cx: init::Context) -> (Shared, Local, init::Monotonics) {
        hprintln!("init");

        // Set the ARM SLEEPONEXIT bit to go to sleep after handling interrupts
        // See https://developer.arm.com/docs/100737/0100/power-management/sleep-mode/sleep-on-exit-bit
        cx.core.SCB.set_sleeponexit();

        (Shared {}, Local {}, init::Monotonics())
    }

    #[idle(local = [x: u32 = 0])]
    fn idle(cx: idle::Context) -> ! {
        // Locals in idle have lifetime 'static
        let _x: &'static mut u32 = cx.local.x;

        hprintln!("idle");

        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator

        loop {
            // Wait For Interrupt is used instead of a busy-wait loop
            // to allow the MCU to sleep between interrupts
            // https://developer.arm.com/documentation/ddi0406/c/Application-Level-Architecture/Instruction-Details/Alphabetical-list-of-instructions/WFI
            rtic::export::wfi()
        }
    }
}
```
```console
$ cargo run --target thumbv7m-none-eabi --example idle-wfi
init
idle
```
Defining tasks with `#[task]`
Tasks, defined with `#[task]`, are the main mechanism for getting work done in RTIC.
Tasks can
- Be spawned (now or in the future, also by themselves)
- Receive messages (passing messages between tasks)
- Be prioritized, allowing preemptive multitasking
- Optionally bind to a hardware interrupt
RTIC makes a distinction between "software tasks" and "hardware tasks". Hardware tasks are tasks that are bound to a specific interrupt vector in the MCU while software tasks are not.
This means that if a hardware task is bound to, let's say, a UART RX interrupt, the task will run every time that interrupt triggers, usually when a character is received.
Software tasks are explicitly spawned by a task, either immediately or using the Monotonic timer mechanism.
In the coming pages we will explore both kinds of tasks and the different options available.
Hardware tasks
At its core RTIC uses a hardware interrupt controller (the ARM NVIC on Cortex-M) to schedule and start execution of tasks. All tasks except `pre-init`, `#[init]` and `#[idle]` run as interrupt handlers.
Hardware tasks are explicitly bound to interrupt handlers. To bind a task to an interrupt, use the `#[task]` attribute argument `binds = InterruptName`. The task then becomes the interrupt handler for this hardware interrupt vector.
All tasks bound to an explicit interrupt are called hardware tasks since they start execution in reaction to a hardware event.
Specifying a non-existing interrupt name will cause a compilation error. The interrupt names are commonly defined by PAC or HAL crates.
Any available interrupt vector should work. Specific devices may bind specific interrupt priorities to specific interrupt vectors outside user-code control. See for example the nRF "softdevice". Beware of using interrupt vectors that are used internally by hardware features; RTIC is unaware of such hardware-specific details.
The example below demonstrates the use of the `#[task(binds = InterruptName)]` attribute to declare a hardware task bound to an interrupt handler.
```rust
//! examples/hardware.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![deny(missing_docs)]
#![no_main]
#![no_std]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965)]
mod app {
    use cortex_m_semihosting::{debug, hprintln};
    use lm3s6965::Interrupt;

    #[shared]
    struct Shared {}

    #[local]
    struct Local {}

    #[init]
    fn init(_: init::Context) -> (Shared, Local, init::Monotonics) {
        // Pends the UART0 interrupt but its handler won't run until *after*
        // `init` returns because interrupts are disabled
        rtic::pend(Interrupt::UART0); // equivalent to NVIC::pend

        hprintln!("init");

        (Shared {}, Local {}, init::Monotonics())
    }

    #[idle]
    fn idle(_: idle::Context) -> ! {
        // interrupts are enabled again; the `UART0` handler runs at this point
        hprintln!("idle");

        rtic::pend(Interrupt::UART0);

        loop {
            // Exit moved after nop to ensure that rtic::pend gets
            // to run before exiting
            cortex_m::asm::nop();
            debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator
        }
    }

    #[task(binds = UART0, local = [times: u32 = 0])]
    fn uart0(cx: uart0::Context) {
        // Safe access to local `static mut` variable
        *cx.local.times += 1;

        hprintln!(
            "UART0 called {} time{}",
            *cx.local.times,
            if *cx.local.times > 1 { "s" } else { "" }
        );
    }
}
```
```console
$ cargo run --target thumbv7m-none-eabi --example hardware
init
UART0 called 1 time
idle
UART0 called 2 times
```
Software tasks & spawn
The RTIC concept of a software task shares a lot with that of hardware tasks with the core difference that a software task is not explicitly bound to a specific interrupt vector, but rather bound to a “dispatcher” interrupt vector running at the intended priority of the software task (see below).
Thus, software tasks are tasks which are not directly bound to an interrupt vector.
The #[task]
attributes used on a function determine if it is
software tasks, specifically the absence of a binds = InterruptName
argument to the attribute definition.
The static method task_name::spawn()
spawns (schedules) a software
task by registering it with a specific dispatcher. If there are no
higher priority tasks available to the scheduler (which serves a set
of dispatchers), the task will start executing directly.
All software tasks at the same priority level share an interrupt handler bound to their dispatcher. What differentiates software and hardware tasks is the usage of either a dispatcher or a bound interrupt vector.
The interrupt vectors used as dispatchers cannot be used by hardware tasks.
Availability of a set of “free” (not in use by hardware tasks) and usable interrupt vectors allows the framework to dispatch software tasks via dedicated interrupt handlers.
This set of dispatchers, `dispatchers = [FreeInterrupt1, FreeInterrupt2, ...]`, is an argument to the `#[app]` attribute.
Each interrupt vector acting as dispatcher gets assigned to a unique priority level meaning that the list of dispatchers needs to cover all priority levels used by software tasks.
Example: the `dispatchers =` argument needs to have at least 3 entries for an application using three different priorities for software tasks.
The framework will give a compilation error if there are not enough dispatchers provided.
See the following example:
```rust
//! examples/spawn.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![deny(missing_docs)]
#![no_main]
#![no_std]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965, dispatchers = [SSI0])]
mod app {
    use cortex_m_semihosting::{debug, hprintln};

    #[shared]
    struct Shared {}

    #[local]
    struct Local {}

    #[init]
    fn init(_: init::Context) -> (Shared, Local, init::Monotonics) {
        hprintln!("init");
        foo::spawn().unwrap();

        (Shared {}, Local {}, init::Monotonics())
    }

    #[task]
    fn foo(_: foo::Context) {
        hprintln!("foo");

        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator
    }
}
```
```console
$ cargo run --target thumbv7m-none-eabi --example spawn
init
foo
```
Message passing & capacity
Software tasks support message passing: a software task can be spawned with an argument, e.g. `foo::spawn(1)`, which runs the task `foo` with the argument `1`.
Capacity sets the size of the spawn queue for the task; if not specified, capacity defaults to 1.
In the example below, the capacity of task `foo` is `3`, allowing three simultaneous pending spawns of `foo`. Exceeding this capacity is an `Error`.
The number of arguments to a task is not limited:
```rust
//! examples/message_passing.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![deny(missing_docs)]
#![no_main]
#![no_std]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965, dispatchers = [SSI0])]
mod app {
    use cortex_m_semihosting::{debug, hprintln};

    #[shared]
    struct Shared {}

    #[local]
    struct Local {}

    #[init]
    fn init(_: init::Context) -> (Shared, Local, init::Monotonics) {
        foo::spawn(1, 1).unwrap();
        foo::spawn(1, 2).unwrap();
        foo::spawn(2, 3).unwrap();
        assert!(foo::spawn(1, 4).is_err()); // The capacity of `foo` is reached

        (Shared {}, Local {}, init::Monotonics())
    }

    #[task(capacity = 3)]
    fn foo(_c: foo::Context, x: i32, y: u32) {
        hprintln!("foo {}, {}", x, y);

        if x == 2 {
            debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator
        }
    }
}
```
```console
$ cargo run --target thumbv7m-none-eabi --example message_passing
foo 1, 1
foo 1, 2
foo 2, 3
```
Task priorities
Priorities
The `priority` argument declares the static priority of each task.
For Cortex-M, tasks can have priorities in the range `1..=(1 << NVIC_PRIO_BITS)` where `NVIC_PRIO_BITS` is a constant defined in the `device` crate.
If the `priority` argument is omitted, the task priority defaults to `1`.
The `idle` task has a non-configurable static priority of `0`, the lowest priority.
A higher number means a higher priority in RTIC, which is the opposite of what the Cortex-M NVIC peripheral uses. Explicitly, this means that priority `10` is higher than priority `9`.
The highest static priority task takes precedence when more than one task are ready to execute.
The following scenario demonstrates task prioritization: spawning a higher priority task A during execution of a lower priority task B preempts task B. Task B is suspended until task A completes execution; when task A completes, task B resumes.
```text
Task Priority
  ┌────────────────────────────────────────────────────────┐
  │                                                        │
  │                                                        │
3 │                      Preempts                          │
2 │                     A─────────►                        │
1 │          B─────────► - - - - B────────►                │
0 │Idle┌─────►                   Resumes  ┌──────────►     │
  ├────┴──────────────────────────────────┴────────────────┤
  │                                                        │
  └────────────────────────────────────────────────────────┘Time
```
The following example showcases the priority based scheduling of tasks:
```rust
//! examples/preempt.rs

#![no_main]
#![no_std]

use panic_semihosting as _;
use rtic::app;

#[app(device = lm3s6965, dispatchers = [SSI0, QEI0])]
mod app {
    use cortex_m_semihosting::{debug, hprintln};

    #[shared]
    struct Shared {}

    #[local]
    struct Local {}

    #[init]
    fn init(_: init::Context) -> (Shared, Local, init::Monotonics) {
        foo::spawn().unwrap();

        (Shared {}, Local {}, init::Monotonics())
    }

    #[task(priority = 1)]
    fn foo(_: foo::Context) {
        hprintln!("foo - start");
        baz::spawn().unwrap();
        hprintln!("foo - end");
        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator
    }

    #[task(priority = 2)]
    fn bar(_: bar::Context) {
        hprintln!(" bar");
    }

    #[task(priority = 2)]
    fn baz(_: baz::Context) {
        hprintln!(" baz - start");
        bar::spawn().unwrap();
        hprintln!(" baz - end");
    }
}
```
```console
$ cargo run --target thumbv7m-none-eabi --example preempt
foo - start
 baz - start
 baz - end
 bar
foo - end
```
Note that the task `bar` does not preempt task `baz` because its priority is the same as `baz`'s. The higher priority task `bar` runs before `foo` when `baz` returns. When `bar` returns, `foo` can resume.
One more note about priorities: choosing a priority higher than what the device supports will result in a compilation error.
The error is cryptic due to limitations in the Rust language; if `priority = 9` is set for task `uart0_interrupt` in `example/common.rs`, it looks like:
```text
error[E0080]: evaluation of constant value failed
  --> examples/common.rs:10:1
   |
10 | #[rtic::app(device = lm3s6965, dispatchers = [SSI0, QEI0])]
   | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ attempt to compute `8_usize - 9_usize`, which would overflow
   |
   = note: this error originates in the attribute macro `rtic::app` (in Nightly builds, run with -Z macro-backtrace for more info)
```
The error message incorrectly points to the starting point of the macro, but at least the value subtracted (in this case 9) will suggest which task causes the error.
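The subtraction in the message comes from converting RTIC's logical priority (where 1 is lowest) into the value written to the NVIC priority register (where lower is more urgent). A host-runnable sketch of that conversion, mirroring the shape of RTIC's internal helper (the function name `logical2hw` here is illustrative, not a public API):

```rust
/// Convert a logical RTIC priority (1 = lowest) into the 8-bit value
/// written to an NVIC priority register (lower value = more urgent).
///
/// With `nvic_prio_bits = 3` (as on the lm3s6965), a logical priority of 9
/// makes `(1 << 3) - 9` underflow -- the `8_usize - 9_usize` in the error.
fn logical2hw(logical: u8, nvic_prio_bits: u8) -> u8 {
    ((1u8 << nvic_prio_bits) - logical) << (8 - nvic_prio_bits)
}

fn main() {
    // Valid logical priorities for 3 priority bits are 1..=8
    assert_eq!(logical2hw(1, 3), 0xE0); // logical 1 -> least urgent in hardware
    assert_eq!(logical2hw(8, 3), 0x00); // logical 8 -> most urgent in hardware
    println!("ok");
}
```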
Monotonic & spawn_{at/after}
An understanding of time is an important concept in embedded systems, and being able to run tasks based on time is essential. The framework provides the static methods `task::spawn_after(/* duration */)` and `task::spawn_at(/* specific time instant */)`.
`spawn_after` is more commonly used, but in cases where spawns must happen without drift or relative to a fixed baseline, `spawn_at` is available.
The `#[monotonic]` attribute, applied to a type alias definition, exists to support this. This type alias must point to a type which implements the `rtic_monotonic::Monotonic` trait.
This is generally some timer which handles the timing of the system.
One or more monotonics can coexist in the same system, for example a slow timer that wakes the system from sleep and another whose purpose is fine grained scheduling while the system is awake.
The attribute has one required parameter and two optional parameters: `binds`, `default`, and `priority` respectively.
The required parameter, `binds = InterruptName`, associates an interrupt vector with the timer's interrupt, while `default = true` enables a shorthand API when spawning and accessing time (`monotonics::now()` vs `monotonics::MyMono::now()`), and `priority` sets the priority of the interrupt vector.
The default `priority` is the maximum priority of the system. If your system has a high priority task with tight scheduling requirements, it might be desirable to demote the monotonic task to a lower priority to reduce scheduling jitter for the high priority task. This, however, might introduce jitter and delays into scheduling via the monotonic, making it a trade-off.
The monotonics are initialized in `#[init]` and returned within the `init::Monotonics( ... )` tuple. This activates the monotonics, making it possible to use them.
See the following example:
```rust
//! examples/schedule.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![deny(missing_docs)]
#![no_main]
#![no_std]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965, dispatchers = [SSI0])]
mod app {
    use cortex_m_semihosting::{debug, hprintln};
    use systick_monotonic::*;

    #[monotonic(binds = SysTick, default = true)]
    type MyMono = Systick<100>; // 100 Hz / 10 ms granularity

    #[shared]
    struct Shared {}

    #[local]
    struct Local {}

    #[init]
    fn init(cx: init::Context) -> (Shared, Local, init::Monotonics) {
        let systick = cx.core.SYST;

        // Initialize the monotonic (SysTick rate in QEMU is 12 MHz)
        let mono = Systick::new(systick, 12_000_000);

        hprintln!("init");

        // Schedule `foo` to run 1 second in the future
        foo::spawn_after(1.secs()).unwrap();

        (
            Shared {},
            Local {},
            init::Monotonics(mono), // Give the monotonic to RTIC
        )
    }

    #[task]
    fn foo(_: foo::Context) {
        hprintln!("foo");

        // Schedule `bar` to run 2 seconds in the future (1 second after foo runs)
        bar::spawn_after(1.secs()).unwrap();
    }

    #[task]
    fn bar(_: bar::Context) {
        hprintln!("bar");

        // Schedule `baz` to run 1 second from now, but with a specific time instant.
        baz::spawn_at(monotonics::now() + 1.secs()).unwrap();
    }

    #[task]
    fn baz(_: baz::Context) {
        hprintln!("baz");

        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator
    }
}
```
```console
$ cargo run --target thumbv7m-none-eabi --example schedule
init
foo
bar
baz
```
A key requirement of a Monotonic is that it must deal gracefully with hardware timer overruns.
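In practice, "dealing gracefully with overruns" means that instants must still compare correctly after the underlying counter wraps. A common approach, shown here as a host-runnable sketch on a bare 32-bit tick count (the helper name is illustrative, not part of the RTIC API), is half-range wrapping comparison:

```rust
/// Returns true if instant `a` is at or after instant `b`, treating the
/// 32-bit tick counter as wrapping. Correct as long as the two instants
/// are less than half the counter range (2^31 ticks) apart.
fn is_at_or_after(a: u32, b: u32) -> bool {
    // Wrapping subtraction maps "a is ahead of b" to a small delta
    a.wrapping_sub(b) < u32::MAX / 2
}

fn main() {
    // No wrap: 100 is after 50
    assert!(is_at_or_after(100, 50));
    assert!(!is_at_or_after(50, 100));
    // Across the wrap: tick 5 (just after overflow) is after u32::MAX - 5
    assert!(is_at_or_after(5, u32::MAX - 5));
    println!("ok");
}
```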
Canceling or rescheduling a scheduled task
Tasks spawned using `task::spawn_after` and `task::spawn_at` return a `SpawnHandle`, which allows canceling or rescheduling of the task scheduled to run in the future.
If `cancel` or `reschedule_at`/`reschedule_after` returns an `Err`, it means that the operation came too late and that the task has already been sent for execution. The following example shows this in action:
```rust
//! examples/cancel-reschedule.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![deny(missing_docs)]
#![no_main]
#![no_std]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965, dispatchers = [SSI0])]
mod app {
    use cortex_m_semihosting::{debug, hprintln};
    use systick_monotonic::*;

    #[monotonic(binds = SysTick, default = true)]
    type MyMono = Systick<100>; // 100 Hz / 10 ms granularity

    #[shared]
    struct Shared {}

    #[local]
    struct Local {}

    #[init]
    fn init(cx: init::Context) -> (Shared, Local, init::Monotonics) {
        let systick = cx.core.SYST;

        // Initialize the monotonic (SysTick rate in QEMU is 12 MHz)
        let mono = Systick::new(systick, 12_000_000);

        hprintln!("init");

        // Schedule `foo` to run 1 second in the future
        foo::spawn_after(1.secs()).unwrap();

        (
            Shared {},
            Local {},
            init::Monotonics(mono), // Give the monotonic to RTIC
        )
    }

    #[task]
    fn foo(_: foo::Context) {
        hprintln!("foo");

        // Schedule `baz` to run 2 seconds in the future
        let spawn_handle = baz::spawn_after(2.secs()).unwrap();
        bar::spawn_after(1.secs(), spawn_handle, false).unwrap(); // Change to true
    }

    #[task]
    fn bar(_: bar::Context, baz_handle: baz::SpawnHandle, do_reschedule: bool) {
        hprintln!("bar");

        if do_reschedule {
            // Reschedule baz 2 seconds from now, instead of the original 1 second
            // from now.
            baz_handle.reschedule_after(2.secs()).unwrap();
            // Or baz_handle.reschedule_at(/* time */)
        } else {
            // Or cancel it
            baz_handle.cancel().unwrap();
            debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator
        }
    }

    #[task]
    fn baz(_: baz::Context) {
        hprintln!("baz");
        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator
    }
}
```
```console
$ cargo run --target thumbv7m-none-eabi --example cancel-reschedule
init
foo
bar
```
Starting a new project
A recommendation when starting an RTIC project from scratch is to follow RTIC's `defmt-app-template`.
If you are targeting ARMv6-M or ARMv8-M-base architecture, check out the section Target Architecture for more information on hardware limitations to be aware of.
This will give you an RTIC application with support for RTT logging with `defmt` and stack overflow protection using `flip-link`. There is also a multitude of examples provided by the community:
- `rtic-examples` - Multiple projects
- https://github.com/kalkyl/f411-rtic
- ... More to come
The minimal app
This is the smallest possible RTIC application:
```rust
//! examples/smallest.rs

#![no_main]
#![no_std]

use panic_semihosting as _; // panic handler
use rtic::app;

#[app(device = lm3s6965)]
mod app {
    use cortex_m_semihosting::debug;

    #[shared]
    struct Shared {}

    #[local]
    struct Local {}

    #[init]
    fn init(_: init::Context) -> (Shared, Local, init::Monotonics) {
        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator
        (Shared {}, Local {}, init::Monotonics())
    }
}
```
Tips & tricks
In this section we will explore common tips & tricks related to using RTIC.
Implementing a Monotonic timer for scheduling
The framework is flexible: it can use any timer which has compare-match, and optionally overflow interrupts, for scheduling. The single requirement to make a timer usable with RTIC is implementing the `rtic_monotonic::Monotonic` trait.
Implementing time counting that supports large time spans is generally difficult; in RTIC 0.5, implementing time handling was a common problem. Moreover, the relation between time and the timers used for scheduling was difficult to understand.
For RTIC 1.0 we instead assume the user has a time library, e.g. `fugit` or `embedded_time`, as the basis for all time-based operations when implementing `Monotonic`. These libraries make it much easier to correctly implement the `Monotonic` trait, allowing the use of almost any timer in the system for scheduling.
The trait documents the requirements for each method, and for inspiration here is a list of `Monotonic` implementations:
- STM32F411 series, implemented for the 32-bit timers
- Nordic nRF52 series Timer, implemented for the 32-bit timers
- Nordic nRF52 series RTC, implemented for the RTCs
- Systick based, runs at a fixed interrupt (tick) rate - with some overhead but simple and with support for large time spans
- DWT and Systick based, a more efficient (tickless) implementation - requires both `SysTick` and `DWT`, supports both high resolution and large time spans
If you know of more implementations feel free to add them to this list.
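To make the shape of the work concrete, here is a host-runnable sketch. The trait below is a locally reproduced, simplified copy of `rtic_monotonic::Monotonic` (the real v1.0 trait additionally constrains `Instant`/`Duration` with arithmetic bounds and has a few more default methods), implemented for a fake free-running 32-bit counter standing in for a hardware timer:

```rust
/// Simplified local copy of the `rtic_monotonic::Monotonic` trait,
/// reproduced here for illustration only.
trait Monotonic {
    type Instant: Ord + Copy;
    type Duration;

    /// Current time of the monotonic.
    fn now(&mut self) -> Self::Instant;
    /// Program the compare-match interrupt to fire at `instant`.
    fn set_compare(&mut self, instant: Self::Instant);
    /// Acknowledge the compare-match interrupt.
    fn clear_compare_flag(&mut self);
    /// The time value of time zero.
    fn zero() -> Self::Instant;
    /// Reset the counter; called by RTIC at the end of `init`.
    unsafe fn reset(&mut self);
}

/// A fake free-running 32-bit tick counter standing in for hardware.
struct FakeTimer {
    count: u32,
    compare: u32,
}

impl Monotonic for FakeTimer {
    type Instant = u32;
    type Duration = u32;

    fn now(&mut self) -> u32 { self.count }
    fn set_compare(&mut self, instant: u32) { self.compare = instant; }
    fn clear_compare_flag(&mut self) { /* would clear a status register */ }
    fn zero() -> u32 { 0 }
    unsafe fn reset(&mut self) { self.count = 0; }
}

fn main() {
    let mut mono = FakeTimer { count: 0, compare: 0 };
    unsafe { mono.reset() };
    mono.count = 41; // simulate the hardware counter ticking
    let t = mono.now();
    mono.set_compare(t + 1); // "wake me one tick from now"
    assert_eq!(mono.compare, 42);
    println!("ok");
}
```

On a real target, `now`, `set_compare` and `clear_compare_flag` would read and write the timer's peripheral registers instead of plain fields.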
Resource de-structure-ing
Destructuring task resources might help readability if a task takes multiple resources. Here are two examples on how to split up the resource struct:
```rust
//! examples/destructure.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![deny(missing_docs)]
#![no_main]
#![no_std]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965, dispatchers = [UART0])]
mod app {
    use cortex_m_semihosting::{debug, hprintln};

    #[shared]
    struct Shared {
        a: u32,
        b: u32,
        c: u32,
    }

    #[local]
    struct Local {}

    #[init]
    fn init(_: init::Context) -> (Shared, Local, init::Monotonics) {
        foo::spawn().unwrap();
        bar::spawn().unwrap();

        (Shared { a: 0, b: 0, c: 0 }, Local {}, init::Monotonics())
    }

    #[idle]
    fn idle(_: idle::Context) -> ! {
        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator
        loop {}
    }

    // Direct destructure
    #[task(shared = [&a, &b, &c])]
    fn foo(cx: foo::Context) {
        let a = cx.shared.a;
        let b = cx.shared.b;
        let c = cx.shared.c;

        hprintln!("foo: a = {}, b = {}, c = {}", a, b, c);
    }

    // De-structure-ing syntax
    #[task(shared = [&a, &b, &c])]
    fn bar(cx: bar::Context) {
        let bar::SharedResources { a, b, c } = cx.shared;

        hprintln!("bar: a = {}, b = {}, c = {}", a, b, c);
    }
}
```
```console
$ cargo run --target thumbv7m-none-eabi --example destructure
foo: a = 0, b = 0, c = 0
bar: a = 0, b = 0, c = 0
```
Using indirection for faster message passing
Message passing always involves copying the payload from the sender into a static variable and then from the static variable into the receiver. Thus sending a large buffer, like a `[u8; 128]`, as a message involves two expensive `memcpy`s.
Indirection can minimize message passing overhead: instead of sending the buffer by value, one can send an owning pointer into the buffer.
One can use a global memory allocator to achieve indirection (`alloc::Box`, `alloc::Rc`, etc.), which requires using the nightly channel as of Rust v1.37.0, or one can use a statically allocated memory pool like `heapless::Pool`.
As this approach goes completely outside the RTIC resource model (`shared` and `local`), the program relies on the correctness of the memory allocator, in this case `heapless::Pool`.
Here's an example where `heapless::Pool` is used to "box" buffers of 128 bytes.
```rust
//! examples/pool.rs

#![deny(unsafe_code)]
#![deny(warnings)]
// pool!() generates a struct without docs
//#![deny(missing_docs)]
#![no_main]
#![no_std]

use heapless::{
    pool,
    pool::singleton::{Box, Pool},
};
use panic_semihosting as _;
use rtic::app;

// Declare a pool of 128-byte memory blocks
pool!(P: [u8; 128]);

#[app(device = lm3s6965, dispatchers = [SSI0, QEI0])]
mod app {
    use crate::{Box, Pool};
    use cortex_m_semihosting::debug;
    use lm3s6965::Interrupt;

    // Import the memory pool into scope
    use super::P;

    #[shared]
    struct Shared {}

    #[local]
    struct Local {}

    #[init(local = [memory: [u8; 512] = [0; 512]])]
    fn init(cx: init::Context) -> (Shared, Local, init::Monotonics) {
        // Increase the capacity of the memory pool by ~4
        P::grow(cx.local.memory);

        rtic::pend(Interrupt::I2C0);

        (Shared {}, Local {}, init::Monotonics())
    }

    #[task(binds = I2C0, priority = 2)]
    fn i2c0(_: i2c0::Context) {
        // claim a memory block, initialize it and ..
        let x = P::alloc().unwrap().init([0u8; 128]);

        // .. send it to the `foo` task
        foo::spawn(x).ok().unwrap();

        // send another block to the task `bar`
        bar::spawn(P::alloc().unwrap().init([0u8; 128]))
            .ok()
            .unwrap();
    }

    #[task]
    fn foo(_: foo::Context, _x: Box<P>) {
        // explicitly return the block to the pool
        drop(_x);

        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator
    }

    #[task(priority = 2)]
    fn bar(_: bar::Context, _x: Box<P>) {
        // this is done automatically so we can omit the call to `drop`
        // drop(x);
    }
}
```
```console
$ cargo run --target thumbv7m-none-eabi --example pool
```
'static super-powers
In `#[init]` and `#[idle]`, `local` resources have `'static` lifetime.
This is useful when pre-allocating and/or splitting resources between tasks, drivers, or some other object. It comes in handy when drivers, such as USB drivers, need to allocate memory, and when using splittable data structures such as `heapless::spsc::Queue`.
In the following example two different tasks share a `heapless::spsc::Queue` for lock-free access to the shared queue.
```rust
//! examples/static.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![deny(missing_docs)]
#![no_main]
#![no_std]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965, dispatchers = [UART0])]
mod app {
    use cortex_m_semihosting::{debug, hprintln};
    use heapless::spsc::{Consumer, Producer, Queue};

    #[shared]
    struct Shared {}

    #[local]
    struct Local {
        p: Producer<'static, u32, 5>,
        c: Consumer<'static, u32, 5>,
    }

    #[init(local = [q: Queue<u32, 5> = Queue::new()])]
    fn init(cx: init::Context) -> (Shared, Local, init::Monotonics) {
        // q has 'static life-time so after the split and return of `init`
        // it will continue to exist and be allocated
        let (p, c) = cx.local.q.split();

        foo::spawn().unwrap();

        (Shared {}, Local { p, c }, init::Monotonics())
    }

    #[idle(local = [c])]
    fn idle(c: idle::Context) -> ! {
        loop {
            // Lock-free access to the same underlying queue!
            if let Some(data) = c.local.c.dequeue() {
                hprintln!("received message: {}", data);

                // Run foo until data
                if data == 3 {
                    debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator
                } else {
                    foo::spawn().unwrap();
                }
            }
        }
    }

    #[task(local = [p, state: u32 = 0])]
    fn foo(c: foo::Context) {
        *c.local.state += 1;

        // Lock-free access to the same underlying queue!
        c.local.p.enqueue(*c.local.state).unwrap();
    }
}
```
Running this program produces the expected output.
```console
$ cargo run --target thumbv7m-none-eabi --example static
received message: 1
received message: 2
received message: 3
```
Inspecting generated code
`#[rtic::app]` is a procedural macro that produces support code. If for some reason you need to inspect the code generated by this macro you have two options:
You can inspect the file `rtic-expansion.rs` inside the `target` directory. This file contains the expansion of the `#[rtic::app]` item (not your whole program!) of the last built (via `cargo build` or `cargo check`) RTIC application. The expanded code is not pretty printed by default, so you'll want to run `rustfmt` on it before you read it.
```console
$ cargo build --example foo
$ rustfmt target/rtic-expansion.rs
$ tail target/rtic-expansion.rs
```
```rust
#[doc = r" Implementation details"]
mod app {
    #[doc = r" Always include the device crate which contains the vector table"]
    use lm3s6965 as _;
    #[no_mangle]
    unsafe extern "C" fn main() -> ! {
        rtic::export::interrupt::disable();
        let mut core: rtic::export::Peripherals = core::mem::transmute(());
        core.SCB.scr.modify(|r| r | 1 << 1);
        rtic::export::interrupt::enable();
        loop {
            rtic::export::wfi()
        }
    }
}
```
Or, you can use the `cargo-expand` sub-command. This sub-command will expand all the macros, including the `#[rtic::app]` attribute, and modules in your crate and print the output to the console.
```console
# produces the same output as before
$ cargo expand --example smallest | tail
```
Running tasks from RAM
The main goal of moving the specification of RTIC applications to attributes in RTIC v0.4.0 was to allow inter-operation with other attributes. For example, the `link_section` attribute can be applied to tasks to place them in RAM; this can improve performance in some cases.
IMPORTANT: In general, the `link_section`, `export_name` and `no_mangle` attributes are powerful but also easy to misuse. Incorrectly using any of these attributes can cause undefined behavior; you should always prefer to use safe, higher level attributes around them, like `cortex-m-rt`'s `interrupt` and `exception` attributes.

In the particular case of RAM functions there's no safe abstraction for it in `cortex-m-rt` v0.6.5, but there's an RFC for adding a `ramfunc` attribute in a future release.
The example below shows how to place the higher priority task, `bar`, in RAM.
```rust
//! examples/ramfunc.rs

#![deny(warnings)]
#![deny(missing_docs)]
#![no_main]
#![no_std]

use panic_semihosting as _;

#[rtic::app(
    device = lm3s6965,
    dispatchers = [
        UART0,
        #[link_section = ".data.UART1"]
        UART1
    ])
]
mod app {
    use cortex_m_semihosting::{debug, hprintln};

    #[shared]
    struct Shared {}

    #[local]
    struct Local {}

    #[init]
    fn init(_: init::Context) -> (Shared, Local, init::Monotonics) {
        foo::spawn().unwrap();

        (Shared {}, Local {}, init::Monotonics())
    }

    #[inline(never)]
    #[task]
    fn foo(_: foo::Context) {
        hprintln!("foo");

        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator
    }

    // run this task from RAM
    #[inline(never)]
    #[link_section = ".data.bar"]
    #[task(priority = 2)]
    fn bar(_: bar::Context) {
        foo::spawn().unwrap();
    }
}
```
Running this program produces the expected output.
```console
$ cargo run --target thumbv7m-none-eabi --example ramfunc
foo
```
One can look at the output of `cargo-nm` to confirm that `bar` ended up in RAM (`0x2000_0000`), whereas `foo` ended up in Flash (`0x0000_0000`).
```console
$ cargo nm --example ramfunc --release | grep ' foo::'
00000162 t ramfunc::foo::h30e7789b08c08e19
$ cargo nm --example ramfunc --release | grep ' bar::'
20000000 t ramfunc::bar::h9d6714fe5a3b0c89
```
Awesome RTIC examples
See the `rtic-rs/rtic-examples` repository for complete examples provided by the community.
Pull-requests to this repo are welcome!
Migration Guides
This section describes how to migrate between different versions of RTIC. It also acts as a comparing reference between versions.
Migrating from v0.5.x to v1.0.0
This section describes how to upgrade from v0.5.x to v1.0.0 of the RTIC framework.
`Cargo.toml` - version bump
Change the version of `cortex-m-rtic` to `"1.0.0"`.
`mod` instead of `const`
With the support of attributes on modules the `const APP` workaround is not needed.
Change
```rust
#[rtic::app(/* .. */)]
const APP: () = {
    [code here]
};
```
into
```rust
#[rtic::app(/* .. */)]
mod app {
    [code here]
}
```
Now that a regular Rust module is used it means it is possible to have custom
user code within that module.
Additionally, it means that `use`-statements for resources used in user code must be moved inside `mod app`, or be referred to with `super`. For example, change:
```rust
use some_crate::some_func;

#[rtic::app(/* .. */)]
const APP: () = {
    fn func() {
        some_crate::some_func();
    }
};
```
into
```rust
#[rtic::app(/* .. */)]
mod app {
    use some_crate::some_func;

    fn func() {
        some_crate::some_func();
    }
}
```
or
```rust
use some_crate::some_func;

#[rtic::app(/* .. */)]
mod app {
    fn func() {
        super::some_crate::some_func();
    }
}
```
Move dispatchers from `extern "C"` to app arguments
Change
```rust
#[rtic::app(/* .. */)]
const APP: () = {
    [code here]

    // RTIC requires that unused interrupts are declared in an extern block when
    // using software tasks; these free interrupts will be used to dispatch the
    // software tasks.
    extern "C" {
        fn SSI0();
        fn QEI0();
    }
};
```
into
```rust
#[rtic::app(/* .. */, dispatchers = [SSI0, QEI0])]
mod app {
    [code here]
}
```
This also works for RAM functions, see `examples/ramfunc.rs`.
Resources structs - `#[shared]`, `#[local]`
Previously the RTIC resources had to be in a struct named exactly "Resources":
```rust
struct Resources {
    // Resources defined in here
}
```
With RTIC v1.0.0 the resources structs are annotated similarly to `#[task]`, `#[init]` and `#[idle]`: with the attributes `#[shared]` and `#[local]`.
```rust
#[shared]
struct MySharedResources {
    // Resources shared between tasks are defined here
}

#[local]
struct MyLocalResources {
    // Resources defined here cannot be shared between tasks; each one is local to a single task
}
```
These structs can be freely named by the developer.
`shared` and `local` arguments in `#[task]`s
In v1.0.0 resources are split between `shared` resources and `local` resources. `#[task]`, `#[init]` and `#[idle]` no longer have a `resources` argument; they must now use the `shared` and `local` arguments.
In v0.5.x:
```rust
struct Resources {
    local_to_b: i64,
    shared_by_a_and_b: i64,
}

#[task(resources = [shared_by_a_and_b])]
fn a(_: a::Context) {}

#[task(resources = [shared_by_a_and_b, local_to_b])]
fn b(_: b::Context) {}
```
In v1.0.0:
```rust
#[shared]
struct Shared {
    shared_by_a_and_b: i64,
}

#[local]
struct Local {
    local_to_b: i64,
}

#[task(shared = [shared_by_a_and_b])]
fn a(_: a::Context) {}

#[task(shared = [shared_by_a_and_b], local = [local_to_b])]
fn b(_: b::Context) {}
```
Symmetric locks
RTIC now utilizes symmetric locks; this means that the `lock` method needs to be used for all `shared` resource access.
In old code one could do the following, as the high priority task had exclusive access to the resource:
```rust
#[task(priority = 2, resources = [r])]
fn foo(cx: foo::Context) {
    cx.resources.r = /* ... */;
}

#[task(resources = [r])]
fn bar(cx: bar::Context) {
    cx.resources.r.lock(|r| r = /* ... */);
}
```
And with symmetric locks one needs to use locks in both tasks:
```rust
#[task(priority = 2, shared = [r])]
fn foo(cx: foo::Context) {
    cx.shared.r.lock(|r| r = /* ... */);
}

#[task(shared = [r])]
fn bar(cx: bar::Context) {
    cx.shared.r.lock(|r| r = /* ... */);
}
```
Note that the performance does not change, thanks to LLVM's optimizations which optimize away unnecessary locks.
Lock-free resource access
In RTIC 0.5, resources shared by tasks running at the same priority could be accessed without the `lock` API. This is still possible in 1.0: the `#[shared]` resource must be annotated with the field-level `#[lock_free]` attribute.
v0.5 code:
```rust
struct Resources {
    counter: u64,
}

#[task(resources = [counter])]
fn a(cx: a::Context) {
    *cx.resources.counter += 1;
}

#[task(resources = [counter])]
fn b(cx: b::Context) {
    *cx.resources.counter += 1;
}
```
v1.0 code:
```rust
#[shared]
struct Shared {
    #[lock_free]
    counter: u64,
}

#[task(shared = [counter])]
fn a(cx: a::Context) {
    *cx.shared.counter += 1;
}

#[task(shared = [counter])]
fn b(cx: b::Context) {
    *cx.shared.counter += 1;
}
```
No `static mut` transform
`static mut` variables are no longer transformed to safe `&'static mut` references. Instead of that syntax, use the `local` argument in `#[init]`.
v0.5.x code:
```rust
#[init]
fn init(_: init::Context) {
    static mut BUFFER: [u8; 1024] = [0; 1024];
    let buffer: &'static mut [u8; 1024] = BUFFER;
}
```
v1.0.0 code:
```rust
#[init(local = [
    buffer: [u8; 1024] = [0; 1024]
//   type ^^^^^^^^^^^^   ^^^^^^^^^ initial value
])]
fn init(cx: init::Context) -> (Shared, Local, init::Monotonics) {
    let buffer: &'static mut [u8; 1024] = cx.local.buffer;

    (Shared {}, Local {}, init::Monotonics())
}
```
Init always returns late resources
In order to make the API more symmetric, the `#[init]` task always returns the late resources.
From this:
```rust
#[rtic::app(device = lm3s6965)]
const APP: () = {
    #[init]
    fn init(_: init::Context) {
        rtic::pend(Interrupt::UART0);
    }

    // [more code]
};
```
to this:
```rust
#[rtic::app(device = lm3s6965)]
mod app {
    #[shared]
    struct MySharedResources {}

    #[local]
    struct MyLocalResources {}

    #[init]
    fn init(_: init::Context) -> (MySharedResources, MyLocalResources, init::Monotonics) {
        rtic::pend(Interrupt::UART0);

        (MySharedResources, MyLocalResources, init::Monotonics())
    }

    // [more code]
}
```
Spawn from anywhere
With the new spawn/spawn_after/spawn_at interface, old code requiring the context `cx` for spawning, such as:
```rust
#[task(spawn = [bar])]
fn foo(cx: foo::Context) {
    cx.spawn.bar().unwrap();
}

#[task(schedule = [foo])]
fn bar(cx: bar::Context) {
    cx.schedule.foo(/* ... */).unwrap();
}
```
Will now be written as:
```rust
#[task]
fn foo(_c: foo::Context) {
    bar::spawn().unwrap();
}

#[task]
fn bar(_c: bar::Context) {
    // Takes a Duration, relative to "now"
    let spawn_handle = foo::spawn_after(/* ... */);
}

#[task]
fn bar(_c: bar::Context) {
    // Takes an Instant
    let spawn_handle = foo::spawn_at(/* ... */);
}
```
Thus the requirement of having access to the context is dropped. Note that the `spawn`/`schedule` arguments in the task definition are no longer needed.
Additions
Extern tasks
Both software and hardware tasks can now be defined external to the `mod app`. Previously this was possible only by implementing a trampoline calling out to the task implementation. See examples `examples/extern_binds.rs` and `examples/extern_spawn.rs`. This enables breaking apps into multiple files.
Migrating from v0.4.x to v0.5.0
This section covers how to upgrade an application written against RTFM v0.4.x to the version v0.5.0 of the framework.
Project name change RTFM -> RTIC
With release v0.5.2 the name was changed to Real-Time Interrupt-driven Concurrency.
All occurrences of `RTFM` need to change to `RTIC`. See the migration guide RTFM to RTIC.
Cargo.toml
Change the version of `cortex-m-rtfm` to `"0.5.0"`, and change `rtfm` to `rtic`. Remove the `timer-queue` feature.
```toml
[dependencies.cortex-m-rtfm]
# change this
version = "0.4.3"

# into this
[dependencies.cortex-m-rtic]
version = "0.5.0"

# and remove this Cargo feature
features = ["timer-queue"]
#           ^^^^^^^^^^^^^
```
`Context` argument
All functions inside the `#[rtfm::app]` item need to take as first argument a `Context` structure. This `Context` type will contain the variables that were magically injected into the scope of the function by version v0.4.x of the framework: `resources`, `spawn`, `schedule` -- these variables will become fields of the `Context` structure. Each function within the `#[rtfm::app]` item gets a different `Context` type.
```rust
#[rtfm::app(/* .. */)]
const APP: () = {
    // change this
    #[task(resources = [x], spawn = [a], schedule = [b])]
    fn foo() {
        resources.x.lock(|x| /* .. */);
        spawn.a(message);
        schedule.b(baseline);
    }

    // into this
    #[task(resources = [x], spawn = [a], schedule = [b])]
    fn foo(mut cx: foo::Context) {
        // ^^^^^^^^^^^^^^^^^^^^
        cx.resources.x.lock(|x| /* .. */);
        // ^^^
        cx.spawn.a(message);
        // ^^^
        cx.schedule.b(message, baseline);
        // ^^^
    }

    // change this
    #[init]
    fn init() {
        // ..
    }

    // into this
    #[init]
    fn init(cx: init::Context) {
        //  ^^^^^^^^^^^^^^^^^
        // ..
    }

    // ..
};
```
Resources
The syntax used to declare resources has changed from static mut
variables to a struct Resources
.
#[rtfm::app(/* .. */)]
const APP: () = {
    // change this
    static mut X: u32 = 0;
    static mut Y: u32 = (); // late resource

    // into this
    struct Resources {
        #[init(0)] // <- initial value
        X: u32, // NOTE: we suggest changing the naming style to `snake_case`

        Y: u32, // late resource
    }

    // ..
};
Device peripherals
If your application was accessing the device peripherals in #[init]
through
the device
variable then you'll need to add peripherals = true
to the
#[rtfm::app]
attribute to continue to access the device peripherals through
the device
field of the init::Context
structure.
Change this:
#[rtfm::app(/* .. */)]
const APP: () = {
    #[init]
    fn init() {
        device.SOME_PERIPHERAL.write(something);
    }

    // ..
};
Into this:
#[rtfm::app(/* .. */, peripherals = true)]
//                    ^^^^^^^^^^^^^^^^^^
const APP: () = {
    #[init]
    fn init(cx: init::Context) {
        //  ^^^^^^^^^^^^^^^^^
        cx.device.SOME_PERIPHERAL.write(something);
        // ^^^
    }

    // ..
};
#[interrupt]
and #[exception]
Remove the attributes #[interrupt]
and #[exception]
.
To declare hardware tasks in v0.5.x use the #[task]
attribute with the binds
argument instead.
Change this:
#[rtfm::app(/* .. */)]
const APP: () = {
    // hardware tasks
    #[exception]
    fn SVCall() { /* .. */ }

    #[interrupt]
    fn UART0() { /* .. */ }

    // software task
    #[task]
    fn foo() { /* .. */ }

    // ..
};
Into this:
#[rtfm::app(/* .. */)]
const APP: () = {
    #[task(binds = SVCall)]
    //     ^^^^^^^^^^^^^^
    fn svcall(cx: svcall::Context) { /* .. */ }
    // ^^^^^^ we suggest you use a `snake_case` name here

    #[task(binds = UART0)]
    //     ^^^^^^^^^^^^^
    fn uart0(cx: uart0::Context) { /* .. */ }

    #[task]
    fn foo(cx: foo::Context) { /* .. */ }

    // ..
};
schedule
The schedule
API no longer requires the timer-queue
cargo feature.
To use the schedule API one must first define the monotonic timer the runtime will use, via the monotonic argument of the #[rtfm::app] attribute.
To continue using the cycle counter (CYCCNT) as the monotonic timer,
and match the behavior of version v0.4.x, add the monotonic = rtfm::cyccnt::CYCCNT
argument to the #[rtfm::app]
attribute.
Also, the Duration
and Instant
types and the U32Ext
trait moved
into the rtfm::cyccnt
module.
This module is only available on ARMv7-M+ devices.
The removal of the timer-queue feature also brings back the DWT peripheral inside the core peripherals struct. If DWT is required, ensure it is enabled by the application inside init.
Change this:
use rtfm::{Duration, Instant, U32Ext};

#[rtfm::app(/* .. */)]
const APP: () = {
    #[task(schedule = [b])]
    fn a() {
        // ..
    }
};
Into this:
use rtfm::cyccnt::{Duration, Instant, U32Ext};
//       ^^^^^^^^

#[rtfm::app(/* .. */, monotonic = rtfm::cyccnt::CYCCNT)]
//                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
const APP: () = {
    #[init]
    fn init(cx: init::Context) {
        cx.core.DWT.enable_cycle_counter();

        // optional, configure the DWT to run without a debugger connected
        cx.core.DCB.enable_trace();
    }

    #[task(schedule = [b])]
    fn a(cx: a::Context) {
        // ..
    }
};
Migrating from RTFM to RTIC
This section covers how to upgrade an application written against RTFM v0.5.x to the same version of RTIC. This applies since the renaming of the framework as per RFC #33.
Note: There are no code differences between RTFM v0.5.3 and RTIC v0.5.3, it is purely a name change.
Cargo.toml
First, the cortex-m-rtfm
dependency needs to be updated to
cortex-m-rtic
.
[dependencies]
# change this
cortex-m-rtfm = "0.5.3"
# into this
cortex-m-rtic = "0.5.3"
Code changes
The only code change that needs to be made is that any reference to rtfm now needs to point to rtic, as follows:
//
// Change this
//
#[rtfm::app(/* .. */, monotonic = rtfm::cyccnt::CYCCNT)]
const APP: () = {
    // ...
};

//
// Into this
//
#[rtic::app(/* .. */, monotonic = rtic::cyccnt::CYCCNT)]
const APP: () = {
    // ...
};
Under the hood
This chapter is currently a work in progress; it will re-appear once it is more complete.
This section describes the internals of the RTIC framework at a high level.
Low level details like the parsing and code generation done by the procedural
macro (#[app]
) will not be explained here. The focus will be the analysis of
the user specification and the data structures used by the runtime.
We highly suggest that you read the embedonomicon section on concurrency before you dive into this material.
Target Architecture
While RTIC can currently target all Cortex-M devices, there are some key architecture differences that
users should be aware of. Namely, the ARMv6-M and ARMv8-M-base architectures lack the Base Priority
Mask Register (BASEPRI), which lends itself exceptionally well to the hardware priority ceiling
support used in RTIC; its absence forces RTIC to use source masking instead on those architectures.
For each implementation of lock and a detailed commentary of pros and cons, see the implementation of
lock in src/export.rs.
These differences influence how critical sections are realized, but functionality should be the same except that ARMv6-M/ARMv8-M-base cannot have tasks with shared resources bound to exception handlers, as these cannot be masked in hardware.
Table 1 below shows a list of Cortex-M processors and which type of critical section they employ.
Table 1: Critical Section Implementation by Processor Architecture
| Processor  | Architecture | Priority Ceiling | Source Masking |
|------------|--------------|------------------|----------------|
| Cortex-M0  | ARMv6-M      |                  | ✓              |
| Cortex-M0+ | ARMv6-M      |                  | ✓              |
| Cortex-M3  | ARMv7-M      | ✓                |                |
| Cortex-M4  | ARMv7-M      | ✓                |                |
| Cortex-M7  | ARMv7-M      | ✓                |                |
| Cortex-M23 | ARMv8-M-base |                  | ✓              |
| Cortex-M33 | ARMv8-M-main | ✓                |                |
Priority Ceiling
This implementation is covered in depth by the Critical Sections page of this book.
Source Masking
Without a BASEPRI
register which allows for directly setting a priority ceiling in the Nested
Vectored Interrupt Controller (NVIC), RTIC must instead rely on disabling (masking) interrupts.
Consider Figure 1 below, showing two tasks A and B where A has higher priority but shares a resource
with B.
Figure 1: Shared Resources and Source Masking
┌────────────────────────────────────────────────────────────────┐
│ │
│ │
3 │ Pending Preempts │
2 │ ↑- - -A- - - - -↓A─────────► │
1 │ B───────────────────► - - - - B────────► │
0 │Idle┌─────► Resumes ┌────────► │
├────┴────────────────────────────────────────────┴──────────────┤
│ │
└────────────────────────────────────────────────────────────────┴──► Time
t1 t2 t3 t4
At time t1, task B locks the shared resource by selectively disabling (using the NVIC) all other
tasks which have a priority equal to or less than any task which shares resources with B. In effect
this creates a virtual priority ceiling, mirroring the BASEPRI approach described in the
Critical Sections page. Task A is one such task that shares resources with
task B. At time t2, task A is either spawned by task B or becomes pending through an interrupt
condition, but does not yet preempt task B even though its priority is greater. This is because the
NVIC is preventing it from starting due to task A being disabled. At time t3, task B
releases the lock by re-enabling the tasks in the NVIC. Because task A was pending and has a higher
priority than task B, it immediately preempts task B and is free to use the shared resource without
risk of data race conditions. At time t4, task A completes and returns the execution context to B.
Since source masking relies on use of the NVIC, core exception sources such as HardFault, SVCall, PendSV, and SysTick cannot share data with other tasks.