## Using indirection for faster message passing

```rust
    #[init(local = [memory: [BoxBlock<u8>; POOL_CAPACITY] = [BLOCK; POOL_CAPACITY]])]
    fn init(cx: init::Context) -> (Shared, Local) {
        for block in cx.local.memory {
            // Give the 'static memory to the pool
            P.manage(block);
        }

        rtic::pend(Interrupt::I2C0);

        (Shared {}, Local {})
    }

    #[task(binds = I2C0, priority = 2)]
    fn i2c0(_: i2c0::Context) {
        // Claim 128 u8 blocks
        let x = P.alloc(128).unwrap();

        // .. send it to the `foo` task
        foo::spawn(x).ok().unwrap();

        // send another 128 u8 blocks to the task `bar`
        bar::spawn(P.alloc(128).unwrap()).ok().unwrap();
    }

    #[task]
    async fn foo(_: foo::Context, _x: Box<P>) {
        // explicitly return the block to the pool
        drop(_x);

        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator
    }

    #[task(priority = 2)]
    async fn bar(_: bar::Context, _x: Box<P>) {
        // this is done automatically so we can omit the call to `drop`
        // drop(_x);
    }
}
```

```console
$ cargo xtask qemu --verbose --example pool
```

## 'static super-powers

In `#[init]` and `#[idle]`, `local` resources have `'static` lifetime. This is useful when pre-allocating and/or splitting resources between tasks, drivers, or other objects. It comes in handy when drivers, such as USB drivers, need to allocate memory, and when using splittable data structures such as `heapless::spsc::Queue`.

In the following example two different tasks share a `heapless::spsc::Queue` for lock-free access to the shared queue.

```rust
//! examples/static.rs

#![no_main]
#![no_std]
#![deny(warnings)]
#![deny(unsafe_code)]
#![deny(missing_docs)]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965, dispatchers = [UART0])]
mod app {
    use cortex_m_semihosting::{debug, hprintln};
    use heapless::spsc::{Consumer, Producer, Queue};

    #[shared]
    struct Shared {}

    #[local]
    struct Local {
        p: Producer<'static, u32, 5>,
        c: Consumer<'static, u32, 5>,
    }

    #[init(local = [q: Queue<u32, 5> = Queue::new()])]
    fn init(cx: init::Context) -> (Shared, Local) {
        // q has 'static life-time so after the split and return of `init`
        // it will continue to exist and be allocated
        let (p, c) = cx.local.q.split();

        foo::spawn().unwrap();

        (Shared {}, Local { p, c })
    }

    #[idle(local = [c])]
    fn idle(c: idle::Context) -> ! {
        loop {
            // Lock-free access to the same underlying queue!
            if let Some(data) = c.local.c.dequeue() {
                hprintln!("received message: {}", data);

                // Run foo until data == 3
                if data == 3 {
                    debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator
                } else {
                    foo::spawn().unwrap();
                }
            }
        }
    }

    #[task(local = [p, state: u32 = 0], priority = 1)]
    async fn foo(c: foo::Context) {
        *c.local.state += 1;

        // Lock-free access to the same underlying queue!
        c.local.p.enqueue(*c.local.state).unwrap();
    }
}
```

Running this program produces the expected output.

```console
$ cargo xtask qemu --verbose --example static
received message: 1
received message: 2
received message: 3
```

## Inspecting generated code

`#[rtic::app]` is a procedural macro that produces support code. If for some reason you need to inspect the code generated by this macro you have two options:

You can inspect the file `rtic-expansion.rs` inside the `target` directory. This file contains the expansion of the `#[rtic::app]` item (not your whole program!) of the last built (via `cargo build` or `cargo check`) RTIC application. The expanded code is not pretty printed by default, so you'll want to run `rustfmt` on it before you read it.

```console
$ cargo build --example smallest --target thumbv7m-none-eabi
$ rustfmt target/rtic-expansion.rs
$ tail target/rtic-expansion.rs
```

```rust
#[doc = r" Implementation details"]
mod app {
    #[doc = r" Always include the device crate which contains the vector table"]
    use lm3s6965 as _;
    #[no_mangle]
    unsafe extern "C" fn main() -> ! {
        rtic::export::interrupt::disable();
        let mut core: rtic::export::Peripherals = core::mem::transmute(());
        core.SCB.scr.modify(|r| r | 1 << 1);
        rtic::export::interrupt::enable();
        loop {
            rtic::export::wfi()
        }
    }
}
```

Or, you can use the `cargo-expand` sub-command. This sub-command will expand all the macros, including the `#[rtic::app]` attribute, and modules in your crate and print the output to the console.

```console
# produces the same output as before
$ cargo expand --example smallest | tail
```

## The magic behind Monotonics

Internally, all monotonics use a Timer Queue, which is a priority queue with entries describing the time at which their respective Futures should complete.
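Conceptually, each entry in that queue pairs the instant at which a sleeping Future should be released with the waker that resumes its task. A minimal illustrative sketch of the idea (these are made-up types, not the actual `rtic-time` internals):

```rust
use core::task::Waker;

// Illustrative sketch only, not the actual rtic-time internals: each
// queue entry records when a sleeping future should be released and
// the waker used to resume the task awaiting it. The queue is kept
// ordered so the entry with the earliest `release_at` is serviced first.
struct QueueEntry<Instant: Ord> {
    release_at: Instant,
    waker: Waker,
}
```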
## Example

Assume two tasks `A` (with priority `p(A) = 2`) and `B` (with priority `p(B) = 4`), both accessing the shared resource `R`. The static ceiling of `R` is 4 (computed from `𝝅(R) = max(p(A) = 2, p(B) = 4) = 4`).

A graph representation of the example:

```mermaid
graph LR
    A["p(A) = 2"] --> R
    B["p(B) = 4"] --> R
    R["𝝅(R) = 4"]
```

## Implementing a Monotonic timer for scheduling

The `rtic-time` framework is flexible because it can use any timer which has compare-match and, optionally, overflow interrupts for scheduling. The single requirement to make a timer usable with RTIC is implementing the `rtic-time::Monotonic` trait.

For RTIC 2.0, we assume that the user has a time library, e.g. `fugit`, as the basis for all time-based operations when implementing `Monotonic`. These libraries make it much easier to correctly implement the `Monotonic` trait, allowing the use of almost any timer in the system for scheduling.

The trait documents the requirements for each method. There are reference implementations available in `rtic-monotonics` that can be used for inspiration.

- **Systick based**, runs at a fixed interrupt (tick) rate - with some overhead, but simple, and provides support for large time spans.
- **RP2040 Timer**, a "proper" implementation with support for waiting for long periods without interrupts. Clearly demonstrates how to use the `TimerQueue` to handle scheduling.
- **nRF52 timers**, implements monotonic & Timer Queue for the RTCs and normal timers in nRF52 devices.
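To give a feel for the shape of the trait, here is an abridged skeleton for a hypothetical 1 MHz hardware timer. The items shown follow the `rtic-time` 1.x `Monotonic` trait as far as listed, but this is a sketch rather than a complete or authoritative implementation; the trait also offers optional hooks (e.g. for enabling/disabling the timer) that are documented in `rtic-time`.

```rust
use rtic_time::Monotonic;

/// Hypothetical monotonic backed by a 1 MHz 32-bit hardware timer (sketch).
pub struct MyMono;

impl Monotonic for MyMono {
    // fugit's const-generic types make tick <-> time conversion explicit.
    type Instant = fugit::TimerInstantU32<1_000_000>;
    type Duration = fugit::TimerDurationU32<1_000_000>;

    const ZERO: Self::Instant = fugit::TimerInstantU32::<1_000_000>::from_ticks(0);

    fn now() -> Self::Instant {
        // Read the hardware counter register here.
        todo!()
    }

    fn set_compare(instant: Self::Instant) {
        // Program the compare-match register to fire at `instant`.
        todo!()
    }

    fn clear_compare_flag() {
        // Acknowledge the compare-match interrupt.
        todo!()
    }

    fn pend_interrupt() {
        // Software-pend the timer interrupt, e.g. through the NVIC.
        todo!()
    }
}
```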
## Contributing

Contributing new implementations of `Monotonic` can be done in multiple ways:

- Implement the trait behind a feature flag in `rtic-monotonics`, and create a PR for it to be included in the main RTIC repository. This way, the implementation is in-tree, RTIC can guarantee its correctness, and can update it in the case of a new release.
- Implement the changes in an external repository. Doing so will not have them included in `rtic-monotonics`, but may make it easier to do so in the future.

## The timer queue

The timer queue is implemented as a list-based priority queue, where the list nodes are statically allocated as part of the Future created when awaiting the monotonic. Thus, the timer queue is infallible at run-time (its size and allocation are determined at compile time).

Similarly to the channels implementation, the timer-queue implementation relies on a global Critical Section (CS) for race protection. For the examples, a CS implementation is provided by adding `--features test-critical-section` to the build options.
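For instance, running one of the examples with that feature enabled looks like this (the example name is a placeholder):

```console
$ cargo run --example <some-example> --features test-critical-section
```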
## RTIC vs. the world

RTIC aims to provide the lowest level of abstraction needed for developing robust and reliable embedded software. It provides a minimal set of required mechanisms for safe sharing of mutable resources among interrupts and asynchronously executing tasks. The scheduling primitives leverage the underlying hardware for unparalleled performance and predictability; in effect, RTIC provides, in Rust terms, a zero-cost abstraction for concurrent real-time programming.

## Comparison regarding safety and security

Comparing RTIC to a traditional Real-Time Operating System (RTOS) is hard. Firstly, a traditional RTOS typically comes with no guarantees regarding system safety; even the most hardened kernels, like the formally verified seL4 kernel, claim integrity, confidentiality, and availability only for the kernel itself (and, under additional assumptions, its configuration and environment). They even state:

> "An OS kernel, verified or not, does not automatically make a system secure. In fact, any system, no matter how secure, can be used in insecure ways." - seL4 FAQ

## Security by design

In the world of information security we commonly find:

- confidentiality, protecting the information from being exposed to an unauthorized party,
- integrity, referring to accuracy and completeness of data, and
- availability, referring to data being accessible to authorized users.

Obviously, a traditional OS can guarantee neither confidentiality nor integrity, as both require the security-critical code to be trusted. Regarding availability, this typically boils down to the usage of system resources. Any OS that allows for dynamic allocation of resources relies on the application correctly handling allocations/de-allocations and cases of allocation failure.

Thus their claim is correct: security is completely out of the hands of the OS; the best we can hope for is that it does not add further vulnerabilities.

RTIC, on the other hand, has your back. The declarative system-wide model gives you a static set of tasks and resources, with precise control over what data is shared and between which parties. Moreover, Rust as a programming language comes with strong properties regarding integrity (compile-time aliasing, mutability and lifetime guarantees, together with ensured data validity). Using RTIC, these properties propagate to the system-wide model, without interference from other running applications. The RTIC kernel is internally infallible, without any need for dynamically allocated data.

## RTIC vs. Embassy

### Differences

Embassy provides both Hardware Abstraction Layers and an executor/runtime, while RTIC aims to only provide an execution framework. For example, Embassy provides `embassy-stm32` (a HAL) and `embassy-executor` (an executor). On the other hand, RTIC provides the framework in the form of `rtic`, and the user is responsible for providing a PAC and HAL implementation (generally from the `stm32-rs` project).

Additionally, RTIC aims to provide exclusive access to resources at as low a level as possible, ideally guarded by some form of hardware protection. This allows for access to hardware without necessarily requiring locking mechanisms at the software level.

### Mixing use of Embassy and RTIC

Since most Embassy and RTIC libraries are runtime agnostic, many details from one project can be used in the other. For example, using `rtic-monotonics` in an `embassy-executor` powered project works, and using `embassy-sync` (though `rtic-sync` is recommended) in an RTIC project works.

## Awesome RTIC examples

See the rtic-rs/rtic/examples repository for complete examples. Pull-requests are welcome!

## RTIC the hardware accelerated real-time scheduler

SRP itself is compatible with both dynamic and static priority scheduling. For the implementation of RTIC we leverage the underlying hardware for accelerated static priority scheduling.

In the case of the ARM Cortex-M architecture, each interrupt vector entry `v[i]` is associated with a function pointer (`v[i].fn`), a static priority (`v[i].priority`), an enabled-bit (`v[i].enabled`) and a pending-bit (`v[i].pending`).

An interrupt `i` is scheduled (run) by the hardware under the conditions:

1. it is pended and enabled and has a priority higher than the (optional `BASEPRI`) register, and
2. it has the highest priority among interrupts meeting 1.

The first condition (1) can be seen as a filter allowing RTIC to take control over which tasks should be allowed to start (and which should be prevented from starting).

The SRP model for single-core static scheduling on the other hand states that a task should be scheduled (run) under the conditions:

1. it is requested to run and has a static priority higher than the current system ceiling (𝜫), and
2. it has the highest static priority among tasks meeting 1.

The similarities are striking, and it is not by chance/luck/coincidence. The hardware was cleverly designed with real-time scheduling in mind.

In order to map the SRP scheduling onto the hardware we need to take a closer look at the system ceiling (𝜫). Under SRP, 𝜫 is computed as the maximum priority ceiling of the currently held resources, and it will thus change dynamically during the system operation.

## Migrating from v1.0.x to v2.0.0

Migrating a project from RTIC `v1.0.x` to `v2.0.0` involves the following steps:

1. `v2.1.0` works on Rust Stable from 1.75 (recommended), while older versions require a nightly compiler via the use of `#![type_alias_impl_trait]`.
2. Migrating from the monotonics included in `v1.0.x` to `rtic-time` and `rtic-monotonics`, replacing `spawn_after` and `spawn_at`.
3. Software tasks are now required to be `async`, and using them correctly.
4. Understanding and using data types provided by `rtic-sync`.

For a detailed description of the changes, refer to the subchapters. If you wish to see a code example of the changes required, you can check out the full example migration page.

### TL;DR (Too Long; Didn't Read)

- Instead of `spawn_after` and `spawn_at`, you now use the `async` functions `delay` and `delay_until` (and related) with impls provided by `rtic-monotonics`.
- Software tasks must be `async fn`s now. Not returning from a task is allowed so long as there is an `await` in the task. You can still `lock` shared resources.
- Use `rtic_sync::arbiter::Arbiter` to `await` access to a shared resource, and `rtic_sync::channel::Channel` to communicate between tasks instead of `spawn`-ing new ones.

## Migrating to rtic-monotonics

In previous versions of rtic, monotonics were an integral, tightly coupled part of the `#[rtic::app]`. In this new version, `rtic-monotonics` provides them in a more decoupled way.

The `#[monotonic]` attribute is no longer used. Instead, you use a `create_X_token` from `rtic-monotonics`. An invocation of this macro returns an interrupt registration token, which can be used to construct an instance of your desired monotonic.

`spawn_after` and `spawn_at` are no longer available. Instead, you use the `async` functions `delay` and `delay_until` provided by implementations of the `rtic_time::Monotonic` trait, available through `rtic-monotonics`.

Check out the code example for an overview of the required changes. For more information on current monotonic implementations, see the `rtic-monotonics` documentation, and the examples.
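As a sketch of the new pattern, the app-module fragment below uses the Systick monotonic from `rtic-monotonics` 1.x; the clock frequency, task name, and `do_work()` are placeholders, not part of any real API.

```rust
use rtic_monotonics::systick::*;

#[init]
fn init(cx: init::Context) -> (Shared, Local) {
    // The interrupt registration token replaces the old `#[monotonic]` attribute.
    let systick_token = rtic_monotonics::create_systick_token!();

    // 12 MHz is a placeholder; use your actual system clock frequency.
    Systick::start(cx.core.SYST, 12_000_000, systick_token);

    periodic::spawn().ok();

    (Shared {}, Local {})
}

// Where v1.0.x used `periodic::spawn_after(1.secs())`, the task now
// re-schedules itself with the async `delay` function.
#[task]
async fn periodic(_: periodic::Context) {
    loop {
        do_work(); // placeholder for the task's actual work
        Systick::delay(1.secs()).await;
    }
}
```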
## Using async software tasks

There have been a few changes to software tasks. They are outlined below.

### Software tasks must now be async

All software tasks are now required to be `async`.

**Required changes.** All of the tasks in your project that do not bind to an interrupt must now be an `async fn`. For example:

```rust
#[task(
    local = [ some_resource ],
    shared = [ my_shared_resource ],
    priority = 2
)]
fn my_task(cx: my_task::Context) {
    cx.local.some_resource.do_trick();
    cx.shared.my_shared_resource.lock(|s| s.do_shared_thing());
}
```

becomes

```rust
#[task(
    local = [ some_resource ],
    shared = [ my_shared_resource ],
    priority = 2
)]
async fn my_task(cx: my_task::Context) {
    cx.local.some_resource.do_trick();
    cx.shared.my_shared_resource.lock(|s| s.do_shared_thing());
}
```

### Software tasks may now run forever

The new `async` software tasks are allowed to run forever, on one precondition: **there must be an `await` within the infinite loop of the task**. An example of such a task:

```rust
#[task(local = [ my_channel ])]
async fn my_task_that_runs_forever(cx: my_task_that_runs_forever::Context) {
    loop {
        let value = cx.local.my_channel.recv().await;
        do_something_with_value(value);
    }
}
```

### spawn_after and spawn_at have been removed

As discussed in the Migrating to rtic-monotonics chapter, `spawn_after` and `spawn_at` are no longer available.

## Using rtic-sync

`rtic-sync` provides primitives that can be used for message passing and resource sharing in `async` context. The important structs are:

- The `Arbiter`, which allows you to `await` access to a shared resource in `async` contexts without using `lock`.
- `Channel`, which allows you to communicate between tasks (both `async` and non-`async`).

For more information on these structs, see the `rtic-sync` docs.
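A small sketch of both primitives (capacities, values, and function names are arbitrary; the channel halves would normally be created in `init` and handed to tasks):

```rust
use rtic_sync::{
    arbiter::Arbiter,
    channel::{Receiver, Sender},
};

const CAPACITY: usize = 5;

// Typically created in `init` and the halves passed to different tasks:
// let (sender, receiver) = rtic_sync::make_channel!(u32, CAPACITY);

async fn producer(mut sender: Sender<'static, u32, CAPACITY>) {
    // `send` waits until there is room in the channel.
    let _ = sender.send(42).await;
}

async fn consumer(mut receiver: Receiver<'static, u32, CAPACITY>) {
    // `recv` waits until a value is available.
    while let Ok(value) = receiver.recv().await {
        let _ = value; // ... handle the message ...
    }
}

// The Arbiter grants exclusive access to a shared value without `lock`.
async fn bump(counter: &Arbiter<u32>) {
    let mut guard = counter.access().await;
    *guard += 1; // the guard dereferences to the protected value
}
```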
## A complete example of migration

Below you can find the code for the implementation of the `stm32f3_blinky` example for v1.0.x and for v2.0.0. Further down, a diff is displayed.

### v1.0.X

```rust
#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

use panic_rtt_target as _;
use rtic::app;
use rtt_target::{rprintln, rtt_init_print};
use stm32f3xx_hal::gpio::{Output, PushPull, PA5};
use stm32f3xx_hal::prelude::*;
use systick_monotonic::{fugit::Duration, Systick};

#[app(device = stm32f3xx_hal::pac, peripherals = true, dispatchers = [SPI1])]
mod app {
    use super::*;

    #[shared]
    struct Shared {}

    #[local]
    struct Local {
        led: PA5<Output<PushPull>>,
```