This commit is contained in:
Jorge Aparicio 2019-08-21 10:17:27 +02:00
parent 0e146f8d11
commit 07b2b4d830
43 changed files with 628 additions and 437 deletions

View file

@@ -31,9 +31,7 @@ A concurrency framework for building real time systems.
 - **Highly efficient memory usage**: All the tasks share a single call stack and
   there's no hard dependency on a dynamic memory allocator.
-- **All Cortex-M devices are supported**. The core features of RTFM are
-  supported on all Cortex-M devices. The timer queue is currently only supported
-  on ARMv7-M devices.
+- **All Cortex-M devices are fully supported**.
 - This task model is amenable to known WCET (Worst Case Execution Time) analysis
   and scheduling analysis techniques. (Though we haven't yet developed Rust

View file

@@ -4,7 +4,7 @@
 - [RTFM by example](./by-example.md)
 - [The `app` attribute](./by-example/app.md)
 - [Resources](./by-example/resources.md)
-- [Tasks](./by-example/tasks.md)
+- [Software tasks](./by-example/tasks.md)
 - [Timer queue](./by-example/timer-queue.md)
 - [Types, Send and Sync](./by-example/types-send-sync.md)
 - [Starting a new project](./by-example/new.md)
@@ -18,3 +18,5 @@
 - [Ceiling analysis](./internals/ceilings.md)
 - [Software tasks](./internals/tasks.md)
 - [Timer queue](./internals/timer-queue.md)
+- [Homogeneous multi-core support](./homogeneous.md)
+- [Heterogeneous multi-core support](./heterogeneous.md)

View file

@@ -28,22 +28,23 @@ not required to use the [`cortex_m_rt::entry`] attribute.
 Within the pseudo-module the `app` attribute expects to find an initialization
 function marked with the `init` attribute. This function must have signature
-`fn(init::Context) [-> init::LateResources]`.
+`fn(init::Context) [-> init::LateResources]` (the return type is not always
+required).

 This initialization function will be the first part of the application to run.
 The `init` function will run *with interrupts disabled* and has exclusive access
-to Cortex-M and device specific peripherals through the `core` and `device`
-variables fields of `init::Context`. Not all Cortex-M peripherals are available
-in `core` because the RTFM runtime takes ownership of some of them -- for more
-details see the [`rtfm::Peripherals`] struct.
+to Cortex-M and, optionally, device specific peripherals through the `core` and
+`device` fields of `init::Context`.

 `static mut` variables declared at the beginning of `init` will be transformed
 into `&'static mut` references that are safe to access.

 [`rtfm::Peripherals`]: ../../api/rtfm/struct.Peripherals.html

-The example below shows the types of the `core` and `device` variables and
-showcases safe access to a `static mut` variable.
+The example below shows the types of the `core` and `device` fields and
+showcases safe access to a `static mut` variable. The `device` field is only
+available when the `peripherals` argument is set to `true` (it defaults to
+`false`).

 ``` rust
 {{#include ../../../../examples/init.rs}}
@@ -64,7 +65,7 @@ signature `fn(idle::Context) -> !`.
 When present, the runtime will execute the `idle` task after `init`. Unlike
 `init`, `idle` will run *with interrupts enabled* and it's not allowed to return
-so it runs forever.
+so it must run forever.

 When no `idle` function is declared, the runtime sets the [SLEEPONEXIT] bit and
 then sends the microcontroller to sleep after running `init`.
@@ -84,21 +85,67 @@ The example below shows that `idle` runs after `init`.
 $ cargo run --example idle
 {{#include ../../../../ci/expected/idle.run}}```

-## `interrupt` / `exception`
+## Hardware tasks

-Just like you would do with the `cortex-m-rt` crate you can use the `interrupt`
-and `exception` attributes within the `app` pseudo-module to declare interrupt
-and exception handlers. In RTFM, we refer to interrupt and exception handlers as
-*hardware* tasks.
+To declare interrupt handlers the framework provides a `#[task]` attribute that
+can be attached to functions. This attribute takes a `binds` argument whose
+value is the name of the interrupt to which the handler will be bound; the
+function adorned with this attribute becomes the interrupt handler. Within the
+framework this kind of task is referred to as a *hardware* task, because it
+starts executing in reaction to a hardware event.
+
+The example below demonstrates the use of the `#[task]` attribute to declare an
+interrupt handler. As in the case of `#[init]` and `#[idle]`, local `static
+mut` variables are safe to use within a hardware task.

 ``` rust
-{{#include ../../../../examples/interrupt.rs}}
+{{#include ../../../../examples/hardware.rs}}
 ```

 ``` console
-$ cargo run --example interrupt
-{{#include ../../../../ci/expected/interrupt.run}}```
+$ cargo run --example hardware
+{{#include ../../../../ci/expected/hardware.run}}```

 So far all the RTFM applications we have seen look no different from the
-applications one can write using only the `cortex-m-rt` crate. In the next
-section we start introducing features unique to RTFM.
+applications one can write using only the `cortex-m-rt` crate. From this point
+we start introducing features unique to RTFM.
+
+## Priorities
+
+The static priority of each handler can be declared in the `task` attribute
+using the `priority` argument. Tasks can have priorities in the range `1..=(1 <<
+NVIC_PRIO_BITS)` where `NVIC_PRIO_BITS` is a constant defined in the `device`
+crate. When the `priority` argument is omitted the priority is assumed to be
+`1`. The `idle` task has a non-configurable static priority of `0`, the lowest
+priority.
+
+When several tasks are ready to be executed the one with the *highest* static
+priority will be executed first. Task prioritization can be observed in the
+following scenario: an interrupt signal arrives during the execution of a low
+priority task; the signal puts the higher priority task in the pending state.
+The difference in priority results in the higher priority task preempting the
+lower priority one: the execution of the lower priority task is suspended and
+the higher priority task is executed to completion. Once the higher priority
+task has terminated the lower priority task is resumed.
+
+The following example showcases the priority based scheduling of tasks.
+
+``` rust
+{{#include ../../../../examples/preempt.rs}}
+```
+
+``` console
+$ cargo run --example preempt
+{{#include ../../../../ci/expected/preempt.run}}```
+
+Note that the task `uart1` does *not* preempt task `uart2` because its priority
+is the *same* as `uart2`'s. However, once `uart2` terminates, the execution of
+task `uart1` is prioritized over `uart0`'s due to its higher priority. `uart0`
+is resumed only after `uart1` terminates.
+
+One more note about priorities: choosing a priority higher than what the device
+supports (that is `1 << NVIC_PRIO_BITS`) will result in a compile error. Due to
+limitations in the language the error message is currently far from helpful: it
+will say something along the lines of "evaluation of constant value failed" and
+the span of the error will *not* point to the problematic interrupt value --
+we are sorry about this!
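The dispatch rule described above (of the pending tasks, the one with the highest static priority runs first) can be sketched as a host-side simulation in plain Rust. This is only a model of the policy, not how the framework implements it (on hardware the NVIC provides this behavior for free); the task names mirror the hypothetical `uartX` handlers used in the text.

``` rust
use std::collections::BinaryHeap;

// A pending task: the derived `Ord` compares `priority` first, so the
// binary heap always yields the highest static priority task next.
#[derive(PartialEq, Eq, PartialOrd, Ord)]
struct Pending {
    priority: u8,
    name: &'static str,
}

// Run every pending task to completion, always picking the one with the
// highest static priority, and return the order in which they ran.
fn run_order(pending: Vec<Pending>) -> Vec<&'static str> {
    let mut heap: BinaryHeap<Pending> = pending.into_iter().collect();
    let mut order = Vec::new();
    while let Some(task) = heap.pop() {
        order.push(task.name);
    }
    order
}

fn main() {
    // Three interrupts become pending while the CPU is busy: the
    // priority-3 task runs first, then priority 2, then priority 1.
    let order = run_order(vec![
        Pending { priority: 1, name: "uart0" },
        Pending { priority: 3, name: "uart2" },
        Pending { priority: 2, name: "uart1" },
    ]);
    assert_eq!(order, ["uart2", "uart1", "uart0"]);
    println!("{:?}", order);
}
```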

View file

@@ -36,8 +36,7 @@ $ cargo add lm3s6965 --vers 0.1.3
 $ rm memory.x build.rs
 ```

-3. Add the `cortex-m-rtfm` crate as a dependency and, if you need it, enable the
-   `timer-queue` feature.
+3. Add the `cortex-m-rtfm` crate as a dependency.

 ``` console
 $ cargo add cortex-m-rtfm --allow-prerelease

View file

@@ -1,22 +1,27 @@
 ## Resources

-One of the limitations of the attributes provided by the `cortex-m-rt` crate is
-that sharing data (or peripherals) between interrupts, or between an interrupt
-and the `entry` function, requires a `cortex_m::interrupt::Mutex`, which
-*always* requires disabling *all* interrupts to access the data. Disabling all
-the interrupts is not always required for memory safety but the compiler doesn't
-have enough information to optimize the access to the shared data.
+The framework provides an abstraction to share data between any of the contexts
+we saw in the previous section (task handlers, `init` and `idle`): resources.

-The `app` attribute has a full view of the application thus it can optimize
-access to `static` variables. In RTFM we refer to the `static` variables
-declared inside the `app` pseudo-module as *resources*. To access a resource the
-context (`init`, `idle`, `interrupt` or `exception`) one must first declare the
-resource in the `resources` argument of its attribute.
+Resources are data visible only to functions declared within the `#[app]`
+pseudo-module. The framework gives the user complete control over which context
+can access which resource.

-In the example below two interrupt handlers access the same resource. No `Mutex`
-is required in this case because the two handlers run at the same priority and
-no preemption is possible. The `SHARED` resource can only be accessed by these
-two handlers.
+All resources are declared as a single `struct` within the `#[app]`
+pseudo-module. Each field in the structure corresponds to a different resource.
+Resources can optionally be given an initial value using the `#[init]`
+attribute. Resources that are not given an initial value are referred to as
+*late* resources and are covered in more detail in a follow-up section of this
+page.
+
+Each context (task handler, `init` or `idle`) must declare the resources it
+intends to access in its corresponding metadata attribute using the `resources`
+argument. This argument takes a list of resource names as its value. The listed
+resources are made available to the context under the `resources` field of the
+`Context` structure.
+
+The example application shown below contains two interrupt handlers that share
+access to a resource named `shared`.

 ``` rust
 {{#include ../../../../examples/resource.rs}}
@@ -26,40 +31,39 @@ two handlers.
 $ cargo run --example resource
 {{#include ../../../../ci/expected/resource.run}}```

-## Priorities
-
-The priority of each handler can be declared in the `interrupt` and `exception`
-attributes. It's not possible to set the priority in any other way because the
-runtime takes ownership of the `NVIC` peripheral thus it's also not possible to
-change the priority of a handler / task at runtime. Thanks to this restriction
-the framework has knowledge about the *static* priorities of all interrupt and
-exception handlers.
-
-Interrupts and exceptions can have priorities in the range `1..=(1 <<
-NVIC_PRIO_BITS)` where `NVIC_PRIO_BITS` is a constant defined in the `device`
-crate. The `idle` task has a priority of `0`, the lowest priority.
-
-Resources that are shared between handlers that run at different priorities
-require critical sections for memory safety. The framework ensures that critical
-sections are used but *only where required*: for example, no critical section is
-required by the highest priority handler that has access to the resource.
-
-The critical section API provided by the RTFM framework (see [`Mutex`]) is
-based on dynamic priorities rather than on disabling interrupts. The consequence
-is that these critical sections will prevent *some* handlers, including all the
-ones that contend for the resource, from *starting* but will let higher priority
-handlers, that don't contend for the resource, run.
+Note that the `shared` resource cannot be accessed from `idle`. Attempting to do
+so results in a compile error.
+
+## `lock`
+
+In the presence of preemption critical sections are required to mutate shared
+data in a data race free manner. As the framework has complete knowledge of the
+priorities of tasks and which tasks can access which resources it enforces that
+critical sections are used where required for memory safety.
+
+Where a critical section is required the framework hands out a resource proxy
+instead of a reference. This resource proxy is a structure that implements the
+[`Mutex`] trait. The only method on this trait, [`lock`], runs its closure
+argument in a critical section.

 [`Mutex`]: ../../api/rtfm/trait.Mutex.html
+[`lock`]: ../../api/rtfm/trait.Mutex.html#method.lock
+
+The critical section created by the `lock` API is based on dynamic priorities:
+it temporarily raises the dynamic priority of the context to a *ceiling*
+priority that prevents other tasks from preempting the critical section. This
+synchronization protocol is known as the [Immediate Ceiling Priority Protocol
+(ICPP)][icpp].
+
+[icpp]: https://en.wikipedia.org/wiki/Priority_ceiling_protocol
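The ceiling rule that ICPP relies on can be sketched in a few lines of plain Rust. This is a model of the protocol only (the real framework computes ceilings at compile time, and the function names here are made up for illustration):

``` rust
// The ceiling of a resource is the highest static priority among the
// tasks that contend for it.
fn ceiling(contender_priorities: &[u8]) -> u8 {
    contender_priorities.iter().copied().max().unwrap_or(0)
}

// While a `lock` is held the holder's dynamic priority equals the
// ceiling, so only tasks with a strictly higher static priority can
// preempt the critical section.
fn can_preempt_critical_section(task_priority: u8, resource_ceiling: u8) -> bool {
    task_priority > resource_ceiling
}

fn main() {
    // `shared` is used by the priority-1 and priority-2 handlers only.
    let c = ceiling(&[1, 2]);
    assert_eq!(c, 2);
    // The priority-2 contender cannot preempt the critical section...
    assert!(!can_preempt_critical_section(2, c));
    // ...but the priority-3 handler, which never touches `shared`, can.
    assert!(can_preempt_critical_section(3, c));
}
```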
 In the example below we have three interrupt handlers with priorities ranging
 from one to three. The two handlers with the lower priorities contend for the
-`SHARED` resource. The lowest priority handler needs to [`lock`] the
-`SHARED` resource to access its data, whereas the mid priority handler can
-directly access its data. The highest priority handler is free to preempt
-the critical section created by the lowest priority handler.
+`shared` resource. The lowest priority handler needs to `lock` the
+`shared` resource to access its data, whereas the mid priority handler can
+directly access its data. The highest priority handler, which cannot access
+the `shared` resource, is free to preempt the critical section created by the
+lowest priority handler.

-[`lock`]: ../../api/rtfm/trait.Mutex.html#method.lock
-
 ``` rust
 {{#include ../../../../examples/lock.rs}}
@@ -69,27 +73,17 @@ the critical section created by the lowest priority handler.
 $ cargo run --example lock
 {{#include ../../../../ci/expected/lock.run}}```

-One more note about priorities: choosing a priority higher than what the device
-supports (that is `1 << NVIC_PRIO_BITS`) will result in a compile error. Due to
-limitations in the language the error message is currently far from helpful: it
-will say something along the lines of "evaluation of constant value failed" and
-the span of the error will *not* point out to the problematic interrupt value --
-we are sorry about this!
-
 ## Late resources

-Unlike normal `static` variables, which need to be assigned an initial value
-when declared, resources can be initialized at runtime. We refer to these
-runtime initialized resources as *late resources*. Late resources are useful for
-*moving* (as in transferring ownership) peripherals initialized in `init` into
-interrupt and exception handlers.
+Late resources are resources that are not given an initial value at compile
+time using the `#[init]` attribute but instead are initialized at runtime using
+the `init::LateResources` values returned by the `init` function.

-Late resources are declared like normal resources but that are given an initial
-value of `()` (the unit value). `init` must return the initial values of all
-late resources packed in a `struct` of type `init::LateResources`.
+Late resources are useful for *moving* (as in transferring the ownership of)
+peripherals initialized in `init` into interrupt handlers.

-The example below uses late resources to stablish a lockless, one-way channel
-between the `UART0` interrupt handler and the `idle` function. A single producer
+The example below uses late resources to establish a lockless, one-way channel
+between the `UART0` interrupt handler and the `idle` task. A single producer
 single consumer [`Queue`] is used as the channel. The queue is split into
 consumer and producer end points in `init` and then each end point is stored
 in a different resource; `UART0` owns the producer resource and `idle` owns
@@ -105,22 +99,32 @@ the consumer resource.
 $ cargo run --example late
 {{#include ../../../../ci/expected/late.run}}```
-## `static` resources
+## Only shared access

-`static` variables can also be used as resources. Tasks can only get `&`
-(shared) references to these resources but locks are never required to access
-their data. You can think of `static` resources as plain `static` variables that
-can be initialized at runtime and have better scoping rules: you can control
-which tasks can access the variable, instead of the variable being visible to
-all the functions in the scope it was declared in.
+By default the framework assumes that all tasks require exclusive access
+(`&mut-`) to resources but it is possible to specify that a task only requires
+shared access (`&-`) to a resource using the `&resource_name` syntax in the
+`resources` list.

-In the example below a key is loaded (or created) at runtime and then used from
-two tasks that run at different priorities.
+The advantage of specifying shared access (`&-`) to a resource is that no locks
+are required to access the resource even if the resource is contended by several
+tasks running at different priorities. The downside is that the task only gets a
+shared reference (`&-`) to the resource, limiting the operations it can perform
+on it, but where a shared reference is enough this approach reduces the number
+of required locks.
+
+Note that in this release of RTFM it is not possible to request both exclusive
+access (`&mut-`) and shared access (`&-`) to the *same* resource from different
+tasks. Attempting to do so will result in a compile error.
+
+In the example below a key (e.g. a cryptographic key) is loaded (or created) at
+runtime and then used from two tasks that run at different priorities without
+any kind of lock.

 ``` rust
-{{#include ../../../../examples/static.rs}}
+{{#include ../../../../examples/only-shared-access.rs}}
 ```

 ``` console
-$ cargo run --example static
-{{#include ../../../../ci/expected/static.run}}```
+$ cargo run --example only-shared-access
+{{#include ../../../../ci/expected/only-shared-access.run}}```

View file

@@ -1,22 +1,23 @@
 # Software tasks

-RTFM treats interrupt and exception handlers as *hardware* tasks. Hardware tasks
-are invoked by the hardware in response to events, like pressing a button. RTFM
-also supports *software* tasks which can be spawned by the software from any
-execution context.
+In addition to hardware tasks, which are invoked by the hardware in response to
+hardware events, RTFM also supports *software* tasks which can be spawned by the
+application from any execution context.

-Software tasks can also be assigned priorities and are dispatched from interrupt
-handlers. RTFM requires that free interrupts are declared in an `extern` block
-when using software tasks; these free interrupts will be used to dispatch the
-software tasks. An advantage of software tasks over hardware tasks is that many
-tasks can be mapped to a single interrupt handler.
+Software tasks can also be assigned priorities and, under the hood, are
+dispatched from interrupt handlers. RTFM requires that free interrupts are
+declared in an `extern` block when using software tasks; some of these free
+interrupts will be used to dispatch the software tasks. An advantage of software
+tasks over hardware tasks is that many tasks can be mapped to a single interrupt
+handler.
-Software tasks are declared by applying the `task` attribute to functions. To be
-able to spawn a software task the name of the task must appear in the `spawn`
-argument of the context attribute (`init`, `idle`, `interrupt`, etc.).
+Software tasks are also declared using the `task` attribute but the `binds`
+argument must be omitted. To be able to spawn a software task from a context
+the name of the task must appear in the `spawn` argument of the context
+attribute (`init`, `idle`, `task`, etc.).

 The example below showcases three software tasks that run at 2 different
-priorities. The three tasks map to 2 interrupts handlers.
+priorities. The three software tasks are mapped to 2 interrupt handlers.

 ``` rust
 {{#include ../../../../examples/task.rs}}
@@ -44,15 +45,17 @@ $ cargo run --example message
 ## Capacity

-Task dispatchers do *not* use any dynamic memory allocation. The memory required
-to store messages is statically reserved. The framework will reserve enough
-space for every context to be able to spawn each task at most once. This is a
-sensible default but the "inbox" capacity of each task can be controlled using
-the `capacity` argument of the `task` attribute.
+RTFM does *not* perform any form of heap-based memory allocation. The memory
+required to store messages is statically reserved. By default the framework
+minimizes the memory footprint of the application so each task has a message
+"capacity" of 1: meaning that at most one message can be posted to the task
+before it gets a chance to run. This default can be overridden for each task
+using the `capacity` argument. This argument takes a positive integer that
+indicates how many messages the task message buffer can hold.

 The example below sets the capacity of the software task `foo` to 4. If the
 capacity is not specified then the second `spawn.foo` call in `UART0` would
-fail.
+fail (panic).

 ``` rust
 {{#include ../../../../examples/capacity.rs}}
@@ -61,3 +64,54 @@ fail (panic).
 ``` console
 $ cargo run --example capacity
 {{#include ../../../../ci/expected/capacity.run}}```
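The capacity semantics can be sketched with a host-side fixed-size "inbox" in plain Rust. The `Inbox` type and its `post` method are hypothetical stand-ins for the statically reserved message buffer and the `spawn` call; like `spawn`, posting fails instead of allocating when the buffer is full.

``` rust
use std::collections::VecDeque;

// A fixed-capacity message buffer: the capacity is reserved up front and
// posting fails (no allocation, no blocking) when the buffer is full.
struct Inbox<T> {
    buf: VecDeque<T>,
    capacity: usize,
}

impl<T> Inbox<T> {
    fn new(capacity: usize) -> Self {
        Inbox { buf: VecDeque::with_capacity(capacity), capacity }
    }

    // Mirrors `spawn.foo(msg)`: the message is handed back on failure so
    // the caller can decide to `unwrap`, ignore the error, etc.
    fn post(&mut self, msg: T) -> Result<(), T> {
        if self.buf.len() == self.capacity {
            Err(msg)
        } else {
            self.buf.push_back(msg);
            Ok(())
        }
    }
}

fn main() {
    // Default capacity of 1: a second post before the task runs fails.
    let mut inbox = Inbox::new(1);
    assert!(inbox.post(10).is_ok());
    assert!(inbox.post(20).is_err());

    // `#[task(capacity = 4)]` style: four pending messages fit, not five.
    let mut inbox = Inbox::new(4);
    for i in 0..4 {
        assert!(inbox.post(i).is_ok());
    }
    assert!(inbox.post(4).is_err());
}
```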
+
+## Error handling
+
+The `spawn` API returns the `Err` variant when there's no space to send the
+message. In most scenarios spawning errors are handled in one of two ways:
+
+- Panicking, using `unwrap`, `expect`, etc. This approach is used to catch a
+  programmer error (i.e. a bug) of selecting a capacity that was too small. When
+  this panic is encountered during testing choosing a bigger capacity and
+  recompiling the program may fix the issue but sometimes it's necessary to dig
+  deeper and perform a timing analysis of the application to check if the
+  platform can deal with peak payload or if the processor needs to be replaced
+  with a faster one.
+
+- Ignoring the result. In soft real time and non real time applications it may
+  be OK to occasionally lose data or fail to respond to some events during event
+  bursts. In those scenarios silently letting a `spawn` call fail may be
+  acceptable.
+
+It should be noted that retrying a `spawn` call is usually the wrong approach as
+this operation will likely never succeed in practice. Because there are only
+context switches towards *higher* priority tasks, retrying the `spawn` call of a
+lower priority task will never let the scheduler dispatch said task, meaning that
+its message buffer will never be emptied. This situation is depicted in the
+following snippet:
+
+``` rust
+#[rtfm::app(..)]
+const APP: () = {
+    #[init(spawn = [foo, bar])]
+    fn init(cx: init::Context) {
+        cx.spawn.foo().unwrap();
+        cx.spawn.bar().unwrap();
+    }
+
+    #[task(priority = 2, spawn = [bar])]
+    fn foo(cx: foo::Context) {
+        // ..
+
+        // the program will get stuck here
+        while cx.spawn.bar(payload).is_err() {
+            // retry the spawn call if it failed
+        }
+    }
+
+    #[task(priority = 1)]
+    fn bar(cx: bar::Context, payload: i32) {
+        // ..
+    }
+};
+```

View file

@@ -1,37 +1,43 @@
 # Timer queue

-When the `timer-queue` feature is enabled the RTFM framework includes a *global
-timer queue* that applications can use to *schedule* software tasks to run at
-some time in the future.
-
-> **NOTE**: The timer-queue feature can't be enabled when the target is
-> `thumbv6m-none-eabi` because there's no timer queue support for ARMv6-M. This
-> may change in the future.
-
-> **NOTE**: When the `timer-queue` feature is enabled you will *not* be able to
-> use the `SysTick` exception as a hardware task because the runtime uses it to
-> implement the global timer queue.
-
-To be able to schedule a software task the name of the task must appear in the
-`schedule` argument of the context attribute. When scheduling a task the
-[`Instant`] at which the task should be executed must be passed as the first
-argument of the `schedule` invocation.
-
-[`Instant`]: ../../api/rtfm/struct.Instant.html
-
-The RTFM runtime includes a monotonic, non-decreasing, 32-bit timer which can be
-queried using the `Instant::now` constructor. A [`Duration`] can be added to
-`Instant::now()` to obtain an `Instant` into the future. The monotonic timer is
-disabled while `init` runs so `Instant::now()` always returns the value
-`Instant(0 /* clock cycles */)`; the timer is enabled right before the
-interrupts are re-enabled and `idle` is executed.
-
-[`Duration`]: ../../api/rtfm/struct.Duration.html
+In contrast with the `spawn` API, which immediately spawns a software task onto
+the scheduler, the `schedule` API can be used to schedule a task to run some
+time in the future.
+
+To use the `schedule` API a monotonic timer must first be defined using the
+`monotonic` argument of the `#[app]` attribute. This argument takes a path to a
+type that implements the [`Monotonic`] trait. The associated type, `Instant`, of
+this trait represents a timestamp in arbitrary units and it's used extensively
+in the `schedule` API -- it is suggested to model this type after [the one in
+the standard library][std-instant].
+
+Although not shown in the trait definition (due to limitations in the trait /
+type system) the subtraction of two `Instant`s should return some `Duration`
+type (see [`core::time::Duration`]) and this `Duration` type must implement the
+`TryInto<u32>` trait. The implementation of this trait must convert the
+`Duration` value, which uses some arbitrary unit of time, into the "system timer
+(SYST) clock cycles" time unit. The result of the conversion must be a 32-bit
+integer. If the result of the conversion doesn't fit in a 32-bit number then the
+operation must return an error (of any error type).
+
+[`Monotonic`]: ../../api/rtfm/trait.Monotonic.html
+[std-instant]: https://doc.rust-lang.org/std/time/struct.Instant.html
+[`core::time::Duration`]: https://doc.rust-lang.org/core/time/struct.Duration.html
+
+For ARMv7+ targets the `rtfm` crate provides a `Monotonic` implementation based
+on the built-in CYCle CouNTer (CYCCNT). Note that this is a 32-bit timer clocked
+at the frequency of the CPU and as such it is not suitable for tracking time
+spans in the order of seconds.
+
+To be able to schedule a software task from a context the name of the task must
+first appear in the `schedule` argument of the context attribute. When
+scheduling a task the (user-defined) `Instant` at which the task should be
+executed must be passed as the first argument of the `schedule` invocation.
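The `TryInto<u32>` requirement described above can be sketched in plain Rust. The `Duration` type below is hypothetical (whoever implements `Monotonic` chooses the real type and unit; here it already counts clock cycles, so the conversion is just an overflow-checked narrowing):

``` rust
use std::convert::TryFrom;

// A hypothetical `Duration` counted in system-timer clock cycles.
struct Duration {
    cycles: u64,
}

// `schedule` needs the duration as a 32-bit cycle count; the conversion
// must fail (rather than silently wrap) when the value doesn't fit.
// Implementing `TryFrom` gives us `TryInto<u32>` for free.
impl TryFrom<Duration> for u32 {
    type Error = &'static str;

    fn try_from(d: Duration) -> Result<u32, Self::Error> {
        u32::try_from(d.cycles).map_err(|_| "duration exceeds 2^32 - 1 cycles")
    }
}

fn main() {
    // 8 million cycles fits comfortably in 32 bits...
    assert_eq!(u32::try_from(Duration { cycles: 8_000_000 }), Ok(8_000_000));
    // ...but 2^33 cycles does not, so the conversion errors out.
    assert!(u32::try_from(Duration { cycles: 1u64 << 33 }).is_err());
}
```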
 The example below schedules two tasks from `init`: `foo` and `bar`. `foo` is
 scheduled to run 8 million clock cycles in the future. Next, `bar` is scheduled
-to run 4 million clock cycles in the future. `bar` runs before `foo` since it
-was scheduled to run first.
+to run 4 million clock cycles in the future. Thus `bar` runs before `foo` since
+it was scheduled to run first.

 > **IMPORTANT**: The examples that use the `schedule` API or the `Instant`
 > abstraction will **not** properly work on QEMU because the Cortex-M cycle
@@ -41,12 +47,19 @@ was scheduled to run first.
 {{#include ../../../../examples/schedule.rs}}
 ```

-Running the program on real hardware produces the following output in the console:
+Running the program on real hardware produces the following output in the
+console:

 ``` text
 {{#include ../../../../ci/expected/schedule.run}}
 ```

+When the `schedule` API is being used the runtime internally uses the `SysTick`
+interrupt handler and the system timer peripheral (`SYST`) so neither can be
+used by the application. This is accomplished by changing the type of
+`init::Context.core` from `cortex_m::Peripherals` to `rtfm::Peripherals`. The
+latter structure contains all the fields of the former minus the `SYST` one.
+
 ## Periodic tasks

 Software tasks have access to the `Instant` at which they were scheduled to run
@@ -80,9 +93,10 @@ the task. Depending on the priority of the task and the load of the system the
 What do you think will be the value of `scheduled` for software tasks that are
 *spawned* instead of scheduled? The answer is that spawned tasks inherit the
 *baseline* time of the context that spawned them. The baseline of hardware tasks
-is `start`, the baseline of software tasks is `scheduled` and the baseline of
-`init` is `start = Instant(0)`. `idle` doesn't really have a baseline but tasks
-spawned from it will use `Instant::now()` as their baseline time.
+is their `start` time, the baseline of software tasks is their `scheduled` time
+and the baseline of `init` is the system start time or time zero
+(`Instant::zero()`). `idle` doesn't really have a baseline but tasks spawned
+from it will use `Instant::now()` as their baseline time.

 The example below showcases the different meanings of the *baseline*.

View file

@ -2,10 +2,21 @@
## Generics

Resources may appear in contexts as resource proxies or as unique references
(`&mut-`) depending on the priority of the task. Because the same resource may
appear as *different* types in different contexts one cannot refactor a common
operation that uses resources into a plain function; however, such a
refactoring is possible using *generics*.

All resource proxies implement the `rtfm::Mutex` trait. On the other hand,
unique references (`&mut-`) do *not* implement this trait (due to limitations
in the trait system) but one can wrap these references in the
[`rtfm::Exclusive`] newtype which does implement the `Mutex` trait. With the
help of this newtype one can write a generic function that operates on generic
resources and call it from different tasks to perform some operation on the
same set of resources. Here's one such example:
[`rtfm::Exclusive`]: ../../api/rtfm/struct.Exclusive.html
``` rust
{{#include ../../../../examples/generics.rs}}
@ -15,17 +26,15 @@ be called from different tasks. Here's one such example:
$ cargo run --example generics
{{#include ../../../../ci/expected/generics.run}}```
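To see why the `Mutex` trait makes this possible, here is a host-side sketch using a *simplified* version of the trait. The real `rtfm::Mutex` and `rtfm::Exclusive` live in the `rtfm` crate; the signatures below are an approximation for illustration only:

``` rust
// Simplified stand-in for the `rtfm::Mutex` trait
trait Mutex {
    type T;
    fn lock<R>(&mut self, f: impl FnOnce(&mut Self::T) -> R) -> R;
}

// Stand-in for `rtfm::Exclusive`: a newtype over a unique reference that
// "locks" by simply handing out the reference (no critical section needed)
struct Exclusive<'a, T>(&'a mut T);

impl<'a, T> Mutex for Exclusive<'a, T> {
    type T = T;

    fn lock<R>(&mut self, f: impl FnOnce(&mut T) -> R) -> R {
        f(&mut *self.0)
    }
}

// A generic operation that works with any resource that implements `Mutex`,
// regardless of whether the caller's context requires a critical section
fn advance(shared: &mut impl Mutex<T = u64>) -> u64 {
    shared.lock(|v| {
        *v += 1;
        *v
    })
}
```

A task holding a resource proxy and a task holding a unique reference (wrapped in `Exclusive`) can both call `advance`.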
Using generics also lets you change the static priorities of tasks during
development without having to rewrite a bunch of code every time.
## Conditional compilation

You can use conditional compilation (`#[cfg]`) on resources (the fields of
`struct Resources`) and tasks (the `fn` items). The effect of using `#[cfg]`
attributes is that the resource / task will *not* be available through the
corresponding `Context` `struct` if the condition doesn't hold.

The example below logs a message whenever the `foo` task is spawned, but only if
the program has been compiled using the `dev` profile.
@ -34,6 +43,12 @@ the program has been compiled using the `dev` profile.
{{#include ../../../../examples/cfg.rs}}
```
``` console
$ cargo run --example cfg --release
$ cargo run --example cfg
{{#include ../../../../ci/expected/cfg.run}}```
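The mechanism can be sketched in plain Rust: an item annotated with `#[cfg]` is simply absent when the condition doesn't hold, so every use site must be gated by the same condition (the function names here are illustrative, not part of RTFM):

``` rust
#[cfg(debug_assertions)] // true for `dev` builds, false for `--release`
fn log(msg: &str) -> usize {
    msg.len()
}

fn on_spawn() -> usize {
    let mut logged = 0;

    // the use site carries the same `cfg` condition as the item itself
    #[cfg(debug_assertions)]
    {
        logged = log("foo has been spawned");
    }

    logged
}
```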
## Running tasks from RAM

The main goal of moving the specification of RTFM applications to attributes in
@ -70,25 +85,13 @@ One can look at the output of `cargo-nm` to confirm that `bar` ended in RAM
``` console
$ cargo nm --example ramfunc --release | grep ' foo::'
{{#include ../../../../ci/expected/ramfunc.grep.foo}}
```
``` console
$ cargo nm --example ramfunc --release | grep ' bar::'
{{#include ../../../../ci/expected/ramfunc.grep.bar}}
```
## Indirection for faster message passing
@ -100,10 +103,10 @@ instead of sending the buffer by value, one can send an owning pointer into the
buffer.

One can use a global allocator to achieve indirection (`alloc::Box`,
`alloc::Rc`, etc.), which requires using the nightly channel as of Rust v1.37.0,
or one can use a statically allocated memory pool like [`heapless::Pool`].

[`heapless::Pool`]: https://docs.rs/heapless/0.5.0/heapless/pool/index.html

Here's an example where `heapless::Pool` is used to "box" buffers of 128 bytes.
@ -111,7 +114,7 @@ Here's an example where `heapless::Pool` is used to "box" buffers of 128 bytes.
{{#include ../../../../examples/pool.rs}}
```

``` console
$ cargo run --example pool
{{#include ../../../../ci/expected/pool.run}}```
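The gain from indirection can be sketched on the host using `alloc::Box` in place of `heapless::pool::Box` (illustrative only; on an embedded target a static memory pool replaces the global allocator):

``` rust
// Instead of moving a large buffer by value through the message queue, move a
// small owning pointer to it.
fn producer() -> Box<[u8; 128]> {
    let mut buf = Box::new([0u8; 128]);
    buf[0] = 42; // fill the buffer in place
    buf // moving the Box copies one pointer, not 128 bytes
}

fn consumer(buf: Box<[u8; 128]>) -> u8 {
    buf[0]
    // dropping the Box returns the buffer to the allocator / pool
}
```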
## Inspecting the expanded code
@ -131,33 +134,18 @@ $ cargo build --example foo
$ rustfmt target/rtfm-expansion.rs
$ tail target/rtfm-expansion.rs
```

``` rust
#[doc = r" Implementation details"]
const APP: () = {
    use lm3s6965 as _;

    #[no_mangle]
    unsafe extern "C" fn main() -> ! {
        rtfm::export::interrupt::disable();

        let mut core: rtfm::export::Peripherals = core::mem::transmute(());

        core.SCB.scr.modify(|r| r | 1 << 1);

        rtfm::export::interrupt::enable();

        loop {
@ -175,5 +163,5 @@ crate and print the output to the console.
``` console
$ # produces the same output as before
$ cargo expand --example smallest | tail
```
@ -1,8 +1,8 @@
# Types, Send and Sync

Every function within the `APP` pseudo-module has a `Context` structure as its
first parameter. All the fields of these structures have predictable,
non-anonymous types so you can write plain functions that take them as
arguments.

The API reference specifies how these types are generated from the input. You
can also generate documentation for your binary crate (`cargo doc --bin <name>`);
@ -20,8 +20,8 @@ The example below shows the different types generated by the `app` attribute.
[`Send`] is a marker trait for "types that can be transferred across thread
boundaries", according to its definition in `core`. In the context of RTFM the
`Send` trait is only required where it's possible to transfer a value between
tasks that run at *different* priorities. This occurs in a few places: in
message passing, in shared resources and in the initialization of late
resources.

[`Send`]: https://doc.rust-lang.org/core/marker/trait.Send.html
@ -30,7 +30,7 @@ The `app` attribute will enforce that `Send` is implemented where required so
you don't need to worry much about it. It's more important to know where you do
*not* need the `Send` trait: on types that are transferred between tasks that
run at the *same* priority. This occurs in two places: in message passing and in
shared resources.

The example below shows where a type that doesn't implement `Send` can be used.
@ -39,9 +39,11 @@ The example below shows where a type that doesn't implement `Send` can be used.
```

It's important to note that late initialization of resources is effectively a
send operation where the initial value is sent from the background context,
which has the lowest priority of `0`, to a task, which will run at a priority
greater than or equal to `1`. Thus all late resources need to implement the
`Send` trait, except for those exclusively accessed by `idle`, which runs at a
priority of `0`.
Sharing a resource with `init` can be used to implement late initialization, see
example below. For that reason, resources shared with `init` must also implement
@ -56,14 +58,14 @@ the `Send` trait.
Similarly, [`Sync`] is a marker trait for "types for which it is safe to share
references between threads", according to its definition in `core`. In the
context of RTFM the `Sync` trait is only required where it's possible for two,
or more, tasks that run at different priorities to get a shared reference
(`&-`) to the same resource. This only occurs with shared access (`&-`)
resources.

[`Sync`]: https://doc.rust-lang.org/core/marker/trait.Sync.html

The `app` attribute will enforce that `Sync` is implemented where required but
it's important to know where the `Sync` bound is not required: shared access
(`&-`) resources contended by tasks that run at the *same* priority.

The example below shows where a type that doesn't implement `Sync` can be used.
@ -0,0 +1,6 @@
# Heterogeneous multi-core support
This section covers the *experimental* heterogeneous multi-core support provided
by RTFM behind the `heterogeneous` Cargo feature.
**Content coming soon**
@ -0,0 +1,6 @@
# Homogeneous multi-core support
This section covers the *experimental* homogeneous multi-core support provided
by RTFM behind the `homogeneous` Cargo feature.
**Content coming soon**
@ -21,7 +21,7 @@ This makes it impossible for the user code to refer to these static variables.
Access to the resources is then given to each task using a `Resources` struct
whose fields correspond to the resources the task has access to. There's one
such struct per task and the `Resources` struct is initialized with either a
unique reference (`&mut-`) to the static variables or with a resource proxy (see
section on [critical sections](critical-sections.html)).

The code below is an example of the kind of source level transformation that
@ -16,61 +16,65 @@ that has a logical priority of `0` whereas `init` is completely omitted from the
analysis -- the reason for that is that `init` never uses (or needs) critical
sections to access static variables.

In the previous section we showed that a shared resource may appear as a unique
reference (`&mut-`) or behind a proxy depending on the task that has access to
it. Which version is presented to the task depends on the task priority and the
resource ceiling. If the task priority is the same as the resource ceiling then
the task gets a unique reference (`&mut-`) to the resource memory, otherwise the
task gets a proxy -- this also applies to `idle`. `init` is special: it always
gets a unique reference (`&mut-`) to resources.
An example to illustrate the ceiling analysis:
``` rust
#[rtfm::app(device = ..)]
const APP: () = {
    struct Resources {
        // accessed by `foo` (prio = 1) and `bar` (prio = 2)
        // -> CEILING = 2
        #[init(0)]
        x: u64,

        // accessed by `idle` (prio = 0)
        // -> CEILING = 0
        #[init(0)]
        y: u64,
    }

    #[init(resources = [x])]
    fn init(c: init::Context) {
        // unique reference because this is `init`
        let x: &mut u64 = c.resources.x;

        // unique reference because this is `init`
        let y: &mut u64 = c.resources.y;

        // ..
    }

    // PRIORITY = 0
    #[idle(resources = [y])]
    fn idle(c: idle::Context) -> ! {
        // unique reference because priority (0) == resource ceiling (0)
        let y: &'static mut u64 = c.resources.y;

        loop {
            // ..
        }
    }

    #[interrupt(binds = UART0, priority = 1, resources = [x])]
    fn foo(c: foo::Context) {
        // resource proxy because task priority (1) < resource ceiling (2)
        let x: resources::x = c.resources.x;

        // ..
    }

    #[interrupt(binds = UART1, priority = 2, resources = [x])]
    fn bar(c: bar::Context) {
        // unique reference because task priority (2) == resource ceiling (2)
        let x: &mut u64 = c.resources.x;

        // ..
    }
@ -1,12 +1,12 @@
# Critical sections

When a resource (static variable) is shared between two, or more, tasks that run
at different priorities some form of mutual exclusion is required to mutate the
memory in a data race free manner. In RTFM we use priority-based critical
sections to guarantee mutual exclusion (see the [Immediate Ceiling Priority
Protocol][icpp]).

[icpp]: https://en.wikipedia.org/wiki/Priority_ceiling_protocol

The critical section consists of temporarily raising the *dynamic* priority of
the task. While a task is within this critical section all the other tasks that
@ -25,7 +25,7 @@ a data race the *lower priority* task must use a critical section when it needs
to modify the shared memory. On the other hand, the higher priority task can
directly modify the shared memory because it can't be preempted by the lower
priority task. To enforce the use of a critical section on the lower priority
task we give it a *resource proxy*, whereas we give a unique reference
(`&mut-`) to the higher priority task.

The example below shows the different types handed out to each task:
@ -33,12 +33,15 @@ The example below shows the different types handed out to each task:
``` rust
#[rtfm::app(device = ..)]
const APP: () = {
    struct Resources {
        #[init(0)]
        x: u64,
    }

    #[interrupt(binds = UART0, priority = 1, resources = [x])]
    fn foo(c: foo::Context) {
        // resource proxy
        let mut x: resources::x = c.resources.x;

        x.lock(|x: &mut u64| {
            // critical section
@ -46,9 +49,9 @@ const APP: () = {
        });
    }

    #[interrupt(binds = UART1, priority = 2, resources = [x])]
    fn bar(c: bar::Context) {
        let mut x: &mut u64 = c.resources.x;

        *x += 1;
    }
@ -69,14 +72,14 @@ fn bar(c: bar::Context) {
}

pub mod resources {
    pub struct x {
        // ..
    }
}

pub mod foo {
    pub struct Resources {
        pub x: resources::x,
    }

    pub struct Context {
@ -87,7 +90,7 @@ pub mod foo {
pub mod bar {
    pub struct Resources<'a> {
        pub x: &'a mut u64,
    }

    pub struct Context {
@ -97,9 +100,9 @@ pub mod bar {
}

const APP: () = {
    static mut x: u64 = 0;

    impl rtfm::Mutex for resources::x {
        type T = u64;

        fn lock<R>(&mut self, f: impl FnOnce(&mut u64) -> R) -> R {
@ -111,7 +114,7 @@ const APP: () = {
    unsafe fn UART0() {
        foo(foo::Context {
            resources: foo::Resources {
                x: resources::x::new(/* .. */),
            },
            // ..
        })
@ -121,7 +124,7 @@ const APP: () = {
    unsafe fn UART1() {
        bar(bar::Context {
            resources: bar::Resources {
                x: &mut x,
            },
            // ..
        })
@ -158,7 +161,7 @@ In this particular example we could implement the critical section as follows:
> **NOTE:** this is a simplified implementation

``` rust
impl rtfm::Mutex for resources::x {
    type T = u64;

    fn lock<R, F>(&mut self, f: F) -> R
@ -170,7 +173,7 @@ impl rtfm::Mutex for resources::X {
        asm!("msr BASEPRI, 192" : : : "memory" : "volatile");

        // run user code within the critical section
        let r = f(&mut x);

        // end of critical section: restore dynamic priority to its static value (`1`)
        asm!("msr BASEPRI, 0" : : : "memory" : "volatile");
@ -183,23 +186,23 @@ impl rtfm::Mutex for resources::X {
Here it's important to use the `"memory"` clobber in the `asm!` block. It
prevents the compiler from reordering memory operations across it. This is
important because accessing the variable `x` outside the critical section would
result in a data race.
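The raise-run-restore shape of `lock` can also be simulated on the host, modeling the dynamic priority as a plain `Cell` instead of the BASEPRI register (illustrative only; no real interrupt masking happens here):

``` rust
use std::cell::Cell;

// Host model of a priority-ceiling-protected resource
struct Resource<'a> {
    priority: &'a Cell<u8>, // dynamic priority of the current task
    ceiling: u8,
    value: Cell<u64>,
}

impl<'a> Resource<'a> {
    fn lock<R>(&self, f: impl FnOnce(&Cell<u64>) -> R) -> R {
        let current = self.priority.get();
        if current < self.ceiling {
            self.priority.set(self.ceiling); // raise dynamic priority
            let r = f(&self.value);
            self.priority.set(current); // restore dynamic priority
            r
        } else {
            f(&self.value) // already high enough: no priority change
        }
    }
}
```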
It's important to note that the signature of the `lock` method prevents nesting
calls to it. This is required for memory safety, as nested calls would produce
multiple unique references (`&mut-`) to `x` breaking Rust aliasing rules. See
below:
``` rust
#[interrupt(binds = UART0, priority = 1, resources = [x])]
fn foo(c: foo::Context) {
    // resource proxy
    let mut res: resources::x = c.resources.x;

    res.lock(|x: &mut u64| {
        res.lock(|alias: &mut u64| {
            //~^ error: `res` has already been uniquely borrowed (`&mut-`)
            // ..
        });
    });
@ -223,18 +226,22 @@ Consider this program:
``` rust
#[rtfm::app(device = ..)]
const APP: () = {
    struct Resources {
        #[init(0)]
        x: u64,

        #[init(0)]
        y: u64,
    }

    #[init]
    fn init() {
        rtfm::pend(Interrupt::UART0);
    }

    #[interrupt(binds = UART0, priority = 1, resources = [x, y])]
    fn foo(c: foo::Context) {
        let mut x = c.resources.x;
        let mut y = c.resources.y;

        y.lock(|y| {
            *y += 1;
@ -259,12 +266,12 @@ const APP: () = {
        })
    }

    #[interrupt(binds = UART1, priority = 2, resources = [x])]
    fn bar(c: bar::Context) {
        // ..
    }

    #[interrupt(binds = UART2, priority = 3, resources = [y])]
    fn baz(c: baz::Context) {
        // ..
    }
@ -279,13 +286,13 @@ The code generated by the framework looks like this:
// omitted: user code

pub mod resources {
    pub struct x<'a> {
        priority: &'a Cell<u8>,
    }

    impl<'a> x<'a> {
        pub unsafe fn new(priority: &'a Cell<u8>) -> Self {
            x { priority }
        }

        pub unsafe fn priority(&self) -> &Cell<u8> {
@ -293,7 +300,7 @@ pub mod resources {
        }
    }

    // repeat for `y`
}

pub mod foo {
@ -303,34 +310,35 @@ pub mod foo {
    }

    pub struct Resources<'a> {
        pub x: resources::x<'a>,
        pub y: resources::y<'a>,
    }
}
const APP: () = {
    use cortex_m::register::basepri;

    #[no_mangle]
    unsafe fn UART1() {
        // the static priority of this interrupt (as specified by the user)
        const PRIORITY: u8 = 2;

        // take a snapshot of the BASEPRI
        let initial = basepri::read();

        let priority = Cell::new(PRIORITY);
        bar(bar::Context {
            resources: bar::Resources::new(&priority),
            // ..
        });

        // roll back the BASEPRI to the snapshot value we took before
        basepri::write(initial); // same as the `asm!` block we saw before
    }

    // similarly for `UART0` / `foo` and `UART2` / `baz`

    impl<'a> rtfm::Mutex for resources::x<'a> {
        type T = u64;

        fn lock<R>(&mut self, f: impl FnOnce(&mut u64) -> R) -> R {
@ -342,26 +350,24 @@ const APP: () = {
            if current < CEILING {
                // raise dynamic priority
                self.priority().set(CEILING);
                basepri::write(logical2hw(CEILING));

                let r = f(&mut x);

                // restore dynamic priority
                basepri::write(logical2hw(current));
                self.priority().set(current);

                r
            } else {
                // dynamic priority is high enough
                f(&mut x)
            }
        }
    }

    // repeat for resource `y`
};
```
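The concrete BASEPRI values that appear in this chapter (`224`, `192`, `160`) come from the logical-to-hardware priority mapping; a host-checkable sketch, assuming 3 priority bits (`NVIC_PRIO_BITS = 3`) as on the LM3S6965:

``` rust
const NVIC_PRIO_BITS: u8 = 3; // device-specific; 3 is assumed here

fn logical2hw(logical: u8) -> u8 {
    // the NVIC encodes priority in the higher bits of a byte and bigger
    // numbers mean lower priority
    ((1 << NVIC_PRIO_BITS) - logical) << (8 - NVIC_PRIO_BITS)
}
```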
@ -373,38 +379,38 @@ fn foo(c: foo::Context) {
    // NOTE: BASEPRI contains the value `0` (its reset value) at this point

    // raise dynamic priority to `3`
    unsafe { basepri::write(160) }

    // the two operations on `y` are merged into one
    y += 2;

    // BASEPRI is not modified to access `x` because the dynamic priority is high enough
    x += 1;

    // lower (restore) the dynamic priority to `1`
    unsafe { basepri::write(224) }

    // mid-point

    // raise dynamic priority to `2`
    unsafe { basepri::write(192) }

    x += 1;

    // raise dynamic priority to `3`
    unsafe { basepri::write(160) }

    y += 1;

    // lower (restore) the dynamic priority to `2`
    unsafe { basepri::write(192) }

    // NOTE: it would be sound to merge this operation on `x` with the previous one but
    // compiler fences are coarse grained and prevent such optimization
    x += 1;

    // lower (restore) the dynamic priority to `1`
    unsafe { basepri::write(224) }

    // NOTE: BASEPRI contains the value `224` at this point
    // the UART0 handler will restore the value to `0` before returning
@ -425,7 +431,10 @@ handler through preemption. This is best observed in the following example:
``` rust
#[rtfm::app(device = ..)]
const APP: () = {
    struct Resources {
        #[init(0)]
        x: u64,
    }

    #[init]
    fn init() {
@ -444,11 +453,11 @@ const APP: () = {
        // this function returns to `idle`
    }

    #[task(binds = UART1, priority = 2, resources = [x])]
    fn bar() {
        // BASEPRI is `0` (dynamic priority = 2)

        x.lock(|x| {
            // BASEPRI is raised to `160` (dynamic priority = 3)

            // ..
@ -470,7 +479,7 @@ const APP: () = {
        }
    }

    #[task(binds = UART2, priority = 3, resources = [x])]
    fn baz() {
        // ..
    }
@ -493,8 +502,7 @@ const APP: () = {
        const PRIORITY: u8 = 2;

        // take a snapshot of the BASEPRI
        let initial = basepri::read();

        let priority = Cell::new(PRIORITY);
        bar(bar::Context {
@ -503,7 +511,7 @@ const APP: () = {
        });

        // BUG: FORGOT to roll back the BASEPRI to the snapshot value we took before
        // basepri::write(initial);
    }
};
```


@ -12,7 +12,7 @@ configuration is done before the `init` function runs.
This example gives you an idea of the code that the RTFM framework runs: This example gives you an idea of the code that the RTFM framework runs:
``` rust ``` rust
#[rtfm::app(device = ..)] #[rtfm::app(device = lm3s6965)]
const APP: () = { const APP: () = {
#[init] #[init]
fn init(c: init::Context) { fn init(c: init::Context) {
@ -39,8 +39,7 @@ The framework generates an entry point that looks like this:
unsafe fn main() -> ! { unsafe fn main() -> ! {
// transforms a logical priority into a hardware / NVIC priority // transforms a logical priority into a hardware / NVIC priority
fn logical2hw(priority: u8) -> u8 { fn logical2hw(priority: u8) -> u8 {
// this value comes from the device crate use lm3s6965::NVIC_PRIO_BITS;
const NVIC_PRIO_BITS: u8 = ..;
// the NVIC encodes priority in the higher bits of a byte // the NVIC encodes priority in the higher bits of a byte
// also a bigger number means lower priority // also a bigger number means lower priority
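The priority translation in the generated `main` above can be sketched as a standalone function. The concrete `NVIC_PRIO_BITS = 3` value below is an assumption (it matches the `lm3s6965` device used throughout the examples); with it the function reproduces the `224` and `160` BASEPRI values quoted earlier in this chapter:

``` rust
// Sketch of the logical -> hardware priority translation. NVIC_PRIO_BITS = 3
// is assumed here (8 priority levels, as on the LM3S6965).
const NVIC_PRIO_BITS: u8 = 3;

fn logical2hw(logical: u8) -> u8 {
    // the NVIC encodes priority in the higher bits of a byte and a bigger
    // number means lower priority
    ((1 << NVIC_PRIO_BITS) - logical) << (8 - NVIC_PRIO_BITS)
}

fn main() {
    assert_eq!(logical2hw(1), 224); // the BASEPRI value seen for priority 1
    assert_eq!(logical2hw(3), 160); // and for priority 3
}
```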


@ -11,21 +11,22 @@ initialize late resources.
``` rust ``` rust
#[rtfm::app(device = ..)] #[rtfm::app(device = ..)]
const APP: () = { const APP: () = {
// late resource struct Resources {
static mut X: Thing = {}; x: Thing,
}
#[init] #[init]
fn init() -> init::LateResources { fn init() -> init::LateResources {
// .. // ..
init::LateResources { init::LateResources {
X: Thing::new(..), x: Thing::new(..),
} }
} }
#[task(binds = UART0, resources = [X])] #[task(binds = UART0, resources = [x])]
fn foo(c: foo::Context) { fn foo(c: foo::Context) {
let x: &mut Thing = c.resources.X; let x: &mut Thing = c.resources.x;
x.frob(); x.frob();
@ -50,7 +51,7 @@ fn foo(c: foo::Context) {
// Public API // Public API
pub mod init { pub mod init {
pub struct LateResources { pub struct LateResources {
pub X: Thing, pub x: Thing,
} }
// .. // ..
@ -58,7 +59,7 @@ pub mod init {
pub mod foo { pub mod foo {
pub struct Resources<'a> { pub struct Resources<'a> {
pub X: &'a mut Thing, pub x: &'a mut Thing,
} }
pub struct Context<'a> { pub struct Context<'a> {
@ -70,7 +71,7 @@ pub mod foo {
/// Implementation details /// Implementation details
const APP: () = { const APP: () = {
// uninitialized static // uninitialized static
static mut X: MaybeUninit<Thing> = MaybeUninit::uninit(); static mut x: MaybeUninit<Thing> = MaybeUninit::uninit();
#[no_mangle] #[no_mangle]
unsafe fn main() -> ! { unsafe fn main() -> ! {
@ -81,7 +82,7 @@ const APP: () = {
let late = init(..); let late = init(..);
// initialization of late resources // initialization of late resources
X.write(late.X); x.as_mut_ptr().write(late.x);
cortex_m::interrupt::enable(); //~ compiler fence cortex_m::interrupt::enable(); //~ compiler fence
@ -94,8 +95,8 @@ const APP: () = {
unsafe fn UART0() { unsafe fn UART0() {
foo(foo::Context { foo(foo::Context {
resources: foo::Resources { resources: foo::Resources {
// `X` has been initialized at this point // `x` has been initialized at this point
X: &mut *X.as_mut_ptr(), x: &mut *x.as_mut_ptr(),
}, },
// .. // ..
}) })


@ -13,24 +13,20 @@ are discouraged from directly invoking an interrupt handler.
``` rust ``` rust
#[rtfm::app(device = ..)] #[rtfm::app(device = ..)]
const APP: () = { const APP: () = {
static mut X: u64 = 0;
#[init] #[init]
fn init(c: init::Context) { .. } fn init(c: init::Context) { .. }
#[interrupt(binds = UART0, resources = [X])] #[interrupt(binds = UART0)]
fn foo(c: foo::Context) { fn foo(c: foo::Context) {
let x: &mut u64 = c.resources.X; static mut X: u64 = 0;
*x = 1; let x: &mut u64 = X;
// ..
//~ `bar` can preempt `foo` at this point //~ `bar` can preempt `foo` at this point
*x = 2; // ..
if *x == 2 {
// something
}
} }
#[interrupt(binds = UART1, priority = 2)] #[interrupt(binds = UART1, priority = 2)]
@ -40,15 +36,15 @@ const APP: () = {
} }
// this interrupt handler will invoke task handler `foo` resulting // this interrupt handler will invoke task handler `foo` resulting
// in mutable aliasing of the static variable `X` // in aliasing of the static variable `X`
unsafe { UART0() } unsafe { UART0() }
} }
}; };
``` ```
The RTFM framework must generate the interrupt handler code that calls the user The RTFM framework must generate the interrupt handler code that calls the user
defined task handlers. We are careful in making these handlers `unsafe` and / or defined task handlers. We are careful in making these handlers impossible to
impossible to call from user code. call from user code.
The above example expands into: The above example expands into:


@ -19,7 +19,7 @@ task.
The ready queue is a SPSC (Single Producer Single Consumer) lock-free queue. The The ready queue is a SPSC (Single Producer Single Consumer) lock-free queue. The
task dispatcher owns the consumer endpoint of the queue; the producer endpoint task dispatcher owns the consumer endpoint of the queue; the producer endpoint
is treated as a resource shared by the tasks that can `spawn` other tasks. is treated as a resource contended by the tasks that can `spawn` other tasks.
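The split-endpoint idea can be illustrated with a plain ring buffer. This is only a single-threaded sketch of the producer / consumer roles; the framework actually uses a lock-free SPSC queue (`heapless::spsc`) and the names below are made up for the illustration:

``` rust
// Single-threaded sketch of the ready queue's two endpoints: `spawn` pushes
// through the producer side, the task dispatcher pops through the consumer
// side. Not the real lock-free implementation.
struct ReadyQueue {
    buffer: [Option<u8>; 4], // task identifiers; capacity 4
    head: usize,             // consumer index (task dispatcher)
    tail: usize,             // producer index (`spawn` callers)
}

impl ReadyQueue {
    fn new() -> Self {
        ReadyQueue { buffer: [None; 4], head: 0, tail: 0 }
    }

    // producer endpoint: what `spawn` uses
    fn enqueue(&mut self, task: u8) -> Result<(), u8> {
        if self.buffer[self.tail].is_some() {
            return Err(task); // queue is full; `spawn` reports an error
        }
        self.buffer[self.tail] = Some(task);
        self.tail = (self.tail + 1) % self.buffer.len();
        Ok(())
    }

    // consumer endpoint: what the task dispatcher uses
    fn dequeue(&mut self) -> Option<u8> {
        let task = self.buffer[self.head].take()?;
        self.head = (self.head + 1) % self.buffer.len();
        Some(task)
    }
}

fn main() {
    let mut rq = ReadyQueue::new();
    rq.enqueue(0).unwrap();
    rq.enqueue(1).unwrap();
    assert_eq!(rq.dequeue(), Some(0)); // tasks run in FIFO order
    assert_eq!(rq.dequeue(), Some(1));
    assert_eq!(rq.dequeue(), None);
}
```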
## The task dispatcher ## The task dispatcher
@ -244,7 +244,7 @@ const APP: () = {
baz_INPUTS[index as usize].write(message); baz_INPUTS[index as usize].write(message);
lock(self.priority(), RQ1_CEILING, || { lock(self.priority(), RQ1_CEILING, || {
// put the task in the ready queu // put the task in the ready queue
RQ1.split().1.enqueue_unchecked(Ready { RQ1.split().1.enqueue_unchecked(Ready {
task: T1::baz, task: T1::baz,
index, index,


@ -47,7 +47,7 @@ mod foo {
} }
const APP: () = { const APP: () = {
use rtfm::Instant; type Instant = <path::to::user::monotonic::timer as rtfm::Monotonic>::Instant;
// all tasks that can be `schedule`-d // all tasks that can be `schedule`-d
enum T { enum T {
@ -158,15 +158,14 @@ way it will run at the right priority.
handler; basically, `enqueue_unchecked` delegates the task of setting up a new handler; basically, `enqueue_unchecked` delegates the task of setting up a new
timeout interrupt to the `SysTick` handler. timeout interrupt to the `SysTick` handler.
## Resolution and range of `Instant` and `Duration` ## Resolution and range of `cyccnt::Instant` and `cyccnt::Duration`
In the current implementation the `DWT`'s (Data Watchpoint and Trace) cycle RTFM provides a `Monotonic` implementation based on the `DWT`'s (Data Watchpoint
counter is used as a monotonic timer. `Instant::now` returns a snapshot of this and Trace) cycle counter. `Instant::now` returns a snapshot of this timer; these
timer; these DWT snapshots (`Instant`s) are used to sort entries in the timer DWT snapshots (`Instant`s) are used to sort entries in the timer queue. The
queue. The cycle counter is a 32-bit counter clocked at the core clock cycle counter is a 32-bit counter clocked at the core clock frequency. This
frequency. This counter wraps around every `(1 << 32)` clock cycles; there's no counter wraps around every `(1 << 32)` clock cycles; there's no interrupt
interrupt associated to this counter so nothing worth noting happens when it associated to this counter so nothing worth noting happens when it wraps around.
wraps around.
To order `Instant`s in the queue we need to compare two 32-bit integers. To To order `Instant`s in the queue we need to compare two 32-bit integers. To
account for the wrap-around behavior we use the difference between two account for the wrap-around behavior we use the difference between two
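That difference-based ordering can be sketched as follows: interpreting the wrapping difference of the two counter snapshots as a signed 32-bit integer handles the wrap-around correctly, as long as the two snapshots are less than `1 << 31` clock cycles apart (the function name is made up for the illustration):

``` rust
// Sketch: order two 32-bit cycle counter snapshots in the presence of
// wrap-around by looking at the sign of their wrapping difference.
fn is_earlier(a: u32, b: u32) -> bool {
    // a negative signed difference means `a` was taken before `b`
    (a.wrapping_sub(b) as i32) < 0
}

fn main() {
    assert!(is_earlier(100, 200));
    // the counter wrapped between the two snapshots: 0xFFFF_FFF0 is still
    // ordered *before* 0x10
    assert!(is_earlier(0xFFFF_FFF0, 0x10));
    assert!(!is_earlier(0x10, 0xFFFF_FFF0));
}
```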
@ -264,11 +263,11 @@ The ceiling analysis would go like this:
## Changes in the `spawn` implementation ## Changes in the `spawn` implementation
When the "timer-queue" feature is enabled the `spawn` implementation changes a When the `schedule` API is used the `spawn` implementation changes a bit to
bit to track the baseline of tasks. As you saw in the `schedule` implementation track the baseline of tasks. As you saw in the `schedule` implementation there's
there's an `INSTANTS` buffer used to store the time at which a task was an `INSTANTS` buffer used to store the time at which a task was scheduled to
scheduled to run; this `Instant` is read in the task dispatcher and passed to run; this `Instant` is read in the task dispatcher and passed to the user code
the user code as part of the task context. as part of the task context.
``` rust ``` rust
const APP: () = { const APP: () = {


@ -14,6 +14,6 @@ There is a translation of this book in [Russian].
**HEADS UP** This is an **alpha** pre-release; there may be breaking changes in **HEADS UP** This is an **alpha** pre-release; there may be breaking changes in
the API and semantics before a proper release is made. the API and semantics before a proper release is made.
{{#include ../../../README.md:5:46}} {{#include ../../../README.md:5:44}}
{{#include ../../../README.md:52:}} {{#include ../../../README.md:50:}}

ci/expected/cfg.run Normal file

@ -0,0 +1,2 @@
foo has been called 1 time
foo has been called 2 times

ci/expected/preempt.run Normal file

@ -0,0 +1,5 @@
UART0 - start
UART2 - start
UART2 - end
UART1
UART0 - end


@ -1,3 +1 @@
20000100 B bar::FREE_QUEUE::lk14244m263eivix 20000000 t ramfunc::bar::h9d6714fe5a3b0c89
200000dc B bar::INPUTS::mi89534s44r1mnj1
20000000 T bar::ns9009yhw2dc2y25


@ -1,3 +1 @@
20000100 B foo::FREE_QUEUE::ujkptet2nfdw5t20 00000162 t ramfunc::foo::h30e7789b08c08e19
200000dc B foo::INPUTS::thvubs85b91dg365
000002c6 T foo::sidaht420cg1mcm8


@ -1,3 +1,5 @@
foo foo - start
foo - middle
baz baz
foo - end
bar bar


@ -99,13 +99,14 @@ main() {
local exs=( local exs=(
idle idle
init init
interrupt hardware
preempt
binds binds
resource resource
lock lock
late late
static only-shared-access
task task
message message
@ -117,6 +118,7 @@ main() {
shared-with-init shared-with-init
generics generics
cfg
pool pool
ramfunc ramfunc
) )
@ -160,7 +162,11 @@ main() {
fi fi
arm_example "run" $ex "debug" "" "1" arm_example "run" $ex "debug" "" "1"
arm_example "run" $ex "release" "" "1" if [ $ex = types ]; then
arm_example "run" $ex "release" "" "1"
else
arm_example "build" $ex "release" "" "1"
fi
done done
local built=() local built=()


@ -13,18 +13,18 @@ use panic_semihosting as _;
#[rtfm::app(device = lm3s6965, monotonic = rtfm::cyccnt::CYCCNT)] #[rtfm::app(device = lm3s6965, monotonic = rtfm::cyccnt::CYCCNT)]
const APP: () = { const APP: () = {
#[init(spawn = [foo])] #[init(spawn = [foo])]
fn init(c: init::Context) { fn init(cx: init::Context) {
hprintln!("init(baseline = {:?})", c.start).unwrap(); hprintln!("init(baseline = {:?})", cx.start).unwrap();
// `foo` inherits the baseline of `init`: `Instant(0)` // `foo` inherits the baseline of `init`: `Instant(0)`
c.spawn.foo().unwrap(); cx.spawn.foo().unwrap();
} }
#[task(schedule = [foo])] #[task(schedule = [foo])]
fn foo(c: foo::Context) { fn foo(cx: foo::Context) {
static mut ONCE: bool = true; static mut ONCE: bool = true;
hprintln!("foo(baseline = {:?})", c.scheduled).unwrap(); hprintln!("foo(baseline = {:?})", cx.scheduled).unwrap();
if *ONCE { if *ONCE {
*ONCE = false; *ONCE = false;
@ -36,11 +36,11 @@ const APP: () = {
} }
#[task(binds = UART0, spawn = [foo])] #[task(binds = UART0, spawn = [foo])]
fn uart0(c: uart0::Context) { fn uart0(cx: uart0::Context) {
hprintln!("UART0(baseline = {:?})", c.start).unwrap(); hprintln!("UART0(baseline = {:?})", cx.start).unwrap();
// `foo` inherits the baseline of `UART0`: its `start` time // `foo` inherits the baseline of `UART0`: its `start` time
c.spawn.foo().unwrap(); cx.spawn.foo().unwrap();
} }
extern "C" { extern "C" {


@ -5,6 +5,7 @@
#![no_main] #![no_main]
#![no_std] #![no_std]
use cortex_m_semihosting::debug;
#[cfg(debug_assertions)] #[cfg(debug_assertions)]
use cortex_m_semihosting::hprintln; use cortex_m_semihosting::hprintln;
use panic_semihosting as _; use panic_semihosting as _;
@ -17,28 +18,36 @@ const APP: () = {
count: u32, count: u32,
} }
#[init] #[init(spawn = [foo])]
fn init(_: init::Context) { fn init(cx: init::Context) {
// .. cx.spawn.foo().unwrap();
cx.spawn.foo().unwrap();
} }
#[task(priority = 3, resources = [count], spawn = [log])] #[idle]
fn foo(_c: foo::Context) { fn idle(_: idle::Context) -> ! {
debug::exit(debug::EXIT_SUCCESS);
loop {}
}
#[task(capacity = 2, resources = [count], spawn = [log])]
fn foo(_cx: foo::Context) {
#[cfg(debug_assertions)] #[cfg(debug_assertions)]
{ {
*_c.resources.count += 1; *_cx.resources.count += 1;
_c.spawn.log(*_c.resources.count).ok(); _cx.spawn.log(*_cx.resources.count).unwrap();
} }
// this wouldn't compile in `release` mode // this wouldn't compile in `release` mode
// *resources.count += 1; // *_cx.resources.count += 1;
// .. // ..
} }
#[cfg(debug_assertions)] #[cfg(debug_assertions)]
#[task] #[task(capacity = 2)]
fn log(_: log::Context, n: u32) { fn log(_: log::Context, n: u32) {
hprintln!( hprintln!(
"foo has been called {} time{}", "foo has been called {} time{}",


@ -29,6 +29,7 @@ const APP: () = {
hprintln!("UART0(STATE = {})", *STATE).unwrap(); hprintln!("UART0(STATE = {})", *STATE).unwrap();
// second argument has type `resources::shared`
advance(STATE, c.resources.shared); advance(STATE, c.resources.shared);
rtfm::pend(Interrupt::UART1); rtfm::pend(Interrupt::UART1);
@ -45,14 +46,16 @@ const APP: () = {
// just to show that `shared` can be accessed directly // just to show that `shared` can be accessed directly
*c.resources.shared += 0; *c.resources.shared += 0;
// second argument has type `Exclusive<u32>`
advance(STATE, Exclusive(c.resources.shared)); advance(STATE, Exclusive(c.resources.shared));
} }
}; };
// the second parameter is generic: it can be any type that implements the `Mutex` trait
fn advance(state: &mut u32, mut shared: impl Mutex<T = u32>) { fn advance(state: &mut u32, mut shared: impl Mutex<T = u32>) {
*state += 1; *state += 1;
let (old, new) = shared.lock(|shared| { let (old, new) = shared.lock(|shared: &mut u32| {
let old = *shared; let old = *shared;
*shared += *state; *shared += *state;
(old, *shared) (old, *shared)


@ -1,4 +1,4 @@
//! examples/interrupt.rs //! examples/hardware.rs
#![deny(unsafe_code)] #![deny(unsafe_code)]
#![deny(warnings)] #![deny(warnings)]
@ -15,7 +15,7 @@ const APP: () = {
fn init(_: init::Context) { fn init(_: init::Context) {
// Pends the UART0 interrupt but its handler won't run until *after* // Pends the UART0 interrupt but its handler won't run until *after*
// `init` returns because interrupts are disabled // `init` returns because interrupts are disabled
rtfm::pend(Interrupt::UART0); rtfm::pend(Interrupt::UART0); // equivalent to NVIC::pend
hprintln!("init").unwrap(); hprintln!("init").unwrap();
} }


@ -11,14 +11,14 @@ use panic_semihosting as _;
#[rtfm::app(device = lm3s6965, peripherals = true)] #[rtfm::app(device = lm3s6965, peripherals = true)]
const APP: () = { const APP: () = {
#[init] #[init]
fn init(c: init::Context) { fn init(cx: init::Context) {
static mut X: u32 = 0; static mut X: u32 = 0;
// Cortex-M peripherals // Cortex-M peripherals
let _core: cortex_m::Peripherals = c.core; let _core: cortex_m::Peripherals = cx.core;
// Device specific peripherals // Device specific peripherals
let _device: lm3s6965::Peripherals = c.device; let _device: lm3s6965::Peripherals = cx.device;
// Safe access to local `static mut` variable // Safe access to local `static mut` variable
let _x: &'static mut u32 = X; let _x: &'static mut u32 = X;


@ -8,6 +8,7 @@
use cortex_m_semihosting::{debug, hprintln}; use cortex_m_semihosting::{debug, hprintln};
use heapless::{ use heapless::{
consts::*, consts::*,
i,
spsc::{Consumer, Producer, Queue}, spsc::{Consumer, Producer, Queue},
}; };
use lm3s6965::Interrupt; use lm3s6965::Interrupt;
@ -23,12 +24,9 @@ const APP: () = {
#[init] #[init]
fn init(_: init::Context) -> init::LateResources { fn init(_: init::Context) -> init::LateResources {
// NOTE: we use `Option` here to work around the lack of static mut Q: Queue<u32, U4> = Queue(i::Queue::new());
// a stable `const` constructor
static mut Q: Option<Queue<u32, U4>> = None;
*Q = Some(Queue::new()); let (p, c) = Q.split();
let (p, c) = Q.as_mut().unwrap().split();
// Initialization of late resources // Initialization of late resources
init::LateResources { p, c } init::LateResources { p, c }


@ -26,12 +26,12 @@ const APP: () = {
debug::exit(debug::EXIT_SUCCESS); debug::exit(debug::EXIT_SUCCESS);
} }
#[task(resources = [shared])] #[task(resources = [&shared])]
fn foo(c: foo::Context) { fn foo(c: foo::Context) {
let _: &NotSync = c.resources.shared; let _: &NotSync = c.resources.shared;
} }
#[task(resources = [shared])] #[task(resources = [&shared])]
fn bar(c: bar::Context) { fn bar(c: bar::Context) {
let _: &NotSync = c.resources.shared; let _: &NotSync = c.resources.shared;
} }


@ -24,14 +24,15 @@ const APP: () = {
} }
#[task(binds = UART0, resources = [&key])] #[task(binds = UART0, resources = [&key])]
fn uart0(c: uart0::Context) { fn uart0(cx: uart0::Context) {
hprintln!("UART0(key = {:#x})", c.resources.key).unwrap(); let key: &u32 = cx.resources.key;
hprintln!("UART0(key = {:#x})", key).unwrap();
debug::exit(debug::EXIT_SUCCESS); debug::exit(debug::EXIT_SUCCESS);
} }
#[task(binds = UART1, priority = 2, resources = [&key])] #[task(binds = UART1, priority = 2, resources = [&key])]
fn uart1(c: uart1::Context) { fn uart1(cx: uart1::Context) {
hprintln!("UART1(key = {:#x})", c.resources.key).unwrap(); hprintln!("UART1(key = {:#x})", cx.resources.key).unwrap();
} }
}; };


@ -15,16 +15,16 @@ const PERIOD: u32 = 8_000_000;
#[rtfm::app(device = lm3s6965, monotonic = rtfm::cyccnt::CYCCNT)] #[rtfm::app(device = lm3s6965, monotonic = rtfm::cyccnt::CYCCNT)]
const APP: () = { const APP: () = {
#[init(schedule = [foo])] #[init(schedule = [foo])]
fn init(c: init::Context) { fn init(cx: init::Context) {
c.schedule.foo(Instant::now() + PERIOD.cycles()).unwrap(); cx.schedule.foo(Instant::now() + PERIOD.cycles()).unwrap();
} }
#[task(schedule = [foo])] #[task(schedule = [foo])]
fn foo(c: foo::Context) { fn foo(cx: foo::Context) {
let now = Instant::now(); let now = Instant::now();
hprintln!("foo(scheduled = {:?}, now = {:?})", c.scheduled, now).unwrap(); hprintln!("foo(scheduled = {:?}, now = {:?})", cx.scheduled, now).unwrap();
c.schedule.foo(c.scheduled + PERIOD.cycles()).unwrap(); cx.schedule.foo(cx.scheduled + PERIOD.cycles()).unwrap();
} }
extern "C" { extern "C" {

examples/preempt.rs Normal file

@ -0,0 +1,37 @@
//! examples/preempt.rs
#![no_main]
#![no_std]
use cortex_m_semihosting::{debug, hprintln};
use lm3s6965::Interrupt;
use panic_semihosting as _;
use rtfm::app;
#[app(device = lm3s6965)]
const APP: () = {
#[init]
fn init(_: init::Context) {
rtfm::pend(Interrupt::UART0);
}
#[task(binds = UART0, priority = 1)]
fn uart0(_: uart0::Context) {
hprintln!("UART0 - start").unwrap();
rtfm::pend(Interrupt::UART2);
hprintln!("UART0 - end").unwrap();
debug::exit(debug::EXIT_SUCCESS);
}
#[task(binds = UART1, priority = 2)]
fn uart1(_: uart1::Context) {
hprintln!(" UART1").unwrap();
}
#[task(binds = UART2, priority = 2)]
fn uart2(_: uart2::Context) {
hprintln!(" UART2 - start").unwrap();
rtfm::pend(Interrupt::UART1);
hprintln!(" UART2 - end").unwrap();
}
};


@ -23,29 +23,31 @@ const APP: () = {
rtfm::pend(Interrupt::UART1); rtfm::pend(Interrupt::UART1);
} }
// `shared` cannot be accessed from this context
#[idle] #[idle]
fn idle(_: idle::Context) -> ! { fn idle(_cx: idle::Context) -> ! {
debug::exit(debug::EXIT_SUCCESS); debug::exit(debug::EXIT_SUCCESS);
// error: `shared` can't be accessed from this context // error: no `resources` field in `idle::Context`
// shared += 1; // _cx.resources.shared += 1;
loop {} loop {}
} }
// `shared` can be access from this context // `shared` can be accessed from this context
#[task(binds = UART0, resources = [shared])] #[task(binds = UART0, resources = [shared])]
fn uart0(c: uart0::Context) { fn uart0(cx: uart0::Context) {
*c.resources.shared += 1; let shared: &mut u32 = cx.resources.shared;
*shared += 1;
hprintln!("UART0: shared = {}", c.resources.shared).unwrap(); hprintln!("UART0: shared = {}", shared).unwrap();
} }
// `shared` can be access from this context // `shared` can be accessed from this context
#[task(binds = UART1, resources = [shared])] #[task(binds = UART1, resources = [shared])]
fn uart1(c: uart1::Context) { fn uart1(cx: uart1::Context) {
*c.resources.shared += 1; *cx.resources.shared += 1;
hprintln!("UART1: shared = {}", c.resources.shared).unwrap(); hprintln!("UART1: shared = {}", cx.resources.shared).unwrap();
} }
}; };


@ -13,16 +13,16 @@ use rtfm::cyccnt::{Instant, U32Ext as _};
#[rtfm::app(device = lm3s6965, monotonic = rtfm::cyccnt::CYCCNT)] #[rtfm::app(device = lm3s6965, monotonic = rtfm::cyccnt::CYCCNT)]
const APP: () = { const APP: () = {
#[init(schedule = [foo, bar])] #[init(schedule = [foo, bar])]
fn init(c: init::Context) { fn init(cx: init::Context) {
let now = Instant::now(); let now = Instant::now();
hprintln!("init @ {:?}", now).unwrap(); hprintln!("init @ {:?}", now).unwrap();
// Schedule `foo` to run 8e6 cycles (clock cycles) in the future // Schedule `foo` to run 8e6 cycles (clock cycles) in the future
c.schedule.foo(now + 8_000_000.cycles()).unwrap(); cx.schedule.foo(now + 8_000_000.cycles()).unwrap();
// Schedule `bar` to run 4e6 cycles in the future // Schedule `bar` to run 4e6 cycles in the future
c.schedule.bar(now + 4_000_000.cycles()).unwrap(); cx.schedule.bar(now + 4_000_000.cycles()).unwrap();
} }
#[task] #[task]


@ -1,7 +1,5 @@
//! examples/smallest.rs //! examples/smallest.rs
#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main] #![no_main]
#![no_std] #![no_std]


@ -17,16 +17,20 @@ const APP: () = {
#[task(spawn = [bar, baz])] #[task(spawn = [bar, baz])]
fn foo(c: foo::Context) { fn foo(c: foo::Context) {
hprintln!("foo").unwrap(); hprintln!("foo - start").unwrap();
// spawns `bar` onto the task scheduler // spawns `bar` onto the task scheduler
// `foo` and `bar` have the same priority so `bar` will not run until // `foo` and `bar` have the same priority so `bar` will not run until
// after `foo` terminates // after `foo` terminates
c.spawn.bar().unwrap(); c.spawn.bar().unwrap();
hprintln!("foo - middle").unwrap();
// spawns `baz` onto the task scheduler // spawns `baz` onto the task scheduler
// `baz` has higher priority than `foo` so it immediately preempts `foo` // `baz` has higher priority than `foo` so it immediately preempts `foo`
c.spawn.baz().unwrap(); c.spawn.baz().unwrap();
hprintln!("foo - end").unwrap();
} }
#[task] #[task]


@ -7,7 +7,7 @@
use cortex_m_semihosting::debug; use cortex_m_semihosting::debug;
use panic_semihosting as _; use panic_semihosting as _;
use rtfm::cyccnt::Instant; use rtfm::cyccnt;
#[rtfm::app(device = lm3s6965, peripherals = true, monotonic = rtfm::cyccnt::CYCCNT)] #[rtfm::app(device = lm3s6965, peripherals = true, monotonic = rtfm::cyccnt::CYCCNT)]
const APP: () = { const APP: () = {
@ -17,38 +17,39 @@ const APP: () = {
} }
#[init(schedule = [foo], spawn = [foo])] #[init(schedule = [foo], spawn = [foo])]
fn init(c: init::Context) { fn init(cx: init::Context) {
let _: Instant = c.start; let _: cyccnt::Instant = cx.start;
let _: rtfm::Peripherals = c.core; let _: rtfm::Peripherals = cx.core;
let _: lm3s6965::Peripherals = c.device; let _: lm3s6965::Peripherals = cx.device;
let _: init::Schedule = c.schedule; let _: init::Schedule = cx.schedule;
let _: init::Spawn = c.spawn; let _: init::Spawn = cx.spawn;
debug::exit(debug::EXIT_SUCCESS); debug::exit(debug::EXIT_SUCCESS);
} }
#[task(binds = SVCall, schedule = [foo], spawn = [foo])] #[idle(schedule = [foo], spawn = [foo])]
fn svcall(c: svcall::Context) { fn idle(cx: idle::Context) -> ! {
let _: Instant = c.start; let _: idle::Schedule = cx.schedule;
let _: svcall::Schedule = c.schedule; let _: idle::Spawn = cx.spawn;
let _: svcall::Spawn = c.spawn;
loop {}
} }
#[task(binds = UART0, resources = [shared], schedule = [foo], spawn = [foo])] #[task(binds = UART0, resources = [shared], schedule = [foo], spawn = [foo])]
fn uart0(c: uart0::Context) { fn uart0(cx: uart0::Context) {
let _: Instant = c.start; let _: cyccnt::Instant = cx.start;
let _: resources::shared = c.resources.shared; let _: resources::shared = cx.resources.shared;
let _: uart0::Schedule = c.schedule; let _: uart0::Schedule = cx.schedule;
let _: uart0::Spawn = c.spawn; let _: uart0::Spawn = cx.spawn;
} }
#[task(priority = 2, resources = [shared], schedule = [foo], spawn = [foo])] #[task(priority = 2, resources = [shared], schedule = [foo], spawn = [foo])]
fn foo(c: foo::Context) { fn foo(cx: foo::Context) {
let _: Instant = c.scheduled; let _: cyccnt::Instant = cx.scheduled;
let _: &mut u32 = c.resources.shared; let _: &mut u32 = cx.resources.shared;
let _: foo::Resources = c.resources; let _: foo::Resources = cx.resources;
let _: foo::Schedule = c.schedule; let _: foo::Schedule = cx.schedule;
let _: foo::Spawn = c.spawn; let _: foo::Spawn = cx.spawn;
} }
extern "C" { extern "C" {