Tips & tricks

Generics

Resources may appear in a task's context as resource proxies or as unique references (&mut-), depending on the priority of the task. Because the same resource may appear as different types in different contexts, one cannot refactor a common operation that uses resources into a plain function; however, such a refactor is possible using generics.

All resource proxies implement the rtfm::Mutex trait. Unique references (&mut-), on the other hand, do not implement this trait (due to limitations in the trait system), but one can wrap them in the rtfm::Exclusive newtype, which does implement the Mutex trait. With the help of this newtype one can write a generic function that operates on generic resources and call it from different tasks to perform the same operation on the same set of resources.
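Roughly speaking, the Mutex trait provides a single lock method that grants a closure temporary access to the resource data. The following is a simplified sketch of the trait, not its exact definition; consult the rtfm API documentation for the real thing:

pub trait Mutex {
    /// The type of the data protected by this mutex
    type T;

    /// Runs the closure `f` with mutable access to the protected data.
    /// Resource proxies implement this by creating a critical section
    /// (temporarily raising the task's dynamic priority); `Exclusive`
    /// simply passes its wrapped `&mut` reference through.
    fn lock<R>(&mut self, f: impl FnOnce(&mut Self::T) -> R) -> R;
}

Here's one such example: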


//! examples/generics.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

use cortex_m_semihosting::{debug, hprintln};
use lm3s6965::Interrupt;
use panic_semihosting as _;
use rtfm::{Exclusive, Mutex};

#[rtfm::app(device = lm3s6965)]
const APP: () = {
    struct Resources {
        #[init(0)]
        shared: u32,
    }

    #[init]
    fn init(_: init::Context) {
        rtfm::pend(Interrupt::UART0);
        rtfm::pend(Interrupt::UART1);
    }

    #[task(binds = UART0, resources = [shared])]
    fn uart0(c: uart0::Context) {
        static mut STATE: u32 = 0;

        hprintln!("UART0(STATE = {})", *STATE).unwrap();

        // second argument has type `resources::shared`
        advance(STATE, c.resources.shared);

        rtfm::pend(Interrupt::UART1);

        debug::exit(debug::EXIT_SUCCESS);
    }

    #[task(binds = UART1, priority = 2, resources = [shared])]
    fn uart1(c: uart1::Context) {
        static mut STATE: u32 = 0;

        hprintln!("UART1(STATE = {})", *STATE).unwrap();

        // just to show that `shared` can be accessed directly
        *c.resources.shared += 0;

        // second argument has type `Exclusive<u32>`
        advance(STATE, Exclusive(c.resources.shared));
    }
};

// the second parameter is generic: it can be any type that implements the `Mutex` trait
fn advance(state: &mut u32, mut shared: impl Mutex<T = u32>) {
    *state += 1;

    let (old, new) = shared.lock(|shared: &mut u32| {
        let old = *shared;
        *shared += *state;
        (old, *shared)
    });

    hprintln!("shared: {} -> {}", old, new).unwrap();
}

$ cargo run --example generics
UART1(STATE = 0)
shared: 0 -> 1
UART0(STATE = 0)
shared: 1 -> 2
UART1(STATE = 1)
shared: 2 -> 4

Using generics also lets you change the static priorities of tasks during development without having to rewrite a bunch of code every time.
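For instance, consider a hypothetical tweak to the example above: if uart0 were raised to the same priority as uart1, neither task could preempt the other, so both would receive shared as a unique reference (&mut u32). Only the call site changes; advance stays exactly the same. A sketch of the modified uart0 task:

    // hypothetical variation: `uart0` now runs at the same priority as `uart1`
    #[task(binds = UART0, priority = 2, resources = [shared])]
    fn uart0(c: uart0::Context) {
        static mut STATE: u32 = 0;

        hprintln!("UART0(STATE = {})", *STATE).unwrap();

        // `shared` is now a `&mut u32` instead of a proxy, so it gets wrapped
        // in `Exclusive` -- `advance` itself does not change
        advance(STATE, Exclusive(c.resources.shared));

        rtfm::pend(Interrupt::UART1);

        debug::exit(debug::EXIT_SUCCESS);
    }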

Conditional compilation

You can use conditional compilation (#[cfg]) on resources (the fields of struct Resources) and on tasks (the fn items). The effect of a #[cfg] attribute is that the resource or task will not be available through the corresponding Context struct if the condition doesn't hold.

The example below logs a message whenever the foo task is spawned, but only if the program has been compiled using the dev profile.


//! examples/cfg.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

use cortex_m_semihosting::debug;
#[cfg(debug_assertions)]
use cortex_m_semihosting::hprintln;
use panic_semihosting as _;

#[rtfm::app(device = lm3s6965)]
const APP: () = {
    struct Resources {
        #[cfg(debug_assertions)] // <- `true` when using the `dev` profile
        #[init(0)]
        count: u32,
    }

    #[init(spawn = [foo])]
    fn init(cx: init::Context) {
        cx.spawn.foo().unwrap();
        cx.spawn.foo().unwrap();
    }

    #[idle]
    fn idle(_: idle::Context) -> ! {
        debug::exit(debug::EXIT_SUCCESS);

        loop {}
    }

    #[task(capacity = 2, resources = [count], spawn = [log])]
    fn foo(_cx: foo::Context) {
        #[cfg(debug_assertions)]
        {
            *_cx.resources.count += 1;

            _cx.spawn.log(*_cx.resources.count).unwrap();
        }

        // this wouldn't compile in `release` mode
        // *_cx.resources.count += 1;

        // ..
    }

    #[cfg(debug_assertions)]
    #[task(capacity = 2)]
    fn log(_: log::Context, n: u32) {
        hprintln!(
            "foo has been called {} time{}",
            n,
            if n == 1 { "" } else { "s" }
        )
        .ok();
    }

    extern "C" {
        fn UART0();
        fn UART1();
    }
};

$ cargo run --example cfg --release

$ cargo run --example cfg
foo has been called 1 time
foo has been called 2 times

Running tasks from RAM

The main goal of moving the specification of RTFM applications to attributes in RTFM v0.4.0 was to allow inter-operation with other attributes. For example, the link_section attribute can be applied to tasks to place them in RAM; this can improve performance in some cases.

IMPORTANT: In general, the link_section, export_name and no_mangle attributes are very powerful but also easy to misuse. Incorrectly using any of these attributes can cause undefined behavior; you should always prefer to use safe, higher-level attributes around them, like cortex-m-rt's interrupt and exception attributes.

In the particular case of RAM functions, there's no safe abstraction for them in cortex-m-rt v0.6.5, but there's an RFC for adding a ramfunc attribute in a future release.

The example below shows how to place the higher priority task, bar, in RAM: the function is put in the .data.bar section, which cortex-m-rt's linker script groups into the .data output section, so the startup code copies it into RAM before main runs.


//! examples/ramfunc.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

use cortex_m_semihosting::{debug, hprintln};
use panic_semihosting as _;

#[rtfm::app(device = lm3s6965)]
const APP: () = {
    #[init(spawn = [bar])]
    fn init(c: init::Context) {
        c.spawn.bar().unwrap();
    }

    #[inline(never)]
    #[task]
    fn foo(_: foo::Context) {
        hprintln!("foo").unwrap();

        debug::exit(debug::EXIT_SUCCESS);
    }

    // run this task from RAM
    #[inline(never)]
    #[link_section = ".data.bar"]
    #[task(priority = 2, spawn = [foo])]
    fn bar(c: bar::Context) {
        c.spawn.foo().unwrap();
    }

    extern "C" {
        fn UART0();

        // run the task dispatcher from RAM
        #[link_section = ".data.UART1"]
        fn UART1();
    }
};


Running this program produces the expected output.

$ cargo run --example ramfunc
foo

One can look at the output of cargo-nm to confirm that bar ended up in RAM (0x2000_0000), whereas foo ended up in Flash (0x0000_0000).

$ cargo nm --example ramfunc --release | grep ' foo::'
00000162 t ramfunc::foo::h30e7789b08c08e19
$ cargo nm --example ramfunc --release | grep ' bar::'
20000000 t ramfunc::bar::h9d6714fe5a3b0c89

Indirection for faster message passing

Message passing always involves copying the payload from the sender into a static variable and then from the static variable into the receiver. Thus sending a large buffer, like a [u8; 128], as a message involves two expensive memcpys. To minimize the message passing overhead one can use indirection: instead of sending the buffer by value, one can send an owning pointer into the buffer.
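To make the difference concrete, here's a minimal sketch of the two task signatures (the task names are hypothetical; Box<P> is the pooled pointer type from the heapless example further below). Message passing copies only the argument itself, so the second variant copies a pointer-sized value instead of the whole buffer:

// by value: the entire 128-byte buffer is memcpy'ed twice
#[task]
fn by_value(_: by_value::Context, buffer: [u8; 128]) {
    // ..
}

// by indirection: only the pointer-sized `Box<P>` is copied
#[task]
fn by_pointer(_: by_pointer::Context, buffer: Box<P>) {
    // ..
}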

One can use a global allocator to achieve indirection (alloc::Box, alloc::Rc, etc.), which requires using the nightly channel as of Rust v1.37.0, or one can use a statically allocated memory pool like heapless::Pool.

Here's an example where heapless::Pool is used to "box" buffers of 128 bytes.


//! examples/pool.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

use cortex_m_semihosting::{debug, hprintln};
use heapless::{
    pool,
    pool::singleton::{Box, Pool},
};
use lm3s6965::Interrupt;
use panic_semihosting as _;
use rtfm::app;

// Declare a pool of 128-byte memory blocks
pool!(P: [u8; 128]);

#[app(device = lm3s6965)]
const APP: () = {
    #[init]
    fn init(_: init::Context) {
        static mut MEMORY: [u8; 512] = [0; 512];

        // Increase the capacity of the memory pool by ~4 blocks (512 B total, 128 B per block)
        P::grow(MEMORY);

        rtfm::pend(Interrupt::I2C0);
    }

    #[task(binds = I2C0, priority = 2, spawn = [foo, bar])]
    fn i2c0(c: i2c0::Context) {
        // claim a memory block, leave it uninitialized and ..
        let x = P::alloc().unwrap().freeze();

        // .. send it to the `foo` task
        c.spawn.foo(x).ok().unwrap();

        // send another block to the task `bar`
        c.spawn.bar(P::alloc().unwrap().freeze()).ok().unwrap();
    }

    #[task]
    fn foo(_: foo::Context, x: Box<P>) {
        hprintln!("foo({:?})", x.as_ptr()).unwrap();

        // explicitly return the block to the pool
        drop(x);

        debug::exit(debug::EXIT_SUCCESS);
    }

    #[task(priority = 2)]
    fn bar(_: bar::Context, x: Box<P>) {
        hprintln!("bar({:?})", x.as_ptr()).unwrap();

        // this is done automatically so we can omit the call to `drop`
        // drop(x);
    }

    extern "C" {
        fn UART0();
        fn UART1();
    }
};

$ cargo run --example pool
bar(0x2000008c)
foo(0x20000110)

Inspecting the expanded code

#[rtfm::app] is a procedural macro that produces support code. If for some reason you need to inspect the code generated by this macro, you have two options:

You can inspect the file rtfm-expansion.rs inside the target directory. This file contains the expansion of the #[rtfm::app] item (not your whole program!) of the last built (via cargo build or cargo check) RTFM application. The expanded code is not pretty-printed by default, so you'll want to run rustfmt over it before you read it.

$ cargo build --example foo

$ rustfmt target/rtfm-expansion.rs

$ tail target/rtfm-expansion.rs
#[doc = r" Implementation details"]
const APP: () = {
    #[doc = r" Always include the device crate which contains the vector table"]
    use lm3s6965 as _;
    #[no_mangle]
    unsafe extern "C" fn main() -> ! {
        rtfm::export::interrupt::disable();
        let mut core: rtfm::export::Peripherals = core::mem::transmute(());
        core.SCB.scr.modify(|r| r | 1 << 1);
        rtfm::export::interrupt::enable();
        loop {
            rtfm::export::wfi()
        }
    }
};

Or, you can use the cargo-expand subcommand. This subcommand expands all the macros in your crate, including the #[rtfm::app] attribute, inlines all modules, and prints the output to the console.

$ # produces the same output as before
$ cargo expand --example smallest | tail

Resource destructuring

When a task takes multiple resources, destructuring the resource struct can improve readability. Here are two examples of how this can be done:


//! examples/destructure.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

use cortex_m_semihosting::hprintln;
use lm3s6965::Interrupt;
use panic_semihosting as _;

#[rtfm::app(device = lm3s6965)]
const APP: () = {
    struct Resources {
        // Some resources to work with
        #[init(0)]
        a: u32,
        #[init(0)]
        b: u32,
        #[init(0)]
        c: u32,
    }

    #[init]
    fn init(_: init::Context) {
        rtfm::pend(Interrupt::UART0);
        rtfm::pend(Interrupt::UART1);
    }

    // Bind each resource to its own variable
    #[task(binds = UART0, resources = [a, b, c])]
    fn uart0(cx: uart0::Context) {
        let a = cx.resources.a;
        let b = cx.resources.b;
        let c = cx.resources.c;

        hprintln!("UART0: a = {}, b = {}, c = {}", a, b, c).unwrap();
    }

    // Destructuring syntax
    #[task(binds = UART1, resources = [a, b, c])]
    fn uart1(cx: uart1::Context) {
        let uart1::Resources { a, b, c } = cx.resources;

        hprintln!("UART0: a = {}, b = {}, c = {}", a, b, c).unwrap();
    }
};
