Making a RISC-V OS (Part 1): Project Setup

22 January 2024

Long term goals

The plan for this series of articles is to make a full-blown OS, with filesystems, PCIe support, good POSIX compatibility...

The OS will only target the virt machine of QEMU, in order to simplify the implementation. Performance won't be a major goal, as I will do everything alone, and as such I will try to keep things simple.

This project is meant to be a way to learn about low-level systems subjects, and in the longer term maybe higher level subjects. Most of the software developed in the project is going to be in Rust, but I will still try to run external programs in other languages (C/C++ utilities for example).

With this plan let's start tinkering with a simple kernel!

RISC-V boot sequence

When a RISC-V CPU first boots, it is in a mode called "Machine mode" (M). This is the most privileged of the three privilege levels, the other two being Supervisor (S) and User (U).

Machine mode is pretty bare-bones: it does not support features like virtual memory. It is meant for simple embedded devices that don't need those extra features, so costs can be reduced.

In order to build a complex kernel and operating system, having access to virtual memory is going to be useful. This requires our kernel to run in Supervisor mode (with our applications running in User mode), so we need the CPU to transition from M mode to S mode. This could be done in the OS itself, but we can also rely on the SBI specification for this.

SBI means "Supervisor Binary Interface": it is an official RISC-V specification defining an interface between a firmware running in Machine mode and an OS running in Supervisor mode.

We are going to use QEMU in order to run our OS, and QEMU comes bundled with OpenSBI, the reference implementation of SBI.

We can see this by launching QEMU without an OS: qemu-system-riscv64 -M virt -nographic:

OpenSBI v1.3.1
   ____                    _____ ____ _____
  / __ \                  / ____|  _ \_   _|
 | |  | |_ __   ___ _ __ | (___ | |_) || |
 | |  | | '_ \ / _ \ '_ \ \___ \|  _ < | |
 | |__| | |_) |  __/ | | |____) | |_) || |_
  \____/| .__/ \___|_| |_|_____/|___/_____|
        | |
        |_|
Platform Name             : riscv-virtio,qemu


Domain0 Next Address      : 0x0000000000000000
Domain0 Next Mode         : S-mode


(Note: to exit QEMU you can press Ctrl-A, then X.)

OpenSBI has three ways to continue execution to the next Supervisor stage: FW_DYNAMIC, FW_JUMP and FW_PAYLOAD.

When launching QEMU with -kernel <an elf file>, the FW_JUMP method is used: the firmware jumps to a fixed address. So we can simply put our kernel at the right address and it will start executing directly!

The kernel: hades

This project is going to be called pantheon, with components named after various gods. hades seems like a fitting name for the kernel, as it controls what lies below!

Project structure

Let's start with a basic project structure:

├── .envrc
├── flake.lock
├── flake.nix
├── hades
│  ├── .cargo
│  │  └── config
│  ├── Cargo.lock
│  ├── Cargo.toml
│  └── src
│     └── main.rs
└── Justfile

We are going to use just to handle building the different parts of the OS, and Nix & direnv to manage our development environment.

Here is our flake.nix:

{
  description = "A basic flake with a shell";
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
  inputs.flake-utils.url = "github:numtide/flake-utils";
  # Use a recent rust version, a bit like rustup, but pinned
  inputs.rust-overlay.url = "github:oxalica/rust-overlay";
  inputs.naersk.url = "github:nix-community/naersk";

  outputs = {
    nixpkgs,
    flake-utils,
    rust-overlay,
    naersk,
    ...
  }:
    flake-utils.lib.eachDefaultSystem (system: let
      pkgs = import nixpkgs {
        inherit system;
        overlays = [(import rust-overlay)];
      };
      rust = pkgs.rust-bin.stable.latest.default.override {
        # Add the RISC-V target
        targets = ["riscv64gc-unknown-none-elf"];
      };
      naersk' = pkgs.callPackage naersk {
        cargo = rust;
        rustc = rust;
      };
    in {
      devShell = pkgs.mkShell {
        nativeBuildInputs = with pkgs; [
          rust
          just
          qemu
        ];
        RUST_PATH = "${rust}";
        RUST_DOC_PATH = "${rust}/share/doc/rust/html/std/index.html";
      };
    });
}

This allows us to have a rust compiler & just available.

Then let's define our Justfile to run the kernel:

alias r := run

kernel_path := "./hades/target/riscv64gc-unknown-none-elf/debug/hades"

build:
    cd hades && cargo build

run EXTRA_ARGS="": build
    qemu-system-riscv64 {{EXTRA_ARGS}} -M virt -m 2G -nographic \
        -kernel {{kernel_path}}

debug: (run "-gdb tcp::1234 -S")

gdb:
    gdb {{kernel_path}} \
        -ex 'target remote localhost:1234'

This allows us to launch our kernel with just run, and to debug it by running just debug in one terminal and just gdb in another.

We are also going to add a hades/.cargo/config file to always build our kernel for RISC-V 64:

[build]
target = "riscv64gc-unknown-none-elf"

Booting with OpenSBI

With the -kernel {{kernel_path}} argument, we can now see something different in the SBI output:

Domain0 Next Address      : 0x0000000080200000

This means that we are going to need to have a kernel that starts execution at that address! To do this we are going to use one of the few external dependencies I am going to allow myself: riscv-rt.

This crate defines the few functions needed to transition from assembly to Rust code (setting up the stack, the BSS section, interrupt handlers, ...).
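To use it, the crate first needs to be added to hades/Cargo.toml (the version shown here is an assumption; pick whatever is current on crates.io):

```toml
[dependencies]
riscv-rt = "0.12"
```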

To use it properly we are going to need to add a few files, first memory.x:

MEMORY
{
    RAM : ORIGIN = 0x80200000, LENGTH = 16M
}

REGION_ALIAS("REGION_TEXT", RAM);
REGION_ALIAS("REGION_RODATA", RAM);
REGION_ALIAS("REGION_DATA", RAM);
REGION_ALIAS("REGION_BSS", RAM);
REGION_ALIAS("REGION_HEAP", RAM);
REGION_ALIAS("REGION_STACK", RAM);


This defines the memory layout of our machine. We are going to define a simple memory layout for now, by setting the RAM at the OpenSBI jump address.

We can then define a build.rs to allow rustc to pick up our memory.x:

use std::env;
use std::fs;
use std::path::PathBuf;

fn main() {
    let out_dir = PathBuf::from(env::var("OUT_DIR").unwrap());

    // Put the linker script somewhere the linker can find it.
    fs::write(out_dir.join("memory.x"), include_bytes!("memory.x")).unwrap();
    println!("cargo:rustc-link-search={}", out_dir.display());
}


Finally we need to tell the linker to read our scripts, we can do this by adding the following in the .cargo/config file:

[target.riscv64gc-unknown-none-elf]
rustflags = [
    "-C", "link-arg=-Tlink.x",
]

link.x is a linker script provided by the riscv-rt crate; it includes our memory.x and uses the different REGION_ALIASes we defined there.

In our src/main.rs we can then define the most basic RISC-V kernel:


#![no_std]
#![no_main]

use core::panic::PanicInfo;
use riscv_rt::entry;

// Wait for interrupt, allows the CPU to go into a power saving mode
pub fn wfi() {
    unsafe { core::arch::asm!("wfi") }
}

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {
        wfi();
    }
}

#[entry]
fn main() -> ! {
    loop {
        wfi();
    }
}
Printing a banner

Right now our kernel is not very interesting, as we can't output anything to the serial console. Let's try to print the following banner at boot (thank you figlet!):


 ,,           |\               
 ||      _     \\              
 ||/\\  < \,  / \\  _-_   _-_, 
 || ||  /-|| || || || \\ ||_.  
 || || (( || || || ||/    ~ || 
 \\ |/  \/\\  \\/  \\,/  ,-_-  

We can use OpenSBI for this! Until now we didn't really use the "Interface" in SBI, only the "Supervisor". Let's see how SBI works!

The SBI specification lists all the calls that can be used by a supervisor.

The calls are divided into "extensions" that group a number of related "functions", for example we can use the timer extension, the system reset extension, or in our case the debug console extension.

In order to call an SBI procedure we need to write some RISC-V assembly code.

We need to perform an ecall instruction, and specify in the a7 register the extension ID and in the a6 register the function ID.

For example the debug console extension ID is 0x4442434E, which is the ASCII string "DBCN".
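We can check this correspondence with a quick host-side snippet (plain Rust, nothing RISC-V specific): the four ASCII bytes of "DBCN", read as a big-endian 32-bit integer, give exactly the extension ID.

```rust
fn main() {
    // "DBCN" read as a big-endian 32-bit integer gives the extension ID.
    let eid = u32::from_be_bytes(*b"DBCN");
    assert_eq!(eid, 0x4442434E);
    println!("{eid:#010X}"); // prints 0x4442434E
}
```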

In the debug console extension we have three functions (with their function IDs): Console Write (0), Console Read (1) and Console Write Byte (2).

All SBI procedures then return their results through the a0 and a1 registers. a0 is a status code denoting success or cause of failure, and a1 is a procedure specific result value.

In order to pass arguments to SBI procedures we fill in the a0 through a7 registers.

For example the Console Write function takes three arguments, that we can pass in a0, a1, a2.
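The sbi_ret helper used below can be sketched as a plain function mapping the status code from a0 to a Result carrying the a1 value. The numeric error codes come from the SBI specification's binary encoding chapter; the SbiError enum and its naming are my own:

```rust
/// Error codes from the SBI specification's binary encoding chapter.
/// (The enum and its naming are my own; only the numeric values are
/// defined by the spec.)
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum SbiError {
    Failed,         // -1
    NotSupported,   // -2
    InvalidParam,   // -3
    Denied,         // -4
    InvalidAddress, // -5
    Unknown(isize),
}

pub type SbiResult<T> = Result<T, SbiError>;

/// Map the status returned in a0 to a Result carrying the a1 value.
pub fn sbi_ret<T>(status: isize, value: T) -> SbiResult<T> {
    match status {
        0 => Ok(value),
        -1 => Err(SbiError::Failed),
        -2 => Err(SbiError::NotSupported),
        -3 => Err(SbiError::InvalidParam),
        -4 => Err(SbiError::Denied),
        -5 => Err(SbiError::InvalidAddress),
        e => Err(SbiError::Unknown(e)),
    }
}

fn main() {
    assert_eq!(sbi_ret(0, 42usize), Ok(42));
    assert_eq!(sbi_ret(-2, 0usize), Err(SbiError::NotSupported));
}
```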

We are now ready to write our wrapper around the procedure:

const DBCN_EID: u64 = 0x4442434E;

/// This function only works on RISC-V 64, not RISC-V 32
pub fn sbi_debug_console_write(data: &[u8]) -> SbiResult<usize> {
    let written_len: usize;
    let status: isize;

    unsafe {
        core::arch::asm!(
            "ecall",
            in("a7") DBCN_EID,
            in("a6") 0u64, // Console Write function ID
            // a0: length of the buffer to print
            // a1: lower 64 bits of the buffer address
            // a2: upper 64 bits of the address (unused on RISC-V 64)
            inlateout("a0") data.len() => status,
            inlateout("a1") data.as_ptr() => written_len,
            in("a2") 0u64,
        );
    }

    sbi_ret(status, written_len) // function checking the status to pass the value
}

In order to use this more easily to print text we can make a type that implements core::fmt::Write:

pub struct DebugConsole;

impl core::fmt::Write for DebugConsole {
    fn write_str(&mut self, mut s: &str) -> core::fmt::Result {
        while !s.is_empty() {
            match sbi_debug_console_write(s.as_bytes()) {
                Ok(n) => s = &s[n..],
                Err(_) => return Err(core::fmt::Error),
            }
        }
        Ok(())
    }
}

We can then use the write! macro: write!(DebugConsole, "Some text: {}", "foo").
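A nice property of going through core::fmt::Write is that the formatting machinery is allocation-free and works in no_std. The same pattern can be exercised on the host by swapping the SBI call for a buffer: ByteSink below is a hypothetical stand-in for DebugConsole, for illustration only.

```rust
use core::fmt::Write;

// ByteSink is a hypothetical stand-in for DebugConsole that writes into
// a fixed buffer, so the core::fmt::Write pattern can be tried on the host.
struct ByteSink {
    buf: [u8; 64],
    len: usize,
}

impl Write for ByteSink {
    fn write_str(&mut self, s: &str) -> core::fmt::Result {
        let bytes = s.as_bytes();
        if self.len + bytes.len() > self.buf.len() {
            return Err(core::fmt::Error);
        }
        self.buf[self.len..self.len + bytes.len()].copy_from_slice(bytes);
        self.len += bytes.len();
        Ok(())
    }
}

fn main() {
    let mut sink = ByteSink { buf: [0; 64], len: 0 };
    write!(sink, "Some text: {}", "foo").unwrap();
    assert_eq!(&sink.buf[..sink.len], b"Some text: foo");
}
```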

Let's define a few more helper macros:

use core::fmt::{self, Write};

#[macro_export]
macro_rules! debug_print {
    ($($arg:tt)*) => ($crate::_debug_print_args(format_args!($($arg)*)));
}

#[macro_export]
macro_rules! debug_println {
    ($($arg:tt)*) => {{
        $crate::_debug_println_args(format_args!($($arg)*));
    }};
}

pub fn _debug_print_args(args: fmt::Arguments) {
    let _ = write!(DebugConsole, "{args}");
}

pub fn _debug_println_args(args: fmt::Arguments) {
    let _ = writeln!(DebugConsole, "{args}");
}

We can then add debug_println!("\n{BANNER}\n"); in our main!

Handling Panics

In order to handle panics correctly we can change our panic handler in two ways: printing the panic info on the debug console, and shutting down the machine with the SBI system reset extension.

This is very similar to the debug console implementation, so I'm not going to write about it here.

Next steps

The next step is to find the address of the RAM & create a page table in order to be able to use virtual memory!

Next part is about setting up virtual memory.