StarFive VisionFive 2

This page describes several methods for installing Gentoo on the StarFive VisionFive 2.

Introduction

This page aims to provide a comprehensive introduction to the various methods of installing Gentoo onto embedded hardware, using the VisionFive 2 as an example device.

The majority of the instructions here are not specific to the VisionFive 2 and may be applied to other devices with similar hardware. Please apply some common sense when adapting these instructions for other devices and do not blindly copy and paste commands without understanding what they do.

It should also be noted that the processes described hereafter are not always the most efficient in terms of the commands used: for example, environment variables that would typically be exported once are repeated in each command, and additional options are repeated where a wrapper script would normally be useful. This is intentional, as the article is intended to be a learning experience for the reader.

This article is intended to supplement the Embedded Handbook.

Hardware

The StarFive VisionFive 2 is a Single Board Computer (SBC) based on the StarFive JH7110 SoC with a quad-core SiFive U74 RISC-V CPU running at 1.5 GHz and an Imagination BXE-4-32 GPU. It comes in variants of 2/4/8 GB of LPDDR4 memory and uses the rv64gc subarch.

This SBC is notable for supporting TF/SD, eMMC, USB and NVMe storage devices, as well as having a 40-pin GPIO header and a 2-bit RGPIO boot device selector switch.

Useful notes

Some useful notes that may be of interest to the reader can be found below.

Musl

This example uses glibc. It is possible to use musl as the system's C library instead. The TL;DR is:

  • Use the tuple riscv64-unknown-linux-musl instead of riscv64-unknown-linux-gnu wherever crossdev is in use.
  • Obtain (or build) any lp64d musl stage3 tarball and use that.
  • Select an appropriate musl profile.
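
For example, the crossdev invocation used later in this article would become:

root #crossdev --target riscv64-unknown-linux-musl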

Faster installation

Anywhere that QEMU-user is invoked to build a cross-arch package with Portage within a chroot, the package may instead be cross-compiled with crossdev on the host and installed into the image with Portage, as follows:

root #riscv64-unknown-linux-gnu-emerge --ask sys-kernel/dracut
root #cd rootfs
root #ROOT=$PWD/ riscv64-unknown-linux-gnu-emerge --ask --usepkgonly --oneshot sys-kernel/dracut

It will be faster to cross-compile packages and install them into the image than to use QEMU-user to build them within the chroot, though this is not the preferred approach.

make.conf

Some useful additions for cross-compiling packages and identifying breakage in failed package builds:

FILE /etc/portage/make.conf
# Colour in portage output, useful for debugging
# Needed for ninja (e.g. z3)
CLICOLOR_FORCE=1
# https://gitlab.kitware.com/cmake/cmake/-/merge_requests/6747
# https://github.com/ninja-build/ninja/issues/174
CMAKE_COMPILER_COLOR_DIAGNOSTICS=ON
CMAKE_COLOR_DIAGNOSTICS=ON

# Common flags for cross-compiling and colour; params pulled from -march=native
COMMON_FLAGS="-mabi=lp64d -march=rv64imafdc_zicsr_zba_zbb -mcpu=sifive-u74 -mtune=sifive-7-series -O2 -pipe -fdiagnostics-color=always -frecord-gcc-switches --param l1-cache-size=32 --param l2-cache-size=2048"

# Enable QA messages from iwdevtools
PORTAGE_ELOG_CLASSES="${PORTAGE_ELOG_CLASSES} qa"

RISCV ISA standard and extensions

When identifying the RISC-V ISA standard and extensions for the target device, the following table may be useful:

Name Description
RV32I Base Integer Instruction Set - 32-bit
RV32E Base Integer Instruction Set (embedded) - 32-bit, 16 registers
RV64I Base Integer Instruction Set - 64-bit
RV128I Base Integer Instruction Set - 128-bit
Extension Description
M Standard Extension for Integer Multiplication and Division
A Standard Extension for Atomic Instructions
F Standard Extension for Single-Precision Floating-Point
D Standard Extension for Double-Precision Floating-Point
G Shorthand for the base and above extensions
Q Standard Extension for Quad-Precision Floating-Point
L Standard Extension for Decimal Floating-Point
C Standard Extension for Compressed Instructions
B Standard Extension for Bit Manipulation
J Standard Extension for Dynamically Translated Languages
T Standard Extension for Transactional Memory
P Standard Extension for Packed-SIMD Instructions
V Standard Extension for Vector Operations
N Standard Extension for User-Level Interrupts
H Standard Extension for Hypervisor
S Standard Extension for Supervisor-level Instructions

RISC-V defines the order in which the ISA subset must be specified:

  RV [32, 64, 128] I, M, A, F, D, G, Q, L, C, B, J, T, P, V, N

For example, RV32IMAFDQC is legal, whereas RV32IMAFDCQ is not.

In the case of the VisionFive 2, the following identifiers are both valid, though the first is more descriptive: rv64imafdc and rv64gc.

We have some additional extensions to take into account:

  • zicsr (Control and Status Register [CSR] Instructions); implied by the F extension
  • Bitmanip extensions Zba (address generation) and Zbb (Basic bit manipulation)

This results in the following descriptive and shorthand ISA strings for the VisionFive 2 board, respectively: rv64imafdc_zicsr_zba_zbb and rv64gc_zba_zbb.
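
On a running RISC-V system, the ISA string reported by the kernel for each hart may be inspected as a sanity check (the exact string depends on the kernel version and may not list every extension):

user $grep isa /proc/cpuinfo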

Prerequisites

When working with embedded hardware, a UART interface that can be attached to the device is essential. Consult the documentation for the device to determine the UART interface pinout. The VisionFive 2 has a UART interface on the 40-pin GPIO header.

Configure QEMU-user and binfmt

When working with embedded systems it is often desirable to chroot into the image that is to be deployed to the target device. QEMU-user may be used to chroot into a rootfs for a different architecture than the host system. This is particularly useful for installing packages and configuring the system before deploying it to the target device.

Configure and install QEMU; make a binpkg to install into the chroot:

root #echo 'QEMU_SOFTMMU_TARGETS="riscv64 x86_64"' >> /etc/portage/make.conf
root #echo 'QEMU_USER_TARGETS="riscv64"' >> /etc/portage/make.conf
root #echo app-emulation/qemu static-user >> /etc/portage/package.use/qemu
root #echo ':riscv64:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xf3\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-riscv64:' > /proc/sys/fs/binfmt_misc/register
root #echo ':riscv64:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xf3\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-riscv64:' > /etc/binfmt.d/qemu-riscv64-static.conf
root #systemctl restart systemd-binfmt
root #emerge --ask app-emulation/qemu
root #gpasswd -a larry kvm
root #quickpkg app-emulation/qemu
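
To verify that the binfmt handler was registered (the entry takes its name from the :riscv64: field used above), inspect the corresponding binfmt_misc entry; it should report enabled and list /usr/bin/qemu-riscv64 as the interpreter:

root #cat /proc/sys/fs/binfmt_misc/riscv64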

Boot sequence

The boot sequence of a typical RISC-V device is as follows:

  1. When the SoC is powered on, the CPU fetches instructions beginning at address 0x0, where the BootROM (BROM, AKA the Primary Program Loader) is located.
  2. The BROM loads the Secondary Program Loader (SPL) from some form of Non Volatile Memory (NVM).
  3. The SPL loads a U-Boot Flattened Image Tree (FIT) image (presumably from the same device). This FIT image contains the U-Boot binary, the Device Tree Blob (DTB) and the OpenSBI binary. This image may be combined with the SPL.
  4. U-Boot loads the Linux Kernel.

In the case of the VisionFive 2/JH7110, the BROM is located in 32k of on-chip (on the SoC) memory, which may be seen on the SoC block diagram.

The VisionFive 2 BROM uses the state of the RGPIO pins on the board to determine which attached storage device (NVM) to load the U-Boot Secondary Program Loader (U-Boot SPL) (u-boot-spl.bin.normal.out) from. By default this is a partition with a GUID type code of 2E54B353-1271-4842-806F-E436D6AF6985. The number of this partition is irrelevant, though it is typically partition 1.

In its default configuration, the U-Boot SPL then loads the U-Boot FIT image (u-boot.itb) from partition 2 (CONFIG_SYS_MMCSD_RAW_MODE_U_BOOT_PARTITION=0x2). When formatting this partition the recommended GUID type code is BC13C2FF-59E6-4262-A352-B275FD6F7172.

The FIT image (u-boot.itb) is a combination of OpenSBI's fw_dynamic.bin, u-boot-nodtb.bin and the device tree blob (jh7110-starfive-visionfive-2-v1.3b.dtb or jh7110-starfive-visionfive-2-v1.2a.dtb).
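
To inspect the contents of an existing FIT image such as u-boot.itb from the host, mkimage from dev-embedded/u-boot-tools (installed later in this article) may be used to list its components:

user $mkimage -l u-boot.itb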

The VisionFive 2 has a two-switch RGPIO header on the board. The configurations are as follows:

Option RGPIO_0 RGPIO_1
Onboard QSPI Flash 0 0
SDIO3.0 1 0
eMMC 0 1
UART 1 1

By default, the device is configured to load firmware from QSPI flash.

Firmware

Before attempting to install Gentoo onto the VisionFive 2 it is essential to update the firmware. Depending on the age of the installed firmware, current official images from StarFive may fail to load.

There are several methods for updating the firmware, including:

  • Using the U-Boot console to retrieve and install firmware images via tftp
  • Invoking the flashcp binary (from sys-fs/mtd-utils) on a running system
  • Sending firmware binaries over a UART console interface.
  • Booting updated firmware off removable media

This guide will focus on updating the firmware on the QSPI flash via UART as it involves the least additional software and does not require a bootable board.

Note
Updating U-Boot will likely erase the contents of the QSPI flash. If you have a working system that relies on variables saved in the QSPI flash, back it up before proceeding!

UART firmware upgrade

First, gather the following firmware files from the latest VisionFive2 software release:

  • u-boot-spl.bin.normal.out
  • visionfive2_fw_payload.img

Install an appropriate serial communication tool (e.g. net-dialup/minicom), then boot the device and obtain a U-Boot shell. This may be accomplished by powering the device on with a UART interface attached (please see the documentation for detailed instructions) and either:

  • Pressing a key to interrupt the boot sequence.
  • Booting with no TF card inserted.

By default the VisionFive 2 U-Boot console uses 115200 baud, 8N1, with no flow control; note that hardware flow control is often enabled by default in serial terminal programs such as minicom and must be disabled. The following command may be used to connect to the device:

user $minicom -D /dev/ttyUSB0
Hit any key to stop autoboot: 0

Use the loady command to prepare the bootloader to receive a firmware file:

StarFive #loady

Use Ctrl+A then S to enter the file transfer menu, then select Ymodem. Select a single firmware file to transfer and wait for the transfer to complete.

Although ymodem transfers validate with a CRC16 checksum after every chunk (~1KB), it may be desirable to validate that the file transfer was successful by comparing the CRC32 checksum of the file on the host with the CRC32 checksum of the file on the device:

StarFive #crc32 $loadaddr $filesize
crc32 for a0000000 ... a02d1884 ==> 452c6590
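
To compute the matching CRC32 on the host for comparison, any CRC32 tool will do; for example, using Python's zlib module (substitute whichever firmware file was just transferred):

user $python -c 'import sys, zlib; print(hex(zlib.crc32(open(sys.argv[1], "rb").read())))' visionfive2_fw_payload.img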

Probe for SPI flash and flash the firmware file using the appropriate offset:

  • u-boot-spl.bin.normal.out is 0x0
  • visionfive2_fw_payload.img is 0x100000
StarFive #sf probe
SF: Detected gd25lq128 with page size 256 Bytes, erase size 4 KiB, total 16 MiB
StarFive #sf update $loadaddr 0x100000 $filesize
device 0 offset 0x100000, size 0x2d1885
1042565 bytes written, 1912832 bytes skipped in 8.618s, speed 351041 B/s

Once both firmware files have been uploaded, send a reset command to reboot the device:

StarFive #reset
resetting ...

U-Boot SPL 2021.10 (Mar 24 2023 - 01:42:56 +0800)
DDR version: dc2e84f0.
Trying to boot from SPI

Building a system image

There are two philosophies when it comes to installing an operating system onto an SBC / embedded device such as the VisionFive 2.

The first involves writing a static system image, typically a squashfs, onto some form of media (typically eMMC, though NVMe is on the rise). The initramfs is able to load this image into RAM and use it as a rootfs; when an update is required the whole image is replaced as a single operation. There are advantages to this approach, particularly for embedded devices where users are not expected to update individual packages and recovery 'in-the-field' may be impractical, or to provide an A/B partition layout for updates. Systems configured in this way are also resilient when it comes to unexpected shutdowns as the only time that the rootfs storage volume is performing writes is when this image is being updated. This approach will be described as an 'embedded' installation going forward and called out where possible.

The second approach involves writing a rootfs onto some accessible storage medium attached to the device and using a package manager to install and update packages as required. For a Gentoo system this approach is more flexible and allows for a more traditional Linux experience, but requires more effort to set up and maintain. This approach will be described as a 'traditional' installation going forward and called out where possible.

The process of generating a system image for a Gentoo installation on the VisionFive 2 may be broadly described as follows:

  • Gather the installation files and generate a cross-compiler
    • Check out the VisionFive 2 SDK OR kernel sources (until the upstream kernel has full hardware support)
    • Build a cross-compiler and use it to build a kernel, initramfs, and FIT image
  • Generate a Gentoo rootfs
    • Unpack and customize a Gentoo Stage 3 tarball to create a Gentoo rootfs
    • Use Catalyst to generate an image from scratch
  • Load the Flattened Image Tree (FIT) onto the device
  • Write a Gentoo rootfs to the selected storage medium.

Generate a cross-compiler

A cross-compiler will typically also be required as the VisionFive 2 is a RISC-V device and most users will be running an x86_64 (amd64) host.

Check out the VisionFive 2 BSP

The VisionFive 2 Board Support Package (BSP) (sometimes called an SDK by manufacturers) is a git repository containing a collection of scripts (and git submodules) that may be used to bootstrap a cross-compiler and build a Linux kernel, initramfs, rootfs, U-Boot, and FIT image for the VisionFive 2.

As most readers of this article already use Gentoo, this step is not essential; Gentoo users have the option of using crossdev (from sys-devel/crossdev) to get an up-to-date riscv64 cross-compiler with which to build the kernel, initramfs, and FIT image. However, the SDK is still useful: it provides a convenient way to build U-Boot and a rootfs, contains a great deal of information about how the developers intend for images to be written to the device, and, for inexperienced embedded users, provides a way to generate a guaranteed-bootable image and a less complex approach which may be preferable.

The VisionFive 2 BSP repository is available at: https://github.com/starfive-tech/VisionFive2

Use the following commands to check out the repository:

user $git clone https://github.com/starfive-tech/VisionFive2.git
Cloning into 'VisionFive2'...
remote: Enumerating objects: 4479, done.
remote: Counting objects: 100% (603/603), done.
remote: Compressing objects: 100% (311/311), done.
remote: Total 4479 (delta 331), reused 552 (delta 292), pack-reused 3876
Receiving objects: 100% (4479/4479), 290.58 MiB | 6.31 MiB/s, done.
Resolving deltas: 100% (2457/2457), done.
user $cd VisionFive2
user $git submodule update --init --recursive
Submodule 'buildroot' (https://github.com/starfive-tech/buildroot.git) registered for path 'buildroot'
Submodule 'linux' (https://github.com/starfive-tech/linux.git) registered for path 'linux'
Submodule 'opensbi' (https://github.com/starfive-tech/opensbi.git) registered for path 'opensbi'
Submodule 'soft_3rdpart' (https://github.com/starfive-tech/soft_3rdpart.git) registered for path 'soft_3rdpart'
Submodule 'u-boot' (https://github.com/starfive-tech/u-boot.git) registered for path 'u-boot'
Cloning into '/data/development/visionfive/VisionFive2/buildroot'...
Cloning into '/data/development/visionfive/VisionFive2/linux'...
Cloning into '/data/development/visionfive/VisionFive2/opensbi'...
Cloning into '/data/development/visionfive/VisionFive2/soft_3rdpart'...
Cloning into '/data/development/visionfive/VisionFive2/u-boot'...
Submodule path 'buildroot': checked out '762ee9bc4e1fbdaf09675acaed9516d6c136d5b1'
Submodule path 'linux': checked out 'a87c6861c6d96621026ee53b94f081a1a00a4cc7'
Submodule path 'opensbi': checked out 'c6a092cd80112529cb2e92e180767ff5341b22a3'
Submodule path 'soft_3rdpart': checked out 'cd7b50cd9f9eca66c23ebd19f06a172ce0be591f'
Submodule path 'u-boot': checked out '688befadf1d337dee3593e6cc0fe1c737cc150bd'

Use the BSP to build everything

For inexperienced embedded developers it may be desirable to use the VisionFive 2 SDK to build the kernel, initramfs, U-Boot, etc. This is not the recommended approach as the BSP uses outdated dependencies that include a number of bugs and issues that have been fixed in the upstream components.

Use the SDK's build scripts to build a cross-compiler, kernel, initramfs, U-Boot, etc.

user $make -j$(nproc)

The output of this process (kernel, initramfs, device tree blobs [dtb], OR the FIT image containing them) may be used to boot a Gentoo rootfs; if this is all that is desired there is no need to build custom Gentoo versions.

Use crossdev

Rather than using the (already outdated) version of GCC specified in the VisionFive 2 SDK, crossdev may instead be used to build an up-to-date cross-compiler from the Gentoo repository. This will then be used to bootstrap a Catalyst stage, build the kernel, and build any required firmware binaries.

Install the sys-devel/crossdev package and generate a RISC-V cross toolchain (see Cross Build Environment for further information):

root #emerge --ask sys-devel/crossdev

Create an ebuild repository for crossdev, preventing it from choosing a (seemingly) random repository to store its packages:

root #mkdir -p /var/db/repos/crossdev/{profiles,metadata}
root #echo 'crossdev' > /var/db/repos/crossdev/profiles/repo_name
root #echo 'masters = gentoo' > /var/db/repos/crossdev/metadata/layout.conf
root #chown -R portage:portage /var/db/repos/crossdev

If the Gentoo ebuild repository is synchronized using Git, or any other method with Manifest files that do not include checksums for ebuilds:

FILE /var/db/repos/crossdev/metadata/layout.conf
masters = gentoo
thin-manifests = true

Instruct Portage and crossdev to use this ebuild repository:

FILE /etc/portage/repos.conf/crossdev.conf
[crossdev]
location = /var/db/repos/crossdev
priority = 10
masters = gentoo
auto-sync = no

Then build the cross-toolchain:

root #crossdev --target riscv64-unknown-linux-gnu

Once crossdev has built the cross-toolchain it will be installed to /usr/<target>. The cross-compiler may be used by prefixing the target to the command, e.g.

user $riscv64-unknown-linux-gnu-gcc
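
As a quick sanity check, a trivial program may be cross-compiled and inspected (the file names here are arbitrary); file should identify the result as a RISC-V ELF executable:

user $echo 'int main(void){ return 0; }' > hello.c
user $riscv64-unknown-linux-gnu-gcc -o hello hello.c
user $file hello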

Until the VisionFive 2 is fully supported by the upstream Linux kernel it is necessary to use the StarFive kernel fork. Check out the appropriate release tag in the linux submodule cloned earlier:

user $cd linux
user $git checkout tags/VF2_v2.11.5

Build the kernel

In order for the VisionFive 2 to run Linux a Linux Kernel is required. This is built using the cross-compiler provided by crossdev.

Note
It's recommended to use GCC 12 to build the VisionFive 2's 5.15.0 kernel
Note
CONFIG_SECCOMP=y is not set in the defconfig and should be manually enabled if packages such as net-libs/webkit-gtk will be installed with USE=seccomp.
Note
There are some out-of-tree kernel modules (jpu/venc/vdec) that should be installed in addition to this kernel. These modules are available from the VisionFive 2 soft_3rdpart repo, or by emerging media-video/vf2vpudev from the bingch overlay
user $cd linux
user $HWBOARD_FLAG=HWBOARD_VISIONFIVE2 ARCH=riscv CROSS_COMPILE=riscv64-unknown-linux-gnu- make starfive_visionfive2_defconfig
user $HWBOARD_FLAG=HWBOARD_VISIONFIVE2 ARCH=riscv CROSS_COMPILE=riscv64-unknown-linux-gnu- make menuconfig
user $HWBOARD_FLAG=HWBOARD_VISIONFIVE2 ARCH=riscv CROSS_COMPILE=riscv64-unknown-linux-gnu- make -j$(nproc) vmlinux all modules

If the build fails with an error such as use of enum ‘gsi_iterator_update’ without previous declaration, apply this patch: https://lore.kernel.org/lkml/DB6P189MB05681E9F4785DF2758B9875B9CA49@DB6P189MB0568.EURP189.PROD.OUTLOOK.COM/t/

Skipping ahead a bit, once a rootfs is available, install the kernel modules to the rootfs:

root #HWBOARD_FLAG=HWBOARD_VISIONFIVE2 ARCH=riscv INSTALL_MOD_PATH=/path/to/rootfs make modules_install

Once a Gentoo initramfs has been generated, mkimage from dev-embedded/u-boot-tools may be used to consolidate the kernel, initramfs, and dtb into a single file that U-Boot can load. This is not required; the kernel, initramfs, and dtb may also be loaded separately by U-Boot.
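
A FIT image is described by an image tree source (.its) file; the BSP ships visionfive2-fit-image.its as a template. The following is a simplified sketch of such a file, assuming an uncompressed kernel Image, a Gentoo initramfs, and the vendor device tree blob, with load addresses matching the stock VisionFive 2 U-Boot environment used later in this article; the data paths are placeholders and must be adjusted:

FILE visionfive2-fit-image.its
/dts-v1/;

/ {
    description = "Gentoo kernel, initramfs and FDT for the VisionFive 2";
    #address-cells = <1>;

    images {
        kernel {
            description = "Linux kernel";
            data = /incbin/("arch/riscv/boot/Image");
            type = "kernel";
            arch = "riscv";
            os = "linux";
            compression = "none";
            load = <0x40200000>;
            entry = <0x40200000>;
        };
        fdt {
            description = "Flattened device tree";
            data = /incbin/("arch/riscv/boot/dts/starfive/jh7110-visionfive-v2.dtb");
            type = "flat_dt";
            arch = "riscv";
            compression = "none";
            load = <0x46000000>;
        };
        ramdisk {
            description = "Gentoo initramfs";
            data = /incbin/("initramfs.cpio.gz");
            type = "ramdisk";
            arch = "riscv";
            os = "linux";
            compression = "none";
            load = <0x46100000>;
        };
    };

    configurations {
        default = "config-1";
        config-1 {
            description = "Gentoo";
            kernel = "kernel";
            fdt = "fdt";
            loadables = "ramdisk";
        };
    };
};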

root #echo "sys-apps/dtc yaml" > /etc/portage/package.use/device-tree-compiler
root #emerge --ask sys-apps/dtc dev-embedded/u-boot-tools
Note
Update the paths of the kernel, initramfs, and dtb in the .its file before running mkimage.
user $mkimage -f visionfive2-fit-image.its -A riscv -O linux -T flat_dt gentoo.fit

Root filesystem

There are several methods that may be used to create or obtain a Gentoo root filesystem for the JH7110 SoC. The simplest is to download and unpack a stage3 tarball from https://www.gentoo.org/downloads/, if a suitable one is available:

root #mkdir rootfs
root #tar xpvf stage3-*.tar.xz --xattrs-include='*.*' --numeric-owner -C rootfs

This example assumes that option has not been chosen, and Catalyst will instead be used to generate an appropriate stage3 tarball from scratch. If using an upstream stage3 tarball is desired, skip ahead to customising the rootfs.

It is possible to use the crossdev root under /usr directly to build a rootfs; this is broadly similar to using Catalyst, except that instead of generating a seed tarball the rootfs is built by chrooting directly into the crossdev root. The main advantage of this approach is that it can save a significant amount of time.

While this may be faster for quick development or building for a single device, if intending to target multiple configurations it is usually better to use Catalyst as a generic stage 1 image may be created and used as the base for multiple stage 3 images. Using Catalyst also provides better isolation between the host and target systems.

Build a seed tarball

To create a stage3 tarball, Catalyst requires a seed tarball. Catalyst will chroot into the seed and emerge packages for the new stage, ensuring that packages generated for stage tarballs are isolated from the host system.

This example will build a seed tarball from scratch; an appropriate stage3 tarball from upstream may be placed in /var/tmp/catalyst/builds/default and used instead.

Set the system profile

root #PORTAGE_CONFIGROOT=/usr/riscv64-unknown-linux-gnu eselect profile list
Available profile symlink targets:
  [1]   default/linux/riscv/20.0/rv64gc/lp64d (stable)
  [2]   default/linux/riscv/20.0/rv64gc/lp64d/desktop (dev)
  [3]   default/linux/riscv/20.0/rv64gc/lp64d/desktop/gnome (dev)
  [4]   default/linux/riscv/20.0/rv64gc/lp64d/desktop/gnome/systemd (dev)
  [5]   default/linux/riscv/20.0/rv64gc/lp64d/desktop/gnome/systemd/merged-usr (dev)
  [6]   default/linux/riscv/20.0/rv64gc/lp64d/desktop/plasma (dev)
  [7]   default/linux/riscv/20.0/rv64gc/lp64d/desktop/plasma/systemd (dev)
  [8]   default/linux/riscv/20.0/rv64gc/lp64d/desktop/plasma/systemd/merged-usr (dev)
  [9]   default/linux/riscv/20.0/rv64gc/lp64d/desktop/systemd (dev)
  [10]  default/linux/riscv/20.0/rv64gc/lp64d/desktop/systemd/merged-usr (dev)
  [11]  default/linux/riscv/20.0/rv64gc/lp64d/systemd (stable)
  [12]  default/linux/riscv/20.0/rv64gc/lp64d/systemd/merged-usr (stable)
  [13]  default/linux/riscv/20.0/rv64gc/lp64 (stable)
  [14]  default/linux/riscv/20.0/rv64gc/lp64/desktop (dev)
  [15]  default/linux/riscv/20.0/rv64gc/lp64/desktop/gnome (dev)
  [16]  default/linux/riscv/20.0/rv64gc/lp64/desktop/gnome/systemd (dev)
  [17]  default/linux/riscv/20.0/rv64gc/lp64/desktop/gnome/systemd/merged-usr (dev)
  [18]  default/linux/riscv/20.0/rv64gc/lp64/desktop/plasma (dev)
  [19]  default/linux/riscv/20.0/rv64gc/lp64/desktop/plasma/systemd (dev)
  [20]  default/linux/riscv/20.0/rv64gc/lp64/desktop/plasma/systemd/merged-usr (dev)
  [21]  default/linux/riscv/20.0/rv64gc/lp64/desktop/systemd (dev)
  [22]  default/linux/riscv/20.0/rv64gc/lp64/desktop/systemd/merged-usr (dev)
  [23]  default/linux/riscv/20.0/rv64gc/lp64/systemd (stable)
  [24]  default/linux/riscv/20.0/rv64gc/lp64/systemd/merged-usr (stable)
  [25]  default/linux/riscv/20.0/rv64gc/multilib (exp)
  [26]  default/linux/riscv/20.0/rv64gc/multilib/systemd (exp)
  [27]  default/linux/riscv/20.0/rv64gc/multilib/systemd/merged-usr (exp)
  [28]  default/linux/riscv/20.0/rv64gc/lp64d/musl (dev)
  [29]  default/linux/riscv/20.0/rv64gc/lp64/musl (dev)
root #PORTAGE_CONFIGROOT=/usr/riscv64-unknown-linux-gnu eselect profile set 10
Warning
As of profile 23.0, the equivalent profile is 35: default/linux/riscv/23.0/rv64/lp64d/systemd (stable) as documented in the 23.0 Profile update table.

If a profile marked experimental (exp) is desired, use the --force flag to enable the profile.

To avoid errors while building the seed, set the following USE flags to prevent conflicts over the default su provider:

root #mkdir /usr/riscv64-unknown-linux-gnu/etc/portage/package.use
root #echo "sys-apps/util-linux -su" > /usr/riscv64-unknown-linux-gnu/etc/portage/package.use/system

or

root #sed -i -e "s:-pam::" /usr/riscv64-unknown-linux-gnu/etc/portage/make.conf

Emerge the system:

root #riscv64-unknown-linux-gnu-emerge -va1 @system --keep-going
Note
At this point in the process, the Catalyst stage generation may be skipped and instead the system may be built by chrooting into the crossdev environment.

Create a seed tarball:

root #cd /usr/riscv64-unknown-linux-gnu/
root #tar -cvJf /tmp/riscv64-glibc-seed.tar.xz *

Catalyst

Install catalyst:

root #emerge --ask dev-util/catalyst
Important
As of July 2023, Catalyst is only supported as a live ebuild; see the Catalyst article for more information.

Create a Catalyst work directory, move the seed tarball to Catalyst's workdir, and build a Portage snapshot:

root #mkdir -p /var/tmp/catalyst/builds
root #mv /tmp/riscv64-glibc-seed.tar.xz /var/tmp/catalyst/builds/
root #emerge --sync
root #mkdir -p /var/tmp/catalyst/repos; pushd /var/tmp/catalyst/repos/
root #git clone --mirror /var/db/repos/gentoo
root #popd
root #catalyst --snapshot stable
18 May 2023 10:31:46 AEST: NOTICE  : Loading configuration file: /etc/catalyst/catalyst.conf
NOTICE:catalyst:Loading configuration file: /etc/catalyst/catalyst.conf
18 May 2023 10:31:46 AEST: NOTICE  : conf_values[options] = ['autoresume', 'bindist', 'kerncache', 'pkgcache', 'seedcache']
NOTICE:catalyst:conf_values[options] = ['autoresume', 'bindist', 'kerncache', 'pkgcache', 'seedcache']
18 May 2023 10:31:46 AEST: NOTICE  : >>> /usr/bin/git -C /var/tmp/catalyst/repos/gentoo.git fetch --quiet --depth=1
NOTICE:catalyst:>>> /usr/bin/git -C /var/tmp/catalyst/repos/gentoo.git fetch --quiet --depth=1
18 May 2023 10:31:46 AEST: NOTICE  : >>> /usr/bin/git -C /var/tmp/catalyst/repos/gentoo.git update-ref HEAD FETCH_HEAD
NOTICE:catalyst:>>> /usr/bin/git -C /var/tmp/catalyst/repos/gentoo.git update-ref HEAD FETCH_HEAD
18 May 2023 10:31:46 AEST: NOTICE  : >>> /usr/bin/git -C /var/tmp/catalyst/repos/gentoo.git gc --quiet
NOTICE:catalyst:>>> /usr/bin/git -C /var/tmp/catalyst/repos/gentoo.git gc --quiet
18 May 2023 10:31:47 AEST: NOTICE  : Creating gentoo tree snapshot afe106ae95ed7ba6536c870774c1b7e62d940ebd from /var/tmp/catalyst/repos/gentoo.git
NOTICE:catalyst:Creating gentoo tree snapshot afe106ae95ed7ba6536c870774c1b7e62d940ebd from /var/tmp/catalyst/repos/gentoo.git
18 May 2023 10:31:47 AEST: NOTICE  : >>> /usr/bin/git -C /var/tmp/catalyst/repos/gentoo.git archive --format=tar afe106ae95ed7ba6536c870774c1b7e62d940ebd |
NOTICE:catalyst:>>> /usr/bin/git -C /var/tmp/catalyst/repos/gentoo.git archive --format=tar afe106ae95ed7ba6536c870774c1b7e62d940ebd |
18 May 2023 10:31:47 AEST: NOTICE  :     /usr/bin/tar2sqfs /var/tmp/catalyst/snapshots/gentoo-afe106ae95ed7ba6536c870774c1b7e62d940ebd.sqfs -q -f -j1 -c gzip
NOTICE:catalyst:    /usr/bin/tar2sqfs /var/tmp/catalyst/snapshots/gentoo-afe106ae95ed7ba6536c870774c1b7e62d940ebd.sqfs -q -f -j1 -c gzip
18 May 2023 10:31:55 AEST: NOTICE  : Wrote snapshot to /var/tmp/catalyst/snapshots/gentoo-afe106ae95ed7ba6536c870774c1b7e62d940ebd.sqfs
NOTICE:catalyst:Wrote snapshot to /var/tmp/catalyst/snapshots/gentoo-afe106ae95ed7ba6536c870774c1b7e62d940ebd.sqfs

Create the Catalyst spec files that match the desired stage type:

Replace afe106ae95ed7ba6536c870774c1b7e62d940ebd in snapshot_treeish with the commit id that was given when running catalyst --snapshot stable.

root #cd /var/tmp/catalyst
FILE stage1-riscv64-systemd-mergedusr.spec
subarch: rv64_lp64d
target: stage1
version_stamp: systemd-mergedusr-20230518
interpreter: /usr/bin/qemu-riscv64
rel_type: default
profile: default/linux/riscv/20.0/rv64gc/lp64/systemd/merged-usr
snapshot_treeish: afe106ae95ed7ba6536c870774c1b7e62d940ebd
source_subpath: riscv64-glibc-seed
compression_mode: pixz
decompressor_search_order: xz bzip2
update_seed: yes
update_seed_command: -uDN @world
FILE stage3-riscv64-systemd-mergedusr.spec
subarch: rv64_lp64d
target: stage3
version_stamp: systemd-mergedusr-20230518
interpreter: /usr/bin/qemu-riscv64
rel_type: default
profile: default/linux/riscv/20.0/rv64gc/lp64/systemd/merged-usr
snapshot_treeish: afe106ae95ed7ba6536c870774c1b7e62d940ebd
source_subpath: default/stage1-rv64_lp64d_systemd-mergedusr-20230518.tar.xz
compression_mode: pixz
decompressor_search_order: xz bzip2

Finally, using Catalyst, build a Stage 1 image from the seed tarball, and a Stage 3 image from the Stage 1 image:

root #catalyst -f stage1-riscv64-systemd-mergedusr.spec
root #catalyst -f stage3-riscv64-systemd-mergedusr.spec
Tip
If @system fails to build, try checking out the releng repo and setting the portage_confdir variable to its location.
root #git clone -o upstream https://github.com/gentoo/releng.git
root #echo "portage_confdir = /path/to/releng/releases/portage/stages-qemu" >> /var/tmp/catalyst/builds/default/stage1-riscv64-musl-openrc.spec

This ensures that the most up-to-date portage configuration is available to the build process.

If this fails, and a suitable stage 3 image is available, try using that as the seed. If that fails, or is unavailable, ask for support in #gentoo-releng.

If a stage successfully builds the output will be located at /var/tmp/catalyst/builds/default.

Customise the RootFS

Once a stage 3 image has been obtained or constructed, the next task is personalisation of the root filesystem. This involves unpacking the root filesystem, bind-mounting the required pseudo-filesystems, and chrooting into it. For the purpose of the following commands, it is assumed that the root filesystem tarball is located at /var/tmp/catalyst/builds/default/stage3-riscv64-systemd-mergedusr-20230518.tar.xz.

root #mkdir rootfs
root #tar xpvf /var/tmp/catalyst/builds/default/stage3-*.tar.xz --xattrs-include='*.*' --numeric-owner -C rootfs

Emerge QEMU into the target and chroot; customise the system image (at least set the root password and add a regular user). Proceed with a typical stage 3 configuration, skipping the kernel and bootloader steps, which are handled separately below.

root #mount --bind rootfs rootfs
root #cd rootfs
root #ROOT=$PWD/ emerge --usepkgonly --oneshot --nodeps qemu
root #mount --bind /proc proc
root #mount --bind /sys sys
root #mount --bind /dev dev
root #mount --bind /dev/pts dev/pts
root #mkdir -p var/db/repos/gentoo
root #mount --bind /var/db/repos/gentoo var/db/repos/gentoo
root #mkdir -p usr/src/linux
root #mount --bind ../linux usr/src/linux
root #chroot . /bin/bash --login

To enable accelerated graphics, add the bingch ebuild repository to utilise its custom ebuilds for the Imagination BXE-4-32 GPU.

root #eselect repository add bingch git https://gitlab.com/bingch/gentoo-overlay.git
root #emerge --sync
root #echo 'media-libs/mesa::bingch vulkan' > /etc/portage/package.use/powervr_graphics
root #echo 'VIDEO_CARDS="imagination"' >> /etc/portage/make.conf
root #echo 'media-libs/img-gpu-powervr-bin restricted' > /etc/portage/package.license/powervr-gpu-blobs
Note
LLVM 15 is required to emerge bingch's customised mesa (alternatively, USE="-llvm" results in a slower swrast, which is not ideal for X11); until the Imagination BXE-4-32 is well supported upstream it is unlikely that ::gentoo will contain a version of LLVM that can build the older Mesa releases.
root #emerge --ask media-libs/mesa::bingch

Install the kernel and modules to the rootfs image:

root #pushd /usr/src/linux
root #make install && make modules_install

Generate an initramfs from within the chroot using a tool such as Dracut:

root #dracut --kver 5.15.0

Finally, exit the chroot and recursively unmount it:

root #exit
root #umount -R rootfs

Bootloader configuration

There are several methods of configuring the U-Boot bootloader, each with their own advantages and disadvantages.

The first is an external extlinux configuration file (also known as U-Boot Standard Boot) that determines the boot parameters. This enables U-Boot to load a dynamic configuration from disk, allowing the user to amend the bootloader configuration without requiring changes to the U-Boot environment.

U-Boot is capable of reading from ext2, ext4 and FAT filesystems; any of these filesystems may be used to store the kernel and extlinux/syslinux configuration file.

Regardless of the choice of filesystem, these files must be located in a directory called /boot at the root of the partition on which they reside. This means that the kernel and extlinux configuration file must be located at /boot/Image and /boot/extlinux/extlinux.conf respectively.

Within the unpacked rootfs, with the kernel at /boot/Image, create a file at /boot/extlinux/extlinux.conf that looks similar to this:

FILE /boot/extlinux/extlinux.conf
label default
    linux /Image
    append root=/dev/mmcblk0p2 rootwait console=ttyS0,115200 earlycon=sbi debug

The extlinux configuration file does not support FIT images.

The bootloader may also be configured by saving the U-Boot environment. This method of configuring the bootloader offers the most control, but requires that the user has a good understanding of the U-Boot environment variables and the boot process.

Imaging the device

There are several ways to get an image onto the device with the most straightforward method being simply writing an image to a TF (MicroSD) card. The VisionFive 2 has a built-in TF card reader, so this is a matter of imaging the card, inserting the card, selecting the appropriate boot option via the RGPIO switches, and powering on the device.

An alternate method where an image is loaded via TFTP will be described here.

The following files may be required to boot the VF2 board, depending on the selected boot method:

├── visionfive2_fw_payload.img
├── image.fit
├── initramfs.cpio.gz
├── u-boot-spl.bin.normal.out
├── linux/arch/riscv/boot
    ├── dts
    │   └── starfive
    │       ├── jh7110-visionfive-v2-ac108.dtb
    │       ├── jh7110-visionfive-v2.dtb
    │       ├── jh7110-visionfive-v2-wm8960.dtb
    │       ├── vf2-overlay
    │       │   └── vf2-overlay-uart3-i2c.dtbo
    └── Image.gz

A note about firmware

While the process of imaging the QSPI flash was documented earlier in this guide, it is not the only location from which firmware may be loaded. Depending on the configuration of the RGPIO switches (and saved or compiled-in U-Boot configuration) the VF2 device may attempt to load firmware from partitions on a variety of block devices or over UART.

M.2 installation using a single rootfs partition

This example uses the QSPI flash to store the firmware and U-Boot environment, alongside a single rootfs partition on the M.2 NVMe SSD; the FIT image is stored (and updated when necessary) under /boot on that partition. This configuration is broadly analogous to booting a legacy desktop with a monolithic partition and GRUB or LILO installed to the MBR, with some additional manual bootloader configuration.

It is not a requirement to store the FIT image on the same block device as the rootfs; however, it is often convenient to do so. A potential downside to this approach is that U-Boot only knows how to read ext2, ext4 and FAT file systems, which restricts the choice of rootfs filesystem. It is entirely possible to make /boot another partition and store the FIT image elsewhere. If this option is selected, ensure that partition numbers and device names are updated (or, ideally, use UUIDs for the rootfs in the kernel cmdline, as in the example below).
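
For example, to refer to the rootfs by partition UUID rather than a device name, obtain the PARTUUID on the running device (the partition path here is this example's NVMe rootfs) and substitute root=PARTUUID=<value> for root=/dev/nvme0n1p1 in the kernel command line defined later in this section:

root #blkid -s PARTUUID -o value /dev/nvme0n1p1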

It is assumed that the user has some familiarity with running a TFTP server such as net-ftp/tftp-hpa.

To begin, validate the produced image.fit. Set environment parameters, load the image from the TFTP server into memory, and boot it:

StarFive #setenv bootfile vmlinuz; setenv fdtcontroladdr 0xffffffffffffffff; setenv serverip 192.168.1.x; setenv ipaddr 192.168.1.x;
StarFive #tftpboot ${loadaddr} ${serverip}:image.fit;
StarFive #bootm start ${loadaddr};bootm loados ${loadaddr};run chipa_set_linux;run cpu_vol_set;booti 0x40200000 0x46100000:${filesize} 0x46000000
## Loading kernel from FIT Image at a0000000 ...
   Using 'config-1' configuration
   Trying 'vmlinux' kernel subimage
     Description:  vmlinux
     Type:         Kernel Image
     Compression:  uncompressed
     Data Start:   0xa00000c8
     Data Size:    24161280 Bytes = 23 MiB
     Architecture: RISC-V
     OS:           Linux
     Load Address: 0x40200000
     Entry Point:  0x40200000
   Verifying Hash Integrity ... OK
## Loading fdt from FIT Image at a0000000 ...
   Using 'config-1' configuration
   Trying 'fdt' fdt subimage
     Description:  unavailable
     Type:         Flat Device Tree
     Compression:  uncompressed
     Data Start:   0xa7f57d50
     Data Size:    48366 Bytes = 47.2 KiB
     Architecture: RISC-V
     Load Address: 0x46000000
     Hash algo:    sha256
     Hash value:   dae1bdb73c5a4806cc8ff17df2552c3152c5d858e76f89212a3de4714e63e40b
   Verifying Hash Integrity ... sha256+ OK
   Loading fdt from 0xa7f57d50 to 0x46000000
   Booting using the fdt blob at 0x46000000
## Loading loadables from FIT Image at a0000000 ...
   Trying 'ramdisk' loadables subimage
     Description:  buildroot initramfs
     Type:         RAMDisk Image
     Compression:  uncompressed
     Data Start:   0xa170ad7c
     Data Size:    109367045 Bytes = 104.3 MiB
     Architecture: RISC-V
     OS:           Linux
     Load Address: 0x46100000
     Entry Point:  unavailable
     Hash algo:    sha256
     Hash value:   9b07bf94f17c3ae38607e09944ebd036c962e91d731d2b81231088a7e56ad46e
   Verifying Hash Integrity ... sha256+ OK
   Loading loadables from 0xa170ad7c to 0x46100000
   Loading Kernel Image
## Flattened Device Tree blob at 46000000
   Booting using the fdt blob at 0x46000000
   Using Device Tree in place at 0000000046000000, end 000000004600eced

Starting kernel ...

If the device successfully boots using the generated FIT image it may be used to install the rootfs onto attached storage.

Use mkstage4 to generate a rootfs tarball:

root #pushd rootfs
root #mkstage4 -C gz -t $(pwd) ../work/visionfive2-rootfs
root #popd

Using the initramfs UART console, partition the NVMe device, set an IP, and copy the rootfs tarball to the device:

#gdisk /dev/nvme0n1
#mkdir /mnt/gentoo
#ip addr add 192.168.1.x/24 dev eth1
#ip route add default via 192.168.1.1 dev eth1
#scp larry@buildhost:visionfive2-rootfs.tar.gz /mnt/gentoo/

Unpack the tarball and configure the stage3 as usual. Copy image.fit to /boot. The FIT image produced by the VF2 SDK may be used to boot the Gentoo installation; however, if desired, a new FIT image may be generated using the Gentoo kernel and device tree.

#tar -xpf visionfive2-rootfs.tar.gz -C /mnt/gentoo

It is recommended that the dhcpcd (or an alternative DHCP client) and sshd services be enabled at this time.
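
With the systemd stage built above, and assuming net-misc/dhcpcd and net-misc/openssh are installed in the rootfs, this might look like the following (run from the initramfs console):

#chroot /mnt/gentoo systemctl enable dhcpcd.service sshd.service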

From a U-Boot console, scan and identify the NVMe device:

StarFive #nvme scan
StarFive #nvme info
Device 0: Vendor: 0x1179 Rev: 10604107 Prod: Y1BFC3Z2F8R3
            Type: Hard Disk
            Capacity: 976762.3 MB = 953.8 GB (2000409264 x 512)

As U-Boot is able to see the device, configure the bootloader to add an NVMe boot target.

Note
It doesn't appear to be possible to load a FIT image using a sysboot/extlinux/standard boot configuration. It would be ideal for this to be configured dynamically rather than requiring a U-Boot configuration change if (e.g.) the kernel cmdline needs to be updated. Investigate this further and add to the wiki!

First, load the image manually from disk:

StarFive #setenv bootfile vmlinuz; setenv fileaddr a0000000; setenv fdtcontroladdr 0xffffffffffffffff; nvme scan; ext4load nvme 0:1 $fileaddr boot/image.fit
StarFive #bootm start ${fileaddr};bootm loados ${fileaddr};run chipa_set_linux;run cpu_vol_set;booti 0x40200000 0x46100000:${filesize} 0x46000000

To debug a failing boot, try a combination of the following additional kernel cmdline options:

  • PID1 fails to start: systemd.log_level=debug systemd.log_target=console
  • Essential service fails to start after control handed to systemd: systemd.journald.forward_to_console=1

The 'debug' option may be removed from the default 'bootargs' environment setting if desired, once the system is booting successfully.

Once safe commands for booting the system have been identified, they need to be saved to the SPI flash as macros that can easily be run; several variables will be saved to make the boot process easier to read and debug. It is important to note that any lines containing other macros or environment variables must be enclosed in single quotes (') to ensure that they are expanded at run time rather than when they are defined. The following commands may be used to save the boot commands to the SPI flash:

StarFive #setenv nvme_bootargs 'setenv bootargs ${bootargs} root=/dev/nvme0n1p1 rw'
StarFive #setenv nvme_fitload 'setenv bootfile vmlinuz; setenv fdtcontroladdr 0xffffffffffffffff; nvme scan; ext4load nvme 0:1 $loadaddr boot/gentoo.fit'
StarFive #setenv nvme_bootfit 'bootm start ${loadaddr};bootm loados ${loadaddr};run chipa_set_linux;run cpu_vol_set;booti 0x40200000 0x46100000:${filesize} 0x46000000'
StarFive #setenv bootcmd_nvme0 'run nvme_bootargs nvme_fitload nvme_bootfit'
StarFive #setenv boot_targets 'mmc0 dhcp nvme0'
StarFive #saveenv
StarFive #reset
Saving Environment to SPI Flash...
Erasing SPI flash...Writing to SPI flash...done

Upon the next boot the device will load the kernel and initramfs from the NVMe device without interaction.

TF Card/eMMC

In this example the BROM is instructed to load firmware from an attached TF/eMMC device. It is very similar to the way that aarch64 devices are imaged, due to the similar boot mechanisms. The TF card will be partitioned using GPT.

An image will be generated manually on-disk which may then be installed onto the TF card using tools such as dd.

Create an image and mount it on an available loop device:

user $fallocate -l 8G visionfive2.img
user $doas losetup --find --show visionfive2.img
/dev/loop0
Tip
The sectors used below are pulled from the VF2 BSP image generation script. The numbers assume a 512-byte sector size with partitions aligned to 2048-sector boundaries. It is possible to use a different sector size; refer to the earlier guidance around the SoC's boot sequence.
root #
sgdisk --clear --set-alignment=2 \
  --new=1:4096:8191    --change-name=1:"spl"   --typecode=1:2E54B353-1271-4842-806F-E436D6AF6985 \
  --new=2:8192:16383   --change-name=2:"uboot" --typecode=2:5B193300-FC78-40CD-8002-E86C45580B47 \
  --new=3:16385:614399 --change-name=3:"image" --typecode=3:EBD0A0A2-B9E5-4433-87C0-68B6B72699C7 \
  --new=4:614400:0     --change-name=4:"root"  --typecode=4:0FC63DAF-8483-4772-8E79-3D69D8477DE4 \
  /dev/loop0
Creating new GPT entries in memory.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot or after you
run partprobe(8) or kpartx(8)
The operation has completed successfully.
Note
If the TF card is larger than the disk image (which is recommended; a smaller image saves a lot of time when imaging), resize2fs or a similar utility for the chosen file system may be used to resize the rootfs partition later.

Write the SPL and U-Boot

Note
It is possible to combine the SPL and U-Boot into a single FIT image. This will require a different partition layout but should otherwise be similar to this example. This is not covered here, but should be. Please help out by updating the wiki!

The SPL and U-Boot may be written to the image using dd.

root #dd if=u-boot/spl/u-boot-spl.bin of=/dev/loop0p1
root #dd if=u-boot/u-boot.itb of=/dev/loop0p2

Format and mount the boot and root partitions

The boot partition may be formatted with mkfs.fat, and the root partition with mkfs.ext4. The following commands may be used to format and mount the partitions:

root #mkfs.fat -F 32 -n boot /dev/loop0p3
root #mkfs.ext4 -L root /dev/loop0p4
Tip
For a more conventional /boot layout, make /boot a symlink to /mnt/sdcard/boot and mount sdcard at /mnt/sdcard.
root #mkdir -p /mnt/visionfive2
root #mount /dev/loop0p4 /mnt/visionfive2
root #cp -a rootfs/* /mnt/visionfive2
root #mkdir -p /mnt/visionfive2/mnt/sdcard
root #mount /dev/loop0p3 /mnt/visionfive2/mnt/sdcard
root #mv /mnt/visionfive2/boot /mnt/visionfive2/mnt/sdcard/
root #ln -s /mnt/sdcard/boot /mnt/visionfive2/boot

Edit fstab within the unpacked rootfs to cater for any file systems that need to be mounted on the device; a minimal sketch follows.
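
This sketch assumes the boot and root filesystem labels created above and the /mnt/sdcard mount point used in this example; adjust devices, options, and mount points as required:

FILE /mnt/visionfive2/etc/fstab
LABEL=root   /            ext4  noatime  0 1
LABEL=boot   /mnt/sdcard  vfat  noatime  0 2

Once fstab has been edited, unmount the image: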

root #umount -R /mnt/visionfive2
root #sync
root #losetup -d /dev/loop0

Write the image to a TF card

There are many methods available to do this. sys-boot/etcher-bin is a common choice for those who want an easy-to-use GUI; however, this example will use dd:

root #dd if=visionfive2.img | pv | dd of=/dev/mmcblk0 bs=4M

Move the secondary GPT to the actual end of the disk, delete the last partition then recreate it, and finally resize the file system:

root #sgdisk -e /dev/mmcblk0
root #sgdisk -d 4 /dev/mmcblk0
root #sgdisk --new=4:614400:0 --change-name=4:"root" --typecode=4:0FC63DAF-8483-4772-8E79-3D69D8477DE4 /dev/mmcblk0
root #partprobe /dev/mmcblk0
root #resize2fs /dev/mmcblk0p4

Boot the device

Once the TF card has been inserted into the device and the RGPIO switch is set to TF, the device should boot from the TF card.

See also

External resources