§Scarlet Kernel
The Scarlet Kernel is a bare metal, no_std
operating system kernel designed with architecture
flexibility in mind. It aims to provide a clean design with strong safety guarantees
through Rust’s ownership model.
While the current implementation establishes fundamental kernel functionality, our long-term vision is to develop a fully modular operating system where components can be dynamically loaded and unloaded at runtime, similar to loadable kernel modules in other systems.
§Core Features
- No Standard Library: Built using #![no_std] for bare metal environments, implementing only the essential functionality needed for kernel operation without relying on OS-specific features (a minimal crate-root sketch follows this list)
- Multi-Architecture Design: Currently implemented for RISC-V 64-bit, with a clean abstraction layer designed for supporting multiple architectures in the future
- Memory Management: Custom heap allocator with virtual memory support that handles physical and virtual memory mapping, page tables, and memory protection
- Task Scheduling: Cooperative and preemptive multitasking with priority-based scheduling and support for kernel and user tasks
- Driver Framework: Organized driver architecture with device discovery through FDT (Flattened Device Tree), supporting hot-pluggable and fixed devices
- Filesystem Support: Flexible Virtual File System (VFS) layer with support for mounting multiple filesystem implementations and unified path handling
- Hardware Abstraction: Clean architecture-specific abstractions that isolate architecture-dependent code to facilitate porting to different architectures
- Future Modularity: Working toward a fully modular design with runtime-loadable kernel components
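For orientation, a freestanding kernel crate of this kind starts from roughly the shape sketched below. The _start symbol, its signature, and the trivial panic handler are assumptions made for illustration; Scarlet's actual crate root enters via start_kernel after architecture-specific setup and prints panic information before looping.

```rust
// Minimal freestanding skeleton, for illustration only.
#![no_std]
#![no_main]

use core::panic::PanicInfo;

// Hypothetical entry symbol; the real kernel reaches `start_kernel` after
// architecture-specific assembly has set up a stack.
#[no_mangle]
pub extern "C" fn _start() -> ! {
    loop {}
}

// Every no_std binary needs a panic handler; Scarlet's prints the panic
// information first (see Development Notes below).
#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}
```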
§Resource Management with Rust’s Ownership Model
Scarlet leverages Rust’s ownership and borrowing system to provide memory safety without garbage collection:
- Zero-Cost Abstractions: Using Rust's type system for resource management without runtime overhead. For example, the device driver system uses traits to define common interfaces while allowing specialized implementations with no virtual dispatch cost when statically resolvable.
- RAII Resource Management: Kernel resources are automatically cleaned up when they go out of scope (see the sketch after this list), including:
  - File handles that automatically close when dropped
  - Memory allocations that are properly freed
  - Device resources that are released when no longer needed
- Mutex and RwLock: Thread-safe concurrent access to shared resources using the spin crate's lock implementations:
  - The scheduler uses locks to protect its internal state during task switching
  - Device drivers use locks to ensure exclusive access to hardware
  - Filesystem operations use RwLocks to allow concurrent reads but exclusive writes
- Arc (Atomic Reference Counting): Safe sharing of resources between kernel components:
  - Filesystem implementations are shared between multiple mount points
  - Device instances can be referenced by multiple drivers
  - System-wide singletons are managed safely with interior mutability patterns
- Memory Safety: Prevention of use-after-free, double-free, and data races at compile time:
  - The type system ensures resources are not used after being freed
  - Mutable references are exclusive, preventing data races
  - Lifetimes ensure references do not outlive the data they point to
- Trait-based Abstractions: Common interfaces for device drivers and subsystems enabling modularity:
  - The BlockDevice trait defines operations for block-based storage
  - The SerialDevice trait provides a common interface for UART and console devices
  - The FileSystem and FileOperations traits allow different filesystem implementations
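The RAII and locking patterns above can be compressed into a short sketch. The FileHandle type and the RUN_QUEUE static below are illustrative stand-ins rather than Scarlet's real definitions; only the use of the spin crate is taken from the list above.

```rust
use spin::Mutex;

/// Illustrative handle type: the resource is released in `Drop`, so callers
/// cannot forget to close it.
struct FileHandle {
    fd: usize,
}

impl Drop for FileHandle {
    fn drop(&mut self) {
        // A real kernel would hand the descriptor back to the VFS layer here.
        let _ = self.fd;
    }
}

/// Scheduler-like shared state behind a spinlock, as described above.
static RUN_QUEUE: Mutex<Option<usize>> = Mutex::new(None);

fn raii_and_locking_example() {
    let handle = FileHandle { fd: 3 };

    {
        // The guard releases the lock when it leaves scope (RAII again).
        let mut queue = RUN_QUEUE.lock();
        *queue = Some(handle.fd);
    } // lock released here

    // `handle` is dropped here, closing the file automatically.
}
```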
§Virtual File System
Scarlet implements a highly flexible Virtual File System (VFS) layer designed for containerization and process isolation with advanced bind mount capabilities:
§Core Architecture
- Per-Task VFS Management: Each task can have its own isolated VfsManager instance (see the sketch after this list):
  - Tasks store Option<Arc<VfsManager>> allowing independent filesystem namespaces
  - Support for complete filesystem isolation or selective resource sharing
  - Thread-safe operations via RwLock protection throughout the VFS layer
- Filesystem Driver Framework: Modular driver system with type-safe parameter handling:
  - Global FileSystemDriverManager singleton for driver registration and management
  - Support for block device, memory-based, and virtual filesystem creation
  - Structured parameter system replacing the old string-based configuration
  - Dynamic dispatch enabling future runtime filesystem module loading
- Enhanced Mount Tree: Hierarchical mount point management with bind mount support:
  - O(log k) path resolution performance, where k is the path depth
  - Independent mount point namespaces per VfsManager instance
  - Security-enhanced path normalization preventing directory traversal attacks
  - Efficient trie-based mount point storage reducing memory usage
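A rough sketch of the per-task namespace model described above. The Task and VfsManager definitions are simplified stand-ins, not the kernel's actual types; only the Option<Arc<VfsManager>> shape is taken from the documentation.

```rust
extern crate alloc;
use alloc::sync::Arc;

// Simplified stand-ins for the real types.
struct VfsManager { /* mount tree, registered drivers, ... */ }

struct Task {
    // `None`: the task has no VFS of its own (e.g. a pure kernel task).
    // `Some`: either a private namespace or one shared via `Arc::clone`.
    vfs: Option<Arc<VfsManager>>,
}

fn spawn_isolated() -> Task {
    // A fresh VfsManager gives the task a completely independent namespace.
    Task { vfs: Some(Arc::new(VfsManager {})) }
}

fn spawn_sharing(parent: &Task) -> Task {
    // Cloning the Arc shares the parent's mount tree instead of isolating it.
    Task { vfs: parent.vfs.clone() }
}
```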
§Bind Mount Functionality
Advanced bind mount capabilities for flexible directory mapping and container orchestration:
- Basic Bind Mounts: Mount directories from one location to another within the same VfsManager
- Cross-VFS Bind Mounts: Share directories between isolated VfsManager instances for container resource sharing
- Read-Only Bind Mounts: Security-enhanced mounting with write protection
- Shared Bind Mounts: Mount propagation sharing for complex namespace scenarios
- Thread-Safe Operations: Bind mount operations callable from system call context
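The sketch below is purely hypothetical: the method names and signatures are invented to visualize how the bind mount variants above might be driven, and the real API in the fs module may look quite different.

```rust
// Purely hypothetical: these methods are stubbed out for illustration and are
// not the crate's actual bind mount API.
struct VfsManager;

impl VfsManager {
    fn bind_mount(&self, _source: &str, _target: &str) { /* ... */ }
    fn bind_mount_from(&self, _src_vfs: &VfsManager, _source: &str, _target: &str, _read_only: bool) { /* ... */ }
}

fn container_setup(host: &VfsManager, container: &VfsManager) {
    // Basic bind mount within a single namespace.
    host.bind_mount("/var/data", "/srv/data");
    // Cross-VFS, read-only bind mount: expose a host directory to a container.
    container.bind_mount_from(host, "/var/data", "/data", true);
}
```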
§Path Resolution & Security
- Normalized Path Handling: Automatic resolution of relative paths (. and ..), illustrated after this list
- Security Protection: Prevention of directory traversal attacks through path validation
- Transparent Resolution: Seamless handling of bind mounts and nested mount points
- Performance Optimization: Efficient path lookup with O(log k) complexity
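A minimal normalizer in the spirit of the rules above. This is an illustrative routine, not the kernel's actual implementation: it resolves . and .. and rejects any path that tries to climb above the root, which is the property that blocks directory traversal.

```rust
extern crate alloc;
use alloc::{string::String, vec::Vec};

/// Illustrative normalizer (not the kernel's actual routine).
fn normalize(path: &str) -> Option<String> {
    let mut parts: Vec<&str> = Vec::new();
    for component in path.split('/') {
        match component {
            "" | "." => {}            // skip empty and current-directory components
            ".." => { parts.pop()?; } // refuse to pop past `/`
            other => parts.push(other),
        }
    }
    let mut out = String::from("/");
    out.push_str(&parts.join("/"));
    Some(out)
}
```

Under these rules, "/mnt/./disk/../data" resolves to "/mnt/data", while "/../etc" is rejected outright.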
§File Operations & Resource Management
- RAII Resource Safety: Files automatically close when dropped, preventing resource leaks
- Thread-Safe File Access: Concurrent file operations with proper locking
- Handle Management: Arc-based file handle sharing with automatic cleanup
- Directory Operations: Complete directory manipulation with metadata support
§Storage Integration
- Block Device Interface: Abstraction layer for storage device interaction
- Memory-Based Filesystems: Support for RAM-based filesystems like tmpfs
- Hybrid Filesystem Support: Filesystems operating on both block devices and memory
- Device File Support: Integration with character and block device management
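As a sketch of the abstraction layer described above, a block device interface might look like the trait below. The method set and the RamDisk implementation are assumptions for illustration, not the crate's actual BlockDevice trait.

```rust
extern crate alloc;

/// Illustrative block device interface; Scarlet's actual `BlockDevice` trait
/// may expose a different method set.
pub trait BlockDeviceLike {
    /// Size of one block in bytes (commonly 512 or 4096).
    fn block_size(&self) -> usize;
    /// Read one block into `buf`; `buf.len()` must equal `block_size()`.
    fn read_block(&mut self, lba: u64, buf: &mut [u8]) -> Result<(), ()>;
    /// Write one block from `buf`.
    fn write_block(&mut self, lba: u64, buf: &[u8]) -> Result<(), ()>;
}

/// Trivial RAM-backed implementation, in the spirit of the memory-based
/// filesystems mentioned above.
pub struct RamDisk {
    data: alloc::vec::Vec<u8>,
}

impl BlockDeviceLike for RamDisk {
    fn block_size(&self) -> usize { 512 }

    fn read_block(&mut self, lba: u64, buf: &mut [u8]) -> Result<(), ()> {
        let start = (lba as usize).checked_mul(512).ok_or(())?;
        let end = start.checked_add(buf.len()).ok_or(())?;
        buf.copy_from_slice(self.data.get(start..end).ok_or(())?);
        Ok(())
    }

    fn write_block(&mut self, lba: u64, buf: &[u8]) -> Result<(), ()> {
        let start = (lba as usize).checked_mul(512).ok_or(())?;
        let end = start.checked_add(buf.len()).ok_or(())?;
        self.data.get_mut(start..end).ok_or(())?.copy_from_slice(buf);
        Ok(())
    }
}
```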
§Boot Process
The kernel has two main entry points:
- start_kernel: Main boot entry point for the bootstrap processor
- start_ap: Entry point for application processors (APs) in multicore systems
The initialization sequence for the bootstrap processor includes:
- .bss section initialization (zeroing)
- Architecture-specific initialization (setting up CPU features)
- FDT (Flattened Device Tree) parsing for hardware discovery
- Heap initialization enabling dynamic memory allocation
- Early driver initialization via the initcall mechanism
- Driver registration and initialization (serial, block devices, etc.)
- Virtual memory setup with kernel page tables
- Device discovery and initialization based on FDT data
- Timer initialization for scheduling and timeouts
- Scheduler initialization and initial task creation
- Task scheduling and transition to the kernel main loop
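Condensed into code, the sequence above could be pictured as follows. Apart from the exported start_kernel symbol itself, every step function and the entry signature are placeholders invented for this sketch, not the kernel's real call graph.

```rust
// Placeholder step functions so the outline type-checks; they are not
// Scarlet's real symbols.
struct Fdt;
fn clear_bss() {}
fn arch_early_init(_hart: usize) {}
fn parse_fdt(_addr: usize) -> Fdt { Fdt }
fn heap_init() {}
fn run_early_initcalls() {}
fn register_drivers(_fdt: &Fdt) {}
fn vm_init() {}
fn discover_devices(_fdt: &Fdt) {}
fn timer_init() {}
fn scheduler_init() {}
fn spawn_initial_task() {}
fn schedule_forever() -> ! { loop {} }

/// Illustrative outline of the bootstrap path described above; the real
/// `start_kernel` signature may differ.
pub extern "C" fn start_kernel(hartid: usize, fdt_addr: usize) -> ! {
    clear_bss();                    // 1. zero the .bss section
    arch_early_init(hartid);        // 2. architecture-specific setup
    let fdt = parse_fdt(fdt_addr);  // 3. FDT parsing for hardware discovery
    heap_init();                    // 4. heap / dynamic allocation
    run_early_initcalls();          // 5. early drivers via the initcall mechanism
    register_drivers(&fdt);         // 6. serial, block devices, ...
    vm_init();                      // 7. kernel page tables
    discover_devices(&fdt);         // 8. FDT-driven device initialization
    timer_init();                   // 9. timers for scheduling and timeouts
    scheduler_init();               // 10. scheduler and initial task
    spawn_initial_task();
    schedule_forever()              // 11. hand off to the scheduler
}
```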
§Current Architecture Implementation
The current RISC-V implementation includes:
- Boot sequence utilizing SBI (Supervisor Binary Interface) for hardware interaction
- Support for S-mode operation
- Interrupt handling through trap frames with proper context saving/restoring
- Memory management with Sv48 virtual memory addressing
- Architecture-specific timer implementation
- Support for multiple privilege levels
- Instruction abstractions for atomic operations and privileged instructions
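SBI interaction ultimately reduces to an ecall with the extension and function IDs in a7 and a6. The wrapper below is a minimal sketch following the public SBI calling convention, not Scarlet's actual SBI layer.

```rust
use core::arch::asm;

/// Minimal SBI call wrapper: extension ID in a7, function ID in a6, arguments
/// in a0..a2, error code returned in a0 and value in a1. Illustrative only.
#[cfg(target_arch = "riscv64")]
fn sbi_call(eid: usize, fid: usize, arg0: usize, arg1: usize, arg2: usize) -> (isize, usize) {
    let error: isize;
    let value: usize;
    unsafe {
        asm!(
            "ecall",
            inlateout("a0") arg0 => error,
            inlateout("a1") arg1 => value,
            in("a2") arg2,
            in("a6") fid,
            in("a7") eid,
        );
    }
    (error, value)
}

/// Example: emit one byte through the legacy console-putchar extension
/// (EID 0x01); legacy extensions ignore the FID, so 0 is passed here.
#[cfg(target_arch = "riscv64")]
pub fn debug_putchar(byte: u8) {
    let _ = sbi_call(0x01, 0, byte as usize, 0, 0);
}
```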
§Testing Framework
Scarlet includes a custom testing framework that allows:
- Unit tests for kernel components
- Integration tests for subsystem interaction
- Boot tests to verify initialization sequence
- Hardware-in-the-loop tests when running on real or emulated hardware
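Such a harness is typically built on Rust's unstable custom_test_frameworks feature. The sketch below shows the general shape under that assumption; the runner, entry point, and reporting mechanism are generic placeholders rather than Scarlet's exact setup.

```rust
// Crate-root sketch of a no_std test harness; illustrative only.
#![no_std]
#![no_main]
#![feature(custom_test_frameworks)]
#![test_runner(crate::test_runner)]
#![reexport_test_harness_main = "test_main"]

use core::panic::PanicInfo;

pub fn test_runner(tests: &[&dyn Fn()]) {
    // A real runner would report progress over the serial console and signal
    // success or failure to the host (e.g. via a QEMU exit device).
    for test in tests {
        test();
    }
}

#[test_case]
fn trivial_assertion() {
    assert_eq!(1 + 1, 2);
}

#[no_mangle]
pub extern "C" fn _start() -> ! {
    #[cfg(test)]
    test_main(); // generated entry point that invokes `test_runner`
    loop {}
}

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}
```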
§Development Notes
The kernel uses Rust’s advanced features like naked functions and custom test frameworks. In non-test builds, a simple panic handler is provided that prints the panic information and enters an infinite loop. The kernel makes extensive use of Rust’s unsafe code where necessary for hardware interaction while maintaining safety guarantees through careful abstraction boundaries.
Modules§
- abi - ABI module.
- arch - Architecture-specific code for Scarlet kernel
- device - Device module.
- drivers - Device drivers module.
- earlycon - Early console for generic architecture.
- environment
- fs - Virtual File System (VFS) module.
- initcall - Initcall System
- library - Library module for the kernel.
- mem - Memory management module.
- sched - Scheduler module.
- syscall - System call interface module.
- task - Task module.
- time - Time utilities for the kernel
- timer - Kernel timer module.
- traits
- vm - Virtual memory module.
Macros§
- defer - Macro to defer execution of a block of code. This macro allows you to specify a block of code that will be executed when the current scope is exited. It is similar to the defer function but provides a more concise syntax.
- driver_initcall - A macro used to register driver initialization functions to be called during the system boot process.
- early_initcall
- early_print
- early_println
- late_initcall
- println
- register_abi
Functions§
- panic 🔒 - A panic handler is required in Rust; this is probably the most basic one possible.
- start_ap
- start_kernel