// kernel/main.rs

//! # Scarlet Kernel
//!
//! The Scarlet Kernel is a bare metal, `no_std` operating system kernel designed with architecture
//! flexibility in mind. It aims to provide a clean design with strong safety guarantees
//! through Rust's ownership model.
//!
//! While the current implementation establishes fundamental kernel functionality, our long-term
//! vision is to develop a fully modular operating system where components can be dynamically
//! loaded and unloaded at runtime, similar to loadable kernel modules in other systems.
//!
//! ## Core Features
//!
//! - **No Standard Library**: Built using `#![no_std]` for bare metal environments, implementing only the essential
//!   functionality needed for kernel operation without relying on OS-specific features
//! - **Multi-Architecture Design**: Currently implemented for RISC-V 64-bit, with a clean abstraction layer designed
//!   for supporting multiple architectures in the future
//! - **Memory Management**: Custom heap allocator with virtual memory support that handles physical and virtual memory
//!   mapping, page tables, and memory protection
//! - **Task Scheduling**: Cooperative and preemptive multitasking with priority-based scheduling and support for
//!   kernel and user tasks
//! - **Driver Framework**: Organized driver architecture with device discovery through FDT (Flattened Device Tree),
//!   supporting hot-pluggable and fixed devices
//! - **Filesystem Support**: Flexible Virtual File System (VFS) layer with support for mounting multiple filesystem
//!   implementations and unified path handling
//! - **Hardware Abstraction**: Clean architecture-specific abstractions that isolate architecture-dependent code
//!   to facilitate porting to different architectures
//! - **Future Modularity**: Working toward a fully modular design with runtime-loadable kernel components
//!
//! ## Resource Management with Rust's Ownership Model
//!
//! Scarlet leverages Rust's ownership and borrowing system to provide memory safety without garbage collection:
//!
//! - **Zero-Cost Abstractions**: Using Rust's type system for resource management without runtime overhead. For example,
//!   the device driver system uses traits to define common interfaces while allowing specialized implementations
//!   with no virtual dispatch cost when statically resolvable.
//!
//! - **RAII Resource Management**: Kernel resources are automatically cleaned up when they go out of scope, including:
//!   - File handles that automatically close when dropped
//!   - Memory allocations that are properly freed
//!   - Device resources that are released when no longer needed
//!
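//! A minimal sketch of RAII cleanup via Rust's `Drop` trait (the types and helpers here are illustrative, not the kernel's actual API):
//!
//! ```rust,ignore
//! /// Hypothetical file handle wrapper.
//! struct FileHandle {
//!     fd: usize,
//! }
//!
//! impl Drop for FileHandle {
//!     fn drop(&mut self) {
//!         // Release the underlying resource when the handle goes out of scope.
//!         close_fd(self.fd); // `close_fd` is a hypothetical helper
//!     }
//! }
//!
//! fn use_file() {
//!     let handle = FileHandle { fd: open_fd("/bin/init") }; // `open_fd` is hypothetical
//!     // ... use the handle ...
//! } // `handle` is dropped here; the file is closed automatically
//! ```
//!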
//! - **Mutex and RwLock**: Thread-safe concurrent access to shared resources using the `spin` crate's lock implementations:
//!   - The scheduler uses locks to protect its internal state during task switching
//!   - Device drivers use locks to ensure exclusive access to hardware
//!   - Filesystem operations use RwLocks to allow concurrent reads but exclusive writes
//!
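//! For example, shared scheduler state can be protected with the `spin` crate's `Mutex` (a simplified sketch; `TaskId` and the queue layout are illustrative):
//!
//! ```rust,ignore
//! use spin::Mutex;
//!
//! static READY_QUEUE: Mutex<Vec<TaskId>> = Mutex::new(Vec::new());
//!
//! fn enqueue(task: TaskId) {
//!     // The guard returned by `lock()` releases the lock when it goes out of scope.
//!     READY_QUEUE.lock().push(task);
//! }
//! ```
//!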
//! - **Arc** (Atomic Reference Counting): Safe sharing of resources between kernel components:
//!   - Filesystem implementations are shared between multiple mount points
//!   - Device instances can be referenced by multiple drivers
//!   - System-wide singletons are managed safely with interior mutability patterns
//!
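//! For example (illustrative only), one filesystem instance can back several mount points by cloning an `Arc`:
//!
//! ```rust,ignore
//! let fs: Arc<dyn FileSystem> = Arc::new(SomeFs::new()); // `SomeFs` is hypothetical
//! mount_table.insert("/mnt/a", Arc::clone(&fs));
//! mount_table.insert("/mnt/b", Arc::clone(&fs)); // same instance, two mount points
//! ```
//!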
//! - **Memory Safety**: Prevention of use-after-free, double-free, and data races at compile time:
//!   - The type system ensures resources are not used after being freed
//!   - Mutable references are exclusive, preventing data races
//!   - Lifetimes ensure references do not outlive the data they point to
//!
//! - **Trait-based Abstractions**: Common interfaces for device drivers and subsystems enabling modularity:
//!   - The `BlockDevice` trait defines operations for block-based storage
//!   - The `SerialDevice` trait provides a common interface for UART and console devices
//!   - The `FileSystem` and `FileOperations` traits allow different filesystem implementations
//!
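//! A simplified sketch of such an interface (method names are illustrative, not the kernel's exact trait):
//!
//! ```rust,ignore
//! /// Simplified block device interface.
//! pub trait BlockDevice {
//!     /// Size of one block in bytes.
//!     fn block_size(&self) -> usize;
//!     /// Read block `block_id` into `buf`.
//!     fn read_block(&mut self, block_id: usize, buf: &mut [u8]) -> Result<(), &'static str>;
//!     /// Write block `block_id` from `buf`.
//!     fn write_block(&mut self, block_id: usize, buf: &[u8]) -> Result<(), &'static str>;
//! }
//! ```
//!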
//! ## Virtual File System
//!
//! Scarlet implements a highly flexible Virtual File System (VFS) layer designed for
//! containerization and process isolation with advanced bind mount capabilities:
//!
//! ### Core Architecture
//!
//! - **Per-Task VFS Management**: Each task can have its own isolated `VfsManager` instance:
//!   - Tasks store `Option<Arc<VfsManager>>`, allowing independent filesystem namespaces
//!   - Support for complete filesystem isolation or selective resource sharing
//!   - Thread-safe operations via RwLock protection throughout the VFS layer
//!
//! - **Filesystem Driver Framework**: Modular driver system with type-safe parameter handling:
//!   - Global `FileSystemDriverManager` singleton for driver registration and management
//!   - Support for block device, memory-based, and virtual filesystem creation
//!   - Structured parameter system replacing the old string-based configuration
//!   - Dynamic dispatch enabling future runtime filesystem module loading
//!
//! - **Enhanced Mount Tree**: Hierarchical mount point management with bind mount support:
//!   - O(log k) path resolution performance, where k is the path depth
//!   - Independent mount point namespaces per VfsManager instance
//!   - Security-enhanced path normalization preventing directory traversal attacks
//!   - Efficient trie-based mount point storage reducing memory usage
//!
//! ### Bind Mount Functionality
//!
//! Advanced bind mount capabilities for flexible directory mapping and container orchestration:
//!
//! - **Basic Bind Mounts**: Mount directories from one location to another within the same VfsManager
//! - **Cross-VFS Bind Mounts**: Share directories between isolated VfsManager instances for container resource sharing
//! - **Read-Only Bind Mounts**: Security-enhanced mounting with write protection
//! - **Shared Bind Mounts**: Mount propagation sharing for complex namespace scenarios
//! - **Thread-Safe Operations**: Bind mount operations callable from system call context
//!
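//! A hypothetical usage sketch (the method name and signature are assumptions for illustration, not the actual `VfsManager` API):
//!
//! ```rust,ignore
//! // Share a directory from a host namespace into an isolated container
//! // namespace, read-only. `bind_mount_from` is a hypothetical method name.
//! let host_vfs: Arc<VfsManager> = host_task.vfs.clone().unwrap();
//! let mut container_vfs = VfsManager::new();
//! container_vfs.bind_mount_from(&host_vfs, "/data", "/mnt/shared", true)?;
//! ```
//!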
//! ### Path Resolution & Security
//!
//! - **Normalized Path Handling**: Automatic resolution of relative path components (`.` and `..`)
//! - **Security Protection**: Prevention of directory traversal attacks through path validation
//! - **Transparent Resolution**: Seamless handling of bind mounts and nested mount points
//! - **Performance Optimization**: Efficient path lookup with O(log k) complexity
//!
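//! Path normalization can be sketched as a stack-based pass over path components (a simplified illustration, not the kernel's exact implementation):
//!
//! ```rust,ignore
//! fn normalize(path: &str) -> String {
//!     let mut stack: Vec<&str> = Vec::new();
//!     for comp in path.split('/') {
//!         match comp {
//!             "" | "." => {}              // skip empty and current-directory components
//!             ".." => { stack.pop(); }    // pop one level; cannot escape above the root
//!             c => stack.push(c),
//!         }
//!     }
//!     let mut out = String::from("/");
//!     out.push_str(&stack.join("/"));
//!     out
//! }
//! // normalize("/a/./b/../c") == "/a/c"
//! ```
//!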
//! ### File Operations & Resource Management
//!
//! - **RAII Resource Safety**: Files automatically close when dropped, preventing resource leaks
//! - **Thread-Safe File Access**: Concurrent file operations with proper locking
//! - **Handle Management**: Arc-based file handle sharing with automatic cleanup
//! - **Directory Operations**: Complete directory manipulation with metadata support
//!
//! ### Storage Integration
//!
//! - **Block Device Interface**: Abstraction layer for storage device interaction
//! - **Memory-Based Filesystems**: Support for RAM-based filesystems like tmpfs
//! - **Hybrid Filesystem Support**: Filesystems operating on both block devices and memory
//! - **Device File Support**: Integration with character and block device management
//!
//! ## Boot Process
//!
//! The kernel has two main entry points:
//! - `start_kernel`: Main boot entry point for the bootstrap processor
//! - `start_ap`: Entry point for application processors (APs) in multicore systems
//!
//! The initialization sequence for the bootstrap processor includes:
//! 1. `.bss` section initialization (zeroing)
//! 2. Architecture-specific initialization (setting up CPU features)
//! 3. FDT (Flattened Device Tree) parsing for hardware discovery
//! 4. Heap initialization enabling dynamic memory allocation
//! 5. Early driver initialization via the initcall mechanism
//! 6. Driver registration and initialization (serial, block devices, etc.)
//! 7. Virtual memory setup with kernel page tables
//! 8. Device discovery and initialization based on FDT data
//! 9. Timer initialization for scheduling and timeouts
//! 10. Scheduler initialization and initial task creation
//! 11. Task scheduling and transition to the kernel main loop
//!
//! ## Current Architecture Implementation
//!
//! The current RISC-V implementation includes:
//! - Boot sequence utilizing SBI (Supervisor Binary Interface) for hardware interaction
//! - Support for S-mode operation
//! - Interrupt handling through trap frames with proper context saving/restoring
//! - Memory management with Sv48 virtual memory addressing
//! - Architecture-specific timer implementation
//! - Support for multiple privilege levels
//! - Instruction abstractions for atomic operations and privileged instructions
//!
//! ## Testing Framework
//!
//! Scarlet includes a custom testing framework that allows:
//! - Unit tests for kernel components
//! - Integration tests for subsystem interaction
//! - Boot tests to verify the initialization sequence
//! - Hardware-in-the-loop tests when running on real or emulated hardware
//!
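//! A custom test runner under `#![feature(custom_test_frameworks)]` typically looks like this minimal sketch (the actual `crate::test::test_runner` may differ):
//!
//! ```rust,ignore
//! pub fn test_runner(tests: &[&dyn Fn()]) {
//!     early_println!("Running {} tests", tests.len());
//!     for test in tests {
//!         test(); // each test panics (or reports) on failure
//!     }
//!     early_println!("All tests passed");
//! }
//! ```
//!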
//! ## Development Notes
//!
//! The kernel uses Rust's advanced features like naked functions and custom test frameworks.
//! In non-test builds, a simple panic handler is provided that prints the panic information
//! and enters an infinite loop. The kernel makes extensive use of Rust's unsafe code where
//! necessary for hardware interaction while maintaining safety guarantees through careful
//! abstraction boundaries.

#![no_std]
#![no_main]
#![feature(used_with_arg)]
#![feature(custom_test_frameworks)]
#![test_runner(crate::test::test_runner)]
#![reexport_test_harness_main = "test_main"]

pub mod abi;
pub mod arch;
pub mod drivers;
pub mod timer;
pub mod time;
pub mod library;
pub mod mem;
pub mod traits;
pub mod sched;
pub mod earlycon;
pub mod environment;
pub mod vm;
pub mod task;
pub mod initcall;
pub mod syscall;
pub mod device;
pub mod fs;

#[cfg(test)]
pub mod test;

extern crate alloc;
use alloc::{string::ToString, sync::Arc};
use device::{fdt::{init_fdt, relocate_fdt, FdtManager}, manager::DeviceManager};
use environment::PAGE_SIZE;
use fs::{drivers::initramfs::{init_initramfs, relocate_initramfs}, File, VfsManager};
use initcall::{call_initcalls, driver::driver_initcall_call, early::early_initcall_call};
use slab_allocator_rs::MIN_HEAP_SIZE;
use core::panic::PanicInfo;

use arch::{get_cpu, init_arch};
use task::{elf_loader::load_elf_into_task, new_user_task};
use vm::{kernel_vm_init, vmem::MemoryArea};
use sched::scheduler::get_scheduler;
use mem::{allocator::init_heap, init_bss, __FDT_RESERVED_START, __KERNEL_SPACE_END, __KERNEL_SPACE_START};
use timer::get_kernel_timer;

/// Panic handler for non-test builds. Rust requires one; this is about the
/// simplest possible implementation: print the panic info and idle forever.
#[cfg(not(test))]
#[panic_handler]
fn panic(info: &PanicInfo) -> ! {
    use arch::instruction::idle;

    println!("[Scarlet Kernel] panic: {}", info);
    loop {
        idle();
    }
}

#[unsafe(no_mangle)]
pub extern "C" fn start_kernel(cpu_id: usize) -> ! {
    early_println!("Hello, I'm Scarlet kernel!");
    early_println!("[Scarlet Kernel] Boot on CPU {}", cpu_id);
    early_println!("[Scarlet Kernel] Initializing .bss section...");
    init_bss();
    early_println!("[Scarlet Kernel] Initializing arch...");
    init_arch(cpu_id);
    /* Initialize the FDT subsystem */
    early_println!("[Scarlet Kernel] Initializing FDT...");
    init_fdt();
    /* Get the DRAM area from the FDT */
    let dram_area = FdtManager::get_manager().get_dram_memoryarea().expect("Memory area not found");
    early_println!("[Scarlet Kernel] DRAM area          : {:#x} - {:#x}", dram_area.start, dram_area.end);
    /* Relocate the FDT to a usable memory area */
    early_println!("[Scarlet Kernel] Relocating FDT...");
    let fdt_reloc_start = unsafe { &__FDT_RESERVED_START as *const usize as usize };
    let dest_ptr = fdt_reloc_start as *mut u8;
    relocate_fdt(dest_ptr);
    /* Calculate the usable memory area */
    let kernel_end = unsafe { &__KERNEL_SPACE_END as *const usize as usize };
    let mut usable_area = MemoryArea::new(kernel_end, dram_area.end);
    early_println!("[Scarlet Kernel] Usable memory area : {:#x} - {:#x}", usable_area.start, usable_area.end);
    /* Relocate the initramfs to the usable memory area */
    early_println!("[Scarlet Kernel] Relocating initramfs...");
    if let Err(e) = relocate_initramfs(&mut usable_area) {
        early_println!("[Scarlet Kernel] Failed to relocate initramfs: {}", e);
    }
    early_println!("[Scarlet Kernel] Updated usable memory area : {:#x} - {:#x}", usable_area.start, usable_area.end);
    /* Initialize the heap with the usable memory area after the FDT */
    early_println!("[Scarlet Kernel] Initializing heap...");
    /* Align the heap start up to a page boundary, then round the size down to a multiple of MIN_HEAP_SIZE */
    let heap_start = (usable_area.start + PAGE_SIZE - 1) & !(PAGE_SIZE - 1);
    let heap_size = ((usable_area.end - heap_start + 1) / MIN_HEAP_SIZE) * MIN_HEAP_SIZE;
    let heap_end = heap_start + heap_size - 1;
    init_heap(MemoryArea::new(heap_start, heap_end));
    /* After this point, we can use the heap */
    early_initcall_call();
    driver_initcall_call();
    /* The serial console also works from here */

    #[cfg(test)]
    test_main();

    println!("[Scarlet Kernel] Initializing Virtual Memory...");
    let kernel_start = unsafe { &__KERNEL_SPACE_START as *const usize as usize };
    kernel_vm_init(MemoryArea::new(kernel_start, usable_area.end));
    /* After this point, we can use the heap and virtual memory */
    /* We will also be restricted to the kernel address space */

    /* Initialize (populate) devices */
    println!("[Scarlet Kernel] Initializing devices...");
    DeviceManager::get_mut_manager().populate_devices();
    /* Initcalls */
    call_initcalls();
    /* Initialize the timer */
    println!("[Scarlet Kernel] Initializing timer...");
    get_kernel_timer().init();
    println!("[Scarlet Kernel] Initializing scheduler...");
    let scheduler = get_scheduler();
    /* Initialize the initramfs */
    println!("[Scarlet Kernel] Initializing initramfs...");
    let mut manager = VfsManager::new();
    init_initramfs(&mut manager);
    /* Create the init task */
    println!("[Scarlet Kernel] Creating initial user task...");
    let mut task = new_user_task("init".to_string(), 0);

    task.init();
    task.vfs = Some(Arc::new(manager));
    task.cwd = Some("/".to_string());
    let mut file = match task.vfs.as_ref().unwrap().open("/bin/init", 0) {
        Ok(file) => file,
        Err(e) => {
            panic!("Failed to open init file: {:?}", e);
        },
    };

    match load_elf_into_task(&mut file, &mut task) {
        Ok(_) => {
            for map in task.vm_manager.get_memmap() {
                early_println!("[Scarlet Kernel] Task memory map: {:#x} - {:#x}", map.vmarea.start, map.vmarea.end);
            }
            early_println!("[Scarlet Kernel] Successfully loaded init ELF into task");
            get_scheduler().add_task(task, get_cpu().get_cpuid());
        }
        Err(e) => early_println!("[Scarlet Kernel] Error loading ELF into task: {:?}", e),
    }

    println!("[Scarlet Kernel] Scheduler will start...");
    scheduler.start_scheduler();
    loop {}
}

#[unsafe(no_mangle)]
pub extern "C" fn start_ap(cpu_id: usize) {
    println!("[Scarlet Kernel] CPU {} is up and running", cpu_id);
    println!("[Scarlet Kernel] Initializing arch...");
    init_arch(cpu_id);
    loop {}
}