Virtual File System (VFS) Module - Version 2 Architecture
This module provides a modern Virtual File System implementation based on VFS v2 architecture, supporting per-task isolated filesystems, containerization, and advanced mount operations including bind mounts and overlay filesystems.
§VFS v2 Architecture Overview
The VFS v2 architecture provides a clean separation of concerns with three main components inspired by modern operating systems:
§Core Components
- VfsEntry: Path hierarchy cache (similar to the Linux dentry)
  - Represents “names” and “links” in the filesystem hierarchy
  - Provides fast path resolution with weak-reference-based caching
  - Manages parent-child relationships in the VFS tree
- VfsNode: File entity interface (similar to a Linux inode or BSD vnode)
  - Abstract representation of files, directories, and special files
  - Provides metadata access and type information
  - Enables clean downcasting for filesystem-specific operations
- FileSystemOperations: Unified driver API for filesystem implementations
  - Consolidated interface for all filesystem operations (lookup, create, etc.)
  - Clean separation between the VFS core and filesystem drivers
  - Supports both simple and complex filesystem types
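The relationship between the three components can be sketched in a few lines. Everything here is illustrative: the field layouts and method signatures are simplified guesses based on the description above, not the kernel's real API.

```rust
use std::sync::{Arc, Weak};

// VfsNode: abstract file entity (like a Linux inode or BSD vnode).
// Illustrative trait only; the real interface is richer.
trait VfsNode {
    fn is_dir(&self) -> bool;
    fn size(&self) -> usize;
}

// VfsEntry: one node in the path-hierarchy cache (like a Linux dentry).
// The parent link is Weak, so the cache cannot form reference cycles.
struct VfsEntry {
    name: String,
    parent: Option<Weak<VfsEntry>>,
    node: Arc<dyn VfsNode>,
}

// FileSystemOperations: the unified driver API a filesystem implements.
trait FileSystemOperations {
    fn lookup(&self, parent: &VfsEntry, name: &str) -> Option<Arc<dyn VfsNode>>;
    fn create(&self, parent: &VfsEntry, name: &str) -> Result<Arc<dyn VfsNode>, ()>;
}

// A trivial VfsNode implementation to show the trait in use.
struct RegularFile { len: usize }

impl VfsNode for RegularFile {
    fn is_dir(&self) -> bool { false }
    fn size(&self) -> usize { self.len }
}

fn main() {
    let node: Arc<dyn VfsNode> = Arc::new(RegularFile { len: 42 });
    let entry = VfsEntry { name: "file.txt".to_string(), parent: None, node };
    assert!(!entry.node.is_dir());
    assert_eq!(entry.node.size(), 42);
}
```

The key design point this shows: VfsEntry owns a strong `Arc` to its node but only a `Weak` to its parent, which is what lets the path cache expire entries automatically.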
§Key Infrastructure
- VfsManager: Main VFS management structure supporting isolation and sharing
- MountTree: Hierarchical mount tree with support for bind mounts and overlays
- FileSystemDriverManager: Global singleton for driver registration (VFS v1 compatibility)
- MountPoint: Associates filesystem instances with mount paths and manages mount relationships
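A driver manager of this shape is essentially a registry from filesystem-type names to constructors. The sketch below is hypothetical: the real FileSystemDriverManager registers FileSystemDriver trait objects, while this toy version uses a string stand-in for a filesystem instance.

```rust
use std::collections::HashMap;

// Stand-in for "construct a new filesystem instance"; the real registry
// would return a boxed filesystem object, not a String.
type DriverCtor = fn() -> String;

struct DriverRegistry {
    drivers: HashMap<&'static str, DriverCtor>,
}

impl DriverRegistry {
    fn new() -> Self {
        Self { drivers: HashMap::new() }
    }

    // Register a driver under its filesystem-type name.
    fn register(&mut self, name: &'static str, ctor: DriverCtor) {
        self.drivers.insert(name, ctor);
    }

    // Look up a driver by name and instantiate a filesystem from it.
    fn create(&self, name: &str) -> Option<String> {
        self.drivers.get(name).map(|ctor| ctor())
    }
}

fn main() {
    let mut registry = DriverRegistry::new();
    registry.register("tmpfs", || "tmpfs instance".to_string());
    assert_eq!(registry.create("tmpfs").as_deref(), Some("tmpfs instance"));
    assert!(registry.create("ext4").is_none()); // unregistered type
}
```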
§VfsManager Distribution and Isolation
- Per-Task VfsManager: Each task can have its own isolated VfsManager instance, stored as Option<Arc<VfsManager>> in the task structure
- Shared Filesystems: Multiple VfsManager instances can share underlying filesystem objects while maintaining independent mount points
- Global Fallback: Tasks without their own VFS use the global VfsManager instance
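The fallback rule can be sketched directly from the Option<Arc<VfsManager>> field. Task, global_vfs, and effective_vfs are illustrative names here; only the field type comes from the text above.

```rust
use std::sync::Arc;

struct VfsManager { name: &'static str }

struct Task {
    // Per-task namespace: None means "use the global VfsManager".
    vfs: Option<Arc<VfsManager>>,
}

// Hypothetical accessor for the global instance; the kernel's real
// accessor (and its locking) will differ.
fn global_vfs() -> Arc<VfsManager> {
    Arc::new(VfsManager { name: "global" })
}

// Resolve which VFS a task should use: its own namespace if present,
// otherwise the global fallback.
fn effective_vfs(task: &Task) -> Arc<VfsManager> {
    task.vfs.clone().unwrap_or_else(global_vfs)
}

fn main() {
    let isolated = Task { vfs: Some(Arc::new(VfsManager { name: "container" })) };
    let plain = Task { vfs: None };
    assert_eq!(effective_vfs(&isolated).name, "container");
    assert_eq!(effective_vfs(&plain).name, "global");
}
```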
§Advanced Mount Operations
VFS v2 provides comprehensive mount functionality for flexible filesystem composition:
§Basic Filesystem Mounting
```rust
let vfs = VfsManager::new();

// Create and mount a tmpfs
let tmpfs = TmpFS::new(1024 * 1024); // 1MB limit
vfs.mount(tmpfs, "/tmp", 0)?;

// Mount with specific options
vfs.mount_with_options(filesystem, "/mnt/data", &mount_options)?;
```
§Bind Mount Operations
```rust
// Basic bind mount - mount a directory at another location
vfs.bind_mount("/source/dir", "/target/dir")?;

// Cross-VFS bind mount for container isolation
let host_vfs = Arc::new(host_vfs_manager);
container_vfs.bind_mount_from(host_vfs, "/host/data", "/container/data")?;
```
§Overlay Filesystem Support
```rust
// Create an overlay combining multiple layers
let overlay = OverlayFS::new(
    Some((upper_mount, upper_entry)), // Upper layer (writable)
    vec![(lower_mount, lower_entry)], // Lower layers (read-only)
    "system_overlay".to_string(),
)?;
vfs.mount(overlay, "/merged", 0)?;
```
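The precedence rule an overlay implements, the writable upper layer shadows the read-only lower layers, and lower layers are searched top-most first, can be demonstrated with plain slices. overlay_lookup below is a hypothetical stand-in for OverlayFS's internal lookup, not its API.

```rust
// Each layer is modeled as a slice of (name, contents) pairs.
fn overlay_lookup<'a>(
    upper: &[(&'a str, &'a str)],          // writable upper layer
    lowers: &[&[(&'a str, &'a str)]],      // read-only layers, top-most first
    name: &str,
) -> Option<&'a str> {
    let find = |layer: &[(&'a str, &'a str)]| {
        layer.iter().find(|(n, _)| *n == name).map(|(_, v)| *v)
    };
    // The upper layer wins; otherwise take the first lower-layer hit.
    find(upper).or_else(|| lowers.iter().find_map(|layer| find(layer)))
}

fn main() {
    let upper = [("etc/motd", "from upper")];
    let lower: &[(&str, &str)] = &[("etc/motd", "from lower"), ("bin/sh", "shell")];
    assert_eq!(overlay_lookup(&upper, &[lower], "etc/motd"), Some("from upper"));
    assert_eq!(overlay_lookup(&upper, &[lower], "bin/sh"), Some("shell"));
    assert_eq!(overlay_lookup(&upper, &[lower], "missing"), None);
}
```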
§Available Filesystem Types
VFS v2 includes several built-in filesystem drivers:
- TmpFS: Memory-based temporary filesystem with optional size limits
- CpioFS: Read-only CPIO archive filesystem for initramfs
- OverlayFS: Union/overlay filesystem combining multiple layers
- InitramFS: Special handling for initial ramdisk mounting
§Usage Patterns
§Container Isolation with Namespaces
```rust
// Create an isolated VfsManager for the container
let container_vfs = VfsManager::new();

// Mount the container root filesystem
let container_fs = TmpFS::new(512 * 1024 * 1024); // 512MB
container_vfs.mount(container_fs, "/", 0)?;

// Bind mount host resources selectively
let host_vfs = get_global_vfs();
container_vfs.bind_mount_from(&host_vfs, "/host/shared", "/shared")?;

// Assign the isolated namespace to the task
task.vfs = Some(Arc::new(container_vfs));
```
§Shared VFS Access Patterns
VFS v2 supports multiple sharing patterns for different use cases:
§Full VFS Sharing via Arc
```rust
// Share the entire VfsManager instance, including mount points
let shared_vfs = Arc::new(vfs_manager);
let task_vfs = Arc::clone(&shared_vfs);

// All mount operations affect the shared mount tree
shared_vfs.mount(tmpfs, "/tmp", 0)?; // Visible to all references
```

This pattern is useful for:
- Fork-like behavior where the child inherits the parent's filesystem view
- Thread-like sharing where all threads see the same mount points
- System-wide mount operations
§Selective Resource Sharing via Bind Mounts
```rust
// Each container has an isolated filesystem but shares specific directories
let container1_vfs = VfsManager::new();
let container2_vfs = VfsManager::new();

// Both containers share a common data directory
let host_vfs = get_global_vfs();
container1_vfs.bind_mount_from(&host_vfs, "/host/shared", "/data")?;
container2_vfs.bind_mount_from(&host_vfs, "/host/shared", "/data")?;
```
§System Call Interface
VFS v2 provides system calls that operate within each task’s VFS namespace:
- File operations: open(), read(), write(), close(), lseek()
- Directory operations: mkdir(), readdir()
- Mount operations: mount(), umount(), pivot_root()
§Performance Characteristics
VFS v2 is designed for performance with:
- Path Resolution Caching: VfsEntry provides fast lookup of recently accessed paths
- Weak Reference Cleanup: Automatic cleanup of expired cache entries
- Mount Boundary Optimization: Efficient crossing of mount points during path resolution
- Lock Granularity: Fine-grained locking to minimize contention
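The first two points, path-resolution caching and weak-reference cleanup, combine into one mechanism: the cache holds Weak references, so dropping the last strong Arc lets an entry expire on its own. A minimal sketch with std::sync (PathCache and its methods are illustrative names; a String stands in for a VfsEntry):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Weak};

struct PathCache {
    // Weak references: the cache never keeps an entry alive by itself.
    entries: HashMap<String, Weak<String>>,
}

impl PathCache {
    fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    fn insert(&mut self, path: &str, entry: &Arc<String>) {
        self.entries.insert(path.to_string(), Arc::downgrade(entry));
    }

    // A hit only succeeds while someone still holds a strong Arc.
    fn lookup(&self, path: &str) -> Option<Arc<String>> {
        self.entries.get(path).and_then(Weak::upgrade)
    }

    // Weak-reference cleanup: drop map slots whose entries have expired.
    fn prune(&mut self) {
        self.entries.retain(|_, weak| weak.upgrade().is_some());
    }
}

fn main() {
    let mut cache = PathCache::new();
    let entry = Arc::new("vfs entry for /tmp/file".to_string());
    cache.insert("/tmp/file", &entry);
    assert!(cache.lookup("/tmp/file").is_some()); // hit while entry is alive
    drop(entry);
    assert!(cache.lookup("/tmp/file").is_none()); // expired automatically
    cache.prune(); // reclaim the dead map slot
}
```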
§Migration from VFS v1
VFS v2 maintains compatibility with existing code while providing improved APIs. The old interfaces are deprecated but still functional for transition purposes.
This architecture enables flexible deployment scenarios from simple shared filesystems to complete filesystem isolation with selective resource sharing for containerized applications, all while maintaining high performance and POSIX compatibility.
Re-exports§
pub use vfs_v2::manager::VfsManager;
pub use crate::object::capability::file::SeekFrom;
pub use crate::object::capability::file::FileObject;
pub use vfs_v2::*;
pub use params::*;
Modules§
Structs§
- DeviceFileInfo - Information about device files in the filesystem
- Directory - Structure representing a directory
- DirectoryEntry - Binary representation of a directory entry for the system call interface. This structure has a fixed layout for efficient copying between kernel and user space.
- DirectoryEntryInternal - Structure representing a directory entry (internal representation)
- FileMetadata
- FilePermission
- FileSystemDriverManager - Global filesystem driver manager singleton
- FileSystemError
Enums§
- FileSystemErrorKind
- FileSystemType - Enum defining the type of file system
- FileType
Constants§
Statics§
- FS_DRIVER_MANAGER 🔒 - Singleton for global access to the FileSystemDriverManager
Traits§
- FileSystemDriver - Trait for file system drivers
Functions§
- get_fs_driver_manager - Global filesystem driver manager singleton