A memory analysis toolkit with specialized tracking strategies for single-threaded, multi-threaded, and async Rust applications. 238 source files, 2450+ tests, and 235k+ lines of code provide memory insights with minimal overhead.
memscope-rs provides four tracking strategies selected based on your application patterns:
| Strategy | Use Case | Performance | Best For |
|---|---|---|---|
| 🧩 Core Tracker | Development & debugging | Zero overhead | Precise analysis with `track_var!` macros |
| 🔀 Lock-free Multi-threaded | High concurrency (100+ threads) | Thread-local sampling | Production monitoring, reduced contention |
| ⚡ Async Task-aware | async/await applications | < 5ns per allocation | Context-aware async task tracking |
| 🔄 Unified Backend | Complex hybrid applications | Adaptive routing | Automatic strategy selection and switching |
```rust
use memscope_rs::{track_var, track_var_smart, track_var_owned};

fn main() {
    // Zero-overhead reference tracking (recommended)
    let data = vec![1, 2, 3, 4, 5];
    track_var!(data);

    // Smart tracking (automatic strategy selection)
    let number = 42i32;       // Copy type - copied
    let text = String::new(); // Non-copy - tracked by reference
    track_var_smart!(number);
    track_var_smart!(text);

    // Ownership tracking (precise lifecycle analysis)
    let tracked = track_var_owned!(vec![1, 2, 3]);

    // Export with multiple formats
    memscope_rs::export_user_variables_json("analysis.json").unwrap();
    memscope_rs::export_user_variables_binary("analysis.memscope").unwrap();
}
```

```rust
use memscope_rs::lockfree;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Initialize lock-free tracking
    lockfree::initialize_lockfree_tracking()?;

    // Spawn many threads (scales to 100+ threads)
    let handles: Vec<_> = (0..100).map(|i| {
        std::thread::spawn(move || {
            // Thread-local tracking with intelligent sampling
            for j in 0..1000 {
                let data = vec![i; j % 100 + 1];
                lockfree::track_allocation(&data, &format!("data_{}_{}", i, j));
            }
        })
    }).collect();

    for handle in handles {
        handle.join().unwrap();
    }

    // Aggregate and analyze all threads
    let analysis = lockfree::aggregate_all_threads()?;
    lockfree::export_analysis(&analysis, "lockfree_analysis")?;
    Ok(())
}
```

```rust
use memscope_rs::async_memory;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Initialize async-aware tracking
    async_memory::initialize().await?;

    // Track memory across async tasks
    let tasks: Vec<_> = (0..50).map(|i| {
        tokio::spawn(async move {
            let data = vec![i; 1000];
            async_memory::track_in_task(&data, &format!("async_data_{}", i)).await;
            // Simulate async work
            tokio::time::sleep(tokio::time::Duration::from_millis(10)).await;
        })
    }).collect();

    futures::future::join_all(tasks).await;

    // Export task-aware analysis
    let analysis = async_memory::generate_analysis().await?;
    async_memory::export_visualization(&analysis, "async_analysis").await?;
    Ok(())
}
```

```rust
use memscope_rs::unified::{UnifiedBackend, BackendConfig};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Initialize unified backend with automatic detection
    let mut backend = UnifiedBackend::initialize(BackendConfig::default())?;

    // Backend automatically detects environment and selects optimal strategy:
    // - Single-threaded: Core tracker
    // - Multi-threaded: Lock-free tracker
    // - Async runtime: Async-aware tracker
    // - Mixed: Hybrid strategy
    let session = backend.start_tracking()?;

    // Your application logic here - tracking happens transparently
    let data = vec![1, 2, 3, 4, 5];
    // Backend handles tracking automatically

    // Collect comprehensive analysis
    let analysis = session.collect_data()?;
    let final_data = session.end_session()?;

    // Export unified analysis
    backend.export_analysis(&final_data, "unified_analysis")?;
    Ok(())
}
```

- JSON Export: Human-readable with interactive HTML dashboards
- Binary Export: High-performance format (5-10x faster, 60-80% smaller)
- Streaming Export: Memory-efficient for large datasets
- HTML Dashboards: Interactive real-time visualization
- Automatic Detection: Rc, Arc, Box, and custom smart pointers
- Reference Counting: Accurate ref count tracking
- Lifecycle Analysis: Comprehensive ownership history
- Memory Safety: Enhanced safety analysis and validation
- Low Overhead: Optimized tracking with configurable sampling (2-8% overhead)
- Thread Safety: Multi-threading support for 100+ threads with reduced contention
- Sampling Support: Adaptive sampling for production environments
- Error Recovery: Error handling and degradation mechanisms
- Memory Passport System: FFI boundary tracking with lifecycle documentation
- Cross-Boundary Risk Detection: Analysis of Rust/C/C++ memory handovers and potential leaks
- Dynamic Safety Violation Detection: Real-time detection of double-free, use-after-free, and buffer overflows
- FFI Memory Custody Tracking: Track memory ownership across language boundaries with passport status monitoring
- Smart Pointer Lifecycle Analysis: Complete Rc, Arc, Box ownership chain tracking with reference counting
- Container Deep Analysis: Specialized tracking for Vec, HashMap, BTreeMap with growth pattern analysis
- Drop Chain Analysis: Complex destructor chain analysis with RAII violation detection
- Unsafe Code Risk Assessment: Risk scoring for unsafe blocks, raw pointers, and transmutes
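The thread-local, adaptive sampling mentioned above can be pictured as a per-thread counter that records only every Nth allocation, so threads never contend on shared state. A minimal sketch of the idea (illustrative only, not the crate's actual implementation):

```rust
use std::cell::Cell;

// Each thread keeps its own counter; recording 1-in-`rate` allocations
// needs no cross-thread synchronization at all.
thread_local! {
    static ALLOC_COUNT: Cell<u64> = Cell::new(0);
}

/// Returns true when this allocation should be recorded (1-in-`rate` sampling).
fn should_sample(rate: u64) -> bool {
    ALLOC_COUNT.with(|c| {
        let n = c.get() + 1;
        c.set(n);
        n % rate == 0
    })
}

fn main() {
    // With a 1-in-10 rate, 1000 allocations yield exactly 100 samples.
    let sampled = (0..1000).filter(|_| should_sample(10)).count();
    println!("sampled {} of 1000 allocations", sampled);
    assert_eq!(sampled, 100);
}
```

An adaptive scheme would additionally raise `rate` when allocation pressure grows, trading detail for overhead.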
| Strategy | Overhead | Best Use Case |
|---|---|---|
| Reference Tracking | ~0% (zero-cost) | Development debugging |
| Ownership Tracking | ~5-10% | Precise lifecycle analysis |
| Lock-free Multi-threaded | ~2-8% (adaptive sampling) | High concurrency production |
| Async Task-aware | < 5ns per allocation | Async applications |
| Format | Speed vs JSON | Size vs JSON | Use Case |
|---|---|---|---|
| Binary Export | 5-10x faster | 60-80% smaller | Production, large datasets |
| JSON Export | Baseline | Baseline | Development, debugging |
| Streaming Export | Memory-efficient | Variable | Large datasets, limited memory |
| Metric | Single-threaded | Multi-threaded | Async |
|---|---|---|---|
| Concurrency | 1 thread | 100+ threads | 50+ tasks |
| Variables | 1M+ variables | 100K+ per thread | 10K+ per task |
| Memory Usage | ~50KB + 100B/var | Thread-local pools | Task-local buffers |
| Module | Export Time | File Size | Use Case |
|---|---|---|---|
| Single-threaded | 1.3s | 1.2MB | Development analysis |
| Multi-threaded | 211ms | 480KB | Production monitoring |
| Async | 800ms | 800KB | Task performance analysis |
| Hybrid | 2.1s | 2.5MB | Comprehensive analysis |
Based on actual test results from example applications
All modules generate rich, interactive HTML dashboards:
- Memory Timeline: Real-time allocation/deallocation patterns
- Thread Analysis: Per-thread memory usage and performance metrics
- Task Insights: Async task lifecycle and resource usage
- Smart Pointer Tracking: Reference counting and relationship analysis
- Leak Detection: Automatic identification of potential memory leaks
- Performance Bottlenecks: CPU, I/O, and memory correlation analysis
```bash
# Clone the repository
git clone https://github.com/TimWood0x10/memscope-rs
cd memscope-rs

# Try each module:
cargo run --example basic_usage                  # 🧩 Single-threaded
cargo run --example complex_multithread_showcase # 🔀 Multi-threaded
cargo run --example comprehensive_async_showcase # ⚡ Async
cargo run --example enhanced_30_thread_demo      # 🔄 Hybrid

# Generate HTML reports:
make html DIR=MemoryAnalysis BASE=basic_usage
```

- Core Modules Overview - Complete comparison of all four tracking strategies
- Single-threaded Module - Zero-overhead `track_var!` macros with examples
- Multi-threaded Module - Lock-free high-concurrency tracking for 20+ threads
- Async Module - Task-centric memory analysis for async/await applications
- Hybrid Module - Comprehensive cross-module analysis and visualization
- Getting Started - Installation, quick start, and basic tutorials
- User Guide - Tracking macros, analysis, export formats, CLI tools
- API Reference - Complete API documentation with examples
- Examples - Real-world usage examples and integration guides
- Advanced Features - Binary format, custom allocators, performance optimization
- English Documentation - Complete English documentation
- 中文文档 - 完整的中文文档
- Non-intrusive tracking: Use the `track_var!` macro to track variables without breaking your existing code (we promise!)
- Smart pointer support: Full support for `Rc<T>`, `Arc<T>`, and `Box<T>` - because Rust loves its smart pointers
- Lifecycle analysis: Automatic recording of variable lifecycles from birth to... well, drop
- Reference count monitoring: Real-time tracking of smart pointer reference count changes (watch those Rc clones!)
- Memory leak detection: Find those sneaky leaks hiding in your code
- Fragmentation analysis: Basic heap fragmentation reporting
- Usage pattern detection: Simple memory usage pattern recognition
- Performance issue identification: Spot memory-related bottlenecks
- JSON export: Export detailed memory allocation data for programmatic analysis
- Binary export: Efficient binary format for large datasets with faster I/O
- SVG visualization: Generate memory usage charts and timelines (pretty pictures!)
- 🎯 HTML Interactive Dashboard: Full-featured web-based dashboard with clickable charts, filterable data, and real-time analysis
- Binary → HTML: Convert binary snapshots directly to interactive HTML dashboards
- JSON → HTML: Transform JSON analysis data into rich web visualizations
- Multiple export modes: Fast mode, detailed mode, and "let the computer decide" mode
- FFI boundary tracking: Monitor memory interactions between Rust and C/C++ code
- Security violation detection: Identify potential memory safety issues
- Use-after-free detection: Catch those "oops, I used it after freeing it" moments
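Conceptually, double-free and use-after-free detection reduce to bookkeeping over the set of live addresses: every free and every access is checked against that set. A toy registry illustrating the idea (not the crate's actual detector):

```rust
use std::collections::HashSet;

/// Toy allocation registry: records live addresses and flags frees or
/// accesses that reference an address that is no longer live.
#[derive(Default)]
struct Registry {
    live: HashSet<usize>,
}

impl Registry {
    fn on_alloc(&mut self, addr: usize) {
        self.live.insert(addr);
    }
    /// Returns Err on a double free.
    fn on_free(&mut self, addr: usize) -> Result<(), &'static str> {
        if self.live.remove(&addr) { Ok(()) } else { Err("double free") }
    }
    /// Returns Err on a use-after-free.
    fn on_access(&self, addr: usize) -> Result<(), &'static str> {
        if self.live.contains(&addr) { Ok(()) } else { Err("use after free") }
    }
}

fn main() {
    let mut reg = Registry::default();
    reg.on_alloc(0x1000);
    assert!(reg.on_access(0x1000).is_ok()); // live: fine
    assert!(reg.on_free(0x1000).is_ok());   // first free: fine
    assert_eq!(reg.on_access(0x1000), Err("use after free"));
    assert_eq!(reg.on_free(0x1000), Err("double free"));
    println!("both violations caught");
}
```

The real tool does this at the allocator/FFI boundary, where the hooks see every allocation rather than hand-registered addresses.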
```bash
# Basic usage demonstration
cargo run --example basic_usage

# Comprehensive memory analysis showcase
cargo run --example comprehensive_memory_analysis

# Complex lifecycle showcase
cargo run --example comprehensive_binary_to_html_demo

# Memory stress test (warning: may stress your computer too)
cargo run --example heavy_workload_test

# Multi-threaded stress test
cargo run --example multithreaded_stress_test

# Performance test
cargo run --example performance_benchmark_demo

# Realistic usage with extensions
cargo run --example realistic_usage_with_extensions

# Large-scale binary comparison
cargo run --example large_scale_binary_comparison

# Unsafe/FFI safety demo (for the brave souls)
cargo run --example unsafe_ffi_demo

# Async basic test
cargo run --example async_basic_test

# Simple binary test
cargo run --example simple_binary_test

# JSON export test
cargo run --example test_binary_to_json
```

```rust
use memscope_rs::{init, track_var, get_global_tracker};

fn main() {
    // Initialize memory tracking (don't forget this, or nothing will work!)
    init();

    // Create and track variables
    let my_vec = vec![1, 2, 3, 4, 5];
    track_var!(my_vec);

    let my_string = String::from("Hello, memscope!");
    track_var!(my_string);

    let my_box = Box::new(42); // The answer to everything
    track_var!(my_box);

    // Variables work normally (tracking is invisible, like a good spy)
    println!("Vector: {:?}", my_vec);
    println!("String: {}", my_string);
    println!("Box: {}", *my_box);

    // Export analysis results
    let tracker = get_global_tracker();
    if let Err(e) = tracker.export_to_json("my_analysis") {
        eprintln!("Export failed: {} (this shouldn't happen, but computers...)", e);
    }
}
```

```rust
use memscope_rs::track_var;
use std::rc::Rc;
use std::sync::Arc;

// Track reference-counted pointers
let rc_data = Rc::new(vec![1, 2, 3]);
track_var!(rc_data);

// Track atomic reference-counted pointers (for when you need thread safety)
let arc_data = Arc::new(String::from("shared data"));
track_var!(arc_data);

// Cloning operations are also tracked (watch the ref count go up!)
let rc_clone = Rc::clone(&rc_data);
track_var!(rc_clone);
```

```rust
use memscope_rs::ExportOptions;

let options = ExportOptions::new()
    .include_system_allocations(false) // Fast mode (recommended)
    .verbose_logging(true)             // For when you want ALL the details
    .buffer_size(128 * 1024);          // 128KB buffer (because bigger is better, right?)

if let Err(e) = tracker.export_to_json_with_options("detailed_analysis", options) {
    eprintln!("Export failed: {}", e);
}
```

```bash
# Clone and setup
git clone https://github.com/TimWood0x10/memscope-rs
cd memscope-rs

# Build and test basic functionality
make build
make run-basic

# Generate HTML report
make html DIR=MemoryAnalysis/basic_usage BASE=user OUTPUT=memory_report.html VERBOSE=1
open ./MemoryAnalysis/basic_usage/memory_report.html

# Fast benchmarks (recommended)
make benchmark-main # ~2 minutes

# Comprehensive benchmarks
make run-benchmark        # Full performance analysis
make run-core-performance # Core system evaluation
make run-simple-benchmark # Quick validation

# Stress testing
cargo run --example heavy_workload_test
cargo run --example multithreaded_stress_test
```

- Rust: 1.85 or later (required for bincode 2.0.1 compatibility)
- OS: Linux, macOS, Windows (basically everywhere Rust runs)
- Memory: At least 4GB RAM recommended (for analyzing large projects)
```bash
# Clone the repository
git clone https://github.com/TimWood0x10/memscope-rs.git
cd memscope-rs

# Build the project (grab a coffee, this might take a moment)
make build

# Run tests
cargo test

# Try an example
make run-basic
```

```text
├── complex_lifecycle_snapshot_complex_types.json
├── complex_lifecycle_snapshot_lifetime.json
├── complex_lifecycle_snapshot_memory_analysis.json
├── complex_lifecycle_snapshot_performance.json
├── complex_lifecycle_snapshot_security_violations.json
└── complex_lifecycle_snapshot_unsafe_ffi.json
```

```bash
# Export to different formats
make html DIR=MemoryAnalysis/basic_usage OUTPUT=memory_report.html # JSON → HTML
cargo run --example comprehensive_binary_to_html_demo              # Binary → HTML
cargo run --example large_scale_binary_comparison                  # Binary format comparison demo

# View generated dashboards
open memory_report.html        # From JSON conversion
open comprehensive_report.html # From binary conversion

# You can view the HTML interface examples in ./images/*.html
```

```bash
# Add to your project
cargo add memscope-rs
```

Or manually add to Cargo.toml:

```toml
[dependencies]
memscope-rs = "0.1.10"
```

Available features:

- `backtrace` - Enable stack trace collection (adds overhead, but gives you the full story)
- `derive` - Enable derive macro support
- `tracking-allocator` - Custom allocator support (enabled by default)
After running programs, you'll find analysis results in the MemoryAnalysis/ directory:
```text
├── basic_usage_memory_analysis.json     // comprehensive memory data
├── basic_usage_lifetime.json            // variable lifetime info
├── basic_usage_performance.json         // performance metrics
├── basic_usage_security_violations.json // security analysis
├── basic_usage_unsafe_ffi.json          // unsafe && ffi info
├── basic_usage_complex_types.json       // complex types data
└── memory_report.html                   // interactive dashboard
```
The generated dashboard.html provides a rich, interactive experience:
- 📊 Interactive Charts: Click and zoom on memory usage graphs
- 🔍 Filterable Data Tables: Search and filter allocations by type, size, or lifetime
- 📈 Real-time Statistics: Live updating memory metrics and trends
- 🎯 Variable Drill-down: Click on any variable to see detailed lifecycle information
- 📱 Responsive Design: Works on desktop, tablet, and mobile browsers
- 🔗 Cross-references: Navigate between related allocations and smart pointer relationships
To view the dashboard:
```bash
# output html
make html DIR=YOUR_JSON_DIR BASE=complex_lifecycle OUTPUT=improved_tracking_final.html

# After running your tracked program
open MemoryAnalysis/your_analysis_name/dashboard.html
# Or simply double-click the HTML file in your file manager
```

- Use macros for tracking without changing your code structure
- Variables work normally after tracking (no weird side effects)
- Selective tracking of key variables instead of global tracking (because sometimes less is more)
- Automatic identification of memory usage patterns and anomalies
- Smart pointer reference count change tracking
- Variable relationship analysis and dependency graph generation
- JSON data for programmatic processing and integration
- SVG charts for intuitive visualization
- HTML dashboard for interactive analysis (with actual buttons to click!)
- Fast export mode to reduce performance overhead
- Parallel processing support for large datasets
- Configurable buffer sizes for I/O optimization
- FFI boundary memory safety checks
- Automatic detection of potential security vulnerabilities
- Memory access pattern safety assessment
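The export-performance points above (fast mode, configurable buffer sizes) come down to batching I/O: many small writes go through one large in-memory buffer before hitting the disk. A small self-contained sketch using std's `BufWriter` (the 128 KB figure mirrors the docs; the file name is made up for the example):

```rust
use std::fs::File;
use std::io::{BufWriter, Write};
use std::path::Path;

/// Write `n` small JSON lines through a buffer of `buf_size` bytes.
/// A larger buffer turns thousands of tiny writes into a handful of syscalls.
fn export_lines(path: &Path, n: usize, buf_size: usize) -> std::io::Result<()> {
    let mut out = BufWriter::with_capacity(buf_size, File::create(path)?);
    for i in 0..n {
        writeln!(out, "{{\"id\": {}}}", i)?;
    }
    out.flush() // don't lose the tail of the buffer
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("memscope_buffered_demo.json");
    export_lines(&path, 1000, 128 * 1024)?;
    let lines = std::fs::read_to_string(&path)?.lines().count();
    assert_eq!(lines, 1000);
    println!("wrote {} records via buffered I/O", lines);
    Ok(())
}
```

Shrinking the buffer (as in the troubleshooting section below) trades throughput for a smaller peak memory footprint during export.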
| Feature | memscope-rs | Valgrind | Heaptrack | jemalloc |
|---|---|---|---|---|
| Rust Native | ✅ | ❌ | ❌ | ❌ |
| Variable Names | ✅ | ❌ | ❌ | ❌ |
| Smart Pointer Analysis | ✅ | ❌ | ❌ | ❌ |
| Visual Reports | ✅ | ❌ | ✅ | ❌ |
| Production Ready | ✅ | ✅ | ✅ | ✅ |
| Interactive Timeline | ✅ | ❌ | ❌ | ❌ |
| Real-time Tracking | ✅ | ✅ | ✅ | ✅ |
| Low Overhead | ✅ | ❌ | ✅ | ✅ |
| 🌟 Memory Passport System | ✅ | ❌ | ❌ | ❌ |
| 🌟 FFI Boundary Tracking | ✅ | ❌ | ❌ | ❌ |
| 🌟 Dynamic Safety Detection | ✅ | ❌ | ❌ | ❌ |
| 🌟 Risk Assessment Engine | ✅ | ❌ | ❌ | ❌ |
| 🌟 Multi-layered Leak Detection | ✅ | ❌ | ❌ | ❌ |
| 🌟 Unsafe Code Analysis | ✅ | ❌ | ❌ | ❌ |
| Mature Ecosystem | ❌ | ✅ | ✅ | ✅ |
memscope-rs (this project)
- ✅ Strengths: Rust native, variable name tracking, smart pointer analysis, interactive visualization
- ✅ Current status: Stable with testing coverage (2450+ tests passing), mature codebase
- ⚠️ Considerations: Monitor performance overhead in high-frequency scenarios, test thoroughly in your specific environment
Valgrind
- ✅ Strengths: Widely used, mature, feature-rich, stable
- ⚠️ Limitations: Not Rust native, significant performance overhead, steep learning curve
- 🎯 Best for: Deep memory debugging, complex problem troubleshooting
Heaptrack
- ✅ Strengths: Mature profiling tool, good visualization, relatively low overhead
- ⚠️ Limitations: Mainly for C/C++, limited Rust-specific features
- 🎯 Best for: Performance analysis, memory usage optimization
jemalloc
- ✅ Strengths: Stable allocator, good performance, built-in analysis features
- ⚠️ Limitations: Mainly an allocator, basic analysis functionality
- 🎯 Best for: Production environments, performance optimization
Good scenarios:
- 🔍 Rust project development debugging - Want to understand specific variable memory usage
- 📚 Learning Rust memory management - Visualize ownership and borrowing concepts
- 🧪 Prototype validation - Quickly verify memory usage patterns
- 🎯 Smart pointer analysis - Deep dive into Rc/Arc reference count changes
Use with caution:
- ⚠️ Production environments - Recommend thorough testing in staging first
- ⚠️ High-performance requirements - Monitor tracking overhead in your specific use case
- ⚠️ Very large datasets - Performance may degrade with >1M allocations
- ⚠️ Complex memory issues - Consider using mature tools like Valgrind for deep debugging
Based on actual testing (not marketing numbers):
- Small programs: ~5-15% runtime overhead
- Memory usage: ~10-20% additional memory for tracking data
- Large datasets: Performance degrades with >1M allocations (optimization ongoing)
- Small datasets (< 1000 allocations): < 100ms
- Medium datasets (1000-10000 allocations): 100ms - 1s
- Large datasets (> 10000 allocations): Several seconds
- Performance optimization: Identify memory bottlenecks and optimization opportunities
- Memory leak troubleshooting: Locate and fix memory leak issues
- Code review: Analyze code memory usage patterns
- Educational demos: Demonstrate Rust memory management mechanisms
- Algorithm analysis: Understand memory behavior of data structures and algorithms
- Production-grade threading: Handles 100+ concurrent threads reliably with lock-free optimizations
- Async/await support: Comprehensive Future and task tracking
- Lock-free optimizations: Reduced contention and improved performance
- Hybrid analysis: Automatic detection of mixed execution patterns
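The lock-free optimizations above rest on atomic operations rather than mutexes: each recording thread bumps counters with relaxed ordering, so there is no lock to contend on. A self-contained sketch of the idea (illustrative only, not the crate's internals):

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

/// Spawn `threads` workers that each record `allocs` fake 64-byte
/// allocations into one shared atomic counter - no mutex involved.
fn track_lock_free(threads: usize, allocs: u64) -> u64 {
    let total = Arc::new(AtomicU64::new(0));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let total = Arc::clone(&total);
            thread::spawn(move || {
                for _ in 0..allocs {
                    total.fetch_add(64, Ordering::Relaxed); // pretend 64-byte alloc
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    total.load(Ordering::Relaxed)
}

fn main() {
    let bytes = track_lock_free(8, 1000);
    assert_eq!(bytes, 512_000); // 8 threads × 1000 allocs × 64 bytes
    println!("tracked {} bytes across 8 threads without locks", bytes);
}
```

A production design would shard counters per thread (as the "thread-local pools" above suggest) and aggregate at export time, avoiding even the shared cache line.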
Cross-language memory tracking with lifecycle documentation:
```rust
use memscope_rs::analysis::SafetyAnalyzer;

let analyzer = SafetyAnalyzer::new();

// Create a passport for an FFI memory handover
let passport_id = analyzer.create_memory_passport(
    ptr_address,
    size_bytes,
    PassportEventType::HandoverToFfi,
)?;

// Track lifecycle events
analyzer.record_passport_event(
    ptr_address,
    PassportEventType::FreedByForeign,
    "external_library_cleanup".to_string(),
)?;

// Detect leaks at shutdown
let leaked_passports = analyzer.finalize_passports_at_shutdown();
for leak in leaked_passports {
    println!("🚨 FFI Memory Leak Detected: {}", leak);
}
```

Memory Passport Features:
- Cross-Language Tracking: Monitor memory handed between Rust and C/C++
- Lifecycle Documentation: Complete event history from allocation to deallocation
- Leak Detection: Automatic detection of memory left in foreign custody
- Risk Assessment: Scoring of unsafe operations and boundary crossings
- Ownership Transfer: Track complex ownership scenarios across FFI boundaries
Multi-layered leak detection with pattern analysis:
```rust
use memscope_rs::quality::MemoryLeakChecker;
use std::time::Instant;

let mut checker = MemoryLeakChecker::new();

// Set a baseline for comparison
checker.set_baseline("critical_operation", initial_memory, initial_allocations);

// Continuous monitoring
let current = MemorySnapshot {
    memory_usage: current_memory,
    allocation_count: current_allocations,
    timestamp: Instant::now(),
};

let result = checker.check_for_leaks("critical_operation", &current);
if result.leak_detected {
    match result.severity {
        LeakSeverity::Critical => println!("🚨 CRITICAL: {:.2}MB/sec growth rate", result.growth_rate / 1_048_576.0),
        LeakSeverity::High => println!("⚠️ HIGH: Excessive memory growth detected"),
        LeakSeverity::Medium => println!("📊 MEDIUM: Moderate memory growth"),
        _ => {}
    }
}
```

Leak Detection Capabilities:
- Real-time Monitoring: Continuous growth rate analysis during operation
- Baseline Comparison: Detection based on expected vs actual usage
- Pattern Recognition: Identify allocation/deallocation imbalances and suspicious patterns
- Sensitivity Levels: Configurable detection sensitivity (Low, Medium, High, Paranoid)
- FFI Leak Detection: Specialized detection for cross-boundary memory leaks
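Stripped to its core, baseline comparison computes a growth rate from two snapshots and maps it to a severity. A self-contained sketch (the thresholds here are illustrative assumptions, not the crate's actual rules):

```rust
/// Bytes-per-second growth between a baseline and a current measurement.
fn growth_rate(baseline: u64, current: u64, elapsed_secs: f64) -> f64 {
    current.saturating_sub(baseline) as f64 / elapsed_secs
}

/// Map a growth rate to a severity bucket (thresholds are made up for the sketch).
fn severity(rate_bytes_per_sec: f64) -> &'static str {
    match rate_bytes_per_sec {
        r if r >= 1_048_576.0 => "critical", // >= 1 MB/s
        r if r >= 102_400.0 => "high",       // >= 100 KB/s
        r if r >= 10_240.0 => "medium",      // >= 10 KB/s
        _ => "none",
    }
}

fn main() {
    // 50 MB of growth over 10 seconds -> 5 MB/s: critical.
    let rate = growth_rate(100 * 1_048_576, 150 * 1_048_576, 10.0);
    assert_eq!(severity(rate), "critical");
    println!("growth {:.2} MB/sec -> {}", rate / 1_048_576.0, severity(rate));
}
```

The configurable sensitivity levels listed above would correspond to shifting these thresholds up or down.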
Real-time detection of memory safety violations:
- Double-Free Detection: Prevent attempts to free already-freed memory
- Use-After-Free Detection: Catch access to deallocated memory
- Buffer Overflow Detection: Monitor bounds violations in unsafe code
- Cross-Boundary Violations: Detect improper FFI memory handling
- Invalid Transmute Detection: Analyze unsafe type conversions

Risk scoring for unsafe operations:
```rust
use memscope_rs::analysis::SafetyAnalyzer;

let analyzer = SafetyAnalyzer::new();
let risk_assessment = analyzer.assess_risk(unsafe_operation, memory_context, call_stack);

match risk_assessment.risk_level {
    RiskLevel::Critical => {
        println!("🚨 CRITICAL RISK: Score {:.1}/100", risk_assessment.risk_score);
        println!("Mitigation: {:?}", risk_assessment.mitigation_suggestions);
    },
    RiskLevel::High => println!("⚠️ HIGH RISK: Requires immediate attention"),
    _ => {}
}
```

Risk Assessment Features:
- Rule-based Scoring: Risk calculation (0-100 scale) based on operation types
- Factor Analysis: Detailed breakdown of contributing risk factors
- Mitigation Suggestions: Actionable recommendations for risk reduction
- Context Awareness: Risk adjustment based on memory pressure and system state
- Weighted Risk Factors: Different risk weights for various unsafe operations
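Rule-based 0-100 scoring can be sketched as a weighted sum over risk factors, capped at 100. The factor names, weights, and level cutoffs below are illustrative assumptions, not the crate's actual rules:

```rust
/// Toy rule-based risk score on a 0-100 scale: each kind of unsafe
/// operation contributes a fixed weight, and the sum is capped.
fn risk_score(raw_pointer_derefs: u32, transmutes: u32, ffi_calls: u32) -> f64 {
    let score = raw_pointer_derefs as f64 * 5.0   // cheap but common
        + transmutes as f64 * 15.0                // rare and dangerous
        + ffi_calls as f64 * 8.0;                 // boundary crossings
    score.min(100.0)
}

/// Bucket a score into a risk level (cutoffs are made up for the sketch).
fn risk_level(score: f64) -> &'static str {
    match score {
        s if s >= 75.0 => "critical",
        s if s >= 50.0 => "high",
        s if s >= 25.0 => "medium",
        _ => "low",
    }
}

fn main() {
    let score = risk_score(3, 2, 4); // 15 + 30 + 32 = 77
    assert_eq!(risk_level(score), "critical");
    println!("risk {:.0}/100 -> {}", score, risk_level(score));
}
```

The "context awareness" bullet above would enter as a multiplier on the weights (e.g. scale everything up under memory pressure).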
- Large datasets: Optimized for datasets up to 1M allocations; sampling recommended for larger scales
- High-frequency systems: Configurable sampling available for production workloads
- Production deployment: Test suite available; recommend performance validation in your environment
The project uses a modular design:
- core/: Core tracking functionality and type definitions
- analysis/: Memory analysis algorithms and pattern recognition
- export/: Data export and visualization generation
- cli/: Command-line tools and user interface
- bin/: Executable analysis tools
Performance optimization:
```bash
# Use fast mode for reduced overhead
export MEMSCOPE_FAST_MODE=1

# Or disable expensive operations for large datasets
export MEMSCOPE_DISABLE_ANALYSIS=1
```

Export fails with large datasets:
```rust
// Use a smaller buffer or exclude system allocations
let options = ExportOptions::new()
    .include_system_allocations(false)
    .buffer_size(32 * 1024);
```

High memory usage:
```bash
# Disable backtrace collection
cargo run --no-default-features --features tracking-allocator
```

Permission errors on output:
```bash
# Ensure write permissions
mkdir -p MemoryAnalysis
chmod 755 MemoryAnalysis
```

Platform-specific configuration:
```bash
# For optimal performance on different platforms
export MEMSCOPE_PLATFORM_OPTIMIZED=1
```

We welcome contributions to continue improving memscope-rs! Please:
- Test thoroughly - Make sure your changes don't break existing functionality
- Document limitations - Be honest about what doesn't work
- Performance test - Measure the impact of your changes
- Keep it simple - Avoid over-engineering (we have enough complexity already)
```bash
# Development workflow
git clone https://github.com/TimWood0x10/memscope-rs
cd memscope-rs
make build
make run-basic
```

Licensed under either of:
- Apache License, Version 2.0 (LICENSE-APACHE)
- MIT License (LICENSE-MIT)
at your option.
Add to your Cargo.toml:
```toml
[dependencies]
memscope-rs = "0.1.10"

# Optional features
[features]
default = ["parking-lot"]
derive = ["memscope-rs/derive"]                       # Derive macros
enhanced-tracking = ["memscope-rs/enhanced-tracking"] # Advanced analysis
system-metrics = ["memscope-rs/system-metrics"]       # System monitoring
```

memscope-rs includes command-line tools:
```bash
# Analyze existing memory data
cargo run --bin memscope-analyze -- analysis.json

# Generate reports
cargo run --bin memscope-report -- --input analysis.memscope --format html

# Run performance benchmarks
cargo run --bin memscope-benchmark -- --threads 50 --allocations 10000
```

- API Documentation - Complete API reference
- User Guide - Step-by-step tutorials
- Examples - Real-world usage examples
- Performance Guide - Optimization tips
I need your feedback! While memscope-rs has useful functionality, I believe it can be even better with your help.
I've put tremendous effort into testing, but complex software inevitably has edge cases I haven't encountered. Your real-world usage scenarios are invaluable:
- Performance issues in your specific use case
- Compatibility problems with certain crates or Rust versions
- Unexpected behavior that doesn't match documentation
- Missing features that would make your workflow easier
- Create Issues: Open an issue - no matter how small!
- Share Use Cases: Tell me how you're using memscope-rs
- Report Performance: Let me know if tracking overhead is higher than expected
- Documentation Gaps: Point out anything confusing or unclear
Every issue report helps improve memscope-rs for the Rust community. I'm committed to:
- Quick responses to reported issues
- Transparent communication about fixes and improvements
- Recognition for your contributions
Together, we can build the best memory analysis tool for Rust! 🦀
We welcome contributions! Please see our Contributing Guide for details.
```bash
make test      # Run all tests
make check     # Check code quality
make benchmark # Run performance benchmarks
```

This project is licensed under either of:
- Apache License, Version 2.0 (LICENSE-APACHE)
- MIT License (LICENSE-MIT)
*Made with ❤️ and 🦀 by developers who care about memory (maybe too much)*