# LLM Benchmark Suite - Requirements

# Core dependencies
torch>=2.0.0
transformers>=4.35.0
accelerate>=0.24.0
tokenizers>=0.14.0

# Attention implementations
flash-attn>=2.0.0

# GPU monitoring
pynvml>=11.5.0   # NVIDIA GPU monitoring
pyrsmi>=1.0.0    # AMD GPU monitoring

# Utilities
numpy>=1.24.0
pyyaml>=6.0
tqdm>=4.65.0

# Optional: for better performance
triton>=2.0.0