Overview
These GPUs are validated for online RL and finetuning workflows. Availability may depend on your cloud provider.
Validated Types
- H100 (80GB)
- A100 (40GB/80GB)
- A10G
- L4
Notes
- RDMA configurations are supported where available. See RL → Topology for GPU partitioning between vLLM, trainer, and reference models.
- For best performance, match the number of GPUs allocated to vLLM with its tensor parallel size, as shown in the sketch after this list.
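A minimal sketch of the second note, assuming an 8-GPU node split between rollout generation, the trainer, and a frozen reference model. The `topology` dict and its key names are illustrative only and not part of any specific API; the model name is just an example. The point it shows is keeping vLLM's `tensor_parallel_size` equal to the GPU count reserved for inference.

```python
# Hypothetical partition of an 8-GPU node; these names are illustrative,
# not a real configuration schema.
topology = {
    "inference_gpus": 4,  # GPUs reserved for vLLM rollout generation
    "trainer_gpus": 3,    # GPUs reserved for the policy trainer
    "ref_gpus": 1,        # GPU reserved for the frozen reference model
}

from vllm import LLM

# Set vLLM's tensor parallel size to the inference GPU count so the rollout
# engine uses exactly the GPUs set aside for it in the topology.
llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # example model
    tensor_parallel_size=topology["inference_gpus"],
)
```

The remaining GPUs are left free for the trainer and reference model processes; how those are launched depends on your training stack.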