Once your model is in production, you can begin using it for inference and applying it to downstream applications. Oumi automatically version-controls all of your project assets (models, datasets, evaluations, evaluators, and recipes), so you can iterate quickly and export models for repeatable deployments. As data and usage patterns change, performance will inevitably degrade, making periodic retraining necessary. When this happens, you can use Oumi to evaluate, retrain, and redeploy your model within the same workflow.
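As a rough sketch, the evaluate → retrain → redeploy loop described above maps onto the Oumi CLI. The config file names here are illustrative placeholders, not files shipped with Oumi:

```shell
# Hedged sketch of the retraining loop; adjust config paths to your project.

# 1. Evaluate the deployed model to check for performance degradation.
oumi evaluate -c eval_config.yaml

# 2. If metrics have regressed, retrain on the updated dataset.
oumi train -c train_config.yaml

# 3. Re-run evaluation on the retrained checkpoint, then serve it for inference.
oumi evaluate -c eval_config.yaml
oumi infer -c infer_config.yaml
```

Because each step is driven by a version-controlled config, the same loop can be re-run repeatably as your data and usage patterns evolve.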

Running evaluations

Using the Builder to evaluate your model.

Diagnosing failure modes

Generate a new dataset directly from an evaluator’s identified failure modes.