Once you’ve trained and evaluated a model in Oumi, the next step is to export it for deployment or downstream inference workflows. Exporting lets you use the model outside the Oumi platform, whether for local testing, cloud-based inference, or integration into production systems. Oumi packages trained artifacts in a standard, portable format that works with common inference engines and serving frameworks, enabling a smooth transition from experimentation to real-world usage without additional conversion steps.
WHEN TO EXPORT A MODEL
You’re ready to export your model when:
- Evaluation results meet your predefined success criteria
- You’re ready to run inference locally or in the cloud
- You want to serve the model behind an API
- You plan to integrate the model into an external application or workflow
WHAT GETS EXPORTED
When you export a model from Oumi, the following artifacts are typically produced:
- Model weights: the trained parameters resulting from fine-tuning
- Tokenizer and configuration files: required for correct input/output handling
- Model metadata: information about the training run and configuration
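After export, it can be useful to confirm that the expected artifacts are actually present before attempting inference. The sketch below assumes Hugging Face-style file names (`config.json`, `model.safetensors`, `tokenizer_config.json`); your export may use different names or sharded weight files, so adjust the list to match what you see.

```python
from pathlib import Path

# Hypothetical artifact names; real exports may use sharded weights,
# additional tokenizer files, etc. Edit to match your export.
EXPECTED_FILES = [
    "config.json",            # model configuration
    "model.safetensors",      # trained weights
    "tokenizer_config.json",  # tokenizer settings
]

def missing_artifacts(export_dir: str) -> list[str]:
    """Return the expected files that are absent from export_dir."""
    root = Path(export_dir)
    return [name for name in EXPECTED_FILES if not (root / name).exists()]

# Example: fail fast if the export is incomplete.
# missing = missing_artifacts("exports/my-model")
# if missing:
#     raise FileNotFoundError(f"Export incomplete, missing: {missing}")
```

A check like this catches a truncated download or partial export early, instead of surfacing as a confusing load error inside an inference framework.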
HOW TO EXPORT A MODEL
To export a model in Oumi:
- Go to the Models page and click on the model name.
- On the model’s detail page, click the `Export` button.
- Click `Continue Export` to download the file to your local computer.
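If the download arrives as a single archive rather than a directory, you will need to unpack it before use. The sketch below assumes a zip archive and hypothetical paths; the actual download format may differ.

```python
import zipfile
from pathlib import Path

def unpack_export(archive_path: str, dest_dir: str) -> list[str]:
    """Extract a downloaded export archive; return the extracted file names."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)  # create the target directory
    with zipfile.ZipFile(archive_path) as zf:
        zf.extractall(dest)
        return zf.namelist()

# Example usage (hypothetical file names):
# files = unpack_export("my-model-export.zip", "exports/my-model")
```

Once unpacked, the directory can be pointed at by whichever inference engine or serving framework you plan to use.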