# The Configuration file

The configuration file, typically named `elbo.yaml`, has the following properties:

<table><thead><tr><th>Option</th><th>Description</th><th>Examples</th></tr></thead><tbody><tr><td><code>name</code></td><td>The name of your ML training task</td><td>"Hello, ELBO 💪"</td></tr><tr><td><code>gpu_class</code></td><td><p>The class of GPU to request. This can be one of the following:<br></p><ul><li><code>Economy</code> - Economy-class GPUs such as the Tesla K80 or Tesla M60. These are suited to simple training tasks or testing, and usually cost less than $1 per hour.</li><li><code>MidRange</code> - Mid-range GPUs such as the V100 or equivalent. These are more powerful and can be used for more compute-intensive tasks. They also have more GPU RAM (24 GB+), which is useful for generative models.</li><li><code>HighEnd</code> - The latest and most powerful GPUs, typically an NVIDIA A100. These can be expensive, roughly $9 to $30 per hour depending on usage.</li><li><code>All</code> - Shows all available GPU options.</li></ul></td><td><code>MidRange</code></td></tr><tr><td><code>setup</code> (Optional)</td><td><p>A setup script that is run before the training code is called</p></td><td><code>sudo apt-get install fish</code></td></tr><tr><td><code>requirements</code> (Optional)</td><td><p>The path to a <code>requirements.txt</code> file listing all dependencies of the training code.</p></td><td><pre><code>pandas
numpy
torch
pytorch_lightning
tqdm
torchvision
wandb    
</code></pre></td></tr><tr><td><code>run</code></td><td>The main training script. Task execution calls this file directly.</td><td><code>main.py</code></td></tr><tr><td><code>task_dir</code></td><td><p>The directory containing this task, usually the current directory. It is zipped and uploaded to run the task.</p><p>Make sure all files and scripts needed to run the training code are present in this directory.</p></td><td><code>.</code></td></tr><tr><td><code>artifacts</code></td><td>The directory where your code writes model checkpoints, plots, generated files, etc. The ELBO service packages this directory and saves it for you to download after the task completes.</td><td><code>artifacts</code></td></tr><tr><td><code>keep_alive</code></td><td>Setting this to <code>True</code> keeps the node running after the job completes.</td><td><code>True</code></td></tr></tbody></table>
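Before submitting, it can help to sanity-check the config against the options above. The sketch below follows the option table on this page, but the validator itself is illustrative and not part of the ELBO CLI; the set of required keys is an assumption based on which options the table marks as optional.

```python
# Sanity-check a parsed elbo.yaml config before submitting a task.
# The schema follows the option table on this page; this validator
# is illustrative only, not part of the ELBO tooling.

ALLOWED_GPU_CLASSES = {"Economy", "MidRange", "HighEnd", "All"}
# Assumed required set: every option the table does not mark "(Optional)".
REQUIRED_KEYS = {"name", "gpu_class", "run", "task_dir", "artifacts"}

def validate_config(config: dict) -> list:
    """Return a list of problems found in a parsed elbo.yaml dict."""
    problems = []
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        problems.append("missing required keys: %s" % sorted(missing))
    gpu_class = config.get("gpu_class")
    if gpu_class is not None and gpu_class not in ALLOWED_GPU_CLASSES:
        problems.append("unknown gpu_class: %r" % gpu_class)
    return problems

config = {
    "name": "Train MNIST Classifier",
    "gpu_class": "Economy",
    "run": "main.py",
    "task_dir": ".",
    "artifacts": "artifacts",
}
print(validate_config(config))  # prints [] (no problems found)
```

A config parsed with any YAML library yields a plain dict, so a check like this can run locally before the upload step.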

{% hint style="info" %}
**Tip:** If you are submitting a task for the first time, consider running it on an `Economy`-class machine first, then moving to a higher class once you see the model converging after a few epochs.
{% endhint %}

Here is a sample configuration with comments on what each property means:

```yaml
#
# ELBO Sample Config File for MNIST Classifier Task
#
# All paths are relative to where the `elbo.yaml` file is placed

name: "Train MNIST Classifier"

# The GPU class to use - Economy, MidRange, HighEnd, All
gpu_class: Economy

# The script to run for setting up the environment. For example - installing packages 
# on Ubuntu
setup: setup.sh

# The PIP requirements file. ELBO will install the requirements specified in this 
# file before launching the task.
requirements: requirements.txt

# The main entry point in the task. Once the script exits or terminates, the task
# is considered complete.
run: main.py

# The task directory, relative to this file. This directory will be tarballed and
# sent to the ELBO task executor for execution
task_dir: .

# Artifacts directory. This is the directory that will be copied over as output. All model-related
# files - checkpoints, generated samples, evaluation results, etc. - should be placed in this directory.
artifacts: artifacts

```
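Since `run` points at the task entry point and `artifacts` is the only directory the service packages for download, a minimal `main.py` just needs to write its outputs there. The following is a sketch under those assumptions; the training loop and the `metrics.json` file name are placeholders, not part of any ELBO convention.

```python
# A minimal main.py sketch: the entry point referenced by `run` in elbo.yaml.
# It writes its outputs into the artifacts directory so the ELBO service can
# package them when the task completes. The "training" here is a placeholder.
import json
import os

ARTIFACTS_DIR = "artifacts"  # should match the `artifacts` entry in elbo.yaml

def main():
    os.makedirs(ARTIFACTS_DIR, exist_ok=True)
    # ... real training would happen here ...
    metrics = {"epoch": 1, "loss": 0.42}  # placeholder results
    with open(os.path.join(ARTIFACTS_DIR, "metrics.json"), "w") as f:
        json.dump(metrics, f)

if __name__ == "__main__":
    main()
```

Anything the script leaves outside the artifacts directory is discarded with the node, so checkpoints, plots, and evaluation results should all land under `ARTIFACTS_DIR`.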

