Quick Start

This guide helps you set up the ELBO environment on your local machine.

Good to know: We are just getting started with this service and are actively building it. If you face any problems with the service or the API, please reach out to us at contact@elbo.ai.

Get your API keys

Your API requests are authenticated using API keys. Any request that doesn't include an API key will return an HTTP Authentication error.

Sign up for an account (it includes a 14-day trial period). You can retrieve your API key at any time from the website.

Set up your Virtual Environment

We recommend running Python in a virtual environment (or using conda). To install virtualenv, run:

pip3 install virtualenv virtualenvwrapper

And create an environment using:

virtualenv -p python3 .venv

Or, if virtualenv is not on your PATH:

~/Library/Python/3.9/bin/virtualenv -p python3 .venv

This creates a virtual Python environment in the .venv folder. To activate this environment use the command:

. .venv/bin/activate

Or the following if you are using the fish shell:

. .venv/bin/activate.fish
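After activating, you can sanity-check that the shell is now using the environment's interpreter. This is a generic Python check, not anything specific to ELBO:

```shell
# With the environment activated, the interpreter on PATH should live in .venv
which python3
python3 -c 'import sys; print(sys.prefix)'   # prints the .venv directory when the environment is active
```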

If you hit a Command not found error while running virtualenv, try running it from the user install location. This happens when the package was installed in the user path instead of the system global path.

~/Library/Python/3.9/bin/virtualenv
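If you would rather avoid the PATH issue entirely, Python 3 ships with a built-in venv module that needs no separate install. This is a standard-library alternative to virtualenv, not the documented ELBO workflow:

```shell
# Create and activate a virtual environment using the standard-library venv module
python3 -m venv .venv
. .venv/bin/activate
```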

Install the library

The best way to interact with our API is to use our elbo library. You can install it with the command below:

pip3 install elbo --upgrade

Good to know: The elbo package still resides in the Test PyPI repository. We will move it to the official repository once we are out of beta development.
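Since the package lives on Test PyPI, a plain `pip3 install elbo` may not find it unless it is also mirrored to the main index (which this guide does not state). As a fallback, pip's standard index options let you install from Test PyPI while still resolving dependencies from the main index:

```shell
# Install elbo from Test PyPI, pulling dependencies from the main PyPI index
pip3 install --upgrade \
    --index-url https://test.pypi.org/simple/ \
    --extra-index-url https://pypi.org/simple/ \
    elbo
```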

Log in to ELBO

Use the command line tool to log in.

elbo login

This prompts you to enter your token, which you can obtain by logging in on the ELBO welcome page.

Make your first task submission

Try out one of the sample ML submissions from our examples GitHub repository. First, clone the repository:

git clone https://github.com/elbo-ai/elbo-examples.git
cd elbo-examples/pytorch/mnist_classifier/

Submit the sample task:

elbo run --config elbo.yaml

Here is sample output from the command, which prompts you with a list of compute options from our providers:

elbo.client is starting 'Train MNIST Classifier' submission ...
elbo.client Hey Anu 👋, welcome!
elbo.client is uploading sources from ....
elbo.client upload successful.

elbo.client number of compute choices - 28
? Please choose: (Use arrow keys)
 »  $ 0.0028/hour        Micro (for testing)   2 cpu     1Gb mem    0Gb gpu-mem AWS (spot)
    $ 0.0150/hour     Standard (for testing)   1 cpu     2Gb mem    0Gb gpu-mem Linode (~ 9 mins to provision)
    $ 0.0770/hour        Micro (for testing)   2 cpu     1Gb mem    0Gb gpu-mem AWS
    $ 0.2700/hour           Nvidia Tesla K80   4 cpu    61Gb mem   12Gb gpu-mem AWS (spot)
    $ 0.6100/hour         Nvidia Quadro 4000  16 cpu    32Gb mem    8Gb gpu-mem TensorDock
    $ 0.9000/hour           Nvidia Tesla K80   4 cpu    61Gb mem   12Gb gpu-mem AWS
    $ 0.9180/hour                Nvidia V100   8 cpu    61Gb mem   16Gb gpu-mem AWS (spot)
    $ 0.9200/hour         Nvidia Quadro 5000   2 cpu     4Gb mem   16Gb gpu-mem FluidStack
    $ 0.9600/hour               Nvidia A5000   2 cpu    16Gb mem   24Gb gpu-mem TensorDock
    $ 1.4900/hour               Nvidia A4000  12 cpu    64Gb mem   16Gb gpu-mem FluidStack
    $ 1.4940/hour                 Nvidia A40   2 cpu    12Gb mem   48Gb gpu-mem TensorDock
    $ 1.5000/hour         Nvidia Quadro 6000   8 cpu    32Gb mem    0Gb gpu-mem Linode (~ 9 mins to provision)
    $ 1.5140/hour               Nvidia A6000   2 cpu    16Gb mem   48Gb gpu-mem TensorDock
    $ 2.1600/hour        8x Nvidia Tesla K80  32 cpu   488Gb mem   12Gb gpu-mem AWS (spot)
    $ 3.0000/hour      2x Nvidia Quadro 6000  16 cpu    64Gb mem    0Gb gpu-mem Linode (~ 9 mins to provision)
    $ 3.0600/hour                Nvidia V100   8 cpu    61Gb mem   16Gb gpu-mem AWS
    $ 3.6720/hour             4x Nvidia V100  32 cpu   244Gb mem   16Gb gpu-mem AWS (spot)
    $ 3.7460/hour             7x Nvidia V100   6 cpu     8Gb mem   16Gb gpu-mem TensorDock
    $ 4.3200/hour       16x Nvidia Tesla K80  64 cpu   732Gb mem   12Gb gpu-mem AWS (spot)
    $ 4.5000/hour      3x Nvidia Quadro 6000  20 cpu    96Gb mem    0Gb gpu-mem Linode (~ 9 mins to provision)
    $ 6.0000/hour      4x Nvidia Quadro 6000  24 cpu   128Gb mem    0Gb gpu-mem Linode (~ 9 mins to provision)
    $ 7.3440/hour             8x Nvidia V100  64 cpu   488Gb mem   16Gb gpu-mem AWS (spot)
    $ 7.9200/hour        8x Nvidia Tesla K80  32 cpu   488Gb mem   12Gb gpu-mem AWS
    $ 9.8318/hour             8x Nvidia A100  96 cpu  1152Gb mem   80Gb gpu-mem AWS (spot)
    $13.0360/hour             4x Nvidia V100  32 cpu   244Gb mem   16Gb gpu-mem AWS
    $14.4000/hour       16x Nvidia Tesla K80  64 cpu   732Gb mem   12Gb gpu-mem AWS
    $24.4800/hour             8x Nvidia V100  64 cpu   488Gb mem   16Gb gpu-mem AWS
    $32.7726/hour             8x Nvidia A100  96 cpu  1152Gb mem   80Gb gpu-mem AWS

That's it! 🥳 Monitor your task's progress using elbo show <task_id>.

Good to know: The list of compute options is sorted by best price-to-performance. Note that the cheapest option is not always the best choice, nor is the most expensive.
