2.3.7 Satellite: Open Interpreter
Handle: opint
URL: -
Open Interpreter lets LLMs run code (Python, JavaScript, Shell, and more) locally. You can chat with Open Interpreter through a ChatGPT-like interface in your terminal.
Note that Harbor uses the shortened opint service handle. For the CLI, you are free to use either the official interpreter name or the opint alias.
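For example, both of the following print the same CLI help:

# The official name and the Harbor alias are interchangeable
harbor interpreter --help
harbor opint --help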
Harbor lets you run interpreter as if it were installed on your local machine. A big disclaimer is that Harbor only supports the features of interpreter that are compatible with the Docker runtime. The official Docker Integration guide outlines those nicely.
We'll refer to the service as opint from now on.
# Pre-build the image for convenience
harbor build opint
# opint is only configured to run
# alongside an LLM backend service (ollama, litellm, mistral.rs);
# check that at least one of them is running, otherwise
# you'll see connection errors
harbor ps
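# [Example] To start ollama explicitly before using opint
# (assuming ollama is a service you want running):
harbor up ollama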
# See official CLI help
harbor opint --help

# See where profiles are located on the host
# Modify the profiles as needed
harbor opint profiles
# Ensure that any specific model is unset
# before setting the profile
harbor opint model ""
harbor opint args --profile <name>
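# Example, with a hypothetical profile named "local":
harbor opint model ""
harbor opint args --profile local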
# [Alternative] Set via the opint.cmd config
# Note: this resets .model and .args
harbor opint cmd --profile <name>

opint is pre-configured to run with ollama when it is also running.
# 0. Check your current default services
# ollama should be one of them
# See ollama models you have available
harbor defaults
harbor ollama models
# 1.1 Choose as big a model as you can
# afford for the best experience
harbor opint model codestral
# Execute in the target folder
harbor opint

# [Optional] If running __multiple__ backends
# at a time, you'll need to point opint to one of them
harbor opint backend vllm
# Set opint to use one of the models from
# the /v1/models endpoint of the backend
harbor opint model google/gemma-2-2b-it
# Execute in the target folder
harbor opint

To check if a backend is integrated with opint, look up the compose.x.opint.<backend>.yml file in the Harbor workspace.
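For instance, from the Harbor workspace directory, you can list those files to see which backends currently have an opint integration:

# Run from the Harbor workspace
ls compose.x.opint.*.yml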
The setup is identical to vllm:
- if running multiple backends, ensure that opint is pointed to one of them
- ensure that opint is configured to use one of the models from the backend's OpenAI API
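As a sketch, assuming litellm is the backend in question (any integrated backend follows the same pattern):

# Point opint to the backend
harbor opint backend litellm
# Use one of the models the backend serves via its OpenAI API
harbor opint model <model>
# Execute in the target folder
harbor opint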