ChatGradient
This will help you get started with DigitalOcean Gradient chat models.
Overview
Integration details
| Class | Package | Package downloads | Package latest |
| :--- | :--- | :--- | :--- |
| ChatGradient | langchain-gradient | | |
Setup
langchain-gradient uses the DigitalOcean Gradient Platform.
To use it, create a DigitalOcean account, acquire a DIGITALOCEAN_INFERENCE_KEY
API key from the Gradient Platform, and install the langchain-gradient
integration package.
Credentials
Head to the DigitalOcean Cloud Console:
- Sign up or log in to the DigitalOcean Cloud Console.
- Go to the Gradient Platform and navigate to Serverless Inference.
- Click Create model access key, enter a name, and create the key.
Once you've done this, set the DIGITALOCEAN_INFERENCE_KEY
environment variable:
```python
import getpass
import os

if not os.getenv("DIGITALOCEAN_INFERENCE_KEY"):
    os.environ["DIGITALOCEAN_INFERENCE_KEY"] = getpass.getpass(
        "Enter your DIGITALOCEAN_INFERENCE_KEY API key: "
    )
```
To get automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:

```python
# os.environ["LANGSMITH_TRACING"] = "true"
# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
```
Installation
The DigitalOcean Gradient integration lives in the langchain-gradient
package:
```python
%pip install -qU langchain-gradient
```
Note: you may need to restart the kernel to use updated packages.
Instantiation
Now we can instantiate our model object and generate chat completions:
```python
from langchain_gradient import ChatGradient

llm = ChatGradient(
    model="llama3.3-70b-instruct",
    # other params...
)
```
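With the model instantiated, a chat completion can be requested through LangChain's standard `invoke` interface, which accepts a list of `(role, content)` tuples. A minimal sketch (the translation prompt is illustrative, and the API call is guarded so it only runs when an inference key is present):

```python
import os

# Messages in LangChain's (role, content) tuple form.
messages = [
    ("system", "You are a helpful assistant that translates English to French."),
    ("human", "I love programming."),
]

# Only call the API when a key is available.
if os.getenv("DIGITALOCEAN_INFERENCE_KEY"):
    from langchain_gradient import ChatGradient

    llm = ChatGradient(model="llama3.3-70b-instruct")
    ai_msg = llm.invoke(messages)  # returns an AIMessage
    print(ai_msg.content)
```

The same `messages` list can be reused with `llm.stream(messages)` to receive the response incrementally instead of as a single message.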