
Vertex AI Embedding

Usage - Embedding

import litellm
from litellm import embedding

litellm.vertex_project = "hardy-device-38811"  # Your Project ID
litellm.vertex_location = "us-central1"        # Your project location

response = embedding(
    model="vertex_ai/textembedding-gecko",
    input=["good morning from litellm"],
)
print(response)
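
The call returns an OpenAI-compatible embedding response. A minimal sketch of inspecting it (assuming the OpenAI-style response shape that litellm mirrors; the dimensionality noted in the comment is model-dependent):

# response.data holds one entry per input string, in order
vector = response.data[0]["embedding"]
print(len(vector))     # embedding dimensionality, e.g. 768 for textembedding-gecko
print(response.usage)  # token usage reported for the request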

Supported Embedding Models

All models listed below are supported

| Model Name | Function Call |
|---|---|
| text-embedding-004 | embedding(model="vertex_ai/text-embedding-004", input) |
| text-multilingual-embedding-002 | embedding(model="vertex_ai/text-multilingual-embedding-002", input) |
| textembedding-gecko | embedding(model="vertex_ai/textembedding-gecko", input) |
| textembedding-gecko-multilingual | embedding(model="vertex_ai/textembedding-gecko-multilingual", input) |
| textembedding-gecko-multilingual@001 | embedding(model="vertex_ai/textembedding-gecko-multilingual@001", input) |
| textembedding-gecko@001 | embedding(model="vertex_ai/textembedding-gecko@001", input) |
| textembedding-gecko@003 | embedding(model="vertex_ai/textembedding-gecko@003", input) |
| text-embedding-preview-0409 | embedding(model="vertex_ai/text-embedding-preview-0409", input) |
| text-multilingual-embedding-preview-0409 | embedding(model="vertex_ai/text-multilingual-embedding-preview-0409", input) |
| Fine-tuned OR Custom Embedding models | embedding(model="vertex_ai/<your-model-id>", input) |
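
For example, calling the multilingual model from the table above works the same way as the first snippet (a minimal sketch; the project ID and location are the placeholder values used earlier):

import litellm
from litellm import embedding

litellm.vertex_project = "hardy-device-38811"  # Your Project ID
litellm.vertex_location = "us-central1"        # Your project location

response = embedding(
    model="vertex_ai/text-multilingual-embedding-002",
    input=["bonjour de litellm"],
)
print(response)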

Supported OpenAI (Unified) Params

| param | type | vertex equivalent |
|---|---|---|
| input | string or List[string] | instances |
| dimensions | int | output_dimensionality |
| input_type | Literal["RETRIEVAL_QUERY", "RETRIEVAL_DOCUMENT", "SEMANTIC_SIMILARITY", "CLASSIFICATION", "CLUSTERING", "QUESTION_ANSWERING", "FACT_VERIFICATION"] | task_type |

Usage with OpenAI (Unified) Params

import litellm

response = litellm.embedding(
    model="vertex_ai/text-embedding-004",
    input=["good morning from litellm", "gm"],
    input_type="RETRIEVAL_DOCUMENT",
    dimensions=1,
)
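
Because dimensions maps to Vertex's output_dimensionality, the returned vector should have the requested length. A quick sanity check (assuming the OpenAI-style response shape):

assert len(response.data[0]["embedding"]) == 1  # truncated to the requested dimensionality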

Supported Vertex Specific Params

| param | type |
|---|---|
| auto_truncate | bool |
| task_type | Literal["RETRIEVAL_QUERY", "RETRIEVAL_DOCUMENT", "SEMANTIC_SIMILARITY", "CLASSIFICATION", "CLUSTERING", "QUESTION_ANSWERING", "FACT_VERIFICATION"] |
| title | str |

Usage with Vertex Specific Params (Use task_type and title)

You can pass any Vertex-specific params to the embedding model; just pass them to the embedding function like this:

Relevant Vertex AI doc with all embedding params

import litellm

response = litellm.embedding(
    model="vertex_ai/text-embedding-004",
    input=["good morning from litellm", "gm"],
    task_type="RETRIEVAL_DOCUMENT",
    title="test",
    dimensions=1,
    auto_truncate=True,
)

BGE Embeddings

Use BGE (BAAI General Embedding) models deployed on Vertex AI.

Usage

Using BGE on Vertex AI
import litellm

response = litellm.embedding(
    model="vertex_ai/bge/<your-endpoint-id>",
    input=["Hello", "World"],
    vertex_project="your-project-id",
    vertex_location="your-location",
)

print(response)

Multi-Modal Embeddings

Known Limitations:

  • Only supports 1 image / video per request
  • Only supports GCS or base64 encoded images / videos (see the sketch after this list for producing a base64 input)
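
A minimal sketch of building a base64-encoded input from a local file, using only the Python standard library (the file path and JPEG media type are placeholder assumptions):

import base64

with open("local_image.jpg", "rb") as f:  # hypothetical local file
    b64 = base64.b64encode(f.read()).decode("utf-8")

# pass this string as `input` to litellm.embedding / litellm.aembedding
image_input = f"data:image/jpeg;base64,{b64}"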

Usage

Using GCS Images

response = await litellm.aembedding(
    model="vertex_ai/multimodalembedding@001",
    input="gs://cloud-samples-data/vertex-ai/llm/prompts/landmark1.png",  # will be sent as a GCS image
)
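
litellm.aembedding is a coroutine, so the snippets in this section must run inside an async function. A minimal runnable wrapper (assuming Vertex credentials, project, and location are already configured):

import asyncio
import litellm

async def main():
    response = await litellm.aembedding(
        model="vertex_ai/multimodalembedding@001",
        input="gs://cloud-samples-data/vertex-ai/llm/prompts/landmark1.png",
    )
    print(response)

asyncio.run(main())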

Using base64-encoded images

response = await litellm.aembedding(
    model="vertex_ai/multimodalembedding@001",
    input="data:image/jpeg;base64,...",  # will be sent as a base64-encoded image
)

Text + Image + Video Embeddings

Text + Image

response = await litellm.aembedding(
    model="vertex_ai/multimodalembedding@001",
    input=["hey", "gs://cloud-samples-data/vertex-ai/llm/prompts/landmark1.png"],  # text + GCS image
)

Text + Video

response = await litellm.aembedding(
    model="vertex_ai/multimodalembedding@001",
    input=["hey", "gs://my-bucket/embeddings/supermarket-video.mp4"],  # text + GCS video
)

Image + Video

response = await litellm.aembedding(
    model="vertex_ai/multimodalembedding@001",
    input=["gs://cloud-samples-data/vertex-ai/llm/prompts/landmark1.png", "gs://my-bucket/embeddings/supermarket-video.mp4"],  # GCS image + GCS video
)
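
Each input produces its own entry in response.data, in the same order as the input list (a sketch assuming the OpenAI-style response shape; multimodalembedding@001 typically returns 1408-dimensional vectors):

image_vec = response.data[0]["embedding"]  # embedding for the image input
video_vec = response.data[1]["embedding"]  # embedding for the video input
print(len(image_vec), len(video_vec))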