BGE Series#

In this part, we will walk through the BGE series and introduce how to use the BGE embedding models.

1. BAAI General Embedding#

BGE stands for BAAI General Embedding, a series of embedding models developed and published by the Beijing Academy of Artificial Intelligence (BAAI).

Full support for the APIs and related usage of BGE is maintained in FlagEmbedding on GitHub.

Run the following cell to install FlagEmbedding in your environment.

%%capture
%pip install -U FlagEmbedding
import os
# suppress advisory warnings from transformers
os.environ['TRANSFORMERS_NO_ADVISORY_WARNINGS'] = 'true'
# a single GPU is better for small tasks
os.environ['CUDA_VISIBLE_DEVICES'] = '0'

The full collection of BGE models can be found in the Hugging Face collection.

2. BGE Series Models#

2.1 BGE#

The very first version of BGE has six models, with 'large', 'base', and 'small' sizes for both English and Chinese.

| Model | Language | Parameters | Model Size | Description | Base Model |
|:------|:---------|:-----------|:-----------|:------------|:-----------|
| BAAI/bge-large-en | English | 335M | 1.34 GB | embedding model that maps text into vectors | BERT |
| BAAI/bge-base-en | English | 109M | 438 MB | a base-scale model with ability similar to bge-large-en | BERT |
| BAAI/bge-small-en | English | 33.4M | 133 MB | a small-scale model with competitive performance | BERT |
| BAAI/bge-large-zh | Chinese | 326M | 1.3 GB | embedding model that maps text into vectors | BERT |
| BAAI/bge-base-zh | Chinese | 102M | 409 MB | a base-scale model with ability similar to bge-large-zh | BERT |
| BAAI/bge-small-zh | Chinese | 24M | 95.8 MB | a small-scale model with competitive performance | BERT |

For inference, simply import FlagModel from FlagEmbedding and initialize the model.

from FlagEmbedding import FlagModel

# Load BGE model
model = FlagModel(
    'BAAI/bge-base-en',
    query_instruction_for_retrieval="Represent this sentence for searching relevant passages:",
    query_instruction_format='{}{}',
)

queries = ["query 1", "query 2"]
corpus = ["passage 1", "passage 2"]

# encode the queries and corpus
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode_corpus(corpus)

# compute the similarity scores
scores = q_embeddings @ p_embeddings.T
print(scores)
[[0.84864    0.7946737 ]
 [0.760097   0.85449743]]
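
By default, FlagModel L2-normalizes the output embeddings (normalize_embeddings=True), so the inner products above are cosine similarities. A quick sanity check, assuming the default normalization setting:

import numpy as np

# each row should have unit L2 norm, so `@` computes cosine similarity
print(np.linalg.norm(q_embeddings, axis=1))  # ~[1. 1.]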

For general encoding, use either encode():

FlagModel.encode(sentences, batch_size=256, max_length=512, convert_to_numpy=True)

or encode_corpus(), which directly calls encode():

FlagModel.encode_corpus(corpus, batch_size=256, max_length=512, convert_to_numpy=True)

The encode_queries() function concatenates the query_instruction_for_retrieval with each input query to form new sentences, then feeds them to encode().

FlagModel.encode_queries(queries, batch_size=256, max_length=512, convert_to_numpy=True)
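
Conceptually, encode_queries() is roughly equivalent to the following (a minimal sketch based on the behavior described above, not the library's exact internals):

# with query_instruction_format='{}{}', the instruction is simply prepended
instruction = "Represent this sentence for searching relevant passages:"
new_sentences = [f"{instruction}{q}" for q in queries]
q_embeddings = model.encode(new_sentences)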

2.2 BGE v1.5#

BGE v1.5 alleviates the issue of the similarity distribution and enhances the model's retrieval ability when used without an instruction.

| Model | Language | Parameters | Model Size | Description | Base Model |
|:------|:---------|:-----------|:-----------|:------------|:-----------|
| BAAI/bge-large-en-v1.5 | English | 335M | 1.34 GB | version 1.5 with more reasonable similarity distribution | BERT |
| BAAI/bge-base-en-v1.5 | English | 109M | 438 MB | version 1.5 with more reasonable similarity distribution | BERT |
| BAAI/bge-small-en-v1.5 | English | 33.4M | 133 MB | version 1.5 with more reasonable similarity distribution | BERT |
| BAAI/bge-large-zh-v1.5 | Chinese | 326M | 1.3 GB | version 1.5 with more reasonable similarity distribution | BERT |
| BAAI/bge-base-zh-v1.5 | Chinese | 102M | 409 MB | version 1.5 with more reasonable similarity distribution | BERT |
| BAAI/bge-small-zh-v1.5 | Chinese | 24M | 95.8 MB | version 1.5 with more reasonable similarity distribution | BERT |

You can use BGE v1.5 models in exactly the same way as BGE v1 models.

model = FlagModel(
    'BAAI/bge-base-en-v1.5',
    query_instruction_for_retrieval="Represent this sentence for searching relevant passages:",
    query_instruction_format='{}{}'
)

queries = ["query 1", "query 2"]
corpus = ["passage 1", "passage 2"]

# encode the queries and corpus
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode_corpus(corpus)

# compute the similarity scores
scores = q_embeddings @ p_embeddings.T
print(scores)
pre tokenize: 100%|██████████| 1/1 [00:00<00:00, 2252.58it/s]
pre tokenize: 100%|██████████| 1/1 [00:00<00:00, 3575.71it/s]
[[0.76   0.6714]
 [0.6177 0.7603]]

2.3 BGE M3#

BGE-M3 is the new version of the BGE models, distinguished by its versatility in:

  • Multi-Functionality: Simultaneously performs the three common retrieval functionalities of embedding models: dense retrieval, multi-vector retrieval, and sparse retrieval.

  • Multi-Linguality: Supports more than 100 working languages.

  • Multi-Granularity: Can process inputs of different granularities, spanning from short sentences to long documents of up to 8192 tokens.

For more details, feel free to check out the paper.

| Model | Language | Parameters | Model Size | Description | Base Model |
|:------|:---------|:-----------|:-----------|:------------|:-----------|
| BAAI/bge-m3 | Multilingual | 568M | 2.27 GB | Multi-Functionality (dense retrieval, sparse retrieval, multi-vector/ColBERT retrieval), Multi-Linguality, and Multi-Granularity (8192 tokens) | XLM-RoBERTa |

from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True)

sentences = ["What is BGE M3?", "Defination of BM25"]
Fetching 30 files: 100%|██████████| 30/30 [00:00<00:00, 194180.74it/s]
BGEM3FlagModel.encode(
    sentences, 
    batch_size=12, 
    max_length=8192, 
    return_dense=True, 
    return_sparse=False, 
    return_colbert_vecs=False
)

It returns a dictionary like:

{
    'dense_vecs':       # array of dense embeddings of the inputs if return_dense=True, otherwise None
    'lexical_weights':  # array of dictionaries mapping token ids to their corresponding weights if return_sparse=True, otherwise None
    'colbert_vecs':     # array of multi-vector embeddings of the inputs if return_colbert_vecs=True, otherwise None
}
# If you don't need such a long length of 8192 input tokens, you can set max_length to a smaller value to speed up encoding.
embeddings = model.encode(
    sentences, 
    max_length=10,
    return_dense=True, 
    return_sparse=True, 
    return_colbert_vecs=True
)
pre tokenize: 100%|██████████| 1/1 [00:00<00:00, 1148.18it/s]
print(f"dense embedding:\n{embeddings['dense_vecs']}")
print(f"sparse embedding:\n{embeddings['lexical_weights']}")
print(f"multi-vector:\n{embeddings['colbert_vecs']}")
dense embedding:
[[-0.03412  -0.04706  -0.00087  ...  0.04822   0.007614 -0.02957 ]
 [-0.01035  -0.04483  -0.02434  ... -0.008224  0.01497   0.011055]]
sparse embedding:
[defaultdict(<class 'int'>, {'4865': np.float16(0.0836), '83': np.float16(0.0814), '335': np.float16(0.1296), '11679': np.float16(0.2517), '276': np.float16(0.1699), '363': np.float16(0.2695), '32': np.float16(0.04077)}), defaultdict(<class 'int'>, {'262': np.float16(0.05014), '5983': np.float16(0.1367), '2320': np.float16(0.04517), '111': np.float16(0.0634), '90017': np.float16(0.2517), '2588': np.float16(0.3333)})]
multi-vector:
[array([[-8.68966337e-03, -4.89266850e-02, -3.03634931e-03, ...,
        -2.21243706e-02,  5.72856329e-02,  1.28355855e-02],
       [-8.92937183e-03, -4.67235669e-02, -9.52814799e-03, ...,
        -3.14785317e-02,  5.39088845e-02,  6.96671568e-03],
       [ 1.84195358e-02, -4.22310382e-02,  8.55499704e-04, ...,
        -1.97946690e-02,  3.84313315e-02,  7.71250250e-03],
       ...,
       [-2.55824160e-02, -1.65533274e-02, -4.21357416e-02, ...,
        -4.50234264e-02,  4.41286489e-02, -1.00052059e-02],
       [ 5.90990965e-07, -5.53734899e-02,  8.51499755e-03, ...,
        -2.29209941e-02,  6.04418293e-02,  9.39912070e-03],
       [ 2.57394509e-03, -2.92690992e-02, -1.89342294e-02, ...,
        -8.04431178e-03,  3.28964666e-02,  4.38723788e-02]], dtype=float32), array([[ 0.01724418,  0.03835401, -0.02309308, ...,  0.00141706,
         0.02995041, -0.05990082],
       [ 0.00996325,  0.03922409, -0.03849588, ...,  0.00591671,
         0.02722516, -0.06510868],
       [ 0.01781915,  0.03925728, -0.01710397, ...,  0.00801776,
         0.03987768, -0.05070014],
       ...,
       [ 0.05478653,  0.00755799,  0.00328444, ..., -0.01648209,
         0.02405782,  0.00363262],
       [ 0.00936953,  0.05028074, -0.02388872, ...,  0.02567679,
         0.00791224, -0.03257877],
       [ 0.01803976,  0.0133922 ,  0.00019365, ...,  0.0184015 ,
         0.01373822,  0.00315539]], dtype=float32)]
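
With all three representations available, each retrieval mode scores a sentence pair differently. The sketch below uses BGEM3FlagModel's utility methods compute_lexical_matching_score(), colbert_score(), and convert_id_to_token() to compare the two example sentences:

# dense: inner product of the dense vectors
dense_score = embeddings['dense_vecs'][0] @ embeddings['dense_vecs'][1].T

# sparse: sum of the weights of overlapping tokens
sparse_score = model.compute_lexical_matching_score(
    embeddings['lexical_weights'][0], embeddings['lexical_weights'][1]
)

# multi-vector: ColBERT-style late-interaction score
colbert = model.colbert_score(
    embeddings['colbert_vecs'][0], embeddings['colbert_vecs'][1]
)

# map token ids in the sparse weights back to readable tokens
print(model.convert_id_to_token(embeddings['lexical_weights']))
print(dense_score, sparse_score, colbert)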

2.4 BGE Multilingual Gemma2#

BGE Multilingual Gemma2 is an LLM-based multilingual embedding model.

| Model | Language | Parameters | Model Size | Description | Base Model |
|:------|:---------|:-----------|:-----------|:------------|:-----------|
| BAAI/bge-multilingual-gemma2 | Multilingual | 9.24B | 37 GB | LLM-based multilingual embedding model with SOTA results on multilingual benchmarks | Gemma2-9B |

from FlagEmbedding import FlagLLMModel

queries = ["how much protein should a female eat", "summit define"]
documents = [
    "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "Definition of summit for English Language Learners. : 1  the highest point of a mountain : the top of a mountain. : 2  the highest level. : 3  a meeting or series of meetings between the leaders of two or more governments."
]

model = FlagLLMModel('BAAI/bge-multilingual-gemma2', 
                     query_instruction_for_retrieval="Given a web search query, retrieve relevant passages that answer the query.",
                     use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation

embeddings_1 = model.encode_queries(queries)
embeddings_2 = model.encode_corpus(documents)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
Loading checkpoint shards: 100%|██████████| 4/4 [00:00<00:00,  6.34it/s]
pre tokenize: 100%|██████████| 1/1 [00:00<00:00, 816.49it/s]
pre tokenize: 100%|██████████| 1/1 [00:00<00:00, 718.33it/s]
[[0.559     0.01685  ]
 [0.0008683 0.5015   ]]

2.5 BGE ICL#

The ICL in BGE ICL stands for in-context learning. Providing few-shot examples in the query can significantly enhance the model's ability to handle new tasks.

| Model | Language | Parameters | Model Size | Description | Base Model |
|:------|:---------|:-----------|:-----------|:------------|:-----------|
| BAAI/bge-en-icl | English | 7.11B | 28.5 GB | LLM-based English embedding model with excellent in-context learning ability | Mistral-7B |

documents = [
    "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "Definition of summit for English Language Learners. : 1  the highest point of a mountain : the top of a mountain. : 2  the highest level. : 3  a meeting or series of meetings between the leaders of two or more governments."
]

examples = [
    {
        'instruct': 'Given a web search query, retrieve relevant passages that answer the query.',
        'query': 'what is a virtual interface',
        'response': "A virtual interface is a software-defined abstraction that mimics the behavior and characteristics of a physical network interface. It allows multiple logical network connections to share the same physical network interface, enabling efficient utilization of network resources. Virtual interfaces are commonly used in virtualization technologies such as virtual machines and containers to provide network connectivity without requiring dedicated hardware. They facilitate flexible network configurations and help in isolating network traffic for security and management purposes."
    },
    {
        'instruct': 'Given a web search query, retrieve relevant passages that answer the query.',
        'query': 'causes of back pain in female for a week',
        'response': "Back pain in females lasting a week can stem from various factors. Common causes include muscle strain due to lifting heavy objects or improper posture, spinal issues like herniated discs or osteoporosis, menstrual cramps causing referred pain, urinary tract infections, or pelvic inflammatory disease. Pregnancy-related changes can also contribute. Stress and lack of physical activity may exacerbate symptoms. Proper diagnosis by a healthcare professional is crucial for effective treatment and management."
    }
]

queries = ["how much protein should a female eat", "summit define"]
from FlagEmbedding import FlagICLModel
import os

model = FlagICLModel('BAAI/bge-en-icl', 
                     examples_for_task=examples,  # set `examples_for_task=None` to use model without examples
                    #  examples_instruction_format="<instruct>{}\n<query>{}\n<response>{}" # specify the format to use examples_for_task
                     )

embeddings_1 = model.encode_queries(queries)
embeddings_2 = model.encode_corpus(documents)
similarity = embeddings_1 @ embeddings_2.T

print(similarity)
Loading checkpoint shards: 100%|██████████| 3/3 [00:00<00:00,  6.55it/s]
pre tokenize: 100%|██████████| 1/1 [00:00<00:00, 366.09it/s]
pre tokenize: 100%|██████████| 1/1 [00:00<00:00, 623.69it/s]
[[0.6064 0.3018]
 [0.257  0.537 ]]