FAQ
Below are some commonly asked questions.
Tip: For more questions, search the issues on GitHub or join our community!
When does the query instruction need to be used?
For a retrieval task that uses short queries to find long related documents, it is recommended to add an instruction to the short queries. The best way to decide whether to add instructions is to evaluate both settings and choose the one that performs better on your task. In all cases, the documents/passages do not need the instruction.
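A minimal sketch of using a query instruction, assuming the `FlagModel` API from FlagEmbedding (the instruction string below is the one commonly used for the English bge-v1.5 models):

```python
from FlagEmbedding import FlagModel

# query_instruction_for_retrieval is prepended to queries only.
model = FlagModel(
    "BAAI/bge-large-en-v1.5",
    query_instruction_for_retrieval="Represent this sentence for searching relevant passages:",
)

# encode_queries applies the instruction; encode (for passages) does not.
query_embeddings = model.encode_queries(["how to fix a flat tire"])
passage_embeddings = model.encode(["To fix a flat tire, first ..."])
```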
Why does it take so long to encode just one sentence?
Note that if you have multiple CUDA GPUs, FlagEmbedding will automatically use all of them, and starting the multi-process pool can take far longer than the encoding itself. For simple tasks, use the CPU or a single GPU instead, as sketched below.
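One simple way to avoid the multi-process startup cost is to restrict the process to a single GPU before the model is created, for example via the standard `CUDA_VISIBLE_DEVICES` environment variable:

```python
import os

# Expose only one GPU to this process so no multi-GPU pool is spawned.
# Set it to "" to force CPU instead.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

from FlagEmbedding import FlagModel

model = FlagModel("BAAI/bge-base-en-v1.5")
embedding = model.encode(["a single sentence"])
```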
Why are the embedding results different on CPU and GPU?
The encode function uses FP16 by default when a GPU is available, which leads to different precision. Set `fp16=False` to get full precision.
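A short sketch of disabling half precision, assuming the `use_fp16` constructor flag of FlagEmbedding's `FlagModel` (the flag carries this name in recent versions of the library):

```python
from FlagEmbedding import FlagModel

# use_fp16=False keeps full FP32 precision, so CPU and GPU
# results should match closely (at the cost of slower encoding).
model = FlagModel("BAAI/bge-base-en-v1.5", use_fp16=False)
embeddings = model.encode(["some text"])
```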
How many languages do the multi-lingual models support?
The training datasets cover 170+ languages. However, due to the unbalanced distribution of languages, performance varies across them. Please test further on your real application scenario.
How do the different retrieval methods work in bge-m3?
- Dense retrieval: map the text into a single embedding, e.g., DPR, BGE-v1.5.
- Sparse retrieval (lexical matching): a vector of size equal to the vocabulary, with the majority of positions set to zero, calculating a weight only for tokens present in the text, e.g., BM25, unicoil, and splade.
- Multi-vector retrieval: use multiple vectors to represent a text, e.g., ColBERT.
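A minimal sketch of producing all three representations in one pass, assuming the `BGEM3FlagModel` API from FlagEmbedding:

```python
from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)

sentences = ["BGE M3 is a multi-functional embedding model."]

# A single encode call can return dense, sparse, and multi-vector outputs.
output = model.encode(
    sentences,
    return_dense=True,
    return_sparse=True,
    return_colbert_vecs=True,
)

print(output["dense_vecs"].shape)       # dense: one vector per sentence
print(output["lexical_weights"])        # sparse: {token_id: weight} per sentence
print(output["colbert_vecs"][0].shape)  # multi-vector: one vector per token
```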
Recommended vector database?
Generally, you can use any vector database (open-source or commercial). We use Faiss by default in our evaluation pipeline and tutorials; a minimal sketch follows.
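A small example of indexing and searching embeddings with Faiss; the random arrays stand in for real embeddings from any encoder:

```python
import faiss
import numpy as np

# Stand-in for document embeddings of shape (n_docs, dim), dtype float32.
doc_embeddings = np.random.rand(100, 768).astype("float32")
faiss.normalize_L2(doc_embeddings)  # normalize so inner product == cosine similarity

index = faiss.IndexFlatIP(doc_embeddings.shape[1])  # exact inner-product search
index.add(doc_embeddings)

query = np.random.rand(1, 768).astype("float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, k=5)  # top-5 most similar documents
print(ids[0], scores[0])
```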