Modeling#

class FlagEmbedding.finetune.embedder.decoder_only.icl.BiDecoderOnlyEmbedderICLModel(base_model: AutoModel, tokenizer: AutoTokenizer | None = None, negatives_cross_device: bool = False, temperature: float = 1.0, sub_batch_size: int = -1, kd_loss_type: str = 'kl_div', sentence_pooling_method: str = 'last_token', normalize_embeddings: bool = False)[source]#

Embedder model class for decoder-only models.

Parameters:
  • base_model (AutoModel) – The base model to train.

  • tokenizer (AutoTokenizer, optional) – The tokenizer to use. Defaults to None.

  • negatives_cross_device (bool, optional) – If True, compute the contrastive loss with in-batch negatives gathered across devices. Defaults to False.

  • temperature (float, optional) – Temperature to control the scale of scores. Defaults to 1.0.

  • sub_batch_size (int, optional) – Sub-batch size during encoding. If negative, the batch is not split into sub-batches. Defaults to -1.

  • kd_loss_type (str, optional) – Type of knowledge distillation loss. Defaults to 'kl_div'.

  • sentence_pooling_method (str, optional) – Pooling method to get sentence embedding. Defaults to 'last_token'.

  • normalize_embeddings (bool, optional) – If True, normalize the embedding vector. Defaults to False.
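
A minimal instantiation sketch (the checkpoint name and hyperparameter values below are illustrative assumptions, not prescribed by the library):

    from transformers import AutoModel, AutoTokenizer
    from FlagEmbedding.finetune.embedder.decoder_only.icl import (
        BiDecoderOnlyEmbedderICLModel,
    )

    # Hypothetical decoder-only checkpoint; substitute your own base model.
    base_model = AutoModel.from_pretrained("meta-llama/Llama-2-7b-hf")
    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

    model = BiDecoderOnlyEmbedderICLModel(
        base_model,
        tokenizer=tokenizer,
        temperature=0.02,                      # sharper than the 1.0 default
        sentence_pooling_method="last_token",
        normalize_embeddings=True,             # unit-norm embeddings
    )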

Methods#

BiDecoderOnlyEmbedderICLModel.encode(features)[source]#

Encode the input features and return the embeddings.

Parameters:
  features (Union[list, dict]) – Features fed to the model.

Returns:
  The embedding vectors.

Return type:
  torch.Tensor
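
Continuing the sketch above, encode accepts tokenized features directly; the dict form is one of the accepted input types (the texts and max_length here are illustrative):

    # Tokenize a few sentences into input_ids / attention_mask tensors.
    features = tokenizer(
        ["how to bake bread", "a recipe for sourdough"],
        padding=True,
        truncation=True,
        max_length=512,
        return_tensors="pt",
    )
    embeddings = model.encode(features)  # shape: (batch_size, hidden_size)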

BiDecoderOnlyEmbedderICLModel.compute_score(q_reps, p_reps)[source]#

Computes the scores between query and passage representations.

Parameters:
  • q_reps (torch.Tensor) – Query representations.

  • p_reps (torch.Tensor) – Passage representations.

Returns:
  The computed scores, adjusted by temperature.

Return type:
  torch.Tensor
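
Read together with _compute_similarity below, the score is the inner product of the representations scaled by 1/temperature. A usage sketch, where q_features and p_features stand in for hypothetical tokenized queries and passages:

    q_reps = model.encode(q_features)             # query embeddings
    p_reps = model.encode(p_features)             # passage embeddings
    scores = model.compute_score(q_reps, p_reps)  # inner product / temperature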

BiDecoderOnlyEmbedderICLModel.compute_loss(scores, target)[source]#

Compute the loss using cross entropy.

Parameters:
  • scores (torch.Tensor) – The computed scores.

  • target (torch.Tensor) – The target value.

Returns:
  The computed cross-entropy loss.

Return type:
  torch.Tensor
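
A self-contained sketch of the cross-entropy objective over in-batch candidates. The target layout (one positive per query at a fixed stride) is an assumption about the training setup, not an excerpt from the library:

    import torch
    import torch.nn.functional as F

    batch_size, group_size = 4, 4                   # 1 positive + 3 negatives per query
    scores = torch.randn(batch_size, batch_size * group_size)
    target = torch.arange(batch_size) * group_size  # positive at column i * group_size
    loss = F.cross_entropy(scores, target)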

BiDecoderOnlyEmbedderICLModel.gradient_checkpointing_enable(**kwargs)[source]#

Activates gradient checkpointing for the current model.

BiDecoderOnlyEmbedderICLModel.enable_input_require_grads(**kwargs)[source]#

Enables the gradients for the input embeddings.
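
These two hooks are typically enabled together before training, so that inputs to the checkpointed layers still receive gradients (a common pattern when combining checkpointing with parameter-efficient fine-tuning):

    model.gradient_checkpointing_enable()
    model.enable_input_require_grads()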

BiDecoderOnlyEmbedderICLModel.save(output_dir: str)[source]#

Save the model to the given directory.

Parameters:
  output_dir (str) – Directory for saving the model.
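
For example:

    model.save("./output/icl_embedder")  # illustrative output path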

BiDecoderOnlyEmbedderICLModel._sentence_embedding(last_hidden_state, attention_mask)[source]#

Pool the last hidden states into sentence embeddings using the configured sentence_pooling_method.

Parameters:
  • last_hidden_state (torch.Tensor) – The model output’s last hidden state.

  • attention_mask (torch.Tensor) – Attention mask used to exclude padding tokens during pooling.

Raises:
  NotImplementedError – If the specified pooling method is not implemented.

Returns:
  The sentence embeddings.

Return type:
  torch.Tensor
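
A standalone sketch of the default 'last_token' pooling, which takes the hidden state at each sequence's final non-padding position. It assumes right padding; the library's actual implementation may handle additional cases:

    import torch

    def last_token_pool(last_hidden_state: torch.Tensor,
                        attention_mask: torch.Tensor) -> torch.Tensor:
        # Index of the last non-padding token per sequence (right padding assumed).
        seq_last = attention_mask.sum(dim=1) - 1
        batch_idx = torch.arange(last_hidden_state.size(0))
        return last_hidden_state[batch_idx, seq_last]

    hidden = torch.randn(2, 8, 768)                    # (batch, seq_len, hidden_size)
    mask = torch.tensor([[1] * 8, [1] * 5 + [0] * 3])  # second sequence has 3 pad tokens
    sentence_emb = last_token_pool(hidden, mask)       # (2, 768)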

BiDecoderOnlyEmbedderICLModel._compute_similarity(q_reps, p_reps)[source]#

Computes the similarity between query and passage representations using inner product.

Parameters:
  • q_reps (torch.Tensor) – Query representations.

  • p_reps (torch.Tensor) – Passage representations.

Returns:
  The computed similarity matrix.

Return type:
  torch.Tensor
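
The inner product reduces to a single matrix multiply over the embedding dimension, for example:

    import torch

    q_reps = torch.randn(4, 768)                          # 4 query embeddings
    p_reps = torch.randn(16, 768)                         # 16 passage embeddings
    sim = torch.matmul(q_reps, p_reps.transpose(-2, -1))  # sim[i, j] = <q_i, p_j>; shape (4, 16)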