AbsModeling#

AbsRerankerModel#

class FlagEmbedding.abc.finetune.reranker.AbsRerankerModel(base_model: None, tokenizer: AutoTokenizer | None = None, train_batch_size: int = 4)[source]#

Abstract class of reranker model for training.

Parameters:
  • base_model – The base model to train on.

  • tokenizer (AutoTokenizer, optional) – The tokenizer to use. Defaults to None.

  • train_batch_size (int, optional) – Batch size used for training. Defaults to 4.

Methods#

abstract AbsRerankerModel.encode(features)[source]#

Abstract encode method; subclasses must implement it.

Parameters:

features (dict) – Features to pass to the model.
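A hedged sketch of what a concrete encode() typically looks like: the tokenized features are run through a scoring head that emits one relevance logit per query-document pair. The class and field names below (ToyScorer, "embeddings") are illustrative, not part of the FlagEmbedding API.

```python
import torch

# Illustrative only: a typical encode() maps a batch of features to
# one relevance score per pair, e.g. via a single classification logit.
class ToyScorer(torch.nn.Module):
    def __init__(self, hidden: int = 8):
        super().__init__()
        self.proj = torch.nn.Linear(hidden, 1)  # one logit per pair

    def encode(self, features: dict) -> torch.Tensor:
        # `features` stands in for tokenizer output; a pre-embedded
        # tensor is used here so the sketch is self-contained.
        hidden_states = features["embeddings"]       # (batch, hidden)
        return self.proj(hidden_states).squeeze(-1)  # (batch,)

scorer = ToyScorer()
feats = {"embeddings": torch.randn(4, 8)}
scores = scorer.encode(feats)
print(scores.shape)  # torch.Size([4])
```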

AbsRerankerModel.gradient_checkpointing_enable(**kwargs)[source]#

Activates gradient checkpointing for the current model.

AbsRerankerModel.enable_input_require_grads(**kwargs)[source]#

Enables the gradients for the input embeddings.

AbsRerankerModel.forward(pair: Dict[str, Tensor] | List[Dict[str, Tensor]] | None = None, teacher_scores: Tensor | None = None)[source]#

The computation performed at every call.

Parameters:
  • pair (Union[Dict[str, Tensor], List[Dict[str, Tensor]]], optional) – The query-document pair. Defaults to None.

  • teacher_scores (Optional[Tensor], optional) – Teacher scores for knowledge distillation. Defaults to None.

Returns:

Output of reranker model.

Return type:

RerankerOutput
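The forward pass can be sketched as: score the pairs, then (in training) compute a loss over the scores, optionally adding a distillation term against teacher_scores. This is a standalone illustration under stated assumptions (positive document at index 0 of each group, KL-based distillation), not the class's actual implementation.

```python
from typing import Optional
import torch

# Hedged sketch of the forward flow: names other than
# `teacher_scores` are illustrative, not the real API.
def forward_sketch(scores: torch.Tensor,
                   teacher_scores: Optional[torch.Tensor] = None,
                   training: bool = True) -> dict:
    loss = None
    if training:
        # Assumption for illustration: each row's positive is at index 0.
        target = torch.zeros(scores.size(0), dtype=torch.long)
        loss = torch.nn.functional.cross_entropy(scores, target)
        if teacher_scores is not None:
            # Distillation term: match the teacher's score distribution.
            loss = loss + torch.nn.functional.kl_div(
                torch.log_softmax(scores, dim=-1),
                torch.softmax(teacher_scores, dim=-1),
                reduction="batchmean",
            )
    return {"loss": loss, "scores": scores}

out = forward_sketch(torch.randn(2, 4), teacher_scores=torch.randn(2, 4))
print(out["loss"] is not None)  # True
```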

AbsRerankerModel.compute_loss(scores, target)[source]#

Compute the loss.

Parameters:
  • scores (torch.Tensor) – Computed scores.

  • target (torch.Tensor) – The target value.

Returns:

The computed loss.

Return type:

torch.Tensor
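A common choice for this kind of loss (a sketch, not necessarily this class's exact implementation) is cross-entropy over each query's candidate scores, where target holds the index of the positive document in each group:

```python
import torch

# Sketch: treat each row of `scores` as logits over one query's
# candidates and `target` as the index of the positive document.
def compute_loss(scores: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # scores: (num_queries, group_size), target: (num_queries,)
    return torch.nn.functional.cross_entropy(scores, target)

scores = torch.tensor([[2.0, 0.1, -1.0],
                       [0.3, 1.5,  0.2]])
target = torch.tensor([0, 1])  # positive at index 0, then index 1
loss = compute_loss(scores, target)
print(loss.item() > 0)  # True
```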

AbsRerankerModel.save(output_dir: str)[source]#

Save the model.

Parameters:

output_dir (str) – Directory for saving the model.

AbsRerankerModel.save_pretrained(*args, **kwargs)[source]#

Save the tokenizer and model.

RerankerOutput#

class FlagEmbedding.abc.finetune.reranker.RerankerOutput(loss: torch.Tensor | None = None, scores: torch.Tensor | None = None)[source]#
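Mirroring the signature above, the output type is a simple container in which both fields are optional, so inference can return scores without a loss. A sketch of an equivalent dataclass (the name RerankerOutputSketch is illustrative):

```python
from dataclasses import dataclass
from typing import Optional
import torch

# Sketch of the output container implied by the signature above:
# both fields default to None.
@dataclass
class RerankerOutputSketch:
    loss: Optional[torch.Tensor] = None
    scores: Optional[torch.Tensor] = None

out = RerankerOutputSketch(scores=torch.tensor([0.9, 0.1]))
print(out.loss is None)  # True
```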