AbsModeling#
AbsRerankerModel#
- class FlagEmbedding.abc.finetune.reranker.AbsRerankerModel(base_model: None, tokenizer: AutoTokenizer | None = None, train_batch_size: int = 4)[source]#
Abstract class of reranker model for training.
- Parameters:
base_model – The base model to train on.
tokenizer (AutoTokenizer, optional) – The tokenizer to use. Defaults to None.
train_batch_size (int, optional) – Batch size used for training. Defaults to 4.
Methods#
- abstract AbsRerankerModel.encode(features)[source]#
Abstract method of encode.
- Parameters:
features (dict) – Features to pass to the model.
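Since `encode` is abstract, a subclass must provide it. The sketch below is a hypothetical, self-contained illustration of the pattern: `ToyReranker` and `ToyBaseModel` are stand-ins invented here (in practice the base model would be a Hugging Face sequence-classification model), not FlagEmbedding's actual implementation.

```python
import torch
from torch import nn

# Tiny stand-in "base model" so the sketch runs without downloading weights.
class ToyBaseModel(nn.Module):
    def __init__(self, hidden: int = 8):
        super().__init__()
        self.proj = nn.Linear(hidden, 1)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # Returns one relevance logit per query-document pair: (batch, 1)
        return self.proj(input_embeds)

# Hypothetical subclass-style implementation of ``encode``.
class ToyReranker:
    def __init__(self, base_model, tokenizer=None, train_batch_size: int = 4):
        self.model = base_model
        self.tokenizer = tokenizer
        self.train_batch_size = train_batch_size

    def encode(self, features: dict) -> torch.Tensor:
        # Run the prepared features through the base model and flatten
        # to one score per query-document pair.
        logits = self.model(features["input_embeds"])
        return logits.view(-1)

model = ToyReranker(ToyBaseModel(), train_batch_size=4)
scores = model.encode({"input_embeds": torch.randn(4, 8)})
print(scores.shape)  # torch.Size([4])
```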
- AbsRerankerModel.gradient_checkpointing_enable(**kwargs)[source]#
Activates gradient checkpointing for the current model.
- AbsRerankerModel.enable_input_require_grads(**kwargs)[source]#
Enables the gradients for the input embeddings.
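These two helpers usually delegate to the underlying Hugging Face model. To show why input gradients matter for checkpointing, here is a minimal sketch using PyTorch's own `torch.utils.checkpoint`: activations inside the checkpointed block are recomputed during backward instead of stored, and the recomputation only yields gradients when the inputs require grad, which is what `enable_input_require_grads` ensures for the embedding outputs.

```python
import torch
from torch import nn
from torch.utils.checkpoint import checkpoint

layer = nn.Linear(4, 4)

# Analogous to enable_input_require_grads: the checkpointed segment's
# input must carry requires_grad for gradients to flow back through it.
x = torch.randn(2, 4, requires_grad=True)

# Gradient checkpointing: activations of ``layer`` are recomputed in
# backward rather than kept in memory during forward.
y = checkpoint(layer, x, use_reentrant=False).sum()
y.backward()
print(x.grad is not None)  # True
```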
- AbsRerankerModel.forward(pair: Dict[str, Tensor] | List[Dict[str, Tensor]] | None = None, teacher_scores: Tensor | None = None)[source]#
The computation performed at every call.
- Parameters:
pair (Union[Dict[str, Tensor], List[Dict[str, Tensor]]], optional) – The query-document pair. Defaults to None.
teacher_scores (Optional[Tensor], optional) – Teacher scores for knowledge distillation. Defaults to None.
- Returns:
Output of reranker model.
- Return type:
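A hedged sketch of what a reranker `forward` typically does in training: score every query-document pair, group the scores per query, take cross-entropy against the positive's position, and optionally add a distillation term against `teacher_scores`. The function name, the `group_size` parameter, and the convention that the first document in each group is the positive are all assumptions for illustration, not FlagEmbedding's exact code.

```python
import torch
import torch.nn.functional as F

def forward_sketch(scores_fn, pair, teacher_scores=None, group_size=2):
    scores = scores_fn(pair)               # (batch * group_size,) logits
    grouped = scores.view(-1, group_size)  # one row of candidates per query
    # Assumption: index 0 in each group is the positive document.
    target = torch.zeros(grouped.size(0), dtype=torch.long)
    loss = F.cross_entropy(grouped, target)
    if teacher_scores is not None:
        # Distillation: pull the student distribution toward the teacher's.
        t = torch.softmax(teacher_scores.view(-1, group_size), dim=-1)
        s = torch.log_softmax(grouped, dim=-1)
        loss = loss + F.kl_div(s, t, reduction="batchmean")
    return scores, loss

# ``pair`` is unused by this toy scoring function; a real model would
# encode the tokenized pairs here.
scores, loss = forward_sketch(lambda pair: torch.randn(8), None, group_size=2)
print(scores.shape, loss.item() >= 0)
```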
- AbsRerankerModel.compute_loss(scores, target)[source]#
Compute the loss.
- Parameters:
scores (torch.Tensor) – Computed scores.
target (torch.Tensor) – The target value.
- Returns:
The computed loss.
- Return type:
torch.Tensor
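For a listwise reranker, `compute_loss` is commonly plain cross-entropy over the grouped scores, with `target` holding the index of the positive document in each group. The shapes below are an assumption for illustration:

```python
import torch
import torch.nn.functional as F

# (n_queries, group_size): one row of candidate scores per query.
scores = torch.tensor([[2.0, 0.5, -1.0],
                       [0.1, 1.5,  0.3]])
# Index of the positive document within each group.
target = torch.tensor([0, 1])

loss = F.cross_entropy(scores, target)
print(loss.item())
```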