AbsTrainer#

AbsEmbedderTrainer#

class FlagEmbedding.abc.finetune.embedder.AbsEmbedderTrainer(model: PreTrainedModel | Module | None = None, args: TrainingArguments | None = None, data_collator: DataCollator | None = None, train_dataset: Dataset | IterableDataset | Dataset | None = None, eval_dataset: Dataset | Dict[str, Dataset] | Dataset | None = None, tokenizer: PreTrainedTokenizerBase | None = None, model_init: Callable[[], PreTrainedModel] | None = None, compute_metrics: Callable[[EvalPrediction], Dict] | None = None, callbacks: List[TrainerCallback] | None = None, optimizers: Tuple[Optimizer, LambdaLR] = (None, None), preprocess_logits_for_metrics: Callable[[Tensor, Tensor], Tensor] | None = None)[source]#

Abstract class for the embedder trainer.
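As a rough illustration of how this class is meant to be used, the sketch below wires a concrete subclass together from the constructor arguments listed in the signature above and starts training. The subclass name MyEmbedderTrainer, the output path, and the embedder_model / train_dataset / data_collator / tokenizer objects are hypothetical placeholders assumed to be built elsewhere (e.g. from the corresponding Abs* modeling and data classes); they are not part of the FlagEmbedding API.

    from transformers import TrainingArguments
    from FlagEmbedding.abc.finetune.embedder import AbsEmbedderTrainer

    class MyEmbedderTrainer(AbsEmbedderTrainer):
        """Hypothetical concrete trainer; AbsEmbedderTrainer itself is abstract."""
        pass

    # embedder_model, train_dataset, data_collator, and tokenizer are assumed
    # to be constructed beforehand.
    args = TrainingArguments(
        output_dir="./embedder_output",      # hypothetical output path
        per_device_train_batch_size=4,
        num_train_epochs=1,
    )
    trainer = MyEmbedderTrainer(
        model=embedder_model,
        args=args,
        train_dataset=train_dataset,
        data_collator=data_collator,
        tokenizer=tokenizer,
    )
    trainer.train()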

Methods#

AbsEmbedderTrainer.compute_loss(model, inputs, return_outputs=False, **kwargs)[source]#

How the loss is computed by the Trainer. By default, all models return the loss in the first element of their output.

Subclass and override for custom behavior.

Parameters:
  • model (AbsEmbedderModel) – The model being trained.

  • inputs (dict) – A dictionary of input tensors to be passed to the model.

  • return_outputs (bool, optional) – If True, returns both the loss and the model’s outputs. Otherwise, returns only the loss.

Returns:

The computed loss. If return_outputs is True, also returns the model’s outputs as a tuple (loss, outputs).

Return type:

Union[torch.Tensor, Tuple[torch.Tensor, EmbedderOutput]]
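The snippet below is a minimal sketch of the kind of override the docstring suggests: the model is called on the input batch, the loss is read from the resulting output object, and (loss, outputs) is returned when return_outputs is True. The subclass name and the assumption that the model output exposes a .loss attribute are illustrative, not taken from the library source.

    from FlagEmbedding.abc.finetune.embedder import AbsEmbedderTrainer

    class CustomEmbedderTrainer(AbsEmbedderTrainer):
        # Hypothetical subclass: override compute_loss for custom behavior.
        def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
            outputs = model(**inputs)   # forward pass on the batch dict
            loss = outputs.loss         # assumes the output object carries a .loss field
            return (loss, outputs) if return_outputs else loss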