{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# MTEB" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For evaluation of embedding models, MTEB is one of the most well-known benchmark. In this tutorial, we'll introduce MTEB, its basic usage, and evaluate how your model performs on the MTEB leaderboard." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 0. Installation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Install the packages we will use in your environment:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%capture\n", "%pip install sentence_transformers mteb" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. Intro" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The [Massive Text Embedding Benchmark (MTEB)](https://github.com/embeddings-benchmark/mteb) is a large-scale evaluation framework designed to assess the performance of text embedding models across a wide variety of natural language processing (NLP) tasks. Introduced to standardize and improve the evaluation of text embeddings, MTEB is crucial for assessing how well these models generalize across various real-world applications. It contains a wide range of datasets in eight main NLP tasks and different languages, and provides an easy pipeline for evaluation.\n", "\n", "MTEB is also well known for the MTEB leaderboard, which contains a ranking of the latest first-class embedding models. We'll cover that in the next tutorial. Now let's have a look on how to use MTEB to do evaluation easily." ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "import mteb\n", "from sentence_transformers import SentenceTransformer" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's take a look at how to use MTEB to do a quick evaluation.\n", "\n", "First we load the model that we would like to evaluate on:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "model_name = \"BAAI/bge-base-en-v1.5\"\n", "model = SentenceTransformer(model_name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Below is the list of datasets of retrieval used by MTEB's English leaderboard.\n", "\n", "MTEB directly use the open source benchmark BEIR in its retrieval part, which contains 15 datasets (note there are 12 subsets of CQADupstack)." 
] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "retrieval_tasks = [\n", " \"ArguAna\",\n", " \"ClimateFEVER\",\n", " \"CQADupstackAndroidRetrieval\",\n", " \"CQADupstackEnglishRetrieval\",\n", " \"CQADupstackGamingRetrieval\",\n", " \"CQADupstackGisRetrieval\",\n", " \"CQADupstackMathematicaRetrieval\",\n", " \"CQADupstackPhysicsRetrieval\",\n", " \"CQADupstackProgrammersRetrieval\",\n", " \"CQADupstackStatsRetrieval\",\n", " \"CQADupstackTexRetrieval\",\n", " \"CQADupstackUnixRetrieval\",\n", " \"CQADupstackWebmastersRetrieval\",\n", " \"CQADupstackWordpressRetrieval\",\n", " \"DBPedia\",\n", " \"FEVER\",\n", " \"FiQA2018\",\n", " \"HotpotQA\",\n", " \"MSMARCO\",\n", " \"NFCorpus\",\n", " \"NQ\",\n", " \"QuoraRetrieval\",\n", " \"SCIDOCS\",\n", " \"SciFact\",\n", " \"Touche2020\",\n", " \"TRECCOVID\",\n", "]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For demonstration, let's just run the first one, \"ArguAna\".\n", "\n", "For a full list of tasks and languages that MTEB supports, check the [page](https://github.com/embeddings-benchmark/mteb/blob/18662380f0f476db3d170d0926892045aa9f74ee/docs/tasks.md)." ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "tasks = mteb.get_tasks(tasks=retrieval_tasks[:1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then, create and initialize an MTEB instance with our chosen tasks, and run the evaluation process." ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
{ "cell_type": "markdown", "metadata": {}, "source": [ "Then, create an MTEB instance with our chosen tasks, and run the evaluation:" ] },
{ "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<pre>───────────────────────────────────────────────── Selected tasks  ─────────────────────────────────────────────────\n",
       "</pre>\n" ], "text/plain": [ "\u001b[38;5;235m───────────────────────────────────────────────── \u001b[0m\u001b[1mSelected tasks \u001b[0m\u001b[38;5;235m ─────────────────────────────────────────────────\u001b[0m\n" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
<pre>Retrieval\n",
       "</pre>\n" ], "text/plain": [ "\u001b[1mRetrieval\u001b[0m\n" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
<pre>    - ArguAna, s2p\n",
       "</pre>\n" ], "text/plain": [ " - ArguAna, \u001b[3;38;5;241ms2p\u001b[0m\n" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
\n",
       "\n",
       "
\n" ], "text/plain": [ "\n", "\n" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stderr", "output_type": "stream", "text": [ "Batches: 100%|██████████| 44/44 [00:41<00:00, 1.06it/s]\n", "Batches: 100%|██████████| 272/272 [03:36<00:00, 1.26it/s]\n" ] } ], "source": [ "# use the tasks we chose to initialize the MTEB instance\n", "evaluation = mteb.MTEB(tasks=tasks)\n", "\n", "# call run() with the model and output_folder\n", "results = evaluation.run(model, output_folder=\"results\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The results should be stored in `{output_folder}/{model_name}/{model_revision}/{task_name}.json`.\n", "\n", "Openning the json file you should see contents as below, which are the evaluation results on \"ArguAna\" with different metrics on cutoffs from 1 to 1000." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```python\n", "{\n", " \"dataset_revision\": \"c22ab2a51041ffd869aaddef7af8d8215647e41a\",\n", " \"evaluation_time\": 260.14976954460144,\n", " \"kg_co2_emissions\": null,\n", " \"mteb_version\": \"1.14.17\",\n", " \"scores\": {\n", " \"test\": [\n", " {\n", " \"hf_subset\": \"default\",\n", " \"languages\": [\n", " \"eng-Latn\"\n", " ],\n", " \"main_score\": 0.63616,\n", " \"map_at_1\": 0.40754,\n", " \"map_at_10\": 0.55773,\n", " \"map_at_100\": 0.56344,\n", " \"map_at_1000\": 0.56347,\n", " \"map_at_20\": 0.56202,\n", " \"map_at_3\": 0.51932,\n", " \"map_at_5\": 0.54023,\n", " \"mrr_at_1\": 0.4139402560455192,\n", " \"mrr_at_10\": 0.5603739077423295,\n", " \"mrr_at_100\": 0.5660817425350153,\n", " \"mrr_at_1000\": 0.5661121884705748,\n", " \"mrr_at_20\": 0.564661930998293,\n", " \"mrr_at_3\": 0.5208629682313899,\n", " \"mrr_at_5\": 0.5429113323850182,\n", " \"nauc_map_at_1000_diff1\": 0.15930478114759905,\n", " \"nauc_map_at_1000_max\": -0.06396189194646361,\n", " \"nauc_map_at_1000_std\": -0.13168797291549253,\n", " \"nauc_map_at_100_diff1\": 0.15934819555197366,\n", " \"nauc_map_at_100_max\": -0.06389635013430676,\n", " \"nauc_map_at_100_std\": -0.13164524259533786,\n", " \"nauc_map_at_10_diff1\": 0.16057318234658585,\n", " \"nauc_map_at_10_max\": -0.060962623117325254,\n", " \"nauc_map_at_10_std\": -0.1300413865104607,\n", " \"nauc_map_at_1_diff1\": 0.17346152653542332,\n", " \"nauc_map_at_1_max\": -0.09705499215630589,\n", " \"nauc_map_at_1_std\": -0.14726476953035533,\n", " \"nauc_map_at_20_diff1\": 0.15956349246366208,\n", " \"nauc_map_at_20_max\": -0.06259296677860492,\n", " \"nauc_map_at_20_std\": -0.13097093150054095,\n", " \"nauc_map_at_3_diff1\": 0.15620049317363813,\n", " \"nauc_map_at_3_max\": -0.06690213479396273,\n", " \"nauc_map_at_3_std\": -0.13440904793529648,\n", " \"nauc_map_at_5_diff1\": 0.1557795701081579,\n", " \"nauc_map_at_5_max\": -0.06255283252590663,\n", " \"nauc_map_at_5_std\": -0.1355361594910923,\n", " \"nauc_mrr_at_1000_diff1\": 0.1378988612808882,\n", " \"nauc_mrr_at_1000_max\": -0.07507962333910836,\n", " \"nauc_mrr_at_1000_std\": -0.12969109830101241,\n", " \"nauc_mrr_at_100_diff1\": 0.13794450668758515,\n", " \"nauc_mrr_at_100_max\": -0.07501290390362861,\n", " \"nauc_mrr_at_100_std\": -0.12964855554504057,\n", " \"nauc_mrr_at_10_diff1\": 0.1396047981645623,\n", " \"nauc_mrr_at_10_max\": -0.07185174301688693,\n", " \"nauc_mrr_at_10_std\": -0.12807325096717753,\n", " \"nauc_mrr_at_1_diff1\": 0.15610387932529113,\n", " \"nauc_mrr_at_1_max\": -0.09824591983546396,\n", " \"nauc_mrr_at_1_std\": -0.13914318784294258,\n", " \"nauc_mrr_at_20_diff1\": 0.1382786098284509,\n", " 
\"nauc_mrr_at_20_max\": -0.07364476417961506,\n", " \"nauc_mrr_at_20_std\": -0.12898192060943495,\n", " \"nauc_mrr_at_3_diff1\": 0.13118224861025093,\n", " \"nauc_mrr_at_3_max\": -0.08164985279853691,\n", " \"nauc_mrr_at_3_std\": -0.13241573571401533,\n", " \"nauc_mrr_at_5_diff1\": 0.1346130730317385,\n", " \"nauc_mrr_at_5_max\": -0.07404093236468848,\n", " \"nauc_mrr_at_5_std\": -0.1340775377068567,\n", " \"nauc_ndcg_at_1000_diff1\": 0.15919987960292029,\n", " \"nauc_ndcg_at_1000_max\": -0.05457945565481172,\n", " \"nauc_ndcg_at_1000_std\": -0.12457339152558143,\n", " \"nauc_ndcg_at_100_diff1\": 0.1604091882521101,\n", " \"nauc_ndcg_at_100_max\": -0.05281549383775287,\n", " \"nauc_ndcg_at_100_std\": -0.12347288098914058,\n", " \"nauc_ndcg_at_10_diff1\": 0.1657018523692905,\n", " \"nauc_ndcg_at_10_max\": -0.036222943297402846,\n", " \"nauc_ndcg_at_10_std\": -0.11284619565817842,\n", " \"nauc_ndcg_at_1_diff1\": 0.17346152653542332,\n", " \"nauc_ndcg_at_1_max\": -0.09705499215630589,\n", " \"nauc_ndcg_at_1_std\": -0.14726476953035533,\n", " \"nauc_ndcg_at_20_diff1\": 0.16231721725673165,\n", " \"nauc_ndcg_at_20_max\": -0.04147115653921931,\n", " \"nauc_ndcg_at_20_std\": -0.11598700704312062,\n", " \"nauc_ndcg_at_3_diff1\": 0.15256475371124711,\n", " \"nauc_ndcg_at_3_max\": -0.05432154580979357,\n", " \"nauc_ndcg_at_3_std\": -0.12841084787822227,\n", " \"nauc_ndcg_at_5_diff1\": 0.15236205846534961,\n", " \"nauc_ndcg_at_5_max\": -0.04356123278888682,\n", " \"nauc_ndcg_at_5_std\": -0.12942556865700913,\n", " \"nauc_precision_at_1000_diff1\": -0.038790629929866066,\n", " \"nauc_precision_at_1000_max\": 0.3630826341915611,\n", " \"nauc_precision_at_1000_std\": 0.4772189839676386,\n", " \"nauc_precision_at_100_diff1\": 0.32118609204433185,\n", " \"nauc_precision_at_100_max\": 0.4740132817600036,\n", " \"nauc_precision_at_100_std\": 0.3456396169952022,\n", " \"nauc_precision_at_10_diff1\": 0.22279659689895104,\n", " \"nauc_precision_at_10_max\": 0.16823918613191954,\n", " \"nauc_precision_at_10_std\": 0.0377209694331257,\n", " \"nauc_precision_at_1_diff1\": 0.17346152653542332,\n", " \"nauc_precision_at_1_max\": -0.09705499215630589,\n", " \"nauc_precision_at_1_std\": -0.14726476953035533,\n", " \"nauc_precision_at_20_diff1\": 0.23025740175221762,\n", " \"nauc_precision_at_20_max\": 0.2892313928157665,\n", " \"nauc_precision_at_20_std\": 0.13522755012490692,\n", " \"nauc_precision_at_3_diff1\": 0.1410889527057097,\n", " \"nauc_precision_at_3_max\": -0.010771302313530132,\n", " \"nauc_precision_at_3_std\": -0.10744937823276193,\n", " \"nauc_precision_at_5_diff1\": 0.14012953903010988,\n", " \"nauc_precision_at_5_max\": 0.03977485677045894,\n", " \"nauc_precision_at_5_std\": -0.10292184602358977,\n", " \"nauc_recall_at_1000_diff1\": -0.03879062992990034,\n", " \"nauc_recall_at_1000_max\": 0.36308263419153386,\n", " \"nauc_recall_at_1000_std\": 0.47721898396760526,\n", " \"nauc_recall_at_100_diff1\": 0.3211860920443005,\n", " \"nauc_recall_at_100_max\": 0.4740132817599919,\n", " \"nauc_recall_at_100_std\": 0.345639616995194,\n", " \"nauc_recall_at_10_diff1\": 0.22279659689895054,\n", " \"nauc_recall_at_10_max\": 0.16823918613192046,\n", " \"nauc_recall_at_10_std\": 0.037720969433127145,\n", " \"nauc_recall_at_1_diff1\": 0.17346152653542332,\n", " \"nauc_recall_at_1_max\": -0.09705499215630589,\n", " \"nauc_recall_at_1_std\": -0.14726476953035533,\n", " \"nauc_recall_at_20_diff1\": 0.23025740175221865,\n", " \"nauc_recall_at_20_max\": 0.2892313928157675,\n", " \"nauc_recall_at_20_std\": 
0.13522755012490456,\n", " \"nauc_recall_at_3_diff1\": 0.14108895270570979,\n", " \"nauc_recall_at_3_max\": -0.010771302313529425,\n", " \"nauc_recall_at_3_std\": -0.10744937823276134,\n", " \"nauc_recall_at_5_diff1\": 0.14012953903010958,\n", " \"nauc_recall_at_5_max\": 0.039774856770459645,\n", " \"nauc_recall_at_5_std\": -0.10292184602358935,\n", " \"ndcg_at_1\": 0.40754,\n", " \"ndcg_at_10\": 0.63616,\n", " \"ndcg_at_100\": 0.66063,\n", " \"ndcg_at_1000\": 0.6613,\n", " \"ndcg_at_20\": 0.65131,\n", " \"ndcg_at_3\": 0.55717,\n", " \"ndcg_at_5\": 0.59461,\n", " \"precision_at_1\": 0.40754,\n", " \"precision_at_10\": 0.08841,\n", " \"precision_at_100\": 0.00991,\n", " \"precision_at_1000\": 0.001,\n", " \"precision_at_20\": 0.04716,\n", " \"precision_at_3\": 0.22238,\n", " \"precision_at_5\": 0.15149,\n", " \"recall_at_1\": 0.40754,\n", " \"recall_at_10\": 0.88407,\n", " \"recall_at_100\": 0.99147,\n", " \"recall_at_1000\": 0.99644,\n", " \"recall_at_20\": 0.9431,\n", " \"recall_at_3\": 0.66714,\n", " \"recall_at_5\": 0.75747\n", " }\n", " ]\n", " },\n", " \"task_name\": \"ArguAna\"\n", "}\n", "```" ] },
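{ "cell_type": "markdown", "metadata": {}, "source": [ "That's a lot of numbers to read by eye. Since the file layout and JSON structure are shown above, we can also pull out the headline scores programmatically. The cell below is a small sketch using only the standard library; it assumes the default `results` output folder from the run above." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import json\n", "from pathlib import Path\n", "\n", "# Find the ArguAna result file under the output folder used above.\n", "# The intermediate folders are {model_name}/{model_revision}, so glob by task name.\n", "result_file = next(Path(\"results\").glob(\"**/ArguAna.json\"))\n", "scores = json.loads(result_file.read_text())[\"scores\"][\"test\"][0]\n", "\n", "# For this retrieval task, main_score corresponds to ndcg_at_10 (0.63616 above).\n", "print(result_file)\n", "print(\"main_score:\", scores[\"main_score\"])\n", "print(\"ndcg_at_10:\", scores[\"ndcg_at_10\"])" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Now we've successfully run an evaluation with MTEB! In the next tutorial, we'll show how to evaluate your model on all 56 tasks of the English MTEB and see how it compares with the models on the leaderboard." ] } ], "metadata": { "kernelspec": { "display_name": "base", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.13" } }, "nbformat": 4, "nbformat_minor": 2 }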