
fake table for format test
slobentanzer committed Feb 6, 2024
1 parent 87085fa · commit a197daf
content/20.results.md (16 additions, 1 deletion)
@@ -77,7 +77,22 @@ The 2- and 3-bit quantisations of the 70B model show worse performance than the
The Mixtral 8x7B model (46.7 billion parameters), a current open-source model that generally performs well, nevertheless scores below all LLaMA2 models in our benchmark.
We will update the benchmark ([https://biochatter.org/benchmark/](https://biochatter.org/benchmark/)) as new models, benchmark datasets, and BioChatter functionalities are released.
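
As context for the quantisation results, a minimal sketch of the storage trade-off involved: weight memory scales roughly linearly with bit width, which is what makes 2- and 3-bit variants attractive despite their accuracy loss. The estimate below is a weights-only back-of-the-envelope calculation and ignores runtime overhead such as activations, the KV cache, and the per-block scale factors real quantisation schemes store.

```python
# Approximate weight memory of a 70-billion-parameter model at
# different quantisation bit widths (weights only; ignores runtime
# overhead such as activations, KV cache, and scale factors).
PARAMS = 70e9  # parameter count of the LLaMA2 70B model

for bits in (2, 3, 4, 16):
    gigabytes = PARAMS * bits / 8 / 1e9  # bits -> bytes -> gigabytes
    print(f"{bits:>2}-bit weights: ~{gigabytes:.0f} GB")
```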

| Model | Size | F1 score | Precision | Recall |
| --- | --- | --- | --- | --- |
| OpenAI GPT-3.5-turbo | 175B | 0.92 | 0.92 | 0.92 |
| OpenAI GPT-4 | 1.6T | 0.93 | 0.93 | 0.93 |
| Meta LLaMA2 7B | 7B | 0.89 | 0.89 | 0.89 |
| Meta LLaMA2 13B | 13B | 0.90 | 0.90 | 0.90 |
| Meta LLaMA2 70B | 70B | 0.91 | 0.91 | 0.91 |
| Meta LLaMA2 70B 2-bit | 70B | 0.88 | 0.88 | 0.88 |
| Meta LLaMA2 70B 3-bit | 70B | 0.87 | 0.87 | 0.87 |
| Meta LLaMA2 70B 4-bit | 70B | 0.92 | 0.92 | 0.92 |
| Mixtral 8x7B | 46.7B | 0.86 | 0.86 | 0.86 |

Table: Benchmark performance (F1 score, precision, and recall) of the evaluated models. {#tab:benchmark}
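
For reference, the F1 score reported in the table above is the harmonic mean of precision and recall:

$$
F_1 = 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}},
$$

so the F1 score coincides with precision and recall whenever the two are equal, as in each row of the table.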

### Knowledge Graphs

