
[Docs] Fix broken hyperlink in README.md #149

Merged · 1 commit merged into EvolvingLMMs-Lab:main on Jul 17, 2024

Conversation

@abzb1 (Contributor) commented on Jul 17, 2024

This pull request fixes a broken hyperlink in the README.md file.
The original hyperlink was pointing to a file that no longer exists.
The updated link now correctly points to the task documentation.

Before you open a pull request, please check whether a similar issue already exists or has been closed before.

When you open a pull request, please be sure to include the following:

  • A descriptive title: [xxx] XXXX
  • A detailed description

Thank you for your contributions!

Correct hyperlink to supported task documentation
@abzb1 changed the title from "Fix broken hyperlink in README.md" to "[Docs] Fix broken hyperlink in README.md" on Jul 17, 2024
@Luodian merged commit bd9908f into EvolvingLMMs-Lab:main on Jul 17, 2024 (1 check passed)
Luodian added a commit that referenced this pull request on Sep 1, 2024
* claude auto detect json mode

* extract information

* use claude to generate

* fix bugs

* fix

* generate data

* chore: Update dataset name and version for live_bench task

* gpt-4-turbo => gpt-4o

* chore: Update dataset capture settings in create_dataset.py

* everything use gpt-4o

* websites

* livebench_july

* Refactor code to simplify data assignment in example.ipynb

* chore: Update dataset name for live_bench task

* Update lmms_eval configuration to use process_results_use_image and full_docs options

* split tasks

* chore: Update live_bench task dataset names and versions

* feat: Add multi-choice parsing and processing functions

This commit adds two new functions to the `egoschema/utils.py` file: `get_multi_choice_info` and `parse_multi_choice_response`. These functions are used to parse and process multi-choice responses in the Egoschema task.

The `get_multi_choice_info` function extracts information about the available choices from the input document, while the `parse_multi_choice_response` function parses the generated response and returns the predicted index.

These functions are essential for accurately processing multi-choice answers in the Egoschema task.
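
For illustration, here is a minimal sketch of what the two helpers might look like; the function names come from the commit message, but the matching heuristics and fallback behaviour below are assumptions, not the actual diff:

```python
import random
import string

def get_multi_choice_info(options):
    # Map option letters ("A", "B", ...) to the answer texts from the doc.
    all_choices = list(string.ascii_uppercase[: len(options)])
    index2ans = {letter: ans for letter, ans in zip(all_choices, options)}
    return index2ans, all_choices

def parse_multi_choice_response(response, all_choices, index2ans):
    # Look for an explicit option letter such as "(B)" or "B." first.
    for letter in all_choices:
        if f"({letter})" in response or f"{letter}." in response:
            return letter
    # Otherwise try to match the answer text itself.
    for letter, ans in index2ans.items():
        if ans and ans.lower() in response.lower():
            return letter
    # Last resort: guess, so the scoring pipeline always gets an index.
    return random.choice(all_choices)
```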

* feat: Add regex-based parsing for multi-choice predictions

This commit enhances the `perceptiontest_val_process_results_mc` function in the `utils.py` file. It introduces regex-based parsing to extract the predicted choice from the raw text prediction. If a match is found for A, B, C, or D, the matched letter is used as the prediction. Otherwise, an empty string is set as the prediction.

This improvement ensures accurate processing of multi-choice predictions in the perception test validation.

Co-authored-by: [Co-author Name] <[[email protected]]>
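
A minimal sketch of that regex fallback (the helper name `extract_choice_letter` and the exact pattern are hypothetical):

```python
import re

def extract_choice_letter(raw_prediction: str) -> str:
    # Find a standalone A, B, C, or D in the raw model output;
    # return "" when nothing matches, as the commit describes.
    match = re.search(r"\b([A-D])\b", raw_prediction)
    return match.group(1) if match else ""
```

For example, `extract_choice_letter("The answer is (C).")` returns `"C"`, while an unmatched prediction yields `""`.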

* refactor: Improve accuracy calculation in perception test validation

This commit refactors the `perceptiontest_val_aggregate_accuracy` function in the `utils.py` file. Instead of comparing the string representations of `answer_id` and `pred_id`, it now directly checks the `correct` field in the `accuracy` dictionary. This change ensures more accurate calculation of the overall accuracy in the perception test validation.

Co-authored-by: [Co-author Name] <[[email protected]]>
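
A sketch of the refactored aggregation, assuming each entry in `results` carries an `accuracy` dict with a boolean `correct` field as described above (the exact shape of `results` is an assumption):

```python
def perceptiontest_val_aggregate_accuracy(results):
    # Count results whose `accuracy` dict is flagged correct, rather
    # than re-comparing answer_id and pred_id as strings.
    if not results:
        return 0.0
    correct = sum(1 for r in results if r["accuracy"]["correct"])
    return correct / len(results)
```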

* Refactor accuracy calculation in perception test validation

* feat: Add SRT_API model to available models

This commit adds the SRT_API model to the list of available models in the `__init__.py` file. This model can now be used for evaluation and testing purposes.

Co-authored-by: [Co-author Name] <[[email protected]]>
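
For context, registering a model in `__init__.py` plausibly amounts to adding one entry to a name-to-class mapping; a sketch, with the dict name and neighbouring entries assumed rather than taken from the repo:

```python
# Sketch of the model registry; the dict name and other entries
# are assumptions, not copied from the actual __init__.py.
AVAILABLE_MODELS = {
    "llava": "Llava",
    "srt_api": "SRT_API",  # the model this commit registers
}
```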

* chore: Update live_bench task dataset names and versions

(cherry picked from commit 46a85b40b013503e52d007c96ca0607bd4604a3e)

* refactor: Update question_for_eval key in MathVerseEvaluator

This commit updates the key "question_for_eval" to "question" in the MathVerseEvaluator class. This change ensures consistency and clarity in the code.

Co-authored-by: [Co-author Name] <[[email protected]]>

* refactor: Update generate_submission_file function in mathverse utils

This commit updates the `generate_submission_file` function in the `mathverse/utils.py` file. It adds the `problem_version` to the filename to ensure that the submission files are saved with the correct problem version. This change improves the organization and clarity of the code.

Co-authored-by: [Co-author Name] <[[email protected]]>
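
A sketch of the updated helper, assuming it joins an output directory with a file name; the signature and directory layout are illustrative, not the actual `mathverse/utils.py` code:

```python
import os

def generate_submission_file(file_name, args, problem_version):
    # Embed the problem version in the name so submissions for
    # different MathVerse problem versions do not overwrite each other.
    out_dir = os.path.join(args.output_path, "mathverse_submissions")
    os.makedirs(out_dir, exist_ok=True)
    return os.path.join(out_dir, f"{problem_version}_{file_name}")
```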

* refactor: Update default template YAML files

This commit updates the default template YAML files in the `lmms_eval/tasks` directory. It modifies the `generation_kwargs` and `metadata` sections to improve the configuration and versioning of the tasks. These changes ensure consistency and clarity in the code.

Co-authored-by: [Co-author Name] <[[email protected]]>
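
A hypothetical excerpt of such a default template, to show where the `generation_kwargs` and `metadata` sections sit; the field values are illustrative only:

```yaml
generation_kwargs:
  max_new_tokens: 16
  temperature: 0
  do_sample: false
metadata:
  version: 0.1
```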

---------

Co-authored-by: Fanyi Pu <[email protected]>
Co-authored-by: [Co-author Name] <[[email protected]]>
Co-authored-by: kcz358 <[email protected]>
kcz358 added a commit that referenced this pull request on Sep 5, 2024