【Feature】SWEBench Support #191

Open
GaoHuaZhang wants to merge 2 commits into AISBench:master from GaoHuaZhang:swe_bench_support

Conversation

@GaoHuaZhang
Collaborator

Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and help you get feedback more easily. If you do not understand some items, don't worry, just make the pull request and seek help from maintainers.

PR Type

  • Feature
  • Bugfix
  • Docs
  • CI/CD
  • Refactor
  • Perf
  • Dependency
  • Test-Cases
  • Other

Related Issue
Fixes #(issue ID) / Relates to #(issue ID)

🔍 Motivation

Please describe the motivation for this PR and the goal you want to achieve through it.

📝 Modification

Please briefly describe the modifications made in this PR.

📐 Associated Test Results

Please provide links to related test results, such as CI pipelines or test reports.

⚠️ BC-breaking (Optional)

Does this modification break backward compatibility for downstream repositories? If so, please describe how compatibility is broken and how downstream projects should modify their code to stay compatible with this PR.

⚠️ Performance degradation (Optional)

If this modification introduces a performance degradation, please describe its impact and the expected performance improvement.

🌟 Use cases (Optional)

If this PR introduces a new feature, please list some use cases here and update the documentation.

✅ Checklist

Before PR:

  • Pre-commit or other linting tools have been used to fix potential lint issues.
  • Bug fixes are fully covered by unit tests, including a test case that reproduces the original bug.
  • The modification is covered by complete unit tests. If not, please add more unit tests to ensure correctness.
  • All relevant documentation (API docs, docstrings, example tutorials) has been updated to reflect these changes.

After PR:

  • If the modification could affect downstream or other related projects, this PR should be tested against those projects.
  • The CLA has been signed by all committers on this PR.

👥 Collaboration Info

  • Suggested Reviewers: @xxx
  • Relevant Module Owners: @xxx
  • Other Collaboration Notes:

🌟 Useful CI Commands

Command Introduction
/gemini review Performs a code review of the current pull request in its current state by Gemini.
/gemini summary Provides a summary of the current pull request in its current state by Gemini.
/gemini help Displays a list of available Gemini commands.
/readthedocs build Triggers a Read the Docs build of the documentation for the current pull request in its current state.

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the benchmarking framework by integrating full support for SWEBench datasets. It introduces new components for loading, inferring, and evaluating models against SWEBench, alongside improvements to the configuration system for greater flexibility and automation. The changes streamline the process of setting up and running complex benchmarks, particularly for code generation and repair tasks, by abstracting away common configuration patterns and providing robust task execution mechanisms.

Highlights

  • SWEBench Integration: Added comprehensive support for SWEBench, including new dataset loaders, inference tasks, and evaluation tasks, enabling the framework to run and evaluate models on SWEBench datasets.
  • Dynamic Configuration Handling: Introduced a recursive configuration type conversion utility and updated core configuration management to automatically fill in default inference, reader, and evaluation configurations for datasets if they are not explicitly defined.
  • Worker Refactoring: Refactored the Infer and Eval workers to improve flexibility and maintainability, allowing them to dynamically handle different task types and merge configurations more robustly.
  • Example Configurations: Provided new example configuration files for SWEBench Lite and SWEBench Verified datasets, demonstrating how to set up and run benchmarks using the newly integrated features.


Changelog
  • ais_bench/benchmark/cli/config_manager.py
    • Imported recur_convert_config_type for recursive configuration processing.
    • Simplified DATASET_REQUIRED_FIELDS by removing reader_cfg, infer_cfg, and eval_cfg.
    • Invoked recur_convert_config_type on the configuration object before dumping it.
  • ais_bench/benchmark/cli/utils.py
    • Imported ConfigDict and Config from mmengine.config.
    • Added a new function recur_convert_config_type to recursively convert config types to string representations.
  • ais_bench/benchmark/cli/workers.py
    • Modified Infer worker to use direct class references for OpenICLApiInferTask and OpenICLInferTask instead of get_config_type.
    • Refactored Infer worker's update_cfg method to conditionally merge new inference configurations and update runner parameters.
    • Modified Eval worker to use direct class references for NaivePartitioner and LocalRunner.
    • Refactored Eval worker's update_cfg method to conditionally merge new evaluation configurations and update runner parameters, including debug, dump details, and extract rate flags.
  • ais_bench/benchmark/datasets/__init__.py
    • Added an import for the new swebench module to make SWEBenchDataset discoverable.
  • ais_bench/benchmark/datasets/swebench.py
    • Added a new file defining the SWEBenchDataset class.
    • Implemented filter_instances method to filter and shuffle dataset instances based on a filter specification.
    • Implemented load method to load SWEBench datasets from parquet files or directly from Hugging Face, supporting various SWEBench variants.
  • ais_bench/benchmark/tasks/__init__.py
    • Added imports for SWEBenchInferTask and SWEBenchEvalTask to register them in the task registry.
  • ais_bench/benchmark/tasks/swebench_eval.py
    • Added a new file defining the SWEBenchEvalTask class.
    • Implemented the run method to evaluate SWE-bench predictions using the official SWE-bench harness.
    • Included logic for handling prediction file paths, output directories, and error reporting for the evaluation process.
  • ais_bench/benchmark/tasks/swebench_infer.py
    • Added a new file defining the SWEBenchInferTask class.
    • Implemented the run method to execute mini-swe-agent on SWE-bench instances for inference.
    • Included helper functions _get_minisweagent_config to adapt ais_bench model configurations to mini-swe-agent's format.
    • Introduced _AISBenchProgressManager and _CompositeProgressManager for detailed progress reporting during inference.
    • Managed concurrent execution of instances using ThreadPoolExecutor and integrated with rich for live progress display.
  • ais_bench/benchmark/utils/config/run.py
    • Imported PromptTemplate, ZeroRetriever, GenInferencer, and AccEvaluator from ais_bench.benchmark.openicl.
    • Enhanced try_fill_in_custom_cfgs to automatically populate infer_cfg, reader_cfg, and eval_cfg with default OpenICL configurations for datasets if they are not present.
  • ais_bench/configs/swe_bench_examples/swe_bench_lite.py
    • Added a new example configuration file for benchmarking with the SWEBench Lite dataset.
    • Defined model, dataset, summarizer, infer, and eval configurations specific to SWEBench Lite.
  • ais_bench/configs/swe_bench_examples/swe_bench_verified.py
    • Added a new example configuration file for benchmarking with the SWEBench Verified dataset.
    • Defined model, dataset, summarizer, infer, and eval configurations specific to SWEBench Verified.
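To make the changelog entries for ais_bench/benchmark/cli/utils.py and ais_bench/benchmark/utils/config/run.py more concrete, here is a minimal sketch of what a recursive config-type converter could look like. This is an illustration under assumed names only, not the code from the diff: the real recur_convert_config_type in this PR may behave differently.

```python
# Hypothetical sketch: walk a config structure and replace any `type`
# entry holding a class object with its dotted import path, recursing
# into nested dicts and lists. Names and behaviour are illustrative.
def recur_convert_config_type(cfg):
    if isinstance(cfg, dict):
        for key, value in cfg.items():
            if key == "type" and isinstance(value, type):
                cfg[key] = f"{value.__module__}.{value.__qualname__}"
            else:
                cfg[key] = recur_convert_config_type(value)
    elif isinstance(cfg, list):
        for i, item in enumerate(cfg):
            cfg[i] = recur_convert_config_type(item)
    return cfg


class GenInferencer:  # stand-in class for illustration
    pass


cfg = {"inferencer": {"type": GenInferencer}, "runners": [{"type": GenInferencer}]}
converted = recur_convert_config_type(cfg)
print(converted["inferencer"]["type"])  # e.g. "__main__.GenInferencer"
```

Converting class objects to strings this way makes the resulting config serializable, which is useful when dumping it to disk.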
Activity
  • The pull request introduces new features and modifications across multiple files, indicating initial development work.
  • No human comments or reviews have been recorded yet.

@gemini-code-assist (bot) left a comment


Code Review

This pull request adds support for the SWE-Bench benchmark, including new dataset loaders, inference tasks, and evaluation tasks. The changes are extensive and introduce new dependencies like mini-swe-agent. I've found a few critical issues that will prevent the new workflow from running correctly, related to file handling and incorrect assumptions about library functions. There are also some correctness and performance issues that should be addressed. Please see my detailed comments below.

self.model_cfg,
dataset_cfg,
osp.join(self.work_dir, self.output_subdir),
file_extension="json",

critical

The SWE-Bench evaluation harness expects a .jsonl file for predictions. Using a .json extension here will cause a FileNotFoundError in the evaluation step.

Suggested change
file_extension="json",
file_extension="jsonl",

Comment on lines +287 to +289
preds_path = out_dir / "preds.json"
if preds_path.exists():
shutil.move(preds_path, out_path)

critical

The process_instance function from mini-swe-agent only creates per-instance trajectory files and does not generate the final aggregated prediction file. You need to add a step to collect the predictions from these trajectory files into a single preds.jsonl file after all instances have been processed. The mini-swe-agent library provides utilities for this. Also, the filename should be preds.jsonl, not preds.json.

Suggested change
preds_path = out_dir / "preds.json"
if preds_path.exists():
shutil.move(preds_path, out_path)
from minisweagent.run.benchmarks.utils.run_utils import get_predictions_from_trajectories
self.logger.info(f"Collecting predictions from trajectories in {out_dir}...")
get_predictions_from_trajectories(str(out_dir))
preds_path = out_dir / "preds.jsonl"
if preds_path.exists():
shutil.move(str(preds_path), out_path)
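Independent of which mini-swe-agent helper ends up being used, aggregating per-instance outputs into a single JSON Lines file is straightforward to do by hand. The sketch below assumes a hypothetical layout of one `<instance_id>/<instance_id>.json` file per instance and assumes the SWE-bench prediction fields `instance_id`, `model_name_or_path`, and `model_patch`; the directory layout and the `model` key are illustrative, not mini-swe-agent's actual API.

```python
import json
from pathlib import Path


def collect_predictions(out_dir: Path, preds_path: Path) -> int:
    """Aggregate per-instance JSON files into one preds.jsonl.

    Assumed layout: out_dir/<instance_id>/<instance_id>.json, each file
    containing at least "instance_id" and "model_patch" (assumptions
    for illustration). Returns the number of predictions written.
    """
    count = 0
    with preds_path.open("w") as fh:
        for traj in sorted(out_dir.glob("*/*.json")):
            record = json.loads(traj.read_text())
            # One JSON object per line, as the SWE-bench harness expects.
            fh.write(json.dumps({
                "instance_id": record["instance_id"],
                "model_name_or_path": record.get("model", "unknown"),
                "model_patch": record["model_patch"],
            }) + "\n")
            count += 1
    return count
```

Writing one object per line (rather than one JSON array) is what distinguishes the `.jsonl` format the evaluation step expects from a plain `.json` file.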

f"Invalid swebench dataset name, expected one of {list(DATASET_MAPPING.keys())} but got {name}",
)
try:
dataset = load_dataset("parquet", data_files={split: path})

high

The function load_dataset("parquet", data_files=...) returns a DatasetDict object, not a Dataset. The subsequent code on line 68, list(dataset), will then operate on the keys of this dictionary (e.g., ['test']) instead of the dataset records, which will cause a TypeError inside filter_instances. You need to select the appropriate split from the DatasetDict before processing it.

Suggested change
dataset = load_dataset("parquet", data_files={split: path})
dataset = load_dataset("parquet", data_files={split: path})[split]
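The pitfall generalizes beyond the datasets library: iterating a mapping yields its keys. A dependency-free analogy of why list(dataset) over a DatasetDict produces split names rather than records (plain dicts stand in for the datasets types here):

```python
# A plain dict standing in for DatasetDict: split names map to record lists.
dataset_like = {"test": [{"instance_id": "a"}, {"instance_id": "b"}]}

print(list(dataset_like))          # ['test'] -- split names, not records
print(list(dataset_like["test"]))  # the actual records for that split
```

Selecting the split first, as the suggested change does, is what makes the subsequent iteration see records instead of split names.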

Comment on lines +17 to +32
for dataset_cfg in config["datasets"]:
if "infer_cfg" not in dataset_cfg:
logger.debug(f"Filling in infer config for dataset {dataset_cfg['abbr']}")
dataset_cfg["infer_cfg"] = dict(
prompt_template=dict(type=get_config_type(PromptTemplate), template="{dummy}"),
retriever=dict(type=get_config_type(ZeroRetriever)),
inferencer=dict(type=get_config_type(GenInferencer)),
)
if "reader_cfg" not in dataset_cfg:
logger.debug(f"Filling in reader config for dataset {dataset_cfg['abbr']}")
dataset_cfg["reader_cfg"] = dict(input_columns=["dummy"], output_column="dummy")
if "eval_cfg" not in dataset_cfg:
logger.debug(f"Filling in eval config for dataset {dataset_cfg['abbr']}")
dataset_cfg["eval_cfg"] = dict(
evaluator=dict(type=get_config_type(AccEvaluator)),
)

high

The function get_config_type is called here, but its definition appears later in the file. This can result in a NameError at runtime. Please move the definition of get_config_type above try_fill_in_custom_cfgs so it is defined before it is called.
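For reference, Python resolves names when a call executes, not when a function is defined, so the NameError only materializes if the call runs before the definition has executed. A minimal demonstration of both cases:

```python
# A function body may reference a name defined later in the module,
# because the lookup happens at call time, not definition time.
def early():
    return later()  # 'later' is looked up when early() is called


def later():
    return "ok"


print(early())  # prints "ok"

# A NameError would occur only if early() ran before the 'def later'
# line executed, e.g. in module-level code between the two definitions.
```

Moving the definition above the caller, as suggested, removes the ordering hazard regardless of when the call happens.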

Comment on lines +37 to +38
for i, item in enumerate(cfg):
cfg[i] = recur_convert_config_type(item) if isinstance(item, (dict, ConfigDict, Config, list)) else item

medium

The recursive call for list items can be simplified. Instead of checking the type of the item before the recursive call, you can just call recur_convert_config_type on every item. The function already handles non-container types by returning them as is.

Suggested change
for i, item in enumerate(cfg):
cfg[i] = recur_convert_config_type(item) if isinstance(item, (dict, ConfigDict, Config, list)) else item
for i, item in enumerate(cfg):
cfg[i] = recur_convert_config_type(item)

DSET_CODES.DATA_PREPROCESSING_ERROR,
f"Failed to load swebench dataset {name} from Hugging Face with error: {e}.",
)
dataset = self.filter_instances(list(dataset), filter_spec=filter_spec, shuffle=shuffle)

medium

Converting the entire dataset to a list using list(dataset) can be very memory-intensive, especially for large datasets, and may lead to out-of-memory errors. It is more efficient to use the .filter() method provided by the datasets library, which processes the data in a streaming fashion without loading everything into memory at once. Consider refactoring filter_instances to work directly with Dataset objects.
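The datasets library does provide Dataset.filter for exactly this. As a dependency-free sketch of the streaming idea, generators filter records lazily instead of materializing the whole dataset up front (the record fields below are illustrative):

```python
# Lazily filter records with generators: nothing is materialized until
# consumed, mirroring the streaming behaviour the reviewer suggests
# instead of list(dataset) followed by filtering in memory.
records = (
    {"instance_id": f"id-{i}",
     "repo": "astropy/astropy" if i % 2 else "django/django"}
    for i in range(1000)
)
astropy_only = (r for r in records if r["repo"] == "astropy/astropy")
first = next(astropy_only)
print(first["instance_id"])  # id-1
```

With the datasets library, the equivalent would be dataset.filter(lambda r: ...), which evaluates the predicate per record without building an intermediate Python list of all records.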
