Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the benchmarking framework by integrating full support for SWEBench datasets. It introduces new components for loading, inferring, and evaluating models against SWEBench, alongside improvements to the configuration system for greater flexibility and automation. The changes streamline the process of setting up and running complex benchmarks, particularly for code generation and repair tasks, by abstracting away common configuration patterns and providing robust task execution mechanisms.
Code Review
This pull request adds support for the SWE-Bench benchmark, including new dataset loaders, inference tasks, and evaluation tasks. The changes are extensive and introduce new dependencies like mini-swe-agent. I've found a few critical issues that will prevent the new workflow from running correctly, related to file handling and incorrect assumptions about library functions. There are also some correctness and performance issues that should be addressed. Please see my detailed comments below.
```python
self.model_cfg,
dataset_cfg,
osp.join(self.work_dir, self.output_subdir),
file_extension="json",
```
```python
preds_path = out_dir / "preds.json"
if preds_path.exists():
    shutil.move(preds_path, out_path)
```
The process_instance function from mini-swe-agent only creates per-instance trajectory files and does not generate the final aggregated prediction file. You need to add a step to collect the predictions from these trajectory files into a single preds.jsonl file after all instances have been processed. The mini-swe-agent library provides utilities for this. Also, the filename should be preds.jsonl, not preds.json.
Suggested change:

```diff
-preds_path = out_dir / "preds.json"
-if preds_path.exists():
-    shutil.move(preds_path, out_path)
+from minisweagent.run.benchmarks.utils.run_utils import get_predictions_from_trajectories
+self.logger.info(f"Collecting predictions from trajectories in {out_dir}...")
+get_predictions_from_trajectories(str(out_dir))
+preds_path = out_dir / "preds.jsonl"
+if preds_path.exists():
+    shutil.move(str(preds_path), out_path)
```
```python
    f"Invalid swebench dataset name, expected one of {list(DATASET_MAPPING.keys())} but got {name}",
)
try:
    dataset = load_dataset("parquet", data_files={split: path})
```
The function load_dataset("parquet", data_files=...) returns a DatasetDict object, not a Dataset. The subsequent code on line 68, list(dataset), will then operate on the keys of this dictionary (e.g., ['test']) instead of the dataset records, which will cause a TypeError inside filter_instances. You need to select the appropriate split from the DatasetDict before processing it.
Suggested change:

```diff
-dataset = load_dataset("parquet", data_files={split: path})
+dataset = load_dataset("parquet", data_files={split: path})[split]
```
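The pitfall is easy to see with a plain dict, which is essentially what `DatasetDict` is — a mapping keyed by split name. This is a stand-in for illustration, not the real `datasets` API:

```python
# Stand-in for a DatasetDict: a mapping from split name to rows.
dataset_dict = {"test": [{"instance_id": "astropy__astropy-12907"},
                         {"instance_id": "django__django-11049"}]}

split_names = list(dataset_dict)      # ['test'] -- the keys, not the records
records = list(dataset_dict["test"])  # the actual rows for the split
```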
```python
for dataset_cfg in config["datasets"]:
    if "infer_cfg" not in dataset_cfg:
        logger.debug(f"Filling in infer config for dataset {dataset_cfg['abbr']}")
        dataset_cfg["infer_cfg"] = dict(
            prompt_template=dict(type=get_config_type(PromptTemplate), template="{dummy}"),
            retriever=dict(type=get_config_type(ZeroRetriever)),
            inferencer=dict(type=get_config_type(GenInferencer)),
        )
    if "reader_cfg" not in dataset_cfg:
        logger.debug(f"Filling in reader config for dataset {dataset_cfg['abbr']}")
        dataset_cfg["reader_cfg"] = dict(input_columns=["dummy"], output_column="dummy")
    if "eval_cfg" not in dataset_cfg:
        logger.debug(f"Filling in eval config for dataset {dataset_cfg['abbr']}")
        dataset_cfg["eval_cfg"] = dict(
            evaluator=dict(type=get_config_type(AccEvaluator)),
        )
```
```python
for i, item in enumerate(cfg):
    cfg[i] = recur_convert_config_type(item) if isinstance(item, (dict, ConfigDict, Config, list)) else item
```
The recursive call for list items can be simplified. Instead of checking the type of the item before the recursive call, you can just call recur_convert_config_type on every item. The function already handles non-container types by returning them as is.
Suggested change:

```diff
-for i, item in enumerate(cfg):
-    cfg[i] = recur_convert_config_type(item) if isinstance(item, (dict, ConfigDict, Config, list)) else item
+for i, item in enumerate(cfg):
+    cfg[i] = recur_convert_config_type(item)
```
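A minimal sketch of why the guard is redundant — this mirrors only the recursion structure, not the real type-conversion logic of `recur_convert_config_type`:

```python
def recur_convert_config_type(cfg):
    """Simplified stand-in: recurse into dicts and lists, return leaves as-is."""
    if isinstance(cfg, dict):
        for k, v in cfg.items():
            cfg[k] = recur_convert_config_type(v)
        return cfg
    if isinstance(cfg, list):
        for i, item in enumerate(cfg):
            # Safe even when item is an int/str: the base case below
            # returns non-container values unchanged.
            cfg[i] = recur_convert_config_type(item)
        return cfg
    return cfg  # base case: leaves pass through untouched
```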
```python
    DSET_CODES.DATA_PREPROCESSING_ERROR,
    f"Failed to load swebench dataset {name} from Hugging Face with error: {e}.",
)
dataset = self.filter_instances(list(dataset), filter_spec=filter_spec, shuffle=shuffle)
```
Converting the entire dataset to a list using list(dataset) can be very memory-intensive, especially for large datasets, and may lead to out-of-memory errors. It is more efficient to use the .filter() method provided by the datasets library, which processes the data in a streaming fashion without loading everything into memory at once. Consider refactoring filter_instances to work directly with Dataset objects.
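One possible shape for that refactor, sketched under the assumption that `filter_instances` selects rows by matching `instance_id` against a regex (the real spec format may differ): build a per-record predicate and hand it to `Dataset.filter`, which evaluates lazily in batches instead of materializing everything. Demonstrated below on plain dicts so the idea is self-contained:

```python
import re

def make_predicate(filter_spec):
    """Build a per-record predicate from a filter spec.

    Hypothetical spec format: a regex matched against instance_id; the
    real filter_instances may support more (ID lists, slices, ...).
    """
    pattern = re.compile(filter_spec) if filter_spec else None
    def pred(record):
        return pattern is None or bool(pattern.search(record["instance_id"]))
    return pred

# With a real datasets.Dataset this would be applied lazily:
#     dataset = dataset.filter(make_predicate(filter_spec))
# Illustrated here on plain dicts:
records = [{"instance_id": "django__django-11049"},
           {"instance_id": "sympy__sympy-20590"}]
kept = [r for r in records if make_predicate(r"^django__")(r)]
```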
Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and help you get feedback more easily. If you do not understand some items, don't worry, just make the pull request and seek help from maintainers.
PR Type
Related Issue
Fixes #(issue ID) / Relates to #(issue ID)
🔍 Motivation
Please describe the motivation of this PR and the goal you want to achieve through this PR.
📝 Modification
Please briefly describe what modification is made in this PR.
📐 Associated Test Results
Please provide links to the related test results, such as CI pipelines, test reports, etc.
Does the modification introduce changes that break the backward compatibility of the downstream repositories? If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.
If the modification introduces performance degradation, please describe the impact of the performance degradation and the expected performance improvement.
🌟 Use cases (Optional)
If this PR introduces a new feature, it is better to list some use cases here and update the documentation.
✅ Checklist
Before PR:
After PR:
👥 Collaboration Info
🌟 Useful CI Commands
/gemini review
/gemini summary
/gemini help
/readthedocs build