Poster

InfiAgent-DABench: Evaluating Agents on Data Analysis Tasks

Xueyu Hu · Ziyu Zhao · Shuang Wei · Ziwei Chai · Qianli Ma · Guoyin Wang · Xuwu Wang · Jing Su · Jingjing Xu · Ming Zhu · Yao Cheng · Jianbo Yuan · Jiwei Li · Kun Kuang · Yang Yang · Hongxia Yang · Fei Wu


Abstract:

In this paper, we introduce InfiAgent-DABench, the first benchmark specifically designed to evaluate LLM-based agents on data analysis tasks. These tasks require agents to solve complex problems end-to-end by interacting with an execution environment. The benchmark contains DAEval, a dataset of 603 data analysis questions derived from 124 CSV files, and an agent framework that incorporates LLMs as data analysis agents, supporting both task serving and evaluation. Since data analysis questions are often open-ended and hard to evaluate without human supervision, we adopt a format-prompting technique that converts each question into a closed-form format so it can be evaluated automatically. Our extensive benchmarking of 34 LLMs uncovers the challenges currently encountered in data analysis tasks. In addition, building on our agent framework, we develop a specialized agent, DAAgent, which surpasses GPT-3.5 by 3.9% on DABench.
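To illustrate the format-prompting idea, the Python sketch below shows how an open-ended question might be paired with an explicit output-format instruction, and how the resulting closed-form answer could then be extracted and scored automatically. The @name[value] convention, the helper names, and the example values are assumptions for illustration, not the benchmark's actual implementation.

import re

# Hypothetical sketch of format prompting: an open-ended question is paired
# with explicit output-format instructions so the agent's final answer can be
# extracted and checked automatically. The "@name[value]" convention below is
# an assumption for illustration.
FORMAT_INSTRUCTION = (
    "Conclude your response with lines of the form @{name}[{value}], "
    "one per requested quantity, e.g. @mean_age[29.70]."
)

def build_prompt(question: str) -> str:
    """Attach the closed-form format requirement to an open-ended question."""
    return f"{question}\n\n{FORMAT_INSTRUCTION}"

def extract_answers(response: str) -> dict:
    """Pull @name[value] pairs out of the agent's final response."""
    return dict(re.findall(r"@(\w+)\[([^\]]*)\]", response))

def is_correct(response: str, gold: dict) -> bool:
    """Exact-match comparison against reference answers."""
    predicted = extract_answers(response)
    return all(predicted.get(name) == value for name, value in gold.items())

# Toy usage: the agent's free-form analysis ends with a machine-checkable line.
response = "The mean age in the dataset is 29.70.\n@mean_age[29.70]"
print(is_correct(response, {"mean_age": "29.70"}))  # True

Converting questions to this closed form trades some open-endedness for evaluation that needs no human supervision, which is the motivation the abstract gives for the technique.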
