

Poster

Generating Chain-of-Thoughts with a Direct Pairwise-Comparison Approach to Find the Most Promising Intermediate Thought

Zhen-Yu Zhang · Siwei Han · Huaxiu Yao · Gang Niu · Masashi Sugiyama


Abstract:

To improve the ability of \emph{large language models}~(LLMs) to handle complex reasoning problems, \emph{chain-of-thoughts}~(CoT) methods were proposed to guide LLMs to reason step by step, facilitating problem solving on tasks ranging from simple to complex. State-of-the-art approaches for generating such a chain involve interactive collaboration: the learner generates candidate intermediate thoughts, the LLM evaluates them, and the evaluation guides the generation of subsequent thoughts. However, a widespread yet understudied problem is that \emph{the evaluation from the LLM is typically noisy and unreliable}, potentially misleading the generation process when selecting promising intermediate thoughts. In this paper, motivated by Vapnik's principle, we propose a novel comparison-based CoT generation algorithm that directly identifies the most promising thoughts under noisy feedback from the LLM. In each round, we randomly \emph{pair intermediate thoughts and directly prompt the LLM to select} the more promising one from each pair, allowing us to identify the most promising thoughts through an iterative process. To further model the noise in the comparisons, we draw on ensemble techniques and dueling bandits and propose two variants of the algorithm. Experiments on three real-world mathematical and reasoning tasks demonstrate the effectiveness of the proposed algorithm and verify the rationale of the direct pairwise comparison.
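To illustrate the core selection loop described in the abstract, the sketch below implements a knockout-style tournament over candidate intermediate thoughts driven by a noisy pairwise-comparison oracle. It is a minimal illustration, not the authors' algorithm: the `noisy_compare` oracle, the hidden `quality` scores, and the majority-vote wrapper (used here as a simple stand-in for the ensemble and dueling-bandit variants) are all hypothetical assumptions for demonstration.

```python
import random

def noisy_compare(thought_a, thought_b):
    """Stand-in for prompting the LLM to pick the more promising thought.
    Simulated here as a noisy oracle over hidden 'quality' scores."""
    p_correct = 0.8  # assumed probability that the LLM's judgement is correct
    truth = thought_a["quality"] > thought_b["quality"]
    return truth if random.random() < p_correct else not truth

def majority_compare(a, b, votes=3):
    """Repeat the comparison and take a majority vote, a simple way to
    reduce the effect of noisy feedback (illustrative only)."""
    wins = sum(noisy_compare(a, b) for _ in range(votes))
    return wins * 2 > votes

def select_most_promising(thoughts, votes=3):
    """Randomly pair candidate thoughts, keep each pairwise winner, and
    repeat until a single most promising thought remains."""
    pool = list(thoughts)
    while len(pool) > 1:
        random.shuffle(pool)
        next_pool = []
        for i in range(0, len(pool) - 1, 2):
            a, b = pool[i], pool[i + 1]
            next_pool.append(a if majority_compare(a, b, votes) else b)
        if len(pool) % 2 == 1:  # an unpaired thought advances automatically
            next_pool.append(pool[-1])
        pool = next_pool
    return pool[0]

if __name__ == "__main__":
    # Hypothetical candidate intermediate thoughts with hidden quality scores.
    candidates = [{"text": f"thought {i}", "quality": random.random()} for i in range(8)]
    best = select_most_promising(candidates)
    print("selected:", best["text"])
```

In this sketch, each round halves the candidate pool, so the most promising thought is found with roughly a linear number of pairwise queries; the paper's ensemble and dueling-bandit variants address the comparison noise in a more principled way than the simple majority vote used here.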
