arXiv:2510.25404

GPTOpt: Teaching LLMs to do Interpretable Black-Box Optimization

Published on Oct 29, 2025
Authors:

Abstract

AI-generated summary: GPTOpt enhances LLMs with continuous black-box optimization capabilities through fine-tuning on Bayesian optimization data, achieving competitive performance while maintaining explainability.

Global optimization of expensive, derivative-free black-box functions demands extreme sample efficiency and decision interpretability. While Large Language Models (LLMs) have shown broad capabilities, even state-of-the-art models remain limited in solving continuous black-box optimization tasks and struggle to maintain exploration-exploitation balance. We introduce GPTOpt, an optimization method that equips LLMs with continuous black-box optimization capabilities by fine-tuning Llama 3.1 8B on structured Bayesian optimization (BO) data, including surrogate model information. This provides an explainable framework calibrated to produce surrogate model outputs comparable to a Gaussian process, while keeping the advantages of flexible LLM-based optimization. On a variety of black-box optimization benchmarks, our model shows favorable performance compared to traditional optimizers and transformer-based alternatives, while providing important context and insight into the model's decisions.
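For context, the sketch below shows the classical Bayesian optimization loop the abstract refers to: a Gaussian process surrogate fit to all evaluations so far, with an Expected Improvement acquisition function choosing the next query point. This is a minimal illustration of the traditional baseline and data-generation setup the paper mentions, not GPTOpt itself; the toy objective, bounds, and helper names are assumptions for the example.

```python
# Minimal sketch of a GP-based Bayesian optimization loop (the classical baseline
# referenced in the abstract). Objective, bounds, and hyperparameters are illustrative.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(X_cand, gp, y_best, xi=0.01):
    """Expected Improvement acquisition for minimization."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)          # avoid division by zero
    imp = y_best - mu - xi                   # improvement over the incumbent
    z = imp / sigma
    return imp * norm.cdf(z) + sigma * norm.pdf(z)

def bayes_opt(objective, bounds, n_init=5, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    dim = bounds.shape[0]
    # Initial random design
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_init, dim))
    y = np.array([objective(x) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        # Maximize the acquisition by random search over candidate points
        cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(2048, dim))
        ei = expected_improvement(cand, gp, y.min())
        x_next = cand[np.argmax(ei)]
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next))
    best = np.argmin(y)
    return X[best], y[best]

if __name__ == "__main__":
    # Toy 2-D quadratic standing in for an expensive black-box function.
    f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2
    best_x, best_y = bayes_opt(f, bounds=np.array([[-5.0, 5.0], [-5.0, 5.0]]))
    print(best_x, best_y)
```

GPTOpt's contribution, per the abstract, is to fine-tune Llama 3.1 8B on traces of this kind of loop (including surrogate model information) so that the LLM itself proposes query points while exposing interpretable, GP-comparable surrogate outputs.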
