T2 of Thoughts: Temperature Tree Elicits Reasoning in Large Language Models

Research output: Working paper › Preprint › Research

Standard

T2 of Thoughts : Temperature Tree Elicits Reasoning in Large Language Models. / Cai, Chengkun; Zhao, Xu; Du, Yucheng; Liu, Haoliang; Li, Lei.

arXiv.org, 2024.

Research output: Working paper › Preprint › Research

Harvard

Cai, C, Zhao, X, Du, Y, Liu, H & Li, L 2024 'T2 of Thoughts: Temperature Tree Elicits Reasoning in Large Language Models' arXiv.org. <https://arxiv.org/abs/2405.14075>

APA

Cai, C., Zhao, X., Du, Y., Liu, H., & Li, L. (2024). T2 of Thoughts: Temperature Tree Elicits Reasoning in Large Language Models. arXiv.org. https://arxiv.org/abs/2405.14075

Vancouver

Cai C, Zhao X, Du Y, Liu H, Li L. T2 of Thoughts: Temperature Tree Elicits Reasoning in Large Language Models. arXiv.org. 2024 May 23.

Author

Cai, Chengkun ; Zhao, Xu ; Du, Yucheng ; Liu, Haoliang ; Li, Lei. / T2 of Thoughts : Temperature Tree Elicits Reasoning in Large Language Models. arXiv.org, 2024.

Bibtex

@techreport{989bf7b15d1e402fba33d67c2045771a,
title = "T2 of Thoughts: Temperature Tree Elicits Reasoning in Large Language Models",
abstract = "Large Language Models (LLMs) have emerged as powerful tools in artificial intelligence, especially in complex decision-making scenarios, but their static problem-solving strategies often limit their adaptability to dynamic environments. We explore the enhancement of reasoning capabilities in LLMs through Temperature Tree ($T^2$) prompting via Particle Swarm Optimization, termed $T^2$ of Thoughts ($T^2oT$). The primary focus is on enhancing decision-making processes by dynamically adjusting search parameters, especially temperature, to improve accuracy without increasing computational demands. We empirically validate that our hybrid $T^2oT$ approach yields enhancements in single-solution accuracy, multi-solution generation, and text generation quality. Our findings suggest that while dynamic search depth adjustments based on temperature can yield mixed results, a fixed search depth, when coupled with the adaptive capabilities of $T^2oT$, provides a more reliable and versatile problem-solving strategy. This work highlights the potential for future explorations in optimizing algorithmic interactions with foundational language models, particularly illustrated by our development for the Game of 24 and Creative Writing tasks.",
keywords = "cs.CL, cs.AI, cs.LG",
author = "Chengkun Cai and Xu Zhao and Yucheng Du and Haoliang Liu and Lei Li",
note = "10 pages, 5 figures",
year = "2024",
month = may,
day = "23",
language = "Undefined/Unknown",
publisher = "arXiv.org",
type = "WorkingPaper",
institution = "arXiv.org",

}
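The abstract describes coupling a tree-of-thoughts style search with Particle Swarm Optimization so that the sampling temperature is adapted as the search proceeds. As a minimal, hedged sketch only (this is not the authors' released code; the Particle fields, the temperature bounds, and the score_fn evaluator are assumptions made here for illustration), one way such a PSO-driven temperature update could look in Python:

```python
import random

# Hedged sketch: PSO applied to the sampling temperature in a fixed-depth
# tree-style search, loosely following the high-level description in the
# abstract above. All names and parameter values here are hypothetical.

class Particle:
    def __init__(self, temperature: float):
        self.temperature = temperature          # current sampling temperature
        self.velocity = 0.0                     # PSO velocity along the temperature axis
        self.best_temperature = temperature     # personal best so far
        self.best_score = float("-inf")

def update_temperature(p: Particle, global_best: float,
                       w: float = 0.5, c1: float = 1.5, c2: float = 1.5,
                       t_min: float = 0.1, t_max: float = 1.5) -> None:
    """Standard PSO velocity/position update, clamped to a temperature range."""
    r1, r2 = random.random(), random.random()
    p.velocity = (w * p.velocity
                  + c1 * r1 * (p.best_temperature - p.temperature)
                  + c2 * r2 * (global_best - p.temperature))
    p.temperature = min(t_max, max(t_min, p.temperature + p.velocity))

def search(score_fn, n_particles: int = 4, depth: int = 3):
    """Fixed search depth; each level re-scores candidates and updates temperatures."""
    swarm = [Particle(random.uniform(0.2, 1.2)) for _ in range(n_particles)]
    global_best_t, global_best_score = swarm[0].temperature, float("-inf")
    for _ in range(depth):
        for p in swarm:
            score = score_fn(p.temperature)     # stand-in for scoring an LLM sample drawn at this temperature
            if score > p.best_score:
                p.best_score, p.best_temperature = score, p.temperature
            if score > global_best_score:
                global_best_score, global_best_t = score, p.temperature
        for p in swarm:
            update_temperature(p, global_best_t)
    return global_best_t, global_best_score

if __name__ == "__main__":
    # Toy quadratic stands in for "sample a thought at temperature t and score it".
    best_t, best_s = search(lambda t: -(t - 0.7) ** 2)
    print(f"best temperature = {best_t:.2f}, score = {best_s:.3f}")
```

In the paper's setting, score_fn would correspond to an evaluator's rating of a candidate thought sampled at the given temperature; here a toy quadratic stands in for it.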

RIS

TY - UNPB

T1 - T2 of Thoughts

T2 - Temperature Tree Elicits Reasoning in Large Language Models

AU - Cai, Chengkun

AU - Zhao, Xu

AU - Du, Yucheng

AU - Liu, Haoliang

AU - Li, Lei

N1 - 10 pages, 5 figures

PY - 2024/5/23

Y1 - 2024/5/23

N2 - Large Language Models (LLMs) have emerged as powerful tools in artificial intelligence, especially in complex decision-making scenarios, but their static problem-solving strategies often limit their adaptability to dynamic environments. We explore the enhancement of reasoning capabilities in LLMs through Temperature Tree ($T^2$) prompting via Particle Swarm Optimization, termed $T^2$ of Thoughts ($T^2oT$). The primary focus is on enhancing decision-making processes by dynamically adjusting search parameters, especially temperature, to improve accuracy without increasing computational demands. We empirically validate that our hybrid $T^2oT$ approach yields enhancements in single-solution accuracy, multi-solution generation, and text generation quality. Our findings suggest that while dynamic search depth adjustments based on temperature can yield mixed results, a fixed search depth, when coupled with the adaptive capabilities of $T^2oT$, provides a more reliable and versatile problem-solving strategy. This work highlights the potential for future explorations in optimizing algorithmic interactions with foundational language models, particularly illustrated by our development for the Game of 24 and Creative Writing tasks.

AB - Large Language Models (LLMs) have emerged as powerful tools in artificial intelligence, especially in complex decision-making scenarios, but their static problem-solving strategies often limit their adaptability to dynamic environments. We explore the enhancement of reasoning capabilities in LLMs through Temperature Tree ($T^2$) prompting via Particle Swarm Optimization, termed $T^2$ of Thoughts ($T^2oT$). The primary focus is on enhancing decision-making processes by dynamically adjusting search parameters, especially temperature, to improve accuracy without increasing computational demands. We empirically validate that our hybrid $T^2oT$ approach yields enhancements in single-solution accuracy, multi-solution generation, and text generation quality. Our findings suggest that while dynamic search depth adjustments based on temperature can yield mixed results, a fixed search depth, when coupled with the adaptive capabilities of $T^2oT$, provides a more reliable and versatile problem-solving strategy. This work highlights the potential for future explorations in optimizing algorithmic interactions with foundational language models, particularly illustrated by our development for the Game of 24 and Creative Writing tasks.

KW - cs.CL

KW - cs.AI

KW - cs.LG

M3 - Preprint

BT - T2 of Thoughts

PB - arXiv.org

ER -

ID: 395084579