LLMs as Function Approximators: Terminology, Taxonomy, and Questions for Evaluation

Authors: David Schlangen

Abstract: Natural Language Processing has moved rather quickly from modelling specific
tasks to taking more general pre-trained models and fine-tuning them for
specific tasks, to a point where we now have what appear to be inherently
generalist models. This paper argues that the resultant loss of clarity on what
these models model leads to metaphors like “artificial general intelligences”
that are not helpful for evaluating their strengths and weaknesses. The
proposal is to see their generality, and their potential value, in their
ability to approximate specialist functions on the basis of a natural language
specification. This framing brings to the fore questions of the quality of the
approximation, but beyond that, also questions of discoverability, stability,
and protectability of these functions. As the paper will show, this framing
hence brings together in one conceptual framework various aspects of
evaluation, both from a practical and a theoretical perspective, as well as
questions often relegated to a secondary status (such as “prompt injection” and
“jailbreaking”).

Source: http://arxiv.org/abs/2407.13744v1
