Direct Task Specification

Constructing the Signifier

Pre-GPT-3 models were much less capable of understanding abstract descriptions of tasks, owing to their limited model of the world and of human concepts. GPT-3’s impressive performance on 0-shot prompts indicates a new realm of possibilities for direct task specification.

A direct task specification is a 0-shot prompt which tells the model to perform a task that it already knows how to do using a signifier for the task. A signifier is a pattern which keys the intended behavior. It could be the name of the task, such as “translate”, a compound description, such as “rephrase this paragraph so that a 2nd grader can understand it, emphasizing real-world applications”, or purely contextual, such as the simple colon prompt from Figure 1. […]

In none of these cases does the signifier explain how to accomplish the task or provide examples of intended behavior; instead, it explicitly or implicitly calls functions which it assumes the language model has already learned.
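The three signifier styles above can be sketched as prompt constructors. This is a minimal illustration, not an API from the paper: the prompt strings follow the examples in the text, and any model call that would consume them is left out as hypothetical.

```python
def direct_task_prompt(text: str, style: str) -> str:
    """Build a 0-shot prompt whose signifier keys the intended task.

    No example or explanation of *how* to do the task is included;
    the signifier assumes the model has already learned the behavior.
    """
    if style == "name":
        # Signifier is the task's name.
        return f"Translate French to English: {text}\nEnglish:"
    if style == "compound":
        # Signifier is a compound description of the task.
        return ("Rephrase this paragraph so that a 2nd grader can "
                "understand it, emphasizing real-world applications:\n"
                f"{text}\n")
    if style == "contextual":
        # Signifier is purely contextual (the colon-prompt pattern):
        # the dialogue format alone implies "answer the question".
        return f"Q: {text}\nA:"
    raise ValueError(f"unknown style: {style}")
```

Note that in each case the prompt contains zero demonstrations; it only names or formats the task.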

Direct specifications can supervene on an infinity of implicit examples, like a closed-form expression on an infinite sequence, making them very powerful and compact. For instance, the phrase “translate French to English” supervenes on the mapping from every possible French phrase to its English translation.
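The closed-form analogy can be made concrete with a toy example of my own (not from the paper): a closed-form expression specifies every element of an infinite sequence compactly, without enumerating any of them, just as a direct specification stands in for an unbounded set of demonstrations.

```python
def triangular_closed_form(n: int) -> int:
    # The "direct specification": one compact expression.
    return n * (n + 1) // 2

def triangular_enumerated(n: int) -> int:
    # The "implicit examples": explicitly summing 1 + 2 + ... + n.
    return sum(range(1, n + 1))

# The closed form agrees with the enumeration on every element,
# yet never lists any of them.
assert all(triangular_closed_form(n) == triangular_enumerated(n)
           for n in range(1000))
```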

A large language model, like a person, has also learned behaviors for which it is less obvious how to construct a direct signifier. Task specification by demonstration (§4.3) and by proxy (§4.4) may be viable alternative strategies for eliciting those behaviors.


REYNOLDS, Laria and MCDONELL, Kyle, 2021. Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm. In: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (CHI EA ’21). New York, NY, USA: Association for Computing Machinery, p. 1–7. [Accessed 29 January 2023]. ISBN 978-1-4503-8095-9. DOI 10.1145/3411763.3451760.