To understand how to prompt an autoregressive language model, we must first consider the context in which it was trained and the function it approximates.
This motivates an Anthropomorphic Approach to Prompt Programming, since modelling how GPT-3 will react to a prompt involves modelling virtual human writer(s). An anthropomorphic approach is distinct from anthropomorphizing the model. GPT-3’s dynamics entail sophisticated predictions of humans, but it behaves unlike a human in several important ways.
In this paper we will address two such ways: its resemblance not to a single human author but to a superposition of authors, which motivates a subtractive approach to prompt programming (§4.5), and its constrained ability to predict dynamics in situations where a substantial amount of silent reasoning happens between tokens, a limitation which can be partially overcome by prompting techniques (§4.6).
~
REYNOLDS, Laria and MCDONELL, Kyle, 2021. Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm. In: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (CHI EA ’21). New York, NY, USA: Association for Computing Machinery. p. 1–7. ISBN 978-1-4503-8095-9. DOI 10.1145/3411763.3451760.