Memetic Proxy

A memetic proxy refers to the use of a symbol or representation as a surrogate for a cultural meme or idea. In this context, a meme is a unit of cultural information, such as a concept, belief, or behavior, that is transmitted from one individual to another through communication. A memetic proxy can serve as a shorthand for a complex idea or concept and can be used to quickly convey that idea in a way that is easily recognized and understood by others. This can be useful for communication and can help to spread ideas and information more efficiently within a culture or community. The term is often used in the context of marketing, advertising, and social media, where memes and other forms of viral content are used to promote products or ideas. -- chat.openai

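Reynolds and McDonell (2021, cited below) call the prompt-engineering application of this idea task specification by memetic proxy: rather than spelling out every desired property of the output, the prompt invokes a character or situation that already carries those properties. A minimal sketch in Python; the question, the two framings, and the teacher-student dialogue are illustrative assumptions, not prompts taken from the paper:

```python
# Two ways to ask for the same behavior: a literal task specification
# versus a memetic proxy that evokes the behavior through a familiar frame.

QUESTION = "Why does the sky appear blue?"

# Direct specification: every desired property is spelled out explicitly.
direct_prompt = (
    "Answer the following question patiently, accurately, and in simple "
    f"terms a child could follow.\n\nQuestion: {QUESTION}\nAnswer:"
)

# Memetic proxy: a teacher-student dialogue stands in for "patient,
# accurate, simple" -- the frame itself carries the task specification.
proxy_prompt = (
    "The following is a transcript of a conversation between a curious "
    "student and a patient, wise teacher.\n\n"
    f"Student: {QUESTION}\nTeacher:"
)

# Either string would then be sent to a language model as the prompt;
# the completion inherits the behavior implied by the chosen frame.
print(proxy_prompt)
```
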
~

Memetic algorithm (Wikipedia)

BERRIOS TORRES, Antonio, 2022. Language models for patents: exploring prompt engineering for the Patent domain. 2022. ⇒ Establish Rules, Real Constraint, Constraint

> For example, the ability of computational systems to establish Rules as genuine constraints where an analogous human legal system can only penalize violations makes possible Patterns of Organization that can only be approximated in society.
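
The contrast is easy to make concrete. A minimal sketch (the Account class is a hypothetical illustration, not drawn from the cited thesis): where a legal system can only punish an overdraft after it happens, a program can refuse to ever enter the violating state:

```python
class Account:
    """An account whose balance can never go negative: the rule is a
    genuine constraint, not a penalty applied after a violation."""

    def __init__(self, balance: int = 0):
        if balance < 0:
            raise ValueError("balance must be non-negative")
        self._balance = balance

    def withdraw(self, amount: int) -> None:
        # The violating state is unreachable: the operation is refused
        # up front rather than punished afterwards.
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount


acct = Account(100)
acct.withdraw(30)      # fine
# acct.withdraw(1000)  # raises ValueError: the rule cannot be broken
```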

HEWETT, Joe and LEEKE, Matthew, 2022. Developing a GPT-3-Based Automated Victim for Advance Fee Fraud Disruption. In: 2022 IEEE 27th Pacific Rim International Symposium on Dependable Computing (PRDC). IEEE. 2022. p. 205–211.

REYNOLDS, Laria and MCDONELL, Kyle, 2021. Prompt programming for large language models: Beyond the few-shot paradigm. In: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems. 2021. p. 1–7. ⇒ Task Specification by Memetic Proxy

LUNDBLAD, Jonathan, THÖRN, Edwin and THÖRN, Linus, 2023. The impact of task specification on code generated via ChatGPT. 2023. [Accessed 18 February 2024].

ChatGPT has made large language models more accessible and made it possible to code using natural language prompts. This study ran an experiment comparing prompt engineering techniques known as Task Specification and investigated their impact on code generation in terms of correctness and variety. The hypotheses focused on whether the baseline method showed a statistically significant difference in code correctness compared to the other methods. Code was evaluated against a software requirement specification measuring functional and syntactic correctness, and code variance was measured to identify patterns in code generation. The results show a statistically significant difference on some correctness criteria between the baseline and the other task specification methods, and the variance measurements indicate variety in the generated solutions. Future work could include other large language models, different programming tasks and programming languages, and other prompt engineering techniques.
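
A rough sketch of the kind of evaluation pipeline the abstract describes; the task, the test cases, and the AST-based variance measure are illustrative assumptions, not the study's actual materials:

```python
import ast
import hashlib

# Hypothetical candidates returned by ChatGPT for the same task
# specification, e.g. "write a function sort_list(xs) that sorts a list".
candidates = [
    "def sort_list(xs):\n    return sorted(xs)",
    "def sort_list(xs):\n    ys = list(xs)\n    ys.sort()\n    return ys",
    "def sort_list(xs)\n    return sorted(xs)",  # missing colon: syntax error
]

# Test cases standing in for the software requirement specification.
tests = [([3, 1, 2], [1, 2, 3]), ([], [])]

for i, src in enumerate(candidates):
    # Syntactic correctness: does the source even parse?
    try:
        tree = ast.parse(src)
    except SyntaxError:
        print(f"candidate {i}: syntactically incorrect")
        continue

    # Functional correctness: run the candidate against the test cases.
    ns = {}
    exec(src, ns)
    ok = all(ns["sort_list"](inp) == out for inp, out in tests)
    print(f"candidate {i}: functionally {'correct' if ok else 'incorrect'}")

    # A crude variance signal: hash of the dumped AST, so candidates that
    # differ only in formatting collapse into the same bucket.
    digest = hashlib.sha256(ast.dump(tree).encode()).hexdigest()[:8]
    print(f"candidate {i}: solution fingerprint {digest}")
```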