[Paper Notes] A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT

Introduction : A well-crafted prompt can improve an LLM's performance; prompts can be engineered the way software is; the paper's main contribution is a set of prompt patterns for prompt engineering.

Comparing software patterns with prompt patterns : The framework this paper proposes for building prompts helps users communicate with an LLM effectively. The authors compare prompt patterns with software design patterns; the two share the following structure:

| | Software pattern | Prompt pattern |
|---|---|---|
| A name and classification | Identifies the pattern and classifies it | Identifies the pattern and classifies it |
| The intent | Describes the purpose of the design pattern | Describes the problem the prompt solves, or its purpose |
| The motivation | Describes the underlying problem the pattern targets and why it matters | Explains the motivation to the LLM |
| The structure and participants / key ideas | Describes the pattern's structure and the collaborations among its participants | Describes the fundamental contextual information, such as the key ideas |
| Example code / implementation | Sample code | Gives the LLM an example |
| Consequences | Summary | Summary |

The paper then reuses this analysis template when introducing each concrete pattern. In software design, structure is often described with UML class diagrams; in prompt engineering, one could likewise construct a grammar for prompts, even though doing so poses some challenges.

A catalog of prompt patterns for conversational LLMs : The table below lists the prompt patterns:

| Pattern category | Prompt pattern | Description |
|---|---|---|
| Input semantics | Meta language creation | How an LLM understands and translates input |
| Output customization | Output automater; Persona; Visualization generator; Recipe; Template | Focus on the type of output |
| Error identification | Fact check list; Reflection | Identifying and resolving errors |
| Prompt improvement | Question refinement; Alternative approaches; Cognitive verifier; Refusal breaker | Improving the quality of answers |
| Interaction | Flipped interaction; Game play; Infinite generation | Changing the style of interaction |
| Context control | Context manager | Controlling the context |

The meta language creation pattern

Experiment tool: iFlytek Spark (讯飞星火)

from langchain.schema import ChatMessage

# Meta language creation: the system prompt defines what the "->" notation means.
messages = [
    ChatMessage(
        role='system',
        content='When I type two identifiers separated by "->", I am describing a graph traversal. For example, "A->B" means "from A, go to B".',
    ),
    ChatMessage(
        role='user',
        content='What does "A->B->C" mean?',
    ),
]
generations=[[ChatGeneration(text='"A->B->C" means "from A, go to B, and then go to C".', message=AIMessage(content='"A->B->C" means "from A, go to B, and then go to C".'))]]
llm_output={'token_usage': {'question_tokens': 11, 'prompt_tokens': 50, 'completion_tokens': 23, 'total_tokens': 73}}
run=[RunInfo(run_id=UUID('bd98c7b1-26c9-4c7d-96ab-1dffd4ef18be'))]

It feels as though the model merely performed a literal translation here.

# Same meta language, but now a question that requires reasoning over the notation.
messages = [
    ChatMessage(
        role='system',
        content='When I type two identifiers separated by "->", I am describing a graph traversal. For example, "A->B" means "from A, go to B".',
    ),
    ChatMessage(
        role='user',
        content='There is "A->B->C" and "A->D->E->C". What is the shortest path from A to C? Why?',
    ),
]
generations=[[ChatGeneration(text='The shortest path from A to C is "A->B->C". \n\nThis is because it only involves two edges, whereas the other path "A->D->E->C" involves three edges. In graph theory, the shortest path between two vertices is the one with the minimum number of edges.', message=AIMessage(content='The shortest path from A to C is "A->B->C". \n\nThis is because it only involves two edges, whereas the other path "A->D->E->C" involves three edges. In graph theory, the shortest path between two vertices is the one with the minimum number of edges.'))]]
llm_output={'token_usage': {'question_tokens': 33, 'prompt_tokens': 72, 'completion_tokens': 68, 'total_tokens': 140}}
run=[RunInfo(run_id=UUID('40c21154-9949-4109-bbd5-826baf7a713d'))]

The output automater pattern

# Output automater: ask the model to emit an executable artifact (here, HTML) for every step.
# system: "When the output contains one or more steps, output the corresponding HTML code for each step"
# user:   "Create a web page with a Holy Grail layout"
messages = [
    ChatMessage(
        role='system',
        content='当输出包含一个或多个步骤时,输出每个步骤对应的html代码',
    ),
    ChatMessage(
        role='user',
        content='创建一个圣杯布局的网页',
    ),
]

The result is too long to include here.
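The point of the Output Automater pattern is that the reply contains runnable artifacts, so a small post-processing step can pull the generated code out automatically. A minimal sketch, assuming the model wraps each step's HTML in fenced code blocks (the fence format is my assumption, not something the paper or the experiment above specifies):

```python
import re

FENCE = "`" * 3  # build the triple-backtick fence marker programmatically

def extract_code_blocks(reply: str, lang: str = "html") -> list[str]:
    """Pull fenced code blocks of the given language out of an LLM reply."""
    pattern = rf"{FENCE}{lang}\n(.*?){FENCE}"
    return [b.strip() for b in re.findall(pattern, reply, flags=re.DOTALL)]

# Hypothetical reply fragment in the assumed fenced format:
reply = f"Step 1: add a header.\n{FENCE}html\n<header>site title</header>\n{FENCE}\n"
blocks = extract_code_blocks(reply)
```

Each extracted block can then be written straight to a file, which is the "automation" the pattern is after.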

The flipped interaction pattern

In short: have the LLM play the role of the questioner, and put constraints on its questioning in the prompt.

The model handled this rather clumsily.
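A minimal sketch of a flipped-interaction prompt, written here with plain role/content dicts rather than the ChatMessage objects used above; the topic and the exact constraint wording are my own illustration, not from the paper:

```python
# Flipped interaction: the system prompt makes the LLM ask the questions,
# one at a time, until a stopping condition is met.
flipped_prompt = [
    {
        "role": "system",
        "content": (
            "You are trying to diagnose why my web page loads slowly. "
            "Ask me questions, one at a time, until you have enough "
            "information, then state your diagnosis. Ask your first "
            "question now."
        ),
    },
]

# Each user reply is appended to the history, and the model asks the next question.
flipped_prompt.append(
    {"role": "user", "content": "The page has 40 uncompressed images."}
)
```

The "one at a time" and "until you have enough information" clauses are the constraints on questioning that the pattern calls for.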

The persona pattern : role playing

Interestingly, the LLM clearly fabricated the file structure of a Linux host.
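A sketch of the kind of persona prompt that produces the Linux-host behavior described above; the exact wording is illustrative, not the prompt actually used in the experiment:

```python
# Persona: the system prompt assigns the LLM a role and constrains its
# output format to match that role.
persona_prompt = [
    {
        "role": "system",
        "content": (
            "Act as a Linux terminal. I will type commands and you reply "
            "with what the terminal would show. Output only the terminal "
            "output, inside one code block, with no explanations."
        ),
    },
    {"role": "user", "content": "ls /"},
]
```

Since the model has no real filesystem, whatever directory listing it returns is necessarily invented, which is exactly what was observed.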

The question refinement pattern

When asking a question, have the LLM refine (enrich) the question you posed; this can bridge the gap between your own understanding and the LLM's understanding.

Unfortunately, iFlytek Spark's handling of conversational context was nothing short of disastrous.
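A sketch of a question-refinement prompt in the same dict format as above; the wording is illustrative:

```python
# Question refinement: before answering, the model proposes a sharper
# version of each question and asks whether to use it instead.
refine_prompt = [
    {
        "role": "system",
        "content": (
            "Whenever I ask a question, first suggest a more precise "
            "version of my question, then ask whether I would like to "
            "use it instead of my original question."
        ),
    },
    {"role": "user", "content": "How do I make my code faster?"},
]
```

Note that the pattern is inherently multi-turn (refine, confirm, answer), which is why weak context handling hurts it so badly.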

The alternative approaches pattern : multiple choice

Give the LLM several options, or have the LLM generate several options itself, and ask it to pick the best one.
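A sketch of an alternative-approaches prompt, again with illustrative wording of my own:

```python
# Alternative approaches: force the model to enumerate and compare
# several options before committing to one.
alternatives_prompt = [
    {
        "role": "system",
        "content": (
            "Whenever I ask how to do something, list at least three "
            "alternative approaches, compare their pros and cons, and "
            "then recommend one."
        ),
    },
    {
        "role": "user",
        "content": "How should I store user sessions for a web app?",
    },
]
```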

The cognitive verifier pattern : the philosopher's approach

Break a big question down into several sub-questions (or follow-up questions), then have the LLM combine the answers to those sub-questions into an answer to the big question, much as a philosophy graduate student would approach a philosophical problem.
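The decomposition described above can be sketched as a single system instruction; the wording and sub-question count are my own illustration:

```python
# Cognitive verifier: the model decomposes each question into
# sub-questions, answers them, and synthesizes a final answer.
verifier_prompt = [
    {
        "role": "system",
        "content": (
            "When I ask a question, first generate three sub-questions "
            "that would help answer it, answer each sub-question, and "
            "then combine those answers into a final answer to my "
            "original question."
        ),
    },
    {"role": "user", "content": "Is it safe to deploy on Fridays?"},
]
```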

posted @ 2024-04-29 21:10 sysss