NLP Basics 2: Large-Model Adaptation
Prerequisites
For machine learning basics, see:
Andrew Ng Machine Learning Review 1: supervised learning, unsupervised learning, model representation, loss functions, intuition I, intuition II, gradient descent and its intuition, gradient descent for linear regression - asandstar - cnblogs.com
Andrew Ng Machine Learning Review 3: classification, hypothesis representation, decision boundaries, loss functions, simplified loss function and gradient descent, gradient descent, advanced optimization, multiclass classification, the normal equation - asandstar - cnblogs.com
Andrew Ng Machine Learning Review 4: nonlinear hypotheses, neurons and the brain, model representation 1, model representation 2, examples and intuition 1, examples and intuition 2, multiclass classification - asandstar - cnblogs.com
【Motivation】
Language models (which perform tasks given a prompt) do not fit every downstream task out of the box, e.g. natural language inference (NLI), question answering (QA), converting web tables to text, or parsing electronic health records (EHR).
The gap: downstream tasks may differ from the language model's training data in format and topic, or may require information that must be kept up to date.
GPT-3 is task-agnostic: it is not optimized for any specific task, so it captures task-general structure that can transfer to downstream tasks. This makes it flexible, but it may underperform on some tasks.
In short, because different tasks call for different dataset-modeling approaches, applying a single language model to downstream tasks runs into the following mismatches:
| Dimension | Mismatch | Detail |
| --- | --- | --- |
| Format | Natural language inference (NLI) | Compares two sentences and produces a single binary output |
| Format | BERT's [MASK] token | BERT is trained with [MASK] tokens, but many downstream tasks do not use them, so task-specific adjustment is needed |
| Topic | Domain-specific needs | Specialized domains have their own terminology, e.g. medical-record analysis and legal-document parsing |
| Topic | Flexibility across broad topics | Downstream tasks may cluster in new or unusual domains beyond the model's training coverage |
| Temporal | Newly emerging knowledge | New information and knowledge keep appearing as time passes |
| Temporal | Non-public information | Information that was not public during training calls for domain-specific knowledge and adaptation |
General adaptation setup:
Pre-trained model (for understanding and generating language)
→ combine with a downstream-task dataset (text classification, sentiment classification, etc.)
→ adaptation parameters
→ define a loss function (cross-entropy measures the gap between the model's predicted distribution and the true distribution)
→ solve the optimization problem
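The loss-function step above can be made concrete with a minimal sketch of cross-entropy over softmax logits (all names and numbers here are illustrative, not from any specific model):

```python
import numpy as np

def softmax(logits):
    # Shift by the max for numerical stability before exponentiating.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, label):
    # Negative log-probability that the model assigns to the true label.
    return -np.log(softmax(logits)[label])

# Toy 3-class prediction: the model favours class 2.
logits = np.array([0.5, 1.0, 3.0])
loss_correct = cross_entropy(logits, 2)  # true label matches the prediction
loss_wrong = cross_entropy(logits, 0)    # true label contradicts it
assert loss_correct < loss_wrong
```

During adaptation, this loss is minimized over the adaptation parameters on the downstream dataset, which is exactly the "optimization problem" in the last arrow.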
【Common Adaptation Methods】
1. Probing
Mainly used with encoder-only models: train a probe from the final-layer representations to the output.
Fixed-length representation strategies: use the special CLS token, or average the token representations.
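A toy sketch of the two pooling strategies plus a linear probe, using random NumPy arrays as stand-ins for a frozen encoder's output (all shapes and names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "frozen" encoder output: 5 tokens, hidden size 4 (token 0 plays the CLS role).
hidden = rng.random((5, 4))

# Strategy 1: take the CLS token's representation.
cls_repr = hidden[0]

# Strategy 2: average all token representations.
mean_repr = hidden.mean(axis=0)

# The probe: a single linear layer on top of the frozen features.
# In practice only W and b are trained; here they are random stand-ins.
W = rng.random((4, 2))  # 2 output classes
b = np.zeros(2)
logits = mean_repr @ W + b
```

The point of probing is that the encoder stays frozen; only the small probe (W, b) is trained, so it doubles as a diagnostic of what the frozen representations already encode.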
2. Fine-tuning
Fine-tuning takes far less time than pre-training.
Zero-shot means generalizing to tasks never seen during training.
3. Lightweight Fine-tuning
Lightweight fine-tuning reduces the model's storage and compute burden while still performing well.
Main variants: prompt tuning, prefix tuning, and adapter tuning.
Prompt tuning: initialize the prompt embeddings from random vocabulary words, class-label words, or random values.
Adapter tuning: add lightweight weight parameters.
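Adapter tuning can be sketched as a small bottleneck with a residual connection: project the hidden state down, apply a nonlinearity, project back up, and add the input. A minimal NumPy illustration (sizes and names are assumptions for illustration, not from any particular paper's configuration):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def adapter(h, W_down, W_up):
    # Bottleneck: project down, apply nonlinearity, project up, add residual.
    return h + relu(h @ W_down) @ W_up

d, r = 8, 2                          # hidden size and much smaller bottleneck size
rng = np.random.default_rng(0)
h = rng.random((3, d))               # 3 token representations
W_down = rng.random((d, r)) * 0.1    # only these small matrices are trained
W_up = rng.random((r, d)) * 0.1
out = adapter(h, W_down, W_up)
assert out.shape == h.shape          # adapter preserves the representation shape
```

Because only W_down and W_up are trained (2·d·r parameters instead of the full model), storage per task is tiny, and the residual connection lets a zero-initialized adapter start as an identity function.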
【Summary】
Freeze most of the model and tune a small number of extra parameters for the task; methods include prompt tuning, prefix tuning, and adapter tuning.
All of these methods aim to adapt flexibly to different downstream tasks, achieving precise task-specific adaptation while keeping compute and storage costs under control, so that better performance can be obtained in practice.
MULTITASK PROMPTED TRAINING ENABLES ZERO-SHOT TASK GENERALIZATION
2110.08207.pdf (arxiv.org) — a long set of prompts appears at the end of the paper
1.motivations
Can zero-shot generalization instead be directly induced by explicit multitask learning?
2.proposed solution
develop a system for easily mapping any natural language tasks into a human-readable prompted form
3.evaluation
evaluate on a subset of the datasets from BIG-bench, a recent community-driven benchmark that collects a diverse set of difficult tasks to test the abilities of large language models
4.analysis of the identified problem, idea, evaluation
T0 could be more robust to prompt formulation than GPT-3
T0 is an encoder-decoder model that consumes textual inputs and produces target responses.
5.future directions
improving zero-shot generalization
6.questions left with
not given
FINETUNED LANGUAGE MODELS ARE ZERO-SHOT LEARNERS
2109.01652.pdf (arxiv.org) — the paper itself is short, but the reference list is long
1.motivations
GPT-3’s zero-shot performance is much worse than few-shot performance on tasks such as reading comprehension, question answering, and natural language inference
→explore a simple method to improve the zero-shot performance of large language models, which would expand their reach to a broader audience
improve the ability of language models to respond to NLP instructions
2.proposed solution
by using supervision to teach an LM to perform tasks described via instructions, the LM learns to follow instructions, even for unseen tasks
3.evaluation
group datasets into clusters by task type and hold out each task cluster for evaluation while instruction tuning on all remaining clusters
4.analysis of the identified problem, idea, evaluation
instruction tuning is very effective on tasks naturally verbalized as instructions (e.g., NLI, QA, translation, struct-to-text)
and is less effective on tasks directly formulated as language modeling,
where instructions would be largely redundant (e.g., commonsense reasoning and coreference resolution tasks that are formatted as finishing an incomplete sentence or paragraph)
Further reading
Prefix-Tuning: Optimizing Continuous Prompts for Generation
The Power of Scale for Parameter-Efficient Prompt Tuning
TOWARDS A UNIFIED VIEW OF PARAMETER-EFFICIENT TRANSFER LEARNING