Hung-yi Lee's Machine Learning notes - 2022 HW7 (BERT) Strong Baseline
A fairly relaxed assignment: no major modifications are needed, and the sample code is quite clear.
The task is to implement extractive QA: given a document and a question, find the answer to the question inside the document (guaranteed to be a single contiguous span). We are given a training set, a dev set (essentially a manually drawn training/validation split), and the answers.
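Concretely, each question record carries the fields accessed in the dataset code below; the values here are a hypothetical example:

# A hypothetical record; paragraph_id / answer_start / answer_end / answer_text
# are the field names actually used in QA_Dataset and the dev loop below.
question = {
    "paragraph_id": 42,   # which paragraph contains the answer
    "answer_start": 17,   # answer span, as character offsets into the paragraph text
    "answer_end": 19,
    "answer_text": "新加坡",
}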
Code: https://www.kaggle.com/code/skyrainwind/hw7-bert
Problem analysis
medium: implemented a cosine learning-rate schedule with warm-up (see the scheduler sketch after this list) and tuned doc_stride.
strong: swapped the pretrained model (I used luhua/chinese_pretrain_mrc_roberta_wwm_ext_large, available on Hugging Face) + changed the optimizer to AdamW (which in practice made little difference...) + changed where each question's training window sits inside its paragraph (previously the window was always centered on the answer; now a random window that still contains the answer is drawn).
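For the warm-up + cosine schedule in the medium baseline, transformers ships a ready-made helper. A minimal sketch, assuming model and total_steps are defined as in the training code further below (the lr and warm-up step count are placeholders):

from torch.optim import AdamW
from transformers import get_cosine_schedule_with_warmup

optimizer = AdamW(model.parameters(), lr=1e-4)  # placeholder lr
# LR ramps up linearly for num_warmup_steps, then decays along a cosine curve
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=total_steps
)
# call scheduler.step() once after every optimizer.step()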
Code analysis
The transformers library provides BertForQuestionAnswering and a matching tokenizer; we can use them directly:
model = BertForQuestionAnswering.from_pretrained("luhua/chinese_pretrain_mrc_roberta_wwm_ext_large").to(device)
tokenizer = BertTokenizerFast.from_pretrained("luhua/chinese_pretrain_mrc_roberta_wwm_ext_large")
Here the tokenizer converts text into token IDs, so that the input can later be embedded as vectors.
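For example, a quick sketch using the tokenizer loaded above (the sample string is arbitrary):

enc = tokenizer("机器学习是什么?", add_special_tokens=False)
print(enc.input_ids)         # the token IDs of the string
print(enc.char_to_token(2))  # maps character offset 2 to its token index (used below to locate answers)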
For extractive QA, the input fed to BERT is structured as [CLS] question [SEP] paragraph [SEP]: in short, the question comes first, followed by the document it refers to.
However, we cannot feed the whole document into BERT: BERT is a stack of self-attention layers, and self-attention over n tokens costs on the order of n² vector operations, while the document can be very long. Instead we cut out the short stretch of text containing the answer and train on that. To keep a window boundary from splitting the answer in half, the windows overlap: in the code each window is 150 tokens, and consecutive windows shift right by doc_stride tokens.
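To make the overlap concrete, a small sketch of the resulting window offsets (the lengths are hypothetical; the real values come from QA_Dataset below):

paragraph_len, max_paragraph_len, doc_stride = 400, 150, 32  # hypothetical lengths
windows = [(s, min(s + max_paragraph_len, paragraph_len))
           for s in range(0, paragraph_len, doc_stride)]
print(windows[:4])  # [(0, 150), (32, 182), (64, 214), (96, 246)] -- each window overlaps the next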
import random
import torch
from torch.utils.data import Dataset

class QA_Dataset(Dataset):
    def __init__(self, split, questions, tokenized_questions, tokenized_paragraphs):
        self.split = split
        self.questions = questions
        self.tokenized_questions = tokenized_questions
        self.tokenized_paragraphs = tokenized_paragraphs
        self.max_question_len = 40
        self.max_paragraph_len = 150

        ##### TODO: Change value of doc_stride #####
        self.doc_stride = 32

        # Input sequence length = [CLS] + question + [SEP] + paragraph + [SEP]
        self.max_seq_len = 1 + self.max_question_len + 1 + self.max_paragraph_len + 1

    def __len__(self):
        return len(self.questions)

    def __getitem__(self, idx):
        question = self.questions[idx]
        tokenized_question = self.tokenized_questions[idx]
        tokenized_paragraph = self.tokenized_paragraphs[question["paragraph_id"]]

        ##### TODO: Preprocessing #####
        # Hint: How to prevent model from learning something it should not learn
        if self.split == "train":
            # Convert answer's start/end positions in paragraph_text to start/end positions in tokenized_paragraph
            answer_start_token = tokenized_paragraph.char_to_token(question["answer_start"])
            answer_end_token = tokenized_paragraph.char_to_token(question["answer_end"])

            # A single window is obtained by slicing the portion of paragraph containing the answer.
            # Old version: always center the window on the answer
            # mid = (answer_start_token + answer_end_token) // 2
            # paragraph_start = max(0, min(mid - self.max_paragraph_len // 2, len(tokenized_paragraph) - self.max_paragraph_len))
            # paragraph_end = paragraph_start + self.max_paragraph_len

            # New version: draw a random window start among all starts whose window still contains the answer
            start_min = max(0, answer_end_token - self.max_paragraph_len + 1)
            start_max = min(answer_start_token, len(tokenized_paragraph) - self.max_paragraph_len)
            start_max = max(start_min, start_max)
            # randint is inclusive on both ends; an upper bound of start_max + 1 could push the answer out of the window
            paragraph_start = random.randint(start_min, start_max)
            paragraph_end = paragraph_start + self.max_paragraph_len

            # Slice question/paragraph and add special tokens (101: CLS, 102: SEP)
            input_ids_question = [101] + tokenized_question.ids[:self.max_question_len] + [102]
            input_ids_paragraph = tokenized_paragraph.ids[paragraph_start:paragraph_end] + [102]

            # Convert answer's start/end positions in tokenized_paragraph to start/end positions in the window
            answer_start_token += len(input_ids_question) - paragraph_start
            answer_end_token += len(input_ids_question) - paragraph_start

            # Pad sequence and obtain inputs to model
            input_ids, token_type_ids, attention_mask = self.padding(input_ids_question, input_ids_paragraph)
            return torch.tensor(input_ids), torch.tensor(token_type_ids), torch.tensor(attention_mask), answer_start_token, answer_end_token

        # Validation/Testing
        else:
            input_ids_list, token_type_ids_list, attention_mask_list = [], [], []

            # Paragraph is split into several windows, each with start positions separated by step "doc_stride"
            for i in range(0, len(tokenized_paragraph), self.doc_stride):
                # Slice question/paragraph and add special tokens (101: CLS, 102: SEP)
                input_ids_question = [101] + tokenized_question.ids[:self.max_question_len] + [102]
                input_ids_paragraph = tokenized_paragraph.ids[i : i + self.max_paragraph_len] + [102]

                # Pad sequence and obtain inputs to model
                input_ids, token_type_ids, attention_mask = self.padding(input_ids_question, input_ids_paragraph)

                input_ids_list.append(input_ids)
                token_type_ids_list.append(token_type_ids)
                attention_mask_list.append(attention_mask)

            return torch.tensor(input_ids_list), torch.tensor(token_type_ids_list), torch.tensor(attention_mask_list)

    def padding(self, input_ids_question, input_ids_paragraph):
        # Pad zeros if sequence length is shorter than max_seq_len
        padding_len = self.max_seq_len - len(input_ids_question) - len(input_ids_paragraph)
        # Indices of input sequence tokens in the vocabulary
        input_ids = input_ids_question + input_ids_paragraph + [0] * padding_len
        # Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]
        token_type_ids = [0] * len(input_ids_question) + [1] * len(input_ids_paragraph) + [0] * padding_len
        # Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]
        attention_mask = [1] * (len(input_ids_question) + len(input_ids_paragraph)) + [0] * padding_len
        return input_ids, token_type_ids, attention_mask
train_set = QA_Dataset("train", train_questions, train_questions_tokenized, train_paragraphs_tokenized)
dev_set = QA_Dataset("dev", dev_questions, dev_questions_tokenized, dev_paragraphs_tokenized)
test_set = QA_Dataset("test", test_questions, test_questions_tokenized, test_paragraphs_tokenized)
train_batch_size = 16
# Note: Do NOT change batch size of dev_loader / test_loader !
# Although batch size=1, it is actually a batch consisting of several windows from the same QA pair
train_loader = DataLoader(train_set, batch_size=train_batch_size, shuffle=True, pin_memory=True)
dev_loader = DataLoader(dev_set, batch_size=1, shuffle=False, pin_memory=True)
test_loader = DataLoader(test_set, batch_size=1, shuffle=False, pin_memory=True)
Preprocessing also prepares the three inputs that BERT's interface expects:
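A minimal sketch of what padding() returns for a sequence "[CLS] q1 q2 [SEP] p1 p2 p3 [SEP] <pad>" (all token IDs here are hypothetical except 101/102/0):

input_ids      = [101, 8001, 8002, 102, 9001, 9002, 9003, 102, 0]  # vocabulary indices
token_type_ids = [0,   0,    0,    0,   1,    1,    1,    1,   0]  # 0 = question segment, 1 = paragraph segment
attention_mask = [1,   1,    1,    1,   1,    1,    1,    1,   0]  # 0 = do not attend (padding)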
The training loop is mostly unchanged; gradient accumulation is used: gradients are accumulated over acc_steps mini-batches before each optimizer.step(), simulating an effective batch size of 16 × 4 = 64.
num_epoch = 1
validation = True
logging_step = 100
total_steps = num_epoch * len(train_loader)
acc_steps = 4
learning_rate = 1.5e-4 / acc_steps

from torch.optim import AdamW  # transformers' AdamW also works on older versions
from transformers import get_linear_schedule_with_warmup

optimizer = AdamW(model.parameters(), lr=learning_rate)
# The scheduler is stepped once per optimizer update, hence total_steps // acc_steps training steps
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=100, num_training_steps=total_steps // acc_steps)

if fp16_training:
    model, optimizer, train_loader = accelerator.prepare(model, optimizer, train_loader)
model.train()

print("Start Training ...")

for epoch in range(num_epoch):
    step = 1
    train_loss = train_acc = 0

    optimizer.zero_grad()

    for data in tqdm(train_loader):
        # Load all data into GPU
        data = [i.to(device) for i in data]

        # Model inputs: input_ids, token_type_ids, attention_mask, start_positions, end_positions (Note: only "input_ids" is mandatory)
        # Model outputs: start_logits, end_logits, loss (returned when start_positions/end_positions are provided)
        output = model(input_ids=data[0], token_type_ids=data[1], attention_mask=data[2],
                       start_positions=data[3], end_positions=data[4])

        # Choose the most probable start position / end position
        start_index = torch.argmax(output.start_logits, dim=1)
        end_index = torch.argmax(output.end_logits, dim=1)

        # Prediction is correct only if both start_index and end_index are correct
        train_acc += ((start_index == data[3]) & (end_index == data[4])).float().mean()
        train_loss += output.loss

        if fp16_training:
            accelerator.backward(output.loss)
        else:
            output.loss.backward()

        # Gradient accumulation: update weights and the LR schedule only every acc_steps batches
        step += 1
        if step % acc_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
            scheduler.step()

        ##### TODO: Apply linear learning rate decay #####

        # Print training loss and accuracy over past logging step
        if step % logging_step == 0:
            lr = optimizer.state_dict()['param_groups'][0]['lr']
            print(f"Epoch {epoch + 1} | Step {step} | loss = {train_loss.item() / logging_step:.3f}, acc = {train_acc / logging_step:.3f}, lr={lr}")
            train_loss = train_acc = 0

    if validation:
        print("Evaluating Dev Set ...")
        model.eval()
        with torch.no_grad():
            dev_acc = 0
            for i, data in enumerate(tqdm(dev_loader)):
                output = model(input_ids=data[0].squeeze(dim=0).to(device),
                               token_type_ids=data[1].squeeze(dim=0).to(device),
                               attention_mask=data[2].squeeze(dim=0).to(device))
                # Prediction is correct only if the answer text exactly matches
                dev_acc += evaluate(data, output) == dev_questions[i]["answer_text"]
            print(f"Validation | Epoch {epoch + 1} | acc = {dev_acc / len(dev_loader):.3f}")
        model.train()
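The evaluate call above selects one answer from all the windows of a dev/test sample; it is defined in an earlier cell of the notebook. As a reference, a minimal sketch of the idea following the course's sample code (assuming tokenizer is in scope): pick the most probable start/end in each window, score the window by the sum of the two logits, and decode the best window's span.

def evaluate(data, output):
    # data[0] has shape [1, num_windows, seq_len]; output holds logits for every window of one sample
    answer = ""
    max_prob = float("-inf")
    num_of_windows = data[0].shape[1]
    for k in range(num_of_windows):
        # Most probable start/end inside window k, scored by the sum of the two logits
        start_prob, start_index = torch.max(output.start_logits[k], dim=0)
        end_prob, end_index = torch.max(output.end_logits[k], dim=0)
        prob = start_prob + end_prob
        if prob > max_prob:
            max_prob = prob
            answer = tokenizer.decode(data[0][0][k][start_index : end_index + 1])
    # Drop the spaces the tokenizer inserts between Chinese tokens
    return answer.replace(" ", "")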
# Save a model and its configuration file to the directory 「saved_model」
# i.e. there are two files under the directory 「saved_model」: 「pytorch_model.bin」 and 「config.json」
# Saved model can be re-loaded using 「model = BertForQuestionAnswering.from_pretrained("saved_model")」
print("Saving Model ...")
model_save_dir = "saved_model"
model.save_pretrained(model_save_dir)
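As a quick sanity check, the saved model can be reloaded and run on a single (question, paragraph) pair. A minimal sketch; the example pair is made up, and the tokenizer is reloaded from the hub since only the model was saved:

import torch
from transformers import BertForQuestionAnswering, BertTokenizerFast

model = BertForQuestionAnswering.from_pretrained("saved_model").eval()
tokenizer = BertTokenizerFast.from_pretrained("luhua/chinese_pretrain_mrc_roberta_wwm_ext_large")

question, paragraph = "李宏毅在哪里任教?", "李宏毅任教于台湾大学。"  # hypothetical pair
inputs = tokenizer(question, paragraph, return_tensors="pt")
with torch.no_grad():
    output = model(**inputs)
start = torch.argmax(output.start_logits, dim=1).item()
end = torch.argmax(output.end_logits, dim=1).item()
print(tokenizer.decode(inputs.input_ids[0][start : end + 1]).replace(" ", ""))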