axolotl Mistral fine-tuning
command & progress
CUDA_VISIBLE_DEVICES="0,1,2,3" python -m axolotl.cli.preprocess examples/mistral/lora-mps.yml
accelerate launch -m axolotl.cli.train examples/mistral/lora-mps.yml
dataset
daze-unlv/medmcqa_axolotl
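As a quick sanity check before preprocessing, the dataset can be pulled and inspected with the Hugging Face datasets library. A minimal sketch, assuming daze-unlv/medmcqa_axolotl is hosted on the Hugging Face Hub and has a train split:

# Sketch: inspect the fine-tuning data before running axolotl preprocessing.
# Assumes `pip install datasets` and that the dataset lives on the HF Hub.
from datasets import load_dataset

ds = load_dataset("daze-unlv/medmcqa_axolotl")
print(ds)              # splits and row counts
print(ds["train"][0])  # one raw example, to confirm the expected fields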
note
1. Before running the Mistral fine-tuning, run pip install --upgrade flash-attn to update flash-attn to 2.5.6 (a version check follows this list).
2. Find the line control.should_training_stop = True and change True to False; otherwise the run is stopped early as soon as the loss runs high (see the sketch after this list).
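For note 1, it is worth confirming that the upgraded wheel is actually the one Python imports. A minimal check, assuming flash-attn exposes __version__ (recent releases do):

# Verify the flash-attn upgrade from note 1 took effect.
import flash_attn

print(flash_attn.__version__)  # expect 2.5.6 after the upgrade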
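For note 2, the flag in question belongs to the HuggingFace TrainerCallback protocol: a callback that sets control.should_training_stop = True makes the Trainer end the run after the current step, which is how a loss watchdog aborts on high loss. The sketch below is illustrative, not axolotl's actual callback; the class name, log key, and threshold are assumptions:

# Illustrative loss-watchdog pattern behind note 2. Only the
# control.should_training_stop flag comes from the original note;
# everything else here is an assumed, simplified reconstruction.
from transformers import TrainerCallback

class LossWatchdog(TrainerCallback):
    def __init__(self, threshold=10.0):
        self.threshold = threshold  # assumed cutoff for "high loss"

    def on_log(self, args, state, control, logs=None, **kwargs):
        loss = (logs or {}).get("loss")
        if loss is not None and loss > self.threshold:
            # Note 2: rewriting True as False here disables the early stop,
            # so a transient loss spike no longer kills the run.
            control.should_training_stop = True
        return control

Disabling the watchdog trades automatic protection against divergence for an uninterrupted run, so keep an eye on the loss curve if you flip the flag.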