Debugging the ChatGLM4 code

import torch
# modeling_chatglm.py / configuration_chatglm.py are the files shipped in
# the ChatGLM4 Hugging Face repository, placed next to this script
from modeling_chatglm import ChatGLMForConditionalGeneration
from configuration_chatglm import ChatGLMConfig

# Build a deliberately tiny model (2 layers, hidden_size=4) so a forward
# pass is fast enough to step through in a debugger
model = ChatGLMForConditionalGeneration(
    config=ChatGLMConfig(
        num_layers=2,
        hidden_size=4,
        rope_ratio=500,
        original_rope=True,
        padded_vocab_size=151552,
        post_layer_norm=True,
        rmsnorm=True,
        seq_length=131072,
        use_cache=True,
        torch_dtype="bfloat16",
        tie_word_embeddings=False,
        eos_token_id=[151329, 151336, 151338],
        pad_token_id=151329,
    )
)

# Run a dummy forward pass on a batch of three token ids
out = model(torch.tensor([[2, 3, 5]]))
print(out)





Summary: as you can see, the code is no different from ChatGLM3; only the parameter values have changed.
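One quick way to check a claim like this is to diff the two generations' config dicts and list only the fields whose values changed. This is a minimal sketch; the numeric values below are illustrative placeholders, not the real ChatGLM3/ChatGLM4 configs:

```python
# Illustrative config fragments (NOT the actual ChatGLM3/ChatGLM4 values)
glm3_cfg = {"num_layers": 28, "seq_length": 8192, "rope_ratio": 1}
glm4_cfg = {"num_layers": 40, "seq_length": 131072, "rope_ratio": 500}

def diff_configs(a: dict, b: dict) -> dict:
    """Return {key: (old, new)} for every key whose value differs."""
    keys = set(a) | set(b)
    return {k: (a.get(k), b.get(k)) for k in keys if a.get(k) != b.get(k)}

changed = diff_configs(glm3_cfg, glm4_cfg)
for key, (old, new) in sorted(changed.items()):
    print(f"{key}: {old} -> {new}")
```

In practice you could load the two real `config.json` files with `json.load` and pass them to `diff_configs` to see exactly which hyperparameters moved between generations.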

posted on 2024-06-05 21:11 by 张博的博客