BUG - 'Tokenizer' object has no attribute 'oov_token'

While using the Keras package for an NLP task, the following error was raised:

/lib/python3.5/dist-packages/keras/preprocessing/text.py",
line 302, in texts_to_sequences_generator elif self.oov_token is not None:
AttributeError: 'Tokenizer' object has no attribute 'oov_token'

The line of code that triggers the error:

train_sequences = tokenizer.texts_to_sequences(new_training_list)

Stepping into the Keras source from texts_to_sequences() shows that it calls the texts_to_sequences_generator() method.

That method reads self.oov_token later on, but the attribute was never set on this Tokenizer instance.

Setting the attribute manually fixes the problem.

Before calling texts_to_sequences(), add:

tokenizer.oov_token = None
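The fix above can be made defensive with a hasattr() check, so it patches old Tokenizer objects (e.g. ones pickled by an earlier Keras version that predates oov_token) and is a no-op on newer ones. This is a minimal sketch: LegacyTokenizer is a hypothetical stand-in that mimics the failing check in keras/preprocessing/text.py, not the real Keras class.

```python
class LegacyTokenizer:
    """Stand-in for a Tokenizer created before oov_token existed."""

    def __init__(self):
        self.word_index = {"hello": 1, "world": 2}

    def texts_to_sequences(self, texts):
        # Mirrors the check at the failing line in keras/preprocessing/text.py;
        # without the patch below this raises AttributeError.
        if self.oov_token is not None:
            pass  # real Keras would map unknown words to the OOV index here
        return [[self.word_index[w] for w in t.split() if w in self.word_index]
                for t in texts]


tokenizer = LegacyTokenizer()

# The manual patch from this post, guarded so it only runs when needed:
if not hasattr(tokenizer, "oov_token"):
    tokenizer.oov_token = None

print(tokenizer.texts_to_sequences(["hello world"]))  # [[1, 2]]
```

The same two-line hasattr() guard works on a real Keras Tokenizer instance right before tokenizer.texts_to_sequences(new_training_list).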

OK. Fine.

posted @ 2019-01-30 11:33 闲不住的小李