【458】Keras Text Vectorization

Notes on the relevant classes and methods (a minimal usage sketch follows this list):

  • from keras.preprocessing.text import Tokenizer
  • Tokenizer: a text tokenization utility class. It can vectorize a text corpus in two ways: by turning each text into a sequence of integers (each integer being the index of a token in a dictionary), or by turning it into a vector where the coefficient for each token can be binary, a word count, a TF-IDF weight, and so on.
    • num_words: the maximum number of words to keep, based on word frequency. Only the most common num_words words will be kept.
  • tokenizer.fit_on_texts(): updates the internal vocabulary based on a list of texts.
  • tokenizer.texts_to_sequences(): transforms each text in texts to a sequence of integers. Only the top num_words most frequent words are taken into account, and only words known by the tokenizer are used.
  • tokenizer.word_index: a dict mapping {word: index}.
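
To make the calls above concrete, here is a minimal sketch; the two-sentence toy corpus and the num_words value are made up for illustration, and the commented outputs are what this example is expected to print:

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

samples = ['The cat sat on the mat.', 'The dog ate my homework.']  # toy corpus

tokenizer = Tokenizer(num_words=1000)              # keep only the 1,000 most frequent words
tokenizer.fit_on_texts(samples)                    # build the vocabulary from the texts
sequences = tokenizer.texts_to_sequences(samples)  # texts -> lists of word indices
word_index = tokenizer.word_index                  # {'the': 1, 'cat': 2, 'sat': 3, ...}

print(sequences)     # [[1, 2, 3, 4, 1, 5], [1, 6, 7, 8, 9]]
# pad (or truncate) every sequence to the same length; pads with 0 at the front by default
data = pad_sequences(sequences, maxlen=8)
print(data.shape)    # (2, 8)

Note that the tokenizer lowercases the texts and strips punctuation by default, which is why both occurrences of "The"/"the" map to index 1.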
import os
imdb_dir = r"D:\Deep Learning\Data\IMDB\aclImdb\aclImdb"
train_dir = os.path.join(imdb_dir, 'train')
 
labels = []
texts = []
 
# read every .txt review under train/neg and train/pos,
# labelling negatives 0 and positives 1
for label_type in ['neg', 'pos']:
    dir_name = os.path.join(train_dir, label_type)
    for fname in os.listdir(dir_name):
        if fname.endswith('.txt'):
            with open(os.path.join(dir_name, fname), encoding='UTF-8') as f:
                texts.append(f.read())
            labels.append(0 if label_type == 'neg' else 1)
 
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
import numpy as np
 
maxlen = 100                  # cut texts off after 100 words
training_samples = 200        # number of samples to train on
validation_samples = 10000    # number of samples to validate on
max_words = 10000             # consider only the top 10,000 words
 
"""
Text tokenization utility class.
 
This class allows to vectorize a text corpus, by turning each
text into either a sequence of integers (each integer being the index
of a token in a dictionary) or into a vector where the coefficient
for each token could be binary, based on word count, based on tf-idf...
 
# Arguments
    num_words: the maximum number of words to keep, based
        on word frequency. Only the most common `num_words` words will
        be kept.
"""
tokenizer = Tokenizer(num_words=max_words)
# Updates internal vocabulary based on a list of texts.
tokenizer.fit_on_texts(texts)
# Transforms each text in texts to a sequence of integers.
# Only top "num_words" most frequent words will be taken into account.
# Only words known by the tokenizer will be taken into account.
sequences = tokenizer.texts_to_sequences(texts)
# dict {word: index}
word_index = tokenizer.word_index
 
print('Found %s unique tokens.' % len(word_index))
 
data = pad_sequences(sequences, maxlen=maxlen)
print('Shape of data tensor:', data.shape)