NLP Basics Lab 8: TextRNN



    TextRNN

    import tensorflow as tf
    import numpy as np

    tf.reset_default_graph()

    sentences = ["i like dog", "i love coffee", "i hate milk"]

    word_list = " ".join(sentences).split()
    word_list = list(set(word_list))
    word_dict = {w: i for i, w in enumerate(word_list)}
    number_dict = {i: w for i, w in enumerate(word_list)}
    n_class = len(word_dict)

    # TextRNN parameters
    n_step = 2    # number of input words per sample (= number of RNN steps)
    n_hidden = 5  # number of hidden units in one cell

    def make_batch(sentences):
        input_batch = []
        target_batch = []
        for sen in sentences:
            word = sen.split()
            input = [word_dict[n] for n in word[:-1]]    # all words except the last
            target = word_dict[word[-1]]                 # the last word is the label
            input_batch.append(np.eye(n_class)[input])   # one-hot encode the inputs
            target_batch.append(np.eye(n_class)[target])
        return input_batch, target_batch

    # Model
    X = tf.placeholder(tf.float32, [None, n_step, n_class])  # [batch_size, n_step, n_class]
    Y = tf.placeholder(tf.float32, [None, n_class])          # [batch_size, n_class]

    W = tf.Variable(tf.random_normal([n_hidden, n_class]))
    b = tf.Variable(tf.random_normal([n_class]))

    cell = tf.nn.rnn_cell.BasicRNNCell(n_hidden)
    outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)  # outputs : [batch_size, n_step, n_hidden]

    outputs = tf.transpose(outputs, [1, 0, 2])  # [n_step, batch_size, n_hidden]
    outputs = outputs[-1]                       # [batch_size, n_hidden] : output at the last step
    model = tf.matmul(outputs, W) + b           # model : [batch_size, n_class]

    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=model, labels=Y))
    optimizer = tf.train.AdamOptimizer(0.001).minimize(cost)
    prediction = tf.cast(tf.argmax(model, 1), tf.int32)

    # Training
    init = tf.global_variables_initializer()
    sess = tf.Session()
    sess.run(init)

    input_batch, target_batch = make_batch(sentences)

    for epoch in range(5000):
        _, loss = sess.run([optimizer, cost], feed_dict={X: input_batch, Y: target_batch})
        if (epoch + 1) % 1000 == 0:
            print('Epoch:', '%04d' % (epoch + 1), 'cost =', '{:.6f}'.format(loss))

    # Prediction: feed the first two words of each sentence and predict the third
    predict = sess.run([prediction], feed_dict={X: input_batch})
    print([sen.split()[:2] for sen in sentences], '->', [number_dict[n] for n in predict[0]])

    Bi-LSTM

    import tensorflow as tf
    import numpy as np

    tf.reset_default_graph()

    sentence = (
        'Lorem ipsum dolor sit amet consectetur adipisicing elit '
        'sed do eiusmod tempor incididunt ut labore et dolore magna '
        'aliqua Ut enim ad minim veniam quis nostrud exercitation'
    )

    word_dict = {w: i for i, w in enumerate(list(set(sentence.split())))}
    number_dict = {i: w for i, w in enumerate(list(set(sentence.split())))}
    n_class = len(word_dict)
    n_step = len(sentence.split())
    n_hidden = 5

    def make_batch(sentence):
        input_batch = []
        target_batch = []
        words = sentence.split()
        for i, word in enumerate(words[:-1]):
            input = [word_dict[n] for n in words[:(i + 1)]]  # words seen so far
            input = input + [0] * (n_step - len(input))      # pad to n_step
            target = word_dict[words[i + 1]]                 # next word is the label
            input_batch.append(np.eye(n_class)[input])
            target_batch.append(np.eye(n_class)[target])
        return input_batch, target_batch

    # Bi-LSTM Model
    X = tf.placeholder(tf.float32, [None, n_step, n_class])
    Y = tf.placeholder(tf.float32, [None, n_class])

    W = tf.Variable(tf.random_normal([n_hidden * 2, n_class]))  # * 2 : forward and backward outputs are concatenated
    b = tf.Variable(tf.random_normal([n_class]))

    lstm_fw_cell = tf.nn.rnn_cell.LSTMCell(n_hidden)
    lstm_bw_cell = tf.nn.rnn_cell.LSTMCell(n_hidden)

    # outputs : a tuple (forward, backward), each [batch_size, n_step, n_hidden]
    outputs, _ = tf.nn.bidirectional_dynamic_rnn(lstm_fw_cell, lstm_bw_cell, X, dtype=tf.float32)

    outputs = tf.concat([outputs[0], outputs[1]], 2)  # outputs[0] : lstm_fw, outputs[1] : lstm_bw
    outputs = tf.transpose(outputs, [1, 0, 2])        # [n_step, batch_size, n_hidden * 2]
    outputs = outputs[-1]                             # [batch_size, n_hidden * 2]
    model = tf.matmul(outputs, W) + b

    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=model, labels=Y))
    optimizer = tf.train.AdamOptimizer(0.001).minimize(cost)
    prediction = tf.cast(tf.argmax(model, 1), tf.int32)

    # Training
    init = tf.global_variables_initializer()
    sess = tf.Session()
    sess.run(init)

    input_batch, target_batch = make_batch(sentence)

    for epoch in range(10000):
        _, loss = sess.run([optimizer, cost], feed_dict={X: input_batch, Y: target_batch})
        if (epoch + 1) % 1000 == 0:
            print('Epoch:', '%04d' % (epoch + 1), 'cost =', '{:.6f}'.format(loss))

    predict = sess.run([prediction], feed_dict={X: input_batch})
    print(sentence)
    print([number_dict[n] for n in predict[0]])

    Although TextCNN performs well on many tasks, its biggest limitation is the fixed filter_size receptive field: it cannot model longer-range sequence information, and tuning the filter_size hyperparameter is tedious. A CNN is essentially doing feature extraction on text, whereas in NLP the recurrent neural network (RNN) is more commonly used because it captures context better. For text classification in particular, a bi-directional RNN (in practice a bi-directional LSTM) can in some sense be understood as capturing variable-length, bi-directional "n-gram" information.

    RNNs have become a standard building block in NLP and are used in many scenarios such as sequence labeling, named entity recognition, and seq2seq models. The paper Recurrent Neural Network for Text Classification with Multi-Task Learning describes how to design RNNs for classification; the figure below shows the LSTM network structure used here, where the output at the last word is fed directly into a fully connected layer with a softmax output.

    [Figure: LSTM text-classification structure, last hidden state → fully connected softmax]
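
    As a minimal illustration of this "last-step output → fully connected softmax" setup for classification, here is a sketch in the same TF 1.x style as the code above. It is not the paper's implementation; the sequence length, embedding size, number of classes and random data below are all assumptions for illustration.

    import tensorflow as tf
    import numpy as np

    tf.reset_default_graph()

    seq_len, emb_dim, n_hidden, n_classes = 20, 32, 64, 4  # illustrative sizes

    X = tf.placeholder(tf.float32, [None, seq_len, emb_dim])  # pre-embedded word sequence
    Y = tf.placeholder(tf.int32, [None])                      # class labels

    cell = tf.nn.rnn_cell.LSTMCell(n_hidden)
    outputs, _ = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)  # [batch, seq_len, n_hidden]

    last = outputs[:, -1, :]                   # output at the last word
    logits = tf.layers.dense(last, n_classes)  # fully connected layer, softmax is folded into the loss
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=Y, logits=logits))
    train_op = tf.train.AdamOptimizer(0.001).minimize(loss)
    pred = tf.argmax(logits, 1)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        x_dummy = np.random.randn(8, seq_len, emb_dim).astype(np.float32)
        y_dummy = np.random.randint(0, n_classes, size=8)
        _, l = sess.run([train_op, loss], feed_dict={X: x_dummy, Y: y_dummy})
        print('one training step, loss =', l)

    Compared with the next-word prediction code above, the only real differences are that the labels are class ids rather than one-hot word vectors, and sparse softmax cross-entropy is used accordingly.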

    TextRNN + Attention

    Although CNNs and RNNs work well on text classification tasks, they share one weakness: they are not intuitive and their interpretability is poor, which is felt especially keenly when analyzing bad cases. The attention mechanism, by contrast, is a common way of modeling long-range dependencies in NLP; it gives a very intuitive picture of how much each word contributes to the result, and it has essentially become standard in seq2seq models. Text classification can, in a sense, be viewed as a special kind of seq2seq problem, so it is natural to bring the attention mechanism in, and the academic literature indeed has similar approaches.


    Introduction to the attention mechanism:

    A thorough introduction to attention would take a short article of its own; interested readers can refer to the 2014 paper NEURAL MACHINE TRANSLATION BY JOINTLY LEARNING TO ALIGN AND TRANSLATE.

    Take machine translation as a simple example. In the figure below, x_j is a word in the source language and y_i is a word in the target language; the task is to produce the target sequence given the source sequence. Generating y_i depends on the previous target word y_{i-1} and on the representations h_j of the source words (the annotations produced by a bi-directional RNN), and each source word carries a different weight. For instance, if the source (Chinese) is "我 / 是 / 中国人" and the target is "i / am / Chinese", then producing "Chinese" clearly depends on "中国人" and has almost nothing to do with "我 / 是". In the formula shown in the figure, α_ij is the contribution of the j-th Chinese word when translating the i-th English word, i.e. the attention weight; when translating "Chinese", the attention on "中国人" is obviously very large.

    [Figure: attention-based encoder-decoder, with context vector c_i = Σ_j α_ij h_j]

    The key point of attention is that the context used to translate each target word (or, in our case, to predict the category of a product title) is different, which is clearly a more reasonable design.
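
    To make the computation concrete, here is a minimal NumPy sketch (not taken from the paper) of the additive attention score e_ij = v^T tanh(W_a s_{i-1} + U_a h_j), the weights α_ij obtained by a softmax over source positions, and the context vector c_i = Σ_j α_ij h_j. All names, shapes, and the random inputs are illustrative assumptions.

    import numpy as np

    np.random.seed(0)

    n_src, n_hidden, n_att = 3, 8, 6  # 3 source words ("我", "是", "中国人"), illustrative sizes

    # Hypothetical encoder annotations h_j (one per source word) and decoder state s_{i-1}
    H = np.random.randn(n_src, n_hidden)  # [n_src, n_hidden]
    s_prev = np.random.randn(n_hidden)    # [n_hidden]

    # Additive (Bahdanau-style) score: e_ij = v^T tanh(W_a s_{i-1} + U_a h_j)
    W_a = np.random.randn(n_att, n_hidden)
    U_a = np.random.randn(n_att, n_hidden)
    v = np.random.randn(n_att)

    e = np.array([v @ np.tanh(W_a @ s_prev + U_a @ h_j) for h_j in H])  # [n_src]

    # Attention weights: softmax over the source positions
    alpha = np.exp(e - e.max())
    alpha = alpha / alpha.sum()

    # Context vector used when predicting the i-th target word
    c = alpha @ H  # [n_hidden]

    print('attention weights alpha_ij:', alpha)
    print('context vector shape:', c.shape)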

    TextRNN + Attention model:

    We follow the paper Hierarchical Attention Networks for Document Classification; the figure below shows the network structure. On one hand the hierarchical architecture preserves document structure, and on the other hand attention is applied at both the word level and the sentence level. For the Taobao product-title scenario, only the word-level attention layer is needed.

    The biggest benefit of adding attention is that the importance of each sentence and each word for the predicted class can be read off directly.

    [Figure: Hierarchical Attention Network structure, with word-level and sentence-level attention]
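
    The following is a sketch of the word-level attention layer described in the HAN paper (u_it = tanh(W_w h_it + b_w), α_it = softmax(u_it^T u_w), s = Σ_t α_it h_it), again in NumPy; the per-word bi-RNN outputs, parameter names, and shapes below are hypothetical, purely for illustration.

    import numpy as np

    np.random.seed(0)

    n_words, n_hidden = 6, 10  # illustrative: 6 words in a title, bi-RNN output size 10

    H = np.random.randn(n_words, n_hidden)  # hypothetical per-word bi-RNN outputs h_it

    # Word-level attention parameters (HAN): one-layer MLP + word context vector u_w
    W_w = np.random.randn(n_hidden, n_hidden)
    b_w = np.random.randn(n_hidden)
    u_w = np.random.randn(n_hidden)

    u = np.tanh(H @ W_w + b_w)  # u_it = tanh(W_w h_it + b_w), shape [n_words, n_hidden]
    scores = u @ u_w            # u_it^T u_w, shape [n_words]

    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()  # α_it : per-word importance weights

    s = alpha @ H  # sentence (title) vector fed to the softmax classifier

    print('word attention weights:', np.round(alpha, 3))
    print('sentence vector shape:', s.shape)

    The printed α_it values are exactly the per-word importances mentioned above, which is what makes the model interpretable.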

    References

    Paper: https://arxiv.org/abs/1605.05101v1

    Paper walkthrough: https://blog.csdn.net/sinat_33741547/article/details/84838877

    Concept overview: https://www.jianshu.com/p/a846c311d3ac

    Illustrated LSTM and GRU: https://blog.csdn.net/IOT_victor/article/details/88934316
