TextRNN uses a two-layer RNN. Basic structure: embedding layer -> Bi-LSTM layer -> concat output -> FC layer -> softmax layer
from keras.models import Model
from keras.layers import Input, Embedding, LSTM, Dense, Dropout, Flatten, concatenate

# X is the input vector after the embedding layer
inputs = Input(shape=(100,))                            # word-index sequence; length 100 is an assumed value
X = Embedding(input_dim=5000, output_dim=256)(inputs)   # embedding layer; vocabulary size 5000 is assumed
lstm_1 = LSTM(100, return_sequences=True)(X)            # first LSTM layer, returns the full sequence
lstm_2 = LSTM(100, return_sequences=True)(lstm_1)       # second LSTM layer stacked on the first
X = concatenate([lstm_1, lstm_2])                       # concatenate the outputs of both LSTM layers
X = Flatten()(X)                                        # flatten so the softmax gives one prediction per sample (added)
X = Dropout(0.5)(X)
X = Dense(1024, activation="relu")(X)                   # FC layer
X = Dropout(0.5)(X)
Y = Dense(2, activation="softmax")(X)                   # 2-class softmax output
model = Model(inputs, Y)
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=['accuracy'])

RCNN: I searched for a long time, but what I found online was mostly theory with no code, and I didn't fully understand the basic structure, so I did not write the code myself (see the sketch below). Basic structure: recurrent CNN -> max-pooling layer -> softmax layer
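Based on the RCNN paper linked below, a minimal Keras sketch of that structure might look like the following. This is only an approximation: the Bidirectional SimpleRNN stands in for the paper's left/right context recursion, and the sequence length (100), vocabulary size (5000), embedding size (256) and hidden sizes are assumed values, not taken from the post.

from keras.models import Model
from keras.layers import (Input, Embedding, SimpleRNN, Bidirectional, Dense,
                          concatenate, GlobalMaxPooling1D)

inputs = Input(shape=(100,))                                      # word-index sequence (length assumed)
emb = Embedding(input_dim=5000, output_dim=256)(inputs)           # word embeddings e(w_i) (sizes assumed)
ctx = Bidirectional(SimpleRNN(100, return_sequences=True))(emb)   # recurrent left/right context for each word
x = concatenate([ctx, emb])                                       # [context; word embedding] at each position
x = Dense(128, activation="tanh")(x)                              # latent semantic vector, applied per timestep
x = GlobalMaxPooling1D()(x)                                       # max-pooling layer over the time dimension
outputs = Dense(2, activation="softmax")(x)                       # softmax layer
model = Model(inputs, outputs)
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=['accuracy'])

The max-pooling over timesteps is what distinguishes RCNN from the TextRNN above: each position's [context; embedding] feature is projected with tanh, and the element-wise maximum over the whole sequence feeds the softmax.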
References:
RCNN paper: http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=F2929368FEDF4A9A7E495DC2A3137D19?doi=10.1.1.822.3091&rep=rep1&type=pdf
RNN paper walkthrough (CSDN): https://blog.csdn.net/linchuhai/article/details/86985582
TextRNN reference (CSDN): https://blog.csdn.net/Torero_lch/article/details/82588732