Hi Anish,
Wonderful post, by the way. I was working on a similar set of NLP tasks, and fake news detection was one of them. Initially I tried scikit-learn models, Conv1D, Conv2D, and RNNs, and I too ended up with an LSTM. In every model I saw a steady increase in validation accuracy.
But the LSTM itself was not performing well at first. After using your configuration, I mean 60 LSTM units, pooling, then a fully connected layer, I was able to get close to 93% validation accuracy, even though my pre-processing was different.
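For reference, this is roughly the model I ended up with. The vocabulary size, embedding dimension, 64-unit dense layer, and the choice of global max pooling are placeholders from my own experiments, not taken from your post, so treat it as a sketch of my setup rather than your exact configuration:

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Embedding, LSTM, GlobalMaxPooling1D, Dense

VOCAB_SIZE = 20000      # placeholder vocabulary size from my own tokenizer
EMBEDDING_DIM = 100     # placeholder embedding dimension

model = Sequential([
    Embedding(VOCAB_SIZE, EMBEDDING_DIM),   # (batch, seq_len) -> (batch, seq_len, 100)
    LSTM(60, return_sequences=True),        # 60 units, one 60-dim vector per timestep
    GlobalMaxPooling1D(),                   # pool across timesteps -> (batch, 60)
    Dense(64, activation='relu'),           # fully connected layer
    Dense(1, activation='sigmoid'),         # fake vs. real probability
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

If your post used a different pooling layer or head, that part above is just my own guess at filling in the details.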
I have to thank you for that.
Finally, can you tell me how to choose the number of LSTM units for a particular task? I had 196 units before, and now 60 units works like a charm.
Also, the input shape was 1000 with padding (roughly as in the snippet below). Can you tell me how that is processed with 60 units? I have no idea.
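For context, this is roughly how I pad the input on my side; the sample texts and tokenizer settings are placeholders from my own pipeline:

from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

MAX_SEQUENCE_LENGTH = 1000   # every article padded/truncated to 1000 tokens

texts = ["sample article one", "sample article two"]   # placeholder documents
tokenizer = Tokenizer(num_words=20000)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)
padded = pad_sequences(sequences, maxlen=MAX_SEQUENCE_LENGTH)   # shape: (num_texts, 1000)

So each sample going into the LSTM is a row of 1000 word indices; what I still don't follow is how those 1000 timesteps relate to the 60 units.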
And this line, model.add(Input(shape=(MAX_SEQUENCE_LENGTH,), ...)): is it redundant, or does it serve some purpose? I have never used it in any of my LSTM models.
Thanks for the post.