Parameters:
- Array of input sentences.
- options (optional): Configuration options for LDA (see the usage sketch after this list):
  - Number of topics to extract (default: 10).
  - Number of terms to show for each topic (default: 10).
  - Dirichlet prior on document-topic distributions, the α hyperparameter (default: 0.1).
  - Dirichlet prior on topic-word distributions, the β hyperparameter (default: 0.01).
  - Number of iterations for the LDA algorithm (default: 1000).
  - Number of burn-in iterations (default: 100).
  - Lag between samples (default: 10).
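To make the option list concrete, here is a minimal usage sketch. The function name `lda`, the option keys (`topics`, `terms`, `alpha`, `beta`, `iterations`, `burnIn`, `lag`), and the return shape are all illustrative assumptions inferred from the parameter descriptions above, not the library's actual API.

```typescript
// Hypothetical signature inferred from the parameter list above; every
// identifier here is an illustrative assumption, not the actual API.
declare function lda(
  sentences: string[],
  options?: {
    topics?: number;     // number of topics to extract (default: 10)
    terms?: number;      // terms to show per topic (default: 10)
    alpha?: number;      // Dirichlet prior on document-topic distributions (default: 0.1)
    beta?: number;       // Dirichlet prior on topic-word distributions (default: 0.01)
    iterations?: number; // total sampling iterations (default: 1000)
    burnIn?: number;     // iterations discarded before samples are kept (default: 100)
    lag?: number;        // iterations between retained samples (default: 10)
  }
): Array<Array<{ term: string; probability: number }>>; // assumed return shape

// Two topics over a tiny corpus; the remaining options keep their defaults.
const topics = lda(
  [
    "cats and dogs are common household pets",
    "stocks fell as markets reacted to the report",
    "my cat chases the neighbour's dog",
  ],
  { topics: 2, terms: 5 }
);
```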
Latent Dirichlet allocation (Dirichlet is pronounced "Dee-rish-lay") is used in natural language processing to discover abstract topics in a collection of documents. It is a generative probabilistic model that assumes each document is a mixture of topics, where a topic is a probability distribution over words. LDA uses Bayesian inference to simultaneously learn, in an unsupervised manner, the word distribution of each topic and the topic mixture of each document, based on which words tend to occur together.
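The iteration, burn-in, and lag options come from the collapsed Gibbs sampler typically used to fit LDA: the sampler repeatedly reassigns each word to a topic, the first burn-in iterations are discarded, and estimates are then taken every `lag` iterations. The toy sketch below shows the core resampling step using standard LDA notation; it is an illustration of the technique, not this library's implementation.

```typescript
// Toy collapsed Gibbs sampler for LDA. An illustrative sketch only, not the
// library's implementation. Documents are arrays of word ids in [0, vocabSize).
function gibbsLda(
  docs: number[][],
  vocabSize: number,
  numTopics: number,
  alpha = 0.1, // Dirichlet prior on document-topic distributions
  beta = 0.01, // Dirichlet prior on topic-word distributions
  iterations = 1000
): number[][] {
  // z[d][i] is the topic currently assigned to word i of document d.
  const z = docs.map((doc) => doc.map(() => Math.floor(Math.random() * numTopics)));
  // Count tables, maintained incrementally by the sampler.
  const docTopic = docs.map(() => new Array(numTopics).fill(0));                          // n(d, k)
  const topicWord = Array.from({ length: numTopics }, () => new Array(vocabSize).fill(0)); // n(k, w)
  const topicTotal = new Array(numTopics).fill(0);                                         // n(k)

  docs.forEach((doc, d) =>
    doc.forEach((w, i) => {
      docTopic[d][z[d][i]]++;
      topicWord[z[d][i]][w]++;
      topicTotal[z[d][i]]++;
    })
  );

  for (let it = 0; it < iterations; it++) {
    docs.forEach((doc, d) =>
      doc.forEach((w, i) => {
        // Take the current assignment out of the counts.
        const old = z[d][i];
        docTopic[d][old]--;
        topicWord[old][w]--;
        topicTotal[old]--;

        // Full conditional p(z = k | rest), up to a constant:
        // (n(d,k) + alpha) * (n(k,w) + beta) / (n(k) + vocabSize * beta).
        const weights: number[] = [];
        let total = 0;
        for (let k = 0; k < numTopics; k++) {
          const p =
            (docTopic[d][k] + alpha) *
            ((topicWord[k][w] + beta) / (topicTotal[k] + vocabSize * beta));
          weights.push(p);
          total += p;
        }

        // Draw the new topic with probability proportional to the weights.
        let r = Math.random() * total;
        let next = 0;
        while (next < numTopics - 1 && (r -= weights[next]) > 0) next++;

        z[d][i] = next;
        docTopic[d][next]++;
        topicWord[next][w]++;
        topicTotal[next]++;
      })
    );
  }
  // A full implementation would discard the burn-in iterations and average
  // the count tables every `lag` iterations to estimate the distributions.
  return z;
}
```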
References:
- Latent Dirichlet Allocation (LDA) with Gibbs Sampling Explained
- Latent Dirichlet Allocation
- Topic Models (YouTube)