What is Latent Dirichlet Allocation (LDA), and how is it used for topic modeling in natural language processing? Specifically:

- How does LDA identify hidden topics within a collection of documents?
- What are the key assumptions and components of the LDA model?
- In which applications is LDA commonly used, such as document classification or recommendation systems?
- What are the advantages and limitations of using LDA for topic modeling?