What is forward propagation in deep learning, and how does it work in neural networks? Specifically:

- How are inputs processed through the layers to produce an output during forward propagation?
- What roles do weights, biases, and activation functions play in this process?
- How does forward propagation differ from backpropagation when training a model?
- What are the key challenges or considerations when implementing forward propagation in deep learning models?
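
To make the question concrete, here is a minimal sketch of what I understand forward propagation to be: a two-layer network in NumPy where each layer applies weights and a bias, then a nonlinear activation. The layer sizes, activations (ReLU and sigmoid), and random parameters here are arbitrary choices for illustration, not part of any specific framework.

```python
import numpy as np

def relu(z):
    # Element-wise nonlinearity: negative values become 0
    return np.maximum(0.0, z)

def sigmoid(z):
    # Squashes any real value into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, params):
    # Hidden layer: affine transform (weights @ input + bias), then ReLU
    z1 = params["W1"] @ x + params["b1"]
    a1 = relu(z1)
    # Output layer: another affine transform, then sigmoid
    z2 = params["W2"] @ a1 + params["b2"]
    return sigmoid(z2)

# Randomly initialized parameters for a 3 -> 4 -> 1 network (illustrative only)
rng = np.random.default_rng(0)
params = {
    "W1": rng.normal(size=(4, 3)), "b1": np.zeros(4),
    "W2": rng.normal(size=(1, 4)), "b2": np.zeros(1),
}

x = np.array([0.5, -1.2, 3.0])  # one input example with 3 features
y = forward(x, params)          # single output value in (0, 1)
```

Is this the right mental model, i.e. that forward propagation is just this repeated "affine transform + activation" pass from input to output, while backpropagation is the separate backward pass that computes gradients of the loss with respect to `W1`, `b1`, `W2`, `b2`?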