Equivariance and Invariance in Neural Networks
- Date:
- Time: 14:00 - 15:30
- Address: Sokolovská 83, Praha
- Room: K1
- Speakers: Filip Šroubek, Tomáš Karella
In the rapidly evolving field of neural networks, achieving robustness against geometric and radiometric transformations such as rotation, scaling, noise, or blur is crucial. This seminar begins by exploring why such robustness matters and how it is traditionally pursued by training networks on augmented datasets. We will define and differentiate two key concepts: invariance, where the network output remains constant despite transformations of the input, and equivariance, where a transformation of the input produces a corresponding transformation of the output. The seminar will then examine the advantages of equivariance in neural networks, particularly its efficiency in encoding features and its ability to achieve better performance with fewer parameters. Participants will leave with a deeper understanding of these concepts and of their practical implications in the field of neural networks.
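As a concrete illustration of the two definitions (not part of the seminar material), the minimal sketch below, assuming PyTorch, checks both properties for translation: a convolution is translation-equivariant, and adding global average pooling on top makes the output translation-invariant. Circular padding is used so that a circular shift commutes exactly with the convolution; all layer sizes are illustrative.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Convolution with circular padding, so a circular shift of the input
# wraps cleanly at the borders and commutes with the convolution.
conv = nn.Conv2d(1, 4, kernel_size=3, padding=1,
                 padding_mode="circular", bias=False)

x = torch.randn(1, 1, 16, 16)                  # toy input "image"
x_shifted = torch.roll(x, shifts=3, dims=-1)   # translate input 3 px right

# Equivariance: transforming the input transforms the output the same way,
# i.e. conv(shift(x)) == shift(conv(x)).
y = conv(x)
y_shifted = conv(x_shifted)
print(torch.allclose(torch.roll(y, shifts=3, dims=-1), y_shifted,
                     atol=1e-6))               # True

# Invariance: the output remains constant despite the input transformation.
# Global average pooling discards spatial position, so the shifted and
# unshifted inputs yield the same feature vector.
pool = nn.AdaptiveAvgPool2d(1)
z = pool(conv(x)).flatten()
z_shifted = pool(conv(x_shifted)).flatten()
print(torch.allclose(z, z_shifted, atol=1e-5))  # True
```

The same pattern underlies the parameter-efficiency argument: an equivariant layer encodes a transformed feature once rather than learning a separate filter for each transformed copy, which augmentation-based training must otherwise approximate.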