Contrastive self-supervised learning based on point-wise comparisons has been widely studied for vision tasks. In the neural cortex, neuronal responses to distinct stimulus classes are organized into geometric structures known as neural manifolds. Accurate classification of stimuli can be achieved by effectively separating these manifolds, akin to solving a packing problem. Despite its intuitive appeal, neurobiological relevance, and potential for enhancing interpretability, the perspective of neural manifold packing in contrastive learning remains largely unexplored. In this talk, I will discuss how concepts from statistical and soft matter physics can be leveraged to analyze neural manifold packing dynamics under stochastic gradient descent and related optimization algorithms. This perspective not only informs the development of highly interpretable self-supervised learning methods but also reveals striking parallels between the energy landscapes of sphere packings and the loss landscapes of neural networks. I will present a combination of numerical experiments and analytical theory demonstrating the depth of these analogies.
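For readers unfamiliar with the point-wise comparisons mentioned in the abstract, below is a minimal sketch of a standard InfoNCE-style contrastive loss in NumPy. This is a generic illustration of point-wise contrastive learning (the function name and toy data are hypothetical), not the speaker's manifold-packing method.

```python
import numpy as np

def infonce_loss(z1, z2, temperature=0.1):
    """Point-wise contrastive (InfoNCE-style) loss for two batches of
    L2-normalized embeddings z1, z2 of shape (N, d), where z1[i] and z2[i]
    are two augmented views of the same input (the positive pair) and the
    remaining rows act as negatives."""
    # Cosine similarities between all pairs of views, scaled by temperature.
    logits = z1 @ z2.T / temperature               # shape (N, N)
    # Row-wise log-softmax; diagonal entries correspond to positive pairs.
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Average negative log-likelihood of the positive pair.
    return -np.mean(np.diag(log_prob))

# Toy usage: random unit-norm embeddings standing in for encoder outputs.
rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16)); z1 /= np.linalg.norm(z1, axis=1, keepdims=True)
z2 = rng.normal(size=(8, 16)); z2 /= np.linalg.norm(z2, axis=1, keepdims=True)
print(infonce_loss(z1, z2))
```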