Thursday, 28th April, 2016, 11.00 - 12.00
After a brief introduction to deep architectures and their typical supervised and unsupervised training approaches, the talk focuses on incremental strategies, which underlie natural learning. We will present our experience with incremental training of both CNNs (Convolutional Neural Networks) and HTMs (Hierarchical Temporal Memory). In particular, a recently proposed semi-supervised tuning strategy (exploiting temporal coherence) has proved very effective in conjunction with HTM, sometimes approaching the accuracy of supervised training.
Speaker: Davide Maltoni, University of Bologna (Dept. of Computer Science and Engineering - DISI)
Seminar Room Xipre (173.06)