Seminars, events & talks

Friday, 23rd November 2018, 10.00

Deep Reinforcement Learning for Partially Observable Environments

Many real-world sequential decision-making problems are partially observable by nature, and the environmental model is often unknown. Examples include visual occlusions, unobserved latent causes (as in healthcare), and reliance on noisy sensors. Consequently, there is a great need for reinforcement learning methods that can tackle such problems given only a stream of observations.

In this talk, I will briefly present the two fundamental approaches to tackling partial observability when learning a policy: memory and inference. Subsequently, I will present our recently proposed algorithm "Deep Variational Reinforcement Learning" (DVRL), which combines learning a model with particle filtering to allow the agent to reason more effectively about the unobserved state of the environment.
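To give a feel for the inference approach, below is a minimal, illustrative Python sketch of a bootstrap particle filter that maintains a belief over a hidden state from noisy observations, with an agent acting on the belief mean. It is not the authors' DVRL implementation: DVRL learns the transition and observation models and embeds the filter in a neural policy, whereas the fixed Gaussian models and the threshold policy here are assumptions made only to keep the example self-contained.

# Illustrative sketch only: a toy bootstrap particle filter maintaining a
# belief over a hidden scalar state. The Gaussian transition/observation
# models and the threshold policy are assumptions for this example; DVRL
# learns such models and acts on the resulting belief with a neural policy.
import numpy as np

rng = np.random.default_rng(0)
N = 500                                      # number of particles
particles = rng.normal(0.0, 1.0, size=N)     # initial belief over the hidden state
weights = np.full(N, 1.0 / N)

def step_environment(state):
    """Hidden dynamics and a noisy sensor (unknown to the agent in the RL setting)."""
    next_state = 0.9 * state + rng.normal(0.0, 0.1)
    observation = next_state + rng.normal(0.0, 0.5)
    return next_state, observation

def filter_update(particles, weights, observation):
    """One predict-reweight-resample step of a bootstrap particle filter."""
    # Predict: propagate each particle through the (assumed) transition model.
    particles = 0.9 * particles + rng.normal(0.0, 0.1, size=particles.shape)
    # Reweight: likelihood of the observation under the (assumed) sensor model.
    weights = weights * np.exp(-0.5 * ((observation - particles) / 0.5) ** 2)
    weights /= weights.sum()
    # Resample to avoid weight degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

state = rng.normal()
for t in range(20):
    state, obs = step_environment(state)
    particles, weights = filter_update(particles, weights, obs)
    belief_mean = np.average(particles, weights=weights)
    action = 1 if belief_mean > 0 else 0     # placeholder policy acting on the belief
    print(f"t={t:2d}  true={state:+.2f}  belief={belief_mean:+.2f}  action={action}")

The memory-based alternative mentioned in the talk would instead feed the raw observation stream into a recurrent network and let its hidden state play the role of the belief.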

Speaker: Maximilian Igl, Department of Computer Science, University of Oxford, UK

Room Marie Curie, PRBB Inner square


