I can't speak for everyone, but what I can do is write what's gotten me, personally, the most excited in recent months.
1. Generative models and image translation:
A few years ago, we could barely generate handwritten digits. Then convolutional networks came along and suddenly images became far easier to deal with. In recent years, the generative adversarial network (GAN) has brought the most magic with it. We can now generate celebrity faces with near-photorealistic quality, which in my opinion is amazing and something I had not even thought was possible. Something else that's caught my attention is CycleGAN, a way to translate images between domains (for example, cows->horses or daytime->nighttime) WITHOUT the need for training pairs, which is just insane. Everything that had been limited by a lack of paired training data (which is a lot of things) now has a non-negligible chance of becoming possible.
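The trick that makes unpaired translation work is the cycle-consistency loss: translating to the other domain and back should reconstruct the original image. Here's a minimal sketch of that idea; the toy translators `G` and `F` below are hypothetical stand-ins for what are really learned convolutional generator networks:

```python
import numpy as np

def cycle_consistency_loss(x, G, F):
    """L1 cycle-consistency loss: F(G(x)) should reconstruct x.
    This is what lets CycleGAN train WITHOUT paired examples --
    the round trip pins the translation down."""
    return np.abs(F(G(x)) - x).mean()

# Toy stand-in "translators": a brightness shift and its inverse
# (hypothetical; real CycleGAN uses learned conv generators).
G = lambda img: img + 0.5   # e.g. "daytime -> nighttime"
F = lambda img: img - 0.5   # e.g. "nighttime -> daytime"

img = np.random.rand(8, 8)  # fake 8x8 grayscale image
loss = cycle_consistency_loss(img, G, F)
```

In actual training, this loss (applied in both cycle directions) is added to the usual adversarial losses from the two domains' discriminators.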
2. Reinforcement Learning: Speaking of lack of training data, RL throws that constraint out of the window. Reinforcement learning is like a holy grail: in theory, it can solve any problem you throw at it, without any supervision or labeled data. Of course, the catch is that RL is unstable and in practice needs a lot of tricks to get working. That's why results from the past few years, like solving Atari games or controlling robotic arms, are so exciting - they're a sign that the techniques are improving, and that we're making solid progress toward uncovering the secrets that RL holds.
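To make "no supervision or data" concrete, here's a self-contained sketch of tabular Q-learning on a toy chain world (my own toy setup, not from any of the results above): the agent only ever sees a reward signal, yet ends up learning to walk right to the goal.

```python
import numpy as np

def q_learning(n_states=5, n_actions=2, episodes=500,
               alpha=0.1, gamma=0.9, seed=0):
    """Tabular Q-learning on a chain: states 0..4, action 0 = left,
    action 1 = right, reward 1 only on reaching the last state.
    The behaviour policy is fully random; Q-learning is off-policy,
    so it still learns the greedy (optimal) values."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            a = int(rng.integers(n_actions))        # random exploration
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0  # only terminal reward
            # Bellman update toward r + gamma * best future value
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            s = s2
    return Q

Q = q_learning()
policy = Q.argmax(axis=1)  # greedy policy derived from learned values
```

The instability I mentioned shows up the moment you swap the table for a neural network and the toy chain for Atari pixels; that's where tricks like replay buffers and target networks come in.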