With new neural network architectures popping up every now and then, it’s hard to keep track of them all. So I decided to compose a cheat sheet containing many of those architectures. Most of these are neural networks, some are completely different beasts. One problem with drawing them as node maps: it doesn’t really show how they’re used.
The use-cases for trained networks differ even more: VAEs, for example, are generators, where you insert noise to get a new sample. It should be noted that while most of the abbreviations used are generally accepted, not all of them are. RNN sometimes refers to recursive neural networks, but most of the time it refers to recurrent neural networks. That’s not the end of it though: in many places you’ll find RNN used as a placeholder for any recurrent architecture, including LSTMs, GRUs and even the bidirectional variants. Composing a complete list is practically impossible, as new architectures are invented all the time.
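The "insert noise to get a new sample" use of a VAE can be sketched in a few lines. This is only an illustration, not a real model: the decoder weights below are random placeholders (in practice they would come from training), and the layer sizes are arbitrary. The point is that generating reduces to drawing a latent vector from the prior and decoding it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder decoder weights; a trained VAE would supply these.
latent_dim, hidden_dim, output_dim = 2, 16, 8
W1 = rng.normal(size=(latent_dim, hidden_dim))
W2 = rng.normal(size=(hidden_dim, output_dim))

def decode(z):
    """Map a latent vector z to a point in data space."""
    h = np.tanh(z @ W1)               # hidden layer
    return 1 / (1 + np.exp(-h @ W2))  # sigmoid output in [0, 1]

# Generating a new sample: draw noise from the prior, decode it.
z = rng.normal(size=latent_dim)       # z ~ N(0, I)
sample = decode(z)
print(sample.shape)  # (8,)
```

Training is where VAEs differ from plain autoencoders; sampling from a trained one really is this simple.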
And even once published, architectures can still be quite challenging to find when you’re looking for them, or sometimes you just overlook some. For each of the architectures depicted in the picture, I wrote a very, very brief description. You may find some of these useful if you’re familiar with some architectures but not with a particular one. Neural networks are often described as having layers, where each layer consists of input, hidden or output cells in parallel.
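That layered view can be made concrete with a small sketch: each layer is just a vector of cell activations, and the network maps one layer's vector to the next. The layer sizes and tanh activation below are arbitrary choices for illustration, not part of any particular architecture from the chart.

```python
import numpy as np

rng = np.random.default_rng(1)

# Cell counts per layer: input, hidden, output (arbitrary sizes).
layer_sizes = [4, 5, 3]
weights = [rng.normal(size=(m, n))
           for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    """Return one activation vector per layer: the cells 'in parallel'."""
    activations = [np.asarray(x, dtype=float)]
    for W in weights:
        activations.append(np.tanh(activations[-1] @ W))
    return activations

acts = forward([1.0, 0.5, -0.3, 0.2])
print([a.shape for a in acts])  # [(4,), (5,), (3,)]
```

Each entry of the printed list is one layer's worth of cells; the node-map drawings in the chart are essentially pictures of these vectors and the weight matrices between them.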