AlexNet
- It consists of 5 convolution layers, 3 max-pooling layers, 2 normalization layers, 2 fully connected layers, and 1 softmax layer.
- Each convolution layer consists of a convolution filter followed by a non-linear activation function, ReLU.
- The pooling layers perform max pooling.
- Because of the fully connected layers, the input size is fixed; the input is commonly quoted as a 227×227×3 RGB image. A concrete layer-by-layer sketch follows below.
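
To make the bullet points above concrete, here is a minimal PyTorch sketch of an AlexNet-style network. The filter counts (96/256/384), the 227×227×3 input, and the local response normalization placement follow commonly quoted values from the original paper; this is an illustrative sketch under those assumptions, not the exact `torchvision.models.alexnet` implementation, and dropout is omitted for brevity.

```python
import torch
import torch.nn as nn

# An AlexNet-style network matching the description above:
# 5 convolutions, 3 max-pooling layers, 2 normalization layers,
# 2 hidden fully connected layers, and a final softmax output.
class AlexNetSketch(nn.Module):
    def __init__(self, num_classes: int = 1000):
        super().__init__()
        self.features = nn.Sequential(
            # Conv 1: 227x227x3 -> 55x55x96, then LRN and 3x3/2 max pooling -> 27x27x96
            nn.Conv2d(3, 96, kernel_size=11, stride=4),
            nn.ReLU(inplace=True),
            nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75, k=2.0),
            nn.MaxPool2d(kernel_size=3, stride=2),
            # Conv 2: 27x27x96 -> 27x27x256, then LRN and max pooling -> 13x13x256
            nn.Conv2d(96, 256, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75, k=2.0),
            nn.MaxPool2d(kernel_size=3, stride=2),
            # Conv 3-5: 13x13 feature maps, each convolution followed by ReLU
            nn.Conv2d(256, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # Final max pooling -> 6x6x256 = 9216 features; this fixed size is
            # why the fully connected layers require a fixed input resolution.
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 6 * 6, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
            nn.Softmax(dim=1),  # for training, usually dropped in favor of CrossEntropyLoss on logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


# Quick shape check with a single 227x227 RGB image.
model = AlexNetSketch()
probs = model(torch.randn(1, 3, 227, 227))
print(probs.shape)  # torch.Size([1, 1000])
```

Counting the final fully connected layer together with the softmax as the output layer gives the "2 fully connected layers and 1 softmax layer" tally used above.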