
Summarizing your model visually
Going back to our model, let's summarize what we are about to train. You can do this in Keras by calling the summary() method on the model, which is a shortcut for the longer (and hence harder to remember) utility function:
keras.utils.print_summary(model, line_length=None, positions=None,
print_fn=None)
Using this, you can visualize the output shape of each layer of the neural network, as well as the number of parameters in each layer:
model.summary()
The preceding code generates the following output:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten_2 (Flatten) (None, 784) 0
_________________________________________________________________
dense_4 (Dense) (None, 1024) 803840
_________________________________________________________________
dense_5 (Dense) (None, 28) 28700
_________________________________________________________________
dense_6 (Dense) (None, 10) 290
=================================================================
Total params: 832,830
Trainable params: 832,830
Non-trainable params: 0
_________________________________________________________________
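The parameter counts in this table follow directly from each layer's shape: a Dense layer with n_in inputs and n_out units holds n_in × n_out weights plus n_out biases (the Flatten layer has none). As a quick sanity check, here is a minimal sketch in plain Python that reproduces the counts above (the helper function dense_params is ours, not part of Keras):

```python
# Recompute the Param # column of model.summary() by hand.
# A Dense layer has (inputs * units) weights plus one bias per unit.
def dense_params(n_in, n_out):
    return n_in * n_out + n_out

# (input size, units) for dense_4, dense_5, and dense_6 above
layer_shapes = [(784, 1024), (1024, 28), (28, 10)]

counts = [dense_params(n_in, n_out) for n_in, n_out in layer_shapes]
print(counts)       # [803840, 28700, 290]
print(sum(counts))  # 832830 -- matching "Total params" in the summary
```

Doing this arithmetic yourself is a useful habit: a mismatch between your expected counts and summary() usually means a layer was wired with the wrong input shape.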
As you can see, in contrast to the perceptron we saw in Chapter 2, A Deeper Dive into Neural Networks, this extremely simple model already has 832,830 trainable parameters, giving it a vastly larger learning capacity than its ancestor.