Keras: Vision models examples: mnist_cnn.py (fashion)
Trains a simple ConvNet on the Fashion-MNIST dataset.
Gets 93.3% test accuracy after 50 epochs.
from __future__ import print_function

import keras
from keras.datasets import fashion_mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
batch_size = 128
num_classes = 10
epochs = 50
# input image dimensions
img_rows, img_cols = 28, 28
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
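The reshape above hard-codes the channels-last layout, which is why the `backend as K` import goes otherwise unused; the upstream mnist_cnn.py instead branches on `K.image_data_format()` so the same script works on a channels-first backend. A standalone sketch of that branch, with a dummy NumPy array standing in for the real `load_data()` output:

```python
import numpy as np

img_rows, img_cols = 28, 28
# Dummy array with the shape of the raw (N, 28, 28) load_data() output.
x_train = np.zeros((60000, 28, 28), dtype='uint8')

# 'channels_last' is what K.image_data_format() returns on the TensorFlow
# backend; a Theano-style backend would return 'channels_first'.
data_format = 'channels_last'
if data_format == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

print(input_shape)  # → (28, 28, 1)
```

Either way, `input_shape` is what gets passed to the first `Conv2D` layer below.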
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
x_train shape: (60000, 28, 28, 1)
60000 train samples
10000 test samples
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
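`to_categorical` turns each integer label 0-9 into a one-hot row of length `num_classes`, which is the target format `categorical_crossentropy` expects. A minimal NumPy stand-in (`to_categorical_np` is a hypothetical name, not a Keras API) shows the transformation:

```python
import numpy as np

def to_categorical_np(y, num_classes):
    """NumPy stand-in for keras.utils.to_categorical on 1-D integer labels."""
    out = np.zeros((len(y), num_classes), dtype='float32')
    out[np.arange(len(y)), y] = 1.0  # set the column matching each label
    return out

y = np.array([9, 0, 3])  # e.g. ankle boot, T-shirt/top, dress
print(to_categorical_np(y, 10))
# → [[0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
#    [1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
#    [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]]
```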
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_1 (Conv2D)            (None, 26, 26, 32)        320
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 24, 24, 64)        18496
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 12, 12, 64)        0
_________________________________________________________________
dropout_1 (Dropout)          (None, 12, 12, 64)        0
_________________________________________________________________
flatten_1 (Flatten)          (None, 9216)              0
_________________________________________________________________
dense_1 (Dense)              (None, 128)               1179776
_________________________________________________________________
dropout_2 (Dropout)          (None, 128)               0
_________________________________________________________________
dense_2 (Dense)              (None, 10)                1290
=================================================================
Total params: 1,199,882
Trainable params: 1,199,882
Non-trainable params: 0
_________________________________________________________________
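The Param # column can be checked by hand: a Conv2D layer has (kernel_h * kernel_w * in_channels + 1 bias) weights per filter, and a Dense layer has (inputs + 1) weights per unit. Recomputing the four trainable layers:

```python
# Recomputing the Param # column of model.summary() by hand.
conv1 = (3 * 3 * 1 + 1) * 32        # 3x3 kernel, 1 input channel, 32 filters
conv2 = (3 * 3 * 32 + 1) * 64       # 3x3 kernel, 32 input channels, 64 filters
dense1 = 12 * 12 * 64 * 128 + 128   # flattened 9216 inputs -> 128 units
dense2 = 128 * 10 + 10              # 128 inputs -> 10 output classes

print(conv1, conv2, dense1, dense2)        # → 320 18496 1179776 1290
print(conv1 + conv2 + dense1 + dense2)     # → 1199882
```

The totals match the summary; MaxPooling2D, Dropout, and Flatten contribute no parameters.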
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test))
Train on 60000 samples, validate on 10000 samples
Epoch 1/50
60000/60000 [==============================] - 14s 238us/step - loss: 0.6112 - acc: 0.7836 - val_loss: 0.3941 - val_acc: 0.8597
Epoch 2/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.3926 - acc: 0.8620 - val_loss: 0.3349 - val_acc: 0.8801
Epoch 3/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.3443 - acc: 0.8784 - val_loss: 0.3071 - val_acc: 0.8915
Epoch 4/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.3147 - acc: 0.8875 - val_loss: 0.2979 - val_acc: 0.8926
Epoch 5/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.2937 - acc: 0.8966 - val_loss: 0.2757 - val_acc: 0.9003
Epoch 6/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.2779 - acc: 0.9009 - val_loss: 0.2648 - val_acc: 0.9075
Epoch 7/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.2607 - acc: 0.9067 - val_loss: 0.2580 - val_acc: 0.9077
Epoch 8/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.2487 - acc: 0.9119 - val_loss: 0.2553 - val_acc: 0.9067
Epoch 9/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.2377 - acc: 0.9164 - val_loss: 0.2444 - val_acc: 0.9114
Epoch 10/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.2298 - acc: 0.9178 - val_loss: 0.2400 - val_acc: 0.9125
Epoch 11/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.2208 - acc: 0.9219 - val_loss: 0.2377 - val_acc: 0.9155
Epoch 12/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.2118 - acc: 0.9245 - val_loss: 0.2328 - val_acc: 0.9166
Epoch 13/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.2050 - acc: 0.9267 - val_loss: 0.2256 - val_acc: 0.9198
Epoch 14/50
60000/60000 [==============================] - 10s 173us/step - loss: 0.1986 - acc: 0.9291 - val_loss: 0.2213 - val_acc: 0.9218
Epoch 15/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.1913 - acc: 0.9325 - val_loss: 0.2215 - val_acc: 0.9181
Epoch 16/50
60000/60000 [==============================] - 10s 173us/step - loss: 0.1850 - acc: 0.9328 - val_loss: 0.2177 - val_acc: 0.9237
Epoch 17/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.1789 - acc: 0.9355 - val_loss: 0.2209 - val_acc: 0.9200
Epoch 18/50
60000/60000 [==============================] - 10s 173us/step - loss: 0.1729 - acc: 0.9375 - val_loss: 0.2234 - val_acc: 0.9241
Epoch 19/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.1698 - acc: 0.9398 - val_loss: 0.2179 - val_acc: 0.9226
Epoch 20/50
60000/60000 [==============================] - 10s 173us/step - loss: 0.1653 - acc: 0.9407 - val_loss: 0.2154 - val_acc: 0.9250
Epoch 21/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.1622 - acc: 0.9418 - val_loss: 0.2182 - val_acc: 0.9254
Epoch 22/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.1563 - acc: 0.9441 - val_loss: 0.2217 - val_acc: 0.9260
Epoch 23/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.1550 - acc: 0.9457 - val_loss: 0.2112 - val_acc: 0.9230
Epoch 24/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.1495 - acc: 0.9472 - val_loss: 0.2125 - val_acc: 0.9238
Epoch 25/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.1468 - acc: 0.9481 - val_loss: 0.2237 - val_acc: 0.9259
Epoch 26/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.1430 - acc: 0.9484 - val_loss: 0.2174 - val_acc: 0.9278
Epoch 27/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.1397 - acc: 0.9501 - val_loss: 0.2286 - val_acc: 0.9264
Epoch 28/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.1363 - acc: 0.9514 - val_loss: 0.2139 - val_acc: 0.9307
Epoch 29/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.1368 - acc: 0.9518 - val_loss: 0.2081 - val_acc: 0.9282
Epoch 30/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.1314 - acc: 0.9535 - val_loss: 0.2280 - val_acc: 0.9287
Epoch 31/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.1293 - acc: 0.9541 - val_loss: 0.2106 - val_acc: 0.9291
Epoch 32/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.1273 - acc: 0.9556 - val_loss: 0.2171 - val_acc: 0.9300
Epoch 33/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.1250 - acc: 0.9551 - val_loss: 0.2225 - val_acc: 0.9288
Epoch 34/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.1236 - acc: 0.9565 - val_loss: 0.2358 - val_acc: 0.9252
Epoch 35/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.1224 - acc: 0.9566 - val_loss: 0.2260 - val_acc: 0.9294
Epoch 36/50
60000/60000 [==============================] - 10s 173us/step - loss: 0.1175 - acc: 0.9582 - val_loss: 0.2181 - val_acc: 0.9289
Epoch 37/50
60000/60000 [==============================] - 10s 173us/step - loss: 0.1159 - acc: 0.9586 - val_loss: 0.2201 - val_acc: 0.9312
Epoch 38/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.1153 - acc: 0.9599 - val_loss: 0.2182 - val_acc: 0.9318
Epoch 39/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.1162 - acc: 0.9584 - val_loss: 0.2212 - val_acc: 0.9317
Epoch 40/50
60000/60000 [==============================] - 10s 173us/step - loss: 0.1110 - acc: 0.9606 - val_loss: 0.2089 - val_acc: 0.9327
Epoch 41/50
60000/60000 [==============================] - 10s 173us/step - loss: 0.1123 - acc: 0.9592 - val_loss: 0.2219 - val_acc: 0.9293
Epoch 42/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.1101 - acc: 0.9610 - val_loss: 0.2422 - val_acc: 0.9321
Epoch 43/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.1115 - acc: 0.9607 - val_loss: 0.2128 - val_acc: 0.9327
Epoch 44/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.1088 - acc: 0.9612 - val_loss: 0.2370 - val_acc: 0.9321
Epoch 45/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.1066 - acc: 0.9621 - val_loss: 0.2249 - val_acc: 0.9304
Epoch 46/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.1059 - acc: 0.9626 - val_loss: 0.2261 - val_acc: 0.9303
Epoch 47/50
60000/60000 [==============================] - 10s 173us/step - loss: 0.1048 - acc: 0.9626 - val_loss: 0.2358 - val_acc: 0.9320
Epoch 48/50
60000/60000 [==============================] - 10s 173us/step - loss: 0.1041 - acc: 0.9631 - val_loss: 0.2429 - val_acc: 0.9326
Epoch 49/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.1041 - acc: 0.9633 - val_loss: 0.2354 - val_acc: 0.9330
Epoch 50/50
60000/60000 [==============================] - 10s 174us/step - loss: 0.1029 - acc: 0.9647 - val_loss: 0.2245 - val_acc: 0.9330
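Note that while training loss keeps falling throughout, val_loss flattens out and fluctuates in the last ten or so epochs, a sign of the model starting to overfit. In Keras this is where `keras.callbacks.EarlyStopping(monitor='val_loss', patience=3)` is typically used. A pure-Python sketch of that patience rule (`stop_epoch` is a hypothetical helper, not a Keras API), applied to the val_loss values logged above for epochs 40-50:

```python
# val_loss values logged for epochs 40-50 in the training run above.
val_losses = [0.2089, 0.2219, 0.2422, 0.2128, 0.2370, 0.2249,
              0.2261, 0.2358, 0.2429, 0.2354, 0.2245]

def stop_epoch(losses, patience):
    """Return the 1-based position at which a patience rule would stop,
    or None if the monitored loss never stalls for `patience` epochs."""
    best, best_at = float('inf'), 0
    for i, loss in enumerate(losses, start=1):
        if loss < best:
            best, best_at = loss, i  # new best: reset the patience counter
        elif i - best_at >= patience:
            return i                 # no improvement for `patience` epochs
    return None

print(stop_epoch(val_losses, patience=3))  # → 4, i.e. overall epoch 43
```

With patience=3 the rule would have halted this run at epoch 43, since val_loss never improves on the 0.2089 reached at epoch 40.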
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
Test loss: 0.224495261292
Test accuracy: 0.933
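To use the trained model on individual images, map `model.predict` output back to garment names. The softmax row below is a made-up stand-in for one row of `model.predict(x_test)`; the label order is the official one from the zalandoresearch/fashion-mnist repository:

```python
import numpy as np

# Official Fashion-MNIST label order (zalandoresearch/fashion-mnist).
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

# Stand-in for one row of model.predict(x_test): a 10-way softmax vector.
probs = np.array([0.01, 0.02, 0.05, 0.02, 0.03, 0.01, 0.04, 0.02, 0.05, 0.75])

predicted = class_names[int(np.argmax(probs))]  # pick the most likely class
print(predicted)  # → Ankle boot
```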
That's all.