Deep Learning Day 8: Cat-and-Dog from DataFrame

Cat and Dog Dataset

Imports

%pwd
'/kaggle/working'
!ls -la /kaggle/input
total 8
drwxr-xr-x 3 root   root    4096 Sep 14 03:53 .
drwxr-xr-x 5 root   root    4096 Sep 14 03:52 ..
drwxr-xr-x 4 nobody nogroup    0 Dec  4  2020 cat-and-dog
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
import cv2
import glob
from tensorflow.keras.preprocessing.image import ImageDataGenerator

from tensorflow.keras.layers import Input, Conv2D, Dropout, Flatten, Activation, MaxPooling2D, Dense
from tensorflow.keras.layers import GlobalAveragePooling2D, BatchNormalization

from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from sklearn.model_selection import train_test_split
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping, ModelCheckpoint
np.random.seed(42)
tf.random.set_seed(42)

Load Dataset

## images in the training_set directory
training_cats = glob.glob("/kaggle/input/cat-and-dog/training_set/training_set/cats/*.jpg")
training_dogs = glob.glob("/kaggle/input/cat-and-dog/training_set/training_set/dogs/*.jpg")

print(len(training_cats), len(training_dogs))
4000 4005
## images in the test_set directory
test_cats = glob.glob("/kaggle/input/cat-and-dog/test_set/test_set/cats/*.jpg")
test_dogs = glob.glob("/kaggle/input/cat-and-dog/test_set/test_set/dogs/*.jpg")

print(len(test_cats), len(test_dogs))
1011 1012
test_cats[:3]
['/kaggle/input/cat-and-dog/test_set/test_set/cats/cat.4414.jpg',
 '/kaggle/input/cat-and-dog/test_set/test_set/cats/cat.4420.jpg',
 '/kaggle/input/cat-and-dog/test_set/test_set/cats/cat.4880.jpg']

Visualize Data

figure, axes = plt.subplots(figsize=(22, 6), nrows=1, ncols=4)
dog_images = training_dogs[:4]
for i in range(4):
    # OpenCV reads images as BGR; convert to RGB for matplotlib
    image = cv2.cvtColor(cv2.imread(dog_images[i]), cv2.COLOR_BGR2RGB)
    axes[i].imshow(image)

figure, axes = plt.subplots(figsize=(22, 6), nrows=1, ncols=4)
cat_images = training_cats[:4]
for i in range(4):
    image = cv2.cvtColor(cv2.imread(cat_images[i]), cv2.COLOR_BGR2RGB)
    axes[i].imshow(image)

Preprocess Data (from DataFrame)

pd.set_option("display.max_colwidth", 200)
train_paths = training_cats + training_dogs
train_labels = ["CAT" for _ in range(len(training_cats))] + ["DOG" for _ in range(len(training_dogs))]
train_df = pd.DataFrame({"path":train_paths, "label":train_labels})

test_paths = test_cats + test_dogs
test_labels = ["CAT" for _ in range(len(test_cats))] + ["DOG" for _ in range(len(test_dogs))]
test_df = pd.DataFrame({"path":test_paths, "label":test_labels})

print(train_df["label"].value_counts())
print(test_df["label"].value_counts())
DOG    4005
CAT    4000
Name: label, dtype: int64
DOG    1012
CAT    1011
Name: label, dtype: int64
train_df, valid_df = train_test_split(train_df, test_size=0.2, stratify=train_df["label"])

print(train_df["label"].value_counts())
print(valid_df["label"].value_counts())
DOG    3204
CAT    3200
Name: label, dtype: int64
DOG    801
CAT    800
Name: label, dtype: int64
IMAGE_SIZE = 224
BATCH_SIZE = 64
train_df.shape
(6404, 2)
train_generator = ImageDataGenerator(horizontal_flip=True, rescale=1/255.0)
train_generator_iterator = train_generator.flow_from_dataframe(dataframe=train_df,
                                                               x_col="path",
                                                               y_col="label",
                                                               target_size=(IMAGE_SIZE, IMAGE_SIZE),
                                                               batch_size=BATCH_SIZE,
                                                               class_mode="binary")
Found 6404 validated image filenames belonging to 2 classes.
valid_generator = ImageDataGenerator(rescale=1/255.0)
valid_generator_iterator = valid_generator.flow_from_dataframe(dataframe=valid_df,
                                                               x_col="path",
                                                               y_col="label",
                                                               target_size=(IMAGE_SIZE, IMAGE_SIZE),
                                                               batch_size=BATCH_SIZE,
                                                               class_mode="binary")
Found 1601 validated image filenames belonging to 2 classes.
test_generator = ImageDataGenerator(rescale=1/255.0)
test_generator_iterator = test_generator.flow_from_dataframe(dataframe=test_df,
                                                             x_col="path",
                                                             y_col="label",
                                                             target_size=(IMAGE_SIZE, IMAGE_SIZE),
                                                             batch_size=BATCH_SIZE,
                                                             class_mode="binary")
Found 2023 validated image filenames belonging to 2 classes.
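
flow_from_dataframe infers the label encoding from the strings in y_col, assigning indices in alphabetical order, so "CAT" maps to 0 and "DOG" to 1 here. A quick sanity check that all three iterators agree on the mapping:

print(train_generator_iterator.class_indices)   # expected: {'CAT': 0, 'DOG': 1}
# All iterators must share the same mapping, or evaluation would be silently wrong
assert (train_generator_iterator.class_indices
        == valid_generator_iterator.class_indices
        == test_generator_iterator.class_indices)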
  • Fetch a sample batch from the training iterator
image_array, label_array = next(train_generator_iterator)
print(image_array.shape, label_array.shape)
(64, 224, 224, 3) (64,)
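
Because the generator already rescales pixels to [0, 1], the batch arrays can be shown directly with imshow. A minimal sketch that displays the first four images of the fetched batch with their decoded labels (reusing the class_indices mapping checked above):

index_to_class = {v: k for k, v in train_generator_iterator.class_indices.items()}
figure, axes = plt.subplots(figsize=(22, 6), nrows=1, ncols=4)
for i in range(4):
    axes[i].imshow(image_array[i])                           # already rescaled to [0, 1]
    axes[i].set_title(index_to_class[int(label_array[i])])   # 0.0/1.0 -> "CAT"/"DOG"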

Create Model

def build_extended_gap_model():
    tf.keras.backend.clear_session()

    input_tensor = Input(shape=(224, 224, 3))

    # Block 1: 32 -> 64 filters, downsample 224 -> 112
    x = Conv2D(filters=32, kernel_size=(3, 3), strides=1, padding="same")(input_tensor)
    x = BatchNormalization()(x)
    x = Activation("relu")(x)
    x = Conv2D(filters=64, kernel_size=(3, 3), strides=1, padding="same")(x)
    x = BatchNormalization()(x)
    x = Activation("relu")(x)
    x = MaxPooling2D(pool_size=(2, 2))(x)

    # Block 2: two 128-filter convolutions, downsample 112 -> 56
    x = Conv2D(filters=128, kernel_size=(3, 3), strides=1, padding="same")(x)
    x = BatchNormalization()(x)
    x = Activation("relu")(x)
    x = Conv2D(filters=128, kernel_size=(3, 3), strides=1, padding="same")(x)
    x = BatchNormalization()(x)
    x = Activation("relu")(x)
    x = MaxPooling2D(pool_size=(2, 2))(x)

    # Block 3: two 256-filter convolutions (pooling intentionally skipped here)
    x = Conv2D(filters=256, kernel_size=(3, 3), strides=1, padding="same")(x)
    x = BatchNormalization()(x)
    x = Activation("relu")(x)
    x = Conv2D(filters=256, kernel_size=(3, 3), strides=1, padding="same")(x)
    x = BatchNormalization()(x)
    x = Activation("relu")(x)
    # x = MaxPooling2D(pool_size=(2, 2))(x)

    # Block 4: 512 filters, downsample 56 -> 28
    x = Conv2D(filters=512, kernel_size=(3, 3), strides=1, padding="same")(x)
    x = BatchNormalization()(x)
    x = Activation("relu")(x)
    x = MaxPooling2D(pool_size=(2, 2))(x)

    # Global average pooling keeps the classifier head small (cf. the commented-out Flatten)
    # x = Flatten()(x)
    x = GlobalAveragePooling2D()(x)
    x = Dropout(rate=0.5)(x)
    x = Dense(300, activation="relu")(x)
    x = Dropout(rate=0.3)(x)
    x = Dense(100, activation="relu")(x)
    x = Dropout(rate=0.3)(x)
    output = Dense(1, activation="sigmoid")(x)

    model = Model(inputs=input_tensor, outputs=output)
    return model

model = build_extended_gap_model()
model.summary()

Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         [(None, 224, 224, 3)]     0         
_________________________________________________________________
conv2d (Conv2D)              (None, 224, 224, 32)      896       
_________________________________________________________________
batch_normalization (BatchNo (None, 224, 224, 32)      128       
_________________________________________________________________
activation (Activation)      (None, 224, 224, 32)      0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 224, 224, 64)      18496     
_________________________________________________________________
batch_normalization_1 (Batch (None, 224, 224, 64)      256       
_________________________________________________________________
activation_1 (Activation)    (None, 224, 224, 64)      0         
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 112, 112, 64)      0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 112, 112, 128)     73856     
_________________________________________________________________
batch_normalization_2 (Batch (None, 112, 112, 128)     512       
_________________________________________________________________
activation_2 (Activation)    (None, 112, 112, 128)     0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 112, 112, 128)     147584    
_________________________________________________________________
batch_normalization_3 (Batch (None, 112, 112, 128)     512       
_________________________________________________________________
activation_3 (Activation)    (None, 112, 112, 128)     0         
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 56, 56, 128)       0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 56, 56, 256)       295168    
_________________________________________________________________
batch_normalization_4 (Batch (None, 56, 56, 256)       1024      
_________________________________________________________________
activation_4 (Activation)    (None, 56, 56, 256)       0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 56, 56, 256)       590080    
_________________________________________________________________
batch_normalization_5 (Batch (None, 56, 56, 256)       1024      
_________________________________________________________________
activation_5 (Activation)    (None, 56, 56, 256)       0         
_________________________________________________________________
conv2d_6 (Conv2D)            (None, 56, 56, 512)       1180160   
_________________________________________________________________
batch_normalization_6 (Batch (None, 56, 56, 512)       2048      
_________________________________________________________________
activation_6 (Activation)    (None, 56, 56, 512)       0         
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 28, 28, 512)       0         
_________________________________________________________________
global_average_pooling2d (Gl (None, 512)               0         
_________________________________________________________________
dropout (Dropout)            (None, 512)               0         
_________________________________________________________________
dense (Dense)                (None, 300)               153900    
_________________________________________________________________
dropout_1 (Dropout)          (None, 300)               0         
_________________________________________________________________
dense_1 (Dense)              (None, 100)               30100     
_________________________________________________________________
dropout_2 (Dropout)          (None, 100)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 101       
=================================================================
Total params: 2,495,845
Trainable params: 2,493,093
Non-trainable params: 2,752
_________________________________________________________________
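
The commented-out Flatten() in the model code hints at why GlobalAveragePooling2D was chosen: it collapses the final (28, 28, 512) feature map to a 512-vector, so Dense(300) costs only 512 * 300 + 300 = 153,900 parameters, exactly the figure in the summary above. Flattening the same feature map would feed 28 * 28 * 512 = 401,408 values into that layer instead. A quick check of the arithmetic:

# Dense(300) parameter count after GlobalAveragePooling2D vs. after Flatten
gap_inputs = 512                     # GAP output: one value per channel
flatten_inputs = 28 * 28 * 512       # Flatten output: every spatial position x channel
print(gap_inputs * 300 + 300)        # 153,900 (matches the summary)
print(flatten_inputs * 300 + 300)    # 120,422,700 -- roughly 780x more parameters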

Compile and Train Model

checkpoint_cb = ModelCheckpoint("my_keras_model.h5", save_best_only=True, verbose=1)
early_stopping_cb = EarlyStopping(patience=12, restore_best_weights=True)
reducelr_cb = ReduceLROnPlateau(monitor="val_loss", factor=0.2, patience=5, mode="min", verbose=1)
model.compile(optimizer=Adam(), loss="binary_crossentropy", metrics=["accuracy"])
history = model.fit(train_generator_iterator, epochs=40, validation_data=valid_generator_iterator,
                    callbacks=[checkpoint_cb, early_stopping_cb, reducelr_cb])
Epoch 1/40
101/101 [==============================] - 80s 691ms/step - loss: 0.7415 - accuracy: 0.5474 - val_loss: 0.7738 - val_accuracy: 0.4997

Epoch 00001: val_loss improved from inf to 0.77377, saving model to my_keras_model.h5
Epoch 2/40
101/101 [==============================] - 49s 485ms/step - loss: 0.6639 - accuracy: 0.6101 - val_loss: 0.9152 - val_accuracy: 0.4997

Epoch 00002: val_loss did not improve from 0.77377
Epoch 3/40
101/101 [==============================] - 49s 483ms/step - loss: 0.6398 - accuracy: 0.6218 - val_loss: 0.7118 - val_accuracy: 0.5284

Epoch 00003: val_loss improved from 0.77377 to 0.71180, saving model to my_keras_model.h5
Epoch 4/40
101/101 [==============================] - 49s 481ms/step - loss: 0.6247 - accuracy: 0.6402 - val_loss: 0.6454 - val_accuracy: 0.5884

Epoch 00004: val_loss improved from 0.71180 to 0.64535, saving model to my_keras_model.h5
Epoch 5/40
101/101 [==============================] - 48s 476ms/step - loss: 0.6351 - accuracy: 0.6356 - val_loss: 0.7540 - val_accuracy: 0.5759

Epoch 00005: val_loss did not improve from 0.64535
Epoch 6/40
101/101 [==============================] - 48s 477ms/step - loss: 0.6081 - accuracy: 0.6623 - val_loss: 0.9634 - val_accuracy: 0.5222

Epoch 00006: val_loss did not improve from 0.64535
Epoch 7/40
101/101 [==============================] - 49s 481ms/step - loss: 0.5883 - accuracy: 0.6830 - val_loss: 0.6075 - val_accuracy: 0.6615

Epoch 00007: val_loss improved from 0.64535 to 0.60748, saving model to my_keras_model.h5
Epoch 8/40
101/101 [==============================] - 49s 479ms/step - loss: 0.5766 - accuracy: 0.7013 - val_loss: 0.7697 - val_accuracy: 0.5871

Epoch 00008: val_loss did not improve from 0.60748
Epoch 9/40
101/101 [==============================] - 48s 473ms/step - loss: 0.5473 - accuracy: 0.7273 - val_loss: 1.4200 - val_accuracy: 0.5303

Epoch 00009: val_loss did not improve from 0.60748
Epoch 10/40
101/101 [==============================] - 48s 473ms/step - loss: 0.5443 - accuracy: 0.7324 - val_loss: 0.5894 - val_accuracy: 0.7014

Epoch 00010: val_loss improved from 0.60748 to 0.58944, saving model to my_keras_model.h5
Epoch 11/40
101/101 [==============================] - 49s 478ms/step - loss: 0.5275 - accuracy: 0.7442 - val_loss: 0.5825 - val_accuracy: 0.6889

Epoch 00011: val_loss improved from 0.58944 to 0.58255, saving model to my_keras_model.h5
Epoch 12/40
101/101 [==============================] - 48s 478ms/step - loss: 0.5387 - accuracy: 0.7388 - val_loss: 0.5774 - val_accuracy: 0.6971

Epoch 00012: val_loss improved from 0.58255 to 0.57736, saving model to my_keras_model.h5
Epoch 13/40
101/101 [==============================] - 48s 473ms/step - loss: 0.5188 - accuracy: 0.7537 - val_loss: 0.5652 - val_accuracy: 0.6971

Epoch 00013: val_loss improved from 0.57736 to 0.56520, saving model to my_keras_model.h5
Epoch 14/40
101/101 [==============================] - 49s 482ms/step - loss: 0.5031 - accuracy: 0.7627 - val_loss: 0.9337 - val_accuracy: 0.5715

Epoch 00014: val_loss did not improve from 0.56520
Epoch 15/40
101/101 [==============================] - 48s 475ms/step - loss: 0.4828 - accuracy: 0.7682 - val_loss: 0.5075 - val_accuracy: 0.7626

Epoch 00015: val_loss improved from 0.56520 to 0.50749, saving model to my_keras_model.h5
Epoch 16/40
101/101 [==============================] - 48s 478ms/step - loss: 0.4820 - accuracy: 0.7736 - val_loss: 0.8924 - val_accuracy: 0.5640

Epoch 00016: val_loss did not improve from 0.50749
Epoch 17/40
101/101 [==============================] - 48s 476ms/step - loss: 0.4713 - accuracy: 0.7893 - val_loss: 0.6860 - val_accuracy: 0.6883

Epoch 00017: val_loss did not improve from 0.50749
Epoch 18/40
101/101 [==============================] - 48s 473ms/step - loss: 0.4511 - accuracy: 0.7931 - val_loss: 0.5290 - val_accuracy: 0.7495

Epoch 00018: val_loss did not improve from 0.50749
Epoch 19/40
101/101 [==============================] - 48s 478ms/step - loss: 0.4556 - accuracy: 0.7915 - val_loss: 0.8103 - val_accuracy: 0.6152

Epoch 00019: val_loss did not improve from 0.50749
Epoch 20/40
101/101 [==============================] - 48s 471ms/step - loss: 0.4438 - accuracy: 0.7935 - val_loss: 0.7559 - val_accuracy: 0.6752

Epoch 00020: val_loss did not improve from 0.50749

Epoch 00020: ReduceLROnPlateau reducing learning rate to 0.00020000000949949026.
Epoch 21/40
101/101 [==============================] - 49s 482ms/step - loss: 0.4092 - accuracy: 0.8223 - val_loss: 0.5091 - val_accuracy: 0.7739

Epoch 00021: val_loss did not improve from 0.50749
Epoch 22/40
101/101 [==============================] - 49s 482ms/step - loss: 0.3618 - accuracy: 0.8455 - val_loss: 0.4146 - val_accuracy: 0.8164

Epoch 00022: val_loss improved from 0.50749 to 0.41457, saving model to my_keras_model.h5
Epoch 23/40
101/101 [==============================] - 48s 475ms/step - loss: 0.3419 - accuracy: 0.8613 - val_loss: 0.6720 - val_accuracy: 0.7327

Epoch 00023: val_loss did not improve from 0.41457
Epoch 24/40
101/101 [==============================] - 49s 481ms/step - loss: 0.3261 - accuracy: 0.8643 - val_loss: 0.5263 - val_accuracy: 0.7726

Epoch 00024: val_loss did not improve from 0.41457
Epoch 25/40
101/101 [==============================] - 48s 477ms/step - loss: 0.3342 - accuracy: 0.8592 - val_loss: 0.3470 - val_accuracy: 0.8476

Epoch 00025: val_loss improved from 0.41457 to 0.34702, saving model to my_keras_model.h5
Epoch 26/40
101/101 [==============================] - 49s 484ms/step - loss: 0.3305 - accuracy: 0.8572 - val_loss: 0.4356 - val_accuracy: 0.7983

Epoch 00026: val_loss did not improve from 0.34702
Epoch 27/40
101/101 [==============================] - 49s 479ms/step - loss: 0.3206 - accuracy: 0.8621 - val_loss: 0.4012 - val_accuracy: 0.8139

Epoch 00027: val_loss did not improve from 0.34702
Epoch 28/40
101/101 [==============================] - 49s 486ms/step - loss: 0.3157 - accuracy: 0.8641 - val_loss: 0.3958 - val_accuracy: 0.8376

Epoch 00028: val_loss did not improve from 0.34702
Epoch 29/40
101/101 [==============================] - 49s 479ms/step - loss: 0.3054 - accuracy: 0.8751 - val_loss: 0.4213 - val_accuracy: 0.8020

Epoch 00029: val_loss did not improve from 0.34702
Epoch 30/40
101/101 [==============================] - 49s 480ms/step - loss: 0.2910 - accuracy: 0.8780 - val_loss: 0.3995 - val_accuracy: 0.8438

Epoch 00030: val_loss did not improve from 0.34702

Epoch 00030: ReduceLROnPlateau reducing learning rate to 4.0000001899898055e-05.
Epoch 31/40
101/101 [==============================] - 49s 479ms/step - loss: 0.2703 - accuracy: 0.8956 - val_loss: 0.3370 - val_accuracy: 0.8613

Epoch 00031: val_loss improved from 0.34702 to 0.33704, saving model to my_keras_model.h5
Epoch 32/40
101/101 [==============================] - 49s 479ms/step - loss: 0.2845 - accuracy: 0.8809 - val_loss: 0.3271 - val_accuracy: 0.8676

Epoch 00032: val_loss improved from 0.33704 to 0.32710, saving model to my_keras_model.h5
Epoch 33/40
101/101 [==============================] - 49s 481ms/step - loss: 0.2762 - accuracy: 0.8799 - val_loss: 0.3132 - val_accuracy: 0.8688

Epoch 00033: val_loss improved from 0.32710 to 0.31319, saving model to my_keras_model.h5
Epoch 34/40
101/101 [==============================] - 48s 476ms/step - loss: 0.2705 - accuracy: 0.8882 - val_loss: 0.3170 - val_accuracy: 0.8682

Epoch 00034: val_loss did not improve from 0.31319
Epoch 35/40
101/101 [==============================] - 49s 480ms/step - loss: 0.2598 - accuracy: 0.8908 - val_loss: 0.3283 - val_accuracy: 0.8557

Epoch 00035: val_loss did not improve from 0.31319
Epoch 36/40
101/101 [==============================] - 49s 482ms/step - loss: 0.2525 - accuracy: 0.8957 - val_loss: 0.3404 - val_accuracy: 0.8601

Epoch 00036: val_loss did not improve from 0.31319
Epoch 37/40
101/101 [==============================] - 49s 480ms/step - loss: 0.2531 - accuracy: 0.8970 - val_loss: 0.3170 - val_accuracy: 0.8738

Epoch 00037: val_loss did not improve from 0.31319
Epoch 38/40
101/101 [==============================] - 48s 484ms/step - loss: 0.2745 - accuracy: 0.8825 - val_loss: 0.3406 - val_accuracy: 0.8620

Epoch 00038: val_loss did not improve from 0.31319

Epoch 00038: ReduceLROnPlateau reducing learning rate to 8.000000525498762e-06.
Epoch 39/40
101/101 [==============================] - 49s 479ms/step - loss: 0.2714 - accuracy: 0.8850 - val_loss: 0.3135 - val_accuracy: 0.8663

Epoch 00039: val_loss did not improve from 0.31319
Epoch 40/40
101/101 [==============================] - 48s 477ms/step - loss: 0.2628 - accuracy: 0.8919 - val_loss: 0.3131 - val_accuracy: 0.8726

Epoch 00040: val_loss improved from 0.31319 to 0.31306, saving model to my_keras_model.h5
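
Before evaluating, the learning curves stored in the History object returned by fit() are worth a look; a minimal sketch plotting loss and accuracy for the training and validation sets:

# Training vs. validation curves from the History object
figure, axes = plt.subplots(figsize=(16, 5), nrows=1, ncols=2)
axes[0].plot(history.history["loss"], label="train loss")
axes[0].plot(history.history["val_loss"], label="valid loss")
axes[0].set_xlabel("epoch")
axes[0].legend()
axes[1].plot(history.history["accuracy"], label="train accuracy")
axes[1].plot(history.history["val_accuracy"], label="valid accuracy")
axes[1].set_xlabel("epoch")
axes[1].legend()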

Evaluate

model.evaluate(test_generator_iterator)
32/32 [==============================] - 16s 509ms/step - loss: 0.2941 - accuracy: 0.8828
[0.29405850172042847, 0.882847249507904]
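
evaluate() only reports the averaged loss and accuracy. For per-class precision and recall, the model can be run over the test set with predict(); note that the iterator must be rebuilt with shuffle=False (flow_from_dataframe shuffles by default) so predictions line up with its .classes attribute. A minimal sketch using scikit-learn's classification_report:

from sklearn.metrics import classification_report

# Rebuild the test iterator without shuffling so predictions align with labels
ordered_test_iterator = test_generator.flow_from_dataframe(dataframe=test_df,
                                                           x_col="path", y_col="label",
                                                           target_size=(IMAGE_SIZE, IMAGE_SIZE),
                                                           batch_size=BATCH_SIZE,
                                                           class_mode="binary", shuffle=False)
probabilities = model.predict(ordered_test_iterator)
predictions = (probabilities.flatten() > 0.5).astype(int)   # sigmoid output -> 0/1
print(classification_report(ordered_test_iterator.classes, predictions,
                            target_names=list(ordered_test_iterator.class_indices.keys())))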
