【SIGNATE】 Image Labeling (10 Classes) with Transfer Learning
Overall structure
① Data preparation
② Image data augmentation
③ Transfer learning

Environment
Python 3.6.5
tensorflow 2.3.1
About 【SIGNATE】 Image Labeling (10 Classes)
The task is to build a model that assigns one of 10 possible labels to each image.
Number of training samples: 5000
Competition page: https://signate.jp/competitions/133
Transfer learning
For the preceding steps, see the earlier articles in this series:
① Data preparation
② Image data augmentation (inflating the dataset)
Importing libraries

```python
import numpy as np
import matplotlib.pyplot as plt

# tensorflow
from tensorflow import keras
from tensorflow.keras import regularizers
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Activation
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import MaxPooling2D
from tensorflow.keras.layers import Dropout

# Pretrained model (VGG16)
from tensorflow.keras.applications.vgg16 import VGG16
```
Transfer learning
Here we use VGG16, one of the best-known deep-learning models for image tasks.
```python
# Load the pretrained model (VGG16)
base_model = VGG16(weights='imagenet',       # use the ImageNet-pretrained weights
                   include_top=False,        # drop the fully connected classifier head
                   input_shape=(96, 96, 3))  # input shape

base_model.summary()
```
```text
Model: "vgg16"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_2 (InputLayer)         [(None, 96, 96, 3)]       0
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 96, 96, 64)        1792
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 96, 96, 64)        36928
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 48, 48, 64)        0
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 48, 48, 128)       73856
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 48, 48, 128)       147584
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 24, 24, 128)       0
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 24, 24, 256)       295168
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 24, 24, 256)       590080
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 24, 24, 256)       590080
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 12, 12, 256)       0
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 12, 12, 512)       1180160
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 12, 12, 512)       2359808
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 12, 12, 512)       2359808
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 6, 6, 512)         0
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 6, 6, 512)         2359808
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 6, 6, 512)         2359808
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 6, 6, 512)         2359808
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 3, 3, 512)         0
=================================================================
Total params: 14,714,688
Trainable params: 14,714,688
Non-trainable params: 0
_________________________________________________________________
```
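The parameter counts in this summary can be sanity-checked by hand: a Conv2D layer with a k×k kernel has (k·k·c_in + 1)·c_out weights, the +1 being the per-filter bias. A minimal sketch of that arithmetic (all VGG16 kernels are 3×3):

```python
def conv2d_params(kernel, c_in, c_out):
    """Weights in a Conv2D layer: one kernel*kernel*c_in filter plus a bias per output channel."""
    return (kernel * kernel * c_in + 1) * c_out

# Layers taken from the summary above
print(conv2d_params(3, 3, 64))     # block1_conv1 -> 1792
print(conv2d_params(3, 64, 64))    # block1_conv2 -> 36928
print(conv2d_params(3, 512, 512))  # block4_conv2 and the block5 convs -> 2359808
```

The same formula reproduces every Conv2D row in the table; the pooling layers have no weights, which is why their Param # is 0.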
Fine-tuning
Only the layers close to the output are retrained.

```python
# Freeze every layer before block5 (the last convolutional block)
for layer in base_model.layers[:15]:
    layer.trainable = False

# Check which layers are frozen (trainable == False)
for layer in base_model.layers:
    print(layer, layer.trainable)
```
```text
<tensorflow.python.keras.engine.input_layer.InputLayer object at 0x7fd13a393860> False
<tensorflow.python.keras.layers.convolutional.Conv2D object at 0x7fd13a3d4668> False
<tensorflow.python.keras.layers.convolutional.Conv2D object at 0x7fd138ae0c88> False
<tensorflow.python.keras.layers.pooling.MaxPooling2D object at 0x7fd138a305f8> False
<tensorflow.python.keras.layers.convolutional.Conv2D object at 0x7fd138a68cf8> False
<tensorflow.python.keras.layers.convolutional.Conv2D object at 0x7fd138ae0748> False
<tensorflow.python.keras.layers.pooling.MaxPooling2D object at 0x7fd1389f3ef0> False
<tensorflow.python.keras.layers.convolutional.Conv2D object at 0x7fd138a68dd8> False
<tensorflow.python.keras.layers.convolutional.Conv2D object at 0x7fd1389f3080> False
<tensorflow.python.keras.layers.convolutional.Conv2D object at 0x7fd1389ff4e0> False
<tensorflow.python.keras.layers.pooling.MaxPooling2D object at 0x7fd138a097f0> False
<tensorflow.python.keras.layers.convolutional.Conv2D object at 0x7fd1389ffba8> False
<tensorflow.python.keras.layers.convolutional.Conv2D object at 0x7fd1389ff7f0> False
<tensorflow.python.keras.layers.convolutional.Conv2D object at 0x7fd138a11a58> False
<tensorflow.python.keras.layers.pooling.MaxPooling2D object at 0x7fd138a1cd68> False
<tensorflow.python.keras.layers.convolutional.Conv2D object at 0x7fd138a11c18> True
<tensorflow.python.keras.layers.convolutional.Conv2D object at 0x7fd138a11d68> True
<tensorflow.python.keras.layers.convolutional.Conv2D object at 0x7fd1389aa048> True
<tensorflow.python.keras.layers.pooling.MaxPooling2D object at 0x7fd1389b52e8> True
```
Only the layers marked "True" will be retrained.
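Freezing by a hard-coded index (`layers[:15]`) is brittle if the base model ever changes. A more robust pattern flips trainability when the first block5 layer is reached; sketched here over the layer names from the summary above rather than a live model, to show just the selection logic:

```python
# Layer names as printed by base_model.summary() above
vgg16_layer_names = [
    "input_2",
    "block1_conv1", "block1_conv2", "block1_pool",
    "block2_conv1", "block2_conv2", "block2_pool",
    "block3_conv1", "block3_conv2", "block3_conv3", "block3_pool",
    "block4_conv1", "block4_conv2", "block4_conv3", "block4_pool",
    "block5_conv1", "block5_conv2", "block5_conv3", "block5_pool",
]

trainable = {}
flag = False
for name in vgg16_layer_names:
    if name == "block5_conv1":  # unfreeze from the last conv block onward
        flag = True
    trainable[name] = flag

print([n for n, t in trainable.items() if t])
# On a real model the same loop would set layer.trainable = trainable[layer.name]
```

This selects exactly the four block5 layers, matching what `layers[:15]` freezes (indices 0–14 end at block4_pool).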
Adding an output head to the pretrained model completes it.
```python
# Add a fully connected classifier on top of VGG16
model = keras.Sequential()
model.add(base_model)

# Classifier head
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))

model.summary()
```
```text
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
vgg16 (Functional)           (None, 3, 3, 512)         14714688
_________________________________________________________________
flatten_1 (Flatten)          (None, 4608)              0
_________________________________________________________________
dense_2 (Dense)              (None, 256)               1179904
_________________________________________________________________
dropout_1 (Dropout)          (None, 256)               0
_________________________________________________________________
dense_3 (Dense)              (None, 10)                2570
=================================================================
Total params: 15,897,162
Trainable params: 8,261,898
Non-trainable params: 7,635,264
_________________________________________________________________
```
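The trainable/non-trainable split in this summary can also be verified by hand: only block5 of VGG16 (three conv layers of 2,359,808 parameters each) plus the new head are trainable.

```python
# Parameter counts taken from the two summaries above
block5_convs = 3 * 2359808            # the three unfrozen VGG16 conv layers
dense_256 = 3 * 3 * 512 * 256 + 256   # Flatten (3*3*512 = 4608) -> Dense(256), weights + biases
dense_10 = 256 * 10 + 10              # Dense(256) -> Dense(10)

trainable_params = block5_convs + dense_256 + dense_10
non_trainable_params = 14714688 - block5_convs  # frozen part of VGG16

print(trainable_params)      # 8261898
print(non_trainable_params)  # 7635264
```

Both numbers match the summary, confirming that exactly the intended layers were frozen.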
모델 학습
python.pymodel.compile(loss='categorical_crossentropy',
optimizer= keras.optimizers.SGD(lr=1e-4, momentum=0.9),
metrics=['accuracy'])
%%time
# 学習の実施
log = model.fit_generator(train_data_gen,
steps_per_epoch = 87,
epochs = 100,
validation_data = valid_data_gen,
validation_steps = 33)
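`steps_per_epoch` and `validation_steps` should let each generator be consumed exactly once per epoch, i.e. ceil(n_samples / batch_size). A small sketch of that arithmetic (the batch size here is a made-up illustration; the real one is set in the data-augmentation step):

```python
import math

def steps_for(n_samples, batch_size):
    """One epoch should see every sample once: ceil(n / batch)."""
    return math.ceil(n_samples / batch_size)

# Hypothetical numbers, for illustration only
print(steps_for(5000, 32))  # 157
print(steps_for(100, 10))   # 10
```

Setting these too low silently skips data each epoch; too high and the generator wraps around, counting some samples twice.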
Checking the results

```python
acc = log.history['accuracy']
val_acc = log.history['val_accuracy']
loss = log.history['loss']
val_loss = log.history['val_loss']

epochs_range = range(len(acc))  # 100 epochs

plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
```
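With 100 epochs and no early stopping, it is worth checking where validation accuracy actually peaked rather than just taking the final weights. A sketch on a hypothetical slice of the history (illustrative values, not the real run):

```python
# Hypothetical excerpt of log.history['val_accuracy']
val_acc = [0.42, 0.55, 0.63, 0.71, 0.78, 0.80, 0.79, 0.80, 0.795, 0.79]

# Index of the first epoch that reached the best validation accuracy
best_epoch = max(range(len(val_acc)), key=val_acc.__getitem__)
print(best_epoch + 1, val_acc[best_epoch])  # epochs are 1-indexed when reported
```

In practice, `keras.callbacks.EarlyStopping(monitor='val_accuracy', restore_best_weights=True)` passed to `model.fit` automates this and stops training once validation accuracy stops improving.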
The results look like this.
Submitting the predictions gave:
Provisional score: 0.8005000
Rank: 54th
Well, about what you would expect.
Conclusion
There is still room for improvement:
・Try a different pretrained model
・Tune the hyperparameters
・Add more training data
How far would that get? If you have any advice, please let me know!
Reference
Original article: 【SIGNATE】 Image Labeling (10 Classes) with Transfer Learning
https://qiita.com/hara_tatsu/items/bc93fb61b7ccbc639eed
Feel free to share or copy the text, but please keep this URL as the reference.
(Collection and Share based on the CC Protocol.)