In this portfolio project, I use a dataset containing chest x-rays from 3 classes: patients with COVID-19, patients with viral pneumonia, and patients with no illness. The goal is to compare a range of deep learning architectures to see which achieves the strongest predictive performance. For each model I report accuracy, a confusion matrix, precision, recall, and F1 scores on the test data, and I also plot the ROC curve and calculate the area under it on the test data.
I specifically compare the metrics of 8 non-augmented models to counterpart models that differ only in that they train on augmented data via Keras's ImageDataGenerator and flow_from_directory. Although numerous image data augmentation methods are available, for these augmented models I restrict my analysis to randomly varying levels of brightness and horizontal flips.
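A minimal sketch of what that augmentation setup could look like is below; the brightness_range bounds, batch size, and directory arguments are illustrative assumptions rather than my exact settings.
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# Hedged sketch: random brightness variation plus horizontal flips, as described above.
aug_datagen = ImageDataGenerator(
    rescale=1.0 / 255,            # min-max scale pixel values, matching the preprocessing used below
    brightness_range=[0.7, 1.3],  # randomly vary brightness (assumed bounds)
    horizontal_flip=True,         # randomly flip images left/right
)
# flow_from_directory expects one subfolder per class, e.g. images/COVID, images/NORMAL, ...
train_gen = aug_datagen.flow_from_directory(
    'images', target_size=(192, 192), batch_size=32, class_mode='categorical', seed=42)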
I run up to 40 epochs for each model, with EarlyStopping halting training once validation loss has not improved for 15 epochs. I also use ModelCheckpoint for each model, saving the version of the model that minimizes validation loss. I use ReduceLROnPlateau as well, though the patience and factor values vary by model.
Please note that though I could simply have used OpenCV to process the x-rays in grayscale, for this project I use RGB for illustration purposes.
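For reference, the grayscale alternative would be a one-line change in OpenCV (a sketch only; the file path is just one of the dataset files used later):
import cv2
# Load a single-channel grayscale image instead of RGB.
img_gray = cv2.imread('images/COVID/COVID (235).png', cv2.IMREAD_GRAYSCALE)  # shape: (height, width)
# A grayscale pipeline would then use input_shape=(192, 192, 1) in the first Conv2D layer.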
The 8 models I compare consist of the following architectures:
1) 10 convolutional layers, each with a filter size of 3x3. The first 8 layers have 32 filters, 9th layer has 16 filters, and 10th layer has 8 filters. Activations for each layer are ReLU. Batch Normalization occurs in between layers. Batch size of 60. Max Pooling with a pooling size of 2 occurs after the 2nd, 4th, 6th, and 8th layers. ReduceLROnPlateau has patience of 2 epochs and factor of 0.35.
2) 10 convolutional layers, each with a filter size of 3x3. The first 8 layers have 64 filters, 9th layer has 32 filters, and 10th layer has 16 filters. Activations for each layer are ReLU. No Batch Normalization occurs. Batch size of 32. Max Pooling with a pooling size of 2 occurs after the 2nd, 4th, 6th, and 8th layers. ReduceLROnPlateau has patience of 2 epochs and factor of 0.15.
3) 10 convolutional layers, each with a filter size of 3x3. The first 4 layers have 32 filters, the 5th through 8th layers have 64 filters, the 9th layer has 32 filters, and the 10th layer has 16 filters. Activations for each layer are ReLU. No Batch Normalization occurs. Batch size of 32. Max Pooling with a pooling size of 2 occurs after the 2nd, 4th, 6th, and 8th layers. ReduceLROnPlateau has patience of 2 epochs and factor of 0.05.
4) 9 convolutional layers, each with a filter size of 3x3. The first set of 3 layers have 32 filters, the second set of 3 layers have 64 filters, and the third set of 3 layers have 128 filters. Activations for each layer are ReLU. No Batch Normalization occurs. Batch size of 32. Max Pooling with a pooling size of 2 occurs after the 3rd, 6th, and 9th layers. ReduceLROnPlateau has patience of 2 epochs and factor of 0.05.
5) Modified version of SqueezeNet. It has 11 layers. The 2nd through 10th layers are fire modules with ReLU activations, in which the first two layers of each fire module have a filter size of 1 and the third layer has a filter size of 3 (a sketch of such a fire module appears after this list). For these 9 fire modules, the (squeeze, expand) filter counts are as follows:
1st: 8, 16
2nd: 16, 32
3rd: 16, 32
4th: 32, 64
5th: 32, 64
6th: 64, 128
7th: 64, 128
8th: 128, 256
9th: 128, 256
The first convolutional layer in the model has a filter size of 3, 32 filters, “same” padding, and a ReLU activation. The 11th convolutional layer has a filter size of 3, 64 filters, “same” padding, and a ReLU activation. Batch Normalization occurs in between the 2nd to 11th layers (including in between fire modules), and after the 11th layer. Batch size of 32. Max pooling with a pooling size of 2 occurs after the 2nd, 5th, 9th, and 10th layers, and Global Average Pooling occurs after the last convolutional layer. ReduceLROnPlateau has patience of 2 epochs and factor of 0.05.
6) Another modified version of SqueezeNet. It has 7 layers. The 2nd through last layers are fire modules with ReLU activations. Inside each fire module there are 6 layers: the 1st and 2nd have a filter size of 1, the 3rd and 4th a filter size of 16, and the 5th and 6th a filter size of 32, all with “same” padding. For all 6 fire modules, the (squeeze, expand) filter counts are (24, 48).
7) A model which uses the InceptionV3 transfer learning model and ImageNet weights. All layers after the 30th layer are frozen, and the first 30 layers are fine-tuned on the x-ray images. The final InceptionV3 convolutional layer connects to a Global Average Pooling layer, which feeds two consecutive fully connected (dense) layers of 128 neurons each with ReLU activation, before a final dense output layer with softmax activation (see the sketch after this list).
8) A model identical to model 7, except that the two dense layers have 256 neurons each.
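To make the fire-module descriptions above concrete, here is a minimal sketch of one way such a module could be written with the Keras functional API. This is my illustration of the standard SqueezeNet squeeze/expand pattern under the layer sizes listed above, not the project's exact code.
from tensorflow.keras import layers
def fire_module(x, squeeze_filters, expand_filters):
    # Squeeze: a 1x1 convolution that reduces the channel count.
    s = layers.Conv2D(squeeze_filters, kernel_size=1, padding='same', activation='relu')(x)
    # Expand: parallel 1x1 and 3x3 convolutions, concatenated along the channel axis.
    e1 = layers.Conv2D(expand_filters, kernel_size=1, padding='same', activation='relu')(s)
    e3 = layers.Conv2D(expand_filters, kernel_size=3, padding='same', activation='relu')(s)
    return layers.Concatenate()([e1, e3])
# e.g., the first fire module of model 5 would be: x = fire_module(x, squeeze_filters=8, expand_filters=16)
Similarly, here is a hedged sketch of how the heads of models 7 and 8 might be assembled; the slicing used to freeze all layers after the 30th is my assumption about how the description above translates to code.
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, Model
base = InceptionV3(weights='imagenet', include_top=False, input_shape=(192, 192, 3))
for layer in base.layers[30:]:  # freeze all layers after the 30th; the first 30 stay trainable
    layer.trainable = False
x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(128, activation='relu')(x)  # model 8 would use 256 neurons here
x = layers.Dense(128, activation='relu')(x)  # and here
outputs = layers.Dense(3, activation='softmax')(x)
transfer_model = Model(base.input, outputs)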
The dataset comprises X-ray images in three classes: the first class includes individuals who had viral pneumonia; the second includes those with no infection that would be visible on an X-ray, i.e., their X-rays are "normal"; and the third includes those who had COVID-19 at the time of the X-ray.
The "viral pneumonia" class has 1,345 images/observations. The "normal" class has 1,341 images/observations. The "covid" class has 1,201 images/observations.
A predictive model built on X-ray image data could be very useful. If patients can receive X-ray results before they receive COVID-19 test results, or if COVID-19 tests are unavailable (e.g., in developing countries where X-ray machines are readily available), patients could learn very quickly, perhaps within minutes, whether there was a very high probability that they had COVID-19. This matters because even nasal-swab COVID-19 tests are not 100% accurate.
Patients (and their loved ones and those who live with or near them) would benefit because those diagnosed with COVID-19 by X-ray would know to quarantine immediately. Doing so would likely reduce the spread of COVID-19, since those who take a standard COVID-19 test often do not receive their results the same day and likely would not quarantine in the meantime.
As a result, not only would the person receiving the X-ray and their family and contacts benefit from a highly accurate X-ray method of prediction, but so would society at large, since there would likely be less spread of the virus if results could be disseminated within minutes of an X-ray being taken.
M.E.H. Chowdhury, T. Rahman, A. Khandakar, R. Mazhar, M.A. Kadir, Z.B. Mahbub, K.R. Islam, M.S. Khan, A. Iqbal, N. Al-Emadi, M.B.I. Reaz, “Can AI help in screening Viral and COVID-19 pneumonia?” arXiv preprint, 29 March 2020, https://arxiv.org/abs/2003.13145
Augmenting the data improved ROC AUC scores (highlighted in yellow) for 4 of the 8 models. The best-performing models, highlighted in green (models 7 and 8, the InceptionV3 transfer learning models), performed better without augmentation. Both had the same ROC AUC score (0.9991), but model 7 achieved better accuracy, F1, precision, and recall scores. Further, for model 6 (my second modified version of SqueezeNet), augmentation caused catastrophic decreases in performance, whereas for model 5 (my first modified version of SqueezeNet) it improved performance.
import numpy as np
import scipy as sp
import pandas as pd
import os
!gdown --id 1xt7g5LkZuX09e1a8rK9sRXIrGFN6rjzl
!unzip COVID-19_Radiography_Database.zip
/usr/local/lib/python3.8/dist-packages/gdown/cli.py:127: FutureWarning: Option `--id` was deprecated in version 4.3.1 and will be removed in 5.0. You don't need to pass it anymore to use a file ID.
Access denied with the following error: Cannot retrieve the public link of the file. You may need to change the permission to 'Anyone with the link', or have had many accesses. You may still be able to access the file from the browser: https://drive.google.com/uc?id=1xt7g5LkZuX09e1a8rK9sRXIrGFN6rjzl
unzip: cannot find or open COVID-19_Radiography_Database.zip, COVID-19_Radiography_Database.zip.zip or COVID-19_Radiography_Database.zip.ZIP.
from google.colab import drive
drive.mount('/content/my-drive/')
Mounted at /content/my-drive/
!pwd
import os
os.chdir('/content/my-drive/MyDrive/x-ray_project/')
!pwd
/content
/content/my-drive/MyDrive/x-ray_project
!pip install scikit-learn --upgrade
import os
os.environ['TF_KERAS'] = '2.9.1'
%tensorflow_version 2.9.1
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.8/dist-packages (1.0.2)
Collecting scikit-learn
  Downloading scikit_learn-1.1.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (31.2 MB)
     |████████████████████████████████| 31.2 MB 204 kB/s
Requirement already satisfied: joblib>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from scikit-learn) (1.2.0)
Requirement already satisfied: scipy>=1.3.2 in /usr/local/lib/python3.8/dist-packages (from scikit-learn) (1.7.3)
Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.8/dist-packages (from scikit-learn) (3.1.0)
Requirement already satisfied: numpy>=1.17.3 in /usr/local/lib/python3.8/dist-packages (from scikit-learn) (1.21.6)
Installing collected packages: scikit-learn
  Attempting uninstall: scikit-learn
    Found existing installation: scikit-learn 1.0.2
    Uninstalling scikit-learn-1.0.2:
      Successfully uninstalled scikit-learn-1.0.2
Successfully installed scikit-learn-1.1.3
Colab only includes TensorFlow 2.x; %tensorflow_version has no effect.
import sys
import time
import cv2
import numpy as np
from matplotlib import pyplot as plt
import tensorflow as tf
import os
import zipfile
from skimage.transform import resize
from sklearn.model_selection import train_test_split
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Dense, Dropout, Flatten, Activation, BatchNormalization
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import Adam,SGD,Adagrad,Adadelta,RMSprop
from tensorflow.keras.applications import VGG19, ResNet50, InceptionV3
# Extracting all filenames iteratively
base_path = 'images'
categories = ['Viral Pneumonia', 'NORMAL', 'COVID']
# load file names to fnames list object
fnames = []
for category in categories:
    COVID_folder = os.path.join(base_path, category)
    file_names = os.listdir(COVID_folder)
    full_path = [os.path.join(COVID_folder, file_name) for file_name in file_names]
    fnames.append(full_path)
print('number of images for each category:', [len(f) for f in fnames])
#print(fnames[0:2]) #examples of file names
number of images for each category: [1345, 1341, 1200]
# Import image, load to array of shape height, width, channels, then min/max transform.
# Write preprocessor that will match up with model's expected input shape.
# Uses opencv for image preprocessing
def preprocessor(data, shape=(192, 192)):
    """
    Reads in an image from a filepath, resizes it to a fixed shape, and
    min/max transforms it, before converting feature values to float32.
    params:
        data
            filepath of an unprocessed image
    returns:
        X
            numpy array of preprocessed image data
    """
    import cv2
    import numpy as np
    img = cv2.imread(data)                      # Read in image from filepath.
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # cv2 reads images in blue-green-red order; reverse to RGB for ML.
    # I could have instead converted to grayscale using cv2.imread(data, cv2.IMREAD_GRAYSCALE).
    img = cv2.resize(img, shape)                # Change height and width of image.
    img = img / 255.0                           # Min-max transform.
    X = np.array(img, dtype=np.float32)
    #X = np.expand_dims(X, axis=0)  # Expand dims to [1, h, w, channels] if needed.
    return X
#Try on a single x-ray file (imports the file and preprocesses it into data with the following shape)
preprocessor('images/Viral Pneumonia/Viral Pneumonia (3).png').shape
(192, 192, 3)
#Import image files iteratively and preprocess them into array of correctly structured data
# Create list of file paths
filepaths=fnames[0]+fnames[1]+fnames[2]
# Iteratively import and preprocess data using map function
# map functions apply your preprocessor function one step at a time to each filepath
preprocessed_image_data=list(map(preprocessor, filepaths))
# The object needs to be an array rather than a list for Keras (map returns a map object, converted to a list above)
X= np.array(preprocessed_image_data) # Assigning to X to highlight that this represents feature input data for our model
# Optionally pickle the preprocessed data so the preprocessing step can be skipped on re-runs:
#import pickle
#pickle.dump(X, open("X_preprocessed_x-rays.pkl", "wb"))
#X = pickle.load(open("X_preprocessed_x-rays.pkl", "rb"))
X.shape
(3886, 192, 192, 3)
# Create y data made up of correctly ordered labels from file folders
from itertools import repeat
# Recall that we have three folders with the following number of images in each folder
#...corresponding to each x-ray class
print('number of images for each category:', [len(f) for f in fnames])
pneumonia=list(repeat("Viral Pneumonia", 1345))
normal=list(repeat("NORMAL", 1341))
covid19=list(repeat("COVID", 1200))
#combine into single list of y labels
y_labels = pneumonia+normal+covid19
#check length, same as X above
print(len(y_labels))
# Need to one hot encode for Keras. Let's use Pandas
import pandas as pd
y=pd.get_dummies(y_labels)
display(y)
number of images for each category: [1345, 1341, 1200]
3886
COVID | NORMAL | Viral Pneumonia | |
---|---|---|---|
0 | 0 | 0 | 1 |
1 | 0 | 0 | 1 |
2 | 0 | 0 | 1 |
3 | 0 | 0 | 1 |
4 | 0 | 0 | 1 |
... | ... | ... | ... |
3881 | 1 | 0 | 0 |
3882 | 1 | 0 | 0 |
3883 | 1 | 0 | 0 |
3884 | 1 | 0 | 0 |
3885 | 1 | 0 | 0 |
3886 rows × 3 columns
X[0].shape
(192, 192, 3)
Below, I visualize only x-rays that either demonstrate COVID-19 positivity (first row, columns 1 and 2) or no illness at all (second row, columns 1 and 2). It is clear that chest x-rays of individuals with COVID-19 tend to show much more inflammation than chest x-rays of individuals with no illness.
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import ImageGrid
import numpy as np
import random
im1 =preprocessor('images/COVID/COVID (235).png')
im2 =preprocessor('images/COVID/COVID (199).png')
im3 =preprocessor('images/NORMAL/NORMAL (1005).png')
im4 =preprocessor('images/NORMAL/NORMAL (1073).png')
fig = plt.figure(figsize=(4., 4.))
grid = ImageGrid(fig, 111,            # similar to subplot(111)
                 nrows_ncols=(2, 2),  # creates a 2x2 grid of axes
                 axes_pad=0.25,       # pad between axes in inches
                 )
for ax, im in zip(grid, [im1, im2, im3, im4]):
    # Iterating over the grid returns the Axes.
    ax.imshow(im)
plt.show()
# Train test split resized images
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify = y, test_size = 0.20, random_state = 42)
y_train
COVID | NORMAL | Viral Pneumonia | |
---|---|---|---|
1702 | 0 | 1 | 0 |
2394 | 0 | 1 | 0 |
364 | 0 | 0 | 1 |
1208 | 0 | 0 | 1 |
939 | 0 | 0 | 1 |
... | ... | ... | ... |
1372 | 0 | 1 | 0 |
3755 | 1 | 0 | 0 |
543 | 0 | 0 | 1 |
2447 | 0 | 1 | 0 |
3279 | 1 | 0 | 0 |
3108 rows × 3 columns
X_train.shape
(3108, 192, 192, 3)
Since I use a validation split of 0.20, a common level, the validation set is 20% of the training set. The training set created by train_test_split comprises 80% of the overall dataset, so carving out the validation set takes 0.20 × 0.80 = 16% of the overall dataset, leaving 80% − 16% = 64% for training. The train/validation/test ratio is therefore 64%/16%/20%. With 3,886 images, that works out to roughly 2,486 training, 622 validation, and 778 test images.
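A quick sanity check of those sizes (a sketch, assuming scikit-learn ceils the test count and Keras floors the training split index):
import math
n = 3886                             # total images
n_test = math.ceil(n * 0.20)         # 778 held-out test images (train_test_split ceils)
n_train_full = n - n_test            # 3108 images passed to model.fit
n_train = int(n_train_full * 0.80)   # 2486 images actually trained on (validation_split floors)
n_val = n_train_full - n_train       # 622 validation images
print(n_train, n_val, n_test)        # 2486 622 778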
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow
tensorflow.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout, BatchNormalization, Flatten
from keras.regularizers import l1
from tensorflow.keras.optimizers import SGD
from sklearn.utils import class_weight
import numpy as np
from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, EarlyStopping
from tensorflow.keras.metrics import AUC
with tf.device('/device:GPU:0'):  # "/GPU:0" is shorthand for the first GPU of your machine that is visible to TensorFlow.
    opt = Adam(learning_rate=.001)
    model = tf.keras.Sequential([
        # input: images of shape (height, width, channels) = (192, 192, 3); the 3 stands for the RGB channels
        tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu', input_shape=(192, 192, 3)),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Conv2D(kernel_size=3, filters=16, padding='same', activation='relu'),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Conv2D(kernel_size=3, filters=8, padding='same', activation='relu'),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Flatten(),
        # classifying into 3 categories
        tf.keras.layers.Dense(3, activation='softmax')
    ])
    red_lr = ReduceLROnPlateau(monitor='val_loss', patience=2, verbose=1, factor=0.35)  # factor = the factor by which the lr is reduced when val_loss fails to improve for "patience" epochs
    mc = ModelCheckpoint('best_model_1_non-aug.h5', monitor='val_loss', mode='min', verbose=1, save_best_only=True)
    es = EarlyStopping(monitor='val_loss', patience=15, verbose=0, mode='min')
    model.compile(
        optimizer=opt,  # the Adam optimizer defined above (learning_rate=0.001, the same as the "adam" default)
        loss='categorical_crossentropy',
        metrics=['accuracy', 'AUC'])
    # Fitting the CNN to the training set
    model.fit(X_train, y_train, epochs=40, verbose=1, validation_split=0.20, batch_size=60, callbacks=[mc, red_lr, es])
Epoch 1/40 42/42 [==============================] - ETA: 0s - loss: 0.4063 - accuracy: 0.8512 - auc: 0.9572 Epoch 00001: val_loss improved from inf to 1.12842, saving model to best_model_1_non-aug.h5 42/42 [==============================] - 22s 252ms/step - loss: 0.4063 - accuracy: 0.8512 - auc: 0.9572 - val_loss: 1.1284 - val_accuracy: 0.3633 - val_auc: 0.5189 - lr: 0.0010 Epoch 2/40 42/42 [==============================] - ETA: 0s - loss: 0.1382 - accuracy: 0.9549 - auc: 0.9944 Epoch 00002: val_loss did not improve from 1.12842 42/42 [==============================] - 7s 167ms/step - loss: 0.1382 - accuracy: 0.9549 - auc: 0.9944 - val_loss: 1.1935 - val_accuracy: 0.3633 - val_auc: 0.5945 - lr: 0.0010 Epoch 3/40 42/42 [==============================] - ETA: 0s - loss: 0.0812 - accuracy: 0.9722 - auc: 0.9982 Epoch 00003: val_loss did not improve from 1.12842 Epoch 00003: ReduceLROnPlateau reducing learning rate to 0.00035000001662410796. 42/42 [==============================] - 7s 168ms/step - loss: 0.0812 - accuracy: 0.9722 - auc: 0.9982 - val_loss: 1.4170 - val_accuracy: 0.3778 - val_auc: 0.5966 - lr: 0.0010 Epoch 4/40 42/42 [==============================] - ETA: 0s - loss: 0.0467 - accuracy: 0.9903 - auc: 0.9995 Epoch 00004: val_loss did not improve from 1.12842 42/42 [==============================] - 7s 169ms/step - loss: 0.0467 - accuracy: 0.9903 - auc: 0.9995 - val_loss: 1.3578 - val_accuracy: 0.5370 - val_auc: 0.6348 - lr: 3.5000e-04 Epoch 5/40 42/42 [==============================] - ETA: 0s - loss: 0.0229 - accuracy: 0.9968 - auc: 1.0000 Epoch 00005: val_loss did not improve from 1.12842 Epoch 00005: ReduceLROnPlateau reducing learning rate to 0.00012250000581843777. 42/42 [==============================] - 7s 171ms/step - loss: 0.0229 - accuracy: 0.9968 - auc: 1.0000 - val_loss: 1.3069 - val_accuracy: 0.5016 - val_auc: 0.6431 - lr: 3.5000e-04 Epoch 6/40 42/42 [==============================] - ETA: 0s - loss: 0.0139 - accuracy: 0.9996 - auc: 1.0000 Epoch 00006: val_loss did not improve from 1.12842 42/42 [==============================] - 7s 171ms/step - loss: 0.0139 - accuracy: 0.9996 - auc: 1.0000 - val_loss: 1.3489 - val_accuracy: 0.5096 - val_auc: 0.6696 - lr: 1.2250e-04 Epoch 7/40 42/42 [==============================] - ETA: 0s - loss: 0.0118 - accuracy: 1.0000 - auc: 1.0000 Epoch 00007: val_loss did not improve from 1.12842 Epoch 00007: ReduceLROnPlateau reducing learning rate to 4.287500050850212e-05. 
42/42 [==============================] - 7s 172ms/step - loss: 0.0118 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 1.3105 - val_accuracy: 0.5225 - val_auc: 0.7224 - lr: 1.2250e-04 Epoch 8/40 42/42 [==============================] - ETA: 0s - loss: 0.0103 - accuracy: 1.0000 - auc: 1.0000 Epoch 00008: val_loss did not improve from 1.12842 42/42 [==============================] - 7s 177ms/step - loss: 0.0103 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 1.1661 - val_accuracy: 0.5482 - val_auc: 0.7723 - lr: 4.2875e-05 Epoch 9/40 42/42 [==============================] - ETA: 0s - loss: 0.0101 - accuracy: 1.0000 - auc: 1.0000 Epoch 00009: val_loss improved from 1.12842 to 0.95039, saving model to best_model_1_non-aug.h5 42/42 [==============================] - 8s 182ms/step - loss: 0.0101 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.9504 - val_accuracy: 0.6045 - val_auc: 0.8272 - lr: 4.2875e-05 Epoch 10/40 42/42 [==============================] - ETA: 0s - loss: 0.0097 - accuracy: 1.0000 - auc: 1.0000 Epoch 00010: val_loss improved from 0.95039 to 0.68776, saving model to best_model_1_non-aug.h5 42/42 [==============================] - 7s 179ms/step - loss: 0.0097 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.6878 - val_accuracy: 0.7138 - val_auc: 0.8884 - lr: 4.2875e-05 Epoch 11/40 42/42 [==============================] - ETA: 0s - loss: 0.0094 - accuracy: 1.0000 - auc: 1.0000 Epoch 00011: val_loss improved from 0.68776 to 0.46247, saving model to best_model_1_non-aug.h5 42/42 [==============================] - 8s 179ms/step - loss: 0.0094 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.4625 - val_accuracy: 0.8039 - val_auc: 0.9430 - lr: 4.2875e-05 Epoch 12/40 42/42 [==============================] - ETA: 0s - loss: 0.0087 - accuracy: 1.0000 - auc: 1.0000 Epoch 00012: val_loss improved from 0.46247 to 0.30952, saving model to best_model_1_non-aug.h5 42/42 [==============================] - 8s 179ms/step - loss: 0.0087 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.3095 - val_accuracy: 0.8666 - val_auc: 0.9734 - lr: 4.2875e-05 Epoch 13/40 42/42 [==============================] - ETA: 0s - loss: 0.0086 - accuracy: 1.0000 - auc: 1.0000 Epoch 00013: val_loss improved from 0.30952 to 0.21083, saving model to best_model_1_non-aug.h5 42/42 [==============================] - 7s 178ms/step - loss: 0.0086 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.2108 - val_accuracy: 0.9132 - val_auc: 0.9877 - lr: 4.2875e-05 Epoch 14/40 42/42 [==============================] - ETA: 0s - loss: 0.0078 - accuracy: 1.0000 - auc: 1.0000 Epoch 00014: val_loss improved from 0.21083 to 0.16245, saving model to best_model_1_non-aug.h5 42/42 [==============================] - 7s 177ms/step - loss: 0.0078 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.1624 - val_accuracy: 0.9341 - val_auc: 0.9927 - lr: 4.2875e-05 Epoch 15/40 42/42 [==============================] - ETA: 0s - loss: 0.0077 - accuracy: 1.0000 - auc: 1.0000 Epoch 00015: val_loss improved from 0.16245 to 0.13142, saving model to best_model_1_non-aug.h5 42/42 [==============================] - 7s 179ms/step - loss: 0.0077 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.1314 - val_accuracy: 0.9405 - val_auc: 0.9954 - lr: 4.2875e-05 Epoch 16/40 42/42 [==============================] - ETA: 0s - loss: 0.0076 - accuracy: 1.0000 - auc: 1.0000 Epoch 00016: val_loss improved from 0.13142 to 0.11108, saving model to best_model_1_non-aug.h5 42/42 [==============================] - 7s 176ms/step - loss: 0.0076 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.1111 - 
val_accuracy: 0.9534 - val_auc: 0.9967 - lr: 4.2875e-05 Epoch 17/40 42/42 [==============================] - ETA: 0s - loss: 0.0072 - accuracy: 1.0000 - auc: 1.0000 Epoch 00017: val_loss improved from 0.11108 to 0.09648, saving model to best_model_1_non-aug.h5 42/42 [==============================] - 7s 176ms/step - loss: 0.0072 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.0965 - val_accuracy: 0.9614 - val_auc: 0.9974 - lr: 4.2875e-05 Epoch 18/40 42/42 [==============================] - ETA: 0s - loss: 0.0072 - accuracy: 1.0000 - auc: 1.0000 Epoch 00018: val_loss improved from 0.09648 to 0.09115, saving model to best_model_1_non-aug.h5 42/42 [==============================] - 8s 180ms/step - loss: 0.0072 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.0912 - val_accuracy: 0.9646 - val_auc: 0.9970 - lr: 4.2875e-05 Epoch 19/40 42/42 [==============================] - ETA: 0s - loss: 0.0063 - accuracy: 1.0000 - auc: 1.0000 Epoch 00019: val_loss improved from 0.09115 to 0.08357, saving model to best_model_1_non-aug.h5 42/42 [==============================] - 7s 177ms/step - loss: 0.0063 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.0836 - val_accuracy: 0.9695 - val_auc: 0.9970 - lr: 4.2875e-05 Epoch 20/40 42/42 [==============================] - ETA: 0s - loss: 0.0062 - accuracy: 1.0000 - auc: 1.0000 Epoch 00020: val_loss improved from 0.08357 to 0.08279, saving model to best_model_1_non-aug.h5 42/42 [==============================] - 7s 178ms/step - loss: 0.0062 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.0828 - val_accuracy: 0.9695 - val_auc: 0.9971 - lr: 4.2875e-05 Epoch 21/40 42/42 [==============================] - ETA: 0s - loss: 0.0059 - accuracy: 1.0000 - auc: 1.0000 Epoch 00021: val_loss improved from 0.08279 to 0.07889, saving model to best_model_1_non-aug.h5 42/42 [==============================] - 7s 178ms/step - loss: 0.0059 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.0789 - val_accuracy: 0.9727 - val_auc: 0.9972 - lr: 4.2875e-05 Epoch 22/40 42/42 [==============================] - ETA: 0s - loss: 0.0060 - accuracy: 1.0000 - auc: 1.0000 Epoch 00022: val_loss improved from 0.07889 to 0.07828, saving model to best_model_1_non-aug.h5 42/42 [==============================] - 8s 182ms/step - loss: 0.0060 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.0783 - val_accuracy: 0.9727 - val_auc: 0.9972 - lr: 4.2875e-05 Epoch 23/40 42/42 [==============================] - ETA: 0s - loss: 0.0056 - accuracy: 1.0000 - auc: 1.0000 Epoch 00023: val_loss improved from 0.07828 to 0.07716, saving model to best_model_1_non-aug.h5 42/42 [==============================] - 7s 177ms/step - loss: 0.0056 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.0772 - val_accuracy: 0.9727 - val_auc: 0.9973 - lr: 4.2875e-05 Epoch 24/40 42/42 [==============================] - ETA: 0s - loss: 0.0056 - accuracy: 1.0000 - auc: 1.0000 Epoch 00024: val_loss did not improve from 0.07716 42/42 [==============================] - 7s 174ms/step - loss: 0.0056 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.0781 - val_accuracy: 0.9727 - val_auc: 0.9973 - lr: 4.2875e-05 Epoch 25/40 42/42 [==============================] - ETA: 0s - loss: 0.0053 - accuracy: 1.0000 - auc: 1.0000 Epoch 00025: val_loss did not improve from 0.07716 Epoch 00025: ReduceLROnPlateau reducing learning rate to 1.500624966865871e-05. 
42/42 [==============================] - 7s 174ms/step - loss: 0.0053 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.0795 - val_accuracy: 0.9711 - val_auc: 0.9972 - lr: 4.2875e-05 Epoch 26/40 42/42 [==============================] - ETA: 0s - loss: 0.0048 - accuracy: 1.0000 - auc: 1.0000 Epoch 00026: val_loss did not improve from 0.07716 42/42 [==============================] - 7s 174ms/step - loss: 0.0048 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.0774 - val_accuracy: 0.9743 - val_auc: 0.9973 - lr: 1.5006e-05 Epoch 27/40 42/42 [==============================] - ETA: 0s - loss: 0.0044 - accuracy: 1.0000 - auc: 1.0000 Epoch 00027: val_loss improved from 0.07716 to 0.07677, saving model to best_model_1_non-aug.h5 42/42 [==============================] - 8s 184ms/step - loss: 0.0044 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.0768 - val_accuracy: 0.9743 - val_auc: 0.9973 - lr: 1.5006e-05 Epoch 28/40 42/42 [==============================] - ETA: 0s - loss: 0.0046 - accuracy: 1.0000 - auc: 1.0000 Epoch 00028: val_loss improved from 0.07677 to 0.07529, saving model to best_model_1_non-aug.h5 42/42 [==============================] - 7s 179ms/step - loss: 0.0046 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.0753 - val_accuracy: 0.9743 - val_auc: 0.9974 - lr: 1.5006e-05 Epoch 29/40 42/42 [==============================] - ETA: 0s - loss: 0.0047 - accuracy: 0.9996 - auc: 1.0000 Epoch 00029: val_loss improved from 0.07529 to 0.07528, saving model to best_model_1_non-aug.h5 42/42 [==============================] - 8s 180ms/step - loss: 0.0047 - accuracy: 0.9996 - auc: 1.0000 - val_loss: 0.0753 - val_accuracy: 0.9743 - val_auc: 0.9974 - lr: 1.5006e-05 Epoch 30/40 42/42 [==============================] - ETA: 0s - loss: 0.0045 - accuracy: 1.0000 - auc: 1.0000 Epoch 00030: val_loss improved from 0.07528 to 0.07521, saving model to best_model_1_non-aug.h5 Epoch 00030: ReduceLROnPlateau reducing learning rate to 5.2521873840305485e-06. 
42/42 [==============================] - 7s 177ms/step - loss: 0.0045 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.0752 - val_accuracy: 0.9743 - val_auc: 0.9974 - lr: 1.5006e-05 Epoch 31/40 42/42 [==============================] - ETA: 0s - loss: 0.0041 - accuracy: 1.0000 - auc: 1.0000 Epoch 00031: val_loss improved from 0.07521 to 0.07510, saving model to best_model_1_non-aug.h5 42/42 [==============================] - 7s 178ms/step - loss: 0.0041 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.0751 - val_accuracy: 0.9743 - val_auc: 0.9974 - lr: 5.2522e-06 Epoch 32/40 42/42 [==============================] - ETA: 0s - loss: 0.0043 - accuracy: 1.0000 - auc: 1.0000 Epoch 00032: val_loss improved from 0.07510 to 0.07506, saving model to best_model_1_non-aug.h5 42/42 [==============================] - 8s 183ms/step - loss: 0.0043 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.0751 - val_accuracy: 0.9743 - val_auc: 0.9974 - lr: 5.2522e-06 Epoch 33/40 42/42 [==============================] - ETA: 0s - loss: 0.0043 - accuracy: 1.0000 - auc: 1.0000 Epoch 00033: val_loss improved from 0.07506 to 0.07461, saving model to best_model_1_non-aug.h5 42/42 [==============================] - 7s 177ms/step - loss: 0.0043 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.0746 - val_accuracy: 0.9743 - val_auc: 0.9974 - lr: 5.2522e-06 Epoch 34/40 42/42 [==============================] - ETA: 0s - loss: 0.0042 - accuracy: 1.0000 - auc: 1.0000 Epoch 00034: val_loss improved from 0.07461 to 0.07446, saving model to best_model_1_non-aug.h5 42/42 [==============================] - 7s 178ms/step - loss: 0.0042 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.0745 - val_accuracy: 0.9743 - val_auc: 0.9974 - lr: 5.2522e-06 Epoch 35/40 42/42 [==============================] - ETA: 0s - loss: 0.0041 - accuracy: 1.0000 - auc: 1.0000 Epoch 00035: val_loss improved from 0.07446 to 0.07438, saving model to best_model_1_non-aug.h5 42/42 [==============================] - 7s 179ms/step - loss: 0.0041 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.0744 - val_accuracy: 0.9743 - val_auc: 0.9974 - lr: 5.2522e-06 Epoch 36/40 42/42 [==============================] - ETA: 0s - loss: 0.0043 - accuracy: 1.0000 - auc: 1.0000 Epoch 00036: val_loss improved from 0.07438 to 0.07429, saving model to best_model_1_non-aug.h5 42/42 [==============================] - 8s 180ms/step - loss: 0.0043 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.0743 - val_accuracy: 0.9743 - val_auc: 0.9974 - lr: 5.2522e-06 Epoch 37/40 42/42 [==============================] - ETA: 0s - loss: 0.0041 - accuracy: 1.0000 - auc: 1.0000 Epoch 00037: val_loss improved from 0.07429 to 0.07402, saving model to best_model_1_non-aug.h5 42/42 [==============================] - 7s 177ms/step - loss: 0.0041 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.0740 - val_accuracy: 0.9759 - val_auc: 0.9974 - lr: 5.2522e-06 Epoch 38/40 42/42 [==============================] - ETA: 0s - loss: 0.0042 - accuracy: 1.0000 - auc: 1.0000 Epoch 00038: val_loss did not improve from 0.07402 42/42 [==============================] - 7s 179ms/step - loss: 0.0042 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.0742 - val_accuracy: 0.9759 - val_auc: 0.9974 - lr: 5.2522e-06 Epoch 39/40 42/42 [==============================] - ETA: 0s - loss: 0.0038 - accuracy: 1.0000 - auc: 1.0000 Epoch 00039: val_loss did not improve from 0.07402 Epoch 00039: ReduceLROnPlateau reducing learning rate to 1.8382655525783774e-06. 
42/42 [==============================] - 7s 174ms/step - loss: 0.0038 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.0742 - val_accuracy: 0.9759 - val_auc: 0.9974 - lr: 5.2522e-06 Epoch 40/40 42/42 [==============================] - ETA: 0s - loss: 0.0039 - accuracy: 1.0000 - auc: 1.0000 Epoch 00040: val_loss did not improve from 0.07402 42/42 [==============================] - 7s 173ms/step - loss: 0.0039 - accuracy: 1.0000 - auc: 1.0000 - val_loss: 0.0742 - val_accuracy: 0.9759 - val_auc: 0.9974 - lr: 1.8383e-06
from tensorflow.keras.models import load_model
#model=load_model("best_model_1_non-aug.h5")
y_test_array = y_test.to_numpy()
print(y_test_array)
[[0 0 1] [1 0 0] [0 1 0] ... [1 0 0] [1 0 0] [0 0 1]]
y_true_test = np.argmax(y_test_array,axis=1)
print(y_true_test)
print(y_true_test.shape)
print(y_true_test.dtype)
[2 0 1 1 2 1 0 1 2 0 1 1 2 1 1 0 0 1 0 1 1 1 1 0 1 2 2 1 2 0 2 1 0 2 1 1 0 2 1 2 0 2 1 2 0 1 2 2 0 0 2 0 2 0 2 2 1 1 1 0 0 2 0 1 0 0 1 2 2 2 0 2 2 1 0 0 2 1 0 1 2 2 0 0 1 0 1 2 1 1 0 1 2 1 1 1 1 1 2 1 1 1 2 0 0 2 2 1 2 2 2 1 1 1 2 2 2 0 1 0 2 2 1 2 2 0 2 1 1 1 1 1 2 1 1 2 1 0 1 1 0 1 0 0 1 1 2 2 0 0 0 0 1 0 0 2 0 0 1 1 2 0 2 1 0 2 1 1 1 1 2 0 1 1 1 2 2 0 1 0 2 2 2 2 0 2 1 0 1 1 0 2 0 0 0 0 1 1 0 1 0 0 2 1 2 1 1 2 0 0 0 2 2 2 0 0 0 2 0 1 2 2 0 2 1 1 2 0 2 2 2 0 1 1 0 2 0 2 2 0 2 1 0 1 1 2 2 0 1 1 2 2 2 1 1 2 2 2 1 2 2 2 1 2 1 0 2 0 0 2 0 2 0 2 2 2 0 1 1 2 2 1 0 1 2 1 2 1 1 0 2 2 0 2 2 2 2 2 1 0 2 0 2 1 2 1 1 1 1 0 0 1 1 0 0 0 2 1 2 1 2 1 1 0 2 1 0 0 0 0 0 2 1 2 0 1 0 0 2 1 0 1 2 0 0 0 2 2 2 0 1 2 0 0 2 0 1 2 1 2 0 1 1 2 2 1 0 0 2 0 0 2 0 2 0 1 1 2 2 2 2 0 1 1 2 2 1 0 2 2 1 1 0 0 1 2 1 1 0 0 0 1 2 2 0 2 0 1 2 0 0 0 0 2 2 1 1 0 2 1 0 1 2 0 0 2 2 1 2 0 1 1 0 2 1 1 0 1 0 1 0 1 0 0 1 0 0 1 2 1 0 1 1 0 0 2 1 2 1 1 1 0 0 0 1 2 2 2 2 2 1 2 2 2 2 2 1 2 0 2 1 1 1 1 2 0 0 2 2 0 1 1 1 0 0 0 2 2 1 1 0 2 2 0 0 1 0 0 1 1 2 1 0 2 0 0 2 2 0 2 2 1 2 0 0 0 1 2 0 0 1 1 2 2 1 1 2 1 1 0 2 0 2 2 0 0 1 2 2 1 0 1 0 0 2 0 1 1 1 0 2 1 2 1 1 0 0 2 1 2 1 1 1 2 1 1 1 2 0 2 2 1 0 0 2 1 0 2 1 1 1 2 0 1 0 2 1 1 2 2 1 1 2 0 2 2 1 2 0 2 2 2 1 0 0 0 0 1 1 2 0 2 1 1 1 0 2 0 0 2 2 2 2 0 2 1 1 2 1 1 0 1 1 2 1 0 2 1 0 1 0 2 1 2 1 0 2 1 0 2 2 1 2 1 1 1 2 2 2 1 0 2 0 2 2 1 2 1 1 0 0 1 2 1 2 0 1 1 1 0 1 0 2 1 1 0 0 2 0 2 1 2 1 2 1 0 0 2 2 1 2 2 1 1 0 2 2 0 2 0 2 1 2 0 0 1 0 2 2 1 1 2 1 0 0 0 2 1 0 1 0 1 0 1 0 2 1 2 1 1 0 2 0 2 1 0 0 1 0 0 1 2 2 2 0 2 0 2 1 2 2 0 0 1 1 0 0 2] (778,) int64
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow
tensorflow.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
# Use predict_classes() on multi-class data to return the predicted class index.
def predict_classes(x):  # adapted from Keras GitHub code
    proba = x
    if proba.shape[-1] > 1:
        return proba.argmax(axis=-1)
    else:
        return (proba > 0.5).astype("int32")
print(predict_classes(model.predict(X_test)))
prediction_index=predict_classes(model.predict(X_test))
#Now let's run some code to get Keras to return the label rather than the index...
# get labels from one hot encoded y_train data
labels=pd.get_dummies(y_train).columns
# Iterate through all predicted indices using map method
predicted_labels=list(map(lambda x: labels[x], prediction_index))
# Now we can extract some evaluative metrics
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
from sklearn.metrics import mean_absolute_error
import pandas as pd
from math import sqrt
def model_eval_metrics(y_true, y_pred, classification="TRUE"):
    if classification == "TRUE":
        accuracy_eval = accuracy_score(y_true, y_pred)
        f1_score_eval = f1_score(y_true, y_pred, average="macro", zero_division=0)
        precision_eval = precision_score(y_true, y_pred, average="macro", zero_division=0)
        recall_eval = recall_score(y_true, y_pred, average="macro", zero_division=0)
        metricdata = {'Accuracy': [accuracy_eval], 'F1 Score': [f1_score_eval], 'Precision': [precision_eval], 'Recall': [recall_eval]}
        finalmetricdata = pd.DataFrame(metricdata, index=[''])
    else:
        mse_eval = mean_squared_error(y_true, y_pred)
        rmse_eval = sqrt(mean_squared_error(y_true, y_pred))
        mae_eval = mean_absolute_error(y_true, y_pred)
        r2_eval = r2_score(y_true, y_pred)
        metricdata = {'MSE': [mse_eval], 'RMSE': [rmse_eval], 'MAE': [mae_eval], 'R2': [r2_eval]}
        finalmetricdata = pd.DataFrame(metricdata, index=[''])
    return finalmetricdata
25/25 [==============================] - 9s 28ms/step [2 0 2 1 2 1 0 1 2 0 1 1 2 1 1 0 0 1 0 1 1 1 1 0 1 2 2 1 2 0 2 1 0 2 1 1 0 2 1 2 0 2 2 2 0 1 2 2 0 0 2 0 2 0 2 2 1 1 1 0 0 2 0 1 0 0 1 2 2 2 0 2 2 1 0 0 2 1 0 1 2 2 0 0 1 0 1 2 1 1 0 1 2 1 1 1 1 2 1 1 1 1 2 0 0 2 2 1 2 2 2 1 1 1 2 2 1 0 1 0 2 2 1 2 2 0 2 1 1 1 1 1 2 1 1 2 1 0 1 1 0 1 0 0 2 2 2 1 0 0 0 0 1 0 0 2 0 0 1 1 2 0 2 1 0 2 1 1 1 1 2 0 1 1 1 2 2 0 1 0 2 2 2 2 0 2 1 0 1 1 0 1 0 0 0 0 1 1 0 1 0 0 0 1 2 1 1 2 0 0 0 2 1 2 0 0 0 2 0 1 2 2 0 2 1 1 2 0 2 2 2 0 1 1 0 2 0 2 2 0 2 2 0 2 1 2 2 0 1 1 2 2 2 1 1 2 0 2 1 2 2 1 1 2 1 0 2 0 0 2 0 2 0 2 2 2 0 1 1 2 2 1 0 1 2 1 2 1 2 0 0 2 0 2 1 2 2 2 1 0 2 0 2 1 2 1 1 1 1 0 0 1 2 0 0 0 2 1 2 1 2 1 1 0 2 1 0 0 0 0 0 2 1 2 0 1 0 0 2 1 0 1 2 0 0 0 2 2 2 0 1 2 0 0 2 0 2 2 1 2 0 2 1 2 2 1 0 0 2 0 0 2 0 2 0 1 1 2 2 2 2 0 1 1 2 2 1 0 2 1 1 1 0 0 1 2 1 1 0 0 0 1 2 2 0 2 0 1 1 0 0 0 2 2 2 1 1 0 2 1 0 1 2 0 0 2 2 1 2 0 1 1 0 2 1 1 0 1 0 1 0 1 0 0 1 0 0 1 2 2 0 1 1 0 0 2 1 2 1 1 1 0 0 0 1 2 2 2 2 2 1 2 2 2 2 2 1 2 0 2 1 1 1 1 2 0 0 2 2 0 1 1 1 0 0 0 2 2 1 1 0 2 2 0 0 1 0 0 1 1 2 1 0 2 0 0 2 2 0 2 2 1 2 0 0 0 1 2 2 0 1 1 2 2 1 1 2 1 1 0 2 0 2 2 0 0 1 2 2 1 0 1 0 0 2 0 1 1 1 0 0 1 2 1 1 0 0 2 1 2 1 1 1 2 1 1 1 2 0 2 2 1 0 0 2 1 0 2 1 1 1 2 0 1 0 1 1 1 2 2 1 1 2 0 2 2 1 2 0 2 2 2 1 0 0 0 0 1 1 2 0 2 1 1 1 0 2 0 0 2 2 2 2 0 2 1 1 2 1 1 0 1 1 2 1 0 2 1 0 1 0 2 1 2 1 0 2 1 0 2 2 1 2 1 1 1 2 2 2 1 0 2 0 2 2 1 2 1 1 0 0 1 2 1 2 0 1 1 1 0 1 0 2 1 1 0 0 2 0 2 1 2 1 2 1 0 0 2 2 1 2 1 1 1 0 2 2 0 2 0 2 1 2 0 0 1 0 2 2 1 1 2 1 2 0 0 2 1 0 1 0 2 0 1 0 2 1 2 1 1 0 2 0 2 1 0 0 1 0 0 1 2 2 2 0 1 0 2 1 2 2 0 0 2 1 0 0 2] 25/25 [==============================] - 1s 22ms/step
# y_test is one-hot encoded, so we need to extract labels before running model_eval_metrics()
y_test_labels = list(y_test.idxmax(axis=1))  # extract labels from the one-hot encoded y_test object as a list
# get metrics
model_eval_metrics(y_test_labels,predicted_labels,classification="TRUE")
Accuracy | F1 Score | Precision | Recall | |
---|---|---|---|---|
0.957584 | 0.958572 | 0.958499 | 0.958659 |
y_pred = model.predict(X_test)
25/25 [==============================] - 1s 22ms/step
labels=list(y_train.columns)
print(labels)
['COVID', 'NORMAL', 'Viral Pneumonia']
from sklearn.metrics import classification_report, confusion_matrix
from mlxtend.plotting import plot_confusion_matrix
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow
tensorflow.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
CM = confusion_matrix(y_test_labels,predicted_labels, labels=['COVID', 'NORMAL', 'Viral Pneumonia'])
fig, ax = plot_confusion_matrix(conf_mat=CM , figsize=(5, 5))
plt.show()
#print(confusion_matrix(y_true, y_pred))
print('Classification Report')
target_names = ['COVID', 'NORMAL', 'Viral Pneumonia']
print(classification_report(y_test_labels,predicted_labels, target_names=target_names, digits=4))
Classification Report
                 precision    recall  f1-score   support

          COVID     0.9834    0.9875    0.9854       240
         NORMAL     0.9551    0.9480    0.9515       269
Viral Pneumonia     0.9370    0.9405    0.9388       269

       accuracy                         0.9576       778
      macro avg     0.9585    0.9587    0.9586       778
   weighted avg     0.9576    0.9576    0.9576       778
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle
from sklearn.metrics import roc_curve, auc, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
n_classes=3
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
    fpr[i], tpr[i], _ = roc_curve(y_true_test, y_pred[:, i], pos_label=i)
    roc_auc[i] = auc(fpr[i], tpr[i])
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at these points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
    mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
plt.figure()
lw = 2
plt.plot(
fpr["macro"],
tpr["macro"],
label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
color="navy",
linestyle=":",
linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {1:0.4f})'.format(0, roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {1:0.4f})'.format(1, roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Pneumonia vs Rest (area = {1:0.4f})'.format(2, roc_auc[2]))
plt.figure(1)
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
# Zoomed in view of the upper left corner.
plt.figure(2)
lw = 2
plt.plot(
fpr["macro"],
tpr["macro"],
label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
color="navy",
linestyle=":",
linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {1:0.4f})'.format(0, roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {1:0.4f})'.format(1, roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Pneumonia vs Rest (area = {1:0.4f})'.format(2, roc_auc[2]))
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim(0, 0.2)
plt.ylim(0.8, 1)
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
import tensorflow.keras.backend as K
#print(model.get_config()) # Full configuration to fit keras model
print(K.eval(model.optimizer.get_config())) # Optimizer configuration
#print(len(model.history.epoch)) # Number of epochs
{'name': 'Adam', 'learning_rate': 5.2521873e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
from tensorflow.keras.utils import plot_model
plot_model(model, to_file='model_1.png')
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow
tensorflow.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
from tensorflow.keras.metrics import AUC
from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, EarlyStopping
with tf.device('/device:GPU:0'):
    model = tf.keras.Sequential([
        # input: images of shape (height, width, channels) = (192, 192, 3); the 3 stands for the RGB channels
        tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu', input_shape=(192, 192, 3)),
        tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
        tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
        tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
        tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
        tf.keras.layers.Conv2D(kernel_size=3, filters=16, padding='same', activation='relu'),
        tf.keras.layers.Flatten(),
        # classifying into 3 categories
        tf.keras.layers.Dense(3, activation='softmax')
    ])
    es = EarlyStopping(monitor='val_loss', patience=15, verbose=0, mode='min')
    red_lr = ReduceLROnPlateau(monitor='val_loss', patience=2, verbose=1, factor=0.15)
    mc = ModelCheckpoint('best_model_2_non-aug.h5', monitor='val_loss', mode='min', verbose=1, save_best_only=True)
    model.compile(
        optimizer="adam",
        loss='categorical_crossentropy',
        metrics=['accuracy', 'AUC'])
    # Fit with the default batch size of 32, per the model 2 description above.
    model.fit(X_train, y_train,
              epochs=40, verbose=1, validation_split=0.20, callbacks=[red_lr, mc, es])
Epoch 1/40 78/78 [==============================] - ETA: 0s - loss: 0.6719 - accuracy: 0.6999 - auc: 0.8818 Epoch 00001: val_loss improved from inf to 0.35924, saving model to best_model_2_non-aug.h5 78/78 [==============================] - 19s 200ms/step - loss: 0.6719 - accuracy: 0.6999 - auc: 0.8818 - val_loss: 0.3592 - val_accuracy: 0.8826 - val_auc: 0.9736 - lr: 0.0010 Epoch 2/40 78/78 [==============================] - ETA: 0s - loss: 0.3055 - accuracy: 0.8858 - auc: 0.9737 Epoch 00002: val_loss improved from 0.35924 to 0.23153, saving model to best_model_2_non-aug.h5 78/78 [==============================] - 12s 160ms/step - loss: 0.3055 - accuracy: 0.8858 - auc: 0.9737 - val_loss: 0.2315 - val_accuracy: 0.9180 - val_auc: 0.9856 - lr: 0.0010 Epoch 3/40 78/78 [==============================] - ETA: 0s - loss: 0.2413 - accuracy: 0.9167 - auc: 0.9821 Epoch 00003: val_loss improved from 0.23153 to 0.18890, saving model to best_model_2_non-aug.h5 78/78 [==============================] - 12s 160ms/step - loss: 0.2413 - accuracy: 0.9167 - auc: 0.9821 - val_loss: 0.1889 - val_accuracy: 0.9373 - val_auc: 0.9888 - lr: 0.0010 Epoch 4/40 78/78 [==============================] - ETA: 0s - loss: 0.1831 - accuracy: 0.9393 - auc: 0.9898 Epoch 00004: val_loss improved from 0.18890 to 0.18299, saving model to best_model_2_non-aug.h5 78/78 [==============================] - 12s 159ms/step - loss: 0.1831 - accuracy: 0.9393 - auc: 0.9898 - val_loss: 0.1830 - val_accuracy: 0.9357 - val_auc: 0.9897 - lr: 0.0010 Epoch 5/40 78/78 [==============================] - ETA: 0s - loss: 0.1446 - accuracy: 0.9481 - auc: 0.9925 Epoch 00005: val_loss did not improve from 0.18299 78/78 [==============================] - 12s 156ms/step - loss: 0.1446 - accuracy: 0.9481 - auc: 0.9925 - val_loss: 0.1863 - val_accuracy: 0.9341 - val_auc: 0.9887 - lr: 0.0010 Epoch 6/40 78/78 [==============================] - ETA: 0s - loss: 0.1665 - accuracy: 0.9421 - auc: 0.9908 Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.0001500000071246177. 
Epoch 00006: val_loss did not improve from 0.18299 78/78 [==============================] - 12s 157ms/step - loss: 0.1665 - accuracy: 0.9421 - auc: 0.9908 - val_loss: 0.1901 - val_accuracy: 0.9164 - val_auc: 0.9896 - lr: 0.0010 Epoch 7/40 78/78 [==============================] - ETA: 0s - loss: 0.0856 - accuracy: 0.9674 - auc: 0.9976 Epoch 00007: val_loss improved from 0.18299 to 0.11519, saving model to best_model_2_non-aug.h5 78/78 [==============================] - 12s 159ms/step - loss: 0.0856 - accuracy: 0.9674 - auc: 0.9976 - val_loss: 0.1152 - val_accuracy: 0.9566 - val_auc: 0.9954 - lr: 1.5000e-04 Epoch 8/40 78/78 [==============================] - ETA: 0s - loss: 0.0634 - accuracy: 0.9795 - auc: 0.9982 Epoch 00008: val_loss improved from 0.11519 to 0.11248, saving model to best_model_2_non-aug.h5 78/78 [==============================] - 12s 159ms/step - loss: 0.0634 - accuracy: 0.9795 - auc: 0.9982 - val_loss: 0.1125 - val_accuracy: 0.9614 - val_auc: 0.9949 - lr: 1.5000e-04 Epoch 9/40 78/78 [==============================] - ETA: 0s - loss: 0.0515 - accuracy: 0.9831 - auc: 0.9991 Epoch 00009: val_loss did not improve from 0.11248 78/78 [==============================] - 12s 155ms/step - loss: 0.0515 - accuracy: 0.9831 - auc: 0.9991 - val_loss: 0.1139 - val_accuracy: 0.9630 - val_auc: 0.9949 - lr: 1.5000e-04 Epoch 10/40 78/78 [==============================] - ETA: 0s - loss: 0.0460 - accuracy: 0.9851 - auc: 0.9991 Epoch 00010: ReduceLROnPlateau reducing learning rate to 2.2500001068692655e-05. Epoch 00010: val_loss did not improve from 0.11248 78/78 [==============================] - 12s 157ms/step - loss: 0.0460 - accuracy: 0.9851 - auc: 0.9991 - val_loss: 0.1248 - val_accuracy: 0.9614 - val_auc: 0.9943 - lr: 1.5000e-04 Epoch 11/40 78/78 [==============================] - ETA: 0s - loss: 0.0319 - accuracy: 0.9916 - auc: 0.9997 Epoch 00011: val_loss did not improve from 0.11248 78/78 [==============================] - 12s 156ms/step - loss: 0.0319 - accuracy: 0.9916 - auc: 0.9997 - val_loss: 0.1188 - val_accuracy: 0.9614 - val_auc: 0.9946 - lr: 2.2500e-05 Epoch 12/40 78/78 [==============================] - ETA: 0s - loss: 0.0293 - accuracy: 0.9932 - auc: 0.9997 Epoch 00012: ReduceLROnPlateau reducing learning rate to 3.3750000511645338e-06. Epoch 00012: val_loss did not improve from 0.11248 78/78 [==============================] - 12s 156ms/step - loss: 0.0293 - accuracy: 0.9932 - auc: 0.9997 - val_loss: 0.1204 - val_accuracy: 0.9630 - val_auc: 0.9940 - lr: 2.2500e-05 Epoch 13/40 78/78 [==============================] - ETA: 0s - loss: 0.0271 - accuracy: 0.9936 - auc: 0.9998 Epoch 00013: val_loss did not improve from 0.11248 78/78 [==============================] - 12s 156ms/step - loss: 0.0271 - accuracy: 0.9936 - auc: 0.9998 - val_loss: 0.1201 - val_accuracy: 0.9630 - val_auc: 0.9941 - lr: 3.3750e-06 Epoch 14/40 78/78 [==============================] - ETA: 0s - loss: 0.0268 - accuracy: 0.9936 - auc: 0.9998 Epoch 00014: ReduceLROnPlateau reducing learning rate to 5.062500008534698e-07. 
Epoch 00014: val_loss did not improve from 0.11248 78/78 [==============================] - 12s 159ms/step - loss: 0.0268 - accuracy: 0.9936 - auc: 0.9998 - val_loss: 0.1204 - val_accuracy: 0.9630 - val_auc: 0.9941 - lr: 3.3750e-06 Epoch 15/40 78/78 [==============================] - ETA: 0s - loss: 0.0265 - accuracy: 0.9940 - auc: 0.9998 Epoch 00015: val_loss did not improve from 0.11248 78/78 [==============================] - 12s 156ms/step - loss: 0.0265 - accuracy: 0.9940 - auc: 0.9998 - val_loss: 0.1204 - val_accuracy: 0.9630 - val_auc: 0.9941 - lr: 5.0625e-07 Epoch 16/40 78/78 [==============================] - ETA: 0s - loss: 0.0264 - accuracy: 0.9940 - auc: 0.9998 Epoch 00016: ReduceLROnPlateau reducing learning rate to 7.59374984227179e-08. Epoch 00016: val_loss did not improve from 0.11248 78/78 [==============================] - 12s 156ms/step - loss: 0.0264 - accuracy: 0.9940 - auc: 0.9998 - val_loss: 0.1205 - val_accuracy: 0.9630 - val_auc: 0.9941 - lr: 5.0625e-07 Epoch 17/40 78/78 [==============================] - ETA: 0s - loss: 0.0264 - accuracy: 0.9940 - auc: 0.9998 Epoch 00017: val_loss did not improve from 0.11248 78/78 [==============================] - 12s 155ms/step - loss: 0.0264 - accuracy: 0.9940 - auc: 0.9998 - val_loss: 0.1205 - val_accuracy: 0.9630 - val_auc: 0.9941 - lr: 7.5937e-08 Epoch 18/40 78/78 [==============================] - ETA: 0s - loss: 0.0264 - accuracy: 0.9940 - auc: 0.9998 Epoch 00018: ReduceLROnPlateau reducing learning rate to 1.1390624976570507e-08. Epoch 00018: val_loss did not improve from 0.11248 78/78 [==============================] - 12s 157ms/step - loss: 0.0264 - accuracy: 0.9940 - auc: 0.9998 - val_loss: 0.1205 - val_accuracy: 0.9630 - val_auc: 0.9941 - lr: 7.5937e-08 Epoch 19/40 78/78 [==============================] - ETA: 0s - loss: 0.0263 - accuracy: 0.9940 - auc: 0.9998 Epoch 00019: val_loss did not improve from 0.11248 78/78 [==============================] - 12s 157ms/step - loss: 0.0263 - accuracy: 0.9940 - auc: 0.9998 - val_loss: 0.1205 - val_accuracy: 0.9630 - val_auc: 0.9941 - lr: 1.1391e-08 Epoch 20/40 78/78 [==============================] - ETA: 0s - loss: 0.0263 - accuracy: 0.9940 - auc: 0.9998 Epoch 00020: ReduceLROnPlateau reducing learning rate to 1.7085937997762811e-09. Epoch 00020: val_loss did not improve from 0.11248 78/78 [==============================] - 12s 156ms/step - loss: 0.0263 - accuracy: 0.9940 - auc: 0.9998 - val_loss: 0.1205 - val_accuracy: 0.9630 - val_auc: 0.9941 - lr: 1.1391e-08 Epoch 21/40 78/78 [==============================] - ETA: 0s - loss: 0.0263 - accuracy: 0.9940 - auc: 0.9998 Epoch 00021: val_loss did not improve from 0.11248 78/78 [==============================] - 12s 156ms/step - loss: 0.0263 - accuracy: 0.9940 - auc: 0.9998 - val_loss: 0.1205 - val_accuracy: 0.9630 - val_auc: 0.9941 - lr: 1.7086e-09 Epoch 22/40 78/78 [==============================] - ETA: 0s - loss: 0.0263 - accuracy: 0.9940 - auc: 0.9998 Epoch 00022: ReduceLROnPlateau reducing learning rate to 2.5628907329711126e-10. 
Epoch 00022: val_loss did not improve from 0.11248 78/78 [==============================] - 12s 156ms/step - loss: 0.0263 - accuracy: 0.9940 - auc: 0.9998 - val_loss: 0.1205 - val_accuracy: 0.9630 - val_auc: 0.9941 - lr: 1.7086e-09 Epoch 23/40 78/78 [==============================] - ETA: 0s - loss: 0.0263 - accuracy: 0.9940 - auc: 0.9998 Epoch 00023: val_loss did not improve from 0.11248 78/78 [==============================] - 12s 157ms/step - loss: 0.0263 - accuracy: 0.9940 - auc: 0.9998 - val_loss: 0.1205 - val_accuracy: 0.9630 - val_auc: 0.9941 - lr: 2.5629e-10
from tensorflow.keras.models import load_model
#model=load_model("best_model_2_non-aug.h5") # uncomment to evaluate the checkpointed best model rather than the final-epoch weights
y_test_array = y_test.to_numpy()
print(y_test_array)
[[0 0 1] [1 0 0] [0 1 0] ... [1 0 0] [1 0 0] [0 0 1]]
y_true_test = np.argmax(y_test_array,axis=1)
print(y_true_test)
print(y_true_test.shape)
print(y_true_test.dtype)
[2 0 1 1 2 1 0 1 2 0 1 1 2 1 1 0 0 1 0 1 ... 0 0 1 1 0 0 2] (true class indices, condensed) (778,) int64
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from tensorflow.keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
# return predicted class indices from softmax probabilities (argmax over classes; 0.5 threshold for binary output)
def predict_classes(x):
    proba = x
    if proba.shape[-1] > 1:
        return proba.argmax(axis=-1)
    else:
        return (proba > 0.5).astype("int32")
print(predict_classes(model.predict(X_test)))
prediction_index=predict_classes(model.predict(X_test))
labels=pd.get_dummies(y_train).columns
predicted_labels=list(map(lambda x: labels[x], prediction_index))
#print(predicted_labels)
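To make the index-to-label mapping concrete, here is a minimal, self-contained sketch (toy probabilities, not actual model output) of what predict_classes() and the map() call above produce:

import numpy as np
# toy softmax outputs for 3 images (each row sums to 1)
toy_proba = np.array([[0.7, 0.2, 0.1],   # -> index 0
                      [0.1, 0.1, 0.8],   # -> index 2
                      [0.2, 0.6, 0.2]])  # -> index 1
toy_labels = ['COVID', 'NORMAL', 'Viral Pneumonia']
idx = toy_proba.argmax(axis=-1)                 # array([0, 2, 1])
print(list(map(lambda i: toy_labels[i], idx)))  # ['COVID', 'Viral Pneumonia', 'NORMAL']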
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
from sklearn.metrics import mean_absolute_error
import pandas as pd
from math import sqrt
# compute a one-row DataFrame of evaluation metrics; classification="TRUE" selects
# macro-averaged classification metrics, anything else selects regression metrics
def model_eval_metrics(y_true, y_pred, classification="TRUE"):
    if classification=="TRUE":
        accuracy_eval = accuracy_score(y_true, y_pred)
        f1_score_eval = f1_score(y_true, y_pred, average="macro", zero_division=0)
        precision_eval = precision_score(y_true, y_pred, average="macro", zero_division=0)
        recall_eval = recall_score(y_true, y_pred, average="macro", zero_division=0)
        metricdata = {'Accuracy': [accuracy_eval], 'F1 Score': [f1_score_eval], 'Precision': [precision_eval], 'Recall': [recall_eval]}
        finalmetricdata = pd.DataFrame(metricdata, index=[''])
    else:
        mse_eval = mean_squared_error(y_true, y_pred)
        rmse_eval = sqrt(mean_squared_error(y_true, y_pred))
        mae_eval = mean_absolute_error(y_true, y_pred)
        r2_eval = r2_score(y_true, y_pred)
        metricdata = {'MSE': [mse_eval], 'RMSE': [rmse_eval], 'MAE': [mae_eval], 'R2': [r2_eval]}
        finalmetricdata = pd.DataFrame(metricdata, index=[''])
    return finalmetricdata
25/25 [==============================] - 3s 61ms/step
[2 0 1 1 2 1 0 1 2 0 1 1 2 1 1 0 0 1 0 1 ... 2 0 0 0 2 1 0 0 2] (predicted class indices, 778 values, condensed)
25/25 [==============================] - 1s 46ms/step
y_test_labels=list(y_test.idxmax(axis=1)) # extract the true labels from the one-hot encoded y_test and convert to a plain list
model_eval_metrics(y_test_labels,predicted_labels,classification="TRUE")
| Accuracy | F1 Score | Precision | Recall |
|---|---|---|---|
| 0.962725 | 0.963333 | 0.96342 | 0.963316 |
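As a quick sanity check of the helper itself (independent of any model), it can be run on hand-made labels; with 2 of 3 predictions correct the accuracy should be about 0.667:

# toy example: two of three predictions are correct
print(model_eval_metrics(['COVID', 'NORMAL', 'Viral Pneumonia'],
                         ['COVID', 'NORMAL', 'NORMAL'],
                         classification="TRUE"))
# Accuracy 0.6667; the F1 Score, Precision, and Recall columns are macro averages over the 3 classes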
y_pred = model.predict(X_test)
25/25 [==============================] - 1s 51ms/step
labels=list(y_train.columns)
print(labels)
['COVID', 'NORMAL', 'Viral Pneumonia']
from sklearn.metrics import classification_report, confusion_matrix
from mlxtend.plotting import plot_confusion_matrix
import matplotlib.pyplot as plt # needed for plt.show() below
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from tensorflow.keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
CM = confusion_matrix(y_test_labels,predicted_labels, labels=['COVID', 'NORMAL', 'Viral Pneumonia'])
fig, ax = plot_confusion_matrix(conf_mat=CM , figsize=(5, 5))
plt.show()
#print(confusion_matrix(y_true, y_pred))
print('Classification Report')
target_names = ['COVID', 'NORMAL', 'Viral Pneumonia']
print(classification_report(y_test_labels,predicted_labels, target_names=target_names, digits=4))
Classification Report
                 precision    recall  f1-score   support

          COVID     0.9792    0.9792    0.9792       240
         NORMAL     0.9660    0.9517    0.9588       269
Viral Pneumonia     0.9451    0.9591    0.9520       269

       accuracy                         0.9627       778
      macro avg     0.9634    0.9633    0.9633       778
   weighted avg     0.9628    0.9627    0.9627       778
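The per-class recall values in the report can be reproduced directly from the confusion matrix computed above: the diagonal holds the correct counts and the row sums hold the class supports. A minimal sketch:

# per-class recall = correct predictions / class support (rows follow the labels order passed to confusion_matrix)
per_class_recall = CM.diagonal() / CM.sum(axis=1)
for name, r in zip(['COVID', 'NORMAL', 'Viral Pneumonia'], per_class_recall):
    print('{}: recall = {:0.4f}'.format(name, r))
print('macro recall = {:0.4f}'.format(per_class_recall.mean()))  # matches the 'macro avg' recall row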
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle
from sklearn.metrics import roc_curve, auc, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
n_classes=3
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_true_test, y_pred[:,i], pos_label=i)
roc_auc[i] = auc(fpr[i], tpr[i])
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
plt.figure()
lw = 2
plt.plot(
fpr["macro"],
tpr["macro"],
label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
color="navy",
linestyle=":",
linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Pneumonia vs Rest (area = {:0.4f})'.format(roc_auc[2]))
plt.figure(1)
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
plt.figure(2)
lw = 2
plt.plot(
fpr["macro"],
tpr["macro"],
label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
color="navy",
linestyle=":",
linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Pneumonia vs Rest (area = {:0.4f})'.format(roc_auc[2]))
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim(0, 0.2)
plt.ylim(0.8, 1)
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
import tensorflow.keras.backend as K
#print(model.get_config())
print(K.eval(model.optimizer.get_config()))
#print(len(model.history.epoch)) # Number of epochs
{'name': 'Adam', 'learning_rate': 0.00015, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
from tensorflow.keras.utils import plot_model
plot_model(model, to_file='model_2.png')
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from tensorflow.keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, EarlyStopping
from tensorflow.keras.metrics import AUC
with tf.device('/device:GPU:0'):
model = tf.keras.Sequential([
# input: batches of images with shape (batch, height, width, channels) = (N, 192, 192, 3); the 3 channels are RGB
tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu', input_shape=(192, 192, 3)),
tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=2),
tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=2),
tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=2),
tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=2),
tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
tf.keras.layers.Conv2D(kernel_size=3, filters=16, padding='same', activation='relu'),
tf.keras.layers.Flatten(),
# classifying into 3 categories
tf.keras.layers.Dense(3, activation='softmax')
])
es = EarlyStopping(monitor='val_loss', patience=15, verbose=0, mode='min')
red_lr = ReduceLROnPlateau(monitor='val_loss',patience=2,verbose=1,factor=0.05)
mc = ModelCheckpoint('best_model_3_non-aug.h5', monitor='val_loss',mode='min', verbose=1, save_best_only=True)
model.compile(
optimizer="adam",
loss= 'categorical_crossentropy',
metrics=['accuracy', 'AUC'])
# Fitting the CNN to the Training set
model.fit(X_train, y_train,
epochs = 40, verbose=1,validation_split=0.20, callbacks=[red_lr, mc, es])
Epoch 1/40: loss 0.6032 - accuracy 0.7329 - auc 0.9015; val_loss improved from inf to 0.25016 (val_accuracy 0.9003), saving model to best_model_3_non-aug.h5
Epochs 2-4: val_loss improved to 0.22517, 0.17243, then 0.16788 (val_accuracy up to 0.9373)
Epochs 5-6: val_loss did not improve; ReduceLROnPlateau cut the learning rate to 5.0000e-05 at epoch 6
Epochs 7-8: val_loss improved to 0.11819, then 0.11765 (val_accuracy 0.9614), saving model to best_model_3_non-aug.h5
[Epochs 9-23 condensed: val_loss never improved on 0.11765; ReduceLROnPlateau cut the learning rate stepwise from 2.5000e-06 down to 3.9063e-14; training metrics settled at loss ~0.0650, accuracy 0.9763, auc 0.9985, val_loss 0.1215, val_accuracy 0.9614.]
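The learning-rate column in this log follows directly from ReduceLROnPlateau's update rule, new_lr = lr * factor, with factor=0.05 here. A quick arithmetic sketch of the schedule:

# each plateau multiplies the learning rate by factor=0.05
lr, factor = 1e-3, 0.05
for n_reductions in range(4):
    print('after {} reductions: lr = {:.4e}'.format(n_reductions, lr))
    lr *= factor
# prints 1.0000e-03, 5.0000e-05, 2.5000e-06, 1.2500e-07, matching the lr values in the log above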
from tensorflow.keras.models import load_model
#model=load_model("best_model_3_non-aug.h5")
y_test_array = y_test.to_numpy()
print(y_test_array)
[[0 0 1] [1 0 0] [0 1 0] ... [1 0 0] [1 0 0] [0 0 1]]
y_true_test = np.argmax(y_test_array,axis=1)
print(y_true_test)
print(y_true_test.shape)
print(y_true_test.dtype)
[2 0 1 1 2 1 0 1 2 0 1 1 2 1 1 0 0 1 0 1 ... 1 1 0 0 2] (true class indices, same test labels as above, condensed) (778,) int64
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from tensorflow.keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
# predict_classes() returns the predicted class index for multi-class output
def predict_classes(x): # adapted from the Keras source
proba=x
if proba.shape[-1] > 1:
return proba.argmax(axis=-1)
else:
return (proba > 0.5).astype("int32")
print(predict_classes(model.predict(X_test)))
prediction_index=predict_classes(model.predict(X_test))
#Now let's run some code to get keras to return the label rather than the index...
# get labels from one hot encoded y_train data
labels=pd.get_dummies(y_train).columns
# Iterate through all predicted indices using map method
predicted_labels=list(map(lambda x: labels[x], prediction_index))
#print(predicted_labels)
# Now we can extract some evaluation metrics for model comparison
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
from sklearn.metrics import mean_absolute_error
import pandas as pd
from math import sqrt
def model_eval_metrics(y_true, y_pred,classification="TRUE"):
if classification=="TRUE":
accuracy_eval = accuracy_score(y_true, y_pred)
f1_score_eval = f1_score(y_true, y_pred,average="macro",zero_division=0)
precision_eval = precision_score(y_true, y_pred,average="macro",zero_division=0)
recall_eval = recall_score(y_true, y_pred,average="macro",zero_division=0)
metricdata = {'Accuracy': [accuracy_eval], 'F1 Score': [f1_score_eval], 'Precision': [precision_eval], 'Recall': [recall_eval]}
finalmetricdata = pd.DataFrame(metricdata, index=[''])
else:
mse_eval = mean_squared_error(y_true, y_pred)
rmse_eval = sqrt(mean_squared_error(y_true, y_pred))
mae_eval = mean_absolute_error(y_true, y_pred)
r2_eval = r2_score(y_true, y_pred)
metricdata = {'MSE': [mse_eval], 'RMSE': [rmse_eval], 'MAE': [mae_eval], 'R2': [r2_eval]}
finalmetricdata = pd.DataFrame(metricdata, index=[''])
return finalmetricdata
25/25 [==============================] - 1s 28ms/step
[2 0 2 1 2 1 0 1 2 0 1 1 2 1 1 0 0 1 0 1 ... 0 0 0 2 1 0 0 2] (predicted class indices, 778 values, condensed)
25/25 [==============================] - 1s 25ms/step
# y_test is one-hot encoded, so we need to extract labels before running model_eval_metrics()
y_test_labels=list(y_test.idxmax(axis=1)) # extract the true labels from the one-hot encoded y_test and convert to a plain list
# get metrics
model_eval_metrics(y_test_labels,predicted_labels,classification="TRUE")
| Accuracy | F1 Score | Precision | Recall |
|---|---|---|---|
| 0.96144 | 0.961623 | 0.961484 | 0.961777 |
y_pred = model.predict(X_test)
25/25 [==============================] - 1s 27ms/step
labels=list(y_train.columns)
print(labels)
['COVID', 'NORMAL', 'Viral Pneumonia']
from sklearn.metrics import classification_report, confusion_matrix
from mlxtend.plotting import plot_confusion_matrix
import matplotlib.pyplot as plt # needed for plt.show() below
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from tensorflow.keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
CM = confusion_matrix(y_test_labels,predicted_labels, labels=['COVID', 'NORMAL', 'Viral Pneumonia'])
fig, ax = plot_confusion_matrix(conf_mat=CM , figsize=(5, 5))
plt.show()
#print(confusion_matrix(y_true, y_pred))
print('Classification Report')
target_names = ['COVID', 'NORMAL', 'Viral Pneumonia']
print(classification_report(y_test_labels,predicted_labels, target_names=target_names, digits=4))
Classification Report
                 precision    recall  f1-score   support

          COVID     0.9628    0.9708    0.9668       240
         NORMAL     0.9590    0.9554    0.9572       269
Viral Pneumonia     0.9627    0.9591    0.9609       269

       accuracy                         0.9614       778
      macro avg     0.9615    0.9618    0.9616       778
   weighted avg     0.9614    0.9614    0.9614       778
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle
from sklearn.metrics import roc_curve, auc, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
n_classes=3
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_true_test, y_pred[:,i], pos_label=i)
roc_auc[i] = auc(fpr[i], tpr[i])
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at this points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
plt.figure()
lw = 2
plt.plot(
fpr["macro"],
tpr["macro"],
label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
color="navy",
linestyle=":",
linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Pneumonia vs Rest (area = {:0.4f})'.format(roc_auc[2]))
plt.figure(1)
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
# Zoom in view of the upper left corner.
plt.figure(2)
lw = 2
plt.plot(
fpr["macro"],
tpr["macro"],
label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
color="navy",
linestyle=":",
linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Pneumonia vs Rest (area = {:0.4f})'.format(roc_auc[2]))
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim(0, 0.2)
plt.ylim(0.8, 1)
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
import tensorflow.keras.backend as K
#print(model.get_config()) # Full configuration to fit keras model
print(K.eval(model.optimizer.get_config())) # Optimizer configuration
#print(len(model.history.epoch)) # Number of epochs
{'name': 'Adam', 'learning_rate': 5.0000002e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
from tensorflow.keras.utils import plot_model
plot_model(model, to_file='model_3.png')
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from tensorflow.keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, EarlyStopping
from tensorflow.keras.metrics import AUC
with tf.device('/device:GPU:0'):
model = tf.keras.Sequential([
# input: batches of images with shape (batch, height, width, channels) = (N, 192, 192, 3); the 3 channels are RGB
tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu', input_shape=(192, 192, 3)),
tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=2),
tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=2),
tf.keras.layers.Conv2D(kernel_size=3, filters=128, padding='same', activation='relu'),
tf.keras.layers.Conv2D(kernel_size=3, filters=128, padding='same', activation='relu'),
tf.keras.layers.Conv2D(kernel_size=3, filters=128, padding='same', activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=2),
tf.keras.layers.Flatten(),
# classifying into 3 categories
tf.keras.layers.Dense(3, activation='softmax')
])
es = EarlyStopping(monitor='val_loss', patience=15, verbose=0, mode='min')
red_lr = ReduceLROnPlateau(monitor='val_loss',patience=2,verbose=1,factor=0.05)
mc = ModelCheckpoint('best_model_4_non-aug.h5', monitor='val_loss',mode='min', verbose=1, save_best_only=True)
model.compile(
optimizer="adam",
loss= 'categorical_crossentropy',
metrics=['accuracy', 'AUC'])
# Fitting the CNN to the Training set
model.fit(X_train, y_train,
epochs = 40, verbose=1,validation_split=0.20, callbacks=[red_lr, mc, es])
Epoch 1/15: loss 0.6015 - accuracy 0.7144; val_accuracy improved from -inf to 0.88103 (val_loss 0.3030), saving model to best_model_4.h5
Epoch 2: val_accuracy improved to 0.95338 (val_loss 0.1373)
Epochs 3-4: val_accuracy did not improve; ReduceLROnPlateau cut the learning rate to 5.0000e-05 at epoch 4
Epochs 5-7: val_accuracy improved to 0.95981, 0.96141, then 0.96463 (val_loss 0.1154 at epoch 7)
[Epochs 8-15 condensed: val_accuracy never improved on 0.96463; ReduceLROnPlateau cut the learning rate stepwise from 2.5000e-06 down to 3.1250e-10; training metrics settled at loss ~0.0320, accuracy 0.9916, val_loss 0.1200, val_accuracy 0.9646.]
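Architecture 4 max-pools three times, so the 192x192 input shrinks to 24x24 before Flatten, and the great majority of the model's parameters end up in the final Dense layer. A small arithmetic sketch (layer sizes taken from the model definition above):

# spatial size after three 2x2 max-pools: 192 -> 96 -> 48 -> 24
side = 192
for _ in range(3):
    side //= 2
flat_features = side * side * 128        # 24 * 24 * 128 = 73,728 inputs to Flatten
dense_params = flat_features * 3 + 3     # weights plus biases of the 3-way softmax
print(side, flat_features, dense_params) # 24 73728 221187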
from tensorflow.keras.models import load_model
#model=load_model("best_model_4_non-aug.h5")
y_test_array = y_test.to_numpy()
print(y_test_array)
[[0 0 1] [1 0 0] [0 1 0] ... [1 0 0] [1 0 0] [0 0 1]]
y_true_test = np.argmax(y_test_array,axis=1)
print(y_true_test)
print(y_true_test.shape)
print(y_true_test.dtype)
[2 0 1 1 2 1 0 1 2 0 1 1 2 1 1 0 0 1 0 1 ... 1 1 0 0 2] (true class indices, same test labels as above, condensed) (778,) int64
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from tensorflow.keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
def predict_classes(x):
proba=x
if proba.shape[-1] > 1:
return proba.argmax(axis=-1)
else:
return (proba > 0.5).astype("int32")
print(predict_classes(model.predict(X_test)))
prediction_index=predict_classes(model.predict(X_test))
labels=pd.get_dummies(y_train).columns
predicted_labels=list(map(lambda x: labels[x], prediction_index))
#print(predicted_labels)
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
from sklearn.metrics import mean_absolute_error
import pandas as pd
from math import sqrt
def model_eval_metrics(y_true, y_pred,classification="TRUE"):
if classification=="TRUE":
accuracy_eval = accuracy_score(y_true, y_pred)
f1_score_eval = f1_score(y_true, y_pred,average="macro",zero_division=0)
precision_eval = precision_score(y_true, y_pred,average="macro",zero_division=0)
recall_eval = recall_score(y_true, y_pred,average="macro",zero_division=0)
metricdata = {'Accuracy': [accuracy_eval], 'F1 Score': [f1_score_eval], 'Precision': [precision_eval], 'Recall': [recall_eval]}
finalmetricdata = pd.DataFrame(metricdata, index=[''])
else:
mse_eval = mean_squared_error(y_true, y_pred)
rmse_eval = sqrt(mean_squared_error(y_true, y_pred))
mae_eval = mean_absolute_error(y_true, y_pred)
r2_eval = r2_score(y_true, y_pred)
metricdata = {'MSE': [mse_eval], 'RMSE': [rmse_eval], 'MAE': [mae_eval], 'R2': [r2_eval]}
finalmetricdata = pd.DataFrame(metricdata, index=[''])
return finalmetricdata
25/25 [==============================] - 2s 51ms/step
[2 0 1 1 2 1 0 1 2 0 1 1 2 1 1 0 0 1 0 1 ... 2 2 0 0 2 1 0 0 2] (predicted class indices, 778 values, condensed)
25/25 [==============================] - 1s 47ms/step
# y_test is one-hot encoded, so we need to extract labels before running model_eval_metrics()
y_test_labels=list(y_test.idxmax(axis=1)) # extract the true labels from the one-hot encoded y_test and convert to a plain list
# get metrics
model_eval_metrics(y_test_labels,predicted_labels,classification="TRUE")
| Accuracy | F1 Score | Precision | Recall |
|---|---|---|---|
| 0.969152 | 0.969748 | 0.969848 | 0.969661 |
y_pred = model.predict(X_test)
25/25 [==============================] - 1s 52ms/step
labels=list(y_train.columns)
print(labels)
['COVID', 'NORMAL', 'Viral Pneumonia']
from sklearn.metrics import classification_report, confusion_matrix
from mlxtend.plotting import plot_confusion_matrix
import matplotlib.pyplot as plt # needed for plt.show() below
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from tensorflow.keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
CM = confusion_matrix(y_test_labels,predicted_labels, labels=['COVID', 'NORMAL', 'Viral Pneumonia'])
fig, ax = plot_confusion_matrix(conf_mat=CM , figsize=(5, 5))
plt.show()
#print(confusion_matrix(y_true, y_pred))
print('Classification Report')
target_names = ['COVID', 'NORMAL', 'Viral Pneumonia']
print(classification_report(y_test_labels,predicted_labels, target_names=target_names, digits=4))
Classification Report
                 precision    recall  f1-score   support

          COVID     0.9874    0.9833    0.9854       240
         NORMAL     0.9594    0.9665    0.9630       269
Viral Pneumonia     0.9627    0.9591    0.9609       269

       accuracy                         0.9692       778
      macro avg     0.9698    0.9697    0.9697       778
   weighted avg     0.9692    0.9692    0.9692       778
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle
from sklearn.metrics import roc_curve, auc, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
n_classes=3
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_true_test, y_pred[:,i], pos_label=i)
roc_auc[i] = auc(fpr[i], tpr[i])
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at this points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
plt.figure()
lw = 2
plt.plot(
fpr["macro"],
tpr["macro"],
label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
color="navy",
linestyle=":",
linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Pneumonia vs Rest (area = {:0.4f})'.format(roc_auc[2]))
plt.figure(1)
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
# Zoom in view of the upper left corner.
plt.figure(2)
lw = 2
plt.plot(
fpr["macro"],
tpr["macro"],
label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
color="navy",
linestyle=":",
linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Pneumonia vs Rest (area = {:0.4f})'.format(roc_auc[2]))
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim(0, 0.2)
plt.ylim(0.8, 1)
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
import tensorflow.keras.backend as K
#print(model.get_config()) # Full configuration to fit keras model
print(K.eval(model.optimizer.get_config())) # Optimizer configuration
#print(len(model.history.epoch)) # Number of epochs
{'name': 'Adam', 'learning_rate': 1.2500001e-07, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
from tensorflow.keras.utils import plot_model
plot_model(model, to_file='model_4.png')
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow
tensorflow.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
# Let's build a SqueezeNet-style model to see how well it performs
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping, ModelCheckpoint
from tensorflow.keras.metrics import AUC
l = tf.keras.layers # syntax shortcut
# Create function to define fire modules
def fire(x, squeeze, expand):
y = l.Conv2D(filters=squeeze, kernel_size=1, padding='same', activation='relu')(x)
y1 = l.Conv2D(filters=expand//2, kernel_size=1, padding='same', activation='relu')(y) # each expand branch gets expand//2 filters (integer division), so the concatenation below restores the full expand width
y3 = l.Conv2D(filters=expand//2, kernel_size=3, padding='same', activation='relu')(y)
return tf.keras.layers.concatenate([y1, y3])
def fire_module(squeeze, expand):
return lambda x: fire(x, squeeze, expand)
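To confirm the channel bookkeeping, a fire module can be applied to a dummy input: each expand branch contributes expand//2 filters, so the concatenated output carries the full expand width whenever expand is even. A quick shape check (the 32-filter stem conv here mirrors the model's first layer):

# sanity check: fire_module(8, 16) outputs 16 channels (8 from the 1x1 branch + 8 from the 3x3 branch)
dummy = tf.keras.layers.Input(shape=(192, 192, 3))
stem = tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu')(dummy)
print(fire_module(8, 16)(stem).shape)  # (None, 192, 192, 16)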
with tf.device('/device:GPU:0'):
x = tf.keras.layers.Input(shape=[192,192, 3]) # input is 192x192 pixels RGB
y = tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu')(x)
y = fire_module(8, 16)(y)
y = tf.keras.layers.BatchNormalization()(y)
y = tf.keras.layers.MaxPooling2D(pool_size=2)(y)
y = fire_module(16, 32)(y)
y = tf.keras.layers.BatchNormalization()(y)
y = fire_module(26, 32)(y)
y = tf.keras.layers.BatchNormalization()(y)
y = fire_module(32, 64)(y)
y = tf.keras.layers.BatchNormalization()(y)
y = tf.keras.layers.MaxPooling2D(pool_size=2)(y)
y = fire_module(32, 64)(y)
y = tf.keras.layers.BatchNormalization()(y)
y = fire_module(64, 128)(y)
y = tf.keras.layers.BatchNormalization()(y)
y = fire_module(64, 128)(y)
y = tf.keras.layers.BatchNormalization()(y)
y = fire_module(128, 256)(y)
y = tf.keras.layers.BatchNormalization()(y)
y = tf.keras.layers.MaxPooling2D(pool_size=2)(y)
y = fire_module(128, 256)(y)
y = tf.keras.layers.BatchNormalization()(y)
y = tf.keras.layers.MaxPooling2D(pool_size=2)(y)
y = tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu')(y)
y = tf.keras.layers.BatchNormalization()(y)
y = tf.keras.layers.GlobalAveragePooling2D()(y) # Takes average of h x w for each channel and returns 1 scalar value per channel
y = tf.keras.layers.Dense(3, activation='softmax')(y) # parameter count: 64 GAP channels x 3 output nodes + 3 biases = 195
model = tf.keras.Model(x, y)
es= EarlyStopping(monitor='val_loss', patience=15, verbose=0, mode='min')
red_lr= ReduceLROnPlateau(monitor='val_loss',patience=2,verbose=1,factor=0.05)
mc = ModelCheckpoint('best_model_5_non-aug.h5', monitor='val_loss',mode='min', verbose=1, save_best_only=True)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc', 'AUC'])
model.fit(X_train, y_train,
epochs = 40, verbose=1,validation_split=0.20,batch_size=32,callbacks=[red_lr, mc, es])
Epoch 1/25: loss 0.4285 - acc 0.8395; val_acc improved from -inf to 0.31350 (val_loss 3.6241), saving model to best_model_5.h5
Epochs 2-5: training accuracy climbed to 0.9690 while val_acc stalled near 0.3633 with val_loss between 4.15 and 6.50; ReduceLROnPlateau cut the learning rate to 5.0000e-05 at epoch 4
Epochs 6-13: val_acc recovered rapidly (0.3762, 0.5514, 0.7878, 0.9019, 0.9341, 0.9662, 0.9743), peaking at 0.97910 at epoch 13 (val_loss 0.0890)
[Epochs 14-25 condensed: val_acc never improved on 0.97910; ReduceLROnPlateau cut the learning rate stepwise from 2.5000e-06 down to 7.8125e-13; training metrics settled at loss ~0.032, acc ~0.995, val_loss 0.0610, val_acc 0.9791.]
from tensorflow.keras.models import load_model
#model=load_model("best_model_5_non-aug.h5")
y_test_array = y_test.to_numpy()
print(y_test_array)
[[0 0 1]
 [1 0 0]
 [0 1 0]
 ...
 [1 0 0]
 [1 0 0]
 [0 0 1]]
y_true_test = np.argmax(y_test_array,axis=1)
print(y_true_test)
print(y_true_test.shape)
print(y_true_test.dtype)
[2 0 1 1 2 1 0 1 2 0 1 1 2 1 1 0 0 1 0 1 1 1 1 0 1 2 2 1 2 0 2 1 0 2 1 1 0 2 1 2 0 2 1 2 0 1 2 2 0 0 2 0 2 0 2 2 1 1 1 0 0 2 0 1 0 0 1 2 2 2 0 2 2 1 0 0 2 1 0 1 2 2 0 0 1 0 1 2 1 1 0 1 2 1 1 1 1 1 2 1 1 1 2 0 0 2 2 1 2 2 2 1 1 1 2 2 2 0 1 0 2 2 1 2 2 0 2 1 1 1 1 1 2 1 1 2 1 0 1 1 0 1 0 0 1 1 2 2 0 0 0 0 1 0 0 2 0 0 1 1 2 0 2 1 0 2 1 1 1 1 2 0 1 1 1 2 2 0 1 0 2 2 2 2 0 2 1 0 1 1 0 2 0 0 0 0 1 1 0 1 0 0 2 1 2 1 1 2 0 0 0 2 2 2 0 0 0 2 0 1 2 2 0 2 1 1 2 0 2 2 2 0 1 1 0 2 0 2 2 0 2 1 0 1 1 2 2 0 1 1 2 2 2 1 1 2 2 2 1 2 2 2 1 2 1 0 2 0 0 2 0 2 0 2 2 2 0 1 1 2 2 1 0 1 2 1 2 1 1 0 2 2 0 2 2 2 2 2 1 0 2 0 2 1 2 1 1 1 1 0 0 1 1 0 0 0 2 1 2 1 2 1 1 0 2 1 0 0 0 0 0 2 1 2 0 1 0 0 2 1 0 1 2 0 0 0 2 2 2 0 1 2 0 0 2 0 1 2 1 2 0 1 1 2 2 1 0 0 2 0 0 2 0 2 0 1 1 2 2 2 2 0 1 1 2 2 1 0 2 2 1 1 0 0 1 2 1 1 0 0 0 1 2 2 0 2 0 1 2 0 0 0 0 2 2 1 1 0 2 1 0 1 2 0 0 2 2 1 2 0 1 1 0 2 1 1 0 1 0 1 0 1 0 0 1 0 0 1 2 1 0 1 1 0 0 2 1 2 1 1 1 0 0 0 1 2 2 2 2 2 1 2 2 2 2 2 1 2 0 2 1 1 1 1 2 0 0 2 2 0 1 1 1 0 0 0 2 2 1 1 0 2 2 0 0 1 0 0 1 1 2 1 0 2 0 0 2 2 0 2 2 1 2 0 0 0 1 2 0 0 1 1 2 2 1 1 2 1 1 0 2 0 2 2 0 0 1 2 2 1 0 1 0 0 2 0 1 1 1 0 2 1 2 1 1 0 0 2 1 2 1 1 1 2 1 1 1 2 0 2 2 1 0 0 2 1 0 2 1 1 1 2 0 1 0 2 1 1 2 2 1 1 2 0 2 2 1 2 0 2 2 2 1 0 0 0 0 1 1 2 0 2 1 1 1 0 2 0 0 2 2 2 2 0 2 1 1 2 1 1 0 1 1 2 1 0 2 1 0 1 0 2 1 2 1 0 2 1 0 2 2 1 2 1 1 1 2 2 2 1 0 2 0 2 2 1 2 1 1 0 0 1 2 1 2 0 1 1 1 0 1 0 2 1 1 0 0 2 0 2 1 2 1 2 1 0 0 2 2 1 2 2 1 1 0 2 2 0 2 0 2 1 2 0 0 1 0 2 2 1 1 2 1 0 0 0 2 1 0 1 0 1 0 1 0 2 1 2 1 1 0 2 0 2 1 0 0 1 0 0 1 2 2 2 0 2 0 2 1 2 2 0 0 1 1 0 0 2] (778,) int64
seed_value = 42
import os
os.environ['PYTHONHASHSEED'] = str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf  # imported as tf so the compat.v1 calls below resolve
tf.random.set_seed(seed_value)
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
def predict_classes(x):
    """Return class indices from predicted probabilities (argmax for multiclass, 0.5 threshold for binary)."""
    proba = x
    if proba.shape[-1] > 1:
        return proba.argmax(axis=-1)
    else:
        return (proba > 0.5).astype("int32")
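# Note: Keras removed the old Sequential.predict_classes helper in TensorFlow 2.6; this
# function reproduces its behavior (argmax over class probabilities for multiclass
# output, a 0.5 threshold for a single sigmoid output).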
print(predict_classes(model.predict(X_test)))
prediction_index=predict_classes(model.predict(X_test))
labels=pd.get_dummies(y_train).columns
# Iterate through all predicted indices using map method
predicted_labels=list(map(lambda x: labels[x], prediction_index))
#print(predicted_labels)
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
from sklearn.metrics import mean_absolute_error
import pandas as pd
from math import sqrt
def model_eval_metrics(y_true, y_pred, classification="TRUE"):
    # Classification branch returns accuracy/F1/precision/recall; otherwise regression metrics.
    if classification == "TRUE":
        accuracy_eval = accuracy_score(y_true, y_pred)
        f1_score_eval = f1_score(y_true, y_pred, average="macro", zero_division=0)
        precision_eval = precision_score(y_true, y_pred, average="macro", zero_division=0)
        recall_eval = recall_score(y_true, y_pred, average="macro", zero_division=0)
        metricdata = {'Accuracy': [accuracy_eval], 'F1 Score': [f1_score_eval], 'Precision': [precision_eval], 'Recall': [recall_eval]}
        finalmetricdata = pd.DataFrame(metricdata, index=[''])
    else:
        mse_eval = mean_squared_error(y_true, y_pred)
        rmse_eval = sqrt(mean_squared_error(y_true, y_pred))
        mae_eval = mean_absolute_error(y_true, y_pred)
        r2_eval = r2_score(y_true, y_pred)
        metricdata = {'MSE': [mse_eval], 'RMSE': [rmse_eval], 'MAE': [mae_eval], 'R2': [r2_eval]}
        finalmetricdata = pd.DataFrame(metricdata, index=[''])
    return finalmetricdata
25/25 [==============================] - 3s 56ms/step [2 0 2 1 2 1 0 1 2 0 1 1 2 1 1 0 0 1 0 1 1 1 1 0 1 2 2 1 2 0 2 1 0 2 1 1 0 2 1 2 0 2 1 2 0 1 2 2 0 0 2 0 2 0 2 2 1 1 1 0 0 2 0 1 0 0 1 2 2 2 0 2 2 1 0 0 2 1 0 1 2 2 0 0 1 0 1 2 1 1 0 1 2 1 1 1 1 1 2 1 1 1 2 0 0 2 2 1 2 2 2 1 1 1 2 2 1 0 1 0 2 2 1 2 2 0 2 1 1 1 1 1 2 1 1 2 1 0 1 1 0 1 0 0 1 0 2 1 0 0 0 0 1 0 0 2 0 0 1 1 2 0 2 1 0 2 1 1 1 1 2 0 2 1 1 2 2 0 1 0 2 2 2 2 0 2 1 0 1 1 0 1 0 0 0 2 1 1 0 1 0 0 0 1 2 1 1 2 0 0 0 2 1 2 0 0 0 2 0 1 2 2 0 2 1 1 2 0 2 2 2 0 1 1 0 2 0 2 2 0 1 1 0 1 1 2 2 0 1 1 2 2 1 1 1 2 2 2 1 2 2 1 1 2 1 0 2 2 0 2 0 2 0 2 2 2 0 1 1 2 2 1 0 1 2 1 2 1 0 0 2 2 0 2 1 2 2 2 1 0 2 1 2 1 2 1 1 1 1 0 0 1 2 0 0 0 1 1 2 1 2 1 1 0 2 1 0 0 0 0 0 2 1 2 0 1 0 0 2 1 0 1 2 0 0 0 2 2 2 0 1 2 2 0 2 0 2 2 1 2 0 1 1 2 2 1 0 0 2 0 0 2 0 2 0 1 1 2 2 2 2 0 1 1 2 2 1 0 2 1 1 1 0 0 1 2 1 1 0 0 0 1 2 2 2 2 0 1 1 0 0 0 2 2 2 1 1 0 2 1 0 1 2 0 0 2 2 1 2 0 1 1 0 2 2 1 0 2 0 1 0 1 0 0 1 0 0 1 2 2 0 2 1 0 0 2 1 2 1 1 1 0 2 0 1 2 2 2 2 2 1 2 2 2 2 2 1 2 0 2 1 1 1 1 2 0 0 2 2 0 1 1 1 0 0 0 2 2 1 1 0 2 2 0 0 1 0 0 1 1 2 1 0 2 0 0 2 1 0 2 2 1 2 0 0 0 1 2 0 0 1 1 2 2 1 1 2 1 1 0 2 0 2 2 0 0 1 0 2 1 0 1 0 0 2 0 1 1 1 0 0 1 2 1 1 0 0 2 1 2 1 1 1 2 1 1 2 2 0 2 2 1 0 0 2 1 0 2 1 1 1 2 0 1 0 1 1 1 2 2 1 1 2 0 1 2 1 2 0 2 2 2 1 0 0 0 0 1 1 2 0 2 1 1 1 0 2 0 0 2 2 2 2 0 2 1 1 2 1 1 0 1 1 2 1 0 2 1 0 1 0 2 1 2 1 0 2 1 0 2 2 1 2 1 1 1 2 2 2 1 0 2 0 2 2 1 2 1 1 0 0 1 2 1 2 0 1 1 1 0 1 0 2 1 1 0 0 2 0 2 1 2 1 2 1 0 0 2 1 1 2 2 1 1 0 2 2 0 2 0 2 1 2 0 0 2 0 2 2 1 1 2 1 0 0 0 2 1 0 1 0 2 0 1 0 2 1 2 1 1 0 2 0 2 0 0 0 1 0 0 1 2 2 2 0 1 0 2 1 2 2 0 0 2 1 0 0 2] 25/25 [==============================] - 1s 41ms/step
# y_test is one-hot encoded, so we need to extract labels before running model_eval_metrics()
y_test_labels = list(y_test.idxmax(axis=1))  # idxmax recovers the true class label for each row of the one-hot y_test DataFrame
# get metrics
model_eval_metrics(y_test_labels, predicted_labels, classification="TRUE")
| Accuracy | F1 Score | Precision | Recall |
|---|---|---|---|
| 0.947301 | 0.948233 | 0.948334 | 0.948146 |
y_pred = model.predict(X_test)
25/25 [==============================] - 1s 43ms/step
labels=list(y_train.columns)
print(labels)
['COVID', 'NORMAL', 'Viral Pneumonia']
from sklearn.metrics import classification_report, confusion_matrix
from mlxtend.plotting import plot_confusion_matrix
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow
tensorflow.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
CM = confusion_matrix(y_test_labels,predicted_labels, labels=['COVID', 'NORMAL', 'Viral Pneumonia'])
fig, ax = plot_confusion_matrix(conf_mat=CM , figsize=(5, 5))
plt.show()
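# Optional (an addition, not part of the original run): mlxtend's plot_confusion_matrix
# accepts a class_names argument, so the axes can show the label strings instead of
# integer indices.
fig, ax = plot_confusion_matrix(conf_mat=CM, class_names=['COVID', 'NORMAL', 'Viral Pneumonia'], figsize=(5, 5))
plt.show()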
#print(confusion_matrix(y_true, y_pred))
print('Classification Report')
target_names = ['COVID', 'NORMAL', 'Viral Pneumonia']
print(classification_report(y_test_labels,predicted_labels, target_names=target_names, digits=4))
Classification Report
                  precision    recall  f1-score   support

          COVID     0.9749    0.9708    0.9729       240
         NORMAL     0.9373    0.9442    0.9407       269
Viral Pneumonia     0.9328    0.9294    0.9311       269

       accuracy                         0.9473       778
      macro avg     0.9483    0.9481    0.9482       778
   weighted avg     0.9473    0.9473    0.9473       778
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle
from sklearn.metrics import roc_curve, auc, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
n_classes=3
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
    fpr[i], tpr[i], _ = roc_curve(y_true_test, y_pred[:, i], pos_label=i)
    roc_auc[i] = auc(fpr[i], tpr[i])
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at these points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
    mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])
# Finally, average it and compute the AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
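# Cross-check (an addition, sketch): sklearn's roc_auc_score computes the macro
# one-vs-rest AUC directly; it averages the per-class AUCs, so it can differ slightly
# from the AUC of the interpolated macro-average curve computed above.
print(roc_auc_score(y_true_test, y_pred, multi_class="ovr", average="macro"))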
# Plot all ROC curves
plt.figure()
lw = 2
plt.plot(
fpr["macro"],
tpr["macro"],
label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
color="navy",
linestyle=":",
linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Viral Pneumonia vs Rest (area = {:0.4f})'.format(roc_auc[2]))
plt.figure(1)
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
# Zoom in view of the upper left corner.
plt.figure(2)
lw = 2
plt.plot(
fpr["macro"],
tpr["macro"],
label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
color="navy",
linestyle=":",
linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Viral Pneumonia vs Rest (area = {:0.4f})'.format(roc_auc[2]))
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim(0, 0.2)
plt.ylim(0.8, 1)
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
import tensorflow.keras.backend as K
#print(model.get_config()) # Full configuration to fit keras model
print(K.eval(model.optimizer.get_config())) # Optimizer configuration
#print(len(model.history.epoch)) # Number of epochs
{'name': 'Adam', 'learning_rate': 2.5000002e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
from tensorflow.keras.utils import plot_model
plot_model(model, to_file='model_5.png')
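If the bare layer graph is hard to read, plot_model can also annotate tensor shapes via its standard show_shapes option (the output filename below is just an illustrative choice):
plot_model(model, to_file='model_5_shapes.png', show_shapes=True)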
seed_value = 42
import os
os.environ['PYTHONHASHSEED'] = str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf  # imported as tf so the compat.v1 calls below resolve
tf.random.set_seed(seed_value)
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, EarlyStopping
from tensorflow.keras.metrics import AUC
l = tf.keras.layers
# Create function to define fire modules
def fire(x, squeeze, expand):
    y = l.Conv2D(filters=squeeze, kernel_size=1, padding='same', activation='relu')(x)
    # note: // is integer division, so each expand branch below gets expand//2 filters;
    # concatenating six branches yields 6 * (expand//2) = 3 * expand output channels.
    y1 = l.Conv2D(filters=expand//2, kernel_size=1, padding='same', activation='relu')(y)
    y2 = l.Conv2D(filters=expand//2, kernel_size=16, padding='same', activation='relu')(y)
    y3 = l.Conv2D(filters=expand//2, kernel_size=16, padding='same', activation='relu')(y)
    y4 = l.Conv2D(filters=expand//2, kernel_size=32, padding='same', activation='relu')(y)
    y5 = l.Conv2D(filters=expand//2, kernel_size=32, padding='same', activation='relu')(y)
    return tf.keras.layers.concatenate([y1, y2, y3, y3, y4, y5])  # note: y3 appears twice in this concatenation, as run here
def fire_module(squeeze, expand):
    return lambda x: fire(x, squeeze, expand)
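A quick sanity check on the fire-module output shape (a sketch; probe is just a dummy input I introduce here, not part of the original pipeline):
probe = tf.keras.layers.Input(shape=(192, 192, 3))
print(fire_module(24, 48)(probe).shape)  # six branches of 48//2 = 24 filters each -> (None, 192, 192, 144)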
with tf.device('/device:GPU:0'):
    x = tf.keras.layers.Input(shape=[192, 192, 3])  # input is 192x192-pixel RGB
    y = tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu')(x)
    y = fire_module(24, 48)(y)
    y = fire_module(24, 48)(y)
    y = tf.keras.layers.MaxPooling2D(pool_size=2)(y)
    y = fire_module(24, 48)(y)
    y = fire_module(24, 48)(y)
    y = tf.keras.layers.MaxPooling2D(pool_size=2)(y)
    y = fire_module(24, 48)(y)
    y = fire_module(24, 48)(y)
    y = tf.keras.layers.GlobalAveragePooling2D()(y)  # averages each channel over h x w, returning one scalar per channel
    y = tf.keras.layers.Dense(3, activation='softmax')(y)  # Dense params after GAP = (channels in previous layer + 1 bias) * 3 output nodes
    model = tf.keras.Model(x, y)
    es = EarlyStopping(monitor='val_loss', patience=15, verbose=0, mode='min')
    red_lr = ReduceLROnPlateau(monitor='val_loss', patience=2, verbose=1, factor=0.20)
    mc = ModelCheckpoint('best_model_6_non-aug.h5', monitor='val_loss', mode='min', verbose=1, save_best_only=True)
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc', 'AUC'])
    model.fit(X_train, y_train, epochs=40, verbose=1, validation_split=0.20, batch_size=50, callbacks=[red_lr, mc, es])
Epoch  1/25 - 275s - loss: 107.6702 - acc: 0.3399 - val_loss: 1.0999 - val_acc: 0.3232 - lr: 1.0000e-03 | val_acc improved from -inf to 0.32315, saving model to best_model_6.h5
Epoch  2/25 - 93s - loss: 1.0984 - acc: 0.3504 - val_loss: 1.0987 - val_acc: 0.3232 - lr: 1.0000e-03
Epoch  3/25 - 93s - loss: 1.0979 - acc: 0.3250 - val_loss: 1.0979 - val_acc: 0.3232 - lr: 1.0000e-03 | ReduceLROnPlateau: lr reduced to 2.0000e-04
Epochs 4-25 - ~93s each - loss stuck at 1.0973-1.0974 - acc: 0.3504 - val_loss: 1.0979-1.0980 - val_acc: 0.3232 throughout; val_acc never improved from 0.32315, and ReduceLROnPlateau fired on every odd epoch (5, 7, ..., 25), stepping the learning rate from 2.0000e-04 down to 4.0960e-12
from tensorflow.keras.models import load_model
#model=load_model("best_model_6_non-aug.h5")
y_test_array = y_test.to_numpy()
print(y_test_array)
[[0 0 1]
 [1 0 0]
 [0 1 0]
 ...
 [1 0 0]
 [1 0 0]
 [0 0 1]]
y_true_test = np.argmax(y_test_array,axis=1)
print(y_true_test)
print(y_true_test.shape)
print(y_true_test.dtype)
[2 0 1 1 2 1 0 1 2 0 1 1 2 1 1 0 0 1 0 1 1 1 1 0 1 2 2 1 2 0 2 1 0 2 1 1 0 2 1 2 0 2 1 2 0 1 2 2 0 0 2 0 2 0 2 2 1 1 1 0 0 2 0 1 0 0 1 2 2 2 0 2 2 1 0 0 2 1 0 1 2 2 0 0 1 0 1 2 1 1 0 1 2 1 1 1 1 1 2 1 1 1 2 0 0 2 2 1 2 2 2 1 1 1 2 2 2 0 1 0 2 2 1 2 2 0 2 1 1 1 1 1 2 1 1 2 1 0 1 1 0 1 0 0 1 1 2 2 0 0 0 0 1 0 0 2 0 0 1 1 2 0 2 1 0 2 1 1 1 1 2 0 1 1 1 2 2 0 1 0 2 2 2 2 0 2 1 0 1 1 0 2 0 0 0 0 1 1 0 1 0 0 2 1 2 1 1 2 0 0 0 2 2 2 0 0 0 2 0 1 2 2 0 2 1 1 2 0 2 2 2 0 1 1 0 2 0 2 2 0 2 1 0 1 1 2 2 0 1 1 2 2 2 1 1 2 2 2 1 2 2 2 1 2 1 0 2 0 0 2 0 2 0 2 2 2 0 1 1 2 2 1 0 1 2 1 2 1 1 0 2 2 0 2 2 2 2 2 1 0 2 0 2 1 2 1 1 1 1 0 0 1 1 0 0 0 2 1 2 1 2 1 1 0 2 1 0 0 0 0 0 2 1 2 0 1 0 0 2 1 0 1 2 0 0 0 2 2 2 0 1 2 0 0 2 0 1 2 1 2 0 1 1 2 2 1 0 0 2 0 0 2 0 2 0 1 1 2 2 2 2 0 1 1 2 2 1 0 2 2 1 1 0 0 1 2 1 1 0 0 0 1 2 2 0 2 0 1 2 0 0 0 0 2 2 1 1 0 2 1 0 1 2 0 0 2 2 1 2 0 1 1 0 2 1 1 0 1 0 1 0 1 0 0 1 0 0 1 2 1 0 1 1 0 0 2 1 2 1 1 1 0 0 0 1 2 2 2 2 2 1 2 2 2 2 2 1 2 0 2 1 1 1 1 2 0 0 2 2 0 1 1 1 0 0 0 2 2 1 1 0 2 2 0 0 1 0 0 1 1 2 1 0 2 0 0 2 2 0 2 2 1 2 0 0 0 1 2 0 0 1 1 2 2 1 1 2 1 1 0 2 0 2 2 0 0 1 2 2 1 0 1 0 0 2 0 1 1 1 0 2 1 2 1 1 0 0 2 1 2 1 1 1 2 1 1 1 2 0 2 2 1 0 0 2 1 0 2 1 1 1 2 0 1 0 2 1 1 2 2 1 1 2 0 2 2 1 2 0 2 2 2 1 0 0 0 0 1 1 2 0 2 1 1 1 0 2 0 0 2 2 2 2 0 2 1 1 2 1 1 0 1 1 2 1 0 2 1 0 1 0 2 1 2 1 0 2 1 0 2 2 1 2 1 1 1 2 2 2 1 0 2 0 2 2 1 2 1 1 0 0 1 2 1 2 0 1 1 1 0 1 0 2 1 1 0 0 2 0 2 1 2 1 2 1 0 0 2 2 1 2 2 1 1 0 2 2 0 2 0 2 1 2 0 0 1 0 2 2 1 1 2 1 0 0 0 2 1 0 1 0 1 0 1 0 2 1 2 1 1 0 2 0 2 1 0 0 1 0 0 1 2 2 2 0 2 0 2 1 2 2 0 0 1 1 0 0 2] (778,) int64
seed_value = 42
import os
os.environ['PYTHONHASHSEED'] = str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf  # imported as tf so the compat.v1 calls below resolve
tf.random.set_seed(seed_value)
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
def predict_classes(x):
    """Return class indices from predicted probabilities (argmax for multiclass, 0.5 threshold for binary)."""
    proba = x
    if proba.shape[-1] > 1:
        return proba.argmax(axis=-1)
    else:
        return (proba > 0.5).astype("int32")
print(predict_classes(model.predict(X_test)))
prediction_index=predict_classes(model.predict(X_test))
labels=pd.get_dummies(y_train).columns
# Iterate through all predicted indices using map method
predicted_labels=list(map(lambda x: labels[x], prediction_index))
#print(predicted_labels)
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
from sklearn.metrics import mean_absolute_error
import pandas as pd
from math import sqrt
def model_eval_metrics(y_true, y_pred, classification="TRUE"):
    # Classification branch returns accuracy/F1/precision/recall; otherwise regression metrics.
    if classification == "TRUE":
        accuracy_eval = accuracy_score(y_true, y_pred)
        f1_score_eval = f1_score(y_true, y_pred, average="macro", zero_division=0)
        precision_eval = precision_score(y_true, y_pred, average="macro", zero_division=0)
        recall_eval = recall_score(y_true, y_pred, average="macro", zero_division=0)
        metricdata = {'Accuracy': [accuracy_eval], 'F1 Score': [f1_score_eval], 'Precision': [precision_eval], 'Recall': [recall_eval]}
        finalmetricdata = pd.DataFrame(metricdata, index=[''])
    else:
        mse_eval = mean_squared_error(y_true, y_pred)
        rmse_eval = sqrt(mean_squared_error(y_true, y_pred))
        mae_eval = mean_absolute_error(y_true, y_pred)
        r2_eval = r2_score(y_true, y_pred)
        metricdata = {'MSE': [mse_eval], 'RMSE': [rmse_eval], 'MAE': [mae_eval], 'R2': [r2_eval]}
        finalmetricdata = pd.DataFrame(metricdata, index=[''])
    return finalmetricdata
25/25 [==============================] - 38s 637ms/step [2 0 2 1 2 1 0 1 2 0 1 1 2 1 1 0 0 1 0 1 1 1 1 0 1 2 2 1 2 0 2 1 0 2 1 1 0 2 1 2 0 2 1 2 0 1 2 2 0 0 1 0 2 0 2 2 1 1 1 0 0 2 0 1 0 0 1 2 2 2 0 2 2 1 0 0 2 1 0 1 1 2 0 0 1 0 1 2 1 1 0 1 2 1 1 1 1 2 2 1 1 1 2 0 0 2 2 1 2 2 2 0 1 1 2 2 2 0 1 0 2 2 1 2 2 0 2 1 1 1 1 1 2 1 1 2 1 0 1 1 0 1 0 0 1 2 2 2 0 0 0 0 1 0 0 2 0 0 1 1 2 0 2 1 0 2 1 1 2 1 2 0 1 1 1 2 2 0 1 0 2 2 2 2 0 2 1 0 1 1 0 1 0 0 0 0 1 1 0 1 0 0 2 1 2 1 1 2 0 0 0 2 1 2 0 0 0 2 0 1 2 2 0 2 1 1 2 0 2 2 2 0 1 1 0 2 0 2 2 0 2 1 0 1 2 2 2 0 1 1 2 2 2 1 1 2 2 2 1 2 2 2 1 2 1 0 2 0 0 2 2 2 0 2 2 2 0 1 1 2 2 1 0 1 2 1 2 1 1 0 2 2 0 2 2 2 2 2 1 0 2 0 2 1 2 1 1 1 1 0 0 1 2 0 0 0 2 1 2 1 2 1 1 0 2 1 0 0 0 0 0 2 1 2 0 1 0 0 2 1 1 1 2 0 0 0 2 2 2 0 1 2 2 0 2 0 2 2 1 2 0 1 1 2 2 1 0 0 2 0 0 2 0 2 0 1 1 2 2 2 2 0 1 1 2 2 1 0 2 2 1 1 0 0 1 2 1 1 0 0 0 1 2 2 0 2 0 1 1 0 0 0 2 2 2 1 1 0 2 1 0 1 2 0 0 2 2 1 2 0 1 1 0 2 1 1 0 1 0 1 0 1 0 0 1 0 0 1 2 2 0 1 1 0 0 2 1 2 1 1 1 0 0 0 1 0 2 2 2 2 1 2 2 2 2 2 1 2 0 2 1 1 1 1 2 0 0 2 2 0 1 1 1 0 0 0 2 2 1 1 0 2 2 0 0 1 0 0 1 1 0 1 0 2 0 0 2 1 0 2 2 1 2 0 0 0 1 2 0 2 1 2 2 2 1 1 2 1 1 0 2 0 2 2 0 0 1 2 2 1 0 1 0 0 2 0 1 1 1 0 0 1 2 1 1 0 0 2 1 2 1 1 1 2 1 1 2 2 0 2 2 1 0 0 2 1 0 2 1 1 2 2 0 1 0 1 1 1 2 2 1 1 2 0 2 2 1 0 0 2 2 2 1 0 0 0 0 2 1 2 0 2 1 1 1 0 2 0 0 2 2 2 2 0 2 1 1 2 1 1 0 1 1 2 1 0 2 1 0 1 0 2 1 2 1 0 2 1 0 2 2 1 2 1 1 1 2 2 2 1 0 2 0 2 2 1 2 1 1 0 0 1 2 1 2 0 1 1 1 0 1 0 2 1 1 0 0 2 0 2 1 2 1 2 1 0 0 1 1 1 2 1 1 1 2 2 2 0 2 0 2 1 2 0 0 1 0 2 2 1 1 2 1 0 0 0 2 1 0 1 0 2 0 1 0 2 1 0 1 1 0 2 2 2 0 0 0 1 0 0 1 2 2 2 0 2 0 2 1 2 2 0 0 2 1 0 0 2] 25/25 [==============================] - 6s 266ms/step
# y_test is one-hot encoded, so we need to extract labels before running model_eval_metrics()
y_test_labels = list(y_test.idxmax(axis=1))  # idxmax recovers the true class label for each row of the one-hot y_test DataFrame
model_eval_metrics(y_test_labels, predicted_labels, classification="TRUE")
| Accuracy | F1 Score | Precision | Recall |
|---|---|---|---|
| 0.951157 | 0.951907 | 0.952058 | 0.951864 |
y_pred = model.predict(X_test)
25/25 [==============================] - 7s 270ms/step
labels=list(y_train.columns)
print(labels)
['COVID', 'NORMAL', 'Viral Pneumonia']
from sklearn.metrics import classification_report, confusion_matrix
from mlxtend.plotting import plot_confusion_matrix
seed_value = 42
import os
os.environ['PYTHONHASHSEED'] = str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf  # imported as tf so the compat.v1 calls below resolve
tf.random.set_seed(seed_value)
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
CM = confusion_matrix(y_test_labels,predicted_labels, labels=['COVID', 'NORMAL', 'Viral Pneumonia'])
fig, ax = plot_confusion_matrix(conf_mat=CM , figsize=(5, 5))
plt.show()
#print(confusion_matrix(y_true, y_pred))
print('Classification Report')
target_names = ['COVID', 'NORMAL', 'Viral Pneumonia']
print(classification_report(y_test_labels,predicted_labels, target_names=target_names, digits=4))
Classification Report
                  precision    recall  f1-score   support

          COVID     0.9708    0.9708    0.9708       240
         NORMAL     0.9583    0.9405    0.9493       269
Viral Pneumonia     0.9270    0.9442    0.9355       269

       accuracy                         0.9512       778
      macro avg     0.9521    0.9519    0.9519       778
   weighted avg     0.9514    0.9512    0.9512       778
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle
from sklearn.metrics import roc_curve, auc, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
n_classes=3
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
    fpr[i], tpr[i], _ = roc_curve(y_true_test, y_pred[:, i], pos_label=i)
    roc_auc[i] = auc(fpr[i], tpr[i])
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at these points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
    mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])
# Finally, average it and compute the AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
plt.figure()
lw = 2
plt.plot(
fpr["macro"],
tpr["macro"],
label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
color="navy",
linestyle=":",
linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Viral Pneumonia vs Rest (area = {:0.4f})'.format(roc_auc[2]))
plt.figure(1)
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
# Zoom in view of the upper left corner.
plt.figure(2)
lw = 2
plt.plot(
fpr["macro"],
tpr["macro"],
label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
color="navy",
linestyle=":",
linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Viral Pneumonia vs Rest (area = {:0.4f})'.format(roc_auc[2]))
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim(0, 0.2)
plt.ylim(0.8, 1)
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
import tensorflow.keras.backend as K
#print(model.get_config()) # Full configuration to fit keras model
print(K.eval(model.optimizer.get_config())) # Optimizer configuration
#print(len(model.history.epoch)) # Number of epochs
{'name': 'Adam', 'learning_rate': 2.5600002e-09, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
from tensorflow.keras.utils import plot_model
plot_model(model, to_file='model_6.png')
seed_value = 42
import os
os.environ['PYTHONHASHSEED'] = str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf  # imported as tf so the compat.v1 calls below resolve
tf.random.set_seed(seed_value)
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.preprocessing import image
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D, Flatten
from tensorflow.keras import backend as K
from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, EarlyStopping
# Load the base model with the new input-layer shape.
IMG_SHAPE = (192, 192, 3)
# Create the base model from the pre-trained InceptionV3 model
base_model = InceptionV3(input_shape=IMG_SHAPE, include_top=False, weights='imagenet')
base_model.summary()  # note the number of trainable parameters before any layers are frozen
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/inception_v3/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5
87910968/87910968 [==============================] - 1s 0us/step
Model: "inception_v3"
[layer listing omitted: 311 layers running from input_1 (None, 192, 192, 3) through the mixed10 concatenation (None, 4, 4, 2048)]
Total params: 21,802,784
Trainable params: 21,768,352
Non-trainable params: 34,432
len(base_model.trainable_variables)
188
# Let's take a look to see how many layers are in the base model
print("Number of layers in the base model: ", len(base_model.layers))
# Freeze every layer from this index onward; the first `freeze_layers_after` layers stay trainable
freeze_layers_after = 30
for layer in base_model.layers[freeze_layers_after:]: # [n:] selects the nth element onward
    layer.trainable = False
print("Number of layers frozen in the base model: ", len(base_model.layers)-freeze_layers_after)
Number of layers in the base model: 311 Number of layers frozen in the base model: 281
len(base_model.trainable_variables) #trainable layers after freezing
18
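As a quick sanity check on the freeze (a minimal sketch, not part of the original run), the layers that remain trainable can be listed by name; they should be exactly base_model.layers[:freeze_layers_after]:
# Sanity check: list the layers left trainable after the freeze
trainable_layer_names = [layer.name for layer in base_model.layers if layer.trainable]
print(len(trainable_layer_names))  # should equal freeze_layers_after (30); compare with len(base_model.trainable_variables) above
print(trainable_layer_names[:3])   # first few, e.g. the stem conv/batch-norm layers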
# Add new GAP layer and output layer to frozen layers of original model with adjusted input
gap1 = GlobalAveragePooling2D()(base_model.layers[-1].output)
class1 = Dense(128, activation='relu')(gap1)
class1 = Dense(128, activation='relu')(class1)
output = Dense(3, activation='softmax')(class1)
# define new model
model = Model(inputs=base_model.inputs, outputs=output)
# summarize
model.summary()
Model: "model"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape          Param #   Connected to
==================================================================================================
 input_1 (InputLayer)           [(None, 192, 192, 3)]  0        []
 ... (InceptionV3 backbone layer listing through mixed10 omitted; it is identical to the base model summary above)
 global_average_pooling2d (GlobalAveragePooling2D)  (None, 2048)  0  ['mixed10[0][0]']
 dense (Dense)                  (None, 128)            262272    ['global_average_pooling2d[0][0]']
 dense_1 (Dense)                (None, 128)            16512     ['dense[0][0]']
 dense_2 (Dense)                (None, 3)              387       ['dense_1[0][0]']
==================================================================================================
Total params: 22,081,955
Trainable params: 617,539
Non-trainable params: 21,464,416
__________________________________________________________________________________________________
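The head's parameter counts can be verified by hand with the dense-layer formula inputs × units + biases: 2048 × 128 + 128 = 262,272 for dense, 128 × 128 + 128 = 16,512 for dense_1, and 128 × 3 + 3 = 387 for dense_2, matching the summary above.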
seed_value = 42
import os
os.environ['PYTHONHASHSEED'] = str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
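Even with these seeds, some GPU kernels are nondeterministic; as an optional extra (an assumption on my part, not something used in the original run), TensorFlow 2.1+ also honors an environment flag requesting deterministic op implementations where they exist:
os.environ['TF_DETERMINISTIC_OPS'] = '1'  # optional: request deterministic GPU kernels (TF 2.1+)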
from tensorflow.keras.metrics import AUC
with tf.device('/device:GPU:0'):
    es = EarlyStopping(monitor='val_loss', patience=15, verbose=0, mode='min')
    mc = ModelCheckpoint('best_model_7_non-aug.h5', monitor='val_loss', mode='min', verbose=1, save_best_only=True)
    red_lr = ReduceLROnPlateau(monitor='val_loss', patience=4, verbose=1, factor=0.05)
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc', 'AUC'])
    model.fit(X_train, y_train, batch_size=32,
              epochs=40, verbose=1, validation_split=0.20, callbacks=[mc, red_lr, es])
Epoch 1/40: 78/78 [==============================] - 25s 218ms/step - loss: 0.3650 - acc: 0.8644 - val_loss: 0.4128 - val_acc: 0.8473 - lr: 0.0010 (val_acc improved to 0.84727, saving model to best_model_7.h5)
Epochs 2-13: val_acc improved at epoch 3 (0.88424), epoch 5 (0.91961), epoch 6 (0.94855), epoch 7 (0.96302), and epoch 10 (0.96463, val_loss: 0.0871)
Epoch 14: ReduceLROnPlateau reducing learning rate to 5.0000002374872565e-05
Epochs 15-17: val_acc improved to 0.97106, 0.97588, and then 0.97749; epoch 17 also reached the lowest validation loss (0.0714)
Epoch 21: ReduceLROnPlateau reducing learning rate to 2.5000001187436284e-06
Epoch 24/40: val_acc improved to 0.97910, saving model to best_model_7.h5 (val_loss: 0.0744)
Epochs 25-40: no further improvement; ReduceLROnPlateau fired again at epochs 28, 32, 36, and 40, ending at lr 1.5625e-11
Epoch 40/40: 78/78 [==============================] - 12s 151ms/step - loss: 0.0014 - acc: 1.0000 - val_loss: 0.0744 - val_acc: 0.9791
(full per-epoch log condensed for readability)
from tensorflow.keras.models import load_model
#model=load_model("best_model_7_non-aug.h5") # uncomment to evaluate the checkpointed best model rather than the final-epoch weights
y_test_array = y_test.to_numpy()
print(y_test_array)
[[0 0 1] [1 0 0] [0 1 0] ... [1 0 0] [1 0 0] [0 0 1]]
y_true_test = np.argmax(y_test_array,axis=1)
print(y_true_test)
print(y_true_test.shape)
print(y_true_test.dtype)
[2 0 1 1 2 1 0 1 2 0 1 1 2 1 1 ... 2 2 0 0 1 1 0 0 2]
(778,)
int64
seed_value = 42
import os
os.environ['PYTHONHASHSEED'] = str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
def predict_classes(x):
    # Replacement for the removed Sequential.predict_classes(): for multiclass
    # output, return the index of the highest probability; for a single sigmoid
    # output, threshold at 0.5
    proba = x
    if proba.shape[-1] > 1:
        return proba.argmax(axis=-1)
    else:
        return (proba > 0.5).astype("int32")
print(predict_classes(model.predict(X_test)))
prediction_index=predict_classes(model.predict(X_test))
labels = y_train.columns # y_train is already one-hot encoded, so its columns are the class labels
# Map each predicted index to its class label
predicted_labels = list(map(lambda x: labels[x], prediction_index))
#print(predicted_labels)
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
from sklearn.metrics import mean_absolute_error
import pandas as pd
from math import sqrt
def model_eval_metrics(y_true, y_pred, classification=True):
    # Return a one-row DataFrame of classification or regression metrics
    if classification:
        accuracy_eval = accuracy_score(y_true, y_pred)
        f1_score_eval = f1_score(y_true, y_pred, average="macro", zero_division=0)
        precision_eval = precision_score(y_true, y_pred, average="macro", zero_division=0)
        recall_eval = recall_score(y_true, y_pred, average="macro", zero_division=0)
        metricdata = {'Accuracy': [accuracy_eval], 'F1 Score': [f1_score_eval], 'Precision': [precision_eval], 'Recall': [recall_eval]}
    else:
        mse_eval = mean_squared_error(y_true, y_pred)
        rmse_eval = sqrt(mean_squared_error(y_true, y_pred))
        mae_eval = mean_absolute_error(y_true, y_pred)
        r2_eval = r2_score(y_true, y_pred)
        metricdata = {'MSE': [mse_eval], 'RMSE': [rmse_eval], 'MAE': [mae_eval], 'R2': [r2_eval]}
    return pd.DataFrame(metricdata, index=[''])
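A quick self-contained check of the helper on made-up labels (purely illustrative) shows the expected macro-averaged behavior:
# Illustrative check: 2 of 3 correct, so accuracy is about 0.667
print(model_eval_metrics(['COVID', 'NORMAL', 'NORMAL'], ['COVID', 'NORMAL', 'COVID'], classification=True))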
25/25 [==============================] - 4s 80ms/step
[2 0 2 1 2 1 0 1 2 0 1 1 2 1 1 ... 2 2 0 0 2 1 0 0 2]
25/25 [==============================] - 1s 54ms/step
# y_test is one-hot encoded, so we need to extract labels before running model_eval_metrics()
y_test_labels = list(y_test.idxmax(axis=1)) # idxmax returns the column (class label) holding the 1 in each row
model_eval_metrics(y_test_labels, predicted_labels, classification=True)
| Accuracy | F1 Score | Precision | Recall |
|---|---|---|---|
| 0.988432 | 0.988779 | 0.988875 | 0.988698 |
y_pred = model.predict(X_test)
25/25 [==============================] - 1s 56ms/step
labels=list(y_train.columns)
print(labels)
['COVID', 'NORMAL', 'Viral Pneumonia']
from sklearn.metrics import classification_report, confusion_matrix
from mlxtend.plotting import plot_confusion_matrix
seed_value = 42
import os
os.environ['PYTHONHASHSEED'] = str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
CM = confusion_matrix(y_test_labels,predicted_labels, labels=['COVID', 'NORMAL', 'Viral Pneumonia'])
fig, ax = plot_confusion_matrix(conf_mat=CM , figsize=(5, 5))
plt.show()
#print(confusion_matrix(y_true, y_pred))
print('Classification Report')
target_names = ['COVID', 'NORMAL', 'Viral Pneumonia']
print(classification_report(y_test_labels,predicted_labels, target_names=target_names, digits=4))
Classification Report
                 precision    recall  f1-score   support

          COVID     1.0000    0.9958    0.9979       240
         NORMAL     0.9851    0.9814    0.9832       269
Viral Pneumonia     0.9815    0.9888    0.9852       269

       accuracy                         0.9884       778
      macro avg     0.9889    0.9887    0.9888       778
   weighted avg     0.9885    0.9884    0.9884       778
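Note how the report ties together: the macro averages are unweighted means of the per-class scores, e.g. macro F1 = (0.9979 + 0.9832 + 0.9852) / 3 ≈ 0.9888, while the weighted averages weight each class by its support (240, 269, and 269 images).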
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc, roc_auc_score
n_classes=3
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
    fpr[i], tpr[i], _ = roc_curve(y_true_test, y_pred[:, i], pos_label=i)
    roc_auc[i] = auc(fpr[i], tpr[i])
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at these points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
    mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
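As a cross-check on the hand-rolled macro average, scikit-learn can compute one-vs-rest AUCs directly (a sketch, not part of the original run; note that roc_auc_score averages the three per-class AUCs, which can differ slightly from the AUC of the interpolated mean curve computed above):
# One-vs-rest AUC per class, macro-averaged by scikit-learn
print(roc_auc_score(y_true_test, y_pred, multi_class="ovr", average="macro"))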
# Plot all ROC curves
plt.figure()
lw = 2
plt.plot(
fpr["macro"],
tpr["macro"],
label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
color="navy",
linestyle=":",
linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {0:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {0:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Viral Pneumonia vs Rest (area = {0:0.4f})'.format(roc_auc[2]))
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
# Zoom in view of the upper left corner.
plt.figure(2)
lw = 2
plt.plot(
fpr["macro"],
tpr["macro"],
label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
color="navy",
linestyle=":",
linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {0:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {0:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Viral Pneumonia vs Rest (area = {0:0.4f})'.format(roc_auc[2]))
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim(0, 0.2)
plt.ylim(0.8, 1)
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
import tensorflow.keras.backend as K
#print(model.get_config()) # Full configuration to fit keras model
print(K.eval(model.optimizer.get_config())) # Optimizer configuration
#print(len(model.history.epoch)) # Number of epochs
{'name': 'Adam', 'learning_rate': 2.5000002e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
from tensorflow.keras.utils import plot_model
plot_model(model, to_file='model_7.png')
Revised Version of the Above InceptionV3 Transfer Learning Model:
seed_value = 42
import os
os.environ['PYTHONHASHSEED'] = str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.preprocessing import image
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D, Flatten
from tensorflow.keras import backend as K
from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, EarlyStopping
# Add new GAP layer and output layer to frozen layers of original model with adjusted input
gap1 = GlobalAveragePooling2D()(base_model.layers[-1].output)
class1 = Dense(256, activation='relu')(gap1)
class1 = Dense(256, activation='relu')(class1)
output = Dense(3, activation='softmax')(class1)
# define new model
model = Model(inputs=base_model.inputs, outputs=output)
# summarize
model.summary()
Model: "model_1"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape          Param #   Connected to
==================================================================================================
 input_1 (InputLayer)           [(None, 192, 192, 3)]  0        []
 ... (InceptionV3 backbone layer listing truncated; it is identical to the summaries above)
ormalization) batch_normalization_56 (BatchN (None, 10, 10, 160) 480 ['conv2d_56[0][0]'] ormalization) activation_51 (Activation) (None, 10, 10, 160) 0 ['batch_normalization_51[0][0]'] activation_56 (Activation) (None, 10, 10, 160) 0 ['batch_normalization_56[0][0]'] conv2d_52 (Conv2D) (None, 10, 10, 160) 179200 ['activation_51[0][0]'] conv2d_57 (Conv2D) (None, 10, 10, 160) 179200 ['activation_56[0][0]'] batch_normalization_52 (BatchN (None, 10, 10, 160) 480 ['conv2d_52[0][0]'] ormalization) batch_normalization_57 (BatchN (None, 10, 10, 160) 480 ['conv2d_57[0][0]'] ormalization) activation_52 (Activation) (None, 10, 10, 160) 0 ['batch_normalization_52[0][0]'] activation_57 (Activation) (None, 10, 10, 160) 0 ['batch_normalization_57[0][0]'] average_pooling2d_5 (AveragePo (None, 10, 10, 768) 0 ['mixed5[0][0]'] oling2D) conv2d_50 (Conv2D) (None, 10, 10, 192) 147456 ['mixed5[0][0]'] conv2d_53 (Conv2D) (None, 10, 10, 192) 215040 ['activation_52[0][0]'] conv2d_58 (Conv2D) (None, 10, 10, 192) 215040 ['activation_57[0][0]'] conv2d_59 (Conv2D) (None, 10, 10, 192) 147456 ['average_pooling2d_5[0][0]'] batch_normalization_50 (BatchN (None, 10, 10, 192) 576 ['conv2d_50[0][0]'] ormalization) batch_normalization_53 (BatchN (None, 10, 10, 192) 576 ['conv2d_53[0][0]'] ormalization) batch_normalization_58 (BatchN (None, 10, 10, 192) 576 ['conv2d_58[0][0]'] ormalization) batch_normalization_59 (BatchN (None, 10, 10, 192) 576 ['conv2d_59[0][0]'] ormalization) activation_50 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_50[0][0]'] activation_53 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_53[0][0]'] activation_58 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_58[0][0]'] activation_59 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_59[0][0]'] mixed6 (Concatenate) (None, 10, 10, 768) 0 ['activation_50[0][0]', 'activation_53[0][0]', 'activation_58[0][0]', 'activation_59[0][0]'] conv2d_64 (Conv2D) (None, 10, 10, 192) 147456 ['mixed6[0][0]'] batch_normalization_64 (BatchN (None, 10, 10, 192) 576 ['conv2d_64[0][0]'] ormalization) activation_64 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_64[0][0]'] conv2d_65 (Conv2D) (None, 10, 10, 192) 258048 ['activation_64[0][0]'] batch_normalization_65 (BatchN (None, 10, 10, 192) 576 ['conv2d_65[0][0]'] ormalization) activation_65 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_65[0][0]'] conv2d_61 (Conv2D) (None, 10, 10, 192) 147456 ['mixed6[0][0]'] conv2d_66 (Conv2D) (None, 10, 10, 192) 258048 ['activation_65[0][0]'] batch_normalization_61 (BatchN (None, 10, 10, 192) 576 ['conv2d_61[0][0]'] ormalization) batch_normalization_66 (BatchN (None, 10, 10, 192) 576 ['conv2d_66[0][0]'] ormalization) activation_61 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_61[0][0]'] activation_66 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_66[0][0]'] conv2d_62 (Conv2D) (None, 10, 10, 192) 258048 ['activation_61[0][0]'] conv2d_67 (Conv2D) (None, 10, 10, 192) 258048 ['activation_66[0][0]'] batch_normalization_62 (BatchN (None, 10, 10, 192) 576 ['conv2d_62[0][0]'] ormalization) batch_normalization_67 (BatchN (None, 10, 10, 192) 576 ['conv2d_67[0][0]'] ormalization) activation_62 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_62[0][0]'] activation_67 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_67[0][0]'] average_pooling2d_6 (AveragePo (None, 10, 10, 768) 0 ['mixed6[0][0]'] oling2D) conv2d_60 (Conv2D) (None, 10, 10, 192) 147456 ['mixed6[0][0]'] conv2d_63 (Conv2D) (None, 10, 10, 192) 258048 
['activation_62[0][0]'] conv2d_68 (Conv2D) (None, 10, 10, 192) 258048 ['activation_67[0][0]'] conv2d_69 (Conv2D) (None, 10, 10, 192) 147456 ['average_pooling2d_6[0][0]'] batch_normalization_60 (BatchN (None, 10, 10, 192) 576 ['conv2d_60[0][0]'] ormalization) batch_normalization_63 (BatchN (None, 10, 10, 192) 576 ['conv2d_63[0][0]'] ormalization) batch_normalization_68 (BatchN (None, 10, 10, 192) 576 ['conv2d_68[0][0]'] ormalization) batch_normalization_69 (BatchN (None, 10, 10, 192) 576 ['conv2d_69[0][0]'] ormalization) activation_60 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_60[0][0]'] activation_63 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_63[0][0]'] activation_68 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_68[0][0]'] activation_69 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_69[0][0]'] mixed7 (Concatenate) (None, 10, 10, 768) 0 ['activation_60[0][0]', 'activation_63[0][0]', 'activation_68[0][0]', 'activation_69[0][0]'] conv2d_72 (Conv2D) (None, 10, 10, 192) 147456 ['mixed7[0][0]'] batch_normalization_72 (BatchN (None, 10, 10, 192) 576 ['conv2d_72[0][0]'] ormalization) activation_72 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_72[0][0]'] conv2d_73 (Conv2D) (None, 10, 10, 192) 258048 ['activation_72[0][0]'] batch_normalization_73 (BatchN (None, 10, 10, 192) 576 ['conv2d_73[0][0]'] ormalization) activation_73 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_73[0][0]'] conv2d_70 (Conv2D) (None, 10, 10, 192) 147456 ['mixed7[0][0]'] conv2d_74 (Conv2D) (None, 10, 10, 192) 258048 ['activation_73[0][0]'] batch_normalization_70 (BatchN (None, 10, 10, 192) 576 ['conv2d_70[0][0]'] ormalization) batch_normalization_74 (BatchN (None, 10, 10, 192) 576 ['conv2d_74[0][0]'] ormalization) activation_70 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_70[0][0]'] activation_74 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_74[0][0]'] conv2d_71 (Conv2D) (None, 4, 4, 320) 552960 ['activation_70[0][0]'] conv2d_75 (Conv2D) (None, 4, 4, 192) 331776 ['activation_74[0][0]'] batch_normalization_71 (BatchN (None, 4, 4, 320) 960 ['conv2d_71[0][0]'] ormalization) batch_normalization_75 (BatchN (None, 4, 4, 192) 576 ['conv2d_75[0][0]'] ormalization) activation_71 (Activation) (None, 4, 4, 320) 0 ['batch_normalization_71[0][0]'] activation_75 (Activation) (None, 4, 4, 192) 0 ['batch_normalization_75[0][0]'] max_pooling2d_3 (MaxPooling2D) (None, 4, 4, 768) 0 ['mixed7[0][0]'] mixed8 (Concatenate) (None, 4, 4, 1280) 0 ['activation_71[0][0]', 'activation_75[0][0]', 'max_pooling2d_3[0][0]'] conv2d_80 (Conv2D) (None, 4, 4, 448) 573440 ['mixed8[0][0]'] batch_normalization_80 (BatchN (None, 4, 4, 448) 1344 ['conv2d_80[0][0]'] ormalization) activation_80 (Activation) (None, 4, 4, 448) 0 ['batch_normalization_80[0][0]'] conv2d_77 (Conv2D) (None, 4, 4, 384) 491520 ['mixed8[0][0]'] conv2d_81 (Conv2D) (None, 4, 4, 384) 1548288 ['activation_80[0][0]'] batch_normalization_77 (BatchN (None, 4, 4, 384) 1152 ['conv2d_77[0][0]'] ormalization) batch_normalization_81 (BatchN (None, 4, 4, 384) 1152 ['conv2d_81[0][0]'] ormalization) activation_77 (Activation) (None, 4, 4, 384) 0 ['batch_normalization_77[0][0]'] activation_81 (Activation) (None, 4, 4, 384) 0 ['batch_normalization_81[0][0]'] conv2d_78 (Conv2D) (None, 4, 4, 384) 442368 ['activation_77[0][0]'] conv2d_79 (Conv2D) (None, 4, 4, 384) 442368 ['activation_77[0][0]'] conv2d_82 (Conv2D) (None, 4, 4, 384) 442368 ['activation_81[0][0]'] conv2d_83 (Conv2D) (None, 4, 4, 384) 442368 
['activation_81[0][0]'] average_pooling2d_7 (AveragePo (None, 4, 4, 1280) 0 ['mixed8[0][0]'] oling2D) conv2d_76 (Conv2D) (None, 4, 4, 320) 409600 ['mixed8[0][0]'] batch_normalization_78 (BatchN (None, 4, 4, 384) 1152 ['conv2d_78[0][0]'] ormalization) batch_normalization_79 (BatchN (None, 4, 4, 384) 1152 ['conv2d_79[0][0]'] ormalization) batch_normalization_82 (BatchN (None, 4, 4, 384) 1152 ['conv2d_82[0][0]'] ormalization) batch_normalization_83 (BatchN (None, 4, 4, 384) 1152 ['conv2d_83[0][0]'] ormalization) conv2d_84 (Conv2D) (None, 4, 4, 192) 245760 ['average_pooling2d_7[0][0]'] batch_normalization_76 (BatchN (None, 4, 4, 320) 960 ['conv2d_76[0][0]'] ormalization) activation_78 (Activation) (None, 4, 4, 384) 0 ['batch_normalization_78[0][0]'] activation_79 (Activation) (None, 4, 4, 384) 0 ['batch_normalization_79[0][0]'] activation_82 (Activation) (None, 4, 4, 384) 0 ['batch_normalization_82[0][0]'] activation_83 (Activation) (None, 4, 4, 384) 0 ['batch_normalization_83[0][0]'] batch_normalization_84 (BatchN (None, 4, 4, 192) 576 ['conv2d_84[0][0]'] ormalization) activation_76 (Activation) (None, 4, 4, 320) 0 ['batch_normalization_76[0][0]'] mixed9_0 (Concatenate) (None, 4, 4, 768) 0 ['activation_78[0][0]', 'activation_79[0][0]'] concatenate (Concatenate) (None, 4, 4, 768) 0 ['activation_82[0][0]', 'activation_83[0][0]'] activation_84 (Activation) (None, 4, 4, 192) 0 ['batch_normalization_84[0][0]'] mixed9 (Concatenate) (None, 4, 4, 2048) 0 ['activation_76[0][0]', 'mixed9_0[0][0]', 'concatenate[0][0]', 'activation_84[0][0]'] conv2d_89 (Conv2D) (None, 4, 4, 448) 917504 ['mixed9[0][0]'] batch_normalization_89 (BatchN (None, 4, 4, 448) 1344 ['conv2d_89[0][0]'] ormalization) activation_89 (Activation) (None, 4, 4, 448) 0 ['batch_normalization_89[0][0]'] conv2d_86 (Conv2D) (None, 4, 4, 384) 786432 ['mixed9[0][0]'] conv2d_90 (Conv2D) (None, 4, 4, 384) 1548288 ['activation_89[0][0]'] batch_normalization_86 (BatchN (None, 4, 4, 384) 1152 ['conv2d_86[0][0]'] ormalization) batch_normalization_90 (BatchN (None, 4, 4, 384) 1152 ['conv2d_90[0][0]'] ormalization) activation_86 (Activation) (None, 4, 4, 384) 0 ['batch_normalization_86[0][0]'] activation_90 (Activation) (None, 4, 4, 384) 0 ['batch_normalization_90[0][0]'] conv2d_87 (Conv2D) (None, 4, 4, 384) 442368 ['activation_86[0][0]'] conv2d_88 (Conv2D) (None, 4, 4, 384) 442368 ['activation_86[0][0]'] conv2d_91 (Conv2D) (None, 4, 4, 384) 442368 ['activation_90[0][0]'] conv2d_92 (Conv2D) (None, 4, 4, 384) 442368 ['activation_90[0][0]'] average_pooling2d_8 (AveragePo (None, 4, 4, 2048) 0 ['mixed9[0][0]'] oling2D) conv2d_85 (Conv2D) (None, 4, 4, 320) 655360 ['mixed9[0][0]'] batch_normalization_87 (BatchN (None, 4, 4, 384) 1152 ['conv2d_87[0][0]'] ormalization) batch_normalization_88 (BatchN (None, 4, 4, 384) 1152 ['conv2d_88[0][0]'] ormalization) batch_normalization_91 (BatchN (None, 4, 4, 384) 1152 ['conv2d_91[0][0]'] ormalization) batch_normalization_92 (BatchN (None, 4, 4, 384) 1152 ['conv2d_92[0][0]'] ormalization) conv2d_93 (Conv2D) (None, 4, 4, 192) 393216 ['average_pooling2d_8[0][0]'] batch_normalization_85 (BatchN (None, 4, 4, 320) 960 ['conv2d_85[0][0]'] ormalization) activation_87 (Activation) (None, 4, 4, 384) 0 ['batch_normalization_87[0][0]'] activation_88 (Activation) (None, 4, 4, 384) 0 ['batch_normalization_88[0][0]'] activation_91 (Activation) (None, 4, 4, 384) 0 ['batch_normalization_91[0][0]'] activation_92 (Activation) (None, 4, 4, 384) 0 ['batch_normalization_92[0][0]'] batch_normalization_93 (BatchN (None, 4, 4, 192) 576 
['conv2d_93[0][0]'] ormalization) activation_85 (Activation) (None, 4, 4, 320) 0 ['batch_normalization_85[0][0]'] mixed9_1 (Concatenate) (None, 4, 4, 768) 0 ['activation_87[0][0]', 'activation_88[0][0]'] concatenate_1 (Concatenate) (None, 4, 4, 768) 0 ['activation_91[0][0]', 'activation_92[0][0]'] activation_93 (Activation) (None, 4, 4, 192) 0 ['batch_normalization_93[0][0]'] mixed10 (Concatenate) (None, 4, 4, 2048) 0 ['activation_85[0][0]', 'mixed9_1[0][0]', 'concatenate_1[0][0]', 'activation_93[0][0]'] global_average_pooling2d_1 (Gl (None, 2048) 0 ['mixed10[0][0]'] obalAveragePooling2D) dense_3 (Dense) (None, 256) 524544 ['global_average_pooling2d_1[0][0 ]'] dense_4 (Dense) (None, 256) 65792 ['dense_3[0][0]'] dense_5 (Dense) (None, 3) 771 ['dense_4[0][0]'] ================================================================================================== Total params: 22,393,891 Trainable params: 929,475 Non-trainable params: 21,464,416 __________________________________________________________________________________________________
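# The large non-trainable parameter count above (21,464,416 of 22,393,891) reflects a frozen
# pretrained base, with only the top of the network (929,475 parameters) left trainable.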
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
from keras.metrics import AUC
with tf.device('/device:GPU:0'):
    es = EarlyStopping(monitor='val_loss', patience=15, verbose=0, mode='min')
    mc = ModelCheckpoint('best_model_8_non-aug.h5', monitor='val_loss', mode='min', verbose=1, save_best_only=True)
    red_lr = ReduceLROnPlateau(monitor='val_loss', patience=2, verbose=1, factor=0.20)
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc', 'AUC'])
    model.fit(X_train, y_train, batch_size=32,
              epochs=20, verbose=1, validation_data=(X_test, y_test), callbacks=[mc, red_lr, es])
Epoch 1/20
Epoch 1: val_acc improved from -inf to 0.93059, saving model to best_model_8.h5
98/98 [==============================] - 24s 189ms/step - loss: 0.1547 - acc: 0.9588 - val_loss: 0.2557 - val_acc: 0.9306 - lr: 0.0010
Epoch 2/20
Epoch 2: val_acc improved from 0.93059 to 0.94602, saving model to best_model_8.h5
98/98 [==============================] - 17s 170ms/step - loss: 0.0362 - acc: 0.9891 - val_loss: 0.1726 - val_acc: 0.9460 - lr: 0.0010
[epochs 3-19 abridged: val_acc peaked at 0.96915 at epoch 7, and ReduceLROnPlateau cut the learning rate at epochs 4, 9, 11, 13, 15, 17, and 19, stepping it from 1.0e-03 down to 1.28e-08]
Epoch 20/20
Epoch 20: val_acc did not improve from 0.96915
98/98 [==============================] - 15s 149ms/step - loss: 1.4009e-04 - acc: 1.0000 - val_loss: 0.1717 - val_acc: 0.9692 - lr: 1.2800e-08
from tensorflow.keras.models import load_model
#model=load_model("best_model_8_non-aug.h5")
y_test_array = y_test.to_numpy()
print(y_test_array)
[[0 0 1] [1 0 0] [0 1 0] ... [1 0 0] [1 0 0] [0 0 1]]
y_true_test = np.argmax(y_test_array,axis=1)
print(y_true_test)
print(y_true_test.shape)
print(y_true_test.dtype)
[2 0 1 1 2 1 0 1 2 0 1 1 2 1 1 0 0 1 0 1 1 1 1 0 1 2 2 1 2 0 2 1 0 2 1 1 0 2 1 2 0 2 1 2 0 1 2 2 0 0 2 0 2 0 2 2 1 1 1 0 0 2 0 1 0 0 1 2 2 2 0 2 2 1 0 0 2 1 0 1 2 2 0 0 1 0 1 2 1 1 0 1 2 1 1 1 1 1 2 1 1 1 2 0 0 2 2 1 2 2 2 1 1 1 2 2 2 0 1 0 2 2 1 2 2 0 2 1 1 1 1 1 2 1 1 2 1 0 1 1 0 1 0 0 1 1 2 2 0 0 0 0 1 0 0 2 0 0 1 1 2 0 2 1 0 2 1 1 1 1 2 0 1 1 1 2 2 0 1 0 2 2 2 2 0 2 1 0 1 1 0 2 0 0 0 0 1 1 0 1 0 0 2 1 2 1 1 2 0 0 0 2 2 2 0 0 0 2 0 1 2 2 0 2 1 1 2 0 2 2 2 0 1 1 0 2 0 2 2 0 2 1 0 1 1 2 2 0 1 1 2 2 2 1 1 2 2 2 1 2 2 2 1 2 1 0 2 0 0 2 0 2 0 2 2 2 0 1 1 2 2 1 0 1 2 1 2 1 1 0 2 2 0 2 2 2 2 2 1 0 2 0 2 1 2 1 1 1 1 0 0 1 1 0 0 0 2 1 2 1 2 1 1 0 2 1 0 0 0 0 0 2 1 2 0 1 0 0 2 1 0 1 2 0 0 0 2 2 2 0 1 2 0 0 2 0 1 2 1 2 0 1 1 2 2 1 0 0 2 0 0 2 0 2 0 1 1 2 2 2 2 0 1 1 2 2 1 0 2 2 1 1 0 0 1 2 1 1 0 0 0 1 2 2 0 2 0 1 2 0 0 0 0 2 2 1 1 0 2 1 0 1 2 0 0 2 2 1 2 0 1 1 0 2 1 1 0 1 0 1 0 1 0 0 1 0 0 1 2 1 0 1 1 0 0 2 1 2 1 1 1 0 0 0 1 2 2 2 2 2 1 2 2 2 2 2 1 2 0 2 1 1 1 1 2 0 0 2 2 0 1 1 1 0 0 0 2 2 1 1 0 2 2 0 0 1 0 0 1 1 2 1 0 2 0 0 2 2 0 2 2 1 2 0 0 0 1 2 0 0 1 1 2 2 1 1 2 1 1 0 2 0 2 2 0 0 1 2 2 1 0 1 0 0 2 0 1 1 1 0 2 1 2 1 1 0 0 2 1 2 1 1 1 2 1 1 1 2 0 2 2 1 0 0 2 1 0 2 1 1 1 2 0 1 0 2 1 1 2 2 1 1 2 0 2 2 1 2 0 2 2 2 1 0 0 0 0 1 1 2 0 2 1 1 1 0 2 0 0 2 2 2 2 0 2 1 1 2 1 1 0 1 1 2 1 0 2 1 0 1 0 2 1 2 1 0 2 1 0 2 2 1 2 1 1 1 2 2 2 1 0 2 0 2 2 1 2 1 1 0 0 1 2 1 2 0 1 1 1 0 1 0 2 1 1 0 0 2 0 2 1 2 1 2 1 0 0 2 2 1 2 2 1 1 0 2 2 0 2 0 2 1 2 0 0 1 0 2 2 1 1 2 1 0 0 0 2 1 0 1 0 1 0 1 0 2 1 2 1 1 0 2 0 2 1 0 0 1 0 0 1 2 2 2 0 2 0 2 1 2 2 0 0 1 1 0 0 2] (778,) int64
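# Each one-hot row decodes to its argmax column index: [1 0 0] -> 0 (COVID),
# [0 1 0] -> 1 (NORMAL), [0 0 1] -> 2 (Viral Pneumonia), matching the alphabetical
# column order produced by pd.get_dummies further below.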
import tensorflow.keras.backend as K
#print(model.get_config()) # Full configuration to fit keras model
print(model.optimizer.get_config()) # Optimizer configuration
#print(len(model.history.epoch)) # Number of epochs
{'name': 'Adam', 'learning_rate': 0.00020000001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
from tensorflow.keras.utils import plot_model
plot_model(model, to_file='model_8.png')
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
def predict_classes(x):
    # Replacement for the removed Sequential.predict_classes() helper:
    # argmax for multi-class softmax outputs, 0.5 threshold for a single sigmoid output.
    proba = x
    if proba.shape[-1] > 1:
        return proba.argmax(axis=-1)
    else:
        return (proba > 0.5).astype("int32")
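# For intuition: a softmax row such as [0.1, 0.7, 0.2] decodes to class index 1
# (NORMAL in this project's alphabetical class ordering).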
print(predict_classes(model.predict(X_test)))
prediction_index=predict_classes(model.predict(X_test))
labels=pd.get_dummies(y_train).columns
predicted_labels=list(map(lambda x: labels[x], prediction_index))
#print(predicted_labels)
import numpy as np
import pandas as pd
from math import sqrt
from sklearn.metrics import (accuracy_score, f1_score, precision_score, recall_score,
                             mean_squared_error, r2_score, mean_absolute_error)
def model_eval_metrics(y_true, y_pred, classification="TRUE"):
    if classification == "TRUE":
        accuracy_eval = accuracy_score(y_true, y_pred)
        f1_score_eval = f1_score(y_true, y_pred, average="macro", zero_division=0)
        precision_eval = precision_score(y_true, y_pred, average="macro", zero_division=0)
        recall_eval = recall_score(y_true, y_pred, average="macro", zero_division=0)
        metricdata = {'Accuracy': [accuracy_eval], 'F1 Score': [f1_score_eval], 'Precision': [precision_eval], 'Recall': [recall_eval]}
        finalmetricdata = pd.DataFrame(metricdata, index=[''])
    else:
        mse_eval = mean_squared_error(y_true, y_pred)
        rmse_eval = sqrt(mse_eval)
        mae_eval = mean_absolute_error(y_true, y_pred)
        r2_eval = r2_score(y_true, y_pred)
        metricdata = {'MSE': [mse_eval], 'RMSE': [rmse_eval], 'MAE': [mae_eval], 'R2': [r2_eval]}
        finalmetricdata = pd.DataFrame(metricdata, index=[''])
    return finalmetricdata
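# Note: the macro averages used above weight each of the three classes equally, which is
# reasonable here because the test set is close to balanced (240 COVID, 269 NORMAL,
# and 269 Viral Pneumonia x-rays).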
25/25 [==============================] - 2s 53ms/step [2 0 2 1 2 1 0 1 2 0 1 1 2 1 1 0 0 1 0 1 1 1 1 0 1 2 2 1 2 0 2 1 0 2 1 1 0 2 1 2 0 2 1 2 0 1 2 2 0 0 2 0 2 0 2 2 1 2 1 0 0 2 0 1 0 0 1 2 2 2 0 2 2 1 0 0 2 1 0 1 1 2 0 0 1 0 1 2 1 1 0 1 2 1 1 1 1 2 1 1 1 1 2 0 0 2 2 1 2 2 2 1 1 1 2 2 2 0 1 0 2 2 1 2 2 0 2 1 1 1 1 1 2 1 1 2 1 0 1 1 0 1 0 0 1 1 2 2 0 0 0 0 1 0 0 2 0 0 1 1 2 0 2 1 0 2 1 1 1 1 2 0 1 1 1 2 2 0 1 0 2 2 2 2 0 2 1 0 1 1 0 2 0 0 0 0 1 1 0 1 0 0 2 1 2 1 1 2 0 0 2 2 2 2 0 0 0 2 0 1 2 2 0 2 1 1 2 0 2 2 2 0 1 1 0 2 0 0 2 0 2 1 0 1 1 2 2 0 1 1 2 2 2 1 1 2 2 2 1 2 2 2 1 2 1 0 2 1 0 2 0 2 0 2 2 2 0 1 1 2 2 1 0 1 2 1 2 1 2 0 2 2 0 2 1 2 2 2 1 0 2 0 2 1 2 1 1 1 1 0 0 1 1 0 0 0 2 1 2 1 2 1 1 0 2 1 0 0 0 0 0 2 1 2 0 1 0 0 2 1 0 1 2 0 0 0 2 2 2 0 1 2 0 0 2 0 1 2 1 2 0 1 1 2 2 1 0 0 2 0 0 2 0 2 0 1 1 2 2 2 2 0 1 1 2 2 1 0 2 1 1 1 0 0 1 2 1 1 0 0 0 1 2 2 0 2 0 1 2 0 0 0 0 2 2 1 1 0 2 1 0 1 2 0 0 2 2 1 2 0 1 1 0 2 1 1 0 1 0 1 0 1 0 0 1 0 0 1 2 1 0 1 1 0 0 2 1 2 1 1 1 0 2 0 1 2 2 2 2 2 1 2 2 2 2 2 1 2 0 2 1 1 1 1 2 0 0 2 2 0 1 1 1 0 0 0 2 2 1 1 0 2 2 0 0 1 0 0 1 1 1 1 0 2 0 0 2 2 0 2 2 1 2 0 0 0 1 2 0 0 1 1 2 2 1 1 2 1 1 0 2 0 2 2 0 0 1 2 2 1 0 1 0 0 2 0 1 1 1 0 2 1 2 1 1 0 0 2 1 2 1 1 1 2 1 1 2 2 0 2 2 1 0 0 2 1 0 2 1 1 1 2 0 1 0 2 1 1 2 2 1 1 2 0 2 2 1 2 0 2 2 2 1 0 0 0 0 1 1 2 0 2 1 1 1 0 2 0 0 2 2 2 2 0 2 1 1 2 1 1 0 1 1 2 1 0 2 1 0 1 0 2 1 2 1 0 2 1 0 2 2 1 2 1 1 1 2 2 2 1 0 2 0 2 2 1 2 1 1 0 0 1 2 1 2 0 1 1 1 0 1 0 2 1 1 0 0 2 0 2 1 2 1 2 1 0 0 2 2 1 2 2 1 1 0 2 2 0 2 0 2 1 2 0 0 1 0 2 2 1 1 2 1 0 0 0 2 1 0 1 0 2 0 1 0 2 1 2 1 1 0 2 0 2 1 0 0 1 0 0 1 2 2 2 0 2 0 2 1 2 2 0 0 2 1 0 0 2] 25/25 [==============================] - 1s 50ms/step
y_test_labels = list(y_test.idxmax(axis=1))
model_eval_metrics(y_test_labels, predicted_labels, classification="TRUE")
| Accuracy | F1 Score | Precision | Recall   |
|----------|----------|-----------|----------|
| 0.979434 | 0.979899 | 0.980107  | 0.979724 |
y_pred = model.predict(X_test)
25/25 [==============================] - 1s 57ms/step
from sklearn.metrics import classification_report, confusion_matrix
from mlxtend.plotting import plot_confusion_matrix
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
CM = confusion_matrix(y_test_labels,predicted_labels, labels=['COVID', 'NORMAL', 'Viral Pneumonia'])
fig, ax = plot_confusion_matrix(conf_mat=CM , figsize=(5, 5))
plt.show()
#print(confusion_matrix(y_true, y_pred))
print('Classification Report')
target_names = ['COVID', 'NORMAL', 'Viral Pneumonia']
print(classification_report(y_test_labels,predicted_labels, target_names=target_names, digits=4))
Classification Report
                 precision    recall  f1-score   support

          COVID     0.9958    0.9875    0.9916       240
         NORMAL     0.9776    0.9740    0.9758       269
Viral Pneumonia     0.9669    0.9777    0.9723       269

       accuracy                         0.9794       778
      macro avg     0.9801    0.9797    0.9799       778
   weighted avg     0.9795    0.9794    0.9795       778
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc
n_classes=3
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_true_test, y_pred[:,i], pos_label=i)
roc_auc[i] = auc(fpr[i], tpr[i])
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at these points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
plt.figure()
lw = 2
plt.plot(
fpr["macro"],
tpr["macro"],
label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
color="navy",
linestyle=":",
linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Viral Pneumonia vs Rest (area = {:0.4f})'.format(roc_auc[2]))
plt.figure(1)
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
# Zoom in view of the upper left corner.
plt.figure(2)
lw = 2
plt.plot(
fpr["macro"],
tpr["macro"],
label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
color="navy",
linestyle=":",
linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Viral Pneumonia vs Rest (area = {:0.4f})'.format(roc_auc[2]))
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim(0, 0.2)
plt.ylim(0.8, 1)
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
PATH = '/content/my-drive/MyDrive/x-ray_project/'
train_dir = os.path.join(PATH, 'train')
test_dir = os.path.join(PATH, 'test')
train_pneumonia_dir = os.path.join(train_dir, 'Viral Pneumonia') # directory with our viral pneumonia training x-rays
train_normal_dir = os.path.join(train_dir, 'NORMAL') # directory with our normal training x-rays
train_covid_dir = os.path.join(train_dir, 'COVID') # directory with our COVID training x-rays
test_pneumonia_dir = os.path.join(test_dir, 'Viral Pneumonia') # directory with our viral pneumonia test x-rays
test_normal_dir = os.path.join(test_dir, 'NORMAL') # directory with our normal test x-rays
test_covid_dir = os.path.join(test_dir, 'COVID') # directory with our COVID test x-rays
print(train_covid_dir) # printing example of training COVID directory
/content/my-drive/MyDrive/x-ray_project/train/COVID
num_pneumonia_tr = len(os.listdir(train_pneumonia_dir))
num_normal_tr = len(os.listdir(train_normal_dir))
num_covid_tr = len(os.listdir(train_covid_dir))
num_pneumonia_test = len(os.listdir(test_pneumonia_dir))
num_normal_test = len(os.listdir(test_normal_dir))
num_covid_test = len(os.listdir(test_covid_dir))
total_train = num_pneumonia_tr + num_normal_tr + num_covid_tr
total_test = num_pneumonia_test + num_normal_test + num_covid_test
print('total pneumonia training x-rays:', num_pneumonia_tr)
print('total normal training x-rays:', num_normal_tr)
print('total COVID training x-rays:', num_covid_tr)
print('total pneumonia test x-rays:', num_pneumonia_test)
print('total normal test x-rays:', num_normal_test)
print('total COVID test x-rays:', num_covid_test)
print("--")
print("Total training x-rays:", total_train)
print("Total test x-rays:", total_test)
total pneumonia training x-rays: 1076
total normal training x-rays: 1072
total COVID training x-rays: 960
total pneumonia test x-rays: 269
total normal test x-rays: 269
total COVID test x-rays: 240
--
Total training x-rays: 3108
Total test x-rays: 778
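# Quick arithmetic check: 1076 + 1072 + 960 = 3108 training images and
# 269 + 269 + 240 = 778 test images, i.e. roughly an 80/20 train/test split
# (778 / 3886 is approximately 0.20).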
batch_size = 60
epochs = 40
IMG_HEIGHT = 192
IMG_WIDTH = 192
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
#Below, the only augmentations I include are random variation in brightness and random horizontal
#flips. Other possible augmentations that I chose not to use for this portfolio project are left
#commented out as examples.
train_image_generator = ImageDataGenerator(
rescale=1./255,
validation_split=0.2,
brightness_range=[0.7,1.3],
#rotation_range=5,
#width_shift_range=.15,
#height_shift_range=.15,
horizontal_flip=True,
#shear_range=0.1,
fill_mode='nearest',
#zoom_range=0.05
) # Generator for our training data
#The validation image generator should not be used to augment data. It is tempting to reuse the
#same ImageDataGenerator object for both the training and validation generators, but the
#validation set is meant to stand in for unseen data when identifying good tuning parameters,
#so it should be left unaugmented. I therefore make sure that validation_image_generator
#contains no augmentation parameters.
validation_image_generator = ImageDataGenerator(
rescale=1./255,
validation_split=0.2
) # Generator for our validation data
#The purpose of test_image_generator is to measure how a model trained with train_image_generator
#performs on unseen data (the test set). I therefore give test_image_generator the same parameters
#as train_image_generator, except that I omit validation_split.
test_image_generator = ImageDataGenerator(
rescale=1./255,
brightness_range=[0.7,1.3],
#rotation_range=5,
#width_shift_range=.15,
#height_shift_range=.15,
horizontal_flip=True,
#shear_range=0.1,
fill_mode='nearest',
#zoom_range=0.05
) # Generator for our test data
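# Because the test generator applies random brightness shifts and horizontal flips, a single
# pass over it amounts to one random test-time augmentation of each x-ray; the seed passed to
# flow_from_directory further below keeps those random draws reproducible across runs.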
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
#Note: Both train_gen and val_gen draw from the training set, but because I use validation_split,
#that set is divided into a smaller training subset and a validation subset. train_gen and val_gen
#therefore see different (non-overlapping) images from the original training set.
#Choose subset = 'training'
#Set seed=42 for both train_gen and val_gen so they are randomized the same way
train_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
directory='train',
subset='training',
seed=42,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='categorical')
#Choose subset = 'validation'
val_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
directory='train',
subset='validation',
seed=42,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='categorical')
test_gen = test_image_generator.flow_from_directory(batch_size=batch_size,
directory='test',
seed=42,
shuffle=False, #generally, shuffle should be set to false when the image generator is used on test data
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='categorical')
Found 2487 images belonging to 3 classes.
Found 621 images belonging to 3 classes.
Found 778 images belonging to 3 classes.
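# Sanity check on the split: 2487 + 621 = 3108, matching total_train above. The 20%
# validation_split is applied within each class directory, which is why the training
# subset holds 2487 images rather than exactly 3108 * 0.8 = 2486.4.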
#Because I am using validation_split with train_image_generator and validation_image_generator,
#I need to adjust the number of training steps per epoch and the number of validation steps per
#epoch used within model.fit() further below. Instead of the arguments
#steps_per_epoch=total_train // batch_size and validation_steps=total_train // batch_size, I
#use the following variables for steps_per_epoch and validation_steps, respectively:
TRAIN_STEPS_PER_EPOCH = np.ceil((total_train*0.8/batch_size)-1)
VAL_STEPS_PER_EPOCH = np.ceil((total_train*0.2/batch_size)-1)
#Source: https://stackoverflow.com/questions/59864408/tensorflowyour-input-ran-out-of-data
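# Worked example with the numbers above: total_train = 3108 and batch_size = 60, so
# TRAIN_STEPS_PER_EPOCH = ceil(3108*0.8/60 - 1) = ceil(40.44) = 41 and
# VAL_STEPS_PER_EPOCH = ceil(3108*0.2/60 - 1) = ceil(9.36) = 10, which matches the
# 41/41 training steps shown in the fit log below.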
#Note: all of the following random-seed code is required in order to allow for reproducible results.
#See: https://stackoverflow.com/questions/50659482/why-cant-i-get-reproducible-results-in-keras-even-though-i-set-the-random-seeds
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout, BatchNormalization, Flatten
from keras.regularizers import l1
from tensorflow.keras.optimizers import SGD, Adam
from sklearn.utils import class_weight
import numpy as np
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping, ModelCheckpoint
from keras.metrics import AUC
with tf.device('/device:GPU:0'):
    opt = Adam(learning_rate=.001)
    model = tf.keras.Sequential([
        # input: batches of x-rays of shape (IMG_HEIGHT, IMG_WIDTH, 3) = (192, 192, 3); the 3 stands for the RGB channels
        tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Conv2D(kernel_size=3, filters=16, padding='same', activation='relu'),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Conv2D(kernel_size=3, filters=8, padding='same', activation='relu'),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Flatten(),
        # classifying into 3 categories
        tf.keras.layers.Dense(3, activation='softmax')
    ])
    es = EarlyStopping(monitor='val_loss', patience=15, verbose=0, mode='min')
    red_lr = ReduceLROnPlateau(monitor='val_loss', patience=2, verbose=1, factor=0.35)
    mc = ModelCheckpoint('best_model_1_aug.h5', monitor='val_loss', mode='min', verbose=1, save_best_only=True)
    model.compile(
        optimizer="adam",
        loss='categorical_crossentropy',
        metrics=['accuracy', 'AUC'])
    # Fitting the CNN to the training set
    history = model.fit(
        train_gen,
        #steps_per_epoch=total_train // batch_size, #adjusts the training process for new image batches
        steps_per_epoch=TRAIN_STEPS_PER_EPOCH,
        epochs=epochs,
        validation_data=val_gen,
        #validation_steps=total_train // batch_size,
        validation_steps=VAL_STEPS_PER_EPOCH,
        verbose=1,
        callbacks=[mc, red_lr, es]
    )
Epoch 1/40
Epoch 1: val_loss improved from inf to 1.11582, saving model to best_model_1_aug.h5
41/41 [==============================] - 49s 1s/step - loss: 0.5042 - accuracy: 0.8043 - auc: 0.9338 - val_loss: 1.1158 - val_accuracy: 0.3450 - val_auc: 0.5899 - lr: 0.0010
[epochs 2-38 abridged: val_loss did not improve again until epoch 12, then fell steadily as ReduceLROnPlateau stepped the learning rate from 1.0e-03 down to 9.65e-09]
Epoch 39/40
Epoch 39: val_loss improved from 0.12433 to 0.12091, saving model to best_model_1_aug.h5
41/41 [==============================] - 46s 1s/step - loss: 0.0713 - accuracy: 0.9765 - auc: 0.9987 - val_loss: 0.1209 - val_accuracy: 0.9650 - val_auc: 0.9955 - lr: 9.6549e-09
Epoch 40/40
Epoch 40: val_loss did not improve from 0.12091
41/41 [==============================] - 46s 1s/step - loss: 0.0723 - accuracy: 0.9749 - auc: 0.9987 - val_loss: 0.1259 - val_accuracy: 0.9617 - val_auc: 0.9953 - lr: 9.6549e-09
from tensorflow.keras.models import load_model
model=load_model("best_model_1_aug.h5")
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow
tensorflow.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
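#For context: pinning intra_op and inter_op parallelism to 1 above trades speed for reproducibility,
#since multi-threaded floating-point reductions can change summation order and perturb results
#slightly between otherwise identical runs.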
#Using test_generator for predictions on test data:
#Sources: https://stackoverflow.com/questions/52270177/how-to-use-predict-generator-on-new-images-keras
# https://tylerburleigh.com/blog/predicting-pneumonia-from-chest-x-rays-using-efficientnet/
test_gen.reset() #It's important to always reset the test generator.
Y_pred_test=model.predict(test_gen)
y_pred_test=np.argmax(Y_pred_test,axis=1)
13/13 [==============================] - 152s 13s/step
labels = (test_gen.class_indices)
print(labels)
{'COVID': 0, 'NORMAL': 1, 'Viral Pneumonia': 2}
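#Optional check: the class_indices mapping can be inverted so that integer predictions are readable
#as class names (idx_to_class is just a helper name chosen here, not used elsewhere in the notebook).
idx_to_class = {v: k for k, v in test_gen.class_indices.items()}
print([idx_to_class[i] for i in y_pred_test[:5]]) #first five test predictions as label names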
from sklearn.metrics import classification_report, confusion_matrix
from mlxtend.plotting import plot_confusion_matrix
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow
tensorflow.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
y_true_test = test_gen.classes
CM = confusion_matrix(y_true_test, y_pred_test)
fig, ax = plot_confusion_matrix(conf_mat=CM, figsize=(5, 5))
plt.show()
#print(confusion_matrix(y_true, y_pred))
print('Classification Report')
target_names = ['COVID', 'NORMAL', 'Viral Pneumonia']
print(classification_report(y_true_test, y_pred_test, target_names=target_names, digits=4))
Classification Report
                 precision    recall  f1-score   support

          COVID     0.9794    0.9917    0.9855       240
         NORMAL     0.9493    0.9740    0.9615       269
Viral Pneumonia     0.9730    0.9368    0.9545       269

       accuracy                         0.9666       778
      macro avg     0.9672    0.9675    0.9672       778
   weighted avg     0.9668    0.9666    0.9665       778
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
from sklearn.metrics import mean_absolute_error
import pandas as pd
from math import sqrt
def model_eval_metrics(y_true, y_pred, classification="TRUE"):
    if classification == "TRUE":
        accuracy_eval = accuracy_score(y_true, y_pred)
        f1_score_eval = f1_score(y_true, y_pred, average="macro", zero_division=0)
        precision_eval = precision_score(y_true, y_pred, average="macro", zero_division=0)
        recall_eval = recall_score(y_true, y_pred, average="macro", zero_division=0)
        metricdata = {'Accuracy': [accuracy_eval], 'F1 Score': [f1_score_eval], 'Precision': [precision_eval], 'Recall': [recall_eval]}
        finalmetricdata = pd.DataFrame(metricdata, index=[''])
    else:
        mse_eval = mean_squared_error(y_true, y_pred)
        rmse_eval = sqrt(mean_squared_error(y_true, y_pred))
        mae_eval = mean_absolute_error(y_true, y_pred)
        r2_eval = r2_score(y_true, y_pred)
        metricdata = {'MSE': [mse_eval], 'RMSE': [rmse_eval], 'MAE': [mae_eval], 'R2': [r2_eval]}
        finalmetricdata = pd.DataFrame(metricdata, index=[''])
    return finalmetricdata
# get metrics
model_eval_metrics(y_true_test, y_pred_test, classification="TRUE")
Accuracy | F1 Score | Precision | Recall
---|---|---|---
0.966581 | 0.967174 | 0.967224 | 0.967482
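#Because the goal is to compare 8 architectures side by side, the one-row DataFrames returned by
#model_eval_metrics can be accumulated into a single comparison table. A minimal sketch; all_results
#and the 'model_1_aug' key are hypothetical names introduced here, not part of the original notebook:
all_results = {}
all_results['model_1_aug'] = model_eval_metrics(y_true_test, y_pred_test, classification="TRUE")
comparison = pd.concat(all_results).droplevel(1) #index becomes the model name
print(comparison)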
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle
from sklearn.metrics import roc_curve, auc, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
n_classes = 3
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
    fpr[i], tpr[i], _ = roc_curve(y_true_test, Y_pred_test[:, i], pos_label=i)
    roc_auc[i] = auc(fpr[i], tpr[i])
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at these points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
    mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
plt.figure()
lw = 2
plt.plot(
    fpr["macro"],
    tpr["macro"],
    label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
    color="navy",
    linestyle=":",
    linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {0:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {0:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Pneumonia vs Rest (area = {0:0.4f})'.format(roc_auc[2]))
plt.figure(1)
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
# Zoom in view of the upper left corner.
plt.figure(2)
lw = 2
plt.plot(
    fpr["macro"],
    tpr["macro"],
    label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
    color="navy",
    linestyle=":",
    linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {0:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {0:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Pneumonia vs Rest (area = {0:0.4f})'.format(roc_auc[2]))
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim(0, 0.2)
plt.ylim(0.8, 1)
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
batch_size = 32
epochs = 40
IMG_HEIGHT = 192
IMG_WIDTH = 192
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow
tensorflow.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
#Below, the only augmentations I include are random brightness variation and random horizontal flips. I have commented out other possible
#example augmentations that I chose not to use for this portfolio project.
train_image_generator = ImageDataGenerator(
    rescale=1./255,
    validation_split=0.2,
    brightness_range=[0.7, 1.3],
    #rotation_range=5,
    #width_shift_range=.15,
    #height_shift_range=.15,
    horizontal_flip=True,
    #shear_range=0.1,
    fill_mode='nearest',
    #zoom_range=0.05
) # Generator for our training data
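#For reference: brightness_range=[0.7, 1.3] draws a multiplicative brightness factor uniformly
#from that interval for each image, so a given x-ray is randomly darkened (factor < 1) or
#brightened (factor > 1) on every pass through the generator.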
#The validation generator must not augment data. It is tempting to build both the training and
#validation generators from the same ImageDataGenerator object, but the validation set is meant to
#replicate unseen data for the purpose of identifying good tuning parameters, so it should remain
#unaugmented. I therefore make sure that validation_image_generator contains no augmentation parameters.
validation_image_generator = ImageDataGenerator(
    rescale=1./255,
    validation_split=0.2
) # Generator for our validation data
#The test_image_generator replicates how the train_image_generator should perform on unseen data
#(from the test set), so it uses the same parameters as train_image_generator, except that it does
#not use validation_split.
test_image_generator = ImageDataGenerator(
    rescale=1./255,
    brightness_range=[0.7, 1.3],
    #rotation_range=5,
    #width_shift_range=.15,
    #height_shift_range=.15,
    horizontal_flip=True,
    #shear_range=0.1,
    fill_mode='nearest',
    #zoom_range=0.05
) # Generator for our test data
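#An optional alternative, not used in this project, is to evaluate on unaugmented test images
#(rescaling only) to gauge how sensitive the reported test metrics are to the random brightness
#and flips applied at prediction time. A minimal sketch:
#plain_test_image_generator = ImageDataGenerator(rescale=1./255)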
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow
tensorflow.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
#Note: Both train_gen and val_gen read from the training directory, but validation_split partitions
#it into a smaller training set and a validation set, so the two generators draw on disjoint images
#from the original training data, randomly separated.
#Set seed=42 for both train_gen and val_gen so the split is randomized consistently.
#Choose subset = 'training'
train_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
                                                      directory='train',
                                                      subset='training',
                                                      seed=42,
                                                      shuffle=True,
                                                      target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                      class_mode='categorical')
#Choose subset = 'validation'
val_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
                                                         directory='train',
                                                         subset='validation',
                                                         seed=42,
                                                         shuffle=True,
                                                         target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                         class_mode='categorical')
test_gen = test_image_generator.flow_from_directory(batch_size=batch_size,
                                                    directory='test',
                                                    seed=42,
                                                    shuffle=False, #shuffle should generally be False when the generator is used on test data
                                                    target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                    class_mode='categorical')
Found 2487 images belonging to 3 classes.
Found 621 images belonging to 3 classes.
Found 778 images belonging to 3 classes.
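#Because shuffle=False for test_gen, the order of test_gen.classes and test_gen.filenames matches
#the row order of the predictions, which is what makes the confusion matrices and reports valid.
#A quick optional check:
print(test_gen.filenames[:3]) #file paths in the exact order the model will see them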
TRAIN_STEPS_PER_EPOCH = np.ceil((total_train*0.8/batch_size)-1)
VAL_STEPS_PER_EPOCH = np.ceil((total_train*0.2/batch_size)-1)
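#Worked example of the step arithmetic, assuming total_train = 3108 (the 2487 training-subset plus
#621 validation-subset images found above): ceil(3108*0.8/32 - 1) = ceil(76.7) = 77 training steps
#and ceil(3108*0.2/32 - 1) = ceil(18.4) = 19 validation steps, which matches the 77/77 progress
#bars in the training log below.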
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow
tensorflow.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout, BatchNormalization, Flatten
from tensorflow.keras.regularizers import l1
from tensorflow.keras.optimizers import SGD, Adam
from sklearn.utils import class_weight
import numpy as np
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping, ModelCheckpoint
from tensorflow.keras.metrics import AUC
with tf.device('/device:GPU:0'):
    model = tf.keras.Sequential([
        # input: batches of images of shape (height, width, channels) = (192, 192, 3); the 3 channels are RGB
        tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu', input_shape=(192, 192, 3)),
        tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
        tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
        tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
        tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
        tf.keras.layers.Conv2D(kernel_size=3, filters=16, padding='same', activation='relu'),
        tf.keras.layers.Flatten(),
        # classifying into 3 categories
        tf.keras.layers.Dense(3, activation='softmax')
    ])
es = EarlyStopping(monitor='val_loss', patience=15, verbose=0, mode='min')
red_lr = ReduceLROnPlateau(monitor='val_loss', patience=2, verbose=1, factor=0.15)
mc = ModelCheckpoint('best_model_2_aug.h5', monitor='val_loss', mode='min', verbose=1, save_best_only=True)
model.compile(
    optimizer="adam",
    loss='categorical_crossentropy',
    metrics=['accuracy', 'AUC'])
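#categorical_crossentropy expects one-hot targets, which is why the generators above use
#class_mode='categorical'; with integer labels, sparse_categorical_crossentropy would be the
#counterpart loss.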
history = model.fit(
    train_gen,
    #steps_per_epoch=total_train // batch_size, #adjusts training process for new image batches
    steps_per_epoch=TRAIN_STEPS_PER_EPOCH,
    epochs=epochs,
    validation_data=val_gen,
    #validation_steps=total_train // batch_size,
    validation_steps=VAL_STEPS_PER_EPOCH,
    verbose=1,
    callbacks=[mc, red_lr, es]
)
(Training log abridged: only epochs with checkpoint saves or learning-rate reductions are shown.)
Epoch 1: val_loss improved from inf to 0.46681, saving model to best_model_2_aug.h5 (val_accuracy: 0.7993); the first epoch took 490 s, subsequent epochs about 46 s
Epoch 2: val_loss improved from 0.46681 to 0.31743 (val_accuracy: 0.8832)
Epoch 3: val_loss improved from 0.31743 to 0.23826 (val_accuracy: 0.9112)
Epoch 4: val_loss improved from 0.23826 to 0.22139 (val_accuracy: 0.9194)
Epoch 6: val_loss improved from 0.22139 to 0.20119 (val_accuracy: 0.9293)
Epoch 7: val_loss improved from 0.20119 to 0.19448 (val_accuracy: 0.9178)
Epoch 8: val_loss improved from 0.19448 to 0.14994 (val_accuracy: 0.9424)
Epoch 10: val_loss improved from 0.14994 to 0.14381 (val_accuracy: 0.9474)
Epoch 12: ReduceLROnPlateau reducing learning rate to 1.5000e-04.
Epoch 13: val_loss improved from 0.14381 to 0.11903 (val_accuracy: 0.9490)
Epoch 14: val_loss improved from 0.11903 to 0.11886 (val_accuracy: 0.9539, val_auc: 0.9958)
Epoch 16: ReduceLROnPlateau reducing learning rate to 2.2500e-05.
Epoch 18: ReduceLROnPlateau reducing learning rate to 3.3750e-06.
Epoch 20: ReduceLROnPlateau reducing learning rate to 5.0625e-07.
Epoch 22: ReduceLROnPlateau reducing learning rate to 7.5937e-08.
Epoch 24: ReduceLROnPlateau reducing learning rate to 1.1391e-08.
Epoch 26: ReduceLROnPlateau reducing learning rate to 1.7086e-09.
Epoch 28: ReduceLROnPlateau reducing learning rate to 2.5629e-10.
Epoch 29: val_loss did not improve from 0.11886 (loss: 0.0348, accuracy: 0.9874, val_loss: 0.1356); training stopped here, consistent with EarlyStopping's patience of 15 epochs after the epoch-14 best
from tensorflow.keras.models import load_model
model=load_model("best_model_2_aug.h5")
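#Reloading the checkpointed weights matters here: EarlyStopping halted training at epoch 29, while
#the best validation loss (0.11886) was reached at epoch 14, so the weights left in memory at the
#end of training are not the best ones.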
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow
tensorflow.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
#Using test_generator for predictions on test data:
#Sources: https://stackoverflow.com/questions/52270177/how-to-use-predict-generator-on-new-images-keras
# https://tylerburleigh.com/blog/predicting-pneumonia-from-chest-x-rays-using-efficientnet/
test_gen.reset() #It's important to always reset the test generator.
Y_pred_test=model.predict(test_gen)
y_pred_test=np.argmax(Y_pred_test,axis=1)
25/25 [==============================] - 12s 504ms/step
labels = (test_gen.class_indices)
print(labels)
{'COVID': 0, 'NORMAL': 1, 'Viral Pneumonia': 2}
from sklearn.metrics import classification_report, confusion_matrix
from mlxtend.plotting import plot_confusion_matrix
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow
tensorflow.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
y_true_test = test_gen.classes
CM = confusion_matrix(y_true_test, y_pred_test)
fig, ax = plot_confusion_matrix(conf_mat=CM, figsize=(5, 5))
plt.show()
#print(confusion_matrix(y_true, y_pred))
print('Classification Report')
target_names = ['COVID', 'NORMAL', 'Viral Pneumonia']
print(classification_report(y_true_test, y_pred_test, target_names=target_names, digits=4))
Classification Report
                 precision    recall  f1-score   support

          COVID     0.9831    0.9667    0.9748       240
         NORMAL     0.9627    0.9591    0.9609       269
Viral Pneumonia     0.9416    0.9591    0.9503       269

       accuracy                         0.9614       778
      macro avg     0.9624    0.9616    0.9620       778
   weighted avg     0.9617    0.9614    0.9615       778
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
from sklearn.metrics import mean_absolute_error
import pandas as pd
from math import sqrt
def model_eval_metrics(y_true, y_pred, classification="TRUE"):
    if classification == "TRUE":
        accuracy_eval = accuracy_score(y_true, y_pred)
        f1_score_eval = f1_score(y_true, y_pred, average="macro", zero_division=0)
        precision_eval = precision_score(y_true, y_pred, average="macro", zero_division=0)
        recall_eval = recall_score(y_true, y_pred, average="macro", zero_division=0)
        metricdata = {'Accuracy': [accuracy_eval], 'F1 Score': [f1_score_eval], 'Precision': [precision_eval], 'Recall': [recall_eval]}
        finalmetricdata = pd.DataFrame(metricdata, index=[''])
    else:
        mse_eval = mean_squared_error(y_true, y_pred)
        rmse_eval = sqrt(mean_squared_error(y_true, y_pred))
        mae_eval = mean_absolute_error(y_true, y_pred)
        r2_eval = r2_score(y_true, y_pred)
        metricdata = {'MSE': [mse_eval], 'RMSE': [rmse_eval], 'MAE': [mae_eval], 'R2': [r2_eval]}
        finalmetricdata = pd.DataFrame(metricdata, index=[''])
    return finalmetricdata
# get metrics
model_eval_metrics(y_true_test, y_pred_test, classification="TRUE")
Accuracy | F1 Score | Precision | Recall
---|---|---|---
0.96144 | 0.961987 | 0.962448 | 0.961627
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle
from sklearn.metrics import roc_curve, auc, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
n_classes = 3
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
    fpr[i], tpr[i], _ = roc_curve(y_true_test, Y_pred_test[:, i], pos_label=i)
    roc_auc[i] = auc(fpr[i], tpr[i])
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at these points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
    mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
plt.figure()
lw = 2
plt.plot(
    fpr["macro"],
    tpr["macro"],
    label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
    color="navy",
    linestyle=":",
    linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {0:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {0:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Pneumonia vs Rest (area = {0:0.4f})'.format(roc_auc[2]))
plt.figure(1)
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
# Zoom in view of the upper left corner.
plt.figure(2)
lw = 2
plt.plot(
    fpr["macro"],
    tpr["macro"],
    label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
    color="navy",
    linestyle=":",
    linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {0:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {0:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Pneumonia vs Rest (area = {0:0.4f})'.format(roc_auc[2]))
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim(0, 0.2)
plt.ylim(0.8, 1)
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
batch_size = 32
epochs = 40
IMG_HEIGHT = 192
IMG_WIDTH = 192
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow
tensorflow.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
#Below, the only augmentations I include are random brightness variation and random horizontal flips. I have commented out other possible
#example augmentations that I chose not to use for this portfolio project.
train_image_generator = ImageDataGenerator(
    rescale=1./255,
    validation_split=0.2,
    brightness_range=[0.7, 1.3],
    #rotation_range=5,
    #width_shift_range=.15,
    #height_shift_range=.15,
    horizontal_flip=True,
    #shear_range=0.1,
    fill_mode='nearest',
    #zoom_range=0.05
) # Generator for our training data
#The validation generator must not augment data. It is tempting to build both the training and
#validation generators from the same ImageDataGenerator object, but the validation set is meant to
#replicate unseen data for the purpose of identifying good tuning parameters, so it should remain
#unaugmented. I therefore make sure that validation_image_generator contains no augmentation parameters.
validation_image_generator = ImageDataGenerator(
    rescale=1./255,
    validation_split=0.2
) # Generator for our validation data
#The test_image_generator replicates how the train_image_generator should perform on unseen data
#(from the test set), so it uses the same parameters as train_image_generator, except that it does
#not use validation_split.
test_image_generator = ImageDataGenerator(
    rescale=1./255,
    brightness_range=[0.7, 1.3],
    #rotation_range=5,
    #width_shift_range=.15,
    #height_shift_range=.15,
    horizontal_flip=True,
    #shear_range=0.1,
    fill_mode='nearest',
    #zoom_range=0.05
) # Generator for our test data
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow
tensorflow.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
#Note: Both train_gen and val_gen read from the training directory, but validation_split partitions
#it into a smaller training set and a validation set, so the two generators draw on disjoint images
#from the original training data, randomly separated.
#Set seed=42 for both train_gen and val_gen so the split is randomized consistently.
#Choose subset = 'training'
train_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
                                                      directory='train',
                                                      subset='training',
                                                      seed=42,
                                                      shuffle=True,
                                                      target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                      class_mode='categorical')
#Choose subset = 'validation'
val_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
                                                         directory='train',
                                                         subset='validation',
                                                         seed=42,
                                                         shuffle=True,
                                                         target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                         class_mode='categorical')
test_gen = test_image_generator.flow_from_directory(batch_size=batch_size,
                                                    directory='test',
                                                    seed=42,
                                                    shuffle=False, #shuffle should generally be False when the generator is used on test data
                                                    target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                    class_mode='categorical')
Found 2487 images belonging to 3 classes.
Found 621 images belonging to 3 classes.
Found 778 images belonging to 3 classes.
TRAIN_STEPS_PER_EPOCH = np.ceil((total_train*0.8/batch_size)-1)
VAL_STEPS_PER_EPOCH = np.ceil((total_train*0.2/batch_size)-1)
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow
tensorflow.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout, BatchNormalization, Flatten
from tensorflow.keras.regularizers import l1
from tensorflow.keras.optimizers import SGD, Adam
from sklearn.utils import class_weight
import numpy as np
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping, ModelCheckpoint
from tensorflow.keras.metrics import AUC
with tf.device('/device:GPU:0'):
    model = tf.keras.Sequential([
        # input: batches of images of shape (height, width, channels) = (192, 192, 3); the 3 channels are RGB
        tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu', input_shape=(192, 192, 3)),
        tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
        tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
        tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
        tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
        tf.keras.layers.Conv2D(kernel_size=3, filters=16, padding='same', activation='relu'),
        tf.keras.layers.Flatten(),
        # classifying into 3 categories
        tf.keras.layers.Dense(3, activation='softmax')
    ])
es = EarlyStopping(monitor='val_loss', patience=15, verbose=0, mode='min')
red_lr = ReduceLROnPlateau(monitor='val_loss', patience=2, verbose=1, factor=0.05)
mc = ModelCheckpoint('best_model_3_aug.h5', monitor='val_loss', mode='min', verbose=1, save_best_only=True)
model.compile(
    optimizer="adam",
    loss='categorical_crossentropy',
    metrics=['accuracy', 'AUC'])
history = model.fit(
    train_gen,
    #steps_per_epoch=total_train // batch_size, #adjusts training process for new image batches
    steps_per_epoch=TRAIN_STEPS_PER_EPOCH,
    epochs=epochs,
    validation_data=val_gen,
    #validation_steps=total_train // batch_size,
    validation_steps=VAL_STEPS_PER_EPOCH,
    verbose=1,
    callbacks=[mc, red_lr, es]
)
(Training log abridged: only epochs with checkpoint saves or learning-rate reductions are shown.)
Epoch 1: val_loss improved from inf to 0.39228, saving model to best_model_3_aug.h5 (val_accuracy: 0.8520)
Epoch 2: val_loss improved from 0.39228 to 0.28101 (val_accuracy: 0.8734)
Epoch 3: val_loss improved from 0.28101 to 0.26333 (val_accuracy: 0.9079)
Epoch 5: val_loss improved from 0.26333 to 0.17996 (val_accuracy: 0.9375)
Epoch 6: val_loss improved from 0.17996 to 0.12715 (val_accuracy: 0.9556)
Epoch 8: val_loss improved from 0.12715 to 0.12469 (val_accuracy: 0.9572)
Epoch 9: val_loss improved from 0.12469 to 0.11976 (val_accuracy: 0.9474)
Epoch 11: val_loss improved from 0.11976 to 0.10468 (val_accuracy: 0.9655)
Epoch 13: ReduceLROnPlateau reducing learning rate to 5.0000e-05.
Epoch 15: ReduceLROnPlateau reducing learning rate to 2.5000e-06.
Epoch 16: val_loss improved from 0.10468 to 0.10445 (val_accuracy: 0.9539)
Epoch 17: val_loss improved from 0.10445 to 0.10378 (val_accuracy: 0.9539)
Epoch 19: ReduceLROnPlateau reducing learning rate to 1.2500e-07.
Epoch 20: val_loss improved from 0.10378 to 0.10000 (val_accuracy: 0.9556)
Epoch 22: ReduceLROnPlateau reducing learning rate to 6.2500e-09.
Epoch 24: val_loss improved from 0.10000 to 0.09321 (val_accuracy: 0.9556, val_auc: 0.9976)
Epoch 26: ReduceLROnPlateau reducing learning rate to 3.1250e-10.
Epoch 28: ReduceLROnPlateau reducing learning rate to 1.5625e-11.
Epoch 30: ReduceLROnPlateau reducing learning rate to 7.8125e-13.
Epoch 32: ReduceLROnPlateau reducing learning rate to 3.9063e-14.
Epoch 34: ReduceLROnPlateau reducing learning rate to 1.9531e-15.
Epoch 36: ReduceLROnPlateau reducing learning rate to 9.7656e-17.
Epoch 38: ReduceLROnPlateau reducing learning rate to 4.8828e-18.
Epoch 39: val_loss did not improve from 0.09321 (loss: 0.0520, accuracy: 0.9833, val_loss: 0.1055); training stopped here, consistent with EarlyStopping's patience of 15 epochs after the epoch-24 best
from tensorflow.keras.models import load_model
model=load_model("best_model_3_aug.h5")
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow
tensorflow.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
#Using test_generator for predictions on test data:
#Sources: https://stackoverflow.com/questions/52270177/how-to-use-predict-generator-on-new-images-keras
# https://tylerburleigh.com/blog/predicting-pneumonia-from-chest-x-rays-using-efficientnet/
test_gen.reset() #It's important to always reset the test generator.
Y_pred_test=model.predict(test_gen)
y_pred_test=np.argmax(Y_pred_test,axis=1)
25/25 [==============================] - 12s 486ms/step
labels = (test_gen.class_indices)
print(labels)
{'COVID': 0, 'NORMAL': 1, 'Viral Pneumonia': 2}
from sklearn.metrics import classification_report, confusion_matrix
from mlxtend.plotting import plot_confusion_matrix
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow
tensorflow.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
y_true_test = test_gen.classes
CM = confusion_matrix(y_true_test, y_pred_test)
fig, ax = plot_confusion_matrix(conf_mat=CM, figsize=(5, 5))
plt.show()
#print(confusion_matrix(y_true, y_pred))
print('Classification Report')
target_names = ['COVID', 'NORMAL', 'Viral Pneumonia']
print(classification_report(y_true_test, y_pred_test, target_names=target_names, digits=4))
Classification Report
                 precision    recall  f1-score   support

          COVID     0.9793    0.9833    0.9813       240
         NORMAL     0.9698    0.9554    0.9625       269
Viral Pneumonia     0.9485    0.9591    0.9538       269

       accuracy                         0.9653       778
      macro avg     0.9659    0.9659    0.9659       778
   weighted avg     0.9654    0.9653    0.9653       778
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
from sklearn.metrics import mean_absolute_error
import pandas as pd
from math import sqrt
def model_eval_metrics(y_true, y_pred, classification="TRUE"):
    if classification == "TRUE":
        accuracy_eval = accuracy_score(y_true, y_pred)
        f1_score_eval = f1_score(y_true, y_pred, average="macro", zero_division=0)
        precision_eval = precision_score(y_true, y_pred, average="macro", zero_division=0)
        recall_eval = recall_score(y_true, y_pred, average="macro", zero_division=0)
        metricdata = {'Accuracy': [accuracy_eval], 'F1 Score': [f1_score_eval], 'Precision': [precision_eval], 'Recall': [recall_eval]}
        finalmetricdata = pd.DataFrame(metricdata, index=[''])
    else:
        mse_eval = mean_squared_error(y_true, y_pred)
        rmse_eval = sqrt(mean_squared_error(y_true, y_pred))
        mae_eval = mean_absolute_error(y_true, y_pred)
        r2_eval = r2_score(y_true, y_pred)
        metricdata = {'MSE': [mse_eval], 'RMSE': [rmse_eval], 'MAE': [mae_eval], 'R2': [r2_eval]}
        finalmetricdata = pd.DataFrame(metricdata, index=[''])
    return finalmetricdata
model_eval_metrics(y_true_test, y_pred_test, classification="TRUE")
| Accuracy | F1 Score | Precision | Recall |
|----------|----------|-----------|--------|
| 0.965296 | 0.965875 | 0.965865 | 0.965944 |
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle
from sklearn.metrics import roc_curve, auc, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
n_classes=3
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_true_test, Y_pred_test[:,i], pos_label=i)
roc_auc[i] = auc(fpr[i], tpr[i])
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at these points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
plt.figure()
lw = 2
plt.plot(
fpr["macro"],
tpr["macro"],
label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
color="navy",
linestyle=":",
linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Viral Pneumonia vs Rest (area = {:0.4f})'.format(roc_auc[2]))
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
# Zoom in view of the upper left corner.
plt.figure(2)
lw = 2
plt.plot(
fpr["macro"],
tpr["macro"],
label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
color="navy",
linestyle=":",
linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Viral Pneumonia vs Rest (area = {:0.4f})'.format(roc_auc[2]))
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim(0, 0.2)
plt.ylim(0.8, 1)
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
batch_size = 32
epochs = 40
IMG_HEIGHT = 192
IMG_WIDTH = 192
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
#Below, the only augmentations I include are random variation in brightness and random horizontal flips. Other
#possible augmentations that I chose not to use for this portfolio project are commented out.
train_image_generator = ImageDataGenerator(
rescale=1./255,
validation_split=0.2,
brightness_range=[0.7,1.3],
#rotation_range=5,
#width_shift_range=.15,
#height_shift_range=.15,
horizontal_flip=True,
#shear_range=0.1,
fill_mode='nearest',
#zoom_range=0.05
) # Generator for our training data
#The validation image generator should not augment data. It is tempting to reuse the same ImageDataGenerator object
#for both the training and validation generators, but the validation set is meant to replicate unseen data for the
#purpose of identifying good tuning parameters, so it should not be augmented. I therefore make sure that my
#validation_image_generator contains no augmentation parameters.
validation_image_generator = ImageDataGenerator(
rescale=1./255,
validation_split=0.2
) # Generator for our validation data
#The test_image_generator mirrors the train_image_generator (except that it has no validation_split) so that the
#test set reflects how a model trained on augmented data performs on unseen data passed through the same
#transformations.
test_image_generator = ImageDataGenerator(
rescale=1./255,
brightness_range=[0.7,1.3],
#rotation_range=5,
#width_shift_range=.15,
#height_shift_range=.15,
horizontal_flip=True,
#shear_range=0.1,
fill_mode='nearest',
#zoom_range=0.05
) # Generator for our test data
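#Because the test generator applies random brightness shifts and horizontal flips, each predict() pass scores one
#random augmentation of every test image. An optional extension (not used for the metrics reported here) is simple
#test-time augmentation: average the softmax outputs over several passes. A hedged sketch, assuming a trained model
#and the test_gen defined below (tta_probs and n_passes are illustrative names):
#tta_probs = np.zeros((test_gen.samples, 3))
#n_passes = 5 #illustrative number of passes
#for _ in range(n_passes):
#    test_gen.reset()
#    tta_probs += model.predict(test_gen)
#tta_probs /= n_passes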
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
#Note: Both train_gen and val_gen read from the training directory, but validation_split reserves 20% of those
#images for validation, so train_gen and val_gen draw on disjoint subsets of the original training set rather than
#the same image data.
#Choose subset = 'training'
#Set seed=42 for both train_gen and val_gen so that shuffling and random transformations are reproducible
train_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
directory='train',
subset='training',
seed=42,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='categorical')
#Choose subset = 'validation'
val_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
directory='train',
subset='validation',
seed=42,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='categorical')
test_gen = test_image_generator.flow_from_directory(batch_size=batch_size,
directory='test',
seed=42,
shuffle=False, #generally, shuffle should be set to false when the image generator is used on test data
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='categorical')
Found 2487 images belonging to 3 classes.
Found 621 images belonging to 3 classes.
Found 778 images belonging to 3 classes.
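#Sanity check: because shuffle=False for test_gen, the order of test_gen.filenames and test_gen.classes matches
#the row order of model.predict's output - this alignment is what makes the confusion matrix computed later valid.
#print(test_gen.filenames[:3]) #first three files, in prediction order
#print(test_gen.classes[:3]) #their integer labels, in the same order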
TRAIN_STEPS_PER_EPOCH = np.ceil((total_train*0.8/batch_size)-1)
VAL_STEPS_PER_EPOCH = np.ceil((total_train*0.2/batch_size)-1)
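#Worked example of the step counts above (assuming total_train counts all 3108 training-folder images, i.e. the
#2487 training plus 621 validation images found above): TRAIN_STEPS_PER_EPOCH = ceil(3108*0.8/32 - 1) = ceil(76.7) = 77,
#which matches the "77/77" step counter in the training logs, and VAL_STEPS_PER_EPOCH = ceil(3108*0.2/32 - 1) = ceil(18.4) = 19.
#print(np.ceil((3108*0.8/32)-1), np.ceil((3108*0.2/32)-1)) #77.0 19.0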
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout, BatchNormalization, Flatten
from tensorflow.keras.regularizers import l1
from tensorflow.keras.optimizers import SGD, Adam
from sklearn.utils import class_weight
import numpy as np
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping, ModelCheckpoint
from tensorflow.keras.metrics import AUC
with tf.device('/device:GPU:0'):
model = tf.keras.Sequential([
# input: batches of 192x192 RGB images, i.e. shape (height, width, channels) = (192, 192, 3); the 3 stands for the RGB channels
tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu', input_shape=(192, 192, 3)),
tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=2),
tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=2),
tf.keras.layers.Conv2D(kernel_size=3, filters=128, padding='same', activation='relu'),
tf.keras.layers.Conv2D(kernel_size=3, filters=128, padding='same', activation='relu'),
tf.keras.layers.Conv2D(kernel_size=3, filters=128, padding='same', activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=2),
tf.keras.layers.Flatten(),
# classifying into 3 categories
tf.keras.layers.Dense(3, activation='softmax')
])
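#Shape check for this architecture: the three 2x2 max-pools take the 192x192 input to 96 -> 48 -> 24, so the
#flattened feature vector has 24*24*128 = 73728 entries, and the final Dense(3) layer alone contributes
#73728*3 + 3 = 221187 parameters; model.summary() can confirm this.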
es= EarlyStopping(monitor='val_loss', patience=15, verbose=0, mode='min')
red_lr= ReduceLROnPlateau(monitor='val_loss',patience=2,verbose=1,factor=0.05)
mc = ModelCheckpoint('best_model_4_aug.h5', monitor='val_loss',mode='min', verbose=1, save_best_only=True)
model.compile(
optimizer="adam",
loss= 'categorical_crossentropy',
metrics=['accuracy', 'AUC'])
# Fitting the CNN to the Training set
history = model.fit(
train_gen,
#steps_per_epoch=total_train // batch_size, #adjusts training process for new image batches
steps_per_epoch=TRAIN_STEPS_PER_EPOCH,
epochs=epochs,
validation_data=val_gen,
#validation_steps=total_train // batch_size,
validation_steps=VAL_STEPS_PER_EPOCH,
verbose=1,
callbacks=[mc, red_lr, es]
)
Epoch 1/40 - 1619s - loss: 0.6503 - accuracy: 0.7010 - auc: 0.8854 - val_loss: 0.3683 - val_accuracy: 0.8635 - val_auc: 0.9615 - lr: 0.0010 - val_loss improved from inf to 0.36830, saving model to best_model_4_aug.h5
Epoch 2/40 - 48s - loss: 0.3027 - accuracy: 0.8945 - auc: 0.9738 - val_loss: 0.2809 - val_accuracy: 0.9128 - val_auc: 0.9823 - lr: 0.0010 - val_loss improved from 0.36830 to 0.28093, model saved
Epoch 3/40 - 46s - loss: 0.2642 - accuracy: 0.9039 - auc: 0.9795 - val_loss: 0.2522 - val_accuracy: 0.9046 - val_auc: 0.9855 - lr: 0.0010 - val_loss improved from 0.28093 to 0.25216, model saved
Epoch 4/40 - 46s - loss: 0.2270 - accuracy: 0.9218 - auc: 0.9851 - val_loss: 0.2497 - val_accuracy: 0.8964 - val_auc: 0.9812 - lr: 0.0010 - val_loss improved from 0.25216 to 0.24971, model saved
Epoch 5/40 - 46s - loss: 0.1756 - accuracy: 0.9377 - auc: 0.9901 - val_loss: 0.2036 - val_accuracy: 0.9145 - val_auc: 0.9882 - lr: 0.0010 - val_loss improved from 0.24971 to 0.20357, model saved
Epoch 6/40 - 46s - loss: 0.1619 - accuracy: 0.9409 - auc: 0.9913 - val_loss: 0.1715 - val_accuracy: 0.9375 - val_auc: 0.9919 - lr: 0.0010 - val_loss improved from 0.20357 to 0.17153, model saved
Epoch 7/40 - 45s - loss: 0.1373 - accuracy: 0.9540 - auc: 0.9938 - val_loss: 0.1750 - val_accuracy: 0.9326 - val_auc: 0.9917 - lr: 0.0010 - val_loss did not improve from 0.17153
Epoch 8/40 - 48s - loss: 0.1253 - accuracy: 0.9576 - auc: 0.9946 - val_loss: 0.1701 - val_accuracy: 0.9309 - val_auc: 0.9920 - lr: 0.0010 - val_loss improved from 0.17153 to 0.17014, model saved
Epoch 9/40 - 45s - loss: 0.1262 - accuracy: 0.9515 - auc: 0.9949 - val_loss: 0.1849 - val_accuracy: 0.9309 - val_auc: 0.9913 - lr: 0.0010 - val_loss did not improve from 0.17014
Epoch 10/40 - 46s - loss: 0.1241 - accuracy: 0.9548 - auc: 0.9949 - val_loss: 0.1432 - val_accuracy: 0.9490 - val_auc: 0.9936 - lr: 0.0010 - val_loss improved from 0.17014 to 0.14320, model saved
Epoch 11/40 - 45s - loss: 0.0962 - accuracy: 0.9670 - auc: 0.9962 - val_loss: 0.2009 - val_accuracy: 0.9375 - val_auc: 0.9897 - lr: 0.0010 - val_loss did not improve from 0.14320
Epoch 12/40 - 44s - loss: 0.1021 - accuracy: 0.9686 - auc: 0.9961 - val_loss: 0.1731 - val_accuracy: 0.9474 - val_auc: 0.9912 - lr: 0.0010 - val_loss did not improve; ReduceLROnPlateau reducing learning rate to 5.0000e-05
Epoch 13/40 - 47s - loss: 0.0526 - accuracy: 0.9829 - auc: 0.9987 - val_loss: 0.1899 - val_accuracy: 0.9507 - val_auc: 0.9881 - lr: 5.0000e-05 - val_loss did not improve from 0.14320
Epoch 14/40 - 45s - loss: 0.0467 - accuracy: 0.9882 - auc: 0.9988 - val_loss: 0.1888 - val_accuracy: 0.9457 - val_auc: 0.9875 - lr: 5.0000e-05 - val_loss did not improve; lr reduced to 2.5000e-06
Epoch 15/40 - 45s - loss: 0.0443 - accuracy: 0.9874 - auc: 0.9991 - val_loss: 0.1986 - val_accuracy: 0.9474 - val_auc: 0.9861 - lr: 2.5000e-06 - val_loss did not improve from 0.14320
Epoch 16/40 - 45s - loss: 0.0429 - accuracy: 0.9874 - auc: 0.9992 - val_loss: 0.1988 - val_accuracy: 0.9474 - val_auc: 0.9863 - lr: 2.5000e-06 - val_loss did not improve; lr reduced to 1.2500e-07
Epoch 17/40 - 45s - loss: 0.0407 - accuracy: 0.9874 - auc: 0.9990 - val_loss: 0.1891 - val_accuracy: 0.9490 - val_auc: 0.9876 - lr: 1.2500e-07 - val_loss did not improve from 0.14320
Epoch 18/40 - 45s - loss: 0.0406 - accuracy: 0.9882 - auc: 0.9994 - val_loss: 0.1988 - val_accuracy: 0.9474 - val_auc: 0.9862 - lr: 1.2500e-07 - val_loss did not improve; lr reduced to 6.2500e-09
Epoch 19/40 - 45s - loss: 0.0402 - accuracy: 0.9870 - auc: 0.9993 - val_loss: 0.1979 - val_accuracy: 0.9490 - val_auc: 0.9862 - lr: 6.2500e-09 - val_loss did not improve from 0.14320
Epoch 20/40 - 47s - loss: 0.0463 - accuracy: 0.9853 - auc: 0.9990 - val_loss: 0.2001 - val_accuracy: 0.9474 - val_auc: 0.9861 - lr: 6.2500e-09 - val_loss did not improve; lr reduced to 3.1250e-10
Epoch 21/40 - 45s - loss: 0.0389 - accuracy: 0.9870 - auc: 0.9993 - val_loss: 0.1971 - val_accuracy: 0.9490 - val_auc: 0.9862 - lr: 3.1250e-10 - val_loss did not improve from 0.14320
Epoch 22/40 - 44s - loss: 0.0406 - accuracy: 0.9878 - auc: 0.9992 - val_loss: 0.1992 - val_accuracy: 0.9474 - val_auc: 0.9862 - lr: 3.1250e-10 - val_loss did not improve; lr reduced to 1.5625e-11
Epoch 23/40 - 44s - loss: 0.0401 - accuracy: 0.9882 - auc: 0.9994 - val_loss: 0.1991 - val_accuracy: 0.9474 - val_auc: 0.9862 - lr: 1.5625e-11 - val_loss did not improve from 0.14320
Epoch 24/40 - 44s - loss: 0.0451 - accuracy: 0.9866 - auc: 0.9988 - val_loss: 0.2001 - val_accuracy: 0.9474 - val_auc: 0.9861 - lr: 1.5625e-11 - val_loss did not improve; lr reduced to 7.8125e-13
Epoch 25/40 - 45s - loss: 0.0407 - accuracy: 0.9870 - auc: 0.9992 - val_loss: 0.1924 - val_accuracy: 0.9507 - val_auc: 0.9864 - lr: 7.8125e-13 - val_loss did not improve from 0.14320
from tensorflow.keras.models import load_model
model=load_model("best_model_4_aug.h5")
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
#Using test_generator for predictions on test data:
#Sources: https://stackoverflow.com/questions/52270177/how-to-use-predict-generator-on-new-images-keras
# https://tylerburleigh.com/blog/predicting-pneumonia-from-chest-x-rays-using-efficientnet/
test_gen.reset() #It's important to always reset the test generator.
Y_pred_test=model.predict(test_gen)
y_pred_test=np.argmax(Y_pred_test,axis=1)
25/25 [==============================] - 582s 24s/step
labels = (test_gen.class_indices)
print(labels)
{'COVID': 0, 'NORMAL': 1, 'Viral Pneumonia': 2}
from sklearn.metrics import classification_report, confusion_matrix
from mlxtend.plotting import plot_confusion_matrix
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
y_true_test = test_gen.classes
CM = confusion_matrix(y_true_test, y_pred_test)
fig, ax = plot_confusion_matrix(conf_mat=CM , figsize=(5, 5))
plt.show()
#print(confusion_matrix(y_true, y_pred))
print('Classification Report')
target_names = ['COVID', 'NORMAL', 'Viral Pneumonia']
print(classification_report(y_true_test, y_pred_test, target_names=target_names, digits=4))
Classification Report
                  precision    recall  f1-score   support

          COVID      0.9712    0.9833    0.9772       240
         NORMAL      0.9688    0.9219    0.9448       269
Viral Pneumonia      0.9176    0.9517    0.9343       269

       accuracy                          0.9512       778
      macro avg      0.9525    0.9523    0.9521       778
   weighted avg      0.9518    0.9512    0.9512       778
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
from sklearn.metrics import mean_absolute_error
import pandas as pd
from math import sqrt
def model_eval_metrics(y_true, y_pred, classification="TRUE"):
    if classification == "TRUE":
        accuracy_eval = accuracy_score(y_true, y_pred)
        f1_score_eval = f1_score(y_true, y_pred, average="macro", zero_division=0)
        precision_eval = precision_score(y_true, y_pred, average="macro", zero_division=0)
        recall_eval = recall_score(y_true, y_pred, average="macro", zero_division=0)
        metricdata = {'Accuracy': [accuracy_eval], 'F1 Score': [f1_score_eval], 'Precision': [precision_eval], 'Recall': [recall_eval]}
        finalmetricdata = pd.DataFrame(metricdata, index=[''])
    else:
        mse_eval = mean_squared_error(y_true, y_pred)
        rmse_eval = sqrt(mean_squared_error(y_true, y_pred))
        mae_eval = mean_absolute_error(y_true, y_pred)
        r2_eval = r2_score(y_true, y_pred)
        metricdata = {'MSE': [mse_eval], 'RMSE': [rmse_eval], 'MAE': [mae_eval], 'R2': [r2_eval]}
        finalmetricdata = pd.DataFrame(metricdata, index=[''])
    return finalmetricdata
# get metrics
model_eval_metrics(y_true_test, y_pred_test, classification="TRUE")
| Accuracy | F1 Score | Precision | Recall |
|----------|----------|-----------|--------|
| 0.951157 | 0.952098 | 0.952502 | 0.952313 |
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle
from sklearn.metrics import roc_curve, auc, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
n_classes=3
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_true_test, Y_pred_test[:,i], pos_label=i)
roc_auc[i] = auc(fpr[i], tpr[i])
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at these points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
plt.figure()
lw = 2
plt.plot(
fpr["macro"],
tpr["macro"],
label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
color="navy",
linestyle=":",
linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Viral Pneumonia vs Rest (area = {:0.4f})'.format(roc_auc[2]))
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
# Zoom in view of the upper left corner.
plt.figure(2)
lw = 2
plt.plot(
fpr["macro"],
tpr["macro"],
label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
color="navy",
linestyle=":",
linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Viral Pneumonia vs Rest (area = {:0.4f})'.format(roc_auc[2]))
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim(0, 0.2)
plt.ylim(0.8, 1)
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
batch_size = 32
epochs = 40
IMG_HEIGHT = 192
IMG_WIDTH = 192
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
#Below, the only augmentations I include are random variation in brightness and random horizontal flips. Other
#possible augmentations that I chose not to use for this portfolio project are commented out.
train_image_generator = ImageDataGenerator(
rescale=1./255,
validation_split=0.2,
brightness_range=[0.7,1.3],
#rotation_range=5,
#width_shift_range=.15,
#height_shift_range=.15,
horizontal_flip=True,
#shear_range=0.1,
fill_mode='nearest',
#zoom_range=0.05
) # Generator for our training data
#The validation image generator should not augment data. It is tempting to reuse the same ImageDataGenerator object
#for both the training and validation generators, but the validation set is meant to replicate unseen data for the
#purpose of identifying good tuning parameters, so it should not be augmented. I therefore make sure that my
#validation_image_generator contains no augmentation parameters.
validation_image_generator = ImageDataGenerator(
rescale=1./255,
validation_split=0.2
) # Generator for our validation data
#The test_image_generator mirrors the train_image_generator (except that it has no validation_split) so that the
#test set reflects how a model trained on augmented data performs on unseen data passed through the same
#transformations.
test_image_generator = ImageDataGenerator(
rescale=1./255,
brightness_range=[0.7,1.3],
#rotation_range=5,
#width_shift_range=.15,
#height_shift_range=.15,
horizontal_flip=True,
#shear_range=0.1,
fill_mode='nearest',
#zoom_range=0.05
) # Generator for our test data
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
#Note: Both train_gen and val_gen read from the training directory, but validation_split reserves 20% of those
#images for validation, so train_gen and val_gen draw on disjoint subsets of the original training set rather than
#the same image data.
#Choose subset = 'training'
#Set seed=42 for both train_gen and val_gen so that shuffling and random transformations are reproducible
train_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
directory='train',
subset='training',
seed=42,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='categorical')
#Choose subset = 'validation'
val_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
directory='train',
subset='validation',
seed=42,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='categorical')
test_gen = test_image_generator.flow_from_directory(batch_size=batch_size,
directory='test',
seed=42,
shuffle=False, #generally, shuffle should be set to false when the image generator is used on test data
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='categorical')
Found 2487 images belonging to 3 classes.
Found 621 images belonging to 3 classes.
Found 778 images belonging to 3 classes.
TRAIN_STEPS_PER_EPOCH = np.ceil((total_train*0.8/batch_size)-1)
VAL_STEPS_PER_EPOCH = np.ceil((total_train*0.2/batch_size)-1)
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout, BatchNormalization,Flatten
from tensorflow.keras.regularizers import l1
from tensorflow.keras.optimizers import SGD, Adam
from sklearn.utils import class_weight
import numpy as np
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping, ModelCheckpoint
from tensorflow.keras.metrics import AUC
# Next, build a modified SqueezeNet model to see how well it performs
# (and whether a deeper network of stacked fire modules helps)
l = tf.keras.layers # syntax shortcut
# Create function to define fire modules
def fire(x, squeeze, expand):
    y = l.Conv2D(filters=squeeze, kernel_size=1, padding='same', activation='relu')(x)
    y1 = l.Conv2D(filters=expand//2, kernel_size=1, padding='same', activation='relu')(y) # note: // is integer division; each expand branch gets expand//2 filters, so the concatenation below restores the requested expand width
    y3 = l.Conv2D(filters=expand//2, kernel_size=3, padding='same', activation='relu')(y)
    return tf.keras.layers.concatenate([y1, y3])
# this is to make it behave similarly to other Keras layers
def fire_module(squeeze, expand):
return lambda x: fire(x, squeeze, expand)
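#Quick sanity check on the fire module: each expand branch has expand//2 filters, so concatenating the 1x1 and 3x3
#branches yields expand output channels. A minimal sketch with a dummy input (probe and out are illustrative names):
#probe = tf.keras.layers.Input(shape=[192, 192, 3])
#out = fire_module(8, 16)(probe)
#print(out.shape) #expected: (None, 192, 192, 16)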
with tf.device('/device:GPU:0'):
x = tf.keras.layers.Input(shape=[192,192, 3]) # input is 192x192 pixels RGB
y = tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu')(x)
y = fire_module(8, 16)(y)
y = tf.keras.layers.BatchNormalization()(y)
y = tf.keras.layers.MaxPooling2D(pool_size=2)(y)
y = fire_module(16, 32)(y)
y = tf.keras.layers.BatchNormalization()(y)
y = fire_module(26, 32)(y)
y = tf.keras.layers.BatchNormalization()(y)
y = fire_module(32, 64)(y)
y = tf.keras.layers.BatchNormalization()(y)
y = tf.keras.layers.MaxPooling2D(pool_size=2)(y)
y = fire_module(32, 64)(y)
y = tf.keras.layers.BatchNormalization()(y)
y = fire_module(64, 128)(y)
y = tf.keras.layers.BatchNormalization()(y)
y = fire_module(64, 128)(y)
y = tf.keras.layers.BatchNormalization()(y)
y = fire_module(128, 256)(y)
y = tf.keras.layers.BatchNormalization()(y)
y = tf.keras.layers.MaxPooling2D(pool_size=2)(y)
y = fire_module(128, 256)(y)
y = tf.keras.layers.BatchNormalization()(y)
y = tf.keras.layers.MaxPooling2D(pool_size=2)(y)
y = tf.keras.layers.Conv2D(kernel_size=3, filters=64, padding='same', activation='relu')(y)
y = tf.keras.layers.BatchNormalization()(y)
y = tf.keras.layers.GlobalAveragePooling2D()(y) # averages each feature map over height and width, returning one scalar per channel
y = tf.keras.layers.Dense(3, activation='softmax')(y) # final layer parameter count = (channels from GAP + 1 bias) * 3 output nodes
model = tf.keras.Model(x, y)
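#Parameter check for the classifier head: GlobalAveragePooling2D reduces the final 64-channel feature map to a
#length-64 vector, so the Dense(3) softmax layer has (64 + 1) * 3 = 195 parameters (weights plus biases);
#model.summary() should report this for the last layer.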
red_lr= ReduceLROnPlateau(monitor='val_loss',patience=2,verbose=1,factor=0.05)
mc = ModelCheckpoint('best_model_5_aug.h5', monitor='val_loss',mode='min', verbose=1, save_best_only=True)
es= EarlyStopping(monitor='val_loss', patience=15, verbose=0, mode='min')
model.compile(
optimizer="adam",
loss= 'categorical_crossentropy',
metrics=['accuracy', 'AUC'])
# Fitting the CNN to the Training set
history = model.fit(
train_gen,
#steps_per_epoch=total_train // batch_size, #adjusts training process for new image batches
steps_per_epoch=TRAIN_STEPS_PER_EPOCH,
epochs=epochs,
validation_data=val_gen,
#validation_steps=total_train // batch_size,
validation_steps=VAL_STEPS_PER_EPOCH,
verbose=1,
callbacks=[mc,red_lr,es]
)
Epoch 1/40 - 51s - loss: 0.5137 - accuracy: 0.8077 - auc: 0.9305 - val_loss: 3.6128 - val_accuracy: 0.3059 - val_auc: 0.4798 - lr: 0.0010 - val_loss improved from inf to 3.61278, saving model to best_model_5_aug.h5
Epoch 2/40 - 44s - loss: 0.3203 - accuracy: 0.8831 - auc: 0.9711 - val_loss: 5.3654 - val_accuracy: 0.3487 - val_auc: 0.5391 - lr: 0.0010 - val_loss did not improve from 3.61278
Epoch 3/40 - 45s - loss: 0.2914 - accuracy: 0.8912 - auc: 0.9759 - val_loss: 6.2672 - val_accuracy: 0.3454 - val_auc: 0.5300 - lr: 0.0010 - val_loss did not improve; ReduceLROnPlateau reducing learning rate to 5.0000e-05
Epoch 4/40 - 45s - loss: 0.2449 - accuracy: 0.9092 - auc: 0.9836 - val_loss: 5.4474 - val_accuracy: 0.3470 - val_auc: 0.5060 - lr: 5.0000e-05 - val_loss did not improve from 3.61278
Epoch 5/40 - 45s - loss: 0.2077 - accuracy: 0.9299 - auc: 0.9883 - val_loss: 4.3935 - val_accuracy: 0.3602 - val_auc: 0.5451 - lr: 5.0000e-05 - val_loss did not improve; lr reduced to 2.5000e-06
Epoch 6/40 - 46s - loss: 0.2066 - accuracy: 0.9336 - auc: 0.9879 - val_loss: 2.8419 - val_accuracy: 0.4918 - val_auc: 0.6137 - lr: 2.5000e-06 - val_loss improved from 3.61278 to 2.84188, model saved
Epoch 7/40 - 46s - loss: 0.2062 - accuracy: 0.9336 - auc: 0.9886 - val_loss: 1.5595 - val_accuracy: 0.5987 - val_auc: 0.7304 - lr: 2.5000e-06 - val_loss improved from 2.84188 to 1.55949, model saved
Epoch 8/40 - 45s - loss: 0.2043 - accuracy: 0.9308 - auc: 0.9882 - val_loss: 0.7483 - val_accuracy: 0.6842 - val_auc: 0.8810 - lr: 2.5000e-06 - val_loss improved from 1.55949 to 0.74833, model saved
Epoch 9/40 - 47s - loss: 0.2010 - accuracy: 0.9320 - auc: 0.9889 - val_loss: 0.3666 - val_accuracy: 0.8503 - val_auc: 0.9623 - lr: 2.5000e-06 - val_loss improved from 0.74833 to 0.36661, model saved
Epoch 10/40 - 46s - loss: 0.2060 - accuracy: 0.9279 - auc: 0.9884 - val_loss: 0.2176 - val_accuracy: 0.9326 - val_auc: 0.9877 - lr: 2.5000e-06 - val_loss improved from 0.36661 to 0.21758, model saved
Epoch 11/40 - 46s - loss: 0.1915 - accuracy: 0.9377 - auc: 0.9900 - val_loss: 0.1913 - val_accuracy: 0.9408 - val_auc: 0.9909 - lr: 2.5000e-06 - val_loss improved from 0.21758 to 0.19134, model saved
Epoch 12/40 - 46s - loss: 0.1985 - accuracy: 0.9303 - auc: 0.9896 - val_loss: 0.1802 - val_accuracy: 0.9408 - val_auc: 0.9921 - lr: 2.5000e-06 - val_loss improved from 0.19134 to 0.18024, model saved
Epoch 13/40 - 46s - loss: 0.1962 - accuracy: 0.9324 - auc: 0.9896 - val_loss: 0.1713 - val_accuracy: 0.9408 - val_auc: 0.9930 - lr: 2.5000e-06 - val_loss improved from 0.18024 to 0.17129, model saved
Epoch 14/40 - 45s - loss: 0.2033 - accuracy: 0.9316 - auc: 0.9884 - val_loss: 0.1764 - val_accuracy: 0.9391 - val_auc: 0.9926 - lr: 2.5000e-06 - val_loss did not improve from 0.17129
Epoch 15/40 - 47s - loss: 0.1979 - accuracy: 0.9356 - auc: 0.9889 - val_loss: 0.1765 - val_accuracy: 0.9375 - val_auc: 0.9925 - lr: 2.5000e-06 - val_loss did not improve; lr reduced to 1.2500e-07
Epoch 16/40 - 45s - loss: 0.1890 - accuracy: 0.9365 - auc: 0.9903 - val_loss: 0.1743 - val_accuracy: 0.9391 - val_auc: 0.9927 - lr: 1.2500e-07 - val_loss did not improve from 0.17129
Epoch 17/40 - 45s - loss: 0.1984 - accuracy: 0.9324 - auc: 0.9895 - val_loss: 0.1756 - val_accuracy: 0.9375 - val_auc: 0.9926 - lr: 1.2500e-07 - val_loss did not improve; lr reduced to 6.2500e-09
Epoch 18/40 - 45s - loss: 0.1994 - accuracy: 0.9291 - auc: 0.9896 - val_loss: 0.1752 - val_accuracy: 0.9375 - val_auc: 0.9925 - lr: 6.2500e-09 - val_loss did not improve from 0.17129
Epoch 19/40 - 45s - loss: 0.1961 - accuracy: 0.9397 - auc: 0.9890 - val_loss: 0.1721 - val_accuracy: 0.9391 - val_auc: 0.9930 - lr: 6.2500e-09 - val_loss did not improve; lr reduced to 3.1250e-10
Epoch 20/40 - 47s - loss: 0.1920 - accuracy: 0.9328 - auc: 0.9900 - val_loss: 0.1742 - val_accuracy: 0.9391 - val_auc: 0.9926 - lr: 3.1250e-10 - val_loss did not improve from 0.17129
Epoch 21/40 - 45s - loss: 0.2003 - accuracy: 0.9352 - auc: 0.9892 - val_loss: 0.1752 - val_accuracy: 0.9375 - val_auc: 0.9925 - lr: 3.1250e-10 - val_loss did not improve; lr reduced to 1.5625e-11
Epoch 22/40 - 45s - loss: 0.1930 - accuracy: 0.9316 - auc: 0.9902 - val_loss: 0.1747 - val_accuracy: 0.9375 - val_auc: 0.9926 - lr: 1.5625e-11 - val_loss did not improve from 0.17129
Epoch 23/40 - 46s - loss: 0.1916 - accuracy: 0.9375 - auc: 0.9899 - val_loss: 0.1709 - val_accuracy: 0.9391 - val_auc: 0.9930 - lr: 1.5625e-11 - val_loss improved from 0.17129 to 0.17088, model saved
Epoch 24/40 - 45s - loss: 0.1885 - accuracy: 0.9365 - auc: 0.9906 - val_loss: 0.1761 - val_accuracy: 0.9375 - val_auc: 0.9924 - lr: 1.5625e-11 - val_loss did not improve from 0.17088
Epoch 25/40 - 49s - loss: 0.1882 - accuracy: 0.9369 - auc: 0.9900 - val_loss: 0.1706 - val_accuracy: 0.9408 - val_auc: 0.9930 - lr: 1.5625e-11 - val_loss improved from 0.17088 to 0.17060, model saved
Epoch 26/40 - 45s - loss: 0.1979 - accuracy: 0.9344 - auc: 0.9892 - val_loss: 0.1715 - val_accuracy: 0.9408 - val_auc: 0.9929 - lr: 1.5625e-11 - val_loss did not improve from 0.17060
Epoch 27/40 - 45s - loss: 0.1933 - accuracy: 0.9332 - auc: 0.9899 - val_loss: 0.1770 - val_accuracy: 0.9375 - val_auc: 0.9924 - lr: 1.5625e-11 - val_loss did not improve; lr reduced to 7.8125e-13
Epoch 28/40 - 46s - loss: 0.1908 - accuracy: 0.9352 - auc: 0.9906 - val_loss: 0.1741 - val_accuracy: 0.9391 - val_auc: 0.9926 - lr: 7.8125e-13 - val_loss did not improve from 0.17060
Epoch 29/40 - 47s - loss: 0.1997 - accuracy: 0.9295 - auc: 0.9894 - val_loss: 0.1711 - val_accuracy: 0.9424 - val_auc: 0.9929 - lr: 7.8125e-13 - val_loss did not improve; lr reduced to 3.9063e-14
Epoch 30/40 - 45s - loss: 0.1900 - accuracy: 0.9365 - auc: 0.9902 - val_loss: 0.1737 - val_accuracy: 0.9391 - val_auc: 0.9927 - lr: 3.9063e-14 - val_loss did not improve from 0.17060
Epoch 31/40 - 46s - loss: 0.1907 - accuracy: 0.9365 - auc: 0.9903 - val_loss: 0.1730 - val_accuracy: 0.9408 - val_auc: 0.9927 - lr: 3.9063e-14 - val_loss did not improve; lr reduced to 1.9531e-15
Epoch 32/40 - 46s - loss: 0.1919 - accuracy: 0.9316 - auc: 0.9904 - val_loss: 0.1751 - val_accuracy: 0.9375 - val_auc: 0.9925 - lr: 1.9531e-15 - val_loss did not improve from 0.17060
Epoch 33/40 - 45s - loss: 0.1890 - accuracy: 0.9352 - auc: 0.9906 - val_loss: 0.1760 - val_accuracy: 0.9375 - val_auc: 0.9924 - lr: 1.9531e-15 - val_loss did not improve; lr reduced to 9.7656e-17
Epoch 34/40 - 48s - loss: 0.1870 - accuracy: 0.9352 - auc: 0.9908 - val_loss: 0.1748 - val_accuracy: 0.9375 - val_auc: 0.9926 - lr: 9.7656e-17 - val_loss did not improve from 0.17060
Epoch 35/40 - 45s - loss: 0.1974 - accuracy: 0.9275 - auc: 0.9895 - val_loss: 0.1754 - val_accuracy: 0.9375 - val_auc: 0.9925 - lr: 9.7656e-17 - val_loss did not improve; lr reduced to 4.8828e-18
Epoch 36/40 - 46s - loss: 0.1928 - accuracy: 0.9348 - auc: 0.9893 - val_loss: 0.1707 - val_accuracy: 0.9408 - val_auc: 0.9930 - lr: 4.8828e-18 - val_loss did not improve from 0.17060
Epoch 37/40 - 45s - loss: 0.1847 - accuracy: 0.9397 - auc: 0.9911 - val_loss: 0.1737 - val_accuracy: 0.9391 - val_auc: 0.9926 - lr: 4.8828e-18 - val_loss did not improve; lr reduced to 2.4414e-19
Epoch 38/40 - 45s - loss: 0.1907 - accuracy: 0.9340 - auc: 0.9903 - val_loss: 0.1711 - val_accuracy: 0.9391 - val_auc: 0.9931 - lr: 2.4414e-19 - val_loss did not improve from 0.17060
Epoch 39/40 - 46s - loss: 0.1832 - accuracy: 0.9409 - auc: 0.9913 - val_loss: 0.1682 - val_accuracy: 0.9408 - val_auc: 0.9933 - lr: 2.4414e-19 - val_loss improved from 0.17060 to 0.16821, model saved
Epoch 40/40 - 46s - loss: 0.1947 - accuracy: 0.9336 - auc: 0.9897 - val_loss: 0.1737 - val_accuracy: 0.9391 - val_auc: 0.9927 - lr: 2.4414e-19 - val_loss did not improve from 0.16821
from tensorflow.keras.models import load_model
model=load_model("best_model_5_aug.h5")
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
#Using test_generator for predictions on test data:
#Sources: https://stackoverflow.com/questions/52270177/how-to-use-predict-generator-on-new-images-keras
# https://tylerburleigh.com/blog/predicting-pneumonia-from-chest-x-rays-using-efficientnet/
test_gen.reset() #It's important to always reset the test generator.
Y_pred_test=model.predict(test_gen)
y_pred_test=np.argmax(Y_pred_test,axis=1)
25/25 [==============================] - 12s 477ms/step
labels = (test_gen.class_indices)
print(labels)
{'COVID': 0, 'NORMAL': 1, 'Viral Pneumonia': 2}
from sklearn.metrics import classification_report, confusion_matrix
from mlxtend.plotting import plot_confusion_matrix
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
y_true_test = test_gen.classes
CM = confusion_matrix(y_true_test, y_pred_test)
fig, ax = plot_confusion_matrix(conf_mat=CM , figsize=(5, 5))
plt.show()
#print(confusion_matrix(y_true, y_pred))
print('Classification Report')
target_names = ['COVID', 'NORMAL', 'Viral Pneumonia']
print(classification_report(y_true_test, y_pred_test, target_names=target_names, digits=4))
Classification Report
                  precision    recall  f1-score   support

          COVID      0.9710    0.9750    0.9730       240
         NORMAL      0.9170    0.9851    0.9498       269
Viral Pneumonia      0.9597    0.8848    0.9207       269

       accuracy                          0.9473       778
      macro avg      0.9492    0.9483    0.9478       778
   weighted avg      0.9484    0.9473    0.9469       778
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
from sklearn.metrics import mean_absolute_error
import pandas as pd
from math import sqrt
def model_eval_metrics(y_true, y_pred, classification="TRUE"):
    if classification == "TRUE":
        accuracy_eval = accuracy_score(y_true, y_pred)
        f1_score_eval = f1_score(y_true, y_pred, average="macro", zero_division=0)
        precision_eval = precision_score(y_true, y_pred, average="macro", zero_division=0)
        recall_eval = recall_score(y_true, y_pred, average="macro", zero_division=0)
        metricdata = {'Accuracy': [accuracy_eval], 'F1 Score': [f1_score_eval], 'Precision': [precision_eval], 'Recall': [recall_eval]}
        finalmetricdata = pd.DataFrame(metricdata, index=[''])
    else:
        mse_eval = mean_squared_error(y_true, y_pred)
        rmse_eval = sqrt(mean_squared_error(y_true, y_pred))
        mae_eval = mean_absolute_error(y_true, y_pred)
        r2_eval = r2_score(y_true, y_pred)
        metricdata = {'MSE': [mse_eval], 'RMSE': [rmse_eval], 'MAE': [mae_eval], 'R2': [r2_eval]}
        finalmetricdata = pd.DataFrame(metricdata, index=[''])
    return finalmetricdata
model_eval_metrics(y_true_test, y_pred_test, classification="TRUE")
| Accuracy | F1 Score | Precision | Recall |
|----------|----------|-----------|--------|
| 0.947301 | 0.947830 | 0.949196 | 0.948296 |
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle
from sklearn.metrics import roc_curve, auc, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
n_classes=3
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_true_test, Y_pred_test[:,i], pos_label=i)
roc_auc[i] = auc(fpr[i], tpr[i])
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at these points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
plt.figure()
lw = 2
plt.plot(
fpr["macro"],
tpr["macro"],
label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
color="navy",
linestyle=":",
linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Viral Pneumonia vs Rest (area = {:0.4f})'.format(roc_auc[2]))
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
# Zoom in view of the upper left corner.
plt.figure(2)
lw = 2
plt.plot(
fpr["macro"],
tpr["macro"],
label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
color="navy",
linestyle=":",
linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Viral Pneumonia vs Rest (area = {:0.4f})'.format(roc_auc[2]))
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim(0, 0.2)
plt.ylim(0.8, 1)
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
batch_size = 50
epochs = 40
IMG_HEIGHT = 192
IMG_WIDTH = 192
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
#Below, the only augmentations I include are random variation in brightness and random horizontal flips. Other
#possible augmentations that I chose not to use for this portfolio project are commented out.
train_image_generator = ImageDataGenerator(
rescale=1./255,
validation_split=0.2,
brightness_range=[0.7,1.3],
#rotation_range=5,
#width_shift_range=.15,
#height_shift_range=.15,
horizontal_flip=True,
#shear_range=0.1,
fill_mode='nearest',
#zoom_range=0.05
) # Generator for our training data
#The validation image generator should not augment data. It is tempting to reuse the same ImageDataGenerator object
#for both the training and validation generators, but the validation set is meant to replicate unseen data for the
#purpose of identifying good tuning parameters, so it should not be augmented. I therefore make sure that my
#validation_image_generator contains no augmentation parameters.
validation_image_generator = ImageDataGenerator(
rescale=1./255,
validation_split=0.2
) # Generator for our validation data
#The test_image_generator mirrors the train_image_generator (except that it has no validation_split) so that the
#test set reflects how a model trained on augmented data performs on unseen data passed through the same
#transformations.
test_image_generator = ImageDataGenerator(
rescale=1./255,
brightness_range=[0.7,1.3],
#rotation_range=5,
#width_shift_range=.15,
#height_shift_range=.15,
horizontal_flip=True,
#shear_range=0.1,
fill_mode='nearest',
#zoom_range=0.05
) # Generator for our test data
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow
tensorflow.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
#Note: both train_gen and val_gen read from the training directory, but validation_split partitions it into a smaller
#training set and a separate validation set, so the two generators draw on disjoint subsets of the original training images
#(see the sanity check after val_gen below).
#Choose subset = 'training'
#Set seed=42 for both train_gen and for val_gen to ensure they are randomized similarly
train_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
directory='train',
subset='training',
seed=42,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='categorical')
#Choose subset = 'validation'
val_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
directory='train',
subset='validation',
seed=42,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='categorical')
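As a small sanity check (a sketch using the two generators just defined), the subset sizes should add up to the full training-directory count, consistent with the two subsets being disjoint partitions of the training set:
print(train_gen.samples, val_gen.samples, train_gen.samples + val_gen.samples) #expected: 2487 621 3108, per the "Found ..." output below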
test_gen = test_image_generator.flow_from_directory(batch_size=batch_size,
directory='test',
seed=42,
shuffle=False, #generally, shuffle should be set to false when the image generator is used on test data
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='categorical')
Found 2487 images belonging to 3 classes.
Found 621 images belonging to 3 classes.
Found 778 images belonging to 3 classes.
TRAIN_STEPS_PER_EPOCH = np.ceil((total_train*0.8/batch_size)-1)
VAL_STEPS_PER_EPOCH = np.ceil((total_train*0.2/batch_size)-1)
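As a quick check of this step arithmetic (assuming total_train = 3108, consistent with the 2487 + 621 images found above):
import numpy as np
total_train = 3108 #2487 training + 621 validation images reported by flow_from_directory above
batch_size = 50
print(np.ceil((total_train*0.8/batch_size)-1)) #49.0, matching the 49/49 steps shown in the training log below
print(np.ceil((total_train*0.2/batch_size)-1)) #12.0 validation steps per epoch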
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow
tensorflow.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
from tensorflow.keras.layers import Dense, Activation, Dropout, BatchNormalization,Flatten
from keras.regularizers import l1
from tensorflow.keras.optimizers import SGD, Adam
from sklearn.utils import class_weight
import numpy as np
from tensorflow.python.keras.callbacks import ReduceLROnPlateau, EarlyStopping, ModelCheckpoint
from keras.metrics import AUC
l = tf.keras.layers # syntax shortcut
# Create function to define fire modules
def fire(x, squeeze, expand):
    # Squeeze: a 1x1 convolution reduces the channel count to `squeeze`
    y = l.Conv2D(filters=squeeze, kernel_size=1, padding='same', activation='relu')(x)
    # Expand: parallel 1x1 and 3x3 convolutions. Note: // is integer division, so each branch
    # gets expand//2 filters and the concatenation below has `expand` output channels.
    y1 = l.Conv2D(filters=expand//2, kernel_size=1, padding='same', activation='relu')(y)
    y3 = l.Conv2D(filters=expand//2, kernel_size=3, padding='same', activation='relu')(y)
    return tf.keras.layers.concatenate([y1, y3])
# this is to make it behave similarly to other Keras layers
def fire_module(squeeze, expand):
return lambda x: fire(x, squeeze, expand)
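As a quick shape check of the channel arithmetic (a minimal sketch reusing the fire_module defined above), fire_module(24, 48) squeezes its input down to 24 channels and expands back to 48 = 24 + 24 channels while preserving the spatial dimensions:
check_in = tf.keras.layers.Input(shape=[192, 192, 3])
check_out = fire_module(24, 48)(check_in)
print(check_out.shape) #(None, 192, 192, 48): two expand branches of 24 filters each, concatenated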
with tf.device('/device:GPU:0'):
    x = tf.keras.layers.Input(shape=[192,192, 3]) # input is 192x192 pixels RGB
    y = tf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu')(x)
    y = fire_module(24, 48)(y)
    y = fire_module(24, 48)(y)
    y = tf.keras.layers.MaxPooling2D(pool_size=2)(y)
    y = fire_module(24, 48)(y)
    y = fire_module(24, 48)(y)
    y = tf.keras.layers.MaxPooling2D(pool_size=2)(y)
    y = fire_module(24, 48)(y)
    y = fire_module(24, 48)(y)
    y = tf.keras.layers.GlobalAveragePooling2D()(y) # Takes the average over h x w for each channel and returns one scalar per channel
    y = tf.keras.layers.Dense(3, activation='softmax')(y) # Dense after GAP: parameter count = (channels in the previous layer + 1 bias) * number of output nodes
    model = tf.keras.Model(x, y)
es= EarlyStopping(monitor='val_loss', patience=15, verbose=0, mode='min')
red_lr= ReduceLROnPlateau(monitor='val_loss',patience=2,verbose=1,factor=0.20)
mc = ModelCheckpoint('best_model_6_aug.h5', monitor='val_loss',mode='min', verbose=1, save_best_only=True)
model.compile(
optimizer="adam",
loss= 'categorical_crossentropy',
metrics=['accuracy', 'AUC'])
# Fitting the CNN to the Training set
history = model.fit(
train_gen,
#steps_per_epoch=total_train // batch_size, #adjusts training process for new image batches
steps_per_epoch=TRAIN_STEPS_PER_EPOCH,
epochs=epochs,
validation_data=val_gen,
#validation_steps=total_train // batch_size,
validation_steps=VAL_STEPS_PER_EPOCH,
verbose=1,
callbacks=[mc,red_lr,es]
)
Epoch 1/40: loss: 31.3701 - accuracy: 0.3340 - auc: 0.5074 - val_loss: 1.0985 - val_accuracy: 0.3450 - val_auc: 0.4988 - lr: 0.0010 (1738s 33s/step; val_loss improved from inf to 1.09850, saving model to best_model_6_aug.h5)
Epoch 2/40: loss: 1.0983 - accuracy: 0.3402 - auc: 0.5140 - val_loss: 1.0982 - val_accuracy: 0.3433 - val_auc: 0.5158 - lr: 0.0010 (75s 2s/step; val_loss improved from 1.09850 to 1.09815, saving model to best_model_6_aug.h5)
Epoch 3/40: loss: 1.0978 - accuracy: 0.3488 - auc: 0.5210 - val_loss: 1.0974 - val_accuracy: 0.3500 - val_auc: 0.5225 - lr: 0.0010 (75s 2s/step; val_loss improved from 1.09815 to 1.09736, saving model to best_model_6_aug.h5)
Epoch 4/40: loss: 1.0977 - accuracy: 0.3443 - auc: 0.5185 - val_loss: 1.0976 - val_accuracy: 0.3483 - val_auc: 0.5163 - lr: 0.0010 (73s 1s/step; val_loss did not improve from 1.09736)
Epoch 5/40: loss: 1.0975 - accuracy: 0.3307 - auc: 0.5185 - val_loss: 1.0976 - val_accuracy: 0.3467 - val_auc: 0.5175 - lr: 0.0010 (73s 1s/step; val_loss did not improve; ReduceLROnPlateau reducing learning rate to 0.00020000000949949026)
Epoch 6/40: loss: 1.0974 - accuracy: 0.3463 - auc: 0.5193 - val_loss: 1.0975 - val_accuracy: 0.3367 - val_auc: 0.5133 - lr: 2.0000e-04 (72s 1s/step; val_loss did not improve)
Epoch 7/40: loss: 1.0973 - accuracy: 0.3463 - auc: 0.5191 - val_loss: 1.0974 - val_accuracy: 0.3450 - val_auc: 0.5175 - lr: 2.0000e-04 (73s 1s/step; val_loss did not improve; ReduceLROnPlateau reducing learning rate to 4.0000001899898055e-05)
Epoch 8/40: loss: 1.0974 - accuracy: 0.3439 - auc: 0.5172 - val_loss: 1.0979 - val_accuracy: 0.3467 - val_auc: 0.5150 - lr: 4.0000e-05 (73s 1s/step; val_loss did not improve)
Epoch 9/40: loss: 1.0973 - accuracy: 0.3471 - auc: 0.5197 - val_loss: 1.0975 - val_accuracy: 0.3483 - val_auc: 0.5183 - lr: 4.0000e-05 (73s 1s/step; val_loss did not improve; ReduceLROnPlateau reducing learning rate to 8.000000525498762e-06)
Epoch 10/40: loss: 1.0975 - accuracy: 0.3463 - auc: 0.5176 - val_loss: 1.0973 - val_accuracy: 0.3483 - val_auc: 0.5200 - lr: 8.0000e-06 (74s 2s/step; val_loss improved from 1.09736 to 1.09728, saving model to best_model_6_aug.h5)
Epoch 11/40: loss: 1.0971 - accuracy: 0.3476 - auc: 0.5207 - val_loss: 1.0971 - val_accuracy: 0.3433 - val_auc: 0.5192 - lr: 8.0000e-06 (76s 2s/step; val_loss improved from 1.09728 to 1.09707, saving model to best_model_6_aug.h5)
Epoch 12/40: loss: 1.0974 - accuracy: 0.3463 - auc: 0.5181 - val_loss: 1.0978 - val_accuracy: 0.3383 - val_auc: 0.5125 - lr: 8.0000e-06 (72s 1s/step; val_loss did not improve)
Epoch 13/40: loss: 1.0975 - accuracy: 0.3455 - auc: 0.5172 - val_loss: 1.0975 - val_accuracy: 0.3433 - val_auc: 0.5167 - lr: 8.0000e-06 (72s 1s/step; val_loss did not improve; ReduceLROnPlateau reducing learning rate to 1.6000001778593287e-06)
Epoch 14/40: loss: 1.0971 - accuracy: 0.3476 - auc: 0.5209 - val_loss: 1.0975 - val_accuracy: 0.3483 - val_auc: 0.5183 - lr: 1.6000e-06 (73s 1s/step; val_loss did not improve)
Epoch 15/40: loss: 1.0975 - accuracy: 0.3451 - auc: 0.5174 - val_loss: 1.0969 - val_accuracy: 0.3483 - val_auc: 0.5225 - lr: 1.6000e-06 (76s 2s/step; val_loss improved from 1.09707 to 1.09689, saving model to best_model_6_aug.h5)
Epoch 16/40: loss: 1.0973 - accuracy: 0.3467 - auc: 0.5195 - val_loss: 1.0972 - val_accuracy: 0.3433 - val_auc: 0.5183 - lr: 1.6000e-06 (73s 1s/step; val_loss did not improve)
Epoch 17/40: loss: 1.0974 - accuracy: 0.3455 - auc: 0.5181 - val_loss: 1.0974 - val_accuracy: 0.3483 - val_auc: 0.5192 - lr: 1.6000e-06 (72s 1s/step; val_loss did not improve; ReduceLROnPlateau reducing learning rate to 3.200000264769187e-07)
Epoch 18/40: loss: 1.0974 - accuracy: 0.3447 - auc: 0.5178 - val_loss: 1.0970 - val_accuracy: 0.3483 - val_auc: 0.5217 - lr: 3.2000e-07 (73s 1s/step; val_loss did not improve)
Epoch 19/40: loss: 1.0976 - accuracy: 0.3451 - auc: 0.5166 - val_loss: 1.0978 - val_accuracy: 0.3383 - val_auc: 0.5125 - lr: 3.2000e-07 (73s 1s/step; val_loss did not improve; ReduceLROnPlateau reducing learning rate to 6.400000529538374e-08)
Epoch 20/40: loss: 1.0973 - accuracy: 0.3463 - auc: 0.5189 - val_loss: 1.0972 - val_accuracy: 0.3450 - val_auc: 0.5192 - lr: 6.4000e-08 (73s 1s/step; val_loss did not improve)
Epoch 21/40: loss: 1.0975 - accuracy: 0.3480 - auc: 0.5187 - val_loss: 1.0977 - val_accuracy: 0.3500 - val_auc: 0.5183 - lr: 6.4000e-08 (73s 1s/step; val_loss did not improve; ReduceLROnPlateau reducing learning rate to 1.2800001059076749e-08)
Epoch 22/40: loss: 1.0972 - accuracy: 0.3459 - auc: 0.5193 - val_loss: 1.0978 - val_accuracy: 0.3433 - val_auc: 0.5142 - lr: 1.2800e-08 (73s 1s/step; val_loss did not improve)
Epoch 23/40: loss: 1.0972 - accuracy: 0.3463 - auc: 0.5197 - val_loss: 1.0971 - val_accuracy: 0.3417 - val_auc: 0.5183 - lr: 1.2800e-08 (73s 1s/step; val_loss did not improve; ReduceLROnPlateau reducing learning rate to 2.5600002118153498e-09)
Epoch 24/40: loss: 1.0974 - accuracy: 0.3455 - auc: 0.5178 - val_loss: 1.0970 - val_accuracy: 0.3467 - val_auc: 0.5208 - lr: 2.5600e-09 (72s 1s/step; val_loss did not improve)
Epoch 25/40: loss: 1.0975 - accuracy: 0.3463 - auc: 0.5176 - val_loss: 1.0974 - val_accuracy: 0.3450 - val_auc: 0.5175 - lr: 2.5600e-09 (74s 2s/step; val_loss did not improve; ReduceLROnPlateau reducing learning rate to 5.1200004236307e-10)
Epoch 26/40: loss: 1.0973 - accuracy: 0.3488 - auc: 0.5203 - val_loss: 1.0972 - val_accuracy: 0.3450 - val_auc: 0.5192 - lr: 5.1200e-10 (73s 1s/step; val_loss did not improve)
Epoch 27/40: loss: 1.0975 - accuracy: 0.3459 - auc: 0.5178 - val_loss: 1.0973 - val_accuracy: 0.3483 - val_auc: 0.5200 - lr: 5.1200e-10 (73s 1s/step; val_loss did not improve; ReduceLROnPlateau reducing learning rate to 1.0240001069306004e-10)
Epoch 28/40: loss: 1.0972 - accuracy: 0.3480 - auc: 0.5203 - val_loss: 1.0970 - val_accuracy: 0.3500 - val_auc: 0.5225 - lr: 1.0240e-10 (72s 1s/step; val_loss did not improve)
Epoch 29/40: loss: 1.0973 - accuracy: 0.3455 - auc: 0.5187 - val_loss: 1.0973 - val_accuracy: 0.3467 - val_auc: 0.5192 - lr: 1.0240e-10 (73s 1s/step; val_loss did not improve; ReduceLROnPlateau reducing learning rate to 2.0480002416167767e-11)
Epoch 30/40: loss: 1.0975 - accuracy: 0.3439 - auc: 0.5166 - val_loss: 1.0973 - val_accuracy: 0.3450 - val_auc: 0.5183 - lr: 2.0480e-11 (72s 1s/step; val_loss did not improve)
from tensorflow.keras.models import load_model
model=load_model("best_model_6_aug.h5")
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow
tensorflow.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
#Using test_generator for predictions on test data:
#Sources: https://stackoverflow.com/questions/52270177/how-to-use-predict-generator-on-new-images-keras
# https://tylerburleigh.com/blog/predicting-pneumonia-from-chest-x-rays-using-efficientnet/
test_gen.reset() #Always reset the test generator so predictions start from the first batch and the row order matches test_gen.classes.
Y_pred_test=model.predict(test_gen)
y_pred_test=np.argmax(Y_pred_test,axis=1)
16/16 [==============================] - 571s 38s/step
labels = (test_gen.class_indices)
print(labels)
{'COVID': 0, 'NORMAL': 1, 'Viral Pneumonia': 2}
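For readability, the integer predictions can be mapped back to class names by inverting the class_indices dictionary (a small optional sketch; the idx_to_class name is mine):
idx_to_class = {v: k for k, v in test_gen.class_indices.items()}
print([idx_to_class[i] for i in y_pred_test[:5]]) #class names for the first five test predictions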
from sklearn.metrics import classification_report, confusion_matrix
from mlxtend.plotting import plot_confusion_matrix
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow
tensorflow.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
y_true_test = test_gen.classes
CM = confusion_matrix(y_true_test, y_pred_test)
fig, ax = plot_confusion_matrix(conf_mat=CM , figsize=(5, 5))
plt.show()
#print(confusion_matrix(y_true, y_pred))
print('Classification Report')
target_names = ['COVID', 'NORMAL', 'Viral Pneumonia']
print(classification_report(y_true_test, y_pred_test, target_names=target_names, digits=4))
Classification Report
                  precision    recall  f1-score   support

          COVID      0.0000    0.0000    0.0000       240
         NORMAL      0.0000    0.0000    0.0000       269
Viral Pneumonia      0.3458    1.0000    0.5138       269

       accuracy                          0.3458       778
      macro avg      0.1153    0.3333    0.1713       778
   weighted avg      0.1195    0.3458    0.1777       778
/usr/local/lib/python3.7/dist-packages/sklearn/metrics/_classification.py:1318: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior. _warn_prf(average, modifier, msg_start, len(result)) [warning repeated once per affected label]
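These warnings reflect that the best saved model never predicted the COVID or NORMAL classes: every test image was assigned to Viral Pneumonia, which is why that class has recall 1.0000 and why the overall accuracy (0.3458) equals its share of the test set (269/778).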
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
from sklearn.metrics import mean_absolute_error
import pandas as pd
from math import sqrt
def model_eval_metrics(y_true, y_pred, classification="TRUE"):
    if classification=="TRUE":
        accuracy_eval = accuracy_score(y_true, y_pred)
        f1_score_eval = f1_score(y_true, y_pred, average="macro", zero_division=0)
        precision_eval = precision_score(y_true, y_pred, average="macro", zero_division=0)
        recall_eval = recall_score(y_true, y_pred, average="macro", zero_division=0)
        metricdata = {'Accuracy': [accuracy_eval], 'F1 Score': [f1_score_eval], 'Precision': [precision_eval], 'Recall': [recall_eval]}
        finalmetricdata = pd.DataFrame(metricdata, index=[''])
    else:
        mse_eval = mean_squared_error(y_true, y_pred)
        rmse_eval = sqrt(mean_squared_error(y_true, y_pred))
        mae_eval = mean_absolute_error(y_true, y_pred)
        r2_eval = r2_score(y_true, y_pred)
        metricdata = {'MSE': [mse_eval], 'RMSE': [rmse_eval], 'MAE': [mae_eval], 'R2': [r2_eval]}
        finalmetricdata = pd.DataFrame(metricdata, index=[''])
    return finalmetricdata
# get metrics
model_eval_metrics(y_true_test, y_pred_test, classification="TRUE")
Accuracy | F1 Score | Precision | Recall
---|---|---|---
0.345758 | 0.171283 | 0.115253 | 0.333333
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle
from sklearn.metrics import roc_curve, auc, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
n_classes=3
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
    fpr[i], tpr[i], _ = roc_curve(y_true_test, Y_pred_test[:,i], pos_label=i)
    roc_auc[i] = auc(fpr[i], tpr[i])
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at these points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
    mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
plt.figure()
lw = 2
plt.plot(
fpr["macro"],
tpr["macro"],
label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
color="navy",
linestyle=":",
linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {0:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {0:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Pneumonia vs Rest (area = {0:0.4f})'.format(roc_auc[2]))
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
# Zoom in view of the upper left corner.
plt.figure(2)
lw = 2
plt.plot(
fpr["macro"],
tpr["macro"],
label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
color="navy",
linestyle=":",
linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {0:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {0:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Pneumonia vs Rest (area = {0:0.4f})'.format(roc_auc[2]))
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim(0, 0.2)
plt.ylim(0.8, 1)
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
batch_size = 32
epochs = 40
IMG_HEIGHT = 192
IMG_WIDTH = 192
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow
tensorflow.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
#Below, the only augmentations I include are random variation in the brightness range and random horizontal flips. Other possible
#example augmentations that I chose not to use for this portfolio project are commented out.
train_image_generator = ImageDataGenerator(
rescale=1./255,
validation_split=0.2,
brightness_range=[0.7,1.3],
#rotation_range=5,
#width_shift_range=.15,
#height_shift_range=.15,
horizontal_flip=True,
#shear_range=0.1,
fill_mode='nearest',
#zoom_range=0.05
) # Generator for our training data
#The validation image generator should not apply augmentation. It can be tempting to reuse the same ImageDataGenerator object to
#create both the training and validation image generators, but the validation set is meant to replicate unseen data for the purpose
#of identifying good tuning parameters, so it should be left un-augmented. I therefore make sure that my validation_image_generator
#contains no augmentation parameters.
validation_image_generator = ImageDataGenerator(
rescale=1./255,
validation_split=0.2
) # Generator for our validation data
#The purpose of the test_image_generator is to replicate how the train_image_generator should perform on unseen data (from the test set).
#I thus use the same parameters for test_image_generator as I did with train_image_generator (with the exception that I do not use
#validation_split)
test_image_generator = ImageDataGenerator(
rescale=1./255,
brightness_range=[0.7,1.3],
#rotation_range=5,
#width_shift_range=.15,
#height_shift_range=.15,
horizontal_flip=True,
#shear_range=0.1,
fill_mode='nearest',
#zoom_range=0.05
) # Generator for our test data
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow
tensorflow.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
#Note: both train_gen and val_gen read from the training directory, but validation_split partitions it into a smaller
#training set and a separate validation set, so the two generators draw on disjoint subsets of the original training images.
#Choose subset = 'training'
#Set seed=42 for both train_gen and for val_gen to ensure they are randomized similarly
train_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
directory='train',
subset='training',
seed=42,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='categorical')
#Choose subset = 'validation'
val_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
directory='train',
subset='validation',
seed=42,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='categorical')
test_gen = test_image_generator.flow_from_directory(batch_size=batch_size,
directory='test',
seed=42,
shuffle=False, #generally, shuffle should be set to false when the image generator is used on test data
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='categorical')
Found 2487 images belonging to 3 classes.
Found 621 images belonging to 3 classes.
Found 778 images belonging to 3 classes.
from numpy.random import seed
seed(42)
import tensorflow
tensorflow.random.set_seed(42)
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing import image
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D, Flatten
from tensorflow.keras import backend as K
# Load the model with a new input shape, excluding the original ImageNet classification head
IMG_SHAPE = (192, 192, 3)
# Create the base model from the pre-trained InceptionV3 model
base_model = InceptionV3(input_shape=IMG_SHAPE, include_top=False, weights='imagenet')
base_model.summary() # Note the number of trainable parameters before any layers are frozen
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/inception_v3/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5
87910968/87910968 [==============================] - 5s 0us/step
Model: "inception_v3"
[Full 311-layer InceptionV3 summary omitted: input_2 (None, 192, 192, 3) through the final mixed10 concatenation (None, 4, 4, 2048)]
Total params: 21,802,784
Trainable params: 21,768,352
Non-trainable params: 34,432
# Let's take a look to see how many layers are in the base model
print("Number of layers in the base model: ", len(base_model.layers))
# Keep the first `freeze_layers_after` layers trainable and freeze everything from that index onward
freeze_layers_after=30
# Freeze all the layers from index `freeze_layers_after` to the end
for layer in base_model.layers[freeze_layers_after:]:
    layer.trainable = False
print("Number of layers frozen in the base model: ", len(base_model.layers)-freeze_layers_after)
Number of layers in the base model:  311
Number of layers frozen in the base model:  281
len(base_model.trainable_variables) #number of trainable weight tensors (not layers) remaining after freezing
18
# Add a new GAP layer and classification head on top of the partially frozen base model with the adjusted input shape
gap1 = GlobalAveragePooling2D()(base_model.layers[-1].output)
class1 = Dense(128, activation='relu')(gap1)
class1 = Dense(128, activation='relu')(class1)
output = Dense(3, activation='softmax')(class1)
# define new model
model = Model(inputs=base_model.inputs, outputs=output)
# summarize
model.summary()
Model: "model_1" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_2 (InputLayer) [(None, 192, 192, 3 0 [] )] conv2d_37 (Conv2D) (None, 95, 95, 32) 864 ['input_2[0][0]'] batch_normalization (BatchNorm (None, 95, 95, 32) 96 ['conv2d_37[0][0]'] alization) activation (Activation) (None, 95, 95, 32) 0 ['batch_normalization[0][0]'] conv2d_38 (Conv2D) (None, 93, 93, 32) 9216 ['activation[0][0]'] batch_normalization_1 (BatchNo (None, 93, 93, 32) 96 ['conv2d_38[0][0]'] rmalization) activation_1 (Activation) (None, 93, 93, 32) 0 ['batch_normalization_1[0][0]'] conv2d_39 (Conv2D) (None, 93, 93, 64) 18432 ['activation_1[0][0]'] batch_normalization_2 (BatchNo (None, 93, 93, 64) 192 ['conv2d_39[0][0]'] rmalization) activation_2 (Activation) (None, 93, 93, 64) 0 ['batch_normalization_2[0][0]'] max_pooling2d_2 (MaxPooling2D) (None, 46, 46, 64) 0 ['activation_2[0][0]'] conv2d_40 (Conv2D) (None, 46, 46, 80) 5120 ['max_pooling2d_2[0][0]'] batch_normalization_3 (BatchNo (None, 46, 46, 80) 240 ['conv2d_40[0][0]'] rmalization) activation_3 (Activation) (None, 46, 46, 80) 0 ['batch_normalization_3[0][0]'] conv2d_41 (Conv2D) (None, 44, 44, 192) 138240 ['activation_3[0][0]'] batch_normalization_4 (BatchNo (None, 44, 44, 192) 576 ['conv2d_41[0][0]'] rmalization) activation_4 (Activation) (None, 44, 44, 192) 0 ['batch_normalization_4[0][0]'] max_pooling2d_3 (MaxPooling2D) (None, 21, 21, 192) 0 ['activation_4[0][0]'] conv2d_45 (Conv2D) (None, 21, 21, 64) 12288 ['max_pooling2d_3[0][0]'] batch_normalization_8 (BatchNo (None, 21, 21, 64) 192 ['conv2d_45[0][0]'] rmalization) activation_8 (Activation) (None, 21, 21, 64) 0 ['batch_normalization_8[0][0]'] conv2d_43 (Conv2D) (None, 21, 21, 48) 9216 ['max_pooling2d_3[0][0]'] conv2d_46 (Conv2D) (None, 21, 21, 96) 55296 ['activation_8[0][0]'] batch_normalization_6 (BatchNo (None, 21, 21, 48) 144 ['conv2d_43[0][0]'] rmalization) batch_normalization_9 (BatchNo (None, 21, 21, 96) 288 ['conv2d_46[0][0]'] rmalization) activation_6 (Activation) (None, 21, 21, 48) 0 ['batch_normalization_6[0][0]'] activation_9 (Activation) (None, 21, 21, 96) 0 ['batch_normalization_9[0][0]'] average_pooling2d (AveragePool (None, 21, 21, 192) 0 ['max_pooling2d_3[0][0]'] ing2D) conv2d_42 (Conv2D) (None, 21, 21, 64) 12288 ['max_pooling2d_3[0][0]'] conv2d_44 (Conv2D) (None, 21, 21, 64) 76800 ['activation_6[0][0]'] conv2d_47 (Conv2D) (None, 21, 21, 96) 82944 ['activation_9[0][0]'] conv2d_48 (Conv2D) (None, 21, 21, 32) 6144 ['average_pooling2d[0][0]'] batch_normalization_5 (BatchNo (None, 21, 21, 64) 192 ['conv2d_42[0][0]'] rmalization) batch_normalization_7 (BatchNo (None, 21, 21, 64) 192 ['conv2d_44[0][0]'] rmalization) batch_normalization_10 (BatchN (None, 21, 21, 96) 288 ['conv2d_47[0][0]'] ormalization) batch_normalization_11 (BatchN (None, 21, 21, 32) 96 ['conv2d_48[0][0]'] ormalization) activation_5 (Activation) (None, 21, 21, 64) 0 ['batch_normalization_5[0][0]'] activation_7 (Activation) (None, 21, 21, 64) 0 ['batch_normalization_7[0][0]'] activation_10 (Activation) (None, 21, 21, 96) 0 ['batch_normalization_10[0][0]'] activation_11 (Activation) (None, 21, 21, 32) 0 ['batch_normalization_11[0][0]'] mixed0 (Concatenate) (None, 21, 21, 256) 0 ['activation_5[0][0]', 'activation_7[0][0]', 'activation_10[0][0]', 'activation_11[0][0]'] conv2d_52 (Conv2D) (None, 21, 21, 64) 16384 ['mixed0[0][0]'] 
batch_normalization_15 (BatchN (None, 21, 21, 64) 192 ['conv2d_52[0][0]'] ormalization) activation_15 (Activation) (None, 21, 21, 64) 0 ['batch_normalization_15[0][0]'] conv2d_50 (Conv2D) (None, 21, 21, 48) 12288 ['mixed0[0][0]'] conv2d_53 (Conv2D) (None, 21, 21, 96) 55296 ['activation_15[0][0]'] batch_normalization_13 (BatchN (None, 21, 21, 48) 144 ['conv2d_50[0][0]'] ormalization) batch_normalization_16 (BatchN (None, 21, 21, 96) 288 ['conv2d_53[0][0]'] ormalization) activation_13 (Activation) (None, 21, 21, 48) 0 ['batch_normalization_13[0][0]'] activation_16 (Activation) (None, 21, 21, 96) 0 ['batch_normalization_16[0][0]'] average_pooling2d_1 (AveragePo (None, 21, 21, 256) 0 ['mixed0[0][0]'] oling2D) conv2d_49 (Conv2D) (None, 21, 21, 64) 16384 ['mixed0[0][0]'] conv2d_51 (Conv2D) (None, 21, 21, 64) 76800 ['activation_13[0][0]'] conv2d_54 (Conv2D) (None, 21, 21, 96) 82944 ['activation_16[0][0]'] conv2d_55 (Conv2D) (None, 21, 21, 64) 16384 ['average_pooling2d_1[0][0]'] batch_normalization_12 (BatchN (None, 21, 21, 64) 192 ['conv2d_49[0][0]'] ormalization) batch_normalization_14 (BatchN (None, 21, 21, 64) 192 ['conv2d_51[0][0]'] ormalization) batch_normalization_17 (BatchN (None, 21, 21, 96) 288 ['conv2d_54[0][0]'] ormalization) batch_normalization_18 (BatchN (None, 21, 21, 64) 192 ['conv2d_55[0][0]'] ormalization) activation_12 (Activation) (None, 21, 21, 64) 0 ['batch_normalization_12[0][0]'] activation_14 (Activation) (None, 21, 21, 64) 0 ['batch_normalization_14[0][0]'] activation_17 (Activation) (None, 21, 21, 96) 0 ['batch_normalization_17[0][0]'] activation_18 (Activation) (None, 21, 21, 64) 0 ['batch_normalization_18[0][0]'] mixed1 (Concatenate) (None, 21, 21, 288) 0 ['activation_12[0][0]', 'activation_14[0][0]', 'activation_17[0][0]', 'activation_18[0][0]'] conv2d_59 (Conv2D) (None, 21, 21, 64) 18432 ['mixed1[0][0]'] batch_normalization_22 (BatchN (None, 21, 21, 64) 192 ['conv2d_59[0][0]'] ormalization) activation_22 (Activation) (None, 21, 21, 64) 0 ['batch_normalization_22[0][0]'] conv2d_57 (Conv2D) (None, 21, 21, 48) 13824 ['mixed1[0][0]'] conv2d_60 (Conv2D) (None, 21, 21, 96) 55296 ['activation_22[0][0]'] batch_normalization_20 (BatchN (None, 21, 21, 48) 144 ['conv2d_57[0][0]'] ormalization) batch_normalization_23 (BatchN (None, 21, 21, 96) 288 ['conv2d_60[0][0]'] ormalization) activation_20 (Activation) (None, 21, 21, 48) 0 ['batch_normalization_20[0][0]'] activation_23 (Activation) (None, 21, 21, 96) 0 ['batch_normalization_23[0][0]'] average_pooling2d_2 (AveragePo (None, 21, 21, 288) 0 ['mixed1[0][0]'] oling2D) conv2d_56 (Conv2D) (None, 21, 21, 64) 18432 ['mixed1[0][0]'] conv2d_58 (Conv2D) (None, 21, 21, 64) 76800 ['activation_20[0][0]'] conv2d_61 (Conv2D) (None, 21, 21, 96) 82944 ['activation_23[0][0]'] conv2d_62 (Conv2D) (None, 21, 21, 64) 18432 ['average_pooling2d_2[0][0]'] batch_normalization_19 (BatchN (None, 21, 21, 64) 192 ['conv2d_56[0][0]'] ormalization) batch_normalization_21 (BatchN (None, 21, 21, 64) 192 ['conv2d_58[0][0]'] ormalization) batch_normalization_24 (BatchN (None, 21, 21, 96) 288 ['conv2d_61[0][0]'] ormalization) batch_normalization_25 (BatchN (None, 21, 21, 64) 192 ['conv2d_62[0][0]'] ormalization) activation_19 (Activation) (None, 21, 21, 64) 0 ['batch_normalization_19[0][0]'] activation_21 (Activation) (None, 21, 21, 64) 0 ['batch_normalization_21[0][0]'] activation_24 (Activation) (None, 21, 21, 96) 0 ['batch_normalization_24[0][0]'] activation_25 (Activation) (None, 21, 21, 64) 0 ['batch_normalization_25[0][0]'] mixed2 (Concatenate) (None, 21, 
21, 288) 0 ['activation_19[0][0]', 'activation_21[0][0]', 'activation_24[0][0]', 'activation_25[0][0]'] conv2d_64 (Conv2D) (None, 21, 21, 64) 18432 ['mixed2[0][0]'] batch_normalization_27 (BatchN (None, 21, 21, 64) 192 ['conv2d_64[0][0]'] ormalization) activation_27 (Activation) (None, 21, 21, 64) 0 ['batch_normalization_27[0][0]'] conv2d_65 (Conv2D) (None, 21, 21, 96) 55296 ['activation_27[0][0]'] batch_normalization_28 (BatchN (None, 21, 21, 96) 288 ['conv2d_65[0][0]'] ormalization) activation_28 (Activation) (None, 21, 21, 96) 0 ['batch_normalization_28[0][0]'] conv2d_63 (Conv2D) (None, 10, 10, 384) 995328 ['mixed2[0][0]'] conv2d_66 (Conv2D) (None, 10, 10, 96) 82944 ['activation_28[0][0]'] batch_normalization_26 (BatchN (None, 10, 10, 384) 1152 ['conv2d_63[0][0]'] ormalization) batch_normalization_29 (BatchN (None, 10, 10, 96) 288 ['conv2d_66[0][0]'] ormalization) activation_26 (Activation) (None, 10, 10, 384) 0 ['batch_normalization_26[0][0]'] activation_29 (Activation) (None, 10, 10, 96) 0 ['batch_normalization_29[0][0]'] max_pooling2d_4 (MaxPooling2D) (None, 10, 10, 288) 0 ['mixed2[0][0]'] mixed3 (Concatenate) (None, 10, 10, 768) 0 ['activation_26[0][0]', 'activation_29[0][0]', 'max_pooling2d_4[0][0]'] conv2d_71 (Conv2D) (None, 10, 10, 128) 98304 ['mixed3[0][0]'] batch_normalization_34 (BatchN (None, 10, 10, 128) 384 ['conv2d_71[0][0]'] ormalization) activation_34 (Activation) (None, 10, 10, 128) 0 ['batch_normalization_34[0][0]'] conv2d_72 (Conv2D) (None, 10, 10, 128) 114688 ['activation_34[0][0]'] batch_normalization_35 (BatchN (None, 10, 10, 128) 384 ['conv2d_72[0][0]'] ormalization) activation_35 (Activation) (None, 10, 10, 128) 0 ['batch_normalization_35[0][0]'] conv2d_68 (Conv2D) (None, 10, 10, 128) 98304 ['mixed3[0][0]'] conv2d_73 (Conv2D) (None, 10, 10, 128) 114688 ['activation_35[0][0]'] batch_normalization_31 (BatchN (None, 10, 10, 128) 384 ['conv2d_68[0][0]'] ormalization) batch_normalization_36 (BatchN (None, 10, 10, 128) 384 ['conv2d_73[0][0]'] ormalization) activation_31 (Activation) (None, 10, 10, 128) 0 ['batch_normalization_31[0][0]'] activation_36 (Activation) (None, 10, 10, 128) 0 ['batch_normalization_36[0][0]'] conv2d_69 (Conv2D) (None, 10, 10, 128) 114688 ['activation_31[0][0]'] conv2d_74 (Conv2D) (None, 10, 10, 128) 114688 ['activation_36[0][0]'] batch_normalization_32 (BatchN (None, 10, 10, 128) 384 ['conv2d_69[0][0]'] ormalization) batch_normalization_37 (BatchN (None, 10, 10, 128) 384 ['conv2d_74[0][0]'] ormalization) activation_32 (Activation) (None, 10, 10, 128) 0 ['batch_normalization_32[0][0]'] activation_37 (Activation) (None, 10, 10, 128) 0 ['batch_normalization_37[0][0]'] average_pooling2d_3 (AveragePo (None, 10, 10, 768) 0 ['mixed3[0][0]'] oling2D) conv2d_67 (Conv2D) (None, 10, 10, 192) 147456 ['mixed3[0][0]'] conv2d_70 (Conv2D) (None, 10, 10, 192) 172032 ['activation_32[0][0]'] conv2d_75 (Conv2D) (None, 10, 10, 192) 172032 ['activation_37[0][0]'] conv2d_76 (Conv2D) (None, 10, 10, 192) 147456 ['average_pooling2d_3[0][0]'] batch_normalization_30 (BatchN (None, 10, 10, 192) 576 ['conv2d_67[0][0]'] ormalization) batch_normalization_33 (BatchN (None, 10, 10, 192) 576 ['conv2d_70[0][0]'] ormalization) batch_normalization_38 (BatchN (None, 10, 10, 192) 576 ['conv2d_75[0][0]'] ormalization) batch_normalization_39 (BatchN (None, 10, 10, 192) 576 ['conv2d_76[0][0]'] ormalization) activation_30 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_30[0][0]'] activation_33 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_33[0][0]'] activation_38 
(Activation) (None, 10, 10, 192) 0 ['batch_normalization_38[0][0]'] activation_39 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_39[0][0]'] mixed4 (Concatenate) (None, 10, 10, 768) 0 ['activation_30[0][0]', 'activation_33[0][0]', 'activation_38[0][0]', 'activation_39[0][0]'] conv2d_81 (Conv2D) (None, 10, 10, 160) 122880 ['mixed4[0][0]'] batch_normalization_44 (BatchN (None, 10, 10, 160) 480 ['conv2d_81[0][0]'] ormalization) activation_44 (Activation) (None, 10, 10, 160) 0 ['batch_normalization_44[0][0]'] conv2d_82 (Conv2D) (None, 10, 10, 160) 179200 ['activation_44[0][0]'] batch_normalization_45 (BatchN (None, 10, 10, 160) 480 ['conv2d_82[0][0]'] ormalization) activation_45 (Activation) (None, 10, 10, 160) 0 ['batch_normalization_45[0][0]'] conv2d_78 (Conv2D) (None, 10, 10, 160) 122880 ['mixed4[0][0]'] conv2d_83 (Conv2D) (None, 10, 10, 160) 179200 ['activation_45[0][0]'] batch_normalization_41 (BatchN (None, 10, 10, 160) 480 ['conv2d_78[0][0]'] ormalization) batch_normalization_46 (BatchN (None, 10, 10, 160) 480 ['conv2d_83[0][0]'] ormalization) activation_41 (Activation) (None, 10, 10, 160) 0 ['batch_normalization_41[0][0]'] activation_46 (Activation) (None, 10, 10, 160) 0 ['batch_normalization_46[0][0]'] conv2d_79 (Conv2D) (None, 10, 10, 160) 179200 ['activation_41[0][0]'] conv2d_84 (Conv2D) (None, 10, 10, 160) 179200 ['activation_46[0][0]'] batch_normalization_42 (BatchN (None, 10, 10, 160) 480 ['conv2d_79[0][0]'] ormalization) batch_normalization_47 (BatchN (None, 10, 10, 160) 480 ['conv2d_84[0][0]'] ormalization) activation_42 (Activation) (None, 10, 10, 160) 0 ['batch_normalization_42[0][0]'] activation_47 (Activation) (None, 10, 10, 160) 0 ['batch_normalization_47[0][0]'] average_pooling2d_4 (AveragePo (None, 10, 10, 768) 0 ['mixed4[0][0]'] oling2D) conv2d_77 (Conv2D) (None, 10, 10, 192) 147456 ['mixed4[0][0]'] conv2d_80 (Conv2D) (None, 10, 10, 192) 215040 ['activation_42[0][0]'] conv2d_85 (Conv2D) (None, 10, 10, 192) 215040 ['activation_47[0][0]'] conv2d_86 (Conv2D) (None, 10, 10, 192) 147456 ['average_pooling2d_4[0][0]'] batch_normalization_40 (BatchN (None, 10, 10, 192) 576 ['conv2d_77[0][0]'] ormalization) batch_normalization_43 (BatchN (None, 10, 10, 192) 576 ['conv2d_80[0][0]'] ormalization) batch_normalization_48 (BatchN (None, 10, 10, 192) 576 ['conv2d_85[0][0]'] ormalization) batch_normalization_49 (BatchN (None, 10, 10, 192) 576 ['conv2d_86[0][0]'] ormalization) activation_40 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_40[0][0]'] activation_43 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_43[0][0]'] activation_48 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_48[0][0]'] activation_49 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_49[0][0]'] mixed5 (Concatenate) (None, 10, 10, 768) 0 ['activation_40[0][0]', 'activation_43[0][0]', 'activation_48[0][0]', 'activation_49[0][0]'] conv2d_91 (Conv2D) (None, 10, 10, 160) 122880 ['mixed5[0][0]'] batch_normalization_54 (BatchN (None, 10, 10, 160) 480 ['conv2d_91[0][0]'] ormalization) activation_54 (Activation) (None, 10, 10, 160) 0 ['batch_normalization_54[0][0]'] conv2d_92 (Conv2D) (None, 10, 10, 160) 179200 ['activation_54[0][0]'] batch_normalization_55 (BatchN (None, 10, 10, 160) 480 ['conv2d_92[0][0]'] ormalization) activation_55 (Activation) (None, 10, 10, 160) 0 ['batch_normalization_55[0][0]'] conv2d_88 (Conv2D) (None, 10, 10, 160) 122880 ['mixed5[0][0]'] conv2d_93 (Conv2D) (None, 10, 10, 160) 179200 ['activation_55[0][0]'] batch_normalization_51 (BatchN (None, 10, 10, 160) 
480 ['conv2d_88[0][0]'] ormalization) batch_normalization_56 (BatchN (None, 10, 10, 160) 480 ['conv2d_93[0][0]'] ormalization) activation_51 (Activation) (None, 10, 10, 160) 0 ['batch_normalization_51[0][0]'] activation_56 (Activation) (None, 10, 10, 160) 0 ['batch_normalization_56[0][0]'] conv2d_89 (Conv2D) (None, 10, 10, 160) 179200 ['activation_51[0][0]'] conv2d_94 (Conv2D) (None, 10, 10, 160) 179200 ['activation_56[0][0]'] batch_normalization_52 (BatchN (None, 10, 10, 160) 480 ['conv2d_89[0][0]'] ormalization) batch_normalization_57 (BatchN (None, 10, 10, 160) 480 ['conv2d_94[0][0]'] ormalization) activation_52 (Activation) (None, 10, 10, 160) 0 ['batch_normalization_52[0][0]'] activation_57 (Activation) (None, 10, 10, 160) 0 ['batch_normalization_57[0][0]'] average_pooling2d_5 (AveragePo (None, 10, 10, 768) 0 ['mixed5[0][0]'] oling2D) conv2d_87 (Conv2D) (None, 10, 10, 192) 147456 ['mixed5[0][0]'] conv2d_90 (Conv2D) (None, 10, 10, 192) 215040 ['activation_52[0][0]'] conv2d_95 (Conv2D) (None, 10, 10, 192) 215040 ['activation_57[0][0]'] conv2d_96 (Conv2D) (None, 10, 10, 192) 147456 ['average_pooling2d_5[0][0]'] batch_normalization_50 (BatchN (None, 10, 10, 192) 576 ['conv2d_87[0][0]'] ormalization) batch_normalization_53 (BatchN (None, 10, 10, 192) 576 ['conv2d_90[0][0]'] ormalization) batch_normalization_58 (BatchN (None, 10, 10, 192) 576 ['conv2d_95[0][0]'] ormalization) batch_normalization_59 (BatchN (None, 10, 10, 192) 576 ['conv2d_96[0][0]'] ormalization) activation_50 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_50[0][0]'] activation_53 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_53[0][0]'] activation_58 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_58[0][0]'] activation_59 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_59[0][0]'] mixed6 (Concatenate) (None, 10, 10, 768) 0 ['activation_50[0][0]', 'activation_53[0][0]', 'activation_58[0][0]', 'activation_59[0][0]'] conv2d_101 (Conv2D) (None, 10, 10, 192) 147456 ['mixed6[0][0]'] batch_normalization_64 (BatchN (None, 10, 10, 192) 576 ['conv2d_101[0][0]'] ormalization) activation_64 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_64[0][0]'] conv2d_102 (Conv2D) (None, 10, 10, 192) 258048 ['activation_64[0][0]'] batch_normalization_65 (BatchN (None, 10, 10, 192) 576 ['conv2d_102[0][0]'] ormalization) activation_65 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_65[0][0]'] conv2d_98 (Conv2D) (None, 10, 10, 192) 147456 ['mixed6[0][0]'] conv2d_103 (Conv2D) (None, 10, 10, 192) 258048 ['activation_65[0][0]'] batch_normalization_61 (BatchN (None, 10, 10, 192) 576 ['conv2d_98[0][0]'] ormalization) batch_normalization_66 (BatchN (None, 10, 10, 192) 576 ['conv2d_103[0][0]'] ormalization) activation_61 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_61[0][0]'] activation_66 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_66[0][0]'] conv2d_99 (Conv2D) (None, 10, 10, 192) 258048 ['activation_61[0][0]'] conv2d_104 (Conv2D) (None, 10, 10, 192) 258048 ['activation_66[0][0]'] batch_normalization_62 (BatchN (None, 10, 10, 192) 576 ['conv2d_99[0][0]'] ormalization) batch_normalization_67 (BatchN (None, 10, 10, 192) 576 ['conv2d_104[0][0]'] ormalization) activation_62 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_62[0][0]'] activation_67 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_67[0][0]'] average_pooling2d_6 (AveragePo (None, 10, 10, 768) 0 ['mixed6[0][0]'] oling2D) conv2d_97 (Conv2D) (None, 10, 10, 192) 147456 ['mixed6[0][0]'] conv2d_100 (Conv2D) 
(None, 10, 10, 192) 258048 ['activation_62[0][0]'] conv2d_105 (Conv2D) (None, 10, 10, 192) 258048 ['activation_67[0][0]'] conv2d_106 (Conv2D) (None, 10, 10, 192) 147456 ['average_pooling2d_6[0][0]'] batch_normalization_60 (BatchN (None, 10, 10, 192) 576 ['conv2d_97[0][0]'] ormalization) batch_normalization_63 (BatchN (None, 10, 10, 192) 576 ['conv2d_100[0][0]'] ormalization) batch_normalization_68 (BatchN (None, 10, 10, 192) 576 ['conv2d_105[0][0]'] ormalization) batch_normalization_69 (BatchN (None, 10, 10, 192) 576 ['conv2d_106[0][0]'] ormalization) activation_60 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_60[0][0]'] activation_63 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_63[0][0]'] activation_68 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_68[0][0]'] activation_69 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_69[0][0]'] mixed7 (Concatenate) (None, 10, 10, 768) 0 ['activation_60[0][0]', 'activation_63[0][0]', 'activation_68[0][0]', 'activation_69[0][0]'] conv2d_109 (Conv2D) (None, 10, 10, 192) 147456 ['mixed7[0][0]'] batch_normalization_72 (BatchN (None, 10, 10, 192) 576 ['conv2d_109[0][0]'] ormalization) activation_72 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_72[0][0]'] conv2d_110 (Conv2D) (None, 10, 10, 192) 258048 ['activation_72[0][0]'] batch_normalization_73 (BatchN (None, 10, 10, 192) 576 ['conv2d_110[0][0]'] ormalization) activation_73 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_73[0][0]'] conv2d_107 (Conv2D) (None, 10, 10, 192) 147456 ['mixed7[0][0]'] conv2d_111 (Conv2D) (None, 10, 10, 192) 258048 ['activation_73[0][0]'] batch_normalization_70 (BatchN (None, 10, 10, 192) 576 ['conv2d_107[0][0]'] ormalization) batch_normalization_74 (BatchN (None, 10, 10, 192) 576 ['conv2d_111[0][0]'] ormalization) activation_70 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_70[0][0]'] activation_74 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_74[0][0]'] conv2d_108 (Conv2D) (None, 4, 4, 320) 552960 ['activation_70[0][0]'] conv2d_112 (Conv2D) (None, 4, 4, 192) 331776 ['activation_74[0][0]'] batch_normalization_71 (BatchN (None, 4, 4, 320) 960 ['conv2d_108[0][0]'] ormalization) batch_normalization_75 (BatchN (None, 4, 4, 192) 576 ['conv2d_112[0][0]'] ormalization) activation_71 (Activation) (None, 4, 4, 320) 0 ['batch_normalization_71[0][0]'] activation_75 (Activation) (None, 4, 4, 192) 0 ['batch_normalization_75[0][0]'] max_pooling2d_5 (MaxPooling2D) (None, 4, 4, 768) 0 ['mixed7[0][0]'] mixed8 (Concatenate) (None, 4, 4, 1280) 0 ['activation_71[0][0]', 'activation_75[0][0]', 'max_pooling2d_5[0][0]'] conv2d_117 (Conv2D) (None, 4, 4, 448) 573440 ['mixed8[0][0]'] batch_normalization_80 (BatchN (None, 4, 4, 448) 1344 ['conv2d_117[0][0]'] ormalization) activation_80 (Activation) (None, 4, 4, 448) 0 ['batch_normalization_80[0][0]'] conv2d_114 (Conv2D) (None, 4, 4, 384) 491520 ['mixed8[0][0]'] conv2d_118 (Conv2D) (None, 4, 4, 384) 1548288 ['activation_80[0][0]'] batch_normalization_77 (BatchN (None, 4, 4, 384) 1152 ['conv2d_114[0][0]'] ormalization) batch_normalization_81 (BatchN (None, 4, 4, 384) 1152 ['conv2d_118[0][0]'] ormalization) activation_77 (Activation) (None, 4, 4, 384) 0 ['batch_normalization_77[0][0]'] activation_81 (Activation) (None, 4, 4, 384) 0 ['batch_normalization_81[0][0]'] conv2d_115 (Conv2D) (None, 4, 4, 384) 442368 ['activation_77[0][0]'] conv2d_116 (Conv2D) (None, 4, 4, 384) 442368 ['activation_77[0][0]'] conv2d_119 (Conv2D) (None, 4, 4, 384) 442368 ['activation_81[0][0]'] conv2d_120 
(Conv2D) (None, 4, 4, 384) 442368 ['activation_81[0][0]'] average_pooling2d_7 (AveragePo (None, 4, 4, 1280) 0 ['mixed8[0][0]'] oling2D) conv2d_113 (Conv2D) (None, 4, 4, 320) 409600 ['mixed8[0][0]'] batch_normalization_78 (BatchN (None, 4, 4, 384) 1152 ['conv2d_115[0][0]'] ormalization) batch_normalization_79 (BatchN (None, 4, 4, 384) 1152 ['conv2d_116[0][0]'] ormalization) batch_normalization_82 (BatchN (None, 4, 4, 384) 1152 ['conv2d_119[0][0]'] ormalization) batch_normalization_83 (BatchN (None, 4, 4, 384) 1152 ['conv2d_120[0][0]'] ormalization) conv2d_121 (Conv2D) (None, 4, 4, 192) 245760 ['average_pooling2d_7[0][0]'] batch_normalization_76 (BatchN (None, 4, 4, 320) 960 ['conv2d_113[0][0]'] ormalization) activation_78 (Activation) (None, 4, 4, 384) 0 ['batch_normalization_78[0][0]'] activation_79 (Activation) (None, 4, 4, 384) 0 ['batch_normalization_79[0][0]'] activation_82 (Activation) (None, 4, 4, 384) 0 ['batch_normalization_82[0][0]'] activation_83 (Activation) (None, 4, 4, 384) 0 ['batch_normalization_83[0][0]'] batch_normalization_84 (BatchN (None, 4, 4, 192) 576 ['conv2d_121[0][0]'] ormalization) activation_76 (Activation) (None, 4, 4, 320) 0 ['batch_normalization_76[0][0]'] mixed9_0 (Concatenate) (None, 4, 4, 768) 0 ['activation_78[0][0]', 'activation_79[0][0]'] concatenate_6 (Concatenate) (None, 4, 4, 768) 0 ['activation_82[0][0]', 'activation_83[0][0]'] activation_84 (Activation) (None, 4, 4, 192) 0 ['batch_normalization_84[0][0]'] mixed9 (Concatenate) (None, 4, 4, 2048) 0 ['activation_76[0][0]', 'mixed9_0[0][0]', 'concatenate_6[0][0]', 'activation_84[0][0]'] conv2d_126 (Conv2D) (None, 4, 4, 448) 917504 ['mixed9[0][0]'] batch_normalization_89 (BatchN (None, 4, 4, 448) 1344 ['conv2d_126[0][0]'] ormalization) activation_89 (Activation) (None, 4, 4, 448) 0 ['batch_normalization_89[0][0]'] conv2d_123 (Conv2D) (None, 4, 4, 384) 786432 ['mixed9[0][0]'] conv2d_127 (Conv2D) (None, 4, 4, 384) 1548288 ['activation_89[0][0]'] batch_normalization_86 (BatchN (None, 4, 4, 384) 1152 ['conv2d_123[0][0]'] ormalization) batch_normalization_90 (BatchN (None, 4, 4, 384) 1152 ['conv2d_127[0][0]'] ormalization) activation_86 (Activation) (None, 4, 4, 384) 0 ['batch_normalization_86[0][0]'] activation_90 (Activation) (None, 4, 4, 384) 0 ['batch_normalization_90[0][0]'] conv2d_124 (Conv2D) (None, 4, 4, 384) 442368 ['activation_86[0][0]'] conv2d_125 (Conv2D) (None, 4, 4, 384) 442368 ['activation_86[0][0]'] conv2d_128 (Conv2D) (None, 4, 4, 384) 442368 ['activation_90[0][0]'] conv2d_129 (Conv2D) (None, 4, 4, 384) 442368 ['activation_90[0][0]'] average_pooling2d_8 (AveragePo (None, 4, 4, 2048) 0 ['mixed9[0][0]'] oling2D) conv2d_122 (Conv2D) (None, 4, 4, 320) 655360 ['mixed9[0][0]'] batch_normalization_87 (BatchN (None, 4, 4, 384) 1152 ['conv2d_124[0][0]'] ormalization) batch_normalization_88 (BatchN (None, 4, 4, 384) 1152 ['conv2d_125[0][0]'] ormalization) batch_normalization_91 (BatchN (None, 4, 4, 384) 1152 ['conv2d_128[0][0]'] ormalization) batch_normalization_92 (BatchN (None, 4, 4, 384) 1152 ['conv2d_129[0][0]'] ormalization) conv2d_130 (Conv2D) (None, 4, 4, 192) 393216 ['average_pooling2d_8[0][0]'] batch_normalization_85 (BatchN (None, 4, 4, 320) 960 ['conv2d_122[0][0]'] ormalization) activation_87 (Activation) (None, 4, 4, 384) 0 ['batch_normalization_87[0][0]'] activation_88 (Activation) (None, 4, 4, 384) 0 ['batch_normalization_88[0][0]'] activation_91 (Activation) (None, 4, 4, 384) 0 ['batch_normalization_91[0][0]'] activation_92 (Activation) (None, 4, 4, 384) 0 
['batch_normalization_92[0][0]'] batch_normalization_93 (BatchN (None, 4, 4, 192) 576 ['conv2d_130[0][0]'] ormalization) activation_85 (Activation) (None, 4, 4, 320) 0 ['batch_normalization_85[0][0]'] mixed9_1 (Concatenate) (None, 4, 4, 768) 0 ['activation_87[0][0]', 'activation_88[0][0]'] concatenate_7 (Concatenate) (None, 4, 4, 768) 0 ['activation_91[0][0]', 'activation_92[0][0]'] activation_93 (Activation) (None, 4, 4, 192) 0 ['batch_normalization_93[0][0]'] mixed10 (Concatenate) (None, 4, 4, 2048) 0 ['activation_85[0][0]', 'mixed9_1[0][0]', 'concatenate_7[0][0]', 'activation_93[0][0]'] global_average_pooling2d_1 (Gl (None, 2048) 0 ['mixed10[0][0]'] obalAveragePooling2D) dense_1 (Dense) (None, 128) 262272 ['global_average_pooling2d_1[0][0 ]'] dense_2 (Dense) (None, 128) 16512 ['dense_1[0][0]'] dense_3 (Dense) (None, 3) 387 ['dense_2[0][0]'] ================================================================================================== Total params: 22,081,955 Trainable params: 617,539 Non-trainable params: 21,464,416 __________________________________________________________________________________________________
TRAIN_STEPS_PER_EPOCH = np.ceil((total_train*0.8/batch_size)-1)
VAL_STEPS_PER_EPOCH = np.ceil((total_train*0.2/batch_size)-1)
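As a quick sanity check on this step arithmetic (a sketch I am adding here, assuming total_train = 3108, i.e. the 2,487 training + 621 validation images reported by flow_from_directory below, and batch_size = 32):
import numpy as np
total_train, batch_size = 3108, 32  #assumed values: 2487 + 621 images in the train directory
print(np.ceil((total_train * 0.8 / batch_size) - 1))  #77.0 -> matches the 77 steps/epoch in the training log below
print(np.ceil((total_train * 0.2 / batch_size) - 1))  #19.0 validation steps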
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf  #alias needed for the tf.compat.v1 calls below
tf.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
from tensorflow.keras.layers import Dense, Activation, Dropout, BatchNormalization,Flatten
from keras.regularizers import l1
from tensorflow.keras.optimizers import SGD, Adam
from sklearn.utils import class_weight
import numpy as np
from tensorflow.python.keras.callbacks import ReduceLROnPlateau, EarlyStopping, ModelCheckpoint
from keras.metrics import AUC
with tf.device('/device:GPU:0'):
    es = EarlyStopping(monitor='val_loss', patience=15, verbose=0, mode='min')
    red_lr = ReduceLROnPlateau(monitor='val_loss', patience=4, verbose=1, factor=0.05)
    mc = ModelCheckpoint('best_model_7_aug.h5', monitor='val_loss', mode='min', verbose=1, save_best_only=True)
    model.compile(
        optimizer="adam",
        loss='categorical_crossentropy',
        metrics=['accuracy', 'AUC'])
    # Fitting the CNN to the training set
    history = model.fit(
        train_gen,
        #steps_per_epoch=total_train // batch_size, #adjusts training process for new image batches
        steps_per_epoch=TRAIN_STEPS_PER_EPOCH,
        epochs=epochs,
        validation_data=val_gen,
        #validation_steps=total_train // batch_size,
        validation_steps=VAL_STEPS_PER_EPOCH,
        verbose=1,
        callbacks=[mc, red_lr, es]
    )
Epoch 1/40 - 77/77 - 58s 654ms/step - loss: 0.4148 - accuracy: 0.8395 - auc: 0.9550 - val_loss: 0.4559 - val_accuracy: 0.8174 - val_auc: 0.9432 - lr: 0.0010  [val_loss improved from inf to 0.45588; model saved]
Epoch 2/40 - 77/77 - 46s 600ms/step - loss: 0.1964 - accuracy: 0.9330 - auc: 0.9873 - val_loss: 1.0391 - val_accuracy: 0.5115 - val_auc: 0.7988 - lr: 0.0010
Epoch 3/40 - 77/77 - 50s 655ms/step - loss: 0.1696 - accuracy: 0.9466 - auc: 0.9902 - val_loss: 0.1960 - val_accuracy: 0.9342 - val_auc: 0.9879 - lr: 0.0010  [val_loss improved to 0.19603; model saved]
Epoch 4/40 - 77/77 - 47s 603ms/step - loss: 0.1703 - accuracy: 0.9405 - auc: 0.9911 - val_loss: 0.2488 - val_accuracy: 0.9145 - val_auc: 0.9814 - lr: 0.0010
Epoch 5/40 - 77/77 - 46s 598ms/step - loss: 0.1359 - accuracy: 0.9560 - auc: 0.9932 - val_loss: 0.2207 - val_accuracy: 0.9128 - val_auc: 0.9855 - lr: 0.0010
Epoch 6/40 - 77/77 - 47s 607ms/step - loss: 0.1220 - accuracy: 0.9576 - auc: 0.9952 - val_loss: 0.2167 - val_accuracy: 0.9095 - val_auc: 0.9873 - lr: 0.0010
Epoch 7/40 - 77/77 - 50s 643ms/step - loss: 0.1095 - accuracy: 0.9609 - auc: 0.9955 - val_loss: 0.2369 - val_accuracy: 0.9062 - val_auc: 0.9850 - lr: 0.0010  [ReduceLROnPlateau: lr -> 5.0000e-05]
Epoch 8/40 - 77/77 - 49s 634ms/step - loss: 0.0725 - accuracy: 0.9731 - auc: 0.9985 - val_loss: 0.1577 - val_accuracy: 0.9441 - val_auc: 0.9934 - lr: 5.0000e-05  [val_loss improved to 0.15770; model saved]
Epoch 9/40 - 77/77 - 51s 665ms/step - loss: 0.0670 - accuracy: 0.9752 - auc: 0.9984 - val_loss: 0.1360 - val_accuracy: 0.9523 - val_auc: 0.9953 - lr: 5.0000e-05  [val_loss improved to 0.13597; model saved]
Epoch 10/40 - 77/77 - 50s 648ms/step - loss: 0.0583 - accuracy: 0.9788 - auc: 0.9986 - val_loss: 0.1148 - val_accuracy: 0.9589 - val_auc: 0.9963 - lr: 5.0000e-05  [val_loss improved to 0.11484; model saved]
Epoch 11/40 - 77/77 - 47s 613ms/step - loss: 0.0481 - accuracy: 0.9813 - auc: 0.9993 - val_loss: 0.1241 - val_accuracy: 0.9507 - val_auc: 0.9958 - lr: 5.0000e-05
Epoch 12/40 - 77/77 - 46s 603ms/step - loss: 0.0448 - accuracy: 0.9837 - auc: 0.9991 - val_loss: 0.1235 - val_accuracy: 0.9539 - val_auc: 0.9960 - lr: 5.0000e-05
Epoch 13/40 - 77/77 - 49s 635ms/step - loss: 0.0362 - accuracy: 0.9878 - auc: 0.9996 - val_loss: 0.1292 - val_accuracy: 0.9556 - val_auc: 0.9959 - lr: 5.0000e-05
Epoch 14/40 - 77/77 - 46s 601ms/step - loss: 0.0412 - accuracy: 0.9882 - auc: 0.9992 - val_loss: 0.1427 - val_accuracy: 0.9474 - val_auc: 0.9935 - lr: 5.0000e-05  [ReduceLROnPlateau: lr -> 2.5000e-06]
Epoch 15/40 - 77/77 - 50s 649ms/step - loss: 0.0407 - accuracy: 0.9813 - auc: 0.9995 - val_loss: 0.1401 - val_accuracy: 0.9457 - val_auc: 0.9937 - lr: 2.5000e-06
Epoch 16/40 - 77/77 - 47s 611ms/step - loss: 0.0352 - accuracy: 0.9866 - auc: 0.9994 - val_loss: 0.1329 - val_accuracy: 0.9490 - val_auc: 0.9948 - lr: 2.5000e-06
Epoch 17/40 - 77/77 - 47s 608ms/step - loss: 0.0355 - accuracy: 0.9870 - auc: 0.9996 - val_loss: 0.1330 - val_accuracy: 0.9490 - val_auc: 0.9940 - lr: 2.5000e-06
Epoch 18/40 - 77/77 - 49s 634ms/step - loss: 0.0392 - accuracy: 0.9866 - auc: 0.9995 - val_loss: 0.1309 - val_accuracy: 0.9490 - val_auc: 0.9941 - lr: 2.5000e-06  [ReduceLROnPlateau: lr -> 1.2500e-07]
Epoch 19/40 - 77/77 - 46s 601ms/step - loss: 0.0361 - accuracy: 0.9857 - auc: 0.9996 - val_loss: 0.1261 - val_accuracy: 0.9507 - val_auc: 0.9943 - lr: 1.2500e-07
Epoch 20/40 - 77/77 - 48s 627ms/step - loss: 0.0387 - accuracy: 0.9862 - auc: 0.9995 - val_loss: 0.1317 - val_accuracy: 0.9490 - val_auc: 0.9940 - lr: 1.2500e-07
Epoch 21/40 - 77/77 - 46s 593ms/step - loss: 0.0364 - accuracy: 0.9870 - auc: 0.9993 - val_loss: 0.1307 - val_accuracy: 0.9507 - val_auc: 0.9941 - lr: 1.2500e-07
Epoch 22/40 - 77/77 - 46s 599ms/step - loss: 0.0367 - accuracy: 0.9857 - auc: 0.9996 - val_loss: 0.1310 - val_accuracy: 0.9490 - val_auc: 0.9941 - lr: 1.2500e-07  [ReduceLROnPlateau: lr -> 6.2500e-09]
Epoch 23/40 - 77/77 - 48s 626ms/step - loss: 0.0368 - accuracy: 0.9882 - auc: 0.9993 - val_loss: 0.1324 - val_accuracy: 0.9490 - val_auc: 0.9940 - lr: 6.2500e-09
Epoch 24/40 - 77/77 - 46s 596ms/step - loss: 0.0358 - accuracy: 0.9866 - auc: 0.9996 - val_loss: 0.1305 - val_accuracy: 0.9507 - val_auc: 0.9941 - lr: 6.2500e-09
Epoch 25/40 - 77/77 - 46s 599ms/step - loss: 0.0364 - accuracy: 0.9853 - auc: 0.9996 - val_loss: 0.1263 - val_accuracy: 0.9523 - val_auc: 0.9943 - lr: 6.2500e-09
[Training stopped after epoch 25: EarlyStopping with patience 15 measured from the epoch-10 best val_loss of 0.11484. Epochs without a bracketed note did not improve val_loss.]
from tensorflow.keras.models import load_model
model=load_model("best_model_7_aug.h5")
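Reloading best_model_7_aug.h5 restores the epoch-10 weights (val_loss 0.1148). As an optional check that is not part of the original workflow, evaluate() could be used to confirm the held-out metrics directly (a sketch, assuming the test_gen defined earlier):
test_gen.reset()
results = model.evaluate(test_gen, verbose=0)  #returns [loss, accuracy, auc] per the compile() metrics
print(dict(zip(model.metrics_names, results)))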
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf  #alias needed for the tf.compat.v1 calls below
tf.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
#Using test_generator for predictions on test data:
#Sources: https://stackoverflow.com/questions/52270177/how-to-use-predict-generator-on-new-images-keras
# https://tylerburleigh.com/blog/predicting-pneumonia-from-chest-x-rays-using-efficientnet/
test_gen.reset() #It's important to always reset the test generator.
Y_pred_test=model.predict(test_gen)
y_pred_test=np.argmax(Y_pred_test,axis=1)
25/25 [==============================] - 14s 509ms/step
labels = (test_gen.class_indices)
print(labels)
{'COVID': 0, 'NORMAL': 1, 'Viral Pneumonia': 2}
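Because y_pred_test holds integer class indices, the class_indices mapping above can be inverted to recover the label names. A minimal sketch (the inv_labels and pred_names names are mine):
inv_labels = {v: k for k, v in test_gen.class_indices.items()}  #{0: 'COVID', 1: 'NORMAL', 2: 'Viral Pneumonia'}
pred_names = [inv_labels[i] for i in y_pred_test]
print(pred_names[:5])  #first five predicted label names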
from sklearn.metrics import classification_report, confusion_matrix
from mlxtend.plotting import plot_confusion_matrix
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf  #alias needed for the tf.compat.v1 calls below
tf.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
y_true_test = test_gen.classes
CM = confusion_matrix(y_true_test, y_pred_test)
fig, ax = plot_confusion_matrix(conf_mat=CM , figsize=(5, 5))
plt.show()
#print(confusion_matrix(y_true, y_pred))
print('Classification Report')
target_names = ['COVID', 'NORMAL', 'Viral Pneumonia']
print(classification_report(y_true_test, y_pred_test, target_names=target_names, digits=4))
Classification Report
                 precision    recall  f1-score   support

          COVID     0.9958    0.9958    0.9958       240
         NORMAL     0.9772    0.9554    0.9662       269
Viral Pneumonia     0.9527    0.9740    0.9632       269

       accuracy                         0.9743       778
      macro avg     0.9752    0.9751    0.9751       778
   weighted avg     0.9745    0.9743    0.9743       778
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
from sklearn.metrics import mean_absolute_error
import pandas as pd
from math import sqrt
def model_eval_metrics(y_true, y_pred, classification="TRUE"):
    #Returns a one-row DataFrame of macro-averaged classification metrics,
    #or of regression metrics when classification != "TRUE".
    if classification == "TRUE":
        accuracy_eval = accuracy_score(y_true, y_pred)
        f1_score_eval = f1_score(y_true, y_pred, average="macro", zero_division=0)
        precision_eval = precision_score(y_true, y_pred, average="macro", zero_division=0)
        recall_eval = recall_score(y_true, y_pred, average="macro", zero_division=0)
        metricdata = {'Accuracy': [accuracy_eval], 'F1 Score': [f1_score_eval], 'Precision': [precision_eval], 'Recall': [recall_eval]}
        finalmetricdata = pd.DataFrame(metricdata, index=[''])
    else:
        mse_eval = mean_squared_error(y_true, y_pred)
        rmse_eval = sqrt(mean_squared_error(y_true, y_pred))
        mae_eval = mean_absolute_error(y_true, y_pred)
        r2_eval = r2_score(y_true, y_pred)
        metricdata = {'MSE': [mse_eval], 'RMSE': [rmse_eval], 'MAE': [mae_eval], 'R2': [r2_eval]}
        finalmetricdata = pd.DataFrame(metricdata, index=[''])
    return finalmetricdata
# get metrics
model_eval_metrics(y_true_test, y_pred_test, classification="TRUE")
| Accuracy | F1 Score | Precision | Recall |
|---|---|---|---|
| 0.974293 | 0.975078 | 0.975249 | 0.975067 |
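As a consistency check on these macro-averaged figures: the macro F1 is simply the unweighted mean of the three per-class F1 scores in the classification report above, e.g.
print((0.9958 + 0.9662 + 0.9632) / 3)  #~0.9751, matching the macro F1 up to rounding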
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle
from sklearn.metrics import roc_curve, auc, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
n_classes=3
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
    fpr[i], tpr[i], _ = roc_curve(y_true_test, Y_pred_test[:, i], pos_label=i)
    roc_auc[i] = auc(fpr[i], tpr[i])
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at these points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
    mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
plt.figure()
lw = 2
plt.plot(
    fpr["macro"],
    tpr["macro"],
    label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
    color="navy",
    linestyle=":",
    linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {0:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {0:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Pneumonia vs Rest (area = {0:0.4f})'.format(roc_auc[2]))
plt.figure(1)
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
# Zoom in view of the upper left corner.
plt.figure(2)
lw = 2
plt.plot(
    fpr["macro"],
    tpr["macro"],
    label="macro-average ROC curve (area = {0:0.4f})".format(roc_auc["macro"]),
    color="navy",
    linestyle=":",
    linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {0:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {0:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Pneumonia vs Rest (area = {0:0.4f})'.format(roc_auc[2]))
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim(0, 0.2)
plt.ylim(0.8, 1)
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
batch_size = 32
epochs = 40
IMG_HEIGHT = 192
IMG_WIDTH = 192
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf  #alias needed for the tf.compat.v1 calls below
tf.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
#Only two augmentations are used below: random variation in brightness and random horizontal flips.
#Other example augmentations that I chose not to use for this portfolio project are commented out.
train_image_generator = ImageDataGenerator(
    rescale=1./255,
    validation_split=0.2,
    brightness_range=[0.7, 1.3],
    #rotation_range=5,
    #width_shift_range=.15,
    #height_shift_range=.15,
    horizontal_flip=True,
    #shear_range=0.1,
    fill_mode='nearest',
    #zoom_range=0.05
)  # Generator for our training data
#The validation generator must not augment data. It is tempting to reuse the same ImageDataGenerator
#object for both the training and validation generators, but the validation set is meant to replicate
#unseen data for the purpose of identifying good tuning parameters, so I make sure that
#validation_image_generator contains no augmentation parameters.
validation_image_generator = ImageDataGenerator(
    rescale=1./255,
    validation_split=0.2
)  # Generator for our validation data
#The test_image_generator replicates how train_image_generator should perform on unseen data (the test
#set), so it uses the same parameters as train_image_generator, except that validation_split is omitted.
test_image_generator = ImageDataGenerator(
    rescale=1./255,
    brightness_range=[0.7, 1.3],
    #rotation_range=5,
    #width_shift_range=.15,
    #height_shift_range=.15,
    horizontal_flip=True,
    #shear_range=0.1,
    fill_mode='nearest',
    #zoom_range=0.05
)  # Generator for our test data
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf  #alias needed for the tf.compat.v1 calls below
tf.random.set_seed(seed_value)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
#Note: train_gen and val_gen both read from the training directory, but validation_split partitions it
#into disjoint subsets, so the two generators see different images from the original training set
#rather than the same data.
#Choose subset='training' here, and set seed=42 for both train_gen and val_gen so the random split
#is consistent between them.
train_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
                                                      directory='train',
                                                      subset='training',
                                                      seed=42,
                                                      shuffle=True,
                                                      target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                      class_mode='categorical')
#Choose subset = 'validation'
val_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
                                                         directory='train',
                                                         subset='validation',
                                                         seed=42,
                                                         shuffle=True,
                                                         target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                         class_mode='categorical')
test_gen = test_image_generator.flow_from_directory(batch_size=batch_size,
                                                    directory='test',
                                                    seed=42,
                                                    shuffle=False, #shuffle should generally be False when the generator is used on test data
                                                    target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                    class_mode='categorical')
Found 2487 images belonging to 3 classes.
Found 621 images belonging to 3 classes.
Found 778 images belonging to 3 classes.
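Before training on these generators, it can help to eyeball what the brightness/flip augmentation actually produces. A minimal sketch (assuming matplotlib.pyplot is imported as plt, as elsewhere in this notebook):
imgs, lbls = next(train_gen)  #one augmented batch of 32 images
fig, axes = plt.subplots(1, 4, figsize=(12, 3))
for ax, img in zip(axes, imgs[:4]):
    ax.imshow(img)  #pixel values are already rescaled to [0, 1]
    ax.axis('off')
plt.show()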
from numpy.random import seed
seed(42)
import tensorflow
tensorflow.random.set_seed(42)
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.models import Sequential,Model
from tensorflow.keras.preprocessing import image
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D, Flatten
from tensorflow.keras import backend as K
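The code below attaches a new classification head to base_model, which was constructed in an earlier cell of this notebook. For completeness, here is a hedged sketch of a typical frozen-InceptionV3 setup; the exact arguments and freezing scheme of the original cell may differ (the earlier summary reports 617,539 trainable parameters, more than the dense head alone, so some base layers were evidently left trainable):
#Hypothetical reconstruction of the earlier base_model cell -- an assumption, not the author's exact code:
base_model = InceptionV3(weights='imagenet',  #pretrained ImageNet weights
                         include_top=False,   #drop the original 1000-class head
                         input_shape=(IMG_HEIGHT, IMG_WIDTH, 3))
for layer in base_model.layers:
    layer.trainable = False  #freeze the pretrained backbone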
# Add new GAP layer and output layer to frozen layers of original model with adjusted input
gap1 = GlobalAveragePooling2D()(base_model.layers[-1].output)
class1 = Dense(256, activation='relu')(gap1)
class1 = Dense(256, activation='relu')(class1)
output = Dense(3, activation='softmax')(class1)
# define new model
model = Model(inputs=base_model.inputs, outputs=output)
# summarize
model.summary()
Model: "model_2" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_2 (InputLayer) [(None, 192, 192, 3 0 [] )] conv2d_37 (Conv2D) (None, 95, 95, 32) 864 ['input_2[0][0]'] batch_normalization (BatchNorm (None, 95, 95, 32) 96 ['conv2d_37[0][0]'] alization) activation (Activation) (None, 95, 95, 32) 0 ['batch_normalization[0][0]'] conv2d_38 (Conv2D) (None, 93, 93, 32) 9216 ['activation[0][0]'] batch_normalization_1 (BatchNo (None, 93, 93, 32) 96 ['conv2d_38[0][0]'] rmalization) activation_1 (Activation) (None, 93, 93, 32) 0 ['batch_normalization_1[0][0]'] conv2d_39 (Conv2D) (None, 93, 93, 64) 18432 ['activation_1[0][0]'] batch_normalization_2 (BatchNo (None, 93, 93, 64) 192 ['conv2d_39[0][0]'] rmalization) activation_2 (Activation) (None, 93, 93, 64) 0 ['batch_normalization_2[0][0]'] max_pooling2d_2 (MaxPooling2D) (None, 46, 46, 64) 0 ['activation_2[0][0]'] conv2d_40 (Conv2D) (None, 46, 46, 80) 5120 ['max_pooling2d_2[0][0]'] batch_normalization_3 (BatchNo (None, 46, 46, 80) 240 ['conv2d_40[0][0]'] rmalization) activation_3 (Activation) (None, 46, 46, 80) 0 ['batch_normalization_3[0][0]'] conv2d_41 (Conv2D) (None, 44, 44, 192) 138240 ['activation_3[0][0]'] batch_normalization_4 (BatchNo (None, 44, 44, 192) 576 ['conv2d_41[0][0]'] rmalization) activation_4 (Activation) (None, 44, 44, 192) 0 ['batch_normalization_4[0][0]'] max_pooling2d_3 (MaxPooling2D) (None, 21, 21, 192) 0 ['activation_4[0][0]'] conv2d_45 (Conv2D) (None, 21, 21, 64) 12288 ['max_pooling2d_3[0][0]'] batch_normalization_8 (BatchNo (None, 21, 21, 64) 192 ['conv2d_45[0][0]'] rmalization) activation_8 (Activation) (None, 21, 21, 64) 0 ['batch_normalization_8[0][0]'] conv2d_43 (Conv2D) (None, 21, 21, 48) 9216 ['max_pooling2d_3[0][0]'] conv2d_46 (Conv2D) (None, 21, 21, 96) 55296 ['activation_8[0][0]'] batch_normalization_6 (BatchNo (None, 21, 21, 48) 144 ['conv2d_43[0][0]'] rmalization) batch_normalization_9 (BatchNo (None, 21, 21, 96) 288 ['conv2d_46[0][0]'] rmalization) activation_6 (Activation) (None, 21, 21, 48) 0 ['batch_normalization_6[0][0]'] activation_9 (Activation) (None, 21, 21, 96) 0 ['batch_normalization_9[0][0]'] average_pooling2d (AveragePool (None, 21, 21, 192) 0 ['max_pooling2d_3[0][0]'] ing2D) conv2d_42 (Conv2D) (None, 21, 21, 64) 12288 ['max_pooling2d_3[0][0]'] conv2d_44 (Conv2D) (None, 21, 21, 64) 76800 ['activation_6[0][0]'] conv2d_47 (Conv2D) (None, 21, 21, 96) 82944 ['activation_9[0][0]'] conv2d_48 (Conv2D) (None, 21, 21, 32) 6144 ['average_pooling2d[0][0]'] batch_normalization_5 (BatchNo (None, 21, 21, 64) 192 ['conv2d_42[0][0]'] rmalization) batch_normalization_7 (BatchNo (None, 21, 21, 64) 192 ['conv2d_44[0][0]'] rmalization) batch_normalization_10 (BatchN (None, 21, 21, 96) 288 ['conv2d_47[0][0]'] ormalization) batch_normalization_11 (BatchN (None, 21, 21, 32) 96 ['conv2d_48[0][0]'] ormalization) activation_5 (Activation) (None, 21, 21, 64) 0 ['batch_normalization_5[0][0]'] activation_7 (Activation) (None, 21, 21, 64) 0 ['batch_normalization_7[0][0]'] activation_10 (Activation) (None, 21, 21, 96) 0 ['batch_normalization_10[0][0]'] activation_11 (Activation) (None, 21, 21, 32) 0 ['batch_normalization_11[0][0]'] mixed0 (Concatenate) (None, 21, 21, 256) 0 ['activation_5[0][0]', 'activation_7[0][0]', 'activation_10[0][0]', 'activation_11[0][0]'] conv2d_52 (Conv2D) (None, 21, 21, 64) 16384 ['mixed0[0][0]'] 
batch_normalization_15 (BatchN (None, 21, 21, 64) 192 ['conv2d_52[0][0]'] ormalization) activation_15 (Activation) (None, 21, 21, 64) 0 ['batch_normalization_15[0][0]'] conv2d_50 (Conv2D) (None, 21, 21, 48) 12288 ['mixed0[0][0]'] conv2d_53 (Conv2D) (None, 21, 21, 96) 55296 ['activation_15[0][0]'] batch_normalization_13 (BatchN (None, 21, 21, 48) 144 ['conv2d_50[0][0]'] ormalization) batch_normalization_16 (BatchN (None, 21, 21, 96) 288 ['conv2d_53[0][0]'] ormalization) activation_13 (Activation) (None, 21, 21, 48) 0 ['batch_normalization_13[0][0]'] activation_16 (Activation) (None, 21, 21, 96) 0 ['batch_normalization_16[0][0]'] average_pooling2d_1 (AveragePo (None, 21, 21, 256) 0 ['mixed0[0][0]'] oling2D) conv2d_49 (Conv2D) (None, 21, 21, 64) 16384 ['mixed0[0][0]'] conv2d_51 (Conv2D) (None, 21, 21, 64) 76800 ['activation_13[0][0]'] conv2d_54 (Conv2D) (None, 21, 21, 96) 82944 ['activation_16[0][0]'] conv2d_55 (Conv2D) (None, 21, 21, 64) 16384 ['average_pooling2d_1[0][0]'] batch_normalization_12 (BatchN (None, 21, 21, 64) 192 ['conv2d_49[0][0]'] ormalization) batch_normalization_14 (BatchN (None, 21, 21, 64) 192 ['conv2d_51[0][0]'] ormalization) batch_normalization_17 (BatchN (None, 21, 21, 96) 288 ['conv2d_54[0][0]'] ormalization) batch_normalization_18 (BatchN (None, 21, 21, 64) 192 ['conv2d_55[0][0]'] ormalization) activation_12 (Activation) (None, 21, 21, 64) 0 ['batch_normalization_12[0][0]'] activation_14 (Activation) (None, 21, 21, 64) 0 ['batch_normalization_14[0][0]'] activation_17 (Activation) (None, 21, 21, 96) 0 ['batch_normalization_17[0][0]'] activation_18 (Activation) (None, 21, 21, 64) 0 ['batch_normalization_18[0][0]'] mixed1 (Concatenate) (None, 21, 21, 288) 0 ['activation_12[0][0]', 'activation_14[0][0]', 'activation_17[0][0]', 'activation_18[0][0]'] conv2d_59 (Conv2D) (None, 21, 21, 64) 18432 ['mixed1[0][0]'] batch_normalization_22 (BatchN (None, 21, 21, 64) 192 ['conv2d_59[0][0]'] ormalization) activation_22 (Activation) (None, 21, 21, 64) 0 ['batch_normalization_22[0][0]'] conv2d_57 (Conv2D) (None, 21, 21, 48) 13824 ['mixed1[0][0]'] conv2d_60 (Conv2D) (None, 21, 21, 96) 55296 ['activation_22[0][0]'] batch_normalization_20 (BatchN (None, 21, 21, 48) 144 ['conv2d_57[0][0]'] ormalization) batch_normalization_23 (BatchN (None, 21, 21, 96) 288 ['conv2d_60[0][0]'] ormalization) activation_20 (Activation) (None, 21, 21, 48) 0 ['batch_normalization_20[0][0]'] activation_23 (Activation) (None, 21, 21, 96) 0 ['batch_normalization_23[0][0]'] average_pooling2d_2 (AveragePo (None, 21, 21, 288) 0 ['mixed1[0][0]'] oling2D) conv2d_56 (Conv2D) (None, 21, 21, 64) 18432 ['mixed1[0][0]'] conv2d_58 (Conv2D) (None, 21, 21, 64) 76800 ['activation_20[0][0]'] conv2d_61 (Conv2D) (None, 21, 21, 96) 82944 ['activation_23[0][0]'] conv2d_62 (Conv2D) (None, 21, 21, 64) 18432 ['average_pooling2d_2[0][0]'] batch_normalization_19 (BatchN (None, 21, 21, 64) 192 ['conv2d_56[0][0]'] ormalization) batch_normalization_21 (BatchN (None, 21, 21, 64) 192 ['conv2d_58[0][0]'] ormalization) batch_normalization_24 (BatchN (None, 21, 21, 96) 288 ['conv2d_61[0][0]'] ormalization) batch_normalization_25 (BatchN (None, 21, 21, 64) 192 ['conv2d_62[0][0]'] ormalization) activation_19 (Activation) (None, 21, 21, 64) 0 ['batch_normalization_19[0][0]'] activation_21 (Activation) (None, 21, 21, 64) 0 ['batch_normalization_21[0][0]'] activation_24 (Activation) (None, 21, 21, 96) 0 ['batch_normalization_24[0][0]'] activation_25 (Activation) (None, 21, 21, 64) 0 ['batch_normalization_25[0][0]'] mixed2 (Concatenate) (None, 21, 
21, 288) 0 ['activation_19[0][0]', 'activation_21[0][0]', 'activation_24[0][0]', 'activation_25[0][0]'] conv2d_64 (Conv2D) (None, 21, 21, 64) 18432 ['mixed2[0][0]'] batch_normalization_27 (BatchN (None, 21, 21, 64) 192 ['conv2d_64[0][0]'] ormalization) activation_27 (Activation) (None, 21, 21, 64) 0 ['batch_normalization_27[0][0]'] conv2d_65 (Conv2D) (None, 21, 21, 96) 55296 ['activation_27[0][0]'] batch_normalization_28 (BatchN (None, 21, 21, 96) 288 ['conv2d_65[0][0]'] ormalization) activation_28 (Activation) (None, 21, 21, 96) 0 ['batch_normalization_28[0][0]'] conv2d_63 (Conv2D) (None, 10, 10, 384) 995328 ['mixed2[0][0]'] conv2d_66 (Conv2D) (None, 10, 10, 96) 82944 ['activation_28[0][0]'] batch_normalization_26 (BatchN (None, 10, 10, 384) 1152 ['conv2d_63[0][0]'] ormalization) batch_normalization_29 (BatchN (None, 10, 10, 96) 288 ['conv2d_66[0][0]'] ormalization) activation_26 (Activation) (None, 10, 10, 384) 0 ['batch_normalization_26[0][0]'] activation_29 (Activation) (None, 10, 10, 96) 0 ['batch_normalization_29[0][0]'] max_pooling2d_4 (MaxPooling2D) (None, 10, 10, 288) 0 ['mixed2[0][0]'] mixed3 (Concatenate) (None, 10, 10, 768) 0 ['activation_26[0][0]', 'activation_29[0][0]', 'max_pooling2d_4[0][0]'] conv2d_71 (Conv2D) (None, 10, 10, 128) 98304 ['mixed3[0][0]'] batch_normalization_34 (BatchN (None, 10, 10, 128) 384 ['conv2d_71[0][0]'] ormalization) activation_34 (Activation) (None, 10, 10, 128) 0 ['batch_normalization_34[0][0]'] conv2d_72 (Conv2D) (None, 10, 10, 128) 114688 ['activation_34[0][0]'] batch_normalization_35 (BatchN (None, 10, 10, 128) 384 ['conv2d_72[0][0]'] ormalization) activation_35 (Activation) (None, 10, 10, 128) 0 ['batch_normalization_35[0][0]'] conv2d_68 (Conv2D) (None, 10, 10, 128) 98304 ['mixed3[0][0]'] conv2d_73 (Conv2D) (None, 10, 10, 128) 114688 ['activation_35[0][0]'] batch_normalization_31 (BatchN (None, 10, 10, 128) 384 ['conv2d_68[0][0]'] ormalization) batch_normalization_36 (BatchN (None, 10, 10, 128) 384 ['conv2d_73[0][0]'] ormalization) activation_31 (Activation) (None, 10, 10, 128) 0 ['batch_normalization_31[0][0]'] activation_36 (Activation) (None, 10, 10, 128) 0 ['batch_normalization_36[0][0]'] conv2d_69 (Conv2D) (None, 10, 10, 128) 114688 ['activation_31[0][0]'] conv2d_74 (Conv2D) (None, 10, 10, 128) 114688 ['activation_36[0][0]'] batch_normalization_32 (BatchN (None, 10, 10, 128) 384 ['conv2d_69[0][0]'] ormalization) batch_normalization_37 (BatchN (None, 10, 10, 128) 384 ['conv2d_74[0][0]'] ormalization) activation_32 (Activation) (None, 10, 10, 128) 0 ['batch_normalization_32[0][0]'] activation_37 (Activation) (None, 10, 10, 128) 0 ['batch_normalization_37[0][0]'] average_pooling2d_3 (AveragePo (None, 10, 10, 768) 0 ['mixed3[0][0]'] oling2D) conv2d_67 (Conv2D) (None, 10, 10, 192) 147456 ['mixed3[0][0]'] conv2d_70 (Conv2D) (None, 10, 10, 192) 172032 ['activation_32[0][0]'] conv2d_75 (Conv2D) (None, 10, 10, 192) 172032 ['activation_37[0][0]'] conv2d_76 (Conv2D) (None, 10, 10, 192) 147456 ['average_pooling2d_3[0][0]'] batch_normalization_30 (BatchN (None, 10, 10, 192) 576 ['conv2d_67[0][0]'] ormalization) batch_normalization_33 (BatchN (None, 10, 10, 192) 576 ['conv2d_70[0][0]'] ormalization) batch_normalization_38 (BatchN (None, 10, 10, 192) 576 ['conv2d_75[0][0]'] ormalization) batch_normalization_39 (BatchN (None, 10, 10, 192) 576 ['conv2d_76[0][0]'] ormalization) activation_30 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_30[0][0]'] activation_33 (Activation) (None, 10, 10, 192) 0 ['batch_normalization_33[0][0]'] activation_38 
... (remaining InceptionV3 layer listing truncated: the inception blocks mixed4 through mixed7 repeat the factorized 1x7/7x1 convolution + BatchNormalization + ReLU pattern at a 10x10x768 grid, mixed8 reduces to 4x4, and mixed9/mixed10 end at a 4x4x2048 feature map; the head is GlobalAveragePooling2D followed by dense_4 (Dense, 256 units, 524,544 params), dense_5 (Dense, 256 units, 65,792 params), and dense_6 (Dense, 3 units, 771 params))
==================================================================================================
Total params: 22,393,891
Trainable params: 929,475
Non-trainable params: 21,464,416
__________________________________________________________________________________________________
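The totals confirm the transfer-learning setup: 22,393,891 = 21,802,784 (InceptionV3 base, include_top=False) + 591,107 (the three dense layers), with most of the base frozen (21,464,416 non-trainable params). As a minimal sketch of how such a head can be attached, assuming ImageNet weights, a fully frozen base, ReLU in the dense layers, a softmax output, and a 200x200 input (chosen only because it reproduces the 10x10 and 4x4 grids above); this is a reconstruction, not the exact notebook code, and the reported 929,475 trainable params suggest some top base layers were also left trainable, which the summary alone doesn't pin down:

from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

IMG_SIZE = 200  # assumption; any input size yielding the 10x10/4x4 grids above would match

base = InceptionV3(weights='imagenet', include_top=False,
                   input_shape=(IMG_SIZE, IMG_SIZE, 3))
base.trainable = False  # freeze the convolutional base

x = GlobalAveragePooling2D()(base.output)
x = Dense(256, activation='relu')(x)
x = Dense(256, activation='relu')(x)
outputs = Dense(3, activation='softmax')(x)  # COVID / NORMAL / Viral Pneumonia
model = Model(inputs=base.input, outputs=outputs)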
# Steps per epoch for the 80/20 train/validation split; subtracting 1 before
# np.ceil keeps the step count at or below the batches the generator can supply.
TRAIN_STEPS_PER_EPOCH = np.ceil((total_train*0.8/batch_size)-1)
VAL_STEPS_PER_EPOCH = np.ceil((total_train*0.2/batch_size)-1)
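To make the arithmetic concrete, here is a toy check with stand-in numbers (total_train = 4625 and batch_size = 48 are illustrative, not the actual dataset counts):

import numpy as np

# Illustrative stand-ins only, not the project's real counts:
total_train_ex, batch_size_ex = 4625, 48
print(np.ceil((total_train_ex * 0.8 / batch_size_ex) - 1))  # 77.0 training steps
print(np.ceil((total_train_ex * 0.2 / batch_size_ex) - 1))  # 19.0 validation steps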
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from tensorflow.keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
from tensorflow.keras.layers import Dense, Activation, Dropout, BatchNormalization, Flatten
from tensorflow.keras.regularizers import l1
from tensorflow.keras.optimizers import SGD, Adam
from sklearn.utils import class_weight
import numpy as np
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping, ModelCheckpoint
from tensorflow.keras.metrics import AUC
with tf.device('/device:GPU:0'):
    es = EarlyStopping(monitor='val_loss', patience=15, verbose=0, mode='min')
    red_lr = ReduceLROnPlateau(monitor='val_loss', patience=2, verbose=1, factor=0.20)
    mc = ModelCheckpoint('best_model_8_aug.h5', monitor='val_loss', mode='min', verbose=1, save_best_only=True)
    model.compile(
        optimizer="adam",
        loss='categorical_crossentropy',
        metrics=['accuracy', 'AUC'])
    # Fitting the CNN to the training set
    history = model.fit(
        train_gen,
        #steps_per_epoch=total_train // batch_size, #adjusts training process for new image batches
        steps_per_epoch=TRAIN_STEPS_PER_EPOCH,
        epochs=epochs,
        validation_data=val_gen,
        #validation_steps=total_train // batch_size,
        validation_steps=VAL_STEPS_PER_EPOCH,
        verbose=1,
        callbacks=[mc, red_lr, es]
    )
Training log (condensed; per-step progress bars omitted):
Epoch 1/40: loss 0.2078, accuracy 0.9271, auc 0.9884; val_loss improved to 0.4448 (saved to best_model_8_aug.h5), val_accuracy 0.8520, val_auc 0.9524, lr 1.0e-03
Epoch 4/40: loss 0.0581, accuracy 0.9829, auc 0.9982; val_loss improved to 0.2244, val_accuracy 0.9342, val_auc 0.9868, lr 2.0e-04
Epoch 5/40: val_loss improved to 0.1502, val_accuracy 0.9490, val_auc 0.9950
Epoch 6/40: val_loss improved to 0.1229, val_accuracy 0.9572, val_auc 0.9958
Epoch 7/40: val_loss improved to 0.1196, val_accuracy 0.9589, val_auc 0.9958
Epoch 9/40: val_loss improved to 0.1138 (best), val_accuracy 0.9572, val_auc 0.9973, lr 2.0e-04
ReduceLROnPlateau cut the learning rate at epochs 3, 11, 13, 15, 17, 19, 21, and 23 (1e-03 down to 2.56e-09); val_loss never improved after epoch 9, and training ended after epoch 24 of 40, when the 15-epoch EarlyStopping patience ran out.
from tensorflow.keras.models import load_model
model=load_model("best_model_8_aug.h5")
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from tensorflow.keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
#Using test_generator for predictions on test data:
#Sources: https://stackoverflow.com/questions/52270177/how-to-use-predict-generator-on-new-images-keras
# https://tylerburleigh.com/blog/predicting-pneumonia-from-chest-x-rays-using-efficientnet/
test_gen.reset() #It's important to always reset the test generator.
Y_pred_test=model.predict(test_gen)
y_pred_test=np.argmax(Y_pred_test,axis=1)
25/25 [==============================] - 15s 529ms/step
labels = test_gen.class_indices
print(labels)
{'COVID': 0, 'NORMAL': 1, 'Viral Pneumonia': 2}
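Since y_pred_test holds numeric indices in this mapping, inverting class_indices makes individual predictions readable; a small sketch (the names idx_to_class and pred_labels are mine, not from the notebook):

# Invert class_indices (label -> index) into index -> label,
# then decode the argmax predictions into class names.
idx_to_class = {v: k for k, v in test_gen.class_indices.items()}
pred_labels = [idx_to_class[i] for i in y_pred_test]
print(pred_labels[:5])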
from sklearn.metrics import classification_report, confusion_matrix
from mlxtend.plotting import plot_confusion_matrix
seed_value= 42
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
from numpy.random import seed
seed(seed_value)
import tensorflow as tf
tf.random.set_seed(seed_value)
from tensorflow.keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
y_true_test = test_gen.classes
CM = confusion_matrix(y_true_test, y_pred_test)
fig, ax = plot_confusion_matrix(conf_mat=CM, figsize=(5, 5))
plt.show()
#print(confusion_matrix(y_true, y_pred))
print('Classification Report')
target_names = ['COVID', 'NORMAL', 'Viral Pneumonia']
print(classification_report(y_true_test, y_pred_test, target_names=target_names, digits=4))
Classification Report

                  precision    recall  f1-score   support

          COVID      0.9958    0.9958    0.9958       240
         NORMAL      0.9767    0.9331    0.9544       269
Viral Pneumonia      0.9359    0.9777    0.9564       269

       accuracy                          0.9679       778
      macro avg      0.9695    0.9689    0.9689       778
   weighted avg      0.9685    0.9679    0.9679       778
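The macro averages reported in the next cell are unweighted means of the per-class rows above; as a cross-check, they can be recomputed directly from the confusion matrix CM (a sketch, not part of the original notebook):

import numpy as np

# Per-class precision = diagonal / column sums; recall = diagonal / row sums.
precision_per_class = np.diag(CM) / CM.sum(axis=0)
recall_per_class = np.diag(CM) / CM.sum(axis=1)
print(precision_per_class.mean())  # ~0.9695, the macro precision
print(recall_per_class.mean())     # ~0.9689, the macro recall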
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
from sklearn.metrics import mean_absolute_error
import pandas as pd
from math import sqrt
def model_eval_metrics(y_true, y_pred, classification="TRUE"):
    if classification == "TRUE":
        accuracy_eval = accuracy_score(y_true, y_pred)
        f1_score_eval = f1_score(y_true, y_pred, average="macro", zero_division=0)
        precision_eval = precision_score(y_true, y_pred, average="macro", zero_division=0)
        recall_eval = recall_score(y_true, y_pred, average="macro", zero_division=0)
        metricdata = {'Accuracy': [accuracy_eval], 'F1 Score': [f1_score_eval], 'Precision': [precision_eval], 'Recall': [recall_eval]}
        finalmetricdata = pd.DataFrame(metricdata, index=[''])
    else:
        mse_eval = mean_squared_error(y_true, y_pred)
        rmse_eval = sqrt(mean_squared_error(y_true, y_pred))
        mae_eval = mean_absolute_error(y_true, y_pred)
        r2_eval = r2_score(y_true, y_pred)
        metricdata = {'MSE': [mse_eval], 'RMSE': [rmse_eval], 'MAE': [mae_eval], 'R2': [r2_eval]}
        finalmetricdata = pd.DataFrame(metricdata, index=[''])
    return finalmetricdata
# get metrics
model_eval_metrics(y_true_test, y_pred_test, classification="TRUE")
| Accuracy | F1 Score | Precision | Recall |
| --- | --- | --- | --- |
| 0.967866 | 0.968857 | 0.969477 | 0.968871 |
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc, roc_auc_score
n_classes=3
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
    fpr[i], tpr[i], _ = roc_curve(y_true_test, Y_pred_test[:, i], pos_label=i)
    roc_auc[i] = auc(fpr[i], tpr[i])
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at these points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
    mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
plt.figure()
lw = 2
plt.plot(
    fpr["macro"],
    tpr["macro"],
    label="macro-average ROC curve (area = {:0.4f})".format(roc_auc["macro"]),
    color="navy",
    linestyle=":",
    linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Pneumonia vs Rest (area = {:0.4f})'.format(roc_auc[2]))
plt.figure(1)
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()
# Zoom in view of the upper left corner.
plt.figure(2)
lw = 2
plt.plot(
    fpr["macro"],
    tpr["macro"],
    label="macro-average ROC curve (area = {:0.4f})".format(roc_auc["macro"]),
    color="navy",
    linestyle=":",
    linewidth=4,
)
plt.plot(fpr[0], tpr[0], color='purple', label='COVID vs Rest (area = {:0.4f})'.format(roc_auc[0]))
plt.plot(fpr[1], tpr[1], color='aqua', label='NORMAL vs Rest (area = {:0.4f})'.format(roc_auc[1]))
plt.plot(fpr[2], tpr[2], color='darkorange', label='Pneumonia vs Rest (area = {:0.4f})'.format(roc_auc[2]))
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim(0, 0.2)
plt.ylim(0.8, 1)
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Multiclass ROC Curve")
plt.legend(loc="lower right")
plt.show()