Image classification using Tensorflow and CNN | Basic Deep Learning

It is a deep learning project for beginners: how to classify images using a CNN. This Python code is an example of image classification with Convolutional Neural Networks (CNNs) on the CIFAR-10 dataset, which consists of 60,000 32x32 color images in 10 classes, with 6,000 images per class.
Let's code step-by-step:
Data Loading and Exploration: The code starts by importing the necessary libraries, including TensorFlow, the Keras API, NumPy for numerical operations, and matplotlib for visualization.
The shapes of the training and test sets are printed to check the number of samples and dimensions of the images.
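As a sketch, the loading and exploration step can be done through the Keras datasets API (the variable names X_train, y_train, X_test, y_test match those used in the rest of this description):

```python
from tensorflow import keras

# Load CIFAR-10: 50,000 training and 10,000 test images, each 32x32 RGB
(X_train, y_train), (X_test, y_test) = keras.datasets.cifar10.load_data()

# Labels arrive as a column vector of shape (n, 1); flatten for convenience
y_train = y_train.reshape(-1)
y_test = y_test.reshape(-1)

# Normalize pixel values from [0, 255] to [0, 1]
X_train, X_test = X_train / 255.0, X_test / 255.0

# Check the number of samples and the dimensions of the images
print(X_train.shape, y_train.shape)  # (50000, 32, 32, 3) (50000,)
print(X_test.shape, y_test.shape)    # (10000, 32, 32, 3) (10000,)
```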
The first neural network (ann) is built using Keras' Sequential API. It starts with a Flatten layer that converts the 32x32x3 input images into a flat vector, followed by two dense (fully connected) layers with 3000 and 1000 units, respectively. The activation function used is ReLU (Rectified Linear Unit), which introduces non-linearity into the network. The final dense layer has 10 units (equal to the number of classes) and uses the softmax activation function to produce probabilities for each class. The model is compiled with the stochastic gradient descent (SGD) optimizer, the sparse categorical cross-entropy loss function, and accuracy as the evaluation metric. The model is then trained on the training set (X_train and y_train) for 5 epochs.
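A minimal sketch of that ann model. To keep the example self-contained it trains for one epoch on a small random batch; the X_demo and y_demo arrays are placeholders, not real data, and on CIFAR-10 you would call ann.fit(X_train, y_train, epochs=5) instead:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

ann = keras.Sequential([
    keras.Input(shape=(32, 32, 3)),
    layers.Flatten(),                        # 32x32x3 image -> 3072-element vector
    layers.Dense(3000, activation='relu'),
    layers.Dense(1000, activation='relu'),
    layers.Dense(10, activation='softmax'),  # one probability per CIFAR-10 class
])

ann.compile(optimizer='SGD',
            loss='sparse_categorical_crossentropy',
            metrics=['accuracy'])

# Placeholder data standing in for (X_train, y_train)
X_demo = np.random.rand(64, 32, 32, 3).astype('float32')
y_demo = np.random.randint(0, 10, size=(64,))
ann.fit(X_demo, y_demo, epochs=1, verbose=0)

probs = ann.predict(X_demo[:1], verbose=0)
print(probs.shape)  # (1, 10)
```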
The trained model is evaluated on the test set (X_test and y_test) using classification_report from scikit-learn. The second neural network (cnn) is constructed for image classification using a CNN architecture. It starts with a Conv2D layer with 32 filters (kernels) of size 3x3 and a ReLU activation function. A MaxPooling2D layer with a pool size of 2x2 is added to downsample the feature maps. The process is repeated with another Conv2D layer with 64 filters and a ReLU activation, followed by another MaxPooling2D layer. The feature maps are then flattened into a 1D vector using the Flatten layer, and two dense layers with 64 and 10 units, respectively, are added with ReLU and softmax activations for classification. The model is compiled with the Adam optimizer, sparse categorical cross-entropy loss, and accuracy as the evaluation metric.
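A sketch of that cnn architecture together with the scikit-learn evaluation step. As above, X_eval and y_eval are random placeholders standing in for X_test and y_test, so the printed report is illustrative only:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.metrics import classification_report

cnn = keras.Sequential([
    keras.Input(shape=(32, 32, 3)),
    layers.Conv2D(32, (3, 3), activation='relu'),  # 32 filters of size 3x3
    layers.MaxPooling2D((2, 2)),                   # downsample the feature maps
    layers.Conv2D(64, (3, 3), activation='relu'),  # 64 filters of size 3x3
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                              # feature maps -> 1D vector
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax'),
])

cnn.compile(optimizer='adam',
            loss='sparse_categorical_crossentropy',
            metrics=['accuracy'])

# Placeholder evaluation data standing in for (X_test, y_test)
X_eval = np.random.rand(32, 32, 32, 3).astype('float32')
y_eval = np.random.randint(0, 10, size=(32,))

# Convert per-class probabilities to predicted class indices
y_pred = cnn.predict(X_eval, verbose=0).argmax(axis=1)
print(classification_report(y_eval, y_pred, zero_division=0))
```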
The CNN model (cnn) is trained on the normalized training set (X_train and y_train) for 10 epochs.
A sample image from the test set is visualized, together with its true class and predicted class, using a plot_sample() helper function.
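plot_sample() is not a built-in; a minimal version might look like the following, assuming the standard CIFAR-10 class names in their label order. The X_fake and y_fake arrays are random placeholders for X_test and y_test:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so the script runs headless
import matplotlib.pyplot as plt

# Standard CIFAR-10 class names, index-aligned with the integer labels 0-9
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck']

def plot_sample(X, y, index):
    """Show image X[index] with its class name as the title."""
    plt.figure(figsize=(2, 2))
    plt.imshow(X[index])
    plt.title(classes[int(y[index])])
    plt.axis('off')
    plt.savefig('sample.png')  # or plt.show() in an interactive session

# Placeholder batch standing in for the test set
X_fake = np.random.rand(10, 32, 32, 3)
y_fake = np.random.randint(0, 10, size=(10,))

plot_sample(X_fake, y_fake, 6)       # the seventh sample is at index 6
print(classes[int(y_fake[6])])       # class name for that sample's label
```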
Lastly, it prints the class name for the predicted class of the seventh sample in the test set.
Overall, this code demonstrates how to build, train, and evaluate two different neural network architectures for image classification: a basic neural network and a convolutional neural network.
The CNN achieves better performance than the basic neural network due to its ability to learn hierarchical features from images.
Contact Me:
Request Videos:
AI