My Brain Cells

Easiest (and best) learning materials for anyone with a curiosity for machine learning, artificial intelligence, deep learning, programming, and other fun life hacks.

Digits Recognition using SVM

What is SVM?

SVM is short for Support Vector Machine, a supervised learning model used for classification and regression analysis. It was originally developed at AT&T Bell Labs. An SVM maps training examples to points in space so as to maximise the width of the gap between the two categories. When new examples come into the picture, they are mapped into the same space and predicted to belong to a category based on which side of the gap they fall on. SVMs can also perform non-linear classification efficiently using what is called the kernel trick, which implicitly maps the inputs into high-dimensional feature spaces.
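As a concrete illustration (a minimal sketch, not taken from the original post), the kernel trick lets an SVM separate data that is not linearly separable in its original space. Here scikit-learn's make_circles generates two concentric rings, which an RBF-kernel SVC separates easily while a linear kernel cannot:

from sklearn.datasets import make_circles
from sklearn.svm import SVC

# two concentric rings: no straight line can separate them in 2-D
X, y = make_circles(n_samples=300, factor=0.3, noise=0.05, random_state=0)

# the linear kernel stays near chance level, the RBF kernel fits the rings
SVC(kernel='linear').fit(X, y).score(X, y)   # roughly 0.5
SVC(kernel='rbf').fit(X, y).score(X, y)      # close to 1.0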

Why do we use SVM for digit recognition?

We have different types of kernels in the SVM, such as:

  • Linear
  • RBF – Radial Basis Function
  • Polynomial Kernel

https://miro.medium.com/max/1922/1*Ha7EfcfB5mY2RIKsXaTRkA.png

These kernels are what make the SVM algorithm well suited to classifying and recognising digits quickly: the choice of kernel lets it handle both linearly and non-linearly separable data, in classification as well as regression analysis. A short sketch of how each kernel is selected in scikit-learn follows.
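In scikit-learn the kernel is simply a constructor argument of SVC, so the three kernels listed above can be set up as follows (an illustrative sketch; the parameter values are defaults or examples, not tuned):

from sklearn.svm import SVC

linear_svm = SVC(kernel='linear', C=1)            # linear decision boundary
rbf_svm = SVC(kernel='rbf', C=1, gamma='scale')   # Radial Basis Function kernel
poly_svm = SVC(kernel='poly', degree=3, C=1)      # polynomial kernel of degree 3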

Code in Python

We take the data from the sklearn datasets module. The image data is always converted into a NumPy array, where each pixel is represented as a number (commonly 0 to 255 for 8-bit greyscale images; the sklearn digits use intensities from 0 to 16), and each 2-D image is then flattened into a 1-D feature vector.
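For example, each digit image is an 8x8 grid of pixel intensities, and its flattened version is the same 64 values laid out in a single row (a quick self-contained check, assuming the standard load_digits layout):

import numpy as np
from sklearn import datasets

digits = datasets.load_digits()
digits.images[0].shape                                      # (8, 8) pixel grid
digits.data[0].shape                                        # (64,) flattened feature vector
np.array_equal(digits.images[0].ravel(), digits.data[0])    # True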

from sklearn import datasets
digits = datasets.load_digits()
digits
digits.target.size
type(digits)
# Bunch : dictionary-like object from sklearn
# {'key': 'value'}
digits.data[0]
# data is 1 Dimension
# images is 2 Dimension
# data is flattened version of images
import matplotlib.pyplot as plt
plt.imshow(digits.images[457],cmap='gray')
plt.show()
digits.target[457]
#5
# DataFrame containing : 
# Rows: 1797 
# Columns : 64 input + 1 output = 65 

import pandas as pd
df = pd.DataFrame(digits.data)
df['Target'] = digits.target
df
x = digits.data
y = digits.target
import numpy as np
np.unique(y,return_counts=True)
# splitting the data
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size = 0.25,random_state=0,stratify=y)
print(x_train.shape)
print(x_test.shape)
np.unique(y_train,return_counts=True)
np.unique(y_test,return_counts=True)
# Model
from sklearn.svm import SVC
model = SVC(C=10)   # RBF kernel by default; C is the regularisation parameter
model.fit(x_train,y_train)
y_pred = model.predict(x_test)
y_pred
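# Quick sanity check (added for illustration, not part of the original post):
# display the first test sample together with its predicted and true labels.
plt.imshow(x_test[0].reshape(8, 8), cmap='gray')
plt.title(f"predicted: {y_pred[0]}, actual: {y_test[0]}")
plt.show()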
# Evaluation
from sklearn.metrics import accuracy_score,confusion_matrix,classification_report
confusion_matrix(y_test,y_pred)   # rows: true labels, columns: predicted labels
accuracy_score(y_test,y_pred)
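# classification_report is imported above but never called; printing it
# (a small addition) gives per-class precision, recall, and F1 for the ten digits.
print(classification_report(y_test, y_pred))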
# finding the best parameters
from sklearn.model_selection import GridSearchCV
svc = SVC()
parameters = {
    'kernel':['linear','rbf'],
    'C':[0.1,1,10,100]
}
cv = GridSearchCV(svc, parameters, cv=5)
cv.fit(x_train,y_train)
cv.best_params_
OUTPUT: {'C': 10, 'kernel': 'rbf'}
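With the best parameters found, the refitted estimator can be scored on the held-out test set (a short follow-up sketch; GridSearchCV refits the best model on the full training set by default):

cv.best_score_                               # mean cross-validated accuracy for the best parameters
cv.best_estimator_.score(x_test,y_test)      # accuracy of the refitted model on the test set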

Anthony
