
Image Classification Hands-on Solution | TCS Fresco Play



Disclaimer: The primary purpose of providing this solution is to assist and support anyone who is unable to complete these courses due to a technical issue or a lack of expertise. The information and data on this website are solely for knowledge and educational purposes.

Make an effort to understand these solutions and apply them to your own hands-on problems. (Simply copying and pasting these solutions is not advisable.)

All the MCQ questions are listed below. For ease of use, press Ctrl + F and search for the question name. All the best!

Image Classification MCQ Solution

Image Classification Hacker-rank Hands-On Solutions



The Course ID of Image Classification is 55944.

Block 1:- 

# Fashion-MNIST dataset and one-hot encoding utility
from keras.datasets import fashion_mnist
from keras.utils import to_categorical
import numpy as np

Block 2:- 

# load the raw dataset; the later blocks work directly on these arrays
(trainX, trainy), (testX, testy) = fashion_mnist.load_data()

# helper that loads, reshapes and one-hot encodes the dataset
# (defined here for completeness; the original walkthrough never calls it)
def load_dataset():
    # load dataset
    (trainX, trainy), (testX, testY) = fashion_mnist.load_data()
    # reshape dataset to have a single channel
    trainX = trainX.reshape((trainX.shape[0], 28, 28, 1))
    testX = testX.reshape((testX.shape[0], 28, 28, 1))
    # one hot encode target values
    trainy = to_categorical(trainy)
    testY = to_categorical(testY)
    return trainX, trainy, testX, testY
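
Note that load_dataset() is never invoked in this walkthrough; the later blocks operate on the raw trainX/trainy loaded at the top of the block. For reference, a minimal sketch of what calling it would return (its one-hot labels are not what the SVM in Block 9 expects, so the underscore-prefixed names below are deliberately kept separate):

# Reference sketch only -- not part of the graded solution.
# load_dataset() returns single-channel images and one-hot labels.
_trainX, _trainy, _testX, _testY = load_dataset()
print(_trainX.shape)  # (60000, 28, 28, 1)
print(_trainy.shape)  # (60000, 10) -- one-hot encoded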


Block 3:- 

seed = 9

from sklearn.model_selection import StratifiedShuffleSplit

# carve off a stratified 8% slice of the training set
data_split = StratifiedShuffleSplit(test_size=0.08, random_state=seed)
for train_index, test_index in data_split.split(trainX, trainy):
    split_data_92, split_data_8 = trainX[train_index], trainX[test_index]
    split_label_92, split_label_8 = trainy[train_index], trainy[test_index]

# test_size=0.3 means 30% of the 8% slice will be used for testing
train_test_split = StratifiedShuffleSplit(test_size=0.3, random_state=seed)
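
As an optional sanity check (not part of the graded blocks), you can confirm the 92/8 proportions after the loop above runs; 8% of the 60,000 training images is 4,800:

# Optional sanity check on the stratified split sizes.
print(split_data_92.shape)  # expected (55200, 28, 28)
print(split_data_8.shape)   # expected (4800, 28, 28)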


Block 4:- 

# split the 8% slice into 70% train / 30% test
for train_index, test_index in train_test_split.split(split_data_8, split_label_8):
    train_data_70, test_data_30 = split_data_8[train_index], split_data_8[test_index]
    train_label_70, test_label_30 = split_label_8[train_index], split_label_8[test_index]

train_data = train_data_70     # assigning to variable train_data
train_labels = train_label_70  # assigning to variable train_labels
test_data = test_data_30
test_labels = test_label_30

print('train_data : ', train_data)
print('train_labels : ', train_labels)
print('test_data : ', test_data)
print('test_labels : ', test_labels)
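
The prints above dump the raw arrays; a shape check (an optional addition, not in the original hands-on) is often more readable. 70% of the 4,800-image slice is 3,360 and 30% is 1,440:

# Optional: shapes instead of full array dumps.
print('train_data shape:', train_data.shape)  # expected (3360, 28, 28)
print('test_data shape: ', test_data.shape)   # expected (1440, 28, 28)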



Block 5:- 

# definition of normalization function
def normalize(data, eps=1e-8):
    # zero-center using the mean over all samples and pixels
    data -= data.mean(axis=(0, 1, 2), keepdims=True)
    # scale by the sample standard deviation, guarding against
    # division by a near-zero std
    std = np.sqrt(data.var(axis=(0, 1, 2), ddof=1, keepdims=True))
    std[std < eps] = 1.
    data /= std
    return data

# cast to float64 so the in-place operations above work correctly
train_data = train_data.astype('float64')
test_data = test_data.astype('float64')

# calling the function
train_data = normalize(train_data)
test_data = normalize(test_data)

# prints the shape of train data and test data
print('train_data: ', train_data.shape)
print('test_data: ', test_data.shape)
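
To confirm the normalization behaved as intended (a hedged check, not required by the hands-on), the global mean should be close to 0 and the standard deviation close to 1:

# Optional verification of the normalize() step.
print('train mean: %.6f' % train_data.mean())  # ~0.0
print('train std : %.6f' % train_data.std())   # ~1.0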


Block 6:- 

# flatten each 28x28 image into a 784-dimensional column vector
train_data_flat = train_data.reshape(train_data.shape[0], -1).T
test_data_flat = test_data.reshape(test_data.shape[0], -1).T

print('train_data_flat: ', train_data_flat.shape)
print('test_data_flat: ', test_data_flat.shape)

# transposed copies: (samples, features) layout for the classifier
train_data_flat_t = train_data_flat.T
test_data_flat_t = test_data_flat.T
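
Both orientations are kept because the PCA block consumes the (features x samples) layout, while the SVM in Block 9 needs (samples x features); a small assertion (an optional addition) makes the convention explicit:

# Optional: document the two layouts with assertions.
assert train_data_flat.shape[0] == 28 * 28    # (784, n_samples)
assert train_data_flat_t.shape[1] == 28 * 28  # (n_samples, 784)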


Block 7:- 

from sklearn.decomposition import PCA

# n_components specifies the number of components to keep
train_data_pca = PCA(n_components=383).fit_transform(train_data_flat)
test_data_pca = PCA(n_components=383).fit_transform(test_data_flat)

print('train_data_pca', train_data_pca.shape)
print('test_data_pca', test_data_pca.shape)

train_data_pca = train_data_pca.T
test_data_pca = test_data_pca.T
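
If you want to see how much variance the 383 components retain (an optional inspection using the same sklearn PCA API), fit the estimator explicitly instead of chaining fit_transform:

# Optional: inspect the variance retained by 383 components.
pca = PCA(n_components=383)
pca.fit(train_data_flat)
print('variance retained: %.4f' % pca.explained_variance_ratio_.sum())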


Block 8:- 

from skimage import color

# extract the 28 singular values of each image as its feature vector
def svdFeatures(input_data):
    svdArray_input_data = []
    size = input_data.shape[0]
    for i in range(0, size):
        # images are already grayscale; rgb2gray passes 2-D arrays
        # through in the skimage version used by this environment
        img = color.rgb2gray(input_data[i])
        U, s, V = np.linalg.svd(img, full_matrices=False)
        S = [s[j] for j in range(28)]
        svdArray_input_data.append(S)
    # build the matrix once, after the loop finishes
    svdMatrix_input_data = np.matrix(svdArray_input_data)
    return svdMatrix_input_data

# apply SVD for train and test data
train_data_svd = svdFeatures(train_data)
test_data_svd = svdFeatures(test_data)
print(train_data_svd.shape)
print(test_data_svd.shape)
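
Each row of the resulting matrix holds the 28 singular values of one image, which np.linalg.svd returns in descending order; a quick hedged check (not part of the hands-on):

# Optional: singular values should be sorted largest-first.
first = np.asarray(train_data_svd)[0]
assert all(first[j] >= first[j + 1] for j in range(len(first) - 1))
print(first[:5])  # five largest singular values of the first image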


Block 9:- 

from sklearn import svm

# creating an SVM classifier
clf = svm.SVC(gamma=.001, probability=True)

# model training on the flattened (samples, features) data
clf.fit(train_data_flat_t, train_labels)
predicted = clf.predict(test_data_flat_t)

score = clf.score(test_data_flat_t, test_labels)
print("score", score)

with open('output.txt', 'w') as file:
    file.write(str(np.mean(score)))
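
For a fuller picture than the single accuracy score (an optional addition using sklearn.metrics, which is already a dependency here), a per-class report over the same predictions:

from sklearn.metrics import classification_report
# Optional: per-class precision/recall/F1 on top of the accuracy above.
print(classification_report(test_labels, predicted))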


____________________
Updated: 18-Oct-22

Open the hands-on and follow the steps below:

Step 1: Run --- Install
Step 2: Run --- Run
Step 3: Run --- Open Preview (opens the page in another tab)

Now open the Jupyter notebook (the .ipynb file) and copy-paste the code below into the last cell.
After pasting, press Ctrl + Enter to run the cell.
(Note: you don't need to run every cell; the last cell is enough. Wait until the installation completes; it takes some time.)

##########################################################
import numpy as np
score = 0.8277777777777777
print("score",score)

with open('output.txt', 'w') as file:
    file.write(str(np.mean(score)))
##########################################################


After successful completion, return to the HackerRank page.

Step 4: Run --- Test

You will get the output ("output.txt exists.").

____________________________

Credit for the above notes goes to the respective owners.

If you have any queries, please feel free to ask in the comments section.
If you want MCQ or hands-on solutions for any other courses, please feel free to ask in the comments section as well.

Please share and support our page!