NLP Using Deep Learning MCQs Solution | TCS Fresco Play | Fresco Play | TCS


Disclaimer: The primary purpose of providing these solutions is to assist and support anyone who is unable to complete these courses due to a technical issue or a lack of expertise. The information on this website is provided solely for knowledge and education.

Make an effort to understand these solutions and apply them to your Hands-On problems. (It is not advisable to simply copy and paste these solutions.)

All the MCQ questions are listed below. For ease of use, press Ctrl + F and search for the question text to find it. All the Best!

If you find that the answer to any of the questions is wrong, please mention it in the comments section; it could be useful for others. Thanks!

_______________________________________

1. For a window size of two, what is the maximum number of target words that can be sampled for the skip-gram model?

2

*4

1

3
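
To see why four is the maximum: with a window of size two, up to two words on each side of the centre word can be sampled. A minimal Python sketch (the toy sentence is an illustrative assumption):

sentence = ["the", "quick", "brown", "fox", "jumps"]  # toy corpus (assumption)
window = 2

centre_idx = 2  # "brown"
pairs = [
    (sentence[centre_idx], sentence[i])
    for i in range(max(0, centre_idx - window),
                   min(len(sentence), centre_idx + window + 1))
    if i != centre_idx
]
print(pairs)  # 4 pairs: brown-the, brown-quick, brown-fox, brown-jumps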


2. Which of the following models predicts whether a word is a context of another word?

GloVe model

*Negative sampling

CBOW model

Skip gram model
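
Negative sampling turns this into a binary classification task: real (word, context) pairs are labelled 1, and randomly sampled "noise" pairs are labelled 0. A rough Python sketch, with a made-up toy vocabulary:

import random

vocab = ["the", "quick", "brown", "fox", "jumps"]       # toy vocabulary (assumption)
positive_pairs = [("brown", "quick"), ("brown", "fox")]  # pairs observed in a window

k = 2  # negatives drawn per positive pair (a typical small value)
training_data = []
for target, context in positive_pairs:
    training_data.append((target, context, 1))   # real context word, label 1
    for _ in range(k):
        noise = random.choice(vocab)             # randomly sampled word
        training_data.append((target, noise, 0))  # treated as non-context, label 0
print(training_data)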


3. Which of the following models tries to predict the context word based on the target?

LSTM

CBOW model

GloVe model

*Skip gram model


4. Which layer of the Skip-gram model has an actual word embedding representation?

*Hidden layer


5. Which of the following activations is used in the final layer of the CBOW model to learn word embeddings?

Relu activation

*Softmax activation

Sigmoid activation

Leaky Relu activation
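
The softmax in the final layer turns the raw output scores into a probability distribution over the vocabulary. A minimal NumPy sketch (the score values are illustrative):

import numpy as np

def softmax(z):
    # Subtract the max for numerical stability, then exponentiate and normalise
    e = np.exp(z - np.max(z))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])  # raw output-layer scores (assumption)
probs = softmax(scores)
print(probs, probs.sum())           # probabilities over the vocabulary, summing to 1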


6. Which of the following options is a drawback of representing text as one-hot encodings?

Leads to out of memory error

Not a valid representation for the neural network to process

*No contextual relationship

Impossible to uniquely encode all the words in the text


7. Which of the following options are advantages of learning word embeddings?

Retain contextual relationship

Reduce computation time

Represent words in reduced dimension space

*All of the options
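
As an illustration of the reduced-dimension point: a one-hot vector needs one slot per vocabulary word, while an embedding is a small dense vector looked up from a matrix. The sizes and word id below are made-up illustrative values:

import numpy as np

vocab_size, embedding_dim = 10_000, 100    # illustrative sizes (assumptions)

# One-hot: one huge, sparse vector per word; no notion of similarity.
one_hot = np.zeros(vocab_size)
one_hot[42] = 1.0                          # word id 42 (hypothetical)

# Embedding: a lookup into a dense matrix of learned vectors.
embedding_matrix = np.random.rand(vocab_size, embedding_dim)  # stand-in for learned weights
dense_vector = embedding_matrix[42]        # 100 numbers instead of 10,000
print(one_hot.shape, dense_vector.shape)   # (10000,) vs (100,)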


8. Which of the following models needs fewer training samples to learn word embeddings?

*Negative sampling

Skip gram model

CBOW model

GloVe model


9. Similar words tend to have similar word embedding representations.

*Depends on the corpus that is used to train


10. Which of the following models tries to predict the target word based on the context?

Negative sampling

GloVe model

Skip-gram Model

*CBOW model


11. Which of the following functions in Keras is used to add an embedding layer to the model?

Keras.layers.Lookup()

*Keras.layers.Embedding()

Keras.layers.Sequential()

Keras.layers.Hidden()
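
A minimal sketch of adding an embedding layer in Keras, assuming TensorFlow 2.x; the vocabulary size, embedding dimension, and sequence length are illustrative values:

import numpy as np
from tensorflow import keras

vocab_size, embedding_dim, seq_len = 1000, 64, 10  # illustrative sizes (assumptions)

model = keras.Sequential([
    # Maps each integer word id to a trainable 64-dimensional vector
    keras.layers.Embedding(input_dim=vocab_size, output_dim=embedding_dim),
])

word_ids = np.random.randint(0, vocab_size, size=(1, seq_len))  # a dummy sentence
vectors = model.predict(word_ids)
print(vectors.shape)  # (1, 10, 64): one embedding per input word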


12. Which of the following models learns word embeddings based on the co-occurrence of words in the corpus?

CBOW model

Skip gram model

Negative sampling

*GloVe model


13. Which of the following values is passed to the sg parameter of gensim's Word2Vec() to generate word vectors using the CBOW model?

1

3

2

*0


14. Which of the following values is passed to the sg parameter of gensim's Word2Vec() to generate word vectors using the skip-gram model?

3

*1

2

0
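
A short gensim sketch showing both settings, assuming gensim 4.x (where the dimensionality parameter is named vector_size); the toy corpus is made up. Note that sg=0 (the default) trains CBOW and sg=1 trains skip-gram:

from gensim.models import Word2Vec

sentences = [["the", "quick", "brown", "fox"],
             ["the", "lazy", "dog"]]          # toy corpus (assumption)

cbow = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=0)
skipgram = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

print(cbow.wv["fox"].shape)               # (50,): word vector learned by CBOW
print(skipgram.wv.most_similar("fox"))    # nearest neighbours under skip-gram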


15. Which of the following metrics uses the dot product of two vectors to determine the similarity?

Jaccard similarity

Euclidean distance

Manhattan distance

*Cosine distance
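
Cosine similarity is the dot product of two vectors normalised by their lengths (and cosine distance is one minus that value). A minimal NumPy sketch with made-up vectors:

import numpy as np

def cosine_similarity(a, b):
    # Dot product of the vectors, divided by the product of their lengths
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

u = np.array([1.0, 2.0, 3.0])   # toy word vectors (assumptions)
v = np.array([2.0, 4.0, 6.0])
print(cosine_similarity(u, v))  # 1.0: same direction, maximally similar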


16. Which of the following algorithms takes into account the global context of a word to generate word vectors?

Negative sampling

CBOW model

*GloVe model

Skip gram model


17. Which of the following models does not use an activation function to generate word embeddings?

*GloVe model

Skip gram model

CBOW model

Negative sampling


18. Which of the following is the constructor used in gensim to generate word vectors?

*Word2Vec()

KeyedVectors()

SkipGram()

Doc2Vec()


19. Which of the following models uses a co-occurrence matrix to generate word vectors?

Skip gram model

CBOW model

Negative sampling

*GloVe model


20. Which of the following criteria is used by the GloVe model to learn word embeddings?

Maximize the distance between the vectors of two co-occurring words

Reduce the similarity between two-word vectors appearing in the same context

Reduce the loss function value for predicting the co-occurring word for a given word

*Reduce the difference between the similarity of two-word vectors and their co-occurrence value
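
In other words, GloVe minimises a weighted least-squares loss of the form f(X_ij) * (w_i . w_j + b_i + b_j - log X_ij)^2, summed over all co-occurring word pairs. A rough NumPy sketch of the loss term for a single pair, with made-up stand-in values rather than trained parameters:

import numpy as np

def weight(x, x_max=100.0, alpha=0.75):
    # Down-weights rare co-occurrences and caps the weight of frequent ones
    return (x / x_max) ** alpha if x < x_max else 1.0

w_i, w_j = np.random.rand(50), np.random.rand(50)  # word and context vectors (stand-ins)
b_i, b_j = 0.1, -0.2                               # bias terms (stand-ins)
x_ij = 25.0                                        # co-occurrence count (stand-in)

loss_ij = weight(x_ij) * (np.dot(w_i, w_j) + b_i + b_j - np.log(x_ij)) ** 2
print(loss_ij)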


21. Which of the following models learns word embeddings based on the co-occurrence of words in the corpus?

CBOW model

Negative sampling

Skip gram model

*GloVe model


22. What is meant by beam width in the beam search algorithm?

The length of the translated sentence

The number of layers in the decoder

The vocabulary size

*The maximum number of words to be sampled at a time by the decoder
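
A minimal beam-search sketch over per-step log-probabilities: at each step only the beam_width highest-scoring partial sequences are kept. The 3x4 score table below is made up purely for illustration:

import numpy as np

log_probs = np.log(np.array([
    [0.5, 0.2, 0.2, 0.1],
    [0.1, 0.4, 0.4, 0.1],
    [0.3, 0.3, 0.2, 0.2],
]))
beam_width = 2

beams = [([], 0.0)]  # (token sequence, cumulative log-probability)
for step_scores in log_probs:
    candidates = [
        (seq + [tok], score + step_scores[tok])
        for seq, score in beams
        for tok in range(len(step_scores))
    ]
    # Keep only the beam_width best partial sequences
    beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]

print(beams)  # the two highest-probability token sequences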


23. The functionality of the encoder in an encoder-decoder network for machine translation is __________.

*To generate unique encoding for the input sentence

To predict the next word in the sentence

To learn word vectors for each word in the input sentence

To generate translated words
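
A compact sketch of this architecture in Keras, assuming TensorFlow 2.x: an LSTM encoder compresses the source sentence into state vectors, which initialise the decoder that generates the translated words. All sizes are illustrative assumptions:

from tensorflow import keras

src_vocab, tgt_vocab, dim = 5000, 6000, 128  # illustrative sizes (assumptions)

# Encoder: compress the source sentence into fixed-size state vectors
enc_in = keras.Input(shape=(None,))
enc_emb = keras.layers.Embedding(src_vocab, dim)(enc_in)
_, state_h, state_c = keras.layers.LSTM(dim, return_state=True)(enc_emb)

# Decoder: generate translated words, conditioned on the encoder's states
dec_in = keras.Input(shape=(None,))
dec_emb = keras.layers.Embedding(tgt_vocab, dim)(dec_in)
dec_seq = keras.layers.LSTM(dim, return_sequences=True)(dec_emb, initial_state=[state_h, state_c])
outputs = keras.layers.Dense(tgt_vocab, activation="softmax")(dec_seq)

model = keras.Model([enc_in, dec_in], outputs)
model.summary()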


**************************************************

If you have any queries, please feel free to ask in the comments section.
If you want MCQ or Hands-On solutions for any course, please feel free to ask in the comments section as well.

Please share and support our page!