**NPTEL Introduction to Machine Learning Assignment 2 Answers 2023.** This article will help you with the answers to the **National Programme on Technology Enhanced Learning (NPTEL)** course "**Introduction to Machine Learning**", Assignment 2. Below you can find the answers for Introduction to Machine Learning Assignment 2 (2023).

## NPTEL Introduction to Machine Learning Assignment 2 Answers 2023:-

**Q1.** Given a training data set of 10,000 instances, with each input instance having 17 dimensions and each output instance having 2 dimensions, what are the dimensions of the design matrix used in applying linear regression to this data?

**Answer:-** **c**
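As a quick check of the dimensions, one common convention (used in many lecture treatments) prepends a column of ones to absorb the intercept, so 10,000 instances with 17 input dimensions give a 10,000 × 18 design matrix. A minimal sketch assuming that convention:

```python
import numpy as np

# 10,000 training instances, each with 17 input dimensions
n_samples, n_features = 10_000, 17
X_raw = np.random.randn(n_samples, n_features)

# Prepend a column of ones so the intercept is absorbed into the weights
X = np.hstack([np.ones((n_samples, 1)), X_raw])

print(X.shape)  # (10000, 18)
```

Note that the 2-dimensional output does not change the design matrix; it only means the target matrix has shape 10,000 × 2.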


**Q2.** Suppose we want to add a regularizer to the linear regression loss function to control the magnitudes of the weights β. We have a choice between Ω1(β) = Σ_{i=1..p} |β_i| and Ω2(β) = Σ_{i=1..p} β_i². Which one is more likely to result in sparse weights?

**Answer:-** **a**
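The key fact behind this question is that the L1 penalty Ω1 tends to drive weights exactly to zero, while the squared L2 penalty Ω2 only shrinks them. For a one-dimensional quadratic loss this is visible in closed form (soft-thresholding versus proportional shrinkage); a minimal sketch:

```python
def lasso_1d(c, lam):
    """Minimiser of 0.5*(w - c)**2 + lam*|w|: soft-thresholding."""
    if c > lam:
        return c - lam
    if c < -lam:
        return c + lam
    return 0.0

def ridge_1d(c, lam):
    """Minimiser of 0.5*(w - c)**2 + lam*w**2: proportional shrinkage."""
    return c / (1.0 + 2.0 * lam)

c, lam = 0.3, 0.5
print(lasso_1d(c, lam))  # 0.0  -- driven exactly to zero
print(ridge_1d(c, lam))  # 0.15 -- shrunk but still nonzero
```

Whenever |c| ≤ λ the L1 solution is exactly zero, which is why the L1 regularizer produces sparse weight vectors.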

**Q3.** The model obtained by applying linear regression on the identified subset of features may differ from the model obtained at the end of the process of identifying the subset during

**Answer:-** **d**

**Q4.** Consider forward selection, backward selection and best subset selection with respect to the same data set. Which of the following is true?

**Answer:-** **a**
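For background on Q3 and Q4, a minimal sketch of greedy forward selection (the data here is a synthetic assumption, not from the course): at each step, add the unused feature that most reduces the residual sum of squares.

```python
import numpy as np

def forward_selection(X, y, k):
    """Greedily pick k feature indices, minimising RSS at each step."""
    n, p = X.shape
    selected = []
    for _ in range(k):
        best_j, best_rss = None, np.inf
        for j in range(p):
            if j in selected:
                continue
            cols = selected + [j]
            A = np.hstack([np.ones((n, 1)), X[:, cols]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = np.sum((y - A @ beta) ** 2)
            if rss < best_rss:
                best_j, best_rss = j, rss
        selected.append(best_j)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 3 * X[:, 2] - 2 * X[:, 4] + 0.01 * rng.normal(size=100)
print(forward_selection(X, y, 2))  # recovers the two informative features
```

Unlike best subset selection, this greedy procedure never revisits earlier choices, which is why the three methods can identify different subsets on the same data.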


**Q5.** In the lecture on Multivariate Regression, you learn about using orthogonalization iteratively to obtain regression coefficients. This method is generally referred to as Multiple Regression using Successive Orthogonalization. In the formulation of the method, we observe that in iteration k, we regress the entire dataset on z0, z1, …, zk−1. It seems like a waste of computation to recompute the coefficients for z0 a total of p times, z1 a total of p−1 times, and so on. Can we re-use the coefficients computed in iteration j for iteration j+1 for zj−1?

**Answer:-** **b**
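For context, successive orthogonalization builds mutually orthogonal residual vectors z_j, and the coefficient of y on the last residual z_p equals the multiple-regression coefficient of x_p. A minimal sketch of one pass on synthetic data (without any coefficient re-use across iterations):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=n)

# z_j = x_j minus its projections onto z_0, ..., z_{j-1}
Z = [np.ones(n)]                      # z_0 is the intercept column
for j in range(p):
    x_j = X[:, j]
    residual = x_j.copy()
    for z in Z:
        residual -= (z @ x_j) / (z @ z) * z
    Z.append(residual)

# Regressing y on the last residual reproduces the last OLS coefficient
beta_last = (Z[-1] @ y) / (Z[-1] @ Z[-1])

A = np.hstack([np.ones((n, 1)), X])
beta_full, *_ = np.linalg.lstsq(A, y, rcond=None)
print(beta_last, beta_full[-1])  # the two values agree
```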

**Q6.** Principal Component Regression (PCR) is an approach to find an orthogonal set of basis vectors which can then be used to reduce the dimension of the input. Which of the following matrices contains the principal component directions as its columns? (Follow the notation from the lecture video.)

**Answer:-** **d**
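As background, the principal component directions are the right-singular vectors of the centred design matrix (the columns of V in X = U D Vᵀ); PCR then regresses on the first k component scores. A minimal sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))
y = rng.normal(size=100)

Xc = X - X.mean(axis=0)               # centre the inputs
U, D, Vt = np.linalg.svd(Xc, full_matrices=False)
V = Vt.T                              # columns of V: principal component directions

k = 2
scores = Xc @ V[:, :k]                # project onto the first k directions
theta, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
beta_pcr = V[:, :k] @ theta           # map back to the original feature space
print(V.shape, beta_pcr.shape)  # (5, 5) (5,)
```

The columns of V are orthonormal, which is what makes them a valid orthogonal basis for reducing the input dimension.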


**Q7.** We want to learn a function f(x) of the form f(x) = ax + b, which is parameterised by (a, b). Using squared error as the loss function, which of the following parameters would you use to model this function to get a solution with the minimum loss?

**Answer:-** **d**
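For f(x) = ax + b under squared-error loss, the minimising parameters are the ordinary least-squares estimates, which have the closed form a = cov(x, y)/var(x) and b = ȳ − a·x̄. A minimal sketch on exactly linear data:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0                      # data generated with a = 2, b = 1

# Closed-form least-squares solution for slope and intercept
a = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b = y.mean() - a * x.mean()
print(a, b)  # 2.0 1.0
```

Given the answer options from the assignment, the parameters to choose are whichever pair minimises this squared-error loss on the provided points.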

**Q8.** Let us build a nearest neighbours classifier that will predict which language a word belongs to. Say we represent each word using the following features.

**Answer:-** **a**
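The exact features from the question are not reproduced above, so this sketch uses hypothetical features (word length and vowel count) and an invented toy training set, just to illustrate how a 1-nearest-neighbour word classifier works:

```python
def features(word):
    """Hypothetical features: (length, number of vowels)."""
    vowels = sum(ch in "aeiou" for ch in word.lower())
    return (len(word), vowels)

# Invented toy training set: (word, language) pairs
train = [("hello", "english"), ("world", "english"),
         ("bonjour", "french"), ("fromage", "french")]

def predict(word):
    fx = features(word)
    # 1-NN: return the label of the training word closest in feature space
    def dist(item):
        fw = features(item[0])
        return (fx[0] - fw[0]) ** 2 + (fx[1] - fw[1]) ** 2
    return min(train, key=dist)[1]

print(predict("chateau"))  # nearest training words by these features are French
```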


**Disclaimer:** We do not claim 100% accuracy for these answers; they are based on our own knowledge. By posting them we are only trying to help students, so we urge you to do your assignment on your own.

If you have any suggestions, comment below or contact us at [email protected]

If you found this article interesting and helpful, don't forget to share it with your friends.