NPTEL INTRODUCTION TO MACHINE LEARNING ASSIGNMENT 2 Answers:- Hello students! In this article we share the NPTEL INTRODUCTION TO MACHINE LEARNING Week 2 assignment answers. The answers below are provided only as a reference to help students; you must submit your assignment based on your own knowledge.
Below you can find the NPTEL INTRODUCTION TO MACHINE LEARNING Assignment 2 answers.
|INTRODUCTION TO MACHINE LEARNING Assignment 1||Click Here|
|INTRODUCTION TO MACHINE LEARNING Assignment 2||Click Here|
|INTRODUCTION TO MACHINE LEARNING Assignment 3||Click Here|
|INTRODUCTION TO MACHINE LEARNING Assignment 4||Click Here|
|INTRODUCTION TO MACHINE LEARNING Assignment 5||Click Here|
|INTRODUCTION TO MACHINE LEARNING Assignment 6||Click Here|
|INTRODUCTION TO MACHINE LEARNING Assignment 7||Click Here|
|INTRODUCTION TO MACHINE LEARNING Assignment 8||Click Here|
NPTEL INTRODUCTION TO MACHINE LEARNING Assignment 2 Answers 2022:-
1. The parameters obtained in linear regression
a. can take any value in the real space
b. are strictly integers
c. always lie in the range [0,1]
d. can take only non-zero values
2. Suppose that we have N independent variables (X1, X2, …, XN) and the dependent variable is Y. Now imagine that you are applying linear regression by fitting the best-fit line using least-squares error on this data. You find that the correlation coefficient of one of the variables (say X1) with Y is -0.005.
a. Regressing Y on X1 mostly does not explain away Y .
b. Regressing Y on X1 explains away Y .
c. The given data is insufficient to determine if regressing Y on X1 explains away Y or not.
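As a quick sanity check on what a near-zero correlation implies: in simple linear regression, the fraction of Y's variance explained by X1 equals r², so r = -0.005 corresponds to explaining only about 0.0025% of the variance. The sketch below uses hypothetical numbers (not data from the question) just to illustrate the computation.

```python
# Minimal sketch with hypothetical data: a near-zero Pearson correlation
# between X1 and Y means a univariate regression of Y on X1 explains
# almost none of Y's variance (R^2 = r^2 for simple linear regression).

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
y  = [3.0, 3.1, 2.9, 3.1, 3.0]   # essentially unrelated to x1

r = pearson_r(x1, y)
print("r =", r, " R^2 =", r ** 2)  # both very close to 0
```

With r this close to zero, X1 on its own "mostly does not explain away Y".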
3. Consider the following five training examples
We want to learn a function f(x) of the form f(x) = ax + b, which is parameterised by (a, b). Using mean squared error as the loss function, which of the following parameter settings would you use to model this function so as to obtain a solution with the minimum loss?
a. (4, 3)
b. (1, 4)
c. (4, 1)
d. (3, 4)
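The table of five training examples did not survive extraction above, so the points in this sketch are hypothetical; it only demonstrates the procedure the question asks for: compute the mean squared error of f(x) = ax + b for each candidate (a, b) and pick the pair with the smallest loss.

```python
# Hypothetical training points (NOT the question's table), used only to
# show how the candidate (a, b) pairs are compared by mean squared error.

def mse(a, b, points):
    return sum((a * x + b - y) ** 2 for x, y in points) / len(points)

points = [(0, 3.1), (1, 7.0), (2, 10.9), (3, 15.2), (4, 18.8)]  # hypothetical
candidates = [(4, 3), (1, 4), (4, 1), (3, 4)]

best = min(candidates, key=lambda ab: mse(ab[0], ab[1], points))
for a, b in candidates:
    print((a, b), "MSE =", round(mse(a, b, points), 3))
print("minimum-loss candidate on this data:", best)
```

On the actual table from the question, the same loop would identify the correct option.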
4. The relation between studying time (in hours) and grade on the final examination (0-100) in a random sample of students in the Introduction to Machine Learning Class was found to be: Grade = 30.5 + 15.2 (h)
How will a student’s grade be affected if she studies for four hours?
a. It will go down by 30.4 points.
b. It will go up by 30.4 points.
c. It will go up by 60.8 points.
d. The grade will remain unchanged.
e. It cannot be determined from the information given.
5. Which of the statements is/are True?
a. Ridge has sparsity constraint, and it will drive coefficients with low values to 0.
b. Lasso has a closed form solution for the optimization problem, but this is not the case for Ridge.
c. Ridge regression does not reduce the number of variables since it never leads a coefficient to zero but only minimizes it.
d. If there are two or more highly collinear variables, Lasso will select one of them randomly.
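For reference when weighing these statements: it is Ridge, not Lasso, that admits a closed-form solution, θ = (XᵀX + λI)⁻¹XᵀY, because the L2 penalty keeps the objective differentiable everywhere; Lasso's L1 penalty is non-differentiable at zero and is typically solved iteratively. The sketch below demonstrates the Ridge closed form on toy data (the data and λ value are illustrative choices, not from the course).

```python
# Ridge regression has the closed-form solution
#   theta = (X^T X + lam * I)^{-1} X^T y.
# Toy-data sketch: the recovered coefficients are shrunk toward zero
# but (unlike Lasso) are not driven exactly to zero.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
true_theta = np.array([2.0, -1.0, 0.5])
y = X @ true_theta + 0.01 * rng.normal(size=50)

lam = 0.1  # illustrative regularization strength
theta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
print(theta_ridge)  # close to true_theta, slightly shrunk, none exactly 0
```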
6. Consider the following statements:
Assertion(A): Orthogonalization is applied to the dimensions in linear regression.
Reason(R): Orthogonalization makes univariate regression possible in each orthogonal dimension separately to produce the coefficients.
a. Both A and R are true, and R is the correct explanation of A.
b. Both A and R are true, but R is not the correct explanation of A.
c. A is true, but R is false.
d. A is false, but R is true
e. Both A and R are false.
7. Consider the following statements:
Statement A: In Forward stepwise selection, in each step, that variable is chosen which has the maximum correlation with the residual, then the residual is regressed on that variable, and it is added to the predictor.
Statement B: In Forward stagewise selection, the variables are added one by one to the previously selected variables to produce the best fit till then.
a. Both the statements are True.
b. Statement A is True, and Statement B is False.
c. Statement A is False, and Statement B is True.
d. Both the statements are False.
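To make the mechanism in these statements concrete, here is a rough sketch (an illustration, not the course's reference implementation) of the selection idea they describe: repeatedly pick the variable most correlated with the current residual, fit the residual on that variable alone, and fold it into the model.

```python
# Rough sketch of residual-correlation variable selection (illustrative
# only): pick the unselected column most correlated with the residual,
# regress the residual on it, subtract the fit, and repeat.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
y = 3.0 * X[:, 2] + 0.1 * rng.normal(size=100)  # only column 2 matters

residual = y.copy()
selected = []
for _ in range(2):
    # absolute correlation of each unselected column with the residual
    corrs = [abs(np.corrcoef(X[:, j], residual)[0, 1]) if j not in selected
             else -1.0
             for j in range(X.shape[1])]
    j = int(np.argmax(corrs))
    selected.append(j)
    # univariate least-squares fit of the residual on column j
    coef = (X[:, j] @ residual) / (X[:, j] @ X[:, j])
    residual = residual - coef * X[:, j]

print("selection order:", selected)  # the informative column 2 comes first
```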
8. The linear regression model y = a0 + a1x1 + a2x2 + … + apxp is to be fitted to a set of N training data points, each having p attributes. Let X be the N×(p+1) matrix of input values (augmented by 1's), Y the N×1 vector of target values, and θ the (p+1)×1 vector of parameter values (a0, a1, a2, …, ap). If the sum of squared errors is minimized to obtain the optimal regression model, which of the following equations holds?
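The answer options did not survive extraction here, but for reference: minimizing the sum of squared errors ||Y − Xθ||² gives the normal equation XᵀXθ = XᵀY, i.e. θ = (XᵀX)⁻¹XᵀY when XᵀX is invertible. The sketch below verifies this on noiseless toy data, where the closed form recovers the true parameters exactly.

```python
# Verifying the least-squares normal equation theta = (X^T X)^{-1} X^T Y
# on noiseless toy data: the closed form recovers the known parameters.
import numpy as np

rng = np.random.default_rng(2)
N, p = 20, 3
X = np.hstack([np.ones((N, 1)), rng.normal(size=(N, p))])  # augmented by 1's
theta_true = np.array([1.0, 2.0, -3.0, 0.5])               # (a0, a1, a2, a3)
Y = X @ theta_true                                          # noiseless targets

theta_hat = np.linalg.solve(X.T @ X, X.T @ Y)  # solves X^T X theta = X^T Y
print(theta_hat)  # matches theta_true up to floating-point error
```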
Disclaimer: We do not claim 100% accuracy of these answers; they are based on our own knowledge, and we post them only to help students. We urge you to do your assignment on your own.
If you have any suggestions, comment below or contact us at [email protected].
If you found this article interesting and helpful, don't forget to share it with your friends.