wrote...
A month ago
List the five assumptions made in linear regressions and select one to discuss in depth.
Textbook: Analytics, Data Science, & Artificial Intelligence: Systems for Decision Support (11th Edition)
Read 48 times
1 Reply
Replies
wrote...
A month ago
1. Linearity. This assumption states that the relationship between the response variable and the explanatory variables is linear. That is, the expected value of the response variable is a straight-line function of each explanatory variable while all other explanatory variables are held fixed, and the slope of that line does not depend on the values of the other variables. It also implies that the effects of the different explanatory variables on the expected value of the response variable are additive.
2. Independence (of errors). This assumption states that the errors of the response variable are uncorrelated with each other. This lack of correlation is a weaker condition than full statistical independence, which is usually not needed for linear regression analysis.
3. Normality (of errors). This assumption states that the errors of the response variable are normally distributed with mean zero. In other words, the residuals should look like random noise and should not exhibit any systematic pattern; normality matters mainly for inference (confidence intervals and hypothesis tests).
4. Constant variance (of errors). This assumption, also called homoscedasticity, states that the errors have the same variance regardless of the values of the explanatory variables. In practice, this assumption is often violated when the response variable varies over a wide enough range or scale.
5. No multicollinearity. This assumption states that the explanatory variables are not highly correlated with each other (i.e., each one contributes distinct information to the model rather than replicating another's). Multicollinearity arises when two or more explanatory variables are perfectly or near-perfectly correlated, for example when the same explanatory variable is mistakenly included in the model twice, once as a slight transformation of itself. A correlation-based assessment of the data usually catches this error.
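The assumptions above can each be probed with simple diagnostics. Below is a minimal NumPy sketch (on simulated data, not an example from the textbook) that fits an ordinary least squares model and informally checks predictor correlation, error independence, and constant variance; the variable names and thresholds are illustrative choices, not standard values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data that satisfies the assumptions: linear relationship,
# independent normal errors with constant variance, uncorrelated predictors.
n = 200
X = rng.normal(size=(n, 2))                # two explanatory variables
beta = np.array([2.0, -1.0])
y = 3.0 + X @ beta + rng.normal(scale=0.5, size=n)

# Ordinary least squares with an intercept column (lstsq for stability).
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
fitted = A @ coef
residuals = y - fitted

# Multicollinearity check: correlation between predictors should be small.
corr = np.corrcoef(X, rowvar=False)[0, 1]

# Independence check: lag-1 autocorrelation of residuals should be near 0.
lag1 = np.corrcoef(residuals[:-1], residuals[1:])[0, 1]

# Constant-variance check: residual spread should be similar across
# the lower and upper halves of the fitted values.
order = np.argsort(fitted)
s_low = residuals[order[: n // 2]].std()
s_high = residuals[order[n // 2:]].std()

print(coef)                  # estimates close to [3, 2, -1]
print(corr, lag1, s_low / s_high)
```

When an assumption is violated, these numbers drift: correlated predictors push `corr` toward ±1, autocorrelated errors inflate `lag1`, and heteroscedasticity pulls the spread ratio away from 1. Formal tests (e.g., Durbin-Watson or Breusch-Pagan) make the same comparisons rigorously.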