
Linear Regression Analysis

Part 14 of a Series on Evaluation of Scientific Publications

Astrid Schneider, Dipl. Math.,1 Gerhard Hommel, Prof. Dr. rer. nat.,1 and Maria Blettner, Prof. Dr. rer. nat.*,1

1Department of Medical Biometrics, Epidemiology, and Computer Sciences, Johannes Gutenberg University, Mainz, Germany

*Department of Medical Biometrics, Epidemiology, and Computer Sciences, Johannes Gutenberg University, Obere Zahlbacher Str. 69, 55131 Mainz, Germany


Received 2010 May 11; Accepted 2010 Jul 14.


Review Article


Abstract

Background

Regression analysis is an important statistical method for the analysis of medical data. It enables the identification and characterization of relationships among multiple factors. It also enables the identification of prognostically relevant risk factors and the calculation of risk scores for individual prognostication.

Methods

This article is based on selected textbooks of statistics, a selective review of the literature, and our own experience.

Results

After a brief introduction of the uni- and multivariable regression models, illustrative examples are given to explain what the important considerations are before a regression analysis is performed, and how the results should be interpreted. The reader should then be able to judge whether the method has been used correctly and interpret the results appropriately.

Conclusion

The performance and interpretation of linear regression analysis are subject to a variety of pitfalls, which are discussed here in detail. The reader is made aware of common errors of interpretation through practical examples. Both the opportunities for applying linear regression analysis and its limitations are presented.

The purpose of statistical evaluation of medical data is often to describe relationships between two variables or among several variables. For example, one would like to know not just whether patients have high blood pressure, but also whether the likelihood of having high blood pressure is influenced by factors such as age and weight. The variable to be explained (blood pressure) is called the dependent variable, or, alternatively, the response variable; the variables that explain it (age, weight) are called independent variables or predictor variables. Measures of association provide an initial impression of the extent of statistical dependence between variables. If the dependent and independent variables are continuous, as is the case for blood pressure and weight, then a correlation coefficient can be calculated as a measure of the strength of the relationship between them (box 1).

Box 1

Interpretation of the correlation coefficient (r)

Spearman’s coefficient:

Describes a monotone relationship

A monotone relationship is one in which the dependent variable either consistently rises or consistently falls as the independent variable rises.

Pearson’s correlation coefficient:

Describes a linear relationship

Interpretation/meaning:

Correlation coefficients provide information about the strength and direction of a relationship between two continuous variables. No distinction between the explaining variable and the variable to be explained is necessary:

  • r = ± 1: perfect linear and monotone relationship. The closer r is to 1 or –1, the stronger the relationship.

  • r = 0: no linear or monotone relationship

  • r < 0: negative, inverse relationship (high values of one variable tend to occur together with low values of the other variable)

  • r > 0: positive relationship (high values of one variable tend to occur together with high values of the other variable)

Graphical representation of a linear relationship:

Scatter plot with regression line

A negative relationship is represented by a falling regression line (regression coefficient b < 0), a positive one by a rising regression line (b > 0).
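As an illustration of the two coefficients in Box 1, the following sketch computes Pearson's and Spearman's coefficients with SciPy; the weight and blood-pressure values are invented for illustration only.

```python
# A minimal sketch of Box 1: Pearson (linear) vs. Spearman (monotone)
# correlation. Data are invented for illustration only.
import numpy as np
from scipy import stats

weight = np.array([62, 70, 74, 80, 85, 91, 98])              # kg (hypothetical)
systolic_bp = np.array([110, 118, 121, 128, 135, 139, 150])  # mmHg (hypothetical)

r_pearson, p_pearson = stats.pearsonr(weight, systolic_bp)
r_spearman, p_spearman = stats.spearmanr(weight, systolic_bp)

print(f"Pearson r  = {r_pearson:.2f}  (strength of the linear relationship)")
print(f"Spearman r = {r_spearman:.2f}  (strength of the monotone relationship)")
```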

Regression analysis is a type of statistical evaluation that enables three things:

  • Description: Relationships among the dependent variables and the independent variables can be statistically described by means of regression analysis.

  • Estimation: The values of the dependent variables can be estimated from the observed values of the independent variables.

  • Prognostication: Risk factors that influence the outcome can be identified, and individual prognoses can be determined.

Regression analysis employs a model that describes the relationships between the dependent variables and the independent variables in a simplified mathematical form. There may be biological reasons to expect a priori that a certain type of mathematical function will best describe such a relationship, or simple assumptions have to be made that this is the case (e.g., that blood pressure rises linearly with age). The best-known types of regression analysis are the following (table 1):

  • Linear regression,

  • Logistic regression, and

  • Cox regression.

The goal of this article is to introduce the reader to linear regression. The theory is briefly explained, and the interpretation of statistical parameters is illustrated with examples. The methods of regression analysis are comprehensively discussed in many standard textbooks (1–3).

Cox regression will be discussed in a later article in this journal.

Methods

Linear regression is used to study the linear relationship between a dependent variable Y (blood pressure) and one or more independent variables X (age, weight, sex).

The dependent variable Y must be continuous, while the independent variables may be continuous (age), binary (sex), or categorical (social status). The initial judgment of a possible relationship between two continuous variables should always be made on the basis of a scatter plot (scatter graph). This type of plot will show whether the relationship is linear (figure 1) or nonlinear (figure 2).

Figure 1

A scatter plot showing a linear relationship

Figure 2

A scatter plot showing an exponential relationship. In this case, it would not be appropriate to compute a coefficient of determination or a regression line

Performing a linear regression makes sense only if the relationship is linear. Other methods must be used to study nonlinear relationships. The variable transformations and other, more complex techniques that can be used for this purpose will not be discussed in this article.

Univariable linear regression

Univariable linear regression studies the linear relationship between the dependent variable Y and a single independent variable X. The linear regression model describes the dependent variable with a straight line that is defined by the equation Y = a + b × X, where a is the y-intercept of the line and b is its slope. First, the parameters a and b of the regression line are estimated from the values of the dependent variable Y and the independent variable X with the aid of statistical methods. The regression line enables one to predict the value of the dependent variable Y from that of the independent variable X. Thus, for example, after a linear regression has been performed, one would be able to estimate a person’s weight (dependent variable) from his or her height (independent variable) (figure 3).

Figure 3

A scatter plot and the corresponding regression line and regression equation for the relationship between the dependent variable body weight (kg) and the independent variable height (m).
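The estimation step just described can be sketched in a few lines of Python; the height and weight values below are invented, and NumPy's least-squares polynomial fit stands in for the statistical estimation of a and b.

```python
# A minimal sketch of univariable linear regression Y = a + b × X,
# with a and b estimated by least squares. Data are invented.
import numpy as np

height_m = np.array([1.60, 1.65, 1.70, 1.74, 1.80, 1.85, 1.90])   # X (hypothetical)
weight_kg = np.array([52.0, 58.0, 64.0, 68.0, 75.0, 81.0, 87.0])  # Y (hypothetical)

b, a = np.polyfit(height_m, weight_kg, deg=1)   # slope b, intercept a
print(f"regression line: Y = {a:.2f} + {b:.2f} × X")

# Predict Y for a new X that lies within the observed range
print(f"predicted weight at 1.75 m: {a + b * 1.75:.1f} kg")
```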

The slope b of the regression line is called the regression coefficient. It provides a measure of the contribution of the independent variable X toward explaining the dependent variable Y. If the independent variable is continuous (e.g., body height in centimeters), then the regression coefficient represents the change in the dependent variable (body weight in kilograms) per unit of change in the independent variable (body height in centimeters). The proper interpretation of the regression coefficient thus requires attention to the units of measurement. The following example should make this relationship clear:

In a fictitious study, data were obtained from 135 women and men aged 18 to 27. Their height ranged from 1.59 to 1.93 meters. The relationship between height and weight was studied: weight in kilograms was the dependent variable to be estimated from the independent variable, height in centimeters. On the basis of the data, the following regression line was determined: Y = –133.18 + 1.16 × X, where X is height in centimeters and Y is weight in kilograms. The y-intercept a = –133.18 is the value of the dependent variable when X = 0, but X cannot possibly take on the value 0 in this study (one obviously cannot expect a person of height 0 centimeters to weigh –133.18 kilograms). Interpretation of the constant is therefore often not useful. In general, only values within the range of observations of the independent variables should be used in a linear regression model; prediction of the value of the dependent variable becomes increasingly inaccurate the further one goes outside this range.

The regression coefficient of 1.16 means that, in this model, a person’s weight increases by 1.16 kg with each additional centimeter of height. If height had been measured in meters, rather than in centimeters, the regression coefficient b would have been 115.91 instead. The constant a, in contrast, is independent of the unit chosen to express the independent variables. Proper interpretation thus requires that the regression coefficient should be considered together with the units of all of the involved variables. Special attention to this issue is needed when publications from different countries use different units to express the same variables (e.g., feet and inches vs. centimeters, or pounds vs. kilograms).
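The unit effect described above is easy to verify numerically: refitting the same invented data with height expressed in centimeters instead of meters divides the regression coefficient by 100 but leaves the constant a untouched.

```python
# Sketch: the regression coefficient depends on the unit of X,
# the constant a does not. Data are invented for illustration.
import numpy as np

height_m = np.array([1.60, 1.65, 1.70, 1.74, 1.80, 1.85, 1.90])
weight_kg = np.array([52.0, 58.0, 64.0, 68.0, 75.0, 81.0, 87.0])

b_m, a_m = np.polyfit(height_m, weight_kg, 1)          # X in meters
b_cm, a_cm = np.polyfit(height_m * 100, weight_kg, 1)  # same X in centimeters

print(f"meters:      b = {b_m:8.2f}, a = {a_m:.2f}")
print(f"centimeters: b = {b_cm:8.4f}, a = {a_cm:.2f}")  # b_cm = b_m / 100
```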

Figure 3 shows the regression line that represents the linear relationship between height and weight.

For a person whose height is 1.74 m, the predicted weight is 68.50 kg (Y = –133.18 + 115.91 × 1.74, using the more precise regression coefficient for height in meters). The data set contains 6 persons whose height is 1.74 m, and their weights vary from 63 to 75 kg.

Linear regression can be used to estimate the weight of any person whose height lies within the observed range (1.59 m to 1.93 m); the data set need not include any person with that precise height. Mathematically, it is also possible to estimate the weight of a person whose height lies outside the range of values observed in the study, but such an extrapolation is generally not useful.

If the independent variables are categorical or binary, then the regression coefficient must be interpreted in reference to the numerical encoding of these variables. Binary variables should generally be encoded with two consecutive whole numbers (usually 0/1 or 1/2). In interpreting the regression coefficient, one should recall which category of the independent variable is represented by the higher number (e.g., 2, when the encoding is 1/2). The regression coefficient reflects the change in the dependent variable that corresponds to a change in the independent variable from 1 to 2.

For example, if one studies the relationship between sex and weight, one obtains the regression line Y = 47.64 + 14.93 × X, where X = sex (1 = female, 2 = male). The regression coefficient of 14.93 reflects the fact that men are an average of 14.93 kg heavier than women.
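With 1/2 coding, the regression coefficient for a binary variable is simply the difference between the two group means, as a short sketch with invented data shows:

```python
# Sketch: with a binary X coded 1/2, the regression coefficient b
# equals the difference of the group means. Data are invented.
import numpy as np

sex = np.array([1, 1, 1, 1, 2, 2, 2, 2])  # 1 = female, 2 = male
weight = np.array([58.0, 62.0, 60.0, 64.0, 74.0, 78.0, 76.0, 80.0])  # kg

b, a = np.polyfit(sex, weight, 1)
print(f"Y = {a:.2f} + {b:.2f} × X")                       # b = 16.00 here
print(weight[sex == 2].mean() - weight[sex == 1].mean())  # also 16.0
```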

When categorical variables are used, the reference category should be defined first, and all other categories are to be considered in relation to this category.

The coefficient of determination, r², is a measure of how well the regression model describes the observed data (Box 2). In univariable regression analysis, r² is simply the square of Pearson’s correlation coefficient. In the particular fictitious case that is described above, the coefficient of determination for the relationship between height and weight is 0.785. This means that 78.5% of the variance in weight is due to height. The remaining 21.5% is due to individual variation and might be explained by other factors that were not taken into account in the analysis, such as eating habits, exercise, sex, or age.

Box 2

Coefficient of determination (R-squared)

Definition:

Let

  • n be the number of observations (e.g., subjects in the study)

  • ŷi be the estimated value of the dependent variable for the ith observation, as computed with the regression equation

  • yi be the observed value of the dependent variable for the ith observation

  • ȳ be the mean of all n observations of the dependent variable

The coefficient of determination is then defined as follows:

r² = 1 − Σ(yi − ŷi)² / Σ(yi − ȳ)², where the sums run over all n observations (i = 1, …, n)

r² is the fraction of the overall variance that is explained by the regression model. The closer the estimated values ŷi lie to the observed values yi, the nearer the coefficient of determination is to 1 and the more accurate the regression model is. In models with multiple independent variables, the corrected (adjusted) coefficient of determination should be given; it additionally takes the number of independent variables in the model into account.
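The formula in Box 2 can be computed directly from the residuals of a fitted model; the following sketch uses invented data and notes that, in the univariable case, the result equals the square of Pearson's correlation coefficient.

```python
# A direct computation of Box 2's definition:
# r² = 1 − Σ(yi − ŷi)² / Σ(yi − ȳ)². Data are invented.
import numpy as np

x = np.array([1.60, 1.65, 1.70, 1.74, 1.80, 1.85, 1.90])
y = np.array([52.0, 58.0, 64.0, 68.0, 75.0, 81.0, 87.0])

b, a = np.polyfit(x, y, 1)
y_hat = a + b * x                     # estimated values ŷi

ss_res = np.sum((y - y_hat) ** 2)     # Σ(yi − ŷi)²
ss_tot = np.sum((y - y.mean()) ** 2)  # Σ(yi − ȳ)²
r2 = 1 - ss_res / ss_tot
print(f"r² = {r2:.3f}")               # equals np.corrcoef(x, y)[0, 1] ** 2
```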

In formal terms, the null hypothesis that b = 0 (i.e., that there is no relationship between the variables, so the regression coefficient is 0) can be tested with a t-test. One can also compute the 95% confidence interval for the regression coefficient (4).
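In practice, the t-test and the confidence interval come directly from regression software; a sketch with statsmodels, using the same invented data as above, might look like this:

```python
# Sketch: t-test of the null hypothesis b = 0 and the 95% CI for b,
# via statsmodels OLS. Data are invented for illustration.
import numpy as np
import statsmodels.api as sm

x = np.array([1.60, 1.65, 1.70, 1.74, 1.80, 1.85, 1.90])
y = np.array([52.0, 58.0, 64.0, 68.0, 75.0, 81.0, 87.0])

X = sm.add_constant(x)             # adds the column for the intercept a
model = sm.OLS(y, X).fit()

print(model.params)                # estimates of a and b
print(model.pvalues)               # p-values of the t-tests (a = 0, b = 0)
print(model.conf_int(alpha=0.05))  # 95% confidence intervals
```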

Multivariable linear regression

In many cases, the contribution of a single independent variable does not alone suffice to explain the dependent variable Y. If this is so, one can perform a multivariable linear regression to study the effect of multiple variables on the dependent variable.

In the multivariable regression model, the dependent variable is described as a linear function of the independent variables Xi, as follows: Y = a + b1 × X1 + b2 × X2 +…+ bn × Xn. The model permits the computation of a regression coefficient bi for each independent variable Xi (box 3).

Box 3

Regression line for a multivariable regression

Y = a + b1 × X1 + b2 × X2 + … + bn × Xn,

where

Y = dependent variable

Xi = independent variables

a = constant (y-intercept)

bi = regression coefficient of the variable Xi

Example: regression line for a multivariable regression Y = –120.07 + 100.81 × X1 + 0.38 × X2 + 3.41 × X3,

where

X1 = height (meters)

X2 = age (years)

X3 = sex (1 = female, 2 = male)

Y = the weight to be estimated (kg)
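Applying the example equation in Box 3 is a matter of substituting a person's values; the coefficients below are taken from the box, while the person is invented.

```python
# Prediction with Box 3's example equation (coefficients from the box,
# the person's values are invented for illustration).
height_m, age_years, sex = 1.74, 25, 2  # sex: 1 = female, 2 = male

weight_kg = -120.07 + 100.81 * height_m + 0.38 * age_years + 3.41 * sex
print(f"estimated weight: {weight_kg:.1f} kg")  # ≈ 71.7 kg
```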

Just as in univariable regression, the coefficient of determination describes the overall relationship between the independent variables Xi (weight, age, body-mass index) and the dependent variable Y (blood pressure). It corresponds to the square of the multiple correlation coefficient, which is the correlation between Y and b1 × X1 + … + bn × Xn.

It is better practice, however, to give the corrected coefficient of determination, as discussed in Box 2. Each of the coefficients bi reflects the effect of the corresponding individual independent variable Xi on Y, after the potential influences of the remaining independent variables on Xi have been taken into account, i.e., eliminated by an additional computation. Thus, in a multiple regression analysis with age and sex as independent variables and weight as the dependent variable, the adjusted regression coefficient for sex represents the amount of variation in weight that is due to sex alone, after age has been taken into account. This is done by a computation that adjusts for age, so that the effect of sex is not confounded by a simultaneously operative age effect (Box 4).
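The article gives no explicit expression for the corrected coefficient of determination; the sketch below assumes the standard adjusted-R² formula, which penalizes the number m of independent variables in the model.

```python
# Sketch of a corrected (adjusted) coefficient of determination.
# The formula is the standard adjusted R²; it is an assumption here,
# since the article itself gives no explicit expression.
def adjusted_r2(r2: float, n: int, m: int) -> float:
    """Adjusted R² for n observations and m independent variables."""
    return 1 - (1 - r2) * (n - 1) / (n - m - 1)

# For the fictitious height/weight study: n = 135, m = 1, r² = 0.785
print(f"{adjusted_r2(0.785, n=135, m=1):.3f}")  # ≈ 0.783
```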

Box 4

Two important terms

  • Confounder (in non-randomized studies): an independent variable that is associated, not only with the dependent variable, but also with other independent variables. The presence of confounders can distort the effect of the other independent variables. Age and sex are frequent confounders.

  • Adjustment: a statistical technique to eliminate the influence of one or more confounders on the treatment effect. Example: Suppose that age is a confounding variable in a study of the effect of treatment on a certain dependent variable. Adjustment for age involves a computational procedure to mimic a situation in which the men and women in the data set were of the same age. This computation eliminates the influence of age on the treatment effect.

In this way, multivariable regression analysis permits the study of multiple independent variables at the same time, with adjustment of their regression coefficients for possible confounding effects between variables.

Multivariable analysis does more than describe a statistical relationship; it also permits individual prognostication and the evaluation of the state of health of a given patient. A linear regression model can be used, for instance, to determine the optimal values for respiratory function tests depending on a person’s age, body-mass index (BMI), and sex. Comparing a patient’s measured respiratory function with these computed optimal values yields a measure of his or her state of health.

Medical questions often involve the effect of a very large number of factors (independent variables). The goal of statistical analysis is to find out which of these factors truly have an effect on the dependent variable. The art of statistical evaluation lies in finding the variables that best explain the dependent variable.

One way to carry out a multivariable regression is to include all potentially relevant independent variables in the model (complete model). The problem with this method is that the number of observations that can practically be made is often less than the model requires. In general, the number of observations should be at least 20 times greater than the number of variables under study.

Moreover, if too many irrelevant variables are included in the model, overadjustment is likely to result: that is, some of the irrelevant independent variables will be found to have an apparent effect, purely by chance. The inclusion of irrelevant independent variables will indeed allow a better fit to the data set under study but, because of random effects, the findings will not generally be applicable outside of this data set (1). The inclusion of irrelevant independent variables also strongly distorts the coefficient of determination, so that it no longer provides a useful index of the quality of fit between the model and the data (Box 2).

In the following sections, we will discuss how these problems can be circumvented.

The selection of variables

For the regression model to be robust and to explain Y as well as possible, it should include only independent variables that explain a large portion of the variance in Y. Variable selection can be performed so that only such independent variables are included (1).

Variable selection should be carried out on the basis of medical expert knowledge and a good understanding of biometrics. This is optimally done as a collaborative effort of the physician-researcher and the statistician. There are various methods of selecting variables:

Forward selection

Forward selection is a stepwise procedure that includes variables in the model as long as they make an additional contribution toward explaining Y. This is done iteratively until no variables are left that make any appreciable additional contribution.

Backward selection

Backward selection, on the other hand, starts with a model that contains all potentially relevant independent variables. The variable whose removal worsens the prediction of the dependent variable to the least extent is then removed from the model. This procedure is iterated until no independent variable can be removed without markedly worsening the prediction of the dependent variable.

Stepwise selection

Stepwise selection combines certain aspects of forward and backward selection. Like forward selection, it begins with a null model, adds the single independent variable that makes the greatest contribution toward explaining the dependent variable, and then iterates the process. Additionally, a check is performed after each such step to see whether one of the variables has now become irrelevant because of its relationship to the other variables. If so, this variable is removed.

Block inclusion

There are often variables that should be included in the model in any case—for example, the effect of a certain form of treatment, or independent variables that have already been found to be relevant in prior studies. One way of taking such variables into account is their block inclusion into the model. In this way, one can combine the forced inclusion of some variables with the selective inclusion of further independent variables that turn out to be relevant to the explanation of variation in the dependent variable.

The evaluation of a regression model requires the performance of both forward and backward selection of variables. If these two procedures result in the selection of the same set of variables, then the model can be considered robust. If not, a statistician should be consulted for further advice.
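A rough sketch of how forward and backward selection can be automated is given below. It uses p-value thresholds with statsmodels, which is one common implementation choice rather than the article's prescription; the threshold of 0.05, the helper names, and the column names in the usage comment are all assumptions.

```python
# Rough sketch of p-value-based forward and backward selection.
# The 0.05 threshold and the use of statsmodels are assumptions;
# 'df' is a pandas DataFrame with the variables as columns.
import pandas as pd
import statsmodels.api as sm

def _fit(y: str, cols: list, df: pd.DataFrame):
    return sm.OLS(df[y], sm.add_constant(df[cols])).fit()

def forward_selection(y: str, candidates: list, df: pd.DataFrame, alpha=0.05):
    selected = []
    while True:
        remaining = [c for c in candidates if c not in selected]
        pvals = {c: _fit(y, selected + [c], df).pvalues[c] for c in remaining}
        best = min(pvals, key=pvals.get) if pvals else None
        if best is None or pvals[best] >= alpha:
            return selected              # nothing left that contributes
        selected.append(best)

def backward_selection(y: str, candidates: list, df: pd.DataFrame, alpha=0.05):
    selected = list(candidates)
    while selected:
        pvals = _fit(y, selected, df).pvalues[selected]
        worst = pvals.idxmax()
        if pvals[worst] < alpha:
            return selected              # all remaining variables contribute
        selected.remove(worst)
    return selected

# If both procedures return the same set, the model can be considered robust:
# forward_selection('weight', ['height', 'age', 'sex'], df) ==
#     backward_selection('weight', ['height', 'age', 'sex'], df)
```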

Discussion

The study of relationships between variables and the generation of risk scores are very important elements of medical research. The proper performance of regression analysis requires that a number of important factors should be considered and tested:

1. Causality

Before a regression analysis is performed, the causal relationships among the variables to be considered must be examined from the point of view of their content and/or temporal relationship. The fact that an independent variable turns out to be significant says nothing about causality. This is an especially relevant point with respect to observational studies (5).

2. Planning of sample size

The number of cases needed for a regression analysis depends on the number of independent variables and on the strength of their expected effects (i.e., the strength of the relationships). If the sample is too small, only very strong relationships will be demonstrable. The sample size can be planned in the light of the researchers’ expectations regarding the coefficient of determination (r²) and the regression coefficient (b). Furthermore, at least 20 times as many observations should be made as there are independent variables to be studied; thus, if one wants to study 2 independent variables, one should make at least 40 observations.

3. Missing values

Missing values are a common problem in medical data. Whenever the value of either a dependent or an independent variable is missing, this particular observation has to be excluded from the regression analysis. If many values are missing from the dataset, the effective sample size will be appreciably diminished, and the sample may then turn out to be too small to yield significant findings, despite seemingly adequate advance planning. If this happens, real relationships can be overlooked, and the study findings may not be generally applicable. Moreover, selection effects can be expected in such cases. There are a number of ways to deal with the problem of missing values (6).

4. The data sample

A further important point to be considered is the composition of the study population. If there are subpopulations within it that behave differently with respect to the independent variables in question, then a real effect (or the lack of an effect) may be masked from the analysis and remain undetected. Suppose, for instance, that one wishes to study the effect of sex on weight, in a study population consisting half of children under age 8 and half of adults. Linear regression analysis over the entire population reveals an effect of sex on weight. If, however, a subgroup analysis is performed in which children and adults are considered separately, an effect of sex on weight is seen only in adults, and not in children. Subgroup analysis should only be performed if the subgroups have been predefined, and the questions already formulated, before the data analysis begins; furthermore, multiple testing should be taken into account (7, 8).

5. The selection of variables

If multiple independent variables are considered in a multivariable regression, some of these may turn out to be interdependent. An independent variable that would be found to have a strong effect in a univariable regression model might not turn out to have any appreciable effect in a multivariable regression with variable selection. This will happen if this particular variable itself depends so strongly on the other independent variables that it makes no additional contribution toward explaining the dependent variable. For related reasons, when the independent variables are mutually dependent, different independent variables might end up being included in the model depending on the particular technique that is used for variable selection.

Overview

Linear regression is an important tool for statistical analysis. Its broad spectrum of uses includes relationship description, estimation, and prognostication. The technique has many applications, but it also has prerequisites and limitations that must always be considered in the interpretation of findings (Box 5).

Box 5

What special points require attention in the interpretation of a regression analysis?

  1. How big is the study sample?

  2. Is causality demonstrable or plausible, in view of the content or temporal relationship of the variables?

  3. Has there been adjustment for potential confounding effects?

  4. Is the inclusion of the independent variables that were used justified, in view of their content?

  5. What is the corrected coefficient of determination (R-squared)?

  6. Is the study sample homogeneous?

  7. In what units were the potentially relevant independent variables reported?

  8. Was a selection of the independent variables (potentially relevant independent variables) performed, and, if so, what kind of selection?

  9. If a selection of variables was performed, was its result confirmed by a second selection of variables that was performed by a different procedure?

  10. Are predictions of the dependent variable made on the basis of extrapolated data?

Acknowledgments

Translated from the original German by Ethan Taub, MD

Footnotes

Conflict of interest statement

The authors declare that they have no conflict of interest as defined by the guidelines of the International Committee of Medical Journal Editors.

References

1. Fahrmeir L, Kneib T, Lang S. Regression - Modelle, Methoden und Anwendungen. 2nd edition. Berlin, Heidelberg: Springer; 2009.

2. Bortz J. Statistik für Human- und Sozialwissenschaftler. 6th edition. Heidelberg: Springer; 2004.

3. Selvin S. Epidemiologic Analysis. Oxford: Oxford University Press; 2001.

4. Bender R, Lange S. Was ist ein Konfidenzintervall? Dtsch Med Wschr. 2001;126.

5. Hill AB. The environment and disease: association or causation? Proc R Soc Med. 1965;58:295–300.

6. Carpenter JR, Kenward MG. Missing Data in Randomised Controlled Trials: A Practical Guide. Birmingham: National Institute for Health Research; 2008. Publication RM03/JH17/MK. http://www.pcpoh.bham.ac.uk/publichealth/methodology/projects/RM03_JH17_MK.shtml

7. EMEA. Points to consider on multiplicity issues in clinical trials. www.emea.europa.eu/pdfs/human/ewp/090899en.pdf

8. Horn M, Vollandt R. Multiple Tests und Auswahlverfahren. Stuttgart: Gustav Fischer Verlag; 1995.



Regression analysis is a family of statistical tools that can help sociologists better understand and predict the way that people act and interact. Regression analysis is used to build mathematical models to predict the value of one variable from knowledge of another. Although statistical methods of correlation offer researchers techniques to help them better understand the degree to which two variables are consistently related, such knowledge alone is typically insufficient to predict behavior. Simple linear regression allows the value of one dependent variable to be predicted from the knowledge of one independent variable. Multiple linear regression can be used to develop models to predict the value of a dependent variable from the knowledge of the value of more than one independent variable.

Research Methods

Overview

Regression analysis is a family of statistical tools that can help sociologists better understand the way that people act and interact in groups and society. Regression analysis allows researchers to build mathematical models that can be used to predict the value of one variable from knowledge of another. There are a number of specific regression techniques that can be used by sociologists to model real-world behavior. These include:

* Simple linear regression analysis, which allows the modeling of two variables, one independent and one dependent

* Multiple linear regression analysis, which allows the modeling of two or more independent variables to predict one dependent variable

* Multiple curvilinear regression, where the relationship between variables is nonlinear (e.g., quadratic)

* Multivariate linear regression, which allows the simultaneous examination of several dependent variables

* Multivariate polynomial regression, which can be used to account for nonlinear relationships

The most commonly used of these techniques, simple linear regression and multiple linear regression, are discussed in the following sections.

Simple Linear Regression

Statistics offers sociology researchers a number of correlation techniques to help them better understand the degree to which two variables are consistently related. For example, correlation can help one understand the relationship between educational level and income level. Correlation coefficients express the degree of relationship between two variables as a value between –1 and +1, with the absolute value indicating the strength of the relationship. A correlation of ±1.0 shows that the variables are completely related and that a change in the value of one variable signifies a corresponding change in the other, while a correlation of 0.0 shows that there is no relationship between the two variables and that knowing the value of one variable tells us nothing about the value of the other.

In addition to signifying the degree of relationship between two variables, a correlation coefficient also shows how the two variables are related. A positive correlation means that as the value of one variable increases, so does the value of the other variable. A negative correlation, on the other hand, means that as the value of one variable increases, the value of the other variable decreases. An example of a high positive correlation would be the relationship of weight to age for healthy children: the older the child is, the more he or she will probably weigh. An example of a high negative correlation would be the relationship between temperature and the likelihood of snow: the higher the temperature is, the less likely it is to snow.

However, helpful as it is to know the correlation between two variables, that knowledge alone does not necessarily give us sufficient information to predict behavior. For example, although we may know that people who do their grocery shopping when they are hungry are more likely to buy impulse items than those who are not, we cannot accurately predict that a given person, just because he or she is hungry, will purchase unneeded items at the grocery store. Merely knowing that there is a positive correlation between these two variables is insufficient to allow us to predict whether a given person or type of person is more likely to exhibit this behavior. In situations where one needs to predict the value of one variable from knowledge of another variable based on the data, one needs to use simple linear regression.

Simple linear regression is a bivariate statistical tool that allows the value of one dependent variable to be predicted from the knowledge of one independent variable. Examples of sociological applications of simple linear regression include predicting the crime rate from population density, voting behavior in an election from voting behavior in the primary, and relative income based on gender. The pairs of data used in linear regression analysis are typically graphed on a scatter plot that shows the values of the points for two-variable numerical data. A line of best fit is superimposed on the scatter plot and used to predict the value of the dependent variable based on different values of the independent variable. A sample scatter plot with line of best fit is shown in Figure 1.

The equation for the regression line is the statistical equivalent of the linear slope-intercept equation from basic algebra, y = mx + b. For the population, the model is written:

y = β0 + β1x + ε

where

y = the dependent variable

β0 = the population y-intercept

β1 = the population slope

ε = the error term (the part of y not accounted for by x)

The fitted regression line, ŷ = b0 + b1x, substitutes the sample estimates b0 and b1 for the population parameters and yields ŷ, the predicted value of y. The error term does not appear in the prediction equation, because its expected value is zero.
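Under this model, the least-squares estimates of the slope and intercept have simple closed forms; the short sketch below computes them from first principles for a pair of invented samples (b1 and b0 denote the sample estimates of β1 and β0).

```python
# Sketch: least-squares estimates of slope (b1) and intercept (b0),
# computed from their closed forms. Data are invented.
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])  # independent variable
y = np.array([8.2, 8.9, 9.1, 8.4, 7.6, 6.8])    # dependent variable

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

print(f"b0 = {b0:.2f}, b1 = {b1:.2f}")  # coefficients of the fitted line ŷ = b0 + b1x
```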

For example, a sociologist interested in the behavior of small groups might want to determine whether or not the efficacy of the decisions made in small groups could be predicted from the number of people in the group. Although larger group size could mean that there are more ideas, more contribution to the thinking process, and a larger potential for synergistic thinking, a larger group could also mean that more time would be required to reach a decision, the competition of ideas could lead to confusion, and coalitions could form within the group and make it harder to resolve disagreements. A predictive model for group size versus efficacy of decision making could be developed by setting up an experiment that compared the efficacy of decision making on the same problem for groups of various sizes. The slope of the line of best fit passing through the data points on the scatter plot could be mathematically calculated, using these data points to determine the equation of the simple regression line. This equation could then be used by the sociologist to recommend optimal group size for similar types of decisions or projects based on the single variable of number of group members.

The problem with drawing a line of best fit through a scatter plot, of course, is that unless all the pairs of data fall on one straight line, it is possible to draw multiple lines through a data set. The question faced by the researcher is how to determine which of these possible lines will yield the best predictions of the dependent variable from the independent variable. This can be accomplished mathematically through residual analysis.

In regression analysis, a residual is defined as the difference between the actual and predicted y values, y − ŷ. To find the line of best fit, the distances between the points on the scatter plot and the line must be made as small as possible; this is accomplished by choosing the line that minimizes the sum of the squares of the residuals. By examining the residuals, a researcher can see how well the regression line fits the existing data and can thereby estimate how well it will predict future data.
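The residuals and their sum of squares, which the line of best fit minimizes, can be computed directly; the sketch below reuses the invented data from the previous example.

```python
# Sketch of residual analysis: residuals y − ŷ and their sum of squares,
# which the least-squares line minimizes. Data are invented.
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
y = np.array([8.2, 8.9, 9.1, 8.4, 7.6, 6.8])

b1, b0 = np.polyfit(x, y, 1)
residuals = y - (b0 + b1 * x)                # y − ŷ for each observation
print(residuals.round(2))                    # should scatter around zero
print(f"SSE = {np.sum(residuals ** 2):.3f}") # minimized by the fitted line
```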

Standard regression analysis techniques make several assumptions, including that the model is correctly specified and that the data are good. Unfortunately, the types of real-world data needed by sociologists tend to be messy, and as a result these assumptions are rarely met in practice. Many factors can contribute to problems in regression analysis, including the use of an incorrect functional form for the regression function, correlation among the variables, nonconstant variance, sample data with outliers, and multicollinearity among subsets of the input variables such that they exhibit nearly identical linear relations. If one or more of these problems occurs, the entire analysis may be invalidated. This risk is compounded by the fact that standard statistics give few indications of when these problems have occurred. Although there are other indicators and potential remedies for these situations, they must be used...
