Posted: September 18th, 2017

Use the article below, which is attached.

Bosch, P.R., Poloni, J., Thornton, A., & Lynskey, J.V. (2012). The heart rate response to Nintendo Wii Boxing in young adults. Cardiopulmonary Physical Therapy Journal, 23(2), 13-18, 29.

Review the data analysis, results, and discussion sections, with emphasis on the t-test reported. Then respond to the following questions (copy and paste the questions into your Word document; use a different font for questions and answers).

- What type of t-test did the researchers use in this study and for what variables?
- Why was this particular type of t-test used?
- Was the t-test reported appropriate to the level of measurement of the variables?
- What were the p values and were the results statistically significant?
- Were the results appropriately interpreted by the researchers?

*Statistical Tests of Relationship: Correlations and Regression*

*RNRS 327 Week 4*

Inferential Statistics—Parametric

Correlational techniques are statistical tests that are used to examine relationships among variables (that represent concepts) and to make decisions about the strength of these relationships. The Pearson product moment correlation coefficient (*r*) is the usual method by which the relationship between two variables is quantified. To calculate *r* there must be at least two measures on each subject in the study. Valid results may be obtained when the level of measurement is interval, or ordinal with a sufficient number of levels.

Selecting any test depends on certain assumptions.

Correlation coefficient ASSUMPTIONS:

* The sample should be independent and randomly selected

* The appropriate level of measurement should be met (if at least one variable is nominal, use chi-square; if the lowest level of data is ordinal, use Spearman’s; if both variables are interval/ratio, use Pearson’s)

* TWO variables MUST be compared, and a linear relationship must be present (you can always check with a scatterplot)

* If you wish to use Pearson’s correlation coefficient, both variables should be NORMALLY DISTRIBUTED in the population

To select the best correlational test:

1) Do you have one independent sample?

2) Are you looking for a linear relationship between two variables in this sample?

3) What is the LOWEST level of measurement for each of your variables?

Nominal: select chi-square

Ordinal: select Spearman’s

Interval/Ratio: select Pearson’s

If your data does NOT meet the test’s assumptions, then you need to select a test at a lower level.

For example, if you wish to use Pearson’s correlation coefficient but your sample is only 20 subjects and is not normally distributed, you need to use the Spearman’s correlation coefficient.

The Correlation Question:

- Is there a relationship between values in one variable (anxiety) and another variable (patient age) when single samples are examined on these variables?
- Is correlation the same as causation? Why or why not? Think through these relationships:

- The more police cars there are at the scene of an accident, the greater the injuries and vehicle damage.
- Depression is related to morbidity in chronic dialysis patients.
- There is a relationship between low cholesterol level and cancer.

Data Requirements: At least 2 measures on each subject.

Possible scores and meaning: Correlation coefficients may range from -1 to +1.

Examples:

- job satisfaction and turnover (negative correlation)

Hypothesis:

The greater the job satisfaction of the employees, the lower the rate of staff turnover in the employing organization.

- job satisfaction and commitment to the organization (positive correlation)

Hypothesis:

The greater the job satisfaction of the employee, the greater the commitment of the employee to the employing organization.

Null hypothesis (Ho): There is no relationship between age and pre-operative anxiety in geriatric patients scheduled for orthopedic surgery.

Alternate or Research Hypothesis (directional) (H1): The older the patient, the greater the pre-operative anxiety experienced by geriatric patients scheduled for orthopedic surgery.

Before running the correlation, check the data for normality of distributions by running “frequencies” on the variables you will use and doing “histograms” on each variable to see if they approximate a normal curve.
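Alongside the histograms, a quick numeric check of skewness can help. The helper below is a hypothetical sketch (the data and the rule-of-thumb cutoff of |skew| > 1 for serious skew are illustrative, not from the article):

```python
import math

def skewness(values):
    """Fisher-Pearson coefficient of skewness (population form).
    Roughly 0 for symmetric data; positive for a long right tail."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return sum(((v - mean) / sd) ** 3 for v in values) / n

# Hypothetical anxiety scores with a long right tail
scores = [1, 1, 2, 2, 3, 10]
print(round(skewness(scores), 2))  # well above 1 -> severely right-skewed

# A symmetric variable, for comparison, comes out near 0
print(skewness([1, 2, 3, 4, 5]))   # 0.0
```

A value well above 1 (or below -1) agrees with what a lopsided histogram shows and argues for the non-parametric alternative.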

If variables are not normally distributed (severely skewed), or with small samples (<30), or with an ordinal variable that can’t be treated as interval, we will need to use a non-parametric test. The non-parametric analog for the Pearson product moment correlation coefficient is Spearman’s rank order correlation coefficient.

We need good variability in both variables to prevent the effect of restricted range. So if the data are greatly skewed, the Spearman *rho* should be used in place of the Pearson *r*.

If there is significant Measurement Error, this can act to reduce the magnitude of the correlation and therefore produce erroneous results.

If all the assumptions are satisfied, run the correlation.

Examine your printout carefully. Notice that some of the correlations in the matrix are marked with one or more *. This indicates that there is a significant relationship between the two variables being examined. Usually, the greater the number of *’s, the smaller the p value, i.e., the more significant the correlation between the variables.

Interpreting the Strength of a Correlation Coefficient

To estimate the strength of a correlation coefficient for physical or biological measures (these have a decreased chance of error compared with psychological measures):

0-.25 weak

.26 – .49 low

.50 – .69 moderate

.70 – .89 high

.90 – 1.00 very high

For psychosocial measures (these have an increased potential for error):

<.2 weak

.2 – .49 moderate

.5 and above strong
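The two scales above can be captured in a small helper function. This is a hypothetical sketch (the function name and argument are made up; the boundaries simply encode the tables above, applied to the absolute value of *r* since strength is separate from direction):

```python
def interpret_strength(r, measure="physical"):
    """Label the strength of a correlation coefficient using the
    rule-of-thumb scales for physical/biological vs psychosocial
    measures. Sign (direction) is reported separately."""
    a = abs(r)
    if measure == "physical":
        if a <= .25:
            return "weak"
        if a <= .49:
            return "low"
        if a <= .69:
            return "moderate"
        if a <= .89:
            return "high"
        return "very high"
    else:  # psychosocial
        if a < .2:
            return "weak"
        if a < .5:
            return "moderate"
        return "strong"

print(interpret_strength(.741, "psychosocial"))   # strong
print(interpret_strength(-.209, "psychosocial"))  # moderate
```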

The interpretation of the results has two parts. We will separate them to show what is done first and second:

- Reporting (in text) the significant findings

Example of a partial interpretation: There is a relationship between education in years and life purpose and satisfaction (*r* = .193; p < 0.01).

- Reporting the strength of the relationship for correlations that are significant.

Examples of a complete interpretation:

There is a weak, positive relationship between education in years and life purpose and satisfaction (*r* = .193; p < 0.01).

There is a strong, positive relationship between life purpose and satisfaction and self-confidence (*r* = .741; p < 0.01).

There is a moderate, negative correlation between life purpose and satisfaction and smoking history (*r* = -.209; p < 0.01). [So, the greater the person’s sense of life purpose and satisfaction, the less likely they were to ever have smoked].

Meaningfulness of *r*^{2}

This is the amount of shared variance between the two variables. For example, if *r* = .2, then *r*^{2} = .2^{2} = .04, so the variables share 4% of their variance.

*Regression*

*Simple Regression*

This is a bivariate technique used to analyze relationships between variables and make predictions about values of variables. One independent variable (X) is used to predict a dependent variable (Y). For any value of X we can predict Y (if there is a linear relationship).

Assumptions:

- The sample must be representative of the population.
- The variables being correlated must each have a normal distribution.
- For every value of X, the distribution of Y scores must have approximately equal variability (assumption of homoscedasticity).
- The relationship between X & Y must be linear.

Y’ (predicted value of variable Y) = *a* (intercept constant) + *b* (regression coefficient, the slope of the line) × X (actual value of variable X)

The regression equation is based on the equation for a straight line!

Regression analysis solves for *a* and *b*, so for any value of X, a prediction about Y may be made. The regression equation is the formula for the best-fitting straight line to characterize the linear relationship between X and Y.

Examples of Regression:

Value of simple regression: e.g., using SAT scores to predict who will have high enough GPAs to be successful in nursing school. The lower the correlation between the variables, the higher the errors of prediction will be.

Simple regression is the basis for multiple regression.

*Multiple Regression*

Most nursing phenomena are essentially multivariate in nature: they concern themselves with more than the relationship between two variables. It’s too easy to draw erroneous conclusions from bivariate statistics.

The purpose of multiple regression is to predict one dependent variable (criterion variable) from two or more independent variables (predictor variables).

Y’ = *a* + *b*_{1}X_{1} + *b*_{2}X_{2} + … + *b*_{k}X_{k}

Y’ = predicted value for variable Y

*a* = intercept constant

*k* = number of independent (predictor) variables

*b*_{1} to *b*_{k} = regression coefficients for the independent variables

X_{1} to X_{k} = scores or values on the independent variables

Independent variables can be nominal, ordinal, or interval. If nominal or ordinal, you need to use a special procedure called “dummy coding.”

Dependent variable must be interval.

When we run a multiple regression we are looking for:

- Collectively, how well did the independent variables predict the dependent variable?
- Is the relationship between the independent variables and dependent variable statistically significant (F test, p value)?
- How strong is the relationship? What percentage of dependent variable is associated with independent variables?
- Which of the independent variables were more helpful? (The t-test and p value will indicate which ones are significant predictors. Must also look at standardized beta weights to judge how strong a predictor each variable is. Standardized betas are converted to Z scores so they are all on the same scale and can be compared.)

- Multiple correlation coefficient R = relationship between the DV and the combination of IVs
- R^{2} = variance accounted for by the predictors
- Adjusted R^{2} = adjusts for sample size; accounts for instability in small samples
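In practice a statistics package solves for the coefficients; as a sketch of what it does, the example below solves the normal equations (X′X)β = X′y directly in pure Python for two hypothetical predictors. All data are made up and generated from Y = 1 + 2X₁ + 3X₂ exactly, so the fit should recover those coefficients:

```python
# Hypothetical data generated from Y = 1 + 2*X1 + 3*X2 exactly
x1 = [0, 1, 2, 3]
x2 = [1, 0, 2, 1]
y  = [4, 3, 11, 10]

n = len(y)
# Design matrix with a leading column of 1s for the intercept a
X = [[1.0, v1, v2] for v1, v2 in zip(x1, x2)]
# Normal equations: (X'X) beta = X'y, solved for beta = [a, b1, b2]
XtX = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(3)]
       for i in range(3)]
Xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(3)]

def solve(A, v):
    """Gaussian elimination with partial pivoting for a small system."""
    A = [row[:] + [v[i]] for i, row in enumerate(A)]
    m = len(A)
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m + 1):
                A[r][c] -= f * A[col][c]
    beta = [0.0] * m
    for r in range(m - 1, -1, -1):
        beta[r] = (A[r][m] - sum(A[r][c] * beta[c]
                                 for c in range(r + 1, m))) / A[r][r]
    return beta

a, b1, b2 = solve(XtX, Xty)
print(round(a, 6), round(b1, 6), round(b2, 6))  # 1.0 2.0 3.0
```

Real output adds the F test, t-tests, standardized betas, R^{2}, and adjusted R^{2} on top of these raw coefficients.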

Entry Strategies:

- Simultaneous
- Hierarchical
- Stepwise
- Forward
- Backward

Multicollinearity: the interrelatedness of the independent variables in a multiple regression analysis. Problematic if Pearson’s *r* is greater than .85 for any two independent variables.

To test:

- Examine correlation matrix of variables before running multiple regression analysis.
- Examine the tolerance (proportion of the variance in a variable that is not accounted for by the other independent variables).
- Tolerance is 1 − R^{2} when each independent variable is treated as a dependent variable and regressed on the other independent variables. The closer to 1 the better, but it must at least exceed .01. The variance inflation factor (VIF) is the reciprocal of tolerance (1/tolerance), so variables with high tolerances have small variance inflation factors and vice versa.

Residual Plots

- Normal Curve
- Inferring to a real or abstract (theoretical) distribution
- Parametric Statistical Analyses
- Assumptions

(1) the sample was drawn from a normal distribution

(2) random sampling techniques

(3) interval level data
