Posted: September 18th, 2017
Use the article below, which is attached:
Bosch, P.R., Poloni, J., Thornton, A., & Lynskey, J.V. (2012). The heart rate response to Nintendo Wii Boxing in young adults. Cardiopulmonary Physical Therapy Journal, 23(2), 13-18, 29.
Review the data analysis, results, and discussion sections, with emphasis on the t-test reported. Then respond to the following questions (copy and paste the questions into your Word document; use a different font for questions and answers).
Statistical Tests of Relationship: Correlations and Regression
RNRS 327 Week 4
Correlational techniques are statistical tests used to examine relationships among variables (which represent concepts) and to make decisions about the strength of those relationships. The Pearson product-moment correlation coefficient (r) is the usual method for quantifying the relationship between two variables. To calculate r, there must be at least two measures on each subject in the study. Valid results may be obtained when the level of measurement is interval, or ordinal with a sufficient number of levels.
Selecting any test depends on certain assumptions.
Correlation coefficient ASSUMPTIONS:
* The sample should be independent and randomly selected.
* The appropriate level of measurement should be met (if at least one variable is nominal, use chi-square; if the lowest level of data is ordinal, use Spearman's rho; if both variables are interval/ratio, use Pearson's r).
* TWO variables MUST be compared, and a linear relationship must be present (you can always check with a scatterplot).
* If you wish to use Pearson's correlation coefficient, both variables should be NORMALLY DISTRIBUTED in the population.
To select the best correlational test:
1) Do you have one independent sample?
2) Are you looking for a linear relationship between two variables in this sample?
3) What is the LOWEST level of measurement for each of your variables?
Nominal: select chi-square
Ordinal: select Spearman’s
Interval/Ratio: select Pearson’s
If your data do NOT meet the test's assumptions, then you need to select a test at a lower level of measurement.
For example, if you wish to use Pearson’s correlation coefficient but your sample is only 20 subjects and is not normally distributed, you need to use the Spearman’s correlation coefficient.
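To make this concrete, here is a minimal sketch comparing the two coefficients using SciPy (an assumption on my part; in this course the same output would come from an SPSS printout). The age and anxiety values are invented for illustration:

```python
# Illustrative only: Pearson's r vs. Spearman's rho on a small sample.
# Assumes SciPy is installed; the data below are made up for demonstration.
from scipy.stats import pearsonr, spearmanr

age =     [65, 70, 72, 75, 78, 80, 82, 85, 88, 90]
anxiety = [30, 34, 33, 38, 40, 41, 45, 44, 48, 50]

r, p_r = pearsonr(age, anxiety)        # parametric: assumes normality
rho, p_rho = spearmanr(age, anxiety)   # non-parametric: uses ranks only

print(f"Pearson r = {r:.3f} (p = {p_r:.4f})")
print(f"Spearman rho = {rho:.3f} (p = {p_rho:.4f})")
```

With a small sample like this (n = 10), Spearman's rho is the safer report unless normality has been checked.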
The Correlation Question:
Data Requirements: At least 2 measures on each subject.
Possible scores and meaning: Correlation coefficients may range from -1 to +1.
Example of a negative (inverse) relationship: the greater the job satisfaction of the employees, the lower the rate of staff turnover in the employing organization.
Example of a positive relationship: the greater the job satisfaction of the employee, the greater the commitment of the employee to the employing organization.
Null hypothesis (H0): There is no relationship between age and pre-operative anxiety in geriatric patients scheduled for orthopedic surgery.
Alternate or Research Hypothesis (directional) (H1): The older the patient, the greater the pre-operative anxiety the geriatric patient will experience when scheduled for orthopedic surgery.
Before running the correlation, check the data for normality of distributions by running “frequencies” on the variables you will use and doing “histograms” on each variable to see if they approximate a normal curve.
If variables are not normally distributed (severely skewed), or with small samples (< 30), or with an ordinal variable that can't be treated as interval, we will need to use a non-parametric test. The non-parametric analog of the Pearson product-moment correlation coefficient is Spearman's rank-order correlation coefficient.
We need good variability in both variables to prevent the effect of restricted range. So if the data are greatly skewed, the Spearman rho should be used in place of the Pearson r.
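A minimal sketch of these pre-correlation checks using SciPy (an assumption; the same information can be read off SPSS frequencies and histogram output). The income values are invented and deliberately right-skewed:

```python
# Checking distribution shape before choosing Pearson vs. Spearman.
# Assumes SciPy; the skewed sample below is made up for illustration.
from scipy.stats import skew, shapiro

income = [21, 23, 24, 25, 26, 27, 28, 30, 95, 120]  # two extreme values

print(f"skewness = {skew(income):.2f}")   # well above 0: right-skewed
stat, p = shapiro(income)                 # Shapiro-Wilk test of normality
print(f"Shapiro-Wilk p = {p:.4f}")        # p < .05: reject normality

# With n < 30 and a severely skewed variable, fall back to Spearman's rho.
use_spearman = (len(income) < 30) or (p < 0.05)
print("Use Spearman:", use_spearman)
```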
If there is significant Measurement Error, this can act to reduce the magnitude of the correlation and therefore produce erroneous results.
If all the assumptions are satisfied, run the correlation.
Examine your printout carefully. Notice that some of the correlations in the matrix are marked with one or more asterisks (*). This indicates a significant relationship between the two variables examined. The more asterisks, the smaller the p-value (typically * for p < .05 and ** for p < .01).
Interpreting the Strength of a Correlation Coefficient
To estimate the strength of a correlation coefficient for physical or biological measures (these have a decreased chance of error compared with psychological measures):
.00 – .25 little, if any
.26 – .49 low
.50 – .69 moderate
.70 – .89 high
.90 – 1.00 very high
For psychosocial measures (these have an increased potential for error):
below .20 weak
.20 – .49 moderate
.50 and above strong
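As an illustration, the cut-offs above can be written as a small helper function. The function name is hypothetical, and the "little, if any" label for physical/biological values below .26 is my addition; the rest of the cut-offs come straight from the tables:

```python
# Hypothetical helper encoding the strength guidelines above.
def correlation_strength(r, measure="physical"):
    """Label the strength of a correlation coefficient r."""
    magnitude = abs(r)                  # sign gives direction, not strength
    if measure == "physical":           # physical/biological measures
        if magnitude >= 0.90: return "very high"
        if magnitude >= 0.70: return "high"
        if magnitude >= 0.50: return "moderate"
        if magnitude >= 0.26: return "low"
        return "little, if any"
    else:                               # psychosocial measures
        if magnitude >= 0.50: return "strong"
        if magnitude >= 0.20: return "moderate"
        return "weak"

print(correlation_strength(0.741))                  # high
print(correlation_strength(0.193, "psychosocial"))  # weak
```

Note that the sign of r is ignored when judging strength; it only tells you the direction of the relationship.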
The interpretation of the results has two parts. We will separate them to show what is done first and second:
Example of a partial interpretation: There is a relationship between education in years and life purpose and satisfaction (r = .193; p < 0.01).
Examples of a complete interpretation:
There is a weak, positive relationship between education in years and life purpose and satisfaction (r = .193; p < 0.01).
There is a strong, positive relationship between life purpose and satisfaction and self-confidence (r = .741; p < 0.01).
There is a moderate, negative correlation between life purpose and satisfaction and smoking history (r = -.209; p < 0.01). [So, the greater the person’s sense of life purpose and satisfaction, the less likely they were to ever have smoked].
Meaningfulness of r²
r² is the amount of shared variance between the two variables. For example, if r = .2, then r² = .2² = .04 = 4%.
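The shared-variance arithmetic can be checked in a couple of lines (plain Python, no libraries needed):

```python
# r squared = proportion of variance the two variables share.
r = 0.2
r_squared = r ** 2
print(f"r = {r}, r^2 = {r_squared:.2f} = {r_squared:.0%} shared variance")

# Applying the same idea to the interpretation examples above:
print(f"r = .741 -> r^2 = {0.741 ** 2:.3f}, about 54.9% shared variance")
```

This is why a "significant" r of .193 is of limited practical importance: it explains under 4% of the variance.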
Simple Linear Regression
Regression is a bivariate technique used to analyze relationships between variables and make predictions about values of variables. One independent variable (X) is used to predict a dependent variable (Y). For any value of X we can predict Y (if there is a linear relationship).
Y′ = a + bX
Y′ = predicted value of variable Y
a = intercept constant
b = regression coefficient (slope of the line)
X = actual value of variable X
The regression equation is based on the equation for a straight line!
Regression analysis solves for a and b, so for any value of X, a prediction about Y may be made. The regression equation is the formula for the best-fitting straight line to characterize the linear relationship between X and Y.
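A minimal sketch of solving for a and b with SciPy's linregress (an assumption; SPSS reports the same constants in its regression output). The SAT and GPA numbers are invented for illustration:

```python
# Fit Y' = a + bX by least squares, then predict Y' for a new X.
# Assumes SciPy; the SAT/GPA data are made up for demonstration.
from scipy.stats import linregress

sat = [400, 450, 500, 550, 600, 650, 700]
gpa = [2.0, 2.4, 2.5, 2.9, 3.1, 3.4, 3.6]

result = linregress(sat, gpa)
a, b = result.intercept, result.slope   # intercept constant and slope

predicted = a + b * 575                 # Y' for a student with SAT = 575
print(f"Y' = {a:.3f} + {b:.5f}X")
print(f"Predicted GPA at SAT 575: {predicted:.2f}")
print(f"r = {result.rvalue:.3f}")       # lower r means larger prediction errors
```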
Examples of Regression:
Value of simple regression: e.g., using SAT scores to predict who will have high enough GPAs to be successful in nursing school. The lower the correlation between the variables, the higher the errors of prediction will be.
Simple regression is the basis for multiple regression.
Most nursing phenomena are essentially multivariate in nature: they concern themselves with more than the relationship between two variables. It’s too easy to draw erroneous conclusions from bivariate statistics.
The purpose of multiple regression is to predict one dependent variable (criterion variable) from two or more independent variables (predictor variables).
Y′ = a + b1X1 + b2X2 + … + bkXk
Y′ = predicted value for variable Y
a = intercept constant
k = number of independent (predictor) variables
b1 to bk = regression coefficients for the k variables
X1 to Xk = scores or values on the k independent variables
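A sketch of fitting this equation with ordinary least squares in NumPy (an assumption; SPSS would report the same coefficients in its regression table). The age, exercise, and heart-rate numbers are invented:

```python
# Solve Y' = a + b1*X1 + b2*X2 by least squares. Assumes NumPy.
import numpy as np

x1 = np.array([25, 30, 35, 40, 45, 50, 55, 60])  # age (years)
x2 = np.array([5, 3, 6, 2, 4, 1, 2, 1])          # weekly exercise hours
y  = np.array([62, 66, 60, 70, 75, 72, 80, 82])  # resting heart rate

# Design matrix: a column of 1s for the intercept a, then X1 and X2.
X = np.column_stack([np.ones(len(y)), x1, x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
a, b1, b2 = coef

y_pred = a + b1 * 42 + b2 * 3   # Y' for a new subject: age 42, 3 hours/week
print(f"a = {a:.2f}, b1 = {b1:.3f}, b2 = {b2:.3f}")
print(f"Predicted resting HR: {y_pred:.1f}")
```

In this made-up sample, b1 comes out positive (heart rate rises with age) and b2 negative (more exercise, lower heart rate).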
Independent variables can be nominal, ordinal or interval. If nominal or ordinal, need to use a special procedure called “dummy coding.”
Dependent variable must be interval.
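A sketch of what dummy coding looks like in practice (the variable and category names are invented): a nominal variable with k categories is replaced by k - 1 binary indicator columns, with one category held out as the reference.

```python
# Dummy coding a nominal predictor: 3 categories -> 2 binary columns.
smoking_status = ["never", "former", "current", "never", "current", "former"]

# "never" is the reference category, so it gets no column of its own;
# a subject with 0 in both columns is a never-smoker.
former  = [1 if s == "former"  else 0 for s in smoking_status]
current = [1 if s == "current" else 0 for s in smoking_status]

print(former)   # [0, 1, 0, 0, 0, 1]
print(current)  # [0, 0, 1, 0, 1, 0]
```

Each dummy column then enters the regression equation as an ordinary X variable.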
When we run a multiple regression, we check for:
Multicollinearity: interrelatedness of the independent variables in a multiple regression analysis. Multicollinearity is problematic if Pearson's r is greater than .85 for any two independent variables.
The assumptions for multiple regression are:
(1) the sample was drawn from a normal distribution
(2) random sampling techniques were used
(3) the data are at the interval level
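The multicollinearity screen described above can be sketched as follows (assumes NumPy; the predictor data are invented, with "weight" deliberately constructed to track "age" closely):

```python
# Screen every pair of predictors for multicollinearity (|r| > .85).
import numpy as np
from itertools import combinations

predictors = {
    "age":      [25, 30, 35, 40, 45, 50, 55, 60],
    "weight":   [60, 64, 67, 71, 74, 78, 81, 85],  # tracks age closely
    "exercise": [5, 3, 6, 2, 4, 1, 2, 1],
}

for (name1, v1), (name2, v2) in combinations(predictors.items(), 2):
    r = np.corrcoef(v1, v2)[0, 1]          # Pearson's r for this pair
    flag = "  <-- multicollinearity risk" if abs(r) > 0.85 else ""
    print(f"{name1} vs {name2}: r = {r:.2f}{flag}")
```

When a pair is flagged, the usual remedy is to drop one of the two predictors (or combine them) before fitting the model.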