Posted: June 10th, 2015

MITIGATING DEMAND UNCERTAINTY THROUGH SUPPLY CHAIN STRATEGIES: THE CASE OF FOOD SMES IN HAJJ PHENOMENON

RESULTS CHAPTER

5.1 Introduction

This chapter presents the results obtained from the study's semi-structured interviews, online survey and document review. The qualitative data was gathered from interviews with 12 CEOs working in different SMEs across Saudi Arabia, while the quantitative data was obtained from an online survey of 239 respondents from SMEs in the food sector across the Kingdom of Saudi Arabia (KSA). The primary data was collected to provide evidence on how relevant supply chain strategies can be used to mitigate the demand uncertainty of KSA's food small and medium-sized enterprises (SMEs) during the Hajj phenomenon. The chapter is divided into seven sections.

The first section (data analysis and screening) describes how the data was analysed and used to test the hypothesized structural relationships between the various constructs, including how missing values were analysed and how outliers were detected. The second section presents the demographic information of the respondents, namely their education levels, occupation, work experience and company location, while the third section presents the results for the control variables: firm size, firm age and firm production type. The fourth section presents the descriptive statistics as well as the procedure used to test the normality of the research constructs. The fifth section describes the processes and measures for structural model assessment and hypothesis testing using PLS-SEM, including the coefficient of determination (R2), path coefficients and predictive relevance. The sixth section presents the results on the hypothesized relationships between the various constructs (hypothesis testing results), based on the path coefficients and their levels of significance. Lastly, a summary of the chapter is presented.

5.2 Data analysis and screening

Hair et al. (2010) recommend that all researchers undertake data examination prior to data analysis. Tabachnick and Fidell (2007) define data examination as the process of detecting and tackling missing values and outliers, and testing the data normality assumption. Accordingly, this section describes the processes used in screening and examining the data: the analysis of missing data, the methods used to detect outliers, and the procedures undertaken in testing data normality.

5.2.1 Missing data analysis

According to Hair et al. (2010), missing values occur when study participants do not answer one or more of the interview or survey questions. They further subdivide missing data into two categories: ignorable missing data and non-ignorable missing data. While ignorable missing data does not require adjustments, non-ignorable missing data requires the researcher to make adjustments. Hair et al. (2010) recommend that missing data below 10% can be ignored. The current study used the SPSS software to check for missing data and detect outliers, and found the missing data to be below 10% for all variables, hence ignorable. According to Hair et al. (2010), when the percentage of missing data is between 10% and 15%, the researcher must delete some variables as a modification strategy for reducing bias. However, the SPSS analysis did not reveal such percentages, so no deletions were needed.

Also, when examining missing data, it is important to check whether the data is missing completely at random (MCAR) or missing at random (MAR) (Graham, 2009). In order to determine missing data patterns and detect any systematic errors, Little's MCAR test was undertaken in SPSS, using the chi-square statistic, standard deviation, significance level and p-values as the test variables. The test's null hypothesis was that the data are missing completely at random. Little's MCAR test results (standard deviation = 8.626; variance = 74.410; sig. 1; p > 0.05) show that the current study did not have any systematic error. As a result, the missing data could be treated, and for this study the mean was used as a substitute for missing values, following Tabachnick and Fidell's (2007) recommendation that mean substitution is the most common and suitable way of imputing missing values.
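The screening and mean-substitution steps above can be sketched as follows. This is a minimal illustration on a small hypothetical dataset (the actual analysis was run in SPSS on the 239 survey responses); the variable names II1 and SI1 are used only as examples.

```python
import numpy as np

# Hypothetical 7-point survey responses; np.nan marks a skipped question.
# Rows = respondents, columns = (II1, SI1).
data = np.array([
    [5, 4], [6, 5], [np.nan, 5], [4, 5], [7, 6], [5, 4],
    [6, np.nan], [5, 5], [4, 6], [6, 4], [7, 5], [5, 6],
], dtype=float)

# Percentage of missing values per variable.
missing_pct = np.isnan(data).mean(axis=0) * 100

# Hair et al. (2010): missing data is ignorable when every variable is below 10%.
ignorable = bool((missing_pct < 10).all())

# Mean substitution (Tabachnick and Fidell, 2007): replace each missing
# value with the column mean computed from the observed values.
col_means = np.nanmean(data, axis=0)
imputed = np.where(np.isnan(data), col_means, data)
```

Here each column has 1 of 12 values missing (about 8.3%), so the data would be treated as ignorable and imputed with the column means.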

5.2.2 Detecting outliers

Outliers are defined as observation points that are distant from the rest of the observations, arising from experimental errors or measurement variability (Hair et al. 2010, p. 64). While outliers may sometimes occur accidentally, their presence indicates either that there is a measurement error or that the study population's distribution is heavy-tailed (Cousineau and Chartier, 2010). Where outliers result from measurement error, researchers are advised to use statistical tools to detect them or to remove them completely (Cousineau and Chartier, 2010). On the other hand, outliers that arise because the study population's distribution is heavy-tailed indicate a distribution with high kurtosis, and need to be handled carefully, especially when using statistical tools that assume a normal data distribution (Cousineau and Chartier, 2010).

Since outliers can indicate faulty values, wrong research procedures or invalid theories, they should either be corrected, removed or retained depending on their magnitude (Leys et al. 2013). However, a researcher cannot decide how to deal with outliers without first detecting their presence (Leys et al. 2013). Failure to identify and correct outliers could distort the statistical testing process and compromise the entire data analysis and results (Larson-Hall, 2009). Therefore, in order to enhance the reliability and validity of the current study, the researcher tested for both univariate and multivariate outliers. According to Leys et al. (2013), univariate outliers are detected by converting all the data into standardized scores and flagging cases beyond either 2 or 2.5 standard deviations around the mean, depending on the researcher's perspective and research situation. In the current study, a cut-off of 3 standard deviations was used, since a sample size greater than 80 requires a cut-off greater than 2.5 (Hair et al. 2010).

Multivariate outliers, on the other hand, were detected using Mahalanobis D2 analysis, undertaken through SPSS regression analysis. Cases were regarded as multivariate outliers if their D2 probability was less than or equal to 0.001 (p(D2) ≤ 0.001). According to Tabachnick and Fidell (2007), data collected from a sample with a wide range of characteristics is likely to yield multivariate outliers. In the current study, multivariate outliers were expected because the data was gathered from firms of different sizes, ages and production types.
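Both detection rules can be illustrated with a short sketch on synthetic data (the study itself used SPSS): standardized scores with the |z| > 3 cut-off for univariate screening, and Mahalanobis D2 with a chi-square probability of 0.001 or less for multivariate screening. The data below is purely hypothetical, with one extreme respondent appended deliberately.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical scores for two constructs on a 7-point scale.
X = rng.normal(loc=5, scale=1, size=(100, 2))
X = np.vstack([X, [12.0, -1.0]])  # an obvious multivariate outlier (index 100)

# Univariate screening: standardize and flag |z| > 3
# (the cut-off used in the study for samples larger than 80).
z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
univariate_outliers = np.where(np.abs(z) > 3)[0]

# Multivariate screening: Mahalanobis D^2 compared against the
# chi-square distribution; flag cases with p <= 0.001.
diff = X - X.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)
p_values = 1 - stats.chi2.cdf(d2, df=X.shape[1])
multivariate_outliers = np.where(p_values <= 0.001)[0]
```

The appended case is flagged by both rules; in the study, flagged cases were retained because PLS-SEM is robust to such deviations, as discussed below.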

As mentioned earlier, there are three ways of dealing with outliers: removing, retaining or correcting them (Leys et al. 2013). Larson-Hall (2009) advises that outliers should be retained as long as they represent the population or do not diverge extremely from the normal distribution. The current study used the PLS-SEM data analysis method, which is known for its insensitivity to departures from normality. Therefore, the outliers were retained, because the statistical technique employed is robust to data that deviates from the normal distribution.

5.2.3 Examining the Normality of the data

Most statistical procedures, including regression, analysis of variance, t-tests and correlation, rest on the assumption that the data is normally distributed (Ghasemi and Zahediasl, 2012). In statistical data analysis, the normality assumption is vital, especially when formulating variables' reference intervals. According to Ghasemi and Zahediasl (2012), data for which the normality assumption is invalid cannot provide reliable, accurate and valid inferences about the research phenomenon; hence the assumption needs to be taken seriously. While sample sizes greater than 30 are less prone to violating the normality assumption, small samples of below 30 could have serious distribution issues that may affect the study's reliability and validity (Pallant, 2013). Further, the central limit theorem holds that the sampling distribution of the mean will be approximately normal (irrespective of the shape of the data) when the sample size is above 30 (Ghasemi and Zahediasl, 2012).

However, regardless of the Gaussian assumption and the central limit theorem, it is always advisable to undertake a normality test that determines how seriously the data deviates from normality (Ghasemi and Zahediasl, 2012). Normality testing can be performed numerically or graphically through the SPSS software (Pallant, 2013). The current study used kurtosis and skewness tests to determine how normally the data was distributed.

Skewness represents the asymmetry of the distribution, while kurtosis measures the flatness or peakedness of the distribution relative to the normal distribution (Hair et al. 2010). When assessing distribution normality using kurtosis, normally distributed data should have a kurtosis statistic of zero. When the kurtosis statistic is greater than zero (positive value), the distribution is said to be peaked, while a kurtosis of less than zero (negative value) indicates a flatter distribution (Hair et al. 2010). Similarly, data is said to be normally distributed when it is symmetric and balanced, with a skewness statistic of zero. If the skewness value is greater than zero (positive value), the distribution is skewed to the right (its tail extends towards higher values), while a negative value indicates that the distribution is skewed to the left.

According to Hair et al. (2010), the critical value for determining the kurtosis and skewness statistical values can be drawn from the Z distribution, which relies on the study’s significance level. Therefore, the current study used a cut-off value of ± 2.58, with a corresponding significance level of 0.01 as recommended by Hair et al. (2010). The particulars of these statistics are presented in the descriptive statistics section of this chapter.
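The skewness and kurtosis check described above can be sketched as follows. The responses are synthetic (a roughly normal 7-point item for n = 239); the simplified standard-error formulas √(6/n) and √(24/n) are used here, which for n = 239 give values close to the 0.157 and 0.314 standard errors that SPSS reports (SPSS uses the exact finite-sample formulas).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical 239 responses on a 7-point item, roughly normal.
x = np.clip(rng.normal(5, 1.2, size=239).round(), 1, 7)

n = len(x)
skew = stats.skew(x)      # sample skewness (0 for a normal distribution)
kurt = stats.kurtosis(x)  # excess kurtosis (0 for a normal distribution)

# Approximate standard errors for a sample of size n, and the z-values
# compared against the ±2.58 critical value (0.01 significance level).
se_skew = np.sqrt(6 / n)
se_kurt = np.sqrt(24 / n)
z_skew = skew / se_skew
z_kurt = kurt / se_kurt
approximately_normal = abs(z_skew) <= 2.58 and abs(z_kurt) <= 2.58
```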

 

5.3 Basic Demographic variables

The online survey as well as the semi-structured interviews asked the respondents to provide some basic information about themselves and their organizations. Out of the 239 respondents who undertook the quantitative online survey, 3.3% had high school education, 65.7% had a Bachelor's degree, 28.5% had a Master's degree and 2.5% had a PhD, as shown in the table below.

Education level
Frequency Percent Valid Percent Cumulative Percent
Valid High School 8 3.3 3.3 3.3
Bachelor Degree 157 65.7 65.7 69.0
Master Degree 68 28.5 28.5 97.5
PhD 6 2.5 2.5 100.0
Total 239 100.0 100.0

Further, the respondents were asked to state their current job positions, and the results show that the majority (91.2%) were in top management, with the rest in middle management (7.1%) and junior management (1.7%). When asked to describe their role in their organizations, 87.9% said they were CEOs, 4.2% were vice-presidents and 7.9% were operations managers. The respondents were also asked to state how long they had been working for their current employers: the largest group (40.6%) had worked for 10-15 years, 18% for 15-20 years, 14.2% for 5-10 years, another 14.2% for 20-25 years and 9.6% for 25-30 years. Seven respondents (2.9%) had worked for 1-5 years and only one had worked for over 30 years in their current organization. The number of years an employee had worked in a firm was also used to determine the age of the firm, which was considered a key control variable.

Regarding the location of their firms, 89.1% said their offices were based in Makkah, while the remaining 10.9% reported that their offices were based in Jeddah. All the data is tabulated in the tables below.

 

Your current job position
Frequency Percent Valid Percent Cumulative Percent
Valid Top management 218 91.2 91.2 91.2
Middle management 17 7.1 7.1 98.3
Junior management 4 1.7 1.7 100.0
Total 239 100.0 100.0

 

Which of the following best describes your role in your organisation?
Frequency Percent Valid Percent Cumulative Percent
Valid CEO 210 87.9 87.9 87.9
Vice president 10 4.2 4.2 92.1
Operation manager 19 7.9 7.9 100.0
Total 239 100.0 100.0

 

How long have you been working with your current employer?
Frequency Percent Valid Percent Cumulative Percent
Valid 1-5 7 2.9 2.9 2.9
5-10 34 14.2 14.2 17.2
10-15 97 40.6 40.6 57.7
15-20 43 18.0 18.0 75.7
20-25 34 14.2 14.2 90.0
25-30 23 9.6 9.6 99.6
Over 30 1 .4 .4 100.0
Total 239 100.0 100.0

 

Where is your office located?
Frequency Percent Valid Percent Cumulative Percent
Valid Makkah 213 89.1 89.1 89.1
Jeddah 26 10.9 10.9 100.0
Total 239 100.0 100.0


5.4 Control Variables

In order to better determine the hypothesized relationships, three control variables were used: firm size, firm age and firm production type. Firm size was determined by asking the respondents to indicate the number of employees in their firm, and the logged scores of these counts were used in the analysis. The year of registration was used to determine the age of the firm, and the respondents were also asked to state when the firm started its operations. The firm production category was determined by the type of food the establishment produced, yielding three categories: fresh meals, pre-cooked food, and raw material wholesale.

The online survey participants were asked to state the number of employees in their firm and the results were as shown in the table below.

Number of employees?
Frequency Percent Valid Percent Cumulative Percent
Valid 30-50 19 7.9 7.9 7.9
210-250 17 7.1 7.1 15.1
50-70 37 15.5 15.5 30.5
25 6 2.5 2.5 33.1
50-90 34 14.2 14.2 47.3
35 2 .8 .8 48.1
38 2 .8 .8 49.0
90-110 22 9.2 9.2 58.2
40 1 .4 .4 58.6
42 1 .4 .4 59.0
44 1 .4 .4 59.4
110-130 24 10.0 10.0 69.5
53 1 .4 .4 69.9
55 3 1.3 1.3 71.1
130-150 38 15.9 15.9 87.0
60 1 .4 .4 87.4
150-170 30 12.6 12.6 100.0
Total 239 100.0 100.0

 

The data from the online surveys was further grouped and analysed, with the results shown in the pie chart below. Based on the results, 3% of respondents came from organizations with 25 employees, 11% from organizations with 30-50 employees, 15% from firms with 50-70 employees, 16% from organizations with 50-90 employees and another 16% from organizations with 130-150 employees. The remaining 9%, 10%, 13% and 7% were from companies with 90-110, 110-130, 150-170 and 210-250 employees respectively.

The respondents were further asked to name the sector to which their organizations belonged. As the pie chart below shows, almost half of the online survey informants (49%) came from firms operating in the food manufacturing sector, while 22% came from the food provider sector. A further 14% worked for firms in the subcontractors' sector, 12% were from the Hajj campaigns sector and the remaining 3% were from the SC management sector.

As mentioned earlier, the number of years an employee had worked in a firm was used to determine the age of the firm. The chart below presents the results obtained.

5.5 Descriptive statistics

In order to capture the main aspects that can be used to mitigate demand uncertainty, the measurement model comprised five main aspects: internal integration (II), supplier integration (SI), customer (demand) integration (CI), postponement practice (PP) and mass customisation capability (MCC). As mentioned earlier, the theoretical model aimed at establishing the effect of supply chain integration (SCI), postponement practice (PP) and mass customisation capability (MCC), as independent variables, on demand uncertainty mitigation (DUM), the dependent variable, under high competitive intensity (CPI). For each aspect, several indicators were developed to measure its relationships and dimensions. The table below presents the descriptive statistics of the hypothesized variables used to measure how internal integration, supplier integration and customer integration, together with postponement practice and mass customisation capability, are used to mitigate demand uncertainty. These include the mean, maximum and minimum values, standard deviation, skewness and kurtosis.

The descriptive statistics below show that all the main aspects and their variables scored above average. The lowest-scoring variable was SI2 (we maintain close communications with food suppliers about quality considerations and design changes), with a mean of 4.26 out of 7 (0.6086). CPI2 (our competitive pressures are extremely high) was the second lowest, with a mean of 4.29 out of 7 (0.6129). Third lowest was CPI3 (we do not pay much attention to our competitors) at 4.56 out of 7 (0.6514), closely followed by CPI1 (we are in a highly competitive industry) at 4.75 out of 7 (0.6786). These results show that all the competitive intensity items were among the lowest-scoring variables.

The other variables followed in this order: SI3 (our firm's key food suppliers provide input into our product development projects) scored a mean of 4.82 out of 7 (0.6886); II2 (our plant's functions coordinate their activities) scored a mean of 5.15 out of 7 (0.7357); PP3 (our firm postpones final packaging activities until it receives customer orders) scored 5.18 out of 7 (0.74); SI1 (we maintain cooperative relationships with food suppliers) scored a mean of 5.21 out of 7 (0.7443); and II3 (our top management emphasizes the importance of good inter-functional relationships) scored a mean of 5.23 out of 7 (0.7471).

The descriptive statistics further show that demand uncertainty mitigation had average scores, with the delivery lead time item (we mitigate demand uncertainty when our customers place orders consistent with their nominated delivery lead time) scoring 5.36 out of 7 (0.7657), the product specification item (we mitigate demand uncertainty by providing products to our customers consistent with their nominated product specification) scoring 5.48 out of 7 (0.7829), and the forecasts item (we mitigate demand uncertainty when our customers provide us reliable forecasts of their demand) also scoring 5.48 out of 7 (0.7829).

Among the mass customisation capability items, MCC2 (we can easily add significant food product variety without increasing costs) scored 5.52 out of 7 (0.7886), while the other items performed better still. The internal integration variable II1 (the functions in our plant are well integrated) performed better than the other internal integration items, scoring a mean of 5.59 out of 7 (0.7985). The postponement practice variables PP1 (our firm postpones final product assembly activities until it receives customer orders) and PP2 (our firm postpones final product labelling activities until it receives customer orders) scored higher than PP3, with mean scores of 5.67 out of 7 (0.81) and 4.86 out of 6 (0.81) respectively.

Similarly, the customer integration variables performed well, with three of them among the top five highest-scoring variables. CI2 (our customers are actively involved in our product design process) scored a mean of 4.91 out of 6 (0.8183); MCC3 (we can easily add product variety without sacrificing quality) scored a mean of 5.91 out of 7 (0.8443); CI3 (the customers involve us in their quality improvement efforts) scored 5.93 out of 7 (0.8471); and CI1 (we are in frequent, close contact with our customers) scored a mean of 6 out of 7 (0.8571). The highest-scoring variable was MCC1 (we are highly capable of large-scale product customization), with a mean of 6.1 out of 7 (0.8714). These statistics are tabulated in the table below.

 

Descriptive Statistics
N Minimum Maximum Mean Std. Deviation Skewness Kurtosis
Statistic Statistic Statistic Statistic Statistic Statistic Std. Error Statistic Std. Error
Age 239 1 4 2.54 .982 -.087 .157 -.994 .314
How long have you been working with your current employer? 239 1 25 3.64 1.871 6.357 .157 70.668 .314
We are in frequent, close contact with our customers. 239 3 7 6.00 1.051 -.823 .157 -.091 .314
Our customers are actively involved in our product design process. 239 2 6 4.91 1.177 -.863 .157 -.051 .314
The customers involve us in their quality improvement efforts. 239 4 7 5.93 1.103 -.558 .157 -1.074 .314
The functions in our plant are well integrated 239 3 7 5.59 1.306 -.724 .157 -.735 .314
Our plant’s functions coordinate their activities. 239 2 7 5.15 1.310 -.700 .157 -.345 .314
Our top management emphasizes the importance of good inter-functional relationships. 239 2 7 5.23 1.251 -.327 .157 -.783 .314
We maintain cooperative relationships with food suppliers. 239 1 7 5.21 1.448 -.296 .157 -.893 .314
We maintain close communications with food suppliers about quality considerations and design changes 239 1 7 4.26 1.453 -.020 .157 -1.402 .314
Our firm key food suppliers provide input into our product development projects. 239 1 7 4.82 1.529 -.283 .157 -.936 .314
Our firm postpones final product assembly activities until receives customer orders 239 3 7 5.67 1.242 -.854 .157 -.325 .314
Our firm postpones final product labelling activities until receives customer orders 239 1 6 4.86 1.082 -1.008 .157 .629 .314
Our firm postpones final packaging activities until receives customer orders 239 2 7 5.18 1.275 -.739 .157 -.119 .314
We are highly capable of large-scale product customization. 239 4 7 6.10 1.034 -.832 .157 -.567 .314
We can easily add significant food product variety without increasing costs. 239 4 7 5.52 .703 -.543 .157 -.164 .314
We can easily add product variety without sacrificing quality. 239 4 7 5.91 1.129 -.533 .157 -1.160 .314
We mitigate demand uncertainty by providing products to our customer consistent with their nominated product specification. 239 2 7 5.48 1.371 -.670 .157 -.796 .314
We mitigate demand uncertainty when our customers place orders consistent with their nominated delivery lead time. 239 2 7 5.36 1.242 -.578 .157 -.551 .314
We mitigate demand uncertainty when our customers provide us reliable forecasts on their demands. 239 2 7 5.48 1.371 -.670 .157 -.796 .314
We are in a highly competitive industry. 239 1 7 4.75 1.568 -.013 .157 -1.082 .314
Our competitive pressures are extremely high. 239 1 7 4.29 1.779 .117 .157 -1.510 .314
We do not pay much attention to our competitors. 239 1 7 4.56 1.436 .152 .157 -.748 .314
Valid N (list wise) 239

 

In order to examine the distribution normality of the data, the kurtosis and skewness values were analysed against the critical values. As mentioned earlier, normally distributed data should have kurtosis and skewness values falling within the range of ±2.58 (Hair et al. 2010). A closer analysis of the statistics shows that all the research construct variables fell within the acceptable ±2.58 range, implying no serious departure from normality.

5.6 Structural model assessment and testing

5.6.1 Structural Model

The current study was based on SEM, and the PLS-SEM approach was used to validate the associations between the study's constructs. Considered a second-generation modelling technique, SEM performs a dual function: as a measurement (outer) model it assesses the quality of the research constructs, and as a structural (inner) model it assesses the relationships between the outlined constructs (Fornell and Bookstein, 1982). Determining the sample size for SEM depends on five crucial factors: the study's assumptions are tested using multivariate analysis; an estimation technique is identified to estimate the sample size; model complexity is analysed; the data is screened for missing values and outliers; and the average error variance is estimated (Hair et al., 2010).

According to Chin (1998), traditional significance testing techniques are not suitable for PLS-SEM because it makes no distributional assumptions. As a result, the assessment of PLS-SEM models should be carried out using measures that are non-parametric and prediction-oriented rather than measures of fit (Chin, 1998). Supporting Chin's (1998) argument, Hair et al. (2011) recommend the Stone-Geisser test, path coefficients and the coefficient of determination (R2) as the most appropriate methods of assessing a PLS structural model. They add that resampling techniques such as jackknifing and bootstrapping can be used to assess both the significance and the stability of the path coefficient estimates (Hair et al. 2011).

The bootstrapping process in PLS uses 5000 samples; goodness-of-fit indices are not used for inner model evaluation (Henseler and Sarstedt, 2013), and Cronbach's alpha for internal consistency is not used for outer model evaluation (Bagozzi and Yi, 1988). Instead, in the current study, which applies the PLS approach, path loadings and R2 values were estimated. Path loadings identify the strength of the relationship between independent and dependent variables, while R2 measures the proportion of variance in the dependent variable explained by the independent variables, that is, the model's predictive power. The SmartPLS 2.0 M3 software by Ringle et al. (2005) was used to estimate the measurement and structural models, and the bootstrapping estimation procedure was used to establish the significance of the scale factor loadings and path coefficients of the measurement and structural models respectively (Gefen and Straub, 2005). Also, when assessing the structural model, it is vital to take into account the possibility of data classification when evaluating both the unobserved and observed heterogeneous variables.
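The bootstrapping logic behind the significance tests can be illustrated with a minimal sketch. The data below is synthetic (not the study's data), the path shown (CI to PP) is just one example from the model, and a single-predictor regression stands in for the full PLS algorithm, which SmartPLS implements internally; only the resampling idea is demonstrated.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical construct scores: PP partly driven by CI.
n = 239
ci = rng.normal(5, 1, n)
pp = 0.4 * ci + rng.normal(0, 1, n)

def path_coefficient(x, y):
    """Standardized path coefficient of y on x (with one predictor, beta = r)."""
    return np.corrcoef(x, y)[0, 1]

# Bootstrapping with 5000 resamples, as in the PLS estimation.
boot = np.empty(5000)
for b in range(5000):
    idx = rng.integers(0, n, n)  # resample respondents with replacement
    boot[b] = path_coefficient(ci[idx], pp[idx])

estimate = path_coefficient(ci, pp)
se = boot.std(ddof=1)            # bootstrap standard error
t_stat = estimate / se           # t-statistic, as reported in the thesis tables
significant = abs(t_stat) > 1.96 # approximate 5% two-tailed threshold
```

The t-statistics in the path coefficient table later in this chapter are produced by SmartPLS in essentially this way: the estimate divided by its bootstrap standard error.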

 

5.6.2 CFA Analysis

CFA was undertaken using the PLS software with the aim of assessing the reliability and validity of the multiple-item scales. Given the reflective nature of the measurement scale, the outer loadings, composite reliability, AVE and its square roots were examined and reported. Reliability, convergent validity and discriminant validity tests were conducted in line with Fornell and Larcker's guidelines. Indicator reliability and composite reliability tests, which are superior to Cronbach's alpha, were conducted because these tests consider the actual factor loadings rather than assigning an assumed equal weight to each item (Fornell and Larcker, 1981). Indicator reliability is assessed through the outer loadings, and a loading of 0.70 or higher validates indicator reliability (Hulland, 1999). Internal consistency reliability is established if the composite reliability is 0.70 or higher (Bagozzi and Yi, 1988). Given that the composite reliability values range from 0.9317 to 0.9754, well above the recommended 0.70, internal consistency reliability was established.

The reliability of the study's questionnaire was established after the pilot study, while face and content validity were established prior to the pilot study (Churchill, 1979). Face and content validity were established a) using a group of academic experts and b) via the opinions of 12 CEOs through semi-structured interviews, while the construct validity subtypes, namely convergent and discriminant validity, were established through PLS analysis. Average variance extracted (AVE) was used to assess convergent validity: if the AVE values are greater than or equal to 0.5, convergent validity is established (Bagozzi and Yi, 1988). Given that a) the AVE values are above the recommended value of 0.5 (ranging from 0.8206 to 0.9295), as indicated in Table II, and b) the outer model loadings indicated in Table I are greater than 0.70, convergent validity is demonstrated. Additionally, the factor loadings' t-statistics were significant at p < 0.01 and communalities exceeded 0.500, which further established convergent validity (Hair et al., 2010).
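The AVE and composite reliability figures can be reproduced directly from the reported outer loadings. The sketch below uses the three loadings of the demand uncertainty mitigation (DUM) construct from the factor-loading table, and recovers the AVE of 0.8206 and composite reliability of 0.9317 reported in the CFA table.

```python
import numpy as np

# Outer loadings of the DUM items, taken from the factor-loading table.
loadings = np.array([0.9396, 0.8056, 0.9643])

# Average variance extracted: mean of the squared loadings.
ave = (loadings ** 2).mean()

# Composite reliability: (sum of loadings)^2 divided by that quantity
# plus the summed error variances (1 - loading^2 for each item).
sum_sq = loadings.sum() ** 2
error_var = (1 - loadings ** 2).sum()
cr = sum_sq / (sum_sq + error_var)

# Both exceed the recommended thresholds (AVE >= 0.5, CR >= 0.70).
```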

OUTER MODEL LOADINGS (FACTOR LOADINGS)

CI II SI PP DUM CPI
CIQ1 0.9442
CIQ2 0.9549
CIQ3 0.9377
IIQ1 0.9678
IIQ2 0.9654
IIQ3 0.9592
SIQ1 0.9620
SIQ2 0.9138
SIQ3 0.9339
PPQ1 0.9073
PPQ2 0.9605
PPQ3 0.9687
DUM1 0.9396
DUM2 0.8056
DUM3 0.9643
CPI1 0.9635
CPI2 0.9648
CPI3 0.9411

Note: Outer model loadings or factor loadings are extracted to conduct the CFA analysis

 

CFA ANALYSIS

AVE C.R T-Value* Loading Item
Customer Integration
0.8942 0.9621 34.4739 0.9442 CI1
30.0859 0.9549 CI2
22.4917 0.9377 CI3
Internal Integration
0.9295 0.9754 36.8457 0.9678 II1
36.3637 0.9654 II2
30.4638 0.9592 II3
Supplier Integration
0.8780 0.9561 27.3363 0.9620 SI1
17.3004 0.9138 SI2
19.4477 0.9339 SI3
Postponement Practice
0.8947 0.9624 26.6386 0.9073 PP1
32.6321 0.9605 PP2
48.7413 0.9687 PP3
Demand Uncertainty Mitigation
0.8206 0.9317 18.2925 0.9396 DUM1
15.2333 0.8056 DUM2
33.7083 0.9643 DUM3
Competitive Intensity
0.9150 0.9701 29.1807 0.9635 CPI1
32.2481 0.9648 CPI2
23.2297 0.9411 CPI3
n.a n.a n.a 1.0000 COMPANY SIZE
n.a n.a n.a 1.0000 COMPANY AGE
n.a n.a n.a 1.0000 PRODUCTION TYPE

Note: CR = composite reliability; AVE = average variance extracted; * all item loadings are significant at p<0.01 level

 

Discriminant validity is assessed by comparing the square root of each construct's AVE with the correlations between that construct and every other construct. Discriminant validity is established when each square root is higher than the corresponding correlations with the other constructs (Fornell and Larcker, 1981). Table III presents the inter-construct correlations, with the square roots of the AVEs on the diagonal of the matrix (values in bold). A comparison between the correlation values and the square roots of the AVEs on the diagonal indicates discriminant validity.
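The Fornell-Larcker comparison can be sketched as follows. The AVEs are taken from the CFA table above for three of the constructs, but the inter-construct correlations are illustrative placeholders, since Table III's values are not reproduced here.

```python
import numpy as np

# AVEs from the CFA table; correlations below are hypothetical.
ave = {"CI": 0.8942, "II": 0.9295, "SI": 0.8780}
corr = np.array([
    [1.00, 0.55, 0.48],   # CI
    [0.55, 1.00, 0.62],   # II
    [0.48, 0.62, 1.00],   # SI
])

# Fornell-Larcker criterion: the square root of each construct's AVE
# (the diagonal) must exceed its correlations with every other construct.
sqrt_ave = np.sqrt(list(ave.values()))
ok = True
for i in range(len(sqrt_ave)):
    others = np.delete(corr[i], i)            # correlations with other constructs
    ok = ok and bool((sqrt_ave[i] > others).all())
discriminant_validity = ok
```

With AVEs this high (square roots above 0.93), discriminant validity holds unless two constructs correlate extremely strongly.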

 

 

Common method variance (CMV), an important issue that requires attention in survey-based studies, was also examined. A series of analyses was conducted to identify the presence of CMV based on the guidelines established by Podsakoff et al. (2003). Harman's single-factor test, based on the analytical procedure outlined by Liang et al. (2007), was conducted to determine CMV: CMV is present if a single factor accounts for the majority of the covariance in the data. Additionally, the correlation matrix was checked for excessively high correlations (> 0.9). The results of these tests established that CMV is unlikely to have influenced the study's results.
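Harman's single-factor test can be sketched as below: run an unrotated principal component analysis on all items and check how much variance the first component explains. The item data is synthetic, generated from three distinct hypothetical constructs, so no single factor should dominate.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical responses: 9 items driven by three distinct latent factors.
n = 239
f = rng.normal(size=(n, 3))                       # three latent factors
load = np.zeros((9, 3))
load[0:3, 0] = load[3:6, 1] = load[6:9, 2] = 0.9  # simple structure
items = f @ load.T + rng.normal(scale=0.5, size=(n, 9))

# Harman's single-factor test: unrotated PCA on the standardized items,
# then the share of total variance explained by the first component.
z = (items - items.mean(axis=0)) / items.std(axis=0, ddof=1)
eigvals = np.linalg.eigvalsh(np.cov(z, rowvar=False))[::-1]  # descending
first_factor_share = eigvals[0] / eigvals.sum()

# CMV is suspected when one factor accounts for the majority (> 50%)
# of the total variance; with three distinct factors it should not.
cmv_suspected = first_factor_share > 0.5
```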

5.6.3 Coefficient of Determination (R2)

Additionally, PLS differs from LISREL-type SEM in that it focuses on the predictive power of the independent variables (Chin, 1998), which can be capitalized on to explain complex relationships and to build theory. As a component-based approach (Lohmoller, 1988), PLS avoids the problems associated with inadmissible solutions and factor indeterminacy (Fornell and Bookstein, 1982). PLS also functions better than LISREL and AMOS when the assumption of a normal data distribution cannot be made, since it is capable of operating under conditions of non-normality (Gefen and Straub, 2005; Chin, 1998). While LISREL-type SEM provides goodness-of-fit indices, PLS instead estimates path loadings and R2 values: path loadings identify the strength of the relationship between independent and dependent variables, while R2 measures the proportion of variance explained in the dependent variables (Gefen and Straub, 2005). While these comparisons provide a general overview of PLS versus covariance-based SEM (CBSEM), the practical justification for using PLS can be obtained by comparing it with AMOS. Given the use of CFA in the quantitative analysis, a comparison between the SmartPLS and AMOS software for CFA highlights the benefits of using SmartPLS.

AMOS is one of the most popular statistical software packages for CBSEM (Hair, Ringle and Sarstedt, 2011). It is usually used when the research objective is to test a theory, confirm a theory or compare alternative theories. If the formative measures in the measurement model are limited to specified rules and require additional specifications such as covariation, then CBSEM-based AMOS is used. If the structural model is non-recursive, AMOS is used. If the requirements with respect to model specification, non-convergence, data distribution assumptions and identification are met as per CBSEM, then AMOS is used. Additionally, if the study requires global goodness-of-fit criteria and tests for measurement model invariance, then AMOS is the best software for conducting the analysis (Hair et al., 2011).

 

5.6.4 Path coefficients

The structural model is examined through PLS, and the outlined hypotheses are tested. A basic model with the main effects was first created and tested; its results are indicated in the figure below. This model explains 28% of the variance (R2) in DUM and 33% of the variance in PP.

T-STATISTICS OF PATH COEFFICIENTS (INNER MODEL)

Path         T-Statistic
CI -> PP     4.9302
II -> CI     7.2749
II -> PP     2.9590
II -> SI     8.6952
PP -> DUM    5.4187
SI -> PP     4.2084
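These t-statistics are obtained via bootstrapping: the sample is resampled with replacement, the path is re-estimated on each resample, and the original estimate is divided by the bootstrap standard error. A minimal single-path sketch on simulated data; the variable names, the true path value of 0.5 and the noise level are illustrative assumptions, not the study's:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated scores standing in for two constructs, e.g. postponement
# practice (PP) predicting demand uncertainty mitigation (DUM);
# the sample size matches the survey (239) and the true path is 0.5
n = 239
pp = rng.normal(size=n)
dum = 0.5 * pp + rng.normal(scale=0.8, size=n)

def path_coef(x, y):
    """OLS slope of y on x, a one-predictor stand-in for a PLS path."""
    xc = x - x.mean()
    return (xc @ (y - y.mean())) / (xc @ xc)

estimate = path_coef(pp, dum)

# Bootstrap: resample respondents with replacement and re-estimate
boot = np.empty(5000)
for b in range(boot.size):
    idx = rng.integers(0, n, size=n)
    boot[b] = path_coef(pp[idx], dum[idx])

t_stat = estimate / boot.std(ddof=1)    # |t| > 1.96 ~ significant at 5%
print(f"path = {estimate:.3f}, bootstrap t = {t_stat:.2f}")
```

SmartPLS performs the equivalent resampling over the full structural model rather than a single regression, but the logic of the reported t-statistics is the same.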


Note: Checking Structural Path Significance in Bootstrapping

Fig. 2. Structural model with path coefficient estimates

 

5.7 Relationship between Various Constructs

The current study hypothesized that SCI not only has significant direct and indirect effects on PP and MCC, but also plays a critical role in the employment of PP as an important strategy, empowering MCC to mitigate demand uncertainty. In order to ascertain the relationships between the hypothesized constructs, the study participants were asked several questions aimed at obtaining data on these relationships.

These include: the relationship between II and external integration (SI and CI); the relationship between the various forms of SCI (II, CI and SI) and postponement; the relationship between SCI and MCC; the relationship between II and PP; the relationship between II and MCC; the relationship between CI and PP; the relationship between CI and MCC; the relationship between SI and PP; the relationship between SI and MCC; the relationship between mass customization and postponement; the contingent effects of demand uncertainty and competitive intensity; and the relationship of II, SI, CI, PP and MCC with demand uncertainty mitigation (DUM).

5.7.1 Direct effects

The values from these models can be used to identify the direct and indirect effects of the study's constructs on each other. Direct effects are examined to validate the relationship between each SCI type and PP. As per the path coefficients, internal integration (0.199, p < 0.01), customer integration (0.318, p < 0.001) and supplier integration (0.253, p < 0.001) each have a direct impact on postponement. This validates H2, H3 and H4. Additionally, the path coefficients validate the direct impact of PP on DUM (0.534, p < 0.001).

5.7.2 Mediating effect (Indirect effect)

Indirect effects are calculated to determine whether external integration carries the effect of internal integration to postponement practice. The indirect effects are obtained by multiplying the path coefficient from internal integration to external integration by the path coefficient from external integration to postponement. The indirect effect through customer integration is 0.397 × 0.318 = 0.126; the indirect effect through supplier integration is 0.453 × 0.253 = 0.115. This validates H1 of the current study. As shown above, internal integration contributes to mitigating demand uncertainty indirectly through supplier integration and customer integration.
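The multiplication is easy to verify from the chapter's reported coefficients (note that 0.453 × 0.253 = 0.1146, i.e. 0.115 to three decimal places):

```python
# Indirect effects of II on PP, computed from the path coefficients
# reported in this chapter
indirect_ci = 0.397 * 0.318   # (II -> CI) * (CI -> PP)
indirect_si = 0.453 * 0.253   # (II -> SI) * (SI -> PP)
print(round(indirect_ci, 3), round(indirect_si, 3))  # → 0.126 0.115
```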

The Sobel test is then applied to assess the significance of these indirect effects. The resultant z values indicate that the indirect effect through customer integration is significant at the p < 0.05 level, whereas the indirect effect through supplier integration is significant at the p < 0.01 level. These results validate H5 and H6 and confirm that external integration carries the effect of internal integration to postponement practice.
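A sketch of the Sobel z computation follows. The path coefficients are the chapter's, but the standard errors are purely illustrative stand-ins (the chapter does not report them), chosen so that the resulting z values mimic the reported significance pattern:

```python
import math

def sobel_z(a, b, se_a, se_b):
    """Sobel z-statistic for the indirect effect a*b, given the
    standard errors of paths a and b."""
    return (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

# Path coefficients from this chapter; standard errors are hypothetical
z_ci = sobel_z(0.397, 0.318, se_a=0.09, se_b=0.12)  # via customer integration
z_si = sobel_z(0.453, 0.253, se_a=0.08, se_b=0.08)  # via supplier integration

# |z| > 1.96 -> p < 0.05; |z| > 2.576 -> p < 0.01 (two-tailed)
print(f"z_ci = {z_ci:.2f}, z_si = {z_si:.2f}")
```

With these assumed standard errors, z_ci falls between the 5% and 1% critical values while z_si exceeds the 1% value, matching the pattern reported for H5 and H6.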

5.7.3 Moderating effect

In order to test the moderating effect of competitive intensity on the indirect effects of internal integration on PP through external integration, a conditional indirect-effect (moderated mediation) model following the process suggested by Iacobucci (2008) is applied. A basic model with the main effects was created and tested; the results are represented diagrammatically in Figure 3.

Fig. 3. Basic model of moderated mediation effects on study’s variables (conditional indirect effects)

Following the procedure suggested by Iacobucci (2008), the moderated mediation results are indicated in Figs 4 and 5.

Fig. 4. Conditional Indirect Effect (moderation estimation of competitive intensity on the path from internal integration to supplier integration)

Fig. 4 indicates that the moderating effect of competitive intensity on the path from internal integration to supplier integration is negative and significant (β = –0.171; p < 0.01). The moderating effect of competitive intensity on the path from supplier integration to postponement practice is significant (β = 0.129; p < 0.01). The product a′ × b′ (0.171 × 0.129 = 0.022) is significant at the p < 0.05 level. It can therefore be deduced that competitive intensity enhances the indirect effect of internal integration on postponement practice through supplier integration. This validates H7 of the study.

Fig. 5. Conditional Indirect Effect (moderation estimation of competitive intensity on the path from internal integration to customer integration)

Fig. 5 shows that the moderating effect of competitive intensity on the path from internal integration to customer integration is significant (β = 0.185; p < 0.01). However, the moderating effect of competitive intensity on the path from customer integration to postponement practice is not significant (β = 0.05). The product a′ × b′ (–0.185 × 0.05 = –0.009) is not significant. This fails to support hypothesis H8 of the study.
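The two product terms can be reproduced directly from the estimates reported in Figs 4 and 5 (a minimal check, using the values exactly as stated in the figures):

```python
# Moderated-mediation products a' * b' from Figs 4 and 5
product_fig4 = 0.171 * 0.129    # II->SI and SI->PP moderation terms (H7)
product_fig5 = -0.185 * 0.05    # II->CI and CI->PP moderation terms (H8)
print(round(product_fig4, 3), round(product_fig5, 3))  # → 0.022 -0.009
```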

With respect to the control variables, Figures 4 and 5 indicate the following. Firm size has a positive significant effect on postponement practice (β = 0.497; p < 0.01). Firm age has a negative significant effect on postponement practice (β = –0.422; p < 0.01). Product type has no significant effect on postponement practice (β = 0.009). Firm size, firm age and firm product type have no significant effect on demand uncertainty mitigation (β = –0.003, β = –0.13 and β = –0.021, respectively).

 

5.8 Summary

The results from the study elucidate that SCI has a significant impact on PP and that the interrelationships between the SCI types mitigate demand uncertainty. The results show that postponement has a direct and positive effect on demand uncertainty mitigation, whereas internal integration has a direct and positive effect on postponement. Both customer integration and supplier integration have also been identified as having a direct and positive effect on postponement. Additionally, internal integration has been found to have a positive influence on customer and supplier integration. The addition of the contingent factor of competitive intensity showed that, through supplier integration, competitive intensity significantly enhances the indirect effect of internal integration on postponement practice. Lastly, the size and age of the firm have been found to play a significant role in postponement practice, with the results indicating a positive significant effect of firm size and a negative significant effect of firm age on postponement practice. In the next chapter, the results are discussed and conclusions drawn.

 

References

Bagozzi, R. & Yi, Y. 1988, “On the evaluation of structural equation models”, Journal of the Academy of Marketing Science, vol. 16, no. 1, pp. 74-94.

Chin, W. W. 1998, “The partial least squares approach to structural equation modelling”. In G. A. Marcoulides (Ed.), Modern methods for business research (pp. 295-336). Mahwah, NJ: Lawrence Erlbaum Associates, Publisher.

Churchill, G.A.,Jr. 1979, “A Paradigm for Developing Better Measures of Marketing Constructs”, Journal of Marketing Research, vol. 16, no. 1, pp. 64-73.

Cousineau, D., & Chartier, S. 2010, “Outliers detection and treatment: a review”. International Journal of Psychological Research, 3(1), 58-67.

Fornell, C. & Larcker, D.F. 1981, “Evaluating Structural Equation Models with Unobservable Variables and Measurement Error”, Journal of Marketing Research, vol. 18, no. 1, pp. 39-50.

Ghasemi, A., & Zahediasl, S. 2012, “Normality tests for statistical analysis: a guide for non-statisticians.” International journal of endocrinology and metabolism, 10(2), 486.

Graham, J. W. 2009, “Missing data analysis: Making it work in the real world”. Annual review of psychology, 60, 549-576.

Hair, J. F., Black, W. C., Babin, B. J., and Anderson, R. E. 2010, Multivariate Data Analysis, 7th Edn. Upper Saddle River, New Jersey: Prentice Hall.

Hair, J. F., Ringle, C. M., & Sarstedt, M. 2011, “PLS-SEM: Indeed a silver bullet”. Journal of Marketing Theory and Practice, vol. 19, no.2, pp.139–151.

Henseler, J. & Sarstedt, M. 2013, “Goodness-of-fit indices for partial least squares path modelling”, Computational Statistics, vol. 28, no. 2, pp. 565-580.

Hulland, J. 1999, “Use of partial least squares (PLS) in strategic management research: a review of four recent studies”, Strategic Management Journal, vol. 20, no. 2, pp. 195-204.

Iacobucci, D., 2008, Mediation analysis, Sage, Los Angeles.

Larson-Hall, J. 2009, A guide to doing statistics in second language research using SPSS. Routledge.

Leys, C., Ley, C., Klein, O., Bernard, P., & Licata, L. 2013, “Detecting outliers: Do not use standard deviation around the mean, use absolute deviation around the median”. Journal of Experimental Social Psychology, 49(4), 764-766.

Liang, H., Saraf, N., Hu, Q. & Xue, Y. 2007, “Assimilation of Enterprise Systems: The Effect of Institutional Pressures and the Mediating Role of Top Management”, MIS Quarterly, vol. 31, no. 1, pp. 59-87.

Pallant, J. 2013, SPSS survival manual. McGraw-Hill International.

Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. 2003, “Common method biases in behavioral research: A critical review of the literature and recommended remedies”, Journal of Applied Psychology, vol. 88, no. 5, pp. 879-903.

Ringle, C.M., Wende, S., & Will, A. 2005, SmartPLS 2.0. Hamburg [Online]. Available: www.smartpls.de [2014, Dec].

Tabachnick, B., and Fidell, L. 2001, Using Multivariate Statistics, 4th Edn. Boston: Allyn and Bacon.
