
In document A decision is made – and then? (Page 67-74)

4 Research methods and tools

4.4 Analysis tools

or in electronic form, may not be handed over to anyone outside the company except my supervisor and the opponent at the disputation if they ask for them” (translated from Swedish).

When planning an interview with an individual staff member, I first phone the person, introduce the scope and conditions of the study and ask whether it is possible to meet. If so, an interview is scheduled. I start the interview by confirming that top management knows about our meeting, as the respondents must know the conditions of the interview. I state orally that all information given is treated confidentially: I tell nobody what I learn, and the results are published in such a way that neither the company nor the respondent can be identified. In certain cases there are additional agreements, according to the requirements of the CEOs of the studied companies and of interviewees in special positions.

QCA makes it possible to analyze the presence of potentially “multiple” and “conjunctural causation” (terms introduced by Ragin), also in small populations. “Multiple” means that there is more than one solution for a given outcome; “conjunctural” means that no single factor alone produces the outcome, but two or more factors in combination do.

The use of QCA in its basic form has two important conditions:

• there must be a specific outcome to explain

• the variables must be dichotomous

How the variables used in a QCA analysis are measured is a subject of its own, but understanding the QCA analysis does not demand further explanation here. We simply note that each dichotomous variable is a YES/NO answer or a characteristic that can take just two values, e.g., rich/poor.

QCA is built on Boolean algebra, developed by George Boole in the mid-nineteenth century. There are ten basic features but I concentrate here on a few.

Boolean algebra uses binary data. A characteristic is true or false, present or absent and can consequently be transformed to 1 or 0. In the world of this study, a decision is implemented or not, goal satisfaction is true or false. A truth table is a complete matrix of all possible combinations of the actual independent x-variables. If we have three variables labeled M, F and C, the truth table has this shape:

M F C
1 1 1
1 1 0
1 0 0
1 0 1
0 1 1
0 0 1
0 1 0
0 0 0

The size of the truth table grows rapidly, as the number of rows is 2^n (n = number of x-variables); five variables give 32 rows and seven variables give 128 rows.
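As a minimal illustration (in Python; the dissertation itself uses fs/QCA for its calculations), the 2^n rows of such a truth table can be enumerated directly:

```python
from itertools import product

# Enumerate all 2**n rows of a truth table for n dichotomous x-variables.
def truth_table(n):
    return list(product((1, 0), repeat=n))

print(len(truth_table(3)))  # 8 rows, as for M, F and C above
print(len(truth_table(5)))  # 32 rows
print(len(truth_table(7)))  # 128 rows
```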

Let us now complete the truth table with the dependent variable y, here labeled I, and the outcome of I for each row:

M F C I
1 1 1 1
1 1 0 0
1 0 0 1
1 0 1 1
0 1 1 0
0 0 1 1
0 1 0 0
0 0 0 0

We transform the matrix into an equation describing the alternative ways to get the outcome I = 1, where capital letters stand for 1, e.g., M = 1, and lower-case letters stand for 0, e.g., m = 0:

MFC + Mfc + MfC + mfC → I (1)

Equation (1) is just another way of saying the same thing as the truth table: there are four situations where the outcome is I = 1. Is it possible to simplify equation (1)? Yes, by using the minimization technique, which means comparing the four situations with each other to find common elements:

Compared terms    Common element(s)
MFC and Mfc       -
MFC and MfC       MC
MFC and mfC       -
Mfc and MfC       Mf
Mfc and mfC       -
MfC and mfC       fC

Equation (1) may thus be written as

MC + Mf + fC → I (2)

There is one more simplification technique, applicable to the minimized equation, called prime implicants; it is not presented here, but it can be found in Ragin (1987).

What does equation (2) tell us? If M and C are present at the same time (MC), F/f does not matter and I = 1. If M is present and F is absent (Mf), C/c does not matter for the outcome I = 1. Finally, if F is absent and C is present (fC), M/m does not matter for I = 1. These generalizations may be tested in the truth table: MC covers rows 111 and 101, Mf covers rows 100 and 101, and fC covers rows 101 and 001. All rows with I = 1 are covered, and no others.
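The pairwise comparison just described can be sketched in code. The following Python fragment is an illustration of the technique, not the fs/QCA implementation: it performs one pass of the pairwise reduction on the four terms of equation (1) and checks that the reduced terms cover exactly the rows with I = 1.

```python
from itertools import combinations, product

# A term is a dict such as {'M': 1, 'F': 0, 'C': 1} (i.e. MfC); two terms
# merge when they differ in exactly one variable, which is then dropped.
def merge(t1, t2):
    diff = [v for v in t1 if t1[v] != t2[v]]
    if len(diff) == 1:
        return {v: t1[v] for v in t1 if v != diff[0]}
    return None

def covers(term, row):
    return all(row[v] == value for v, value in term.items())

# The four combinations with outcome I = 1 in equation (1): MFC, Mfc, MfC, mfC.
ones = [{'M': 1, 'F': 1, 'C': 1}, {'M': 1, 'F': 0, 'C': 0},
        {'M': 1, 'F': 0, 'C': 1}, {'M': 0, 'F': 0, 'C': 1}]

reduced = [m for a, b in combinations(ones, 2) if (m := merge(a, b))]
print(reduced)  # MC, Mf and fC, i.e. equation (2)

# The minimized terms cover exactly the original rows with I = 1, no others.
rows = [dict(zip('MFC', bits)) for bits in product((1, 0), repeat=3)]
covered = [r for r in rows if any(covers(t, r) for t in reduced)]
assert covered == [r for r in rows if r in ones]
```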

But reality is normally more complex than this example. Let us assume that we study the implementation efficiency in a company. We have selected information about:

x1 implementer in charge is a manager (M)
x2 follow-up plan created (F)
x3 cost-cutting decision (C)
y successful implementation (I)

Our investigation gives us the following truth table, supplemented with the number of cases having 1 or 0 as the outcome:

          Number of cases when
M F C     I = 1    I = 0
1 1 1     4        0
1 1 0     3        1
1 0 0     1        1
1 0 1     2        0
0 1 1     3        0
0 0 1     0        2
0 1 0     0        0
0 0 0     0        3

There are situations (rows) where we have both outcomes, contradictory row results, e.g., row 100, but also situations (rows) where we lack cases, limited diversity, e.g., row 010. Contradictory row results and limited diversity must be managed in some way, and there are no given rules for how to do this. For contradictory row results a simple majority technique can be used: if there are 3 cases giving I = 1 and 1 case giving I = 0, the row is set to I = 1. Another technique is to compare rows and search for a systematic structure; in the truth table above it seems as if M has an impact giving I = 1, so row 100 may be tested as I = 1. The same technique applied to row 010 (no cases, limited diversity) sets I = 0. The tradition of using QCA in the scientific discipline, and common sense, may help to solve the problems of contradictory row results and limited diversity. However, it is necessary to observe the risk of circular reasoning: the assumptions are what is to be proven. The analysis may be repeated with different assumptions and the results tested against existing knowledge.
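As an illustration of the majority technique (the tie-breaking rule for equally split rows is an analyst's choice; empty rows and ties fall back to 0 here), the case counts can be dichotomized as:

```python
# Majority dichotomization of the observed case counts into a crisp truth
# table. Rows are (M, F, C); values are (cases with I = 1, cases with I = 0).
counts = {
    (1, 1, 1): (4, 0), (1, 1, 0): (3, 1), (1, 0, 0): (1, 1),
    (1, 0, 1): (2, 0), (0, 1, 1): (3, 0), (0, 0, 1): (0, 2),
    (0, 1, 0): (0, 0), (0, 0, 0): (0, 3),
}
table = {row: int(n1 > n0) for row, (n1, n0) in counts.items()}
print(table[(1, 1, 0)])  # contradictory row 110: 3 vs 1 -> I = 1
print(table[(1, 0, 0)])  # contradictory row 100: 1 vs 1 -> I = 0 under a tie
print(table[(0, 1, 0)])  # limited diversity, row 010: no cases -> I = 0
```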

The table above can be adapted in a first analysis trial by setting row 110 to I = 1, row 100 to I = 0 and row 010 to I = 0:

M F C I
1 1 1 1
1 1 0 1
1 0 0 0
1 0 1 1
0 1 1 1
0 0 1 0
0 1 0 0
0 0 0 0

We get the equation

MFC + MFc + MfC + mFC → I (3)

Minimizing equation (3) gives

MF + MC + FC → I (4)

There are three situations giving successful implementation. If the implementer is a manager and she/he has a follow-up plan (MF), the decision type C/c does not matter. If the implementer is a manager but there is no follow-up plan, the decision must be a cost-cutting decision (MC). In the third situation, a follow-up plan, F, combined with a cost-cutting decision, C, ensures a successful implementation and M/m does not matter (FC).

Alternative assumptions may show whether the solutions are stable. Let us therefore alternatively set row 100 to I = 1:

M F C I
1 1 1 1
1 1 0 1
1 0 0 1
1 0 1 1
0 1 1 1
0 0 1 0
0 1 0 0
0 0 0 0

We get the equation

MFC + MFc + Mfc + MfC + mFC → I (5)

Minimizing equation (5) gives

M + FC → I (6)

In this solution we get successful implementation whenever the implementation task is given to a manager (M); a follow-up plan and the type of decision do not matter. If a non-manager is responsible for the implementation, there must be a follow-up plan and the decision must be of the cost-cutting type (FC) if the implementation is to be successful.
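As a quick consistency check (illustrative Python, assuming the alternative truth table with row 100 set to I = 1), the minimized solution (6) can be verified row by row:

```python
# Check that M + FC -> I reproduces the alternative truth table.
table = {(1, 1, 1): 1, (1, 1, 0): 1, (1, 0, 0): 1, (1, 0, 1): 1,
         (0, 1, 1): 1, (0, 0, 1): 0, (0, 1, 0): 0, (0, 0, 0): 0}
ok = all(bool(m or (f and c)) == bool(i) for (m, f, c), i in table.items())
print(ok)  # True
```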

Even more alternative assumptions could be tested, but as a demonstration of the technique I stop here. Finally, it may be noted that the results in equations (4) and (6) are both realistic, but they must be tested against existing knowledge, and the investigation should probably be repeated to confirm either result.

This short introduction to QCA hopefully helps in understanding the strengths and weaknesses of the method, which I use in the analysis. The calculations are made with the computer program fs/QCA 2.0.

4.4.3 Description of LISREL

An analysis of the fitness of the preliminary model (see figure 3) is a possible approach even if the quantitative data are limited. Furthermore, the application of simultaneous equation models, in the software package LISREL, is advantageously used in similar situations (Lunneryd, 2003). Therefore I have decided to use LISREL as the quantitative analysis tool. It is described from a user’s point of view by Lunneryd (2003). Diamantopoulos (1994) adds even more useful information for the user of LISREL. This introduction relies on these presentations with the focus on application conditions and possibilities in my research.

LISREL is short for LInear Structural RELationships and it is a commercial computer program; I have used version 8.50 for the calculations in the study.

LISREL is basically a covariance structure analysis aimed at solving structural equation systems with latent variables. The use of LISREL has some conditions. A model on which LISREL is applied must be designed beforehand, so as to test whether the dataset confirms the hypothesized relationships. The data are preferably measured on interval or ratio scales.

The quantitative data are produced on a ratio scale. The factor groups in figure 3 are the latent variables in the LISREL analysis. As the study has a limited number of quantitatively measured observations, there are problems with the number of degrees of freedom. One way to solve this is to test parts of the model in sequence.

The LISREL analysis produces a solution as a path diagram. Correlations in the path diagram are tested with Student's t-test. The P-value (of the chi-square statistic, given the degrees of freedom) and the RMSEA (Root Mean Square Error of Approximation) estimate the model fit. A P-value above 0.05 indicates a good fit, as does an RMSEA value below 0.05. An RMSEA value between 0.05 and 0.09 indicates an acceptable fit; a value above 0.09 says that the data processed could just as well be explained by another model.
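For illustration, the RMSEA can be computed from the chi-square value, the degrees of freedom and the sample size N with the standard formula; the cut-offs follow the text above, and the example values are hypothetical, not taken from the study.

```python
from math import sqrt

# RMSEA = sqrt(max((chi2/df - 1) / (N - 1), 0)), the standard formula.
def rmsea(chi2, df, n):
    return sqrt(max((chi2 / df - 1) / (n - 1), 0.0))

def fit(value):
    if value < 0.05:
        return "good"
    if value <= 0.09:
        return "acceptable"
    return "poor"

r = rmsea(chi2=12.0, df=10, n=30)  # hypothetical example values
print(round(r, 3), fit(r))  # 0.083 acceptable
```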

4.4.4 Treating missing values

Missing values in a dataset are often a severe problem. Suppose that a dataset consists of 30 rows (respondents) and 15 columns (variables). If one variable has a missing value (a specific respondent has not given an answer), the whole row is rejected and 14 measurements are lost. Repeated missing values in more rows can heavily downsize the entire data volume to be treated. In LISREL there are techniques for treating this situation (Jöreskog & Sörbom, 1996). Originally two types were used, pairwise and listwise deletion. However, “In many situations, particularly when values are missing not completely at random, these procedures are far from satisfactory” (Appendix B, p. 153). In LISREL version 8.50 the imputation technique is available; it is a “… substitution of real values for the missing values. The value to be substituted for the missing value for a case is obtained from another case that has a similar response pattern over a set of matching variables” (Appendix B, p. 153). After imputation there may still be missing values, but normally completeness increases, which improves the statistical computing.
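The quoted matching idea can be sketched as follows; this is a simplified illustration with hypothetical variable names, and LISREL's actual imputation procedure is more elaborate.

```python
# A missing value (None) is copied from another respondent (a "donor") who
# gives the same answers on the matching variables.
def impute(rows, matching):
    filled = [dict(r) for r in rows]
    for row in filled:
        for var, value in row.items():
            if value is None:
                for donor in rows:
                    if donor[var] is not None and all(
                            donor[m] == row[m] for m in matching):
                        row[var] = donor[var]
                        break
    return filled

rows = [{'x1': 1, 'x2': 0, 'x3': 1},
        {'x1': 1, 'x2': 0, 'x3': None},   # missing answer on x3
        {'x1': 0, 'x2': 1, 'x3': 0}]
print(impute(rows, matching=['x1', 'x2'])[1]['x3'])  # 1, taken from the first row
```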

The datasets used in the QCA analysis are essentially the same as in the LISREL analysis, and constitute a minor part of all the data collected (see figure 7). Grey dots in the figure represent missing values in the originally collected data. LISREL offers the imputation technique; QCA does not, as far as I know, offer a similar method for handling missing values. Therefore I use the imputed dataset from LISREL when carrying out the QCA analysis, instead of the more limited non-imputed dataset. The completeness of the dataset increases, which improves the validity of the QCA. As far as I know, this method of missing-value treatment for QCA has not been reported earlier.

[Figure 7: Collected data; Data used in LISREL; Data used in QCA]

Figure 7. A principal outline of the dissertation datasets (grey dots are originally missing values and some of them are eliminated through LISREL imputation, here marked with x)

The variables in LISREL are categorized as ordinal or continuous. Imputed values follow the categorization of the variable. The transformation of continuous variables into dichotomous values for QCA takes place within the imputed dataset.
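Such a transformation can be sketched as a simple threshold rule (illustrative; the threshold and the values are hypothetical, and in practice the cut-off is a substantive choice by the analyst):

```python
# Dichotomize a continuous, imputed variable for crisp-set QCA.
def dichotomize(values, threshold):
    return [1 if v >= threshold else 0 for v in values]

print(dichotomize([2.5, 7.1, 4.9, 5.0], threshold=5.0))  # [0, 1, 0, 1]
```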
