
4 Research methods and tools

4.3 Aspects on questions that emerged from selected methods

complexity, the scope and importance, the decision demanded and/or recognized by implementers, the decision target group(s) and the number of persons involved, the decision's effects on internal and/or external groups, the business risk of the decision, implementer competence and probably many more. The list has been developed with this diversity in mind. I open the sub-interview by asking “What is the purpose of the decision?” and then follow the respondent's track. Follow-up questions covering the diversity are included, and in the end the respondent makes a mark on the scale, where the poles are “Impossible to manage” and “Perfect”.

Implementation profile

Examples of variables constituting the implementation profile are the time schedule, CEO involvement and support, resources, relevant implementer competence, responsibility and the follow-up schedule (see table 3, but also Miller, 1997, and Braga Rodrigues & Hickson, 1995, for further details). The executives' participation in implementation is important for implementation success (Nutt, 1987), but it is situational. The need for a follow-up plan is underlined by Simons (2000).

I open the sub-interview with “How did you comprehend the implementation task?” and then follow the respondent's track. The opinions about the variables mentioned above are covered by follow-up questions. In the end the respondent makes a mark on the scale, where the poles are “Non-existent” and “Complete”.

other reported results. The main problem is probably not the validity itself but the definition of implementation efficiency. The challenge is to communicate the definition in such a manner that the respondent understands what to estimate according to the intentions of the definition. Therefore, a written definition is handed over when the respondent is asked to mark the scale. The follow-up questions afterwards, which center on the motives behind the marked score, provide an opportunity to detect potential misunderstandings.

In some studies the validity aspect is discussed, see, e.g., Bryson & Bromiley (1993) and Nutt (1989 and 2000). As the research methods used in those studies differ from mine, it is not possible to adopt their validity tests. The differences lie in their use of quantitative methods and in the many people involved in data collection and categorization. I principally use qualitative methods, and I have performed all the interviews and interpreted the respondents' verbal opinions into scores myself, avoiding that type of validity problem. Using several informants who report different estimations is seen as another validity problem in the literature, as the approach there is to reach a consensus. My ambition is instead to map the tensions between respondents in a bottom-up perspective. Do the respondents judge the same thing? Does a difference depend on a real difference in opinions or on a different use of the “yard-stick”? I do not know, as I do not test this; it is a weakness of the study.

Regarding reliability, the interview questions, including the scales used in Step II, were tested in a couple of test interviews. Small changes were made to increase clarity. The questionnaire used in Step I was not formally tested beforehand but was discussed with colleagues, which prompted some adjustments. A risk of interviewer bias arises when I do all the interviews myself in Step II. The risk consists of displaced emphasis on the importance of certain questions, the interviewer feeling fed up during the last interviews, leading questions when a lot is already known from the first interview, and probably more. Relying on Trost (1997), I have done my best to be on guard, but also to “debrief” myself after the interviews by mentally replaying the course of the interview and looking for my mistakes. I have not dropped any interview or case.

I try to estimate the reliability of the respondent scoring. As described in 4.2.1, the respondent is asked to place a score on a scale at the end of each section of the interview. The scoring is often accompanied by comments and body language, with the respondent's verbal statements as a background. When the respondent hands the paper back with the score, I immediately estimate the reliability of her/his scoring on a scale from 0 to 6. The estimation criteria are a combination of how well the respondent has understood the question and the correspondence between the verbal description and the mark on the scale after the follow-up questions. I use this method not only for implementation efficiency but for all of the respondent's judgments.

In all, my awareness of the validity and reliability problems in this research partly reduces the risk of failure, but a systematic evaluation has not been carried out.

However, as discussed above, some tests are carried out both during the preparatory phase of the empirical study and when the interviews are completed in order to improve the data quality.

The discussion above has dealt with the methods and tools for information collection. The analysis tools used in this study are presented in 4.4. The validity/reliability questions are discussed further in Chapter 6, in connection with the presentation of the analysis results, and in 7.5.1. The generalization question is discussed in 7.5.2.

4.3.2 Gender aspects

I am going to study business life, an arena traditionally dominated by men. The main strategy is to select interesting decisions to study in complex, profit-driven companies. The people I meet will be the people, men or women, involved in the selected decisions. This is the Doris Day syndrome: que sera, sera, whatever will be, will be. I have considered investigating a gender perspective, i.e., including gender as a variable in the model, as it is not improbable that there are gender-related differences in making commitments, handling conflicts, etc. The final decision is, however, to disregard the gender aspect: there are very few women to interview when the decisions to study are randomized, and the issue would further complicate the implementation model when the implementation process is already poorly understood. A gender perspective may hopefully be possible to apply when we know more about implementation and its conditions.

There is, however, another gender aspect regarding the role of the researcher, see, e.g., Lundgren (1994) and Trost (1997). The researcher must be aware of her/his gender bias, which is often unconscious, when interviewing people but also when selecting, describing and analyzing information. There are no rules of thumb or simple guidelines. Awareness of the problem and a self-critical distance to the researcher role are my help in avoiding the most dangerous mistakes.

4.3.3 Ethical aspects

The qualitative approach, as it is designed so far, implies a selection of people as respondents and close associations with the people selected. The ethical dimension is therefore important to manage in a conscious and accountable way. Approaching people and asking them to tell their stories and give me their private opinions has ethical implications. How do I treat the information provided? May outspoken opinions in conflict with the established culture or guidelines hurt them? Can they trust me? Asking executives for confidential business information is also a crucial matter. The following presentation of ethical research behavior is mainly inspired by Forsman (1997) and Hermerén (1996).

The basic ethical rule for me is to keep what I promise and not to promise more than I am able to keep. How do I put that into action? In research Step I (see 4.2.1) I state in the introduction letter: “No answer will be published in such a way that it will be possible to identify the company or the CEO” (translated from Swedish, see appendix B1). In Step II (see 4.2.1) I confirm in a letter to the CEO:

“I undertake to treat all information with complete secrecy, which means that no answers will be published in such a way that it will be possible to identify the company or the CEO. The information received, or written down by me on paper or in electronic form, may not be handed over to anyone outside the company except my supervisor and the opponent at the disputation, if they ask for it” (translated from Swedish).

When planning an interview with an individual staff member, I first phone, introduce the scope and the conditions, and ask if it is possible to meet. If so, an interview is scheduled. I start the interview by confirming that top management knows that we are meeting, as the respondents must know the conditions of the interview. I clarify orally that all information given is treated confidentially: I tell nobody what I learn, and the results are published in such a way that it is impossible to identify the company and the respondent. In certain cases there are additional agreements, according to the requirements of the CEO of the studied company or of interviewees in special positions.
