Talk:Biofuel assessments


Invited stakeholders

Environmental non-governmental organizations

  • Greenpeace
  • WWF Finland
  • The Finnish Association for Nature Conservation
  • Dodo ry
  • Friends of the Earth Finland

Expert organisations

  • National Institute for Health and Welfare (THL)
  • Finnish Environment Institute (SYKE)
  • Bioteknologia.info
  • Motiva Ltd
  • MTT Agrifood Research Finland
  • Finnish Game and Fisheries Research Institute (RKTL)
  • Ministry of the Environment
  • Finnish Energy Industries
  • VTT Technical Research Centre of Finland
  • Paula Tommila (Researcher, specialist in Jatropha)

Human rights organisations

  • Amnesty International Finland

Energy corporations

  • Neste Oil
  • St1 Biofuels Oy
  • Vapo biofuels

Examples of formalized discussions

Comment received 18.11.2011 from Maija Suomela, Greenpeace Finland (translation here; original comment in Finnish Opasnet (op_fi:Keskustelu:Jatropan käyttö bioenergian lähteenä))

Jatropha has been offered as a solution in a situation where, for instance, Neste Oil has found it difficult to obtain sustainably and responsibly produced raw materials for wide-scale biofuel production. However, Jatropha is no "wonder plant" that can grow on nutrient-poor ground. Like any other oil plant, Jatropha produces its best harvest on good agricultural land. It competes for agricultural land with food production, but cannot be eaten because it is poisonous. Jatropha can be a part of local energy production, but it cannot solve the basic problem of biofuel production and indirect land use changes.

It is good to make use of waste oils and fats, for example as raw materials for biofuel production. It is important to examine the whole life cycle of every potential raw material for biofuel production, so that besides the economic aspects, the ecological and social aspects are also considered properly. However, wide-scale biofuel production is not the solution for decreasing greenhouse gas emissions from the transport sector. On the contrary, recent scientific research reports that the GHG emissions from biofuel production may well exceed the benefits of its use. Real solutions for decreasing emissions from the transport sector can be found in the development of more energy-efficient equipment, railroad transport and other infrastructure, as well as in the production of electric cars.

Arguments moved to the Climate impacts of Jatropha (Jatropan ilmastovaikutukset) page (op_fi:Keskustelu:Jatropan viljelyn ilmastovaikutukset)

How to read discussions

Fact discussion:
Opening statement: Jatropha can be a source or a sink of greenhouse gases, depending on land use changes and the need for irrigation and fertilisation.

Closing statement: Resolved. Approved.

(Resolved, i.e., a closing statement has been found and updated to the main page.)

Argumentation:

←--1: . Recent scientific evidence shows that greenhouse gas emissions from the production of biofuels may well exceed the benefits of their use. These emissions from production result mainly from direct and indirect land use changes as well as from irrigation and fertilisation of farmland. (Maija Suomela) (type: truth; paradigms: science: defence)

←--2: . Jatropha can be a carbon sink if it does not take area from vegetation that captures more carbon, but is instead grown on grassland, savannah or land that has earlier been in agricultural production. If the production of Jatropha increases deforestation either directly or indirectly, the carbon balance can be negative, which means that more carbon is emitted than would have been saved compared to the use of fossil diesel. The more irrigation and fertilisers are needed, the more greenhouse gases are produced. (Maija Suomela) (type: truth; paradigms: science: defence)

Evaluation of assessments

Questionnaire

After the assessments were finished, the participants were contacted again and asked to evaluate the performance of the two biofuel assessments. All of the invited stakeholder groups, the primary users of the assessments (Neste Oil), the summer trainees who worked on the assessments, and the coordinators at THL were contacted by e-mail and asked to give numerical evaluations to the questions of a questionnaire on a scale of 1-5 (1 meaning bad and 5 meaning good). They were also asked to consider and argue the positive and negative aspects of the assessments and to provide textual comments to accompany the numerical values. Both assessments (Jatropha and fish waste) were analysed together, but the participants were asked to specify if their answers differed between the two assessments. The questions were:

Impact of the participation (Q1 only for the stakeholders)

1. Do You feel that Your contribution has been included in the assessment fairly?

The performance of the assessments (Q2-Q6 for all)

2. Does the assessment content correspond to the research question accurately, truthfully and comprehensively?
3. How well does the information provided by the assessment serve Your needs (or the needs of Your organization)?
4. Has the information provided by the assessment reached You and Your organisation?
5. Did Your understanding increase about the issue along with the assessment?
6. Is the assessment result (output), and the way it is obtained and delivered for use acceptable?

Efficiency (Q7 only for the primary users, here Neste Oil)

7. How good is the assessment output in relation to the resources used?

Questionnaire results and statistical analysis

The data and statistical analyses are below. The actual results of the run are here (key: JqZbwWSSFTArzKiH).

Numerical questionnaire results

File:Biofuels questionnaire results.csv
Group Respondent Question Result
Users Respondent 1 Question 1 NA
Users Respondent 2 Question 1 NA
Users Respondent 3 Question 1 NA
Stakeholders Respondent 4 Question 1 NA
Stakeholders Respondent 5 Question 1 3
Stakeholders Respondent 6 Question 1 3
Stakeholders Respondent 7 Question 1 NA
Coordinators Respondent 8 Question 1 NA
Coordinators Respondent 9 Question 1 NA
Coordinators Respondent 10 Question 1 NA
Summer trainees Respondent 11 Question 1 NA
Summer trainees Respondent 12 Question 1 NA
Users Respondent 1 Question 2 4
Users Respondent 2 Question 2 4
Users Respondent 3 Question 2 4
Stakeholders Respondent 4 Question 2 NA
Stakeholders Respondent 5 Question 2 3
Stakeholders Respondent 6 Question 2 4
Stakeholders Respondent 7 Question 2 NA
Coordinators Respondent 8 Question 2 3
Coordinators Respondent 9 Question 2 4
Coordinators Respondent 10 Question 2 3
Summer trainees Respondent 11 Question 2 4
Summer trainees Respondent 12 Question 2 4
Users Respondent 1 Question 3 4
Users Respondent 2 Question 3 4
Users Respondent 3 Question 3 4
Stakeholders Respondent 4 Question 3 4
Stakeholders Respondent 5 Question 3 1
Stakeholders Respondent 6 Question 3 3
Stakeholders Respondent 7 Question 3 NA
Coordinators Respondent 8 Question 3 4
Coordinators Respondent 9 Question 3 3
Coordinators Respondent 10 Question 3 4
Summer trainees Respondent 11 Question 3 NA
Summer trainees Respondent 12 Question 3 NA
Users Respondent 1 Question 4 3
Users Respondent 2 Question 4 3
Users Respondent 3 Question 4 4
Stakeholders Respondent 4 Question 4 5
Stakeholders Respondent 5 Question 4 1
Stakeholders Respondent 6 Question 4 3
Stakeholders Respondent 7 Question 4 NA
Coordinators Respondent 8 Question 4 NA
Coordinators Respondent 9 Question 4 4
Coordinators Respondent 10 Question 4 NA
Summer trainees Respondent 11 Question 4 NA
Summer trainees Respondent 12 Question 4 NA
Users Respondent 1 Question 5 5
Users Respondent 2 Question 5 2
Users Respondent 3 Question 5 3
Stakeholders Respondent 4 Question 5 1
Stakeholders Respondent 5 Question 5 1
Stakeholders Respondent 6 Question 5 3
Stakeholders Respondent 7 Question 5 NA
Coordinators Respondent 8 Question 5 4
Coordinators Respondent 9 Question 5 5
Coordinators Respondent 10 Question 5 5
Summer trainees Respondent 11 Question 5 5
Summer trainees Respondent 12 Question 5 5
Users Respondent 1 Question 6 5
Users Respondent 2 Question 6 4
Users Respondent 3 Question 6 5
Stakeholders Respondent 4 Question 6 NA
Stakeholders Respondent 5 Question 6 NA
Stakeholders Respondent 6 Question 6 4
Stakeholders Respondent 7 Question 6 NA
Coordinators Respondent 8 Question 6 3
Coordinators Respondent 9 Question 6 4
Coordinators Respondent 10 Question 6 4
Summer trainees Respondent 11 Question 6 4
Summer trainees Respondent 12 Question 6 5
Users Respondent 1 Question 7 5
Users Respondent 2 Question 7 4
Users Respondent 3 Question 7 5
Stakeholders Respondent 4 Question 7 NA
Stakeholders Respondent 5 Question 7 NA
Stakeholders Respondent 6 Question 7 NA
Stakeholders Respondent 7 Question 7 NA
Coordinators Respondent 8 Question 7 NA
Coordinators Respondent 9 Question 7 NA
Coordinators Respondent 10 Question 7 NA
Summer trainees Respondent 11 Question 7 NA
Summer trainees Respondent 12 Question 7 NA

Analyses

In addition to the characteristics of the numerical data presented in the manuscript heande:Evaluating effectiveness of open assessments on alternative biofuel sources, some further statistical tests were made to see whether additional information could be obtained from the small data set. The description of the tests, the code for their calculation, and the interpretation of the results are given below.

The medians of each question, as well as the respondent averages for questions 3-6 (applicability) and questions 1-7 (effectiveness), were tested for deviation from the value 3 with the Wilcoxon signed-rank test. In the two instances (users) where one respondent gave different numerical values to the two assessments on the same question, the average was calculated and rounded up to the nearest integer in order to retain the ordinality of the data. The differences between respondent groups for each question were analysed with an ordered logit model. The respondent averages for questions 3-6 (applicability) and questions 1-7 (effectiveness) were calculated using all existing values and omitting missing values; these averages were interpreted as continuous and normally distributed variables, and the differences between groups were tested with one-way ANOVA and a Tukey post-hoc test. The significance level in all tests was set to p < 0.05, and the analyses were made with R 2.14.2. The results should only be interpreted as providing some indication of possible variation between questions, question sets, and participant groups.

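The actual run code is stored on the Opasnet server (see the result key above) and is not reproduced here. The following is a minimal R sketch of the analyses described above, assuming the data are read from a CSV file with the columns Group, Respondent, Question and Result as in the table below; the file name is hypothetical.

# A minimal R sketch (not the original run code) of the analyses described above.
# Assumption: a CSV file with the columns Group, Respondent, Question and Result;
# the file name is hypothetical. The rounding of averaged answers for the two
# respondents who evaluated the assessments differently (see above) is omitted here.
library(MASS)  # polr() for the ordered logit model

dat <- read.csv("Biofuels_questionnaire_results.csv", stringsAsFactors = FALSE)
dat$Group <- factor(dat$Group)

# Wilcoxon signed-rank test for deviation from the value 3, separately for each question.
# try() is used because questions with no variation (e.g. question 1, both answers 3)
# cannot be tested.
for (q in sort(unique(dat$Question))) {
  x <- na.omit(dat$Result[dat$Question == q])
  cat("\n", q, "\n")
  try(print(wilcox.test(x, mu = 3)))
}

# Respondent averages over questions 3-6 (applicability) and 1-7 (effectiveness),
# omitting missing values, and the same test for these averages.
appl <- aggregate(Result ~ Respondent + Group,
                  data = subset(dat, Question %in% paste("Question", 3:6)), FUN = mean)
eff <- aggregate(Result ~ Respondent + Group, data = dat, FUN = mean)
print(wilcox.test(appl$Result, mu = 3))
print(wilcox.test(eff$Result, mu = 3))

# Ordered logit model for differences between respondent groups, one question at a
# time (question 5 on usability shown as an example).
q5 <- subset(dat, Question == "Question 5" & !is.na(Result))
q5$Result <- factor(q5$Result, ordered = TRUE)
summary(polr(Result ~ Group, data = q5, Hess = TRUE))

# One-way ANOVA and Tukey post-hoc test for group differences in the respondent averages.
fit <- aov(Result ~ Group, data = eff)
summary(fit)
TukeyHSD(fit)

With groups this small, polr() and the post-hoc tests may produce warnings or fail to converge; this is in line with the caveat above that the results should only be read as indicative.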

Altogether, evaluations were received from 12 respondents: three from primary users, four from stakeholders (one with only textual comments), two from summer trainees, and three from assessment coordinators. The numerical results of the evaluations asked from all participants provide an indication of possible variation between questions and question sets. The range of possible values in the numerical evaluation was 1-5. The characteristics and the results of the Wilcoxon signed-rank test are given in Table 2.

Table 2. Characteristics of the numerical data from the evaluation questionnaires and results of the Wilcoxon signed-rank test. * = significant (p < 0.05) deviation from value 3.
Question type | Question | N | Min | Max | Mean | Variance | Median
Influence of participation (stakeholders only) | 1. Are stakeholder comments included well in assessment? | 2 | 3 | 3 | 3.00 | 0.000 | 3
Quality of content | 2. Is assessment question answered accurately, truthlikely and comprehensively? | 10 | 3 | 4 | 3.70 | 0.233 | 4*
Applicability | 3. Relevance: Does assessment serve well your (organization's) knowledge needs? | 9 | 1 | 4 | 3.44 | 1.028 | 4
Applicability | 4. Availability: Has assessment content reached you (your organization)? | 7 | 1 | 5 | 3.29 | 1.571 | 3
Applicability | 5. Usability: Has assessment influenced your understanding on the assessed issues? | 11 | 1 | 5 | 3.55 | 2.673 | 4
Applicability | 6. Acceptability: Was assessment made in a good and acceptable way? | 9 | 3 | 5 | 4.22 | 0.444 | 4*
Applicability | Questions 3-6 average | 11 | 1.00 | 5.00 | 3.59 | 1.072 | 3.67*
Efficiency (primary users only) | 7. How good is assessment output given the resources used? | 3 | 4 | 5 | 4.67 | 0.33 | 5
Effectiveness | Questions 1-7 average | 11 | 1.80 | 4.70 | 3.72 | 0.609 | 4.0*

As Table 2 shows, the means and medians of the numerical evaluations for all questions are at least 3. Of these, question 2 on quality of content and question 6 on acceptability, as well as the average for questions 3-6 on applicability and the average for all questions 1-7 (effectiveness), are statistically significantly higher than the value 3 according to the Wilcoxon signed-rank test. In addition, the median for the question on efficiency is almost statistically significantly (p = 0.102) higher than the value 3.

On a general level, the numerical results can be interpreted to indicate that the participants evaluated the assessments as at least moderately effective across all questions. Looking at the data, it seems that the external participants, i.e. users and stakeholders, were slightly more critical in their evaluations than the internal participants, i.e. summer trainees and assessment coordinators, regarding some questions. The data set is, however, too small for this to be tested reliably with statistical analysis (for more, see http://en.opasnet.org/w/Talk:Biofuel_assessments#Analyses).

The ordered logit model for testing differences between respondent groups for individual questions showed that stakeholders (mean 1.7) and users (mean 3.3) gave statistically significantly lower values than coordinators (mean 4.7) and summer trainees (mean 5) for the question on usability. The one-way ANOVA and Tukey post-hoc tests for differences between respondent groups in the averages of the applicability questions (3-6) and of all questions (effectiveness) show almost statistically significant differences between summer trainees and stakeholders for both (4.8 vs 2.5, p = 0.07, and 4.5 vs 2.8, p = 0.05, respectively).

On a general level, the statistical analyses can be interpreted as indicating that the participants evaluated the assessments as at least moderately effective across all questions. However, it appears that the external participants, i.e. users and stakeholders, were slightly more critical in their evaluations than the internal participants, i.e. summer trainees and assessment coordinators, at least regarding some questions. The findings regarding specific questions and participant groups are discussed in more detail below in terms of openness, quality of content, applicability and efficiency.

The overall evaluation of the assessments and the related conclusions are explained and discussed in more detail in the manuscript heande:Evaluating effectiveness of open assessments on alternative biofuel sources.