
CHAPTER NO. 6.14

Pefile S. 2007. Monitoring, Evaluating, and Assessing Impact. In Intellectual Property Management in Health and Agricultural Innovation: A Handbook of Best Practices (eds. A Krattiger, RT Mahoney, L Nelsen, et al.). MIHR: Oxford, U.K., and PIPRA: Davis, U.S.A. Available online at www.ipHandbook.org.

© 2007. S Pefile. Sharing the Art of IP Management: Photocopying and distribution through the Internet for noncommercial purposes is permitted and encouraged.

Monitoring, Evaluating, and Assessing Impact

Sibongile Pefile, Group Manager, R&D Outcomes, Council for Scientific and Industrial Research (CSIR), South Africa


Abstract

Much has been written about the socio-economic benefits and competitive advantage achieved by developed countries as a result of investing in scientific research and technological innovation. For developing and emerging economies, sustainable development is dependent on establishing and supporting R&D institutions that not only perform good science, but also effectively share their knowledge and technology outputs. Both the extent to which a return on an investment is realized from R&D activities and the magnitude of the resulting impact on intended beneficiaries are important to funders, policy-makers, taxpayers, government officials, development agencies, and the research institutions themselves. This chapter provides guidance on building organizational capacity to plan, monitor, evaluate, and assess the impact of R&D investments. It should be noted that the chapter does not address measuring the performance of a technology transfer office in managing intellectual property, but rather focuses on determining the socio-economic impact of transferred knowledge and technology.

1. Introduction

Much has been written about the socio-economic benefits and competitive advantage that developed countries achieved by investing in scientific research and technological innovation.1 For developing and emerging economies, it is recognized that sustainable development depends on establishing and supporting R&D institutions that both perform good science and share their knowledge and technology outputs.2 A return on R&D investment, and the magnitude of that return, is important to policy-makers, taxpayers, government officials, development agencies and, of course, those funding the research and the research institutions themselves. This chapter provides guidance on building organizational capacity to plan, monitor, evaluate, and assess the impact of R&D investment on society and in the market. It should be noted that the chapter does not evaluate the performance of technology transfer offices in managing intellectual property, but rather focuses on determining the socioeconomic impact of transferred knowledge and technology.

R&D institutions in developing countries operate with limited financial resources for R&D and even less funding for technology and knowledge transfer. The socio-economic challenges experienced by developing countries put more pressure on R&D institutions, requiring them to effectively and efficiently address local social and economic development needs through the transfer and adoption of innovative science. To this end, a key responsibility of research institutions in developing countries is to make research outputs available for use by society and local industry. It is therefore critical that research institutions not only generate relevant research, but also transfer and diffuse research results in a way that maximizes impact. A well-developed and comprehensive monitoring, evaluation, and impact assessment framework is necessary to measure efforts by institutions to meet R&D objectives. Such a framework can assist research institutions in:

  • improving the efficiency of research resource allocation
  • improving the standard and effectiveness of project decision-making
  • directing future research plans more effectively
  • obtaining evidence of resource mobilization
  • prioritizing research based on the level of economic returns and positive social impact

Technological innovation transforms an idea generated during research into a new or improved product that can be introduced into a market, a new or improved operational process used in industry and commerce, or a new approach to a social service.3 Monitoring, evaluation, and impact assessment should be conducted throughout the R&D continuum described below:

  • research and technology generation. This includes basic research, applied research, and experimental development.
  • technology development. During this stage, knowledge from research is combined with practical experience to direct the production of a new product.
  • technology adaptation. This typically entails piloting the technology and simulating real-life conditions for its production.
  • technology transfer. An important component of technology transfer is IP (intellectual property) management. Typically, institutions manage IP protection, routes to commercialization or transfer, and contractual arrangements that facilitate the transfer of intellectual property from the lab to the market.
  • technology adoption and diffusion. This stage of the process is key, for it signifies the point at which products transferred to the market achieve depth of use and spread widely. Technology adoption is measured at one point in time and is associated with the use of the transferred technology; technology diffusion is the spread of a technology across a population over time.

A robust monitoring, evaluation, and impact assessment framework should demonstrate transparency and confer accountability. It is therefore important that systems enable institutions to document, analyze, and report on research and technology transfer performance effectively.

2. The Framework

There are different methodologies and processes for monitoring, evaluation, and impact assessment. An impact assessment study can be customized and structured to suit the information and reporting requirements of an institution and its stakeholders. Figure 1 illustrates a comprehensive monitoring, evaluation, and impact assessment framework. (The components of the diagram are described in greater detail in subsequent sections of this chapter.)

Figure 1: The Planning, Monitoring, and Evaluation Cycle

2.1 Diagnosis

In many developing countries, the public expects research institutions to provide solutions to health, food security, sanitation, water, poverty, and environmental challenges. As institutions invest their limited resources in these important areas, their research efforts must be focused so that the resulting impact on society and the economy is optimal. Institutions, therefore, must be able to articulate the problem that the science sets out to address. The needs assessment conducted at the start of a project defines the problem and provides baseline data for the ex ante evaluation. At the diagnosis stage of the process, questions should include:

  • Who is responsible for collecting performance information?
  • What information is being collected?
  • When and how often is the performance measure reported?
  • How is the information reported?
  • To whom is the performance measure reported?

The needs assessment should also seek to determine:

  • What is the nature and scope of the problem requiring action?
  • What intervention may be made to ameliorate the problem?
  • Who is the appropriate target population for the intervention?

The outcome of the diagnosis should be a document that:

  • defines baseline information
  • sets project targets
  • states assumptions
  • specifies measurement indicators
  • can be tied to the ex post evaluation, that is, the evaluation conducted after the project has ended

2.2 Planning

Once the problem has been identified, a plan should be drawn up to explain how the research will address the challenges. A logical framework can be used to structure the various activities and specify means and ends. Information in a logical framework should include:

  • why a project is being conducted
  • what a project is expected to achieve
  • how the project is going to achieve these results
  • what external factors are crucial for the success of the project
  • how the success of the project can be assessed
  • where the data required to assess the success of the project can be found
  • what the project will cost

This information is then used to complete a matrix that summarizes the information required both to design and to evaluate the activity. Table 1 illustrates such a matrix.

TABLE 1: LOGICAL FRAMEWORK STRUCTURE

Each row of the narrative summary (inputs, outputs, purpose, goal) is completed with objectively verifiable indicators (OVI), means of verification (MOV), and important assumptions:

Inputs
  • OVI: nature and level of resources; necessary cost; planned starting date
  • MOV: sources of information
  • Assumptions: initial project assumptions

Outputs
  • OVI: magnitude of outputs; planned completion date
  • MOV: sources of information; methods used
  • Assumptions: assumptions affecting the input-output linkage

Purpose
  • OVI: end-of-project status
  • MOV: sources of information; methods used
  • Assumptions: assumptions affecting the output-purpose linkage

Goal
  • OVI: measures of goal achievement
  • MOV: sources of information; methods used
  • Assumptions: assumptions affecting the purpose-goal linkage

A logical framework (logframe) is a useful tool for the assessor and has the following advantages:

  • It makes the project appraisal transparent by explicitly stating the assumptions underlying the analysis and by allowing a check on the proposed hypotheses and expected results in an ex post analysis.
  • It deals explicitly with a multitude of social goals and does not require reducing the benefits into one figure.
  • It is understandable to nonscientists. It can therefore be used as a tool to clarify the trade-off among objectives and, thus, specify the decision-making process.
  • It is flexible with regard to information and skill requirements. It can incorporate social cost-benefit analysis, input-output tables, and partial models, and it can also be used with rudimentary information and skills, albeit at the cost of more hypotheses and uncertainties.
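To make the structure in Table 1 concrete during monitoring, a logframe row can be recorded as simple structured data and compared against actuals as the project proceeds. The following Python sketch is illustrative only; the class, field names, and sample entry are hypothetical and are not part of this chapter's framework.

```python
from dataclasses import dataclass

@dataclass
class LogframeRow:
    """One row of a logical framework (inputs, outputs, purpose, or goal)."""
    level: str                  # "inputs", "outputs", "purpose", or "goal"
    narrative_summary: str      # what this level is expected to deliver
    indicators: list[str]       # objectively verifiable indicators (OVI)
    verification: list[str]     # means of verification (MOV)
    assumptions: list[str]      # important assumptions at this level

# Hypothetical example: an outputs row for a crop-technology project
outputs_row = LogframeRow(
    level="outputs",
    narrative_summary="Drought-tolerant seed variety released to seed producers",
    indicators=["number of certified seed lots produced", "planned completion date met"],
    verification=["seed certification records", "project progress reports"],
    assumptions=["seed multiplication partners remain funded"],
)

logframe = [outputs_row]  # plus corresponding inputs, purpose, and goal rows
```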

2.3 Implementation

Implementation is the actual evaluation; it entails data collection, analysis, and reporting. Evaluation is the systematic assessment of a situation at a given point in time, whether that point is in the past, the present, or the future. Put another way, an evaluation is the periodic and systematic assessment of the relevance, performance, efficiency, quality, and impact of a project in relation to set objectives and goals. Evaluation seeks to investigate and determine whether:

  • the intervention is reaching the intended target audience
  • the intervention is being implemented as envisioned
  • the intervention is effective
  • the costs of the intervention, relative to its effectiveness, are lower than the benefits it delivers

Different monitoring and evaluation systems can be used. The method chosen mainly depends on the following considerations:

  • What should be measured? The evaluation should be based on the project design. Stakeholders should agree about how the crucial project issues should be measured.
  • For whom should it be measured? The users of the evaluation results should be identified and the results should correspond to their expectations.
  • For what purpose should it be measured? This determines the sensitivity of the measures and the degree of accuracy needed.
  • How should it be measured? Consensus is needed between the evaluator and program/project managers on whether a proposed measure truly indicates a change in the desired direction.
  • How should the data be collected? The design of the evaluation system should be determined and the desired level of accuracy in the information agreed upon.
  • When and in what form is the information needed? It should be available when needed in a usable format.
  • Who collects, analyzes, and presents the information? This is necessary to adapt the monitoring and evaluation system to the management realities of a program/project. Managers should not underestimate the time needed to analyze and present the information.

The specific questions that an effective evaluation should answer are:

  • Is the program effective in achieving its intended goals?
  • Can the results of the program be explained by alternative factors that are unrelated to the program?
  • Does the program have effects that were not intended?
  • What are the costs of delivering services and benefits to program participants?
  • Is the program an efficient use of resources?

Deciding which evaluation process to use depends on numerous factors, such as set objectives, available time, skills, and resources. To guide your choice, Table 2 summarizes data collection designs and their different characteristics.

TABLE 2: DATA COLLECTION DESIGNS AND THEIR CHARACTERISTICS

Each evaluation design is compared on cost, reliability, required technical expertise, the type of evaluation primarily adapted to the design, the ability to measure what is happening, and the ability to exclude rival hypotheses.

  • Case study, one measurement (actual vs. planned): cost: low; reliability: very low; technical expertise: low; evaluation type: reporting; ability to measure what is happening: very low; ability to exclude rival hypotheses: nonexistent
  • Case study, two measurements (before and after): cost: medium; reliability: low; technical expertise: low; evaluation type: process evaluation; ability to measure what is happening: good; ability to exclude rival hypotheses: low
  • Time series design (prior trend vs. actual): cost: relatively low, if feasible; reliability: medium; technical expertise: medium; evaluation type: impact evaluation; ability to measure what is happening: very good; ability to exclude rival hypotheses: medium
  • Case study with one measurement and a control group (with and without): cost: medium; reliability: low; technical expertise: low; evaluation type: formative evaluation; ability to measure what is happening: low; ability to exclude rival hypotheses: low
  • Quasi-experimental design: cost: relatively high (variable); reliability: relatively high (variable); technical expertise: relatively high; evaluation type: impact evaluation; ability to measure what is happening: very good; ability to exclude rival hypotheses: good (variable)
  • Experimental design: cost: expensive; evaluation type: evaluation research; ability to measure what is happening: very good; ability to exclude rival hypotheses: very good

Typically, data collection methods include checklists, scoring models, cost-benefit analyses, surveys, and case studies. The best approach is to use several different methods in combination, balancing quantitative and qualitative information.
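To illustrate the cost-benefit element of such analyses, the sketch below computes a net present value (NPV) and a benefit-cost ratio for a hypothetical project. The cash flows and discount rate are invented for the example; in practice they would come from the project's baseline and monitoring data.

```python
def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value of cash flows, where cash_flows[t] occurs in year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical project: costs in years 0-2, benefits from adoption in years 3-7
costs = [120_000, 80_000, 40_000, 10_000, 10_000, 10_000, 10_000, 10_000]
benefits = [0, 0, 0, 60_000, 90_000, 120_000, 120_000, 120_000]
rate = 0.08  # assumed social discount rate

net_flows = [b - c for b, c in zip(benefits, costs)]
bcr = npv(rate, benefits) / npv(rate, costs)  # benefit-cost ratio
print(f"NPV = {npv(rate, net_flows):,.0f}, benefit-cost ratio = {bcr:.2f}")
```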

Ongoing monitoring and evaluation processes measure:

  • technical aspects: physical input-output of goods and services
  • institutional aspects: organizational and managerial aspects, including customs, tenure, local organizations, and cultural setting
  • socio-cultural aspects: broader social implications, resource and income distribution, and employment opportunities
  • commercial aspects: business and financial, securing supplies, and market demand
  • economic aspects: economic efficiency, costs and benefits
  • environmental aspects: biological and physical effects

2.4 Rediagnosis and replanning

Should the results of a monitoring and evaluation exercise indicate that a project is not going according to plan, then rediagnosis and replanning are required. Rediagnosis and replanning require the measurement process to be continually improved, and changes in the measurement process should be aligned with changing needs and priorities.4 Program replanning and rediagnosis may also require going back to prior steps in the planning process to review whether:

  • the problem is well defined and described
  • the objectives are adequately implemented
  • a revised-impact model has been developed
  • the target population has been redefined
  • the delivery system has been redesigned
  • there are revised plans for monitoring impact and efficiency

Research programs are dynamic, and evaluations should take this into consideration. Naturally, the longer the research project lasts, the greater the likelihood that a given project will require modification and adjustment. Table 3 summarizes the design, implementation, and assessment requirements of research projects at different stages of maturation.

TABLE 3: AN ASSESSMENT PLANNING GUIDE

  INNOVATIVE PROGRAMS ESTABLISHED PROGRAMS FINE-TUNING
CONCEPTUALIZING
  • problem description
  • operationalizing objectives
  • developing intervention models
  • defining extent and distribution of target population
  • specifying delivery system
  • determining capacity for evaluation
  • developing evaluation model
  • identifying potential modification opportunities
  • determining accountability requirements
  • identifying needed program changes
  • redefining objectives
  • designing program modifications
IMPLEMENTING
  • formative research and development
  • implementation monitoring
  • program monitoring and accountability studies
  • R&D program refinements
  • monitoring program changes
ASSESSING
  • impact studies
  • efficiency analyses
  • impact studies
  • efficiency analyses
  • impact studies
  • efficiency analyses

2.5 Ex post evaluations

These take place at the end of a research project, when the effects and results of the project can be tracked and used in adoption studies. At this stage, the evaluation:

  • assesses the project’s performance, quality, and relevance, immediately after its completion
  • works best when a pre-project baseline had been defined, targets projected, and data collected on important indicators
  • is often done by professional and external evaluators
  • requires that classical criteria be broadened to include user satisfaction
  • should be an integral part of project implementation
  • demands advanced preparation
  • uses a blend of interviews, field visits, observations, and available reports
  • provides lessons that can be systematically incorporated into future activities, for example ex ante evaluation, as well as project planning
  • is usually only done for more important, innovative, or controversial projects

Essentially, ex post evaluations determine impact and are used to demonstrate accountability. The evaluations sum up the lessons learned from the project. They provide a firm foundation for future planning and for establishing the credibility of public sector research. They can also be used to justify an increased allocation of resources.

2.6 Recommendations

The recommendations that arise from evaluation studies should be grounded in an assessment of the information collected. Evaluations should also review:

  • what turned out differently than expected
  • which part of the strategy produced the desired results and which did not
  • whether a cross-section of views were sought and accommodated
  • with whom the findings need to be shared
  • in what form the results should be presented

There are various uses for evaluation findings. The outcomes of an evaluation can be categorized into three basic types: direct, indirect, and symbolic.5 Evaluation outcomes are direct when information or findings are applied directly to alter decisions, resulting in an operational application. Indirect use refers to a more intellectual, gradual process, in which the decision maker gleans a broader sense of the problems addressed by a project or program. Indirect use of study results produces a strategic or structural application of outcomes. Symbolic use refers to situations where the evaluation results are accepted on paper but go no further. Unfortunately, many evaluation studies end up as symbolic initiatives. It is imperative that technology transfer assessments do not end up simply as academic exercises. When an assessment is not practically applied or used, not only is the effort wasted, but future programs may continue to repeat mistakes and waste money.

2.7 Impact assessment

An impact-assessment study aims to determine causality and to establish the extent of improvement for the intended beneficiaries. Impact assessments are time sensitive and, therefore, studies should be conducted periodically throughout the duration of a project. An impact study should measure the rate of adoption for technologies that have been made available for social or industry use. Such studies should assess the technology’s level of use by targeted beneficiaries and estimate the benefits of R&D investments. By following these guidelines, impact studies should be able to determine the impact of technology generation and transfer. Impact assessments should also seek to measure both intended and unintended outcomes, taking into account behavioral change among potential users and beneficiaries. The resulting effect on productivity and quality of life should be measurable and, therefore, evaluated and reported.

When conducting an impact study, the impact is assessed by gathering information on the number of users, degree of adoption, and the effect of the technology on production costs and outputs. Studies should be conducted at different levels (for example, household; target population; regional and national; and primary, secondary, or economy-wide sector levels).
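A simple way to quantify adoption and diffusion from survey data is sketched below: adoption at a point in time is the share of the target population using the technology, and diffusion is that share tracked over successive surveys. The survey figures are hypothetical and serve only to show the arithmetic.

```python
def adoption_rate(adopters: int, target_population: int) -> float:
    """Share of the target population using the technology at one point in time."""
    return adopters / target_population

# Hypothetical adoption surveys of the same target population over several years
target_population = 5_000
surveys = {2003: 150, 2004: 600, 2005: 1_800, 2006: 3_100}  # year -> adopters

diffusion = {year: adoption_rate(n, target_population) for year, n in surveys.items()}
for year, rate in diffusion.items():
    print(f"{year}: {rate:.0%} of the target population had adopted the technology")
```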

There are different types of impacts. Production and economic impact measure the extent to which the project addresses:

  • risk reduction
  • yield increases
  • cost reduction
  • reduction in necessary inputs
  • employment creation
  • implications for other sectors of the economy

Socio-cultural impact measures the extent to which the project contributes to:

  • food security
  • poverty reduction
  • status of women
  • increases in knowledge and skill level
  • number and types of jobs
  • distribution of benefits across gender and geographical locations
  • changes in resource allocation
  • changes in cash requirement
  • changes in labor distribution
  • nutritional implications

Environmental impact measures the project’s effects on:

  • soil erosion and degradation
  • silting
  • soil compaction
  • soil contamination
  • water contamination
  • changes in hydrological regimes
  • effects on biodiversity
  • air pollution
  • greenhouse gases

Institutional impact measures effects on:

  • changes in organizational structure
  • change in the number of scientists
  • change in the composition of the research team
  • multidisciplinary approaches and improvements
  • changes in funding allocated to the program
  • changes in public and private sector participation
  • new techniques or methods

2.8 Tools

Different tools are used to measure performance over time. These include (1) secondary analysis of data, (2) the screening of projects and research orientations by peers and experts in the field, (3) qualitative descriptions of case studies and anecdotal accounts, and (4) matrix approaches, which provide rich information and help to rationalize and simplify choices.

Examples of the matrix approach include:

  • systemic methods, which can be used to implement an evaluation (though this method is not really suitable for evaluation on its own and can be very difficult to implement)
  • financial methods, namely cost-benefit measures that take into account marketable outputs and commercial resources (the necessary information is often difficult to collect, and some factors cannot be assessed in financial terms)
  • technological forecasting methods, which entail the use of scenario methods and allow the causality chain to be reversed (this method also allows for forecasting and takes into account social transformations)
  • quantitative indicators, for example science and technology indicators and measurement, pure descriptiveness, and selection integration (indicators provide fundamental measures of scientific output)
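One simple form of the scoring and matrix approaches mentioned above is a weighted-criteria score per project. The sketch below shows only the arithmetic; the criteria, weights, and scores are hypothetical, and the choice of weights is itself a subjective judgment (a weakness noted for integrated partial indicators in Table 6).

```python
# Hypothetical weighted-scoring matrix for prioritizing projects.
# Scores are on a 1-5 scale; weights sum to 1.
weights = {"economic return": 0.4, "social impact": 0.3,
           "technical feasibility": 0.2, "environmental risk (inverted)": 0.1}

projects = {
    "Project A": {"economic return": 4, "social impact": 3,
                  "technical feasibility": 5, "environmental risk (inverted)": 2},
    "Project B": {"economic return": 2, "social impact": 5,
                  "technical feasibility": 3, "environmental risk (inverted)": 4},
}

for name, scores in projects.items():
    total = sum(weights[criterion] * scores[criterion] for criterion in weights)
    print(f"{name}: weighted score = {total:.2f}")
```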

To help select the most appropriate study method, Table 4 maps the desired impact of a study against the assessment method and technique.

TABLE 4: IMPACT ASSESSMENT METHODS AND TECHNIQUES

  • Intermediate impact (institutional changes; changes in the enabling environment): method: survey, monitoring; technique: simple comparison/trend analysis
  • Direct product of research: method: effectiveness analysis using the logical framework; technique: simple comparison, target vs. actual
  • Economic impact (micro, macro, spillovers): method: econometric approach, surplus approach; technique: production function, total factor productivity, index number methods, and derivatives
  • Socio-cultural impact: method: socioeconomic survey/adoption survey; technique: comparison over time
  • Environmental impact: method: environmental impact assessment; technique: various (qualitative and quantitative)
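For the econometric approach listed above (production function, total factor productivity), one widely used specification relates output to conventional inputs and a lagged stock of R&D expenditure. The equation below is an illustrative sketch of that approach, not a model prescribed by this chapter:

\[ \ln Y_t = \alpha + \beta \ln K_t + \gamma \ln L_t + \delta \ln R_{t-k} + \varepsilon_t \]

where \(Y_t\) is output, \(K_t\) and \(L_t\) are conventional capital and labor inputs, \(R_{t-k}\) is the accumulated stock of R&D expenditure with an assumed lag \(k\), and the estimated elasticity \(\delta\) can be converted into an approximate gross rate of return to research as \(\delta \cdot (Y/R)\).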

2.9 Indicators

Developing indicators is a critical step in the evaluation process. Ultimately, indicators drive impact assessment and influence how the assessment is conducted. In summary, there are three evaluation methods used to assess impact. These can be (1) qualitative, such as peer review, (2) semiquantitative, such as tracking scientific evidence, or (3) quantitative, such as econometric measures. The evaluation method selected should depend on the evaluation objectives of the study and the needs of each stakeholder (Table 5). The strengths and drawbacks of each tool are presented in more detail in Table 6 (at the end of this chapter).

TABLE 5: A SUMMARY OF THE EVALUATION NEEDS OF DIFFERENT STAKEHOLDERS7

  • Review of the entire system: policy-makers, donors, research managers/program leaders, and researchers
  • In-depth review of a component: donors, research managers/program leaders, and researchers
  • Ex ante evaluation of a program/project: donors, research managers/program leaders, and researchers
  • Ongoing evaluation/monitoring of research activities: donors, research managers/program leaders, and researchers
  • Ex post evaluation of a research program/project: donors, research managers/program leaders, and researchers
  • Impact assessment: policy-makers, donors, research managers/program leaders, and researchers

TABLE 6: COMPARISON OF ASSESSMENT TOOLS

Modified peer review (R&D time frame: past, ongoing, and future; R&D type: all)
  • Strengths: relatively easy to organize; can provide valuable information on potential impacts; probably the best method for basic/strategic R&D; low to medium cost
  • Weaknesses: relies on the opinions of a small number of people; qualitative information only

User surveys (R&D time frame: past and ongoing; R&D type: applied R&D)
  • Strengths: overcomes the problem of a small number of respondents; possible to develop quantitative indices; medium cost
  • Weaknesses: structuring the survey and analyzing the results can be tricky; often requires considerable time to identify users, develop survey methodology, and analyze results

Benefit-cost methods (R&D time frame: past, and in certain circumstances ongoing and future R&D; R&D type: applied R&D)
  • Strengths: can provide reasonably defensible estimates of potential benefits; provides a structure and a framework for assessing R&D projects that forces the right questions to be asked
  • Weaknesses: can be very time consuming and labor intensive; results are critically dependent on assumptions that can be highly uncertain; because of cost and time requirements, can only be used for a limited number of projects; relative cost is high; data collection requirements are demanding

Cost-effectiveness analysis (R&D time frame: future, and to a certain extent past; R&D type: applied R&D)
  • Strengths: simplest; does not require benefit information; medium cost
  • Weaknesses: there is nothing to prove that any of the alternatives can yield benefits over and above costs; if one of the alternatives costs less but produces a lower-quality product or has a different impact, the assessment becomes more complicated

Case studies (R&D time frame: past; R&D type: applied R&D)
  • Strengths: can provide good illustrations of the relationship between R&D and its impacts; probably the best method for basic/strategic R&D; medium cost
  • Weaknesses: generally there is no way to add up the results of a group of case studies to obtain a measure of the total impact of the group; the results cannot be extrapolated to other R&D projects that are not in the group

Partial indicators (R&D time frame: past, ongoing, and future; R&D type: all)
  • Strengths: the information required to specify the indicators is relatively easy to collect; probably the best method for ongoing monitoring; low relative cost
  • Weaknesses: the individual indicators can generally only be added up on a subjective basis, making overall impact assessment more difficult; provides only a very partial picture of impacts

Integrated partial indicators (R&D time frame: future; R&D type: applied R&D)
  • Strengths: an easy but structured way to identify research priorities; forces decision makers to explicitly consider the key determinants of impacts; low relative cost
  • Weaknesses: relies heavily on the judgment of a few individuals; there is a potential for bias in assigning weights to different criteria

Mathematical programming (R&D time frame: past, ongoing, and future; R&D type: applied R&D)
  • Strengths: more powerful and sophisticated; enables one to select an optimal portfolio; can handle simultaneous change in many variables
  • Weaknesses: demanding in terms of data requirements; high relative cost; not particularly useful for evaluating too diverse a set of R&D projects; if either the criteria or constraints are not well defined, there is a risk of arriving at a nonsensical “optimal” solution

Simulation method (R&D time frame: past and future; R&D type: applied R&D)
  • Strengths: flexible; can be used to estimate the optimal level of research at national, commodity, or program level; can estimate the effect of research on prices, income, employment, or other parameters; can handle simultaneous change in many variables
  • Weaknesses: to be useful, it must accurately reflect the relationship between technological advancement and economic development; requires an extensive amount of time to construct and validate data; medium to high relative cost

Production function approach (R&D time frame: past; R&D type: applied R&D)
  • Strengths: offers a more rigorous analysis of the impact; estimates marginal rates of return; statistically isolates the effects of R&D from other complementary inputs and services
  • Weaknesses: uncertainty in projecting past rates of return to the future; demanding in terms of data; selection of a suitable functional form can be difficult; serious econometric problems may be involved; relative cost is high

3. Challenges and Key Success Factors

Monitoring, evaluation, and impact assessment is a complex field. The conditions, methodologies, and projects described here present various challenges that need to be factored into the evaluation and impact study. These challenges include the relatively unpredictable nature of research and technology transfer events. Certain research outcomes are discrete and are thus difficult to measure, track, and document. Moreover, there is no single, accurate method to objectively evaluate R&D performance.

There are also institutional challenges. Effective communication between stakeholders can be a problem, partly because of the difficulty of maintaining data quality. And because assessments tend to focus on measuring more immediate, short-term benefits, there is the risk of overlooking some of the longer-term benefits of R&D. This issue is also related to determining the frequency of assessment studies. For example, the European Union has adopted a system that calls for three impact assessment studies: an ex ante study at the start of the project, a project-end assessment, and an ex post study three years after the completion of the project.6 The frequency of the study may affect its temporal focus.

Of course, without establishing the commitment and resources to collect, process, store, and make accessible key performance data, nothing can be accomplished. Technology transfer managers need to develop the infrastructure necessary to have valid and reliable performance information and use this data for decision-making. They should take the time to develop a shared understanding with funders about the role of public R&D within the national innovation system. Such efforts may make it possible to alleviate shortages of essential financial, human, and knowledge resources.

 

It is essential to identify the key factors that, if in place, will improve the effectiveness of an assessment framework. Managers must strive to have in place as many of the following key success factors as possible:

  • leadership commitment
  • a desire for accountability
  • a conceptual framework
  • strategic alignment
  • knowledgeable and trained staff members
  • effective internal and external communication
  • a positive and not punitive culture
  • rewards linked to performance
  • effective data processing systems
  • a commitment to and plan for using performance information
  • adequate resources and the authority to deploy them effectively.

4. Conclusion

An effective evaluation system should strengthen an institution’s ability to maintain leadership across the frontiers of scientific knowledge. The system should enhance connections between fundamental research and national goals, such as improved health, environmental protection, prosperity, national security, and quality of life. Such an evaluation system also will stimulate partnerships that promote investments in fundamental science and engineering, as well as the overall more effective use of physical, human, and financial resources for social and economic benefit.

As a way of benchmarking progress, it is helpful to examine how other organizations measure impact. Impact measures are a sure way of knowing that science is delivering on its objectives and that R&D projects are having their intended effect. Without a measurement process, institutions cannot justify their efforts in R&D, IP management, commercialization, and technology transfer in relation to their economic and social goals.

Finally, it is essential to take the time to digest, reflect upon, and learn from an impact-assessment process. Lessons can be learned from both successes and mistakes, and these lessons should not only be used to take corrective action but also to improve future performance.

Endnotes

All referenced Web sites were last accessed between 1 and 10 October 2007.

1 Macfarlane M and J Granowitz. 2002. Report to Science Foundation Ireland: Technology Transfer for Publicly Funded Intellectual Property. Columbia University: New York; Rivette K and D Kline. 2000. Rembrandts in the Attic–Unlocking the Hidden Value of Patents. Harvard Business School Press: Cambridge, Mass.

2 Idris K. 2003. Intellectual Property: A Power Tool for Economic Growth. World Intellectual Property Organization, WIPO Publication No. 888; Alikhan S. 2000. Socio-Economic Benefits of Intellectual Property Protection in Developing Countries. World Intellectual Property Organization, WIPO Publication No. 454(E); Dickson D. 2007. Technology Transfer for the Poor (editorial), SciDev.Net, 16 January 2007; Moreira MA. 2007. Technology Transfer Must be Relevant to the Poor (opinion), SciDev.Net, 16 January 2007. www.SciDev.Net (click on Dossiers, Technology Transfer).

3 Main definitions and conventions for the measurement of research and experimental development (R&D): A summary of the Frascati Manual 1993. OECD. Paris.

4 U.S. Department of Energy Office of Policy & Office of Human Resources and Administration. 1996. Guidelines to Performance Management. Washington, DC.

5 Mackay R and D Horton. 2003. Expanding the Use of Impact Assessment and Evaluation in International Research and Development Organizations. Discussion Paper, ISNAR.

6 Anonymous. 2003. Assessing EU RTD Programme Impact; Collecting Quantitative and Qualitative Data at Project Level: Designing Suitable Questionnaires for Measurement of EU RTD Programme Impact Study Contract No XII/AP/3/98/A. www.evalsed.info/downloads/sb1_research_development.doc.

7 Interest depends on the activity and the role of the stakeholder concerned.

