
CHAPTER NO. 3.5

Heher AD. 2007. Benchmarking of Technology Transfer Offices and What It Means for Developing Countries. In Intellectual Property Management in Health and Agricultural Innovation: A Handbook of Best Practices (eds. A Krattiger, RT Mahoney, L Nelsen, et al.). MIHR: Oxford, U.K., and PIPRA: Davis, U.S.A. Available online at www.ipHandbook.org.

© 2007. AD Heher. Sharing the Art of IP Management: Photocopying and distribution through the Internet for noncommercial purposes is permitted and encouraged.

Benchmarking of Technology Transfer Offices and What It Means for Developing Countries

Anthony D. Heher, Director, Associates for Economic Development, South Africa


Abstract

At universities in both developed and developing countries, increasing emphasis has been placed on promoting technology transfer. Unfortunately, technology transfer is sometimes undertaken for the wrong reasons, especially in the mistaken belief that technology transfer will lead to substantial additional income for the institution. While it is important to protect intellectual property arising from research and to actively promote the transfer of research results, generating income should not be the primary objective in the transfer of technology. This is particularly important for health science, where there is a risk that research results, if not properly protected, will be inaccessible to private or public entities seeking to use the research for public benefit.

International technology transfer benchmark data can be used to understand the implications of promoting technology transfer and the likely outcomes of a technology transfer initiative. The benchmarks indicate that average income to an institution, after eight to ten years of activity, is likely to be a modest 1%–2% of annual research expenditure. The income is, moreover, highly uncertain and variable. Institutional and public sector managers must understand the nature of this income and the dynamics of the technology transfer process in order to manage this emerging discipline effectively, because unrealistic expectations can lead to dysfunctional policy decisions. The data and dynamic model presented in this paper are intended to promote better decisions.

1. Introduction

The successful technology transfer programs of universities in Canada and the United States have prompted other countries to emulate them, and major technology transfer and commercialization support programs have been launched in Australia, Japan, the United Kingdom, and many other countries. The high-profile successes of relatively few institutions have, however, generated unrealistic expectations. Additionally, it is not always clear that the success, measured in terms of income earned from commercialization, is proportional to the magnitude of the investment in research. Without a well-funded, high-quality research system, it is highly unlikely that a technology transfer program will contribute significantly to economic or social development. Moreover, it is doubtful whether other countries can easily emulate the performance of the United States, due to differing social and economic conditions.

The income-earning potential of technology transfer activities can, in fact, be a hindrance to effective programs. Technology transfer needs to be undertaken for good reasons, apart from the possibility of earning income. In health sciences and agriculture, in particular, appropriate IP (intellectual property) protection may be essential to effectively exploit research results and ensure that the benefits are widely available to society. Whether exploitation of research is for commercial or humanitarian uses, effective and appropriate transfer of knowledge is still required, in addition to the normal academic requirements to publish.

However, even with comparable investments in research, the performance of individual institutions is highly variable and unpredictable. This is true even for institutions that are comparable in size and maturity. A large portfolio of patents and licenses is required to give a reasonable probability of a net positive income. A large portfolio may be possible at a national level but is problematic in smaller countries, and even more so for smaller institutions. Because the benefits of the innovation system are captured largely at the national level, institutions need public sector support to reduce the institutional risk necessary to develop profitable investments.

Technology transfer is, of course, only one element of the overall research and innovation value-chain. All elements must function effectively for an institution to derive economic and social benefits from its research. In addition to a strong research system, a university must offer academics adequate incentives to encourage their participation, particularly with regard to the crucial initial step of invention disclosure. Universities must possess adequate institutional capacity to take an idea, evaluate it, appropriately protect intellectual property, and then seek a path to commercialization through licensing or a spinout.

It is widely recognized that monetary returns are not, and should not be, the primary motivation for engaging in technology transfer. Increasingly, it is a public research organization’s social responsibility to ensure that research results are effectively transferred in a timely manner into the public domain for the good of society. The production of graduates and publication of research results remain the most important ways of effecting knowledge transfer; the more direct transfer of knowledge through technology transfer is, however, an essential adjunct. Far from undermining conventional approaches, effective technology transfer can support and enhance traditional knowledge transfer.

Technology transfer affects a society’s economic well-being directly and indirectly. In this chapter, both the conditions necessary for deriving economic benefit and the factors that influence the performance of a technology transfer office (TTO) are outlined. The data and models highlight the need for skilled technology transfer professionals. If a country is to profit from its investment in research, then training and capacity building at the institutional and national levels are key requirements.

2. Research and Innovation Value-Chain Benchmark Data

Universities and research institutions in North America have been benchmarking the research and innovation value-chain for a number of years.1 This data covers each step in the value chain, including expenditure on research; numbers of invention disclosures, patents, licenses, and spinout companies; income from licensing; and expenditure on IP protection. A few other countries are following a similar approach, facilitating cross-country comparisons. (A selection of the data is shown in Table 1.)

Table 1: Technology Transfer Benchmarks from Selected Countries; Summary of Key Data

To assist with comparisons across countries, benchmarks are generally converted to normalized values. The most common approach is to normalize in terms of total research expenditure, converted to equivalent U.S. dollars; the resulting measure is called adjusted total research expenditure (ATRE). The most commonly used reporting bases are per US$1 billion ATRE or per US$100 million ATRE. Table 2 presents normalized values of the raw data in Table 1 on a per-US$100 million ATRE basis. For simplicity, only selected variables are shown. This normalized data can be considered typical of a small- to mid-sized U.S. university or a large university in a developing country.
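The normalization described above is simple arithmetic; the sketch below illustrates it with hypothetical figures (the function name and example numbers are mine, not from the surveys):

```python
# Scale a raw benchmark count to the common per-US$100M-ATRE basis used in
# Tables 2 and 3, so institutions of different sizes can be compared directly.
def normalize_per_100m(raw_value, research_expenditure_usd):
    """Return raw_value expressed per US$100 million of research expenditure."""
    BASE = 100_000_000  # US$100 million
    return raw_value * BASE / research_expenditure_usd

# Hypothetical example: a university spending US$250M on research reports
# 110 invention disclosures in a year.
disclosures = normalize_per_100m(110, 250_000_000)
print(f"{disclosures:.0f} disclosures per US$100M ATRE")
```

For this hypothetical institution the result, 44 disclosures per US$100 million ATRE, falls within the 40–50 range reported in Table 3.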

Table 2: Adjusted Total Research Expenditure (All Per $100m)

The normalized data shows a remarkably consistent pattern across different countries (summarized in Table 3). While there are variations from year to year and from country to country, they are relatively small and statistically insignificant compared to the variations between institutions in one country.

TABLE 3: RESEARCH AND INNOVATION VALUE CHAIN

                        TYPICAL RANGES PER $100M ATRE
Invention disclosures   40–50
Patents                 20–30
Licenses                10–15
Spinout companies       1–5
Income                  US$1M–US$3M (1%–3% of research expenditure)

The data presented in Tables 1, 2, and 3 covers all research disciplines, as none of the countries that conducted the surveys disaggregated the data by field of research. Although such results would be interesting, the different fields lack clear definitions, even within one country, let alone across countries, making classification difficult. However, it is well recorded that medical and health-related research constitutes around 50% of all research expenditure. An analysis of individual commercialization results also shows that health-related products make up around 50% of technology transfer outputs, so the available evidence indicates that the data for all fields is broadly representative of the health sciences. Given that a relatively small proportion of total research is devoted to agriculture, similar conclusions cannot be drawn about technology transfer in the agricultural sciences. Indications are, however, that it is likely to follow a similar pattern; there is little evidence that any one field of research differs significantly from another in terms of average performance, as indicated in Table 3.

A widely used proxy for the overall performance of the technology transfer system is the total license income earned per year as a percentage of the total research expenditure. This measure is used in this chapter, and elsewhere, but it must be remembered that the measure is a proxy for a complex system and does not, by itself, tell the whole story. License income as a percentage of research expenditure is often referred to, for simplicity, as the “return” from an investment in technology transfer. The concept represents one form of return, with returns to the economy through direct and indirect benefits being equally, if not more, important. The benefits to society, particularly in health and agriculture, are often far more important than any financial return the institution may earn. The difficulty, however, is that the institution bears the costs of undertaking technology transfer, particularly, in terms of IP protection costs. The benefits, in contrast, may be enjoyed by the wider society, or even by another country.

Over the years that data has been collected, the trend in total license income is instructive (the graph is shown in Figure 1). In the United States, the value has increased from 1.5% in the first year of surveys by the Association of University Technology Managers (AUTM) in 1991 to around 3.5% in recent years, ignoring the anomalous peak during the dot-com boom in 2000. Excluding medical research institutes and considering only universities, the figure is slightly lower, at around 3%. The available figures for the United Kingdom, Australia, and Canada are also plotted in Figure 1. Again ignoring anomalous figures in 2000/2001, averages are in the vicinity of 1% to 1.5%. Interestingly, no evidence yet exists of other countries having the same rising trend that was observed in the United States in the early years. Whether a similar trend will occur in the years ahead, or whether there is a systematic difference between the United States and other countries, is still unclear.

Figure 1: License Income (1991–2004 for U.S., 2000–2003 for Other Countries)

The average data set is misleading, however, and the full data set, showing individual institutions, needs to be scrutinized. The AUTM data is excellent in this respect, as is the Australian survey. It is unfortunate that cultural norms in Europe tend to hide individual performance, as this impedes an understanding of the data.

The characteristic distribution of this data is illustrated in Figure 2, which shows the returns of all reporting U.S. universities in rank order. The data is more easily understood when plotted on a logarithmic scale, as shown in Figure 3. The dotted line in Figure 3 shows the approximate trend line for U.S. data. Figure 4 shows the same data for Canadian universities and Figure 5 shows the data for Australian universities, both with the U.S. trend line superimposed. The distribution of returns is remarkably consistent in these three reporting countries. The data for the United Kingdom shows a similar trend, but cannot be displayed in the same way because individual institutional performance is not reported.

Figure 2: License Income for U.S. Universities for FY 2002

Figure 3: License Income for U.S. Universities for FY 2002 (Logarithmic Plot)

Figure 4: License Income for Canadian Universities for FY 2002 (Logarithmic Plot)

Figure 5: License Income for Australian Universities for FY 2002 (Logarithmic Plot)

Table 4 summarizes the returns for the United States, Canada, Australia, and the United Kingdom in three bands. The first is for all reporting universities, the second for the lower 95%, excluding the upper 5%, while the last row shows the performance of the lower 50%. Excluding the upper 5% removes eight universities in the United States, two in Canada, one in Australia, and five in the United Kingdom.

TABLE 4: AVERAGE RETURNS IN 2002 (LICENSE INCOME AS % OF TOTAL RESEARCH EXPENDITURE)

GROUP                            U.S.    CANADA   AUSTRALIA   U.K.
All universities                 3.00%   1.60%    1.50%       1.10%
Lower 95%                        1.60%   0.80%    0.60%       0.55%
No. of universities excluded     8       2        1           5
Bottom 50%                       0.28%   0.23%    0.08%       0.02%
 
Note: “Group” refers to university rankings by percentage of license income, as indicated in Figures 2–5.

The effect of the skewness of the returns is evident: 95% of universities have returns of less than half the averages, while 50% earn only very small amounts from technology transfer. This has important implications for whether TTOs can earn enough to cover their operating costs or whether they need to be subsidized. (This is discussed in more detail below.)
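The banding used in Table 4 can be sketched in a few lines. The distribution below is illustrative, not the survey data; it simply shows how one outlier drags the all-universities average well above what most institutions actually earn:

```python
# Given each university's license income as a fraction of research
# expenditure, compute the average for all institutions, for the lower 95%
# (top 5% excluded), and for the bottom 50% only, mirroring Table 4's bands.
def banded_averages(returns):
    r = sorted(returns, reverse=True)      # rank order, highest first
    n = len(r)
    top5 = max(1, round(0.05 * n))         # institutions in the top 5%
    return {
        "all": sum(r) / n,
        "lower_95": sum(r[top5:]) / (n - top5),
        "bottom_50": sum(r[n // 2:]) / (n - n // 2),
    }

# A skewed, hypothetical distribution: one institution dominates the total.
sample = [0.25, 0.03, 0.02, 0.01, 0.008, 0.005, 0.003, 0.002, 0.001, 0.0005]
for band, avg in banded_averages(sample).items():
    print(f"{band}: {avg:.2%}")
```

With these made-up numbers the all-universities average is roughly 3.3%, while the lower 95% average under 1% and the bottom half earn about 0.2%, the same qualitative pattern as Table 4.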

An important reason for undertaking benchmarking is the understanding and insight that the process fosters. This is what Lundvall2 calls a learning-by-comparing approach, and it is especially important when using benchmarks in a different environment. The long time delays inherent in innovation systems make benchmark data particularly difficult to collect and interpret, and a good understanding of the data's origins and structure is necessary to avoid its misuse. The model developed here assists with the interpretation of the raw benchmark data and the underlying processes it reflects.

Analysis of the data is complicated by the existence of a few exceptional cases. In Australia, for example, the omission of a single equity transaction in 2000 changed the reported income by over 50%, while in 2001 and 2002 one university accounted for 66% of all income earned. In Canada, omission of two universities had a similar impact, while in Europe omitting two universities reduced the income by 70%. The effect of a few large transactions makes measuring and interpreting the benchmark data more difficult, particularly for projections and comparisons.

Some observations, with respect to the country average data, are relevant:

  • The invention disclosure rate of 40 to 50 disclosures per US$100 million ATRE (or US$2 million to US$2.5 million of research expenditure per invention disclosure) is remarkably consistent across countries and over time. The most recent U.K. data set is an exception and would seem to indicate a difference in policy approach, with the invention disclosure rate increasing by nearly 50% from 2001 to 2002. Not all disclosures are equal, however, and in some instances a higher disclosure rate would appear to indicate a lower “quality” of disclosure, as indicated by the fact that a smaller percentage of the disclosures are converted to license or spinout opportunities, as shown in Table 2.
  • The rate at which disclosures are converted into a patent or license varies from 30% to 50%. This is a relatively close correspondence with differences explainable by different national policies and support measures.
  • The spinout-company rate shows a similar range, explainable by the greater emphasis on company formation in Europe and in the United Kingdom, in contrast with the emphasis in the United States on licensing. The United Kingdom and Europe generated four to six times more spinout companies in 2000/2001, but the number had dropped in 2002, reflecting a much more difficult venture-capital environment after the dot-com bust.
  • A recent report in the United Kingdom, however, has asserted that the reported rate of spinout-company formation is 50% more than the real rate. If true, this would make the United Kingdom data more comparable to other countries and illustrates the importance of clear definitions when collecting benchmark data.
  • It is noteworthy that the total percentage of invention disclosures that result in either a license or a spinout is roughly similar in all countries examined, at around 30% to 40%.
  • The staffing of TTOs shows interesting variations. The United States averages four staff (per US$100 million ATRE), whereas Australia and Canada have eight to ten staff (per US$100 million ATRE). This reflects economies of scale in the United States, as the average number of staff per institution is similar. Staffing levels in the United Kingdom, however, are six times higher per ATRE than in the United States. This reflects the emphasis on spinout-company formation (known to be much more people-intensive than licensing) in the United Kingdom and the strong national support schemes that are in place.
  • The cost of operating a TTO can be estimated from the reported staffing levels and salary survey results, formal or informal. As shown in Table 5, these budgets fall into two categories and three groupings. For the United States, Canada, and Australia, the budget for a small university is about 1% to 2% of total research expenditure and 0.2% to 0.5% at the larger institutions. U.K. universities typically have budgets approximately double these figures.
  • The average returns shown in Table 4, coupled with the typical budgets shown in Table 5, enable an estimate of the profitability of the various classes of offices. These results are shown in Table 6. In the United States and Canada, the bottom 50% of all universities operate at a loss, and only the 50% to 95% group are operating at a break-even or slightly profitable level. Only the top 5% are very profitable. It is this skewness that contributes to the all-too-common expectations of unrealistic performance. In the United Kingdom and Australia, only a few universities are profitable, with over 95% operating at a loss.

TABLE 5: TYPICAL TECHNOLOGY TRANSFER OFFICE BUDGETS
(AS % OF TOTAL RESEARCH EXPENDITURE)

UNIVERSITY SIZE BUDGET (U.S./AUSTRALIA MODEL) BUDGET (U.K. MODEL)
Small      1%–2%      2%–3%
Medium     0.5%–1%     1%–2%
Large 0.2%–0.5% 0.5%–1%

 

TABLE 6: LIKELY OUTCOMES (ESTIMATED BUDGET VS. LIKELY INCOME)

GROUP U.S. & CANADA U.K & AUSTRALIA
Bottom 50% (of all universities) Loss Large loss
50%–95% Break even–profitable Loss
Top 5% (of all universities) Very profitable Profitable
 
Note: “Group” refers to university rankings by percentage of license income, as indicated in Figures 2–5.

The similarity in performance among countries with different innovation systems and cultures indicates that the creative innovation process is inherently similar regardless of the environment. The single biggest factor that dwarfs all others is the expenditure on research, and it appears that no innovation system is significantly different with respect to the effectiveness with which ideas are generated and transformed.

This is not to imply that active innovation support systems are not required. All the countries examined and reported in the benchmarks in Table 2 have strong systems of support and are actively involved in training and developing capacity to manage the research and innovation process. Without such capacity, it is highly unlikely that the performance of any institution, region, or country will come even close to matching the average benchmarks.3

3. Phasing of the Innovation-Value-Chain

The benchmark data is masked by the long delays inherent in the technology transfer process. Each step in the value chain takes a few years; typically, six to ten years elapse from the moment of invention disclosure to the time when significant income can be generated from a license. These delays are depicted in Figure 6, and their impact is illustrated in Table 7 and Figure 7.

Figure 6: Typical Phasing of the Value Chain

 

Table 7: Institutional and National Effects on Technology Transfer


 

Figure 7: Impact of Policy Choices on Performance

This phasing makes interpretation of the benchmark data difficult, because data for a particular year depends on activities that happened many years earlier. The total license income in any one year, for example, depends on the accumulated sum of invention disclosure and patenting activities from prior years and is largely independent of the disclosure rate in that particular year. For ease of analysis and reporting, ratios are used to measure the relationship between variables that may in fact be years apart. In a steady-state environment, these ratios are correct, but the dynamic relationship must be understood.

The data presented in Table 2 is therefore primarily useful as a steady-state approximation, particularly when used to make projections for a new institution or a country just establishing an innovation system. Misunderstanding these dynamics can contribute to false expectations of returns that are more properly based on observations of essentially steady-state data from mature systems.

The dynamic model combines knowledge of the phasing of the value chain and the time duration of the various steps with the steady-state benchmark data in Table 2. The primary purpose of the model is to provide estimates of the likely rate of return and cash-flow forecasts (institutional and national) of alternative innovation-system scenarios. As the parameters of any particular innovation system are not known in advance (and are difficult to measure even in retrospect), the main use of the model is as a “what-if tool” to explore alternative approaches and understand the impact of policy decisions.

Table 7 illustrates one possible model based on a hypothetical institution expending US$100 million in research expenditure per year for 20 years. (The model is currency independent and whether this is US$ or any other currency makes no difference to the rate of return.) The model has also been used for actual institutions, where past and future research expenditure is known or can be forecast. Any available data on past invention disclosures, patents, or licenses can be used as initial conditions; the model can incorporate as much past data as is available to generate forecasts.
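A toy re-creation of such a dynamic model is sketched below. It is not the author's actual model; every parameter value is an assumption drawn from the steady-state ranges in Table 3 (45 disclosures per US$100M, 30% of disclosures licensed) plus two further assumed quantities, the disclosure-to-income lag and the average income earned per active license:

```python
# Toy dynamic value-chain model for a hypothetical institution spending
# US$100M/year on research. Each year's disclosure cohort yields licenses
# that begin earning after a lag and then earn for a fixed lifetime.
DISCLOSURES_PER_YEAR = 45      # per US$100M ATRE (Table 3)
LICENSE_FRACTION = 0.30        # disclosures that become licenses (assumed)
INCOME_LAG_YEARS = 7           # disclosure-to-income delay (6-10 is typical)
LICENSE_LIFETIME = 10          # years a license earns income (assumed)
INCOME_PER_LICENSE = 20_000    # US$/year per active license (assumed)

def annual_license_income(years):
    """License income per year for a program starting from zero activity."""
    income = []
    for year in range(years):
        if year < INCOME_LAG_YEARS:
            earning_cohorts = 0
        else:
            # Cohorts old enough to earn but not yet expired.
            earning_cohorts = min(year - INCOME_LAG_YEARS + 1, LICENSE_LIFETIME)
        licenses = earning_cohorts * DISCLOSURES_PER_YEAR * LICENSE_FRACTION
        income.append(licenses * INCOME_PER_LICENSE)
    return income

for year, inc in enumerate(annual_license_income(20), start=1):
    print(f"year {year:2d}: US${inc / 1e6:5.2f}M ({inc / 100e6:.2%} of research spend)")
```

Under these assumptions, income is zero for the first seven years, then climbs slowly to a steady state of about 2.7% of research expenditure after roughly seventeen years, reproducing the long ramp-up the benchmark ratios conceal.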

Figure 7 shows the results of using a range of parameters to represent the three main TTO operating models, called the income, service, or economic models. The choice of office operating model depends on institutional and national policy, and upon capabilities and resources. In practice, a mix of models is normal. Each model can be defined by a set of innovation value-chain operating parameters. These parameters enable the future performance of an office (or country) to be calculated, including investment outlay required, patent prosecution costs, time to break even, and potential internal rate of return (IRR). The IRR is the estimated return to the institution from investing in establishing a TTO, including staff costs and IP protection expenses.
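The IRR mentioned above is a standard net-present-value root-finding calculation; a minimal sketch follows, with an entirely hypothetical cash-flow stream (early TTO operating losses followed by growing net license income) standing in for a real office's figures:

```python
# Internal rate of return for a stream of annual net cash flows: the
# discount rate at which the net present value (NPV) is zero.
def npv(rate, cash_flows):
    """Net present value of annual cash flows at a given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    """IRR by bisection; assumes a single sign change in the cash flows."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
            hi = mid       # root lies in the lower half
        else:
            lo = mid       # root lies in the upper half
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# Hypothetical TTO: eight years of US$1M/year losses, then rising income.
flows = [-1.0] * 8 + [0.5, 1.0, 2.0, 2.5, 3.0, 3.0, 3.0]  # US$ millions
print(f"IRR ≈ {irr(flows):.1%}")
```

For this made-up stream the IRR comes out in the single digits, which is the broader point of the model: even a program that eventually earns well delivers an unremarkable return once the long loss-making establishment period is discounted.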

The importance of the model lies not in the accuracy of its predictions, which will, of course, be no better than the underlying parameters and assumptions. The primary benefit is in understanding the dynamics and relatively long timescales involved in technology transfer. The model can thus help avoid unrealistic expectations and can also provide the basis for a series of intermediate benchmarks that help ensure that the innovation system is moving in the right direction. Invention disclosure, for example, is clearly an important early indicator of both the health and the vibrancy of the research system.

4. Economic-Impact Estimation

The ability to calculate, or even estimate, the economic impact of technology transfer activities has been actively debated for a number of years. The statement below, from the AUTM licensing survey for fiscal year 1999, has been disputed, and in subsequent years AUTM has refrained from making such claims in the survey, pointing instead to the need for ongoing research.

“The economic impact of the licensing of technologies developed at academic institutions is remarkable. The responses from member institutions estimate that the licensing of innovations made at academic institutions contributed over [US]$40 billion in economic activity and supported more than 270,000 jobs in Fiscal Year 1999. In addition, business activity associated with sales of products is estimated to generate [US]$5 billion in United States tax revenues at the federal, state, and local levels.”4

Despite contention over specific claims of economic impact, it is widely accepted that the process is of economic benefit in all countries that have active innovation systems and promote university technology transfer. The many countries that are investing resources in technology transfer development confirm that there is widespread confidence that the investment is worthwhile and generates a positive return.

With considerable justification, developed countries use the overriding argument that, when a research program is already in place, technology transfer can result in significant additional benefits for a small additional cost (as shown in Table 4). But in developing countries with smaller economies, less-developed innovation systems, and many competing demands for resources, the situation is less clear. The benchmark data shows that the volume of innovation activities arising from research is directly proportional to the amount of research funding. If additional investment in research is proposed on the grounds that it supports economic growth, some justification for this needs to be shown (for example, that there will be a positive return from that investment).

While there is some financial benefit to the institution performing the research, the benefit is, at best, around 1% to 2% of research expenditure, as shown in Tables 2 and 3, and more typically between 0.5% and 1.5%. Income generation from technology transfer is therefore clearly not an adequate reason for an institution to invest in research. The financial benefits of technology transfer activities are captured primarily at the national economic level through business creation, with national returns arising from direct and indirect economic effects. The data makes a compelling case for public funding, not only of research itself, but also of technology transfer activities.

Even when the public sector invests funds in research (whether for economic development reasons or otherwise), a research institution must invest in technology transfer activities over an extended period (eight to ten years) before a positive return can be expected. The highly uncertain and variable nature of the returns compounds these difficulties. Indeed, measuring the national economic impact of technology transfer is difficult and has been the subject of intense discussion and debate. A simplistic model has been developed to illustrate the concepts and motivate the development of more comprehensive models (the approach used here follows that described by Pressman5).

Universities report that the typical average royalty rate, from which license revenues are derived, is within the range of 2% to 4%. Direct business activity generated by technology transfer activities is therefore of the order of 25 to 50 times the revenue received by the licensing institution. Using an appropriate multiplier (typically 1.5 to 2.0), the overall direct economic impact can be estimated. This is not strictly an economic model. It is an estimate of the multiplier effects that are required to obtain a positive return. More work is needed to determine the actual multiplier effects that occur or are achievable. In addition to these benefits, the pre-production benefits associated with technology transfer activities have been shown to be significant.6
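The arithmetic behind this estimate is worth making explicit. License revenue divided by the royalty rate approximates the sales of licensed products, and an economic multiplier then scales sales to direct impact; the figures below are illustrative, not survey data:

```python
# Back-of-the-envelope direct economic impact, following the royalty-rate
# reasoning above: royalties of 2%-4% imply sales of 25-50x license income.
def direct_economic_impact(license_income, royalty_rate, multiplier):
    sales = license_income / royalty_rate   # product sales implied by royalties
    return sales * multiplier               # scaled by an economic multiplier

income = 2_000_000  # hypothetical: US$2M license income (~2% of US$100M ATRE)
for royalty in (0.02, 0.04):
    impact = direct_economic_impact(income, royalty, multiplier=1.5)
    print(f"royalty {royalty:.0%}: sales US${income / royalty / 1e6:.0f}M, "
          f"direct impact US${impact / 1e6:.0f}M")
```

At a 2% royalty the implied sales are 50 times the license income; at 4%, 25 times, matching the range stated above before the multiplier is applied.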

This economic return is the direct return from the activities measured and managed by the institutions’ TTOs. There is strong evidence that the entrepreneurial culture resulting from the focus on technology transfer produces many other benefits that are neither captured nor measured by the institution, but which have an impact on the local economy.7 These are the indirect multiplier effects. Whether similar benefits will accrue in developing countries is difficult to say and requires more research. Certainly, the factors noted by Tornatzky give cause for concern. He noted that states with strong entrepreneurial support (such as Massachusetts and California) tend to draw entrepreneurial talent and opportunities from states with less support, resulting in a loss of economic benefits accruing to the states where the research was undertaken. This migration constitutes a leakage of benefits from states with less-well-developed entrepreneurial environments to those with a more nurturing environment. If such leakages from poorer to richer states (in terms of entrepreneurial support) have an impact within the United States, the effect in developing countries is likely to be even more pronounced.

Figure 8 illustrates these concepts in an example projecting the returns arising from the technology to an investment in research illustrated in Figure 7. These projections are, of course, sensitive to the assumptions made. The model shows, for example, that a positive national IRR can only be achieved if the indirect multiplier effects are at least three to four times more than the direct effects. This reinforces the need for a more in-depth understanding of innovation system dynamics so that these effects can be understood and measured.

Figure 8: Estimation of National Internal Rate of Return (Irr)

What is clear from the model is that the direct returns resulting from technology transfer are far from adequate to justify additional expenditure on research. In developing countries, the debate on whether higher expenditure on research is justified is intense and the model illustrates the need for more in-depth analysis and better economic data.

5. Variability of Benchmarks and Returns

The benchmark data from individual institutions (from all countries and over hundreds of institutions) shows a very high variability from year to year and from institution to institution. This variability is observed on all measures in the value chain: invention disclosures, patents, licenses, spinout companies, and income. The variations are up to two orders of magnitude, even for institutions that in other respects are similar. Some of these trends were illustrated in Figures 2, 3, 4, and 5. Analysis of the data by income, size of the institution, maturity, or size of the TTO indicates that none of these variables is strongly correlated with efficiency or performance measures. The only significant correlation is that innovation output measures are proportional to the volume of research, as measured by expenditure on research. Even this figure is proportional only in aggregate over a large portfolio, with strong institutional variations.

Figure 9, for example, shows the variation in invention disclosure rate in terms of millions of dollars of research expenditure per invention disclosure, as a function of both the age of the office and the magnitude of research expenditure. Although in aggregate over time and across countries the figure is relatively constant, at the institutional level very strong variations occur, irrespective of the size or maturity of the institution. The European, United Kingdom, and Australian surveys show a similar distribution, so this is not unique to the United States.

Figure 9A: Disclosure Rate vs. Age of Office

Figure 9B: Disclosure Rate vs. Research Expenditure

Figure 10 shows the variation in license income (as a percentage of research expenditure) for U.S. and Canadian institutions. The graphs confirm the theoretical model presented above and demonstrate the ten-year lag before significant revenue is generated. But even after this portfolio-establishing period, returns to offices of similar size and experience vary greatly.

Figure 10A: TTO Age as a Determinant of Licensing Income (U.S.)

Figure 10B: TTO Age as a Determinant of Licensing Income (Canada)
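The lag in licensing income is easy to reproduce with a toy cohort model: each year's new licenses take years to reach peak royalties, so portfolio income builds slowly even at a constant rate of deal-making. This is a minimal sketch with hypothetical parameters (10 new licenses per year, a linear eight-year ramp to an assumed peak royalty), not figures fitted to the survey data.

```python
# Illustrative cohort model of TTO licensing income. All parameters are
# assumed for the sketch, not drawn from the AUTM or other surveys.

NEW_LICENSES_PER_YEAR = 10
RAMP_YEARS = 8                 # assumed years for a license to reach peak
PEAK_ROYALTY_PER_LICENSE = 50  # assumed peak royalty, US$ thousands/year

def portfolio_income(office_age):
    """Total annual royalty income from all license cohorts signed so far."""
    income = 0.0
    for cohort_age in range(office_age):           # one cohort per past year
        ramp = min(cohort_age / RAMP_YEARS, 1.0)   # linear ramp to peak
        income += NEW_LICENSES_PER_YEAR * PEAK_ROYALTY_PER_LICENSE * ramp
    return income

for age in (2, 5, 10, 15):
    print(f"office age {age:2d}: US${portfolio_income(age):,.0f}k per year")
```

Even this crude model shows income in year 10 running at many times the year-2 level, with most of the growth still ahead, which is the qualitative shape of Figures 10A and 10B.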

This high variability in returns has been noted and studied.8, 9, 10 The variability in economic returns appears to be inherent to the nature of innovation, but the variation in the early intermediate benchmarks (for example, invention disclosure rates) is driven by different factors: while still substantial, it is less inherent and more manageable. Economic returns are determined by an unpredictable set of market factors, whereas the intermediate benchmarks are more controllable by the institution and its TTO. Institutional commitment, coupled with skilled, experienced staff, can contribute significantly by identifying opportunities and motivating invention disclosure and, of course, by managing all the subsequent steps in the value chain.

The impact that skilled staff could have on the overall innovation process and benchmark figures is a topic for further research. If best practices could be identified and disseminated, they could potentially increase innovation returns substantially. This is particularly relevant to smaller, more-isolated offices, and offices in developing countries where peer learning is absent. Strong professional networks are critical, and these need to be promoted and developed.

Scherer and Harhoff11 performed an in-depth study of innovation returns. Based on their analysis of eight large patent portfolios in the United States and Germany, they concluded:

“Our empirical research reveals, at a high level of confidence, that the size distribution of private value returns from individual technological innovations is quite skew—most likely adhering to a log normal law. A small minority of innovations yield the lion’s share of all innovations’ total economic value. This implies difficulty in averting risk through portfolio strategies and in assessing individual organizations’ innovative track records. Assuming similar degrees of skewness in the returns from projects undertaken under government sponsorship, public sector programs seeking to support major technological advances must strive to let many flowers bloom. The skewness of innovative returns almost surely persists to add instability to the profit returns of whole industries and may extend even up to the macroeconomic level. Although much remains to be learned, some important lessons for technology policy have begun to emerge.”

The AUTM data confirms that this skewness is even more apparent in university portfolios, with an average of only one in 200 licenses generating more than US$1 million in revenue.12 This concurs with Scherer’s data: of the eight portfolios he analyzed, the three from universities all showed higher levels of skewness than the industry portfolios. This skewness is of particular relevance to smaller institutions and countries.
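The skewness described above can be illustrated by simulation. This is a minimal sketch that draws hypothetical license returns from a log-normal distribution; the parameters (mu, sigma) are illustrative assumptions chosen to make large winners rare, not values fitted to any real portfolio.

```python
import random

# Hypothetical license returns drawn from a log-normal distribution.
# mu and sigma are illustrative assumptions, not fitted parameters.
random.seed(1)
returns = sorted(
    (random.lognormvariate(mu=8.0, sigma=2.5) for _ in range(10_000)),
    reverse=True,
)

total = sum(returns)
top_1pct_share = sum(returns[:100]) / total          # top 100 of 10,000
mean = total / len(returns)
median = returns[len(returns) // 2]                  # middle of sorted list

print(f"top 1% of licenses hold {top_1pct_share:.0%} of total value")
print(f"mean return is about {mean / median:.0f}x the median return")
```

The simulation reproduces the qualitative pattern in the quote: a small minority of licenses captures the lion's share of total value, and the mean return is far above the median, so a typical license earns much less than the portfolio average suggests.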

This disparity in outcome, which can occur even between institutions of similar size, capability, and investment, can lead to problems. Without an in-depth understanding, the benchmarks can result in dysfunctional policy decisions at both national and institutional levels.

6. Implications for Developing Countries

Data on the actual performance of developing countries is not available, or at least none has been discovered in the course of conducting this research and making presentations in a number of countries.13 A limited set of data for South Africa, obtained through personal contact with a number of institutions, is available. This data is shown in Table 8, together with projections of the possible outcome if South Africa were operating within the international ranges summarized in Table 3.

TABLE 8: PROJECTIONS OF TTO ACTIVITY FOR SOUTH AFRICA

| Measure                       | International ranges (from Table 3) | Current (2004) (based on five universities) | Projections if at international norms |
|-------------------------------|-------------------------------------|---------------------------------------------|---------------------------------------|
| Research expenditure (ATRE)   | per US$100 m                        | per US$100 m (total: US$194 m / ZAR 2 b)    | US$500 m                              |
| Invention disclosures (total) | 40–60                               | 23                                          | 200–300                               |
| Patents filed                 | 20–30                               | 6                                           | 100–150                               |
| Licenses                      | 10–15                               | 4                                           | 60–100                                |
| Start-ups                     | 1–5                                 | 3                                           | 5–20                                  |
| Patent budget (as % income)   | 0.2%–0.5%                           | 0.30%                                       |                                       |
| License income                | 1%–2% of total                      | 0.1% of total                               | US$5 m–US$10 m                        |
| Size of staff                 | 4–20                                | 9                                           | 20–100                                |
Note: Projections are based on likely ranges from international benchmarks.

If South Africa were to attain an innovation performance similar to that of comparable institutions elsewhere, the entire South African higher-education research system could be expected to generate 200 to 300 invention disclosures per year. After seven to ten years, such a disclosure rate should lead to a portfolio of around 500 active licenses, two of which would be likely to generate revenue of greater than US$1 million per year, with total revenue of US$5 million to US$10 million per year.
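The arithmetic behind these projections is a straightforward scaling of the per-US$100 million benchmark ranges to the projected research base. A minimal sketch, using the figures quoted in Table 8:

```python
def project(research_m, per_100m_low, per_100m_high):
    """Scale a per-US$100m benchmark range to a research base (US$ millions)."""
    scale = research_m / 100
    return per_100m_low * scale, per_100m_high * scale

# Projected South African research base of US$500m (Table 8):
disclosures = project(500, 40, 60)
patents = project(500, 20, 30)
print(f"invention disclosures: {disclosures[0]:.0f}-{disclosures[1]:.0f} per year")
print(f"patents filed:         {patents[0]:.0f}-{patents[1]:.0f} per year")

# License income benchmark of 1%-2% of research expenditure:
income = (0.01 * 500, 0.02 * 500)
print(f"license income:        US${income[0]:.0f} m-US${income[1]:.0f} m per year")
```

This reproduces the 200–300 disclosures, 100–150 patents, and US$5 million to US$10 million income figures; note that Table 8 adjusts some other ranges (such as start-ups and licenses) rather than scaling them purely proportionally.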

Furthermore, the distribution of returns would almost certainly be skewed, even among the five or six major research universities, let alone the 15 smaller institutions. A few institutions are likely to perform relatively well, while the majority are likely to operate at a net loss, even after ten or 15 years. The skewness and variability of returns also mean that it is not possible to predict which institutions are likely to succeed and which will be considered to have “failed.” Given the financial constraints in higher-education institutions, continued institutional support for technology transfer is likely to be at risk unless external support or stimulus is provided.

In the United States, the Bayh-Dole Act of 1980 provided a major stimulus for technology transfer, but the difficulty of applying a similar measure in South Africa is illustrated by differences in research funding. In the United States, federal funding accounts for 61% of research funding, while industry contributes only 9%.14 In South Africa, industry provides 58% and government 28% of total research funding.15 This funding pattern has implications for IP generation and ownership as well, and is an example of the differences that must be considered when making projections based on international benchmarks.

One argument that carries some weight is that the high level of industry-sponsored research in South Africa, and in other countries with a similar funding pattern, represents considerable informal technology transfer embedded in research contracts. The true performance of these institutions may therefore be much higher than the simple “AUTM-like” technology transfer indicators suggest.

Whether the benchmarks from countries with large, well-developed research and innovation systems will scale to smaller countries is at present unknown. More detailed analysis and measurement are required to determine appropriate benchmarks and to construct a more robust and accurate economic impact model.

7. Conclusions

The similar relative performance of higher-education technology transfer systems in developed countries indicates that the creative innovation process is inherently similar and that no one country is significantly better in terms of the efficiency with which ideas are generated and transferred. The impact of a technology transfer program is determined primarily by the magnitude of the expenditure on research and the length of time the program has been in operation, provided active innovation programs exist and well-trained technology transfer professionals are in place. These are essential requirements if institutions and countries aspire to attain international norms of performance.

To avoid unrealistic expectations of the benefits of technology transfer in smaller countries and institutions, this data set must be understood. Effective models of the innovation system, preferably based on local data, can help predict budget requirements, the possible return on investment, and the timescales to attain these goals. Measurement of the local innovation system should commence at the earliest possible stage, because early indicators (such as the invention disclosure rate) can provide insight into how the remainder of the value chain is likely to develop.

The long time period required for individual institutions to derive benefits, and the fact that the benefits accrue largely to the national economy, indicate that appropriate national support measures are needed to encourage innovation development and to overcome institutional resistance in resource-constrained environments. Further research is needed to determine the most effective support measures, using an innovation-system model, where appropriate, to evaluate and quantify the alternatives.

Institutions and innovation systems need to take into account the skewness and inherent variability of innovation returns. In the early stages, more emphasis needs to be placed on intermediate benchmark measures and less on such traditional measures as license revenues and spinout-company formation.

Acknowledgments

This research has drawn heavily on work done by many colleagues. The sterling, unglamorous work of hundreds of AUTM members in performing the licensing survey each and every year for the past 12 years is noted with particular thanks. Without that data, many of us who are new to the profession would be fumbling in the dark. The surveys sent to me by colleagues in the United Kingdom, Canada, Europe, and Australia have filled out an interesting story. If I have misrepresented the data in any way in making comparisons, I apologize.

The research work on which this paper was based was supported in part by USAID from contract 674-0321-C-00-8016-22. This support is acknowledged with thanks.

This chapter is based on a paper published in the Journal of Technology Transfer. It draws on the data presented in that paper but presents a number of new results.

Endnotes

All referenced Web sites were last accessed between 1 and 10 October 2007.

1 See AUTM Licensing Survey. Association of University Technology Managers. Northbrook, Ill. www.autm.net.

2 Lundvall BÅ and M Tomlinson. 2001. Learning-by-Comparing—Reflections on the Use and Abuse of International Benchmarking. In Innovation, Economic Progress and the Quality of Life (G Sweeney, ed.), chap. 8. Elgar Publishers: Denmark.

3 Comprehensive descriptions of the necessary support functions are given in: DTI. 2001. The Higher Education Business Interaction Survey. Department of Trade and Industry: London.

Tornatzky LG. 2000. Building State Economies by Promoting University-Industry Technology Transfer. National Governors Association: Washington, DC.

Zeitlyn M and J Horne. 2002. Business Interface Training Provision (BITS) Review, Final Report for the Department of Trade & Industry, London, UK. Oakland Innovation and Information Services Ltd.: Cambridge. http://www.herda-sw.ac.uk/additional/files/fulcrum/training/bits.pdf.

4 AUTM Licensing Survey: Fiscal Year 1999. Association of University Technology Managers. Northbrook, Ill. www.autm.net.

5 Pressman L. 2002. What is Known and Knowable about the Economic Impact of University Technology Transfer Programs? Research Universities as Tools For Economic Development. Council on Research Policy and Graduate Education, NASULCG Annual Meeting, Chicago.

6 Geist D. 1995. Pre-Production Investment and Jobs Induced by Technology Licensing: The M.I.T. Method. Technology Licensing Office. Massachusetts Institute of Technology: Cambridge.

7 See Tornatzky at supra note 3.

8 See AUTM 2000, 2001 and 2002. AUTM Licensing Survey: Fiscal Years 1999, 2000, and 2001. Association of University Technology Managers: Northbrook, Ill. www.autm.net.

9 Scherer FM and D Harhoff. 2000. Technology Policy for a World of Skew-Distributed Outcomes. Research Policy 29: 559.

10 Marsili O and A Salter. 2003. Is Innovation ‘Democratic’? Skewed Distributions and the Returns to Innovation in Dutch Manufacturing, Holland. ECIS-Eindhoven Centre for Innovation Studies. Eindhoven University of Technology: Eindhoven.

11 See supra note 9.

12 AUTM. 2002. AUTM Licensing Survey: Fiscal Year 2001. Association of University Technology Managers: Northbrook, Ill. www.autm.net.

13 If any reader knows of benchmark data from countries other than those presented, the author would be pleased to be notified. The research on which this chapter is based is ongoing and the opportunity to extend the work to include data from other countries is welcome.

14 See supra note 12.

15 CENIS. 2002. South African S&T Indicators (2002). Centre for Interdisciplinary Studies. University of Stellenbosch: South Africa.
