The following article is a set of notes from the book Blood Safety in the Age of AIDS by Eric A. Feldman and Ronald Bayer, available from LPE Publications. It is reprinted here in its entirety simply because the article says it all. The article appeared on the website of the Center for Interdisciplinary Research on AIDS (CIRA). CIRA conducts research aimed at the prevention of HIV infection and the reduction of the negative consequences of HIV disease in vulnerable and underserved populations.
CIRA is located in Yale University's School of Medicine, within the Department of Epidemiology and Public Health. 


Blood Safety in the Age of AIDS
Eric A. Feldman and Ronald Bayer
LPE Publications

Introduction

Blood--its color, its flow, its scientific properties, its social significance--has long captured the imagination. With almost magical associations that exceed its biological functions, blood encompasses a spectrum of meanings that are complex, controversial, and often contradictory. Blood has been viewed as the apogee of purity and the nadir of filth; cited as both the basis for unity and the justification for war; analyzed as a gift that brings life and a source of death; and sold as a commodity that has been called priceless.

Only in the 20th century did it become possible to use blood therapeutically. Initially for battlefield transfusions, later in response to accidents and in surgery, and finally as pharmaceutical products made from technologically transformed blood components, blood became a life-saving medical intervention. The healing uses of blood inevitably would have unintended side effects; along with the new curative powers flowed dangerous pathogens that made blood and its components a medium for disease transmission. The infection of soldiers with hepatitis in the 1940s was among the first occasions for the expression of concern about blood-borne disease; half a century later, anxiety about whether mad cow disease might be transmitted by blood provides the most recent occasion for consternation. The transmission of HIV through blood, and the bitter and protracted controversies it engendered, however, is by far the most serious crisis to have confronted the blood system.

The development of clotting agents for hemophiliacs, all made from blood plasma, set the stage for an iatrogenic tragedy. Factor VIII blood concentrate transformed the lives of hemophiliacs by reducing debilitating bleeding that was often crippling. A few short years later, tainted products wreaked havoc. In 1982 the U.S. Centers for Disease Control and Prevention (CDC) identified blood as a medium that could transmit the as-yet-unknown etiological agent responsible for the newly identified lethal disease, AIDS. From that moment, the collection, fractionation, distribution, and consumption of blood became the subjects of bitter legal and political disagreements.

Even before the first cases of AIDS were identified in the U.S., HIV had entered the world's bloodstream. Starting in the late 1970s, until the blood supply was secured by testing for viral antibody and heat treatment, those dependent on blood or blood products were at risk for HIV infection. In that period, less than a decade, tens of thousands of people, primarily in the developed world, received blood or blood products infected with a lethal virus. Blood--with its spectrum of social and medical meanings--had become the vector for an international iatrogenic catastrophe.

The risk of being infected with HIV-tainted blood or blood products, like all other risks in technologically advanced societies, was not evenly distributed. Patients requiring transfusions of whole blood typically received supplies donated by several individuals, depending upon how much blood they needed. Their overall risk depended upon the prevalence of HIV infection in the local blood-donating population, and was limited by the small chance that any given donor was infected. In the industrialized nations, where millions of individuals annually receive blood transfusions, the proportion of those infected with HIV was but a fraction of one percent.

Hemophiliacs, on the other hand, do not receive whole blood from small numbers of donors. Instead, as a result of therapeutic advances, blood clotting elements known as Factors VIII and IX are extracted from the pooled blood of hundreds, thousands, even tens of thousands of donors. A single infected source of plasma can contaminate an entire lot of factor concentrate. Consequently, in the late 1970s and early 1980s the risk to hemophiliacs of receiving contaminated blood products was high, even when the overall prevalence of HIV infection in a given society was relatively low. The political economy of blood further compounded the risk to which hemophiliacs were exposed. Sixty percent of the world's plasma supply was purchased from plasma sellers in the United States, where it was then made into factor concentrate. The fact that the background prevalence of HIV infection in the U.S. was higher than in any other industrialized nation had fatal consequences.
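
The arithmetic behind this asymmetry is worth making explicit. If each donation independently carries a small probability p of being infected, a pool drawn from n donations is contaminated with probability 1 - (1 - p)^n, which climbs toward certainty as n grows into the thousands. The short sketch below illustrates the point; the prevalence figure and pool sizes are purely hypothetical and are not drawn from the historical data discussed here.

```python
# Illustrative sketch only: how pooling magnifies a small per-donation risk.
# The prevalence value and pool sizes below are hypothetical, not historical data.

def pool_contamination_risk(prevalence: float, pool_size: int) -> float:
    """Probability that a pool of `pool_size` donations contains at least one
    infected unit, assuming independent donations: 1 - (1 - p)^n."""
    return 1.0 - (1.0 - prevalence) ** pool_size

if __name__ == "__main__":
    prevalence = 0.001  # hypothetical: 1 infected donor per 1,000
    for pool_size in (1, 10, 1_000, 10_000):
        risk = pool_contamination_risk(prevalence, pool_size)
        print(f"pool of {pool_size:>6} donations -> {risk:.2%} chance of contamination")
```

Even at a hypothetical prevalence of one infected donor per thousand, a pool of ten thousand donations is almost certain to contain an infected unit, whereas a single-unit transfusion carries only the one-in-a-thousand risk.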

As a result of these epidemiological, technological and economic factors, hemophiliacs were compelled to bear an extraordinary burden. Half of the hemophiliacs in many countries were infected with HIV. Entire families have succumbed to AIDS as infected men with hemophilia--fathers, brothers, and sons--transmitted infection to their wives, who in turn infected their children. A single viral infection thus endangered the universe of blood therapeutics.

In the course of conflicts over blood, long-established convictions about the moral and political status of the institutions responsible for the blood supply were shattered. Symbols of altruism and national solidarity, such as Red Cross Societies, became the targets of escalating criticism. The reputations of many who had cared for hemophiliacs, some of whom contributed to therapeutic advances that ushered in the era of hemophilia normalization, were destroyed. Abject apologies were forced from corporate officials and government administrators. In some dramatic instances, those held responsible for the catastrophe were jailed. Hemophiliacs, once docile patients, transformed themselves into activists demanding justice and recompense from those they held responsible for their plight.

Almost every industrial nation has experienced deep and sustained conflict as a result of the contamination of blood with HIV. In those nations, the awareness that AIDS could be transmitted through blood, the discovery that heat treatment could kill HIV, and the development of an antibody test that would detect the viral cause of AIDS were well-known scientific achievements. Dispute centered on when emerging scientific and epidemiological evidence was sufficient to provide a basis for public action. In the heat of controversy, no nation escaped the tendency to treat blood as a metaphor for the connection between the individual and society, thereby making domestic blood a symbol of virtue and foreign blood one of impurity.

Five core issues have animated conflicts over contaminated blood, to varying degrees, in all industrialized democracies.2 They provide a framework both for a retrospective understanding of the controversy over HIV-tainted blood and for an attempt to identify the central policy questions that must be answered if the threat of blood-borne pathogens is to be minimized in the future, and the social burden they create is to be managed equitably.

 

Managing Threats to Blood Safety

The first indication that blood could transmit AIDS came in mid-1982 in the United States. The CDC’s Morbidity and Mortality Weekly Report described three hemophiliacs with symptoms virtually identical to those found in some homosexual men and Haitians, suggesting the possibility that members of all three groups were victims of a common, new, and poorly understood syndrome. In retrospect it is clear that those first cases were warnings of a potential catastrophe. To those examining the evidence available at the time, however, the picture was rife with uncertainty. Beyond the contested baseline question of whether the blood supply was contaminated, the appropriate course of action was the subject of controversy because of disagreement about which (if any) interventions would enhance blood safety. How should such ambiguous and inconclusive evidence have been judged? Which institutional forces shaped and constrained decision making? What scientific and clinical factors informed the behavior of those in positions of responsibility?

In the face of uncertainty, both action and inaction had potential costs. An immediate change in blood collection, processing, and distribution could make authorities vulnerable to charges of over-zealousness if their action proved unnecessary or ineffective. It could increase costs and decrease supplies of whole blood and blood products. It could require hemophiliacs to return to burdensome therapeutic interventions. Inaction, on the other hand, could lead to disastrous clinical consequences. In the end, the concept of “acceptable risk,” in this instance as well as others, inevitably produces the question, “acceptable to whom?”

Confronted with uncertainty about the danger of blood and blood products as vectors for AIDS transmission, decision makers retreated to a well-established paradigm of blood safety developed in response to Hepatitis B. From the 1970s, patients, physicians, and blood safety experts were united in accepting that the lifestyle benefits hemophiliacs enjoyed from blood products outweighed the substantial risk of contracting Hepatitis B from those products. By 1980, progress in developing a test for Hepatitis B reinforced the view that the burden of Hepatitis B would be time-limited as well as worth bearing. Those who produced, distributed, prescribed, and consumed blood products were therefore accustomed to stressing the significant benefits of the products and minimizing their risks.

Once it became clear that a viral blood-borne agent was responsible for AIDS, decision makers confronted a range of policy options. They included the introduction of antibody testing, the mandating of heat treatment, and the withdrawal of blood and blood products already on the shelves that had been collected or manufactured before safety-enhancing measures were introduced. The speed with which those responsible for blood and blood products introduced safety measures, and their willingness to bear the costs of withdrawing potentially unsafe products, varied from country to country, despite reference to shared epidemiological data. Sometimes the differences could be measured in months, but the impact of even small differences would become epidemiologically and politically significant.

In retrospect, it is known that some portion of HIV infection in those dependent upon blood occurred before the first blood-associated cases of AIDS were identified. Others were infected in the period between the identification of the first cases and the technical availability of safety-enhancing measures; still others between the time when such measures were available and when they were introduced. In each of these periods, the issues of risk assessment and management were different. So, too, are the lessons to be drawn.

The disaster of HIV contamination, and the outrage it has spawned, has profoundly affected how those responsible for public health now view questions of blood safety. It has shaped their understanding of how new and uncertain risks to the blood supply should be managed, especially risks that are small but costly to eliminate. In the United States, for example, antigen testing was introduced to reduce the risk of HIV transmission, but did so only marginally and at a cost of millions of dollars per averted case. In Australia as well as Denmark, a decision was made to screen for the viruses HTLV-I and HTLV-II. Danish estimates indicated that such efforts would prevent only one case of cancer over the next 30 to 60 years. In many nations, there has been a shift in presumptions governing decision making under conditions of uncertainty, one that emphasizes risk-aversion at any cost.

As a result, the need to balance high levels of safety with reasonable costs has been subsumed by an impulse to maximize safety irrespective of the price. Despite the obvious social benefits of blood safety, there are some degrees of safety that require vast expenditure and bring small benefit. Sometimes, possibilities to improve safety marginally must be foregone. Such determinations should be made under conditions that maximize public accountability, without regard to commercial or other narrow interests. They must take into consideration individual needs and broad social welfare concerns. In most cases, there will be a tendency to mask decisions that accept “imperfect” safety, because such decisions have high political costs. Such tendencies must be resisted. Those who may be placed at risk by the decision to forego costly safety-enhancing measures have a right to know about the situation they confront as they make medical decisions. Consent to undergo treatment in such circumstances should not be confused with a willing acceptance of risk. Therefore, those who may suffer injury may have a legitimate claim to some form of compensation.

In a world where absolute safety is unattainable, decision makers are compelled to make policy choices based upon ambiguous, incomplete, and conflicting information. Such decisions are, at base, ethical, involving the distribution of the burdens and benefits of treatment with blood and blood products. In short, the question of managing risk is a question of justice.

 

Voluntarism, Markets and the Supply of Blood

Since Richard Titmuss posited a direct relationship between blood safety and voluntary, unremunerated donation, the idea that all blood should be freely given has achieved the status of an international orthodoxy. What made his proposition all the more attractive was the claim that the goals of social solidarity and safety were, in the case of blood, linked through the altruistic donor. Here, at any rate, markets and market-inspired behavior seemed not to provide a model for the organization of a critical domain of social life. Despite sustained criticism of Titmuss’ views by those who claimed that carefully selected “paid donors” could provide a source of blood that was both safe and reliable, voluntarism has survived the increasingly hegemonic status of market perspectives in recent decades.

The aura that has come to surround volunteer blood donors and altruistic blood donation has exacted a price that only became apparent in the context of the AIDS epidemic. Because donors were viewed as selfless, and because the process of donation was viewed as an expression of solidarity, it was politically and ethically difficult to develop policies that distinguished among potential donors. Blood authorities could not simply exclude those who might pose a risk because of their behavior or because they came from nations or groups thought to present increased risk to the blood supply. Indeed, some of the most contentious encounters in nations as diverse as the United States, Australia, and Denmark centered on the status of gay men: the potential benefits from, and consequences of, efforts to exclude gay blood donors, who viewed such actions as a manifestation of homophobia and a threat to the goal of social equality. Gay men were not the only ones affected in this way. In France, prison officials were reluctant to exclude inmates, many of whom were drug users, from the donor pool because giving blood was viewed as a step toward social integration.

In the aftermath of the AIDS epidemic, the mythic equivalence of the voluntary donor and the safe donor has been shattered. What will be the consequences for the future of blood policy and practice? Now that tensions have emerged between the ends of solidarity and safety--twinned by Titmuss--how will they be resolved? In nations where donor organizations have been influential, what role will they play when matters of safety are under consideration? Will they be viewed as representing a narrow sectional concern or the overarching interests of those dependent upon blood and blood products?

As blood plasma is increasingly subject to transformation by pharmaceutical firms, it is difficult to sustain the symbolic attachments evoked by whole blood. So too is it difficult to preserve the centrality of the whole blood donor when plasma donation requires a kind of "professional" commitment in both time and effort that is hardly compatible with the amateur voluntarism so praised by Titmuss. It remains a striking feature of the world supply of plasma that much of it is still purchased from blood sellers in the United States at for-profit plasmapheresis centers.

As scientific advances continue to open the way to new therapeutic developments, the blood system is increasingly shaped by concerns about safety and the avoidance of liability. How the commercial world of sellers and fractionators confronts that of donors and blood banks is a central component of the evolving transformation of the blood industry.

Safety and the Ideology of Self-Sufficiency

In an era of globalization, in which national borders increasingly give way to regional and international economic forces, both international and national health authorities continue to regard self-sufficiency as the fundamental principle of blood policy. From Australia to Japan, from the United States to France, the idea that nations should be self-sufficient in blood is the holy grail of public health policy.

HIV infection caused by the distribution of contaminated blood offers an opportunity to disentangle the meanings of and justifications for self-sufficiency, and to question the faith in self-sufficiency as both a moral and medical aspiration. Because HIV was spread through whole blood transfusions and treatment with blood products, it has made possible a discussion of the relevance of self-sufficiency for both whole blood and the pharmaceutically transformed products derived from blood and its components.

There are a number of concerns about the sale and/or donation of blood across borders that make self-sufficiency in blood and its components appealing. Among the most pragmatic is that whole blood and its components have storage requirements that make it difficult to ship them over long distances. If they are not stored properly, and used rapidly, they become useless. Furthermore, international dependence rather than national self-sufficiency might diminish the sense of belonging to a community in which blood freely circulates as a common currency; it might diminish the willingness to donate; it could result in shortages not remediable through local action; the effectiveness of local regulatory control could be reduced; and injuries or accidents caused by “foreign” blood could lead to accusation and recrimination infused with racism and xenophobia.

There is considerable justification for whole blood and component self-sufficiency, particularly when compared to the possibility of dependence upon distant nations. Still, the justification for self-sufficiency of blood components is least compelling in bordering or geographically close nations with similar economic, social, and regulatory configurations. The primary public health concern ought to be that of obtaining an adequate supply of safe blood from donors likely to have a low prevalence of blood-borne infectious agents. That goal is not necessarily advanced by an ideology of self-sufficiency that values the citizenship of the blood donor over the medical needs of recipients.

In the case of clotting factors, it is more difficult to justify the importance of national self-sufficiency. Unlike blood components, which do not require an expensive or technologically sophisticated infrastructure, blood products pose difficulties for both collection and manufacture. Donating whole blood, from which components are separated, is neither invasive, time-consuming, nor costly; providing plasma for blood products takes time and incurs the cost of plasmapheresis. For these reasons, much of the world’s plasma supply is purchased, not donated, regardless of whether or not a nation is self-sufficient. There are, however, nations that derive much of their plasma supply from whole blood donations rather than from plasmapheresis.

The practical impediments to international trade in whole blood do not arise with blood products and derivatives, and their manufacture into pharmaceutical agents differentiates them from whole blood, with its powerful cultural associations. Nevertheless, in nations where hemophiliacs became infected as a consequence of the use of clotting factor made from American plasma, import dependency became the target of outrage. Had local “pure” plasma been used, it is claimed, catastrophe could have been prevented. The claim is well received by the media, the public, those interested in the blood system and, most importantly, by victims seeking to pinpoint blame. The lesson commonly drawn from the spread of HIV through blood products is that never again must nations import products made from blood drawn in other nations. Self-sufficiency is therefore promoted as a mechanism through which injury from future blood-borne pathogens can be avoided.

The evidence at hand does not substantiate the popular perception of a link between import dependence and heightened risk. The United States, which produced its own supply of concentrate from American plasma, had one of the highest rates of HIV infection among hemophiliacs. In Australia as well as in France, domestic Factor VIII also produced high rates of infection among hemophiliacs. On the other hand, hemophiliacs in nations such as Japan, which depended upon American imports, had similarly high infection rates. These data indicate that self-sufficiency itself does not answer the question of how best to ensure the safety of blood products, since the risk of contamination by blood products is determined, in part, by the underlying serological risks of the population from whom plasma is collected.

Since it is impossible to know where the next blood-borne pathogen will arise, it is equally impossible to determine the safest source of blood products. Carefully regulating plasma collection and fractionation centers to ensure that they meet the highest possible international safety standards is more likely than national self-sufficiency to limit the distribution of tainted products. The ideology of self-sufficiency that gives pride of place to the nationality of blood donors masks the most basic concern of those in need of blood: relative safety. The importance of minimizing the possibility that blood will be a vector for the transmission of infectious disease, not the need for self-sufficiency, ought to serve as the fundamental lesson of the AIDS years.

Justice and Compensation for the Victims of Blood-Borne Pathogens

In virtually every industrialized nation, individuals infected with HIV through whole blood or blood products have demanded compensation. They have done so in part by turning to the courts as individuals or classes of the aggrieved. More dramatically, they have pressed governments to make payment either through the public treasury or by coordinating payments from public and private sector sources. Both implicitly and explicitly, those who have sought compensation have distinguished themselves from individuals infected through sexual activity and injection drug use. Unlike those who acted in a way that placed themselves at risk, they claimed, blood transfusion recipients and those who took factor concentrate prescribed to them by their physicians did not make a “choice.” By highlighting their medical vulnerability and their dependence on health care providers, “victims” of blood-borne HIV infection have asserted a special claim on society’s resources.

To a remarkable degree, these claims have been successful. In Japan, each person infected with HIV through the blood supply receives compensation payments of close to $500,000. In France, they receive between $150,000 and $400,000; in Denmark, $120,000. Only the United States government failed (as of 1997) to make “restitution,” relying instead on tort litigation to provide an avenue of recompense. In no country do those with HIV infection caused by sexual or needle-sharing behavior, the primary modes of transmission, receive compensation payments. A deep moral and financial wedge has consequently been driven between those infected with HIV through blood and others with HIV infection.

The financial contours of national compensation schemes and the legal and political maneuvering that led to their creation are detailed in the following chapters. Everywhere, the timing of compensation, the generosity of the payments, and the designation of those eligible were political determinations. Those most able to organize, lobby, sue, and marshal the influence of the media were most likely to succeed. Even the language of justice and compensation has been contested. From Canada to Japan, from Australia to Denmark, debate has raged over whether the money paid to those infected by HIV contaminated blood should be called compensation, relief, or extraordinary assistance. The choice of terminology depends upon one’s view of whether those responsible for blood safety failed to exercise due care, whether such failure constituted negligence, and whether such negligence rose to the level of criminal culpability.

The pitched battles over HIV and blood experienced in every nation, the recognition that there had been a medical tragedy, and the knowledge that many would die as a result made it difficult to think carefully and fairly about whether particular individuals and groups infected with HIV had a special claim to financial compensation. The time is now ripe for the kind of analytical--perhaps skeptical--consideration that was so difficult in the face of human misery. That process might start with the following questions:

  • To what extent is justice served by providing state compensation to people infected with HIV through the blood supply when others who suffer iatrogenic injury are not similarly compensated?
  • Should all who were infected through blood and blood products be treated equally regardless of the extent to which their infections were preventable?
  • Should a no-fault system of compensation be created to manage future blood-related claims?
  • Should national legislatures, administrative agencies, or courts be responsible for developing and managing compensatory schemes?

Nations guarantee different levels of medical care and social security to those who are ill and disabled. Individuals infected with HIV in one country may receive care that is life-long and free, along with publicly funded social support for themselves and their families. Those in another may have no claim to affordable or adequate care, and will have to call upon their own resources. Given such differences, it is difficult to arrive at a cross national standard for what constitutes just compensation. If outrage at the failure of the United States to provide public compensation is rooted, at least in part, in concern about how those infected will pay for their medical care, how might compensation be justified in Western European nations that provide not only medical care but robust social insurance?

Compensation payments are generally viewed as a way of making injured parties whole. They are not always linked to judgments about fault. In almost every legal system there are instances in which injuries occur and compensation payments are made, but blame is not assigned. Such schemes reflect a certain rationality. Tragedies do not require travesties. Technological achievements inevitably bring with them the possibility of unintended harms. No-fault compensation schemes are appealing under such circumstances because of their efficiency, their low transaction costs, the relative ease with which injured parties can be compensated, and the extent to which economic burdens of injury are borne by those most able to pay.

However, such schemes fail to "make whole" when they do not recognize the psychological needs of individuals or the social needs of communities to hold to account those deemed responsible for actions that caused injury. In France and Japan, where demands for compensation were met with cash payments, many injured parties rejected the proffered funds as insufficient. They did not simply desire a financial payment; they insisted upon the identification of responsible actors and the apportionment of blame. No-fault compensation fails in such cases because it emphasizes money and denies the need to censure guilty individual and institutional actors.

Nations must develop efficient and compassionate schemes to meet the economic needs of those who will in the future be victims of unavoidable iatrogenic injuries. Such efforts will not substitute, in the case of negligence-related harms, for effective mechanisms that hold to account those in whom the public has invested its trust.

 

Blood, Bureaucracy and Organizational Reform

In the effort to understand why they were infected, those who contracted HIV through blood and blood products sought to identify individuals who had failed to protect the safety of the blood supply. In the U.S., hemophilia activists denounced a well-known hemophilia expert as "The Mengele of the Hemophilia Holocaust." Denmark’s Minister of the Interior was castigated as "Blood Britta." French and Japanese officials were subject to criminal prosecution.

In addition to the search for guilty individuals, there was an attempt to determine what went wrong with the very institutions responsible for blood safety. Investigative commissions in nation after nation concluded that in the late 1970s and throughout the 1980s, governmental structures and other organizations responsible for the blood supply were ill-equipped to understand, evaluate, or respond effectively to the challenge of HIV contaminated blood. Blood experts, whatever their individual beliefs or policy desires, had been constrained by institutional loyalties and concerns that made bold action anathema. The blood bureaucracy had failed.

A common denunciation of past bureaucratic arrangements has linked the perspectives of the independent commissions of inquiry charged, in many nations, with uncovering the roots of policy failure. Virtually everywhere it was asserted that the long history of collaboration between regulators and the regulated poorly served the goals of blood safety. The lack of accountability to consumers of blood and blood products was almost everywhere viewed as a guarantee that narrow commercial and institutional interests would prevail on the blood policy agenda. Despite these elements of commonality, striking variation exists in the attempt to pinpoint the cause of systemic failure. The fault in one nation may be viewed as a willingness to accept the burden of Hepatitis B as the price of progress; in another, the failure to appreciate the data on AIDS is underscored. In some nations the dead weight of bureaucratic control is highlighted; in others it is the failure to impose careful and stringent bureaucratic restraints. Some have argued that market forces corrupted the goals of public health; others claim that market-driven behavior was more salutary than the behavior of those in organizations that are formally not-for-profit.

Hope that different national experiences with HIV and blood would result in clear and simple lessons for the future has not been realized. Indeed, the lessons to be learned are more easily read as cautionary tales about the many possible bureaucratic arrangements. Despite systems that varied significantly in terms of degree of centralization, links between state actors and the private sector, responsiveness to public input, the power of the medical lobby, and the overall power of bureaucracy, the initial response to the possibility that HIV could contaminate the blood supply was remarkably similar. Confronted with uncertainty and lacking transparency in policy making, the various blood bureaucracies did too little (in some cases, far too little), at least initially, about the threat of HIV.

In the aftermath of infection, often in response to the calls of national commissions for bureaucratic reform, some features of national blood systems have begun to converge. In the broadest sense, one reaction to conflict, blame, and litigation over HIV-tainted blood has been a move toward greater centralization. The virtues of pyramid-like governmental structures have been extolled, while the well-known costs of such arrangements have been minimized. Ironically, centralizing governmental control of blood has proceeded despite a political climate in many nations that has led to calls for “less regulation” in many other sectors. Even where greater degrees of centralization have not been achieved, the recognition that certain agencies (like Japan’s former Pharmaceutical Affairs Bureau and the Canadian Red Cross) were themselves tainted by the conflict over blood has generated wide-scale institutional reorganization.

Stung by accusations of inaction and apathy, regulators in all of the countries we have studied have come to realize the high costs of political paralysis. They now accept that it is their responsibility to investigate potential risks to the blood supply publicly and skillfully. While action may not be required, active decision making is imperative. In addition, the relationship between regulatory agencies and the entities they are charged with overseeing has come under attack. Concern that the agencies were “captured” by interested parties, in part as a result of their near-monopoly on technical expertise, has led many nations to restructure the formal ties permitted between the blood sector and the state. Industry involvement in the U.S. FDA’s Blood Products Advisory Committee, a hallmark of the earlier era, has been denounced as a recipe for disaster. A new regime that seeks to limit industry’s influence has been embraced. At the same time, consumer participation on such committees, and in other aspects of blood regulation, has increased.

What remains unclear is whether changes in the bureaucratic agencies that manage blood will result in effective responses to future blood borne pathogens. Despite the long conflict over HIV and blood, it remains difficult to clearly specify the most desirable institutional arrangements for limiting the potential for future iatrogenic tragedy caused by blood.

 

Conclusion

While the HIV pandemic has had a profound impact on Third World nations, many of those nations have been spared the tragedy related to transfusion and blood product-associated AIDS. The most impoverished nations have avoided conflicts over HIV and blood because, unlike the sexual transmission of AIDS, blood-related transmission requires the existence of a medical and technical infrastructure, and the economic capacity to pay for pharmaceuticals such as factor concentrate. Here the data are stark. Approximately 80% of the world’s blood collection and usage occurs in 20% of the world’s nations. In poorer nations, blood transfusions, where they are available, rarely involve more than two units of blood because there is not enough blood to meet demand. Most boys with severe hemophilia die of their disease before reaching adulthood, and they rarely, if ever, receive clotting factor concentrate.

Although limited, however, transfusions are not unknown in many Third World countries, especially in large cities. With the background level of HIV infection rising, the epidemiological picture is changing. A report from Vietnam in late 1997, for example, revealed that at least 100 individuals had contracted HIV infection from blood transfusions. Although Vietnamese blood is screened for HIV antibody, such testing can fail to detect the presence of infection during the first months after a donor contracts HIV. Where the incidence of new infections is high, this can pose a serious threat to blood safety. Moreover, because the majority of blood is collected from professional blood sellers in urban areas, where rates of HIV are higher than in the countryside, blood shipped from the city is introducing HIV into areas that previously had low rates of infection. A similar pattern has emerged in China, making the blood supply a major source for the spread of AIDS.

While the threat of blood-borne HIV infection may be emerging in the Third World, it has clearly receded in the advanced industrial nations where the bitter feuds that erupted in the wake of the iatrogenic disaster spawned in the early 1980s have all but been brought to an end. In retrospect, we now know that the saga of AIDS and blood involved both a tragedy that could not have been averted and a disaster that was the consequence of individual malfeasance and institutionally structured misjudgments. Both aspects of the story must be borne in mind as we think about the demands that securing a safer blood supply should impose upon us and about the limits of public policy in securing that safety.

The challenges of the next years will center on the capacity of sophisticated medical systems to detect and respond to the first signs of contamination in a way that limits the potential for disaster. They will also center on the importance of developing social policies to meet the needs of those who, despite the best of efforts, will be exposed to pathogenic injuries during the course of their treatment. Already, the recognition of a widespread hepatitis C epidemic has caused a new international conflict over risk and responsibility for medically induced disease. Disagreement between public health authorities, blood bankers, and patients about whether those who might have been exposed to hepatitis C should be notified and offered blood testing was the first sign that the era of blood scandal had not come to an end. In Canada and Australia, political turmoil over who was at fault and who should bear the financial costs of compensating those infected through blood transfusions has been acute; the other industrialized democracies discussed in this volume are destined to experience similar controversy. The questions raised by hepatitis C make clear that blood feuds did not come to an end with conflicts over HIV. Instead, the experience of the AIDS years has left an indelible mark.

The controversies that have surrounded blood in the age of AIDS share common features with those that have been endemic to advanced industrial societies. Technological achievements are Janus-faced: along with their promise of progress almost always come unintended social burdens. How to manage those threats without impeding the innovative impulse, and how to assure that the benefits and burdens of progress are equitably borne, remain fundamental challenges of governance in societies ever more dependent on scientific developments.

 

ENDNOTES

1. This working paper is part of a research project that will be published as Blood Feuds: AIDS, Blood and the Politics of Medical Disaster, Eric A. Feldman and Ronald Bayer, eds., Oxford University Press, forthcoming. The project was supported by The Toyota Foundation and The Japan Foundation Center for Global Partnership. Feldman received support from the Robert Wood Johnson Foundation’s Health Policy Scholars Program and Yale University’s Center for Interdisciplinary Research on AIDS (Grant No. P01-MH/DA-56826 from the National Institute of Mental Health and the National Institute on Drug Abuse); Bayer was partially supported by an NIMH Senior Research Scientist Award (Grant No. 5 K05 MH01376).

2. The conceptualization of these core issues emerged from an international collaborative project that involved three meetings over the course of one year. The first, held in July 1996, brought together 25 project participants—authors of national case studies, as well as those who were to write comparative analyses focused on politics, policy, culture, and economics. Comparative essays were the focus of the second project meeting. At the final project meeting, held in June 1997, we sought to synthesize the emerging understanding of the decade-long controversy involving AIDS and blood with an eye toward anticipating the challenges posed by future pathogenic threats to the blood supply.




 

Working Papers describe current legal and policy issues related to HIV/AIDS. Published by the Yale University Center for Interdisciplinary Research on AIDS (CIRA). The views and perspectives expressed are those of the authors. Supported by grant number P01-MH/DA-56826 from the National Institute of Mental Health and the National Institute on Drug Abuse.
Mailing Address: CIRA, Yale University School of Medicine,
40 Temple St., Suite 1B, New Haven, CT 06510-2715
Email: cira@biomed.med.yale.edu

Eric A. Feldman, J.D., Ph.D., is Associate Director of New York University’s Institute for Law and Society. His articles on law and society, Japanese health policy and HIV/AIDS have appeared in edited volumes and publications including the Los Angeles Times, American Journal of Comparative Law, Social and Legal Studies, Hastings Center Report, and the Lancet.

