Chapter 9: Extrapolating health risks

 

AT&T TO CUT WORKFORCE 120 PERCENT

NEW YORK, N.Y. (SatireWire.com)  AT&T will reduce its workforce by an unprecedented 120 percent by the end of 2003, believed to be the first time a major corporation has laid off more employees than it actually has.

AT&T stock soared more than 12 points on the news.

The reduction decision, announced Wednesday, came after a year-long internal review of cost-cutting procedures, said AT&T Chairman C. Michael Armstrong. The initial report concluded the company would save $1.2 billion by eliminating 20 percent of its 108,000 employees.

From there, said Armstrong, "it didn't take a genius to figure out that if we cut 40 percent of our workforce, we'd save $2.4 billion, and if we cut 100 percent of our workforce, we'd save $6 billion. But then we thought, why stop there? Let's cut another 20 percent and save $7 billion…

 

Of course, it is immediately obvious that this “news” piece from SatireWire.com is not serious.  One cannot cut more than 100% of a workforce.  Furthermore, the supposed motivation for cutting the entire workforce is to save five times the money that would be saved by cutting only 1/5 of the workforce. 

This example introduces the concept of dose extrapolation.  We deal with it every day, one way or another.  The above article is humorous because we realize that some things cannot be extrapolated proportionately:  adding 10 tablespoons of salt to your kettle of soup does not make it taste 10 times better than 1 tablespoon of salt.  (One of our friends, the well-known population biologist Bruce Levin, once tried to speed the baking of a sponge cake by raising the oven temperature to its maximum; the cake did not turn out as expected.)  

Dose extrapolation enters most calculations of risk.  Our government pays a lot of attention to environmental hazards that kill us or give us disease.  The people who do these calculations worry a lot about small risks, because a small risk multiplied by 250 million (the approximate U.S. population) can add up to a lot of death or disease.  (It is of some passing interest that second-hand tobacco smoke is estimated to kill about 3000 Americans each year – roughly the number killed in the World Trade Center attacks – yet the country has been slow to take on this health risk.)  Time and again, however, we estimate small risks by extrapolating from high risks.  Many of these extrapolations may be just as flawed as in the AT&T spoof, but we don't know it.

(I) Extrapolation across doses:  the abstract models

Too much of almost anything can kill you – even the things we eat and drink daily.  But by now, most of us know to avoid the things that are likely to kill us outright – we call them poisons, accidents, etc.  There are many other exposures in life that won’t kill us immediately, but they might have a long-term, cumulative effect that eventually does us in.  Cancer is one such concern.  We increase our cancer risks by smoking, eating poorly, being exposed to the sun, driving, getting X-rays and other types of radiation exposure (the risk from radiation exposure is almost too small to measure), and by inhaling or ingesting any of many man-made or natural chemicals in our environment or the workplace.  In most of these cases, we not only want to know how to avoid large risks but also how to avoid small risks – and what the risks actually are.

Small risks are usually calculated from large risks, and to determine the small risks, we need to extrapolate.  Fortunately or unfortunately, the natural laws of risk extrapolation do not necessarily follow straight lines.  There are several basic forms of extrapolation that need to be considered:

It doesn't matter whether the vertical axis is death rate, survival rate, cancer rate, or some other measure – the problem is basically the same.  For concreteness, we will assume that we are trying to determine the relationship between cancer rate and exposure to some agent (e.g., radiation).  All four graphs are models of how radiation might lead to cancer, and we want to know which ones are correct.

Model A is the easiest to understand. Under this model, your cancer rate is simply proportional to your dose. This model could not apply across all conceivable doses, because the cancer rate can't exceed 100% - just as AT&T can’t lay off more than 100% of its workforce – but it could apply to the range of exposures that interests us.

Model B is possibly counter-intuitive: increases in exposure do not increase cancer rates until the exposure reaches some threshold. We have drawn a linear response to exposures above the threshold, but our interest in this model lies chiefly in the existence of a threshold regardless of the form of the curve beyond the threshold.  In a variant of this type of model, the flat part of the curve (at low doses) actually decreases with increasing exposure – a phenomenon known as hormesis. 

Model C is one in which cancer rate increases more steeply with increasing dose. That is, doubling your exposure more than doubles your increased risk, so we call it accelerating.

Model D also shows increasing cancer levels with increasing exposure, but the rate of increase declines with exposure (decelerating).
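To make these four alternatives concrete, here is a minimal sketch (in Python) of how each curve might be written as a function of dose.  All of the parameter values are arbitrary, chosen only so the shapes can be compared; none of them come from real data.

```python
# Four toy dose-response curves (arbitrary parameters, illustration only).
# "dose" is exposure in arbitrary units; each function returns a cancer rate.

def model_a_linear(dose, slope=0.01):
    """Model A: risk simply proportional to dose."""
    return slope * dose

def model_b_threshold(dose, threshold=5.0, slope=0.01):
    """Model B: no added risk below a threshold, linear above it."""
    return 0.0 if dose <= threshold else slope * (dose - threshold)

def model_c_accelerating(dose, coeff=0.001):
    """Model C: doubling the dose more than doubles the risk (here, quadratic)."""
    return coeff * dose ** 2

def model_d_decelerating(dose, coeff=0.03):
    """Model D: risk rises with dose, but ever more slowly (here, square root)."""
    return coeff * dose ** 0.5

for d in (1, 2, 4, 8):
    print(d, model_a_linear(d), model_b_threshold(d),
          model_c_accelerating(d), model_d_decelerating(d))
```

All four toy curves agree that more exposure means more risk (setting hormesis aside); they differ in the shape of the curve, and especially in its shape at low doses – precisely the region where real data are scarcest.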

The statistical difficulties.   In principle, there needn't be anything unusually difficult about telling apart different curves – you have no difficulty figuring out that a little bit of salt goes a long way, whereas you have a far wider tolerance range for sugar (unless you are diabetic).  Why, then, is it so difficult to determine health risks?  Four reasons.  First, it is usually difficult to get large numbers – we are often concerned with rates on the order of 1 in ten thousand or less, so we may not have more than a handful of positive cases.  For example, there are only about 5,000 factory workers in the U.S. who have been exposed to reasonably high levels of dioxin.  That's it – no chance of obtaining a bigger sample.  Second, we are usually addressing rates of something in which each person is either affected or not.  By analogy to a coin flip, every individual is either a "head" or a "tail", and the rate we are concerned with is a probability lying somewhere between all heads and all tails.  Such rates are harder to estimate with precision than are continuous quantities such as IQ scores or speeds:  we have much greater statistical power when the values we measure can take on any value across a continuum than when they can take on only one of two values.  Third, the different dose extrapolation models differ chiefly in how they deviate from linearity, yet our statistical methods are most useful at fitting straight lines to data.  It requires relatively few points to demonstrate the slope of a relationship, but it requires many points to show a deviation from a linear relation.  (Fundamentally, this is also a sample size problem.)  Fourth, we often have incomplete information about how potential poisons and carcinogens act inside the body.  What are the metabolites?  Are the metabolites more or less toxic than the parent compound?  What are the half-lives?  What cell receptors does the chemical bind to?  What cascades does this trigger?  We don't have this information because the science just has not yet advanced far enough.  Indeed, if we understood this much, we would not require large samples to figure out the form of dose extrapolation.
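A small calculation illustrates the first two difficulties – rare outcomes and yes/no data.  The sketch below takes a hypothetical excess cancer rate of 1 in 10,000 (an illustrative number, not an estimate for any real hazard) and shows how many people must be studied before the estimated rate is even moderately precise.

```python
import math

# Sampling uncertainty for a rare rate (illustrative numbers only).
# Suppose the true excess cancer rate at some low dose is 1 in 10,000.
p = 1e-4

for n in (1_000, 10_000, 100_000, 1_000_000):
    expected_cases = n * p
    se = math.sqrt(p * (1 - p) / n)      # standard error of the estimated rate
    print(f"n={n:>9,}  expected cases={expected_cases:7.1f}  "
          f"relative uncertainty={se / p:6.0%}")
```

Distinguishing a curved from a straight dose-response requires comparing several such rates, each carrying this uncertainty, so the sample sizes needed to settle the extrapolation question multiply further still.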

 (II) Animal extrapolations

We are all familiar with the LD50 concept – the dose of a substance that will kill half those exposed to that dose.  Virtually everything we eat has an LD50 – table salt, sugar, water, and many more everyday items.  Animals, mostly mice and rats, are used to establish lethal doses and other biological effects, but other common testing animals are rabbits, guinea pigs, birds, and one person at UT even uses turtle embryos. 

At present, there is no alternative for determining the lethal effects of most substances, although we may be approaching an era in which lethal and other harmful effects can be ascertained in cell cultures.  The determination of a lethal dose in mice, however, does not necessarily extrapolate directly to humans.  Thus, if we determine that the LD50 of some compound is 1 milligram of the substance per kilogram of rat, it does not mean that the human LD50 will also be 1 milligram per kilogram of human.  A correction is often needed to go from the LD50 in the test animal to the LD50 in humans.
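One common style of correction scales doses by body surface area rather than by body weight, the reasoning being that metabolic rates track surface area more closely than mass.  The sketch below uses conversion factors of the kind often quoted for this purpose; treat the specific numbers as illustrative assumptions rather than authoritative constants.

```python
# Body-surface-area scaling of a mg/kg dose from a test animal to a human.
# Km factors (body weight / surface area) are commonly quoted round values;
# treat them as illustrative assumptions, not authoritative constants.
KM = {"mouse": 3, "rat": 6, "guinea pig": 8, "rabbit": 12, "dog": 20, "human": 37}

def human_equivalent_dose(animal_dose_mg_per_kg, species):
    """Convert an animal mg/kg dose to a human-equivalent mg/kg dose."""
    return animal_dose_mg_per_kg * KM[species] / KM["human"]

# Example: a 1 mg/kg dose in rats corresponds to roughly 0.16 mg/kg in humans.
print(human_equivalent_dose(1.0, "rat"))
```

Note that this correction addresses body-size differences only; genuine physiological differences between species (see the dioxin example later in the chapter) can shift sensitivity by orders of magnitude in either direction.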

(III) Extrapolations across related hazards:  TEF & rad/rem

In the section below on radiation, we will find that harmful effects of vastly different types of radiation are measured on a single scale (rad or rem).  The same concept has been developed for some classes of related chemicals:  the toxic equivalent factor, or TEF.  There are over 200 chemically distinct dioxins, furans and PCBs, depending on where the Cl atoms go on the benzene rings (dioxins, furans and PCBs differ only in how the two benzene rings are attached).  All dioxins, furans and PCBs are thought to act biologically only via binding to the Ah receptor.  Evidently, the different congeners differ in their binding affinity, and hence in their biological effects.  The TEF for each of these chemical congeners is an estimate of how toxic it is relative to 2,3,7,8-tetrachlorodibenzo-p-dioxin (the most toxic of them all, with a TEF of 1.0 by definition).  The TEFs for the others are often orders of magnitude smaller (see table at end of http://www.epa.gov/ncea/pdfs/dioxin/part2/drich9.pdf).  Thus, TEFs were created to solve the problem of extrapolating from one chemical to another.  The example shows how molecular biology will eventually shed light on extrapolation problems.
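The arithmetic behind TEFs is simple: multiply each congener's concentration by its TEF and sum the products, giving a single toxic-equivalent (TEQ) concentration for the mixture.  A minimal sketch follows; the concentrations are invented, and the TEF values are rough illustrative magnitudes rather than the official table.

```python
# Toxic equivalency (TEQ): weight each congener by its TEF and sum.
# Concentrations are invented for illustration; TEFs are rough illustrative
# magnitudes, not the official regulatory table.
sample = {
    "2,3,7,8-TCDD":      {"conc_pg_per_g": 0.5,   "tef": 1.0},
    "a pentachloro dioxin": {"conc_pg_per_g": 2.0,   "tef": 1.0},
    "a pentachloro furan":  {"conc_pg_per_g": 10.0,  "tef": 0.3},
    "a mono-ortho PCB":     {"conc_pg_per_g": 500.0, "tef": 0.00003},
}

teq = sum(c["conc_pg_per_g"] * c["tef"] for c in sample.values())
print(f"Total TEQ: {teq:.2f} pg TEQ per g")
```

The official TEF table (see the EPA link above) would simply replace the illustrative values here.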

(IV) Short examples

Rodent models of cancer

Drugs and food additives for human consumption need to be tested for their possible abilities to cause cancer. One of the important tests involves feeding the substance to rodents (rats or mice) and assessing cancer rates.

Consider how to use an animal model efficiently and ethically when testing whether a food additive causes cancer. There are several considerations in setting up a study:

i) What organism to use -- humans are the most accurate, but experimentation on humans is costly, lengthy, and often considered unethical (depending on the experiment and the country; some pharmaceutical manufacturers have gone overseas to gather data from human subjects using experimental protocols that would be unethical, and hence impossible to implement, in the U.S.).  Bacteria and yeast are inexpensive and free of ethical considerations, but they are single-celled and cannot become cancerous.  Rodents offer a good compromise between humans and simpler organisms and thus are used for much of this testing.

Given the choice of a rodent model, other issues arise:

ii) how many rats - fewer is cheaper.  (The UT Animal Resources Center lists costs per cage for different types of animals – which does not include the cost of the animal.  A medium rat cage costs about $0.50/day and can house 2-3 animals.  Thus a study using 1000 rats would cost about $250.00 per day.)  Fewer is also more ethical from the perspective of animal welfare, but an inadequate sample size may fail to detect an effect (larger sample sizes will be able to detect smaller effects), which has negative ramifications for humans.

iii) how long to monitor them -- cost increases with the duration of the study but so does the ability to detect harmful effects 

iv) the dose – if small doses cause cancer, then higher doses will cause even more cancer and possibly cause cancer faster.

The compromise achieved amid these conflicting issues has often been to feed rodents large amounts of the substance  -- enough that a calculated number of the animals actually die from it.
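The logic of the high-dose compromise can be made concrete with a little probability.  Suppose – purely for illustration – that tumor risk rises linearly with dose at 1 in 10,000 per dose unit.  The sketch below asks how likely a treated group is to show at least a few tumors, for various doses and group sizes.

```python
import math

def prob_at_least_k(n, p, k):
    """P(at least k successes) for a binomial(n, p), computed exactly."""
    return 1.0 - sum(
        math.comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k)
    )

# Illustrative assumption: tumor risk rises linearly, 0.0001 per unit dose.
risk_per_unit_dose = 1e-4

for dose in (1, 100, 1000):          # arbitrary dose units
    p = risk_per_unit_dose * dose
    for n in (50, 500):              # number of rodents in the treated group
        print(f"dose={dose:>5}, n={n:>4}: "
              f"P(>=3 tumors) = {prob_at_least_k(n, p, 3):.3f}")
```

With realistic numbers of animals, only high doses give any appreciable chance of detecting the effect – which is exactly why the dose-extrapolation problem arises.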

Bruce Ames has pointed out a possibly serious limitation of this model -- that the cancer-causing potential of the substance in humans may be overestimated by the rates in mice.  High doses typically kill lots of cells in the mouse, even if the mouse survives.  The mouse responds by replacing those cells -- by dividing at higher than normal rates.  These increased cell division rates, all by themselves, can lead to increases in cancer.  (Cancerous cells have an abnormally high cell division rate, and cells become predisposed to become cancerous when they divide.)  Feeding mice so much of an otherwise harmless substance that their bodies must produce lots of new cells may thus give the mice elevated cancer rates; people would never eat high enough doses to have their cancer rates affected in this way.

The difficulty with this model arises from dose extrapolation – do low doses cause cancer at a rate proportional to the cancer rates from high doses?  In essence, Ames is arguing that many chemicals with cancer-causing activity at high doses are not carcinogenic at low doses (an extreme form of the "accelerating" model).  The matter is unresolved.

Fetal alcohol syndrome

We can go back nearly 100 years to find the first observations that children born to alcoholic mothers exhibit certain facial abnormalities, have reduced motor skills, and have reduced mental capacities.  The characteristics across different afflicted children are similar enough (though varying in degree) that the term "fetal alcohol syndrome" is used to describe the condition.  Widespread public awareness of the possible ill effects of maternal drinking during pregnancy has come only in the last 3 decades.  Alcoholic beverages now display warnings for pregnant women, something that was not done as recently as 30 years ago.

The first warnings to the public recommended no more than 2 "drinks" a day for a pregnant woman.  Now the advice is to abstain entirely during pregnancy.  The difference between the "2 drinks a day" threshold and the "zero tolerance" rule is the difference between acceptance of a threshold model (or an extreme form of the accelerating model) on the one hand, and acceptance of the linear (or decelerating) model on the other.  Recent studies demonstrate effects on child behavior down to an average of as little as 1 drink per week during pregnancy.  The enhanced awareness of the risks of fetal alcohol exposure merely represents better and more data from low doses – direct observations at low doses rather than extrapolations.

Second-hand smoke

The admonitions against smoking given out in high schools 4 decades ago did not extend to smoke from someone else’s cigarettes:  the risk of smoking was only to those who puffed, we were told.  Jump forward to the present:  second-hand smoke is estimated to kill 3,000 people in the U.S. annually.  For lung cancer, living with a smoker elevates your risk by 30% (the smoker’s risk is elevated 1000%). 

This risk has been estimated two ways:  directly, and also as a linear extrapolation.  Both estimates agree reasonably well.  But how is this done?  First of all, how do you even determine the level to which a non-smoker is exposed to tobacco smoke?  This much is a prerequisite for determining the risk from second-hand smoke.  A breakdown product of nicotine is cotinine, whose levels can be assessed in urine (just like other drug tests).  Its persistence time is long enough to be useful for measuring exposure to nicotine, which is used as a surrogate for tobacco smoke (although there are now several products available that can give you nicotine without the smoke).  Cotinine levels in your body are thus a model of exposure to tobacco smoke.  From there, one measures lung cancer rates in large groups of people and estimates the risk.  Initially, the risk of second-hand smoke was estimated by a linear extrapolation from smokers' risks:  if you know the amount of smoke that a non-smoker receives compared to a smoker, and you know the excess lung cancer rate of a smoker, then you can easily estimate the elevated lung cancer rate of the non-smoker.  Studies that look at excess cancer rates in non-smokers who receive significant amounts of second-hand smoke have supported the risk estimates.
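The linear extrapolation itself is a one-line calculation.  In the sketch below, the assumption that someone living with a smoker inhales roughly 3% of a smoker's dose is an illustrative figure chosen for this example; combined with the 1000% excess risk quoted above, it reproduces the roughly 30% figure.

```python
# Linear extrapolation of excess lung-cancer risk to a passive smoker.
# The 3% relative dose is an illustrative assumption, not a measured value.
smoker_excess_risk = 10.0        # smoker's risk elevated 1000%, i.e. +10x baseline
relative_dose_passive = 0.03     # passive smoker's dose as a fraction of a smoker's

passive_excess_risk = smoker_excess_risk * relative_dose_passive
print(f"Estimated excess risk for a passive smoker: {passive_excess_risk:.0%}")
# -> about 30%, in line with the direct estimates described above
```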

Pesticides in your food

A topic that occasionally hits the news is pesticide (and other chemical) residues in food.  Not too long ago, the controversy was about Alar in apples, which isn't even a pesticide but is used for cosmetic purposes (to keep the produce looking nice).  A few celebrities took issue with the fact that this chemical served no "purpose" yet was estimated to cause a handful of cancers in the U.S. population each year.  The same issues underlie pesticide residues in food – how low must the residual levels of pesticides be by the time the food reaches market?  Public concern over pesticide residues in food is driven by the fact that high doses can cause cancer in lab animals, and this concern has spawned a significant demand for "organic" produce.  (One of the largest retail suppliers in the U.S. is Whole Foods, a company that started in Austin.)

Acceptable pesticide residues in food are based on extrapolations of risk from higher doses.  Very high doses of pesticides can kill you, but sub-lethal doses can cause cancer.  Here, there is great uncertainty regarding the actual cancer-causing effect of low pesticide residues, and the best that the EPA (Environmental Protection Agency) can do is a linear extrapolation.  However, the data are simply too sparse here (and there are so many confounding factors with diet – plants are full of gobs of natural pesticides) for us to know whether these risks are anything close to real.

Dioxins and animal extrapolations

Just a few decades ago, dioxin, a contaminant produced as a by-product of herbicide manufacture and other industrial processes, was found to be incredibly toxic.  There was already enough of it in our environment to cause problems, and indeed, many people then and now have detectable levels of it in their blood.  But determining the risks to humans has not been easy.  For some lab animals, dioxin is very toxic in acute doses – by weight, the LD50 for guinea pigs is at the 1-billionth level (1 billionth of a gram of dioxin per gram of body weight).  But the guinea pig is the most sensitive species found.  Hamsters can tolerate 3000 times this dose.  Different rat strains vary in sensitivity by about a thousand-fold.  The LD50 for acute doses in humans is unknown, but we are certainly at least 10 times and maybe hundreds of times less sensitive than guinea pigs; indeed, no cases of acute mortality (sudden death from exposure) are known in humans, even though there have been some substantial occupational exposures.  There are, of course, long-term effects of dioxin beyond acute mortality (e.g., cancer).  Some biochemical indicators in cell cultures exposed to dioxin suggest that humans are highly sensitive in other ways besides acute mortality.

 (V) A detailed example:  radiation

It was only in the twentieth century that we came to understand radiation, and that understanding has brought many technical advances.

Yet long before radiation found wide application, scientists were aware of its harmful effects. In 1927, H.J. Muller reported that X-rays caused inherited lethal mutations in Drosophila (a small fly raised in the laboratory). Initial concern about radiation as a human health hazard thus focused on its potential in causing inherited defects, but most attention nowadays concerns cancer. Despite society’s perhaps excessive concern about cancer from radiation, the scientific foundation here has numerous uncertainties.

The preceding sections (on dose extrapolation) addressed the abstract models evaluated when assessing the risk of low exposures.  This section looks at the many physical models that assist us in understanding these risks and offers some details about the true complexities of making risk calculations.

(1) No animal extrapolation: humans only

We know from studies of insects that radiation causes genetic damage.  But insects don't develop cancer.  Even with mice (which can develop tumors), their biology is sufficiently different from human biology to doubt their utility as a model of human cancer.  Humans thus make the best models, and perhaps the only satisfactory models, when studying cancer.  And virtually all of the data used come from humans.  However, because experiments in which people are deliberately exposed to radiation are not permitted, the humans for whom we have such data are limited to a few groups:  Japanese survivors of atomic bomb blasts, U.S. soldiers exposed to radiation during atomic tests, and victims of diseases whose treatment requires large radiation exposures.  So the radiation data are based on the most suitable organism (humans) but cover only a limited range of the sources of human exposure.

(2) Extrapolation across types of outcomes:  few cancers can be quantified

(A) Leukemia. There are dozens of kinds of cancer, and radiation may well contribute to all of them. Yet most of the radiation-cancer data are for leukemia (an overproliferation of white blood cells). One reason is that leukemia is a relatively common form of cancer, especially so for children. Second, the time lag between radiation exposure and the resulting cancer is shorter for leukemia than for many other cancers, which also contributes to the ease of studying it. The limitation of relying so heavily on leukemia as a "model" cancer is that it may not represent all cancers.

(B) Chromosome Breaks.  In view of the enormous sample sizes required for detecting increases in cancer from modest increases in exposure, it is useful to have alternative physical models that give us insight into the risks from radiation.  A simple assay that can be applied to large numbers of people is the incidence of chromosome breaks in white blood cells.  A sample of blood is drawn, the white cells are cultured, and the chromosomes of the dividing white cells are spread out on a microscope slide.  Chromosomes whose "arms" are broken can be identified quite easily.  So this assay can be performed on thousands of people, and it has the potential to be quite sensitive, because tens of thousands of chromosome arms can be screened per person, enabling detection of slight increases in the rate of chromosome damage.  Even so, studies have concentrated on people who have received large doses of radiation:  Japanese bomb survivors, nuclear shipyard workers in Scotland, uranium miners, and victims of ankylosing spondylitis (treated with 15 Gy).  The limitation of this assay is that the form of the relationship between radiation and chromosome breaks need not be similar to that between radiation and cancer.

(3) Extrapolation across hazards:  many types of radiation

We tend to talk about all types of ionizing radiation as if they were the same. Well, not quite. We worry about exposing our skin to ultra-violet light, and we realize that a hat, shirt, or sun-screen will protect us. We do not assume that we can be so easily protected from other types of ionizing radiation. But the very use of the word radiation to include these various classes of physical phenomena reflects our lumping them together. In fact, there are many types of ionizing radiation, and we are familiar with most of them. They fall into two main classes:

Electromagnetic (photons): ultra-violet light (UV), X-rays, gamma rays, cosmic rays

Atomic particles: beta particles (electrons), alpha particles (helium nuclei), neutrons

Each of these types carries enough energy to enter cells, and by colliding with atoms and molecules in the cell, they can cause harm to the cell's chemistry.  In general, high-energy photons (X-rays, gamma rays, cosmic rays) can go right through us, and some types do so with only a small probability of causing damage per photon -- the very fact that we can use X-rays to expose film when we stand between the X-ray source and the film illustrates that many of the X-rays pass through us.  The particles, however, tend to be stopped more easily.  Furthermore, the kind of damage each type of radiation causes is somewhat different:  gamma rays and cosmic rays are much higher in energy than X-rays and UV, and hence can do different types of damage.  So the cancer-causing effect of a dose of radiation will vary at least slightly from one type of radiation to another.

Measures of Radiation.  The science of radiation and cancer would be perhaps unbearably complex were it not for the fact that physicists and biologists have figured out ways to compare the biological effects of radiation on a common scale.  That is, the biological effect of an exposure to X-rays can be equated to the biological effects of a particular exposure to gamma rays or beta radiation, and so on.  One reason that such a common measure is desirable is that people as a group are exposed to multiple kinds of radiation, and it is easier to keep track of overall doses than to monitor each type separately.  (An earlier section introduced the concept of TEF, or Toxic Equivalent Factor, which attempts the same kind of extrapolation across different toxic molecules.)  The following list explains some measures of radiation.

These measures are attempts to convert physical measures of radiation into biological measures.  The rad is a unit of absorbed dose – the amount of radiation energy deposited per gram of tissue – and the gray (Gy) is the corresponding SI unit, equal to 100 rads.  The rem is the absorbed dose in rads multiplied by a quality factor reflecting how biologically damaging that particular type of radiation is, and the sievert (Sv) is the corresponding SI unit, equal to 100 rem.

There are obviously many possible ways we could have decided to measure the effects of radiation, but these measures have been adopted worldwide and will be with us until a better alternative is found.  In the context of this chapter, we can think of each way of measuring radiation itself as a model.  Two people who both receive exactly 10 rads of radiation can have different biological responses.  Other factors partially determine the biological consequences of 10 rads of radiation, such as a person's age, whether the radiation was received in one large dose or many small doses, and whether the radiation was in the form of beta particles or X-rays.  Hence, the statement that someone "received 10 rads of radiation" is a summary, or model, of the radiation that that person actually received.  That model is an attempt to allow us to combine the effects across the different types of radiation.
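In practice, the rem is essentially the rad weighted by the type of radiation: dose equivalent (rem) = absorbed dose (rad) x quality factor.  The quality factors in the sketch below are the commonly quoted round numbers (about 1 for photons and electrons, roughly 10 for neutrons, about 20 for alpha particles) and should be read as illustrative assumptions.

```python
# Combine absorbed doses (rads) of different radiation types into rems.
# Quality factors are commonly quoted round values; treat them as illustrative.
QUALITY_FACTOR = {"x_ray": 1, "gamma": 1, "beta": 1, "neutron": 10, "alpha": 20}

def dose_equivalent_rem(absorbed_doses_rad):
    """Sum rad doses, weighting each radiation type by its quality factor."""
    return sum(QUALITY_FACTOR[kind] * rad
               for kind, rad in absorbed_doses_rad.items())

# Example: 5 rads of gamma plus 0.5 rad of neutrons
print(dose_equivalent_rem({"gamma": 5.0, "neutron": 0.5}))   # -> 10.0 rem
```

This weighting is exactly the kind of model described above: it lets very different exposures be tallied on one scale, at the cost of assuming that the weights capture the biological differences.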

The difficulty of the problem.  Our understanding of the cancer risk from above-background radiation is based on haphazard models when it comes to the types of radiation involved in the exposures.  As noted above, we do not experiment with people to determine the cancer risk of radiation, so we must rely on medical, military and occupational exposures.  For the most part, these kinds of exposures were not measured at the time, and there is a fair bit of guesswork in calculating the types of radiation involved.  Two conceivable problems arise from this dependence on uncontrolled exposures.  First, and as already discussed, the different kinds of radiation may have different effects in causing cancer.  If the rem and rad indeed accurately collapse the differences in cancer risk from diverse types of radiation, then this problem is not serious; but the rad and rem are not calculated from cancer risk directly, so this problem may be real.  Second, the calculated exposures may themselves be in error in these studies.  For example, with the Japanese bomb survivors – the largest and most extensive database for cancer risk from radiation – there is now controversy over whether most of the radiation from the blast was neutrons (quite harmful to cells) or photons (less harmful to cells); earlier calculations had assumed mostly photons.  (Weapon design matters here: a so-called neutron bomb, for instance, is designed so that a much larger fraction of its energy is released as neutron radiation.)  If neutrons were more prevalent than assumed, it means that we have overestimated the harmful effects of radiation in this data set.  Such a miscalculation in the type of radiation results in a miscalculation of the exposure survivors received (the number of rems or rads).

(4) Extrapolation across doses

We do not live in a radiation-free environment.  Each of us is exposed to radiation throughout life.  To appreciate what excess radiation exposure means, it is useful to understand what the baseline exposure is.  An average American can expect to receive about 300 milli-rem per year (mrem/yr).  You might be surprised that this estimate is fully three times what we thought in the 1980s – we didn't know how common our exposure to radon was until recently.  The breakdown is as follows:

Radon                           ~ 200  mrem
Rocks/Soil                      ~  28  mrem (90 in Colorado; 23 on the East or Gulf Coast)
Cosmic                          ~  28  mrem
Internal (K-40)                 ~  26  mrem
Medical                         ~  10  mrem
Dental                          ~   5  mrem
Fallout from Weapons Testing    <   0.3 mrem
TOTAL                           ~ 300  mrem

(from http://www.ehs.washington.edu/training/Radsaf/dental/background_rad.htm)

(For perspective, a single dose of 350 rads is serious enough to require medical care and to have effects weeks into the future, but it is not quite enough to kill most people; that dose is roughly 1000 times the average annual background exposure.)

These values are averages.  Your own exposure to medical and dental sources depends on whether you get X-rays (a chest X-ray adds 30 mrem to your annual dose, but a mammogram and a gut fluoroscopy each add 200 mrem).  Your exposure to terrestrial sources depends on where you live.  Your exposure to cosmic rays depends both on your elevation and your occupation, because the atmosphere screens out most of the cosmic rays.  For example, pilots and flight crew members are exposed to excess radiation, but the excess is less than 25 mrem/year, and there is no evidence for elevated cancer rates in these occupations.  Smoking also increases radiation exposure – roughly an 80-fold increase in the dose to the lining of the lungs, due mostly to alpha particles.  (Is 80-fold the same as 80%?)
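As a rough illustration of how these averages combine, the sketch below tallies an annual dose from the components in the table above plus a few of the personal adjustments just mentioned.  All of the numbers are the approximate figures quoted in this section; the function arguments are hypothetical.

```python
# Rough annual-dose tally (mrem/year) using the approximate figures above.
baseline = {
    "radon": 200, "rocks_soil": 28, "cosmic": 28,
    "internal_K40": 26, "medical": 10, "dental": 5, "weapons_fallout": 0.3,
}

def annual_dose(chest_xrays=0, mammograms=0, lives_in_colorado=False):
    dose = sum(baseline.values())
    dose += 30 * chest_xrays          # ~30 mrem per chest X-ray
    dose += 200 * mammograms          # ~200 mrem per mammogram
    if lives_in_colorado:
        dose += 90 - 28               # Colorado terrestrial dose vs. the average
    return dose

print(annual_dose())                                      # ~300 mrem: the national average
print(annual_dose(chest_xrays=1, lives_in_colorado=True)) # somewhat higher
```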

High doses.  Most of the effort in studying the cancer risk from radiation has gone into groups of people who have received large doses. This is not to suggest that we should be complacent about smaller doses, but rather the cancer risk even from large doses is small enough that it takes years of work and tens of thousands of people to detect statistically-significant increases in cancer rate. So we focus on people who have received large exposures and extrapolate to low doses. Of course, by focusing on people who have received large doses, our data do not tell us about which of the dose extrapolation models apply. The problem is a classic "catch-22": we want to know about the cancer risk from low doses of radiation, but we need to study people who have received large doses in order to measure the effect. Yet these data don't necessarily tell us what we want to know.

The Japanese Database.  One of the first, and still perhaps the most extensive, databases is from Japanese residents of Hiroshima and Nagasaki who survived the atomic bomb blasts.  After the Japanese surrendered, the surviving victims in Nagasaki and Hiroshima were interviewed.  Approximately 6,000 survivors exposed to 100 rads of radiation and 40,000 survivors exposed to 1 rad of radiation were identified, based on their statements of how close they had been to ground zero at the time of detonation.  These individuals and their families were then monitored.  To appreciate the difficulty of studying increased cancer rates, consider the annual number of excess deaths per million people per 0.01 Gy of exposure from the bomb:

leukemia          4 (in 1952)     1 (in the 1970s)
other cancers     2 (in 1952)     4 (in 1972)

The medical treatment for ankylosing spondylitis involves an accumulated exposure of 15 Gy (given over many years), and this group has been used as well.
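A back-of-the-envelope calculation shows just how few excess deaths these rates imply.  Using the 1952 leukemia figure, and assuming – purely for this rough calculation – that the per-0.01-Gy rate scales linearly up to 1 Gy:

```python
# Expected excess leukemia deaths per year among the ~6,000 survivors exposed
# to about 100 rads (1 Gy), using the 1952 figure of 4 excess deaths per
# million people per 0.01 Gy per year, and assuming linear scaling with dose.
excess_per_million_per_001Gy = 4
dose_in_001Gy_units = 100            # 100 rads = 1 Gy = 100 x 0.01 Gy
cohort = 6_000

excess_per_year = excess_per_million_per_001Gy * dose_in_001Gy_units * cohort / 1_000_000
print(excess_per_year)               # about 2.4 excess leukemia deaths per year
```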

Not much definitive resolution on dose extrapolation

Putting it all together, there isn’t much we can say about the shape of the association between radiation and cancer.  Low doses don’t elevate cancer rates very much.  The only cancer with adequate data is leukemia.  For this one type of cancer, the accelerating model applies – big doses are proportionately worse than small doses.  For the rate of chromosome breaks, the linear model applies.

"Cancer"              Model supported
leukemia              accelerating
chromosome breaks     linear

 

(5) Power plant accidents and public attitudes

3-Mile Island.  When most of you were too young to remember (which includes not being born), the U.S. had an accident at one of its nuclear power plants: in March, 1979, one of the reactors overheated at the 3-Mile Island power plant near Harrisburg, PA, and some radioactive gas (approximately 10 Curies) was released into the atmosphere. You might ask why we haven't studied cancer rates in those citizens exposed during that accident, thus augmenting the Japanese data. The reason is that the exposures were trivial:

Assuming equal exposure to radon, the average exposure per year for a resident of Harrisburg is 116 mrem, and of Denver is 193 mrem. Thus, the worst increased exposure from this accident for these Pennsylvania residents was on the order of a summer-long visit to Denver. The accident shut down the unit for over a year and was very costly, so it was not economically trivial. But we will never be able to see an elevated cancer rate in residents near 3-Mile Island resulting from this accident.  However, public reaction to the accident was outrage and near hysteria.
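The comparison can be put in numbers.  Assuming a three-month visit (an arbitrary choice for illustration), the extra dose from spending a summer in Denver rather than Harrisburg is:

```python
# Extra background dose from spending a summer in Denver instead of Harrisburg.
harrisburg_mrem_per_year = 116
denver_mrem_per_year = 193
summer_fraction_of_year = 3 / 12     # assume a three-month visit

extra = (denver_mrem_per_year - harrisburg_mrem_per_year) * summer_fraction_of_year
print(f"{extra:.0f} mrem")           # roughly 20 mrem
```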

Chernobyl

In April of 1986, a true meltdown and rupture of a nuclear power plant occurred at Chernobyl in the Ukraine.  Nuclear power plants cannot explode in the sense of an atomic bomb, but because the core is housed in water, overheating can create such pressure that the cement containment vessel bursts.  The amount of radioactivity released was 50-250 million Curies, which is 5-25 million times the amount of radiation released at 3-Mile Island.  30 people died in the accident, mostly workers at the plant who knowingly went outside to inspect the damage (the effect was to kill their skin cells within days).  The accident shed radioactive ash and gases (radioactive iodine was the main radioactive gas released).  Much of the ash fell nearby, but the gases were distributed widely and exposed people across parts of Europe. 

The long-term impact of this accident on human health is difficult to monitor.  All residents within a 10 km radius of the plant were permanently evacuated.  Residents of the nearby town of Pripyat, only a couple of miles away and the only population center near the reactor, were bused away to locations unknown, but not until more than 24 hours after the accident, so there is no easy way to monitor cancer rates in them.  Their exposures are unknown, but the iodine from the accident was said to be so thick in the air that it could be tasted.

A recent UN report has suggested that there is no scientific evidence of any significant radiation-related health effects to most people exposed to the Chernobyl disaster. There is a significant rise in thyroid cancer (a consequence of the radioactive iodine exposure), and the report points to some 1,800 cases of thyroid cancer.  But "apart from this increase, there is no evidence of a major public health impact attributable to radiation exposure 14 years after the accident. There is no scientific evidence of increases in overall cancer incidence or mortality or in non-malignant disorders that could be related to radiation exposure." There is yet little evidence of any increase in leukemia, even among clean-up workers.

Some U.S. biologists have been studying the wildlife in what has become known as the “10km zone” around the reactor.  Despite the high levels of radioactivity (which at the time of the accident were enough to kill trees in some areas), the wildlife populations today flourish.  Indeed, they report that they have observed more wildlife in the 10km zone than in all other parts of the former U.S.S.R. that they have visited.  While these observations should not be considered as evidence that radiation is harmless, they do point to the tremendous impact humans have on wildlife populations – no one is allowed to live inside the 10km zone these days.

Copyright 1996-2001 Craig M. Pease & James J. Bull