Toxicology

September 16, 2011, 7:07 am
Source: NLM

Biological hazard disposal bin. Source: NLM.

Introduction

Toxicology is the study of the dynamic interaction of chemicals with living systems. It is also the workhorse science of numerous industries and regulatory agencies, from those involved with the development and regulation of food additives to those involved with the use and remediation of hazardous chemicals.

Toxicologists' investigations explore how chemicals interact with biological systems by focusing on the adverse effects and outcomes caused by such interactions. Toxic chemicals may be referred to as either toxins (toxic chemicals produced by living systems such as plants or animals) or toxicants (all other toxic chemicals). Toxicants may also be referred to as xenobiotics; that is, chemicals that are "foreign to living systems." The location within the body where a chemical interacts to cause adverse effects is often referred to as the target tissue or the site of action. Any single chemical can have one or more sites of action; for example, a chemical may affect both the liver and the heart.

The potency of a chemical, or how toxic it is, depends upon its movement through the body to the target site (toxicokinetics); its ability to interact with the body to cause harm (toxicodynamics); and the dose the body receives (exposure level), which is in turn modified by the chemical's toxicokinetics and toxicodynamics. The amount of chemical, or concentration, that reaches the target tissue is a result of all three. Both the kinetics and the dynamics depend upon the current biochemical status of the organism: for example, enzyme levels around the time of exposure, nutritional status, or even stress levels.

By convention, many (but not all) toxicology studies conducted in the early to mid 20th century were designed to identify the median lethal dose (abbreviated "LD50"): the dose required to kill 50 percent of the test population over a defined period of time. These studies are called lethality studies. The most common LD50 test is the acute toxicity test, in which animals are given a single dose of a chemical and the LD50 is determined over a 24-hour period. Today, lethality studies are still conducted as part of many toxicology investigations, but often only as a first step towards providing some insight into the relative potency of new chemicals.
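As an illustration of how an LD50 might be read off dose-mortality data, the short Python sketch below interpolates between the two tested doses that bracket 50% mortality on a log-dose scale. The data and the function are hypothetical; real lethality studies use formal probit or logistic analysis rather than simple interpolation.

```python
import math

# Hypothetical dose-mortality data from an acute lethality study:
# each dose (mg/kg) was given once, and mortality is the fraction of
# test animals killed within 24 hours.
doses = [10.0, 50.0, 100.0, 500.0]     # mg/kg, single doses
mortality = [0.05, 0.20, 0.60, 0.95]   # fraction of population killed

def estimate_ld50(doses, mortality):
    """Estimate the LD50 by interpolating between the two doses that
    bracket 50% mortality. Interpolation is done on a log-dose scale,
    since dose-response curves are conventionally plotted against
    log(dose). A crude stand-in for a proper probit analysis."""
    for i in range(len(doses) - 1):
        lo, hi = mortality[i], mortality[i + 1]
        if lo <= 0.5 <= hi:
            frac = (0.5 - lo) / (hi - lo)
            log_lo = math.log10(doses[i])
            log_hi = math.log10(doses[i + 1])
            return 10 ** (log_lo + frac * (log_hi - log_lo))
    raise ValueError("50% mortality not bracketed by the tested doses")

print(f"Estimated LD50: {estimate_ld50(doses, mortality):.1f} mg/kg")
```

With these invented numbers, 50% mortality falls between the 50 and 100 mg/kg dose groups, and the interpolated LD50 comes out near 84 mg/kg.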

A Brief History of Toxicology

The historical development of toxicology began with early cave dwellers who recognized poisonous plants and animals and used their extracts for hunting or in warfare. By 1500 BC, written evidence indicated that hemlock, opium, arrow poisons, and certain metals were used to poison enemies or for state executions.

With time, poisons came to be used widely and with great sophistication. Notable poisoning victims include Socrates, Cleopatra, and Claudius. By the time of the Renaissance and the Age of Enlightenment, certain concepts fundamental to toxicology began to take shape. Noteworthy in this regard were the studies of Paracelsus (c. 1500 AD) and Orfila (c. 1800 AD).

Paracelsus determined that specific chemicals were actually responsible for the toxicity of a plant or animal poison. He also documented that the body's response to those chemicals depended on the dose received. His studies revealed that small doses of a substance might be harmless or beneficial whereas larger doses could be toxic. This is now known as the dose-response relationship, a major concept of toxicology. Paracelsus is often quoted for his statement: "All substances are poisons; there is none which is not a poison. The right dose differentiates a poison and a remedy."

Orfila, a Spanish physician, is often referred to as the founder of toxicology. It was Orfila who first established a systematic correlation between the chemical and biological properties of poisons of the time. He demonstrated effects of poisons on specific organs by analyzing autopsy materials for poisons and their associated tissue damage.

The 20th century is marked by an advanced level of understanding of toxicology. DNA (the molecule of life) and various biochemicals that maintain body functions were discovered. Our level of knowledge of toxic effects on organs and cells is now being revealed at the molecular level. It is recognized that virtually all toxic effects are caused by changes in specific cellular molecules and biochemicals.

Toxicity Testing

In the early 20th century, as the need for toxicity testing grew, several animal models were developed for large-scale acute toxicity testing and for chronic, long-term testing, often lasting from 90 days to two years depending on the species. Toxicologists, then as now, relied heavily upon animal testing. Such testing has obvious advantages, allowing testing in nonhuman species such as rats or mice, as well as disadvantages. A primary disadvantage is that not all species respond similarly to a given chemical, so there can be a great deal of interspecies difference in responses, even between species of rats and mice. Even for studies concerned with nonhuman species, it is rare that toxicological studies are conducted with the species of concern (or target species). An infamous example of interspecies difference in toxicity is the case of thalidomide, a sleep and anti-nausea aid prescribed to pregnant women. It had been subjected to animal testing in hamsters and several species of mice, and the results indicated that it was relatively safe under the regulatory approaches used in some countries. It was not approved, however, for use in the U.S. Unfortunately, thalidomide behaved very differently in humans, causing severe limb malformations in the developing fetus. This case remains a prime example of interspecies difference, and it led to an overhaul of reproductive and developmental toxicity testing.

The result of this reliance on animal testing is that there is always some uncertainty surrounding numbers derived in studies conducted with laboratory animals when the results are used to estimate toxicity in a completely different species, such as humans. Nonetheless, the great majority of toxicology studies use a variety of laboratory test animals. Because the ethics of exposing large numbers of laboratory animals to potentially toxic chemicals is now in question, current studies are often designed to reduce the number of animals required for such testing, or to use in vitro (typically bench-top studies isolating one system, enzyme, or reaction) and in silico (computer-based) testing regimes.

Over the years, with improved methodology and technology, toxicology slowly evolved from a science of high doses and relatively insensitive endpoints (e.g., death, or changes in organ size or litter size) to a science of low (and environmentally relevant) doses and more sensitive endpoints, such as measurements of biochemical and functional changes in the immune, endocrine, and neurological systems. As chemical exposure is reduced in studies, effects often become less obvious and more difficult to detect (see Figure 1). As a result, results are more often expressed as the ED50, or effective dose: the dose (for whatever endpoint was selected for study) producing a response in 50% of the test population. While these changes towards low doses and more sensitive endpoints are encouraging, most current toxicology and regulatory policies are still heavily dependent upon the existing body of toxicological data, which consists to a large extent of high-dose studies. As a result, toxicologists and risk assessors, in order to relate this large database of high-dose data to the relevant low-dose situations of today, may use high-dose to low-dose extrapolation: deriving effect thresholds for small concentrations of chemicals from studies that used much higher concentrations, which often produce different types, and severities, of effects. This practice introduces yet another layer of uncertainty.

Figure 1

As stated above, chemical concentration is a critical consideration when designing toxicity studies, whether for individual chemicals or chemical mixtures. A fundamental assumption of toxicology is that, at least for chemicals that do not cause cancer, a threshold exists below which adverse effects do not occur. This is known as the No Observed Adverse Effect Level (NOAEL). An 'adverse' effect may be defined as an effect that occurs when the body's ability to compensate following exposure to a toxic chemical is surpassed. This means that some effect may occur even though it is not considered adverse, unlike the formerly used No Observed Effect Level (NOEL), which represented the chemical dose at which no effect of any kind would be observed. The NOAEL and NOEL are direct outcomes of the relationship between the dose, or amount, of a chemical and the adverse effects it causes. Remember that an adverse effect can be anything from lethality to a change in the level of a particular enzyme. Toxicologists can also report results as the Lowest Observed Adverse Effect Level (LOAEL). These numbers (preferably NOAELs, and sometimes LOAELs) are used by risk assessors and regulatory agencies to determine values such as the Reference Dose (RfD) for individual chemicals, commonly used by the EPA. The RfD is a calculated dose or concentration of a particular chemical that is assumed to cause no harm to human populations upon daily exposure. The following equation is used to derive the RfD:

RfD = NOAEL (or LOAEL) / (UF × MF)

In this equation the NOAEL or LOAEL, derived directly from toxicological studies, may be modified by both an uncertainty factor (UF), often a tenfold factor for each source of uncertainty, and a modifying factor (MF), which can range from greater than 0 to 10, with 1 as the default. The uncertainty factor accounts for test-animal to target-species extrapolation, along with other sources of uncertainty. For example, exposure to a xenobiotic may occur over a long period of time (chronic), but for logistical and economic reasons studies are often conducted over a much shorter time period (subchronic). This difference in exposure duration contributes to the uncertainty when using the NOAEL or LOAEL. If the LOAEL is used rather than the NOAEL, another UF is required. Not surprisingly, if the database (all the various studies typically required) is incomplete, another UF is included. Finally, the MF may be used for any other uncertainties, including the quality of the study design used to derive the NOAEL.
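The arithmetic of the RfD equation can be sketched in a few lines of Python. The function name and the NOAEL, UF, and MF values below are hypothetical illustrations, not values from any actual regulatory assessment.

```python
# Sketch of the RfD calculation described above: RfD = NOAEL / (UF x MF),
# where each source of uncertainty typically contributes a tenfold factor.

def reference_dose(noael_mg_kg_day, uncertainty_factors, modifying_factor=1.0):
    """Compute an RfD from a NOAEL (or LOAEL) and a list of uncertainty
    factors, e.g. [10, 10] for animal-to-human extrapolation plus
    variability within the human population. All inputs are illustrative."""
    uf = 1.0
    for factor in uncertainty_factors:
        uf *= factor  # uncertainty factors multiply together
    return noael_mg_kg_day / (uf * modifying_factor)

# Hypothetical example: a NOAEL of 50 mg/kg/day from a subchronic study,
# with 10x for interspecies extrapolation, 10x for human variability, and
# 10x for subchronic-to-chronic extrapolation (UF = 1000), MF = 1:
rfd = reference_dose(50.0, [10, 10, 10])
print(f"RfD = {rfd} mg/kg/day")  # 50 / 1000 = 0.05 mg/kg/day
```

Note how a study based on a LOAEL rather than a NOAEL would simply append another factor of 10 to the list, shrinking the RfD by a further order of magnitude.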

Selection of Endpoints

Selection of endpoints for NOAELs and LOAELs is critical, since these numbers are so often used by risk assessors and regulators. Toxicologists must determine which endpoints they will observe and design their studies accordingly. Selecting the most appropriate endpoint is fundamental to understanding the toxicity of any given chemical. Chemicals can, and often do, affect multiple locations within the body; more often than not, however, one particular area is most sensitive to a chemical's toxic effects, or responds at lower concentrations than another location. For example, a chemical might affect both the brain and the liver, with the brain being more sensitive. If we also consider that it is very difficult to study what we do not know to exist (or do not expect), then there may be many endpoints that are not observed, or that are missed, by toxicologists.

A case in point is the recent recognition of endocrine disruptors, or chemicals that interfere with the normal functioning of the endocrine system. This interference can result in reproductive and developmental impacts, changes in thyroid hormones, or other effects. Clearly, the potential for chemicals to cause these changes is not new, but the realization that these changes occur as a result of low-dose chemical exposure is relatively new. Significantly, the methodology and technology to measure these changes are relatively new and still evolving. There are now many examples, particularly in reproductive, developmental, and neurological toxicology, where toxicologists were unaware of some very sensitive and important endpoints for years. They must now re-evaluate chemicals using these new endpoints, developing new NOAELs and LOAELs along the way. Now that toxicologists can detect minute changes in hormone concentration or neuron (brain cell) function, they must consider the 'biological relevance' of these changes. For example, what do these subtle changes mean for the overall health of an organism? Conversely, what happens when toxicologists ignore or overlook some of these sensitive endpoints?

Note that all of the basic toxicology discussed above describes studies designed to evaluate the toxic effects of single chemicals. Although study design has been modified over time, the great majority of toxicological studies are single-chemical studies, although there is greater awareness of the limitation of such studies.

Toxicology Terminology

Terminology and definitions for materials that cause toxic effects are not always used consistently in the literature. The most common terms are toxicant, toxin, poison, toxic agent, toxic substance, and toxic chemical.

Toxicant, toxin, and poison are often used interchangeably in the literature; however, there are subtle differences as indicated below:

A toxic agent is anything that can produce an adverse biological effect. It may be chemical (for example, cyanide), physical (for example, radiation), or biological (for example, snake venom) in form.

A distinction is made for diseases caused by biological organisms. Organisms that invade and multiply within the body and produce their effects by biological activity are not classified as toxic agents. An example is a virus that damages cell membranes, resulting in cell death.

If the invading organism excretes chemicals that are the basis for its toxicity, the excreted substances are known as biological toxins and the organism is referred to as a toxic organism. An example is tetanus, caused by the bacterium Clostridium tetani. C. tetani itself does not cause disease by invading and destroying cells; rather, a toxin excreted by the bacterium travels to the nervous system (a neurotoxin) and produces the disease.

A toxic substance is simply a material that has toxic properties. It may be a discrete toxic chemical or a mixture of toxic chemicals. For example, lead chromate, asbestos, and gasoline are all toxic substances. Lead chromate is a discrete toxic chemical. Asbestos is a toxic material that does not have an exact chemical composition but consists of a variety of fibers and minerals. Gasoline is likewise a toxic substance rather than a toxic chemical in that it contains a mixture of many chemicals. Toxic substances may not always have a constant composition; for example, the composition of gasoline varies with octane level, manufacturer, and season.

Toxic substances may be organic or inorganic in composition.

Toxic substances may be systemic toxicants or organ toxicants.

A systemic toxicant is one that affects the entire body or many organs rather than a specific site. For example, potassium cyanide is a systemic toxicant in that it affects virtually every cell and organ in the body by interfering with cells' ability to utilize oxygen.

Toxicants may also affect only specific tissues or organs while not producing damage to the body as a whole. These specific sites are known as the target organs or target tissues.

  • Benzene is a specific organ toxicant in that it is primarily toxic to the blood-forming tissues.
  • Lead is also a specific organ toxicant; however, it has three target organs: the central nervous system, the kidney, and the hematopoietic system.

A toxicant may affect a specific type of tissue (for example, connective tissue) that is present in several organs. The toxic site is then referred to as the target tissue.

There are many types of cells in the body and they have been classified in several ways.

  • basic structure (for example, cuboidal cells);
  • tissue type (for example, hepatocytes of the liver);
  • germinal cells (for example, ova and sperm); and
  • somatic cells (for example, non-reproductive cells of the body).

Germ cells are those cells that are involved in the reproductive process and can give rise to a new organism. Mature germ cells carry only a single set of chromosomes. Male germ cells give rise to sperm and female germ cells develop into ova. Toxicity to germ cells can cause effects on the developing fetus (such as birth defects or abortion).

Somatic cells are all body cells except the reproductive germ cells. They have two sets (or pairs) of chromosomes. Toxicity to somatic cells causes a variety of toxic effects to the exposed individual (for example, dermatitis, death, and cancer).

Further Reading

  • Toxicology Glossary
  • Hayes A, ed. Principles and Methods of Toxicology. London: Taylor and Francis, 2001.
  • Ecobichon D, ed. The Basis of Toxicity Testing. New York: CRC Press, 1997.
  • Stelljes ME. Toxicology for Non-Toxicologists. Rockville, Maryland: Government Institutes, a Division of ABS Group Inc., 1999.
  • Ottoboni MA. The Dose Makes the Poison. New York: John Wiley & Sons, 1997.

 

Disclaimer: This article is taken wholly from, or contains information that was originally published by, the National Library of Medicine. Topic editors and authors for the Encyclopedia of Earth may have edited its content or added new information. The use of information from the National Library of Medicine should not be construed as support for or endorsement by that organization for any new information added by EoE personnel, or for any editing of the original content.


Citation

(2011). Toxicology. Retrieved from http://www.eoearth.org/view/article/156675
