Knowledge of toxicity is primarily obtained in three ways:
- by the study and observation of people during normal use of a substance or from accidental exposures
- by experimental studies using animals
- by studies using cells (human, animal, plant)
Most chemicals are now subject to stringent government requirements for safety testing before they can be marketed. This is especially true for pharmaceuticals, food additives, pesticides, and industrial chemicals.
Exposure of the public to inadequately tested drugs or environmental agents has resulted in several notable disasters. Examples include:
- severe toxicity from the use of arsenic to treat syphilis
- deaths from a solvent (diethylene glycol) used in a sulfanilamide preparation (sulfanilamide was one of the first antibacterial drugs)
- thousands of children born with severe birth defects resulting from pregnant women using thalidomide, an anti-nausea medicine
By the mid-twentieth century, such disasters were becoming more frequent as new synthetic chemicals were developed at an increasing rate, often with no knowledge of their potential toxicity before the general public was exposed.
Knowledge of the toxicity of xenobiotics to humans is derived by three methods: clinical investigations, epidemiology studies, and animal testing.
Clinical investigations are a component of the Investigational New Drug (IND) application submitted to the U.S. Food and Drug Administration (FDA). They are conducted only after the non-clinical laboratory studies have been completed.
Toxicity studies using human subjects require strict ethical considerations. They are primarily conducted for new pharmaceutical applications submitted to FDA for approval.
Generally, toxicity found in animal studies occurs with similar incidence and severity in humans. Differences sometimes occur, however, so clinical tests with humans are needed to confirm the results of non-clinical laboratory studies.
FDA clinical investigations are conducted in three phases. Phase 1 consists of testing the drug in a small group of 20 to 80 subjects. Information obtained in Phase 1 studies is used to design Phase 2 studies, in particular to:
- determine the drug's pharmacokinetics and pharmacological effects
- elucidate its metabolism
- study the mechanism of action of the drug
Phase 2 studies are more extensive, involving several hundred patients, and are used to:
- determine the short-term side effects of the drug
- determine the risks associated with the drug
- evaluate the effectiveness of the drug for treatment of a particular disease or condition
Phase 3 studies are expanded controlled and uncontrolled trials conducted with several hundred to several thousand patients. They are designed to:
- gather additional information about effectiveness and safety
- evaluate overall benefit-risk relationship of the drug
- provide the basis for the precautionary information that accompanies the drug
Epidemiology studies are conducted using human populations to evaluate whether there is a causal relationship between exposure to a substance and adverse health effects.
These studies differ from clinical investigations in that individuals have already been administered the drug during medical treatment or have been exposed to it in the workplace or environment.
Epidemiological studies measure the risk of illness or death in an exposed population compared with that risk in a closely matched (e.g., same age, sex, race, social status) unexposed population.
There are four primary types of epidemiology studies: cohort studies, case-control studies, cross-sectional studies, and ecological studies.
Cohort studies are the most commonly conducted epidemiology studies. They frequently involve occupational exposures, since exposed persons are easy to identify and the exposure levels are usually higher than in the general public. There are two types of cohort studies: prospective and retrospective.
To determine whether epidemiological data are meaningful, standard quantitative measures of effect are employed; the most commonly used are the relative risk and the odds ratio.
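To make these measures concrete, both can be computed from a simple 2x2 exposure/outcome table. The sketch below is illustrative only; the function names and the cohort figures are hypothetical.

```python
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    # Relative risk = incidence rate in the exposed group
    #               / incidence rate in the unexposed group
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

def odds_ratio(a, b, c, d):
    # 2x2 table: a = exposed cases,   b = exposed non-cases,
    #            c = unexposed cases, d = unexposed non-cases
    return (a * d) / (b * c)

# Hypothetical cohort: 40 cases among 1,000 exposed workers,
# 10 cases among 1,000 matched unexposed controls
rr = relative_risk(40, 1000, 10, 1000)   # 4.0: exposed group has 4x the risk
odds = odds_ratio(40, 960, 10, 990)      # 4.125: close to rr for rare outcomes
```

A relative risk above 1 indicates excess risk in the exposed population; for rare diseases the odds ratio closely approximates the relative risk, which is why it is used in case-control studies where incidence cannot be measured directly.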
There are a number of aspects in designing an epidemiology study. The most critical are appropriate controls, adequate time span, and statistical ability to detect an effect.
The control population used as a comparison group must be as similar as possible to that of the test group, e.g., same age, sex, race, social status, geographical area, and environmental and lifestyle influences.
Many epidemiology studies evaluate the potential for an agent to cause cancer. Since most cancers require long latency periods, e.g., 20 years, the study must cover that period of time.
The statistical ability to detect an effect is referred to as the power of the study. To gain precision, the study and control populations should be as large as possible.
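The dependence of power on population size can be sketched with a normal-approximation calculation comparing disease rates in an exposed and an unexposed group. This is a simplified illustration under textbook assumptions, not a substitute for a proper power analysis; the function name and rates are hypothetical.

```python
from math import sqrt
from statistics import NormalDist

def approx_power(p_exposed, p_unexposed, n_per_group, alpha=0.05):
    # Approximate power of a two-sided z-test comparing two proportions,
    # with n_per_group subjects in each of the study and control populations.
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    p_bar = (p_exposed + p_unexposed) / 2
    se_null = sqrt(2 * p_bar * (1 - p_bar) / n_per_group)       # SE under H0
    se_alt = sqrt(p_exposed * (1 - p_exposed) / n_per_group
                  + p_unexposed * (1 - p_unexposed) / n_per_group)  # SE under H1
    z = (abs(p_exposed - p_unexposed) - z_crit * se_null) / se_alt
    return NormalDist().cdf(z)

# A modest excess risk (4% vs 1%) is hard to detect in small groups:
# power rises sharply as the study and control populations grow.
small_study = approx_power(0.04, 0.01, 100)
large_study = approx_power(0.04, 0.01, 1000)
```

Running the comparison shows why epidemiologists push for the largest feasible populations: the same effect that is likely to be missed with 100 subjects per group is detected almost surely with 1,000 per group.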
Epidemiologists attempt to control errors that may occur in the collection of data. These errors, known as bias, are of three main types: selection bias, information (observational) bias, and confounding.
Animal Testing for Toxicity
Animal tests for toxicity are conducted prior to human clinical investigations as part of the non-clinical laboratory tests of pharmaceuticals. For pesticides and industrial chemicals, human testing is rarely conducted. Animal test results often represent the only means by which toxicity in humans can be effectively predicted.
With animal tests:
- chemical exposure can be precisely controlled
- environmental conditions can be well-controlled
- virtually any type of toxic effect can be evaluated
- the mechanism by which toxicity occurs can be studied
Methods to evaluate toxicity exist for a wide variety of toxic effects. Some procedures for routine safety testing have been standardized. Standardized animal toxicity tests are highly effective in detecting toxicity that may occur in humans. Concern for animal welfare has resulted in tests that use humane procedures and only the number of animals needed for statistical reliability.
To be standardized, a test procedure must have scientific acceptance as the most meaningful assay for the toxic effect. Toxicity testing can be very specific for a particular effect, such as dermal irritation, or it may be general, such as testing for unknown chronic effects.
Standardized tests have been developed for the following effects:
- Acute Toxicity
- Subchronic Toxicity
- Chronic Toxicity
- Reproductive Toxicity
- Developmental Toxicity
- Dermal Toxicity
- Ocular Toxicity
- Genetic Toxicity
Species selection varies with the toxicity test to be performed. There is no single species of animal that can be used for all toxicity tests. Different species may be needed to assess different types of toxicity. In some cases, it may not be possible to use the most desirable animal for testing because of animal welfare or cost considerations. For example, use of monkeys and dogs is restricted to special cases, even though they represent the species that may react closest to humans.
Rodents and rabbits are the most commonly used laboratory species due to their availability, low costs in breeding and housing, and past history in producing reliable results.
The toxicologist attempts to design an experiment to duplicate the potential exposure of humans as closely as possible. For example:
- The route of exposure should simulate that of human exposure. Most standard tests use inhalation, oral, or dermal routes of exposure.
- The age of test animals should relate to that of humans. Testing is normally conducted with young adults, although newborn or pregnant animals may be used in some cases.
- For most routine tests, both sexes are used. Sex differences in toxic response are minimal, except for toxic substances with hormonal properties.
- Dose levels are normally selected so as to determine the threshold as well as dose-response relationship. Usually, a minimum of three dose levels are used.
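One reason for using several dose levels is that they allow the no-observed-adverse-effect level (NOAEL) and lowest-observed-adverse-effect level (LOAEL) to be bracketed from group incidence data. The toy screen below uses a fixed incidence threshold purely for illustration; real studies rely on statistical comparison against concurrent controls, and all names and figures here are hypothetical.

```python
def find_noael_loael(doses, affected, group_size, excess_threshold=0.1):
    # doses: ascending dose levels, with dose 0 as the control group.
    # affected: number of animals showing the effect in each dose group.
    # Flags the lowest dose whose response rate exceeds the control rate by
    # more than excess_threshold (LOAEL) and the highest dose below it (NOAEL).
    control_rate = affected[0] / group_size
    loael = None
    for dose, count in zip(doses[1:], affected[1:]):
        if count / group_size - control_rate > excess_threshold:
            loael = dose
            break
    if loael is None:
        return doses[-1], None  # no adverse effect observed at any dose tested
    noael = max(d for d in doses if d < loael)
    return noael, loael

# Hypothetical 3-dose study plus control, 10 animals per group
noael, loael = find_noael_loael([0, 10, 50, 250], [0, 0, 1, 6], 10)
# noael == 50, loael == 250
```

The example also shows why dose spacing matters: if the lowest dose already produced an effect, the study would yield a LOAEL but no NOAEL, and the threshold region would remain unbracketed.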
Acute toxicity tests are generally the first tests conducted. They provide data on the relative toxicity likely to arise from a single or brief exposure. Standardized tests are available for oral, dermal, and inhalation exposures.
Subchronic toxicity tests are employed to determine the toxicity likely to arise from repeated exposures of several weeks to several months. Standardized tests are available for oral, dermal, and inhalation exposures. Detailed clinical observations and pathology examinations are conducted.
Chronic toxicity tests determine toxicity from exposure for a substantial portion of a subject's life. They are similar to the subchronic tests except that they extend over a longer period of time and involve larger groups of animals.
Carcinogenicity tests are similar to chronic toxicity tests, but extend over a longer period of time and require larger groups of animals in order to assess the potential for cancer.
Reproductive toxicity testing is intended to determine the effects of substances on gonadal function, conception, birth, and the growth and development of the offspring. The oral route is preferred.
Developmental toxicity testing detects the potential for substances to produce embryotoxicity and birth defects.
Dermal toxicity tests determine the potential for an agent to cause irritation and inflammation of the skin. This may be the result of direct damage to the skin cells by a substance, or an indirect response due to sensitization from prior exposure. There are two dermal toxicity tests: the dermal irritation test and the dermal sensitization test.
Ocular toxicity is determined by applying a test substance for one second to the eyes of six test animals, usually rabbits. The eyes are then carefully examined for 72 hours, using a magnifying instrument to detect minor effects. The ocular reaction may occur on the cornea, conjunctiva, or iris. It may be simple irritation that is reversible and quickly disappears, or the irritation may be severe and produce corrosion, an irreversible condition.
The eye irritation test is commonly known as the "Draize Test." This test has been targeted by animal welfare groups as an inhumane procedure due to pain that may be induced in the eye. The test allows the use of an eye anesthetic in the event pain is evident. The Draize Test is a reliable predictor of human eye response. However, research to develop alternative testing procedures that do not use live animals is underway. While some cell and tissue assays are promising, they have not as yet proved as reliable as the animal test.
A battery of standardized neurotoxicity tests has recently been developed to supplement the delayed neurotoxicity test in domestic chickens (hens). The hen assay detects the delayed neurotoxicity that can result from exposure to anticholinesterase substances, such as certain organophosphate pesticides. The hens are protected from the immediate neurological effects of the test substance and observed for 21 days for delayed neurotoxicity. Other neurotoxicity tests include measurements of motor activity, behavior, and neuropathology.
Genetic toxicity is determined using a wide range of test species, including whole animals and plants (e.g., rodents, insects, and corn), microorganisms, and mammalian cells. A large variety of tests have been developed to measure gene mutations, chromosome changes, and DNA activity. The most common gene mutation tests detect reverse mutations in bacteria, notably the Ames test using Salmonella typhimurium.
Chromosomal effects can be detected by a variety of tests, some using whole animals (in vivo) and some using cell systems (in vitro). Several assays are available to test for chemically induced chromosome aberrations in whole animals. The most common are the micronucleus test and the bone marrow chromosome aberration (cytogenetic) assay.
Additional in vivo chromosomal assays include the dominant lethal assay and the heritable translocation assay.
In vitro tests for chromosomal effects involve the exposure of cell cultures and microscopic examination for chromosome damage. The most commonly used cell lines are Chinese Hamster Ovary (CHO) cells and human lymphocyte cells. The CHO cells are easy to culture, grow rapidly, and have a low chromosome number (22) which makes for easier identification of chromosome damage.
Human lymphocytes are more difficult to culture. They are obtained from healthy human donors with known medical histories. The results of these assays are potentially more relevant to determine effects of xenobiotics that induce mutations in humans.
Two widely used genotoxicity tests measure DNA damage and repair rather than mutation itself. DNA damage is considered the first step in the process of mutagenesis. The most commonly used test for unscheduled DNA synthesis (UDS) involves exposing mammalian cells in culture to a test substance. UDS is measured by the uptake of tritium-labeled thymidine into the DNA of the cells. Rat hepatocytes and human fibroblasts are the mammalian cell types commonly used.
Another assay detects DNA damage by exposing repair-deficient strains of E. coli or B. subtilis to the test substance. Because the DNA damage cannot be repaired, the cells die or their growth is inhibited.
Disclaimer: This article is taken wholly from, or contains information that was originally published by, the National Library of Medicine. Topic editors and authors for the Encyclopedia of Earth may have edited its content or added new information. The use of information from the National Library of Medicine should not be construed as support for or endorsement by that organization for any new information added by EoE personnel, or for any editing of the original content.