Mike Magee MD

As we discussed in the first two sessions of this three-part course, it is an impossible task to rank the top ten medical discoveries that have improved human health and welfare over the centuries. Taking on this challenge is filled with twists and turns. And at the end of this final session, the long-awaited “Top Ten” list will appear.

In sharing this story of human development, we have already encountered artisans and dreamers and determined spirits. Their discoveries have had few geographic bounds, crossing continents on their own time schedules and traveling well, but not without bumps in the road. In most cases, the discoveries have served as their own reward, and quite often have not been accompanied by fame and fortune, at least not immediately, or even in a single lifetime.

In our first two sessions we learned that our bodies are constructed of cells and are vulnerable to germs and system-wide diseases. We also reflected on the role of public health in organizing civil societies that embrace sanitation, safety, and health security. In session three we go “under the skin” to explore our unique molecular structure and the various methods we have used to defend our unique chemistry from outside invaders.

As you will remember, it was 1855 when Rudolf Virchow described “omnis cellula e cellula,” the insight that the cell is the basic unit of every organism; that all organisms are made of one or more cells; and that all cells arise from pre-existing cells. But how?

To all of our benefit, the mid-19th century provided answers in the form of two brilliant scientists – Charles Darwin and Gregor Mendel. These two very different gentlemen shared curiosity, determination, and brilliant minds – but not much else.

Charles Darwin was a child of privilege. His father, Robert Darwin, was a British society doctor who raised his curious son on a pastoral estate called “The Mount.” From the age of seven, Charles was an avid collector of natural objects. Well educated in premier institutions, and with his father’s full support, he took off at the age of 22 on what would become a five-year worldwide exploration that would shake human knowledge, and its reliance on religious principles, to their roots.

The path of his multi-masted voyager, the HMS Beagle, spanned multiple continents, with Darwin throughout dividing his time between specimen collection and extensive observation, recorded in notebook after notebook. He arrived home on October 2, 1836 carrying trunks of birds and beetles and bones, and headed straight to Cambridge University, where he enlisted the help of that university’s best biologists.

In his third year as a “gentleman scientist,” he met and married his first cousin, Emma, with whom he shared a grandfather. They spent a lifetime together, having 10 children, 3 of whom died in childhood. By all accounts, Darwin was a loving husband and devoted father. After nearly two decades of collaboration, organization, and writing, he published his Earth-shaking volume, On the Origin of Species, which laid out in detail his theories on the development of species.

Rumors of the content preceded the publication on November 24, 1859. Its first run of 1,250 copies was already fully sold. Many more would follow. And from that publication, he earned the title “Father of Evolution.” But nowhere in that publication did the word “evolution” even appear. In fact, the closest Darwin ever came to uttering the word was in the book’s final paragraph, which read, “There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.”

Darwin understood well the power and danger of his words, and was careful not to directly connect man to apes. An 1871 cartoon caricature defined the point of tension, with Darwin’s head positioned on an ape’s body. But he took care with his writing. For example, on September 5, 1857, Darwin sent the American botanist Asa Gray a detailed outline of his ideas, including an abstract of “Natural Selection,” which omitted human origins and sexual selection. These protective instincts allowed him to stay on good terms with the Church of England and the ruling monarchy. Evidence of this included his burial place inside Westminster Abbey, next to Sir Isaac Newton, upon his death at age 73 on April 19, 1882.

Darwin’s voyage, collections, and insights live on at the London Natural History Museum. That collection underpins all modern research in evolutionary biology, and specifically the defining concept of “survival of the fittest.” In the museum there are 1,628 works written in a wide range of languages, including 477 different versions of “On the Origin of Species.”

Darwin’s work focused on why and how certain lifeforms survive and thrive long enough to breed and carry on their family lines. But those observations were at best “skin-deep.” It was left to his contemporary, a very different kind of man, to begin to expose the molecular underpinnings of “heredity.”

His name was Gregor Mendel. He was born in Moravia on his family’s farm 13 years after Darwin. Their lives ran side by side; Mendel died 21 months after Darwin was laid to rest. The family farm in Moravia, in what is now the Czech Republic, was shared with his parents and two sisters, Veronica and Theresa. As biographers recount, he knew more than a little about soils and plants, having grown up on a farm his family had owned for 130 years. But the family was under financial pressure in his youth, and his interest in becoming a monk rested, in part, on managing a “perpetual anxiety about a means of livelihood.”

It was a combination of need and religious calling that drew him to the Augustinian St. Thomas Abbey in Moravia, where Mendel became a Catholic priest. Independent and well liked by his fellow monks, when it came to natural science he leaned not on traditional creation theology but on hard measurements, facts, and concrete deductions. It was these qualities that led to his fascination with a 10,000-year-old cultivated crop that had thrived in the Fertile Crescent of the Middle East – the humble pea.

A farmer he was, and farming he did – with a passion for detail and volume. Over a seven-year period he cultivated and calculated, raising 28,000 pea plants and charting each plant’s parentage and physical appearance on seven measures – flower color, seed shape, seed color, pod color, pod shape, plant height, and flower position.

Before the words were invented, Mendel was reverse engineering, through calculation, the concept that some inherent, order-keeping living cell system (the genotype) was directing the creation of specific traits or external appearances (phenotypes). From this knowledge he surmised in 1866 a system of “dominant” and “recessive” biological determinants. One indication of how advanced Mendel’s thinking was is the fact that it would take an additional 62 years for scientists to actually nail down the biology of the genetic architecture and genomic diversity of the genes responsible for Mendel’s seven traits.
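Mendel’s ratios are easy to see in a simulation. Below is a minimal sketch in Python (the allele names are illustrative, though the 28,000 plants echo his actual tally) of a cross between two hybrid pea plants; it reproduces his famous 3:1 dominant-to-recessive phenotype ratio and the underlying 1:2:1 genotype ratio.

```python
import random
from collections import Counter

def cross(parent1, parent2):
    """Each parent passes one randomly chosen allele to the offspring."""
    return "".join(sorted(random.choice(parent1) + random.choice(parent2)))

def phenotype(genotype):
    """One dominant allele ('A') is enough to show the dominant trait."""
    return "dominant" if "A" in genotype else "recessive"

# Cross two hybrid (Aa) plants many times, echoing Mendel's 28,000 peas.
offspring = [cross("Aa", "Aa") for _ in range(28000)]
print(Counter(phenotype(g) for g in offspring))  # roughly 3:1 dominant:recessive
print(Counter(offspring))                        # roughly 1:2:1 AA:Aa:aa
```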

Eleven years separated Virchow’s 1855 observation from Mendel’s 1866 “heredity,” with Darwin’s 1859 “evolution” in between. A remarkable leap in such a short period, and yet how the human cell accomplished this feat was still unknown. Three years after Mendel, in 1869, Friedrich Miescher, a Swiss physician and biologist, moved the goal post forward again. He surmised that some kind of protein must be responsible for Mendel’s findings. He convinced a fellow physician to send him the used, pus-soaked bandages of dead patients in their original form. He then washed the bandages and isolated and separated out the microscopically visible White Blood Cells, describing the dark nucleus of the cells and a material he called “nuclein.”

Thinking this was the protein he was looking for, he analyzed it, and was surprised to discover a much higher content of phosphorus and a resistance to proteolysis, both suggesting this was not protein but some new substance. As he recorded at the time, “It seems probable to me that a whole family of such slightly varying phosphorous-containing substances will appear, as a group of nucleins, equivalent to proteins.”

Thirteen years would pass before German biologist Walther Flemming, using aniline dyes, described what was now termed “chromatin” as a fibrous network whose chromosomes split along their length during a process he called “mitosis.” The split chromosomes partitioned into daughter cells, suggesting the chemical means for the molecular extension of “heredity.”

Another six years would pass before German embryologist Theodor Boveri, in 1888, laid out a theory of “generational continuity” through germ cell multiplication. He used an optical microscope to examine roundworm eggs that possessed just 2 chromosomes. This made the task of following the passage of material along growing balls of cells feasible.

Walter Sutton, an American farm boy from Kansas who played basketball for its inventor, James Naismith, and struggled to get patents for his oil pump design while working in the Kansas oil fields, was not the likely next step in molecular genetics. But four years after Boveri’s publication, Sutton, working on a Master’s degree at Kansas University using the very local “lubber grasshopper,” teased out its 11 very distinctive chromosomes, and noted in 1902 that “Chromosomes are the basis of all genetic inheritance. One chromosome is instrumental in sex determination. Each chromosome is independent of the others.”

A decade later, in his famous “Fly Room” at Columbia University, evolutionary biologist Thomas Hunt Morgan was able to demonstrate conclusively that inheritance was determined by genes situated on the chromosomes. To do this, he bred Drosophila flies, examining their 4 pairs of chromosomes, including a pair of sex chromosomes. His paper in Science in 1911, following specific traits like eye color, proved that certain traits were embedded in genes on the sex chromosomes, while others resided on standard chromosomes. While the physical locations were now known, their chemistry was still a mystery.

For chemistry, the world turned to a Russian physician turned chemist, Phoebus Levene. He was a prolific researcher with over 700 scientific publications. Among his discoveries, by 1919, were these firsts:
1. The nucleotide is built from a phosphate, a sugar, and a base.
2. There are two different sugars – deoxyribose (DNA) and ribose (RNA).
3. The nucleotide chemicals of DNA are connected in linear fashion.

Thus we had the chemistry, if not the 3D structure. As biographers noted: “Neither Levene nor any other scientist of the time knew how the individual nucleotide components of DNA were arranged in space; discovery of the sugar-phosphate backbone of the DNA molecule was still years away.”

Genetics was far from the only science and technology focus at the time. Corporations and industrialization entered the scientific field in force at the end of the 19th century. Chief among them was Bell Labs, situated in the United States at 24th Street on the West Side Highway overlooking the Hudson River. It would grow into the world’s largest industrial research laboratory, with over 300 highly skilled employees. This was the opportunity scientific engineers and entrepreneurs faced – enormous risk and enormous promise.

But through the century that would follow, many of the scientific breakthroughs originated in Europe. Consider the X-ray. Its discovery is attributed to Wilhelm Conrad Röntgen (Roentgen), a mechanical engineer who held the chair of Physics at the University of Würzburg. It was in a lab at his university that he was exploring the properties of electrically generated cathode rays in 1895.

He created a glass tube with an aluminum window at one end. He attached electrodes to a spark coil inside the vacuum tube and generated an electrostatic charge. On the outside of the window opening he placed a piece of cardboard painted with barium platinocyanide to detect what he believed to be “invisible rays.” With the charge, he noted a “faint shimmering” on the cardboard. In the next run, he put a lead sheet behind the window and noted that it blocked the ray-induced shimmering.

Not knowing what to call the rays, he designated them with an “X” – and thus the term “X-ray.” Two weeks later, he convinced his wife to place her hand in the line of fire, with the cardboard behind it. The resultant first X-ray image (of her hand) led her to exclaim dramatically, “I have seen my death.” A week later, the findings were published under the title “Ueber eine neue Art von Strahlen” (On a New Kind of Rays).

William II, German Emperor and Prussian King, was so excited that he rushed the physicist and his wife to his castle in Potsdam for a celebrity appearance and lecture on these “invisible rays.” The New York Times, at 46th Street, less than a mile northeast of Bell Labs, was considerably less excited when it reported on January 19, 1896 on the lecture and Roentgen’s “alleged discovery of how to photograph the invisible,” labeling the scientist “a purveyor of old news.” The facts followed: “Emperor William had Prof. Roentgen to rush from Würzburg to Potsdam to give an illustrated lecture to the royal family on his alleged discovery of how to photograph the invisible.”

But one week later, on January 26, the paper had a change of heart, writing: “Roentgen’s photographic discovery increasingly monopolizes scientific attention. Already numerous successful applications of it to surgical difficulties are reported from various countries, but perhaps even more striking are the proofs that it will revolutionize methods in many departments of metallurgical industry.”

By February 5, 1896, the paper was all in, conceding that the “Roentgen Ray” and the photo of wife Anna’s hand had “nothing in common with ordinary photographs.” The Times used the term “X-rays” but never revisited the subject until Roentgen’s death in 1923, when its obituary called his work “one of the greatest discoveries in the history of science.” In 1901 Roentgen received the first Nobel Prize in Physics. As for the profiteering spirit of the day, the German academic never sought a patent on his discovery, feeling that to do so would be unethical. He donated the 50,000 Swedish krona prize to the University of Würzburg.

Eight years after his death, imaging was once again in the news. In 1931, electrical engineer Ernst Ruska and physicist Max Knoll revealed their invention, the electron microscope. The German pair had recognized that the wavelength of electrons was smaller than that of light waves by a factor of 100,000, raising the prospect that individual atoms could be imaged. Almost immediately, the cell’s intricate system of organelles became visible. And as form follows function, the reverse is true as well. The door to the inner workings of cells had now been opened. By 1971, the CT scanner would arrive, and 6 years later Magnetic Resonance Imaging, or MRI, began to appear at hospitals across the nation.

The cell, and more specifically its nucleus, remained the focus as chemists and imagers collaborated to complete the story of heredity. In 1950, a forty-five-year-old Austro-Hungarian-born American biochemist, Erwin Chargaff, stepped into the void with his perceptive analytic mind. Armed with the knowledge that individual “nucleotides” included three distinct elements (a nitrogen-containing nucleobase, a phosphate, and a sugar), and that there were four different nucleobases (adenine, thymine, cytosine, and guanine), he was able to prove that the percentage of adenine (A) molecules roughly matched the percentage of thymine (T), and that the amount of cytosine (C) matched guanine (G). From these facts, he deduced that these elements must connect to each other. How nucleotides arranged themselves in space had begun to reveal itself. As important, Chargaff noted that nucleotide composition varied in different species, and therefore might be the starting point for species identity. Chromosome chemistry might house various nucleotide “genes” on their visible nuclear chains. By the time Chargaff died in 2002 at the age of 96, the full story had been exposed. But in 1950, Chargaff’s insights were revolutionary.
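Chargaff’s parity rule can be demonstrated in a few lines of code. This is a sketch only, using a short made-up sequence; in any real sample of double-stranded DNA, the two strands are complementary, so the totals for A and T, and for C and G, come out equal.

```python
from collections import Counter

def base_percentages(dna):
    """Percent of each nucleobase (A, T, C, G) in a DNA sequence."""
    counts = Counter(dna.upper())
    total = sum(counts[base] for base in "ATCG")
    return {base: round(100 * counts[base] / total, 1) for base in "ATCG"}

# Illustrative only: one strand plus its complement stands in for
# the two paired strands of a real DNA molecule.
strand = "ATGCGTACCGGATTACG"
complement = strand.translate(str.maketrans("ATCG", "TAGC"))

print(base_percentages(strand + complement))  # %A == %T and %C == %G
```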

While the components of DNA and its linear nature were now known, its all-important 3D structure remained a mystery. The four nucleobases came in two varieties: purines, with a double-ring structure, include adenine (A) and guanine (G), while pyrimidines, with a single-ring structure, include cytosine (C) and thymine (T). With Chargaff’s findings, various researchers began to explore how a purine and a pyrimidine might physically attach to each other. As for the sugar and phosphate molecules, they increasingly appeared to be the vertical outside rails, or backbone, of a ladder-like molecule that was taking shape.

The race was on as scientists on both sides of the Atlantic rushed to finally unlock what was now being called deoxyribose nucleic acid (DNA). In Britain, at Cambridge’s Cavendish Laboratory, the lead went to a young pair of researchers: a 36-year-old physicist and X-ray crystallographer named Francis Crick, and a 24-year-old American microbial geneticist, James Watson. Their competition was America’s premier chemist at the time, Linus Pauling, who believed DNA was a three-stranded molecule.

Watson and Crick believed it was two strands and, using cardboard cutouts, created a possible model. In 1952, they convinced a team working on the same problem at King’s College London to critique their model. The team included British biophysicist Maurice Wilkins and chemist/crystallographer Rosalind Franklin. Franklin had recently perfected the creation of high-resolution X-ray images of DNA. Her pictures revealed a helical, corkscrew-like shape that was incompatible with the model Watson and Crick had built. With this knowledge, the pair went back to the drawing board and rebuilt the model.

Days after its final construction, they rushed to print an 800-word article with a single illustration. Printed in Nature on April 25, 1953, it was titled “Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid.” The accompanying illustration, simply revealing the now famous “double helix,” was created by Crick’s wife, artist and painter Odile Speed Crick. In 1962, the Nobel Prize was awarded to Watson, Crick, and Wilkins. What about Rosalind Franklin? Tragically, she was diagnosed with ovarian cancer in 1956. After two years of treatments, and a convalescence that included time as a guest of Odile and Francis Crick, she died in England on April 16, 1958. Two explanations for the Nobel snub are offered today. First, the award is not offered posthumously. Second, an individual award is limited to three people. As for Linus Pauling, he ended up with two Nobels (one of only 5 individuals to achieve this feat) – the Nobel Prize in Chemistry in 1954, and the Nobel Peace Prize for 1962.

Watson and Crick both lived long lives. Crick lived into his 80s alongside his wife Odile. In his later years, he teamed up with geneticist Sydney Brenner, who himself won a Nobel Prize “for discoveries concerning genetic regulation of organ development and programmed cell death.” Crick was hardly ever photographed without a smile on his face, most often in the company of his wife. He died in 2004, and she in 2007. As for Watson, he died two weeks ago, on November 6, 2025, at the age of 97. He is not particularly remembered by colleagues as a “happy camper.” Harvard biologist Edward O. Wilson apparently was not alone in offering this assessment: “The most unpleasant human being I have ever met.”

During their lifetimes, the story of DNA was largely fleshed out. The code used for the creation of proteins was uncovered. A three-base sequence formed a “codon.” Codons were replicated in a line in a single strand of RNA. Messenger RNA (mRNA) travels through the cytoplasm to ribosomes, where each codon, specific for one amino acid, lines the amino acids up and connects them to create a specific protein. There are 64 different codons, including some duplicates and 3 “stop codons” to end a chain. And with time (some might say “just in time”), mRNA would be recruited as a vaccine to instruct the body to create the spike protein of Covid-19, instigating an immune reaction in a worldwide population in the grip of a pandemic. That discovery would itself earn a Nobel Prize in 2023 for biochemist Katalin Karikó and her colleague, physician-immunologist Drew Weissman.
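The codon logic described above can be sketched in a few lines of Python. The table here is deliberately abbreviated (the full genetic code has 64 entries); the three-base reading frame and the stop-codon behavior are the point.

```python
# Abbreviated codon table for illustration; the full code has 64 codons,
# with redundant entries and three stop codons (UAA, UAG, UGA).
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UGC": "Cys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    """Read the mRNA three bases at a time until a stop codon ends the chain."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "???")  # flag unknown codons
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return "-".join(protein)

print(translate("AUGUUUGGCUGCUAA"))  # Met-Phe-Gly-Cys
```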

“Optimism through technologic progress” has been the guiding motto through the 20th century. The scientific community first knew that cells existed, then that germs could threaten, and that cleanliness and sterilization could help prevent disease. They also possessed a growing appreciation that the body had some mysterious capability to fight diseases to which it had previously been exposed. And in this new century of discovery, they were primed to learn more. Their knowledge of life control systems, especially the circulatory and neurologic systems, had grown. They were also increasingly aware of a complex collection of specialized blood cells constantly on the move, though at first with little understanding of their purpose. That blood, and the cells contained within it, were somehow useful had been proven out by experiments transfusing blood in animal studies. The challenges in doing this – blood clotting and storage for reuse – were being addressed. But one problem, occasional recipient reactions, at times deadly, was troublesome.

Experimenters knew instinctively that the problem must involve some mismatch between donor and recipient. In 1901, an Austrian biologist named Karl Landsteiner was able to recognize protein and carbohydrate appendages (or antigens) on red blood cell surfaces which were physiologically significant. He defined the main blood antigen types – A, B, AB and O – and proved that success in human blood transfusion would rely in the future on correctly matching the blood types of donors and recipients. In 1923, he and his family emigrated to the U.S., where he joined the Rockefeller Institute and in 1937 defined the Rh antigen (the “+” and “–” familiar to all from their blood types). For his efforts, he received the Nobel Prize in Physiology or Medicine.
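Landsteiner’s insight reduces to a simple rule: donor red cells are safe only if they carry no A, B, or Rh antigen that the recipient’s immune system would see as foreign. Here is a minimal sketch of that matching logic, covering red-cell compatibility only and ignoring plasma antibodies and minor antigens:

```python
# Antigens present on red blood cells for each ABO/Rh type.
ANTIGENS = {
    "O-": set(),         "O+": {"Rh"},
    "A-": {"A"},         "A+": {"A", "Rh"},
    "B-": {"B"},         "B+": {"B", "Rh"},
    "AB-": {"A", "B"},   "AB+": {"A", "B", "Rh"},
}

def compatible(donor, recipient):
    """Safe if every donor antigen is one the recipient already carries."""
    return ANTIGENS[donor] <= ANTIGENS[recipient]

print(compatible("O-", "AB+"))  # True: O- is the universal red-cell donor
print(compatible("A+", "B+"))   # False: the A antigen would provoke a reaction
```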

As we have seen, human-to-human blood transfusions, from healthy to wounded servicemen, proved life-saving in WW I. But the invention of “blood banks” would not arrive until 1937. Credit goes to Bernard Fantus, a physician and Director of Therapeutics at Cook County Hospital in Chicago. A year earlier, he had studied the use of preserved blood by the warring factions in the Spanish Civil War. He was convinced that collecting donated containers of blood, correctly typed and preserved, could be life-saving for subsequent well-matched recipients. His daughter, Ruth, noting that the scheme of “donors” and future “borrowers” resembled a bank, is credited with the label “blood bank.” In his first year of operation, Fantus’s “blood bank” averaged 70 transfusions a month.

Additional breakthroughs, coming in response to the demands of WW II, allowed albumin to be separated from plasma in 1940. Techniques to freeze-dry and package plasma for rapid reconstitution became essential to Navy and Army units in combat. Glass 400 cc bottles were finally replaced by durable and transportable plastic bags in 1947. And blood warming became the standard of care by 1963. By 1979, the shelf life of whole blood had been extended to 35 days through the use of an anticoagulant preservative, CPDA-1, and refrigeration. Platelets were more susceptible to contamination and were generally preserved for only 7 days. Blood components were also screened for a wide variety of infectious agents, including HIV beginning in 1985.

Karl Landsteiner in 1930 became the first of 28 members of the newly formed American Association of Immunologists (AAI) to be awarded a Nobel Prize, for his discovery of human blood groups. Importantly, the resultant recipient/donor RBC matching system all but eliminated deadly reactions from mismatched transfusions. But they still occasionally occurred, and no one knew why until 1958. That was when French scientist Jean Dausset was able to prove that the incompatibility was coming from blood antigens attached to leucocytes, or White Blood Cells (WBCs), not RBCs. They were subsequently named “Human Leucocyte Antigens,” or HLAs. Studies that followed revealed that they were remarkably specific to each individual, resulting in what was termed the “HLA fingerprint.” Their importance in human organ transplantation would become revolutionary.

Discoveries around the human immune system continued as the electron microscope revealed a range of cells and their functions and roles in fighting specific invaders like HIV. In general, it was clear that WBCs as a whole had evolved in humans to recognize, disable, and dispose of the bad guys.

The field of Immunology is little more than a half-century old and still shrouded in a remarkable degree of mystery. Even describing what we do know is a complex challenge. One way to proceed is to climb the scaffolding provided by the wide array of Nobel Prizes in Physiology or Medicine over the last half of the 20th century.

“Immunity” has Latin roots in the word immunitas, which in Roman times denoted an exemption from the burden of taxation, granted to worthy citizens by their Emperor. Protection from disease is a bit more complicated than that, and offers our White Blood Cells (WBCs) a starring role. These cells are produced in the bone marrow, then bivouacked in the thymus and spleen until called into action.

They are organized in specialized divisions. WBC macrophages are the first line of defense, literally gobbling and digesting bacteria and damaged cells through a process called “phagocytosis.” B-cells produce specific proteins called antibodies, designed to learn and remember a specific invader’s chemical makeup, or “antigen.” They can ID offenders quickly and neutralize target bacteria, toxins, and viruses. And T-cells are specially designed to go after viruses hidden within the human cells themselves.

The first ever Nobel Prize in Physiology or Medicine went to German scientist, Emil von Behring, eleven years after he demonstrated “passive immunity.” He was able to isolate poisons or toxins derived from tetanus and diphtheria microorganisms, inject them into lab animals, and subsequently prove that the animals were now “protected” from tetanus and diphtheria infection. These antitoxins, liberally employed in New York City, where diphtheria was the major killer of infants, quickly ended that sad epidemic.

Where Jenner’s, and later Pasteur’s (anthrax), weak exposures prevented subsequent disease, von Behring’s antitoxin cured those already infected. More than that, it unleashed the passion and excitement of investigators (which continues to this day) to understand how the human body, and specifically its cellular and chemical apparatus, pulls off this feat.

The body’s inner defense system began to reveal its mysteries in the early 1900s. Brussels scientist Jules Bordet, while studying the anthrax bacterium, not only identified protein antibodies produced in response to anthrax infection, but also a series of companion proteins. This cascade of proteins, linked to the antibodies, enhanced their bacterial killing power. In 1919 Bordet received his Nobel Prize for the discovery of this series of “complement” proteins, which when activated help antibodies “drill holes” through bacterial cell walls and destroy them.

Scientists now focused as well on the invaders themselves, termed as a group “antigens,” including microorganisms and other foreign bodies. How did the body know the threat and respond? During WWII, the Allies rapidly developed a range of protective vaccines and mandated that all soldiers be vaccinated. Their schedules were eventually adapted for peacetime, and required for entry into public schools.

Victories against certain pathogens were hard fought. In the case of poliovirus, which had a predilection to invade motor neurons, especially in children, and cause paralysis, success required a remarkable collaboration between government, academic medical researchers, and local community-based doctors and nurses. The effort involved simultaneous testing in children of two very different vaccines.

One was a killed virus administered by injection. This vaccine, developed by Jonas Salk at the University of Pittsburgh, arrived with great fanfare in 1955. It was both safe and effective, but required skilled clinicians to administer it to over 2 million American children.

The alternative vaccine, developed by Albert Sabin at the University of Cincinnati, was made available five years later in 1960. It was a weakened (attenuated) but still live virus that could be administered orally. Its disadvantage was that, in rare cases, it could actually result in polio. But its distribution, especially in impoverished nations, made great practical sense. Both programs were fully funded by the non-profit National Foundation for Infantile Paralysis, a unique philanthropic arm created by FDR, a victim of the disease himself.

Current vaccine skeptics like RFK Jr. argue against historical fact. One need only examine graphs of annual caseloads for diseases like diphtheria and polio, before and after the introduction of vaccines, to appreciate the dramatic preservation of life that resulted from intentional but safe exposure to killed or attenuated vaccines.

In this same era, scientific theorists like Danish immunologist Niels Jerne were proven right. But it took three decades for the scientific community to agree. His 1984 Nobel Prize citation read, “He asserted that all kinds of antibodies already have developed during the fetus stage and that the immune system functions through selection. In 1971, he proved that lymphocytes teach themselves to recognize the body’s own substances in the thymus gland… An immunological reaction arises when an antigen disturbs the system’s equilibrium.”

By then, Jerne’s WBCs had been termed “B lymphocytes” by an Australian scientist named Macfarlane Burnet, a 1960 Nobel laureate who also saw antibodies as already established in the fetus. These individuals were part of a long tradition of medical science imagineers. For example, Robert Koch’s main assistant was Paul Ehrlich, who imagined the inner workings of the cell this way: “In his eyes, cells were surrounded by tiny spike-like molecular structures, or ‘side-chains’, as he called them, and that these were responsible for trapping nutrients and other chemicals, and for drawing them inside the cell.”

The “side chains” were in fact antibodies, large protein molecules made up of two long and two short chains. It was later proven that roughly 80% of the four chains is identical in all antibodies. The remaining 20% varies, forming a unique antigen-binding site for each and every antigen. Almost immediately scientists began to wonder whether they could reconfigure these large proteins to create “monoclonal antibodies” to fight cancers like melanoma.

Imagination has occasionally carried the day. But more often direct problem solving uncovers answers. That was the case when French scientist Jean Dausset described the “HLA fingerprint.” One question always leads to another; in this case, “Why do HLAs exist?” What was eventually uncovered was that certain microorganisms (viruses) take up residence inside human cells, gaining protected status. To deal with the problem, humans possess a specialized WBC – the “T-cell.” We are familiar with T-cells since they have been much publicized in our epic battle with the HIV virus. But for the T-cell to destroy an intracellular virus, it must “recognize and respond” to two messaging signals: first, the virus’s antigen; second, a permissive signal informing it that the virus is housed in a host cell that deserves preservation. The fingerprint HLA is that signal.

The downside, of course, is that the body’s own cells under certain circumstances can trigger an overreactive immune response. Most of us have experienced a bee sting or peanut allergy gone bad. This alarming cascade of symptoms, called “anaphylaxis,” derives from the Greek (ana-, against; phylaxis, protection), and clearly involves HLAs. The same is true of auto-immune diseases, which may involve genetic variants of HLAs. Finally, successful organ transplantation relies on compatibility of donor and recipient HLAs.

So, to sum it all up: Immunology is a mysterious, complex, and evolving field of study. Predators – everything from a microorganism invader to a rogue cancer cell to a wooden splinter left unaddressed – can be fatal to the host. But to respond, the host must first identify the threat, then activate a specific and effective response, without inadvertently injuring itself. As our understanding has grown, harnessing the immune system to chase down metastatic cancer cells, to suppress deadly rejection of a transplanted organ, or to self-modify to avoid auto-immune destruction is clearly within our grasp in the not-too-distant future.

As importantly, the continued mining of cell theory and the evolution of tissue culture are now allowing progress in cancer research, unlocking the mysteries of immunology and the workings of virology, and enabling the creation of a range of life-saving vaccines, from polio to the mRNA vaccines for Covid – and much, much more.

It is reasonable to complete this whirlwind survey with a brief tribute to the mRNA vaccine used against Covid, which spared an estimated 20 million lives during the recent global pandemic. The scientific origins of the vaccine date back 60 years to the discovery of messenger RNA in 1961, but its creation and distribution, which required just weeks as opposed to years, came “just in time.”

As the NIH explained, “mRNA vaccines inject cells with instructions to generate a protein that is normally found on the surface of SARS-CoV-2, the virus that causes COVID-19. The protein that the person makes in response to the vaccine can cause an immune response without a person ever having been exposed to the virus that causes COVID-19. Later, if the person is exposed to the virus, their immune system will recognize the virus and respond to it. mRNA vaccines are safe and cannot alter your DNA, and you cannot get COVID-19 from the vaccine. mRNA vaccines may seem to have arrived quickly, but this technology is built on decades of scientific research that have made these vaccines a reality.”

We have come to the end of our time. But before we go to our “top 10,” we must acknowledge the flurry of discoveries at the end of the 20th century. With the introduction of living cell cultures, the use of the electron microscope, and the use of AI to reveal protein design, many of the inner workings of the cell have been revealed. Simultaneously, the field of biochemistry has matured alongside the miracle of genetics. Side by side, in direct view, came fertilization, embryonic development, multi-potential stem cells with timed specialization, and organ development. Ultimately, the Watson and Crick description (building on the work of Rosalind Franklin and Maurice Wilkins) of the DNA double helix in 1953 opened the doors, a half century later, to the sequencing of the human genome in 2003, after a 13-year race to the finish line by competitors, then collaborators: NIH lead Francis Collins and Celera Corporation CEO J. Craig Venter.

And now, the promised “Top 10”:

10. The Thermometer & Tools to Measure

9. The X-ray & Imaging

8. Anesthesia

7. Sanitation & Public Health

6. Evolution/Heredity

5. DNA & mRNA

4. Germ Theory/Antibiotics

3. Cell Theory

2. The Circulatory System

1. Immunology