Mike Magee MD
It is an impossible task to rank the top ten medical discoveries that have improved human health and welfare over the centuries. Nonetheless, I will provide a countdown list at the end of the third and final session of this three-part course.
Taking on this challenge has been filled with twists and turns. As always, I choose to begin by providing some context for what lies ahead. In sharing this story of human development, you will meet a range of discoverers. They are artisans and dreamers and determined spirits. The discoveries they have made have no geographic bounds, crossing continents on their own time schedules and traveling well, though not without bumps in the road. In most cases, the discoveries have served as their own reward, and quite often have not been accompanied by fame and fortune, at least not immediately, or even within a single lifetime. Most of the discoveries have built sequentially, one upon another, with refinements and new insights providing course corrections. Those that have survived are timeless truths, unless overtaken by future insights and discoveries.
One of the original challenges has been how exactly to organize the material. In general, these medical discoveries – many more than ten – fall into one of three categories: Basic Science, Public Health, or Molecular Medicine. What they all share in common is a persistent human struggle to see, to feel, to measure, and ultimately to understand. This urge to make sense of the world at times crosses over into the realm of science fiction – imaginary journeys into the land of "What if?"
In the comic world, science and science fiction confidently blend and blur. In ranking the most powerful superheroes of all time, comic book aficionado Maverick Heart (aka aeromaxxx 777) listed "Flash" as the clear winner, stating "Not only does he have super-speed, but once he reaches terminal velocity, he has shown other incredible powers. During an attempt to measure his top speed, he strained every muscle in his body to run at about 10 Roemers, which is 10 times the speed of light."
"Roemers?" That's a reference to Ole Roemer (1644-1710), a Danish uber-scientist whose seminal discovery of the "speed of light" was celebrated in 2016, on its 340th anniversary, with a Google doodle. Details aside (the timing of the eclipses of Jupiter's moon Io was measured from different points in the Earth's orbit around our sun), Roemer detected a discrepancy in the measurements of the eclipses which amounted to 11 minutes. He attributed the lag to a finite speed of light, which he calculated at roughly 140,000 miles per second, not infinite as was commonly thought at the time.
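As a rough sanity check on those numbers (my illustration, not Roemer's own method), suppose the 11-minute lag corresponds to light crossing the Earth-Sun distance of roughly 93 million miles:

```python
# Illustrative arithmetic only (an assumption for this essay, not Roemer's calculation):
# treat the 11-minute lag as the time light takes to cross the Earth-Sun distance,
# taken here as roughly 93 million miles.
earth_sun_distance_miles = 93_000_000
lag_seconds = 11 * 60

speed_of_light = earth_sun_distance_miles / lag_seconds
print(f"{speed_of_light:,.0f} miles per second")  # ~141,000 miles per second
```

The result lands within shouting distance of the 140,000 miles per second Roemer reported.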
Roemer is a reasonable starting point for our story because he was the mentor of choice for scientists of his day, and one of them was a 22-year-old dreamer on the run named Daniel Gabriel Fahrenheit (1686-1736). The road that led to that meeting, however, was bumpy. At age 15, Fahrenheit lost both his parents to an accidental mushroom poisoning. His guardians then arranged a four-year merchant trade apprenticeship in Amsterdam. But when he completed the program at the age of 20 in 1706, he escaped an agreed-upon further commitment to the Dutch East India Company, and his guardians sought an arrest warrant.
When he arrived two years later at the doorstep of Ole Roemer, by then Mayor of Copenhagen, he was seeking business guidance and a pardon from further legal action. He achieved both. Roemer explained that there was intense interest at the time in high-quality instruments that could measure temperature. For nearly a century, everyone from Galileo to Huygens to Halley had been working on the problem. Roemer himself had devised a thermometer while convalescing from a broken leg, but there was great room for technical improvement.
The challenges were threefold – physical construction, the creation of a standard measurement scale, and reproducible accuracy. Fahrenheit took up the challenge, spending the next four years refining his glass-blowing skills, discovering that mercury was a more reliable reference liquid than alcohol, and realizing he could improve on Roemer's scale – which he did, publishing it as the Fahrenheit scale in 1717 in Acta Eruditorum.
The famous scale was pegged to three different reference points. The first was the point at which a mixture of ice, water and salt reached equilibrium, which he identified as 0 degrees. The second was the temperature at which ice was just beginning to form on still water. This would be 32 degrees. And the final measure was the temperature when the thermometer was placed under the arm or in the mouth. This became 96 degrees. The span between 0 and 96 allowed Fahrenheit to create a dozen divisions, each subdivided into 8 parts (12 x 8 = 96).
Just two decades later, in 1745, another scientist named Anders Celsius arrived on the scene with a new scale. It would be a slow burn, taking approximately two centuries to officially supersede the Fahrenheit scale everywhere in the world except (not surprisingly) the United States. Built on a scale of 0 to 100, the Celsius scale is also called the centigrade scale.
American scientific hubris has forced America's math students to memorize conversion formulas and engage in what management guru Tom Peters would call unnecessary "non-real work."
A down-and-dirty approximation: (F - 30) / 2 ≈ C.
Or, more accurately: (F - 32) / 1.8 = C.
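A minimal sketch of the two conversions in code (purely illustrative; the function names are mine):

```python
def f_to_c_rough(f):
    """Down-and-dirty mental math: subtract 30, then halve."""
    return (f - 30) / 2

def f_to_c_exact(f):
    """Exact conversion: subtract 32, then divide by 1.8."""
    return (f - 32) / 1.8

# At body temperature the shortcut is close, but not exact:
print(round(f_to_c_rough(98.6), 1))  # 34.3
print(round(f_to_c_exact(98.6), 1))  # 37.0
```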
What this opening tale illustrates, however, is that scientific discoverers have determined spirits, are often multi-talented, tactile problem solvers, and dream big. Next case in point: William Harvey (1578-1657). Without modern tools, he deduced from inference rather than direct observation that blood was pumped by a four-chambered heart through a "double circulation system," directed first to the lungs and back via a "closed system," and then out again to the brain and bodily organs. In 1628, he published all of the above in an epic volume, De Motu Cordis.
Far fewer know much about Stephen Hales, who in 1733, at the age of 56, is credited with discovering the concept of "blood pressure." A century later, the German physiologist Johannes Müller boldly proclaimed that Hales' "discovery of the blood pressure was more important than the (Harvey) discovery of blood."
Modern day cardiologists seem to agree. Back in 2014, the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure reported that “With every 20 mm Hg increase in systolic or 10 mm Hg increase in diastolic blood pressure, there is a doubling risk of mortality from both ischemic heart disease and stroke.”
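Read literally, that statistic describes an exponential relationship between pressure and risk. A small illustrative sketch follows; the 115 mm Hg baseline is my assumption for illustration, not a figure from the report:

```python
# Illustrative reading of the quoted statistic: mortality risk doubles for
# every 20 mm Hg of systolic pressure above a baseline. The 115 mm Hg default
# is an assumption for illustration only, not taken from the report.
def relative_mortality_risk(systolic_mmHg, baseline_mmHg=115):
    return 2 ** ((systolic_mmHg - baseline_mmHg) / 20)

print(relative_mortality_risk(135))  # 2.0 -- one doubling
print(relative_mortality_risk(155))  # 4.0 -- two doublings
```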
But comparisons are toxic. No need to diminish Harvey, who correctly estimated human blood volume (10 pints, or roughly 5 liters), the number of heart contractions, the amount of blood ejected with each beat, and the fact that blood was continuously recirculated – and did all this 400 years ago. But how that function came to be measured, and how those measurements were connected to a clinically significant condition like hypertension, is a remarkable tale that spanned two centuries and required international scientific cooperation.
Harvey was born in 1578 and died in 1657, twenty years before the birth of his fellow Englishman, Stephen Hales. Hales was a clergyman whose obsessive and intrusive fascination with probing the natural sciences drew sarcasm and criticism from the likes of classical scholar and sometime friend Thomas Twining. He penned a memorable insult-laced poem in Hales' honor, set in the verdant English village of Teddington, titled "The Boat of Hales."
“Green Teddington’s serene retreat
For Philosophic studies meet,
Where the good Pastor Stephen Hales
Weighed moisture in a pair of scales,
To lingering death put Mares and Dogs,
And stripped the Skins from living Frogs,
Nature, he loved, her Works intent
To search or sometimes to torment.”
The torment line was well justified in light of Hales' own 1733 account of the first recorded measurement of arterial blood pressure, self-described here:
“In December I caused a mare to be tied down alive on her back; she was fourteen hands high, and about fourteen years of age; had a fistula of her withers, was neither very lean nor yet lusty; having laid open the left crural artery about three inches from her belly, I inserted into it a brass pipe whose bore was one sixth of an inch in diameter … I fixed a glass tube of nearly the same diameter which was nine feet in length: then untying the ligature of the artery, the blood rose in the tube 8 feet 3 inches perpendicular above the level of the left ventricle of the heart; … when it was at its full height it would rise and fall at and after each pulse 2, 3, or 4 inches.”
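For modern context, here is a back-of-the-envelope conversion of that 8-foot-3-inch column of blood into millimeters of mercury. The density figures below are standard approximations I am supplying, not numbers from Hales:

```python
# Rough conversion of Hales' observation into modern units (illustrative only).
# Assumptions: blood density ~1.05 g/mL, mercury density ~13.6 g/mL.
column_height_mm = (8 * 12 + 3) * 25.4   # 8 ft 3 in of blood, in millimeters
BLOOD_DENSITY = 1.05                     # g/mL, approximate
MERCURY_DENSITY = 13.6                   # g/mL

pressure_mmHg = column_height_mm * BLOOD_DENSITY / MERCURY_DENSITY
print(f"{pressure_mmHg:.0f} mmHg")       # about 194 mmHg -- a plausible mean
                                         # arterial pressure for a restrained horse
```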
Nearly two centuries later, Hales did have competition for "king of torment." By then scientists had long been aware of the anatomy of the human heart. In 1905, the notion of linking an obstructing, inflatable arm cuff to a manometer proved workable. The inventor was a student of Russia's Imperial Military Medical Academy, Nikolai Sergeevich Korotkoff. He described his discovery this way: "The cuff is placed on the middle third of the upper arm; the pressure within the cuff is quickly raised up to complete cessation of circulation below the cuff. Then, letting the mercury of the manometer fall one listens to the artery just below the cuff with a children's stethoscope. At first no sounds are heard. With the falling of the mercury in the manometer down to a certain height, the first short tones appear; their appearance indicates the passage of part of the pulse wave under the cuff. It follows that the manometric figure at which the first tone appears corresponds to the maximal pressure. With the further fall of the mercury in the manometer one hears the systolic compression murmurs, which pass again into tones (second). Finally, all sounds disappear. The time of the cessation of sounds indicates the free passage of the pulse wave; in other words at the moment of the disappearance of the sounds the minimal blood pressure within the artery predominates over the pressure in the cuff."
Put more simply, he recognized that "a perfectly constricted artery under normal conditions does not emit sounds." It is easy to forget, in the age of semiconductors, photocells and strain gauges, that progress in understanding the human circulatory system took centuries, and international cooperation, to achieve.
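In schematic terms, the auscultatory method Korotkoff describes reduces to watching for the onset and disappearance of sounds as the cuff deflates. A minimal sketch, with a hypothetical sound_at() standing in for the listener's ear:

```python
# Schematic sketch of the auscultatory method, not a clinical algorithm.
# sound_at(p) is hypothetical: returns True if Korotkoff sounds are audible
# while the deflating cuff reads pressure p (in mmHg).
def auscultatory_bp(falling_cuff_pressures, sound_at):
    systolic = diastolic = None
    for p in falling_cuff_pressures:          # e.g. 180, 178, 176, ... mmHg
        if sound_at(p) and systolic is None:
            systolic = p                       # first tones heard: maximal pressure
        if systolic is not None and not sound_at(p):
            diastolic = p                      # sounds just disappeared: minimal pressure
            break
    return systolic, diastolic
```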
It wasn't until 1929 that physicians were actually able to see the system in action. That was when a 24-year-old German medical intern named Werner Forssmann came up with the idea of threading a ureteral catheter through a vein in his own arm to his heart. Nearly two decades earlier, "heart seizures" had first been described (1912), and the correlation between symptoms like chest pain and ECG readings was delineated.
But Werner Forssmann didn't wait for permission from his superiors for his experiment. With junior accomplices, including an enamored nurse and a radiologist in training, he secretly catheterized his own heart, opening an incision over his cubital vein and feeding the sterile catheter (intended for a ureter, not a vessel) up into the heart. He then went, bandaged, to the X-ray suite and injected dye into the vein, revealing for the first time a live image of a beating four-chambered heart. His "reckless action" was eventually rewarded with the 1956 Nobel Prize in Medicine. Another two years would pass before the dynamic Mason Sones, Cleveland Clinic's director of cardiovascular disease, successfully (if inadvertently) imaged the coronary arteries themselves without inducing a heart attack in his 26-year-old patient with rheumatic heart disease.
Roughly a decade after Forssmann's heroics, Charlotte Gruentzig gave birth to another adventurous German boy and named him Andreas Gruentzig. Roughly four decades later, with the financial support of Coca-Cola and a move to Emory School of Medicine, Dr. Gruentzig would be nominated for a Nobel Prize. This was to acknowledge his successful dilation of Dolf Bachmann's obstructed coronary artery in the first-ever balloon angioplasty. Dolf was nice enough to allow a second catheterization at Emory on the 10-year anniversary of the procedure, which revealed a still-patent vessel.
Artistry and bravado stand side by side in many of medicine's most notable discoveries. But the route toward discovery is often surprising. Take the case of Antonie van Leeuwenhoek. He was a 22-year-old Dutch shop manager in 1654, selling linen, yarn and ribbon to seamstresses and tailors. His hobby was toying with cut glass lenses; he constructed a small hand-held magnifying device on a metal platform that could enlarge objects to 275 times their normal size. This was useful to him because it allowed him to assess the stitch quality of the linens he was purchasing for resale.
Curious, he also investigated natural materials, from ash tree sprigs to pond water. In 1676, at the age of 44, he was surprised to see microscopic swimming creatures in the water, which he called "animalcules."
Around the same time, Britain's Robert Hooke peered through his primitive microscope at a slice of cork and described the little boxes he saw as "cellular" – resembling the tiny rooms (cellula) that monks inhabited. When he published that impression in Micrographia in 1665, the label "cell" was born.
Over the years that followed, the design of microscopes evolved and all manner of natural material was described. But 1838 was a turning point. That was the year that Germany's botanist Matthias Schleiden teamed up with zoologist Theodor Schwann. They had noted that both animal and plant species were constructed of cells. In reflecting on cellular life, they declared three principles – the first two proved correct, the third an error.
The principles were:
- A single cell is the basic unit of every organism.
- All organisms are made up of one or more cells.
- New cells arise spontaneously.
It wasn't until seventeen years later that the famed German physician Rudolph Virchow corrected #3. He was known among his students for encouraging them to embrace "attention to detail" by "thinking microscopically." But Virchow is remembered primarily for a famous phrase – "omnis cellula e cellula" (every cell stems from another cell). That may not sound too radical today. But back in 1855, it was revolutionary.
This principle endured and led Virchow to more than one breakthrough discovery. With this insight, Virchow launched the field of cellular pathology. How exactly cells manage to divide and create identical copies of themselves remained to be determined. But he did figure out, before nearly all other scientists, that diseases must involve distortions or changes in cells. From this he deduced that diagnosis and ultimately treatment could now be guided not simply by symptoms or external findings, but also by cellular pathologic changes. And this was more than theory. In fact, Virchow is credited with first describing the microscopic findings of leukemia way back in 1847.
Virchow is also remembered as the preeminent medical educator of his day, one who changed the course of countless American medical leaders who traveled to Germany for instruction. His most notable mark was on the newly launched Johns Hopkins School of Medicine in Baltimore, founded in 1893 by four physicians remembered as the "Big Four": William Welch (pathology/public health), William Osler (internal medicine), William Stewart Halsted (surgery), and Howard Atwood Kelly (obstetrics). Welch served as the first Dean of the school, and Osler, born and bred in Canada, arrived at Johns Hopkins at the age of 40 and established the first residency training program at the school. Prior to their arrival in Baltimore, the two shared a common medical origin story: both had been trained in pathology and cell theory by Virchow.
They were not starting from scratch. The first description of a cell nucleus was made by a Scottish botanist, Robert Brown, in 1833. But over the next half-century, cell scientists busily described various cell organelles without a clear understanding of what they did. What was clear with light microscopy was that cells were bounded by a cellular membrane.
Prior to the launching of Johns Hopkins, most of the attention in the second half of the 19th century was on the nucleus, its division, and cell replication. In 1874, German biologist Walther Flemming first described mitosis in detail. But it wasn't until 1924 that German biologist Robert Feulgen, "following experiments using basic stains such as haematoxylin, defined chromatin as 'the substance in the cell nucleus which takes up color during nuclear staining.'" To this day, the Feulgen reaction "still exerts an important influence in current histochemical studies."
Watson & Crick's description of the DNA double helix was still far in the distance. But in the meantime, other cell organelles were visualized and named, like the Golgi apparatus, identified in 1898 by Italian biologist Camillo Golgi, who used heavy metal stains (silver nitrate or osmium tetroxide) to aid visualization. Mitochondria, like the Golgi apparatus, stretched the limits of light microscopic visualization. But even without visualization, scientists by the 1930s were beginning to deduce the functions of organelles they could barely see, and a few (like lysosomes) that they had never seen but knew had to be there.
Three discoveries in the first half of the twentieth century served as accelerants in intracellular science. The first was the electron microscope. Its invention is generally credited to two German PhD students, Max Knoll and Ernst Ruska, who in 1931 used two magnetic lenses to focus a beam of electrons and achieve much higher magnifications. For this work Ruska shared the 1986 Nobel Prize in Physics. Breakthroughs began to roll out almost immediately. High resolution pictures of mitochondria appeared in 1952, followed by the Golgi apparatus in 1954. The inner workings of the cells displayed movement of vesicles across the membrane and from nucleus to cytoplasm, with structures constantly being constructed and deconstructed. And visualization only got better in 1965 when the first scanning electron microscope went commercial.
The second accelerant was improvement in the science of tissue culture. Attempts to grow living cells in Petri dishes date back more than a century. One critical step along the way occurred in 1882, when a British physician named Sydney Ringer came up with the formula for Ringer's solution (a mixture that included sodium chloride, potassium chloride, calcium chloride and sodium bicarbonate) while experimenting on how to keep a frog heart, removed from the frog, beating while suspended in solution.
Three years later, a budding German embryologist, Wilhelm Roux, who was fixated on “showing Darwinian processes on a cellular level,” was able to keep cells he had extracted from a chicken embryo alive for 13 days. With that, the discipline called cell culture or tissue culture was off and running. But the effective use of the technique of tissue culture to preserve, display and grow living cells over the next few decades was frustrated by the nagging issue of bacterial contamination.
The third accelerant addressed this issue. It was the discovery of penicillin by Alexander Fleming in 1928. Within a few decades it became possible to maintain living, self-replicating cells indefinitely. This introduced ethical concerns. For example, in 1951 Henrietta Lacks, who suffered from cervical cancer, became the uninformed and uncompensated donor of the still-existing HeLa experimental cell line used by investigators. Naturally or scientifically mutated cells could now grow and continue to divide indefinitely, creating an immortalized cell line. And on this rock was built not only the promise of In Vitro Fertilization (IVF), but also the genetic manipulation of human embryos and stem cell lines, both strewn with ethical minefields.
Fleming came from a centuries-long line of healers focused on epidemics and plagues. In 1614, the Middle East and Europe were rocked by an epidemic of Smallpox. After the Plague, its deadly disfigurement made it the most feared pestilence. In the 17th and 18th centuries it appeared and reappeared as a scourge in the Americas. Native tribes were brought to the brink of extinction, first in the Caribbean and South America, and later in North America as well.
The rich and well-to-do were not spared. Ben Franklin's 4-year-old son contracted it and died, months after Franklin had refused to allow a new and still experimental inoculation which carried a 2-3% mortality but conferred lifelong immunity to an infection with a kill rate of 50%. The disease had been with us literally since our inception as a nation. In 1633, 20 settlers from the original Mayflower died from it. Native Americans were especially vulnerable, and white leaders showed little mercy. Gov. John Winthrop of the Massachusetts Bay Colony said "the Lord hath cleared our title to what we possess" – the hand of God if you will.
By the 18th century, all eyes were on a new method of suppressing this disease, called "variolation." It proved a tough sell to independent-minded colonists. Not so for General George Washington. After being whipped on the Canadian border by British forces who had all been immunized against Smallpox, which left Washington's fighting men vulnerable and soon overwhelmed by the disease, Washington made the scratching inoculation with purified crusts from the remnants of Smallpox infection mandatory throughout the Revolutionary Army in 1777.
He had an easier time of it than did Massachusetts-based Puritan minister Cotton Mather 50 years earlier, during a Smallpox outbreak that ultimately claimed over 800 souls. During the crisis, one of Mather's enslaved Africans, named Onesimus, informed him of the practice of inoculation. The disease was commonplace on slave ships at the time, triggering widespread warnings printed and distributed throughout the evolving colony. Mather, with a doctor friend, Zabdiel Boylston, devised an experiment during an outbreak in 1721 that eventually claimed 14% of the population of Boston. Dr. Boylston's young son was among 242 who were inoculated, with only a 2.5% mortality. Rather than hail the success, locals were outraged, seeing the purposeful "infection" of citizens as evil witchcraft. One evening in November, a malfunctioning bomb came crashing through Mather's window. Attached to it was a note that read, "Cotton Mather, you dog, dam you: I'll inoculate you with this; with a Pox to you."
Resistance to the procedure presaged modern-day anti-vaxxers like RFK Jr. But variolation was common in the 1700s in Europe. It had been popularized by Lady Mary Wortley Montague, the wife of Edward, the British Ambassador to Istanbul. While overseas, her brother had died of Smallpox. She witnessed the procedure successfully performed in the Ottoman capital and insisted on immunizing her young son. He survived with only minor symptoms. When she returned, she had British court doctors repeat the procedure on her young daughter, Mary Alice, and memorialized it in a painting which she used as part of a nationwide effort to promote the procedure for the general population. She was less than successful, as was Napoleon's sister, who was an equally avid supporter in France in 1805.
The technique at the time was cumbersome and variable in the materials subjects were exposed to, the tools utilized, and the science underpinning it. It was left to a British physician, Dr. Edward Jenner, to fill in the gaps. The official account is that he had noted that hired milkmaids on his farm appeared to be immune from contracting Smallpox. He himself had been inoculated as a schoolboy in the 1760s. As an adult he surmised that their exposure to a related disease, Cowpox, endemic in the milk cows, might be transferring protection to his employees. As part of an experiment, he successfully inoculated an 8-year-old on the farm, James Phipps, and later exposed him to Smallpox, which he did not contract. Later he also inoculated his 11-month-old son. In modern times, the story of the milkmaids has been challenged for accuracy. What is undeniable is that in 1798, Jenner published "Inquiry into the Variolae vaccinae known as the Cow Pox," which laid out his theories. Eight decades later, Louis Pasteur added to this work by explaining the science of the phenomenon, which he described as vaccination in a nod to Jenner's work with the "Vaccinia" Cowpox virus.
The end of the 18th century, and much of the 19th, was marked in this growing experiment in self-governance by confusion, suffering, inequality, and ignorance. And yet it also laid the groundwork for our future as a nation. As Professor Jim Secord of the University of Cambridge wrote recently, "When we think about the past, we think about history. When we think about the future, we think about science. Science builds upon the past, but also simultaneously denies it."
Choices at the end of the 18th century could be limited. Fleeing white plantation owners from the island of Saint-Domingue, numbering in the thousands, had been left with few options in the late 1700s. If they remained in Saint-Domingue, they would be hunted down by former slaves in open revolt; if they tried to return to France, they would be guillotined as part of the French Revolution. Philadelphia, a thriving port city at the time, appeared the safest choice. The 2,000 or so new immigrants who arrived carried with them hastily gathered belongings and one unwelcome guest: mosquitoes carrying the flavivirus that causes Yellow Fever.
When Yellow Fever broke out in Philadelphia in 1793, shortly after the arrival of a trading ship from Saint-Domingue, the residents had no immunity. Most houses were within 7 blocks of the port. The city fathers imposed a three-week quarantine on the city but were unable to enforce it. By September, 20,000 had fled, including George Washington, who headed for Mount Vernon, and Alexander Hamilton and his wife, who evacuated to New York. Between August 10, 1793 and November 9, 1793, 5,000 citizens perished, roughly 10% of the population.
It is useful to remember the state of medicine at the time. It was sourced from 18th century England. British physicians' understanding of human physiology and pathophysiology was primitive. Operations were brutal and overseen by surgeon/butchers. And perhaps the most sophisticated and learned professionals of the day were the apothecary chemists.
Philadelphia "experts" were at a loss to explain the cause of the sudden outbreak of disease, and were even more confused about how to treat it. The young nation had in total some 200 physicians of varying skill holding actual degrees, the very beginnings of hospitals, and no formal system of training for either doctors or nurses. The city at the time harbored two branches of medicine – the Heroics and the Homeopaths.
Benjamin Rush led the Heroic branch of medicine, believing the problem they now encountered involved a disturbance of the four humors – blood, phlegm, black bile and yellow bile. His solution was the rather liberal and barbaric use of cathartics and bloodletting. For the most severe cases, Rush set up the first "fever hospital" in the new nation at the Pennsylvania Hospital, whose primary initial purpose had been to house the insane.
The publicity-seeking Rush published a timely pamphlet with advice for the public on how best to address the crisis. He wrote, "I have found bleeding to be useful, not only in cases where the pulse was full and quick, but where it was slow and tense. I consider intrepidity in the use of the lancet, at present, to be necessary, as it is in the use of mercury and jalap, in this insidious and ferocious disease."
His intellectual opponent was the father of Homeopathic Medicine, Samuel Hahnemann. His guiding principle was that "likes are cured by likes." This translated into delivering primitive pharmaceuticals that would cause a milder version of the symptoms patients were already suffering. In the body's response, he reasoned, would come a kinder and gentler cure than what Rush was proposing.
Samuel Hahnemann, later remembered as the namesake of Hahnemann Medical College, established in 1848, challenged Rush directly. He was aghast at the Heroics' assaults on human life and limb. He wrote, "My sense of duty would not easily allow me to treat the unknown pathological state of my suffering brethren with these unknown medicines. The thought of becoming, in this way, a murderer or malefactor towards the life of my fellow human beings was most terrible to me."
But neither doctor was especially successful in halting the 1793 catastrophe, let alone explaining its cause. A century later the cause of Yellow Fever, and the value of vaccination as a preventative, would become clear. But establishing good health practices in "the land of the free" was not an easy proposition.
At the turn of the century, the medical community in America had largely caught up, but citizens remained wary of vaccination, especially if it was compulsory. In 1901, a smallpox epidemic swept through the Northeast, and Cambridge, Massachusetts reacted by requiring all adults to receive smallpox inoculations, subject to a $5 fine. Rev. Henning Jacobson, a Swedish Lutheran minister, challenged the law, believing it violated his right to "due process." The case went all the way to the Supreme Court. On February 20, 1905, in a 7 to 2 majority ruling, the Court ruled against Rev. Jacobson in an opinion penned by Justice John Marshall Harlan. He wrote in part: "Real liberty for all could not exist under the operation of a principle which recognizes the right of each individual person to use his own, whether in respect of his person or his property, regardless of the injury that may be done to others."
We will stop here for now. In our second session we will approach the subject of Medical Discoveries from a very human vantage point and in the process uncover the emergence of Public Health and its remarkable success in altering the trajectory of the human experience.