The Second Industrial Revolution

Based on Joel Mokyr, “The Second Industrial Revolution, 1870-1914,” (manuscript, 1998).

The Second Industrial Revolution (1870-1914) was characterized by a close connection between science and technology. Science was developing: chemistry acquired its atomic foundations, for example, and the scientific study of electricity began. The extension of scientific knowledge gave inventors more avenues to investigate. There were also clear instances of feedback from technology to science, such as Lord Kelvin’s investigations of the distortion of messages transmitted through the transatlantic telegraph cable. Science and technology moved forward together during the Second Industrial Revolution, as they have done ever since.

Steel

If iron was the breakthrough material of the First Industrial Revolution, steel was the breakthrough material of the Second. Steel replaced iron in rails, bridges, ship hulls, buildings, tools and machinery, armour plating and artillery. The wholesale replacement of the one with the other occurred because steelmaking had changed dramatically. Steel was produced in massively greater batches, its quality was more consistent, and its price was dramatically lower.

Europeans used several methods to make steel before the Second Industrial Revolution. One method, a survivor from the late sixteenth century, was cementation. Bars of wrought iron (which has a lower carbon content than steel) were layered with powdered charcoal, then sealed in a box and heated for a week or more. The surfaces of the iron bars turned to steel by slowly absorbing carbon from the charcoal, while their interiors remained iron. This product was called blister steel. Its lack of uniformity limited its value.

The clockmaker Benjamin Huntsman developed the crucible steel process, which was in use by 1740, to obtain better quality steel for watch springs. Blister steel and iron were placed in a clay crucible, which was then heated in a coke fire, achieving temperatures high enough to melt the metals completely. Fluxes were added to draw out impurities. The process produced steel of uniform quality, but in small volumes: each crucible yielded about 15 kilograms of steel.

Further progress was not made until the Second Industrial Revolution, when Henry Bessemer found a way of making steel in much larger batches. Pig iron has a high carbon content and contains many impurities, both of which contribute to its brittleness, but pig iron can be converted to steel by removing some of the carbon and as many of the impurities as possible. Bessemer’s process was a way of accomplishing this conversion. The pig iron was first raised to such a high temperature that it melted. High-pressure air was then blown through the molten metal, providing the oxygen needed to oxidize (burn away) carbon and impurities such as silicates. The burning carbon became a fuel that further raised the temperature of the molten metal, promoting additional oxidation and generating still more heat. The oxides either dissipated as gases or formed slags that could be easily separated from the metal. The difficulty with Bessemer’s process was controlling it: the oxidation had to be stopped when the correct amount of carbon remained in the metal.
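
In modern notation (not yet available to Bessemer, and much simplified here), the two main reactions in the converter were roughly:

    2 C + O₂ → 2 CO (carbon burns off as a gas, releasing heat)
    Si + O₂ → SiO₂ (silicon oxidizes and passes into the slag)

The carbon monoxide burned off in the flame at the converter’s mouth, while the silica combined with other oxides to form the slag.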

Robert Forester Mushet solved the problem of control. He continued the oxidation until almost all of the carbon and impurities were gone, and then added spiegeleisen (an alloy of iron, carbon and manganese) to correct the carbon and trace element content. His finished product was more malleable and more consistent than Bessemer’s. Bessemer began production using Mushet’s process in 1858 in Sheffield, which would become famous around the world for the quality of its steel.

Bessemer initially made steel from Swedish pig iron, which was attractive because Swedish iron ores were low in phosphorus, an impurity that could not yet be removed from molten pig iron. Sidney Gilchrist Thomas solved this problem in 1878 by lining the Bessemer converter with dolomite rather than clay: the phosphorus binds with the lime in the dolomite to form a slag. This practice allowed steel to be made even from high-phosphorus iron ores.
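
In modern notation, and again much simplified, the chemistry shows why the lining mattered:

    4 P + 5 O₂ → 2 P₂O₅ (phosphorus oxidizes during the blow)
    P₂O₅ + 3 CaO → Ca₃(PO₄)₂ (the oxide binds with lime to form a calcium phosphate slag)

An acidic clay lining would have been eaten away by this basic slag; the dolomite lining withstood it and supplied the lime.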

Steel could also be made with the open hearth method, which used the regenerative furnace designed by Carl Wilhelm Siemens. In this furnace the exhaust gases passed through a network of bricks after leaving the combustion chamber, transferring much of their heat to the bricks. The gas flow was then reversed, with the incoming air and fuel gases passing over the same bricks, and absorbing their heat, on their way to the combustion chamber. The pre-heating of the air and fuel made for efficient combustion, so that the furnace reached very high temperatures. The open hearth method itself, credited to Siemens and the French engineer Pierre-Émile Martin, involved melting low-carbon wrought iron together with high-carbon pig iron to get the right carbon content. Scrap steel could be recycled by adding it to the mix.

The Germans quickly switched from the Bessemer process to the open hearth process — it was first used in the Ruhr in 1869 — and Germany began to challenge Britain in steelmaking. The Germans also adopted Thomas’s method of using lime to draw out phosphorus and bind it into a slag. They then went one step further, converting the slag into fertilizer.

The open hearth method was also speedily adopted in the United States, where it was recognized that increasing the scale of operations reduced the cost of production. The Americans soon led the world in steel production, and were even able to undercut the price of British steel in the British market. England’s commercial domination of the world had come to an end.

Open hearth steelmaking continued to be a major industry in the United States into the middle of the twentieth century. The Russian-American artist Boris Artzybasheff (1899-1965) was fascinated by the machinery of that time:

I am thrilled by machinery’s force, precision and willingness to work at any task, no matter how arduous or monotonous it may be. I would rather watch a thousand ton dredge dig a canal than see it done by a thousand spent slaves lashed into submission.1

Perhaps it was this sentiment that led him to portray steelmaking machines as partners rather than tools.

Boris Artzybasheff: Charging the open hearth
Boris Artzybasheff: Tapping a batch of finished steel
Boris Artzybasheff: Filling the ingots

Industrial Chemistry

Steelmaking required a clear understanding of combustion, which had only been achieved at the end of the eighteenth century. That century had begun with an echo of Aristotle’s theory of the four elements: Georg Ernst Stahl had postulated the existence of phlogiston, a substance that carried the “fire” component of everything that was combustible. Things lost weight when they burned — the log was reduced to ashes — because phlogiston escaped into the air. Combustion in a sealed chamber stopped when the air in the chamber could absorb no more phlogiston. Phlogiston also explained the respiration of living creatures: respiration was a mechanism for expelling phlogiston from the body. If a mouse was placed in a sealed jar, it would die when the air in the jar became saturated with phlogiston. A theory that could explain both combustion and respiration seemed powerful, but there were some puzzling anomalies.

In 1772 Antoine-Laurent Lavoisier found that phosphorus and sulphur combined with the air when they burned, so that they gained weight rather than lost it. He also found that heating lead calx caused air to be released.2 Combustion had something to do with the air, but the air itself was still a mystery. A clue was provided by Joseph Priestley, who discovered that heating a mercury calx gave off a gas (“pure air”) that supported both combustion and respiration. In 1777 Lavoisier extended Priestley’s experiment. He was able to show that air has two parts, one supporting respiration and combustion, and one not supporting either. He concluded that combustion was the reaction of a metal or combustible substance with the part of the air that was “eminently respirable”. Sometime later he found that most acids contained this breathable air, and named it oxygène (acid generator).

Henry Cavendish had isolated “inflammable air” in 1766. He found that when inflammable air and atmospheric air were mixed in a sealed jar and then ignited with a spark, a “dew” formed on the inner sides of the jar. In 1783 Lavoisier repeated the experiment with inflammable air and oxygène, showing that their reaction produced water. This experiment confirmed oxygène’s role in combustion. Lavoisier concluded that phlogiston was an illusion. He began to campaign against phlogiston in 1783; Cavendish joined him in 1787.3
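
In modern notation, the two decisive experiments amount to a pair of reactions (neither oxygen nor hydrogen had yet been named):

    2 HgO → 2 Hg + O₂ (heating the mercury calx releases Priestley’s “pure air”)
    2 H₂ + O₂ → 2 H₂O (Cavendish’s “inflammable air” burns in oxygène to produce water)

Nothing escapes from the burning substance; combustion is combination with oxygen, and phlogiston is superfluous.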

Chemistry was on the verge of a revolution. Lavoisier showed in 1782 that mass is conserved in chemical reactions. Joseph-Louis Proust argued in 1801 that every chemical compound has a fixed composition, and is created by the reaction of other substances in fixed weight ratios. In 1804 John Dalton proposed an atomic theory to explain the fixity of the weight ratios, and atomic theory has been at the center of chemistry ever since.
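
A worked example, using modern atomic weights that were not yet known at the time: water is H₂O, so each molecule carries 2 parts hydrogen to 16 parts oxygen by weight. Water from any source is therefore about 2/18 ≈ 11% hydrogen and 16/18 ≈ 89% oxygen, a fixed ratio of 1 to 8: exactly the kind of regularity that Proust observed and that Dalton’s atoms explain.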

Chemistry had immediate applications in agriculture. Investigations in the 1840s by John Bennet Lawes in England and Justus von Liebig in Germany established that nitrogen, phosphorus and potassium are required for plant growth. The N-P-K ratio can be found on every package of fertilizer today. Lawes began industrial production of a phosphate fertilizer — superphosphate — in 1843. Superphosphate is still widely used in agriculture.

Coal gas was a mixture of hydrogen, methane, carbon monoxide, and small amounts of other gases. William Murdoch began experimenting with its use for lighting in the 1790s. His employers were Boulton and Watt, and he installed gas lighting in their Soho Foundry in Birmingham in 1798. He lit the exterior of the foundry for a celebration in 1802, and the adoption of coal gas for street lighting followed almost immediately. London installed gas lighting in Pall Mall in 1807, and Paris adopted it in 1820. Many towns in England had both gas lighting and a gasworks (for the production of coal gas) within the next decade or so.

London’s gas lights in 1809.

Coal gas was initially a by-product of the coking process. Once it entered large scale production, coal gas had useful by-products of its own. One was coal tar, which was used to preserve the wooden ties that were needed for the railroads being laid helter-skelter across Europe. Another was ammonia, which was used in the manufacture of fertilizers, explosives, cleansers, and synthetic dyes.

Synthetic dyes were one of the areas that showed the value of Germany’s systematic approach to technical training. The first synthetic dye was discovered serendipitously by a German-trained Englishman, and the second was discovered by a Frenchman. German chemists then began a systematic search for synthetic dyes, and Germany soon dominated the industry. Britain was importing large quantities of dyes from Germany at the outbreak of World War I.

Nitroglycerin was invented in 1847 by the Italian chemist Ascanio Sobrero during a search for something more powerful than gunpowder. Its instability limited its usefulness;4 but Alfred Nobel discovered that absorbing nitroglycerin into diatomaceous earth yielded a safer explosive. In 1867 he patented his invention under the name dynamite. He patented gelignite, a malleable form of dynamite, in 1876. The amount of labour saved by these inventions was enormous.

Nitroglycerin proved to be a useful medicine: it ameliorated the symptoms of angina, which was otherwise untreatable. Other useful medicines were extracted from natural sources: cocaine from the coca leaf, salicylic acid from willow bark, digitalis from foxglove, morphine from the opium poppy. Quinine had long ago been extracted from cinchona bark; but in the late nineteenth century the Dutch were able to smuggle cinchona saplings out of South America — Peru and its neighbours had attempted to monopolize quinine — and grow them on plantations in Java, greatly increasing quinine’s availability. Anesthetics, disinfectants and antiseptics came into general use.

New materials were developed. The vulcanization of rubber was introduced in the 1840s. The first plastic, celluloid, appeared in 1869, but was so flammable that its usefulness was limited. Bakelite was developed in 1907; it became indispensable to the electrical industries because it was electrically non-conductive and heat resistant.

Electricity

William Gilbert, in his book On the Magnet (1600), distinguished the attraction of metal to magnet from the attraction of, say, scraps of paper to an amber rod that had been rubbed on wool. He called the latter attraction “electricus” (from the Greek word elektron, meaning amber). The English word “electricity” came into use shortly afterwards. Although Gilbert gave this attraction a name, humans had been aware of it for at least 2000 years.

The process of turning electricity into a power source would begin some two hundred years later. It would involve a two-way exchange between scientists and technologists. The distinction between the two would often be blurred: some of the earliest useful devices were built by scientists. Alessandro Volta created the first true battery, now known as the Voltaic pile, in 1800. It consisted of a column of copper and zinc discs separated by brine-soaked cloth. No-one understood how it worked, but the ability to produce a continuous flow of electricity was an essential first step to electricity’s exploitation. Michael Faraday built the first electric motor in 1821, and the first dynamo in 1831. His dynamo was interesting more for its scientific implications than for its practical value; but a multinational tag team of technicians and scientists — including Hippolyte Pixii, Antonio Pacinotti, Werner von Siemens (older brother of Carl Wilhelm), Charles Wheatstone, and Zénobe Gramme — turned it into a device capable of industrial use by 1870.

A number of people attempted to use electricity for long distance communication. Telegraphy was demonstrated as early as 1804, but the early devices tended to be cumbersome and sometimes rather weird. The first commercial system, the work of William Cooke and Charles Wheatstone, was in operation in Britain in 1838. This system required multiple wires (as many as six), making it expensive to build and maintain. Samuel Morse’s one-wire system quickly became the standard in North America, and was adopted as the standard for Continental Europe in 1851.

Very long distance communication required cable to be laid underwater. The first underwater cable was laid across the English Channel in 1851. After some false starts, a cable was successfully laid across the Atlantic Ocean in 1866. The signal suffered from such severe weakness and distortion that the cable was capable of transmitting only eight words per minute. An underwater cable connected Britain to India by 1870, and by 1902, telegraph cables encircled the globe. Messages that had once taken weeks to carry by ship could be transmitted in minutes.

Transmitting the human voice by wire was the next achievement. Alexander Graham Bell received a United States patent for his telephone in 1876. There are other people who have a legitimate claim to be the telephone’s inventor, but no-one else can claim Bell’s commercial success. His telephone was initially installed in pairs, to directly connect two locations. The telephone’s usefulness was greatly expanded, however, when Thomas Edison’s workshop developed the telephone exchange.5 The first commercial telephone exchange opened for business (with 21 subscribers) in 1878. Thomas Edison was simultaneously experimenting with recording and reproducing sound: he patented the phonograph in 1877.

Telephone wires in New York City, 1887

In 1865 James Clerk Maxwell had posited the existence of electromagnetic waves. Heinrich Rudolf Hertz experimentally confirmed Maxwell’s theory in the 1880s, and in the process, generated electromagnetic waves that travelled through the air. This result captured the attention of a number of people who wondered whether these waves could provide the basis for long distance communications. One of them was Guglielmo Marconi, who successfully developed wireless telegraphy, later known as radiotelegraphy or simply radio. Wireless communications were in use by the late nineteenth century. Among the beneficiaries of this device were ships at sea, which could now receive weather forecasts and send status reports or distress calls.

A number of people made significant attempts to develop the incandescent light bulb, but its inventors are generally said to be Britain’s Joseph Swan and America’s Thomas Edison. The light bulb was already in use in some private homes in 1878, and its use spread quickly. Widespread adoption of the lightbulb required power generation on a large scale, and the United States and Britain built their first central power stations in 1882.

The replacement of direct current with alternating current occurred in the late nineteenth century. There were large energy losses when direct current was transmitted over long distances. The energy losses were much smaller with alternating current, because transformers could step up its voltage for transmission (which reduced the loss) and then step it back down at its destination. Much of the pioneering work on alternating current was done in Europe: Sebastian Ziani de Ferranti, Lucien Gaulard and Galileo Ferraris played key roles. George Westinghouse brought alternating current to the United States in 1885, initially using systems purchased in Europe. This decision brought Westinghouse into a confrontation with Thomas Edison, who was committed to direct current. The two entrepreneurs conducted a no-holds-barred fight for public support, highlighted by the Edison camp’s public electrocutions of animals. Edison conceded the fight in 1892.6
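
The advantage of high voltage can be illustrated with a short calculation. For a line of resistance R carrying current I, the power lost as heat is I²R; delivering the same power at ten times the voltage requires one tenth the current, and therefore incurs one hundredth the loss. The sketch below, in Python, uses assumed numbers chosen for illustration rather than historical data:

    # A minimal sketch of transmission losses, with assumed illustrative numbers.
    power = 100_000.0   # watts to deliver to customers
    resistance = 5.0    # ohms of line resistance (assumed)

    for voltage in (1_000.0, 10_000.0):
        current = power / voltage           # I = P / V
        loss = current ** 2 * resistance    # Joule heating: I²R
        print(f"{voltage:>8,.0f} V: {current:>6,.0f} A, line loss {loss / 1000:.1f} kW")

Stepping the transmission voltage up from 1,000 V to 10,000 V cuts the loss in this example from 50 kW to 0.5 kW, which is why the transformer made long distance transmission practical.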

The Internal Combustion Engine

The internal combustion engine was initially conceived as a small scale source of stationary power. The steam engine worked well for large manufacturers whose demands for power were continuous or nearly so, but it was ill-suited to situations where the need for power was small or intermittent. The engines themselves were large, and more space was required to store fuel. It took time to build steam in the boiler, so the fire had to be tended even when the engine was not in use. The boiler was intrinsically dangerous — it would explode if the pressure was too great — so it had to be watched whenever the fire was lit, and it might have to be licensed as well. Inventors began to imagine an alternative: an engine that was fuelled by the lighting gas that was already being piped into the factory, and that would drive the piston by the explosion of the gas inside the cylinder. There would be no fuel storage and no boiler, and the engine could be stopped and started as needed.7

As with the telegraph, the telephone and the light bulb, many inventors could imagine a useful device, but bringing it into existence was quite another thing. Étienne Lenoir was the first person to build a commercially successful internal combustion engine. It had one double-acting cylinder, operated at about 100 RPM, and produced one or two horsepower. A mixture of gas and air was drawn into the cylinder during the first half of the stroke and then ignited with a spark: the expansion of the exploding gas drove the piston through the second half of its stroke. The same sequence occurred during the return stroke. The engine was rough and inefficient, but it satisfied a need. Lenoir began to sell it in 1860, and competitors soon appeared.

One of those competitors was Nicolaus Otto, who started experimenting with engines when he read a newspaper report of Lenoir’s achievement. He built an atmospheric engine with a tall vertical cylinder and a heavy piston. The explosion of the gases drove the piston upwards, and the weight of the piston and atmospheric pressure (as in Newcomen’s engine) brought it back down again. The fall of the piston drove a flywheel. This engine was far more fuel efficient than Lenoir’s engine. Otto and his business partner, Eugen Langen, began to sell it in 1868. Sales stalled in 1875: customers wanted more power than the atmospheric engine could provide.

Otto began to think about compressing the gas and air mixture before exploding it, and ultimately devised the four-stroke scheme that is still used in engines today. The first stroke draws a gas and air mixture into the cylinder and the second stroke compresses it. The mixture is ignited when it reaches maximum compression. The third stroke is the power stroke, in which the expansion of the exploding gas drives the piston along the cylinder. The fourth stroke pushes the exhaust gases out of the cylinder.

Otto offered a four-stroke engine — the Otto Silent Engine — for sale in 1876. It generated three horsepower at 180 RPM. Its design was revolutionary. No previous internal combustion engine had exceeded three horsepower, but within a few years, four-stroke engines of 1000 horsepower would be driving generators.

Competitors sprang up, and by the end of the century there were about 200,000 four-stroke engines in use. They drove pumps, hoists, printing presses, and workshop machinery.

Some engineers began to think about using internal combustion engines to drive vehicles. It would not be an easy thing to do. The engine would have to be made small and light, and it would have to be adapted to burn a liquid fuel. It would also have to operate at higher RPM, which would require a more precise ignition system. Karl Benz was one of these engineers: he completed his first vehicle in 1885. It was powered by a 3/4 horsepower engine. Gottlieb Daimler and Wilhelm Maybach left Otto’s company to build automobiles. Their first Mercedes, offered for sale just sixteen years later, was powered by a 35 horsepower engine and could reach a speed of 85 kilometers per hour.

The first Benz automobile, 1885.
The first Mercedes automobile, 1901

Maybach’s carburetor (1885) and Robert Bosch’s magneto (1902) were very significant innovations in engine design, but there was new technology throughout the vehicle, including the brakes, gear shift, radiator, pneumatic tires (originally designed for bicycles), differential, and crank starter.

A lightweight engine also made possible powered heavier-than-air flight. In 1903 Orville and Wilbur Wright became the first people to achieve powered flight, but adding an engine to a glider was actually a small part of their accomplishment. The Wright brothers recognized the need for the pilot to have positive control of an aircraft’s orientation. They used wing warping (twisting the tips of the wings up or down) to control roll, an elevator to control pitch, and a movable rudder to control yaw. All three of these devices were introduced between 1900 and 1902, and were incorporated into the 1902 glider. This glider was the first fully controllable aircraft. The Wright brothers were subsequently awarded a United States patent for their control system.

The Wright Brothers’ 1902 glider

Manufacturing

Production with interchangeable parts took hold during the Second Industrial Revolution, although its origins are earlier. Christopher Polhem of Sweden was making interchangeable gears for clocks by 1720. Honoré Blanc of France was producing muskets with interchangeable flintlocks by 1778.

Production with interchangeable parts requires only that the parts be precisely made, so that a craftsman can assemble them without the hand-fitting that was once required. But eliminating the need for hand-fitting opens up the possibility that the next stage of production can be accomplished by a machine instead of a craftsman.

Ship’s blocks

This possibility was exploited by Marc Isambard Brunel and Henry Maudslay’s block-making machinery. A ship’s block consists of one or more pulleys enclosed in a wooden shell. A navy ship could require a thousand blocks, and in 1800 the British navy required 100,000 blocks each year. Brunel and Maudslay developed a sequence of twenty-two machines, powered by a steam engine, that would produce these blocks. The machinery began operation in 1803, and by 1808, was producing 130,000 blocks per year. Ten men did the work that had once required 110 men.

The move to interchangeable parts, perhaps coupled with mass production, was slow in the United States and even slower in Europe.

Many American firms, such as McCormick, Singer, and Colt, owed their success to factors other than complete interchangeability. At first, goods made with interchangeable parts were more expensive and were adopted mostly by government armories, which considered quality more important than price. Only after the Civil War did U.S. manufacturing gradually adopt mass production methods, followed by Europe. First in firearms, then in clocks, pumps, locks, mechanical reapers, typewriters, sewing machines, and eventually engines and bicycles, interchangeable parts technology proved superior and replaced the skilled artisan working with chisel and file.8

Interchangeable parts were a prerequisite to assembly line manufacturing. Henry Ford used the combination of the two to make the automobile an attainable consumer good.

Conclusions

This brief survey of the technological achievements of the Second Industrial Revolution has necessarily been selective. Refrigeration and canning have been omitted, for example. So has the mechanization of agriculture, and the invention of the diesel engine, and the electrification of trains and trolleys, and the coming of the skyscraper. Nevertheless, the survey does convey a sense of the technological change of the time.

An equally brief survey of the technological achievements of the remainder of the twentieth century is probably impossible. A discussion of flight alone could include the airplane as a weapon of war, intercontinental air travel, the jet engine, the rocket, orbital satellites and satellite communications, manned space flight, and the unmanned exploration of deep space. A history of computing would be a serious undertaking in itself. The Second Industrial Revolution decisively linked technology to science, and that linkage has led to an explosive expansion of technology.


  1. Boris Artzybasheff, As I See (1954).
  2. A calx is a substance formed when an ore or a mineral is heated. These substances are now called oxides, but this designation was not available to people who had not yet discovered oxygen.
  3. The information on Lavoisier’s role in establishing the nature of combustion was taken from a commemorative pamphlet published by the American Chemical Society, “The Chemical Revolution of Antoine-Laurent Lavoisier”.
  4. The instability of nitroglycerin is central to Henri-Georges Clouzot’s classic film The Wages of Fear (1953).
  5. The idea came from a Hungarian engineer, Tivadar Puskás, who was working for Edison at the time.
  6. The AC versus DC fight was just one part of Edison and Westinghouse’s struggle for dominance in America. Empires of Light (Random House, 2003) by Jill Jonnes is an entertaining survey of that contest.
  7. This section relies heavily on Lynwood Bryant’s “The Beginnings of the Internal Combustion Engine,” from Technology in Western Civilization (Oxford, 1967).
  8. Joel Mokyr, “The Second Industrial Revolution, 1870-1914,” p. 9.