Math Thematic


Sharing of different mathematic elements, stories, archives of all kinds.

founded 2 months ago
1
 
 

Sangaku or san gaku (Japanese: 算額, lit. 'calculation tablet') are Japanese geometrical problems or theorems on wooden tablets which were placed as offerings at Shinto shrines or Buddhist temples during the Edo period by members of all social classes.

A sangaku dedicated to Konnoh Hachimangu (Shibuya, Tokyo) in 1859.

A sangaku dedicated at Emmanji Temple in Nara

The sangaku were painted in color on wooden tablets (ema) and hung in the precincts of Buddhist temples and Shinto shrines as offerings to the kami and buddhas, as challenges to the congregants, or as displays of the solutions to questions. Many of these tablets were lost during the period of modernization that followed the Edo period, but around nine hundred are known to remain.

Fujita Kagen (1765–1821), a Japanese mathematician of prominence, published the first collection of sangaku problems, his Shimpeki Sampo (Mathematical Problems Suspended from the Temple), in 1790, and in 1806 a sequel, the Zoku Shimpeki Sampo.

During this period Japan applied strict regulations to commerce and foreign relations with Western countries, so the tablets were created using Japanese mathematics, developed in parallel to Western mathematics. For example, the connection between an integral and its derivative (the fundamental theorem of calculus) was unknown, so sangaku problems on areas and volumes were solved by expansions in infinite series and term-by-term calculation.

https://en.wikipedia.org/wiki/Sangaku

Of the world's countless customs and traditions, perhaps none is as elegant, nor as beautiful, as the tradition of sangaku, Japanese temple geometry. From 1639 to 1854, Japan lived in strict, self-imposed isolation from the West. Access to all forms of occidental culture was suppressed, and the influx of Western scientific ideas was effectively curtailed. During this period of seclusion, a kind of native mathematics flourished.

Devotees of math, evidently samurai, merchants and farmers, would solve a wide variety of geometry problems, inscribe their efforts in delicately colored wooden tablets and hang the works under the roofs of religious buildings. These sangaku, a word that literally means mathematical tablet, may have been acts of homage--a thanks to a guiding spirit--or they may have been brazen challenges to other worshipers: Solve this one if you can! For the most part, sangaku deal with ordinary Euclidean geometry. But the problems are strikingly different from those found in a typical high school geometry course. Circles and ellipses play a far more prominent role than in Western problems: circles within ellipses, ellipses within circles. Some of the exercises are quite simple and could be solved by first-year students. Others are nearly impossible, and modern geometers invariably tackle them with advanced methods, including calculus and affine transformations.

https://www.cut-the-knot.org/pythagoras/Sangaku.shtml

The tablet was called a SANGAKU which means a mathematics tablet in Japanese. Many skilled geometers dedicated a SANGAKU in order to thank the god for the discovery of a theorem. The proof of the proposed theorem was rarely given. This was interpreted as a challenge to other geometers, "See if you can prove this."

http://www.wasan.jp/english/

More at http://www.wasan.jp/index.html

2
 
 

The icosian game is a mathematical game invented in 1856 by Irish mathematician William Rowan Hamilton. It involves finding a Hamiltonian cycle on a dodecahedron, a polygon using edges of the dodecahedron that passes through all its vertices. Hamilton's invention of the game came from his studies of symmetry, and from his invention of the icosian calculus, a mathematical system describing the symmetries of the dodecahedron.

Hamilton sold his work to a game manufacturing company, and it was marketed both in the UK and Europe, but it was too easy to be commercially successful. Only a small number of copies are known to survive in museums. Although Hamilton was not the first to study Hamiltonian cycles, his work on this game became the origin of the name of Hamiltonian cycles. Several works of recreational mathematics studied his game. Other puzzles based on Hamiltonian cycles are sold as smartphone apps, and mathematicians continue to study combinatorial games based on Hamiltonian cycles.

Game play

A Hamiltonian cycle on a dodecahedron

Planar view of the same cycle

The game's object is to find a three-dimensional polygon made from the edges of a regular dodecahedron, passing exactly once through each vertex of the dodecahedron. A polygon visiting all vertices in this way is now called a Hamiltonian cycle. In a two-player version of the game, one player starts by choosing five consecutive vertices along the polygon, and the other player must complete the polygon.

Édouard Lucas describes the shape of any possible solution, in a way that can be remembered by game players. A completed polygon must cut the twelve faces of the dodecahedron into two strips of six pentagons. As this strip passes through each of its four middle pentagons, in turn, it connects through two edges of each pentagon that are not adjacent, making either a shallow left turn or a shallow right turn through the pentagon. In this way, the strip makes two left turns and then two right turns, or vice versa.

One version of the game took the form of a flat wooden board inscribed with a planar graph with the same combinatorial structure as the dodecahedron (a Schlegel diagram), with holes for numbered pegs to be placed at its vertices. The polygon found by game players was indicated by the consecutive numbering of the pegs. Another version was shaped as a "partially flattened dodecahedron", a roughly hemispherical dome with the pentagons of a dodecahedron spread on its curved surface and a handle attached to its flat base. The vertices had fixed pegs. A separate string, with a loop at one end, was wound through these pegs to indicate the polygon.
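The puzzle is small enough to solve by brute force. As a sketch (not from any of the quoted sources), the following Python models the dodecahedron's skeleton as the generalized Petersen graph GP(10, 2) and counts its Hamiltonian cycles by backtracking; the classical count for the dodecahedral graph is 30:

```python
def dodecahedral_graph():
    # The dodecahedron's skeleton, built as the generalized Petersen
    # graph GP(10, 2): outer 10-cycle 0..9, inner vertices 10..19 joined
    # in steps of two (two inner pentagons), and spokes between rings.
    adj = {v: set() for v in range(20)}
    def link(a, b):
        adj[a].add(b)
        adj[b].add(a)
    for i in range(10):
        link(i, (i + 1) % 10)              # outer cycle
        link(10 + i, 10 + (i + 2) % 10)    # inner pentagon edges
        link(i, 10 + i)                    # spoke
    return adj

def count_hamiltonian_cycles(adj):
    # Backtracking search from a fixed start vertex; each undirected
    # cycle is found twice (once per direction), so halve the count.
    n, start = len(adj), 0
    total = 0
    def extend(v, visited):
        nonlocal total
        for w in adj[v]:
            if w == start and len(visited) == n:
                total += 1
            elif w not in visited:
                visited.add(w)
                extend(w, visited)
                visited.remove(w)
    extend(start, {start})
    return total // 2

print(count_hamiltonian_cycles(dodecahedral_graph()))  # 30
```

With 30 essentially different solutions on only 20 vertices, it is easy to see why players found the game too forgiving.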

The game was too easy to play to achieve much popularity, although Hamilton tried to counter this impression by giving an example of an academic colleague who failed to solve it. David Darling suggests that Hamilton may have made it much more difficult for himself than for others, by using his theoretical methods to solve it instead of trial and error.

https://en.wikipedia.org/wiki/Icosian_game

Sir William Rowan Hamilton (4 August 1805 – 2 September 1865) was an Irish mathematician, physicist, and astronomer who made numerous major contributions to algebra, classical mechanics, and optics. His theoretical works and mathematical equations are considered fundamental to modern theoretical physics, particularly his reformulation of Lagrangian mechanics. His research included the analysis of geometrical optics, Fourier analysis, and quaternions, the last of which made him one of the founders of modern linear algebra.

https://en.wikipedia.org/wiki/William_Rowan_Hamilton

A graph having a Hamiltonian cycle, i.e., on which the Icosian game may be played, is said to be a Hamiltonian graph. While the skeletons of all the Platonic solids and Archimedean solids (i.e., the Platonic graphs and Archimedean graphs, respectively) are Hamiltonian, the same is not necessarily true for the skeletons of the Archimedean duals, as shown by Coxeter (1946) and Rosenthal (1946) for the rhombic dodecahedron (Gardner 1984, p. 98).

Wolfram (2022) analyzed the icosian game as a multicomputational process, including through the use of multiway and branchial graphs. In particular, the entry illustrates how the multiway graph for the icosian game begins.

https://mathworld.wolfram.com/IcosianGame.html

The Original Icosian Game

In 1857 Sir William Rowan Hamilton invented the Icosian game. In a world based on the dodecahedral graph, a traveler must visit 20 cities, without revisiting any of them. Today, when the trip makes a loop through all the vertices of the graph, it is called a Hamiltonian tour (or cycle). When the first and last vertices in a trip are not connected, it is called a Hamiltonian path (or trail). The first image shown is a tour; the second is a path.

Hamiltonian cycles gained popularity in 1880, when P. G. Tait made the conjecture: “Every cubic polyhedron has a Hamiltonian cycle through all its vertices”. Cubic means that three edges meet at every vertex. Without the cubic requirement, there are smaller polyhedra that are not Hamiltonian. The simplest counterexample is the rhombic dodecahedron. Every edge connects one of six valence-four vertices to one of eight valence-three vertices. The six valence-four vertices would need to occupy every other vertex in the length-14 tour. Six items cannot fill seven slots, so this is impossible.
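The parity argument above can be checked by brute force. A minimal sketch, assuming the standard identification of the rhombic dodecahedron's skeleton with the vertex-face incidence graph of the cube (8 corners of degree 3, 6 face centers of degree 4):

```python
from itertools import product

def rhombic_dodecahedron_graph():
    # Skeleton of the rhombic dodecahedron as the vertex-face incidence
    # graph of the cube: a corner (3-tuple of signs) is joined to a face
    # (axis, sign) exactly when the corner lies on that face.
    corners = list(product((-1, 1), repeat=3))
    faces = [(axis, sign) for axis in range(3) for sign in (-1, 1)]
    adj = {v: set() for v in corners + faces}
    for c in corners:
        for axis, sign in faces:
            if c[axis] == sign:
                adj[c].add((axis, sign))
                adj[(axis, sign)].add(c)
    return adj

def has_hamiltonian_cycle(adj):
    # Plain backtracking search for a closed tour through every vertex.
    verts = list(adj)
    start, n = verts[0], len(verts)
    def extend(v, visited):
        if len(visited) == n:
            return start in adj[v]        # can the cycle be closed?
        return any(extend(w, visited | {w})
                   for w in adj[v] if w not in visited)
    return extend(start, {start})

print(has_hamiltonian_cycle(rhombic_dodecahedron_graph()))  # False
```

The search confirms the counting argument: the six valence-four vertices cannot alternate with the eight valence-three vertices around a 14-vertex cycle.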

Any noncubic graph can be made cubic by placing a small disk over the exceptions.

The word “polyhedral” implies that the graph must be 3-connected. If a line is drawn to disconnect the map, it must pass through at least three borders. Central Europe is not 3-connected, since a line through Spain will disconnect Portugal. France, the Vatican, and various islands also make the shape of Europe nonpolyhedral.

Tait’s method turns a Hamiltonian cycle on a cubic polyhedral graph into a four-coloring, by the following method.

  1. Alternately color the edges of the Hamiltonian cycle blue and purple. Color the remaining edges red.

  2. Discard the purple edges; the blue and red edges that remain form a closed curve. Color its interior blue.

  3. Discard the blue edges; the purple and red edges that remain form one or more closed curves. Color their interiors red.

  4. Overlay the two colorings to get a four-coloring.

For 66 years, Tait’s conjecture held. In 1946, W. T. Tutte found the first counterexample, now known as Tutte’s graph. Since then, smaller cubic polyhedral non-Hamiltonian graphs have been found, the smallest being the Barnette–Bosák–Lederberg graph, found in 1965. Seven years earlier, Lederberg had won the Nobel Prize in Medicine.

https://www.mathematica-journal.com/2010/02/05/the-icosian-game-revisited/#Dalgety

Tutte's fragment

The key to this counter-example is what is now known as Tutte's fragment [...].

If this fragment is part of a larger graph, then any Hamiltonian cycle through the graph must go in or out of the top vertex (and either one of the lower ones). It cannot go in one lower vertex and out the other.

The counterexample

The fragment can then be used to construct the non-Hamiltonian Tutte graph, by putting together three such fragments as shown in the picture.

The "compulsory" edges of the fragments, that must be part of any Hamiltonian path through the fragment, are connected at the central vertex; because any cycle can use only two of these three edges, there can be no Hamiltonian cycle.

The resulting Tutte graph is 3-connected and planar, so by Steinitz' theorem it is the graph of a polyhedron. In total it has 25 faces, 69 edges and 46 vertices. It can be realized geometrically from a tetrahedron (the faces of which correspond to the four large faces in the drawing, three of which are between pairs of fragments and the fourth of which forms the exterior) by multiply truncating three of its vertices.

https://en.wikipedia.org/wiki/Tait%27s_conjecture

3
 
 

In mathematics, a series is, roughly speaking, an addition of infinitely many terms, one after the other. The study of series is a major part of calculus and its generalization, mathematical analysis. Series are used in most areas of mathematics, even for studying finite structures in combinatorics through generating functions. The mathematical properties of infinite series make them widely applicable in other quantitative disciplines such as physics, computer science, statistics and finance.

Among the Ancient Greeks, the idea that a potentially infinite summation could produce a finite result was considered paradoxical, most famously in Zeno's paradoxes. Nonetheless, infinite series were applied practically by Ancient Greek mathematicians including Archimedes, for instance in the quadrature of the parabola. The mathematical side of Zeno's paradoxes was resolved using the concept of a limit during the 17th century, especially through the early calculus of Isaac Newton. The resolution was made more rigorous and further improved in the 19th century through the work of Carl Friedrich Gauss and Augustin-Louis Cauchy, among others, answering questions about which of these sums exist via the completeness of the real numbers and whether series terms can be rearranged or not without changing their sums using absolute convergence and conditional convergence of series.

Greek mathematician Archimedes produced the first known summation of an infinite series with a method that is still used in the area of calculus today. He used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, and gave a remarkably accurate approximation of π.

Mathematicians from the Kerala school were studying infinite series c. 1350 CE.

In the 17th century, James Gregory worked in the new decimal system on infinite series and published several Maclaurin series. In 1715, a general method for constructing the Taylor series for all functions for which they exist was provided by Brook Taylor. Leonhard Euler in the 18th century, developed the theory of hypergeometric series and q-series.

https://en.wikipedia.org/wiki/Series_(mathematics)

Infinite series result from wanting to know how the sum of a series behaves when there are infinitely many terms. We can write any series we see in its infinite series form, so it’s no surprise that infinite series appear in physics, biology, and engineering.

Infinite series represents the successive sum of a sequence of an infinite number of terms that are related to each other based on a given pattern or relation.

Isn’t it amazing how, through the advancement of mathematics, it is now possible for us to predict the sum of a series made of an endless number of terms?

What is an infinite series?

As our introduction says, infinite series represents the sum of the infinite number of terms formed by a sequence. Below are examples of infinite series:

1/2 + 1/4 + 1/6 + 1/8 + 1/10 +…

  • This is an example of an infinite harmonic series, where the denominator increases by 2 as the series progresses.

3+9+27+81+243+…

  • This is an example of an infinite geometric series, where the next term is determined by multiplying the previous term by 3.

These examples give us an idea of what makes up an infinite series, so let’s go ahead and formally define infinite series. In the next section, we’ll learn how we can express them in terms of sigma notation.

Infinite series definition

Let’s say we have a finite sequence consisting of the terms {𝑎~1~, 𝑎~2~, …, 𝑎~𝑛−1~, 𝑎~𝑛~}, so the sum of its finite series can be expressed as 𝑎~1~ + 𝑎~2~ + … + 𝑎~𝑛−1~ + 𝑎~𝑛~.

The only difference for an infinite series is that the terms extend beyond 𝑎~𝑛~, so the infinite series will be of the form

𝑎~1~ + 𝑎~2~ + 𝑎~3~ + …

or, in sigma notation, ∑~𝑛=1~^∞^ 𝑎~𝑛~.

How to find the sum of an infinite series?

At first, it may feel counter-intuitive to think that we can predict the sum of an infinite series. But thanks to limits and calculus, we’re able to create a systematic process to find the sum of a given infinite series.

But first, let’s take a look at this visual representation of an infinite geometric series.

This is a good example of how we can find the sum of infinite series. That’s because as we continue to add more terms (so take half of the previous area), we’ll see that when combined altogether, the total area of the shaded region will fill up almost the entire square’s region.

Any guess on the sum of the infinite series 1/2 + 1/4 + 1/8 + 1/16 + …, then? Visually, since the regions will eventually make up the entire square, the sum of the infinite series is 1.

But how do we confirm this mathematically? Before we dive right into the process of determining the sum of infinite series, let’s find out how to find the sum of a certain portion from a given infinite series.

How to find partial sum of infinite series?

The partial sum of an infinite series is simply the sum of a certain number of terms from the series. For example, the series 1/2 +1/4 + 1/8 is simply a part of the infinite series 1/2 + 1/4 + 1/8 + ...

This means that the partial sum of the first three terms of the infinite series shown above is equal to 1/2 + 1/4 + 1/8 = 7/8
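The partial-sum arithmetic can be verified exactly with Python's fractions module (an illustrative sketch, not part of the quoted article):

```python
from fractions import Fraction

# Exact partial sums S_n of 1/2 + 1/4 + 1/8 + ...: S_3 = 7/8 as above,
# and in general S_n = 1 - 1/2^n, which approaches 1.
S = Fraction(0)
partials = []
for k in range(1, 11):
    S += Fraction(1, 2 ** k)
    partials.append(S)

print(partials[2])    # 7/8
print(partials[9])    # 1023/1024
```

Each partial sum falls short of 1 by exactly the last term added, which is why the shaded regions never quite fill the square at any finite stage.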

How to find the infinite series’ sum based on its partial sum?

You might be wondering why we’re talking about partial sums when we’re supposedly dealing with the sums of infinite series. That’s because when we want to find the sum of an infinite series, we’ll need the expression of its partial sum.

Let’s say we have an infinite series 𝑆 = 𝑎~1~ + 𝑎~2~ + 𝑎~3~ + …, so its partial sum for the first 𝑛 terms will be 𝑆~𝑛~ = 𝑎~1~ + 𝑎~2~ + … + 𝑎~𝑛~.

  • If the partial sum, 𝑆~𝑛~, converges, the infinite series, 𝑆, converges as well. In fact, lim~𝑛→∞~ 𝑆~𝑛~ represents the sum of the infinite series.

  • If the partial sum, 𝑆~𝑛~, diverges, the infinite series, 𝑆, diverges as well. In that case it is not possible to assign a sum to the series.

Why don’t we go ahead and observe the following geometric series and see what happens with their partial sum and infinite series’s sum?

Starting with the series 1/3 + 1/9 + 1/27 + …, we can see that the common ratio is 1/3 and that the succeeding terms get smaller, approaching 0.

The partial sum of the first 𝑛 terms of the series is 𝑆~𝑛~ = 𝑎(1 − 𝑟^𝑛^)/(1 − 𝑟), where 𝑎 = 1/3 and 𝑟 = 1/3.

Let’s take a look at the limit of 𝑆~𝑛~ as 𝑛 approaches infinity.

Since 𝑟^𝑛^ → 0 as 𝑛 → ∞, we have lim~𝑛→∞~ 𝑆~𝑛~ = 𝑎/(1 − 𝑟) = (1/3)/(2/3) = 1/2, so the sum of the series is equal to 1/2.
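As a quick check of the geometric partial-sum formula and its limit (a sketch; the helper name `partial_sum` is illustrative, not from the article):

```python
from fractions import Fraction

a = r = Fraction(1, 3)

def partial_sum(n):
    # Geometric partial sum S_n = a(1 - r^n) / (1 - r)
    return a * (1 - r ** n) / (1 - r)

# r^n -> 0, so S_n -> a / (1 - r) = (1/3) / (2/3) = 1/2
print(partial_sum(3))            # 13/27 = 1/3 + 1/9 + 1/27
print(float(partial_sum(40)))
```

The exact value 13/27 matches adding the first three terms by hand, and the 40-term sum is already indistinguishable from 1/2 in floating point.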

What happens when the common ratio is greater than 1? Let’s see how the series, 2 +4 +8 +16 +… behaves to answer that question.

This time, we have 𝑟 = 2 and 𝑎 = 2.

Conceptually, we’re expecting the series to diverge, and that’s because as we add more terms, the partial sum drastically increases as well. We can confirm this guess by taking the limit of 𝑆~𝑛~ as it approaches infinity.

Since lim~𝑛→∞~ 𝑆~𝑛~ = ∞, the infinite series diverges and will not have a fixed value.

Notice how the series diverges when its terms increase throughout? That’s a helpful observation and something we need to keep in mind each time.

An important condition for the infinite series ∑~𝑛=1~^∞^ 𝑎~𝑛~ to be convergent is that lim~𝑛→∞~ 𝑎~𝑛~ = 0. This means that the terms have to become smaller as the series progresses for the infinite series to be convergent.
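One caveat worth checking numerically: the condition lim 𝑎~𝑛~ = 0 is necessary but not sufficient. The harmonic series has terms tending to 0 yet diverges, as this sketch (not from the quoted article) illustrates:

```python
# The harmonic series 1 + 1/2 + 1/3 + ... has terms shrinking to 0, yet
# its partial sums grow without bound; the classical grouping argument
# gives the lower bound S_{2^k} >= 1 + k/2.
def harmonic_partial_sum(n):
    return sum(1.0 / i for i in range(1, n + 1))

for k in (4, 8, 12):
    print(2 ** k, harmonic_partial_sum(2 ** k), 1 + k / 2)
```

The printed sums keep climbing past every bound of the form 1 + k/2, even though the individual terms become arbitrarily small.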

https://www.storyofmathematics.com/infinite-series/

Yuktibhāṣā (Malayalam: യുക്തിഭാഷ, lit. 'Rationale'), [...] is a major treatise on mathematics and astronomy, written by the Indian astronomer Jyesthadeva of the Kerala school of mathematics around 1530. The treatise, written in Malayalam, is a consolidation of the discoveries by Madhava of Sangamagrama, Nilakantha Somayaji, Parameshvara, Jyeshtadeva, Achyuta Pisharati, and other astronomer-mathematicians of the Kerala school. It also exists in a Sanskrit version, with unclear author and date, composed as a rough translation of the Malayalam original.

Front and back cover of the Palm-leaf manuscripts of the Yuktibhasa, composed by Jyesthadeva in 1530

The work contains proofs and derivations of the theorems that it presents. Modern historians used to assert, based on the works of Indian mathematics that first became available, that early Indian scholars in astronomy and computation lacked in proofs, but Yuktibhāṣā demonstrates otherwise.

Some of its important topics include the infinite series expansions of functions; power series, including of π and π/4; trigonometric series of sine, cosine, and arctangent; Taylor series, including second and third order approximations of sine and cosine; radii, diameters and circumferences.

Yuktibhāṣā mainly gives rationale for the results in Nilakantha's Tantra Samgraha. It is considered an early text to give some ideas related to calculus, such as the Taylor and infinite series of some trigonometric functions, predating Newton and Leibniz by two centuries. However, the Kerala mathematicians did not combine many differing ideas under the two unifying themes of the derivative and the integral, show the connection between the two, and turn calculus into the powerful problem-solving tool we have today. The treatise was largely unnoticed outside India, as it was written in the local language of Malayalam. In modern times, due to wider international cooperation in mathematics, the wider world has taken notice of the work. For example, both Oxford University and the Royal Society of Great Britain have given attribution to pioneering mathematical theorems of Indian origin that predate their Western counterparts.

Yuktibhāṣā contains most of the developments of the earlier Kerala school, particularly Madhava and Nilakantha. The text is divided into two parts – the former deals with mathematical analysis and the latter with astronomy. Beyond this, the continuous text does not have any further division into subjects or topics, so published editions divide the work into chapters based on editorial judgment.

Pages from the Yuktibhasa

The first four chapters of the text contain elementary mathematics, such as division, the Pythagorean theorem, square roots, etc. Novel ideas are not discussed until the sixth chapter, on the circumference of a circle. Yuktibhāṣā contains a derivation and proof for the power series of the inverse tangent, discovered by Madhava. In the text, Jyesthadeva describes Madhava's series in the following manner:

The first term is the product of the given sine and radius of the desired arc divided by the cosine of the arc. The succeeding terms are obtained by a process of iteration when the first term is repeatedly multiplied by the square of the sine and divided by the square of the cosine. All the terms are then divided by the odd numbers 1, 3, 5, .... The arc is obtained by adding and subtracting respectively the terms of odd rank and those of even rank. It is laid down that the sine of the arc or that of its complement whichever is the smaller should be taken here as the given sine. Otherwise the terms obtained by this above iteration will not tend to the vanishing magnitude.
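The iteration in the quote is, in modern notation, the arctangent series: with 𝑡 = sin/cos, the arc equals 𝑡 − 𝑡³/3 + 𝑡⁵/5 − …. A Python sketch of that iteration (the name `madhava_arctan` is illustrative, not historical):

```python
import math

def madhava_arctan(sin_t, cos_t, terms=60):
    # First term: the sine divided by the cosine (radius taken as 1).
    # Each succeeding term multiplies by sin^2/cos^2; term k is then
    # divided by the odd number 2k+1, with alternating signs -- the
    # iteration described in the quote.  The quote's "smaller" condition
    # (|sin| <= |cos|) is what makes the terms tend to zero.
    t = sin_t / cos_t
    power, total = t, 0.0
    for k in range(terms):
        total += (-1) ** k * power / (2 * k + 1)
        power *= t * t
    return total

theta = 0.5
print(madhava_arctan(math.sin(theta), math.cos(theta)))  # recovers 0.5
```

Feeding in the sine and cosine of an arc returns the arc itself, which is exactly what the quoted rule promises.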

The text also contains Madhava's infinite series expansion of π which he obtained from the expansion of the arc-tangent function.

Using a rational approximation of this series, he gave values of the number π as 3.14159265359, correct to 11 decimals, and as 3.1415926535898, correct to 13 decimals.

The text describes two methods for computing the value of π. First, obtain a rapidly converging series by transforming the original infinite series of π; the first 21 terms of this transformed series already suffice for the 11-decimal value quoted above.

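The rapidly converging series referred to here is usually given as π = √12 · (1 − 1/(3·3) + 1/(5·3²) − …). A sketch checking that its first 21 terms reach roughly 11 decimal places (assuming this standard form of Madhava's transformed series):

```python
from decimal import Decimal, getcontext

# Madhava's accelerated series, obtained from arctan(1/sqrt(3)) = pi/6:
#   pi = sqrt(12) * (1 - 1/(3*3) + 1/(5*3^2) - 1/(7*3^3) + ...)
getcontext().prec = 30
sqrt12 = Decimal(12).sqrt()

total = Decimal(0)
for n in range(21):                      # the "first 21 terms"
    term = sqrt12 / ((2 * n + 1) * Decimal(3) ** n)
    total += term if n % 2 == 0 else -term

print(total)
```

Because the factor 3^𝑛 shrinks the terms geometrically, 21 terms leave an error below 10^−11, in line with the accuracy the treatise reports.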

https://en.wikipedia.org/wiki/Yuktibh%C4%81%E1%B9%A3%C4%81

Madhava (born 1350, died 1425) was a mathematician from South India. He made some important advances in infinite series, including finding the expansions for trigonometric functions.

All the mathematical writings of Madhava have been lost, although some of his texts on astronomy have survived. However his brilliant work in mathematics has been largely discovered by the reports of other Keralese mathematicians such as Nilakantha who lived about 100 years later.

Madhava discovered the series equivalent to the Maclaurin expansions of sin 𝑥, cos 𝑥, and arctan 𝑥 around 1400, which is over two hundred years before they were rediscovered in Europe. Details appear in a number of works written by his followers such as Mahajyanayana prakara, which means Method of computing the great sines. In fact this work had been claimed by some historians such as Sarma to be by Madhava himself, but this seems highly unlikely and it is now accepted by most historians to be a 16th-century work by a follower of Madhava.

https://mathshistory.st-andrews.ac.uk/Biographies/Madhava/

4
 
 

The number π is a mathematical constant, approximately equal to 3.14159, that is the ratio of a circle's circumference to its diameter. It appears in many formulae across mathematics and physics, and some of these formulae are commonly used for defining π, to avoid relying on the definition of the length of a curve.

The number π is an irrational number, meaning that it cannot be expressed exactly as a ratio of two integers, although fractions such as 22/7 are commonly used to approximate it. Consequently, its decimal representation never ends, nor enters a permanently repeating pattern. It is a transcendental number, meaning that it cannot be a solution of an algebraic equation involving only finite sums, products, powers, and integers. The transcendence of π implies that it is impossible to solve the ancient challenge of squaring the circle with a compass and straightedge. The decimal digits of π appear to be randomly distributed, but no proof of this conjecture has been found.

For thousands of years, mathematicians have attempted to extend their understanding of π, sometimes by computing its value to a high degree of accuracy. Ancient civilizations, including the Egyptians and Babylonians, required fairly accurate approximations of π for practical computations. Around 250 BC, the Greek mathematician Archimedes created an algorithm to approximate π with arbitrary accuracy. In the 5th century AD, Chinese mathematicians approximated π to seven digits, while Indian mathematicians made a five-digit approximation, both using geometrical techniques. The first computational formula for π, based on infinite series, was discovered a millennium later. The earliest known use of the Greek letter π to represent the ratio of a circle's circumference to its diameter was by the Welsh mathematician William Jones in 1706. The invention of calculus soon led to the calculation of hundreds of digits of π, enough for all practical scientific computations. Nevertheless, in the 20th and 21st centuries, mathematicians and computer scientists have pursued new approaches that, when combined with increasing computational power, extended the decimal representation of π to many trillions of digits. These computations are motivated by the development of efficient algorithms to calculate numeric series, as well as the human quest to break records. The extensive computations involved have also been used to test the correctness of new computer processors.

Because it relates to a circle, π is found in many formulae in trigonometry and geometry, especially those concerning circles, ellipses and spheres. It is also found in formulae from other topics in science, such as cosmology, fractals, thermodynamics, mechanics, and electromagnetism. It also appears in areas having little to do with geometry, such as number theory and statistics, and in modern mathematical analysis can be defined without any reference to geometry. The ubiquity of π makes it one of the most widely known mathematical constants inside and outside of science. Several books devoted to π have been published, and record-setting calculations of the digits of π often result in news headlines.

Definition

The circumference of a circle is slightly more than three times as long as its diameter. The exact ratio is called π.

π is commonly defined as the ratio of a circle's circumference C to its diameter d:

π = C/d

The ratio C/d is constant, regardless of the circle's size. For example, if a circle has twice the diameter of another circle, it will also have twice the circumference, preserving the ratio C/d.

In modern mathematics, this definition is not fully satisfactory for several reasons. Firstly, it lacks a rigorous definition of the length of a curved line. Such a definition requires at least the concept of a limit or, more generally, the concepts of derivatives and integrals. Also, diameters, circles and circumferences can be defined in non-Euclidean geometries, but in such a geometry the ratio C/d need not be constant, and need not equal π. Finally, π occurs in many branches of mathematics that are completely independent of geometry, and in modern mathematics the trend is to build geometry from algebra and analysis rather than independently from the other branches of mathematics.

https://en.wikipedia.org/wiki/Pi

Archimedes’ Method of Approximating Pi

Since the true value of pi could not be measured directly, Archimedes developed a geometric technique using polygons to establish upper and lower bounds for its value. His method relied on inscribing and circumscribing regular polygons around a circle and calculating their perimeters. By progressively increasing the number of sides, he was able to narrow the range within which pi must lie. This approach was a precursor to the concept of limits, which later became a fundamental idea in calculus.

The Inscribed and Circumscribed Polygon Method

In his work Measurement of a Circle, Archimedes considered a circle with diameter d and radius r. He inscribed a regular hexagon inside the circle and circumscribed another hexagon outside it. By calculating the perimeters of these polygons, he obtained lower and upper estimates for the circumference of the circle. Since the ratio of the circumference to the diameter is pi (C / d = pi), these perimeters provided bounds for pi.

He then systematically increased the number of sides of the polygons, doubling them from 6-sided to 12-sided, 24-sided, 48-sided, and finally 96-sided polygons. As the number of sides increased, the perimeters of the inscribed and circumscribed polygons became closer to the true circumference of the circle, refining the estimate of pi.

Using this method, Archimedes established the following inequality:

223/71 < pi < 22/7

This meant that 3.1408 < pi < 3.1429, a remarkably accurate estimate for the time.

Mathematical Process Behind Archimedes’ Approximation

To derive these values, Archimedes used the Pythagorean theorem and properties of similar triangles to calculate the side lengths of the polygons. By repeatedly applying trigonometric relationships (though without the formal notation used today), he determined the perimeters of each successive polygon. His method can be broken down as follows:

  1. For an inscribed n-sided polygon:
  • The perimeter P~i~ provides a lower bound for the circle’s circumference.

  • Formula: P~i~ = n * s~i~, where s~i~ is the side length.

  2. For a circumscribed n-sided polygon:
  • The perimeter P~c~ gives an upper bound for the circumference.

  • Formula: P~c~ = n * s~c~, where s~c~ is the side length.

  3. Refining the estimate:
  • Archimedes doubled the number of sides, recalculating the new perimeters iteratively.

  • The values of P~i~ and P~c~ converged toward the true circumference of the circle, P~i~ < C < P~c~.

By the time he reached a 96-sided polygon, his estimates were precise to two decimal places. This level of accuracy was unprecedented and remained the best approximation of pi for nearly 1,000 years.
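The doubling procedure can be restated with the classical perimeter recurrences (a modern reformulation, not Archimedes' own arithmetic): if a and b are the circumscribed and inscribed perimeters divided by the diameter, doubling the side count replaces a by the harmonic mean of a and b, and b by the geometric mean of the new a and the old b:

```python
import math

# Modern restatement of the hexagon-to-96-gon doubling:
#   a_{2n} = 2 * a_n * b_n / (a_n + b_n)   (harmonic mean)
#   b_{2n} = sqrt(a_{2n} * b_n)            (geometric mean)
a, b = 2 * math.sqrt(3), 3.0   # starting hexagon: a_6 = 2*sqrt(3), b_6 = 3
sides = 6
while sides < 96:
    a = 2 * a * b / (a + b)
    b = math.sqrt(a * b)
    sides *= 2

print(f"{sides}-gon: {b:.5f} < pi < {a:.5f}")
```

Four doublings (6 → 12 → 24 → 48 → 96 sides) squeeze pi between roughly 3.14103 and 3.14271, matching the two-decimal precision attributed to Archimedes' 96-gon.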

Circle circumscribed and inscribed by a square where n=4.

The Limitations of Archimedes’ Approach

Archimedes' method had several inherent limitations. First, the computational intensity of his approach increased significantly as the number of sides in his polygons grew. Without the tools of modern algebra or trigonometry, he had to rely solely on geometric reasoning, making the process increasingly complex. Additionally, his method could only provide an approximation of pi rather than an exact value. Since pi is an irrational number that cannot be expressed as a finite fraction, Archimedes' approach was necessarily limited in its precision. Another challenge was the laborious nature of manual computation. Each successive step required extensive geometric derivations, making further refinements impractical beyond a certain point. Despite these limitations, Archimedes' work demonstrated a systematic method for refining numerical approximations and laid the foundation for future mathematical advancements.

Implications of Archimedes’ Work on Pi

Archimedes' method of approximating pi was groundbreaking, not only for its accuracy but also for its influence on the development of mathematical techniques. His approach established a systematic way of refining numerical approximations, which later became essential in calculus and numerical analysis. His work remained the most accurate estimate of pi for over a millennium and laid the foundation for future mathematicians to further refine the calculation of pi.

Archimedes’ method set the stage for many mathematicians across different cultures to refine and improve the approximation of pi. In the 3rd century CE, the Chinese mathematician Liu Hui built upon Archimedes' technique and extended it to a 3072-sided polygon, achieving a more precise approximation of pi at 3.14159. Two centuries later, Zu Chongzhi improved on this result, determining that pi was approximately 355/113 (3.1415929), an extraordinarily precise fraction that remained the most accurate estimate for over a thousand years.

In the Islamic Golden Age, mathematicians such as Al-Khwarizmi and Al-Kashi expanded on these ideas using decimal notation and further refinements of the polygonal method. The Renaissance period saw renewed interest in Archimedes' approach, with European scholars like Ludolph van Ceulen extending the method to polygons with millions of sides. This allowed for calculations of pi accurate to more than 30 decimal places. Despite these advancements, Archimedes’ geometric method remained the dominant approach for approximating pi until the development of calculus in the 17th century.

https://discover.hubpages.com/education/how-archimedes-calculated-pi-the-revolutionary-polygon-method-explained

Ludolph van Ceulen (8 January 1540 – 31 December 1610) was a German-Dutch mathematician from Hildesheim, known for the Ludolphine number: his calculation of the mathematical constant pi to 35 decimal places.

Ludolph van Ceulen spent a major part of his life calculating the numerical value of the mathematical constant π, using essentially the same methods as those employed by Archimedes some seventeen hundred years earlier. He published a 20-decimal value in his 1596 book Van den Circkel ("On the Circle"), which was published before he moved to Leiden, and he later expanded this to 35 decimals.

Van Ceulen's 20 digits is more than enough precision for any conceivable practical purpose. Even if a circle was perfect down to the atomic scale, the thermal vibrations of the molecules of ink would make most of those digits physically meaningless. Future attempts to calculate π to ever greater precision have been driven primarily by curiosity about the number itself.

https://en.wikipedia.org/wiki/Ludolph_van_Ceulen

The above image is the title page of Vanden Circkel, a book about the circle and π by Ludolph Van Ceulen (1540–1610). Published in 1596 in Dutch, it contains the longest decimal approximation of π at the time—20 decimal places. In fact, below the portrait of Van Ceulen, the engraving on the title page has a circle with diameter of 10^20^. Across the top semicircle is “314159265358979323846 te cort” (too short), and “314159265358979323847 te lanck” (too long) is along the bottom semicircle. Later, Van Ceulen would determine π to 35 decimal places. A modified Latin version of the work was published in 1619, images of which can also be found on Convergence here and here.

Part of what little is known of Van Ceulen’s life before 1578 comes from the Preface of Vanden Circkel. Starting in 1566, he earned a living as a mathematics teacher, and in 1580 he opened his first fencing school. A few years later Archimedes’ method of approximating π was translated from the Greek for him, and Van Ceulen proceeded to use the technique to improve on approximations of π, publishing Vanden Circkel in 1596. Below are images from folio 1 and folio 7.

Chapter 21 is devoted to analyzing a work of Joseph Justus Scaliger (1540–1609) called Cyclometrica Elementa (Elements of Circle Measurement), which had several incorrect results, including a “proof” that the area of a circle is equal to 6/5 of the area of an inscribed regular hexagon, which results in π = (9/5)√3, or approximately 3.117691454. Van Ceulen doesn’t mention Scaliger by name, but rather calls him a “highly learned man”. Below is Folio 63a.

https://old.maa.org/press/periodicals/convergence/mathematical-treasure-van-ceulen-s-vanden-circkel

Van Ceulen is famed for his calculation of π to 35 places which he did using polygons with 2^62^ sides. Having published 20 places of π in his book of 1596, the more accurate results were only published after his death. In 1615 his widow Adriana Simondochter published a posthumous work by Van Ceulen entitled De arithmetische en geometrische fondamenten. This contained his computation of 33 decimal places for π. The complete 35 decimal place approximation was only published in 1621 in Snell's Cyclometricus. Having spent most of his life computing this approximation, it is fitting that the 35 places of π were engraved on Van Ceulen's tombstone. In fact Van Ceulen had purchased a grave in the Pieterskerk on 11 November 1602 but, after Van Ceulen's death on 31 December 1610, his widow Adriana exchanged this grave for another, still in the Pieterskerk, and it was in this second grave that Van Ceulen was buried on 2 January 1611. The tombstone gave both Van Ceulen's lower bound of 3.14159265358979323846264338327950288 and his upper bound of 3.14159265358979323846264338327950289. However, the original tombstone disappeared around 1800 to be replaced by a replica two hundred years later. The original text on the tombstone was known since it had been recorded in a guidebook of 1712 and after that reprinted in many articles. Vajta writes :

On July 5, 2000 a very special ceremony took place in the St Pieterskerk (St Peter's Church) at Leiden, the Netherlands. A replica of the original tombstone of Ludolph Van Ceulen was placed into the Church since the original disappeared. ... It was therefore a tribute to the memory of Ludolph Van Ceulen, when on Wednesday 5 July, 2000 prince Willem-Alexander (heir to the throne), unveiled the memorial tombstone in the St Peter's Church, in Leiden.

In Germany π was called the "Ludolphine number" for a long time.

https://mathshistory.st-andrews.ac.uk/Biographies/Van_Ceulen/

5
 
 

The Game of Life is a cellular automaton devised by the British mathematician John Horton Conway in 1970. It was popularised by Martin Gardner in his October 1970 "Mathematical Games" column in Scientific American [6]. The article garnered more response than any of Gardner's previous columns in the magazine, including his famous article on hexaflexagons.

A notable property of the special rule set used by Conway's "Game of Life" is its Turing completeness. Turing completeness means that a programming language, a simulation, or a logical system is in principle capable of solving any computational problem. Programming in the Game of Life is done with patterns, which then interact with each other in the simulation. LifeWiki has a large archive of such patterns for the Game of Life, a selection of which is implemented in the applet shown below.

What is a Cellular Automaton?

A cellular automaton is a discrete model consisting of a regular grid of cells, each of which is in one of a finite number of states. The initial state of the cellular automaton is selected by assigning a state to each cell. The simulation then progresses in discrete time steps. The state of a cell at time step t depends only on the states of nearby cells at time step t-1 and a set of rules specific to the automaton.

Rules of the Game of Life

In the Game of Life each grid cell is in one of two states: dead or alive. The Game of Life is controlled by four simple rules which are applied to each grid cell in the simulation domain:

  • A live cell dies if it has fewer than two live neighbors.

  • A live cell with two or three live neighbors lives on to the next generation.

  • A live cell with more than three live neighbors dies.

  • A dead cell is brought back to life if it has exactly three live neighbors.
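The four rules above collapse into a single condition: a cell is alive in the next generation exactly when it has three live neighbors, or is currently alive and has two. A minimal sketch in Python, representing the grid as a set of live (x, y) coordinates on a toroidal domain (the function and variable names are our own):

```python
from collections import Counter

def step(alive, width, height):
    """Advance one Game of Life generation on a width x height
    toroidal grid; `alive` is a set of (x, y) live-cell coordinates."""
    # Count live neighbors for every cell, wrapping at the edges.
    counts = Counter(
        ((x + dx) % width, (y + dy) % height)
        for (x, y) in alive
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in alive)}

# A "blinker" (three cells in a row) oscillates with period 2:
blinker = {(1, 0), (1, 1), (1, 2)}
assert step(step(blinker, 5, 5), 5, 5) == blinker
```

Only cells adjacent to a live cell can change state, so iterating over neighbors of live cells (rather than the whole grid) is enough to compute the next generation.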

Merrill Sherman/Quanta Magazine

Boundary Conditions

Cellular automata often use a toroidal topology of the simulation domain. This means that opposing edges of the grid are connected. The rightmost column is the neighbor of the leftmost column and the topmost row is the neighbor of the bottommost row and vice versa. This allows the unrestricted transfer of state information across the boundaries.

Opposing edges of the grid are connected to form a toroidal topology of the simulation domain

Cells beyond the grid boundary are always treated as if they were dead.

Another type of boundary condition treats nonexistent cells as if they all had the same state. In the Game of Life this means that nonexistent cells are treated as if they were dead (as opposed to the second state, "alive"). The advantage of this boundary condition in the Game of Life is that it prevents gliders from wrapping around the edges of the simulation domain, which in turn prevents the destruction of a glider gun by the gliders it produces (see the text below for details about what gliders are).

https://beltoforion.de/en/game_of_life/

John Horton Conway FRS (26 December 1937 – 11 April 2020) was an English mathematician. He was active in the theory of finite groups, knot theory, number theory, combinatorial game theory and coding theory. He also made contributions to many branches of recreational mathematics, most notably the invention of the cellular automaton called the Game of Life.

https://en.wikipedia.org/wiki/John_Horton_Conway

Origins

Conway was interested in a problem presented in the 1940s by renowned mathematician John von Neumann, who tried to find a hypothetical machine that could build copies of itself and succeeded when he found a mathematical model for such a machine with very complicated rules on a rectangular grid. The Game of Life emerged as Conway's successful attempt to simplify von Neumann's ideas.

The game made its first public appearance in the October 1970 issue of Scientific American, in Martin Gardner's "Mathematical Games" column, under the title of The fantastic combinations of John Conway's new solitaire game "life". From a theoretical point of view, it is interesting because it has the power of a universal Turing machine: that is, anything that can be computed algorithmically can be computed within Conway's Game of Life. Gardner wrote:

" The game made Conway instantly famous, but it also opened up a whole new field of mathematical research, the field of cellular automata ... Because of Life's analogies with the rise, fall and alterations of a society of living organisms, it belongs to a growing class of what are called 'simulation games' (games that resemble real life processes) "

https://conwaylife.com/wiki/Conway's%20Game%20of%20Life

The Game of Life (an example of a cellular automaton) is played on an infinite two-dimensional rectangular grid of cells. Each cell can be either alive or dead. The status of each cell changes each turn of the game (also called a generation) depending on the statuses of that cell's 8 neighbors. Neighbors of a cell are cells that touch that cell, either horizontal, vertical, or diagonal from that cell.

The initial pattern is the first generation. The second generation evolves from applying the rules simultaneously to every cell on the game board, i.e. births and deaths happen simultaneously. Afterwards, the rules are iteratively applied to create future generations. For each generation of the game, a cell's status in the next generation is determined by a set of rules. These simple rules are as follows:

  • If the cell is alive, then it stays alive if it has either 2 or 3 live neighbors

  • If the cell is dead, then it springs to life only in the case that it has 3 live neighbors

There are, of course, as many variations to these rules as there are different combinations of numbers to use for determining when cells live or die. Conway tried many of these different variants before settling on these specific rules. Some of these variations cause the populations to quickly die out, and others expand without limit to fill up the entire universe, or some large portion thereof. The rules above are very close to the boundary between these two regions of rules, and knowing what we know about other chaotic systems, you might expect to find the most complex and interesting patterns at this boundary, where the opposing forces of runaway expansion and death carefully balance each other. Conway carefully examined various rule combinations according to the following three criteria:

  • There should be no initial pattern for which there is a simple proof that the population can grow without limit.

  • There should be initial patterns that apparently do grow without limit.

  • There should be simple initial patterns that grow and change for a considerable period of time before coming to an end in the following possible ways:

    1. Fading away completely (from overcrowding or from becoming too sparse)

    2. Settling into a stable configuration that remains unchanged thereafter, or entering an oscillating phase in which they repeat an endless cycle of two or more periods.
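The rule variants Conway examined differ only in which live-neighbor counts cause birth and survival (his final choice is written B3/S23 in modern rulestring notation, a convention not mentioned in the text above). A minimal, illustrative sketch in Python (the function names are our own):

```python
def make_rule(birth, survive):
    """Build a cell-update predicate for a Life-like rule.

    `birth` and `survive` are sets of live-neighbor counts at which
    a dead cell is born or a live cell survives, respectively."""
    def next_alive(alive, neighbors):
        return neighbors in (survive if alive else birth)
    return next_alive

# Conway's rule, B3/S23:
conway = make_rule(birth={3}, survive={2, 3})
assert conway(False, 3)       # dead cell with 3 neighbors is born
assert conway(True, 2)        # live cell with 2 neighbors survives
assert not conway(True, 4)    # overcrowding kills
assert not conway(False, 2)   # dead cell with 2 neighbors stays dead
```

Swapping in other birth/survival sets gives the variants described above, most of which either die out quickly or expand without limit.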

Example Patterns

Using the provided game board(s) and rules as outlined above, the students can investigate the evolution of the simplest patterns. They should verify that any single living cell or any pair of living cells will die during the next iteration.

Some possible triomino patterns (and their evolution) to check:

Here are some tetromino patterns (NOTE: The students can do maybe one or two of these on the game board and the rest on the computer):

Some example still lifes:

Square

Boat

Loaf

Ship

The following pattern is called a "glider." The students should follow its evolution on the game board to see that the pattern repeats every 4 generations, but translated up and to the left one square. A glider will keep on moving forever across the plane.

Another pattern similar to the glider is called the "lightweight space ship." It too slowly and steadily moves across the grid.

Early on (without the use of computers), Conway found that the F-pentomino (or R-pentomino) did not evolve into a stable pattern after a few iterations. In fact, it doesn't stabilize until generation 1103.

The F-pentomino stabilizes (meaning future iterations are easy to predict) after 1,103 iterations. The class of patterns which start off small but take a very long time to become periodic and predictable are called Methuselahs. The students should use the computer programs to view the evolution of this pattern and see how/where it becomes stable. The "acorn" is another example of a Methuselah that becomes predictable only after 5206 generations.

Alan Hensel compiled a fairly large list of other common patterns and names for them, available at radicaleye.com/lifepage/picgloss/picgloss.html.

Activity - Two-Player Game of Life

To call Conway's Game of Life a game is to stretch the meaning of the word "game", but there is a fun adaptation that produces a competitive and strategic activity for multiple players.

The modification made is that the live cells now come in two colors (one associated with each player). When a new cell comes to life, it takes on the color of the majority of its neighbors. (Since there must be exactly three neighbors for a cell to come to life, there cannot be a tie; there must be a majority.)

Players alternate turns. On a player's turn, he or she must kill one enemy cell and must change one empty cell to a cell of their own color. They are allowed to create a new cell at the location in which they killed an enemy cell.

After a player's turn, the Life cells go through one generation, and the play moves to the next player. There is always exactly one generation of evolution between separate players' actions.

The initial board configuration should be decided beforehand and be symmetric. A player is eliminated when they have no cells remaining of their color.

This variant of life can well be adapted to multiple players. However, with more than two players, it is possible that a newborn cell will have three neighbors belonging to three separate players. In that case, the newborn cell is neutral, and does not belong to anyone.

https://pi.math.cornell.edu/~lipa/mec/lesson6.html

Many patterns in the Game of Life eventually become a combination of still lifes, oscillators, and spaceships; other patterns may be called chaotic. A pattern may stay chaotic for a very long time until it eventually settles to such a combination.

The Game of Life is undecidable, which means that given an initial pattern and a later pattern, no algorithm exists that can tell whether the later pattern is ever going to appear. Given that the Game of Life is Turing-complete, this is a corollary of the halting problem: the problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running or continue to run forever.

In computability theory and computational complexity theory, an undecidable problem is a decision problem for which it is provably impossible to construct an algorithm that always leads to a correct yes-or-no answer. The halting problem is the canonical example: no general algorithm exists that solves it for all possible program–input pairs, which demonstrates that some functions are mathematically definable but not computable.

Conway's Game of Life: Mathematics and Construction by Nathaniel Johnson and Dave Greene provides a linear exposition of the questions, results, and techniques behind the game. It functions as a companion to the website where one can download the ebook.

The material requires no formal background and is appropriate for its target audience of early undergraduate students. A few topics such as counting, number theory, and algorithm analysis appear, but generally at an elementary level and the key concepts are briefly reviewed in the appendices. There are proofs, but they are generally careful deductions using few mathematical tools. It is a perfect topic to hand to a curious undergraduate mathematics or computer science student and let them go.

There are a lot of objects that need names and the vocabulary can be a bit overwhelming. Many of the names are descriptive – gliders, volcanoes, sparks – but there are also Snarks, Sir Robins, and David Hilberts to navigate. This is the reality of the subject and not a complaint about the book. There is no glossary, but the index is good and there are additional resources online.

One needs to be able to zoom in and out in real-time as the configurations evolve to see how small-scale changes impact large-scale behavior. The authors do a fine job of using color diagrams with consistent coloring and iconography – indeed, the book is visually impressive independent of the content – but there is no substitute for going to the website and noodling with it there. The ebook can be downloaded at the website (for free, with optional donation), allowing for the two to be used in parallel easily. There are also a few print-on-demand options.

The book is organized into three parts, each with four chapters. The first part, “Classical Topics,” covers the fundamental structures and their properties. “Circuitry and Logic” examines techniques for putting these structures together into circuits that exhibit more elaborate and precise behaviors. Finally, “Constructions” develops how these circuits can establish some more general properties of the Game of Life itself: universal computation, wherein we can simulate a universal computer, and universal construction, which establishes a sense in which the Game of Life can create and position its own components. Chapters close with notes and plenty of exercises. There are appendices with some mathematical preliminaries, technical details, and selected exercise solutions. Further material is available on the website, which has plenty of tools for simulating the game, finding specific results, and identifying new investigations that a newcomer could engage in almost immediately.

https://maa.org/book-reviews/conways-game-of-life-mathematics-and-construction/

6
submitted 1 week ago* (last edited 1 week ago) by xiao@sh.itjust.works to c/Math_history@sh.itjust.works
 
 

Perhaps one of the smartest and most compelling shorts around, ALTERNATIVE MATH, a nine-minute American piece directed by David Maddox, is a deeply layered and remarkably sophisticated piece of intelligent comedy.

Our heroine is a veteran grade school teacher trying to explain to her student that 2+2=4. The child however, believes the answer is 22. So do his parents. How dare this teacher censor their child and restrict his learning. What kind of professional does this? The child’s parents are out for blood and soon our heroine is trapped in a vicious media onslaught and a school board demanding her resignation.

What makes this film so special is that it functions on so many layers. It works comically due to its wonderfully executed reductio ad absurdum, but just a little bit deeper we find an allegory for our modern world carrying a concerning warning. What happens when beliefs are taken to such a degree that basic knowledge is questioned? What happens to a population when the right to free speech becomes more important than the recognition of fact? There is a frightening undertone in ALTERNATIVE MATH that speaks to a greater and more terrible world lurking in a reality not too far away from our own.

Of course, this allegory comes gift-wrapped clearly and politely in the bow of comedy, so an audience can unwrap it with glee, not fear. Perhaps this is one of the best reasons to see ALTERNATIVE MATH, a film with heart, humanity and humor, as well as deeper philosophical undertones. A family film to be enjoyed by teacher and student alike.

https://festivalreviews.org/2018/01/29/film-review-alternative-math-usa-comedy/

7
 
 

Maurits Cornelis Escher (17 June 1898 – 27 March 1972) was a Dutch graphic artist who made woodcuts, lithographs, and mezzotints, many of which were inspired by mathematics. Despite wide popular interest, for most of his life Escher was neglected in the art world, even in his native Netherlands. He was 70 before a retrospective exhibition was held. In the late twentieth century, he became more widely appreciated, and in the twenty-first century he has been celebrated in exhibitions around the world.

His work features mathematical objects and operations including impossible objects, explorations of infinity, reflection, symmetry, perspective, truncated and stellated polyhedra, hyperbolic geometry, and tessellations. Although Escher believed he had no mathematical ability, he interacted with the mathematicians George Pólya, Roger Penrose, and Donald Coxeter, and the crystallographer Friedrich Haag, and conducted his own research into tessellation.

https://en.m.wikipedia.org/wiki/M._C._Escher

Reptiles depicts a desk upon which is a two-dimensional drawing of a tessellated pattern of reptiles and hexagons, Escher's 1939 Regular Division of the Plane. The reptiles at one edge of the drawing emerge into three-dimensional reality, come to life and appear to crawl over a series of symbolic objects (a book on nature, a geometer's triangle, a dodecahedron, a pewter bowl containing a box of matches and a box of cigarettes) to eventually re-enter the drawing at its opposite edge. Other objects on the desk are a potted cactus and yucca, a ceramic flask with a cork stopper next to a small glass of liquid, a book of JOB cigarette rolling papers, and an open handwritten note book of many pages. Although only the size of small lizards, the reptiles have protruding crocodile-like fangs, and the one atop the dodecahedron has a dragon-like puff of smoke billowing from its nostrils.

Once a woman telephoned Escher and told him that she thought the image was a "striking illustration of reincarnation".

The critic Steven Poole commented that one of Escher's "enduring fascinations" was "the contrast between the two-dimensional flatness of a sheet of paper and the illusion of three-dimensional volume that can be created with certain marks" when space and flatness exist side by side and are "each born from and returning to the other, the black magic of the artistic illusion made creepily manifest."

https://en.m.wikipedia.org/wiki/Reptiles_(M._C._Escher)

On 19 August 1960 he gave a lecture in Cambridge, during which he said of this print:

'On the page of an opened sketchbook a mosaic of reptiles can be seen, drawn in three colours. Now let them prove themselves to be living creatures. One of them extends his paw out over the edge of the sketchbook, frees himself fully and starts on his path of life. First he climbs onto a book, walks further up across a smooth triangle and finally reaches the summit on the horizontal plane of a dodecahedron. He has a breather, tired but satisfied, and he moves down again. Back to the surface, the ‘flat lands’, in which he resumes his position as a symmetrical figure. I was later told that this story perfectly sums up the theory of reincarnation.'

The reference to reincarnation must have brought a smile to his face, as he always laughed about other people’s interpretations. He also listened in amusement when people stated that the word ‘Job’ on the packet in the bottom left was a reference to the Book of Job in the Bible. Nothing was further from the truth. Escher had lived in Belgium for several years and Job was a popular brand of cigarette paper there.

Because he could not print a lithograph himself, he stayed at his printer Dieperink in Amsterdam for a few days. To his friend Bas Kist he wrote that he had to do ‘a lot of tinkering’ on the stone ‘before a definitive set of copies’ could be produced.

Escher himself called what the reptiles are freeing themselves from ‘a sketchbook’, but it is of course one of his own design sketchbooks. In 1939 he created Regular division drawing nr 25, featuring these reptiles. What is remarkable and interesting about this periodic drawing is the presence of three different rotation points, where three heads meet and three ‘knees’ meet. If you copy the figure onto transparent paper and put a pin through both pieces of paper at one of these rotation points, you can turn the transparent sheet 120 degrees and the figures will cover the ones below completely.

https://escherinhetpaleis.nl/en/about-escher/escher-today/reptiles-in-wartime?lang=en

The Mathematical Side of M. C. Escher

While the mathematical side of Dutch graphic artist M. C. Escher (1898–1972) is often acknowledged, few of his admirers are aware of the mathematical depth of his work. Probably not since the Renaissance has an artist engaged in mathematics to the extent that Escher did, with the sole purpose of understanding mathematical ideas in order to employ them in his art. Escher consulted mathematical publications and interacted with mathematicians. He used mathematics (especially geometry) in creating many of his drawings and prints. Several of his prints celebrate mathematical forms. Many prints provide visual metaphors for abstract mathematical concepts; in particular, Escher was obsessed with the depiction of infinity. His work has sparked investigations by scientists and mathematicians. But most surprising of all, for several years Escher carried out his own mathematical research, some of which anticipated later discoveries by mathematicians. And yet with all this, Escher steadfastly denied any ability to understand or do mathematics. His son George explains:

Father had difficulty comprehending that the working of his mind was akin to that of a mathematician. He greatly enjoyed the interest in his work by mathematicians and scientists, who readily understood him as he spoke, in his pictures, a common language. Unfortunately, the specialized language of mathematics hid from him the fact that mathematicians were struggling with the same concepts as he was. Scientists, mathematicians and M. C. Escher approach some of their work in similar fashion. They select by intuition and experience a likely-looking set of rules which defines permissible events inside an abstract world. Then they proceed to explore in detail the consequences of applying these rules. If well chosen, the rules lead to exciting discoveries, theoretical developments and much rewarding work. [18, p.4]

In Escher’s mind, mathematics was what he encountered in schoolwork—symbols, formulas, and textbook problems to solve using prescribed techniques. It didn’t occur to him that formulating his own questions and trying to answer them in his own way was doing mathematics.

https://www.ams.org/journals/notices/201006/rtx100600706p.pdf

by Matthew Everett and Jeffrey Mancuso

Rendering competition in Pat Hanrahan's CS 348b class: Image Synthesis Techniques in the Spring quarter of 2001.

https://graphics.stanford.edu/courses/cs348b-competition/cs348b-01/escher/

8
Beads, Not Bytes (www.mathematik.uni-marburg.de)
 
 

An abacus (pl. abaci or abacuses), also called a counting frame, is a hand-operated calculating tool used from ancient times in the ancient Near East, Europe, China, and Russia, until it was largely replaced by handheld electronic calculators during the 1980s, though some attempts to revive its use continue. An abacus consists of a two-dimensional array of slidable beads (or similar objects). In the earliest designs, the beads could be loose on a flat surface or slide in grooves. Later the beads were made to slide on rods and built into a frame, allowing faster manipulation.

Bi-quinary coded decimal-like abacus representing 1,352,964,708

Any particular abacus design supports multiple methods to perform calculations, including addition, subtraction, multiplication, division, and square and cube roots. The beads are first arranged to represent a number, then are manipulated to perform a mathematical operation with another number, and their final position can be read as the result (or can be used as the starting number for subsequent operations).

In the ancient world the abacus was a practical calculating tool. It remained in wide use in Europe as late as the 17th century, but fell out of use with the rise of decimal notation and algorismic methods. Although calculators and computers are commonly used today instead of abacuses, abacuses remain in everyday use in some countries. The abacus has the advantage of requiring neither a writing implement and paper (needed for algorism) nor an electric power source. Merchants, traders, and clerks in some parts of Eastern Europe, Russia, China, and Africa use abacuses. The abacus remains in common use as a scoring system in non-electronic table games. Others may use an abacus due to visual impairment that prevents the use of a calculator. The abacus is still used to teach the fundamentals of mathematics to children in many countries such as Japan and China.

History

Mesopotamia

The Sumerian abacus appeared between 2700 and 2300 BC. It held a table of successive columns which delimited the successive orders of magnitude of their sexagesimal (base 60) number system.

Some scholars point to a character in Babylonian cuneiform that may have been derived from a representation of the abacus. Historians of Old Babylonian mathematics, such as Ettore Carruccio, believe that the Old Babylonians "seem to have used the abacus for the operations of addition and subtraction; however, this primitive device proved difficult to use for more complex calculations".

Egypt

Greek historian Herodotus mentioned the abacus in Ancient Egypt. He wrote that the Egyptians manipulated the pebbles from right to left, opposite in direction to the Greek left-to-right method. Archaeologists have found ancient disks of various sizes that are thought to have been used as counters. However, there are no known illustrations of this device.

Persia

Persians first began to use the abacus around 600 BC, during the Achaemenid Empire. Under the Parthian, Sassanian, and Iranian empires, scholars concentrated on exchanging knowledge and inventions with the countries around them – India, China, and the Roman Empire – which is how the abacus may have been exported to other countries.

Greece

The earliest archaeological evidence for the use of the Greek abacus dates to the 5th century BC. Demosthenes (384–322 BC) complained that the need to use pebbles for calculations was too difficult. A play by Alexis from the 4th century BC mentions an abacus and pebbles for accounting, and both Diogenes and Polybius use the abacus as a metaphor for human behavior, likening men to the pebbles on an abacus, which "sometimes stood for more and sometimes for less". The Greek abacus was a table of wood or marble, pre-set with small counters in wood or metal, for mathematical calculations. This Greek abacus was used in Achaemenid Persia, the Etruscan civilization, Ancient Rome, and the Western Christian world until the French Revolution.

The Salamis Tablet, found on the Greek island Salamis in 1846 AD, dates to 300 BC, making it the oldest counting board discovered so far. [...].

Rome

The normal method of calculation in ancient Rome, as in Greece, was by moving counters on a smooth table. Originally pebbles (Latin: calculi) were used. Marked lines indicated units, fives, tens, etc. as in the Roman numeral system.

Writing in the 1st century BC, Horace refers to the wax abacus, a board covered with a thin layer of black wax on which columns and figures were inscribed using a stylus.

Medieval Europe

The Roman system of 'counter casting' was used widely in medieval Europe, and persisted in limited use into the nineteenth century. Wealthy abacists used decorative minted counters, called jetons.

Due to Pope Sylvester II's reintroduction of the abacus with modifications, it became widely used in Europe again during the 11th century. This abacus used beads on wires, unlike the traditional Roman counting boards, which meant it could be used much faster and was more easily moved.

China

The earliest known written documentation of the Chinese abacus dates to the 2nd century BC.

The prototype of the Chinese abacus appeared during the Han dynasty; its beads were oval. The Song dynasty and earlier used the 1:4 type, a four-bead abacus similar to the modern abacus (including the shape of the beads) commonly known as the Japanese-style abacus.

In the early Ming dynasty, the abacus began to appear in a 1:5 ratio. The upper deck had one bead and the bottom had five beads. In the late Ming dynasty, the abacus styles appeared in a 2:5 ratio. The upper deck had two beads, and the bottom had five.

Various calculation techniques were devised for the suanpan, enabling efficient calculations. Some schools still teach students how to use it.

The similarity of the Roman abacus to the Chinese one suggests that one could have inspired the other, given evidence of a trade relationship between the Roman Empire and China.

India

The Abhidharmakośabhāṣya of Vasubandhu (316–396), a Sanskrit work on Buddhist philosophy, says that the second-century CE philosopher Vasumitra said that "placing a wick (Sanskrit vartikā) on the number one (ekāṅka) means it is a one while placing the wick on the number hundred means it is called a hundred, and on the number one thousand means it is a thousand". It is unclear exactly what this arrangement may have been. Around the 5th century, Indian clerks were already finding new ways of recording the contents of the abacus. Hindu texts used the term śūnya (zero) to indicate the empty column on the abacus.

Japan

In Japan, the abacus is called soroban (lit. "counting tray"). It was imported from China in the 14th century. It was probably in use by the working class a century or more before the ruling class adopted it, as the class structure obstructed such changes. The 1:4 abacus, which removes the seldom-used second and fifth bead, became popular in the 1940s.

The four-bead abacus spread and became common around the world. Improvements to the Japanese abacus arose in various places. In China, an abacus with an aluminium frame and plastic beads has been used; a clearing mechanism sits next to the beads, and pressing the "clearing" button puts the upper bead in the upper position and the lower beads in the lower position.

The abacus is still manufactured in Japan, despite the proliferation, practicality, and affordability of pocket electronic calculators. The use of the soroban is still taught in Japanese primary schools as part of mathematics, primarily as an aid to faster mental calculation. Using visual imagery, one can complete a calculation as quickly as with a physical instrument.

Korea

The Chinese abacus migrated from China to Korea around 1400 AD. Koreans call it jupan (주판), supan (수판) or jusan (주산). The four-bead abacus (1:4) was introduced during the Goryeo Dynasty. The 5:1 abacus was introduced to Korea from China during the Ming Dynasty.

Native America

Representation of an Inca quipu

A yupana as used by the Incas

Some sources mention the use of an abacus called a nepohualtzintzin in ancient Aztec culture. This Mesoamerican abacus used a 5-digit base-20 system. The word nepōhualtzintzin comes from Nahuatl roots: ne (personal); pōhual or pōhualli (the account); and tzintzin (small similar elements). Its full meaning was taken as: counting with small similar elements. Its use was taught in the calmecac to the temalpouhqueh, students dedicated from childhood to keeping the accounts of the skies.

The device featured 13 rows with 7 beads, 91 in total. This was a basic number for this culture. It had a close relation to natural phenomena, the underworld, and the cycles of the heavens. One Nepōhualtzintzin (91) represented the number of days that a season of the year lasts, two Nepōhualtzitzin (182) is the number of days of the corn's cycle, from its sowing to its harvest, three Nepōhualtzintzin (273) is the number of days of a baby's gestation, and four Nepōhualtzintzin (364) completed a cycle and approximated one year. When translated into modern computer arithmetic, the Nepōhualtzintzin amounted to the rank from 10 to 18 in floating point, which precisely calculated large and small amounts, although round off was not allowed.

The rediscovery of the Nepōhualtzintzin was due to the Mexican engineer David Esparza Hidalgo, who in his travels throughout Mexico found diverse engravings and paintings of this instrument and reconstructed several of them in gold, jade, encrustations of shell, etc. Very old Nepōhualtzintzin are attributed to the Olmec culture, and some bracelets of Mayan origin, as well as a diversity of forms and materials in other cultures.

Sanchez wrote in Arithmetic in Maya that another base 5, base 4 abacus had been found in the Yucatán Peninsula that also computed calendar data. This was a finger abacus: on one hand, 0, 1, 2, 3, and 4 were used, and on the other hand, 0, 1, 2, and 3. Note the use of zero at the beginning and end of the two cycles.

The quipu of the Incas was a system of colored knotted cords used to record numerical data, like advanced tally sticks, but it was not used to perform calculations. Calculations were carried out using a yupana (Quechua for "counting tool"; see figure), which was still in use after the conquest of Peru. The working principle of the yupana is unknown, but in 2001 the Italian mathematician De Pasquale proposed an explanation. By comparing the form of several yupanas, researchers found that calculations were based on the Fibonacci sequence 1, 1, 2, 3, 5, with powers of 10, 20, and 40 as place values for the different fields of the instrument. Using the Fibonacci sequence would keep the number of grains within any one field at a minimum.

Russia

The Russian abacus, the schoty ("counting"), usually has a single slanted deck, with ten beads on each wire except for one wire with four beads, used for quarter-ruble fractions; this four-bead wire was introduced for quarter-kopeks, which were minted until 1916. The Russian abacus is used vertically, with each wire running horizontally. [...]

The Russian abacus was in use in shops and markets throughout the former Soviet Union, and its usage was taught in most schools until the 1990s. Even the invention of the mechanical Odhner arithmometer in 1874 did not displace it in Russia. According to Yakov Perelman, some businessmen attempting to import calculators into the Russian Empire were known to leave in despair after watching a skilled abacus operator. Likewise, the mass production of Felix arithmometers from 1924 onward did not significantly reduce abacus use in the Soviet Union. The Russian abacus began to lose popularity only after the mass production of domestic microcalculators began in 1974.

The Russian abacus was brought to France around 1820 by mathematician Jean-Victor Poncelet, who had served in Napoleon's army and had been a prisoner of war in Russia. To Poncelet's French contemporaries, it was something new. Poncelet used it, not for any applied purpose, but as a teaching and demonstration aid. The Turks and the Armenian people used abacuses similar to the Russian schoty. It was named a coulba by the Turks and a choreb by the Armenians.

Neurological analysis

Learning how to calculate with the abacus may improve capacity for mental calculation. Abacus-based mental calculation (AMC), which was derived from the abacus, is the act of performing calculations, including addition, subtraction, multiplication, and division, in the mind by manipulating an imagined abacus. It is a high-level cognitive skill that runs calculations with an effective algorithm. People doing long-term AMC training show higher numerical memory capacity and experience more effectively connected neural pathways. They are able to retrieve memory to deal with complex processes. AMC involves both visuospatial and visuomotor processing that generate the visual abacus and move the imaginary beads. Since it only requires that the final position of beads be remembered, it takes less memory and less computation time.

https://en.m.wikipedia.org/wiki/Abacus

The abacus, which represents numbers in a visuospatial format, is a traditional device for facilitating arithmetic operations. Skilled abacus users, who have acquired the ability of abacus-based mental calculation (AMC), can perform fast and accurate calculations by manipulating an imaginary abacus in the mind. Because of this extraordinary calculation ability in AMC users, there is an expanding literature investigating the effects of AMC training on cognition and brain systems. This review study aims to provide an updated overview of important findings in this fast-growing research field. Here, findings from previous behavioral and neuroimaging studies of AMC experts, as well as of children and adults receiving AMC training, are reviewed and discussed. Taken together, our review of the existing literature suggests that AMC training has the potential to enhance various cognitive skills, including mathematics, working memory, and numerical magnitude processing. In addition, the training can result in functional and anatomical neural changes that are largely located within the frontal-parietal and occipital-temporal brain regions. Some of the neural changes can explain the training-induced cognitive enhancements. Still, caution is needed when extending these conclusions to more general situations. Implications for future research are provided.

https://pmc.ncbi.nlm.nih.gov/articles/PMC7492585/

Presentation of methods for building a mathematical universe in children that gives meaning to addition and subtraction by rooting them in basic concepts of geometry and logic.

Introduction

Numbers are fascinating, and mathematics is often identified with calculation. Strategies for performing calculations have been refined over time. First came abacuses, devices with several rows of movable pieces used for arithmetic calculations. Then came tables of values, which slowly evolved into graph tables or nomograms, i.e., a network of lines or points giving a result by simple reading or by a basic manipulation process. This expertise expanded considerably until the mid-twentieth century in the fields of physics, finance, and architecture, and the epistemological study of the underlying processes gave this discipline the name nomography.

These empirical mechanical or graphical tools, based on clever mathematical processes, required hours of intensive practice during which constant verification of units and consistency of results was essential. They quickly fell into disuse in the 1980s with the rise of computers and the development of digitization and its methods of analysis. Teachers, freed from the responsibility of teaching calculation, were thus able to focus their attention on developing other approaches and concepts.

However, by a tragic principle of communicating vessels in human intelligence, the downstream expansion of the possibilities offered to science has had the upstream effect of undermining students' facility with calculation. The gradual disappearance of certain crafts, or their evolution, together with that of familiar objects of the mechanistic era that promote learning through manipulation and observation (the pendulum, the balance, etc.), may also have contributed to this decline. It is therefore essential to distinguish clearly between the value of digital methods in engineering and the value of mastering basic arithmetic, which is acquired in the early years.

This presentation, divided into three parts, aims to refocus children's attention on a few fundamental objects, whose mathematical interest and richness they will discover through long-term observation and manipulation. You will encounter abacuses and nomograms, unusual and aesthetic objects that arouse curiosity and make you want to handle or examine them. The educational benefits include reconciling calculation and geometric vision in order to develop children's mathematical intuition empirically from an early age. This article also proposes a vertical reflection on the elementary operations induced by these objects, i.e., analyzing the angle of approach to elementary operations that these objects offer and their ability to accompany children from a naive representation to a more abstract model. In this first part, we will present processes that enable children to construct a mathematical universe that gives meaning to addition and subtraction by rooting them in elementary concepts of geometry and logic.

Translation of the introduction to the articles written by Ivan Riou

Des abaques pour reprendre le contrôle des opérations I

Des abaques pour reprendre le contrôle des opérations II

How to Use an Abacus

Counting

Adding and Subtracting

Multiplying

Dividing

https://www.wikihow.com/Use-an-Abacus

If r1 and r2 are the principal radii of curvature of a surface at a given point, the following third case can be distinguished for the Gaussian measure of curvature:

(1/r1) · (1/r2) < 0: the circles of curvature lie on opposite sides of the tangent plane.

Designer:

After the originals made at the Großherzoglich technische Hochschule in Karlsruhe under the direction of Privy Councillor Professor Dr. Chr. Wiener, designed by the engineer C. Tesch, former assistant in descriptive geometry at the technische Hochschule in Karlsruhe.

Design date: 1894

Manufacturer / Publisher: [Martin Schilling]

Date of manufacture: [First quarter of the 20th century]

Place of manufacture: [Germany]

Dimensions & materials:

Height: 22.5 cm; Width: 13 cm; Depth: 13 cm

Cardboard

submitted 2 weeks ago* (last edited 2 weeks ago) by xiao@sh.itjust.works to c/Math_history@sh.itjust.works

Logarithms represented at this time in so many ways both what was old and what was new. This relation looked back to reflect concerns of computation, but looked forward to nascent notions about mathematical functions. Although logarithms were primarily a tool for facilitating computation, they were but another of the crucial insights that directed the attention of mathematical scholars towards more abstract organizing notions. But one thing is very clear: the concept of logarithm as we understand it today as a function is quite different in many respects from how it was originally conceived. But eventually, through the work, consideration, and development of many mathematicians, the logarithm became far more than a useful way to compute with large unwieldy numbers. It became a mathematical relation and function in its own right.

In time, the logarithm evolved from a labor saving device to become one of the core functions in mathematics. Today, it has been extended to negative and complex numbers and it is vital in many modern branches of mathematics. It has an important role in group theory and is key to calculus, with its straightforward derivatives and its appearance in the solutions to various integrals. Logarithms form the basis of the Richter scale and the measure of pH, and they characterize the music intervals in the octave, to name but a few applications. Ironically, the logarithm still serves as a labor saving device of sorts, but not for the benefit of human effort! It is often used by computers to approximate certain operations that would be too costly, in terms of computer power, to evaluate directly, particularly those of the form x^n^.
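The power-approximation trick mentioned above rests on the identity x^n^ = e^(n·ln x)^, which turns an expensive power into one logarithm, one multiplication, and one exponential. A minimal sketch in Python (the function name is illustrative, and the identity only holds for x > 0; this is not any particular CPU's actual routine):

```python
import math

# x**n = exp(n * ln(x)): one log, one multiply, one exp.
# Valid for x > 0 only. Illustrative sketch of the identity.
def power_via_log(x: float, n: float) -> float:
    return math.exp(n * math.log(x))

assert abs(power_via_log(2.0, 10) - 1024.0) < 1e-6
```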

https://old.maa.org/press/periodicals/convergence/logarithms-the-early-history-of-a-familiar-function-conclusion

Possibly the first approach to the subject of logarithms, also touching trigonometric functions, was described by the Scottish mathematician John Napier (1550–1617) in his 1614 work Mirifici logarithmorum canonis descriptio. However, the value e, now known as Euler's number, was a later contribution by Jacob Bernoulli (1655–1705). Within a short period, these contributions were widely adopted as a means to facilitate numerical calculations, especially of products, with the help of logarithmic tables. Interestingly, the mechanical device developed by Napier, known as Napier's bones, is a resource for calculating products and quotients that is not based on the concept of logarithms. After preliminary related developments by the English mathematician Roger Cotes (1682–1716), the important result now widely known as Euler's formula was described by Leonhard Euler (1707–1783) in 1748 in his two-volume work Introductio in analysin infinitorum. The concepts of the logarithm and exponential functions contributed substantially to establishing relationships with the concept and calculation of powers and roots, including for complex values, especially thanks to developments by Augustin-Louis Cauchy (1789–1857) in his Cours d'analyse (1821). The Fourier series was developed mainly by Jean-Baptiste Joseph Fourier (1768–1830) as a means to solve the heat (diffusion) equation on a metal plate, which he described in his reference work Mémoire sur la propagation de la chaleur dans les corps solides (1807). The development of matrix algebra was to a great extent pioneered by the British mathematician Arthur Cayley (1821–1895), who also employed matrices for addressing linear systems of equations. Cayley's focus on pure mathematics also included important contributions to analytic geometry, group theory, and graph theory.
One of the first systematic approaches to the application of matrices to dynamics and differential equations was developed in the book Elementary Matrices and Some Applications to Dynamics and Differential Equations, whose first 155 pages present a treatise on matrices, including infinite series of matrices and differential operators. The remainder of the book describes the solution of differential equations using matrices, as well as applications to the dynamics of airplanes.

https://hal.science/hal-03845390v2/document

Overview of the exponential function

The exponential function is one of the most important functions in mathematics (though it would have to admit that the linear function ranks even higher in importance). To form an exponential function, we let the independent variable be the exponent. A simple example is the function f(x)=2^x^.

As illustrated in the above graph of f, the exponential function increases rapidly. Exponential functions are solutions to the simplest types of dynamical systems. For example, an exponential function arises in simple models of bacteria growth.

An exponential function can describe growth or decay. The function g(x)=(1/2)^x^ is an example of exponential decay: it gets rapidly smaller as x increases, as illustrated by its graph.

In the exponential growth of f(x), the function doubles every time you add one to its input x. In the exponential decay of g(x), the function shrinks in half every time you add one to its input x. The presence of this doubling time or half-life is characteristic of exponential functions, indicating how fast they grow or decay.
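The doubling time and half-life just described are easy to check numerically; a quick sketch, using the f and g defined above:

```python
# Checking the doubling time of f(x) = 2**x and the half-life of
# g(x) = (1/2)**x from the text above.
f = lambda x: 2.0 ** x
g = lambda x: 0.5 ** x

for x in [0.0, 1.5, 3.0]:
    assert abs(f(x + 1) - 2 * f(x)) < 1e-9    # f doubles per unit step
    assert abs(g(x + 1) - 0.5 * g(x)) < 1e-9  # g halves per unit step
```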

Parameters of the exponential function

As with any function, the action of an exponential function f(x) can be captured by the function machine metaphor that takes inputs x and transforms them into the outputs f(x).

The function machine metaphor is useful for introducing parameters into a function. The above exponential functions f(x) and g(x) are two different functions, but they differ only by the change in the base of the exponentiation from 2 to 1/2. We could capture both functions using a single function machine, but with dials to represent parameters influencing how the machine works.

We could represent the base of the exponentiation by a parameter b. Then, we could write f as a function with a single parameter (a function machine with a single dial): f(x)=b^x^.

When b=2, we have our original exponential growth function f(x), and when b=1/2, this same f turns into our original exponential decay function g(x). We could think of a function with a parameter as representing a whole family of functions, with one function for each value of the parameter.

We can also change the exponential function by including a constant in the exponent. For example, the function h(x)=2^3x^ is also an exponential function. It just grows faster than f(x)=2^x^ since h(x) doubles every time you add only 1/3 to its input x. We can introduce another parameter k into the definition of the exponential function, giving us two dials to play with. If we call this parameter k, we can write our exponential function f as f(x)=b^kx^.

It turns out that adding both parameters b and k to our definition of f is really unnecessary. We can still get the full range of functions if we eliminate either b or k. [...]. For example, you can see that the function f(x)=3^2x^ (k=2, b=3) is exactly the same as the function f(x)=9^x^ (k=1, b=9). In fact, for any change you make to k, you can make a compensating change in b to keep the function the same. [...].
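The redundancy of b and k comes from the identity b^kx^ = (b^k^)^x^, so any change in k can be absorbed into the base. A quick numerical check of the 3^2x^ = 9^x^ example:

```python
import math

# b and k are redundant: b**(k*x) == (b**k)**x, so f(x) = 3**(2*x)
# is the same function as f(x) = 9**x.
for x in [-1.0, 0.5, 2.0, 3.7]:
    assert math.isclose(3 ** (2 * x), 9 ** x)
```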

Since it is silly to have both parameters b and k, we will typically eliminate one of them. The easiest thing to do is eliminate k and go back to the function f(x)=b^x^.

We will use this function a bit at first, changing the base b to make the function grow or decay faster or slower.

However, once you start learning some calculus, you'll see that it is more natural to get rid of the base parameter b and instead use the constant k to make the function grow or decay faster or slower. Except, we can't exactly get rid of the base b. If we set b=1, we'd have the boring function f(x)=1, or, if we set b=0, we'd have the even more boring function f(x)=0. We need to choose some other value of b.

If we didn't have calculus, we'd probably choose b=2, writing our exponential function as f(x)=2^kx^. Or, since we like the decimal system so well, maybe we'd choose b=10 and write our exponential function of f(x)=10^kx^. According to the above discussion, it shouldn't matter whether we use b=2 or b=10, as we can get the same functions either way (just with different values of k).

But, it turns out that calculus tells us there is a natural choice for the base b. Once you learn some calculus, you'll see why the most common base b throughout the sciences is the irrational number

e=2.718281828459045….

Fixing b=e, we can write the exponential functions as f(x)=e^kx^.

Using e for the base is so common, that e^x^ (“e to the x”) is often referred to simply as the exponential function.

To increase the possibilities for the exponential function, we can add one more parameter c that scales the function: f(x)=cb^kx^.

Since f(0)=cb^k·0^=c, we can see that the parameter c does something completely different than the parameters b and k. We'll often use two parameters for the exponential function: c and one of b or k. For example, we might set k=1 and use f(x)=cb^x^, or set b=e and use f(x)=ce^kx^.
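The roles of the two parameters can be checked in a few lines: c sets the starting value f(0), while k sets the rate. A minimal sketch of the f(x)=ce^kx^ form:

```python
import math

# The two-parameter form f(x) = c * e**(k*x): the scale c is just the
# value at x = 0, while k sets the rate (growth for k > 0, decay for k < 0).
def f(x: float, c: float, k: float) -> float:
    return c * math.exp(k * x)

assert math.isclose(f(0.0, c=5.0, k=3.0), 5.0)      # f(0) = c
assert f(1.0, c=1.0, k=2.0) > f(1.0, c=1.0, k=1.0)  # larger k, faster growth
```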

https://mathinsight.org/exponential_function

The number e is a mathematical constant, approximately equal to 2.71828, that is the base of the natural logarithm and exponential function. It is sometimes called Euler's number, after the Swiss mathematician Leonhard Euler, though this can invite confusion with Euler numbers, or with Euler's constant, a different constant typically denoted γ. Alternatively, e can be called Napier's constant after John Napier. The Swiss mathematician Jacob Bernoulli discovered the constant while studying compound interest.

The first references to this constant were published in 1618 in the table of an appendix of a work on logarithms by John Napier. However, this did not contain the constant itself, but simply a list of logarithms to the base e. It is assumed that the table was written by William Oughtred. In 1661, Christiaan Huygens studied how to compute logarithms by geometrical methods and calculated a quantity that, in retrospect, is the base-10 logarithm of e, but he did not recognize e itself as a quantity of interest.

The constant itself was introduced by Jacob Bernoulli in 1683, for solving the problem of continuous compounding of interest. In his solution, the constant e occurs as the limit of (1 + 1/n)^n^ as n tends to infinity, where n represents the number of intervals in a year on which the compound interest is evaluated (for example, n = 12 for monthly compounding).
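Bernoulli's compounding limit converges slowly but visibly; a quick numerical sketch:

```python
import math

# Bernoulli's limit: (1 + 1/n)**n approaches e as the number of
# compounding intervals n grows.
for n in (1, 12, 365, 1_000_000):
    print(n, (1 + 1 / n) ** n)  # starts at 2.0, then creeps toward e
assert abs((1 + 1 / 1_000_000) ** 1_000_000 - math.e) < 1e-5
```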

The first symbol used for this constant was the letter b by Gottfried Leibniz in letters to Christiaan Huygens in 1690 and 1691.

Leonhard Euler started to use the letter e for the constant in 1727 or 1728, in an unpublished paper on explosive forces in cannons, and in a letter to Christian Goldbach on 25 November 1731. The first appearance of e in a printed publication was in Euler's Mechanica (1736). It is unknown why Euler chose the letter e. Although some researchers used the letter c in the subsequent years, the letter e was more common and eventually became standard.

Euler proved that e is the sum of the infinite series

e = 1/0! + 1/1! + 1/2! + 1/3! + ⋯,

where n! is the factorial of n. The equivalence of the two characterizations, using the limit and the infinite series, can be proved via the binomial theorem.
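Unlike the compounding limit, the factorial series converges very fast; a dozen or so terms already match e to high precision:

```python
import math

# Euler's series: e = 1/0! + 1/1! + 1/2! + ...
# Truncating after n = 12 leaves an error of roughly 1/13!, about 1.6e-10.
e_approx = sum(1 / math.factorial(n) for n in range(13))
assert abs(e_approx - math.e) < 1e-9
```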

https://en.m.wikipedia.org/wiki/E_(mathematical_constant)

The number e first comes into mathematics in a very minor way. This was in 1618 when, in an appendix to Napier's work on logarithms, a table appeared giving the natural logarithms of various numbers. However, that these were logarithms to base e was not recognised since the base to which logarithms are computed did not arise in the way that logarithms were thought about at this time. Although we now think of logarithms as the exponents to which one must raise the base to get the required number, this is a modern way of thinking. We will come back to this point later in this essay. This table in the appendix, although carrying no author's name, was almost certainly written by Oughtred. A few years later, in 1624, again e almost made it into the mathematical literature, but not quite. In that year Briggs gave a numerical approximation to the base 10 logarithm of e but did not mention e itself in his work.

The next possible occurrence of e is again dubious. In 1647 Saint-Vincent computed the area under a rectangular hyperbola. Whether he recognised the connection with logarithms is open to debate, and even if he did there was little reason for him to come across the number e explicitly. Certainly by 1661 Huygens understood the relation between the rectangular hyperbola and the logarithm. He examined explicitly the relation between the area under the rectangular hyperbola yx=1 and the logarithm. Of course, the number e is such that the area under the rectangular hyperbola from 1 to e is equal to 1. This is the property that makes e the base of natural logarithms, but this was not understood by mathematicians at this time, although they were slowly approaching such an understanding.

Huygens made another advance in 1661. He defined a curve which he calls "logarithmic" but in our terminology we would refer to it as an exponential curve, having the form y=ka^x^. Again out of this comes the logarithm to base 10 of e, which Huygens calculated to 17 decimal places. However, it appears as the calculation of a constant in his work and is not recognised as the logarithm of a number (so again it is a close call but e remains unrecognised).

Further work on logarithms followed which still does not see the number e appear as such, but the work does contribute to the development of logarithms. In 1668 Nicolaus Mercator published Logarithmotechnia, which contains the series expansion of log(1+x). In this work Mercator uses the term "natural logarithm" for the first time for logarithms to base e. The number e itself again fails to appear as such and again remains elusively just round the corner.

Perhaps surprisingly, since this work on logarithms had come so close to recognising the number e, when e is first "discovered" it is not through the notion of logarithm at all but rather through a study of compound interest. In 1683 Jacob Bernoulli looked at the problem of compound interest and, in examining continuous compound interest, he tried to find the limit of (1+1/n)^n^ as n tends to infinity. He used the binomial theorem to show that the limit had to lie between 2 and 3 so we could consider this to be the first approximation found to e. Also if we accept this as a definition of e, it is the first time that a number was defined by a limiting process. He certainly did not recognise any connection between his work and that on logarithms.

We mentioned above that logarithms were not thought of in the early years of their development as having any connection with exponents. Of course from the equation x = a^t^, we deduce that t = log⁡ x where the log is to base a, but this involves a much later way of thinking. Here we are really thinking of log as a function, while early workers in logarithms thought purely of the log as a number which aided calculation. It may have been Jacob Bernoulli who first understood the way that the log function is the inverse of the exponential function. On the other hand the first person to make the connection between logarithms and exponents may well have been James Gregory. In 1684 he certainly recognised the connection between logarithms and exponents, but he may not have been the first.

So much of our mathematical notation is due to Euler that it will come as no surprise to find that the notation e for this number is due to him. The claim which has sometimes been made, however, that Euler used the letter e because it was the first letter of his name is ridiculous. It is probably not even the case that the e comes from "exponential", but it may have just been the next vowel after "a", and Euler was already using the notation "a" in his work. Whatever the reason, the notation e made its first appearance in a letter Euler wrote to Goldbach in 1731.

Most people accept Euler as the first to prove that e is irrational. Certainly it was Hermite who proved that e is not an algebraic number in 1873.

https://mathshistory.st-andrews.ac.uk/HistTopics/e/

All exponential functions are proportional to their own derivative, but the exponential function with base e is the special one for which the proportionality constant is 1, meaning e^t^ actually equals its own derivative.

If you look at the graph of e^t^, it has the peculiar property that the slope of a tangent line to any point on the graph equals the height of that point above the horizontal axis.

Examples of the slope of the tangent line for the exponential function.
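This slope-equals-height property can be checked numerically. The sketch below estimates the slope of e^t^ with a central difference and compares it to the height of the graph at that point; the step size h is an arbitrary illustration choice.

```python
import math

def slope(f, t, h=1e-6):
    # central-difference estimate of the derivative f'(t)
    return (f(t + h) - f(t - h)) / (2 * h)

for t in [0.0, 1.0, 2.0]:
    # slope of the tangent line vs. height of the graph: they match
    print(t, slope(math.exp, t), math.exp(t))
```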

So how does the exponential function help us find the derivatives of other exponential functions? Well, maybe you noticed that different exponentials look like horizontally scaled versions of each other. This is true for all exponential functions, but it is easiest to see with exponentials whose bases are related.

This means that you can re-write one exponential in terms of another's base. For example, if we have an exponential function of base 2 and want to re-write the function in terms of base 4, it can be written like this.

2^x^=4^(1/2)x^

One way to see how to convert between two bases is to zoom in on the graph between 0 and 1 to see how fast the first base grows to the value of the second base. In this case, base 4 grows twice as fast as base 2 and reaches the output of 2 in half the time. So to convert base 4 to base 2 we can multiply the input t of the base 4 function by the constant 1/2, which is the same as scaling 4^x^ by a factor of 2 in the horizontal direction.

So we've found a function, the exponential function of base e, with a really nice derivative property. Can we take any old exponential function and re-write it in terms of the exponential function? Or in other words, what constant do we multiply the input variable by to make the exponential function have the same output as another exponential function?

For example, let's try to re-write 2^t^ in terms of the exponential function.

e^ct^ = 2^t^

As before, we can zoom in on a plot of the two functions, and compare their behavior. Specifically, how long does it take the exponential function to grow to 2?

Well, looking at the graph, it takes about t=0.693... units, which is exactly the proportionality constant we found before! If we multiply the input variable t in the exponential function by this constant, the exponential function has the same output as 2^t^.

e^(0.69314718056...)⋅t^ = 2^t^
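The equality is quick to verify in code: the constant is ln(2), and e^ln(2)·t^ reproduces 2^t^ for any t (the sample values of t below are arbitrary).

```python
import math

c = math.log(2)                 # 0.693147... — the constant read off the graph
for t in [0.5, 1.0, 3.0]:
    print(t, math.exp(c * t), 2 ** t)   # the two columns agree
```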

This type of question we are asking leads us directly towards another function, the inverse of the exponential function, the natural logarithm function.

A function like this answers the question of the mystery constants, because it gives a different way to think about functions that are proportional to their own derivative. There's nothing fancy here: this is simply the definition of the natural log, which asks the question "e to the what equals 2?"

e^??^ = 2

And indeed, go plug in the natural log of 2 to a calculator, and you’ll find that it’s 0.6931..., the mystery constant we ran into earlier. And same goes for all the other bases, the mystery proportionality constant that pops up when taking derivatives and when re-writing exponential functions using e is the natural log of the base; the answer to the question "e to the what equals that base".

Importantly, the natural logarithm function gives us the missing tool we need to find the derivative of any exponential function. The key is to re-write the function and then use the chain rule. For example, what is the derivative of the function 3^t^? Well, let's re-write this function in terms of the exponential function using the natural logarithm to calculate the horizontally-scaling proportionality constant.

3^t^ = e^ln(3)t^

Then, we can calculate the derivative of e^ln⁡(3)t^ using the chain rule. First, take the derivative of the outermost function, which, due to the special nature of the exponential function, is itself. Second, multiply this by the derivative of the inner function ln⁡(3)t, which is the constant ln⁡(3).

This is the same derivative we found using algebra above, since ln⁡(3)=1.09861228867...

The same technique can be used to find the derivative of any exponential function.
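The chain-rule result d/dt 3^t^ = ln(3)·3^t^ can be sanity-checked with a numerical derivative; the point t=2 and step size below are arbitrary illustration choices.

```python
import math

def derivative(f, t, h=1e-6):
    # central-difference estimate of f'(t)
    return (f(t + h) - f(t - h)) / (2 * h)

t = 2.0
numeric = derivative(lambda u: 3 ** u, t)
exact = math.log(3) * 3 ** t      # the chain-rule answer
print(numeric, exact)
```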

In fact, throughout applications of calculus, you rarely see exponentials written as some base to a power t. Instead you almost always write exponentials as e raised to some constant multiplied by t. It’s all equivalent; any function like 2^t^ or 3^t^ can be written as e^c⋅t^. The difference is that framing things in terms of the exponential function plays much more smoothly with the process of derivatives.

Why we care

I know this is all pretty symbol heavy, but the reason we care is that all sorts of natural phenomena involve a certain rate of change being proportional to the thing changing.

For example, the rate of growth of a population actually does tend to be proportional to the size of the population itself, assuming there isn’t some limited resource slowing that growth down. If you put a cup of hot water in a cool room, the rate at which the water cools is proportional to the difference in temperature between the room and the water. Or said differently, the rate at which that difference changes is proportional to itself. If you invest your money, the rate at which it grows is proportional to the amount of money there at any time.

In all these cases, where some variable’s rate of change is proportional to itself, the function describing that variable over time will be some exponential. And even though there are lots of ways to write any exponential function, it’s very natural to choose to express these functions as e^ct^, since that constant c in the exponent carries a very natural meaning: It’s the same as the proportionality constant between the size of the changing variable and the rate of change.
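A minimal simulation makes the claim concrete: grow a quantity in tiny steps with rate of change proportional to itself, and the result tracks e^ct^. The rate c, step size dt, and time span here are arbitrary illustration choices.

```python
import math

# Euler-step simulation of dx/dt = c * x
c, dt = 0.3, 1e-4
x, t = 1.0, 0.0
for _ in range(100_000):          # integrate out to t = 10
    x += c * x * dt               # change is proportional to current value
    t += dt

print(x, math.exp(c * t))         # the simulation tracks e^(c*t)
```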

https://www.3blue1brown.com/lessons/eulers-number

Ruling the Logarithms (sliderulemuseum.com)
submitted 2 weeks ago* (last edited 2 weeks ago) by xiao@sh.itjust.works to c/Math_history@sh.itjust.works
 
 

In mathematics, the logarithm of a number is the exponent by which another fixed value, the base, must be raised to produce that number. For example, the logarithm of 1000 to base 10 is 3, because 1000 is 10 to the 3rd power: 1000 = 10^3^ = 10 × 10 × 10. More generally, if x = b^y^, then y is the logarithm of x to base b, written log~b~ x, so log~10~ 1000 = 3. As a single-variable function, the logarithm to base b is the inverse of exponentiation with base b.

Logarithms were introduced by John Napier in 1614 as a means of simplifying calculations. They were rapidly adopted by navigators, scientists, engineers, surveyors, and others to perform high-accuracy computations more easily. Using logarithm tables, tedious multi-digit multiplication steps can be replaced by table look-ups and simpler addition. This is possible because the logarithm of a product is the sum of the logarithms of the factors:

log~b~(xy) = log~b~ ⁡x + log~b~ ⁡y ,

provided that b, x and y are all positive and b ≠ 1. The slide rule, also based on logarithms, allows quick calculations without tables, but at lower precision.
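The product rule above is exactly what made log tables useful, and it is easy to check numerically; the sample values of x, y, and the base are arbitrary.

```python
import math

# log_b(xy) = log_b(x) + log_b(y): a multiplication becomes an addition.
x, y, b = 123456.0, 987.654, 10
lhs = math.log(x * y, b)                  # log of the product
rhs = math.log(x, b) + math.log(y, b)     # sum of the logs
print(lhs, rhs)
```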

The common logarithm of a number is the index of that power of ten which equals the number. Speaking of a number as requiring so many figures is a rough allusion to common logarithm, and was referred to by Archimedes as the "order of a number". The first real logarithms were heuristic methods to turn multiplication into addition, thus facilitating rapid computation.

https://en.m.wikipedia.org/wiki/Logarithm

John Napier of Merchiston (Latinized as Ioannes Neper; 1 February 1550 – 4 April 1617), nicknamed Marvellous Merchiston, was a Scottish landowner known as a mathematician, physicist, and astronomer. He was the 8th Laird of Merchiston.

John Napier is best known as the discoverer of logarithms. He also invented the so-called

"Napier's bones"

Napier's bones is a manually operated calculating device created by John Napier of Merchiston, Scotland for the calculation of products and quotients of numbers. The method was based on lattice multiplication, and also called rabdology, a word invented by Napier. Napier published his version in 1617. It was printed in Edinburgh and dedicated to his patron Alexander Seton.

https://en.m.wikipedia.org/wiki/Napier%27s_bones

and popularised the use of the decimal point in arithmetic and mathematics.

Napier's birthplace, Merchiston Tower in Edinburgh, is now part of the facilities of Edinburgh Napier University. There is a memorial to him at St Cuthbert's Parish Church at the west end of Princes Street Gardens in Edinburgh.

https://en.m.wikipedia.org/wiki/John_Napier

John Napier was a Scottish scholar who is best known for his invention of logarithms, but other mathematical contributions include a mnemonic for formulas used in solving spherical triangles and two formulas known as Napier's analogies.

https://mathshistory.st-andrews.ac.uk/Biographies/Napier/

How to Write it

We write it like this:

log~2~(8) = 3

So these two things are the same:

2^3^ = 8   and   log~2~(8) = 3

The number we multiply is called the "base", so we can say:

  • "the logarithm of 8 with base 2 is 3"
  • or "log base 2 of 8 is 3"
  • or "the base-2 log of 8 is 3"

Notice we are dealing with three numbers:

  • the base: the number we are multiplying (a "2" in the example above)
  • how often to use it in a multiplication (3 times, which is the logarithm)
  • The number we want to get (an "8")

Example: What is log~5~(625) ... ?

We are asking "how many 5s need to be multiplied together to get 625?"

5 × 5 × 5 × 5 = 625, so we need 4 of the 5s

Answer: log~5~(625) = 4
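The "how many 5s multiply together?" reading of the logarithm can be played out directly in code, then compared with the library's log function (which answers the same question).

```python
import math

# Count how many times 5 divides into 625 — i.e. how many 5s multiply to 625.
n, count = 625, 0
while n > 1:
    n //= 5
    count += 1

print(count)                         # 4
print(math.log(625, 5))              # also 4, up to floating-point rounding
```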

https://www.mathsisfun.com/algebra/logarithms.html

Before Logarithms:

The late sixteenth century saw unprecedented development in many scientific fields; notably, observational astronomy, long-distance navigation, and geodesy science, or efforts to measure and represent the earth. These endeavors required much from mathematics. For the most part, their foundation was trigonometry, and trigonometric tables, identities, and related calculation were the subject of intensive enterprise. Typically, trigonometric functions were based on non-unity radii, such as R=10,000,000, to ensure precise integer output.* Reducing the calculation burden that resulted from dealing with such large numbers for practitioners in these applied disciplines, and with it, the errors that inevitably crept into the results, became a prime objective for mathematicians. As a result, much energy and scholarly effort were directed towards the art of computation.

Accordingly, techniques that could bypass lengthy processes, such as long multiplications or divisions, were explored. Of particular interest were those that replaced these processes with equivalent additions and subtractions. One method originating in the late sixteenth century that was used extensively to save computation was the technique called prosthaphaeresis, a compound constructed from the Greek terms prosthesis (addition) and aphaeresis (subtraction). This relation transformed long multiplications and divisions into additions and subtractions via trigonometric identities, such as:

2cos(A)cos(B) = cos(A+B) + cos(A−B).

When one needed the product of two numbers x and y, for example, trigonometric tables would be consulted to find A and B such that:

x=cos(A) and y=cos(B).

With A and B determined, cos(A+B) and cos(A−B) could be read from the table and half of the sum taken to find the original product in question. Thus the long multiplication of two numbers could be replaced by table look-up, addition, and halving. Such rules were recognized as early as the beginning of the sixteenth century by Johannes Werner in 1510, but their application specifically for multiplication first appeared in print in 1588 in a work by Nicolai Reymers Ursus (Thoren, 1988). Christopher Clavius extended the methods of prosthaphaeresis, of which examples can be found in his 1593 Astrolabium (Smith, 1959, p. 455).
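The prosthaphaeresis recipe can be replayed in a few lines. Historically A, B and the cosines came from printed trigonometric tables; in this sketch math.acos and math.cos stand in for the table look-ups, and the values of x and y are arbitrary numbers in [0, 1].

```python
import math

# Multiply x and y using only cosines, an addition, a subtraction,
# and a halving, via 2*cos(A)*cos(B) = cos(A+B) + cos(A-B).
x, y = 0.3420, 0.7660
A, B = math.acos(x), math.acos(y)          # "table look-up": x = cos(A), y = cos(B)
product = (math.cos(A + B) + math.cos(A - B)) / 2
print(product, x * y)                      # the two agree
```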

Finally, with the scientific community focused on developing more powerful computational methods, the desire to capture symbolically essential mathematical ideas behind these developments was also growing. In the fifteenth and sixteenth centuries, mathematicians such as Nicolas Chuquet (c. 1430–1487) and Michael Stifel (c. 1487–1567) turned their attention to the relationship between arithmetic and geometric sequences while working to construct notation to express an exponential relationship. The focus on mathematical symbolism in centuries prior and the growing attention to notation–particularly the experimentation with different versions of exponent notation–played a critical role in the recognition and clarification of such a relationship. Now the mathematical connection between a geometric and an arithmetic sequence could be made all the more apparent by symbolically capturing these sequences as successive exponential powers of a given number and the exponents themselves, respectively (see Figure 6). The work on the relationships between sequences was mathematically important per se, but was equally significant for providing the inspiration for the development of the logarithmic relation.

* Note: Modern trigonometry is essentially based on triangles inscribed in a unit circle; that is, a circle with radius R=1. Early practitioners used circles with various values for the radius. The relationship between the modern sine function and a sine or half-chord in a circle of radius R is given by Sinθ = R sinθ, where the modern sine function has a lower case 's' and the pre-modern sine an upper case 'S'.

https://old.maa.org/press/periodicals/convergence/logarithms-the-early-history-of-a-familiar-function-before-logarithms-the-computational-demands-of

John Napier Introduces Logarithms

In such conditions, it is hardly surprising that many mathematicians were acutely aware of the issues of computation and were dedicated to relieving practitioners of the calculation burden. In particular, the Scottish mathematician John Napier was famous for his devices to assist with computation. He invented a well-known mathematical artifact, the ingenious numbering rods more quaintly known as “Napier's bones,” that offered mechanical means for facilitating computation. (For additional information on “Napier's bones,” see the article, “John Napier: His Life, His Logs, and His Bones” (2006).) In addition, Napier recognized the potential of the recent developments in mathematics, particularly those of prosthaphaeresis, decimal fractions, and symbolic index arithmetic, to tackle the issue of reducing computation. He appreciated that, for the most part, practitioners who had laborious computations generally did them in the context of trigonometry. Therefore, as well as developing the logarithmic relation, Napier set it in a trigonometric context so it would be even more relevant.

Napier first published his work on logarithms in 1614 under the title Mirifici logarithmorum canonis descriptio, which translates literally as A Description of the Wonderful Table of Logarithms. Indeed, the very title Napier selected reveals his high ambitions for this technique---the provision of tables based on a relation that would be nothing short of “wonder-working” for practitioners. As well as providing a short overview of the mathematical details, Napier gave technical expression to his concept. He coined a term from the two ancient Greek terms logos, meaning proportion, and arithmos, meaning number; compounding them to produce the word “logarithm.” Napier used this word as well as the designations “natural” and “artificial” for numbers and their logarithms, respectively, in his text.

Despite the obvious connection with the existing techniques of prosthaphaeresis and sequences, Napier grounded his conception of the logarithm in a kinematic framework. The motivation behind this approach is still not well understood by historians of mathematics. Napier imagined two particles traveling along two parallel lines. The first line was of infinite length and the second of a fixed length (see Figures 2 and 3). Napier imagined the two particles to start from the same (horizontal) position at the same time with the same velocity. The first particle he set in uniform motion on the line of infinite length so that it covered equal distances in equal times. The second particle he set in motion on the finite line segment so that its velocity was proportional to the distance remaining from the particle to the fixed terminal point of the line segment.

Figure 2. Napier's two parallel lines with moving particles (Image used courtesy of Landmarks of Science Series, NewsBank-Readex)

More specifically, at any moment the distance not yet covered on the second (finite) line was the sine and the traversed distance on the first (infinite) line was the logarithm of the sine. This had the result that as the sines decreased, Napier's logarithms increased. Furthermore, the sines decreased in geometric proportion, and the logarithms increased in arithmetic proportion. We can summarize Napier's explanation as follows (Descriptio I, 1 (p. 4); see Figure 3):

AC = log~nap~(γω) where γω = Sinθ~1~

AD = log~nap~(δω) where δω = Sinθ~2~

AE = log~nap~(ϵω) where ϵω= Sinθ~3~

and so on, so that, more generally: x = Sin(θ)

y = log~nap~(x)

where log~nap~ has been used to distinguish Napier's particular understanding of the logarithm concept from the modern one.

Figure 3. The relation between the two lines and the logs and sines

Napier generated numerical entries for a table embodying this relationship. He arranged his table by taking increments of arc θ minute by minute, then listing the sine of each minute of arc, and then its corresponding logarithm. However in terms of the way he actually computed these entries, he would have in fact worked in the opposite manner, generating the logarithms first and then choosing those that corresponded to a sine of an arc, which accordingly formed the argument. For example, he would have computed values that appear in the first column of Table 1 via the relation 10^7^(1 − 10^−7^)^n^:

Table 1. Napier's logarithms

The values in the first column (in bold) that corresponded to the Sines of the minutes of arcs (third column) were extracted, along with their accompanying logarithms (column 2) and arranged in the table. The appropriate values from Table 1 can be seen in rows one to six of the last three columns in Figure 4. Napier tabulated his logarithms from 0∘ to 45∘ in minutes of arc, and by symmetry provided values for the entire first quadrant. The excerpt in Figure 4 gives the first half of the first degree and, by symmetry, on the right the last half of the eighty-ninth degree.

To complete the tables, Napier computed almost ten million entries from which he selected the appropriate values. Napier himself reckoned that computing this many entries had taken him twenty years, which would put the beginning of his endeavors as far back as 1594.
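The first few terms of Napier's geometric sequence 10^7^(1 − 10^−7^)^n^ are easy to generate; the index n supplies the corresponding Napierian logarithm. This is a sketch of the construction, not Napier's actual working method.

```python
# Successive values of Napier's geometric sequence: each term is the
# previous one times (1 - 10^-7), starting from 10^7.
r, ratio = 10**7, 1 - 10**-7
for n in range(6):
    # n is (in Napier's scheme) the logarithm of the value printed
    print(n, r * ratio ** n)
```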

Figure 4. The first page of Napier's tables

(Image used courtesy of Landmarks of Science Series, NewsBank-Readex)

Napier frequently demonstrated the benefits of his method. For example, he worked through a problem involving the computation of mean proportionals, sometimes known as the geometric mean. He reviewed the usual way in which this would have been computed, and pointed out that his technique using logarithms not only finds the answer “earlier” (that is, faster!), but also uses only one addition and one division by two! He stated:

"Let the extremes 1000000 and 500000 bee given, and let the meane proportionall be sought: that commonly is found by multiplying the extreames given, one by another, and extracting the square root of the product. But we finde it earlier thus; We adde the Logarithme of the extreames 0 and 693147, the summe whereof is 693147 which we divide by 2 and the quotient 346573 shall be the Logar. of the middle proportionall desired. By which the middle proportionall 707107, and his arch 45 degrees are found as before.... found by addition onely, and division by two. (Book I, 5 (p. 25), as translated by Edward Wright)"

In order to find the mean proportional by traditional methods, Napier observed that one has to compute the product and then take the square root; that is:

√(1000000×500000) = √(500000000000) ≈ 707106.78

This method involves the multiplication of two large numbers and a lengthy square-root extraction. As an alternative, Napier proposed (with computations to 6 significant figures):

log~nap~(1000000)+log~nap~(500000)=0+693147=693147

693147÷2 = 346573 to 6 significant figures

⇒mean proportional = 707107, as required,

which he rightly deemed was much simpler to compute.
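Napier's shortcut can be replayed with modern natural logarithms (his own table used a different scaling): average the two logs, then exponentiate, and compare with the multiply-then-square-root route.

```python
import math

x, y = 1_000_000, 500_000
direct = math.sqrt(x * y)                               # multiply, then root
via_logs = math.exp((math.log(x) + math.log(y)) / 2)    # add logs, halve, invert
print(direct, via_logs)                                 # both ≈ 707106.78
```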

https://old.maa.org/press/periodicals/convergence/logarithms-the-early-history-of-a-familiar-function-john-napier-introduces-logarithms

Henry Briggs and the Common Logarithm

Shortly after Napier’s publication, English mathematician Henry Briggs (1561–1630) refined and popularized the concept of logarithms. Briggs collaborated with Napier and proposed the use of base-10 logarithms, also known as common logarithms. In 1617, Briggs published Logarithmorum Chilias Prima, containing the first table of base-10 logarithms.

Briggs’ base-10 system was more intuitive and practical for everyday calculations, as it aligned with the decimal system widely used in Europe. This refinement made logarithms accessible to a broader audience, including scientists, engineers, and navigators.

The Logarithmic Scale: Slide Rules and Early Calculators

One of the earliest applications of logarithms was the development of the slide rule. In 1622, English mathematician William Oughtred invented the circular slide rule, which utilized logarithmic scales for rapid calculations. By the mid-17th century, linear slide rules became common tools for scientists, engineers, and students.

The slide rule remained an essential computational device for over 300 years, until the advent of electronic calculators in the mid-20th century. Its reliance on logarithmic principles demonstrates the enduring utility of logarithms in simplifying calculations.

Logarithms in Astronomy and Navigation

Logarithms played a crucial role in advancing astronomy and navigation during the 17th and 18th centuries. Astronomers like Johannes Kepler and Isaac Newton relied on logarithmic tables to perform complex calculations related to planetary motion and celestial mechanics. By reducing the computational burden, logarithms enabled astronomers to make precise predictions and refine their models of the universe.

Navigators also benefited from logarithms, particularly in determining longitude and calculating distances at sea. The efficiency of logarithmic tables allowed mariners to improve their accuracy in charting courses and conducting explorations.

https://www.historymath.com/logarithms/

The Slide rule

This is a picture of a basic beginner’s slide rule for various math operations including multiplication/division and square/square root:

Components of A Slide Rule

The slide rule is actually made of three bars that are fixed together. The sliding center bar is sandwiched by the outer bars which are fixed with respect to each other. The metal "window" is inserted over the slide rule to act as a place holder. A cursor is fixed in the center of the "window" to allow for accurate readings.

The scales (A-D) are labeled on the left-hand side of the slide rule. The number of scales on a slide rule vary depending on the number of mathematical functions the slide rule can perform. Multiplication and division are performed using the C and D scales. Square and square root are performed with the A and B scales. The numbers are marked according to a logarithmic scale. Therefore, the first number on the slide rule scale (also called the index) is 1, because the log of one is zero.

To know how it works please read the full page

Notice that on this scale the distance between the divisions is decreasing. This is a characteristic of a log scale. A logarithm relates one number to another number much like a mathematical function. The log of a number, to the base 10, is defined by:

y = log~10~(x)   exactly when   10^y^ = x

The "magic" of the slide rule is actually based on a mathematical logarithmic relation:

log~10~(x × y) = log~10~(x) + log~10~(y)   and   log~10~(x ÷ y) = log~10~(x) − log~10~(y)

These relations made it possible to perform multiplication and division using addition and subtraction. Before the slide rule, the product of two numbers was found by looking up their respective logs and adding them together, then finding the number whose log is the sum, also called the inverse log.

The slide rule made its first appearance in the 17th century. It made the log relations easier to use by providing a number line on which the displacements of the numbers were proportional to their logs. The slide rule eased the addition of the two logarithmic displacements of the numbers, thus assisting with multiplication and division in calculations.
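The slide rule's trick can be stated in a few lines of code: mark each number at displacement log~10~(x), add displacements, and read off the answer. The function names here are illustrative, not from any slide rule standard.

```python
import math

def position(x):
    # where a number sits along a slide rule scale
    return math.log10(x)

def slide_rule_multiply(x, y):
    # sliding one scale along the other adds the two displacements;
    # reading the result off the scale inverts the log
    return 10 ** (position(x) + position(y))

print(slide_rule_multiply(2, 3))   # ≈ 6, up to floating-point rounding
```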

LIMITING PHYSICS:

The accuracy of the calculations made with a slide rule depends on the accuracy with which the user can read the numbers off the scale. More divisions allow for more decimal places which means increased accuracy.

https://web.mit.edu/2.972/www/reports/slide_rule/slide_rule.html


According to George Gamow, chess was invented by Sissa ben Dahir, Wazir of the court of King Shiram. King Shiram loved the game so much that he offered Sissa any reward he could name. Perhaps trying to impress the king with his mathematical skills, Sissa asked for some rice,

one grain on the first square of the chessboard, two on the second, four on the third, eight on the fourth, and so on, each square's amount being the double of the previous square's.

How much rice did Shiram owe Sissa?

The last square would contain 2^63^ grains of rice. This is a large number: 2^63^ = 9,223,372,036,854,775,808. Suppose Shiram had tried to stack the rice of this last square in a column, each grain lying on top of the one below it. A grain of rice is about 1 mm thick. How high a column of rice would Shiram have obtained? Would it be higher than Mt. Everest? Higher than the distance to the moon? To the sun?

Here is the answer

In fact, if he could have stacked them this way, Shiram would have obtained a column of rice one light year tall, one-quarter of the way to the nearest star after the sun. Obviously, Shiram could not give Sissa the reward he requested. What do you suppose was the outcome? Let's just say an important lesson is "Don't be a smart-aleck."
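The light-year claim checks out arithmetically, using 1 mm per grain and the standard value of about 9.46 × 10^15^ m for a light year:

```python
# Height of 2^63 rice grains stacked 1 mm each, compared to a light year.
grains = 2 ** 63
height_m = grains * 1e-3               # 1 mm per grain, converted to metres
light_year_m = 9.4607e15               # one light year in metres
print(height_m / light_year_m)         # ≈ 0.97: about one light year tall
```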


The quickness of doubling is not just related to the history of chess. The most elementary population models postulate the growth rate is proportional to the population size (twice as many people means twice as many couples having babies, means twice as many babies). This led Thomas Malthus to predict population pressure problems, because Malthus argued populations grow more rapidly than their ability to produce food.

https://gauss.math.yale.edu/public_html/People/frame/Fractals/Chaos/Doubling/Doubling.html

The ancient Indian Brahmin mathematician Sissa (also spelt Sessa or Sassa and also known as Sissa ibn Dahir or Lahur Sessa) is a mythical character from India, known for the invention of chaturanga, the Indian predecessor of chess, and the wheat and chessboard problem he would have presented to the king when he was asked what reward he'd like for that invention.

Sissa, a Hindu Brahmin (in some legends from the village of Lahur), invents chess for an Indian king (named as Balhait, Shahram or Ladava in different legends, with "Taligana" sometimes named as the supposed kingdom he ruled in northern India) for educational purposes. In gratitude, the king asks Sissa how he wants to be rewarded. Sissa wishes to receive an amount of grain which is the sum of one grain on the first square of the chess board, and which is then doubled on every following square.

This request is now known as the wheat and chessboard problem, and forms the basis of various mathematical and philosophical questions.

Until the nineteenth century, the legend of Sissa was one of several theories about the origin of chess. Today it is mainly regarded as a myth because there is no clear picture of the origin of chaturanga, the ancient Indian chess game from which modern chess developed.

The context of the mythical Sissa is described in detail in A History of Chess. There are many variations and inconsistencies, and therefore little can be confirmed historically. Nevertheless, the legend of Sissa is placed by most sources in a Hindu kingdom between 400 and 600 AD, in an era after the invasion of Alexander the Great. The myth is often told from a Persian and Islamic perspective.

However, the oldest known narrative believed to have been the basis for the legend of Sissa is from before the advent of Islam. It tells of Husiya, daughter of Balhait, a queen whose son is killed by a rebel, but of whom she does not initially hear the news. This news is subtly announced to her through the chess game that Sissa introduced to her.

https://en.m.wikipedia.org/wiki/Sissa_(mythical_brahmin)

The problem may be solved using simple addition. With 64 squares on a chessboard, if the number of grains doubles on successive squares, then the sum of grains on all 64 squares is: 1 + 2 + 4 + 8 + ... and so forth for the 64 squares. The total number of grains can be shown to be 2^64^−1 or 18,446,744,073,709,551,615 (eighteen quintillion, four hundred forty-six quadrillion, seven hundred forty-four trillion, seventy-three billion, seven hundred nine million, five hundred fifty-one thousand, six hundred and fifteen).

This exercise can be used to demonstrate how quickly exponential sequences grow, as well as to introduce exponents, zero power, capital-sigma notation, and geometric series. Updated for modern times using pennies and a hypothetical question such as "Would you rather have a million dollars or a penny on day one, doubled every day until day 30?", the formula has been used to explain compound interest. (Doubling would yield over one billion seventy three million pennies, or over 10 million dollars: 2^30^−1=1,073,741,823).
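The penny version is a one-liner to verify: summing the 30 doublings gives exactly 2^30^ − 1 cents.

```python
# 30-day penny-doubling total: the geometric series 1 + 2 + 4 + ... + 2^29.
total_cents = sum(2 ** day for day in range(30))
print(total_cents)                     # 1073741823 cents
print(total_cents == 2 ** 30 - 1)      # True: the closed form matches
print(total_cents / 100)               # in dollars: about 10.7 million
```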

The problem appears in different stories about the invention of chess. One of them includes the geometric progression problem. The story is first known to have been recorded in 1256 by Ibn Khallikan. Another version has the inventor of chess (in some tellings Sessa, an ancient Indian minister) request his ruler give him wheat according to the wheat and chessboard problem. The ruler laughs it off as a meager prize for a brilliant invention, only to have court treasurers report the unexpectedly huge number of wheat grains would outstrip the ruler's resources. Versions differ as to whether the inventor becomes a high-ranking advisor or is executed.

https://en.m.wikipedia.org/wiki/Wheat_and_chessboard_problem

Let one grain of wheat be placed on the first square of a chessboard, two on the second, four on the third, eight on the fourth, etc. How many grains total are placed on an 8×8 chessboard? Since this is a geometric series, the answer for n squares is a Mersenne number. Plugging in n=8×8=64 then gives 2^64^-1=18446744073709551615.

https://mathworld.wolfram.com/WheatandChessboardProblem.html

The Death of Moore’s Law: What it means and what might fill the gap going forward

In 1965, engineer and businessman Gordon Moore observed a trend that would go on to define the unprecedented technological explosion we’ve experienced over the past fifty years. Noting that the number of transistors in an integrated circuit doubles about every two years, Moore laid out his eponymous law, which has since become the engine behind the growing computer science industry, making everything we now enjoy—cellphones, high-resolution digital imagery, household robots, computer animation, etc.—possible.

However, Moore’s Law was never meant to last forever. Transistors can only get so small and, eventually, the more permanent laws of physics get in the way. Already transistors can be measured on an atomic scale, with the smallest ones commercially available only 3 nanometers wide, barely wider than a strand of human DNA (2.5nm). While there’s still room to make them smaller (in 2021, IBM announced the successful creation of 2-nanometer chips), such progress has become prohibitively expensive and slow, putting reliable gains into question. And there’s still the physical limitation in that wires can’t be thinner than atoms, at least not with our current understanding of material physics.

https://cap.csail.mit.edu/death-moores-law-what-it-means-and-what-might-fill-gap-going-forward

Moore's law is the observation that the number of transistors in an integrated circuit (IC) doubles about every two years. Moore's law is an observation and projection of a historical trend. Rather than a law of physics, it is an empirical relationship. It is an experience curve effect, a type of observation quantifying efficiency gains from learned experience in production.
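The doubling trend is easy to project. Below is an illustrative Python sketch (my own, not from either article) of a strict two-year doubling starting from the Intel 4004's roughly 2,300 transistors in 1971; real chips only roughly track this curve:

```python
# Transistor count predicted by a strict two-year doubling since base_year.
def projected_transistors(year, base_year=1971, base_count=2300):
    return base_count * 2 ** ((year - base_year) / 2)

for year in (1971, 1991, 2011, 2021):
    print(year, f"{projected_transistors(year):,.0f}")
```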

A semi-log plot of transistor counts for microprocessors against dates of introduction, nearly doubling every two years

Industry experts have not reached a consensus on exactly when Moore's law will cease to apply. Microprocessor architects report that semiconductor advancement has slowed industry-wide since around 2010, slightly below the pace predicted by Moore's law. In September 2022, Nvidia CEO Jensen Huang considered Moore's law dead, while Intel's then CEO Pat Gelsinger had the opposite view.

https://en.m.wikipedia.org/wiki/Moore%27s_law

No End in Sight (infosec.pub)
submitted 3 weeks ago* (last edited 3 weeks ago) by xiao@sh.itjust.works to c/Math_history@sh.itjust.works
 
 

Many of us recall the sense of wonder we felt upon learning that there is no biggest number; for some of us, that wonder has never quite gone away. It is obvious that, given any counting number, one can be added to it to give a larger number. But the implication that there is no limit to this process is perplexing.

The concept of infinity has exercised the greatest minds throughout the history of human thought. It can lead us into a quagmire of paradox from which escape seems hopeless. In the late 19th century, the German mathematician Georg Cantor showed that there are different degrees of infinity — indeed an infinite number of them — and he brought to prominence several paradoxical results that had a profound impact on the subsequent development of the subject.

Set Theory

Cantor was the inventor of set theory, which is a fundamental foundation of modern mathematics. A set is any collection of objects, physical or mathematical, actual or ideal. A particular number, say 4, is associated with all the sets having four elements. For any two of these sets, we can find a 1-to-1 correspondence, or bijection, between the elements of one set and those of the other. The number 4 is called the cardinality of these sets. Generalizing this argument, Cantor treated any two sets as being of the same size, or cardinality, if there is a 1-to-1 correspondence between them.

Bijection between two sets of cardinality 4.

But suppose the sets are infinite. As a concrete example, take all the natural numbers, 1, 2, 3, … as one set, and all the even numbers 2, 4, 6, … as the other. By associating any number n in the first set with 2n in the second, we have a perfect 1-to-1 correspondence. By Cantor’s argument, the two sets are the same size. But this is paradoxical, for the set of natural numbers contains all the even numbers and also all the odd ones so, in an intuitive sense, it is larger. The same paradoxical result had been deduced by Galileo some 250 years earlier.
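The pairing n ↔ 2n can be checked on any initial segment. A small Python sketch (not from the article): on a finite prefix, the map is one-to-one in both directions.

```python
# Forward map n -> 2n and inverse map m -> m/2 on the first ten naturals.
naturals = list(range(1, 11))
evens = [2 * n for n in naturals]
back = [m // 2 for m in evens]
assert back == naturals             # every even number is hit exactly once
print(list(zip(naturals, evens)))   # (1, 2), (2, 4), ..., (10, 20)
```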

Cantor carried these ideas much further, showing in particular that the set of all real numbers (or all the points on a line) has a degree of infinity, or cardinality, greater than that of the counting numbers. He did this using an ingenious approach called the diagonal argument. This raised an issue, called the continuum hypothesis: is there a degree of infinity between these two? This question cannot be answered within standard set theory.

Infinities without limit

Cantor introduced the concept of a power set: for any set A, the power set P(A) is the collection of all the subsets of A. Cantor proved that the cardinality of P(A) is greater than that of A. For finite sets, this is obvious; for infinite ones, it was startling. The result is now known as Cantor’s Theorem, and he used his diagonal argument in proving it. He thus developed an entire hierarchy of transfinite cardinal numbers. The smallest of these is the cardinality of the natural numbers, called Aleph-zero:
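The finite case of Cantor's Theorem can be verified by brute force. A Python sketch (mine, not from the article): for a set A of 4 elements, P(A) has 2^4^ = 16 elements.

```python
from itertools import chain, combinations

# Enumerate all subsets of a, from the empty set up to a itself.
def power_set(a):
    s = list(a)
    return list(chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1)))

a = [1, 2, 3, 4]
p = power_set(a)
assert len(p) == 2 ** len(a) > len(a)  # |P(A)| = 2^|A| > |A|
print(len(a), len(p))  # 4 16
```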

Aleph-zero, the cardinality of the natural numbers and the smallest transfinite number.

Cantor’s theory caused quite a stir; some of his mathematical contemporaries expressed dismay at its counter-intuitive consequences. Henri Poincaré, the leading luminary of the day, described the theory as a “grave disease” of mathematics, while Leopold Kronecker denounced Cantor as a renegade and a “corrupter of youth”. This hostility may have contributed to the depression that Cantor suffered through his latter years. But David Hilbert championed Cantor’s ideas, famously predicting that “no one will drive us from the paradise that Cantor has created for us”.

https://thatsmaths.com/2014/07/31/degrees-of-infinity/

Cantor's paradise is an expression used by David Hilbert (1926, page 170) in describing set theory and infinite cardinal numbers developed by Georg Cantor. The context of Hilbert's comment was his opposition to what he saw as L. E. J. Brouwer's reductive attempts to circumscribe what kind of mathematics is acceptable; see Brouwer–Hilbert controversy.

"From the paradise that Cantor created for us no-one shall be able to expel us." Hilbert (1926, p. 170), in a lecture given in Münster to Mathematical Society of Westphalia on 4 June 1925

https://en.m.wikipedia.org/wiki/Cantor%27s_paradise

Georg Ferdinand Ludwig Philipp Cantor (3 March [O.S. 19 February] 1845 – 6 January 1918) was a mathematician who played a pivotal role in the creation of set theory, which has become a fundamental theory in mathematics. Cantor established the importance of one-to-one correspondence between the members of two sets, defined infinite and well-ordered sets, and proved that the real numbers are more numerous than the natural numbers. Cantor's method of proof of this theorem implies the existence of an infinity of infinities. He defined the cardinal and ordinal numbers and their arithmetic. Cantor's work is of great philosophical interest, a fact he was well aware of.

https://en.m.wikipedia.org/wiki/Georg_Cantor

Take the set of natural numbers {1, 2, 3, 4, … }. How many members are there in the set? Infinitely many, right? OK, now take the set of real numbers. How many members are there in the set? Infinitely many, right? So far so good.

Here is where it starts to get tricky. The number of members of a set is called its cardinality. If the cardinality of the natural numbers is infinity, and the cardinality of the real numbers is infinity, then do these two sets have the same cardinality? Are there the same amount of natural numbers as real numbers?

Whatever your intuition is about that last question, intuition alone will hardly do. We need to be methodical about this. So now let's begin the proof [...].

Suppose we have devised some way to list all the real numbers between 0 and 1. This list will naturally be infinitely long, and we can write each entry in an infinitely-long decimal form. Here is how it might start, and note that I have marked some digits in bold.

The digits in bold run down the diagonal of this list. Use them to construct a new real number between 0 and 1.

.1531190918...

Like all the other numbers on the list, this number will have an infinite number of digits; this is because the list is infinitely long.

Now increase each individual digit by 1. If the digit is 9, make it 0:

.2642201029...

Is this new number on the original list?

On one hand, it must be, because the list is infinitely long and contains all the real numbers between 0 and 1. The new number is a real number between 0 and 1.

On the other hand, if we work through the number and the list methodically, we will see that it cannot be on the list. Is the new number the first number on the list? No, because the first digit of the number differs from the first digit of the first entry. Is the new number the second number on the list? No, because the second digit of the number differs from the second digit of the second entry. Is the new number the _n_th number on the list? No, because the _n_th digit of the number differs from the _n_th digit of the _n_th entry. Therefore, the new number, a real number between 0 and 1, cannot appear on an infinitely-long list of real numbers between 0 and 1.

We have contradicted ourselves, and that concludes the proof. It is impossible, even in principle, to denumerate the real numbers between 0 and 1. There are not just infinitely many reals between 0 and 1—there are uncountably many. There are so many that they cannot all be placed in correspondence with the natural numbers (i.e., given a spot on an infinitely-long list). Fun, right?
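A finite snapshot of the construction can be run in Python. The ten digit strings below are invented (their off-diagonal digits are zeroed for readability); only the diagonal is fixed, to match the article's example .1531190918...:

```python
listing = [
    "1000000000",
    "0500000000",
    "0030000000",
    "0001000000",
    "0000100000",
    "0000090000",
    "0000000000",
    "0000000900",
    "0000000010",
    "0000000008",
]

# Read off the diagonal, then increase each digit by 1, with 9 wrapping to 0.
diagonal = "".join(row[i] for i, row in enumerate(listing))     # "1531190918"
new_digits = "".join(str((int(d) + 1) % 10) for d in diagonal)

# The new number differs from the nth entry at the nth digit, so it cannot
# appear anywhere on the list.
for i, row in enumerate(listing):
    assert new_digits[i] != row[i]
print("0." + new_digits)  # 0.2642201029
```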

You may be wondering if there is a correspondence between the cardinality of the set of natural numbers and the cardinality of the set of real numbers. You’re in luck; there is! The cardinality of the set of natural numbers is of course infinity, but it is a kind of infinity that is called aleph-null (ℵ₀). The cardinality of the set of real numbers (“the cardinality of the continuum”) is 2^ℵ₀^. That is a very big number.

Finally, if you’re still with me, I’ll offer a bonus. We can divide the real numbers into two sets: the algebraic numbers, which are numbers that can be solutions to one-variable polynomial equations with rational or integer coefficients, and transcendental numbers, which cannot be. It turns out that there are countably many algebraic numbers. You might already see where this is going. If there are 2^ℵ₀^ real numbers, and aleph-null algebraic numbers, how many transcendental numbers are there? 2^ℵ₀^ – ℵ₀, which is exactly equal to 2^ℵ₀^. (If you have difficulty seeing this, try 100 instead of aleph-null: 2^100^ – 100 is very close to 2^100^, no?). What this means is that “almost all” numbers are transcendental.

https://www.elidourado.com/p/cantors-diagonal-argument

For convenience, we are going to give the proof in terms of the binary system (see Number Systems), though it applies equally well for the decimal system. We will use the binary system because this makes everything a lot simpler – in the binary system, the only symbols used to define numbers are 0, 1 and the binary point, e.g.

0.1 in decimal notation means a tenth; in binary notation it means a half,
0.01 in decimal notation means a hundredth; in binary notation it means a quarter,
0.001 in decimal notation means a thousandth; in binary notation it means one-eighth. And so on. For example, 7⁄16 in the decimal system comes out in binary notation as 0.0111.

Obviously, some fractions, such as 1⁄3, cannot be written as a finite string in binary notation; the binary expansion continues infinitely with a repeating string of digits (in the case of 1⁄3 this repeating bit is 01, giving 0.0101, 0.010101, 0.01010101, and so on). Similarly, no irrational number can be represented by a finite string of the 0s and 1s of binary notation.
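Binary expansions like these can be generated by repeated doubling. A Python sketch (mine, not from the article), using exact rational arithmetic so the repeating pattern of 1⁄3 comes out cleanly:

```python
from fractions import Fraction

# First n digits after the binary point of x, for 0 <= x < 1: double x,
# emit 1 and subtract 1 whenever the result reaches 1, else emit 0.
def binary_digits(x, n):
    digits = []
    for _ in range(n):
        x *= 2
        if x >= 1:
            digits.append("1")
            x -= 1
        else:
            digits.append("0")
    return "".join(digits)

print(binary_digits(Fraction(7, 16), 4))  # 0111  (terminates)
print(binary_digits(Fraction(1, 3), 8))   # 01010101  (repeats)
```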

Again, for convenience, when the term ‘list’ or ‘enumeration’ of real numbers is used below, the term is used to indicate a function that gives a one-to-one correspondence between natural numbers and real numbers, that is, each real number is uniquely matched to one natural number. Since it refers to an infinite number of things, such a ‘list’/‘enumeration’ cannot be written down - but there can be definitions that define infinitely long lists.

Having dealt with the preliminaries, below is a typical presentation of the Diagonal proof itself:

  1. To prove: that for any list of real numbers between 0 and 1, there exists some real number that is between 0 and 1, but is not in the list.

  2. Obviously we can have lists that include at least some real numbers. In such lists, the first real number in the list is the number that is matched to the number one, the second real number in the list is the number that is matched to the number two, and so on. For any such list, we call the list a function, and we give it the name r(x). So r(1) means the real number matched up to the number 1, while r(2) means the real number matched up to the number 2, and r(17) means the real number matched up to the number 17. And so on. There can be many such lists, and we know that we can have lists that have some finite quantity of real numbers, and some lists that have an infinite quantity of real numbers. We will later address the question of whether there can be such a list that includes every real number.

  3. Now, we suppose that the beginnings of the binary expansions of some list of real numbers are as follows (of course, we cannot actually write down infinitely long binary expansions):

r(1) = 0.101011110101 …

r(2) = 0.00010100011 …

r(3) = 0.0010111011110 …

r(4) = 0.111101010111 …

r(5) = 0.10111101111 …

r(6) = 0.11101011111001 …

  4. For any list of real numbers, there exists a number (which we will call d) which is defined by the following rule. We start off with a zero followed by a point, viz: ‘0.’ Then we take the first digit of the first number in the list; if the digit taken is 0 we change it to 1 and write it down; if it is 1 we change it to 0 and write it down. The changed digit is called the complement, so the complement of 0 is 1 and the complement of 1 is 0. We then take the second digit of the second number in the list and do the same, writing the changed digit after the previous one. And so on, and so on. For the first few numbers in our list above, this would work out like this (here we show the relevant digits in bold text):

r(1) = 0.101011110101 …

r(2) = 0.00010100011 …

r(3) = 0.0010111011110 …

r(4) = 0.111101010111 …

r(5) = 0.10111101111 …

r(6) = 0.11101011111001 …

  5. From this list, we obtain the following number: d = 0.010001. This is commonly called the ‘diagonal’ number. This real number d differs from every real number in the list, since it differs from each of them by at least one digit. For any finite list, the number d is a rational number, since its sequence of digits is finite. But if the list is limitless, then d is an endless expansion that is a real number. In this case, we cannot follow the instruction to write down the digits, and the number d is given only by definition - it is defined as the number whose n^th^ digit is the complement of the n^th^ digit of the n^th^ number in the list.

  6. So, given any list of real numbers we can always define another real number that is not in that list – the Diagonal number.

  7. We now assume that there can be a list that includes every real number.

  8. And now we have a contradiction – because the Diagonal number would at the same time be defined as a number that is in the list and also cannot be in the list – because it differs from every number in the list, since it is always different at the n^th^ digit.

  9. That means that the assumption that there can be a list that includes every real number (Step 7 above) is incorrect.

  10. Therefore there cannot be a list that includes every real number.

That concludes the Diagonal argument.
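The complement rule can be run mechanically on the six sample expansions. A Python sketch (mine, not part of the proof), truncating each r(n) from step 3 to its first six fractional digits:

```python
# The six binary expansions from the proof, first six digits each.
r = ["101011", "000101", "001011", "111101", "101111", "111010"]

# Complement the nth digit of the nth entry.
d = "".join("0" if row[i] == "1" else "1" for i, row in enumerate(r))
print("0." + d)  # 0.010001, the diagonal number d defined in the proof

# d differs from each r(n) at the nth digit, so it is not in the list.
for i, row in enumerate(r):
    assert d[i] != row[i]
```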

https://www.jamesrmeyer.com/infinite/diagonal-proof

He also showed that the real numbers were “non-denumerable” or “uncountable” (i.e. they contain more elements than could ever be counted), as opposed to the set of rational numbers, which he had shown was technically (even if not practically) “denumerable” or “countable”. In fact, it can be argued that there are infinitely many irrational numbers between any two rational numbers. The patternless decimals of irrational numbers fill the “spaces” between the patterns of the rational numbers.

Cantor coined the new word “transfinite” in an attempt to distinguish these various levels of infinite numbers from an absolute infinity, which the religious Cantor effectively equated with God (he saw no contradiction between his mathematics and the traditional concept of God). Although the cardinality (or size) of a finite set is just a natural number indicating the number of elements in the set, he also needed a new notation to describe the sizes of infinite sets, and he used the Hebrew letter aleph (ℵ). He defined ℵ₀ (aleph-null or aleph-nought) as the cardinality of the countably infinite set of natural numbers, and ℵ₁ (aleph-one) as the next larger cardinality, that of the uncountable set of countable ordinal numbers; and so on. Because of the unique properties of infinite sets, he showed that ℵ₀ + ℵ₀ = ℵ₀, and also that ℵ₀ × ℵ₀ = ℵ₀.
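The identity ℵ₀ + ℵ₀ = ℵ₀ rests on interleaving two countable sets into one enumeration. A finite Python illustration (my own, not from the article), using the evens and the odds:

```python
# Two countably infinite sets, truncated: the evens and the odds.
evens = [2 * n for n in range(10)]      # 0, 2, 4, ...
odds = [2 * n + 1 for n in range(10)]   # 1, 3, 5, ...

# Alternate between them: a single list enumerates both sets.
interleaved = [x for pair in zip(evens, odds) for x in pair]
print(interleaved)  # 0, 1, 2, ..., 19
```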

All of this represented a revolutionary step, and opened up new possibilities in mathematics. However, it also opened up the possibility of other infinities, for instance an infinity – or even many infinities – between the infinity of the whole numbers and the larger infinity of the decimal numbers. This idea is known as the continuum hypothesis, and Cantor believed (but could not actually prove) that there was NO such intermediate infinite set. The continuum hypothesis was one of the 23 important open problems identified by David Hilbert in his famous 1900 Paris lecture, and it remained unproved – and indeed appeared to be unprovable – for almost a century, until the work of Paul Cohen in the 1960s.

https://www.storyofmathematics.com/19th_cantor.html/

 
 

The Swallow's Tail - Series on Catastrophes, 1983 by Salvador Dali

The Swallow's Tail - Series of Catastrophes (French: La queue d'aronde - Série des catastrophes) was Salvador Dalí's last painting. It was completed in May 1983, as the final part of a series based on the mathematical catastrophe theory of René Thom.

Thom suggested that in four-dimensional phenomena, there are seven possible equilibrium surfaces, and therefore seven possible discontinuities, or "elementary catastrophes": fold, cusp, swallowtail, butterfly, hyperbolic umbilic, elliptic umbilic, and parabolic umbilic. The shape of Dalí's Swallow's Tail is taken directly from Thom's four-dimensional graph of the same title, combined with a second catastrophe graph, the s-curve that Thom dubbed 'the cusp'. Thom's model is presented alongside the elegant curves of a cello and the instrument's f-holes, which, especially as they lack the small pointed side-cuts of a traditional f-hole, equally connote the mathematical symbol for an integral in calculus.

In his speech "Gala, Velázquez and the Golden Fleece", presented upon his 1979 induction into the prestigious Académie des Beaux-Arts of the Institut de France, Dalí described Thom's theory of catastrophes as "the most beautiful aesthetic theory in the world". He also recollected his first and only meeting with René Thom, at which Thom purportedly told Dalí that he was studying tectonic plates; this provoked Dalí to question Thom about the railway station at Perpignan, France (near the Spanish border), which the artist had declared in the 1960s to be the center of the universe.

https://www.dalipaintings.com/the-swallows-tail-series-on-catastrophes.jsp

A simple example of the behaviour studied by catastrophe theory is the change in shape of an arched bridge as the load on it is gradually increased. The bridge deforms in a relatively uniform manner until the load reaches a critical value, at which point the shape of the bridge changes suddenly—it collapses. While the term catastrophe suggests just such a dramatic event, many of the discontinuous changes of state so labeled are not. The reflection or refraction of light by or through moving water is fruitfully studied by the methods of catastrophe theory, as are numerous other optical phenomena. More speculatively, the ideas of catastrophe theory have been applied by social scientists to a variety of situations, such as the sudden eruption of mob violence.

https://www.britannica.com/science/catastrophe-theory-mathematics

René Frédéric Thom (2 September 1923 – 25 October 2002) was a French mathematician, who received the Fields Medal in 1958.

He made his reputation as a topologist, moving on to aspects of what would be called singularity theory; he became world-famous among the wider academic community and the educated general public for one aspect of this latter interest, his work as the founder of catastrophe theory (later developed by Christopher Zeeman).

https://en.m.wikipedia.org/wiki/Ren%C3%A9_Thom

In mathematics, catastrophe theory is a branch of bifurcation theory in the study of dynamical systems; it is also a particular special case of more general singularity theory in geometry.

Bifurcation theory studies and classifies phenomena characterized by sudden shifts in behavior arising from small changes in circumstances, analysing how the qualitative nature of equation solutions depends on the parameters that appear in the equation. This may lead to sudden and dramatic changes, for example the unpredictable timing and magnitude of a landslide.

Catastrophe theory originated with the work of the French mathematician René Thom in the 1960s, and became very popular due to the efforts of Christopher Zeeman in the 1970s. It considers the special case where the long-run stable equilibrium can be identified as the minimum of a smooth, well-defined potential function (Lyapunov function). Small changes in certain parameters of a nonlinear system can cause equilibria to appear or disappear, or to change from attracting to repelling and vice versa, leading to large and sudden changes of the behaviour of the system. However, examined in a larger parameter space, catastrophe theory reveals that such bifurcation points tend to occur as part of well-defined qualitative geometrical structures.

In the late 1970s, applications of catastrophe theory to areas outside its scope began to be criticized, especially in biology and social sciences. Zahler and Sussmann, in a 1977 article in Nature, referred to such applications as being "characterised by incorrect reasoning, far-fetched assumptions, erroneous consequences, and exaggerated claims". As a result, catastrophe theory has become less popular in applications.

Catastrophe theory analyzes degenerate critical points of the potential function — points where not just the first derivative, but one or more higher derivatives of the potential function are also zero. These are called the germs of the catastrophe geometries. The degeneracy of these critical points can be unfolded by expanding the potential function as a Taylor series in small perturbations of the parameters.

When the degenerate points are not merely accidental, but are structurally stable, the degenerate points exist as organising centres for particular geometric structures of lower degeneracy, with critical features in the parameter space around them. If the potential function depends on two or fewer active variables, and four or fewer active parameters, then there are only seven generic structures for these bifurcation geometries, with corresponding standard forms into which the Taylor series around the catastrophe germs can be transformed by diffeomorphism (a smooth transformation whose inverse is also smooth). These seven fundamental types are now presented, with the names that Thom gave them.

Catastrophe theory studies dynamical systems that describe the evolution of a state variable x over time t:

dx/dt = −dV(x, u)/dx

In the above equation, V is referred to as the potential function, and u is often a vector or a scalar which parameterise the potential function. The value of u may change over time, and it can also be referred to as the control variable. In the following examples, parameters like a , b are such controls.

Swallowtail catastrophe

V = x^5^ + ax^3^ + bx^2^ + cx

The control parameter space is three-dimensional. The bifurcation set in parameter space is made up of three surfaces of fold bifurcations, which meet in two lines of cusp bifurcations, which in turn meet at a single swallowtail bifurcation point.

As the parameters go through the surface of fold bifurcations, one minimum and one maximum of the potential function disappear. At the cusp bifurcations, two minima and one maximum are replaced by one minimum; beyond them the fold bifurcations disappear. At the swallowtail point, two minima and two maxima all meet at a single value of x. For values of a > 0, beyond the swallowtail, there is either one maximum-minimum pair, or none at all, depending on the values of b and c. Two of the surfaces of fold bifurcations, and the two lines of cusp bifurcations where they meet for a < 0, therefore disappear at the swallowtail point, to be replaced with only a single surface of fold bifurcations remaining. Salvador Dalí's last painting, The Swallow's Tail, was based on this catastrophe.
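The fold-crossing behaviour described above can be probed numerically. Below is a rough Python sketch (the grid method and the sample parameter values are my own, not from the article): it counts the real critical points of V by locating sign changes of V'(x) = 5x^4^ + 3ax^2^ + 2bx + c, at one parameter point on each side of the bifurcation set.

```python
# Count sign changes of V'(x) on a fine grid over [lo, hi]; each sign
# change marks a critical point of the swallowtail potential V.
def num_critical_points(a, b, c, lo=-5.0, hi=5.0, steps=100_000):
    dV = lambda x: 5 * x**4 + 3 * a * x**2 + 2 * b * x + c
    count, prev = 0, dV(lo)
    for i in range(1, steps + 1):
        cur = dV(lo + (hi - lo) * i / steps)
        if prev * cur < 0:  # V' changes sign here
            count += 1
        prev = cur
    return count

print(num_critical_points(-2.0, 0.0, 0.5))  # 4: two maximum/minimum pairs
print(num_critical_points(2.0, 0.0, 1.0))   # 0: no critical points at all
```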

https://en.m.wikipedia.org/wiki/Catastrophe_theory

 
 

History

It is believed that the first definition of a conic section was given by Menaechmus (died 320 BC) as part of his solution of the Delian problem (Duplicating the cube). His work did not survive, not even the names he used for these curves, and is only known through secondary accounts. The definition used at that time differs from the one commonly used today. Cones were constructed by rotating a right triangle about one of its legs so the hypotenuse generates the surface of the cone (such a line is called a generatrix). Three types of cones were determined by their vertex angles (measured by twice the angle formed by the hypotenuse and the leg being rotated about in the right triangle). The conic section was then determined by intersecting one of these cones with a plane drawn perpendicular to a generatrix. The type of the conic is determined by the type of cone, that is, by the angle formed at the vertex of the cone: If the angle is acute then the conic is an ellipse; if the angle is right then the conic is a parabola; and if the angle is obtuse then the conic is a hyperbola (but only one branch of the curve).

Euclid (fl. 300 BC) is said to have written four books on conics but these were lost as well. Archimedes (died c. 212 BC) is known to have studied conics, having determined the area bounded by a parabola and a chord in Quadrature of the Parabola. His main interest was in terms of measuring areas and volumes of figures related to the conics and part of this work survives in his book on the solids of revolution of conics, On Conoids and Spheroids.

Diagram from Apollonius' Conics, in a 9th-century Arabic translation

The greatest progress in the study of conics by the ancient Greeks is due to Apollonius of Perga (died c. 190 BC), whose eight-volume Conic Sections or Conics summarized and greatly extended existing knowledge. Apollonius's study of the properties of these curves made it possible to show that any plane cutting a fixed double cone (two-napped), regardless of its angle, will produce a conic according to the earlier definition, leading to the definition commonly used today. Circles, not constructible by the earlier method, are also obtainable in this way. This may account for why Apollonius considered circles a fourth type of conic section, a distinction that is no longer made. Apollonius used the names 'ellipse', 'parabola' and 'hyperbola' for these curves, borrowing the terminology from earlier Pythagorean work on areas.

Pappus of Alexandria (died c. 350 AD) is credited with expounding on the importance of the concept of a conic's focus, and detailing the related concept of a directrix, including the case of the parabola (which is lacking in Apollonius's known works).

Apollonius's work was translated into Arabic, and much of his work only survives through the Arabic version. Islamic mathematicians found applications of the theory, most notably the Persian mathematician and poet Omar Khayyám, who found a geometrical method of solving cubic equations using conic sections.

A century before the more famous work of Khayyam, Abu al-Jud used conics to solve quartic and cubic equations, although his solution did not deal with all the cases.

An instrument for drawing conic sections was first described in 1000 AD by Al-Kuhi.

Table of conics, Cyclopaedia, 1728

Johannes Kepler extended the theory of conics through the "principle of continuity", a precursor to the concept of limits. Kepler first used the term 'foci' in 1604.

Girard Desargues and Blaise Pascal developed a theory of conics using an early form of projective geometry and this helped to provide impetus for the study of this new field. In particular, Pascal discovered a theorem known as the hexagrammum mysticum from which many other properties of conics can be deduced.

René Descartes and Pierre Fermat both applied their newly discovered analytic geometry to the study of conics. This had the effect of reducing the geometrical problems of conics to problems in algebra. However, it was John Wallis in his 1655 treatise Tractatus de sectionibus conicis who first defined the conic sections as instances of equations of second degree. Written earlier, but published later, Jan de Witt's Elementa Curvarum Linearum starts with Kepler's kinematic construction of the conics and then develops the algebraic equations. This work, which uses Fermat's methodology and Descartes' notation, has been described as the first textbook on the subject. De Witt invented the term 'directrix'.

Application

Conic sections are important in astronomy: the orbits of two massive objects that interact according to Newton's law of universal gravitation are conic sections if their common center of mass is considered to be at rest. If they are bound together, they will both trace out ellipses; if they are moving apart, they will both follow parabolas or hyperbolas. See two-body problem.

The reflective properties of the conic sections are used in the design of searchlights, radio-telescopes and some optical telescopes. A searchlight uses a parabolic mirror as the reflector, with a bulb at the focus; and a similar construction is used for a parabolic microphone. The 4.2 meter Herschel optical telescope on La Palma, in the Canary islands, uses a primary parabolic mirror to reflect light towards a secondary hyperbolic mirror, which reflects it again to a focus behind the first mirror.

https://en.m.wikipedia.org/wiki/Conic_section

The conic sections are the nondegenerate curves generated by the intersections of a plane with one or two nappes of a cone. For a plane perpendicular to the axis of the cone, a circle is produced. For a plane that is not perpendicular to the axis and that intersects only a single nappe, the curve produced is either an ellipse or a parabola (Hilbert and Cohn-Vossen 1999, p. 8). The curve produced by a plane intersecting both nappes is a hyperbola (Hilbert and Cohn-Vossen 1999, pp. 8-9).

The ellipse and hyperbola are known as central conics.

Because of this simple geometric interpretation, the conic sections were studied by the Greeks long before their application to inverse square law orbits was known. Apollonius wrote the classic ancient work on the subject entitled On Conics. Kepler was the first to notice that planetary orbits were ellipses, and Newton was then able to derive the shape of orbits mathematically using calculus, under the assumption that gravitational force goes as the inverse square of distance. Depending on the energy of the orbiting body, orbit shapes that are any of the four types of conic sections are possible.

A conic section may more formally be defined as the locus of a point P that moves in the plane of a fixed point F called the focus and a fixed line d called the conic section directrix (with F not on d) such that the ratio of the distance of P from F to its distance from d is a constant e called the eccentricity. If e=0, the conic is a circle, if 0<e<1, the conic is an ellipse, if e=1, the conic is a parabola, and if e>1, it is a hyperbola.
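
The focus–directrix definition above can be checked numerically. Here is a small sketch (the function name and the parabola example are our own, not from MathWorld) that computes the ratio for points on the parabola y^2^ = 4px with p = 1, whose focus is (1, 0) and whose directrix is the line x = −1:

```python
import math

def eccentricity_ratio(P, F, directrix_x):
    """Ratio of distance(P, focus) to distance(P, directrix),
    for a vertical directrix x = directrix_x."""
    d_focus = math.dist(P, F)          # distance from P to the focus F
    d_line = abs(P[0] - directrix_x)   # perpendicular distance to the directrix
    return d_focus / d_line

# Points on the parabola y^2 = 4x: focus (1, 0), directrix x = -1.
# The ratio should come out to e = 1 for every point on the curve.
for y in (0.5, 2.0, 5.0):
    P = (y * y / 4, y)
    print(round(eccentricity_ratio(P, (1, 0), -1), 12))  # 1.0 each time
```

The same function applied to points of an ellipse (with its focus and directrix) returns a constant e < 1, and applied to a hyperbola, a constant e > 1.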

https://mathworld.wolfram.com/ConicSection.html

Conic sections are the result of intersecting the surfaces of a cone (normally, a double cone) and a plane. The three common conic sections are parabola, ellipse, and hyperbola.

How is a conic section obtained?

As we have mentioned in the previous section, these three conic sections are formed by the intersection of a plane and a double cone.

The figures in the original article show how the three conic sections, or conics, are formed when a plane intersects a double cone:

  • If the intersecting plane is parallel to the cone’s slant height, the section formed will be a parabola.
  • An ellipse is the result of a tilted plane intersecting one nappe of the double cone. Circles are special types of ellipses, formed when the cone is intersected by a horizontal plane.
  • Hyperbolas are the result of the intersection between a vertical plane and both nappes of the double cone.

When the plane passes through the double cone’s vertex, the resulting sections are called degenerate conics: a single point, a single line, or a pair of intersecting lines.

What are the common parts of the conic section?

Now that we know how to construct the three conic sections, we must learn the parts that they share in common.

Each conic section will look different from the other, but they still share common parts that can be used to identify them, such as their focus, directrix, and eccentricity.

  • The focus of a conic is the fixed point from which the conic section is constructed and defined.
  • The directrix of a conic is the fixed line used to construct a particular conic section.
  • Given a point lying on the conic, the eccentricity is the ratio of the point’s distance from the focus to its distance from the directrix.

Take a look at these three conics and notice how the directrix and foci (plural of focus) behave in a parabola, ellipse, and hyperbola.

We’ll learn more about each of these conics in the sections below, but it helps to know how the focus and directrix appear in each conic.

How to identify the conic section?

There are three ways to identify a conic section: using its graph’s shape, its eccentricity, or using the coefficients of the equation representing the conic section.

Remember the examples of conic sections shown below so that you can easily identify a conic section from its graph.

Parabola

Ellipse

Hyperbola

How to identify conic sections from a general equation?

Let’s say we’re given an equation of the form shown below; there are two ways for us to identify conic sections by inspecting the coefficients’ values.

Ax^2^ + Bxy + Cy^2^ + Dx + Ey + F = 0

Method 1: Rearrange the Equation

We can rearrange the given equation and see if it can be manipulated to be similar to the standard form of the three conic sections.

  • Group the variable terms on one side of the equation and the constants on the other.
  • Complete the square whenever possible.
  • After completing the square, move any remaining constants to the constant side.
  • Compare the resulting equation with the conics’ standard forms and see which one it matches.

Method 2: Use the Coefficients of the Equations

  • Using the coefficients, particularly A, B, and C, we can immediately identify the conic by finding the value of B^2^ – 4AC.
  • If the result is negative and the conic exists, the conic is an ellipse (or a circle when B = 0 and A = C).
  • If the result is equal to zero and the conic exists, the conic is a parabola.
  • If the result is positive and the conic exists, the conic is a hyperbola.
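
The discriminant test above is easy to turn into code. This is a minimal sketch (the function name is ours, and it assumes the conic exists, i.e. is nondegenerate):

```python
def classify_conic(A, B, C):
    """Classify Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 from the sign of
    the discriminant B^2 - 4AC, assuming the conic exists."""
    disc = B * B - 4 * A * C
    if disc < 0:
        # B = 0 and A = C is the special circular case of the ellipse.
        return "circle" if B == 0 and A == C else "ellipse"
    if disc == 0:
        return "parabola"
    return "hyperbola"

print(classify_conic(1, 0, 1))   # x^2 + y^2 - r^2 = 0   -> circle
print(classify_conic(4, 0, 9))   # 4x^2 + 9y^2 - 36 = 0  -> ellipse
print(classify_conic(0, 0, 1))   # y^2 - 4x = 0          -> parabola
print(classify_conic(1, 0, -1))  # x^2 - y^2 - 1 = 0     -> hyperbola
```

Note that D, E, and F play no role in the test, which is why only A, B, and C are needed.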

Now that we know the conic sections’ most important properties, we can also identify conic sections based on the equations that represent them.

https://www.storyofmathematics.com/conic-sections/

16
 
 

Donald in Mathmagic Land is a 27-minute educational cartoon that was produced by the Walt Disney Educational Media Company during the height of the US reaction to Sputnik. It premiered on June 26, 1959 (see the theatrical release poster), on a bill with the full-length Disney film Darby O’Gill and the Little People. Two years later, Donald in Mathmagic Land became the first televised Disney cartoon to appear in color on the premier episode of the NBC-TV show Walt Disney’s Wonderful World of Color, broadcast on September 24, 1961. (Prior to this time, Disney cartoons aired in black-and-white by ABC-TV under the title The Magical World of Disney.) An immediate success with audiences, Donald in Mathmagic Land was nominated for (but did not win) an Academy Award (in 1959, for Best Documentary-Short Subject) and was recognized for outstanding achievement by the 13th Edinburgh International Film Festival (1959), the Southern California Motion Picture Council (1959), the Mexico Instituto De Cultura Cinematográfica (1962) and the International Educational Film Festival of Ministry of Education (1976).

In the film Donald in Mathmagic Land, the cartoon figure Donald Duck (voiced by actor Clarence Nash) is guided through Mathmagic Land by the unseen “True Spirit of Adventure” (voiced by actor Paul Frees). As a result of his Alice-in-Wonderland-like adventure, Donald learns about the uses of mathematics in music, games, logic, art, dance, and science, and he comes to discover the beauty of mathematics. The mathematical concepts featured in the film focus on ratios, proportion, angles and basic geometrical shapes. German physicist and science writer Dr. Heinz Haber served as a technical advisor on the mathematical and scientific content of the film. The non-speaking live billiards player featured in the angle geometry segment of the cartoon (16:49–22:15) was played by the 3-cushion billiards tournament player Roman Yanez. The cartoon was directed by Hamilton Luske. It concludes with a quote by Galileo: “Mathematics is the language with which God has written the universe.”

https://old.maa.org/press/periodicals/convergence/mathematical-treasure-donald-in-mathmagic-land

The film begins with Donald Duck, holding a hunting rifle, as he passes through a doorway to find he has entered Mathmagic Land. This "mighty strange" fantasy land contains trees with square roots, a stream flowing with numbers, and a walking pencil that plays tic-tac-toe. A geometric bird recites (almost perfectly) the first 15 digits of pi. Donald soon hears the voice of the unseen "True Spirit of Adventure", who will guide him on his journey through "the wonderland of mathematics".

Donald is initially not interested in exploring this foreign land, believing that mathematics is just for "eggheads", until "Mr. Spirit" suggests a fascinating connection between math and music. Intrigued, Donald discovers the relationships between octaves and string length which form the musical scale of today. Next, Donald finds himself in ancient Greece, where Pythagoras and his contemporaries are discovering these same relationships. Pythagoras (on the harp), a flute player, and a double bass player hold a "jam session", which Donald joins after a few moments, using a vase as a bongo drum. As the Spirit explains, Pythagoras' mathematical discoveries are the basis of today's music, and no type of music could ever have existed without "eggheads". The segment ends with a sequence of live-action musicians playing both jazz and classical music and Pythagoras' acquaintances magically disappearing.

After shaking hands with Pythagoras, who then vanishes, Donald finds on his hand a pentagram, the symbol of the secret Pythagorean society. The Spirit then shows Donald how the mysterious golden section appears in this ancient magic star. Next, the star itself is shown to contain the pattern for constructing golden rectangles many times over. According to the Spirit, the golden rectangle has influenced both ancient and modern cultures in many ways. Donald then learns how the golden rectangle appears in many ancient buildings, such as the Parthenon and the Notre Dame cathedral. Paintings such as the Mona Lisa and various sculptures such as the Venus de Milo contain several golden rectangles. The use of the golden rectangle is found in modern architecture, such as the United Nations building in New York City. Modern painters have also rediscovered the infinite mathematical magic of stars and golden rectangles.

The Spirit shows Donald how the golden rectangle and pentagram are even related to the human body and nature themselves. The human body itself contains the "ideal proportions" of the golden section. However, Donald, overinterpreting the Spirit's advice, tries to make his own cartoon body fit such a proportion, but his efforts are to no avail; he ends up "all pent up in a pentagon". The pentagram and pentagon are then shown to be found in many flowers and animals, such as the petunia, the star jasmine, the starfish, the waxflower. Then, with the help of the inside of a nautilus shell, the Spirit explains that the magic proportions of the golden section are often found in the spirals of nature's designs, quoting Pythagoras: "Everything is arranged according to number and mathematical shape."

Donald then learns that mathematics applies not only to nature, architecture, and music, but also to games that are played on geometrical surfaces, including chess, baseball, American football, basketball, hopscotch, and three-cushion billiards. Donald suggests checkers, but the Spirit does not pursue this option; Donald even volunteers tiddlywinks, which the Spirit also rejects. Instead, the Spirit challenges Donald to the ancient game of chess, as made famous by Lewis Carroll's 1871 novel Through the Looking-Glass. In this exciting and imaginative scene, Carroll is portrayed as both a writer and a mathematician. The extended billiards scene that follows features a non-speaking live actor and shows the calculations involved in the game's "diamond system"; Donald eventually learns how to do the right calculations to hit ten cushions, all while needlessly making it tough for himself.

The Spirit then asks Donald to play a mental game, but he finds Donald's mind to be too cluttered with "Antiquated Ideas", "Bungling", "False Concepts", "Superstitions", and "Confusion". After some mental housecleaning, Donald plays with a circle and a triangle in his mind. He is then able to transform them into a sphere and a cone, and he proceeds to rediscover some of man's most useful past inventions, such as the wheel, train, magnifying glass, drill, spring, propeller, and telescope. Donald then discovers that pentagrams can be drawn inside each other indefinitely, almost as if by magic. In the end, he learns that numbers provide an avenue to consider the concept of infinity itself. The Spirit states that scientific knowledge and technological advances are unlimited, and the key to unlocking the doors of the future is mathematics. By the end of the film, Donald understands and appreciates the value of mathematics and the boundless powers of human imagination and invention. The film closes with a beautiful and inspiring quotation from Galileo Galilei: "Mathematics is the alphabet with which God has written the universe."

https://en.m.wikipedia.org/wiki/Donald_in_Mathmagic_Land

17
 
 

Quadratic equations have been considered and solved since Old Babylonian times (c. 1800 BC), but the quadratic formula students memorize today is an 18th century AD development. What did people do in the meantime?

The difficulty with the general quadratic equation (ax^2^+bx+c=0 as we write it today) is that, unlike a linear equation, it cannot be solved by arithmetic manipulation of the terms themselves: a creative intervention, the addition and subtraction of a new term, is required to change the form of the equation into one which is arithmetically solvable. We call this maneuver completing the square.

In Old Babylonian Mathematics

The Yale Babylonian Collection's tablet YBC 6967, as transcribed in Neugebauer and Sachs, Mathematical Cuneiform Texts, American Oriental Society, New Haven, 1986. Size 4.5 × 6.5 cm.

The Old Babylonian tablet YBC 6967 (about 1900 BC) contains a problem and its solution. Here is a line-by-line literal translation from Jöran Friberg, A Remarkable Collection of Babylonian Mathematical Texts, (Springer, New York, 2007).

The igi.bi over the igi 7 is beyond. The igi and the igi.bi are what? You: 7 that the igi.bi over the igi is beyond to two break, then 3 30. 3 30 with 3 30 let them eat each other, then 12 15. To 12 15 that came up for you 1, the field, add, then 1 12 15. The equalside of 1 12 15 is what? 8 30. 8 30 and 8 30, its equal, lay down, then 3 30, the holder, from one tear out, to one add. One is 12, the second 5. 12 is the igi.bi, 5 the igi.

The Old Babylonians used a base-60 floating-point notation for numbers, so that the symbol corresponding to 1 can represent for example 60 or 1 or 1/60. In the context of YBC 6967, the reciprocal numbers, the igi and the igi.bi, have product 1 0. Their difference is given as 7.

Here is a diagram of the solution to the YBC 6967 problem, adapted from Eleanor Robson's "Words and Pictures: New Light on Plimpton 322" (MAA Monthly, February 2002, 105-120). Robson uses a semi-colon to separate the whole and the fractional part of a number, but this is a modern insertion for our convenience. The two unknown reciprocals are conceptualized as the sides of a rectangle of area (yellow) 1 0 [or 60 in decimal notation]. A rectangle with one side 3;30 [=3(1/2)] is moved from the side of the figure to the top, creating an L-shaped figure of area 1 0 which can be completed to a square by adding a small square of area 3;30 × 3;30 = 12;15 [=12(1/4)]. The area of the large square is 1 0 + 12;15 = 1 12;15 [=72(1/4)] with square root 8;30 [=8(1/2)]. It follows that our unknown reciprocals must be 8;30 + 3;30 = 12 and 8;30 − 3;30 = 5 respectively.

In modern notation, the YBC 6967 problem would be xy = 60, x − y =7, or x^2^ − 7x − 60 = 0. In this case the term to be added in completing the square is b^2^ /( 4a^2^ ) = 49/4 = 12(1/4) corresponding exactly to the area of the small square in the diagram.
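
The YBC 6967 recipe translates step by step into an algorithm. Here is a sketch in modern decimal arithmetic (the function name is ours) for the system xy = c, x − y = d:

```python
import math

def babylonian_rectangle(c, d):
    """Solve x*y = c, x - y = d by the YBC 6967 completing-the-square recipe."""
    half = d / 2                     # "to two break": 3;30
    square = half * half             # "let them eat each other": 12;15
    side = math.sqrt(square + c)     # "the equalside of 1 12;15": 8;30
    return side + half, side - half  # add / "tear out" the half-difference

print(babylonian_rectangle(60, 7))  # -> (12.0, 5.0), the igi.bi and the igi
```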

This tablet, and the several related ones from the same period that exist in various collections (they are cataloged in Friberg's book mentioned above), are significant because they hold a piece of real mathematics: a calculation that goes well beyond tallying to a creative attack on a problem. It should also be noted that none of these tablets contains a figure, even though Old Babylonian tablets often have diagrams. It is as if those mathematicians thought of "breaking," "laying down" and "tearing out" as purely abstract operations on quantities, despite the geometrical/physical language and the clear (to us) geometrical conceptualization.

In Islamic Mathematics

Solving quadratic equations by completing the square was treated by Diophantus (c. 200–c. 285 AD) in his Arithmetica, but the explanations are in the six lost books of that work. Here we'll look at the topic as covered by Muhammad ibn Musa, born in Khwarazm (Khiva in present-day Uzbekistan) and known as al-Khwarizmi, in his Compendium on Calculation by Completion and Reduction dating to c. 820 AD. (I'm using the translation published by Frederic Rosen in 1831). Negative numbers were still unavailable, so al-Khwarizmi, to solve a general quadratic, has to consider three cases. In each case he supposes a preliminary division has been done so the coefficient of the squares is equal to one ("Whenever you meet with a multiple or sub-multiple of a square, reduce it to the entire square").

  1. "roots and squares are equal to numbers" [x^2^+bx=a]

  2. "squares and numbers are equal to roots" [x^2^+a=bx]

  3. "roots and numbers are equal to squares" [x^2^=bx+a]

Case 1. al-Khwarizmi works out a specific numerical example, which can serve as a template for any other equation of this form: "what must be the square which, when increased by ten of its roots, amounts to thirty-nine."

"The solution is this: you halve the number of the roots, which in the present case equals five. This you multiply by itself; the product is twenty-five. Add this to thirty-nine, the sum is sixty-four. Now take the root of this, which is eight, and subtract from it half the number of the roots, which is five; the remainder is three. This is the root of the square which you sought for; the square itself is nine."
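
Al-Khwarizmi's Case 1 recipe is itself an algorithm; this sketch (the function name is ours) follows his steps literally for x^2^ + bx = a:

```python
import math

def roots_and_squares(b, a):
    """Case 1: x^2 + b*x = a. Halve the number of the roots, multiply it by
    itself, add the number, take the root, then subtract the half."""
    half = b / 2
    return math.sqrt(half * half + a) - half

print(roots_and_squares(10, 39))  # -> 3.0, "the square itself is nine"
```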

Note that this is exactly the Old Babylonian recipe, updated from x(x+7)=60 to x^2^+10x=39, and that the figure Eleanor Robson uses for her explanation is essentially identical to the one al-Khwarizmi gives for his second demonstration, reproduced here:

"We proceed from the quadrate AB, which represents the square. It is our next business to add to it the ten roots of the same. We halve for this purpose the ten, so it becomes five, and construct two quadrangles on two sides of the quadrate AB, namely, G and D, the length of each of them being five, as the moiety of the ten roots, whilst the breadth of each is equal to a side of the quadrate AB. Then a quadrate remains opposite the corner of the quadrate AB. This is equal to five multiplied by five: this five being half of the number of roots which we have added to each side of the first quadrate. Thus we know that the first quadrate, which is the square, and the two quadrangles on its sides, which are the ten roots, make together thirty-nine. In order to complete the great quadrate, there wants only a square of five multiplied by five, or twenty-five. This we add to the thirty-nine, in order to complete the great square SH. The sum is sixty-four. We extract its root, eight, which is one of the sides of the great quadrangle. By subtracting from this the same quantity which we have before added, namely five, we obtain three as the remainder. This is the side of the quadrangle AB, which represents the square; it is the root of this square, and the square itself is nine."

Case 2. "for instance, 'a square and twenty-one in numbers are equal to ten roots of the same square.'"

Solution: Halve the number of the roots; the moiety is five. Multiply this by itself; the product is twenty-five. Subtract from this the twenty-one which are connected with the square; the remainder is four. Extract its root; it is two. Subtract this from the moiety of the roots, which is five; the remainder is three. This is the root of the square you required, and the square is nine.

Here is a summary of al-Khwarizmi's demonstration. The last of the four figures appears (minus the modern embellishments) in Rosen, p. 18.

The problem set up geometrically. I have labeled the unknown root x for modern convenience. The square ABCD has area x^2^, the rectangle CHND has area 10x, the rectangle AHNB has area 21, so x^2^+21=10x.

The side CH is divided in half at G, so the segment AG measures 5−x. The segment TG parallel to DC is extended by GK with length also 5−x. Al-Khwarizmi says this is done "in order to complete the square."

The segment TK then measures 5, so the figure KMNT, obtained by drawing KM parallel to GH and adding MH, is a square with area 25.

Measuring off KL equal to KG, and drawing LR parallel to KG leads to a square KLRG. Since HR has length 5−(5−x)=x the rectangles LMHR and AGTB have the same area, so the area of the region formed by adding LMHR to GHNT is the same as that of the rectangle formed by adding AGTB to GHNT, i.e. 21. And since that region together with the square KLRG makes up the square KMNT of area 25, it follows that the area of KLRG is 25−21=4, and that its side-length 5−x is equal to 2. Hence x=3, and the sought-for square is 9.

Al-Khwarizmi remarks that if you add that 2 to the length of CG then "the sum is seven, represented by the line CR, which is the root to a larger square," and that this square is also a solution to the problem.

Case 3. Example: "Three roots and four simple numbers are equal to a square."

Solution: "Halve the roots; the moiety is one and a half. Multiply this by itself; the product is two and a quarter. Add this to the four; the sum is six and a quarter. Extract its root; it is two and a half. Add this to the moiety of the roots, which was one and a half; the sum is four. This is the root of the square, and the square is sixteen."
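
The recipes for Cases 2 and 3 can be sketched the same way (function names ours); note that Case 2 yields two positive roots, as al-Khwarizmi himself observes at the end of his Case 2 demonstration:

```python
import math

def squares_and_numbers(b, a):
    """Case 2: x^2 + a = b*x. The moiety of the roots, minus and plus
    the root of (moiety squared minus the number)."""
    half = b / 2
    u = math.sqrt(half * half - a)
    return half - u, half + u

def roots_and_numbers(b, a):
    """Case 3: x^2 = b*x + a. Moiety squared plus the number,
    take the root, then add the moiety."""
    half = b / 2
    return math.sqrt(half * half + a) + half

print(squares_and_numbers(10, 21))  # -> (3.0, 7.0), the two roots of Case 2
print(roots_and_numbers(3, 4))      # -> 4.0, "the square is sixteen"
```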

As above, we summarize al-Khwarizmi's demonstration. The last figure minus decoration appears on Rosen, p. 20.

We represent the unknown square as ABDC, with side-length x. We cut off the rectangle HRDC with side-lengths 3 and x. Since x^2^=3x+4 the remaining rectangle ABRH has area 4.

Halve the side HC at the point G, and construct the square HKTG. Since HG has length 1(1/2), the square HKTG has area 2(1/4).

Extend CT by a segment TL equal to AH. Then the segments GL and AG have the same length, so drawing LM parallel to AG gives a square AGLM. Now TL = AH = MN and NL = HG = GC = BM, so the rectangles MBRN and KNLT have equal area, and so the region formed by AMNH and KNLT has the same area as ABRH, namely 4. It follows that the square AGLM has area 4+2(1/4)=6(1/4) and consequently side-length AG = 2(1/2). Since GC = 1(1/2), it follows that x=2(1/2)+1(1/2)=4.

Please read the full article

https://www.ams.org/publicoutreach/feature-column/fc-2020-11

A Simple Proof of the Quadratic Formula

The Quadratic Formula was a remarkable triumph of early mathematicians, marking the completion of a long quest to solve arbitrary quadratic equations, with a storied history stretching as far back as the Old Babylonian Period around 2000–1600 B.C. Over four millennia, many recognized names in mathematics left their mark on this topic, and the formula became a standard part of a first course in Algebra. However, it is unfortunate that for billions of people worldwide, the quadratic formula is also their first (and perhaps only) experience of a rather complicated formula which they must memorize. Countless mnemonic techniques abound, from stories of negative bees considering whether or not to go to a radical party, to songs set to the tune of Pop Goes the Weasel. A derivation by completing the square is usually included in the curriculum, but its computations are somewhat messy, and challenging for first-time Algebra learners to follow. Indeed, the concept of completing the square itself is a significant leap of insight, discovered by ancient masters. This article introduces a surprisingly simple derivation of the quadratic formula, which also produces a computationally efficient, natural, and easy-to-remember algorithm for solving general quadratic equations. The author would actually be very surprised if this approach has entirely eluded human discovery until the present day, given the 4,000 years of history on this topic, and the billions of people who have encountered the formula and its proof. Yet this technique is certainly not widely taught or known (the author could find no evidence of it in English sources), and so this article seeks at the very least to popularize a delightful alternative approach for solving quadratic equations, which is practical for integration into all mainstream curricula.
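
In outline, the paper's derivation for a monic quadratic x^2^ + Bx + C rests on the observation that the two roots average to −B/2, so they can be written as −B/2 ± u; their product (−B/2 + u)(−B/2 − u) = B^2^/4 − u^2^ must equal C, which determines u with no completing-the-square step. A sketch (the function name is ours, and complex arithmetic is used so it works even when the roots are not real):

```python
import cmath

def average_method_roots(B, C):
    """Roots of x^2 + B*x + C = 0: the roots average to -B/2,
    so write them as -B/2 +/- u, where B^2/4 - u^2 = C fixes u."""
    u = cmath.sqrt(B * B / 4 - C)
    return -B / 2 + u, -B / 2 - u

print(average_method_roots(-8, 12))  # x^2 - 8x + 12 = 0 -> roots 6 and 2
```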

https://arxiv.org/pdf/1910.06709v1

18
 
 

Creator: Patrice Jeener

Dimensions & technique

Height: 16 cm; Width: 14 cm (copper plate)

Height: 25 cm; Width: 16.5 cm (sheets #1 and #2)

Burin engraving on copper

2 copies

19
Flap Happened (images-des-maths.pages.math.cnrs.fr)
 
 

Translation of part of an article by Etienne Ghys dating from around 2009, I suppose.

Since the quotes from Edward Lorenz are here a translation into English from a French translation, there may be a noticeable lack of precision compared to the original quotes. If you have the real ones, please post them in the comments.

Meteorology studies a phenomenon of inextricable complexity: the movement of the atmosphere. The equation that governs this movement has been known for a long time: it is the Navier-Stokes equation. But knowing how to write an equation does not mean knowing how to solve it! Let's think a little about the amount of information needed to describe the atmosphere: we need to know the temperature, wind speed, atmospheric pressure, humidity, etc., not only in a given place but also in all places on the globe! Having an exact knowledge of this data is simply impossible: it requires an infinite amount of data, most of which is inaccessible.

Edward Lorenz was a meteorological theorist, with a mathematical background, who recently passed away. In 1962, he had the idea of ​​caricaturing the Navier-Stokes equation, simplifying it to the extreme, to make it "as if" the atmosphere depended only on three parameters, whereas it would require an infinite number! Simplifying a complicated problem in the hope that it will keep the essence of the phenomenon studied: this is a mathematician's activity. And in his "atrophied atmosphere" reduced to its three coordinates, E. Lorenz can run his computer and calculate the numerical solutions that are supposed to describe the movement. Imagine Lorenz's computer, with its small capacity, in 1962! It was then that he found "experimentally" that the slightest change in "his toy atmosphere", for example adding 0.0000001 to one of the three coordinates, causes a considerable change in the atmospheric movement after a relatively short time. This is the phenomenon of "sensitive dependence on initial conditions", the paradigm of chaos theory.

Look at the picture. It represents a trajectory of the simplified Lorenz equation, in three-dimensional space. These curves spin like crazy, sometimes to the left, sometimes to the right, and it seems impossible to predict whether a turn to the right will be followed by another turn to the right or to the left. And yet, for a given initial condition, for a given atmosphere, there is a well-defined future; determinism is not called into question. However, two points close in three-dimensional space, so close that they may not be distinguished in the figure, define trajectories that will start out close but may end up separating significantly: one to the left and the other to the right! Thus, if we know a point with a certain uncertainty, however small, the prediction of the future becomes illusory.
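
This sensitivity is easy to reproduce. Below is a sketch (our own code, with the standard parameters σ = 10, ρ = 28, β = 8/3 and a crude fixed-step RK4 integrator) that follows two copies of Lorenz's three-parameter system whose starting points differ by 0.0000001 in one coordinate:

```python
def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One RK4 step of the Lorenz system:
    dx = sigma*(y - x), dy = x*(rho - z) - y, dz = x*y - beta*z."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

    k1 = f(state)
    k2 = f(tuple(s + dt / 2 * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + dt / 2 * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def separation(steps, eps=1e-7):
    """Distance between two trajectories whose initial x-coordinates differ by eps."""
    a, b = (1.0, 1.0, 1.0), (1.0 + eps, 1.0, 1.0)
    for _ in range(steps):
        a, b = lorenz_step(a), lorenz_step(b)
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

print(separation(1000))   # t = 1:  the two states are still almost identical
print(separation(30000))  # t = 30: the separation is of the order of the attractor
```

The initial discrepancy grows roughly exponentially until it saturates at the size of the attractor, which is exactly the "sensitive dependence on initial conditions" described above.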

In 1972, Lorenz gave a lecture with a magnificent title that perfectly sums up this idea. [...]:

"Does the flap of a butterfly's wings in Brazil set off a tornado in Texas?"

The butterfly effect was born!

Why did we have to wait for Lorenz to make this concept public?

Lorenz was not the first to understand this limitation in determinism. Henri Poincaré and Jacques Hadamard, at the beginning of the twentieth century, had understood this in a slightly different context: the movement of celestial bodies may be sensitive to the initial conditions... Lorenz knew this well, and his article cites his sources extensively.

Perhaps Hadamard and Poincaré did not know how to "find the words" and they were content to write incomprehensible mathematical articles? This interpretation does not hold. Both made efforts to popularise their ideas. Poincaré wrote, for example, in Science and Hypothesis, a popular work with a circulation of hundreds of thousands of copies:

"A tenth of a degree more or less at any point, the cyclone breaks here and not there, and it spreads its ravages on lands that it would have spared."

The butterfly is not there, but the cyclone is! Hadamard and Poincaré were probably too far ahead of their time, and society was not ready for this profound change in the concept of determinism. The physics of the early twentieth century represents the triumph of the science of determinism, inherited from Newton and Laplace. Everything is calculated, everything is predicted, and for what we cannot predict, we are confident that it is only a matter of time and that physics or mathematics will be able to answer. That was without reckoning with the quantum and relativistic revolutions, which shook many preconceived ideas ... In 1972, public opinion was more open to these new ideas, and without engaging in historico-sociologico-philosophical discussions, the idea that the slightest butterfly, and why not my humble person, could have an influence on the overall course of the world around me, was much better perceived in 1972 than in 1900.

The effectiveness of a butterfly in Texas?

But let's not forget that Lorenz drew his conclusions from examining an almost absurd simplification of the “real equation” that governs atmospheric movement. Does the butterfly effect have an impact on meteorology? Lorenz does not commit himself on this point. His aim is to explain that a natural phenomenon, such as meteorology, could be sensitive to initial conditions and that this could have consequences for the impossibility of medium-term weather forecasting. Let's give Lorenz credit for popularizing this simple idea that if the future is determined by the past, it may not be in such a naive way as previously thought. Even if the butterfly in Brazil proves to be powerless, there are many other areas of science where this idea can be applied. We have talked about planets, but some people do not hesitate to talk about history, politics, finance, etc.

Thanks to Poincaré, Hadamard, and Lorenz, our understanding of determinism has changed. We know that the present determines the future, but we also know that an imperfect knowledge of the present, as is almost always the case, makes determining the future illusory. It took a century for this simple but fundamental idea to be assimilated, unfortunately often imperfectly, by the public and even by scientists.

Lorenz's message

Here are two quotes from Lorenz, simplified slightly. The first has been understood:

“If the flapping of a butterfly's wings can cause a hurricane, the same is true for all the other wing beats of the same butterfly, but also for the wing beats of millions of other butterflies, not to mention the influence of the activities of countless other more powerful creatures, such as humans, for example!”

The second, however, went unnoticed:

“I propose that over the years, small disturbances do not change the frequency of events such as hurricanes: the only thing they can do is change the order in which these events occur.”

In short, even if meteorologists cannot predict the weather in Lyon in a month's time, it should be possible to predict averages and frequencies of meteorological events with a high degree of accuracy in a given location over a long period of time. Of course, this type of prediction is more modest, but it is often just as useful. Lorenz's second idea reframes the role of the forecaster.

Today

Of course, these ideas go far beyond the specific case of meteorology. A scientific theory cannot be based on a negative principle such as the impossibility of predicting the future; it must propose a method for overcoming this difficulty. Today, the mathematical theory that addresses these issues is called dynamical systems theory, and when it takes the perspective of Lorenz's second quote, it is called ergodic theory: the aim is then to understand not specific trajectories, but frequencies and averages. This is a fascinating and flourishing mathematical theory, especially since the 1970s!

A battle is raging among weather specialists and Navier-Stokes equation experts over whether the butterfly in Brazil influences Texas. The question is whether Lorenz's approximations are justified in the case of the atmosphere. To answer it, mathematicians need to better understand dynamical systems that depend on a large number of dimensions (or even infinitely many). A recent article is entitled “The butterfly effect no longer exists!”, but other authors criticize the assumptions it makes. Of course, physicists and meteorologists must also be consulted: no mathematical theory can be applied to a concrete situation unless it can be verified that its assumptions are satisfied in practice. So, does the butterfly effect exist “in reality”? Let's leave the mathematicians to work with their physicist colleagues; they may soon have an answer for us. For example, the fact that the Lorenz equation, simplified to only three dimensions, actually satisfies Lorenz's second quote is a very recent, purely mathematical result: mathematicians say that the Lorenz equation has a “physical measure.” And this difficult mathematical result is not, a priori, related to the question of whether the Lorenz equation accounts for the movement of the atmosphere.

Even if the butterfly effect did not exist in the atmosphere, it would still remain a rich and powerful mathematical idea. The theory of dynamical systems is not limited to describing the atmosphere. As is often the case in mathematics, an example has become the seed of a theory whose ambition is to understand a much broader field than initially thought, and to establish connections with other areas that seemed far removed. The concept of chaos, which emerged a century ago for reasons related to celestial mechanics, has been enriched by the example of turbulence in the atmosphere and has invaded a large part of mathematics, including even number theory, which seems so “static” and immutable... See, for example, an illustration in this article.

Celestial mechanics, meteorology, and number theory have therefore recently been united in common methods. Poincaré warned us: “To do mathematics is to give the same name to different things.” Today, chaos means many things, far more than Poincaré or Lorenz could have imagined.


On this slide we show the three-dimensional unsteady form of the Navier-Stokes Equations. These equations describe how the velocity, pressure, temperature, and density of a moving fluid are related. The equations were derived independently by G.G. Stokes, in England, and M. Navier, in France, in the early 1800s. The equations are extensions of the Euler Equations and include the effects of viscosity on the flow. These equations are very complex, yet undergraduate engineering students are taught how to derive them in a process very similar to the derivation that we present on the conservation of momentum web page.

The equations are a set of coupled differential equations and could, in theory, be solved for a given flow problem by using methods from calculus. But, in practice, these equations are too difficult to solve analytically. In the past, engineers made further approximations and simplifications to the equation set until they had a group of equations that they could solve. Recently, high speed computers have been used to solve approximations to the equations using a variety of techniques like finite difference, finite volume, finite element, and spectral methods. This area of study is called Computational Fluid Dynamics or CFD.

The Navier-Stokes equations consist of a time-dependent continuity equation for conservation of mass, three time-dependent conservation of momentum equations, and a time-dependent conservation of energy equation. There are four independent variables in the problem: the x, y, and z spatial coordinates of some domain, and the time t. There are six dependent variables: the pressure p, density r, and temperature T (which is contained in the energy equation through the total energy Et), and the three components of the velocity vector: the u component in the x direction, the v component in the y direction, and the w component in the z direction. All of the dependent variables are functions of all four independent variables. The differential equations are therefore partial differential equations and not the ordinary differential equations that you study in a beginning calculus class.

https://www.grc.nasa.gov/www/k-12/airplane/nseqs.html
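Since the full equations resist analytical solution, the finite-difference idea mentioned in the NASA excerpt is easiest to see on a toy problem. The sketch below is my own minimal illustration, not from the NASA page: it applies an explicit finite-difference (FTCS) scheme to the 1-D diffusion equation u_t = ν u_xx, the simplest relative of the viscous term in Navier-Stokes.

```python
import numpy as np

# Explicit finite-difference (FTCS) scheme for the 1-D diffusion equation
# u_t = nu * u_xx -- a toy stand-in for the viscous term of Navier-Stokes.
nx = 101                # number of grid points
dx = 0.01               # grid spacing
nu = 0.1                # viscosity (diffusion coefficient)
dt = 0.0004             # r = nu*dt/dx**2 = 0.4 <= 0.5, the FTCS stability limit

u = np.zeros(nx)
u[40:60] = 1.0          # initial square "hot" patch; boundaries held at zero

for _ in range(500):    # march forward in time
    # second central difference in space, forward Euler in time
    u[1:-1] += nu * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

print("peak after diffusion:", u.max())   # strictly below the initial 1.0
```

The time step is deliberately chosen so that ν·dt/dx² ≤ 1/2; violating this bound makes the explicit scheme blow up, a small taste of why CFD practitioners care so much about their discretizations.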

A Yale-led study warns that global climate change may have a devastating effect on many butterfly populations worldwide, turning their species-rich, mountain habitats from refuges into traps.

Think of it as the “butterfly effect” — the idea that something as small as the flapping of a butterfly’s wings can eventually lead to a major event such as a hurricane — in reverse.

https://news.yale.edu/2025/03/31/new-warnings-butterfly-effect-reverse

submitted 1 month ago* (last edited 1 week ago) by xiao@sh.itjust.works to c/Math_history@sh.itjust.works

The Calculus of Love (short)

Ambition collides with logic in the quest to solve a 250-year-old mathematical puzzle. Starring Keith Allen.

Writer/ Director: Dan Clifton


A professor who is obsessed with proving Goldbach's Conjecture challenges a class of graduate students to make any progress on it. But is he truly motivated by a love of pure mathematics and its search for truth, or will baser human emotions get in the way when one of the students seems to succeed?
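For readers who want to see the conjecture in action: Goldbach's Conjecture asserts that every even integer greater than 2 is the sum of two primes, and it is easy to test by brute force for small even numbers. The sketch below is a minimal illustration unrelated to the film's fictional proof; the helper names are my own.

```python
def is_prime(n):
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_pair(n):
    """Return a pair of primes summing to the even number n, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Verified for every even n in a small range; the conjecture itself
# remains unproven in general.
assert all(goldbach_pair(n) is not None for n in range(4, 10_000, 2))
print(goldbach_pair(100))   # (3, 97)
```

Checking a few thousand cases takes a fraction of a second; proving the statement for all even numbers is, of course, the famously open part.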

In an interview, the writer/director says "I've made a lot of films about science and with scientists. I've always been interested in the idea that science is a noble pursuit for the truth, but that the pursuers — scientists themselves — suffer from all the usual human flaws. A character's obsession with proving an unsolved mathematical conjecture felt like a good way to dramatise this conflict."

There are lots of things I like about this movie, and only a few things I don't like. As it is only 15 minutes long and free, I suggest you watch it before reading on because my discussion will include a few spoilers.

The film is well acted and directed; I can see why it won some awards at short film festivals. The film touches on (but does not say much about) the important topic of sexism in mathematics. And, although I hope nobody thinks this is a common or even realistic situation, the idea of a math graduate course that is so challenging that 70% of the students will fail is intriguing.

Spoiler

Finally, of course, I vicariously enjoyed the sweet revenge as the heroine of the film gets even with this jerk of a professor.

On the other hand, there is a noticeable shortage of both calculus and love in this film, and so I'm not sure I like the title. More significantly, I have two big problems with the idea that a young woman with a proof of Goldbach's Conjecture would need help from a professor in order to get recognition. One problem is that I don't believe it is true and so it spoils the story for me. After all, she could just go get it published without involving the professor. Couldn't she get better revenge by actually getting the fame and recognition he so desires? And, my other problem with this is that I'm afraid it will give viewers a misconception of the field of mathematics. One of the wonderful things about math is that you don't have to be part of some club to get recognized for making an important discovery. If anyone had a proof (or disproof) of Goldbach's Conjecture it would not matter who they were. (Consider, for example, the case of Yitang Zhang).

https://kasmana.people.charleston.edu/MATHFICT/mfview.php?callnumber=mf1158

Yitang Zhang (Chinese: 张益唐; born February 5, 1955) is a Chinese-American mathematician primarily working on number theory and a professor of mathematics at the University of California, Santa Barbara since 2015. In 2025, he was appointed a professor at Sun Yat-sen University.

Zhang was born in Shanghai, China, with his ancestral home in Pinghu, Zhejiang. He lived in Shanghai with his grandmother until he went to Peking University. At around the age of nine, he found a proof of the Pythagorean theorem. He first learned about Fermat's Last Theorem and Goldbach's conjecture when he was 10. During the Cultural Revolution, he and his mother were sent to the countryside to work in the fields. He worked as a laborer for 10 years and was unable to attend high school. After the Cultural Revolution ended, Zhang entered Peking University in 1978 as an undergraduate student and received a Bachelor of Science in mathematics in 1982. He became a graduate student of Professor Pan Chengbiao, a number theorist at Peking University, and obtained a Master of Science in mathematics in 1984.

After receiving his master's degree in mathematics, with recommendations from Professor Ding Shisun, the President of Peking University, and Professor Deng Donggao, chair of the university's Math Department, Zhang was granted a full scholarship at Purdue University. Zhang arrived at Purdue in January 1985, studied there for six and a half years, and obtained his PhD in mathematics in December 1991.

After some years, Zhang managed to find a position as a lecturer at the University of New Hampshire, where he was hired by Kenneth Appel in 1999. Prior to getting back to academia, he worked for several years as an accountant and a delivery worker for a New York City restaurant. He also worked in a motel in Kentucky and in a Subway sandwich shop. A profile published in Quanta Magazine reports that Zhang used to live in his car during the initial job-hunting days. He served as lecturer at UNH from 1999 until around January 2014, when UNH appointed him to a full professorship as a result of his breakthrough on prime numbers. Zhang stayed for a semester at the Institute for Advanced Study in Princeton, NJ, in 2014, and he joined the University of California, Santa Barbara in fall 2015. He took a full-time position at Sun Yat-sen University in Guangzhou, China on June 27, 2025.

https://en.m.wikipedia.org/wiki/Yitang_Zhang

This painting was discovered at the Astana Graves, which was the main burial site for aristocrats of the Turpan region from the 3rd - 8th centuries. The two conjoined figures are Fuxi and Nuwa, a brother and sister who, according to a Chinese foundation myth, were the only survivors of a great flood. Charged with repopulating the world, Fuxi and Nuwa created vast numbers of clay figures, which they were able to bring to life with some divine assistance.

The Turpan area was introduced to Chinese Han culture in its early history, so burial objects discovered in the region often display a strong Chinese influence. The iconic figures are outlined with clear brush strokes and colored with thick red and white pigments. They are depicted with human upper bodies, but their lower bodies are serpentine. They are holding a compass and a ruler (respectively), which are symbols related to the traditional Chinese understanding of the universe, in which Heaven is round and the Earth is square. Behind them are the sun, the moon, and various constellations, as a microcosm of the universe.

The painting has several tiny holes along the edges, which are probably nail holes from when it was tacked onto a ceiling. The painting lacks detail, but the vivid colors and balanced composition make it a notable piece of art, while the subject matter makes it a valuable archeological artifact.

Materials: Silk fabric, hemp fabric

Location: Central Asia

https://www.museum.go.kr/ENG/contents/E0402000000.do?schM=view&relicId=435

Fuxi or Fu Hsi (Chinese: 伏羲) is a culture hero in Chinese mythology, credited along with his sister and wife Nüwa with creating humanity and the invention of music, hunting, fishing, domestication, and cooking, as well as the Cangjie system of writing Chinese characters around 2900 BC or 2000 BC. He is also said to be the originator of bagua (the eight trigrams) after observing that there were eight fundamental building blocks in nature: heaven, earth, water, fire, thunder, wind, mountain, and lake. These eight are all made of different combinations of yin and yang, which are what came to be called bagua.

Fuxi was counted as the first mythical emperor of China, "a divine being with a serpent's body" who was miraculously born, a Taoist deity, and/or a member of the Three Sovereigns at the beginning of the Chinese dynastic period. Some representations show him as a human with snake-like characteristics, "a leaf-wreathed head growing out of a mountain", "or as a man clothed with animal skins."

https://en.m.wikipedia.org/wiki/Fuxi

Nüwa, also read Nügua, is a mother goddess, culture hero, and/or member of the Three Sovereigns of Chinese mythology. She is a goddess in Chinese folk religion, Chinese Buddhism, Confucianism and Taoism. She is credited with creating humanity and repairing the Pillar of Heaven.

As creator of mankind, she molded humans individually by hand with yellow clay. In other stories where she fulfills this role, she only created nobles and/or the rich out of yellow soil. The stories vary on the other details about humanity's creation, but it was a tradition commonly believed in ancient China that she created commoners from brown mud. A story holds that she was tired when she created "the rich and the noble", so all others, or "cord-made people", were created from her "dragg[ing] a string through mud".

Nüwa is often depicted holding a compass or multiple compasses, which were a traditional Chinese symbol of a dome-like sky. She was also thought to be an embodiment of the stars and the sky or a star god.

Fuxi and Nüwa can also appear individually on separate tomb bricks. They generally hold or embrace the sun or moon discs containing the images of a bird (or a three-legged crow) or a toad (sometimes a hare), which are the sun and moon symbolism respectively, and/or each holds a try square or a pair of compasses, or a longevity mushroom (靈芝; lingzhi) plant. Fuxi and Nüwa holding the sun and the moon appear as early as the late Western Han dynasty. Other variations in physical appearance also exist, such as the shape of the lower snake-like body (e.g. thick vs thin tails), depictions of legs (i.e. legs found along the snake-like body) and wings (e.g. wings with feathers which protrude from their backs as found in the late Western Han Xinan (新安) Tomb, or smaller quills found on their shoulders), and in hats and hairstyles.

https://en.m.wikipedia.org/wiki/N%C3%BCwa

The regular polyhedra were known at least since the time of the ancient Greeks. The names of the more complex ones are purely Greek. But despite their being known for close to two millennia, no one apparently noticed the fact that the sum of the number of faces F and the number of vertices V, less the number of edges E, is equal to two for all of them; i.e.,

V - E + F = 2

The value of (V-E+F) is usually denoted by the Greek letter Chi (Χ). Thus Χ(cube)=2.

It was the Swiss mathematician Leonhard Euler who recognized and published this fact. The value of two is said to be the Euler characteristic of each of the polyhedra. This value is not changed by stretching or shrinking any side or face, or even shrinking a side or face to zero. This means that the Euler characteristic is a topological invariant because it is not altered by any continuous mapping.

The Swiss mathematician Simon Lhuilier (1750–1840) later found a slight generalization of Euler's formula to take into account polyhedra having holes. Lhuilier's formula is

V − E + F = 2 − 2G = 2(1 − G)

where G is the number of holes in the polyhedron. Thus the Euler characteristic is 2 for a regular polyhedron but 0 for a torus-like polyhedron.

https://www.sjsu.edu/faculty/watkins/eulerpoincare2.htm
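Both formulas above are easy to spot-check from the standard vertex, edge, and face counts. The sketch below is a quick illustration; the counts for the "picture frame" torus (two concentric squares on top and bottom, giving a square ring with a square hole) are my own tally.

```python
# V - E + F for the five Platonic solids, and for a torus-like polyhedron.
platonic = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}

def chi(v, e, f):
    """The Euler characteristic V - E + F."""
    return v - e + f

for name, (v, e, f) in platonic.items():
    print(name, chi(v, e, f))        # 2 for every Platonic solid

# "Picture frame" torus: 8 outer + 8 inner vertices, 32 edges,
# 16 quadrilateral faces.  chi = 0 = 2(1 - G) with G = 1 hole.
print("torus-like:", chi(16, 32, 16))
```

The single line of arithmetic per solid is the whole point: the characteristic depends only on the counts, not on the geometry, which is why deforming the polyhedron leaves it unchanged.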

Leonhard Euler (/ˈɔɪlər/ OY-lər; 15 April 1707 – 18 September 1783) was a Swiss polymath who was active as a mathematician, physicist, astronomer, logician, geographer, and engineer. He founded the studies of graph theory and topology and made influential discoveries in many other branches of mathematics, such as analytic number theory, complex analysis, and infinitesimal calculus. He also introduced much of modern mathematical terminology and notation, including the notion of a mathematical function. He is known for his work in mechanics, fluid dynamics, optics, astronomy, and music theory. Euler has been called a "universal genius" who "was fully equipped with almost unlimited powers of imagination, intellectual gifts and extraordinary memory". He spent most of his adult life in Saint Petersburg, Russia, and in Berlin, then the capital of Prussia.

https://en.m.wikipedia.org/wiki/Leonhard_Euler

The Euler characteristic was originally defined for polyhedra and used to prove various theorems about them, including the classification of the Platonic solids. It was stated for Platonic solids in 1537 in an unpublished manuscript by Francesco Maurolico. Leonhard Euler, for whom the concept is named, introduced it for convex polyhedra more generally but failed to rigorously prove that it is an invariant. In modern mathematics, the Euler characteristic arises from homology and, more abstractly, homological algebra.

Proof of Euler's formula

First steps of the proof in the case of a cube

There are many proofs of Euler's formula. One was given by Cauchy in 1811, as follows. It applies to any convex polyhedron, and more generally to any polyhedron whose boundary is topologically equivalent to a sphere and whose faces are topologically equivalent to disks.

Remove one face of the polyhedral surface. By pulling the edges of the missing face away from each other, deform all the rest into a planar graph of points and curves, in such a way that the perimeter of the missing face is placed externally, surrounding the graph obtained, as illustrated by the first of the three graphs for the special case of the cube. (The assumption that the polyhedral surface is homeomorphic to the sphere at the beginning is what makes this possible.) After this deformation, the regular faces are generally not regular anymore. The number of vertices and edges has remained the same, but the number of faces has been reduced by 1. Therefore, proving Euler's formula for the polyhedron reduces to proving V − E + F = 1 for this deformed, planar object.

If there is a face with more than three sides, draw a diagonal—that is, a curve through the face connecting two vertices that are not yet connected. Each new diagonal adds one edge and one face and does not change the number of vertices, so it does not change the quantity V − E + F . (The assumption that all faces are disks is needed here, to show via the Jordan curve theorem that this operation increases the number of faces by one.) Continue adding edges in this manner until all of the faces are triangular.

Apply repeatedly either of the following two transformations, maintaining the invariant that the exterior boundary is always a simple cycle:

  1. Remove a triangle with only one edge adjacent to the exterior, as illustrated by the second graph. This decreases the number of edges and faces by one each and does not change the number of vertices, so it preserves V − E + F .

  2. Remove a triangle with two edges shared by the exterior of the network, as illustrated by the third graph. Each triangle removal removes a vertex, two edges and one face, so it preserves V − E + F .

These transformations eventually reduce the planar graph to a single triangle. (Without the simple-cycle invariant, removing a triangle might disconnect the remaining triangles, invalidating the rest of the argument. A valid removal order is an elementary example of a shelling.)

At this point the lone triangle has V = 3, E = 3, and F = 1, so that V − E + F = 1. Since each of the two above transformation steps preserved this quantity, we have shown V − E + F = 1 for the deformed, planar object, thus demonstrating V − E + F = 2 for the polyhedron. This proves the theorem.
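For the cube, the bookkeeping in Cauchy's argument can be traced step by step. The sketch below tracks only the counts (V, E, F), not the geometry, and the particular removal schedule is just one sequence consistent with the counting rules, not a verified geometric shelling.

```python
# Count bookkeeping for Cauchy's argument, instantiated on the cube.
V, E, F = 8, 12, 6            # the cube
F -= 1                        # remove one face and flatten to the plane
assert V - E + F == 1

for _ in range(5):            # triangulate: one diagonal per square face,
    E += 1                    # each adding one edge and one face
    F += 1
assert V - E + F == 1         # now V=8, E=17, F=10

while V > 3:                  # type-2 removals: -1 vertex, -2 edges, -1 face
    V, E, F = V - 1, E - 2, F - 1
    assert V - E + F == 1
while F > 1:                  # type-1 removals: -1 edge, -1 face
    E, F = E - 1, F - 1
    assert V - E + F == 1

print(V, E, F)                # 3 3 1: the single remaining triangle
```

Every assertion along the way confirms the invariant V − E + F = 1 for the planar object, which is exactly what the proof needs.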

For additional proofs, see Eppstein (2013). Multiple proofs, including their flaws and limitations, are used as examples in Proofs and Refutations by Lakatos (1976).

https://en.m.wikipedia.org/wiki/Euler_characteristic

The idea of associating algebraic objects or structures with topological spaces arose early in the history of topology. The basic incentive in this regard was to find topological invariants associated with different structures. The simplest example is the Euler characteristic, which is a number associated with a surface. In 1750 the Swiss mathematician Leonhard Euler proved the polyhedral formula V – E + F = 2, or Euler characteristic, which relates the numbers V and E of vertices and edges, respectively, of a network that divides the surface of a polyhedron (being topologically equivalent to a sphere) into F simply connected faces. This simple formula motivated many topological results once it was generalized to the analogous Euler-Poincaré characteristic χ = V – E + F = 2 – 2g for similar networks on the surface of a g-holed torus. Two homeomorphic surfaces will have the same Euler-Poincaré characteristic, and so two surfaces with different Euler-Poincaré characteristics cannot be topologically equivalent. However, the primary algebraic objects used in algebraic topology are more intricate and include such structures as abstract groups, vector spaces, and sequences of groups. Moreover, the language of algebraic topology has been enhanced by the introduction of category theory, in which very general mappings translate topological spaces and continuous functions between them to the associated algebraic objects and their natural mappings, which are called homomorphisms.

https://www.britannica.com/science/topology/Homeomorphism

Long before Euler, in 1537, Francesco Maurolico stated the same formula for the five Platonic solids (see Friedman). Another version of the formula dates over 100 years earlier than Euler, to Descartes in 1630. Descartes gives a discrete form of the Gauss-Bonnet theorem, stating that the sum of the face angles of a polyhedron is 2π(V - 2), from which he infers that the number of plane angles is 2F + 2V - 4. The number of plane angles is always twice the number of edges, so this is equivalent to Euler's formula, but later authors such as Lakatos, Malkevitch, and Polya disagree, feeling that the distinction between face angles and edges is too large for this to be viewed as the same formula. The formula V - E + F = 2 was (re)discovered by Euler; he wrote about it twice in 1750, and in 1752 published the result, with a faulty proof by induction for triangulated polyhedra based on removing a vertex and retriangulating the hole formed by its removal. The retriangulation step does not necessarily preserve the convexity or planarity of the resulting shape, so the induction does not go through. Another early attempt at a proof, by Meister in 1784, is essentially the triangle removal proof given here, but without justifying the existence of a triangle to remove. In 1794, Legendre provided a complete proof, using spherical angles. Cauchy got into the act in 1811, citing Legendre and adding incomplete proofs based on triangle removal, ear decomposition, and tetrahedron removal from a tetrahedralization of a partition of the polyhedron into smaller polyhedra. Hilton and Pederson provide more references as well as entertaining speculation on Euler's discovery of the formula.
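Descartes' statement and its equivalence with Euler's formula are easy to spot-check on the cube, as a quick arithmetic sketch:

```python
from math import pi

# The cube: V=8, E=12, F=6; six square faces, four right angles each.
V, E, F = 8, 12, 6
face_angle_sum = 6 * 4 * (pi / 2)
# Descartes: the face angles sum to 2*pi*(V - 2), here 12*pi.
assert abs(face_angle_sum - 2 * pi * (V - 2)) < 1e-12

plane_angles = 6 * 4          # one plane angle per (face, vertex) incidence
# The number of plane angles equals both 2E and 2F + 2V - 4 (here, 24),
# which is why Descartes' count is equivalent to V - E + F = 2.
assert plane_angles == 2 * E == 2 * F + 2 * V - 4
print("Descartes' counts check out for the cube")
```

Substituting 2E = 2F + 2V − 4 and dividing by two gives E = F + V − 2, i.e. Euler's formula, which is the algebraic content of the equivalence debated by Lakatos and others.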

The polyhedron formula, of course, can be generalized in many important ways, some using methods described below. One important generalization is to planar graphs. To form a planar graph from a polyhedron, place a light source near one face of the polyhedron, and a plane on the other side.

The shadows of the polyhedron edges form a planar graph, embedded in such a way that the edges are straight line segments. The faces of the polyhedron correspond to convex polygons that are faces of the embedding. The face nearest the light source corresponds to the outside face of the embedding, which is also convex. Conversely, any planar graph with certain connectivity properties comes from a polyhedron in this way.

Euler's Formula, Proof 6: Electrical Charge

This proof is due to Thurston. He writes:

Arrange the polyhedron in space so that no edge is horizontal – in particular, so there is exactly one uppermost vertex U and lowermost vertex L.

Put a unit + charge at each vertex, a unit - charge at the center of each edge, and a unit + charge in the middle of each face. We will show that the charges all cancel except for those at L and at U. To do this, we displace all the vertex and edge charges into a neighboring face, and then group together all the charges in each face. The direction of movement is determined by the rule that each charge moves horizontally, counterclockwise as viewed from above.

In this way, each face receives the net charge from an open interval along its boundary. This open interval is decomposed into edges and vertices, which alternate. Since the first and last are edges, there is a surplus of one - ; therefore, the total charge in each face is zero. All that is left is +2, for L and for U.

Thurston goes on to generalize this idea to a proof that the Euler characteristic is an invariant of any triangulated differentiable manifold.

https://ics.uci.edu/~eppstein/junkyard/euler/

Sophie Germain → Carl Friedrich Gauß, Paris, 1804 Nov. 21

Manuscript

Archive: Göttingen, Niedersächsische Staats- und Universitätsbibliothek

Signature: Cod. Ms. Gauß Briefe A: Germain 1 (letter, 4 pp., and enclosure, 4 pp.), letter no. 1, in French

Copies / Handwritten copies

Paris, Institut de France, Ms. 2031, item 89; 3 pp.

https://gauss.adw-goe.de/handle/gauss/3053

Marie-Sophie Germain (French: [maʁi sɔfi ʒɛʁmɛ̃]; 1 April 1776 – 27 June 1831) was a French mathematician, physicist, and philosopher. Despite initial opposition from her parents and difficulties presented by society, she gained education from books in her father's library, including ones by Euler, and from correspondence with famous mathematicians such as Lagrange, Legendre, and Gauss (under the pseudonym of Monsieur Le Blanc). One of the pioneers of elasticity theory, she won the grand prize from the Paris Academy of Sciences for her essay on the subject. Her work on Fermat's Last Theorem provided a foundation for mathematicians exploring the subject for hundreds of years after. Because of prejudice against her sex, she was unable to make a career out of mathematics, but she worked independently throughout her life. Before her death, Gauss had recommended that she be awarded an honorary degree, but that never occurred. On 27 June 1831, she died from breast cancer. At the centenary of her life, a street and a girls' school were named after her. The Academy of Sciences established the Sophie Germain Prize in her honour.

https://en.m.wikipedia.org/wiki/Sophie_Germain

Sophie Germain: The Woman Who Began Her Career Using a Man’s Name

Inspired by Archimedes’ tragic tale - an ancient Greek mathematician killed while engrossed in geometry - Germain chose to study mathematics. Self-taught from the family library, Germain learned number theory, arithmetic, calculus, and even mastered Greek and Latin to understand complex texts. But her studies prompted harsh restrictions from her parents, as mathematics wasn’t considered particularly feminine in the 18th century and, societally speaking, women of status were expected to engage in intelligent conversation without overshadowing the men in the room.

With that, gone were the candles that provided light. Gone was the fire providing warmth in the evening hours. But Sophie’s parents woke to find their daughter wrapped in blankets with a smuggled candle and frozen inkwell beside her. After that, they alleviated the reading restrictions.

At 18, as the École Polytechnique opened its doors to future scientists and mathematicians, Sophie faced the gender barrier; females weren’t permitted to study there. Public lecture notes only took her so far; she needed direct feedback and evaluation to progress.

And so, Sophie Germain resorted to a daring solution – submitting paper assignments under the name of a male student, Antoine-August Leblanc.

Sophie submitted assignments to Joseph-Louis Lagrange, a professor of analysis who realized things weren’t adding up. Lagrange’s in-person conversations with Leblanc didn’t match the mathematical prowess of the student on paper. With that, Lagrange learned of Sophie Germain and, instead of shunning her, visited her while encouraging fellow academics to do the same.

German mathematician Carl Friedrich Gauss found himself impressed by the very same Leblanc, who sent promising research on Fermat's Last Theorem (which Sophie's work eventually helped to prove). But with the invasion of Brunswick by French forces, “Leblanc” grew concerned for Gauss's safety, worrying that the mathematician might meet the same fate as Archimedes. The student sought help from a family friend, a French general, and the façade of Leblanc unraveled. Upon learning of Leblanc's identity, Gauss made the following statement:

“How can I describe my astonishment and admiration on seeing my esteemed correspondent M leBlanc metamorphosed into this celebrated person. . . when a woman, because of her sex, our customs and prejudices, encounters infinitely more obstacles than men in familiarising herself with [number theory's] knotty problems, yet overcomes these fetters and penetrates that which is most hidden, she doubtless has the most noble courage, extraordinary talent, and superior genius.” – Carl Friedrich Gauss

So, Leblanc became a persona of the past and Sophie Germain entered the world of higher academia. And a puzzling mathematical question soon surfaced. Physicist and musician Ernst Chladni struck a rigid plate with a violin bow, noticing that sand on top of the plate settled into distinct patterns, but how?

The Paris Academy of Sciences was so interested in this quandary, that it hosted a prize competition for a mathematical explanation. Though her first two explanatory memoirs were dismissed, Germain won the prize for her third submission concerning elasticity, making her the first female to win an award in the field of mathematics.

Sophie Germain’s story is one of resilience in the face of misogyny. Throughout her life, she fought against systemic prejudice, proving not only mathematical theories, but that women deserved the right to education.

Despite battling breast cancer, Sophie Germain continued to publish papers on number theory and, two years after her passing, was awarded an honorary doctorate from the University of Göttingen, made possible by her early mentor Carl Friedrich Gauss. And while a Parisian street was named after her, a final act of injustice remained: Germain's death certificate recorded her not as a mathematician, but simply as a “property holder.”

https://www.wiley.com/en-us/network/research-libraries/libraries-archives-databases/archives/sophie-germain-the-woman-who-began-her-career-using-a-mans-name

Now let's think about the unknown number of talented women who have been forgotten, or simply those who never had the opportunity to express their ideas.

Maker / Publisher: Joseph Caron

Date of manufacture: 12 February 1913

Place of manufacture: Paris, France

Dimensions & materials:

Height: 26 cm; Width: 12 cm; Depth: 12 cm

Wood

Thread

Metal
