Nota Math


Transcript of Nota Math

Page 1: Nota Math

CONTRIBUTION FROM MATHEMATICS

Arab contributions to mathematics and the introduction of the Zero
Regional, Science, 4/22/1998

Arab contributions to human civilization are noteworthy. In arithmetic, the style of writing digits from right to left is evidence of their Arab origin: if the numerals followed English's left-to-right reading order, five hundred would be written 005 rather than 500.

Another invention that revolutionized mathematics was the introduction of the number zero by Muhammad Bin Ahmad in 967 AD. Zero was introduced in the West only at the beginning of the thirteenth century. Modern society takes the invention of zero for granted, yet zero is a non-trivial concept that allowed major mathematical breakthroughs.

Concerning algebra, al-Khwarizmi is credited with the first treatise. He solved algebraic equations of the first and second degree (the latter known as quadratic equations, which are prevalent in science and engineering) and also introduced a geometrical method of solving these equations.

He also recognized that quadratic equations have two roots. His method was continued by Thabit Bin Qurra, the translator of Ptolemy's works, who developed algebra further and first realized its application in geometry. By the 11th century the Arabs had founded, developed, and perfected geometrical algebra and could solve equations of the third and fourth degree.
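To make the two-roots observation concrete, here is a minimal modern sketch (an added illustration, not part of the original text; the function name and example are hypothetical) that solves a quadratic with today's formula:

    import cmath  # complex square root, so a negative discriminant is also handled

    def quadratic_roots(a, b, c):
        """Return the two roots of a*x**2 + b*x + c = 0, assuming a != 0."""
        disc = cmath.sqrt(b * b - 4 * a * c)  # square root of the discriminant
        return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

    # Example: x**2 - 5x + 6 = 0 factors as (x - 2)(x - 3), so the two roots are 3 and 2.
    print(quadratic_roots(1, -5, 6))  # ((3+0j), (2+0j))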

Another outstanding Arab mathematician is Abul Wafa, who created and successfully developed a branch of geometry consisting of problems that lead to algebraic equations of degree higher than the second. He also made a number of valuable contributions to polyhedral theory.

Al-Karaki, of the 11th century, is considered one of the greatest Arab mathematicians. He composed one book on arithmetic and another on algebra. In the two books he developed an approximate method of finding square roots, a theory of indices, a theory of mathematical induction, and a theory of indeterminate quadratic equations.

Arabs also excelled in geometry: starting with the translation of Euclid and of the conic sections of Apollonius, they preserved the genuine works of these two Greek masters for the modern world by the 9th century AD, and then began making new discoveries in this domain.

However, Arab achievements in this field were crowned by the discovery made by Abu Jafar Muhammad Ibn Muhammad Ibn al-Hassan, known as Nassereddine al-Tusi. Al-Tusi separated trigonometry from astronomy. He also recognized and explained the

Page 2: Nota Math

weakness in Euclid's theory of parallels, and may thus be credited as a founder of non-Euclidean geometry.

HISTORY OF MEASUREMENT

Units of measurement were among the earliest tools invented by humans. Primitive societies needed rudimentary measures for many tasks: constructing dwellings of an appropriate size and shape, fashioning clothing, or bartering food or raw materials.

Earliest known systems

The inhabitants of the Indus Valley Civilization (c. 3000–1500 BC, Mature period 2600–1900 BC) developed a sophisticated system of standardization using weights and measures, as is evident from the excavations made at the Indus valley sites.[1] This technical standardization enabled gauging devices to be used effectively in angular measurement and in measurement for construction.[1] Calibration was also found in measuring devices, along with multiple subdivisions in the case of some devices.[1]

The earliest known uniform systems of weights and measures seem all to have been created at some time in the 4th and 3rd millennia BC among the ancient peoples of Egypt, Mesopotamia and the Indus Valley, and perhaps also Elam (in Iran). The most astounding of these ancient systems was perhaps that of the Indus Valley Civilization (ca. 2600 BC), whose peoples achieved great accuracy in measuring length, mass, and time. Their measurements were extremely precise: their smallest division, marked on an ivory scale found in Lothal, was approximately 1.704 mm, the smallest division ever recorded on a scale of the Bronze Age. Harappan engineers followed the decimal division of measurement for all practical purposes, including the measurement of mass, as revealed by their hexahedron weights. Weights were based on units of 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50, 100, 200, and 500, with each unit weighing approximately 28 grams, similar to the English ounce or Roman uncia; smaller objects were weighed in similar ratios with the units of 0.871.
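As a quick illustration (an added sketch, not from the source; the names are hypothetical), the weight ratios above translate into grams as follows, taking the base unit as roughly 28 g:

    # Illustrative sketch: gram values of the Harappan weight series,
    # assuming the base unit weighs approximately 28 g as stated above.
    UNIT_GRAMS = 28.0
    ratios = [0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50, 100, 200, 500]
    for r in ratios:
        print(f"{r:>6} unit(s) ~ {r * UNIT_GRAMS:,.1f} g")  # e.g. 0.05 unit(s) ~ 1.4 g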

Other systems were based on the use of parts of the body and the natural surroundings as measuring instruments. Early Babylonian and Egyptian records and the Bible indicate that length was first measured with the forearm, hand, or finger and that time was measured by the periods of the sun, moon, and other heavenly bodies. When it was necessary to compare the capacities of containers such as gourds or clay or metal vessels, they were filled with plant seeds which were then counted to measure the volumes. When means for weighing were invented, seeds and stones served as standards. For instance, the carat, still used as a unit for gems, was derived from the carob seed.

Page 3: Nota Math

Units of length

The Egyptian cubit, the Indus Valley units of length referred to above, and the Mesopotamian cubit were used in the 3rd millennium BC and are the earliest known units used by ancient peoples to measure length. The measures of length used in ancient India included the dhanus (bow), the krosa (cry, or cow-call) and the yojana (stage).

The common cubit was the length of the forearm from the elbow to the tip of the middle finger. It was divided into the span, the distance from the tip of the little finger to the tip of the thumb (one-half cubit); the palm, or width of the hand (one sixth); and the digit, or width of the middle finger (one twenty-fourth). The Sacred Cubit, a standard cubit enhanced by an extra span and thus 7 spans or 28 digits long, was used in constructing buildings and monuments and in surveying in ancient Egypt; it may have been based on an astronomical measurement.[2] The inch, foot, and yard evolved from these units through a complicated transformation not yet fully understood. Some believe they evolved from cubic measures; others believe they were simple proportions or multiples of the cubit. In either case, the Greeks and Romans inherited the foot from the Egyptians. The Roman foot (~296 mm) was divided into both 12 unciae (inches) (~24.7 mm) and 16 digits (~18.5 mm). The Romans also introduced the mille passus (1000 paces), or double steps, the pace being equal to five Roman feet (~1480 mm). The Roman mile of 5000 feet (1480 m) was introduced into England during the occupation. Queen Elizabeth I (reigned 1558 to 1603) changed the mile by statute to 5280 feet (~1609 m), or 8 furlongs, a furlong (~201 m) being 40 rods of 5.5 yards (~5.03 m) each.

The introduction of the yard (0.9144 m) as a unit of length came later, but its origin is not definitely known. Some believe the origin was the double cubit; others believe that it originated from cubic measure. Whatever its origin, the early yard was divided by the binary method into 2, 4, 8, and 16 parts called the half-yard, span, finger, and nail. The association of the yard with the "gird", or circumference of a person's waist, or with the distance from the tip of the nose to the end of the thumb of King Henry I (reigned 1100 - 1135), probably reflects standardizing actions, since several yards were in use in Britain. There were also rods, poles and perches for measurements of length. The following table lists the equivalents.

Length
12 lines    = 1 inch
12 inches   = 1 foot
36 inches   = 1 yard
3 feet      = 1 yard
4 inches    = 1 hand
100 links   = 1 chain
22 yards    = 1 chain
4 poles     = 1 chain
5.5 yards   = 1 rod, pole or perch
40 poles    = 1 furlong
10 chains   = 1 furlong
8 furlongs  = 1 mile
1760 yards  = 1 mile
440 yards   = 1 quarter mile
880 yards   = 1 half mile
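A minimal sketch (an added illustration, not part of the original text) showing how these equivalences compose, with every unit expressed in yards:

    # Illustrative sketch: the customary length units above, expressed in yards.
    YARDS = {
        "inch": 1 / 36,
        "hand": 4 / 36,
        "foot": 1 / 3,
        "yard": 1,
        "rod": 5.5,       # also called pole or perch
        "chain": 22,      # 4 rods, or 100 links
        "furlong": 220,   # 10 chains, or 40 poles
        "mile": 1760,     # 8 furlongs
    }

    def convert(value, src, dst):
        """Convert a length from unit src to unit dst via the common yard base."""
        return value * YARDS[src] / YARDS[dst]

    print(convert(1, "mile", "furlong"))  # 8.0
    print(convert(40, "rod", "furlong"))  # 1.0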

Units of mass

The grain was the earliest unit of mass and is the smallest unit in the apothecary, avoirdupois, Tower, and troy systems. The early unit was a grain of wheat or barleycorn used to weigh the precious metals silver and gold. Larger units, preserved in stone standards, were developed and used both as units of mass and as monetary currency. The pound was derived from the mina used by ancient civilizations. A smaller unit was

Page 4: Nota Math

the shekel, and a larger unit was the talent. The magnitude of these units varied from place to place. The Babylonians and Sumerians had a system in which there were 60 shekels in a mina and 60 minas in a talent. The Roman talent consisted of 100 libra (pounds), which were smaller in magnitude than the mina. The troy pound (~373.2 g) used in England and the United States for monetary purposes was, like the Roman pound, divided into 12 ounces, but the Roman uncia (ounce) was smaller. The carat, a unit for measuring gemstones, had its origin in the carob seed and was later standardized at 1/144 ounce and then at 0.2 gram.

Goods of commerce were originally traded by number or volume. When weighing of goods began, units of mass based on a volume of grain or water were developed. For example, the talent in some places was approximately equal to the mass of one cubic foot of water. Was this a coincidence or by design? The diverse magnitudes of units having the same name, which still appear today in our dry and liquid measures, could have arisen from the various commodities traded. The larger avoirdupois pound for goods of commerce might have been based on volume of water which has a higher bulk density than grain. For example, the Egyptian hon was a volume unit about 11 per cent larger than a cubic palm and corresponded to one mina of water. It was almost identical in volume to the present U.S. pint (~473 mL).

The stone, quarter, hundredweight, and ton were larger units of mass used in Britain. Today only the stone continues in customary use, for measuring personal body weight. The present stone is 14 pounds (~6.35 kg), but an earlier unit appears to have been 16 pounds (~7.25 kg). The other units were 2, 8, and 160 times the stone, i.e. 28, 112, and 2240 pounds (~12.7 kg, 50.8 kg, 1016 kg) respectively. The hundredweight was approximately equal to two talents. The ton of 2240 pounds is called the "long ton"; the "short ton" is equal to 2000 pounds (~907 kg). A tonne (t) is equal to 1000 kg.
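Again as an added sketch (not from the source), the British mass units above expressed in pounds and converted to kilograms:

    # Illustrative sketch: British customary mass units, expressed in pounds.
    POUNDS = {
        "stone": 14,
        "quarter": 28,         # 2 stone
        "hundredweight": 112,  # 8 stone
        "long ton": 2240,      # 160 stone
        "short ton": 2000,
    }
    KG_PER_POUND = 0.45359237  # modern exact definition of the pound
    for unit, lb in POUNDS.items():
        print(f"1 {unit} = {lb} lb ~ {lb * KG_PER_POUND:.1f} kg")  # e.g. 1 stone ~ 6.4 kg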

Units of time and angle

We can trace the division of the circle into 360 degrees and of the day into hours, minutes, and seconds to the Babylonians, who had a sexagesimal (base-60) system of numbers. The 360 degrees may have been related to a year of 360 days. Many other systems of measurement divided the day differently, and other calendars divided the year differently.
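As a small added sketch (not in the original), the same base-60 subdivision is what splits a decimal angle into degrees, minutes, and seconds:

    def to_dms(angle):
        """Split a decimal angle into degrees, minutes, and seconds (base-60 steps)."""
        degrees = int(angle)
        rem = (angle - degrees) * 60
        minutes = int(rem)
        seconds = round((rem - minutes) * 60, 6)  # rounded to tame floating point
        return degrees, minutes, seconds

    print(to_dms(12.5825))  # (12, 34, 57.0)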

HISTORY OF MONEY

The history of money spans thousands of years. Numismatics is the scientific study of money and its history in all its varied forms.

Many items have been used as commodity money such as naturally scarce precious metals, cowry shells, barley, beads etc., as well as many other things that are thought of as having value.

Page 5: Nota Math

Modern money (and most ancient money) is essentially a token — in other words, an abstraction. Paper currency is perhaps the most common type of physical money today. However, objects of gold or silver present many of money's essential properties.

Non-monetary exchange: barter and gift

Contrary to popular conception, there is no evidence of a society or economy that relied primarily on barter. Instead, non-monetary societies operated largely along the principles of gift economics. When barter did in fact occur, it was usually between either complete strangers or would-be enemies.

With barter, an individual possessing a material object of value, such as a measure of grain, could directly exchange that object for another object perceived to have equivalent value, such as a small animal, a clay pot or a tool. The capacity to carry out transactions is severely limited since it depends on a coincidence of wants. The seller of food grain has to find a buyer who wants to buy grain and who also could offer in return something the seller wants to buy. There is no common medium of exchange into which both seller and buyer could convert their tradable commodities. There is no standard which could be applied to measure the relative value of various goods and services.

In a gift economy, valuable goods and services are regularly given without any explicit agreement for immediate or future rewards (i.e. there is no formal quid pro quo).[3] Ideally, simultaneous or recurring giving serves to circulate and redistribute valuables within the community.

There are various social theories concerning gift economies. Some consider the gifts to be a form of reciprocal altruism. Another interpretation is that social status is awarded in return for the 'gifts'. Consider, for example, the sharing of food in some hunter-gatherer societies, where food-sharing is a safeguard against the failure of any individual's daily foraging. This custom may reflect altruism, it may be a form of informal insurance, or it may bring with it social status or other benefits.

Commodity Money

1742 drawing of shells of the money cowry, Cypraea moneta.

Bartering has several problems, most notably the coincidence of wants problem. For example, if a wheat farmer needs what a fruit farmer produces, a direct swap is impossible as seasonal fruit would spoil before the grain harvest. A solution is to trade fruit for wheat indirectly through a third, "intermediate", commodity: the fruit is exchanged for the intermediate commodity when the fruit ripens. If this intermediate commodity doesn't perish and is reliably in demand throughout the year (e.g. copper, gold, or wine) then it can be exchanged for wheat after the harvest. The function of the

Page 6: Nota Math

intermediate commodity as a store-of-value can be standardized into a widespread commodity money, reducing the coincidence of wants problem. By overcoming the limitations of simple barter, a commodity money makes the market in all other commodities more liquid.

Many cultures around the world eventually developed the use of commodity money. Ancient China and Africa used cowrie shells. Trade in Japan's feudal system was based on the koku, a unit of rice per year. The shekel was an ancient unit of weight and currency. The first usage of the term came from Mesopotamia circa 3000 BC and referred to a specific weight of barley, to which other values in a metric, such as silver, bronze and copper, were related. The barley shekel was thus originally both a unit of currency and a unit of weight.

Wherever trade is common, barter systems usually lead quite rapidly to several key goods becoming imbued with monetary properties. In the early British colony of New South Wales, rum emerged quite soon after settlement as the most monetary of goods. When a nation is without a currency, it commonly adopts a foreign currency. In prisons where conventional money is prohibited, it is quite common for cigarettes to take on a monetary quality, and throughout history gold has taken on this unofficial monetary function.

Representative money

An example of representative money, this 1896 note could be exchanged for five US Dollars worth of silver.

Representative money refers to money that consists of a token or certificate made of paper. The use of the various types of money, including representative money, tracks the course of money from the past to the present. Token money may be called "representative money" in the sense that a piece of paper might 'represent', or be a claim on, a commodity. Gold certificates and silver certificates are a type of representative money that was used in the United States as currency until 1933.

The term 'representative money' has been used in the past "to signify that a certain amount of bullion was stored in a Treasury while the equivalent paper in circulation" represented the bullion. Representative money differs from commodity money, which is actually made of some physical commodity. In his Treatise on Money (1930:7), Keynes distinguished between commodity money and representative money, dividing the latter into "fiat money" and "managed money."

NUMBER

A number is a mathematical object used in counting and measuring. A notational symbol which represents a number is called a numeral, but in common usage the word number is used for the abstract object, the symbol, and the word for the number alike. In addition to their use in counting and measuring, numerals are often used for labels

Page 7: Nota Math

(telephone numbers), for ordering (serial numbers), and for codes (ISBNs). In mathematics, the definition of number has been extended over the years to include such numbers as 0, negative numbers, rational numbers, irrational numbers, and complex numbers.

Certain procedures which take one or more numbers as input and produce a number as output are called numerical operations. Unary operations take a single input number and produce a single output number. For example, the successor operation adds one to an integer, thus the successor of 4 is 5. More common are binary operations which take two input numbers and produce a single output number. Examples of binary operations include addition, subtraction, multiplication, division, and exponentiation. The study of numerical operations is called arithmetic.
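A tiny added sketch (not part of the original text) of one unary and one binary operation:

    def successor(n):
        """Unary operation: one number in, one number out."""
        return n + 1

    def add(a, b):
        """Binary operation: two numbers in, one number out."""
        return a + b

    print(successor(4))  # 5, as in the example above
    print(add(2, 3))     # 5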

The branch of mathematics that studies structure in number systems, by means of topics such as groups, rings and fields, is called abstract algebra.

The first use of numbers

It is speculated that the first known use of numbers dates back to around 35,000 BC. Bones and other artifacts have been discovered with marks cut into them which many consider to be tally marks. The uses of these tally marks may have been for counting elapsed time, such as numbers of days, or keeping records of quantities, such as of animals.

Tallying systems have no concept of place value (such as in the decimal notation in current use), which limits their representation of large numbers. Nonetheless, tallying is often considered the first kind of abstract numeral system.

The first known system with place-value was the Mesopotamian base 60 system (ca. 3400 BC) and the earliest known base 10 system dates to 3100 BC in Egypt.
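As an added sketch (not in the source), place value is what lets a short digit string name a large number; here is a conversion of a number into base-60 digits like those of the Mesopotamian system:

    def to_base(n, base=60):
        """Return the place-value digits of n in the given base, most significant first."""
        digits = []
        while n > 0:
            digits.append(n % base)
            n //= base
        return digits[::-1] or [0]

    print(to_base(3661))     # [1, 1, 1]: one 3600, one 60, and one 1
    print(to_base(500, 10))  # [5, 0, 0]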

History of zero

The use of zero as a number should be distinguished from its use as a placeholder numeral in place-value systems. Many ancient texts, Babylonian and Egyptian among them, used zero. Egyptians used the word nfr to denote zero balance in double-entry accounting entries. Indian texts used the Sanskrit word shunya to refer to the concept of void; in mathematical texts this word would often be used to refer to the number zero.

Records show that the Ancient Greeks seemed unsure about the status of zero as a number: they asked themselves "how can 'nothing' be something?" leading to interesting philosophical and, by the Medieval period, religious arguments about the nature and existence of zero and the vacuum. The paradoxes of Zeno of Elea depend in large part on the uncertain interpretation of zero. (The ancient Greeks even questioned if 1 was a number.)

Page 8: Nota Math

The late Olmec people of south-central Mexico began to use a true zero (a shell glyph) in the New World possibly by the 4th century BC but certainly by 40 BC, which became an integral part of Maya numerals and the Maya calendar. Mayan arithmetic used base 4 and base 5 written as base 20. Sanchez in 1961 reported a base 4, base 5 'finger' abacus.

By 130 AD, Ptolemy, influenced by Hipparchus and the Babylonians, was using a symbol for zero (a small circle with a long overbar) within a sexagesimal numeral system otherwise using alphabetic Greek numerals. Because it was used alone, not just as a placeholder, this Hellenistic zero was the first documented use of a true zero in the Old World. In later Byzantine manuscripts of his Syntaxis Mathematica (Almagest), the Hellenistic zero had morphed into the Greek letter omicron (otherwise meaning 70).

Another true zero was used in tables alongside Roman numerals by 525 (first known use by Dionysius Exiguus), but as a word, nulla, meaning nothing, not as a symbol. When division produced zero as a remainder, nihil, also meaning nothing, was used. These medieval zeros were used by all later medieval computists (calculators of Easter). About 725, Bede or a colleague used their initial, N, a true zero symbol, in a table of Roman numerals.

An early documented use of the zero by Brahmagupta (in the Brahmasphutasiddhanta) dates to 628. He treated zero as a number and discussed operations involving it, including division. By this time (7th century) the concept had clearly reached Cambodia, and documentation shows the idea later spreading to China and the Islamic world.

History of negative numbers

The abstract concept of negative numbers was recognised as early as 100 BC to 50 BC. The Chinese "Nine Chapters on the Mathematical Art" (Jiu-zhang Suanshu) contains methods for finding the areas of figures; red rods were used to denote positive coefficients, black for negative. This is the earliest known mention of negative numbers in the East; the first reference in a Western work was in the 3rd century in Greece, when Diophantus referred to the equation equivalent to 4x + 20 = 0 (whose solution, x = -5, is negative) in Arithmetica, saying that the equation gave an absurd result.

During the 600s, negative numbers were in use in India to represent debts. Diophantus' earlier reference was discussed more explicitly by the Indian mathematician Brahmagupta in the Brahma-Sphuta-Siddhanta (628), who used negative numbers to produce the general quadratic formula that remains in use today. However, in the 12th century in India, Bhaskara gives negative roots for quadratic equations but says the negative value "is in this case not to be taken, for it is inadequate; people do not approve of negative roots."
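For reference, the general quadratic formula mentioned above, in modern notation (an added restatement, not a quotation of Brahmagupta's rule):

    \[
      ax^2 + bx + c = 0 \ (a \neq 0)
      \quad\Longrightarrow\quad
      x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
    \]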

European mathematicians, for the most part, resisted the concept of negative numbers until the 17th century, although Fibonacci allowed negative solutions in financial problems where they could be interpreted as debts (chapter 13 of Liber Abaci, 1202) and later as losses (in Flos). At the same time, the Chinese were indicating negative numbers by drawing a diagonal stroke through the right-most nonzero digit of the corresponding

Page 9: Nota Math

positive number's numeral. The first use of negative numbers in a European work was by Chuquet during the 15th century. He used them as exponents, but referred to them as “absurd numbers”.

As recently as the 18th century, the Swiss mathematician Leonhard Euler believed that negative numbers were greater than infinity, and it was common practice to ignore any negative results returned by equations on the assumption that they were meaningless, just as René Descartes did with negative solutions in a Cartesian coordinate system.

History of rational numbers

It is likely that the concept of fractional numbers dates to prehistoric times. Even the Ancient Egyptians wrote mathematical texts describing how to convert general fractions into their special notation: the RMP 2/n table and the Kahun Papyrus wrote out unit fraction series by using least common multiples. Classical Greek and Indian mathematicians made studies of the theory of rational numbers as part of the general study of number theory. The best known of these is Euclid's Elements, dating to roughly 300 BC. Of the Indian texts, the most relevant is the Sthananga Sutra, which also covers number theory as part of a general study of mathematics.

The concept of decimal fractions is closely linked with decimal place-value notation; the two seem to have developed in tandem. For example, it is common for the Jain mathematical sutras to include calculations of decimal-fraction approximations to pi or the square root of two. Similarly, Babylonian mathematical texts used sexagesimal fractions with great frequency.

History of irrational numbers

The earliest known use of irrational numbers was in the Indian Sulba Sutras, composed between 800 and 500 BC. The first existence proof of irrational numbers is usually attributed to Pythagoras, more specifically to the Pythagorean Hippasus of Metapontum, who produced a (most likely geometrical) proof of the irrationality of the square root of 2. The story goes that Hippasus discovered irrational numbers when trying to represent the square root of 2 as a fraction. However, Pythagoras believed in the absoluteness of numbers and could not accept the existence of irrational numbers. He could not disprove their existence through logic, but his beliefs would not admit irrational numbers, and so he sentenced Hippasus to death by drowning.
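A modern reconstruction of the classical argument (an added sketch; the text does not record Hippasus's actual, likely geometrical, proof): suppose \( \sqrt{2} = p/q \) with the fraction in lowest terms. Then

    \[
      p^2 = 2q^2 \;\Rightarrow\; p \text{ is even, say } p = 2k
      \;\Rightarrow\; 4k^2 = 2q^2 \;\Rightarrow\; q^2 = 2k^2
      \;\Rightarrow\; q \text{ is even,}
    \]

contradicting the assumption that \( p/q \) was in lowest terms; hence \( \sqrt{2} \) is irrational.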

The sixteenth century saw the final acceptance by Europeans of negative, integral and fractional numbers. The seventeenth century saw decimal fractions in the modern notation quite generally used by mathematicians. But it was not until the nineteenth century that the irrationals were separated into algebraic and transcendental parts, and a scientific study of the theory of irrationals was taken up once more; it had remained almost dormant since Euclid. The year 1872 saw the publication of the theories of Karl Weierstrass (by his pupil Kossak), Heine (Crelle, 74), Georg Cantor (Annalen, 5), and Richard Dedekind. Méray had taken in 1869 the same point of departure as Heine, but the

Page 10: Nota Math

theory is generally referred to the year 1872. Weierstrass's method has been completely set forth by Salvatore Pincherle (1880), and Dedekind's has received additional prominence through the author's later work (1888) and the recent endorsement by Paul Tannery (1894). Weierstrass, Cantor, and Heine base their theories on infinite series, while Dedekind founds his on the idea of a cut (Schnitt) in the system of real numbers, separating all rational numbers into two groups having certain characteristic properties. The subject has received later contributions at the hands of Weierstrass, Kronecker (Crelle, 101), and Méray.
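For concreteness, a minimal modern statement of Dedekind's idea (an added sketch in today's notation, not Dedekind's own wording): a cut is a partition of the rationals

    \[
      (A, B): \quad A \cup B = \mathbb{Q}, \quad A, B \neq \emptyset,
      \quad a < b \ \text{ for all } a \in A,\ b \in B,
    \]

and each real number corresponds to such a cut; for example, \( A = \{ x \in \mathbb{Q} : x \le 0 \ \text{or}\ x^2 < 2 \} \) determines \( \sqrt{2} \).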

Continued fractions, closely related to irrational numbers (and due to Cataldi, 1613), received attention at the hands of Euler, and at the opening of the nineteenth century were brought into prominence through the writings of Joseph Louis Lagrange. Other noteworthy contributions have been made by Druckenmüller (1837), Kunze (1857), Lemke (1870), and Günther (1872). Ramus (1855) first connected the subject with determinants, resulting, with the subsequent contributions of Heine, Möbius, and Günther, in the theory of Kettenbruchdeterminanten. Dirichlet also added to the general theory, as have numerous contributors to the applications of the subject.
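To illustrate the connection (an added example, not from the source), the continued fraction of \( \sqrt{2} \) is infinite and periodic:

    \[
      \sqrt{2} = 1 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \cdots}}} = [1; 2, 2, 2, \dots],
    \]

with convergents 1, 3/2, 7/5, 17/12, ... approaching \( \sqrt{2} \); a rational number, by contrast, always has a terminating continued fraction, which is one way the subject ties to irrationality.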