Definition of the Quantity

    a) Definition of the quantity:

    Amount of substance is a standards-defined quantity that measures the size of an ensemble of elementary entities, such as atoms, molecules, electrons, and other particles. It is sometimes referred to as chemical amount. The International System of Units (SI) defines the amount of substance to be proportional to the number of elementary entities present. The SI unit for amount of substance is the mole, with the unit symbol mol. The mole is defined as the amount of substance that contains as many elementary entities as there are atoms in 12 g of the isotope carbon-12. This number is called Avogadro's number and has the value 6.02214179(30) × 10²³. It is the numerical value of the Avogadro constant, which has the unit 1/mol and relates the molar mass of an amount of substance to its mass.
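
    Written as formulas, the definition above amounts to n = N / N_A for a collection of N elementary entities, and n = m / M for a sample of mass m and molar mass M. The short Python sketch below illustrates these relations; the function names and the worked example are chosen here for illustration and are not taken from the source.

        # Relations implied by the definition above (illustrative sketch):
        #   n = m / M      amount of substance from mass and molar mass
        #   N = n * N_A    number of elementary entities in that amount

        AVOGADRO = 6.02214179e23   # 1/mol, the value quoted in the text

        def amount_of_substance(mass_g, molar_mass_g_per_mol):
            """Amount of substance n (in mol) from sample mass and molar mass."""
            return mass_g / molar_mass_g_per_mol

        def number_of_entities(amount_mol):
            """Number of elementary entities N in a given amount of substance."""
            return amount_mol * AVOGADRO

        # 12 g of carbon-12 (molar mass 12 g/mol) contains one mole of atoms:
        n = amount_of_substance(12.0, 12.0)      # 1.0 mol
        print(n, number_of_entities(n))          # 1.0  6.02214179e+23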

    Amount of substance appears in thermodynamic relations such as the ideal gas law, and in stoichiometric relations between reacting molecules, as in the law of multiple proportions.

    The only other unit of amount of substance in current use is the pound-mole, with the symbol lb-mol, which is sometimes used in chemical engineering in the United States. One pound-mole is exactly 453.59237 mol.
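
    To show how amount of substance enters the ideal gas law mentioned above (pV = nRT) and how the pound-mole conversion is applied, here is a small illustrative Python sketch; the chosen conditions (1 atm, 1 m³, 25 °C) are assumptions made only for the example.

        R = 8.314462618            # J/(mol*K), molar gas constant
        LB_MOL_IN_MOL = 453.59237  # 1 lb-mol expressed in mol (exact, per the text)

        def amount_from_ideal_gas(pressure_pa, volume_m3, temperature_k):
            """n = pV / (RT): amount of substance of an ideal gas sample."""
            return pressure_pa * volume_m3 / (R * temperature_k)

        # Assumed example conditions: 1 atm, 1 m^3, 25 degrees Celsius
        n = amount_from_ideal_gas(101325.0, 1.0, 298.15)
        print(n)                  # about 40.9 mol
        print(n / LB_MOL_IN_MOL)  # the same amount expressed in lb-mol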

    Historical Development:

    The alchemists, and especially the early metallurgists, probably had some notion of amount of substance, but there are no surviving records of any generalization of the idea beyond a set of recipes. In 1758, Mikhail Lomonosov questioned the idea that mass was the only measure of the quantity of matter, but he did so only in relation to his theories on gravitation. The development of the concept of amount of substance was coincident with, and vital to, the birth of modern chemistry.

    1777: Wenzel publishes Lessons on Affinity, in which he demonstrates that the proportions of the "base component" and the "acid component" (cation and anion in modern terminology) remain the same during reactions between two neutral salts.

    1789: Lavoisier publishes Treatise of Elementary Chemistry, introducing the concept of a chemical element and clarifying the Law of conservation of mass for chemical reactions.

    1792: Richter publishes the first volume of Stoichiometry or the Art of Measuring the Chemical Elements (publication of subsequent volumes continues until 1802). The term "stoichiometry" is used for the first time. The first tables of equivalent weights are published for acid-base reactions. Richter also notes that, for a given acid, the equivalent mass of the acid is proportional to the mass of oxygen in the base.

    1794: Proust's Law of definite proportions generalizes the concept of equivalent weights to all types of chemical reaction, not simply acid-base reactions.

    1805: Dalton publishes his first paper on modern atomic theory, including a "Table of the relative weights of the ultimate particles of gaseous and other bodies".

    The concept of atoms raised the question of their weight. While many were skeptical about the reality of atoms, chemists quickly found atomic weights to be an invaluable tool in expressing stoichiometric relationships.

    1808: Publication of Dalton's A New System of Chemical Philosophy, containing the first table of atomic weights (based on H = 1).

    1809: Gay-Lussac's Law of combining volumes, stating an integer relationship between the volumes of reactants and products in the chemical reactions of gases.

    1811: Avogadro hypothesizes that equal volumes of different gases contain equal numbers of particles, now known as Avogadro's law.

    1813/1814: Berzelius publishes the first of several tables of atomic weights based on the scale of O = 100.

    1815: Prout publishes his hypothesis that all atomic weights are integer multiples of the atomic weight of hydrogen. The hypothesis is later abandoned given the observed atomic weight of chlorine (approx. 35.5 relative to hydrogen).

    1819: Dulong-Petit law relating the atomic weight of a solid element to its specific heat capacity.

    1819: Mitscherlich's work on crystal isomorphism allows many chemical formulae to be clarified, resolving several ambiguities in the calculation of atomic weights.

    1834: Clapeyron states the ideal gas law.

    The ideal gas law was the first to be discovered of the many relationships between the number of atoms or molecules in a system and other physical properties of the system, apart from its mass. However, this was not sufficient to convince all scientists of the existence of atoms and molecules; many considered it simply a useful tool for calculation.

    1834: Faraday states his Laws of electrolysis, in particular that "the chemical decomposing action of a current is constant for a constant quantity of electricity".

    1856: Krönig derives the ideal gas law from kinetic theory. Clausius publishes an independent derivation the following year.

    1860: The Karlsruhe Congress debates the relation between "physical molecules", "chemical molecules" and atoms, without reaching consensus.

    1865: Loschmidt makes the first estimate of the size of gas molecules and hence of the number of molecules in a given volume of gas, now known as the Loschmidt constant.

    1886: Van 't Hoff demonstrates the similarities in behaviour between dilute solutions and ideal gases.

    1886: Eugen Goldstein observes discrete particle rays in gas discharges, laying the foundation of mass spectrometry, a tool later used to establish the masses of atoms and molecules.

    1887: Arrhenius describes the dissociation of electrolytes in solution, resolving one of the problems in the study of colligative properties.

    1893: First recorded use of the term mole to describe a unit of amount of substance, by Ostwald in a university textbook.

    1897: First recorded use of the term mole in English.

    By the turn of the twentieth century, the concept of atomic and molecular entities was generally accepted, but many questions remained, not least the size of atoms and their number in a given sample. The concurrent development of mass spectrometry, starting in 1886, supported the concept of atomic and molecular mass and provided a tool for direct relative measurement.

    1905: Einstein's paper on Brownian motion dispels any last doubts on the physical reality of atoms, and opens the way for an accurate determination of their mass.

    1909: Perrin coins the name Avogadro constant and estimates its value.

    1913: Discovery of isotopes of non-radioactive elements by Soddy and Thomson.

    1914: Richards receives the Nobel Prize in Chemistry for "his determinations of the atomic weight of a large number of elements".

    1920: Aston proposes the whole number rule, an updated version of Prout's hypothesis.

    1921: Soddy receives the Nobel Prize in Chemistry "for his work on the chemistry of radioactive substances and investigations into isotopes".

    1922: Aston receives the Nobel Prize in Chemistry "for his discovery of isotopes in a large number of non-radioactive elements, and for his whole-number rule".

    1926: Perrin receives the Nobel Prize in Physics, in part for his work in measuring Avogadro's constant.

    1959/1960: Unified atomic weight scale based on carbon-12 (¹²C = 12) adopted by IUPAP and IUPAC.

    1968: The mole is recommended for inclusion in the International System of Units (SI) by the International Committee for Weights and Measures (CIPM).

    1971: The mole is approved as the SI base unit of amount of substance.

    Quantity Type:

    1. Derived quantity

    When amount of substance enters into a derived quantity, it is usually as the denominator: such quantities are known as molar quantities. For example, the quantity which describes the volume occupied by a given amount of substance is called the molar volume, while the quantity which describes the mass of a given amount of substance is the molar mass. Molar quantities are sometimes denoted by a subscript Latin "m" in the symbol, e.g. Cp,m, the molar heat capacity at constant pressure; the subscript may be omitted if there is no risk of ambiguity, as is often the case in pure chemistry.

    The main derived quantity in which amount of substance enters into the numerator is amount of substance concentration, c. This name is often abbreviated to "amount concentration", except in clinical chemistry, where "substance concentration" is the preferred term (to avoid any possible ambiguity with mass concentration). The name "molar concentration" is incorrect, though commonly used.
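
    The molar quantities and the amount concentration described above reduce to simple ratios: molar mass M = m/n, molar volume V_m = V/n, and amount concentration c = n/V. A minimal Python sketch follows; the sample values (a NaCl solution) are assumed purely for illustration and are not taken from the source.

        # Molar quantities and amount concentration (illustrative sketch):
        #   M   = m / n    molar mass, g/mol
        #   V_m = V / n    molar volume, L/mol
        #   c   = n / V    amount (of substance) concentration, mol/L

        def molar_mass(mass_g, amount_mol):
            return mass_g / amount_mol             # g/mol

        def molar_volume(volume_l, amount_mol):
            return volume_l / amount_mol           # L/mol

        def amount_concentration(amount_mol, volume_l):
            return amount_mol / volume_l           # mol/L

        # Assumed sample: 5.844 g of NaCl (molar mass about 58.44 g/mol)
        # made up to 0.100 L of solution
        n = 5.844 / 58.44                          # about 0.100 mol
        print(amount_concentration(n, 0.100))      # about 1.0 mol/L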

    2. Dimension of the quantity

    The mole is a unit of measurement used in chemistry to express amounts of a chemical substance, defined as the amount of any substance that contains as many elementary entities (e.g., atoms, molecules, ions, electrons) as there are atoms in 12 grams of pure carbon-12 (¹²C), the isotope of carbon with atomic weight 12. This corresponds to the Avogadro constant, which has a value of 6.02214179(30) × 10²³ elementary entities of the substance per mole. The mole is one of the base units of the International System of Units, has the unit symbol mol, and corresponds to the dimension symbol N.

    The history of the mole is intertwined with that of molecular mass, the atomic mass unit, Avogadro's number and related concepts.

    The first table of atomic weights was published by John Dalton (1766-1844) in 1805, based on a system in which the atomic weight of hydrogen was defined as 1. These atomic weights were based on the stoichiometric proportions of chemical reactions and compounds, a fact that greatly aided their acceptance: it was not necessary for a chemist to subscribe to atomic theory (an unproven hypothesis at the time) to make practical use of the tables. This would lead to some confusion between atomic weights (promoted by proponents of atomic theory) and equivalent weights (promoted by its opponents, and which sometimes differed from atomic weights by an integer factor), which would last throughout much of the nineteenth century.

    Jöns Jacob Berzelius (1779-1848) was instrumental in the determination of atomic weights to ever-increasing accuracy. He was also the first chemist to use oxygen as the standard to which other weights were referred. Oxygen is a useful standard, as, unlike hydrogen, it forms compounds with most other elements, especially metals. However, he chose to fix the atomic weight of oxygen as 100, an innovation that did not catch on.

    Charles Frédéric Gerhardt (1816-1856), Henri Victor Regnault (1810-1878) and Stanislao Cannizzaro (1826-1910) expanded on Berzelius' work, resolving many of the problems of unknown stoichiometry of compounds, and the use of atomic weights attracted a large consensus by the time of the Karlsruhe Congress (1860). The convention had reverted to defining the atomic weight of hydrogen as 1, although at the level of precision of measurements at that time (relative uncertainties of around 1%) this was numerically equivalent to the later standard of oxygen = 16. However, the chemical convenience of having oxygen as the primary atomic weight standard became ever more evident with advances in analytical chemistry and the need for ever more accurate atomic weight determinations.

    Developments in mass spectrometry led to the adoption of oxygen-16 as the standard substance, in lieu of natural oxygen. The current definition of the mole, based on carbon-12, was approved during the 1960s. The four different definitions were equivalent to within 1%.

    Primary, secondary and working standards:

    The calibration facilities provided within the instrumentation department of a company provide the first link in the calibration chain. Instruments used for calibration at this level are known as working standards. Because such working standard instruments are kept by the instrumentation department solely for calibration duties, and for no other purpose, it can be assumed that they will maintain their accuracy over a reasonable period of time, because use-related deterioration in accuracy is largely eliminated. However, over the longer term, the characteristics of even such standard instruments will drift, mainly due to ageing effects in components within them. Therefore, over this longer term, a programme must be instituted for calibrating working standard instruments at appropriate intervals of time against instruments of yet higher accuracy.

    The instrument used for calibrating working standard instruments is known as a secondary reference standard. This must obviously be a very well-engineered instrument that gives high accuracy and is stabilized against drift in its performance with time. This implies that it will be an expensive instrument to buy. It also requires that the environmental conditions in which it is used be carefully controlled in respect of ambient temperature, humidity, etc. When the working standard instrument has been calibrated by an authorized standards laboratory, a calibration certificate will be issued. This will contain at least the following information:

    the identification of the equipment calibrated

    the calibration results obtained

    the measurement uncertainty

    any use limitations on the equipment calibrated

    the date of calibration

    the authority under which the certificate is issued.
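
    One possible way to hold exactly the certificate items listed above in software is sketched below in Python; the class name, field names and example values are hypothetical and are not taken from any standard or from the source.

        from dataclasses import dataclass
        from datetime import date

        @dataclass
        class CalibrationCertificate:
            """Minimal record of the items a calibration certificate must contain."""
            equipment_id: str             # identification of the equipment calibrated
            results: dict                 # the calibration results obtained
            measurement_uncertainty: str  # the measurement uncertainty
            use_limitations: list         # any use limitations on the equipment
            calibration_date: date        # the date of calibration
            issuing_authority: str        # the authority under which it is issued

        # Hypothetical example entry:
        cert = CalibrationCertificate(
            equipment_id="WS-017 pressure gauge",
            results={"0-10 bar range": "maximum error 0.02 bar"},
            measurement_uncertainty="+/- 0.01 bar (k = 2)",
            use_limitations=["not for use above 40 degrees C"],
            calibration_date=date(2019, 7, 29),
            issuing_authority="company standards laboratory",
        )
        print(cert.equipment_id, cert.calibration_date)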

    Instrument calibration has to be repeated at prescribed intervals because the characteristics of any instrument change over a period of time. Changes in instrument characteristics are brought about by such factors as mechanical wear, and the effects of dirt, dust, fumes, chemicals and temperature changes in the operating environment. To a great extent, the magnitude of the drift in characteristics depends on the amount of use an instrument receives and hence on the amount of wear and the length of time that it is subjected to the operating environment. However, some drift also occurs even in storage, as a result of ageing effects in components within the instrument.