Ppt Mech 6sem EMM


VELTECH VEL MULTIMEDIA VEL HIGHTECH

SCHOOL OF MECHANICAL

DEPARTMENT OF MECHANICAL ENGINEERING

SYLLABUS

UNIT I: Concept of Measurement

Calibration.

Calibration is the process of establishing the relationship between a measuring device and the units of measure. This is done by comparing a device, or the output of an instrument, to a standard having known measurement characteristics. For example, the length of a stick can be calibrated by comparing it to a standard of known length. Once the relationship of the stick to the standard is known, the stick can be used to measure the length of other things.

Sensitivity of a measuring instrument.

[Figure: sensitivity is the slope dy/dx of the curve of instrument reading (y) against measured quantity (x).]

Readability.

In the sciences, readability is a measure of an instrument's ability to display incremental changes in its output value. For example, a balance with a readability of 1 mg will not display any difference between objects with masses from 0.6 mg to 1.4 mg, because the possible display values are 0 mg, 1 mg, 2 mg, etc. Likewise, a balance with a readability of 0.1 mg will not display any difference between objects with masses from 0.06 mg to 0.14 mg.
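As an illustrative sketch (not part of the original notes), the balance example can be modelled by rounding the true mass to the nearest display increment:

```python
# Sketch (an assumption, not from the notes): a display with a given
# readability quantizes nearby masses to the same indicated value.

def displayed(mass_mg: float, readability_mg: float) -> float:
    """Round a true mass to the nearest display increment."""
    return round(mass_mg / readability_mg) * readability_mg

# A balance with 1 mg readability shows 1 mg for anything from 0.6 to 1.4 mg.
for true_mass in (0.6, 1.0, 1.4):
    print(true_mass, "->", displayed(true_mass, 1.0))
```

Masses of 0.6, 1.0 and 1.4 mg are all displayed as 1.0 mg, exactly as the definition describes.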

True size and Actual size.

True size: the theoretical size of a dimension, free from errors.

Actual size: the size obtained through measurement, with permissible error.

Hysteresis.

A system with hysteresis may be in any of a number of states, independent of the inputs to the system. To be exact, a system with hysteresis exhibits path-dependence, or "rate-independent memory".

Range.

Range is the difference between the highest and lowest value an instrument can measure.

Span.

Span is the distance or interval between two points.

Example: in a temperature measurement, if the higher value is 200 °C and the lower value is 150 °C, then span = 200 - 150 = 50 °C.

Resolution.

Resolution is the quantitative measure of the ability of an optical instrument to produce separable images of different points on an object; usually, the smallest angular or linear separation of two object points for which they may be resolved according to the Rayleigh criterion.

Verification.

It is the process of testing the instrument to determine its errors.

Scale interval.

It is the difference, in units, between two successive scale marks.

Dead Zone.

Dead zone is the range through which a stimulus can be varied without producing a change in the response of the measuring instrument.

Threshold.

Threshold is the smallest input change an instrument can detect.

Discrimination.

Discrimination is the ability of an instrument to differentiate between various physical parameters, or to measure even minute changes in readings.

Backlash.

Backlash is the play or loose motion in an instrument due to the clearance existing between mechanically contacting parts. It is similar to hysteresis but is more commonly applied to mechanical systems, where it often occurs between interacting parts as a result of looseness.

Response time.

Response time is the time a system or functional unit takes to react to a given input.

Repeatability.

Repeatability is the variation in measurements taken by a single person or instrument on the same item and under the same conditions. A measurement may be said to be repeatable when this variation is smaller than some agreed limit.

Bias.

Bias is the tendency of an instrument or measurement procedure to give results that are systematically offset from the true value in a particular direction. A biased instrument consistently reads high or low; bias is therefore a systematic rather than a random error.

Magnification.

Magnification is the process of enlarging something only in appearance, not in physical size. Magnification is also the number describing the factor by which an object was magnified.

Drift.

Drift is a slow change. In metrology it refers to a gradual change in an instrument's indication over time while the input remains constant.

Reproducibility.

Reproducibility is one of the main principles of the scientific method, and refers to the ability of a test or experiment to be accurately reproduced, or replicated, by someone else working independently.

Uncertainty.

Uncertainty is the lack of certainty: a state of limited knowledge in which it is impossible to exactly describe an existing state or future outcome, or in which more than one outcome is possible. It applies to predictions of future events, to physical measurements already made, and to the unknown.

Traceability.

Traceability refers to the completeness of the information about every step in a process chain, relating a measurement through an unbroken chain of comparisons to a known standard.

Fiducial value.

The prescribed value of a quantity to which reference is made.

Parallax.

Parallax, more accurately motion parallax, is the change of angular position of two observations of a single object relative to each other as seen by an observer, caused by the motion of the observer.

Accuracy and uncertainty, with example.

Accuracy: closeness to the true value.

Example: measuring accuracy is ±0.02 mm for a part diameter of 25 mm.

Here the true value lies between 24.98 and 25.02 mm.

The uncertainty about the true value is ±0.02 mm.

Difference between precision and accuracy.

Accuracy: the maximum amount by which the result differs from the true value.

Precision: the degree of repeatability. If an instrument is not precise, it will give different results for the same dimension on repeated readings.

Differentiate between sensitivity and range with a suitable example.

Example: an instrument has a scale reading from 0.01 mm to 100 mm.

Here the sensitivity of the instrument is 0.01 mm, i.e. the minimum value the instrument can resolve on its scale. The range is 0.01 mm to 100 mm, i.e. the minimum to maximum values the instrument can read.

From the figure, the instrument is ______.

[Figure: repeated readings (x) cluster closely around their average, but the average is offset from the true value.]

Answer: precise but not accurate.

Systematic error and correction.

Error: the deviation of the measured value from the actual value.

Correction: the numerical value which should be added to the measured value to get the correct result.

Measurand.

The measurand is the physical quantity or property to be measured, such as a length, diameter or angle.

Deterministic metrology.

The metrology in which part measurement is replaced by process measurement. New techniques, such as 3D error compensation by CNC systems, are applied.

Over-damped and under-damped systems.

Over-damped: the final indication of the measurement is approached exponentially from one side.

Under-damped: the pointer approaches the position corresponding to the final reading and makes a number of oscillations around it.

[Figure: indication versus time, showing an under-damped response oscillating about the final reading and an over-damped response approaching it from one side.]
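These two behaviours can be sketched with a standard second-order instrument model (an assumption; the notes do not specify the dynamics). The unit-step response below oscillates about the final reading when under-damped and approaches it from one side when over-damped:

```python
# Sketch (assumed second-order model, not from the notes) of pointer
# behaviour for under-damped (zeta < 1) and over-damped (zeta > 1) systems.
import math

def step_response(zeta: float, wn: float, t: float) -> float:
    """Unit-step response; zeta is the damping ratio, wn the natural frequency.
    The critically damped case (zeta == 1) is omitted for brevity."""
    if zeta < 1.0:  # under-damped: decaying oscillation about the final value
        wd = wn * math.sqrt(1 - zeta ** 2)
        return 1 - math.exp(-zeta * wn * t) * (
            math.cos(wd * t) + zeta / math.sqrt(1 - zeta ** 2) * math.sin(wd * t)
        )
    # over-damped: sum of two decaying exponentials, no overshoot
    r1 = -wn * (zeta - math.sqrt(zeta ** 2 - 1))
    r2 = -wn * (zeta + math.sqrt(zeta ** 2 - 1))
    return 1 - (r2 * math.exp(r1 * t) - r1 * math.exp(r2 * t)) / (r2 - r1)

under = [step_response(0.2, 1.0, t / 2) for t in range(40)]
over = [step_response(2.0, 1.0, t / 2) for t in range(40)]
# Under-damped overshoots the final reading; over-damped never does.
print(max(under) > 1.0, all(v <= 1.0 for v in over))  # True True
```

The `zeta` and `wn` values here are arbitrary, chosen only to make the two regimes visible.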

Accuracy in terms of repeatability and systematic error.

An instrument is accurate only when it is both repeatable (small random scatter between repeated readings) and free of systematic error (small bias between the mean reading and the true value).

Four methods of measurement.

1. Direct method
2. Indirect method
3. Comparison method
4. Coincidence method

Classification of measuring instruments.

1. Angle measuring instruments
2. Length measuring instruments
3. Instruments for surface finish
4. Instruments for deviations

Metrology.

Metrology is defined as the science of pure measurement. For engineering purposes, it is restricted to measurements of length, angles and other quantities which are expressed in linear or angular terms.

Dynamic metrology.

It refers to a group of techniques for measuring small variations of a continuous nature. These techniques have proved very valuable, and a record of continuous measurement over a surface can be obtained.

Basic need for measurement.

The basic need for measurement in the engineering industry is to determine whether a component has been manufactured to the requirements of a specification.

Dimensional properties to be considered when checking or measuring a component.

Length, flatness, parallelism, surface roughness, angle, profile, relative position, roundness and concentricity, and accuracy of form.

Difference between indicating and non-indicating measuring instruments.

An indicating measuring instrument indicates the size of the measured value.

A non-indicating instrument does not indicate the measured size, e.g. "Go" and "No-go" gauges.

Factors affecting the accuracy of measurement.

1. Temperature difference

2. Support position

3. Reading and parallax effects

4. Accuracy of equipment

5. Application of force

6. Sine and Cosine error

7. Different inspectors

Abbe's principle (alignment principle).

Abbe's principle of alignment states that the line of axis of measurement should coincide with the line of scale or other dimensional reference.

Optical principles employed in metrology.

1) Reflection 2) Refraction 3) Interference

Sources of controllable error.

1. Calibration error

2. Ambient condition

3. Stylus pressure

4. Avoidable error.

Sources of random error.

Specific causes of such errors cannot be determined, but likely sources are:

1. Small variations in the position of setting standards and workpiece.
2. Slight displacement of lever joints in the measuring device.
3. Transient fluctuations of friction in the measuring instrument.
4. Operator error in reading the scale.

How the accuracy of measurement is affected by poor contact between the workpiece and the measuring probe.

Poor contact between the workpiece and the instrument will cause error. Although everything may feel all right, error is bound to occur. Gauges with wide areas of contact should not be used on parts with irregular or curved surfaces.

A test indicator is used to check the concentricity of a shaft, but its stylus is set so that its movement makes an angle of 30° with the normal to the shaft. If the total indicator reading is 0.02 mm, what is the true eccentricity?

This is a case of cosine error: although the stylus movement is small, the alignment error is large, so the cosine error is appreciable.

True reading = 0.02 × cos 30°

= 0.0173 mm

Therefore eccentricity = 1/2 × (true value)

= 0.0087 mm
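The arithmetic in this example can be checked with a short script (Python used here purely for illustration):

```python
# Check of the cosine-error example: stylus misaligned 30 deg to the normal.
import math

indicated = 0.02                                  # total indicator reading, mm
true_reading = indicated * math.cos(math.radians(30))
eccentricity = true_reading / 2                   # TIR is twice the eccentricity

print(round(true_reading, 4), round(eccentricity, 4))  # 0.0173 0.0087
```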

" Precision "

Precision refers in variability when used to make repeated measurements under carefully controlled conditions.

Reproducibility.

The term reproducibility of a method of measurement refers to the consistency of its pattern of variation.

Accuracy.

The term accuracy refers to the agreement of the results of a measurement with the true value of the measured quantity.

Difference between indicating and recording instruments.

In an indicating measuring instrument the value of the measured quantity is visually indicated but not recorded. In recording instruments the values of the measured quantity are recorded on a chart, digital computer or data logger.

Accuracy and sensitivity of a measuring instrument.

Accuracy is the closeness with which the measuring instrument can measure the "true value" of a quantity under stated conditions of use, i.e. its ability to "tell the truth".

Sensitivity is the relationship between a change in output reading and a given change of input. Sensitivity is often known as scale factor or instrument magnification.

Readability.

Readability is defined as the ease with which readings may be taken from an instrument. Readability difficulties often arise due to parallax errors.

Methods of measurement.

In precision measurement various methods are followed, depending upon the accuracy required.

1. Direct method of measurement
2. Indirect method of measurement
3. Fundamental method of measurement
4. Comparison method of measurement
5. Substitution method of measurement
6. Transposition method of measurement
7. Coincidence method of measurement
8. Null method of measurement
9. Deflection method of measurement
10. Interpolation method of measurement
11. Extrapolation method of measurement
12. Complementary method of measurement
13. Composite method of measurement
14. Element method of measurement
15. Contact and contactless methods of measurement

Measuring instruments.

According to function:

1. Length measuring instruments
2. Angle measuring instruments
3. Instruments for checking deviation from geometrical forms
4. Instruments for determining the quality of surface finish

According to accuracy:

1. Most accurate instruments. Example: light interference instruments.
2. Less accurate instruments. Example: tool room microscopes, comparators, optimeters.
3. Still less accurate instruments. Example: dial indicators, vernier calipers.

Damping.

Damping is any effect that tends to reduce the amplitude of oscillations of an oscillatory system.

Geometric dimensioning and tolerancing.

Geometric dimensioning and tolerancing (GD&T) is a symbolic language used on engineering drawings and computer-generated three-dimensional solid models for explicitly describing nominal geometry and its allowable variation.

Sources of error.

During measurement several types of error may arise, and these errors can be broadly classified into two categories.

a) Controllable errors:

These are controllable in both their magnitude and sense. They can be determined and reduced if attempts are made to analyse them, and are also known as systematic errors. They can be due to:

1. Calibration errors:

The actual length of standards such as slip gauges and engraved scales will vary from the nominal value by a small amount. Sometimes instrument inertia and hysteresis effects do not let the instrument translate with complete fidelity. Often signal transmission errors occur, such as a drop in voltage along the wires between the transducer and the electric meter. For high-order accuracy these variations have real significance, and to minimise them calibration curves must be used.

2. Ambient conditions:

Variations of the ambient conditions from the internationally agreed standard values of 20 °C temperature, 760 mm of mercury barometric pressure and 10 mm of mercury vapour pressure can give rise to errors in the measured size of the component. Temperature is by far the most significant of these ambient conditions, and due correction is needed to obtain error-free results.

3. Stylus pressure:

Errors induced by stylus pressure are also appreciable. Whenever a component is measured under a definite stylus pressure, both deformation of the workpiece surface and deflection of the stylus will occur.

4. Avoidable errors:

These include errors due to parallax and the effect of misalignment of the workpiece centres. Instrument location errors, such as placing a thermometer in sunlight when attempting to measure air temperature, also belong to this category.

b) Random errors:

These occur randomly, and the specific causes of such errors cannot be determined; likely sources are small variations in the position of setting standards and workpiece, slight displacement of lever joints in the measuring instrument, transient fluctuations of friction in the measuring instrument, and operator errors in reading scale-and-pointer type displays or engraved scale positions.

From the above, it is clear that systematic errors are those which are repeated consistently with repetition of the experiment, whereas random errors are accidental, and their magnitude and sign cannot be predicted from a knowledge of the measuring system and the conditions of measurement.

Classification of measurements.

In precision measurements, various methods of measurement are followed depending upon the accuracy required and the amount of permissible error.

The various methods of measurement are classified as follows:

Direct method of measurement

Indirect method of measurement

Absolute method of measurement

Comparative method of measurement

Contact method of measurement

Contactless method of measurement

In the direct method of measurement the value is determined directly, whereas in the indirect method the dimension is determined by measuring other values functionally related to the required value. The direct method is simple and the most widely employed in production.

In many cases, for example when checking the pitch diameter of threads, the direct method may lead to large errors in measurement. In such cases it is more expedient to make an indirect measurement.

An absolute method of measurement is one in which the zero division of the measuring tool or instrument corresponds to the zero value of the measured dimension (e.g. steel rule, vernier caliper, micrometer screw gauge). By the absolute method the full value of the dimension is determined.

In the comparative method, only the deviations of the measured dimension from a master gauge are determined (e.g. dial comparator).

In contact methods of measurement, the measuring tip of the instrument actually touches the surface to be measured, e.g. dial comparator, screw gauges, etc. In such cases arrangements for constant contact pressure should be provided in order to prevent errors due to excess contact pressure.

In the contactless method of measurement, no contact is required. Such instruments include the tool maker's microscope and the projection comparator.

According to function, measuring instruments are classified as:

Length measuring instruments

Angle measuring instruments

Instrument for checking deviation from geometrical forms

Instrument for determining the quality of surface finish.

According to the accuracy of measurement, measuring instruments are classified as follows.

Most accurate instruments, e.g. light interference instruments.

The second group consists of less accurate instruments, such as tool room microscopes, comparators, optimeters, etc.

The third group consists of still less accurate instruments, e.g. dial indicators, vernier calipers and rules with vernier scales.

Measuring instruments are also classified in accordance with their metrological properties, such as range of instrument, scale graduation value, scale spacing, sensitivity and reading accuracy.

Range of Measurement :

It indicates the size values between which measurements may be made on the given instrument.

Scale Spacing :

It is the distance between the axis of two adjacent graduations on the scale.

Scale division Value :

It is the measured value corresponding to one division of the instrument scale, e.g. for a vernier caliper the scale division value is 0.1 mm.

Sensitivity (amplification or gearing ratio):

It is the ratio of the scale spacing to the scale division value. It can also be expressed as the ratio of the product of all the larger lever arms to the product of all the smaller lever arms.

Sensitivity Threshold :

It is defined as the minimum measured value which may cause any movement whatsoever of the indicating hand.

Reading Accuracy :

It is the accuracy that may be attained in using a measuring instrument.

Reading Error :

It is defined as the difference between the reading of the instrument and the actual value of the dimension being measured.

A few important precautions for the use of instruments towards achieving accuracy in measurement are as follows:

The measurement must be made at right angles to the surfaces of the component.

The component must be supported so that it does not collapse under the measuring pressure or under its own weight. The workpiece must be cleaned before being measured, and coated with oil or a corrosion inhibitor after inspection. Measuring instruments must be handled with care so that they are not damaged or strained. They must be kept in their cases when not in use, and kept clean and lightly oiled on the bright surfaces. They should be regularly checked to ensure that they have not lost their initial accuracy.

It must be emphasised that it is not good practice to rely blindly on the accuracy of the instruments and on the readings taken: readings should be double-checked, and the instruments should be periodically checked against the appropriate standards. Measuring instruments are produced to a high degree of accuracy, from the engineer's common rule to the most complex optical instrument, and they should be treated accordingly. Instruments are easily damaged, and very often the damage is not noticeable. Always handle instruments with great care, and report immediately any accidental damage. Protect highly polished surfaces from corrosion by handling them as little as possible and by covering them with petroleum jelly when not in use.

Sources of errors in precision measurement.

Failure to consider the following factors may introduce errors in measurement:

Alignment Principle

Location of the measured part

Temperature

Parallax.

Alignment principle (Abbe's principle):

Abbe's principle of alignment states that "the axis or line of measurement of the measured part should coincide with the measuring scale or the axis of measurement of the measuring instrument".

The effect of a simple scale alignment error is shown in the figure.

[Figure: a scale misaligned by angle θ to the line of measurement; the apparent length L is read along the scale, while the true length is L cos θ.]

If θ = angle of scale misalignment,

L = apparent length,

L cos θ = true length,

and e = induced error, then

e = L - L cos θ = L(1 - cos θ)

An alignment error of 2° over 1 m introduces an error of approximately 0.6 mm.
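The quoted figure of roughly 0.6 mm for a 2° misalignment over 1 m can be verified numerically:

```python
# Check of the alignment-error formula e = L * (1 - cos(theta))
# for a 2 degree scale misalignment over a 1 m (1000 mm) length.
import math

L_mm = 1000.0
theta = math.radians(2)
e = L_mm * (1 - math.cos(theta))
print(round(e, 2))  # 0.61 (mm), i.e. approximately 0.6 mm
```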

Error is introduced into dial indicator readings if the plunger axis does not coincide with the axis or line of measurement.

[Figure: dial gauge axis inclined at angle θ to the line or axis of measurement.]

If e = induced error,

L = change in indicator reading,

L cos θ = surface displacement, then

e = L(1 - cos θ)

To ensure correct displacement readings on the dial indicator the plunger must, of course be normal to the surface in both mutually perpendicular planes.

A second source of error, well illustrated by the vernier caliper and similar instruments, is associated with measuring pressure or "feel". The measuring pressure is applied by the adjusting screw, which is adjacent and parallel to the scale. A bending moment is introduced, equal to the product of the force applied by the adjusting screw and the perpendicular distance between the screw centre line and the line of measurement, as in the figure.

Variations of the force applied at the screw are magnified at the line of measurement, and a not unusual form of damage to vernier calipers is permanent distortion of the measuring jaws, presumably from this source, as in the figure.

Location:

When using a sensitive comparator, the measured part is located on a table which forms the datum for comparison with the standard. The comparator reading is thus an indication of the displacement of the upper surface of the measured part from the datum. Faults at the locating surface of the part, such as damage, geometrical variations from part to part, or the presence of foreign matter, are also transmitted to the indicator. This gives false information regarding the true length of the part by introducing both sine and cosine errors.

Where location conditions may not be ideal, e.g. inter-stage measurement during production, sensors operating on each side of the component can be used, which eliminates the more serious sine-type error. A two-probe system measures length rather than surface displacement, and highly sensitive electronic comparators of this type are used for slip gauge measurement.

Temperature:

The standard reference temperature at which line and end standards are said to be at their true length is 20 °C, and for the highest accuracy in measurement this temperature should be maintained. When this is not possible and the length at the reference temperature must be known, a correction is made to allow for the difference between ambient and reference temperature. The correction required is 0.001375 mm when a steel object exactly 25 mm long at 20 °C, with a coefficient of linear expansion of 11 µm/m·°C, is measured at 25 °C; this is rather larger than the increment step of the M88/2 slip gauge set.
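The correction value quoted for the 25 mm steel object follows from the linear expansion relation ΔL = α·L·ΔT, as a quick check shows:

```python
# Check of the temperature-correction figure for the 25 mm steel example.
alpha = 11e-6      # coefficient of linear expansion of steel, per metre per deg C
L = 25.0           # nominal length at the 20 deg C reference temperature, mm
dT = 25.0 - 20.0   # measurement temperature minus reference temperature, deg C

correction = alpha * L * dT
print(round(correction, 6))  # 0.001375 (mm)
```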

However, for less stringent measurement requirements it is not essential that correction to reference temperature is made provided that the following precautions and conditions are observed.

a) The temperature at which the measurement is made is not changing significantly.

b) The gauge and work being compared are at the same temperature, and that temperature is the same as ambient.

c) The gauge and work have the same coefficient of linear expansion.

Conditions a) and b) can be met if the gauge and work are allowed sufficient time to reach equal temperature with the surroundings after being arranged in the measuring position.

If the measurement can be carried out on the surface of a large mass, e.g. a surface plate, then temperature equalisation will be fairly rapid, as heat will be conducted away from the work and gauge but will not contribute any significant temperature change to the plate.

A component having a coefficient of linear expansion significantly different from that of the gauge may be said to be correct to size only at a given temperature.

Parallax effect:

On most dials the indicating finger or pointer lies in a plane parallel to the scale, but displaced a small distance away from it to allow free movement of the pointer. It is then essential to observe the pointer along a line normal to the scale, otherwise a reading error will occur. This effect is shown in the figure, where a dial with its pointer set at zero is observed from three positions. From position 1, i.e. from the left, the pointer appears to indicate some value to the right of zero; from position 2, some value slightly to the left of zero; only at position 3 does the pointer coincide with zero on the scale. Rules and micrometer thimbles are bevelled to reduce this effect. On dials the pointer may be arranged to lie in the same plane as the scale, completely eliminating parallax, or a silvered reflector may be incorporated in the scale so that the line between the eye and the pointer is normal to the scale only when the pointer obscures its own image in the reflector.

Classification of methods of measurements.

In precision measurements various methods of measurement are followed, depending upon the accuracy required and the amount of permissible error.

There are numerous ways in which a quantity can be measured. Any method of measurement should be defined in such detail, and followed by such standard practice, that there is little scope for uncertainty. The nature of the procedure in some of the most common measurements is described below. Actual measurements may employ one or more combinations of the following.

(i) Direct method of measurement: In this method the value of a quantity is obtained directly by comparing the unknown with the standard. It involves no mathematical calculation to arrive at the result, for example, measurement of length by a graduated scale. The method is not very accurate because it depends on the limitations of human judgement.

(ii) Indirect method of measurement: In this method several parameters (to which the quantity to be measured is linked with) are measured directly and then the value is determined by mathematical relationship. For example, measurement of density by measuring mass and geometrical dimensions.

(iii) Fundamental method of measurement: Also known as the absolute method of measurement, it is based on the measurement of the base quantities used to define the quantity. For example, measuring a quantity directly in accordance with the definition of that quantity, or measuring a quantity indirectly by direct measurement of the quantities linked with the definition of the quantity to be measured.

(iv) Comparison method of measurement: This method involves comparison with either a known value of the same quantity or another quantity which is a function of the quantity to be measured.

(v) Substitution method of measurement: In this method, the quantity to be measured is measured by direct comparison on an indicating device, by replacing the measured quantity with some other known quantity which produces the same effect on the indicating device. For example, determination of mass by the Borda method.

(vi) Transposition method of measurement: This is a method of measurement by direct comparison in which the value of the quantity to be measured is first balanced by an initial known value A of the same quantity; next the value of the quantity to be measured is put in the place of that known value and is balanced again by a second known value B. When the balance indicating device gives the same indication in both cases, the value of the quantity to be measured is √(A·B). For example, determination of a mass by means of a balance and known weights, using the Gauss double weighing method.

(vii) Differential or comparison method of measurement: This method involves measuring the difference between the given quantity and a known master of near about the same value. For example, determination of diameter with master cylinder on a comparator.

(viii) Coincidence method of measurement: In this differential method of measurement the very small difference between the given quantity and the reference is determined by the observation of the coincidence of scale marks. For example, measurement with a vernier caliper.

(ix) Null method of measurement: In this method the quantity to be measured is compared with a known source and the difference between these two is made zero.

(x) Deflection method of measurement: In this method, the value of the quantity is directly indicated by deflection of a pointer on a calibrated scale.

(xi) Interpolation method of measurement: In this method, the given quantity is compared with two or more known values of about the same magnitude, ensuring at least one smaller and one bigger than the quantity to be measured, and the reading is interpolated.

(xii) Extrapolation method of measurement: In this method, the given quantity is compared with two or more known smaller values, and the reading is extrapolated.

(xiii) Complementary method of measurement: This is the method of measurement by comparison in which the value of the quantity to be measured is combined with a known value of the same quantity, so adjusted that the sum of these two values is equal to a predetermined comparison value. For example, determination of the volume of a solid by liquid displacement.

(xiv) Composite method of measurement: It involves the comparison of the actual contour of the component to be checked with its contours at the maximum and minimum tolerable limits. This method provides for checking the cumulative errors of the interconnected elements of the component, which are controlled through a combined tolerance. It is the most reliable method to ensure interchangeability, and is usually effected through the use of composite GO gauges, for example, checking the thread of a nut with a screw plug GO gauge.

(xv) Element method: In this method, the several related dimensions are gauged individually, i.e., each component element is checked separately. For example, in the case of thread, the pitch diameter, pitch, and flank angle are checked separately and then the virtual pitch diameter is calculated. It may be noted that value of virtual pitch diameter depends on the deviations of the above thread elements. The functioning of thread depends on virtual pitch diameter lying within the specified tolerable limits.

In the case of the composite method, all three elements need not be checked separately; it is thus useful for checking product parts. The element method is used for checking tools and for detecting the causes of rejects in production.

(xvi) Contact and contactless methods of measurement: In contact methods of measurement, the measuring tip of the instrument actually touches the surface to be measured. In such cases, arrangements for constant contact pressure should be provided in order to prevent errors due to excess contact pressure. In contactless methods of measurement, no contact is required. Such instruments include the tool maker's microscope and the projection comparator, etc.
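The transposition method (vi) above can be illustrated numerically. With unequal balance arms, the two counterweights A and B bracket the unknown, and the true value is their geometric mean √(A·B). The arm lengths below are hypothetical, chosen only to show that the arm inequality cancels:

```python
# Sketch of the Gauss double-weighing idea behind the transposition method.
# The arm lengths a and b are hypothetical illustration values.
import math

a, b = 1.00, 1.03        # unequal balance arm lengths (hypothetical)
X = 50.0                 # true unknown mass, g

A = X * a / b            # counterweight balancing X on the first pan
B = X * b / a            # counterweight balancing X on the other pan

# A*B = X^2 regardless of a and b, so the geometric mean recovers X exactly.
print(round(math.sqrt(A * B), 6))  # 50.0
```

Taking the arithmetic mean (A + B)/2 instead would leave a small residual error of order (b/a - 1)², which is why the geometric mean is used.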

For every method of measurement, a detailed definition of the equipment to be used, a sequential list of operations to be performed, the surrounding environmental conditions, and descriptions of all factors influencing accuracy of measurement at the required level must be prepared and followed.

Metrological characteristics of measuring instruments.

Measuring instruments are usually specified by their metrological properties, such as range of measurement, scale graduation value, scale spacing, sensitivity and reading accuracy.

Range of Measurement: It indicates the size values between which measurements may be made on the given instrument.

Scale range: It is the difference between the values of the measured quantities corresponding to the terminal scale marks.

Instrument range: It is the capacity or total range of values which an instrument is capable of measuring. For example, a micrometer screw gauge with capacity of 25 to 50mm has instrument range of 25 to 50mm but scale range is 25mm.

Scale Spacing: It is the distance between the axes of two adjacent graduations on the scale. Most instruments have a constant value of scale spacing throughout the scale. Such scales are said to be linear.

In case of non linear scales, the scale spacing value is variable within the limits of the scale.

Scale Division Value: It is the value of the measured quantity corresponding to one division of the instrument, e.g. for an ordinary scale, the scale division value is 1 mm. As a rule, the scale division should not be smaller in value than the permissible indication error of the instrument.

Sensitivity (Amplification or gearing ratio): It is the ratio of the scale spacing to the scale division value. It can also be expressed as the ratio of the product of all the larger lever arms to the product of all the smaller lever arms. It is the property of a measuring instrument to respond to changes in the measured quantity.

Sensitivity Threshold: It is defined as the minimum measured value which may cause any movement whatsoever of the indicating hand. It is also called the discrimination or resolving power of an instrument and is the minimum change in the quantity being measured which produces a perceptible movement of the index.

Reading Accuracy: It is the accuracy that may be attained in using a measuring instrument.

Reading Error: It is defined as the difference between the reading of the instrument and the actual value of the dimension being measured.

Accuracy of observation: It is the accuracy attainable in reading the scale of an instrument. It depends on the quality of the scale marks, the width of the pointer/index, the space between the pointer and the scale, the illumination of the scale, and the skill of the inspector. The width of a scale mark is usually kept one tenth of the scale spacing for accurate reading of indications.

Parallax: It is the apparent change in the position of the index relative to the scale marks when the scale is observed in a direction other than perpendicular to its plane.

Repeatability: It is the variation of indications in repeated measurements of the same dimension. The variations may be due to clearances, friction and distortions in the instrument's mechanism. Repeatability represents the reproducibility of the readings of an instrument when a series of measurements is carried out under fixed conditions of use.

Measuring force: It is the force produced by an instrument and acting upon the measured surface in the direction of measurement. It is usually developed by springs whose deformation and pressure change with the displacement of the instrument's measuring spindle.

Systematic error and random error.

For statistical study and the study of accumulation of errors, errors are categorized as controllable errors and random errors.

(a) Systematic or controllable errors:

Systematic errors are sometimes described as experimental mistakes. These are controllable in both magnitude and sense. They can be determined and reduced if attempts are made to analyse them; however, they cannot be revealed by repeated observations. These errors either have a constant value or a value changing according to a definite law. They can be due to:

1. Calibration Errors: The actual length of standards such as slip gauges and engraved scales varies from the nominal value by a small amount. Sometimes instrument inertia and hysteresis effects do not let the instrument translate with complete fidelity. Signal transmission errors, such as a drop in voltage along the wires between the transducer and the electric meter, often occur. For high-order accuracy these variations are significant, and calibration curves must be used to minimize them.

2. Ambient Conditions: Variations in the ambient conditions from the internationally agreed standard values of 20 °C temperature, 760 mm of mercury barometric pressure, and 10 mm of mercury vapour pressure can give rise to errors in the measured size of the component. Temperature is by far the most significant of these ambient conditions, and due correction is needed to obtain error-free results.
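The temperature correction described above can be sketched in a few lines of Python. This is a hedged illustration only: the expansion coefficient and part sizes below are assumed typical values, not figures from the text.

```python
# Hedged sketch: error introduced by measuring away from the 20 °C
# reference temperature. ALPHA_STEEL is a typical value assumed for
# illustration only.

ALPHA_STEEL = 11.5e-6  # per °C, approximate linear expansion coefficient

def temperature_error(length_mm, temp_c, alpha=ALPHA_STEEL, ref_temp_c=20.0):
    """Length change of a part at temp_c relative to its size at 20 °C."""
    return length_mm * alpha * (temp_c - ref_temp_c)

# A 100 mm steel part measured at 25 °C is longer by roughly 0.006 mm:
err = temperature_error(100.0, 25.0)
```

Applying the correction (subtracting this error from the observed size) brings the reading back to the 20 °C reference condition.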

3. Stylus Pressure: The error induced by stylus pressure is also appreciable. Whenever a component is measured under a definite stylus pressure, both deformation of the workpiece surface and deflection of the workpiece shape will occur.

4. Avoidable Errors: These errors include the errors due to parallax and the effect of misalignment of the workpiece centre. Instrument location errors such as placing a thermometer in sunlight when attempting to measure air temperature also belong to this category.

5. Experimental arrangement being different from that assumed in theory.

6. Incorrect theory i.e., the presence of effects not taken into account.

(b) Random Errors:

These occur randomly and the specific causes of such errors cannot be determined, but likely sources of this type of error are small variations in the position of the setting standard and workpiece, slight displacement of lever joints in the measuring instrument, transient fluctuations in friction in the measuring instrument, and operator errors in reading scale-and-pointer type displays or in reading engraved scale positions.

Characteristics of random errors:

The various characteristics of random errors are:

These are due to a large number of unpredictable and fluctuating causes that cannot be controlled by the experimenter. Hence they are sometimes positive and sometimes negative, and of variable magnitude. Accordingly, they are revealed by repeated observations.

These are caused by friction and play in the instrument's linkages, by estimation of the reading when judging the fractional part of a scale division, by errors in positioning the measured object, etc.

These are variable in magnitude and sign and are introduced by the very process of observation itself.

The frequency of the occurrence of random errors depends on the occurrence probability for different values of random errors.

Random errors show up as various indication values within the specified limits of error in a series of measurements of a given dimension.

The probability of occurrence is equal for positive and negative errors of the same absolute value since random errors follow normal frequency distribution.

Random errors of larger absolute value are rarer than those of smaller values.

The arithmetic mean of random errors in a given series of measurements approaches zero as the number of measurements increases.

For each method of measurement, random errors do not exceed a certain definite value. Errors exceeding this value are regarded as gross errors (errors which greatly distort the results and need to be ignored).

The most reliable value of the size being sought in a series of measurements is the arithmetic mean of the results obtained.

The main characteristic of random errors, which is used to determine the maximum measuring error, is the standard deviation.

The maximum error for a given method of measurement is determined as three times the standard deviation.

The maximum error determines the spread of possible random error values.

The standard deviation and the maximum error determine the accuracy of a single measurement in given series.
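The statistical rules listed above can be sketched in Python. The readings below are invented illustrative values, not data from the text:

```python
# Hedged sketch: mean, standard deviation, and the 3-sigma maximum error
# for a series of repeated measurements, per the characteristics above.
import statistics

readings_mm = [25.012, 25.008, 25.010, 25.011, 25.009, 25.010]  # invented data

mean = statistics.mean(readings_mm)          # most reliable value of the size
sigma = statistics.pstdev(readings_mm)       # standard deviation of the series
max_error = 3 * sigma                        # maximum error for the method

# Readings deviating from the mean by more than the maximum error would be
# treated as gross errors and ignored:
gross_errors = [r for r in readings_mm if abs(r - mean) > max_error]
```

For this series the mean is 25.010 mm and no reading exceeds the 3-sigma limit, so there are no gross errors to discard.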

From the above, it is clear that systematic errors are those which are repeated consistently with repetition of the experiment, whereas Random Errors are those which are accidental and whose magnitude and sign cannot be predicted from knowledge of measuring system and conditions of measurement.

accuracy and precision and distinction between precision and accuracy.

The agreement of the measured value with the true value of the measured quantity is called accuracy. If the measurement of a dimension of a part approximates very closely to the true value of that dimension, it is said to be accurate. Thus the term accuracy denotes the closeness of the measured value to the true value. The difference between the measured value and the true value is the error of measurement. The smaller the error, the greater the accuracy.

Precision and Accuracy

Precision: The terms precision and accuracy are used in connection with the performance of an instrument. Precision is the repeatability of the measuring process. It refers to a group of measurements of the same characteristic taken under identical conditions. It indicates to what extent identically performed measurements agree with each other. If the instrument is not precise, it will give different (widely varying) results for the same dimension when it is measured again and again. The set of observations will scatter about the mean. The scatter of these measurements is designated as σ, the standard deviation, which is used as an index of precision. The less the scattering, the more precise the instrument. Thus, the lower the value of σ, the more precise the instrument.

Accuracy: Accuracy is the degree to which the measured value of the quality characteristic agrees with the true value. The difference between the true value and the measured value is known as error of measurement.

Distinction between Precision and Accuracy

Accuracy is very often confused with precision, though the two are quite different. The distinction between precision and accuracy will become clear from the following example. Several measurements are made on a component by different types of instruments (A, B and C respectively) and the results are plotted. In any set of measurements, the individual measurements are scattered about the mean, and the precision signifies how well the various measurements performed by the same instrument on the same quality characteristic agree with each other.

The difference between the mean of set of readings of the same quality characteristic and the true value is called as error. Less the error more accurate is the instrument.

Figure shows that instrument A is precise, since the results of a number of measurements are close to the average value. However, there is a large difference (error) between the true value and the average value; hence it is not accurate.

The readings taken by instrument B are scattered widely about the average value; hence it is not precise. However, it is accurate, as there is only a small difference between the average value and the true value.

Figure shows that instrument C is both accurate and precise.
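The distinction can also be expressed numerically. The instruments and readings in this sketch are invented for illustration:

```python
# Hedged sketch: accuracy = closeness of the mean to the true value (bias);
# precision = scatter (standard deviation) of repeated readings.
import statistics

TRUE_VALUE = 10.00  # mm, assumed true size of the component

instrument_a = [10.21, 10.22, 10.21, 10.23, 10.22]  # precise but not accurate
instrument_c = [10.00, 10.01, 9.99, 10.00, 10.01]   # precise and accurate

def evaluate(readings):
    """Return (error, scatter): bias from the true value and precision."""
    mean = statistics.mean(readings)
    return abs(mean - TRUE_VALUE), statistics.pstdev(readings)

error_a, scatter_a = evaluate(instrument_a)  # large error, small scatter
error_c, scatter_c = evaluate(instrument_c)  # small error, small scatter
```

Instrument A shows a small scatter but a large bias (precise, not accurate); instrument C shows both a small scatter and a small bias (precise and accurate).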

Factors affecting the accuracy of the measuring system.

The basic components of an accuracy evaluation are the five elements of a measuring system:

1. Factors affecting the calibration standards

2. Factors affecting the workpiece

3. Factors affecting the inherent characteristics of the instrument

4. Factors affecting the person, who carries out the measurements, and

5. Factors affecting the environment.

1. Factors affecting the standard. It may be affected by:

a. Coefficient of thermal expansion
b. Calibration interval
c. Stability with time
d. Elastic properties
e. Geometric compatibility

2. Factors affecting the Workpiece. These are:

a. Cleanliness, surface finish, waviness, scratch, surface defects etc.,

b. Hidden geometry,

c. Elastic properties,

d. Adequate datum on the workpiece

e. Arrangement of supporting workpiece

f. Thermal equalization etc.

3. Factors affecting the inherent characteristics of Instrument

a. Adequate amplification for accuracy objective
b. Scale error
c. Effect of friction, backlash, hysteresis, zero drift error
d. Deformation in handling or use, when heavy workpieces are measured
e. Calibration errors
f. Mechanical parts (slides, guide ways or moving elements)
g. Repeatability and readability
h. Contact geometry for both workpiece and standard

4. Factors affecting the person:

a. Training, skill
b. Sense of precision appreciation
c. Ability to select measuring instruments and standards
d. Sensible appreciation of measuring cost
e. Attitude towards personal accuracy achievements
f. Planning measurement techniques for minimum cost, consistent with precision requirements, etc.

5. Factors affecting the Environment:

a. Temperature, humidity etc.
b. Clean surroundings and minimum vibration enhance precision
c. Adequate illumination
d. Temperature equalization between standard, workpiece, and instrument
e. Thermal expansion effects due to heat radiation from lights, heating elements, sunlight and people
f. Manual handling may also introduce thermal expansion

Higher accuracy can be achieved only if all the sources of error due to the above five elements in the measuring system are analysed and steps are taken to eliminate them.

The above analysis of the five basic metrology elements can be summarized by the acronym SWIPE, for convenient reference.

Where, S STANDARD

W- WORKPIECE

I- INSTRUMENT

P - PERSON

E- ENVIRONMENT

Sensitivity ,Readability , Calibration , Repeatability

Sensitivity

Sensitivity may be defined as the rate of displacement of the indicating device of an instrument with respect to the measured quantity. In other words, the sensitivity of an instrument is the ratio of the scale spacing to the scale division value. For example, if on a dial indicator the scale spacing is 1.0 mm and the scale division value is 0.01 mm, then the sensitivity is 100. It is also called the amplification factor or gearing ratio.

If we now consider sensitivity over the full range of instrument readings with respect to the measured quantity, as shown in the figure, the sensitivity at any value of y is dy/dx, where dx and dy are increments of x and y taken over the full instrument scale; that is, the sensitivity is the slope of the curve at any value of y.

The sensitivity may be constant or variable along the scale. In the first case we get linear transmission, and in the second, non-linear transmission.

Sensitivity refers to the ability of a measuring device to detect small differences in the quantity being measured. Highly sensitive instruments may, however, drift due to thermal or other effects, so an instrument of lower sensitivity is sometimes preferred for stable indications.
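The ratio definition above can be written as a one-line computation; the dial-indicator numbers repeat the example given earlier in the text:

```python
# Minimal sketch of sensitivity (amplification) = scale spacing / division value.

def sensitivity(scale_spacing_mm, division_value_mm):
    """Amplification or gearing ratio of an instrument."""
    return scale_spacing_mm / division_value_mm

# Dial indicator from the text: 1.0 mm spacing, 0.01 mm per division -> 100
dial_sensitivity = sensitivity(1.0, 0.01)
```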

Readability

Readability refers to the ease with which the readings of a measuring instrument can be read. It is the susceptibility of a measuring device to have its indications converted into meaningful numbers. Fine and widely spaced graduation lines ordinarily improve readability. If the graduation lines are very finely spaced, the scale will be more readable with a microscope, but readability with the naked eye will be poor.

To make micrometers more readable, they are provided with a vernier scale. Readability can also be improved by using magnifying devices.

Calibration:

The calibration of any measuring instrument is necessary to measure the quantity in terms of a standard unit. It is the process of framing the scale of the instrument by applying standardized signals. Calibration is a pre-measurement process, generally carried out by manufacturers.

It is carried out by making adjustments such that the readout device produces zero output for zero measured input. Similarly, it should display an output equivalent to a known measured input near the full-scale input value.

The accuracy of the instrument depends upon its calibration. Constant use of instruments affects their accuracy. If accuracy is to be maintained, the instruments must be checked and recalibrated if necessary. The schedule of such calibration depends upon the severity of use, environmental conditions, accuracy of measurement required, etc. As far as possible, calibration should be performed under environmental conditions which are very close to the conditions under which the actual measurements are carried out. If the output of a measuring system is linear and repeatable, it can be easily calibrated.

Repeatability,

It is the ability of the measuring instrument to repeat the same results when measurements of the same quantity are carried out:

by the same observer

With the same instrument

Under the same conditions.

Without any change in location.

line standard and end standard measurements and their characteristics.

Line and End Measurements

A length may be measured as the distance between two lines or as the distance between two parallel faces. Accordingly, the instruments for direct measurement of linear dimensions fall into two categories:

1. Line standards

2. End standards

Line standards: When the length is measured as the distance between the centres of two engraved lines, it is called a line standard. Both material standards, the yard and the metre, are line standards. The most common example of line measurement is the rule, with divisions shown as lines marked on it.

Characteristics of Line Standard

1. Scales can be accurately engraved, but the engraved lines themselves possess thickness, so it is not possible to take measurements with high accuracy.

2. A scale is quick and easy to use over a wide range.

3. The scale markings are not subjected to wear. However, the leading ends are subjected to wear, and this may lead to undersize measurements.

4. A scale does not possess a built-in datum. Therefore it is not possible to align the scale with the axis of measurement.

5. Scales are subjected to parallax error.

6. Also, the assistance of magnifying glass or microscope is required if sufficient accuracy is to be achieved.

End standards: When length is expressed as the distance between two flat parallel faces, it is known as an end standard. Examples: measurement by slip gauges, end bars, the ends of micrometer anvils, vernier calipers, etc. The end faces are hardened, lapped flat and parallel to a very high degree of accuracy.

Characteristics of End Standards:

1. These standards are highly accurate and used for measurement of close tolerance in precision engineering as well as in standard laboratories, tool rooms, inspection departments etc.

2. They require more time for measurements and measure only one dimension at a time.

3. They are subjected to wear on their measuring faces.

4. Group of slips can be wrung together to build up a given size; faulty wringing and careless use may lead to inaccurate results.

5. End standards have a built-in datum, since their measuring faces are flat and parallel and can be positively located on a datum surface.

6. They are not subjected to parallax effect as their use depends on feel.

The accuracy of both these standards is affected by temperature change, and both are originally calibrated at 20 °C. It is also necessary to take the utmost care in their manufacture to ensure that the change of shape with time (secular change) is reduced to a negligible amount.

line and end standard measurements:

Comparison between line standards and End Standards:

1. Principle: Line standard - length is expressed as the distance between two lines. End standard - length is expressed as the distance between two flat parallel faces.

2. Accuracy: Line standard - limited to about ±0.2 mm; for high accuracy, scales have to be used in conjunction with a magnifying glass or microscope. End standard - highly accurate for measurement of close tolerances up to ±0.001 mm.

3. Ease and time of measurement: Line standard - measurement is quick and easy. End standard - use requires skill and is time consuming.

4. Effect of wear: Line standard - scale markings are not subject to wear; however, significant wear may occur on the leading ends, so it may be difficult to assume the zero of the scale as datum. End standard - subjected to wear on the measuring surfaces.

5. Alignment: Line standard - cannot be easily aligned with the axis of measurement. End standard - can be easily aligned with the axis of measurement.

6. Manufacture and cost: Line standard - simple to manufacture at low cost. End standard - manufacturing process is complex and cost is high.

7. Parallax effect: Line standard - subjected to parallax error. End standard - not subjected to parallax error.

8. Examples: Line standard - scale (yard, metre, etc.). End standard - slip gauges, end bars, vernier calipers, micrometers, etc.

Geometric dimensioning and tolerancing

Geometric dimensioning and tolerancing (GD&T) is used to define the nominal geometry of parts and assemblies, the allowable variation in form and possibly size of individual features, and the allowable variation between features. Dimensioning and tolerancing and geometric dimensioning and tolerancing specifications are used as follows:

Dimensioning specifies the nominal, as-modeled or as-intended geometry. One example is a basic dimension.

Tolerancing specifies the allowable variation for the form and possibly the size of individual features, and the allowable variation in orientation and location between features. Two examples are linear dimensions and feature control frames using a datum reference.

There are several standards available worldwide that describe the symbols and the rules used in GD&T. One such standard is American Society of Mechanical Engineers (ASME) Y14.5M-1994. This article is based on that standard, but other standards, such as those from the International Organization for Standardization (ISO), may vary slightly. The Y14.5M standard has the advantage of providing a fairly complete set of standards for GD&T in one document. The ISO standards, in comparison, typically only address a single topic at a time. There are separate standards that provide the details for each of the major symbols and topics below (e.g. position, flatness, profile, etc)

Dimensioning and tolerancing philosophy

According to the ASME Y14.5M-1994 standard, the purpose of geometric dimensioning and tolerancing (GD&T) is to describe the engineering intent of parts and assemblies. This is not a completely correct explanation of the purpose of GD&T or of dimensioning and tolerancing in general.

The purpose of GD&T is more accurately described as defining the geometric requirements for part and assembly geometry. Proper application of GD&T will ensure that the allowable part and assembly geometry defined on the drawing leads to parts that have the desired form and fit (within limits) and function as intended.

There are some fundamental rules that need to be applied (these can be found on page 4 of the 1994 edition of the standard):

All dimensions must have a tolerance. Every feature on every manufactured part is subject to variation, therefore, the limits of allowable variation must be specified. Plus and minus tolerances may be applied directly to dimensions or applied from a general tolerance block or general note. For basic dimensions, geometric tolerances are indirectly applied in a related Feature Control Frame. The only exceptions are for dimensions marked as minimum, maximum, stock or reference.

Dimensioning and tolerancing shall completely define the nominal geometry and allowable variation. Measurement and scaling of the drawing is not allowed except in certain cases.

Engineering drawings define the requirements of finished (complete) parts. Every dimension and tolerance required to define the finished part shall be shown on the drawing. If additional dimensions would be helpful, but are not required, they may be marked as reference.

Dimensions should be applied to features and arranged in such a way as to represent the function of the features.

Descriptions of manufacturing methods should be avoided. The geometry should be described without explicitly defining the method of manufacture.

If certain sizes are required during manufacturing but are not required in the final geometry (due to shrinkage or other causes) they should be marked as non-mandatory.

All dimensioning and tolerancing should be arranged for maximum readability and should be applied to visible lines in true profiles.

When geometry is normally controlled by gage sizes or by code (e.g. stock materials), the dimension(s) shall be included with the gage or code number in parentheses following or below the dimension.

Angles of 90° are assumed when lines (including centre lines) are shown at right angles, but no angular dimension is explicitly shown. (This also applies to other orthogonal angles of 0°, 180°, 270°, etc.)

Dimensions and tolerances are valid at 20 °C unless stated otherwise.

Unless explicitly stated, all dimensions and tolerances are valid when the item is in a free state.

Dimensions and tolerances apply to the full length, width, and depth of a feature.

Dimensions and tolerances only apply at the level of the drawing where they are specified. It is not mandatory that they apply at other drawing levels, unless the specifications are repeated on the higher level drawing(s).

Geometric tolerancing reference chart

The reference chart lists, for each geometric characteristic, its symbol and a set of attributes: whether it can be applied to a feature, can be applied to a feature of size, can affect virtual condition, uses a datum reference, can use a material condition modifier, and can be affected by a bonus or shift tolerance. (The symbol and attribute columns were graphics in the original and are not reproduced here.) The characteristics, grouped by tolerance type, are:

Form: straightness, flatness, circularity, cylindricity
Profile: profile of a line, profile of a surface
Orientation: perpendicularity, angularity, parallelism
Location: symmetry, positional tolerance, concentricity
Runout: circular runout, total runout

Tolerance Frame with Symbol identifications

Indication of datum

GD&T data exchange

Exchange of geometric dimensioning and tolerancing (GD&T) information between CAD systems is available at different levels of fidelity for different purposes:

In the early days of CAD exchange only lines, texts and symbols were written into the exchange file. A receiving system could only display them on the screen or print them out, but only a human could interpret them.

GD&T presentation: At the next higher level, the presentation information is enhanced by grouping elements together into callouts for a particular purpose, e.g. a datum feature callout and a datum reference frame. There is also information about which of the curves in the exchange file are leader, projection or dimension curves, and which are used to form the shape of the product.

GD&T representation: Unlike GD&T presentation, GD&T representation does not deal with how the information is presented to the user, but only with which element of the shape of a product has which GD&T characteristic. A system supporting GD&T representation may display the GD&T information in a tree and other dialogs, and allow the user to directly select and highlight the corresponding feature on the shape of the product, in 2D and 3D.

Ideally both GD&T presentation and representation are available in the exchange file and are associated with each other. Then a receiving system can allow a user to select a GD&T callout and get the corresponding feature highlighted on the shape of the product.

An enhancement of GD&T representation is defining a formal language for GD&T (similar to a programming language) which also has built-in rules and restrictions for proper GD&T usage. This is still a research area.

UNIT II Linear and Angular Measurement

PART A

purpose of Hook rules.

Hook rules are used to make accurate measurements from a shoulder, step, or edge of a workpiece. They may be used to measure flanges and circular pieces, and for setting inside calipers to a dimension.

short length rule.

Short length rules are useful in measuring small openings and hard to reach locations where ordinary rules cannot be used.

how accurate measurement can be made if the end of the rule is worn.

In the case of a worn rule, measurement can be made by placing the 1 cm graduation in line with the edge of the work, taking the reading, and subtracting 1 cm from the final reading.

rule used as a straight edge

The edges of a steel rule are ground flat. The edge of the rule is placed on the work surface, which is then held up to the light. Inaccuracies as small as 0.02 mm may easily be seen by this method.

two types of outside caliper.

1. Spring joint caliper

2. Firm joint caliper

why it is dangerous to measure work while it is revolving, and how an outside caliper should be held when measuring work

An attempt to measure the work while it is revolving would result in an accident and any measurement taken will not be accurate.

The caliper should be held tightly between the thumb and forefinger in order to get the most accurate measurement. The caliper must be held at right angles to the work.

purposes of inside caliper

Inside calipers are used to measure the diameter of holes, or the width of keyways and slots.

two uses of a surface plate.

1. As a datum reference plane for marking out or inspection.

2. To check the flatness of another surface.

the materials used for surface plates and the advantages of those materials compared with cast iron.

1) Cast iron 2) Granite 3) Glass 4) Non-metallic substances

1) Granite and glass plates of the same depth are more rigid than cast iron plates.

2) Damage to these surfaces causes an indentation and does not throw up a projecting burr.

3) Corrosion is virtually absent.

4) It is easier to slide metallic articles, such as height gauges and squares, on their surfaces.

Cast iron is a preferred material for surface plates and tables because:

1. It is self-lubricating, and equipment slides on its working surface with a pleasant feel.

2. It is easy to provide complex shape of stiffening ribs.

3. It is a stable and rigid metal, and relatively inexpensive.

4. It is easily machined and scraped to an accurate plane surface.

V-Blocks are generally bought in pairs

V-blocks are manufactured in pairs so that long components can be supported parallel to the datum surface, and for this reason they must always be bought and kept as a pair.

the accuracy of a Vernier Caliper

Vernier calipers are normally available with a measuring accuracy of 0.02 mm.

advantage of a vernier depth gauge as compared to micrometer depth gauge

The vernier depth gauge has a longer scale than a micrometer depth gauge and does not require length bars for measuring large depths.

purpose of the dial test indicator in this application of the vernier height gauge and need for a datum surface

The dial gauge is used to remove errors due to feel and to maintain constant pressure during measurement. The datum is required because the reading of the vernier height gauge starts from the base.

two usual methods of testing the accuracy of a micrometer.

The first method is to check whether the zero line on the thimble coincides with the centre (index) line on the sleeve. If it does coincide, the micrometer is correct.

In the second method, a standard or a gauge block is measured with the micrometer. The reading of the micrometer must be the same as the size of the standard or gauge block.

two types of dial indicator

1. Those with a linear moving plunger, called the plunger type.

2. Those with an angular moving stylus, called the lever type.

"magnification" of a dial indicator

The magnification of a dial indicator is the ratio of the movement of the pointer to the movement of the dial indicator stem.

As an example, suppose the end of the pointer traverses a circle of diameter 21 mm per revolution, and a full revolution of the pointer (0-100 divisions) represents a stem movement of 100 x 0.01 mm = 1 mm. Then

Magnification = (π x 21) / (100 x 0.01)

= 66 to 1
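The arithmetic can be checked with a short sketch that reproduces the worked example above:

```python
# Hedged sketch: dial-indicator magnification = pointer travel / stem travel.
import math

pointer_circle_dia_mm = 21.0   # diameter of the circle swept by the pointer tip
divisions = 100                # one full revolution of the pointer
division_value_mm = 0.01       # stem movement per division

pointer_travel = math.pi * pointer_circle_dia_mm   # about 66 mm per revolution
stem_travel = divisions * division_value_mm        # 1 mm per revolution

magnification = pointer_travel / stem_travel       # about 66 to 1
```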

important feature of slip gauges which makes them of considerable importance in engineering measurement.

The important feature is the geometric accuracy of the opposing gauging surfaces. The accuracy of flatness enables slip gauges to be wrung to each other to make up a specified length. They can also be wrung to surfaces whose accuracy is of the same order as the slip gauges. The thickness of the wringing films can be discounted in comparison with the overall size of the slip gauge pile. The accuracy is not only that of flatness, but includes parallelism and length. Combinations of slip gauges produce end standards whose length, flatness and parallelism are of a higher order of accuracy.

Where will you support on end bar of 200mm length

The supports should be 0.577l apart and equidistant from the ends as shown in fig. If l = 200mm, the support distance should be 0.577 × 200 = 115.4mm. The distance of each support from the respective end is

(200 - 115.4)/2 = 42.3mm
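The support arithmetic above can be sketched as a small helper (function name is illustrative only):

```python
def end_bar_supports(length_mm, factor=0.577):
    """Return (support spacing, distance of each support from the bar ends)
    for an end bar supported at two symmetric points, spacing = factor * l."""
    spacing = factor * length_mm
    from_end = (length_mm - spacing) / 2
    return spacing, from_end

spacing, from_end = end_bar_supports(200)
print(round(spacing, 1), round(from_end, 1))  # 115.4 42.3
```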

Comparator. A comparator is an instrument which enables a comparison to be made between the item being measured and a length standard.

surface gauge. It is also known as a height transfer gauge, which is used to check the accuracy or parallelism of surfaces, and to transfer measurements in layout work by scribing them on a vertical surface.

surface plate. It is an accurately machined flat casting or lapped granite block upon which the part to be checked and the surface-checking instruments are placed for obtaining some measure of the accuracy of a surface or the condition of finish.

toolmaker's flat.

It is a small plate which is lapped to a high degree of accuracy and is used for inspection of small parts with precision gauge blocks.

optical flats.

Optical flats are flat discs, usually made from natural quartz, with very accurately polished surfaces having light-transmitting quality. These are used in connection with interferometric measurements (the science of measuring with light waves) for testing of plane surfaces.

profilometer.

It is an instrument used for measuring surface roughness. It measures the number of roughness peaks in a unit traverse length above a preselected level by passing a fine tracing point over the surface.

characteristic advantages of mechanical indicators

1. Long measuring range: Mechanical indicators operating on the rack and pinion system have measuring ranges extending over several turns.

2. Small overall size: This property is of great help where space is confined or where several indicators have to be mounted at close distances to each other.

3. Positive contact and controlled measuring force.

4. Rugged construction: Ideally suited for operating machines where substantial vibrations are present. These are also less sensitive to inadvertently caused over travel.

5. Economical: Initial cost is low. They can be easily maintained and repaired in-plant at reasonable cost.

overall magnification or sensitivity of the system.

It is the ratio of scale movement to the corresponding change of dimension, and it is the product of the sensitivities of the measuring head, the pneumatic sensitivity and the indicator sensitivity.

advantages of differential type pneumatic comparators

The advantages of a differential type pneumatic comparator over ordinary pneumatic comparators are:

i) Small variations in supply pressure are compensated for by the differential pressure measurement.

ii) The differential pressure can be zeroed, using the adjusting valve, corresponding to a given mean size.

iii) Full range of scale of measuring device can be used.

different types of comparator.

1. Mechanical

2. Mechanical-optical

3. Electrical and Electronic

4. Optical

5. Pneumatic

6. Fluid displacement

7. Electro-Mechanical

8. Multi check

9. Auto gauging.

"Damping" of an instrument

The damping may be an inherent factor in the operation of a measuring instrument or it may deliberately be introduced as a feature in its design. An instrument is said to be damped when there is a progressive reduction in the amplitude or complete suppression of successive oscillations of the index after an abrupt change in the value of the measured quantity.

How the damping effect is achieved in the "Johansson Mikrokator"

In the Johansson Mikrokator the damping is provided by immersing a portion of the twisted band in a drop of oil in a split bush adjacent to the pointer, and also by perforating the strip as shown in fig.

"Magnification" as applied to a mechanical comparator

There are four methods of magnification used in comparators,

1. Mechanical Magnification

a)lever & radius arm

b)inclined plane (or) wedge

c)gear train

2. Optical Magnification

a)optical reflection by optical lever

b)optical projection for enlarging the images

3. Electrical Magnification

a)inductance bridge circuit

b)capacitance bridge circuit

4. Pneumatic Magnification

a)back pressure system

Usual range of magnification of a mechanical comparator

The usual range of magnification in mechanical comparators does not exceed ×500, because of the play in and the size of gears and levers.

The magnification to be changed to suit the work

The mechanical comparator Johansson Mikrokator is so designed as to allow easy change in magnification. The magnification can be changed by increasing or reducing the length of the cantilever spring.

An increased length reduces the force available to unwind the strip, thereby reducing magnification.

Specialty of a toolmaker's microscope as compared to an ordinary laboratory microscope

A toolmaker's microscope shows the object and its movements in their natural aspect and direction, instead of reversed as in the ordinary laboratory microscope.

Angle dekkor is less sensitive than an autocollimator

While an autocollimator incorporates a micrometer microscope, none is normally fitted in an angle dekkor, where the reflected image is viewed through an eyepiece only.

Sine bar with angle gauges

A sine bar is most often used in conjunction with slip gauges. It is not used with angle gauges.

The accuracy of a sine bar depends upon the following six factors.

1. Equality of size of rollers.

2. Centre distance of rollers.

3. Parallelism of rollers axes to each other.

4. Parallelism of roller axes to upper surface of bar.

5. Flatness of upper surface.

6. Equality of distance from roller centers to upper surface.
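A sine bar sets an angle through the relation sin θ = h/L, where h is the slip gauge height and L the roller centre distance. A minimal sketch (function name is illustrative):

```python
import math

def sine_bar_angle(slip_height_mm, centre_distance_mm):
    """Angle set by a sine bar: sin(theta) = h / L, returned in degrees."""
    return math.degrees(math.asin(slip_height_mm / centre_distance_mm))

# e.g. a 100 mm sine bar raised on a 50 mm slip gauge pile gives 30 degrees
print(round(sine_bar_angle(50, 100), 2))  # 30.0
```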

The three sources of error in angular rotation.

1. Eccentricity of rotation, which when considered separately is of sinusoidal form.

2. Error in the indexing mechanism: backlash, wear, etc.

3. Error in the plane of rotation is wobble.

The classification of Angular measurement.

1. Measurement of angular features on components or gauges.

2. Measurement of the angular rotation of a divided circle.

The advantages of photo electric autocollimator

1. These replace the judgments of the human eye with appropriate photoelectric systems.

2. Setting accuracy is increased and constant for all operators.

3. Remote readings (digital or analog) are possible.

Important rules for putting dimensions on drawings in respect of Tolerance

1. The dimension should be shown at place where it can be measured directly.

2. Considering the interchangeability of part all important dimensions with reference to the locating surface should be clearly marked.

3. Contradictory additive dimensions, which affect the actual location and interchangeability, should be avoided.

Nominal size and tolerance

A nominal size is ascribed to a part for general identification purposes. Thus a shaft may have a nominal size of 60mm, but for practical reasons this exact size cannot be manufactured without great cost. Hence, a certain tolerance or machining allowance must be added to it depending upon the intended application for which the part is to be used.

Taylors principle for the design of "Limit gauge".

The Taylor's principle for limit gauges can be divided into the following two statements.

1. "Go" gauges should inspect all the features of a component at a time and should be able to control the maximum metal limit, or in other words the maximum metal limit of as many related dimensions as possible should be incorporated in the "Go" gauge.

2. "Not - Go" gauge should check only one element at a time for the minimum metal limit.

Ways of measuring the angle of Taper.

1. Vernier bevel Protractor

2. Tool room microscope

3. Sine bar and dial gauge

4. Auto Collimator

5. Taper measuring machine

6. Roller, Slip gauge, and micrometer.

The objective of measurement of thread elements mention some important thread elements of linear measurement

The purpose of thread measurement is to ensure that the thread elements are within the tolerance limits in order to satisfy the conditions of the required fit.

The important thread elements which have linear measurement are,

1. Effective diameter

2. Major diameter

3. Minor diameter

4. Pitch

"best wire" size

The best wire size (diameter of the wire) is one such that its points of contact with the thread flanks are on the pitch line or effective diameter.
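For a symmetrical thread of included flank angle a, the best wire diameter follows from the geometry of contact at the pitch line: d = p / (2 cos(a/2)). A minimal sketch under that standard relation (function name is illustrative):

```python
import math

def best_wire_diameter(pitch_mm, included_angle_deg=60.0):
    """Best wire touches the flanks exactly at the pitch line:
    d = p / (2 * cos(a/2)); for a 60-degree thread d = 0.5774 * p."""
    half_angle = math.radians(included_angle_deg / 2)
    return pitch_mm / (2 * math.cos(half_angle))

# 60-degree metric thread of 1.5 mm pitch
print(round(best_wire_diameter(1.5), 3))  # 0.866
```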

The desirable qualities of good rule

1. Made from hardened and tempered spring steel.

2. Engine divided, that is, graduations should be precision engraved for accuracy and clarity.

3. Ground on the edges so that it can be used as straight-edge when scribing lines or testing a surface for flatness.

4. Satin chrome (or) matt finish so as to reduce glare and make it easier to read, and also to prevent corrosion.

End measuring and line measuring instruments.

End measuring Instrument - Slip gauge block, length bar

Line measuring Instrument - Engineer's Rule, Vernier Caliper, Micrometer.

Smallest graduation which can be clearly seen on a metric rule and on an inch rule

The smallest graduation on a metric rule is 0.5mm, while on an inch rule it is 1/64 inch.

Types of steel rules used in machine shop work.

1. Spring - tempered

2. Flexible type

3. Narrow type

4. Hook type.

Accurate measurement can be made if the end of the rule is worn.

Measurement can be made by placing the 1cm graduation in line with the edge of the work, taking the reading, and subtracting 1cm from the final reading.

Two types of outside Calipers.

1. Spring joint Caliper.

2. Firm joint Caliper.

Principle of Vernier Caliper.

The vernier contains a scale of length 9mm divided into 10 parts. The vernier scale is read in conjunction with the main scale, which is marked in divisions of 1mm; the vernier scale is thus marked in divisions of 9/10mm (i.e. 0.9mm). Hence it is possible to read the scale to (1.0 - 0.9)mm or 0.1mm. The accuracy of reading can be improved by refining the vernier scale, a typical size being 12mm divided into 25 graduations. The main scale graduation may also be changed from 1.0mm to 0.5mm. The smallest measurement which may then be conveniently read is

(0.5 - 12/25)mm = (0.5 - 0.48)mm = 0.02mm.
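Both least-count calculations above follow the same rule: one main-scale division minus one vernier division. A minimal sketch (function name is illustrative):

```python
def least_count(main_div_mm, vernier_span_mm, vernier_divisions):
    """Least count = one main-scale division minus one vernier division,
    where the vernier spans vernier_span_mm over vernier_divisions parts."""
    return main_div_mm - vernier_span_mm / vernier_divisions

# 9 mm vernier in 10 parts against a 1 mm main scale
print(round(least_count(1.0, 9.0, 10), 2))   # 0.1
# 12 mm vernier in 25 parts against a 0.5 mm main scale
print(round(least_count(0.5, 12.0, 25), 2))  # 0.02
```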

The main use of a vernier height gauge

The main use of a vernier height gauge is to measure (or) mark out components that require a high degree of dimensional accuracy.

PART B

Slip gauges

(i) Explain wringing of slip gauges.

(ii) Explain the classification of slip gauges.

Slip Gauges

Slip gauges or gauge blocks are the universally accepted end standard of length in industry. These were introduced by C.E. Johansson, a Swedish engineer, and are also called Johansson gauges.

Slip gauges are rectangular blocks of high grade steel with exceptionally close tolerances. These blocks are suitably hardened throughout to ensure maximum resistance to wear. They are then stabilized by heating and cooling successively in stages so that hardening stresses are removed.

After being hardened they are carefully finished by high grade lapping to a high degree of finish, flatness and accuracy. For successful use of slip gauges their working faces are made truly flat and parallel. Slip gauges are also made from tungsten carbide, which is extremely hard and wear resistant.

The cross-sections of these gauges are 30mm × 9mm for sizes up to 10mm and 35mm × 9mm for larger sizes. Any two slips, when perfectly clean, may be wrung together. The dimensions are permanently marked on one of the measuring faces of the gauge blocks.

Gauges blocks are used for:

(i) Direct precise measurement, where the accuracy of the work piece demands it.

(ii) For checking the accuracy of vernier calipers, micrometers, and other such measuring instruments.

(iii) Setting up a comparator to a specific dimension.

(iv) For measuring angle of work piece and also for angular setting in conjunction with a sine bar.

(v) The distances of plugs, spigots, etc. on fixture are often best measured with the slip gauges or end bars for large dimensions.

(vi) To check gap between parallel locations such as in gap gauges or between two mating parts.

There are many measurements which can be made with slip gauges either alone or in conjunction with other simple apparatus such as straight edges, rollers, balls sine bars etc.

Wringing of Slip Gauges

The success of precision measurement by slip gauges depends on the phenomenon of wringing. The slip gauges are wrung together by hand through a combined sliding and twisting motion. The gap between two wrung slips is only of the order of 0.00635 microns (6.35 × 10^-6 mm), which is negligible.

Procedure for Wringing

(i) Before using, the slip gauges are cleaned by using a lint free cloth, a chamois leather or a cleansing tissue.

(ii) One slip gauge is then oscillated slightly over the other gauge with a light pressure.

(iii) One gauge is then placed at 90° to the other using light pressure, and then it is rotated until the blocks are brought into line.

In this way the air is expelled from between the gauge faces, causing the gauge blocks to adhere. The adhesion is caused partly by molecular attraction and partly by atmospheric pressure. When two gauges are wrung in this manner, the length of the combination is exactly the sum of their individual dimensions. The wrung gauges can be handled as a unit without the need for clamping all the pieces together.

Indian Standard on Slip Gauges

According to IS: 2984-1966, the size of a slip gauge is defined as the distance l between two plane measuring faces, one face being constituted by the surface of an auxiliary body with which one of the slip gauge faces is wrung, and the other by the exposed face of the slip gauge. Generally the slip gauges are made from high grade steel with a coefficient of thermal expansion of (11.5 ± 1.5) × 10^-6 per degree Celsius between 10°C and 30°C. The slip gauges are hardened to more than 800 HV to make them wear resistant. IS: 2984 gives recommendations covering the manufacture of gauge blocks up to 90mm in length in five grades of accuracy.

Grade II. Grade II gauge blocks are workshop grade for rough checks. They are used for preliminary setting up of components where production tolerances are relatively wide; for positioning milling cutters and checking mechanical widths.

Grade I. Grade I gauge blocks are used for more precise work such as setting up sine bars, checking gap gauges and setting dial test indicators to zero.

Grade 0. These are inspection grade gauge blocks, used in tool room and inspection department for high accuracy work.

Grade OO. These gauges are placed in the standard room and used for highest precision work. Such as checking Grade I and Grade II slip gauges.

Calibration Grade. This is a special grade, with the actual size of the slips calibrated on a special chart supplied with a set. The chart must be referred while making up dimension.

The following two sets of slip gauges are in general use:

Normal set (M-45)

Range (mm)        Step (mm)   Pieces
1.001 to 1.009    0.001       9
1.01 to 1.09      0.01        9
1.1 to 1.9        0.1         9
1 to 9            1           9
10 to 90          10          9

Total 45 Pieces

Special set (M-87)

Range (mm)        Step (mm)   Pieces
1.001 to 1.009    0.001       9
1.01 to 1.49      0.01        49
0.5 to 9.5        0.5         19
10 to 90          10          9
1.005             -           1

Total 87 Pieces

The other sets available in metric units are: M112, M105, M50, M33 and M27. The sets M112 and M33 are as follows.

Set M112

Range (mm)        Step (mm)   Pieces
1.001 to 1.009    0.001       9
1.01 to 1.49      0.01        49
0.5 to 24.5       0.5         49
25 to 100         25          4
1.005             -           1

Total 112 Pieces

Set M33/2 (2mm based set)

Range (mm)        Step (mm)   Pieces
2.005             -           1
2.01 to 2.09      0.01        9
2.10 to 2.90      0.1         9
1 to 9            1           9
10 to 30          10          3
60                -           1
100               -           1

Total 33 Pieces
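When building a dimension from a set such as M-87, the usual manual procedure is to clear the smallest decimal place first. A simplified sketch of that procedure (the function is hypothetical; it assumes one block per stage suffices and ignores the lone 1.005 mm block):

```python
def build_stack(target_mm):
    """Pick slip gauges to clear the smallest decimal place first
    (M-87-style set: 1.001-1.009, 1.01-1.49, 0.5-9.5, 10-90 blocks)."""
    stack, rem = [], round(target_mm, 3)
    thousandths = round(rem * 1000) % 10            # cleared by a 1.001-1.009 block
    if thousandths:
        stack.append(round(1.0 + thousandths / 1000, 3))
        rem = round(rem - stack[-1], 3)
    sub_half = (round(rem * 100) % 50) / 100        # cleared by a 1.01-1.49 block,
    if sub_half:                                    # leaving a multiple of 0.5
        stack.append(round(1.0 + sub_half, 2))
        rem = round(rem - stack[-1], 2)
    sub_ten = round(rem % 10, 1)                    # cleared by a 0.5-9.5 block
    if sub_ten:
        stack.append(sub_ten)
        rem = round(rem - sub_ten, 1)
    if rem:
        stack.append(rem)                           # a 10-90 tens block
    return stack

print(build_stack(29.758))  # [1.008, 1.25, 7.5, 20.0]
```

For 29.758 mm the sketch reproduces the classic combination 1.008 + 1.25 + 7.5 + 20 mm.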

Limit gauges and the different types of limit gauges

Limit Gauges: Limit gauges are very widely used in industries. As there are two permissible limits of the dimension of a part, high and low, two gauges are needed to check each dimension of the part, one corresponding to the low limit of size and the other to the high limit of size of that dimension. These are known as GO and NO-GO gauges.

The difference between the sizes of these two gauges is equal to the tolerance on the work piece. The GO gauge checks the maximum metal limit (MML) and the NO-GO gauge checks the minimum metal limit (LML). In the case of a hole, the maximum metal limit is when the hole is as small as possible, that is, at the low limit of size. In the case of a hole, therefore, the GO gauge corresponds to the low limit of size, while the NO-GO gauge corresponds to the high limit of size. For a shaft, the maximum metal limit is when the shaft is on the high limit of size. Thus, in the case of a shaft, the GO gauge corresponds to the high limit of size and the NO-GO gauge corresponds to the low limit of size.

While checking, each of these two gauges is offered in turn to the work. A part is considered good if the GO gauge passes through or over the work and the NO-GO gauge fails to pass, for then the part is within the specified tolerance. If both gauges fail to pass, it indicates that the hole is undersize or the shaft is oversize. If both gauges pass, it means that the hole is oversize or the shaft is undersize.

Limit Plug Gauges

Gauges used for checking holes are called plug gauges. The GO plug gauge is made to the size of the low limit of the hole, while the NO-GO plug gauge is made to the size of the high limit of the hole.
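The GO/NO-GO decision logic for a hole can be sketched in a few lines (both function names and the 25.00/25.05 mm limits are illustrative, not from the source):

```python
def plug_gauge_sizes(hole_low_mm, hole_high_mm):
    """For a hole, the GO gauge is made to the low limit (maximum metal)
    and the NO-GO gauge to the high limit."""
    return {"GO": hole_low_mm, "NO-GO": hole_high_mm}

def inspect_hole(actual_dia_mm, go_mm, nogo_mm):
    """GO must enter the hole; NO-GO must not."""
    if actual_dia_mm < go_mm:
        return "undersize"   # GO gauge fails to enter
    if actual_dia_mm >= nogo_mm:
        return "oversize"    # NO-GO gauge also enters
    return "accept"

g = plug_gauge_sizes(25.00, 25.05)
print(inspect_hole(25.03, g["GO"], g["NO-GO"]))  # accept
```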

Types of Plug Gauges

1. Solid type. For sizes up to 10mm. (Refer Fig. 9.17)

2. Renewable type (Taper inserted type). For sizes over 10mm and up to 30mm. (Refer Fig. 9.18)

3. Fastened type:

(a) Double ended: For sizes over 30mm and up to 63mm

(b) Single-ended: For sizes over 63mm and up to 100mm (Refer Fig. 9.20).

4. Flat type. For sizes over 100mm and up to 250mm. (Refer Fig. 9.22).

Fig. 9.24

5. Progressive type. For relatively short through hole. It has both the ends on one side of the gauge as shown in Fig. 9.21.

6. Pilot plug gauge. To avoid jamming of the plug gauge inside the hole, a pilot groove type gauge (Fig. 9.25) may be used. In the pilot plug gauge there is first a small chamfer, then a narrow ring or pilot, its diameter being equal to that of the body of the gauge; the pilot behaves like an ellipse with respect to the hole. It touches at two points across the major axis, which is the diameter of the plug, on entering the hole. If the pilot enters the hole, it is sufficiently large for the rest of the gauge to enter. The chamfer behind the pilot lifts the gauge into line, making jamming impossible. The advantages of such a gauge are that the operator can work with less care and there is a saving in time.

Pilot Plug Gauge

7. Combined dual purpose limit gauge. A combined plug gauge combines both the GO and NO-GO dimensions in a single member. Thus a single gauge may be used to check both the upper and lower limits. It consists of a spherical end A of diameter equal to the lower limit. A spherical projection B on the outer edge of the spherical member (Refer Fig. 9.26) is arranged so that the distance between the spherical surface B and the diametrically opposite point on the spherical surface is equal to the maximum limit.

For checking the hole with the combined limit gauge, for the GO limit the gauge is inserted into the hole with the handle parallel to the axis of the hole. For checking the hole to the NO-GO limit, the gauge is tilted so that the spherical projection B is normal to the hole. The gauge in this position should not enter the hole.

The plug gauges are marked with the following on their handles for their identification:

(i) Nominal size,

(ii) Class of tolerance

(iii) The word Go on the Go side

(iv) The words NOGO (or Not- Go) on the Not-Go side

(v) The actual value of the tolerance

(vi) Manufacturers trade mark.

(vii) A red colour band near the Not-Go end to distinguish it from the Go end.

Snap, Gap or Ring Gauges

Snap gauges, gap gauges or ring gauges are used for checking shafts or male components. Snap gauges can be used for both cylindrical as well as non-cylindrical work, as compared to ring gauges which are conventionally used only for cylindrical work. The GO snap gauge is the size corresponding to the high limit of the shaft, while the NO-GO gauge corresponds to the low limit. Double-ended snap gauges can be conveniently used for checking sizes from 3mm to 100mm, and single-ended progressive type snap gauges are suitable for sizes from 100mm to 250mm. The gauging surfaces of the snap gauges are hardened up to 750 HV and are suitably stabilized, ground, and lapped. Ring gauges are available in two designs, GO and NO-GO. These are designated by GO or NO-GO as applicable, the nominal size, the tolerance of the work piece to be gauged, and the number of the relevant standard.

Adjustable Type Gap Gauges

In case of fixed gap gauges, no change can be made in the