
Dialysis

In medicine, dialysis (from Greek διάλυσις, dialusis, meaning "dissolution"; dia, meaning "through", and lysis, meaning "loosening or splitting") is a process for removing waste and excess water from the blood and is used primarily as an artificial replacement for lost kidney function in people with kidney failure.[1] Dialysis may be used for those with an acute disturbance in kidney function (acute kidney injury, previously acute renal failure) or progressive but chronically worsening kidney function, a state known as chronic kidney disease stage 5 (previously chronic renal failure or end-stage renal disease). The latter form may develop over months or years, but in contrast to acute kidney injury it is not usually reversible; dialysis is regarded as a "holding measure" until a kidney transplant can be performed, or sometimes as the only supportive measure in those for whom a transplant would be inappropriate.

Dialysis works on the principles of the diffusion of solutes and ultrafiltration of fluid across a semi-permeable membrane. Diffusion is a property of substances in water: they tend to move from an area of high concentration to an area of low concentration.[6] Blood flows on one side of a semi-permeable membrane, and a dialysate, or special dialysis fluid, flows on the opposite side. A semipermeable membrane is a thin layer of material that contains holes of various sizes, or pores. Smaller solutes and fluid pass through the membrane, but the membrane blocks the passage of larger substances (for example, red blood cells and large proteins). This replicates the filtering process that takes place in the kidneys, where the blood enters the kidneys and the larger substances are separated from the smaller ones in the glomerulus.[6]

The two main types of dialysis, hemodialysis and peritoneal dialysis, remove wastes and excess water from the blood in different ways.[2] Hemodialysis removes wastes and water by circulating blood outside the body through an external filter, called a dialyzer, that contains a semipermeable membrane. The blood flows in one direction and the dialysate flows in the opposite. The counter-current flow of the blood and dialysate maximizes the concentration gradient of solutes between the blood and dialysate, which helps to remove more urea and creatinine from the blood. The concentrations of solutes (for example potassium, phosphorus, and urea) are undesirably high in the blood, but low or absent in the dialysis solution, and constant replacement of the dialysate ensures that the concentration of undesired solutes is kept low on this side of the membrane. The dialysis solution has levels of minerals like potassium and calcium that are similar to their natural concentration in healthy blood. For another solute, bicarbonate, the dialysis solution level is set slightly higher than in normal blood, to encourage diffusion of bicarbonate into the blood, where it acts as a pH buffer to neutralize the metabolic acidosis that is often present in these patients. The levels of the components of dialysate are typically prescribed by a nephrologist according to the needs of the individual patient.
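To make the role of the concentration gradient concrete, here is a minimal Python sketch of diffusive flux across the membrane using Fick's first law; the diffusion coefficient and membrane thickness are assumed, round illustrative numbers rather than values from the text.

```python
# Minimal sketch: diffusive urea flux across a dialyzer membrane,
# Fick's first law J = D * (C_blood - C_dialysate) / thickness.
# Both constants below are assumed, order-of-magnitude values.
D_UREA_CM2_PER_S = 1.8e-5   # urea diffusion coefficient in water (approx.)
WALL_THICKNESS_CM = 2e-3    # assumed membrane wall thickness

def urea_flux(c_blood: float, c_dialysate: float) -> float:
    """Flux in mg/(cm^2*s) for concentrations given in mg/mL (= mg/cm^3)."""
    return D_UREA_CM2_PER_S * (c_blood - c_dialysate) / WALL_THICKNESS_CM

# Counter-current flow keeps the dialysate side dilute along the whole
# fiber, so the gradient (and hence the flux) stays large:
print(urea_flux(1.0, 0.0))   # fresh dialysate: maximal flux
print(urea_flux(1.0, 0.8))   # nearly equilibrated dialysate: flux falls 5x
```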

In peritoneal dialysis, wastes and water are removed from the blood inside the body using the peritoneum as a natural semipermeable membrane. Wastes and excess water move from the blood, across the peritoneal membrane, and into a special dialysis solution, called dialysate, in the abdominal cavity.

Types

There are three primary and two secondary types of dialysis: hemodialysis (primary), peritoneal dialysis (primary), hemofiltration (primary), hemodiafiltration (secondary), and intestinal dialysis (secondary).

Hemodialysis

In hemodialysis, the patient's blood is pumped through the blood compartment of a dialyzer, exposing it to a partially permeable membrane. The dialyzer is composed of thousands of tiny hollow synthetic fibers. The fiber wall acts as the semipermeable membrane. Blood flows through the fibers, dialysis solution flows around the outside of the fibers, and water and wastes move between these two solutions.[7] The cleansed blood is then returned via the circuit back to the body. Ultrafiltration occurs by increasing the hydrostatic pressure across the dialyzer membrane. This usually is done by applying a negative pressure to the dialysate compartment of the dialyzer. This pressure gradient causes water and dissolved solutes to move from blood to dialysate, and allows the removal of several litres of excess fluid during a typical 4-hour treatment. In the United States, hemodialysis treatments are typically given in a dialysis center three times per week (due in the United States to Medicare reimbursement rules); however, as of 2005 over 2,500 people in the United States were dialyzing at home more frequently for various treatment lengths.[8] Studies have demonstrated the clinical benefits of dialyzing 5 to 7 times a week, for 6 to 8 hours. This type of hemodialysis is usually called "nocturnal daily hemodialysis", and a study has shown significant improvement in both small and large molecular weight clearance and a decreased requirement for phosphate binders.[9] These frequent long treatments are often done at home while sleeping, but home dialysis is a flexible modality and schedules can be changed day to day, week to week. In general, studies have shown that both increased treatment length and frequency are clinically beneficial.[10]
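As a rough worked example of the fluid-removal arithmetic mentioned above, the sketch below computes the ultrafiltration rate implied by a fluid goal and a treatment length; the 3 L goal is an assumed, illustrative figure, not a clinical recommendation.

```python
# Minimal sketch: ultrafiltration (UF) rate for one hemodialysis session.
# UF rate (mL/h) = fluid to remove / treatment length. Values assumed.

def uf_rate_ml_per_hour(fluid_goal_liters: float, treatment_hours: float) -> float:
    return fluid_goal_liters * 1000.0 / treatment_hours

# "several litres of excess fluid during a typical 4-hour treatment":
print(uf_rate_ml_per_hour(3.0, 4.0))  # 750.0 mL/h
# Longer nocturnal sessions remove the same fluid at a gentler rate:
print(uf_rate_ml_per_hour(3.0, 8.0))  # 375.0 mL/h
```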

Hemodialysis was one of the most common procedures performed in U.S. hospitals in 2011, occurring in 909,000 stays (a rate of 29 stays per 10,000 population).[11]

Peritoneal dialysis

In peritoneal dialysis, a sterile solution containing glucose (called dialysate) is run through a tube into the peritoneal cavity, the abdominal body cavity around the intestine, where the peritoneal membrane acts as a partially permeable membrane. The peritoneal membrane or peritoneum is a layer of tissue containing blood vessels that lines and surrounds the peritoneal, or abdominal, cavity and the internal abdominal organs (stomach, spleen, liver, and intestines).[12] Diffusion and osmosis drive waste products and excess fluid through the peritoneum into the dialysate until the dialysate approaches equilibrium with the body's fluids. Then the dialysate is drained, discarded, and replaced with fresh dialysate.[13]

This exchange is repeated 4-5 times per day; automatic systems can run more frequent exchange cycles overnight. Peritoneal dialysis is less efficient than hemodialysis, but because it is carried out for a longer period of time the net effect in terms of removal of waste products and of salt and water is similar to hemodialysis. Peritoneal dialysis is carried out at home by the patient, often without help. This frees patients from the routine of having to go to a dialysis clinic on a fixed schedule multiple times per week. Peritoneal dialysis can be performed with little to no specialized equipment (other than bags of fresh dialysate).

Hemofiltration

Hemofiltration is a treatment similar to hemodialysis, but it makes use of a different principle. The blood is pumped through a dialyzer or "hemofilter" as in dialysis, but no dialysate is used. A pressure gradient is applied; as a result, water moves across the very permeable membrane rapidly, "dragging" along with it many dissolved substances, including ones with large molecular weights, which are not cleared as well by hemodialysis. Salts and water lost from the blood during this process are replaced with a "substitution fluid" that is infused into the extracorporeal circuit during the treatment.

Hemodiafiltration

Hemodiafiltration is a combination of hemodialysis and hemofiltration.

Intestinal dialysis

In intestinal dialysis, the diet is supplemented with soluble fibres such as acacia fibre, which is digested by bacteria in the colon. This bacterial growth increases the amount of nitrogen that is eliminated in fecal waste.[14][15][16] An alternative approach utilizes the ingestion of 1 to 1.5 liters of non-absorbable solutions of polyethylene glycol or mannitol every fourth hour.[17]

Laser applications

Many scientific, military, medical and commercial laser applications have been developed since the invention of the laser in 1958. The coherence, high monochromaticity, and ability to reach extremely high powers are all properties which allow for these specialized applications.

Spectroscopy

Most types of laser are an inherently pure source of light; they emit near-monochromatic light with a very well defined range of wavelengths. By careful design of the laser components, the purity of the laser light (measured as the "linewidth") can be improved more than the purity of any other light source. This makes the laser a very useful source for spectroscopy. The high intensity of light that can be achieved in a small, well collimated beam can also be used to induce a nonlinear optical effect in a sample, which makes techniques such as Raman spectroscopy possible. Other spectroscopic techniques based on lasers can be used to make extremely sensitive detectors of various molecules, able to measure molecular concentrations at the parts-per-10^12 (ppt) level. Due to the high power densities achievable by lasers, beam-induced atomic emission is possible: this technique is termed laser-induced breakdown spectroscopy (LIBS).

Heat Treatment

Heat treating with lasers allows selective surface hardening against wear with little or no distortion of the component. Because this eliminates much part reworking that is currently done, the laser system's capital cost is recovered in a short time. An inert, absorbent coating for laser heat treatment has also been developed that eliminates the fumes generated by conventional paint coatings during the heat-treating process with CO2 laser beams.

One consideration crucial to the success of a heat treatment operation is control of the laser beam irradiance on the part surface. The optimal irradiance distribution is driven by the thermodynamics of the laser-material interaction and by the part geometry.

Typically, irradiances between 500 and 5000 W/cm^2 satisfy the thermodynamic constraints and allow the rapid surface heating and minimal total heat input required. For general heat treatment, a uniform square or rectangular beam is one of the best options. For some special applications, or applications where the heat treatment is done on an edge or corner of the part, it may be better to have the irradiance decrease near the edge to prevent melting.
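To get a feel for those numbers, the sketch below relates beam power and spot size to irradiance, I = P / A; the 2 kW power and 1 cm x 0.5 cm spot are assumed, illustrative values.

```python
# Minimal sketch: irradiance of a uniform rectangular laser spot, I = P / A.
# Power and spot dimensions are assumed for illustration.

def irradiance_w_per_cm2(power_w: float, width_cm: float, height_cm: float) -> float:
    return power_w / (width_cm * height_cm)

# A 2 kW beam shaped into a uniform 1 cm x 0.5 cm rectangle:
print(irradiance_w_per_cm2(2000.0, 1.0, 0.5))  # 4000.0 W/cm^2, inside the
# 500-5000 W/cm^2 window cited above for rapid surface heating
```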

Lunar laser ranging

Main article: Lunar laser ranging experiment

When the Apollo astronauts visited the moon, they planted retroreflector arrays to make possible the Lunar Laser Ranging Experiment. Laser beams are focused through large telescopes on Earth aimed toward the arrays, and the time taken for the beam to be reflected back to Earth is measured to determine the distance between the Earth and Moon with high accuracy.
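The distance follows directly from the round-trip time: d = c * t / 2. A one-function sketch (the 2.56 s round trip is an assumed, typical value):

```python
# Minimal sketch: Earth-Moon distance from laser round-trip time.
C_M_PER_S = 299_792_458  # speed of light

def distance_km(round_trip_s: float) -> float:
    # Divide by 2: the measured time covers the trip out and back.
    return C_M_PER_S * round_trip_s / 2 / 1000

print(distance_km(2.56))  # ~383,700 km, near the mean lunar distance
```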

Photochemistry

Some laser systems, through the process of mode locking, can produce extremely brief pulses of light, as short as picoseconds or femtoseconds (10^-12 to 10^-15 seconds). Such pulses can be used to initiate and analyse chemical reactions, a technique known as photochemistry. The short pulses can be used to probe the process of the reaction at a very high temporal resolution, allowing the detection of short-lived intermediate molecules. This method is particularly useful in biochemistry, where it is used to analyse details of protein folding and function.

Laser barcode scanners

Main article: Barcode reader

Laser barcode scanners are ideal for applications that require high speed reading of linear codes or stacked symbols.

Laser cooling

A technique that has seen recent success is laser cooling. This involves atom trapping, a method in which a number of atoms are confined in a specially shaped arrangement of electric and magnetic fields. Shining particular wavelengths of laser light at the ions or atoms slows them down, thus cooling them. As this process is continued, they are all slowed and reach the same energy level, forming an unusual arrangement of matter known as a Bose–Einstein condensate.

Nuclear fusion

Some of the world's most powerful and complex arrangements of multiple lasers and optical amplifiers are used to produce extremely high intensity pulses of light of extremely short duration, e.g. at the Laboratory for Laser Energetics, the National Ignition Facility, GEKKO XII, the Nike laser, Laser Mégajoule, and HiPER. These pulses are arranged such that they impact pellets of tritium-deuterium simultaneously from all directions, in the hope that the squeezing effect of the impacts will induce nuclear fusion in the pellets. This technique, known as "inertial confinement fusion", has so far not been able to achieve "breakeven"; that is, the fusion reaction generates less power than is used to power the lasers, but research continues.

Microscopy

Confocal laser scanning microscopy and two-photon excitation microscopy make use of lasers to obtain blur-free images of thick specimens at various depths. Laser capture microdissection uses lasers to procure specific cell populations from a tissue section under microscopic visualization. Additional laser microscopy techniques include harmonic microscopy, four-wave mixing microscopy[3] and interferometric microscopy.[4]

The Advent of the "Laser Scalpel"

Early experimenters with medical lasers pointed out that there are surgical operations that are difficult to perform with the conventional scalpel and that a laser beam might be used instead. Initial trials showed that a finely focused beam from a carbon dioxide gas laser could cut through human tissue easily and neatly. The surgeon could direct the beam from any angle by using a mirror mounted on a movable metal arm.

Several advantages of laser surgery quickly became apparent. First, the light beam is consistent, which means that it gives off the same amount of energy from one second to the next. So as long as the beam is moving along, the cut it makes (the incision) does not vary in depth; whereas when using a scalpel a doctor can accidentally make part of the incision too deep. A second advantage of the surgical laser is that the hot beam cauterizes, or seals off, the open blood vessels as it moves along. (This works well mainly for small vessels, such as those in the skin. The doctor still has to seal off the larger blood vessels using conventional methods.) Still another advantage is that the cells in human tissue do not conduct heat very well, so the skin or any other tissue near the laser incision does not get very hot and is not affected by the beam. This advantage of laser surgery is very helpful when a doctor must operate on a tiny area that is surrounded by healthy tissue or organs.

[Photo: In this photo taken during open-heart surgery, a doctor uses a laser probe to punch small holes in the patient's heart muscle to increase the organ's blood flow.]

It should be pointed out that the "laser scalpel" is not necessarily the best tool to use in every operation. Some doctors feel that while the laser is useful in some situations, it will never totally replace the scalpel. Others are more optimistic and see a day when more advanced lasers will make the scalpel a thing of the past.

Cleaning Arteries with Light

For instance, lasers are increasingly used to clean plaque from people's arteries. Plaque is a tough fatty substance that can build up on the inside walls of the arteries. Eventually the vessels can get so clogged that blood does not flow normally, and the result can be a heart attack or stroke, both of which are serious and sometimes fatal. The traditional method for removing the plaque involves opening the chest and making several incisions, a long and sometimes risky operation. It is also expensive and requires weeks for recovery.

An effective alternative is to use a laser beam to burn away the plaque. The key to making this work is the doctor's ability to see inside the artery and direct the beam, another area in which fiber optics and lasers are combined into a modern wonder tool. An optic fiber that has been connected to a tiny television camera can be inserted into an artery. These elements now become a miniature sensor that allows the doctor and nurses to see inside the artery while a second fiber is inserted to carry the bursts of light that will burn away the plaque.

The technique works in the following way. The fiber-optic array is inserted into a blood vessel in an arm or leg and moved slowly into the area of the heart and blocked arteries. When the array is in place the laser is fired and the plaque destroyed, and then the exhaust vapors are sucked back through a tiny hollow tube that is inserted along with the optical fibers. When the artery has been cleaned out the doctor removes the fibers and tube, and the operation is finished. This medical process is known as laser angioplasty. It has several obvious advantages. First, no incision is needed (except for the small one in the vessel to insert the fibers). There is also little or no bleeding, and the patient can enjoy total recovery in a day or two.

Lasers Heal and Reshape the Eyes

Some of the most remarkable breakthroughs for medical lasers have been in the area of ophthalmology, the study of the structure and diseases of the eye. One reason that laser beams are so useful in treating the eye is that the cornea, the coating that covers the eyeball and admits light into the interior of the eye, is transparent. Since it is designed to admit ordinary light, the cornea lets in laser light just as well and remains unaffected by the beam.

First, the laser is very useful in removing extraneous blood vessels that can form on the retina, the thin, light-sensitive membrane at the back of the eyeball. It is on the retina that the images of the things the eye sees are formed. Damage to the retina can sometimes cause blindness; the most common form in the United States results from diabetes (a disease characterized by high levels of blood sugar), in which, in some advanced cases, hundreds of tiny extra blood vessels form on the retina. These block light from the surface of the membrane, resulting in partial or total blindness.

The laser most often used in the treatment of this condition is powered by a medium of argon gas. The doctor aims the beam through the cornea and burns away the tangle of blood vessels covering the retina. The procedure takes only a few minutes and can be done in the doctor's office. The laser can also repair a detached retina, one that has broken loose from the rear part of the eyeball. Before the advent of lasers detached retinas had to be repaired by hand, and because the retina is so delicate this was a very difficult operation to perform. Using the argon laser, the doctor can actually "weld" the torn retina back in place. It is perhaps a strange coincidence that Gordon Gould, one of the original inventors of the laser, later had one of his own retinas repaired this way.

Using Lasers for Eye Surgery

The laser works like a sewing machine to repair a detached retina, the membrane that lines the interior of the eye. The laser beam is adjusted so that it can pass harmlessly through the lens and focus on tiny spots around the damaged area of the retina. When it is focused, the beam has the intensity to "weld" or seal the detached area of the retina back against the wall of the eyeball.

Perhaps most exciting of all the eye-related laser applications is the reshaping of the eye's cornea, a technique widely known as LASIK (which stands for Laser-Assisted In Situ Keratomileusis). As Breck Hitz describes it,

The patient's eyeglass prescription is literally carved inside the cornea with the beam of an excimer laser [a laser device that produces pulses of ultraviolet, or UV, light]. A small flap of the cornea is first removed with a precision knife . . . and an inner portion of the cornea is exposed to the excimer laser. After the prescription is carved, the corneal flap that was opened is then put back into place over the ablated [surgically altered] cornea.6

[Photo: A patient undergoes eye surgery performed by a laser beam. In addition to treating detached retinas, lasers can remove cataracts.]

Some Cosmetic Uses of Lasers

Medical lasers are also widely used for various types of cosmetic surgery, including the removal of certain kinds of birthmarks. Port-wine stains, reddish purple skin blotches that appear on about three out of every one thousand children, are an example. Such stains can mark any part of the body but are most commonly found on the face and neck.

The medical laser is able to remove a port-wine stain for the same reason that a military laser is able to flash a message to a submerged submarine. Both lasers take advantage of the monochromatic quality of laser light, that is, its ability to shine in one specific color. The stain is made up of thousands of tiny malformed blood vessels that have a definite reddish purple color. This color very strongly absorbs a certain shade of green light. In fact, that is why the stain looks red: it absorbs the green and other colors in white light but reflects the red back to people's eyes.

To treat the stain, the doctor runs a wide, low-power beam of green light across the discolored area. The mass of blood vessels in the stain absorbs the energetic laser light and becomes so hot that it is actually burned away. The surrounding skin is a different color than the stain, so that skin absorbs only small amounts of the beam and remains unburned. (Of course, the burned areas must heal, and during this process some minor scarring sometimes occurs.)

[Photo: A doctor uses an argon laser to remove a port-wine stain, a kind of birthmark. Unwanted tissue is burned away while normal skin remains undamaged.]

Laser-Assisted Dentistry

Dentistry is another branch of medicine that has benefited tremendously from laser technology. Indeed, lasers have made some people stop dreading a visit to the dentist. No one enjoys having a cavity drilled, of course. It usually requires an anesthetic (a painkiller like novocaine) that causes uncomfortable numbness in the mouth; also, the sound of the drill can be irritating or even sickening to some people.

Many dentists now employ an Nd:YAG laser (which uses a crystal for its lasing medium) instead of a drill for most cavities. The laser treatment takes advantage of the simple fact that the material that forms in a cavity is much softer than the enamel (the hard part of a tooth). The laser is set at a power that is just strong enough to eliminate the decayed tissue but not strong enough to harm the enamel. When treating a very deep cavity, bleeding sometimes occurs, and the laser beam often seals off blood vessels and stops the bleeding.

The most often asked question about treating cavities with lasers is: Does it hurt? The answer is no. Each burst of laser light from a dental laser lasts only thirty-trillionths of a second, much faster than the amount of time a nerve takes to trigger pain. In other words, the beam would have to last 100 million times longer in order to cause any discomfort. So this sort of treatment requires no anesthetic.
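The arithmetic behind that comparison is easy to check; the sketch below just multiplies the two figures given in the text.

```python
# Quick check of the claim above, using only numbers from the text.
pulse_s = 30e-12         # "thirty-trillionths of a second"
factor = 100_000_000     # "100 million times longer"
print(pulse_s * factor)  # 0.003 s = 3 ms, roughly the timescale on which
                         # a nerve can register pain
```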

Advantages of Lasers for Dental Surgery

In this excerpt from an article in The Dental Clinics of North America, Robert A. Strauss of the Medical College of Virginia mentions some of the advantages of using lasers for oral surgery.

Decreased post-operative swelling is characteristic of laser use [for oral surgery]. Decreased swelling allows for increased safety when performing surgery within the airway [the mouth] . . . and increases the range of surgery that oral surgeons can perform safely without fear of airway compromise. This effect allows the surgeon to perform many procedures in an office or outpatient facility that previously would have required hospitalization. . . . Tissue healing and scarring are also improved with the use of the laser. . . . Laser wounds generally heal with minimal scar formation and . . . often can be left unsutured [without stitches], another distinct advantage

Medical safety in medical equipment

Patient Safety

The Patient Safety topic area of HIMSS represents the work of the HIMSS Patient Safety Task Force, which is devoted to providing best practice guidance on how Health IT can be utilized to improve patient safety. Topic areas include risk analysis, medical devices, and reducing adverse events and re-hospitalizations through Health IT. Focus and deliverables of this Task Force will be coordinated with the efforts of the HIMSS Clinical Engineering-Information Technology (CE-IT) Community (www.ceitcollaboration.org).

An Introduction to ONC's SAFER Guides, a series of nine evidence-based toolkits that outline best practices for health care organizations to implement and utilize electronic health records and reduce the chance of an adverse event associated with Health IT.

Analysis of ONC's Health Information Technology Patient Safety Action & Surveillance Plan, which announced a series of initiatives designed to improve reporting and data collection of HIT-related adverse events through voluntary reporting using AHRQ's Common Data Format; HIT certification including adverse event reporting functionality using the AHRQ Common Data Format, coupled with patient safety audits by ONC certification bodies; and a launch of new monitoring initiatives by AHRQ, FDA, and CMS.

Case Studies on how hospitals are reducing the number of hospital-acquired infections, adverse drug events, and rehospitalizations through the innovative use of Health IT.

A Complex Systems Risk Analysis Guide for the healthcare industry to understand potential errors/unintended consequences from health IT and how to avoid them.

Medical Device Quick Start Guide: This guide is designed to provide points of reference and consideration for the purchase and implementation of a clinical documentation system (CDS) that integrates with networked medical device systems. Since there is no one-size-fits-all system, this guide should be treated as a reference document to assist the institution in making an investment decision. This guide is intended for healthcare professionals who are responsible for technology management within their organizations and are considering the purchase or upgrade of a CDS with the added feature of medical device integration.

With the introduction of the American Recovery and Reinvestment Act in 2009, $17-$19 billion USD were earmarked to incentivize practitioners to adopt qualifying electronic health record systems in a meaningful way. Qualified EHRs, such as CCHIT-certified systems, do not include ease of use as an evaluation criterion. Therefore, it is crucial that consumers of EHR technology understand the importance of human factors and the important role it plays when defining meaningful use.

The use of health information technology (HIT) can lead to unintended and unwanted consequences. This is not a new phenomenon in medicine, as iatrogenesis, or unintended harm caused by clinicians, is often documented in the literature. Electronic iatrogenesis, or e-iatrogenesis, is now being documented as unintended consequences of the use of computerized provider order entry (CPOE). These consequences can include: more or new work (e.g., non-standard cases call for more steps in ordering), extended workflow (e.g., extra time to enter orders), system demands (e.g., need for continuous equipment upgrades), communication (e.g., determining when face to face discussion is needed), emotions (both positive and negative), new kinds of errors (e.g., entering orders on the wrong patient), power shifts (e.g., decisions made by ancillary clinical staff), and dependence on the system (e.g., downtime creates a major issue). Per the Joint Commission's (JCAHO) sentinel event alert: "As health information technology (HIT) and 'converging technologies'—the interrelationship between medical devices and HIT—are increasingly adopted by health care organizations, users must be mindful of the safety risks and preventable adverse events that these implementations can create or perpetuate."

Medical Device Reporting (MDR)

MDR Overview

Each year, the FDA receives several hundred thousand medical device reports of suspected device-associated deaths, serious injuries and malfunctions. Medical Device Reporting (MDR) is one of the postmarket surveillance tools the FDA uses to monitor device performance, detect potential device-related safety issues, and contribute to benefit-risk assessments of these products.

Mandatory reporters (i.e., manufacturers, device user facilities, and importers) are required to submit certain types of reports for adverse events and product problems to the FDA about medical devices. The FDA also encourages health care professionals, patients, caregivers and consumers to submit voluntary reports about serious adverse events that may be associated with a medical device, as well as use errors, product quality issues, and therapeutic failures. These reports, along with data from other sources, can provide critical information that helps improve patient safety.

Mandatory Medical Device Reporting Requirements:

The Medical Device Reporting (MDR) regulation (21 CFR 803) contains mandatory requirements for manufacturers, importers, and device user facilities to report certain device-related adverse events and product problems to the FDA.

Manufacturers: Manufacturers are required to report to the FDA when they learn that any of their devices may have caused or contributed to a death or serious injury. Manufacturers must also report to the FDA when they become aware that their device has malfunctioned and would be likely to cause or contribute to a death or serious injury if the malfunction were to recur.

Importers: Importers are required to report to the FDA and the manufacturer when they learn that one of their devices may have caused or contributed to a death or serious injury. The importer must report only to the manufacturer if their imported devices have malfunctioned and would be likely to cause or contribute to a death or serious injury if the malfunction were to recur.

Device User Facilities: A “device user facility” is a hospital, ambulatory surgical facility, nursing home, outpatient diagnostic facility, or outpatient treatment facility, which is not a physician’s office. User facilities must report a suspected medical device-related death to both the FDA and the manufacturer. User facilities must report a medical device-related serious injury to the manufacturer, or to the FDA if the medical device manufacturer is unknown.

A user facility is not required to report a device malfunction, but can voluntarily advise the FDA of such product problems using the voluntary MedWatch Form FDA 3500 under FDA’s Safety Information and Adverse Event Reporting Program. Healthcare professionals within a user facility should familiarize themselves with their institution's procedures for reporting adverse events to the FDA. See "Medical Device Reporting for User Facilities", a guidance document issued by FDA.
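The reporting obligations above form a small decision table; the sketch below encodes them in Python for illustration. This is a paraphrase of the 21 CFR 803 summary given here, not regulatory guidance.

```python
# Minimal sketch: who must report a device event to whom, per the MDR
# summary above (21 CFR 803). Illustrative paraphrase only.

def mdr_recipients(reporter: str, event: str) -> set[str]:
    """reporter: 'manufacturer' | 'importer' | 'user_facility'
    event: 'death' | 'serious_injury' | 'malfunction'"""
    if reporter == "manufacturer":
        # Deaths, serious injuries, and likely-to-recur malfunctions go to the FDA.
        return {"FDA"}
    if reporter == "importer":
        # Deaths/serious injuries go to both; malfunctions to the manufacturer only.
        return {"FDA", "manufacturer"} if event != "malfunction" else {"manufacturer"}
    if reporter == "user_facility":
        if event == "death":
            return {"FDA", "manufacturer"}
        if event == "serious_injury":
            return {"manufacturer"}  # or the FDA, if the manufacturer is unknown
        return set()  # malfunction reports are voluntary (MedWatch Form FDA 3500)
    raise ValueError(f"unknown reporter: {reporter}")

print(mdr_recipients("user_facility", "death"))  # {'FDA', 'manufacturer'}
```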

Voluntary Medical Device Reporting:

The FDA encourages healthcare professionals, patients, caregivers and consumers to submit voluntary reports of significant adverse events or product problems with medical products to MedWatch, the FDA’s Safety Information and Adverse Event Reporting Program or through the MedWatcher mobile app.

How to Report a Medical Device Problem:

Medical device reports are submitted to the FDA by mandatory reporters (manufacturers, importers and device user facilities) and voluntary reporters (health care professionals, patients, caregivers and consumers).

Traditional safety assessment focuses on potential hazards from electrical, mechanical or other aspects of a design occurring during usage. Functional safety is an additional step focusing on the reliability of the product to function correctly and safely in response to its inputs. It provides assurance that safety-related systems in the device will offer the necessary risk reduction required to minimise the severity and probability of harm in the event of malfunction.

Testing

Electrical Safety Testing
EMC Testing
Environmental Testing

Independent Safety Assessment (ISA)

Conformity assessment against the relevant Medical Devices, Active Implantable Medical Devices, or In Vitro Diagnostic Medical Devices Directives

Assessment of safety-related issues
Assessment of the functional safety of active medical devices (safety of hardware and software)
Assessment of the usability of active medical devices
Assessment of Risk Management files

Medical equipment management

Healthcare Technology Management (also referred to as biomed, biomedical engineering, bio-medical engineering, biomedical equipment management, biomedical equipment services, biomedical maintenance, clinical engineering, clinical engineering management, clinical equipment management, clinical technology management, clinical technology services, medical equipment management, and medical equipment repair) is a fundamental part of managing, maintaining, and/or designing medical devices used or proposed for use in various healthcare settings, from the home and the field to the doctor's office and the hospital.

Some but not all of the healthcare technology management professional's functions are:

Equipment Control & Asset Management

Equipment Inventories

Work Order Management

Data Quality Management

Equipment Maintenance Management

Equipment Maintenance

Personnel Management

Quality Assurance

Patient Safety

Risk Management

Hospital Safety Programs

Radiation Safety

Medical Gas Systems

In-Service Education & Training

Accident Investigation

Analysis of Failures, Root-Causes, and Human Factors

Safe Medical Devices Act (SMDA) of 1990

Health Insurance Portability and Accountability Act (HIPAA)

Careers in Facilities Management

Equipment Control & Asset Management

Every medical treatment facility should have policies and processes on equipment control and asset management. Equipment control and asset management involves the management of medical devices within a facility and may be supported by automated information systems (e.g., Enterprise Resource Planning (ERP) systems from Lawson Software are often found in U.S. hospitals, and the U.S. military health system uses an advanced automated system known as the Defense Medical Logistics Standard Support (DMLSS) suite of applications). Equipment control begins with the receipt of a newly acquired equipment item and continues through the item's entire life-cycle. Newly acquired devices should be inspected by in-house or contracted Biomedical Equipment Technicians (BMETs), who will receive an established equipment control/asset number from the facility's Equipment/Property Manager. This control number is used to track and record maintenance actions in their database. This is similar to creating a new chart for a new patient who will be seen at the medical facility. Once an equipment control number is established, the device is safety inspected and readied for delivery to clinical and treatment areas in the facility.

Work Order Management

Work order management involves applying systematic, measurable, and traceable methods to all acceptance/initial inspections, preventive maintenance, calibrations, and repairs by generating scheduled and unscheduled work orders. Work order management may be paper-based or computer-based and includes the maintenance of active (open or uncompleted) and completed work orders, which provide a comprehensive maintenance history of all medical equipment devices used in the diagnosis, treatment, and management of patients.
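As an illustration of the kind of record such a system keeps, here is a minimal sketch of a work order tied to an equipment control number; the field names and types are assumptions for the example, not a standard schema.

```python
# Minimal sketch: a work order record keyed to an equipment control number.
# Field names are illustrative assumptions, not a standard schema.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class WorkOrder:
    control_number: str      # asset number assigned when the device was received
    order_type: str          # 'initial_inspection' | 'preventive_maintenance' |
                             # 'calibration' | 'repair'
    scheduled: bool          # scheduled vs. unscheduled work
    opened: date
    completed: Optional[date] = None  # None while the order is still active
    notes: list[str] = field(default_factory=list)

    @property
    def active(self) -> bool:
        return self.completed is None

# The completed orders for one control number form that device's
# maintenance history:
wo = WorkOrder("EC-2041", "preventive_maintenance", True, date(2024, 3, 1))
wo.completed = date(2024, 3, 2)
print(wo.active)  # False
```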

MRI

What is an MRI scan?

An MRI scan uses a large magnet, radio waves, and a computer to create a detailed cross-sectional image of the patient's internal organs and structures.

The scanner itself resembles a large tube with a table in the middle, allowing the patient to slide into the tunnel.

An MRI scan differs from CT scans and X-rays because it does not use ionizing radiation, which can be potentially harmful to a patient.

How does an MRI scan work?

An MRI scanner can be found in most hospitals and is an important tool for examining patients. An MRI scanner contains two powerful magnets, which represent the most critical part of the equipment.

The human body is largely made of water molecules, each of which consists of smaller hydrogen and oxygen atoms. At the center of each hydrogen atom lies an even smaller particle called a proton, which acts as a magnet and is sensitive to any magnetic field. Normally the water molecules in our bodies are randomly arranged, but upon entering an MRI scanner, the first magnet causes the body's water molecules to align in one direction, either north or south.

The second magnetic field is then turned on and off in a series of quick pulses, causing each hydrogen atom to alter its alignment and then quickly switch back to its original relaxed state when switched off. This creates a knocking sound inside the scanner, a result of the gradient coils being switched on and off. When electricity is passed through a coil, a magnetic field is created and the coil vibrates, which accounts for the noise you hear. Although the patient cannot feel these changes, the scanner can detect them and, in conjunction with a computer, can create a detailed cross-sectional image for the radiologist.
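The frequency of those radio pulses is set by the magnet strength through the Larmor relation, f = γB. Below is a small sketch using the proton's known gyromagnetic ratio; the field strengths are typical clinical values, assumed here for illustration.

```python
# Minimal sketch: Larmor frequency f = gamma * B for hydrogen protons,
# the RF frequency an MRI scanner pulses at. Field strengths are assumed,
# typical clinical values.
GAMMA_H_MHZ_PER_T = 42.58  # proton gyromagnetic ratio, MHz per tesla

def larmor_mhz(field_tesla: float) -> float:
    return GAMMA_H_MHZ_PER_T * field_tesla

print(larmor_mhz(1.5))  # ~63.9 MHz for a 1.5 T scanner
print(larmor_mhz(3.0))  # ~127.7 MHz for a 3 T scanner
```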

What are MRI scans used for?

The development of the MRI scan represents a huge milestone for the medical world, as doctors, scientists and researchers are now able to examine the insides of the human body accurately using a non-invasive tool.

The following are just some of the examples where an MRI scan is used:

Abnormalities of the brain and spinal cord

Tumors, cysts, and other abnormalities in various parts of the body

Injuries or abnormalities of the joints, such as back pain

Certain types of heart problems

Diseases of the liver and other abdominal organs

Causes of pelvic pain in women (e.g. fibroids, endometriosis)

Suspected uterine abnormalities in women undergoing evaluation for infertility.

Medical uses

Neuroimaging

Main article: MRI of brain and brain stem

[Image: MRI image of white matter tracts.]

MRI is the investigative tool of choice for neurological cancers, as it is more sensitive than CT for small tumors and offers better visualization of the posterior fossa. The contrast provided between grey and white matter makes it the optimal choice for many conditions of the central nervous system, including demyelinating diseases, dementia, cerebrovascular disease, infectious diseases and epilepsy.[8] Since many images are taken milliseconds apart, it shows how the brain responds to different stimuli; researchers can then study both the functional and structural brain abnormalities in psychological disorders.[9] MRI is also used in MRI-guided stereotactic surgery and radiosurgery for treatment of intracranial tumors, arteriovenous malformations and other surgically treatable conditions using a device known as the N-localizer.[10][11][12][13]

Cardiovascular

Main article: Cardiac magnetic resonance imaging

[Image: MR angiogram in congenital heart disease.]

Cardiac MRI is complementary to other imaging techniques, such as echocardiography, cardiac CT and nuclear medicine. Its applications include assessment of myocardial ischemia and viability, cardiomyopathies, myocarditis, iron overload, vascular diseases and congenital heart disease.[14]

Liver and gastrointestinal MRI

Hepatobiliary MR is used to detect and characterize lesions of the liver, pancreas and bile ducts. Focal or diffuse disorders of the liver may be evaluated using diffusion-weighted, opposed-phase imaging and dynamic contrast enhancement sequences. Extracellular contrast agents are widely used in liver MRI, and newer hepatobiliary contrast agents also provide the opportunity to perform functional biliary imaging. Anatomical imaging of the bile ducts is achieved by using a heavily T2-weighted sequence in magnetic resonance cholangiopancreatography (MRCP). Functional imaging of the pancreas is performed following administration of secretin. MR enterography provides non-invasive assessment of inflammatory bowel disease and small bowel tumors. MR colonography can play a role in the detection of large polyps in patients at increased risk of colorectal cancer.[16][17][18][19]

Functional MRI

Main article: Functional magnetic resonance imaging

Functional MRI (fMRI) is used to understand how different parts of the brain respond to external stimuli. Blood oxygenation level dependent (BOLD) fMRI measures the hemodynamic response to transient neural activity resulting from a change in the ratio of oxyhemoglobin and deoxyhemoglobin. Statistical methods are used to construct a 3D parametric map of the brain indicating those regions of the cortex which demonstrate a significant change in activity in response to the task. fMRI has applications in behavioral and cognitive research as well as in planning neurosurgery of eloquent brain areas.[20][21]

Oncology

MRI is the investigation of choice in the preoperative staging of rectal and prostate cancer, and has a role in the diagnosis, staging, and follow-up of other tumors.[22]

Ultrasonic Imaging System

Ultrasounds are sound waves with frequencies higher than the upper audible limit of human hearing. Ultrasound is not different from 'normal' (audible) sound in its physical properties, only in that humans cannot hear it. This limit varies from person to person and is approximately 20 kilohertz (20,000 hertz) in healthy, young adults. Ultrasound devices operate with frequencies from 20 kHz up to several gigahertz.

Ultrasound is used in many different fields. Ultrasonic devices are used to detect objects and measure distances. Ultrasound imaging or sonography is often used in medicine. In the nondestructive testing of products and structures, ultrasound is used to detect invisible flaws. Industrially, ultrasound is used for cleaning, mixing, and to accelerate chemical processes. Animals such as bats and porpoises use ultrasound for locating prey and obstacles.[1] Scientists are also studying ultrasound using graphene diaphragms as a method of communication.[2]

Ultrasound Identification (USID)

Ultrasound Identification (USID) is a Real Time Locating System (RTLS) or Indoor Positioning System (IPS) technology used to automatically track and identify the location of objects in real time using simple, inexpensive nodes (badges/tags) attached to or embedded in objects and devices, which then transmit an ultrasound signal to communicate their location to microphone sensors.

Imaging

Main article: Medical ultrasonography

[Image: Sonogram of a fetus at 14 weeks (profile).]

[Image: Head of a fetus, aged 29 weeks, in a "3D ultrasound".]

The potential for ultrasonic imaging of objects, with a 3 GHz sound wave producing resolution comparable to an optical image, was recognized by Sokolov in 1939, but techniques of the time produced relatively low-contrast images with poor sensitivity.[21] Ultrasonic imaging uses frequencies of 2 megahertz and higher; the shorter wavelength allows resolution of small internal details in structures and tissues. The power density is generally less than 1 watt per square centimetre, to avoid heating and cavitation effects in the object under examination.[22] High- and ultra-high-frequency ultrasound waves are used in acoustic microscopy, with frequencies up to 4 gigahertz. Ultrasonic imaging applications include industrial non-destructive testing, quality control and medical uses.[21]
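The tie between frequency and resolvable detail is the wavelength, λ = v/f. Here is a short sketch using the conventional assumed speed of sound in soft tissue (1540 m/s):

```python
# Minimal sketch: ultrasound wavelength in soft tissue, lambda = v / f.
# Resolvable detail scales with wavelength, which is why imaging uses
# megahertz frequencies. 1540 m/s is the standard assumed tissue speed.
V_TISSUE_M_PER_S = 1540.0

def wavelength_mm(frequency_mhz: float) -> float:
    return V_TISSUE_M_PER_S / (frequency_mhz * 1e6) * 1000.0

print(wavelength_mm(2.0))   # ~0.77 mm at 2 MHz
print(wavelength_mm(10.0))  # ~0.15 mm at 10 MHz: finer detail, shallower reach
```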

Medical sonography (ultrasonography) is an ultrasound-based diagnostic medical imaging technique used to visualize muscles, tendons, and many internal organs, and to capture their size, structure and any pathological lesions with real time tomographic images. Ultrasound has been used by radiologists and sonographers to image the human body for at least 50 years and has become a widely used diagnostic tool. The technology is relatively inexpensive and portable, especially when compared with other techniques, such as magnetic resonance imaging (MRI) and computed tomography (CT). Ultrasound is also used to visualize fetuses during routine and emergency prenatal care. Such diagnostic applications used during pregnancy are referred to as obstetric sonography. As currently applied in the medical field, properly performed ultrasound poses no known risks to the patient.[23] Sonography does not use ionizing radiation, and the power levels used for imaging are too low to cause adverse heating or pressure effects in tissue. Although the long-term effects of ultrasound exposure at diagnostic intensity are still unknown,[24] currently most doctors feel that the benefits to patients outweigh the risks.[25] The ALARA (As Low As Reasonably Achievable) principle has been advocated for ultrasound examinations, that is, keeping the scanning time and power settings as low as possible while still consistent with diagnostic imaging; by that principle, non-medical uses, which by definition are not necessary, are actively discouraged.[26]