Young Scientist Journal (Jan-Jun) 2012


Young Scientists Journal
Volume 5, Issue 11, January-June 2012
Online full text at www.ysjournal.com


Young Scientists Journal

Young Scientists Journal is a web-based, open-access online journal (www.ysjournal.com). It has been in existence since June 2006 and contains articles written by young scientists for young scientists. It is where young scientists get their research and review articles published.

Published by: MEDKNOW PUBLICATIONS AND MEDIA PVT. LTD., B5-12, Kanara Business Center, Off Link Road, Ghatkopar (E), Mumbai - 400075, INDIA. Phone: 91-22-6649 1818. Web: www.medknow.com

Editorial Board

Chief Editor: Cleodie Swire, UK

Editorial Team
Team Leader: Fiona Jenkinson, UK
Members: Emma Copland, UK; Rachel Wyles, UK; Chris Cundy, UK; Arthur Harris, UK; Louis Wilson, UK; Niyi Adenuga, Nigeria; Louis Sharrock, UK; Jake Shepherd-Barron, UK; David Hewett, UK; Harriet Dunn, UK; Fiona Paterson, UK; Gilbert Chng, Singapore; Mei Yin Wong, Singapore; Maria Jose Tamayo, Peru; Alex Lancaster, UK; Hannah Morrison, UK; Matthew Brady, UK; Anne de Vitry d'Avaucourt, France; Ben Lawrence, UK; Kiran Thapa, England; Tim Wood, UK; Muna Oli, USA; Robert Aylward, UK; Maddy Parker, UK; Chloe Forsyth, UK; Savannah Lordis, UK; Emily Thompsett, UK; Natalie Cooper-Rayner, UK

Technical Team
Team Leader: Jacob Hamblin-Pyke, UK
Members: Mark Orders, UK

Young Advisory Board

Jonathan Rogers, UK; Malcolm Morgan, UK; Muna Oli, USA; Otana Jakpor, USA; Pamela Barraza Flores, Mexico

International Advisory Board
Team Leader: Christina Astin, UK

Ghazwan Butrous, UK; Mike Bennett, USA; Joanne Manaster, USA; Tony Grady, USA; Andreia Azevedo-Soares, UK; Ian Yorston, UK; Paul Soderberg, USA; Charlie Barclay, UK; Anna Grigoryan, USA/Armenia; Alom Shaha, UK; Don Eliseo Lucero-Prisno, UK; Lorna Quandt, USA; Linda Crouch, UK; Arjen Dijksman, France; Steven Chambers, UK; Armen Soghoyan, Armenia; Thijs Kouwenhoven, Netherlands/Philippines; Anthony Hardwicke, UK; Tobias Nørbo, Denmark; John Boswell, USA; Mark Orders, UK; Sam Morris, UK; Lee Riley, USA; Malcolm Morgan, UK; Corky Valenti, USA; Joanna Buckley, UK; Vince Bennett, USA; Jonathan Rogers, UK


All rights reserved. No part of this publication may be reproduced or transmitted, in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the editor. The Young Scientists Journal and/or its publisher cannot be held responsible for errors or for any consequences arising from the use of the information contained in this journal. The appearance of advertising or product information in the various sections of the journal does not constitute an endorsement or approval by the journal and/or its publisher of the quality or value of the said product or of claims made for it by its manufacturer. The journal is printed on acid-free paper.

Web site: www.ysjournal.com
E-mail: [email protected]

Contents

Editorial
Cleodie Swire ... 1

Interview
Interview with Harry Kroto
Cleodie Swire ... 3

Review Articles
The Sun's the limit
Dennis Cui ... 6
Exploring the quantum world
Lauren Peters ... 11
The light in dark chocolate
Joyce Chen ... 18
Golden simultaneous equations
Gianamar Giovannetti-Singh ... 20
Thermodynamics of the Earth's atmosphere
Martin H Wong ... 23

Opinions
Can science and religion work together while one relies on evidence and the other on faith?
Catherine Schuster Bruce ... 31
A new age of science is not dawning – It has arrived
Toby McMaster ... 34

Research Articles
The effects of magnetic fields on plant growth and health
Edward Fu ... 38
Household bacteria: Everyday elimination methods uncovered!
Defne Gürel, Melis Atalar, Ayça Arslan Ergül ... 43


Editorial

The Young Scientists Journal is proud to present Issue 11. This issue contains a wide variety of articles, and the essence of this is expressed in Toby McMaster’s piece which emphasises the importance of maintaining communication and collaboration between the different specialities to ensure that scientific progress is as efficient, enterprising and extensive as possible. This is something that we, as the next generation of scientists, must all keep in mind lest we subjugate the people who hold the key to the most pressing challenges for mankind.

An example of this is solar power, which has often been presented to me as an infeasible source due to its not being profitable enough, either in terms of cost or energy conversion. However, in 'The Sun's the limit', a persuasive argument is put forward that will make you re-evaluate your stance and perhaps convince you that solar power is where our focus should lie. Another article that may change your views is 'The light in dark chocolate', which explains why everyone's favourite indulgence doesn't deserve to be solely associated with bad health. Catherine Schuster Bruce explores how some of the world's greatest scientific minds have managed to uphold their seemingly conflicting scientific and religious beliefs.

The issue also contains articles in which the authors explain some of the most intriguing subjects in science: quantum physics, the Earth's atmosphere and the Golden Ratio. These articles offer an easy way to become versed in these topics, as they are written with a student readership in mind.

Two original research articles are also featured: one by a group of students from Turkey that evaluates common ways to eradicate bacteria, and another that studies how magnetic fields affect plant development. These highlight that it is not only graduate scientists who can carry out worthwhile experiments, and show what an effective way this is to become accustomed to the process of conducting research.

We have interviewed a Nobel prize-winning scientist, Harry Kroto, whose research included the discovery of buckminsterfullerene, which has fuelled some of the latest avenues of investigation in solar power. He offers invaluable advice about what young students should look for in the work they intend to pursue, and about the questioning nature that they must uphold.

I should like to thank all the authors for submitting their articles and working with us to make them into the refined versions that you now see before you. Our hardworking team of editors also deserves credit, especially when you consider that they have been finding time for the task alongside their demanding work at school preparing for essential public examinations. Finally, thanks must go to the senior team at YSJ: Fiona Jenkinson (the head of the Editorial Team) for her management of the group and for producing the image for the front cover, Jacob Hamblin-Pyke (head of the Technical Team) for maintaining our public portal – the website, Miss Christina Astin for her guidance and encouragement, and Prof. Ghazwan Butrous for his direction, which keeps us all focused on the progression of the journal.

Cleodie Swire, Chief Editor

E-mail: [email protected]
DOI: 10.4103/0974-6102.97653


In addition to the above, the work of one person should be commended above all: that of our current Chief Editor, Cleodie Swire. Her contributions to the journal have been outstanding in terms of organisation, leadership and commitment. She has helped keep all aspects of the journal under control, has indeed made substantial improvements, and I am certain she will continue to do so. I feel it is fair to say that this issue would not be what it is if it weren't for her efforts.

Fiona Jenkinson, Head of the Editorial Team

E-mail: [email protected]

About the Author

Fiona Jenkinson is 17 years old and goes to The King's School Canterbury, where she is currently studying for her AS Levels in Biology, Chemistry, Physics and Maths. In her free time she enjoys art, music, photography and reading. She is unsure of what she wants to do in the future.


Interview

Interview with Harry Kroto

Cleodie Swire
The King's School Canterbury, England. E-mail: [email protected]

Chief Editor, Cleodie Swire, interviewed Sir Harold Kroto, Nobel Prize winner.

ABSTRACT

Professor Sir Harold Kroto FRS [Figure 1] is a British chemist who is most famous for being one of the Nobel Prize-winning discoverers of buckminsterfullerene, known as 'buckyballs' [Figure 2]. This is a spherical, cage-like molecule with the formula C60 which can be used to encage and transport other molecules [Box 1]. He now works at Florida State University, USA. For a brief biography, see Box 2.

What made you interested in science?
I'm not sure that anything did, other than being interested in Meccano and being told by my father that I'd better be interested in science and mathematics. My main interest as a kid at school was geography, and art and graphics.

What was the turning point when you decided to pursue chemistry?
I had a teacher who took an interest in me and a couple of other students, and encouraged me to go to Sheffield University, which at the time was probably the best chemistry department in the country.

In your opinion, why is it important that we continue to study chemistry?
Chemistry is an overarching subject, and almost everything to do with sustainability and survival today involves chemistry.

Science for life: I think it is important to have a scientific background; everyone should do science and mathematics to a reasonable degree so that they have some appreciation of the culture that created modern lifestyles. The more you understand the technology in the modern world, the better decisions you will make if you are in a position of responsibility. Many of the important decisions made nowadays involve scientific understanding: climate change, fuel problems, medical issues, etc. It's a basis for giving people a better understanding of the world we are in.

Take no one's word for it: Science for me has a deeper intellectual aspect and I call it 'natural philosophy'. Natural philosophy is defined in the following way: it is the only philosophical construct that we have to determine truth with any degree of reliability.

For instance, do you think that the Earth goes round the Sun or the Sun goes round the Earth? Until Galileo, people believed that the Sun goes round the Earth – if you look outside the window, this is how it looks. The Royal Society has the motto 'Take no one's word for it' – the questioning of everything is what science is about. Common sense suggests that the Sun goes round us; it's actually quite complicated to prove the truth, which involves Foucault's pendulum [Figure 3] showing that the Earth is rotating.


Once you realize that, you appreciate how easily people can be misled. I don't respect anybody who cannot prove to me what they are telling me on the basis of evidence. You must be really careful about accepting things without evidence. You should tell your teachers that it's obvious that the Sun goes round us – common sense suggests it. It's actually uncommon sense that you need to understand nature – it is uncommon sense that makes a scientist look a little more deeply and decide that something isn't quite right.

How has technology affected the type of work chemists do throughout your career?
Enormously – it has changed life completely. Technology now is so efficient and so complex and so inscrutable that people are using technology without knowing what is going on. Twenty years ago, if someone had told you that you could talk to anybody, anywhere in the world with something the size of a cigarette packet, you would have thought they were crazy.

Figure 1: Harry Kroto [available from http://en.wikipedia.org/wiki/Harry_Kroto]

Figure 2: Buckminsterfullerene [available from http://en.wikipedia.org/wiki/Buckminsterfullerene]

Figure 3: Foucault's pendulum at the Pantheon, Paris [available from http://en.wikipedia.org/wiki/File:Pendule_de_Foucault.jpg]

Today there is a disconnection from technology. Up until the big advance in technology in the last part of the twentieth century, most people knew how something worked, because when it went wrong you had to fix it; you didn't just throw it away and replace it. By fixing things, you learn how they work.

What do you view as being the most exciting application that has come from the discovery of buckminsterfullerene?
It's being tested at the moment as a rather good dopant in solar cells (see 'The Sun's the limit', pp. 6-10). There will be a market for organic-based solar cells, rather than silicon-based ones. It's possible that we will make some very inexpensive organic polymers which will not need a battery, and using C60 as a dopant in these solar cells improves the efficiency of the electricity production by several orders of magnitude.

Up to now, the contribution buckminsterfullerene has made to science has been quite a fundamental learning point because we learned that it self-assembles. People didn’t think that it would self-assemble, which shows that we don’t know much.

Do you have any advice for young people?
Never do anything where a second-rate effort will satisfy you. If it does, go and find something else. Do something to the best of your ability. If you are prepared to work 25 hours a day, eight days a week on a project, then you've chosen the best thing for you to do.

That is the determination that I have, though not everyone does. I never do anything where a second-rate effort will satisfy me. If you find something you feel that way about, then you will almost certainly do it better than others who may have the potential to do it better than you, but don’t have that determination in that particular area.

You’ve got to satisfy yourself – you shouldn’t be doing it to get a good mark, but to feel that you are doing the best you could possibly do. That’s a good recipe for success in the future, I think.

References

1. Buckminsterfullerene. Available from: http://en.wikipedia.org/wiki/Buckminsterfullerene. [Last accessed on 2012 Mar 3].

2. Harry Kroto's Curriculum Vitae. Available from: http://www.kroto.info/General_info/CV_A.html. [Last accessed on 2012 Mar 3].

About the Author

Cleodie Swire is currently studying Biology, Chemistry, Physics and Further Maths at A Level. She hopes to study Medicine at university. She enjoys sport and travelling.


The Sun's the limit

Review Article

Dennis Cui
Lynbrook High School, California, USA. E-mail: [email protected]

DOI: 10.4103/0974-6102.97658

ABSTRACT

Many scientists are employed to research novel ways to generate energy for human use, since fossil fuels - currently the main source of energy - are a finite resource. One of the most promising fields is solar energy, as the Sun is a very reliable source. It is also a very 'clean' source: no greenhouse gases are released during the generation of energy, and less destruction of unused land is required compared with many other renewable resources. However, photovoltaic cells are expensive and do not yet have very high energy conversion ratios. Ongoing research includes finding materials to make the cells cheaper, and the use of organic semiconductors, which present various advantages.

The Sun has been, and always will be, the Earth's largest energy reservoir - it powers every living system, from plant photosynthesis to every node of the food web. Humans, like all other organisms, depend on sunlight. Early humanity treasured the Sun, using its energy for religious ritual, fire, and war. By the rise of modern civilizations, however, solar power had been virtually pushed off the energy spectrum. As modern civilizations improved standards of living, the demand for energy increased, and this rise in demand has largely been met with fossil fuels. Technologies were developed to locate and extract these resources from the earth, and mass infrastructure was built to process and distribute this energy. Figure 1 shows the estimated energy usage by source in the United States in 2009.[1] 83% of the yearly energy consumption in the U.S. is derived from petroleum, coal, and natural gas, while only 8% is supplied by renewable energy. It should be noted that solar energy accounts for only 0.08% - a negligible share - of the total United States energy pie.

Figure 1: A pie chart of the United States' energy spectrum. [Available from http://parsonspr.wordpress.com/2009/10/20/october-is-energy-awareness-month/] [Source: U.S. Energy Information Administration (Oct 2008)]

Solar History and Theory

Technology capable of harnessing the power of the Sun was first developed in 1954 at Bell Laboratories. However, the invention aroused little interest at the time because petroleum cost less than $2 a barrel and solar energy cost nearly $600 per watt.


In the late 1950s, the National Aeronautics and Space Administration (NASA) saved the photovoltaic (PV) cell from the technological dumpster by using it as a lightweight and reliable energy source to power satellites.[1] As petroleum prices rapidly increased, researchers began to consider the prospect of PV cells designed for use on Earth. In the last 20 years, investment in and progress on PV cells have exploded. Many institutions of higher education, such as Harvard, MIT, and Stanford, have begun research and development (R&D) programs to produce next-generation PV cells. Private labs have produced PV cells with energy conversion ratios over 20%, fueling a consumption growth of over 50% in the last 10 years.[1]

A solar panel is a packaged, interconnected assembly of PV cells. The typical PV cell operates in three steps to produce electricity: charge collection, charge separation, and charge extraction. A basic PV cell consists of a transparent active layer, a double-layered conductor (dubbed the N and P layers), and an external circuit. The active layer absorbs light; electrons are excited in the electron-excess N-layer, and these excitons - bound electron-hole pairs - move towards the hole-rich P-layer. A thin gap between the two layers acts as a check valve: electrons can travel across in one direction but not the other. Because the electrons cannot return to the N-layer, a charge imbalance is created within the conductor. These excitons diffuse through the PV cell towards the external circuit, which extracts this charge by creating a bridge that allows electrons to flow between the N and P layers.[2] This flow of electrons constitutes an electric current.
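To give this description a quantitative counterpart, the sketch below (my own illustration, not taken from the article) uses the standard single-diode approximation of a PV cell, I(V) = I_L - I_0(exp(qV/nkT) - 1); every parameter value is an assumption chosen only to give plausible numbers.

```python
# Sketch (not from the article): the standard single-diode model of a PV cell.
# I(V) = I_L - I_0 * (exp(q*V / (n*k*T)) - 1); all parameter values are assumed.
import math

q = 1.602e-19    # elementary charge (C)
k = 1.381e-23    # Boltzmann constant (J/K)
T = 298.0        # cell temperature (K)
n = 1.5          # diode ideality factor (assumed)
I_L = 3.0        # light-generated current (A), assumed for a small cell in full sun
I_0 = 1e-9       # diode saturation current (A), assumed

def current(v):
    """Terminal current (A) delivered by the cell at voltage v (V)."""
    return I_L - I_0 * (math.exp(q * v / (n * k * T)) - 1.0)

# Open-circuit voltage: the voltage at which the net current falls to zero.
v_oc = (n * k * T / q) * math.log(I_L / I_0 + 1.0)

# Scan the I-V curve to locate the maximum power point of this idealised cell.
best_v, best_p = max(((v, v * current(v))
                      for v in (i * v_oc / 200 for i in range(201))),
                     key=lambda pair: pair[1])
print(f"V_oc ≈ {v_oc:.2f} V, max power ≈ {best_p:.2f} W at ≈ {best_v:.2f} V")
```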

Sunlight is Abundant

Solar energy is a viable source of energy because of its abundance. The total energy output of the Sun exceeds the output of any other energy source by several orders of magnitude. The solar energy incident on the Earth can be estimated by dimensional analysis:

Energy = Flux × Area × Time

Experimentally, the solar flux constant is 1.366 kW/m²; the cross-sectional area of the Earth is πr² = 1.27 × 10¹⁴ m²; and a year has 8760 hours. Inserting these values into the equation shows that the Earth receives 1.5 × 10¹⁸ kWh (kilowatt hours) of energy per year from sunshine, which is roughly 10,000 times the world's annual energy consumption (an average draw of about 15 terawatts). A 10% efficient solar conversion system covering 0.1% of the land on Earth would be sufficient to power the world.[3] With solar energy, the world does not have to worry about energy shortages and our standards of living have room to improve. In addition, the Sun is the most reliable energy source for life on Earth because it will continue to produce energy until it dies, which coincides with when the Earth will cease to support life.
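These figures are easy to check numerically; the short sketch below (mine, not the article's) reproduces the back-of-envelope arithmetic, treating the quoted 15 terawatts as the world's average rate of energy consumption.

```python
# Sketch: back-of-envelope check of the solar energy figures quoted above.
import math

flux = 1.366e3              # solar constant, W/m^2
earth_radius = 6.371e6      # mean Earth radius, m
area = math.pi * earth_radius ** 2      # cross-sectional disc facing the Sun, m^2
hours_per_year = 8760

incident_kwh = flux * area * hours_per_year / 1e3       # W*h -> Wh, then /1e3 -> kWh
consumption_kwh = 15e12 * hours_per_year / 1e3          # 15 TW average draw over a year

print(f"cross-sectional area ≈ {area:.2e} m^2")                       # ≈ 1.27e14 m^2
print(f"incident solar energy ≈ {incident_kwh:.1e} kWh per year")     # ≈ 1.5e18 kWh
print(f"ratio to consumption ≈ {incident_kwh / consumption_kwh:,.0f}")  # ≈ 10,000
```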

Sunlight is delivered to nearly all parts of the world year-round, which makes solar energy the ideal source for remote locations. It is economically unfeasible to integrate large electric utilities into these areas, but building a household solar energy generating unit allows energy to reach places where utilities do not go: off the grid. In addition, sunshine follows a diurnal cycle, which matches the human energy usage pattern - it shines when we use energy during the day and sets when we are asleep at night.

Solar Power is Clean

Solar energy is a viable source of energy because of its cleanliness. PV cells generate electricity through the characteristics of the material they are constructed from - that is, no physical or chemical change is involved in the production process, leading to zero waste and emissions in energy production. Conventional energy sources such as coal, petroleum, and natural gas have high emission levels of carbon dioxide and other harmful greenhouse gases such as nitrous oxide (N2O), methane (CH4), water vapor (H2O), and ozone (O3), which lead to global warming and smog pollution. Nuclear energy is also undesirable because it carries a high risk of radiation (alpha, beta, and gamma) and creates hundreds of thousands of tons of radioactive waste, including uranium (U-238), plutonium (Pu-239, Pu-240), fission products (Sr-90, Cs-137, Tc-99), and minor actinides (Np-237, Am-241, Cm-243/244).[1] Moreover, nuclear radiation and waste will remain dangerous for thousands of years. The recent Japanese nuclear power plant leaks are just one example.

Solar energy is also cleaner than other renewable energy sources. Wind turbines must be built in order to harness wind energy, dams must be built in order to harness water power, and millions of acres of land must be cleared to support growing crops for bio-fuel. These construction projects, destructive in their own right, often deface the surrounding environment and harm the surrounding ecosystem. In contrast, solar panels can be built on a much smaller scale and can be easily installed on the walls and roofs of houses or other pre-existing locations.

Solar Energy is Still Expensive

Commercially available PV cells are composed of silicon and have an average energy conversion efficiency of 12%. So far, it has been expensive to manufacture these cells due to high energy requirements and labor cost. Therefore, the cost of manufacturing and generating energy from PV cells has always been higher than the cost of using other forms of energy. Figure 2 compares the cost per kilowatt-hour of solar energy to other major energy competitors.

Figure 2: A comparison of energy prices from 2010. [Available from http://nuclearfissionary.com/2010/04/02/comparing-energy-costs-of-nuclear-coal-gas-wind-and-solar/]

The graph shows that, currently, the cost of solar energy is double that of the nearest competitor. However, solar energy is unique in that it has essentially no decommissioning or ongoing production costs, because PV cells require little to no maintenance. Although the current cost of producing solar power may be daunting, technological advances in the field promise a bright future for solar energy.
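To put the 12% efficiency figure quoted above into everyday terms, here is a small sketch (my own assumptions, not data from the article) of what one square metre of such a panel delivers in a year under a typical mid-latitude insolation:

```python
# Sketch with assumed numbers: yearly output of 1 m^2 of 12%-efficient silicon PV.
panel_area = 1.0        # m^2
efficiency = 0.12       # average conversion efficiency quoted in the article
insolation = 4.5        # kWh per m^2 per day reaching the panel (assumed average)

yearly_output_kwh = panel_area * efficiency * insolation * 365
print(f"≈ {yearly_output_kwh:.0f} kWh per year")    # ≈ 200 kWh
```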

Research and Development

To make solar power competitive in the energy market, researchers and scientists have been developing new PV cells with high energy conversion efficiency and low production cost. Recently, R&D programs have been working on creating PV cells in the form of amorphous thin films using cheaper and more flexible organic compounds. Unlike silicon units, organic PV cells are based on π-conjugated organic electronic materials. In conjugated organic molecules (systems with alternating single and double bonds) every carbon center exhibits sp² hybridization. In nature, these strings of sp²-hybridized orbitals are rigid, but when these delocalized orbitals are oxidized, they degenerate into a band of electrons.[4] When this band is partially emptied, it permits electron mobility, thus turning the material into an organic semiconductor.

Organic semiconductors bring a host of advantages for the development of cost-efficient and flexible thin-film PV cells. Organic semiconductors are generally made of polymers and can be deposited from solution, making reel-to-reel coating possible for inexpensive, large-scale production; they are also compatible with lightweight substrates such as glass and plastic, making them a prime candidate for mass production and cheap installation; and they have high optical absorption coefficients, ideal for harvesting a large fraction of the solar spectrum. Most importantly, organic PV cells can be adapted to suit a host of requirements through chemical processing. For example, one experiment at the Advanced Energy Research Center of SANYO Electric Co., Ltd. reported that by replacing the classical electron donor copper phthalocyanine (CuPc) with tetraphenyldibenzoperiflanthene (DBP), a hydrocarbon, the PV cell's absorption coefficient almost doubled for light between wavelengths of 500 nm and 600 nm.[5] A high absorption coefficient is crucial to PV cell efficiency, and DBP performs best at the wavelengths where the Sun's spectrum is strongest. With breakthroughs in composing and altering materials for organic semiconductors at the molecular level, there is now a plethora of ways to improve organic PV cells.

However, organic PV cells also have considerable drawbacks: their efficiency is roughly a third, and their lifetime only a fraction, of that of inorganic PV cells. To enhance lifetime, researchers are developing self-repairing organic PV cells modeled on plant chemistry.[6] To improve conversion efficiency, various methods and techniques are being developed to maximize exciton diffusion, forward electron transfer, and charge transport. Researchers have improved the packing of molecules in organic semiconductors to produce more excitons and facilitate their separation into mobile charges. Researchers have also found that charge carrier mobility is enhanced by blending polymers with electron-accepting materials such as fullerene (C60) derivatives, cadmium selenide (CdSe), and titanium dioxide (TiO2).[7] Earlier this year, an independent research group at MIT discovered that graphene (a nanoscale carbon allotrope) exhibits flexibility, conductivity, and resistivity almost parallel to indium tin oxide (ITO), the standard material for electrodes in organic PV cells.[8] Whereas ITO is relatively rare and expensive, carbon is one of the most abundant elements on Earth. Breakthroughs in organic PV cell technologies demonstrate promise for their future success as a world energy producer.

The Future is in Solar

In the past 30 years, various technologies have been developed and the price per kilowatt-hour of using solar energy has dropped from $5 to less than $0.25.[9] Solar energy is no longer in the distant future. It is becoming a reality - more and more solar panels are being installed on houses, in schools, and in open areas. In some parts of the world, where sunlight is omnipresent, the price of using PV cells has already undercut conventional energy prices. Figure 3 shows the history of the cost of PV cells and the projected future of PV cell costs.

The cost of solar energy continues to decrease as the cost of conventional energy continues to increase.

Experts expect grid parity (the point at which solar energy becomes cost-competitive with conventional energy) to occur by 2015. Research and investment in organic PV cells are likely to make solar energy competitive with, and eventually cheaper than, other forms of energy. Sunlight is sustainable, stainless, and singular. Sunlight is ubiquitous, boundless, and accessible. Sunlight is the tonic of nature. In the future, everything we consume or use - from the lights in our homes and the vehicles we drive to the preparation of the food we eat - will be powered directly by the Sun. Our energy usage will be forever sustainable and independent. Green energy will dominate the energy spectrum, and we will put a brake on climate change. That future is in solar energy.

Figure 3: An overview of solar unit prices from 1978 to the present and extrapolated future prices. [Available from http://www.renewableenergyworld.com/rea/news/article/2010/08/test10?cmpid=rss]

References

1. Energy Explained, Your Guide To Understanding Energy. U.S. Energy Information Administration, U.S. Department of Energy. Available from: http://www.eia.doe.gov/energyexplained/index.cfm. [Last accessed on 2011 Mar 04].
2. Bhushan B. Springer Handbook of Nanotechnology. 1st ed. Vol. 1. Springer-Verlag; 2004. Available from: http://www.springerlink.com/content/978-3-642-02525-9. [Last accessed on 2011 Jan 12].
3. Xue J. Perspectives on Organic Photovoltaics. Polymer Reviews 2010;50(4):411-19. [Last accessed on 2011 Mar 06].
4. Brunetti FG, Kumar R, Wudl F. Organic electronics from perylene to organic photovoltaics: Painting a brief history with a broad brush. J Mater Chem 2010;20:2934-48.
5. Fujishima D, Kanno H, Kinoshita T, Maruyama E, Tanaka M, Shirakawa M, et al. Organic thin-film solar cell employing a novel electron-donor material. Solar Energy Materials and Solar Cells 2009;93(6-7):1029-32. 17th International Photovoltaic Science and Engineering Conference, Fukuoka International Congress Center, Fukuoka, Japan.
6. Strano MS. Photoelectrochemical Complexes for Solar Energy Conversion That Chemically and Autonomously Regenerate. Nature 2010:929-36. [Last accessed on 2011 May 23].
7. Sounni AB. Low Cost Manufacturing of Light Trapping Features on Multi-Crystalline Silicon Solar Cells: Jet Etching Method and Cost Analysis. Thesis. Massachusetts: Massachusetts Institute of Technology; 2010.
8. Chandler DL. Graphene Electrodes for Organic Solar Cells. Nanowerk LLC; 2011 Jan 6. Available from: http://www.nanowerk.com/news/newsid=19598.php. [Last accessed on 2011 Jan 11].
9. Margolis RM. Solar Energy: Market Trends and Dynamics. NARUC 2009 Winter Committee Meeting, Washington D.C.; National Renewable Energy Laboratory; 2009 Feb 16.


About the Author

Dennis is currently studying physics in school. He has taken courses in chemistry, biology, and computer science and is currently studying multivariable calculus at West Valley Community College. Dennis is fascinated by nanotechnology, which led to his interest in the application of nanotechnology to solar energy, and computer science, which led him to begin developing applications for the Android platform. He hopes to attend Harvard College in the fall of 2012. In the future, Dennis plans on studying a combination of business and engineering. He plans to be a business entrepreneur.

For Students

Are you impressed by the articles in this issue? If so, then why not submit your own work and see it published in a future edition of the journal...

Young Scientists Journal is a unique online science journal, written by young scientists for young scientists (aged 12-20). More than that, the journal is run entirely by teenagers. It is the only peer-reviewed science journal for this age group - the perfect journal for aspiring scientists like you to publish research.
• Have you recently done an interesting school project?
• Would you like to do some unique research?
• Do you have documents written for competitions lying around on your computer?

If so, upload them now to the Young Scientists Journal! Your article will be processed by a team of students and then an International Advisory Board before being made into an official article with its own unique code. As well as being rewarding, this will also look very good on a CV! We are also keen to receive shorter review articles, and creative material such as videos or cartoons.

For Teachers

• Have you seen examples of STEM work in schools which deserves to be published?

• Are there projects or coursework out there which will otherwise lie forgotten on a shelf or USB memory stick?

• Would you like to encourage a student (or group) to consider publishing it in a science journal for others to read and for posterity? (…being a published author looks great on their CV!)

The Young Scientists Journal allows students to enter into the world of scientific publishing and journalism by providing them with the opportunity to research and write their own articles. The articles will then be processed by student Editors and an International Advisory Board before being sent to the publishers where they will be made into official articles, each with a unique code.

Many of our authors have conducted scientific research for coursework, competitions, holiday placements or projects. We are also keen to receive shorter review articles, and creative material such as videos or cartoons.

If you know a student who would be interested in getting more involved and helping to run the journal, we are actively recruiting students to our Young Scientists team in many roles, including editing articles, managing the website, graphic design and publicity.

You may be interested in becoming an ambassador for Young Scientists by joining our International Advisory Board. If so, please send an email to Christina Astin, [email protected].


Exploring the quantum world

Review Article

Lauren Peters
The Sixth Form College, Solihull. E-mail: [email protected]

DOI: 10.4103/0974-6102.97661

ABSTRACT

The development of quantum physics over a century ago marked a departure from classical physics. The author introduces the strange world of quantum phenomena by describing the double slit experiment and our inability to grasp the electron's behaviour as wave and/or particle. She goes on to discuss the measurement problem, wherein the very act of measuring a system affects its outcome. Quantum physics signalled a shift away from determinism, and the author outlines the 'many worlds' theorem and suggests some of the implications of this interpretation. She also offers an insight into uses of quantum theory in technology and other non-scientific disciplines.

Introduction

In the late 19th century, physicists were becoming dangerously content with their understanding of what appeared to be a wholly deterministic¹ universe. It was not long, however, before classical physics failed to explain the small-scale phenomena that were being observed by physicists, and a new and radical theory was required. Quantum theory is the study of discrete packets of energy called quanta, and this still-evolving theory attempts to explain the strange behaviour of small-scale systems. Matter is both a wave and a particle; a cat is both dead and alive;² particles that are light years apart can be inherently linked and others can exist in two places at once.

Superposition is thought by many to be the central tenet of quantum theory and yet it is not studied until degree-level physics. Through this article, I would like to make some of the exciting aspects of quantum theory more accessible to readers. I will investigate the double slit experiment, in which we see these quantum properties in context, and discuss their implications. By doing so, I will merely touch the very edges of a dynamic, ever-changing, and incredibly controversial theory, yet I hope to provide a worthwhile insight into some of the distinctive characteristics which distinguish quantum theory from classical physics.

¹ Determinism is when the exact properties of a system at a given time are sufficient to define what the system will do next.
² It should be noted that the cat idea is a thought experiment that has extrapolated what is seen at a quantum level to a macroscopic level.

Background

The term 'classical physics' refers to universal laws, such as Newtonian mechanics, that were known and understood prior to the birth of quantum theory. Classical physics was not replaced by quantum theory, because such laws remain valid right down to the atomic scale; they fail only at the subatomic level. Classical physics is what we experience on a day-to-day basis and it is therefore intuitive; it describes the forces that make objects move and explains why liquids turn to gases at certain temperatures. It is logical and predictable. In comparison, quantum ideas appear irrational and random. It is essential that one carries no intuitive assumptions from the classical world we live in through to the quantum world as it is explored. As we shall see, wave-particle duality states that matter does not exist as either a wave or a particle, but both. Similarly, superposition insists that something does not exist here or there, but both. It is easy to assume such ideas are false simply because our experience implies that they must be. If two beings from a 2D world looked at a cylinder from perpendicular angles, one would insist it is a circle while the other would stubbornly disagree, seeing it as a rectangle [Figure 1]. Physicists experience a similar paradox. They can perform one experiment and see light acting unarguably as a particle, while another experiment will demonstrate the unambiguous wave nature of light. Just like the cylinder, it is both and neither.

Wave-Particle Duality

While studying physics at A level, one learns about a famous phenomenon called the photoelectric effect³. Although the experiment which shows this effect will not be covered in detail within this article, it is important to know that it demonstrates light acting as a particle. It had been found that the precise observations made when light causes electrons to escape from a metal surface could not be explained with the wave theory of light. The effect remained a mystery until 1905, when Albert Einstein postulated that light travelled in discrete 'packets' of energy called quanta. These quanta were later dubbed 'photons'.

These ideas left physicists in a quandary because there were many other, equally valid, experiments that demonstrated light acting as a wave. In 1801, Thomas Young had performed the double slit experiment. Monochromatic⁴ light aimed at two slits forms an interference pattern⁵ of alternating bright and dark areas on a screen positioned behind the two slits [Figure 2]. Such an interference pattern is distinctly characteristic of wave motion and can only be explained by accepting that the light from each slit travels in waves. (See footnote 5 for more information on wave motion.)

Figure 1: The circle-rectangle paradox [diagram drawn by author]

³ For more information, see 'Einstein: His Life and Universe' by Walter Isaacson.
⁴ Monochromatic means light of one wavelength.
⁵ How an interference pattern is formed: on passing through each slit, the light waves diffract (spread out) and overlap with each other. A wave consists of peaks and troughs; where two waves overlap, their individual displacements are added together. Therefore, if a peak meets a peak it forms a super-peak and light is observed. Similarly, if a trough meets another trough it forms a super-trough and light is observed. If, however, a peak meets a trough, they cancel each other out and no wave is formed, hence no light is observed. The interference pattern consists of alternate light and dark strips that signify where the two light waves have reinforced each other or cancelled each other out.

Soon after Einstein's revelation, scientists began to ask: if light can act as both a wave and a particle, can the same be said for matter? Louis de Broglie [Figure 3] applied mathematics to the wave-particle duality of matter, showing that the wavelength of a particle is equal to Planck's constant divided by the product of the particle's mass and velocity: λ = h/mv. The theory was later consolidated when an interference pattern was formed with electrons. This can be demonstrated by using the same apparatus as for the double slit experiment, but replacing the source of monochromatic light with an electron gun. The electrons travel through the slits one by one and hit a large photographic plate on the other side, where they make a small mark on impact. Initially, the marks seem to appear at random across the plate; however, after some time, a pattern begins to form [Figure 4]. When thought of as particles, there is no explanation for why the electrons form a consistent pattern. Yet when thought of as waves, it is a convincing example of the familiar interference pattern previously observed with light. The discrete arrival of the marks demonstrates the particle-like nature of electrons, while the overall pattern demonstrates their wave-like nature. Just like the monochromatic light, these finite bits of matter can behave as both waves and particles.
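As a rough numerical illustration of the de Broglie relation above (a sketch of my own, not part of the article), an electron accelerated through an assumed 100 V has a wavelength of about 0.1 nm - far smaller than that of visible light, which is why electron interference needs such finely spaced slits:

```python
# Sketch (not from the article): de Broglie wavelength, lambda = h / (m * v),
# for an electron accelerated through an assumed potential of 100 V.
import math

h = 6.626e-34     # Planck's constant, J s
m_e = 9.109e-31   # electron mass, kg
q_e = 1.602e-19   # elementary charge, C

V = 100.0                               # accelerating voltage (assumed)
v = math.sqrt(2 * q_e * V / m_e)        # non-relativistic speed from q_e*V = 1/2 m v^2
wavelength = h / (m_e * v)              # de Broglie relation
print(f"speed ≈ {v:.2e} m/s, wavelength ≈ {wavelength:.2e} m")   # ≈ 1.2e-10 m
```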

Superposition

The interference pattern observed suggests that the electrons travel through both slits as a continuous wave. Yet it is more difficult to answer the question: through which slit does one indivisible electron travel? Intuitively, one would predict that an electron travelling through the upper slit is most likely to hit the photographic plate directly opposite this upper slit, while an electron travelling through the lower slit is most likely to hit the plate opposite this lower slit. Yet strangely, most marks are made midway between the two slits, indicating that this is where one electron is most likely to arrive. Physicists therefore concluded that one indivisible electron must travel through both slits at the same time, a phenomenon called superposition.

Superposition reveals an important feature of quantum physics. In classical physics, one can be sure that what is observed in an experiment will be the same no matter what method is used to observe it, provided the conditions of the experiment are kept constant. However, in this quantum experiment, we can observe very different results simply by looking in a different place. If we decide to add detectors by each of the two slits to find out exactly which slit each electron travels through before reaching the plate, we will never actually detect a single electron travelling through both slits at the same time, as superposition implies we should. Instead, the electron will always be detected either at the upper slit or the lower slit. We will also notice that an interference pattern is no longer observed on the photographic plate; instead, the marks accumulate directly opposite either slit (as intuition previously predicted). Hence, we have completely changed the outcome of the experiment simply through the act of measurement. Quantum theory brought about a much greater emphasis on the effects caused by measurement, which raised questions about what happens when a measurement is made. The results of this experiment appear to depend on the question you are asking: if you ask a wave-like question (what pattern will the electrons make if allowed to act collectively?) then you will get a wave-like answer and observe an interference pattern. If, on the other hand, you ask a particle-like question (which slit does the electron pass through?) you get a particle-like answer: the electron will travel through one slit only and arrive, in isolation, at the photographic plate, forming no such interference pattern.[1]
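The contrast can be made concrete with a small numerical sketch (my own illustration with arbitrary assumed geometry, not from the article): adding the two slit amplitudes before squaring models superposition and gives fringes, while squaring each amplitude first - as if a detector had caught the electron at one slit - gives a smooth, fringe-free distribution.

```python
# Sketch (illustrative): two-slit pattern on a distant screen, in arbitrary units.
# coherent  = |a1 + a2|^2   (both slits open, no detector)  -> interference fringes
# which_path = |a1|^2 + |a2|^2  (detectors at the slits)    -> no fringes
import cmath

wavelength = 1.0    # all lengths below are assumed, in the same arbitrary unit
d = 5.0             # slit separation
L = 1000.0          # distance from the slits to the screen

def amplitude(x, slit_y):
    """Complex wave amplitude at screen position x from a slit offset slit_y."""
    r = ((x - slit_y) ** 2 + L ** 2) ** 0.5
    return cmath.exp(2j * cmath.pi * r / wavelength) / r

for x in range(0, 201, 25):
    a1, a2 = amplitude(x, d / 2), amplitude(x, -d / 2)
    coherent = abs(a1 + a2) ** 2
    which_path = abs(a1) ** 2 + abs(a2) ** 2
    print(f"x={x:4d}  interference={coherent:.2e}  which-path={which_path:.2e}")
```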

Figure 2: Double slits experiment showing waves [Available from http://paulkiser.wordpress.com/2010/09/27/negative-time/]

Figure 4: Double slit experiment with electrons [Available from http://commons.wikimedia.org/wiki/File:Two-Slit_Experiment_Electrons.svg]

Figure 3: Louis de Broglie 1892-1987 [Available from http://en.wikipedia.org/wiki/File:Broglie_Big.jpg]


The Measurement Problem

There have been several attempts to explain what happens when a measurement is made, and I will touch on them very briefly. Erwin Schrödinger [Figure 5] explored quantum ideas and used mathematics to aid him in his understanding of superposition. He formed an equation using wave mechanics to describe quantum theory mathematically. On analyzing Schrödinger's wave equation, Max Born concluded that the equation could not be describing the position of the electron itself - spread into a wave - as it would then no longer accommodate particle-like properties (which we clearly observed as each electron arrived at the photographic plate). Instead, Born postulated that Schrödinger's equation described the probability of an electron (or particle) being in a given location while behaving as a wave. An electron could be here, there, or perhaps over there, and the probabilities of each are presented accordingly in Schrödinger's wave equation. When one makes a measurement of where the electron does in fact reside, it can no longer exist in any place except the one in which it is measured. Therefore, the probabilities are confined to that single location and all other probabilities must reduce to zero. By taking a measurement, we destroy any potential the electron had to travel another path. Before such a measurement is made, a quantum system exists as a wave function as described by Schrödinger's equation. The evolution of the wave function is deterministic, and everything we measure (velocity, position, energy, etc.) depends on this wave function. However, at the point of measurement, the wave function collapses and the outcome is probabilistic.
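A minimal numerical sketch of the probability rule Born proposed and of the collapse described above (my own illustration with assumed amplitudes, not taken from the article): squaring the amplitudes gives the probabilities, a 'measurement' realises one outcome at random with those weights, and every other probability then drops to zero.

```python
# Sketch (illustrative): Born-rule probabilities and wave-function collapse.
import random

# Assumed complex amplitudes for finding the electron behind each slit.
amplitudes = {"upper slit": 0.6 + 0.0j, "lower slit": 0.0 + 0.8j}

# Born rule: probability = |amplitude|^2 (these already sum to 1 here).
probabilities = {place: abs(a) ** 2 for place, a in amplitudes.items()}
print("Born-rule probabilities:", probabilities)   # ≈ {'upper slit': 0.36, 'lower slit': 0.64}

# Measurement: one outcome is realised at random with those weights ...
outcome = random.choices(list(probabilities), weights=list(probabilities.values()))[0]
# ... and the distribution 'collapses' onto it; all other probabilities become zero.
collapsed = {place: (1.0 if place == outcome else 0.0) for place in probabilities}
print("Measured:", outcome, "->", collapsed)
```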

Quantum theory radically demonstrated that some events are not determined by physical laws and that the outcome of an experiment cannot be predicted using all knowable information before the measurement is made. It is important to realize that this notion is different from the use of probabilities when throwing dice. The movement of the dice is determined, but so many variables and forces are involved that it is only practical to make probabilistic predictions. There are no such variables⁶ in quantum physics, and so it is believed that non-determined (random) events are inherent in nature. Schrödinger's wave equation has become well established within physics, and physicists have since formed various theories in an attempt to explain what causes the wave function to collapse and what happens when it does - this quest is known as the measurement problem.

The first theory to be popular among physicists was the Copenhagen Interpretation, derived by Niels Bohr. It states that our experience of reality is based on measurement and an entity is only what one measures it to be. In this way, the wave-particle paradox would be explained by stating that when you are measuring the electrons as particles, they are in fact particles. If, on the other hand, you do not detect them as particles but allow an interference pattern to form, the electrons are waves. A consequence of this theory is that it becomes meaningless to ascribe any properties to something without measuring it. Even after measuring it, you can only describe the entity with the measured properties at the time you measured it and to describe it at any time later you would have to measure it again.

A further theory that is becoming increasingly popular is the many-worlds interpretation. It states that everything that could happen does happen, but in different worlds. Whenever there is a choice, the universe splits so that each world is identical except for the path chosen. When we measure which slit the electron travelled through, we are measuring the reality of what happened in our world, while the electron also travels through the second slit in another world. Not only does this theory create a huge number of worlds, it also raises the question: at what point does the split occur? If the universe split immediately every time there is a choice, then the electrons would not be able to interfere on the other side of the slits in order to form an interference pattern on the photographic plate; the electrons would exist in two non-interacting worlds.

Figure 5: Erwin Schrödinger 1887-1961 [Available from: http://en.wikipedia.org/wiki/File:Erwin_Schrodinger2.jpg]

⁶ The idea of hidden variables determining the outcome of apparently random events is called the hidden variables theory and was explored at length by a mathematician named John Bell. However, such variables have not yet been proven to exist.

There is not one widely accepted or fool-proof theory, and the search for such a theory continues. On studying quantum physics at a higher level, one finds that the mathematical interpretations are far more complete than our physical understanding. Indeed, the mathematical equations of quantum theory can predict the nature of our universe with startling accuracy despite our not having a physical explanation of why. Such mathematics is inaccessible prior to undergraduate study, and so it is easier to simplify the double slit experiment into several straightforward ideas. Firstly, one must simply accept that individually the electrons act as particles, while collectively they can behave as waves. When sent through the two slits, nature will allow the electron to take every possible path and travel down both slits unless forced by the observer to choose. If allowed to take both slits (in superposition), the electrons behave as waves: they diffract⁷ on passing through the slits and interfere with each other to form the pattern on the plate. If, on the other hand, the observer forces the electrons to choose a path (by taking a measurement), superposition does not occur. Instead, the electrons act as particles, travelling through one slit at a time and arriving at the plate as expected.

Implications

These quantum phenomena have interesting implications for science, technology and the way in which physicists view our universe.

Despite lacking a complete explanation for these observations, physicists are able to harness quantum physics' bizarre properties to advance human abilities beyond the boundaries of classical physics. An insight into quantum properties has allowed physicists and engineers to understand, and therefore manipulate, materials on a much smaller scale than previously possible. The electronic revolution depended heavily on quantum mechanics; the design of the laser in a DVD player, for example, relies on the Schrödinger equation. Perhaps the most visible implications for the future will be further advances in technology and the use of quantum mechanics to produce extremely fast quantum computers.

Furthermore, quantum theory changed a physicist’s understanding of our world. It was previously thought that the very nature of physics relied on an element of determinism, whereby the properties of energy and matter will remain constant and not change randomly as one measures them. And yet with the birth of quantum theory, Newtonian mechanics could no longer be applied to all aspects of science or used to determine exactly how a system will change with time. Consequently, many scientists including Albert Einstein were afraid of its non-deterministic implications. It was feared that there might be widespread implications for the rest of established physics as such ideas seemed to undermine the intuition of a physicist. I spoke to Tim Freegarde of the University of Southampton who agreed: “Intuition is extremely valuable for a scientist as it’s both a subconscious method of checking based on experience and a way of roughly modelling what should happen before more rigorous simulations can be performed”.[2] However, in the 1800s, a physicist’s intuition, like ours, was based exclusively on their classical experiences and so it was assumed that anything which obeyed Newtonian mechanics was ‘intuitive’ and anything that didn’t was not. Tim argues that, as a quantum researcher, he observes quantum behaviour frequently, and has, therefore, acquired an intuitive understanding of quantum physics. If earlier physicists had such intuition, they would not have been so put off by quantum ideas.

Quantum mechanics further implies that an observer can no longer exist independently of the system. Instead, the observer will have an inevitable and profound effect on the system through the influence of measurement. Consequently, the nature of our universe in the absence of humans has become inherently unknowable. Physicists have pondered the disturbing implication that when one is not there to observe it, the universe will exist in a mixture of superposed states, only having a defined state when observed. This apparent paradox is explored in the well-known thought experiment 'Schrödinger's Cat'. A cat sits in a sealed box with a quantum mechanical system that has a probabilistic chance of releasing a poisonous gas. According to the Copenhagen interpretation, the gas will be in a superposition state of released and not released, and therefore, unless the cat is observed, it too must exist in a superposition state of being both dead and alive.

⁷ When light diffracts, it spreads out on passing through a gap or past an obstacle. Diffraction is a property of waves.

The "many worlds" interpretation brings a degree of determinism back into quantum physics, as measurement does not cause the particle to take a single path at random, but allows the particle to take every possible path - each in a different world. However, the theory does have other implications. We find that time does not run exclusively forwards as we experience it, but is continually branching off in many different directions. Furthermore, reality is taken completely out of our hands, as we cannot decide which world we will exist in.[3] Do we have any control over what happens, or does every possibility take place independently of us? Indeed, what is the 'we' that we experience - in other words, what is consciousness? Much to the frustration of some physicists, a consequence of quantum physics is that it raises a whole range of philosophical questions.

Finally, the quantum properties explored in this article have the potential to change our view on the very nature of time. We experience an exclusively ‘forward’ direction for time, which can be defined classically by the second law of thermodynamics: The entropy of a system must increase. Time gives us an indication of causality, whereby something in the present will cause an event in the future; however, it would appear that quantum systems do not follow these laws of causality. Take, for example, a double-slit experiment in which the measurement is made several nanoseconds after the electrons have passed through the slits.[4] Strangely, this measurement still changes the outcome of the experiment, breaking the interference pattern and forcing the electron (which has already passed through the slits) to travel through one slit only. This process, called post-selection, is an example of a future event changing something in the past, implying that on a quantum level, the past is no more certain than the future.

Conclusions

I will conclude this article by reiterating the features of quantum theory that have been revealed while investigating two of its phenomena: Wave-particle duality and superposition.

It would appear that it is very difficult to picture or conceptualize quantum phenomena. I discussed how light and matter can act as both waves and particles, and yet in our classical, everyday world such a notion is impossible. We could conclude, therefore, that as classical beasts we are restricted in our natural understanding of the quantum world. In one of his famous lectures, Richard Feynman demonstrated this idea by saying that, when studying quantum theory, “our imagination is stretched to the utmost, not, as in fiction, to imagine things which are not really there, but just to comprehend those things which are there”.[5] However, our intuition is based on classical physics because we are of classical size and are nurtured in a classical world. Physicists who spend their time researching quantum mechanics can gain a similar intuitive understanding of quantum phenomena.

Secondly, these phenomena demonstrate how quantum experiments differ from what we experience on a classical level. Classically, the same conditions will always produce the same results, but the double-slit experiment shows that, due to superposition, this is not the case in the quantum world. Similarly, whereas in classical physics it is possible to perform a measurement which reveals everything about a system without causing a disturbance, in quantum physics the very act of measurement changes the outcome of the system.

Furthermore, this investigation revealed a loss of determinism due to the probabilistic nature of quantum mechanics. Schrödinger’s equation describes a wave function which evolves deterministically, yet the outcome of a measurement is based on probabilities alone. Even if one knows the exact state of a quantum system, it is only possible to make probabilistic predictions about the outcome of a measurement.
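To illustrate this probabilistic character, here is a minimal sketch (not from the original article) that simulates repeated measurements of a two-state system prepared in exactly the same superposition every time; the Born rule is assumed, and the function name is purely illustrative:

import random

def measure(amp0, amp1, trials=10000):
    # Born rule: P(outcome 0) = |amp0|^2 after normalisation
    p0 = abs(amp0) ** 2 / (abs(amp0) ** 2 + abs(amp1) ** 2)
    outcomes = [0 if random.random() < p0 else 1 for _ in range(trials)]
    return outcomes.count(0) / trials, outcomes.count(1) / trials

# An equal superposition gives each outcome roughly half the time,
# even though every run starts from exactly the same state.
print(measure(1, 1))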

The very feature of quantum physics that gave it its name is the idea that the smooth and continuous are replaced with discrete ‘packets’ called quanta. Just as waves are replaced with particles, energy itself occurs in discrete quanta- an idea that revolutionized our understanding of the atom and interactions between light and matter.

It has become apparent that mathematical capability is vital to an understanding of the quantum world, which greatly limits a qualitative understanding of quantum phenomena. As well as the mathematical formalism, classical physics presents a physical description which can (usually) be understood by a layperson. Consequently, its insights are not restricted to physicists alone; other areas of science, as well as wider disciplines, can make use of them. Conversely, some of quantum theory is presented exclusively in mathematical form - called quantum formalism. As a consequence, physicists can mathematically calculate exactly what is observed with immense accuracy - making quantum physics the most successful theory in scientific history - yet they cannot provide an explanation for events.

About the Author

Lauren Peters is interested in the physics behind our natural world and started studying Physics at university in September 2011.

Therefore, I have told you a lot about superposition, but I have not provided you with a complete explanation for exactly what happens when we detect a particle that is in a superposition state. This almost philosophical conundrum continues to puzzle physicists at the forefront of science. As the scientific journalist Michael Brooks wrote: “If you ask a roomful of physicists what goes on when we measure a particle’s properties, some will tell you that new parallel universes necessarily spring into being. Others will say that, before a measurement is performed, talk of a particle having real properties is meaningless. Still others will say that hidden properties come into play, while another group will tell you that they deal with physics not philosophy and dismiss the question without giving you an answer”.[6] Exactly which, if any, of these interpretations is walking in step with reality is yet to be determined. But perhaps there is an element of truth in the notion that it is irrelevant. The rules of quantum systems are known, they can be successfully applied to a range of experimental and real-world scenarios, and the results show that quantum formalism works remarkably well. An attempt to find out what it all means will inevitably be clouded by our self-centered, classical mindset and we cannot hope to gain any further insight by imposing our everyday view of the universe on nature itself. After all, contrary to our classical understanding, it does not have to be either or; it could well be both.

References

1. Quantum Theory - John Polkinghorne - ISBN 978-0-19-280252-1.
2. Email conversation with Tim Freegarde, University of Southampton, 06/09/10.
3. Is the Universe Deterministic? - Vlatko Vedral - New Scientist, 18/11/06.
4. The Quantum Time Machine - Justin Mullins - New Scientist, 20/11/10.
5. The Character of Physical Law - Richard Feynman (1965).
6. Rise of the Quantum Machines - Michael Brooks - New Scientist, 26/06/10.

Review Article

The light in dark chocolate

Joyce Chen
Lynbrook High School, San Jose, CA. E-mail: [email protected]

DOI: 10.4103/0974-6102.97664

ABSTRACT
Obesity is one of the biggest challenges facing the developed world, so people are very aware of what they eat. Chocolate has always been labelled as one of the foods to avoid if you are intent on losing weight. However, chemicals found in dark chocolate - such as flavonoids and phenethylamine - are known to have positive effects on human health and wellbeing. In addition, only one third of the fat in dark chocolate contributes to LDL cholesterol levels.

In our calorie-conscious world today, chocolate has consistently garnered a bad reputation. From causing acne to obesity, the cocoa candy is known to be loaded with fats and sugars; however, research has highlighted some surprising health benefits associated with the consumption of chocolate.

Antioxidant Powers

Chocolate specifically contains a compound called flavanol, a type of flavonoid - a class of compounds generally found in vegetables, fruits, teas, and wines that exhibit anti-aging and antioxidant behavior.[1] Flavanol gives chocolate its sharp flavor and lends its antioxidant powers to prevent oxygen radicals and environmental toxins from damaging the body.[1] Increased amounts of oxygen radicals in the body have links to various diseases such as cancer and atherosclerosis,[2] the condition in which fatty substances such as cholesterol block the blood vessels and thicken the artery wall. These substances in chocolate prevent the buildup of low-density lipoprotein (LDL), known as the bad lipoprotein, which reduces the risk of artery plaques and blood clots and improves blood circulation. Consequently, these substances also lower blood pressure, a big contributor to heart disease and stroke, kidney problems, and cognitive dementia.

Mood Enhancers

Moreover, chocolate also contains many mood-boosting substances such as phenethylamine [Figure 1], which increases the production of endorphins and dopamine.[3] These two products are neurotransmitters associated with pleasure and love, the common feelings that people express when eating chocolate. Another substance found is serotonin,[3] a neurotransmitter linked to happiness and well-being and a common combater of depression. Also found in chocolate are anandamide and theobromine, which are also transmitters secreted by the brain that help in boosting pleasure and motivation.[3] It is no wonder that chocolate has long been used by humans to enhance mood and feeling [Figure 1].

Figure 1: An image to show the chemical structure of phenethylamine [Available from http://en.wikipedia.org/wiki/Phenethylamine]

About the Author

Joyce Chen attends Lynbrook High School in San Jose, CA, and enjoys delving into the realms of science. She loves to play golf for her varsity girls’ golf team and is enamored by the French language and culture.

Fat Content

Though many sceptics believe that chocolate contains a high percentage of fat, negating its benefits, it might not be as bad as people think. Chocolate comprises three main fats in roughly equal amounts: oleic acid, stearic acid, and palmitic acid.[4] Oleic acid is an unsaturated fat that does not elevate LDL cholesterol levels, and though stearic acid is a saturated fat - which normally contributes to LDL cholesterol - research has shown that stearic acid is usually converted to oleic acid, having no significant effect on LDL cholesterol.[5] Thus, palmitic acid is the only truly detrimental saturated fat in chocolate that raises LDL cholesterol, and only one-third of chocolate’s fat is harmful to health.[5]

Chocolate Inequalities

However, though there are many benefits to eating chocolate, the main culprit for its bad reputation is its added sugars, which curb its nutritional worth. Furthermore, during chocolate processing, much of the bitter flavonoid content of cocoa disappears, leaving chocolate with a greater percentage of sugar and calories.[4] However, different types of chocolate [Figure 2] have different amounts of flavonoids. For example, dark chocolate tends to have a greater supply of antioxidant-rich flavonoids and a lesser supply of sugar, while white chocolate has no cocoa solids and thus no flavonoids.[3] Rather, this misnomer comprises cocoa butter, sugar, milk solids, and salt - a concoction devoid of nutritional content. Thus, moderate consumption of dark chocolate provides the best balance of taste and nutrition.

References

1. Buhler DR, Cristobal M. “Antioxidant Activities of Flavonoids.” Linus Pauling Institute at Oregon State University. Nov. 2000. The Linus Pauling Institute. Available from: http://lpi.oregonstate.edu/f-w00/flavonoid.html. [Last accessed on 2011 Jun 20].

2. Rettner R. “Sweet Science: The Health Benefits of Chocolate | LiveScience.” Current News on Space, Animals, Technology, Health, Environment, Culture and History | LiveScience. 11 Feb. 2010. TechMediaNetwork.com. Available from: http://www.livescience.com/6111-sweet-science-health-benefits-chocolate.html. [Last accessed on 2011 Jun 20].

3. Robbins J. “Chocolate’s Startling Health Benefits.” Breaking News and Opinion on The Huffington Post. 22 Feb. 2011. TheHuffingtonPost.com, Inc. Available from: http://www.huffingtonpost.com/john-robbins/chocolates-startling-heal_b_825978.html. [Last accessed on 2011 Jun 20].

4. Cleveland Clinic. “Heart-Health Benefits of Chocolate Unveiled.” Cleveland Clinic. Feb. 2010. Available from: http://my.clevelandclinic.org/heart/prevention/nutrition/chocolate.aspx. [Last accessed on 2011 Jun 20].

5. Stibich M. “Chocolate - Health Benefits of Chocolate.” Longevity, Anti-Aging and You - Healthy Aging, Longevity, and Anti Aging. 26 Apr. 2009. About.com. Accessed from: http://longevity.about.com/od/lifelongnutrition/p/chocolate.htm. [Last accessed on 2011 Jun 20].

Figure 2: An image showing chocolate [Available from http://en.wikipedia.org/wiki/Chocolate]

Review Article

Golden simultaneous equations

Gianamar Giovannetti-Singh
Parkside Federation Academy, Cambridge, UK. E-mail: [email protected]

DOI: 10.4103/0974-6102.97668

ABSTRACT
In this paper, we investigate the simultaneous equations derived from the Golden Ratio, and use differentiation to find the coordinates where Φ in the x-axis meets Φ in the y-axis in a golden quadratic graph. We demonstrate a proof for the solutions to the five simultaneous equations below by using algebraic and graphic solutions.

Introduction

The Golden Ratio is a number represented by the Greek letter Φ and is approximately equal to 1.6180339887…; it is irrational, so its decimal expansion is infinitely long, similar to π. Nevertheless, Φ can be represented by a fraction (which contains √5, an irrational number). The fraction which represents Φ is (1 + √5)/2.

Φ is an extremely interesting number; it appears throughout nature. The number is a ratio, the Golden Ratio, and it represents the “perfect cut” of nature. Many things in nature, from the number of pollen grains on a flower, to the number of bonds in the compounds of uranium oxide, to the spiral shape of galaxies, can be modeled by Φ.

Φ is a ratio, and it is often shown in nature as 1:Φ, such as the ratio of the distance between the shoulder and the elbow to that between the elbow and the fingertips - yet another Golden Ratio. However, if the larger section is considered to be 1, the smaller section is ϕ = 1/Φ ≈ 0.618…, known as the Lesser Golden Ratio. In this paper, we aim to prove that the only possible solution to the simultaneous equations below is to use the Greater and Lesser Golden Ratios, therefore making the equations mathematically “beautiful.”

Most natural features which contain the Golden Ratio are considered to be esthetically “beautiful.”

Simultaneous Equations

First, we introduce a set of simultaneous equations, which are at the center of this paper.[1]

1. x − 1 = y
2. 1/x = y
3. n = x²
4. n − 1 = x
5. 2 + y = n

Obviously, it is possible to rearrange these equations to create a longer string of equations such as x − 1 = n − 2 = √n − 1 = y, or n = 2 + y = x² = x + 1. However, these rearrangements cannot give any numerical value for the unknowns x, y, and n. This is why the quadratic formula must be used. The final answer that we want to reach is

x = (1 + √5)/2;

therefore, we know that there must be a way of using ax² + bx + c = 0. We shall then find the values of a, b, and c which will give us the value of x by using the quadratic formula,

x = (−b ± √(b² − 4ac))/(2a).

The values of a, b, and c must eventually make

x = (1 + √5)/2,

which is Φ ≈ 1.6180339887… Therefore, we use the formula Φ = (1 + √5)/2 and substitution to find the numerical values which would go into the formula as a, b, and c.

Algebraic Proof

Since we know that Φ = (1 + √5)/2, comparing this with the quadratic formula gives 1 = −b ∴ b = −1.

Then √(b² − 4ac) = √5, and since b = −1, 5 = 1 − 4ac.

At this stage, it is much simpler to work out a because of the 2 in the Φ formula. It is evident that a = 1 because 2 = 2a ∴ a = 1. At this point, it becomes clear what the missing constant c is:

5 = 1 − 4c ∴ c = (1 − 5)/4 ∴ c = −1.

Another way to determine these constants is to use some of the simultaneous equations. Initially, we calculate:

x − 1 = 1/x ∴ x(x − 1) = 1 ∴ x² − x = 1 ∴ x² − x − 1 = 0.

Therefore, we have obtained values for a, b, and c to use in the quadratic formula.

Now that we have all three of the constants needed for the quadratic formula to work, we must prove that it does:

x = (−b ± √(b² − 4ac))/(2a) = (1 ± √5)/2, with a = 1, b = −1, c = −1.

We now have x = Φ ≈ 1.6180339887… ∴ y = x − 1 ≈ 0.6180339887… ∴ n = x + 1 ≈ 2.6180339887…; for the second root, x = −ϕ ≈ −0.618…, y = −Φ ≈ −1.618… and n ≈ 0.382.
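As a quick numerical check (not part of the original paper), a few lines of Python confirm that the positive root of x² − x − 1 = 0 is Φ and that it satisfies the simultaneous equations; the variable names are purely illustrative:

import math

a, b, c = 1, -1, -1
x = (-b + math.sqrt(b**2 - 4*a*c)) / (2*a)   # quadratic formula, positive root
y = x - 1
n = x**2

print(x)                               # 1.618033988749895, i.e. Phi
print(abs(1/x - y) < 1e-12)            # equation 2: 1/x = y
print(abs((n - 1) - x) < 1e-12)        # equation 4: n - 1 = x
print(abs((2 + y) - n) < 1e-12)        # equation 5: 2 + y = n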

Graphic Solution

Now that we have sorted out the equations algebraically, we can also use graphs and differentiation to find the lowest possible point of the curve y = x² − x − 1, which is derived from our a, b, and c from the quadratic formula. On a graph we can sketch this curve to find the x-intercepts and the minimum point (a minimum rather than a maximum, because the quadratic has a positive x² coefficient and continues increasing to infinity).

From Figure 1, it is possible to say that the x-intercepts of the curve are Φ and −ϕ, the Golden Ratios.

Differentiation: y = x² − x − 1. Using the rule d(x^n)/dx = nx^(n−1),

dy/dx = d(x² − x − 1)/dx = 2x − 1 = 0 at the turning point, ∴ x = 1/2,

and y = (1/2)² − 1/2 − 1 = −5/4.

Therefore, we can see that the minimum point of this curve has the coordinates (1/2, −5/4).
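As a small numerical check (not part of the original paper), evaluating the curve at the turning point found above confirms these coordinates; the function name below is just for illustration:

def f(x):
    # the golden quadratic y = x^2 - x - 1
    return x**2 - x - 1

x_min = 1 / 2            # from dy/dx = 2x - 1 = 0
print(x_min, f(x_min))   # prints 0.5 -1.25, i.e. the minimum point (1/2, -5/4)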

Figure 2 shows a magnification of the graph at the point (ϕ, 0), showing the exact point of the Golden Ratio.[2]

Figure 2: x-intercepts of the function y = x2 – x – 1; crossing at Φ and -ϕ

Figure 1: Quadratic graph of the function y = x2 – x – 1


About the Author

Gianamar Giovannetti-Singh is extremely interested in Mathematics and Physics. In March 2012, he was awarded the title of Junior UK Young Scientist of the Year in the National Science and Engineering Competition at the Big Bang Fair 2012 in Birmingham for his research on the orbital and physical dynamics of the Trojan asteroid 2010 TK7. Furthermore, he was awarded the CREST Prize for Enthusiasm and Real-World Context, and was selected as an international delegate for the United Kingdom at Broadcom MASTERS International at the Intel International Science and Engineering Fair, amongst the eighteen winners selected from across the world. He is currently taking part in research on possible new cancer therapies using quantum tunneling to increase the chances of total ionization of tumors. He is 15 years old, and is looking forward to continuing to study Mathematics with Physics at university. Gianamar Giovannetti-Singh has published two research papers in the Journal of Modern Physics, and in April 2012, he spoke at the IEEE International Conference on Electric, Information, and Control Engineering about one of his projects in the field of quantum mechanics. In July 2012, he will speak at the Biennial International Quantum Structures Association Meeting, whose proceedings will be published in the International Journal of Theoretical Physics, about a predictive method for the annihilation tracks of subatomic particles.

References

1. Olsen S. The Golden Section: Nature’s Greatest Secret. Wooden Books. Walker & Company; 2006.

2. International Conference on Data Engineering. 16th International Conference on Data Engineering: 29 February-3 March 2000. San Diego, California: Institute of Electrical & Electronics Engineers; 2000.


Review Article

Thermodynamics of the Earth’s atmosphere

Martin H Wong
University of Pennsylvania, USA. E-mail: [email protected]

DOI: 10.4103/0974-6102.97673

The Earth’s atmosphere is definitely one of the most intriguing (yet perplexing) topics to explore because it performs crucial functions that lay the foundation for prosperous life. Biologically, it shields the biosphere from harmful solar radiation. Physically, it is our ‘greenhouse’ that makes the planet habitable by trapping heat energy. Without this precious layer of gases, the Earth could not be much different from other planets. Generally, the domain of the atmosphere includes the inner breathable layer of gases as well as the outer region of space influenced by the solar wind. The atmosphere extends up to approximately 2000 km above the ground.

As mentioned, the source of energy warming the atmosphere is heat from the sun in the form of electromagnetic radiation with short wavelengths. In terms of spectra, the radiation is 10% ultra-violet, 40% visible light (which can pass through the atmosphere without being absorbed), 49% short infra-red, and 1% higher-energy and X-radiation. Around 70% of the energy carried by the electromagnetic waves is absorbed, while the atmosphere, clouds, and ground surface directly reflect the remaining 30%. However, the energy absorbed is eventually re-radiated back to space, largely via infra-red waves, after the solar radiation with short wavelengths has been received. This is explained by Kirchhoff’s law of thermal radiation and Wien’s law.

One might think that as we go up the atmosphere, temperature should gradually decrease and reach a minimum upon reaching outer space due to the re-radiation. But surprisingly, it actually varies up and down a few times as a result of the changing average kinetic energy of air molecules.

In the lowest layer called the troposphere [Figure 1], as height increases, it gets colder up until reaching the tropopause, the junction between the troposphere and stratosphere. Moving upwards, the temperature increases through the stratosphere. The rise stops at the stratopause, which separates the stratosphere and the mesosphere above. Then, the temperature starts to fall again in the mesosphere. This drop is greater than that in the troposphere, and the lowest temperature is reached at the mesopause. Finally, it becomes even hotter in the thermosphere above, achieving the highest temperature among all the levels at the thermopause.[1]

Figure 1: A diagram illustrating the atmospheric layers [available from http://en.wikipedia.org/wiki/File:Atmosphere_layers-en.svg]

This is only the general trend along the atmosphere – the detailed variations are significantly dependent on different latitudes and longitudes, seasons, solar activity, and miscellaneous disturbances such as global warming and volcanic eruptions.

The troposphere extends to around 7 km above the poles and 17 km above the equator. The temperature decreases from around 290 Kelvin at the surface to 220 Kelvin at the tropopause. It is named after ‘tropos’, a Greek word which means ‘turning and mixing’. As the atmosphere is heated from below by solar radiation, the hotter air near the ground expands and becomes less dense, displacing the cooler air above. The sinking cool air in turn is heated by the surface and rises again. Thus, a convection current is formed. The troposphere contains 99% of the total water vapour in the atmosphere, and the constant mixing of air and turbulence creates the weather and forms clouds.

Gravity is the main force contributing to pressure in the atmosphere. Pressure is highest at sea level (about 101.3 kPa, where 1 Pa = 1 N m^-2) and decreases almost exponentially with increasing altitude, as given by the Barometric formula:[2]

∆P/∆H = −g·ρ (where ρ is the density of the air)

Approximately, 90% of the weight of the atmosphere lies within the troposphere.
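The exponential fall-off can be illustrated with a rough numerical sketch (not from the article): it integrates ΔP/ΔH = −gρ with the ideal-gas density ρ = PM/(RT), under the simplifying assumption of a constant temperature; the numerical values chosen below are illustrative assumptions only:

import math

g, R = 9.81, 8.314          # m s^-2, J mol^-1 K^-1
M, T = 0.0289, 250.0        # kg/mol for air, and an assumed average temperature in K
P = 101_300.0               # Pa at sea level
dH = 100.0                  # step size in metres
for _ in range(110):        # integrate up to ~11 km in 100 m steps
    rho = P * M / (R * T)   # ideal-gas density at the current pressure
    P += -g * rho * dH      # Euler step of dP/dH = -g*rho
print(round(P / 1000, 1), "kPa near 11 km (numerical)")
print(round(101.3 * math.exp(-M * g * 11_000 / (R * T)), 1), "kPa (closed-form isothermal result)")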

The Lapse Rate is the rate of change of temperature with gain in height. In the troposphere, it has a negative value because temperature decreases up the layer. When a certain parcel of air rises due to disturbance, it expands in volume because the pressure is lower. The expansion is adiabatic, which means there is essentially no heat exchange during the process. This is because air is such a poor thermal conductor that the amount of heat exchange is negligible in a very short period of time.

The first law of thermodynamics states that:

∆U = Q + W (where U is the internal energy, Q is the thermal energy and W is the work done).

Q can be ignored as its value is negligible, so we get:

∆U = W

Work is done on the surroundings by the gas when it expands. By the following equation:

W = -P. ∆V (where P is pressure, and V is volume)

Expansion of air increases its volume; therefore, W becomes negative, and so the change in internal energy is also negative.

As the decrease in internal energy equals work done by gas in expansion, when considering one mole of gas:

Cv. ∆T = -P. ∆V, where Cv is the molar heat capacity at constant volume and T is the temperature (*)

Also,

PV = (1)RT for one mole of gas, where R is the universal gas constant = 8.31 J/(mol·K)

Differentiating the equation using the product rule:

P. ∆V + V. ∆P = R. ∆T

Rearranging:

R.∆T – P.∆V = V.∆P (**)

Using Cp = Cv + R and combining (*) and (**):

Cp·∆T = (Cv + R)·∆T = Cv·∆T + R·∆T = −P·∆V (from *) + V·∆P + P·∆V (from **) = V·∆P = M·∆P/ρ

(since density ρ = mass/volume, the volume of one mole is V = M/ρ, where M is the molecular weight of air).

Substituting into the Barometric formula ∆P/∆H = −g·ρ, this becomes:

Cp·∆T·ρ/(M·∆H) = −g·ρ


Rearranging:

∆T/∆H = - Mg/Cp

Considering the two most abundant species in air, nitrogen and oxygen, which together contribute 78% + 21% = 99% of the total, in 1 mol:

Nitrogen = 2 x 14.0067 g/mol = 28.0134 g/mol = 0.0280134 kg/mol

Oxygen = 2 x 15.9994 g/mol = 31.9988 g/mol = 0.0319988 kg/mol

Therefore, average molecular weight of air is:

0.78 × 0.0280134 + 0.21 × 0.0319988 = 0.0285702 kg/mol

Therefore, the estimated Lapse Rate in the troposphere is:

∆T/∆H = −(0.0285702 × 9.806)/(3.5 × 8.314) ≈ −9.6 K per km[3] (slightly smaller in magnitude than the accepted dry adiabatic value of about −9.8 K per km, largely because argon has been neglected)

(vibration contribution of nitrogen and oxygen to the heat capacities is negligible)
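The same arithmetic can be reproduced in a few lines of Python (not part of the original article; Cp = 3.5R for a diatomic gas is taken from the text, and argon is neglected as above):

g, R = 9.806, 8.314                         # m s^-2, J mol^-1 K^-1
M = 0.78 * 0.0280134 + 0.21 * 0.0319988     # kg/mol, N2 and O2 only
Cp = 3.5 * R                                # molar heat capacity at constant pressure
lapse = -M * g / Cp                         # K per metre
print(round(lapse * 1000, 2), "K per km")   # about -9.6 with these inputs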

To conclude the above, in the troposphere, the temperature decreases as altitude increases, while the reverse occurs for a sinking parcel of air.

However, in reality, as mentioned, the troposphere contains 99% of the atmosphere’s water vapour. Also, the maximum vapour concentration is found near the surface due to evaporation of water. As a result, air no longer behaves as an ideal gas if we take the presence of water vapour into account. The actual rate of temperature drop is called the environmental lapse rate, which averages -6.5 Kelvin per km.

As air rises and expands, temperature would eventually reach the condensation point of water where water vapour condenses to droplets. This change of state releases latent heat of vaporization to the surroundings, so the drop in temperature in reality is smaller. Despite the fact that the above calculations describe the adiabatic change for ‘dry’ air, we can assume that the air is ‘dry’ if it is not saturated with water. This is because air cools at the dry adiabatic lapse rate until the vapour condenses at dew point.

The Polar Regions are exceptional because temperature increases with altitude initially. This is a temperature inversion, a phenomenon also occurring in the stratosphere and thermosphere.

The tropopause is the boundary between troposphere and stratosphere. Temperature stops decreasing here and the air almost becomes completely dry. Beyond this layer, the stratosphere extends 11-50 km above the surface. Temperature starts from 220 K and rises to 275 K at the stratopause. Since air is warmer at the top of the layer than at the bottom, convection does not take place and all vertical mixing is impeded. In fact, the name ‘stratosphere’ originates from the stratification of air in this layer. As a result, the stratosphere acts as a ‘lid’ that contains the turbulences below in the troposphere and limits cloud height.

The rise in temperature along the stratosphere is due to the fact that oxygen molecules absorb highly energetic ultra-violet radiation emitted by the sun. The photolysis of oxygen molecules leads to the production of ozone. Subsequently, ozone is destroyed by ultra-violet radiation, and a continuous oxygen–ozone cycle is formed. Both the creation and destruction of ozone (together called the Chapman mechanism) convert the energy carried by the UV radiation into heat, which warms the stratosphere.

The action of a three-body reaction is crucial to understanding the Chapman mechanism.[4] For two species, A and B, to produce a single product AB, an initial reaction takes place where an excited product AB* is produced.

A + B → AB*

This excited product carries excessive energy and needs a third body (M) to remove the excess energy in order to produce the final product AB. The third body is excited in this reaction and eventually will convert the excess energy into heat.

AB* + M → AB + M*

M* → M + Heat

The third body can be any inert molecule. In the absence of a third body, the excited product, being short-lived, will eventually break back down into its original components.

In general:


A + B + M → AB + M + Heat

The Chapman mechanism begins with the photolysis of oxygen molecules present in the stratosphere. They are constantly bombarded by photons of ultra-violet radiation emitted from the sun.

Photons are quanta of light carrying energy, given by the equation:

E = hƒ

where h is Planck’s constant (6.626×10^-34 J s) and ƒ is the frequency of the electromagnetic radiation.

There can be no accumulation of energy from several photons. The energy does not depend on amplitude or intensity of the wave, but depends only on its frequency.

The bond enthalpy of an oxygen molecule is 498 kJ/mol, which means a total of 498 kJ of energy must be supplied in order to break one mole of oxygen double bonds. From the above equation, for a single oxygen molecule this corresponds to a frequency of 1.253×10^15 Hz.

v = ƒλ, where v is the speed of the wave and λ is its wavelength.

A photon of wavelength 239.5 nm or shorter is therefore needed to break the bond and split apart an oxygen molecule. This falls in the ultra-violet spectrum, in particular the UV-C region, where wavelengths range from 100 nm to 280 nm. Therefore, the extremely harmful UV-C radiation is effectively shielded from below by the absorption action of oxygen molecules in the stratosphere.
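A short calculation (not from the article) reproduces the threshold quoted above, converting the molar bond enthalpy to energy per molecule and then to a frequency and wavelength; the constants are standard values:

h, c, N_A = 6.626e-34, 2.998e8, 6.022e23   # J s, m/s, mol^-1
E_bond = 498e3 / N_A        # J per O2 molecule (from 498 kJ/mol)
f = E_bond / h              # ~1.25e15 Hz
wavelength = c / f          # ~2.4e-7 m, i.e. roughly 240 nm (UV-C)
print(f, wavelength * 1e9)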

The oxygen atom is now at its ground level triplet state. It is highly reactive due to the two unpaired electrons. It then combines with an intact dioxygen molecule to form ozone:

O + O2 → O3

This is a three-body reaction where an inert third-body (e.g. Nitrogen) must be present to take away the excess energy and dissipate it as heat to the surroundings. This is one of the ways by which the stratosphere is warmed.[5]

Undoubtedly, ozone concentration is finite in the atmosphere. The balance is maintained by several destruction processes which are radical-assisted reaction chains. They are important sources of heat production in the stratosphere.

Radicals are relatively reactive species in the atmosphere. Pressure in the atmosphere decreases exponentially with height and collisions between molecules are rare due to the low concentrations. However, radicals are more ‘reactive’ because they have an unpaired electron in their outer shell which gives them high free energy. A radical usually has an odd number of electrons in total (e.g. NO2), but both atomic and molecular oxygen are exceptions.

The first step of radical formation involves input of energy because radicals possess high free energy. In the atmosphere, this external source of energy comes from solar radiation:

Non-radical + energy (photons) → Radical + Radical …(1)

The production of radicals then triggers the reaction chain:[6]

Radical + Non-radical → Radical + Non-radical … (2)

(2) produces a radical which goes on to propagate the reaction itself, while the non-radical can be photolyzed again in (1).

The chain terminates when two radicals react:

Radical + Radical + M → Non-radical + M

This reaction is slow due to the fact that the collisions between radicals are infrequent as they are in low concentrations.

The first mechanism for ozone destruction is called the ozone – oxygen cycle in the Chapman mechanism.[7,8]

For the sake of comparison, if we assume that an ozone molecule has two normal double bonds, the average bond enthalpy is calculated as follows:

3 O2 → 2 O3, where enthalpy of formation is 2 x 142.7 = 285.4 kJ/mol

Three oxygen double bonds are broken, requiring input of 498 x 3 = 1494 kJ/mol

Therefore, a total of 1779.4 kJ must be supplied.


As there are four double bonds created in two ozone molecules, the average bond energy is 1779.4/4 = 444.85 kJ/mol theoretically.

However, in reality, each oxygen–oxygen linkage in ozone is intermediate between a single and a double bond (a bond order of 1.5). Taking half of the O=O bond enthalpy as the strength of a single bond, the estimated bond energy to remove one oxygen atom from an ozone molecule is simply:

(498 kJ/2)·(3/2) = 373.5 kJ/mol

In this ozone-oxygen cycle, the ozone molecule is again struck by high-energy photons (UV radiation) to undergo photolysis:[9,10]

O3 + energy (photons) → O2 + O* (excited oxygen atom, also known as O (1D))

To break one of the bonds in an ozone molecule and release an oxygen atom, (373.5/(6×10^23)) kJ is needed per molecule.

This corresponds to a frequency of 9.39×10^14 Hz and a photon of wavelength 319.3 nm or shorter. This range falls in the UV-B part of the spectrum, meaning that UV-B is also filtered out effectively. UV-C photons are also energetic enough to photolyze ozone.
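The same kind of check (again not from the article) applied to the estimated 373.5 kJ/mol for ozone gives the UV-B threshold quoted above:

h, c, N_A = 6.626e-34, 2.998e8, 6.022e23   # J s, m/s, mol^-1
E = 373.5e3 / N_A           # J needed to remove one O atom from one O3 molecule
f = E / h
print(f"{f:.3g} Hz, {c / f * 1e9:.0f} nm")   # roughly 9.4e14 Hz and 320 nm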

The excited oxygen atom O (1D) is quickly stabilized by a third body (e.g. nitrogen or another dioxygen molecule) to become O (3P). Heat is released and this is another way to warm the stratosphere.[11]

(The electronic structure of an oxygen atom in its triplet ground state, O(3P), is 1s² 2s² 2px² 2py¹ 2pz¹, with two p orbitals not fully filled. The singlet state oxygen, O(1D), with orbital configuration 1s² 2s² 2px² 2py², is even more energetic. Its arrangement disobeys Hund’s rule, which states that electrons should singly occupy available orbitals with the same spin before pairing up.)

Although ozone is split apart, the oxygen atom is still highly likely to react with another intact dioxygen to regenerate ozone. The termination of the chain requires the combination of ozone and the ground state oxygen:

O3 + O → 2 O2

The above ozone-oxygen mechanism is insufficient to remove enough ozone to keep the atmospheric concentration in balance. The equilibrium is further maintained by several catalytic loss cycles in which reactive free radicals are involved; they include hydrogen oxide, nitrogen oxide, and chlorine radicals.[12]

The hydrogen oxide radicals

Firstly, water is involved in the cycle. It is transported up from the troposphere as well as produced by the oxidation of methane within the stratosphere. The excited oxygen atom produced in the photolysis of ozone is now reduced by water:

H2O + O (1D) → 2OH

Next, the hydroxyl radical OH reacts with ozone:

OH + O3 → HO2 + O2

The HO2 in turn combines with ozone again:

HO2 + O3 → OH + 2O2

Therefore, the net reaction is:

2 O3 → 3 O2

This is a catalytic cycle because the HOx species are preserved and can be reused to remove more ozone.

The terminal reaction is:

OH + HO2 → H2O + O2
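As a small bookkeeping sketch (not part of the original article), adding the two propagation steps of this HOx cycle and cancelling the species that appear on both sides recovers the net reaction 2 O3 → 3 O2, with OH and HO2 acting purely as catalysts:

from collections import Counter

step1 = (Counter({"OH": 1, "O3": 1}), Counter({"HO2": 1, "O2": 1}))   # OH + O3 -> HO2 + O2
step2 = (Counter({"HO2": 1, "O3": 1}), Counter({"OH": 1, "O2": 2}))   # HO2 + O3 -> OH + 2 O2

lhs = step1[0] + step2[0]
rhs = step1[1] + step2[1]
common = lhs & rhs                # species appearing on both sides cancel out
print(lhs - common)               # Counter({'O3': 2})  -> 2 O3 consumed
print(rhs - common)               # Counter({'O2': 3})  -> 3 O2 produced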

Nitrogen oxide radicals

In the stratosphere, NO reacts rapidly with ozone to produce NO2:

NO + O3 → NO2 + O2

Subsequently, it undergoes photolysis:

NO2 + energy (UV photons) → NO + O

O + O2 + M → O3 + M

Again heat is produced to warm up the stratosphere. However, combining the above equations, there is no net removal of ozone. The catalytic depletion of ozone is achieved by reaction between NO2 and an oxygen atom:

NO2 + O → NO + O2

If this is combined with the initial reaction, net ozone is removed:


O3 + O → 2 O2

Chlorine radicals

This final cyclic mechanism is driven by industrial emissions called chlorofluorocarbons (CFCs). As CFCs are inert in the troposphere, they are transported upward to the stratosphere and are photolyzed by UV radiation:

CF2Cl2 + energy (photons) → CF2Cl + Cl

The chlorine atom produced reacts with ozone:

Cl + O3 → ClO + O2

ClO + O → Cl + O2

Therefore, the net reaction becomes:

O3 + O → 2 O2

Termination of the chain is:

Cl + CH4 → HCl + CH3

Finally, it also involves production of heat:

ClO + NO2 + M → ClNO3 + M

To summarize, the above explained how UV radiation is converted into heat energy to warm up the stratosphere. The temperature rise is contributed to by the creation of ozone from photolysis of oxygen molecules as well as its destruction through the ozone-oxygen mechanism and several catalytic loss cycles.

Temperature variation along the stratosphere is another major aspect to look at. As the stratosphere is an inversion layer, temperature increases with height. This is due to the combined effects of distribution and production of ozone.

One might think that the concentration of ozone is greatest at the top of the stratosphere because it reaches maximum temperature there. This is partly true because the rate of production of ozone is indeed highest at the top, where solar radiation is strongest, averaging 5×10^6 molecules per cm^3 per second. However, it is also true that the greater the radiation, the faster the rate of ozone destruction, which generates heat. Therefore, the heating rate due to absorption by ozone is greatest at the top and thus the temperature becomes highest.

On the other hand, the area of greatest ozone accumulation is found in the middle of the stratosphere at around 25 km above the ground. This is where product of UV-C intensity and oxygen concentration is at a maximum. Below the peak, the density of dioxygen molecules is much greater than that at top as air density increases towards the surface. However, the rate of energetic photon arrival is scarce because most of them have been absorbed and filtered above. The rate of dissociation of dioxygen is slow. Due to limited production of oxygen atoms, the formation of ozone is also limited. Above the peak, ozone is short-lived because it is entirely controlled by photolysis with rapid creation and destruction. Therefore, the concentration is not as great as below despite achieving the peak temperature.

The mesosphere lies above the stratosphere beyond the stratopause. It is at the altitudes of 50-85 km above the surface. Temperature drops from 260 K at the stratopause to about 150 K at the mesopause, which is the lowest among all the layers.

The pressure and density of air decreases with height; therefore, oxygen becomes thinner and thinner through the mesosphere. However, the incident ultra-violet radiation in the mesosphere is much more intense than in the stratosphere; therefore, dioxygen molecules are mostly photolyzed by the energetic photons. After dissociation, the oxygen atoms would usually recombine to form dioxygen, subsequently being photolyzed again. In fact, the concentration of molecules is so low that oxygen essentially exists in atomic form at greater heights.

In the stratosphere where air is denser with higher dioxygen concentration, an oxygen atom is more likely to meet a dioxygen to form ozone. But in the mesosphere, the chance of atomic oxygen meeting an intact oxygen molecule decreases greatly with height. Therefore, ozone concentration decreases with height and there is less release of heat energy due to absorption of UV radiation. The rate of drop in temperature is even greater than that in the troposphere.

On the other hand, convection of air occurs in the mesosphere. Air at the bottom is warmer because there is higher ozone concentration and direct heating from the hot stratopause as well. Vertical mixing of air occurs and eddy diffusion intensifies. This contributes to the drop in temperature.


Also, radiative cooling becomes significant at the mesopause through emission by carbon dioxide (at wavelengths of around 15 micrometres). This is another reason for the dramatically low temperature at the mesopause. Carbon dioxide molecules are excited through collisions with oxygen atoms, and the internal energy gained is radiated away, thereby cooling the mesosphere.[13]

The thermosphere begins beyond the mesopause and extends from 90-500 km above the ground. This is where the ionosphere is formed as well. It is another inversion layer where temperature increases with height, reaching a staggering peak of 2000 K. There is also a sharp ‘local’ rise in temperature from the mesopause up to an altitude of 140 km.

One difference between the thermosphere and the layers below is that it is classified as the heterosphere, because nitrogen and oxygen no longer have uniform concentrations in the total composition. Without convective mixing, the density of each gas falls off exponentially, and atmospheric gases tend to sort into layers according to relative molecular weight. Heavier constituents such as oxygen and nitrogen fall off more quickly than lighter constituents like helium and hydrogen.
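A rough sketch (not from the article) of this sorting effect: each gas settles with its own scale height H = RT/(Mg), so heavier species thin out faster with altitude; the temperature and gravity values below are assumed round numbers for illustration only:

R, g, T = 8.314, 9.5, 1000.0        # J/(mol K), m/s^2 at a few hundred km, K (assumed)
for name, M in [("N2", 0.028), ("O2", 0.032), ("O", 0.016), ("He", 0.004), ("H", 0.001)]:
    H = R * T / (M * g)             # e-folding height of the partial density, in metres
    print(f"{name:>2}: scale height ~{H / 1000:.0f} km")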

In the region between 120 – 220 km, temperature increases with height because there is much more high energy radiation available, including extreme ultra-violet and X-rays. Their wavelengths range from around 110 nm – 0.01 nm and the photons are extremely energetic. Species like nitrogen and oxygen photolyze and the peak loss for dioxygen molecules is at 120 km. Many would even ionize under bombardment of powerful photons as electrons are knocked off and positive ions are left behind. This contributes to the formation of the ionosphere.

For example, at around 95 km altitude the formation of N2+ proceeds more rapidly due to increased exposure to energetic X-rays.

Subsequently, a series of highly exothermic reactions occurs:

N2+ + O → NO+ + N

N2+ + NO → NO+ + N2

The local peak temperature at 140 km corresponds to the F layer of the ionosphere, where electron density is increased to a relatively high level compared to other regions. The increase in temperature in the lower thermosphere is mainly due to exposure to very energetic photons and the subsequent dissociation of species. However, further up in the thermosphere, the surge in temperature is due to the excess energy produced by ionization processes. The electrons heat up and collide with ions. Afterwards, the neutral gas is heated.

To summarize, the thermospheric temperature becomes highly dependent on the strength of solar radiation as well as other external drives (e.g. solar wind and activity) as altitude increases. Due to the low volume heat capacities and concentration of gases, even absorbing a small amount of energy can cause a significant increase in temperature.

An interesting fact is that we actually cannot feel the ‘hotness’ in the thermosphere nor can a thermometer measure it. This is because the concentration of gases is so low that the contact with the molecules cannot transfer enough energy to ‘heat the detector’.

To conclude, here is a brief summary of this essay – how temperature goes up and down along the atmosphere.

First of all, temperature decreases with height in the troposphere because of the adiabatic expansion of a rising parcel of air. Work done on the surroundings is converted to a decrease in internal energy. Secondly, it gets hotter rising through the stratosphere because of the absorption of energetic photons (UV) during the creation and destruction of ozone. The excess energy in the reactions is dissipated as heat which warms the stratosphere. Thirdly, a greater decrease in temperature occurs in the mesosphere, where the warming effect of ozone becomes less and less significant due to the decrease in its concentration. Finally, it gets even hotter in the thermosphere because there are more energetic photons available. The dissociation and ionization of molecules contribute to the rise in temperature.

References

1. Gribbin J. Inside Science: Structure of the Earth's Atmosphere. New Sci 1995;148:1-4.

2. Curry J, Webster P. Thermodynamics of Atmospheres & Oceans. Cornwall: Academic Press; 1999.

3. Campbell I. Energy and the Atmosphere: A Physical-Chemical Approach. Surrey: John Wiley and Sons; 1977.

4. Spiro TG, Stigliani WM. Chemistry of the Environment. New Jersey: Prentice Hall; 1996.

5. Jacob D. Introduction to Atmospheric Chemistry. Chichester: Princeton University Press; 1999.

6. Lacis A, Hansen J. A Parameterization for the Absorption of Solar Radiation in the Earth's Atmosphere. J Atmos Sci 1974;31:125-7. Available from: http://ams.allenpress.com/archive/1520-0469/31/1/pdf/i1520-0469-31-1-118.pdf. [Last accessed on 2009 Feb 20].

7. Baird C. Environmental Chemistry. 2nd ed. New York: WH Freeman and Company; 1999.

8. Fergusson J. Inorganic Chemistry and the Earth. Oxford: Pergamon Press; 1982.

9. Nicolet M. Stratospheric Ozone: An Introduction to its Study. Rev Geophys 1975;13:594-619. Available from: http://www.agu.org/journals/rg/v013/i005/RG013i005p00593/RG013i005p00593.pdf. [Last accessed on 2009 Feb 23].

10. Rowland FS. Stratospheric ozone depletion. Philos Trans R Soc Lond B Biol Sci 2006;361:769-90. Available from: http://rstb.royalsocietypublishing.org/content/361/1469/769.full. [Last accessed on 2009 Feb 22].

11. Petrucci R, Harwood W, Herring G. General Chemistry - Principles and Modern Applications. 8th ed. New Jersey: Prentice Hall; 2002.

12. Solomon S. Stratospheric ozone depletion: A review of concepts and history. Rev Geophys 1999;37:275-82. Available from: http://mls.jpl.nasa.gov/library/Solomon_1999.pdf. [Last accessed on 2009 Feb 23].

13. Beig G, Keckhut P, Lowe RP, Roble RG, Mlynczak MG, Scheer J, et al. Review of mesospheric temperature trends. Rev Geophys 2003;41:1-6. Available from: http://www.tropmet.res.in/awnew/aw-47.pdf. [Last accessed on 2009 Feb 25].

About the Author

Martin is currently attending the University of Pennsylvania, an Ivy League Institution located in Philadelphia, Pennsylvania. He is majoring in Physics at the College of Arts and Sciences and minoring in Statistics at the Wharton School of Business. He expects to graduate in 2014 with a Bachelor of Arts in Physics, and plans to pursue either a master's degree in the USA, or secure a quantitative position in the investment banking industry in Hong Kong, his hometown.

In his spare time, Martin likes to do brain teasers and solve puzzles related to probability, as he enjoys the process of getting stuck at first but eventually coming up with a breakthrough. He also likes biking long distances when in Hong Kong during the holidays. As an aviation enthusiast, he is interested in aerodynamics as well as flight theory, and sometimes does plane spotting online. He is also a classical pianist with a Diploma who sometimes enjoys practicing new pieces for leisure.


Opinion

Can science and religion work together while one relies on evidence and the other on faith?

Catherine Schuster Bruce
Bristol University, England. E-mail: [email protected]

DOI: 10.4103/0974-6102.97693

ABSTRACT
Science and religion – Are they really incompatible? Some famous scientists, such as John Polkinghorne, have been highly religious, others such as Albert Einstein have offered varied and complex views on religion, and then there are those, like Richard Dawkins, who can only criticize it. Why is it that such great minds do not think alike? Perhaps science and religion just offer different perspectives on life – the objective evidence against the subjective faith – but when Big Bang theory and any one of the creation stories collide, is there, and will there ever be, an outright winner? Or do science and religion merge to become indistinguishable? This article aims to help to clarify the situation.

It is a common belief that science and religion are incompatible views on life that contradict each other and, on first thoughts this appears true; one appears to be based on fact, the other, as some may see it, on fiction. Nevertheless, many great scientists have been religious. Christianity was first adopted by the Greeks from Antioch who were the first non-Jews to take on the faith. These Greeks are said to be the founders of science and logic. Therefore, is there a hidden link between the two ways of thinking of which our society is ignorant and unknowing?

Science and religion are two different ways of interpreting life and the world that we live in. They answer different sets of profound questions. Science deals with the mechanisms of the world and how it works; religion is concerned more with the meaning of the world. This in particular is an example of an aspect of religion that is not dealt with by science. It is a matter of “how” and “why,” the subjective versus the objective. They, therefore, do not answer each other’s questions; scientific questions have scientific answers. However, some questions arise from science but take us beyond the realms of scientific knowledge and this is when faith endeavors to draw conclusions. For example, it is impossible to carry out experiments that could recreate the moments just before the Big Bang as it is not feasible to create the circumstances and fundamental elements as they are very rare, and so, it will always be speculation as to what happened “before.” In this case science is limited; hence, it may be seen as a “leap of faith” as to what one believes happened prior to and during this short time of unknown history.

It is a common belief that science only deals with objective truths whilst religion is an entirely subjective concept. However, this is not entirely true. Science can also be subjective and is often based on interpretation. Some may speculate that science is merely interpreted fact. Data from an experiment have little value until conclusions are drawn from them. Scientists, being human, do not possess absolute knowledge, so have to have faith in the results and in their understanding of the world to draw a final conclusion.[1] In addition, for science to be accepted as correct, society has to have belief and trust. If a paper is published in a respected scientific journal, it is deemed to be fairly reliable by the general public; however, it is questionable whether the faith society has in science is justifiable when there have been previous occasions of scientists tampering with their data to produce fake, “ground-breaking” conclusions. If each individual does not prove the conclusion to be correct, then the belief that what a scientist says is right is based on human trust only. Furthermore, the conclusions in the scientific report are based on the faith the scientist has in their results and in their own knowledge. Religion can also be seen as “interpreted fact” as there are various different accounts of the same events in the Bible, for example, Jesus’ miracles. Therefore, it could be said that there is little difference between the belief in God and trust in science.

Richard Dawkins is a prominent atheist and professor who thinks all belief should be based on evidence, and therefore science and religion are incompatible because of their completely opposing perspectives on life. He feels it is a tragedy to base life on something with no evidence.[2] This links to his reasoning behind why people’s religions are very much linked to where they live in the world. He concludes that they are heavily influenced by family and other people in close contact because in the same way scientists have differing opinions due to a lack of experimental data, people have certain religions due to a lack of evidence to base their views on. However, it could be said that although you cannot prove religion exists, similarly, you cannot prove that a “God” or higher power does not exist. In my opinion, it is Dawkins’ mistake to assume that “God” is natural and is therefore within the capacity of science to experiment with and test; he cannot prove “God” and therefore assumes there is no “God”. However, “God” cannot be part of nature. The idea of “God” is the explanation to why things are the way they are; it is the answer to existence, not existence itself. Furthermore, the limitations of science and scientific experiments must be remembered as some scientific theories, although based on evidence, are proved not to be true. A good scientific theory does not necessarily have to be entirely correct. That is to say, no scientific theory is ever secure; new data can always undermine a previous theory. There have been examples of false theories effectively guiding advances in science before new evidence was found and they gave way to more up-to-date theories. I therefore disagree with Dawkins and see evidence based on scientific theory not as the truth but as a guide to show the way for new research and put our knowledge together in an ordered manner.

Religion is often criticized for having self-contradictions and logical failings within it. For example, the world being created from nothing in 7 days can be regarded as simply illogical. However, the same could be said about science and the contradictions between quantum theory and general relativity. Neither science nor religion is complete, and both are therefore contradictory.

The biggest problem that faces religion in the modern world is suffering. Society questions its faith in religion when life gets difficult. John Polkinghorne, a particle physicist and priest, draws another link between science and religion as he says, “science can help us deal with this problem” by way of the evolutionary theory.[3] This theory highlights the potential of nature. He claims that God could have created a readymade world, but instead He gave it lots of potential, so creatures could make themselves.[3] I agree with him in this area and believe that death and disaster, although hard to deal with, are part of a life that cannot be perfect; a perfect world is not feasible. According to the Bible, even God has suffered, thus suggesting that no matter who the individual is, suffering is unavoidable and a fact of life. However, both science and religion can give us understanding and one can gain comfort from this understanding that helps us cope with the negative aspects of life that I feel are inevitable. Furthermore, similar to the belief of Polkinghorne, I feel that a higher power, not necessarily “God,” has granted us an independence and freedom to be. This is far more useful than a readymade perfect world. Self-determination is not necessarily better than suffering, but it helps us deal better with this unavoidable fact of life.

There was a study conducted in 1997 by Edward Larson at the University of Georgia in the USA, which attempted to repeat an older study carried out in 1916 by the psychologist James Leuba. Both reported the percentage of scientists who believed in God. The studies found that despite the advances in science, the same percentage of scientists believed in God. I feel that one of the reasons behind this is the fact that everything in life is very specific and finely tuned. For example, quantum physics is based on a very specific energy and resonance. Therefore, scientists experimenting with such accurate and precise data are led to believe that, as Polkinghorne has said, “some intelligence has monkeyed with the laws of nature.”[3] This shows that some people, like Richard Dawkins, see religion as a way of explaining a lack of knowledge, but others, like the majority of physicists, use it to explain why the world has so much potential and precision. Without this, the theories behind science would not work, and thus it is a very significant aspect of science.

About the Author

At the time of writing Catherine Bruce was 17 years old and attended Bryanston School where she studied Maths, Chemistry, Biology and French. She is now currently studying medicine at Bristol University and is in her first year. She has an interest in journalism and media and hopes that in the future she will be able to use a medical background to do this. She enjoyed writing this essay due to its challenging questions as well as the fact that ethics is a subject that she finds thoroughly interesting and very relevant to her future career where ethical dilemmas are faced on an everyday basis.

Determinism is an intellectually stimulating idea that is also relevant to the link between science and religion. There is the intriguing question of whether a higher power has control of the mind and whether we have any choice over our thoughts and actions. If this were true, then a higher power would be controlling not only all the scientific knowledge we have and have yet to gain, but also what we believe. Ultimately, both science and religion are concepts formed and understood in our minds, so both would be affected by this higher power.

In conclusion, having studied the evidence, I have decided that “science and religion” is not an oxymoron. They are two things that complement each other. Both contribute to our knowledge of the world we live in by answering questions which are different, yet inextricably linked. They differ because they emphasize different aspects of life. To really understand the world, one has to ask both “how” and “why,” and both questions can be answered for the same event. In the words of Einstein, “science without religion is lame, religion without science is blind.”

References

1. Kakos, Spyridon. Harmonia Philosophica. Harmonia Philosophy Publications; 1st edition (October 20, 2010).

2. YouTube clip of an interview with Richard Dawkins.

3. Polkinghorne, John. Exploring Reality: The Intertwining of Science and Religion. SPCK Publishing (21 Oct 2005).

About the Author

At the time of writing, Catherine Bruce was 17 years old and attended Bryanston School, where she studied Maths, Chemistry, Biology and French. She is now in her first year studying medicine at Bristol University. She has an interest in journalism and the media and hopes to combine these with her medical background in the future. She enjoyed writing this essay because of its challenging questions, and because ethics is a subject she finds thoroughly interesting and highly relevant to a career in which ethical dilemmas are faced every day.


A new age of science is not dawning – It has arrived

Opinion

Toby McMaster
Twyford Church of England High School, London. E-mail: [email protected]

DOI: 10.4103/0974-6102.97694

ABSTRACT

Science is now divided into a multitude of specialities, each with very narrow interests. However, this leads to the departmentalization of a subject that is united, and the ensuing lack of communication slows the progress of science. Many of the questions that perplex scientists today will only be answered by working together and making the most of other specialities, from the challenge of producing enough food for the expanding population to the search for extraterrestrial life.

When I was at primary school, I was taught mostly literacy, numeracy, and a little bit of “science.” Then, when I made the gigantic step from primary to secondary school, numeracy suddenly became mathematics, literacy became English, but science was still science. Then, in 2010, aged 14 and about to start my GCSEs, I was very excited to be taught biology, chemistry, and physics as three separate entities; but who has the right idea? Is it the people who taught me how to have fun and play with plasticine, or those who taught me the content for some of the most important science exams I have ever taken? I argue that it is the former.

In this rapidly advancing world in which we live, there are several major issues to which the whole scientific establishment needs to contribute, meaning that the classical boundaries of science are beginning to blur into one. In this article, I will explore some of these issues and their potential solutions, whilst also showing how science’s “Big Three” (biology, chemistry, and physics) may have to learn to share the spotlight with each other, and how what I call the fringe sciences, such as psychology and economics, may also have their own roles to play.

Climate Change

It is now widely accepted that the Earth’s climate is changing. Is this down to our species’ insatiable desire for easy energy or not? As much as this is a valid question to ask, it is relatively unimportant. We have to do something if we want future generations to live in the houses in which we have lived and enjoy the positive advances which previous generations have achieved [Figure 1].

There has been a wide range of suggestions as to what ought to be done, and none are the brainchild of a single “scientific discipline.” One such example is Carbon Capture and Storage (CCS). This involves preventing some of the carbon dioxide (CO2) that we produce from entering the atmosphere. There are a variety of options for CCS, from pumping the gas into old mines to reacting it with salts to form stable compounds.[1] Although the latter option sounds like pure “chemistry,” it is still being tested because of its high energy requirements, and carrying it out on an industrial scale would require perfect logistics and a great deal of organization. Both of these options are likely to require a financial incentive in order to popularize them, such as the UK’s carbon tax. Enter the economists.
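
To get a feel for the scale of the “reacting it with salts” option, here is a rough mass balance, not taken from the article, using the simplified reaction CO2 + CaO → CaCO3; real mineral-carbonation schemes use magnesium or calcium silicates, so treat the numbers as an order-of-magnitude sketch only.

```python
# Rough mass balance for mineral carbonation using the simplified reaction
# CO2 + CaO -> CaCO3. Real CCS schemes react CO2 with silicate minerals,
# so this is only an order-of-magnitude illustration, not an actual process.
M_CO2, M_CAO, M_CACO3 = 44.01, 56.08, 100.09   # molar masses (g/mol)

co2_tonnes = 1.0                                # capture one tonne of CO2
mineral_in = co2_tonnes * M_CAO / M_CO2         # tonnes of oxide consumed
carbonate_out = co2_tonnes * M_CACO3 / M_CO2    # tonnes of stable carbonate produced

print(f"Per {co2_tonnes:.0f} t of CO2: ~{mineral_in:.2f} t of mineral in, "
      f"~{carbonate_out:.2f} t of solid carbonate out")
```

Every tonne of CO2 locked away this way means handling roughly two tonnes of rock, which is exactly why the logistics, and the economics, matter as much as the chemistry.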

Also, there is a need for “clean” or carbon-neutral fuels, such as hydrogen, which can be produced from the partial combustion of carbon-based fuels, or biofuels produced from recently living matter. The use of these fuels will require alternative engines, which will involve some engineering and a pinch of chemistry, another area for collaboration.

Food Production

However, the above suggestion of biofuels poses a problem all of its own. As the world’s population grows and rapidly approaches 9 billion, there are more people to feed and less food to go around. Biofuels amplify this by taking up crop space for plants that cannot be used to produce food.[2]

Plants such as poplar have been investigated in response to this, as poplar is able to grow on nutritionally poor soils which are unable to support food crops [Figure 2]. However, even if this proves fruitful in saving land, there are many other issues to be overcome, and improving the yield of crop plants is a big target in agricultural science.

One reason for low yield is that the enzyme ribulose-1,5-bisphosphate carboxylase oxygenase (RuBisCo) is notoriously inefficient at catalyzing its stage in photosynthesis. Due to this, many attempts have been made to try to improve its efficiency, by genetic engineering and other molecular level techniques.[3]

However, this is not the only area of science which is involved; for example, the development of novel fertilizers and other beneficial and ecologically sustainable chemicals requires the work of industrial chemists.

In addition, this is a topic with obvious political connections, and there will likely be multiple solutions combined to combat what is a great issue. The sensitivity of the topic needs to be considered so that methods can be implemented as quickly and efficiently as possible to save lives and improve quality of life.

Medical Treatments

The discovery of treatments for serious medical conditions has accelerated rapidly in the past few decades. Much of this has been due to the rise of an area known as medical physics, where two disciplines which are taught as completely different subjects at school reinforce one another whilst also saving lives. One such example is the MRI machine, which is the most commonly used method to locate brain tumors. Radio waves are sent through the body, causing hydrogen atoms to realign themselves, and when they revert to their initial arrangements, they release radio waves of their own which can be detected. Regions with a greater concentration of hydrogen atoms appear darker and those with a lesser concentration, such as fatty tissues, show up lighter [Figure 3].[4] This is just another example of how a topic often taught in one subject – the electromagnetic spectrum in physics – can be used to solve a problem in a “completely distinct” area of science: medicine.

Figure 1: The Earth from space (from http://gimp-savvy.com/cgi-bin/img.cgi?nasa6UIFv87M8R23430)

Figure 2: A tractor (available from http://en.wikipedia.org/wiki/Tractor)


Many other techniques, both treatments and diagnostic procedures, have emerged from this crossover, and the future of medical physics looks bright.
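
As a rough quantitative aside, not part of the original article, the radio frequency involved in MRI is set by the scanner’s magnetic field through the Larmor relation f = γB/2π; for hydrogen nuclei, γ/2π is about 42.58 MHz per tesla.

```python
# Larmor frequency of hydrogen nuclei in an MRI scanner: f = (gamma / (2*pi)) * B.
# For protons, gamma / (2*pi) is approximately 42.58 MHz per tesla.
GAMMA_OVER_2PI_MHZ_PER_T = 42.58

for field_tesla in (1.5, 3.0):   # common clinical field strengths
    freq_mhz = GAMMA_OVER_2PI_MHZ_PER_T * field_tesla
    print(f"B = {field_tesla} T -> proton resonance at ~{freq_mhz:.1f} MHz (radio waves)")
```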

There is also the involvement of other science-like subjects, such as psychology, in medicine; for example, the rise of Attention Deficit Hyperactivity Disorder (ADHD) has involved the psychological examination of many children, followed by the prescription of drugs developed by drug designers. One widely used drug is Ritalin, which is often prescribed for this disorder; however, it has known side-effects, such as the stunting of a child’s growth, if used for extended periods.[5]

Other longer-term projects involving all three sciences include building living organs-on-a-chip, such as heart-on-a-chip.[6] This concept was devised to aid drug testing and reduce animal testing. The heart-on-a-chip is made up of heart cells on a flexible polymer bathed in the nutrients required by the myogenic cells. This is one of the ultimate collaborations, with contributions from biology, chemistry, and physics producing a potentially revolutionary technology.

Are We Alone in the Universe?

Despite serious issues on our home planet, there are still fundamental questions that have puzzled the human race for centuries and remain relatively unexplored. Yet this next century is likely to redefine our beliefs about extraterrestrials: no longer reserved for Hollywood blockbusters and fanatical preachers, they could instead become reality.

The turn of the millennium has seen the exponential rise of a new science: astrobiology. This is the search for life on other worlds and could be the key to answering some of mankind’s greatest queries. If we truly are alone in the universe, or at least unable to locate other life, then surely we owe it to ourselves and our universe to really commit to solving some of our home’s global issues. If life were found, even small microbial life forms, it would be a major breakthrough and arguably the greatest achievement in the history of science.

The recent voyage of NASA’s Kepler satellite has popularized the search for so-called extra-solar planets [Figure 4]. A variety of techniques exist for detecting such objects, all of which involve something that would classically be classed as physics.

Astronomers are able to gather amazing amounts of precise information from simple measurements, such as the time an object takes to cross in front of a star. The atmospheres of many planets can be analyzed from the individual spectrum of colors absorbed by different gases. Although there is no guarantee that life elsewhere in the universe would be anything like us, it has been suggested that we look for life forms that would produce the same compounds as ourselves, as well as relying on the same basics, such as liquid water and a stable environment. The logic behind these decisions is that, were there a giant silicon-based superorganism in the nether regions of our galaxy, we would have almost no idea how to detect its presence.

Based on these assumptions, there are certain requirements for potentially habitable planets, such as size.

Figure 3: An MRI scan (available from http://en.wikipedia.org/wiki/Magnetic_resonance_imaging)

Figure 4: Artist’s impression of an extra-solar planet (available from http://en.wikipedia.org/wiki/Astrobiology)


Too small, and a planet’s gravity would not be great enough to hold on to an atmosphere to warm it, while the lower pressure would reduce the liquid range of our old friend H2O. Too large, and the core would retain its heat far more efficiently, causing great volcanic activity and possibly releasing noxious gases.[7] Moreover, such planets would have to lie within a certain range of distances from their star to avoid overheating or freezing; this “fussiness” has led to the nickname of the Goldilocks zone, a region neither too hot nor too cold. These are the sort of characteristics that can be determined using clever physics and added to biologists’ knowledge of the ingredients for life in our search to solve one of the greatest mysteries of life.
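
Two of the measurements alluded to above can be made concrete with a minimal sketch that is not part of the original article: the dip in a star’s brightness during a transit, roughly (R_planet / R_star)^2, and the equilibrium temperature a planet settles at for a given distance from a Sun-like star, T_eq = T_star * sqrt(R_star / 2d) * (1 - A)^(1/4), ignoring any greenhouse effect; the albedo here is an assumed Earth-like value.

```python
# Two back-of-the-envelope exoplanet calculations: transit depth and the
# equilibrium temperature of a planet orbiting a Sun-like star.
from math import sqrt

R_SUN_M = 6.96e8        # solar radius (m)
R_EARTH_M = 6.371e6     # Earth radius (m)
R_JUPITER_M = 6.9911e7  # Jupiter radius (m)
T_SUN_K = 5778.0        # effective temperature of the Sun (K)
AU_M = 1.496e11         # astronomical unit (m)
ALBEDO = 0.3            # assumed Earth-like reflectivity

# Transit method: the fractional dip in brightness is ~ (R_planet / R_star)^2.
for name, r_planet in (("Earth-sized", R_EARTH_M), ("Jupiter-sized", R_JUPITER_M)):
    depth = (r_planet / R_SUN_M) ** 2
    print(f"{name} transit: brightness dip of ~{depth * 1e6:.0f} parts per million")

# Equilibrium temperature, ignoring greenhouse warming:
# T_eq = T_star * sqrt(R_star / (2*d)) * (1 - albedo) ** 0.25
def equilibrium_temp_k(distance_au, albedo=ALBEDO):
    d = distance_au * AU_M
    return T_SUN_K * sqrt(R_SUN_M / (2.0 * d)) * (1.0 - albedo) ** 0.25

for au in (0.7, 1.0, 1.5):   # roughly Venus-, Earth- and Mars-like orbits
    print(f"{au:.1f} AU -> equilibrium temperature of ~{equilibrium_temp_k(au):.0f} K")
```

A Jupiter-sized planet dims a Sun-like star by about one per cent while an Earth-sized one manages under a hundred parts per million, and the roughly 255 K result at 1 AU sits well below Earth’s actual average temperature because the greenhouse effect is left out; both points hint at why the edges of the Goldilocks zone are fuzzy and why space telescopes such as Kepler are needed.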

Conclusion

Although I believe there is nothing essentially “wrong” with teaching science as three subjects in isolation, it must be emphasized that science is a co-operative endeavour. There is much to be gained from listening to others with different areas of knowledge and speciality. Having fun is fine, but teachers and professors insisting that their science is “the best science” is neither constructive nor particularly helpful to prospective students. Although it is likely to remain the case that co-operation between individuals, each with their own specialities, is the way most interdisciplinary problems are solved, we are, I believe, heading into a future where knowledge of one single “science” will no longer be enough. The wall between the traditional sciences is not falling; it is lying crumbled where it once stood.

References

1. Available from: http://www.co2captureproject.org/co2_capture.html. [Last cited on 2012 Apr 5].

2. Available from: http://www.oxfam.org.uk/get_involved/campaign/food/. [Last cited on 2012 Apr 5].

3. Spreitzer RJ, Salvucci ME. Rubisco: Structure, regulatory interactions, and possibilities for a better enzyme. Annu Rev Plant Biol 2002;53:449-75.

4. Available from: http://www.netdoctor.co.uk/health_advice/examinations/mriscan.htm. [Last cited on 2012 Apr 5].

5. Available from: http://www.drugs.com/ritalin.html. [Last cited on 2012 Apr 5].

6. Available from: http://www.newscientist.com/article/mg21028184.000-building-a-human-on-a-chip-organ-by-organ.html. [Last cited on 2012 Apr 5].

7. Dartnell L. Life in the Universe. Oneworld Publications; 2007. p. 66-9.

About the Author

Toby, currently in year 13 at Twyford Church of England sixth form, is taking A levels in Biology, Chemistry, and Further Maths. Next year, he hopes to go to university to study Natural Sciences, possibly with a year abroad. In his spare time, he plays tennis, goes running, and enjoys watching the US sitcom The Big Bang Theory.


The effects of magnetic fields on plant growth and health

Research Article

Edward Fu
Student, University High School, Irvine, CA. Email: [email protected]

DOI: 10.4103/0974-6102.97696

ABSTRACT

A study was conducted to test the hypothesis that a magnetic field can affect plant growth and health. The study divided plants into three groups. The first group of plant seeds grew in a low magnetic field. The second group grew in a high magnetic field. The third group grew in the absence of a magnetic field, serving as a control group. Several growth parameters were measured, including the germination rate, plant height, and leaf size. In addition, health status was assessed by leaf color, spots, stem curvature, and the death rate. Plant growth was observed continuously for four weeks. The results showed that magnetism had a significant positive effect on plant growth. Plant seeds under the influence of the magnetic field had a higher germination rate, and these plants grew taller, larger, and healthier than those in the control group. No adverse effects of magnetism on plant growth were noticed. However, the removal of the magnetic field weakened the plant stem, suggesting a role for magnetism in supplying plants with energy.

Introduction

De Souza et al. showed that the growth and yield of lettuce could be improved by treating its seeds, before they were grown, with rectified sinusoidal non-uniform electromagnetic fields.[1] It was observed that magnetism has effects on lettuce at the nursery, vegetative, and maturity stages, including a significant increase in root length and shoot height, a greater growth rate, and a significant increase in plant height, leaf area, and fresh mass. Positive biological effects of magnetism on the weights of sunflower and wheat seedlings have also been reported.[2] Further data show that the magnetic field induced by a voltage of a specific waveform enhanced or inhibited mung bean growth depending on the frequency,[3] which suggests that the effect of a magnetic field on plant growth may be sensitive to the waveform and frequency of the source voltage. The effect of static magnetic fields on plant growth has also been studied. Cakmak et al. found that a static magnetic field accelerated the germination and early growth of wheat and bean seeds.[4] Vashisth et al. obtained similar results with chickpeas; furthermore, they found that the responses of the plant to a static magnetic field varied with field strength and duration of exposure with no particular trend.[5] However, as indicated by a literature review, weak magnetic fields have exhibited negative effects on plant growth in some cases, such as inhibition of primary root growth.[6] For instance, exposure to a magnetic field inhibited the early growth of radish seedlings, with decreases in weight and leaf area.[7] An interesting result is that the biological effect of a magnetic field differs between the south and north poles, as illustrated by a study which showed that radish seedlings had a significant tropic response to the south pole of a magnet, but an insignificant response to the north pole.[7] It is theorized that the south pole of a magnet enhances plant and bacterial growth by conferring energy, whereas the north pole retards their growth. Thus, it may be possible to utilize the magnetic north pole against infections or tumor growths. Morphological anomalies in the pollen tubes of a particular plant exposed to magnetism have also been observed,[8] which raises the important question of whether magnetism can cause gene mutation and cancer. This issue is still controversial and demands more research evidence before any conclusion can be drawn.

Experimental Design and Methods

Hypothesis
The main hypothesis is that a static magnetic field has effects on plant growth: if plants grow in an environment with a magnetic field, they will grow differently than if they grow without one.

Purposes
The experimental objectives include:
• To observe plant growth, based on a set of growth and health parameters, under the magnetic field.
• To compare plant growth under magnetic fields of different strengths.
• To compare plant growth under the magnetic field at different points in time.
• To determine whether the magnetic field can influence plant growth, based on the observational data.
• To identify the parameters of plant growth affected by the magnetic field, if any.
• To observe changes in plant growth after the magnetic field is removed.

Materials
• Plant seeds – a bag
• Soil – a bag (28 L)
• Sunlight – eight hours per day
• Water
• Magnets – two magnets with magnetic strengths of 0.33 Tesla and 0.49 Tesla, respectively
• Ruler – one
• Magnifier – one

Procedures
1. Prepare three round plastic or glass containers, each with a diameter of 10 inches and a height of about 6 inches.
2. Place the same soil (mixed natural and artificial) in each container to form a soil bed about four inches deep.
3. Plant radish seeds in the superficial layer of the soil (one inch deep) in each container. Sixteen seeds are planted in each container, evenly distributed along a circle with a diameter of six inches.
   3.1. In the second container, a horseshoe-shaped magnet of 0.33 Tesla is placed at the center of the circle surrounded by the seeds [Figure 1].
   3.2. In the third container, a horseshoe-shaped magnet of 0.49 Tesla is placed at the center of the circle surrounded by the seeds [Figure 1].
4. Each nursing container is placed by a window facing east and exposed to sunlight in the daytime. The plants grow at a room temperature of 25°C with a humidity level between 30 and 50%. The soil in each container is kept moist by watering once every day, so that the soil surface is neither dry nor soggy to the touch. The soil moisture was estimated at 0.25 ml/in².
5. Record on a weekly basis the number of seeds that have germinated, plant growth, and observations about plant health, such as leaf color and spots or holes due to pests and diseases. Diseased spots will be quantified. Plant growth will be measured by plant height and leaf size, as shown in Figure 2.
6. The experimental observation will last for four weeks. The magnet in the third pot will be removed at the end of the third week, but observation will continue through the fourth week in an attempt to evaluate the reversibility of the magnetic effect on plant growth, if any.
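
For readers who want to reproduce the layout, here is a small sketch, not part of the original protocol, that computes the seed positions from step 3 and the daily watering volume implied by step 4, assuming (my assumption, not the author’s statement) that the 0.25 ml/in² figure describes the daily watering volume per unit of container surface area.

```python
# Seed layout for step 3 (16 seeds on a 6-inch circle, magnet at the origin) and
# the daily watering volume implied by step 4 under the stated assumption.
from math import cos, sin, pi

SEEDS = 16
SEED_CIRCLE_DIAMETER_IN = 6.0
CONTAINER_DIAMETER_IN = 10.0
MOISTURE_ML_PER_SQ_IN = 0.25   # assumed to be the daily watering rate per square inch

radius = SEED_CIRCLE_DIAMETER_IN / 2.0
positions = [(radius * cos(2 * pi * k / SEEDS), radius * sin(2 * pi * k / SEEDS))
             for k in range(SEEDS)]
print("First few seed positions (inches from the magnet):",
      [(round(x, 2), round(y, 2)) for x, y in positions[:4]])

container_area_sq_in = pi * (CONTAINER_DIAMETER_IN / 2.0) ** 2
print(f"Daily water per container: ~{container_area_sq_in * MOISTURE_ML_PER_SQ_IN:.0f} ml")
```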

Results

There were three experimental conditions, as shown in Figure 3: (1) no magnetic field, (2) low magnetic field, and (3) high magnetic field. The plants that had no magnetic field surrounding them were found to have slower growth as well as smaller leaf sizes. Growth and leaf size increased as the strength of the magnetic field increased.

The germination rate of the plants without the magnetic field was lower than the germination rate of those with a magnetic field, as seen in Table 1. In the control group, 12/16 plants germinated during the first week, while 16/16 of the plants growing under a magnetic field (for both low and high magnetism) did so in the first week. Overall, the germination rate of the control group was 14/16 across the whole study period. The higher first-week germination rate in the experimental groups with magnetic fields suggests that magnetism can increase the speed of plant development.

As seen from Table 2, the plants in the groups with magnetic fields grew taller (measured by stem height) and bigger (measured by leaf size) by as much as 25%. For instance, the stem height at the fourth week was 4.18 cm in the control versus 5.25 cm in the high magnetic group. Furthermore, the high magnetic field had a more stimulatory effect on plant growth than the low magnetic field. The photographs [Figure 3] also show that plants grew with a better overall appearance in the environment with magnetic fields.

As seen from Table 1, the number of unhealthy stems in the control was 1/14. In comparison, 1/16 of the plants in the trial with low magnetic field had an unhealthy stem, and 1/16 of the plants in the trial with high magnetic field had an unhealthy stem during the third week. When the magnet was removed for the fourth week in the trial with high magnetic field, the number of unhealthy stems increased to 6/16. The number of plant deaths for the control group was 2/14, while the number of deaths for both the low and high magnetic field groups was 0/16. Thus, more plants died without magnetism than with it.

Figure 1: The experiment design where the magnet was surrounded by 16 plant seeds

Figure 2: Plant growth is measured by the stem height (top) and leaf size (bottom) every week over a one month period. Blue: Control. Pink: Low magnetic field. Yellow: High magnetic field. The curves suggest that there is consistent positive effect of magnetic field on plant growth

Figure 3: The plant growth in three trial groups with no, low, and high magnetic fields, respectively, in the middle of the experiments (the third week)

Table 2: The average plant height and leaf size in the control group and two experimental groups with low and high magnetic fields, respectively

          Plant height (cm)                 Leaf size (cm)
          Control   Low mag   High mag      Control   Low mag   High mag
Week 1    1.22      1.70      1.50          0.52      0.67      0.59
Week 2    2.97      3.75      3.72          0.61      0.73      0.84
Week 3    4.09      4.69      5.06          0.79      0.84      1.03
Week 4    4.18      4.91      5.25          0.79      0.91      1.14

Table 1: The unhealthy changes and death of plants in control and experimental groups at the end of the fourth week. In the group with high magnetic field, the magnet was removed at the end of the third week; there was a significant increase in the number of stem changes after the magnet was removed. The control group has a significantly lower germination rate in the first week than the experimental groups with magnetic fields

                     Control                             Low magnetic       High magnetic
Germination rate     12/16 (1st week), 14/16 (overall)   16/16 (1st week)   16/16 (1st week)
Leaf unhealthy       2/14                                2/16               2/16
Stem unhealthy       1/14                                1/16               1/16 (3rd week), 6/16 (4th week)
Plant death          2/14                                0/16               0/16

Table 3: Statistical comparison of plant height using a paired t-test provided by an online statistical calculator[9]

Comparison                       t-value   P value            Statistical significance
Control versus low magnetism     9.59      0.0024 (< 0.01)    Yes
Control versus high magnetism    4.37      0.022 (< 0.05)     Yes
Low versus high magnetism        0.86      0.45               No

Table 4: Statistical comparison of leaf size using a paired t-test provided by an online statistical calculator[9]

Comparison                       t-value   P value            Statistical significance
Control versus low magnetism     5.19      0.014 (< 0.05)     Yes
Control versus high magnetism    3.86      0.031 (< 0.05)     Yes
Low versus high magnetism        1.63      0.20               No

Plant stems became unhealthier after the magnet was removed in the trial with high magnetic field.

Statistical analysis based on paired t-tests among the groups, shown in Tables 3 and 4, showed that plants exposed to magnetism (low or high) outgrew plants in the control group in terms of both plant height and leaf size, and these differences were statistically significant.
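
The comparisons reported in Tables 3 and 4 can be reproduced from the weekly group means in Table 2. The sketch below is not part of the original paper; it pairs the four weekly means of one group with those of another and applies SciPy’s paired t-test, which recovers essentially the same t-values as the online calculator used by the author, and it also checks the week-4 percentage increase quoted above. A more rigorous analysis would need the per-plant measurements rather than weekly means.

```python
# Reproducing the paired t-tests of Tables 3 and 4 from the weekly means in Table 2.
from scipy import stats

height = {  # average plant height (cm), weeks 1-4
    "control": [1.22, 2.97, 4.09, 4.18],
    "low":     [1.70, 3.75, 4.69, 4.91],
    "high":    [1.50, 3.72, 5.06, 5.25],
}
leaf = {    # average leaf size (cm), weeks 1-4
    "control": [0.52, 0.61, 0.79, 0.79],
    "low":     [0.67, 0.73, 0.84, 0.91],
    "high":    [0.59, 0.84, 1.03, 1.14],
}

def compare(label, a, b):
    # Each week's mean in one group is paired with the same week's mean in the other.
    t, p = stats.ttest_rel(b, a)
    print(f"  {label}: t = {t:.2f}, p = {p:.4f}")

for name, data in (("Plant height", height), ("Leaf size", leaf)):
    print(name)
    compare("control vs low magnetism", data["control"], data["low"])
    compare("control vs high magnetism", data["control"], data["high"])
    compare("low vs high magnetism", data["low"], data["high"])

# Week-4 percentage increase in height relative to the control.
for group in ("low", "high"):
    gain = 100.0 * (height[group][-1] - height["control"][-1]) / height["control"][-1]
    print(f"Week 4 height gain, {group} vs control: {gain:.1f}%")
```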

Discussion

Previous research showed that plant growth can be increased by both dynamic[1] and static[4,5] magnetic fields. Magnetism has also been reported to accelerate germination.[4] These findings were confirmed by the research presented here. However, some studies reported negative results. The differences in research outcomes may be due to variations in experimental design. Taken together, the current evidence seems to favor the view that magnetism has a positive influence on plant growth and development.

However, previous research studies did not measure plant health status with and without magnetic influence. My research suggests that magnetism makes plants not only grow faster and bigger, but also survive better. An interesting observation is that a large portion of plant stems became curved after the magnet was removed in the group exposed to the high magnetic field. A possible explanation is that magnetism accelerates plant growth and supplies energy; in the absence of magnetism, the plants lose the energy derived from the magnetic field and the stem can no longer support its weight.

The differential effect of the north and south poles on plant growth has been noticed previously. One theory is that the south pole causes plants to grow faster but promotes bacteria, while the north pole causes slower growth but healthier plants. A number of unhealthy leaves were noted in my experiment, but the plants close to the north pole and those close to the south pole did not show any observable difference in their growth or health status. This is perhaps due to the use of horseshoe magnets rather than bar magnets in the experiment.

Conclusions

I have found that the plants surrounded by a magnetic field tend to grow faster, taller, bigger, and healthier, as measured by plant height, leaf size, and selected parameters related to their health status. The germination rate in the first week was higher with the magnetic field than without it. The magnetic field may also supply energy, as reflected by my observation that removal of magnetism causes the plant stem to bend. The results support my hypothesis that magnetism affects plant growth and health. In the literature, most similar studies have found a positive effect of magnetism on plant growth, but few have reported an effect on plant health. Moreover, a new finding in my experiment, which has not been reported in the literature, is the potential relationship between magnetism and plant energy. This would mean that magnetism affects both the structure and the function of a plant. Finally, my study did not suggest that the magnetic field caused any negative biological effect, as far as plant growth and development are concerned.

Acknowledgement

I would like to thank my parents for their encouragement and for their suggestion about an online statistical resource and reference styles.

References

1. De Souza A, Sueiro L, González LM, Licea L, Porras EP, Gilart F. Improvement of the growth and yield of lettuce plants by non-uniform magnetic fields. Electromagn Biol Med 2008;27:173-84.

2. Fischer G, Tausz M, Kock M, Grill D. Effects of weak 16 2/3 Hz magnetic fields on growth parameters of young sunflower and wheat seedlings. Bioelectromagnetics 2004;25:638-41.

3. Huang HH, Wang SR. The effects of inverter magnetic fields on early seed germination of mung beans. Bioelectromagnetics 2008;29:649-57.

4. Cakmak T, Dumlupinar R, Erdal S. Acceleration of germination and early growth of wheat and bean seedlings grown under various magnetic field and osmotic conditions. Bioelectromagnetics 2010;31:120-9.

5. Vashisth A, Nagarajan S. Exposure of seeds to static magnetic field enhances germination and early growth characteristics in chickpea (Cicer arietinum L.). Bioelectromagnetics 2008;29:571-8.

6. Belyavskaya NA. Biological effects due to weak magnetic field on plants. Adv Space Res 2004;34:1566-74.

7. Yano A, Ohashi Y, Hirasaki T, Fujiwara K. Effects of a 60 Hz magnetic field on photosynthetic CO2 uptake and early growth of radish seedlings. Bioelectromagnetics 2004;25:572-81.

8. Dattilo AM, Bracchini L, Loiselle SA, Ovidi E, Tiezzi A, Rossi C. Morphological anomalies in pollen tubes of Actinidia deliciosa (kiwi) exposed to 50 Hz magnetic field. Bioelectromagnetics 2005;26:153-6.

9. GraphPad Software. QuickCalcs. Available from: http://www.graphpad.com/quickcalcs/ttest1.cfm. [Last accessed in 2005].

About the Author

Edward Fu is 15 years old and studies Biology and Chemistry at University High School in California. He is on his school's Science Olympiad team and won medals in the 8th and 10th grades.

Acknowledgment

Lorna Quandt is a graduate student in Psychology at Temple University in Philadelphia, USA. She uses EEG (electroencephalography – the recording of electrical activity in the brain using electrodes placed on the scalp[1]) to study her areas of interest: action processing, social cognition, and the overlaps between action execution and action observation.[2]

Lorna is a member of the International Advisory Board of the Young Scientists Journal, assisting our group of school-student editors with the science in the most challenging articles submitted to the journal. She is now stepping down from the IAB to focus on her work, and everyone on the YSJ team would like to thank Lorna for the guidance and time she has given us, particularly whilst preparing Issue 10.

References

1. http://en.wikipedia.org/wiki/Electroencephalography
2. https://sites.google.com/site/lornacquandt/home


Household bacteria: Everyday elimination methods uncovered!

Research Article

Defne Gürel, Melis Atalar, Ayça Arslan Ergül1

Student, Bilkent University Preparatory School, 1Department of Molecular Biology and Genetics, Research and Teaching Assistant, Bilkent University, TR-06800, Bilkent, Ankara, Turkey. E-mail: [email protected]

DOI: 10.4103/0974-6102.97698

ABSTRACT

Household sanitation is regarded as being of great importance in preventing disease in the modern world. There is a wide range of options when choosing what to use to kill bacteria and other pathogens. However, this study was carried out in order to discover whether all of the common methods for eliminating household bacteria are indeed effective. This was done by inoculating E. coli into media and then exposing them to common treatments to find out which methods, and which quantities of each, were effective in inhibiting bacterial growth. The results showed that almost half of the methods tested still allowed pathogenic contamination, so although they provide an unfavorable environment for bacterial growth, some methods are not effective at preventing bacterial growth completely.

People care about sanitation in all aspects of their lives. Through this concern, different people have come up with their own unique ways of sanitizing, or killing bacteria, in their daily lives. Although many chemical products intended to do the same job are becoming more popular throughout the market, many people prefer to stick to their old-fashioned ways in their homes. After observing this common trend, especially as applied in domestic spaces in Turkey, it was decided to investigate the efficiency of these methods further.

Through the surveys completed, it was found that the following methods were used extensively in homes:
• Heat exposure
• Soaking in vinegar
• Soaking in lemon juice
• Filtering
• Soaking in bleach
• Soaking in dilute hydrochloric acid (HCl)
• Soaking in salt water
• Soaking in sugar water
• Soaking in citric acid
• Washing with soap

An informative trip to the Ministry of Health public health and hygiene center (Refik Saydam Hıfzıssıhha) was made to learn about bacteria that may be found in our tap and drinking water, as well as methods of testing and identifying them. Following this, 10 common household methods used around us were chosen at random and tested on E. coli – a bacterium that may be found (and is eliminated) in Ankara's water. We inoculated the bacteria into media and then exposed them to the criteria we wanted to test: heat, vinegar, lemon juice, a filter, bleach, HCl, salt, sugar, citric acid, and soap. The criteria were tested in different ratios/amounts to see how effective each one was. The samples were measured with a spectrophotometer for any bacterial growth and compared to the experiment's positive and negative controls.


Our results showed that, indeed, not all of the methods people use today were reliable. Almost half of the methods tested demonstrated pathogenic contamination. Therefore, even though these methods provide an unfavorable environment for bacterial growth, most of them are not efficient enough to prevent their growth completely.

Scope

This experiment is designed to collect various household methods used to kill bacteria in our daily lives, and to test their reliability on E. coli.

Equipment and Materials

• Escherichia coli DH5α
• Agar plates:
  Tryptone 10 g, Yeast Extract 5 g, Agar 10 g, NaCl 5 g, ddH2O 1 L
• LB Broth Medium:
  Tryptone 10 g, Yeast Extract 5 g, NaCl 5 g, ddH2O 1 L
• Flame (Bunsen burner)
• Approx. 40 falcon tubes
• Approx. 6 bacteria plates (Petri dishes)
• Inoculating loop
• Pasteur pipette
• Ethanol (96% alcohol)
• Vinegar
• Lemon juice
• Filter (0.20 µm pore size)
• Syringe
• NaClO (bleach)
• HCl
• Salt
• Sugar
• Citric acid
• Liquid soap

Equipment supplied by host laboratory
• Autoclave
• Incubator
• Spectrophotometer
• White light
• Pipettes
• Water baths
• Freezer (-20°C)
• Refrigerator (4°C)

Lab safety equipment
• Lab coats
• Goggles
• Gloves

Procedure

Part I (preparation)
1. Sterilize the lab bench (work area) and light a Bunsen burner. Put on all the lab safety equipment (gloves, goggles, and lab coat).
2. Retrieve Escherichia coli DH5α from frozen stock.
3. Thaw at 37°C.
4. Into three separate falcon tubes, inject 3 ml of LB Broth Medium near the flame (15 cm away).
   a. In two of them, inoculate 10 µl of DH5α using a micropipette.
   b. In the third tube, inoculate 20 µl of DH5α using a micropipette.
5. Retrieve three Agar plates prepared beforehand and open them near the flame (15 cm away).
   a. Drop 20 µl of E. coli DH5α onto the center of two plates.
   b. Drop 40 µl of E. coli DH5α onto the center of the third plate.
   c. Spread the bacteria evenly across each plate using a bent Pasteur pipette (spreader), after sterilizing it with ethanol (96% alcohol).
6. a. Place the falcon tubes in an incubator at 37°C, shaking at a rate of 225 rpm for 16 hours.
   b. Place the Agar plates in a steady incubator at 37°C for 16 hours.
7. Retrieve the plates and tubes (after 16 hours) and place them in a refrigerator at 4°C until ready for Part II.

Part II (the experiment)
1. Retrieve one of the E. coli cultures inoculated into the LB Broth Medium in Part I and allow it to adjust to 37°C.
2. Prepare 26 sterile falcon tubes and pipette 2 ml of LB Broth Medium into each of them.
3. Near a flame (15 cm away), inject 10 µl of Escherichia coli DH5α from the saturated culture into each falcon tube, except for two. The two tubes that do not have any bacteria in them (only LB Broth Medium) will be the experiment's negative control.
4. Test the criteria obtained through the surveys conducted beforehand, as follows:
   a. Expose four of the tubes to heat in separate water baths for 10 minutes: 40°C, 60°C, 80°C, and boiling water.
   b. Inject 200 µl, 400 µl, and 2 ml of vinegar into three of the tubes.
   c. Inject 200 µl, 400 µl, and 2 ml of lemon juice into three of the tubes.
   d. Using a syringe, pass all the culture in one of the tubes through a cellulose acetate filter with a pore size of 0.20 µm into a separate falcon tube.
   e. Pipette 2 µl, 20 µl, and 200 µl of bleach (NaClO, sodium hypochlorite) into three falcon tubes.
   f. Pipette 2 µl, 20 µl, and 200 µl of HCl into three falcon tubes.
   g. Pipette 2 ml of saturated salt solution prepared beforehand into one of the falcon tubes.
   h. Pipette 2 ml of saturated sugar solution prepared beforehand into one of the falcon tubes.
   i. Pipette 10 µl of citric acid (1 molar) into one of the falcon tubes.
   j. Place one drop of soap into one of the falcon tubes.
5. Label all the falcon tubes. The three tubes left untested will be considered the experiment's positive control.
6. Incubate the tubes at 37°C, shaking at a rate of 225 rpm for 16 hours.

Part III (analysis)
1. Retrieve all falcon tubes from the incubator.
2. Pipette a sample of approximately 500 µl from each falcon tube into a separate spectrophotometer cuvette.
3. Measure each sample for any growth using a spectrophotometer. Compare the results to the positive and negative controls.
4. Record and analyze the data.
5. Sub-culture samples of the positive control:
   a. Inoculate the samples onto Agar plates using the streak plate technique.
   b. Incubate at 37°C for 16 hours.
   c. Observe the plates and isolated colonies under white light.

Results and Discussion

Results from the experiment show that about half of the household methods tested are not effective. Before obtaining results from the spectrophotometer for the different methods tested, LB Broth Medium (which was not incubated and had no bacteria inoculated into it) was read to obtain blank values. This, along with the results of the experiment's negative control (where only the medium was incubated and no bacteria were inoculated), was important in order to see whether any contamination had occurred in the incubator. As intended, there was no external contamination during the growth phase of the investigations. This can be verified by the extremely low absorbance figures obtained from the spectrophotometer [Figure 1].
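
The classification logic implied by Figure 1 can be written down in a few lines. This sketch is not part of the original study, and the OD600 readings in it are illustrative placeholders rather than the authors' data; only the 0.1 threshold comes from the figure caption.

```python
# Classifying spectrophotometer readings against the 0.1 absorbance threshold
# described in Figure 1. The OD600 values below are hypothetical examples.
THRESHOLD = 0.1   # absorbance at 600 nm above which growth is considered present

readings = {
    "negative control (LB only)": 0.01,   # hypothetical value
    "positive control":           1.20,   # hypothetical value
    "boiled for 10 minutes":      0.03,   # hypothetical value
    "heated to 40 C":             1.05,   # hypothetical value
}

for sample, od600 in readings.items():
    verdict = "growth" if od600 > THRESHOLD else "growth inhibited"
    print(f"{sample:28s} OD600 = {od600:.2f} -> {verdict}")
```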

It was seen that exposing bacteria to heat at 40°C, 60°C, or 80°C does not prevent bacterial growth. In fact, 40°C provided a favorable environment for bacterial growth, since the optimum growth temperature for many bacteria is around 37°C. However, keeping the bacteria at boiling temperature for 10 minutes prevented all bacterial growth. This result prompted us to question the common household use of kettles to boil water for hot drinks such as tea, coffee, and soup. Although kettles are designed to boil water, they tend to stay at the boiling point (around 100°C) for only a short period of time before turning themselves off.

Figure 1: Spectrophotometer data with absorbance values. Samples were measured with 600 nm wavelengths to detect the presence of bacteria. Absorbance values were presented on the graph with a logarithmic scale. The threshold value was set as 0.1. When tested, samples generating absorbance rates below the threshold value were considered to inhibit bacterial growth. In the same context, samples with values above the threshold allowed the growth of bacteria


The temperature of the water then drops rapidly. For this reason, it is reasonable to say that kettles only reach about 80°C on average. As can be seen in the data, 80°C still allows bacteria to develop to a pathogenic degree.

Adding vinegar prevented bacterial growth at all volumes tested (200 µl, 400 µl, and 2 ml), proving it to be an effective method. Adding the same volumes of lemon juice also prevented bacterial growth to some extent, though not fully. However, because of the liquid's opaque quality, the spectrophotometer gave high readings (suggesting bacterial growth) for this investigation, even after dilution. We think these are false positives caused by the scattering of light from the colloidal character of the lemon juice itself.

Passing bacteria through a micro-filter (with a 0.20 µm pore size) was one of the most reliable methods tested. However, even though these filters are commonly found in scientific laboratories, they are harder to obtain and apply in household environments.

Bleach and hydrochloric acid are both sold at about 5% concentration for household cleaning purposes, and they are commonly used in many homes. The surveys showed that these two chemicals in particular were used extensively for domestic purposes, so both were tested at the same concentrations. Ratios of 1:1000 (2 µl), 1:100 (20 µl), and 1:10 (200 µl) were added to the test tubes. From the results obtained, it was seen that a ratio of 1:1000 was not enough to prevent growth. Even so, adding only 10 ml (1%) of HCl or bleach to a bucket of cleaning water (approximately 1 liter) is enough to kill microorganisms such as E. coli on the surfaces being cleaned. This figure is significant, as people tend not to know that even such a small amount of the chemical is sufficient to be effective. This finding suggests that the habit of pouring out whole bottles of bleach to disinfect a small surface should be abandoned.
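
The arithmetic behind those ratios is easy to check. The sketch below is not from the article; it takes the roughly 5% strength of household bleach or HCl mentioned above and works out the final concentration in each 2 ml culture and in the 1-liter bucket example, with the conversion to parts per million added for convenience.

```python
# Dilution arithmetic for the bleach/HCl tests, assuming ~5% household stock.
STOCK_PERCENT = 5.0          # approximate strength of household bleach or HCl
CULTURE_VOLUME_UL = 2000.0   # 2 ml of LB medium per falcon tube

for added_ul in (2.0, 20.0, 200.0):                    # volumes tested (1:1000, 1:100, 1:10)
    fraction = added_ul / (CULTURE_VOLUME_UL + added_ul)
    final_percent = STOCK_PERCENT * fraction
    print(f"{added_ul:5.0f} ul into 2 ml -> ~1:{CULTURE_VOLUME_UL / added_ul:.0f}, "
          f"final concentration ~{final_percent:.3f}% (~{final_percent * 1e4:.0f} ppm)")

# Household recommendation from the text: 10 ml of stock in ~1 liter of cleaning water.
bucket_percent = STOCK_PERCENT * 10.0 / 1010.0
print(f"10 ml into 1 L -> ~{bucket_percent:.2f}% active ingredient "
      f"(~{bucket_percent * 1e4:.0f} ppm)")
```

The 1:100 bucket recipe works out to roughly 0.05% active ingredient, about 500 ppm, which is in line with the 1:100 laboratory ratio that did inhibit growth.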

Saturated salt – used to simulate the process of adding salt to water, food, or other sources of pathogenic bacteria – resulted in significantly low figures. In contrast, adding saturated sugar to the media did not turn out to be an effective method. Although sugar did not increase the growth rate of the bacteria, it did not have any effect in the prevention of growth, either.

Citric acid, another chemical commonly found in household environments for cleaning purposes, was tested as well. Ten microliters of a 1 molar solution was added to the medium (2 ml), diluting the citric acid to 5 mM. This concentration was not enough to prevent bacterial growth, as seen in the data.

Adding one drop of liquid soap to the media completely stopped bacterial growth. This confirmed that our common habit of washing our hands is indeed effective.

In order to verify that the results were reliable, a positive control was also kept for the experiment. These media had only bacteria inoculated into them and were incubated with the rest of the tubes; no other substance was added. For this reason, it can be seen from the data that the positive control gave the highest readings, with the most growth. This, together with the results obtained from the negative control, showed that the experiment was completed as intended and that reasonably accurate results were obtained.

To observe the microbes, a sample taken from the positive control was inoculated onto Agar plates and placed into a steady incubator at 37°C. After 16 hours, the Agar plates were examined under white light for the formation of colonies and photographed [Figure 2].

Figure 2: Photos of bacteria examined under white light. Samples taken from the positive control experiment were inoculated onto Agar plates. The opaque areas on the plates indicate bacteria colonies


After testing different methods used to kill “germs” in all aspects of our lives, there are many recommendations that can be made. Food that is consumed uncooked can be washed with vinegar, lemon juice, or salt. This combination is commonly found in salads, hinting that it is safe to eat! Also, food (e.g. pasta) and beverages (e.g. tea) to be cooked can be boiled for 10 minutes in order to kill all bacteria in them. For cleansing household surfaces, adding small amounts of bleach or hydrochloric acid will be sufficient. For our personal hygiene, soap is always reliable!!

In summary, some of the household methods used to kill bacteria in our daily lives did not prove to be effective at all. The data from the spectrophotometer, as well as the plate examinations, verify that not all of these methods are reliable, even though people might use them extensively in their homes.

Acknowledgements

1. Bilkent University Molecular Biology Laboratory, Ankara, Turkey.

2. Refik Saydam Hıfzıssıhha Merkezi (Public Health and Hygiene Center), Ankara, Turkey.

References

1. Gürel D, Atalar M, Arslan-Ergül A. “5 Second Rule” Under the Microscope, Bilim ve Teknik 482; 2008. p. 30-1. (In Turkish).

2. Microbiology Laboratory Manual, 2006-2007. Bilkent University Department of Molecular Biology and Genetics.

3. Lim D. Microbiology. 2nd ed. McGraw Hill; 1998.

About the Authors

Defne Gurel and Melis Atalar are high school students at Bilkent Laboratory and International School in Ankara, Turkey. Having shown great interest in scientific research, they won first and second place titles at their school's science fairs consecutively over the years. Defne and Melis have also qualified to display their projects at a national competition and were awarded 3rd Place nationwide at a Turkish National Invention Fair in 2007. The research in this article was conducted at Bilkent University's Molecular Biology and Genetics Laboratories, under the supervision and support of research assistant Ayca Arslan Ergul.


Author Institution Mapping (AIM)

Please note that not all the institutions may get mapped due to non-availability of the requisite information in the Google Map. For AIM of other issues, please check the Archives/Back Issues page on the journal’s website.


www.ysjournal.com

READ, WRITE, REVIEW
The free online science journal for scientists aged 12-20

Recent Articles:

What are tilings and tessellations and how are they used in architecture?
Jaspreet Khaira, Issue 7

Water as an alternative fuel
Sandy Clark, Issue 9

Sunscreen: a catch-22
Cole Blum, Samantha Larsen, Issue 8


Young Scientist Journeys Editors: Paul Soderberg and Christina Astin

This book is the first in The Butrous Foundation's Journeys Trilogy, in which young scientists of the past talk to today's young scientists about the future. The authors were members of the Student Science Society at their high school in Thailand in the 1960s; now, nearing their own 60s, they share the most important things they have learned about science specifically, and life generally, during their own young scientist journeys in the years since they published The SSS Bulletin, a scientific journal for the International School Bangkok.

Reading this first book is a journey that starts on this page and ends on the last one, having taken you, Young Scientist, to hundreds of amazing "places," like nanotechnology, Song Dynasty China, machines the length of football fields, and orchids that detest wasps. But the best reason to take the journey through these pages is that this book will help you prepare for all your other journeys. Some of these will be physical ones, from place to place, such as to scientific conferences. Others will be professional journeys, like from Botany to Astrobiology, or from lab intern to assistant to researcher to lab director. But the main ones, the most exciting of all your journeys, will be into the Great Unknown. That is where all the undiscovered elements are, as well as all other inhabited planets and every new species, plus incredible things like communication with dolphins in their own language, and technological innovations that will make today's cutting-edge marvels seem like blunt Stone Age implements.

For further information please write to [email protected]

The Butrous Foundation is dedicated to empowering today the scientists of tomorrow. The Foundation already publishes Young Scientists Journal, the world's first and only scientific journal of, by, and for all the world's youngsters (aged 12-20) who want to have science careers or want to use science in other careers. 100% of proceeds from sales of The Journeys Trilogy will go to the Foundation to help it continue to fulfill its mission to empower youngsters everywhere.

Book Details:

Title: Young Scientist Journeys

Editors: Paul Soderberg and Christina Astin

Paperback: 332 pages

Dimensions: 7.6 x 5.2 x 0.8 inches, Weight: 345 grams

Publisher: The Butrous Foundation (September 26, 2010)

ISBN-10: 0956644007

ISBN-13: 978-0956644008

Website: http://www.ysjourneys.com/

Retailer price: £12.45 / $19.95


The Butrous Foundation Journeys Trilogy

Thirty-one years ago, Sir Peter Medawar wrote Advice to a Young Scientist, a wonderful book directed to university students. The Butrous Foundation's Journeys Trilogy is particularly for those aged 12 to 20 who are inspired to have careers in science or to use the path of science in other careers. It is to "mentor in print" these young people that we undertook the creation and publication of this trilogy.

Young Scientist Journeys (Volume 1) This book

My Science Roadmaps (Volume 2) The findings of journeys into key science issues, this volume is a veritable treasure map of “clues” that lead a young scientist to a successful and fulfilling career, presented within the context of the wisdom of the great gurus and teachers of the past in Asia, Europe, Africa, and the Americas.

Great Science Journeys (Volume 3) An elite gathering of well-known scientists reflect on their own journeys that resulted not only in personal success but also in the enrichment of humanity, including Akira Endo, whose discovery as a young scientist of statins has saved countless millions of lives.

Table of Contents:
Introduction: The Journeys Trilogy, Ghazwan Butrous . . . 11
Chapter 1. Science is All Around You, Phil Reeves . . . 17
Chapter 2. The Beauty of Science, and The Young Scientists Journal, Christina Astin . . . 19
Chapter 3. The Long Journey to This Book, Paul Soderberg . . . 25
Chapter 4. Dare to Imagine and Imagine to Dare, Lee Riley . . . 43
Chapter 5. How the Science Club Helped Me Become a Human Being, Andy Bernay-Roman . . . 55
Chapter 6. Your Journey and the Future, Paul Soderberg . . . 63
Chapter 7. Engineering as a Ministry, Vince Bennett . . . 83
Chapter 8. Cold Facts, Warm Hearts: Saving Lives With Science, Dee Woodhull . . . 99
Chapter 9. My Journeys in Search of Freedom, Mike Bennett . . . 107
Chapter 10. Insects and Artworks and Mr. Reeves, Ann Ladd Ferencz . . . 121
Chapter 11. Window to Endless Fascination, Doorway to Experience for Life: the Science Club, Kim Pao Yu . . . 129
Chapter 12. Life is Like Butterflies and Stars, Corky Valenti . . . 135
Chapter 13. Tend to Your Root, Walteen Grady Truely . . . 143
Chapter 14. Lessons from Tadpoles and Poinsettias, Susan Norlander . . . 149
Chapter 15. It’s All About Systems—and People, J. Glenn Morris . . . 157
Chapter 16. A Journey of a Thousand Miles, Kwon Ping Ho . . . 165
Chapter 17. The Two Keys to Making a Better World: How-Do and Can-Do, Tony Grady . . . 185
Chapter 18. Becoming a Scientist Through the Secrets of Plants, Ellen (Jones) Maxon . . . 195
Chapter 19. The Essence of Excellence in Everything (and the Secret of Life), Jameela Lanza . . . 203
Chapter 20. The Families of a Scientist, Eva Raphaël . . . 211
Appendix: Lists of Articles by Young Scientists, Past and Present . . . 229
  The SSS Bulletin, 1966-1970 . . . 230-237
  The Young Scientists Journal, 2008-present . . . 237-241
Acknowledgements . . . 243
The Other Two Titles in the Journeys Trilogy . . . 247
Contents of Volume 2 . . . 249
Excerpt from Volume 3: A Great Scientist . . . 251
Index . . . 273

Editors Christina Astin and Paul Soderberg


The Butrous Foundation

The Butrous Foundation is a private foundation established in 2006. The current interest of the foundation is to fund activities that serve its mission.

The Mission

The foundation aims to motivate young people to pursue scientific careers by enhancing scientific creativity and communication skills. It aims to provide a platform for young people all over the world (ages 12-20 years) to participate in scientific advancements and to encourage them to express their ideas freely and creatively.

Thematic approaches to achieve the foundation mission:
1. To enhance communication and friendship between young people all over the world and to help each other with their scientific interests.
2. To promote the ideals of co-operation and the interchange of knowledge and ideas.
3. To enhance the application of science and its role in global society and culture.
4. To help young people make links with scientists in order to take advantage of global knowledge, and participate in the advancement of science.
5. To encourage young people to show their creativity, inspire them to reach their full potential and to be role models for the next generation.
6. To encourage the discipline of good science where open minds and respect to other ideas dominate.
7. To help global society to value the contributions of young people and enable them to reach their full potential.

Visit Young Scientists Journal: www.ysjournal.com