BlueSci Issue 21 - Easter 2011


Description: Cambridge University science magazine. FOCUS: Small Channels, Big Ideas

Transcript of BlueSci Issue 21 - Easter 2011

Page 1: BlueSci Issue 21 - Easter 2011

Cambridge University science magazine

Easter 2011 Issue 21

www.bluesci.co.uk

The Cambridge University science magazine from

Cell Lines . Chimeras . Chocolate . Marie Curie . Freedom of Information . Magnetoreception

FOCUS: Small Channels, Big Ideas

ISSN 1748-6920


Page 2: BlueSci Issue 21 - Easter 2011


You now have access to over 260 Cambridge Journals

To access Cambridge Journals please visit: journals.cambridge.org

Thanks to an agreement with Cambridge University Library, all staff and students of the University of Cambridge have online access to over 260 peer reviewed academic journals and over 180 journal archives published by Cambridge University Press.

Page 3: BlueSci Issue 21 - Easter 2011

Contents

Easter 2011, Issue 21

Features
Between You and Me (page 6): Louisa Lyon examines how distinct genomes can co-exist in an individual
Mountains: Go with the Flow (page 8): Alex Copley explains how fluid dynamics can help us understand geology
The Challenge of Chocolate (page 10): Rachel Berkowitz looks at the science that will allow us to make chocolate better
Birds' Eye View (page 12): Ian Le Guillou finds out about the 'biological compass' of cows, crocodiles and migrating birds
Superheroes, Fact or Fiction? (page 14): Mark Nicholson discovers how nature has turned fantasy into reality

FOCUS
Small Channels, Big Ideas (page 16): BlueSci explores microfluidic technology and its dazzling array of applications

Regulars
On the Cover (page 3)
News (page 4)
Book Reviews (page 5)
Behind the Science (page 22): Jessica Robinson uncovers some of the pioneering female scientists
Perspective (page 24): Tim Middleton gives his perspective on access to data and the recent scandals
Arts and Reviews (page 26): Stephanie Glaser discovers how shadows caught by camera-less photography bring light to an image
History (page 28): Nicola Stead looks back at the beginnings of cell culture
Technology (page 30): Anders Aufderhorst-Roberts examines the demands of the digital economy
A Day in the Life of... (page 31): Andy Shepherd talks to Richard Thompson about working at Caudex Medical
Weird and Wonderful (page 32)

About Us...
BlueSci was established in 2004 to provide a student forum for science communication. As the longest running science magazine in Cambridge, BlueSci publishes the best science writing from across the University each term. We combine high quality writing with stunning images to provide fascinating yet accessible science to everyone. But BlueSci does not stop there. At www.bluesci.co.uk, we have extra articles, regular news stories and science films to inform and entertain between print issues. Produced entirely by students of the University, the diversity of expertise and talent combine to produce a unique science experience.

Committee
President: Tim Middleton (president@bluesci.co.uk)
Managing Editor: Stephanie Glaser ([email protected])
Jessica Robinson ([email protected])
Wendy Mak ([email protected])
Sita Dinanauth ([email protected])
Joshua Keeler ([email protected])
Richard Thomson ([email protected])
Publicity Officer: Helen Gaffney ([email protected])
News Editor: Robert Jones ([email protected])
Jonathan Lawson ([email protected])

Page 4: BlueSci Issue 21 - Easter 2011

2 Editorial Easter 2011

Editor: Wing Ying Chow
Managing Editor: Stephanie Glaser
Business Manager: Michael Derringer
Second Editors: Tom Bishop, Felicity Davies, Emma Hatton-Ellis, Tamara Litwin, Luke Maishman, Claire Mclaughlan, Imogen Ogilvie, Kirsten Purcell, Paul Simpson, Raliza Stoyanova, Talya Underwood, Georgie Ward
Sub-Editors: Stephanie Boardman, Emma Hatton-Ellis, Muhammad Zaeem Khalid, Jonathan Lawson, Tim Middleton, Lindsey Nield, Rose Spear, Richard Thomson
News Editor: Robert Jones
News Team: Jonathan Lawson, Imogen Ogilvie, Katy Wei
Book Reviews: Taylor Burns, Talya Underwood
Focus Team: Helen Gaffney, Lindsey Nield, Wendy Mak, Vivek Thacker
Weird & Wonderful: Tom Bishop, Mike Kenning, Georgie Ward
Pictures Team: Felicity Davies, Stephanie Glaser, Emma Hatton-Ellis, Muhammad Zaeem Khalid, Wendy Mak, Tim Middleton, Jessica Robinson, Nicola Stead, Richard Thomson, Talya Underwood
Production Team: Stephanie Boardman, Felicity Davies, Ian Fyfe, Stephanie Glaser, Tim Middleton, Kirsten Purcell, Mrinal Singh, Rose Spear, Nicola Stead, Talya Underwood
Illustrators: Dominic McKenzie, Alex Hahn

Cover Image: Jamie Gundry

ISSN 1748-6920

Varsity Publications Ltd
Old Examination Hall
Free School Lane
Cambridge, CB2 3RF
Tel: 01223 337575
www.varsity.co.uk
[email protected]

BlueSci is published by Varsity Publications Ltd and printed by The Burlington Press. All copyright is the exclusive property of Varsity Publications Ltd. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, without the prior permission of the publisher.

Issue 21: Easter 2011

In Ink and on Paper

Another term, another issue of BlueSci—as a magazine that has succeeded through 20 Cambridge terms, there is certainly no doubt that we will be carrying on—or is there?

Do not get me wrong, the BlueSci community is as strong as ever. As you can see on the left, this issue has been the culmination of over thirty people’s hard work. With the new series of [POPULAR Science] talks, we will certainly keep growing. But in these austere times, can we justify continuing to print a colourful and glossy magazine?

I am very grateful to our sponsors who make it possible to continue our magazine in ink and on paper. I am sure they appreciate, as I now do, that printing a magazine brings a set of challenges that one may not face while publishing in electronic media. Deadlines need to be set and met, otherwise the magazine cannot be printed in time for distribution at the start of each term. Teamwork and creativity are needed to find images and lay out the design. Technical details such as picture resolution and colour management need to be considered. The negotiation of advertisement and sponsorship has to be preceded by networking and initiative. In all, these are learning opportunities that will enrich the skills of every volunteer involved, transferable and valuable in any career they might go on to.

For me, it is a great pleasure to have the opportunity to work with so many perceptive and enthusiastic individuals, who contribute each in their own way to make BlueSci magazine a pleasure to read, in content, style and visual appeal.

So what do we have in store for you this issue? Our Focus section looks at microfluidic devices made of channels as wide as a human hair. Elsewhere in the issue, we look at other small things: the history of cell lines, the wings of insects, the molecular makeup of chocolate and the internal compass of cows. We also look at other big ideas: pioneering woman scientists, open data and freedom of information.

Find yourself a nook and have a good read—and if you feel inspired to get involved, do get in touch.

Wing Ying Chow Issue 21 Editor


Page 5: BlueSci Issue 21 - Easter 2011

A Prettier Shell than Ordinary

COMBING BEACHES and collecting pretty shells is a common childhood hobby. For some, the enjoyment continues into adulthood, perhaps even formalised as a scientific interest. Robert MacAndrew, whose collection now forms the nucleus of the shell collection at the University Museum of Zoology, was one of those.

As the owner of a shipping company, he took the ample opportunities his ships provided to collect mollusc shells from the North Atlantic, Mediterranean and Gulf of Suez, gaining fellowship of the Royal Society for his work. MacAndrew was one of the pioneers of deep sea dredging, working closely with prominent naturalist Edward Forbes. They reported annually to the British Association for the Advancement of Science on the material that they recovered from increasingly greater depths.

On the cover is Divaricella macandrewae, from MacAndrew's collection, showing beautiful intersecting ribbing that helps it to burrow. This specimen was collected from the Gulf of Suez in early 1869, before the opening of the Suez Canal later that year. It forms part of a larger collection of shells that provide a baseline from which to gauge the extent of subsequent animal migrations between the Red Sea and Mediterranean via the Suez Canal, which are called 'Lessepsian' migrations after the engineer of the Canal.

This is not the only example of molluscs affected by human activities. According to the 2010 statistics of the International Union for Conservation of Nature, 44% of the animal species that have been recorded as becoming extinct since the year 1500 have been molluscs. Several species of mollusc in the collection have already become extinct, due to habitat destruction and introduction of invasive species to their vulnerable habitats.

MacAndrew bequeathed his collection of mollusc shells to the University Museum of Zoology on his death in 1873. It now forms part of a wider collection of mollusc specimens assembled there, one of several collections of international significance in the museum that have led to it receiving designated status by the Museums, Libraries and Archives Council (MLA). Most notable in the MacAndrew collection is the large number of type specimens. These are the original specimens of a species to be scientifically described, and form the cornerstone of species definitions. MacAndrew was also interested in collecting growth series, mounting specimens of each species at various stages of development on his signature blue card, so increasing the value of his collection as a reference tool. The majority of the specimens, by necessity, are stored in great cabinets behind the public face of the museum, but work is afoot to open this great resource up to the public.

Recently, a project was undertaken at the museum to produce an online catalogue of all their bivalve molluscs. Bivalves have shells consisting of two halves hinged along one edge by interlocking teeth and flexible ligaments. The group includes clams, oysters and mussels. During the last eleven months, Hilary Ketchum has catalogued every one of the ten thousand bivalve molluscs in the museum's collection. Each of the 200 or so molluscan type specimens has been photographed by Jamie Gundry, examples of which we see on the front cover and on this page. The bivalve collection is now fully recorded online, meaning that researchers can view good quality images of type specimens from remote locations, reducing the need to send material out on loan. A key strength is the ability to search by sampling sites, collectors or dates of collection.

This project, sponsored by a designated development fund of MLA, is another step towards achieving the museum's long term aim of cataloguing all of its specimens online, giving the whole world access to these rich scientific resources.

If you want to see examples of the shells yourself, a new display has opened at the University Museum of Zoology, showing bivalves thematically displayed according to their lifestyles.

Tom Ash looks into the story behind this issue’s cover image

On the Cover 3 Easter 2011

Tom Ash is a PhD student in the Department of Clinical Neurosciences

JAMIE GUNDRY

Page 6: BlueSci Issue 21 - Easter 2011

News

The Sun as we've never seen it before

NASA have moved the twin STEREO imaging probes into position on opposite sides of the Sun, revealing our star for the first time in all its 3D glory. The telescopes on STEREO are sensitive to four wavelengths of ultraviolet radiation, allowing them to trace key aspects of solar activity, including solar flares, tsunamis and magnetic filaments, which will greatly advance not just theoretical solar physics research, but also space weather forecasting.

Previously, active regions could suddenly emerge from the far side of the Sun, spitting flares of intense electromagnetic radiation towards the Earth that can cause severe disruption to airlines, power supplies and satellite operations. With STEREO, active regions of the Sun can now be tracked, so that scientists can predict with accuracy when solar radiation is likely to reach dangerously high levels. Solar storms on their way towards other planets can also be tracked, which holds great importance for NASA missions elsewhere in the solar system.

Monitoring the entirety of the Sun's surface could also help to solve some of the many fundamental puzzles underlying solar activity: researchers have long suspected 'global' interactions between eruptions on opposite sides of the Sun, and STEREO provides observational data to test these theories. With higher resolution images on their way, it seems that the sky's the limit for solar physics. kw

New mosquito subgroup solves malaria mysteries?

Malaria is responsible for around one million deaths in Africa alone every year. The disease is transmitted to humans through the bites of Anopheles mosquitoes that carry the Plasmodium parasite.

An international team of scientists describing populations of Anopheles gambiae in West Africa have discovered a new subgroup of the mosquito that may hold the key to better control of malaria. The genetically distinct subgroup, called 'Goundry', was identified by comparing genetic markers and mutations in mosquito genomes. As well as being genetically distinct, Goundry was found to have an important behavioural trait which surprised many scientists; rather than living primarily inside people's homes, the new strain lives outside.

This behaviour has potential implications for malaria eradication programmes that have previously focused on preventing mosquito bites in the home. The researchers working on Goundry have already found that the new subgroup is more susceptible than other strains to picking up the Plasmodium parasite from infected blood. The next step is to ascertain whether they commonly bite humans. If the newly discovered mosquitoes are found to be an important transmitter of malaria, eradication programmes must shift their focus to include outdoor bite prevention too. io

Human migration out of Africa

It is widely accepted that humans originated in Africa, and current theory states that modern humans did not leave their original homeland until 65,000 years ago, with the exception of a few isolated populations. However, a team working in Jebel Faya in the United Arab Emirates have proposed a much earlier dispersal into the Arabian Peninsula.

The team discovered tools which are at least 95,000 years old, suggesting that modern humans migrated across southern Arabia around 125,000 years ago when the area was more hospitable. Some of the tools found at Jebel Faya show notable similarity to those made by contemporary Homo sapiens in Africa, leading the authors to suggest that modern humans have lived continuously in the area until at least 40,000 years ago. However, this directly contradicts genetic evidence stating that populations were constantly wiped out and replaced by the changing climate during this period. Additionally, the team found no human fossils and their dating evidence was strongly inconsistent.

Whilst it seems likely that Jebel Faya was inhabited by modern humans earlier than was previously thought, the humans probably only stayed for a short period, and other human species may have also occupied the area. Whilst this is a promising lead, more work is needed to reliably establish the role of Arabia in early migration of H. sapiens out of Africa. jl

Check out www.bluesci.co.uk or @BlueSci on Twitter for regular science news and updates

Image credits: NASA; Wessex Archaeology

4 News Easter 2011

Page 7: BlueSci Issue 21 - Easter 2011

Book Reviews

Delusions of Gender (Icon Books Ltd, 2010, £14.99)

IS THERE A FEMALE BRAIN? By its biased formulation, the question induces the various stereotypical schema that stain any attempt to answer it. We assume, from the very basis that men and women do exist, that male and female 'brains' must also exist. Then, so very often, we, as scientists, work backwards: our culture has found its conclusion—that men and women are different in most aspects—so we search for evidence to support this thesis, uncharacteristically turning the scientific process on its head.

This is why we should be ever grateful for Cordelia Fine's latest book. Fine makes a strong neuroscientific case for the cultural—rather than biological—dimension of gender. But Fine is not trying to convince you that gender is a purely cultural phenomenon. Rather, her conclusion is much humbler—and more palatable than the overextended conclusions of much gender research: the debate is still open, and research on the 'biological' basis of gender is largely inadequate.

At the very least, Fine has provided us with an engaging, literate and powerful argument for thinking twice about gender 'science', bringing our brains back from Mars and Venus to a culturally complex Earth. TB

The Three Cultures (CUP, 2009, £14.99)

"IT IS TIME FOR THE MEMBERS of the three cultures to adopt a posture of greater humility." Humble, says Jerome Kagan, because they are losing their appeal.

The 'three cultures' of the natural sciences, social sciences and humanities are, Kagan claims, plagued by insularity and a lack of mutual respect. Now, it seems, all three cultures share only one characteristic: their claims and authority are only valid and substantive within their own specific communities. Accordingly, the influence of all three areas of study is receding in the public sphere, and whenever borders are crossed, it is usually marred by impotence and misunderstanding. Their primary concerns are scattered, they have no respect for the others' sources of evidence, they are increasingly jargon-rich and dominated by hegemonic funding bodies.

But, Kagan is clear, there is reason for hope. Though they each have their flaws and limitations, including natural science, they all contribute to a shared understanding of the universe that would be impossible if one was not present. Narrow-mindedness, then, is the danger, and it was not so long ago that a healthy integration existed between the three cultures—a paradigm that Kagan would like to reinvent.

The book certainly has some flaws. For instance, barely 10% of the book is dedicated to the humanities, while the social sciences receive well over half of the attention. But the importance of the message remains: that the sum of the human pursuit for knowledge is greater than its parts. TB

The Humans Who Went Extinct (OUP, 2010, £9.99)

AS OUR CLOSEST RELATIVES, surviving alongside us until 28,000 years ago, the Neanderthals hold an understandable fascination for us. How similar to us were they? How intelligent were they? Most importantly, why did they die out? Clive Finlayson addresses these fundamental questions head on in The Humans Who Went Extinct. He challenges the central dogma that humans drove Neanderthals to extinction. Instead Finlayson places greater emphasis on the role of serendipity in the Neanderthals' demise, demonstrating the importance of rapidly fluctuating environmental conditions during this key period of evolutionary history. The equally important question of why humans survived is also brought to our attention and the book charts the journey from our birth in Africa to our establishment across the wider world. The great strength of the book is that Finlayson avoids subscribing to traditional viewpoints that may hinder our understanding, providing a refreshing and perceptive overview of a topic fraught with controversies. The book leaves the reader with a sense of humility that our survival as the only species of the human lineage was strongly shaped by the environment, climate and chance. TU

Book Reviews 5 Easter 2011

Page 8: BlueSci Issue 21 - Easter 2011

In 1998, Karen Keegan, a 52-year-old woman from Massachusetts, received some extraordinary news. Tests revealed that she was not the mother of two of her three grown-up sons. While genetic tests had confirmed her husband as their biological father, the same tests had failed to detect a match between Karen's own DNA and that of two of her children. When the tests were repeated, using new samples, the same results were obtained, ruling out the possibility of mislabelling or cross-contamination in the lab. While it seemed possible that both sons had been swapped at birth, the chances of this happening to the same woman on two separate occasions, years apart, are incredibly small. In addition, Karen's husband's DNA had produced the expected match. Although this unlikely-sounding story is indeed true, the explanation may appear as strange as the finding itself.

The breakthrough in solving the mystery came when researchers used a tissue other than blood to obtain a sample of Karen's DNA. When genetic material from a cheek swab was sequenced, the anticipated match between Karen's DNA and that of all three of her sons duly emerged. Further studies revealed that, remarkably, Karen's body comprised two distinct populations of cells, each with its own unique genetic code. In Karen's blood, one cell type had come to dominate while in other tissues, including Karen's ovaries, both cell types co-existed side by side. One of Karen's sons had developed from an egg containing Karen's first set of DNA—the DNA also found in the majority of her blood cells—while the other two sons developed from egg cells containing Karen's second genome. Researchers believe that the most likely explanation for this phenomenon is that Karen's mother may have conceived non-identical twins who, at an early stage of pregnancy, fused to form a single embryo, otherwise known as a chimera.

In Greek mythology, the Chimera was a fire-breathing female monster with the body of a lioness, the head of a goat, and the tail of a snake. In genetics, a chimera describes an individual in whom two or more genetically distinct populations of cells, derived from different individuals, co-exist. While chimerism is extremely rare, with only around 30 cases described to date, it is likely that many more instances of the condition go undetected, as there are usually no outward signs.

Microchimerism, on the other hand, is far from rare. Microchimerism may be described as chimerism diluted: whereas chimeric individuals have broadly similar numbers of cells derived from each constituent individual, a microchimeric person will possess approximately 50 ‘foreign’ cells per million of their ‘own’ cells. The most common source of microchimerism is transfer of cells, across the placenta, from foetus to mother. Foetal cells can be detected in the bloodstream of nearly all pregnant women by the third trimester and this cell exchange may actually help the mother’s immune system to tolerate the foetus. Interestingly, microchimeric cells do not appear to be fully eliminated after birth: stable populations of these cells have been detected in women many decades later. Notably, transfer across the placenta also occurs in the opposite direction—from mother to foetus—providing a mechanism through which infants may become microchimeric. The condition may also result from transfer of cells between twins as well as between unrelated individuals via organ transplants, bone-marrow transplants and blood transfusions.

Between You and Me

6 Between you and me Easter 2011

Chimera on an Apulian plate from the Louvre Museum (Image: Marie-Lan Nguyen)

Louisa Lyon examines how distinct genomes can co-exist in an individual

DOMINIC McKENZIE

Page 9: BlueSci Issue 21 - Easter 2011

Apart from producing occasional conundrums for geneticists, does chimerism—and indeed microchimerism—have any enduring consequences for health? The evidence for this is mixed, with both beneficial and harmful effects reported. In terms of harm, it has been suggested that microchimerism may increase the risk of developing a number of autoimmune diseases including scleroderma, rheumatoid arthritis and multiple sclerosis. A link was first proposed for scleroderma, a disease characterised by hardening of areas of skin or internal organs, after it was noted that the disease bore some resemblance to graft-versus-host disease. This is an immune response that sometimes occurs following an organ transplant, wherein the donated organ begins to attack the recipient's tissues. One possibility, therefore, is that microchimeric cells may trigger an inappropriate and damaging immune response within their 'host'. Many studies have reported an increased number of microchimeric cells in scleroderma patients as well as in those with other autoimmune diseases. However, not all autoimmune disease sufferers show microchimerism, and clearly not all microchimeric individuals develop autoimmune disease.

The fact that both microchimerism and autoimmune disease are affected by pregnancy may offer a further clue. Most autoimmune diseases are markedly more common in women, with clinical onset typically occurring around the child-bearing years. And yet, paradoxically, some patients find that their symptoms actually improve during pregnancy, but return shortly after the birth. Studies into rheumatoid arthritis suggest that the relationship between pregnancy and autoimmune disease may depend upon human leukocyte antigen (HLA) genes. HLA class II genes exist in many different variants and are known to play a key role in determining immune responses. If a mother and foetus possess relatively distinct HLA gene variants, arthritis symptoms are likely to improve during pregnancy, while if mother and foetus have more similar HLA types, symptoms tend to remain constant or to worsen. Outside of pregnancy, the degree of HLA compatibility between microchimeric cells and their host may likewise be one of the key factors that determine whether microchimerism is an autoimmune friend or foe.

When it comes to cancer biology, microchimerism is viewed in an altogether more favourable light. Women who have previously been pregnant and who show microchimerism appear less likely to develop many forms of cancer. They also show greater therapeutic response and better survival rates, as compared to women who have previously given birth but in whom microchimerism cannot be detected. The evidence seems to be particularly strong in the case of breast cancer. In a mouse model of the disease, microchimeric cells were detected in large numbers at the tumour site. These cells did not express tumour-markers, however, and did not behave like tumour cells; instead they appeared to be helping with cell repair. Microchimeric cell transfer, from foetus to mother, may be particularly beneficial in this regard as some of the cells transferred appear to be pluripotent stem cells with the potential to develop into any of a number of different cell types. In a rat model of liver and kidney disease, for example, microchimeric cells have been observed moving to the site of the two organs, transforming into hepatocytes and renal tubular cells, respectively, and seemingly engaging in repair of damaged tissue.

The precise mechanisms through which microchimeric cells exert their effects remain largely unknown. Indeed, it is often difficult to identify the role that microchimeric cells are playing. They may be innocent bystanders, accomplices to autoimmune diseases, or helpful agents in cell repair. Multiple factors including HLA type, the origin of the microchimeric cells and the length of time since cell transfer may all influence which of these roles microchimeric cells adopt under any given set of circumstances. One thing that is clear is that many of us may have rather more in common than we previously thought.

Louisa Lyon is a postdoctoral researcher in the Department of Experimental Psychology

Between you and me 7 Easter 2011

Cell transfer from foetus to mother is the most common source of microchimerism (Image: Scribbletaylor)

Marmosets have been shown to exhibit chimerism (Image: Joachim S. Müller)

Page 10: BlueSci Issue 21 - Easter 2011

At school we are taught the theory of plate tectonics, originally developed during the 1960s. We learn how the Earth's surface is divided into about 14 rigid plates that are slowly moving across the surface of the planet. The theory has a simple elegance: only three parameters for each plate are required to describe all of the observed motions. However, plate tectonic theory cannot explain the movement and formation of all regions of the Earth, and an exciting and vigorous area of research within the Earth Sciences is trying to understand the regions where plate tectonic theory does not work.

The rocks that form the continents are extremely diverse. They have been formed in a variety of different ways and have ages of formation varying from over three billion years ago to the present day. In contrast, those underlying the oceans were all formed by volcanic activity, along the network of undersea mountain ranges where plates are moving apart. These different origins have led to significant differences between the chemical composition of the rocks forming the oceans and those underlying the continents. Consequently, there are differences in the strengths of the rocks. The strong oceanic plates break along narrow plate boundaries, such as at the mid-ocean ridges, and behave exactly as set out in the theory of plate tectonics. However, the most dramatic mountain ranges on Earth, such as the Andes, the Himalayas, and the Tibetan Plateau, have grown where some of the weaker continental regions have been squashed between converging plates. One such example is southern Asia, where for the past 50 million years India has been colliding with Asia at the geologically rapid rate of 4 centimetres per year. The Tibetan Plateau and the region extending thousands of kilometres to the north is being deformed by this plate convergence, resulting in the unfortunate occurrence of many earthquakes. Unlike the stable interiors of tectonic plates, the entirety of this area is actively deforming. It is in regions such as the Tibetan Plateau that we have been required to develop an alternative to the theory of plate tectonics in order to understand what is happening.

The presence of earthquakes, which occur when two blocks of brittle rock suddenly slip against each other, shows the Tibetan Plateau to be actively deforming. However, this only tells us about what is happening in the uppermost layer of the Earth. The thickness of this brittle layer is tiny, equivalent to less than 0.2% of the Earth's radius and less than 10% of the upper layers of the planet that participate in mountain building and plate tectonics. Deeper within the planet, temperatures are higher, which means that defects in the crystal lattices of minerals are able to move with increasing ease (the diffusion and dislocation creep that will be familiar to many scientists from other disciplines). This movement of lattice defects causes the rocks

Alex Copley explains how fluid dynamics can help us understand geology

Mountains: Go with the Flow

As custard cools, a layer of skin forms on top, which may sometimes tear: earthquakes can be understood in the same way

8 Mountains: Go with the Flow

DOMINIC McKENZIE

Easter 2011

Image: S. John Davey

Page 11: BlueSci Issue 21 - Easter 2011

composed of these minerals to behave as viscous fluids, with a viscosity over 10²⁰ times greater than honey. Mountain ranges can therefore be imagined as piles of very thick custard left out to cool. Earthquakes become the tearing of the cool skin on the surface. Given this fluid-like behaviour of the majority of the mountain ranges, we can use the rich body of fluid dynamics research to understand how the ranges behave, and shed light on the forces that cause the earthquakes.

Such research has recently been conducted on the Tibetan Plateau. The figure shows that the southwestern (Himalayan) and northern margins of the Tibetan plateau are steeply sloping, whilst the northeastern and southeastern sides have much more gentle gradients. Because of their geological histories over the past few billion years, the rocks that form peninsular India and those in the lowlands on the northern side of the Tibetan Plateau are very hard and inert. This is because in previous episodes of mountain building, the rocks have been heated to extreme temperatures. Although they have since cooled down, all of the volatile constituents, which serve to weaken the rocks as a whole, have been melted out of them. In these regions the convergence between the Indian and Asian plates forces the rigid lowlands underneath the mountains. The mountains ooze outwards under their own weight, over-riding the lowlands like a dollop of custard spreading under its own weight across a slice of toast, known as a ‘gravity current’.

Fluid dynamics tells us that in this situation the flow forms a very distinctive shape with a flat top and a steeply sloping front, as we see on the northern and southwestern sides of Tibet.

The gentle slopes on the northeastern and southeastern sides of Tibet show that the story here is clearly different. In these regions, there are no hard rocks being pushed under the edges of the mountains, as is the case in India, so the mountains spread out over hotter and weaker underlying material. Our custard is now spreading over the surface of a tank of olive oil, rather than a slice of toast. In this case fluid dynamics tells us that gentle slopes should form, as we see on the eastern edges of the mountain range. The notable exception is the location of the devastating magnitude 8 earthquake that occurred in the Sichuan province of China in 2008. Here we see steep slopes because the Sichuan Basin is a region of hard rock (much like India), with the mountains oozing out over it. The earthquake was a manifestation of this motion in the brittle upper layer of the Earth—the skin on the top of our custard.

Treating the Tibetan Plateau as a pile of viscous fluid has shown that we can understand the shape of the mountain range and the earthquakes within the region by applying our existing knowledge of fluid dynamics. So the next time you’re eating Marmite on toast, take a moment to place a large dollop in the middle of the slice and look at how it spreads out under its own weight; by doing this you are recreating southern Tibet and the Himalayan mountains in miniature.
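For readers who want to play with this picture numerically, the 'custard' analogy corresponds to the standard thin-film (lubrication) description of a viscous gravity current. The Python sketch below is purely illustrative and is not the researchers' model; the density, viscosity, grid and run time are assumed values chosen only to show a slumping block of 'rock' developing the characteristic flat top and steep front.

```python
import numpy as np

# Illustrative sketch only (not the model used in the research described above):
# a two-dimensional viscous gravity current spreading over a rigid base, using
# the standard thin-film (lubrication) equation
#     dh/dt = (rho * g / (3 * mu)) * d/dx( h^3 * dh/dx )
# All parameter values are assumptions chosen for illustration.

rho = 2800.0       # rock density, kg/m^3 (assumed)
g = 9.8            # gravity, m/s^2
mu = 1.0e21        # viscosity, Pa s (roughly 10^20 times that of honey)

L, N = 2.0e6, 401  # 2000 km wide domain, number of grid points
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]

# Start from a crude 'plateau': 5 km high and 500 km wide in the middle.
h = np.where(np.abs(x - L / 2) < 2.5e5, 5.0e3, 0.0)

coeff = rho * g / (3.0 * mu)
dt = 0.2 * dx**2 / (coeff * h.max()**3)   # explicit time step, within the stability limit

year = 3600.0 * 24 * 365.25
t, t_end = 0.0, 1.0e7 * year              # run for ten million years
while t < t_end:
    h_face = 0.5 * (h[1:] + h[:-1])                   # thickness at cell faces
    q = -coeff * h_face**3 * (h[1:] - h[:-1]) / dx    # volume flux at faces
    h[1:-1] -= dt * (q[1:] - q[:-1]) / dx             # conservative update of interior points
    t += dt

print(f"Maximum height after 10 Myr: {h.max():.0f} m")
print(f"Current extends over roughly {np.sum(h > 1.0) * dx / 1e3:.0f} km")
```

Plotting h against x after the loop shows the flat-topped, steep-fronted profile described above; lowering the assumed viscosity lets the pile slump noticeably further in the same time.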

This story, however, carries a sobering postscript: the same processes which are occurring in the Himalayas are also occurring in the region of the devastating Sichuan earthquake. This means it is only a matter of time until a similar or larger earthquake rocks the very densely populated Ganges River Valley—an event which, as historical records show, has happened many times in the past.

Alex Copley is a Research Fellow in Pembroke College and the Department of Earth Sciences

Mountains: Go with the Flow 9

The mountains ooze outwards under their own weight onto the lowlands, which are made of more rigid rock

Easter 2011


The Himalayan and northern margins of the Tibetan Plateau are steeply sloping, whilst the northeastern and southeastern sides have much more gentle gradients (Image: Alex Copley)

Page 12: BlueSci Issue 21 - Easter 2011

With increasing pressure to produce healthier foods and soaring energy costs, companies such as Mars and Nestlé are increasing their support for research into the manufacture of healthier and cheaper chocolate. British research councils are also in on the act: last autumn the Institute of Food Research (IFR), a UK research institute under the Biotechnology and Biological Sciences Research Council, announced its involvement in new projects aimed at diet and health-related research worth a total of four million pounds. Researchers are collaborating with companies to make the most of what they know about the chemistry of chocolate—and consumer tastebuds—to create a new line of lower-fat, cost-effective chocolate.

Making a low-fat chocolate bar that tastes and feels right is not easy. “The big question is how chocolatiers can meet the growing demands of the market for a lower-fat chocolate while keeping true to the classic taste and texture,” says Cambridge chemical engineer Joel Taylor.

British consumers are particularly picky. Chocolate crumb, the basis of the chocolate we know today, was developed in Britain during the early twentieth century to increase the shelf-life of chocolate, and is the source of its unique flavour. It is made by combining dry ingredients with water to form a paste. The paste is then dried and milled, lowering water content and preventing moulds from growing. The drying process causes a reaction between proteins and sugars known as the Maillard reaction, which introduces a 'cooked' flavour. British expatriates in the US and elsewhere pay top dollar for imported British chocolate so that they can enjoy the tastes of home.

The classic chocolate bar is adept at satisfying our exacting taste buds. Most of the flavour of chocolate comes from sugar, which along with cocoa comprises a particle suspension in a fatty fluid, usually cocoa butter. To keep the mixture smooth, chocolate manufacturers add emulsifiers. Emulsifiers eliminate friction between particles by sticking to non-fat particles and making it easier for fats to coat them.

As you bite into a bar of milk chocolate, the chocolate melts in your mouth and reverts to a fluid state. The creamy texture spreads the taste across your tongue, but you might not realise that the melt reaches three different types of flavour receptors in your mouth at different times. The timing is perfect, thanks to the carefully engineered size distribution of particles which affects the friction in the mixture.

Manufacturers are already making progress towards a healthier chocolate by adding oil substitutes and varying ingredients in the emulsion to reduce fat content. However, chemical engineer Phil Cox and his team at the University of Birmingham went one step further in 2009, producing chocolate using more water than oil while still retaining the taste of conventional chocolate. Their cocoa butter emulsions, which are suspensions of two liquids that do not mix, contained up to 60% water. Most emulsions rely on the propensity of oil to remain separate from other liquids and, as a result, are fattier. The water-based emulsions remained stable during storage and the cocoa butter melted around 33°C, the temperature which consumers find the most attractive according to Stephen Beckett, a former Nestlé chocolate researcher. Further research at Birmingham has led to the development of a protein structure filled with air that mimics the properties of fatty molecules in cocoa

Rachel Berkowitz looks at the science that will allow us to make chocolate better: healthier, cheaper and as tasty as before

The Challenge of Chocolate

The classic chocolate bar is adept at satisfying our exacting taste buds

10 Chocolate Easter 2011

MUHAMMAD ZAEEM KHALID, AYMEN RIZWAN AND DOMINIC McKENZIE

Page 13: BlueSci Issue 21 - Easter 2011

butter, and can replace some of the fattening oils in foods such as mayonnaise and salad dressings.

However, these innovations have not been without their own problems. Less fatty chocolates have a harder texture. In an attempt to counter this, biochemist Bettina Wolf and students at the University of Nottingham tried adding limonene to low-fat chocolate. Limonene is a citrus fruit-derived oil-soluble substance that compensates for the compromised softness and quality in reduced-fat chocolate. It decreases cocoa butter viscosity by mixing within the cocoa butter’s structure and diluting the fat. It also reduces the formation of fat crystals in cocoa butter, decreasing the solid fat content and hardness of the chocolate.

Cost is always a concern, too. Taylor notes that “cocoa butter is expensive, so substituting other fats in chocolate is important economically as well as for health reasons. Hence you get combinations of milk fats and cocoa fats, which affect taste and texture.” Manufacturers make particles in the chocolate mixture as large as possible to reduce costs. Smaller particles have a higher surface area and require more fat to coat. On the other hand, larger particles make for less smooth, grittier chocolate, which does not taste as good. Getting the size of the particles right is yet another challenge for researchers.

During the manufacture of chocolate, mixtures of different particle sizes clump together tightly leading to thicker fluid and rougher textures. To avoid this, manufacturers use energy-intensive techniques including conching, a process of kneading and stirring at high temperature for many hours in a seashell-shaped vessel.

Taylor questions whether the conching process is as efficient as it could be. He studies the flow, or rheology, of molten crumb chocolate using a machine called a rheometer that applies a shear stress or strain over time. The most commonly used rheological model for chocolate was originally developed to model the rheology of printing inks. However, after studying chocolate over a wider range of shear stresses, Taylor and fellow chemical engineer Alex Routh recommended a different model. Their new, improved model better describes the properties of crumb chocolate rheology. Routh argues: "if you can understand chocolate structure over time, maybe you can achieve [the same] structure using less energy, and perhaps eventually bypass the conching process."
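The article does not name the model, but the equation traditionally used for molten chocolate, and originally developed for printing inks, is the Casson equation, in which the square root of the shear stress grows with the square root of the shear rate above a yield stress. The snippet below is a sketch of that classic model only, not the new model proposed by Taylor and Routh, and the parameter values are made-up, order-of-magnitude guesses.

```python
import numpy as np

# Illustrative sketch of the Casson model (the classic printing-ink rheology
# adopted for chocolate), NOT the improved model described in the article:
#     sqrt(tau) = sqrt(tau_0) + sqrt(eta_ca * gamma_dot)
# tau_0 is the yield stress and eta_ca the Casson (plastic) viscosity.
# Parameter values are assumptions for illustration only.

def casson_stress(shear_rate, tau_0=10.0, eta_ca=2.0):
    """Shear stress (Pa) predicted by the Casson model for a given shear rate (1/s)."""
    return (np.sqrt(tau_0) + np.sqrt(eta_ca * shear_rate)) ** 2

for rate in (0.1, 1.0, 10.0, 100.0):
    print(f"shear rate {rate:6.1f} 1/s -> stress {casson_stress(rate):8.1f} Pa")
```

Fitting the two parameters to rheometer data over a narrow range of shear rates works reasonably well, which is why the wider-range measurements mentioned above were needed to show where the classic model breaks down.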

However, as Taylor highlights, "it is not just chocolate, all food manufacturers face the same challenge" of producing lower-fat products while maintaining their traditional taste. Therefore, the food industry has entered a new period of research into health and nutrition. Nutritionist Alison Lennox of the Human Nutrition Research (HNR) centre in Cambridge notes that recent years have seen food companies' research teams increase their interest in nutrition quality. Research at the HNR has provided an understanding of specific nutrients and health implications, but Lennox needs to explain to those who support government initiatives "why you cannot make a low-fat biscuit and still have it taste right." That is where the scientists are needed.

Will food companies have trouble finding talented young minds to do the research? Not likely, if the Cambridge research group is anything to go by. “I have always liked eating chocolate,” explained Taylor, “so researching the stuff seemed like a pretty good idea.”

Rachel Berkowitz is a PhD student at the BP Institute in the Department of Earth Sciences

The Challenge of Chocolate 11 Easter 2011

Page 14: BlueSci Issue 21 - Easter 2011

The ancient Greeks, like many people since, were confounded and fascinated by the migration of birds. Homer recognised that cranes "flee the winter and the terrible rains and fly off to the world's end". Meanwhile, Aristotle wrongly asserted that each year summer redstarts would transform into robins come winter, as the two species were never seen in Greece together. In modern times, we have come to appreciate the vast distances covered by migratory animals and the remarkable precision with which they make the journey. How is this feat achieved?

It is known that animals use sounds, landmarks or even smells to guide and navigate their way across continents. But the most intriguing and least understood navigation ability is magnetoreception: the detection of the Earth’s magnetic field through an internal, biological compass. Evidence for this capability was shown in a wide variety of animals, from ants to crocodiles. In fact, wildlife rangers in Florida resorted to taping magnets to the heads of crocodiles in order to prevent them finding their way back after being relocated. Magnetoreception has even been suggested in the humble cow, after researchers using Google Earth accidentally discovered that cows tend to line up parallel to the Earth’s magnetic field.

Magnetoreception was first observed in captive robins in 1957. In autumn, when it was time for them to migrate from Frankfurt to Spain, they kept flying southwest in their cage. This happened even though the room was isolated from any external visual stimuli with which the robins could orientate themselves. This led to the idea that robins might use an internal magnetic compass to migrate. Many studies have been conducted since, but controversy still rages over the exact underlying mechanism of magnetoreception.

Over fifty animal species have been found to use an internal magnetic compass so far, and several different mechanisms have been proposed and observed. The most established mechanism relies on the presence of small crystals of magnetite, a naturally magnetic mineral, in either the nose or the beak, surrounded by receptor nerves. Magnetite has been found in many animals, including humans, where it can be used to sense the magnetic field of the Earth and create a magnetic field map for migration. However, in experiments on birds where this magnetite receptor was deliberately disrupted by anaesthetic or a strong magnetic pulse, the birds could still orientate themselves along the magnetic field. This suggests that there is an alternative mechanism at work. Even more intriguingly, this alternative magnetoreception mechanism only works when there is visible light, and did not appear to be influenced by reversing the polarity of the field.

In 1978, Klaus Schulten suggested a mechanism for this type of magnetoreception, known as the radical pair mechanism. This mechanism proposes that there is a light-activated reaction in the bird's eye that is affected by magnetism. By detecting the rate of the reaction, birds can sense the strength and direction of Earth's magnetic field. The problem with this idea is that the Earth's magnetic field is incredibly weak, and its influence on a molecule is some six orders of magnitude smaller than the energies involved in a normal chemical reaction. How could it possibly have a detectable effect?
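A rough back-of-envelope comparison (not taken from the article) shows just how weak the interaction is: the snippet below compares the magnetic (Zeeman) energy of a single electron spin in a typical 50 microtesla Earth field with the thermal energy scale at ambient temperature, using standard physical constants.

```python
# Back-of-envelope sketch: magnetic energy of an electron spin in the Earth's
# field versus the thermal energy scale. Constants are standard values; the
# field strength is a typical mid-latitude figure, assumed for illustration.

mu_B = 9.274e-24    # Bohr magneton, J/T
g_e = 2.0           # electron g-factor (approximate)
B = 50e-6           # Earth's magnetic field, T (~50 microtesla)
k_B = 1.381e-23     # Boltzmann constant, J/K
T = 300.0           # roughly ambient temperature, K

zeeman = g_e * mu_B * B    # magnetic (Zeeman) energy of the electron spin
thermal = k_B * T          # thermal energy scale

print(f"Zeeman energy : {zeeman:.2e} J")
print(f"Thermal energy: {thermal:.2e} J")
print(f"Ratio         : {thermal / zeeman:.1e}")
```

The magnetic term comes out millions of times smaller than the thermal energy, which is why an ordinary reaction cannot feel the field and why the correlated radical pair described in what follows is needed.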

The secret to detecting the magnetic field lies in generating a pair of radicals, which are molecules with unpaired electrons that interact strongly with magnetic fields. Creating these radicals requires a burst of energy, as provided when the molecules are exposed to light. Within a suitable molecule or protein, two radicals can form what is known as a

Robins were one of the first species to be observed using an internal magnetic compass

12 Bird’s Eye View

Dia

mo

nD

Da

VEj

/ D

aV

iD

Easter 2011

Bird’s Eye Viewian Le Guillou finds out about the ‘biological compass’ of cows, crocodiles and migrating birds

Do

min

iC m

cKEn

ZiE

Page 15: BlueSci Issue 21 - Easter 2011

‘spin-correlated pair’ that exist in two different states. Conversion between these two states is affected by a magnetic field, and the rate of conversion can be monitored through the concentration of the radicals. In this way, a weak magnetic field can become detectable by cells in an organism.

The radical pair mechanism fits with the observations that cannot be reconciled with magnetite receptors. It is both dependent on the presence of light and unresponsive to the polarity of the field. Experimental evidence was lacking in 1978 when Schulten proposed the mechanism, so the idea received little attention for twenty years.

In 2000, a research group from Illinois suggested that proteins known as cryptochromes may be behind this source of magnetoreception. Cryptochrome proteins are found in the eye of robins, and absorb blue light to initiate a radical reaction—the perfect candidate to generate biologically detectable spin-correlated radical pairs. This led to renewed interest in the area, including the development of a proof-of-principle artificial magnetoreceptor system by a team of researchers at Oxford University. This was the first man-made chemical compass: an artificial chemical system sensitive enough to detect a magnetic field as weak as the Earth's at the planet's surface.

The contribution of cryptochrome and the radical pair mechanism to magnetoreception in animals is still being investigated. Despite initial scepticism, evidence from model systems and computational work has shown that this mechanism is feasible for detecting magnetism. Cryptochromes are primarily responsible for maintaining circadian rhythms in many animals, including humans. Like many proteins throughout evolution, cryptochromes have found a new role in a different part of the body. Because of their presence in the eye, it has even been suggested that robins sense the results of the radical reaction along the optic nerve, so the direction of the magnetic field may, in some sense, be visible to them.

With growing evidence of weak magnetic fields affecting biological processes, there is increasing interest in how they might affect us. Numerous studies have shown a significant correlation between proximity to high-voltage power lines—which carry a low frequency magnetic field —and increased rates of childhood leukaemia. In 2001 the International Agency for Research on Cancer classified extremely low frequency magnetic fields as a possible carcinogen. Yet several attempts to demonstrate magnetic field induced carcinogenesis or tumour promotion in cells have failed, so this issue is still surrounded by uncertainty.

Perhaps in years to come our suspicions of magnetic fields transforming healthy cells into cancerous ones might be viewed as just as fanciful as Aristotle's redstarts-to-robins hypothesis. While we cannot be sure yet that power lines cause cancer, further analysis of Google Earth has shown that they can certainly disrupt the ability of cows to line up with the Earth's magnetic field—tricking them into aligning with the magnetic field of the power line instead.

Ian Le Guillou is a PhD student in the Department of Biochemistry

Birds use small crystals of magnetite in their beaks to create magnetic field maps for migration (Image: Bill Liao)

Easter 2011 Bird's Eye View 13

Due to magnetoreception, cows tend to line up parallel to the Earth's magnetic field (Image: Flying Jenny)

Page 16: BlueSci Issue 21 - Easter 2011

What abilities spring to mind when someone says 'superhero'? The ability to fly? Walking on walls? Or an uncanny talent for surviving against the odds? Although Marvel Comics have been writing far-fetched tales about characters with superhuman powers for decades, evolution has turned fiction into reality and provided us with living, breathing and indeed flying proof that it got there first. So how do animals effortlessly achieve these things that humans merely dream of? And can we replicate them?

A huge number of species across the animal kingdom can fly, from buzzing midges to lumbering vultures. But not all fliers are created equal. Most birds are only able to fly forwards, and are often relatively ungainly in the air, at least as compared to their smaller brethren: the insects. Insects are often capable of flying backwards or hovering on the spot, more like a helicopter than an aeroplane, and possess a fine control over flight that many birds lack. This allows them to land on your skin without detection, or even land on water. But how do they accomplish their feats of aerial acrobatics?

It turns out that insect flight is a complex phenomenon that is still poorly understood. According to some researchers, insects use at least three different mechanisms to increase their lift beyond that predicted by simple fluid mechanics. Firstly, their wings beat at a sharp angle to horizontal, creating an effect known in aviation as stalling. In aircraft, this is disastrous, causing huge loss of lift due to separation of the air flow from the wing, and often causing the plane to crash. In insects, however, the act of stalling creates a vortex (think miniature whirlwind) immediately above the leading edge of the wing, which provides a large lifting force, almost as if the insect is being sucked upwards. Secondly, as their wings travel through the air, they rotate. This rotation creates an additional down-current, which helps to keep them aloft in a manner analogous to a tennis ball with backspin. Finally, in addition to creating the leading edge vortex, any wing beat will inevitably create smaller trailing edge vortices behind the wing. These usually sap energy from the flier, but insects have adapted to sweep their wings back through the turbulent air, recapturing energy that would otherwise be lost. All these mechanisms contribute to a system far more innovative than our brute-force methods of getting into the air, one complex enough that we're unlikely to be replicating it any time soon.

So perhaps insect-like flight is out of our reach, but walking on walls is a different story. Many species possess the ability to hang around obnoxiously on our ceilings and walls. Their methods may vary, but a couple of unifying themes emerge. Small insects, often flies, tend to take the rather obvious route of having sticky feet. They have tiny glands which slowly secrete an oily adhesive that literally glues them to the surface in question. Spiders have claws on their feet that hook into grooves too small for us to see (which, incidentally, is why they struggle to get out of very smooth containers such as baths and sinks). Yet clever as these two options are, the most ubiquitous and ingenious method is yet to come and proves that you don’t have to be an insect to have superhero qualities. This number is showcased by a friendly little creature: the gecko.

Superheroes, Fact or Fiction?

Insect flight is far more agile than that of birds or aeroplanes

14 Superheroes, Fact or Fiction?

OAKLEY ORIGINALS

Easter 2011

Mark Nicholson discovers how nature has turned fantasy into reality

DOMINIC McKENZIE

Page 17: BlueSci Issue 21 - Easter 2011

Gecko feet stick to surfaces using a well-known piece of chemistry called van der Waals forces. These forces are attractive, though very small, and are exhibited by all atoms and molecules towards any other atoms or molecules via the generation of instantaneous dipoles. This effect is in part responsible for many everyday phenomena, from being able to fill our cars with liquid petrol to the delightful experience of peeling chewing gum off our shoes. These forces fall off rapidly with distance, so they are very weak when the molecules are more than a couple of nanometres apart. Most solid surfaces are too rough to see a significant interaction on this scale, even those we think of as being extremely smooth and flat, such as glass. This is because only a tiny fraction of the surfaces are 'touching' closely enough to experience a force. The exception to the rule lies with extremely pliable solids, like the aforementioned chewing gum, which warp when sufficient force is applied and mould themselves to surfaces so closely that van der Waals forces come into play, sometimes with unpleasant consequences.

But humans can’t walk on walls just yet. The problem with sticky pliable substances is that they tend to stretch and deform when force is applied, and they still fail to have enough van der Waals interactions to bond anything remotely heavy to surfaces —imagine trying to stick yourself to the ceiling with Blu-Tack.

Once again, nature provides us with the answers. Rather than using a single flexible mass, evolution has bestowed some animals with structures known as setae on their feet. Much like a microscopic hairbrush, the setae form a mat of bristles each a great deal smaller than the hairs on your head. Individually, they deform very easily to allow good contact between the gecko’s foot and the ceiling, yet they are all relatively strong, so can be pulled off intact and re-used millions of times. As a result, a gecko’s foot is sticky, though it has no glue or claws; it just sticks. Now how about that for a superhero solution?

Aerial acrobatics and parkour-gone-mad are all very well; animals perform these feats every day. Yet nature's superheroic tendencies do not end there. Throughout history, some organisms have survived incredibly harsh conditions, enduring weather that would rapidly kill any human without the support of technology. A personal favourite of mine is the insects found near the poles, such as the goldenrod gall moth caterpillar, which can survive astonishingly low temperatures, remaining mobile and unfrozen at temperatures as low as -50°C. Other animals and some plants do freeze, but in such a way that they enter a cryogenic state, and so survive to tell the tale some months later when the temperature increases.

Insects can avoid freezing at temperatures below zero by exploiting the curious properties of ultra-pure water. Although conventionally 'pure' water freezes at 0°C, water that is completely devoid of impurities remains liquid down to around -40°C, defying conventional wisdom and entering a 'supercooled' state. In ordinary water, ice crystals do not simply form spontaneously; they nucleate around some impurity, such as a speck of dirt. The impurity provides a ready-made surface for the budding crystal, lowering the energy cost of forming it and making it easier for ice to grow. For crystals to nucleate without their specks of dirt, a much greater driving force—and so a much lower temperature—is required. Some insects stop feeding before the onset of freezing temperatures so that food residues and mineral dust particles cannot act as nucleating agents. Other insects avoid freezing by producing antifreeze substances such as glycerol in their body fluids. Imagine if you could do that too.

Organisms using a cryogenic approach take the opposite view—rather than removing all nucleating agents, they make the effort to create highly efficient nucleating proteins. Traditionally, intra-organism ice has been considered lethal as water expands on freezing, and so the growing ice tends to rupture cell membranes. However, this only happens if the ice forms within the cell. Instead, animals such as the wood frog (Rana sylvatica) and many insects nucleate ice in the intercellular space, which lowers the amount of liquid water in the cells, making it harder for the remainder to freeze into large and damaging ice crystals. Ultimately, most of the water in the organism is locked into this extracellular ice, and so—when the temperature finally gets so cold that even the cells freeze—there is so little liquid water left that very few cells burst.

Although most of us regard insects as nuisances, the science which has made this class the most numerous animals on the planet—often said to outweigh all other animal species put together—is truly incredible. We should take the opportunity to learn from our miniature cousins and other superheroic organisms, and our efforts in mimicking nature will continue to enhance our technology. I don't expect to see Superman whizz past my window any time soon—but perhaps Spider-Man isn't quite so far-fetched. Maybe it's time to stock up on some skin-tight outfits and a cape.

Mark Nicholson is a 3rd year undergraduate in the Department of Chemistry


Setae on a gecko's feet maximise van der Waals interactions, allowing them to walk on walls


FOCUS
Small Channels, Big Ideas
BlueSci explores microfluidic technology and its dazzling array of applications


Imagine a chemistry lab the size of a postage stamp. Or a medical device that fits in your wallet with the promise of an instant health check. Perhaps you would like to scan the atmosphere for pathogens and analyse chemicals on Mars, Star Trek style? Thanks to the myriad of rapidly developing technologies that rely on microfluidics, all this is becoming possible. So, how does microfluidics work?

Microfluidics is defined as the manipulation of systems in which tiny amounts of fluid flow through very narrow channels. Typically, the channels are about as narrow as a human hair, less than 1 millimetre wide. The amount of fluid flowing through a device made with these narrow channels can be as little as a few attolitres—an attolitre is a billionth of a billionth of a litre, hundreds of times smaller than a single bacterial cell. Microfluidic devices are usually built systematically from a series of basic components, including an entrance for reagents and samples, some means of moving and mixing the fluids, and other components for producing output, such as detection or purification devices. First used as an analytical tool in chemistry, the relatively new technology offers many advantages over larger systems: devices require only a very small volume of reagents and sample material, yet they deliver high-resolution results at low cost and with short analysis times.

Yet simply scaling down a larger system to micrometre size would not yield a working microfluidic device. This is because the behaviour of fluids at such a small scale is very different from that in a larger device. In the narrow channels, the flow is laminar—free of turbulence—so there is very little mixing. If you put coffee and milk down the same channel in your microfluidic device, instead of well-mixed white coffee towards the end of the channel, you will have black coffee and milk, still separate and flowing side by side. Only when the coffee and milk exit the channel into a big coffee cup do they resume turbulent flow, producing your milky coffee in a few seconds.
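One way to see why the flow stays so smooth is the Reynolds number, which compares inertial forces to viscous forces in a flow; values far below roughly 2000 mean the flow is laminar rather than turbulent. A rough, purely illustrative sketch (Python) with assumed, typical values for water in a 100-micrometre channel:

```python
# Estimate the Reynolds number Re = rho * v * L / mu for water in a microchannel.
# The channel width and flow speed are assumed, typical values for illustration.

def reynolds_number(density, velocity, length, viscosity):
    """Dimensionless ratio of inertial to viscous forces."""
    return density * velocity * length / viscosity

WATER_DENSITY = 1000.0    # kg/m^3
WATER_VISCOSITY = 1.0e-3  # Pa.s
CHANNEL_WIDTH = 100e-6    # m (0.1 mm, roughly a human hair)
FLOW_SPEED = 1.0e-3       # m/s (about 1 mm per second)

re = reynolds_number(WATER_DENSITY, FLOW_SPEED, CHANNEL_WIDTH, WATER_VISCOSITY)
print(f"Reynolds number ~ {re:.3f}")  # ~0.1, thousands of times below the turbulent regime
```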

The smooth flow of fluids in microchannels means that whilst it is easy to keep two different chemicals separate even when they are flowing side by side, it is also very hard to get them to mix. Stirring is not an option because the channels are too narrow. Designers of microfluidic devices have to find ingenious ways to introduce turbulent flow and force mixing, such as building sharp bends into the channels, or using microvalves or micropumps.

The potential of microfluidics to facilitate the work of research scientists and medical professionals has led to rapid and exciting developments in the field.

Personalised medicine, rapid disease identification, forensic evidence from tiny samples: these technologies may seem decades away from realisation, but the development of lab-on-a-chip devices has helped to bring them closer to reality.

These applications depend on identifying genetic material, in particular the exact sequence that builds up a DNA strand. For example, to identify a bacterial cell unambiguously, you need to make a comparison between the DNA sequence of that cell and the sequence of a known bacterium. In many cases, the amount of genetic material available is small. In order to carry out tests, the DNA extracted from the cell of interest needs to be ‘amplified’ by a process called polymerase chain reaction (PCR).

In PCR, the original DNA sample passes through three specific temperature stages for a large number of cycles. First, a high temperature stage breaks apart the double helix of DNA in a 'melting' process, yielding two single-stranded molecules. The temperature is lowered in the next stage, allowing the building blocks of DNA to adhere to the single strands in a sequence-specific manner. Finally, at an intermediate temperature, an enzyme links the building blocks into a new strand of DNA, yielding two copies of the original double helix. Through repeating many such cycles, this doubling process can exponentially amplify

Professor Seth Fraden is visiting from Brandeis University in Massachusetts. He specialises in soft condensed matter physics and is currently spending four months collaborating with members of the Department of Chemistry and the Cavendish Laboratory in Cambridge. In America, his group has developed a new protein crystallisation technology using microfluidics, culminating in a device called the Phase Chip. He talks to BlueSci writer Vivek Thacker about his current work and future plans.

What started your interest in microfluidics?
SF: My background is in biological materials, looking at liquid crystals of viruses. Work on microfluidics began in my last sabbatical—at the time I was very impressed with the technological advances being made by Stephen Quake's group at Caltech. He had developed a suite of microfluidic tools to synthesise small amounts of materials on a valve-based network, and he showed that it was a very scalable technology. I saw that as an advance to study my liquid systems in a very efficient manner, and decided to pick this up in my upcoming sabbatical.

Is it easy to scale up from a microfluidic system to a bulk system?
SF: No! Because the physics is different, you do not want to have the intention of scaling up. But the technology is scalable. Microfluidic valves are made photo-lithographically, like printing, so the effort to make a hundred is the same. It is like semiconductor manufacturing—once you've learnt to make one transistor, you can make ten million of them on the same wafer.

Many of the artistic images in this article are kindly provided by Albert Folch, Professor of Bioengineering at the University of Washington, also Art Editor of the journal Lab on a Chip. The work of his lab has been showcased at BAIT: Bringing Art into Technology.


the sample mass while conserving the sequence of the DNA being investigated.
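The power of this doubling is easy to appreciate with a little arithmetic: n cycles multiply the starting material by roughly 2^n, so thirty cycles turn a handful of molecules into around a billion copies each. A purely illustrative sketch (Python); the cycle counts and the efficiency figure are assumptions, not values from the article:

```python
# Idealised PCR amplification: the number of copies roughly doubles each cycle.
# An efficiency below 1.0 models the fact that real reactions are not perfect.

def pcr_copies(starting_copies: float, cycles: int, efficiency: float = 1.0) -> float:
    """DNA copies after a given number of thermal cycles."""
    return starting_copies * (1.0 + efficiency) ** cycles

print(f"{pcr_copies(10, 30):.3g}")                   # ideal: 10 * 2**30 ~ 1.1e10 copies
print(f"{pcr_copies(10, 30, efficiency=0.9):.3g}")   # ~ 10 * 1.9**30 ~ 2.3e9 copies
```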

Traditional methods of PCR require large machines and large quantities of reagents, making the process expensive and time consuming. In addition, if the amount of sample is tiny, as is often the case in forensic investigation, then even after PCR the material may still be insufficient for analysis. Errors can also be introduced by amplifying contaminants, damaging the admissibility of the DNA evidence in a court of law.

PCR can be speeded up using microfluidic principles by performing each stage of the process on one of three layers of a single chip. Fluid flow is carefully controlled in the device, allowing mixing only where it is wanted. The proximity of the layers also means reaction products are transferred between stages in a fraction of the time.

Since typical lab-on-a-chip devices only require several billionths of a litre of reagents and samples, they have increased sensitivity, accurately amplifying and identifying DNA sequences where larger systems fail. As many channels and detection chambers can sit on the same chip, analyses can be done in parallel, as opposed to the linear workflow used in larger DNA analysis machines. Thus microfluidic DNA sequencing is much faster.

Microfluidic PCR has obvious implications for human genome sequencing. In the Human Genome Project, the whole DNA sequences of several individuals were 'read'. With the traditional methods available, the process took over a decade and cost billions of US dollars. With the ability to sequence DNA on a microfluidic chip, the process should be far faster and cheaper.

Beyond looking at DNA in cells, microfluidic principles can be applied to the detection of cells and chemicals in body fluids. An example is the i-Stat, a handheld device that allows doctors to carry out bedside blood tests with almost instantaneous results. Only a few drops of blood are needed to carry out a series of tests. The device allows many patients to be seen and tested at the same time, as their blood is collected onto relatively cheap, single-use microfluidic cassettes. This technology is already in use in some parts of the UK; NHS workers in Kent have reported that it is especially useful for home visits.

Intricate channels can be etched onto a handheld chip, capable of carrying out complex chemical tasks, such as analysing blood samples


Your Phase Chip carries out protein crystallisation. Is this something some labs focus on particularly?
SF: Some labs do focus on making protein crystals, but their interest is not on the crystallisation process itself. They just want to use the crystals in diffraction experiments and obtain the structure of the proteins. If I am going to make a contribution to the field, the product has to leave my lab. If the chip costs $1000, then it will not leave my lab because it is not commercially viable. Since realising this, I have focussed on the question: can we make a device which retains the essential qualities but is 100 times cheaper?

How do you propose to do that?
SF: You can achieve protein crystallisation with discrete components. Before, we had one integrated device that did everything, but now you can make one optimised device for each step. To really tackle the protein crystallisation problem, since the crystals we produce are so small, we have to develop a whole new suite of technologies to complement ours, so I have set up a collaboration with a synchrotron beam scientist who specialises in diffraction experiments. The whole community is converging on this idea because it is clear that small crystals are easier to make than bigger ones.

Where do you see microfluidic technology going in the future?
SF: Microfluidics will have applications in a large set of devices, but they will definitely be under-the-hood as components. The first ten years of the field were focussed on building extended microfluidic devices, but the future is going to be integration into larger systems.


There is even potential for preventative health care using detection devices built with microfluidic technology. Devices used to detect particular pathogens are known as immunoassays. They make use of antibodies, which are proteins that bind to specific molecules, particularly those present on the surface of a pathogen. Microfluidic devices allow bound and unbound antibodies to be discriminated according to their different properties, indicating the presence or absence of a pathogen. As with DNA sequencing, microfluidic devices are smaller, more portable, and faster at this specific task than a whole research lab full of equipment. It is claimed that immunoassays can detect E. coli in ground beef at a sensitivity of just one cell per gram. Perhaps in a few years' time, these wallet-sized immunoassays will become an essential travel accessory, just like clean water tablets.

While microfluidic chips have been applied to solve many biological problems, their initial use in analytical chemistry has flourished and diversified. Now we have devices that are capable of advanced chemical synthesis and detection, even beyond the Earth.

The basic techniques and equipment in the chemist's toolkit have remained largely unchanged since the first laboratory synthesis of urea in 1828, and for most macroscopic chemistry, miniaturisation of these processes is not necessary. However, one area where the importance of microfluidics is clear is the synthesis of nanomaterials.

The properties of nanoparticles are dependent on their size and shape. Current systems for growing nanoparticles show severe limitations, generally producing broad size distributions and leading to mixtures with unpredictable properties. The requirement for a specific particle size can supersede the need for large quantities, so growing nanoparticles more accurately, even at a small scale, is desirable.

Microfluidics is ideal for meeting this requirement. Not only does it allow rapid and controllable mixing, but variables such as temperature, concentration gradients and pressure can be manipulated to produce particles of specific size. Nanoparticles themselves have a wide range of applications: from drug delivery and medical diagnostics, to optical coatings and catalysis. The efficiency and accuracy that microfluidic devices lend to nanoparticle synthesis will allow us to make technological progress in diverse fields.

Chemical detection is another area where microfluidic principles can be applied, especially where the chemical of interest is present in a complex mixture at trace level. The primary focus for this type of technology is to optimise sensitivity and specificity of measurement in forms that are low cost, fully autonomous and miniaturised for portability. One of the main techniques targeted is that of gas chromatography (GC), a highly sensitive chemical analysis which is used to detect and quantify chemicals in air, water and soil.

A key part of existing GC systems is the fractionation column, which separates samples into their chemical components. These then pass through a detector to produce a chromatogram identifying the various chemicals. Typically the column is between 1.5 and 10 metres in length, making it far too bulky to transport.

For environmental testing, in particular atmospheric monitoring, samples are currently collected at remote locations and then returned to a laboratory for analysis. A small-scale, portable gas chromatography system would enable air quality to be analysed and recorded at the site of measurement, increasing the speed of response to any adverse changes. Microfluidics makes such a device possible, as it enables miniaturisation of the GC column onto a single 10-centimetre-square piece of glass.

Nanoparticles need a well-defined range of sizes and shapes to function as catalysts (left). At microfluidic quantities, fluids flow smoothly and do not mix easily, allowing fine-tuned manipulation and control (right).

Microfluidics has also allowed the miniaturisation of capillary electrophoresis, a separation technique that uses narrow-bore capillaries to separate molecules based on differences in charge, size and hydrophobicity. This is one of the proposed devices to be included on the European ExoMars rover mission scheduled for launch in 2013. The Martian soil will be analysed for traces of biological compounds such as amino acids, the building blocks of proteins. The proposed experiment would separate amino acids from a soil sample, then use microfluidic capillaries to identify them by charge, size and, crucially, chirality. Chirality refers to the handedness of the amino acids, which can exist in two mirror-image forms, left-handed or right-handed. In simple chemical reactions these molecules behave in the same way, but when it comes to complex biological reactions involving enzymes, the chirality matters. This is shown by the fact that all proteins on Earth are made up of left-handed amino acids. If the device on Mars finds an excess of amino acids of one particular chirality, it would be a strong sign that they are biological in origin.

It would be impossible to send into space the amount of equipment required for this type of analysis in conventional setups, but with a microfluidic device, weighing only about two to three hundred grams, the search for life will be carried out right there on Mars.

A bit closer to home, devices based on the same technology are being used to police our own atmosphere. As many nations continue to be concerned about the threat of biological weapons, microfluidic methods are beginning to provide attractive antidotes to fear.

Small-scale detectors can now identify a broad range of chemical and biological agents. By protein fingerprinting, harmful bacteria and viruses are singled out. Biotoxins also feature in the vast catalogue of substances that new microfluidic devices are able to test for, including chemicals such as ricin and sarin, which have both been associated with the military activities of the Cold War.

The USA has a strong focus on developing microfluidic technologies for defence applications. A number of American companies have invested in and developed commercial biodefence solutions built on microfluidic technology. The US Department of Homeland Security, as early as 2005, had already expressed interest in upgrading slow, manual atmospheric testing facilities with automated microfluidic systems across major cities.

There seems to be no end to the new ways in which microfluidic technologies can be integrated into our everyday activities. Earlier this year, researchers from Purdue University (Indiana, USA) developed a new technique for conducting microfluidic analysis on paper, using lasers to carve channels into a hydrophobic coating. It is hoped that this innovation will provide an even more inexpensive way to bring microfluidic technology to mainstream markets.

Microfluidics has established itself as an exciting field from which we can be sure that even more applications will emerge. As established technologies become ever more efficient and economical, the imaginative application of microfluidics to new technology will enrich and inform our understanding of our world and beyond.


Helen Gaffney is a second year Natural Sciences Tripos student

Wendy Mak is a PhD student in the Department of Physics

Lindsey Nield is a PhD student in the Department of Physics

Vivek Thacker is a PhD student in the Department of Physics

The ExoMars rover will be equipped with a miniaturised device to search for life (left). A combinatorial microfluidic mixer (middle). Biological defence can be enhanced by microfluidic technology (right).


This year marks 100 years since Marie Curie was awarded the second of her two Nobel Prizes, an accolade which places her in a very select group of only four individuals in the entire history of the Prize. This centenary provides an ideal opportunity to reflect upon the achievements of some of the earliest pioneering female scientists. These women made significant contributions to vital areas of modern science, including astronomy, computing and medicine, despite the barriers held against them at the time.

Until recent times, academia was a 'man's world'. Cambridge only awarded its first degrees to women in 1921 and did not allow women to be full university members until 1947. However, there were many women who pursued life-long careers in the sciences during the previous two centuries.

The field of astronomy provided one of the first opportunities for women to become scientists. This was because the work always required two people: one to operate the telescope, and one to record the findings. In this way, some women found themselves assisting their father, brother or husband in his research. Some assistants turned out to be more talented than their male counterparts, making significant discoveries of their own.

Caroline Herschel (1750-1848) was one of these talented assistants. Thought to be unlikely to marry due to her stunted growth, she was put at her brother’s disposal. She went on to discover eight comets, including the catchily named 35P/Herschel-Rigollet, and perfected complicated equations on spherical trigonometry which allowed the position of stars to be calculated. She even received a small salary from King George III for her work, making her the first paid female scientist. Interestingly, it was said that Herschel never learnt her multiplication tables, and always had to carry them with her on a piece of paper, despite her marvellously analytical mind. Who says you need to know your 12 times tables?

Women were making advances in modern calculation and computation methods as early as 1843, as shown by Augusta Ada King. The only legitimate child of the poet Lord Byron, she became Countess of Lovelace and was possibly the first ever computer programmer. As a skilled mathematician, she was a good friend of Cambridge professor Charles Babbage, the 'father of the computer'. Babbage invented, but failed to build, many steam-driven, mechanical calculating machines. Among these was the Analytical Engine, which he envisaged producing long tables of logarithms. King was asked to translate a French article about the Analytical Engine into English, and Babbage suggested she write her own notes on the subject. These notes turned out to be three times as long as the original article, and included an algorithm devised by King to be used on the machine—in essence the first computer program. Inspired by the cards with punched holes used by weavers to produce intricate patterns in cloth, King thought the same technique could be used to dictate the workings of the Analytical Engine.

Jessica Robinson uncovers some of the pioneering female scientists

Women Who Led the Way

Marie Curie is one of only four people to have won two Nobel Prizes


The Maria Reactor in Poland, a research nuclear reactor named after Marie Curie


Indeed, the first computers of the 20th century were programmed by punched cards. It could be argued that King was more of a visionary than Babbage, who only wanted to create a machine for number crunching, whereas she foresaw other potential functionality, such as creating music.

While modern computing is still dominated by men, no one is surprised any more by the sight of a female doctor. Yet the field of medicine used to be an exclusively male profession, and the first female doctors of the 19th century faced many challenges—so many that the true sex of the first woman medic was not actually revealed until after her death. Much to the disbelief of many, Margaret Ann Bulkley, or Dr James Barry as she was known, was said to have had a very hot temper and to have been incredibly flirtatious with beautiful women—perhaps out of frustration, or perhaps overcompensating in her disguise. She was an excellent doctor who was promoted to Inspector General, the highest rank possible for an army physician.

It is impossible to talk about great females in medicine without mentioning the ‘Lady of the Lamp’, Florence Nightingale. Nightingale is famous for bringing about a drastic reduction in death rates in the Crimean War hospitals, from 42% to 2%, by enforcing cleanliness and better nursing practices. Moreover, she started the first women’s college of nursing in London, and was an expert statistician. Her reports were very influential in the sanitary reforms of the late 1800s.

Moving from medicine back to the physical sciences, 1867 takes us to the birth of Marie Curie. Born in Poland as Maria Sklodowska, she moved to Paris in 1891 to study for a degree in mathematics and physics. There she met Pierre Curie, and they were married within a year. They shared a love of science and worked together on radioactivity, a term coined by Marie herself.

For her doctoral studies, Marie Curie worked on the uranium-rich ore, pitchblende, which she discovered to be more radioactive than pure uranium. Reasoning that new radioactive elements must be present in the ore, she focussed on its chemical separation, while Pierre studied its radiation properties. Through this work, she discovered two new elements: polonium, named after her home country, and radium, which Pierre showed could kill cancerous cells.

It took Marie four years to purify 0.1 gram of radium from 8000 kilograms of pitchblende. She wrote, "I had to spend a whole day mixing a boiling mass with a heavy iron rod nearly as large as myself. I would be broken with fatigue by the end of the day." Marie and Pierre both suffered from radiation burns and sickness from years of working in such close contact with radioactive materials.

The Curies' hard work was rewarded in 1903 with a joint Nobel Prize in Physics with Becquerel, after which Pierre was made Professor at the Sorbonne. Life looked quite perfect for Marie: two beautiful daughters, a loving husband and the highest acknowledgement for her contributions to science. Sadly, in 1906 Pierre was hit by a horse-drawn vehicle in the street and killed instantly. Marie was devastated but did not give up on her career; she took over Pierre's Sorbonne chair, becoming France's first female professor. She continued characterising her new element, radium, and was awarded her second Nobel Prize in 1911, this time in Chemistry.

The Curies' work was critical in the development of X-rays for medical use. At the outbreak of World War One, Marie set up a Red Cross Radiation Unit, equipped ambulances with X-ray machines and worked on the front lines, nursing injured soldiers with her daughter.

Her legacy lives on in numerous ways; be it in the many universities, the charity, or even the road in Paris named after Marie and Pierre. However, perhaps her biggest legacy is her example for women and men with scientific aspirations: with hard work and true devotion, it is possible to achieve the impossible.

Jessica Robinson is a PhD student in the Department of Oncology

Augusta Ada King, often said to be the world’s first computer programmer


Charles Babbage's Analytical Engine arguably owes more to the visionary insight of Augusta Ada King than to Babbage himself


“Why should I make the data available to you, when your aim is to try and find something wrong with it?”

Dr Phil Jones, head of the Climatic Research Unit at the University of East Anglia

Information is controversial. From Climategate to WikiLeaks, the issue of access to information has been forced to the forefront of public debate. But how should scientists deal with information? Should all scientific information be made publicly available?

In 1676 Robert Hooke published the anagram "ceiiinosssttuv". It was not until two years later that he revealed what he had discovered about the properties of elastic springs: "ut tensio, sic vis"—as the extension, so the force. This seems bizarre practice to us today, but in Hooke's time scientists commonly published their results in the form of a cipher; others such as Leonardo, Huygens and Galileo did the same. In doing so, they bought themselves time to work on their ideas without running the risk of being pipped to intellectual glory.

But science is no longer about hoarding your results until you have developed a grand theory to explain it all. Science today prides itself on large collaborations for the benefit of the whole of society. Meanwhile, the multimedia revolution, in particular the explosion of material in online blogs, encourages an ethos of instant access to all forms of information. This has been accompanied by an increasing political drive for ‘transparency’: openness has become associated not only with good science, but also with well-functioning democracies.

Indeed, there are persuasive arguments for open access to scientific information. It forces scientists to think about the best way to communicate their findings to those beyond the ivory tower. It also enables a much more complete peer-review process: absolutely anyone can check scientists’ conclusions against their original observations. Open access is useful for other scientists too. Scientists rarely publish the results of experiments that don’t work, believing that they lack impact. But there is often just as much value in knowing what does not work as in knowing what does. Scientists in another institution may waste time and money pursuing an approach which is known to be flawed, unless such information is made freely available.

The growing sentiment of the 'public right to know' culminated in the UK Freedom of Information (FoI) Act, which came into force in January 2005. The Act enables anyone to submit a request to a public body, such as a university, for any information they like. The institution must respond to any request within twenty working days, and the request must ultimately be granted unless an exemption can be satisfied. Exemptions can be granted if the data is available elsewhere, intended for publication, deemed to prejudice public affairs or too costly to make available. The Act has had profound consequences for science. Some people claim that FoI requests usefully facilitate the much-needed move to greater openness, but there are also reasons to be cautious.

The leaked emails of the Climategate scandal revealed correspondence about FoI requests for data between scientists at the Climatic Research Unit at the University of East Anglia. In one such email Dr Phil Jones, head of the Unit, said “why should I make the data available to you, when your aim is to try and find something wrong with it?” Dr Jones was severely criticised for this position. What he says contradicts the accepted consensus that the peer-review process is a vital part of the scientific method; your science must be analysed critically by others to validate its truthfulness. Professor Paul Nurse, President of the Royal Society, goes further and advises that research scientists should be “the worst enemy of their own ideas”. A healthy sense of scepticism is what ensures that the majority of published science is accurate.

But Dr Jones was right to be cautious. Scientists are justifiably uneasy about sharing their work with the general public unless it is on their own terms, not least for fear of being misrepresented.

Tim Middleton gives his perspective on access to data and the recent scandals

Common Knowledge

Robert Hooke is one of the leading figures of the scientific revolution. Hooke, like many others of his day, published his results in the form of ciphers to protect his discoveries


Scientists had spent twenty-five years collating the data that was being requested. Dr Jones didn't want to comply with vexatious FoI requests only to find untrained polemicists throughout the blogosphere ranting about the data. On one particular weekend in 2009, Dr Jones received sixty FoI requests, each asking for data from five countries, listed alphabetically. Dr Jones had made mistakes in the way he had handled previous requests, but such coordinated hassling of scientists seems inappropriate. Climategate was a much-needed wake-up call for the scientific community, but amidst the ensuing arguments the focus was often lost. One commentator wryly summarised affairs, asking: "are we more interested in reading scientists' emails or in shaping the values that guide their work?"

Contrastingly, there are other academics who regularly use FoI requests to conduct their own research. Ironically, Martin Jones of Glasgow Caledonian University is using the FoI process to investigate how many vexatious FoI requests are received in the public sector! However, this phenomenon is largely confined to the arts. Academic scientists in the same discipline typically share data amongst themselves outside the FoI system and it would be considered poor form to do otherwise.

Whilst the arguments for greater openness are compelling, FoI requests are not the best method for broadening the uptake of scientific knowledge. The type of information that can be extracted via FoI requests is in some senses far-reaching, but in other ways inherently limited. What use is raw data to the general public if they don't have the scientific training and expertise to interpret it? Although some of the eager journalists and sceptics out there are scientific experts, or at least well-informed amateurs, a large number of them are not. The latter are not in a position to confirm or challenge scientists' conclusions. Worse, if a scientist's job becomes simply to convey data from the point of observation to the public sphere, where it is then discussed by an army of unqualified bloggers, then there remains little incentive to become a scientist. The value of scientific training and the appeal of proposing and testing your own theories would be lost.

Rather than letting the mutually distrustful process of FoI hold sway, scientists should look for proactive ways to make their work available to the general public. Funding from research councils is increasingly contingent on some sort of data release scheme. A plan to publish data in the future avoids premature disclosure but ensures that the information will reach the public domain for those that are interested. Also, expert deliberations, for example discussions of the Intergovernmental Panel on Climate Change, should be made public events.

Some say that this engagement is necessary to ensure that science is socially beneficial. But such a utilitarian view of science serves to perpetuate the feeling that everyone has an immediate right to know. Instead, what must be cultivated is the opportunity for scientists to explain their research and why it is exciting in a respectful environment, with room for grown-up discourse on any potential points of disagreement. Scientific research should be made common knowledge not because of dictatorial policy, but because scientists want to share their fascination with the universe.

Tim Middleton is a 3rd year undergraduate in the Department of Earth Sciences

Institutions don’t have to comply with Freedom of Information requests if they can show that it would be very time-consuming or expensive to make the data available


The Hubert Lamb Building, home of the Climatic Research Unit at the University of East Anglia and the centre of the Climategate controversy


Painting without colour? Writing without a pen? Singing without a voice? None of these seem possible. But what about photography without a camera? Is there a way to capture an image, the light, a mood, or a person directly onto paper? This rare thought experiment was recently addressed at the Victoria and Albert Museum in London by the exhibition Shadow Catchers, which featured the work of five contemporary artists who use camera-less photography techniques in their work.

Your first steps into the darkened exhibition room welcome you into a new world you have never experienced before. Used as you are to bright natural or artificial light and the colourful life outside, the darkness feels uncomfortable at first. Your first gaze falls on what seems to be the shadow of a woman leaning over a chair, and after only a moment you realise that the image cannot be a real shadow, as no one is sitting in the room. Looking further, shadows of people in various poses, somehow appearing three-dimensional, are captured at true size. Images with fine lines like broken glass and pictures which appear to be made of waves of water come into view. Scenes captured with careful thought, people and objects arranged precisely, and pictures that are beyond reality; all created by camera-less photography.

The basic techniques of camera-less photography can be traced back through history. As early as the 8th century, the Arab alchemist Jābir ibn Hayyān discovered that silver nitrate changes colour upon exposure to light. In the 16th century Georg Fabricius experimented with silver chloride and also found that under certain circumstances a darkening of the material can be observed, although the nature of the chemical reactions involved was still unknown. In 1725, the German researcher Johann Heinrich Schulze proved that the reaction of silver compounds was due to light exposure. The use of an artistic technique based on these chemicals was first described in 1802 in a publication by Thomas Wedgwood and Humphry Davy. Leaves and other small objects or paintings on glass were placed onto surfaces covered with silver nitrate. After exposure to sunlight, only the painted or covered areas were not affected by light. However, the ability to fix the images was still lacking, so they disappeared as soon as they were fully exposed to light. This problem was solved by William Henry Fox Talbot with the help of Sir John Herschel in 1834, when they fixed images using a sodium thiosulphate solution and made the artwork durable. This also led to the development of the first real, if simplistic, camera, by placing light-sensitive paper into a 'camera obscura', basically a box with a lens.

In the late 19th century advances in camera-development were fast, and dominated by commercial and practical pressures. Only a few artists such as Talbot and Anna Atkins kept experimenting without the use of a camera to create art or botanical illustrations in true scale.

Much later, in the early 20th century, Christian Schad rediscovered the use of camera-less photography as an artistic medium, which led numerous artists to revive the nearly forgotten technique. After 1922, Man Ray and László Moholy-Nagy became the two artists to adopt the techniques into their art. Man Ray was an American artist best known for his modern photography. He described photography as "a comfort, because it reproduces what is known" and implemented camera-less photography as a means of creating a "sensual realisation of dreams and the subconscious". The images he created and called 'Rayographs' often contained recognisable objects and geometrical forms but in new ways of visualisation as light and shadows. He also used variable exposure times on single objects and exploited the effect of movement in Rayographs. In contrast to Ray's realism, László Moholy-Nagy, a Hungarian painter and photographer, created images that were more abstract, showing dynamic white forms in black space. "The light can play a central role as the pigments do in a painting", Moholy-Nagy stated.

After the Second World War, photography was heavily used to document political and social events, and the art of camera-less photography again came close to being forgotten. Only between 1950 and 1960 did artists and photographers revive their interest in experimentation and alternative techniques. Two of the protagonists of the Shadow Catchers exhibition started their careers during this time: Floris Neusüss, a German Professor of Photography, and Pierre Cordier, a Belgian artist.

The five artists of the Shadow Catchers exhibition exploit different strategies to capture light and shadows on light-sensitive surfaces. The most intuitive technique uses gelatine-silver prints and creates pictures termed 'photograms'.

Stephanie Glaser discovers how shadows caught by camera-less photography bring light to an image

Shadow Catchers


Photograms of the movements of a snake (top) and the development of spawn (middle, bottom)


In this technique, photosensitive surfaces, mostly coats of gelatine containing silver salts, change colour upon exposure to light and subsequent development. Objects that are placed onto these surfaces in certain light conditions will produce pictures of their shadows. Parts of objects, as seen in Floris Neusüss' 'Körperfotogramms' (whole-body photograms), can partially block out light and therefore give rise to lighter shadows, while other parts that are in close contact with the surface create dark shadows of complete light exclusion. This creates a three-dimensional effect in the picture and gives Neusüss' images a surreal quality. The full-sized nude females he recorded as photograms are an example of his ability to create "a feeling of surreal detachment, a sense of disengagement from time and the physical world."

Another technique of camera-less photography involves creating ‘chemigrams’ by treating photographic paper with varnishes, oils or photographic chemicals. It has been adopted by another of the exhibition artists, Pierre Cordier, who has perfected his techniques over fifty years through experimentation and research. Describing his work as painting, he makes the pictures step-by-step, usually by carefully blocking the light sensitive surfaces with wax or plastic patterns and applying developer and fixer to unblocked regions. The pictures Cordier creates are often technical, including labyrinths and tiny details nearly invisible without a magnifying glass.

In contrast to Cordier's exquisitely planned detail, Susan Derges' images are created by the forces of nature. She mostly uses gelatine-silver and dye-destruction prints, the latter using positive colour paper in which the dyes are bleached out upon development wherever the paper has been exposed to light. She employs various techniques to create magnificent effects in her pictures: exposing photographic paper to moonlight or flashes of light, or immersing the paper in a river before exposure. Her images often involve an element of chance and are influenced by the wildness and unpredictability of the elements. In her early work in the 1970s, Derges used sound waves to form geometrical patterns in carborundum powder on light-sensitive paper, thereby creating a visual representation of waves. In her series on the development of frog spawn into frogs, she placed spawn-filled jam jars on an enlarging lens which she exposed to light in a darkroom and recorded on the paper below; the cycle of life was captured without a camera. Derges has an intimate connection to science and many of her images make commonly hidden forces of nature visible to the imaginative human eye.

Adam Fuss is an English photographer who discovered camera-less photography in 1986 and has since used the technique to create photograms that, as he describes it, "give the alphabet unfamiliar letters. What is seen has never been in a camera. Life itself is the image. Viewers sense it. They feel the difference." His pictures seem to capture movements frozen in time. Recurring motifs in his work include animals such as snakes and butterflies, babies and water. For this exhibition, he placed a child onto photographic paper submerged in shallow water and fired a flash of light onto the paper. The resulting image captures the baby and its movements as reflected in the wave patterns the movement created in the water.

Bristol-born artist Garry Fabian Miller’s work is influenced by the properties of light and time. His images are simple, yet energetic. His early works included leaves collected during spring as they change their colour from pale yellow to green or petals recorded over a day’s time span. In recent works he created more abstract minimalistic pictures emphasising strong colourful shapes on a black background. Today he mostly uses dye-destruction paper, beams of light or water-filled glass vessels to create the desired effects. For Miller, light “is not a symbol for something else but the very embodiment of creative energy.”

Without any real sense of dimension in the images, objects seem to float above the pictures rather than resting on the surface. This unusual experience makes the viewer uneasy, giving rise to unexpected curiosity. Each picture is formed by the creative vision of one of five artists who use camera-less photography to portray what Floris Neusüss describes as “the tension between the hidden and the revealed.”

Leaving the rooms of the exhibition, you return to normal life, leaving the warm darkness and dreamy landscapes behind you. What remains is the discovery of a photographic art that always creates originals directly, without negatives involved.

Stephanie Glaser is a PhD student in the Department of Biochemistry


Camera-less photographs can capture shadows with eerie effects


It was late January 1951 when a young African-American woman walked into Johns Hopkins Hospital complaining of a "knot on her womb". Her name was Henrietta Lacks and unbeknownst to her, or her family, this event changed the face of medical and biological research. Her treatment for cervical cancer resulted in the creation of the HeLa cell line, the first immortal human cell line—a major accomplishment in the new and rapidly growing field of cell and tissue culture.

Fascination with cells has engaged scientists since the 17th century, when Robert Hooke looked down a microscope at cork bark and saw the basic building blocks of life for the first time. He named these repeating blocks "cells". Soon afterwards, in 1674, inspired by Hooke's work, the Dutchman Anton van Leeuwenhoek observed living, moving, single-celled microbes. By the late 1800s, many scientists, including Robert Koch and Louis Pasteur, had pioneered the culture and study of microbes.

However, a breakthrough in culturing cells from multicellular organisms would not occur until almost a century later. In 1907 Ross Harrison found himself involved in a debate over whether or not nerve fibres were outgrowths of individual cells. The study of the nervous system had traditionally been purely observational. Harrison decided to resolve the debate by studying nerve fibre growth in vitro. To do this, he adapted the hanging drop technique developed for the culture of microbes. Harrison cut fragments of frog neural tube, the embryonic precursor of the central nervous system, and placed them in a drop of frog lymph fluid on a coverslip. The coverslip was inverted over a glass slide with a well in the middle, allowing the clotted lymph droplet to hang suspended from the coverslip. Within this droplet, Harrison was able to keep the cells in the tissue alive and observe the active outgrowth of nerve fibres from them—thus settling the debate.

While Harrison's experiment had settled the debate, his work would prove of foremost importance in establishing the field of cell culture. Using his skill and training as a surgeon, he had solved the basic problems of culturing tissues: through his choices of growth medium, culture vessel and surgical methods for preventing contamination, he provided scientists with the basic tools for culturing cells and tissues. Harrison was a modest man, and his groundbreaking experiment went largely unnoticed by the public and the media. The scientific community, however, was very interested, and cell culture developed at great pace. As with any newly emerging field, cell culture was at times surrounded by controversy. This simultaneous furtherance and hindrance of the field is embodied by no one better than the French surgeon and biologist Alexis Carrel, who pursued the work Harrison had started.

After receiving much acclaim for his novel surgical techniques, Carrel turned his attention to cell culture. Under the tutelage of fellow scientist Montrose Burrows at the Rockefeller Institute, Carrel began to culture tissues from many different animals. He used methods that Burrows had learnt while visiting Harrison's lab, adapting and perfecting these techniques.

Nicola Stead looks back at the beginnings of cell culture

Immortal Hearts and Henrietta

Early cell culture labs required their workers to dress all in black in darkened rooms to 'protect' cells from the light


Progress was fast, and soon Carrel and his colleagues were able to subculture tissues into new plasma clots by carefully cutting the tissues into smaller fragments. In doing so, they created the first cell lines.

Carrel was awarded the Nobel Prize in 1912. In the same year, he published a controversial paper claiming he had managed to culture an “immortal” cell line from a chicken heart. His Nobel Prize ensured Carrel became an instant celebrity, and although his prize was in recognition of his surgical efforts, much attention was given to Carrel’s cell culture research. It was often misrepresented, and many journalists believed that Carrel had managed to grow an entire immortal chicken heart from these cells. The American public developed a keen interest in his work, believing Carrel had found a way of cheating death. Carrel cultivated his celebrity status, and each year on the 17th January he would gather his lab members, morbidly dressed head to toe in their black gowns—which he wrongly believed protected the cells from light—to sing ‘Happy Birthday’ to the cells. There was scarcely a year when the coming of age of this ‘immortal’ cell line was not reported in a newspaper or magazine, although many scientists remained sceptical and resented his fame.

Carrel's work incontrovertibly advanced the field of tissue culture. His fame attracted many talented scientists to his lab, even the famous American aviator Charles Lindbergh, who was instrumental in designing much early cell culture apparatus thanks to his engineering skills. However, Carrel's claim of an 'immortal' cell line, which supposedly lived for 36 years, seems to have been, at best, an experimental oversight or, at worst, an embellishment of the truth. In 1961 Hayflick and Moorhead showed that cells taken from a normal organ have a limited ability to divide and grow; in the case of the chicken heart cells, they could only be sustained for 60 to 80 days without actively inducing an event known as transformation, whereby cells become cancerous and able to replicate indefinitely.

The first irrefutably immortal cell line is credited to Wilton Earle, a former Carrel lab member, who derived the L cell line from a mouse cancer in 1948. The race was now on to create an immortal human cell line, and George Gey at Johns Hopkins University was determined to be the first. He regularly received tissue samples from African-Americans receiving free treatment for cancer. Up until 1951 he had no success in maintaining any of these cells; Henrietta changed everything. While carrying out an operation on her, the surgeon cut a slice of her malignant cervical tissue and sent it to Gey. Soon afterwards Gey had the first rapidly proliferating human cell line, one that would be in demand from scientists the world over.

Henrietta’s cells have provided unparalleled insight into the biology of human cells. As a consequence of studying these cells, we have a better understanding of what goes wrong in many diseases such as cancer. The use of HeLa cells has also significantly reduced the need for animal experimentation. For example, in 1952 millions of HeLa cells were grown to test the safety and effectiveness of the new polio vaccine, thus avoiding the use of primate testing. HeLa cells have also caused us to consider ethical practice to a greater degree. Henrietta’s cells were taken without her consent, and her family were left unaware of their existence for many years, during which time many people made millions of dollars selling and using HeLa cells. In the mid-1960s, as a direct result of experiments using HeLa cells, stricter rules regarding human experimentation were put in place, with informed consent now a standard requirement in any human study.

Although HeLa cells are still used in many labs today, researchers have since isolated over 3400 different cell lines from 80 species. While scientists have not yet cultivated an ‘immortal heart’, they are now much closer to being able to repair heart tissue. The discovery of stem cells, which have the ability to grow into different tissue types, provides much promise for the treatment of many diseases, from Alzheimer’s to diabetes. The ability to culture stem cells, or indeed any cells, would not have been possible without the pioneering work of these scientists, who through their simple experiments made cell culture what it is today.

Nicola Stead is a PhD student at the Babraham Institute

A modern cell culture carried out in a small disposable plastic petri dish


HeLa cells being used in a study where they are infected by an adenovirus


At Caudex Medical, a global medical communications agency established over 20 years ago, our people are our greatest strength. That’s why we developed our innovative in-house training scheme, designed to help great scientists become great medical writers, bringing rigour and accuracy to scientific pharmaceutical communications.

We work to the highest ethical standards, evidenced by our ground-breaking acknowledgements policy in 2005 and continuing to this day. At the 2010 Communiqué awards, we were the only medical communications agency in the finals of the Trust and Reputation category.

The writer training scheme starts again in September 2011. If you have a PhD in a life science, with or without post-doc experience, and a passion for communicating science, we’d love to hear from you. We offer the right person the best possible start in medical communications, and we’re in it for the long term. So, we offer a career that could really take you places, with competitive salaries, promotion opportunities and a culture that will nurture your talent.

Please apply with your CV and a covering letter to [email protected]. Closing date for applications: 14th July 2011.

Trainee Medical Writer
Get it right first time. Start your new career with Caudex Medical.


THE SUCCESS of the film The Social Network reminds us that a programmer with a bright idea can land extraordinary opportunities. Veteran British games designer Peter Molyneux has just been awarded a fellowship at the BAFTA video games awards. From an economic perspective, the value added to the EU economy from IT products and services was estimated to be around £480m each year. And if you have friends studying computer science at university, you will find they are unlikely to be as worried about finding a job as the rest of us.

Yet computing is facing a serious supply problem: over the last ten years, applications to university computer science courses fell by around 60%. In response, the Royal Society commissioned a study into the state of computing teaching in schools. There are a number of possible reasons for the decline, some of them certainly predictable. For example, the political planning and campaigning on science, technology, engineering and mathematics (STEM) did not explicitly include computing as an area for focus.

An even larger issue is the way that young people are taught computing. School curricula focus on menial tasks, such as how to write spreadsheets, rather than on the more technical and fundamental aspects of how computers work. Without this understanding, students are not inspired to imagine what computing is capable of. It is this issue that those in the computing industry cite as the most pertinent. The pioneering computer scientist Edsger Dijkstra neatly summed up this feeling by noting that “computer science is no more about computers than astronomy is about telescopes.”
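As a purely illustrative aside (not drawn from the article or the Royal Society study), a few lines of code can convey the kind of fundamental idea a spreadsheet lesson never touches: an algorithm that halves the problem at every step.

```python
# A small taste of algorithmic thinking: binary search.
# Each comparison halves the remaining possibilities, so finding a number
# among a million sorted values takes at most about 20 steps.

def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or None if it is absent."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return None

numbers = list(range(1, 1_000_001))     # 1, 2, ..., 1,000,000
print(binary_search(numbers, 654_321))  # index 654320, reached in ~20 comparisons
```

Finding one value among a million in twenty comparisons is exactly the sort of surprise that makes computing feel like more than data entry.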

It is difficult to tell how things will go. The Royal Society is due to report back later this year, but no one expects things to change overnight. For comparison, the decline in school-level physics began in the mid-90s and improvements have only been seen in the last few years. It might well be that “a week is a long time in politics”, but when it comes to making lasting changes, a decade is not long enough.

Either way, it is difficult to dismiss this problem out of hand; a recent survey by the Science Council indicated that almost all science and technology jobs in the coming years will depend increasingly on computing. At a time when the economic future is uncertain and we look to science as the way forward, better computing teaching will surely take us a long way.

Anders Aufderhorst-Roberts examines the demands of the digital economy

Computationally Challenged?

Anders Aufderhorst-Roberts is a PhD student in the Department of Physics.

30 Technology Easter 2011

Page 33: BlueSci Issue 21 - Easter 2011

Caudex Medical is a global medical communications agency, supporting pharmaceutical companies with their scientific communications. We provide writing and editorial support for papers, posters, presentations and websites. In addition we support the planning and running of meetings and congress activities, and work closely with international academic and clinical experts in a range of disease areas, coordinating their work in the development of new medicines with the pharmaceutical companies providing the funding. Medical communications may seem like a small field compared to research, but it has an important role in helping to bring new medicines to the patients who need them.

What are your responsibilities within the company, and how do you spend your average working day?
As a medical writer, my job focuses on content generation: developing papers from clinical reports and writing reports on scientific congresses and meetings. I also meet with clients and authors to discuss new or ongoing projects, mostly by teleconference, although there are some opportunities to travel in person.

When reporting clinical trials, I work closely with the academics and medics who carried out the research to make sure their views are accurately represented. Negotiation skills can be important; there are often several parties with an interest in a piece of work, and their views may differ. The writer’s job is to try and find a solution that is acceptable to all parties. Work is frequently deadline-driven and it is important to stick to timelines and liaise with editors, commercial managers and others working on your account, to ensure that projects are finished on time and to the required specifications.

How did you get involved in medical writing?
I was working as a postdoc doing laboratory research when I started looking for a new job that would use my scientific background but would not require relocating every few years. I began making enquiries about careers in industry as a regulatory writer, and two of the companies I was speaking to recommended that a year of experience with a contract research organisation or in medical communications could help my career. Having never heard of medical communications, I went away and investigated. Very quickly, I decided it sounded much more interesting than the jobs I had previously been looking at. I applied and was offered a trainee position with Caudex Medical soon after. I have not looked back since!

What is your favourite part of the job?
It is important to have a broad, detailed and up-to-date knowledge of the therapy area you are working on. I like staying on top of the science and getting to grips with new topic areas as I work with new clients. The fixed short-term goals, and the associated sense of progress and achievement, provide a satisfying contrast to life in academic research.

The office where I work has a friendly atmosphere that carries over into various social events and activities. I also like the fact that there always seems to be cake available in the office!

What advice would you give to young scientists hoping to get involved in medical writing?
While there are often jobs advertised for experienced medical writers, trainee places rarely seem to be offered. Do not wait in the hope of seeing an advert: be proactive and contact the agency directly. Remember that you are applying to be a writer, so write a proper cover letter explaining why you are interested in a career in medical communications, and send it along with your CV. “Why have you applied to us?” is a common interview question, so make sure you are well prepared. Not all agencies are the same, so take time to find out as much as you can about the agency you are applying to. Look for an agency that will invest in training you, because this training will be the foundation of a successful career as a medical writer.

Andy Shepherd read Natural Sciences at Trinity Hall, completed a PhD in microbial genetics, then took a three-year postdoc before starting at Caudex. Andy was interviewed by Richard Thompson, a PhD student in the Department of Earth Sciences.

Andy Shepherd talks to Richard Thompson about working at Caudex Medical

A Day in the Life of... 31 Easter 2011

ANDY SHEPHERD

Medical Writing

Page 34: BlueSci Issue 21 - Easter 2011

No headache for woodpeckers

The brain of a woodpecker undergoes severe decelerations of nearly 12,000 m/s² as the bird scours a tree for its meal. Hammering away at rates of up to 22 beats per second would leave the best of us with more than a concussion, yet the woodpecker can do this without suffering any brain damage. Scientists are now looking into ways of harnessing the woodpecker’s special abilities to build better shock absorbers. Using computed tomography scans and video footage of these birds in action, they found that woodpeckers possess four features that help absorb mechanical shocks efficiently: a layer of fluid between the brain and skull to attenuate vibrations, a hard but slightly elastic beak, a springy structure called the hyoid that extends behind the skull, and a section of soft skull bone. By using materials such as rubber and aluminium as artificial analogues of these four absorbers, the researchers created a ‘woodpecker-inspired shock-absorbing system’ able to offer effective protection against bullet impacts of up to 600,000 m/s². Potential applications of this new technology include aeroplanes’ black boxes (flight recorders currently in use can withstand shocks of about 10,000 m/s²), cars, and protection for satellites against space debris. So before banging your head against a brick wall next time, consider imitating a woodpecker! gw
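To put those accelerations into perspective, here is a rough back-of-the-envelope conversion into multiples of Earth’s gravity (an illustrative calculation, not part of the original study):

```python
# Convert the accelerations quoted above into multiples of Earth's gravity.
# Assumes standard gravity of 9.81 m/s^2.
STANDARD_GRAVITY = 9.81  # m/s^2

def to_g(acceleration_ms2: float) -> float:
    """Express an acceleration in multiples of standard gravity."""
    return acceleration_ms2 / STANDARD_GRAVITY

quoted_values = {
    "woodpecker brain while pecking": 12_000,   # m/s^2
    "current flight recorders": 10_000,         # m/s^2
    "woodpecker-inspired absorber": 600_000,    # m/s^2
}

for label, acceleration in quoted_values.items():
    print(f"{label}: {acceleration:,} m/s^2 ≈ {to_g(acceleration):,.0f} g")
```

That works out at roughly 1,200 g for the pecking bird and about 60,000 g for the bio-inspired absorber, some sixty times the shock rating of today’s flight recorders.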

Life in the slow lane

Ever been a passenger in a car and wished the driver would slow down a little? Russ Branaghan of Arizona State University might have a neat solution. In a study of young drivers of both sexes, it was found that the surreptitious suggestion of words and sentences to do with being elderly resulted in a lower maximum speed and longer driving times. The words were introduced to the drivers as scrambled-sentence problems, posed via a head-up display while the vehicle was stationary at traffic lights. Measured against a control using non-age-related phrases, it seems that while participants reported no awareness of any theme, to think age is to drive aged. Careful you don’t catch the bug! No blue rinse required... mk

Seeing through the blindfold?

A team at the Dolphin Research Center in Grassy Key, Florida, has witnessed dolphins mimicking the movements of other dolphins without being able to see them. Tanner, a male bottlenose dolphin, was given the signal by Dr Kelly Jaakkola to imitate his partner dolphin, Kibby. Tanner was then blindfolded with opaque goggles but, remarkably, was still able to copy turns and tricks from Kibby. The team tested 19 motor and 8 vocal behaviours. Dr Jaakkola said Tanner must have been able to follow Kibby “either by recognising the characteristic sound that the behaviour makes, like you or I may recognise the sound of hands clapping, or by using echolocation”. Also known as biosonar, echolocation is predominantly used by bats and dolphins to ‘view’ objects around them using the echoes of sound waves. The study, published in the International Journal of Comparative Psychology, shows that dolphins can adapt the senses they use when imitating an action, the first time this has been shown in an animal other than humans. tb

32 Weird and Wonderful Easter 2011

A selection of the wackiest research in the world of science

Weird and Wonderful

www.alexhahnillustrator.com

Page 35: BlueSci Issue 21 - Easter 2011

Living outside Cambridge?
You can now place an annual subscription to BlueSci and we’ll send you three brand new issues throughout the academic year. Contact [email protected] for more information.

Advertising in BlueSci?
Whether you are recruiting top tier talent, raising the profile of your company or promoting a new product, BlueSci offers a unique opportunity to reach your target market. BlueSci provides direct access to the scientific community of the University of Cambridge, being distributed free of charge at the beginning of every University term. With over 4,500 copies of each publication being supplied to the staff and students of the University, this is an opportunity not to be missed! To get further information and receive the BlueSci media pack, contact [email protected]

[Covers of recent issues pictured: Issue 19, Michaelmas 2010 and Issue 20, Lent 2011]

For their generous contributions, BlueSci would like to thank the

School of Physical Sciences

School of Technology

School of Clinical Medicine

If your institution would like to support BlueSci, please contact

[email protected]


Page 36: BlueSci Issue 21 - Easter 2011


Deadline for next issue is 10th June 2011

Feature articles for BlueSci magazine can be on any scientific topic. They should be 1200 words and written for a wide audience.

Contact [email protected] to get involved with editing, graphics or production

Email complete articles or ideas to [email protected]

Write for

See your article in print...

...or online
We need writers of news, feature articles and regulars for our website. Submissions can be of any length, and submitted at any time. For more information, visit www.bluesci.co.uk


Painting without colour? Writing without a pen? Singing without a voice? None of these seem to be possible. But what about photography without a camera? Is there a way to capture an image, the light, a mood, or a person directly onto paper? This unusual thought experiment was recently addressed at the Victoria and Albert Museum in London, in the exhibition Shadow Catchers, which featured the work of five contemporary artists who use camera-less photography techniques in their work.

Your first steps into the darkened exhibition room welcome you into a new world you have never experienced before. Used to bright natural or artificial light and the colourful life outside, you feel uncomfortable at first. Your first gaze falls on what seems to be the shadow of a woman leaning over a chair, and after only a moment you realise that the image cannot be a real shadow, as no one is sitting in the room. Looking further, shadows of people in various poses, somehow appearing three-dimensional, are captured at true size. Images with fine lines like broken glass, and pictures which appear to be made of waves of water, come into view. Scenes captured with careful thought, people and objects arranged precisely, and pictures that are beyond reality: all created by camera-less photography.

The basic techniques of camera-less photography can be traced back through history. As early as the 8th century, the Arab alchemist Jābir ibn Hayyān discovered that silver nitrate changes colour upon exposure to light. In the 16th century, Georg Fabricius experimented with silver chloride and found that under certain circumstances a darkening of the material could be observed, although the nature of the chemical reactions involved was still unknown. In 1725, the German researcher Heinrich Schulze proved that the reaction of silver compounds was due to light exposure. An artistic technique based on these chemicals was first described in 1802 in a publication by Thomas Wedgwood and Humphry Davy: leaves and other small objects, or paintings on glass, were placed onto surfaces covered with silver nitrate, and after exposure to sunlight only the painted or covered areas remained unaffected. However, there was still no way to fix the images, so they disappeared as soon as they were fully exposed to light. This problem was solved by William Henry Fox Talbot with the help of Sir John Herschel in 1834, when they fixed images using a sodium hyposulphite solution and made the artwork durable. This also led to the development of the first real, if simplistic, camera, made by placing light-sensitive paper into a ‘camera obscura’, basically a box with a lens.
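For readers curious about the underlying chemistry, the darkening and fixing steps can be summarised roughly as follows (a standard textbook sketch, not taken from the exhibition or the historical publications themselves):

```latex
% Light-induced darkening: silver chloride is reduced to dark metallic silver.
\[ 2\,\mathrm{AgCl} \xrightarrow{\; h\nu \;} 2\,\mathrm{Ag} + \mathrm{Cl_2} \]
% Fixing: unexposed silver chloride is dissolved by 'hyposulphite'
% (sodium thiosulphate), so it can no longer darken in the light.
\[ \mathrm{AgCl} + 2\,\mathrm{Na_2S_2O_3} \longrightarrow \mathrm{Na_3[Ag(S_2O_3)_2]} + \mathrm{NaCl} \]
```

The first reaction is what darkened the early experimenters’ silver salts; the second, using the ‘hyposulphite’ championed by Herschel, is what finally made the images permanent.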

In the late 19th century advances in camera-development were fast, and dominated by commercial and practical pressures. Only a few artists such as Talbot and Anna Atkins kept experimenting without the use of a camera to create art or botanical illustrations in true scale.

Much later, in the early 20th century, Christian Schad rediscovered camera-less photography as an artistic medium, leading numerous artists to revive the nearly forgotten technique. After 1922, Man Ray and László Moholy-Nagy became the two leading artists to adopt it. Man Ray was an American artist best known for his modern photography. He described photography as “a comfort, because it reproduces what is known” and used camera-less photography as a means of creating a “sensual realisation of dreams and the subconscious”. The images he created, which he called ‘Rayographs’, often contained recognisable objects and geometrical forms, visualised in new ways as light and shadows. He also used variable exposure times on single objects and exploited the effect of movement in his Rayographs. In contrast to Ray’s realism, László Moholy-Nagy, a Hungarian painter and photographer, created images that were more abstract, showing dynamic white forms in black space. “The light can play a central role, as the pigments do in a painting”, Moholy-Nagy stated.

After the Second World War, photography was heavily used to document political and social events, and the art of camera-less photography again came close to being forgotten. Only between 1950 and 1960 did artists and photographers revive their interest in experimentation and alternative techniques. Two of the protagonists of the Shadow Catchers exhibition started their careers during this time: Floris Neusüss, a German professor of photography, and Pierre Cordier, a Belgian artist.

The five artists of the current Shadow Catchers exhibition exploit different strategies to capture light and shadows on light-sensitive surfaces. The most

Stephanie Glaser discovers how shadows caught by camera-less photography bring light to an image

Shadow Catchers

26 Arts and Reviews Easter 2011

© ADAM FUSS; © SUSAN DERGES

Chemigrams of the movements of a snake (top) and the development of spawn (middle, bottom)


What abilities spring to mind when someone says ‘superhero’? The ability to fly? Walking on walls? Or an uncanny talent for surviving against the odds? Although Marvel Comics have been writing far-fetched tales about characters with superhuman powers for decades, evolution has turned fiction into reality and provided us with living, breathing and indeed flying proof that it got there first. So how do animals effortlessly achieve these things that humans merely dream of? And can we replicate them?

A huge number of species across the animal kingdom can fly, from buzzing midges to lumbering vultures. But not all fliers are created equal. Most birds are only able to fly forwards, and are often relatively ungainly in the air, at least as compared to their smaller brethren: the insects. Insects are often capable of flying backwards or hovering on the spot, more like a helicopter than an aeroplane, and possess a fine control over flight that many birds lack. This allows them to land on your skin without detection, or even land on water. But how do they accomplish their feats of aerial acrobatics?

It turns out that insect flight is a complex phenomenon, and is still poorly understood. According to some researchers, insects use at least three different mechanisms to increase their lift beyond that predicted by simple fluid mechanics. Firstly, their wings beat at a sharp angle to the horizontal, creating an effect known in aviation as stalling. In aircraft this is disastrous, causing a huge loss of lift as the air flow separates from the wing, and it often causes the plane to crash. In insects, however, the act of stalling creates a vortex (think of a miniature whirlwind) immediately above the leading edge of the wing, which provides a large lifting force, almost as if the insect is being sucked upwards. Secondly, as their wings travel through the air, they rotate. This rotation creates an additional down-current, which helps to keep them aloft in a manner analogous to a tennis ball with backspin. Finally, in addition to creating the leading-edge vortex, any wing beat will inevitably create smaller trailing-edge vortices behind the wing. These usually sap energy from the flier, but insects have adapted to sweep their wings back through the turbulent air, recapturing energy that would otherwise be lost. All these mechanisms contribute to a system far more ingenious than our brute-force methods of getting into the air, and one complex enough that we are unlikely to be replicating it any time soon.

So perhaps insect-like flight is out of our reach, but walking on walls is a different story. Many species possess the ability to hang around obnoxiously on our ceilings and walls. Their methods may vary, but a couple of unifying themes emerge. Small insects, often flies, tend to take the rather obvious route of having sticky feet. They have tiny glands which slowly secrete an oily adhesive that literally glues them to the surface in question. Spiders have claws on their feet that hook into grooves too small for us to see (which, incidentally, is why they struggle to get out of very smooth containers such as baths and sinks). Yet clever as these two options are, the most ubiquitous and ingenious method is yet to come and proves that you don’t have to be an insect to have superhero qualities. This number is showcased by a friendly little creature: the gecko.

Superheroes, fact or fiction?

Insect flight is far more agile than that of birds or aeroplanes

14 Superheroes, fact or fiction?

OAKLEYORIGINALS

Easter 2011

Mark Nicholson discovers how nature has turned fantasy into reality