
Page 1: blenderart_mag-28_eng

Issue 28 | Jun 2010

Blender learning made easy

COVERART Dikarya by Will Davis

Creating 3d Contents With Blender 2.5

I love My Bubble

High Resolution Rendering at the Speed of Light

Blender 2.49 Scripting (Book Review)

Page 2: blenderart_mag-28_eng

EDITOR Gaurav Nawani [email protected]

MANAGING EDITOR Sandra Gilbert [email protected]

WEBSITE Nam Pham [email protected]

DESIGNERS Gaurav, Sandra, Alex

PROOFERS Brian C. Treacy, Bruce Westfall, Daniel Hand, Daniel Mate, Henriël Veldtmann, Joshua Leung, Joshua Scotton, Kevin Braun, Mark Warren, Noah Summers, Patrick ODonnell, Phillip, Ronan Posnic, Scott Hill, Wade Bick, Valérie Hambert

WRITERS Francois “Coyhoyt” Grassard, Max Kielland, Nilson Juba & Jeff Israel, David Ward, Satish Goda, William Le Ferrand

COVER ART Dikarya - by Will Davis

CONTENTS

www.blenderart.org Issue 28 | Jun 2010 - "Introduction to Blender 2.5"

Creating 3D contents using Blender 2.5 ... 5
I Love My Bubble ... 27
Up to Speed ... 37
Blender 2.49 Scripting ... 40
High resolution rendering @ speed of the light ... 42

Page 3: blenderart_mag-28_eng

I stumbled into the Blender universe about ten years ago (for about 15 minutes)... at which time the massive number of buttons and options scared me so badly that I frantically deleted Blender from my hard drive and went to find something less scary to play with.

Obviously I found my way back to Blender (about a year later) and learned how to use it. But that first brush with Blender is something many of us are familiar with.

Fast forward ten odd years, and the Blender 2.5 series has been released. Imagine my surprise when I opened it for the first time and was promptly overcome with a ten-year-old feeling of déjà vu. Oh snap! I'm a newbie again.

I'm sure I'm not the only one who received a momentary shock upon seeing the newest incarnation of Blender.

Luckily, the learning curve was much smoother this time. And while I occasionally need to poke around to find a familiar tool or feature, there is a beautiful logic and flow to Blender that makes the creative process so much easier these days.

The Blender 2.5 series has been out for a while now, and while it is still undergoing a lot of changes and refinement, it is stable enough for some serious testing and playing. So let's get everyone "Up to Speed" with Blender 2.5 and how to best take advantage of all the wonderful new toys and options available.

If you have not yet taken the 2.5 plunge, now is your chance to learn what the future holds.

Sandra Gilbert
Managing Editor

EDITORIAL


Page 4: blenderart_mag-28_eng

I have been using Blender for a long time and have developed my own little workflow that I have gotten rather used to.

With the release of the 2.5 series, my workflow has of course undergone numerous changes and adjustments. Most of them for the better. But one change, quite honestly, has continually tripped me up.

My beloved spacebar now brings up a search menu instead of an add menu. And yes, I have known for years that Shift + A will do the same thing. But that isn't what I learned when I started. And of course, 10 years of muscle memory still finds me hitting the spacebar and still being surprised when a search option comes up instead of an add menu.

Rather annoying, to say the least.

Having decided that I would just have to get used to it, I was overjoyed when I discovered that there was a wonderful new addition to Blender.

The "Add On" section of the User Preferences window. This lovely little window is populated with a number of "add on" extensions that can be enabled / disabled as you need. The addons set to enabled will of course load automatically when you launch Blender.

There are already a number of fun and useful addons, but the one that makes my day is the "Dynamic Spacebar Menu".

When enabled, it brings up a context sensitive menu full of useful options, including "Add Object".

Yay me! My workflow is saved. And now I'm off to explore what else is hidden in the "Add On" section.

IZZY SPEAKS


“My beloved spacebar now brings up a search menu instead of an add menu.”

Page 5: blenderart_mag-28_eng

by Francois “Coyhoyt” Grassard


3D contents using Blender 2.5

Since Avatar came out on the big screen, it's totally impossible to walk in the street and avoid the hundreds of billboards showing a 3D logo. Over the last 3 months, 3D devices have been everywhere … and heavy marketing too! « Are you still working on poor 2D images? Damn, you're so ridiculous, guy! » That's the kind of sentence you can hear when you work in broadcasting since Avatar came out.

Producers and directors all want to create 3D content. But in fact, what is the whole technology behind 3D images? Is it really new? « Of course! » shout 20th Century Fox and Mister Cameron together. « There is before Avatar and after Avatar ». What can be said after that kind of sentence? The most humble answer we can provide is … 1840. That deserves some explanation.

1840 is the date when the first 3D images were released. Yes, more than a century and a half before Avatar! Surprising, isn't it? In 1820, a French man named Nicéphore Niépce created the first positive photograph. Only twenty years later, just after Niépce died, the first stereoscopic photograph was made by another man named Daguerre, but the whole process had been known to scientists for years before.

Two images, one for each eye and slightly offset in space. Before photography, it was really difficult for a painter to create exactly the same two paintings. When photography came along, it became easier to take two shots at the same time with two synchronized cameras. The stereoscopic view was born!

We will describe the whole process in detail in the next chapter, but if you are interested in the history of 3D images, I highly recommend the website http://photostereo.org created by a Frenchman, Francis Dupin. The website is bilingual (French and English) and contains a lot of stereoscopic photographs from the early age. Take a look at the « History » page. You will probably be surprised to discover that the 3D concept is quite old.

First, I'd like to clarify some things. I'm not telling you that Avatar sucks. Technically, it's a really great movie. All the work done by the different teams, like the wonderful forest shots from Weta, is totally awesome.

No doubt about that. Modeling, rendering, lighting, mocap and facial animation: they're all great!

Old San Souci House, Old Orchard Beach, Maine, from the Robert N. Dennis collection of stereoscopic views (~1870-1880)

Really simple stereoscopic device

3D WORKSHOP: Creating 3D contents using Blender 2.5

Page 6: blenderart_mag-28_eng

But I do complain about the marketing stuff around the movie, which tries to tell us that stereoscopy never existed before Avatar.

The goal of this article is to introduce the main concepts of stereoscopic images, also known as « 3D images », the different parameters you have to take into account to produce good 3D and finally, how to do it with Blender 2.5! To do that, we will be accompanied by characters from Big Buck Bunny, who will help us understand all the concepts required.

A) How 3D works and how to produce it:

As we previously said, you need two 2D images to create one 3D image. When you look at an object in real life, your left eye and your right eye see the same things but from a different point of view, just because they are not in the same place. With these two images, your brain creates a 3D representation of space, based essentially on parallax differences.

A.1) Parallax: The best friend of your brain:

Hey, « the best friend of your brain »! That's a great advertising slogan, isn't it? This magic word describes one of the most important concepts in 3D viewing: the one used by 3D tracking software to reconstruct a 3D point cloud, extracted from only 2D images, to finally create a 3D moving camera that matches the real one.

To understand what it is, just do this simple experiment with me. Put your index finger in front of your nose, about 10 cm away from it. Now, close your right eye and move your finger to place THIS WORD on the right of it.

Now, open your right eye and close the left one. The word has jumped to the other side of your finger! That's because your finger is closer than the word. Each object, according to its distance from your eyes, is horizontally offset when you switch from one eye to the other. Far objects are minimally offset, close objects are highly offset. Parallax represents all the different offsets your brain uses to create a mental 3D world.


In this case, the chinchilla's ear is placed on the right of the grass in the left image, and on the left in the right image (first on next page). That's the parallax effect!
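The finger experiment can be put into numbers. Here is a minimal sketch, derived from similar triangles (my own illustrative formula, not something from the article), of how the on-screen offset depends on an object's distance, assuming a 63 mm IPD and a focal plane 2 m away:

```python
def screen_parallax(depth_m, ipd_m=0.063, plane_m=2.0):
    """Horizontal offset, measured on the focal plane, between where the
    left-eye and right-eye sight lines cross it for a point at depth_m.
    Zero for a point on the plane, negative (crossed) for closer points,
    positive (uncrossed) for farther points."""
    return ipd_m * (depth_m - plane_m) / depth_m

# A point sitting on the focal plane shows no offset at all:
print(screen_parallax(2.0))    # 0.0
# The finger at 10 cm jumps a lot; a tree 50 m away barely moves:
print(screen_parallax(0.1))
print(screen_parallax(50.0))
```

The sign flip around the focal plane is exactly the word "jumping" from one side of the finger to the other.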


Two of the numerous 3D devices created in the past

Page 7: blenderart_mag-28_eng

A wonderful application of this concept, named photogrammetry, can create a 3D model of a mountain from only two hi-res photos of it, shot by the same camera but slightly offset horizontally. A high offset between the two same pixels represents a close point, and a low offset represents a far point. A 3D point cloud can be extracted from that information and a mesh can be created from it, using Voronoi or Delaunay triangulation, for instance. Finally, one of the original images is projected onto the mesh via camera mapping techniques, which provides a highly detailed and textured model of the mountain. Magic, isn't it?

A.2) Interpupillary: This is not an insult!

The distance between the centers of your eyes is called the « interpupillary distance » (IPD) and it's one of the keys to stereoscopic view. The rotation of your eyes doesn't change the IPD: only the distance between the rotation centers is taken into account, so the value stays constant even if you squint. This image of our friend Bunny shows the IPD of his strange sight.

The average IPD for a human is 63 mm (about 2.5 inches). Of course, a six-year-old boy doesn't have the same IPD as a 33-year-old basketball player. The majority of adults have IPDs in the range 50-75 mm, and the minimum IPD for a child is 40 mm. We can assume that a three-month-old baby has an even smaller IPD. But at that age, children don't care about any 3D images and prefer playing with mom's nipples. ;o)

So, that's the first thing to remember. The optical centers of the two lenses of the two cameras, whether they are real or virtual, have to be horizontally offset by this average IPD of 63 mm and perfectly aligned vertically, as your eyes are. In Blender, the scale of your scene and of all objects in it is thus important.

If you choose an IPD of 50, most people will be disturbed all along the movie because this IPD will be too far from their own. So, the average value of 63 mm is the best choice because it is a medium value … for an adult. That means a 3D movie will be more difficult to « see » for a young child, because the difference between his IPD and the one used for the movie is higher than it is for his parents. It will require more effort for a child's eyes to find the right vergence (we will explain this word in a few moments).

So, the choice of the IPD has to be made really carefully. If you choose an IPD of 95 mm, that means you work for an alien audience, and for sure you'll give a human audience a big headache. I guess that's not your goal … except if you're an alien and have a plan to conquer the world.
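In Blender, respecting the IPD just means keeping the two virtual cameras 63 mm apart in scene units. A trivial helper along these lines (the function and its names are hypothetical, only for illustration) makes the scene-scale point concrete:

```python
def stereo_camera_x(ipd_m=0.063, scene_unit_m=1.0):
    """X coordinates (in scene units) for the left and right cameras of a
    stereo rig centred at the origin. With the common convention of
    1 Blender unit = 1 m, a human IPD of 63 mm becomes +/- 0.0315 units;
    change scene_unit_m if your scene uses another scale."""
    half = (ipd_m / scene_unit_m) / 2.0
    return -half, +half

left_x, right_x = stereo_camera_x()
print(left_x, right_x)   # -0.0315 0.0315
```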

By Moisés Espínola


I hope you don't see that when you look into the mirror


Page 8: blenderart_mag-28_eng

That's why the mainstream 3D industry has to use this IPD of 63 mm, chosen for the parents who have enough money to pay for cinema … children don't! So, children can have a headache ... that's not a problem … is it? ;o)

A.3) 3D contents for Cinema or Television: Not the same fight:

As we just said, the IPD is the first parameter to take into account when you want to produce 3D content. Close objects are highly offset between the left eye and the right eye; far objects are less offset. When you project a movie on the big screen, a little difference of parallax (how you produce the offset between left and right eye) can be enough to match your IPD, because the big screen is quite big. But if you watch the same movie on a small television, the relative distance between the two images is smaller because it is reduced. The « 3D effect » will be less impressive.

So, before your production starts, you have to think about what kind of medium your movie is made for and adapt the IPD used to the screen size. Theaters that want to display 3D movies have to have a minimum size for their screens. As an example, the post-production of Avatar would have to be totally redone for small screens if Mister Cameron wants to give the same experience to the audience who saw his movie in theaters. Is that the reason why the release of a 3D BluRay version of Avatar is planned for the end of 2010 while the 2D version is already out? The official explanation is that not enough people have a 3D BluRay player. That's probably part of the truth, but my paranoid mind can't stop believing the other explanation. ;o)

As we will discuss in a later chapter, 3D images can be recorded and broadcast in many ways. But in each case, the images for each eye can be extracted and processed independently. A software player like « Stereoscopic Player » can extract each image and reduce the offset between them with a simple horizontal change of position. I guess this feature will one day be available on 3D BluRay players and/or 3D televisions to virtually adapt the IPD to each screen size and each viewer. But it's not enough to automatically convert a « theater offset » to a « television offset », which probably requires more work to achieve a good 3D TV experience.
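To see why the same footage cannot serve both screens, compare the physical separation that one pixel parallax produces on a theater screen and on a TV. A rough sketch with assumed screen sizes (divergence starts when the separation exceeds the viewer's IPD):

```python
IPD_M = 0.063  # average adult interpupillary distance, in metres

def physical_offset_m(parallax_px, image_width_px, screen_width_m):
    """Physical distance on the screen between a left/right pixel pair,
    given the image width in pixels and the screen width in metres."""
    return parallax_px * screen_width_m / image_width_px

# 30 px of parallax in a 2048 px wide (2K) image:
theater = physical_offset_m(30, 2048, 20.0)  # ~0.29 m: wider than the IPD,
                                             # so the audience's eyes diverge
tv = physical_offset_m(30, 2048, 1.0)        # ~0.015 m: a much milder effect
print(theater, tv)
```

The same 30 pixels are far too much for a 20 m screen and barely noticeable on a 1 m TV, which is exactly the adaptation problem described above.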

A.4) Vergence and focal plane:

We previously described the concept of IPD. But there is a second parameter of equal importance. It's named « vergence ». Once again, to understand this concept, let's do a second experiment together using your finger. Put your index finger in front of your nose, about 10 cm away from it, and look at it. While you keep the focus on your finger, you can see behind it some parts of other objects, like a chair for instance. But you notice that you can see the chair twice. Now, if you keep the focus on the chair, you can see your finger twice.


To look at a close object, Bunny has to squint. Each eye rotates in the opposite direction, according to a vergence angle


Page 9: blenderart_mag-28_eng

When you keep the focus on your finger, each eye tries to rotate to clearly see the target, even if the rotation angle is different for each eye. You are simply squinting. Now, imagine two straight lines, one for each eye, starting from the center of the iris and going away in a direction corresponding to the rotation. At one point, those two lines intersect and create something called the « focal point », formally the point you are looking at. Stretched horizontally and vertically, this point can be extended to a normal plane named the « focal plane ».

When you watch a traditional (2D) TV, your eyes converge according to the distance from them to the screen. If your screen is far away from your eyes, the convergence is low and the two lines are close to parallel. This situation is very relaxing for your eyes because the muscles in charge of rotation are close to sleeping.

If the screen is too close to your eyes, the convergence has to be higher and the lateral eye muscles work really hard, usually causing a headache.

In a 3D workflow, the focal plane is used to place the position of your screen in the scene. All objects located between the camera and the focal plane can pop out of the screen. All objects located far beyond the focal plane will look far away, « behind » the screen. Objects located on the focal plane will be placed in 3D space exactly where your TV set sits in your living room.

These parameters are probably the most important when you plan to create a two-hour movie. Imagine an action movie with two cuts per second, a really speedy edit. For each shot, your eyes have to find where the focal plane is and adapt their vergence to it. If each shot is too short and the focal plane jumps from one position to another every second, it's headache day! Because your eyes have to do crazy gymnastics all along the movie.

Switching from one shot to another can be really uncomfortable. For instance, say you have to shoot a soccer match live. The first cam shoots the players from the top, really far from them, and probably uses two cameras that are close to parallel. Suddenly, the ball is to be played as a corner kick. We switch to a cam driven by a Steadicam, placed only two meters behind the player who shoots the ball, with a high vergence angle. And bam! Your eyes have to converge to focus on the player who shoots the ball … and bam again, we switch back to the far cam. That's just a simple example, but it shows that we probably have to change the way a sports match is directed in 3D, to switch more smoothly from one cam to another. 90 minutes of eye gymnastics ... it's quite long ;o)

That's one of the secrets of Avatar. Why don't we have a headache after more than two hours of movie? Because all the characters, and particularly their eyes, are always located on the focal plane. When you look at a character, you immediately look at their eyes. It's a reflex.


When Bunny looks at a far object, such as the butterfly, the vergence angle is pretty low and the lines extended from the eyes are close to parallel


Page 10: blenderart_mag-28_eng

By placing each character's eyes on the focal plane, your eyes don't have to move at each cut when you switch from one character to another. In this case, a 3D movie is as comfortable as a 2D movie, because your eyes always converge (or at least, most of the time) at the same point: the place of the screen. In this way, we avoid all that eye gymnastics. You can even watch the movie without any glasses … the characters won't have too much « blur » on their faces.

When you shoot a 3D movie, in real life or in the CG world, you have to choose whether your cameras will be totally parallel or use vergence. Both methods exist and are heavily debated by 3D professionals. Parallel shooting is usually more comfortable because the eye muscles don't have to work a lot. But with a « parallel rig », we consider that the focal plane is pushed to infinity. So, objects can pop out of the screen, but none can go far « behind » it.

When cameras use vergence, you can push objects far, far away. But you have to adjust the rotation value of each cam really carefully. If it's too high, the audience's eyes will diverge. And your eyes never diverge in real life! So, the final result is, once again, a big headache!

A.5) Optical issues in real life (why the first 3D movies were CGI):

We just discussed the rotation of camera rigs using vergence. But what exactly is a 3D camera rig?

In real life, a 3D camera rig is a set of tools that permits placing two cameras side by side and adjusting the IPD and vergence between them. This kind of rig has to have a high degree of precision. All parameter changes on one cam have to be propagated to the other cam: focus, iris, zoom, gain, gamma ... and more. This synchronisation can be achieved by a mechanical or electronic process. Of course, the optical lens has to be the same on the two cameras. Many kinds of rigs exist. Just type « 3d camera rig » into Google image search to see dozens of different systems.

Cameras are not always placed side by side, because some kinds of cameras are quite big! Even if you place the two cameras as close together as you can, the distance between the optical centers will be quite a bit bigger than a human IPD. In this case, one cam can be placed as for a 2D shoot and the other one is placed upside down, filming the image reflected by a semi-transparent mirror.

Once again, type « 3d mirror rig » into Google images to see the different systems used. There are many problems you have to manage when you shoot with this kind of rig. For instance, the image that passes through the semi-transparent mirror is darker than the one directly reflected and shot by the second cam (about 1 stop darker).
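That one-stop loss is a factor of two in linear light, so matching the two views amounts to a simple gain correction. An illustrative helper, not a tool from the article:

```python
def mirror_gain(stops_lost=1.0):
    """Linear gain compensating light lost through the semi-transparent
    mirror: each photographic stop is a factor of two."""
    return 2.0 ** stops_lost

def compensate(linear_value, stops_lost=1.0):
    """Brighten a linear-light pixel value from the transmitted view so
    it matches the reflected view (clipped at 1.0)."""
    return min(1.0, linear_value * mirror_gain(stops_lost))

print(mirror_gain(1.0))       # 2.0
print(compensate(0.25, 1.0))  # 0.5
```

In practice the correction is applied in the camera (gain) or in grading, but the arithmetic is this simple.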


When Bunny looks at a close object, such as the apple, the vergence angle is high


Page 11: blenderart_mag-28_eng

So, you probably now understand that filming a 3D movie in real life is not so easy. Even if special cameras using two sensors and two lenses are slowly coming onto the market, like the upcoming Panasonic camera or the robust one from « 3D One », stereoscopic shooting is a science, and there are many things still to improve in this specific production pipeline.

When you create a 3D movie using only CGI tools, most of the problems described above disappear. Virtual cameras, like the ones you can handle in Blender, don't have any size, so there is no problem with the IPD. The same goes for the rotation of the cameras according to the vergence. In real life, the rotation angle of each camera has to be really precise: the value is usually around 2 or 3 degrees! The rig controlling the two cameras has to be perfect, and such rigs are obviously costly. In Blender, setting the Y rotation angle to a value of 0.25 degrees is really easy. That is the main reason why most 3D movies, for now at least, are CG.

A.6) Think about the limits … of your screen:

When you produce 3D content, especially for 3DTV, you have to think about the limits of your screen, according to the field of view of your eyes. In a theater, if you're placed in front of the screen, pretty close to it, you don't pay any attention to the edges of the image. If a giant dinosaur jumps out of the screen (remember, closer than the focal plane), you have better things to do than looking at the top right corner of the screen. Because you're scared!!!

But when you watch a 3DTV, the edges of the screen are fully visible within the field of view of your eyes. And it's pretty wide … around 160 degrees for most humans (if you're an alien, let me know what your FOV is)! There's an interesting experiment to do if you have this kind of device. Put a simple cube between the focal plane and the camera rig. When you wear your 3D glasses, the cube seems to jump out of the screen. But if the cube comes closer, the edge of the screen will finally crop it. At this point, the cube seems to jump back into the screen very quickly, to a distance equal to the focal plane. Your eyes see some disparity between the two views through parallax differences, but your brain says it's impossible to see an object 1 meter away from your nose while the same object is cropped by the border of the screen, 2 meters behind.

So, if you can, you should always keep objects that jump out of the screen inside the limits of that screen. If you can't, there is another process that helps to limit this brain dilemma, named « floating windows ».

The same problem appears when you shoot a panoramic shot, from left to right for instance … once again in a soccer match. Some elements of the image start to appear on the screen in the right view first, then in the left view, one or more frames later. In this case, your two eyes don't see the same things at the edge of the TV picture. And that's bad for your brain! So, the concept of floating windows is quite simple. The goal is to hide in the right view all elements that you can't see in the left view. The problem is that you can't set the same crop value for all elements.

All objects have to be cropped according to their distance from the camera (remember the parallax and the difference of speed between close and far objects). But this kind of « adaptive crop » is totally impossible in real life, especially when you shoot live. So, we have to find a « generic solution » that works for all images. The best solution is simply to slightly blur the sides of the images. For the left view, you blur the left side a lot and the right side a little; for the right view, the left side a little and the right side a lot.


Page 12: blenderart_mag-28_eng

The blurry borders don't have to be wide, only 20 or 30 pixels on HD footage, and they don't need a huge amount of blur. If the two cameras were perfectly aligned vertically during the shooting, only horizontal blur is needed.
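This soft crop can be sketched as a per-column blur weight: a wide, strong band on one side and a narrow, light one on the other. The band widths follow the 20-30 px suggestion above; the linear falloff and the names are my own:

```python
def blur_weight(col, width, strong_band=30, light_band=10, strong_side="left"):
    """Blur strength (0.0-1.0) to apply at column `col` of one view:
    full strength at the border of the strong side, fading linearly to
    zero past the band, with a lighter, narrower ramp on the other side."""
    left_band, right_band = ((strong_band, light_band)
                             if strong_side == "left"
                             else (light_band, strong_band))
    if col < left_band:                    # ramp down from the left edge
        return (left_band - col) / left_band
    if col >= width - right_band:          # ramp up toward the right edge
        return (col - (width - right_band) + 1) / right_band
    return 0.0

# Left view of 1920 px wide HD footage: strong blur on its left edge,
# nothing in the centre, light blur on its right edge.
print(blur_weight(0, 1920))      # 1.0
print(blur_weight(960, 1920))    # 0.0
print(blur_weight(1919, 1920))   # 1.0, but over a band only 10 px wide
```

For the right view you would call it with `strong_side="right"`, mirroring the bands as the article describes.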

This simple technique can strongly reduce this strange effect during a dolly or panoramic move. I personally use it a lot when I work with virtual sets and Steadicam shots, with the help of 3D tracking.

A.7) So many parameters to take into account:

As a conclusion to this big first part, we can say we have described the most important parameters to take into account to produce 3D content. But in fact, more parameters deserve further study. For instance, shading is one of the cues that gives your brain volumetric information. It's a big component of the 3D space representation created by your mind. So many things need to be analysed carefully. We are just at the beginning of the rise of the 3D wave … before it becomes a tsunami.

I hope this part was not too boring for you, because it's not directly related to Blender. But before we describe some processes to create 3D content using Blender, we had to describe what 3D is, right?

Ok, now that we know what 3D is and how it works, let's take a look at how to broadcast it … and how we could broadcast it in the future.

B) Broadcasting 3D Contents:

When you want to broadcast 3D content, you have to choose between two techniques:

First one: both images, for the left and right eye, are projected at the same time, blended into one « composite » image. Glasses placed on your nose separate the two images, allowing each eye to see the right one. For this technique, we can use two kinds of glasses, anaglyph or polarized, more generally called « passive glasses ». We will describe them further on.

Second one: both images are projected sequentially, one after another. Left / Right / Left / Right / Left / Right / and so on. On your nose, you have to wear another kind of glasses, named « active glasses ». They run on a power cell and are synchronized by a reference signal emitted in the theater or by your 3DTV. When the projector shows an image for the left eye, the glasses hide your right eye by activating an LCD surface.

B.1) Three kinds of glasses, three levels of price:

Ok, let's review the three kinds of glasses:

Anaglyph: The goal of an anaglyph image is to tint each « sub-image », for the left and right eyes, with a different color and finally mix them into one image. There isn't any strict standard defined for anaglyph images. But generally, the luminance of the left image is tinted using red at 100% while the luminance of the right image is tinted using cyan (composed of green and blue at 100%).

Other combinations are possible, like Red/Green for instance. Results are quite the same, but obviously, you


Anaglyph glasses


Page 13: blenderart_mag-28_eng

have to use the right model of glasses to clearly « extract » each image. Using this technique, the colors of the original image can't be reproduced perfectly. Most of the time, I personally prefer to convert each image to grey scale before the tinting process.

The visualisation of the « 3D effect » will be far better. Remember that today, anaglyph is not really a broadcasting choice; it's mostly used for previewing how the 3D works. For instance, I don't have any 3D display at home yet, but all the studios I have worked for have many active or passive screens. When I work at home, I check my RGB colors and the general aspect of the render in 2D.

On the other side, I generate a grey scale anaglyph render that clearly shows me the « 3D effect ». Once everything looks OK, I generate another kind of render named « Side by Side » (we will describe this later), put it on my USB key and watch the resulting sequence on the studio's screens.
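The grey-scale red/cyan mix just described is a simple per-pixel operation. A minimal sketch, using the common Rec. 601 luma weights as the grey-scale conversion (my assumption, the article doesn't name one):

```python
def luma(rgb):
    """Grey-scale value of an RGB pixel (Rec. 601 weights)."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def anaglyph_pixel(left_rgb, right_rgb):
    """Grey-scale red/cyan anaglyph: the left view's luminance goes to
    the red channel, the right view's to green and blue (cyan)."""
    y_left, y_right = luma(left_rgb), luma(right_rgb)
    return (y_left, y_right, y_right)

# A detail present only in the left view ends up pure red:
print(anaglyph_pixel((1.0, 1.0, 1.0), (0.0, 0.0, 0.0)))
```

Through a red filter only the first channel survives, through a cyan filter only the last two, which is how the glasses route each view to its eye.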

So, even if anaglyph is not a definitive solution, it can be a good introduction to the 3D world because it's really cheap! Anaglyph glasses usually cost from 1 dollar/euro to about 8 dollars/euros for the most « sophisticated » models (in plastic, with real glass frames). If you wanna give 3D a try in your productions, anaglyph will probably be your best friend at the start.

Polarized: You probably learned at school that light can be explained by two kinds of phenomena: as particles, using the photon concept, and as spectral waves. Lightwaves are sinusoidal and can exist at different frequencies. Each frequency, also known as a wavelength, represents a different color. To understand what polarization is, just take a piece of paper and draw a sinusoidal wave on it.

Now, hold the paper in your hands and turn it in all directions. If you lay the paper on a table, the sinusoidal wave is totally parallel with it … and with the ground too. The orientation of this wave is now horizontal. Now, put your paper on the wall. The orientation of the wave has turned by 90 degrees: it is now vertical. When you turn on a light, billions of waves are generated in all directions and with random orientations. But you can put a filter just in front of the light to keep only the waves that are horizontal or vertical. This process is called polarization.

In this way, you can project on the same screen, at the same time, two different images, apparently blended together, but which can easily be separated using the same kind of filter right in front of your eyes. The filter for the left eye will only let horizontal waves pass through it, and the other filter, dedicated to the right eye, will only let vertical waves pass. By this process, the color limitation of anaglyph images is resolved. The other good thing about this technique is that polarized glasses (also known as passive glasses) can be produced at a really low price. But theaters that project 3D movies need two synchronized projectors (as is the case for the IMAX 3D system), each with a different kind of filter in front of it to generate the two polarized images, horizontal and vertical.


Polarization process


Page 14: blenderart_mag-28_eng

You can see some images of IMAX 3D projectors athttp://widescreenmovies.org/WSM11/3D.htm

Sequential : The third technique uses another kind of glasses, named « active glasses ». As we described before, the goal of sequential files is to project the two images one after the other, hiding the eye that isn't supposed to see the current one. Using this technique, only one projector is needed, usually a digital projector like the ones from Barco or Christie, linked to a Digital Cinema Player, for instance the systems from Doremi that I recently used.

This way, film is not needed anymore. The movie is uploaded into the player via a simple USB port and encoded using JPEG 2000 for the video (at a resolution of 2K or 4K) and AC3 or six separate WAV tracks for the audio. Both streams are packed into a single MXF file, accompanied by four XML files used by the player. These five files create something called a DCP (Digital Cinema Package) and are grouped into a single folder. In this MXF file, images are stored sequentially: Left / Right / Left / Right / and so on.

When I started to work with the Doremi player, I was really surprised to read in the documentation that the embedded system was a small Linux and the player was built around FFMPEG! Yes, when you go to a digital theater and watch a 3D movie, FFMPEG is in the place! Funny, isn't it? OK, do you want to know something even funnier? Last month, I was working for one of the biggest French TV channels and a TD gave me a DCP of a stereoscopic movie trailer. The first part of my job was to extract what are called essences (or streams) to make some modifications to them.

I tried to extract them using every kind of software installed on my computer, from Adobe, Avid, even Final Cut on a Mac next to me … none of them was able to read this damned MXF! Suddenly, I thought about the FFMPEG inside the Doremi player, and my poor brain made the leap to Blender. I decided to give it a try, and … YES!!! Blender can read an unencrypted 4K MXF file from a DCP directly in the Sequencer. That's incredible!

Ok, I just saw two problems that, I think, can be easily corrected by the teams of FFMPEG and/or Blender (hey devs … I love you, you know). The color space inside the DCP is not RGB but X'Y'Z', so the color space has to be converted before displaying the movie. But I read somewhere on a roadmap schematic that color management is on the TODO list. So … I cross my fingers. OK, the second problem is trickier. In this kind of MXF, the time code for the left image and the right image seems to be the same.

And when you play the video using the shortcut ALT+A in the sequencer, playback doesn't seem to be based on the time code. For instance, when you put a 30 second DCP/MXF file on the timeline and scrub along it using your mouse, you can see the end of the movie at the right time.

That's because you don't play the movie continuously: you jump from one position to another in a random way, and Blender probably looks at the time code at that moment. My goal was to extract all the frames of the movie in the same order they are packed in the MXF file and convert them into PNG files. I'll separate each eye later with the sequencer or another editing software.


Active LCD glasses


Page 15: blenderart_mag-28_eng

But if you render this clip placed on the timeline of the sequencer from frame 1 to 720 (that is, 30 seconds at 24 FPS), Blender finally renders only half of the clip, while it seems to be finished on the timeline. I guess it's because the clip was read at a frame rate of 24 FPS. And remember, when you work with a sequential file, you have to double the frame rate! When I looked at the properties of the MXF clip in the sequencer, Blender showed me that the frame rate is set to 24 FPS, because it simply reads the metadata stored inside the MXF. But that metadata lies to Blender! Shame on it!!! And unfortunately, in Blender you can't change (for now, I guess) the frame rate of the clip directly in the properties. It would be really useful to avoid this kind of problem! Blender could be the first editing software able to handle DCP packages!
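The arithmetic behind this half-rendered clip is easy to check in plain Python (an illustrative sketch, not Blender code; the function name is invented for the example):

```python
# A sequential (frame-packed) stereo stream stores Left/Right pairs,
# so the real frame count is double the per-eye count.
def sequential_frame_count(duration_s, fps_per_eye=24):
    return duration_s * fps_per_eye * 2    # L and R interleaved

print(30 * 24)                     # 720 frames per eye
print(sequential_frame_count(30))  # 1440 frames actually stored in the MXF
```

Reading the clip at the 24 FPS announced by the metadata, frames 1-720 cover only the first half of the 1440 packed frames, which matches the behaviour described above.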

And if one day Blender is able to produce a DCP package directly through the render panel, like « OpenCinemaTools » (http://code.google.com/p/opencinematools/)... I'll buy champagne for all the devs!!! (hummm … at the Blender Institute only, OK?) So, to finish on sequential files: the worst part is that active glasses are more costly and heavier than passive ones. If you want to investigate the DCI standard (Digital Cinema Initiatives), just go to the DCI website: www.dcimovies.com

B.2) Differences between broadcasting techniques in theaters and on 3DTV

Ok, now that we know what kinds of processes are used to project 3D movies on the big screen, what about 3DTV? The answer is quite simple: exactly the same techniques … with some tiny differences. First, you have to know that most 3D shows are generally created at least in Full HD, at a resolution of 1920x1080, square pixels.

Anaglyph : Anaglyph will never really be used to broadcast 3D content to the masses. Now that more sophisticated techniques exist, anaglyph is used only for previewing and for some « marketing experiences » like Google Street View in 3D.

Polarized : Same technique as the one used in theaters, but with a little difference. A passive screen, like every HDTV, has a resolution of 1920x1080. But in this particular case, every other line is polarized horizontally and the remaining lines are polarized vertically. It's exactly the same as field rendering. So, unless the vendor of the screen chooses to double the number of lines (to reach 2160 lines), the resolution of each frame is divided by 2 vertically. Taking this limitation into account, the resolution of an image is 1920x540. So far, I have never seen a consumer screen with a vertical resolution of 2160 lines … but once again, I cross my fingers to see one soon.
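A tiny sketch of that resolution penalty (plain Python, invented function name, not tied to any display API):

```python
# On a line-interleaved passive 3DTV, every other line is polarized one
# way and the remaining lines the other way, so each eye only receives
# half of the panel's vertical resolution.
def per_eye_resolution(panel_w, panel_h):
    return panel_w, panel_h // 2

print(per_eye_resolution(1920, 1080))  # (1920, 540), as stated above
print(per_eye_resolution(1920, 2160))  # a doubled-line panel would restore (1920, 1080)
```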


The process of creating an anaglyph image


Page 16: blenderart_mag-28_eng

Sequential : For now, only Blu-ray 3D discs can handle this kind of stream at home. The biggest advantage of Blu-ray is that the movie can be played at the original frame rate, generally 24 FPS, avoiding any telecine process (even if gamma corrections still have to be done). So, in the case of 3D movies, the frame rate will be at least 48 FPS (remember, two eyes at 24 FPS each). But you have to know that active screens have to reach a minimum refresh rate to work well with 3D movies.

As we said before, an MXF file inside a DCP stores images Left/Right/Left/Right … in this order. But if you project each image only one time, you'll probably see a flickering effect due to your retinal persistence. In theaters, even for a 2D movie, the same image is shown 3 times before switching to the next one.

So, for a 3D movie, the real display sequence is Left 1 / Right 1 / Left 1 / Right 1 / Left 1 / Right 1 / Left 2 / Right 2 / and so on. So, if you quickly calculate the resulting frame rate for a Blu-ray 3D disc: 24 x 2 x 3 = 144 FPS/Hz. That's why your 3DTV has to have a minimum frequency of 150 Hz to comfortably display a 3D movie.
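The refresh-rate arithmetic, including the triple flash, can be sketched like this (illustrative Python, invented function names):

```python
# Triple flash: each eye's frame is displayed 3 times before moving on,
# to avoid flicker from retinal persistence.
def display_rate(fps_per_eye=24, eyes=2, flashes=3):
    return fps_per_eye * eyes * flashes

def triple_flash_sequence(n_frames):
    """Display order of packed L/R frames with triple flash."""
    seq = []
    for i in range(1, n_frames + 1):
        seq += [f"L{i}", f"R{i}"] * 3      # L i / R i shown three times
    return seq

print(display_rate())                # 144
print(triple_flash_sequence(2)[:8])  # ['L1', 'R1', 'L1', 'R1', 'L1', 'R1', 'L2', 'R2']
```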

B.3) How to transport 3D streams into a classical HD pipeline :

For now, there isn't any 3D broadcast standard to send a Full HD 3D program at 48 frames per second to the masses through satellite, digital terrestrial service (called TNT in France) or IPTV (TV over ADSL). Until such standards exist, broadcasters are constrained to use existing HD pipelines to stream 3D content. Just as they put 16/9 images into a 4/3 pipeline using anamorphic images for SD, the two HD images for each eye are squeezed to fill the space of only one HD image. Each view (left and right) is scaled to 50% of its original size horizontally and placed « Side-By-Side » (which has become an official technical term) to create a Full HD image with a resolution of 1920x1080 containing the two views.

All programs broadcast since 3DTV came out, including the last Soccer World Cup, are done like that. Side-by-Side (also known as SBS) is kind of a « first introduction » to 3D broadcasting … but the horizontal resolution (and of course, detail) of each image is divided by 2. Several other combinations exist (the image has been darkened for better visual understanding):
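The anamorphic squeeze is simple to quantify (illustrative Python, with made-up function names):

```python
# Side-by-Side: each eye's Full HD view is squeezed to 50% horizontally,
# then both halves share one 1920x1080 frame.
def sbs_layout(full_w=1920, full_h=1080):
    half_w = full_w // 2                  # 960 px per eye after the squeeze
    left  = (0, 0, half_w, full_h)        # (x, y, w, h) of the left view
    right = (half_w, 0, half_w, full_h)   # right view starts at x = 960
    return left, right

print(sbs_layout())   # ((0, 0, 960, 1080), (960, 0, 960, 1080))
```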


The process of creating a « Side-by-Side » image


Page 17: blenderart_mag-28_eng

Top/Bottom : Same as Side-by-Side, but here the 50% scale is done on the vertical axis.

Line by Line : Nearly similar to fields: every other line contains the left image and the remaining lines contain the right one. More suitable for decoding the two images at nearly the same time and keeping them synchronized when the decoder doesn't have a big buffer memory to keep one image while the second is decoded (which is needed with the Side-by-Side or Top/Bottom techniques).

Matrix : Here, the pixels of each image are alternated every other pixel, visually creating a kind of grid. Once again, the goal is to decode the two images exactly at the same time, even more precisely than with the « Line by Line » technique.

For now, Side-by-Side is the most used technique, and all 3DTVs are able to understand it and extract each image from this « composite image ».

When a 3DTV receives this kind of image, there are two solutions:

Active screen : Each image is stretched back to its original size to produce a kind of HD image (faked by bi-linear interpolation), and the images are played one after another, as we described previously.

Passive screen : Each image is stretched back to its original size to produce a kind of HD image; both are played at the same time but at half vertical resolution, alternated line by line and differently polarized, as we said previously.

So, as you can see, in both cases images are stretched back to their original size. In the case of an active screen, we can consider that the Side-by-Side technique reduces the horizontal resolution of the original footage by 50%. But with a passive screen (which doesn't have the doubled number of vertical lines yet), the vertical resolution is divided by 2 once again.

So, at the moment I write this (everything evolves very quickly), a passive screen only shows an image that has one quarter of the original resolution! So for now, 3D HDTV is not always really … HD. It will only be when broadcasters are able to stream 2 x Full HD footage without any anamorphic tricks like Side-by-Side.

C) Creating 3D contents using Blender 2.5 (at last):

Yes !!! Finally, here it is. After all that technical stuff, you should have acquired all the knowledge needed to understand what we are going to create and how to create it! I know it wasn't the funniest part, but now I can directly use terms like anaglyph, polarized or side-by-side without having to explain them, focusing only on the Blender part.


Three different 3D broadcasting techniques


Page 18: blenderart_mag-28_eng

All I'm going to do will be done without the help of any script. My first goal is to describe the whole process so that it's easily understandable, and finally, I hope, to inspire more people to create scripts that automate some of these tasks. I have already started writing scripts, but for now I'm waiting for the 2.5 API to be fully stabilized before continuing. The build of Blender 2.5 used here is r29308, taken from graphicall.org.

At the end of this article I'll make some proposals to enhance Blender's 3D capabilities and make the whole workflow easier. OK ... first, let's talk about the 3D camera rig.

C.1) Creating the 3D Camera Rig :

As usual, if you want to work fast, you have to create some handy tool sets. Just as an animator has to create a rig to animate his character, we have to create our own kind of 3D camera using traditional Blender techniques. And Blender 2.5 has some new powerful features for that. Let's see how:

1 As we said before, taking into account the scale of your scene is really important to achieve realistic effects when you work with common objects used in real life. With Blender 2.5, we can now set the unit system to « Metric » in the Scene panel, on the right of the interface. It will be especially handy when we set the IPD value, expressed in millimeters.

2 Now, create a simple Empty by pressing Shift+A >> Empty and press Alt+R then Alt+G to clear any transform, placing it at the center of the world. To easily spot it in the scene, I switch to the Object Data panel and change the Display to Circle.

3 It's time to add a new camera to your scene by pressing Shift+A >> Camera. Once again press Alt+R then Alt+G to place it at the center of the world, and turn it by an angle of 90 degrees on the X axis. Via the Outliner at the top right, Ctrl+click on Camera to rename it Cam_Center.

4 In the same way you did it for the cam, rename your Empty 3D_Cam_Rig. Select your camera, then press Shift and click the Empty to add it to the selection. With the mouse cursor over the 3D View, press Ctrl+P to Set Parent to Object.

5 Select Cam_Center and press Alt+D to create a linked copy of that camera. Rename the duplicated one Cam_Left. As you can see, if you change the Angle value of Cam_Center, controlling the field of view, the FOV of Cam_Left changes in the same way and exactly at the same time. All parameters are fully linked.


Page 19: blenderart_mag-28_eng

6 Select Cam_Left then press Nkey to show the Transform panel, on the right of the 3D View. Look at the first Location parameter shown, named X. You can type any kind of value in this field, and because you previously switched your scene units to Metric, you can enter a value followed by mm, for millimeters. If the value is positive, the camera moves to the right of Cam_Center. If it's negative, it moves to the left. So, because your duplicated cam is named Cam_Left, the value for X will be negative in the local space of its parent, the Empty. As we previously said, the most used IPD is around 65mm. But you have to divide this value by two, because the left cam will move by 65/2 to the left and the right cam by 65/2 to the right. So, you can directly type -65/2mm in the field. Magic, isn't it?

7 Ok, now that we have understood how this property works, right-click on the Location X value and choose Add Single Driver. The field is now colored purple, meaning that it's controlled by a driver. Now, select the Empty named 3D_Cam_Rig and switch to the Object panel. Scroll down the panel to reach Custom Properties. For me, that's one of the most exciting features of Blender 2.5: the ability to add an unlimited number of custom values to control other parameters. All kinds of parameters, on every object! Expand this panel and click the Add button.

8 A new property is now created. All sub-panels can be easily moved across the Properties window by simply clicking and dragging their names. I suggest dragging Custom Properties to the top. This way, you can see all the controllers of your rig when you select it. For now, the new property is named prop. Click the Edit button to change this name to Cam_IPD. Because the human IPD is considered to be in a range of 50-75mm (remember, this article is not for an alien audience), set min to 50, max to 75 and the property value to 65, which is a medium value. If you want, you can fill the Tip field with IPD of the 3D Camera.

9 Now, right-click on the 65 value and choose Copy Data Path. In Blender 2.5, each data-block property, such as a custom property, can be identified by a unique ID named a Data Path. Switch to the workspace named Animation, then select Cam_Left. The lower Graph Editor is set to F-Curves Editor; click on that button to switch to Drivers. The X Location (Cam_Left) property appears on the left. Put your mouse cursor over the Graph Editor and press Nkey to display the properties panel on the right of the editor.


Page 20: blenderart_mag-28_eng

10 In the Drivers panel, click the Add Variable button, then click the empty field just next to the Object drop-down and choose 3D_Cam_Rig. A new field named Path is now shown. Click on the empty field and press Ctrl+V to paste the Data Path you previously copied from the Cam_IPD parameter. You should have something like ["Cam_IPD"].

11 ["Cam_IPD"] is now connected to this new variable named var. Change its name to camIPD. Just above the Add Variable button, you can see the field named Expr. This is the final output of the driver, directly plugged into the X Location of Cam_Left. So, if you simply type camIPD in this field, X Location will have exactly the same value as the custom property. In the other case, you want to create 3D elements that only live in the wonderful Blender CG world! In that case, knowing where the focal plane is (where the directions of the cameras intersect) is really difficult, and it can be useful to control the position of this focal plane using only an Empty. So, we have to create a kind of mixed setup, suitable for each case. To do that, we have to add new Custom Properties to 3D_Cam_Rig: one named FP_Influence with a min/max range of 0-1, and another named Vergence with a range of 0-5, even if 5 is probably too high. Remember, vergence values are usually between 0-2 degrees to avoid incredible headaches.

12 Once Cam_Left is set, just select it and press Alt+D to create a linked copy. Rename it Cam_Right. Even if all parameters of the camera are fully linked to the original one, the expression typed in the driver settings seems to work like a non-linked copy, which is a really good thing for us in this case. You just have to delete the minus sign in front of the expression: camIPD/2000. And that's it! Your IPD constraint is set.

13 Using exactly the same technique, you can add a new Custom Property to your 3D_Cam_Rig controlling the FOV of Cam_Center. Because Cam_Left and Cam_Right are linked copies of this original object, their respective FOVs will change too.
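The two driver expressions are plain arithmetic on the Cam_IPD property, so they can be verified outside Blender. This hypothetical sketch mimics both Expr fields; the /2000 folds together the millimeter-to-meter conversion (one Blender unit is 1 m here) and the split of the IPD between the two cameras:

```python
def cam_x_offsets(cam_ipd_mm):
    """Local X offsets (in meters) produced by the two driver expressions."""
    left  = -cam_ipd_mm / 2000    # Expr on Cam_Left :  -camIPD/2000
    right =  cam_ipd_mm / 2000    # Expr on Cam_Right:   camIPD/2000
    return left, right

print(cam_x_offsets(65))   # (-0.0325, 0.0325) -> each cam sits 32.5 mm from center
```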


Page 21: blenderart_mag-28_eng

14 OK, now let's talk about vergence. As we said before, you can shoot your scene using a fully parallel setup or a convergent one. Sometimes, you have to set vergence using an angle value, for instance when you have to put a CG element into live action footage, shot in 3D with a vergence of 1 degree.

15 Create another Empty with Shift+A >> Empty, then press Nkey to display the « Transform » panel on the right of the 3D Viewport. Set the XYZ Location values to 0/4/0, then lock the X and Z parameters. You can now simply move that Empty using Gkey to move it away from the camera. Rename that Empty FocalPlane and parent it to 3D_Cam_Rig.

16 Select Cam_Left, Cam_Right and finally FocalPlane. Press Ctrl+T and select Track To Constraint. Now, if you move FocalPlane, you can see the vergence of the cameras change. By moving this FocalPlane, you can easily choose which element of your 3D world is located at the distance of the screen, what is behind it and what's popping out.

But … remember: the IPD value has to be divided by two, and one unit in the Blender world is equal to 1 meter, because you previously set the units to Metric. The Custom Property added to the Empty is designed to work in millimeters, so you also have to divide camIPD by 1000. Finally, the result is (camIPD/1000)/2 = camIPD/2000. But don't forget to invert the result, because the left cam has to move … to the left. The expression to enter in the field is: -camIPD/2000

17 If you select 3D_Cam_Rig and try to rotate it by pressing Rkey twice, you can see that Cam_Left and Cam_Right don't rotate in the same way as Cam_Center. To fix this bad behavior, you have to switch to Local Space in the two drop-down menus. Then right-click on FP_Influence and choose Add Driver.

18 Switch back to the Graph Editor displaying Drivers. Select Influence on the left of the window and switch the type from Scripted Expression to Averaged Value. Click the Add Variable button and choose 3D_Cam_Rig just next to the Object drop-down. Right-click the FP_Influence parameter of 3D_Cam_Rig to Copy Data Path and paste it into Path. Now, you can control the influence of the Track To constraint using your Custom Property. By setting FP_Influence to 0, your 3D cam rig will be parallel. If FP_Influence is set to 1, the rig will be convergent. Just do the same for Cam_Right.


Page 22: blenderart_mag-28_eng

19 Finally, as you have previously done for the other parameters, create a driver for the Rot Z of each camera and connect them to the Vergence parameter of 3D_Cam_Rig. But this time, you have to convert the Vergence value, expressed in degrees, into radians, because Rot Z is expressed in radians. For Cam_Right, this value has to be positive; for Cam_Left, it has to be negative.
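The degree-to-radian conversion those drivers need can be sketched like this (plain Python with an invented function name; math.radians does the same job as a degree-to-radian expression in the driver's Expr field):

```python
import math

def vergence_rot_z(vergence_deg):
    """Rot Z driver outputs (radians) for a Vergence property in degrees:
    negative for Cam_Left, positive for Cam_Right."""
    rad = math.radians(vergence_deg)
    return -rad, rad

left, right = vergence_rot_z(1.0)
print(right)   # ~0.01745 rad for a typical 1 degree of vergence
```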

Your 3D camera is now completely ready to use. Don't forget, if you want to control vergence using an angular value, to set FP_Influence to 0. You can even have a mixed setup, using a value between 0 and 1. Of course, this rig is only a base. For instance, to create a two-point camera using a target, you just have to add a new Empty and link it to 3D_Cam_Rig using a Track To constraint. Keep in mind that 3D_Cam_Rig can be considered a single camera.

To lay out your shot, simply use Cam_Center and check from time to time what happens to Cam_Left and Cam_Right.

C.2) Setting up the Left and Right scenes and compositing nodes :

To render each camera separately, you have to create more than one scene. For now, each scene in Blender renders with the camera tagged as active, and you can't have two active cameras at the same time; otherwise, Blender wouldn't know which camera to use, which is logical. So, to render multiple camera views in a single render job, you have to add two more scenes. Here, we're going to explain how to make an anaglyph render using these two views.

1 At the top of the interface, rename the current scene Center, then click the « + » button to add a new one, named Left. As you can see in the Outliner, the Left scene is totally empty when it's created.

2 Switch back to the Center scene then press Akey to select all objects. Press Ctrl+Lkey (L as in Link) and choose Scene >> Left. Look at the Outliner: the Left scene is filled with the same objects as the Center scene. It's important to notice that all objects are linked, not copied. Any modification done in the Center scene will be done in every other scene.

3 Repeat the last two steps to create another scene named Right and link all objects to it. Now, jump into the Left scene, select Cam_Left and press Ctrl+Numpad 0 to set this cam as the active cam. Do the same in the Right scene to set Cam_Right as the active cam, and finally Cam_Center for the Center scene.


Page 23: blenderart_mag-28_eng

4 Switch back to the Center scene then jump into the Compositing workspace. Click Use Nodes at the bottom of the node editor: Render Layers and Composite nodes appear. In the first one, you can choose which scene Blender will render. Choose Left, then select that node, press Shift+Dkey to duplicate it and set the duplicated one to Right.

5 As we previously said, many solutions exist to broadcast 3D images. We're going to describe here the simplest one, suitable for everyone who doesn't have a 3D screen: anaglyph. Add two Separate RGBA nodes to your compositing graph, one for each Render Layers node. Then add a Combine RGBA node: plug the R output of the Separate RGBA node connected to the Left render into its R input, and the G and B outputs of the node connected to the Right render into its G and B inputs.

6 As we previously said, this kind of anaglyph combination tries to keep some information about color, but it never really works. To achieve a good representation of the « 3D effect », you have to turn each render to grayscale with a default ColorRamp before combining them. The two Separate RGBA nodes can then be deleted.

7 It's always good to keep the original renders on disk before combining them. To do that, you can add a File Output node for each render. One thing you have to know: even if you only want to output each render to work with them later, you have to combine them into one Composite node, even with a simple Color Mix you don't care about. Otherwise, only one render will be launched.
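The node tree of steps 5-6 amounts to per-pixel channel routing: grayscale the two renders, then take red from the left eye and green/blue from the right. A minimal pure-Python sketch of that idea (illustrative only; images are nested lists of (r, g, b) tuples, and the grayscale is a simple channel average standing in for the ColorRamp):

```python
def to_gray(px):
    """Simple channel average, standing in for a default ColorRamp."""
    r, g, b = px
    return (r + g + b) / 3

def anaglyph(left_img, right_img):
    """Red channel from the left eye, green and blue from the right eye."""
    return [[(to_gray(l), to_gray(r), to_gray(r))
             for l, r in zip(lrow, rrow)]
            for lrow, rrow in zip(left_img, right_img)]

left  = [[(1.0, 0.0, 0.0)]]   # a red pixel as seen by the left eye
right = [[(0.0, 1.0, 1.0)]]   # the same pixel as seen by the right eye
print(anaglyph(left, right))
```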

We can't describe here all the techniques to generate the other combinations, like Side-by-Side, V-Interlace or whatever. But among the Blend files provided with your favorite magazine you'll find a .blend that combines Anaglyph / Raw output / Side-by-Side in the same render job. It will certainly be useful for some people.

More info about render settings: remember that the scenes share the same objects because they're linked together, but each scene has its own render settings. Anti-aliasing, ambient occlusion and many other parameters can be different in each scene to optimize render time. But most of the time, you will have to set all parameters 3 times.

A good idea for Python fanatics could be a function to copy the render parameters from one scene to another. It could be really useful.


Page 24: blenderart_mag-28_eng

The Add-on system is so good in Blender 2.5 that everything is now possible.

And don't forget: if you add a new object in the Center scene, you have to link it to the two other scenes with Ctrl+Lkey. Once again, a magic « Link them all » button could be useful. ;o)

C.3) Previewing 3D images in real time using the sequencer :

Now that you know how to render an anaglyph image with maximum control over your 3D settings, I'm going to explain a really good trick to preview the « 3D effect » in real time while you are working on your scene. Finding the right distance of an object in 3D is often a really time-consuming task. With anaglyph glasses on your nose and a real time anaglyph preview, checking that distance is incredibly easy. Let's see how:

1 Jump into the Video Editing workspace and press Shift+A >> Scene. Here you can choose between Left and Right. Start by choosing Left and add that clip to the timeline.

2 Select the Left clip and look at its properties on the right of the screen. Scroll down to Scene Preview/Render, check OpenGL Preview and choose Solid in the drop-down menu just below. Now, by moving along the timeline, you can see your animation in real time! That's one of the benefits of Blender 2.5 … one of the best, for me!

3 Scroll down once again in the parameters to check Use Color Balance. Once checked, three wheels appear with colored squares below them. Click on the right one and set the RGB color to 255/0/0.

4 Using the same method, add a new clip for the Right scene on a second track. Once again, check OpenGL Preview and use Color Balance to change its color to cyan (RGB = 0/255/255).

5 Select the two clips then press Shift+A >> Effect Strip >> Add. And here is your anaglyph render. Just wear your Red/Cyan glasses and you'll see the 3D effect … in real time! Since Blender 2.5, it's possible to open a window with only the sequencer monitor. So, you can work in a classic 3D View and see the result in a Sequencer monitor just next to it!


Page 25: blenderart_mag-28_eng

During a production I worked on last month, I used the same technique to generate a Side-by-Side render using a Transform effect strip and some metastrips. I plugged the company's really expensive passive 3D screen in as a second screen (as extended desktop) via the HDMI port at 1920x1080 resolution. On that second screen I placed a Blender window with a Sequencer screen and maximized it (Alt+F11).

I removed every header to obtain a real time 3D HD preview of my scene on a professional 3D screen! Just two annoyances remain: the « + » sign can't be removed, nor can the 3 strips that divide the screen. It can be a little disturbing. If the devs hear me, could we have a totally empty full screen mode? ;o) Thank you in advance, guys!!!

D) How can we improve the Blender 3D workflow :

Everything we have done in this article was done with built-in Blender features. As a conclusion, I'd like to make some humble proposals to any devs who would like to improve Blender's stereoscopic capabilities. Of course, if any developers from the Blender Institute read this article, these proposals are primarily for you, but not only: with the wonderful « Add-On » support, anybody can work and play around to improve these functionalities. So here is a non-exhaustive wish list … but I think one or two of these proposals could be interesting for the community. At least, I hope so. ;o) Some of them have already been discussed in this article.

The possibility to directly create a built-in 3D camera, with the same controls we created in our 3D_Cam_Rig, for instance via Shift+AKey >> 3D Camera. Nearly similar to the camera that can be found in Eyeon Fusion, for instance.

As a consequence, the Render Layers node in compositing could have two outputs, Left Output and Right Output.

These two outputs could be plugged into a new compositing node specially created to directly generate Side-by-Side, Anaglyph, Line-by-Line, Matrix or Top/Bottom.

The Render Layers node could output a quick render taken directly from the OpenGL view (antialiased if possible, and not only forced through the graphics card's FSAA), like the Scene clip in the Sequencer. Using « baked textures », we could very quickly render a stereoscopic view of a virtual set and composite a chroma-keyed human over it (we will probably discuss this in BAM 29 ;o)

It could be really useful to link two clips in the Sequencer. Each modification on one clip (left view) could be reported on the other clip (right view). I know it can already be done using a metastrip, but in some cases, using separated clips is better.

I don't know if it's possible, but the raytree could be computed only once for the two views, because they are nearly the same.

The best feature that could be added to Blender regarding the 3D workflow: anaglyph preview directly in the 3D View, to avoid the trick using the sequencer. We could directly see the « 3D effect » in real time during layout. The BGE already provides this feature.

A real full screen window, without any « + » sign or « separation strips », to send a side-by-side image to a 3DTV plugged in as a second screen via an HDMI port.

Color management and colorspace conversion: RGB >> YUV, X'Y'Z', and many more … ;o)

Fix the issues with frame rate in DCP / 3D MXF reading, as described previously.

Directly render as side-by-side, using for instance a new sequencer effect strip.


Page 26: blenderart_mag-28_eng

And so many more … ;o)

So many things could still be explored, like support for disparity maps to help with rotoscoping tasks. For instance, the right eye's render could be encoded only as a difference from the left eye's render. With this totally lossless process, file size could be reduced by around 40%!

I hope this article was a good introduction to the 3D world and gave you inspiration to do more 3D with Blender 2.5. In the accompanying zip file you'll find a lot of scenes to help you understand how to use 3D_Cam_Rig and how to create good and spectacular 3D content. See ya … in 3D


Page 27: blenderart_mag-28_eng

Introduction

I usually browse the net and admire other artists' works to get inspiration. Not only inspiration for subjects or scenes, but also the enthusiasm you will need to break down a scene and actually pull it off in Blender.

This time I wanted to create a sugar-sweet fluffy effect with bubbles floating in a cloudy atmosphere, so be prepared for some pink!

We will use the particle system, compositor, UV wrapping, some textures and animation. There will also be some useful tips on workflow and tools. Blender 2.52 is still somewhat buggy so you might run into some strange behaviour. Remember to save [CTRL+S] often!

Due to a bug (?!?) the particle system will sometimes not animate correctly. When this happens it can usually be fixed by going to frame 1 and then re-entering the particle system's start frame.

The setup

When I started to use Blender, I tried to use the strict 4-view ISO layout but quickly found it took up too much valuable space for the tools. Instead I usually use a 2-split view, one for modelling and one for the camera. This way I can immediately see if my objects are the right size and in the right place while I model them.

You can have multiple cameras for different test shots and one for the main render shot. You change the default camera by first selecting the desired camera in the outliner, positioning the mouse over the 3D view you want to change, and then pressing [CTRL+Num0]. Another advantage is that you can adjust the camera in your main 3D view and at the same time see through the camera in the other view while you position it.

First delete everything in your scene by hitting [A] to select all, then press [X] and confirm to delete everything. Add a light with [SHIFT+A] and select Lamp>>Sun; name it Main Light. I prefer to have my main light a little bit stronger, so go into the Light window and in the Lamp panel change Energy to 1.16. Now add a camera with [SHIFT+A] and select Camera; name it Main Camera.

In all my default projects I have one sun light and one camera already rigged. I also have both the camera and the light track a target. This way I can move the camera and the light around and always be sure to have my target well lit and in camera view.

To do this we first create a target object with [SHIFT+A] and select Empty. Everything you add to the scene will be located at your 3D cursor, so if your 3D cursor isn't at position 0,0,0 you can easily change that from the Transform panel in the Properties window. Toggle the Properties window with [N] while pointing the mouse over your 3D view. Under the View panel's 3D Cursor setting you can set the position to whatever you like, (0,0,0) in this case.

If your empty ended up in the wrong place, don't panic! In the same window (or in the Object window) you can enter the exact coordinates in the Transform panel for any selected object in your scene. Now make sure your empty is located at 0,0,0 and name it Camera Target in the Object window.

By Max Kielland


I Love My Bubble

3D WORKSHOP: I Love My Bubble

Page 28: blenderart_mag-28_eng

Since everything has now ended up in the same spot, this is an excellent opportunity to exercise the Outliner window. Use the Outliner to easily select objects by their names.

In my scene I placed the Main Camera at X 0, Y -19, Z 0 and the Main Light at X -6, Y -16, Z 15. You can enter these coordinates directly in the Transform panel under Location. There is no point in changing the Rotation, because the Track To constraint we will apply next will override it.

Since we removed the camera before, our little view to the right needs to be set back to Camera view. Hold the mouse over the window and press [Num0] to change to the active camera view.

Select the Camera and go to the Constraints window. Open up Add Constraint and select Track To. As Target select our Camera Target and watch the camera view. Oops, that looks a bit awkward. You need to tell the camera which axis should point to the target and which axis is the up axis.

Set To to -Z and Up to Y. Now your camera should point at the Camera Target empty.

Do the same for the Sun Light. Now you should see a blue dotted constraint line from the light and the camera to the Camera Target.
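What the Track To constraint does can be reduced to simple vector math. This is a sketch (not Blender's implementation): the constraint aims the object's chosen axis (-Z for the camera) along the normalized vector from the object to the target, so wherever you move the camera, that direction is recomputed.

```python
# Sketch: the "look at" direction a Track To constraint aims along.
import math

def look_direction(obj_pos, target_pos):
    """Unit vector pointing from the object toward the target."""
    d = [t - o for o, t in zip(obj_pos, target_pos)]
    length = math.sqrt(sum(c * c for c in d))
    return [c / length for c in d]

camera = (0.0, -19.0, 0.0)  # the Main Camera location used in this scene
target = (0.0, 0.0, 0.0)    # the Camera Target empty at the origin

# The camera's -Z axis is aimed straight down +Y, toward the origin:
assert look_direction(camera, target) == [0.0, 1.0, 0.0]
```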

My work layout looks like this:

I find the Outliner window very useful for quickly selecting an object. I have it filtered on "Visible Layers" so I only see the relevant objects. To the right you see my camera window, and at the bottom the Timeline window so I can quickly move between frames. The small UV window is good for quick access to reference images, UV maps and rendered layers.

My default setup is included for download. Let's get on with this tutorial and create the fluffy clouds…

Up in the clouds

You could create the clouds with Blender's new smoke system at a great cost in CPU time, or you can fake it!

Go to the World window and tick the Paper Sky and Blend Sky boxes. Then head immediately over to the Texture window and select the first texture slot. Press the New button and leave the Type as Clouds (I guess you can see where we are heading here). Leave all the other parameters as they are.

Head back to the World window and set the Horizon Color to pink, the Zenith Color to white and the Ambient Color to black. Now you can see a sort of cloudy image appear in the Preview panel.


Page 29: blenderart_mag-28_eng

Since we have now created our clouds as an environment map, it takes virtually no time at all to process when we render.

Bubble trouble in paradise

We need to create a template to represent the bubbles. Press [SHIFT+A] and select Mesh>>UV Sphere, set it to Smooth in the Tool Shelf (toggle with [T] in the 3D view) and name it Bubble. Move it out of the way where you can easily select it. I put mine at location -10, -15, 5, scaled it to 0.113 on all axes and changed the dimensions to 0.425, 0.25, 0.425. You may need to zoom out to see it; use the mouse wheel to zoom in and out.

I want it to be a pink shiny bubble so we need to work on a new material. Head over to the Material window and press the New button to add a new material and slot for this object; name it Bubble Mat.

In the Diffuse panel set the colour to a nice pink, Intensity to 1.000 and tick the Ramp box.

The ramp allows us to blend in another colour depending on the amount of light hitting the surface. We can give the bubble more depth by using a ramp going from dark pink to bright pink with different alpha values. The ramp already has 2 positions by default, one at each end. At the left position (0) set the colour to a slightly darker pink and Alpha to 0.400. At the second one (1) set the colour to almost white and Alpha to 1.000.
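Under the hood a two-stop ramp is just a linear blend between the stop colours, driven here by how much light hits the surface. A minimal sketch (the RGBA stop values below are stand-ins chosen for exact arithmetic, not the tutorial's exact colours):

```python
# Sketch: evaluating a two-stop RGBA colour ramp by linear interpolation.

def lerp(a, b, t):
    return a + (b - a) * t

def ramp(stop0, stop1, t):
    """Blend two RGBA stops; t is the ramp factor in [0, 1]."""
    return tuple(lerp(c0, c1, t) for c0, c1 in zip(stop0, stop1))

dark_pink = (0.5, 0.25, 0.375, 0.5)   # hypothetical position-0 stop (RGBA)
almost_white = (1.0, 0.75, 1.0, 1.0)  # hypothetical position-1 stop (RGBA)

assert ramp(dark_pink, almost_white, 0.0) == dark_pink
assert ramp(dark_pink, almost_white, 0.5) == (0.75, 0.5, 0.6875, 0.75)
```

Because the alpha channel is interpolated too, dimly lit parts of the bubble end up both darker and more transparent, which is what gives the depth.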

Now we want the highlighting, Specular, to be a bit more blue, also with a more bluish ramp. Go down to the Specular panel and set the colour to a more pink-blue. Tick the Ramp box to bring up the colour ramp.

We do the same here, but go from black to a turquoise colour. Leave the alpha values but change the second colour to turquoise.

A bubble is not a real bubble unless it has some transparency, so just tick the box in the Transparency panel and set Alpha to 0.400. This gives us enough transparency while still being able to see the bubble.

The last thing we will do to add more depth is to have the bubbles receive transparent shadows/light. This will light up the opposite side inside the bubble as well. Go down to the Shadow panel and tick Receive Transparent.


MAKING OF: I Love My Bubble

Page 30: blenderart_mag-28_eng

Now we should have a nice pink bubble.

Bubbles, bubbles, bubbles…

We will create the bubbles with a particle system, and for that we first need an emitter. The emitter object sends out the particles from its vertices, faces or volume. Create a cube with [SHIFT+A], select Mesh>>Cube, and name it Bubbles.

In my scene I placed the Bubbles at X 0, Y 0, Z 0 with scale X 9, Y 9, Z 9. You can enter the Location and Scale directly in the Object window, but we also need to set the Dimensions to 18, 16 and 10. For some reason the Dimensions can only be accessed from the Properties window; bring it up with [N] and make the changes in the Transform panel.

If your box is solid you can toggle between wireframe and solid with [Z].

Since, in the end, I want to animate the bubbles gracefully floating in the clouds, I need to plan how the particles enter the scene. I only want the particles to float in from the sides and from the bottom up. I also don't want them to suddenly appear and disappear in the camera view.

To better understand this, let us take a look at how particles are emitted (generated).

A particle system is a flow of particles over time. This means they will begin to emit at the start frame and stop at the end frame. Between the start and end frame, every particle generated will simply appear on the emitter and disappear when it dies. Did I say die? Yes, each particle also has a lifetime, starting at the frame it is generated and counting forward; when it has been in the scene for its Lifetime number of frames it will just disappear.
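The lifecycle described above can be sketched as a tiny function (not Blender's code, just the rule it follows): a particle born at some frame exists for its lifetime in frames and then disappears.

```python
# Sketch: a particle's visibility over time, as described in the text.

def is_alive(frame, birth, lifetime):
    """True while the particle exists in the scene."""
    return birth <= frame < birth + lifetime

# A particle emitted at frame 10 with a 50-frame lifetime:
assert is_alive(10, birth=10, lifetime=50)      # appears on its birth frame
assert is_alive(59, birth=10, lifetime=50)      # last frame it exists
assert not is_alive(60, birth=10, lifetime=50)  # gone after 50 frames
```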

So we must avoid having particles generated within our camera view, and they must live long enough not to disappear in the camera view during the animation.

Let us first attach a particle system to our newly created emitter, Bubbles.

Go to the Particle window and press the + button toadd a new system.

Leave the Type as Emitter but change the Seed to 11. The seed only tells the randomizer how to initialize the random number generation; I found that 11 generates a nice-looking particle flow.

Let us have a look at the Emission panel. We don't want a forest of bubbles, so change the Amount to 200.

As I mentioned before, we want the bubbles to float into the camera view, so the idea is to let the camera view fit inside the emitter. If we then let the faces of the emitter generate the particles, they will start outside the camera view! To do this set Emit From to Faces and Random.

But we still aren't seeing any particles! This is because we are still on frame 1 of our animation.


Page 31: blenderart_mag-28_eng

If you start/stop the animation with [ALT+A] you will see how the particles start to emit. But they look a bit small, not like bubbles.

To fix this we will actually not render the particles themselves; instead each particle will become an "empty" that guides another object.

Under the Render tab change from Halo to Object and select our Bubble as the Dupli Object. A referenced copy of the dupli object will be placed at each particle instead of the particle itself. Any change to our Bubble will now affect all the bubbles in our particle system. We also don't want to render the emitter itself, so untick the Emitter box as well.

As you can see, they are still too small to be taken for bubbles. To get more variation in bubble size, go to the Physics tab and change Size to 2 and Random Size to 0.5.

But wait a minute: they all move in the wrong direction; we wanted them inside the camera view! Let us take a look at the Velocity panel. Here we can control the initial speed and direction of our particles.

Set Emitter Geometry Normal to -0.100 to have them float slowly.

A positive value sends the particles in the face's normal direction, so a negative value sends them in the opposite direction of the face's normal. I guess flipping the emitter's normals would have done the same trick, but let us keep things simple and not mess with Blender's way of defining normal directions.
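As a sketch of the rule (not Blender's internals), the initial velocity contributed by the Normal setting is just the face normal scaled by that factor, so a negative factor pushes the particle inward:

```python
# Sketch: initial particle velocity from the Normal factor.

def initial_velocity(face_normal, normal_factor):
    """Face normal scaled by the Normal setting."""
    return tuple(c * normal_factor for c in face_normal)

outward = (0.0, -1.0, 0.0)           # a hypothetical side face, pointing out
v = initial_velocity(outward, -0.1)  # the tutorial's Normal = -0.100

# Slow push opposite to the normal, i.e. into the emitter box:
assert v == (0.0, 0.1, 0.0)
```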

Now the particles are moving inside the emitter, but they aren't moving slowly; they are falling down with increasing speed… Time to get physical.

This has to do with gravity (hence the Newtonian system). We need to change the gravity to have them float.

Go to the Scene window and in the Gravity tab untick the Gravity box. This gives us zero gravity, just like in outer space. Now the small initial push from the emitter's normals will never be counteracted, and the bubbles will float forever at the same speed in the normal's direction.

Now they aren’t falling down but instead they move soslow they will actually not reach the camera before theanimation is done. Is there not a way to force the bub-bles to flow before the animation starts? Yes there is!

Go back to the Particle window and the Emission tab again; look at Start, End and Lifetime. Regardless of which frames our animation starts and stops at, our particle system can start and stop at other frames.

We need the particle system to start before our actual animation. This way it will already have generated enough particles to fill the camera view when the actual animation starts. Set Start to -2000 to have the particle system start 2000 frames before the actual animation renders.

This creates another unwanted side effect: because the particles only live for 50 frames and then die, they will still not reach the camera view. Change Lifetime to 5000 to ensure that they live through the whole animation.
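The arithmetic behind those two numbers is worth checking explicitly: a particle must survive from its birth frame until at least the last rendered frame. A minimal sketch using the values from the text:

```python
# Sketch: checking that Start and Lifetime cover the whole animation.

def survives_until(birth, lifetime, last_frame):
    """True if a particle born at `birth` still exists at `last_frame`."""
    return birth + lifetime >= last_frame

START, LIFETIME, LAST = -2000, 5000, 250   # the tutorial's settings

# Even the very first particle (born at frame -2000) outlives frame 250:
assert survives_until(START, LIFETIME, LAST)
# With the default 50-frame lifetime it would have died long before:
assert not survives_until(START, 50, LAST)
```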

Still, bubbles are appearing and disappearing in the camera view, and we have bubbles coming from above, floating down. This is because the emitter's back, front and top faces are emitting particles straight into the camera view. Select the emitter box and go into Edit mode with [TAB]. Select the top, front and back faces and delete them with [X], choosing Faces.


Page 32: blenderart_mag-28_eng

Now the camera should be looking into a corridor without a roof.

Stay focused or not…

If we render the scene now we have a lot of bubbles, but we are still missing the soft, cute feeling. In real-world photography we can create the effect of a sharp subject with a blurry background and foreground. This "softens" the image quite a lot and makes it fluffier. To do this we need the compositor, so head over to the Compositor (aka the Node editor).

Without turning this into a compositing tutorial, we can briefly say that we stream information from the rendering process through a number of black boxes (hereafter called nodes) that add or subtract data/effects from the rendered scene.

Start by ticking the Use Nodes box and Blender will create two basic nodes for you. The left node, Render Layers, takes data from your scene and streams the information into the compositor through different channels. As you can see, the Image channel is already connected to the Composite node's Image input. The Composite node is your end station; it is at this point that the final render is produced (your render window). All channels on the left side of a node are inputs, and those on the right side are outputs.
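A node graph like this is simply a dataflow: each node is a function from inputs to outputs, and connecting channels is function composition. A minimal sketch with stand-in nodes and made-up pixel data (the "blur" here is a placeholder, not the real Defocus algorithm):

```python
# Sketch: the compositor as a chain of functions, evaluated left to right.

def render_layers():
    return {"Image": [0.2, 0.5, 0.9]}   # stand-in pixel data from the scene

def blur_node(image):
    return {"Image": [round(p * 0.5, 2) for p in image]}   # stand-in effect

def composite(image):
    return image                         # the end station: the final output

scene = render_layers()
filtered = blur_node(scene["Image"])     # Image output -> Image input
final = composite(filtered["Image"])
assert final == [0.1, 0.25, 0.45]
```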

With this setup nothing extraordinary will happen, so we will add a node called Defocus. Hit [SHIFT+A] and choose Filter>>Defocus. Connect the Render Layers' Image output to the Defocus node's Image input, and the Defocus Image output (right side) to the Composite node's Image input.

In the Defocus node set Bokeh Type to Circular, fStop to 13.000 and Threshold to 0.500.

Well, we still don’t have that blurry effect and that isbecause the Defocus node has no information aboutwhere the objects are located in space and where thefocus point is.

Head back to your 3D view and select the Main Camera.

In the Object Data window's Display tab, tick the Limits box to see the camera's various limits in the editor.

I also prefer to tick the Title Safe and Passepartout boxes as well.

In the Lens tab, the Angle is set to 35.000 by default, representing a 35mm lens. This gives some distortion to the perspective, just as a real camera does. Doctors and scientists have calculated the eye to be approximately a 48mm lens. So, to get a little closer to reality, set the Angle to 48.000 millimetres.
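The relationship between a lens's focal length and its field of view can be sketched with basic trigonometry. This assumes the classic 32mm film width that Blender traditionally used for its camera angle conversion (an assumption, not stated in the article): a longer lens gives a narrower view and hence less perspective distortion.

```python
# Sketch: horizontal field of view from focal length, assuming a 32mm
# film/sensor width (the value Blender historically used).
import math

def fov_degrees(focal_mm, sensor_mm=32.0):
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

# 35mm is noticeably wider than the eye-like 48mm lens:
assert round(fov_degrees(35.0), 1) == 49.1
assert round(fov_degrees(48.0), 1) == 36.9
```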

To better visualize the next step, switch the 3D view over to a top view [Num7], where you can see the whole emitter box and the camera.

Back in the Lens tab, go down to Depth of Field Distance. If you left-click, hold and drag the mouse you will see a little line move along your camera track. This is your focal point! Everything at this line will be crisp and clear, in focus. Set this to 5.

But we still need to transfer this information over to the compositor. Go to the Render window and open up the Layers tab. Under Passes you will find a list of information that can be passed along to the compositor. Tick the Z box to pass the depth of all objects.

If we now switch over to the compositor again, you will notice that the Render Layers node has a new output: Z. Connect it to the Defocus node's Z input and make sure Use Z-Buffer is ticked.


Page 33: blenderart_mag-28_eng

If you render now you will have that blurry, soft effect.

I love my bubble

To create the "I love my bubble" we need a separate sphere. This allows us to animate it independently of the particle system. Hit [SHIFT+A], select UV Sphere, smooth it (bring up the Tool Shelf with [T] and hit Smooth) and name it ILoveMyBubble. Since this bubble will be right in focus we need to increase the number of faces to get a round silhouette. If we were to subdivide the sphere we would only subdivide the individual faces; the shape wouldn't be any rounder. If we instead apply the Subsurf modifier, the whole shape is recalculated and becomes genuinely round. Another advantage is that we keep the changes in the modifier stack and can adjust them at any time. So head over to the Modifier window, open up Add Modifier and select Subdivision Surface. The default values are fine for us.

Now we need a material with the text and the heart. I made a PNG in Photoshop CS4, but Photoshop does not save the alpha layer in a way that Blender likes, so it didn't work. I would recommend GIMP instead; make sure you untick all the PNG boxes when you save the image. You have to save it as a PNG to get the alpha information included. Go to the Material window and, with your new bubble selected, hit the material list button and select our previously created Bubble Mat. Notice the little button labelled 2 beside the material name.

This indicates how many references this material has. We have 2 because the other reference comes from the particle system, which uses the same material. This also means that if we were to make any change to this material, like adding a decal, it would change the look of all the bubbles in the particle system as well.

We need to put our decal in a new material slot, so add a new slot by pressing the + button (beside the material slot list). A new slot is created and the previously selected material is now copied into it.

Notice how the reference counter went up to 3; this indicates that we didn't really get a copy, but yet one more reference to the original material.

To make this material unique we need to unlink it by pressing the button with the number 3 on it. Now it has a new name, and the reference counter disappeared because only one object uses this material now. Rename it to ILoveMyBubble Mat.
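The reference-versus-copy behaviour described above is the same as shared references in any programming language. A minimal sketch (the dict stands in for a material datablock; the field names are made up):

```python
# Sketch: shared references vs. a real copy, which is what the material
# user-count button is about.
import copy

bubble_mat = {"name": "Bubble Mat", "diffuse": "pink"}

slot_a = bubble_mat   # another reference, not a copy
slot_b = bubble_mat   # three names, one material

slot_a["diffuse"] = "red"   # editing through any reference changes them all
assert bubble_mat["diffuse"] == "red"

unique = copy.deepcopy(bubble_mat)   # "unlinking": a single-user copy
unique["name"] = "ILoveMyBubble Mat"
unique["diffuse"] = "pink"
assert bubble_mat["diffuse"] == "red"   # the original is untouched
```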

Go to the Texture window, select the next available texture slot and press the New button; rename it to ILoveMyBubble Img. Change the Type to Image or Movie, go down to the Image panel and load the file ILoveMyBubble.png.


Page 34: blenderart_mag-28_eng

Tick the Anti-alias box to have the image anti-aliased when it stretches over the surface.

In the Preview panel I usually set the option to see both the texture and what it will look like when used by the material. For some reason (a bug?) the alpha is all black so you can't see the text. Don't be alarmed; as long as it looks correct as a material, it's okay.

By default the image will be repeated in the X and Y directions all over the surface. We only want this decal to appear once on the bubble, so go down to the Image Mapping panel and change the Extension from Repeat to Clip. The Extension defines what Blender should do when it reaches the edges of the image, and in this case we just want it to stop; it will be just like an ordinary real-world sticker.

If you have the sphere selected in the preview pane you will notice that our sticker is all distorted, though it looks fine on a cube or plane. This is because we use the auto-generated UV coordinates for our bubble. But changing this to Sphere is not right either: the sticker gets distorted around the poles. In order to apply the sticker correctly we need to create a new UV map that tells the sticker how it should be stretched over the surface.

First we need to create a UV map slot. Go to the Object Data window and open up the UV Texture panel. Press the + button and name the slot ILoveMyBubble UV.

Wrap that bubble

Now we need to populate the UV map with coordinates, and luckily for us Blender has it all.

Head over to the UV editor with our bubble selected. Make sure you look straight at the bubble by pressing [Num1]. Enter Edit mode with [TAB] and deselect all vertices with [A]. Hit [B] and draw a rectangle around the faces where you want your label to show.

You can use any method you like to select the faces you want to display the texture on.

With your faces selected, we are going to unwrap the area and create the UV coordinates. Press [U] to bring up the UV menu. It is important that you are in Edit mode; otherwise you will get a different menu. Select Project From View (Bounds).

Now you will see your selected faces laid out over your texture. As you can see, the mesh is not straight or the same size as the bubble's mesh. Don't be alarmed! The UV mesh is very different from the object's mesh. The UV mesh has a "face" mapped to each selected face in the object's mesh, but the UV face size does not affect the object's mesh size.

If two corresponding faces are of the same shape and size, the texture filling that face is undistorted. If the UV face is bigger it will cover more texture for the same physical face, resulting in a compressed texture. If it is smaller you get the opposite: a stretched texture instead.
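The rule above boils down to the ratio between a UV face's area and its mesh face's area. A minimal sketch (the areas are hypothetical):

```python
# Sketch: texture stretch as the UV-area to face-area ratio.
# Ratio 1 means undistorted; above 1 the texture is compressed onto the
# face, below 1 it is stretched.

def stretch_ratio(uv_area, face_area):
    return uv_area / face_area

assert stretch_ratio(1.0, 1.0) == 1.0   # same size: undistorted
assert stretch_ratio(2.0, 1.0) > 1.0    # bigger UV face: compressed texture
assert stretch_ratio(0.5, 1.0) < 1.0    # smaller UV face: stretched texture
```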


Page 35: blenderart_mag-28_eng

There is a nifty function that shows how distorted the texture will be. To turn it on, bring up the Properties panel [N] in the UV window and, under Display, tick the Stretch box.

If you select one vertex and move it around, you can see how the colour changes depending on the amount of distortion to the texture. Turn off the Stretch box so we can work with the texture.

At the bottom you can select our previously loaded ILoveMyBubble.png image from the image list. Zoom out if you need to, so you can see the whole texture.

The unwrap tries to fit the UV mesh inside the image, but in this case we need the image to fit inside the UV mesh, because our text goes all the way to the edges. Scale up the UV mesh with the [S] key. If you need to, you can move the mesh to centre it over the image with [G].

That’s it folks, now we have defined the coordinates forour UV map. Head back to the 3D view and go back tothe Texture window. In the Mapping panel change Coor-dinates to UV. Select our ILoveMyBubble UV in the Layerfield and use projection flat.

Now there is only one final step left before our texture is done. Head down to the Influence panel and tick the Color and Hardness boxes. Because the material is transparent, our image will be very dim and we need to boost it to be more visible. Change Color to 6.000 instead of the default 1.000.

This panel tells Blender how the texture interacts with the material itself. Color obviously transfers the texture's colours (in this case the image), and Hardness controls the material's Hardness setting depending on the picture's colours.

Go back to the Material window and select the new ILoveMyBubble Mat. As you can see, we have 2 materials defined for this object, but how do we tell the object where to use this second material? You should still be in Edit mode with your faces selected from the UV editor. This is perfect, because it is precisely these faces we want to use our new material on. Under your material list (slots) press the Assign button to assign this material to the selected faces.

Move it, move it…

So far we have set up the whole scene and created all the materials, the particle system and the UV maps. Now it's time to make those bubbles float!

First I want to set up the animation format and size, so go to the Render window's Dimensions panel. To speed up the process I used a Resolution of 640 x 360 with an Aspect Ratio of 1 x 1 at a Frame Rate of 25 fps.

Because we have all those transparent bubbles, I want maximum anti-aliasing to smooth out all the edges. Go down to the Anti-Aliasing panel and tick the Anti-Aliasing and Full Sample boxes. Set Anti-Aliasing to 16. The last step is to choose the output format: go down to the Output panel and select the AVI Codec (or your favourite format), then select a folder for your AVI file.

Go to the Animation window.

The default setup of 250 frames will do fine for this tutorial; at 25 fps it will generate a 10-second animation. I want the I Love My Bubble to come into focus, show the decal, and then drift out again.


Page 36: blenderart_mag-28_eng

The easiest way is to start with the most important thing first: positioning the bubble in focus. I want this to happen in the middle of the animation, so I set the frame to 128 and move my bubble into position at -0.017, -13.824, 0.014 with rotation 0,0,0.

With the right frame selected and the bubble in position, we now insert a keyframe with [I] and select LocRot from the menu. This saves the bubble's Location and Rotation values for this frame.

Now we only need to set a start and an end position. Go to frame 1 and move the bubble to location -3.302, -15.991, -2.065 and rotation -90,-90,0. Insert a keyframe with [I] and select LocRot again. Go to the last frame, 250, and move the bubble to location 4.238, -8.606, -2.650 and rotation 90,90,0. Hit [I] again and select LocRot. Done!
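Between two keyframes, Blender interpolates the channel values over time. A linear sketch of that idea follows (Blender's default interpolation is actually a Bezier ease, so treat this as a simplification), using the frame-1 and frame-128 locations from the text:

```python
# Sketch: linear interpolation of a location channel between two keyframes.

def interpolate(frame, f0, v0, f1, v1):
    """Value at `frame`, linearly blended between keyframes at f0 and f1."""
    t = (frame - f0) / (f1 - f0)
    return tuple(a + (b - a) * t for a, b in zip(v0, v1))

start = (-3.302, -15.991, -2.065)  # keyframe at frame 1
mid = (-0.017, -13.824, 0.014)     # keyframe at frame 128

assert interpolate(1, 1, start, 128, mid) == start   # exact at a keyframe

# Partway through, the bubble is on its way into focus:
pos = interpolate(64, 1, start, 128, mid)
assert start[0] < pos[0] < mid[0]
```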

The final picture for this article is at frame 99, but feel free to render the whole animation…

Where to go from here? Try, for example, different particle systems like Boids, add collision deflection, or play around with new particle shapes. Only your own imagination is the limit…

I hope you have found this tutorial educational.


Page 37: blenderart_mag-28_eng

The new version of a program is always superior to the old one. But the new Blender 2.5 really goes beyond what we expected.

We started a project before the launch of Blender's new version 2.5 and felt the difference in usability between the versions. The new look is most attractive at the interface level; all the program's tools are quicker and more accessible. All these efforts have, of course, made the new Blender 2.5 easier for new users.

For those who are using Blender 2.49, the difference, for example in modeling, is that all the tools are closer to hand.

The speed is incredible. Note that, as we said before, we are working on an animation project started in the old version, Blender 2.49, that we are finalizing in the new version, Blender 2.5. In this work there is a scene where the camera makes a 360° turn around the top of a building with the main character at the center. All this work was set up in Blender 2.49, but animated and rendered in Blender 2.5.

The rendering time in Blender 2.49 was about 5 minutes. In Blender 2.5 the time was a breathtaking 35 seconds.

The agility of this new version also shows in new features like the menu search window, which lets you find functions by name.

To activate this feature, all you have to do is press the space bar in the 3D View.


Up to Speed

BISPODEJESUS BROS. Nilson Juba & Jeff Israel

We are two brothers from the southwest of Brazil, Nilson Juba and Jeff Israel.

Blog: http://bispodejesusbros.blogspot.com/

Twitter: http://twitter.com/bispodejesusb

Our contact: [email protected]

MAKING OF: Up to Speed

Page 38: blenderart_mag-28_eng

It never takes the community long to produce needed documentation for new features in Blender. So it is no surprise that, even though Blender 2.5 is still in beta, there is already a rather nice number of tutorials available to help you get the most out of the newest incarnation of Blender.

BlenderCookie.com has been using the Blender 2.5 series for their video tutorials for some time now and has even started covering new features found only in the latest test builds. In addition to full-length video tutorials using 2.5, they have also been releasing shorter videos that cover specific tools and options in their Tips series. The BlenderCookie 2010 CG Cookie Blender Training Series is now live, with Part 1 and the modeling section of Part 2 available for purchase and download.

BlenderGuru has been releasing tutorials done in 2.5, both in video and text format, as well as a new PDF ebook, The WOW Factor. He has started a new series of tutorials, each month focusing on a specific theme. I encourage you to check out the Weather series; his snow is amazing and his lightning tutorial is electrifying.

Kernon Dillon (BlenderNewbies) has not only been producing video tutorials for 2.5, he is also busily working on a DVD full of brand-new content for 2.5. And of course he has started a new series of modeling exercises that focus on technique and not just the end product, which are very educational and informative.

MAKING OF: Blender 2.5 Educational Links


Page 39: blenderart_mag-28_eng

In the three years that I've been working with Blender (since 2.43), I've seen about seven different upgrades to the software. Some of them were good (well, I guess ALL of them are good), but some I liked better than others.

For example, when the updated layout came out in, what, version 2.46 (?), I had a lot of difficulty letting go of the previous version, but eventually came to grips with it. As the new versions kept coming out, I came to look forward to these changes, especially when 2.48 was released along with the Big Buck Bunny project.

The new hair particle system blew me away, and I immediately jumped in and started playing with the hair and fur (I even made myself a nice gorilla to test it out). The "Tree from Curves" script was also a handy feature, as it made tree creation relatively easy, if you knew what you were doing.

With the onset of 2.5, I was still hesitant to learn the new layout, as it was a huge change for me, and I'd never been one to edit the interface or use the "dark" setting. But since I was doing tutorials for others, it made sense for me to upgrade my skills and learn the layout so I could help others do the same. It has been a challenge doing tutorials for software that's still in development, but the interface has now become second nature to me, and I have no issues with it now (other than that it's still being developed and I'm antsy to see the final version).

The key features being brought in now will make Blender rival the “industry standards” even more so than it has in the past; previously, from what I could tell, the main lacking element was volumetrics (y’know, clouds, fire, smoke, etc.), but those now come standard, even in the beta version.

In addition, the raytracing seems to be quite a bit faster; I noticed a few days ago when I was goofing around with it that the raytraced shadows rendered much faster than the buffered shadows, and that’s not usually the case at all.

Maybe it was just the way I had the scene set up, but I think those developers know exactly what they’re doing.

So with these new upgrades, plus the layout that will make Max and Maya users more comfortable, I’m really hoping to see more mainstream projects make use of Blender. I’ve been really impressed with what I’ve seen of Sintel, especially in the animation area, and am really excited to see the final product. Blender continues to amaze me, and the only thing I’d change about my experience with it is that I wish I would’ve started using it sooner.

It may be a few years out, but I can’t imagine where we’ll be at by version 3.0! Maybe then we’ll have the “create awesome game” or “create awesome character” buttons by default.


By David Ward

MAKING OF: Random Thoughts on Blender 2.5

Page 40: blenderart_mag-28_eng

HOTWIRE: Blender 2.49 Scripting

Review by Satish Goda

"The wait for a comprehensive book on Python scripting in Blender 2.49 is over." I said this in my mind when I got to review the book titled "Blender 2.49 Scripting" by Michel Anders (published by [PACKT] Publishing). I only wish this book had come out earlier, but I guess it's better late than never. What follows is my review of the book, arranged into sections.

Introduction

Blender is a premier open source tool for 3d content creation, and its tool set and work flow have helped many an artist turn his creativity into reality. Apart from its built-in tools, having the ability to extend the tool set and customize the software in a pipeline with Python was a bonus. The community has risen to the occasion and developed quite a number of tools, ranging from simple to complex, using the Blender Python API.

There was no dearth of good tutorials on Python scripting in Blender and on using the API to create better animations and such. The books from the Blender Foundation really helped bridge the gap between the software and novice users, but the missing link was a book on Python.

The wait is over. Enter "Blender 2.49 Scripting."

What does this book assume?

This book assumes that the reader is already familiar with using Blender and understands its data system. The basic concepts needed to script using Python are reviewed in each chapter. For example, the Object and DataBlock system, IPOs and their uses, Poses, etc., are reviewed so that one has a good theoretical grounding before jumping into scripting.

The first chapter sets the groundwork by helping with installing Python. It also explains how Python scripts are automatically integrated into the menu and help systems, using simple examples.
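In Blender 2.49, that menu integration hinges on a small header: a `#!BPY` line plus a docstring of registration fields that Blender parses when it scans the scripts folder. A minimal sketch is below; the script name, menu text, and the fallback branch are invented for illustration, not taken from the book.

```python
#!BPY
"""
Name: 'Hello BlenderArt'
Blender: 249
Group: 'Object'
Tooltip: 'A minimal registration example'
"""
# Blender 2.49 reads the docstring above when scanning the scripts
# directory: 'Group' decides which Scripts menu the entry lands in,
# and 'Tooltip' becomes its hover text.
try:
    import Blender
    in_blender = True
    # Pop up a simple confirmation when run from the Scripts menu.
    Blender.Draw.PupMenu("Hello from the Scripts menu!%t|OK")
except ImportError:
    # Outside Blender the module does not exist; guard the import
    # so this sketch stays runnable anywhere.
    in_blender = False

print("registered group: Object; running inside Blender:", in_blender)
```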

Learn By Example

One of the big strengths of this book is the breadth of programming/scripting examples across various aspects of Blender's toolset. From simple examples to intermediate and complex ones, the author lays down the steps of the algorithm in simple English and then goes on to build the Python code.

Especially commendable are the scripts/drivers for animating an IC engine. I have learned a lot of new techniques about using pydrivers and designing custom constraints.

The author makes use of Python library modules (internal and external) to create some very interesting scripts. For example, using Python's wave module to animate meshes using shape keys was a very good example of creating complex systems using existing tools.
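The stdlib half of that idea can be sketched in a few lines: read audio with the `wave` module and reduce it to one normalized value per animation frame, which is the kind of number a shape-key slider could consume. The Blender-side wiring is specific to the 2.49 API and omitted here; the sample rate, frequency, and frame rate below are invented for the demo, and the WAV is synthesized in memory so the sketch is self-contained.

```python
import io
import math
import struct
import wave

RATE = 8000   # audio samples per second (demo value)
FPS = 25      # animation frames per second (demo value)

# Synthesize one second of a 440 Hz sine wave in memory; the book's
# example would read a real .wav file instead.
samples = [int(32767 * math.sin(2 * math.pi * 440 * i / RATE))
           for i in range(RATE)]
buf = io.BytesIO()
writer = wave.open(buf, "wb")
writer.setnchannels(1)
writer.setsampwidth(2)            # 16-bit mono
writer.setframerate(RATE)
writer.writeframes(struct.pack("<%dh" % len(samples), *samples))
writer.close()

# Read it back and reduce the audio to one normalized peak value per
# animation frame; each value could then drive a shape key.
buf.seek(0)
reader = wave.open(buf, "rb")
raw = reader.readframes(reader.getnframes())
reader.close()
data = struct.unpack("<%dh" % (len(raw) // 2), raw)

step = RATE // FPS                # audio samples per video frame
amplitudes = [max(abs(s) for s in data[i:i + step]) / 32767.0
              for i in range(0, len(data), step)]
print(len(amplitudes))            # one shape-key value per frame
```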

Render management has also received good coverage. The examples for stitching images from various cameras were excellent, and using the Python Imaging Library is a very good example of system integration to get a job done well!

The best part for me was understanding how to extend Blender's built-in script editor by writing plugins. On top of that, the example on integrating the editor with SVN is simply amazing.

Support Files - Thank You!

Support files (blends, py scripts) provided with the book are indispensable and go hand in hand with reading the book.

Some of the chapters deal with writing complex tools, for which the author provides a nice walkthrough of the important pieces of the code. The full source code is provided, with instructions on usage in Blender.


Page 41: blenderart_mag-28_eng


Also, the references chapter is a nice addition to the book. Links to Wikipedia pages are provided that cover the theoretical details of some of the example scripts. This was really helpful in illustrating the importance of research before implementing an idea.

What Could Have Been Better?

I believe that this book would have satisfied a much wider audience if there were very simple scripting examples at the beginning of every chapter. The chapter on programming custom mesh objects in Blender would have especially benefited.

Also, when the code for complex scripts is explained, the paragraphs could have been broken down for better readability.

More examples on adding OpenGL overlays to the 3D View would have been useful. I believe that the ability to do OpenGL programming in Blender is a really awesome feature, and good examples of how to achieve this are few and far between.

In Summary

In summary, Blender 2.49 Scripting is a great technical and programming book for anyone interested in learning about the process of designing and implementing Python scripts for Blender.


Page 42: blenderart_mag-28_eng

Introduction

Corefarm.com and corefarm.org are rendering farms dedicated to high-resolution still scenes.

More and more Blender users have to render complicated scenes, but what a time-consuming task! Here we introduce not one but two rendering farms for Blender based on the Yafaray engine: corefarm.org, a community-based grid where volunteers share their computers in exchange for credits, and corefarm.com, a professional solution for the most demanding users.

Have you been stuck with a heavy rendering? Then you know this torment: having to wait dozens or even hundreds of hours before getting the fruit of your labor, just to realize that a light is not set properly. You are not alone in this boat: architects, designers, and 3d enthusiasts all over the world have faced these difficulties at some point, and everyone knows that rendering a 10000 x 10000 px poster is not an easy task.

A cluster is a group of connected computers dedicated to high-performance computing.

Setting up a local infrastructure in your office to cope with your rendering needs is not easy: building a cluster (see frame) requires a disproportionate investment compared to the average needs of professional modelers. The standard way to overcome this barrier is to pool resources, and this is precisely what renderfarms are about. Indeed, rendering farms are the easiest way to relieve your computers and to get your rendering done in impressive times. The modelling process is unchanged: you still use Blender locally, and you still preview your work at low resolution locally.

The novelty is that when you want to render your job at the final resolution, you simply upload your files to a remote server and start working on another project. The scene is split in the corefarm and each part of the scene is sent to a server; then the results are merged and you get a notification telling you that your image is ready - and you'll be surprised how fast you'll get it!
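The splitting step above can be pictured with a small sketch. Corefarm's actual scheduling scheme is not documented here; this is only a hypothetical illustration of how a farm can cut a large still image into independent regions, render each on a different machine, and stitch the results back together.

```python
# Cut a width x height image into independent rectangular tiles,
# one per worker; edge tiles are clipped to stay inside the image.
def split_into_tiles(width, height, tile):
    """Yield (x, y, w, h) regions that exactly cover the image."""
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            yield (x, y, min(tile, width - x), min(tile, height - y))

tiles = list(split_into_tiles(10000, 10000, 2500))
print(len(tiles))  # 16 independent jobs for the 10000 x 10000 poster
```

Because each tile depends only on the scene description, the jobs can run in parallel with no communication until the final merge.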

What is a Renderfarm?

A renderfarm is a cluster dedicated to computer-generated imagery. Although rendering farms were only set up for proprietary engines in past years, there is now a solution for Blender users based on the Yafaray engine: corefarm. Actually, there are two flavors of this farm, a collaborative edition and a professional one.

Corefarm.org is more an exchange place for CPU power than a standard farm: when your computer is idle, you share it with other 3d enthusiasts, and when you need a burst of power to render your own scene, other users will share their computers with you!

by William Le Ferrand


3D WORKSHOP: High resolution rendering at the speed of light

Page 43: blenderart_mag-28_eng

A credit mechanism is here to monitor precisely who lends what amount of power to whom. An important constraint with corefarm.org is that you have to participate in your own rendering - and have a positive credit balance! Most of the corefarm.org code is open source, and contributors help to continuously improve the service.

Corefarm.com is a standard rendering farm for Yafaray (v0.1.2): from the dedicated uploader you only have to select the XML file exported from Blender, select precisely what amount of power you need (25 GHz? 100 GHz? 500 GHz?), and there it goes! Corefarm.com is also based on a credit mechanism: one credit gives you access to a 1 GHz computer for one hour for your job.
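Under that pricing rule, the cost of a job is just power times duration. A hypothetical back-of-the-envelope helper (this is not corefarm's actual API, only an illustration of the stated rule):

```python
# One credit buys one hour on a 1 GHz machine, so cost scales
# linearly with both the requested power and the job duration.
def credits_needed(ghz, hours):
    return ghz * hours

print(credits_needed(100, 2))  # a 100 GHz burst for 2 hours costs 200 credits
```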

How to render on corefarm.com?

1 Create an account on www.corefarm.com.

2 Download and install the corefarm uploader.

3 From Blender, using the Yafaray panel, export your work to an XML file.

4 From the corefarm uploader, select the XML file and the power you want to request, and upload.

5 We'll send you an email when the job is over!

Performance is here: impressive scenes are rendered daily on the corefarms. Want to give it a try? Setting up a render is really easy (see frame), and you pay only for what you use. It is also pretty cheap! As modeled scenes get more and more complex, as rendering engines get more and more realistic, and as resolutions increase, render farms will take on more and more significance. They are the way for you, the 3d artists, to get your ideas sublimated into colorful pictures, as fast as imagination flies.

You can get more information about the corefarms on

http://www.corefarm.org

http://www.corefarm.com

http://www.yafaray.org.

To keep updated, follow us on Twitter and/or Facebook. Happy rendering!



William Le Ferrand

We are two brothers from the southwest of Brazil, Nilson Juba and Jeff Israel.

Our contact: [email protected]

Page 44: blenderart_mag-28_eng

GALLERIA: Alone - by Will Davis


Page 45: blenderart_mag-28_eng

GALLERIA: Dikarya - by Will Davis


Page 46: blenderart_mag-28_eng

GALLERIA: KT - by Daniel D. Brown


Page 47: blenderart_mag-28_eng

GALLERIA: Mobile - by David Jochems


Page 48: blenderart_mag-28_eng

1. We accept the following: Tutorials explaining new Blender features, 3d concepts, techniques, or articles based on the current theme of the magazine.

Reports on useful Blender events throughout the world.
Cartoons related to the Blender world.

2. Send submissions to [email protected]. Send us a notification of what you want to write and we can follow up from there. (Some guidelines you must follow)

Images are preferred in PNG, but good quality JPG can also do. Images should be separate from the text document.

Make sure that screenshots are clear and readable; renders should be at least 800px, but not more than 1600px.

Use sequential naming of images, like image 001.png, etc.
Text should be in ODT, DOC, TXT, or HTML.
Archive them using 7zip or RAR, or (less preferably) zip.

3. Please include the following in your email: Name: This can be your full name or blenderartist avatar. Photograph: As PNG, with a maximum width of 256px. (Only if submitting an article for the first time)

About yourself: Max 25 words.
Website: (optional)

Note: All approved submissions may be placed in the final issue or a subsequent issue if deemed fit. All submissions will be cropped/modified if necessary. For more details see the blenderart website.


Want to write for BlenderArt Magazine?

Here is how!

Page 49: blenderart_mag-28_eng

Upcoming Issue ‘Theme’

Issue 29

Disclaimer


blenderart.org does not take any responsibility, expressed or implied, for the material, its nature, or the accuracy of the information published in this PDF magazine. All the materials presented in this PDF magazine have been produced with the expressed permission of their respective authors/owners. blenderart.org and the contributors disclaim all warranties, expressed or implied, including, but not limited to, implied warranties of merchantability or fitness for a particular purpose. All images and materials present in this document are printed/re-printed with expressed permission from the authors/owners.

This PDF magazine is archived and available from the blenderart.org website. The blenderart magazine is made available under the Creative Commons ‘Attribution-NoDerivs 2.5’ license.

COPYRIGHT © 2005-2009 ‘BlenderArt Magazine’, ‘blenderart’ and the BlenderArt logo are copyright of Gaurav Nawani. ‘Izzy’ and ‘Izzy logo’ are copyright Sandra Gilbert. All products and company names featured in the publication are trademarks or registered trademarks of their respective owners.

"Industrial Revolution" Steam Punk

Industrial Machines: big and small

Factories and Industrial landscapes

Pipes, Gears & Gadgets

Grungy Materials suitable for industrial environments and objects