
JOHN ZEIMBEKIS

Digital Pictures, Sampling, and Vagueness: The Ontology of Digital Pictures

Digital pictures appear to challenge Nelson Goodman's theory that pictures are autographic, that is, that two pictures cannot be type identical representations. If the challenge turns out to be successful, it should be possible for digital pictures to be phenomenal replicas of one another, and digital pictures may be defined as types. Any defense of the value of certain tokens over others (if applicable to such pictures) would have to restrict itself to pointing out differences in the historical properties of distinct tokens, since each copy would token exactly the same representational properties. Pictures would thus be split into two categories, autographic and allographic.1

Yet, granting digital pictures allographic status would have an unexpected consequence. It sounds reasonable to hold that digital pictures are allographic, whereas agentive or film-based pictures are autographic. But if digital pictures are replicable (and knowably so), picture replication will no longer be a logically and mathematically grounded impossibility, to borrow Goodman's phrase.2 In that case, the lack of technologies for replicating agentive and film-based pictures could be contingent and defeasible; so on what grounds would we maintain that such pictures are nonreplicable, or autographic? As Gregory Currie has pointed out, the fact that we think of paintings as singular does not mean that they are singular.3 It may turn out that all pictures form a single, allographic representational kind, in a way we could not imagine was possible before recent digital imaging technology was developed.

This article argues both that digital pictures are allographic types and that what makes them allographic also makes agentive and film-based pictures replicable. Showing that digital pictures are types is less straightforward than at first appears because digital imaging technology does not convert pictures into notations. Instead, it manipulates analog quantities by using sampling processes that can suffice to keep pictures type identical. But this complication has a positive spin-off: if digital pictures can be types without being notational characters, then other pictures, which are not notations either, can also be types for the same reasons. The article is organized as follows. I begin by describing an atomistic intuition about digital pictures (Section I), which initially suggests that they are notational. On closer examination (Section II), it turns out that the identity of digital pictures runs into problems of vagueness that make them violate Goodman's criteria for notational systems. This brings us before a dilemma: either, contrary to intuition, two digital pictures cannot be type identical, or the theory that notationality is required for allography is false. In Section III, I defend the second horn of the dilemma: notationality is not necessary for allography because sampling can also provide sufficient conditions for pictures to be type identical. Briefly, digital display devices are designed to instantiate only limited ranges of objective properties (light intensities, sizes, and shapes). Those ranges succeed in keeping differences in objective magnitudes below sensory discrimination thresholds (and do so in a way which preserves transitivity). In other words, they define objective conditions sufficient for pictures to be phenomenally type identical. In the conclusion, I explain why the principles that make digital pictures allographic also make agentive and film-based pictures replicable.

© 2012 The American Society for Aesthetics


I. An Allographic Account of Digital Pictures

Close-up perceptions of pixels cause a roughly atomistic intuition about digital pictures. The gist of the intuition is that digital pictures are made up of small identical building blocks that impose a lower limit on how such pictures can differ from one another. In turn, this suggests that digital pictures may avoid the problems of vagueness that afflict the individuation of other pictures, problems that emerge from the continuous ordering of shapes, colors, and sizes, the essential properties of pictures. If this is true, digital pictures should be allographic. In this section, I construct an allographic account of digital pictures along these lines. The account would apply to pictures comprising pixels made with liquid crystal materials and responding to voltages from programs that determine the intensity of visible wavelengths transmitted through those materials.

Some preliminary issues have to be cleared first. For digital pictures to be allographic, they would have to be type identical in respect of their shape, color, and size properties. But they would not have to be identical in respect of their objective shapes, colors, and sizes. First, even if the pixels in two liquid crystal display devices differed minutely in size (in nanometers, or even micrometers), this would not make any difference if the pictures were designed to be perceived by the human visual system. Second, for whatever discrimination system two pictures are designed, there are likely to be infinitesimal differences in the light intensities, the objective sizes, and the objective shapes they token that the discrimination system cannot capture. So apart from objective identity being unnecessary, it is out of reach. If digital pictures are allographic, it will be because they are capable of being type identical in respect of determinable, epistemically defined colors, shapes, and sizes, which are more coarse grained than their determinate, objective counterparts. Identity under epistemic types includes not only phenomenal identity, but also identity under more fine-grained forms of measurement than perception (all the way down to those used by microtechnology and nanotechnology) and identity in respect of more coarse-grained types than perception (such as the eight colors generated by 3-bit pixel displays). I focus mainly on phenomenal (perceptual) identity, but the article's conclusions also apply to other epistemic forms of identity. Finally, for brevity, I almost always use "objective" ("objective colors," "objective sizes") to refer to determinable objective types, as long as they are more fine grained than phenomenal types.

The atomistic intuition suggests that digital pictures may be able to preserve type identity because their shapes, colors, and sizes are discretely ordered, unlike those of non-digital pictures. For example, a digital display cannot generate pictures or subiconic parts with lengths between the lengths two pixels and three pixels.4 This applies across the range of sizes; it is not a local trait specific to the part of the ordering that contains the types two pixels and three pixels, but applies generally to n pixels and n + 1 pixels. Since there are no further types between two such types, the sizes that aggregates of pixels can token are not densely ordered. Moreover, it seems that any sizes generated by the display will be discretely ordered.5 The fact that sizes corresponding to fractions of pixels, like 1.2 pixels or 1.5 pixels, do not figure in the display's scheme of sizes means that there are separations between the size-types that the display can generate; the presence of such separations between types amounts to discreteness (noncontinuity) of the types.

Similar points apply to the shapes generated by pixel displays. Those shapes are aggregates of pixels, so any digitally generated shape can be analyzed into identical component shapes. All of the shapes that a pixel display can generate are analyzable into values on a Cartesian coordinate system comprising only a finite number of positive integer values, corresponding to the number of pixels on the screen. Thus, the display can generate only a finite number of shape types; and since density would imply an infinite number of types, the display's scheme of shapes cannot be densely ordered. Once again, in addition to being nondense, the types are discretely ordered: the values of the coordinate system that express shapes are integers, so there are no further shapes between them (expressible by fractions). Although there are such shapes, the display cannot instantiate them.
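To make the finitude claim concrete, here is a minimal sketch, under the assumption of a hypothetical toy grid: a digitally generated shape is modeled as a set of lit integer coordinates, so the scheme contains only finitely many shape types and nothing between a shape and the shape that differs from it by one pixel.

```python
# A toy pixel grid: every shape the display can generate is a subset of these
# integer (x, y) positions; there are no fractional coordinates.
WIDTH, HEIGHT = 4, 3   # illustrative values; a real display just has larger integers
grid = {(x, y) for x in range(WIDTH) for y in range(HEIGHT)}

# Model a digitally generated shape as the set of lit pixels.
square = {(0, 0), (0, 1), (1, 0), (1, 1)}
assert square <= grid  # every shape is drawn from the finite grid

# The scheme of shapes is finite: each pixel is either lit or unlit,
# so there are at most 2**(WIDTH*HEIGHT) shape types the display can token.
print(2 ** (WIDTH * HEIGHT))   # 4096 for this toy grid

# The smallest possible difference between two shapes is one whole pixel;
# no shape lies "between" a shape and the shape obtained by adding one pixel.
neighbour = square | {(2, 0)}
print(len(neighbour) - len(square))  # 1
```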

Colors generated by digitally manipulated displays do not seem to be densely ordered either. Whether or not the human visual system can tell apart the adjacent color options of a display application, the number of colors that the application can instantiate is finite because it corresponds to a number of binary-code possibilities. For example, in the color system called Truecolor, each pixel has three 8-bit subpixels, each of which emits a different visible wavelength with an intensity from a range of 256 values. This yields 256³ objective colors (colors defined as wavelength intensities) for the pixel. A dense ordering would comprise an infinite number of types; since the device cannot instantiate an infinite number of colors, its colors are not densely ordered.
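As a quick check of the arithmetic just stated, a one-line sketch of the count:

```python
# Each pixel has three 8-bit subpixels (red, green, blue), so each subpixel
# can take 2**8 = 256 intensity values.
values_per_subpixel = 2 ** 8
objective_colors_per_pixel = values_per_subpixel ** 3   # 256**3
print(objective_colors_per_pixel)  # 16777216 -- a finite number, hence not dense
```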

Does this mean that the color types digital pictures can instantiate are also discretely ordered? (Non-density does not automatically imply either discreteness or disjointness; see note 5.) Here, the allographic scenario could hold that just as we have carved up the continuous property spaces of objective sizes and shapes into discrete packages, we could do the same with continuously ordered wavelength intensities by using, for example, a sufficiently fine-grained process of sampling. In particular, an allographic argument could be constructed along the following lines. In coarse-grained, digitally controlled color displays, like 3-bit color, which generates only six chromatic colors, the colors are obviously discrete since such displays cannot generate color continua. The missing colors are regions of the objective color space, dropped from the program's scheme and thus keeping its few colors disjoint. In more fine-grained devices, the same principle applies, but with the following (phenomenally striking, but objectively trivial) difference: the application's objective color types are so fine grained that differences between adjacent types are below sensory discrimination thresholds. Discontinuities in the scheme are phenomenally invisible, but they are nevertheless there objectively, making the wavelength intensities objectively discrete, just as they did (but visibly) in the coarse-grained system.

The range of possible shapes, colors, and sizes that the digitally manipulated display can generate can be seen as the set of its syntactic types. Those types are ordered in the way just seen, that is, digital pictures are made up of shapes, colors, and sizes that are not densely ordered. The allographic scenario has also argued that digital shapes, colors, and sizes are discrete. This is important, because although non-density is necessary for a scheme to be notational, it is not sufficient, and discreteness would go much of the remaining way toward showing that digital pictures are allographic. It would prevent types from having common members, ensuring Goodman's notational requirement of disjointness. Moreover, the way the disjointness requirement is discharged means that the types are finitely differentiated: if the system generates a picture with the size n pixels, that picture's size will be measurably and knowably distinct from the next size, n + 1 pixels (something that would not be the case if the system could also generate sizes for all fractions of pixels). The same applies to colors, if the argument about discretely measured wavelengths is correct. To conclude: according to the allographic scenario, the essential properties of digital pictures are taken from sets of disjoint and finitely differentiated shapes, colors, and sizes.

We have now made the most of the building-block or atomistic intuition about digital pictures, but we have still not shown that digital pictures are notational. Disjointness and finite differentiation of types (characters) are necessary for a property space (here, a scheme of characters) to be notational, but they are not jointly sufficient. Central to notationality are the concept of character and the relation Goodman calls character-indifference. Like type identity, character-indifference is meant to be transitive, and characters themselves are meant to include all token representations that preserve that transitivity: a character in a notation is "a most-comprehensive class of character-indifferent inscriptions; that is, a class of marks such that every two are character-indifferent and such that no mark outside the class is character-indifferent with every member of it."6 Somewhat worryingly, the allographic scenario does not explain how the extent of its atoms can be accounted for without running into problems of continuity and vagueness that would defeat transitivity. What surface can a pixel have when it tokens what we are calling the size 1 pixel? Which wavelength intensities can a pixel emit when it instantiates one of the numerically defined 256³ objectively defined colors? Note that no similar question arises for classic examples of notational schemes, such as words and numbers, because the size and color of an inscription are not issues when it comes to recognizing what word-type it belongs to; the same word can be drawn in the sand or scribbled in the margins of a book. But the question does emerge when we are dealing with the qualitative identity of pictures because changes in size and color are changes to the qualitative identity of a picture. Questions about the limits of types like 1 pixel and the intensity triplet (Red 100, Green 100, Blue 100) will be hard to answer because objective sizes, shapes, and colors are continuously ordered, so any groupings of them based on practical discrimination methods (such as those used in industry) will admit some degree of vagueness, preventing an all-inclusive account of characters in a way such that no mark outside the class is character-indifferent with every member of it.

II. An Autographic Account of Digital Pictures

Alongside the allographic account just sketched, we can also construct an autographic scenario for digital pictures. To be allographic, pictures produced by digitally manipulated pixel displays would have to preserve some form of type identity in respect of colors, shapes, and sizes. What prevents type identity for non-digital pictures, such as analog photographs and paintings, is that color, shape, and size types are continuously ordered, making their limits vague and undecidable, a point Goodman captures by saying that those types are not finitely differentiated. According to the autographic account, the type identity of digital pictures can be challenged on the same grounds.

Finite differentiation is the ordering of types in a property space (such as a color space for colors or an axis for lengths) in a way that makes it possible in principle to establish type membership unambiguously. To the extent that the limits of types in an ordering are vague, membership cannot be established. The allographic account had two replies to this challenge. First, it pointed out that the digital display cannot generate pictures with sizes and shapes that differ by fractions of pixels; there is no size n + 1/2 pixels whose appurtenance to either n pixels or n + 1 pixels could be undecidable. Second, it pointed out that the colors, shapes, and sizes of digital pictures cannot be densely ordered, because density implies an infinite number of types, while digitally generated sizes, shapes, and colors are finitely numbered.

This is true, but it does not get us off the hook where finite differentiation of types is concerned. Density is one way of defeating finite differentiation, but there are others that we have not excluded yet. Types can be non-finitely differentiated even when they are finite in number.7 Consider "is a heap" and "is not a heap," or "bald" and "non-bald," as they divide up the domain of heads: there are only two types, yet there are heads whose membership cannot be decided. So non-density (the absence of further types between the two types) does not prevent the non-finite differentiation of the two types, according to Goodman's definition: "for every two characters K and K′ and every mark m that does not actually belong to both, determination either that m does not belong to K or that m does not belong to K′ is theoretically possible."8

Undecidability of type membership can also emerge at the limits of a single type. Even when we do not add a type adjacent to "bald" in the form of the predicate negation "non-bald," there are still heads whose appurtenance to "bald" cannot be decided. In that case, we cannot define the type all-inclusively ("most-comprehensively," in Goodman's words). For example, we would feel secure in holding that any quantity of sand greater than one billion grains is a heap. All greater numbers of grains will then be type identical in respect of "is a heap." But our certainty comes from the fact that we have kept all borderline cases at a safe distance from the type. So some quantities below one billion grains will also be identical to all the type's members in the same respect, but we have excluded them from the type. On the other hand, if we try to build up "is a heap" to include all such cases, we will run into borderline borderline cases, or higher-order vagueness: the fact that the undecidable cases cannot be demarcated from the decidable ones.9 In other words, we can do no more than define a subset of the all-inclusive type.

An autographic account of digital pictures can exploit both of these forms of vagueness to show that the types in respect of which digital pictures would have to be identical cannot qualify as notational characters. Consider a classic case of non-finite differentiation in which we cannot decide which of two adjacent types a particular instantiates. In the color scheme of the Truecolor system, the light-intensity values (Red 0, Green 0, Blue 127) and (Red 0, Green 0, Blue 131) are phenomenally discriminable: if we place them side by side, we see a faint line of color stepping between them. So pixels tokening those intensities have phenomenally different colors. But pixels lit at the intermediate intensity (0, 0, 130) are indiscriminable both from pixels instantiating (0, 0, 127) and from pixels instantiating (0, 0, 131). Even when two regions uniformly colored with Blue 127 and Blue 130 respectively are placed side by side (without any other background color in between), we still cannot see any color stepping between them; the same applies to Blue 130 and Blue 131. Therefore, we cannot exclude that the pixels lit at (0, 0, 130) token both colors, and this defeats the finite differentiation of the color types in respect of which pictures would have to be identical.
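The non-transitivity at issue can be illustrated with a short sketch. The numeric discrimination threshold used here (a difference of 4 or more Blue steps becomes visible) is an assumption made purely for illustration, not a claim about actual Truecolor hardware or human vision.

```python
# Model perceptual discriminability of two Blue values as a threshold on
# their objective difference. THRESHOLD = 4 is a purely illustrative number.
THRESHOLD = 4

def discriminable(blue_a: int, blue_b: int) -> bool:
    """Can a viewer see a color step between two uniformly lit regions?"""
    return abs(blue_a - blue_b) >= THRESHOLD

# Blue 127 vs Blue 130: indiscriminable; Blue 130 vs Blue 131: indiscriminable;
# yet Blue 127 vs Blue 131: discriminable. Indiscriminability is not transitive,
# so it cannot by itself serve as type identity of color.
print(discriminable(127, 130))  # False
print(discriminable(130, 131))  # False
print(discriminable(127, 131))  # True
```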

As Goodman pointed out, there is no type identity without finite differentiation. We can now no longer say that even the indiscriminable Blue 127 and Blue 130 are the same color: sameness of color would be transitive, which the similarity relation between Blue 127 and Blue 130 apparently is not.10 To complete the comparison with classic examples of vagueness: not all colors will fall unambiguously under the discretely ordered numerical triplets, any more than the fact that numbers of hairs are discretely ordered in integers means that there is no vagueness about baldness.

Can we not avoid such cumulations of objective differences simply by maintaining the same numerical luminance values? Yes. But this misses the point of the non-transitivity argument. Its point is that since non-transitive series can be constructed, the indiscriminability of two digital pictures, even when they instantiate the same numerically expressed intensity values, does not guarantee their type identity in the first place. Keeping the same numerical value was supposed to ensure identity because it ensured indiscriminability. If it did not, we would not have thought of arguing that digital pictures are allographic in the first place. But now that that entailment is broken, maintaining the same numerical values ensures only indiscriminability, minus the identity. Since digital pictures do not preserve type identity in respect of color even when they are identical under the most determinate digital color-types (the numerically expressed triplets), digital pictures are not notational. This is exactly what Goodman was getting at when he argued that finite differentiation is necessary for character-indifference and notationality:

Now the essential feature of a character in a notation is that its members may be freely exchanged for one another without any syntactical effect. . . . And a true copy of a true copy of . . . a true copy of an inscription x must always be a true copy of x. For if the relation of being a true copy is not thus transitive, the basic purpose of a notation is defeated. Requisite separation among characters . . . will be lost unless identity is preserved in any chain of true copies.11

Yet, although this argument against the notationality of digital pictures is fundamentally correct, it still has a theoretical flavor that makes it inconclusive. For suppose that we accept the general point about indiscriminability not guaranteeing type identity. Then, even if two colors were type identical, all we could know about them perceptually would be that they are indiscriminable. We would not know that they are identical (since we have rejected the thesis that indiscriminability implies identity), but nevertheless they would be identical. Well, how do we know that when we maintain the same numerically expressed intensity values we are not in this kind of case, a case in which the colors are not just indiscriminable, but also type identical?

Before we hear the autographist's reply to this rejoinder, let us step back to take a synoptic view of it and try to weigh our options. The numerically expressed types, like (Red 0, Green 0, Blue 130), are groupings of light intensities. The autographist is going to argue that those groupings do not define the types in respect of which pictures have to be identical to have the same color. Before the autographist even gets started along such lines, we may begin to wonder why it is so important for bona fide types to be defined when we are at levels of discrimination so far below perception. Surely, the doubt goes, such fine-grained physical manipulation of light intensities can get it right where producing perceptual identity is concerned. This skepticism is legitimate. The reason why we are following the autographist into this area is that we have so far granted that unless digital pictures are notational, they will not be allographic. But if the requirements for notationality begin to look too exacting to allow digital pictures to be allographic, then we may have to try devising a form of allography that can sidestep the criteria for notationality. This is a radical enough step to warrant a close look into the autographist's argument: we need to be sure that the autographist is right (that digital pictures are not notational) to decide whether we really have to embark on the quest of showing that digital pictures can be allographic without being notational.

With this wider picture in mind, let us hear the details of the autographic rejoinder to the allographic scenario, which shows that digital pictures are not notational. Numerically expressed types like (0, 0, 130) in Truecolor, or simply Blue 130, are groupings of objective light intensities. When any of those intensities is instantiated by pixels and perceived, it causes a kind of color experience, φ. The color φ corresponds to (can be caused by) a band of indiscriminable intensities. That band is wider than Blue 130, since Blue 129, for example, also causes φ. On the other hand, a larger aggregate of numerical values, like Blue 127–131, not only includes all of φ, but also goes beyond it, since it includes intensities that cause a color discriminable from φ. (Remember that if we place a group of pixels lit at [0, 0, 127] side by side with a group of pixels lit at [0, 0, 131], without any space between them, we do see a faint line of color stepping between the two groups.) The reason why aggregates of the numerically expressed groupings of light intensities do not coincide exactly with phenomenal colors such as φ is that colors correspond to vaguely defined bands of intensities. Intensities are densely ordered, so they defeat finite differentiation of the limits of any types that attempt to group them.

As such, a light intensity triplet like (0, 0, 130) has some of the virtues that Goodman expects of notational characters, but it does not have all of them. First, all of the members in the grouping are type identical in respect of their ability to cause φ-experiences: we have to go outside (0, 0, 130), for instance as far as (0, 0, 137), to have a different color experience. Second, all the member intensities within a group are transitively identical in that respect: we cannot construct non-transitive series by using only intensities from within (0, 0, 130). But despite these virtues, the groupings will not qualify as the types we need for type identity in respect of color, because they fail to define most-comprehensive classes of objective conditions for color appearances:

A character in a notation is a most-comprehensive class of character-indifferent inscriptions; that is, a class of marks such that every two are character-indifferent and such that no mark outside the class is character-indifferent with every member of it. In short, a character in a notation is an abstraction class of character-indifference among inscriptions.12

More specifically, although any two light intensities grouped by a numerical value like Blue 130 are type indifferent in respect of the ability to cause φ, there are intensities outside the group (in Blue 129, for example) such that they are type indifferent with every member inside it in the same respect. So such groups (Blue 130) are not most-comprehensive. Next, for composites of several numerical values, there are two possibilities. One is that the composite is not most-comprehensive either (this applies to Blue 127–130). The other is that it extends so far that it violates transitivity in respect of φ (this applies to Blue 127–131). Notational characters do not violate transitivity, so in that case, too, we are not dealing with a notational character. Therefore, the wavelength intensity groupings represented by the numerical triplets do not, either alone or together, define or coincide with the types in respect of which pictures have to be identical to have the same color.

We should concede that the autographist is right on the issue of notationality. However, the issue of the notationality of digital pictures aside, the door to type identity is still left ajar for the allographist. She can concede that the light intensity groupings do not coincide with the types in respect of which pictures have to be identical, because if they are narrower than those types, then maintaining the same numerical triplets will make the pictures type identical in respect of color. It seems the only way the autographist can resist this conclusion is to repeat that pictures cannot be allographic (capable of being type identical) without being notational. But this time, the autographist's argument is void of substance, because the point of notationality in the first place was supposed to be to preserve the type identity of pictures.

III. Why Digital Pictures Are Allographic

Let us return to the skeptical question sketched earlier: how can the fine-grained measurement and manipulation of physical quantities (such as color measured in wavelength intensities) far below perceptual thresholds fail to pin down the parameters required for the much more coarse-grained, perceptual form of identity that we are looking for? The reply is that such technology does not fail to pin down the physical conditions required for perceptual identity or for other epistemically defined forms of identity (such as those described at the start of Section I). We are right to imagine that by physically manipulating quantities at a subphenomenal level, we can get below perceptual discrimination thresholds and engineer objects whose objective differences remain well below anything perception could detect, thus making them perceptually identical.

The key to exploiting the intuition is that, in order for digitally manipulated display applications to be able to instantiate type identical colors, shapes, and sizes, we do not need to previously define those types, and we do not need the display applications to encode those types. All we need to be able to do is define objective properties that are sufficient to preserve the more coarse-grained epistemic forms of type identity, such as phenomenal identity. Accordingly, display applications only have to be able to encode information about those sufficient objective conditions. This is feasible because it does not encounter insoluble obstacles from vagueness, unlike the definition and encoding of objective information about the full extent of the types (the most-comprehensive types) under which the pictures have to be identical.

Showing that this project is feasible requires showing that we can isolate, encode, and preserve across different devices sets of objective conditions sufficient for the identity of digital pictures. Before we begin to deal with those tasks, a preliminary issue must be dealt with, albeit in a frustratingly brief way. Our objective is to preserve type identity in respect of colors, shapes, and sizes; so will non-transitivity not crop up again, as soon as we try to show that digital technologies and their display applications can ensure such forms of type identity for pictures? The reply depends on one's convictions concerning the identity of qualia. If one is a skeptic about the identity of qualia, like Armstrong, then one has to abandon the hope that two digital pictures can be phenomenally identical (but on the same grounds, non-transitivity, one should also deny all other forms of epistemic identity for digital pictures).13 On the other hand, what is arguably the main school of thought regarding the definition of qualia or appearance types, represented by Goodman (in The Structure of Appearance), Austen Clark, and perhaps Jackson and Pinkerton, holds that the non-transitivity argument does not prevent us from defining phenomenal identity: we just have to define it on criteria stricter than perceptual indiscriminability.14 Since, as we have seen, the numerically expressed illumination values for pixels are more fine grained than phenomenal colors, they may cut finely enough to define phenomenal identity. If they do not do so in Truecolor, then perhaps they do in more advanced systems, whether existing (like 30-bit color) or possible, which distinguish several times more wavelength intensities. Finally, wondering just how far we need to go below indiscriminability to reach phenomenal identity, we may begin to suspect that this approach is on the wrong track: where exactly are we to stop, since we now can no longer use the evident criterion of perceptual indiscriminability itself? These suspicions have given rise to approaches that reinstate indiscriminability as a sufficient criterion for phenomenal identity, though on different and conflicting grounds each time.15 If we adopted one of those approaches, we could no longer accept the arguments presented by the autographist in Section II. In that case, the allographist could simply argue that digital pictures are phenomenally identical whenever they are indiscriminable.

In what follows, the concept of type identity used is compatible both with the Goodman-Clark position and with the position that indiscriminability amounts to type identity. The difference is that on the Goodman-Clark position, perhaps no existing digital device carves up light intensities finely enough to set up phenomenally identical colors. (Or perhaps such a system does already exist; there is no way of telling from those theories what counts as phenomenal identity once indiscriminability is dropped as the criterion.) On the alternative account, which equates indiscriminability with phenomenal identity, such devices are already in use. They came into use when the color scheme called Truecolor was applied, by using video cards, display devices, and operating systems that allowed pixels to instantiate colors that are objectively different but phenomenally indiscriminable.

Now we can return to the main subject of this section: showing that it is possible to isolate, encode, and preserve across different devices sets of objective conditions sufficient for the identity of digital pictures in respect of color, shape, and size. Suppose that our objective is to produce a given phenomenal color φ by using a liquid crystal display. We do not have to define the full, exactly delimited band of wavelength intensities that can cause φ. We have only to define a narrow region of that band, since the pixels instantiating any intensity in the band will suffice to cause φ. So we do not have to solve the intractable problems of vagueness and non-transitivity that would emerge if we tried to define an all-inclusive group of intensities for φ, including the borderline cases. Accordingly, the display application's numerically represented triplets of intensities (for red, green, and blue) do not have to carve up the continuous space of wavelength intensities along the same joints as perceptual discrimination does when it discriminates those intensities as φ or non-φ. The job of the triplet (0, 0, 130) is only to encode information about any objective properties sufficient to cause φ. This means that it will not define necessary objective conditions for causing that color, since the same color could equally be caused by another intensity not included in the group represented by (0, 0, 130).

The required properties (not only wavelength intensities for colors, but also sizes and shapes of pixels) can only be known and defined by using subphenomenal means of discrimination. Suppose that objective wavelength intensities F and G are indiscriminable, so that they cause the same color experience φ. Their indiscriminability means that they differ by objective magnitudes that are too small to pass sensory discrimination thresholds. Therefore, any intensity between and excluding F and G will be indiscriminable from both of them (it will differ less, in the objective ordering, from either F or G than they differ from one another), so the group of intensities between and excluding F and G will be transitively identical in respect of causing φ. If all we had to go on was perception, we could not define such a set: carving out a region of intensities between two indiscriminable intensities requires subphenomenal discrimination.16
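A sketch of the point about sub-bands, under the same illustrative threshold and made-up values as before: if F and G are themselves indiscriminable, then intensities chosen strictly between them differ from each other by even less, so every pair drawn from that interior region stays below the threshold and the grouping is transitive.

```python
# F and G are objective intensities that are indiscriminable from one another
# (their difference is below the illustrative threshold of 4 units).
THRESHOLD = 4
F, G = 127.0, 130.0

def discriminable(a: float, b: float) -> bool:
    return abs(a - b) >= THRESHOLD

# Sample some intensities strictly between (and excluding) F and G.
interior = [F + k * (G - F) / 10 for k in range(1, 10)]

# Every pair drawn from the interior differs by less than G - F does from F,
# so no pair crosses the threshold: identity "in respect of causing the same
# color experience" holds transitively within the narrow band.
all_pairs_indiscriminable = all(
    not discriminable(a, b) for a in interior for b in interior
)
print(all_pairs_indiscriminable)  # True
```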

What we have so far described could equally well apply to the intensities represented by a series of several numerically represented triplets: remember that in Truecolor, such triplets come in indiscriminable series, so that, for example, the groupings Blue 127–130 are all indiscriminable. Why does the Truecolor scheme fit several groupings into a space where all the light intensities are indiscriminable? Because that way, the color scheme allows the application to cause perceptions of color continua. For this, the display has to be illuminated in such a way that neighboring pixels are indiscriminable and more distant pixels discriminable, but without there being any visible color stepping. This is achieved by gradually cumulating subphenomenal objective differences in intensity: over a small region of the screen, the objective differences remain below sensory discrimination thresholds, but over a larger region they are cumulated to pass those thresholds.17 If the application could not carve up intensities more finely than perception can, it would not be capable of causing this appearance of continuity.
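The mechanism just described (adjacent pixels objectively different but below threshold, distant pixels cumulating past it) can be sketched as follows; the threshold and step size are again illustrative assumptions rather than properties of any real display.

```python
# Illustrative gradient: adjacent columns differ by 1 Blue step (below the
# assumed visibility threshold of 4), but distant columns differ by much more.
THRESHOLD = 4
STEP = 1
columns = [100 + STEP * i for i in range(64)]   # Blue values across the screen

def discriminable(a: int, b: int) -> bool:
    return abs(a - b) >= THRESHOLD

adjacent_visible = any(discriminable(columns[i], columns[i + 1])
                       for i in range(len(columns) - 1))
ends_visible = discriminable(columns[0], columns[-1])

print(adjacent_visible)  # False: no visible stepping between neighboring columns
print(ends_visible)      # True: the cumulated difference passes the threshold
```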

Different transitive groupings of intensities have to be kept disjoint from each other; otherwise, the subphenomenal discrimination system used could not keep them apart (for instance, there would be intensities that for the discrimination system equally belonged to Blue 127 and Blue 128). Leaving objective differences out of an ordering (of signal intensities, wavelengths, voltages, or spatial magnitudes) is essential to digital signal processing. For example, the discretely ordered binary values (0, 1) in binary code correspond to two disjoint domains of voltages. A range of the continuous ordering of voltages is dropped (0.8 V to 2 V), so that voltages between 0 V and 0.8 V represent 0 and those between 2 V and 5 V represent 1.18 Wavelength intensities instantiated by pixels are similarly disjoint. In fact, the process of sampling at intervals, which digitally encodes continuous magnitudes by means of discrete values, has to leave out intensities in order to carve up the continuous ordering into groupings; unless each sample-value is disjoint from the next, the result will still be continuity.19 But, from the perspective adopted here, sampling can also afford to leave out intensities, since its job is only to ensure sufficient objective conditions for causing a color experience. It can do this without running into the problem of vagueness, which it would do if we tried to use it to define all-inclusive sets for color identity.
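The disjointness point can be illustrated with a minimal sketch. The cut-off voltages follow the ones cited in the text (note 18); the function itself is only an illustration, not a description of any particular circuit.

```python
from typing import Optional

# The band between 0.8 V and 2 V is dropped from the scheme, so the voltage
# ranges that represent 0 and 1 cannot overlap.
def bit_for_voltage(v: float) -> Optional[int]:
    if 0.0 <= v <= 0.8:
        return 0
    if 2.0 <= v <= 5.0:
        return 1
    return None  # voltages in the dropped band represent nothing

print(bit_for_voltage(0.3))  # 0
print(bit_for_voltage(3.7))  # 1
print(bit_for_voltage(1.4))  # None: the gap keeps the two domains disjoint
```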

The sizes of pixels can be defined on a comparable basis in order to preserve identity. Sizes taken from a proper subset of a set of sizes that human visual perception cannot discriminate will preserve transitivity on the same principles as those described for colors. Again, this requires subphenomenal discrimination that can differentiate among phenomenally identical sizes sufficiently to be able to define a disjoint subset of other indiscriminable sizes between them. Once such a subset is defined by subphenomenal means, all objective sizes within the subset (even ambiguous members of the subset) will be perceptually indiscriminable.20 By using microtechnology, which applies metrology standards, pixels can be made to instantiate only sizes and shapes from within such a transitive grouping. The result is that we cannot construct series of digital pictures whose objective differences cumulate beyond the magnitudes defined in the transitive regions.

In respect of their size and shape properties, pixels thus constitute cases of indiscriminability from the same sample: new devices are not made by copying existing devices, but from data about objective (though determinable) sizes and shapes that they have to instantiate.21 So it is not that differences of micrometers, or even nanometers, cannot be cumulated sufficiently to pass visual discrimination thresholds; it is just that they are not cumulated, because of the way display devices are made. This is strictly analogous to the way in which we can make copies of non-digital pictures by starting each time from the original, instead of making copies from copies, to prevent the cumulation of differences. However, there is also a crucial difference between making copies from the original by using perceptual discrimination and making them with the industrial technology that produces computer displays. Only the latter can carve out subphenomenal regions of the objective property spaces of colors, shapes, and sizes that preserve transitivity. The result is that no two members of the group of copies can differ phenomenally. But in reproductions made without using subphenomenal means, even when all the reproductions are made from the same copy, two copies can differ phenomenally. (Copies b and c can both be indiscriminable from original a, but still differ perceptibly from each other: the difference between a and b, and that between a and c, are cumulated when b and c are compared.)
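The parenthetical point about copies b and c can be put numerically, as a sketch with made-up magnitudes: each copy deviates from the original by less than the discrimination threshold, but the two deviations can point in opposite directions and so sum when the copies are compared with each other.

```python
# Hypothetical magnitudes: an original with some objective color value, and two
# copies each made directly from it with a small (sub-threshold) error.
THRESHOLD = 4
original = 130
copy_b = original - 3   # differs from the original by 3: indiscriminable from it
copy_c = original + 3   # also indiscriminable from the original

def discriminable(a: int, b: int) -> bool:
    return abs(a - b) >= THRESHOLD

print(discriminable(original, copy_b))  # False
print(discriminable(original, copy_c))  # False
print(discriminable(copy_b, copy_c))    # True: the two errors cumulate to 6
```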

Thus, the transitive groupings of light intensities, which preserve type identity for digital pictures, bear an ambiguous relation to Goodman's criteria for notation. (1) They meet a disjointness condition: they include only intensities that are kept disjoint (by subphenomenal means) from all other intensities. (2) Members of the groupings may be freely exchanged for one another. (3) Identity within the group is transitive. (4) However, the transitive groupings do not meet a final condition. They are not all-inclusive; there are particulars outside the subset that are type identical with every member of it in respect of causing the same phenomenal color or size. Because they do not meet this condition, the groupings do not count as characters in a notational system. If we tried to satisfy this last condition, we would run into trouble and spoil everything: we would meet sizes whose appurtenance is ambiguous, thus losing (1), and as a consequence of this, we would also lose (2) and (3).

The key point about the transitive groupings of light intensities, shapes, and sizes that preserve type identity for digital pictures is that they keep the differences in real magnitudes, which inevitably occur, well below some epistemically defined discrimination threshold. Accumulation of such differences is prevented by defining narrow ranges of objective properties and staying within those ranges to preserve the interchangeability of the magnitudes, and their transitive identity, with respect to some epistemically defined need. That need is defined by the discrimination system for which the picture is designed to function. In the absence of any such epistemic criterion, no form of type identity can be defined, because we would have no criterion on the basis of which to know which magnitudes to include in the transitive sets.

IV. Conclusion

The main conclusion of this article is that digital pictures, while they are not tokens of notational representational systems, are nevertheless allographic, because they can be type identical in respect of maximally determinate appearances (but also under other epistemic types). In other words, two digital pictures can be phenomenally identical in respect of color, shape, and size. For the reasons pointed out toward the start of Section III, this identity is not reducible to indiscriminability (construed as a similarity relation weaker than identity).

Even though the allographist has turned out to be right in holding that digital pictures are types, this account of digital pictures is very different from the one from which the allographist started out. Our initial atomistic intuition suggested that digital pictures are made up of properties from discrete orderings. But digital pictures are made up of shapes, colors, and sizes no less than other pictures; these are continuously ordered properties, and they still prevent digital pictures from being notational. Instead, what allows digital pictures to be type identical is that engineers manipulating analog properties have found ways to circumvent (not defeat) the problem of continuity by keeping variations in analog properties within noncumulable subphenomenal limits.


The technology that allows digital pictures to be allographic is not primarily the digital technology of binary-code representations, but the microtechnology and metrology that keeps differences in magnitudes within subphenomenal ranges. The role of binary-code representations is indirect: it allows the required subphenomenal information to be consistent and to be stored long enough for the picture to be reproduced. Subphenomenal discrimination, to a large extent, but especially consistency and storage, were not available when we could only reproduce pictures manually. Magnifying glasses could give us a degree of subphenomenal information, but neither consistency nor storage capacity. Without these, visual information, which is perception-dependent and short-lived, could not be preserved consistently until we got to act on the prospective copy, so phenomenal identity would be lost.

The conclusion that digital pictures are allographic because they can be type identical has repercussions for the concept of autography itself. It means that pictures are not necessarily autographic. If some pictures can be types, what is the difference between those that can and those that cannot? A distinction that would have been useful here to the autographist is that some pictures are notational, and only those pictures are allographic. But we have seen that digital pictures are not notational, so we cannot defend autography by using that criterion. On the other hand, what allows digital pictures to be types is not so much their dependence on binary-code representations as it is the technology that manipulates subphenomenal quantities. This, jointly with the fact that autography is not necessary for pictures, suggests that by using the same principles (defining transitive subphenomenal sets of sufficient objective conditions), it is possible to make type-identical paintings and analog photographs. In that case, we should preserve the distinction between notational and nonnotational representations, but abandon the distinction between autographic and allographic representations.

    JOHN ZEIMBEKIS

    Institut Jean Nicod, Paris

    Department of Philosophy

    University of Grenoble

    Grenoble, France 38040

    internet: [email protected]

1. Thanks to Dominic Lopes and Diarmuid Costello for comments, and to Denis Vernant, Olivier Massin, and Alexandra Athanasiou for discussions of related issues.

2. Nelson Goodman, Languages of Art (Indianapolis: Bobbs-Merrill, 1968), p. 136. In the relevant passage, Goodman in fact argues against the possibility of knowing that two objects are type identical (character-indifferent).

3. Gregory Currie, An Ontology of Art (New York: St. Martin's Press, 1989), p. 87.

4. I adopt the convention of underlining types, but have made an exception for triplets of intensity values for colors (000, 000, 000) in order not to deform the text too much.

5. Non-density does not automatically imply either discreteness or disjointness. See Goodman, Languages of Art, p. 136 n. 6, on discreteness, and p. 137 on disjointness and finite differentiation.

6. Goodman, Languages of Art, p. 132.

7. Finite differentiation neither implies nor is implied by a finite number of characters (Goodman, Languages of Art, p. 136).

8. Goodman, Languages of Art, pp. 135–136.

9. Rosanna Keefe, Theories of Vagueness (Cambridge University Press, 2000), p. 112.

10. Similar arguments have been made in other contexts; for example, David Armstrong, A Materialist Theory of the Mind (London: Routledge, 1968), p. 218; Crispin Wright, "On the Coherence of Vague Predicates," Synthese 30 (1975): 325–365.

11. Goodman, Languages of Art, pp. 131–132.

12. Goodman, Languages of Art, pp. 131–132.

13. Armstrong, A Materialist Theory of the Mind, p. 218.

14. Nelson Goodman, The Structure of Appearance (Boston: Reidel, 1977), especially p. 210; Austen Clark, "The Particulate Instantiation of Homogeneous Pink," Synthese 80 (1989): 277–304, esp. p. 283; Frank Jackson and R. J. Pinkerton, "On an Argument Against Sensory Items," Mind 82 (1973): 269–272, esp. p. 270. What the relations are between Goodman's two works is an interesting issue that cannot be explored here; suffice it to say that I do not think that there is a contradiction between the early work's definition of qualia and the later work's insistence on autography for pictures.

15. Delia Graff Fara, "Phenomenal Continua and the Sorites," Mind 110 (2001): 905–935; Robert Schroer, "Matching Sensible Qualities: A Skeleton in the Closet for Representationalism," Philosophical Studies 107 (2002): 259–273; John Zeimbekis, "Phenomenal and Objective Size," Noûs 43 (2009): 346–362.

16. Since intensities are densely ordered, we cannot sharply define the limits of the region between and excluding F and G (there is no last intensity before G). Thus, by excluding F and G, we in fact exclude a zone of intensities; all we have to know is that a given intensity (for example, toward the middle of the region) is not F, and this is feasible for subphenomenal systems of measurement.

17. For a full explanation of how color continua can be generated artificially, see Clark, "The Particulate Instantiation of Homogeneous Pink."

18. Anand Kumar, Fundamentals of Digital Circuits (Delhi: Prentice-Hall India, 2006), p. 3.


19. It is conceivable that instead of disjoint groupings of intensities, sampling produces disjoint maximally determinate intensities. But this will not be feasible if intensities can differ infinitesimally, in ways that practical measurement methods cannot capture with perfect accuracy. In that case, each sample will be a tiny zone in reality, not a point or line.

20. Ambiguous intensities at the limits of the grouping can be included in it because the ambiguity that would matter (that which concerns appurtenance to the set of which the grouping is a proper subset) has already been pushed beyond the limits of the group.

21. The concept is from John McDowell, Mind and World (Harvard University Press, 1996), p. 171.