
Transcript of Dr. Wettstein: Below is a copy of the e-mail sent to Max Glover of Intel on the physical guarantee...

Dr. Wettstein: Below is a copy of the e-mail sent to Max Glover of Intel on the physical guarantee imposed by symmetric encryption systems. I hope you enjoy the read; it is somewhat lighthearted but I believe it carries an important message. I'm not sure that people have given a lot of consideration to the practical limits of data-mining.

The theoretical thermodynamic energy consumption of iterating over a 128-bit number is ¼ of the electrical generation capacity of the world. I believe one could develop an interesting piece on the relevance of the Shannon/von Neumann/Landauer limit to data-mining. This correlation could be drawn from the implicit notion that, by definition, data-mining requires iteration over memory, and all practical computing systems require a continuous series of pointer de-references to iterate over memory. As a result, the thermodynamic limit to data-mining is very similar to that of symmetric encryption, since iteration of a pointer is directly congruent to iteration of a counter.

It would be interesting to nail down the dimensions of the Twitter Cube and apply an SNL computation to that.

Email to Max Glover of Intel:

I thought it might be helpful to offer a few reflections from an engineering perspective on how we view the security process.

Like everyone else in the industry, we depend on the principle of large numbers as our ultimate security guarantee. In our case, this large number is at the root of the identity topology of an organization.

The effective physical guarantee of large-number-predicated security is the Shannon-von Neumann-Landauer (SNL) limit. Unless Intel has invented reversible computing, the ultimate limit on security is governed by the thermodynamic free-energy changes needed to support a one-bit change in semiconductor media. The minimum value, in joules, is given by the following equation for binary circuits:

Emin = Kb * T * ln2, where Kb = the Boltzmann constant, 1.38x10^-23 J/K; T = the temperature of the circuit in kelvin; and ln2 ≈ 0.6931. Since this is a direct multiplicative formula, increasing T means it takes more energy to induce the state change.

So attacking the security of such a system suggests the need to run the 'cracking' computer at very low temperatures. Fortunately, superconductivity gives us a solution in liquid helium, which boils at 4 K and limits the maximal temperature the circuit can reach.

It is also conveniently around the temperature of interstellar space, which provides an additional option for where to do the computing. Multiplying, a computing device bathed in helium will require 0.0000000000000000000000138 joules of energy for a 1-bit state change.

In our case the root identity is selected from a field of 2^256 numbers. A potential adversary would thus, at a minimum, need to count from 0 to 2^256-1 in order to test each possible number. Neglecting other work required, the following yields the amount of energy required: Emax_comp = 115,792,089,237,316,195,423,570,985,008,687,907,853,269,984,665,640,564,039,457,584,007,913,129,639,936 * 0.0000000000000000000000138, which, rounding to an even number of joules, yields:

Emax_comp = 1,597,930,831,474,963,496,845,279,593,119,893,128,375,125,788,385,839,783 joules

The intrinsic physical limit of the security barrier can be deduced from the energy source required to implement this computation.

It has been, conveniently, estimated that the sun generates approximately 1.21x10^34 joules of energy per year. Division yields the total period of time required to complete a count over the number field space using the total yearly energy output of the sun: Esource = 1,597,930,831,474,963,496,845,279,593,119,893,128,375,125,788,385,839,783 / 12,100,000,000,000,000,000,000,000,000,000,000, which yields: Esource = 132,060,399,295,451,528,664 years

The sun's age is ~5,000,000,000 years with a projected life span of 10,000,000,000 years. This suggests that the intrinsic security barrier is the notion that it would take 13,206,039,929 times longer than the expected life of our solar system to complete a count from 0 to 2^256-1.
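For readers who want to check the arithmetic, here is a small Python sketch using the email's own constants. (Note that the per-bit figure of 1.38x10^-23 J quoted above is numerically Kb x 1 K; the strict SNL minimum at 4 K would be Kb * 4 * ln2 ≈ 3.8x10^-23 J, which only strengthens the conclusion.)

```python
import math

K_B = 1.380649e-23                 # Boltzmann constant, J/K
print(K_B * 4.0 * math.log(2))     # strict SNL minimum at 4 K: ~3.8e-23 J/bit

E_BIT = 1.38e-23                   # per-bit energy figure the email actually uses
FIELD = 2**256                     # size of the root-identity number field
SUN_J_PER_YEAR = 1.21e34           # quoted yearly energy output of the sun

e_max_comp = FIELD * E_BIT                  # ~1.598e54 J for the full count
years = e_max_comp / SUN_J_PER_YEAR         # ~1.32e20 years of solar output
print(f"{e_max_comp:.4e} J, {years:.4e} years, {years / 1e10:.3e} solar lifetimes")
```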

An alternative strategy has been to capture the entire output of a generic supernova, referred to in cosmological terms as a FOE, or approximately 1.5x10^44 joules, which would yield a ten-fold decrease in computational time. This is currently considered a long-term research issue given that the National Ignition Facility is attempting to do this with 0.001 to 0.002 grams of a deuterium/tritium mixture and it is currently unclear how to scale this to a process size of 4,375,800,000,000,000,000,000,000,000,000 grams.

It should be noted that these calculations only reflect the cost of iterating the counter and neglect the computational energy input required to implement the cryptographic primitive based on the counter. Since we employ memory-hard functions in the topology derivation process, it also neglects the costs associated with the necessary memory pool transactions. Additionally, these estimates do not reflect the energy costs needed to produce a sufficient quantity of liquid helium, which would be required to quench the thermal output of the sun, or alternately a supernova, to 4 kelvin over the expected duration of the calculation. If there is interest, we can provide a constraint estimate based on a thermodynamic calculation of the Joule-Thomson energy costs associated with a Hampson-Linde implementation of the Carnot cycle. It is these latter costs which suggest the need to carry out the number field search in inter-stellar space, as previously noted. Secondary to this, there is a suggestion of the need to closely monitor the 'black budget' of the NSA to determine if there are covertly funded programs for deep-space launch capabilities.

The physical energy limitations of a number space search are at the root of the now widely held industry conclusion that the RDRAND and AMD Padlock instructions cannot be trusted as legitimate sources of entropy. Implementing, for example, RDRAND is widely held to be done as:

Krandom = AES^k256(Ncnt), where k256 = a 256-bit known key and Ncnt = a limited-range counter. This would generate output values which would pass statistical tests for randomness but which contain only Ncnt bits of entropy. The thermodynamic costs of deducing Krandom would thus only be based on the bitlength of the Ncnt seed value. I am currently submitting that it would be possible to estimate the bitlength of Ncnt based on a thermodynamic computational cost estimate premised on the daily electrical consumption of the Utah data facility.
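A minimal sketch of this hypothesized construction (assuming the pycryptodome package; the fixed key and the 32-bit counter width are illustrative, not a claim about Intel's actual implementation):

```python
# Sketch of the *hypothesized* RDRAND construction described above:
# AES under a fixed, known 256-bit key applied to a limited-range counter.
from Crypto.Cipher import AES   # pycryptodome

K256 = bytes(32)                        # fixed "known" 256-bit key (illustrative)
cipher = AES.new(K256, AES.MODE_ECB)

def krandom(ncnt: int) -> bytes:
    """Encrypt the counter into a 128-bit block: the output passes
    statistical randomness tests yet carries only the counter's entropy."""
    return cipher.encrypt(ncnt.to_bytes(16, "big"))

# With, say, a 32-bit Ncnt, recovering the seed costs ~2^32 trials, not 2^128.
outputs = [krandom(i) for i in range(4)]
```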

The take away from all this, and the point of this note if people are still reading, is that security systems are not broken by compromising the physical predicate on which these systems are based. They are compromised by implementation failures, either human or technical.

Ultimately, someone has to pick the secret number which is at the root of the security predicate. The most straightforward compromise is thus to beat the person who did the picking with a pipe wrench until they divulge what the number is. The other avenue for breaking the security guarantee is to compromise the physical implementation of the predicate. This can take the form of a compromise of any of the physical systems which store, transport or process the numeric root on which the security guarantee is based.

After spending time thinking about how all this applies to health identity security, I've concluded the future will involve a secured smart-phone-based technology to store the list of provider-unique identities to be repatriated. Along with this will be all the technology needed to properly implement the physical security constraints at any site which would implement any component of the health delivery process. I'm hoping that moving forward, IDfusion, DakTech and Intel can have a collaborative discussion on how to conduct a demonstration of the technology needed to implement the security predicates for the generation, transport and repatriation of health identities. With the exception of the RDRAND instruction.

[Slide figures, condensed: the UDR worked example in this section operates on the functional column yofM = (11, 27, 23, 34, 53, 80, 118, 114, 125, 114, 110, 121, 109, 125, 83), computed from the 15 Spaeth points y1=(1,1), y2=(3,1), y3=(2,2), y4=(3,3), y5=(6,2), y6=(9,3), y7=(15,1), y8=(14,2), y9=(15,3), ya=(13,4), yb=(10,9), yc=(11,10), yd=(9,11), ye=(11,11), yf=(7,8). ANDing bit slices p6..p0 with their complements p6'..p0' gives the 1-count of every dyadic interval: p6' has 1-count 5, so 5 of the 15 values lie in [0,64), and p6 has 1-count 10 for [64,128); one level down, p6'&p5' counts [0,32)=3, p6'&p5 counts [32,64)=2, p6&p5' counts [64,96)=2, and p6&p5 counts [96,128)=8; and so on through the p4 and p3 levels.]

UDR: Univariate Distribution Revealer (on Spaeth)

Pre-compute and enter into the ToC all DT(Yk), plus those for selected linear functionals (e.g., d = main diagonals, ModeVector, ...). Suggestion: in our pTree-base, every pTree (basic, mask, ...) should be referenced in ToC(pTree, pTreeLocationPointer, pTreeOneCount), and these OneCts should be repeated everywhere (e.g., in every DT). The reason is that these OneCts help us in selecting the pertinent pTrees to access, and in fact are often all we need to know about the pTree to get the answers we are after.
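A minimal sketch of the suggested ToC record (field names hypothetical; the sample values are the first two entries of the ToC shown further below):

```python
from dataclasses import dataclass

@dataclass
class ToCEntry:
    """One Table-of-Contents record per pTree, as suggested above."""
    ptree_id: str      # e.g. "p13" = bit 3 of attribute 1
    location_ptr: int  # bit address of the pTree in the memory array
    one_count: int     # cached 1-count; often all we need to answer a query

toc = [ToCEntry("p13", 200, 9), ToCEntry("p12", 333, 6)]
```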

DT(yofM) level 1-counts:
  width  8: 0 1 1 1 1 0 1 0 0 0 2 0 0 2 3 3
  width 16: 1 2 1 1 0 2 2 6
  width 32: 3 2 2 8
  width 64: 5 10

depth(DT(S)) = b ≡ BitWidth(S); h = depth of a node; k = node offset. Node(h,k) has a pointer to pTree{x in S | F(x) in [k*2^(b-h+1), (k+1)*2^(b-h+1))} together with its 1-count. Applied to S, a column of numbers in bitslice format (an SpTS), this produces the DistributionTree of S, DT(S).
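A small Python sketch of UDR under these definitions (numpy used here as a stand-in for real pTree AND/1-count operations; the interval-width convention is simplified for the sketch):

```python
import numpy as np

def udr_distribution_tree(values, bitwidth):
    """1-counts of each dyadic interval [k*w, (k+1)*w), w = 2^(bitwidth-h),
    at every depth h, computed only from ANDs of bit slices (basic pTrees)
    and their complements."""
    vals = np.asarray(values, dtype=np.int64)
    slices = [((vals >> i) & 1) == 1 for i in range(bitwidth)]  # basic pTrees
    tree = {}
    for h in range(1, bitwidth + 1):            # depth 1 .. bitwidth
        counts = []
        for k in range(2 ** h):                 # node offset within the level
            mask = np.ones(len(vals), dtype=bool)
            for j in range(h):                  # AND top h slices or complements
                bit = (k >> (h - 1 - j)) & 1
                s = slices[bitwidth - 1 - j]
                mask &= s if bit else ~s
            counts.append(int(mask.sum()))
        tree[2 ** (bitwidth - h)] = counts      # key = interval width
    return tree

yofM = [11, 27, 23, 34, 53, 80, 118, 114, 125, 114, 110, 121, 109, 125, 83]
print(udr_distribution_tree(yofM, 7))  # width 64: [5, 10]; width 32: [3, 2, 2, 8]; ...
```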

[DT diagram residue: root 1-count 15 at depth h=0; the depth h=1 counts are 5 and 10; node(2,3) covers [96,128).]

The Spaeth table with its basic pTrees and the main-diagonal functional yod (pTreeIDs 0-b):

       A1 p13 p12 p11 p10   A2 p23 p22 p21 p20   yod p3 p2 p1 p0
  y1    1   0   0   0   1    1   0   0   0   1   1.3  0  0  0  1
  y2    3   0   0   1   1    1   0   0   0   1   3.0  0  0  1  1
  y3    2   0   0   1   0    2   0   0   1   0   2.7  0  0  1  0
  y4    3   0   0   1   1    3   0   0   1   1   4.1  0  1  0  0
  y5    5   0   1   0   1    2   0   0   1   0   5.2  0  1  0  1
  y6    9   1   0   0   1    3   0   0   1   1   9.0  1  0  0  1
  y7   15   1   1   1   1    1   0   0   0   1   12.  1  1  0  0
  y8   14   1   1   1   0    2   0   0   1   0   12.  1  1  0  0
  y9   15   1   1   1   1    3   0   0   1   1   13.  1  1  0  1
  ya   13   1   1   0   1    4   0   1   0   0   12.  1  1  0  0
  yb   10   1   0   1   0    9   1   0   0   1   13.  1  1  0  1
  yc   11   1   0   1   1   10   1   0   1   0   14.  1  1  1  0
  yd    9   1   0   0   1   11   1   0   1   1   13.  1  1  0  1
  ye   11   1   0   1   1   11   1   0   1   1   15.  1  1  1  1
  yf    7   0   1   1   1    8   1   0   0   0   10.  1  0  1  0
  1-ct      9   6  10  12        5   1   9   9        10 10  5  8

[Companion figures, condensed: the same yod, p3..p0 columns are tabulated (pTreeIDs c-n) for three more functionals: d_nnxx = (.8, .5), the main diagonal from (MinA1, MinA2) = (1, 1) toward (MaxA1, MaxA2) = (15, 11), D = (14, 10); d_nxxn, the opposite diagonal, D = (14, -10); d_fA = (.8, .4) with f = y1 and A = Avg = (8.5, 4.7), D = (7.5, 3.7); and d_AM = (.9, .3) with M = Med = (9, 3), D = (9, 3). Their bit-column 1-counts are 4 1 7 5 (d_nxxn), 10 10 5 8 (d_fA), and 10 9 5 11 (d_AM).]

Assume a WriteOnceMainMemory (WOMM) pDB and that each bit position is addressable.

[Figure: the 495-bit SpaethWOMMpDB data array, shown as 15 rows of 33 random-looking bits.]

The data portion of SpaethWOMMpDB is 495 bits, with 24 15-bit pTrees = 360 data bits + 135 red pad bits. Key: 24 permuted pTreeIDs and 24 PadLengths, 5b each, for 240b. Or just randomly generate a 48 x 5b array and send the seed? If there were 15 trillion rows, not just 15, the green ToC is the ~same size, the key is ~same size, and the data array is ~trillion-fold larger (480Tb = 60TB), or smaller since pads can stay small, so ~30TB? Next, we put in the DTs.

[Figure: the 15 Spaeth points plotted on a 16x16 grid, axes 0-f.]

1 3 1 1 2 3 1 3   DT(IntWidth=2 for A1)
3 6 1 0 2 3 0 0   DT(IntWidth=2 for A2)
1 2 2 0 1 1 6 2   DT(IntWidth=2 d_nnxx)
6 4 1 0 1 3 0 0   DT(IntWidth=2 d_nxxn)
1 2 2 0 2 0 5 3   DT(IntWidth=2 d_fA)
1 3 1 0 2 0 6 2   DT(IntWidth=1 d_AM)

The centers of these intervals are 1 3 5 7 9 b d f respectively. For A1, a cut would be made at A1=6 and A1=d; for A2, a cut at A2=6; for nnxx (which runs from (1,1) to (f,b)), at 7; for dxxn (which runs from (1,b) to (f,1)), at 7; for fA, cuts at 7 and 11; for MA, cuts at 7 and 11.

ToC for Spaeth MMpDB (A1, A2, d=DIAGnnxx, d=DIAGnxxn, d=furth_Avg, d=Avg_Med):
  pTrees_Array:  p13 p12 p11 p10 | p23 p22 p21 p20 | p3 p2 p1 p0 | p3 p2 p1 p0 | p3 p2 p1 p0 | p3 p2 p1 p0
  1Count_Array:    9   6  10  12 |   5   1   9   9 | 10 10  5  8 |  4  1  7  5 | 10 10  5  8 | 10  9  5 11
  Location_Pointer_Array: 200 333 114 216 281 33 249 135 365 233 56 330 15 457 397 98 415 160 473 265 349 176 267 82

Pad Lengths: e,4,7,b,2,1,6,9,2,a,1,2,1,3,g,2,0,3,2,1,1,2,s,2,

pTreeID Permutation c,5,a,n,f,2,7,h,l,k,3,9,6,m,j,1,b,0,9,4,e,a,d,i,

[F-value ruler: 1 2 3 4 5 6 7 8 9 a b c d e f]

Given the Spaeth table, Y (15 rows and 2 columns), start with a MM sequence of addressed bits (~2,000b), randomly populated. At this point the ToC is:

  A1basicPtrees A2basicPtrees d(nnxx)Ptrees d(nxxn)Ptrees dfApTrees dAMpTrees
  pTreeID: 0   1   2   3   4   5   6   7   8  9  a  b  c  d  e  f  g  h  i  j  k  l  m  n
  pTree:   p13 p12 p11 p10 p23 p22 p21 p20 p3 p2 p1 p0 p3 p2 p1 p0 p3 p2 p1 p0 p3 p2 p1 p0
  COUNT:   9   6   10  12  5   1   9   9   10 10 5  8  4  1  7  5  10 10 5  8  10 9  5  11
  ADDR:    200 333 114 216 281 33 249 135 65 233 56 330 15 457 397 98 415 160 473 265 349 176 267 82

Pad lengths: e,4,7,b,2,1,6,9,2,a,1,2,1,3,g,2,0,3,2,1,1,2,s,2,d,3,g,3,9,4,1,8,1,3,1,2,2,3,1,1,2,3,8,4,2,5,2,1,3,2,3,1,2,1,3,1,2,1,2,2,1,2,1,3,2,2,3,6,3,3,2,9,6 (34 pads)

Insert each pTree by over-writing into a gap at random and update the key (6*8 = 48 DT mask pTrees should be inserted). To enhance data security, a Random BackGround Process (RBGP) constantly overwrites pTree-size strings in the pad area (new actual pTrees are fed into the RBGP queue).
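A toy model of the insertion step (structure names hypothetical; a genuine write-once memory would constrain the overwrites this Python list allows):

```python
import random

def insert_ptree(womm, gaps, ptree_bits, toc, ptree_id):
    """Overwrite a randomly chosen pad gap with a pTree's bits and record
    (address, 1-count) in the key/ToC, per the scheme described above."""
    usable = [(start, length) for start, length in gaps
              if length >= len(ptree_bits)]
    start, length = random.choice(usable)        # random placement in the pads
    womm[start:start + len(ptree_bits)] = ptree_bits
    gaps.remove((start, length))                 # that gap is now consumed
    toc[ptree_id] = (start, sum(ptree_bits))     # update the key
```

The RBGP would presumably run the same overwrite continually with pTree-size decoy strings, so real insertions are indistinguishable from background writes.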

[Figure: the WOMM bit array after insertion, shown as ten rows of 50 bits with a decimal address ruler beneath.]

DTs: Width=2 intervals, centers 1,3,5,7,9,11,13,15. Ordinals: base 50, using 50 symbols: 0-9, a-z, A-N. pTreeIDs: o p q r s t u v w x y z A B C D E F G H I J K L M N O P Q R S T U V W X Y Z @ # $ % ^ & * ( ) +

pTreeID permutation (the key): c,5,a,n,f,2,7,h,l,k,3,9,6,m,j,1,b,0,9,4,e,a,d,i,v,H,U,&,+,N,C,o,E,V,$,R,x,r,I,W,M,w,F,X,%,T,D,p,L,Z,^,B,@,#,),(,*,Y,O,S,P,Q,K,G,q,u,t,A,J,z,s,y

The six DTs to insert: 1 3 1 1 2 3 1 3 | 3 6 1 0 2 3 0 0 | 1 2 2 0 1 1 6 2 | 6 4 1 0 1 3 0 0 | 1 2 2 0 2 0 5 3 | 1 3 1 0 2 0 6 2

[Figure residue: a base-50 address ruler in the 50-symbol notation.]

[Figure: the WOMM bit array after the DT insertions, shown as rows of 50 bits; the final two rows are repeated several times in the original extraction.]

Where are we now?

GAP FAUST (AKA Oblique FAUST) is expanded to use the UDR (Univariate Distribution Revealer) to place cut points at all Precipitous Count Changes (PCCs). A PCC almost always reveals a cluster boundary. More precisely, almost always a Precipitous Count Decrease (PCD) occurs iff (if and only if) we are exiting a cluster somewhere on the cut hyperplane, and a Precipitous Count Increase (PCI) occurs iff we are entering a cluster somewhere on the cut hyperplane.

This cluster-boundary existence revelation process can be refined, but this version makes a cut at each PCC of the functional. We make it a cluster-boundary identification process by looking for modes over a small interval on the high side of the cut.

A gap is a PCD (decrease to 0) followed by a PCI (increase from 0), so PCC FAUST expands (not replaces) GAP FAUST.
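A minimal sketch of cut placement at PCCs on one projected column (bucket width and threshold are illustrative; in the pTree setting the bucket counts would come from UDR 1-counts rather than a histogram):

```python
import numpy as np

def pcc_cuts(projections, width=8, threshold=3):
    """Cut wherever consecutive bucket counts change precipitously.
    A PCD to 0 followed by a PCI from 0 is a gap, so gap-based cuts
    fall out as a special case of PCC cuts."""
    edges = np.arange(projections.min(), projections.max() + 2 * width, width)
    counts, _ = np.histogram(projections, bins=edges)
    cuts = [edges[i] for i in range(1, len(counts))
            if abs(int(counts[i]) - int(counts[i - 1])) >= threshold]
    return counts, cuts

proj = np.array([11, 27, 23, 34, 53, 80, 118, 114, 125, 114, 110, 121, 109, 125, 83])
counts, cuts = pcc_cuts(proj)   # one cut at 107, where the bucket count jumps 0 -> 4
```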

This method is Divisive Hierarchical Clustering which, if continued to its end, builds a full dendrogram of sub-clusterings.

It shouldn't require a fusion step, but there is a need for work on how best to use it - which subclustering is best?

If the problem at hand is outlier detection, then it seems that any singleton subcluster separated by a sufficient gap is an outlier.

Note that all points between a PCD and an adjacent PCI which are separated by sufficient space can be individually analyzed to determine their distance from the clusters. Those points may thus be determined to be outliers, and the space a gap.

I believe PCC FAUST will scale up, because entering and leaving a cluster "smoothly" (meaning without a noticeable PCC) is no more likely for large datasets than for small (a measure-0 set of situations).

If we find that PCC FAUST does not adequately scale, we can use Barrel Analysis FAUST to limit the reach of the projections. Again, though, we would generalize gap analysis to PCC analysis, based on the [unproved] theorem that radial projections almost never vary smoothly at a cluster boundary either.

The other use of BARREL FAUST is in isolating full clusters individually. The dendrogram produced by PCC FAUST may not give us the information we seek (e.g., if we are looking for the cluster containing a particular point, y, with the same density as the local density at y, which node in the dendrogram above y gives us that cluster?).

With BARREL FAUST, we may be better able to work out from y to determine the local boundary for the y-cluster with the properties we need.

There is a line of research suggested here: if, as we build the PCC FAUST dendrogram, we can measure the Density Uniformity Level (DUL) of the node clusters, then we can end a branch as soon as the uniformity is high enough (> threshold). We would also record the DUL of each node in the dendrogram. (The DUL of a cluster might be defined as the reciprocal of the variance of the point densities.)

Understanding the Equity Summary Score Methodology

The Equity Summary Score provides a consolidated view of the ratings from a number of independent research providers on Fidelity.com. Historically, the maximum number of providers has been between 10 and 12. However, some stocks are not rated by all research providers. Since the model uses a number of ratings to arrive at an Equity Summary Score, only stocks that have four or more firms rating them have an Equity Summary Score. It uses the providers' relative historical recommendation performance along with other factors to give you an aggregate, historical accuracy-weighted indication of the independent research firms' stock sentiment.

As discussed in detail below, this single stock score and associated sentiment is provided by StarMine, a division of Thomson Reuters focused primarily on building quantitative factor models for institutional investors. It is calculated by normalizing third-party research providers' ratings distributions (making them more comparable) and weighting each provider's rating in the final score based on historical accuracy. Equity Summary Scores for the 1,500 largest stocks by market capitalization are force-ranked to help ensure a consistent ratings distribution. This means that there will be a diversity of scores provided by the model, thereby assisting investors in evaluating the largest stocks (in terms of capitalization), which typically make up the majority of Fidelity's investors' portfolios. Finally, smaller cap stocks are then slotted into this distribution without a force ranking, and may not exhibit the same balanced distribution. StarMine updates Equity Summary Scores daily based on the ratings provided to it by the independent research providers after the close of each trading day.

How are Equity Summary Scores calculated?

The StarMine model takes the multiple standardized ratings of the research providers and creates a single Equity Summary Score/Sentiment using the following steps:

1. Normalize – Look at the research providers' buy and sell ratings distributions to understand which ratings are scarce and therefore important.

The distribution of ratings from each of the independent research firms is normalized to make them more comparable with each other. For example, some research providers may issue a large number of buy recommendations and few sell recommendations, or vice versa. StarMine adjusts for this by overweighting "scarce" ratings and underweighting "plentiful" ratings. By normalizing the distribution of ratings, the model can recognize the "scarcity value" of ratings that are infrequently given, which adds additional information to the model.

2. Weight – Look at the 24-month relative firm/sector ratings accuracy and use that information to determine which firms' ratings have the most weight in the aggregated Equity Summary Score. For over five years on Fidelity.com, StarMine has run its sophisticated scoring system to facilitate a fair comparison of research provider recommendation performance across widely disparate industries and market conditions. The StarMine Relative Accuracy Score for each research provider compares the past performance of the provider's individual stock recommendations with that of its peers in each sector to calculate a statistical aggregation ranging from 1 to 100. It is calculated over a 24-month period based on the performance of a research firm within a given sector against its peer set of other firms in the market rating stocks in this sector. The calculation is analogous to a "batting average" score, that is, how often stocks rated "buy" outperform the market and stocks rated "sell" underperform the market as a whole. To get a score higher than 50, the industry-relative return of a firm's recommendations within a sector must, when taken together, be greater than those of the median provider. The StarMine Relative Accuracy Score is used in the Equity Summary Score model to help weight the individual provider stock recommendations in the aggregated Equity Summary Score.


3. Calculate – The normalized analysts' recommendations and the accuracy weightings are combined to create a single score. For the largest 1,500 stocks by market capitalization, these scores are then force-ranked against all the other scores to create a standardized Equity Summary Score on a scale of 0.1 to 10.0 for the 1,500 stocks. This means that there will be a uniform distribution of scores provided by the model, thereby assisting investors in evaluating the largest stocks (in terms of capitalization), which typically make up the majority of individual investors' portfolios. Finally, smaller cap stocks are then slotted into this distribution without a force ranking, and may not exhibit the same balanced distribution.
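An illustrative toy version of the three steps (this is not StarMine's actual model; the scarcity weighting, accuracy weighting, and rank scaling below are stand-ins to make the pipeline concrete):

```python
import numpy as np

def equity_summary_scores(ratings, accuracy):
    """Toy 3-step aggregation: (1) normalize each provider's ratings by
    scarcity, (2) average them weighted by historical accuracy, and
    (3) force-rank the result onto a 0.1-10.0 scale.
    ratings: providers x stocks array with values in {-1, 0, +1}
    accuracy: one relative-accuracy weight (1-100) per provider"""
    R = np.asarray(ratings, dtype=float)
    norm = np.zeros_like(R)
    for p, row in enumerate(R):
        for v in (-1.0, 0.0, 1.0):
            freq = max((row == v).mean(), 1e-9)
            norm[p, row == v] = v / freq       # scarce ratings weigh more
    w = np.asarray(accuracy, dtype=float)
    combined = (norm * w[:, None]).sum(axis=0) / w.sum()
    ranks = combined.argsort().argsort()       # force-rank the stocks
    return 0.1 + 9.9 * ranks / (len(ranks) - 1)

print(equity_summary_scores([[1, 1, 1, -1], [1, -1, 0, 0]], [80, 40]))
```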

The Equity Summary Score and associated sentiment ratings by StarMine are:
0.1 to 1.0 - very bearish
1.1 to 3.0 - bearish
3.1 to 7.0 - neutral
7.1 to 9.0 - bullish
9.1 to 10.0 - very bullish
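The banding maps directly to a lookup:

```python
def sentiment(score: float) -> str:
    """Label an Equity Summary Score (0.1-10.0) per the bands above."""
    if score <= 1.0:
        return "very bearish"
    if score <= 3.0:
        return "bearish"
    if score <= 7.0:
        return "neutral"
    if score <= 9.0:
        return "bullish"
    return "very bullish"

assert sentiment(8.9) == "bullish" and sentiment(9.1) == "very bullish"
```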

Other Important Model Factors: An Equity Summary Score is only provided for stocks with ratings from four or more independent research providers. New research providers are ramped in slowly by StarMine to avoid rapid fluctuations in Equity Summary Scores. Independent research providers that are removed from Fidelity.com will similarly be ramped out slowly to avoid rapid fluctuations.

Notes on Using the Equity Summary Score: The Equity Summary Score and sentiment ratings are ratings of relative, not absolute, forecasted performance. The StarMine model anticipates that the highest rated stocks, those labeled "Very Bullish" as a group, may outperform lower rated groups of stocks. In a rising market, most stocks may experience price increases, and in a declining market, most stocks may experience price declines.

Proper diversification within a portfolio is critical to the effective use of the Equity Summary Score. Individual company performance is subject to a broad range of factors that cannot be adequately captured in any rating system.

Larger differences in Equity Summary Scores may lead to differences in future performance. The sentiment rating labels should only be used for quick categorization. An 8.9 Bullish is closer to a 9.1 Very Bullish than a 7.1 Bullish.

For a customer holding a stock with a lower Equity Summary Score, there are many important considerations (for example, taxes) that may be much more important than the Score.

The Equity Summary Score by StarMine does not predict future performance of underlying stocks. The Equity Summary Score model has only been in production since August 2009 and therefore no assumptions should be made about how the model will perform in differing market conditions.

How has the Equity Summary Score performed?

Transparency is a core value at Fidelity, and that is why StarMine provides Fidelity with a view of the historical aggregate performance of the

Equity Summary Score across all covered stocks each month. You can use this to obtain insight into the performance and composition of the Equity Summary Score. In addition, the individual stock price performance during each period of the Equity Summary

Score sentiment can be viewed on the symbol-specific Analyst Opinions History and Performance pages.

1. Equity Summary Scorecard Summary: A Total Return by Sentiment chart shows how a theoretical portfolio of stocks in each of the five

sentiments performed within the selected time period. For example, the bright green bar represents the performance of all the Very Bullish stocks. Provided for comparison is the performance of First Call Consensus Recommendation of Strong Buy, the average of all stocks with an Equity Summary Score, and the S&P 500 Total Return Index.

2. Performance by Sector and Market Cap: Fidelity customers have access to more in-depth analysis of the Equity Summary Score universe and performance. The Total Return by Sector chart provides the historical performance of a theoretical portfolio of Very Bullish stocks in each sector over the time period selected. For comparison, the average performance of all stocks with an Equity Summary Score during the time period by sector is also provided. The Total Return by Market Cap shows the historical performance by market capitalization for stocks with an Equity Summary Score of Very Bullish as compared to typical market benchmarks, as well as the average for the largest 500 stocks, the next smaller 400 stocks, and the next 600 smaller stocks by market capitalization. The last table is the Equity Summary Score universe distribution for the reporting month by market capitalization and score.

Important Information on Monthly Performance Calculations by StarMine

The set of covered stocks and ratings are established as of the second to last trading day of a given month. For a stock to be included in the scorecard calculations, it must have an Equity Summary Score as of the second to the last trading day of the month. The positions are assumed to be entered into on the last trading day of the month, and, if necessary, exited on the last trading day of the next month.

The Scorecard calculations use the closing price as of the last trading day of the month. The Scorecard calculations assume StarMine exits old positions and enters new ones at the same time at closing prices on the last trading day of a given month. The calculations assume 100% investment at all times.

The 1‐Year total return by Market Cap table breakpoints for the largest 500 stocks (large cap), the next 400 (mid cap), and the next 600 (small cap), are also established as of the end of trading on the second to the last trading day of a given month.

The calculation of performance assumes an equal dollar-weighted portfolio of stocks, i.e., the theoretical investment allocated to each stock is the same. Performance in a given month for a given stock is calculated as the ending price less the starting price (starting price meaning the closing price as of the last day of trading of the prior month), divided by the starting price. Prices incorporate any necessary adjustments for dividends and corporate actions (e.g., splits or spinoffs).

The performance of a given tier of rated stocks is calculated by adding up the performance of all stocks within that given tier, then dividing by the total number of stocks in a given tier.
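A small sketch of the return and tier calculations just described (prices assumed already adjusted for dividends and corporate actions):

```python
def monthly_return(start_price: float, end_price: float) -> float:
    """Ending price less starting price, divided by the starting price."""
    return (end_price - start_price) / start_price

def tier_performance(start_prices, end_prices):
    """Equal dollar-weighted tier return: the simple average of the
    member stocks' monthly returns."""
    returns = [monthly_return(s, e) for s, e in zip(start_prices, end_prices)]
    return sum(returns) / len(returns)
```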

The process for the next month begins again by looking at Equity Summary Scores as of the second‐to‐last trading day of the new month, placing stocks into their given tiers, and starting the process all over again.

It is important to note that the “theoretical” portfolio rebalancing process that StarMine performs between the end of one month and the beginning of the next month is, for the purposes of the scorecard, a cost‐free process. This means that no commissions or other transaction costs (e.g. bid/ask spreads) are included in the calculations.

If a customer attempted to track portfolios of stocks similar to those included in the scorecard, their returns would likely differ due to transaction costs as well as different purchase and sale prices received when buying or selling stocks.

About StarMine: StarMine is a division of Thomson Reuters focused primarily on building quantitative factor models for institutional

investors. StarMine's equity analytics and research management tools help investment firms around the globe generate alpha and process equity information more efficiently. They are one of the largest and most trusted sources of objective equity research performance scores. Their performance scoring helps investors anticipate trends in analyst sentiment, predict surprises, evaluate financial statements for measures of earnings quality, and more.

Using the Equity Summary Score: There are many ways to use the Equity Summary Score. You can use it as a screening criterion, to help identify stocks you may want to include or exclude from further analysis, in conjunction with other criteria. You can also use it to monitor the consolidated opinion of the independent research providers that are following the stocks currently in your portfolio.

The Equity Summary Score from StarMine is not:
- A Fidelity rating. As with the other content provided in the stock research section of Fidelity.com, the Equity Summary Score comes from an independent third-party, StarMine.
- Simply an average analyst rating. The Equity Summary Score is the output of a model whose inputs are the ratings of the independent research providers (IRPs).
- A buy or sell rating. It is a calculated expression of the overall "sentiment" of the IRPs who have provided a rating on a stock.
- Directly comparable to a consensus rating. A consensus rating is generally a simple "average" rating, while the Equity Summary Score is a model-calculated value.

First Call Consensus Recommendation is provided where available along with the Equity Summary Score for a stock. First Call Consensus Recommendation is provided by Thomson Reuters, an independent third-party, using information gathered from contributors. The number of contributors for each security where there is a consensus recommendation is provided. Each contributor determines how their individual recommendation scale maps to the standardized Thomson Reuters First Call scale of 1-5.

Who are the Independent Research Providers and how does StarMine receive their ratings?

Fidelity's brokerage customers enjoy one of the broadest sets of independent research providers (IRPs) available for evaluating stocks. As noted above, the Equity Summary Score consolidates those providers' ratings, and only stocks rated by four or more firms receive a score. The large number of providers adds unique value across several dimensions:

- The number of IRPs yields an extensive coverage set, with over 6,000 stocks typically having at least one independent provider rating.
- With the large number of providers, Fidelity is able to provide research from firms that take very different approaches to valuation. The current research providers cover both technical and fundamental analysis, and have growth, value and momentum methodologies.
- Fidelity offers a tool that can help customers find the research providers that best meet their criteria: http://research2.fidelity.com/fidelity/research/reports/release2/ExploreResearchFirms.asp

Transparency is a core value as well, and Fidelity is always working to ensure that customers understand the research that they are using. Fidelity.com provides an overview from each research provider of its methodology and access to their stock ratings and reports. As an additional dimension of transparency, Fidelity has, for many years, made available performance insight and metrics on the IRPs.


For example, the customer has access to the quarterly Integrity Research Scorecard from a third‐party consulting firm that specializes in analyzing and understanding the research industry. Integrity Research performs due diligence on research firms and measures research firms’ recommendation performance. Its scorecard and analysis is designed to help Fidelity customers better understand the performance of the independent research providers made available through Fidelity.com.

The Investars Scorecard is a sector‐based scorecard, updated daily, which represents the performance of a theoretical portfolio of each IRP’s highest‐rated stocks. Investars is a firm that independently manages ratings databases and analyzes the performance of research providers. It has been collecting stock ratings from Fidelity's independent research providers for many years and has created a standardized ratings language (e.g., strong buy, sector outperform, neutral) across all research providers on Fidelity.com to facilitate customer understanding and comparison.

Investars collects and standardizes recommendations from the IRPs that provide ratings and stock research reports on Fidelity.com and sends them at the end of each trading day to StarMine, which calculates the Equity Summary Score.

Important Notes about IRP ratings and the Equity Summary Score Calculation:

Investars receives end‐of‐day IRP ratings Monday through Thursday for availability by 8 am ET the next day. For IRPs that provide weekly updates after the Friday market close, the rating change may not be available until Tuesday morning due to the length of weekly processing. Also, if an IRP misses the deadline for daily processing, its rating change will be included in the Equity Summary Score and available on Fidelity.com the next trading day after receipt. The date of each rating is provided with the rating, as well as in the research report.

Finally, we suggest caution with any performance measurement analysis, including the Equity Summary Score and related scorecards. Performance of buy/sell recommendations is only one aspect of the research offered on Fidelity.com. Although it is useful to understand a research firm's overall track record, a research firm's performance on any given stock can diverge significantly from the overall performance. There are additional factors beyond performance that any investor should consider in evaluating a research firm, such as the insights provided and the ease with which the research can be used. Performance of recommendations, while important, should not be the only factor an investor considers in evaluating research firms.
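Pulling together the pieces described so far (standardized ratings collected by Investars, accuracy weighting, and the four-or-more-firm minimum), here is a minimal Python sketch of an accuracy-weighted consolidation. It is not StarMine's actual model, whose weighting is proprietary; it only shows the shape of the calculation, and its output stays on the 1–5 opinion scale rather than the published score scale.

    # Sketch of an accuracy-weighted consolidation of standardized IRP
    # opinions. This is NOT StarMine's proprietary model; it only shows the
    # weighting idea and the four-firm coverage minimum described above.

    def weighted_sentiment(opinions):
        """opinions: list of (standardized_opinion, relative_accuracy) pairs,
        where standardized_opinion is on the 1-5 scale (1 = strong buy) and
        relative_accuracy is the firm's accuracy weight (hypothetical units).
        Returns None when fewer than four firms rate the stock."""
        if len(opinions) < 4:
            return None  # too few rating firms for an Equity Summary Score
        total_weight = sum(accuracy for _, accuracy in opinions)
        return sum(op * accuracy for op, accuracy in opinions) / total_weight

    # More-accurate firms pull the consolidated sentiment toward their view.
    print(weighted_sentiment([(1, 80), (2, 65), (3, 40), (2, 72)]))  # ~1.84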

Where can you find the Equity Summary Score on Fidelity.com?

The Equity Summary Score can be found in several places within stock research on Fidelity.com.

1. Symbol‐specific Snapshot Page: Equity Summary Score can be found in the Analyst Opinions bricklet. In addition to the Equity Summary Score, the distribution of underlying analyst opinions that feed into the Equity Summary Score is displayed. By rolling over the icons that represent the different analyst opinions, you can see the details associated with the underlying recommendations.


2. Symbol‐specific Analyst Opinions Page.

1. Equity Summary Score Firms: This is a list of the independent research firms included in the Equity Summary Score. The standardized opinion and the StarMine Relative Accuracy Score are provided for each firm. Click on the “?” icon for definitions.

2. Equity Summary Score: The Equity Summary Score is the consolidated, accuracy‐weighted indication of the independent research firms’ sentiment for this stock. Click on “Methodology” to learn more about how StarMine calculates the score.

3. Equity Summary Score Performance and History: View the 12‐month history and the associated price performance.

4. All Opinions: This is a list of all available opinions for this stock on Fidelity.com. Columns are sortable. On the list, you will find:

• The name of the firm, designated with an “(i)” when the firm is independent. You can click the firm name to view its opinion history and performance.

• The firm’s standardized opinion. The dot in the colored bar indicates the firm is included in the Equity Summary Score. Investars, a third‐party research firm, collects and standardizes opinions using a five‐point scale to make it easier for you to compare firm opinions.

• The 1 Year History line shows whether the firm’s standardized opinion has changed over the last year. Click to view the full chart and access details.

• The StarMine Relative Accuracy Score, a measure of the relative historical accuracy of the firm’s opinions in the stock’s sector among its peers over the last 24 months (see the sketch after this list).

• The date and the firm’s current non‐standardized opinion. You can change the view to see the last time the opinion changed and whether it was an upgrade or downgrade.

• A link to the latest research report (in PDF format) is available for firms whose reports Fidelity may distribute.

5. Intraday Opinions: Opinions provided on this page are from the previous trading day. Opinion changes issued during the current trading day will be listed above the All Opinions table.

6. Research Firm Scorecards and Explore Research Firms: These scorecards evaluate the performance of independent research firm ratings over time, individually and in aggregate. They help you to understand and compare the historical accuracy of recommendations.

Explore Research Firms helps you understand the varying objectives, styles, and approaches of the research that Fidelity offers.
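As a rough illustration of how a relative accuracy measure can work, the sketch below percentile-ranks firms by their raw 24-month accuracy within a single sector. The ranking formula and the numbers are illustrative assumptions, not StarMine's actual computation.

    # Illustrative sketch of a relative accuracy ranking: each firm's raw
    # 24-month accuracy in one sector is converted to a percentile rank
    # among peer firms covering that sector. Not StarMine's actual method.

    def relative_accuracy_ranks(sector_accuracy):
        """sector_accuracy: dict mapping firm name -> raw accuracy (0..1)
        over a trailing 24-month window in a single sector."""
        firms = sorted(sector_accuracy, key=sector_accuracy.get)
        n = len(firms)
        return {firm: round(100 * (rank + 1) / n) for rank, firm in enumerate(firms)}

    print(relative_accuracy_ranks({"Firm A": 0.61, "Firm B": 0.48, "Firm C": 0.72}))
    # {'Firm B': 33, 'Firm A': 67, 'Firm C': 100}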


3. Symbol‐specific Opinion History and Performance Page: The Opinion History and Performance pages provide detailed information on the score and sentiment history for the last 12 months, where applicable, as well as the price performance of the stock during the periods when the sentiment and Equity Summary Score were at different levels. Clicking the box will overlay a price chart for the stock.

4. Symbol‐specific Compare Page: The default Key Statistics view includes, where applicable, an Equity Summary Score for the primary symbol as well as for competitors when “Show Competitors” is clicked. To see the individual research providers and the ratings included in the Equity Summary Score, you can roll over the number of analysts or change the view to “Analyst Opinions.” Customers may also create their own compare view that includes the Equity Summary Score.


5. Symbol‐specific Company Research Highlights Report

The Company Research Highlights Report is a printable aggregation of various pieces of third‐party content. It includes fundamental data, dividend information, a brief company overview and key facts, as well as analyst opinions. The analyst opinions section includes the information found on the Analyst Opinions page in the stock’s Snapshot.

Stock research mentioned herein is supplied by companies that are not affiliated with Fidelity Investments. These companies’ recommendations do not constitute advice or guidance, nor are they a measure of the suitability of any particular security or trading strategy. Please determine which security, product, or service is right for you based on your investment objectives, risk tolerance, and financial situation. Be sure to review your decisions periodically to make sure they are still consistent with your goals.

Equity Summary Scores/Sentiments and Equity Summary Score Scorecards are provided for informational purposes only, and do not constitute advice or guidance, nor are they an endorsement or recommendation for any particular research provider.

The Equity Summary Score/Sentiment and Equity Summary Scorecard are provided by StarMine, an independent company not affiliated with Fidelity Investments.

The underlying performance data is provided by Investars.com, an independent company not affiliated with Fidelity Investments.

Fidelity Brokerage Services, Member NYSE, SIPC, 900 Salem Street, Smithfield, RI 02917 586367.3.0