# Science Project Blog

Upper School students publish their progress several times weekly.

## A Few Clarifications Regarding My Variables


In my previous post:

I used the Greek lowercase letter mu to refer to the mean size of the quantum dots (the 3 or 4 nm face-value). This is common practice in statistics.

I used x to refer to the specific quantum dot's size, which absorption coefficient A is also a function of. Hence A(x) and Ai, which is the ith absorption coefficient when x is the variable you are counting by.

I used the Greek lowercase letter sigma to represent the standard deviation of the quantum dot population. (Also common practice in statistics.)

S is an arbitrary variable I assigned myself to simplify computational steps for my programming. S will depend on the sigma accuracy (i.e. 3 or 5 or 6 sigma?).

Thank you,
-VD

## Work It, Work It


As I discussed in my post "How I Met the Gaussian Distribution," I am assuming that quantum dot sizes within a single layer of a quantum dot solar cell follow a Gaussian bell-curve distribution. The presence of variation in quantum dot sizes, as well as the fact that the sizes "center" about a face-value size like 3 or 4 nm diameter (the "advertised" size), are both confirmed by Santra & Kamat. While these factors do not altogether confirm a Gaussian nature of quantum dot sizes, they lend significant evidence that such a phenomenon is occurring.

Thus, to account for this variation in sizes of quantum dots in one layer, it can be assumed that the quantum dot sizes remain centered about the face-value size (i.e. 3 or 4 nm diameter) and follow a normal Gaussian distribution, which can be expressed as

f(x) = (1 / (σ√(2π))) e^(−(x − μ)² / (2σ²))

Thus, I developed a theoretical algorithm to calculate any absorption point Ai as a function of quantum dot size x, given the assumed Gaussian properties and a quantum dot size range of 0 to infinity, which is the widest range possible:

Ai = ( ∫₀^∞ A(x) f(x) dx ) / ( ∫₀^∞ f(x) dx )

where f(x) is the Gaussian distribution of quantum dot sizes.

In retrospect, this algorithm looks a little redundant / common-sensical, because it LOOKS like the big expressions in the numerator and denominator cancel each other only to leave A(x) for Ai, but the A(x) is actually changing for every value of 0 to infinity substituted for x. Thus, this is much more complicated than a simple cancellation (and using a cancellation here would be incorrect).

For the purposes of computation, the above algorithm is simply impossible because it involves an integral to infinity, which the computer can NEVER exactly compute! But I solved this problem with Riemann sums, which is essentially the practice of using a bunch of rectangles defined by the function's x increments and y values to approximate the area under the curve.
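To make the Riemann-sum idea concrete, here is a minimal Java sketch. The method names and the choice of a 10,000-rectangle window of ±S·sigma around the mean are mine, purely for illustration; for S = 5 the approximated area under the Gaussian is already very close to 1:

```java
// Sketch of the Riemann-sum idea: approximate the area under the Gaussian
// with n rectangles over a finite window [mu - S*sigma, mu + S*sigma].
// Names (gaussian, riemannArea) are illustrative choices, not the real code.
public class RiemannSketch {
    // Gaussian probability density with mean mu and standard deviation sigma
    static double gaussian(double x, double mu, double sigma) {
        double z = (x - mu) / sigma;
        return Math.exp(-0.5 * z * z) / (sigma * Math.sqrt(2 * Math.PI));
    }

    // Left-endpoint Riemann sum with n rectangles of width dx
    static double riemannArea(double mu, double sigma, double s, int n) {
        double a = mu - s * sigma, b = mu + s * sigma;
        double dx = (b - a) / n, area = 0;
        for (int i = 0; i < n; i++) {
            area += gaussian(a + i * dx, mu, sigma) * dx;
        }
        return area;
    }

    public static void main(String[] args) {
        // With S = 5 sigma and 10,000 rectangles, the area is very nearly 1
        System.out.println(riemannArea(3.0, 0.1, 5.0, 10000));
    }
}
```

This is also why the choice of S matters: it controls how much of the infinite tail gets cut off before the rectangles are summed.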

My next steps are programming my algorithms and determining the number of sigma (confidence level, significance level) needed to define S in the algorithm immediately above. Do you have any recommendations for number of sigma, or feedback on any other parts of these algorithms? My impression is that the scientific community considers 3 sigma significant, 5 sigma "proof", and 6 sigma CERN Higgs boson-level, but I'd be interested in hearing your take.

Thanks so much for your time and feedback,
-VD

## Forms Check


I noticed that the science fair forms are due today, and I just wanted to check in with you to make sure that I did, in fact, hand in my forms to you. I recall doing so, but I just want to be sure!

In other news, I am working on an actual mathematical representation of my Gaussian-ish algorithm described in my latest post. Lots of updates coming soon!

Thank you!
VD

P.S. Soooooo pumped for tomorrow's assembly! It'll be WAY not boring.

## Checking In


I have made progress on my Gaussian algorithm (putting it into theoretical mathematical form, and then into computationally-friendly form - namely, going from integrals to a close approximation using a large number of rectangle areas that sum to the area under the curve). But before I go too far, I would appreciate any feedback you might have on the description in my last post, "How I Met the Gaussian Distribution." I guess I'm still concerned about whether simply multiplying the Gaussian by a factor within an integral, and then dividing it out again to reconcile the units, is good form.

I actually tried some computation with an "improvised" (half-developed) algorithm, and it did not produce any glaring errors, but these things can be insidious. (Poking holes, poking holes)

Thanks for your time and help!

Happy holidays (Thanksgivukkah, Eat-Lots-of-Food Day, or Stare-at-Screens Day)! :D
VD

## How I Met the Gaussian Distribution


Oh dear, looks like it's yet another spinoff of "How I Met Your Mother."

No, actually, this one is far more... calculating, you might say. (I'm preparing jokes for the Science Assembly!)

To preface this discussion, I will first describe the Gaussian distribution in simpler terms. Basically, it's the bell curve! There is a median value that gets most of the stuff (highest number of items), and from there the graph slopes down on both sides of the central median, like this:

(Courtesy University of Maryland)

In my last post, I reasoned that quantum dot sizes (if confined to a certain size range, e.g. 3 nm diameter) would follow this sort of distribution: lots in the middle, at 3 nm diameter, but with variation (scatter) on the sides, for example 2.8, 2.9, 3.1, 3.2 nm.

Here's a fancy equation from statistics that plots the Gaussian distribution graph:

f(x) = (1 / (σ√(2π))) e^(−(x − μ)² / (2σ²))

I plan to use this equation in a new algorithm.

To visualize what this looks like in a quantum dot solar cell, I made this conceptual diagram of one layer of quantum dots, with a size range, that absorb similar wavelengths of incoming light.

Knowing the equation to describe the Gaussian distribution of quantum dot sizes, I can modulate the width of the Gaussian distribution by changing sigma (the standard deviation), which is a value in the equation.

Thus, the first input to my algorithm is sigma. A smaller sigma (standard deviation) means that the quantum dot diameters cluster more closely (more precisely) around 3 nm (or whatever size it is) while a higher sigma means that the quantum dot sizes are less precise and cluster more widely around the default size. This way, I could look at the effect of a wider or narrower quantum dot size range (i.e. less precise or more precise) on an overall efficiency, for which I am continuing to develop an algorithm to closely estimate.

Obviously the programming cannot accommodate an infinite number of points on a graph, which is why I will need to approximate. (One of my Java programs is reading points off an absorption coefficient vs. photon energy graph, so there are a finite number of points.)

The only real way I can think of approximating this with relative accuracy is to take an "average" of sorts. So, it would be a summation of points around a certain photon energy (the independent variable) - incorporating the Gaussian distribution frequency at that point (described in the function equation above) as a multiplier of the dependent variable (absorption) at each photon energy, and then dividing by a summation of only the Gaussian frequencies (which, if integrated from 0 to infinity, has an area under the curve of 1).

This is only a conceptual description - I will elaborate on the mathematical form of my new algorithm in a later post, but before I do that I need to understand whether adding up a bunch of absorption values after multiplying each by a specific point frequency (a y-value) on the Gaussian distribution graph, and then dividing the whole summation by ANOTHER summation of all the Gaussian frequencies, is truly valid. The units do work out, because it's absorption = absorption × (frequency/frequency), but is this appropriate? I look forward to your comments on this...
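As a small sanity check on that units argument, here is a hedged Java sketch of the weighted average (the energies and absorption values below are made-up placeholders, not real data). One reassuring property: if every absorption value is the same constant, the Gaussian frequencies cancel exactly and the result is that constant.

```java
// Sketch of the proposed weighted average: sum A(E) * g(E) over sampled
// photon energies, then divide by the sum of the Gaussian weights g(E).
public class WeightedAverageSketch {
    static double gaussian(double x, double mu, double sigma) {
        double z = (x - mu) / sigma;
        return Math.exp(-0.5 * z * z) / (sigma * Math.sqrt(2 * Math.PI));
    }

    // Gaussian-weighted average of absorption values sampled at energies[]
    static double weightedAbsorption(double[] energies, double[] absorption,
                                     double mu, double sigma) {
        double num = 0, den = 0;
        for (int i = 0; i < energies.length; i++) {
            double g = gaussian(energies[i], mu, sigma);
            num += absorption[i] * g;  // absorption * frequency
            den += g;                  // frequency alone
        }
        return num / den;  // units: pure absorption, since frequency cancels
    }
}
```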

VD

## Free Pumpkin Spice Latte


Yes - that title is meant purely to attract attention and has no bearing whatsoever on the content of this post.
(Did I succeed in getting you to read this?)

As I discussed with you recently, I am developing the framework for my theoretical model of quantum dot solar cells.

I made the diagram below to illustrate the high-level concept that will underlie my model. Incoming photons from the Sun are in a strong yellow, but as they are absorbed by each layer of quantum dots, the number of photons (and thus photon flux in my algorithms) still available for absorption at the next quantum dot layers will decrease. Thus, at the next levels, the yellow color of the arrows representing photons fades to show that decrease.

After much consideration, I decided that the layers should be organized from the smallest quantum dots to the largest (which corresponds, inversely, to the energy going from higher to lower). To explain this decision, I thought up an analogy.

Think of a water filter. There's water, and in the water there are particles of many different sizes. These particles are like photons to a quantum dot solar cell. We want to absorb (get rid of) as many photons as possible. Should we start with the small pores and progress to the large pores? (i.e. go from low energy to high energy photon absorption?) NO - this would make all the large particles stay stuck at the top layer and NOT get absorbed! Thus, we should start with the large pores and progress to the smaller. (Start from high energy and go lower, which equates to starting from smallest quantum dots and going larger.)

Also, later on in my model, I have decided to create a DISTRIBUTION of quantum dot SIZES in EACH LAYER - currently my algorithms only account for ONE quantum dot size and material in each layer. My distribution approach adds another dimension of complexity, but it is important because it makes my model more accurate, as I am essentially accounting for "experimental error." For instance, in actual quantum dot production, each quantum dot can be slightly larger or slightly smaller than the baseline size, so the whole set of quantum dot sizes is a distribution centered about the specific size value but with variation present. This is actually an example of applying knowledge I learned in statistics to my independent research.

I am thinking of using a normal (Gaussian) distribution, but with less variation. I also considered Student's t distribution, whose degrees-of-freedom parameter modulates how heavy the tails are relative to a normal distribution - but since heavier tails mean more spread, a plain Gaussian with a small sigma may be more appropriate for a distribution of quantum dot sizes, which is by nature relatively precise.

Thanks for reading! Happy early Thanksgiving,
VD

## Research Paper Second Draft


I've written all about it in Haiku! Lots of edits and (hopefully) improvements, and significant progress made on algorithms and methods.

Thank you so much,
-VD

Files:

## We Can't Stop (Thinking)


I have been wondering about the problem of multiple photons.

Mini-thought experiment: Consider a single quantum dot. For this example, let's assume it's lead sulfide and, say, 3 nanometers across.

If a bunch of photons decide to hit this quantum dot, one after another (like photons from the Sun), are these collisions independent of each other?

For the purpose of my simulations, I think YES. Even if a photon excites an electron in the quantum dot, it doesn't affect the absorption coefficient or any other absorption properties. Thus, my algorithm for "multiple photons and a single type of quantum dot" is simply an extension of my "single photon and single type of quantum dot" algorithm: it executes the former algorithm m times, where m is the number of photons.
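A minimal Java sketch of that extension, assuming the collisions are independent and that the absorption coefficient is already normalized to [0, 1] (the constant 0.3 stand-in is mine, purely for illustration):

```java
// Hedged sketch of "m photons, one quantum dot type": run the single-photon
// Monte Carlo decision m times independently and count absorptions.
public class MultiPhotonSketch {
    // stand-in for the real normalized absorption coefficient calculation
    static double absorptionCoeff(double photonEnergy) {
        return 0.3;  // placeholder constant, for illustration only
    }

    // single-photon decision: absorbed if the random draw falls below the coefficient
    static boolean isAbsorbed(double photonEnergy) {
        return Math.random() <= absorptionCoeff(photonEnergy);
    }

    // number of the m photons absorbed, assuming independent collisions
    static int countAbsorbed(int m, double photonEnergy) {
        int absorbed = 0;
        for (int i = 0; i < m; i++) {
            if (isAbsorbed(photonEnergy)) absorbed++;
        }
        return absorbed;
    }
}
```

With a coefficient of 0.3, roughly 30% of a large batch of photons should come back absorbed, which is one quick way to sanity-check the loop.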

However, I just wanted to write about the inevitable margin of error in a simulation of this sort. When photons arrive in such rapid succession, my gut feeling is that the first photons could prevent subsequent photons from exciting an electron up from the same level.

I have yet to read a paper that discusses this, so I'm not totally sure if it's really the case or not. (Might you have any opinions?) I guess a counter-argument could be that there are so many electrons present that any photon theoretically can find one to excite (given a precise energy) so the whole thing is awash and the absorption is the "same" for each photon as it would be if only one photon hit the quantum dot in the first place. So that's one way I could potentially justify this "extension" algorithm's theoretical soundness.

I'm just trying to poke holes in my own algorithms. Feedback greatly appreciated!

Thank you,
VD

## Comments and Peer Review on Research Paper 1st Draft


Thank you for your detailed feedback on the first draft of my research paper! Thanks also to Kristin for the helpful critique. :-)

1. I went through and changed all placements of periods to be after the citation, not before.
2. What, you didn't know?? Electorns are the new thing! Electorns! Oh wait, maybe it's just me. ;-) I'm pretty surprised that Word did not consider that an incorrect spelling... unless I accidentally added it to the dictionary. Oh boy. Thank you.
3. Ah, typos where you totally miss a connecting word. Photons are absorbed by the n-type layer first.
4. Yes, some formatting went askew along with the figures, which I wanted centered. I have hopefully addressed these formatting discrepancies.
5. I definitely meant lead sulfide when I wrote about the Stanford and Notre Dame research groups. Lead selenide is so last year (sorry, force of habit!)
6. Oops, I must have inserted a double-reference to the same figure! The Figure 9 in-text reference is corrected.
7. Hmm, the text looks normal on my screen, so it must be Haiku. :-(
8. I re-cropped the quantum dot solar cell Stanford lab photo to include a human thumb and hand holding it. Hopefully the scale is clearer now.

I will be uploading the second (and improved!) draft of my research paper before Friday. Aside from the above changes, I altered the organization scheme slightly (aesthetics) and plan to add critical algorithms that I continue to develop.

VD

## Getting Back on Track


I do think I'm off track for where I should be with my project, but I'm not overly concerned about it. I turned in the first round of college apps (the ones I was worried about) on Friday, and have been setting up a schedule for the actual experimentation this weekend.

My goal is to get the leaf packs in the stream by the end of the week. This will involve getting the bags, gathering leaves, scouting locations, making the packs, and placing them.

We have two class periods this week, on Monday and Tuesday. I can work after school on Tuesday, maybe Thursday, and Friday. I think the most important thing right now is finding locations so I know how many packs to make, so I will do that tomorrow after our chat.

Concerning the research paper draft:
To be honest, in my college-app mania I didn't do homework for three days last week, so I'm behind-- I need to get my ass in gear and get it done in the next few days. It would be really helpful to me to talk with you about the best way to go about this.

- what I need to do to be where I should be
- where to find bags
- research paper

## All About the Absorption Coefficient


Thank you for your support on #5 from my last post! Your encouragement helped me get through this hourlong absorption coefficient marathon...

My findings: yes, my suspicions were legitimate:

1. The basic absorption coefficient calculation found in most references and textbooks is NOT normalized. That would make my Monte Carlo random number comparison algorithm inaccurate, because you can't compare apples to pears (you can't compare a probability ranging from 0 to 1 to an arbitrary-unit absorption coefficient that ranges from 0 to 4, for example). "Arbitrary units" meaning unitless, like this one.
2. But 0 to 4 can't be mapped easily to 0 to 1 either; the problem is that different texts have different absorption coefficient ranges (I also found texts that used 0 to 10^7, 0 to 10^6, 0 to 100... there is no absolute consensus).
3. I asked myself, "Why is there no standard absorption coefficient scale or range?" The answer, I eventually reasoned, was that these texts are all using absorption coefficient tables to COMPARE materials, so absolute numbers don't matter - all that matters is whether brick absorbs more sound (I'm using light, but it's the same idea) than vinyl, like this link. In these cases, it doesn't matter what the absorption coefficient actually is, because it's all relative.
4. Commence crazy amounts of Google Scholar searches. My browser history is literally three pages long as a result of these searches. ALL absorption coefficient calculations I found were PROPORTIONAL to an expression, thereby again RELATIVE, not ABSOLUTE... frankly, it was driving me crazy. (And I just realized that those capital letters make this blog post kind of intimidating. Sorry. I'm trying to convey my academic paper-bashing.)
5. And the answer is always where you least expect to find it. In the SAME resource as the UNITLESS absorption coefficient graph from #1... I found that the actual experimental scale (cross-checked with other graphs from the same website, which is actually the famous Ioffe Institute for Physics Research in Russia) is from 0 to 10^6. Ta-da!!
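If it helps, one simple way to map such a coefficient onto something comparable to a Monte Carlo random number in [0, 1] is to divide by the top of the experimental scale. Here is a sketch, assuming the 0-to-10^6 scale from #5 (the constant and clamping are my own choices):

```java
// One way to make a raw absorption coefficient comparable to a probability:
// divide by the top of the assumed experimental scale (0 to 1e6, per the
// Ioffe data mentioned above), then clamp into [0, 1] for safety.
public class NormalizeSketch {
    static final double SCALE_MAX = 1e6;  // assumed experimental maximum

    // map a raw absorption coefficient into [0, 1] for Monte Carlo comparison
    static double normalize(double rawCoeff) {
        double v = rawCoeff / SCALE_MAX;
        return Math.min(1.0, Math.max(0.0, v));
    }
}
```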

So I conquered this obstacle. Many more to come, I'm sure, but this is a definitive step forward.

VD

## The Multi- Problem


I have defined the first microcosm for my work, which is that of a single photon and a single quantum dot. This was a pretty big victory for me; Microcosm 1 will need all five algorithms that I have developed so far, as well as a Monte Carlo simulation programmed for it. My Photon Flux algorithm and 4th Solar Spectrum algorithm are probably going to be the most important for Microcosm 1, and now I can continue to develop the code I started in the last blog post for Monte Carlo.

However, I couldn't sleep last night because I was thinking about the multi- problem.
THE MULTI- PROBLEM: How can I model what happens with MULTIPLE quantum dots and MULTIPLE photons?

Here, I have tried to document what I was thinking. Hopefully, this will help me clarify my thoughts and lead me to the algorithms that will define computation for the multijunction (multi-quantum dot) and multi-photon steps.

1. My first hunch was that this would be a multiplier extrapolation, something that involved multiplying computation done in Microcosm 1 by the number of photons, the number of quantum dots, or the two factors mapped into an (x, y) matrix form.
2. But something wasn't quite right. After all, what if the photon gets absorbed by the uppermost quantum dot? THEN IT WILL NOT BE AVAILABLE FOR THE NEXT QUANTUM DOTS TO ABSORB. And wait, how do I keep track of WHICH photon gets absorbed by which quantum dot, and when, and whether it's available for the next quantum dots, and which---!
3. Ooookay. I have a tendency to make things more complicated than they need to be (even if they are inherently complicated!), so I took a deep breath. First, I was going to keep it simpler and think about ONE photon with a single TYPE of quantum dot.
4. I knew this had to be a loop! Basically, the decision was this: loop through N quantum dots, each of which constitutes a "layer." Using the Monte Carlo absorption simulation I half-coded last time... FOR EACH QUANTUM DOT IN THE LIST OF N QUANTUM DOTS, EXECUTE THIS DECISION... if absorbed = true, terminate. (The photon has been absorbed by a quantum dot). Otherwise, execute THIS decision: if the Monte Carlo random number generation is less than the absorption coefficient of that particular type of quantum dot (i.e. lead sulfide with certain parameters), now set absorbed to TRUE; otherwise, absorbed is still FALSE. Loop again (repeat).
5. But I still feel like something else is wrong. I'm wondering about the absorption decision... is it right to compare the random number to an absorption coefficient itself? I need to do more research on the way the absorption coefficient is quantum mechanically calculated. MY UNDERSTANDING is that absorption coefficient is ALREADY normalized, at least by the equation I have in my research paper. However, I'll post an update soon...
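Here is one possible Java rendering of the layer loop from step 4. The names and structure are my own, and the coefficients are assumed to already be normalized probabilities in [0, 1]:

```java
// Sketch of the layer loop from step 4: march one photon through N layers,
// terminating as soon as it is absorbed. coeffs[i] is the (normalized)
// absorption coefficient of the quantum dot type in layer i.
public class LayerLoopSketch {
    // returns the index of the absorbing layer, or -1 if never absorbed
    static int absorbingLayer(double[] coeffs) {
        for (int i = 0; i < coeffs.length; i++) {
            if (Math.random() < coeffs[i]) {
                return i;  // absorbed = true; the photon is gone, so terminate
            }
        }
        return -1;  // the photon passed through all N layers unabsorbed
    }
}
```

Returning the layer index (rather than just true/false) is one way to keep track of WHICH layer absorbed the photon, which matters once multiple photons are in play.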

VD

## Microcosms!


Key terms:

Monte Carlo - a specific type of simulation. I am using a random number generator to calculate a probability, and thereby determine an outcome (true/false). In computer science, a true/false variable is called a boolean, after George Boole.

Pseudocode - "fake code" - code that is not written in the syntax of an actual programming language, but instead written in the form of commands and if/then statements in basic English. For example...

define a function isThree (input: variable):
if (variable) equals 3, return true
else, return false; end;

That was written in a weird mix of Python, Java, and C, but it's pseudocode because it outlines a general set of commands that the programmer can translate to any programming language he/she wants.
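For comparison, the same pseudocode translated into actual Java (the language used later in this post) might look like:

```java
// The isThree pseudocode above, written out in real Java
public class IsThreeExample {
    static boolean isThree(int variable) {
        // the equality comparison already produces the true/false result
        return variable == 3;
    }
}
```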

Thus, I am here to write up some Monte Carlo pseudocode that will help my model of photon-electron interactions in the quantum dots in the solar cells.

MONTE CARLO PSEUDOCODE FOR A SINGLE PHOTON & A SINGLE QUANTUM DOT (1st MICROCOSM for the bigger QUANTUM DOT SOLAR CELL!)
define a function isAbsorbed (input: photon energy):
assign a boolean variable called isAbsorbed, which will describe whether or not a single photon is absorbed by a single quantum dot.
generate a random number between 0 and 1, assign it to the variable r.
if r is less than or equal to the absorption coefficient, which is calculated from the photon energy input,
set the boolean isAbsorbed to TRUE
else
set the boolean isAbsorbed to FALSE
return isAbsorbed

MONTE CARLO JAVA CODE FOR A SINGLE PHOTON & A SINGLE QUANTUM DOT
// absorption coefficient function is calculation defined elsewhere
boolean isAbsorbed(float photonEnergy)
{
boolean isAbsorbed;
double random = Math.random(); // Math.random() returns a double, not a float
if (random <= absorptionCoeff(photonEnergy))
{ isAbsorbed = true; }
else
{ isAbsorbed = false; }
return isAbsorbed;
}

ECONOMIC MONTE CARLO JAVA CODE FOR A SINGLE PHOTON & A SINGLE QUANTUM DOT (shorter!)
// absorption coefficient function is calculation defined elsewhere
boolean isAbsorbed(float photonEnergy)
{
double random = Math.random(); // Math.random() returns a double, not a float
if (random <= absorptionCoeff(photonEnergy))
{ return true; }
else
{ return false; }
}
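One further squeeze, purely a style observation: the comparison itself already produces a boolean, so the whole if/else can collapse to a single return. (absorptionCoeff is stubbed with a placeholder constant here only so the snippet compiles on its own.)

```java
// Same behavior as the "economic" version above, collapsed to one line
public class OneLineSketch {
    // stub; the real absorption coefficient calculation is defined elsewhere
    static double absorptionCoeff(double photonEnergy) {
        return 0.3;  // placeholder constant for illustration
    }

    static boolean isAbsorbed(double photonEnergy) {
        // the <= comparison is itself the boolean result
        return Math.random() <= absorptionCoeff(photonEnergy);
    }
}
```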

VD

## SRP update and next steps


This week at the lab, my mentor and I went over the protocol I wrote (which I can show you or email to you) for the serum dilution assay, gathered all of the data from the IL3 withdrawal assays and serum dilution assay, and made graphs in Excel.

For the serum dilution: The bar graph measures average absorbance (y-axis) depending on the concentration of serum used in each well (10% down to 1%). Although the mutant showed more variance, there didn't seem to be much overlap between the wild type cell viability and mutant cell viability. We determined that this round of findings was significant. (relevant to my project, NTRK2 mutant)
What we did yesterday was compare the graphs of readings done after the chemical reagent MTS had incubated (at room temp) in the wells for 2 hours with a reading after 6 hours.
The increase in the bars was linear for both the wild type and mutant--not much changed. Therefore, a reading after 2 hours is fine.
Overall, the mutant cells outgrew the wild type cells.
At 6% serum, there was a clear difference between WT and mutant cell viability.

Graphs for the IL3 withdrawal assays were a little strange. The mutants that died off eventually had nice-looking lines, but the ones that grew spiked and decreased in some places. Cell counts would go from around 10^5 with high percent viability one day to around 10^6 with low percent viability the next--I inputted only the count data, not the percentages.
Note to self: carefully mark in my lab notebook whenever I split cells (a likely reason for the jumps in numbers), and very clearly mark the dates as well as the day # of the assay every time I count cells (not just Day 1, Day 3, but e.g. November 14, 2013, Day 8)

Next steps:
1. we have to redo the IL3 withdrawal assays anyway, but with the new information from the serum dilution assay:
redo the IL3 withdrawal assays with all the mutants and wild types, two times (one with 10% FBS and one with 6% serum)

2. finish counting colonies in soft agar

3. also, now that I have some evidence for the NTRK2 mutant I am working with--it's showing signs of transformation, more transforming compared to WT as shown in serum dilution--there is much more I can do to study the pathway of this mutation
-soon, I will try a western blot

À la prochaine!

## Summary of the Past Week


I sincerely apologize for the lack of blog posts this past week. I have been hard at work poring through the literature, conducting hours and hours of Internet and more literature research, pondering my algorithms (which, after a ton of calculus and algebra and hair-pulling, I have finally successfully nailed down!!! What a relief, and how sweet the sound), starting to code my algorithms and Monte Carlo simulations, and writing the first draft of my research paper (attached!) - so I made a conscious decision to prioritize the research over the blog. I'm very sorry, but rest assured - I will bombard the blogs page with sooo many new entries because I am super excited about where this is going and I love quantum physics and writing.

I'm so excited!

Here's an outline of my research paper. I posted this to Haiku, but I'm posting it on the blog too so I can keep a chronicle all in one place.

Outline
Background: I performed extensive online and literature research to compose the background section. Here, I discuss in detail all basic theoretical and design concepts ultimately involved with multijunction quantum dot solar panels.
• Solar Energy
• Solar Cells
• Quantum Dots
• Quantum Dot Solar Cells (QDSCs)
• Multijunction Solar Cells
• Multijunction QDSC Concept & Efforts
Methods: Here, I also used a ton of Internet and literature to find what I needed to sufficiently describe the theoretical framework of my research.
• Schrodinger Equation
• Band Theory
• Quantum Confinement
• Confinement in QDs
• Photon-Electron Interactions
• Absorption
• Monte Carlo Simulation
Then, I developed five novel algorithms to form a foundation for my research. I describe the reasoning process, mathematical derivation, and "proof" behind each algorithm.
1. Solar Intensity