
Thursday, May 25, 2023

Ramble: Science Shill & Semantic Bias

While doing my regular reading-up on quantum computing, I came across a tweet that referenced a recent Joe Rogan podcast with Michio Kaku as well as a blog post on Not Even Wrong. I have never read any of Michio Kaku's books, but I always thought he was really good at communicating popular physics; that was until I watched the JRE episode. Yikes! Why did Michio Kaku talk so confidently about technical details he seems so unfamiliar with? His talking points on quantum computing seem off-base. It's not that he is completely wrong about the prospects of quantum computers; it's just that he makes such grandiose propositions without a clear line of reasoning behind them. For example (I'm paraphrasing), he mentions using quantum computers as fact-checkers for generative LLMs like ChatGPT. Okay, what are you specifically thinking about? Why is a quantum computer ideal for this, or better than a classical computing approach? Is he referring to some kind of complexity problem related to watermarking AI outputs, where only a quantum algorithm could find the solution in polynomial time? Without these kinds of additional details, it seems like he is just making things up! These little statements by Kaku were all throughout the episode and made him seem like a hype man for science, not a knowledgeable professor or researcher. Prof. Woit over on Not Even Wrong basically lambasts Kaku for this behavior.

My bias with semantics in QM

The other thing Kaku kept saying during the JRE episode when referencing quantum mechanics was "parallel universes". For some reason this phrase has always bothered my ears; it's pretty silly, I know, and probably of little importance, or maybe I'm wrong here. I just don't like the word "parallel", mainly because I think it conjures up notions of communication or connections between "parallel" universes, which in my view is misleading. To explore why I think this way, take a simple example: the Bell quantum state of two qubits:

\begin{equation*}|\Psi\rangle = \frac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right)\end{equation*}

What about the two states $\vert 00\rangle$ and $\vert 11\rangle$ is "parallel"? Nothing, to me! For one, the inner product between the two basis states is zero because they are orthogonal: $\langle 00|11 \rangle = \langle 11|00\rangle = 0$, so nothing "parallel" about them there. Yes, you may say that each qubit is in a superposition state and so they have a "parallel" configuration, but my premise is that this is misleading, because in my view it's all about the configurations provided by Hilbert space. So you say, "Well then, what do you propose? Why does this even matter? It's just semantics!", to which I say: you're right, it probably doesn't! But I think popular science communicators may be providing mental imagery to the broader public that is fanciful. Someone should do a survey of non-STEM individuals and see what they say comes to mind when the words "parallel universes" are used. This goes back to my concerns at the beginning of this post.
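If you want to check the orthogonality claim concretely, here's a quick numpy sketch (the variable names are just my own labels): the two basis states have zero overlap even though both appear in the superposition.

```python
import numpy as np

# single-qubit computational basis states
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# two-qubit basis states via the tensor (Kronecker) product
ket00 = np.kron(ket0, ket0)
ket11 = np.kron(ket1, ket1)

# Bell state |Psi> = (|00> + |11>)/sqrt(2)
psi = (ket00 + ket11) / np.sqrt(2)

print(ket00 @ ket11)   # 0.0 -> orthogonal, nothing "parallel" about them
print(np.abs(psi)**2)  # measurement probabilities: 0.5 for |00>, 0.5 for |11>
```

The squared amplitudes are the "views" in my lenticular analogy: each measurement outcome reveals one basis state, but both live in the same state vector.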

Example of a lenticular image. All images exist in the same physical space, but depending on the angle you view it from, you see different information. We can think of each version of Sayin as a view of a basis state, but no information is exchanged, nor are they "parallel".

What I'm trying to say with my ramble is that we should use words that better articulate what our best theory actually describes. My personal suggestion is the word lenticular, which comes from lenticular imaging, a process used to create the perception of multiple images on the same print (see figure above). At any single viewing angle you see only one image, but viewing the print across a set of angles will show the superimposed images. This is what I think the quantum state is like. Nothing is parallel; it's only that certain "views" show specific information, where a view is a stand-in for one of the basis states in the quantum state/wavefunction. You could argue that no one is familiar with the word lenticular, and you would be right, but we can use more common descriptors like "tilt-card". One thing that could also cause trip-ups is that a lot of people confuse lenticular prints for holographic ones; they are different. In holography the light field is what is captured, whereas lenticular prints use interlacing and lenses.

I should probably provide a conclusive statement on what I want to replace "parallel universes" with. Let me mention that "many worlds" is much better, and I definitely like "branches of the quantum state". However, I would probably wager on something like the Tilt-Card Universe. There is nothing parallel about the name, it hopefully elicits thoughts of multiple "views", and it cements the concept that all of them exist at once.


Reuse and Attribution

Wednesday, May 17, 2023

Where is my graphene space elevator?

Today I somehow stumbled upon a LinkedIn post about a Kickstarter kitchen cooktop that is marketed around its use of graphene. At first I was intrigued, and then when I went to the page, I thought, "Really, this is where we are?" To give some context, few substances have captured technologists' attention quite like graphene has. Since its discovery in 2004, the material has been hailed as nothing short of revolutionary. If you are like me and have read countless research and popular articles on graphene, I'm sure you've heard claims such as "one atom thick but 200 times stronger than steel" or "extremely conductive, and simultaneously see-through and flexible". Any product engineer would be giddy at the prospect of having access to graphene to address their design criteria.

Yet, almost two decades later, graphene doesn't seem to have turned our world upside down like some predicted. I mean, where is my graphene space elevator [1,2] that was promised? This prompts the question: why hasn't graphene infiltrated every product we use? I'm going to try to answer this by setting the scenes of a story.

Act I: The Promise

When graphene first burst onto the scene via the discovery by Geim and Novoselov [3]*, the hype among other scientists and engineers was real. Technologists started touting smartphones that could be rolled up like a newspaper, transformative airplane designs, and everyday bulletproof vests as thin as a T-shirt [1]. So, in 2023, where are these? Well, there have definitely been advances in all three areas mentioned above, just not due to graphene. For example, we have flip smartphones, but they use organic LED technology, not graphene.

So why no graphene? Basically, transitioning graphene from lab-scale research synthesis to widespread commercial use is very challenging. For the first few years, most of the research was confined to academia, focusing on fundamental physics. Even now, most graphene R&D occurs in academia or national labs [4].

Act II: The Challenges

What followed shortly after the discovery of graphene was a boom in graphene-related companies aiming to bring high-quality, scalable production to market. The challenges emerged quickly. One key challenge was the lack of universally agreed-upon standards for graphene [1-2,4-5]. This led to too many variations and inconsistent quality, which made it too risky for companies to start substituting traditional materials with graphene. It's kind of like trying to follow a recipe when the ingredients keep changing or have gone bad! Another issue is the lack of synthesis modalities that scale well.

Act III: The Real-World Applications

So, is it game over? Not really. While the applications the graphene hype focused on are still being discussed [2], there are some real-world applications, albeit more modest ones. Some of the first applications were in composites, where small amounts of graphene were incorporated into other materials to improve their properties [1,2,5]. Sports equipment was a big benefactor, and better protective paints and coatings, stronger building materials, and conductive inks for 3D and inkjet printing also emerged [1]. An example of early graphene use came in 2013, when the tennis company Head introduced a racquet with a graphene-enhanced polymer in the shaft. This resulted in a racquet that was 20% lighter overall but maintained the same swing weight.

Act IV: What's Next?

The future of graphene is probably still very bright, as the number of papers published on it continues to skyrocket (see graph below). It seems the story of graphene isn't about a revolutionary, world-changing wonder material (at least, not yet), but rather a steady progression of modest, real-world applications that leverage the unique properties of graphene.

Source: https://app.dimensions.ai. Exported May 17, 2023. Criteria: 'graphene' in full data.

So, what about the Kickstarter campaign for a graphene-based kitchen cooktop? I mean, kudos to them for actually building a product that emphasizes graphene, but this isn't really what many of us had been envisioning as "life-changing".


References

[1] Chemistry World. Graphene: Looking beyond the Hype. Scientific American. https://www.scientificamerican.com/article/graphene-looking-beyond-the-hype/.
[2] Nixon, A., Knapman, J. & Wright, D. H. Space elevator tether materials: An overview of the current candidates. Acta Astronautica (2023) doi:10.1016/j.actaastro.2023.04.008.
[3] Novoselov, K. S. et al. Electric Field Effect in Atomically Thin Carbon Films. Science 306, 666–669 (2004).
[4] Tiwari, S. K., Sahoo, S., Wang, N. & Huczko, A. Graphene research and their outputs: Status and prospect. Journal of Science: Advanced Materials and Devices 5, 10–29 (2020).
[5] Barkan, T. Graphene: the hype versus commercial reality. Nat. Nanotechnol. 14, 904–906 (2019).

* I believe ref. [3] is the original paper by Geim and Novoselov on graphene that won them the Nobel prize, but surprisingly it's only been cited ~109 times. Seems strange.



Thursday, May 11, 2023

Creating Dr. Kohn

I recently created a GPT chatbot that provides answers for VASP. I called it Dr. Kohn; I wrote about something like this in an earlier post this year here. For the past few weeks, I've been working on another activity where I'm trying to use LangChain to create a very robust generative app for aiding computational scientists, but for the GPT chatbot I'm referencing here I used another platform called proudly. What made it so fascinating was how easy it was to do: in essence, you provide workflow steps and then deploy. There isn't anything too special about Dr. Kohn compared to other GPT uses, but what maybe makes it a bit better is that it has "knowledge" of the VASP documentation and tutorials. One can further expand this knowledge by simply providing more reference documents.

I'm curious how useful it will be to me. I'm planning on using it when I start my VASP calculations from scratch. When it fails, I'll revisit how to improve it. If you want to test it out, please do, and let me know if it fails miserably! Eventually, I would redo this using the LangChain package, which will allow for a lot more hands-on adjusting.



Tuesday, May 9, 2023

Am I too hopeful for self-driving labs?

I have always been bullish on and captivated by the concept of automated materials design through cutting-edge lab facilities and computation. When I see all the advances in robotic systems over the past decade, I do think we are headed toward closing the loop between materials synthesis, characterization, and testing data. Maybe we have already planted the initial seed. In my view, the integration of robotics, data science, and AI is now catalyzing a new era in the field of materials science. I'm hoping that we eventually get to the point of self-driving labs for accelerated materials discovery.

The pressing question is whether we are on the verge of achieving fully automated materials design. Can researchers simply specify a desired material property with constraints and rely on a self-driving lab to devise the synthesis route and characterize properties? Encouragingly, we seem to be heading in that direction. Academic groups like those led by Alán Aspuru-Guzik at the University of Toronto [1] and Taylor Sparks at the University of Utah [2] have laid the groundwork for self-driving labs, employing robotics for high-throughput experimentation, data science for handling vast volumes of data, and AI for enhanced prediction and optimization. These efforts are impressive, but further advancements will be required to enable more diverse access to synthesis routes, types of characterization, and testing. The goal should eventually be to develop labs capable of employing various methods for creating and testing materials to meet multi-objective property targets. The success of self-driving labs will probably occur through the collaboration between academia and industry, which will help with overcoming the challenges posed by high capital costs and ensuring the widespread adoption of self-driving labs.

My opinion is that the rapid development and growing capabilities of AI systems will continue to be a driving force behind self-driving labs. These narrow/specialized AI systems, although perhaps not yet exhibiting general intelligence, are becoming increasingly adept at processing large datasets and extracting valuable insights using techniques like Bayesian optimization [3]. This enables researchers to explore vast design spaces, generate new hypotheses, and iteratively refine their experiments to identify optimal materials design criteria. I'll posit that the convergence of robotics, data science, and AI will revolutionize the field of materials science, provided researchers don't overpromise and several strong case studies are realized. This will pave the way for new and groundbreaking technologies that can only be realized if materials can be discovered and regularly synthesized given design criteria. As someone who has been studying Bayesian techniques for the past three years and has long been interested in self-driving labs, I hope I get the chance to work on this.
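To give a flavor of what drives the iterative experiment selection I mentioned, here is a minimal sketch of the expected-improvement acquisition function commonly used in Bayesian optimization. The function name and signature are my own for illustration, not something from the cited papers; the surrogate model's posterior mean and standard deviation at the candidate points are assumed to come from elsewhere (e.g., a Gaussian process).

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best, minimize=True):
    """Expected improvement (EI) acquisition for Bayesian optimization.

    mu, sigma: surrogate posterior mean and std at the candidate point(s).
    best: best objective value observed so far.
    """
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    improve = (best - mu) if minimize else (mu - best)
    sigma = np.maximum(sigma, 1e-12)  # guard against zero predictive variance
    z = improve / sigma
    # closed-form EI for a Gaussian posterior
    return improve * norm.cdf(z) + sigma * norm.pdf(z)
```

The self-driving-lab loop then simply proposes the candidate experiment with the highest EI, runs it, updates the surrogate, and repeats.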


References

[1] B.P. MacLeod, F.G.L. Parlane, T.D. Morrissey, F. Häse, L.M. Roch, K.E. Dettelbach, R. Moreira, L.P.E. Yunker, M.B. Rooney, J.R. Deeth, V. Lai, G.J. Ng, H. Situ, R.H. Zhang, M.S. Elliott, T.H. Haley, D.J. Dvorak, A. Aspuru-Guzik, J.E. Hein, C.P. Berlinguette, Self-driving laboratory for accelerated discovery of thin-film materials, Sci. Adv. 6 (2020) eaaz8867. https://doi.org/10.1126/sciadv.aaz8867.
[2] S.G. Baird, T.D. Sparks, What is a minimal working example for a self-driving laboratory?, Matter. 5 (2022) 4170–4178. https://doi.org/10.1016/j.matt.2022.11.007.
[3] F. Häse, L. M. Roch, A. Aspuru-Guzik, Chimera: enabling hierarchy based multi-objective optimization for self-driving laboratories, Chemical Science. 9 (2018) 7642–7655. https://doi.org/10.1039/C8SC02239A.



Thursday, May 4, 2023

Calculating Phase Diagrams

Why do I love phase diagrams so much? I've always been fascinated by these relatively cursory-looking plots that show where the phases of matter are stable at different thermodynamic conditions. I remember my excitement in the intro lecture and lab, MATE 25 at SJSU, where we constructed points on the phase diagram of a lead alloy system. I thought it was the coolest thing that we could build these maps and then use them later to determine what phase a material would be in at a given temperature and composition. At the time I had no thermodynamics coursework, so I didn't realize the underlying driving force of this phenomenon, nor did I realize you could calculate these phase diagrams using the CALPHAD method. When I got to grad school, I took the required thermodynamics course and was even more blown away by how powerful this framework is. I was particularly lucky because the course was taught from a "grassroots" approach where everything was built from the ground up given a set of postulates (see the book by H. Callen to get the gist).

So what does a phase diagram look like and how does one use it? Here I'm going to leverage the excellent Python library pycalphad [1], which lets you construct phase diagrams from thermochemical databases, if available. Let's take the Cu-Ni system as an example; you can work through the CALPHAD calculation with pycalphad in this Google Colab notebook. Here is the binary phase diagram predicted for Cu-Ni:


Phase diagram predicted using cost507.tdb and pycalphad.

How do you read this? Well, the blue and yellow points indicate the equilibrium phase boundaries. The regions between phase boundaries indicate which phases are in equilibrium and in what amounts (see my old notes on tie-lines). So how does the prediction look? Not very good, if you consider what the textbook phase diagram looks like:

Textbook phase diagram for Cu-Ni, adapted from ref. [2].

As we can see, the predicted phase diagram isn't even close to the textbook version. This is a direct consequence of the thermodynamic database. However, if we use the same thermodynamic database and look at another system like Al-Zn, it's much better:

Phase diagram prediction using cost507.tdb. Not bad!


This is much better when you compare it to the phase diagram reported in ref. [3]. I just think the CALPHAD approach is so cool in that you only need thermodynamic descriptions of the various phases of a material system to make predictions about stability regions. To make a CALPHAD calculation work, you need the following:

  1. Thermodynamic data/descriptions of individual phases such as enthalpy, entropy, and Gibbs free energy.
  2. The phases that could exist and their structure.
  3. Well-defined reference states (e.g., pure metals) to allow for consistent and accurate calculations.
  4. Interaction model/parameters to describe mixing behavior of components/species.
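As a toy illustration of inputs 1 and 4 above, here is a sketch of the molar Gibbs free energy of mixing for a binary A-B phase using a regular-solution model. This is my own minimal example, not how pycalphad parameterizes its databases; real CALPHAD descriptions use more elaborate (e.g., Redlich-Kister) expansions.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def g_mix(x, T, omega, g_a=0.0, g_b=0.0):
    """Molar Gibbs free energy of a binary A-B regular solution.

    x: mole fraction of B (valid for 0 < x < 1)
    T: temperature in K
    omega: interaction parameter in J/mol (input 4 above)
    g_a, g_b: reference free energies of pure A and B (input 3 above)
    """
    x = np.asarray(x, dtype=float)
    ideal = R * T * (x * np.log(x) + (1 - x) * np.log(1 - x))  # ideal entropy of mixing
    excess = omega * x * (1 - x)                               # regular-solution excess term
    return (1 - x) * g_a + x * g_b + ideal + excess
```

Plotting this curve for omega > 2RT shows a double-well shape, the signature of a miscibility gap; the common-tangent construction on such curves is exactly what produces the phase boundaries in the diagrams above.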

The CALPHAD framework then enables building a model from these inputs to predict phase equilibria and diagrams. You can also calculate other thermodynamic properties like heat capacity; even more useful is that the free energy models can be used within phase-field simulations to evolve microstructures.
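On the heat-capacity point: since $C_p = -T\,\partial^2 G/\partial T^2$, any free energy model can be differentiated numerically to recover it. A minimal sketch (the toy $G(T)$ and function names are my own, chosen so the answer is known analytically):

```python
import numpy as np

def heat_capacity(g_func, T, dT=0.1):
    """Cp = -T * d2G/dT2, evaluated with a central finite difference."""
    d2g = (g_func(T + dT) - 2.0 * g_func(T) + g_func(T - dT)) / dT**2
    return -T * d2g

# toy free-energy model: G(T) = -a*T*ln(T), which analytically gives Cp = a
a = 25.0  # J/(mol K), a made-up constant
g = lambda T: -a * T * np.log(T)
```

Calling heat_capacity(g, 300.0) returns roughly 25.0, matching the analytic result; with a real CALPHAD database you would swap in the assessed G(T) expression for the phase of interest.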


References

[1] R. Otis, Z.-K. Liu, pycalphad: CALPHAD-based Computational Thermodynamics in Python, JORS. 5 (2017) 1. https://doi.org/10.5334/jors.140.
[2] https://sv.rkriz.net/classes/MSE2094_NoteBook/96ClassProj/examples/cu-ni.html, reproduced from Callister, William D., Materials Science and Engineering: An Introduction. United States, Wiley.
[3] A. Pola, M. Tocci, F.E. Goodwin, Review of Microstructures and Properties of Zinc Alloys, Metals. 10 (2020) 253. https://doi.org/10.3390/met10020253.

