
Thursday, July 16, 2020

Function Convolution

This blog post will just be a quick digest of the basic concept and math of function convolution. A convolution of functions is essentially the mixing or superposition of functions in a given space. The best example is to look at signals in the frequency and time domains using Fourier transforms. Let's start by looking at the inverse Fourier transform of the product of two signals $g(\omega)$ and $f(\omega)$,

$$ h(t) = \int_{-\infty}^{\infty} g(\omega)\,f(\omega) e^{i\omega t} d\omega. $$

We can assume that $g(\omega)$ is some sort of applied signal/filter and $f(\omega)$ is a source signal. The output signal $h(t)$ will be some sort of filtered signal based on what $g(\omega)$ does. The thing to recall is that these signals are actually the Fourier transforms of the time-domain signals:

$$g(\omega) = \int_{-\infty}^{\infty} g(t)\,e^{-i \omega t} dt $$

and

$$f(\omega) = \int_{-\infty}^{\infty} f(t)\,e^{-i \omega t} dt. $$

If we plug these into the first equation for $h(t)$, using dummy integration variables $t'$ and $t''$ to keep the two transforms distinct, and simplify, we get:

$$\begin{align} h(t) &= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(t')\,f(t'')\, e^{-i\omega t'}\, e^{-i \omega t''}\, e^{i \omega t}\, dt'\, dt''\, d\omega \\ & = \int_{-\infty}^{\infty} \mathscr{F}[g(t)*f(t)]\, e^{i \omega t}\, d\omega, \end{align} $$

where the substitution $\tau = t' + t''$ collects the double integral over $t'$ and $t''$ into $\mathscr{F}[g(t)*f(t)]$. Here $\mathscr{F}[\,]$ denotes the Fourier transform and $*$ indicates the combination/superposition of two functions in the time domain, which is more commonly referred to as the convolution. The intuitive picture is that the convolution captures the similarity or correlation between the two functions, and therefore the output function is akin to an inner product of the two functions. This is nicely illustrated by the animation on the Wikipedia page for convolution.


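To see this in action numerically, here is a minimal NumPy sketch I put together; it is my own illustration, not taken from anywhere else, and the signal values are made up. It checks the discrete, circular analogue of the statement above: multiplying the DFTs of two sampled signals and inverting gives the same result as convolving them directly in the time domain.

```python
import numpy as np

# Two arbitrary example signals (values here are made up for illustration).
rng = np.random.default_rng(0)
f = rng.standard_normal(128)           # "source" signal f(t), sampled
g = np.exp(-np.arange(128) / 8.0)      # decaying "filter" g(t), sampled

# Direct circular convolution in the time domain: (g * f)[i] = sum_k g[k] f[(i - k) mod n].
n = len(f)
direct = np.array([sum(g[k] * f[(i - k) % n] for k in range(n)) for i in range(n)])

# Convolution theorem: multiply in the frequency domain, then transform back.
via_fft = np.fft.ifft(np.fft.fft(g) * np.fft.fft(f)).real

print(np.allclose(direct, via_fft))    # True, up to floating-point error
```

Note that np.fft implements the discrete transform, so the matching time-domain operation is the circular convolution; to get the linear convolution of finite signals you would zero-pad first.
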
The quote for this blog post comes from the man himself, Joseph Fourier, whose mathematics is so ubiquitous in math, science, and engineering.

"Profound study of nature is the most fertile source of mathematical discoveries." --Joseph Fourier in The Analytical Theories of Heat, Ch. 1 p. 7, 1878



Thursday, July 2, 2020

Should Peer Review Really Be Community Review?

I was recently watching a snippet from Eric Weinstein's podcast, The Portal, where he and his guest rail against the peer-review process. I admit I wasn't entirely aware of the context they were referring to, but Eric Weinstein made an interesting comment, which I'll paraphrase here: he said that the establishment (i.e., academia) has embraced a process that is full of hogwash and is a fairly recent invention used to dictate how information is shared and evaluated. His point was that the peer-review process, a mandatory pillar of academic and scientific writing, is nothing but a way to filter out ideas and build the careers and egos of those who control it. I'm fairly certain there are dozens of articles on why peer review in its current state is a terrible system.
As someone who has published before and is actively involved in research, I've never really had any major issues with the peer-review process. My own experiences have been pretty subdued: I'll submit a paper to a journal, get some positive and negative feedback from the reviewers, and do my best to address the issues. I have had a case where the reviewers' comments were extremely helpful in preventing me from publishing an error (although, as you'll see below, I'm okay with making errors), and scenarios where the reviewer obviously didn't think we had the reputation to publish even though they couldn't find any major flaws. I would like to take a detour to say that I think scientific publications are at times taken much too seriously; this "stuff" is basic and fundamental research and is most certainly going to contain errors or inaccurate findings. Obviously we [the researchers] try to minimize mistakes, but it's not always possible. Let me be clear that I'm not talking about deceptive actions such as purposefully generating false information or blatant dishonesty. To me, the goal of the community in assessing a scientific or academic publication is to provide constructive feedback on any logical shortcomings, reproducibility concerns, or troubling assertions about the findings.

But the question this video segment prompted me to ask is: what is the purpose of disseminating one's research or ideas? If the academic and scientific process is about getting your idea or findings out to the universe, why do we need to have a few people look at it beforehand and determine whether it is to be made visible? Would it not be better to have the community of researchers be the judge, determining the value based on its implications for others' work? I actually think this makes more sense. It seems that we could move away from the traditional peer-review process and adopt a community review process. Obviously we can't have an entire community review each paper, but what I mean by this terminology is that we adopt an approach like the following:

  1. Original written works should initially be placed on free and open electronic repositories, for example arXiv or other domain-specific electronic archive databases.
  2. These electronic objects are living documents, so authors should update them with additions or corrections as regularly as possible or needed. This process SHOULD NOT be stigmatized; it should be embraced when we do things better or fix errors. If the authors find a major flaw in one or more of the results, we should celebrate them for correcting it. Therefore a strong "versioning" system should be used; this is indeed the case with arXiv. As a side note, negative findings should also be strongly celebrated, as they are how we determine what is valid and what is not.
  3. Let the community be the reviewers. Researchers should use these documents as their principal reference materials for their own work. That means people should cite the works on these electronic repositories, as opposed to the current approach, which cites publisher versions. The reason is that this is how a "natural" community review process can take place. That is, if a paper on a repository is being cited by other researchers, it is inherently an indication that the community of those researchers believes the work to be credible and valuable in some regard.
  4. This next item is a little tricky to do well, but I think it could be a useful addition. Let the community of readers/reviewers interact with the works by allowing private and public review comments. These comments will need to be monitored for bias, aggressive sentiment, and offensive language. The idea is that the community can see what concerns and issues others in the community may have. For example, if two papers study the same or similar things, but paper 1 says "x, y, and z" while paper 2 says "a, b, and c", commenters can draw attention to this discrepancy and cross-reference the papers on the repository. The idea is that the review process is a living process. Another critical aspect to consider is allowing community comments to be ranked, so that recurring comments or sentiments become the critical issues for the authors to address. Authors should aim to address these comments in follow-up versions when possible.
  5. When an original work becomes sufficiently influential, as judged by some set of metrics (e.g., citation rate, citation count, domain impact, etc.), publishers can invite the author(s) to submit it. This step should not include a refereeing round, but the editors should request that the authors provide rebuttals or address the highest-ranked critical comments. How many comments should be addressed is a bit of a challenge to determine. Once the editors are satisfied, the paper can be published as a "historical" record, meaning that it is no longer a living document but a statement of research. After this occurs, the electronic archive entry for the paper would carry some kind of flag indicating where it has been published as a historical record.
I'm fairly sure this is NOT the best approach, nor the first time this has been thought about, but it might be time to rethink how scientific and academic publishing is conducted. I think this can be done, since the tools mostly exist now; it's easy to post your work on a repository and assign it a digital object identifier (DOI), which allows others to easily find and cite it. What is lacking is the ability to comment on and rank papers, and an easy way for traditional publishers to verify that activity, although PubPeer.com seems to be a possible solution. Obviously I haven't accounted for nefarious actors or deceptive practices, which would be serious issues with this approach. For example, individuals or groups could create dummy citations to their work or use automation to post positive comments and up-rank them. But I think systems and concepts exist within computer and information sciences that could be put to use to combat these. There is also the evident fact that publishers are going to denounce this in every way and form possible, as it destroys their core business model.

Overall, I think the peer-review process, whereby authors send their papers to a publisher and the editor finds 3-5 researchers to look at the paper, is probably not as valuable as we are told to think. I'm more in favor of a "natural" selection process as discussed above. Basically, if my papers are getting cited and have a lot of comments, and I attempt to address as many of the critical ones as I can, then I know without doubt they're being peer reviewed!

I'd be interested for others to pick apart my thinking on this. I'm sure it's riddled with poor justification and reasoning. Let me know what you think and why this wouldn't work.

The quote I selected for this blog post is Einstein's reply to the editor of Physical Review after the editor sent his manuscript out for review and provided the referee's comments back to Einstein:

"We (Mr. Rosen and I) had sent you our manuscript for publication and had not authorised you to show it to specialists before it is printed. I see no reason to address the – in any case erroneous – comments of your anonymous expert. On the basis of this incident I prefer to publish the paper elsewhere." ­— Albert Einstein to John Tate, Editor for Physical Review







Thursday, March 12, 2020

Not so hidden: A clearly written popular quantum physics book by a masterful communicator


$^\dagger$My Commentary


First of all, I am a very big fan of Sean Carroll and, in general, of the many fantastic science and technology communicators of the 21st century. As a society we are indebted to them for the effort and time they put into helping us understand the fascinating and intriguing universe we live in. What I enjoy about Sean Carroll's approach to communicating physics concepts and topics is his unmatched clarity of delivery and pace of speech. If you've ever listened to his podcast (Mindscape) or others he has been a guest on, you know what I'm talking about. His most recent popular physics book, "Something Deeply Hidden: Quantum Worlds and the Emergence of Spacetime", continues his trend of excellence in science communication.

In this book, Sean focuses on the foundations of quantum physics and how we (the community of scientists) have become complacent with the "shut up and calculate" mentality when dealing with the quantum realm. The issue stems from the early pioneers of quantum mechanics, who could not reconcile the predictions of the mathematics with experimental observations, namely, that the quantum object/information we call the wavefunction does not manifest as described by the math when measured in the lab. Let's throw some math into the mix just to make things clear. In nonrelativistic quantum mechanics we have an equation that relates the time evolution of the quantum state (i.e., the wavefunction) to the energy content of the system in that state; this is the infamous Schrödinger equation:

$$ i\hbar \frac{\partial}{\partial t} | \Psi \rangle = \hat{H} | \Psi \rangle. $$

The quantum state function $\Psi$ is called the wavefunction because in many cases it has a functional form that resembles wave-like behavior. This wavefunction differs from our intuitive notion of classical waves in that its amplitude is a complex number. Sean Carroll's argument is that the wavefunction is all the information needed to describe a quantum system. Furthermore, he argues that we should take this equation at face value for what it tells us, namely, that given a quantum wavefunction we can describe its evolution deterministically. This is a key statement, because in popular science you may hear that quantum mechanics is a probabilistic theory; that is only true if we are concerned with extracting additional information about the system from the wavefunction. For example, if we want to know the position, momentum, or energy, then we can only speak in terms of probabilistic outcomes for those observables. But the wavefunction itself always evolves deterministically via the Schrödinger equation. For example, if we want to know the expectation value for observing or measuring the momentum of a wavefunction describing a particle, we would write down something like:

$$ \langle \Psi | \hat{p} | \Psi \rangle $$

This expression provides us with a mathematical result about the momentum of the particle described by the wavefunction in a probabilistic manner, namely its expectation value. It is beyond the scope of my intent for this blog post, but the reason we can't say that the observed momentum of the wavefunction $\Psi$ is exact is related to the fact that the wavefunction is a superposition of equally valid solutions in what is known as Hilbert space.

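To make both points concrete, here is a small NumPy sketch I put together (my own toy illustration, not from the book): a two-level system with a made-up Hamiltonian, evolved deterministically by the unitary propagator $e^{-i\hat{H}t/\hbar}$, and a stand-in observable ($\sigma_z$ rather than momentum) whose individual measurement outcomes are random even though the expectation value is fixed by the state.

```python
import numpy as np

hbar = 1.0                                      # work in units where hbar = 1 (toy-model assumption)
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])                     # made-up Hermitian Hamiltonian for a two-level system
sz = np.array([[1.0, 0.0],
               [0.0, -1.0]])                    # stand-in observable (sigma_z), eigenvalues +1 and -1
psi0 = np.array([1.0, 0.0], dtype=complex)      # initial state |Psi(0)>

# Deterministic, unitary evolution: |Psi(t)> = exp(-i H t / hbar) |Psi(0)>.
evals, evecs = np.linalg.eigh(H)

def evolve(t):
    U = evecs @ np.diag(np.exp(-1j * evals * t / hbar)) @ evecs.conj().T
    return U @ psi0

psi_t = evolve(2.0)
print(np.vdot(psi_t, psi_t).real)               # norm stays 1: nothing probabilistic has happened yet

# Asking about an observable is where probability enters (Born rule).
expectation = np.vdot(psi_t, sz @ psi_t).real   # <Psi(t)| sigma_z |Psi(t)>
probs = np.abs(psi_t) ** 2                      # outcome probabilities in the sigma_z eigenbasis

rng = np.random.default_rng(1)
samples = rng.choice([1.0, -1.0], size=100_000, p=probs)   # one outcome per simulated experiment
print(expectation, samples.mean())              # the sample mean approaches the expectation value
```

Nothing random ever happens to the state itself; the randomness only shows up when we ask for a single measurement outcome, which is exactly the distinction I'm drawing above.
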
Now, the main focus of the book is on an alternative understanding of the measurement catastrophe in quantum physics. That is to say, when experiments are performed on quantum systems we don't get the entire probability distribution of the wavefunction for an observable as an output; what we get is a single data point from that sampling space. If we conduct enough experiments, then of course we recover the distribution. But why don't we get the entire wavefunction probability distribution when we measure? The historical and mainstream thought on this is that something happens whereby, when an observer (e.g., the human eye or a digital sensor) measures the quantum state/wavefunction, it "collapses" into a single value. Now you say, "What do you mean? What forced the function to collapse?" Yes, this is indeed a strange phenomenon. Neither the Schrödinger equation nor any other part of the mathematics tells us anything about a wavefunction "collapse". For many years I never really thought about this, but more recently it really is bothersome that a quantum state just "collapses" to a single value as if forced by something we know nothing about. Albert Einstein's thought on this was that the forced collapse is due to local hidden variables: things we are unable to identify as being part of the system and that are locally causal.

Now, Sean Carroll's approach is more epistemic: we know we have this quantum mathematical object called the wavefunction, and everything in the universe can be described by it, so what happens when the quantum system I am describing interacts with an observer who is also treated as a quantum system? The outcome is that we get parallel quantum states that are very much deterministic and in existence, but with different probabilistic outcomes (I think this is how I understand it?). In other words, the act of quantum systems interacting produces many outcomes that, in a sense, occur in parallel worlds, hence the many worlds. To be clear, we don't need to think of the same physical space being occupied, but rather that, in some abstract representation, many isolated outcomes have occurred, each with validity. This approach goes by the name of the Everettian or many-worlds interpretation.

The main tenets of the many-worlds argument are: 1.) we should not try to interpret the meaning of the Schrödinger equation but just follow the mathematics as providing what is real, and 2.) we should not select which systems behave quantum mechanically, but assume every physical object in the universe can be described by a quantum state.


Although I very much appreciate Sean's insight and excellent introduction to this vantage point on the foundations of quantum physics, my own human bias doesn't want to agree. It's not that I don't think it's a valid understanding of the outcomes of the Schrödinger equation, but more that it leaves me wondering about the other "branches" of the wavefunction. For example, can one roll back time to traverse a new branch? This should be possible, since the time evolution operator associated with the Schrödinger equation is unitary.

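As a rough illustration of why unitarity makes the evolution formally reversible, here is a tiny sketch reusing the same made-up two-level Hamiltonian as in the earlier example: applying the propagator for $-t$, which is the adjoint of the propagator for $t$, returns the state to where it started.

```python
import numpy as np

hbar = 1.0
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])                     # same made-up two-level Hamiltonian as before
evals, evecs = np.linalg.eigh(H)

def U(t):
    # Propagator exp(-i H t / hbar); unitarity means U(-t) is exactly U(t)'s conjugate transpose.
    return evecs @ np.diag(np.exp(-1j * evals * t / hbar)) @ evecs.conj().T

psi0 = np.array([1.0, 0.0], dtype=complex)
psi_t = U(3.0) @ psi0                           # evolve forward
psi_back = U(-3.0) @ psi_t                      # "roll back" with the inverse propagator
print(np.allclose(psi_back, psi0))              # True: the evolution is reversible
```

Of course, this only shows that the bare unitary evolution has a well-defined inverse; whether that says anything about traversing a different branch is exactly what I'm left wondering about.
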
There are two things I should mention: 1.) I'm not a quantum physicist by training, so my understanding of the topic could have gone awry, and 2.) I haven't finished the book.


$^{\dagger}$ I haven't finished the book yet, so this is a partial commentary.
