From assigning credit for discoveries to teaching AIs to think, rewarding success is a crucial but difficult task for science.
In the heady days of the late 1970s and early 1980s, many physicists—including me—thought they were on the cusp of achieving a unified theory of the fundamental forces. A striking prediction to emerge from this circle of ideas is that protons are unstable and will eventually decay, just as many radioactive nuclei do. It was widely hoped that experimenters would find ways to verify the prediction.
Sure enough, they did. Unfortunately, subsequent work revealed that the claimed observations of proton decay could not be correct, though the nature of the experiments’ flaws was never clarified completely. This story is not unique: In recent years a number of exotic physical phenomena—including magnetic monopoles, cosmological dark matter, axions and supersymmetric particles—have reportedly been detected, only for later, more sensitive experiments to come up empty.
If later work had actually discovered proton decay, or the other effects, those who jumped the gun might have seemed vindicated and been in line for big rewards. Musing on these potential messes, I came up with a whimsical suggestion: the anti-Nobel prize. An anti-Nobel would be awarded for incorrect work that, had it been correct, would have merited a Nobel Prize. It would be awarded secretly, so no one need be embarrassed. The anti-Nobel prize would only come into play if the recipient did subsequent Prize-worthy work, in which case the two would cancel each other out. This prospect might give overly ambitious, trigger-happy scientists some pause.
The problem of assigning credit and rewards for success is a big issue in the sociology of science. Prizes, plum positions and lucrative intellectual property rights can only be awarded to a few individuals, even when the underlying work involves, at different levels, many contributors. To use a physics term, the rewards are “quantized,” given on an all-or-nothing basis, while the contributions come in varied shapes and sizes.
Money is a great invention that allows rewards for economic effort to be divided up with more flexibility and discernment than barter. Notoriously, though, that breakthrough by itself doesn’t solve all problems of fair and efficient distribution.
The credit assignment problem is also a central issue in learning. Within neural networks, natural or artificial, one wants to “reward”—that is, strengthen—connections that are involved in successful outcomes, while “punishing” those that accomplish little or lead to failures. Since the strength of connections can vary continuously, one can avoid the all-or-nothing quantization of credit.
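This continuous apportioning of credit can be made concrete with a toy sketch (all names here are illustrative, not from the original column). The classic delta rule nudges each connection weight in proportion to how much that connection contributed to the outcome, so credit and blame are shared out in graded amounts rather than all-or-nothing:

```python
# Toy sketch of continuous credit assignment: the delta rule.
# Each connection weight is adjusted in proportion to its share of
# responsibility for the error, rather than rewarded all-or-nothing.

def train(samples, lr=0.1, epochs=200):
    """Learn weights so that prediction = sum(w[i] * x[i]) fits the targets."""
    n = len(samples[0][0])
    w = [0.0] * n
    for _ in range(epochs):
        for x, target in samples:
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = target - pred
            # Credit assignment step: a connection with a larger input
            # contributed more to the output, so it receives a
            # proportionally larger share of the correction.
            for i in range(n):
                w[i] += lr * err * x[i]
    return w

# Hypothetical target relationship: output = 2*x0 - 1*x1
data = [((1.0, 0.0), 2.0), ((0.0, 1.0), -1.0), ((1.0, 1.0), 1.0)]
weights = train(data)
```

After training, the weights settle near 2 and -1, each connection having been strengthened or weakened by many small, graded corrections.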
Typically, however, any decision or activity involves many neural connections, so the problem of apportioning credit for success and blame for failure must still be addressed. Some very clever work is being done on this: Deep learning neural networks have had impressive successes in learning to identify patterns and to play difficult games, including chess, Go and StarCraft II, extremely well.
One day, AIs will be powerful tools in assigning credit wisely (if, that is, we can teach them wisdom). We’ll need their help in dealing with the knotty issues sure to arise around awarding Nobel prizes to AIs.
Originally appeared on The Wall Street Journal website on February 18, 2021, as 'Do We Need an Anti-Nobel Prize?'
Frank Wilczek is the Herman Feshbach Professor of Physics at MIT, winner of the 2004 Nobel Prize in Physics, and author of the books Fundamentals: Ten Keys to Reality (2021), A Beautiful Question: Finding Nature’s Deep Design (2015), and The Lightness of Being: Mass, Ether, and the Unification of Forces (2009).