The Structure of Scientific Revolution: From Newton to Neuroscience

WARNING – A LONG READ FULL OF PHILOSOPHY AND LARGELY UNREFERENCED. Originally written as an essay collecting the thoughts of neurobiologists, but the editors decided not to publish it, so I decided to do so here. My apologies, but you were warned.  🙂

Many years ago, I was asked by Jack Cowan, Professor of Mathematics and Neurology at the University of Chicago, to give a lecture to his friends and colleagues in the Department of Physics on the current state of Computational Neuroscience. The title of that lecture is the same as the title of this article. Arriving in his office 30 minutes prior to the start of the lecture, and having just read my talk title, Jack implored me please not to suggest that Neuroscience was "non-paradigmatic", as he said this was the excuse used by his physics colleagues for not getting involved in neuroscience research. I told him that I couldn't very well change the entire point of my talk with 30 minutes to go, but that I would try to fix the problem. After stating at the start of the talk that I believed, as they apparently did too, that neuroscience is as yet non-paradigmatic, I asked who in the audience didn't want to be Newton? On reflection later, I realized that, in fact, nobody did – their careers, perhaps especially including the Nobel Laureates among them, being entirely dependent on the fact that modern physics is 'paradigmatic'.

Fifty years ago, it is very likely that the use of 'paradigmatic' and 'non-paradigmatic' in the first paragraph of this essay would have already lost the reader. However, it is now more than 50 years since Thomas Kuhn introduced this terminology to science (and, sadly, the mass media) in his now classic book The Structure of Scientific Revolutions (Kuhn, 1962). While Kuhn himself subsequently labored over the concept of a scientific paradigm (and the associated 'paradigm shift'), his work has more than confirmed the sentiment expressed in the book's first sentence that:

"History, if viewed as a repository for more than anecdote or chronology, could produce a decisive transformation in the image of science by which we are now possessed." (pg. 1, Kuhn, 1962).

In this essay, my intention is to use a "Kuhnian" framework to consider the current state of Neuroscience as a 'Science', as well as the consequences of that state for our likely short-term and longer-term progress toward understanding how the brain actually works. In its essence, my view is that neuroscience today is in fact 'non-paradigmatic', reflecting a more folkloric, descriptive enterprise than a true science. Accordingly, I have serious concerns regarding current explanations and understandings of neural computation at all levels. While presenting this concern, I will also briefly attempt to demonstrate how the path to a paradigmatic structure for neuroscience can be informed by exploring the origins of modern physics and, in particular, the accomplishments of physics in the 16th and 17th centuries, which also formed the basis for Kuhn's own analysis.

Is Neuroscience Paradigmatic?

While a core component of the 'Structures' book, the question of what constitutes paradigmatic science troubled Kuhn long after he introduced the idea. As Kuhn himself stated clearly in the postscript of a subsequent edition: "The paradigm as shared example is the central element of what I now take to be the most novel and least understood aspect of this book" (pg. 186, Kuhn, 1962). As has been pointed out by a number of other authors, Kuhn's use of paradigm in "Structures" was quite inconsistent, and he subsequently published several papers in an effort to clarify the concept ("Second Thoughts on Paradigms", Kuhn, 1974). This confusion over what constitutes paradigmatic science obviously also confuses the definition of non-paradigmatic science. In some instances Kuhn seems to suggest, for example, that physics reached paradigmatic status in the 16th and 17th centuries, while in others he suggests that both Aristotle's Physica and Ptolemy's Almagest provided a paradigmatic base for physics as much as 2000 years earlier (pg. 10, Kuhn, 1962). It will be my claim that most theory in neuroscience today resembles Ptolemaic-style argumentation which, because of the complexity of biological systems, has failed to systematically organize the field.

As quoted above, a key indicator for Kuhn of the existence of paradigmatic science is the extent to which a core set of beliefs, values and approaches are shared (my emphasis) among the community of scientists involved. Quoting again from Kuhn, he believed that paradigmatic science exists when a community of scientists adopts "accepted examples of actual scientific practice—examples which include law, theory, application, and instrumentation together (my emphasis)—provid(ing) models from which spring particular coherent traditions of scientific research." (pg. 10, Kuhn, 1962). While it is important to note that Kuhn himself acknowledges that it is possible to perform scientific experiments in a non-paradigmatic context, it is only when the community as a whole shares a common paradigm that science proceeds in an orderly way (Kuhn's "Normal Science"), with periodic changes in paradigms (Kuhn's "paradigm shifts").

As just stated, Kuhn regarded paradigmatic science as constituting a combination of "laws, theories, applications and instrumentation" linked together to provide an accepted community model for the coherent advancement of the field. While, as a physicist, Kuhn principally used the history of physics for his examples, he clearly believed that this analysis likely applied to all scientific endeavors. While it has often been suggested that the strong emphasis on theory and laws emerging from an analysis based on the history of physics might not apply in as deeply experimental a field as biology, I disagree with this sentiment strongly. In my view it is the mathematical basis for theories and laws that explicitly establishes the necessary substructure for paradigmatic science. There are many reasons for this, but one is that mathematics provides a rigorous set of definitions of terms, which can be used as the basis for communication as well as collaboration. While almost all modern biological seminars and presentations start with a 'box and arrow' diagram intended to provide a 'theoretical' (note the quotes) context for the results to be presented, these diagrams almost never represent actual mathematical entities. Similarly, articles written about ideas for neural function typically rely on these kinds of box and arrow diagrams with little or no mathematical definition, or underlying mathematical models. In actual practice, this often means that different practitioners in a field, while using the same words, often mean quite different things. If communication in a science depends only on ideas presented without mathematical definitions, then the ideas are too poorly defined to really be tested. In this sense, I am happy to align myself with Karl Popper, who believed that hypotheses are only "scientific" if they are falsifiable.

This is not to say that there are no mathematically defined theories and models in neuroscience; there are. However, I would claim that almost none of those theories have risen to the level of community acceptance, or even general understanding, necessary to provide a basis for paradigmatic Neuroscience. At the largest scale, the vast majority of published electrophysiological and biophysical experimental studies make no mention of modeling results, and only passing, inconsequential and non-specific reference to brain theories. Typically these theories are mentioned only in the first paragraph of the introduction and the last paragraph of the conclusion. It is exceedingly rare for a theory or model to influence a paper's methods, or help organize its results. While on the surface this would appear to be less the case for so-called "Cognitive Neuroscience", its theories and models, by intent, largely stand independent of the actual machinery of the brain itself, usually informed by some form of behavioral analysis. Behavioral analysis of the cognitive type, however, can be very misleading with respect to the actual structure and performance of the underlying neuronal machinery (see Vehicles by Valentino Braitenberg). Ultimately how the brain works depends fundamentally on how information moves through its neurons and networks. In recent times, cognitive neuroscience has claimed to have access to that movement of information through the use of brain imaging techniques. However, it is entirely unclear how the signals measured in brain imaging relate to the actual underlying behavior of neurons and their networks. Further, the design of brain imaging studies is especially subject to the built-in assumptions inherent in the theories themselves. In a section to follow I will discuss the application of Ptolemaic-style 'curve fitting' modeling to neuroscience, but I regard most cognitive theories to be of that type.

Even within the small subset of neuroscience referred to as Computational Neuroscience, almost no common set of models or laws has emerged that is accepted, or even referred to, by all. There are exceptions to this general rule, like, for example, the now 60-year-old Hodgkin-Huxley model of the generation of the action potential. In fact, a conference I recently helped organize to celebrate the 60th anniversary of the publication of this model was, I believe, the first meeting in neuroscience ever organized around a mathematical model. However, the Hodgkin-Huxley model addresses neuronal behavior at the biophysical level, not explicitly at the level of how the brain actually organizes and coordinates behavior, which is the level of interest of most neurobiologists.
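
For readers who have never seen it written down, the core of the model (shown here in its standard modern textbook form, not a new formulation of mine) is a current-balance equation for the membrane potential V plus first-order kinetics for the gating variables m, h and n:

$$ C_m \frac{dV}{dt} = I_{ext} - \bar{g}_{Na}\, m^3 h\,(V - E_{Na}) - \bar{g}_{K}\, n^4 (V - E_{K}) - \bar{g}_{L}\,(V - E_{L}) $$

$$ \frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\, x, \qquad x \in \{m, h, n\} $$

where the maximal conductances and reversal potentials were fit by Hodgkin and Huxley to voltage-clamp data from the squid giant axon.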

Some have explained the lack of coordinated or common acceptance of a paradigmatic model by arguing that the structure of neurobiological neurons and networks is not yet well enough understood. However, in a domain I know very well, the study of the cerebellum, we have known the anatomical structure of its neurons and networks for more than 100 years. Yet no set of rules or laws has emerged for what actually constitutes, for example, a Purkinje cell, the primary neuron in the cerebellum, even though this cell has been modeled for more than 60 years (20 Years of Computational Neuroscience, Bower (ed.), Springer, 2013). I would argue that, because of the lack of accepted 'laws', most of the Purkinje cell models published in the literature bear no actual resemblance to a real Purkinje cell. Furthermore, considering the computational function of the cerebellum as a whole, most of the dominant theories explicitly require neuronal behavior contrary to actual experimental evidence. Perhaps for this reason, as with neuroscience as a whole, the vast majority of experimental papers concerning the cerebellum make no mention of modeling, and only passing reference, of the type already discussed, to cerebellar "theory".

This situation is not isolated to the cerebellum. While the hippocampus and visual cortex are probably the two most modeled structures in the mammalian brain, there is currently no commonly accepted model for either structure, and no effort to make one. Even fundamental questions, like what the appropriate measures of neuronal behavior are, or at what resolution to consider the synchrony of neuronal activity, lack a common operating definition. Instead, Neuroscience is dominated by the kind of structure described by Kuhn as existing prior to Newton's publication of Opticks: "No period between remote antiquity and the end of the seventeenth century exhibited a single generally accepted view about the nature of light. Instead there were a number of competing schools and subschools, most of them espousing one variant or another of Epicurean, Aristotelian, or Platonic theory." (pg. 12, Kuhn, 1962).

Reflecting this non-paradigmatic structure, the vast majority of Neuroscience publications remain fundamentally descriptive in nature, while the field itself actively resists the development of a common and articulated underlying theoretical or law-based structure. The fact that Computational Neuroscience is regarded as a sub-specialty of neuroscience is itself a clear indication of the non-paradigmatic nature of Neuroscience as a whole. Without models and theories to provide common definitions, a coherent scientific tradition of research is not possible. Neuroscience then defaults to the lowest-common-denominator form of human organization: a folkloric and effectively religious enterprise, based on mythological storytelling and driven by the egos, opinions, and predetermined worldviews of the individuals involved. Without an underlying mathematical structure, the field can easily incorporate or ignore conflicting data, or not even recognize when conflicts in the data exist. As Kuhn recognized, it is precisely the ability to recognize such conflicts that is a key feature of paradigmatic science.

How then do we proceed?

It is clear from the historical record that the long, slow evolution of planetary science provided the foundation for the origins of modern physics, just as it also strongly influenced the development of Kuhn's thesis. It is my view that this history can also provide an important guide to establishing a paradigmatic base for neuroscience. As has already been mentioned, however, Kuhn's own consideration of the history of paradigms in the context of planetary science is somewhat confused. On the one hand, he clearly recognizes that the work of the 16th and 17th century physicists fundamentally changed the underlying structure and behavior of physics and physicists, more than would be expected from a simple "paradigm shift". On the other hand, in the first chapter of 'Structures' he lists both Ptolemy's Almagest and Newton's Principia as serving "for a time implicitly to define the legitimate problems and methods of a research field for succeeding generations of practitioners" (pg. 10, Kuhn, 1962).

For Kuhn, a critical feature of the success of a theory in supporting paradigmatic science was its ability to align scientists into a common model of practice. Kuhn believed that for theories to provide this kind of organizing (paradigmatic) structure they should be: "accurate in their predictions, consistent, broad in scope, present phenomena in an orderly and coherent way, and be fruitful in suggesting new phenomena or relationships between phenomena" (Kuhn, 1962). In fact, however, the Ptolemaic model only partially met Kuhn's description. While the model was "broad in scope, consistent, presented phenomena in an orderly and coherent way, and predicted the data", it did not actually "suggest new phenomena or relationships between phenomena." In fact, one likely reason for its success was that it reinforced the assumed relationships between the phenomena it sought to describe: in this case, dominantly the assertion (and seemingly clear observational fact) that the heavens revolved around the earth. The engagement of generations of astronomers in the further progression of the Ptolemaic model largely involved adding to its complexity, without changing its core assumptions, to account for more accurate experimental data. Accordingly, while the success of the Ptolemaic model was clearly due to its ability to predict the movement of the planets, and its relative mathematical breadth and simplicity, the fact that it fit with the dominant worldview didn't hurt.

While not described in this way in children's textbooks, it was actually the evolved complexity of the Ptolemaic model, and especially the fact that the earth was no longer actually at the center of its dynamics, that led Copernicus to propose his model as an alternative. In fact, however, the sun-centric model actually predicted the positions of the planets less accurately than did the Ptolemaic model for several hundred years, and the model had serious problems with consistency, breadth of scope and the presentation of data in an orderly and coherent way. Improving the accuracy of the Copernican model required a great deal more fundamental work, as well as the introduction of much more complicated mathematical relationships. So why then did scientists like Galileo, Kepler, and Newton 'gravitate' towards this model? While there are likely religious and sociological explanations, scientifically it was clearly the fact that this theory provided "new phenomena or relationships between phenomena", as well as a significant challenge and opportunity for new science, that drove its adoption and development. The Ptolemaic model could not, by its nature, tell modelers or astronomers anything they did not already know or assume to be true about the structure of the solar system, or anything about the mechanical forces responsible for its structure.

While the work of Kepler and others on extending the Copernican model is historically interesting, it is actually the work of Newton on celestial dynamics that provided the key step in the transition to modern physics. Interestingly, that work didn't initially involve planetary movement but, instead, an analysis of the motion of the moon with respect to the earth. It turns out that the eventual emergence of Newtonian mechanics, as manifest in the Principia, almost certainly originated in a mathematical model he constructed as a young man at his family's farm at Woolsthorpe, to help him understand why the moon moved the way it did around the earth. While documentation is sparse, Newton apparently generalized a conundrum involving the behavior of a ball swung around at the end of a string to the relationship between the earth and the movement of the moon. Either inventing or borrowing the calculus, he calculated the forces involved, discovering what appeared to be an inverse-square relationship between that force and the distance between the earth and the moon. While it is not possible to recount the full story here, in fact the estimate was not exactly the inverse square, and so Newton put the work aside for many years, until he was informed that someone else was about to publish the inverse-square relationship. Recalculating with a better estimate of the distance between the earth and the moon, he returned to the subject of celestial mechanics, eventually producing the Principia. In a tour de force, he used his laws of mechanics to derive Kepler's Laws of planetary motion, even providing an explanation for why the moon failed to obey them. Again in Kuhn's words: "No other work known to the history of science has simultaneously permitted so large an increase in both the scope and precision of research." (pg. 30, Kuhn, 1962).
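
For readers who want to see the arithmetic behind that famous check, a modern reconstruction of the 'moon test' (using present-day values and notation, not Newton's own) goes as follows. The moon's observed centripetal acceleration,

$$ a_{moon} = \frac{4\pi^2 r}{T^2} \approx \frac{4\pi^2\,(3.84\times 10^{8}\,\mathrm{m})}{(2.36\times 10^{6}\,\mathrm{s})^2} \approx 2.7\times 10^{-3}\,\mathrm{m/s^2}, $$

should, if the same attraction that pulls an apple to the ground also holds the moon in its orbit and weakens as the inverse square of distance, equal the surface value of gravity reduced by the square of the moon's distance in earth radii (roughly 60):

$$ \frac{g}{60^2} \approx \frac{9.8\,\mathrm{m/s^2}}{3600} \approx 2.7\times 10^{-3}\,\mathrm{m/s^2}. $$

It was an imperfect version of this agreement, computed with a poorer estimate of the earth-moon distance, that Newton initially set aside.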

So what about Newton and Neuroscience? 

So how, then, does this brief history of planetary science relate to, and potentially guide, modern neuroscience? First, I would assert that most neurobiological models today are in fact Ptolemaic in nature: constructed to demonstrate a particular preexisting idea about how the brain works, and largely designed to convince others of their plausibility. Like Ptolemy's, the "predictions" made by most of these models are actually most often "postdictions" of phenomena already known to occur. Like the Ptolemaic model, these models usually tout their relative mathematical simplicity as a virtue, and often have the associated property that they can be readily adjusted to account for inconsistent data. The most important way in which they reflect Ptolemaic modeling, however, is that they do not, as their first priority, represent the actual physical structure of the nervous system itself. Instead, they usually pick and choose convenient structural details to include, which, among other things, makes them very difficult to test or falsify experimentally. Like Ptolemy, they principally rely on their ability to 'postdict' experimental data as evidence of their validity. Perhaps most importantly, however, as with Ptolemy, the fact that they are not built first and foremost to reflect the actual physical structure of the brain means that it is much less likely they will identify new and unexpected relationships. Often, like Ptolemaic models, their assumptions represent the worldview of those that built them, as well as of the subset of neuroscientists who share the same preconceptions. Unlike Ptolemy's, however, these types of models have failed to provide a strong enough attraction for the field as a whole to provide a basis for paradigmatic science to emerge. The reasons for this are complex, but probably reflect the general disconnect from, and even suspicion of, modeling among most experimentalists that Ptolemaic modeling actually produces. Recently, in a conversation with several computational neurobiologists of the Ptolemaic variety who were complaining that experimentalists paid no attention to their models, I repeated a statement I have heard attributed to Kant who, it was claimed, when asked why he didn't even bother to read Voltaire, stated that he knew the conclusions and therefore the assumptions had to be wrong. In sympathy for Ptolemaic modelers in neuroscience, they also suffer from the fact that, unlike Ptolemy, who was modeling the motion of the planets, it is far from clear what the right measurements of neuronal behavior are. Unfortunately, figuring that out will very likely require that neuroscience become paradigmatic first. It is clear, however, that for whatever reasons, Ptolemaic models have failed to exert the force necessary to organize the field.

Like planetary science before us, I believe that paradigmatic neuroscience will only emerge when models are built to reflect the actual physical structure of the brain first, in the way that Newton's model reflected the physical relationship between the moon and the earth. These models must be built without first assuming how the system works. By analogy, Newton could have built a model of the moon-earth system that assumed vortex forces were at work. While, in fact, he apparently attributed the discrepancy between his early calculations and an exact inverse-square relationship to a vortex force, had he assumed that such a force governed the dynamics to begin with, he never would have discovered the inverse-square relationship and, as a consequence, gravitational attraction. Similarly, and perhaps even especially in a structure as complex as the nervous system, unless we use modeling tools that put us in the position to learn its fundamental relationships from the structure itself, I believe it is unlikely we will ever generate the kinds of structurally testable and unexpected predictions necessary to organize neuroscientists into a paradigmatic enterprise. In some sense there is already an existence proof for this assertion. While I believe it is largely misunderstood, especially by those who build Ptolemaic models in neuroscience, the process that Hodgkin and Huxley used to develop their model of the action potential was Newton-like. While the model they eventually produced had a more abstract, curve-fitting form, it is very clear from their own description of their process that they originally intended to make a model based on the actual physical structure of the neuronal membrane. The fact that they did not was a consequence of their awareness that they did not yet know enough about its physical structure to do so. This realization in turn arose when early efforts to model the data made clear that their initial assumptions about mechanism were wrong. Nevertheless, their strong predisposition towards a structurally accurate model meant that the structure of the model they did produce made specific and testable predictions. It is on the basis of the success of those predictions, as well as the ability of the model to replicate the shape and structure of the action potential, that neuroscience as a whole adopted the model as a paradigm for the generation of the action potential.

Implications

While the Hodgkin-Huxley model is biophysical in nature, and therefore does not directly address how the brain's neurons and networks process information, in an almost paradigmatic way their model has provided the structural basis for a new type of model, of the Newtonian type, focused at the computational level. Usually described as 'realistic' or 'biophysically accurate' models, these still represent a minority of published Neuroscience models, although their use is slowly growing. Because the structure of the nervous system is complex, these models are complex, can only be run as numerical simulations, and accordingly depend on the continued exponential growth in available computing power. Perhaps more importantly, a few of these models have started to emerge as true 'community models', with shared use and development by multiple investigators (Bower, 2013). There is even now a database for these types of models, allowing their transfer between laboratories.
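
To make concrete what 'running a model as a numerical simulation' means, the sketch below (my own illustrative code, not any particular published community model) integrates a single Hodgkin-Huxley compartment with a simple forward-Euler scheme; the 'realistic' models discussed here couple thousands of such compartments, each with its own channel complement, which is where the appetite for computing power comes from.

```python
# Minimal sketch: forward-Euler integration of one Hodgkin-Huxley compartment.
# Uses the classic squid-axon parameters in the modern convention (rest near -65 mV);
# units are mV, ms, uF/cm^2, mS/cm^2, uA/cm^2.
import numpy as np

C_m = 1.0
g_Na, g_K, g_L = 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

# Standard voltage-dependent rate functions for the gating variables m, h, n
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, t_max = 0.01, 50.0                 # time step and duration (ms)
steps = int(t_max / dt)
V, m, h, n = -65.0, 0.05, 0.6, 0.32    # approximate resting-state initial conditions
I_ext = 10.0                           # constant injected current (uA/cm^2)

trace = np.empty(steps)
for i in range(steps):
    # Ionic currents given the present state
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K  = g_K * n**4 * (V - E_K)
    I_L  = g_L * (V - E_L)
    # Forward-Euler update of membrane potential and gating variables
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
    trace[i] = V

# With this stimulus the compartment fires repetitively; spikes overshoot ~+20 mV
print("peak membrane potential (mV):", trace.max())
```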

While these models exist, the majority of Computational Neurobiologists are suspicious of their complexity, and the majority of experimental neurobiologists do not have the skills or training necessary to appreciate their value or potential importance for experimental studies. In fact, the overall non-paradigmatic nature of Neuroscience provides a general resistance to the use of these models as an organizing principle. Like the Copernican model before them, their results often challenge the ad hoc assumptions of the Ptolemaic models. Yet I believe the history of science makes it clear that it is only through these types of realistic models that Neuroscience can transition into a paradigmatic science. For this to happen, however, several changes and developments will be necessary. Many, interestingly enough, parallel developments in physics in the 17th century:

–       As with Newton's model, understanding these new, complex, realistic biological models will also require the invention of new mathematics and analysis tools, as well as new tools for visualization. While several modeling systems have been built to provide a foundation for this work, the lack of paradigmatic organization in neuroscience as a whole means that many would rather 'invent their own', hindering the formation of an open and inclusive modeling community.

–       Neuroscience must not adopt too readily the structure of modern-day physics. By this I mean that it is only once a paradigm is established that there is a base from which to coordinate theory and experiment as separate enterprises. Like Newton himself, neuroscience should be training students facile in both theoretical and experimental neuroscience. Unfortunately, most computational neuroscience training programs are designed to attract a cadre of theorists trained in physics, without a deep understanding of experimental neuroscience. Moreover, many of the 'tricks' of physics (scale independence, averaging) are unlikely to apply to biology.

–       A key component in the development of physics, and science in general, in the 17th century was the invention of the scientific journal. However, the traditional scientific journal structure is not in the least amenable to the description, let alone the reuse, of models as complex as neurobiologically realistic ones. Accordingly, new methods of publication need to be invented. Fortunately, the Internet provides an ideal opportunity to do so.

–       The community as a whole must seek and support the construction and understanding of community models.  The development of community models must actually be promoted and encouraged by modelers themselves, many of whom also prefer to build their own. In principle the capacity for understanding and sharing community models should be a central feature of both a new publication system, as well as neurobiological education.

–       In fact, perhaps most importantly, as Kuhn himself pointed out, the methods used to educate the next generation of neuroscientists are a key indication of the presence of a paradigmatic science: "The study of (these) paradigms… is what mainly prepares the student for membership in the particular scientific community with which he (sic) will later practice" (pg. 10, Kuhn, 1962). Instead of studying a common set of "paradigms", our graduate students are subjected to a set of descriptive biological facts and purported mechanisms devoid of any underlying mathematical model or description. I have recently proposed that it would serve the field in its transition if the usual neurobiology survey course were replaced by a semester-long, or even year-long, study of the origins and structure of the Hodgkin-Huxley model, as this represents as close to an established paradigm as we have for one level of neuroscience. Further, as I have stated, the historical development of the model in my view reflects a fundamentally Newtonian method. In fact, however, the majority of our introductory textbooks do not even include the model's relatively simple core differential equations. Furthermore, at present, admission standards for most graduate programs in neuroscience do not require the mathematical skills necessary to understand or build mathematical models of any sort. Few graduate programs currently require, or even offer, courses in this or any other kind of modeling.

–       Finally, for all these reasons, extreme caution is warranted in considering small or large-scale theories of brain function. Those theories divorced from the actual physical structure of the brain's neurons (I include almost all cognitive theories in this category) should be particularly suspect. Further, it is critical for the field as a whole to recognize and understand the difference between Newtonian and Ptolemaic-style modeling. In my view, until that happens, neuroscience will remain fundamentally folkloric: dependent on what is, in effect, storytelling, without a clear and organized direction forward.
