Why are we doing this and where did Whyville come from??

Well, I suppose, in some sense, the reason that I finally decided to jump into the blogosphere was to try to provide an answer to those questions.  As may perhaps now be apparent, the answer is kind of complex, and it has become more complex as time has gone on, as it becomes ever more apparent why we would have done this "had we known then what we know now".  The Whyville team and I will be forever grateful to Whyvillians for that ongoing education.

However, although still early in the overall exposition, perhaps you will indulge an effort to explain why, in 1984, I THOUGHT I should start playing with games embedded in social (virtual) worlds as a mechanism for engaging elementary and middle school aged children in science education.

The answer then had to do with computer simulation technology, Whyville's core technology, and the importance of models as tools for understanding complex things.  It also came from a deep sense that many, even in science and especially in biology, did not themselves understand the importance of model-based discovery.

Three weeks ago I helped to organize a meeting in Cambridge, England, celebrating the remarkable accomplishment of Sir Alan Hodgkin and Sir Andrew Huxley, who 60 years ago built a mathematical model as a tool to understand the way in which neurons communicate with each other electrically.

Without mentioning Whyville of course (few in neuroscience know about my ‘other life’), my opening talk at the meeting asked how it is that the results of the Hodgkin-Huxley model have been largely accepted, while their modeling methods still seem foreign to so many biologists.  After all, IT HAS BEEN 60 YEARS!!!  In fact, while giving the introduction, it occurred to me that the meeting in Cambridge might have been the first in the entire history of biology to be organized around an actual model.   Physics has been organized around and by models for 400 years.

This observation (another example of learning by doing) in turn inspired me to submit the commentary copied below to Nature, which has just rejected it because of “pressure on space in our pages”.

No such pressure here however 🙂  although I realize that this commentary may stretch the interests and willingness of those who follow this blog.  I am hoping that those who do fight through it might better understand Whyville’s origins as well as my own perhaps somewhat over-assertive commitment to Whyville as an idea.

 

Commentary on:  “60 Years of the Hodgkin-Huxley Model.  In celebration of the 60th anniversary of the publication of the Hodgkin-Huxley model of the action potential”, Cambridge UK July 11-13, 2012.

 

The contrasting roles of standard models in biology and physics: considering the 60th anniversary of the publication of the Hodgkin-Huxley model for the neuronal action potential.

The announcement of the Higgs boson on July 4 attracted widespread attention among physicists and the general public in large part because it confirmed a theoretical prediction made almost 50 years earlier regarding a particle key to the relation between elementary particles and the forces between them. As such, the discovery of the Higgs boson has been reported as a triumph for the partnership between theory and experimental practice in physics.  A week after that announcement, a symposium took place at Trinity College in Cambridge, England, celebrating the 60th anniversary of the original publication of the Hodgkin-Huxley (HH) mathematical model for the initiation and conduction of the neuronal action potential, which provides a fundamental mechanism for communication between neurons.  Like the Standard Model of elementary particles, the original publication of the HH model both unified a diverse set of experimental observations and made a series of predictions for phenomena not yet observed, or at the time not observable. As was made clear at the symposium at Trinity College, experimental research in the subsequent 60 years has largely confirmed those predictions and placed the HH model at the center of our understanding of the electrical activity of nerve tissue throughout the animal kingdom, from the squid’s giant axon to human brain cells.

While Alan Hodgkin and Andrew Huxley received a share of the Nobel Prize in 1963 for their work, the success of their model in predicting the ionic processes underlying the generation and propagation of the action potential remains largely unheralded, even within neuroscience.  Most neuroscience textbooks instead simply refer to the HH model as a “description” of the ionic basis of the action potential, failing to include any discussion of the scientific process represented by the model, or its role in organizing and leading 60 years of subsequent experimental and theoretical investigation. Typically, there is also no mention of the fact that the HH model today provides the basis for most ongoing efforts to build realistic models of brain circuits and to understand brain function and dysfunction.  While the discovery of the Higgs boson is lauded as a triumph for the Standard Model of elementary particles, the similar accomplishment of the model built by Hodgkin and Huxley is largely neglected.

Prior to the HH model, in the late 19th and early 20th centuries, there was considerable disagreement and confusion about the cellular and biophysical mechanisms responsible for the action potential.  While the action potential itself had been recorded as early as the mid-1860s by the German physiologist Julius Bernstein, there was considerable debate regarding both the ions involved and the mechanism(s) responsible for their movement across the membrane.  In 1937 Alan Hodgkin showed that the action potential depends on regenerative changes in electric charge movement across the membrane, with the change in potential propagating down the axonal fiber.  Contrary to the then prevailing view (associated with Bernstein) that these ionic movements resulted from a transient breakdown of the axonal membrane, Hodgkin and Huxley, working together, showed that the action potential exhibits a brief transient period when the normally negative membrane potential becomes positive, an “overshoot”, requiring a more sophisticated membrane mechanism than previously assumed.  After World War II, Hodgkin and Huxley returned to their experimental work, using a state-of-the-art feedback amplifier to perform voltage and space clamp measurements on the squid giant axon.  Combining the voltage clamp with ion replacement experiments, they measured for the first time, in detail, the flows of potassium and sodium ions across the membrane and the corresponding conductance changes generated during the action potential.

This experimental work was published in a remarkable series of five papers in the Journal of Physiology in 1952. While the first four described the experimental results, the crowning achievement was the fifth paper, which included the mathematical model itself in the form of four ordinary differential equations.  Even today, this sequence of 4 + 1 represents one of the best, and perhaps one of the clearest, demonstrations of the value and proper use of models in biology, exemplifying the links between theoretical ideas and experimental studies.
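
For readers who have never seen them, the four equations take the following general form (written here in the modern sign convention rather than that of the 1952 papers): one equation for the membrane potential V, driven by sodium, potassium, and leak currents, and three for the gating variables m, h, and n that control the sodium and potassium conductances:

\[ C_m \frac{dV}{dt} = I_{ext} - \bar{g}_{Na}\, m^3 h \,(V - E_{Na}) - \bar{g}_{K}\, n^4 (V - E_{K}) - \bar{g}_{L}\,(V - E_{L}) \]

\[ \frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\, x, \qquad x \in \{m, h, n\} \]

The first equation expresses conservation of current across the membrane capacitance; the second, applied to each gating variable, describes simple first-order, voltage-dependent kinetics whose rate functions α and β Hodgkin and Huxley fit directly to their voltage clamp data.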

Hodgkin and Huxley themselves were very aware of this unifying use of their model, making it clear in their paper that, more than a description of the phenomena, the model was an essential investigative tool.  Thus, they state:

“In order to decide whether these (experimental) effects are sufficient to account for complicated phenomena such as the action potential and refractory period, it is necessary to obtain expressions relating the sodium and potassium conductances to time and membrane potential.”

“expressions” in this case of course being the model’s equations, which both provided concrete definitions of the processes involved and a means to link separate experimental results into a larger understanding of mechanism.  In addition to helping coordinate the experimental results, the model was also used to explicitly rule out mechanisms that were inconsistent with observations:

“…  we shall consider briefly what types of physical system are likely to be consistent with the observed changes in permeability.”

and:

“The object … is to show that certain types of theory are excluded by our experiments and that others are consistent with them.”

In this way, Hodgkin and Huxley used the model to test and reject existing ideas about the origin of the action potential, including, importantly, their own:

“Consider(ing) how changes in the distribution of a charged particle might affect the ease with which sodium ions cross the membrane … we can do little more than reject a suggestion which formed the original basis of our experiments (Hodgkin et al., 1949).”

To this day, perhaps the highest (and rarest) mark of any model is to rule out the author’s own previous beliefs and speculations.

Beyond testing proposed mechanisms, perhaps the greatest achievement of the HH model was in making a series of predictions related to membrane mechanisms not yet described and data not yet obtained or obtainable.  Specifically, the core prediction of the model was that the movements of sodium and potassium ions through the membrane are independent and controlled in different ways.  While Hodgkin and Huxley could not have known the underlying biophysical mechanism at the time, their model, in effect, predicted not only the presence but also the core functional properties of individual membrane-bound ion channels, which were not clearly identified until the invention of patch clamp recording techniques, for which Erwin Neher and Bert Sakmann shared the Nobel Prize in 1991.

It is our view that in a science still dominated by descriptive studies, in which the large majority of submitted grants and research projects do not reference or include a quantitative theoretical basis for the work, the history and success of the HH model stand as a testament to the value of modeling, theory, and simulation in understanding complex phenomena.  By not emphasizing the predictive nature of the HH model, the relationship between its construction and testing against experimental data, and the subsequent success of its predictions, we deny our students knowledge of a critical component of the scientific process and one of its greatest successes to date.

The plaque unveiled at the symposium, which now pays tribute to the experimental and theoretical work on the neuronal action potential performed at Trinity College.

As was clearly apparent at the symposium at Trinity College, the HH model, like the Standard Model of particle physics, continues to provide the quantitative underpinning for our understanding of the electrochemical properties of the brain and, in particular, how its neurons communicate with each other and with the outside world. The HH model and its derivatives also provide the foundation for almost all efforts to build biologically realistic brain models, including the compartmental modeling techniques introduced by Wilfrid Rall and his collaborators in the 1960s.  All the major simulation software packages, including GENESIS and NEURON, are based on the HH equations, as are large-scale simulation projects like the Blue Brain Project, which aims to model the mammalian cerebral cortex. These computational efforts, however, continue to involve a relatively small number of neuroscientists and an even smaller number of experimentalists. Perhaps revisiting the history of the HH model, and presenting the model as a set of predictions rather than as the now accepted description of the action potential, might provide a pathway for more neuroscientists, and perhaps more biologists in general, to value, understand, and participate in modeling studies.
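
To make concrete what “based on the HH equations” means in practice, here is a minimal numerical sketch in Python of the four equations for a single patch of squid-axon membrane, using the commonly quoted textbook parameters (modern sign convention) and simple forward-Euler integration.  It is only an illustration of the underlying mathematics, not an example of how packages like GENESIS or NEURON are actually implemented.

import numpy as np

# Classic squid-axon parameters (modern sign convention, rest near -65 mV)
C_m = 1.0                              # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3      # maximal conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4    # reversal potentials, mV

# Voltage-dependent rate functions for the gating variables m, h, n
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

def simulate(I_ext=10.0, dt=0.01, t_max=50.0):
    """Forward-Euler integration of the four HH equations for a constant current step."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32   # approximate resting values
    trace = []
    for _ in range(int(t_max / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K  = g_K * n**4 * (V - E_K)
        I_L  = g_L * (V - E_L)
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
        trace.append(V)
    return np.array(trace)

if __name__ == "__main__":
    v = simulate()
    print("peak membrane potential: %.1f mV" % v.max())

With a modest current step such as this one, the simulated membrane potential spikes well above 0 mV, the very overshoot that Hodgkin and Huxley first reported and that their model was built to explain.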

Why aren’t my kids gardening?

One of the benefits, or perhaps costs, of having been in the business of developing games for learning for 25 years is that you can see trends, especially with respect to the kinds of questions that come from audiences.  There was a time, not really that long ago actually, when a talk on kids playing games online was met with skeptics who said that computers would never be cheap enough, or the Internet common enough, for kids to have widespread access.

Ha!!

Another concern raised when talking about complex games, especially in meetings full of academics, was that young children didn’t have the cognitive ability to manipulate abstract ideas or understand problems with multiple variables.  I remember replying once, “you must not have kids”, as, in my experience, mine were more than adept at manipulating the complex relationship between two adults that (by definition) predated their existence.

In any event, in recent years perhaps the most frequent audience response when I present data on the remarkable number of minutes, hours, and years that kids (now with access) are spending manipulating variables and abstract ideas on Whyville is that all this use of the Internet is bad for children, that they should really be doing something else.

My usual response is to say that we actively design games and activities that push kids off into the real world, and that we have been in the business of ‘blended’ reality from the outset; however, I find the persistence of this particular question remarkable

–      BECAUSE –

The important question, it seems to me, is not how much time kids are spending playing online games; the important question is what they aren’t doing now that they were doing before.

Turns out, it isn’t gardening.

What they were doing was watching television.

Which raises the somewhat more specific question: is playing online games better or worse than watching television?

I won’t bore you with the abundant neurobiological, cognitive, behavioral, etc. evidence that almost anything ACTIVE is better for young minds than almost anything passive.

I won’t even expand on my own deeply held conviction that plugging into content delivered sequentially by supposedly ‘child friendly’ networks like Nickelodeon and The Cartoon Network leads to much more brain rot than children actively making their own choices as to where they go, what they see, and what they do.

And is there any doubt which of these technologies is going to be in their best eventual economic, political, and social interest?

Instead, I want to ask the question: why isn’t this obvious to everyone?

The easy and context-specific answer, of course, is that often the people who ask this question in the conferences and meetings where I speak represent (or are simply parroting) the media industry’s desperate concern that children would rather do their own thing than “their” thing, and that they now can.  Those industries and their representatives are simply defending turf and trying to scare the rest of us off.  Understandable, but shame on them (and good luck).

However, I think that there is another reason that this concern continues to resonate.  And that reason is pretty simple actually: most of us older folks did not have access to the Internet when we were young and “impressionable”, but we did have access to TV.  For this reason, TV is familiar, something we know how to deal with and don’t fear; the Internet, in contrast, seems new and scary.  In fact, when I was young, I had to defend my use of TV against my grandparents, who didn’t think that children should be allowed access to such a powerfully influential device at all.  (They were also convinced that the Beatles were going to destroy America; the verdict is perhaps still out on that one.)

Just to be clear, I am not advocating that as adults, parents, grandparents, teachers, government officials, game developers, etc., we simply throw up our hands and pay no attention.  The truth is that this new technology is very powerful in its ability to engage and potentially manipulate children (as well as the rest of us).  Some part of the concern over kids on the Internet is almost certainly a consequence of the other side of active vs. passive: active minds are more susceptible to manipulation (as well as learning).  We need to be vigilant about who is trying to do what to our kids using this new technology.  No matter how suspect the programming at Nickelodeon or The Cartoon Network, at least it isn’t that effective.  The Internet can be VERY effective.

But the bottom line, obviously, is that the cat is out of the bag on the Internet.  Our challenge now is not how to keep kids from using it, but how to ensure that its use is to their maximum benefit.

Gamification: Is a game by any other name still a game?

It occurred to me, as I packed my pirate outfit after Games For Change and headed to yet another airplane, that in principle it is now entirely possible to remain continually on the road attending conferences discussing the “gamification” of education.

In fact, considered worldwide, this is physically impossible, quantum tunneling aside (see, for example, http://elearningtech.blogspot.com/2011/11/elearning-conferences-2012.html).

However, as I have sat in the audience this spring, either physically or virtually, at InPlay (http://www.inplay2012.com/), GDC (http://www.gdconf.com/), SxSWedu (http://sxswedu.com/) taking place at the same time in the same city (Austin) as SITE (http://site.aace.org/conf/), E3 (http://www.e3expo.com/), GLS (http://www.glsconference.org/2012/index.html) taking place at the same time as Games for Health (http://www.gamesforhealth.org/index.php/conferences/gfh-2012/), followed immediately by Games for Change (http://www.gamesforchange.org/) and, a week later and most recently, ISTE (http://www.iste.org/conference/ISTE-2012.aspx), I found myself pondering not so much the NP-complete (i.e. computationally intractable) traveling salesman problem (http://en.wikipedia.org/wiki/Travelling_salesman_problem) as what the heck we are all doing and, in particular, whether we are really all talking about the same thing.

I have no interest whatsoever in launching into an academic discourse on games and types of games, but it does seem to me that if you are promoting some form of ‘ification’, then some thought should be given to the operator (in this case ‘game’) that is being applied.

This may be particularly important in the case of learning, as we ARE proposing to “ify” something that already exists with its own purposes and long history (see prior post).

In figuring out if we are all on the same page, or close to it, a good place to start might be the “100 Games Everyone Should Play” list now being crowdsourced by the “gamifiers” at http://the100.esidesign.com/.  Presumably such a list could tell us something about how we are defining games, and certainly about which games we ourselves value.

The current top ten:

1) Chess
2) Settlers of Catan
3) Tetris
4) Dungeons and Dragons
5) Portal
6) Go
7) Civilization
8) SimCity
9) Super Mario Brothers
10) Pandemic

I do not mean to claim that this is an authoritative list; crowdsourcing isn’t like that.  But the thing that is perhaps most striking to me about this list we generated is how different these games are from many, if not most, of the so-called learning games I have seen demoed at this spring’s meetings.  First, half the games on the top 100 list explicitly depend on interaction between players, or in other words, an active social context.  Many if not most learning games are still single player.  Second, in half of the top six games (Chess, Tetris, and Go), the game establishes rules, but the “narrative”, or how the rules play out, is provided by the human(s) playing the game.  In contrast, a nearly constant theme in gaming and learning meetings is the importance of a strong narrative to engage students (the players).  In fact, only one of the top ten games is dominated by a developer-supplied narrative (Super Mario Brothers).

So why then do the learning games we are constructing not include core features of the games that we ourselves appear to value?

The answer, unfortunately, I think is simple: for far too long the educational system has been dominated by content over process, by the desire to teach individuals the facts we think they need to know.  In talks over the last 25 years I have repeatedly pointed out that the close and deep connection between storyboarded video games and storyboarded curricula is dangerous.  In both cases, the developer’s objective is to lead players through from the beginning to the end.  In the case of the traditional video game industry, start, play, and finish means that players use up a game and are primed to buy the sequel, a historically good business model, but one not likely to be so good for education.

However, it is clear that the games that stand the test of time (2,000+ years for Go) are either those in which the storyboarding is so sophisticated that it isn’t apparent (SimCity, for example), or those in which the narrative and vitality are created by the players themselves.  In fact, it took the video game industry some time to understand that the vitality in even strongly storyboarded single-player shooter games came from the social context the players built, on their own, around those games.

So, the final question I would ask is whether we shouldn’t be using our own experience with games we like to guide our development of learning games for others.  Perhaps it is more important to consider the way we like to play before we build in too much “what we want you to know” or, worse yet, “what we want you to believe”.