Free MOOC course: Introduction to Complexity


In association with the Santa Fe Institute, Melanie Mitchell will teach a free online course called Introduction to Complexity starting on January 28, 2013. Mitchell has been working in complex systems research for years. Her Ph.D. advisors were Doug Hofstadter of Gödel, Escher, Bach fame and John Holland, a towering figure in the study of complex adaptive systems and author of the influential 1975 book Adaptation in Natural and Artificial Systems. According to the intro video and the course FAQ, it’s aimed at non-specialists:

This course is intended for anyone with an interest in complex systems. For this introductory course, there are no prerequisites, and no science or math background is necessary. The level will be similar to that of an interdisciplinary undergraduate class, though the topics are broad enough to be of interest to people ranging from high school students to professionals.

To register to earn a certificate of completion, go here. One can watch the course videos without registering, though one won’t be able to take the final exam or participate in the student forum.

Hat tip to Sean Carroll.

48 Comments

Hm. I just noticed that the course is supported by a grant from the John Templeton Foundation. Weird.

I tend to be a bit suspicious of such hand-waving courses. If what we see in the popularizations of this area is any indication, this is the kind of course that can lead to all sorts of woo-woo extrapolations away from fundamental physics and chemistry.

We already see “information” being misused to “overcome” the laws of physics to produce complexity: yet there are never any equations that tell us how this information pushes atoms and molecules around, or how the effects of information vary with distance.

I have nothing against the study of complexity; I have used some of these ideas in my own research. But much of what is still coming out of this field, while interesting and useful, is still of a phenomenological nature. It is mostly descriptive; and without the mathematics, it is hard to see how the course will be able to relate any of this to the actual forces and energies of interaction that connect it with the fundamentals of chemistry and physics.

In the hands of naive laypersons, this kind of course could easily be misinterpreted as support for more junk science.

Mike Elzinga said: I tend to be a bit suspicious of such hand-waving courses. If what we see in the popularizations of this area is any indication, this is the kind of course that can lead to all sorts of woo-woo extrapolations away from fundamental physics and chemistry.

Mike, did you look through Mitchell’s background? Consider this:

My research interests: Artificial intelligence, machine learning, and complex systems. Evolutionary computation and artificial life. Understanding how natural systems perform computation, and how to use ideas from natural systems to develop new kinds of computational systems. Cognitive science, particularly computer modeling of perception and analogy-making, emergent computation and representation, and philosophical foundations of cognitive science.

I don’t see any woo there. Mike wrote further

We already see “information” being misused to “overcome” the laws of physics to produce complexity: yet there are never any equations that tell us how this information pushes atoms and molecules around, or how the effects of information vary with distance.

Again, I see no indication of that approach there. I think there might be a levels of analysis issue, analogous, say, to looking at evolution at the population level or organismal level without immediately invoking events at the molecular level. We can sensibly talk about the selective pressure on a population that’s imposed by resource scarcity without immediately dropping down to the Krebs cycle in cells. So I’m not sure what you’re reacting to in that paragraph, Mike, since there are no specific references.

Mike went on

I have nothing against the study of complexity; I have used some of these ideas in my own research. But much of what is still coming out of this field, while interesting and useful, is still of a phenomenological nature. It is mostly descriptive; and without the mathematics, it is hard to see how the course will be able to relate any of this to the actual forces and energies of interaction that connect it with the fundamentals of chemistry and physics.

Once again, there’s a levels of analysis issue. Sure, the behavior of a complex adaptive system like a biological population or an economy is composed of the behavior of individuals, but I’m not sure how your wariness of some sort of top-down causal inference invalidates studying, say, the math of popgen without knowing (or without referring to) the specifics of the DNA-transcription-RNA-translation-Protein process.

And what’s the choice, given a largely math-illiterate audience? Baffle them with incomprehensible equations or attempt to translate the subject matter into a language consistent with where a lay audience is?

Mike went on

In the hands of naive laypersons, this kind of course could easily be misinterpreted as support for more junk science.

Again, I have no idea what you’re kvetching about here in the absence of references.

Look through Mitchell’s publications and point out the woo-mongering there.

There will be some who drift off into woo, and that’s unfortunate. I had a recent letter to the editor of the Columbus Dispatch about a feng shui practitioner who claimed her practice was informed by “quantum physics and mathematical formulas.” (I was a little surprised they let the “crank alarm” phrase through without editing.) Ain’t no way to ensure that won’t happen, but it’s still worth trying to teach basics to lay people. The alternative is total ignorance.

I would not expect Melanie Mitchell to engage in any woo – perhaps the Templeton Foundation is mistakenly hoping for it.

The Santa Fe Institute approach to Complexity tries to find generalizations that will apply to all or most complex systems. This goal has been elusive so far, except for one generalization: things can get very complicated, and behavior that you might think was brought about by natural selection or by design can sometimes simply arise as a byproduct of complex systems connected at random.

Frequently courses like this investigate particular classes of complex systems and show some of the particular unusual behaviors that occur in them.

But if you want useable generalizations that apply to all (or even most) complex systems, you will have a hard time finding them.

I don’t have time to be one of the MOOChers who take the course, but it would be interesting if someone who does take it could report back here at the end of the course on what generalizations about the behavior of complex systems have been discovered.

Joe, is the (apparently) widespread occurrence of small world networks one such generalization?

I’m considering taking the class. Looks like woo-free fun; my hope is that there will be enough hands-on simulation examples to clarify, for example, how systems can spontaneously self-organize. The trick is to do this in a way that will make sense to a “survey course” audience.

My other hope is that some of our resident design advocates will take the class and perhaps learn a little bit about the topics they claim they’re interested in. Steve P., you listening?

I didn’t mean to come across as impugning the reputations of Melanie Mitchell and the Santa Fe Institute; I know they do good work out there. I read the syllabus before I made my comment.

As I said, I have made use of some of this stuff in my own research. The patterns of complex behavior and the emergence of new phenomena from that complexity have lent themselves to some fine mathematical analysis as well as giving new perspectives on complex behavior.

You can trace this stuff back to Henri Poincaré (chaos theory), to Simon Newcomb, to Frank Benford (Benford’s Law), to percolation theory and scaling, and to Mandelbrot and fractals. There are all the fine rules and mathematical relationships that get folded into the heuristics that allow for analysis of everything from the stock market to large computer networks, neural networks, and the nervous system; that allow for control of cardiac arrhythmias; and a whole lot of other practical stuff. Yes, it’s neat stuff; I know, I’ve used it.

On the other hand, the field of complexity is getting a lot of abuse in the popular media. The ID/creationists are among the worst abusers – no surprise; they abuse everything – and they are getting most of their information from the popular media just as they got their misinformation from the popular media about chemistry, physics, and biology.

As with any complex field, I have some reservations about courses like this that don’t have any prerequisites. They are often well-intentioned; and they are often part of the generalized educational component of a college education. I have some bad memories of how courses like this contributed to spreading misconceptions about physics and chemistry; especially things like entropy and the second law.

The popularizations of this field I see in the bookstores leave the impression that there are some forms of “new forces” or “energies” being discovered that override the laws of chemistry and physics. This is especially true of popular abuses of the concept of emergence.

My apologies if I came off as a bit crabby (I’m going into my 5th week with the shingles; but I didn’t think it was affecting my writing); but I still think there is potential in such a course for misinterpretation and misrepresentation despite the best of intentions and the best of talent teaching the course. Popularizations are extremely difficult to pull off; and most of them spread misconceptions (maybe it’s some kind of new law).

Richard B. Hoppe said:

Joe, is the (apparently) widespread occurrence of small world networks one such generalization?

I think I was raising the question of whether the dynamics of things that happen in networks show any lawlike patterns (other than that “it can be complicated”), patterns that will hold over various sorts of complex networks. That includes ones with small-world connectivity, ones that are lattices, ones that are randomly connected with low or with high connectivity, ones that have random links in lattices, etc.

So the issue is not so much what kinds of networks there are out there, but what generalizations apply to all of them.

(For those who get the reference: an interesting detail of my own family history is that my parents actually used to know Kevin Bacon’s parents!)
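Those classes of networks are easy to generate and compare directly. Here is a minimal sketch, assuming Python’s networkx library (not anything from the course) and arbitrary illustrative parameters, that builds a ring lattice, a Watts–Strogatz small-world network, and a random graph of the same size, then compares the two quantities such generalizations are usually stated in terms of:

```python
# Compare a regular lattice, a small-world network, and a random graph
# on average path length and clustering (illustrative parameters only).
import networkx as nx

n, k, p = 1000, 10, 0.1  # nodes, neighbors per node, rewiring probability

graphs = {
    "ring lattice": nx.watts_strogatz_graph(n, k, 0.0),           # p = 0: regular lattice
    "small world": nx.watts_strogatz_graph(n, k, p),              # a few random shortcuts
    "random (Erdos-Renyi)": nx.gnp_random_graph(n, k / (n - 1)),  # same expected degree
}

for name, g in graphs.items():
    # Path lengths are only defined on a connected piece, so use the giant component.
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    print(f"{name:22s}  avg path length = {nx.average_shortest_path_length(giant):5.2f}"
          f"  clustering = {nx.average_clustering(g):.3f}")
```

The usual observation (a property of these particular models, not a law of all complex networks) is that the small-world graph keeps the lattice’s high clustering while its average path length drops to nearly that of the random graph.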

Popularizations are extremely difficult to pull off; and most of them spread misconceptions (maybe it’s some kind of new law).

I’m not sure that’s as much the case as you may be worrying. (Good luck with the shingles, by the way.)

It isn’t the case with some of the major popularizing efforts in evolutionary biology.

I went to a Dawkins talk a couple of years ago, with a friend who has less science background and is more into the anti-religion thing. I may not be the biggest Dawkins fan in the world, but he did a perfectly good job accurately conveying some very basic biology. In fact for many years I criticized his failure to incorporate basic molecular biology, and he now makes that a part of his act (not due to feedback from me).

Stephen Jay Gould, whom I massively prefer as a writer, was less socially, but more scientifically, controversial than Dawkins, but he did a good job of educating rather than misleading the public overall.

Carl Sagan, Stephen Hawking, Richard Feynman…all known for enlightening rather than misleading.

There are many other examples.

“Goedel, Escher, Bach” is a bit light but not particularly misleading.

I may try to sign up for this course. I’ll probably be too busy, and I’m already familiar with most of the material at the layperson level.

Resident creationist trolls will run FROM this course as fast as possible.

I loathe the Templeton Foundation. They’re (accurately) accused of trying to force religion into science, but their over-riding passion is to ennoble greed-based devil-take-the-hindmost economics with pompous, ostentatious, and empty ethical declarations. They are one of the many sources of funding that makes being a talentless sycophant to billionaires a better paying and more leisurely career than most actual useful jobs (although oddly all that good pay and leisure never seems to make the sycophants happy).

However, both because they are a far more sophisticated outfit than the DI, and also sometimes by accident, they do occasionally fund something decent.

(For those who get the reference: an interesting detail of my own family history is that my parents actually used to know Kevin Bacon’s parents!)

I proudly have no connection to Kevin Bacon that I know of, but my brother and a mutual best friend of ours are both in the entertainment business, so there’s a constant risk.

harold said:

Popularizations are extremely difficult to pull off; and most of them spread misconceptions (maybe it’s some kind of new law).

I’m not sure that’s as much the case as you may be worrying. (Good luck with the shingles, by the way.)

It isn’t the case with some of the major popularizing efforts in evolutionary biology.

I have generally had the impression that writers in the biological sciences are better than those in physics or chemistry; but that may be just my own biases.

It may simply be the fact that trimming out mathematics makes it harder to explain physics and chemistry concepts. I know that has been the case for popularizations of entropy and the second law; and it is especially true with quantum mechanics and relativity. Writers in these areas have been intimidated by publishers into removing the math in order to sell more books. They remove the math but haven’t thought deeply enough about articulating concepts without the math.

I am not entirely convinced that removing all mathematics is the right thing to do; and I say that having been a person who has generally advocated that instructors in physics be able to also articulate physics concepts without using the math.

Removing the math is not easy; and most inexperienced instructors fall back on math when they shouldn’t. Just writing down equations is not enough. But making that transition from automatically using math to being able to explain concepts without the math is something many popularizers have tried to do without sufficient thought.

Sure, there have been a few good popularizers of areas of math, physics, and chemistry; but, as near as I can estimate, not as many as there have been biologists.

On the other hand, it may be my own detailed familiarity with math and physics – as well as having considerable experience with dealing with misconceptions in these areas – that makes me more critical of popularizations of math, chemistry, and physics than I am of popularizations of biology.

But there is something about the writings of many of the biologists that seems more engaging to me. I really do think they are generally better writers; but, as I said, I may simply be less critical of biology popularizations.

This Templeton Foundation seems a bit creepy to me also. Large grants for making muddle-headedness seem erudite and intellectually in vogue just muddy the issues while trying to give old superstitions some form of scientific legitimacy. I can understand religions as traditions and sources of social cohesion that also direct people’s concerns toward those less fortunate; but I think that trying to modernize medieval scholasticism is a bit pretentious.

Mike Elzinga said:

It may simply be the fact that trimming out mathematics makes it harder to explain physics and chemistry concepts. … I am not entirely convinced that removing all mathematics is the right thing to do; and I say that having been a person who has generally advocated that instructors in physics be able to also articulate physics concepts without using the math. …

I’m inclined to agree with the part about taking all the math out.

Isaac Asimov (who was a biochemist) wrote a really good popularizing book on physics. He left most of the math in. To some degree the book is just a high school physics curriculum, written in a more entertaining style. http://books.google.com/books/about[…]pSKvaLV6zkcC

One popular mathematics writer who was as good as they come by any measure was David Foster Wallace (Everything and More: A Compact History of Infinity) - and he uses a good deal of math.

Harold Wrote:

Isaac Asimov (who was a biochemist) wrote a really good popularizing book on physics. He left most of the math in. To some degree the book is just a high school physics curriculum, written in a more entertaining style.

Asimov was also one of the writers who contributed to the misconception that entropy was about disorder. The ID/creationists really latched onto his popularizations; as they also did with Robert Bruce Lindsay (See the footnotes to Morris’s writings). Both Asimov and Lindsay were good writers, but used some poor metaphors at times.

phhht Wrote:

One popular mathematics writer who was as good as they come by any measure was David Foster Wallace (Everything and More: A Compact History of Infinity) - and he uses a good deal of math.

William Dunham and Paul Nahin are also good; but then they don’t leave out the math. ;-)

Mike,

Sorry to hear about the shingles – bloody awful thing to have. I think, though, that you’re being too harsh here. Sure it’s a popularisation, and by definition that means they’ll skim over or simplify some critical concepts, but they can still do a good job of covering the fundamental concepts, some real-life examples, and if they are smart, common misconceptions. And sure, creationists will either ignore it or do the course for the sole purpose of finding erroneous justifications for their own fallacies – but this is equally true of books by Gould, Hawking, Darwin, Maynard Smith, and so on, all of which have been grist for the creationist quote-mill.

My only real qualm is that it’s supported by the Templeton Foundation, but having just looked over the lecture titles it looks pretty solid and I’m willing to give them the benefit of the doubt until such time as I see actual evidence of shonky misapplication of complexity in the course.

harold said:

Isaac Asimov (who was a biochemist) wrote a really good popularizing book on physics. He left most of the math in. To some degree the book is just a high school physics curriculum, written in a more entertaining style. http://books.google.com/books/about[…]pSKvaLV6zkcC

Regarding Asimov’s contribution to the conflation of entropy with disorder, see page 239 of the book to which harold linked.

Chris Lawson said:

Mike,

I think, though, that you’re being too harsh here.

My apologies.

Thank heaven for writers able to convey understanding of complex subjects without resorting to math. Were it not for them I would not be who I am. Ohm’s law may be very basic and trivial but the principles of electricity may well be understood without math. But the law has been a very handy tool. I realize things may be somewhat trickier with quantum physics, but I have no need to handle elementary particles.

Mike Elzinga said:

harold said:

Isaac Asimov (who was a biochemist) wrote a really good popularizing book on physics. He left most of the math in. To some degree the book is just a high school physics curriculum, written in a more entertaining style. http://books.google.com/books/about[…]pSKvaLV6zkcC

Regarding Asimov’s contribution to the conflation of entropy with disorder, see page 239 of the book to which harold linked.

I had forgotten that, but overall, it’s still a good survey of basic physics for lay people. In fairness to Asimov, that analogy was extremely common at the time.

Creationist nonsense about entropy makes no sense as an argument against evolution, even if entropy did have something to do with “order”. When I first heard this argument I used to ask creationists to tell me what the “entropy” of the biosphere during the Jurassic period was, why they think that the “entropy” of the biosphere is less now, and precisely how they think that violates physics. But that was back when I was naive enough to think that creationists had some understanding of the topics they were discussing.

I should add that Asimov’s book is well known to be riddled with minor errors and, in most editions, typos.

It’s also well known for causing young people who read it to develop a strong interest in science.

But enough about that.

Rolf Wrote:

Thank heaven for writers able to convey understanding of complex subjects without resorting to math.

I like at least some math. Unfortunately there’s usually either none, or so much, with so many symbols and terms that I have to look up, that I just don’t have the time to follow. Words simply can’t replace what numbers, charts and graphs can convey. Besides, when anti-evolution activists distort what scientists write, it’s usually easier to show how they ignored, or played favorites with, numbers than to show how they subtly switched definitions of words.

Frank J said:

Rolf Wrote:

Thank heaven for writers able to convey understanding of complex subjects without resorting to math.

I like at least some math. Unfortunately there’s usually either none, or so much, with so many symbols and terms that I have to look up, that I just don’t have the time to follow. Words simply can’t replace what numbers, charts and graphs can convey. Besides, when anti-evolution activists distort what scientists write, it’s usually easier to show how they ignored, or played favorites with, numbers than to show how they subtly switched definitions of words.

Quite a few years ago I decided to try to write a popularization of thermodynamics and statistical mechanics as a result of my frustrations with the errors in other popularizations; even the ones by Peter Atkins, which were generally pretty good.

I started out with no math, but concluded I was talking down to my imagined audience. So I started adding some math, but discovered I had to teach not only some of the math but some of the basic physics as well. The book kept getting longer.

Then I thought I could write it in the form of two parallel levels; one level with very minimal math, the other level with more detailed math, and with cross references between the two levels.

By the time I got fairly deep into that version, I noticed that my audience had changed to undergraduate level science students instead of the interested layperson. I also noticed that I was starting to write a textbook; and since there were already a number of very good textbooks out there, I concluded that I didn’t know how to write a popularization and gave up.

It wasn’t a total waste, however. Some of that material found its way into handouts for my students. It also gave me a bit more respect for the people who do manage to write popularizations, despite the fact that they make occasional errors. I discovered that I could write a textbook more easily than I could write a popularization.

At 82 there isn’t much I need to learn, and I find that learning takes more effort than it used to. I tell myself that my understanding of TD is good enough for my needs, but I would rather make an attempt at getting further into it than watch TV - that’s more like a duty I have to suffer for family reasons. I prefer reading some science book instead; the best of them are good for several readings, when I don’t sneak off to PT etc.

Any suggested reading on TD?

BTW, the Swedish language got a new word: ungoogleable.

The legend is that when Hawking was writing A Brief History of Time, his publisher told him that every mathematical formula that he included would cut his readership by something like 30% (IIRC).

The only one he included was that rather famous 3-term one by Einstein.

Just Bob said:

The legend is that when Hawking was writing A Brief History of Time, his publisher told him that every mathematical formula that he included would cut his readership by something like 30% (IIRC).

The only one he included was that rather famous 3-term one by Einstein.

According to Wikiquote, from A Brief History of Time:

Someone told me that each equation I included in the book would halve the sales. I therefore resolved not to have any equations at all. In the end, however, I did put in one equation, Einstein’s famous equation, E = mc². I hope that this will not scare off half of my potential readers.

Mike Elzinga said: My apologies.

No need to apologise, Mike. As I said, I still have one eye open for shenanigans due to the Templeton funding.

Your story about the thermodynamics book is a good one. I had a similar experience trying to write a popular book about genetic engineering because I was sick of the common distortions in the mass media. But I found that every time I tried to write about one of these misconceptions I had to either assert that the misconception was wrong, or burrow down into the science to prove the point, in the process making it tougher reading than the popular audience is interested in. Despite my multiple objections to the simplifications in pop-sci books by Asimov, Gould, Krauss, and Dawkins, I am still in awe of their capacity to cover relatively difficult concepts in easily understandable language. About the only writers I can say manage to be scientifically exacting while writing on the pop-sci level are Carl Zimmer and Richard Feynman, although I’m sure there are others.

Speaking of complexity, I watched Stephen Meyer’s talk at Cambridge (I don’t recommend it, it was a complete waste of time); and the main thing that comes to mind is the saying, “If an ID/creationist tells you the sky is blue, you need to go outside and check.”

ID/creationists have never conveyed the concepts or processes of science properly; they always get it wrong. In his talk, Meyer wallows in this tactic all the way through.

Meyer reifies the metaphor of “information” as it has been used in describing the complexities and processes in the cell. He is purporting to find that “intelligence” is the only explanation for what is basically a projection on the part of human minds of information onto biological complexity.

“Information” is what some scientists are using to link the patterns that they see; it doesn’t mean that this “information” is really there. It lies in the relationships that scientists are observing and attempting to organize in their descriptions. It’s a metaphor only. One can equally as well use the language of “information” in the description of just about any complex phenomenon.

Mike Elzinga said:

“Information” is what some scientists are using to link the patterns that they see; it doesn’t mean that this “information” is really there. … It’s a metaphor only. One can equally as well use the language of “information” in the description of just about any complex phenomenon.

Mike, I agree that many popularizations don’t do justice to science, at least in properly describing the intricacies and ramifications of the math. It takes a great deal of talent to be able to translate scientific concepts into plain English. But good examples abound. Stephen Hawking’s “The Grand Design” and “A Brief History of Time” and Brian Greene’s “The Hidden Reality” are books written with little or no math that are nevertheless able to teach the basic ideas of where physics is at this time. In fiction, I’ve always admired Michael Crichton’s novels for being able to explain and weave scientific ideas into relatively plausible thriller scenarios (though I’ve never admired his literary abilities).

With regard to your concept of information, I must respectfully disagree. Information is not just “what some scientists are using to link the patterns that they see.” Information is really there, whether scientists see it or not, and not a metaphor.

A great example of this is Hawking’s argument with other physicists regarding black holes. The question was what happens with the information of the stars and planets, etc. that are drawn into a black hole? The outright realization is that those structures have information in them, in the structure of the molecules, atoms, in their motion, etc. Physics says that information cannot be destroyed. So if everything, including light, gets sucked into the black hole, where did all the information that existed in the structure of those objects go? The conclusion of the argument was that the information gets stored on the surface of the black hole. It is conserved.

Moreover, this led to the currently fashionable idea of the holographic nature of our universe, in which all of the information of our universe (and of the other universes in a multiverse) is stored on the surface of the universe, and everything happening inside (including us) is a holographic projection of that information. All heady stuff based on quantum mechanics, but nevertheless it illustrates the point, which is that everything has information stored in it, in its basic structure, motion, etc. And so do all of our biochemical structures in our cells, whether we use that information to understand what is happening or not. Whether or not ID/Creationists use this fact for furthering their argument is irrelevant to the facts themselves.

TomS said:

Just Bob said:

The legend is that when Hawking was writing A Brief History of Time, his publisher told him that every mathematical formula that he included would cut his readership by something like 30% (IIRC).

The only one he included was that rather famous 3-term one by Einstein.

According to Wikiquote, from A Brief History of Time:

Someone told me that each equation I included in the book would halve the sales. I therefore resolved not to have any equations at all. In the end, however, I did put in one equation, Einstein’s famous equation, E = mc². I hope that this will not scare off half of my potential readers.

So there’s a formula for the half-life of book sales. Cool.

Physics says that information cannot be destroyed

No it does not.

Bill Maz said:

With regard to your concept of information, I must respectfully disagree. Information is not just “what some scientists are using to link the patterns that they see.” Information is really there, whether scientists see it or not, and not a metaphor.

I would suggest that you obviously cannot define what is meant by “information” in any of those popularizations you cite.

I am not making my own assertions based on a novice’s reading of popularizations; I can do the math. I not only have read and cringed at the metaphors used in all of those popularizations you mention; I know how the metaphors are being used and misused.

There is not one equation used in the calculation of entropy that computes, in any way whatsoever, a quantity called “information” or a quantity called “disorder;” NOT ONE, NOT EVER.

Every formula is either an integral of energy divided by temperature, or a logarithm of the number of microstates, or an average of the logarithms of the probabilities of a system being in each of the states of a set of microstates. There is nothing about an integral (continuous summation), a sum, or an average that says anything about disorder or “information.”
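For reference, the three standard forms being described there are, respectively, the Clausius, Boltzmann, and Gibbs expressions:

$$\Delta S = \int \frac{dQ_{\mathrm{rev}}}{T}, \qquad S = k_B \ln \Omega, \qquad S = -k_B \sum_i p_i \ln p_i$$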

There is not one partial derivative of “information” with respect to total energy that relates “information” to other thermodynamic variables of a system.

“Information” is a metaphor that refers to interrelationships among the constituents of a system. It is a shorthand way of talking about those interrelationships. It doesn’t mean to physicists what you apparently think it means; and physicists and cosmologists are being sloppy when they use that term with laypersons because laypersons generally do not understand the word in the way it is being used in technical talk.

Just to make the point clearer, here is a little concept test on entropy. (I have given this little concept test before on this site; but rather than trying to locate it on threads that occurred here a year or more ago, I will post it here again for you to contemplate.)

This is a very definitive concept test that can check whether or not someone really knows what entropy is and how it relates to other thermodynamic variables of a system.

Take a simple system made up of 16 atoms, each of which has a non-degenerate ground state and a single accessible excited state.

Start with all of the atoms in the ground state. As energy is added in each step, isolate the system and compute the entropy.

1. What is the entropy when all atoms are in the ground state?

2. Add just enough energy to put 4 atoms in the excited state. What is the entropy now?

3. Add more energy so that 8 atoms are in the excited state. What is the entropy now?

4. Add still more energy so that 12 atoms are in the excited state. What is the entropy now?

5. Add more energy so that all atoms are in the excited state. What is the entropy now?

6. Rank-order the temperatures in each of the above cases.

If you know how to do this concept test, then you will also know that entropy has nothing to do with disorder or “information;” and what is more, you will be able to explain why. You will also have demonstrated that you know how entropy relates to other thermodynamic variables of a system.

Give it a try.
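For anyone who wants to check their answers, here is a minimal sketch of the counting involved, assuming the standard microcanonical expression S = k_B ln Ω, with Ω the number of ways to choose which of the 16 atoms carry the excitations:

```python
# Entropy of a 16-atom, two-level system at each step of the concept test,
# using S = k_B * ln(Omega) with Omega = C(16, n_excited).
from math import comb, log

k_B = 1.380649e-23  # Boltzmann constant, J/K
N = 16

for n_excited in (0, 4, 8, 12, 16):
    omega = comb(N, n_excited)   # number of microstates with this energy
    S = k_B * log(omega)         # zero when omega == 1
    print(f"{n_excited:2d} excited: Omega = {omega:5d}, S = {S:.3e} J/K")
```

The counting gives Ω = 1, 1820, 12870, 1820, 1 for the five cases, so the entropy rises from zero to a maximum at half filling and falls back to zero when every atom is excited. Since 1/T = ∂S/∂E, the temperature is positive below half filling, diverges at half filling, and is negative above it (a population inversion) - which bears on question 6, and none of it involves “disorder” or “information.”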

harold said:

Bill Maz said: Physics says that information cannot be destroyed

No it does not.

Pseudohistorian David Barton makes a pretty good living destroying information.

Mike Elzinga said:

I would suggest that you obviously cannot define what is meant by “information” in any of those popularizations you cite.

…

If you know how to do this concept test, then you will also know that entropy has nothing to do with disorder or “information;” and what is more, you will be able to explain why. You will also have demonstrated that you know how entropy relates to other thermodynamic variables of a system.

Give it a try.

I admit, I am not trained in physics as you are. I was trained in medicine, among other things. And no, I don’t have the math required to talk to you on a technical level about entropy or information theory. I have, however, read widely as a layperson. Having said that, if a layperson wants to talk to me about how the brain works, I don’t first have him or her name the branches of the trigeminal nerve or the components of brain structure before I engage in a discussion.

As I understand information as a layperson, it describes the state a system is in at any particular time. On the subatomic level, an electron’s spin has information in it which can involve it in quantum entanglement with another electron and thus determine the second electron’s spin. That “information” can be said to involve an interaction among electrons, but it is also “stored” in that first electron, at least as I understand it.

In the brain, at least as we understand it now, “information” is stored in the physical states of the neurons (in their structure which, through use, has either “strengthened” their relationship with other neurons or weakened it, resulting in the storage of knowledge or memory). The way a protein in a cell is structured has “information” in it which, through interactions with other proteins or DNA, or RNA or whatever, can affect them and, in essence, “transfer” information through its interactions.

Some of this may be just semantics, but I think we are making a mistake in throwing away the concept of information if we, or at least I, cannot define it in mathematical terms, or at least not yet. I don’t know if anyone has mathematically defined the information transfer between neurons but if they haven’t I don’t think there is anyone who doubts that information is being stored in the brain based on its physical structure at any one point in time.

Now, as a physicist, you can correct me if I’m wrong about my understanding about the example I gave above regarding information not being destroyed with regard to the black holes. As I understand it, as a layperson, information cannot be destroyed, only changed in its form. Again, in terms of a definition of information, I can only give you my non-mathematical understanding of it in that it involves the state an object is in with regard to its quantum state, spin, motion, or whatever. If that’s not good enough, correct me. But it seems inherently obvious to me from my reading that any object has information stored in it simply by its structure.

Bill Maz -

Having said that, if a layperson wants to talk to me about how the brain works, I don’t first have him or her name the branches of the trigeminal nerve or the components of brain structure before I engage in a discussion

I’m a physician not a physicist or evolutionary biologist. I’m a pathologist. My undergraduate training is in biology. I’m probably in the top third or so when it comes to biomedical science people with quantitative interests; maybe a little higher. On the other hand, many physicians and biologists have much more training in physics or computer science than I do. Cross-training is common.

I became interested in combatting evolution denial due to the Kansas School Board debacle of 1999. Evolution is key to all biomedical sciences. A pathologist who denies evolution would be like a chemist who denies the existence of electrons. They could function at a very concrete technical level and some probably do. But a society where electron denial is widespread will not be able to produce competent chemists at a sustainable rate.

I often talk to people about concepts from pathology, and medicine in general (not necessarily in the context of evolution denial).

Nearly always their interest is sincere and reasonable.

Usually science-denying quacks don’t want to engage me.

Usually I would not do the equivalent of challenging someone about the trigeminal nerve, but I would if they were denying or misunderstanding the well-understood science of the trigeminal nerve.

You said above that physics states that information cannot be destroyed. That statement is incorrect. If you think it is not incorrect, explain exactly how physics states this in your own words, with all the relevant equations, and then direct me to a mainstream university level physics textbook that includes the material you have paraphrased.

Your comments seem more intelligent and much more civil than the average creationist comment, but you seem to have an underlying obsession with claiming that “entropy”, “complexity”, and “information” somehow cast doubt on central mechanisms of evolution that are known to exist. They do not. I am very familiar with those topics at an interested lay person/undergraduate level myself. Partly because they came up in formal courses that were either required for my biology degree, or that I took out of interest (I have some mild graduate school level training in statistics and probability). Partly because creationist distortions have the unintended beneficial effect of causing me to review those topics.

Bill,

Excuse me, but you seem to be saying that a human brain cannot be destroyed. Is that correct? If so, I can propose an experiment to test that hypothesis. And, even if that were correct, how is that in any way a problem for evolutionary theory?

harold said:

You said above that physics states that information cannot be destroyed. That statement is incorrect. If you think it is not incorrect, explain exactly how physics states this in your own words, with all the relevant equations, and then direct me to a mainstream university level physics textbook that includes the material you have paraphrased.

Your comments seem more intelligent and much more civil than the average creationist comment, but you seem to have an underlying obsession with claiming that “entropy”, “complexity”, and “information” somehow cast doubt on central mechanisms of evolution that are known to exist. …

Thanks, harold, for your post. First of all, I don’t know why you insist on making me into a creationist. Never have I stated anything that should lead you to believe I am one, unless you regard discussing information or chaos theory as a form of creationism, which I do not, and I don’t think anyone who talks about it in scientific terms does either. And I never insinuated that these ideas cast doubt on the underlying mechanisms of evolution.

In terms of preservation of information, I haven’t had time to direct you to a textbook, but I will simply direct you to Wikipedia.

“Starting in the mid 1970s, Stephen Hawking and Jacob Bekenstein put forward theoretical arguments based on general relativity and quantum field theory that appeared to be inconsistent with information conservation. Specifically, Hawking’s calculations[2] indicated that black hole evaporation via Hawking radiation does not preserve information. Today, many physicists believe that the holographic principle (specifically the AdS/CFT duality) demonstrates that Hawking’s conclusion was incorrect, and that information is in fact preserved.[3] In 2004 Hawking himself conceded a bet he had made, agreeing that black hole evaporation does in fact preserve information.”

I will, if you insist, research further to give you more traditional sources. Wikipedia gives the following reference which I have not yet read. J L F Barbón, “Black holes, information and holography” J. Phys.: Conf. Ser. 171 01 (2009) doi:10.1088/1742-6596/171/1/012009 http://iopscience.iop.org/1742-6596/171/1/012009 p.1: “The most important departure from conventional thinking in recent years, the holographic principle…provides a definition of quantum gravity…[and] guarantees that the whole process is unitary.”

I am not involved, as you seem to be, in combatting evolution denial. Like I stated in other posts, it is a long struggle for which I have neither the time nor the patience. But I do believe that, as scientists, we have a responsibility to educate the public, which is very ignorant of evolution, so my hat is off to you. What I don’t understand is why I hear so much resistance to discussing the concepts I brought up, which don’t challenge evolution, at least as I understand them, but try to enhance it. Yes, I know that Creationists may use these ideas, but so what? They are either true in and of themselves or they’re not. Scientific methodology will prove them right or wrong. This is not about belief but about proof.

You are aware, no doubt, of the joke in medicine that the pathologist knows everything but too late. The anesthesiologist, which is what I trained as, knows only what the data show him or her at any particular moment. And he has to make a decision based only on what information he has at that moment. I believe that is what science in general does. We make theories based on what we know. The more information we have, the more it either confirms the theory or we have to amend it. I have only suggested that we look at what other scientists are doing in evolution theory and keep an open mind.

Bill Maz said:

Mike Elzinga said: “Information” is a metaphor that refers to interrelationships among the constituents of a system. It is a shorthand way of talking about those interrelationships. It doesn’t mean to physicists what you apparently think it means;

…As I understand information as a layperson, it describes the state a system is in at any particular time…

…In the brain, at least as we understand it now, “information” is stored in the physical states of the neurons…

…As I understand it, as a layperson, information cannot be destroyed, only changed in its form.…

I snipped a lot to try and make the point clear. Mike is right in the above quote, and you are probably mostly right in your earlier statements, which is why you are wrong in your conclusion.

Because information consists of the states of systems (or as Mike put it, interrelationships between components of a system), it can be destroyed. Rearrangement of the components is the same thing as destruction of the information.

Let’s say we had a computer that used electron spins to store digital information. If I flip all the spin states to “up,” I will have destroyed the information (assuming that was not the originally intended message). The electrons still have spin, yes, but the state of the whole system has been changed. The interrelationships between the components has been changed.

Energy is a conserved quantity. When you change an electron’s spin, you need work to do it, or the energy gained has to go somewhere. But the distribution of energy is not a conserved quantity. Information is a distribution of energy, not a form of energy, and so it is not conserved. It can quite easily be created and destroyed, by rearranging mass and energy.
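A toy sketch of that spin-register example (hypothetical code, and just one way to score the stored pattern, using the Shannon entropy of the bits): overwriting the spins wipes out the message even though nothing conserved has gone missing.

```python
# Overwriting a stored bit pattern destroys the message it encoded,
# even though the number of spins (and the energy bookkeeping) is unchanged.
import math
from collections import Counter

def shannon_entropy_per_bit(bits):
    """Shannon entropy of the bit pattern, in bits per symbol."""
    counts = Counter(bits)
    total = len(bits)
    entropy = 0.0
    for c in counts.values():
        p = c / total
        entropy -= p * math.log2(p)
    return entropy

message = [1, 0, 1, 1, 0, 0, 1, 0] * 4   # the intended stored pattern
wiped   = [1] * len(message)             # every spin flipped to "up"

print(shannon_entropy_per_bit(message))  # ~1.0 bit per spin
print(shannon_entropy_per_bit(wiped))    # 0.0 -- the pattern, and the message, are gone
```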

Bill Maz -

First of all, the only thing I’m objecting to here is your incorrect statement that physics states that information can’t be destroyed.

If I toss my iPad into an incinerator I’ll destroy information. I won’t destroy any physical entity that is conserved. I won’t change the amount of matter/energy in the universe. But I’ll destroy information. I’ll also be creating new information, if there is an observer. For example, it might be information for someone who monitors what goes into the incinerator if I throw an iPad in there. But some information has been destroyed. It doesn’t go to information heaven. Information can be destroyed.

The reference you provided is too specific. It refers to a particular advanced issue in the study of black holes. It refers to a specific type of information that is defined by those observers. As Mike pointed out, there is no part of physics that says that no information can ever be destroyed.

You are aware, no doubt, of the joke in medicine that the pathologist knows everything but too late.

That’s an old gag that refers to autopsy findings. A pathologist probably made that joke up, but it’s just a joke. Autopsies, which aren’t part of my current job, but which I have done many of, are done for the benefit of future patients (or future potential crime victims).

The anesthesiologist, which I was trained as, knows only what the data show him or her at any particular moment. And he has to make a decision based only on what information he has at that moment.

That’s exactly what a pathologist does. We have to solve the problems we are presented with. That’s because medicine is an applied science. We don’t get to choose our problems.

I believe that is what science in general does. We make theories based on what we know.

Yes, although there is a massive difference of degree between the applied and the “pure” sciences (actually it’s a spectrum). My old professors in college could ask very specific questions and work with carefully chosen systems to address relatively precise questions with minimal numbers of confounding variables. In medicine we have to deal with the patients who are sick now.

The more information we have the more it either confirms the theory or we have to amend it. I have only suggested that we look at what other scientists are doing in evolution theory and keep an open mind.

People already had an open mind before you came along. Please stop beating up that straw man. Just because someone greets some specific thing you say with a critique does not mean that everyone is resistant to all valid new ideas.

eric said:

Let’s say we had a computer that used electron spins to store digital information. If I flip all the spin states to “up,” I will have destroyed the information

The tricky thing that gets the discussion all hung up is that you have to be really careful about what you call “information”.

Imagine a computer hard drive.

This one is well used and it’s full to the brim with files, “information”, in our everyday colloquial sense.

This information is encoded onto the surface of the platters as defined regions, each of which has a certain polarity of magnetization, readable as a one or a zero. (yes, yes, in real life, it’s more complicated, but work with this model).

Now, take that hard drive and start heating it up to a couple of hundred degrees.

At a certain point, the organized magnetic domains break down, and the carefully ordered patterns representing last year’s tax returns are gone.

When the platter cools down again, the domains return to a random state.

Now, I would put it to you that information has been destroyed. At least the information I’ll need when my taxes get audited has been destroyed, because my numbers are all gone.

But in some real way there’s just as much information on the disk as there ever was.

In the beginning, the disk had 100 million zeros and ones. In the end, the disk has 100 million zeros and ones.

Granted, the new pattern is random, but it’s going to take the exact same 100 million digit binary number to completely and unambiguously specify it.

(In fact, since files tend to follow predictable patterns, and often lend themselves to clever compression, you could in some ways argue that the disk now holds more information than it did before. In much the same way, a television frame of static holds much more information than a frame of color bars. This always blows creationist minds).
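One rough way to check that compressibility point yourself is to compare how well orderly data and random data compress. Compressed size is only a crude upper bound on information content, and the repeated “tax return” string below is just a made-up stand-in for an orderly file, but it gets the flavor across:

import os
import zlib

n_bytes = 100_000_000 // 8                            # 100 million bits, as in the example above
structured = b"TAX RETURN 2011 " * (n_bytes // 16)    # orderly, repetitive "files"
scrambled = os.urandom(n_bytes)                       # the heated disk: random magnetic domains

print(len(zlib.compress(structured)))   # collapses to a tiny fraction of the 12.5 MB
print(len(zlib.compress(scrambled)))    # stays at essentially 12.5 MB; random data won't compress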

Of course, if you were to get the disk really hot, hot enough that it dissociated into plasma, then you could argue that it holds virtually no information at all, other than being an indistinct soup of silicon, iron and oxygen nuclei in a broth of free electrons.

But…

A physicist would counter that each of those nuclei has a position and velocity and each electron has a spin and, in theory, they could all be measured and recorded.

So there’s information, and information, and information and some of it is ephemeral and some of it is eternal (assuming you’re not in the middle of a black hole).

Creationists, of course, are more than happy to conflate the meaning of the terms to confuse the fact that their kind of “information” is the most delicate of all.

Bill Maz said:

Now, as a physicist, you can correct me if I’m wrong about my understanding about the example I gave above regarding information not being destroyed with regard to the black holes. As I understand it, as a layperson, information cannot be destroyed, only changed in its form. Again, in terms of a definition of information, I can only give you my non-mathematical understanding of it in that it involves the state an object is in with regard to its quantum state, spin, motion, or whatever. If that’s not good enough, correct me. But it seems inherently obvious to me from my reading that any object has information stored in it simply by its structure.

When it comes to ENERGY conservation and the spreading around of ENERGY, black holes present some unique problems because they reach to the edge of our understanding of what the fundamental constituents of the universe are.

The first law of thermodynamics says energy is conserved. The second law says, essentially, that energy spreads around among as many energy states as it can. The reason for that spreading is that momentum transfers take place from regions of higher kinetic energy toward regions of lower kinetic energy (from higher temperatures to lower temperatures).

How an energy state is defined depends on the constituents that make up a thermodynamic system and how these constituents carry energy. Free particles carry energy in their motion; bound particles carry energy in the force fields with which they interact. In relativity, the distinction between matter and energy is essentially erased.

One of the problems that thermodynamics and statistical mechanics try to solve is to link the energy distributed among the microstates of a system – which also involves defining precisely what those microstates are – to the macroscopic thermodynamic variables of the system, which consist of things like its total energy, temperature, magnetization, spin, and anything else that is measurable in the lab or by observing the macroscopic system as a whole. This involves not only making models of the microscopic details of a system; it also involves trying to determine how the microscopic constituents interact with each other and how those interactions are manifested on the macroscopic scale that we actually measure.

The idea is to observe correlated changes in the external, macroscopic variables of a system, and to have those changes tell us what is going on with the microscopic variables that we don’t directly observe. For example, the change in entropy with respect to the corresponding change in the total energy of a system gives us the reciprocal of the temperature, all of which are macroscopically observable. Changes in entropy with respect to volume tell us about the pressure. There are many other links among macroscopically measurable variables as well.
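In textbook notation, and this is standard statistical mechanics rather than anything peculiar to black holes, those linkages read

S = k_B \ln\Omega, \qquad \left(\frac{\partial S}{\partial E}\right)_{V,N} = \frac{1}{T}, \qquad \left(\frac{\partial S}{\partial V}\right)_{E,N} = \frac{P}{T},

where \Omega is the number of microstates consistent with the macroscopic constraints.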

If our understanding of the microscopic details is correct, we should find our model of the system predicting what we actually observe from our macroscopic measurements.

Now comes the problem with black holes. These rip matter apart; but into what? What are the ultimate microscopic constituents of matter? Have we even reached the “bottom” in our understanding of the ultimate constituents of the universe? We also know that matter and energy are equivalent in relativity. How does that change what we mean by a microstate?

So the problem with black holes becomes a problem of trying to understand what constitutes a microstate and how energy is distributed among those microstates. Even more interesting, what can we observe external to and emanating from black holes that would give us any hint about what is going on inside? This gets us to some of the fundamental issues with conservation of energy in the universe as a whole. Where does the energy go and how is it distributed?

The “information” that physicists are talking about lies in the linkages among any macroscopic variables we can measure externally from black holes and the internal microstates of those black holes. What can we learn about what happens to the matter and energy that gets sucked into a black hole from what we can observe external to the black hole itself? Are there measurements we can make external to a black hole that tell us anything about what is inside and what state it is in?

For example, particle-antiparticle pairs spontaneously appear out of the “vacuum;” and in the vicinity of a black hole, one of those partners gets sucked into the black hole and the other remains outside. If we measure the quantity and spin of antimatter outside the black hole, can we use conservation of energy and spin to infer anything about their partners inside? Will that tell us anything about what could constitute a microstate inside the black hole? Does anything further happen to a partner as it falls deeper toward a singularity? Does it break apart further into something else that could constitute a set of microstates? Can we use these external measurements to infer what goes on inside a black hole?

These are all frontier issues that are still being worked on without even knowing if we have reached the most fundamental level in our understanding of what constitutes an elementary particle.

The popularizations in this area don’t give a good picture of the detailed issues that physicists and cosmologists are attempting to solve.

stevaroni said: Now, I would put it to you that information has been destroyed. At least the important information that I’ll need when my taxes get audited because my numbers are all gone.

But in some real way there’s just as much information on the disk as there ever was.

Sure, there is information on the platter. We can always describe any state of matter or energy in terms of a distribution or arrangement. In that sense, any arrangement of mass and energy is/has/contains information. It’s just not the distribution we had before, so the information we had before has been destroyed.

In the beginning, the disk had 100 million zeros and ones. In the end, the disk has 100 million zeros and ones.

But only, as you point out later in your post, if you don’t heat it up too much. Heat it up past its vaporization point, and even the amount of information is not conserved. I’m not saying it goes down; whether it goes up, down, or stays constant will depend on your specific definition of information. But I am pointing out that citing one example of one process that conserves the amount of one type of information does not prove the general adage that information is a conserved quantity. It isn’t. If it were, you wouldn’t need a black hole to conserve it!

Creationists, of course, are more than happy to conflate the meaning of the terms to confuse the fact that their kind of “information” is the most delicate of all.

Agreed. :) Though IMO Bill Maz seems honestly confused over terms, rather than someone intentionally trying to jump between meanings to defend ID.

If a black hole swallows a bunch of negatively charged particles, does it then have an electric field that attracts positively charged particles (in addition to its gravitational attraction for those particles), or does its gravity prevent photons from transferring the energy that would actually produce the electrical attraction?

Henry J said: If a black hole swallows a bunch of negatively charged particles, does it then have an electric field that attracts positively charged particles (in addition to its gravitational attraction for those particles)

AIUI, no. Photons are the force carriers for the EM force. If they can’t escape, you’re not going to feel the ‘charge’ of particles below/inside the event horizon.

or does its gravity prevent photons from transferring the energy that would actually produce the electrical attraction?

AIUI, yes. See above.

Keep in mind, however, that lots of processes can occur just above the event horizon, and these are predicted to create very impressive EM effects and fields. That’s why quasars shoot out great honking beams of polar x-rays: EM processes around/above the black hole’s ‘surface.’

Henry J said:

If a black hole swallows a bunch of negatively charged particles, does it then have an electric field that attracts positively charged particles (in addition to its gravitational attraction for those particles), or does its gravity prevent photons from transferring the energy that would actually produce the electrical attraction?

:-)

You’re on a roll.

There are theoretical models called Reissner-Nordström black holes. Depending on the amount of magnetic and electrical charge relative to the mass, they can get pretty weird in various ways.

However, these are not particularly relevant to realistic astrophysical situations because a highly-charged black hole would be quickly neutralized by interactions with matter in the vicinity of the black hole.
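For the curious: in geometrized units (G = c = 1, with charge expressed in the same units), the Reissner-Nordström solution has horizons at

r_\pm = M \pm \sqrt{M^2 - Q^2},

so as |Q| approaches M the inner and outer horizons merge (the “extremal” case), and |Q| > M would leave no horizon at all, which is one concrete sense in which these solutions get weird.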

Henry J said:

If a black hole swallows a bunch of negatively charged particles, does it then have an electric field that attracts positively charged particles (in addition to its gravitational attraction for those particles), or does its gravity prevent photons from transferring the energy that would actually produce the electrical attraction?

Henry J,

All the mass that gets sucked into a black hole, and all the charged particles, appear (to a distant observer) to be frozen in time just micrometers outside the event horizon.

Because of this their mass and their charge still interact with our universe.

Once inside the event horizon, neither photons, which mediate electromagnetic attraction and repulsion, nor gravitons, which are postulated to mediate gravitational attraction, could communicate with our universe outside the event horizon. And thus a black hole would have no gravitational or electromagnetic attraction. But it does, because everything is frozen in time just outside the event horizon.

Pretty weird, huh?

Interesting. So matter that gets pulled in after the black hole forms stays essentially on the outside, for the foreseeable future (i.e., until dark energy messes things up).

That would leave any net charge in the star before it collapsed into a black hole; presumably those charges would be unable to affect stuff on the outside?

There’s also the question of what happens when two black holes collide; could that push stuff waiting on the outside of each into the interior of the black hole?

Henry

Henry J said:

Interesting. So matter that gets pulled in after the black hole forms stays essentially on the outside, for the foreseeable future (i.e., until dark energy messes things up).

That would leave any net charge in the star before it collapsed into a black hole; presumably those charges would be unable to affect stuff on the outside?

There’s also the question of what happens when two black holes collide; could that push stuff waiting on the outside of each into the interior of the black hole?

Henry

It all depends on where the observer is - far away, or nearby.

To an observer in the close vicinity of the event horizon, the event horizon is expanding away from the center of the black hole at the speed of light!

Remember that photons just outside the event horizon and travelling radially away from it can escape, while photons inside the event horizon can never escape. Photons right on the event horizon stay there. That’s why the horizon itself travels outward at the speed of light.

From frozen in time, to traveling at the speed of light, all dependent upon where you’re looking from. Is that weird, or what?

So if the event horizon travels outward at the speed of light, does not space-time fall into the black hole? And if space-time is not empty but indeed has some non-zero value of energy, does not the black hole encompass it? And if it does, and that energy disappears from our universe, where does it go? Does it fuel the expansion of our universe, or some other universe created at the black hole’s collapse?

These are the questions that interest me.

In case anyone cares, I just enrolled in this MOOC and have been through the first three video segments. So far, it looks like a good fact-based introduction to the topic, including some initial emphasis on emergent properties; content has so far been consistent with my previous knowledge of the topic.

My plan is to post updates here every now and then as my own academic obligations allow.

I am curious – are any other PT regulars taking the course? I’d be interested in your reactions to the course.

About this Entry

This page contains a single entry by Richard B. Hoppe published on December 26, 2012 11:09 AM.
