**by [Joe Felsenstein](http://www.gs.washington.edu/faculty/felsenstein.htm)**

Over at Uncommon Descent Sal Cordova has opened a dramatic new thread “Gambler’s Ruin is Darwin’s Ruin”. Apparently improvement of a population by natural selection is now shown to be essentially impossible. He invokes the example of Edward Thorp, who developed the winning system for blackjack fictionalized in the movie *21*.

Cordova draws on the stochastic theory of gene frequency change, citing Motoo Kimura and Tomoko Ohta’s well-known 1971 monograph *Theoretical Aspects of Population Genetics*, and argues that

Without going into details, I’ll quote the experts who investigated the issues. Consider the probability a selectively advantaged trait will survive in a population a mere 7 generations after it emerges:

if a mutant gene is selectively neutral the probability is 0.79 that it will be lost from the population … if the mutant gene has a selective advantage of 1%, the probability of loss during the first seven generations is 0.78. As compared with the neutral mutant, this probability of extinction [with natural selection] is less by only .01 [compared to extinction by purely random events].

(bracketing is by Cordova) This means that natural selection is only slightly better than random chance. Darwin was absolutely wrong to suggest that the emergence of a novel trait will be preserved in most cases. It will not! Except for extreme selection pressures (like antibiotic resistance, pesticide resistance, anti-malaria drug resistance), selection fails to make much of an impact.

The Kimura/Ohta quote in question is on page 1 of their book, and describes a mutant with a selective advantage of 1%.

This would be a shocking disproof of decades of work in population genetics—if it accurately reflected the ultimate fate of those mutants. Fortunately, we can turn to an equation seven pages later in Kimura and Ohta’s book, equation (10), which is Kimura’s famous 1962 formula for fixation probabilities. Using it we can compare three mutants, one advantageous (s = 0.01), one neutral (s = 0), and one disadvantageous (s = -0.01). Suppose that the population has size N = 1,000,000. Using equation (10) we find that

- The advantageous mutation has probability of fixation 0.0198013.
- The neutral mutation has probability of fixation 0.0000005.
- The disadvantageous mutation has probability of fixation 3.35818 × 10^{-17374}.

In other words, yes, in this case there is a lot of loss of advantageous mutations, about 49 being lost of every one that makes it to fixation. But they are each nearly 40,000 times as likely to fix as are individual neutral mutations, and deleterious mutations are essentially never going to fix in such a case.
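Those three numbers can be reproduced directly from Kimura’s equation (10). Here is a minimal Python sketch (the function names are mine; a log-space variant handles the deleterious case, whose denominator overflows ordinary floats):

```python
import math

def fixation_prob(s, N):
    """Kimura (1962), eq. (10): probability that a single new mutant
    with selection coefficient s fixes in a population of size N."""
    if s == 0:
        return 1.0 / (2 * N)  # neutral limit: 1/(2N)
    return math.expm1(-2 * s) / math.expm1(-4 * N * s)

def log10_fixation_prob(s, N):
    """log10 of the same quantity, for deleterious mutants where
    exp(-4Ns) overflows: p ~ (e^{-2s} - 1) * e^{4Ns}."""
    return math.log10(math.expm1(-2 * s)) + 4 * N * s / math.log(10)

N = 1_000_000
print(fixation_prob(0.01, N))         # ~0.0198013 (advantageous)
print(fixation_prob(0, N))            # 5e-07 (neutral, = 1/(2N))
print(log10_fixation_prob(-0.01, N))  # ~-17373.5, i.e. ~3.4 x 10^{-17374}
```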

Why does this give such a different result than the comparison of 0.78 to 0.79? It is because after 7 generations the surviving mutants in the case of selective advantage are at a higher frequency than are those in the neutral case, and the result is a much greater chance of fixation.

In fact, the Gambler’s Ruin shows a similar behavior—its mathematics is similar to (but not identical to) the population-genetic case. If you toss coins with a stake of $1 against a house which has $1,999,999 to wager, and you both keep playing until one of you holds the whole $2,000,000, then if the game is fair you will be the ultimate victor one time out of 2,000,000, and the rest of the time the house will win. But if you have a 1% advantage, so that on each toss you have a 50.5% chance of winning, you will be the ultimate victor nearly 2% of the time. Mostly you will be ruined, but you will bankrupt the house nearly 40,000 times as often as you would if the toss were fair.
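The arithmetic here follows from the classical gambler’s-ruin formula; a short sketch (the function name is mine):

```python
def win_prob(p, stake, total):
    """Probability that a gambler betting $1 per toss, winning each
    toss with probability p and starting with `stake`, reaches
    `total` before going broke (classical gambler's ruin)."""
    if p == 0.5:
        return stake / total                  # fair game
    r = (1 - p) / p                           # q/p
    return (1 - r ** stake) / (1 - r ** total)

fair = win_prob(0.5, 1, 2_000_000)            # 1 in 2,000,000
edged = win_prob(0.505, 1, 2_000_000)         # ~0.0198, i.e. nearly 2%
print(edged / fair)                           # ~40,000 times as often
```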

So yes, the mathematics of Gambler’s Ruin speaks to the issue of natural selection—but it confirms its effectiveness.

(The other issue raised by Cordova, that of interference between mutations at different loci, is the well-known Hill-Robertson effect. If the loci have more than a tiny amount of genetic recombination between them, the interference largely vanishes. Cordova and the other commenters there have forgotten this.)

Basically then, he’s pretending that mutations are rare events?

It is always amazing how creationists, and Sal in particular, can see one thing in a textbook that seems to support their presupposition and completely miss the majority of the lesson, which indeed shows that their conclusion is wrong. The clue should be that the book Sal quote-mined deals with an integral part of evolutionary theory, namely population genetics. Also, if this book was printed in 1971 and wasn’t an earth-shattering refutation of “Darwinism” then, why should we expect it to be one now?

Prof. Felsenstein (sir), I must disagree with you about the Hill-Robertson effect - I’m not sure Sal is that advanced. I suspect he’s assuming a (roughly) constant total variance in fitness so that having more genes involved reduces the average effect of each one. But I haven’t pushed him on it, because I forgot to check the literature when I was at work.

I don’t have access to the book in question. Do they provide an equation to calculate the odds that a mutation with s=.01 will be fixed out of a population? Is it, by chance, the remaining 98%?

It hardly seems meaningful to infer the effectiveness of natural selection by comparing the rate of fixation in advantaged vs. neutral mutations, when in fact you are comparing “tiny” to “much tinier.”

Rather, the most meaningful conclusion appears to be that genetic drift reduces the odds of fixation of an advantaged trait by 98% in an s=.01 scenario.

I noticed in that UD post that Sal said he’d seen both “21” and “Expelled” the same day. Does he have like a job or something?

Some comments on the comments:

J. Biggs wrote:

No, they are rare. He’s noticing that even advantageous ones very often get lost, and arguing that this refutes the effectiveness of natural selection.

unglss asked:

I’m not sure what you mean by fixed “out of” a population. You can use Kimura’s 1962 formula for the fixation probability of a single copy of a mutant that has selective advantage s in a population of size N, and it is (1-exp(-2s))/(1-exp(-4Ns)). So for s = 0.1 that is 0.181269 if N = 1,000,000. But that’s what I’d call fixation “in” a population. What do you mean by “out of”? By the way, Kimura’s original 1962 paper is in Genetics and is freely available there on the web. More equations are available in my own free population genetics e-book.

unglss again:

If by “it” and “out of” you mean the probability of getting ultimately lost rather than fixed, yes, it is 1 - Prob(fixation).

unglss again:

It’s a matter of the overall rate of fixation of advantageous mutations and of deleterious mutations, and these have very different probabilities of fixation, as you can see from the probabilities.

You’d be right. Either the mutant fixates or goes extinct, so with s = 0.01, the extinction probability is 1 - 0.02 = 0.98. However, the thing to note here is that if the mutant fixates, then that’s it, it’s “taken over” the population. Considering the timescales of evolutionary change, if advantageous mutants appear time and again, each successful fixation would only increase the population’s fitness (even if it only happens 1% or 2% of the time). This is basically natural selection, and shows how mutation and selection drive evolution, improving populations over time. Isn’t it great?

Replying to the first part of your comment, “tiny” vs. “tinier” isn’t really a good way to look at things. Even a tiny selective advantage can push up the fixation probability by several orders of magnitude. This just means that a single mutation doesn’t always lead to “improved” populations, but over time, many mutations (which, on long enough timescales, are not *rare*) do.

Sal really stepped into my world on this one, since I counted Blackjack for several years. I take him apart here:

http://scienceavenger.blogspot.com/[…]ero-sum.html

He doesn’t even understand the basics of the issues, making statements like:

“If he has a 1% statistical advantage, that means he has a 50.5% chance of winning and a 49.5% chance of losing.”

Which reveals either that he is completely ignorant of the rules of blackjack, or he doesn’t understand the difference between probability of victory and expected winnings.

I would also note that it is highly unlikely that Thorp’s system was the one used in the movie. Blackjack counting systems improved dramatically in the 70’s and 80’s as computing power allowed for simulations for the first time, and his would no doubt have been surpassed in efficiency by more modern systems.

bump

Sir:

Thank you for your response. By “fixed out of” I meant eliminated as a variant. I apologize for my lax use of words.

Thanks for the equation itself, also. I greatly appreciate your time.

I agree with you that this certainly does not “refute” natural selection. However, I don’t think it’s “Darwin’s gain,” either. An s=.01 mutation may have a 40,000x better chance of fixation than a neutral one, but the more relevant fact is that the s=.01 mutation has only a 2% chance, and the neutral and disadvantageous mutations have virtually no chance at all.

The problem becomes even more severe in smaller, isolated populations (where genetic drift becomes more severe), and in the context of sexual reproduction (where each child receives only 50% of the collective genetic diversity of its parents).

I don’t think this is any meaningful “gain” for Darwin at all. Rather, it’s a significant hurdle that mutations have to get over to get fixed. It may be surmountable, over significant periods of time, but it is a hurdle, nonetheless.

Of course such a theoretical discussion cannot ever prove that evolution by natural selection is impossible anyway. There are always a host of other factors operating in the real world that can drastically alter the probabilities. For example, what is the dominance of the newly arisen mutation? How many offspring are produced carrying the new mutation and what is the rate of inbreeding? Are there other considerations such as hitchhiking, pleiotropy, density-dependent selection, sex-linkage, etc.? All these factors and many more can help to determine the initial and ultimate fate of mutations no matter how selectively advantageous they are. And then of course there is population size, environmental heterogeneity, etc.

Leave it to creationists to find the end of “Darwinism” in everything they read. Why is this guy reading 37-year-old papers anyway? No matter what he quote mines, he will always find that science has moved on in the last thirty years anyway. How come these guys never seem to notice that? It’s almost as if they are just trying to find things that someone might take the wrong way instead of really trying to learn anything about science.

I am afraid I can’t take credit for that comment. I believe you were referring to Henry J.

AAHHHHH.…The Cordova.…standard with fine Corinthian Bullshit.

Dr. Felsenstein,

Thank you for responding to our discussion at Uncommon Descent. I appreciate that you would take time to respond. I have provided links from Uncommon Descent to your response here. I encourage those reading Uncommon Descent to read your response.

Many thanks for taking time to read what I wrote and offering a response here at PandasThumb.

Salvador T. Cordova

unglss said:

In small populations the chance of fixation of an advantageous mutation will still be about 2s (Haldane’s approximation from 1927). For N = 1000, for example, the fixation probability of an advantageous mutant with s = 0.01 is still about 0.0198013. The probability of fixation of a neutral mutant is now up to 0.0005, and that of a deleterious mutant is bigger than it was, but still below 10^{-19}. And sexual reproduction is no problem, because that calculation *was* for sexual reproduction. Each child receives half of its genes from each parent, but also each child has twice as many parents as in the asexual case.

Sal Cordova argued that there was little difference in outcome between advantageous and neutral mutations. He was wrong about that, and I assume that he will admit that.
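The near-independence from N in the advantageous case (Haldane’s ≈2s) is easy to confirm numerically; a sketch using Kimura’s 1962 formula (the function name is mine):

```python
import math

def fixation_prob(s, N):
    """Kimura (1962): fixation probability of a single new mutant
    with selection coefficient s in a population of size N."""
    return math.expm1(-2 * s) / math.expm1(-4 * N * s)

# The advantageous case barely depends on N (Haldane's ~2s) ...
print(fixation_prob(0.01, 1_000))      # ~0.0198013
print(fixation_prob(0.01, 1_000_000))  # ~0.0198013
# ... while the neutral probability 1/(2N) shrinks with population size:
print(1 / (2 * 1_000), 1 / (2 * 1_000_000))  # 0.0005 vs 5e-07
```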

At least I have to say that Sal is probably the most polite ID/Creationist that comments here.

Mr. Felsenstein:

I greatly appreciate the explanation. Thanks.

Forgive my abject ignorance, but I’ve been wondering whether s=.01 is typical. I gather that if this number were even slightly larger, the probability of fixation would go up dramatically. So is s=.01 conservative? unglss seems to be saying that for evolution at this level of selective advantage, the loss rate implies that evolution will be slower than actually observed in some cases. So my guess would be that some beneficial mutations offer better than a 1% improvement. And those would be a LOT more advantageous than neutral mutations.

So where did the 1% number come from? Is it realistic?

Flint:

Seems like that’s going to depend on the mutation at issue. Certain mutations (like antibiotic resistance) are going to be significantly higher than .01. The S of other mutations (like gradual changes to anatomy) are going to be much lower. Seems like you’d actually have to observe the effect of the mutation to determine the S. Yes?

J. Biggs:

Oh, Sal’s a peach, alright:

His “politeness” here is merely convenient. The above comment is typical of his rhetoric.

I specifically said:

I was taking issue with Darwin’s statement:

However, Kimura and Ohta noted:

Darwin was responsible in large part for the false assumption that “every advantageous mutation that appears in the population is inevitably incorporated”. But Kimura and Ohta demonstrate this claim is false by several orders of magnitude.

You are correct that selective advantage leads to a stronger probability of fixation, but how many traits can be fixed over time seems not very clear, especially if we are talking multiple traits simultaneously…

I alluded to the problem of selection interference. It has been discussed, but I think it needs to be more fully explored and better known than it is today. John Sanford of Cornell indirectly postulated there is a limit to the value of selective advantage “S” depending on how many traits are viewed as selectively advantaged in the population. He suggests selection in human populations can only be effective for 700 traits simultaneously under a multiplicative fitness model. I have not been able to independently confirm his calculations.

I presume the work of Robertson-Hill might have bearing. It was mentioned in Kimura and Ohta’s work, page 13. It appears Sanford follows the line of reasoning of Robertson-Hill in his book *Genetic Entropy*.

In any case, I felt your rebuttal was for the most part well argued. I hope I have at least clarified my position, even if you disagree.

regards, Salvador T. Cordova

(my highlighting)

There appears to be a conflation of Hemoglobin S (Human genetic resistance to Malaria) with drug resistance in parasites.

So we have 4 examples of positive evolutionary selection being waved away, rather than just 3.

@ Flint and ungtss:

Here, S is usually taken as “small” so is quite close to zero. This is because selection in nature is “weak”, or has only a small contribution to fitness. If S was “large”, like say S = 1 or S = 2, the biology just doesn’t make sense. It’d be like a human giving birth to Wolverine from the X-Men. Advantageous mutants are typically only a little advantageous. And even in antibiotic resistance or related ideas, selection is usually still weak.

What false assumption? Made by whom? No one with a triple digit IQ would read that passage and think that Darwin literally meant that 100% of advantageous mutations would be passed on. What utter tripe.

Sal, we aren’t in 1859 anymore.

ungtss, yes, the chance of a beneficial mutation spreading throughout the population (using the numbers posited here) is 2%. The chance of the mutation going extinct is 98%. So how many times would the same mutation have to arise to ensure that it gets fixed in the population, again assuming the numbers we are using here?

- 1 mutation: 2% chance of fixation
- 2 mutations: 4% chance of fixation
- 3 mutations: 6% chance of fixation
- 4 mutations: 8% chance of fixation
- 5 mutations: 10% chance of fixation
- 6 mutations: 11% chance of fixation
- …
- 33 mutations: 49% chance of fixation
- 34 mutations: 50% chance of fixation
- 35 mutations: 51% chance of fixation
- …
- 207 mutations: 98% chance of fixation
- 208 mutations: 99% chance of fixation

So if just 1 animal in a population has the beneficial mutation, there is a 98% chance the mutation will not spread and a 2% chance that the mutation will spread. Many/most mutations likely start in just this manner, with a single mutation in a single individual.

Now imagine that a 2nd animal (at some other point in time) has the same mutation. In other words, I am not saying that two animals at the same time have the same mutation (which is certainly possible); I am saying that the first mutation died out and has now re-arisen in a different animal. This mutation has the same 2% chance as the original animal, but the *overall* chance that the mutation fixates (either from the first or the second individual) is 4%.

The numbers above speak for themselves. Each number assumes that only a single individual has the mutation at a given time. How many times would the mutation have to “independently arise” before it is likely to fixate in the population?

Once the same beneficial mutation appears 35 separate times, it is more likely than not that it will spread through the population. Again, I think these numbers may underestimate how easy it is, because more than one individual can have the same mutation.
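The table’s arithmetic, 1 minus (0.98)^k for k independent appearances, can be sketched as:

```python
def prob_any_fixes(k, p_fix=0.02):
    """Chance that at least one of k independent appearances of the
    mutation (each fixing with probability p_fix) goes to fixation."""
    return 1 - (1 - p_fix) ** k

# Smallest number of independent appearances giving a better-than-even
# overall chance of fixation:
k50 = next(k for k in range(1, 1000) if prob_any_fixes(k) >= 0.5)
print(k50)  # 35, matching the table above
```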

Mutations are rare? That’s not what one hears at Sandwalk.

No, Sal is literally correct. Darwin did indeed say in that particular quote that selection is preserving ALL that are good. Not some, not most, ALL. But of course, it would be a perverse misreading to think Darwin was making such a case literally when it seems pretty obvious he was describing the general sweep of selection. Elsewhere, Darwin makes it clear he understands that individuals born with potentially beneficial characteristics commonly do not survive to breed for reasons entirely unrelated to that characteristic. I vaguely recall Darwin illustrating this with moths and candle flames.

What Sal has done here is (gasp) *quote mined* Darwin. Consider J Biggs’ carefully choreographed astonishment:

I could only ask that people READ the material surrounding Sal’s quote, to see exactly what sort of case Darwin is making at that point.

I’m curious PT biologists, is this what Sal does with biological issues as well? Just spout a bunch of nonsense surrounded by technical terms and hope no one knowledgeable notices? I always assumed his biological writings were impenetrable because I had some biology to learn, but now that he’s stepped into my arena, and I’ve seen how clueless he truly is there, I can’t help but wonder if that’s just his SOP.

Par for the course. Just wait until he starts throwing in ‘peer reviewed’ and ‘Schroedinger’…

Which is still better than the concept of ‘genetic entropy’.

Dr. Felsenstein, Dr. Bob OH,

Amazon informed me that your books are on the way. I apologize for my late reply, but I hope the books will arrive in due time.

Olegt, if you wish to have a copy, I can send one to your office at your school.

regards, Salvador

LOL! Of all the stupid things I heard from you, that tops it all.…

Winning in blackjack is not a discrete outcome of winning or losing the initial wager, because of double downs, splits, pushes, and blackjacks; but it is accurate to say what the outcome will be on average, relative to the initial average wager (which is exactly what I did).

Of course if you want to define winning as a single event where one gets more money than what is wagered, then it will not be a 50.5% chance of winning. But such a characterization is misleading because sometimes one gets 150%, 200%, 400%, etc. the initial wager if one wins. Thus my characterization was sufficiently accurate to get the point across, and is actually more accurate in terms of “on average” performance.…

If you want to play semantic games of what a win on average means, go ahead. I might put an amendment at UD to appease your sorry attempts at a nit pick and your willful desire to uncharitably read what I wrote.….

By the way, can you fork up the data on +6 AOII count given the rules I provided?

All you did, “science” avenger, is equivocate on my usage of the term winning and substitute your own. I clarified what my usage was, and you still persist in your equivocations. That’s an unethical and disingenuous debate tactic.….

I will double check with the authors as I will meet with ReMine in June.

But on the surface it appears your claims of supposed flaws are actually more favorable to the case of Darwinian evolution, not unfavorable. The simulation appears flawed for being too generous. A randomly changing environment amplifies random selection, and hence drowns out the effect of natural selection even more. If it is faulty, it is in giving too much credit to Darwinian evolution.…The point of gambler’s ruin is the effect of random selection over natural selection. Adding more fidelity would weaken the Darwinian case, not strengthen it.…

As I indirectly pointed out, your invocation of heterozygous advantage doesn’t help, not unless you think kids with Cystic Fibrosis and Sickle Cell anemia aren’t really afflicted – which is what you may believe, since Darwinism places selection advantage above traditional notions of well being.….

The supposed fix you offer through heterozygous advantage is really just a definitional one – label what are traditionally viewed as sources of sickness as “beneficial mutations”.….

What nonsense.

It can’t be stupid when it is objectively, provably true.

This is just another example of you gibbering with terms you don’t understand to distract from the point (one can have multiple possible discrete outcomes), which is that having a 1% advantage is NOT the same as having a 50.5% chance of winning.

Bullshit. You are again just tossing around verbiage you clearly don’t understand. Winning doesn’t have to be a single event, it can be as many as you like. It doesn’t change the fact that a 1% advantage does not mean a 50.5% chance of winning.

It is not a semantic game to use words as they are defined. If you are ignorant of what statistical terms mean, that is your problem, not mine.

That you think such a thing is relevant to the issue here is simply more evidence that you don’t have a clue what you are talking about, and are simply cutting and pasting terminology you hope will blind your audience to that fact.

Bullshit. I used the standard definitions of the terms, and simply pointed out that you haven’t a clue what you are talking about. But keep yammering away, I’m sure the ignoramuses at UD will love it.

From my original response to Sal’s article:

Note also that it makes no difference whether the “plays” I refer to are actually one deal of the cards or one million, nor does it matter what counting system one is using, nor even whether the payouts are discrete or some continuous function. Note also that Sal apparently didn’t even read my article, or he’d have known that I already knew about plays like double-downs which pay more than the initial wager. No wonder he just blindly claimed I misrepresented him.

Like I said Sal, I used to count cards (I used the Uston Advanced Point Count). Still want to dance?

Oh, really, then answer my question. While you’re at it, provide count values yielding an advantage of 1% for the other systems such as :

Hi-Low

Silver Fox

Uston APC

It should be pretty easy if you really know what you’re talking about. Hint: you’ll notice the tables providing it often express the advantage as a Win/Loss percentage – *exactly the convention that I used*.

Your failure to answer my first simple question only demonstrates you’re not even as knowledgeable as you claim. Your worthless nitpick only demonstrates you’re not familiar with the conventions actually being used in discussions.

For the reader’s benefit, using the definition of a “win”, where win means getting the initial wager back plus some extra, the probabilities are on the approximate order of:

1. Win 42.5%

2. Loss 50%

3. Tie (push) 7.5%

But because one can get back more than even money on the initial bet because of naturals, double downs, splits, double downs with splits – because one can use insurance plays and things like late surrender, one can achieve an edge of over 1% over the casino, even though the majority of hands over time return less than the initial wager.…
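On these approximate outcome frequencies, a small numerical sketch shows how a positive edge can coexist with a sub-50% win rate. The 1.2-unit average payout of a winning hand is an assumed illustrative figure (reflecting naturals, doubles, and splits), not a number from any of the sources discussed here:

```python
def edge_per_unit(p_win, p_loss, avg_win_payout):
    """Expected return per unit wagered: winning hands return
    avg_win_payout units on average, losses cost 1 unit, pushes 0."""
    return p_win * avg_win_payout - p_loss

# Win 42.5%, lose 50%, push 7.5%, winners averaging 1.2 units:
print(edge_per_unit(0.425, 0.50, 1.2))  # ~0.01: a 1% edge at a 42.5% win rate
```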

But the approximate odds above use “Science” Avenger’s idiosyncratic definition of “win”, which is not the one used in most conversations. I followed the common convention, as evidenced here:

The Odds of Winning a 21 game.

Finally, it does not follow logically that, because I was referring to average win rates, I did not know what I was talking about. Rather, it evidences “Science” Avenger’s unwillingness to read what I wrote charitably, and his willingness to use his uncharitable reading to promulgate falsehoods about my level of knowledge of this particular game.…

So “Science” Avenger, why don’t you show what a great card counter you are and answer my simple questions. A pro like you ought to be able to answer them. :-)

Hello, Sal.

The new Excel code is an improvement over the previous version: it ends the game for a player with no money left and runs long enough to get to the long-term regime where a player with a slight advantage over the house (and lucky enough to survive the initial stage) keeps playing and eventually ruins the house.

One remaining problem is a very small population size. With an advantage set at s=0.014, the fraction of the players who survive the initial randomness-dominated period is approximately 3.7s = 0.026. In other words, 1 in 40 are expected to survive in that situation and you only have a population of 10. So most of the time the code won’t let you see any difference between a slight advantage (s=+0.014) and a slight disadvantage (s=−0.014) because there won’t be any players left in either case.

If you set the population size at a number substantially exceeding 1/(3.7s), say a few hundred or a thousand for values of s on the order of 1%, then you will see a qualitative difference between positive and negative biases. At s<0 all of your players will be ruined and the house will survive, while at s>0 some players will survive and it is the house that will be ruined. These are *totally different* outcomes.

This singular change across the point s=0 is an example of a *phase transition*, a concept that has proved quite valuable in various areas of physics. Much of your confusion stems from the fact that it is a *continuous* phase transition: the behavior changes in a continuous manner as you cross the critical point (s=0). The closer you are to s=0, the longer the time scales and the larger the population sizes that must be considered. So for s on the order of 10^{−6} one is not expected to see any difference between positive and negative bias unless the population exceeds a few million.

The best way to examine this phase transition is to abstract from the noisy graphs showing the balance of individual players and instead look at the balance of the house playing against a large population. (Statistically speaking, this amounts to tracking the average quantities for a player.) If I have time I will post some graphs with explanations later. Those quantities show all of the hallmarks of a continuous phase transition, including critical scaling.

No one here claims that every mutation conferring a slight advantage will become fixed, whatever Darwin said 150 years ago. The very simple model of Gambler’s ruin illustrates that in a large enough population, alleles at a slight disadvantage have no chance at surviving; however, it is virtually guaranteed that some number of alleles with a slight advantage will survive and take over a large chunk of the population. When the advantage or disadvantage vanishes, everything is left to chance. This is the lesson and one needs to absorb it before one moves on to more complex models.
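The qualitative difference across s=0 described above can be illustrated with a tiny Monte Carlo sketch; the population size, stake, and round count here are illustrative choices of mine, not parameters of the Excel model under discussion:

```python
import random

def survivors(n_players, p, stake=10, rounds=50_000, seed=1):
    """How many of n_players, each starting with `stake` dollars and
    betting $1 per round at win probability p, remain solvent after
    `rounds` rounds (or until ruined, whichever comes first)."""
    rng = random.Random(seed)
    alive = 0
    for _ in range(n_players):
        bank = stake
        for _ in range(rounds):
            bank += 1 if rng.random() < p else -1
            if bank == 0:
                break              # this player is ruined
        else:
            alive += 1             # never hit zero
    return alive

# A slight positive bias leaves a stable fraction of survivors, while a
# negative bias of the same magnitude ruins essentially everyone:
print(survivors(100, 0.507), survivors(100, 0.493))
```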

But sometimes small populations happen, and then one gets founder effect.

On the contrary, your posing of such irrelevant questions reveals that you have no idea what you are talking about. In other words, my statistical ignoramus, it doesn’t make any difference what sort of counting system you use, your original statement is still wrong. You can’t seem to get it through your thick skull that tossing out a bunch of jargon isn’t going to impress a knowledgeable audience.

Exactly, which is why your initial statement that having a 1% edge meant a 50.5% chance of winning was wrong. Glad we got that cleared up.

Bullshit. “Winning” in gambling means “leaving the table with more money than you sat down with”. Number of plays is irrelevant. It is you who are being idiosyncratic.

Liar. Your own statements and irrational arguments make plain your lack of knowledge. And your reputation as a dishonest hack makes any appeal for charitable readings laughable.

You merely reveal your ignorance, again. Once again in case anyone reading this is as ignorant and thickheaded as you are: It makes NO DIFFERENCE what system you are playing, a 1% edge does not equate to a 50.5% chance of winning.

Thanks for playing.

I decided to give Sal the full blog treatment, complete with a primer on card counting systems, for those interested. Here’s the highlight:

It should be clear now that Sal is talking out of his hat. Never mind the complete irrelevancy of his questions to the matter at hand: whether an edge of 1% implies a winning percentage of 50.5%. Never mind that he doesn’t even seem to know that Uston APC is the system I played (he lists it among “other systems”). No one knowledgeable about counting would ask such a question, nor would they likely have the answer, since no one would know all three systems he lists.

It also is a completely irrelevant question to the Uston APC system I played, which did not require this knowledge. And as an added bonus, Sal’s question doesn’t even make sense, because the % advantage for a player with a given count in the Uston APC system is not constant, but instead varies by remaining decks. From table 9-2 of Ken Uston’s “Million Dollar Blackjack”, page 128 of my 1981 copy:

PLAYER EDGE AT VARIOUS TRUE COUNTS (USTON ADVANCED POINT COUNT)

UPC True count of +3:

1 deck remaining: +1.0%

2 decks remaining: +0.7%

3 decks remaining: +0.6%

The values follow a similar declining pattern for other counts and decks remaining.

So we see here clearly that Sal has no idea what he is talking about, and is simply cutting and pasting impressive-looking technical information in an attempt to hide his ignorance. No one who understands card counting would have asked this question. This is worth being on the lookout for when listening to IDer/creationist arguments. If it seems impossible to grasp their line of argument, don’t blame yourself. It is likely they are doing what Sal did above.

LOL, Science Pretender! True counts in a balanced system like Uston APC are to be adjusted by the number of decks remaining; thus your rendering of Uston’s work is all wrong by the modern definition of true counts (TC). The relationship of advantage to true count in a balanced counting system should be independent of the number of decks remaining, since the true count is derived by dividing the running count (RC) by the number of decks remaining. You aren’t using the conventions in practice today.

See: True Count Frequencies

and:

Counting 101

Hey Science Pretender, it must be awfully tough for you to perform index variation plays and high count insurance plays using Uston APC since you have a faulty notion of True Count. Not to mention you won’t be properly calculating your fractional Kelly wagers in order to lay out bets proportional to your advantage, since you use an incorrect notion of true counts (TC).

Thanks for the entertainment, Science Pretender…hahaha.….
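For readers following along, Sal’s own definition (true count as the running count divided by the number of decks remaining) can be sketched in a few lines. This is a minimal illustration; the helper name is mine, not from either commenter:

```python
def true_count(running_count: float, decks_remaining: float) -> float:
    """Convert a running count to a true count by dividing by the
    number of decks still undealt (the convention Sal cites)."""
    if decks_remaining <= 0:
        raise ValueError("decks_remaining must be positive")
    return running_count / decks_remaining

# The same running count gives different true counts as the shoe depletes:
print(true_count(6, 3))  # 2.0 with three decks remaining
print(true_count(6, 1))  # 6.0 with one deck remaining
```

Note that the dispute which follows is over whether the advantage at a given true count is itself independent of the number of decks, which this arithmetic alone does not settle.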

You are just showing your ignorance again of basic statistics, and how once again you are just cutting and pasting things you clearly don’t understand. The True Counts I listed are adjusted for the number of decks (that is what “true count” means, ignoramus). Had you checked the referenced text you would have known that. Obviously actually reading up on the FACTS before spouting off at the mouth is too much trouble for you.

Further, your “should be” statement is simply false. The advantage is NOT independent of the number of decks used. Simply take paper and pencil (if you are capable of it) and note that the probability of getting a blackjack declines with the number of decks used. Thus it is clearly NOT the case that the advantage will be consistent with identical true counts and differing numbers of decks.
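The pencil-and-paper check suggested above can be made concrete: on a freshly shuffled shoe, a natural blackjack is an ace plus a ten-value card in either order, and its probability falls as decks are added. A short sketch (my own arithmetic, not taken from either commenter):

```python
def p_blackjack(decks: int) -> float:
    """Probability the first two cards of a fresh shoe are a natural:
    ace plus ten-value (ten, jack, queen, king), in either order."""
    cards = 52 * decks
    aces = 4 * decks
    tens = 16 * decks
    return 2 * (aces / cards) * (tens / (cards - 1))

for d in (1, 2, 6, 8):
    print(d, round(p_blackjack(d), 5))
# Declines with deck count: about 0.04827 for one deck, 0.04749 for six.
```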

Once again you reveal that you haven’t a grasp of even BASIC statistical concepts, and are once again cutting and pasting what you clearly don’t understand. Worse yet, you haven’t the intellectual integrity, or the balls, to admit when you’ve been proved wrong.

And I used to wonder why they called you Slimy Sal.

You know what Sal’s little bullshit here reminds me of? The scene in The Main Event where new boxing owner Barbra Streisand reads about a left hook from a beginner’s boxing book to help her experienced fighter, as if he didn’t already know that. And here comes Sal explaining to an experienced card counter what a true count is.

Believe it or not Sal, everyone is not as dishonest as you or the rest of your lying crew are. When the rest of us say we have expertise in a subject, we actually do.

There is a difference between the concept of “advantage” and the concept of “relationship of advantage to true count”. You have reading comprehension problems, Science Pretender.

I said:

You misrepresented what I said. You used the word “advantage” and I used the phrase “the relationship of advantage to true count”, which are different concepts. Your comprehension skills are pretty poor, Science Pretender…or maybe you’re having to resort to misrepresentations in order to score debate points.

Let the reader also consider an analogous balanced system like Hi-Lo in a six deck scenario such as shown at this website: Indexes.

Notice the number of undealt decks REMAINING out of the six decks is not even mentioned when considering advantage of true counts (TC). That’s because TC must already consider the number of decks remaining when it is calculated (see the treatment by Stanford Wong above).…

This is exactly like the way I said, and not like the way Science Pretender argued.

Graphing the advantage versus Uston APC true count would yield different numbers than Hi-Lo, since it is a different system, but the fundamental notion of the invariance of advantage to true count remains. True Count is analogous to the notion of density in physics.…True Count and the associated advantage of a true count is independent of the number of decks remaining – somewhat like the density of a substance is independent of the volume of the substance.

By the way Science Pretender, for the reader’s benefit, and to illustrate your ignorance, what Kelly fractions do you use based on your Uston APC true counts? What index do you split 10’s against sixes since you aren’t adjusting your true count to the number of decks remaining? When do you invoke an insurance play in your distorted application of Uston APC since you don’t adjust true count to the number of decks remaining?

hahaha.…
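For readers unfamiliar with the “fractional Kelly” wagers Sal keeps invoking: the usual rule of thumb sizes each bet in proportion to the player’s edge divided by the variance of the game (roughly 1.3 squared units per hand for blackjack). A hedged sketch; the variance figure, the half-Kelly default, and the function name are my assumptions, not anything stated in the thread:

```python
def kelly_bet(bankroll: float, edge: float,
              variance: float = 1.32, fraction: float = 0.5) -> float:
    """Fractional-Kelly stake: a set fraction of edge/variance of the
    bankroll, and no bet at all when the player has no edge."""
    if edge <= 0:
        return 0.0
    return bankroll * fraction * edge / variance

print(round(kelly_bet(10_000, 0.01), 2))  # half Kelly at a 1% edge: 37.88
```

Because the edge changes with the true count, the stake is recomputed hand by hand, which is the sense in which bets are “proportional to your advantage”.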

No shit moron, but as usual, that has nothing to do with what we were arguing. You asked me what true count in the Uston APC system resulted in a 1% edge for the player, and I once again revealed your ignorance by noting that the answer depends on the number of decks remaining. This revealed clearly that you have no idea what you are talking about. And rather than be a man about it, you are once again talking out of your ass, making up terms or using existing terms in idiosyncratic ways and playing word games. Then of course you toss out irrelevant questions at the end of your ignorant attempted diversion, in your own version of the Gish Gallop, to try to distract from the fact that you’ve had your head handed to you…again.

Future dialogue is pointless. Your ignorance has been starkly exposed to anyone with any knowledge of statistics or logic. You thought a 1% edge equated to a 50.5% probability of winning as well, and were wrong there too. At this point you are simply trolling. But thanks for the self-exposé. Any time someone asks me why I think you are such an intellectually dishonest bullshit artist, now all I have to do is link to this post.

Dr. Felsenstein,

I sent you a copy of John Sanford’s “Genetic Entropy”. Let me know if you received it or not. The admins at PT should have my e-mail.

Thank you again for taking time to read what I wrote at UD and for taking the time to respond. I’m deeply honored.

regards, Salvador Cordova

Oh come on Sal, you love to gloat so why be so submissive?

Just notice Sal’s comment

World renowned geneticist.… Typical YEC abuse

Sorry for the delay, I didn’t notice this inquiry until recently. Yes, the book arrived. Thanks for sending it. It will be helpful to have it, I am sure.

Update