Don Goren: Shuffle Tracking Response

Don Goren’s Response
to Arnold Snyder’s Comments
on his Shuffle Tracking Theories


[Editor note:  This response was composed in December 1997.  To view the entire text of Snyder’s comments you had to be a member of the RGE GreenBelt page.  Text that may be sensitive has been indicated with <sensitive>.  The complete set of Blackjack Review issues describing Don Goren’s shuffle tracking methods is available from the BJRnet catalog.]

Dear Mike,

Thank you for passing on Arnold Snyder’s comments.  You know they require a detailed response.  One of the problems ( I’ll address a second problem at the end of this letter ) which promotes skepticism ( I’ve been getting this since one week after you published my first article ) is the fact that a full explanation of what I do and how I developed it would take several volumes.  The three articles in Blackjack Review don’t begin to describe my three years of shuffle tracking casino play, and don’t even hint at the exceedingly complex statistics behind the neural analysis.  For this I take 99% of the responsibility.  The other 1% goes to your description of your readers and your suggestion that I focus accordingly when writing my articles – “keep it non-technical, tell some of the stories and situations I’ve encountered in the casinos”.

 Please pass this on to Mr. Snyder.  You might feel that some of this material is too sensitive for the GreenBelt.  That, of course, is your decision.  But I would argue that I didn’t get that option when Mr. Snyder posted his material.

 In general, Mr. Snyder’s comments concerning casino play are accurate.  They ought to be – he’s been playing and studying for years.  His conclusions ( especially statistical conclusions ), while legitimate for what he thinks I do, are often inaccurate because he’s uninformed as to what I’m actually doing.  I can’t blame him.  There wasn’t enough information in my three articles in Blackjack Review to reach correct conclusions.  He, and several others, have elected to read these articles the way one would interpret textbooks – looking to pounce on every fragment of non-confirmatory information and contrast it with their masterful, dogmatic comprehension of contemporary single dimension blackjack knowledge.  I’ll explain the term single dimension later.  I see an analogy with the game of chess.  Someone who studies the game for years can come up with a given move at a specific, crucial point, win the game, and feel he’s made the correct move.  If he studies a few more years, he might make a different ( better ) move in the same setting, still win the game, and feel he’s played perfectly.  If the best chess computers in the world analyzed the move, they might find another, different “better” move.  The irony is that often, as chess software advances and computers become faster, some of the earlier “best” moves, long since discarded, are revisited through different logic branches and often “reclaim” their title as the current “best” move for that situation.

I believe this is the case with Mr. Snyder.  He possesses a world of blackjack knowledge – both theoretical and practical.  However, even with all the quirks that occur within the real shuffles in the casinos, a better solution than any he’s seen to date should not be precluded.  Nor would I ever presume that my current play is the “ultimate” solution.  Far from it.  If I had Snyder’s playing experience, I would improve my game substantially.  ( More about the theoretical “ultimate” solution later. )

Let me take this paragraph by paragraph and address Mr. Snyder directly.

 

 Snyder:  I have received many queries from players regarding a series of articles on shuffle tracking which appeared under the byline “Don Goren” in Michael Dalton’s Blackjack Review. Don Goren appears to be a very smart guy, but in my opinion, what he is doing is armchair shuffle tracking, not real world shuffle tracking.

 Thanks for the affirmation of my intelligence.  That was your first erroneous conclusion.

You are correct about armchair shuffle tracking; however, I did just that for almost five years.  I hope you concede that I would have enough sense to set up more than adequate ranges and distributions around every cut, riff, grab, plug, strip, eyeball estimate of cut card placement, and every other parameter I’ve ever used in any of my neural networks and the succeeding simulations.  I spent over two years analyzing plugging alone, and I’m well aware of the implications of dealing beyond the 234th card with my approach.  Every factor concerning plugging that you claim I don’t account for ( and many others that you don’t even mention ) has been included in all my theoretical work and real-world casino play.  You could not have possibly gleaned that from the simplified information in my articles.  I’ll address all of your comments as they were made.

 

 Snyder:  He starts with the premise that he has a 6-deck game with 1½ decks cut off, and a shuffle where the dealer utilizes ¾-deck grabs. He uses these realistic approximations for his analyses, and develops some data regarding the methods and potential profitability of tracking such a game.

 Actually, I analyzed many types of shuffles for four, six, and eight deck games ( even hand held games ).  Some had ¾ deck grabs, some had ½ deck grabs, 35 card grabs, or 1 deck grabs, etc.  Others had combinations of the above.  You’ve read only a fraction of 1% of my development work.  The parameters I used in the article are just more common than the others.

 Incidentally, as an example of real-world play, although theoretically possible from computer analyses, I eliminated the idea of playing in Atlantic City ( and at the Tropicana in LV ) because of the huge variances in the 35 card grabs that occur in the two four-deck piles that often begin eight deck shuffles.  ( The eight deck shuffles that initially split into four piles or six piles work fine. )  The problem is that dealers apparently want to “eat into” these substantial four-deck piles as fast as possible.  They take large grabs, then small grabs to correct the situation, and so on.  The pit bosses seem to allow it.

 Snyder:  His methods may work on paper, or in a computer analysis, but I believe his basic assumptions about the game are false. I suspect he has spent much more time analyzing shuffles on computers than observing them in the casinos, because he ignores some very obvious factors which must be accounted for, and which any shuffle tracker would notice in the real world. Consider what it means to have a shuffle-point of 4½ decks (75% dealt). This is a cutoff portion of 78 cards. Goren advises that since the dealer takes ¾-deck grabs, the discards be counted in ¾-deck segments. In the discard tray, when you reach the cut card, you will have six 39-card segments stacked up, and you will know the count on each of them. In the shuffle that Goren describes, the 78-card cutoff segment is then broken into three half-deck segments and plugged into the 4½-deck stack of discards in the top half, middle, and bottom half.

But, does this happen in the real world?

Of course not.  And, if that’s what I were doing, I’d be in error.

You are correct, though.  I spent the last eight years analyzing blackjack on the computer and about three and a half years playing at a very limited series of venues in Las Vegas – strip casinos with the best rules ( including surrender ).  I don’t use a cutoff point of 4 ½ decks as you suggest.  That is simply where the “6th segment” ends – 234 cards into the shoe.  All my simulations were actual “table situation” simulations allowing for the additional cards that are played until the round is over.  These “extra” played cards allow me to “hone in” on the actual count of the 7th segment ( and thus the 8th segment ).  I have megabytes of simulation results just telling me what count to use for the 7th and 8th segments given the penetration and count into the 7th segment.  This particular piece of information, while helpful, has little effect on the bottom line dollar return.  The more consequential effect comes from the variation in plugging, and the resulting offsets, that you address later.
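To make the “honing in” idea concrete: with a balanced count, the whole six-deck shoe sums to zero, so the raw count of the 78 cutoff cards is forced by the running count at the cut card, and every extra card dealt past the cut narrows what can remain in segment 7.  The sketch below shows only that arithmetic, with a simple pro-rata split – a hypothetical illustration, not the non-linear formulas described in this letter.

```python
def cutoff_count(running_count_at_cut):
    """With a balanced count, a full 6-deck shoe sums to zero, so the
    78 cutoff cards (segments 7 and 8) must carry the negative of the
    running count at the 234th card."""
    return -running_count_at_cut

def remaining_seg7_estimate(running_count_at_cut, extra_count, extra_seen):
    """Hypothetical pro-rata estimate (NOT Goren's formula): subtract
    what the extra dealt cards already showed, then assign segment 7
    its positional share of the still-unseen cutoff cards."""
    unseen_total = cutoff_count(running_count_at_cut) - extra_count
    unseen_seg7 = max(0, 39 - extra_seen)   # segment 7 cards still unseen
    unseen_all = 78 - extra_seen            # all cutoff cards still unseen
    return unseen_total * unseen_seg7 / unseen_all
```

For example, with a +6 running count at the cut and 10 extra cards that counted -2, the unseen part of segment 7 is pegged at roughly -1.7 under this crude split.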

You say, “Goren advises that since the dealer takes ¾-deck grabs, the discards be counted in ¾-deck segments. In the discard tray, when you reach the cut card, you will have six 39-card segments stacked up, and you will know the count on each of them.”

This is incorrect, and I’m sorry if you’ve gotten that impression from my articles.

I NEVER KNOW FOR CERTAIN WHAT THE EXACT COUNT IS FOR ANY SECTION OF THE SHUFFLED SHOE.  Only card counters know the exact count – but, unfortunately, they have no idea as to how that count is distributed for the remainder of the shoe.
 
You’re interpreting my elementary articles using what I call a “Card Counting Mentality”.  Probably, the shuffle tracking techniques with which you’re familiar also attempted to achieve near-perfect tracking at every point in every shoe.  I guess that’s why you mention that they were abandoned when plugging came into vogue.  The trouble is that shuffle inconsistencies kill the accuracy of this type of shuffle analysis very quickly.  And that’s why plugging, as you later describe, will break down the integrity from the top down.

Once again, you’re correct, but this is not what I do.

I KNOW THE “MEAN” COUNT AND THE “VARIANCE” OF EACH SECTION OF THE SHUFFLED SHOE, AND, ONCE THE GAME BEGINS, THE VARIANCE BEGINS TO CONTRACT RESULTING IN GREATER RELIABILITY AS TO THE VALUE OF THE MEAN COUNT.

Furthermore, the “mean” count is an extremely conservative count and the “variance” is substantially larger than the real-world value since my parameter ranges and distributions in my neural networks and simulations were set at values that any knowledgeable casino player would consider more than ample.  I get up from a table when these conservative parameters have been violated by the dealer in the shuffle.  That’s another reason for playing the strip – more disciplined, experienced dealers.

The result is that I could be betting on a +8 raw count and really have a +2 raw count or even a negative count.  However, this +8 mean, on average, should probably have been a +9 or +10 in a real-world game, and there was as much possibility of the true count exceeding the mean ( +8 ) as falling below it ( although the deviation below the mean is greater than the deviation above the mean ).
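One way to picture betting on a mean with a known variance is a conservative decision rule: require that the mean minus some multiple of the standard deviation, converted to a per-deck true count, still clears an advantage threshold.  This rule and its parameter values are my hedged illustration, not Goren’s published procedure.

```python
def bet_signal(mean_raw, sd_raw, cards_remaining, threshold=2.0, k=1.0):
    """Treat a section as bet-worthy only when even a conservative
    estimate (mean minus k standard deviations), expressed as a
    per-deck true count, clears the threshold.  All parameter values
    here are illustrative, not Goren's."""
    decks = cards_remaining / 52.0
    true_mean = mean_raw / decks
    true_sd = sd_raw / decks
    return (true_mean - k * true_sd) >= threshold
```

With two decks remaining, a +8 mean raw count with a raw standard deviation of 2 passes this rule, while a +2 mean with a standard deviation of 3 does not.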

But what happens is that the counting process, through a feature ( described later ) in the formulas, “learns” from these deviations as the game progresses and “self-corrects”, resulting in a gradual decline in the variance and a more accurate mean.  Using an over-simplified example ( it’s much more intricate than this ), if you thought a 78 card section had a raw count of -6 and, after counting that segment, it really had a +2 count, then 8 more low cards came out than were expected.  Wouldn’t you want to adjust your other expected counts?  Of course.  I suggested, in one of the articles, just adding +8 to the beginning of the next segment count ( which is counted down from the beginning of the segment ).  This is an over-simplification, but it illustrates the point.  Here again, I have reams of output analyzing this correction process alone, and, in answer to a later question, this process ( plus additional count procedures in the original shoe ) gives me the ability, on occasion, to accurately predict the count of certain 13 card slugs ( when the variance around the mean becomes minimal and the statistical accuracy of the shoe to that point has been nearly perfect ).  Even then, I could never say for sure that this 13 card count is correct – as a card counter could if the shoe were dealt down to the last 13 cards.  My game in the casino doesn’t depend on this feature at all.  I ignore it when it happens.
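The over-simplified carry-forward in the paragraph above can be written out in a few lines.  The real procedure, per the rest of this letter, spreads the adjustment through non-linear dynamic formulas, but the bookkeeping is the same in spirit:

```python
def carry_forward(expected, actual, next_estimates):
    """If a 78-card section expected to count -6 actually counts +2,
    then 8 more low cards appeared than expected; add that surplus to
    the start of the next segment's expected count (the over-simplified
    correction suggested in the article)."""
    surplus = actual - expected
    adjusted = list(next_estimates)
    if adjusted:
        adjusted[0] += surplus   # dump the whole surplus on the next segment
    return surplus, adjusted
```

So `carry_forward(-6, 2, [3, -1])` yields a surplus of 8 and adjusts the next segment’s expected count from 3 to 11.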

As an answer to one of your later comments, I will describe how the neural analysis produces the above results.
 

 Snyder:  In actuality, when the dealer hits that cut card, she will finish dealing the current round of play prior to beginning the shuffle. …

<see RGE web site for the full content of Snyder’s comments>

 Absolutely true.  Incidentally, you mentioned a consistent dealer varying his cut card placement by a half dozen cards.  I used a distribution of 13 cards on either side of the 234th card in my simulations of 4 ½ deck penetrations.  I’ve done simulations with penetrations of 4 ½ decks to 5 ½ decks in quarter-deck increments for each shuffle that I’ve analyzed, with similarly conservative ranges for cut card placement.

 The chart below represents the distribution of penetrations that I use in my 6 deck neural networks and simulations.  It incorporates my perception of the frequency of full tables vs. 6 positions vs. 5 positions, etc., in my real world casino play.  Since I tend to have full tables or six playing positions when I play, and, despite its many advantages, I will never play head on or with just one other player ( I can’t keep up with my calculations ), this perception is heavily weighted toward the full table.
 
 
Card     Cases per    Card     Cases per    Card     Cases per
number   million      number   million      number   million
220          75       241       42168       262        6725
221         405       242       42739       263        5398
222         959       243       42790       264        4255
223        1879       244       42296       265        3425
224        2913       245       41629       266        2690
225        4159       246       40296       267        1967
226        5997       247       38548       268        1467
227        7701       248       36880       269        1065
228        9965       249       34958       270         724
229       12100       250       32784       271         564
230       15080       251       30303       272         387
231       17771       252       27864       273         249
232       21018       253       25361       274         168
233       24701       254       22995       275          89
234       27879       255       20554       276          57
235       30613       256       18169       277          47
236       33572       257       15887       278          18
237       36092       258       13672       279          12
238       37995       259       11507       280           5
239       39473       260        9895       281           1
240       41003       261        8042

 
 Let me use the above chart to illustrate the complexity of neural analysis.  You make a valid point that there is a huge range defining the end of the played cards ( 62 cards in the distribution that I use ).  Do you realize that there is a projected 7th segment card count ( mean and variance ) associated with each cutoff card number above, without the knowledge of any of the played cards?  In other words, there is a dependency between the number of cards dealt after the cut card and the count ( mean and variance ) of segment 7 – and therefore segment 8 – and therefore segments 1 through 6.  There are also hundreds of other intertwining relationships involving the cutoff card number.  A neural network uncovers these relationships in the course of its run.
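For anyone who wants to sanity-check the chart, a few lines of code recover its summary statistics – the cases sum to exactly one million, with the mean cutoff card in the mid-240s:

```python
# Cutoff-card distribution from the chart above: card number -> cases per million.
cases = {
    220: 75, 221: 405, 222: 959, 223: 1879, 224: 2913, 225: 4159,
    226: 5997, 227: 7701, 228: 9965, 229: 12100, 230: 15080, 231: 17771,
    232: 21018, 233: 24701, 234: 27879, 235: 30613, 236: 33572, 237: 36092,
    238: 37995, 239: 39473, 240: 41003, 241: 42168, 242: 42739, 243: 42790,
    244: 42296, 245: 41629, 246: 40296, 247: 38548, 248: 36880, 249: 34958,
    250: 32784, 251: 30303, 252: 27864, 253: 25361, 254: 22995, 255: 20554,
    256: 18169, 257: 15887, 258: 13672, 259: 11507, 260: 9895, 261: 8042,
    262: 6725, 263: 5398, 264: 4255, 265: 3425, 266: 2690, 267: 1967,
    268: 1467, 269: 1065, 270: 724, 271: 564, 272: 387, 273: 249,
    274: 168, 275: 89, 276: 57, 277: 47, 278: 18, 279: 12,
    280: 5, 281: 1,
}

total = sum(cases.values())                          # exactly 1,000,000
mean = sum(c * n for c, n in cases.items()) / total  # mean cutoff card number
sd = (sum(n * (c - mean) ** 2 for c, n in cases.items()) / total) ** 0.5
```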

To use an extreme case to make the point, if the play goes to the 281st card, absent a huge placement error of the cutoff card by the dealer, it had to happen with a predominance of low cards.  In addition to portending positive rounds up to the 7th segment for the just-completed shoe, this clump of low cards will tend to track through the shuffle and produce a statistically higher chance of high card and low card clumps within the next shoe ( I’ve been able to statistically track clump remnants as many as seven shoes into the future with single pass shuffles ).  So it’s not an absolute given that the position offsets caused in these extreme cases produce proportionately lower returns.  Offsets are quickly corrected within the count process – high card clumps, of course, are soundly advantageous.  As a matter of fact, depending upon certain corroborative relationships determined as the count progresses in the new shoe ( and supported by high bets ), this particular 47 card penetration into segment 7 may actually result in a net benefit vs. the return of the mean penetration of about 10 cards.

 In other words, there’s a distinct possibility that a penetration of 34 cards beyond the cut card could produce enough clumping in the shuffled shoe to counter the temporary negative effects of the offset caused by the extra 24 cards.  I can normally correct an offset sufficiently by the 52nd card ( the 4th 13-card group ) played in the next shoe.  You’ll see how later in this response.  So, if some of the high card clumps appear between the 53rd card and the cutoff card, I have a reasonable chance of picking them up with high bets.  There was no way you could have seen this from the articles.

 Many other correlative dependencies, however subtle, exist between the example you use for your criticism ( penetration beyond the 234th card ) and factors you’ve not considered.  That’s why I referred to your approach as single dimension analysis.  You’re using the direct effects of one factor as if they’re independent of everything else, without regard to correlative factors that produce other ( sometimes opposite ) effects.  While you very validly question my lack of casino experience ( 3 ½ years still gives me rookie status by your standards ), I would have to question the validity of statistical analysis from someone who views blackjack as a series of isolated, independent relationships.

A neural network, because its primary goal is to maximize the outcome, which we crudely define as return, ferrets out the above relationship and literally thousands of others that are invisible to even the most experienced blackjack players.

And each of these relationships is “weighed” against each of the others – and each combination of the others – to achieve a dynamic formula that represents the optimal predictability of the outcome, as best as it knows to that point.

The term, “dynamic formula”, simply means that a change in any component of the formula will change all the other components of the formula.

This is why, if your analytical approach is one dimensional ( cause/effect of one independent factor with another ), and my answer to you was two dimensional ( cause/effect of two dependent factors ), then a neural network could be considered n-dimensional, where n = the factorial of the number of variables used in the design minus 1.  This analysis would take years to run, even on super-computers.  Only in the last decade have techniques been developed to accurately reduce ( to reasonable run times ) the expanding tree that accompanies a neural analysis.  N-dimensional mathematics ( especially as it applies to arrays ) is on the cutting edge of theoretical mathematics.  Only recently have practitioners of applied mathematics been able to incorporate it into their practices.

Don’t take this personally.  Nearly every correctly designed neural network operating in science and industry has discovered or will discover relationships never dreamed of by the experts in the field.  Check the two million or so web sites that testify to this capability.

Back to blackjack.

 You’re correct, reading that second article was painful – even for me.  I think I might have even warned the reader to take a couple of No-Doz if he really wanted to get through it.  It was my inadequate attempt at producing the Reader’s Digest version of those volumes I referred to earlier.

A more detailed description would have included the playable procedure that resulted when I boiled down those “megabytes of simulation results” on penetration that I referred to earlier.  Once again, penetration ( because of dynamic formula corrections of the offset ) is of little consequence to the overall results using the type of “mean-variance” shuffle tracking that I use and will describe later in this response.

 I concede that, if the shuffle tracking you use ( or know of ) is based upon the accuracy of each card ( that’s the card counting mentality I referred to earlier ), then penetration, and the resulting offset after the plugging, is of supreme consequence and potentially fatal.  Again, that’s not what I do.  The case of 7th segment penetration and its accompanying offset of the rest of the shoe was included in the 100s of millions of simulations for this particular shuffle ( in the correct frequency of occurrence ) that ultimately resulted in the mean count and variance statistics for each 78 card section ( actually for each of the 312 card locations ) of the shuffled shoe.  So, as long as the dealer stays within my loose parameters, your case falls within the formula.  Your type of shuffle tracking would certainly be more accurate for the entire shoe with top plugging or even one-part plugging than with three-part plugging.  And, actually, in theory, my type of shuffle tracking approaches the accuracy of your shuffle tracking under these situations as well.  In other words, the variances around the mean are lower, creating a more accurate mean that nearly gives me the “card counting” accuracy you’re able to achieve under these ideal conditions.

 Snyder:  Since the most likely case you will encounter in the real world, when you have a 6-deck game with the cut card placed 4½ decks deep, is a seventh segment of between ¼ and ½-deck in size sitting on top of your 39-card segment 6, why does Goren fail to address this? How does this work into his “formula”? In his “map” of how the segments are married, his assumption that the 39-card sixth segment is always sitting on top of the stack is more often wrong than right. 

 Goren fails to address the fact that, no matter how consistent the dealer might be in her breaks and grabs, every counted segment is skewed from the top down due to a factor over which the player has no control.

 Once again, you are correct.  The example in the article is only for the case of the played cards ending at the 234th card.  And again, I could have filled a decade’s worth of Blackjack Review with just this particular facet of shuffle tracking alone.  The formulas I use in the casino are non-linear dynamic formulas, not the one-decimal linear coefficients I used to simplify the article ( in their non-rounded, linear version, they would be accurate on and around the 234th cutoff condition ).  Linear coefficients result when historical statistical regression analysis is used.  When a neural network is employed, coefficients are non-linear and dynamic – hence the ability to correct the offsets.  In other words, the coefficients are actually formulas themselves.

As I mentioned above, these formulas and coefficients do account for the offsets ( in the correct frequency of occurrence ) that you are referring to in your comments ( as well as all the incalculable relationships between this parameter and all the others ).

 My intent in these articles ( as conveyed to Mike Dalton at the outset ) was to come out of the blackjack “closet” and see what was happening in the real blackjack world.  It was his suggestion to write a few articles.  To that point, I had taken great pains to maintain anonymity.  I had a feeling, after incidents at Bally’s and Caesars, that the then-current shuffle tracking environment was on its way out ( shuffle machines, less disciplined dealers due to more venues, pit bosses and management beginning to catch up, and my personal two incidents ).  This has now been borne out nine months later.  No less than nine of the fourteen casinos that I play have modified their shuffles.  The Hiltons use three pass shuffles!!!  And even in my two stalwart single pass venues, the MGM has changed its plugging from 2, 4, 6 ( ¼, ½, ¾ ) to 1, 3-5, 7, and the dealers are uncontrolled as to whether their cards are placed in the discard rack before or after the players’ cards.  These moves aren’t fatal – they change the formulas and slightly lower the theoretical return.  The Rio, on the other hand, doesn’t seem to care if the dealers take 3 grabs from the three-deck piles ( ¾ deck grabs ) or 4 grabs ( 31 card grabs – usually non-uniform ).  This, of course, is fatal if I’m using a formula based on ¾ grabs.  I’m relegated, now, to only certain dealers.

In any event, I wanted to find those card counters who were interested in shuffle tracking and who, ultimately, would direct me to people interested in financing the development of a shuffle tracking computer with the capability of identifying the dealer’s down card using neural networks in real time within the casino ( I’m well aware of the various state laws ).  This arrangement has recently come to fruition.  More about that later.  My phone number was prominently placed in all of the articles with an expressed invitation to call.  I expected few people to earnestly read the details, and I expected a few calls.  There was not enough information for any sane person to seriously attempt to shuffle track the Bally’s shuffle without further instruction.  For instance, the mechanical techniques involved in counting cards in order of placement into the rack were left out of the articles.  Without these techniques ( or others that accomplish the same goal ), it’s next to impossible, in certain table situations given time constraints, to accurately count the cards in order of entry into the rack.

I received about a dozen inquiries, and six people were interested in learning my procedure.  These six people came down to Miami, individually.  Two gave up after a few days and were charged a nominal fee, two continued to practice with my software “drills” and then quit before receiving the formulas, and one got all the way through the drills, paid for and received my formulas, and plays about two weekends a month for low stakes at an Indian casino.  No results yet.  One player zipped through training and has been playing single pass shuffles, with less than desirable rules, throughout the country.  In two months of continuous play, he’s well ahead of my pace.  He’s told me of many single pass shuffles throughout the country – some with no plugging ( top plugs ), some with single plugs, some with 26 card grabs that are extremely consistent ( since they’re grabbing from six single deck piles ).  On this fellow’s recommendation, I played for seven consecutive days at the <sensitive> in Sparks, NV a couple of months ago.  They have three shoe tables ( at night ) that don’t use shuffle machines.  The heat at these tables is non-existent.  There is no plugging.  The dealer splits the shoe into six decks, and consistent ½ deck grabs are taken from the same combinations of stacks each time.  You don’t need any formulas!!!  My win rate for that week was much higher than I would normally average in Vegas – which means nothing, of course.

He also introduced me to the group of attorneys ( avid BJ players ) with whom I’ve joint ventured to develop the aforementioned shuffle tracking computer.  We have an ETA of 6 months to a year.  I’ve developed most of the software already.  Computer “devices” are not illegal.  Using them in a casino is illegal ( a felony in most casino states ).  Designing, manufacturing, and selling them in a state without this legislation is not illegal.  Examples would include gun manufacturers, manufacturers of radar detectors ( the use of which is illegal in some states ), or, to use a broader example, car manufacturers ( the use of a car is illegal, and the car subject to seizure, in the process of committing a drug related felony ).

Incidentally, this may be of some interest to you.  After the development of this computer, these attorneys intend to judiciously test these laws ( using other cooperative players ) in appropriate states to see if they could ultimately overturn the Nevada law through a backdoor approach.  I see that latter goal as a ten year marathon with a less than 1% probability of reaching the finish line.

 Snyder:  The way Goren attempts to track this shuffle is the way shuffles used to be tracked, before casinos started plugging the cutoffs. ..

 Well, I don’t know how shuffles used to be tracked, but the statement that my approach requires top down counting ( except when cutoffs are topped ) is incorrect.  I’m not questioning your accuracy – you are simply uninformed about the derivation of my techniques ( of course, they’ve never been published as they pertain to blackjack ) and how a neural network accounts for this offset problem ( and dozens of other complications that you haven’t even addressed ).  Because of the difficulty of tracing logic branches within neural networks, I don’t even know what many of these subtle relationships are.  They’re significant, but their identities are unimportant.  The network continuously moves toward a better outcome as time goes on.  How it gets there is important only to the purest of pure theorists.

 OK.  Here’s a four page condensation of how I used neural analysis to create a valid shuffle tracking approach ( which incorporates your fixation on the offset effects as well as all the other variables associated with the game that I’m able to identify ).  I could write five books on this.  So, find yourself an expert on neural networks ( maybe I’ll know him ), and, if I don’t touch on everything, just remember this is the Cliff Notes version.

 This is just for one pre-defined shuffle in a given casino:

 1) A simulation of the play of shoe #1, followed by the shuffle, followed by the play of shoe #2, followed by the shuffle, etc., etc., has to be run, ad infinitum.  I stop my runs at 30,000,000 shoes.

 The skill used in uncovering the parameters of the game, and the ranges and distributions associated with these parameters, defines the accuracy of the neural network, which uses the results of the simulations as the basis for the formulas.  So this is the point at which one wants to account for all of the dealer variations and casino quirks that you could possibly expose.

 These parameters needed to be evaluated.  Some took minutes, others took days of analysis, and others ( such as those parameters associated with plugging ) took months and years to define and quantify.
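Step 1 can be sketched as a loop in which the discard order from one shoe feeds the modeled shuffle that produces the next shoe.  The `play_shoe` and `shuffle` functions below are stand-ins of my own; as the text says, the real work is modeling the shuffle’s grabs, riffles, and plugs, and the parameter distributions behind them.

```python
import random

def simulate(n_shoes, shuffle, play_shoe, decks=6):
    """Skeleton of the play-shuffle-play loop: deal a shoe, record the
    order cards entered the discard rack, then run the modeled casino
    shuffle on that stack to produce the next shoe."""
    # 6 decks of 13 ranks x 4 suits (suits are irrelevant to the count)
    shoe = [rank for _ in range(decks * 4) for rank in range(13)]
    random.shuffle(shoe)                 # start from a well-mixed shoe
    for _ in range(n_shoes):
        discards = play_shoe(shoe)       # discard-rack order for this shoe
        shoe = shuffle(discards)         # modeled shuffle: discards -> next shoe
    return shoe
```

The point of the loop is that clumps persist: whatever order one shoe’s cards enter the rack is the raw material the next shoe is built from, which is why remnants can be tracked across shoes.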

 Starting with the game:

 Are any cards burned?  Most casinos set the number of burned cards ( if any ).  In this case, the “burn card” parameter is not a distribution but a single number.  No problem.  But what if each dealer determines his or her own number of burn cards ( I’ve seen that happen )?  This would require a distribution for the burn card parameter representing, as closely as possible, what actually happens in that casino.  Another minor reason for playing at the strip casinos.
 
 This simple example shows how, if you were setting these parameters with your real-world casino experience, you would have been able to tighten the distributions, yielding more exact results.  My distributions were too conservative, which has since been borne out by my actual casino play.

 Even before this:  What are the playing rules?  These are fixed.  How many players ( positions ) for this round?  This parameter, of course, is a variable.  This is one’s best guess as to what the ultimate player will encounter during his sessions.  If the player is a high roller commanding his own table, for instance, no distribution is required – every game is head on.  If you’re playing the $5 tables on the strip in the evening, you can just about count on a full table – or pretty close.

 I’ll start listing some of the other parameters with some comments along the way.

 How will the other players play a given hand?  I don’t think it’s any secret that the other players at a table ( especially $5 tables ) are rarely playing correct basic strategy.  Rather than assume in the simulation that everyone was playing basic strategy, I set up tables ( computer matrices, not blackjack tables ) reflecting “typical” non-basic-strategy plays – the ones you see happening all the time.  I also set up tables of plays that make you cringe when you see them happen.  So now all I had to do was estimate how often these alternate plays occur and randomly access them in the simulation.  I even weighted the randomness to reflect the probability ( also an estimate ) of the same poor player sitting through consecutive shoes.
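
 The weighting idea can be sketched as follows – the style names, frequencies, and persistence probability are illustrative placeholders, not figures from my analysis:

```python
import random

# Hypothetical play-style tables; in the real simulation each style would
# map (player total, dealer upcard) to a deviation from basic strategy.
BASIC = "basic"
TYPICAL_ERRORS = "typical_errors"   # e.g. standing on 16 vs. 7
CRINGE = "cringe"                   # e.g. splitting tens

# Assumed frequencies of each style at a $5 table (guesses, not measurements).
STYLE_WEIGHTS = {BASIC: 0.30, TYPICAL_ERRORS: 0.55, CRINGE: 0.15}

def draw_table_styles(seats, prev_styles=None, stay_prob=0.6):
    """Pick a play style per seat; with probability stay_prob a seat keeps
    its style from the previous shoe (same poor player still sitting)."""
    styles = []
    for i in range(seats):
        if prev_styles and random.random() < stay_prob:
            styles.append(prev_styles[i])
        else:
            names, weights = zip(*STYLE_WEIGHTS.items())
            styles.append(random.choices(names, weights=weights)[0])
    return styles

shoe1 = draw_table_styles(6)
shoe2 = draw_table_styles(6, prev_styles=shoe1)
assert len(shoe2) == 6 and all(s in STYLE_WEIGHTS for s in shoe2)
```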

 Is the dealer’s down card his first ( rare in the US but seen in the Caribbean ) or his second card?  This parameter has major implications for the neural analysis at levels far above what I can handle in my head.  Since a major focus of the blackjack computer I’m developing is the identification of the dealer’s down card, the algorithms I’m putting on the shuffle tracking computer are heavily dependent upon this information.

 Assuming immediate pickups of blackjacks, busts, and surrenders, does the dealer pick up from his right to left ( I’ve seen the opposite )?  Does the dealer pick up individual players’ hands with the player’s last drawn card going into the rack first, or the opposite?  Or is the casino lax about this, so that a distribution is required ( more frequent than one would think – I see it all the time )?

 Does the dealer, after picking up the players’ hands, sweep backwards putting his cards on top ( and in what order ), or does he put his cards under the players’ cards so his go in first?

 Where does the dealer place the cutoff card?  This always requires a distribution, as you have stated.  Where you thought the error range was 6 cards, I used 13.  Here’s an example of where your experience could have tightened the distribution, yielding more accurate results.

 Now the plugging.  I concur with your observation that the first grab is often larger than the others.  I set the distributions to reflect that.  However, many venues have their dealers lay out the cutoffs, which tends to produce more even plugs.  In any event, I use dozens of distributions involving plug sizes vs. plug positions, small-handed dealers ( for lifting the cards in the rack ), first plug size vs. the second and third, etc.  Incidentally, when the Rio began using those four-walled discard racks, I found the plugging to be more accurate.  It’s just their shuffle grabs that have deteriorated.

 Then, let’s say the shuffle involves splitting the deck into two piles.  There’s a distribution involving the final size of the two piles.  Also, there’s a distribution associated with the “topping off” that occasionally occurs when the dealer produces two uneven piles and steals a few cards from the top of the bigger one.

 The first right-handed grab needs a distribution.  The first left-handed grab needs a distribution ( it might be different ).  How many riffs?  Usually fixed, but still a very tight distribution.  Any strips?  If so, a definite distribution.

 The second right grab – left grab – riffs – strips – etc., etc.

 On stepladder shuffles, the treatment of the final two piles.  Mixed with the center pile or with each other.

 Any cuts or stripping of the piles along the way?  If so –   distributions.

 If there’s a second pass, same problems.

 The player cuts the cards.  I found the analysis of this parameter particularly interesting.  I use a distribution that is significantly skewed to the player’s right of the middle of the deck.

 Then the cutoff card and the distribution mentioned earlier.

 There are many other obscure parameters requiring distributions that I’m not even mentioning.  For instance, the size of the right hand break vs. the left hand break in the riff.  The relationship of the second right hand grab to the first right hand grab.  The third grab to the first two.  Etc., etc.

 I normally have about 50-60 separate parameters in a single pass shuffle.  This is combined with the logic of the play and the accounting functions to produce the 30,000,000 consecutive simulated games.  I have never addressed the problem of new packs of cards – and the resulting biased shoes.  Whereas the average player might want to avoid these shoes, I’ll take any shoes with above average clumping.
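
 A bare-bones sketch of how a few of these distributions might drive a simulated single-pass shuffle.  The means and spreads below are placeholders, not fitted dealer profiles:

```python
import random

def riffle(pile, right_mu=26, right_sd=3):
    """One distribution-driven riffle: split 'pile' with a noisy break point,
    then interleave with 1-3 card drops from each side."""
    cut = max(1, min(len(pile) - 1, int(random.gauss(right_mu, right_sd))))
    left, right = pile[:cut], pile[cut:]
    out = []
    while left or right:
        for side in (right, left):            # alternate sides
            k = random.randint(1, 3)          # cards per "thumb release"
            out.extend(side[:k])
            del side[:k]
    return out

def one_pass_shuffle(shoe, grab_mu=20, grab_sd=4, riffs=2):
    """Single-pass shuffle: repeatedly grab ~grab_mu cards from each of two
    piles and riffle the combined grab together, 'riffs' times."""
    mid = len(shoe) // 2 + random.randint(-4, 4)   # uneven pile split
    p1, p2 = shoe[:mid], shoe[mid:]
    result = []
    while p1 or p2:
        g1 = max(1, int(random.gauss(grab_mu, grab_sd)))
        g2 = max(1, int(random.gauss(grab_mu, grab_sd)))
        grab = p1[:g1] + p2[:g2]
        del p1[:g1]
        del p2[:g2]
        for _ in range(riffs):
            grab = riffle(grab, right_mu=len(grab) // 2)
        result.extend(grab)
    return result

shoe = list(range(312))
shuffled = one_pass_shuffle(shoe[:])
assert sorted(shuffled) == list(range(312))   # a permutation, no cards lost
```

 The real simulation would draw every one of those 50-60 parameters from its own fitted distribution; this sketch shows only the grab-size and break-point mechanics.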

 We now have 30,000,000 simulated games which act as the history base for the neural network.

2)  Now the neural network is run.  In the last couple of years, generalized
neural network software has been available on the market.  I wrote all my own software for my blackjack work which allows me to avoid much of the overhead of these standardized networks.  Also, they weren’t around when I did most of my programming.

 The neural network is set to determine weighted coefficients ( in the form of formulas ) for each card of the previous shoe that produces the highest correlation of predicting location of each of the 312 cards in the new shoe.

 It does this by using a standard algorithm for a starting-point set of weights and begins going through the 30,000,000 cases, modifying nearly 100,000 weights, trying to achieve a better outcome ( highest correlation ) against the 30,000,001st shoe.  Then, incorporating the 30,000,001st case into the history with the other 30,000,000 cases, the 30,000,002nd case is generated.  Again, the network, using the previously calculated formulas, attempts to approach a higher and higher correlation with case 30,000,002 while “back-propagating” to case 30,000,001 to achieve a balance as it goes back and forth.  At least another 200,000,000 cases are generated.  This could turn into an endless process, so software procedures have to be instituted to minimize extraneous analysis that is unlikely to produce better results.
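
 The fitting loop can be illustrated with a stripped-down stand-in: ordinary stochastic gradient descent fitting a toy linear target.  This is my illustration of the general idea, not the actual multi-layer, back-propagating network described above:

```python
import random

def train_linear(history, lr=0.05, epochs=200):
    """Learn weights w so that sum(w[i] * x[i]) approximates y for every
    (x, y) pair in history, by repeated small gradient steps -- the same
    "adjust weights toward a better outcome" cycle, in miniature."""
    n = len(history[0][0])
    w = [random.uniform(-0.1, 0.1) for _ in range(n)]
    for _ in range(epochs):
        for x, y in history:
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = pred - y
            for i in range(n):
                w[i] -= lr * err * x[i]    # gradient step on squared error
    return w

# Toy history: the "next-shoe value" is a fixed linear function of the
# previous shoe's features (purely synthetic data).
true_w = [0.5, -0.3, 0.2]
data = []
for _ in range(100):
    x = [random.uniform(-1, 1) for _ in range(3)]
    data.append((x, sum(t * xi for t, xi in zip(true_w, x))))

w = train_linear(data)
assert all(abs(wi - ti) < 0.05 for wi, ti in zip(w, true_w))
```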

 There are many approaches in current use for paring down the expansion.  A good neural network would incorporate several of these.  The class of procedures I’ve researched the most is called Genetic Algorithms.  The theory still boggles my mind after almost seven years of examination.  It’s basically this:

 If evolution continually strives for a better and better adaptation to the natural environment, then, because a neural network is continually looking for a better correlation to the environment it was designed for, the laws governing evolution ( natural selection ) should be applicable to neural networks.

 And they are.  What this means is that by following the scores of ratios, relationships, laws, and theories associated with genetics, the network designer can eliminate over 99% of the extraneous paths that would have been traveled by the network and still come up with a near-perfect solution.  Now the network becomes manageable from a run-time basis and permits such applications as nearly instantaneous re-routing of communications lines, diagnosis of medical conditions, fraudulent credit card use, predictability of collision events in space, predicting greyhound racing results, and ….. predicting blackjack results.
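
 A minimal genetic algorithm, shown here on a toy “one-max” fitness function, illustrates the selection / crossover / mutation cycle.  All sizes and rates are arbitrary illustrative choices:

```python
import random

def evolve(fitness, genome_len, pop_size=40, gens=60, mut=0.05):
    """Minimal genetic algorithm: selection, crossover, mutation.  The
    population explores only a tiny fraction of the search space yet
    converges on a near-best genome -- the pruning idea in miniature."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]            # natural selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randint(1, genome_len - 1)
            child = a[:cut] + b[cut:]              # one-point crossover
            child = [bit ^ (random.random() < mut) for bit in child]  # mutate
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness: count of 1-bits ("one-max"); the GA should find all ones.
best = evolve(fitness=sum, genome_len=20)
assert sum(best) >= 18
```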

 Usually, on single pass shuffles, I run about 250,000,000 cases before I stop the network.  As the network is running, a graph of the accuracy ( correlation ) vs. time appears on the screen.  I can visually determine when I want to stop the network by viewing the projection of the graph to see if it offers additional improvement vs. the time I’m willing to spend.  When the projection appears to be asymptotic with the time axis, it’s time to stop.
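
 The visual stopping rule could be automated along these lines ( a sketch only; the window and threshold are arbitrary ):

```python
def should_stop(correlations, window=5, min_gain=1e-4):
    """Stop the network when the correlation curve has gone flat: the
    average improvement over the last `window` points falls below
    min_gain.  A stand-in for eyeballing the on-screen graph."""
    if len(correlations) <= window:
        return False
    recent = correlations[-window - 1:]
    gain = (recent[-1] - recent[0]) / window
    return gain < min_gain

curve = [0.50, 0.62, 0.70, 0.74, 0.76, 0.769,
         0.7701, 0.77015, 0.77016, 0.77016, 0.77017, 0.77017]
assert not should_stop(curve[:4])   # still climbing early on
assert should_stop(curve)           # flat tail: time to stop
```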

 The final formula for this shuffle in this casino is actually a 312 x 312 dynamically changing matrix – impossible to memorize and certainly impossible to use within the time constraints in the casino without a computer.

 Dynamically changing means that all values within the matrix are dependent on every other value and can change if any single value in the matrix changes.

 If I used pairs of cards instead of the 312 individual cards,  I would then have a 156 x 156 dynamically changing matrix with less accuracy and also impossible to handle.

 If I used groups of four cards, we’re down to a 78 x78 dynamically changing matrix.  Still no good.

 What I use is a 24 x 24 matrix with an 8 x 4 major subset within the 24 x 24.  The major 8 x 4 matrix is dynamic and the sub-matrices are only dynamic within themselves.  This represents ¼-deck packets ( 13 cards ) and allows me, in certain rare instances, to have a very high probability of knowing the count of a given packet ( as you find “particularly farfetched” ).
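
 The packet representation can be sketched as follows – a hi-lo count per quarter-deck packet, with a placeholder 24 x 24 weight matrix standing in for the trained formulas:

```python
import random

def packet_counts(shoe):
    """Hi-lo running count of each consecutive 13-card packet."""
    def hilo(card):                     # card = rank 1..13 (ace = 1)
        if 2 <= card <= 6:
            return 1
        if card == 1 or card >= 10:
            return -1
        return 0
    return [sum(hilo(c) for c in shoe[i:i + 13])
            for i in range(0, len(shoe), 13)]

def predict(prev_packets, weights):
    """Predicted count of each new-shoe packet as a weighted combination
    of all 24 old-shoe packet counts (one row of weights per packet)."""
    return [sum(w * p for w, p in zip(row, prev_packets)) for row in weights]

shoe = [r for r in range(1, 14) for _ in range(24)]   # 312 cards, 24 per rank
random.shuffle(shoe)
prev = packet_counts(shoe)
assert len(prev) == 24 and sum(prev) == 0   # counts balance over a full shoe

W = [[random.gauss(0, 0.1) for _ in range(24)] for _ in range(24)]  # placeholder
pred = predict(prev, W)
assert len(pred) == 24
```

 The actual formulas are, of course, trained rather than random, and the dynamic dependencies between entries are what the sketch omits.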

It’s a static ( non-dependent ) major-subset 8 x 4 matrix for the very specific 234th-card cutoff that you saw in the article in BJ Review.

 It’s the dynamic feature of these formulas that (as the shoe progresses ) allows me to negate the negative effects of the segment offsets of a 281st card cutoff while taking advantage of the clumps that will statistically follow through the shuffle from a shoe with a large clump of low cards in segment 7.

 3)  Now, before I can step into a casino using this shuffle, I have to simulate the game – with the pared down formula set – to confirm the results of the neural network and to determine if my consolidation of the formulas will produce unacceptable returns.  Such is the case with some two pass shuffles and all of the few  three pass shuffles I’ve analyzed.

 After 3½ years of playing ( much too small for any worthwhile statistics ), my feeling is that I’m running at about 75% of what these simulations suggest – well ahead of card counting.  I believe this 25% deficiency to be a combination of less-than-perfect play and the conservative values in the parameters I use in the simulations.

 Incidentally, I believe it was Bruce Carlson ( maybe I’m mistaken ) who said in an e-mail to Mike Dalton that the above multidimensional analysis could be accomplished without using neural networks.  If he knows how, I’d love to hear it.  He’d be breaking new ground in the science of mathematics.

 

Snyder:  When the casinos started plugging the cutoffs, the computer tracking programs were still capable of getting an accurate count on the tops, because computers have no problem counting from the top down. They have a perfect memory of what went in from the bottom up, and can easily adjust the segments appropriately from the top down. Humans can’t do this. Those of us who have been tracking tops for years have developed various methods of estimating top counts. Essentially, you have to start getting independent running counts on the last few rounds, and then estimate which ones you will use when you see where the shoe actually ends.

 Certainly.  A good procedure for tracking tops.

 Snyder:  One thing for certain: if you estimate the count on the tops as being the same as the count on the sixth 39-card segment, you will be wrong most of the time, and very wrong quite often. As an experiment, shuffle 6-decks of cards together, then take a 39-card segment at random and get a running count on it using any counting system. Now remove the bottom 15-20 cards, and replace them with a random 15-20 cards from the remaining 5¼ decks. Now count it again. Do this again and again, and you will discover that this makes a huge difference, sometimes changing plus-counts to minus and vice versa.

 Correct.
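
 Snyder’s experiment is easy to simulate.  This sketch uses a hi-lo count and swaps 18 of the segment’s bottom cards; the swap size and trial count are arbitrary choices within his stated 15-20 range:

```python
import random

def hilo(card):
    """Hi-lo value; card is a rank 1..13 (ace = 1)."""
    if 2 <= card <= 6:
        return 1
    if card == 1 or card >= 10:
        return -1
    return 0

def snyder_experiment(trials=5_000, swap=18):
    """Count a random 39-card segment of a shuffled 6-deck shoe, replace
    its bottom `swap` cards with random cards from the remaining decks,
    and measure how often the count changes sign."""
    sign_flips = 0
    for _ in range(trials):
        shoe = [r for r in range(1, 14) for _ in range(24)]  # 6 decks
        random.shuffle(shoe)
        segment, rest = shoe[:39], shoe[39:]
        before = sum(hilo(c) for c in segment)
        random.shuffle(rest)
        segment = segment[:39 - swap] + rest[:swap]
        after = sum(hilo(c) for c in segment)
        if before * after < 0:          # plus became minus, or vice versa
            sign_flips += 1
    return sign_flips / trials

rate = snyder_experiment()
assert 0.05 < rate < 0.6    # sign flips are common, as Snyder predicts
```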

 Snyder:  Actually, there is one way to control this factor — always play head-up with the dealer, and near the end of the shoe, always play a single hand. This would usually keep your seventh segment a
single-digit number, unless you yourself were dealt a small pair, and drew to multiple-card hands along with the dealer. But Goren does not even mention this possibility. And, as he claims to most often play the $5-minimum tables, how often will he realistically be able to get a heads-up game? At Bally’s in Las Vegas?

 Never. I can’t slow the game down enough to bet, play, and do my calculations.  When the other players jump ship, so do I.

 Snyder:  The fact is, cutoff plugging renders the old count-every-segment methods of tracking obsolete, except for computers, and with the current anti-device laws, concealed computers are pretty obsolete

 I thought they were obsolete, also.  I had never seen one.  The standard blackjack books would refer to them occasionally.  As soon as I wrote my first article, I got about 9 or 10 independent ( I think ) calls from people claiming to be actively using card counting computers ( regardless of the law ) and wanting me to program their chips for shuffle tracking.  From my intense questioning concerning input/output methods, what the computer does, etc., I believe that these calls were real and that there were at least five distinct computers among the group.  Two of these people sent their computers to me to analyze.  They were both crude devices by today’s standards.  One used a 286 chip from the late eighties and the other used a Z80 chip from the early eighties !!!  They both had capabilities of handling 1 to 8 decks.  I tested them for unusual conditions, such as cases where the correct play would be to draw on 17, and they responded correctly all the time.  The input devices were different, but both required toe manipulation.  Both required wires from the power source – a clump of standard AA cells in one and connected 9-volt batteries in the other.  The output on one was a vibrator connected to the computer board by wires.  The other had a transmitter on the board which transmitted to an earplug.  This computer “talked”.  Both computers took an unacceptable time determining if a split was the correct play.

 With a number of these people, your name came up as someone experienced in concealed blackjack computers.  They seem to feel that you are currently using a shuffle tracking device or are controlling a team that’s using one.  They also feel that, because of this association with shuffle tracking computers, you “talk down” casino computers and shuffle tracking in general, and use your publication to dispel the idea that they are more common than you portray – such as in your statement above.  If that is the case, and I have no idea if it is or not, I could understand why you would want to keep this under wraps.  But this would constitute the second “problem” I referred to in the very first paragraph of this letter.  If you are involved with a shuffle tracking computer, then the idea of my proliferating the concept of shuffle tracking and computers through articles, internet communication, and word of mouth could work against your personal interests.  You also have a publication and business to protect.  All this could constitute a conflict.  That’s ok; if it’s the case, congratulations.  Business comes first.

 Snyder:  There are a few games remaining in this world where the cutoffs are not plugged, and in these games — when you have a very consistent dealer — you may utilize a count-every-segment technique to your advantage. I personally do not recommend such techniques because they are very difficult, too few dealers are consistent enough, and I don’t believe most players should be attempting mental gymnastics while they play. You need an act!

 Actually, I was quite surprised to see how many casinos don’t plug cutoffs.  When I was in Vegas with Mike Dalton for the gathering of Wong’s subscribers, I noticed that the Santa Fe and Nevada Club didn’t plug ( and the penetration at the Nevada club was 5 ½ decks !!! ).  The Nugget in Sparks doesn’t plug.  The fellow that uses my shuffle tracking procedures has a surprisingly large list.

 I agree with your recommendation, and you definitely need an act.  I’m deficient in this area  – too busy calculating.

 Snyder:  But the fact is, when you are tracking a 6-deck game with 4½ decks dealt, you must account for the fact that sometimes this means you have a 4½-deck stack of discards, with 1½ decks cut off (to be
plugged) and sometimes it means you have a 5-deck stack of discards with 1 deck of cutoffs to be plugged. In real world casinos, the likelihood of the discard stack being exactly 4½ decks is extremely small. Random game factors control the actual sizes of the discard stack and plug portions, and the likelihood of the cut card appearing precisely at the end of a round is the same as its likelihood of appearing immediately after the first card of a round is dealt, or at any point in between these extremes.

 Previously answered.

 Snyder:  What does this do to Goren’s formulas? What does this do to his
methodology? …

<see RGE web site for the full content of Snyder’s comments>

As many dealers either bury the tops or bury the bottoms with a cut on one pile, this entire 3-deck stack has almost no predictability as far as the values on the 39-card segment sizes from top to bottom.

 Once again your facts are correct, and, if I were doing linear analysis, your conclusion would be correct.  With today’s multi-dimensional techniques, such as neural analysis, the last sentence is an erroneous statistical conclusion.

 

Snyder:  I also find Goren’s claim that he can sometimes identify the values of 13-card clumps within a 1½- deck segment particularly far fetched. I must also point out that the consistency of 90% of dealers in
breaks, plug points and grab sizes is very poor if you are attempting to use the exacting methods Goren proposes. To attempt to use such methods in 2-pass shuffles is particularly a waste of time.

 Let me correct a misinterpretation.  What I said is that I can identify the count of certain 13-card clumps, not the values.  However, the full neural network formula ( the 312 x 312 dynamically changing matrix ), when applied to a shoe/shuffle/shoe situation, can easily identify the actual values of groups of 13 consecutive cards most of the time.  This requires the input of the number and suit of each card as they come out, including picture-card distinction.  It also requires “notifying” the computer if a player chooses surrender vs. stand, since surrendered hands go into the rack.

 

Snyder:  This methodology of counting precise segment values from top to bottom has some utility in one-pass R&Rs when cutoffs are topped, bottomed, or even middled, but not multi-plugged. Such games do
exist, but they are not common, and they still require that you find very precise and consistent dealers.

 Previously answered.

 

Snyder:  If Goren feels strongly that I am evaluating his work incorrectly and that his methods would work, then I would like him to demonstrate his method to me. I know players in Las Vegas who have casino
BJ tables set up in their homes. I know some counters who are ex-professional dealers. I would have no trouble setting up a demonstration table where that Bally’s two-pass shuffle could be performed, so that Goren could sit and play and track under real world conditions with a couple of other players at the table. I would like to invite Michael Dalton, Stanford Wong, Don Schlesinger, Bruce Carlson, and various other
experts to witness his demonstration. I don’t believe he can do what he says he can do, and I would love to see it if it’s really possible.

 That’s a very dramatic proposal. But let’s look at it for a minute.

 Assuming, for a second, that there was something I could gain from such a demonstration ( and I can’t think of anything ), what could possibly be proven?

 Let’s say I sit at the table and the first shoe begins.  What would you like to know?  Am I counting correctly? Am I playing correctly? Am I counting cards in order of placement into the shoe? Am I placing the segment count on chips correctly?

 Assuming I do this correctly, does that prove my shuffle tracking procedures work?  Of course not.

 OK.  Let’s go into the shuffle and I do my calculations.  We could stop the game and I could write out the results of the calculations.  Then someone cuts the cards and we could go through the first six 13-card groups, and maybe my -7 prediction turns out to be +4.  It proves nothing.  And if it turned out to be -7, right on the button, it would also prove nothing.  And if I win the shoe, it proves nothing.  And if I lose the shoe, it proves nothing.

 In fact, there is nothing I can do in a day, a week, or a month that can prove the validity of my shuffle tracking method.  Anything you come up with could only disprove my shuffle tracking in your minds – which is how you’re predisposed anyway.

 That’s playing with a stacked deck, and I have no interest in such an arrangement.  In fact, my future interests in blackjack lie in a self-destructing, credit-card-sized computer that ascertains the identity of the dealer’s downcard with extraordinary precision.  So,

 I have a counter proposal.

 In about six months ( let’s say 4 to 8 months ),  I’ll go up to Mike Dalton’s home in central Florida and demonstrate, on a desktop PC, the neural network’s application to a one pass shuffle ( like the MGM or Rio ), with three part plugging (since you’re so hung-up on penetration), in the form that will go on the 800 MHz chip in my shuffle tracking computer.  I hope Mike would invite you and your associates.  Mike, or anyone he designates, could deal actual rounds and do the shuffling using his cards.  As long as he follows general casino rules concerning order of cards into the discard rack and as long as he shuffles as any dealer would perform that type of shuffle ( with all the normal quirks and variations ).

I’ll warrant that the following results will occur:

xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 
THE COMPUTER NEEDS THE FIRST TWO OR THREE SHOES TO CREATE RELATIONSHIPS BETWEEN
CUTOFFS THAT ENABLE CORRECT CARD IDENTIFICATION BY THE THIRD OR FOURTH SHOE.
 
THE COMPUTER CAN DEDUCE MANY VALID RELATIONSHIPS WITHIN THE SECOND AND THIRD SHOES,
AS WELL, BUT THESE RELATIONSHIPS OCCUR ONLY IN CERTAIN SECTIONS OF THE SHOE.
 
THE COMPUTER WILL TELL US WHEN IT’S READY.
 
FROM THAT POINT ON, THE COMPUTER WILL PREDICT THE DEALER’S DOWN CARD,
BOTH CARD AND SUIT, WITH THE FOLLOWING APPROXIMATE FREQUENCIES:
 
 
IF THE DEALER’S DOWN CARD IS THE SECOND CARD:
 
                          1ST ROUND   2ND ROUND   OTHER ROUNDS
ONE POSSIBLE CARD            20%         25%          55%
TWO POSSIBLE CARDS           35%         45%          75%
THREE POSSIBLE CARDS         50%         60%          85%
 
IF THE DEALER’S DOWN CARD IS HIS FIRST CARD:
 
                          1ST ROUND   2ND ROUND   OTHER ROUNDS
ONE POSSIBLE CARD            25%         35%          70%
TWO POSSIBLE CARDS           45%         55%          85%
THREE POSSIBLE CARDS         65%         75%          95%
 
 
IF THE DEALER’S DOWN CARD IS HIS SECOND CARD, AND AT LEAST 5 DRAW CARDS
ARE PLAYED PRIOR TO THE PLAYER USING THE COMPUTER, THE PERCENTAGES WILL
APPROACH THOSE OF THE TABLE REPRESENTING THE FIRST CARD AS THE DOWN CARD.
 
 
THE LEVERAGE OF THIS KNOWLEDGE, COMBINED WITH THE PREDICTABILITY OF THE
PLAYER’S POTENTIAL DRAW CARD(S) AND THE DEALER’S POTENTIAL DRAW CARD(S)
YIELDS ADVANTAGEOUS PLAY.
 
THE PREDICTABILITY OF THE SET OF THE NEXT 26 CARDS AT THE TIME OF BET YIELDS
A BET ADJUSTMENT THAT IS PROBABLY SUPERIOR TO THE CURRENT STATE OF THE ART.
 
FLAT BETTING, ALONE, SHOULD PRODUCE RETURNS IN EXCESS OF 5%
 
 I’m assuming you can project the power of this predictability into return.  Be careful, the return may not be as high as you think.  However, since the play is no longer based upon the count ( since you know the dealer’s down card most of the time ) and the bet is less dependent on the card count or the shuffle tracking count, flat betting is almost as potent as bet variation – a huge advantage for remaining undetected.  Surrender is even more important under these circumstances than it is with my type of shuffle tracking.  And insurance, of all things, becomes a potent contributor to the return since your insurance decision is almost always correct.

 Here’s my proposal.  If the results match ( or come reasonably close to the above levels ), we sit down and negotiate a marketing agreement using my technology ( both software and hardware ) and your knowledge of the market, the fine line between advertising and keeping information from the casinos, and your ideas of how to best capitalize on the device.  At no time would I break the law or expect you to do so.

 If we come to an agreement, I’ll expose the hardware technology to you.  If you don’t think it’ll evade detection ( and I can’t modify it ), you can withdraw.

 If you feel this arrangement doesn’t forward your interests, or you just abhor blackjack devices, then I guess that ends this discussion.

 Of the thirty or so players that I’ve polled concerning their feelings toward using a cutting edge device ( both hardware and software ) in the casino ( all understood the law ), about 80% thought they would use it without hesitation.  I’ve spoken to a couple of wealthy $5,000-$10,000 bettors who said they would wear the output device if someone else did the input and carried the chip.

 The crude casino computers that I described earlier are not what we’re going to produce.  Technology is well ahead of these toys.

 Without going into too much detail, today we have 800MHz chips with 3 gigabyte capacity.  <sensitive text removed>  Today we have power supplies the size of dimes ( and smaller ).  We have transceivers the size of the tip of a pencil eraser whose local transmissions ( <12 ft. ) are virtually undetectable unless a sensitive detection device were placed directly between the transmitter and receiver.  We have input sensors so small, that they literally assimilate into the lattice of a person’s clothing.  We have output devices that are as small as a 1/8″ ball bearing and don’t require antennas and chips that can be programmed to destroy the software instantaneously with a signal or if touched incorrectly !!!

 And who says the player has to have the device on him?  There are lots of possibilities, including the venues that don’t yet have laws against devices.  Twenty-four boats leave Florida ports every day for the 3-mile point where Florida Law ceases and Maritime Law begins.  Neither Florida Law nor Maritime Law addresses the issue of computer devices.

 Unlike your proposal, from this demonstration, something can be proven.  And I would have something to gain from the exercise.

 

Snyder:  I would like to have input from Wong, Schlesinger, et al., as to what they believe would constitute a fair demonstration of both the methodology itself and Goren’s abilities. I would also be interested in hearing from shuffle trackers who believe in Goren’s methods, especially if they claim to be currently utilizing these methods successfully.

Perhaps I am wrong, but I think Don Goren is mistaken in both his approach and his method of analysis.

 You just got it.

 If you choose to carry this any further, you or anyone else is welcome to call me at 305 – 595 – 5903.  I don’t bite.

 Thanks, Mike, for passing this on.
 

Very truly yours,

Don Goren
