***Disclaimer: I'm going to come off as supporting the Kelly Criterion, but I'm actually on the side that says this is impractical for most people. I'm just arguing that it holds up on a theoretical level.***
Originally Posted by DoubleA
Rogue, I have some issues with the whole thought process that is going on here.
First of all, Kelly's formula (I believe) is based on a game with known probabilities and odds. Take a ten-sided die (yes I used to play AD&D and I'm a loser) with six X's and four O's. You can bet on X or O and get even money on your wager. Kelly wanted to know, if we had $1000, then how much should we bet on X to maximize our bankroll's growth rate. The environment that Kelly tested his Criterion in is controlled. Poker is not that type of game.


A similar environment is present, assuming the inputs you use to model your return distribution are accurate. In the example you provide, our bankroll is $1000 and each bet has a Bernoulli-style return: bet size × [0.6 × (return on X) + 0.4 × (return on O)]. In poker, on any given hand, we can have a $1000 bankroll and a return that is approximately normally distributed, characterized by a mean (win rate) and a standard deviation that depend on how much we buy in for.
The major difference is that if we restrict ourselves to full buy-ins, the increments by which we can reduce our bet size become the differences between a full buy-in at the various available limits. If we knew our win rate and standard deviation for every single buy-in amount, we could reduce our buy-in all the way down to the minimum buy-in at the lowest limit, but this is impractical and it conflicts with the goal of growing our poker skills.
The reason this shouldn't be used for poker is that the accuracy of the model depends heavily on the accuracy of your inputs, and your win rate matters much more than your standard deviation. Your input estimates don't converge until you have a huge sample size, and by then any winning player should have moved up. The math itself is sound.
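To make the two settings concrete, here's a rough sketch in Python. The die numbers are DoubleA's; the poker figures (5 bb/100 win rate, 80 bb/100 standard deviation) are hypothetical numbers I'm using purely for illustration:

```python
def kelly_binary(p, b=1.0):
    """Kelly fraction for a binary bet.
    p: win probability, b: net odds received on a win (1.0 = even money)."""
    q = 1.0 - p
    return (b * p - q) / b

# DoubleA's die: six X's out of ten faces, even money.
f_die = kelly_binary(0.6)  # 0.6 - 0.4 = 0.2 -> bet 20% of the bankroll

def kelly_normal(winrate, stddev):
    """Continuous approximation for normally distributed returns:
    the growth-optimal fraction of risk is roughly mean / variance."""
    return winrate / stddev ** 2

# Hypothetical per-100-hand figures in big blinds.
f_a = kelly_normal(5, 80)
# Mis-estimating the win rate by half halves the recommended risk,
# which is why the win-rate input dominates the standard deviation.
f_b = kelly_normal(2.5, 80)
```

The point of the second pair of calls is the sensitivity argument above: a small error in the win rate moves the answer proportionally, while an error in standard deviation has to be much larger to matter as much.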
Originally Posted by DoubleA
Secondly, you can't reverse the Kelly to find an optimal bankroll. Or, I should say that you don't need to. The optimal bankroll is infinite. But, it doesn't work anyway because the Kelly assumes that you'll be able to continue making bets at a fraction of a penny. You can't. If you use the Kelly to determine your bankroll size, and lose your first bet, then you'll have to move down in stakes. That move will only require half of your current bankroll (or less {$25NLHE to $10NLHE}) and you won't be playing "optimally" any more.


We are not finding an optimal bankroll. We are finding the minimum amount necessary to play a given limit with minimal risk of ruin. Thus, when deciding between two limits, we play the higher one we can safely play according to the Kelly Criterion, assuming our risk/reward is greater at the higher limit. As stated above, we cannot reduce the amount at risk continuously; instead, we reduce it in increments. Obviously our risk of ruin is not 0, but Kelly was originally applying this to money, where the smallest reduction was $0.01 (more practically $1), and casinos also impose a minimum wager, so this isn't that far off from what Kelly was trying to do. This is why our win rate and standard deviation at the lowest possible limit need to be estimated extremely conservatively, to minimize our risk of ruin.
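As a sketch of what "minimum amount necessary" means here: the standard diffusion approximation gives risk of ruin R = exp(-2 · winrate · B / σ²), which can be inverted to solve for the bankroll B. The win rate and standard deviation below are hypothetical illustration numbers, not anyone's real stats:

```python
import math

def min_bankroll(winrate, stddev, ruin_prob):
    """Minimum bankroll for a target risk of ruin, using the standard
    diffusion approximation R = exp(-2 * winrate * B / stddev**2).
    winrate and stddev must share units (e.g. bb per 100 hands);
    the result is in those same units (big blinds here)."""
    return -(stddev ** 2) * math.log(ruin_prob) / (2 * winrate)

# Hypothetical: 5 bb/100 win rate, 80 bb/100 standard deviation,
# and we accept at most a 1% chance of ever going broke.
b = min_bankroll(5, 80, 0.01)           # bankroll in big blinds
# Halving the win-rate estimate doubles the required bankroll --
# the quantitative reason to be conservative at the lowest limit.
b_conservative = min_bankroll(2.5, 80, 0.01)
```

This also shows why the comparison between two limits works: plug each limit's (conservative) win rate and standard deviation in, and play the higher limit only if your bankroll clears its requirement.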
Originally Posted by DoubleA
Third, you're treating your buy-ins LIKE bets. Essentially betting on yourself to have a "winning" session. I won't even start...


Umm... maybe I'm not getting what you're saying, but isn't this what all risk-averse poker players tend to do? I always expect to make money whenever I play poker. A fundamental assumption of the Kelly Criterion is that on any given hand (as opposed to a session, which is a series of bets), our expectation is positive. If it isn't, the Kelly Criterion tells us not to bet (or buy in).
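A quick sketch of that last point: clamping the Kelly fraction at zero is exactly the "don't bet" recommendation (even-money odds assumed here for simplicity):

```python
def kelly_fraction(p, b=1.0):
    """Kelly fraction for a bet at net odds b, clamped at zero:
    when the edge (b*p - q) is non-positive, Kelly says don't bet."""
    q = 1.0 - p
    return max(0.0, (b * p - q) / b)

kelly_fraction(0.55)  # positive edge -> risk 10% of the bankroll
kelly_fraction(0.45)  # negative expectation -> 0.0, sit out
```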
Originally Posted by DoubleA
Fourth, what advice would you give your best friend if he told you that he was moving to Las Vegas to become a professional poker player with $700?


I like this question. It's a good way of thinking objectively about the use of this methodology, because there are other factors (such as playing scared money and tilting) that don't enter the equation. There is an argument that the Central Limit Theorem washes these effects away, but I disagree, and I'll leave it at that.