Ring Situations: Odds to call?

mischman

Legend
Joined
Dec 30, 2005
Total posts
2,959
So I started a thread asking: if, on the first hand of the WSOP, someone pushed all in blind, would you call with AK? ***Please don't discuss anything regarding tournaments or that here***

It got a little derailed into talking about cash games, and I want to discuss another situation in more detail.

Someone said that in cash games they would call with even the smallest odds advantage on their hand; if they were 51% to win, they would call.

The example was: you're at $1/$2 with $400, the SB shoved (everyone else folded) and showed you (the BB) KJ before you made your call, and you had A2. The same question was also posed with a $50K stack at $25/$50.


Is this correct? Should you call with ANY edge, regardless of stack size? Thoughts? Anything else?
 
tenbob

Legend
Joined
May 16, 2005
Total posts
11,221
Awards
1
I call with any edge; the sizes of the stacks shouldn't be important *unless* you're playing above your BR, in which case you shouldn't be playing the bigger game at all and should probably fold.
 
tiltboy

Guest
Joined
Mar 29, 2007
Total posts
120
Agree with above; the pot sizes shouldn't really matter unless you are not playing within a financially comfortable zone.

If you lose, reload and carry on.
 
mischman

Legend
Joined
Dec 30, 2005
Total posts
2,959
Agree with above; the pot sizes shouldn't really matter unless you are not playing within a financially comfortable zone.

If you lose, reload and carry on.
You can't just reload a $50,000 stack at $25/$50.
 
4Aces

is watching you
Joined
Mar 5, 2007
Total posts
1,901
No matter what the stakes, I think I would call if I was 51% to win. I mean, I know it's only a small advantage, but how can you fold if you actually have some sort of advantage?
 
dj11

Legend
Joined
Oct 9, 2006
Total posts
23,189
Awards
9
Herein lies the rub. The skills needed to determine whether or not you have an edge probably come into play more in ring games than in tourneys. In tourneys one always has to keep survival in mind, and that often means laying down a big hand because the ultimate odds are more important than the immediate odds.

One might be big stacked in a tourney and have 60/40 positive odds on a large bet where, in context, losing this hand means becoming the short stack.

When I take those odds and call, I am short stacked 40% of the time. When I lay down that hand, I am still big stacked 90% of the time.
 
aliengenius

Cardschat Elite
Joined
Jul 7, 2006
Total posts
4,596
I said this in the other thread too, but in a ring game at least some thought has to be given to reducing your variance.

The closer an outcome is to 50/50, the greater the variance. EV-wise it does not matter whether you fold or call when the race is exactly 50/50; you are just gambling.

Now that is NOT the situation given here, but I think the point is to get at what % edge you need. Theoretically it is anything positive (even less than 1%). In reality, it depends on a lot of individual factors like bankroll, risk tolerance, etc.
 
F Paulsson

euro love
Joined
Aug 24, 2005
Total posts
5,799
Edit: I hate this formatting. Trying to make it better by using bold.

I made a small computer program to run simulations for me, testing overall profit over a certain number of hands under different bankroll-management scenarios. The results were not obvious to me.

The set-up is this:

1. Both players start with a "bankroll" of $100.
2. The games that the players play are "perfectly" suited to the size of their bankroll, i.e. they play every hand in a game where the ante is exactly 0.1% of their bankroll (1/1,000).
3. Every time our hero (either player) finds himself with a hand that's better than a certain percentage to win, he pushes his entire stack in. His stack is always 5% of his bankroll.
4. Player 1 ("Risky") takes any bet where he is better than 50% to win.
5. Player 2 ("Safe") takes only bets where he is better than 55% to win.

Despite running this simulation over a decent number of iterations, the outcome is that the safer player is only slightly worse off than the risky player in total profits.

This is the output from 10 runs (of 100k hands each):

Risky     Safe      Diff in %
313149    309163    1.29%
309757    306176    1.17%
314013    309279    1.53%
314438    309671    1.54%
312163    306739    1.77%
316233    312142    1.31%
313808    307914    1.91%
313689    310451    1.04%
314725    310704    1.29%
315317    313456    0.59%

Average profit for Risky: 313729.2
Average profit for Safe: 309569.5
Average difference in percent: 1.35%


In other words: despite taking considerably less risk (demanding a 55% edge instead of taking any even-money proposition), the difference between the two players is only about 1.35% in the end. How much the psychological pressure would differ between the two players, I don't know. Whether or not my simulation is good for anything at all (or even properly set up), I don't really know either; I was curious at the time.

For those curious as to how I coded it, see below.

#define MAX_NUM 100
#define ITERATIONS 100
#define BANKROLL 100
#define BETSIZE 20
#define ANTE 1000

void Run()
{
    double fRisky = BANKROLL, fSafe = BANKROLL;
    int nHeroRoll, nVillainRoll;
    int nRiskyTotal = 0, nSafeTotal = 0;
    srand(GetTickCount());

    for(int a = 0; a < 1000; a++)
    {
        fRisky = BANKROLL;
        fSafe = BANKROLL;

        for(int i = 0; i < ITERATIONS; i++)
        {
            // Both players pay the ante (1/1,000th of their roll) every hand.
            fRisky -= fRisky/ANTE;
            fSafe -= fSafe/ANTE;

            nHeroRoll = rand()%MAX_NUM;
            nVillainRoll = rand()%MAX_NUM;

            // Risky first: takes any bet at 50% or better.
            if(nHeroRoll >= (int)((MAX_NUM-1) * 0.5))
            {
                if(nHeroRoll > nVillainRoll)
                    fRisky += fRisky / BETSIZE;
                else if(nHeroRoll < nVillainRoll)
                    fRisky -= fRisky / BETSIZE;
                // In the odd case that they're equal, nothing happens.
            }

            // Safe now: demands better than 55%.
            if(nHeroRoll > (int)((MAX_NUM-1) * 0.55))
            {
                if(nHeroRoll > nVillainRoll)
                    fSafe += fSafe / BETSIZE;
                else if(nHeroRoll < nVillainRoll)
                    fSafe -= fSafe / BETSIZE;
                // In the odd case that they're equal, nothing happens.
            }
        }

        nRiskyTotal += (int)fRisky;
        nSafeTotal += (int)fSafe;
    }

    char szBuf[1024];
    sprintf(szBuf, "Risky: %d, Safe: %d", nRiskyTotal, nSafeTotal);
    AfxMessageBox(szBuf, MB_OK | MB_TOPMOST, 0);
}
 
aliengenius

Cardschat Elite
Joined
Jul 7, 2006
Total posts
4,596
But it should approach 5% in the long run, right? I confess I am unqualified to do any actual statistical analysis, or to evaluate the protocol, but it seems like we should get closer to that actual percentage over one million hands. Is 1.35% within one standard deviation over that many trials?
 
F Paulsson

euro love
Joined
Aug 24, 2005
Total posts
5,799
No, it actually does not approach 5%. The reason - I suppose there could be more than one, but this is the one I thought of (and what made me run the simulations to begin with) - is that someone with a lot more fluctuation will be forced to play lower limits at times, and at those times the "safer" player will make more money per hand than the fluctuating player.

... If that makes sense.


The extreme example of this is running the same simulation, but this time the two players wager their ENTIRE bankroll every hand, and we make the safer player fold anything but a 100% edge. In that scenario, we'd virtually always see the safer player come out on top, despite his being the ultimate nit, because when your bankroll is gone you can't play anymore. In the slightly more realistic simulation, we're looking at someone who can still play, but needs to step down in limits.
 
Munchrs

Legend
Joined
May 25, 2007
Total posts
1,935
Shouldn't we take dead money that's already in the pot into account?
 
aliengenius

Cardschat Elite
Joined
Jul 7, 2006
Total posts
4,596
No, it actually does not approach 5%. The reason - I suppose there could be more than one, but this is the one I thought of (and what made me run the simulations to begin with) - is that someone with a lot more fluctuation will be forced to play lower limits at times, and at those times the "safer" player will make more money per hand than the fluctuating player.

... If that makes sense.


The extreme example of this is running the same simulation, but this time the two players wager their ENTIRE bankroll every hand, and we make the safer player fold anything but a 100% edge. In that scenario, we'd virtually always see the safer player come out on top, despite his being the ultimate nit, because when your bankroll is gone you can't play anymore. In the slightly more realistic simulation, we're looking at someone who can still play, but needs to step down in limits.

I get it (!). This is a central theme in TPfAP: you sometimes sacrifice +EV in order to get a better survival percentage, since when you go broke, you are done/gone.

But even in cash games you sometimes do this as well!

Why? Reduction in variance (see my post above). A higher-variance player has a higher chance of losing enough to necessitate a drop in stakes, as well as a higher Risk of Ruin overall.

Sklansky puts it nicely when he says you sometimes give up +EV now (the specific situation in front of you) if doing so will allow you even greater +EV in the future: either because you know a better opportunity will arise (and you can't risk losing money before it does), or simply because you extend the length of time over which you can apply smaller +EV situations (which cumulatively add up to more than your "now" opportunity).
 
shinedown.45

Legend
Joined
Aug 18, 2006
Total posts
5,389
[F Paulsson's full simulation post and code, quoted above]
Sorry FP, couldn't help myself after this^^^^post
 

Attachments

  • cartoon34.jpg (66.6 KB)
hott_estelle

Guest
Joined
Jan 7, 2007
Total posts
1,759
You already know my answer, and the explanations behind it, since this thread was made because of me and my answer.

Call with any edge.

I don't want to type out the why all over again, but anyone besides misch (I think he knows why I give this answer) who wants the explanation can PM me or check the other thread, where most of my explanations are:
[old link~tb]
 
Last edited by a moderator: