Poker Pro vs. ChatGPT-5: Who Wins?
People are using ChatGPT-5 to write emails, meal plans, and even dating profiles. But what happens when general-purpose AI plays 60 hands of poker against a pro?
In our experiment, a professional player handed ChatGPT-5, a large language model (LLM) not trained for poker, a set of Pot Limit Omaha (PLO) hands. Both explained their decisions for each spot.
Poker-specific “super AIs” like Carnegie Mellon’s Libratus and Pluribus have already beaten pros, but that wasn’t the test here. We wanted to see if the AI people actually use could handle a game built on nuance, deception, and adaptability.
Let’s take a closer look:
About the AI
- 63% Agreement: Matched the pro’s decisions in 38 of 60 hands. That’s far better than random guessing (25%, or about 1 in 4, since there are four basic actions) and roughly in line with what you’d expect from a beginner who knows the basics. A quick arithmetic check follows this list.
- Bet Sizing Gaps: In 12 hands where both chose to bet, ChatGPT-5 disagreed on size 67% of the time, often defaulting to “big bets with strong hands, small bets with weak ones,” a predictable amateur pattern.
- 42% Misreads: In more than two out of every five hands, ChatGPT-5 misread details about the cards or the board. Even casual players rarely make this many interpretation mistakes.
- Strategic Thinking: Showed solid pre-flop range analysis and often thought ahead to later streets. This kind of forward-looking approach is useful for beginners learning how decisions connect across the hand.
- Limits in Complexity: In tougher spots it struggled, under-bluffed, and relied on patterns that skilled players could read and exploit. Its play was above chance and showed flashes of structured logic, but still far from professional-level play.
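For readers who want to sanity-check the headline figure, here is a minimal Python sketch using only the totals reported above: the agreement rate from 38 matches in 60 hands, and a rough binomial estimate of how unlikely that would be from pure one-in-four guessing. The binomial tail is an approximation; real poker decisions aren’t independent coin flips.

```python
from math import comb

HANDS = 60      # total hands in the experiment
MATCHES = 38    # hands where ChatGPT-5 matched the pro's action
CHANCE = 0.25   # baseline: four basic actions (bet, check, call, fold)

agreement = MATCHES / HANDS
print(f"Agreement rate: {agreement:.1%}")  # ~63.3%

# Probability of matching 38 or more of 60 hands by pure 1-in-4 guessing
p_tail = sum(
    comb(HANDS, k) * CHANCE**k * (1 - CHANCE) ** (HANDS - k)
    for k in range(MATCHES, HANDS + 1)
)
print(f"P(38+ matches by chance): {p_tail:.2e}")
```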
About the Pro
- Bluff Mastery: Spotted bluff and semi-bluff opportunities that ChatGPT consistently missed.
- Smart Sizing: Adjusted bets based on ranges and board textures, keeping play balanced and unpredictable.
- Adaptive Play: Mixed checks, calls, and raises across both strong and weak holdings, making strategy harder to exploit.
- Exploited Predictability: Capitalized on ChatGPT’s rigid patterns, pressing edges where the AI’s play was too cautious or obvious.
- Acknowledged AI’s Edge in Theory: Noted that ChatGPT often broke down ranges and future streets with impressive clarity, at times even sharper in articulation than a human mid-hand.
How the AI vs Poker Pro Challenge Worked
This wasn’t a late-night poker binge. It was a structured head-to-head test:
- 60 Hands of PLO: Both the pro and ChatGPT-5 were given the same hand scenarios.
- Same Info: Each saw the hand history, flop, turn, and river action.
- Pro as Dealer: The poker pro wrote the prompts for ChatGPT-5 and explained his own play with reasoning.
- AI in Real Time: No solvers, no databases. Just ChatGPT-5 answering with basic logic.
- Compared on Three Things:
- Bet, check, call, or fold decisions
- Bet sizing
- The reasoning behind each move
Bottom line at the table: We compared their answers to see where the pro and AI agreed, where they clashed, and what that meant.
Pro vs AI: Style and Strengths
The poker pro and ChatGPT-5 didn’t just make different moves; they played the same spots with very different philosophies.
The Pro’s Style: Exploitative and Flexible
- Mixed bluffs and strong hands to stay balanced
- Used blockers to pick sharp bluffing spots
- Controlled pot sizes depending on risk and opponent type
- Kept play unpredictable so opponents couldn’t guess what was coming
ChatGPT-5’s Style: Straightforward but Structured
- Focused on range theory and often spotted which flops favored which player
- Looked ahead to later streets, showing useful forward planning
- Sometimes suggested different approaches for player types, hinting at exploitative thinking
- Could serve as a strong tutor for beginners, helping explain ranges and theory
As the pro put it: “One thing it did very well was it thought about the overall strategy of the hand … it thought ahead and analyzed how our decision on the flop will affect the decisions we may face at later junctures...[and] it explored exploitative options vs specific player types, which is very useful.”
Bottom line at the table: ChatGPT could explain poker like a coach but lacked the unpredictability and balance of an experienced human.
Where ChatGPT-5 Fell Short
While it showed flashes of structured thinking, ChatGPT-5 struggled when hands became complex.
“It semi-regularly misread our hand… we disagreed frequently on what sizing we should use… I think it was severely under-bluffing… Across the sixty hands, I don’t think there was a single one where ChatGPT led out with a bluff or check-raised as a bluff… It employed a basic strategy … which can become very exploitable.” - Poker Pro
Key Weaknesses:
- Frequent Misreads: Misinterpreted cards or board texture, leading to faulty decisions.
- Predictable Patterns: Defaulted to value and protection bets, making its play easy to read.
- No Bluff Game: Rarely bluffed, missing chances to balance its range.
- Sizing Issues: Chose bet sizes based on exact hand strength rather than ranges.
- Not Real-Time Ready: A rigid style that would struggle in live or high-level play.
Bottom line at the table: ChatGPT’s theory talk was impressive, but execution flaws left it far too easy for the pro to exploit.
Spotlight: 3 Hands that Show the Difference
These three hands illustrate how the poker pro and ChatGPT-5 approached the same spots in very different ways.
Hand 1: Nut Flush + Middle Set
Spot: UTG raises, CO calls, BB calls. Flop J♣T♣7♣; Hero holds A♣K♣T❤️T♠.
- Pro’s Play: Checks back. Wanted to keep monsters in his check-back range, trap opponents, and allow them to bluff later.
- AI’s Play: Large bet (75–100% pot). Saw protection and value, thinking it needed to push the advantage right away.
- Takeaway: Pro induced future mistakes through deception. AI’s line made its strength predictable.
Hand 2: Nut Straight on a Volatile Board
Spot: Button raises A❤️K♠J❤️8♣, SB 3-bets, Hero calls. Flop Q❤️J♠T♣.
- Pro’s Play: Flat calls the c-bet, keeping weaker hands and bluffs in the SB’s range rather than forcing the SB to fold everything but strong hands.
- AI’s Play: Pots it (raise to stack off immediately). Saw a dangerous board and wanted to deny equity fast.
- Takeaway: Pro maximized value with patience. AI locked equity early, missing adaptability.
Hand 3: Bluff Candidate with Blockers
Spot: Button raises A❤️T♠T♦️3♦️, BB 3-bets, Hero calls. Flop J❤️Q♣8❤️.
- Pro’s Play: Bets ½ pot as a bluff. Uses blockers (the A❤️ and his two tens) to remove the opponent’s best hands and balance his range with bluffs.
- AI’s Play: Checks back. Labeled bluff as “too thin” and preferred to avoid risk.
- Takeaway: Pro added balance through bluffs. AI avoided risk, lowering bluff frequency and making its strategy face-up.
Bottom line at the table: These hands showed the pro balancing deception with strategy, while ChatGPT stuck to rigid, exploitable scripts.
Poker Pro Verdict
“As a professional PLO player … [I think] ChatGPT could be a useful tool for some very basic situations… [and] would be best suited for beginner players who are learning the game and want to test their theory. However, it could not be used in real-time gameplay, and it frequently misread both hands and spots… Its approach was quite predictable, with betting patterns that good poker players could quickly exploit.” — Poker Pro
What This Means for AI in Poker
Specialized poker AIs like Libratus and Pluribus have already shown they can beat the best, but this test asked whether the everyday ChatGPT-5 could handle a game built on deception.
The answer? It showed promise, but with clear limitations.
ChatGPT-5 explained ranges well and thought ahead, making it a useful tool for beginners. But in complex spots it leaned on rigid patterns, under-bluffed, and misread details. For now, high-level poker remains safe from general AI.
The future of poker with ChatGPT-5 isn’t about whether everyday systems can play perfectly, but about how players incorporate AI insights into flexible, human-centered strategies.
Methodology
This was not a poker solver test. It was an experiment using ChatGPT-5, a large language model (LLM), to see how it stacked up against a professional PLO player.
Setup
- Game Format: Pot Limit Omaha, 100 big blinds deep, single-raised and 3-bet pots
- Number of Hands: 60 scenarios tested
- Data Source: Hands created and described by the pro to mimic real, challenging spots
- AI Input: ChatGPT-5 received each hand as a natural-language prompt (no solver outputs or poker-specific training); a hypothetical example of the prompt format follows this list
- Comparison Basis: Pro’s decision recorded alongside ChatGPT’s suggested decision
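The article doesn’t reproduce the pro’s exact prompts, so the template and field names below are hypothetical. The filled-in values come from Hand 2 in the spotlight section; everything else is an illustrative assumption, not the pro’s actual wording.

```python
# Hypothetical prompt template (illustrative only; not the pro's actual text).
PROMPT_TEMPLATE = """You are analyzing a Pot Limit Omaha hand, 100 big blinds deep.

Preflop action: {preflop_action}
Hero's four cards: {hero_hand}
Board: {board}
Action this street: {street_action}

Recommend one action (bet, check, call, or fold), a bet size as a fraction of
the pot if betting, and a short explanation of your reasoning."""

# Filled with the facts given for Hand 2 (nut straight on a volatile board).
example = PROMPT_TEMPLATE.format(
    preflop_action="Button (Hero) raises, SB 3-bets, Button calls",
    hero_hand="Ah Ks Jh 8c",
    board="Qh Js Tc",
    street_action="SB continuation-bets into Hero",
)
print(example)
```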
What We Measured
- Decision agreement
- Bet sizing differences
- Misreads or errors in interpreting the hand or board
- Performance in simple vs. complex spots (a scoring sketch follows this list)
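To make those four measures concrete, here is a minimal Python sketch of how per-hand records could be rolled up into summary numbers. The record fields, the 10%-of-pot sizing threshold, and the sample rows are illustrative assumptions, not the study’s raw data or scoring rules.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class HandRecord:
    # One row per scenario. Field names are hypothetical, not the study's raw data.
    pro_action: str          # "bet", "check", "call", or "fold"
    ai_action: str
    pro_size: float | None   # bet size as a fraction of the pot, if betting
    ai_size: float | None
    ai_misread: bool         # did ChatGPT-5 misread the hand or board?
    complex_spot: bool       # pro's judgment call: simple vs. complex spot

def summarize(records: list[HandRecord]) -> dict[str, float]:
    n = len(records)
    agree = sum(r.pro_action == r.ai_action for r in records)
    both_bet = [r for r in records if r.pro_size is not None and r.ai_size is not None]
    # "Disagreed on size" threshold of 10% of pot is an assumption, not the study's rule.
    size_gap = sum(abs(r.pro_size - r.ai_size) > 0.10 for r in both_bet)
    complex_rows = [r for r in records if r.complex_spot]
    return {
        "decision_agreement": agree / n,
        "sizing_disagreement": size_gap / len(both_bet) if both_bet else 0.0,
        "misread_rate": sum(r.ai_misread for r in records) / n,
        "complex_spot_agreement": (
            sum(r.pro_action == r.ai_action for r in complex_rows) / len(complex_rows)
            if complex_rows else 0.0
        ),
    }

# Two made-up rows, just to show the roll-up in action.
sample = [
    HandRecord("check", "bet", None, 0.75, ai_misread=False, complex_spot=True),
    HandRecord("bet", "bet", 0.50, 1.00, ai_misread=True, complex_spot=False),
]
print(summarize(sample))
```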
Limitations
- Not real-time gameplay (AI could not adapt across multiple hands)
- Not against live opponents (no population reads or tendencies)
- Not poker solvers (general AI only)
- Small sample size (60 hands) highlights trends but is not definitive
- PLO chosen as a stress test; results may differ in Hold’em or tournaments
So Who Really Won: the Pro or the Bot?
ChatGPT-5 proved it can talk poker like a coach, but when the chips were down, the pro’s creativity and adaptability kept him firmly in charge. Did our experiment show the limits of AI at the table, or just hint at how players might use it as a training tool? Join the conversation on X and YouTube and share your take with #AIVsProPoker.