fascinating website bug: the pop-up to select a color in the last couple of guesses is displayed below the "6^4 = 1296" formula in the text below - so the pop-up closes when I hover over the numbers
The Kodak Research Labs (like Bell Labs) let their researchers play. In the 1960s my father (who later devised the Bayer filter for digital cameras) coded this algorithm for "Jotto", the 5-letter-word version of Mastermind.
Computers were so slow that one couldn't consider every word in the dictionary as a potential guess. He decided empirically on a sample size that played well enough.
I became a mathematician. From this childhood exposure, entropy was the first mathematical "concept" beyond arithmetic that I understood.
This comment goes in the highlights list: https://news.ycombinator.com/highlights!
(I mention this so people will know the list exists, and hopefully email us more nominations when they see an unusually great and interesting comment, like this one.)
> I became a mathematician. From this childhood exposure, entropy was the first mathematical "concept" beyond arithmetic that I understood.
Very cool.
What is it like being cited by Satoshi Nakamoto?
I too became a fan of this game from an early age, note the username
Oh, I loved working on this problem years back.
My strategy was to simulate my possible next guesses against all possible codes, then pick the option that had the highest number of possible outcomes (sometimes this strategy is called MaxParts; rough sketch at the end of this comment). It looks like the author's approach works for similar underlying reasons.
Besides this, I applied some optimizations for the starting move, and some further optimizations on considering 'irrational' guesses -- e.g. choosing a code that had already been eliminated as a possibility, because it returned more information (this was rare, but possible).
I ran my code against all possible games of 4,6 mastermind (I win in an average of 4.2778 guesses), and found that some starting guesses were more optimal than others! The pattern "AABC" (e.g. red-red-yellow-green) was the best performer. Perhaps this is a way that the author can improve their algorithm just a tiny bit.
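For anyone curious, here is a rough Python sketch of the MaxParts idea for standard 4-peg, 6-color Mastermind (not my original code, and the helper names are made up):

    from itertools import product

    COLORS = range(6)
    ALL_CODES = list(product(COLORS, repeat=4))

    def feedback(guess, code):
        # Black pegs: right color in the right position.
        black = sum(g == c for g, c in zip(guess, code))
        # Whites: right color anywhere, minus the exact positional matches.
        common = sum(min(guess.count(c), code.count(c)) for c in COLORS)
        return black, common - black

    def max_parts_guess(candidates, allowed_guesses=ALL_CODES):
        # Pick the guess whose feedback splits the remaining candidates
        # into the most distinct outcome classes ("parts"). Guesses already
        # ruled out are still allowed, which covers the 'irrational' case.
        def parts(guess):
            return len({feedback(guess, code) for code in candidates})
        return max(allowed_guesses, key=parts)

You start with candidates = ALL_CODES and, after each response, keep only the codes whose feedback against your guess matches what the Code Master returned.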
That reminds me, I wrote it for iPhone when it was released in 2007 - back then there was no App Store, so apps could only be written for browsers. I think I implemented it whenever I learned a new language. By the way, I noticed that Claude, ChatGPT, and DeepSeek - none of those LLMs can solve Mastermind. They get lost after a few iterations, no matter how good the prompt instructions are. Source: https://github.com/muquit/iphonemm - there is a link to play on that page.
I grew up with this game. It hurt my brain but in a good way. I think a lot of my problem solving and interest in coding stemmed from it.
Had the same experience. I loved playing with my mom, and I remember spending a lot of time thinking about how to optimize my guesses. It fit in with all my other hobbies around logic.
I love stuff like this. It always tickles my brain to try and find the optimal way (or, as optimal a way as I can) of solving puzzles. Sometimes it's easy, sometimes it's really hard. Oftentimes you can get something decent with not too much effort, and the dopamine hit is great when you see it working
I play Mastermind with my kids. It hasn’t quite clicked with them yet, but I’ve shown them my strategy of eliminating colors by making an entire row a single color, one color after another. Either that color isn’t present, or you now know exactly how many pegs of it are in the code. Then you just need to figure out the ordering. Again, you can use a known “bad” color to avoid the ambiguity of multiple white pegs.
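Roughly, the counting phase looks like this (a toy Python sketch with made-up names, not how we actually play):

    def count_colors(secret, colors):
        # Guess an entire row of one color at a time; the black-peg count
        # tells you exactly how many pegs of that color the secret uses.
        counts = {}
        for color in colors:
            guess = [color] * len(secret)
            counts[color] = sum(g == s for g, s in zip(guess, secret))
        return counts

    # Example: secret red-red-yellow-green
    print(count_colors(["red", "red", "yellow", "green"],
                       ["red", "orange", "yellow", "green", "blue", "white"]))
    # {'red': 2, 'orange': 0, 'yellow': 1, 'green': 1, 'blue': 0, 'white': 0}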
Similarly, I explained to someone how to systematically solve it like this, but then when I suggested they now apply it in a new game, I chose all the same color for the code and watched them second guess themselves until all guesses were exhausted. They just couldn’t accept the possibility that the code could simply be all the same color. Was a good opportunity to quote Sherlock Holmes.
Really great article. Internalizing that a good, realizable Mastermind strategy is to pick guesses that eliminate roughly the same number of possible solutions no matter which response comes back is a great way to absorb the value of information-theory thinking. Getting hung up on whether it is exactly "plan k steps forward" optimal (i.e. on the fine details of the remaining possible cases) can be counter-productive.
Really liked it. I'm curious about games where the guesses are not always a valid solution but can contain special operators. For instance, add a state that is true for two different values in Secret Code: "1or2". The code itself won't contain it, but it can be useful for extracting more information at each step.
This was my favorite game to play with my parents when I was growing up.
Cool. Tiny tip, “Worlde” is obviously a typo for the popular puzzle.
I've never been much for word games but have been a Wordle regular. But a friend of mine made the very astute observation that Wordle is more like Mastermind (which I played and even wrote a Windows version of as an exercise once upon a time) than any traditional word game. OK. You need a solid English 5 letter vocabulary. But it's really pretty different from crosswords and a lot of other word games including those on the NYT site.
I think you can distinguish between word games where it helps to know what the words mean (like crosswords) and word games where the meaning of the words is irrelevant, only that they are or are not words (like Wordle).
Weirdly, Scrabble resembles crosswords graphically but fits in the second group.
Sudoku is kind of the same, in that the numerical values of the numbers don't come into play at all. You can replace the digits with fruits and the game plays the same.
I enjoyed this comment. Fruit emojis don’t sum well, so by your classification the similar Kakuro or KUBOK16 separate out from Sudoku, and with that, I have to drop a plug for my https://www.kakurokokoro.com/ in which the “clues” are the column and row sums.
Except it’s harder to iterate over the numbers when they’re fruits. (Unless you have a list of nine fruits in your head you can rattle off.)
Although I must say I do like Connections where even subtle meaning very much factors in.
My grade school had a game club at lunch hour. Mr. Newton took time out of his day, every day, for us. I still think about him and the many games he brought from his personal collection. Not sure what the point of this message is, other than that he will be remembered until the last time I think about Mastermind, Pacific Typhoon, Star Fleet Battles, Axis and Allies, and many others.
I had a friend in high school who was just like that; any day with no class was an excuse to bring some board game and spend hours playing in the empty classroom. XCOM, Catan, Zombicide, Ticket to Ride, and so on.
I don't know if you need "information theory" to do something that can be solved with grep.
Sorry, could you elaborate more about using grep to solve this? I can't imagine how
Each constraint the game gives you can be expressed as a grep pattern, so you can filter a master list of all the answers by piping the list through one grep per constraint.
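As a rough Python stand-in for that pipeline (the real thing is just chained grep / grep -v calls; the constraints and "words.txt" here are made up):

    import re

    # Made-up example constraints for a Wordle-style game:
    #   green:  'a' in the 2nd position
    #   yellow: 'r' somewhere, but not in the 3rd position
    #   gray:   no 'e' anywhere
    constraints = [
        lambda w: re.search(r"^.a", w),
        lambda w: "r" in w and w[2] != "r",
        lambda w: "e" not in w,
    ]

    with open("words.txt") as f:          # master list, one word per line
        words = [line.strip() for line in f]

    for constraint in constraints:        # "pipe" the list through each filter
        words = [w for w in words if constraint(w)]

    print(words)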
https://github.com/codebox/wordle.sh
I don't understand the point of this article, as it doesn't define an objective function. It just states a strategy that is only practically implementable for small board sizes (given the cited NP-completeness result) and then calls it good sans theorem or even conjecture.
I believe it is provably not the optimal algorithm for solving the problem under the minimax objective, and I have a hunch that (due to rounding issues) it is also not optimal for minimizing the expected number of guesses under even a uniform prior. So what does this actually accomplish?
I agree with you. I agree with OP in the following sentences:
>We have now landed on our final strategy: start by figuring out the number of possible secret codes n. For each guess, calculate the number n_i' of codes that will still be viable if the Code Master gives response i in return. Do this for all possible responses.
But then I don't agree with:
>Finally, calculate the entropy of each guess; pick the one with the highest.
Why wouldn't we just pick argmin_{guess} sum_{i in possible responses} Pr[i] * n'_i = argmin_{guess} sum_{i} (n'_i / n) * n'_i, which (since n is the same for every guess) is the same as minimizing sum_{i} n'_i^2? This is the guess that minimizes the expected size of the resulting solution space.
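Concretely, the two scores for a guess look like this (a quick Python sketch with a uniform prior over candidates, standard 4-peg, 6-color Mastermind; not the article's code):

    import math
    from collections import Counter
    from itertools import product

    COLORS = range(6)

    def feedback(guess, code):
        black = sum(g == c for g, c in zip(guess, code))
        common = sum(min(guess.count(c), code.count(c)) for c in COLORS)
        return black, common - black

    def scores(guess, candidates):
        # Bucket the candidates by the response they would produce.
        buckets = Counter(feedback(guess, code) for code in candidates)
        n = len(candidates)
        # Article's objective: entropy of the response distribution (maximize).
        entropy = -sum((k / n) * math.log2(k / n) for k in buckets.values())
        # Proposed objective: expected remaining candidates = sum(n'_i^2) / n
        # (minimize; the 1/n factor is the same for every guess).
        expected_size = sum(k * k for k in buckets.values()) / n
        return entropy, expected_size

    candidates = list(product(COLORS, repeat=4))
    print(scores((0, 0, 1, 2), candidates))   # score an "AABC"-style opener

Maximizing the first and minimizing the second usually point at the same guess, but they are not guaranteed to, which is the disagreement here.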