"This problem assumes that the reader is not familiar with Romanian geography. We apologize to those who are unable to take advantage of this pedagogical device."
(Or words to that effect, it's been 20+ years since I read it.)
Iirc, the best approaches for this nowadays are machine learning based. Otherwise, you probably want to do an exploration step first, and bake in the environment.
GOFAI techniques are:
A) Usually unable to incorporate a posteriori knowledge of the environments they're working on.
And
B) Often entirely infeasible to adapt to take exploration costs into account.
I've heard of some GOFAI techniques for optimal unbiased pathfinding in unknown environments using optimal transport theory and the like. But unbiased methods are obviously going to lose out in real environments.
The star appears to have been cut off from the link, leaving an article about the corresponding radio source, Sagittarius A. This (official Wikipedia) short link leads to the desired article: https://w.wiki/5A7e
HN escaped the * when I posted. I tried to escape the escape, but it kept adding escapes to my escapes. After a while I conceded defeat and decided folks might enjoy taking the scenic route through disambiguation.
I never got in the habit of using share links, so it's interesting to learn Wikipedia has a URL shortener.
I wish there was an “evergreen” feature for social sites that tracked resubmissions, auto-suggested them to people who haven’t seen them, and periodically surfaced them to those who have and asked “is this still relevant?” That way really good content keeps being recommended to those who need it, and you get fewer complaints from old-timers who don’t have to see it N times.
My dream is a knowledge-aware Wikipedia that can be more relevant by understanding what the reader knows, might know and might find interesting w/o being overwhelming. I guess you can make this social too and have discussion groups, but it's already too large of a project in my mind.
Yeah people live by this leaky abstraction that an article having been posted before means everyone was online that day and saw it and now it has expired. And for some reason they chase these hall monitor points for pointing it out. Let's see what a discussion would be like from today's point of view.
Also: some people seem to get an amount of pleasure from pointing out repeats, as if remembering that something was posted before is knowledge enough to make them a better person than the poster, us all, or just the person they thought they were themselves. This is fine when something is posted far too often, or is reposted by a point-farming bot (presumably the users running such bots hope to use the reputation of the account somehow in future), but is often done overzealously.
The cheapest available model once you have Theory of Mind (the idea that the other things in the environment might be thinking like you do) is that they're you again.
The Smarties test (What's in this Smarties tube - look it's not Smarties, ok now what does somebody else think is in the tube?) shows that humans need a further step to discover that model isn't enough.
But it's still cheaper and it's pretty good. It will correctly predict that this person you've never met before probably wants cake not death, just like you. It won't reliably predict whether they prefer lemon cake or coffee cake. But it's a good first guess.
It is the same article each time, though the comments coming off the different postings of it might have unique nuggets of useful information to dig for.
> Thank you for providing links to the others though! I’m sure it will be helpful for someone.
It isn't as prominent as on other sites, so it's easy to miss, sat right at the bottom of the main page, but HN does have a working search function. I find searching for older posts this way can be quite useful for the above reason, when something comes up that has existed for a few years.
Personally, I don’t bother searching because I only consume the headlines, on other news sites too, come to think of it. There’s lots of interesting things people post but frankly I’d rather pay for a good book on any subject.
hides from the dreaded downvoters
I used to spend more time browsing when reading an actual newspaper or magazine. The discourse on opinion pieces and such is more thought out too—many people, myself included, post too quickly before thinking because we’re always on the go.
Something about the online experience consuming news is less satisfying. Perhaps a hacker out there can set up a print version of HN archives, and print it on a Gutenberg Press. :)
I think A* deserves the popularity. It’s a simple variation on depth-first and breadth-first graph traversal that everyone learns in CS101, massively useful in some situations, yet for some reason isn’t a standard part of CS education. It’s a marvelous thing the first time you learn about it.
It’s even more marvelous if it helps you recognize that the difference between BFS and DFS is how you pick the next node to explore out of your bag of unexplored nodes. That symmetry is easily lost if DFS is only taught as a recursive algorithm.
Let it keep coming up every couple of years so a new generation of programmers can marvel at it.
I was surprised to see this article because it's not that time of year. Typically A* shows up around December, because people discover it via Advent of Code. (And that's the only place I've used it.)
I think it's just the first, most obvious thing to teach people just starting in pathfinding. It works in real life, it's easy to visualize and compute. Therefore all the tutorials are about it :)
I think the real reason this article keeps coming back is how good it is at teaching through examples and visualization, rather than the A* algorithm itself.
Even inadmissible heuristics can have their place, and it is easy to reason about how suboptimal they can be: you might want to trade off optimal results for performance (i.e. ignoring some part of the search space that should be searched) or to make an agent in a game a little stupid or stylized (e.g. prone to zig-zagging).
A* isn't how you get there, it's how you decide how to get there. The output is the shortest path, and that's what you actually follow.
It's pretty similar to how people find the shortest path in an unfamiliar environment. If you're looking at a map for the shortest route to a city to the north of your current position, you probably won't spend a lot of time looking at roads going south, and you'll concentrate on road segments that get you crow-flies closer to your destination.
It works really well for many situations. If I am making a top-down strategy game (think Civilization), then A* is exactly what I need: a fast performance hack that gives me the shortest path without anything weird going on. For different kinds of environments, then yes, it doesn't work. A* isn't very useful in a racing game.
It took me 3 hours to implement A* with hex tiles, got it working on first attempt (land tiles only), specifically for Civ type game. It gets complex when you want to group units so that they travel together. Adding water crossings with cargo ships and war ships is a different challenge.
If you want cohesion between pathfinding entities, adjust the cost during pathfinding so that tiles with friendlies on them are cheaper than their base cost.
The way to think about water crossing with naval transports is to consider those things to be conditions. You already have a set of conditions when pathfinding. Just add another case for water. Make the requirement that you're either on a ship or there is a ship on the adjacent tile you checked previously, e.g. N-1.
If that's valid, set a flag, and from then on every tile check that hits water passes appropriately.
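A sketch of both ideas above as a single step-cost function (the tile and state fields are hypothetical, not from any real engine):

```python
from dataclasses import dataclass

@dataclass
class Tile:
    terrain: str              # e.g. "grass" or "water"
    base_cost: float
    has_friendly: bool = False

@dataclass
class UnitState:
    embarked: bool = False            # already on a transport
    transport_adjacent: bool = False  # a ship was on the previous (N-1) tile

def step_cost(tile, state):
    if tile.terrain == "water":
        # Water is a condition: passable only via a transport.
        if not (state.embarked or state.transport_adjacent):
            return float("inf")
        return tile.base_cost
    cost = tile.base_cost
    if tile.has_friendly:
        cost *= 0.5  # cheaper than base cost: groups converge on shared lanes
    return cost
```

Feeding this cost into the pathfinder's edge-weight lookup is all the change that's needed; the A* loop itself stays untouched.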
See the target/know which direction it is? Go that direction unless you see an obstacle, in that case go around the obstacle, eventually even backtracking if it turns out the obstacle was worse than you could see. Don't see/know the target? Brownian motion until you do or get tired. Have pathfinded to the target previously? The shortest path you saw while walking there.
All these require deep and complicated simulation of the entity, though, instead of solving a graph problem from an omniscient perspective. Many top-down games really break my immersion when I see things just blatantly A*-ing to places.
Basically, things usually have limited information and it's weird to see them behave as if they don't. Plus on grids there's the cringe "diagonal and then straight" movement pattern.
But humans do quite intelligently pathfind around objects they're aware of, and update their mental models with new information. The front door is locked? I'll go around the back.
You can achieve exactly what you're describing by hiding information entities do not have from their pathfinding.
Graphs aren't the problem, and thinking along those lines won't get you where you're trying to go.
I'm not sure your complaint is actually that A* is bad, it's that the heuristic function is unfair (to the player, by giving the mob data they shouldn't have). A more sophisticated game could use a more interesting function like an estimate as to what direction the player's movement sound would be heard from.
Optimal pathfinding isn't always the right objective: for a good looking game agent reasonable pathfinding can often be enough, or even better.
For example, in a RTS making units follow a leader in formation can be more natural and less expensive than performing independent optimal pathfinding for all of them, while in environments with many obstacles staying away from walls (on a simplified graph of good positions) is usually more elegant than hugging corners (on a much more complex graph).
> See the target/know which direction it is? Go that direction unless you see an obstacle, in that case go around the obstacle, eventually even backtracking if it turns out the obstacle was worse than you could see. Don't see/know the target? Brownian motion until you do or get tired.
What a long and convoluted way to try to reinvent the A* algorithm...
This isn't an actual solution. Video games have to do things fast. You can't just sit there and wait for a minute as you try Brownian motion on hundreds of units. There are plenty of different solutions for more realistic and natural pathfinding, typically navmeshes and then local node-based movement. But you still need to go from A to B, somehow.
Your example also fails in several really obvious ways. What if there is a pool of mud or water in the middle of the path that IS traversable, but crossing it is not ideal? With A* you give this a high traversal cost and you'll naturally path around it. Your solution will have the object going through high-cost but traversable areas in strange ways. This is going to be worse, as players will just kite and exploit this fact.
The point of movement for NPCs in a video game isn't to behave realistically (or to be simulated fully), it's to produce engaging and challenging behavior for the player. In 99% of cases giving characters, like enemies in an action game, some extra information to let them move towards the player is the correct choice; people are fine with suspending their disbelief if the alternative is NPCs giving up or running around like Brownian particles, which does not make for a good experience.
Almost all the time the correct choice in game design is whatever is fun and interesting, except for the rare case where you're literally building a real-world simulator.
Everything you mentioned (Aside from Brownian motion, which is certainly the wrong solution) could be implemented with A* but with an incomplete graph.
I've seen it work that way in an RTS before. Fog of war will make a unit unaware of what the terrain actually looks like and the unit will head straight in the direction of where I told it to go until it finds a cliff, then it starts trying to go around it.
The article doesn't explicitly state it in this manner in one concise place, but the way I would always think about A* from a "practical/easy-to-remember" perspective back when I was doing competitive programming is that they're all the same algorithm, but with different priorities on the priority queue:
Breadth-first Search: Priority is order of discovery of edges (that is, no priority queue/just a regular queue)
Dijkstra: Priority is distance so far + next edge distance
A*: Priority is distance so far + next edge distance + estimate of distance to target node.
This also helps me remember whether the estimate must over- or under-estimate: Since Dijkstra is making the estimate "0", clearly the "admissible heuristic" criteria must be an under-estimation.
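The unification can be sketched in a few lines of Python (illustrative only; the function name and the `node -> [(neighbor, weight)]` graph format are my own). Pass `heuristic` as the zero function and you get Dijkstra; pass an admissible estimate of remaining distance and you get A*:

```python
import heapq

def shortest_path(graph, start, goal, heuristic=lambda n: 0):
    """One loop, different priorities. Priority = g(n) + h(n):
    h == 0 gives Dijkstra; an admissible h gives A*.
    graph maps node -> iterable of (neighbor, edge_weight)."""
    dist = {start: 0}
    frontier = [(heuristic(start), start)]
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            return dist[node]
        for neighbor, weight in graph[node]:
            g = dist[node] + weight
            if g < dist.get(neighbor, float("inf")):
                dist[neighbor] = g
                heapq.heappush(frontier, (g + heuristic(neighbor), neighbor))
    return None  # goal unreachable
```

This is a sketch, not a tuned implementation (stale heap entries are re-examined rather than lazily skipped), but it makes the "same algorithm, different priority" point concrete.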
Another way I like to think about it is that every graph traversal can be represented as a white set of unknown nodes, a grey set of known but unvisited nodes, and a black set of visited nodes. The data structure used to represent the grey set defines the algorithm:
DFS = queue
BFS = stack
Dijkstra's = priority queue keyed by edge weight
A* = priority queue with heuristic function
Beam search = bounded priority queue with heuristic function
Topological sort = priority queue keyed by number of unvisited inbound edges
Copying garbage collector = pointer address
Mark & sweep garbage collector = dirty bit on the object pointed to
Generational garbage collector = multi-level grey set represented by the write barriers between each generation.
A* = priority queue with heuristic function _with specific properties_ ... in particular it must be an "admissible" heuristic that never overestimates the true value.
It'll still find a path with an inadmissible heuristic, as the page explains. It's just not guaranteed to be the shortest path in that case. This is commonly done.
This is very insightful and a handy mental model I'll be using thanks. A small nit is that I think you have DFS and BFS swapped?
Oh yeah, I do. Too late to edit the post now.
Breadth-first is a queue. Depth-first is a stack. A* is a priority queue.
More specifically Dijkstra's is a priority queue. A* is a priority queue with an added estimate to the cost function to prioritize searching nodes closer to the destination.
OP's point is that
· BFS is priority queue with key h(n) + g(n), where h(n) = 0, g(n) = #edges
· Dijkstra's is priority queue with key h(n) + g(n), where h(n) = 0, g(n) = sum over edges
· A* is priority queue with key h(n) + g(n), where h(n) = heuristic(n), g(n) = sum over edges
It's cute.
Likewise, you can represent a queue as a priority queue with key = i, where i is an integer monotonically increasing at insertion time. And you can represent a stack as a priority queue where key = -i.
This is the insight behind the decorate-sort-undecorate pattern; it's just heapsort, with a different key function allowing you to represent several different algorithms.
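That trick is small enough to demo (the class name is made up; the insertion index is the only priority):

```python
import heapq
import itertools

class PrioritizedBag:
    """A 'bag' drained in priority order over an insertion counter i:
    key(i) = i yields FIFO (a queue); key(i) = -i yields LIFO (a stack)."""
    def __init__(self, key):
        self._heap = []
        self._counter = itertools.count()
        self._key = key
    def push(self, item):
        i = next(self._counter)
        heapq.heappush(self._heap, (self._key(i), i, item))
    def pop(self):
        return heapq.heappop(self._heap)[2]

fifo = PrioritizedBag(lambda i: i)    # behaves like a queue
lifo = PrioritizedBag(lambda i: -i)   # behaves like a stack
```

Swapping the key function is all it takes to flip the traversal order, which is the whole point of the "grey set defines the algorithm" framing.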
> OP's point is that
> BFS is priority queue with key h(n) + g(n), where h(n) = 0, g(n) = #edges
He doesn't say that, and it isn't true.
In (theoretical) computer science, we write "#xs" to denote "number of xs".
My sentence was supposed to be read as "g(n) = number of edges", and implicitly, of course (since we're talking about BFS), that means the number of edges seen up until now, from n's perspective. And yes, n usually denotes the size of the graph; however, in the context of A*, we usually write n to denote the current node (as per AIMA).
I take full responsibility. (Disclaimer: I'm a CS professor teaching BFS and Dijkstra's algorithm every semester and A* every 2nd year.)
I think it is true, although "#edges" needs to be understood as "the number of the edges in the path from the starting point to the node", which was not one of my first three candidate interpretations.
> This also helps me remember whether the estimate must over- or under-estimate: Since Dijkstra is making the estimate "0", clearly the "admissible heuristic" criteria must be an under-estimation.
You're thinking too hard. :-) Just think of a map the same way a 10-year-old would.
Straight-line (Euclidean) distance is the most obvious heuristic on a map for estimating distance, and it's admissible.
Straight lines minimize distances i.e. they never overestimate. Which is enough to remind you that you want an underestimate.
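The straight-line heuristic described here is a one-liner for points on a 2D map:

```python
import math

def euclidean(a, b):
    # A straight line is the shortest possible route between two points,
    # so this never overestimates the true travel cost: it is admissible.
    return math.hypot(a[0] - b[0], a[1] - b[1])
```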
> the way I would always think about A* from a "practical/easy-to-remember" perspective back when I was doing competitive programming is that they're all the same algorithm, but with different priorities on the priority queue
A much less obvious but much more fascinating (IMO) way to look at A* is that A* is actually Dijkstra on a modified graph, where you add the heuristic delta between each edge's vertices to the edge's weight.
To remember the sign of the adjustment with this method, just imagine the next vertex getting super close to the destination, and then work out whether the weight needs to increase or decrease significantly in that case. (It needs to decrease, given you're getting closer to the destination.)
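A minimal sketch of that view, assuming a `node -> [(neighbor, weight)]` adjacency format: each edge gets the delta h(v) - h(u) added, so edges that move closer to the destination (smaller h) become cheaper, exactly as the sign argument above predicts. Path costs all shift by the same constant h(goal) - h(start), so shortest paths are preserved, and for a consistent heuristic the new weights stay non-negative:

```python
def reweight(graph, h):
    """Fold the heuristic into the edges: w'(u, v) = w(u, v) + h(v) - h(u).
    Plain Dijkstra on the result expands nodes in the same order
    A* with heuristic h would on the original graph."""
    return {u: [(v, w + h(v) - h(u)) for v, w in edges]
            for u, edges in graph.items()}
```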
I think probably the easiest way to remember under/over is just to remember that Euclidean distance is a very common admissible heuristic.
Any other tips for competitive coding ie a book or source of wisdom in a similar vein?
leetcode will strengthen the youngblood programmer for competitive matters such as competitions or interviews
Codeforces is so much better. Better community and better questions.
And depth first search is just a stack!
Yes, but it doesn't come right away. If you prefer deeper nodes for a moment you have a loop-safe depth-first traversal; then, simplifying for the case where the graph is a tree, you get your regular stack-based depth-first traversal; and if you settle for the first goal you get back a tail-call-optimised DFS.
Which algorithm should I apply:
I have no information other than the fact that my agent has a decision to make (left or right).
- DFS or BFS
I have some information about the cost of the decision.
- UCS or Dijkstra's algorithm
I have some idea of the cost of the decision, and a rough idea which direction the goal is in.
- A star
As well as knowing the cost, and a rough idea of the direction, I also know that I have a uniform cost grid.
- Jump point search
well they all just loop???
Simplifications help people memorize things but if you get too reductive it becomes useless.
Red Blob Games is a great blog if you are interested in game development. The explanations are solid, they have at least pseudo code or an implementation for most of their posts, and they have great animations on a lot of their bigger posts to help build intuition.
I remember one of the Advent of Code challenges had a hex grid puzzle in it, and the Red Blob Games hex grid guide was so good the site got hugged to death because of it for a while. I used it later when I built a Civ clone too; it's a fantastic resource.
https://www.redblobgames.com/grids/hexagons/
I still remember, a decade on, the little pop of joy of realizing that not only do all the graphics on that blog post change when moving from flat to pointy, the code animates too.
The interactive graphics evoke all the fun of being a child in a science museum.
Red Blob Games is also "amitp", one of the earliest Google employees. #7 or something like that.
I came here to say the same thing. Red Blob Games is such a gold mine of resources for anyone looking to get into gamedev.
I have a deep love of A* because it was the first complex algorithm I fully understood. In my first data structures and algorithms in college (early 2000's), we had to pick an algorithm to study, code, and write a paper on and I picked A*.
I spent hours painstakingly drawing grids similar to the ones the author of this article made and manually doing the calculations [0]. I still have these notes somewhere, even though they're over 20 years old at this point, because I was so proud of the work I put into them.
At any rate, thanks for this article and the trip down memory lane.
[0] https://imgur.com/a/zRYaodL (apologies for the Imgur link)
Interesting that this used to be called "AI". I'm still trying to figure out what to call the umbrella field of Artificial Intelligence now that "AI" has come to mean the genAI subset of DL which is a subset of ML which is a subset of what used to be called "AI".
> Interesting that this used to be called "AI".
What has been called AI in gaming in the past is rich and varied, and goes all the way down to a computer-controlled opponent “seeing” a player and opening fire, moving towards them, or moving away. Any code controlling an NPC was referred to as “the AI of the game”, even if all the code was doing was applying a few simple rote rules rather than following an exactly pre-specified sequence.
“AI” in gaming means (or has previously meant) a lot less than “AI” has meant in other fields, but with the increasing use of “AI” in all contexts this will soon no longer be the case.
Not just computer game AI. Literally university courses called "Artificial Intelligence" would teach A*, formal logic, planning, knowledge representation, etc. See for example the famous Russell-Norvig textbook. Since deep learning became dominant around 2012-2014, that conception of AI is now (somewhat deprecatingly) called GOFAI, or "good old-fashioned AI".
I'm old enough that my AI class in undergrad was taught from the first edition of Russell and Norvig. Neural networks (i.e. the basis of 95% of what today is called "AI") got one chapter, and it wasn't a long or detailed chapter either.
I guess the 5th edition will need to be called "Artificial Intelligence: An Outdated Approach".
I absolutely love that book. Maybe that's why I get a little cringey at the "AI" moniker for LLMs.
There is also certainly more sophisticated AI in gaming. Remember the AI used by Deep Blue in chess? Or the AI used by DeepMind in Go against Lee Sedol? Classic games like chess or Go receive more attention from the AI community than contemporary video games.
OpenAI tried it with Dota 2 and had to limit the gameplay a lot to be competitive against humans.
The definition of "AI" has for a long time now been basically "We know it works somehow, but only a few people really understand it", which is a moving target. At some point in the future, the LLMs we use today won't even be called AI anymore.
https://en.m.wikipedia.org/wiki/AI_effect
PS: the wiki article needs updating with more confirmation from the LLM era, if anyone's up for it... :)
The way I explain it to my students is through a Venn diagram of "Traditional AI", "Machine Learning", and "Data Science" (though I suppose Gen AI is starting to form another circle). A* falls into the "Traditional AI" space, which is a mix of state-space search, logic representation, and statistics/probability (now called data science). What the general public considers "AI" is where all the circles meet, and it means everything from robots to if-else statements.
"Artificial intelligence" has always been a marketing term, not a technical one. John McCarthy coined the phrase when he was applying for a DARPA grant, and he picked it because he thought it would sound cool to the grant reviewers.
> Interesting that this used to be called "AI".
I remember learning about A* in the AI lab at the University. Now these things sound trivial and we take them for granted. The joys of becoming old.
I took a course in grad school on "Game AI" that put different path finding approaches and methods of making decisions into a useful bucket away from ML and AI.
a lot of early ai was simply applied classical data structures and algorithms.
although perceptrons go back decades and decades.
Someone send this to the idiots at Garmin, because their navigators will tell you to drive in a straight line across mountains or water to an unreachable destination. It's like AI that never says "I don't know".
As a game developer for a grid-based puzzle game (https://thinky.gg - one of its games, Pathology, has you go from Point A to Point B in the fewest steps), I have found A* fascinating not just for the optimization but also for the various heuristics that can be built on top of it to generalize it to other types of pathfinding.
Some devs have built solvers that use techniques like bidirectional search, precomputed pattern databases, and deadlock detection.
Just a note about bidirectional, or "double-ended" as I learned it - this can be very useful (I read around a 30% speed-up) for city / national road searches.
One also has multiple layers of roadways of varying arterial significance, allowing higher-speed (lower-weight) travel, as with real-world roads.
It was used to great effect by our backend at a mapping job.
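As a sketch of the idea (my own illustration, not from that mapping backend): bidirectional BFS on an unweighted graph grows frontiers from both ends, always expanding the smaller one, and stops when they meet, which roughly halves the explored area:

```python
from collections import deque

def bidirectional_bfs(start, goal, neighbors):
    """Shortest path length on an unweighted graph, searching from both ends."""
    if start == goal:
        return 0
    dist_s, dist_g = {start: 0}, {goal: 0}
    q_s, q_g = deque([start]), deque([goal])
    while q_s and q_g:
        # Always expand the smaller frontier, one full level at a time.
        if len(q_s) <= len(q_g):
            q, dist, other = q_s, dist_s, dist_g
        else:
            q, dist, other = q_g, dist_g, dist_s
        best = None
        for _ in range(len(q)):
            node = q.popleft()
            for nxt in neighbors(node):
                if nxt in other:  # the two frontiers met
                    cand = dist[node] + 1 + other[nxt]
                    best = cand if best is None else min(best, cand)
                elif nxt not in dist:
                    dist[nxt] = dist[node] + 1
                    q.append(nxt)
        if best is not None:
            return best
    return None  # disconnected
```

Road networks are weighted, so real routers use a bidirectional Dijkstra or A* variant instead, but the frontier-meeting idea is the same.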
Relevant work of mine:
- https://lukeyoo.fyi/recap/2025/5/a-star
- https://github.com/micttyoid/quantized-pathfinding
Best introduction to A* in my opinion is travelling in Romania :)
AI: A Modern Approach.
"This problem assumes that the reader is not familiar with Romanian geography. We apologize to those who are unable to take advantage of this pedagogical device."
(Or words to that effect, it's been 20+ years since I read it.)
I always use A* when going to Bucharest.
Path finding visualization, highlighting A*: https://youtube.com/shorts/L8t0tt1Vrsw
Funny enough I got a hit of nostalgia seeing this as I learned A* for a school project many years ago using this exact same tutorial.
I have a perhaps overly simplistic question, but how is this meant to be pronounced? A-star? Ah-sterisk?
Ay-star.
A* is simple enough, but how do you handle pathfinding when the environment isn’t known to the entity?
https://en.m.wikipedia.org/wiki/Simultaneous_localization_an...
SLAM is for mapping. For planning (and execution) in unknown environments, probably something like D* Lite.
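For a feel of the problem, here is a minimal sketch (my own illustration, not a D* Lite implementation) of the naive approach the D* family improves on: plan with plain A* under the "freespace assumption" (unknown cells are presumed passable), walk the plan, and replan whenever a newly sensed wall invalidates it:

```python
import heapq

def a_star(start, goal, walls, size):
    """Plain A* on a 4-connected grid; returns a path or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]
    came, g = {start: None}, {start: 0}
    while frontier:
        _, gc, cur = heapq.heappop(frontier)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        if gc > g[cur]:
            continue  # stale queue entry
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nxt[0] < size and 0 <= nxt[1] < size) or nxt in walls:
                continue
            if gc + 1 < g.get(nxt, size * size):
                g[nxt], came[nxt] = gc + 1, cur
                heapq.heappush(frontier, (gc + 1 + h(nxt), gc + 1, nxt))
    return None

def navigate(start, goal, true_walls, size):
    """Walk from start to goal while sensing only adjacent cells.
    Unknown cells are assumed free; replan when a new wall appears."""
    known, pos, route = set(), start, [start]
    while pos != goal:
        plan = a_star(pos, goal, known, size)
        if plan is None:
            return None  # unreachable given everything learned so far
        for nxt in plan[1:]:
            x, y = pos
            for adj in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if adj in true_walls:
                    known.add(adj)  # "sensor" reveals adjacent walls
            if nxt in known:
                break  # plan just got invalidated; replan from here
            pos = nxt
            route.append(pos)
    return route
```

D* Lite produces the same behavior without recomputing the whole plan each time, by incrementally repairing the previous search.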
Iirc, the best approaches for this nowadays are machine learning based. Otherwise, you probably want to do an exploration step first, and bake in the environment.
ML methods are poor pathfinders. I'm unaware of any pathfinding problem where SOTA is not dominated by state-space search.
I can't find any benchmarks, but here's an example of the sort of approach I was talking about:
https://www.researchgate.net/publication/355761412_Path_plan...
GOFAI techniques are: A) usually unable to incorporate a posteriori knowledge of the environments they're working in, and B) often entirely infeasible to adapt to take exploration costs into account.
I've heard of some GOFAI techniques for optimal unbiased pathfinding in unknown environments using optimal transport theory and the like. But unbiased methods are obviously going to lose out in real environments.
[dead]
I have this bookmarked under "path finding" along with
https://juhrjuhr.itch.io/deep-space-exploitation/devlog/9454...
and https://www.redblobgames.com/grids/hexagons/
Not to be confused with Sagittarius A*.
https://en.wikipedia.org/wiki/Sagittarius_A*
The star appears to have been cut off from the link, leaving an article about the corresponding radio source, Sagittarius A. This (official Wikipedia) short link leads to the desired article: https://w.wiki/5A7e
HN escaped the * when I posted. I tried to escape the escape, but it kept adding escapes to my escapes. After a while I conceded defeat and decided folks might enjoy taking the scenic route through disambiguation.
I never got in the habit of using share links, so it's interesting to learn Wikipedia has a URL shortener.
It's that time of year again. I like A* as much as the next person, but it seems a bit excessive at times.
Title should have a (2014) in it: Introduction to the A* Algorithm (2014).
1 points, 8 months ago, 1 comments: Introduction to the a* Algorithm (https://news.ycombinator.com/item?id=41897736)
202 points, 3 years ago, 30 comments: Introduction to the A* Algorithm (2014) (https://news.ycombinator.com/item?id=30287733)
4 points, 5 years ago, 1 comments: Introduction to the a* Algorithm (https://news.ycombinator.com/item?id=24146045)
201 points, 7 years ago, 14 comments: Introduction to A* (2014) (https://news.ycombinator.com/item?id=18642462)
5 points, 7 years ago, 0 comments: Introduction to A* (https://news.ycombinator.com/item?id=16190604)
Please consider some folks might be new to A*, and perhaps even HN, so maybe this is the first time they’ve seen it! :)
Also, I have ten books on perspective drawing, and my understanding isn’t complete without all ten of them
Or, if I’m teaching a subject on A*, perhaps ONE of those articles conveys the materials best for my students.
Thank you for providing links to the others though! I’m sure it will be helpful for someone.
I wish there was an "evergreen" feature for social sites that tracked resubmissions, auto-suggested them to people who haven't seen them, and periodically surfaced them to those who have, asking "is this still relevant?" That way really good content keeps being recommended to those who need it, and you get fewer complaints from old-timers who don't have to see it N times.
My dream is a knowledge-aware Wikipedia that can be more relevant by understanding what the reader knows, might know and might find interesting w/o being overwhelming. I guess you can make this social too and have discussion groups, but it's already too large of a project in my mind.
Yeah people live by this leaky abstraction that an article having been posted before means everyone was online that day and saw it and now it has expired. And for some reason they chase these hall monitor points for pointing it out. Let's see what a discussion would be like from today's point of view.
Also: some people seem to get an amount of pleasure from pointing out repeats, as if remembering that something was posted before is knowledge enough to make them a better person than the poster, us all, or just the person they thought they were themselves. This is fine when something is posted far too often, or is reposted by a point-farming bot (presumably the users running such bots hope to use the reputation of the account somehow in future), but is often done overzealously.
The cheapest available model once you have Theory of Mind (the idea that the other things in the environment might be thinking like you do) is that they're you again.
The Smarties test (What's in this Smarties tube - look it's not Smarties, ok now what does somebody else think is in the tube?) shows that humans need a further step to discover that model isn't enough.
But it's still cheaper and it's pretty good. It will correctly predict that this person you've never met before probably wants cake not death, just like you. It won't reliably predict whether they prefer lemon cake or coffee cake. But it's a good first guess.
I agree, though to be a pedant:
> perhaps ONE of those articles
It is the same article each time, though the comments coming off the different postings of it might have unique nuggets of useful information to dig for.
> Thank you for providing links to the others though! I’m sure it will be helpful for someone.
HN's search function isn't as prominent as on other sites, and it's easy to miss sitting right at the bottom of the main page, but it does work. I find searching for older posts this way can be quite useful for the above reason, when something comes up that has existed for a few years.
Personally, I don’t bother searching because I only consume the headlines, on other news sites too, come to think of it. There’s lots of interesting things people post but frankly I’d rather pay for a good book on any subject.
hides from the dreaded downvoters
I used to spend more time browsing when reading an actual newspaper or magazine. The discourse on opinion pieces and such is more thought out too—many people, myself included, post too quickly before thinking because we’re always on the go.
Something about the online experience consuming news is less satisfying. Perhaps a hacker out there can set up a print version of HN archives, and print it on a Gutenberg Press. :)
> Also, I have ten books on perspective drawing, and my understanding isn’t complete without all ten of them
Which books might those be? ;)
I think A* deserves the popularity. It’s a simple variation on depth-first and breadth-first graph traversal that everyone learns in CS101, massively useful in some situations, yet for some reason isn’t a standard part of CS education. It’s a marvelous thing the first time you learn about it.
It’s even more marvelous if it helps you recognize that the difference between BFS and DFS is how you pick the next node to explore out of your bag of unexplored nodes. That symmetry is easily lost if DFS is only taught as a recursive algorithm.
Let it keep coming up every couple years to marvel a new generation of programmers.
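That frontier symmetry can be made concrete. Here's a minimal sketch (my own, not from the article) where the only real difference between the searches is the data structure holding the unexplored nodes; `neighbors` is any function returning adjacent nodes:

```python
from collections import deque
import heapq

def bfs(start, goal, neighbors):
    """BFS: the frontier is a FIFO queue."""
    frontier, seen = deque([start]), {start}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            return True
        for nxt in neighbors(node):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

def dfs(start, goal, neighbors):
    """DFS: identical loop, but the frontier is a LIFO stack."""
    frontier, seen = [start], {start}
    while frontier:
        node = frontier.pop()  # pop() instead of popleft() is the only change
        if node == goal:
            return True
        for nxt in neighbors(node):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

def a_star(start, goal, neighbors, cost, h):
    """A*: the frontier is a priority queue keyed by g(n) + h(n).
    With h = 0 this is exactly Dijkstra."""
    frontier = [(h(start), 0, start)]
    best = {start: 0}
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node == goal:
            return g  # cost of the shortest path found
        if g > best[node]:
            continue  # stale queue entry
        for nxt in neighbors(node):
            ng = g + cost(node, nxt)
            if ng < best.get(nxt, float("inf")):
                best[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return None
```

Swapping in a bounded priority queue would give beam search, per the list upthread.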
I was surprised to see this article because it's not that time of year. Typically A* shows up around December, because people discover it via Advent of Code. (And that's the only place I've used it.)
I think it's just the first, most obvious thing to teach people just starting in pathfinding. It works in real life, it's easy to visualize and compute. Therefore all the tutorials are about it :)
I think the real reason this article keeps coming back is how good it is at teaching through examples and visualization, rather than the A* algorithm itself.
There are a lot of people on HN that aren't you.
How could the OP submit this link anyway?
When I submit a link that has been posted on HN before, it just redirects me to the old post.
Maybe there's a secret guide on how to build up karma :))
Please think of all the lucky few. (xkcd 1053)
[dead]
[flagged]
I don't like A*
It's a performance hack, not how entities trying to get somewhere behave.
It's neither a hack nor trying to "behave" like anything.
It is complete and optimal, with provable properties.
Whenever there exists an admissible heuristic, you should use A* over Dijkstra's algorithm.
Even inadmissible heuristics can have their place, and it is easy to reason about how suboptimal they can be: you might want to trade off optimal results for performance (i.e. ignoring some part of the search space that should be searched) or to make an agent in a game a little stupid or stylized (e.g. prone to zig-zagging).
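One common way to make that trade-off explicit (a sketch of my own, not from the page) is weighted A*: multiply an admissible heuristic by w > 1. The heuristic becomes inadmissible and the search beelines toward the goal, but the returned path is provably at most w times longer than optimal:

```python
import heapq

def weighted_a_star(start, goal, walls, size, w=1.0):
    """Grid A* with heuristic w * manhattan; w=1 is plain (admissible) A*.
    Returns (path_cost, nodes_expanded)."""
    h = lambda p: w * (abs(p[0] - goal[0]) + abs(p[1] - goal[1]))
    frontier = [(h(start), 0, start)]
    best, expanded = {start: 0}, 0
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node == goal:
            return g, expanded
        if g > best[node]:
            continue  # stale queue entry
        expanded += 1
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nxt[0] < size and 0 <= nxt[1] < size) or nxt in walls:
                continue
            if g + 1 < best.get(nxt, size * size):
                best[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None, expanded
```

On a map with a long wall, w=1 explores both sides methodically while a large w rushes the wall first; either way the cost it returns stays within the w bound.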
OP's point, I believe, is that inadmissible heuristics sometimes give behaviors that seem more natural, even if not optimal.
A* isn't how you get there, it's how you decide how to get there. The output is the shortest path, and that's what you actually follow.
It's pretty similar to how people find the shortest path in an unfamiliar environment. If you're looking at a map for the shortest route to a city to the north of your current position, you probably won't spend a lot of time looking at roads going south, and you'll concentrate on road segments that get you crow-flies closer to your destination.
It works really well for many situations. If I am making a top-down strategy game (think Civilization), then A* is exactly what I need: a fast performance hack that gives me the shortest path without anything weird going on. For different kinds of environments, then yes, it doesn't work. A* isn't very useful in a racing game.
It took me 3 hours to implement A* with hex tiles, and I got it working on the first attempt (land tiles only), specifically for a Civ-type game. It gets complex when you want to group units so that they travel together. Adding water crossings with cargo ships and war ships is a different challenge.
If you want cohesion between pathfinding entities, adjust the cost during pathfinding: make tiles that have friendlies on them cheaper than their base cost.
The way to think about water crossings with naval transports is to treat them as conditions. You already have a set of conditions when pathfinding; just add another case for water. Make the requirement that you're either on a ship or there is a ship on the adjacent tile you checked previously, i.e. N-1. If valid, set a flag, and now every water tile you check should be handled appropriately.
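Both tweaks can live in one movement-cost function that A* consults as its edge weight. A minimal sketch (all names are illustrative, not from any actual engine):

```python
from collections import namedtuple

# Hypothetical tile model for a Civ-like grid; purely illustrative.
Tile = namedtuple("Tile", ["terrain", "has_friendly"])

BASE_COST = {"grass": 1.0, "forest": 2.0, "mud": 4.0, "water": float("inf")}

def move_cost(tile, embarked):
    """Cost of entering a tile, given whether the unit is on a transport."""
    cost = BASE_COST[tile.terrain]
    if tile.terrain == "water" and embarked:
        cost = 1.0   # the water condition: traversable once on a ship
    if tile.has_friendly:
        cost *= 0.8  # discount friendly tiles so grouped units cluster
    return cost
```

Mud stays traversable but expensive, so paths naturally route around it unless the detour costs more, and friendly tiles gently pull units together.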
What is your preference?
See the target / know which direction it is? Go that direction unless you see an obstacle; in that case go around the obstacle, eventually even backtracking if it turns out the obstacle was worse than you could see. Don't see/know the target? Brownian motion until you do or get tired. Have you pathfound to the target previously? Take the shortest path you saw while walking there.
All these require deep and complicated simulation of the entity, though, instead of solving a graph problem from an omniscient perspective. Many top-down games really break my immersion when I see things just blatantly A*-ing to places.
Basically, things usually have limited information, and it's weird to see them behave as if they don't. Plus, on grids there's the cringe-inducing "diagonal and then straight" movement pattern.
But humans do quite intelligently pathfind around objects they're aware of, and update their mental models with new information. The front door is locked? I'll go around the back.
You can achieve exactly what you're describing by hiding information entities do not have from their pathfinding.
Graphs aren't the problem, and thinking along those lines won't get you where you're trying to go.
I'm not sure your complaint is actually that A* is bad, it's that the heuristic function is unfair (to the player, by giving the mob data they shouldn't have). A more sophisticated game could use a more interesting function like an estimate as to what direction the player's movement sound would be heard from.
Optimal pathfinding isn't always the right objective: for a good looking game agent reasonable pathfinding can often be enough, or even better.
For example, in an RTS, making units follow a leader in formation can be more natural and less expensive than performing independent optimal pathfinding for all of them, while in environments with many obstacles, staying away from walls (on a simplified graph of good positions) is usually more elegant than hugging corners (on a much more complex graph).
> See the target/know which direction it is? Go that direction unless you see an obstacle, in that case go around the obstacle, eventually even backtracking if it turns out the obstacle was worse than you could see. Don't see/know the target? Brownian motion until you do or get tired.
What a long and convoluted way to try to reinvent the A* algorithm...
This isn't an actual solution. Video Games have to do things fast. You can't just sit there and wait for a minute as you try brownian motion on hundreds of units. There are plenty of different solutions to get more realistic and natural pathfinding, typically navmeshes and then local node based movement. But you still need to go from a to b, somehow.
Your example also fails in several really obvious ways. What if there is a pool of mud or water in the middle of the path that IS traversable, but not ideal to cross? With A* you give it a high traversal cost and you'll naturally path around it. Your solution will have the entity wandering through high-cost but traversable areas in strange ways. This is going to be worse, as players will just kite and exploit this fact.
> Brownian motion until you do or get tired
The point of movement for NPCs in a video game isn't to behave realistically (or to be simulated fully); it's to produce engaging and challenging behavior for the player. In 99% of cases, giving characters, like enemies in an action game, some extra information to enable them to move towards the player is the correct choice. People are fine with suspending their disbelief if the alternative is NPCs giving up or running around like Brownian particles, which does not make for a good experience.
Almost all the time the correct choice in game design is that it's fun and interesting, unless for the exception where you're literally building a real world simulator.
Everything you mentioned (Aside from Brownian motion, which is certainly the wrong solution) could be implemented with A* but with an incomplete graph.
I've seen it work that way in an RTS before. Fog of war will make a unit unaware of what the terrain actually looks like and the unit will head straight in the direction of where I told it to go until it finds a cliff, then it starts trying to go around it.
I encourage you to build a game with the mechanics you describe, especially something like an RTS, and see if it's any fun to play...
just take all right turns?
> It's a performance hack, not how entities trying to get somewhere behave.
Welcome to game development, where fun and performance tends to be more important than realism :)
[dead]
[dead]