Pathfinding in videogames: A* (A star), Dijkstra and NavMesh with step-by-step examples
You have built your first prototype. Your enemies move, chase the player and even patrol. But there's a problem: they run through walls, get stuck in corners and take absurd routes that scream "this isn't right". You've looked up tutorials, you've copied code, you've even asked a generative AI for help. The problem is still there. It's not that your game is bad: it's that your NPCs don't know how to navigate the world you've created.
Welcome to pathfinding. The difference between an enemy that looks dumb and one that looks scary. Between an immersion-breaking NPC and an immersion-enhancing NPC. Between an embarrassing prototype and a system you can be proud of.
This article is not a theoretical computer science lecture or a history of algorithms. It's a practical guide for you to understand what makes your characters move intelligently through a scenario, when to use A*, Dijkstra or NavMesh, and how to apply it without drowning in incomprehensible mathematics.
If you've ever thought "I want my AI to stop being embarrassing", read on.
What pathfinding is (and what it is NOT)
Pathfinding is the process of calculating a route from point A to point B within a navigable space. Nothing more, nothing less. It is not "artificial intelligence" in the broad sense. It does not make your enemy decide whether to attack or flee. It does not avoid collisions in real time. It doesn't animate your character or give it personality.
It's a specific piece of the navigation system: wayfinding.
Think of it like this. You have three separate layers to an NPC's movement:
Decision (what goal do I pursue?): this is where behaviour trees, finite state machines or utility systems come in. Your enemy decides "I'm going to chase the player" or "I'm going back to my patrol".
Pathfinding (how do I get to that goal?): this is the layer we are dealing with. It calculates the optimal or feasible route between two points considering obstacles, terrain weights and world geometry.
Movement and avoidance (how do I physically move without crashing?): steering behaviors, avoidance, locomotion. Your character follows the path but avoids other agents, smooths turns and translates the path into fluid movement.
If you mix these layers, you will go crazy debugging. An NPC that gets stuck may have a perfect path but a collision problem. An enemy that takes weird paths may have a good algorithm but poorly defined navigable space.
Pathfinding solves a concrete question: given a map and two points, what is the best route?
The map and the method: representation vs algorithm
Here is the most common confusion that paralyses those starting out: mixing up the representation of the world with the search algorithm.
Representation is how you translate your scenario (with walls, doors, stairs, expensive terrain to traverse) into something an algorithm can process: a graph. It can be a grid of tiles, a mesh of navigable polygons (NavMesh), a graph of strategic points or any structure where there are nodes connected by edges.
The algorithm is the method you use to find the optimal path within that representation: Dijkstra, A*, Greedy Best-First, or any variant.
The key here: NavMesh is not an alternative algorithm to A*. NavMesh is a representation of the world. Then, on that representation, you apply an algorithm like A* to compute routes. Understanding this separation is fundamental.
A 2D grid is also a representation. A network of manually connected waypoints is also a representation. Each has advantages and limitations:
- Grid: perfect for tile-based games, roguelikes, turn-based strategy. Easy to implement, but can be expensive in memory if the world is huge, and produces paths at 90 degree angles (or diagonals if you allow that movement).
- NavMesh (Navigation Mesh): ideal for 3D environments with complex geometry. Pre-calculates navigable areas and represents them as convex polygons. Efficient at runtime because it drastically reduces the number of nodes. The drawback: it requires a baking process and can be difficult to adjust dynamically.
- Waypoint networks: manually placed nodes connected to each other. Useful in games with free movement but defined tactical zones (cover, gates, ambush points). Full control, but time-consuming to maintain.
Now that we've separated the map from the method, let's get to what you came for: the algorithms.
Dijkstra step by step: the search without a compass
Dijkstra is the foundational algorithm. It is optimal, exhaustive and doesn't need to know anything about your target. It simply expands in all directions looking for the path with the lowest cumulative cost.
Imagine this 6×6 grid. We want to go from node S (Start) to node G (Goal). The numbers are the cost of traversing each square. The X's are impassable obstacles.

[ 1][ 1][ 1][ 1][ 1][ 1]
[ 1][ X][ X][ 2][ 1][ 1]
[ S][ 1][ 2][ 2][ 2][ 1]
[ 1][ X][ 2][ 1][ 1][ G]
[ 1][ 1][ 1][ 1][ 1][ 1]
[ 1][ 1][ 1][ 1][ 1][ 1]

Dijkstra works with two lists:
- Open list (nodes to be explored): starts with the initial node.
- Closed list (nodes already processed): starts empty.
For each node we store the accumulated cost from the beginning (g) and the previous node in the path (to reconstruct the path at the end).
Iteration 1: we explore S (cost 0). Its free neighbours are the node above it (cost 1), the node below it (cost 1) and the node to its right (cost 1). They go to Open with g=1 and S as their previous node. S goes to Closed.
Iteration 2: we take the node with the lowest g from Open (there's a tie, so we pick any). Let's say it's the node above S. Its unvisited neighbours are added to Open (or updated, if we found a cheaper route to them). That node goes to Closed.
Iteration 3, 4, 5...: we repeat. We always take the node with the lowest accumulated cost in Open, explore its neighbours, update costs if we find a cheaper path.
The algorithm ends when G goes from Open to Closed. Then you reconstruct the path following the "previous node" pointers from G to S.
The result: the optimal path (the one with the lowest total cost). The problem: Dijkstra has scanned in all directions, even those away from the target. It's like looking for the car keys by going through the whole house room by room without knowing whether they are in the kitchen or the bathroom.
It works. It is optimal. But it is not efficient if you know roughly where you want to go.
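The loop above can be sketched in a few lines of Python. This is an illustrative sketch, not engine-ready code: cells hold their traversal cost, `None` marks an obstacle, and the "Open list" is a priority queue ordered by accumulated cost g.

```python
import heapq

def dijkstra(grid, start, goal):
    """Dijkstra on a grid of traversal costs; None marks an obstacle.
    Returns the list of (x, y) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    open_heap = [(0, start)]          # Open list: (accumulated cost g, node)
    g = {start: 0}                    # best known cost per node
    prev = {}                         # previous node, to rebuild the path
    closed = set()                    # Closed list
    while open_heap:
        cost, node = heapq.heappop(open_heap)
        if node in closed:
            continue                  # stale entry, already processed
        closed.add(node)
        if node == goal:              # goal moved to Closed: done
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return path[::-1]
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < cols and 0 <= ny < rows and grid[ny][nx] is not None:
                new_g = cost + grid[ny][nx]        # cost of entering the neighbour
                if new_g < g.get((nx, ny), float("inf")):
                    g[(nx, ny)] = new_g
                    prev[(nx, ny)] = node
                    heapq.heappush(open_heap, (new_g, (nx, ny)))
    return None                       # goal unreachable

# Usage: a 3×3 grid whose middle row is partially blocked.
grid = [[1, 1, 1],
        [None, None, 1],
        [1, 1, 1]]
path = dijkstra(grid, (0, 0), (0, 2))   # goes around via the right column
```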
A* step by step: Dijkstra with a compass
A* is Dijkstra improved. The difference: it adds a heuristic, an estimate of how far it is to the goal from each node. That estimate "guides" the search towards the goal instead of expanding blindly.
We use the same grid as in the previous example. Now, in addition to the cumulative cost g(n) from the start, we calculate a heuristic h(n) towards the goal. The sum of both is f(n) = g(n) + h(n).
The most common heuristic in grids: the Manhattan distance (sum of differences in X and Y) if you only allow orthogonal movement, or Euclidean distance if you allow diagonals.
Let's take Manhattan. From a node at position (x, y) to G at (5, 3):

h(n) = |x - 5| + |y - 3|

Now we repeat Dijkstra's process, but this time we order Open by f(n) instead of just by g(n).

Iteration 1: S is at (0, 2). h(S) = |0 - 5| + |2 - 3| = 6. f(S) = 0 + 6 = 6. We explore S and add its neighbours with their f values calculated.
Iteration 2: we take the node with the lowest f from Open. That small difference changes everything: instead of expanding uniformly in circles, A* prefers nodes that are closer to the target. Nodes that move away from it have a higher f, so they sink to the back of the queue.
The result: you get to the target by exploring far fewer nodes than Dijkstra. On large grids the difference is brutal.
Critical condition: the heuristic must be admissible (never overestimate the real cost). If h(n) overestimates, A* may return suboptimal paths. Manhattan and Euclidean are admissible if your movement costs are consistent with them.
In short: A* = Dijkstra + direction. Faster, just as optimal if you use an admissible heuristic, and the de facto standard in videogames.
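As a sketch (same caveats as before: illustrative, not engine code), A* is the previous loop with one structural change — Open is ordered by f(n) = g(n) + h(n) instead of g(n) alone:

```python
import heapq

def astar(grid, start, goal):
    """A* on a grid of traversal costs; None marks an obstacle.
    Manhattan heuristic: admissible for 4-way movement with step costs >= 1."""
    rows, cols = len(grid), len(grid[0])

    def h(node):                       # estimated cost to the goal
        return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

    open_heap = [(h(start), 0, start)] # ordered by f = g + h
    g = {start: 0}
    prev = {}
    closed = set()
    while open_heap:
        _, cost, node = heapq.heappop(open_heap)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:               # reconstruct path from the prev pointers
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return path[::-1]
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < cols and 0 <= ny < rows and grid[ny][nx] is not None:
                new_g = cost + grid[ny][nx]
                if new_g < g.get((nx, ny), float("inf")):
                    g[(nx, ny)] = new_g
                    prev[(nx, ny)] = node
                    heapq.heappush(open_heap, (new_g + h((nx, ny)), new_g, (nx, ny)))
    return None
```

Note that with h(n) = 0 this degrades gracefully into Dijkstra: the heuristic only changes the exploration order, never the optimality, as long as it stays admissible.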
NavMesh: from the 3D world to a navigable graph
NavMesh does not compete with A*. It is a representation of space, not an algorithm. But it deserves a section of its own because it is the standard solution for navigation in 3D environments.
The problem it solves: in a 3D game with complex geometry (buildings, ramps, stairs, furniture), turning every triangle of the mesh into a graph node would be unfeasible. You would need millions of nodes to represent a medium-sized level. Unmanageable.
NavMesh reduces the problem. Instead of working with all the geometry, it generates a simplified mesh of convex polygons that represent the surfaces where an agent can walk. These polygons are then connected together to form a much smaller graph.
The process has three phases:
Phase 1: Baking
The engine analyses your level geometry, detects flat walkable surfaces (floor, ramps, platforms) and generates navigable polygons. Here you define parameters like:
- Agent radius: how much space your character needs. This makes NavMesh "shrink" around obstacles so that a large agent does not try to pass through narrow gaps.
- Agent height: to detect low ceilings and tunnels where the agent won't fit.
- Max slope: slopes that are too steep are marked as unnavigable.
- Step height: small steps that the agent can climb without the need for ramps.
The result: a polygon mesh covering the walkable areas of the level, with connections between adjacent polygons and, if necessary, special links (jumps, teleports, gates).
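Of these parameters, max slope reduces to a single comparison against each surface's normal. A hypothetical sketch of that core test (real bakers such as Recast work per voxel span, and the function name here is made up for illustration):

```python
import math

def is_walkable(normal, max_slope_deg):
    """A surface is walkable if its normal deviates from vertical (Y-up)
    by at most max_slope_deg degrees."""
    nx, ny, nz = normal
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    up_component = ny / length         # cosine of the angle to vertical
    return up_component >= math.cos(math.radians(max_slope_deg))

# A flat floor (normal straight up) passes any slope limit;
# a surface tilted 60 degrees fails a 45-degree limit.
```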
Phase 2: Pathfinding on the NavMesh
Once baked, the NavMesh is a graph. Each polygon is a node; each border shared between two polygons is a graph edge. When you ask for a path from A to B, the engine:
- Identifies which polygon A is in and which polygon B is in.
- Runs A* (or another variant) on the polygon graph.
- Returns a sequence of polygons connecting the two points.
As polygons are large areas, the graph has far fewer nodes than a grid. The search is fast.
Phase 3: Path smoothing
The path you get is a sequence of polygons. But if you move the agent from centre to centre of each polygon, the path will be robotic and with sharp turns. This is where smoothing comes in: techniques such as the funnel algorithm calculate the most direct path within the corridor of polygons, generating a smooth and natural path.
The end result: routes that look like they were thought up by a human, not a machine.
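The real funnel algorithm works on polygon corridors, but the underlying idea of string pulling can be shown on a grid: drop every intermediate waypoint that a straight, unobstructed line can skip. A simplified sketch (function names are made up for illustration; `None` marks a blocked cell):

```python
def line_clear(grid, a, b):
    """Bresenham walk from cell a to cell b; True if no cell is blocked."""
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        if grid[y0][x0] is None:
            return False
        if (x0, y0) == (x1, y1):
            return True
        e2 = 2 * err
        if e2 >= dy:
            err += dy; x0 += sx
        if e2 <= dx:
            err += dx; y0 += sy

def smooth(grid, path):
    """Greedy string pulling: from each kept waypoint, jump to the
    furthest later waypoint that is reachable in a straight line."""
    if not path:
        return path
    result = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not line_clear(grid, path[i], path[j]):
            j -= 1
        result.append(path[j])
        i = j
    return result
```

On an empty grid, a staircase of five collinear waypoints collapses to just its two endpoints; around an obstacle, only the corner that actually turns survives.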
Limitations and trade-offs
Pre-calculated NavMesh is static. If your level changes dynamically (you destroy a wall, open a door, set up a barricade), you need to update NavMesh. Some engines allow partial rebaking or dynamic tiles, but it adds complexity.
Also, NavMesh does not resolve evasion between agents. If two enemies follow converging paths, they may collide. There you need steering behaviours or local avoidance (another layer, as we said at the beginning).
When to use what: decision rules
Here is the table you should have in your head when choosing your approach.
Dijkstra: use it if you need to calculate optimal distances from one point to ALL other nodes in the network (e.g. to implement influence maps or territory analysis in an RTS). In pure pathfinding you rarely use it, because A* is strictly better if you have only one goal.
A*: the workhorse. Use it in:
- 2D grid games (roguelikes, tower defense, turn-based tactics).
- Pathfinding on waypoint graphs.
- Navigation on NavMesh (yes, internally engines often use A* on the polygon graph).
It is fast, optimal, well studied and has a thousand variants if you need advanced optimisations later.
NavMesh: use it in 3D environments with complex geometry where a grid would be unfeasible:
- Shooters (enemies surrounding you in buildings).
- 3D action/adventure (NPCs in cities, dungeons).
- Survival or stealth (realistic patrols that respect the level layout).
NavMesh is not an algorithm; it is the representation of the world. Inside it you will still use A*.
Hybrid cases:
- Huge open world: local NavMesh + region graph for transit between zones.
- RTS with units destroying obstacles: dynamic grid or NavMesh with updatable tiles.
- Top-down with procedurally generated levels: simple grid and A* without complication.
Final rule: start with the simplest thing that works. A 50×50 grid with basic A* already takes you a long way. NavMesh is an upgrade when the complexity of the level justifies it, not the starting point.
Typical errors and how to debug without losing your sanity
Implementing pathfinding doesn't end when it "works once". Here are the problems you will encounter (and how to attack them).
Error 1: Inadmissible heuristics
Symptom: A* returns suboptimal or rare paths.
Cause: your h(n) overestimates the real cost. Example: you use Euclidean distance but your diagonal movement cost is lower than sqrt(2) (say, diagonals cost 1), so the straight-line estimate can exceed what the path actually costs.
Solution: make sure that h(n) never overestimates. If in doubt, use a more conservative heuristic or directly h(n) = 0 (which turns A* into Dijkstra: slower, but guaranteed optimal).
Error 2: Badly connected nodes
Symptom: the algorithm does not find a path even though visually it should exist.
Cause: the graph has a gap. Two seemingly adjacent nodes are not connected. Or you allowed diagonals but did not check that orthogonal squares are also free (the NPC "crosses" corners).
Solution: draw your graph. Literally. Visualise the connections. If you use diagonals, check that you don't allow diagonal traversal of obstacles.
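The corner-cutting check is a one-liner worth spelling out. A sketch (assuming the target cell is inside the grid and `None` marks a blocked cell; the function name is made up):

```python
def diagonal_allowed(grid, x, y, dx, dy):
    """A diagonal step from (x, y) by (dx, dy) is only legal if the
    target cell AND both orthogonal cells it brushes past are free.
    Assumes (x + dx, y + dy) is inside the grid."""
    target_free = grid[y + dy][x + dx] is not None
    side_a_free = grid[y][x + dx] is not None    # horizontal neighbour
    side_b_free = grid[y + dy][x] is not None    # vertical neighbour
    return target_free and side_a_free and side_b_free
```

Without the two side checks, an NPC moving diagonally past an obstacle's corner will clip through it even though every cell on its path is technically free.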
Error 3: Agent bigger than the corridor
Symptom: The path is correct, but the NPC gets stuck in doors or corridors.
Cause: the NavMesh was baked with an agent radius smaller than the actual size of your character. Or your grid does not consider the agent size.
Solution: Adjust the agent radius in NavMesh. If you use grid, "inflate" the obstacles according to the agent size before calculating routes.
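The grid-side "inflation" can be sketched as a brute-force dilation: every cell within the agent's radius (measured in cells) of an obstacle becomes an obstacle too, so a path computed for a point actually fits the whole body. Illustrative only; `None` marks a blocked cell:

```python
def inflate(grid, radius_cells):
    """Return a copy of the grid with obstacles dilated by radius_cells.
    Brute force: fine for small grids; use a distance transform at scale."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for y in range(rows):
        for x in range(cols):
            if grid[y][x] is None:                 # original obstacle
                for dy in range(-radius_cells, radius_cells + 1):
                    for dx in range(-radius_cells, radius_cells + 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols:
                            out[ny][nx] = None     # blocked for this agent
    return out
```

You then run A* on the inflated grid instead of the original one, once per agent size class.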
Error 4: Horrible performance with many agents
Symptom: 20 enemies calculating paths every frame and your FPS plummets.
Cause: You recalculate paths unnecessarily. Pathfinding is expensive; doing it every frame for every agent is suicide.
Solution:
- Calculate paths only when the target or environment changes (event-driven).
- Use intervals: each agent recalculates its path every N frames, not all at once.
- Cache routes if several agents go to the same place.
- Limit the search depth or area explored.
- Consider "lazy paths": calculate only the first steps of the route and recalculate when you get close.
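The "intervals" idea above can be sketched as a tiny scheduler: each agent gets a slot in a cycle of N frames, so at most agents/N of them repath on any single frame. The class name and shape are made up for illustration:

```python
class RepathScheduler:
    """Time-slices path recalculation: with interval_frames = N, each
    agent is only eligible to repath once every N frames."""

    def __init__(self, interval_frames):
        self.interval = interval_frames
        self.frame = 0

    def agents_to_repath(self, agents):
        """Call once per frame; returns the agents whose turn it is."""
        due = [a for i, a in enumerate(agents)
               if i % self.interval == self.frame % self.interval]
        self.frame += 1
        return due

# With 6 agents and an interval of 3 frames, only 2 agents repath
# per frame instead of all 6.
```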
Error 5: Correct paths but ugly movement
Symptom: NPC reaches the target, but turns at right angles, sticks to walls or zigzags.
Cause: this is NOT a pathfinding problem. It is a steering and avoidance problem.
Solution: add smoothing to the path (string pulling, Catmull-Rom splines). Implement steering behaviors to smooth the path. Separate the problem: pathfinding gives you waypoints, steering translates that into smooth movement.
Error 6: NPC ignores environment changes
Symptom: You open a door, place a new obstacle, and the enemy still uses the old route.
Cause: You calculated the path once and never updated it.
Solution: invalidate paths when the environment changes. If you use dynamic NavMesh, rebake the affected area. If you use a grid, mark the modified cells and force nearby agents to recalculate their routes.
Quick diagnostic checklist
When your pathfinding fails, check in order:
- Is the network well defined (nodes, connections, consistent weights)?
- Does the agent fit in the navigable space (radius, height)?
- Are the heuristics admissible?
- Is the calculated path theoretically correct (draw it)?
- If the path is correct but the movement is bad: it is steering, not pathfinding.
- If the path is not found: visualise Open/Closed during the search to see where the algorithm is stuck.
Debugging pathfinding without visualisation is masochism. Draw the network. Draw the path. Draw the explored nodes. Most bugs evaporate when you see what's really going on.
Beyond "it works": pathfinding as design
A common mistake of people starting out: believing that pathfinding is a purely technical problem. You implement it well and that's it. But the truth is that pathfinding is also design.
Sometimes you don't want the optimal route. You want the interesting path.
Examples:
- In a stealth game, you prefer guards patrolling predictable (even sub-optimal) routes so the player can plan. Routes that are too "smart" break the level design.
- In a shooter, you can assign artificial costs to certain areas so that enemies prefer to flank you rather than attack head-on, even if the frontal route is shorter.
- In a tactical RPG, you can make certain unit types avoid difficult terrain even if they can technically traverse it, to reinforce the difference in roles (heavy infantry vs. light scouts).
Pathfinding gives you tools: terrain costs, forbidden areas, zone modifiers. Use them to guide behaviour, not just to "get from A to B as fast as possible".
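One way to sketch this: the same map, different cost profiles per unit type. The terrain names, profiles and numbers below are invented for illustration; the point is that a large multiplier makes A* route around a zone without forbidding it outright.

```python
# Base traversal cost per terrain type (hypothetical values).
BASE_COST = {"road": 1, "grass": 2, "swamp": 3}

def cell_cost(terrain, profile):
    """Cost of entering a cell for a given unit profile: base cost times
    the profile's multiplier for that terrain (1 if unlisted)."""
    multiplier = profile.get(terrain, 1)
    return BASE_COST[terrain] * multiplier

# Light scouts take the direct route; for heavy infantry, swamp becomes
# so expensive that any detour around it is cheaper.
scout = {}
heavy = {"swamp": 10}
```

Feed `cell_cost` into the A* neighbour expansion in place of the raw grid value and the two unit types will plan visibly different routes over the same map.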
Well-designed pathfinding is invisible. The player doesn't think "wow, that's a good algorithm". They think "these enemies move the way they should". That's the goal.
The next step: from prototype to system
If you've made it this far, you're no longer at the "my NPC walks through walls" level. You understand the difference between representation and algorithm. You know when to use A*, when to use NavMesh, and why Dijkstra exists. You can follow an example step by step and replicate it in your engine.
That's the leap from someone who copies tutorials without understanding to someone who has judgement.
But there is a bigger difference: going from implementing an isolated technique to building complete systems. Pathfinding does not live alone. It coexists with decision AI, with animation, with level design, with optimisation, with gameplay. Real projects integrate all that and work under pressure: deadlines, feedback, iteration.
That's where the methodology comes in.
If this part gets you hooked (navigation, applied AI, system design, gameplay programming, graphics engines), and you want to professionalise it with real projects, constant feedback and an environment in which to build a portfolio instead of accumulating individual tutorials, the Degree in Design and Development of Video Games and Virtual Environments is the pathway designed for that.
It is not the only way to learn. But it is the structured way: with active professors, projects that simulate real production, and a community where "making videogames" stops being an aspiration and becomes a profession.
Now go back to your prototype. Implement a simple grid, test A* with pen and paper before writing code, visualise the search. And when it works, when your NPC goes around that obstacle for the first time without clipping through it, you will feel something that no tutorial can give you: the certainty that you understand it.
That's the first step. The next ones are up to you.
