3: Solving Problems by Searching
3.1 Problem-Solving Agents

Problem-solving agents are goal-driven systems that aim to achieve specific objectives by searching for solutions. These agents operate in a well-defined problem space and use systematic methods to explore possible actions.
3.1.1 Components of Problem-Solving
A problem-solving agent requires:
Initial State: The starting point in the problem space. Example: In a maze, the initial state is the agent’s starting position.
Actions: Possible moves the agent can take. Example: Moving up, down, left, or right in a maze.
Transition Model: Describes the result of each action. Example: Moving "right" from a cell leads to the adjacent cell.
Goal Test: Determines whether the agent has achieved its objective. Example: Reaching the maze’s exit.
Path Cost: The cumulative cost of the path taken. Example: The number of steps or energy consumed in the maze.
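These components translate directly into code. The sketch below is a minimal, hypothetical Python formulation of the maze example; the grid layout, state encoding, and function names are illustrative assumptions rather than a prescribed interface.

```python
# Minimal maze problem formulation (illustrative sketch, not a fixed API).
GRID = [
    "S.#.",
    ".#..",
    "...G",
]  # 'S' = start, 'G' = goal, '#' = wall, '.' = free cell

INITIAL_STATE = (0, 0)   # (row, col) of 'S'
GOAL_STATE = (2, 3)      # (row, col) of 'G'
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def actions(state):
    """Actions: moves that stay on the grid and avoid walls."""
    r, c = state
    legal = []
    for name, (dr, dc) in MOVES.items():
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != "#":
            legal.append(name)
    return legal

def transition(state, action):
    """Transition model: the cell reached by applying the move."""
    dr, dc = MOVES[action]
    return (state[0] + dr, state[1] + dc)

def goal_test(state):
    """Goal test: has the agent reached the exit?"""
    return state == GOAL_STATE

def step_cost(state, action, next_state):
    """Path cost accumulates one unit per move."""
    return 1
```

The search algorithms later in this chapter are written against these five pieces, so swapping in a different problem only requires redefining them.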
3.2 Problem Types
Understanding the nature of a problem helps in selecting the most suitable algorithm.
3.2.1 Single-State Problems
The agent knows exactly where it is at all times. The environment is fully observable and deterministic.
Example: Solving a jigsaw puzzle.
3.2.2 Multi-State Problems
The agent operates with partial knowledge and must account for uncertainty.
Example: Navigating an unfamiliar city without a map.
3.2.3 Adversarial Problems
In these problems, the agent competes against another agent, such as in games.
Example: Chess, where each player’s move influences the other’s strategy.
3.3 Search Strategies
Search algorithms explore the problem space to find solutions. Strategies are broadly categorized into uninformed and informed search.
3.3.1 Uninformed Search
These algorithms have no additional knowledge about the problem beyond the definition of actions and goal tests.
Breadth-First Search (BFS)
Explores all nodes at a given depth before moving to the next level.
Guaranteed to find the shortest path in an unweighted graph.
Depth-First Search (DFS)
Explores as deeply as possible along one branch before backtracking.
Memory-efficient but may not find the optimal solution.
3.3.2 Informed Search
These algorithms use heuristics—additional information about the problem—to guide the search.
Greedy Best-First Search
Prioritizes nodes that appear closest to the goal based on a heuristic.
Fast but not always optimal.
A* Search
Combines the path cost so far with a heuristic estimate of the remaining cost; with an admissible heuristic it is guaranteed to find an optimal solution.
Widely used in applications like navigation systems.
3.4 Problem-Solving in Action
Consider a robot tasked with finding the shortest path to a goal in a grid-like environment.
Initial State: Robot’s starting position.
Actions: Move up, down, left, or right.
Goal Test: Reaching the target position.
Path Cost: Minimize the total number of moves.
Uninformed Approach
Breadth-first search ensures the robot finds the shortest path but explores many unnecessary nodes.
Informed Approach
A* search uses heuristics like the Manhattan distance to efficiently guide the robot toward the goal.
3.5 Uninformed Search Strategies
Uninformed search strategies, also called blind searches, do not use additional information about the goal beyond the problem definition. They systematically explore the problem space.
3.5.1 Breadth-First Search (BFS)
Explores all nodes at the current depth level before moving deeper.
Uses a queue to track frontier nodes.
Properties:
Completeness: Guaranteed to find a solution if one exists (assuming a finite branching factor).
Optimality: Finds the shortest path in an unweighted graph.
Time Complexity: O(b^d), where b is the branching factor and d is the depth of the shallowest solution.
Space Complexity: O(b^d).
Example: BFS is useful in social networks to find the shortest connection between two users.
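A minimal BFS sketch over the hypothetical maze formulation from Section 3.1 might look like the following; the queue holds discovered states, and a parent map doubles as the explored set so no state is enqueued twice.

```python
from collections import deque

def breadth_first_search(start, goal_test, actions, transition):
    """Return the list of states from start to a goal, or None if unreachable."""
    if goal_test(start):
        return [start]
    frontier = deque([start])   # FIFO queue of discovered but unexpanded states
    parents = {start: None}     # maps each discovered state to its predecessor
    while frontier:
        state = frontier.popleft()
        for action in actions(state):
            child = transition(state, action)
            if child in parents:        # already discovered, skip
                continue
            parents[child] = state
            if goal_test(child):        # reconstruct the path by walking parents back
                path = [child]
                while parents[path[-1]] is not None:
                    path.append(parents[path[-1]])
                return list(reversed(path))
            frontier.append(child)
    return None
```

Called as `breadth_first_search(INITIAL_STATE, goal_test, actions, transition)` on the maze sketch, it returns a shortest sequence of cells from the start to the exit.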
3.5.2 Depth-First Search (DFS)
Explores as deeply as possible along a branch before backtracking.
Uses a stack to manage frontier nodes.
Properties:
Completeness: Not guaranteed in infinite-depth spaces.
Optimality: Not guaranteed—it may find a suboptimal solution.
Time Complexity: O(b^m), where b is the branching factor and m is the maximum depth of the search tree.
Space Complexity: O(b * m).
Example: DFS is useful in scenarios like solving mazes or puzzles, where depth exploration is prioritized.
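A depth-first variant swaps the queue for a stack. The sketch below reuses the same hypothetical problem functions and keeps a visited set so it cannot loop forever on the finite grid.

```python
def depth_first_search(start, goal_test, actions, transition):
    """Iterative DFS with an explicit stack; returns a path or None."""
    stack = [(start, [start])]   # (state, path taken to reach it)
    visited = {start}
    while stack:
        state, path = stack.pop()        # LIFO: always expand the deepest node
        if goal_test(state):
            return path
        for action in actions(state):
            child = transition(state, action)
            if child not in visited:
                visited.add(child)
                stack.append((child, path + [child]))
    return None
```

Note that the path it returns can be longer than the one BFS finds, which is exactly the optimality trade-off described above.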
3.5.3 Uniform-Cost Search (UCS)
Expands the node with the lowest path cost first.
Uses a priority queue to organize nodes based on cost.
Properties:
Completeness: Guaranteed as long as every action cost is at least some small positive constant ε.
Optimality: Finds the least-cost solution.
Time Complexity: O(b^(1 + C*/ε)), where C* is the cost of the optimal solution and ε is the minimum action cost.
Space Complexity: Same as time complexity.
Example: UCS is ideal for pathfinding in weighted graphs, such as GPS navigation systems.
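A sketch of UCS, assuming the hypothetical `step_cost` function from the maze formulation (there every move costs 1, but the same code handles arbitrary positive costs):

```python
import heapq

def uniform_cost_search(start, goal_test, actions, transition, step_cost):
    """Expand the cheapest frontier node first; returns (cost, path) or None."""
    frontier = [(0, start, [start])]   # priority queue of (path cost g, state, path)
    best_cost = {start: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if g > best_cost.get(state, float("inf")):
            continue                   # stale entry: a cheaper route was found later
        if goal_test(state):           # testing on expansion preserves optimality
            return g, path
        for action in actions(state):
            child = transition(state, action)
            new_g = g + step_cost(state, action, child)
            if new_g < best_cost.get(child, float("inf")):
                best_cost[child] = new_g
                heapq.heappush(frontier, (new_g, child, path + [child]))
    return None
```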
3.6 Informed Search Strategies
Informed strategies, or heuristic searches, use domain-specific knowledge to guide the search efficiently.
3.6.1 Greedy Best-First Search
Expands the node that appears closest to the goal, based on a heuristic function h(n).
Heuristic h(n): Estimates the cost to reach the goal from node n.
Properties:
Completeness: Not guaranteed if the search gets stuck in loops.
Optimality: Not guaranteed—it may follow suboptimal paths.
Time Complexity: O(b^m), where b is the branching factor and m is the maximum depth of the search tree.
Space Complexity: O(b^m).
Example: Used in routing problems where speed is prioritized over guaranteed optimality.
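Greedy best-first search orders its frontier purely by h(n). The sketch below assumes a `heuristic(state)` function such as the Manhattan distance defined in Section 3.6.3.

```python
import heapq

def greedy_best_first_search(start, goal_test, actions, transition, heuristic):
    """Always expand the frontier node with the smallest heuristic estimate h(n)."""
    frontier = [(heuristic(start), start, [start])]
    visited = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path
        for action in actions(state):
            child = transition(state, action)
            if child not in visited:
                visited.add(child)
                heapq.heappush(frontier, (heuristic(child), child, path + [child]))
    return None
```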
3.6.2 A* Search
Combines path cost and heuristic, using the formula f(n) = g(n) + h(n):
g(n): Path cost from the start to node n.
h(n): Heuristic estimate from node n to the goal.
Properties:
Completeness: Guaranteed if the branching factor is finite.
Optimality: Guaranteed if h(n) is admissible (never overestimates the true cost).
Time Complexity: O(b^d), where b is the branching factor and d is the depth of the solution.
Space Complexity: Same as time complexity.
Example: Widely used in pathfinding algorithms like Google Maps, where both distance and travel time are considered.
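A minimal A* sketch, again built on the hypothetical problem functions and a `heuristic(state)` estimate; the only change relative to uniform-cost search is that the priority is f(n) = g(n) + h(n) instead of g(n) alone.

```python
import heapq

def a_star_search(start, goal_test, actions, transition, step_cost, heuristic):
    """A*: order the frontier by f(n) = g(n) + h(n); returns (cost, path) or None."""
    frontier = [(heuristic(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if g > best_g.get(state, float("inf")):
            continue                                     # superseded by a cheaper path
        if goal_test(state):
            return g, path
        for action in actions(state):
            child = transition(state, action)
            new_g = g + step_cost(state, action, child)
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(frontier,
                               (new_g + heuristic(child), new_g, child, path + [child]))
    return None
```

For the maze example it could be called with `heuristic=lambda s: manhattan_distance(s, GOAL_STATE)`, using the helper sketched in Section 3.6.3.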
3.6.3 Heuristics
Heuristics are domain-specific functions that estimate the cost of reaching the goal. They guide the search process effectively.
Types of Heuristics:
Admissible Heuristic: Never overestimates the true cost to the goal. Example: Straight-line distance in navigation problems.
Consistent Heuristic: Satisfies the condition h(n) <= c(n, n') + h(n'), where c(n, n') is the cost of moving from node n to n'. Example: Manhattan distance in grid-based problems.
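For the grid problems used throughout this chapter, both kinds of heuristic are one-liners; the function names below are illustrative.

```python
import math

def manhattan_distance(state, goal):
    """Admissible and consistent for 4-connected grids with unit step costs."""
    return abs(state[0] - goal[0]) + abs(state[1] - goal[1])

def straight_line_distance(state, goal):
    """Euclidean distance; admissible because no path is shorter than a straight line."""
    return math.hypot(state[0] - goal[0], state[1] - goal[1])
```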
3.7 Comparing Search Strategies
| Search Strategy | Complete | Optimal | Time Complexity | Space Complexity |
| --- | --- | --- | --- | --- |
| Breadth-First Search (BFS) | Yes | Yes (if unweighted) | O(b^d) | O(b^d) |
| Depth-First Search (DFS) | No | No | O(b^m) | O(b * m) |
| Uniform-Cost Search (UCS) | Yes | Yes | O(b^(1 + C*/ε)) | Same as time complexity |
| Greedy Best-First Search | No | No | O(b^m) | O(b^m) |
| A* Search | Yes | Yes | O(b^d) | Same as time complexity |
3.8 Optimizing Search Algorithms
To improve efficiency, search algorithms often incorporate optimization techniques that reduce computation and memory usage.
3.8.1 Graph Search
Graph search is an enhancement over tree search that avoids redundant exploration by keeping track of previously visited states. This eliminates cycles and revisits.
Implementation:
Maintain a closed list (the set of already-expanded states) alongside the frontier (nodes that have been generated but not yet expanded).
Skip any state that is already in the closed list.
Example: In a maze, if the agent revisits previously explored paths, graph search ensures they are skipped, preventing wasted computation.
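The pattern looks the same whichever frontier data structure is used. Below is a schematic sketch with a FIFO frontier; swapping in a stack or priority queue turns it into depth-first or uniform-cost graph search.

```python
from collections import deque

def graph_search(start, goal_test, actions, transition):
    """Tree search plus a closed list: a state expanded once is never re-expanded."""
    frontier = deque([start])   # FIFO here; a stack or priority queue also works
    closed = set()              # the closed list of already-expanded states
    while frontier:
        state = frontier.popleft()
        if state in closed:
            continue            # skip states that were already expanded
        closed.add(state)
        if goal_test(state):
            return state
        for action in actions(state):
            child = transition(state, action)
            if child not in closed:
                frontier.append(child)
    return None
```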
3.8.2 Bidirectional Search
Bidirectional search works by running two simultaneous searches:
One forward from the initial state.
One backward from the goal state.
When the two searches meet, a solution is found.
Advantages:
Significantly reduces search depth compared to unidirectional search.
Time complexity is approximately O(b^(d/2)) instead of O(b^d), where b is the branching factor and d is the depth of the solution.
Example: Finding the shortest path in a social network, where one search starts from a person and the other from their target connection.
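A rough sketch of bidirectional breadth-first search on an undirected graph, where `neighbors(state)` returns adjacent states (for the social-network example this could be a lookup in a friendship dictionary). This is a simplified illustration; a production version would take more care with tie-breaking to guarantee the very shortest path.

```python
from collections import deque

def bidirectional_search(start, goal, neighbors):
    """Grow one BFS frontier from each end until they meet; returns a path or None."""
    if start == goal:
        return [start]
    parents_fwd = {start: None}                 # forward search bookkeeping
    parents_bwd = {goal: None}                  # backward search bookkeeping
    frontier_fwd, frontier_bwd = deque([start]), deque([goal])

    def expand(frontier, parents, other_parents):
        for _ in range(len(frontier)):          # expand one whole BFS layer
            state = frontier.popleft()
            for nxt in neighbors(state):
                if nxt in parents:
                    continue
                parents[nxt] = state
                if nxt in other_parents:
                    return nxt                  # the two frontiers meet here
                frontier.append(nxt)
        return None

    while frontier_fwd and frontier_bwd:
        meeting = expand(frontier_fwd, parents_fwd, parents_bwd)
        if meeting is None:
            meeting = expand(frontier_bwd, parents_bwd, parents_fwd)
        if meeting is not None:
            # Stitch the two half-paths together at the meeting state.
            left, node = [], meeting
            while node is not None:
                left.append(node)
                node = parents_fwd[node]
            right, node = [], parents_bwd[meeting]
            while node is not None:
                right.append(node)
                node = parents_bwd[node]
            return list(reversed(left)) + right
    return None
```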
3.8.3 Iterative Deepening Search
This combines the space efficiency of depth-first search (DFS) with the completeness of breadth-first search (BFS). The algorithm performs DFS with increasing depth limits until a solution is found.
Advantages:
Minimal memory usage, like DFS.
Complete and, when all steps have equal cost, optimal, like BFS.
Use Case: Suitable for problems with large state spaces where memory is limited, such as puzzle solving.
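A compact sketch of iterative deepening over the hypothetical problem functions used earlier; `max_depth` is an added safety cap (an assumption) so the loop terminates even when no solution exists.

```python
def depth_limited_search(state, goal_test, actions, transition, limit, path=None):
    """Recursive DFS that refuses to go deeper than `limit` moves."""
    path = path or [state]
    if goal_test(state):
        return path
    if limit == 0:
        return None
    for action in actions(state):
        child = transition(state, action)
        if child in path:               # avoid cycling along the current path
            continue
        result = depth_limited_search(child, goal_test, actions, transition,
                                      limit - 1, path + [child])
        if result is not None:
            return result
    return None

def iterative_deepening_search(start, goal_test, actions, transition, max_depth=50):
    """Run depth-limited DFS with limits 0, 1, 2, ... until a solution appears."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(start, goal_test, actions, transition, limit)
        if result is not None:
            return result
    return None
```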
3.8.4 Memory-Bounded Search
To address the high memory requirements of algorithms like A*, memory-bounded variants like Iterative Deepening A* (IDA*) and Simplified Memory-Bounded A* (SMA*) are used.
IDA*:
Uses iterative deepening with A*-style evaluation.
Expands nodes within a threshold cost, increasing the threshold iteratively.
SMA*:
Maintains only the most promising nodes within a fixed memory limit.
Discards less promising nodes and revisits them only if needed.
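IDA* can be sketched in a few lines: each iteration is a depth-first search cut off when f(n) = g(n) + h(n) exceeds the current threshold, and the threshold is then raised to the smallest f-value that overflowed. The problem functions and `heuristic` are the hypothetical ones used earlier; SMA* is substantially more involved and is omitted here.

```python
def ida_star(start, goal_test, actions, transition, step_cost, heuristic):
    """Iterative Deepening A*: memory use grows only with the depth of the path."""

    def search(state, g, threshold, path):
        f = g + heuristic(state)
        if f > threshold:
            return f, None              # report the overflowing f-value
        if goal_test(state):
            return f, path
        minimum = float("inf")          # smallest f-value seen beyond the threshold
        for action in actions(state):
            child = transition(state, action)
            if child in path:           # avoid cycles along the current path
                continue
            new_g = g + step_cost(state, action, child)
            t, found = search(child, new_g, threshold, path + [child])
            if found is not None:
                return t, found
            minimum = min(minimum, t)
        return minimum, None

    threshold = heuristic(start)
    while True:
        threshold, path = search(start, 0, threshold, [start])
        if path is not None:
            return path
        if threshold == float("inf"):   # no f-value left to try: no solution exists
            return None
```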
3.9 Practical Considerations for Search Algorithms
When implementing search algorithms in real-world applications, several factors must be considered:
3.9.1 State Representation
Efficient representation of states is crucial for reducing memory usage and speeding up processing. States can be encoded as:
Arrays or Matrices: For grid-based problems.
Graphs: For network-related problems.
Binary Strings: For optimization problems like the traveling salesman.
3.9.2 Heuristic Design
Heuristics significantly influence the performance of informed search algorithms. Designing effective heuristics involves:
Accuracy: The heuristic should closely approximate the actual cost to the goal.
Efficiency: Computing the heuristic should not be more expensive than the search itself.
Admissibility and Consistency: Ensure the heuristic meets these properties to guarantee optimality with A*.
3.9.3 Scalability
Large problem spaces require scalable solutions. Techniques include:
Using distributed computing to parallelize searches.
Applying approximation algorithms when exact solutions are computationally infeasible.
3.10 Use Cases of Search Algorithms
Search algorithms form the backbone of many real-world applications. Here are some key examples:
3.10.1 Navigation Systems
Search algorithms like A* and Dijkstra's are used in GPS systems to compute the shortest routes between two locations, considering factors like traffic and road closures.
3.10.2 Game Playing
Games like chess and Go rely on adversarial search strategies, including:
Minimax Algorithm: Evaluates moves to minimize the opponent’s maximum advantage.
Alpha-Beta Pruning: Optimizes Minimax by eliminating branches that cannot influence the final decision.
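As a taste of the next chapter, here is a compact, hypothetical sketch of minimax with alpha-beta pruning for a two-player zero-sum game; `moves`, `apply_move`, `is_terminal`, and `evaluate` stand in for a concrete game's rules and are assumptions, not part of any particular library.

```python
def alphabeta(state, depth, alpha, beta, maximizing,
              moves, apply_move, is_terminal, evaluate):
    """Depth-limited minimax with alpha-beta pruning; returns a position value."""
    if depth == 0 or is_terminal(state):
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for move in moves(state):
            value = max(value, alphabeta(apply_move(state, move), depth - 1,
                                         alpha, beta, False,
                                         moves, apply_move, is_terminal, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                   # the minimizer will never allow this branch
        return value
    else:
        value = float("inf")
        for move in moves(state):
            value = min(value, alphabeta(apply_move(state, move), depth - 1,
                                         alpha, beta, True,
                                         moves, apply_move, is_terminal, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break                   # the maximizer will never allow this branch
        return value
```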
3.10.3 Robotics
Robots use search algorithms to navigate dynamic environments and avoid obstacles. Pathfinding in grid-based maps is often achieved using algorithms like A*.
3.10.4 Network Optimization
Search is applied to optimize communication networks, routing packets efficiently across nodes while minimizing delays and avoiding congestion.
3.11 Summary
In this chapter, we explored:
The foundations of problem-solving agents and their interaction with environments.
Various search strategies, including uninformed and informed approaches.
Optimization techniques like graph search, bidirectional search, and memory-bounded methods.
Practical considerations for implementing search algorithms in real-world applications.
Real-world use cases, including navigation, gaming, robotics, and networking.
Next Steps: In the next chapter, we’ll delve into Adversarial Search, exploring competitive environments where multiple agents interact, such as games.