In this series, we will explore, at a high level, some of the ideas, techniques, and algorithms at the foundation of AI. AI covers a wide range of techniques, and this series breaks them into several categories of problems.
In this post, we will explore Uncertainty and Optimization.
In the context of Knowledge problems (covered in Part 1 of this series), the computer drew conclusions from known facts. We can apply the same idea to probabilities.
Uncertainty problems are those where the AI predicts the probability of variables taking on particular values, using known information, whether that is observed evidence or known probabilities.
Suppose I have an appointment for which I need to take the train. Whether I make it to the appointment on time depends on the train schedule. The train may be on time or delayed depending on whether it is raining and whether there is track maintenance, and track maintenance in turn depends on the rain.
By modeling these variables as nodes in a Bayesian network, we can program the AI to predict the probability of making it to the appointment.
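To make this concrete, here is a minimal inference-by-enumeration sketch in plain Python for the train example. All of the probabilities are invented for illustration; the point is only the structure, where each variable's probability depends on its parents in the network.

```python
from itertools import product

# Prior probability of rain.
P_rain = {True: 0.3, False: 0.7}

# P(track maintenance | rain): maintenance depends on rain.
P_maint = {
    True:  {True: 0.6, False: 0.4},
    False: {True: 0.2, False: 0.8},
}

# P(train on time | rain, maintenance).
P_on_time = {
    (True, True): 0.4, (True, False): 0.7,
    (False, True): 0.6, (False, False): 0.9,
}

# P(make the appointment | train on time).
P_attend = {True: 0.9, False: 0.4}

def joint(rain, maint, on_time, attend):
    """Joint probability of one full assignment, following the network structure."""
    p = P_rain[rain] * P_maint[rain][maint]
    p *= P_on_time[(rain, maint)] if on_time else 1 - P_on_time[(rain, maint)]
    p *= P_attend[on_time] if attend else 1 - P_attend[on_time]
    return p

# Sum the joint distribution over all hidden variables to get P(make the appointment).
p = sum(joint(r, m, t, True) for r, m, t in product([True, False], repeat=3))
print(f"P(make the appointment) = {p:.3f}")
```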
Another example is predicting today’s weather from the previous day’s weather by applying the Markov assumption: the current state depends only on the immediately preceding state.
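A minimal sketch of that idea, with made-up transition probabilities, samples a week of weather from a two-state Markov chain:

```python
import random

# P(tomorrow's weather | today's weather); the numbers are illustrative.
transitions = {
    "sun":  {"sun": 0.8, "rain": 0.2},
    "rain": {"sun": 0.3, "rain": 0.7},
}

def sample_chain(start, days):
    """Sample a sequence of states, each depending only on the previous one."""
    state, chain = start, [start]
    for _ in range(days - 1):
        probs = transitions[state]
        state = random.choices(list(probs), weights=probs.values())[0]
        chain.append(state)
    return chain

print(sample_chain("sun", 7))  # e.g. ['sun', 'sun', 'rain', 'rain', ...]
```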
Optimization problems are about choosing the best option from a set of options.
There are several ways we can formulate these problems. Some approaches are given below.
i) Local search problems – examining a current node and moving to a neighbor depending on whether the neighbor is better or worse than the current node. Algorithms: hill climbing and simulated annealing (a hill-climbing sketch appears after this list).
ii) Linear programs – formulating the problem as a set of linear equations and constraints. Algorithms: the simplex algorithm and the interior-point algorithm (see the linear-program sketch after this list).
iii) Constraint satisfaction problems – building a graph of the constraints that limit the values variables can take. Algorithms: enforcing arc consistency and backtracking search (see the backtracking sketch after this list).
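As a sketch of local search, here is hill climbing on a toy one-dimensional objective. The function, step size, and iteration count are illustrative choices; the essential move is accepting a neighbor only when it improves on the current state.

```python
import random

def objective(x):
    """A toy objective with a single peak at x = 3."""
    return -(x - 3) ** 2 + 10

def hill_climb(x, step=0.1, iterations=1000):
    for _ in range(iterations):
        neighbor = x + random.choice([-step, step])
        if objective(neighbor) > objective(x):  # move only if strictly better
            x = neighbor
    return x

best = hill_climb(x=random.uniform(-10, 10))
print(f"x = {best:.2f}, objective = {objective(best):.2f}")
```

Simulated annealing differs only in sometimes accepting a worse neighbor, with a probability that shrinks over time, so the search can escape local maxima.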
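For linear programs, SciPy's linprog solves the standard minimization form directly. The costs and constraints below are invented numbers: minimize 50x + 80y subject to 5x + 2y ≤ 20 and 10x + 12y ≥ 90, with x, y ≥ 0.

```python
from scipy.optimize import linprog

result = linprog(
    c=[50, 80],                  # objective coefficients to minimize
    A_ub=[[5, 2], [-10, -12]],   # <= constraints; the >= row is negated
    b_ub=[20, -90],
)                                # variables are nonnegative by default

if result.success:
    print(f"x = {result.x[0]:.2f}, y = {result.x[1]:.2f}, cost = {result.fun:.2f}")
```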
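For constraint satisfaction, here is a minimal backtracking search on a toy map-coloring problem; the graph and color domain are made up. Each node must get a color different from all of its neighbors.

```python
colors = ["red", "green", "blue"]
neighbors = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}

def consistent(var, value, assignment):
    """A value is consistent if no assigned neighbor already has it."""
    return all(assignment.get(n) != value for n in neighbors[var])

def backtrack(assignment):
    if len(assignment) == len(neighbors):
        return assignment  # every variable assigned
    var = next(v for v in neighbors if v not in assignment)
    for value in colors:
        if consistent(var, value, assignment):
            result = backtrack({**assignment, var: value})
            if result is not None:
                return result
    return None  # no value works; undo and backtrack

print(backtrack({}))  # e.g. {'A': 'red', 'B': 'green', 'C': 'blue'}
```

Enforcing arc consistency before or during this search prunes values that can never satisfy a constraint, shrinking the space backtracking has to explore.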
Reference: CS50’s Introduction to Artificial Intelligence with Python