A problem is represented as a structure with as components INITIAL-STATE,
OPERATORS, GOAL-TEST, and PATH-COST-FUNCTION.
The search graph will use nodes with components STATE,
PARENT-NODE, OPERATOR, DEPTH, and PATH-COST.
The representation of a state will be problem and search strategy dependent.
Clearly this general definition fails to adequately express all the possible variations among search algorithms. For example, it does not deal with the case where nodes are not expanded until doing so becomes necessary [instead only one additional successor of a node is considered at a time]. But it gives a good picture of the general strategy for searching.

function General-Search(problem, QUEUING-FN) returns a solution, or failure
  { nodes <-- MAKE-QUEUE(MAKE-NODE(INITIAL-STATE[problem]))
    loop
      { if nodes is empty then return failure
        node <-- REMOVE-FRONT(nodes)
        if GOAL-TEST[problem] applied to STATE(node) succeeds then return node
        nodes <-- QUEUING-FN(nodes, EXPAND(node, OPERATORS[problem])) }}
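As a sketch of how General-Search might look in Python — the tuple-based node layout and the dictionary problem representation are assumptions for this example, not part of the notes — note that the queuing function alone determines the search strategy:

```python
from collections import deque

def general_search(problem, queuing_fn):
    """General-Search: queuing_fn alone determines the search strategy."""
    # A node is (STATE, PARENT-NODE, OPERATOR, DEPTH, PATH-COST), as in the text.
    root = (problem["initial_state"], None, None, 0, 0)
    nodes = deque([root])
    while nodes:                                   # empty queue means failure
        node = nodes.popleft()                     # REMOVE-FRONT
        if problem["goal_test"](node[0]):
            return node
        # EXPAND: operators yield (operator-name, next-state, step-cost) triples
        successors = [(s, node, op, node[3] + 1, node[4] + c)
                      for op, s, c in problem["operators"](node[0])]
        nodes = queuing_fn(nodes, successors)
    return None                                    # failure
```

A queuing function that appends successors at the back of the queue gives breadth-first search; one that pushes them at the front gives depth-first search.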
IDA* is, in the same sense as A*, complete and optimal.

function IDA*(problem) returns a solution sequence
  f-limit, the current f-Cost limit
  root, a node
  ;; f-limit and root are static local variables
  { root <-- MAKE-NODE(INITIAL-STATE[problem])
    f-limit <-- f-Cost(root)
    loop
      { solution, f-limit <-- DFS-CONTOUR(root, f-limit)
        if solution is non-null then return solution
        if f-limit is INFINITE then return failure }}

function DFS-CONTOUR(node, f-limit) returns a solution sequence and a new f-Cost limit
  next-f, the f-Cost limit for the next contour, initially INFINITE
  ;; next-f is a statically allocated local variable
  { if f-Cost[node] > f-limit then return null, f-Cost[node]
    if GOAL-TEST[problem](STATE[node]) then return node, f-limit
    for each node s in SUCCESSORS(node) do
      { solution, new-f <-- DFS-CONTOUR(s, f-limit)
        if solution is non-null then return solution, f-limit
        next-f <-- MIN(next-f, new-f) }
    return null, next-f }
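A minimal Python sketch of IDA*, assuming a successor function that yields (state, cost) pairs and an admissible heuristic h (both illustrative interfaces, not fixed by the notes):

```python
import math

def ida_star(start, h, successors, is_goal):
    """IDA*: repeated depth-first contours bounded by increasing f-limits."""
    f_limit = h(start)
    path = [start]                      # states on the current branch

    def dfs_contour(node, g, f_limit):
        f = g + h(node)
        if f > f_limit:
            return None, f              # cut off; report f for the next contour
        if is_goal(node):
            return path[:], f_limit
        next_f = math.inf               # smallest f-Cost beyond this contour
        for s, cost in successors(node):
            if s in path:               # avoid trivial cycles on this branch
                continue
            path.append(s)
            solution, new_f = dfs_contour(s, g + cost, f_limit)
            path.pop()
            if solution is not None:
                return solution, f_limit
            next_f = min(next_f, new_f)
        return None, next_f

    while True:
        solution, f_limit = dfs_contour(start, 0, f_limit)
        if solution is not None:
            return solution
        if f_limit == math.inf:
            return None                 # failure
```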
function SMA*(problem) returns a solution sequence
  Queue, a queue of nodes ordered by f-cost
  ;; Queue is a static local variable
  { Queue <-- MAKE-QUEUE(MAKE-NODE(INITIAL-STATE[problem]))
    loop
      { if Queue is empty then return failure
        n <-- deepest least-f-cost node in Queue
        if GOAL-TEST(n) then return success
        s <-- NEXT-SUCCESSOR(n)
        if s is not a goal and is at maximum depth
          then f(s) <-- INFINITY
          else f(s) <-- MAX(f(n), g(s)+h(s))
        if all of n's successors have been generated
          then update n's f-cost and those of its ancestors if necessary
        if SUCCESSORS(n) all in memory then remove n from Queue
        if memory is full
          then { delete shallowest, highest-f-cost node in Queue
                 remove it from its parent's successor list
                 insert its parent on Queue if necessary }
        insert s in Queue }}
"Annealing" is the process of cooling a metal slowly so that it settles into a
crystalline state, the state of minimal energy, rather than into a local energy
minimum that would leave the metal in an amorphous state.
As the metal cools, instead of following a "Hill-Climbing"
(or, rather, descending) path towards a minimum-energy state, it at times jumps
between energy levels, going from low to high. Such an "uphill" move is
characterised by the Delta between the two energy levels. As the metal cools, this
Delta tends to become smaller. The Annealing Schedule
is the ordered sequence (in time) of the temperatures used during cooling.
Mathematically, the Boltzmann Distribution specifies the probability of being at energy level E when at temperature T:

           e^(-E/T)
    P(E) = --------
              Z

where Z is a normalization constant that makes sure P is a probability (non-negative, with sum 1). At a given temperature, states with low energy are more probable than states with high energy. If DE is the change in energy from a low-energy state to a high-energy state, the probability of such a transition is proportional to

    e^(-DE/T)

Clearly, as T decreases, the probability of such energy jumps decreases.
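To make the effect of cooling concrete, a small Python helper (the function name and the sample values are illustrative) computes the acceptance probability e^(-DE/T) for an uphill move:

```python
import math

def acceptance_probability(delta_e, T):
    """Probability of accepting a move with energy change delta_e at temperature T."""
    if delta_e <= 0:
        return 1.0                      # downhill moves are always accepted
    return math.exp(-delta_e / T)       # uphill moves: Boltzmann factor

# For the same uphill jump DE = 1, cooling makes acceptance rarer:
# acceptance_probability(1.0, 10.0) ~ 0.905
# acceptance_probability(1.0, 0.1)  ~ 0.000045
```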
The Simulated Annealing Search algorithm adheres to these ideas:
function SIMULATED-ANNEALING(AnnealingSchedule, Parameters, EndCondition, EquilibriumCondition)
  ;; Here Parameters play the role of state
  { Previous-Energy <-- Energy(Parameters)
    loop for next T in AnnealingSchedule and not EndCondition
      { loop until EquilibriumCondition
          { loop for each Parameter of the optimization problem
              { Trial-Parameter <-- Parameter + random variation
                DEnergy <-- Energy(Parameters with Trial-Parameter) - Previous-Energy
                if DEnergy <= 0 or Uniform-Rand(0,1) <= exp(-DEnergy/T)
                  then { Previous-Energy <-- Previous-Energy + DEnergy
                         Parameter <-- Trial-Parameter }}}}
    return Parameters (they are the current state) }
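Assuming a numeric parameter vector and a supplied cooling schedule (both illustrative choices; the notes leave the state representation and the equilibrium test open), the algorithm can be sketched in Python as:

```python
import math
import random

def simulated_annealing(energy, state, schedule, steps_per_t=100, rng=None):
    """Minimize `energy` over a numeric state vector by simulated annealing."""
    rng = rng or random.Random(0)
    prev_energy = energy(state)
    for T in schedule:                       # the annealing schedule, hot to cold
        for _ in range(steps_per_t):         # crude stand-in for EquilibriumCondition
            i = rng.randrange(len(state))
            trial = list(state)
            trial[i] += rng.uniform(-1, 1)   # random variation of one parameter
            d_energy = energy(trial) - prev_energy
            # Accept downhill moves always; uphill with probability e^(-DE/T)
            if d_energy <= 0 or rng.random() <= math.exp(-d_energy / T):
                state, prev_energy = trial, prev_energy + d_energy
    return state
```

For example, minimizing Energy(x, y) = x^2 + y^2 from the starting point (5, 5) under a geometric schedule drives the state towards the global minimum at the origin.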
ingargiola@cis.temple.edu