Belief propagation


Overview

Belief propagation, also known as the sum-product algorithm, is an iterative algorithm for computing marginals of functions on a graphical model, most commonly used in artificial intelligence and information theory. Judea Pearl formulated the algorithm on trees in 1982,[1] and Kim and Pearl extended it to polytrees in 1983.[2] Pearl (1988)[3] then suggested it as an approximation scheme for general (loopy) networks. Belief propagation is an efficient exact inference algorithm on trees and has demonstrated empirical success in numerous applications, including low-density parity-check codes, turbo codes, free energy approximation, and satisfiability. It is commonly used on pairwise Markov random fields (MRFs with a maximum clique size of 2), Bayesian networks, and factor graphs.

Recall that the marginal distribution of a single random variable <math>X_i</math> is simply the summation of the joint distribution over all variables except <math>X_i</math>. Letting <math>\mathbf{x}</math> denote an assignment of all variables in the joint distribution:

<math>P(x_i) = \sum_{\mathbf{x}: X_i=x_i} P(\mathbf{x}).</math>

For the purposes of explaining this algorithm, consider the marginal function, which is simply an unnormalized marginal distribution with a generic global function <math>g(\mathbf{x})</math>:

<math>z(x_i) = \sum_{\mathbf{x}: X_i=x_i} g(\mathbf{x}).</math>
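As a concrete (if exponentially slow) baseline, this marginal function can be computed by brute force. The following Python sketch assumes the global function <math>g</math> is given as a callable over complete assignments; the three-variable <math>g</math> shown is purely illustrative:

<pre>
from itertools import product

def brute_force_marginal(g, domains, i, xi):
    """z(x_i): sum g over every assignment x with X_i fixed to x_i."""
    total = 0.0
    for x in product(*domains):
        if x[i] == xi:
            total += g(x)
    return total

# Illustrative global function over three binary variables.
g = lambda x: (1 + x[0]) * (1 + x[1] * x[2])
domains = [[0, 1]] * 3
print([brute_force_marginal(g, domains, 0, v) for v in [0, 1]])  # [5.0, 10.0]
</pre>

Belief propagation computes the same quantities on trees without enumerating all assignments.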

Exact algorithm for trees

This algorithm functions by passing real-valued messages across edges in a graphical model. More precisely, in a tree, a vertex sends a message to an adjacent vertex once it (a) has received messages from all of its other adjacent vertices and (b) has not already sent one. So in the first iteration, the algorithm sends messages from every leaf node to its single adjacent vertex, and it continues sending messages in this manner until every message has been sent exactly once, hence the term propagation. It is easily proven that all messages will be sent: there are exactly twice as many messages as edges. Upon termination, the marginal of a variable is simply the product of the incoming messages from all of its adjacent vertices. This fact can be proven by mathematical induction, though the proof is somewhat messy.

The message definitions will be described in the factor graph setting, as the algorithms for other graphical models are nearly identical. Since factor graphs have variable and factor nodes, there are two types of messages to define:

A variable message is a real-valued function sent from a variable to a factor, defined as

<math>\mu_{X_n\rightarrow f_m}(x_n) = \prod_{f_i\in N(X_n)\setminus \{f_m\}} \mu_{f_i\rightarrow X_n}(x_n).</math>

A factor message is a real-valued function sent from a factor to a variable, defined as

<math>\mu_{f_m\rightarrow X_n}(x_n) = \sum_{\mathbf{x}_m : X_n=x_n} f_m(\mathbf{x}_m) \prod_{X_i\in N(f_m)\setminus \{X_n\}} \mu_{X_i\rightarrow f_m}(x_i),</math>

where <math>N(u)</math> denotes the set of neighbours (adjacent vertices in the graph) of a vertex <math>u</math>, and <math>\mathbf{x}_m</math> is an assignment to the variables adjacent to <math>f_m</math> (i.e. the variables in <math>N(f_m)</math>).

As mentioned in the description of the algorithm, the marginal of <math>X_i</math> can be computed in the following manner:

<math>z(x_i) = \prod_{f_j\in N(X_i)} \mu_{f_j\rightarrow X_i}(x_i).</math>

One can also compute the marginal of a factor <math>f_j</math>, or equivalently the marginal of the subset of variables <math>\mathbf{X}_j</math> on which it depends, in the following manner:

<math>z(\mathbf{x}_j) = f_j(\mathbf{x}_j)\prod_{X_i\in N(f_j)} \mu_{X_i\rightarrow f_j}(x_i).</math>
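To make the message recursion concrete, here is a minimal Python sketch of sum-product message passing on a small tree-shaped factor graph (a chain of three binary variables). The model, the variable and factor names, and the helper functions are all illustrative assumptions, not part of any standard library:

<pre>
from itertools import product

# Chain model X1 -- f12 -- X2 -- f23 -- X3 over binary variables.
domains = {'X1': [0, 1], 'X2': [0, 1], 'X3': [0, 1]}
factors = {
    'f12': (('X1', 'X2'), lambda a, b: 1.0 if a == b else 0.5),
    'f23': (('X2', 'X3'), lambda a, b: 1.0 if a == b else 0.5),
}

def neighbours(node):
    # N(u): adjacent vertices of a variable or factor node.
    if node in factors:
        return list(factors[node][0])
    return [f for f, (vs, _) in factors.items() if node in vs]

def msg_var_to_factor(xn, fm, value):
    # mu_{Xn -> fm}(xn): product of incoming factor messages, excluding fm.
    out = 1.0
    for fi in neighbours(xn):
        if fi != fm:
            out *= msg_factor_to_var(fi, xn, value)
    return out

def msg_factor_to_var(fm, xn, value):
    # mu_{fm -> Xn}(xn): sum over assignments to N(fm) with Xn held fixed.
    vs, table = factors[fm]
    others = [v for v in vs if v != xn]
    total = 0.0
    for assignment in product(*(domains[v] for v in others)):
        full = dict(zip(others, assignment))
        full[xn] = value
        term = table(*(full[v] for v in vs))
        for xi in others:
            term *= msg_var_to_factor(xi, fm, full[xi])
        total += term
    return total

def marginal(xi):
    # z(x_i): product of all incoming factor messages at X_i.
    out = []
    for value in domains[xi]:
        p = 1.0
        for fj in neighbours(xi):
            p *= msg_factor_to_var(fj, xi, value)
        out.append(p)
    return out

z = marginal('X2')
print([zi / sum(z) for zi in z])  # prints the normalized marginal of X2
</pre>

On a tree the mutual recursion always bottoms out at the leaves, where the product over an empty neighbour set is 1, mirroring condition (a) in the schedule described above.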

Approximate algorithm for general graphs

Curiously, nearly the same algorithm is used in general graphs. The algorithm is then sometimes called "loopy" belief propagation, because graphs typically contain cycles, or loops. The procedure must be adjusted slightly because graphs might not contain any leaves. Instead, one initializes all variable messages to 1 and uses the same message definitions above, updating all messages at every iteration (although messages coming from known leaves or tree-structured subgraphs may no longer need updating after sufficient iterations). It is easy to show that in a tree, the message definitions of this modified procedure will converge to the set of message definitions given above within a number of iterations equal to the diameter of the tree.
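A minimal sketch of this synchronous ("flooding") schedule, assuming the per-edge update rule is supplied as a callback implementing the message definitions above (all names here are illustrative):

<pre>
def loopy_bp(edges, domain_size, update, n_iters=50, tol=1e-6):
    # One message vector per directed edge, initialized to all ones.
    msgs = {e: [1.0] * domain_size for e in edges}
    for _ in range(n_iters):
        # Synchronous update: every message is recomputed from the old ones.
        new = {e: update(e, msgs) for e in edges}
        delta = max(abs(a - b) for e in edges
                    for a, b in zip(new[e], msgs[e]))
        msgs = new
        if delta < tol:  # convergence is not guaranteed on graphs with cycles
            break
    return msgs
</pre>

On a tree this reproduces the exact messages within diameter-many iterations, as noted above.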

The precise conditions under which loopy belief propagation will converge are still not well understood; it is known that on graphs containing a single loop it converges to a correct solution.[4] In the general case no guarantees exist, and there are graphs that fail to converge or that oscillate between multiple states over repeated iterations.

There are other approximate methods for marginalization, including variational methods and Monte Carlo methods.

One method of exact marginalization in general graphs is called the junction tree algorithm, which is simply belief propagation on a modified graph guaranteed to be a tree. The basic premise is to eliminate cycles by clustering them into single nodes.

Related algorithms and complexity issues

A similar algorithm, commonly referred to as the Viterbi algorithm but also known as the max-product or min-sum algorithm, solves the related problem of maximization, or most probable explanation. Instead of computing a marginal, the goal here is to find the values <math>\mathbf{x}</math> that maximize the global function (i.e. the most probable values in a probabilistic setting), and it can be defined using the arg max:

<math>\arg\max_{\mathbf{x}} g(\mathbf{x}).</math>

An algorithm that solves this problem is nearly identical to belief propagation, with the sums replaced by maxima in the definitions.
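For instance, in the tree sketch above only the factor-to-variable message changes. A hedged sketch of the max-product version follows; in a full implementation, the variable-to-factor recursion would call this function in place of the sum-based one:

<pre>
def max_msg_factor_to_var(fm, xn, value):
    # Same as msg_factor_to_var above, with the sum replaced by a maximum.
    # Factors are assumed nonnegative, as in probabilistic models.
    vs, table = factors[fm]
    others = [v for v in vs if v != xn]
    best = 0.0
    for assignment in product(*(domains[v] for v in others)):
        full = dict(zip(others, assignment))
        full[xn] = value
        term = table(*(full[v] for v in vs))
        for xi in others:
            term *= msg_var_to_factor(xi, fm, full[xi])
        best = max(best, term)  # max instead of sum
    return best
</pre>

Recording which assignment achieves each maximum then allows the maximizing values <math>\mathbf{x}</math> to be recovered by backtracking.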

It is worth noting that inference problems such as marginalization and maximization are NP-hard to solve exactly, and even approximately (at least for relative error), in a general graphical model. More precisely, the marginalization problem defined above is #P-complete, and maximization is NP-complete.

Relation to free energy

The sum-product algorithm is related to the calculation of free energy in thermodynamics. A probability distribution

<math>P(\mathbf{X}) = \frac{1}{Z} \prod_{f_j} f_j(\mathbf{x}_j)</math>

(as per the factor graph representation) can be viewed as a measure of the internal energy present in a system, computed as

<math>E(\mathbf{X}) = -\log \prod_{f_j} f_j(\mathbf{x}_j).</math>

The free energy of the system is then

<math>F = U - H = \sum_{\mathbf{X}} P(\mathbf{X}) E(\mathbf{X}) + \sum_{\mathbf{X}} P(\mathbf{X}) \log P(\mathbf{X}).</math>
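As a consistency check, substituting <math>\log P(\mathbf{X}) = -E(\mathbf{X}) - \log Z</math> (which follows from the two definitions above) into the entropy term gives

<math>F = \sum_{\mathbf{X}} P(\mathbf{X}) E(\mathbf{X}) - \sum_{\mathbf{X}} P(\mathbf{X}) E(\mathbf{X}) - \log Z = -\log Z,</math>

so the exact free energy is the negative log partition function, the normalizing quantity that exact marginalization must compute.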

It can then be shown that the points of convergence of the sum-product algorithm represent the points where the free energy in such a system is minimized. Similarly, it can be shown that a fixed point of the iterative belief propagation algorithm in graphs with cycles is a stationary point of a free energy approximation.

Generalized belief propagation (GBP)

Belief propagation algorithms are normally presented as message update equations on a factor graph, involving messages between variable nodes and their neighboring factor nodes and vice versa. Considering messages between regions in a graph is one way of generalizing the belief propagation algorithm. There are several ways of defining the set of regions in a graph that can exchange messages. One method uses ideas introduced by Kikuchi in the physics literature, and is known as Kikuchi's cluster variation method.

Improvements in the performance of belief propagation algorithms are also achievable by breaking the replica symmetry in the distributions of the fields (messages). This generalization leads to a new kind of algorithm called survey propagation (SP), which has proved very efficient on NP-complete problems such as satisfiability and graph coloring.

The cluster variation method and the survey propagation algorithms are two different improvements to belief propagation. The name generalized survey propagation (GSP) is waiting to be assigned to the algorithm that merges both generalizations.

References

  1. Pearl, J. (1982). "Reverend Bayes on inference engines: A distributed hierarchical approach". Proceedings of the AAAI National Conference on Artificial Intelligence, Pittsburgh, PA, pp. 133–136.
  2. Kim, J. H.; Pearl, J. (1983). "A computational model for combined causal and diagnostic reasoning in inference systems". Proceedings of IJCAI-83, Karlsruhe, Germany, pp. 190–193.
  3. Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference (Revised Second Printing). San Francisco, CA: Morgan Kaufmann.
  4. Weiss, Y. (2000). "Correctness of local probability propagation in graphical models with loops". Neural Computation 12(1): 1–41.

