The class of models that can compactly represent independence assumptions that Bayesian networks cannot is known as Markov Random Fields (MRFs). How do MRFs work? Using voting preferences as an example, we could define a probability over the joint voting decision of individuals A, B, C, and D by assigning scores to each pair of these variables and then defining a probability as a normalised score. A score can be any non-negative function, but it could look as follows:

\( \tilde{p}(A, B, C, D) = \phi(A, B)\phi(B, C)\phi(C, D)\phi(D, A)\).

where \( \phi(X, Y) = \begin{cases} 10 & \text{if } X = Y = 1 \\ 5 & \text{if } X = Y = 0 \\ 1 & \text{otherwise.} \end{cases} \)

The final probability is computed by normalising \( \tilde{p}(A, B, C, D) \) as follows:

\( p(A, B, C, D) = \frac{1}{Z}\tilde{p}(A, B, C, D) \),

where \( Z = \sum_{A, B, C, D} \tilde{p}(A, B, C, D) \) is the normalising constant (also called the partition function), obtained by summing the score over all joint assignments.
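A minimal sketch of this computation in Python, using the factor table defined above (the function names `phi` and `p_tilde` are just illustrative labels for the formulas in the text):

```python
from itertools import product

def phi(x, y):
    # Pairwise factor from the voting example:
    # strong score when a pair agrees on 1, weaker when it agrees on 0.
    if x == y == 1:
        return 10
    if x == y == 0:
        return 5
    return 1

def p_tilde(a, b, c, d):
    # Unnormalised score over the cycle of pairs A-B, B-C, C-D, D-A.
    return phi(a, b) * phi(b, c) * phi(c, d) * phi(d, a)

# Partition function Z: sum the score over all 2^4 joint assignments.
Z = sum(p_tilde(*assign) for assign in product([0, 1], repeat=4))

def p(a, b, c, d):
    # Normalised probability of one joint voting decision.
    return p_tilde(a, b, c, d) / Z

print(Z)           # 11327
print(p(1, 1, 1, 1))  # 10000 / 11327, the most likely assignment
```

Note how the unanimous assignments dominate: all-ones scores \(10^4 = 10000\) and all-zeros scores \(5^4 = 625\), so together they absorb most of the probability mass.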

The factors can be viewed as the interaction between the two variables. This is unlike a Bayesian network: in MRFs, we simply indicate a level of coupling between dependent variables. We need less prior knowledge, as we don't need to model how one variable affects the other. Instead, the model identifies dependent variables and defines the strength of their interactions.