Maximum independent set, or "maximum stable set", is one of the classical NP-complete problems described in Richard Karp's 1972 paper "Reducibility Among Combinatorial Problems". Other NP-complete problems often reduce to it simply; for instance, p.3 of Tony Jebara's "MAP Estimation, Message Passing, and Perfect Graphs" shows how MAP inference in an arbitrary MRF reduces to Maximum Weight Independent Set.

The problem asks for the largest set of vertices in a graph such that no two of them are adjacent.

Finding the largest independent set is equivalent to finding the largest clique in the graph's complement.
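To make this equivalence concrete, here is a small brute-force sketch (the helper names `alpha` and `omega` are mine, not from the post; brute force is only viable for tiny graphs):

```python
from itertools import combinations

def alpha(n, edges):
    """Independence number: size of the largest vertex set spanning no edge."""
    for k in range(n, 0, -1):
        for s in combinations(range(n), k):
            if not any(u in s and v in s for u, v in edges):
                return k
    return 0

def omega(n, edges):
    """Clique number: size of the largest vertex set spanning all pairwise edges."""
    eset = {frozenset(e) for e in edges}
    for k in range(n, 0, -1):
        for s in combinations(range(n), k):
            if all(frozenset(p) in eset for p in combinations(s, 2)):
                return k
    return 0

# 4-cycle and its complement (the two diagonals)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
complement = [(u, v) for u, v in combinations(range(4), 2)
              if {u, v} not in map(set, edges)]
print(alpha(4, edges), omega(4, complement))  # 2 2
```

The largest independent set in the 4-cycle (two opposite corners) corresponds exactly to the largest clique in its complement.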

A classical approach to solving this kind of problem is based on linear relaxation.

Take a variable for every vertex of the graph: let it be 1 if the vertex is occupied and 0 if it is not. We encode the independence constraint by requiring that every pair of adjacent variables adds up to at most 1. So we can consider the following graph and the corresponding integer programming formulation:

$$\max x_1 + x_2 + x_3 + x_4 $$

such that

$$
\begin{array}{c}
x_1+x_2 \le 1 \\\\
x_2+x_3 \le 1 \\\\
x_3+x_4 \le 1 \\\\
x_4+x_1 \le 1 \\\\
x_i\ge 0 \ \ \forall i \\\\
x_i \in \text{Integers}\ \forall i
\end{array}
$$

Solving this integer linear program is equivalent to the original maximum independent set problem, with a value of 1 indicating that the node is in the set. To get a tractable LP we drop the integrality constraint.
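As a sketch of the relaxation, the LP above can be solved with SciPy's `scipy.optimize.linprog` (my choice of tool here; the original post used its own notebook):

```python
from scipy.optimize import linprog

# LP relaxation for the 4-cycle above: maximize x1+x2+x3+x4 subject to
# x_i + x_j <= 1 on every edge.  linprog minimizes, so negate the objective.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
A_ub = [[1 if v in e else 0 for v in range(4)] for e in edges]
res = linprog(c=[-1] * 4, A_ub=A_ub, b_ub=[1] * 4, bounds=[(0, 1)] * 4)
print(-res.fun)  # optimal value 2, matching an independent set of two opposite corners
```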

If we solve the LP without the integrality constraint and get an integer-valued result, that result is guaranteed to be a correct solution of the original problem. In this case we say the LP is tight, or integral.

Curiously, a solution of this kind of LP is guaranteed to take only the values 0, 1, and $\frac{1}{2}$. This is known as "half-integrality". The integer-valued variables are guaranteed to be correct, and we need some other method to determine assignments for the half-integer variables.
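Half-integrality is easy to observe on the 5-cycle, where the edge LP has a unique optimum putting 1/2 on every vertex (a SciPy sketch, same setup assumptions as before):

```python
from scipy.optimize import linprog

# Edge LP on the 5-cycle: summing all five edge constraints gives
# 2*(x1+...+x5) <= 5, so the optimum is 5/2, attained only by the
# all-1/2 vector -- strictly above the true independence number 2.
n = 5
edges = [(i, (i + 1) % n) for i in range(n)]
A_ub = [[1 if v in e else 0 for v in range(n)] for e in edges]
res = linprog([-1] * n, A_ub=A_ub, b_ub=[1] * n, bounds=[(0, 1)] * n)
print(-res.fun, res.x)  # 2.5, every coordinate equal to 1/2
```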

For instance, below is a graph for which the first three variables get 1/2 in the solution of the LP, while the rest of the vertices get integral values, which are guaranteed to be correct.

Here's what this LP approach gives us for a sampling of graphs.

A natural question to ask is when this approach works. It's guaranteed to work for bipartite graphs, but it also seems to work for many non-bipartite graphs; none of the graphs above are bipartite.

Since an independent set allows at most one vertex per clique, we can improve our LP by requiring that the sum of variables over each clique be at most 1.

Consider the following graph that our original LP has a problem with:

Instead of having edge constraints, we formulate our LP with clique constraints as follows:

$$x_1+x_2+x_3\le 1$$

$$x_3+x_4+x_5\le 1$$

With these constraints, the LP solver is able to find the integral solution.
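The graph with the two triangle constraints above is the bowtie: triangles {1,2,3} and {3,4,5} sharing vertex 3. A SciPy sketch comparing the two relaxations on it (0-indexed vertices, tooling assumed as before):

```python
from scipy.optimize import linprog

# Bowtie graph: triangles {0,1,2} and {2,3,4} sharing vertex 2.
bowtie_edges = [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)]
edge_A = [[1 if v in e else 0 for v in range(5)] for e in bowtie_edges]
clique_A = [[1, 1, 1, 0, 0], [0, 0, 1, 1, 1]]  # one row per triangle

edge_lp = linprog([-1] * 5, A_ub=edge_A, b_ub=[1] * 6, bounds=[(0, 1)] * 5)
clique_lp = linprog([-1] * 5, A_ub=clique_A, b_ub=[1] * 2, bounds=[(0, 1)] * 5)
print(-edge_lp.fun, -clique_lp.fun)  # 2.5 with edge constraints, 2.0 with clique constraints
```

Summing the two clique constraints gives $\sum_i x_i + x_3 \le 2$, so the clique LP is capped at the true independence number 2, which the edge LP overshoots.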

In this formulation, the solution of the LP is no longer half-integral. For the graph below we get a mixture of 1/3 and 2/3 in the solution.

The optimum of this LP is known as the "fractional chromatic number", and it is an upper bound on both the largest clique size and the largest independent set size. For the graph above, the largest clique and largest independent set have sizes 9/3 and 12/3, whereas the fractional chromatic number is 14/3.

This relaxation is tighter than the previous one; here are some graphs that the previous LP relaxation had a problem with.

This LP is guaranteed to work for perfect graphs, which form a larger class than bipartite graphs. Out of the 853 connected graphs on 7 vertices, 44 are bipartite while 724 are perfect.

You can improve this LP further by adding more constraints. A form of LP constraint that generalizes the previous two approaches is

$$\sum_{i\in C} x_i\le \alpha(C)$$

where $C$ is some subgraph and $\alpha(C)$ is its independence number.

You extend the previous relaxation by considering subgraphs other than cliques.

Lovász describes the following constraint classes (p.13 of "Semidefinite programs and combinatorial optimization"):

1. Odd hole constraints

2. Odd anti-hole constraints

3. Alpha-critical graph constraints.
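As a sketch of the first class: an odd hole of length $2k+1$ has $\alpha = k$, so for the 5-cycle we can add $x_1+\dots+x_5 \le 2$ on top of the edge constraints, which makes the previously half-integral LP integral (SciPy assumed, as in the sketches above):

```python
from scipy.optimize import linprog

# Edge LP on the 5-cycle plus the odd-hole constraint x1+...+x5 <= alpha(C5) = 2.
n = 5
edges = [(i, (i + 1) % n) for i in range(n)]
A_ub = [[1 if v in e else 0 for v in range(n)] for e in edges]
A_ub.append([1] * n)          # odd-hole constraint row
b_ub = [1] * n + [2]          # right-hand side alpha(C5) = 2
res = linprog([-1] * n, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n)
print(-res.fun)  # 2.0, the true independence number
```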

Note that adding constraints for all maximal cliques can produce an LP with an exponentially large number of constraints.

Notebook

Packages

## 31 comments:

Cool post.

So in a practical setting, if we wanted to solve large maximum independent set problems with this approach, we could imagine starting with the basic set of edge constraints, running the LP, checking to see if we have an integral solution, and if not, intelligently adding new constraints.

Do you have any ideas about good ways to choose which additional constraints to add?

(As you may know, this is essentially the problem addressed in the following paper, though the paper is about general MAP inference rather than MIS:

D. Sontag, T. Meltzer, A. Globerson, Y. Weiss, T. Jaakkola. Tightening LP Relaxations for MAP using Message Passing. Uncertainty in Artificial Intelligence (UAI) 24, July 2008.)

Look at, say, the graph at position (4, 5) in the first big picture. Its solution is not the only optimal LP solution: an off-the-shelf LP algorithm could assign 1/2 to both nodes in the left column and both nodes in the right column. So, should one use some technique that prefers integral solutions? Is it a common case that there are several optima, with an integral one among them?

Roman - Good question, I actually don't know why the integer solution was chosen in that case. There are quite a few cases like that, with a non-unique maximum yet the solver choosing an integer solution; perhaps it's a side effect of the simplex method.

Danny - that sounds like a promising approach -- here you could see which regions of the graph produce non-integer solutions, add constraints of the last kind for those vertex sets, and rerun the LP solver.

Concerning Roman's question, I think it is possible to add small random epsilons (either positive or negative) to the right-hand sides of the constraints in the LP relaxation. We seem to lose the property of half-integrality, but in the unlikely case that there are several solutions we will definitely converge to the integer one.

I think the reason for the 1/2 solution is that the variable takes value 1 in some of the maximum independent sets and 0 in others. In easy cases of Independent-Set, e.g. on Erdős–Rényi random graphs with average connectivity smaller than e = 2.71828..., the leaf-removal algorithm gives an optimal solution, and then I guess LP will give a tight solution. In hard cases, when a leaf-removal core exists, no efficient algorithm is known to solve the problem, and LP gives a fractional solution.

In my example with the bowtie graph, there's no maximum independent set that contains the middle node, yet it gets value 1/2.

The possible sets of nonintegral values in optimal LP solutions are closed under intersection (Picard&Queyranne: On the integer-valued variables in the linear vertex packing problem, 1977, http://dx.doi.org/10.1007/BF01593772). Thus, there is a unique optimal LP solution with the minimum number of fractional values, and you can find it for example by trying to fix each variable to 1 and seeing if the objective changes.

On G(n, p) graphs, for any p, with high probability the all-1/2 vector is the only optimal LP solution (Pulleyblank: Minimum node covers and 2-bicritical graphs, 1979, http://dx.doi.org/10.1007/BF01588228).


For Roman's question: the 1/2-solutions proposed are not basic solutions, since they are convex combinations of other solutions (in particular, the alternative integer solutions). So as long as your LP algorithm returns basic solutions (for instance the simplex algorithm; interior point methods have variants that force basic solutions, but that is not required), you will not get the 1/2 solutions.


Why does the LP relaxation work on perfect graphs?

