Quadratic unconstrained binary optimization: Difference between revisions
Revision as of 00:14, 18 June 2025
Quadratic unconstrained binary optimization (QUBO), also known as unconstrained binary quadratic programming (UBQP), is a combinatorial optimization problem with a wide range of applications from finance and economics to machine learning.[1] QUBO is an NP-hard problem, and for many classical problems from theoretical computer science, like maximum cut, graph coloring and the partition problem, embeddings into QUBO have been formulated.[2][3] Embeddings for machine learning models include support-vector machines, clustering and probabilistic graphical models.[4] Moreover, due to its close connection to Ising models, QUBO constitutes a central problem class for adiabatic quantum computation, where it is solved through a physical process called quantum annealing.[5]
Definition
Let <math>\mathbb{B}=\lbrace 0,1\rbrace</math> be the set of binary digits (or bits); then <math>\mathbb{B}^n</math> is the set of binary vectors of fixed length <math>n\in\mathbb{N}</math>. Given a symmetric upper triangular matrix <math>\boldsymbol Q\in\mathbb{R}^{n\times n}</math>, whose entries <math>Q_{ij}</math> define a weight for each pair of indices <math>i,j\in\lbrace 1,\dots,n\rbrace</math> within the binary vector, we can define the function <math>f_{\boldsymbol Q}:\mathbb{B}^n\rightarrow\mathbb{R}</math> that assigns a value to each binary vector <math>\boldsymbol x</math> through
- <math>f_{\boldsymbol Q}(\boldsymbol x)=\boldsymbol x^\intercal\boldsymbol{Qx}=\sum_{i=1}^n\sum_{j=i}^n Q_{ij}x_ix_j.</math>
Intuitively, the weight <math>Q_{ij}</math> is added whenever both <math>x_i=1</math> and <math>x_j=1</math>. The QUBO problem consists of finding a binary vector <math>\boldsymbol{x}^*</math> that is minimal with respect to <math>f_{\boldsymbol Q}</math>, namely
- <math>\boldsymbol{x}^*=\arg\min_{\boldsymbol x\in\mathbb{B}^n}f_{\boldsymbol Q}(\boldsymbol x).</math>
In general, <math>\boldsymbol{x}^*</math> is not unique, meaning there may be a set of minimizing vectors with equal value w.r.t. <math>f_{\boldsymbol Q}</math>. The complexity of QUBO arises from the number of candidate binary vectors to be evaluated, as <math>\left|\mathbb{B}^n\right|=2^n</math> grows exponentially in <math>n</math>.
Sometimes, QUBO is defined as the problem of maximizing <math>f_{\boldsymbol Q}</math>, which is equivalent to minimizing <math>f_{-\boldsymbol Q}=-f_{\boldsymbol Q}</math>.
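To make the definition concrete, the following Python sketch (an illustration added for this text, not taken from the cited references; it assumes NumPy is available and the function names are ad hoc) evaluates <math>f_{\boldsymbol Q}(\boldsymbol x)=\boldsymbol x^\intercal\boldsymbol{Qx}</math> and finds a minimizer by enumerating all <math>2^n</math> binary vectors, which is only feasible for small <math>n</math>.

<syntaxhighlight lang="python">
import itertools
import numpy as np

def qubo_value(Q: np.ndarray, x: np.ndarray) -> float:
    """Evaluate f_Q(x) = x^T Q x for a binary vector x."""
    return float(x @ Q @ x)

def brute_force_qubo(Q: np.ndarray):
    """Minimize f_Q over {0,1}^n by exhaustive enumeration (O(2^n) candidates)."""
    n = Q.shape[0]
    best_x, best_val = None, np.inf
    for bits in itertools.product((0, 1), repeat=n):
        x = np.array(bits)
        val = qubo_value(Q, x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Small upper-triangular example
Q = np.array([[-3.0,  2.0,  1.0],
              [ 0.0, -1.0, -2.0],
              [ 0.0,  0.0,  4.0]])
x_star, f_star = brute_force_qubo(Q)
print(x_star, f_star)   # -> [1 0 0] -3.0
</syntaxhighlight>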
Properties
QUBO is scale invariant for positive factors <math>\alpha>0</math>, which leave the optimum <math>\boldsymbol{x}^*</math> unchanged:
- <math>f_{\alpha\boldsymbol Q}(\boldsymbol x) = \boldsymbol{x}^\intercal(\alpha\boldsymbol{Q})\boldsymbol x = \alpha(\boldsymbol{x}^\intercal\boldsymbol{Qx})=\alpha f_{\boldsymbol Q}(\boldsymbol x)</math>.
In its general form, QUBO is NP-hard and cannot be solved by any known polynomial-time algorithm.[6] However, there are polynomially solvable special cases, where <math>\boldsymbol Q</math> has certain properties,[7] for example:
- If all coefficients are positive, the optimum is trivially <math>\boldsymbol{x}^*=(0,\dots,0)^\intercal</math>. Similarly, if all coefficients are negative, the optimum is <math>\boldsymbol{x}^*=(1,\dots,1)^\intercal</math>.
- If <math>\boldsymbol Q</math> is diagonal, the bits can be optimized independently, and the problem is solvable in <math>\mathcal{O}(n)</math>. The optimal variable assignments are simply <math>x^*_i=1</math> if <math>Q_{ii}<0</math>, and <math>x^*_i=0</math> otherwise (see the sketch after this list).
- If all off-diagonal elements of <math>\boldsymbol Q</math> are non-positive, the corresponding QUBO problem is solvable in polynomial time.[8]
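As a minimal sketch of the diagonal special case above (illustrative code only, assuming NumPy; not taken from the cited sources), each bit is set independently according to the sign of its diagonal entry:

<syntaxhighlight lang="python">
import numpy as np

def solve_diagonal_qubo(Q: np.ndarray) -> np.ndarray:
    """Optimal assignment for a diagonal Q: set x_i = 1 exactly when Q_ii < 0 (O(n))."""
    return (np.diag(Q) < 0).astype(int)

Q = np.diag([2.0, -1.5, 0.0, -3.0])
print(solve_diagonal_qubo(Q))   # -> [0 1 0 1]
</syntaxhighlight>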
QUBO can be solved using integer linear programming solvers like CPLEX or Gurobi Optimizer. This is possible since QUBO can be reformulated as a linear constrained binary optimization problem. To achieve this, substitute each product <math>x_ix_j</math> by an additional binary variable <math>z_{ij}\in\mathbb{B}</math> and add the constraints <math>x_i\ge z_{ij}</math>, <math>x_j\ge z_{ij}</math> and <math>x_i+x_j-1\le z_{ij}</math>. Note that <math>z_{ij}</math> can also be relaxed to continuous variables within the bounds zero and one.
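This reformulation can be assembled mechanically. The sketch below (illustrative only; it assumes NumPy and uses ad-hoc names) builds the linear objective over the variables <math>(x_1,\dots,x_n,z_{ij})</math> and the three inequalities per product term; the resulting arrays could then be passed, with all variables declared binary, to any mixed-integer linear programming solver such as those mentioned above.

<syntaxhighlight lang="python">
import numpy as np

def linearize_qubo(Q: np.ndarray):
    """Linear reformulation of min x^T Q x over {0,1}^n.

    Variables are v = (x_1, ..., x_n, z_ij for i < j), where z_ij stands in for
    the product x_i * x_j and is tied to x_i and x_j by the three inequalities
    from the text.  Returns (c, A_ub, b_ub) describing the objective c @ v and
    the constraints A_ub @ v <= b_ub.
    """
    n = Q.shape[0]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    m = n + len(pairs)

    c = np.zeros(m)
    c[:n] = np.diag(Q)                        # Q_ii x_i^2 = Q_ii x_i for binary x_i
    for k, (i, j) in enumerate(pairs):
        c[n + k] = Q[i, j] + Q[j, i]          # off-diagonal weight moves onto z_ij

    rows, rhs = [], []
    for k, (i, j) in enumerate(pairs):
        for coeffs, bound in (({i: -1.0, n + k: 1.0}, 0.0),            # z_ij <= x_i
                              ({j: -1.0, n + k: 1.0}, 0.0),            # z_ij <= x_j
                              ({i: 1.0, j: 1.0, n + k: -1.0}, 1.0)):   # x_i + x_j - 1 <= z_ij
            row = np.zeros(m)
            for idx, val in coeffs.items():
                row[idx] = val
            rows.append(row)
            rhs.append(bound)
    return c, np.vstack(rows), np.array(rhs)

# Tiny usage example
Q = np.array([[1.0, -2.0],
              [0.0,  3.0]])
c, A_ub, b_ub = linearize_qubo(Q)
print(c)       # [ 1.  3. -2.]
print(A_ub)    # three constraint rows over (x_1, x_2, z_12)
</syntaxhighlight>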
Applications
QUBO is a structurally simple, yet computationally hard optimization problem. It can be used to encode a wide range of optimization problems from various scientific areas.[9]
Cluster Analysis
As an illustrative example of how QUBO can be used to encode an optimization problem, we consider the problem of cluster analysis. Here, we are given a set of <math>N=20</math> points in 2D space, described by a matrix <math>\boldsymbol X\in\mathbb{R}^{N\times 2}</math>, where each row contains two Cartesian coordinates. We want to assign each point to one of two classes or clusters, such that points in the same cluster are similar to each other. For two clusters, we can assign a binary variable <math>x_i\in\mathbb{B}</math> to the point corresponding to the <math>i</math>-th row in <math>\boldsymbol X</math>, indicating whether it belongs to the first (<math>x_i=0</math>) or second cluster (<math>x_i=1</math>). Consequently, we have 20 binary variables, which form a binary vector <math>\boldsymbol x\in\mathbb{B}^{20}</math> that corresponds to a cluster assignment of all points.
One way to derive a clustering is to consider the pairwise distances between points. Given a cluster assignment <math>\boldsymbol x</math>, the expression <math>x_ix_j+(1-x_i)(1-x_j)</math> evaluates to 1 if points <math>i</math> and <math>j</math> are in the same cluster. Similarly, <math>x_i(1-x_j)+(1-x_i)x_j=1</math> indicates that they are in different clusters. Let <math>d_{ij}>0</math> denote the Euclidean distance between points <math>i</math> and <math>j</math>, i.e.,
- <math>d_{ij}=\sqrt{(\boldsymbol X_i-\boldsymbol X_j)^\intercal(\boldsymbol X_i-\boldsymbol X_j)}</math>,
where <math>\boldsymbol X_i</math> is the <math>i</math>-th row of <math>\boldsymbol X</math>.
In order to define a cost function to minimize, when points <math>i</math> and <math>j</math> are in the same cluster we add their positive distance <math>d_{ij}</math>, and subtract it when they are in different clusters. This way, an optimal solution tends to place points which are far apart into different clusters, and points that are close into the same cluster.
Let <math>\boldsymbol D\in\mathbb{R}^{N\times N}</math> with <math>D_{ij}=d_{ij}/2</math> for all <math>i,j\in\lbrace 1,\dots,N\rbrace</math>. Given an assignment <math>\boldsymbol x\in\mathbb{B}^N</math>, such a cost function is given by
- <math>\begin{align}f(\boldsymbol x) &=\boldsymbol x^\intercal\boldsymbol{Dx}-\boldsymbol{x}^\intercal \boldsymbol D(\boldsymbol 1-\boldsymbol x)-(\boldsymbol 1-\boldsymbol x)^\intercal\boldsymbol{Dx}+(\boldsymbol 1-\boldsymbol x)^\intercal\boldsymbol D(\boldsymbol 1-\boldsymbol x)\\ &= 4\boldsymbol x^\intercal\boldsymbol{Dx}-4\boldsymbol{1}^\intercal\boldsymbol{Dx}+\boldsymbol 1^\intercal\boldsymbol{D1},\end{align}</math>
where <math>\boldsymbol 1=(1,1,\dots,1)^\intercal\in\mathbb{R}^N</math>.
From the second line we can see that this expression can be re-arranged into a QUBO problem by absorbing the linear term into the diagonal (using <math>x_i^2=x_i</math> for binary variables) and defining
- <math>\boldsymbol Q=4\boldsymbol D-4\operatorname{diag}[\boldsymbol{D1}]</math>
and ignoring the constant term <math>\boldsymbol 1^\intercal\boldsymbol{D1}</math>. Using these parameters, a binary vector minimizing this QUBO instance will correspond to an optimal cluster assignment w.r.t. the above cost function.
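Putting the clustering example together, the following sketch (illustrative; it assumes NumPy, uses random points instead of the article's data set, and a smaller <math>N</math> so that exhaustive search stays cheap) constructs <math>\boldsymbol D</math> and <math>\boldsymbol Q=4\boldsymbol D-4\operatorname{diag}[\boldsymbol{D1}]</math> and recovers a cluster assignment by brute force.

<syntaxhighlight lang="python">
import itertools
import numpy as np

rng = np.random.default_rng(0)
N = 10                                  # smaller than the article's 20 points, so 2^N stays enumerable
X = rng.normal(size=(N, 2))             # stand-in for the article's coordinate matrix

# Pairwise Euclidean distances, D = d/2 and Q = 4D - 4 diag(D 1)
diff = X[:, None, :] - X[None, :, :]
d = np.sqrt((diff ** 2).sum(axis=-1))
D = d / 2.0
Q = 4.0 * D - 4.0 * np.diag(D @ np.ones(N))

# Brute-force minimization of x^T Q x over all binary assignments
best_x, best_val = None, np.inf
for bits in itertools.product((0, 1), repeat=N):
    x = np.array(bits, dtype=float)
    val = x @ Q @ x
    if val < best_val:
        best_x, best_val = x, val

print(best_x.astype(int))               # cluster label (0 or 1) for each point
</syntaxhighlight>

In practice, a dedicated QUBO or annealing solver would replace the exhaustive loop, but the construction of <math>\boldsymbol Q</math> stays the same.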
Connection to Ising models
QUBO is very closely related and computationally equivalent to the Ising model, whose Hamiltonian function is defined as
- <math>H(\boldsymbol\sigma) =\boldsymbol\sigma^\intercal \boldsymbol J\boldsymbol\sigma+\boldsymbol h^\intercal\boldsymbol\sigma =\sum_{i,j} J_{ij} \sigma_i \sigma_j +\sum_j h_j \sigma_j</math>
with real-valued parameters <math>h_j, J_{ij}</math> for all <math>i,j</math>. The spin variables <math>\sigma_j</math> are binary with values from <math>\lbrace -1,+1\rbrace</math> instead of <math>\mathbb{B}</math>. Moreover, in the Ising model the variables are typically arranged in a lattice where only neighboring pairs of variables <math>\langle i~j\rangle</math> can have non-zero coefficients.
Applying the identity <math>\boldsymbol\sigma\mapsto\boldsymbol 1-2\boldsymbol x</math> yields an equivalent QUBO problem[2]
- <math>\begin{align}&\boldsymbol\sigma^\intercal \boldsymbol J\boldsymbol\sigma+\boldsymbol h^\intercal\boldsymbol\sigma \\ &= (\boldsymbol 1-2\boldsymbol x)^\intercal\boldsymbol J(\boldsymbol 1-2\boldsymbol x)+\boldsymbol h^\intercal(\boldsymbol 1-2\boldsymbol x) \\ &= 4\boldsymbol x^\intercal\boldsymbol J\boldsymbol x-4\boldsymbol 1^\intercal\boldsymbol{Jx}+\boldsymbol 1^\intercal\boldsymbol{J1}-2\boldsymbol h^\intercal\boldsymbol x+\boldsymbol h^\intercal\boldsymbol 1 \\ &= \boldsymbol x^\intercal(4\boldsymbol J)\boldsymbol x-(4\boldsymbol J^\intercal\boldsymbol 1+2\boldsymbol h)^\intercal\boldsymbol x+\underbrace{\boldsymbol 1^\intercal\boldsymbol{J1}+\boldsymbol h^\intercal\boldsymbol 1}_{\text{const.}},\end{align}</math>
whose weight matrix <math>\boldsymbol Q</math> is given by
- <math>\boldsymbol Q=4\boldsymbol J-\operatorname{diag}[4\boldsymbol J^\intercal\boldsymbol 1+2\boldsymbol h],</math>
absorbing the linear term into the diagonal as before and again ignoring the constant term, which does not affect the minimization. Using the identity <math>x=(1-\sigma)/2</math>, a QUBO problem with matrix <math>\boldsymbol Q</math> can be converted to an equivalent Ising model using the same technique, yielding
- <math>\boldsymbol J = \boldsymbol Q/4, \qquad \boldsymbol h = -(\boldsymbol{Q1}+\boldsymbol Q^\intercal\boldsymbol 1)/4,</math>
and a constant offset of <math>\boldsymbol 1^\intercal\boldsymbol{Q1}/4</math>.[10]
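Both conversions are simple matrix operations. The sketch below (illustrative, assuming NumPy; the function names are ad hoc) implements the formulas above and checks on random instances that the QUBO objective and the Ising Hamiltonian agree up to the stated constant offsets.

<syntaxhighlight lang="python">
import numpy as np

def ising_to_qubo(J: np.ndarray, h: np.ndarray):
    """Return (Q, const) such that sigma^T J sigma + h^T sigma = x^T Q x + const
    under sigma = 1 - 2x.  Follows the matrix formula in the text, which requires
    J 1 = J^T 1 (e.g. a symmetric coupling matrix J)."""
    one = np.ones(len(h))
    Q = 4.0 * J - np.diag(4.0 * J.T @ one + 2.0 * h)
    const = one @ J @ one + h @ one
    return Q, const

def qubo_to_ising(Q: np.ndarray):
    """Return (J, h, const) such that x^T Q x = sigma^T J sigma + h^T sigma + const
    under x = (1 - sigma) / 2."""
    one = np.ones(Q.shape[0])
    J = Q / 4.0
    h = -(Q @ one + Q.T @ one) / 4.0
    const = one @ Q @ one / 4.0
    return J, h, const

# Consistency checks on random instances
rng = np.random.default_rng(1)
n = 5
x = rng.integers(0, 2, size=n).astype(float)
sigma = 1.0 - 2.0 * x

A = rng.normal(size=(n, n))
J_sym, h = (A + A.T) / 2.0, rng.normal(size=n)        # symmetric couplings
Q1, c1 = ising_to_qubo(J_sym, h)
assert np.isclose(sigma @ J_sym @ sigma + h @ sigma, x @ Q1 @ x + c1)

Q = np.triu(rng.normal(size=(n, n)))                   # upper-triangular QUBO matrix
J2, h2, c2 = qubo_to_ising(Q)
assert np.isclose(x @ Q @ x, sigma @ J2 @ sigma + h2 @ sigma + c2)
print("conversions consistent")
</syntaxhighlight>

Note that the matrix form of <math>\boldsymbol Q</math> above, and hence the first function, assumes <math>\boldsymbol{J1}=\boldsymbol J^\intercal\boldsymbol 1</math>, which holds for example when <math>\boldsymbol J</math> is symmetric.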
References
- ^ Kochenberger, Gary; Hao, Jin-Kao; Glover, Fred; Lewis, Mark; Lu, Zhipeng; Wang, Haibo; Wang, Yang (2014). "The unconstrained binary quadratic programming problem: a survey" (PDF). Journal of Combinatorial Optimization. 28: 58–81. doi:10.1007/s10878-014-9734-0. S2CID 16808394.
- ^ a b Glover, Fred; Kochenberger, Gary (2019). "A Tutorial on Formulating and Using QUBO Models". arXiv:1811.11538 [cs.DS].
- ^ Lucas, Andrew (2014). "Ising formulations of many NP problems". Frontiers in Physics. 2: 5. arXiv:1302.5843. Bibcode:2014FrP.....2....5L. doi:10.3389/fphy.2014.00005.
- ^ Mücke, Sascha; Piatkowski, Nico; Morik, Katharina (2019). "Learning Bit by Bit: Extracting the Essence of Machine Learning" (PDF). LWDA. S2CID 202760166. Archived from the original (PDF) on 2020-02-27.
- ^ Tom Simonite (8 May 2013). "D-Wave's Quantum Computer Goes to the Races, Wins". MIT Technology Review. Archived from the original on 24 September 2015. Retrieved 12 May 2013.
- ^ A. P. Punnen (editor), Quadratic unconstrained binary optimization problem: Theory, Algorithms, and Applications, Springer, 2022.
- ^ Çela, E., Punnen, A.P. (2022). Complexity and Polynomially Solvable Special Cases of QUBO. In: Punnen, A.P. (eds) The Quadratic Unconstrained Binary Optimization Problem. Springer, Cham. https://doi.org/10.1007/978-3-031-04520-2_3
- ^ See Theorem 3.16 in Punnen (2022); note that the authors assume the maximization version of QUBO.
- ^ Ratke, Daniel (2021-06-10). "List of QUBO formulations". Retrieved 2022-12-16.
- ^ Mücke, S. (2025). Quantum-Classical Optimization in Machine Learning. Shaker Verlag. https://d-nb.info/1368090214
External links
- QUBO Benchmark (Benchmark of software packages for the exact solution of QUBOs; part of the well-known Mittelmann benchmark collection)
- Endre Boros, Peter L Hammer & Gabriel Tavares (April 2007). "Local search heuristics for Quadratic Unconstrained Binary Optimization (QUBO)". Journal of Heuristics. 13 (2). Association for Computing Machinery: 99–132. doi:10.1007/s10732-007-9009-3. S2CID 32887708. Retrieved 12 May 2013.
- Di Wang & Robert Kleinberg (November 2009). "Analyzing quadratic unconstrained binary optimization problems via multicommodity flows". Discrete Applied Mathematics. 157 (18). Elsevier: 3746–3753. doi:10.1016/j.dam.2009.07.009. PMC 2808708. PMID 20161596.
- Hiroshima University and NTT DATA Group Corporation: "QUBO++ with ABS2 GPU QUBO Solver" software.