https://en.wikipedia.org/w/index.php?action=history&feed=atom&title=Convolutional_sparse_coding
Convolutional sparse coding - Revision history
2025-05-31T14:18:26Z
Revision history for this page on the wiki
MediaWiki 1.45.0-wmf.3
https://en.wikipedia.org/w/index.php?title=Convolutional_sparse_coding&diff=1226227054&oldid=prev
2603:8000:D300:3650:AD45:E6EE:7A8:8B7C: sp
2024-05-29T09:32:15Z
<p>sp</p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 09:32, 29 May 2024</td>
</tr><tr>
<td colspan="2" class="diff-lineno">Line 95:</td>
<td colspan="2" class="diff-lineno">Line 95:</td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>By imposing the sparsity prior on the inherent structure of <math display="inline">\mathbf{x}</math>, strong conditions for a unique representation and feasible methods for estimating it are obtained. Similarly, such a constraint can be applied to the representation itself, generating a cascade of sparse representations: each code is defined by a few atoms of a given set of convolutional dictionaries.</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>By imposing the sparsity prior on the inherent structure of <math display="inline">\mathbf{x}</math>, strong conditions for a unique representation and feasible methods for estimating it are obtained. Similarly, such a constraint can be applied to the representation itself, generating a cascade of sparse representations: each code is defined by a few atoms of a given set of convolutional dictionaries.</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker" data-marker="−"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>Based on these criteria, yet another extension denominated <del style="font-weight: bold; text-decoration: none;">mlti</del>-layer convolutional sparse coding (ML-CSC) is proposed. A set of analytical dictionaries <math display="inline">\{\mathbf{D}\}_{k=1}^{K}</math> can be efficiently designed, where sparse representations at each layer <math display="inline">\{\mathbf{\Gamma}\}_{k=1}^{K}</math> are guaranteed by imposing the sparsity prior over the dictionaries themselves.<ref name="papyan_2017_convolutional" /> In other words, by considering the dictionaries to be stride convolutional matrices, i.e., atoms of the local dictionaries shift by <math display="inline">m</math> elements instead of a single one, where <math display="inline">m</math> corresponds to the number of channels in the previous layer, it is guaranteed that the <math display="inline">\|\mathbf{\Gamma}\|_{0,\infty}</math> norm of the representations along the layers is bounded.</div></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>Based on these criteria, yet another extension denominated <ins style="font-weight: bold; text-decoration: none;">multi</ins>-layer convolutional sparse coding (ML-CSC) is proposed. A set of analytical dictionaries <math display="inline">\{\mathbf{D}\}_{k=1}^{K}</math> can be efficiently designed, where sparse representations at each layer <math display="inline">\{\mathbf{\Gamma}\}_{k=1}^{K}</math> are guaranteed by imposing the sparsity prior over the dictionaries themselves.<ref name="papyan_2017_convolutional" /> In other words, by considering the dictionaries to be stride convolutional matrices, i.e., atoms of the local dictionaries shift by <math display="inline">m</math> elements instead of a single one, where <math display="inline">m</math> corresponds to the number of channels in the previous layer, it is guaranteed that the <math display="inline">\|\mathbf{\Gamma}\|_{0,\infty}</math> norm of the representations along the layers is bounded.</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>For example, given the dictionaries <math display="inline">\mathbf{D}_{1} \in \mathbb{R}^{N\times Nm_{1}}, \mathbf{D}_{2} \in \mathbb{R}^{Nm_{1}\times Nm_{2}}</math>, the signal is modeled as <math display="inline">\mathbf{x}=\mathbf{D}_{1}\mathbf{\Gamma}_{1}= \mathbf{D}_{1}(\mathbf{D}_{2}\mathbf{\Gamma}_{2})</math>, where <math display="inline">\mathbf{\Gamma}_{1}</math> is its sparse code, and <math display="inline">\mathbf{\Gamma}_{2}</math> is the sparse code of <math display="inline">\mathbf{\Gamma}_{1}</math>. Then, the estimation of each representation is formulated as an optimization problem, for the noise-free and the noise-corrupted scenarios respectively. Assuming <math display="inline">\mathbf{\Gamma}_{0}=\mathbf{x}</math>: <math display="block">\begin{aligned}</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>For example, given the dictionaries <math display="inline">\mathbf{D}_{1} \in \mathbb{R}^{N\times Nm_{1}}, \mathbf{D}_{2} \in \mathbb{R}^{Nm_{1}\times Nm_{2}}</math>, the signal is modeled as <math display="inline">\mathbf{x}=\mathbf{D}_{1}\mathbf{\Gamma}_{1}= \mathbf{D}_{1}(\mathbf{D}_{2}\mathbf{\Gamma}_{2})</math>, where <math display="inline">\mathbf{\Gamma}_{1}</math> is its sparse code, and <math display="inline">\mathbf{\Gamma}_{2}</math> is the sparse code of <math display="inline">\mathbf{\Gamma}_{1}</math>. Then, the estimation of each representation is formulated as an optimization problem, for the noise-free and the noise-corrupted scenarios respectively. Assuming <math display="inline">\mathbf{\Gamma}_{0}=\mathbf{x}</math>: <math display="block">\begin{aligned}</div></td>
</tr>
</table>
2603:8000:D300:3650:AD45:E6EE:7A8:8B7C
https://en.wikipedia.org/w/index.php?title=Convolutional_sparse_coding&diff=1208623128&oldid=prev
Cyfal: spelling - plural (WP:Typo Team)
2024-02-18T07:44:25Z
<p>spelling - plural (<a href="/wiki/Wikipedia:Typo_Team" title="Wikipedia:Typo Team">WP:Typo Team</a>)</p>
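The two-layer ML-CSC cascade described in the revision above (a signal synthesized as x = D1 Γ1 with Γ1 = D2 Γ2) can be sketched numerically. This is a minimal illustration only: the random dense dictionaries and the chosen sparsity level are assumptions for demonstration, not the structured stride-convolutional dictionaries the model actually requires.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m1, m2 = 8, 2, 3  # signal length and per-layer channel counts (illustrative)

# Dimensions follow the text: D1 in R^{N x N*m1}, D2 in R^{N*m1 x N*m2}.
D1 = rng.standard_normal((N, N * m1))
D2 = rng.standard_normal((N * m1, N * m2))

# A sparse deepest code Gamma2: only a few active atoms.
Gamma2 = np.zeros(N * m2)
support = rng.choice(N * m2, size=3, replace=False)
Gamma2[support] = rng.standard_normal(3)

Gamma1 = D2 @ Gamma2  # intermediate representation (itself meant to be sparse)
x = D1 @ Gamma1       # synthesized signal

# The cascade is consistent: synthesizing through both layers at once
# reproduces the same signal, x = D1 (D2 Gamma2).
assert np.allclose(x, D1 @ (D2 @ Gamma2))
```

In the actual ML-CSC model the Γ1 produced this way is only sparse because the dictionaries are structured; with the random dictionaries used here it is generally dense, which is exactly why the sparsity prior must be imposed on the dictionaries themselves.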
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 07:44, 18 February 2024</td>
</tr><tr>
<td colspan="2" class="diff-lineno">Line 95:</td>
<td colspan="2" class="diff-lineno">Line 95:</td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>By imposing the sparsity prior on the inherent structure of <math display="inline">\mathbf{x}</math>, strong conditions for a unique representation and feasible methods for estimating it are obtained. Similarly, such a constraint can be applied to the representation itself, generating a cascade of sparse representations: each code is defined by a few atoms of a given set of convolutional dictionaries.</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>By imposing the sparsity prior on the inherent structure of <math display="inline">\mathbf{x}</math>, strong conditions for a unique representation and feasible methods for estimating it are obtained. Similarly, such a constraint can be applied to the representation itself, generating a cascade of sparse representations: each code is defined by a few atoms of a given set of convolutional dictionaries.</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker" data-marker="−"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>Based on <del style="font-weight: bold; text-decoration: none;">this</del> criteria, yet another extension denominated mlti-layer convolutional sparse coding (ML-CSC) is proposed. A set of analytical dictionaries <math display="inline">\{\mathbf{D}\}_{k=1}^{K}</math> can be efficiently designed, where sparse representations at each layer <math display="inline">\{\mathbf{\Gamma}\}_{k=1}^{K}</math> are guaranteed by imposing the sparsity prior over the dictionaries themselves.<ref name="papyan_2017_convolutional" /> In other words, by considering the dictionaries to be stride convolutional matrices, i.e., atoms of the local dictionaries shift by <math display="inline">m</math> elements instead of a single one, where <math display="inline">m</math> corresponds to the number of channels in the previous layer, it is guaranteed that the <math display="inline">\|\mathbf{\Gamma}\|_{0,\infty}</math> norm of the representations along the layers is bounded.</div></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>Based on <ins style="font-weight: bold; text-decoration: none;">these</ins> criteria, yet another extension denominated mlti-layer convolutional sparse coding (ML-CSC) is proposed. A set of analytical dictionaries <math display="inline">\{\mathbf{D}\}_{k=1}^{K}</math> can be efficiently designed, where sparse representations at each layer <math display="inline">\{\mathbf{\Gamma}\}_{k=1}^{K}</math> are guaranteed by imposing the sparsity prior over the dictionaries themselves.<ref name="papyan_2017_convolutional" /> In other words, by considering the dictionaries to be stride convolutional matrices, i.e., atoms of the local dictionaries shift by <math display="inline">m</math> elements instead of a single one, where <math display="inline">m</math> corresponds to the number of channels in the previous layer, it is guaranteed that the <math display="inline">\|\mathbf{\Gamma}\|_{0,\infty}</math> norm of the representations along the layers is bounded.</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>For example, given the dictionaries <math display="inline">\mathbf{D}_{1} \in \mathbb{R}^{N\times Nm_{1}}, \mathbf{D}_{2} \in \mathbb{R}^{Nm_{1}\times Nm_{2}}</math>, the signal is modeled as <math display="inline">\mathbf{x}=\mathbf{D}_{1}\mathbf{\Gamma}_{1}= \mathbf{D}_{1}(\mathbf{D}_{2}\mathbf{\Gamma}_{2})</math>, where <math display="inline">\mathbf{\Gamma}_{1}</math> is its sparse code, and <math display="inline">\mathbf{\Gamma}_{2}</math> is the sparse code of <math display="inline">\mathbf{\Gamma}_{1}</math>. Then, the estimation of each representation is formulated as an optimization problem, for the noise-free and the noise-corrupted scenarios respectively. Assuming <math display="inline">\mathbf{\Gamma}_{0}=\mathbf{x}</math>: <math display="block">\begin{aligned}</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>For example, given the dictionaries <math display="inline">\mathbf{D}_{1} \in \mathbb{R}^{N\times Nm_{1}}, \mathbf{D}_{2} \in \mathbb{R}^{Nm_{1}\times Nm_{2}}</math>, the signal is modeled as <math display="inline">\mathbf{x}=\mathbf{D}_{1}\mathbf{\Gamma}_{1}= \mathbf{D}_{1}(\mathbf{D}_{2}\mathbf{\Gamma}_{2})</math>, where <math display="inline">\mathbf{\Gamma}_{1}</math> is its sparse code, and <math display="inline">\mathbf{\Gamma}_{2}</math> is the sparse code of <math display="inline">\mathbf{\Gamma}_{1}</math>. Then, the estimation of each representation is formulated as an optimization problem, for the noise-free and the noise-corrupted scenarios respectively. Assuming <math display="inline">\mathbf{\Gamma}_{0}=\mathbf{x}</math>: <math display="block">\begin{aligned}</div></td>
</tr>
</table>
Cyfal
https://en.wikipedia.org/w/index.php?title=Convolutional_sparse_coding&diff=1189773110&oldid=prev
M. A Matawalle: Added free to read link in citations with OAbot #oabot
2023-12-13T22:43:21Z
<p>Added free to read link in citations with <a href="/wiki/Wikipedia:OABOT" class="mw-redirect" title="Wikipedia:OABOT">OAbot</a> #oabot</p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 22:43, 13 December 2023</td>
</tr><tr>
<td colspan="2" class="diff-lineno">Line 144:</td>
<td colspan="2" class="diff-lineno">Line 144:</td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>As a practical example, an efficient image inpainting method for color images via the CSC model is shown.<ref name="wohlberg_2016_convolutional" /> Consider the three-channel dictionary <math display="inline">\mathbf{D} \in \mathbb{R}^{N \times M \times 3}</math>, where <math display="inline">\mathbf{d}_{c,m}</math> denotes the <math display="inline">m</math>-th atom at channel <math display="inline">c</math>, which represents the signal <math display="inline">\mathbf{x}</math> by a single cross-channel sparse representation <math display="inline">\mathbf{\Gamma}</math>, with stripes denoted as <math display="inline">\mathbf{z}_{i}</math>. Given an observation <math display="inline">\mathbf{y}=\{\mathbf{y}_{r}, \mathbf{y}_{g}, \mathbf{y}_{b}\}</math>, where randomly chosen channels at unknown pixel locations are fixed to zero, in a similar way to impulse noise, the problem is formulated as: <math display="block">\begin{aligned}</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>As a practical example, an efficient image inpainting method for color images via the CSC model is shown.<ref name="wohlberg_2016_convolutional" /> Consider the three-channel dictionary <math display="inline">\mathbf{D} \in \mathbb{R}^{N \times M \times 3}</math>, where <math display="inline">\mathbf{d}_{c,m}</math> denotes the <math display="inline">m</math>-th atom at channel <math display="inline">c</math>, which represents the signal <math display="inline">\mathbf{x}</math> by a single cross-channel sparse representation <math display="inline">\mathbf{\Gamma}</math>, with stripes denoted as <math display="inline">\mathbf{z}_{i}</math>. Given an observation <math display="inline">\mathbf{y}=\{\mathbf{y}_{r}, \mathbf{y}_{g}, \mathbf{y}_{b}\}</math>, where randomly chosen channels at unknown pixel locations are fixed to zero, in a similar way to impulse noise, the problem is formulated as: <math display="block">\begin{aligned}</div></td>
</tr>
<tr>
<td class="diff-marker" data-marker="−"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div> \{\mathbf{\hat{z}}_{i}\}&=\underset{\{\mathbf{z}_{i}\}}{\text{argmin}}\frac{1}{2}\sum_{c}\bigg\|\sum_{i}\mathbf{d}_{c,i}\ast \mathbf{z}_{i} -\mathbf{y}_{c}\bigg\|_{2}^{2}+\lambda \sum_{i}\|\mathbf{z}_{i}\|_{1}.\end{aligned}</math> By means of ADMM,<ref>{{cite journal |last1=Boyd |first1=Stephen |title=Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers |journal=Foundations and Trends in Machine Learning |date=2010 |volume=3 |issue=1 |pages=1–122 |doi=10.1561/2200000016 |s2cid=51789432 |issn=1935-8237}}</ref> the cost function is decoupled into simpler sub-problems, allowing an efficient estimation of <math display="inline">\mathbf{\Gamma}</math>. Algorithm 2 describes the procedure, where <math display="inline">\hat{D}_{c,m}</math> is the DFT representation of <math display="inline">D_{c,m}</math>, the convolutional matrix for the term <math display="inline">\mathbf{d}_{c,i}\ast \mathbf{z}_{i}</math>. Likewise, <math display="inline">\hat{\mathbf{x}}_{m}</math> and <math display="inline">\hat{\mathbf{z}}_{m}</math> correspond to the DFT representations of <math display="inline">\mathbf{x}_{m}</math> and <math display="inline">\mathbf{z}_{m}</math>, respectively, <math display="inline">\mathcal{S}_{\beta}(.)</math> corresponds to the soft-thresholding function with argument <math display="inline">\beta</math>, and the <math display="inline">\ell_{1,2}</math> norm is defined as the <math display="inline">\ell_{2}</math> norm along the channel dimension <math display="inline">c</math> followed by the <math display="inline">\ell_{1}</math> norm along the spatial dimension <math display="inline">m</math>.
The reader is referred to (<ref name="wohlberg_2016_convolutional" />, Section II) for details on the ADMM implementation and the dictionary learning procedure.</div></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div> \{\mathbf{\hat{z}}_{i}\}&=\underset{\{\mathbf{z}_{i}\}}{\text{argmin}}\frac{1}{2}\sum_{c}\bigg\|\sum_{i}\mathbf{d}_{c,i}\ast \mathbf{z}_{i} -\mathbf{y}_{c}\bigg\|_{2}^{2}+\lambda \sum_{i}\|\mathbf{z}_{i}\|_{1}.\end{aligned}</math> By means of ADMM,<ref>{{cite journal |last1=Boyd |first1=Stephen |title=Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers |journal=Foundations and Trends in Machine Learning |date=2010 |volume=3 |issue=1 |pages=1–122 |doi=10.1561/2200000016 |s2cid=51789432 |issn=1935-8237<ins style="font-weight: bold; text-decoration: none;">|citeseerx=10.1.1.360.1664 </ins>}}</ref> the cost function is decoupled into simpler sub-problems, allowing an efficient estimation of <math display="inline">\mathbf{\Gamma}</math>. Algorithm 2 describes the procedure, where <math display="inline">\hat{D}_{c,m}</math> is the DFT representation of <math display="inline">D_{c,m}</math>, the convolutional matrix for the term <math display="inline">\mathbf{d}_{c,i}\ast \mathbf{z}_{i}</math>. Likewise, <math display="inline">\hat{\mathbf{x}}_{m}</math> and <math display="inline">\hat{\mathbf{z}}_{m}</math> correspond to the DFT representations of <math display="inline">\mathbf{x}_{m}</math> and <math display="inline">\mathbf{z}_{m}</math>, respectively, <math display="inline">\mathcal{S}_{\beta}(.)</math> corresponds to the soft-thresholding function with argument <math display="inline">\beta</math>, and the <math display="inline">\ell_{1,2}</math> norm is defined as the <math display="inline">\ell_{2}</math> norm along the channel dimension <math display="inline">c</math> followed by the <math display="inline">\ell_{1}</math> norm along the spatial dimension <math display="inline">m</math>.
The reader is referred to (<ref name="wohlberg_2016_convolutional" />, Section II) for details on the ADMM implementation and the dictionary learning procedure.</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>'''Algorithm 2: Color image inpainting via the convolutional sparse coding model.'''</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>'''Algorithm 2: Color image inpainting via the convolutional sparse coding model.'''</div></td>
</tr>
</table>
M. A Matawalle
https://en.wikipedia.org/w/index.php?title=Convolutional_sparse_coding&diff=1173314818&oldid=prev
Citation bot: Alter: title, template type. Add: chapter-url, chapter. Removed or converted URL. Removed parameters. Some additions/deletions were parameter name changes. | Use this bot. Report bugs. | Suggested by Abductive | #UCB_webform 959/3844
2023-09-01T17:06:24Z
<p>Alter: title, template type. Add: chapter-url, chapter. Removed or converted URL. Removed parameters. Some additions/deletions were parameter name changes. | <a href="/wiki/Wikipedia:UCB" class="mw-redirect" title="Wikipedia:UCB">Use this bot</a>. <a href="/wiki/Wikipedia:DBUG" class="mw-redirect" title="Wikipedia:DBUG">Report bugs</a>. | Suggested by Abductive | #UCB_webform 959/3844</p>
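The ADMM-based inpainting procedure in the revision above repeatedly applies the soft-thresholding operator S_β(·). A minimal sketch of that operator, assuming the standard elementwise form (the proximal map of the ℓ1 norm); the function name and test vector are illustrative, not from the cited algorithm:

```python
import numpy as np

def soft_threshold(v, beta):
    """Elementwise S_beta(v) = sign(v) * max(|v| - beta, 0),
    the proximal operator of beta * ||.||_1 used in each ADMM iteration."""
    return np.sign(v) * np.maximum(np.abs(v) - beta, 0.0)

# Entries with magnitude <= beta are zeroed; larger ones shrink toward zero,
# which is what drives sparsity in the codes z_i.
v = np.array([-2.0, -0.3, 0.0, 0.5, 1.5])
out = soft_threshold(v, 0.5)
assert np.allclose(out, [-1.5, 0.0, 0.0, 0.0, 1.0])
```

The ℓ1,2 variant mentioned in the text would instead threshold the ℓ2 norm taken across the channel dimension before shrinking, but the elementwise case above is the building block.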
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 17:06, 1 September 2023</td>
</tr><tr>
<td colspan="2" class="diff-lineno">Line 12:</td>
<td colspan="2" class="diff-lineno">Line 12:</td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>== Overview ==</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>== Overview ==</div></td>
</tr>
<tr>
<td class="diff-marker" data-marker="−"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>Given a signal of interest <math display="inline">\mathbf{x}\in \mathbb{R}^{N}</math> and a redundant dictionary <math display="inline">\mathbf{D}\in\mathbb{R}^{N\times M}, M\gg N</math>, the sparse coding problem consists of retrieving a sparse vector <math display="inline">\mathbf{\Gamma}\in \mathbb{R}^{M}</math>, denominated the sparse representation of <math display="inline">\mathbf{x}</math>, such that <math display="inline">\mathbf{x}= \mathbf{D}\mathbf{\Gamma}</math>. Intuitively, this implies <math display="inline">\mathbf{x}</math> is expressed as a linear combination of a small number of elements in <math display="inline">\mathbf{D}</math>. The global sparsity constraint prior has been shown to be useful in many ill-posed inverse problems such as image inpainting, super-resolution, and coding.<ref>{{cite journal |last1=Jianchao Yang |last2=Wright |first2=John |last3=Huang |first3=Thomas S |last4=Yi Ma |title=Image Super-Resolution Via Sparse Representation |journal=IEEE Transactions on Image Processing |date=November 2010 |volume=19 |issue=11 |pages=2861–2873 |doi=10.1109/TIP.2010.2050625|pmid=20483687 |bibcode=2010ITIP...19.2861Y |s2cid=8187730 }}</ref><ref>{{cite journal |last1=Wetzstein |first1=Gordon |last2=Heidrich |first2=Wolfgang |last3=Heide |first3=Felix |title=Fast and Flexible Convolutional Sparse Coding |date=2015 |pages=5135–5143 |url=https://www.cv-foundation.org/openaccess/content_cvpr_2015/html/Heide_Fast_and_Flexible_2015_CVPR_paper.html}}</ref><ref>{{cite <del style="font-weight: bold; text-decoration: none;">journal</del> |last1=Wohlberg |first1=Brendt |title=SPORCO: A Python package for standard and convolutional sparse representations<del style="font-weight: bold; text-decoration: none;"> |journal=Proceedings of the 16th Python in Science Conference</del> |date=2017 |pages=1–8 |doi=10.25080/shinma-7f4c6e7-001 |url=http://conference.scipy.org/proceedings/scipy2017/brendt_wohlberg.html|doi-access=free }}</ref> It has been of particular interest for image understanding and computer vision tasks involving natural images, allowing redundant dictionaries to be efficiently inferred.<ref>{{cite book |last1=Mairal |first1=Julien |last2=Bach |first2=Francis |last3=Ponce |first3=Jean |last4=Sapiro |first4=Guillermo |title=Proceedings of the 26th Annual International Conference on Machine Learning |chapter=Online dictionary learning for sparse coding |date=2009 |pages=689–696 |doi=10.1145/1553374.1553463 |chapter-url=https://dl.acm.org/citation.cfm?id=1553463 |publisher=ACM|isbn=9781605585161 |hdl=11299/180118 |s2cid=7027533 |hdl-access=free }}</ref><ref name="papyan_2017_working">{{cite journal |last1=Papyan |first1=Vardan |last2=Sulam |first2=Jeremias |last3=Elad |first3=Michael |title=Working Locally Thinking Globally: Theoretical Guarantees for Convolutional Sparse Coding |journal=IEEE Transactions on Signal Processing |date=1 November 2017 |volume=65 |issue=21 |pages=5687–5701 |doi=10.1109/TSP.2017.2733447|bibcode=2017ITSP...65.5687P |arxiv=1707.06066 |s2cid=11214580 }}</ref><ref name="wohlberg_2016_convolutional">{{cite book |last1=Wohlberg |first1=Brendt |title=2016 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI) |chapter=Convolutional sparse representation of color images |pages=57–60 |date=6–8 March 2016 |doi=10.1109/SSIAI.2016.7459174 |isbn=978-1-4673-9919-7 |s2cid=9923572 }}</ref></div></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>Given a signal of interest <math display="inline">\mathbf{x}\in \mathbb{R}^{N}</math> and a redundant dictionary <math display="inline">\mathbf{D}\in\mathbb{R}^{N\times M}, M\gg N</math>, the sparse coding problem consists of retrieving a sparse vector <math display="inline">\mathbf{\Gamma}\in \mathbb{R}^{M}</math>, denominated the sparse representation of <math display="inline">\mathbf{x}</math>, such that <math display="inline">\mathbf{x}= \mathbf{D}\mathbf{\Gamma}</math>. Intuitively, this implies <math display="inline">\mathbf{x}</math> is expressed as a linear combination of a small number of elements in <math display="inline">\mathbf{D}</math>. The global sparsity constraint prior has been shown to be useful in many ill-posed inverse problems such as image inpainting, super-resolution, and coding.<ref>{{cite journal |last1=Jianchao Yang |last2=Wright |first2=John |last3=Huang |first3=Thomas S |last4=Yi Ma |title=Image Super-Resolution Via Sparse Representation |journal=IEEE Transactions on Image Processing |date=November 2010 |volume=19 |issue=11 |pages=2861–2873 |doi=10.1109/TIP.2010.2050625|pmid=20483687 |bibcode=2010ITIP...19.2861Y |s2cid=8187730 }}</ref><ref>{{cite journal |last1=Wetzstein |first1=Gordon |last2=Heidrich |first2=Wolfgang |last3=Heide |first3=Felix |title=Fast and Flexible Convolutional Sparse Coding |date=2015 |pages=5135–5143 |url=https://www.cv-foundation.org/openaccess/content_cvpr_2015/html/Heide_Fast_and_Flexible_2015_CVPR_paper.html}}</ref><ref>{{cite <ins style="font-weight: bold; text-decoration: none;">book</ins> |last1=Wohlberg |first1=Brendt |title<ins style="font-weight: bold; text-decoration: none;">=Proceedings of the 16th Python in Science Conference |chapter</ins>=SPORCO: A Python package for standard and convolutional sparse representations |date=2017 |pages=1–8 |doi=10.25080/shinma-7f4c6e7-001 |<ins style="font-weight: bold; text-decoration: none;">chapter-</ins>url=http://conference.scipy.org/proceedings/scipy2017/brendt_wohlberg.html|doi-access=free }}</ref> It has been of particular interest for image understanding and computer vision tasks involving natural images, allowing redundant dictionaries to be efficiently inferred.<ref>{{cite book |last1=Mairal |first1=Julien |last2=Bach |first2=Francis |last3=Ponce |first3=Jean |last4=Sapiro |first4=Guillermo |title=Proceedings of the 26th Annual International Conference on Machine Learning |chapter=Online dictionary learning for sparse coding |date=2009 |pages=689–696 |doi=10.1145/1553374.1553463 |chapter-url=https://dl.acm.org/citation.cfm?id=1553463 |publisher=ACM|isbn=9781605585161 |hdl=11299/180118 |s2cid=7027533 |hdl-access=free }}</ref><ref name="papyan_2017_working">{{cite journal |last1=Papyan |first1=Vardan |last2=Sulam |first2=Jeremias |last3=Elad |first3=Michael |title=Working Locally Thinking Globally: Theoretical Guarantees for Convolutional Sparse Coding |journal=IEEE Transactions on Signal Processing |date=1 November 2017 |volume=65 |issue=21 |pages=5687–5701 |doi=10.1109/TSP.2017.2733447|bibcode=2017ITSP...65.5687P |arxiv=1707.06066 |s2cid=11214580 }}</ref><ref name="wohlberg_2016_convolutional">{{cite book |last1=Wohlberg |first1=Brendt |title=2016 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI) |chapter=Convolutional sparse representation of color images |pages=57–60 |date=6–8 March 2016 |doi=10.1109/SSIAI.2016.7459174 |isbn=978-1-4673-9919-7 |s2cid=9923572 }}</ref></div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>As an extension to the global sparsity constraint, recent works in the literature have revisited the model to reach a more profound understanding of its uniqueness and stability conditions.<ref name="wohlberg_2016_convolutional"/> Interestingly, by imposing a local sparsity prior in <math display="inline">\mathbf{\Gamma}</math>, meaning that its independent patches can be interpreted as sparse vectors themselves, the structure in <math display="inline">\mathbf{D}</math> can be understood as a "local" dictionary operating over each independent patch. This model extension is termed convolutional sparse coding (CSC) and drastically reduces the burden of estimating signal representations while being characterized by stronger uniqueness and stability conditions. Furthermore, it allows for <math display="inline">\mathbf{\Gamma}</math> to be efficiently estimated via pursuit algorithms such as orthogonal matching pursuit (OMP) and basis pursuit (BP), while performing in a local fashion.<ref name="papyan_2017_working" /></div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>As an extension to the global sparsity constraint, recent works in the literature have revisited the model to reach a more profound understanding of its uniqueness and stability conditions.<ref name="wohlberg_2016_convolutional"/> Interestingly, by imposing a local sparsity prior in <math display="inline">\mathbf{\Gamma}</math>, meaning that its independent patches can be interpreted as sparse vectors themselves, the structure in <math display="inline">\mathbf{D}</math> can be understood as a "local" dictionary operating over each independent patch. This model extension is termed convolutional sparse coding (CSC) and drastically reduces the burden of estimating signal representations while being characterized by stronger uniqueness and stability conditions. Furthermore, it allows for <math display="inline">\mathbf{\Gamma}</math> to be efficiently estimated via pursuit algorithms such as orthogonal matching pursuit (OMP) and basis pursuit (BP), while performing in a local fashion.<ref name="papyan_2017_working" /></div></td>
</tr>
</table>Citation bothttps://en.wikipedia.org/w/index.php?title=Convolutional_sparse_coding&diff=1169295196&oldid=prevHeadbomb: /* top */clean up, added orphan tag2023-08-08T06:22:26Z<p><span class="autocomment">top: </span>clean up, added <a href="/wiki/CAT:O" class="mw-redirect" title="CAT:O">orphan</a> tag</p>
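The greedy pursuit named in the paragraph above, orthogonal matching pursuit, can be sketched in a few lines of NumPy. This is an illustrative implementation only, not code from the cited works; the function name `omp` and the dense-dictionary setting are our own simplifying assumptions.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: greedily build a k-sparse gamma
    with x ~= D @ gamma, re-fitting coefficients by least squares."""
    N, M = D.shape
    gamma = np.zeros(M)
    residual = x.copy()
    support = []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Least-squares re-fit on the chosen support keeps the residual
        # orthogonal to every selected atom (hence "orthogonal" MP).
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    gamma[support] = coeffs
    return gamma
```

For a redundant dictionary the selection step is only a heuristic, but under low mutual coherence and small enough k it provably recovers the true support, which is the kind of uniqueness condition the surrounding text alludes to.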
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 06:22, 8 August 2023</td>
</tr><tr>
<td colspan="2" class="diff-lineno">Line 1:</td>
<td colspan="2" class="diff-lineno">Line 1:</td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{short description|Neural network coding model}}</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{short description|Neural network coding model}}</div></td>
</tr>
<tr>
<td colspan="2" class="diff-empty diff-side-deleted"></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{Multiple issues|</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{Multiple issues|</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{context|date=October 2020}}</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{context|date=October 2020}}</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{technical|date=October 2020}}</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{technical|date=October 2020}}</div></td>
</tr>
<tr>
<td colspan="2" class="diff-empty diff-side-deleted"></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>{{Orphan|date=August 2023}}</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>}}</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>}}</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
</table>Headbombhttps://en.wikipedia.org/w/index.php?title=Convolutional_sparse_coding&diff=1169012454&oldid=prevCitation bot: Alter: title, template type. Add: chapter-url, chapter. Removed or converted URL. Removed parameters. Some additions/deletions were parameter name changes. | Use this bot. Report bugs. | Suggested by Headbomb | Linked from Wikipedia:WikiProject_Academic_Journals/Journals_cited_by_Wikipedia/Sandbox3 | #UCB_webform_linked 417/23062023-08-06T13:49:48Z<p>Alter: title, template type. Add: chapter-url, chapter. Removed or converted URL. Removed parameters. Some additions/deletions were parameter name changes. | <a href="/wiki/Wikipedia:UCB" class="mw-redirect" title="Wikipedia:UCB">Use this bot</a>. <a href="/wiki/Wikipedia:DBUG" class="mw-redirect" title="Wikipedia:DBUG">Report bugs</a>. | Suggested by Headbomb | Linked from Wikipedia:WikiProject_Academic_Journals/Journals_cited_by_Wikipedia/Sandbox3 | #UCB_webform_linked 417/2306</p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 13:49, 6 August 2023</td>
</tr><tr>
<td colspan="2" class="diff-lineno">Line 10:</td>
<td colspan="2" class="diff-lineno">Line 10:</td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>== Overview ==</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>== Overview ==</div></td>
</tr>
<tr>
<td class="diff-marker" data-marker="−"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>Given a signal of interest <math display="inline">\mathbf{x}\in \mathbb{R}^{N}</math> and a redundant dictionary <math display="inline">\mathbf{D}\in\mathbb{R}^{N\times M}, M\gg N</math>, the sparse coding problem consists of retrieving a sparse vector <math display="inline">\mathbf{\Gamma}\in \mathbb{R}^{M}</math>, termed the sparse representation of <math display="inline">\mathbf{x}</math>, such that <math display="inline">\mathbf{x}= \mathbf{D}\mathbf{\Gamma}</math>. Intuitively, this implies <math display="inline">\mathbf{x}</math> is expressed as a linear combination of a small number of elements in <math display="inline">\mathbf{D}</math>. The global sparsity prior has been shown to be useful in many ill-posed inverse problems such as image inpainting, super-resolution, and coding.<ref>{{cite journal |last1=Jianchao Yang |last2=Wright |first2=John |last3=Huang |first3=Thomas S |last4=Yi Ma |title=Image Super-Resolution Via Sparse Representation |journal=IEEE Transactions on Image Processing |date=November 2010 |volume=19 |issue=11 |pages=2861–2873 |doi=10.1109/TIP.2010.2050625|pmid=20483687 |bibcode=2010ITIP...19.2861Y |s2cid=8187730 }}</ref><ref>{{cite journal |last1=Wetzstein |first1=Gordon |last2=Heidrich |first2=Wolfgang |last3=Heide |first3=Felix |title=Fast and Flexible Convolutional Sparse Coding |date=2015 |pages=5135–5143 |url=https://www.cv-foundation.org/openaccess/content_cvpr_2015/html/Heide_Fast_and_Flexible_2015_CVPR_paper.html}}</ref><ref>{{cite journal |last1=Wohlberg |first1=Brendt |title=SPORCO: A Python package for standard and convolutional sparse representations |journal=Proceedings of the 16th Python in Science Conference |date=2017 |pages=1–8 |doi=10.25080/shinma-7f4c6e7-001 
|url=http://conference.scipy.org/proceedings/scipy2017/brendt_wohlberg.html|doi-access=free }}</ref> It has been of particular interest for image understanding and computer vision tasks involving natural images, allowing redundant dictionaries to be efficiently inferred.<ref>{{cite <del style="font-weight: bold; text-decoration: none;">journal</del> |last1=Mairal |first1=Julien |last2=Bach |first2=Francis |last3=Ponce |first3=Jean |last4=Sapiro |first4=Guillermo |title<del style="font-weight: bold; text-decoration: none;">=Online Dictionary Learning for Sparse Coding |journal</del>=Proceedings of the 26th Annual International Conference on Machine Learning |date=2009 |pages=689–696 |doi=10.1145/1553374.1553463 |url=https://dl.acm.org/citation.cfm?id=1553463 |publisher=ACM|isbn=9781605585161 |hdl=11299/180118 |s2cid=7027533 |hdl-access=free }}</ref><ref name="papyan_2017_working">{{cite journal |last1=Papyan |first1=Vardan |last2=Sulam |first2=Jeremias |last3=Elad |first3=Michael |title=Working Locally Thinking Globally: Theoretical Guarantees for Convolutional Sparse Coding |journal=IEEE Transactions on Signal Processing |date=1 November 2017 |volume=65 |issue=21 |pages=5687–5701 |doi=10.1109/TSP.2017.2733447|bibcode=2017ITSP...65.5687P |arxiv=1707.06066 |s2cid=11214580 }}</ref><ref name="wohlberg_2016_convolutional">{{cite book |last1=Wohlberg |first1=Brendt |title=2016 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI) |chapter=Convolutional sparse representation of color images |pages=57–60 |date=6–8 March 2016 |doi=10.1109/SSIAI.2016.7459174 |isbn=978-1-4673-9919-7 |s2cid=9923572 }}</ref></div></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>Given a signal of interest <math display="inline">\mathbf{x}\in \mathbb{R}^{N}</math> and a redundant dictionary <math display="inline">\mathbf{D}\in\mathbb{R}^{N\times M}, M\gg N</math>, the sparse coding problem consists of retrieving a sparse vector <math display="inline">\mathbf{\Gamma}\in \mathbb{R}^{M}</math>, termed the sparse representation of <math display="inline">\mathbf{x}</math>, such that <math display="inline">\mathbf{x}= \mathbf{D}\mathbf{\Gamma}</math>. Intuitively, this implies <math display="inline">\mathbf{x}</math> is expressed as a linear combination of a small number of elements in <math display="inline">\mathbf{D}</math>. The global sparsity prior has been shown to be useful in many ill-posed inverse problems such as image inpainting, super-resolution, and coding.<ref>{{cite journal |last1=Jianchao Yang |last2=Wright |first2=John |last3=Huang |first3=Thomas S |last4=Yi Ma |title=Image Super-Resolution Via Sparse Representation |journal=IEEE Transactions on Image Processing |date=November 2010 |volume=19 |issue=11 |pages=2861–2873 |doi=10.1109/TIP.2010.2050625|pmid=20483687 |bibcode=2010ITIP...19.2861Y |s2cid=8187730 }}</ref><ref>{{cite journal |last1=Wetzstein |first1=Gordon |last2=Heidrich |first2=Wolfgang |last3=Heide |first3=Felix |title=Fast and Flexible Convolutional Sparse Coding |date=2015 |pages=5135–5143 |url=https://www.cv-foundation.org/openaccess/content_cvpr_2015/html/Heide_Fast_and_Flexible_2015_CVPR_paper.html}}</ref><ref>{{cite journal |last1=Wohlberg |first1=Brendt |title=SPORCO: A Python package for standard and convolutional sparse representations |journal=Proceedings of the 16th Python in Science Conference |date=2017 |pages=1–8 |doi=10.25080/shinma-7f4c6e7-001 
|url=http://conference.scipy.org/proceedings/scipy2017/brendt_wohlberg.html|doi-access=free }}</ref> It has been of particular interest for image understanding and computer vision tasks involving natural images, allowing redundant dictionaries to be efficiently inferred.<ref>{{cite <ins style="font-weight: bold; text-decoration: none;">book</ins> |last1=Mairal |first1=Julien |last2=Bach |first2=Francis |last3=Ponce |first3=Jean |last4=Sapiro |first4=Guillermo |title=Proceedings of the 26th Annual International Conference on Machine Learning<ins style="font-weight: bold; text-decoration: none;"> |chapter=Online dictionary learning for sparse coding</ins> |date=2009 |pages=689–696 |doi=10.1145/1553374.1553463 |<ins style="font-weight: bold; text-decoration: none;">chapter-</ins>url=https://dl.acm.org/citation.cfm?id=1553463 |publisher=ACM|isbn=9781605585161 |hdl=11299/180118 |s2cid=7027533 |hdl-access=free }}</ref><ref name="papyan_2017_working">{{cite journal |last1=Papyan |first1=Vardan |last2=Sulam |first2=Jeremias |last3=Elad |first3=Michael |title=Working Locally Thinking Globally: Theoretical Guarantees for Convolutional Sparse Coding |journal=IEEE Transactions on Signal Processing |date=1 November 2017 |volume=65 |issue=21 |pages=5687–5701 |doi=10.1109/TSP.2017.2733447|bibcode=2017ITSP...65.5687P |arxiv=1707.06066 |s2cid=11214580 }}</ref><ref name="wohlberg_2016_convolutional">{{cite book |last1=Wohlberg |first1=Brendt |title=2016 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI) |chapter=Convolutional sparse representation of color images |pages=57–60 |date=6–8 March 2016 |doi=10.1109/SSIAI.2016.7459174 |isbn=978-1-4673-9919-7 |s2cid=9923572 }}</ref></div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>As an extension to the global sparsity constraint, recent works in the literature have revisited the model to reach a more profound understanding of its uniqueness and stability conditions.<ref name="wohlberg_2016_convolutional"/> Interestingly, by imposing a local sparsity prior in <math display="inline">\mathbf{\Gamma}</math>, meaning that its independent patches can be interpreted as sparse vectors themselves, the structure in <math display="inline">\mathbf{D}</math> can be understood as a "local" dictionary operating over each independent patch. This model extension is termed convolutional sparse coding (CSC) and drastically reduces the burden of estimating signal representations while being characterized by stronger uniqueness and stability conditions. Furthermore, it allows for <math display="inline">\mathbf{\Gamma}</math> to be efficiently estimated via pursuit algorithms such as orthogonal matching pursuit (OMP) and basis pursuit (BP), while performing in a local fashion.<ref name="papyan_2017_working" /></div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>As an extension to the global sparsity constraint, recent works in the literature have revisited the model to reach a more profound understanding of its uniqueness and stability conditions.<ref name="wohlberg_2016_convolutional"/> Interestingly, by imposing a local sparsity prior in <math display="inline">\mathbf{\Gamma}</math>, meaning that its independent patches can be interpreted as sparse vectors themselves, the structure in <math display="inline">\mathbf{D}</math> can be understood as a "local" dictionary operating over each independent patch. This model extension is termed convolutional sparse coding (CSC) and drastically reduces the burden of estimating signal representations while being characterized by stronger uniqueness and stability conditions. Furthermore, it allows for <math display="inline">\mathbf{\Gamma}</math> to be efficiently estimated via pursuit algorithms such as orthogonal matching pursuit (OMP) and basis pursuit (BP), while performing in a local fashion.<ref name="papyan_2017_working" /></div></td>
</tr>
</table>Citation bothttps://en.wikipedia.org/w/index.php?title=Convolutional_sparse_coding&diff=1166546778&oldid=prevCitation bot: Add: s2cid. | Use this bot. Report bugs. | #UCB_CommandLine2023-07-22T07:59:59Z<p>Add: s2cid. | <a href="/wiki/Wikipedia:UCB" class="mw-redirect" title="Wikipedia:UCB">Use this bot</a>. <a href="/wiki/Wikipedia:DBUG" class="mw-redirect" title="Wikipedia:DBUG">Report bugs</a>. | #UCB_CommandLine</p>
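Basis pursuit, in its Lagrangian (basis pursuit denoising) form min 1/2||x − DΓ||² + λ||Γ||₁, can be approached with iterative soft-thresholding. The sketch below is a minimal illustration of that formulation, not the algorithm of the cited papers; the names `ista` and `soft_threshold` are our own.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (elementwise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(D, x, lam, n_iter=200):
    """Iterative soft-thresholding for
    min_gamma 0.5 * ||x - D @ gamma||^2 + lam * ||gamma||_1."""
    L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the gradient
    gamma = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ gamma - x)   # gradient of the smooth data term
        gamma = soft_threshold(gamma - grad / L, lam / L)
    return gamma
```

The same shrinkage operator is the workhorse of the ADMM solver discussed later in this history, where it is applied in the sparse-code update step.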
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 07:59, 22 July 2023</td>
</tr><tr>
<td colspan="2" class="diff-lineno">Line 142:</td>
<td colspan="2" class="diff-lineno">Line 142:</td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>As a practical example, an efficient image inpainting method for color images via the CSC model is presented.<ref name="wohlberg_2016_convolutional" /> Consider the three-channel dictionary <math display="inline">\mathbf{D} \in \mathbb{R}^{N \times M \times 3}</math>, where <math display="inline">\mathbf{d}_{c,m}</math> denotes the <math display="inline">m</math>-th atom at channel <math display="inline">c</math>, which represents the signal <math display="inline">\mathbf{x}</math> by a single cross-channel sparse representation <math display="inline">\mathbf{\Gamma}</math>, with stripes denoted as <math display="inline">\mathbf{z}_{i}</math>. Given an observation <math display="inline">\mathbf{y}=\{\mathbf{y}_{r}, \mathbf{y}_{g}, \mathbf{y}_{b}\}</math>, where randomly chosen channels at unknown pixel locations are fixed to zero, in a similar way to impulse noise, the problem is formulated as: <math display="block">\begin{aligned}
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>As a practical example, an efficient image inpainting method for color images via the CSC model is presented.<ref name="wohlberg_2016_convolutional" /> Consider the three-channel dictionary <math display="inline">\mathbf{D} \in \mathbb{R}^{N \times M \times 3}</math>, where <math display="inline">\mathbf{d}_{c,m}</math> denotes the <math display="inline">m</math>-th atom at channel <math display="inline">c</math>, which represents the signal <math display="inline">\mathbf{x}</math> by a single cross-channel sparse representation <math display="inline">\mathbf{\Gamma}</math>, with stripes denoted as <math display="inline">\mathbf{z}_{i}</math>. Given an observation <math display="inline">\mathbf{y}=\{\mathbf{y}_{r}, \mathbf{y}_{g}, \mathbf{y}_{b}\}</math>, where randomly chosen channels at unknown pixel locations are fixed to zero, in a similar way to impulse noise, the problem is formulated as: <math display="block">\begin{aligned}
</tr>
<tr>
<td class="diff-marker" data-marker="−"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div> \{\mathbf{\hat{z}}_{i}\}&=\underset{\{\mathbf{z}_{i}\}}{\text{argmin}}\frac{1}{2}\sum_{c}\bigg\|\sum_{i}\mathbf{d}_{c,i}\ast \mathbf{z}_{i} -\mathbf{y}_{c}\bigg\|_{2}^{2}+\lambda \sum_{i}\|\mathbf{z}_{i}\|_{1}.\end{aligned}</math> By means of ADMM,<ref>{{cite journal |last1=Boyd |first1=Stephen |title=Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers |journal=Foundations and Trends in Machine Learning |date=2010 |volume=3 |issue=1 |pages=1–122 |doi=10.1561/2200000016 |issn=1935-8237}}</ref> the cost function is decoupled into simpler sub-problems, allowing an efficient <math display="inline">\mathbf{\Gamma}</math> estimation. Algorithm 2 describes the procedure, where <math display="inline">\hat{D}_{c,m}</math> is the DFT representation of <math display="inline">D_{c,m}</math>, the convolutional matrix for the term <math display="inline">\mathbf{d}_{c,i}\ast \mathbf{z}_{i}</math>. Likewise, <math display="inline">\hat{\mathbf{x}}_{m}</math> and <math display="inline">\hat{\mathbf{z}}_{m}</math> correspond to the DFT representations of <math display="inline">\mathbf{x}_{m}</math> and <math display="inline">\mathbf{z}_{m}</math>, respectively, <math display="inline">\mathcal{S}_{\beta}(.)</math> corresponds to the Soft-thresholding function with argument <math display="inline">\beta</math>, and the <math display="inline">\ell_{1,2}</math> norm is defined as the <math display="inline">\ell_{2}</math> norm along the channel dimension <math display="inline">c</math> followed by the <math display="inline">\ell_{1}</math> norm along the spatial dimension <math display="inline">m</math>. 
The reader is referred to<ref name="wohlberg_2016_convolutional" /> (Section II) for details on the ADMM implementation and the dictionary learning procedure.</div></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div> \{\mathbf{\hat{z}}_{i}\}&=\underset{\{\mathbf{z}_{i}\}}{\text{argmin}}\frac{1}{2}\sum_{c}\bigg\|\sum_{i}\mathbf{d}_{c,i}\ast \mathbf{z}_{i} -\mathbf{y}_{c}\bigg\|_{2}^{2}+\lambda \sum_{i}\|\mathbf{z}_{i}\|_{1}.\end{aligned}</math> By means of ADMM,<ref>{{cite journal |last1=Boyd |first1=Stephen |title=Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers |journal=Foundations and Trends in Machine Learning |date=2010 |volume=3 |issue=1 |pages=1–122 |doi=10.1561/2200000016<ins style="font-weight: bold; text-decoration: none;"> |s2cid=51789432</ins> |issn=1935-8237}}</ref> the cost function is decoupled into simpler sub-problems, allowing an efficient <math display="inline">\mathbf{\Gamma}</math> estimation. Algorithm 2 describes the procedure, where <math display="inline">\hat{D}_{c,m}</math> is the DFT representation of <math display="inline">D_{c,m}</math>, the convolutional matrix for the term <math display="inline">\mathbf{d}_{c,i}\ast \mathbf{z}_{i}</math>. Likewise, <math display="inline">\hat{\mathbf{x}}_{m}</math> and <math display="inline">\hat{\mathbf{z}}_{m}</math> correspond to the DFT representations of <math display="inline">\mathbf{x}_{m}</math> and <math display="inline">\mathbf{z}_{m}</math>, respectively, <math display="inline">\mathcal{S}_{\beta}(.)</math> corresponds to the Soft-thresholding function with argument <math display="inline">\beta</math>, and the <math display="inline">\ell_{1,2}</math> norm is defined as the <math display="inline">\ell_{2}</math> norm along the channel dimension <math display="inline">c</math> followed by the <math display="inline">\ell_{1}</math> norm along the spatial dimension <math display="inline">m</math>. 
The reader is referred to<ref name="wohlberg_2016_convolutional" /> (Section II) for details on the ADMM implementation and the dictionary learning procedure.</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>'''Algorithm 2: Color image inpainting via the convolutional sparse coding model.'''</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>'''Algorithm 2: Color image inpainting via the convolutional sparse coding model.'''</div></td>
</tr>
</table>Citation bothttps://en.wikipedia.org/w/index.php?title=Convolutional_sparse_coding&diff=1166494546&oldid=prevCitation bot: Alter: title, template type. Add: s2cid, chapter. Removed parameters. | Use this bot. Report bugs. | #UCB_CommandLine2023-07-21T23:10:10Z<p>Alter: title, template type. Add: s2cid, chapter. Removed parameters. | <a href="/wiki/Wikipedia:UCB" class="mw-redirect" title="Wikipedia:UCB">Use this bot</a>. <a href="/wiki/Wikipedia:DBUG" class="mw-redirect" title="Wikipedia:DBUG">Report bugs</a>. | #UCB_CommandLine</p>
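The soft-thresholding operator <math display="inline">\mathcal{S}_{\beta}(\cdot)</math> used in the ADMM solver above is simple to state in code. The following is a minimal NumPy sketch of the operator and of a plain ISTA loop for the basic sparse coding objective <math display="inline">\tfrac{1}{2}\|\mathbf{D}\mathbf{\Gamma}-\mathbf{x}\|_{2}^{2}+\lambda\|\mathbf{\Gamma}\|_{1}</math>; it is an illustration of the thresholding idea, not the DFT-domain ADMM implementation of Algorithm 2, and the function names are ours:

```python
import numpy as np

def soft_threshold(v, beta):
    # S_beta(v): shrink each entry toward zero by beta, zeroing small entries
    return np.sign(v) * np.maximum(np.abs(v) - beta, 0.0)

def ista(D, x, lam, n_iter=500):
    # Plain ISTA for min_G 0.5*||D G - x||_2^2 + lam*||G||_1,
    # with constant step 1/L, where L = ||D||_2^2 is the Lipschitz
    # constant of the gradient of the quadratic term.
    L = np.linalg.norm(D, 2) ** 2
    G = np.zeros(D.shape[1])
    for _ in range(n_iter):
        G = soft_threshold(G - (D.T @ (D @ G - x)) / L, lam / L)
    return G
```

When <math display="inline">\mathbf{D}</math> is orthonormal the loop reduces to a single thresholding of <math display="inline">\mathbf{D}^{T}\mathbf{x}</math>, which is a useful sanity check.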
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 23:10, 21 July 2023</td>
</tr><tr>
<td colspan="2" class="diff-lineno">Line 10:</td>
<td colspan="2" class="diff-lineno">Line 10:</td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>== Overview ==</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>== Overview ==</div></td>
</tr>
<tr>
<td class="diff-marker" data-marker="−"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>Given a signal of interest <math display="inline">\mathbf{x}\in \mathbb{R}^{N}</math> and a redundant dictionary <math display="inline">\mathbf{D}\in\mathbb{R}^{N\times M}, M\gg N</math>, the sparse coding problem consists of retrieving a sparse vector <math display="inline">\mathbf{\Gamma}\in \mathbb{R}^{M}</math>, denominated the sparse representation of <math display="inline">\mathbf{x}</math>, such that <math display="inline">\mathbf{x}= \mathbf{D}\mathbf{\Gamma}</math>. Intuitively, this implies <math display="inline">\mathbf{x}</math> is expressed as a linear combination of a small number of elements in <math display="inline">\mathbf{D}</math>. The global sparsity constraint prior has been shown to be useful in many ill-posed inverse problems such as image inpainting, super-resolution, and coding.<ref>{{cite journal |last1=Jianchao Yang |last2=Wright |first2=John |last3=Huang |first3=Thomas S |last4=Yi Ma |title=Image Super-Resolution Via Sparse Representation |journal=IEEE Transactions on Image Processing |date=November 2010 |volume=19 |issue=11 |pages=2861–2873 |doi=10.1109/TIP.2010.2050625|pmid=20483687 |bibcode=2010ITIP...19.2861Y |s2cid=8187730 }}</ref><ref>{{cite journal |last1=Wetzstein |first1=Gordon |last2=Heidrich |first2=Wolfgang |last3=Heide |first3=Felix |title=Fast and Flexible Convolutional Sparse Coding |date=2015 |pages=5135–5143 |url=https://www.cv-foundation.org/openaccess/content_cvpr_2015/html/Heide_Fast_and_Flexible_2015_CVPR_paper.html}}</ref><ref>{{cite journal |last1=Wohlberg |first1=Brendt |title=SPORCO: A Python package for standard and convolutional sparse representations |journal=Proceedings of the 16th Python in Science Conference |date=2017 |pages=1–8 |doi=10.25080/shinma-7f4c6e7-001 
|url=http://conference.scipy.org/proceedings/scipy2017/brendt_wohlberg.html|doi-access=free }}</ref> It has been of particular interest for image understanding and computer vision tasks involving natural images, allowing redundant dictionaries to be efficiently inferred.<ref>{{cite journal |last1=Mairal |first1=Julien |last2=Bach |first2=Francis |last3=Ponce |first3=Jean |last4=Sapiro |first4=Guillermo |title=Online Dictionary Learning for Sparse Coding |journal=Proceedings of the 26th Annual International Conference on Machine Learning |date=2009 |pages=689–696 |doi=10.1145/1553374.1553463 |url=https://dl.acm.org/citation.cfm?id=1553463 |publisher=ACM|isbn=9781605585161 |hdl=11299/180118 |s2cid=7027533 |hdl-access=free }}</ref><ref name="papyan_2017_working">{{cite journal |last1=Papyan |first1=Vardan |last2=Sulam |first2=Jeremias |last3=Elad |first3=Michael |title=Working Locally Thinking Globally: Theoretical Guarantees for Convolutional Sparse Coding |journal=IEEE Transactions on Signal Processing |date=1 November 2017 |volume=65 |issue=21 |pages=5687–5701 |doi=10.1109/TSP.2017.2733447|bibcode=2017ITSP...65.5687P |arxiv=1707.06066 |s2cid=11214580 }}</ref><ref name="wohlberg_2016_convolutional">{{cite <del style="font-weight: bold; text-decoration: none;">journal</del> |last1=Wohlberg |first1=Brendt |title<del style="font-weight: bold; text-decoration: none;">=Convolutional sparse representation of color images |journal</del>=2016 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI) |pages=57–60 |date=6–8 March 2016 |doi=10.1109/SSIAI.2016.7459174 |isbn=978-1-4673-9919-7 |s2cid=9923572 }}</ref></div></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>Given a signal of interest <math display="inline">\mathbf{x}\in \mathbb{R}^{N}</math> and a redundant dictionary <math display="inline">\mathbf{D}\in\mathbb{R}^{N\times M}, M\gg N</math>, the sparse coding problem consists of retrieving a sparse vector <math display="inline">\mathbf{\Gamma}\in \mathbb{R}^{M}</math>, denominated the sparse representation of <math display="inline">\mathbf{x}</math>, such that <math display="inline">\mathbf{x}= \mathbf{D}\mathbf{\Gamma}</math>. Intuitively, this implies <math display="inline">\mathbf{x}</math> is expressed as a linear combination of a small number of elements in <math display="inline">\mathbf{D}</math>. The global sparsity constraint prior has been shown to be useful in many ill-posed inverse problems such as image inpainting, super-resolution, and coding.<ref>{{cite journal |last1=Jianchao Yang |last2=Wright |first2=John |last3=Huang |first3=Thomas S |last4=Yi Ma |title=Image Super-Resolution Via Sparse Representation |journal=IEEE Transactions on Image Processing |date=November 2010 |volume=19 |issue=11 |pages=2861–2873 |doi=10.1109/TIP.2010.2050625|pmid=20483687 |bibcode=2010ITIP...19.2861Y |s2cid=8187730 }}</ref><ref>{{cite journal |last1=Wetzstein |first1=Gordon |last2=Heidrich |first2=Wolfgang |last3=Heide |first3=Felix |title=Fast and Flexible Convolutional Sparse Coding |date=2015 |pages=5135–5143 |url=https://www.cv-foundation.org/openaccess/content_cvpr_2015/html/Heide_Fast_and_Flexible_2015_CVPR_paper.html}}</ref><ref>{{cite journal |last1=Wohlberg |first1=Brendt |title=SPORCO: A Python package for standard and convolutional sparse representations |journal=Proceedings of the 16th Python in Science Conference |date=2017 |pages=1–8 |doi=10.25080/shinma-7f4c6e7-001 
|url=http://conference.scipy.org/proceedings/scipy2017/brendt_wohlberg.html|doi-access=free }}</ref> It has been of particular interest for image understanding and computer vision tasks involving natural images, allowing redundant dictionaries to be efficiently inferred.<ref>{{cite journal |last1=Mairal |first1=Julien |last2=Bach |first2=Francis |last3=Ponce |first3=Jean |last4=Sapiro |first4=Guillermo |title=Online Dictionary Learning for Sparse Coding |journal=Proceedings of the 26th Annual International Conference on Machine Learning |date=2009 |pages=689–696 |doi=10.1145/1553374.1553463 |url=https://dl.acm.org/citation.cfm?id=1553463 |publisher=ACM|isbn=9781605585161 |hdl=11299/180118 |s2cid=7027533 |hdl-access=free }}</ref><ref name="papyan_2017_working">{{cite journal |last1=Papyan |first1=Vardan |last2=Sulam |first2=Jeremias |last3=Elad |first3=Michael |title=Working Locally Thinking Globally: Theoretical Guarantees for Convolutional Sparse Coding |journal=IEEE Transactions on Signal Processing |date=1 November 2017 |volume=65 |issue=21 |pages=5687–5701 |doi=10.1109/TSP.2017.2733447|bibcode=2017ITSP...65.5687P |arxiv=1707.06066 |s2cid=11214580 }}</ref><ref name="wohlberg_2016_convolutional">{{cite <ins style="font-weight: bold; text-decoration: none;">book</ins> |last1=Wohlberg |first1=Brendt |title=2016 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI)<ins style="font-weight: bold; text-decoration: none;"> |chapter=Convolutional sparse representation of color images</ins> |pages=57–60 |date=6–8 March 2016 |doi=10.1109/SSIAI.2016.7459174 |isbn=978-1-4673-9919-7 |s2cid=9923572 }}</ref></div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>As an extension to the global sparsity constraint, recent works in the literature have revisited the model to reach a deeper understanding of its uniqueness and stability conditions.<ref name="wohlberg_2016_convolutional"/> Interestingly, by imposing a local sparsity prior in <math display="inline">\mathbf{\Gamma}</math>, meaning that its independent patches can be interpreted as sparse vectors themselves, the structure in <math display="inline">\mathbf{D}</math> can be understood as a “local” dictionary operating over each independent patch. This model extension is denominated convolutional sparse coding (CSC) and drastically reduces the burden of estimating signal representations while being characterized by stronger uniqueness and stability conditions. Furthermore, it allows <math display="inline">\mathbf{\Gamma}</math> to be efficiently estimated via projected gradient descent algorithms such as orthogonal matching pursuit (OMP) and basis pursuit (BP), while operating in a local fashion.<ref name="papyan_2017_working" /></div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>As an extension to the global sparsity constraint, recent works in the literature have revisited the model to reach a deeper understanding of its uniqueness and stability conditions.<ref name="wohlberg_2016_convolutional"/> Interestingly, by imposing a local sparsity prior in <math display="inline">\mathbf{\Gamma}</math>, meaning that its independent patches can be interpreted as sparse vectors themselves, the structure in <math display="inline">\mathbf{D}</math> can be understood as a “local” dictionary operating over each independent patch. This model extension is denominated convolutional sparse coding (CSC) and drastically reduces the burden of estimating signal representations while being characterized by stronger uniqueness and stability conditions. Furthermore, it allows <math display="inline">\mathbf{\Gamma}</math> to be efficiently estimated via projected gradient descent algorithms such as orthogonal matching pursuit (OMP) and basis pursuit (BP), while operating in a local fashion.<ref name="papyan_2017_working" /></div></td>
</tr>
<tr>
<td colspan="2" class="diff-lineno">Line 53:</td>
<td colspan="2" class="diff-lineno">Line 53:</td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>For the local approach, the mutual coherence of <math display="inline">\mathbf{D}</math> satisfies <math display="inline">\mu(\mathbf{D})\geq \big(\frac{m-1}{m(2n-1)-1}\big)^{1/2}.</math> So, if a solution obeys <math display="inline">\|\mathbf{\Gamma}\|_{0,\infty}< \frac{1}{2}\big(1+\frac{1}{\mu(\mathbf{D})}\big)</math>, then it is the sparsest solution to the <math display="inline">\ell_{0,\infty}</math> problem. Thus, under the local formulation, the same number of non-zeros is permitted for each stripe instead of for the full vector.</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>For the local approach, the mutual coherence of <math display="inline">\mathbf{D}</math> satisfies <math display="inline">\mu(\mathbf{D})\geq \big(\frac{m-1}{m(2n-1)-1}\big)^{1/2}.</math> So, if a solution obeys <math display="inline">\|\mathbf{\Gamma}\|_{0,\infty}< \frac{1}{2}\big(1+\frac{1}{\mu(\mathbf{D})}\big)</math>, then it is the sparsest solution to the <math display="inline">\ell_{0,\infty}</math> problem. Thus, under the local formulation, the same number of non-zeros is permitted for each stripe instead of for the full vector.</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker" data-marker="−"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>Similar to the global model, the CSC is solved via OMP and BP methods, the latter employing the iterative shrinkage thresholding algorithm (ISTA)<ref>{{cite journal |last1=Beck |first1=Amir |last2=Teboulle |first2=Marc |title=A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems |journal=SIAM Journal on Imaging Sciences |date=January 2009 |volume=2 |issue=1 |pages=183–202 |doi=10.1137/080716542 }}</ref> to split the pursuit into smaller problems. Based on the <math display="inline">\ell_{0,\infty}</math> pseudonorm, if a solution <math display="inline">\mathbf{\Gamma}</math> exists satisfying <math display="inline">\|\mathbf{\Gamma}\|_{0,\infty}<\frac{1}{2}\big(1+\frac{1}{\mu(\mathbf{D})} \big)</math>, then both methods are guaranteed to recover it. Moreover, the local model guarantees recovery independently of the signal dimension, as opposed to the <math display="inline">\ell_{0}</math> prior. Stability conditions for OMP and BP are also guaranteed if the exact recovery condition (ERC) is met for a support <math display="inline">\mathcal{T}</math> with a constant <math display="inline">\theta</math>. The ERC is defined as <math display="inline">\theta= 1-\underset{i\notin \mathcal{T}}{\text{max}} \|\mathbf{D}_{\mathcal{T}}^{\dagger}\mathbf{d}_{i}\|_{1}>0</math>, where <math display="inline">\dagger</math> denotes the pseudo-inverse. Algorithm 1 shows the Global Pursuit method based on ISTA.</div></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>Similar to the global model, the CSC is solved via OMP and BP methods, the latter employing the iterative shrinkage thresholding algorithm (ISTA)<ref>{{cite journal |last1=Beck |first1=Amir |last2=Teboulle |first2=Marc |title=A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems |journal=SIAM Journal on Imaging Sciences |date=January 2009 |volume=2 |issue=1 |pages=183–202 |doi=10.1137/080716542<ins style="font-weight: bold; text-decoration: none;"> |s2cid=3072879</ins> }}</ref> to split the pursuit into smaller problems. Based on the <math display="inline">\ell_{0,\infty}</math> pseudonorm, if a solution <math display="inline">\mathbf{\Gamma}</math> exists satisfying <math display="inline">\|\mathbf{\Gamma}\|_{0,\infty}<\frac{1}{2}\big(1+\frac{1}{\mu(\mathbf{D})} \big)</math>, then both methods are guaranteed to recover it. Moreover, the local model guarantees recovery independently of the signal dimension, as opposed to the <math display="inline">\ell_{0}</math> prior. Stability conditions for OMP and BP are also guaranteed if the exact recovery condition (ERC) is met for a support <math display="inline">\mathcal{T}</math> with a constant <math display="inline">\theta</math>. The ERC is defined as <math display="inline">\theta= 1-\underset{i\notin \mathcal{T}}{\text{max}} \|\mathbf{D}_{\mathcal{T}}^{\dagger}\mathbf{d}_{i}\|_{1}>0</math>, where <math display="inline">\dagger</math> denotes the pseudo-inverse. Algorithm 1 shows the Global Pursuit method based on ISTA.</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>'''Algorithm 1: 1D CSC via local iterative soft-thresholding.'''</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>'''Algorithm 1: 1D CSC via local iterative soft-thresholding.'''</div></td>
</tr>
</table>Citation bothttps://en.wikipedia.org/w/index.php?title=Convolutional_sparse_coding&diff=1150286072&oldid=prevJohn of Reading: /* Connections to convolutional neural networks */Typo fixing, replaced: constaint → constraint2023-04-17T07:54:01Z<p><span class="autocomment">Connections to convolutional neural networks: </span>Typo fixing, replaced: constaint → constraint</p>
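The mutual coherence <math display="inline">\mu(\mathbf{D})</math> and the recovery bound <math display="inline">\frac{1}{2}\big(1+\frac{1}{\mu(\mathbf{D})}\big)</math> that appear throughout the guarantees above are straightforward to compute. A minimal NumPy sketch (function names are illustrative, not from any library):

```python
import numpy as np

def mutual_coherence(D):
    # mu(D): largest absolute inner product between distinct
    # l2-normalized columns (atoms) of D.
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)  # ignore self-correlations
    return G.max()

def recovery_bound(mu):
    # Sparsity level below which OMP/BP recovery is guaranteed:
    # ||Gamma||_{0,inf} < (1/2) * (1 + 1/mu)
    return 0.5 * (1.0 + 1.0 / mu)
```

Lower coherence (more "spread out" atoms) yields a larger bound, i.e. denser representations can still be recovered.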
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 07:54, 17 April 2023</td>
</tr><tr>
<td colspan="2" class="diff-lineno">Line 120:</td>
<td colspan="2" class="diff-lineno">Line 120:</td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Recall the forward pass of the convolutional neural network model, used in both training and inference steps. Let <math display="inline">\mathbf{x}\in \mathbb{R}^{Mm_{1}}</math> be its input and <math display="inline">\mathbf{W}_{k}\in\mathbb{R}^{N\times m_{1}}</math> the filters at layer <math display="inline">k</math>, which are followed by the rectified linear unit (ReLU) <math display="inline">\text{ReLU}(\mathbf{x})= \max(0, \mathbf{x})</math>, applied entrywise, for bias <math display="inline">\mathbf{b}\in \mathbb{R}^{Mm_{1}}</math>. Based on this elementary block, taking <math display="inline">K=2</math> as an example, the CNN output can be expressed as: <math display="block">\begin{aligned}</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Recall the forward pass of the convolutional neural network model, used in both training and inference steps. Let <math display="inline">\mathbf{x}\in \mathbb{R}^{Mm_{1}}</math> be its input and <math display="inline">\mathbf{W}_{k}\in\mathbb{R}^{N\times m_{1}}</math> the filters at layer <math display="inline">k</math>, which are followed by the rectified linear unit (ReLU) <math display="inline">\text{ReLU}(\mathbf{x})= \max(0, \mathbf{x})</math>, applied entrywise, for bias <math display="inline">\mathbf{b}\in \mathbb{R}^{Mm_{1}}</math>. Based on this elementary block, taking <math display="inline">K=2</math> as an example, the CNN output can be expressed as: <math display="block">\begin{aligned}</div></td>
</tr>
<tr>
<td class="diff-marker" data-marker="−"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div> \mathbf{Z}_{2}&= \text{ReLU}\big(\mathbf{W}_{2}^{T}\; \text{ReLU}(\mathbf{W}_{1}^{T}\mathbf{x}+\mathbf{b}_{1})+\mathbf{b}_{2}\big).\end{aligned}</math> Finally, comparing the CNN algorithm and the Layered thresholding approach for the nonnegative <del style="font-weight: bold; text-decoration: none;">{{sic|?|constaint}}</del>, it is straightforward to show that both are equivalent: <math display="block">\begin{aligned}</div></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div> \mathbf{Z}_{2}&= \text{ReLU}\big(\mathbf{W}_{2}^{T}\; \text{ReLU}(\mathbf{W}_{1}^{T}\mathbf{x}+\mathbf{b}_{1})+\mathbf{b}_{2}\big).\end{aligned}</math> Finally, comparing the CNN algorithm and the Layered thresholding approach for the nonnegative <ins style="font-weight: bold; text-decoration: none;">constraint</ins>, it is straightforward to show that both are equivalent: <math display="block">\begin{aligned}</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div> \hat{\mathbf{\Gamma}}&= \mathcal{S}^{+}_{\beta_{2}}\big(\mathbf{D}_{2}^{T}\mathcal{S}^{+}_{\beta_{1}}(\mathbf{D}_{1}^{T}\mathbf{x}) \big)\\</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div> \hat{\mathbf{\Gamma}}&= \mathcal{S}^{+}_{\beta_{2}}\big(\mathbf{D}_{2}^{T}\mathcal{S}^{+}_{\beta_{1}}(\mathbf{D}_{1}^{T}\mathbf{x}) \big)\\</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div> &= \text{ReLU}\big(\mathbf{W}_{2}^{T} \text{ReLU}(\mathbf{W}_{1}^{T}\mathbf{x}+\beta_{1})+\beta_{2}\big).\end{aligned}</math></div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div> &= \text{ReLU}\big(\mathbf{W}_{2}^{T} \text{ReLU}(\mathbf{W}_{1}^{T}\mathbf{x}+\beta_{1})+\beta_{2}\big).\end{aligned}</math></div></td>
</tr>
</table>John of Readinghttps://en.wikipedia.org/w/index.php?title=Convolutional_sparse_coding&diff=1076498074&oldid=prevSauer202 at 12:15, 11 March 20222022-03-11T12:15:55Z<p></p>
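The layer-wise equivalence in the diff above, <math display="inline">\text{ReLU}(\mathbf{W}^{T}\mathbf{x}+\mathbf{b})=\mathcal{S}^{+}_{\beta}(\mathbf{D}^{T}\mathbf{x})</math> with <math display="inline">\mathbf{D}=\mathbf{W}</math> and <math display="inline">\beta=-\mathbf{b}</math>, can be checked numerically. A minimal NumPy sketch for one layer, with illustrative shapes and names:

```python
import numpy as np

def relu(v):
    # Rectified linear unit, applied entrywise
    return np.maximum(v, 0.0)

def soft_threshold_nonneg(v, beta):
    # Nonnegative soft-thresholding S^+_beta(v) = max(v - beta, 0)
    return np.maximum(v - beta, 0.0)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 5))  # filters of one layer (columns are atoms)
x = rng.standard_normal(8)       # input signal
b = rng.standard_normal(5)       # per-atom bias

lhs = relu(W.T @ x + b)                   # CNN forward pass, one layer
rhs = soft_threshold_nonneg(W.T @ x, -b)  # layered thresholding, beta = -b
```

Both expressions agree entry by entry, since <math display="inline">\max(v+b,0)=\max(v-(-b),0)</math>.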
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 12:15, 11 March 2022</td>
</tr><tr>
<td colspan="2" class="diff-lineno">Line 103:</td>
<td colspan="2" class="diff-lineno">Line 103:</td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>'''Theorem 1:''' (Uniqueness of sparse representations) Suppose a signal <math display="inline">\mathbf{x}</math> satisfies the (ML-CSC) model for a set of convolutional dictionaries <math display="inline">\{\mathbf{D}_{i}\}_{i=1}^{K}</math> with mutual coherence <math display="inline">\{\mu(\mathbf{D}_{i})\}_{i=1}^{K}</math>. If the true sparse representations satisfy <math display="inline">\|\mathbf{\Gamma}_{i}\|_{0,\infty}<\frac{1}{2}\big(1+\frac{1}{\mu(\mathbf{D}_{i})}\big)</math>, then the set <math display="inline">\{\hat{\mathbf{\Gamma}}_{i}\}_{i=1}^{K}</math> is the unique solution to the problem if the thresholds are chosen to satisfy <math display="inline">\lambda_{i}<\frac{1}{2}\big(1+\frac{1}{\mu(\mathbf{D}_{i})} \big)</math>.</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>'''Theorem 1:''' (Uniqueness of sparse representations) Suppose a signal <math display="inline">\mathbf{x}</math> satisfies the (ML-CSC) model for a set of convolutional dictionaries <math display="inline">\{\mathbf{D}_{i}\}_{i=1}^{K}</math> with mutual coherence <math display="inline">\{\mu(\mathbf{D}_{i})\}_{i=1}^{K}</math>. If the true sparse representations satisfy <math display="inline">\|\mathbf{\Gamma}_{i}\|_{0,\infty}<\frac{1}{2}\big(1+\frac{1}{\mu(\mathbf{D}_{i})}\big)</math>, then the set <math display="inline">\{\hat{\mathbf{\Gamma}}_{i}\}_{i=1}^{K}</math> is the unique solution to the problem if the thresholds are chosen to satisfy <math display="inline">\lambda_{i}<\frac{1}{2}\big(1+\frac{1}{\mu(\mathbf{D}_{i})} \big)</math>.</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker" data-marker="−"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>'''Theorem 2:''' (Global <del style="font-weight: bold; text-decoration: none;">Stability</del> of the noise-corrupted scenario) Suppose a signal <math display="inline">\mathbf{x}</math> satisfying the (ML-CSC) model for a set of convolutional dictionaries <math display="inline">\{\mathbf{D}_{i}\}_{i=1}^{K}</math> is contaminated with noise <math display="inline">\mathbf{E}</math>, where <math display="inline">\|\mathbf{E}\|_{2}\leq \varepsilon_{0}</math>, resulting in <math display="inline">\mathbf{Y=X+E}</math>. If <math display="inline">\lambda_{i}<\frac{1}{2}\big(1+\frac{1}{\mu(\mathbf{D}_{i})}\big)</math> and <math display="inline">\varepsilon_{i}^{2}=\frac{4\varepsilon_{i-1}^{2}}{1-(2\|\mathbf{\Gamma}_{i}\|_{0,\infty}-1)\mu(\mathbf{D}_{i})}</math>, then the estimated representations <math display="inline">\{\hat{\mathbf{\Gamma}}_{i}\}_{i=1}^{K}</math> satisfy the following: <math display="inline">\|\mathbf{\Gamma}_{i}-\hat{\mathbf{\Gamma}}_{i}\|_{2}^{2}\leq \varepsilon_{i}^{2}</math>.</div></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>'''Theorem 2:''' (<ins style="font-weight: bold; text-decoration: none;">[[Lyapunov stability|</ins>Global <ins style="font-weight: bold; text-decoration: none;">stability]]</ins> of the noise-corrupted scenario) Suppose a signal <math display="inline">\mathbf{x}</math> satisfying the (ML-CSC) model for a set of convolutional dictionaries <math display="inline">\{\mathbf{D}_{i}\}_{i=1}^{K}</math> is contaminated with noise <math display="inline">\mathbf{E}</math>, where <math display="inline">\|\mathbf{E}\|_{2}\leq \varepsilon_{0}</math>, resulting in <math display="inline">\mathbf{Y=X+E}</math>. If <math display="inline">\lambda_{i}<\frac{1}{2}\big(1+\frac{1}{\mu(\mathbf{D}_{i})}\big)</math> and <math display="inline">\varepsilon_{i}^{2}=\frac{4\varepsilon_{i-1}^{2}}{1-(2\|\mathbf{\Gamma}_{i}\|_{0,\infty}-1)\mu(\mathbf{D}_{i})}</math>, then the estimated representations <math display="inline">\{\hat{\mathbf{\Gamma}}_{i}\}_{i=1}^{K}</math> satisfy the following: <math display="inline">\|\mathbf{\Gamma}_{i}-\hat{\mathbf{\Gamma}}_{i}\|_{2}^{2}\leq \varepsilon_{i}^{2}</math>.</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>=== Projection-based algorithms ===</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>=== Projection-based algorithms ===</div></td>
</tr>
</table>Sauer202
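The error recursion of Theorem 2 above, <math display="inline">\varepsilon_{i}^{2}=\frac{4\varepsilon_{i-1}^{2}}{1-(2\|\mathbf{\Gamma}_{i}\|_{0,\infty}-1)\mu(\mathbf{D}_{i})}</math>, is easy to evaluate layer by layer. A small illustrative sketch (the function name and argument layout are ours):

```python
def error_bound(eps0, sparsities, coherences):
    # Propagate eps_i^2 = 4 * eps_{i-1}^2 / (1 - (2*s_i - 1)*mu_i)
    # through the layers, where s_i = ||Gamma_i||_{0,inf} and
    # mu_i = mu(D_i). Returns the per-layer squared error bounds.
    eps_sq = eps0 ** 2
    bounds = []
    for s, mu in zip(sparsities, coherences):
        denom = 1.0 - (2 * s - 1) * mu
        assert denom > 0, "theorem requires (2*s_i - 1)*mu_i < 1"
        eps_sq = 4.0 * eps_sq / denom
        bounds.append(eps_sq)
    return bounds
```

Note the factor of 4 per layer: even with incoherent dictionaries the bound grows geometrically with depth, which is why the stability guarantee weakens for deeper models.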