<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Recent changes to Numerics.DimRed</title><link>https://sourceforge.net/p/wavepacket/wiki/Numerics.DimRed/</link><description>Recent changes to Numerics.DimRed</description><language>en</language><lastBuildDate>Thu, 01 Jun 2017 13:40:29 -0000</lastBuildDate><atom:link href="https://sourceforge.net/p/wavepacket/wiki/Numerics.DimRed/feed" rel="self" type="application/rss+xml"/><item><title>Discussion for Numerics.DimRed page</title><link>https://sourceforge.net/p/wavepacket/wiki/Numerics.DimRed/?limit=25#f2f6</link><description>&lt;div class="markdown_content"&gt;&lt;p&gt;Indices k (on N matrices) in last but one equation missing&lt;/p&gt;&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Burkhard Schmidt</dc:creator><pubDate>Thu, 01 Jun 2017 13:40:29 -0000</pubDate><guid>https://sourceforge.net905de3aeaff3eba53a101069125bc85889b3d4ef</guid></item><item><title>Numerics.DimRed modified by Burkhard Schmidt</title><link>https://sourceforge.net/p/wavepacket/wiki/Numerics.DimRed/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v140
+++ v141
@@ -86,6 +86,13 @@
 [[img src="Error_3.gif" alt="||\Sigma-\hat{\Sigma}||^2_{\mathcal{H}_2}=tr\left(C_EW_EC_E^\ast\right)"]]

+Examples
+--------------
+
+ * [Controlled population transfer in an asymmetric double well](Demos.DoubleWell.DimReduce_1/)
+ * ... more coming soon ...
+ 
+ 
 References
 ----------

&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Burkhard Schmidt</dc:creator><pubDate>Thu, 27 Apr 2017 13:43:14 -0000</pubDate><guid>https://sourceforge.net989f135dac55c2168d1f12620c49c6c60e9ac4c3</guid></item><item><title>Numerics.DimRed modified by Burkhard Schmidt</title><link>https://sourceforge.net/p/wavepacket/wiki/Numerics.DimRed/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v139
+++ v140
@@ -68,7 +68,7 @@
 The need for efficient numerical treatment of control problems leads to the problem of model order reduction, i.e. the approximation of large-scale systems by significantly smaller ones. While model reduction of linear systems has been studied for several years now, and a well established theory including error bounds and structure-preserving properties fulfilled by a reduced-order model exists, the situation is less favorable for non-linear systems (which occur, e.g. in [control of quantum problems](Numerics.Control)). There, the mathematical techniques developed to linear systems are more difficult and much less general. Here we will deal with following two classes of methods, representing generalizations of their linear counterparts:

 * [Balanced truncation](Numerics.DimRed.BalTru)
-* [ℋ2 norm interpolation](Numerics.DimRed.H2Norm)
+* [ℋ2 optimal model reduction](Numerics.DimRed.H2Model)

 ℋ2 error norm
 --------------
&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Burkhard Schmidt</dc:creator><pubDate>Tue, 25 Apr 2017 09:11:52 -0000</pubDate><guid>https://sourceforge.net9285308fe9e567218a3eed5c25e93af148c99b3d</guid></item><item><title>Numerics.DimRed modified by Burkhard Schmidt</title><link>https://sourceforge.net/p/wavepacket/wiki/Numerics.DimRed/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v138
+++ v139
@@ -56,7 +56,7 @@

  [[img src="Lyapunov_3.gif" alt="\begin{array}{rcl}AX_0+X_0\hat{A}^\ast +B\hat{B}^\ast&amp;amp;=&amp;amp;0\\AX_j+X_j\hat{A}^\ast+\sum_{k=1}^m N_kX_{j-1}\hat{N}^\ast_k+B\hat{B}^\ast&amp;amp;=&amp;amp;0,\,j.gt.0\end{array}"]]

-(and similarly for the second (dual) equation for the observability Gramian). Convergence X&lt;sub&gt;j&lt;/sub&gt;→W&lt;sub&gt;C&lt;/sub&gt; is guaranteed if the eigenvalue of A with the largest (negative) real part is sufficiently separated from the imaginary axis \[5\]. While for linear control systems the Lyapunov equations can always be solved if matrix A is stable (see [here](Numerics.Control)), for the generalized Lyapunov equations we recall that a system is called stable when the system matrix A has only eigenvalues in the open left half complex plane (i.e., excluding the imaginary axis). Stability thus means that there are constants λ , a &amp;gt; 0 such that ||exp(At)|| ≤ λ exp(−at), where ||•|| is a suitable matrix norm. If moreover
+and similarly for the second (dual) equation for the observability Gramian. Convergence X&lt;sub&gt;j&lt;/sub&gt;→W&lt;sub&gt;C&lt;/sub&gt; is guaranteed if the eigenvalue of A with the largest (negative) real part is sufficiently separated from the imaginary axis \[5\]. While for linear control systems the Lyapunov equations can always be solved if matrix A is stable (see [here](Numerics.Control)), for the generalized Lyapunov equations we recall that a system is called stable when the system matrix A has only eigenvalues in the open left half complex plane (i.e., excluding the imaginary axis). Stability thus means that there are constants λ , a &amp;gt; 0 such that ||exp(At)|| ≤ λ exp(−at), where ||•|| is a suitable matrix norm. If moreover

 [[img src="WCO_4.gif" alt="\frac{\lambda^2}{2a}\sum_{k=1}^m||N_k||^2\langle 1"]]

@@ -103,4 +103,4 @@

 7. P. Benner, T. Breiten: [SIAM Journal on Matrix Analysis and Application, **33**, 859–885 (2012)](http://dx.doi.org/10.1137/110836742)

-For a review of the theory of linear control systems, see e.g. the excellent [lecture notes of Umea university](http://www.control.tfe.umu.se/Courses/Optimal_Control_for_Linear_Systems_2010) and/or the [books by Kemin Zhou](http://www.ece.lsu.edu/kemin).
+For a review of the theory of linear control systems, see e.g. the [lecture notes of Umea university](http://www.control.tfe.umu.se/Courses/Optimal_Control_for_Linear_Systems_2010) and/or the [books by Kemin Zhou](http://www.ece.lsu.edu/kemin).
&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Burkhard Schmidt</dc:creator><pubDate>Thu, 23 Jun 2016 07:05:38 -0000</pubDate><guid>https://sourceforge.net45fcfe61fd391afc5b0adaea075dc86e8a171797</guid></item><item><title>Numerics.DimRed modified by Burkhard Schmidt</title><link>https://sourceforge.net/p/wavepacket/wiki/Numerics.DimRed/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v137
+++ v138
@@ -36,7 +36,7 @@

 Controllability and observability 
 ---------------------------------
-Two important properties of control systems are the [controllability](http://en.wikipedia.org/wiki/Controllability) (aka reachability) and the [observability](http://en.wikipedia.org/wiki/Observability) which can be regarded as dual aspects of the same problem. For the case of linear control systems, the corresponding [controllability Gramian W&lt;sub&gt;C&lt;/sub&gt;](http://en.wikipedia.org/wiki/Controllability_Gramian) (often referred to as P in the literature) and the [observability Gramian W&lt;sub&gt;O&lt;/sub&gt;](http://en.wikipedia.org/wiki/Observability_Gramian) (often referred to as Q in the literature) are defined as 
+Two important properties of control systems are the [controllability](http://en.wikipedia.org/wiki/Controllability) and the [observability](http://en.wikipedia.org/wiki/Observability) which can be regarded as dual aspects of the same problem. For the case of linear control systems, the corresponding [controllability Gramian W&lt;sub&gt;C&lt;/sub&gt;](http://en.wikipedia.org/wiki/Controllability_Gramian) (often referred to as P in the literature) and the [observability Gramian W&lt;sub&gt;O&lt;/sub&gt;](http://en.wikipedia.org/wiki/Observability_Gramian) (often referred to as Q in the literature) are defined as 

 [[img src="WCO_1.gif" alt="\begin{matrix}W_C&amp;amp;=&amp;amp;\int_0^\infty \exp(At)BB^\ast\exp(A^\ast t)dt\\W_O&amp;amp;=&amp;amp;\int_0^\infty \exp(A^\ast t)C^\ast C\exp(At)dt\end{matrix}"]]

&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Burkhard Schmidt</dc:creator><pubDate>Thu, 23 Jun 2016 06:57:31 -0000</pubDate><guid>https://sourceforge.net0e7b8e7c183337cfd2202aa654d820266d3acaf8</guid></item><item><title>Numerics.DimRed modified by Burkhard Schmidt</title><link>https://sourceforge.net/p/wavepacket/wiki/Numerics.DimRed/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v136
+++ v137
@@ -52,7 +52,7 @@

 [[img src="WCO_3.gif" alt="\begin{matrix}AW_C+W_C A^\ast +\sum_{k=1}^mN_kW_CN_k^\ast+BB^\ast&amp;amp;=&amp;amp;0\\A^\ast W_0+W_0A+\sum_{k=1}^m N_k^\ast W_0N_k+C^\ast C&amp;amp;=&amp;amp;0\end{matrix}"]] 

-Because direct methods for solving such equations have a numerical complexity of O(n&lt;sup&gt;6&lt;/sup&gt;), alternative approaches become mandatory. For example, by mapping the Gramian matrices onto vectors, the generalized Lyapunov equations can be understood as systems of coupled linear equations which can be solved, e. g., by means of the [biconjugate gradient method](http://en.wikipedia.org/wiki/Biconjugate_gradient_method), preferentially with pre-conditioning \[3\]. Alternatively, the iterative methods \[4,5\] can be used instead which requires the solution of a standard Lyapunov equation in each step \[6\]
+Because direct methods for solving such equations have a numerical complexity of O(n&lt;sup&gt;6&lt;/sup&gt;), alternative approaches become mandatory. For example, by mapping the Gramian matrices onto vectors, the generalized Lyapunov equations can be understood as systems of coupled linear equations which can be solved, e. g., by means of the [biconjugate gradient method](http://en.wikipedia.org/wiki/Biconjugate_gradient_method), preferentially with pre-conditioning \[3\]. Alternatively, an iterative method \[4,5\] can be used which requires the solution of a standard Lyapunov equation in each step \[6\]

  [[img src="Lyapunov_3.gif" alt="\begin{array}{rcl}AX_0+X_0\hat{A}^\ast +B\hat{B}^\ast&amp;amp;=&amp;amp;0\\AX_j+X_j\hat{A}^\ast+\sum_{k=1}^m N_kX_{j-1}\hat{N}^\ast_k+B\hat{B}^\ast&amp;amp;=&amp;amp;0,\,j.gt.0\end{array}"]]

&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Burkhard Schmidt</dc:creator><pubDate>Fri, 15 Apr 2016 07:08:04 -0000</pubDate><guid>https://sourceforge.net68137bb927ac34c7706ed6a99f055fd035a783b7</guid></item><item><title>Numerics.DimRed modified by Burkhard Schmidt</title><link>https://sourceforge.net/p/wavepacket/wiki/Numerics.DimRed/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v135
+++ v136
@@ -36,7 +36,7 @@

 Controllability and observability 
 ---------------------------------
-Ttwo important properties of control systems are the [controllability](http://en.wikipedia.org/wiki/Controllability) (aka reachability) and the [observability](http://en.wikipedia.org/wiki/Observability) which can be regarded as dual aspects of the same problem. For the case of linear control systems, the corresponding [controllability Gramian W&lt;sub&gt;C&lt;/sub&gt;](http://en.wikipedia.org/wiki/Controllability_Gramian) (often referred to as P in the literature) and the [observability Gramian W&lt;sub&gt;O&lt;/sub&gt;](http://en.wikipedia.org/wiki/Observability_Gramian) (often referred to as Q in the literature) are defined as 
+Two important properties of control systems are the [controllability](http://en.wikipedia.org/wiki/Controllability) (aka reachability) and the [observability](http://en.wikipedia.org/wiki/Observability) which can be regarded as dual aspects of the same problem. For the case of linear control systems, the corresponding [controllability Gramian W&lt;sub&gt;C&lt;/sub&gt;](http://en.wikipedia.org/wiki/Controllability_Gramian) (often referred to as P in the literature) and the [observability Gramian W&lt;sub&gt;O&lt;/sub&gt;](http://en.wikipedia.org/wiki/Observability_Gramian) (often referred to as Q in the literature) are defined as 

 [[img src="WCO_1.gif" alt="\begin{matrix}W_C&amp;amp;=&amp;amp;\int_0^\infty \exp(At)BB^\ast\exp(A^\ast t)dt\\W_O&amp;amp;=&amp;amp;\int_0^\infty \exp(A^\ast t)C^\ast C\exp(At)dt\end{matrix}"]]

&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Burkhard Schmidt</dc:creator><pubDate>Mon, 22 Feb 2016 09:44:34 -0000</pubDate><guid>https://sourceforge.netae8c06ad36542ccfb1ac9b890520174b48ea7970</guid></item><item><title>Numerics.DimRed modified by Burkhard Schmidt</title><link>https://sourceforge.net/p/wavepacket/wiki/Numerics.DimRed/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v134
+++ v135
@@ -81,7 +81,7 @@

 [[img src="Error_2.gif" alt="A_EW_E+W_EA_E^\ast+\sum_{k=1}^mN_EW_EN_E^\ast+B_EB_E^\ast&amp;amp;=&amp;amp;0"]]

-and the resulting Gramian W&lt;sub&gt;e&lt;/sub&gt; can be used to obtain the error norm as
+and the resulting Gramian W&lt;sub&gt;E&lt;/sub&gt; can be used to obtain the error norm as

 [[img src="Error_3.gif" alt="||\Sigma-\hat{\Sigma}||^2_{\mathcal{H}_2}=tr\left(C_EW_EC_E^\ast\right)"]]

&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Burkhard Schmidt</dc:creator><pubDate>Tue, 27 Jan 2015 11:01:22 -0000</pubDate><guid>https://sourceforge.net8733ddba7ea24d0fff1b512b80e037dcac896e9c</guid></item><item><title>Numerics.DimRed modified by Burkhard Schmidt</title><link>https://sourceforge.net/p/wavepacket/wiki/Numerics.DimRed/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v133
+++ v134
@@ -4,7 +4,7 @@
 Introduction
 -------------

-Before discussing the dimension reduction itself, we shall first shed some light on the transfer function/matrix (and its H&lt;sub&gt;2&lt;/sub&gt; norm). The transfer function/matrix describes how for every input/control u(t), the [linear/bilinear control systems](Numerics.Control) responds with an output/observation y(t). It is the task of dimension reduction to find lower-dimensional (reduced) systems which approximate this input-output behavior as closely as possible on any compact time interval \[0;T\]. We note that this problem is closely connected to the concepts of controllability and observability. 
+Before discussing the dimension reduction itself, we shall first shed some light on the transfer function/matrix (and its ℋ&lt;sub&gt;2&lt;/sub&gt; norm). The transfer function/matrix describes how for every input/control u(t), the [linear/bilinear control systems](Numerics.Control) responds with an output/observation y(t). It is the task of dimension reduction to find lower-dimensional (reduced) systems which approximate this input-output behavior as closely as possible on any compact time interval \[0;T\]. We note that this problem is closely connected to the concepts of controllability and observability. 

 Transfer function/matrix and H&lt;sub&gt;2&lt;/sub&gt; norm 
 -----------------------------------------------
@@ -52,7 +52,7 @@

 [[img src="WCO_3.gif" alt="\begin{matrix}AW_C+W_C A^\ast +\sum_{k=1}^mN_kW_CN_k^\ast+BB^\ast&amp;amp;=&amp;amp;0\\A^\ast W_0+W_0A+\sum_{k=1}^m N_k^\ast W_0N_k+C^\ast C&amp;amp;=&amp;amp;0\end{matrix}"]] 

-Because direct methods for solving such equations have a numerical complexity of O(n&lt;sup&gt;6&lt;/sup&gt;), alternative approaches become mandatory. For example, by mapping the Gramian matrices onto vectors, the generalized Lyapunov equations can be understood as systems of coupled linear equations which can be solved, e. g., by means of the [biconjugate gradient method](http://en.wikipedia.org/wiki/Biconjugate_gradient_method), preferentially with pre-conditioning \[3\]. Alternatively, the iterative methods \[4,5\] can be used instead which requires the solution of a standard Lyapunov equation in each step \[6\]
+Because direct methods for solving such equations have a numerical complexity of O(n&lt;sup&gt;6&lt;/sup&gt;), alternative approaches become mandatory. For example, by mapping the Gramian matrices onto vectors, the generalized Lyapunov equations can be understood as systems of coupled linear equations which can be solved, e. g., by means of the [biconjugate gradient method](http://en.wikipedia.org/wiki/Biconjugate_gradient_method), preferentially with pre-conditioning \[3\]. Alternatively, the iterative methods \[4,5\] can be used instead which requires the solution of a standard Lyapunov equation in each step \[6\]

  [[img src="Lyapunov_3.gif" alt="\begin{array}{rcl}AX_0+X_0\hat{A}^\ast +B\hat{B}^\ast&amp;amp;=&amp;amp;0\\AX_j+X_j\hat{A}^\ast+\sum_{k=1}^m N_kX_{j-1}\hat{N}^\ast_k+B\hat{B}^\ast&amp;amp;=&amp;amp;0,\,j.gt.0\end{array}"]]

@@ -73,7 +73,7 @@
 ℋ2 error norm
 --------------

-The ℋ2 error norm is obtained in the following way \[7\]: First we have to set up norm of the error system which is defined as follows:
+The ℋ2 error norm is obtained in the following way \[3,7\]: First we have to set up norm of the error system which is defined as follows:

 [[img src="Error_1.gif" alt="A_E=\begin{bmatrix}A&amp;amp;0\\0&amp;amp;\hat{A}\end{bmatrix},\,N_{k,E}=\begin{bmatrix}N_k&amp;amp;0\\0&amp;amp;\hat{N}_k\end{bmatrix},\,B_E=\begin{bmatrix}B\\\hat{B}\end{bmatrix},\,C_E=\begin{bmatrix}C&amp;amp;-\hat{C}\end{bmatrix}"]]

&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Burkhard Schmidt</dc:creator><pubDate>Thu, 22 Jan 2015 13:13:37 -0000</pubDate><guid>https://sourceforge.net7c44d2f63dd9eced8cb6acdbcaeb19de26230d00</guid></item><item><title>Numerics.DimRed modified by Burkhard Schmidt</title><link>https://sourceforge.net/p/wavepacket/wiki/Numerics.DimRed/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v132
+++ v133
@@ -52,15 +52,15 @@

 [[img src="WCO_3.gif" alt="\begin{matrix}AW_C+W_C A^\ast +\sum_{k=1}^mN_kW_CN_k^\ast+BB^\ast&amp;amp;=&amp;amp;0\\A^\ast W_0+W_0A+\sum_{k=1}^m N_k^\ast W_0N_k+C^\ast C&amp;amp;=&amp;amp;0\end{matrix}"]] 

-While direct methods for solving such equations have a numerical complexity of O(n&lt;sup&gt;6&lt;/sup&gt;), the iterative methods \[3,4\] can be used instead which requires the solution of a standard Lyapunov equation in each step \[5\]
+Because direct methods for solving such equations have a numerical complexity of O(n&lt;sup&gt;6&lt;/sup&gt;), alternative approaches become mandatory. For example, by mapping the Gramian matrices onto vectors, the generalized Lyapunov equations can be understood as systems of coupled linear equations which can be solved, e. g., by means of the [biconjugate gradient method](http://en.wikipedia.org/wiki/Biconjugate_gradient_method), preferentially with pre-conditioning \[3\]. Alternatively, the iterative methods \[4,5\] can be used instead which requires the solution of a standard Lyapunov equation in each step \[6\]

 [[img src="Lyapunov_3.gif" alt="\begin{array}{rcl}AX_0+X_0\hat{A}^\ast +B\hat{B}^\ast&amp;amp;=&amp;amp;0\\AX_j+X_j\hat{A}^\ast+\sum_{k=1}^m N_kX_{j-1}\hat{N}^\ast_k+B\hat{B}^\ast&amp;amp;=&amp;amp;0,\,j.gt.0\end{array}"]]

-(and similarly for the second (dual) equation for the observability Gramian). Convergence X&lt;sub&gt;j&lt;/sub&gt;→W&lt;sub&gt;C&lt;/sub&gt; is guaranteed if the eigenvalue of A with the largest (negative) real part is sufficiently separated from the imaginary axis \[4\]. While for linear control systems the Lyapunov equations can always be solved if matrix A is stable (see [here](Numerics.Control)), for the generalized Lyapunov equations we recall that a system is called stable when the system matrix A has only eigenvalues in the open left half complex plane (i.e., excluding the imaginary axis). Stability thus means that there are constants λ , a &amp;gt; 0 such that ||exp(At)|| ≤ λ exp(−at), where ||•|| is a suitable matrix norm. If moreover
+(and similarly for the second (dual) equation for the observability Gramian). Convergence X&lt;sub&gt;j&lt;/sub&gt;→W&lt;sub&gt;C&lt;/sub&gt; is guaranteed if the eigenvalue of A with the largest (negative) real part is sufficiently separated from the imaginary axis \[5\]. While for linear control systems the Lyapunov equations can always be solved if matrix A is stable (see [here](Numerics.Control)), for the generalized Lyapunov equations we recall that a system is called stable when the system matrix A has only eigenvalues in the open left half complex plane (i.e., excluding the imaginary axis). Stability thus means that there are constants λ , a &amp;gt; 0 such that ||exp(At)|| ≤ λ exp(−at), where ||•|| is a suitable matrix norm. If moreover

 [[img src="WCO_4.gif" alt="\frac{\lambda^2}{2a}\sum_{k=1}^m||N_k||^2\langle 1"]]

-then controllability and observability Gramians exist. This can be achieved by a suitable scaling u→ηu, N→N/η, B →B/η with real η&amp;gt;1 which leaves the equations of motion invariant but, clearly, not the Gramians. Hence, by increasing η, we drive the system to its linear counterpart. For the limit η→∞, the system matrix N vanishes and we obtain a linear system. For this reason, η should not be chosen too large. Note that also the [shift of the spectrum of A](Numerics.Control) may serve to render the generalized Lyapunov equations solvable \[5\].
+then controllability and observability Gramians exist. This can be achieved by a suitable scaling u→ηu, N→N/η, B →B/η with real η&amp;gt;1 which leaves the equations of motion invariant but, clearly, not the Gramians. Hence, by increasing η, we drive the system to its linear counterpart. For the limit η→∞, the system matrix N vanishes and we obtain a linear system. For this reason, η should not be chosen too large. Note that also the [shift of the spectrum of A](Numerics.Control) may serve to render the generalized Lyapunov equations solvable \[6\].

 Dimension reduction
 -------------------
@@ -73,7 +73,7 @@
 ℋ2 error norm
 --------------

-The ℋ2 error norm is obtained in the following way \[6\]: First we have to set up norm of the error system which is defined as follows:
+The ℋ2 error norm is obtained in the following way \[7\]: First we have to set up norm of the error system which is defined as follows:

 [[img src="Error_1.gif" alt="A_E=\begin{bmatrix}A&amp;amp;0\\0&amp;amp;\hat{A}\end{bmatrix},\,N_{k,E}=\begin{bmatrix}N_k&amp;amp;0\\0&amp;amp;\hat{N}_k\end{bmatrix},\,B_E=\begin{bmatrix}B\\\hat{B}\end{bmatrix},\,C_E=\begin{bmatrix}C&amp;amp;-\hat{C}\end{bmatrix}"]]

@@ -93,12 +93,14 @@

 2. Z. Bai, D. Skoogh: [Lin. Alg. Appl. **415**, 406 (2006)](http://dx.doi.org/10.1016/j.laa.2005.04.032)

-3. E. Wachspress: [Appl. Math. Lett. **1**, 87 (1988)](http://dx.doi.org/10.1016/0893-9659(88)90183-8)
+3. T. Breiten, private communications (2014).

-4. T. Damm: [Numer. Lin. Alg. Appl. **15**, 853 (2008)](http://dx.doi.org/10.1137/110836742)
+4. E. Wachspress: [Appl. Math. Lett. **1**, 87 (1988)](http://dx.doi.org/10.1016/0893-9659(88)90183-8)

-5. B. Schäfer-Bung, C. Hartmann, B. Schmidt, and Ch. Schütte: [J. Chem. Phys. **135**, 014112 (2011)](http://dx.doi.org/10.1063/1.3605243)
+5. T. Damm: [Numer. Lin. Alg. Appl. **15**, 853 (2008)](http://dx.doi.org/10.1137/110836742)

-1. P. Benner, T. Breiten: [SIAM Journal on Matrix Analysis and Application, **33**, 859–885 (2012)](http://dx.doi.org/10.1137/110836742)
+6. B. Schäfer-Bung, C. Hartmann, B. Schmidt, and Ch. Schütte: [J. Chem. Phys. **135**, 014112 (2011)](http://dx.doi.org/10.1063/1.3605243)
+
+7. P. Benner, T. Breiten: [SIAM Journal on Matrix Analysis and Application, **33**, 859–885 (2012)](http://dx.doi.org/10.1137/110836742)

 For a review of the theory of linear control systems, see e.g. the excellent [lecture notes of Umea university](http://www.control.tfe.umu.se/Courses/Optimal_Control_for_Linear_Systems_2010) and/or the [books by Kemin Zhou](http://www.ece.lsu.edu/kemin).
&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Burkhard Schmidt</dc:creator><pubDate>Thu, 22 Jan 2015 13:11:10 -0000</pubDate><guid>https://sourceforge.net4ab0edbdce9968ff336b7025fc399c6319d8b8f1</guid></item></channel></rss>