Hi there,
First of all, I want to thank all the devs, especially Matt, for helping me out with my previous problem. It turned out there was a small discrepancy between my MPS and ED implementations, and OpenMPS was indeed able to reproduce the ED results to high precision.
Right now I just have a quick question: could OpenMPS enable me to calculate something like < Ex | Obs | GS >? By Ex, Obs, and GS I mean an excited state, the observable of interest, and the ground state, respectively. I've noticed that there are Fortran subroutines in the source code that can calculate inner products, so this should be doable, but I am not 100% sure how to approach it in an efficient way.
Thank you very much for your time, as always.
Hello,
No, it is not possible right now to define this observable via the Python interface. But indeed, the inner products are already in Fortran; e.g., you could calculate the overlap between the ground state and the excited state (and verify that they are orthogonal).

Depending on what your observable really is, there might be a chance to measure reduced density matrices of the ground and excited states and then handle your measurement in your Python post-processing. If you share what measure you had in mind, we might be able to give a bit more feedback.
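For what it's worth, here is a minimal sketch of that post-processing step, assuming the reduced density matrix and the operator have already been exported as plain NumPy arrays (the file names below are made up, not OSMPS output names):

```python
import numpy as np

# Hypothetical exports: a reduced density matrix of the ground state on the
# sites of interest and the observable in the same local basis.
rho = np.load("rho_gs.npy")     # shape (d, d) for the chosen subsystem
obs = np.load("obs_local.npy")  # same shape as rho

# Expectation value within one state: <GS| O |GS> = Tr(rho O).
value = np.trace(rho @ obs)
print(value.real)
```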
Best regards,
Daniel
Hi there,
Sorry for the very late reply; we were distracted by something else. Please also see the attached files (q.png and q2.png); we have two questions.
Actually, I think I know part of the answer to my first question: I can just add a FiniteFunction to the MPO observable. However, it would still be nice to be able to calculate the observable between different states.
That said, I would like to replace my first question with this one: is it possible to avoid doing the full sum, as shown in the attachment?
Last edit: Keyi Liu 2021-01-27
I will start with q.png ... otherwise, I fear I will risk losing your train of thought:

The left-hand side of Eq. (1) looks a lot like the b b_dagger correlation measurement we have in the Bose-Hubbard example. I assume c and c_dagger are the annihilation and creation operators in your local basis, and then I would move the summation outside of < ... | ... | ... > as on the RHS, most likely including the phi_i, phi_j. I would take the outer product of the vector phi with itself to build a matrix and multiply it element-wise with the correlation matrix (see the Bose-Hubbard example). Then sum everything; this works from the Python interface without any problems.
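A minimal sketch of that recipe, assuming the L x L correlation matrix <c_i^dagger c_j> has already been measured and saved to disk (the file name and the form of phi below are placeholders):

```python
import numpy as np

L = 20                             # system size (example value)
phi = np.ones(L, dtype=complex)    # placeholder for the phi_i of Eq. (1)

# L x L matrix of measured correlations <c_i^dagger c_j> (hypothetical file name).
corr = np.load("corr_matrix.npy")

# Outer product of phi with itself, multiplied element-wise with the
# correlation matrix and summed. Whether the conjugate is needed on the
# first factor depends on the exact form of Eq. (1).
weights = np.outer(np.conj(phi), phi)
result = np.sum(weights * corr)
print(result)
```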
The combination of GS and the excited state would need some work on the Fortran side.
We have an orthogonalization of MPS states implemented in OSMPS, e.g., for building a basis in the powers of H as H^n |psi>. My concerns are: (a) although you seem to have L^2 + 1 basis states scaling with the system size L (and not a complete basis growing with the Hilbert space dimension), it would get out of hand even for moderate system sizes ... 400 states for L = 20 sounds like too much; and (b) the stability of Gram-Schmidt ... your equation does not go far enough for me to see whether you take the projection onto each of the previous vectors, but either way you end up with bad scaling or with vulnerability to numerical instability.
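For reference, the stability point in (b) is the generic one for classical Gram-Schmidt. A modified Gram-Schmidt loop, sketched here on plain vectors rather than MPS just to illustrate the idea, subtracts the projection onto each previously accepted vector one at a time and is the numerically safer variant:

```python
import numpy as np

def modified_gram_schmidt(vectors, tol=1e-12):
    """Orthonormalize a list of vectors, removing the component along each
    previously accepted vector one at a time (modified Gram-Schmidt)."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=complex)
        for q in basis:
            w = w - np.vdot(q, w) * q   # project out the q-component
        norm = np.linalg.norm(w)
        if norm > tol:                  # drop (numerically) dependent vectors
            basis.append(w / norm)
    return basis
```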
So, possible: yes, but I would try to simplify the expression to measurements of the form < n | c c_dagger | GS > and < GS | c c_dagger | GS >. The four-operator correlation is not implemented in the most general form, but I would consider it easier than the orthogonalization (implementation-wise and scaling-wise).
I have to check on q2.png another time to judge the approach.
Best regards,
Daniel
Thank you for the reply. For q2, I now think I can calculate the O_CD using the nftotal operators and the O_BO using corr2nn.
Last edit: Keyi Liu 2021-02-26