From: Klaus S. <kl...@sp...> - 2022-01-27 22:30:04
Hi Yevgeniy,

I don't know if QuantLib would be my first choice to solve general PDEs, but the script below does the job. The definition of D(x,t) and the heat diffusion equation are taken e.g. from
https://www.uni-muenster.de/imperia/md/content/physik_tp/lectures/ws2016-2017/num_methods_i/heat.pdf

Note that FdmBackwardSolver rolls back from t_max to 0, which is why the diffusion coefficient D(x,t) = x + t appears as x + (t_max - t) in setTime.

from typing import List

import QuantLib as ql
import matplotlib.pyplot as plt
import numpy as np


class HeatDiffusionOperator:
    def __init__(self, t_max: float, x_values: List[float]):
        self.__mesher = ql.FdmMesherComposite(
            ql.Predefined1dMesher(x_values)
        )
        self.__t_max = t_max
        self.__n = len(x_values)
        self.__x_values = ql.Array(x_values)
        self.__dx = ql.FirstDerivativeOp(0, self.__mesher)
        self.__ddx = ql.SecondDerivativeOp(0, self.__mesher)
        self.__map = ql.TripleBandLinearOp(0, self.__mesher)

    def size(self) -> int:
        return 1

    def setTime(self, t1: float, t2: float):
        t = 0.5 * (t1 + t2)
        self.__map.axpyb(
            ql.Array(1, 1.0),
            self.__dx,
            self.__ddx.mult(
                self.__x_values + ql.Array(self.__n, self.__t_max - t)),
            ql.Array()
        )

    def apply(self, r: ql.Array) -> ql.Array:
        return self.__map.apply(r)

    def solve_splitting(self, direction: int,
                        r: ql.Array, dt: float) -> ql.Array:
        return self.__map.solve_splitting(r, dt, 1.0)


if __name__ == "__main__":
    n_points = 400
    n_time_steps = 500
    x_min = 0
    x_max = 0.4
    t_max = 0.02

    x_values = np.arange(
        x_min, x_max + 1e-8, (x_max - x_min) / (n_points - 1))
    y_values = ql.Array(
        list(np.exp(-(x_values - 0.5 * (x_max + x_min)) ** 2 / 1e-4)))

    op = HeatDiffusionOperator(t_max, list(x_values))

    solver = ql.FdmBackwardSolver(
        ql.FdmLinearOpCompositeProxy(op),
        ql.FdmBoundaryConditionSet(),
        ql.FdmStepConditionComposite(
            ql.DoubleVector(), ql.FdmStepConditionVector()),
        ql.FdmSchemeDesc.CrankNicolson()
    )
    solver.rollback(y_values, t_max, 0.0, n_time_steps, 0)

    plt.plot(x_values, y_values)
    plt.show()

On Tuesday, 25 January 2022 23:39:57 CET, Yev via QuantLib-users wrote:
> Hi,
> I have a Python code that solves a 1-dimensional heat diffusion PDE.
> My PDE has diffusion coefficient D(x,t) = x + t.
> My grid has 400 points in the range [0, 0.4].
> I have a set of x values at time 0, and I iterate for 500 time steps.
> I'd like to rewrite it using QuantLib, as I hope it will speed up my
> process. I looked through the documentation and source code, but
> couldn't figure out which classes I should use to implement it.
> Can you point me to how such a problem can be solved with QuantLib?
> Thank you,
> Yevgeniy
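P.S. If you want to sanity-check the QuantLib result without QuantLib, the same PDE u_t = d/dx((x + t) du/dx) = (x + t) u_xx + u_x can be integrated with a plain explicit FTCS scheme in NumPy. This is only a rough sketch: the helper name heat_ftcs, the coarser 101-point grid, the step count, and the frozen (Dirichlet) boundary values are my own choices, picked so the explicit scheme stays within its stability limit dt <= dx^2 / (2 * max D).

import numpy as np

def heat_ftcs(x, u0, t_max, n_steps):
    # Explicit forward-time, centered-space step for
    #   u_t = (x + t) * u_xx + u_x
    # Boundary values are held fixed at their initial values.
    u = u0.copy()
    dx = x[1] - x[0]
    dt = t_max / n_steps
    for n in range(n_steps):
        t = n * dt
        d = x + t                                        # D(x, t) = x + t
        u_xx = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2  # centered 2nd deriv.
        u_x = (u[2:] - u[:-2]) / (2 * dx)                # centered 1st deriv.
        u[1:-1] += dt * (d[1:-1] * u_xx + u_x)
    return u

x = np.linspace(0.0, 0.4, 101)
u0 = np.exp(-(x - 0.2) ** 2 / 1e-4)       # same Gaussian initial condition
u = heat_ftcs(x, u0, t_max=0.02, n_steps=4000)

With these parameters dx = 0.004 and dt = 5e-6, comfortably below the stability bound, so the peak should spread and flatten just as in the QuantLib run.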