Ah ah ah...
I do have a few plans, but they are not clear yet...
- First, I am not sure that basing the probabilistic modelling on
OpenTURNS is the best thing to do... I am almost done with my overloaded
class, but I find the conversion between Numerical* OpenTURNS objects and
the more Pythonic arrays, lists and tuples rather complicated and heavy to
implement (you need to loop over the Numerical* items)... I am wondering
whether we wouldn't be better off sticking to a home-made implementation of
some basic distributions, starting from the ones implemented in FERUM.
Regarding multivariate distributions, I think that elliptical copulas
(Gaussian and Student) parameterized with rank correlations are good enough
in a first step.
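To fix ideas, here is a minimal sketch of what the home-made base class could look like (all names are made up, and the generic inverse cdf is just a bisection fallback; FERUM has closed forms for most of these):

```python
import math

class UnivariateDistribution:
    """Hypothetical base class for the home-made distributions.

    Subclasses overload pdf, cdf and inv_cdf (the latter is needed
    anyway for isoprobabilistic transforms and copula sampling).
    """

    def pdf(self, x):
        raise NotImplementedError

    def cdf(self, x):
        raise NotImplementedError

    def inv_cdf(self, p):
        """Generic fallback: bisection on the (monotone) cdf."""
        lo, hi = -1e8, 1e8
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if self.cdf(mid) < p:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)


class Normal(UnivariateDistribution):
    def __init__(self, mu=0.0, sigma=1.0):
        self.mu, self.sigma = mu, sigma

    def pdf(self, x):
        z = (x - self.mu) / self.sigma
        return math.exp(-0.5 * z * z) / (self.sigma * math.sqrt(2.0 * math.pi))

    def cdf(self, x):
        z = (x - self.mu) / (self.sigma * math.sqrt(2.0))
        return 0.5 * (1.0 + math.erf(z))


class Exponential(UnivariateDistribution):
    def __init__(self, rate=1.0):
        self.rate = rate

    def pdf(self, x):
        return self.rate * math.exp(-self.rate * x) if x >= 0.0 else 0.0

    def cdf(self, x):
        return 1.0 - math.exp(-self.rate * x) if x >= 0.0 else 0.0

    def inv_cdf(self, p):
        # Closed form overloading the bisection fallback.
        return -math.log(1.0 - p) / self.rate
```

Adding the 4 or 5 other distributions would then just be a matter of overloading pdf/cdf (and inv_cdf when a closed form exists).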
- Then I started to think about an efficient way to parse the variates
for the evaluation of the performance function(s), and I am not inspired...
Maybe we could gather the realizations in a dictionary whose keys would be
the RVs' names, and then use the trick you discovered, François.
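Whatever the trick turns out to be exactly, the dictionary part could look like this (just a sketch; the gfun string, names and values are made up — here the dictionary of realizations directly serves as the local namespace of the expression):

```python
import math

# Hypothetical performance function given as a string by the user.
gfun = "R - S"

# Realizations gathered in a dictionary keyed by the RVs' names.
realizations = {"R": 5.2, "S": 3.1}

# The dictionary is passed as the local namespace, so the RV names
# in the gfun string resolve directly to their realized values.
g = eval(gfun, {"math": math}, realizations)
# g <= 0 would flag a failure for this realization
```

That would keep the user-facing side of the gfun parser down to plain Python expressions.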
But first we have to write the base univariate distribution class and
overload its methods with 4 or 5 distributions... And produce examples and...
Then we should implement a simple Monte Carlo propagator for distribution
analysis, in order to test the gfun parser.
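The propagator itself could be as simple as this (a self-contained sketch with made-up names — the samplers are inlined here, but they would come from the distribution classes):

```python
import random

def monte_carlo(gfun, marginals, n=10000, seed=42):
    """Crude Monte Carlo estimate of the failure probability P[g(X) <= 0].

    `marginals` maps each RV name to a sampler, so each realization is
    gathered in a dictionary keyed by the RVs' names before being fed
    to the performance function.
    """
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        x = {name: sampler(rng) for name, sampler in marginals.items()}
        if gfun(**x) <= 0.0:
            failures += 1
    return failures / n

# Toy example: g = R - S with independent normal resistance and load.
pf = monte_carlo(
    lambda R, S: R - S,
    {"R": lambda rng: rng.gauss(7.0, 1.0),
     "S": lambda rng: rng.gauss(3.0, 1.0)},
)
```

Independent marginals only for now, of course — plugging in the elliptical copulas would replace the per-variable samplers by a joint one.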
I know it's hard to start, but we need to think twice before setting up the...
Maybe we could also start by writing the typical input file we'd like to set
up, in order to get an insight into:
- How do I create/use my marginal distributions?
- How do I get a joint distribution from a collection of univariate
distributions and a set of copulae?
- How do I define my performance model?
- How do I instantiate/run my uncertainty propagation methods?
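Something along these lines, maybe (pure wishful thinking — none of these classes or parameter names exists yet, they only illustrate the four questions above):

```python
# inputfile.py -- hypothetical user-facing script

# marginal distributions
R = Normal(mu=7.0, sigma=1.0, name="R")
S = Gumbel(mode=2.5, scale=0.4, name="S")

# joint distribution: marginals + a Gaussian copula parameterized
# with a rank (Spearman) correlation matrix
X = JointDistribution([R, S],
                      copula=GaussianCopula(rank_corr=[[1.0, 0.3],
                                                       [0.3, 1.0]]))

# performance model: failure when g <= 0
def gfun(R, S):
    return R - S

# uncertainty propagation
mc = MonteCarlo(X, gfun, n=100000)
results = mc.run()
```

Agreeing on such a file first would settle most of the API questions before we write any machinery.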
> From: "François Deheeger" <francois.deheeger@...>
> To: pyrely-general@...
> Date: Thu, 3 Feb 2011 21:31:30 +0100
> Subject: toc toc toc...
> Hi developers...
> are there any plans on your project?
> François Deheeger