I have a numerical simulation where I'm randomly varying an input parameter.
This results in variations in three output parameters: A, B and C. The output parameters can be assumed to have normal distributions, but are correlated to each other.
If I calculate the mean and standard deviation of the sampled values of A, B and C, is there any way I could estimate the change in the means of B and C due to a small change in the mean of A, taking the correlations into account? I was thinking of an approach based on sensitivity coefficients, but I'm not sure that's appropriate.
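For concreteness, here is a rough sketch of the kind of thing I had in mind (the numbers and the generated samples are just placeholders standing in for my simulation outputs, and I'm not sure this is statistically sound). The idea is to take the regression slope Cov(A, B)/Var(A) as a "sensitivity coefficient" and use it to translate a small shift in the mean of A into an estimated shift in the mean of B (and likewise for C):

```python
import numpy as np

# Placeholder data: in practice A, B, C would be the correlated,
# roughly normal outputs of my Monte Carlo runs.
rng = np.random.default_rng(0)
n = 10_000
A = rng.normal(10.0, 2.0, n)
B = 0.5 * A + rng.normal(0.0, 1.0, n)   # correlated with A
C = -1.2 * A + rng.normal(0.0, 3.0, n)  # correlated with A

cov = np.cov(np.vstack([A, B, C]))  # 3x3 sample covariance matrix

# "Sensitivity" of B and C to A, taken as regression slopes
# Cov(A, B) / Var(A) and Cov(A, C) / Var(A).
slope_B = cov[0, 1] / cov[0, 0]
slope_C = cov[0, 2] / cov[0, 0]

delta_A = 0.1  # hypothetical small shift in the mean of A
print("Estimated shift in mean of B:", slope_B * delta_A)
print("Estimated shift in mean of C:", slope_C * delta_A)
```

Is this a reasonable way to account for the correlations, or is there a better-founded approach?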
Any help/advice would be very welcome! Thanks!