Probabilistic design is a discipline within engineering design. It deals primarily with the consideration and minimization of the effects of random variability on the performance of an engineering system during the design phase. Typically, the effects studied and optimized are related to quality and reliability. Probabilistic design differs from the classical approach to design by accepting a small, quantified probability of failure instead of applying a safety factor.[2][3] It is used in a variety of applications to assess the likelihood of failure. Disciplines which make extensive use of probabilistic design principles include product design, quality control, systems engineering, machine design, civil engineering (particularly limit state design) and manufacturing.
When using a probabilistic approach to design, the designer no longer thinks of each variable as a single value or number. Instead, each variable is viewed as a continuous random variable with a probability distribution. From this perspective, probabilistic design predicts the flow of variability (or distributions) through a system.[4]
Because there are so many sources of random and systemic variability when designing materials and structures, it is greatly beneficial for the designer to model the factors studied as random variables. By considering this model, a designer can make adjustments to reduce the flow of random variability, thereby improving engineering quality. Proponents of the probabilistic design approach contend that many quality problems can be predicted and rectified during the early design stages and at a much reduced cost.[4][5]
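As a minimal sketch of this idea (not drawn from the cited sources), the snippet below propagates assumed distributions for an applied force and a cross-sectional area through the relation stress = force / area by random sampling; the distribution types and all numerical values are hypothetical.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(seed=0)
n_samples = 100_000

# Hypothetical input distributions: force in newtons, area in square metres.
force = rng.normal(loc=50_000.0, scale=2_000.0, size=n_samples)  # applied load F
area = rng.normal(loc=2.5e-4, scale=5e-6, size=n_samples)        # cross-sectional area A

# Propagate the variability through the deterministic relation sigma = F / A.
stress = force / area

print(f"mean stress   = {stress.mean():.3e} Pa")
print(f"std of stress = {stress.std(ddof=1):.3e} Pa")
</syntaxhighlight>
Rather than a single value of stress, the designer obtains a distribution of stresses, which can later be compared against a strength distribution to estimate a probability of failure.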
Typically, the goal of probabilistic design is to identify the design that will exhibit the smallest effects of random variability. Minimizing random variability is essential to probabilistic design because it limits uncontrollable factors, while also providing a much more precise determination of the probability of failure. The preferred design could be the one option out of several that is found to be most robust. Alternatively, it could be the only design option available, but with the optimum combination of input variables and parameters. This second approach is sometimes referred to as robustification, parameter design or design for six sigma.[4]
Though the laws of physics dictate the relationships between variables and measurable quantities such as force, stress, strain, and deflection, there are still three primary sources of variability when considering these relationships.[6]
The first source of variability is statistical, arising from the limitation of estimating parameters such as yield stress, Young's modulus, and true strain from a finite sample.[7] This measurement uncertainty is the most easily minimized of the three sources, since the variance of an estimated mean is inversely proportional to the sample size.
We can represent the variance due to measurement uncertainty as a corrective factor [math]\displaystyle{ B }[/math], which is multiplied by the true mean [math]\displaystyle{ X }[/math] to yield the measured mean [math]\displaystyle{ \bar X }[/math]. Equivalently, [math]\displaystyle{ \bar X = B X }[/math].
This yields the result [math]\displaystyle{ B = \frac{\bar X}{X} }[/math], and the variance of the corrective factor [math]\displaystyle{ B }[/math] is given as:
[math]\displaystyle{ Var[B]= \frac{Var[\bar X]}{X^{2}} = \frac{Var[X]}{nX^{2}} }[/math]
where [math]\displaystyle{ B }[/math] is the correction factor, [math]\displaystyle{ X }[/math] is the true mean, [math]\displaystyle{ \bar X }[/math] is the measured mean, and [math]\displaystyle{ n }[/math] is the number of measurements made.[6]
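As a brief numerical check of this relation (a sketch using hypothetical values, not data from the cited sources), repeatedly drawing samples of size [math]\displaystyle{ n }[/math] from a population with known mean and variance and forming [math]\displaystyle{ B = \bar X / X }[/math] should give an observed variance of [math]\displaystyle{ B }[/math] close to [math]\displaystyle{ Var[X]/(nX^{2}) }[/math]:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(seed=1)

true_mean = 250.0      # hypothetical true mean X (e.g., yield stress in MPa)
true_std = 10.0        # hypothetical population standard deviation
n = 20                 # measurements per experiment
n_experiments = 50_000

# Simulate many repeated experiments, each yielding a measured mean X-bar.
samples = rng.normal(true_mean, true_std, size=(n_experiments, n))
measured_means = samples.mean(axis=1)

# Correction factor B = X-bar / X for each simulated experiment.
B = measured_means / true_mean

print(f"observed Var[B]  = {B.var(ddof=1):.3e}")
print(f"predicted Var[B] = {true_std**2 / (n * true_mean**2):.3e}")
</syntaxhighlight>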
The second source of variability stems from the inaccuracies and uncertainties of the model used to calculate such parameters. These include the physical models we use to understand loading and their associated effects in materials. The uncertainty from the model of a physical measurable can be determined if both theoretical values according to the model and experimental results are available.
The measured value [math]\displaystyle{ \hat H(\omega) }[/math] is equivalent to the theoretical model prediction [math]\displaystyle{ H(\omega) }[/math] multiplied by a model error of [math]\displaystyle{ \phi(\omega) }[/math], plus the experimental error [math]\displaystyle{ \varepsilon(\omega) }[/math].[8] Equivalently,
[math]\displaystyle{ \hat H(\omega) = H(\omega) \phi(\omega) + \varepsilon(\omega) }[/math]
and the model error takes the general form:
[math]\displaystyle{ \phi(\omega) = \sum_{i = 0}^n a_i \omega^{i} }[/math]
where [math]\displaystyle{ a_i }[/math] are coefficients of regression determined from experimental data.[8]
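As an illustration of how such coefficients might be obtained (a sketch using synthetic data and a hypothetical model, not the procedure of the cited sources), a polynomial can be fitted to the ratio of measured to theoretical response over a range of [math]\displaystyle{ \omega }[/math]:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(seed=2)

# Hypothetical frequency range and a hypothetical theoretical model H(omega).
omega = np.linspace(0.1, 5.0, 50)
H_theory = 1.0 / (1.0 + omega**2)

# Synthetic "measured" response: the theoretical value scaled by an unknown
# model error phi(omega), plus a small experimental error epsilon(omega).
phi_true = 1.0 + 0.05 * omega - 0.004 * omega**2
H_measured = H_theory * phi_true + rng.normal(0.0, 0.0005, size=omega.size)

# Estimate phi(omega) pointwise, then fit phi(omega) ~ a_0 + a_1*omega + a_2*omega^2.
phi_estimate = H_measured / H_theory
coeffs = np.polyfit(omega, phi_estimate, deg=2)  # highest-order coefficient first

print("estimated coefficients (a_2, a_1, a_0):", coeffs)
</syntaxhighlight>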
Finally, the last source of variability is the intrinsic variability of any physical measurable. A fundamental random uncertainty is associated with all physical phenomena, and this variability is comparatively the most difficult to minimize. Thus, each physical variable and measurable quantity can be represented as a random variable with a mean and a variance.
Consider the classical approach to tensile testing of materials. The stress experienced by a material is given as a single value (i.e., the applied force divided by the cross-sectional area perpendicular to the loading axis). The yield stress, the maximum stress the material can support before plastic deformation, is also given as a single value. Under this approach, there is a 0% chance of material failure below the yield stress and a 100% chance of failure above it. However, these assumptions break down in the real world.
The yield stress of a material is often only known to a certain precision, meaning that there is an uncertainty and therefore a probability distribution associated with the known value.[6][8] Let the probability distribution function of the yield strength be given as [math]\displaystyle{ f(R) }[/math].
Similarly, the applied or predicted load can also only be known to a certain precision, so the stress the material will undergo is uncertain as well. Let this probability distribution be given as [math]\displaystyle{ f(S) }[/math].
The probability of failure corresponds to the interference (overlap) of these two distributions, i.e. the probability that the applied stress exceeds the yield strength. Mathematically:
[math]\displaystyle{ P_f = P(R\lt S)= \int\limits_{-\infty}^{\infty}f(R)\left[\int\limits_{R}^{\infty}f(S)\,dS\right]dR }[/math]
or equivalently, if the difference between yield strength and applied stress is defined as a new random variable [math]\displaystyle{ Q = R-S }[/math], then:
[math]\displaystyle{ P_f = P(Q \lt 0) = \int\limits_{-\infty}^{0} f(Q)\,dQ }[/math]
where, for independent [math]\displaystyle{ R }[/math] and [math]\displaystyle{ S }[/math], the variance of [math]\displaystyle{ Q }[/math] is given by [math]\displaystyle{ \sigma_Q^{2} = \sigma_R^{2}+ \sigma_S^{2} }[/math], so that its standard deviation is [math]\displaystyle{ \sigma_Q = \sqrt{\sigma_R^{2}+ \sigma_S^{2}} }[/math].
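If, purely for illustration, [math]\displaystyle{ R }[/math] and [math]\displaystyle{ S }[/math] are further assumed to be independent and normally distributed, then [math]\displaystyle{ Q }[/math] is also normal and the failure probability can be evaluated directly from its distribution; the numerical values below are hypothetical.
<syntaxhighlight lang="python">
from math import sqrt
from scipy.stats import norm

# Hypothetical means and standard deviations (e.g., in MPa).
mu_R, sigma_R = 300.0, 15.0    # yield strength distribution f(R)
mu_S, sigma_S = 250.0, 20.0    # applied stress distribution f(S)

# Q = R - S is normal with mean mu_R - mu_S and variance sigma_R^2 + sigma_S^2.
mu_Q = mu_R - mu_S
sigma_Q = sqrt(sigma_R**2 + sigma_S**2)

# P_f = P(Q < 0) is the normal CDF of Q evaluated at zero.
P_f = norm.cdf(0.0, loc=mu_Q, scale=sigma_Q)
print(f"probability of failure P_f = {P_f:.4f}")
</syntaxhighlight>
For the hypothetical numbers above this gives a failure probability of roughly 0.023.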
Probabilistic design principles thus allow a quantitative determination of the probability of failure, whereas the classical model assumes that no failure occurs below the yield strength.[9] Because the classical applied-load versus yield-stress model has these limitations, modeling both quantities with probability distributions and calculating the resulting failure probability is a more realistic approach: it assigns a quantitative probability of failure under any loading condition in place of a definitive yes or no.
In essence, probabilistic design focuses on predicting the effects of variability. In order to predict and calculate the variability associated with model uncertainty, many theoretical models have been devised and used across different disciplines to determine theoretical values for parameters such as stress and strain, and these models are applied alongside probabilistic design.
Additionally, many statistical methods are used to quantify and predict the random variability of the desired output; one such sampling-based approach is sketched below.
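As a sketch of one widely used sampling-based approach (shown here with hypothetical inputs and not necessarily among the specific methods covered by the cited sources), the failure probability of the previous example can also be estimated by Monte Carlo simulation, without relying on a closed-form distribution for [math]\displaystyle{ Q }[/math]:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(seed=3)
n_samples = 1_000_000

# Hypothetical strength and stress distributions (same parameters as above).
R = rng.normal(300.0, 15.0, size=n_samples)   # sampled yield strengths
S = rng.normal(250.0, 20.0, size=n_samples)   # sampled applied stresses

# A trial fails whenever the sampled stress exceeds the sampled strength.
P_f_estimate = np.mean(S > R)
print(f"Monte Carlo estimate of P_f = {P_f_estimate:.4f}")
</syntaxhighlight>
With a sufficiently large number of samples, the estimate converges to the analytic value obtained above.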
Original source: https://en.wikipedia.org/wiki/Probabilistic design.