Description
The new `check_bounds` flag can be used to skip the parameter and value checks done by `bound` in most `logp` methods (as intended), but it also affects the `logcdf` methods of distributions evaluated within a model context.
Unlike most `logp`s, however, the `logcdf` is usually defined over the whole real line, even when the respective pdf is only defined over a finite or discrete domain. It might therefore make sense for the `logcdf` to return the correct result (for valid parameters) even when `check_bounds` is `False`.
```python
In [4]: with pm.Model(check_bounds=False) as m:
   ...:     dist = pm.Bernoulli.dist(p=.1)
   ...:     print(dist.logcdf(-2).eval())
   ...:
-0.10536051565782631
```
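For reference, the value above is `log(1 - p)`, whereas the CDF of a Bernoulli evaluated at `-2` is `0`, so its log should be `-inf`. A standalone NumPy helper (hypothetical, not PyMC3 code) showing the expected piecewise behavior:

```python
import numpy as np

def bernoulli_logcdf(value, p):
    """Log CDF of a Bernoulli(p), defined for all real inputs.

    CDF(v) = 0      for v < 0       -> log CDF = -inf
    CDF(v) = 1 - p  for 0 <= v < 1  -> log CDF = log1p(-p)
    CDF(v) = 1      for v >= 1      -> log CDF = 0
    """
    if value < 0:
        return -np.inf
    if value < 1:
        return np.log1p(-p)
    return 0.0
```

With this, `bernoulli_logcdf(-2, 0.1)` is `-inf` regardless of any bounds flag, which is the behavior proposed below.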
This would require changing most of the `logcdf` methods to apply `bound` only to the distribution parameters, rather than to the input values as is done now. For instance, the Bernoulli `logcdf` would be changed to:
```python
return bound(
    tt.switch(
        tt.lt(value, 0),
        -np.inf,
        tt.switch(
            tt.lt(value, 1),
            tt.log1p(-p),
            0,
        ),
    ),
    0 <= p,
    p <= 1,
)
```
This way the `logcdf` would not return a wrong result when `check_bounds` is `False` due to an out-of-domain input value (though it still could for invalid parameters). I think this would make the `check_bounds` flag behavior more consistent and intuitive. I would also extend the `check_bounds` docstring to explain the `logcdf` implications.
This might become more relevant if and when we implement generic Truncated and Censored classes, as discussed in #1864.
If people agree this makes sense, I am happy to do a PR for it.