FIX: Update python code to simplify and resolve FutureWarning (#540)
* Misc edits to prob lecture
* fix variable name and minor formatting update
* add explanation for infinite support
* ENH: update code to simplify and resolve warnings
* remove all asarray
* address missed merge conflict issues
* remove extra x=df['income']
* FIX: set pd option to see if FutureWarning is resolved for inf and na
* revert test by setting pd option
* upgrade anaconda==2024.06
---------
Co-authored-by: John Stachurski <[email protected]>
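The FutureWarning referenced in the commit bullets relates to pandas' deprecated `use_inf_as_na` option. A minimal sketch of the forward-compatible alternative — converting infinities to NaN explicitly before dropping them. The column name `income` follows the commit notes; the data values are made up for illustration:

```python
import numpy as np
import pandas as pd

# Toy data with an infinite value (made-up numbers; `income` is the
# column name mentioned in the commit notes)
df = pd.DataFrame({"income": [1.0, 2.0, np.inf, 4.0]})

# pd.set_option("mode.use_inf_as_na", True) raises a FutureWarning in
# recent pandas; converting infinities explicitly avoids the option entirely
clean = df["income"].replace([np.inf, -np.inf], np.nan).dropna()
```

This keeps the cleaning step visible in the code rather than hidden behind a global option.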
Let's discuss the connection between observed distributions and probability distributions.
@@ -941,7 +917,6 @@ ax.set_ylabel('density')
 plt.show()
 ```
 
-
 The match between the histogram and the density is not bad but also not very good.
 
 One reason is that the normal distribution is not really a good fit for this observed data --- we will discuss this point again when we talk about {ref}`heavy tailed distributions<heavy_tail>`.
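A minimal sketch of the kind of histogram-versus-density comparison being discussed, with the plotting code omitted. Simulated draws stand in for the lecture's observed dataset, and the parameters (mean 2.0, std 0.5) and sample size are arbitrary choices for illustration:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
# Simulated stand-in for the lecture's observed data
x = rng.normal(loc=2.0, scale=0.5, size=5_000)

# Fit a normal distribution by matching the sample mean and std
mu, sigma = x.mean(), x.std()

# Compare a density-normalized histogram against the fitted pdf
counts, edges = np.histogram(x, bins=40, density=True)
centers = (edges[:-1] + edges[1:]) / 2
gap = np.max(np.abs(counts - norm.pdf(centers, mu, sigma)))
```

Here `gap` measures how far the histogram bars sit from the fitted density at the bin centers; for genuinely normal data it is small, while for heavy-tailed data it stays large.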
@@ -967,8 +942,6 @@ ax.set_ylabel('density')
 plt.show()
 ```
 
-
-
 Note that if you keep increasing $N$, which is the number of observations, the fit will get better and better.
 
 This convergence is a version of the "law of large numbers", which we will discuss {ref}`later<lln_mr>`.
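The convergence claim can be checked numerically. In this sketch (standard normal draws; the bin count and sample sizes are arbitrary assumptions), the largest gap between a density histogram and the true pdf shrinks as $N$ grows:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def max_hist_gap(n, bins=30):
    """Largest gap between a density histogram of n standard normal
    draws and the exact N(0, 1) pdf, measured at the bin centers."""
    x = rng.standard_normal(n)
    counts, edges = np.histogram(x, bins=bins, range=(-4, 4), density=True)
    centers = (edges[:-1] + edges[1:]) / 2
    return np.max(np.abs(counts - norm.pdf(centers)))

# The gap shrinks as the number of observations grows
gaps = [max_hist_gap(n) for n in (100, 10_000, 1_000_000)]
```

With a fixed bin grid, the remaining error at very large $N$ is dominated by binning bias rather than sampling noise.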