
Conversation

@SmallJoker
Member

This makes the FPS display in the debug information more pleasant to look at, especially at very low dtime values, e.g. when disabling vsync.

To do

This PR is Ready for Review.

How to test

1. `pause_fps_max = 10`, open the pause menu and focus another window -> the FPS counter must go down in a reasonable time
2. `vsync = false` and regular gameplay -> the FPS counter goes up and shows a fairly readable average value, at the cost of increased jitter (perhaps calculating 1% lows would be more appropriate?)

sfan5 added the Bugfix 🐛 label Nov 21, 2025
@sfan5
Collaborator

sfan5 commented Nov 21, 2025

When just starting the game, it takes multiple seconds to go from an unknown high value down to the actual FPS.
It also takes noticeably long to "recover" if you turn around and face away from a complex scene.

Also it doesn't really agree with what mangohud says (may or may not be a problem):
[screenshot: in-game FPS counter next to mangohud's reading]

@SmallJoker
Member Author

> unknown high value

The "issue" is that we're averaging the client loop time (dtime), which is initially 0.0001 on my system due to draw_times.limit(device, &dtime); updating for the first time after m_rendering_engine->run().

I could introduce a boolean to ignore the first iteration and use the subsequent one as the initial value for `jp->avg`, if that's preferred.
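A minimal sketch of what that could look like, assuming an exponentially weighted moving average of dtime; the class name, smoothing factor, and counter-based seeding are illustrative, not taken from the codebase:

```cpp
// Illustrative sketch, not the PR's actual code: an exponentially
// weighted moving average of dtime that ignores the bogus first sample
// and seeds the average with the next one, as suggested above.
class FpsAverage {
	static constexpr float ALPHA = 0.2f; // smoothing factor (illustrative)
	float m_avg_dtime = 0.0f;
	int m_iteration = 0;

public:
	// `dtime` is the duration of the last client-loop iteration in seconds.
	void update(float dtime)
	{
		++m_iteration;
		if (m_iteration == 1)
			return; // first dtime is not meaningful yet, skip it
		if (m_iteration == 2) {
			m_avg_dtime = dtime; // seed with the first trustworthy sample
			return;
		}
		m_avg_dtime = (1.0f - ALPHA) * m_avg_dtime + ALPHA * dtime;
	}

	float getFps() const
	{
		return m_avg_dtime > 0.0f ? 1.0f / m_avg_dtime : 0.0f;
	}
};
```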

@HybridDog
Contributor

As far as I know, the smoothness the player perceives depends more on the recent maximum dtime than on the average dtime.
Averaging the dtime over time to show a number to the player therefore sounds like a bad approach to me.
Would the number be more meaningful if we showed the minimum inverse dtime within a recent time interval?

This could be implemented with the following calculations:

| Variable | Meaning |
| --- | --- |
| $t_n$ | Time in frame $n$ |
| $fps_n$ | Number shown to the player |
| $I$ | Constant controlling the speed of forgetting old maximum dtime values |
| $m_{p,n}$ | Maximum dtime within the previous time interval |
| $m_{c,n}$ | Maximum dtime within the current time interval |
| $f_n$ | Last time when old dtime values were forgotten |
| $d_n$ | dtime in frame $n$: $d_n = t_n - t_{n-1}$ |
$$(f_{n+1}, m_{p,n+1}, m_{c,n+1}) = \begin{cases} (t_n, m_{c,n}, d_n), & t_n - f_n > I \\ (f_n, m_{p,n}, d_n), & t_n - f_n \le I \land m_{c,n} < d_n \\ (f_n, m_{p,n}, m_{c,n}), & \text{otherwise} \end{cases}$$

$$fps_n = \frac{1}{\max\{m_{c,n}, m_{p,n}\}}$$
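A minimal C++ sketch of this recurrence, assuming a monotonic time source; the interval constant and all names are illustrative:

```cpp
#include <algorithm>

// Sketch of the two-interval maximum-dtime tracker described above.
// INTERVAL (the constant I) and all names are illustrative.
class MinFpsEstimator {
	static constexpr float INTERVAL = 0.5f; // seconds; corresponds to I
	float m_forget_time = 0.0f;  // f_n: last time old maxima were forgotten
	float m_max_prev = 0.0f;     // m_{p,n}: max dtime of the previous interval
	float m_max_cur = 0.0f;      // m_{c,n}: max dtime of the current interval

public:
	// `now` is t_n, `dtime` is d_n = t_n - t_{n-1}.
	float update(float now, float dtime)
	{
		if (now - m_forget_time > INTERVAL) {
			// Start a new interval: forget maxima older than
			// the previous interval.
			m_forget_time = now;
			m_max_prev = m_max_cur;
			m_max_cur = dtime;
		} else {
			m_max_cur = std::max(m_max_cur, dtime);
		}
		float worst = std::max(m_max_cur, m_max_prev);
		return worst > 0.0f ? 1.0f / worst : 0.0f; // fps_n
	}
};
```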

And if showing the recent minimum inverse dtime value is a bad approach, too, would it nonetheless be better to average an exponentiated dtime value instead of the actual dtime, i.e. something like $a_n = \left(0.8 d_n^\lambda + 0.2 a_{n-1}^\lambda\right)^\frac{1}{\lambda}$ instead of $a_n = 0.8 d_n + 0.2 a_{n-1}$?
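As a sketch, that exponentiated average could look like this, with $\lambda = 2$ as an illustrative choice and the 0.8/0.2 weights taken from the formula above; nothing here is from the PR:

```cpp
#include <cmath>

// Exponentially weighted power mean of dtime: for lambda > 1, jittery
// samples pull the result above the plain arithmetic EMA, so the
// displayed FPS drops when frame times are uneven.
class PowerMeanFps {
	static constexpr float LAMBDA = 2.0f; // illustrative exponent
	float m_avg = 0.0f; // a_{n-1}
	bool m_seeded = false;

public:
	float update(float dtime)
	{
		if (!m_seeded) {
			m_avg = dtime;
			m_seeded = true;
		} else {
			// a_n = (0.8 d_n^lambda + 0.2 a_{n-1}^lambda)^(1/lambda)
			m_avg = std::pow(0.8f * std::pow(dtime, LAMBDA)
					+ 0.2f * std::pow(m_avg, LAMBDA), 1.0f / LAMBDA);
		}
		return m_avg > 0.0f ? 1.0f / m_avg : 0.0f;
	}
};
```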

@SmallJoker
Member Author

SmallJoker commented Nov 22, 2025

> the smoothness which the player perceives depends more on the recent maximum dtime value than the average dtime

Perhaps. But with this change, high dtime values are weighted more heavily, so the FPS counter should honor them better than before.

> would it nonetheless be better to average an exponentiated dtime value instead of the actual dtime

Why? Is there any research paper on that? What is the underlying mathematical reasoning for using exponents?

@sfan5
Collaborator

sfan5 commented Nov 22, 2025

Maybe we're better off just collecting the last 200 dtimes and doing trivial statistics on top, instead of something clever.
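A sketch of that approach, with the 200-sample window from the comment; the ring buffer and the median-based readout are illustrative choices:

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <vector>

// Ring buffer of the last N dtimes with simple statistics on top.
class DtimeWindow {
	static constexpr std::size_t N = 200;
	std::array<float, N> m_samples{};
	std::size_t m_pos = 0, m_count = 0;

public:
	void record(float dtime)
	{
		m_samples[m_pos] = dtime;
		m_pos = (m_pos + 1) % N;
		m_count = std::min(m_count + 1, N);
	}

	// FPS based on the median dtime of the window.
	float medianFps() const
	{
		if (m_count == 0)
			return 0.0f;
		std::vector<float> sorted(m_samples.begin(), m_samples.begin() + m_count);
		std::nth_element(sorted.begin(), sorted.begin() + m_count / 2, sorted.end());
		float med = sorted[m_count / 2];
		return med > 0.0f ? 1.0f / med : 0.0f;
	}
};
```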

@SmallJoker
Member Author

SmallJoker commented Nov 22, 2025

@sfan5 Unfortunately, collecting samples to display a median (or 99th percentile) also depends on the FPS: a 200-sample window spans almost 7 seconds at 30 FPS but only 0.2 seconds at 1000 FPS. The filter needs special handling to output a reasonable value in time anywhere in the range of 30 FPS to 1000 FPS. Thus, I doubt it gets any easier than the change I proposed.

@HybridDog
Contributor

> Why? Is there any research paper on that? What is the underlying mathematical reasoning for using exponents?

I couldn't find much in-depth information about how to calculate an FPS or similar value, so my comments are based only on my assumptions.
Using an exponent $\lambda > 1$ should give a compromise between the average and the maximum value: if all dtimes are equal, the calculated value is the average, and if they jitter a lot, it is larger than the average. For example, the quadratic mean ($\lambda = 2$) of dtimes 10 ms and 30 ms is $\sqrt{(10^2 + 30^2)/2} \approx 22.4$ ms, above their arithmetic mean of 20 ms.

Maybe my comments are off-topic. If "frames per second" literally refers to the number of frames rendered divided by the measurement time, then the inverse of the maximum or median dtime should perhaps not be called FPS.

sfan5 added the Action / change needed label Dec 6, 2025