How can I implement an accurate (but variable) FPS limit/cap in my OpenGL application?

I am currently working on an OpenGL application to display a few 3D spheres to the user, which they can rotate, move around, etc. That being said, there's not much in the way of complexity here, so the application runs at quite a high framerate (~500 FPS).

Obviously, this is overkill - even 120 FPS would be more than enough, but my issue is that running the application at full speed eats away at my CPU, causing excess heat, power consumption, etc. What I want to do is let the user set an FPS cap so that the CPU isn't being overly used when it doesn't need to be.

I'm working with freeglut and C++, and have already set up the animations/event handling to use timers (using glutTimerFunc). The glutTimerFunc, however, only accepts an integer number of milliseconds - so if I want 120 FPS, the closest I can get is (int)1000/120 = 8 ms, which equates to 125 FPS (I know it's a negligible difference, but I still just want to put in an FPS limit and get exactly that FPS if I know the system can render faster).

Furthermore, using glutTimerFunc to limit the FPS never works consistently. Let's say I cap my application to 100 FPS - it usually never goes above 90-95 FPS. Again, I've tried to work out the time difference between rendering/calculations, but then it always overshoots the limit by 5-10 FPS (possibly timer resolution).

I suppose the best comparison here would be a game (e.g. Half Life 2) - you set your FPS cap, and it always hits that exact amount. I know I could measure the time deltas before and after I render each frame and then loop until I need to draw the next one, but this doesn't solve my 100% CPU usage issue, nor does it solve the timing resolution issue.

Is there any way I can implement an effective, cross-platform, variable frame rate limiter/cap in my application? Or, in another way, is there any cross-platform (and open source) library that implements high resolution timers and sleep functions?

Edit: I would prefer to find a solution that doesn't rely on the end user enabling VSync, as I am going to let them specify the FPS cap.

Edit #2: To all who recommend SDL (which I did end up porting my application to): is there any difference between using the glutTimerFunc function to trigger a draw, and using SDL_Delay to wait between draws? The documentation for each mentions the same caveats, but I wasn't sure if one was more or less efficient than the other.

Edit #3: Basically, I'm trying to figure out if there is a (simple) way to implement an accurate FPS limiter in my application (again, like Half-Life 2). If this is not possible, I will most likely switch to SDL (it makes more sense to me to use a delay function rather than using glutTimerFunc to call back the rendering function every x milliseconds).

I would suggest using sub-millisecond-precision system timers (QueryPerformanceCounter, gettimeofday) to get timing data. These also help you profile performance in optimized release builds.
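
For illustration, a minimal sketch of sub-millisecond frame timing using C++11's std::chrono::steady_clock (typically implemented on top of QueryPerformanceCounter on Windows and clock_gettime on POSIX); the names here are mine, not from any particular library:

#include <chrono>
#include <cstdio>

int main() {
    using clock = std::chrono::steady_clock;

    auto frameStart = clock::now();

    // ... render one frame here ...

    auto frameEnd = clock::now();

    // frame time in milliseconds, with sub-millisecond precision
    double frameMs =
        std::chrono::duration<double, std::milli>(frameEnd - frameStart).count();
    std::printf("frame took %.3f ms\n", frameMs);
}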

The typical way to yield a predictable (if not constant) frame rate with video or 3D graphics is described by the following pseudo-code:

1. Prepare the next frame (render into the back buffer);
2. Sleep for the remainder of the time slice;
3. Ask for the frame to be displayed (swap the front and back buffers).
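
As a minimal sketch of that loop, assuming C++11 and hypothetical renderFrame()/swapBuffers() stand-ins for your drawing and buffer-swap calls:

#include <chrono>
#include <thread>

void renderFrame();  // hypothetical: draw the scene into the back buffer
void swapBuffers();  // hypothetical: display the finished frame

void runLoop(bool &running) {
    using clock = std::chrono::steady_clock;
    // one time slice per frame; 120 FPS -> 8.333... ms
    const auto slice = std::chrono::duration_cast<clock::duration>(
        std::chrono::duration<double, std::milli>(1000.0 / 120.0));

    auto next = clock::now() + slice;
    while (running) {
        renderFrame();                        // 1. prepare the next frame
        std::this_thread::sleep_until(next);  // 2. sleep out the time slice
        swapBuffers();                        // 3. swap front and back buffers
        next += slice;
    }
}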

I'd advise you to use SDL. I personally use it to manage my timers. Moreover, it can limit your FPS to your screen refresh rate (V-Sync) with SDL 1.3. That enables you to limit CPU usage while keeping the best screen performance (even if you had more frames, they couldn't be displayed anyway, since your screen doesn't refresh fast enough).

The function is

SDL_GL_SetSwapInterval(1);
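
For context, here is roughly where that call sits in SDL2 (which is what SDL 1.3 became); the window title and size are placeholders:

#include <SDL.h>

int main(int, char **) {
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *win = SDL_CreateWindow("spheres", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, 800, 600,
                                       SDL_WINDOW_OPENGL);
    SDL_GLContext ctx = SDL_GL_CreateContext(win);

    // request vsync; SDL_GL_SetSwapInterval returns -1 if it is unsupported
    if (SDL_GL_SetSwapInterval(1) != 0) {
        // fall back to a manual frame limiter here
    }

    // ... render loop, calling SDL_GL_SwapWindow(win) each frame ...

    SDL_GL_DeleteContext(ctx);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}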

If you want some code for timers using SDL, you can see it here:

my timer class

Good luck :)


The easiest way to solve it is to enable Vsync. That's what I do in most games to prevent my laptop from getting too hot. As long as you make sure the speed of your rendering path is not connected to the other logic, this should be fine.
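
To illustrate that decoupling, a sketch of a delta-time-based update (update and render are hypothetical placeholders for your own logic):

#include <chrono>

void update(double dt);  // hypothetical: advance the simulation by dt seconds
void render();           // hypothetical: draw the current state

void gameLoop(bool &running) {
    using clock = std::chrono::steady_clock;
    auto last = clock::now();
    while (running) {
        auto now = clock::now();
        double dt = std::chrono::duration<double>(now - last).count();
        last = now;

        update(dt);  // simulation speed depends on dt, not on the frame rate
        render();    // vsync (or a limiter) decides how often this completes
    }
}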

There is a function glutGet( GLUT_ELAPSED_TIME ) which returns the time since the program started in milliseconds, but that's likely still not precise enough.

A simple way is to make your own timer method, which uses QueryPerformanceCounter on Windows and gettimeofday on POSIX systems.
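
A sketch of such a wrapper, assuming only the two APIs named above (note that gettimeofday returns wall-clock time, so it can jump if the system clock is adjusted):

#ifdef _WIN32
#include <windows.h>
#else
#include <sys/time.h>
#endif

// returns a timestamp in milliseconds, with sub-millisecond precision
double timeMs() {
#ifdef _WIN32
    LARGE_INTEGER freq, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&now);
    return 1000.0 * (double)now.QuadPart / (double)freq.QuadPart;
#else
    struct timeval tv;
    gettimeofday(&tv, 0);
    return tv.tv_sec * 1000.0 + tv.tv_usec / 1000.0;
#endif
}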

Or you can always use timer functions from SDL or SFML, which do basically the same as above.

FPS is not always the best way to measure performance, because it is not linear in the time domain. It is better to measure frame time, which is the inverse of the FPS (conversely, FPS is the inverse of frame time). So if the FPS is 25, the frame time is 1/25 = 0.04 seconds; an FPS of 200 means a frame time of 0.005 seconds.

You should not try to limit the rendering rate manually; instead, synchronize with the display's vertical refresh. This is done by enabling V sync in the graphics driver settings. Apart from preventing (your) programs from rendering at too high a rate, it also increases picture quality by avoiding tearing.

The swap interval extensions allow your application to fine-tune the V sync behaviour. But in most cases, just enabling V sync in the driver and letting the buffer swap block until sync is sufficient.
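
For reference, a hedged sketch of setting the swap interval through the WGL_EXT_swap_control and GLX_EXT_swap_control extensions (a GL context must already be current, and availability should really be checked against the extension string first):

#ifdef _WIN32
#include <windows.h>

void enableVSync() {
    typedef BOOL (WINAPI *SwapIntervalProc)(int);
    SwapIntervalProc wglSwapIntervalEXT =
        (SwapIntervalProc)wglGetProcAddress("wglSwapIntervalEXT");
    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(1); // block buffer swaps on the vertical retrace
}
#else
#include <GL/glx.h>

void enableVSync(Display *dpy, GLXDrawable drawable) {
    typedef void (*SwapIntervalProc)(Display *, GLXDrawable, int);
    SwapIntervalProc swapIntervalEXT =
        (SwapIntervalProc)glXGetProcAddress((const GLubyte *)"glXSwapIntervalEXT");
    if (swapIntervalEXT)
        swapIntervalEXT(dpy, drawable, 1); // block buffer swaps on retrace
}
#endif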


I think a good way to achieve this, no matter what graphics library you use, is to take a single clock measurement per frame in the game loop, so that every single tick (ms) is accounted for. That way the average FPS will be exactly the limit, just like in Half-Life 2. Hopefully the following code snippet explains what I am talking about:

//FPS limit
unsigned int FPS = 120;

//double holding clocktime on last measurement
double clock = 0;

while (cont) {
    //double holding difference between clocktimes
    double deltaticks;

    //double holding the clocktime in this new frame
    double newclock;

    //do stuff, update stuff, render stuff...

    //measure clocktime of this frame
    //this function can be replaced by any function returning the time in ms
    //for example clock() from <time.h>
    newclock = SDL_GetTicks();

    //calculate clockticks missing until the next loop should be
    //done to achieve an avg framerate of FPS 
    // 1000 / 120 makes 8.333... ticks per frame
    deltaticks = 1000.0 / FPS - (newclock - clock);

    /* if there is an integral number of ticks missing, then wait the
    remaining time
    SDL_Delay takes an integer number of ms to delay the program, like most
    delay functions do, and can be replaced by any delay function */
    if (floor(deltaticks) > 0)
        SDL_Delay((Uint32)deltaticks);

    //the clock measurement is now shifted forward in time by the amount
    //SDL_Delay waited and the fractional part that was not considered yet
    //(aka deltaticks); the fractional part is considered in the next frame
    if (deltaticks < -30) {
        /* don't try to compensate more than 30 ms (a few frames) behind the
        framerate; when the limit is higher than the achievable avg fps,
        deltaticks would keep sinking without this 30 ms limitation.
        this keeps the fps stable even if the achievable fps is
        macroscopically inconsistent. */
        clock = newclock - 30;
    } else {
        clock = newclock + deltaticks;
    }

    /* deltaticks can be negative when a frame took longer than it should
    have, or when the measured time the frame took was zero;
    the next frame then won't be delayed as long, to compensate for the
    previous frame taking longer. */


    //do some more stuff, swap buffers for example:
    SDL_RenderPresent(renderer); //this is SDL's swap-buffers function
}

I hope this example with SDL helps. It is important to measure the time only once per frame so that every frame is taken into account. I recommend modularizing this timing into a function, which also makes your code clearer. This code snippet has no comments, in case they just annoyed you in the last one:

unsigned int FPS = 120;

void renderPresent(SDL_Renderer * renderer) {
    static double clock = 0;
    double deltaticks;
    double newclock = SDL_GetTicks();

    deltaticks = 1000.0 / FPS - (newclock - clock);

    if (floor(deltaticks) > 0)
        SDL_Delay((Uint32)deltaticks);

    if (deltaticks < -30) {
        clock = newclock - 30;
    } else {
        clock = newclock + deltaticks;
    }

    SDL_RenderPresent(renderer);
}

Now you can call this function in your main loop instead of your swap-buffer function (SDL_RenderPresent(renderer) in SDL). In SDL you'd have to make sure the SDL_RENDERER_PRESENTVSYNC flag is turned off. This function relies on the global variable FPS, but you can think of other ways of storing it. I just put the whole thing in my library's namespace.


This method of capping the framerate delivers exactly the desired average framerate if there are no large differences in the loop time over multiple frames, because of the 30 ms limit on deltaticks. The deltaticks limit is required: when the FPS limit is higher than the actual framerate, deltaticks would otherwise keep dropping indefinitely, and when the framerate then rose above the FPS limit again, the code would try to compensate for the lost time by rendering every frame immediately, resulting in a huge framerate spike until deltaticks climbed back to zero. You can modify the 30 ms to fit your needs; it is just an estimate on my part. I did a couple of benchmarks with Fraps. It works with every imaginable framerate and delivers beautiful results from what I have tested.


I must admit I coded this just yesterday, so it is not unlikely to have some kind of bug. I know this question was asked 5 years ago, but the given answers did not satisfy me. Also, feel free to edit this post, as it is my very first one and probably flawed.

EDIT: It has been brought to my attention that SDL_Delay is very inaccurate on some systems. I heard of a case where it delayed far too long on Android. This means my code might not be portable to all your desired systems.
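
If you run into that, a common workaround (my sketch, not part of the original answer) is to sleep for most of the wait and then busy-wait the final stretch against a high-resolution clock:

#include <chrono>
#include <thread>

// wait until 'deadline', sleeping coarsely and spinning the last ~2 ms;
// the 2 ms margin is a guess - tune it to your platform's sleep granularity
void preciseWaitUntil(std::chrono::steady_clock::time_point deadline) {
    using namespace std::chrono;
    const auto spinMargin = milliseconds(2);
    if (deadline - steady_clock::now() > spinMargin)
        std::this_thread::sleep_until(deadline - spinMargin);
    while (steady_clock::now() < deadline) {
        // busy-wait; burns CPU only for the final margin
    }
}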


Comments
  • See related question: What's the usual way of controlling frame rate?
  • In retrospect, this is what I was trying to hint at. I was aware of using vertical synchronization, but what I was ultimately aiming at would in fact require the use of sub-millisecond, high-performance timers. I actually like to have my application run independent of the vertical synchronization frequency, so others may want to consider the following question: C++ Cross-Platform High-Resolution Timer
  • Vsync is good, but in my experience it has been difficult to ensure it works properly. There seem to be many methods to override any of the API settings your program might use to set Vsync, and it would go unheeded. There is also some fancy new stuff supported on new hardware, called Frame Rate Target or other such names, which is basically Vsync without the frame-rate hit when dipping below the refresh rate. It's very nice because it is power efficient and not degrading to performance. That said, use of a high precision timer as a fallback to cap the framerate is a good idea.
  • Does SDL_GL_SetSwapInterval(1); cause the FPS to be capped to my refresh rate even if I explicitly disable it in my graphics card driver? (e.g. in my NVIDIA Control Panel, I can set Vertical Sync to "Force Off").
  • To answer one of your other questions: you can't really control time in a game at a resolution better than the millisecond. The CPU's interrupt system doesn't permit you to do that, since other applications will take CPU time as well. Milliseconds are enough :) I have no idea about your VSync force-off, I use ATI.
  • Thank you for the response, but I should also have mentioned that I wanted to implement a solution that did not require a user to enable VSync. As for the high performance timers, how would I go about making the application sleep before I tell it to draw the next frame? (I know how to do it on a millisecond resolution, but was hoping for something better)
  • @Breakthrough: resolution of sleep operations is never guaranteed. Ask to sleep for smaller (safer) intervals and use the counter to find out exactly how much time you sleep a posteriori. Then, sleep if any time is left.
  • Individual applications can still use vsync via swapinterval settings without requiring the user to enable it in the graphics driver. Even if you want to limit framerate to a lower amount (such as a video playback application), you should still use vsync to avoid the image tearing.
  • I just have to point out that vsync does add some lag, especially because of all the crap that graphics card drivers do. For example nvidia by default does 3 prerendered frames when vsync is on and with 60fps that's already 50ms. So my advice to game devs is to always give user 3 options: 1) vsync 2) unlimited fps 3) manual fps limit
  • @Timo: Seriously? I never ran into such an issue. On all my systems and all my graphics cards, and their drivers, having V sync enabled made my programs run at the display refresh frequency. So please provide some backing information.
  • @Timo, you are correct, I remember that issue now. There was a noticeable lag between my input and the response on the screen when I had VSync enabled, so I have disabled it ever since. In my particular case here, though, a bit of lag would be acceptable (it's not a game) - although I would preferably like to give the user an option between the three, exactly as you said. @datenwolf, do you know if I can set the swap interval using freeglut? If not, then I might have to go SDL...
  • @datenwolf yeah I dunno I musta been having a seizure when I was reading this and writing that comment. Gonna delete it lol.
  • And I just remembered that the refresh may be as low as 24 Hz, e.g. a movie player playing a 24 Hz movie on a monitor that supports 24 Hz, so everything stays at the original framerate without conversion. And I also once played with a system that could yield 300 Hz (used with shutter-glasses stereoscopy).