Pretty much impossible to do this by algorithm. The number of variables - CPU and graphics card characteristics, system memory, which operating system, which underlying graphics library, system load and many others - is too large and varied.
You can easily time some test images yourself with various timer functions, or insert a timer into the program so that it accumulates measurements as it runs on a given system. Over time those measurements will converge on a reasonably accurate estimate for that system, but even that will always be an approximation at best.
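As a minimal sketch of that second approach: the snippet below times a stand-in `render_image()` call (hypothetical, substitute whatever drawing call you actually care about) and keeps a running average, which is the "accumulating estimate" idea in a few lines.

```cpp
#include <chrono>
#include <cstdio>

// Hypothetical stand-in for whatever rendering/drawing call you want to measure.
void render_image() { /* ... */ }

int main() {
    using clock = std::chrono::steady_clock;

    double avg_ms  = 0.0;  // running average of observed render times (ms)
    long   samples = 0;

    for (int i = 0; i < 100; ++i) {
        auto start = clock::now();
        render_image();
        auto elapsed = std::chrono::duration<double, std::milli>(clock::now() - start);

        // Cumulative moving average: gradually converges on this
        // system's typical cost for the operation.
        ++samples;
        avg_ms += (elapsed.count() - avg_ms) / samples;
    }

    std::printf("estimated render time: %.3f ms over %ld samples\n", avg_ms, samples);
    return 0;
}
```

The same pattern works with any timer your platform offers; the point is simply to measure on the actual target system rather than predict from hardware specs.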