Timer differences between Win7 & Win10
I have an application in which I use the MinGW implementation of gettimeofday to achieve "precise" timing (~1 ms accuracy) on Win7. It works fine.
However, when using the same code (or even the very same *.exe) on Win10, the precision drops dramatically to the notorious 15.6 ms granularity, which is not sufficient for me.
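For reference, here is a minimal sketch of the kind of measurement I am doing (assuming the standard <sys/time.h> interface that MinGW exposes; the actual application code differs):
#include <stdio.h>
#include <sys/time.h> // MinGW provides gettimeofday() here

int main(void)
{
    struct timeval start, end;
    gettimeofday(&start, NULL);

    // ... work to be timed ...

    gettimeofday(&end, NULL);
    long long elapsed_us = (end.tv_sec - start.tv_sec) * 1000000LL
                         + (end.tv_usec - start.tv_usec);
    printf("elapsed: %lld us\n", elapsed_us);
    return 0;
}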
Two questions:
- Do you know what the source of this difference is? (Is it an OS configuration/"feature"?)
- How can I work around it? Or, better, is there a precise timer that is independent of the OS configuration?
Note: std::chrono::high_resolution_clock seems to have the same issue (at least it does exhibit the 15.6 ms limit on Win10).
Following Hans Passant's comment and additional tests on my side, here is a more sound answer:
The 15.6 ms (1/64 second) limit is well known on Windows and is the default behavior. It is possible to lower that limit (e.g. to 1 ms, through a call to timeBeginPeriod()), though this is not advised, because it affects the global system timer resolution and the resulting power consumption. For instance, Chrome is notorious for doing this. Also, because the timer resolution is a global setting, one may observe 1 ms precision without having explicitly requested it, simply because some third-party program raised it.
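For completeness, here is a minimal sketch of that (discouraged) pattern, based on the documented timeBeginPeriod()/timeEndPeriod() pairing from winmm; the function name is hypothetical, and with MinGW you would link with -lwinmm since the pragma is MSVC-only:
// Discouraged pattern: raise the global timer resolution to 1 ms
// around a timing-sensitive section, and always restore it afterwards.
#include <windows.h>
#pragma comment(lib, "winmm.lib") // MSVC only; with MinGW, link -lwinmm

void run_timing_sensitive_work(void) // hypothetical name
{
    if (timeBeginPeriod(1) == TIMERR_NOERROR) // request 1 ms resolution
    {
        // ... Sleep(), waitable timers, etc. now wake with ~1 ms
        // granularity instead of ~15.6 ms ...
        timeEndPeriod(1); // mandatory: restore the system default
    }
}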
Also, note that std::chrono::high_resolution_clock does not have a valid behavior on Windows (in both Visual Studio and MinGW contexts). So you cannot expect this interface to be a cross-platform solution; the 15.625 ms limit still applies.
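One way to check this on your own setup is to spin until the clock's reported value changes and look at the size of the step; a minimal sketch:
#include <chrono>
#include <iostream>

int main()
{
    using clock = std::chrono::high_resolution_clock;

    // Busy-wait until the reported time advances by at least one tick.
    auto t0 = clock::now();
    auto t1 = t0;
    while (t1 == t0)
        t1 = clock::now();

    std::cout << "smallest observed step: "
              << std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count()
              << " us\n";
}
On an affected configuration this prints roughly 15625 us; a QPC-backed clock prints a value close to 0.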
Knowing that, how can we deal with it? Well, one can use timeBeginPeriod() to increase the precision of some timers but, again, this is not advised: instead, use QueryPerformanceCounter() (QPC), which is, according to Microsoft, the primary API for native code that needs to acquire high-resolution time stamps or measure time intervals. Note that QPC counts elapsed time (not CPU cycles). Here is a usage example:
#include <windows.h>

LARGE_INTEGER StartingTime, EndingTime, ElapsedMicroseconds;
LARGE_INTEGER Frequency;
QueryPerformanceFrequency(&Frequency);
QueryPerformanceCounter(&StartingTime);
// Activity to be timed
QueryPerformanceCounter(&EndingTime);
ElapsedMicroseconds.QuadPart = EndingTime.QuadPart - StartingTime.QuadPart;
//
// We now have the elapsed number of ticks, along with the
// number of ticks-per-second. We use these values
// to convert to the number of elapsed microseconds.
// To guard against loss-of-precision, we convert
// to microseconds *before* dividing by ticks-per-second.
//
ElapsedMicroseconds.QuadPart *= 1000000;
ElapsedMicroseconds.QuadPart /= Frequency.QuadPart;
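For reference, here is the same snippet wrapped into a self-contained program that times a Sleep(1) call (a sketch; Sleep(1) typically lasts ~15.6 ms at the default resolution, while the QPC measurement itself keeps microsecond resolution either way):
#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER Frequency, StartingTime, EndingTime, ElapsedMicroseconds;

    QueryPerformanceFrequency(&Frequency); // ticks per second, fixed at boot
    QueryPerformanceCounter(&StartingTime);

    Sleep(1); // activity to be timed

    QueryPerformanceCounter(&EndingTime);
    ElapsedMicroseconds.QuadPart = EndingTime.QuadPart - StartingTime.QuadPart;
    ElapsedMicroseconds.QuadPart *= 1000000; // convert before dividing
    ElapsedMicroseconds.QuadPart /= Frequency.QuadPart;

    printf("Sleep(1) took %lld us\n", (long long)ElapsedMicroseconds.QuadPart);
    return 0;
}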
According to Microsoft, QPC is also suitable in multicore/multithread contexts, though it can be less precise/ambiguous there:
When you compare performance counter results that are acquired from different threads, consider values that differ by ± 1 tick to have an ambiguous ordering. If the time stamps are taken from the same thread, this ± 1 tick uncertainty doesn't apply. In this context, the term tick refers to a period of time equal to 1 ÷ (the frequency of the performance counter obtained from QueryPerformanceFrequency).
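In code, that rule could be encoded in a small helper like the following sketch (compare_qpc_cross_thread is a hypothetical name, not a Windows API):
#include <windows.h>
#include <stdlib.h>

// Returns -1/0/+1 when 'a' is before / ambiguous with / after 'b'.
// Per the quote above, cross-thread timestamps within +/- 1 tick
// of each other are treated as having an ambiguous ordering.
int compare_qpc_cross_thread(LARGE_INTEGER a, LARGE_INTEGER b)
{
    LONGLONG diff = a.QuadPart - b.QuadPart;
    if (llabs(diff) <= 1)
        return 0; // ambiguous ordering
    return (diff < 0) ? -1 : 1;
}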
As additional resources, MS also provides an FAQ on how/why to use QPC and an explanation of clocks/timing in Windows.