> Since the actual interval used by the system is a bit larger than what
> was asked for, there will be fewer ticks.

I understand what happens and why. I admit that I'm not familiar with the
POSIX standard on this issue. Questions:

* I've heard that the kernel's timer resolution increased from 10ms to 1ms
  in 2.6. Why does the itimer still have such a large granularity? I
  expected it to be highly accurate in this range.

* Is this how the timer is supposed to work? It seems to me that an
  algorithm that kept a running tally of the difference between the
  requested and actual times (a la Bresenham) could make the itimer behave
  closer to what the user requested. (A toy illustration of what I mean is
  below my sig.)

> Maybe you could save this code if it is part of a test suite....

This code is part of a "timer calibration" routine used by the RealNetworks
Helix Server. I just noticed that the timer calibration failed on a machine
that had 2.6.0-test7 installed (we have not officially looked at supporting
2.6 yet). The test runs on many different flavors of *nix (probably a dozen
or so). I can check what the behavior is on the other platforms if that
would be useful.

If this is the Right Way to do timers, we'll probably end up changing our
"calibration" routine to read back the actual timer interval as you suggest
(a rough sketch of that is also below).

--
One good reason why computers can do more work than people is that they
never have to stop and answer the phone.
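To make the Bresenham comparison concrete, here's a toy user-space
illustration of the idea (obviously not what the kernel does today; the
1ms tick and 2.5ms request are just made-up numbers for the example):

    /*
     * Toy illustration (not kernel code) of the Bresenham-style idea:
     * the timer can only fire on 1 ms ticks, but by carrying the error
     * forward it can alternate between 2 and 3 ticks so the *average*
     * period matches a 2.5 ms request instead of always rounding up.
     */
    #include <stdio.h>

    int main(void)
    {
        const long tick_us      = 1000;   /* granularity of the kernel tick */
        const long requested_us = 2500;   /* interval the application asked for */
        long error_us   = 0;              /* accumulated shortfall, a la Bresenham */
        long elapsed_us = 0;

        for (int i = 0; i < 8; i++) {
            /* round down, then add an extra tick whenever the error warrants it */
            long ticks = requested_us / tick_us;
            error_us += requested_us % tick_us;
            if (error_us >= tick_us) {
                ticks++;
                error_us -= tick_us;
            }
            elapsed_us += ticks * tick_us;
            printf("expiry %d: waited %ld us, total %ld us (ideal %ld us)\n",
                   i + 1, ticks * tick_us, elapsed_us,
                   (long)(i + 1) * requested_us);
        }
        return 0;
    }

Each individual expiry is still quantized to the tick, but the long-run
rate matches what was requested instead of drifting by the rounded-up
amount every interval.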
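And for the "read back the actual interval" approach, the calibration
change would probably look something like this (a rough sketch, assuming
getitimer() reports the interval as rounded by the 2.6 kernel; I haven't
verified that on every platform we support):

    /*
     * Ask for an interval with setitimer(), then read back what the
     * kernel actually programmed with getitimer() and use that value
     * in the calibration instead of the requested one.
     */
    #include <stdio.h>
    #include <sys/time.h>

    int main(void)
    {
        struct itimerval req, actual;

        req.it_interval.tv_sec  = 0;
        req.it_interval.tv_usec = 2500;   /* ask for 2.5 ms ticks */
        req.it_value = req.it_interval;

        if (setitimer(ITIMER_REAL, &req, NULL) != 0) {
            perror("setitimer");
            return 1;
        }

        /* See what the kernel rounded the interval to. */
        if (getitimer(ITIMER_REAL, &actual) != 0) {
            perror("getitimer");
            return 1;
        }

        printf("requested %ld us, kernel is using %ld us\n",
               (long)req.it_interval.tv_usec,
               (long)actual.it_interval.tv_usec);
        return 0;
    }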