linux-c-programming.vger.kernel.org archive mirror
* Integer arithmetic vs double precision arithmetic
@ 2006-03-26 15:14 Shriramana Sharma
  2006-03-26 16:12 ` Steve Graegert
  2006-03-26 16:12 ` Glynn Clements
  0 siblings, 2 replies; 9+ messages in thread
From: Shriramana Sharma @ 2006-03-26 15:14 UTC (permalink / raw)
  To: Linux C Programming List

I am trying to work out whether to use an integer or a double for my internal 
storage variable of a class meant to store a time.

Qt uses an unsigned integer with millisecond being the best precision, but I 
am wondering why I should lose the extra digits of precision that I could 
get if I used a double.

Is there a marked increase in computing speed if integer arithmetic is used 
compared to double precision?
 
-- 

Tux #395953 resides at http://samvit.org
playing with KDE 3.51 on SUSE Linux 10.0
$ date [] CCE +2006-03-26 W12-7 UTC+0530


* Re: Integer arithmetic vs double precision arithmetic
  2006-03-26 15:14 Integer arithmetic vs double precision arithmetic Shriramana Sharma
@ 2006-03-26 16:12 ` Steve Graegert
  2006-03-26 16:24   ` Shriramana Sharma
  2006-03-26 16:12 ` Glynn Clements
  1 sibling, 1 reply; 9+ messages in thread
From: Steve Graegert @ 2006-03-26 16:12 UTC (permalink / raw)
  To: linux-c-programming

On 3/26/06, Shriramana Sharma <samjnaa@gmail.com> wrote:
> I am trying to work out whether to use an integer or a double for my
> internal
> storage variable of a class meant to store a time.
>
> Qt uses an unsigned integer with millisecond being the best precision, but I
> am wondering why I should lose the extra digits of precision that I could
> get if I used a double.
>
> Is there a marked increase in computing speed if integer arithmetic is used
> compared to double precision?

It depends: the average modern CPU can do floating-point arithmetic
at least as fast as simple integer arithmetic (sometimes even
faster).  The emphasis here lies on "can".  If the compiler cannot
generate code for the FPU of a specific target platform, it will fall
back to software floating-point library calls, which carries a real
performance penalty.
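
If you need a hard number, the only reliable way is to measure on the
target platform itself.  A crude micro-benchmark sketch (the loop
count and structure here are arbitrary; results depend heavily on the
compiler, optimisation flags and CPU):

/* Crude timing sketch: a loop of integer adds vs. a loop of double
 * adds.  The volatile accumulators keep the compiler from optimising
 * the loops away entirely. */
#include <stdio.h>
#include <time.h>

#define N 100000000L

int main(void)
{
    volatile unsigned long isum = 0;
    volatile double        dsum = 0.0;
    long i;
    clock_t t0, t1, t2;

    t0 = clock();
    for (i = 0; i < N; i++)
        isum += (unsigned long)i;
    t1 = clock();
    for (i = 0; i < N; i++)
        dsum += (double)i;
    t2 = clock();

    printf("integer adds: %.2fs\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("double adds:  %.2fs\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}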

Why do you consider doubles to store time values?

	\Steve


* Re: Integer arithmetic vs double precision arithmetic
  2006-03-26 15:14 Integer arithmetic vs double precision arithmetic Shriramana Sharma
  2006-03-26 16:12 ` Steve Graegert
@ 2006-03-26 16:12 ` Glynn Clements
  1 sibling, 0 replies; 9+ messages in thread
From: Glynn Clements @ 2006-03-26 16:12 UTC (permalink / raw)
  To: Shriramana Sharma; +Cc: Linux C Programming List


Shriramana Sharma wrote:

> I am trying to work out whether to use an integer or a double for my internal 
> storage variable of a class meant to store a time.
> 
> Qt uses an unsigned integer with millisecond being the best precision, but I 
> am wondering why I should lose the extra digits of precision that I could 
> get if I used a double.
> 
> Is there a marked increase in computing speed if integer arithmetic is used 
> compared to double precision?

It depends upon the architecture, but a double could be a lot slower
than an int.

-- 
Glynn Clements <glynn@gclements.plus.com>


* Re: Integer arithmetic vs double precision arithmetic
  2006-03-26 16:12 ` Steve Graegert
@ 2006-03-26 16:24   ` Shriramana Sharma
  2006-03-26 21:03     ` Glynn Clements
  0 siblings, 1 reply; 9+ messages in thread
From: Shriramana Sharma @ 2006-03-26 16:24 UTC (permalink / raw)
  To: Linux C Programming List

On Sunday, 26 March 2006 at 21:42, you wrote:

> Why do you consider doubles to store time values?

Oh, I need to use fractional days (in Julian Day Numbers) for astronomical 
calculations.

P.S: Thanks Steve and Glynn, for your feedback on my questions.

-- 

Tux #395953 resides at http://samvit.org
playing with KDE 3.51 on SUSE Linux 10.0
$ date [] CCE +2006-03-26 W12-7 UTC+0530


* Re: Integer arithmetic vs double precision arithmetic
  2006-03-26 16:24   ` Shriramana Sharma
@ 2006-03-26 21:03     ` Glynn Clements
  2006-03-26 23:35       ` Shriramana Sharma
  0 siblings, 1 reply; 9+ messages in thread
From: Glynn Clements @ 2006-03-26 21:03 UTC (permalink / raw)
  To: Shriramana Sharma; +Cc: Linux C Programming List


Shriramana Sharma wrote:

> > Why do you consider doubles to store time values?
> 
> Oh, I need to use fractional days (in Julian Day Numbers) for astronomical 
> calculations.

So store time in seconds or milliseconds and divide by 86400 or
86400000 to get days. If 32 bits isn't enough, use "long long".

Floating-point is meant for the situation where you want a constant
relative error rather than a constant absolute error. For timescales,
a constant absolute error is usually more useful.
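
For example, a rough sketch of that approach (the type name and the
helper are made up for illustration; 2440587.5 is the Julian Day of
1970-01-01 00:00 UTC):

/* Store time as 64-bit integer milliseconds since the Unix epoch and
 * derive a fractional Julian Day only at the point of use. */
#include <stdio.h>

typedef long long time_ms;      /* milliseconds since 1970-01-01 00:00 UTC */

double to_julian_day(time_ms t)
{
    return 2440587.5 + (double)t / 86400000.0;
}

int main(void)
{
    time_ms t = 1143386040000LL;        /* some arbitrary instant */
    printf("JD = %.8f\n", to_julian_day(t));
    return 0;
}

The stored value keeps full millisecond accuracy over the whole range
of a long long; the double only appears when the fractional day is
actually needed.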

-- 
Glynn Clements <glynn@gclements.plus.com>


* Re: Integer arithmetic vs double precision arithmetic
  2006-03-26 21:03     ` Glynn Clements
@ 2006-03-26 23:35       ` Shriramana Sharma
  2006-03-27  1:02         ` Glynn Clements
  0 siblings, 1 reply; 9+ messages in thread
From: Shriramana Sharma @ 2006-03-26 23:35 UTC (permalink / raw)
  To: Linux C Programming List

On Monday, 27 March 2006 at 02:33, Glynn Clements wrote:

> Floating-point is meant for the situation where you want a constant
> relative error rather than a constant absolute error. For timescales,
> a constant absolute error is usually more useful.

Could you clarify that a bit? How do you mean "constant relative error" as 
against "constant absolute error"?

-- 

Tux #395953 resides at http://samvit.org
playing with KDE 3.51 on SUSE Linux 10.0
$ date [] CCE +2006-03-27 W13-1 UTC+0530


* Re: Integer arithmetic vs double precision arithmetic
  2006-03-26 23:35       ` Shriramana Sharma
@ 2006-03-27  1:02         ` Glynn Clements
  2006-03-27  8:32           ` Shriramana Sharma
  0 siblings, 1 reply; 9+ messages in thread
From: Glynn Clements @ 2006-03-27  1:02 UTC (permalink / raw)
  To: Shriramana Sharma; +Cc: Linux C Programming List


Shriramana Sharma wrote:

> > Floating-point is meant for the situation where you want a constant
> > relative error rather than a constant absolute error. For timescales,
> > a constant absolute error is usually more useful.
> 
> Could you clarify that a bit? How do you mean "constant relative error" as 
> against "constant absolute error"?

Absolute error is the difference between the actual value and the
recorded value. Relative error is the absolute error divided by the
actual value.

E.g. suppose that you store a time as an integral number of seconds. 
The absolute error is at most half a second. If the actual time is 10
seconds, the relative error is 0.05 (5%); if the actual time is
31536000 seconds (365 days), the relative error is approximately
1.6E-08.

OTOH, if you store the number of seconds as a floating-point value,
the relative error will always be around 1.2E-07 (single precision) or
2.2E-16 (double precision). The absolute error will vary according to
the value stored, e.g. for single precision, for an actual value of
around 1 second the absolute error will be around 0.12us, while for an
actual value of around a year it will be around 3.8 seconds.
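
A small sketch that prints those error scales using the epsilon
constants from <float.h> (the figures are order-of-magnitude; the
exact error depends on how a particular value rounds):

/* The relative error of a floating-point type is on the order of its
 * epsilon; the absolute error is roughly the stored value times that
 * epsilon, so it grows with the value. */
#include <stdio.h>
#include <float.h>

int main(void)
{
    double year = 31536000.0;   /* 365 days, in seconds */

    printf("float : eps %g, abs error ~%g s at 1 s, ~%g s at 1 year\n",
           (double)FLT_EPSILON, 1.0 * FLT_EPSILON, year * FLT_EPSILON);
    printf("double: eps %g, abs error ~%g s at 1 s, ~%g s at 1 year\n",
           DBL_EPSILON, 1.0 * DBL_EPSILON, year * DBL_EPSILON);
    return 0;
}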

In general, if you are dealing with values of a fixed magnitude, an
integer type will give better precision than a floating-point type of
the same size, because the floating point type uses some of the bits
to store the exponent. If you fix the exponent (i.e. use a common
scale factor), you can use all 32 or 64 bits to store the mantissa.
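
As a concrete illustration of what fixing the exponent buys you, a
64-bit integer counting microseconds (a hypothetical representation,
just for the sake of the numbers) covers an enormous range at a
constant 1us resolution:

/* Range of a 64-bit signed integer holding microseconds. */
#include <stdio.h>
#include <limits.h>

int main(void)
{
    long long years = LLONG_MAX / 1000000LL / 86400LL / 365LL;
    printf("+/- %lld years at a fixed 1us resolution\n", years);
    return 0;
}

That works out to roughly 292,000 years either side of the epoch.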

For measuring absolute time (i.e. time since "year zero", as opposed
to time intervals), you usually want values which are accurate to
within some fixed interval (e.g. 1s, 1ms, 1us etc), rather than values
which are highly accurate close to the beginning of the timescale but
which get less accurate as time goes on.

E.g. The Unix API uses either seconds since midnight UTC, Jan 1st 1970
(time(), typically 32 bits) or seconds and microseconds since that
time (gettimeofday(), typically 32 bits for each component), while
Java uses milliseconds since that date (64 bits).

-- 
Glynn Clements <glynn@gclements.plus.com>


* Re: Integer arithmetic vs double precision arithmetic
  2006-03-27  1:02         ` Glynn Clements
@ 2006-03-27  8:32           ` Shriramana Sharma
  2006-03-27 17:13             ` Glynn Clements
  0 siblings, 1 reply; 9+ messages in thread
From: Shriramana Sharma @ 2006-03-27  8:32 UTC (permalink / raw)
  To: Linux C Programming List

On Monday, 27 March 2006 at 06:32, Glynn Clements wrote:

> E.g. The Unix API uses either seconds since midnight UTC, Jan 1st 1970
> (time(), typically 32 bits) or seconds and microseconds since that
> time (gettimeofday(), typically 32 bits for each component), while
> Java uses milliseconds since that date (64 bits).

Oh! I did not know that Java can return system time to millisecond precision. 
Is there a C or C++ function that can do that?

-- 

Tux #395953 resides at http://samvit.org
playing with KDE 3.51 on SUSE Linux 10.0
$ date [] CCE +2006-03-27 W13-1 UTC+0530


* Re: Integer arithmetic vs double precision arithmetic
  2006-03-27  8:32           ` Shriramana Sharma
@ 2006-03-27 17:13             ` Glynn Clements
  0 siblings, 0 replies; 9+ messages in thread
From: Glynn Clements @ 2006-03-27 17:13 UTC (permalink / raw)
  To: Shriramana Sharma; +Cc: Linux C Programming List


Shriramana Sharma wrote:

> > E.g. The Unix API uses either seconds since midnight UTC, Jan 1st 1970
> > (time(), typically 32 bits) or seconds and microseconds since that
> > time (gettimeofday(), typically 32 bits for each component), while
> > Java uses milliseconds since that date (64 bits).
> 
> Oh! I did not know that Java can return system time to millisecond
> precision. 

To be precise, it returns the time at millisecond *granularity*; the
precision is limited by the OS.

> Is there a C or C++ function that can do that?

gettimeofday() returns the current time at microsecond granularity.
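
For example, a minimal sketch that derives a Java-style millisecond
count from it (the output format is just illustrative):

/* Current time as milliseconds since the Unix epoch, similar to what
 * Java's System.currentTimeMillis() returns.  The granularity is
 * microseconds; the actual precision depends on the OS and hardware. */
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval tv;
    long long ms;

    if (gettimeofday(&tv, NULL) != 0) {
        perror("gettimeofday");
        return 1;
    }

    ms = (long long)tv.tv_sec * 1000 + tv.tv_usec / 1000;
    printf("%lld ms since 1970-01-01 00:00:00 UTC\n", ms);
    return 0;
}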

-- 
Glynn Clements <glynn@gclements.plus.com>

