
I imagine many things use Unix time or a similarly flat calendar internally (Twitter's Snowflake[0] uses a custom scale); it's the translation into the Gregorian (/Jewish/Mayan/etc.) calendar and time scale where the bugs come from. You can't count on 31540000 seconds being a year, time zones, etc.

At the same time, the human brain doesn't seem particularly well-suited to breaking down Unix time into meaningful and reliably consistent scales, so I don't see time getting much easier barring some revolutionary timekeeping system.

[0] https://github.com/twitter/snowflake
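
For reference, the "custom scale" in a Snowflake-style ID is usually described as a 41-bit count of milliseconds since a service-chosen epoch, packed alongside a 10-bit worker id and a 12-bit sequence number. A rough C sketch, assuming that commonly described layout (the epoch constant is just an example value, not Twitter's actual one):

    /* Snowflake-style 64-bit ID: 41-bit ms timestamp | 10-bit worker | 12-bit sequence.
     * CUSTOM_EPOCH_MS is an illustrative placeholder (2015-01-01 00:00:00 UTC in ms). */
    #include <stdint.h>

    #define CUSTOM_EPOCH_MS 1420070400000ULL

    uint64_t snowflake_id(uint64_t now_ms, uint64_t worker_id, uint64_t sequence) {
        return ((now_ms - CUSTOM_EPOCH_MS) << 22)   /* 41 bits of milliseconds */
             | ((worker_id & 0x3FF) << 12)          /* 10-bit worker id        */
             |  (sequence  & 0xFFF);                /* 12-bit per-ms sequence  */
    }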




Ugh, can't we just use the Unix epoch? Time's already too difficult to program.

Yeah I know about unix time and that isn't what I meant. It never made sense to me. Just have a monotonic timer and an epoch, and worry about UTC and leap seconds in the view.

The fact that dates in general are so screwed up is a great advantage for UNIX timestamps.

A single moment in time maps to a single UNIX timestamp and back again with almost no complications or exceptions (the only one I'm aware of being leap seconds, which is fairly minor). This makes it a great way to store and manipulate actual time.

It's then complicated, sometimes enormously complicated, to go from this simple representation to what humans would call a "date". But that's unavoidable, and by using something nice and simple like UNIX time in the backend, you ensure that all of the craziness when dealing with weird human systems stays at the front end where it belongs.

It's not always possible, of course. Some systems need to store "dates" in terms of what humans think they are, not actual moments in time. (For example, storing a birthday as UNIX time would be a bad idea.) But whenever you want an actual moment in time, UNIX time is great.
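
In code, the distinction looks roughly like this (a minimal C sketch; the struct and field names are made up for illustration):

    #include <time.h>

    /* An actual moment in time: a single Unix timestamp is the natural fit. */
    typedef struct {
        time_t created_at;      /* one integer, UTC-based, trivially comparable */
    } event_record;

    /* A human-calendar "date" such as a birthday: store the calendar fields
     * themselves, since there is no single instant to point at. */
    typedef struct {
        int year, month, day;   /* no timezone, no time of day */
    } civil_date;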


The problem with this article is that it implies software regularly uses human datetimes, or that programmers regularly write their own code to convert to and fro.

Programmers do not regularly write their own code to do so (it is hairy and brain-numbing, and Java, Perl, Python, and Ruby all have good libraries to handle it anyway, so nobody wants to), and code almost always uses UNIX epoch time (the number of seconds since the beginning of 1 January 1970 GMT) and does not observe leap seconds (it is just a linear progression of seconds, with no concept of minutes or anything higher).
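
A few lines of C show the convention that such code relies on: broken-down dates are derived by plain arithmetic over 86400-second days, and no leap-second table is ever consulted (gmtime_r here stands in for whatever conversion library a given language provides):

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        time_t t = 365 * 86400;        /* exactly 365 "days" after the epoch */
        struct tm utc;
        gmtime_r(&t, &utc);            /* every day is treated as 86400 seconds */
        printf("%04d-%02d-%02d\n",     /* prints 1971-01-01 */
               utc.tm_year + 1900, utc.tm_mon + 1, utc.tm_mday);
        return 0;
    }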


> And all it takes to store a time like that is a 64-bit integer and it is very convenient. And a lot of software do precisely that. Most timestamps are just that: they don't care about "real world" details like leap-seconds, {23,24,25}hours per day, etc.

Your claim that using Unix Time saves you from worrying about leap seconds is incorrect. Unix Time goes backwards when a leap second occurs, which can screw up a lot of software. Check out Google's solution to the problem, which is to "smear" the leap second over a period of time before it actually occurs: http://googleblog.blogspot.in/2011/09/time-technology-and-le...

Practically no software uses true seconds since the epoch; if it did then simple operations like turning an epoch time into a calendar date would require consulting a table of leap seconds, and would give up the invariant that every day is exactly 86,400 seconds. Whether this was the right decision or not is debatable, but it is a mistake to think that using Unix Time saves you from all weirdness surrounding civil time.
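
For the curious, here is a toy sketch of the smearing idea, assuming a made-up 24-hour window; it illustrates the general approach only, not Google's actual algorithm:

    /* Absorb a positive leap second gradually: during the window leading up to
     * the leap, the reported clock runs slightly slow, so it never steps back. */
    #define SMEAR_WINDOW 86400.0   /* hypothetical smear window, in seconds */

    /* 'elapsed' is true seconds elapsed (leap second included);
     * 'leap_at' is the elapsed-seconds value at which the leap second ends. */
    double smeared_clock(double elapsed, double leap_at) {
        double start = leap_at - SMEAR_WINDOW;
        if (elapsed <= start)   return elapsed;        /* before the window: unchanged */
        if (elapsed >= leap_at) return elapsed - 1.0;  /* after: whole second absorbed */
        return elapsed - (elapsed - start) / SMEAR_WINDOW;  /* partially absorbed */
    }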


Absolutely! A large part of the blame here rests on Unix time (which is a monotonic count of elapsed seconds) trying to use a time standard in which those seconds are not in fact monotonic. Unix time is basically "TAI done wrong", and the failure to correct this early on is where the trouble started: ideally it should have been aligned to TAI instead of UTC. Originally Unix time was not aligned to any particular time standard at all; then in the mid-70s it was decided to align it with the elapsed seconds of UTC as of a fixed date.

This decision just dominoed down through time, causing enough friction that we computer programmers outweighed the metrologists: they caved and abandoned keeping UTC properly, in order to stop causing problems for everyone else, all due to our continued failure to fix the root cause of these issues.


This is one of the reasons why Unix epoch time soldiers on, even though it is totally indecipherable to humans. It can be easily mapped to a timezone-aware type, and performing arithmetic on it is trivial.

One of the many issues I saw today was around converting times to and from different formats. Let's say you have a timestamp in epoch time and you want to calculate a year-long offset. You add seconds_in_year to your epoch time and call your date conversion library to get some kind of ISO date (yyyy-mm-dd). Yes, it's wrong, because seconds_in_year is not a constant, but this stuff sneaks its way into a large codebase and nobody realizes, because all the tests pass until it's leap day.
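
As a concrete sketch, the buggy version and a calendar-aware alternative look roughly like this (seconds_in_year mirrors the name above; timegm is a common glibc/BSD extension rather than ISO C):

    #include <time.h>

    static const time_t seconds_in_year = 365 * 86400;  /* wrong across leap years */

    time_t one_year_later_buggy(time_t t) {
        return t + seconds_in_year;    /* drifts by a day whenever Feb 29 is crossed */
    }

    time_t one_year_later(time_t t) {
        struct tm tm;
        gmtime_r(&t, &tm);             /* break down into calendar fields */
        tm.tm_year += 1;               /* "same date, next year" */
        return timegm(&tm);            /* library renormalizes (e.g. Feb 29 -> Mar 1) */
    }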

This is only the beginning; there are so many possible issues related to edge cases like this. While it might be a "solved problem", that doesn't mean it's something developers are thinking about every day. And don't even get me started on daylight saving time...


Are you saying Unix time doesn't get adjusted when there are calendar changes in the box?

Sigh.

50 year old numerical / geophysical / real time data acquisition/processing/interpretation programmer here.

Unix Time isn't much chop for "real time" continuous data from the "real world" - it's those pesky leap seconds. If you bother to read the first paragraph of the Wikipedia article on Unix time, you'll see:

> Unix time, or POSIX time, is a system for describing instances in time, defined as the number of seconds that have elapsed since midnight Coordinated Universal Time (UTC), 1 January 1970, not counting leap seconds. It is used widely in Unix-like and many other operating systems and file formats. It is neither a linear representation of time nor a true representation of UTC.

It follows on with a definition of Unix Time and points out various examples when it is ambiguous. These are real issues and can occur when missiles fly, when planes navigate, and when stocks are traded.
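
One way to see the ambiguity concretely: because leap seconds are not counted, the leap second 2016-12-31 23:59:60 UTC and the following 2017-01-01 00:00:00 UTC normalize to the same time_t. A small C sketch (timegm is a glibc/BSD extension):

    #include <stdio.h>
    #include <time.h>

    static time_t utc(int y, int mo, int d, int h, int mi, int s) {
        struct tm tm = {0};
        tm.tm_year = y - 1900; tm.tm_mon = mo - 1; tm.tm_mday = d;
        tm.tm_hour = h; tm.tm_min = mi; tm.tm_sec = s;
        return timegm(&tm);            /* normalizes 23:59:60 into the next minute */
    }

    int main(void) {
        time_t leap = utc(2016, 12, 31, 23, 59, 60);
        time_t next = utc(2017,  1,  1,  0,  0,  0);
        printf("%lld %lld equal=%d\n", (long long)leap, (long long)next, leap == next);
        return 0;
    }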

Time is tricky.


Just use Unix time everywhere.

Unix time is linear and unambiguous. It's our actual accounting of time that is not linear.

The thing is, similar reasoning can also be used to promote getting rid of leap years. Heck, maybe just switch to 12 months of 30 days while we are at it.

If you want to use real-world time, you have to make sure it stays in sync with the real world. Switching to something which is close-but-not-quite-correct will cause even more issues than we are currently having with leap seconds. Can't deal with that? Well, just use the Unix timestamp like the rest of us?


Or Unix time.

"if say computer languages had declarative statements and underlying mechanisms to separate internal and external clocks"

Well, we actually do have those things. You're pretty much describing how the unix time APIs work today.

The internal unix clock is kept as a "time_t", "struct timeval", or "struct timespec", which is seconds, usec, or nsec since an epoch. This is largely insulated from the geopolitical nonsense -- though it is imperfect with respect to leap seconds, and it should have been defined as TAI. This is specified in sys/time.h on Linux. These are further separated into clocks like CLOCK_REALTIME and CLOCK_MONOTONIC, which track adjusted actual time (updated by NTP) and elapsed relative time, respectively.

The "external clocks" as you call them are specified by the time.h functions like "ctime" or "localtime." These functions take the above internal clocks and apply timezone data to localize time to a specific area on earth.

"If one thinks about it this isn't as a preposterous idea as it may seem as the whole notion of relative time naturally falls out of Relativity/Spacetime."

Right, but we do this already. There's no such thing as a clock without a frame of reference though, so there will always be some special location in space/time which is the perspective of the system clock.

"Making computers and compter languages inherently aware of the fact would likely bring benefits. "

We already have reaped these benefits! The existing problems center around stuff like:

* Figuring out who's a correct relative authority for a given place on earth

* Communicating updates (and, correspondingly, guarding against bad updates)

* Figuring out how to keep various autonomous systems consistent

* Helping software developers understand these inherent complexities in timekeeping -- because I guarantee your average software developer is largely unaware that all these distinctions already exist.


Unix Time?

Amazingly Linux and other Unixes (yeah, OSX too) "get" time quite well. Maybe Apple over-optimized iOS?

UNIX time, really?

Strange, then, that we'd scold a developer for assuming the number of hours in a day, or the timezone, or whatever, but not for using Unix time as an absolute time, when in fact it is not an absolute time at all and even invites the same error in thought.
