Unix timestamp (actual reference with easy math) + timezone give you absolute time to work with, such that you can work out the correct presentation for the timezone even if the timezone has undergone multiple revisions in the intervening years.
There's still a bunch of problems (timezone of event origin? Event reporting origin? Server? Client? Client + presentational overlay?) and confusing things to work through (including relativistic problems) but the consistency of the reference point is valuable. Nobody is going to confuse what timezone you are using as your point of reference when you use Unix epoch. Just use a 64-bit number to hold it.
For lots of use cases you want to record precise events in a universal sense, and be able to add and subtract them or compare whether one timestamp is the same exact time as another timestamp regardless of timezones.
UNIX timestamps have no timezones. They are just the number of actual seconds that have elapsed since a certain commonly-agreed time. You measure the number of seconds between two events just by subtracting their timestamps arithmetically, and don't need to do any parsing or interpreting of timezones. You can just say "this hacker in Russia breached the firewall 35 seconds after this dev committed code in the US" just by subtracting two numbers.
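A quick Python sketch of that subtraction (the timestamps here are made up for illustration):

```python
# Two hypothetical events, both plain Unix timestamps (seconds since
# 1970-01-01 00:00:00 UTC) -- no timezone information attached.
commit_time = 1700000000   # dev commits code (somewhere in the US)
breach_time = 1700000035   # firewall breach detected (somewhere in Russia)

# Elapsed time is plain integer arithmetic, regardless of where
# either event was observed or recorded.
elapsed = breach_time - commit_time
print(elapsed)  # 35
```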
The time an airplane takes off, the time your firewall detected a security event, the time a cross-timezone meeting starts, these are all good to store as UNIX timestamps.
Birthdays are different because they are inherently imprecise; humans tend to celebrate their birthdays on a certain day of the month in their local time zone, with little to no regard for the time zone they were born in. Someone who was born in Singapore (UTC+8) and now lives in Alaska (UTC-8) might still celebrate their 30th birthday on the month and day of their birth in Alaska time, even though their body might need another 16 hours before it has technically experienced 30 years of life.
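You can see the calendar-date drift directly: the same instant of birth falls on different calendar days in the two zones (the birth moment below is hypothetical):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Hypothetical birth moment in Singapore (UTC+8).
born = datetime(1995, 3, 15, 8, 0, tzinfo=ZoneInfo("Asia/Singapore"))

# The same instant expressed in Alaska time falls on the previous
# calendar day -- which is why "birthday" is a floating local date,
# not an absolute instant.
in_alaska = born.astimezone(ZoneInfo("America/Anchorage"))
print(in_alaska.day)  # 14, not 15
```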
wait, why? unix timestamps are safe from this kind of mess as they are UTC seconds since the epoch; all the tz-aware conversion happens when you need to display localized dates, but the timestamp you save should be as neutral and tz-unaware as possible. I've seen way more bugs from people assuming the system clock to be UTC while it was localized, like missing and overwritten data on daylight saving changes.
Often, it's best practice to store time in both epoch (UTC) format and the actual time zone when the transaction/event occurred. Epoch time can be the universal truth, used for business rules, and it can be displayed in whatever time zone the user prefers. The event's actual time zone can be used for analytics and other research purposes, but not for critical business decisions or rules.
UTC or Unix epoch are good for datetimes in the past, e.g. for logging events. They're not good for future datetimes, e.g. for recording appointments.
Because in case of local timezone policy changes, in the vast majority of cases the intention is for the future event to stay constant in the local timezone, and be changed in UTC / epoch. If for example you made the mistake of storing store opening hours in UTC and translate it to local time in the UI, you will be showing the wrong time.
For appointments, you need both the datetime and the location/timezone. In case of cross-timezone appointments, e.g. virtual meetings, if there are timezone policy changes the issue cannot be resolved without rescheduling the event for some participants.
Off the top of my head, it's easier to think of APIs that don't provide UNIX timestamps than ones that do. You wouldn't believe some of the half-baked implementations I've seen used in production code (unless you've also had to deal with it yourself, that is!)
Yes, you still have to deal with timezones, but I'd much rather be given a UNIX timestamp for a UTC time and then adjust accordingly, rather than deal with all the other problems that come from communicating and converting between various other calendar formats and locales.
And this exchange should occur at the very last step, only when being displayed to the user. For almost any other purposes, there's no reason that two different applications should need to talk to each other using anything but UNIX timestamps. The potentially error-prone conversion should happen in the latest stage possible, because that's the part with the most knowledge of how any error should be corrected.
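A minimal sketch of that last step, assuming the timestamp arrives from another service and the viewer's zone is known to the display layer (the zone name is just an example):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Applications exchange the raw Unix timestamp; nothing upstream
# needs to know or care about timezones.
ts = 1700000000  # as received from another service

# The very last step, in the display layer: attach the viewer's zone.
local = datetime.fromtimestamp(ts, tz=ZoneInfo("Europe/Berlin"))
print(local.isoformat())  # 2023-11-14T23:13:20+01:00
```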
> At the same time, the human brain doesn't seem particularly well-suited to breaking down Unix time into meaningful scales,
But we're talking about for programming purposes - to a computer, one 64-bit int is as good as any other.
Aside from issues of synchronization, leap seconds, data container limitations and such, it's important to choose the correct kind of time to store, and the right kind depends on what the purpose of recording the time is.
There are three main kinds of time:
*Absolute Time*
Absolute time is a time that is fixed relative to UTC (or relative to an offset from UTC). It is not affected by daylight savings time, nor will it ever change if an area's time zone changes for political reasons. Absolute time is best recorded in the UTC time zone, and is mostly useful for events in the past (because the time zone is now fixed at the time of the event, so it probably no longer matters what specific time zone was in effect).
*Fixed Time*
Fixed time is a time that is fixed to a particular place, and that place has a time zone associated with it (but the time zone might change for political reasons in the future, for example by adopting or dropping daylight savings). If the venue changes, only the time zone data needs to be updated. An example would be an appointment in London this coming October 12th at 10:30.
*Floating Time*
Floating (or local) time is always relative to the time zone of the observer. If you travel and change time zones, floating time changes zones with you. If you and another observer are in different time zones and observe the same floating time value, the absolute times you calculate will be different. An example would be your 8:00 morning workout.
*When to Use Each Kind*
Use whichever kind of time most succinctly and completely handles your time needs. Don't depend on time zone information as a proxy for a location; that's depending on a side effect, which is always brittle. Always store location information separately if it's important.
Examples:
Recording an event: Absolute
Log entries: Absolute
An appointment: Fixed
Your daily schedule: Floating
Deadlines: Usually fixed time, but possibly absolute time.
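The three kinds map naturally onto different representations; a rough Python sketch (the specific dates and zone names are just illustrations):

```python
from datetime import datetime, time, timezone
from zoneinfo import ZoneInfo

# Absolute: a log entry, pinned to UTC. Nothing about it changes
# if some region's timezone rules change later.
log_entry = datetime(2024, 10, 12, 9, 30, tzinfo=timezone.utc)

# Fixed: an appointment at a venue -- store the wall-clock time plus
# the IANA zone of the place, so future rule changes are picked up
# when you convert.
appointment = datetime(2024, 10, 12, 10, 30, tzinfo=ZoneInfo("Europe/London"))

# Floating: a daily workout -- no zone at all; it travels with you.
workout = time(8, 0)
```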
Mad late reply, but I use epoch when my time needs don't matter. AKA: I can get by without worrying about epoch not being second-precise, and I don't need to worry about dates except to display them in non-UTC.
For my needs 90% of the time iso8601 is overkill and unnecessary.
But dates in what I do don't need to be complicated. Which is rare. Also I never said working with times and dates is easy, evaluate your needs for the situation. Going full tilt with timezones and full date parsing for some general server logs for example doesn't always make sense is all I'm saying. Tschuss!
I agree. In the only project I worked on where time was really important I stored a triple (original datetime, UTC timestamp, original time zone). I used UTC to globally sort events and sent the original datetime back to clients for display.
On the other side, sometimes we don't know where an event happens. Think of GPS trackers close to a border which is also a timezone border. This is a more complex problem which requires at least a model of country borders and maybe roads.
Finally, on some systems I'm using as a user, they have some EU timezones but not even all the major countries. It doesn't matter now but it will. I expect they'll add the missing timezones and make us confirm where we are.
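The triple from a couple of comments up is simple to sketch; this is a hypothetical record shape, not a prescribed schema:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Hypothetical event: store the original wall-clock datetime, the UTC
# timestamp for global sorting, and the original zone for analytics.
original = datetime(2024, 6, 1, 14, 30, tzinfo=ZoneInfo("America/New_York"))
record = {
    "original_datetime": original.replace(tzinfo=None).isoformat(),
    "utc_timestamp": original.timestamp(),  # the global sort key
    "original_tz": "America/New_York",      # for display / analytics
}
```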
I tend to agree that dates and times are awful to work with, but storing everything as a Unix timestamp still seems to be the easiest way to avoid most of the headaches, in my experience. There's nothing worse than trying to reconcile a database filled with naive dates with no associated time zone information. Is it daylight savings? Is it in this timezone, or did they enter it while traveling and their computer automatically updated the time?
The simplest way to minimize these issues, in my experience, is to put the logic for converting local time into UTC in the program and only store the Unix stamps in the DB.
Yep, that's my conclusion too: storing timestamps for events in the past and up to present time - unix timestamp with optional tzid, and re-calculate local time formatting from scratch for display every time. Events in the future, do the opposite: store as yyyymmddhhiiss + tzid, and re-calculate utc from scratch for calculations every time. As a rule of thumb, at least.
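For the future-event half of that rule, a sketch of "store local + tzid, re-derive UTC every time" (the stored values are illustrative):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Future event stored as local wall-clock time plus a tzid.
stored_local = "2025-10-12T10:30:00"
stored_tzid = "Europe/London"

# Re-derive UTC from scratch whenever a calculation needs it, so any
# timezone rule change between now and the event is honoured
# automatically by the tz database.
local = datetime.fromisoformat(stored_local).replace(tzinfo=ZoneInfo(stored_tzid))
utc_ts = local.timestamp()
```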
That makes sense, but really we are not going to be able to guarantee full accuracy of a timestamp against future changes in timezone boundaries. I guess if you had a GPS coordinate, that could be done, but it would be a monumental task for little benefit. Which is why IMO time data should be stored in unix, because then we can at least guarantee that we have an accurate ordering of events, at least within the precision of the timestamp.
Would rather convert to unix at the time the event is recorded, since we will be able to convert to the unix standard from whatever locale. If we store timestamps with a locale then we have to worry about the exact type of locale changes that you mentioned, and there is no guarantee that we will even know how to translate the locale that was stored at the time the TS was created. Sure, unix could change too (e.g. leap seconds), but I would be much more confident that time libraries will handle such changes.
I've argued this, and the issue comes when you try to target another timezone from your own.
So for example, if you're in EST (-5) and need to target CST (+8), you can't give the Unix time for the other timezone, because that value is relative to yours when talking about another timezone.
It would return an equivalent value, which is the important thing. You can specify `AT TIME ZONE 'UTC'` (or whichever timezone) when consuming it to get a consistent look. On the other hand, it’s not clear from just the value what point in time a `timestamp without time zone` represents. That’s why I agree that it should be used when you don’t want to represent a point in time.
Precision can be fixed by just adding a decimal point. And a "UNIX time stamp" doesn't need a time zone because it's always UTC.
However, your overall point remains valid, because people will try to pass off something as a "UNIX time stamp" that is actually in a different time zone. There is value to self-describing data.
In this case we're talking about Unix timestamps though, which count the number of seconds since 1970 UTC. Adding a timezone to it is nonsensical. The number is always the same, no matter which timezone you're in. It's in the conversion to year-month-day, etc. that timezone becomes relevant.
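That independence is easy to demonstrate; the instants below are arbitrary:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# The same instant, expressed in two different zones:
tokyo = datetime(2024, 1, 1, 9, 0, tzinfo=ZoneInfo("Asia/Tokyo"))
utc = datetime(2024, 1, 1, 0, 0, tzinfo=ZoneInfo("UTC"))

# Their Unix timestamps are identical -- the zone only matters when
# converting back to year/month/day for display.
print(tokyo.timestamp() == utc.timestamp())  # True
```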
Apparently I'm in the minority together with OP but I agree that storing dates as UTC is good advice.
What most repliers seem to have overlooked is that OP is saying that timezones should be used for presentation. Maybe it was not clear, but that also means they should be used for input conversion. But once the dates have to be used by the business logic of your system, you really want to prevent the headaches you will get when you have to deal with timezone transformations everywhere. That really adds a ton of complexity, is easy to overlook and above all an error in timezone conversion can go undetected for a long time.