
That's still time as a synchronization primitive.



> "synchronize" literally has the word "time" in it as a greek root (?????/chrónos).

Oh, yeah, the simple times before everybody learned that time is a local and observer-dependent measure.

Modern computers do almost no time-based synchronization at a distance. About two decades ago, when I was studying asynchronous computation, hacking the chip-design process to make it possible to do time-based synchronization of nearby transistors on the same silicon crystal was a very trendy idea. It was very hard because everything is extremely noisy, and I imagine the idea faded away because everything is even noisier nowadays. Doing that on any large scale is asking for everything to break.

For all of the computer era, synchronization has been done by signaling. The millennia-old technique of doing it by time is just deprecated.


The point is that you don't. A clock tick is not the smallest unit of operation. It's the smallest unit of synchronization. A lot of work can be done in-between synchronization points.

> If there is an internal hardware clock that isn't synced to real time, just count relative to that clock.

There is. All modern operating systems, including iOS, have one. Without it, commonly used features like animations and media playback would misbehave during network time synchronization.
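For example, here is a minimal sketch (POSIX rather than iOS-specific, so CLOCK_MONOTONIC stands in for whatever a given platform calls its monotonic clock): an animation frame delta computed against the monotonic clock can never go negative or jump just because NTP stepped the wall clock.

    /* sketch: measure per-frame elapsed time with the monotonic clock,
       which keeps ticking steadily even while NTP adjusts the wall clock;
       compile with -std=gnu11 or define _POSIX_C_SOURCE for clock_gettime */
    #include <stdio.h>
    #include <time.h>

    static double monotonic_seconds(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);   /* never stepped by NTP */
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void)
    {
        double prev = monotonic_seconds();
        for (int frame = 0; frame < 3; frame++) {
            /* rendering, waiting for vsync, etc. would go here */
            double now = monotonic_seconds();
            printf("frame delta: %.6f s\n", now - prev);  /* always >= 0 */
            prev = now;
        }
        return 0;
    }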


From 2010 but still interesting. Time and programming are inextricably intertwined, whether by making the time to do something short enough, or by making the time at which to do something accurate enough. Having done some real-time work (trying to get Linux running a robot back in the day), it was amazing how challenging it was to keep things in sync. Given the cheapness of transistors, there are a bazillion timers and clocks in ARM chips. At some point I expect to see a dedicated small core which does nothing but manage the observation of, and reporting on, the passage of time.

I wonder how real time systems would work without a clock to synchronize everything.

On the plus side, the synchronization is explicit on newer consoles because they couldn't rely on constant timing. Caches and multiple bus masters meant that they couldn't cycle-count like they used to be able to.

> Even if synchronizing clocks is something any skilled distributed systems engineer would think of within 10s of deciding to design such a system.[1]

Except the patent you link to doesn't sync the clocks.


It's a different use of the term. Synchronous programming languages have a logical clock, and change global state only on clock ticks.
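Very roughly, and only as an illustration (plain C rather than an actual synchronous language like Esterel or Lustre; all names are made up): inputs are sampled at the start of a logical tick and the global state is committed only at the end of it, so nothing observable changes between ticks.

    /* sketch of the synchronous model: a pure step function plus a single
       commit point per logical tick */
    #include <stdio.h>

    struct state { int counter; int output; };

    static struct state current;               /* state visible between ticks */

    /* pure function of the previous state and the sampled input;
       performs no global writes */
    static struct state step(struct state s, int input)
    {
        struct state next = s;
        next.counter += input;
        next.output = next.counter * 2;
        return next;
    }

    /* one logical clock tick: sample, compute, commit */
    static void tick(int sampled_input)
    {
        struct state next = step(current, sampled_input);
        current = next;                        /* global state changes only here */
    }

    int main(void)
    {
        for (int i = 0; i < 3; i++) {
            tick(1);
            printf("tick %d: counter=%d output=%d\n",
                   i, current.counter, current.output);
        }
        return 0;
    }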

That's an interesting link; it leads to this 1991 Liskov paper, which might seem out of date, but apparently was very foundational to the whole concept. A little searching turned up this modern discussion of it:

https://muratbuffalo.blogspot.com/2022/11/practical-uses-of-...

Seems the bottom line is: "Since clock synchronization can fail occasionally, it is most desirable for algorithms to depend on synchronization for performance but not for correctness."


Yea, why even use synchronous clocks? Asynch designs have been around since the 80s /s

Channeling my inner Wirth, I think he's just saying that, for the duration of any given clock cycle, synchronization ensures that every combinational building-block within the sequential circuit will have a consistent state.

"synchronize" literally has the word "time" in it as a greek root (?????/chrónos).

Time is fine for synchronizing when all components are actually reading and acting in accordance with a clock, and agree on using the same one (or a well-characterized facsimile of the same one).

Time is used at the bottom layers of hardware. Certain components generate a signal, and expect the response signal to be settled within a certain time.

Even in situations in which there is robust signaling (like a device can indicate for indefinite periods that it is "not ready" and another device will wait), the underlying signals on which that is based have to meet timing constraints.

You have rules like "when pin X is driven high, the data is expected to be present on lines Y_0 through Y_32 within time T_x." If not, there will be garbage.

All notion of order in the machine rests on arbitrary timing at the lowest electronic design level.


> Do you know if there is a standard way of accomplishing this?

Synchronizing clocks is one of the difficult problems in distributed computing.

Games tend to avoid doing proper synchronization of clocks by working in discrete time steps (frames). Because games run at 15-60 frames per second (a bit more for simulators), you can get away with synchronizing the clocks to +/- a few frames, i.e. an accuracy of tens of milliseconds is good enough to provide a perception of real time.

In other words: game time is measured with an unsigned integer that tells how many time steps have elapsed since the game started.
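Roughly like this fixed-timestep sketch (names are illustrative, not from any particular engine): the game logic only ever sees the tick counter, and networked peers just have to agree on tick numbers to within a few frames rather than on sub-millisecond clock offsets.

    #include <stdint.h>

    /* ~16 ms per step; integer milliseconds keep the sketch short,
       real engines track the fractional remainder */
    #define TICK_RATE_HZ 60
    #define MS_PER_TICK  (1000 / TICK_RATE_HZ)

    static uint32_t game_tick = 0;               /* wraps after ~2.2 years at 60 Hz */

    /* advance the simulation by exactly one fixed step: physics, AI, input ... */
    static void simulate_one_tick(void) { }

    /* called from the main loop with real elapsed milliseconds; converts
       continuous wall time into discrete ticks */
    void advance_game_time(uint32_t elapsed_ms)
    {
        static uint32_t accumulator_ms = 0;
        accumulator_ms += elapsed_ms;
        while (accumulator_ms >= MS_PER_TICK) {
            simulate_one_tick();
            game_tick++;                         /* "game time" is just this counter */
            accumulator_ms -= MS_PER_TICK;
        }
    }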


"All timing within the mainframe cabinet is controlled by a single phase synchronous clock network."

Awesome. I would love to work on a system this simple.


> (2) If there is an internal hardware clock that isn't synced to real time, just count relative to that clock.

This is known as a monotonic clock, and it's been built into most hardware for exactly this reason. Mach, Linux, etc. have encouraged its use for anything where things like leap seconds or time changes aren't desirable.
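The usual pattern on Linux-style systems looks something like the sketch below (clock_gettime with CLOCK_MONOTONIC; Mach/macOS has its own equivalents such as mach_absolute_time): compute deadlines against the monotonic clock, so stepping the wall clock or a leap-second adjustment can neither extend nor shorten a timeout.

    #include <stdbool.h>
    #include <time.h>

    /* true once the monotonic clock has reached the given deadline */
    static bool deadline_passed(const struct timespec *deadline)
    {
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        return now.tv_sec > deadline->tv_sec ||
              (now.tv_sec == deadline->tv_sec && now.tv_nsec >= deadline->tv_nsec);
    }

    /* poll_ready is a hypothetical stand-in for "has the event happened yet";
       a real implementation would sleep between polls instead of spinning */
    bool wait_until_ready(bool (*poll_ready)(void), int timeout_ms)
    {
        struct timespec deadline;
        clock_gettime(CLOCK_MONOTONIC, &deadline);
        deadline.tv_sec  += timeout_ms / 1000;
        deadline.tv_nsec += (long)(timeout_ms % 1000) * 1000000L;
        if (deadline.tv_nsec >= 1000000000L) {   /* carry nanoseconds over */
            deadline.tv_sec++;
            deadline.tv_nsec -= 1000000000L;
        }
        while (!poll_ready()) {
            if (deadline_passed(&deadline))
                return false;   /* timed out; unaffected by wall-clock changes */
        }
        return true;
    }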


Also, it's massively, asynchronously parallel, in that it's not driven by a central clock signal.

Jane Street has an interesting podcast on this very subject: https://signalsandthreads.com/clock-synchronization/

Yes, I understand. The point is, with a single oscillator (emphasis, I mean oscillator) being the source of all derived clocks, synchronizing once is sufficient, as all clocks will keep a fixed relation.

If, say, the video card had its own oscillator (comparable to how modern video cards do or did), the clock domains would drift from each other and frequent resynchronization would be necessary (but you'd just use the interrupt that would most likely be added in this scenario instead).

But I think you helped me understand the scheme. I'd add that, according to the article, a single frame is not necessarily enough, given the description of how synchronization can be missed on the first attempt, but this is hardly relevant, given that the synchronization still likely happens only once and within a brief timeframe.


There is no need to keep large synchronous areas, though. Look up clock domains and asynchronous circuits from the 90s onwards.
