Well there you go. Physical events are not in fact limited to occurring at clock transitions.
But even if they were, it still wouldn't make any particular amount of time a correct stand-in for a serial dependency. The passage of 150ms is not what makes thing B clear to proceed. The only thing I don't understand is how anyone can not understand that.
The point is that at the hardware level, arbitrary timings are in fact used. The same logic as
usleep(100000); // give other thread enough time to do B
is actually correct at the hardware level. Devices are expected to produce a stable output within some specified time, and they meet it. If they don't, a garbage value is blindly read anyway: there is no actual cause-and-effect synchronization, only adherence to the timing diagram's requirements.
That usleep is wrong. Sure, they exist all over the place, but they are wrong; they are merely easy and "good enough". The other thread has not finished its job because 100ms passed. The other thread does something, and whatever it does is the definition of done and ready, not usleep(100000). Sleeping is crossing your fingers and hoping that the other thread actually finished doing whatever it exists to do by then.
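To make that concrete, here is a minimal sketch (the thread function and the b_done flag are made up for illustration) of waiting on the other thread's actual completion with a condition variable instead of a timed sleep:

    /* Sketch: synchronize on the event "B is done", not on elapsed time. */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  b_done_cond = PTHREAD_COND_INITIALIZER;
    static int b_done = 0;                 /* the real definition of "done" */

    static void *other_thread(void *arg)
    {
        /* ... do B, however long it takes ... */
        pthread_mutex_lock(&lock);
        b_done = 1;                        /* B announces its own completion */
        pthread_cond_signal(&b_done_cond);
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, other_thread, NULL);

        pthread_mutex_lock(&lock);
        while (!b_done)                    /* no fixed delay: block until B says so */
            pthread_cond_wait(&b_done_cond, &lock);
        pthread_mutex_unlock(&lock);

        puts("B finished; safe to proceed");
        pthread_join(t, NULL);
        return 0;
    }

The wait ends exactly when B says it is done, whether that takes 1ms or an hour.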
The fact that digital circuits work by having all their parts synced to clock pulses, and that some parts even count the pulses so that the 7th bit in a byte is picked out strictly by the timing of 7 pulses, is pointless pedantry when the article operates in an entirely different domain that does not work that way. There, cause and effect exist, and the definition of a service being ready is that the service is ready, not that any particular amount of time has passed. The service might actually be ready before you even asked, or might still not be ready in an hour, or might have come up fully and gone back down again all within the sleep.
Today the service takes 100ms to start and you have a 150ms sleep. Tomorrow the same service runs on faster hardware and takes 1ms to start, and it has its own wrong timing assumption: it shuts back down if there are no clients (a stupid thing for most services to do, but people do stupid things all the time), so it starts up and sleeps 50ms "to give time for the client". The client, still inside its 150ms sleep, doesn't issue any requests, and the service shuts back down.
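For the service case the fix is the same idea: make the readiness check the thing you wait on. A minimal sketch, assuming a hypothetical local service on TCP port 8080 — retry the actual connect until it succeeds or a deadline passes, rather than sleeping a fixed 150ms and hoping:

    /* Sketch: poll the real readiness condition (a successful connect). */
    #include <arpa/inet.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <time.h>
    #include <unistd.h>

    static int try_connect(uint16_t port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) == 0)
            return fd;                      /* ready: the service accepted us */
        close(fd);
        return -1;
    }

    int main(void)
    {
        struct timespec retry = { 0, 10 * 1000 * 1000 };   /* 10ms between attempts */
        int fd = -1;

        for (int attempt = 0; attempt < 500; attempt++) {  /* ~5s overall deadline */
            fd = try_connect(8080);
            if (fd >= 0)
                break;
            nanosleep(&retry, NULL);
        }

        if (fd < 0) {
            fprintf(stderr, "service never became ready\n");
            return 1;
        }
        puts("connected: the service is ready by definition, not by stopwatch");
        close(fd);
        return 0;
    }

The 10ms retry interval only controls how often you check; the decision to proceed is the successful connect itself, so the same code works whether the service comes up in 1ms or 4 seconds.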
You have to be writing the firmware to a spinning disk to talk about time being the actual definition of anything.
> Physical events are not in fact limited to occurring at clock transitions.
You seem to be talking about "events". We are not talking about events. We are talking about the clock component of CPU chips that keeps emitting a periodic signal.
> The passage of 150ms of time is not what makes thing B clear to proceed.
It absolutely does. Except that in modern chips the time interval is in nanoseconds. Two decades ago, it used to be in microseconds.
Not trying to be snarky but a genuine question: Do you have any background in chip design and fab? What @kazinator and I are saying is not exactly surprising. These things are taught in undergrad or grad-level integrated circuit courses.
If the circuitry didn't wait those nanoseconds before the next state change, it would be making the state change based on intermediate and possibly garbage values, which would make the chip behave non-deterministically.
Again, we are not talking about events or cause-and-effect. We are talking about something that is more fundamental and is baked into the design of CPU chips at the very lowest level.