Was looking to boot to Qt in under a couple of seconds for a camera use case a few months back, and came across similar demos. It can work well for very specific applications/products.
Here's a video of a talk by the author of this particular PDF: https://www.youtube.com/watch?v=KTRA1PRJWH8
If you need 8G of resident memory to have the apps you want running then you need 8G... no shortcuts. That is a pretty high figure, though. Afaik things like disk caches aren't restored during a suspend/resume.
I wonder if there are any disk suspend systems that are smart enough to tie into the VM system... instead of a specific monolithic RAM-sized suspend file, just flush everything to virtual memory on suspend, and on wake only restore the minimum needed for the OS and accounting, then start swapping everything else in lazily as the user needs it (or in advance in the background if the user is idle).
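A minimal sketch of the lazy half of that idea, using plain mmap demand paging (the "suspend.img" file name and the 16-page "minimum" are made-up placeholders, not how any real hibernation image is laid out):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv) {
    const char *path = argc > 1 ? argv[1] : "suspend.img"; /* placeholder name */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the whole "image" without reading any of it yet. */
    char *img = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (img == MAP_FAILED) { perror("mmap"); return 1; }

    /* "Resume" touches only the first 16 pages; each access faults one page in. */
    volatile long sum = 0;
    for (off_t off = 0; off < st.st_size && off < 16 * 4096; off += 4096)
        sum += img[off];
    printf("touched the first pages, checksum %ld\n", (long)sum);

    /* Everything else faults in on demand, or can be prefetched while idle. */
    madvise(img, st.st_size, MADV_WILLNEED);

    munmap(img, st.st_size);
    close(fd);
    return 0;
}
```

Nothing here restores kernel state, of course; it just shows pages being faulted in on access (or prefetched via an madvise hint) rather than read eagerly up front.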
Given that the disk is several orders of magnitude slower than RAM, lazily re-loading pages from disk into memory can induce very visible and annoying loading delays everywhere (e.g. changing tabs in the browser takes 3 seconds).
I'd rather wait 10 extra seconds than have a slow computer for the next 10 minutes.
How much do you cache? Hardware is difficult because you could add/remove/swap it at any time, and software information is already partly cached depending on your OS.
IIRC, Windows a) builds a list of files to pre-load from disk based on your usage, and b) doesn't actually fully shut down by default (it goes to hibernate mode: http://blogs.msdn.com/b/olivnie/archive/2012/12/14/windows-8...). I don't know if any Linux distros do either of those by default, but it is certainly possible.
Also, some BIOSes already have a "quick boot" setting which caches some hardware information, particularly about memory. It usually comes with a warning that you need to disable the setting for at least one boot when installing RAM.
Hardware requires specific handshaking to get it to initialize; the kernel cannot assume that the hardware that was present last time is available this time, and orchestrating everything is not an easy process.
The holy grail would be a kernel organized and designed to cache everything related to its own configuration to disk, and only re-run the hardware initialization code.
Admittedly, Windows' current approach to bootup is very close: it closes all applications then pseudo-hibernates the kernel. Bootup simply reads the hibernated kernel state from disk and reloads userspace.
This is of course besides traditional full ACPI hibernation, which is pretty cool but isn't a perfect art. (I'm typing this on a ThinkPad T23, a fairly old (and thus well-tested) hardware configuration; its hibernation/resume is usually rock-solid, but this morning my WLAN decided to get all stupid with dhcpcd, and USB decided that while it could see my external HDD, there wasn't a disk inside it. From-scratch boot is still the only way to get a reasonably predictable system state.)
The biggest problem with caching things during the boot process is that turning our devices off and on again will not magically solve our problems any more.
Linux can be so amazing when configured especially for some tasks. For example, JACK on Linux can deliver audio latencies under 2 ms! The problem is that configuring Linux like this almost requires a CS degree.
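For a sense of where the numbers come from: the latency JACK gives you is mostly period size over sample rate, times the number of periods, which is what you tune when starting the server (something like `jackd -R -d alsa -r 48000 -p 64 -n 2`, flags quoted from memory). A minimal client, assuming the libjack development headers are installed, that just reports what a running server is actually delivering:

```c
/* Build: cc jack_latency.c $(pkg-config --cflags --libs jack) -o jack_latency */
#include <stdio.h>
#include <jack/jack.h>

int main(void) {
    jack_status_t status;
    jack_client_t *client =
        jack_client_open("latency_check", JackNoStartServer, &status);
    if (client == NULL) {
        fprintf(stderr, "could not connect to a JACK server (is jackd running?)\n");
        return 1;
    }

    jack_nframes_t period = jack_get_buffer_size(client); /* frames per process cycle */
    jack_nframes_t rate   = jack_get_sample_rate(client);
    printf("%u frames @ %u Hz -> %.2f ms per period\n",
           (unsigned)period, (unsigned)rate, 1000.0 * period / rate);

    jack_client_close(client);
    return 0;
}
```

The hard part is everything around this: realtime scheduling permissions, the right ALSA device, and a kernel setup that can actually hold those deadlines.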
I once had a brief conversation (via Reddit) with a guy using JACK for a commercial (embedded Linux) networked audio system. I wasn't aware of the exact specifics but I understand it was Sonos-style, embedded boards inside speakers. He wasn't particularly interested in talking about the work his team had done :P
This is quite a paradoxical phenomenon that happens with some open source projects. The code is open and free, but some of the experts hold their knowledge very close to their chest. It's totally understandable, since they of course need to make a living somehow, but it kind of defeats some of the benefits of open source.
In my experience proprietary software is often far more complex, in fact in another league from most FOSS software for now (source: I have installed and reinstalled so many Oracle database, Enterprise Manager and APEX installations that I have lost count).
Depends on the software and author. Case in point: sendmail; virtually everything ever made by Red Hat (the software is free, but there's no useful documentation if you're not a paying customer… cf. NetworkManager, PulseAudio, GlusterFS… only systemd seems an exception, for now).
Good point! However, I would agree that server/system software that is "best-in-breed" tends to be FOSS. But I'm afraid I have no studies or other hard facts to support my case.
Configuration can be a pain. Sane defaults and smart UIs help, not to mention good documentation.
I've been playing with some software over the last week that has a wiki for documentation and an active user forum with people willing to help. But it's completely disordered and incredibly hard to discover anything. Half the battle with usability is making it easy to use and understand.
It doesn't have to be ill will, or even will. Something like JACK is a component of a system. Maintaining this component as open source is fairly simple -- but capturing and sharing an understanding of the complex interplay of components, and the tradeoffs needed to achieve certain ends in the wider system, is really, really hard.
Linux might be free and open source software, but it's not a failure of anything that it doesn't come with a free systems administrator education.
I don't think it's that the person in question doesn't want to share the knowledge; it's that imparting the knowledge is time-consuming and draining.
Has nothing to do with intelligence.
This is just a different knowledge domain, and there is shockingly little overlap with a traditional CS education.
To expand on this, it's the difference between knowing a field of knowledge, let's take cooking, and knowing how to use a tool, let's say a knife. While you can learn a lot about how to use a knife by going to culinary school, much of what makes a good knife good and a bad knife bad is very specific knowledge. The analogy kinda falls apart because you need to touch on topics like metallurgy and industrial processes, but hopefully that paints a reasonable picture. Linux is a tool, a very complicated tool with specific features. A CS/SE/IS education is a very general set of skills which should apply across not just Linux but most/all of computing.
Also under 1 second, and the approaches this guy took are seriously impressive. Lots of educational tidbits and "good-to-know"s in here even for non-embedded types. Reads very quickly too.
One second seems a rather arbitrary measure. In HCI, 100 ms seems to be the generally accepted value under which an action appears to be happening instantaneously [1].
It's quite sad to see how we hardly use the computing power offered by our modern computers to actually speed up common tasks, but instead bloat it up with all kinds of stuff.
But 1 s is sufficiently small that you don't notice the system is loading between the time you switch on the engine of a car and the time you start using it.
The same applies to Virtual Reality. You need sub-20 ms head-tracking-to-eyes latency or you will feel sick (effectively less than 10 ms on the CPU side; display scanning is added on top).
The 100ms figure applies mostly to traditional UI workflow. Whenever you need a realtime interaction that's 10x too high.
That's a relief. They are finally taking the issue seriously then.
There are also indications that they are going to lower the display path latency, which also stands several times higher than what is in iDevices. It just takes some time to get a certain class of software architects to realize that buffering a frame several times each step might not be the smartest way after all.
I'm glad you posted this. Even the 10ms rule of thumb is...well, not always true either!
FWIW, if you are playing virtual drums, drummers need something like under 2ms response time; even 10ms can be easily felt by a drummer and cause their timing to get way off!
On top of that, if you are doing, say, a real-time effect on a vocal track, even a tiny sub-1 ms delay between what the singer hears in her monitor and her own voice causes a weird reverb effect, which will cause her singing to get out of sync!
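For a rough sense of scale (assuming a typical period-based audio setup): latency per buffer is frames divided by sample rate, so 64 frames at 48 kHz is about 1.3 ms, and with the usual two periods of buffering you're near 2.7 ms before converter and driver overhead. That's why sub-2 ms targets tend to mean very small buffers or direct/hardware monitoring.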
If you can boot faster than the screen can come to life, you don't need to be faster. IoT and servers might not have screens, so this doesn't apply to everything, but for devices with screens, just beating the screen warmup is good enough to feel like immediate startup.
Also, for devices that require user authentication, such as logging in to an account, you could offload the authentication (such as fingerprint or other biometrics) to something outside the OS itself (say, a ROM routine that starts up with a 0 in a certain memory address and changes it to 1 on authentication). Then you could be booting the OS in the background while you take a second or two to log in, and if the OS can get up and running by the time you log in, it can pretend that it started up immediately and was the thing that logged you in. For many such devices, there is no need to boot up in less than a second or so to seem as though boot-up was instantaneous.
One second, ten seconds... The problem isn't that it takes time, it's that the user has to sit there and wait while it takes that time.
My desktop takes about two seconds to resume from suspend. I'd sit down, lean down to the floor and press the power button and then wait that two seconds. I fixed the whole problem by moving the case up onto the desk so that I have to walk past it to sit down. Now I press the button and then sit down, by which time the computer is waiting for me. The way it should be.
Sidebar: Now I'm slightly concerned that an AI in the future is going to call me out for my blatant display of meatbag privilege.
We can throw tons of money at engineering boot times, but really the biggest gains are made when you can predict what the user wants. Imagine you approach your car. If it can detect you or your key, it can start doing the stuff it'd normally do when you put the key in the ignition. Suddenly it doesn't matter that it takes 5-10 seconds to boot, because it also takes 10 seconds to get in the car.
All this stuff is going to get better once we have widespread adoption of technology for fine indoor tracking. When we can tell our relative positions to our gadgets, they'll be able to sleep deeper, ring louder, shine brighter and start booting earlier.
The problem with anticipation is that nobody except you can truly know your intent. For example, walking up to a car with my keys might just mean I want to grab something out of the trunk; or the car might be in a closed garage and that wouldn't be quite safe at the moment.
The easiest and safest thing is usually still to let people control more of what happens, because there are a lot of edge cases that might add up, or have really bad consequences on the rare occasions they do occur.
That's not a problem with anticipation as a concept, that's a problem with bad anticipation. Certainly it's possible to detect whether the car is in a closed garage, and it will eventually be possible to predict whether you want to grab something out of the trunk or go for a drive.
Even though it often feels like you do, humans generally don't know their own intents. It's not that high of a bar for a predictive system to clear.
Wouldn't be safe? I'm talking about starting an on-board computer, not the engine.
Unless you're going to be repeatedly walking to and from your car all day, starting a computer needlessly isn't going to be the end of the world.
Yes, it may mean doing more work but that's the trade off here for "instant on". It's either always on, you wait, or it guesses when you're going to use it and you accept there are a few false positives.
> It's either always on, you wait, or it guesses when you're going to use it and you accept there are a few false positives.
Or engineers reduce startup time to a point where it feels instant.
Compare e.g. the startup times of DSLRs and point-and-shoot cameras. With a DSLR you switch it on and it is ready to shoot by the time your finger gets to the shutter button.
I'm not saying it has to be an exclusive development. You can improve the other stuff too, but once relative location becomes a solved problem, it presents a bevy of particularly low-hanging fruit.
It's also very hard to engineer a boot time down. TFA is a great example of that. Hard means expertise and time and (often) more expensive hardware. So to boil it right down with my car computer example, it's far easier and cheaper to detect the key externally and boot earlier than it is to strip a 10 second boot time down to 0.1 second.
If you can also use that getting-in-the-car time to do other stuff (looking up recent searches or upcoming appointments and suggesting navigation, setting fuel waypoints if low, getting traffic advice, etc.), that's all the better.
This is why e.g. GoPro uses a hybrid approach, with an embedded RTOS for the camera stuff and then virtualized Linux on top of this for less critical work where you want to use a stack, e.g. HTTP streaming. Three years back I worked in a team which looked at booting Linux for a camera device in < 100 ms and couldn't find a solution.
Comparing the richness of the user experience for a wide variety of applications now versus a decade or two ago, I think we are doing plenty of useful things with modern systems that are not actually "bloat".
Even that may be debunked: it seems it's now down to 13 ms, and that number is only due to limitations of the testing equipment, according to Mary Potter, an MIT professor of brain and cognitive sciences and senior author of the study.
With my Nexus 4, rebooting every other day is a pretty good idea. Usually fixes my Wi-Fi, 3G, and all sorts of weird bugs for the moment. Don't know how representative this is for Android phones in general, but most Nexus 4 owners I know tell me this is normal.
Oh, usually once a week, sometimes twice a day, the phone will do the rebooting for me. Sometimes when it's in my pocket, sometimes when I try to watch a video that's just too long. Maybe some kind of memory error?
I used to have a Nexus 4 and had the same problem. I now have a Samsung Galaxy S3 and still have the same kinds of issues, just not as frequently. From what I've heard from other people I think it's fair to say it's a general problem with Android-based smart phones.
I reboot my iPhone 5s every two or four weeks or so. Sometimes that's just the easiest way to deal with some bugs (e.g. AirPlay issues). Since it happens so infrequently I don't really mind the boot times that much. But all else being equal, of course faster is better than slower :)
I have a Galaxy S3, and it was rebooting randomly more and more often. At the time, I lived in an apartment. I bought a house earlier this year, and to my knowledge (and memory) this hasn't happened again since.
What you said reminded me of this, because I had forgotten this infuriating rebooting. I wonder if this was related to the building or something else? This is weird.
Wow, that's exactly my experience. Used to reboot 4-5 times a day and now it happens roughly once a week.
Did you happen to use Bluetooth more in the apartment by chance?
I have included "feature phones" in my original comment. None boot close to 1 second, and these, too, include a lot of functionality that I would like stripped out.
My $20 Wiko Lubi does boot in a second, or barely more (not counting PIN input; I don't know if it's booting in the background), and I can't complain about having too much functionality.
It takes a second or two to use my lockscreen, so with some prioritisation the lockscreen could show early, giving the system more time to boot while also feeling more responsive.
KDE used to load a session in the background when the login screen showed so that you didn't then have to wait after logging in.
On Android without root the system seems to use a ton of resources on background jobs that aren't needed; I imagine that slows boot too.
My GF's Samsung smart TV has to update at least once per week when you power it on. There are regular warnings about service unavailability. It spends time logging on each time you start it. Sometimes it won't even turn on when you press the button. It's a completely awful machine. I'd honestly rather have a CRT.
JS is also sandboxed, which seems to be a particular problem for traditional readers. I’d trust PDF.JS to get faster more than I trust PDFium to not have security issues.
Yeah, that's what I've been having to do on a lot of things lately. I told them what would happen when they first started pushing it. I looked at the person next to me and said: with all the things Win98 on a PII did, the modern era chokes a Core 2 Duo trying to look at a document in a browser. Unreal...
That was a bait and switch by management, from what I could tell. They got a bunch of contributions from the community, then pulled it. Or they just felt it wasn't worthwhile after receiving few contributions; I can't be sure.
Nonetheless, the underlying technology is solid, as shown in the BlackBerry PlayBook.
People are OK with frequently booting their mobile devices to temporarily "fix bugs", which can take up to tens of seconds, but waiting a moment to boot a desktop device? Nuh uh.
Of course there might be some market advantage to be had with faster booting and it's a plus for the whole UX. And optimisation is definitely cool, though I think many more problems would be solved if people were forced to think once in a while, whether it's while waiting for the bootup sequence to complete or a train to arrive.
The problem with a desktop is that most people have at least one, and usually several, applications running. Those may need saving as well before quitting. Mobile phones are better in that way: all apps are required to be able to save their state at any time, so rebooting can be a mindless procedure.
Seems like this is more about car computers, embedded systems, and replacing "suspend" than just improving the desktop experience. The slowest booting machine I own is my windows Steambox, and it's still faster to boot than my TV or receiver (5-10 seconds) just by virtue of having an SSD.
(The steady state is that computers are running, so if it takes 30 sec to cleanly shut down then your cycle time is 31 secs... obviously, this doesn't apply to stateless servers or ones that use write-through caching...)
The kind of device this kind of optimization applies to typically runs with readonly flash, maybe only writing some config rarely... So, just pull the plug.
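A common way to make "just pull the plug" safe is a read-only root with volatile state in RAM; a sketch of what that can look like (device names, mount points and sizes here are assumptions for illustration, not taken from the thread):

```
# /etc/fstab (sketch)
/dev/mmcblk0p2   /          ext4    ro,noatime                0  1
tmpfs            /tmp       tmpfs   nosuid,nodev,size=16m     0  0
tmpfs            /var/log   tmpfs   nosuid,nodev,size=16m     0  0
```

Rare config writes are then a brief `mount -o remount,rw /`, write, sync, `mount -o remount,ro /`.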
Our team was working on a vehicle telematics platform (connected cars). We had a requirement that the remote engine start time should be within 15 seconds from the time the customer clicked the start button on his phone. For this, the QNX system running in the vehicle had a fastboot mode, which came up within that timeframe, brought up the interfaces, established a network connection to our server and started the car. Once the car was started, the system would go down and come back up in a normal boot.
Will be interesting to see what impact Intel's 3D XPoint will have on boot times too.