“This Browser Is Lightning Fast”: Effects of Messaging on Perceived Performance [pdf] (arxiv.org)
112 points by cpeterso on 2021-03-12 | 58 comments




Great! More academic articles about how to lie to users and convince them that your software is better than the competition without actually making it better.

Dark Patterns are the new Light Patterns!


Here's another academic article about how to lie to your users from just 8 days ago: https://news.ycombinator.com/item?id=26345283

Truth is so 1995. Nobody cares about honesty anymore!


I wonder if we're happier when we're lied to. Reality kind of sucks

brave new world

Same old world

Thought experiment: two programs take exactly the same time to complete a task but one of them is perceived as slow and the other as fast (for whatever reason). Shouldn't this make the latter the better one of the two programs? At least I would count "being less annoying than the other program" (assuming that perceived slowness is annoying) as a positive feature.

If the only basis for perception of speed is essentially deceit, "being less annoying" means it lies to you more - I don't think I'd call that an intrinsically positive feature.

Why should the perception of speed have to be based on deceit?

I bet I would perceive a progress bar that advances at a constant speed as faster than one that stalls and stutters, even when both take the same time to complete. Even more so if the alternative were an indeterminate progress indicator (hourglass pointer, spinner) that just goes away when the task finishes.

On the other hand, if the result is that a user is less annoyed by a process, I don't see why it should be wrong to create that feeling artificially by setting up a situation in which the same result makes them feel better than a more "honest" one (as you might call it) would.


I definitely agree that showing a progress bar makes a slow operation feel faster, I've seen this work many times.

I think I disagree about the constant speed vs. stuttering progress bar example, though. Progress bars that advance smoothly are great if they are accurate. But because some UIs show a fake progress bar that smoothly fills, empties, and fills again, I've been trained to be skeptical of them. I don't think I've ever seen a stuttery progress bar that was "fake" in that way.


Showing a constant speed progress bar to hide stutter is effectively applying a low-pass filter. Since stutter carries information (that may not be of interest to all your users, but perhaps some), you're low-pass filtering that information.
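
To make that concrete, here is a rough sketch of what such smoothing amounts to: an exponential moving average over whatever raw progress numbers the task reports (the function name and the alpha value are purely illustrative):

    # Illustrative only: a "smooth" bar is effectively low-pass filtering
    # the raw progress samples it receives.
    def smooth_progress(raw_updates, alpha=0.2):
        """Yield low-pass-filtered progress values in [0.0, 1.0]."""
        smoothed = 0.0
        for raw in raw_updates:
            smoothed += alpha * (raw - smoothed)  # exponential moving average step
            yield smoothed

    # A stuttery source: a long stall, then a burst at the end.
    print([round(p, 2) for p in smooth_progress([0.1, 0.1, 0.1, 0.1, 0.9, 1.0])])

The stall and the burst get flattened out of the output, and that flattened-out part is exactly the information being filtered away.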

It is a deceit - you're purposefully concealing information, and you're doing it for some reason. How bad is that? It depends on how much a given piece of information is useful to the recipient. Smoothing out something whose high-frequency data matters to the user isn't nice (and can feel paternalistic at times). Imagine your car speedometer didn't display instantaneous speed, but a running average over a one-minute window. Would you feel comfortable with it? Or if your bank account displayed your balance as a running average over a week-long window?

Progress bars being accurate isn't that important for most people, but it may be for some. In the same way you can get a feel for your car by listening to the sounds it makes, you can get a feel for the state of your computing environment by observing a progress bar. For example: does it seem to slow down hard whenever the "storage" light on your laptop is blinking fast (or, in the old days, whenever your HDD made noise)? That may imply you have an issue with your storage. Smooth a progress bar out, and you're removing such environmental cues.


The progress was meant to be truthful in both cases. Imagine the case in which someone is getting enough feedback from the backend to be able to show percent-wise changes vs. the case where updates are pretty coarse.

I think the dishonesty comes in when the progress bar uses time as an input instead of observable events. It's fine and good to modify your API to return better progress information, as long as that doesn't cause the operation to take longer or significantly increase load.

However, representing a coarse process with time-based progress is perhaps dishonest.

For example, assuming that step 2 of some 3-stage process always takes 20 seconds and artificially filling the progress bar over that duration presents potentially false progress to the user. For operations that may take a long time with some variance, the user may be left waiting at a stopped progress bar and think an error has occurred.
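
A quick sketch of the two approaches, with made-up function and callback names (Python just for illustration):

    import time

    def progress_from_events(steps, on_progress):
        """Event-based: the bar advances only when observable work completes."""
        for i, step in enumerate(steps, start=1):
            step()                           # do the real work
            on_progress(i / len(steps))      # report measured progress

    def progress_from_timer(expected_seconds, on_progress):
        """Time-based: fills the bar against an assumed duration, whether or
        not the underlying work is actually advancing."""
        start = time.monotonic()
        while (elapsed := time.monotonic() - start) < expected_seconds:
            on_progress(min(elapsed / expected_seconds, 1.0))
            time.sleep(0.1)

The second one keeps moving even when the work has stalled, and stops moving when the work overruns the assumed duration, which is exactly the "stopped progress bar that looks like an error" situation.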


Do you consider progress bars and loading states "deceit"? It helps to know that something is happening, vs just a frozen UI with no indication of work being done in the background.

Because it's perceived as fast only because users have been reading articles about how much faster the devs have made it (as in TFA).

If you can consistently put out these articles to prime your users, and have your software still perform at the same speed as the competition, then either the devs aren't doing much at all, or marketing is more full of it than usual.


> Want to speed up your Internet? Try Chrome!

Google literally refused to let this go for at least half a decade.


I agree, instead of making it look faster, make it actually faster. Websites these days are filled with unnecessary crap that just slows things down, start by eliminating those things ffs.

Abstract

With technical performance being similar for various web browsers, improving user perceived performance is integral to optimizing browser quality. We investigated the importance of priming, which has a well-documented ability to affect people’s beliefs, on users’ perceptions of web browser performance. We studied 1495 participants who read either an article about performance improvements to Mozilla Firefox, an article about user interface updates to Firefox, or an article about self-driving cars, and then watched video clips of browser tasks. As the priming effect would suggest, we found that reading articles about Firefox increased participants’ perceived performance of Firefox over the most widely used web browser, Google Chrome. In addition, we found that article content mattered, as the article about performance improvements led to higher performance ratings than the article about UI updates. Our findings demonstrate how perceived performance can be improved without making technical improvements and that designers and developers must consider a wider picture when trying to improve user attitudes about technology.


I think the priming effect is more subtle though. What people express out loud might be primed, but their actual feelings may not change. I wish we had kept the data on this, but our search experiment at reddit is a good counterpoint:

At one point we measured how search was doing, so we added a button to the top of the search results that said "did you find what you're looking for?". 70% clicked yes. Not great, not awful.

Then we upgraded the search engine, but didn't tell anyone. Suddenly that stat jumped to 90%+. But people would still complain just as much about how bad search is. Many months later, we finally announced that we had changed the search engine.

The stats on the button didn't change, but the public narrative did. So what people say they perceive and how they actually feel may not necessarily match.


People remember two things - when things don't work, and when they start working.

The first is why people will complain something is "crappy" if they had one bad experience with it, and the second is why the "new" thing is often perceived as better EVEN if it has more problems than the old - as long as it doesn't have the same problems.

After a while the "new" wears off and it's crappy again.


I am guessing that most people wouldn't use reddit search because of its reputation, so the 90% of people saying they found what they were looking for were a small % of users. When you posted that you updated search, a lot of people who had given up on it might have tested it out again and changed their opinions.

Does that check out with your data?


Sadly I don't remember nor have the data. But that is certainly possible and could have skewed the data.

Such things are very tricky, because negative experiences are remembered vividly. Search working is expected, search not working is a huge problem. Also, search is a hard thing to get right.

When it goes wrong on reddit, it goes annoyingly wrong. For instance, my main issue is that some searches return a flood of irrelevant content. Searching for some games brings a flood from r/GameSwap or some such place. Or, trying to search about Nikola Corporation will bring up a whole lot of sports personalities.

That makes sense in that it's a tough problem to solve, but what's annoying is that it has to be dealt with by hand every time. I can write a filter, but what I'd really like is a permanent setting: "I'm not ever interested in results from /r/GameSwap or /r/SportsSubreddit". It might also be helpful to be able to set a limit on how much can come from a single subreddit, because some contain very repetitive content that drowns out all useful results.

Edit: Also, search should parse YouTube URLs and ignore HTTP vs. HTTPS, youtube.com vs. youtu.be, and the ?feature=share junk at the end. I can't be the only one who thinks "This must have been discussed on Reddit, and the discussion has to be more useful over there", but Reddit comparing the URLs literally makes it annoying to find matches.
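
Something roughly like this is the kind of normalization I mean; a hedged sketch, not anything Reddit actually does, and it only covers the common host and parameter variants:

    from urllib.parse import urlparse, parse_qs

    def canonical_youtube_id(url):
        """Reduce youtube.com / youtu.be links to a bare video ID so that
        scheme, host, and tracking junk like ?feature=share don't matter."""
        parts = urlparse(url)
        host = parts.netloc.lower().removeprefix("www.")
        if host == "youtu.be":
            return parts.path.lstrip("/")
        if host in ("youtube.com", "m.youtube.com"):
            return parse_qs(parts.query).get("v", [None])[0]
        return None

    # Both of these should collapse to the same ID:
    assert canonical_youtube_id("http://youtu.be/abc123XYZ?feature=share") \
        == canonical_youtube_id("https://www.youtube.com/watch?v=abc123XYZ")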


I’m not sure why people hate Reddit search so much. I’ve never used it much, but when I have, it’s been fine over the last decade or whatever.

The narrative is so strong though, I’m not sure how you could defeat that without creating a radically different solution that derails the narrative.


I’ve been using reddit since around 2012 and throughout that time I rarely used Reddit’s search mainly because it didn’t search through comments in a post to score relevance. The only times I would use Reddit’s search was if I remembered some words or phrases in the posts title and I had a specific post in mind I was looking for. I’m also pretty sure that back then the relevance of search results in general using Reddit’s search was far inferior to site:reddit.com specifically in query expansion (synonyms & misspellings in particular).

I only started using Reddit’s search recently because of changes to google that make reddit search results have incorrect times.


I hate that in the new design, when I'm IN a specific subreddit, there's no way to limit the search to that subreddit. After the search is done, it'll show a link to only display results from inside that subreddit, but 100% of the time when I'm in a subreddit, I want results from that subreddit.

Edit: adding that because of the search, and lack of managing multi-reddits, I'm still using the old reddit with the RES extension


Besides the rare "OMG! This is AMAZING!", people who are upset by something are much more likely to make noise than people who are content with it. It's just how we are wired.

Odds are, no changes you make will completely silence the loud few, and even if it does, it'll trigger others. You can track engagement though. If the majority of people silently but demonstrably show that they enjoy how things are, you have a solid foundation to build on. The numbers will speak for themselves.


This is something I learned as a subreddit moderator. People were quite vocal about certain things, but when we polled the community, they turned out to be a minority.

For example, people constantly complain about pictures of the same few things filling the sub, but those pictures get massively upvoted, and several polls confirmed that people want them to stay. Yet we go over this topic every damn week.

Another was my own behaviour. I often post links to my own website in answer to questions. Some users were quite vocal about it being spammy, but every time we polled the community, they said it's alright. My comments get upvoted, OP gets his answer, and mostly everyone is happy. The mods also gave their approval.

There's a point where you can say "haters gonna hate", and rest easy knowing that most of your users are happy.


The fact that the search engine changed though is information. If the search engine is the same, maybe they had 490 mediocre experiences with the old one, and then 10 better ones. Since it's the same search engine, they're going to average all of those together and say it sucks.

If you tell them that the search engine is brand new, they'll reset their expectations and only look at the new data to make a judgment.


Is this referring to Google? I never click any buttons or participate in surveys that send more data to companies like Google. Whether I found what I was looking for, etc., is none of their business. As such, any conclusions drawn from users (or bots) who do click such buttons ignore all the users who don't.

At the same time, I do have opinions about these companies and what they have done to the concept of a "www search engine". I may share these elsewhere, such as on HN. Any conclusions based solely on clicking buttons on a company website would be ignoring user comments elsewhere. One could argue any results are only potentially applicable to the type of user that clicks buttons asking for feedback, which may be a small subsection of total users.

"Priming" is an interesting idea. One of the earliest, most cited studies was performed by one of Robert Zajonc's PhD students who joined the faculty at NYU and is now at Yale -- John Bargh. However those foundational studies and subsequent ones by Bargh, as well as countless social psychology research that relies on them, were called into question about ten years ago when other labs found the results could not be replicated.^1 When one of these labs in Belgium published about the failure to replicate, Bargh went bananas. He attacked the investigator's paper but failed to address the issue by trying to replicate the original study himself. Daniel Kahnemman, whose popular science books which often rely on these studies, acknowledged the problem and called for more replication studies in social pscychology.^2

1.

http://blogs.discovermagazine.com/notrocketscience/2012/03/1...

http://www.nature.com/news/replication-studies-bad-copy-1.10...

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3430642

http://www.sciencenews.org/view/feature/id/340408/descriptio...

2.

http://www.nature.com/polopoly_fs/7.6716.1349271308!/suppinf...


Don’t take this the wrong way but Reddit search is still objectively awful

There is no date in the paper...


I'm prepared to catch a lot of shit for this, but I get the feeling this is where we're at with the M1. Yes, the M1 Macbook Air is faster than Intel Macs, but that's not a high bar of entry. People were reasonably frustrated with how Apple gimped Intel's CPUs to fit ultra-thin machines, so why take that anger out on Intel? Intel is far from being the best company on the block (or even in the CPU space), but it's pretty concerning to watch how fast people jump to conclusions based on the messaging they get from YouTube and Twitter. I've argued about this with several Apple users, and it always boils down to the same closing argument: "but I want to use a Mac!" There's nothing wrong with that, but it's certainly a better place to start than "this x is so fast!"

I'm not sure what you're getting at.

I might agree that the M1 Air being faster than the previous Macbook Air is one reason why users perceive it as faster, though that doesn't really explain it seeming "much faster." You'd also have to argue that Big Sur makes up for some of the difference, and assume comparisons aren't being made between updated Intel Macbook Airs.

I haven't had an opportunity to use an M1-based device, so I really just have to accept that it's surprisingly fast. Of course, just like some "want to use a Mac", I "want to use a gaming PC/laptop" and so I do. And my gaming PC and laptop are both "very fast." I don't know that their speed would surprise people coming from older, Intel-based PCs or laptops, though depending on what they are doing, they might.

But I still don't know what point you're trying to make. I guess you're just saying people read that the M1 machines are fast, and so they think they are. But there are also benchmarks showing it performs remarkably well, on par with low-wattage Ryzen laptop CPUs and, in some single-core benchmarks, with high-wattage Ryzen desktop CPUs, which are really fast.


Try actually using one?

The 2015-2017 12" Macbook was an ultra-thin fanless machine that was delightfully portable but was basically unusable for anything more than basic web browsing. It'd burn my thighs just watching Youtube.

The 2021 M1 Air is a slightly chunkier fanless machine that runs circles around Intel laptops with desktop-class processors, at lower cost and up to triple/quadruple the battery life. It's unlike any Intel laptop you'll find on the market today.

Apple deserves plenty of flak for their laptop design decisions (crappy webcam, fat screen bezels, port selection...) but the M1 is probably the biggest improvement to laptop performance in a decade.


How does it compare to the Ryzens?

In any case, x86/x64 is a terrible ISA and it really needs to go. Sad that it's still around and I'm surprised it can even run as fast as it does.


> The 2021 M1 Air is a slightly chunkier fanless machine that runs circles around Intel laptops with desktop-class processors, at lower cost and up to triple/quadruple the battery life. It's unlike any Intel laptop you'll find on the market today.

Second this. I got the cheapest M1 MacBook Air in December and it really is streets ahead of the competition in terms of performance and battery life.


> Try actually using one?

I actually have one, sent in by corporate. They gave me an M1 Macbook Air for unit-testing various things, and it collects dust most of the time. It doesn't work with either of my displays, 70% of the software I run on it has some sort of "gotcha", and the GPU performance is... well, not great. That said, it's not the weakest machine in the drawer, but it's certainly not as zippy as I've heard people claim. I still reach for my Thinkpad when closing issues or reading emails.

Coming straight to it: in my perception, the performance-wise ranking would be:

1. Microsoft Edge

2. Google Chrome

3. Mozilla Firefox

Although Firefox is a RAM guzzler and can get excruciatingly slow, I made Firefox my primary browser after I got fed up with Google AMP, and was surprised to find so many useful features in Firefox, such as sending tabs between mobile and desktop.

Edge has done a pretty decent job, though I have some issues with its tab freezing and the recently introduced vertical tabs.


I haven't tried the new Edge, but the old Edge used to get into states where button presses on the controls would be queued. That's not great for performance perception. (Incidentally, Firefox on Android sometimes gets there too, especially after viewing npr.org, hmm.)

Edge Legacy will be deprecated this year, along with IE, AFAIK.

Firefox has some of the best memory tools and your problems could always be extensions. (Or if not, getting a content blocker extension might help.)

Check about:memory.


Did you read an article about Edge's performance recently?

Which one? Please post the link here.

I have always thought that the fastest browser is the one I don't use daily. The fact that my daily browser has 50-100 tabs open makes the browser I switch to seem much faster.

Unless of course the first thing that happens when I switch is that the seldom-used browser asks me to restart it to install a new version.


I'm staying with FF since a) FOSS and user-first and b) multi-account containers.

Nevertheless, I tried BrowserBench's benchmarks yesterday, and was shocked and saddened at how far FF has fallen behind.


I had some issues with Firefox memory usage quite a while ago; tweaking about:config and using the Auto Tab Discard addon made it much better. IIRC the about:config changes were to various cache settings, but one of the more surprising things was that disabling the disk cache significantly reduced memory usage. No idea why; maybe Firefox's bookkeeping code for stuff cached on disk used a lot of memory for some reason. This is just what worked for me, so YMMV; it only might work for anyone else.

One of my bigger annoyances with Firefox at the moment is that it writes to the cookies.sqlite and cookies.sqlite-wal files consistently every few seconds. This write-all-the-time behaviour led to Firefox being called an "SSD killer" a few years ago, when SSDs were not as durable as they are now. Every change I have tried in the about:config settings to fix it hasn't worked; I got rid of session restore etc., no dice. It looks like I will have to put the Firefox profile directory on a ramdisk to fix this, symlinking the password files back to the HDD, because they are the only thing I care about getting written to disk straight away.
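
Roughly what I have in mind is below; a hypothetical sketch only, with a made-up profile path, and it assumes a tmpfs mount such as /dev/shm is available. You'd still need to point Firefox at the copy and sync it back to disk on shutdown, so treat it as an outline rather than a working setup.

    import shutil
    from pathlib import Path

    # Hypothetical paths; substitute your real profile directory.
    disk_profile = Path.home() / ".mozilla/firefox/abcd1234.default"
    ram_profile = Path("/dev/shm/firefox-profile")  # tmpfs-backed, so writes stay in RAM

    # Copy the profile into RAM, then symlink the password files back to
    # disk so they are the only thing persisted immediately.
    shutil.copytree(disk_profile, ram_profile, dirs_exist_ok=True)
    for name in ("logins.json", "key4.db"):
        (ram_profile / name).unlink(missing_ok=True)
        (ram_profile / name).symlink_to(disk_profile / name)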


Apropos of the auto discard add-on, is it made by Firefox or at the very least verified by them? TBH, I am wary of using any plugins or add-ons, since they might slow down the browser.

Currently, I have only the Grammarly plug-in installed.

Might I enquire what changes you performed in about:config to speed up your browser?


The addon is this one: https://addons.mozilla.org/en-GB/firefox/addon/auto-tab-disc...

On that page it says that it is recommended and that "Firefox only recommends add-ons that meet our standards for security and performance." I have had no performance problems with it. However, you have to be very careful with it: make sure that if you are filling out a form or something, you tell the addon to keep the tab. Auto Tab Discard has options to keep a tab for the session, or to always keep tabs for a specific site. Just make sure it keeps the tab for a page you are writing something on, otherwise you might lose information not yet submitted. I am not sure about this, I don't remember testing it, but better safe than sorry.

As for the about:config settings, I believe they were these two:

browser.cache.disk.enable = false

browser.cache.memory.max_entry_size = 512

The first one is obvious. The second one limits the memory cache to storing only entries smaller than 512 KB. My reasoning for changing this to a smaller value was that smaller things like tiny images tend to be reused across pages, whereas larger things are less likely to be.

One I didn't change was browser.cache.memory.capacity; look it up if you want. From memory, the one that really seemed to matter was setting browser.cache.disk.enable to false; I really don't know why.

I hope it works for you. Have a good day.


Thanks so much.

Just like Auto Tab Discard, I have heard of OneTab.


You can fool some people some of the time, but people use their browser every day; if it's not fast, they'll use another.

And AFAIK Firefox isn't planning an IPO or an acquisition by Google, so I don't see why they would want to use these cheap tricks.


Mozilla's main income comes from Google being the default search engine in Firefox. I think it's about half a billion dollars per year.

It's in their absolute best interest to market Firefox as much as possible.


>It's in their absolute best interest to market Firefox as much as possible.

Google? No. Financing Mozilla is merely a move to fend off monopoly claims. Google doesn't care about Mozilla.


Obviously I meant it's in Mozilla's interest to market Firefox.

Yeah, they care about retaining users, not showing "promising growth", which is what they'd achieve with such tricks.

A classic: the Mojave Experiment https://www.youtube.com/watch?v=bsStHxtVr_w

"Well, I have to confess to you; this is all staged; everyone on camera is reading a script; none of this is real."
