Since the advent of ChatGPT, there has been an accelerating downward trend in the growth rate of StackOverflow's traffic. Extrapolating from pre-ChatGPT traffic, there is currently a ~12% gap between expected and actual traffic, and over 60% of that gap has emerged in the last 7 days.
Well, if such a "tiny" reduction in workforce (certainly less than 12% or 60%) has led to such a traffic decrease on StackOverflow, I guess the layoffs have targeted those who significantly use SO for their day-to-day work.
Thus, if you don't want to get laid off in the next round, stop using StackOverflow ;)
What if the 12% were answering questions - not asking o.O. Hell, I'd rather a person go through the effort/thought process of asking a _good_ question to get closer to the answer they might be looking for.
I imagine the majority of SO visitors are neither asking nor answering questions, but only looking up answers to questions somebody else posted.
I can, however, come up with scenarios where a company would choose either cohort for layoffs (so you are helping engineers from other companies on the company's dime?).
Still, I was being completely tongue-in-cheek regarding causation, and mostly noting that there is unlikely to be any correlation there.
This is anecdotal, but in my search results StackOverflow is less often the top result. I often find solutions in GitHub issues, blogs, or first-party documentation. The bar for first-party docs has been raised in recent years; the new React docs are one such example.
2. How is the extrapolation done? ("If one were to extrapolate...")
3. The headline (12% drop in the last week) contradicts the body of the post (60% of a 12% difference in the last week).
It would not surprise me in the slightest if Stack Overflow was becoming less popular. However, without more details the post comes across as just pulling numbers out of thin air.
Woof. As long as you’re verifying the answer and making sure not to spread buggy code, this is probably good.
However, I'm sure there are agents out there that are more interested in gaming the Stack Overflow points system than in acting morally.
AI has been trained on human-generated code. What happens when AI is trained on code that AI itself has created? What a wonderfully absurd scenario. Technology is so much fun to be in, especially now.
I don't see the fiasco. Most of the time the human answers are wrong and the ChatGPT-generated one is right. I just run/test the code and review it.
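That run/test step can be as simple as wrapping the suggested snippet in a couple of assertions before trusting it. A minimal sketch, where the function is a made-up example of the kind of code an answer might propose:

```python
# Quick sanity check of a snippet before using (or reposting) it.
# dedupe_keep_order is a hypothetical example of answer-suggested code,
# not from the thread or any real answer.

def dedupe_keep_order(items):
    # remove duplicates while preserving first-seen order
    seen = set()
    return [x for x in items if not (x in seen or seen.add(x))]

# a few checks before the snippet goes anywhere near real code
assert dedupe_keep_order([3, 1, 3, 2, 1]) == [3, 1, 2]
assert dedupe_keep_order([]) == []
print("snippet passes basic checks")
```

A couple of cheap assertions like this catch the obvious hallucinated-API or off-by-one failures, though they are no substitute for actually reading the code.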
This extension does not post your LLM-generated answer, and I usually don't post it back tbh; it's just a nice shortcut from Google > SO > ChatGPT. I have the same for Google now.
It excels when SO questions have no answers; ChatGPT will readily generate one for you. It's up to the human to use that answer, repost it, or not. I personally don't, but I don't see why others shouldn't if the code works; it's not like it's real magic.
stackoverflow can be useful, but it’s not and never has been great. it’s very much the definition of good-enough. so i think with AI chat tools giving traditional search engines a run for their money at the moment, stackoverflow is looking at tough times ahead.
How about you not spend the same weekend creating throwaway accounts to make snide remarks in the hope of getting some engagement? This won't fill the longing that you have in your lonely heart for any connection. I wish you well.