I thought the CFAA was 1986ish. That anyone could believe it was up for debate in 1989 is a little silly... or did it take a couple decades of case law to give the CFAA teeth?
Interesting to see the perspectives from early hacking pioneers. Seems like some things haven't changed much - debates over ethics of unauthorized access, whether it's criminal, free speech implications, etc. But more nuance now as hacking's gone more mainstream.
Biggest change is probably threat models. In 1990 main concern was individuals hacking systems for challenge, curiosity, etc. Today it's nation-states and organized crime using hacking for financial gain, espionage, even kinetic attacks.
Other change is commercialization/professionalization of hacking. Now huge industry around cybersecurity, ethical hacking, bug bounties. Hacking skills lead to lucrative careers, not just hobby or activism.
More diversity today too - no longer just male techies. But part of cyberpunk spirit remains, even as hacking's become bigger business and political issue.
John Perry Barlow, from this transcript: "Driving 110 miles per hour on Main Street is a common symptom of rural adolescence, publicly denounced but privately understood."
Out of context, that is wildly accurate. Older people will tsk-tsk when they hear about that kind of thing, but most people in rural areas use it as an excuse to recount their own dumb teenage stories. When I got a speeding ticket was when I found out my father got one for riding a piece of waxed cardboard behind a car on a snowy highway.
In Fred Turner's "From Counterculture to Cyberculture", Turner contrasts the perspectives of John Perry Barlow and Lee Felsenstein as the old guard of cybernetic counterculturalists versus Acid Phreak and Phiber Optik as the new guard of modern hackers:
> When they joined the discussion on the WELL, Phreak and Optik immediately set off a culture clash. The conflict could be seen clearly in the edited version of the forum eventually printed in Harper's. Like the online forum, and like its predecessor, The Hackers' Conference of 1984, the conversation opened with a discussion of the hacker ethic. WELL regulars described the ethic in cybernetic and countercultural terms familiar to their online colleagues. Lee Felsenstein compared hackers to the "Angelheaded hipsters" of Allen Ginsberg's poem "Howl." John Perry Barlow described them as solitary inventors designing a system through which humans would acquire the simultaneous unity of other "collective organisms." Acid Phreak would have none of it. "There is no one hacker ethic," he wrote. "Everyone has his own. To say that we all think the same way is preposterous." Among WELL regulars like Felsenstein and Barlow, hackers were cybernetic counter-culturalists, creatures devoted to establishing a new, more open culture by any electronic means necessary. For Acid Phreak, the hackers were break-in artists devoted to exploring and exploiting weaknesses in closed and especially corporate systems.
I would be really interested to hear how the perspectives of the surviving participants in this conversation have evolved.
I’d argue that if computer hacking wasn’t treated as a crime but rather just a natural occurrence we’d have a much more robust security culture today. It would also be a lot more fun to be a hacker.
Agreed. What is holding a LOT of people back from doing even kind-hearted hacking (i.e. "grayhat" activities with the purpose of helping the victim) is the fact that a criminal record can break your career entirely. Bug bounty programs only get you so far, and some, I believe (I have very limited experience with bug bounty hunting), have a strict scope.
Greyhats, and especially bug bounty programs, pen testers, etc., have explicit authorization from the owners of the systems they access, and they perform ethical hacking toward a mutually beneficial goal: the hackers get paid, and the company gets a little bit less of an attack surface.
That’s not illegal
What’s illegal is accessing a computer system without the authorization of its owners. Technically speaking, port scanning the internet is illegal hacking, as you are not authorized to probe each port on any of those machines. Ever find a random IP and give port 22 a few tries over SSH to see if the root password is “guest”? You just committed a federal offense, because you were not authorized to access or attempt to log in to that system. Is anyone going to report port scans to the FBI? Failed SSH login attempts? (Use a VPN/Tailscale and don’t expose SSH to the internet anyway.)
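For what it's worth, the technical bar here is almost nothing: a single-port "scan" is just a TCP connection attempt, and it's the lack of authorization, not the mechanics, that makes it an offense. A minimal sketch in Python, using localhost as a stand-in for a machine you actually own:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Point this only at machines you own or are explicitly authorized to test.
print(port_open("127.0.0.1", 22))  # e.g. is SSH listening locally?
```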
I often wonder where “knowing” someone’s password and “hacking” their social accounts falls in this discussion. You see or hear about it all the time: “So-and-so hacked my page.” If you have someone's FB login info and they have no idea that you do, you may have permission to access FB, as everyone does if they accept the TOS, but you don't have the account owner's permission to access their account, and if FB knew it wasn't the account owner, they wouldn't allow it either. So if they don't allow that, you're likely violating their TOS and no longer allowed to access their systems, so maybe it could technically be prosecuted as illegal hacking, idk.
Ah yeah I guess it’s true they don’t have permission. At the end of the day I think it comes down to the owner choosing to press charges or not, or even detecting it and subsequently reporting it. I would guess that if the systems have ways to be hacked, the owners likely won’t see the hacks until the white/grey hat reports them.
Somewhat related, the hackers submitting a vulnerability disclosure to the companies are in a very “extortion-y” dynamic. I wonder how often companies get something like “pay us X amount or we let the world know today instead of waiting for you to fix it”.
Not really, because it depends on who the target is. If the greyhat for example maliciously targets a Mexican cartel or Iranian nuclear centrifuge, are they really the bad guy?
I’d argue that if trespassing and burglary weren’t treated as crimes but rather as natural occurrences we’d have a much more robust home security culture today. It would also be a lot more fun to be a trespasser.
Hacking requires software or hardware vulnerabilities; without them, it’s possible to have a completely invulnerable computer system. Users are frequently blamed, but the dumbest user can’t leak credentials that don’t exist.
The odds of "nobody ever accidentally writing a security bug" are astronomically low.
Were hacking legal, the risk/reward calculation would still encourage huge amounts of effort spent on hacking. No matter how robust your programming security culture, mistakes will happen. And the people who exploit them could see HUGE gains, all while risking nothing more than having wasted their time in failure.
Do consider "we're hacking legal" or "hacking the legal system", which is widely practiced by corporatae everywhere. Why should it be OK to hack the legal rules set in place by regulators and not the business rules set in place by a business or individual?
In both cases it typically involves finding loopholes that let you get something you normally wouldn't, while complying with the de facto implementation of the rules (e.g. rooting an Android phone you bought, or requesting data from a publicly readable but probably misconfigured S3 bucket).
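On the S3 example: a "publicly readable" bucket is just one whose policy/ACL answers an ordinary unauthenticated HTTP request, which is what makes the "complying with the de facto rules" framing apt. A rough sketch (the bucket name and object key below are made up for illustration):

```python
import urllib.error
import urllib.request

# Hypothetical bucket and object key, purely for illustration.
url = "https://example-bucket.s3.amazonaws.com/records.json"

try:
    with urllib.request.urlopen(url, timeout=10) as resp:
        # A 200 means the bucket's policy/ACL permits anonymous reads.
        print("publicly readable:", resp.status, resp.read(200))
except urllib.error.HTTPError as e:
    # A 403 means the owner actually configured it as private.
    print("not publicly readable:", e.code)
```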
I didn't and wouldn't go that far. Logging in to a remote server with default credentials and wiping remote data after exfiltration would be a closer example to that than the ones I gave, and to answer the rhetorical question: no.
There's a big difference when you start talking about destruction of data for others, faking credentials in interactions with a third-party, and/or knowingly causing remote unavailability.
> There is this insane notion that if you can accomplish something in cyber space, then you are allowed to. That's not how society works.
What my previous comment is trying to challenge is the rules-for-thee-and-not-for-me dogma, under which publicly exposing other people's private records in an unprotected S3 bucket is an oopsie-daisy while an individual making a request for that data risks ending up in jail.
> Gov. Parson's office continues to insist that the journalist committed a crime. "The hacking of Missouri teachers' personally identifiable information is a clear violation of Section 569.095, which the state takes seriously."
> "It is unlawful to access encoded data and systems in order to examine other people's personal information, and we are coordinating state resources to respond and utilize all legal methods available," Parson said in October.
The possibility of creating a given security bug depends on a host of things such as network connectivity, system architecture, programming language, etc. Those decisions were made early, so in such a world we wouldn't see anything like the current internet, operating systems, or even hardware. Granted, computer systems would presumably suck in such a world, but that's a different question.
It’s likely the risk/reward would actually be worse in such a world, as fewer things would be possible to exploit for meaningful gain.
Would we have a more robust security culture, or would we all be forced to pay with cash everywhere, not use IoT, and mail paperwork in the actual mail?
People might not want to use something that disqualified them from legal protection if they used it. It would make tech look unsafe if we just straight up said "This will be hacked and we're not even going to do anything about it".
Early 2000s culture was amazing, but I'm not sure I'd want to go back to using cash and not having Tile trackers.
Maybe it would work if there were extremely tight limits on what you could do: you could break in, but not alter anything for any reason.
But then again, maybe it would have the opposite effect, like in the 90s when we still tried to move to doing as much as possible digitally even though it was almost never secure. Maybe people would just accept it.