
Unfortunately, P2P computing is badly hindered by the copyright industry. The research is still active, and we have a lot of ideas for distributed computing and P2P beyond file exchange. A lot of it is used today to distribute what is essentially mainframe infrastructure instead of creating a truly distributed network.



What happened to peer-to-peer as a technological concept? Actually, we still use a lot of that technology.

There are two reasons why P2P has been generally less successful than a lot of us hoped around the turn of the century. The first is largely technical: the small number of people with plenty of uplink bandwidth, and something like municipal fiber would actually help a lot there. When people have 1,000/25 Mbps connections they have less capacity to share in general, and with things like video chat and gaming being common, the amount of idle capacity consistently available is relatively limited.
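
A rough back-of-envelope sketch in Python of why that asymmetry bites; every number here is an illustrative assumption, not a measurement:

    # How much can a swarm of peers on asymmetric links actually serve?
    PEERS = 1_000            # peers online in the swarm (assumed)
    DOWN_MBPS = 1_000        # downstream per peer (the "1,000/25" link)
    UP_MBPS = 25             # upstream per peer
    IDLE_FRACTION = 0.5      # guess: half the uplink is free after video
                             # chat, gaming, cloud backups, etc.

    total_upload = PEERS * UP_MBPS * IDLE_FRACTION
    per_peer = total_upload / PEERS   # what each downloader can expect

    print(f"aggregate upload: {total_upload:,.0f} Mbps")
    print(f"sustainable download per peer: {per_peer:.1f} Mbps "
          f"({per_peer / DOWN_MBPS:.2%} of the downstream pipe)")

Every peer can pull a gigabit, but under these assumptions the swarm as a whole can only feed each of them about 12.5 Mbps; the uplink is the hard ceiling no matter how clever the protocol is.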

The other is harder: some ISPs outright blocked or throttled P2P protocols (much to the annoyance of, say, Linux users torrenting ISOs), and most others have some mechanism for responding to copyright claims. The latter is really hard for a YouTube competitor that allows anyone to upload content: if the network attempts to auto-mirror content, you are potentially at risk if someone uploads something illegal; if it doesn't, only the most popular public content will be well-replicated.


Unfortunately, in that regard there is little to no choice. This is why I continue to advocate building a truly peer-to-peer network, with all services relying on that infrastructure instead of on centralized systems in the hands of a few companies. Everything running and stored on that P2P network. It's coming, but it's taking longer than I'd hoped.

What's your take on how everything works these days?

Do you miss p2p?

Do you think we could ever get back to it?


P2P is still pretty much alive, at least in the form of BitTorrent and ED2K.

I'd be interested to hear other perspectives on this, but I think p2p computing in general got skipped over when the cloud became a thing. DHTs have had steady use in the p2p world though, including with BitTorrent.

DHTs are not without their problems. I know some other projects in our space decided it wasn't tenable to solve the Sybil attack and so they created a peer-routing system that bootstraps off of a blockchain. As brad0 pointed out, there have also been performance issues in some deployments. We're going to iterate on the security and performance and see how it goes.
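
For anyone unfamiliar, the lookup primitive at the heart of a Kademlia-style DHT (the kind BitTorrent uses) is simple enough to sketch in a few lines of Python. This is a toy illustration with made-up peer names, not anyone's real implementation:

    import hashlib

    def node_id(name: str) -> int:
        # Derive a 160-bit ID from a string, as Kademlia does with SHA-1;
        # "name" is just a stand-in for a node address.
        return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

    def xor_distance(a: int, b: int) -> int:
        # Kademlia's distance metric: bitwise XOR of the two IDs.
        return a ^ b

    # Toy routing table; a real node knows O(log n) peers, not all of them.
    peers = {name: node_id(name) for name in ["alice", "bob", "carol", "dave"]}

    def closest_peers(key: str, k: int = 2) -> list[str]:
        # The k known peers XOR-closest to the key. Re-issuing the query
        # to those peers converges on the responsible node in O(log n) hops.
        target = node_id(key)
        return sorted(peers, key=lambda p: xor_distance(peers[p], target))[:k]

    print(closest_peers("some-infohash"))

The Sybil problem mentioned above is exactly that nothing stops an attacker from generating millions of IDs clustered around a key they want to control; the metric is sound, admission is the hard part.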


Distributed p2p is a thing.

Yeah, it must be P2P if it's grid computing.

Yeah, the more I look at P2P applications the more I realize our infrastructure is not set up to support them without some serious work on the application side.

"What about distributed systems without central control?"

The copyright industry and their bought-and-paid-for politicians have repeatedly demonstrated that they have no mental model for such forms of distribution. Have you forgotten the panic that ensued fifteen years ago, when the music industry was suing middle schoolers?

(The irony is that the very same platforms these industry lobbyists are whining about only became popular because they killed P2P via the courts.)


Personally, I am a fan of p2p applications and doing things locally. Unfortunately, there are a lot of unsolved issues, especially when it comes to data storage, like you mentioned.

It did not work terribly well for other networks like Gnutella; they just got flooded with spam and malware. Good engineers and researchers never had the chance to develop solutions to that problem, because the copyright industry delegitimized peer-to-peer and pushed everyone to more centralized systems (by abusing the court system).

Well, there's nothing properly developed; that's why we are still here. The basic building blocks are all there: cryptography for privacy and identity. P2P networks for data transfer have worked in the piracy world for many years too.

What is missing is putting it all together and being able to replicate the network effects you get from centralized media: if you are able to reach one person, you can also reach all of their friends in a simple way (supposing they want to be reached).
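
That reachability part is just a graph traversal once discovery works. A toy sketch in Python, with a made-up friend graph and an explicit opt-in set, to pin down what "supposing they want to be reached" means mechanically:

    from collections import deque

    # Hypothetical friend graph; the names are purely illustrative.
    friends = {
        "alice": ["bob", "carol"],
        "bob":   ["alice", "dave"],
        "carol": ["alice"],
        "dave":  ["bob"],
    }
    opted_in = {"alice", "bob", "dave"}   # carol declines discovery

    def reachable(start: str) -> set[str]:
        # Breadth-first walk that only crosses edges into opted-in peers.
        seen, queue = {start}, deque([start])
        while queue:
            for friend in friends[queue.popleft()]:
                if friend in opted_in and friend not in seen:
                    seen.add(friend)
                    queue.append(friend)
        return seen - {start}

    print(reachable("carol"))   # {'alice', 'bob', 'dave'}

The hard part in a real P2P network isn't the traversal; it's making "friends" and "opted_in" queryable without a central directory.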


I think the bigger issue with P2P nowadays is the move to portable devices. The majority of people are on phones, if not laptops and tablets. P2P has the user giving up their storage, battery life, and data. It works fine in more niche applications, but it'll never be mainstream.

It's probably not even possible on iOS devices because of the limitations in background processes.


No P2P storage networks have succeeded? What about torrents? It is a huge market.

Sure, and that is the real battle: who can get an external IP!

Yep, I'm OK with the P2P stuff being over UDP, but if you have ever made UDP hole-punching work, you will realize it's not necessarily the most straightforward solution.
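
For the curious, the core of the trick fits in a page of Python. This is a bare-bones sketch, assuming the two peers have already exchanged their public (ip, port) endpoints through some rendezvous server; the address and port below are placeholders, and real code needs proper retries plus a relay fallback for symmetric NATs, where punching simply fails:

    import socket
    import time

    LOCAL_PORT = 40000                   # must match what the rendezvous saw
    PEER_ADDR = ("203.0.113.7", 40001)   # peer's public endpoint (example)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", LOCAL_PORT))
    sock.settimeout(1.0)

    for attempt in range(10):
        # Outbound packets open/refresh the mapping in our own NAT...
        sock.sendto(b"punch", PEER_ADDR)
        try:
            # ...and once the peer's packets traverse it, we're connected.
            data, addr = sock.recvfrom(1024)
            print(f"hole punched: got {data!r} from {addr}")
            break
        except socket.timeout:
            time.sleep(0.5)
    else:
        print("no luck; likely a symmetric NAT, fall back to a relay")

Both sides run this simultaneously; whichever packet arrives after both NATs have mappings gets through. The fiddly parts (port prediction, keepalives, the rendezvous itself) are exactly where it stops being straightforward.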

P2P does not scale for all problems, while distributed client/server does!

The bottleneck was never network bandwidth (binary vs. text); it has been atomic parallelism, ever since multi-core processors' concurrent RAM bandwidth peaked around 2014, after DDR4 increased latency.

Copying files is not a problem because you can always make files smaller by reducing quality!


As mentioned in my previous comment, there are a lot of things that would have to be solved and deployed in order for P2P to be 100% feasible. I didn't expect to receive a list of things to be solved right now!

But you do bring up good points, as the current infrastructure (everywhere) is not set up for P2P. In most modern countries (the US aside), ISP networks are actually pretty good and cheap, and work fine for P2P. Otherwise there are other ways of distributing as well; mesh networks are one.

All these questions you are outlining are definitely solvable though, just as similar questions arose when we built our current centralized infrastructure. The problem is that P2P networks are nowhere near as well funded as centralized infrastructure, leading to fewer people actually working on solving these problems.


Many of the problems you talk about are off topic.

---

The trust problem still applies to HTTP. We still download viruses on the web, and we still lose our credit card numbers to con artists. The only reason there's more malware on current P2P networks is that those networks are disproportionately used to infringe copyright. The risk doesn't come from the distributed nature of the network, but from the illegality of the content.

Securing peer-to-peer communications to current web levels is trivially easy: just sign the damn data, and have a certificate authority ascertain the identity of the signer. For static content such as YouTube videos, you can also use a content-addressable system.

While that would require some level of centralisation, it wouldn't exceed that of DNS, and it would definitely solve the bandwidth issue.
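
Both mechanisms fit in a few lines of Python, as a sketch of the idea rather than any particular network's implementation (this assumes the third-party cryptography package for Ed25519):

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    # Publisher identity: sign the data, verify against a known key.
    # A CA (or web of trust) would vouch that this public key really
    # belongs to the named publisher.
    publisher_key = Ed25519PrivateKey.generate()
    video = b"...some static content, e.g. a video chunk..."
    signature = publisher_key.sign(video)

    try:
        publisher_key.public_key().verify(signature, video)
        print("signature OK: any untrusted peer may mirror this")
    except InvalidSignature:
        print("rejected: content was altered in transit")

    # Content addressing: the name *is* the hash of the bytes, so the
    # fetcher re-hashes what it received and an untrusted peer cannot
    # substitute different data under the same name.
    address = hashlib.sha256(video).hexdigest()
    print(f"content address: sha256:{address}")

The nice property: once either check passes, it no longer matters which peer served the bytes, which is exactly what makes wide mirroring safe.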

> What happens if you need more bandwidth than your usernet can give?

It won't happen, because we enjoy symmetric bandwidth, thanks to our regulators being sensible, competent people. (At least that's the case in my rainbows & unicorns world.) Seriously, though, symmetric bandwidth is the ultimate and only solution to many of these problems: it ensures total upload can keep up with total download, so we get the equilibrium we want.


Yes, but decentralization is a difficult problem. I think eventually it will become like email, where everyone just uses Gmail.

NAT, Firewall, etc all make it very difficult to do true P2P as well.

