AT&T did this for U-verse: it's unicast for 5 seconds, then it switches over to multicast. I assume there's some keyframe and H.26x codec configuration that's necessary.
Unfortunately multicast doesn’t work across the internet. It’s typically filtered or ignored I think.
BT in the UK do this. It's awesome, but doesn't fix encoding delays like iPlayer is suffering from. Plus, it's over their Network which is a subset of the UK public internet infrastructure.
I had an old blog page about how it was designed but I took it down. May be worth revisiting it. That said the general public prefer unicast so they can watch it when they want, which renders multicast a moot point.
It'd be great if it could be done using IP multicast! Sadly, multicast isn't an option over the Internet to consumer devices. It sometimes is within a particular ISP for that ISP's own services, but from an arbitrary broadcaster to a receiver it's all unicast (and often pull-based, using HLS or DASH).
Yes, the inner workings of these setups are a real mystery if you don't happen to work in that field. The unicast stream really did start almost instantaneously when switching channels (sub-second), so they got that well optimized, but multicast always took 2 to 3 seconds...
A good portion of the bandwidth being used on the Internet could be done over multicast if it were widely supported.
For example, popular content on Hulu could be multicast. You'd download the beginning of the video over HTTP but simultaneously tune into the multicast that started the most recently. Once the two streams meet you drop the HTTP connection. If widely deployed this would reduce costs for both Hulu and ISPs, and for places where multicast doesn't work it can fall back to HTTP.
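That start-up splice could be sketched roughly like this (pure simulation: segment lists stand in for the real HTTP and multicast transports, and all names are invented for illustration):

```python
def hybrid_startup(total_segments, multicast_start):
    """Simulate the hybrid start described above: fetch from HTTP until
    playback reaches the point where the most recent multicast cycle
    began, then drop HTTP and ride the shared multicast from there on."""
    plan = []
    for i in range(total_segments):
        source = "http" if i < multicast_start else "multicast"
        plan.append((i, source))
    return plan

# Viewer tunes in when the freshest multicast carousel started at segment 3:
# segments 0-2 arrive over HTTP, the rest over the shared multicast stream.
plan = hybrid_startup(6, 3)
```

The key property is that the per-viewer (unicast) portion is bounded by the stagger interval between multicast cycles, no matter how long the video is.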
That's "just" a problem with unicast though. Lots of different media houses are working on getting multicast support over the internet, by which point IP streams will behave a lot more like OTA's one-to-many relationship.
I couldn't say how close they are to getting that working - nor if it even is possible - because I don't work on that specific side of things. But I do know there used to be multicast networks over the internet back in the 90s (IIRC David Bowie even streamed one of his concerts over it) so it strikes me as a solvable problem.
Interesting that both of those mediums are broadcast/multicast only, not unicast.
All of these video streams (with maybe the exception of CNN's P2P plugin) are unicast, TCP/IP data streams. To really survive a crisis, ISPs need to implement real multicast data streams correctly so that one server can broadcast the video feed without requiring all of the clients to have their own individual copy of the feed going out.
Edit: Sorry, looks like this is already being discussed down thread.
Multicast would save exactly zero Hz of spectrum bandwidth, which a moment's thought would reveal to you. Allow me to encourage a habit of that.
Multicast routing uses, exclusively, point-to-point links. So, a packet could go up to the satellite once, but would need to be copied down to every terminal individually.
But internet video is in any case not distributed by multicast UDP. Even if it were, you still would need caching for viewers not watching identically the same frames at the same time, because a multicast router does not do time-shifting: all copies of each packet are queued to send immediately.
The technical explanation of multicast is quite good. But the reason it isn't available on the internet has nothing to do with the technical hurdles.
It isn't used because of the economics behind the ISP business.
With multicast, a single sender can have arbitrarily many receivers but sends its data only once. The network infrastructure then 'clones' that data on its way to the receivers as necessary. But that's not in line with the economic interests of an ISP.
With unicast, the sender has to use increasingly more bandwidth to reach more receivers, and the ISP gets paid for that additional bandwidth. The more bandwidth the sender uses, the more money the ISP makes. With multicast, on the other hand, the sender needs to send everything only once, no matter how many receivers are listening.
Imagine you could send an audio or video stream to potentially everyone with internet access in the whole world but would need to pay only for the bandwidth of exactly one stream. That would be very nice for you, or Spotify, or Netflix, but not nearly as good a deal for your ISP as the current arrangement.
That's why ISPs don't sell multicast connectivity. Technically it would be easy: the current network infrastructure would be able to handle multicast (almost) without any additional effort on the carrier side. After all, the technology has been built into almost every switch and router for years now. Live streaming of AV media would be possible for everyone with internet access; one would not need the bandwidth of, say, YouTube to reach as many receivers as they do. But that will never happen, because ISPs just aren't interested in providing multicast connectivity!
No need to reinvent video encoding. At least my local provider seems to fix this by streaming all the channels as multicast continuously, and having the TV box request a small burst of video over normal TCP so the channel switch happens immediately, only later syncing to the multicast. That allows you to change channels quickly at any time and start watching from whatever the latest I-frame was.
I notice this happening when IGMP forwarding is broken in my router: channels will only play for a second or two after being switched to, and then stop. Switch times are pretty good, though.
In the early internet, where every device had a public IP and the only firewall was the one you should've set up yourself, multicast penetrated through all networks, and a single packet stream could be subscribed to from anywhere, replicated across the internet.
These days, only IPv6 capable networks (so half of the web or so) satisfy the necessary requirements for such a system and internet multicast has wisely been turned off for the enormous DDoS/bandwidth waste it implies.
This mechanism is still used on some TV networks, though, especially digital ones that come over fiber. There is a single stream of packets generated to send to all subscribers that the subscriber devices can then subscribe to with the proper network config. This is often accomplished through IGMP and other such multicast protocols.
As for sending a single broadcast stream out, that's exactly what online streaming services do. A 30 Mbps Twitch stream with a million viewers doesn't require you to get a data center's worth of internet capacity at home; instead, you upload a single stream to your favourite service and that service replicates the stream for you. If you want to stream from home but only have cable or DSL upload speeds, you can set up such a system yourself with a cloud server that has good internet, using something as simple as nginx with RTMP enabled.
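A minimal sketch of that nginx setup, assuming the third-party nginx-rtmp-module is compiled in (the application name `live` is an arbitrary example):

```nginx
rtmp {
    server {
        listen 1935;        # default RTMP port
        application live {
            live on;        # accept streams pushed to rtmp://<server>/live/<key>
        }
    }
}
```

You push one stream from home (e.g. from OBS) to the cloud server, and every viewer pulls their copy from the server's fat pipe instead of your DSL uplink.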
But how many people really stream the same content at the exact same time anymore these days?
Large sports events are probably the only remaining application where broadcast would still see much advantage over individual streams.
In the case of a few simultaneous viewers, unicast can sometimes still use bandwidth more efficiently (since the transmitter precisely knows each receiver's SNR and doesn't need to waste any energy transmitting at the wrong coding rate or in the wrong direction/place). That's how modern 802.11 (Wi-Fi) often does reliable multicast these days.
A seamless transition from unicast to multicast once the critical ratio of listeners per base station is reached would be very cool, but require pretty deep levels of integration between broadcasters and ISPs that are probably not worth it.
It's a shame multicast isn't universally supported on the Internet. It'd make it possible to broadcast a live video to millions or billions of people straight from a cell phone.
Sure, but it doesn't have to be that way. Something built around multicast groups could be used to stream multiple people the same content in a drastically more efficient manner, and then it could be stored locally on clients for time-shifting purposes.
If you're trying to optimize for efficient use of limited bandwidth, unicast transferring of identical content to many many people is pretty wasteful. I think Netflix would argue that network links should be getting bigger and fatter to render the point moot, but given the streams are also getting bigger (we didn't always stream 720p everywhere, did we?) that would certainly take a lot more investment than what's happening now.
Multicast was designed exactly for this - same data streamed to many endpoints at the same time. Too bad it's not being more widely used, the bandwidth savings would likely be huge.
There actually is a multicast standard on the internet. It uses UDP. Basically, instead of sending out multiple streams of packets for each viewer you send only one stream and the routers copy the packets into new streams as needed along the way.
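That group join/send is exposed directly through plain UDP sockets. A hedged sketch (the group address 239.1.2.3 and port are arbitrary examples; pinning both ends to 127.0.0.1 keeps the demo on the local machine and should work on a typical Linux host):

```python
import socket
import struct

GROUP, PORT = "239.1.2.3", 5004  # arbitrary example group and port

# Receiver: bind the port, then join the group; on a real network the OS
# announces the membership to the local router via IGMP.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("127.0.0.1"))
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
rx.settimeout(2)

# Sender: one sendto() addressed to the group; the network (or here, the
# local host) hands a copy to every member, however many there are.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
              socket.inet_aton("127.0.0.1"))
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # stay on-link
tx.sendto(b"one packet, many receivers", (GROUP, PORT))

data, sender = rx.recvfrom(2048)
```

The sender never learns who the receivers are; the fan-out happens in the network, which is exactly the property that makes one stream serve any number of viewers.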
Amazing. Are there any optimizations for video conferencing in case one participant doesn't have upload bandwidth to stream his video to 10 different IP addresses at once?
I was toying with the thought of something like an overlay multicast network on top of unicast IP, since RFC 1770 was deprecated/never implemented.
Does multicast see any actual use? I was under the impression that while it's theoretically great for a live video stream, nobody ever actually uses it.