News from Industry

Four years of WebRTC Insights

bloggeek - Tue, 11/26/2024 - 12:30

Stay informed about the latest trends and insights in WebRTC technology. Our unique service offers expert analysis and valuable information for anyone developing with WebRTC.

We are now into our 5th year of WebRTC Insights, the premium biweekly newsletter I have been writing together with Philipp Hancke. Every two weeks, we send it out to our subscribers, covering everything and anything that WebRTC developers need to be aware of, guiding them toward the things that matter to them. We include bug reports, upcoming features, Chrome experiments, security issues and market trends.

The purpose of it all? Letting developers and decision makers who develop with WebRTC focus on their own application, leaving a lot of the issues that might surprise them to us. We give them the insights they need before they get complaints from customers or get surprised by their competitors.

Each year Philipp asks me if this might be our last one, because, well, let’s face it – there are times when the newsletter is “only” 7 or 8 pages long without a lot of issues. The thing is, whatever is in there is important to someone. I myself took note of something Philipp pointed out in issue #102, making sure to integrate it into our testRTC products.

Why is WebRTC Insights so valuable to our clients?

It comes down to two key benefits:

  1. Time
  2. Focus

We help engineers and product teams save time by quickly identifying WebRTC issues and market trends. Instead of spending hours searching the internet for clues or trying to piece together fragmented information, we deliver everything they need directly – often several days before their clients or management bring up the issue.

Beyond saving time, we help clients stay focused on what matters most. Whether it’s revisiting past issues, tracking security concerns, understanding Google’s ongoing experiments, or staying updated on areas where Google is investing, we make it easy for them to stay informed.

If I weren’t so humble, I’d say that for those truly dedicated to mastering WebRTC, we’re a force multiplier for their expertise.

WebRTC Insights by the numbers

Since this is the fourth year, you can also check out our past “year in review” posts:

This is what we’ve done in these 4 years:

26 Insights issues this year, covering 250 issues & bugs, 141 PSAs, 13 security vulnerabilities and 312 market insights, all totaling 235 pages. We’re keeping ourselves busy making sure you can focus on your stuff.

We have covered well over a thousand issues and written close to 1,000 pages so far.

2024…

In the past year, we’ve seen quite a steep decline in the number of issues and bugs that were filed and that we talked about. From our peak of ~450 a year in 2022, to ~320 in 2023 and now 250 in 2024:

Year       | Issues we reported on | Issues filed (libWebRTC / Chrome)
2020-2021  | 331                   | 658 / 579
2021-2022  | 447                   | 549 / 639
2022-2023  | 329                   | 515 / 557
2023-2024  | 250                   | 361 / 420

This correlates with the overall decline in the activity around libWebRTC which has dropped below 200 commits per month in the last year:

This is more visible by looking at the last three years:

The Google team working on WebRTC is now just keeping the lights on. While commit numbers stayed roughly the same, external contributions now make up approximately 30% of the total commits. There’s little in the way of innovation and creativity. Most of the work is now technocratic maintenance, to put it bluntly…

The reality is that libWebRTC is mature and good enough. It is embedded inside Chrome, with over a billion installations, and any change in it has wide-ranging effects on many applications and users. In the language of Werner Vogels, the CTO of AWS, the blast radius of a bug in libWebRTC can be rather big and impactful.

Let’s dive into the categories of our WebRTC Insights service, to figure out what we’ve had in our 4th year.

Bugs

In this section we track new issues filed and progress (in the form of code changes) for both libWebRTC and Chromium. We categorize the issues into regressions, for which developers need to take action, and insights and features, which inform developers about new capabilities or changes to existing behavior. We also assign a category such as “audio”, “video” or “bandwidth estimation” to make it easy for more specialized developers to only read about the issues affecting their area.

A good example from this year was a series of regressions in the handling of H.264:

In a nutshell, relatively harmless and very reasonable changes to the way libWebRTC deals with H.264 packetization caused interop issues for services that use H.264 and rely on some of its more exotic features. And those changes made it all the way to Chrome Stable, which suggests a lack of testing in the Beta and Canary versions.

We also track progress on feature work such as “corruption detection” and speculate on why Google is embarking on such projects:

Google migrating both Chromium and WebRTC from the Monorail issue tracker system to the more modern Buganizer caused us a little bit of a headache here.

PSAs & resources worth reading

In this section we track “public service announcements” on the discuss-webrtc mailing list, webrtc-related threads on the blink/chromium mailing list, W3C activity (where we often shake our heads) and highly technical blog posts which do not fit into the “market” category.

A good example of this is Google experimenting with a new way to put the device permissions into the page content, which we noted in May, followed by seeing how Google Meet put this into action in November. The process for this is “open”, but as a developer you need to be aware of what is possible and what Google is experimenting with in order to keep up.

We also used to track libWebRTC release notes in this section but stopped sending those earlier this year when the migration from Monorail to Buganizer broke the tooling we had. Not many folks missed them so far.

Experiments in WebRTC

Chrome’s field trials for WebRTC are a good indicator of what large changes are rolling out which either carry some risk of subtle breaks or need A/B experimentation. Sometimes, those trials may explain behavior that only reproduces on some machines but not on others. We track the information from the chrome://version page over time, which gives us a pretty good picture of what is going on. Most recently we used it to track how Google is experimenting with a change in getUserMedia that alters how the “ideal” deviceId constraint behaves:
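
For context, here is a minimal sketch of the constraint in question – a getUserMedia call asking for a specific camera via an “ideal” deviceId (the device id value below is a placeholder). Roughly speaking, the experiment concerns how the browser treats this non-mandatory hint:

```typescript
// A minimal sketch of the constraint being experimented with. "ideal" is a
// hint rather than a hard requirement, so the browser may fall back to a
// different camera if this one is unavailable. The deviceId is a placeholder.
async function openPreferredCamera(): Promise<MediaStream> {
  return navigator.mediaDevices.getUserMedia({
    video: { deviceId: { ideal: "placeholder-device-id" } },
    audio: true,
  });
}
```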

See this issue for more information about the change. We also waved goodbye to the longest-lasting field trial, which had been with us the entire four years, enabled at 100% and causing different behavior in Chrome versus Chromium-based browsers that don’t use Google’s field trial configuration, such as Microsoft Edge:

WebRTC-VP8ConferenceTemporalLayers

It was removed (without the default value changing) in this commit. Which is great because it had side-effects on other codecs like H.264.

WebRTC security alerts

We continued tracking WebRTC-related security issues announced in the Chrome release blog. We had eight of them this year, all but one related to how Chromium manages the underlying WebRTC objects. The exception was a vulnerability in the dav1d decoder (as we predicted last year, codec implementations will get some more eyes on them).

WebRTC market guidance

What is happening in the world of WebRTC? Who is doing what? Why? When? Where?

We’re looking at the leading vendors, but also at the small startups.

There are probably 3 main areas we cover here:

  1. CPaaS and Programmable Video – what are these vendors up to? How are they differentiating from each other? What do they dabble with when it comes to WebRTC?
  2. Video conferencing and collaboration – what are the leaders here doing? Especially Google Meet, and how it affects WebRTC itself. But also how others are leveraging WebRTC in meaningful ways
  3. LLM and Generative AI – this popped up in 2024 and will likely grow strong into 2025. Understanding where the challenges are and what solutions vendors come up with in this area, especially in the low latency space where WebRTC is needed

From time to time, you’ll see us looking at call centers, security and privacy, governance, open source, etc. All viewed through the prism of WebRTC developers, and with an attempt to find an insight – something actionable for you to do with that information.

The purpose of it all? For you to understand the moves in the market as well as the best practices that are being defined. Things you can use to think over your own strategy and tactics. Ways for you to leave your company’s echo chamber for a bit. All with the purpose of improving your product at the end of the day.

With the WebRTC market continuing to mature, the market insights section is growing as well. We expect this to happen again in the coming year.

Join the WebRTC experts

We are now headed into our fifth year of WebRTC Insights.

On one hand, there are fewer technical issues you will bump into. But those that you do are going to be more important than ever. Why? Because the market is maturing and competition is growing.

So if you’re working with WebRTC and not subscribed to the WebRTC Insights yet – you need to ask yourself why that is. And if you might be interested, then let me know – and I’ll share a sample issue of our insights with you, so you can see what you’ve been missing out on.

The post Four years of WebRTC Insights appeared first on BlogGeek.me.

Twilio Programmable Video is back from the dead

bloggeek - Mon, 11/04/2024 - 12:30

Twilio Programmable Video is back. Twilio decided not to sunset this service. Here’s where their new focus lies and what it means to you and to the industry.

A year ago, Twilio announced sunsetting its Programmable Video service. Now, it is back from the dead, like a phoenix rising from the ashes. Or is it going to be more like a walking-dead zombie?

Here’s what I think happened and what it means – to CPaaS, Twilio and other vendors.

👉 Twilio being central to CPaaS means they have a dedicated page of their own on my site – you can check it out here: Twilio

Zig: Twilio Programmable Video sunset

Let’s first look at two important aspects of the decision of Twilio to sunset their Twilio Programmable Video service. I did a couple of video recordings converting some of the visuals from my Video API report and placed them on YouTube (you should subscribe to my channel if you haven’t already).

The first one? A look at Twilio’s video services.

The second one? A look at how the market is going to figure this one out:

  • Twilio customers will migrate to other Programmable Video solutions OR build their own
  • Companies looking for a solution will be more likely now to build on their own instead of using Programmable Video (for fear of that vendor sunsetting it like Twilio did)

All in all, not good for the market.

Twilio Customers in the past year

To be frank, this started before the EOL announcement. If you look at the commits done to the Twilio Video SDK you see this picture:

Half a year prior to the announcement, the SDK got no commits whatsoever. And then? The official EOL came.

This last year has been tough on Twilio’s customers who use Programmable Video.

They had to migrate away from Twilio, with the need to do it by the end of 2024.

The time wasn’t long enough for many of the customers, and they likely complained to Twilio. The EOL (End Of Life) date moved to 2026, giving two more years for these customers.

The development work needed to switch and migrate away from Twilio might not have been huge, but it was not scheduled and came in as a critical requirement. In some cases, the customers didn’t have the engineering team in place for it, because external outsourcing vendors and freelancers originally developed the integration. In other cases, the migration also required dealing with native mobile applications, which is always more expensive and time consuming.

In one case, a vendor complained to me that they couldn’t replace the code in the appliances they had deployed within a year even if they wanted to – they work in a regulated industry and environment with native mobile applications.

Twilio set their customers up for a royal mess and a real headache here.

Zag: Twilio Programmable Video back from the dead

Then came the zag. Twilio decided to revert its decision and keep Twilio Programmable Video going. Here’s the statement/announcement from Twilio’s blog.

Here’s how they start it off:

“Today, we’re excited to announce that Twilio Video will remain as a product that we are committed to investing in and growing to best meet the needs of our customers. […]

Twilio Video will not be discontinued, and instead, we are investing in its development moving forward to continue to enhance customer engagement by enabling businesses to embed Video calling into their unique customer experiences.”

In their “why the change” section of the post, Twilio is trying to build a case for video (again). In it, they are making an effort to explain that they aren’t going to sunset video in the future, which is an important signal to potential new customers as well as existing ones. Their explanation revolves around the customer engagement use cases – this is important.

The “what to expect moving forward” section is the interesting part. It is built out of 4 bullets. Here’s what I think about them:

  1. Focused Innovation. Based on the explanation, Twilio will invest in customer engagement use cases. These are mainly 1:1 video calls
  2. New features and enhancements. These are likely to be focused around Segment and integrating with that part of Twilio. At least based on what they write (the bold words are my interpretation): “We will specifically focus on making it easier to seamlessly integrate Video calling into your customer engagement experiences as well as extracting and leveraging data to optimize your experiences and deliver actionable business insights”
  3. Customer and product support. Another way of saying that this is a real product and not just an under maintenance service
  4. Training and resources. Same as the previous point

All in all, Twilio is planning on focusing predominantly on 1:1 customer engagement use cases and connecting them to Segment. At least that’s my reading of things.

Sunk costs or a hidden opportunity for customers

What about Twilio Programmable Video customers?

They had a year to plan and move away from the service to something else. Many of them have either finished their migration or are close to that point.

Should they now revert back to using Twilio? Stick with the competition?

Those who are in the middle of migration – should they stick to Twilio or keep investing resources in migrating away from Twilio?

These customers spent time and money on moving away. Should they view that as sunk costs or as an opportunity?

From discussions with a few Twilio customers, it seems that the answers are varied. In some cases, what they’ve done is built an abstraction running on top of two vendors – Twilio and the new vendor they’re migrating to. This way, they can keep Twilio as a backup as long as Twilio runs the service.
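
To illustrate the idea, here is a hedged sketch of such an abstraction – the interface and class names are hypothetical, and each implementation would wrap the respective vendor SDK calls, which are omitted here:

```typescript
// A hypothetical vendor abstraction layer. The interface and provider classes
// are illustrative only; each would wrap the respective vendor SDK (omitted)
// behind the same minimal surface.
interface VideoProvider {
  join(roomName: string, token: string): Promise<void>;
  leave(): Promise<void>;
}

class TwilioVideoProvider implements VideoProvider {
  async join(roomName: string, token: string): Promise<void> {
    // connect to a room via the Twilio Programmable Video SDK here
  }
  async leave(): Promise<void> {
    // disconnect from the Twilio room here
  }
}

class AlternativeVideoProvider implements VideoProvider {
  async join(roomName: string, token: string): Promise<void> {
    // connect via the new vendor's SDK here
  }
  async leave(): Promise<void> {
    // disconnect via the new vendor's SDK here
  }
}

// Pick the provider per session – via configuration or a feature flag –
// keeping Twilio as a backup for as long as it runs the service.
function createProvider(vendor: "twilio" | "alternative"): VideoProvider {
  return vendor === "twilio"
    ? new TwilioVideoProvider()
    : new AlternativeVideoProvider();
}
```

With such a layer in place, switching between the two vendors becomes a configuration change rather than yet another migration project.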

Now? They have the option to pick and choose which of the two alternatives to use.

This works well for services that do 1:1 meetings. Less so for group meetings.

In a way, Twilio reverting its decision adds another layer of headaches and decisions that customers now need to go through (again).

Twilio’s challenges ahead

This leads us to the challenges Twilio is about to face.

The 3 leading ones are:

  1. Dilution of customers’ trust. These back and forth decisions will have customers thinking of other alternatives before picking Twilio. Twilio will need to convince customers that they are the better choice. Going for data and Segment integration is likely the way for them to do it. By focusing on where they see their future and entwining video with it, they stand a chance of getting through to potential customers
  2. A wasted year. The market hasn’t been waiting for Twilio. Their competitors have continued investing in their Programmable Video offerings while Twilio’s stagnated since well before the EOL announcement. Twilio will somehow have to make up for this
  3. Decimated engineering team. When Twilio announced the EOL it also let go of employees, some of them from the Programmable Video team. The knowledge and know-how Twilio has today about its own product is likely lower than it was a year ago. It will take time to ramp it back up

All 3 are solvable, but will take time, attention and commitment on behalf of Twilio.

Zoom: The biggest winner of all

The big winner this past year? Zoom.

Zoom had an SDK and a Programmable Video offering, but it was known and popular for its UCaaS service. Twilio sunsetting Programmable Video while at the same time pointing and sending customers to Zoom was a third-party endorsement of Zoom’s quality in this space – one that Zoom enjoyed.

This cannot be taken back now. It rocketed the Zoom Video SDK into the shortlist of alternatives that potential buyers now need to review – and explain why they shouldn’t be trialing it.

All in all, a good thing for Zoom.

This change of heart by Twilio? Not going to affect Zoom.

What should you do

If you are already using Twilio and were migrating away –

  • If you finished the migration and you are happy with the new vendor or your own infrastructure – then stay with the new solution
  • If you still haven’t finished, then it depends
    • If your service is 1:1 in nature, consider sticking with Twilio
    • If your service is group meetings, I’d continue investing in an alternative solution. Twilio is unlikely to invest much there in the coming year or two
  • If you haven’t started yet, then check Twilio as an alternative if your meetings are 1:1 and you’re focused on customer engagement. Go elsewhere otherwise

There’s also always my Video API report to help you out (contact me for a discount on it or if you want some more specific consultation).

The post Twilio Programmable Video is back from the dead appeared first on BlogGeek.me.

Best practices for WebRTC POC/Demo development

bloggeek - Mon, 10/21/2024 - 12:30

Struggling with WebRTC POC or demo development? Follow these best practices to save time and increase the success of your project.

I get approached by a lot of startups and developers who start on the path to building WebRTC applications. Oftentimes, they reach out to me when they can’t get their POC (Proof of Concept) or demo to work properly.

For those who don’t want to go through paid consulting, here are some best practices that can save you time and can considerably increase the success rate of your project.

Media servers and WebRTC. What can possibly go wrong?

I don’t want to delve too much into peer-to-peer solutions here. These require no media server and are therefore “easier” to develop into a nice demo. The services that use media servers are often the beefier ones, and they are also the ones that fall into many challenging traps during POC development.

Media requires the use of ephemeral ports that get allocated dynamically. It needs to negotiate connections. There are more moving parts that can break and fail on you.

All of the following sections here include best practices that you should read before going on to implement your WebRTC demo. Best to use them during your design and planning phases.

👉 An introduction to WebRTC media servers

Use CPaaS

Let’s start with the most important question of all. If you’ve decided to install and host media servers in AWS or other locations – are you sure this is an important part of your demo?

I’ll try to explain this question. A demo or a POC comes to prove a point. It can be something like “we want to validate the technical viability of the project” or “we wanted to have something up and running quickly to start getting real customers’ feedback”.

If what you want is to build an MVP (Minimal Viable Product) with the intent of attracting a few friendly customers, go to a VC for funding or just test the waters before plunging in, then be sure to do that using CPaaS or a Programmable Video solution. These are usually based on usage pricing so they won’t be expensive when you’re just starting out. But they will reduce a lot of the headaches in development and maintenance of the infrastructure – so they’re more than worth it.

Sometimes, what you will be after is a POC that seeks to answer the question “what does it mean to build this on our own”. Not only due to costs but mainly due to the uniqueness of the requirements desired – these may include the need to run in a closed network, connect to certain restricted components, etc. Here, having the POC not use CPaaS and rely on open source self hosted components will make perfect sense.

First have the “official” media server demo work

Decided not to use CPaaS? Picked a few open source media servers and components that you’ll be using?

Make sure to install, run and validate the demo application of that open source media server.

You should do this because:

  1. You need to know the open source component actually works as advertised
  2. It will get you acquainted with that component – its build system, documentation, etc
  3. Taking things one step at a time, which is discussed later on

Using a 3rd party? Install and run its demo first.

Don’t. Use. Docker

Docker is great. Especially in production. Well… that’s what I’ve been told by DevOps people. It makes deploying easier. It is great for continuous integration. It is fairy dust on the code developers write.

But for WebRTC media servers? It is hell on earth to get configured properly for the first time. Too many ports need to be opened all over the place. Some TCP. Lots of them UDP. And if you miss the configuration – the media won’t get connected. Or it will. Sometimes. Which is worse.

My suggestion? Leave all the DevOps fairy dust for production. For your POC and demo? Go with operating systems on virtual machines or on bare metal. This will save you a lot of headaches by making sure things fail less often due to ports not being opened properly in your Docker configuration(s).

You don’t have time to waste when you’re developing that WebRTC POC.

Don’t do native. Go web

Remember that suggestion about doing the full session for your demo so you know the infrastructure is built properly? If you need native applications on mobile devices – don’t.

The easiest way to develop a demo for WebRTC would be by using a web browser for the client side. I’d go further and say by using the Chrome web browser. Ignore Firefox and Safari for the initial POC. Skip mobile – assume it is a lot of work that won’t validate anything architecturally. At least not for the majority of application types.
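
If you want a feel for how little is needed to get started in the browser, here is a minimal “hello world” sketch (the element id is an assumption – adjust it to your page):

```typescript
// A minimal browser-side starting point: grab the camera and microphone and
// show the local preview in a <video id="preview"> element. A good first step
// before wiring up any signaling or media server.
async function showLocalPreview(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true,
  });
  const video = document.getElementById("preview") as HTMLVideoElement;
  video.srcObject = stream;
  video.muted = true; // avoid hearing your own microphone locally
  await video.play();
}

showLocalPreview().catch((err) => console.error("getUserMedia failed:", err));
```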

👉 Still need to go native and mobile? Here are your WebRTC mobile SDK alternatives

Use a 3rd party TURN service

Always always always configure TURN in your iceServers for the peer connections.

Your initial “hello world” moment is likely to take place on the local LAN or even on the same machine. But once you start placing the devices on different networks, things will start failing without TURN servers. To make sure you don’t get there, just have TURN configured.

And have it configured properly.

And don’t install and host your own TURN servers.

Just use a managed TURN service.

The ones I’d pick at this stage are either Twilio or Cloudflare. They are easy to start with.

You can always replace them with your own later without any vendor lock-in risk. But starting off with your own is too much work and hassle and will bring with it a slew of potential bugs and blockers that you just don’t need at this point in time.
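
For reference, this is roughly what that configuration looks like – a minimal sketch where the server URLs and credentials are placeholders; in practice you would fetch short-lived TURN credentials from your managed provider’s API:

```typescript
// A minimal sketch of a peer connection configured with both STUN and TURN.
// The server URLs and credentials below are placeholders, not real endpoints.
const config: RTCConfiguration = {
  iceServers: [
    { urls: "stun:stun.example.com:3478" },
    {
      urls: [
        "turn:turn.example.com:3478?transport=udp",
        "turn:turn.example.com:443?transport=tcp",
      ],
      username: "ephemeral-username",   // placeholder
      credential: "ephemeral-password", // placeholder
    },
  ],
};

const pc = new RTCPeerConnection(config);
```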

👉 More on NAT Traversal and TURN servers in WebRTC

Be very specific about your requirements (and “demo” them)

Don’t assume that connecting a single user to a meeting room in a demo application means you can connect 20 users into that meeting room.

Streaming a webcam to a viewer isn’t the same as streaming that same webcam to 100 viewers.

If you plan on doing a real proof of concept, be sure to define the exact media requirements you have and to implement them at the scale of a single session. Not doing so means you aren’t really validating anything in your architecture.

A 1:1 meeting uses a different architecture than a 4-way video meeting which in turn uses a different architecture than a 20-50 participants in a meeting, which is different once you think about 100 or 200 participants, which again looks different architecturally when you’re hitting 1,000-10,000 and then… you get the point on how to continue from here.

The same applies for things like using screen sharing, doing spatial audio, multiple video sharing, etc. Have all these as part of your POC. It can be clunky and kinda ugly, but it needs to be there. You must have an understanding of whether and how it works – and of the limits you are bound to hit with it.

For the larger and more complex applications, be sure you know all of the suggestions in this article before coming to read it. If you don’t, then you should beef up your understanding and experience with WebRTC infrastructure and architecture…

Got a POC? Build it to scale for that single session you’re aiming for. I won’t care if you can do 2 of these in parallel or a 1,000. That’s also important, but can wait for later stages.

👉 More on scaling WebRTC meeting sizes

One step at a time

Setting up a WebRTC POC is a daunting task. There are multiple moving parts in there, each with its own quirks. If one thing goes wrong, nothing works.

This is true for all development projects, but it is a lot more relevant and apparent in WebRTC development projects. When you start these exploration steps with putting up a POC or a demo, there is a lot to get done right. Configurations, ports, servers, clients, communication channels.

Taking multiple installation or configuration steps at once will likely end up with a failure due to a bug in one of these steps. Tracing back to figure out what was the change causing this failure will take quite some time, leading to delays and frustrations. Better to take one step at a time. Validating each time that the step taken worked as expected.

I learned that the hard way at the age of 22, while being the lead integrator of an important project the company I worked for had with Cisco and HP. I blamed a change that HP did for an issue we had with our VoIP implementation, which lost us a full week. It ended up being me… doing two steps instead of one. But that’s a story for another time.

Know your tooling

If you don’t know what webrtc-internals is and haven’t used dump-importer then you’re doing it wrong.

Not using these tools means that when things go wrong (and they will), you’re going to be totally blind as to why. These aren’t perfect tools, but they give you a lot of power and visibility that you wouldn’t have otherwise.

Here’s how you download a webrtc internals file:

You’ll need to do that if you want to view the results on fippo’s webrtc-dump-importer.

And if you’re serious about it, then you can read a bit about what the WebRTC statistics there really mean.
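
If you prefer pulling those numbers directly in code, here is a minimal sketch using getStats() – it logs the outgoing video bitrate of a peer connection every couple of seconds:

```typescript
// A minimal sketch of reading the same statistics webrtc-internals shows,
// straight from getStats(): log the outgoing video bitrate periodically.
function logOutgoingVideoBitrate(pc: RTCPeerConnection): void {
  let lastBytes = 0;
  let lastTimestamp = 0;

  setInterval(async () => {
    const stats = await pc.getStats();
    stats.forEach((report) => {
      if (report.type === "outbound-rtp" && report.kind === "video") {
        if (lastTimestamp > 0) {
          // bytes -> bits, divided by the elapsed milliseconds = kbps
          const kbps =
            (8 * (report.bytesSent - lastBytes)) /
            (report.timestamp - lastTimestamp);
          console.log(`outgoing video: ~${kbps.toFixed(0)} kbps`);
        }
        lastBytes = report.bytesSent;
        lastTimestamp = report.timestamp;
      }
    });
  }, 2000);
}
```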

Now if you’re going to do this properly and with a budget, I can suggest using testRTC for both testing and monitoring.

Know more about WebRTC

Everything above will get you started. You’ll be able to get to a workable POC or demo. Is that fit for production? What will be missing there? Is the architecture selected the one that will work for you? How do you scale this properly?

You can read about it online or even ask ChatGPT as you go along. The thing is that a shallow understanding of WebRTC isn’t advisable here. Which is a nice segue to say that you should look at our WebRTC courses if you want to dig deeper into WebRTC and become skilled with using it.

The post Best practices for WebRTC POC/Demo development appeared first on BlogGeek.me.

WebRTC video codec generations: Moving from VP8 and H.264 to VP9 and AV1

bloggeek - Mon, 09/30/2024 - 13:30

Explore the world of video codecs and their significance in WebRTC. Understand the advantages and trade-offs of switching between different codec generations.

Technology grinds forward with endless improvements. I remember when I first came to video conferencing, over 20 years ago, the video codecs used were H.261, H.263 and H.263+ with all of their glorious variants. H.264 was starting to be discussed and deployed here and there.

Today? H.264 and VP8 are everywhere. We bump into VP9 in WebRTC applications and we talk about AV1.

What does it mean exactly to move from one video codec generation to another? What do we gain? What do we lose? This is what I want to cover in this article.

The TL;DR version

Don’t have time for my ramblings? This short video should have you mostly covered:

👉 I started recording these videos a few months back. If you like them, then don’t forget to like them 😉

The TL;DR:

  • Each video codec generation compresses better, giving higher video quality for the same bitrate as the previous generation
  • But each new video codec generation requires more resources – CPU and memory – to get its job done
  • And there are nuances to it, which are not covered in the TL;DR – for those, you’ll have to read on if you’re interested

What is a video codec anyway?

A codec is a piece of software that compresses and decompresses data. A video codec consists of an encoder which compresses a raw video input and a decoder which decompresses the compressed bitstream of a video back to something that can be displayed.

👉 We are dealing here with lossy codecs. Codecs that don’t maintain all of the data, but rather lose some information, trying to preserve as much of the original as possible with as little stored data as possible

The way video codecs are defined is by their decoder:

Given a bitstream generated by a video encoder, the video codec specification indicates how to decompress that bitstream back into a viewable format.

What does that mean?

  • The decoder is mostly deterministic
    • Our implementation of the decoder will almost always be focused on performance
    • The faster it can decode with as few resources as possible (CPU and memory) the better
    • Remember this for the next section, when we’ll look at hardware acceleration
  • Encoders are just a set of tools for us to use
    • Encoders aren’t deterministic. They come in different shapes and sizes
    • They can cater for speed, for latency, for better quality, etc
    • A video codec specification indicates different types of tools that can be used to compress data
    • The encoder decides which tools to use at which point, ending up with an encoded bitstream
    • Different encoder implementations will have different resulting compression bitrate and quality
  • In WebRTC, we value low latency
    • Which means that our codecs are going to have to make decisions fast, sometimes sacrificing quality for speed
    • Usually making use of a lot of heuristics and assumptions while doing so
    • Some would say this is part math and part art

Hardware acceleration and video codecs

Video codecs require a lot of CPU and memory to operate. This means that in many cases, our preference would be to offload their job from the CPU to hardware acceleration. Most modern devices today have media acceleration components in the form of GPUs or other chipset components that are capable of bearing the brunt of this work. It is why mobile devices can shoot high quality videos with their internal camera for example.

Since video codecs are dictated by the specification of their decoder, defining and implementing hardware acceleration for video decoders is a lot easier than doing the same thing for video encoders. That’s because the decoders are deterministic.

For the video encoder, you need to start asking questions –

  • Which tools should the encoder use? (do we need things like SVC or temporal scalability, which only make sense for video conferencing)
  • At what speed/latency does it need to operate? (a video camera latency can be a lot higher than what we need for a video conference application for example)

This leads us to the fact that in many scenarios, hardware acceleration of video codecs isn’t suitable for WebRTC at all – the accelerators are added to devices so people can watch YouTube videos of cats or create their own TikTok videos. Both of these activities are asynchronous – we don’t care how long the process of encoding and decoding takes (we do, but not at the scale of milliseconds of latency).

Up until a few years ago, most hardware acceleration out there didn’t work well for WebRTC and video conferencing applications. This started to change with the Covid pandemic, which caused a shift in priorities. Remote work and remote collaboration scenarios climbed the priorities list for device manufacturers and their hardware acceleration components.

Where does that leave us?

  • Hard to say
  • Sometimes, hardware acceleration won’t be available
  • Other times, hardware acceleration will be available for decoding but not for encoding. Or the encoding available in hardware acceleration won’t be suitable for things like WebRTC
  • At times, hardware acceleration will be available, but won’t work as advertised
  • While in many cases, hardware acceleration will just work for you

The end result? Another headache to deal with… and we didn’t even start to talk about codec generations.
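
If you want a rough signal on whether hardware acceleration is likely to kick in for a given codec and resolution, the Media Capabilities API can help. A sketch, assuming support for the ‘webrtc’ configuration type (available in Chromium-based browsers; powerEfficient is only a hint, not a guarantee):

```typescript
// Sketch: ask the browser whether decoding a given WebRTC codec at a given resolution
// is expected to be smooth and power efficient. powerEfficient often (not always)
// correlates with a hardware decoder being used.
async function probeDecode(contentType: string): Promise<void> {
  const info = await navigator.mediaCapabilities.decodingInfo({
    type: 'webrtc',
    video: {
      contentType,        // e.g. 'video/VP9' or 'video/AV1'
      width: 1280,
      height: 720,
      bitrate: 1_500_000, // bits per second
      framerate: 30,
    },
  });
  console.log(contentType, info.supported, info.smooth, info.powerEfficient);
}

void probeDecode('video/VP9');
void probeDecode('video/AV1');
```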

New video codec generation = newer, more sophisticated tools

I mentioned the tools that are the basis of a video codec. The decoder knows how to read a bitstream based on these tools. The encoder picks and chooses which tools to use when.

When moving to a newer codec generation what usually happens is that the tools we had are getting more flexible and sophisticated, introducing new features and capabilities. And new tools are also added.

More tools and features mean the encoder now has more decisions to make when it compresses. This usually means the encoder needs to use more memory and CPU to get the job done if what we’re aiming for is better compression.

Switching from one video codec generation to another means we need the devices to be able to carry that additional resource load…

A few hard facts about video codecs

Here are a few things to remember when dealing with video codecs:

  • Video codecs are likely the highest consumers of CPU on your device in a video conference (well… at least before we start factoring in the brave new world of AI)
  • Encoders require more CPU and memory than decoders
  • In a typical scenario, you will have a single encoder and one or more decoders (that’s in group video meetings)
  • Hardware acceleration isn’t always available. When it is, it might not be for the video codec you are using and it might be buggy in certain conditions
  • Higher resolution and frame rate increase CPU and memory requirements of a video codec
  • Some video codecs are royalty bearing (more on that later)
  • Encoding a video for live streaming is different from encoding it for a video conference, which is different from encoding it to upload to a photo album. In each case we will be focusing on different coding tools available to us
  • Video codecs require a large ecosystem around them to thrive. Only a few have reached this level of adoption, and most of them are available in WebRTC
  • Different tools in different video codecs mean that switching from one codec to another in a commercial application isn’t as simple as just replacing the codec in the edge devices. There’s a lot more to it in order to make real use of a video codec’s capabilities
WebRTC MTI – our baseline video codec generation

It is time to start looking at WebRTC and its video codecs. We will begin with the MTI video codecs – the Mandatory To Implement ones. This was a big debate back in the day. The standardization organizations couldn’t decide whether VP8 or H.264 should be the MTI codec.

To make a long story short – a decision was made that both are MTI.

What does this mean exactly?

  • Browsers implementing WebRTC need to support both of these video codecs (most do, but not all – in some Android devices, you won’t have H.264 support for example)
  • Your application can decide which of these video codecs to use – it isn’t mandatory on your end to use both or either of them

These video codecs are rather comparable for their “price/performance”. There are differences though.

👉 If you’re contemplating which one to use, I’ve got a short free video course to guide you through this decision making process: H.264 or VP8 – What Shall it be?
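
Whichever MTI codec you pick, you can nudge the negotiation toward it with codec preferences. A minimal sketch, assuming the codec you prefer is actually present in the browser’s capabilities:

```typescript
// Sketch: prefer one MTI codec over the other on a video transceiver before createOffer().
function preferCodec(transceiver: RTCRtpTransceiver, mimeType: 'video/VP8' | 'video/H264'): void {
  const caps = RTCRtpSender.getCapabilities('video');
  if (!caps) return;
  const preferred = caps.codecs.filter((c) => c.mimeType === mimeType);
  if (!preferred.length) return; // codec not available on this browser/device
  const rest = caps.codecs.filter((c) => c.mimeType !== mimeType);
  // The order passed here influences what ends up first in the negotiated SDP.
  transceiver.setCodecPreferences([...preferred, ...rest]);
}

// Usage:
// const transceiver = pc.addTransceiver('video');
// preferCodec(transceiver, 'video/VP8');
```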

The emergence of VP9 and rejection of HEVC

The descendants of VP8 and H.264 are VP9 and HEVC.

H.264 is a royalty bearing codec and so is HEVC. VP8 and VP9 are both royalty free codecs.

HEVC being newer and considerably more expensive made things harder for it to be adopted for something like WebRTC. That’s because WebRTC requires a large ecosystem of vendors and agreements around how things are done. With a video codec, not knowing who needs to pay the royalties stifles its adoption.

And here, should the ones paying be the chipset vendor? Device manufacturer? The browser vendor? The application developer? No easy answer, so no decision.

This is why HEVC ended up being left out of WebRTC for the time being.

VP9 was an easy decision in comparison.

Today, you can find VP9 in applications such as Google Meet and Jitsi Meet among many others who decided to go for this video codec generation and not stay in the VP8/H.264 generation.

The big promise of VP9 was its SVC support

Our brave new world of AV1

AV1 is our next gen of video codecs. The promise of a better world. Peace upon the earth. Well… no.

Just a fork in the road that points toward a future that is mostly royalty free for video codecs (maybe).

What do we get from AV1 as a new video codec generation compared to VP9? Mainly what we got from VP9 compared to VP8: better quality for the same bitrate, at the price of more CPU and memory.

Where VP9 brought us the promise of SVC, AV1 is bringing with it the promise of better screen sharing of text. Why? Because its compression tools are better equipped for text, something that was/is lacking in previous video codecs.

AV1 has most of the industry behind it. Somehow, at a magical moment in the past, they got together and came to the conclusion that a royalty free video codec would benefit everyone, creating the Alliance for Open Media and with it the AV1 specification. This gave the codec the push it needed to become the most dominant video coding technology of our near future.

For WebRTC, it marks the 3rd video codec generation that we can now use:

  • Not everywhere
  • It still lacks hardware acceleration
  • Performance is horrendous for high resolutions

Here’s an update of what Meta is doing with AV1 on mobile from their RTC@Scale event earlier this year.

This is a start. And a good one. You see experiments taking place as well as first steps towards productizing it (think Google Meet and Jitsi Meet here among others) in the following areas:

  • Decoding only scenarios, where the encoder runs in the cloud
  • Low bitrates, where we have enough CPU available for it
  • When screen sharing at low frame rate is needed for text data
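
For the screen sharing scenario in the last bullet, here is a rough sketch of how this might look in a browser – capturing at a low frame rate, hinting that the content is text, and preferring AV1 when the browser exposes it (none of this is guaranteed to be available everywhere):

```typescript
// Sketch: share a text-heavy screen at a low frame rate, hint the encoder about the
// content type, and prefer AV1 when the browser exposes it in its sender capabilities.
async function shareTextHeavyScreen(pc: RTCPeerConnection): Promise<void> {
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: { frameRate: 5 } });
  const [track] = stream.getVideoTracks();
  track.contentHint = 'text'; // ask the encoder to favor sharpness over motion
  const transceiver = pc.addTransceiver(track, { direction: 'sendonly' });
  const caps = RTCRtpSender.getCapabilities('video');
  const av1 = caps?.codecs.filter((c) => c.mimeType === 'video/AV1') ?? [];
  const others = caps?.codecs.filter((c) => c.mimeType !== 'video/AV1') ?? [];
  if (av1.length) transceiver.setCodecPreferences([...av1, ...others]);
}
```
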
Things to consider when introducing a new video codec in your application

First things first. If you’re going to use a video codec of a newer generation than what you currently have, then this is what you’ll need to decide:

Do you keep the same bitrate you had in the past, effectively increasing the media quality of the session? Or do you lower the bitrate from where it was, reducing your bandwidth requirements?

Obviously, you can also pick anything in between the two, reducing the bitrate used a bit and increasing the quality a bit.

Starting to use another video codec though isn’t only about bitrate and quality. It is about understanding its tooling and availability as well:

  • Where is the video codec supported? In which browsers? Operating systems? Devices?
  • Is there hardware acceleration available for it? How common is it out there? How buggy might it be?
  • Are there special encoding tools that we can/should adopt and use? Think temporal scalability, SVC, resilience, specific coding technique for specific content types, etc.
  • In which specific scenarios and use cases do we plan on using the codec?
  • Do we have the processing power needed on the devices to use this codec?
  • Will we need transcoding to other video codec formats for our use case to work? Where will that operation take place?
Where to find out more about video codecs and WebRTC

There’s a lot more to be said about video codecs and how they get used in WebRTC.

For more, you can always enroll in my WebRTC courses.

The post WebRTC video codec generations: Moving from VP8 and H.264 to VP9 and AV1 appeared first on BlogGeek.me.

Power-up getStats for Client Monitoring

webrtchacks - Tue, 09/03/2024 - 12:45

WebRTC’s peer connection includes a getStats method that provides a variety of low-level statistics. Basic apps don’t really need to worry about these stats but many more advanced WebRTC apps use getStats for passive monitoring and even to make active changes. Extracting meaning from the getStats data is not all that straightforward. Luckily return author […]

The post Power-up getStats for Client Monitoring appeared first on webrtcHacks.

Lip synchronization and WebRTC applications

bloggeek - Mon, 08/26/2024 - 13:30

Lip synchronization is a solved problem in WebRTC. That’s at least the case in the naive 1:1 sessions. The challenges start to amount once you hit multiparty architectures or when audio and video get generated/rendered separately.

Let’s dive into the world of lip synchronization, understand how it is implemented in WebRTC and in which use cases we need to deal with the headaches it brings with it.

Connecting audio and video = lip synchronization

When you watch a movie or any video clip for that matter on your device – be it a PC display, tablet, smartphone or television – the audio and the video that get played back to you are lip synced. There is no single “combined” audio and video. These are two separate data sets / files / streams that are associated with one another in a synchronized fashion.

When you play out an mp4 file for example, it is actually a container file holding multiple media streams. Each is decoded and played out independently, and synchronized again by timing the playout.

This was a decision made long ago that enables more flexibility in encoding technologies – you can use different codecs for the audio and the video of the content, based on your needs and the type of content you have. It also makes sense because the codecs and technologies for compressing audio and video are quite different from one another.

The RTP/RTCP solution to lip synchronization

When we’re dealing with WebRTC, we’re using SRTP as the protocol to send our media. SRTP is just the secure variant of RTP which is what I want to focus on here.

RTP is used to send media over the internet. RTCP acts as the control protocol for RTP and is tightly coupled with it.

The solution RTP and RTCP use for lip synchronization was to rely on timestamps. To make sure we’re all confused though, the smart folks who conjured this solution up decided to go with different types of timestamps and frequencies (it likely made them feel smart, though there’s probably a real reason I am not aware of that at least made sense at some point in the past).

We’re going to dive together into the charming world of RTP and NTP timestamps and see how together, we can lip sync audio and video in WebRTC.

RTP timestamp

RTP timestamp is like using “position: relative;” in CSS. We cannot use it to discern the absolute time a packet was sent (and we do not know the receiver’s clock in relation to ours).

What we can do with it, is discern the time that has passed between one RTP timestamp to another.

The slide above is from my Low-level WebRTC protocols course in the RTP lesson. Whenever we send a packet of media over the internet in WebRTC, the RTP header for that packet (be it audio or video) has a timestamp field. This field has 32 bits of data in it (which means it can’t be absolute in a meaningful way – not enough bits).

WebRTC picks the starting point for the RTP timestamps randomly, and from there it increases the value based on the frequency of the codec. Why the frequency of the codec and not something saner like “milliseconds” or similar? Because.

For audio, we increment the RTP timestamp by 48,000 every second for the Opus voice codec. For video, we increment it by 90,000 every second.

The headache we’re left dealing with here?

  • Audio and video streams have different starting points in RTP timestamps
  • Their corresponding RTP timestamps move forward at a totally different pace
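
The arithmetic itself is simple once you know the clock rate of each stream – a small sketch (ignoring the 32-bit wrap-around for clarity):

```typescript
// Sketch: converting an RTP timestamp delta to elapsed time requires the stream's clock rate.
const OPUS_CLOCK_RATE = 48_000;  // audio in WebRTC
const VIDEO_CLOCK_RATE = 90_000; // video in WebRTC

function rtpDeltaToMs(earlier: number, later: number, clockRate: number): number {
  return ((later - earlier) / clockRate) * 1000; // real code must handle 32-bit wrap-around
}

console.log(rtpDeltaToMs(0, 960, OPUS_CLOCK_RATE));   // one 20ms Opus frame -> 20
console.log(rtpDeltaToMs(0, 3000, VIDEO_CLOCK_RATE)); // one frame at ~30fps -> ~33.3
```
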
NTP timestamp

We said RTP timestamp is relative? Then NTP timestamp is like using “position: absolute;” in CSS. It gives us the wallclock time. It is 64 bits of data, which means we don’t want to send it as much over the network.

Oh, and it covers 1900-2036, after which it wraps around (expect a few minor bugs a decade from now because of this). This is slightly different from the more common Unix timestamp, which starts at 1970.

The slide above is from my Higher-level WebRTC protocols course in the Inside RTCP lesson.

You can see that when an RTCP SR block is sent over the network (let’s assume once every 5 seconds), then we get to learn about the NTP timestamp of the sender, as well as the RTP timestamp associated with it.

In a way, we can “sync” any given RTP timestamp we bump into using the NTP/RTP timestamp pair we receive for that stream in an RTCP SR.

What are we going to use this for?

  • Once we see RTCP SR blocks for BOTH audio and video channels, we can understand the synchronization needed
  • Since this is sent every few seconds, we can always resync and overcome packet losses as well as clock drifts
Calculating absolute time for lip synchronization in WebRTC

Let’s sum this part up:

  • We’ve got RTP timestamps on every packet we receive
  • Every few seconds, we receive RTCP SR blocks with NTP+RTP timestamps. These can be used to calculate the absolute time for any RTP timestamp received
  • Since we know the absolute time for the audio and video packets based on the above, we can now synchronize their playback accordingly

Easy peasy. Until it isn’t.
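
Conceptually, the calculation looks something like the sketch below. This is a simplification of what libWebRTC actually does (it also smooths clock drift and handles wrap-around), but it captures the idea:

```typescript
// Sketch: estimate the sender's wallclock time for any RTP timestamp, using the most
// recent NTP/RTP pair reported in an RTCP Sender Report for that stream.
interface SenderReportPair {
  ntpTimeMs: number;    // sender wallclock at SR time, converted to milliseconds
  rtpTimestamp: number; // RTP timestamp the sender paired with that wallclock
}

function rtpToWallclockMs(rtpTimestamp: number, sr: SenderReportPair, clockRate: number): number {
  const elapsedMs = ((rtpTimestamp - sr.rtpTimestamp) / clockRate) * 1000;
  return sr.ntpTimeMs + elapsedMs;
}

// Once audio and video map onto the same wallclock, the playout offset is just the
// difference between their estimated capture times:
// const skewMs = rtpToWallclockMs(videoRtpTs, videoSR, 90_000)
//              - rtpToWallclockMs(audioRtpTs, audioSR, 48_000);
```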

👉 RTP, RTCP and other protocols are covered in our WebRTC Protocols courses. If you want to dig deeper into WebRTC or just to upskill yourself, check out webrtccourse.com

When lip synchronization breaks in WebRTC

RTP/RTCP gives us the mechanism to maintain lip synchronization. And WebRTC already makes use of it. So why and how can WebRTC lose lip synchronization?

There are three main reasons for this to happen:

  1. The data used for lip synchronization is incorrect to begin with
  2. Network fluctuations are too bad, causing WebRTC to decide not to lip sync
  3. Device conditions and use case make lip synchronization impossible or undesirable

I’d like to tackle that from the perspective of the use cases. There are a few that are more prone than others to lip synchronization issues in WebRTC.

Group video conferences

In group video conferencing there are no lip synchronization issues. At least not if you design and develop it properly and make sure that you either use the SFU model or the MCU model.

Some implementations decide to decouple voice and video streams, handling them separately and in different architectural solutions:

The diagram above shows what that means. Take a voice conferencing vendor that decided to add video capabilities:

  • Voice conferencing traditionally was done using mixing audio streams (using an MCU)
  • Now that the product needs to introduce video, there’s a decision to be made on how to achieve that:
    • Add video streams to the MCU, mixing them. This is quite expensive to scale, and isn’t considered modern these days
    • Use an SFU, and shift all audio traffic to an SFU as well. Great, but requires a complete replacement of all the media infrastructure they have. Risky, time consuming, expensive and no fun at all
    • Use an SFU in parallel to the MCU. Have the audio continue the way it was up until today, and do the video part on a separate media server altogether
  • The shortest path to video in this case is the 3rd one – splitting audio and video processing from one another, which causes immediate desynchronization
  • You can’t lip synchronize the single incoming mixed audio stream with the multiple incoming video streams

In such cases, I often hear the explanation of “this is quite synchronized. It only loses sync when the network is poor”. Well… when the network is poor is when users complain. And adding this to their list of complaints won’t help. Especially if you want to be on par with the competition.

💡 What to do in this case? Go all in for SFU or all in for MCU – at least when it comes to the avoidance of splitting the audio and video processing paths.

Cloud rendering

The other big architectural headache for lip synchronization is cloud rendering. This is when the actual audio and/or video gets rendered and not acquired from a camera/microphone on some browser or mobile device.

In cloud gaming, for example, a game gets played, processed and rendered on a server in the cloud. Since this isn’t done in the web browser, the WebRTC stack used there needs to be aware of the exact timing of the audio and video frames – prior to them being encoded. This information should then be translated to the NTP+RTP timestamps that WebRTC needs. Not too hard, but just another headache to deal with.

For many cases of cloud gaming, we might even prioritize latency over lip synchronization, preferring to play whatever we have as soon as we get it over having audio (or video) wait for the other media type. That’s because in cloud games, a few milliseconds can be the difference between winning and game over.

When we’re dealing with our brave new world of conversational AI, now powered by LLM and WebRTC, then the video will usually follow the rendering of the audio, and might be done on a totally different machine. At the very least, it will occur using a different set of processes and algorithms.

💡 Here, it is critical to understand and figure out how to handle the NTP and RTP timestamps to get proper lip synchronization.
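
Conceptually, the sender-side translation is the mirror image of the receiver-side calculation shown earlier – a rough sketch, again ignoring wrap-around details and clock drift:

```typescript
// Sketch: stamp a rendered frame with an RTP timestamp derived from the moment it was
// generated, so the NTP/RTP pairs reported in RTCP SRs stay consistent with the media.
const VIDEO_CLOCK_RATE = 90_000;

function rtpTimestampFor(renderTimeMs: number, streamStartMs: number, rtpBase: number): number {
  const elapsedTicks = Math.round(((renderTimeMs - streamStartMs) / 1000) * VIDEO_CLOCK_RATE);
  return (rtpBase + elapsedTicks) >>> 0; // keep it in 32-bit unsigned range
}
```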

Latency and peripherals (and their effect on lip synchronization)

Something I learned a bit later in my life when dealing with video conferencing is that the devices you use (the peripherals) have their own built in latency.

  • Displays? They might buffer a frame or two before showing it to the user – that can easily add 50 milliseconds or more.
  • Cameras? They can be slow… old USB connections didn’t have the bandwidth for raw HD frames, so cameras used to compress the video to MJPEG, send it over USB and have the PC decode it before… encoding it again.
  • Microphone and speakers? Again, lag. Especially if you’re using bluetooth devices. Gamers go to great lengths to use wired headsets or low latency wireless headsets for this reason.

The sad thing here is that there’s NOTHING you can do about it. Remember that this is the user’s display or headset we’re talking about – you can’t tell them to buy something else.

On top of this, you have software device drivers that do noise reduction on the audio or add silly hats on the video (or replace the video altogether). These take their own sweet time to process the data and to add their own inherent latency into the whole media pipeline.

Device drivers at the operating system level should take care of this lag, and it needs to be factored into your lip synchronization logic – otherwise, you are bound to get issues here.

Got lip synchronization issues in your WebRTC application?

Lip synchronization is one of these nasty things that can negatively impact the perception of media quality in WebRTC applications. Solving it requires reviewing the architecture, sniffing the network, and playing around with the code to figure out the root cause prior to doing any actual fixing.

I’ve assisted a few clients in this area over the years, trying together to figure out what went wrong and working out suitable solutions around this.

The post Lip synchronization and WebRTC applications appeared first on BlogGeek.me.

Reducing latency in WebRTC

bloggeek - Mon, 08/12/2024 - 12:30

Explore the concept of WebRTC latency and its impact on real-time communication. Discover techniques to minimize latency and optimize your application.

WebRTC is about real time. Real time means low latency, low delay, low round trip – whatever metric you want to relate to (they are all roughly the same).

Time to look closer at latency and how you can reduce it in your WebRTC application.

Latency, delay and round trip time

Let’s do this one short and sweet:

Latency sometimes gets confused with round trip time. Let’s put things in order quickly here so we can move on:

  • Latency (or delay, and sometimes lag) is the time it takes a sent packet to be received on the other side. We also refer to it as “half a round-trip”
  • Round trip time is the time from the moment we send out a packet until we receive a response to it – a full round trip
  • Jitter is the variance in delay between received packets. It isn’t latency, but usually if latency is high, jitter is more likely to be high as well

Need more?

👉 I’ve written a longer form post on Cyara’s blog – What is Round-trip Time and How Does it Relate to Network Latency

👉 Round trip time (RTT) is one of the 7 top WebRTC video quality metrics

Latency isn’t good for your WebRTC health

When it comes to WebRTC and communications in general, latency isn’t a good thing. The higher it is, the worse off you are in terms of media quality and user experience.

That’s because interactivity requires low latency – it needs the ability to respond quickly to what is being communicated.

Here are a few “truths” to be aware of:

  • The lower the latency, the higher the quality
    • For interactivity, we need low latency
    • The faster data flows, the faster we can respond to it
    • The more connected we feel due to this
  • If the round trip time is low enough, we can add retransmissions
    • Assume latency is 20 milliseconds
    • A retransmission can take 30-40 milliseconds from the moment we notice a missing packet, request a retransmission and get one back
    • 40 milliseconds is still great, so we can now better handle packet losses
    • Just because latency is really low in our session
  • Indirectly, there are going to be fewer packet losses
    • Low latency usually means shorter routes on the network
    • Shorter routes mean fewer switches, routers and other devices
    • There’s less “time” and “place” for packets to get lost
    • So the expectation is usually that with lower latency there will be, on average, less packet loss
  • You are going to discard fewer packets
    • WebRTC will discard packets if their time has passed (among other reasons)
    • The less time packets spend over the network, the more time we have to decide if and what to do with them
    • The more time packets spend on the network, the higher the likelihood that we won’t be able to use them (due to timing)
  • Lower latency usually means lower jitter
    • Jitter is the variance in delay between packets received
    • If the latency is low in general, there’s less “room” for packets to jitter around and dawdle
    • Like the packet loss and packet discarding, this isn’t about hard truths – just things that tend to happen when latencies are lower

👉 One of the main things you should monitor and strive to lower is latency. This is usually done by looking at the round trip time metrics (which is what we can measure better than latency).

What are we measuring when it comes to latency?

When you say “latency” – what do you mean exactly?

  • We might mean the latency of a single leg in a session – from one device to another
  • It might be the latency “end to end” – from one participant to another across all network devices along the route
  • WebRTC’s latency related stats metrics? They deal with the network latency between two WebRTC entities that communicate directly with one another
  • Sometimes we’re interested in what is known as “glass-to-glass” latency – from the camera’s lens (or microphone) to the other side’s display (or speaker)

Latency starts with defining what part of the session are we measuring

And within that definition, there might be multiple pieces of processing in the pipeline that we’d want to measure individually. Usually we’d want to do that to decide where to focus our energies in optimizing and reducing the latency.

Here are two recent posts that talk about latency in the WebRTC-LLM space:

👉 You can decide to improve latency of the same use case, and take very different routes in how you end up doing that.

Different use cases deal with latency differently

Latency is tricky. There are certain physical limits we can’t overcome – the most notable one used as an example is the speed of light: trying to pass a message from one side of the globe to the other will take a considerable number of milliseconds no matter what you do, even before accounting for the need to process the data along its route.

Each use case or scenario has different ways to deal with these latencies. From defining what a low enough value is, through where in the processing pipeline to focus on optimizations, to the techniques to use to optimize latency.

Here are a few industries/markets where we can see how these requirements vary.

👉 Interested in the Programmable Video market, where vendors take care of your latency and use case? Check out my latest report: Video APIs: market status

Conferencing

Video conferencing has a set of unique challenges:

  • You can’t change the location of the edges. Telling people to relocate to conduct their meetings just isn’t possible
  • Everything is real time. These are conversations between people. They need to be interactive

💡 Latency in conferencing? Below 200 milliseconds you’re doing great. 400 or 500 milliseconds is too high, but can be lived with if one must (though grudgingly).

Streaming

Streaming is more lenient than video conferencing. We’re used to seconds of latency for streaming. You click on Netflix to start a movie and it can take a goodly couple of seconds at times. Nagging? Yes. Something to cancel the service for? No.

That said, we are moving towards live streaming, where we need more interactivity. From auctions, to sports and betting, to webinars and other use cases. Here are a few of the challenges seen here:

  • You can’t change the location of the edges. The broadcast is live, so the source is fixed. The viewers won’t move elsewhere either
  • Latency here depends on the level of interactivity you’re after. Most scenarios will be fine with 1-2 seconds (or more) of latency. Some, mostly revolving around gambling, require sub second latency

💡 For live streaming? 500 milliseconds is great. 1-2 seconds is good, depending on the scenario.

Gaming

Gaming has a multitude of scenarios where WebRTC is used. What I want to focus on here is the one of having the game rendered by a cloud server and played “remotely” on a device. 

The games here vary as well (which is critical). These can be casual games, board games (turn by turn), retro games, high end games, first person shooters, …

Often, these games have a high level of interaction that needs to be real time. Online gamers would pick an ISP, equipment and configuration that lowers their latency for games – just in order to get a bit more reaction time to improve their performance and score in the game. And this has nothing to do with rendering the whole game in the cloud – just about passing game state (which is smaller). Here’s an example of an article by CenturyLink for gamers on latency on their network. Lots of similar articles out there.

Cloud gaming, where the game gets rendered on the server in full and the video frames are sent via WebRTC over the network? That requires low latency to be able to play properly.

💡 In cloud gaming 50-60 milliseconds latency will be tolerable. Above that? Game over. Oh, and if you play against someone with 30 milliseconds? You’re still dead at 50 milliseconds. The lower the better at any number of milliseconds

Conversational AI

Conversational AI is a hot topic these days. Voice bots, LLM, Generative AI. Lots of exciting new technologies. I’ve covered LLM and WebRTC recently, so I’ll skip the topic here.

Suffice to say – conversational AI requires the same latencies as conferencing, but brings with it a slew of new challenges by the added processing time needed in the media pipeline of the voice bot itself – the machine that needs to listen and then generate responses.

I know it isn’t a fair comparison to latencies in conferencing (there we don’t count the human participant’s response time, or even the time it takes them to understand what is being said), but at the moment the response time of most voice bots is too slow for highly interactive conversations.

💡 In conversational AI, the industry is striving to reach sub 500 milliseconds latencies. Being able to get to 200-300 milliseconds will be a dream come true.

Reducing latency in WebRTC

Different use cases have different latency requirements. They also have different architecture and infrastructure. This leads to the simple truth that there’s no single way to reduce latency in WebRTC. It depends on the use case and the way you’ve architected your application that will decide what needs to be done to reduce the latency in it.

If you split the media processing pipeline in WebRTC to its coarse components, it makes it a bit easier to understand where latency occurs and from there, to decide where to focus your attention to optimize it.

Browsers and latency reduction

When handling WebRTC in browsers there’s not much you can do on the browser side to reduce latency. That’s because the browser controls and owns the whole media processing stack for you.

There are still areas where you can and should take a look at what you’re doing. Here are a few questions you should ask yourself:

  1. Should you reduce the playout delay to zero? This is supported in Chrome and usually used in the cloud gaming use cases – less in conversational use cases
  2. Are you using insertable streams? If you are, then the code you use there to reshape the frames received and/or sent might be slow. Check it out
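
For the first question – reducing the playout delay – here is a minimal sketch. Note that jitterBufferTarget (milliseconds) and playoutDelayHint (seconds, Chromium-specific) are assumptions you should verify against the browsers you target:

```typescript
// Sketch: ask all receivers for minimal playout delay. Verify attribute availability
// against your target browsers before relying on this.
function requestMinimalPlayoutDelay(pc: RTCPeerConnection): void {
  for (const receiver of pc.getReceivers()) {
    const r = receiver as RTCRtpReceiver & {
      jitterBufferTarget?: number; // standardized, milliseconds
      playoutDelayHint?: number;   // older Chromium-specific attribute, seconds
    };
    if ('jitterBufferTarget' in r) {
      r.jitterBufferTarget = 0;
    } else if ('playoutDelayHint' in r) {
      r.playoutDelayHint = 0;
    }
  }
}
```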

The most important thing in the browser is going to be the collection of latency related measurements. Ones you can use later on for monitoring and optimizing it. These would be rtt, jitter and packet loss that we mentioned earlier.
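
A small sketch of such a collection loop, pulling the standard fields from getStats (remote-inbound-rtp for round trip time on what you send, inbound-rtp for jitter and packet loss on what you receive):

```typescript
// Sketch: sample round trip time, jitter and packet loss from getStats once a second.
async function sampleLatencyMetrics(pc: RTCPeerConnection): Promise<void> {
  const report = await pc.getStats();
  report.forEach((stats: any) => {
    if (stats.type === 'remote-inbound-rtp') {
      // RTT derived from RTCP for media we are sending (seconds)
      console.log(stats.kind, 'roundTripTime:', stats.roundTripTime);
    } else if (stats.type === 'inbound-rtp') {
      console.log(stats.kind, 'jitter:', stats.jitter, 'packetsLost:', stats.packetsLost);
    }
  });
}

// e.g. setInterval(() => void sampleLatencyMetrics(pc), 1000);
```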

Mobile and latency reduction

Mobile applications, desktop applications, embedded applications. Any device side application that doesn’t run on a browser is something where you have more control of.

This means there’s more room for you to optimize latency. That said, it usually requires specialized expertise and more resources than many would be willing to invest.

Places to look at here?

  • The code that acquires raw audio and video from the microphone and the camera
  • Code that plays the incoming media to the speaker and the display
  • Codec implementations. Especially if these can be “replaced” by hardware variants
  • Data passing and processing within the media pipeline itself. Now that you have access to it, you can decide not to wait for Google to improve it but rather attempt to do it on your own (good luck!)

When taking this route, also remember that most optimizations here are going to be device and operating system specific. This means you’ll have your hands full with platforms to optimize.

Infrastructure latency reduction

This is the network latency that most of the rtt metric in WebRTC statistics comes from.

Where your infrastructure is versus the users has a huge impact on the latency.

The example I almost always use? Two users in France connected via a media or TURN server in the US.

Figuring out where your users are, what ISPs they are using, where to place your own servers, through which carriers to connect them to the users, how to connect your servers to one another when needed – all these are things you can optimize.

For starters, look at where you host your TURN servers and media servers. Compare that to where your users are coming from. Make sure the two are aligned. Also make sure the servers allocated for users are the ones closest to them (closest in terms of latency – not necessarily geography).
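
One crude way to approximate “closest in terms of latency” from the client is to time a tiny request to each candidate region before allocating a server. The endpoints below are hypothetical placeholders, and HTTP timing is only a proxy for media-path RTT:

```typescript
// Sketch: time a small request to each candidate region and pick the fastest.
// The URLs are hypothetical placeholders for a health endpoint near your TURN/media servers.
async function pickClosestRegion(regions: Record<string, string>): Promise<string> {
  const timings = await Promise.all(
    Object.entries(regions).map(async ([region, url]) => {
      const start = performance.now();
      await fetch(url, { method: 'HEAD', cache: 'no-store' });
      return { region, ms: performance.now() - start };
    })
  );
  timings.sort((a, b) => a.ms - b.ms);
  return timings[0].region;
}

// const region = await pickClosestRegion({
//   'eu-west': 'https://eu-west.rtc.example.com/ping',
//   'us-east': 'https://us-east.rtc.example.com/ping',
// });
```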

See if you need to deploy your infrastructure in more locations.

Rinse and repeat – as your service grows – you may need to change focus and infrastructure locations.

Other areas of improvement here are using Anycast or network acceleration that is offered by most large IaaS vendors today (at higher network delivery prices).

Media server processing and latencies

Then there are the media servers themselves. Most services need them.

Media servers are mainly the SFUs and MCUs that take care of routing and mixing media. There are also gateways of many shapes and sizes.

These media servers process media and have their own internal media processing pipelines. As with other pipelines, they have inherent latencies built into them.

Reducing that latency will reduce the end to end latency of the application.

The brave (new) world of generative AI, conversation AI and… LLMs

Remember where we started here? Me discussing latency because WebRTC-LLM use cases had to focus on reducing latency in their own pipeline.

This got the industry looking at latency again, trying to figure out how and where you can shave a few more milliseconds along the way.

Frankly? This needs to be done throughout the pipeline – from the device, through the infrastructure and the media servers, and definitely within the STT-LLM-TTS pipeline itself. This is going to be an ongoing effort for the coming year or two I believe.

Know your latency in WebRTC

We can’t optimize what we don’t measure.

The first step here is going to be measurements.

Here are some suggestions:

  • Measure latency (obvious)
    • Break down the latency into pieces of your pipeline and measure each piece independently
    • Decide which pieces to focus and optimize each time
  • Be scientific about it
    • Create a testbed that is easy to setup and operate
    • Something you can run tests in CI/CD to get results quickly
    • This will make it easier to see if optimizations you introduce work or backfire
  • Do this at scale
    • Don’t do it only in your lab. Run it directly on production
    • This will enable you to find bottlenecks and latency issues for users and not only synthetic scenarios

Did I mention that testRTC has some of the tools you’ll need to set up these environments? 😉 And if you need assistance with this process, you know where to find me.

The post Reducing latency in WebRTC appeared first on BlogGeek.me.

OpenAI, LLMs, WebRTC, voice bots and Programmable Video

bloggeek - Mon, 07/29/2024 - 12:30

Learn about WebRTC LLM and its applications. Discover how this technology can improve real-time communication using conversational AI.

Talk about an SEO-rich title… anyways. When Philipp suggests something to write about I usually take note and write about it. So it is time for a teardown of last month’s demo by OpenAI – what place WebRTC takes in it, and how it affects the programmable video market of Video APIs.

I’ve been dragged into this discussion before. In my monthly recorded conversation with Arin Sime, we talked about LLMs and WebRTC:

Time to break down the OpenAI demo that was shared last month and what role WebRTC and its ecosystem plays in it.

The OpenAI GPT-4o demo

Just to be on the same page, watch the demo below – it is short and to the point:

(for the full announcement demos video check out this link. You really should watch it all)

There were several interfaces shown (and not shown) in these demos:

  • No text prompts. Everything was done in a conversational manner
  • And by conversation I mean voice. The main interface was a person talking to ChatGPT through his phone app
  • There were a few demos that included “vision”
    • They were good and compelling, but they weren’t video per se
    • It felt more like images being uploaded, with OCR/image recognition applied to them or some such
    • This was clearly visible when, in the last demo of this kind, the person had to tell ChatGPT to use the latest image and not an older one – there are still a few rough edges here and there

Besides the interface used, there were 3 important aspects mentioned, explained and shown:

  • This was more than just speech to text or text to speech. It gave the impression that ChatGPT perceived and generated emotions. I dare say, the OpenAI team went above and beyond to show that on stage
  • Humor. It seems humor – and humans in general – are now better understood by ChatGPT
  • Interruptions. This wasn’t turn by turn prompting but rather a conversation. One where the person can interrupt in the middle to veer and change the conversation’s direction

Let’s see why this is different from what we’ve seen so far, and what is needed to build such things.

Text be like…

ChatGPT started off as text prompting.

You write something in the prompt, and ChatGPT obligingly answers.

It does so with a nice “animation”, spewing the words out a few at a time. Is that due to how it actually works, or is the animation slowed down compared to how it works? Who knows?

This gives a nice feel of a conversation – as if it is processing and thinking about what to answer, making up the sentences as it goes along (which to some extent it does).

This quaint prompting approach works well for text. A bit less for voice.

And now that ChatGPT added voice, things are getting trickier.

“Traditional” voice bots are like turn based games

Before all the LLM craze and ChatGPT, we had voice bots. The acronyms at the time were NLP and NLU (Natural Language Processing and Natural Language Understanding). The result was like a board game where each side has its turn – the customer and the machine.

The customer asks something. The bot replies. The customer says something more. Oh – now’s the bot’s turn to figure out what was said and respond.

In a way, it felt/feels like navigating the IVR menus via voice commands that are a bit more natural.

The turn by turn nature means there was always enough time.

You could wait until you heard silence from the user (known as endpointing). Then start your speech to text process. Then run the understanding piece to figure out intents. Then decide what to reply and turn it into text and from there to speech, preferably with punctuation, and then ship it back.

The pieces in red can easily be broken down into more logic blocks (and they usually are). For the purpose of discussing the real time nature of it all, I’ve “simplified” it into the basic STT-NLU-TTS

To build bots, we focused on each task one at a time. Trying to make that task work in the best way possible, and then move the output of that task to the next one in the pipeline.

If that takes a second or two – great!
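
To make the sequential waiting concrete, here’s a caricature of that turn-based pipeline. The three helpers are hypothetical placeholders for whatever STT, NLU/LLM and TTS services are actually used:

```typescript
// Caricature of the turn-based pipeline: nothing is played back until every step completes.
async function handleUserTurn(userAudio: ArrayBuffer): Promise<ArrayBuffer> {
  const text = await transcribe(userAudio);  // speech to text, after endpointing
  const reply = await generateReply(text);   // intent handling / LLM step
  return await synthesize(reply);            // text to speech, only now audible to the user
}

// Hypothetical placeholders for real STT / NLU-LLM / TTS integrations:
declare function transcribe(audio: ArrayBuffer): Promise<string>;
declare function generateReply(text: string): Promise<string>;
declare function synthesize(text: string): Promise<ArrayBuffer>;
```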

But it isn’t what we want or need anymore. Turn based conversations are arduous and tiring.

Realtime LLMs are like… real-time games

Here are the 4 things that struck a chord with me when GPT-4o was introduced from the announcement itself:

  • GPT-4o is faster (you need that one for something that is real-time)
  • Future of collaboration – somehow, they hinted at working together and not only man to machine, whatever that means at this early stage
  • Natural, feels like talking to another person and not a bot (which is again about switching from turn based to real-time)
  • Easier, on the user. A lot due to the fact that it is natural

Then there was the fact that the person in the demo cuts GPT-4o short in mid-sentence and actually gets a response back without waiting until the end.

There’s more flexibility here as well. Less to learn about what needs to be said to “strike” specific intents.

Moving from turn based voice bots to real-time voice bots is no easy feat. It is also what’s in our future if we wish these bots to become commonplace.

Real life and conversational bots

The demo was quite compelling. In a way, jaw dropping.

There were a few things there that were either emphasized or skimmed through quickly that show off capabilities that if arrive in the product once it launches are going to make a huge difference in the industry.

Here are the ones that resonated with me

  • Wired and not wireless. Why on earth would they do a wired demo from a mobile device? The excuse was network reception. Somehow, it makes more sense to just get an access point in the room, just below the low table and be done with it. Something there didn’t quite work for me – especially not for such an important demo (4.6M views in 2 months on the full session on YouTube)
  • Background noise. Wired means they want a clean network. Likely for audio quality. Background noise can be just as bad for the health of an LLM (or a real conversation). These tools need to be tested rigorously in real-life environments… with noise in them. And packet loss. And latency. Well… you got the hint
  • Multiple voices. Two or more people sitting around the table, talking to GPT-4o. Each time someone else speaks. Does GPT need to know these are different people? That’s likely, especially if what we aim at is conversations that are natural for humans
  • Interruptions. People talking over each other locally (the multiple voices scenario). A person interrupting GPT-4o while it runs inference or answers. Why not GPT-4o interrupting a rambling human, trying to get him back on track?
  • Tone of voice. Again, this one goes both ways. Understanding the tone of voice of humans. And then there’s the tone of voice GPT-4o needs to play. In the case of the demo, it was friendly and humorous. Is that the only tone we need? Likely not. Should tone be configurable? Predetermined? Dynamic based on context?

There are quite a few topics that still need to be addressed. OpenAI and ChatGPT have made huge strides and this is another big step. But it is far from the last one.

We will know more on how this plays out in real life once we get people using it and writing about their own experiences – outside of a controlled demo at a launch event.

Working on the WebRTC and LLM infrastructure

In our domain of communication platforms and infrastructure, there are a few notable vendors that are actively working on fusing WebRTC with LLMs. This definitely isn’t an exhaustive list. It includes:

  • Those that made their intentions clear
  • Had something interesting to say besides “we are looking at LLMs”
  • And that I noticed (sorry – I can’t see everyone all the time)

They are taking slightly different approaches, which makes it all the more interesting.

Before we start, let’s take the diagram from above of voicebots and rename the NLU piece into LLM, following marketing hype as it is today:

The main difference now is that LLM is like pure black magic: We throw corpuses of text into it, the more the merrier. We then sprinkle a bit of our own knowledge base and domain expertise. And voila! We expect it to work flawlessly.

Why? Because OpenAI makes it seem so easy to do…

Programmable Video and Video APIs doing LLM

In our domain of programmable video, what we see are vendors trying to figure out the connectors that make up the WebRTC-LLM pipeline and doing that at as low latency as possible.

Agora

Agora just published a nice post about the impact of latency on conversational AI.

The post covers two areas:

  1. The mobile device, where they tout their native SDK as being faster and with lower latency than the typical implementation
  2. The network, relying on their SD-RTN infrastructure for providing lower latency than others

In a way, they focus on the WebRTC-realm of the problem, ignoring (or at least not saying anything about) the AI/LLM-realm of the problem.

It should be said that this piece is important and critical in WebRTC no matter if you are using LLMs or just doing a plain meeting between mere humans.

Daily

Daily takes the same unique approach to LLMs that it does in other areas. It offers a kind of Prebuilt solution, bringing in partners and integrations and optimizing them for low latency.

In a recent post they discuss the creation of the fastest voice bot.

For Daily, WebRTC is the choice to go for since it is already real time in nature. Sprinkle on top of it some of the Daily infrastructure (for low latency). And add the new components that are not part of a typical WebRTC infrastructure. In this case, packing Deepgram’s STT and TTS along with Meta’s Llama 3.

The concept here is to place STT-LLM-TTS blocks together in the same container so that the message passing between them doesn’t happen over a network or an external API. This reduces latencies further.

Go read it. They also have a nice table with the latency consumers along the whole pipeline in a more detailed breakdown than my diagrams here.
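
To make the turn-based pipeline and its latency budget concrete, here is a minimal sketch of an STT→LLM→TTS loop. The function names (transcribe, complete, synthesize) are placeholders rather than any specific vendor's API – the point is only that every stage adds its own processing and network latency, which is exactly what co-locating the blocks tries to shave off:

```typescript
// Hypothetical STT → LLM → TTS turn, sketched to show where latency accumulates.
// transcribe/complete/synthesize are placeholders, not a real vendor API.
type AudioChunk = ArrayBuffer;

async function transcribe(audio: AudioChunk): Promise<string> {
  return "user utterance"; // placeholder: call an STT engine
}

async function complete(prompt: string): Promise<string> {
  return "bot reply"; // placeholder: run LLM inference
}

async function synthesize(text: string): Promise<AudioChunk> {
  return new ArrayBuffer(0); // placeholder: call a TTS engine
}

async function handleTurn(audio: AudioChunk): Promise<AudioChunk> {
  const t0 = performance.now();
  const text = await transcribe(audio);  // STT latency
  const reply = await complete(text);    // LLM latency
  const voice = await synthesize(reply); // TTS latency
  console.log(`turn latency: ${Math.round(performance.now() - t0)}ms`);
  return voice;
}
```

When all three blocks live in the same container, each of those awaits becomes an in-process or localhost hop instead of a round trip to an external API – which is where the latency savings come from.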

LiveKit

In January this year, LiveKit introduced LiveKit Agents – components used to build conversational AI applications. They haven’t spoken about this on their blog since, nor about latency.

That said, it is known that OpenAI is using LiveKit for their conversational AI. So whatever worries OpenAI has about latencies are likely known to LiveKit…

LiveKit has been lucky to score such a high profile customer in this domain, giving it credibility in this space that is hard to achieve otherwise.

Twilio’s approach to LLMs

Twilio took a different route when it comes to LLM.

Ever since its acquisition of Segment, Twilio has been pivoting or diversifying. From communications and real time into personalization and storage. I’ve written about it somewhat when Twilio announced sunsetting Programmable Video.

This makes the announcement a few months back quite reasonable: Twilio AI Assistant

This solution, in developer preview, focuses on fusing the Segment data on a customer with the communication channel of Twilio’s CPaaS. There’s little here in the form of latency or real time conversations. That seems to be secondary for Twilio at the moment, but is also something they are likely now exploring as well due to OpenAI’s announcement of GPT-4o.

For Twilio? Memory and personalization is what is important about the LLM piece. And this is likely highly important to their customer base. How other vendors without access to something like Segment are going to deal with this remains to be seen.

Fixie anyone?

When you give Philipp Hancke an article to review, he has good tips. This time it meant I couldn’t make this one complete without talking about fixie.ai. For a company that raised $17M they don’t have much of a website.

Fixie is important because of 3 things:

  1. Justin Uberti, one of the founders of WebRTC, is a Co-founder and CTO there
  2. It relies on WebRTC (like many others)
  3. It does things a wee bit differently, and not just by being open source

Fixie is working on Ultravox, an open source platform that is meant to offer a speech-to-speech model. No more need for STT and TTS components – or for breaking these into yet smaller pieces.

From the website, it seems that their focus at the moment is feeding speech directly into the LLM, avoiding the need to go through speech-to-text. The reasoning behind this approach is twofold:

  1. You don’t lose latency on going through the translation to text and from there into the LLM
  2. Voice has a lot more to it than just the spoken words. Having that information readily available in the LLM can be quite useful and powerful

The second part of it, of converting the result of the LLM back into speech, is not there yet.

Why is that interesting?

  • Justin… who is where WebRTC is (well… maybe apart from his stint at Clubhouse)
  • The idea of compressing multiple steps into one
  • It was tried for transcoding video and failed, but that was years ago, and was done computationally. Here we’re skipping all this and using generative AI to solve that piece of the puzzle. We still don’t know how well it will work, but it does have merit

What’s next?

There are a lot more topics to cover around WebRTC and LLM. Rob Pickering, for example, looks at scaling these solutions. Or how to deal with punctuation, pauses and other phenomena of human conversations.

With every step we make along this route, we find a few more challenges we need to crack and solve. We’re not there yet, but we definitely stumbled upon a route that seems really promising.

The post OpenAI, LLMs, WebRTC, voice bots and Programmable Video appeared first on BlogGeek.me.

Video quality metrics you should track in WebRTC applications

bloggeek - Mon, 07/15/2024 - 12:30

Get your copy of my ebook on the top 7 video quality metrics and KPIs in WebRTC (below).

I’ve been dealing with VoIP ever since I finished my first degree in computer science. That was… a very long time ago.

WebRTC? Been at it since the start. I co-founded testRTC, dealing with testing and monitoring WebRTC applications. Did consulting. Wrote a lot about it.

For the last two years I’ve been meaning to write a short ebook explaining video quality metrics in WebRTC. And I finally did that 😎

The challenges of measuring video quality

Ever since we started testRTC, customers came to us asking for a quality score to fit their video application. But where do you even begin?

  • A 1:1 call at 1Mbps will be perceived differently on a smartphone than on a PC with a 27” display
  • These same 2 participants collaborating together on a document require much less bitrate and resolution
  • Group video calls with 15 people or more require a totally different perspective as to what can be seen as good video quality
  • Cloud gaming with a unidirectional video stream at really low latency has different quality requirements
  • A webinar is different than the scenarios above

Deciding what’s good or bad is a personal decision that needs to be made by each and every company for its applications. Sometimes, differently per scenario used.

Where do we even start then?

Packet loss and latency aren’t enough

If I had to choose two main characteristics of media quality in real time communications, they would be packet loss and latency.

Packet loss tells you how bad the network conditions are (at least most of the time this is what it is meant to do). Your goal would be to reduce packet loss as much as possible (don’t expect to fully eradicate it).

Latency indicates how far the users are from your infrastructure or from each other. Shrinking this improves quality.

But that’s not enough. There’s more to it than these two metrics to be able to get a better picture of your application’s media quality – especially when dealing with video streams.

Know your top 7 video quality metrics in WebRTC

Which is why I invite you to download and review the top 7 video quality metrics in WebRTC – my new ebook which lists the most important KPIs when it comes to understanding video quality in WebRTC. There you will find an explanation of these metrics, along with my suggestions on what to do about them in order to improve your application’s video quality.

And yes – the ebook is free to download and read – once you jot down your name and email, it will be sent to you directly.

The post Video quality metrics you should track in WebRTC applications appeared first on BlogGeek.me.

Fixing packet loss in WebRTC

bloggeek - Mon, 07/01/2024 - 12:30

Discover the hidden dangers of packet loss and its impact on your WebRTC application. Find out how to optimize your network performance and minimize packet loss.

If there’s one thing that can give you better media quality in WebRTC it is going to be the reduction (or elimination?) of packet loss. Nothing else will be as effective as this.

What I want to do here is explain packet loss, why it is inevitable, and the many ways we have at our disposal to increase the resilience and quality of our media in WebRTC in the face of packet losses.

Why do we have packet loss in WebRTC?

There are many reasons for packet losses to occur on modern networks and with WebRTC. To count a few of these:

  • Wireless and cellular networks may suffer due to the distance between the device and the access point, as well as other obstructions (physical or just aerial interference)
  • Routers and switches can get congested, causing delays as well as dropped packets
  • Ethernet cables can be faulty at times
  • Connections between switches are not always as clean as they could be
  • Media servers not doing their job correctly or just getting overtaxed with traffic
  • Entropy. The more we miniaturize and condense things, the more entropy will kick in (I added this one just to sound smart)
  • Devices might not be faring too well at times either

We think of the internet as a reliable network. You direct a browser to a web page. And magically the page loads. If it doesn’t, then the network or server is down. End of story. That’s because packet losses there are handled by retransmitting what is lost. The cost? You wait a wee bit longer for your page to load.

With WebRTC we are dealing with real time communications. So if something gets lost there is little time to fix that.

👉 Packet losses are a huge headache for WebRTC applications

What to do to overcome packet losses?

Packet loss is an inevitability when it comes to WebRTC and VoIP in general. You can’t really avoid them. The question then becomes what can we do about this?

There are four different approaches here that can be combined for a better user experience:

  1. Have fewer packet losses – if we have fewer of these, the user experience will improve
  2. Conceal packet losses (PLC) – once we have packet losses, we need to try and figure out what to do to conceal that fact from the user
  3. Retransmit lost packets (RTX) – we might want to try and retransmit what was lost, assuming there’s enough time for it
  4. Correct packet losses in advance (FEC) – when we know there’s high probability of packet losses, we might want to send packets more than once or add some error correction mechanism to deal with the potential packet losses

From here on, let’s review each one of these four approaches.

Have less packet losses

This is the most important solution.

Because I don’t want you to miss this, I’ll write this again:

This is the most important solution.

If there is less packet loss, there is going to be less headache to deal with when trying to “fix” this situation. So reducing packet loss should be your primary objective. Since you can’t fully eradicate packet loss, we will still need to use other techniques. But it starts with reducing the amount of packet losses.

Location of infrastructure elements in WebRTC

Where you place your media servers and TURN servers and how you route traffic for your WebRTC service will have a huge impact on packet loss.

Best practice today is having the first server that WebRTC media hits as close to the user as possible. The understanding behind that is that this reduces the number of hops and network infrastructure components that the media packets need to traverse over the open internet. Once on your server, you have a lot more control over how that data gets processed and forwarded between the servers.

Having a single data center in the US cater for all your traffic is great – assuming your users are from that region. Once users start joining from across the pond – say… France. Or India – you will start seeing higher latencies and with them higher levels of packet loss.

A few things here:

  • Where you place your servers highly depends on your users and their behavior
  • TURN servers are important to spread globally, but at the end of the day, check how much of your actual traffic gets relayed through TURN servers
  • Media servers are something I’d try to spread globally more, assuming these are needed in all meetings. I’d also focus on cascaded/distributed architectures where users join the closest media server (versus allocating a specific server for all users in the same meeting)

Where to start?

👉 Know the latency (RTT) of your users. Monitor it. Strive towards improving it

👉 Check if there are locations and users that are routed across regions. Beef up your infrastructure in the relevant regions based on this data

👉 Since we want to reduce packet loss, you should also monitor… packet loss
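
Both RTT and packet loss can be pulled from WebRTC’s getStats() API for ongoing monitoring. Below is a minimal polling sketch; pc is assumed to be your RTCPeerConnection, and exactly which report types show up varies a bit between browsers, so treat it as a starting point rather than a complete monitoring solution:

```typescript
// Minimal getStats() polling sketch for RTT and packet loss.
// Assumes `pc` is an existing RTCPeerConnection in the page.
async function sampleNetworkHealth(pc: RTCPeerConnection) {
  const stats = await pc.getStats();
  stats.forEach((report) => {
    if (report.type === "candidate-pair" && report.nominated) {
      // currentRoundTripTime is reported in seconds
      console.log("RTT (ms):", (report.currentRoundTripTime ?? 0) * 1000);
    }
    if (report.type === "inbound-rtp") {
      const received = report.packetsReceived ?? 0;
      const lost = report.packetsLost ?? 0;
      const lossPct = received + lost > 0 ? (100 * lost) / (received + lost) : 0;
      console.log(`${report.kind} incoming loss: ${lossPct.toFixed(2)}%`);
    }
  });
}

// Example: sample every 5 seconds
// setInterval(() => sampleNetworkHealth(pc), 5000);
```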

Better bandwidth estimation

I should have called this better bandwidth management, but for SEO reasons, kept it bandwidth estimation 😉

Here’s the thing:

Sending more than the network can handle, more than the sender can send, or more than the receiver can receive leads to packet loss and packet drops.

Fixing that boils down to bandwidth management – you don’t want to send too little since media quality will be lower than what you can achieve. And you don’t want to send too much since… well… packet loss.

Your service needs to be able to estimate bandwidth. That needs to happen on both the uplink and the downlink for each user.

The challenge is that available bandwidth is dynamic in nature. At each point in time, we need to estimate it. If we overshoot – packets are going to be delayed or lost. If we undershoot, we are going to reduce media quality below what we can achieve.

Web browser implementations of WebRTC have their own bandwidth management algorithms and they are rather good. Media servers have different implementations and their quality varies.

For media servers, we also need to remember that we aren’t dealing only with bandwidth estimation but rather with bandwidth management. Once we approximately know the available bandwidth, we need to decide which of the streams to send over the connection and at which bitrates; doing that while seeing the bigger picture of the session (hence bandwidth management and not estimation).
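
On the client side, you can also peek at the browser’s own estimate through getStats(). A minimal sketch reading availableOutgoingBitrate off the active candidate pair (pc is assumed to be your RTCPeerConnection; the incoming counterpart isn’t consistently exposed across browsers):

```typescript
// Reading the sender-side bandwidth estimate from the active candidate pair.
// `pc` is assumed to be an existing RTCPeerConnection.
async function getOutgoingBandwidthEstimate(pc: RTCPeerConnection): Promise<number | undefined> {
  const stats = await pc.getStats();
  let estimate: number | undefined;
  stats.forEach((report) => {
    if (report.type === "candidate-pair" && report.nominated && report.availableOutgoingBitrate) {
      estimate = report.availableOutgoingBitrate; // bits per second
    }
  });
  return estimate;
}

// Usage: base how much you send on the estimate, not on wishful thinking
// const bps = await getOutgoingBandwidthEstimate(pc);
```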

Conceal packet losses (PLC)

Packet loss concealment is what we do after the fact. We lost packets, but we need to play out something for the user. What should we do to conceal the problem of packet loss?

This may seem like the last thing to deal with, but it is the first we need to tackle. There are two reasons why:

  1. No matter what kind of techniques and resiliency mechanisms you use, at the end of the day, some level of packet loss is bound to occur
  2. Other techniques we have are more sophisticated. Usually we will get to implement them later on. We NEED to have a rock solid concealment strategy before adding more techniques

Audio and video are different, which is why from here on, we will distinguish between the two in the techniques we are going to use.

Audio and packet loss concealment

With audio, a loss of an audio packet almost always translates immediately to a loss of one or more audio frames (and we usually have 50 audio frames per second).

“Skipping” them doesn’t work so well, as it leads to robotic audio when there’s packet loss.

Other naive approaches here include things like playing back the last frame received – either as is or with a reduction in its volume.

More sophisticated approaches try to estimate what should have been received by way of machine learning (or what we love calling it these days – generative AI). Google has such a capability inhouse (though not inside the open source implementation of WebRTC that they have). If you are interested in learning more about this, you can check out Google’s explanation of WaveNetEQ.

A few things to remember here:

👉 For the most part, this isn’t something in your control, unless you own/compile your WebRTC stack on the device side

👉 Knowing how browsers behave here enables you to be slightly smarter with the other techniques you are going to use (by deciding when to use them and how aggressively)

👉 In your own native application? You can improve on things, but you need to know what you’re doing and you need to have a compelling reason to take this route

Video and packet loss concealment 👉 frame dropping

Video is trickier with packet losses:

  • With video coding, each frame is usually dependent on past frames (to improve upon compression rates)
  • A video frame is almost always composed of multiple packets

One lost packet translates into a lost frame, which can easily cause loss of the whole video sequence:

Packet loss concealment in video means dropping a frame, and oftentimes freezing the video until the next keyframe arrives.

What can the receiver do in case of such a loss? If it believes it won’t recuperate quickly (which is most commonly the case), it can send out a FIR or PLI message over RTCP to the sender. These messages indicate to the sender that there’s a loss that needs to be addressed, where the usual solution is to reset the encoder and send a new keyframe.

In the past, systems used to try and overcome packet losses by continuing to decode without the missing packets. The end result was smearing artifacts on the video until a new keyframe arrived. Today, best practice is to freeze the video until a keyframe arrives (which is what all browser implementations do).

A few things to remember here:

👉 You have more control here than in audio. That’s because a lost packet means you will receive a FIR or PLI message on the other end. If that’s your media server receiving these messages, you can decide how to respond

👉 Sending a keyframe means investing more bitrate in that frame. If there’s congestion over the network, then this will just add to the burden. Most media servers would avoid sending too many of these in larger group meetings

👉 There are video coding techniques that reduce the dependencies between frames. These include temporal scalability and SVC

Retransmitting lost packets (RTX)

If a packet is missing, then the first solution we can go for is to retransmit it.

The receiver knows what packets it is missing. Once the sender knows about the missing packets (via NACK messages), it can resend them as RTX packets.

Retransmission is the most economic solution in terms of network resources. It is the least wasteful solution. It is also the hardest to make use of. That’s because it ends up looking something like this:

In order to retransmit, we need to:

  • Know there are missing packets (by receiving a newer packet)
  • Decide that the older ones won’t be arriving and are lost
  • Let the sender know they are lost
  • Have the sender retransmit them

This takes time. A long time.

The question then becomes, is it going to be too late to retransmit them.

Video and RTX

Video can make real use of retransmissions (and it does in WebRTC).

With video compression, we have a kind of hierarchy of frames. Some frames are more important than others:

  • Keyframes (or I-frames) are the most important. They are “standalone” frames that aren’t reliant on any past frames
  • In SVC and temporal scalability, some frames are a kind of a dead-end, with nothing reliant on them, while in other cases, have frames reliant on them

The above illustration, for example, shows how keyframes and temporal scalability build dependency chains. Key denotes the keyframe while L0 has higher usability than L1 frames (L1 frames are dependent on L0 frames and nothing depends on them).

When we have such a dependency tree of frames, we can do some interesting things with resiliency. One of them is deciding if it is worthwhile to ask for a retransmission:

  • If the missing packets are from a keyframe, then asking for a retransmission is useful even if the keyframe itself won’t be displayed due to the time that passed
  • Similarly, we can decide to do this for L0 frames (these being quite important)
  • And we can just skip packets of L1 frames that are lost – we might not have time to playback this frame once the retransmission arrives, and that data will be useless anyway

Audio and RTX

Audio compression doesn’t enjoy the same dependency tree that video compression does. Which is why libwebrtc doesn’t have code to deal with audio RTX.

Would having RTX for audio be useful? It could be. Audio packets usually wait for video packets to arrive for lip synchronization purposes. If we can use that wait time to retransmit, then we can improve upon audio quality. Google likely deemed this not important enough.

Correct packet losses in advance (FEC)

We could ask for a retransmission after the fact, but what about making sure there’s no need? This is what FEC (Forward Error Correction) is all about.

Think of it this way – if we had one shot at what we want to send and it was super important – would it make sense to send 100 copies of it, knowing that the chance that at least one of these copies reaches its destination is high?

FEC is about sending more packets that can be used to reconstruct or replace lost packets.

There are different FEC schemes that can be used, with the main 3 of them being:

  1. Duplication (send the same thing over and over again)
  2. XOR (add packets that XOR the ones we wish to protect)
  3. Reed Solomon (similar to XOR just more complex and more resilient)

WebRTC supports duplication and XOR out of the box.
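
To make the XOR scheme concrete, here is a tiny, self-contained example. It is only the core idea – real FEC schemes such as ULPFEC or FlexFEC add their own headers and protect variable-length packets – but it shows how one parity packet can recover either of the two packets it protects:

```typescript
// Toy XOR parity example: protect two equal-length payloads with one parity packet.
function xorBytes(a: Uint8Array, b: Uint8Array): Uint8Array {
  const out = new Uint8Array(a.length);
  for (let i = 0; i < a.length; i++) out[i] = a[i] ^ b[i];
  return out;
}

const packet1 = new Uint8Array([1, 2, 3, 4]);
const packet2 = new Uint8Array([9, 8, 7, 6]);
const parity = xorBytes(packet1, packet2); // sent in addition to packet1 and packet2

// Suppose packet2 is lost in transit: it can be rebuilt from packet1 + parity
const recovered = xorBytes(packet1, parity);
console.log(recovered); // Uint8Array [9, 8, 7, 6]
```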

The biggest hurdle of FEC is its use of bitrate – it is quite network hungry in that regard.

Audio FEC

Audio FEC comes in two different manners:

  1. In-codec FEC (such as Opus in-band FEC), where the FEC mechanism is part of the codec implementation itself
  2. RTP-based FEC, where the FEC mechanism is part of the RTP protocol

In-band FEC is implemented as part of the Opus codec library. It is ok’ish at best – nothing to write home about.

Then there’s RED – Redundancy Encoding – where each audio packet holds more than a single audio frame. And the ones it holds are just slightly older frames, so that if a packet is lost, we get it in another packet.

RED is implemented in libwebrtc. Support is limited to 1 level of redundancy for RED (meaning recovering up to one sequential lost packet). You can use WebRTC’s Insertable Streams mechanism to generate RED packets at higher redundancy or dynamic redundancy in the browser though.

In the above, Philipp Hancke explains RED (along with other resiliency features for audio in WebRTC).
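
If you want to experiment with RED without going down the Insertable Streams path, a common approach is to reorder the negotiated audio codecs so that audio/red is preferred. A hedged sketch – support and behavior vary by browser, and it assumes the peer connection already has an audio track added:

```typescript
// Prefer RED (audio/red) on the audio transceiver, where the browser supports it.
// Call this on `pc` before createOffer().
function preferAudioRed(pc: RTCPeerConnection) {
  const transceiver = pc
    .getTransceivers()
    .find((t) => t.sender.track?.kind === "audio");
  if (!transceiver) return;

  const codecs = RTCRtpReceiver.getCapabilities("audio")?.codecs ?? [];
  const red = codecs.filter((c) => c.mimeType.toLowerCase() === "audio/red");
  const rest = codecs.filter((c) => c.mimeType.toLowerCase() !== "audio/red");
  if (red.length === 0) return; // RED not supported here

  // Putting RED first makes it the preferred payload in the generated SDP
  transceiver.setCodecPreferences([...red, ...rest]);
}
```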

Video FEC

FEC for video is considered wasteful. If we need to increase the bitrate by 20% or more to introduce robustness using FEC, then it comes at the cost of video quality that we could otherwise gain by spending that bitrate on the video itself.

For the most part, WebRTC ignores FEC for video, which is a shame. When using temporal scalability or SVC, the same way that we can decide to retransmit only important packets, we can also decide to add FEC protection only to the more important frames.

Wrapping it all up

Dealing with packet loss in WebRTC isn’t a simple task. It gets more complex over time, as more techniques and optimizations are bolted on to the implementation. What I want to do here is to list the various tools at our disposal to deal with packet losses. When and how we decide to use them would determine the resulting robustness and media quality of the implementation.

Here’s a quick table to sum things up a bit:

| | PLC | RTX | FEC |
|---|---|---|---|
| Focus | What to play back to the user | When to ask for missing packets | When to send duplicated packets |
| Advantages | None. You must have this logic implemented | Low network footprint | Low latency overhead |
| Challenges | Audio may sound robotic. Video will freeze | Increases latency. Might not be usable due to it | High network footprint. Can be quite wasteful |
| Audio | Duplicate last frames or reduce volume. Use Gen AI to estimate what was lost | Not commonly used for audio in WebRTC | In-band FEC (Opus) used by WebRTC. Can use RED if you want to |
| Video | Skip video frames. Ask for a fresh keyframe to reset the video stream | Can be optimized to retransmit packets of important frames only | Not commonly used for video in WebRTC |

Oh – and make sure you first put an effort to reduce the amount of packet losses before starting to deal with how to overcome packet losses that occur…

Learn more about WebRTC (and everything about it)

Packet loss is one of the topics you need to deal with when writing WebRTC applications. There are many aspects affecting media quality – packet loss is but one of them. This time, we looked into the tools available in WebRTC for dealing with packet losses.

To learn more about media processing and everything else related to WebRTC, check out these services:

And if what you want is to test, monitor, optimize and improve the performance of your WebRTC application, then I’d suggest checking out testRTC.

The post Fixing packet loss in WebRTC appeared first on BlogGeek.me.

WebRTC & HEVC – how can you get these two to work together

bloggeek - Mon, 06/17/2024 - 13:00

Getting HEVC and WebRTC to work together is tricky and time consuming. Lets see what the advantages are and if this is worth your time or not.

Does HEVC & WebRTC make a perfect match, or a match at all???

WebRTC is open source, open standard, royalty free, …

HEVC is royalty bearing, made by committee, expensive

And yet… we do see areas where WebRTC and HEVC mix rather well. Here’s what I want to cover this time:

WebRTC and royalty free codecs

Digging here in my blog, you can find articles discussing the WebRTC codec wars dating as early as 2012.

Prior to WebRTC, most useful audio and video codecs were royalty bearing. Companies issued patents related to media compression and then got the techniques covered by their patents integrated into codec standards, usually, under the umbrella of a standardization organization.

The logic was simple: companies and research institutes need to make a profit out of their effort, otherwise, there would be no high quality codecs. That was before the internet as we know it…

Once websites such as YouTube appeared, and UGC (User Generated Content) became a thing, this started to shift:

  • Browser vendors grumbled a bit about this, since browsers were given away freely. Why should they pay for licensing codec implementations?
  • Content creators and distributors alike didn’t want to pay either – especially since these were consumers (UGC) and not Hollywood in general

The new business models broke in one way or another the notion of royalty bearing codecs. Or at least tried to break. There were solutions of sorts – smartphones had hardware encoders prepaid for, decoder licenses required no payments, etc.

But that didn’t fit something symmetric like WebRTC.

When WebRTC was introduced, the codec wars began – which codecs should be supported in WebRTC?

The early days leaned towards royalty free codecs – VP8 for video and Opus for voice. At some point, we ended up with H.264 as well…

How H.264 wiggled its way into WebRTC

H.264 is royalty bearing. But it still found its way into WebRTC, due in large part to Cisco – they decided to contribute their encoder implementation of H.264 and pay the royalties on it (they likely already paid up to the cap needed anyway). That opened the door to a weird technical solution, concocted to make room for H.264 and allow it in WebRTC:

  • WebRTC spec would add H.264 as a mandatory to implement codec for browsers
  • Browsers would use the Cisco OpenH264 implementation for the encoder, but won’t have it as part of their browser binary
  • They would download it from Cisco’s CDN after installing the browser

Why? Because lawyers. Or something.

It worked for browsers. But not on mobile, where the solution was to use the hardware encoder on the device, which doesn’t always exist and doesn’t always work as advertised. And it left a gaping headache for native developers that wanted to use H.264. But who cared? Those who wanted to make a decision for WebRTC and move on – got it.

That made certain that at some point in the future, the H.264 royalty bearing crowd would come back asking for more. They’d be asking for HEVC.

HEVC, patents and big 💰

HEVC is a patent minefield, or at least was one – I admit I haven’t been following this too closely for a few years now.

Here are two slides I have in my architecture course:

There are a gazillion patents related to HEVC (not that many, but 5 figures). They are owned by a lot of companies and get aggregated by multiple patent pools. Some of them are said to be trickling into VP9 and AV1, though for the time being, most of the market and vendors ignore that.

These patents make including HEVC in applications a pain – you need to figure out where to get the implementation of HEVC and who pays for its patents. With regard to WebRTC:

  • Is this the browser vendors who need to pay?
  • Maybe the chipset vendors?
  • Or device manufacturers?
  • What about the operating system itself?
  • How about the application vendor?

Oh, and there’s no "easy" cap to reach as there was with H.264 when it was included in WebRTC and paid for by Cisco.

HEVC is expensive, with a lot of vendors waiting to be paid for their efforts.

HEVC hardware

Software codecs and royalty payments are tricky. Why? Because they open up the can of worms above, about who is paying. Hardware codecs are different in nature – the one paying for them is either the hardware acceleration vendor or the device manufacturer.

This means that hardware acceleration of codecs has two huge benefits – not only one:

  1. Less CPU use on the device
  2. Someone already paid the royalties of the codec

This is likely why Apple decided to go all in with HEVC from iPhone 8 and on – it gave them an edge that Android phones couldn’t easily solve:

  • iPhone is vertically integrated – chipset, device and operating system
  • Android devices have the chipset vendor, the device manufacturer and Google. Who pays the bill on HEVC?

This gap for Android devices was a nice barrier for many years that kept Apple devices ahead. Apple could “easily” pay the HEVC royalties while Android vendors try to figure out how to get this done.

Today?

We have Intel and Apple hardware supporting HEVC. Other chipset vendors as well. Some Android devices. Not all of them. And many just do decoding but not encoding.

For the most part, the HEVC hardware support on devices is a swiss cheese with more holes than cheese in it. Which is why many focus on HEVC support in Apple devices only today (if at all).

Advantages of HEVC in WebRTC

When it comes to video codecs, there are different generations of codecs. In the context of WebRTC, this is what it looks like:

There are two axes to look at in the illustration above

  1. From left to right, we move from one codec generation to another. Each one has better compression rates but at higher compute requirements
  2. Then there’s bottom to top, moving from royalty bearing to royalty free

If we move from the VP8 and H.264 to the next generation of VP9 and HEVC, we’re improving on the media quality for the same bitrate. The challenge though is the complexity and performance associated with it.

To deal with the increased compute, a common solution is to use hardware acceleration. This doesn’t exist that much for VP9 but is more prevalent in HEVC. That’s especially true since ALL Apple devices have HEVC support in them – at least when using WebRTC in Safari.

The other reason for using HEVC is media processing outside of WebRTC. Streaming and broadcasting services have traditionally been using royalty bearing video codecs. They are slowly moving now from H.264 to HEVC. This shift means that a lot of media sources are going to have either H.264 or HEVC available as the video codec – VP8 or VP9 will be a lot less common. This being the case, vendors would rather use HEVC than go for VP9 and deal with transcoding – their other alternative is to stick to using H.264.

So, why use HEVC?

  • It is better than VP8 and H264
  • Existence of hardware acceleration for HEVC that is more common than VP9
  • Things we want to connect to might have HEVC and not VP9
  • Differentiation. Some users, customers, investors or others may assume you’re doing something unique and innovative

Limitations of HEVC in WebRTC

HEVC requires royalty payments in a minefield of organizations and companies.

Apple already committed itself fully to HEVC, but Google and the rest of the WebRTC industry haven’t.

Google will be supporting HEVC in Chrome for WebRTC only as a decoder and only if there’s a hardware accelerator available – no software implementation. Google’s "official" stance on the matter can be found in the Chrome issues tracker.

So if you are going to support HEVC, this is where you’ll find it:

  • Most Apple devices (see here)
  • Chrome (and maybe Edge?) browsers on devices that have hardware acceleration for HEVC, but only for decoding. But not yet – it is work in progress at the moment
  • Not on Firefox (though Mozilla hasn’t yet gotten around to adding AV1 to Firefox either)
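
Since availability depends so heavily on the device and browser, the pragmatic approach is a runtime capability check rather than assumptions. A minimal sketch – where HEVC isn’t supported, the video/H265 mimeType simply won’t be listed:

```typescript
// Runtime check: does this browser expose H.265/HEVC for WebRTC at all?
function hevcSupport() {
  const canDecode = (RTCRtpReceiver.getCapabilities("video")?.codecs ?? [])
    .some((c) => c.mimeType.toLowerCase() === "video/h265");
  const canEncode = (RTCRtpSender.getCapabilities("video")?.codecs ?? [])
    .some((c) => c.mimeType.toLowerCase() === "video/h265");
  return { canDecode, canEncode };
}

console.log(hevcSupport()); // e.g. { canDecode: true, canEncode: false } on some devices
```
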
Waiting for Godot AV1

Then there is AV1. A video codec years in the making. Royalty free. With a new non-profit industry consortium behind it, with all the who’s who:

The specification is ready. The software implementation already exists inside libwebrtc. Hardware acceleration is on its way. And compression results are better than HEVC. What’s not to like here?

This makes the challenge extra hard these days –

Should you invest and adopt HEVC, or start investing and adopting AV1 instead?

  • HEVC has more hardware support today
  • AV1 can run anywhere from a royalties standpoint
  • HEVC isn’t available on many devices and device categories
  • AV1 is too new and can’t seriously deal with high bitrates and video resolutions
  • HEVC won’t be adopted by many devices even in the foreseeable future
  • AV1 is likely to be supported everywhere in the future, but it is almost nowhere in the present

Adopt VP9? Wait for AV1?

Where can you fit HEVC and WebRTC?

Let’s see where there is room today to use HEVC. From here, you can figure out if it is worth the effort for your use case.

The Apple opportunity of WebRTC and HEVC

Why invest now in HEVC? Probably because HEVC is available on Apple devices. Mainly the iPhone. Likely for very specific and narrow use cases.

For a use case that needs to work there, there might be some reasoning behind using HEVC. It would work best there today with the hardware acceleration that Apple pampered us with for HEVC. It will be really hard or even impossible to achieve similar video quality in any other way on an iPhone today.

Doing this brings with it differentiation and uniqueness to your solution.

Deciding if this is worth it is a totally different story.

Intel (and other) HEVC hardware

Intel has worked on adding HEVC hardware acceleration to its chipsets. And while at it, they are pushing towards having HEVC implemented in WebRTC on Chrome itself. The reason behind this is a big unknown, or at least something that isn’t explained that much.

If I had to take a stab at it here, it would be the desire of Intel to work closely with Apple. Not sure why, it isn’t as if Intel chipsets are interesting for Apple anymore – they have been using their own chips for their devices for a few years now.

This might be due to some grandiose strategy, or just because a fiefdom (or a business unit or a team) within Intel needs to find things to do, and HEVC is both interesting and can be said to be important. And it is important, but is it important for WebRTC on Intel chipsets? That’s an open question.

Should you invest in HEVC for WebRTC?

No. Yes. Maybe. It depends.

When I told Philipp Hancke I am going to write about this topic, he said be sure to write that “it is a bit late to invest in HEVC in 2024”.

I think it is more nuanced than that.

It starts with the question of how much energy and resources you have, and whether you can spend them on both HEVC and AV1. If you can’t, then you need to choose only one of them – or none.

Investing in HEVC means figuring out how the end result will differentiate your service enough or give it an advantage with certain types of users that would make your service irresistible (or usable).

For the most part, a lot of WebRTC applications are going to ignore and skip HEVC support. This means there might be an opportunity to shine here by supporting it. Or it might be wasted effort. Depending on how you look at these things.

Learn more about WebRTC (and everything about it)

Which codecs are available, which ones to use, how is that going to affect other parts of your application, how should you architect your solutions, can you keep up with the changes coming to WebRTC?

These and many other questions are being asked on a daily basis around the world by people who deal with WebRTC. I get these questions in many of my own meetings with people.

If you need assistance with answering them, then you may want to check out these services that I offer:

The post WebRTC & HEVC – how can you get these two to work together appeared first on BlogGeek.me.

WebRTC Plumbing with GStreamer

webrtchacks - Tue, 06/11/2024 - 14:30

GStreamer is one of the oldest and most established libraries for handling media. As a core media handling element in Linux and WebKit that was launched near the turn of the century, it is not surprising that many early WebRTC projects use various pieces of it. Today, GStreamer has expanded options for helping developers plumb […]

The post WebRTC Plumbing with GStreamer appeared first on webrtcHacks.

Reasons for WebRTC to discard media packets

bloggeek - Mon, 05/27/2024 - 12:30

From time to time, WebRTC is going to discard media packets. Monitoring such behavior and understanding the reasons is important to optimize media quality.

WebRTC does things in real time. That means that if something takes its sweet time to occur, it will be too late to process it. This boils down to the fact that from time to time, WebRTC will discard media packets, which isn’t a good thing. Why is that going to happen? There are quite a few reasons for it, which is what this article is all about.

A WebRTC Q&A

I just started a new initiative with Philipp Hancke. We’re publishing an answer to a WebRTC related question once a week (give or take), trying to keep it all below the 2 minutes mark.

We are going to cover topics ranging from media processing, through signaling to NAT traversal. Dealing with client side or server side issues. Or anything else that comes to mind.

👉 Want to be the first to know? Subscribe to the YouTube channel

👉 Got a question you need answered? Let us know

Discarded media packets in WebRTC

Media packets and frames can and are discarded by WebRTC in real life calls. There are even getstats metrics that allow you to track these:

The screenshot above was taken from the RTCInboundRtpStreamStats dictionary of getstats. I marked most of the important metrics we’re interested in for discarding media data.

packetsDiscarded – this field counts the packets that the jitter buffer decided to discard and ignore because they arrived too early or too late. It relates to audio packets.

The framesXXX fields deal with video only and look at full frames, which can span multiple packets. Frames get discarded for a multitude of reasons, which we will deal with later in this article. For the time being – just know where to find them.

The diagram below is a screenshot taken in testRTC of a real session of a client. Here you can see a spike of 200 packetsDiscarded less than a minute into the call. We’ve recently added in testRTC insights that hunt for such cases (as well as for video frame drops), alerting about these scenarios so that the user doesn’t have to drill down and search for them too much – they now appear front and center to the user.
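
If you want to keep an eye on these counters yourself, they are one getStats() call away. A minimal sketch reading the discard-related fields from the inbound-rtp reports (pc is assumed to be your RTCPeerConnection):

```typescript
// Poll the discard-related counters discussed above.
async function sampleDiscards(pc: RTCPeerConnection) {
  const stats = await pc.getStats();
  stats.forEach((report) => {
    if (report.type !== "inbound-rtp") return;
    if (report.kind === "audio") {
      console.log("audio packetsDiscarded:", report.packetsDiscarded ?? 0);
    } else if (report.kind === "video") {
      console.log(
        "video framesReceived:", report.framesReceived ?? 0,
        "framesDecoded:", report.framesDecoded ?? 0,
        "framesDropped:", report.framesDropped ?? 0
      );
    }
  });
}
```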

WebRTC = Real-Time. Timing is everything

WebRTC stands for Web Real Time Communication. The Real Time part of it is critical. It means that things need to happen in… real time… and if they don’t, then the opportunity has already passed. This leads to the eventuality that at times, media packets will need to be discarded simply because they aren’t useful anymore – the opportunity to use them has already passed.

For all that logic to happen, WebRTC uses a protocol called RTP. This protocol is in charge of sending and receiving real time media packets over the network. For that to occur, each RTP packet has two critical fields in its header:

The illustration above is taken from our course Low level WebRTC protocols. In it, you can see these two fields:

  1. Sequence number
  2. Timestamp

The sequence number is just a running counter which can easily be used to order the packets on the receiving end based on the value of the counter. This takes care of any reordering, duplication and packet losses that can occur over modern networks.

The timestamp is used to understand when the media packet was originally generated. It is used when we need to playback this packet. Multiple packets can have the same timestamp for example, when the frame we want to send gets split across packets – something that occurs frequently with video frames.

These two, sequence number and timestamp, are used to deal with the various characteristics of the network. Usually, we deal with the following problems (I am not going to explain them here): jitter, latency, packet loss and reordering.
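
For the curious, both fields sit at fixed offsets in the 12-byte RTP header (RFC 3550), so pulling them out of a raw packet takes only a few lines – say, on a server that receives RTP over a UDP socket. A minimal sketch, not a full RTP parser:

```typescript
// Extract sequence number and timestamp from a raw RTP packet.
// Offsets follow RFC 3550: bytes 2-3 = sequence number, bytes 4-7 = timestamp.
// No handling of header extensions or CSRC lists here.
function parseRtpBasics(packet: Uint8Array) {
  const view = new DataView(packet.buffer, packet.byteOffset, packet.byteLength);
  return {
    sequenceNumber: view.getUint16(2), // wraps around at 65535
    timestamp: view.getUint32(4),      // media clock units, not milliseconds
    ssrc: view.getUint32(8),           // identifies the stream
  };
}
```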

All of this goodness, and more is handled in WebRTC by what is called a jitter buffer. Here’s a short explainer of how a jitter buffer works:

WebRTC discarding incoming audio packets

The above video is our first WebRTC Q&A video. We started off with this because it popped up in discuss-webrtc. The question has since been deleted for some reason, but it was a good one.

Latency

The main reason for discarded audio packets is receiving them too late.

When audio packets are received by WebRTC, it pushes them into its jitter buffer. There, these packets get sorted in their sending order by looking at the sequence number of these packets. When to play them out is then dependent on the timestamp indicated in the packet.

Assuming we already played a newer packet to the user, we will be discarding packets that have a lower (and older) sequence number since their time has already passed.

Lipsync

Audio and video packets get played out together. This is due to a lip synchronization mechanism that WebRTC has, where it tries to match timestamps of audio and video streams to make sure there’s lip synchronization.

Here, if the video advanced too much, then you may need to drop some audio packets instead of playing them out in sync with the video (simply because you can’t sync the two anymore).

Bugs

Here’s another reason why audio packets might end up being discarded by the receiver – bugs in the sender’s implementation…

When the sender doesn’t use the correct timestamp in the packets, or does other “bad” things with the header fields of the RTP packets, you can get to a point when packets get discarded.

👉 Our focus here was on the timestamp because for some arcane reasons, figuring out the timestamp values and their progression in audio (and video) is never a simple task. Audio and video use different frequency clocks when calculating timestamps, done with values that make little sense to those who aren’t dealing with the innards and logic of audio and video encoders. This may easily lead to miscalculations and bugs in timestamp setting
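
A quick worked example of why this trips people up: Opus uses a 48,000Hz RTP clock while video payloads use a 90,000Hz clock, so the same 20 milliseconds looks like a completely different timestamp increment in each. The clock rates below are the common WebRTC defaults – confirm them against the clockRate negotiated in your SDP:

```typescript
// Converting RTP timestamp deltas to milliseconds depends on the codec clock rate.
const OPUS_CLOCK_RATE = 48000;  // Hz, typical for audio in WebRTC
const VIDEO_CLOCK_RATE = 90000; // Hz, used by VP8/VP9/H.264/AV1 over RTP

function timestampDeltaToMs(delta: number, clockRate: number): number {
  return (delta / clockRate) * 1000;
}

// A 20ms audio frame advances the Opus timestamp by 960 units:
console.log(timestampDeltaToMs(960, OPUS_CLOCK_RATE));   // 20
// The same 20ms between video frames is a delta of 1800 units:
console.log(timestampDeltaToMs(1800, VIDEO_CLOCK_RATE)); // 20
```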

WebRTC discarding outgoing audio packets

This doesn’t really happen. Or at least WebRTC ignores this option altogether.

How do we know that? Besides looking at the code, we can look at the fields that we have in getstats for this. While we have discarded frames for incoming and outgoing video and discarded incoming audio packets, we don’t have anything of this kind for outgoing audio packets.

These packets are too small and “insignificant” to cause any dropping of them on the sender side. That’s at least the logic…

WebRTC discarding incoming video frames

Before we go into the reasons, let’s understand how video packets are handled in the media processing pipeline of WebRTC. This is partial at best, and specifically focused on what I am trying to convey here:

The above diagram shows the process that video packets go through once they are received, along with the metrics that get updated due to this processing:

  1. It starts with the video packets being Received from the network
  2. They then get Reordered as they get inserted into the jitter buffer. Here, the jitter buffer may discard packets. In the case of video packets though, don’t expect packetsDiscarded to be updated properly
  3. For video, we now construct frames, taking multiple packets and concatenating them into frames in Construct a frame. This also gives us the ability to count the framesReceived metric
  4. Once we have frames, WebRTC will go ahead and Decode them. Here, we end up counting framesDecoded and framesDropped
  5. Now that we have decoded frames, we can Play them back and indicate that in framesRendered

👉 The exact places where these metrics might be updated are a wee bit more nuanced. Consider the above just me flailing my hands in the air as an explanation.

This also hints that with video, there are multiple places where things can get dropped and discarded along the pipeline.

The above is another screenshot from testRTC. This time, indicating framesDropped. You can see how throughout the session, quite a few frames got dropped by WebRTC.

Let’s find the potential reasons for such dropped frames..

Latency, lip sync & bugs

Just like incoming audio packets, we can get dropped packets and video frames because of much the same reasons.

Latency and lip synchronization may cause the jitter buffer to discard video packets.

And bugs on the sender side can easily cause WebRTC to drop incoming packets here as well.

That said, with video, we have to look at a slightly bigger picture – that of a frame instead of that of a singular packet.

Not all packets of a frame are available

Assume you have a packet dropped. And that packet is part of a frame that is sent over a series of 7 packets. We had 1 packet drop that caused a frame drop, which in turn, caused another 6 packets to be useless to us since we can’t really decode them without the missing packet (we can to some extent, but we usually don’t these days).

Dependency on older frames

With video, unless we’re decoding a keyframe, the frame we need to decode requires a previous frame to be decoded. There are dependencies here since for the most part, we only encode and compress the differences across frames and not the full frame (that would be a keyframe).

What happens then if a frame we need for decoding a fresh frame we just received isn’t available? Here, all packets were received for this new frame, but the frame (and all its packets) will still get dropped. This will be reported in framesDropped.

Not enough CPU

We might not have enough CPU available to decode video. Video is CPU intensive, and if WebRTC understands that it won’t have time to decode the frame, it will simply drop it before decoding it.

But, it might also decode the frame, but then due to CPU issues, miss the time for playout, causing framesRendered not to increment.

WebRTC discarding outgoing video frames

With outgoing media, there is a different dictionary we need to look at in getstats – RTCOutboundRtpStreamStats:

Here, the relevant fields are framesSent and framesEncoded. We should strive to have these two equal to each other.

We know that WebRTC decided to discard frames here if framesEncoded is higher than framesSent. If this happens, then it is bad on a few levels:

  • Encoding video is a resource intensive process. If we took the effort to encode a frame and didn’t send it in the end, then we’ve wasted resources. To me this means something is awfully wrong with the implementation and it isn’t well balanced
  • Video frames are usually dependent on one another. Dropping a frame may lead to future frames that the receiver will be unable to decode without the frame that was dropped
  • Such failures are usually due to network or memory problems. These hint towards a deeper problem that is occurring with the device or with the way your application handles the resources available on the device

On the RTCIceCandidatePairStats dictionary, there’s also the packetsDiscardedOnSend metric, which hints at when and why we would lose and discard packets and frames on the sender side:

Total number of packets for this candidate pair that have been discarded due to socket errors, i.e. a socket error occurred when handing the packets to the socket. This might happen due to various reasons, including full buffer or no available memory.

If you’re dropping video frames on the sender side (framesSent lower than framesEncoded), then in all likelihood the network buffer on the device is full, causing a send failure. Here you should check the resources available on the device – especially memory and CPU – or just understand the network traffic you are dealing with.
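
A compact way to catch this on the sender side is to periodically compare the two counters, along with the candidate-pair discard counter mentioned above. A minimal sketch, with pc assumed to be your RTCPeerConnection:

```typescript
// Detect sender-side frame drops: framesEncoded should stay close to framesSent.
async function checkSenderDrops(pc: RTCPeerConnection) {
  const stats = await pc.getStats();
  stats.forEach((report) => {
    if (report.type === "outbound-rtp" && report.kind === "video") {
      const dropped = (report.framesEncoded ?? 0) - (report.framesSent ?? 0);
      if (dropped > 0) console.warn(`encoded but never sent: ${dropped} frames`);
    }
    if (report.type === "candidate-pair" && (report.packetsDiscardedOnSend ?? 0) > 0) {
      console.warn("packetsDiscardedOnSend:", report.packetsDiscardedOnSend);
    }
  });
}
```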

Maintaining media quality in WebRTC

Media quality in WebRTC is a lot more than just dealing with bitrates or deciding what to do about packet losses. There are many aspects affecting media quality and they all do it dynamically throughout the session and in parallel to each other.

This time, we looked into why WebRTC discards media packets during calls. We’ve seen that there are many reasons for it.

To learn more about media processing and everything else related to WebRTC, check out these services:

The post Reasons for WebRTC to discard media packets appeared first on BlogGeek.me.

WebRTC simulcast – what is it and how is it used

bloggeek - Mon, 05/13/2024 - 12:30

What exactly is simulcast, how is it used in WebRTC and why is it a critical component in any SFU media server.

WebRTC simulcast is one of these things that is commonly used by WebRTC applications that have SFU media servers. If your media server doesn’t use simulcast – make sure to ask why and to understand the answer. And if it does, then you should know what it means exactly. Which is why we’re here now.

In this article, I want to explain what WebRTC simulcast is, when and how it is used AND some new advancements coming to simulcast.

A crash course on video quality and bitrate

Before we begin, we need to understand the concept of bitrate. In a WebRTC video session, the first thing to look at and understand is the bitrate used. Video encoding requires sending a lot of data over the network, and WebRTC tries to match the bitrate it sends to the available bandwidth of the network.

See how I switched between talking about sending data to bitrate to bandwidth? For me, sending data is what we are trying to do. Bitrate is the actual (or target) amount of data we’re aiming for, and bandwidth is what is available for us on the network (assume that bandwidth should always be the same or preferably even higher than the bitrate).

When it comes to audio, we’re mostly working with bitrates that are static and known in advance. They are also low compared to video bitrates, so we just don’t care as much. Which leaves us with video streams.

For video streams:

  • The higher the bitrate, the higher the quality (most of the time)
  • The higher the bitrate, the higher the CPU and memory needed to encode and decode the data

This means that what we want to do is use as little bitrate as possible to get the highest possible quality. We’re trying to reach for the stars first by deciding our desired bitrate, and then we start lowering due to the constraints of the real world. Here are a few reasons for this:

  • Our CPU is over-burdened, so we need to reduce the bitrate we encode or decode
  • The resolution of the video that ends up being displayed is going to be quite small, so there’s no point in investing too much in bitrate. The same logic can be applied to the camera
  • We can’t push through the network the bitrate we want, so we need to reduce it to fit the bandwidth available on the network

👉 If you want to learn more about this topic, then read this article on WebRTC video quality

SFU media servers and group video sessions

For video group sessions in WebRTC, we use SFU media servers. Not always, but most of the time. Why? Because SFUs route media – this ends up costing us less compared to MCUs and in many ways makes things more flexible for us on the viewer’s end.

The challenge though is that SFUs harbor a wee bit more complex logic and smarts than the alternatives and they also delegate a lot of the work to the clients themselves. A good SFU is one that has tight integration and optimization methods with the clients using it. And remember here that the implementation of the browser (Chrome) is optimized for Google Meet’s needs.

Simulcast was “invented” for SFUs. Let’s take a quick example to show what we mean here.

We have 4 people on a call. All connected to an SFU. Each participant is sending his video to the SFU, and the SFU routes that video to the other 3 participants in the call:

If everyone has a decent network, then we’re all happy. But what if D has poor network conditions on his downlink? Here are some assumptions for our scenario:

  • All participants can send 2Mbps of video data towards the SFU
  • A, B and C can receive up to 20Mbps in total on the downlink
  • D can receive only 1Mbps in total on the downlink

If we want everyone to be displayed at the same quality on D’s screen, we need to give each one of them ~330Kbps. That’s instead of 2Mbps. So… do we just reduce the sending bitrate of everyone down to 330Kbps to accommodate for user D? Or do we drop him out of the call altogether?

Notice how we can still send 2Mbps from D to the rest of the participants? That’s just the nature and dynamics of the network and capabilities we have in this example.

Here’s where simulcast comes in…

We’re going to engineer the solution so that each participant is going to create 3 separate bitstreams of their video data: 1150kbps, 600kbps and 250kbps, totalling 2Mbps. The exact numbers are less important than the concept itself, so please go with the flow here.

* Being lazy, I’ve denoted simulcast lines as dotted lines, indicating Simulcast instead of using a better notation like 1150/600/250.

Now that we do that, A, B and C get 1150Kbps video from everyone else and D receives the lower 250Kbps bitstreams (it can’t handle 1150kbps or 600kbps even for only one of the users without dropping one of the other video streams it is receiving altogether). Now each one is getting the most he can handle (or at the very least, closer to that than just lowering everyone down).
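
On the client, this is typically set up when adding the video track: you request multiple encodings with different target bitrates (and usually different resolutions). A hedged sketch using the standard sendEncodings API, with the illustrative bitrates from the example above – real deployments tune these numbers per use case:

```typescript
// Client-side simulcast: publish three encodings of the same camera track.
// The rid values and bitrates are illustrative, matching the example above.
async function publishWithSimulcast(pc: RTCPeerConnection, stream: MediaStream) {
  const [videoTrack] = stream.getVideoTracks();
  pc.addTransceiver(videoTrack, {
    direction: "sendonly",
    streams: [stream],
    sendEncodings: [
      { rid: "l", maxBitrate: 250_000, scaleResolutionDownBy: 4 },  // low layer
      { rid: "m", maxBitrate: 600_000, scaleResolutionDownBy: 2 },  // mid layer
      { rid: "h", maxBitrate: 1_150_000 },                          // high layer
    ],
  });
}
```

The SFU then picks which of these layers to forward to each participant based on what their downlink can take at that moment.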

Media quality: LCD or BAB

I am going to use names that don’t necessarily exist. I am making them up here to explain the nature of simulcast a bit better.

What we’ve seen in the example above is how we move from LCD (Least Common Denominator) to BAB (Best Available Bandwidth).

We started with a naive implementation where the same video bitrate is being sent to everyone. So if there’s a hiccup somewhere along the session, everyone is going to be affected. When D had network issues, everyone had to lower their bitrate from 2Mbps down to 330Kbps… that’s quite a hit to media quality across the board for them all.

That’s our LCD – we’re going to need to accommodate the bitrate to the lowest common denominator of the available bandwidth we have across our meeting participants. And that sucks. Bigtime…

But then we went for BAB – we’re going to try and work with the best available bandwidth that each user is capable of receiving.

How did we do that? By asking the participants (nicely) to generate more than a single bitstream. Each bitstream has a different bitrate here, which gives the SFU the flexibility it needs to decide which bitrate to send to which user.

We use simulcast (or SVC, though not in this article) because there’s no equality in digital communications. Participants have different devices, they connect with different networks and they even see and focus on different things during the same meeting. Simulcast enables us to give different participants a different view of the meeting with varying degrees of quality based on the capabilities of each participant at any given moment AND based on each participant’s preference/desire.

How much flexibility we get and how high a media quality we can reach are determined by the tools and optimizations we end up employing in our implementation. No two SFU implementations with simulcast are exactly alike.

Client side = Simulcast; Server side = Adaptive bitrate

Simulcast as a concept and solution is about a client generating multiple streams so that a media server can use whichever of the streams it needs to send to other participants.

Video streaming had a similar(?) solution known as ABR – Adaptive Bitrate.

Here, the client sends a single media stream to the server and the server is the one that generates any number of streams in different bitrates as it sees fit. This makes sense when there are many viewers (thousands or more) and it can be useful to invest in server resources (these cost money to the vendor providing the service) for the given scenario.

Some use ABR as a term to simply say that the bitrate is variable in nature and adapts to the network. I use it to refer to server side adaptation, where there are multiple video streams generated (in advance or in realtime) and the server simply chooses the best to use per viewer.

For large scale live streaming broadcasts, you can start seeing solutions that incorporate ABR as a technology: the server transcodes the incoming broadcast stream and generates multiple bitrates from it. This can be, and sometimes is, done in parallel to using simulcast from the client as well.

The way for me to compartmentalize and remember this? Simulcast is multiple bitrates generated by the client. ABR is multiple bitrates generated by the server.

👉 You can learn more about ABR vs simulcast or just about simulcast

Advantages and weaknesses of using simulcast in WebRTC

Simulcast is great, but it isn’t a catchall solution.

What simulcast does as a concept is offload some of the work from the media server to the clients. For the clients, this offloading comes at the cost of increased CPU use and higher outgoing bandwidth.

WebRTC simulcast advantages

Here are some great things that simulcast brings with it:

  • Reduces the costs of media servers drastically
    • By not needing to decode and encode media streams, media servers need way less CPU power
    • This means that scaling large deployments becomes easier and more feasible for a lot more use cases
  • Different layouts for each participant
    • Since each user ends up receiving multiple video streams (in different bitrates), the application is free to display a different layout for each participant
    • Other media servers that mix media would need to invest even more CPU to support something like “encoder per participant” to achieve this
  • Display participants’ video and other data in the same space
    • Again, since each participant video is separate from the others, it is simpler to place additional visual items in the same area
    • Mixing all videos into a single stream makes this harder and clunkier

WebRTC simulcast weaknesses

It isn’t all good though. There are weaknesses to the use of WebRTC simulcast:

  • Higher bandwidth use on uplink of users
    • Networks are asymmetric in bandwidth sometimes (think ADSL), and uplinks are usually lower in bandwidth than downlinks
    • Simulcast has a higher uplink requirement (1.3125x to be precise – see the note after this list) than not using simulcast, which means that there are scenarios where using simulcast can actually lower quality if not done properly
  • Higher CPU use for user devices
    • Clients generate 2-3 media streams in different bitrates with simulcast
    • So they “invest” more in the encoding when it comes to CPU use
  • Higher system complexity
    • To really make use of simulcast in WebRTC, there should be a lot more synchronization between client and server code
    • That means higher complexity of the overall system
  • Dependency on client code
    • With other solutions, especially media mixing ones (see MCU), the clients might not even know they are in a group call
    • But when it comes to simulcast and group calling, clients have a huge role to play in making sure calls are of high quality (due to the complexity mentioned above)
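
Where does that 1.3125x figure come from? A plausible back-of-the-envelope derivation, assuming the typical three-layer setup where each lower layer halves the resolution in both dimensions and bitrate scales roughly with pixel count: the extra layers add about 1/4 and 1/16 of the top layer’s bitrate, so the total uplink becomes 1 + 1/4 + 1/16 = 1.3125 times what a single stream would need.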

Who decides on bitrates in WebRTC simulcast

There are usually two to three layers/streams when it comes to WebRTC simulcast. Each with a different bitrate, and from there, also with different resolutions, frame rates and quality. I am focusing on bitrate because for me, that’s the leading factor – everything else gets derived from it.

Which bitrates are we going to support and which ones get sent to whom are the most important questions for any SFU implementation that uses simulcast.

WebRTC by itself can’t make such decisions. It has its own default bitrates for simulcast, but that is all they are – defaults. I wouldn’t recommend that developers use them without understanding their implications (they’re likely not a good fit for the use case you have at hand).

The decision which bitrates to support in simulcast to begin with should take into consideration the possible display layouts of the videos on the viewers’ end. By knowing at what resolutions the videos get displayed we can try to better estimate the desired bitrates to use while using simulcast. Factor into it things like number of videos in the layout (so that you take total bitrates and available bandwidth into consideration), importance of videos on the display (lower priority streams can manage with lower frame rates and resolutions), etc.
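
To illustrate how layout can drive these numbers, here is a hypothetical helper that maps the size a video tile is actually displayed at to a target bitrate. The thresholds and bitrates are made up for illustration and would need tuning for your own layouts and use case.

```typescript
// Hypothetical heuristic: derive a per-stream target bitrate from the height the tile
// is actually rendered at. All numbers here are made up for illustration purposes.
function targetBitrateForTile(displayHeightPx: number): number {
  if (displayHeightPx <= 180) return 150_000;   // thumbnail strip
  if (displayHeightPx <= 360) return 400_000;   // tile in a grid view
  if (displayHeightPx <= 540) return 800_000;   // large tile
  return 1_500_000;                             // main speaker / near full window
}
```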

Here’s the thing though:

  • The client is the one generating and encoding simulcast media streams. It knows best its own CPU and performance capabilities
  • The SFU media server knows best the estimated bandwidth in front of all viewers. It also knows what media streams and at what bitrates it has at its disposal when the time comes to send media to viewers
  • The viewer is the one that knows best how the video gets laid out on the display, along with its own CPU and performance capabilities
  • Oh, and the viewer may change the layout on the display throughout the call, changing what’s best to send to it

The end result is that the application in charge of it all needs to orchestrate the clients and the media servers in order to optimize the session for higher media quality, taking all of this information into consideration. It also means that your application needs to somehow share this out-of-band information with the application session logic so decisions can be made. And this part is proprietary – it isn’t something that has been written down as a standard or even as a best practice.

Keyframes and switching costs in simulcast

With all this goodness, there’s an Achilles’ heel. One that stems from the way Google implemented simulcast in Chrome, but also from the realities of such a solution.

Here’s the thing: Whenever a viewer switches from one simulcast layer to another, there’s a change in the video stream that gets decoded. That change can only occur with a fresh keyframe on the layer that is being switched to, so that the video decoder will be able to decode the stream properly.

When there’s a need to generate a keyframe in simulcast, Chrome will automatically generate a keyframe across all simulcast layers. This isn’t a good thing, but it is what it is.

This also means that SFU media servers need to be conscious about this and not have viewers switch between the different layers all the time, limiting switches to the minimum necessary to maintain high video quality.

Temporal scalability improves WebRTC simulcast

Using temporal scalability alongside simulcast in WebRTC gives us another level of flexibility.

In temporal scalability, the frames of a video stream are encoded in such a way that their dependency chain enables us to decode some of the frames and skip others – something that is usually impossible in video compression. Chrome’s WebRTC implementation supports temporal scalability for VP8 with 2 such “layers”, so if you’re sending 30 frames per second, the SFU media server can decide to send either 30 or 15 FPS to participants (the 15 frames per second stream takes roughly 60% of the bitrate of the 30 frames per second one).

Think of it like multiplying your simulcast streams without an additional cost:

And yes, like everything else, this depends on the codec you use, the browser and the fact that some layers might not have enough frames per second to begin with (for example, the lower layer might only produce 10 or 15 frames per second and then temporal scalability might be useless).

When using simulcast, the level and variety of tools you use will enable you to increase the media quality you offer your users.
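
Here is a minimal sketch of explicitly requesting two temporal layers per simulcast stream through the scalabilityMode field defined in the WebRTC-SVC extension. Browser and codec support varies, so treat this as an illustration rather than a requirement.

```typescript
// Minimal sketch: request two temporal layers ("L1T2") on each simulcast stream.
// scalabilityMode comes from the WebRTC-SVC extension; support varies by browser and codec,
// and older TypeScript lib.dom typings may not know about the field yet.
const temporalSimulcastEncodings = [
  { rid: 'q', scaleResolutionDownBy: 4, maxBitrate: 250_000, scalabilityMode: 'L1T2' },
  { rid: 'h', scaleResolutionDownBy: 2, maxBitrate: 600_000, scalabilityMode: 'L1T2' },
  { rid: 'f', scaleResolutionDownBy: 1, maxBitrate: 1_150_000, scalabilityMode: 'L1T2' }
];
```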

Decisions of highest layer bitrate in WebRTC simulcast

Simulcast in WebRTC gives us another level of flexibility. One that Daily explains nicely in their post where they title their solution as adaptive bitrate.

Let’s assume we’re going for the classic 3 media streams in our WebRTC simulcast solution:

Remember our example from before? Our smallest bitrate (250kbps) and medium sized bitrate (600kbps) are “static” in nature. The video encoder in our browser is going to generate these at those target bitrates each and every time (assuming the CPU allows it and the bandwidth estimate is higher than the sum of these two).

That highest bitrate isn’t really static. At least not by default. It will use as much bitrate as it can, taking into consideration CPU consumption and bandwidth estimation. Left to its own devices, this highest bitrate layer is going to be greedy in its resource consumption. It can also drop below the medium sized bitrate if there isn’t enough CPU or bandwidth available, which defeats the point of it being the highest layer. This all leads us to what we need to do…

Like everything else that WebRTC does in the browser though, it needs to be managed and taken into account by the SFU media server. In this case, deciding what that highest layer bitrate should be at any given point in time.
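
A minimal sketch of how the client side of that management might look: the SFU (or your application logic) decides on a cap for the top layer and delivers it over your own signaling, and the client applies it with getParameters/setParameters. The rid name and the signaling that carries maxKbps are assumptions of this sketch.

```typescript
// Minimal sketch: cap the top simulcast layer at a bitrate the SFU asked for.
// How maxKbps reaches the client (data channel, signaling server, ...) is up to your app.
async function capTopLayer(sender: RTCRtpSender, maxKbps: number): Promise<void> {
  const params = sender.getParameters();
  const top = params.encodings.find((e) => e.rid === 'f');  // 'f' = top layer in our example
  if (!top) return;
  top.maxBitrate = maxKbps * 1000;
  await sender.setParameters(params);
}
```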

Here are some questions to ask yourself when making that decision in your SFU:

  • Do you want the highest layer to have a static bitrate? (hint: no)
  • The participants who need to get this user’s video at the highest quality – what’s the highest bitrate / resolution that they can cope with based on their device and network conditions?
    • Do you need to limit the bitrate of this layer to accommodate for more of these participants?
    • Are you willing to move some of these users to the mid bitrate in order to increase the quality for the other participants who have better conditions?
  • Are you recording this stream?
    • If you are, do you need it at the highest possible quality?
    • Does it mean you can “sacrifice” some of your participants’ viewing quality to get a better recording out of this session?
    • Or is the recording fine with lower bitrates or quality?
  • I’ll finish off with a question about all the layers – which ones are actually used?
    • If some of the layers aren’t being sent to any of the users in the meeting, you can decide to suspend them altogether, practically “changing” the simulcast configuration dynamically for that specific participant. This comes at a cost later on, when you need to switch a viewer to a layer that isn’t currently being generated
    • And if we decided not to send a specific bitrate, does it mean the other bitrates can change as well to accommodate for the extra headroom we now have of bitrate and CPU available?

These questions don’t have a single simple answer. The answer to these will vary based on the strategy you wish to employ, the use case you have, the video layouts you support, the level of your engineers, the media server you start with, …

At the end of the day, your answers are just a set of heuristics, and being able to compare one to another is going to be a challenging task. Make sure you get this right (or right enough) for your needs.

WebRTC and multi-codec simulcast

This is something that we’re just starting to see now.

Up until recently, as a developer, you chose a codec, used simulcast on it and that was about it. The available alternatives were mostly VP8 and H.264. These days? With the introduction of the AV1 video codec, a new idea started cropping up:

  • AV1 is a better codec when it comes to media quality per bitrate compared to the other codecs available
  • But AV1 also takes up more CPU and there’s almost no hardware acceleration available in the market
  • At very low bitrates, using AV1 is possible, since it won’t take up much CPU for that
  • But using it at higher bitrates isn’t possible in most scenarios

That’s how the idea illustrated above came about. Instead of using the same video codec for every layer in a WebRTC simulcast session, why not use multiple codecs? Use AV1 on the lowest bitrate layer and another codec, say VP8 or VP9, on the higher bitrate layers.

This way, the machine’s CPU is capable of encoding the data, and the resulting media quality of the lowest bitrate in there is now higher than it would have been if we used a single codec for simulcast.

At the time of writing, this hasn’t been implemented in a workable fashion just yet.
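
For illustration only, here is a speculative sketch of how such a mixed-codec configuration might eventually be expressed, based on the per-encoding codec field proposed in the W3C WebRTC extensions work. Don’t treat this as something you can rely on across browsers today.

```typescript
// Speculative sketch of mixed-codec simulcast using the proposed per-encoding `codec` field.
// Availability and exact shape may differ by browser; this is an illustration of the idea.
const mixedCodecEncodings = [
  // Lowest layer: AV1, where its CPU cost is affordable and its efficiency matters most
  { rid: 'q', scaleResolutionDownBy: 4, maxBitrate: 250_000,
    codec: { mimeType: 'video/AV1', clockRate: 90000 } },
  // Higher layers: a cheaper codec, e.g. VP9
  { rid: 'h', scaleResolutionDownBy: 2, maxBitrate: 600_000,
    codec: { mimeType: 'video/VP9', clockRate: 90000 } },
  { rid: 'f', scaleResolutionDownBy: 1, maxBitrate: 1_150_000,
    codec: { mimeType: 'video/VP9', clockRate: 90000 } }
];
```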

In a way, this is our future for the coming years, until AV1 becomes popular enough and its use is made practical by commonplace hardware acceleration or better CPUs on devices.

A word about SVC… and where to learn more

There are alternatives to using WebRTC simulcast:

  1. Deciding NOT to use simulcast but still using an SFU, moving towards an LCD (least common denominator) approach to media quality
  2. Not using SFU or media routing, going for mesh or mixing solutions
  3. Replacing simulcast with SVC

SVC stands for Scalable Video Coding. At its heart, it is quite similar to simulcast, just done at the codec level. The video encoder itself generates a bitstream that can be peeled like an onion into multiple bitrates. This gives a solution that is less wasteful than simulcast in bitrate and CPU. The downsides are increased complexity and a lack of hardware encoders and decoders that know how to handle SVC.
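
For comparison with the simulcast sketches above, here is a minimal sketch of requesting SVC instead: a single encoding whose scalabilityMode asks the encoder for 3 spatial and 3 temporal layers. Support depends on the browser and codec (VP9 and AV1 are the usual candidates), so again treat this as an illustration.

```typescript
// Minimal sketch: SVC instead of simulcast: one encoding, layered inside the codec.
// 'L3T3' requests 3 spatial and 3 temporal layers; support depends on browser and codec.
function addSvcVideo(pc: RTCPeerConnection, videoTrack: MediaStreamTrack): void {
  pc.addTransceiver(videoTrack, {
    direction: 'sendonly',
    sendEncodings: [{ scalabilityMode: 'L3T3', maxBitrate: 2_000_000 }]
  });
}
```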

There are video meeting solutions out there that use SVC. They can usually also use WebRTC simulcast – simply because SVC gets added later as an additional tool for further optimization and flexibility.

To learn more about simulcast, SVC and everything related to WebRTC, check out these services:

The post WebRTC simulcast – what is it and how is it used appeared first on BlogGeek.me.

Probing WebRTC Bandwidth Probing – why and how in gcc

webrtchacks - Tue, 05/07/2024 - 14:56

Maximizing stream quality on an imperfect network in real-time is a delicate balancing act. If you send too much information then it will cause congestion and packet loss. If you send too little then your video (or audio) quality will look like garbage. But how much can you send? One of the techniques used to find […]

The post Probing WebRTC Bandwidth Probing – why and how in gcc appeared first on webrtcHacks.

Does WebRTC need a change in governance?

bloggeek - Mon, 04/29/2024 - 12:30

Is it time to change the governance of WebRTC in order to keep it growing and flourishing?

WebRTC started life in 2011 or 2012, depending on when you start counting.

That’s around 13 years now. Time to put things on the table – we might need a change in governance. A different way of thinking about WebRTC.

The concept of WebRTC unbundling

https://www.linkedin.com/feed/update/urn:li:activity:7178742753281929216/

I published the above on LinkedIn last month.

It was a culmination of thoughts I’ve been having for the past several years.

You can pinpoint the first time I made that distinction in 2020 while coining the term WebRTC unbundling.

The notion was that WebRTC is being broken down into smaller pieces and developers are given more leeway and control over what WebRTC does (=a good thing). The result of all this is the ability to differentiate further, but also that the baseline of what WebRTC offers falls farther and farther behind what good media quality means.

There’s the popular open source implementation for WebRTC known as libwebrtc. It is maintained and governed by Google. When Google can enact its strategy by implementing their technologies and IP outside and around libwebrtc instead of inside libwebrtc – why wouldn’t they?

Google runs a business. They have commercial objectives. Handing their differentiating technology to competitors who use libwebrtc to compete with Google would be a poor decision to make. Giving competitors who rely on proprietary technology the source code of libwebrtc to copy from and improve upon without contributing back isn’t a smart move either.

This means cutting edge technology and research are now done outside of libwebrtc (and WebRTC) as much as possible. And the unbundling of WebRTC that started some 4 years ago is now starting to show.

Before we dive into the details

Something I always explain to people new to WebRTC is that WebRTC isn’t a single thing. When someone refers to it, he either thinks of WebRTC as a standard or WebRTC as an open source project:

The above is one of the first slides I’ve ever created about WebRTC.

WebRTC is an open standard. It is being specified by the IETF and W3C. The IETF deals with the network side while the W3C is all about the browser interface (JavaScript APIs).

WebRTC is also viewed as an open source project. That’s actually libwebrtc… the most common and popular implementation of WebRTC which has been created and is maintained by Google.

So remember – when people say WebRTC they can refer to it as either a standard or a package or both at the same time.

What we will do in this article from here on, is jump between these two definitions and see where we are with them today. We will start with the libwebrtc open source library.

The power and importance of libwebrtc

Here’s what I shared in my RTC@Scale 2024 session:

In WebRTC, libwebrtc is the most important library. There are others, but this is by far the most important. Why?

  • It is integrated and used by ALL modern browsers (Chrome, Edge, Firefox and Safari)
  • So when you interact with any browser in your WebRTC application, you end up working against libwebrtc
  • Many mobile applications decided to use libwebrtc natively inside the app. Why? Because it is good enough

The end result is that… well… It is the most important WebRTC library out there.

Before libwebrtc, what we had were lame open source libraries that implemented media engines. All the good options were commercial ones. In fact, libwebrtc (and WebRTC) started with Google acquiring a company called GIPS, which had a great implementation of a commercial media engine that it licensed to companies. I know because the company I worked at licensed it, and the moment GIPS got acquired, we got a flood of requests and questions about finding an alternative.

What WebRTC did was make media engines a commodity of sorts. A new era where high quality media can be had from open source. This also meant that the commercial media engine market died at the same time.

This new development of pushing innovations and improvements in the media engine pipeline outside of libwebrtc is what is going to take that advantage away from open source and from libwebrtc.

More on that, a bit later. But next, why don’t we look at the standardization of WebRTC?

WebRTC standardization efforts

The standardization of WebRTC was split between two different organizations: the W3C and the IETF. They were always semi-aligned.

The IETF was in charge of what goes on in the network: what a WebRTC session looks like on the wire. For WebRTC, it uses stuff that we all considered quite modern back in 2012 (light years ago in tech-time). The IETF Working Group that worked on WebRTC, RTCWEB, has since concluded its work and closed down.

The W3C was/is in charge of the API layer in the browser. The JavaScript interface, mostly revolving around the RTCPeerConnection. And yes, they are trying to wrap this one up and call it a day.

In many ways, what brought WebRTC to what it is today is the W3C – the part focused on the interface in the browser that developers use. That is because the browser is our window to the internet (and in many ways to the world as well). And this window includes the ability to use WebRTC through the APIs specified by the W3C.

The catch here is that the standardization done by the W3C for WebRTC is driven almost solely by the browser vendors themselves. There aren’t any (or not enough) web developers sitting at the table. The ones who need and end up using the WebRTC APIs have no real voice in the WebRTC spec itself. The cooks in the kitchen are far removed from the restaurant diners who need to enjoy their dish.

And meanwhile, the cooks have different opinions and directions as well:

  • Chrome protects its interests, focusing mainly on Google Meet’s requirements. This is what drives many of the contributions Google has been making to the W3C on the spec
  • The rest? Mostly trying to block any forward movement so they won’t have to add changes to their own browser implementation. This is especially true for Safari and Firefox

So what do we end up with?

Google, trying to add things it needs to the WebRTC specification to solve their product needs

Other browser vendors, trying to delay Google a bit..

And developers who aren’t part of the game at all and are happy with the leftovers from what Google needs.

Vendors differentiating outside of (lib)WebRTC

The whole WebRTC ecosystem is enjoying the work of Google in libWebRTC. They do so in various ways:

  1. Directly by taking libWebRTC codebase, making it their own and compiling it into native applications
  2. Indirectly by having WebRTC run inside web browsers, and figuring out any bugs and issues they bump into
  3. By carving out bits and pieces of it to use in their own app (like taking the echo canceller or other algorithms from libWebRTC and using them elsewhere)

The first alternative is the most interesting one here.

When vendors do that, they usually end up forking the original codebase and modifying bits and pieces of it to fit their own needs. These might be minor bug fixes for edge cases or they may be full blown optimizations (like what Meta has done with their new MLow codec and Beryl echo cancellation algorithm – there were other areas as well. You’ll find them in the RTC@Scale event summary).

Video API vendors are no different. They usually take libWebRTC and compile it as part of their own mobile SDKs. Again, with likely changes in the code. They also get to see and work with a multitude of customers, each with its own unique requirements. In a way, they see a LOT of the market. Having these insights and understanding is great. Passing them to the libWebRTC team can be even better. These Video API vendors can be a great aggregator of customer insights…

Then there’s the fact that not many end up contributing back what they’ve done to libWebRTC. And even that comes with a whole set of reasons why:

  1. Assuming (rightly or wrongly) that these changes made are unique, proprietary, a competitive advantage – you name it
  2. Being afraid of the legal implications of doing so (exposure or whatever)
  3. Too much fuss to do

If you ask me, (1) is just bad manners – you get something for free from another vendor you might even be competing directly with. The least you can do is to share and contribute back, so that you have a level playing field at that low level of the stack.

Looking at (2) means someone needs to sit and talk to the legal team at your company. On one hand, you make use of open source and on the other you’re not giving back anything. I am not even sure if that reduces your exposure in any way. I am not a lawyer, but I do see the problem in this free lunch approach of the industry.

That third one is a big issue. And partly due to the fault of Google. They don’t make it easy enough to contribute back to the codebase. I can easily understand the reasoning – with billions of Chrome installations, having a no-name developer with a weird github alias from *somewhere* in the globe trying to push a piece of arcane/mundane code into libWebRTC that ends up in Chrome is darn dangerous. But the current situation seems almost insufferable.

I just don’t know who’s to blame here – the companies that are too lazy to contribute back and jump through the hoops required to get there, or Google, for adding more blockers and hoops along the way.

Is standardization moving to the next shiny thing(s)?

There are two separate routes in web browsers that are shaping up to displace WebRTC: WebTransport + WebCodecs + WebAssembly, and MoQ (Media over QUIC).

WebTransport + WebCodecs + WebAssembly

This trio is the unbundling of WebRTC. Taking it and breaking it into smaller components that applications can’t really implement on their own and need the browser to provide – these are WebTransport and WebCodecs. And then adding the glue so that developers can cobble together the missing pieces however they see fit – that’s the WebAssembly piece.

Vendors are already using WebAssembly to intervene with the WebRTC media processing pipeline to differentiate and improve on the user experience in various ways (noise suppression and background replacement being the main examples).

The next step is to skip WebRTC altogether:

  • Use WebTransport for sending media over the network
  • WebCodecs are there to encode and decode audio and video efficiently
  • WebAssembly for the rest (packet loss, retransmission logic, echo cancellation, etc)

Don’t believe me? Zoom is doing almost that. They are using the WebRTC data channel as transport, and use WebCodecs and WebAssembly for the rest of it. Switching to WebTransport will likely happen for Zoom once it is ubiquitous across browsers (and offers solid performance compared to the data channel in WebRTC).
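
To make the direction tangible, here is a heavily simplified sketch of that unbundled pipeline: grab frames from a camera track, encode them with WebCodecs, and push the encoded chunks over WebTransport datagrams. Everything WebRTC normally handles for you (packetization to fit datagram limits, retransmissions, jitter buffers, congestion control, echo cancellation) is omitted and would have to live in your own code, likely WebAssembly. The url parameter and the missing receiver logic are assumptions of this sketch, and MediaStreamTrackProcessor is currently Chromium-only.

```typescript
// Heavily simplified sketch of an "unbundled" pipeline: WebCodecs for encoding,
// WebTransport for sending. All the real-time machinery WebRTC provides for free
// (pacing, congestion control, retransmissions, jitter buffers) is left out.
async function sendOverWebTransport(track: MediaStreamTrack, url: string): Promise<void> {
  const transport = new WebTransport(url);        // your media server endpoint (assumption)
  await transport.ready;
  const datagramWriter = transport.datagrams.writable.getWriter();

  const encoder = new VideoEncoder({
    output: (chunk) => {
      // Real code would packetize chunks to respect datagram size limits; skipped here.
      const payload = new Uint8Array(chunk.byteLength);
      chunk.copyTo(payload);
      datagramWriter.write(payload);
    },
    error: (e) => console.error('encoder error', e)
  });
  encoder.configure({ codec: 'vp8', width: 1280, height: 720, bitrate: 1_000_000 });

  // Pull raw frames from the camera track and feed them to the encoder.
  const reader = new MediaStreamTrackProcessor({ track }).readable.getReader();
  for (;;) {
    const { value: frame, done } = await reader.read();
    if (done || !frame) break;
    encoder.encode(frame);
    frame.close();
  }
}
```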

A new shiny toy for developers? Definitely.

Where will we see it first? In live streaming. I’ve written about it when discussing WHIP and WHEP, calling it the 3 horsemen.

MoQ (Media over QUIC)

The next big thing is likely to be MoQ.

WebTransport makes use of QUIC as its transport. Around 5 years ago, I thought that QUIC could be a really good solution to replace WebRTC’s transport altogether. That effort now has an official name – MoQ.

MoQ is about doing to RTP what WebTransport does to HTTP.

WebTransport takes QUIC and uses it as a modernized transport for web browsers, replacing HTTP and WebSocket.

MoQ takes QUIC and uses it as modernized media streaming for web browsers, replacing HLS and DASH.

There’s an overview for MoQ on the IETF website. Here’s the best part of it, directly from this post:

It includes a single protocol for sending and receiving high-quality media (including audio, video, and timed metadata, such as closed captions and cue points) in a way that provides ultra low latency for the end user.

If that sounds like WebRTC to you, then you’re almost correct. It is why many are going to see it (and use it) as a WebRTC alternative once it gets standardized and implemented by web browsers.

The main differences?

  • The timed metadata piece, which WebRTC sorely missed for many years
  • No P2P capability. Sacrificed for improved NAT traversal (by relying on QUIC and servers)
  • The definition of media relays (servers) along with their operation

While this is targeted at live streaming services, this can easily trickle into video conferencing.

Just like WebRTC was designed and built for video conferencing, but later adopted by live streaming services – the opposite can and is likely to happen: MoQ is being designed and built first and foremost for live streaming and it will be adopted and used by video conferencing services as well.

Would Google remain interested enough in WebRTC? Maybe it would venture to use WebTransport + WebCodecs + WebAssembly instead. Or just go for MoQ and consolidate its protocols across services (think YouTube + Google Meet). What would happen to WebRTC if that were to take place?

Who contributes to libwebrtc?

Here’s what I showed at RTC@Scale:

Let’s unpack this a bit.

The bars show the number of commits on a yearly basis. We see the numbers dwindling and winding down just as the use of WebRTC skyrockets (the red line) due to the pandemic. 2024 is likely to be even lower in terms of commits.

The greenish colored bars are Google’s contributions to libwebrtc. The blue? The rest of the industry that makes money using WebRTC – not all of them mind you – just those that contribute back (there are many others who never contribute back). In effect, Google has been sponsoring everyone else’s use of libwebrtc, which can’t make Google happy.

Why is that?

Why do so few contributions from outside of Google end up in libwebrtc?

I guess there are two reasons here:

  1. Google doesn’t make it easy to contribute. In the end, libwebrtc gets embedded into Chrome which goes to billions of users every month with a new release. Not knowing what got integrated (malware or patent-encumbered code for example) is a real issue. Having insecure or not thoroughly tested code is also unacceptable at this scale
  2. Laziness of those who use libwebrtc but never contribute back
    • In large corporations, the developers need to “fight” with the legal teams to contribute code back (the excuses are usually around liability and protecting IP)
    • Smaller companies can’t be bothered with the friction that Google adds to the process – or just don’t want to spend the needed time
    • Not wanting to make your competitor’s product better by contributing
    • Struggling with the server side parts of WebRTC that in the end are quite tightly coupled with libWebRTC on the client. Google Meet undoubtedly delivers the best experience because the client side is designed for its needs

Many developers the world over enjoy the fruits of libwebrtc, but most aren’t willing to contribute back. This is true for both individual engineers as well as companies. Google even gave up on being frustrated with this and resorts to solving their own issues these days. They probably have a very good understanding of the overall usage in Chrome where Google Meet remains the dominant user.

On the one hand, Google isn’t making this easy. On the other hand, companies are lazy or protective of their own forked libwebrtc code to never contribute it back.

Can we save libwebrtc & WebRTC?

It is time to rethink WebRTC’s future.

For libwebrtc, we might need some other form of governance. Have more of the bigger vendors pitch in with the engineering effort itself. Meta, Microsoft and a few others who rely heavily on libwebrtc need to step up to that responsibility (the W3C Working Group is not where this kind of discussion happens) while Google needs to let go a bit. I have no clue how things are done in the world of Linux and I am sure libwebrtc isn’t big enough or important enough for that. But I do believe that something can be done here. At the end of the day it will require taking some of the maintenance cost off Google.

Just like third party libraries such as libopus and dav1d (an AV1 decoder) are embedded into Chrome as part of libwebrtc, there is no real reason why libwebrtc itself can’t be treated the same way.

For WebRTC standardization, it is time to ask – is it finished, or are there more things needed?

Do we want to progress and modernize it further or are we happy with it as is?

Should we “migrate” it towards MoQ or a similar approach?

In the W3C, do we need to get more people involved? The web developers themselves maybe? They need to be listened to and made part of the process.

Will the above happen? Likely not.

The post Does WebRTC need a change in governance? appeared first on BlogGeek.me.
