bloggeek
The leading authority on WebRTC

WebRTC simulcast and ABR – two sides of the same coin

Mon, 05/20/2019 - 12:00

WebRTC simulcast and ABR are all about offering choice to “viewers”.

I’ve been dealing recently with more clients who are looking to create live broadcast experiences. Solutions where one or more users have to broadcast their streams from a single session to a large audience. Large is a somewhat loose target, stretching anywhere from 100 to 1,000,000 viewers. And yes, most of these clients want viewers to have instantaneous access to the stream(s) – a lag of 1-2 seconds at most, as opposed to the 10 or more seconds of latency you get from HLS.

Simulcast, ABR – need a quick reference to understand their similarities and differences? Download the free cheatsheet:

Compare simulcast to ABR

What I started seeing more and more recently are solutions that make use of ABR. What’s ABR? It is just like simulcast, but… different.

What’s Simulcast?

Simulcast is a mechanism in WebRTC by which a device/client/user will be sending a video stream that contains multiple bitrates in it. I explained it a bit in my WebRTC Multiparty Architectures last month.

With simulcast, a WebRTC client will generate these multiple bitrates, where each offers a different video quality – the higher the bitrate the higher the quality.

These video streams are then received by the SFU, and the SFU can pick and choose which stream to send to which participant/viewer. This decision is usually made based on the available bandwidth, but it can (and should) make use of a lot of other factors as well – display size and video layout on the viewer device, CPU utilization of the viewer, etc.

The great thing about simulcast? The SFU doesn’t work too hard. It just selects what to send where.
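To make this concrete, here’s a minimal sketch of how a publishing client could enable simulcast using the standard WebRTC sendEncodings API. The rid names, bitrates and scaling factors are illustrative assumptions, not values mandated by any particular SFU:

```typescript
// Minimal sketch: a publishing client offering three simulcast layers.
// The rid names, bitrates and scaling factors are illustrative only – real
// values depend on your SFU and use case.
async function publishWithSimulcast(pc: RTCPeerConnection): Promise<void> {
  const media = await navigator.mediaDevices.getUserMedia({ video: true });
  const [track] = media.getVideoTracks();

  pc.addTransceiver(track, {
    direction: "sendonly",
    sendEncodings: [
      { rid: "q", scaleResolutionDownBy: 4, maxBitrate: 150_000 }, // low quality
      { rid: "h", scaleResolutionDownBy: 2, maxBitrate: 500_000 }, // medium quality
      { rid: "f", maxBitrate: 1_500_000 },                         // full quality
    ],
  });

  // The rest is the usual offer/answer dance with your signaling server.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
}
```

The SFU then simply forwards whichever of these layers it deems suitable for each viewer.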

What’s ABR?

ABR stands for Adaptive Bitrate Streaming. Don’t ask me why R and not S in the acronym – probably because they didn’t want to mix this up with car brakes. Anyways, ABR comes from streaming, long before WebRTC was introduced to our lives.

With streaming, you’ve got a user watching a recorded (or “live”) video online. The server then streams that media towards the user. What happens if the available bitrate from the server to the user is low? Buffering.

Streaming technology uses TCP, which in turn uses retransmissions. It isn’t designed for real-time, and well… we want to SEE the content and would rather wait a bit than not see it at all.

Today, with 1080p and 4K resolutions, streaming at high quality requires lots and lots of bandwidth. If the network isn’t capable, would users rather wait and be buffered or would it be better to just lower the quality?

Most prefer lowering the quality.

But how do you do that with “static” content? A pre-recorded video file is what it is.

You use ABR:

With ABR, you segment bandwidth into ranges. Each range will be receiving a different media stream. Each such stream has a different bitrate.

Say you have a media stream of 300kbps – you define its bandwidth range as 300-500kbps. Why? Because from 500kbps and up there’s another media stream available.

These media streams all contain the same content, just in different bitrates, denoting different quality levels. What you try doing is sending the highest quality range to each viewer without getting into that dreaded buffering state. Since the available bitrate is dynamic in nature (as the illustration above shows), you can end up switching across media streams based on the bitrate available to the viewer at any given point in time. That’s why they call it adaptive.
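Here’s a minimal sketch of that selection logic, assuming a predefined bitrate ladder. The renditions and bitrates are illustrative, not a standard ladder:

```typescript
// Minimal sketch of ABR rendition selection over a predefined ladder.
// The bitrates here are illustrative; real ladders are tuned per service.
interface Rendition {
  name: string;
  bitrateKbps: number; // bitrate of the encoded stream itself
}

const ladder: Rendition[] = [
  { name: "low",    bitrateKbps: 300 },
  { name: "medium", bitrateKbps: 500 },
  { name: "high",   bitrateKbps: 1200 },
];

// Pick the highest rendition whose bitrate fits under the viewer's
// estimated available bandwidth, falling back to the lowest one.
function pickRendition(availableKbps: number): Rendition {
  const affordable = ladder.filter(r => r.bitrateKbps <= availableKbps);
  return affordable.length > 0 ? affordable[affordable.length - 1] : ladder[0];
}

// Re-evaluated whenever the bandwidth estimate changes – that is the
// "adaptive" part of adaptive bitrate.
console.log(pickRendition(420).name); // "low"    (the 300-500kbps range)
console.log(pickRendition(800).name); // "medium"
```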

And it sounds rather similar to simulcast… just on the server side, as ABR is something a server generates – the original media gets to a server, which creates multiple output streams from it in different bitrates, to be used when needed.

The ABR challenge for WebRTC media servers

Recently, I’ve seen more discussions and solutions looking at using ABR and similar techniques with WebRTC. Mainly to scale a session beyond 10k viewers and to support low latency broadcasting in CDNs.

Why these two areas?

  1. Because beyond 10k viewers, simulcast isn’t enough anymore. Simulcast today supports up to 3 media streams, and the variety you get with 10k viewers is higher than that. There are a few other reasons as well, but that’s for another time.
  2. Because CDNs and video streaming have been comfortable with ABR for years now, so their shift towards WebRTC or low latency means they are looking for much the same technologies and mechanisms they already know.

But here’s the problem.

We’ve been doing SFUs with WebRTC for most of the time that WebRTC has existed. Around 7-8 years. We’re all quite comfortable now with the concept of paying for bandwidth and not eating too much CPU – which is the performance profile of an SFU.

Simulcast fits right into that philosophy – the one creating the alternate streams is the client and not the SFU – it is sending more media towards the SFU, which now has more options. The client pays the price of higher bitrates and higher CPU use.

ABR places that burden on the server, which needs to generate the additional alternate streams on its own, and it needs to do so in real time – there’s no offline pre-processing step for generating these streams from a pre-existing media file as there is with CDNs. This means that SFUs now need to think about CPU loads, muck around with transcoding, experiment with GPU acceleration – the works. Things they haven’t done so far.

Is this in our future? Sure it is. For some, it is already their present.

Simulcast, ABR – need a quick reference to understand their similarities and differences? Download the free cheatsheet:

Compare simulcast to ABR

What’s next?

WebRTC is growing and evolving. The ecosystem around it is becoming much richer as time goes by. Today, you can find different media servers of different types and characteristics, and the solutions available are quite different from one another.

If you are planning on developing your own application using a media server – make sure you pick a media server that fits your use case.

The post WebRTC simulcast and ABR – two sides of the same coin appeared first on BlogGeek.me.

Google I/O 2019 was all about AI, Privacy and Accessibility

Mon, 05/13/2019 - 12:00

At Google I/O 2019, the advances Google made in AI and machine learning were put to use for improving privacy and accessibility.

I’ve attended Google I/O in person only once. It was in 2014. I’ve been following this event from afar ever since, making it a point to watch the keynote each year, trying to figure out where Google is headed – and how that will affect the industry.

This weekend I spent some time going over the Google I/O 2019 keynote. If you haven’t seen it, you can watch it over on YouTube – I’ve embedded it here as well.

The main theme of Google I/O 2019

Here’s how I ended my review about Google I/O 2018:

Where are we headed?

That’s the big question I guess.

More machine learning and AI. Expect Google I/O 2019 to be on the same theme.

If you don’t have it in your roadmap, time to see how to fit it in.

In many ways, this can easily be the end of this article as well – the tl;dr version.

Google got to the heart of their keynote only at around the 36-minute mark. Sundar Pichai, CEO of Google, talked about the “For Everyone” theme of this event and where Google is headed. For Everyone – not only for the rich (Apple?) or the people in developed countries, but For Everyone.

The first thing he talked about in this For Everyone context? AI:

From there, everything Google does is about how the AI research work and breakthroughs that they are doing at their scale can fit into the direction they want to take.

This year, that direction was defined by the words privacy, security and accessibility.

Privacy because they are being scrutinized over their data collection, which is directly linked to their business model. But more so because of a recent breakthrough that enables them to run accurate speech to text on devices (more on that later).

Security because of the growing number of hacking and malware attacks we hear about all the time. But more so because the work Google has put into Android from all aspects is placing them ahead of the competition (think Apple) based on third party reports (Gartner in this case).

Interestingly, Apple is attacking Google around both privacy and security.

Accessibility because that’s the next billion users. The bigger market. The way to grow by reaching ever larger audiences. But also because it fits well with that breakthrough in speech to text and with machine learning as a whole. And somewhat because of diversity and inclusion which are big words and concepts in tech and silicon valley these days (and you need to appease the crowds and your own employees). And also because it films well and it really does benefit the world and people – though that’s secondary for companies.

The big reveal for me at Google I/O 2019? Definitely its advances in speech analytics by getting speech to text minimized enough to fit into a mobile device. It was the main pillar of this show and for things to come in the future if you ask me.

A lot of the AI innovations Google is talking about are around real time communications. Check out the recent report I’ve written with Chad Hart on the subject:

AI in RTC report

Event Timeline

I wanted to understand what is important to Google this year, so I took a rough timeline of the event, breaking it down into the minutes spent on each topic. In each and every topic discussed, machine learning and AI were apparent.

Time spent | Topic
10 min | Search; introduction of new feature(s)
8 min | Google Lens; introduction of new feature(s) – related to speech to text
16 min | Google assistant (Duplex on the web, assistant, driving mode)
19 min | For Everyone (AI, bias, privacy+security, accessibility)
14 min | Android Q enhancements and innovations (software)
9 min | Nest (home)
9 min | Pixel (smartphone hardware)
16 min | Google AI

Let’s put this in perspective: out of roughly 100 minutes, 51 were spent directly on AI (assistant, For Everyone and AI) and the rest of the time was spent on… AI as well, though indirectly.

Watching the event, I must say it got me thinking of my time at the university. I had a neighbor at the dorms who was a professional juggler. Maybe not professional, but he did get paid for juggling from time to time. He was able to juggle 5 torches or clubs, 5 apples (while eating one) and anywhere between 7-11 balls (I didn’t keep track).

One evening he comes storming into our room, asking us all to watch a new trick he had been working on and just perfected. We all looked. And found it boring. Not because it wasn’t hard or impressive, but because we all knew that this was most definitely within his comfort zone and the things he can do. Funny thing is – he visited us here in Israel a few weeks back. My wife asked him if he juggles anymore. He said a bit, and said his kids aren’t impressed. How could they be, when it is obvious to them that he can?

Anyways, there’s no wow factor in what Google is doing with machine learning anymore. It is obvious that each year, in every Google I/O event, some new innovation around this topic will be introduced.

This time, it was all about voice and text.

Time to dive into what went on @ Google I/O 2019 keynote.

Speech to text on device

We had a glimpse of this piece of technology late last year when Google introduced call screening to its Pixel 3 devices. This capability allows people to let the Pixel answer calls on their behalf, see what people are saying using live transcription and decide how to act.

This was all done on device. At Google I/O 2019, this technology was just added across the board on Android 10 to anything and everything.

On stage, the explanation given was that the model used for speech to text in the cloud is 2.5GB in size, and Google was able to squeeze it down to 80MB, which means it can run on devices. It was not indicated if this works for any language other than English, which probably means this is an English-only capability for now.

What does Google gain from this capability?

  1. Faster speech to text. There’s no need to send audio to the cloud and get text back from it
  2. Ability to run it with no network or with poor network conditions
  3. Privacy of what’s being said

For now, Google will be rolling this out to Android devices and not just Google Pixel devices. No mention of if or when this gets to iOS devices.

What have they done with it?

  • Made the Google assistant more responsive (due to faster speech to text)
  • Created system-wide automatic captioning for everything that runs on Android. Anywhere, on any app

Search

The origins of Google came from Search, and Google decided to start the keynote with search.

Nothing super interesting there in the announcements made, besides the continuous improvements. What was showcased was news and podcasts.

Google’s approach to handling fake news and news coverage is now coming to Search directly. Podcasts are now searchable and more accessible directly from Search.

Other than that?

A new shiny object – the ability to show 3D models in search results and in augmented reality.

Nice, but not earth shattering. At least not yet.

Google Lens

After Search, Google Lens was showcased.

The main theme around it? The ability to capture text in real time on images and do stuff with it. Usually either text to speech or translation.

In the screenshot above, Google Lens marks the recommended dishes off a menu. While nice, this probably requires each and every such feature to be baked into Lens, much like new actions need to be baked into the Google Assistant (or skills into Amazon Alexa).

This falls nicely into the For Everyone / Accessibility theme of the keynote. Aparna Chennapragada, Head of Product for Lens, had the following to say (after an emotional video of a woman who can’t read using the new Lens):

“The power to read is the power to buy a train ticket. To shop in a store. To follow the news. It is the power to get things done. So we want to make this feature to be as accessible to as many people as possible, so it already works in a dozen of languages.”

It actually is. People can’t really be part of our world without the power to read.

It is also the only announcement I remember where the number of languages covered was mentioned (which is why I believe speech to text on device is English only).

Google made the case here and in almost every part of the keynote in favor of using AI for the greater good – for accessibility and inclusion.

Google assistant

Google assistant had its share of the keynote with 4 main announcements:

Duplex on the web is a smarter auto fill feature for web forms.

Next generation Assistant is faster and smarter than its predecessor. There were two main aspects of it that were really interesting to me:

  1. It is “10 times faster”, most probably due to speech to text on the phone which doesn’t necessitate the cloud for many tasks
  2. It works across tabs and apps. A demo was shown, where a woman instructed the Assistant to search for a photo, picked one out and then asked the phone to send it in an ongoing chat conversation just by saying “send it to Justin”

Every year Google seems to be making Assistant more conversational, able to handle more intents and actions – and understand a lot more of the context necessary for complex tasks.

For Everyone

I’ve written about For Everyone earlier in this article.

I want to cover two more aspects of it: Federated Learning and Project Euphonia.

Federated Learning

Machine learning requires tons of data. The more data the better the resulting model is at predicting new inputs. Google is often criticized for collecting that data, but it needs it not only for monetization but also a lot for improving its AI models.

Enter federated learning, a way to learn a bit at the edge of the network, directly inside the devices, and share what gets learned in a secure fashion with the central model that is being created in the cloud.

This was so important for Google to show and explain that Sundar Pichai himself gave that spiel instead of leaving it to the final part of the keynote, where Google AI was discussed almost separately.

At Google, this feels like an initiative that is only getting started, with the first public implementation embedded in Google’s predictive keyboard on Android, which uses it to learn new words and trends.

Project Euphonia

Project Euphonia was also introduced here. This project is about enhancing speech recognition models to handle hard-to-understand speech.

Here Google stressed the work and effort it is putting into collecting recorded phrases from people with such speech impairments. The main challenge here is the creation or improvement of a model more than anything else.

Android Q

Or Android 10 – pick your name for it.

This one was more than anything else a shopping list of features.

Statistics were given at the beginning:

  • 2.5 billion active devices
  • Over 180 device makers

Live captions were again explained and introduced, along with on-device learning capabilities. AI at its best, baked into the OS itself.

For some reason, the Android Q segment wasn’t followed by the Pixel one but rather by the Nest one.

Nest (helpful home)

Google rebranded all of its smart home devices under Nest.

While at it, they decided to try and differentiate from the rest of the pack by coining their solution the “helpful home” as opposed to the “smart home”.

As with everything else, AI and the assistant took center stage, as well as a new device, the Nest Hub Max, which is Google’s answer to the Facebook Portal.

The solution for video calling on the Nest Hub Max was built around Google Duo (obviously), with an auto zoom ability similar to what Facebook Portal has, at least on paper – it wasn’t really demoed or showcased on stage.

The reason no demo was really given is that this device will ship “later this summer”, which means it wasn’t really ready for prime time – or Google just didn’t want to spend more precious minutes on it during the keynote.

Interestingly, Google Duo’s recent addition of group video calling wasn’t mentioned throughout the keynote at all.

Pixel (phone)

The Pixel section of the keynote showcased new Pixel phone devices, the Pixel 3a and 3a XL. These are low cost devices, which try to make do with a lower hardware spec by offering better software and AI capabilities. To drive that point home, Google had this slide to show:

Google is continuing with its investment in computational photography, and if the results are as good as this example, I am sold.

The other nice feature shown was call screening:

The neat thing is that your phone can act as your personal secretary, checking for you who’s calling and why, and also conversing with the caller based on your instructions. This obviously makes use of the same innovations in Android around speech to text and smart reply.

My current phone is a Xiaomi Mi A1, an Android One device. My next one may well be the Pixel 3a – at $399, it will probably be the best phone on the market at that price point.

Google AI

The last section of the keynote was given by Jeff Dean, head of Google.ai. He was also the one closing the keynote, instead of handing this back to Sundar Pichai. I found that nuance interesting.

In his part he discussed the advancements in natural language understanding (NLU) at Google, the growth of TensorFlow, where Google is putting its efforts in healthcare (this time it was oncology and lung cancer), as well as the AI for Social Good initiative, where flood forecasting was explained.

That finishing touch of Google AI in the keynote, taking 16 full minutes (about 15% of the time), shows that Google was aiming to impress and to focus on the good they are doing in the world, trying to reduce the growing fear factor around their power and data collection capabilities.

It was impressive…

Next year?

More of the same is my guess.

Google will need to find some new innovation to build their event around. Speech to text on device is great, especially with the many use cases it enables and the privacy angle to it. Not sure how they’d top that next year.

What’s certain is that AI and privacy will still be at the forefront for Google during 2019 and well into 2020.

A lot of the AI innovations Google is talking about are around real time communications. Check out the recent report I’ve written with Chad Hart on the subject:

AI in RTC report

The post Google I/O 2019 was all about AI, Privacy and Accessibility appeared first on BlogGeek.me.

Google CallJoy & the age of automation in communications

Mon, 05/06/2019 - 12:00

ML/AI is coming to communications really fast. It is going to manifest as automation in communications, but also in other ways.

Me? I wanted to talk about automation and communications. But then Google released CallJoy, which was… automation and communications. And it shows where we’re headed quite clearly with a service that is butt simple, and yet… Google seems to be the first at it, at least when it comes to aiming for simplicity and a powerful MVP. Here’s where I took this article –

Ever since Google launched Duplex at I/O 2018 I’ve been wondering what’s next. Google came out with a new service called CallJoy – a kind of a voice assistant/agent for small businesses. Before I go into the age of automation and communications, let’s try to find out where machine learning and artificial intelligence can be found in CallJoy.

Interested in AI in communications? Tomorrow I’ll be hosting a webinar with Chad Hart on this topic – join us:

Register to the webinar

CallJoy and AI

What does CallJoy do exactly?

From the CallJoy website, it looks like the following takes place: you subscribe to the service, pick a local phone number to use and you’re good to go.

When people call your business, they get greeted by a message (“this call is being recorded for whatever purposes” kind of a thing). Next, it can “share” information such as business hours and ask if the caller wants to do stuff over a web link instead of talking to a human. If a web link is what you want (think a “yes please” answer to whatever you hear on the phone when you call), then you’ll get an SMS with a URL. Otherwise, you’ll just get routed to the business’ “real” phone number to be handled by a human. All calls get recorded.

What machine learning aspects does this service use?

#1 – Block unwanted spam calls

Incoming spam calls can really harass small businesses. Being able to get fewer of these is always a blessing. It is also becoming a big issue in the US, one that brings a lot of attention and some attempts at solving it by carriers as well as other vendors.

I am not sure what blocking Google does here and if it makes direct use of machine learning or not – it certainly can. The fact that all calls get handled by a chatbot means that there’s some kind of a “gating” process that a spam call needs to pass first. This in itself blocks at least some of the spam calls.

#2 – Call deflection, using a voice bot

Call deflection means taking calls and deflecting them – having automation or self service handle the calls instead of getting them to human agents. In the case of CallJoy, a call comes in, a message plays out to the caller (“this call is being recorded”) and the caller is asked if he wants to do something over a text message:

If the user is happy with that, then an SMS gets sent to the caller and he can continue from there.

There’s a voicebot here that handles the caller’s answer (yes, yep, yes please, sure, …) and makes that decision. Nothing too fancy.

This part was probably implemented by using Google’s Dialogflow.
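To make the flow concrete, here’s a minimal sketch of what such a single-step deflection bot could look like. This is not how CallJoy is actually built – the telephony helpers (playPrompt, listenForAnswer, sendSms, forwardCall) are hypothetical placeholders for whatever voice platform and NLU service would handle the actual call:

```typescript
// Hypothetical sketch of a single-step call deflection flow.
// None of these helpers are real CallJoy or Dialogflow APIs – they stand in
// for whatever telephony/NLU platform handles the actual call.
interface CallContext {
  callerNumber: string;
  businessNumber: string;
  orderingUrl: string;
}

const YES_UTTERANCES = ["yes", "yep", "yes please", "sure", "ok"];

async function handleIncomingCall(
  call: CallContext,
  playPrompt: (text: string) => Promise<void>,
  listenForAnswer: () => Promise<string>,
  sendSms: (to: string, body: string) => Promise<void>,
  forwardCall: (to: string) => Promise<void>,
): Promise<void> {
  // 1. Disclose recording, as CallJoy does on every call.
  await playPrompt("This call is being recorded for quality purposes.");

  // 2. Offer the self-service option.
  await playPrompt("Would you like a text message with a link to order online?");
  const answer = (await listenForAnswer()).toLowerCase();

  if (YES_UTTERANCES.some(u => answer.includes(u))) {
    // 3a. Deflect: send the SMS with the ordering link.
    await sendSms(call.callerNumber, `Order online here: ${call.orderingUrl}`);
  } else {
    // 3b. Otherwise, route to a human at the business' real number.
    await forwardCall(call.businessNumber);
  }
}
```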

Today, the focus for the call deflection part is on restaurants and order-taking. It can be used for other scenarios, but that’s the one Google is starting with:

Notice how there’s “LEARN MORE” only on restaurants? All other verticals in the examples on the CallJoy website make use of the rest of CallJoy’s capabilities. Restaurants is the only one where call deflection is highlighted, through an integration with a third party, The Ordering.app, which is, for all intents and purposes, an unknown vendor. Here’s what LinkedIn knows about them:

(one has to wonder how and why this partner was picked – and whose cousin owns this company)

Anyways – call deflection is now done via SMS and an integration with a third party. Future releases will probably have more integrations and third parties to work with – and with that, more use cases covered.

Another future aspect might be deciding where to route a caller – what link to send him based on his intent. This is already a focus for larger businesses in their automation initiatives today.

#3 – Call transcription

This one seems like table stakes.

Transcription is the source of gaining insights from voice.

CallJoy offers transcription of all calls made.

The purpose? Enable analytics for the small business, which is based on tags and BI (below).

This most certainly makes use of Google’s speech to text service.

#4 – Automated tagging on call transcripts

It seems CallJoy offers tagging of the transcripts or finding specific keywords.

There’s not much explanation or information about tags, but it seems to work by specifying search words, which then become tags across the recordings of calls that were made.

Identifying tags might be a manual process or an automated one (it isn’t really indicated anywhere). The intent here is to allow businesses to indicate what they are interested in (order, inventory, reservation, etc.).
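To illustrate the idea, here’s a minimal sketch of keyword-based tagging along the lines described above. The tag list and the simple substring matching are illustrative assumptions, not how CallJoy necessarily does it:

```typescript
// Minimal sketch of keyword-based tagging of call transcripts.
// The tag list is illustrative; a business would define its own interests.
const tagKeywords: Record<string, string[]> = {
  order:       ["order", "delivery", "takeout"],
  reservation: ["reservation", "book a table", "reserve"],
  inventory:   ["in stock", "available", "inventory"],
};

function tagTranscript(transcript: string): string[] {
  const text = transcript.toLowerCase();
  return Object.entries(tagKeywords)
    .filter(([, keywords]) => keywords.some(k => text.includes(k)))
    .map(([tag]) => tag);
}

// e.g. tagTranscript("Hi, do you have a table available tonight? I'd like a reservation")
// -> ["reservation", "inventory"]  (simple keyword matching, no NLU)
```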

#5 – Metrics and dashboards

Then there’s the BI part – business intelligence.

Take the information collected, place it on nice dashboards to show the users.

This gives small businesses insights on who is calling them, when and for what purpose. Sounds trivial and obvious, but how many small businesses have that data today?

No machine learning or AI here – just old school BI. The main difference is that the data collected along with the insights gleaned make use of machine learning.

Sum it up

To sum things up, CallJoy uses transcription and makes basic use of Dialogflow to build a simple voicebot (probably single step – question+answer) and wraps it up in a solution that is pretty darn useful for businesses.

It does that for $39 a month per location. Very little to lose by trying it out…

A different route

While most AI vendors are targeting large enterprises, Google decided to take the route of the small business, trying to solve their problems. The challenge here is that there’s not enough data within a single business – and not enough money for running a data science project.

Google figured out how to cater for this audience with the tools they had at hand, without using the industry’s gold standard for call centers or trying a fancy catch-all solution to answer and manage all calls.

The industry’s gold standard? An IVR. Get a person to menu-hell until he reaches what he needs.

Catch-all solution? Put an AI that can handle 90%+ of the call scenarios on its own automatically.

Both an IVR and mapping call scenarios mean customizing the solution, which suggests longer onboarding and a more complicated solution. By taking the route of simplification, Google made it possible to cater for small businesses.

A virtuous cycle

Google gains here twice.

Once by attracting small businesses to its service.

Twice by collecting these calls and the intents and tags businesses put in. This ends up producing more insights for Google, which get turned into additional features, which later on attract yet more businesses to a better CallJoy service.

It is all about automation

Here’s what you’ll find on the FAQ page of CallJoy:

With CallJoy, you’ll be able to:

  • Gain powerful insights with audio recordings and searchable text transcripts of all connected incoming calls.
  • Make better business decisions with metrics such as peak call times, new vs. returning callers, and conversation topics.
  • Easily direct callers via text message to place an order or schedule an appointment online, increasing sales while freeing up your staff.

Most of it talks about improving a service by automating much of what takes place. Which is what the whole notion of AI and machine learning is with communications. Well… mostly. There are a few other areas like quality optimization.

The whole AI gold rush we see today in the communications space boils down to the next level of automation we’re getting into with communications. In many cases this is about machines helping humans and not really machines replacing humans – not for many of the use cases and interactions. That will probably come later.

Interested in AI in communications? Tomorrow I’ll be hosting a webinar with Chad Hart on this topic – join us:

Register to the webinar

The post Google CallJoy & the age of automation in communications appeared first on BlogGeek.me.

Latest WebRTC Developer Tools Landscape (and report)

Mon, 04/29/2019 - 12:00

The landscape of WebRTC developer tools is ever-changing. Here’s where we are at now.

It was time. Over a year has passed since I last updated my WebRTC PaaS report. The main changes that occurred since December 2017?

While working on the report, there were a few things that I needed to do:

  1. Update all 21 vendors with relevant information. Some progressed more than others. Some haven’t made any significant changes.
  2. Refresh all references, links and information in the report, to fit the status of WebRTC in 2019
  3. Publicize the appendix on group calling architectures, to give room for a new appendix on Flow and Embedded – two trends that are taking shape

WebRTC Developer Tools landscape

A chapter in the report deals with the WebRTC Developer Tools landscape – the vendors, frameworks, products and services that developers can use when building their WebRTC applications. And that was from June 2017… a long time ago in WebRTC-time.

So I got that updated as well.

You can download the WebRTC Developer Tools landscape infographic.

Helping developers decide

A theme that occurs almost on a daily basis is people asking what to use for their project.

Someone asked about a PHP signaling server in 2017. That question was raised again this month. I got a kind of a similar question over email about Python. Others use one CPaaS vendor and want to switch to another (because they are unhappy about quality, support, pricing, …). Or they want to try and build the infrastructure on their own.

The WebRTC Index is there to cater for that need – to guide people through the process of finding the tools they can use. It is great, but it isn’t detailed enough in some cases – it gives you the list of vendors to research, but you still need to go and research them to check their feature lists and capabilities.

That’s why I created my paid report – Choosing a WebRTC API Platform. This report covers the CPaaS vendors who have WebRTC capabilities. And now with the updated edition, it is again up to date with the most current information on all vendors.

Thinking of using a 3rd party?

Trying to determine a different vendor to use?

Want to know how committed a certain vendor is to its platform?

All that can be found in the report, in a way that is easily reachable and digestible.

The report is available at a discounted price until the end of April (only 2 days left).

If you want to learn more about the report, you can:

  1. Download the table of contents and introduction
  2. Check out Agora.io’s 4-pager from the report (each vendor is profiled in such a 4-pager)
  3. Contact me to ask questions

You can purchase the report online.

Shout out to Agora.io

The reason that 4-pager from Agora.io is openly available is that they sponsored this report.

Agora.io is one of the interesting vendors in this space. They have their own network and coding technologies, and they hook it up to WebRTC. Their solution is also capable of dealing with live broadcasts at scale (think million viewers in a single video stream).

Check them out, and if you’re in San Francisco – attend their AllThingsRTC event.

The post Latest WebRTC Developer Tools Landscape (and report) appeared first on BlogGeek.me.

Upcoming WebRTC events in 2019

Mon, 04/22/2019 - 12:00

Suddenly, there are so many good WebRTC events you can attend.

My kids are still young, and for some reason, still consider me somewhat important in their lives. It is great, but also sad – I found myself this year having to decline so many good events. Here’s a list of all the places I am not going to be at, but that you should attend if you’re interested in WebRTC.

BTW – Some of these events are still in their call for papers stage – why not go as a speaker?

AllThingsRTC

URL: http://allthingsrtc.org/

When? 13 June

Where? San Francisco

Call for speakers: https://www.papercall.io/allthingsrtc

AllThingsRTC is hosted by Agora.io. The event they did in China a few years back was great (I didn’t attend, but heard good feedback about it), and this one is taking the right direction. They have room for more speakers – so be sure to add your name if you wish to present.

Sadly, I won’t be able to join this event as I am just finishing a family holiday in London.

CommCon 2019

URL: https://2019.commcon.xyz/

When? 7-11 July

Where? Buckinghamshire, UK

CommCon was started last year by Dan Jenkins of Nimble Ape.

It takes a view of the communications market as a whole from the point of view of the developers in that market. The event runs in two tracks with a good deal of sessions around WebRTC.

I couldn’t attend last year’s event and can’t attend this year’s either (extended family trip to Eastern Europe). What I heard from last year’s attendees was that the event was really good – and as a testament, the people I know are going again this year.

ClueCon

URL: https://www.cluecon.com/

When? 5-8 August

Where? Downtown Chicago

Call for speakers: https://www.cluecon.com/speakers/

This is the 15th year that ClueCon will be held. This event is about open source projects in VoIP, with the team behind the event being the FreeSWITCH team.

This one is just after that extended family trip to Eastern Europe, and I’d rather not be on another airplane so soon.

Twilio Signal

URL: https://signal.twilio.com/

When? 6-7 August

Where? San Francisco

Call for speakers: https://eegeventsite.secure.force.com/twiliosignal/twiliosignalcfpreghome

Twilio Signal is a lot of fun. Twilio is the biggest CPaaS vendor out there and their event is quite large. I’ve been to two such events and found them really interesting. They deal a lot with Twilio products and new launches, which tend to define a lot of the industry, but there are technical and business sessions as well.

Can’t make it this year. Falls at roughly the same time as ClueCon which I am skipping as well.

JanusCon

URL: https://www.januscon.it/

When? 23-25 September

Where? Napoli, Italy

Call for papers: https://www.papercall.io/januscon2019

The Meetecho team behind Janus decided to create a conference around it.

Janus is one of the most popular open source WebRTC media servers today, and creating a dedicated event for it is a leap of faith – always a risky business.

I might end up attending it. For Janus (and for the food, obviously). The only challenge is that my daughter is starting a new school that month, so I need to see if and how that will fit.

IIT RTC

URL: https://www.rtc-conference.com/2019/

When? 14-16 October

Where? Chicago

Call for speakers: https://www.rtc-conference.com/2019/submit-presentation-for-conference/

IIT RTC is a mixture of an academic and an industry event around real time communications. I’ve taken part in it twice without actually being there in person – through a video conference session. The event runs multiple tracks, with WebRTC in a track of its own. As with many of the other larger industry events, IIT RTC is preceded by a TADHack event, and one of its tracks is TAD Summit.

I’ll be skipping this one due to Sukkot holiday here in Israel.

Kranky Geek

URL: https://www.krankygeek.com/

When? 15 November

Where? San Francisco

Call for speakers: just contact me

That’s the event I am hosting with Chris Koehncke and Chad Hart. Our focus is WebRTC and ML/AI in real time communications. We’re still figuring out the sponsors and agenda for this year (just started planning the event).

Obviously, I’ll be attending this event…

Which event should you attend?

This is a question I’ve been asked quite a few times, and somehow, this year, there are just so many events that I want to attend but can’t. If you’re thinking of going to an event to learn about WebRTC and communications in general, then any of these will be great.

Go to a few – why settle for one?

Next Month

Next month, I’ll be hosting a webinar along with Chad Hart. We will be reviewing the changing domain of machine learning and artificial intelligence in real time communications. We published a report about it a few months back, and it is time to take another look at the topic. If you’re interested – join us.

The post Upcoming WebRTC events in 2019 appeared first on BlogGeek.me.

WebRTC Multiparty Architectures

Mon, 04/15/2019 - 12:00

There are multiple ways to implement WebRTC multiparty sessions. These in turn are built around mesh, mixing and routing.

In the past few days I’ve been sick to the bone. Fever, headache, cough – the works. I couldn’t do much, which meant no writing a new article either. The good thing is I had to remove an appendix from my upcoming WebRTC API Platforms report to make room for a new one.

I wanted to touch on the topic of Flow and Embed in Communication APIs, and how they fit into the WebRTC space. This topic will replace an appendix in the report about multiparty architectures in WebRTC, which is what follows here – a copy+paste of that appendix:

Multiparty conferences of either voice or video can be supported in one of three ways:

  1. Mesh
  2. Mixing
  3. Routing

The quality of the solution will rely heavily on the type of architecture used. In routing, video is further refined into multi-unicast, simulcast and SVC.

WebRTC API Platform vendors who offer multiparty conferencing will have different implementations of this technology. For those who need multiparty calling, make sure you know which technology is used by the vendor you choose.

Mesh

In a mesh architecture, all users are connected to all others directly and send their media to them. While there is no overhead on a media server, this option usually falls short of offering any meaningful media quality and starts breaking from 4 users or more.

Mesh topology

For the most part, consider vendors offering mesh topology for their video service as limited at best.

Mixing

MCUs were quite common before WebRTC came into the market. MCU stands for Multipoint Conferencing Unit, and it acts as a mixing point.

MCU mixing topology

An MCU receives the incoming media streams from all users, decodes them all, composes a new layout of everything and sends it out to each user as a single stream.

This has the added benefit of being easy on user devices, which see the MCU as a single peer they need to interact with; but it comes at a high compute cost on the server and reduced flexibility on the user side.

Routing

SFUs were quite new when WebRTC came into the market, but are now an extremely popular solution. SFU stands for Selective Forwarding Unit, and it acts like a router of media.

SFU routing topology

An SFU receives the incoming media streams from all users, and then decides which streams to send to which users.

This approach leaves flexibility on the user side while reducing the computational cost on the server side, making it the popular and cost-effective choice in WebRTC deployments.

To route media, an SFU can employ one of three distinct approaches:

  1. Multi-unicast
  2. Simulcast
  3. SVC

Multi-unicast

This is the naïve approach to routing media. Each user sends their video stream towards the SFU, which then decides to whom to route that stream.

If there is a need to lower bitrates or resolutions, it is either done at the source, by forcing the sender to change the stream it sends, or on the receiver end, by having the receiving user throw away data it has already received and processed.

It is also how most implementations of WebRTC SFUs were done until recently. [UPDATE: When this article was originally written in 2017, that was true. In 2019, most are actually using simulcast]

Simulcast

Simulcast is an approach where the user sends multiple video streams towards the SFU. These streams are compressed data of the exact same media, but in different quality levels – usually different resolutions and bitrates.

Simulcast

The SFU can then select which of the streams it received to send to which participant based on their device capability, available network or screen layout.

Simulcast has started to crop up in commercial WebRTC SFUs only recently.
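
To make this more concrete, here is a minimal sketch of how a browser client might publish a camera track with three simulcast layers. The layer names, bitrates and the pre-existing RTCPeerConnection are assumptions for the example, not something mandated by WebRTC:

```typescript
// Minimal sketch: publishing a camera track with three simulcast encodings.
// Assumes an already-created RTCPeerConnection (`pc`) and a browser that
// supports sendEncodings/rid. Layer names and bitrates are arbitrary choices.
async function publishWithSimulcast(pc: RTCPeerConnection): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const [track] = stream.getVideoTracks();

  pc.addTransceiver(track, {
    direction: 'sendonly',
    streams: [stream],
    sendEncodings: [
      { rid: 'q', scaleResolutionDownBy: 4.0, maxBitrate: 150_000 }, // low quality
      { rid: 'h', scaleResolutionDownBy: 2.0, maxBitrate: 500_000 }, // medium quality
      { rid: 'f', maxBitrate: 1_500_000 },                           // full quality
    ],
  });
  // The SFU can then forward 'q', 'h' or 'f' to each viewer independently.
}
```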

SVC

SVC stands for Scalable Video Coding. It is a technique where a single encoded video stream is created in a layered fashion, where each layer adds to the quality of the previous layer.

SVC

When an SFU receives a media stream that uses SVC, it can peel off layers of that stream to fit the outgoing stream to the quality, device, network and UI expectations of the receiving user. It offers better performance than simulcast in both compute and network resources.

SVC has the added benefit of higher resiliency to network impairments, by allowing error correction to be applied only to the base layers. This works well over mobile networks, even for 1:1 calling.

SVC is very new to WebRTC and is only now being introduced as part of the VP9 video codec.

The post WebRTC Multiparty Architectures appeared first on BlogGeek.me.

Handling session disconnections in WebRTC

Mon, 04/08/2019 - 12:00

WebRTC disconnections are quite common, but you can “fix” many of them just by careful planning and proper development.

Years ago, I developed the H.323 Protocol Stack at RADVISION (later turned Avaya, turned Spirent, turned Softil). I was there as a developer, R&D manager and then the product manager. My code is probably still in that codebase, lovingly causing products around the globe to crash from time to time – like any other developer, I have my share of bugs left behind.

Anyways, why am I mentioning this?

I had a client asking me recently about disconnections in WebRTC. And it kinda reminded me of a similar issue (or set of issues) we had with the H.323 stack and protocol years back.

If you bear with me a bit – I promise it will be worth your while.

This week I am starting the office hours for my WebRTC course. The next office hour (after the initial “hi everyone”) will cover WebRTC disconnections.

Check out the course – and maybe go over the first module for free:

Learn WebRTC

A quick intro to H.323 signaling and transport

H.323 is like SIP, just better and more complex. At least for me, having started my way in VoIP with H.323 (I will always have a soft spot for it). For many years, the way H.323 worked was by opening two separate TCP connections for transporting its signaling: the first for passing the Q.931 protocol and the second for passing the H.245 protocol.

If you would like to compare it to the way WebRTC handles things, then Q.931 is how you set up the connection – have the users find each other. H.245 is similar to what SDP and JSEP are for (I am blatantly ignoring H.225 here, another protocol in H.323 which takes care of registration and authentication).

Once Q.931 and H.245 get connected, you start adding the RTP/RTCP stuff over UDP, which gets you quite a lot of connections.

Add to that complexities like tunneling H.245 over Q.931, using something called faststart instead of H.245 (or before H.245), then sprinkle a dash of “parallel H.245” and then a bit of NAT traversal and/or security and you get a lot of places that require testing and a huge number of edge cases.

Where can H.323 get “stuck” or disconnected?

With so many connections, there are a lot of places where things can go wrong. There are multiple state machines (one for Q.931 state, one for H.245 state) and there are different connections that can get severed for one reason or another.

Oh – and in H.323 (at least in its earlier specifications that I had the joy to work with), when the Q.931 or H.245 connections get severed – the whole session is considered as disconnected, so you go and kill the RTP/RTCP sessions.

At the time, we suffered a lot from zombie sessions due to different edge cases. We ended up with solutions that were either based on the H.323 specification itself or best practices we created along the way.

Here are a few of these (a generic sketch of the media-timeout watchdog follows the list):

  • If the Q.931 connection gets severed – kill the session
  • If the H.245 connection gets severed – kill the session
  • If you don’t receive media or media control packets on RTP or RTCP respectively for a configurable period of time (think 5-10 seconds) – kill the session
  • When a state machine for Q.931 or H.245 initiates – start a timer. If that timer ends and the state machine didn’t get to the connected state – switch the state to timeout and… – kill the session
  • Killing the session means trying to gracefully close all connections, but if we can’t within a short period of a timeout – we just shut things down to collect the resources back to be used later
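
That media-timeout rule is a pattern that outlives H.323. Here is a generic sketch of such a watchdog; nothing in it is H.323-specific, and the names are made up for the example:

```typescript
// Generic sketch of the "no media for N seconds -> kill the session" rule.
// The same idea applies to RTP/RTCP inactivity in any media stack.
class MediaWatchdog {
  private timer?: ReturnType<typeof setTimeout>;

  constructor(private timeoutMs: number, private killSession: () => void) {
    this.packetReceived(); // arm the timer as soon as the session starts
  }

  // Call this every time a media or media-control packet arrives.
  packetReceived(): void {
    if (this.timer !== undefined) clearTimeout(this.timer);
    this.timer = setTimeout(this.killSession, this.timeoutMs);
  }

  stop(): void {
    if (this.timer !== undefined) clearTimeout(this.timer);
  }
}

// Usage: kill the session if nothing arrives for 7 seconds.
// const watchdog = new MediaWatchdog(7_000, () => session.close());
```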

H.323 existed before smartphones. Systems were usually tethered to an ethernet cable, or at most connected over WiFi from a static location. There was no notion of roaming or moving between networks, which meant there was no need to ask yourself whether a connection got severed because of a network switch or because there was a real issue.

Life was simple:

And if you were really insistent then maybe this:

(in real life scenarios, these two simplistic state machines were a lot bigger and complicated, but their essence was based on these concepts)

Back to WebRTC signaling and transport

WebRTC is simpler and more complicated than H.323 at the same time.

It is simpler, as there is only SRTP. There’s no signaling protocol that is standardized or preselected for WebRTC, and for the most part, the one you use will probably require only a single connection (as opposed to the two in H.323). It also has far fewer alternatives built into the specification itself than H.323 has.

It is more complicated, as you own the signaling part. You make that selection, so you’d better make a good one. And while you’re at it, implement it reasonably well and handle all of its edge cases. This is never a simple task even for simple signaling protocols. And it’s now on you.

Then there’s the fact that networks today are more complex. Users expect to move around while communicating, and you should expect scenarios where users switch networks mid-session.

If you use WebRTC in a browser, then you get these interesting aspects associated with your implementation:

  1. When you close the browser, the session dies
  2. When you close the tab where the WebRTC session lives, the session dies
  3. When you refresh the page where the WebRTC session lives, the session dies
  4. When you click a link to move to a different page (even on the same site), the session dies

That’s a lot of dying taking place in the browser. The server, or the other client, will need to “sniff out” these scenarios – as they might not be gracefully disconnected – and decide what to do about them.
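
One mitigation for the browser cases above is a best-effort goodbye from the page itself. Here is a minimal sketch; the /signal/bye endpoint and the message shape are hypothetical and would map to your own signaling protocol:

```typescript
// Minimal sketch: best-effort "bye" when the tab closes, refreshes or
// navigates away. The /signal/bye endpoint and message shape are
// hypothetical - adapt them to your own signaling protocol.
function installUnloadHandler(pc: RTCPeerConnection, sessionId: string): void {
  const sayGoodbye = (): void => {
    // sendBeacon is more likely to survive page unload than a WebSocket send.
    navigator.sendBeacon('/signal/bye', JSON.stringify({ type: 'bye', sessionId }));
    pc.close();
  };
  // 'pagehide' fires on close, refresh and navigation - all the cases listed above.
  window.addEventListener('pagehide', sayGoodbye);
}
```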

Where can WebRTC get “stuck” or disconnected?

We can split disconnections of WebRTC into 3 broad categories:

  1. Failure to connect at all
  2. Media disconnections
  3. Signaling disconnections

In each, there will be multiple scenarios, defining the reasons for failure as well as how to handle and overcome such issues.

In broad strokes, here’s what I’d do in each of these 3 categories:

#1 – Failure to connect at all

There’s a decent amount of failures happening when trying to connect WebRTC sessions. They range from not being able to even send out an SDP, through interoperability issues across browsers and devices, to ICE negotiation failing to connect media.

In many of these cases, better configuration of the service as well as focus on edge cases would improve the situation.

If you experience connection failures for 10% or more of the sessions – you’re doing something wrong. Some can get it as low as 1% or less, but oftentimes that depends on the type of users your service attracts.

This leads to another very important aspect of using WebRTC:

Measure what you can if you want to be able to improve it in the future
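
In the browser, getStats() is the main measurement tool you have for this. A minimal sketch follows; reportToBackend() is a placeholder for whatever monitoring pipeline you use, not a real API:

```typescript
// Minimal sketch: sample getStats() periodically so connection failures and
// quality can actually be measured over time. reportToBackend() is a
// placeholder for your own analytics pipeline, not a real API.
function startStatsSampling(pc: RTCPeerConnection, intervalMs = 5000): number {
  return window.setInterval(async () => {
    const report = await pc.getStats();
    report.forEach((stat: any) => {
      // The nominated candidate pair tells you whether media flows and the RTT.
      if (stat.type === 'candidate-pair' && stat.nominated) {
        reportToBackend({
          iceState: pc.iceConnectionState,
          rtt: stat.currentRoundTripTime,
          bytesSent: stat.bytesSent,
        });
      }
    });
  }, intervalMs);
}

function reportToBackend(sample: Record<string, unknown>): void {
  console.log('webrtc-stats', sample); // replace with a real backend call
}
```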

#2 – Media disconnections

Sometimes, your sessions will simply disconnect.

There are many reasons why that can happen:

  • The firewall policies of the access point used are configured to kill P2P encrypted traffic (blame all them bittorrent-hating-IT-people)
  • The user switched from one network to another in mid-session, and you should follow WebRTC’s ICE restart mechanism
  • The other end crashed, closed or just got offline

Each of these requires different handling – some in code, others through manual intervention (think customer support working out the configuration with a customer to resolve the firewall issue).
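
For the network-switch case, an ICE restart is the standard remedy. Here is a minimal sketch, where sendOfferOverSignaling() stands in for your own signaling channel:

```typescript
// Minimal sketch: watch the ICE connection state and trigger an ICE restart
// when media drops (e.g. after switching networks). sendOfferOverSignaling()
// is a placeholder for your own signaling channel.
function watchForMediaDisconnections(pc: RTCPeerConnection): void {
  pc.oniceconnectionstatechange = async () => {
    const state = pc.iceConnectionState;
    if (state === 'disconnected' || state === 'failed') {
      // ICE restart: gather fresh candidates without tearing the session down.
      const offer = await pc.createOffer({ iceRestart: true });
      await pc.setLocalDescription(offer);
      sendOfferOverSignaling(offer);
    }
  };
}

declare function sendOfferOverSignaling(offer: RTCSessionDescriptionInit): void;
```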

#3 – Signaling disconnections

Unlike H.323, if signaling gets disconnected, WebRTC doesn’t even know about it, so it won’t immediately cause the session itself to disconnect.

The first thing you’ll need to do is decide how you want to proceed in such cases – do you treat this as a session failure/disconnection, or do you let the show go on?

If you treat these as failures, then I suggest killing peer connections based on the status of your websocket connection to the server. If you are on the server side, then once a connection is lost, you should probably go ahead and kill the media paths – either from your media server towards the “dead” session leg or from the other participant on a P2P connection/session.

If you want to make sure the show goes on, you will need to try and reconnect the peer connection towards the same user/session somehow. In that case, additional signaling logic in your connection state machine, along with additional timers to manage it, will be necessary.
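
Here is a minimal sketch that combines both options – kill the session unless signaling reconnects within a grace period. The grace period and the reconnect-to-the-same-URL behavior are assumptions you would adapt to your own service:

```typescript
// Minimal sketch: tie the peer connection's fate to the signaling WebSocket,
// with a short grace period to attempt reconnection first. The grace period
// and reconnect behavior are assumptions, not a standard.
function bindSignalingToSession(ws: WebSocket, pc: RTCPeerConnection): void {
  const GRACE_MS = 10_000;

  ws.onclose = () => {
    // If signaling doesn't come back in time, treat the session as dead.
    const killTimer = window.setTimeout(() => pc.close(), GRACE_MS);

    const retry = new WebSocket(ws.url);
    retry.onopen = () => {
      window.clearTimeout(killTimer);
      // Re-authenticate and re-attach this socket to the same session here.
    };
    retry.onerror = () => pc.close();
  };
}
```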

Announcing the WebRTC course snippets module

Here’s the thing.

My online WebRTC training has everything in it already. Well… not everything, but it is rather complete. What I’ve noticed is that I get repeat questions from different students and clients on very specific topics. They are mostly covered within lessons of the course, but they sometimes feel “buried” within the hours and hours of content.

This is why I decided to start creating course snippets. These are “lessons” that are 3-5 minutes long (as opposed to 20-40 minutes long), whose purpose is to answer one specific question at a time. Most of the snippets will be actionable and may contain additional materials to assist you in your development. This library of snippets will make up a new course module.

Here are the first 3 snippets that will be added:

  1. WebRTC session disconnections
  2. ICE servers configuration
  3. A Quick review of QUIC

While we’re at it, office hours for the course start today. If you want to learn WebRTC, now is the best time to enroll.

The post Handling session disconnections in WebRTC appeared first on BlogGeek.me.

CPaaS differentiation in 2019

Mon, 04/01/2019 - 12:00

CPaaS differentiation seems to be revolving around tackling niches.

Time for another look at the world of CPaaS – Communications Platform as a Service. In January 2018, a bit over a year ago, I looked at CPaaS trends for 2018. The ones there were:

  1. Serverless – which didn’t really happen, at least not as a direct CPaaS offering, other than what Twilio has to offer and what Voximplant had as well
  2. Omnichannel – where we see most vendors collecting channels to support, with Whatsapp being the lead noise-maker
  3. Visual/IDE – ended up being a winner in 2018, with Plivo, MessageBird, Voximplant and Infobip joining Twilio. It is also now usually called “Flow”
  4. Machine learning and AI – still more talk than action, but we’re moving in this direction. The whole industry is
  5. AR/VR – happening, though less with the CPaaS vendors directly
  6. Bots – that’s part of the omnichannel + ML/AI story. And we see instances of it done with CPaaS
  7. GDPR – something that was done and somehow mostly forgotten

I’d like to look at what’s happening in CPaaS this time from a slightly different angle – one that still hints at trends, but in a more nuanced way. From briefings I’ve been given these past few weeks and the announcements and stories coming out of Enterprise Connect 2019, it looks like different CPaaS vendors are settling on different target audiences and catering to different use cases and market niches.

Today CPaaS is almost synonymous with Twilio. Every player looks at what Twilio does in order to plot its own route in the market – at times with the intended aim of disrupting Twilio, mostly with lower price points; at other times by trying to offer something more/better.

Then there are external players who add APIs to their platform – usually a UCaaS (Unified Communications as a Service) platform. They don’t directly compete with CPaaS, but if you are purchasing a “phone system” for your enterprise from a UCaaS player, then why not use its APIs and services instead of opting for another vendor (a CPaaS vendor in this case)?

Planning on selecting a CPaaS vendor? Check out this shortlist of CPaaS vendor selection metrics:

Get the shortlist

Here is how some of the vendors in this space are trying to differentiate, pivot and/or find their niche within the CPaaS market.

Agora.io – Gaming

If you look at Agora’s blog, what you’ll find there is a slew of posts around gaming and gaming-related frameworks (Unity, to be exact):

  • It’s How You Play the Game: Trends at Game Developers Conference – Day 1 Recap
  • Adding Voice Chat to a Multiplayer Cross-Platform Unity game
  • How To: Create a Video Chat App in Unity
  • Add Voice Chat to your Unity game
  • (iOS) Run Video Chat within your Unity application
  • (Android) Run Video Chat within your Unity application

Agora offers a specific solution for gaming

Gaming is an untapped market for CPaaS.

There’s communications there of all kinds – collaboration or communications across gamers inside a game, talking before the game, streaming the game to viewers, etc.

All of this communication is either developed in-house by the gaming companies (not a lot of it), catered for by specialized gaming VoIP vendors, or handled out of scope (using Discord, Skype, …). Rarely is it covered by a CPaaS vendor.

Somehow, cracking this market is really tough for CPaaS. Agora.io is trying to do just that, along with its other focus areas – live broadcast and social (two other tough nuts to crack).

ECLWebRTC – Media Pipeline

The Japanese platform from NTT Communications – ECLWebRTC.

Like many of the WebRTC-first/only platforms out there, ECLWebRTC had an SFU implementation and support for various devices and browsers.

When you get to that point, one approach is to go after voice and PSTN. Another one is to add more features and increase the sizes of meetings and live broadcasts that can be supported.

ECLWebRTC decided to go after machine learning here, with the intent of letting its customers integrate and connect its media paths directly to cloud APIs. This is done using what they call Media Pipeline Factory, which feels from the looks of it like a general purpose media server.

ECLWebRTC is less known in Europe and the US, and probably not well known outside of Japan either. With the Japanese market’s focus on automation, it makes sense that media pipelines would be a focus area for ECLWebRTC. This type of capability is relevant elsewhere as well, but it doesn’t seem to be a priority for others yet.

Infobip – Omnichannel

I’ve had the opportunity to fiddle around with Infobip Flow recently, something that turned out to be a very pleasant experience. From Flow, it became apparent that Infobip is working hard on offering its customers an omnichannel experience. Compared to other CPaaS vendors, they seem to have the most coverage of channels:

To the above, you can add SMS, RCS and email.

Infobip Flow has another nice quality – it is built for both inbound and outbound communications. Most of its competitors do inbound flows only.

In a world where competition may force price wars on CPaaS basic offerings of voice and SMS, adding support for omnichannel seems like a good way to limit attrition and churn and increase vendor lock-in.

RingCentral – Embeddables

RingCentral isn’t a CPaaS vendor. They offer a communication service for the enterprise. You got a company and need a way to communicate? There’s RingCentral.

What they’ve done in the past couple of years was add an API layer to some of their services. Things like pushing messages into Glip, handling phone calls, etc.

The idea is that if you need something done in an automated fashion in RingCentral, you can use the API for it. In many simple cases, this might be used instead of adopting CPaaS APIs. In other cases, it is about using a single vendor or having specific integrations relevant to the RingCentral platform.

What RingCentral did was add what they call Embeddable:

“With RingCentral Embeddable, you can embed a full-featured softphone into your favorite web application for an integrated communications experience that drives productivity and ease of use without lengthy development time“

This concept of embedding a piece of code isn’t new – YouTube videos offer such a capability, as do a slew of other services out there. When it comes to communications, it is similar in nature to what TokBox has in the form of Video Chat Embeds, but done at the level of users and their user accounts on RingCentral.

This definitely makes integrations of RingCentral with CRM tools a lot easier to get done, and makes it easier for non-developers to engage with them – similar to how Flow-type offerings make it easier for non-developers to handle communication flows.

SignalWire – Price and Flexibility

SignalWire is an interesting proposition. It comes from the team that created and is maintaining FreeSWITCH, the leading open source framework used today by many communication providers, including some of the CPaaS vendors.

The FreeSWITCH team decided to build their own managed service (=CPaaS in this case), calling it SignalWire. Here are a few examples of the punchy copy they have on their website:

  • Advanced communications from the source
  • We don’t price gouge you for carrier services like per-minute and per-message rates. Focus on what’s important to your business, not your phone bill

What they seem to be aiming for are two things: price and flexibility

Price

They offer close to wholesale price points (at least based on the website – I haven’t done a price comparison on this one, though their sample pricing for the US does seem low).

To make things easier, they are targeting Twilio customers, doing that by offering TwiML support (similar to what Plivo did/is doing). TwiML is a markup language for Twilio, which can be used to control what happens on connected calls. Continuing with the blunt approach, SignalWire calls this LāML – Legacy Antiquated Markup Language.

While this may fit a certain type of Twilio customers, it certainly doesn’t cover the whole gamut of Twilio services today.

Flexibility

On the flexibility front, there are mostly marketing messages today and no real announced products on the SignalWire website.

Besides LāML there’s a WebSocket-based client API/SDK, not so different from what you’ll find elsewhere.

They can probably get away with it in the sales process by saying “we give you FreeSWITCH from the source”, but I am not sure what happens when developers want to configure that elastic cloud service the way they are used to doing with their own FreeSWITCH installation.

All in all, this is an interesting offering, an interesting approach and an interesting go-to-market.

TeleSign – Security and Data Analytics

TeleSign is focused on SMS. And a bit of voice. As their website states: “APIs Delivering User Verification, Data Insights & Communications”

Since security, verification and fraud prevention these days rely heavily on analytics, TeleSign are “hoarding” data about phone numbers, using it for these use cases. It isn’t that others don’t do it (there’s Twilio Authy, Nexmo Number Insight and others), but this is what they are putting front and center.

Since their acquisition by BICS, a wholesale operator for wireline and wireless carriers, that has grown even further, as they gain access to more and more data.

It will be interesting to see whether TeleSign grows their business from security into additional communication domains, or tries to focus on security and expand from the telecom space into adjacent areas.

Twilio – Adjacencies

Talking about adjacencies, that’s what Twilio is doing. Now that they are a public company, there is an even more insatiable appetite for growth within Twilio, in an effort to find more revenue streams. So far, this has worked great for them.

Here are two areas we’ve seen Twilio going into:

  1. Contact centers, shifting away from developers per se with their CPaaS platform towards a cloud-based contact center offering, competing head-to-head with some of their own customers (that would be Twilio Flex)
  2. Email, through the acquisition of SendGrid

How email fits into the Twilio communication APIs is still an open question, though I can see a few interesting initiatives there.

And then there’s the wireless offering of Twilio, which resembles a more flexible M2M play.

But where would Twilio go next?

UCaaS, going after unified communications vendors and competing with them head to head?

Maybe try to jump towards an Intercom-like service of its own? Or purchase Intercom?

Or find another market of developers that is growing nicely – similar maybe to its recent Stripe integration of Twilio Pay.

Twilio in a way has been defining and redefining what CPaaS is for the past several years. They need to continue doing that to stay in the lead and well ahead of their competition.

VoIP Innovations – Marketplace

VoIP Innovations came out with what they call Showroom.

Here’s a short video of the explanation of what that is exactly:

Many of the CPaaS vendors offer a partner program of sorts. This is where vendors who develop stuff for others or build tooling and apps on top of the CPaaS vendor’s APIs can go and showcase their work. The programs vary from one CPaaS company to another.

Twilio has Showcase as well as an add-on marketplace of sorts. Nexmo has a partners directory. VoIP Innovations are banking on their showroom.

What makes it a bit different is the target audience associated with it:

  1. Developers – obvious, as CPaaS caters first and foremost for developers
  2. Resellers – who can pick off marketplace apps, whitelabel and resell them
  3. Subscribers – who pay for that privilege

While there isn’t much documentation to go on, I am assuming that the whole intent behind the marketplace is to offer direct monetization opportunities for developers and resellers, by taking care of customer acquisition as well as payment on behalf of the developer and reseller.

A concept taken from other marketplaces (think mobile app stores). It will be interesting to see how successful this will be.

Vonage – UCaaS+CPaaS

Vonage is interesting. Started as consumer VoIP, turned cloud UC vendor (=enterprise communications) through acquisitions, turned to acquire Nexmo and then TokBox to add CPaaS, continued with NewVoiceMedia acquisition to cover contact center space.

How does one differentiate with such a portfolio? Probably by leveraging synergies across its product offerings and markets.

What Vonage recently did was bring number programmability from its Nexmo/CPaaS offering to its VBC/UCaaS platform.

What do they gain?

  1. Single API across product lines, making it easier to learn and use the same APIs
  2. Large ecosystem of developers using Nexmo able to build on VBC – it is… the same API
  3. The level of flexibility that a CPaaS platform has right on top of a UCaaS offering. In this case, scripting using Nexmo NCCO

Is this good for Nexmo customers and partners? Yap. They can now reach out to the Vonage business customers as an additional target market.

Is this good for Vonage customers and partners? Yap. They can now do more, and more customized communications solutions with this added flexibility.

Voximplant – Flow

Voximplant is one of the lesser-known CPaaS vendors. Its whole platform is built on the concept of an app engine, where you write the communications logic right on their platform using JavaScript. It is serverless from the ground up. A year or two ago, Voximplant added Smartcalls – a product that enables you to sketch out call flows for outbound interactions: marketing, sales, etc. These interactions are played out across a large number of phone numbers and automated, making it really easy and flexible to drive phone-based campaigns.

Now? Voximplant took the next step of adding inbound interactions, covering the IVR and contact center types of scenarios.

Twilio, MessageBird and Plivo offer inbound visual flow products. These allow developers to drag and drop communication widgets to build a flow – a customer interaction through the system.

Voximplant and Infobip offer inbound and outbound flows, where you can also plot company/agent-based initiatives with greater ease, as well as customer-initiated interactions.

Why aren’t you listed here?

The CPaaS market is large and varied. It is hard to see everyone all the time. It is also hard to innovate and differentiate every year. The vendors here are the ones I had briefings with or ones who promoted their products in ways that were visible to me. But more than anything, these are the ones that I felt have changed their offerings in the past year in a differentiating manner.

BTW – if you think that differentiation here means functionality that other vendors don’t have, then you are wrong. Doing that is close to impossible today. Differentiation is simply where each vendor is putting its focus in trying to attract customers and carve out a niche within the broader market. It is the story each vendor tells about its product.

If you feel like a vendor needs to be here, or did something meaningful and interesting, just contact me. I am always happy to learn more about what is happening in the market.

Who is missing in my WebRTC PaaS report?

Later this month, I will be releasing my latest update of the WebRTC PaaS report.

There are changes taking place in the market, and what vendors are offering in the WebRTC space as a managed API service is also changing. This report is there to guide buyers and sellers in the market on what to do.

For buyers, it is about which platform to pick for their project – or in some cases, in which of the platform vendors to invest.

For sellers, it is about what to add to their roadmap – understanding how they are viewed from the outside and how they compare to their peers.

Here’s who’s been in the last update of the report:

Think you should be there? Contact me.

Want to purchase the report? There’s a 30% discount on it from today and until the update gets published (and yes – you will be receiving the update once it gets published for no additional fee).

There will be a new appendix in the report, covering the topic of Flow and Embeddable trends in the market. Something that will become more important as we move forward.

The post CPaaS differentiation in 2019 appeared first on BlogGeek.me.

How does WebRTC connect people?

Mon, 03/25/2019 - 12:00

WebRTC doesn’t really connect people, but the way you think about its signaling is important to your WebRTC application.

Here’s a comment left on one of my recent articles:

WebRTC is… still just a little confusing…Tsahi, i’m reading the book recommended by Loreto & Romano but the examples are outdated. With regards to the SDP signal – if peer A is on a webRTC application, but peer B is surfing youtube – How does peer B get notified of an offer? It would have to go to peer B’s email address right? — because there is no way of knowing peer B’s IP address. Please help.

A few quick things before I dig deeper into this WebRTC connectivity thing:

  • Yap. WebRTC is a little confusing. Maybe even a lot. It doesn’t behave like any other browser technology we have
  • The sad thing about books about WebRTC is that they didn’t age all too well. WebRTC still changes too fast
  • There’s some confusion here in wording – peers, offer, etc.

How well do you know WebRTC? Check it out in my online WebRTC quiz.

Take the WebRTC quiz

Connecting, Signaling and WebRTC

I’ll try to use a kind of a bad comparison here to try to explain this.

Let’s say you are the proud owner of a Pilates studio. You’re the instructor there (#truestory – at least for my wife).

My wife gives Pilates lessons at different hours of the day. These are private lessons so it is rather flexible on both sides. But let me ask you this – how do people know when to come for a lesson?

This being Israel, they usually communicate with my wife via Whatsapp to decide together on the date and time. Usually, people stick to the day of week and time and start communicating only if they can’t make it, want to reschedule or just make sure the lesson is still taking place.

Back to WebRTC.

WebRTC is that Pilates studio. It does one thing – enables live media to flow from one browser to another. Sometimes also non-browsers, but let’s stick to the basics here.

How do the people who need to share or receive that live media connect to each other? That’s not what WebRTC does – it happens somewhere else. And that somewhere is the signaling mechanism that you pick for your own application. I am calling it a mechanism and not a protocol, since it is going to be a tad more confusing in a second.

Or not.

Now let’s go back to WebRTC, signaling and connecting people and look at it from a point of view of different scenarios.

Scheduled Meeting

We’ll start with a scheduled meeting. At any given point in time, I have a few of those coming up. Meetings with clients, partners and potential clients. Here’s one such calendar invitation:

This one happens to take place using Google Meet. Who’s calling who? No one really. I’ll just click that link in the invite when the time comes and magically find myself in the same conference with the other participants.

In most scheduled conferences, you just join a WebRTC link

Where do you get that link to use?

  • Inside the calendar invite
  • In an email that was sent
  • Through an SMS reminder

Some of these services allow inviting people from inside the meeting. That invite ends up being sent to them via email or SMS as a link, or by just dialing their phone (without WebRTC).

Ad-hoc “upgrade” of text chat to video conference

There are ad-hoc calls. These usually start from a chat message.

Oftentimes, I’d rather text chat than do a voice or a video call. It has to do with the speed and asynchronous nature of text. Which means that I’ll be chatting with someone over whatever instant messaging service we select, and at some point, I might want to switch medium – move from text to something a bit more synchronous like video:

Like this example with Philipp – most of our conversations start in Hangouts (that’s where he is most reachable to me) and when needed, we’ll just jump on a call, without planning it first.

Who is calling whom here? Does it matter?

What happens here is that both of us are already “inside” the communications app, so we both have a direct link to the service. Passing that information from one side to the other is a no brainer at this point.

So how will that get signaled? However you see fit – probably on top of a WebSocket or over HTTPS.
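To make that concrete, here is a minimal sketch of what “however you see fit” could look like over a plain WebSocket. The server URL and message format are made up for illustration – your own signaling server can shape them any way it wants:

```javascript
// A minimal sketch of home-grown signaling over a WebSocket.
// The URL and message shapes are placeholders, not a real service.
const signaling = new WebSocket('wss://example.com/signaling');
const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.l.google.com:19302' }] });

// Ship whatever WebRTC hands us (ICE candidates) to the other side
pc.onicecandidate = ({ candidate }) => {
  if (candidate) signaling.send(JSON.stringify({ type: 'candidate', candidate }));
};

// React to whatever the other side ships to us
signaling.onmessage = async ({ data }) => {
  const msg = JSON.parse(data);
  if (msg.type === 'offer') {
    await pc.setRemoteDescription(msg);
    const answer = await pc.createAnswer();
    await pc.setLocalDescription(answer);
    signaling.send(JSON.stringify(pc.localDescription));
  } else if (msg.type === 'answer') {
    await pc.setRemoteDescription(msg);
  } else if (msg.type === 'candidate') {
    await pc.addIceCandidate(msg.candidate);
  }
};
```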

I am calling you on the “phone”

What if there’s nothing pre-planned, so it isn’t a scheduled meeting. And we haven’t really been on a text chat to warm things up towards a call. How do you reach me now?

How do you “dial”?

Puneet is one of our support/testing engineers at testRTC. While he will usually text me over Slack to start a call, he might just try calling directly from time to time.

What happens then?

I am not in front of my laptop with the Slack app opened. My phone is on standby mode. How does it start ringing on me? What does WebRTC do to get my attention?

Nothing.

The phone starts ringing because it received a mobile push notification. I’ve got the Slack app installed, so it can receive push notifications. Slack invoked a push notification to wake up the app and make it “ring” for me.

The same can be done with web notifications. And there are probably other means to do similar things in IoT devices. The thing is – this is out of scope for WebRTC, but something that is doable with the signaling technologies available to you.
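As an illustration of the web notifications route, here is a rough service worker sketch. The push payload shape and the /call page are assumptions – the actual “ringing” UX is whatever your app decides it to be:

```javascript
// service-worker.js – rough sketch of “ringing” via a push notification.
// The payload format and the /call page are placeholders.
self.addEventListener('push', (event) => {
  const data = event.data ? event.data.json() : {};
  event.waitUntil(
    self.registration.showNotification('Incoming call', {
      body: (data.caller || 'Someone') + ' is calling you',
      tag: 'incoming-call',
      requireInteraction: true, // keep "ringing" until the user reacts
    })
  );
});

// When the user taps the notification, open the page that runs the WebRTC session
self.addEventListener('notificationclick', (event) => {
  event.notification.close();
  event.waitUntil(clients.openWindow('/call'));
});
```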

Contact center agent answering calls

When a contact center adopts WebRTC to migrate its agents away from desktop phones or installed softphones, calls will end up being received in the browser.

This happens by integrating callbars inside CRMs or just by having the CRM implement the contact center part of the equation as well.

What happens then? How do calls get dialed? (the above is a screenshot taken from Talkdesk’s support site)

They go through the PSTN towards a PBX. More often than not, that PBX will be based on Asterisk or FreeSWITCH, though other alternatives exist. PBXs usually base themselves around the SIP protocol, which leads to two alternatives for the signaling protocol that WebRTC will use in the browser:

  1. SIP over WebSocket. Practically the same thing happening in SIP will happen in the browser
  2. Some proprietary protocol will be used, translated from SIP

In both cases, the contact center agent is registered in advance. The agent is also marked as “available” in most contact center software logic – this means that incoming calls waiting in the call center queue can be routed to that agent. The agent’s browser just sits there, waiting for incoming calls. In some ways, this is similar to the upgrade-from-text-chat scenario.
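If you go the SIP over WebSocket route, libraries such as JsSIP or SIP.js handle that registration for you. Here is a rough JsSIP-flavored sketch – the domain, credentials and exact option names are placeholders to verify against the library’s documentation:

```javascript
// Rough sketch: a contact center agent registering over SIP/WebSocket with JsSIP.
// Domain, URI and password are placeholders; check the library docs for exact options.
const socket = new JsSIP.WebSocketInterface('wss://sip.example.com:7443');
const ua = new JsSIP.UA({
  sockets: [socket],
  uri: 'sip:agent42@example.com',
  password: 'secret',
  register: true, // the agent registers in advance and waits for calls
});

ua.on('newRTCSession', ({ session, originator }) => {
  if (originator === 'remote') {
    // An incoming call routed from the queue – answer it with audio only
    session.answer({ mediaConstraints: { audio: true, video: false } });
  }
});

ua.start();
```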

Connecting? WebRTC?

When it comes to actual users, WebRTC doesn’t get them “connected”. At least not from a signaling point of view.

What WebRTC does is negotiate the paths that the media will use throughout the session. That’s the “offer-answer” (or JSEP) messages that pass from one WebRTC entity to another. And even that isn’t sent by WebRTC itself – WebRTC creates the blob of data it wants to send and lets your application send it in any way you see fit.
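In code, that “blob of data” is just a session description that your application ships however it likes. In this sketch, sendToOtherSide() is a hypothetical stand-in for whatever signaling channel you picked:

```javascript
// WebRTC creates the offer; shipping it to the other side is your application's job.
// sendToOtherSide() is a hypothetical stand-in for your own signaling channel
// (WebSocket, HTTPS, a message queue – WebRTC doesn't care).
async function startSession(sendToOtherSide) {
  const pc = new RTCPeerConnection();
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  // The "blob" – an SDP session description your app forwards to the other entity
  sendToOtherSide({ type: pc.localDescription.type, sdp: pc.localDescription.sdp });
  return pc;
}
```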

Still confused? There’s a course for that – my online WebRTC training. The first module (out of eight modules) is free, so go learn about WebRTC.

Get a WebRTC training

The post How does WebRTC connect people? appeared first on BlogGeek.me.

Why is WebRTC winning over its (non)competition?

Mon, 03/18/2019 - 12:00

WebRTC wins over competition because there is no competition – browsers offer only WebRTC as a technology for web developers.

It was raining and miserable this last Saturday. I had lots of ideas for articles to write for BlogGeek.me in my backlog, but none of them really inspired me to action. The 8yo went to his cousin. The wife had her own things to do. My 11yo daughter was bored to death. She comes to me and says: “Can we do a trip outside to the park? I need some fresh air.” How could I answer other than saying yes?

The rain stopped a bit, so we went outside. What she really wanted wasn’t fresh air, but a chaperone to the closest candy vending machine. They are having a game at school for Purim, where she needs to bring small presents and candies to another kid in her class without that kid knowing who is pampering her. She needed an extra candy.

How is this related to WebRTC? It isn’t.

When I asked her about her plans for this game, she mentioned the trinket she planned on giving today –

2 mechanical pencils.

And that’s definitely WebRTC related.

A quick conversation ensued between me and my daughter – are these 0.5 mm or 0.7 mm point type? My daughter went on to explain that it might even be 0.9 mm.

So many alternatives.

Competing standards

It got me thinking:

With analog video recording we had VHS and Betamax.

Paper size? A4 and Letter.

Power frequency? 50 Hz and 60 Hz.

With VoIP signaling we had H.323 and SIP. And also XMPP.

Audio and video codecs? A shopping mall of alternatives.

Web browser streaming? HLS and MPEG-DASH.

Inches and Meters. Left side vs right side driver in cars.

The list is endless.

WebRTC standard

But browser based real time media communications?

WebRTC.

There. Is. No. Other. Alternative.

We had that short romance around ORTC, which ended with ORTC dead and its main concepts just wrapped back into WebRTC.

What other technology would you use or could you use inside a browser to do a video call?

Nothing.

Just WebRTC.

The other alternatives just don’t cut it (including what Zoom is presumably doing).

  • You want to build a real time service
  • It needs to run in the browser
  • You use WebRTC

What does that mean exactly? It gives us a kind of a virtuous circle.

  • You want to build a real time service
  • Looking at alternatives, you find WebRTC
  • There’s a vibrant community around it (because of web browsers)
  • Alternatives are limited proprietary solutions or old open source
  • You pick WebRTC
  • Adding to its popularity, adoption and ecosystem

For the most part, there’s no question if you should select WebRTC these days. There’s also no question about what the alternatives are (there usually are none). It isn’t a question if WebRTC is getting adopted, used, growing or popular.

When our window to the world is the browser, then WebRTC is what you use.

For mobile apps or other devices, the need for browsers or just having an ecosystem around the technology picked translates again to WebRTC.

Thinking of using real time media technology? That’s synonymous to WebRTC.

Want to learn more about WebRTC? Check out the first module of my online course – it is free.

Start learning WebRTC

The post Why is WebRTC winning over its (non)competition? appeared first on BlogGeek.me.

Are you blocked by the rules of your upbringing in your WebRTC application?

Mon, 03/11/2019 - 12:00

I know I am. I am constantly surprised what people are doing with WebRTC.

Here’s something I hear a lot:

How do you make a call with WebRTC?

Well… you don’t. Not really. And in many scenarios – that term call, or dialing, or answering – has no real meaning.

Here’s a funny opposite for you:

Kids in front of old phones don’t know what to do. It isn’t “natural”. Guess what? Nothing is. The things that are natural to you are things you’ve learned, and are now used to. They are a set of rules in your upbringing.

If you come from a VoIP background, then WebRTC brings with it quite a challenge to your world. I know – I had 13 years of VoIP background before WebRTC was announced. Since that announcement, I’ve been surprised time and again by what people are doing with WebRTC. Especially people who shouldn’t even be able to use it because they don’t know VoIP well enough.

Coming from VoIP? Interested in streaming? Broadcasting? Some other communication use cases? Tomorrow I am hosting a free webinar – Google Does Gaming: WebRTC Man-to-Machine Use Cases

Register to the webinar

When we all first started out in this adventure called WebRTC, what we saw was video calling. It was all about face to face meetings. It took time to think about WebRTC in other settings and for other use cases.

And here we are. Years later, dealing with WebRTC in the aid of cloud gaming. Google used WebRTC in Project Stream, where they showcased playing the game Spartan through a web browser – the game itself was rendered in Google’s cloud.


(that’s a screenshot of one of my slides for tomorrow’s webinar)

Who would have thought WebRTC would be used for that?

Anyways, if you come from a VoIP background, here are some aspects of WebRTC you’ll need to unlearn and relearn – I am still grappling with them myself every once in a while:

Signaling? What’s “Signaling”?

With any other VoIP protocol out there, it seems like we’re starting off with signaling.

H.323? Signaling.

SIP? That’s signaling.

XMPP? Ditto.

WebRTC? Nope. No signaling. Sorry.

What does that mean exactly? That you can use whatever signaling mechanism/protocol you see fit. That’s assuming you can get it to run inside a web browser or wherever it is your application needs to operate.

SIP, which is the most popular VoIP signaling protocol out there, is probably overkill for a lot of WebRTC services. I tend to look at it as a hindrance when I see it in architectures – I find myself asking time and again why it is there, to make sure there’s a real need beyond someone simply needing signaling for their WebRTC application.

You. Don’t. Answer. Calls.

There’s no such thing as a call while we’re at it.

I remember doing a live WebRTC training a couple of years back. I had to hammer out of the people the need to ask incessant questions about dial, answer, mute, hold and a bunch of other paradigms they thought were golden rules in communications.

If you feel that way too, then look at that video at the top of this article again. What made sense 20 years ago doesn’t hold water today.

WebRTC isn’t fixed in any specific concept of how “calls” are made. I prefer using the term session and dealing with the initiation part of it on a case by case basis.

If there’s no need for dialing or answering – just don’t force it on your WebRTC solution.

It isn’t only Google

Most days of the week, I like thinking of WebRTC as the source code that resides on webrtc.org. That’s the codebase Google is maintaining and putting inside its Chrome browser.

The thing is, many end up modifying it for their own needs. They:

  • Port it over to mobile
  • Fix private bugs in it
  • Add their own minor modifications to it where needed
  • Seriously change it (check out what Discord did)
  • Modify the Chromium version, replace it inside Electron and release their own stuff

There are some really interesting “mods” to the vanilla WebRTC implementation out there, usually held privately for internal use of companies. In many ways, this is a shortcut to building your own media engine from scratch.

There’s more than one way

What I like about WebRTC is that usually, there’s a single way of doing things with it: everything is encrypted – you can’t override that; it defaults to multiplex and bundle its media connections; the list goes on.
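You can see those fixed decisions in the SDP the browser generates. Here is a quick sketch that just peeks at an offer – the two checks below should come out true in a compliant implementation:

```javascript
// Peek at a browser-generated offer to see WebRTC's non-negotiable defaults:
// a DTLS fingerprint (encryption is mandatory) and a BUNDLE group (multiplexing).
async function inspectDefaults() {
  const pc = new RTCPeerConnection();
  pc.addTransceiver('audio');
  pc.addTransceiver('video');
  const { sdp } = await pc.createOffer();

  console.log('DTLS fingerprint present:', /a=fingerprint:/.test(sdp));
  console.log('BUNDLE group present:', /a=group:BUNDLE/.test(sdp));
  pc.close();
}
```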

How you use it is a totally different story.

Each SFU implementation is different than the other. There are different ways to record a session. Different ideas and approaches to broadcasting at low latency.

The “right” answer differs a lot not only based on the use case, but also on the business model, the developers available, the DNA of the company, etc.

Wasteful can be just fine

There’s also a school of thought that never really existed with VoIP: the “good enough” approach – one where we’re just fine with not optimizing everything and leaving things in a kind of mediocre state that is good enough for what we’re trying to do. It may eat up too much bandwidth or tax the CPU. Or just not be how things are done around here. But it works. Good enough.

Heck – the default WebRTC implementation does it on its own, deciding to waste 1.7Mbps for a VGA resolution encoding instead of limiting it to 800kbps or less. Such a waste of good resources.
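If you do want to rein it in, capping the outgoing bitrate is a short exercise with RTCRtpSender.setParameters(). The 800000 bps figure below simply matches the example above – a sketch, not a recommendation:

```javascript
// Sketch: cap the outgoing video bitrate instead of letting it balloon.
// 800000 bps matches the example figure above; tune it for your own use case.
async function capVideoBitrate(pc, maxBitrateBps = 800000) {
  const sender = pc.getSenders().find((s) => s.track && s.track.kind === 'video');
  if (!sender) return;

  const params = sender.getParameters();
  if (!params.encodings || params.encodings.length === 0) params.encodings = [{}];
  params.encodings[0].maxBitrate = maxBitrateBps;
  await sender.setParameters(params);
}
```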

I learned to love this approach (and then try to optimize it with my clients).

How do you think about WebRTC?

What about you?

What mistakes you see people make when thinking about WebRTC that fits the web or VoIP better?

What things do you need to unlearn about WebRTC?

Coming from VoIP? Interested in streaming? Broadcasting? Some other communication use cases? Tomorrow I am hosting a free webinar – Google Does Gaming: WebRTC Man-to-Machine Use Cases

Register to the webinar

The post Are you blocked by the rules of your upbringing in your WebRTC application? appeared first on BlogGeek.me.

When will WebRTC 1.0 be available?

Mon, 03/04/2019 - 12:00

Some believe WebRTC isn’t ready. I think it is ready. But when will WebRTC 1.0 be available?

Ready or not, WebRTC is here. The thing is, we still don’t have a closed standard specification we can all print and take on a plane to read for our enjoyment. There are drafts – but nothing that is final.

And once final, does it mean that it is available?

There are 3 parts that need to be addressed to answer this question. I’ll deal with only two of them (skipping the IETF one):

  1. When will the relevant WebRTC draft become IETF RFC
  2. When will the relevant WebRTC draft become W3C recommendation
  3. When will browsers implement the new specification

Want to learn more about WebRTC, the various components in its specification and what compute power you need for each WebRTC server? Try out my free video course:

Learn about WebRTC servers

WebRTC standardization

WebRTC as a standard is built out of two components:

  1. What goes on over the network – that’s what the IETF is working on
  2. What APIs can developers use on top of a web browser – that’s what the W3C is working on

Most of the industry is already viewing WebRTC as a done deal – so much so that the IETF already has an RFC for SIP over WebSocket. The only reason to have such an RFC is to be able to use SIP inside a browser, and the only way to use SIP inside a browser with media being sent or received would be by way of WebRTC. The people working at the IETF were so certain WebRTC would get an RFC of its own that they published that one back in 2014 already (5 years ago!).

Each of these organizations has its own set of rules, policies, governance and flow.

I’ve tried to keep the standardization of WebRTC at arm’s length. In the past I’ve been part of standardization processes related to H.323 and 3G-324M, going to ITU-T and 3GPP standardization meetings as well as acting as a co-chair of the 3G-324M activity group at the IMTC (dealing with interoperability). It is tedious work that combines technology with politics. As fun as it is (at times at least), dealing with it as an employee of a company is different than doing it as a consultant. The value for me just wasn’t there.

For vendors? If you want to take a driver’s seat at this, and decide what gets more attention, then you should invest time in it.

But where are we with WebRTC then?

W3C WebRTC status

I’ve asked Dominique Hazael-Massieux about WebRTC’s status. He is on the W3C staff, dealing with WebRTC. Here’s what I got –

When it comes to W3C, where the browser WebRTC APIs are being defined, WebRTC is considered to be at the CR stage.

CR means a Candidate Recommendation. We’ve moved from a Working Draft (WD) towards a Candidate Recommendation.

Next up would be PR – Proposed Recommendation, and from there, a Recommendation.

How do we move to the next step?

  1. First the draft needs to be finalized. There are some open issues that need to be closed for that to happen (at the time of writing this, there were 53 open issues)
  2. All the features written in the draft need to be implemented in two independent browsers (this is kinda tricky now that Chrome is gobbling up the market). More on browser implementations later
  3. It needs to be tested for interoperability across browsers. So tests need to be written to validate that

That first one is “easy”. Get the people writing the spec into a room. Have them agree. Then have someone write down the agreement on “paper”. Get everyone to read it. And agree again. Rinse and repeat. It’s never easy.

That second one of implementing in browsers? That’s also not easy. They have other things on their minds as well. And WebRTC is pretty darn complex to implement. But we’re getting there.

That third one of interoperability testing? With a test suite. That tests for the various features? This is downright suicidal. And daunting.

All that work needs to be done for “free”. There’s no direct money to be made out of it. But lots of hours need to be spent by many people to get it done. We’re getting there, but we’re not there yet.

WebRTC 1.0 browser implementation

And then there are the browser implementations.

The specification is only as good as its implementations. People always complain when I suggest following the Chrome behavior in WebRTC as opposed to implementing against the specification. That’s where theory and expectations meet reality.

At the end of the day, your service will need to:

  1. Run inside web browsers; and/or
  2. Integrate/port/embed a WebRTC SDK in your app

In the first case, Chrome wins on market share; Microsoft Edge will be migrating to Chromium. And for most use cases, Chrome is the first browser to target anyway.

In the second case, if you are using the code in webrtc.org for your app, then you are effectively basing your app on Chrome’s WebRTC implementation.

Better go with what’s available now than what will be ready some time in the future.

In the past, the changes we’ve seen in browser implementations of WebRTC revolved a lot around media optimizations and interoperability across browsers. What we are seeing now a lot more is changes in the API layer, where browsers are shifting towards the WebRTC 1.0 specification. This is necessary because:

  • Without spec compliant implementations we can’t move WebRTC from CR to PR
  • People still (rightfully) expect to have the specification implemented by browser vendors
  • It is about time…

These changes mean one sad thing though. You can be certain of one thing – during 2019, WebRTC implementations in browsers are going to break existing apps multiple times. This is due to the changes taking place. We are seeing a migration from Plan B towards Unified Plan, modifications to the connection state machine, and an experimental implementation of mDNS. There’s more that I probably forgot and more ahead of us still.
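The Plan B to Unified Plan migration is a good example of what keeping up looks like in practice. At the time, Chrome exposed a transition flag for it – a Chrome-specific migration aid rather than part of the specification, so treat the sketch below as a snapshot of that period:

```javascript
// Snapshot sketch from the Plan B → Unified Plan transition (the sdpSemantics
// option was a Chrome-specific migration aid, not part of the WebRTC spec).
const pc = new RTCPeerConnection({
  sdpSemantics: 'unified-plan', // or 'plan-b' while porting legacy code
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
});
```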

The only certainty is that nothing is certain. You’ll need to continue investing in aligning with the browser implementations with each and every browser version release.

When then?

The current intent is to be able to get to the PR stage for WebRTC somewhere in Q3 2019. Will it be postponed further? I don’t really know.

Interestingly, work has started in parallel about WebRTC NV – what comes next. I’ve covered the WebAssembly in WebRTC part of it in the past.

Want to learn more about WebRTC, the various components in its specification and what compute power you need for each WebRTC server? Try out my free video course:

Learn about WebRTC servers

The post When will WebRTC 1.0 be available? appeared first on BlogGeek.me.

The five make-or-break WebRTC challenges you need to address

Mon, 02/25/2019 - 12:00

WebRTC is a great piece of technology, assuming you can develop a coherent strategy on how you plan on using it.

There are two extremes happening in the enterprise communication space, and they are quite opposite in nature. On one hand, companies are striving towards more automation and this is coming to their contact centers by way of machine learning and bots “replacing” humans. On the other hand, many of us are striving for better and more meaningful communications. Be it for long distance relationships (personal as well as business ones) or by the use of machine learning (again) and context, to guide us through an interaction – being able to know beforehand the intents of people for example.

Enter WebRTC, which enables communications to take place anywhere – be it a mobile application, a physical device or a modern web browser. What WebRTC brings with it is better context for sessions and a lower barrier of entry for enterprises to make use of this technology. Some enterprises use it to improve business agility or lower their operating costs. Others use it to create new businesses never before seen or to improve the communications with their customers or peers in the industry.

We are now 7-8 years since the announcement of WebRTC (depending on who’s doing the counting and from which date), but in many ways, a lot of enterprises (I don’t want to say most) have failed to capture the value they initially envisioned from using WebRTC. In many cases, the lack of any thoughtful strategy created a rush towards initiatives that never really matured.

Through my work with many clients on their WebRTC initiatives along with discussions with many others on their projects and services – failed as well as successful ones, I’ve seen a few challenges that crop up consistently across such initiatives.

#1 – Where to begin?

WebRTC is a versatile and powerful building block in your arsenal. This means that you can do a lot with it. That range of utility can be overwhelming, oftentimes leading to wasted resources. The other problem is that WebRTC can’t do everything, while the expectations of it are rather high. This leads to requirements and plans that are often not grounded in what can be done in reality or within the allocated budget and resources.

Deciding what to build using WebRTC requires an understanding of the capabilities and limitations of WebRTC, coupled with a clear view of the communication problems you are trying to solve for your customers. There’s a lot of feature creep happening when it comes to WebRTC. I find myself asked about a simple video chat service for 2 people, but once you dig a layer deeper, you see requirements for group video calls, recording and even broadcasts as part of the project. Being able to see the full picture, and map it back into requirements and a roadmap comprised of multiple phases, is an important first step in any WebRTC initiative.

There are a few other things to keep in mind –

Integration with existing infrastructure

Oftentimes, you’d be planning on adding WebRTC to an existing service. This can happen in many ways:

  • A chat application that gets voice/video interactions as an additional feature
  • An existing telephony/communication service that needs to get guest access via the web browser
  • Just a regular self service application with a new option to connect to the contact center via the application itself (instead of using expensive 1-800 numbers)

This requires extra care in how WebRTC gets introduced as it isn’t going into a green field where anything you pick immediately fits your needs.

Cloud migration and transformation

WebRTC was born in the cloud era. Many of its deployments are cloud based.

Most of its uses in non-cloud environments are actually enabling guest access from the public cloud towards the internal communications infrastructure. In other cases, it just needs to integrate with on premise data centers for things like users database and policies.

This places an additional strain on enterprises who are just starting out their migration towards the cloud.

Not your regular web application

WebRTC is different from other web technologies. It has a lot more moving parts to get to a minimum viable product, and then there’s that media quality issue to contend with. Its deployment needs to start as a global one for many of the use cases.

What are the server side components needed for WebRTC? Learn that in my free online mini video course.

Register now

#2 – Who should I have on my team?

Putting a team of developers on a WebRTC initiative is a daunting task. There are multiple disciplines they need to come from and the myth of a full stack developer that can do it all gets stretched even further here, as that superhero needs to also know about media processing, WebRTC APIs, browser changes and standardization processes.

Here’s what I wrote a while back about WebRTC developers after discussing the topic with a few people who manage/hire them.

Some other aspects you’ll need to decide on:

Internal vs External

Will you be relying on your existing engineering team or will you be outsourcing some/most of the project to an external vendor? Assuming you decide to go for an external vendor, who will maintain the service on an ongoing basis?

Multidisciplinary

The team in question needs to be multidisciplinary, capable of handling anything from media processing, to mobile app development, to backend integration work and ongoing DevOps and maintenance.

There needs to be a skilled product manager and a system architect who understand WebRTC enough to know what is possible and what’s… less possible. What incurs risk and where quick wins can be found.

Which new skills are needed?

Your teams. Do they have the necessary skills?

Here it goes to a lot more than just developers. There are product managers, testers, DevOps people, support staff.

Do I need to enhance some in-house capabilities?

What skills are you missing? If you operate everything on premise and WebRTC is forcing you to start using cloud services, then this is an in-house capability you will need to start contending with.

The same goes for mobile application development, going global in how you deploy servers, etc.

Looking to beef up the WebRTC experience and skills of your team? Check out my WebRTC training (the first module is free).

Enroll to my course

#3 – What technology stack do I use?

Different companies have different DNA to them. That often dictates what their technology stack will look like and how they’d prefer to partner/hire.

There are three main aspects that need to be taken into account when picking a WebRTC technology stack:

Open source / commercial

You might favor open source components and frameworks for your WebRTC service or you might be someone who prefers a commercial offering with a company focused on that product development.

Both alternatives can come with support contracts but companies seem to prefer one or the other.

Which alternative will it be for you?

Hosted or on prem?

These two approaches mean different technology stacks, levels of expertise and staffing on your end.

Are you planning on hosting this on your own, in your data centers, on bare metal or in the cloud? Or are you going to have someone else host the service for you? Which parts of it will be managed and which will be self managed?

Acquisitions

WebRTC is still relatively new, with the vendors ecosystem dynamically shifting. There have been quite a few acquisitions in this space. These acquisitions sometimes removed solutions from the market, made them weaker or made them stronger.

When selecting a technology stack, the potential acquisition scenario of the vendors in question needs to be taken into consideration as well.

Fit for the requirements

This one seems silly but it is highly relevant and important.

Are you sure the technology stack you’ve selected can do the things you want it to do?

I’ve seen too many cases where the framework used wasn’t up for the task. Things like picking a signaling-only framework when media servers need to be used, or picking a CPaaS vendor when the scenario requires too much control over media processing, etc.

Just look at what WebRTC signaling alternatives people have these days.

#4 – How do I know it is working?

You built it. Tested it in the lab. Did a call or two with your colleagues. Went home and showed it to a friend.

Does it scale? Will it work properly?

I had a customer recently who is developing a group video calling feature. He wanted to test the service with around 20 people in a single room. It wasn’t easy to find 20 people to run that one scenario. And when he did – things broke and needed fixing. So he had to find 20 people to run it again once a fix was put in place.

Testing is often neglected when it comes to WebRTC applications and it shouldn’t be. Take this one seriously. You can cobble up a testing environment on your own (there are even a few open source projects that can help you out here) or you can just use testRTC (I am a co-founder there) and start running tests within a couple of hours.

#5 – What do I track?

Tracking websites is rather “easy” these days. Use Nagios, Cacti, Zabbix or any other open source tool that sounds like a disease. Or use something like New Relic or DataDog to do it managed in the cloud.

Problem is, these tools only cover machine metrics and performance, and they don’t really watch for the media and its quality (or even whether a session got connected, for that matter). There’s no end to end monitoring/tracking.

You will need to collect WebRTC related metrics from either the backend or the devices (or both). You’ll need to track it for quality.
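Collecting those metrics on the device side usually starts with getStats(). Here is a rough sketch that samples a few quality-related fields and ships them to a hypothetical /metrics endpoint – both the endpoint and the exact fields you keep are up to you:

```javascript
// Rough sketch: periodically sample WebRTC stats and ship them for quality tracking.
// The /metrics endpoint and the chosen fields are illustrative, not a standard.
function trackQuality(pc, intervalMs = 5000) {
  return setInterval(async () => {
    const report = await pc.getStats();
    const sample = {};
    report.forEach((stat) => {
      if (stat.type === 'inbound-rtp' && stat.kind === 'video') {
        sample.packetsLost = stat.packetsLost;
        sample.jitter = stat.jitter;
      }
      if (stat.type === 'candidate-pair' && stat.nominated) {
        sample.roundTripTime = stat.currentRoundTripTime;
      }
    });
    navigator.sendBeacon('/metrics', JSON.stringify(sample));
  }, intervalMs);
}
```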

You’ll need to monitor your service (we’re doing a webinar on WebRTC monitoring next month @ testRTC – register to join).

How can I get help?

There are various ways in which you can get some help for what you are doing.

The best approach is probably to get some external assistance in what you are doing as part of your research and planning – even before you go outsourcing the whole project (if that’s the path you are going to take).

You can contact me for that, or go to other consultants. Some of the outsourcing vendors offer such consultancy service as well. Whatever you do – don’t go it alone. At least not in the planning stages.

The post The five make-or-break WebRTC challenges you need to address appeared first on BlogGeek.me.

Who needs QUIC in WebRTC anyway?

Mon, 02/18/2019 - 12:00

Is QUIC in WebRTC a solution looking for a problem or a real requirement?

QUIC is the next evolution of browser transport protocols. I’ve written about it in 2015, when Google started experimenting with the idea of replacing SCTP with QUIC for data channels. Three and a half years later, and we still don’t really have QUIC in WebRTC – at least not until last month. Google decided to come out with a new RTCQUICTransport for WebRTC in Chrome and wrote a post about it on their Chrome Developers site.

UDP, TCP, SCTP & QUIC. How do these transport protocols compare?

Download my free Transport Comparison Table

What is QUIC again?

I am not going to go into the technical details – I’ve done that in the past already, and there are other places for that. I want to focus here on the bigger picture.

If you look at the timeline of web transport protocols, it looks something like this:

We had TCP and UDP for some 40 years now. HTTP 1.1 is defunct, but runs most of the internet at the moment. HTTP/2 is growing nicely in adoption. According to W3Techs, we’re standing on ~33% adoption for HTTP/2 (Feb 2019):

HTTP/2 came to be after Google came out with SPDY, a “fix” for HTTP and got parts (most?) of it wrapped into HTTP/2 to get it standardized.

HTTP 1.0, 1.1 and HTTP/2 are all built on top of TCP. Signaling, which requires reliability and causality, won’t work on top of UDP without adding these characteristics. After around 40 years, it is time for a refresh. Enter QUIC. It uses UDP and works in ways that are better than TCP for signaling purposes.

QUIC follows a similar path – Google created it to “fix” the ailments of HTTP over TCP. The end goal here is to turn it into HTTP/3.

Since QUIC is built on top of UDP, it can handle a lot more than just HTTP signaling. Which is why it is becoming an interesting topic for WebRTC –

Where QUIC in WebRTC fits exactly?

This is the real question. My answer to it in 2015 was this:

There are two places where QUIC fits in WebRTC:

1. In the signaling, which is out of scope of WebRTC, but interesting, as it enables faster connection of the initial call (theoretically at least)

2. In the data channel, by replacing SCTP with QUIC wholesale

Google’s answer in their post on Chrome Developers blog?

Why?

A powerful low level data transport API can enable applications (like real time communications) to do new things on the web. You can build on top of the API, creating your own solutions, pushing the limits of what can be done with peer to peer connections, […] WebRTC’s NV effort is to move towards lower level APIs, and experimenting early with this is valuable.

Why QUIC?

The QUIC protocol is desirable for real time communications. It is built on top of UDP, has built in encryption, congestion control and is multiplexed without head of line blocking.

Hmm… somehow they lost me in that explanation somewhere. This is about real time communications. It is about doing stuff on top of UDP. And it is about low level APIs. Great. Why do I need it again? For voice and video I already have SRTP in WebRTC. The SCTP data channel works quite well. So where exactly do I need this great thing called QUIC in WebRTC?

I think there’s merit, but it is in totally different places.

QUIC is about having a single, modern, common transport protocol for the web.

Here’s what we do today with WebRTC in terms of transport protocols:

  • HTTPS, HTTP/2 or WebSocket for our signaling, which runs over TCP/TLS
  • SRTP for media, which runs over UDP
  • SCTP for data channels

There’s this popular drawing from the High Performance Browser Networking book that shows this amalgamation of protocols:

So many transport protocols in a single standard. This makes implementations of the backend more complex, as servers need to be able to understand all of these transport protocols as well. One could say this amalgam is already common and widely deployed enough that QUIC here is a solution looking for a problem, but the developer in me can appreciate unifying all of this functionality over a single transport protocol.

Here’s how life will look like with QUIC in WebRTC:

  • QUIC is being planned for HTTP/3, so it can be used for WebRTC signaling moving forward (replacing both WebSocket and HTTP/2)
  • QUIC is being looked at as an SRTP replacement, which means sending real time audio and video can take place on top of it
  • QUIC can replace SCTP for the data channels (that was the obvious use of QUIC in WebRTC to begin with)

Putting it into an architecture diagram of my own, we get this:

Much simpler.

What do we gain?

Theoretically, we can multiplex signaling, voice, video and low latency data in a single QUIC connection. That’s powerful:

  • We can now tunnel or proxy all that WebRTC traffic with a lot less logic, boxes and code in our servers
  • For smaller deployments, we might not even need multiple servers – just the one that handles it all
  • It makes developing web servers that handle media and data channels simpler, as they need to support only one transport – QUIC, instead of having to implement multiple transports

What do we lose?

This isn’t going to happen in a day. Getting there is going to be a journey of multiple years and people will complain and whine about it along the way. Similar to what is happening today with WebRTC – whenever something is modified or something new is added – things tend to break (either because APIs get deprecated, behavior changes or just pure bugs).

Moving to a QUIC based stack is a huge undertaking – for the WebRTC stack, browser vendors and all the related internet infrastructure vendors.

Connecting to other realms such as SIP? That’s going to get even harder: as we move away from the domain of SRTP towards QUIC, more translations and protocol interworking will be required.

The question then becomes – is it worth all the fuss? Are we gaining enough to make this effort worthwhile?

Can you use QUIC in WebRTC now?

To some extent you can. Check out the recent post on QUIC @ webrtcHacks for that.

I will be adding a new dedicated lesson to my online WebRTC course about QUIC – my goal is to have the most up to date and relevant WebRTC training curriculum in the market, so keeping up with these changes comes with the territory.

Interested in WebRTC? Check out my WebRTC course.

The post Who needs QUIC in WebRTC anyway? appeared first on BlogGeek.me.
