Future of CPaaS; a look ahead
Looking at the future of CPaaS, the lines are blurring in the cloud communication API future. And this isn’t only about UCaaS and CCaaS.
I’ve been asked recently by multiple clients to analyze for them the future of specific technologies they are developing. The process was very interesting and provided a lot of insights – some of them things that weren’t obvious to me to begin with.
It got me thinking. What if I do the same around CPaaS? Looking at what the future of cloud communication APIs looks like, what vendors are after, what they pitch and brief analysts about, and what their customers are looking for.
I decided to do exactly that, ending up writing this article and creating a new comparison sheet and eBook (this eBook/sheet combo can be found in my WebRTC Course paid-for ebooks section).
–
When looking at what the future holds in the CPaaS domain, there are many aspects to review. If this topic interests you, then you should probably also read these other 4 articles I’ve written previously:
- 7 CPaaS Trends to Follow in 2018 – the perspective I had a year ago. Mostly still true, but I think we’re accelerating the pace of change and evolving this a lot further
- What Comes Next in Communications? – a look at how CPaaS, UCaaS and CCaaS vendors are looking at the market and at the blurring lines between them
- CPaaS differentiation in 2019 – because it shows how different vendors try to operate differently and rise above the noise
- Twilio Signal 2019 and the future of the programmable enterprise – a summary of the recent Twilio Signal event. Important simply because Twilio is the market leader and innovator in this domain
Now that we’re on “the same page”, here’s where I see things heading for communication APIs.
Want to figure out exactly what each vendor is doing in each of these future trajectories? You can purchase my CPaaS Vendors Comparison.
nocode
There’s this new trend of making software development all-encompassing. It boils down to a single non-word: #nocode
Here are some of the things people like saying about this trend:
As creating things on the internet becomes more accessible, more people will become makers. It’s no longer limited to the <1% of engineers that can code resulting in an explosion of ideas from all kinds of people. #NoCode
— Shaheer Ahmed ✪ (@Boringcuriosity) September 13, 2019
The best code you could write is #nocode at all
— Denis Anisimov (@dbanisimov) September 14, 2019
Interestingly, the place where you see people talk the most about #nocode is in the third party API space. Now that we’ve made integrating with third parties simpler via APIs, it is time to make it simpler still by requiring fewer development skills to do so.
This has been a long time coming to the communication API space as well.
We’ve had visual IVRs for quite some time, and we’ve seen in the past 2-3 years many of the CPaaS vendors adding visual drag and drop tools. Twilio calls their tool Twilio Studio, while the rest of the industry settled on the name Flow.
Who is doing it today with CPaaS?
- IMImobile IMIconnect
- Infobip Flow
- MessageBird Flow Builder
- Plivo PHLO
- Twilio Studio
- Voximplant SmartCalls
Others, like Nexmo, opted to release a Node-RED package instead, giving developers more flexibility in the integration points the flow tool has to offer them.
What I fail to understand is why so little activity is taking place in the serverless trend. It is as if CPaaS vendors knowingly decide NOT to offer these and instead jump directly towards the visual drag & drop flow tool.
Look at the diagram above. It shows why I believe it is a mistake to skip the serverless opportunity. We started with APIs, to simplify the task of in-house development, then moved to the cloud so we don’t need to install complex systems. We’ve seen a shift towards serverless (think AWS Lambda), where developers can focus on their use case and not think too much about the whole non-functional infrastructure stuff. Then came the visual drag and drop tools, which made life even simpler, as for many scenarios there is no more need to code anything – just express your intents by connecting dots to boxes.
Developers end up using ALL of the tools given to them. They will use a visual drag & drop tool to speed up development when the flow is easier to express in that tool. They’ll write code when necessary. And they will use serverless functions to reduce the effort of scaling and maintenance if that is needed. So why not give them all of these tools?
CPaaS vendors are doing APIs and moving towards visual. The serverless part is an internal implementation which most don’t expose to their customers. Why? I am not sure.
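To make the serverless idea concrete, a CPaaS “function” is typically just a small webhook handler that receives a call event and returns call-control instructions, with the vendor handling hosting and scale. The sketch below is a minimal illustration: the XML element names mirror Twilio’s TwiML (`<Response>`, `<Say>`, `<Dial>`), but the handler shape, event fields and phone number are assumptions for illustration only – each vendor defines its own payloads.

```python
# A minimal sketch of a CPaaS "serverless function": a handler that
# receives an inbound-call event and returns call-control instructions.
# Element names mirror Twilio's TwiML; the event fields and number
# below are illustrative assumptions, not any vendor's actual API.
import xml.etree.ElementTree as ET

def handle_inbound_call(event: dict) -> str:
    """Return call-control XML for an inbound call event."""
    response = ET.Element("Response")
    if event.get("business_hours", True):
        # During business hours, forward the call to an agent.
        dial = ET.SubElement(response, "Dial")
        dial.text = event.get("agent_number", "+15550100")
    else:
        # After hours, play a message. Note what's missing: any code
        # for servers, scaling or maintenance -- that's the point.
        say = ET.SubElement(response, "Say")
        say.text = "We are closed. Please call back tomorrow."
    return ET.tostring(response, encoding="unicode")

print(handle_inbound_call({"business_hours": False}))
# -> <Response><Say>We are closed. Please call back tomorrow.</Say></Response>
```

A visual flow tool generates something equivalent from boxes and arrows; serverless lets developers write this logic directly when the flow tool falls short, without owning any infrastructure.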
What should you expect in the coming years?
Visual Flow tools will become an integral part of any CPaaS offering, with more widget types being added into these tools – supporting new features, adding new channels or integrating with external third parties.
Omnichannel
Omnichannel is the biggest thing in CPaaS at the moment.
There are two reasons for this:
- SMS is crap. And it is getting worse
- SMS (and voice) is being commoditized. Omnichannel means less churn for CPaaS vendors
Why is SMS crap? Because in the last week or so I’ve received so much spam on SMS related to the election here in Israel that it made that channel useless. I am sure I am not the only one and that this isn’t only in Israel.
SMS is being marketed to marketers as the channel that gets the highest attention rate from the spammed audience. What it gets is the highest deliverability – maybe. Definitely not the highest attention. This makes SMS great for transactional messages but I am not sure how good it is for sales or marketing promotions if done in the current stupid carpet-bombing tactics.
How does omnichannel change that? It doesn’t. But the social networks that act as channels treat their users better than carriers, which means they are guarding the entry to their garden from sales people and marketers, trying to bake the rules of permission marketing into the engagement. This is done by things like manually approving message templates, not letting businesses send unsolicited messages, forcing identity on the sender, allowing users to mark crap they receive as spam, etc.
It does one more thing – it brings the game into a new field which is murkier than SMS today. There are many channels already, with a promise of more channels to come in the future. Will you develop it on your own or rely on a third party CPaaS vendor for that? Most will choose the CPaaS vendor approach.
Timing is also good. Social networks are opening up their APIs, giving CPaaS vendors (and other vendors) access to their users, in an effort to enhance their usefulness to their users and to have more monetization options on their platform. They are doing that while trying really hard not to piss off their users, so spam levels are low and will be kept that way for years to come.
Omnichannel is the leading force of future CPaaS growth. This is where most vendors focus their investment, and where there’s an easy path for migrating SMS revenue/engagement from.
Email
Email was always shunned. Akin to fax. A relic of a bad past.
But it isn’t.
Most of my business revolves around the ability to reach people via email. And it mostly works for me (don’t like my content? unsubscribe).
It isn’t a replacement for SMS messages. Not really. But it has many uses of its own. Especially if you factor omnichannel. Businesses need to communicate with their customers and prospects, and doing that only over SMS or WhatsApp is a limited worldview. There’s email as well.
Some CPaaS platforms already had email integrations and capabilities to some extent. Twilio has taken it to a whole new level with the acquisition of SendGrid. Did Twilio decide on this acquisition to increase their bottom line and appeal to Wall Street? Were they after an operation with less costs attached to it to increase their revenue per share? Was it a genuine strategic move towards email?
Doesn’t matter anymore. Email is part of the CPaaS game. I don’t think many agree with me on that. The reason it is becoming part of CPaaS is that we need to look at communications holistically. As we head towards the enterprise with CPaaS, email is yet another channel of interaction – same as SMS, WhatsApp and others. Being better at email means answering more of an enterprise’s communication needs, which means being more appealing in a vendor selection process.
Email will take a bigger and more important position in CPaaS. The more omnichannel becomes the norm, the more customers will ask about Email support and capabilities.
Streaming media to third parties
We call it AI – Artificial Intelligence. If we’re not overly hyped, then ML – Machine Learning. And if we’re true to ourselves, then most of it is probably statistics, sometimes sprinkled with a bit of machine learning.
CPaaS is too generic and broad to be able to cover all possible algorithms and models. What do you want to do with that recorded voice call? Transcribe it? Translate to another language? Maybe do some emotion analysis? Find intents? Summarize? Look for action items?
Too many alternatives, with too much data to train from to get a good enough model. And then each scenario needs its own data to train for and get a specialized model to use.
The end result?
CPaaS vendors offer a few out-of-the-box integrations with popular features and frameworks. The known culprits are speech-to-text and text-to-speech. Or just connectivity to AWS or Google machine learning algorithms in the speech analytics domain.
Another approach which is gaining a lot of traction is to be able to stream the media itself to any third party – be it an on premise/proprietary machine learning model or a cloud based machine learning API. Usually over a WebSocket, but sometimes on top of other transport mechanisms.
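To sketch what such a media stream looks like on the wire, the sketch below frames raw audio chunks as JSON messages suitable for a WebSocket text frame. The envelope (event name, sequence number, base64 payload) is loosely modeled on vendor “media streams” APIs, but the exact field names here are assumptions, not any specific vendor’s format.

```python
# A sketch of framing a forked media stream for a third party over a
# WebSocket. The JSON envelope below is an illustrative assumption,
# loosely modeled on vendor "media streams" APIs.
import base64
import json

def media_frame(seq: int, pcm_chunk: bytes, call_id: str) -> str:
    """Wrap one raw audio chunk in a JSON message for a WebSocket."""
    return json.dumps({
        "event": "media",
        "call_id": call_id,
        "sequence": seq,
        # Binary audio is base64-encoded so it survives a text frame.
        "payload": base64.b64encode(pcm_chunk).decode("ascii"),
    })

frame = media_frame(1, b"\x00\x01\x02\x03", "CA123")
decoded = json.loads(frame)
assert base64.b64decode(decoded["payload"]) == b"\x00\x01\x02\x03"
```

The receiving side – an on-premise model or a cloud speech API – simply consumes these frames in order, which is what makes the integration so generic: the CPaaS vendor doesn’t need to know what the machine learning on the other end does.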
The name of the game here? Simplicity and real time.
Enabling easy access to the media streams is key. The easier it is to access the media streams and integrate them with third parties that do machine learning the more attractive the CPaaS vendor will be moving forward.
Chatbots and voicebots
The digital transformation of enterprises is a transition that has been going on for over a decade now and will continue for many years to come. Part of that transition is figuring out how businesses communicate with users. Part of that communication needs to be relegated to bots.
Why?
- Because as a business we want greater scale. The more we can automate, the more we can accomplish at a lower price, with less friction and fewer mistakes
- Because users seem to prefer self service in many cases. “Empowering” users to do more by having a lot of their interactions handled by bots helps that
- Interaction interfaces are moving from button clicking towards voice interactions. And text is the main form of communications on social networks (I am ignoring emojis and gifs here)
I’ve written about this trend and its reasoning when reviewing the two recent acquisitions of Cisco and Vonage in this space.
There are startups focusing solely on the bots industry, which is great. But in many ways, this is part of what a CPaaS vendor can offer – enablement of communications at scale.
Some CPaaS vendors today integrate directly or indirectly with bot frameworks such as Dialogflow or have built their own bot infrastructure. Moving forward, expect to see this more.
Enabling easy creation and configuration of chatbots and voicebots will be an important feature in CPaaS. The better tooling a CPaaS vendor has in this space, the easier it will be for it to retain enterprise customers looking to better communicate with their users.
UCaaS and CPaaS
Acronyms might be confusing in this section and the next, so follow closely (or skip altogether).
UCaaS vendors are looking at CPaaS as a potential growth opportunity.
Vonage has seen that first with the acquisition of Nexmo.
Since then we’ve had Cisco acquire Tropo (and botch that one), RingCentral introducing developer APIs and 8×8 acquiring Wavecell.
There are definite synergies at the infrastructure level of UCaaS and CPaaS, though it is a bit less obvious what synergies there are on the frontend/application/business side. They do exist, but are a bit harder to see.
UCaaS vendors are adding APIs and points of integrations to their service because it makes sense. Everyone’s doin’ it in one way or another. It isn’t CPaaS but in some minor cases it can replace the need for using CPaaS.
What you don’t see, is CPaaS vendors heading towards UCaaS. Yet.
And you don’t see any successful independent UCaaS vendor using a 3rd party CPaaS vendor to operate all of its communication infrastructure. Yet.
For UCaaS, CPaaS is a growth potential. For CPaaS, UCaaS is just another use case. The lines are blurring between these two domains but not enough to matter.
CCaaS and CPaaS
Cloud contact centers take the exact opposite path to UCaaS.
Many of the cloud based contact centers are using CPaaS and not their own infrastructure.
Twilio decided to build a contact center solution – Twilio Flex. In a way, it competes with some of its own customers. As successful companies grow large, they go toward adjacencies, and for Twilio the contact center is such an adjacency.
Will Twilio succeed with Flex? Too early to know.
Will more CPaaS vendors introduce contact center solutions? Probably not, but they are being bunched up and consolidated as larger entities – just see what Vonage and 8×8 have been doing in their acquisitions.
Twilio Flex is a singular occurrence. The norm would be other larger communication players who have CCaaS, acquiring smaller CPaaS players. The end result? A blurring of the lines between the various communication vendors.
For Twilio, Flex might be just the beginning. If this bet succeeds, Twilio will find the appetite to look at other adjacent enterprise applications it could build or acquire and make its own.
M2M / IOT
This. isn’t. part. of. CPaaS.
Or is it?
I’ll start by splitting this one into two areas:
- M2M (cellular stuff)
- IOT (messaging between devices)
Twilio has their Programmable Wireless offering, which at its core is a modern M2M solution (for me M2M and IOT are one and the same).
In this domain, communication is needed between devices. Less human intervention for the most part, so some of the requirements are different.
But this is still communications.
CPaaS will redefine M2M/IOT as one of the use cases it covers. I don’t see a reason why CPaaS vendors wouldn’t take that route in an effort to grow their product line horizontally.
IOT – serverless infrastructure for real-time messaging
I tried to find a name for this subdomain and settled on what vendors like PubNub, Pusher and Ably end up with (or something in-between). There’s a set of vendors offering a kind of general purpose managed messaging that developers can use when they build their apps.
These vendors are settling on something like serverless infrastructure for real-time messaging as a name.
Serverless because it sounds modern, advanced and cool (marketing asked for that).
Infrastructure because this is what they have.
Real-time messaging because this is what they do.
How is that related to CPaaS? It isn’t, directly. Because no CPaaS vendor offers a “serverless infrastructure for real-time messaging”.
Here’s a surprising thing.
All of the CPaaS vendors who support WebRTC have a global backend real-time messaging infrastructure already. It is used to drive signaling across the network.
It might be more centralized. It might be slightly slower. It might be simplistic.
But at the end of the day – it is a serverless infrastructure for real-time messaging.
These CPaaS vendors can slap an API on top of that infrastructure and offer that as yet another distinct service. And they will. Either by inhouse development or through acquisitions.
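At its core, the API such an offering would expose is publish/subscribe. The sketch below is a minimal in-memory version of that core, to show how little is conceptually needed on top of an existing signaling backend; real services (PubNub, Pusher, Ably) add history, presence, authentication and global fan-out, and all names here are illustrative.

```python
# A minimal in-memory publish/subscribe core -- a sketch of the kind
# of API a CPaaS vendor could expose on top of its existing signaling
# infrastructure. Real managed-messaging services add persistence,
# presence and global fan-out on top of this basic shape.
from collections import defaultdict
from typing import Callable

class PubSub:
    def __init__(self) -> None:
        self._subscribers: dict[str, list] = defaultdict(list)

    def subscribe(self, channel: str, callback: Callable[[dict], None]) -> None:
        """Register a callback to receive messages on a channel."""
        self._subscribers[channel].append(callback)

    def publish(self, channel: str, message: dict) -> None:
        """Fan the message out to every subscriber on the channel."""
        for callback in self._subscribers[channel]:
            callback(message)

bus = PubSub()
received = []
bus.subscribe("orders", received.append)
bus.publish("orders", {"id": 1, "status": "shipped"})
print(received)  # [{'id': 1, 'status': 'shipped'}]
```

A WebRTC signaling backend already does exactly this fan-out of small messages at global scale, which is why wrapping it in a public API is a small step for these vendors.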
Serverless infrastructure for real-time messaging will be wrapped into CPaaS.
Cloud native, no hybrid
There were attempts in the past by CPaaS vendors to offer both cloud and on premise alternatives.
Some are probably doing it still.
The vendors that see more growth though are cloud native and offer no on premise alternative.
Things aren’t going to change here.
The future of CPaaS is cloud. Hybrid is a nice idea, but until cloud vendors themselves offer an easy (and cost effective) path towards that goal, the hybrid model makes less sense – it becomes too expensive to develop and maintain.
Measurements and SLAs
Quality across vendors, carriers, networks, infrastructures, time of day, day of the week or any other parameter you wish to use is variable at best. CPaaS vendors are “supposed” to handle that. They track and optimize media quality and connectivity across their services. They strive to maintain high uptime and reliability. Some even use quality as a reason to opt for their service.
At some point, TokBox and Twilio started offering quality measurement tools. TokBox introduced Inspector, a way for its users to troubleshoot network issues of recent sessions. Twilio launched Voice Insights, offering its users a quality dashboard of the calls conducted through its service.
A similar aspect is the use of SLAs as part of the service – a binding of what type of service expectations the customer should expect and what happens when the expectation isn’t met. These apply mostly to enterprise plans of some of the CPaaS vendors.
Why am I mentioning it here? Because I see it happening. It is what got Talkdesk to pick testRTC for a network testing tool (I am a co-founder at testRTC). It is also an issue that causes a lot of challenges for customers – understanding the quality their own users experience.
Measurements and SLAs will take bigger roles in customers’ buying decisions. As the market evolves and matures, expect to see more of these capabilities crop up in CPaaS offerings. It will happen due to pressure from competitors, but more likely due to pressure from enterprise customers.
Vying towards the Programmable Enterprise
We’re shifting from on premise to the cloud. From analog to digital. From siloed solutions towards highly integrated ones. This migration changes the requirements of the enterprise and the types of tools it would require.
I think we will end up with the Programmable Enterprise. One where the software used is highly integratable. Many of these early trends we now see in CPaaS will trickle and find their way across all enterprise software.
Want to figure out exactly what each vendor is doing in each of these future trajectories? You can purchase my CPaaS Vendors Comparison.
The post Future of CPaaS; a look ahead appeared first on BlogGeek.me.
Kranky Geek, WebRTC sponsorships and other updates around my services
Some updates you might want to be aware of.
This is going to be mainly about updates of things that are going on that you may want to be aware of. Mainly:
- Kranky Geek 2019
- Available WebRTC related sponsorships
- Revamping my consulting pages
Kranky Geek 2019 is coming up fast.
Date is set to Friday, November 15 2019
At our traditional location: Google’s office at 345 Spear St, San Francisco
We are going to continue this year in our look at WebRTC and machine learning in communications as our main theme.
Want to register for Kranky Geek?
Registration for the Kranky Geek event is now open.
We’ve got limited room, so you should register earlier rather than later.
There’s a token registration fee ($10) – it is how we make sure everyone has a place to sit during the event.
Want to speak at Kranky Geek?
If you’re into sharing your knowledge and experience with others, then how about speaking at Kranky Geek?
We’re working on the agenda at the moment, and are looking for speakers to join us. Each year we get one or two such requests that end up quite well. Need examples? Check out last year’s Facebook session on Portal or maybe Discord on their infrastructure.
Want to try this out? Contact us.
Want to sponsor Kranky Geek?
We get to do Kranky Geek on a yearly basis thanks to our great sponsors.
Our sponsors this year include:
This leaves room for one or two more sponsors. If you’d like to help us out, and show off your brand where it matters when it comes to WebRTC, then let us know.
Meet me in person
In the next couple of months I’ll be traveling. If you’d like to meet, ping me.
October 24-25, Beijing
I’ll be heading to Beijing for Agora.io’s RTC 2019 event.
My session at the event is “Common WebRTC mistakes and how to avoid them”. Still need to work on my presentation.
If you’re in Beijing for the event, it would be great to see you in person.
November 11-16, San Francisco
Kranky Geek takes place November 15. I’ll be in San Francisco for the duration of that week.
My time in San Francisco is usually limited and hectic, but I am always happy to catch up and talk when I can find an open slot for it.
If you are interested in meeting up – just tell me.
Available WebRTC related sponsorships
There are sponsorship opportunities available if you want to highlight your products, services or even job listings. These aren’t available directly on BlogGeek.me, but rather in a few partner domains:
- webrtcHacks – that’s where most WebRTC developers end up when they need to learn a trick or two or a best practice around WebRTC
- WebRTC Weekly – a weekly newsletter that people interested in WebRTC are subscribed to
- WebRTC Index – a list of vendors offering services in the WebRTC ecosystem
- Kranky Geek – as stated above, we’re looking for a last sponsor or two for this event
There’s now an orderly media kit you can review for the webrtcHacks and WebRTC Weekly sponsorships. Check it out.
New testRTC product: Network Testing
At testRTC, we launched a new product a few months back – Network Testing.
While our other products are geared towards developers, testers and IT, this new product caters for support teams.
What it does is connect to your backend directly (there’s an onboarding/integration step associated with this product), and then run a battery of network tests from the machine you use to access our service. It provides the information it gathers to both the person running the test and your support team.
This was developed with the help of Talkdesk, one of our first clients for this product. Check out the testimonial we did with Talkdesk using testRTC’s Network Testing.
Interested in learning more? Contact us @ testRTC
A new WebRTC course – for support teams
I have started working on a new course called “Supporting WebRTC”. The purpose of this course is to help support teams that need to deal with WebRTC-related issues better understand and handle them.
This comes as I celebrate reaching 500 students in my developer-focused Advanced WebRTC training.
Anyways, ping me if you’re interested in learning more about the new Supporting WebRTC course – or even want to be there during the prelaunch, providing feedback as I create the lessons.
Revamping my consulting pages
This is how the menu bar on my website looked until yesterday:
And this is how it looks now:
I’ve replaced the “non-performing” and somewhat cluttered Workshops/Consulting combo with the more usual Products/Services alternative.
Why the change?
Because many of my services were going unnoticed. I found that out while speaking to clients and potential clients. So it made sense to change the structure. Another reason is the recent launch of my ebooks section – while these are part of the WebRTC Course website (along with the courses themselves), I wanted to be able to share everything on my main site – BlogGeek.me.
I’ve decided to make this change available now and not wait for it, but these pages will be updated soon. I have commissioned a few unique illustrations for these new pages and can’t wait to get them up.
Here’s a glimpse of one of the concept sketches I received (this one for the courses):
Doing something with communications? I am here to help.
The post Kranky Geek, WebRTC sponsorships and other updates around my services appeared first on BlogGeek.me.
When will Zoom use WebRTC?
There are different ways to use WebRTC. Zoom is using WebRTC, just not in the most common way possible today.
Zoom seems to be an interesting topic when it comes to WebRTC. I’ve written about them twice recently (plus a bonus one from webrtcHacks):
- When Jitsi played with Zoom vs Jitsi in bandwidth limiting
- That in turn led to webrtcHacks looking at the browser implementation of Zoom
- Just two months back we had the security vulnerability in Zoom
That in itself raised the question of where WebRTC starts and where it ends, since Zoom uses getUserMedia to access the media to begin with.
What was found lately is even more interesting:
Looks like @zoom_us has switched it's web client from web sockets to #WebRTC data channels. Performance a lot better compared to their old web client. pic.twitter.com/SQhP9XhHXP
— Nils Ohlmeier (@nilsohlmeier) September 5, 2019
Nils (Mozilla) noticed that Zoom is using WebRTC’s data channel. Which led webrtcHacks to update that Zoom article.
Interesting times
Want to effectively connect WebRTC sessions with the success rate that Zoom is capable of? Check out my free mini course
Connect more WebRTC sessions
What does “use WebRTC” mean?
If you go by the specification components in W3C, then the split looks something like this:
From the W3C specifications standpoint, WebRTC is support for Peer Connection and the Data Channel. This encompasses in it other elements/components such as getStats, SDP negotiation, ICE negotiation, etc.
But at its core, WebRTC is about sending data in real time in peer-to-peer fashion across browsers. Be it voice, video or arbitrary data.
getUserMedia and getDisplayMedia have their own specification – Media Capture and Streams. This is what Zoom has been using out of WebRTC. It gives browsers access to cameras, microphones and the screen itself. These are also used for things that have nothing to do with communications – MailChimp and WhatsApp, for example, have been using it to take snapshots for a long time now. Others are doing the same as well.
Then there’s the MediaRecorder component, which is defined in MediaStream Recording. Its use? To record media locally. Dubb and Loom use it for example.
Is MediaStream Recording WebRTC? Is Media Capture and Streams WebRTC?
I like taking an encompassing view here and consider them part of what WebRTC is in its essence when used in a browser.
Zoom’s route to WebRTC
Back to Zoom.
Zoom started by using only getUserMedia. This allowed them access to other browser technologies such as WebAssembly. They got their real time media processing somewhere else.
The next step is what Nils just bumped into – Zoom decided that streaming the media over a WebSocket is nice but not that efficient. Since it ends up over TCP, the performance and media quality is subpar once packet losses kick in. That’s because TCP starts retransmitting the media when it is already too late for a real time task like video calling to use it, ending up with even more congestion and more packet losses.
What is a company to do when faced with such a problem? Find a non-reliable connection to send its data on. There are two alternatives today for doing that in web browsers:
- WebRTC’s data channel (which uses SCTP today)
- QUIC (HTTP/3), which is still a bit too new
Zoom decided on WebRTC’s data channel in its current SCTP implementation. They haven’t gone for the Google Chrome experiment of a QUIC data channel (which should be rather “safe” considering Google Stadia is said to be using it). And they haven’t decided to use HTTP/3, which I find a bit odd.
The end result? Zoom is using WebRTC. Somewhat. With a data channel. To handle live video streams, with their previous WebSocket architecture as fallback. And not the peer connection itself. It is really cool, but… don’t try this at home.
Is this the end of the road for WebRTC in Zoom?
I don’t think so.
They still have the installation friction and now all them pesky security experts breathing down their necks looking for vulnerabilities. It won’t hurt their valuation or their revenue, but it will eat into management’s attention.
And frankly? Zoom on a data channel will still be subpar, since doing everything in WebAssembly isn’t optimized enough. At some point, Zoom will need to throw in the towel and join the WebRTC game.
Why?
Because of either VP9 or AV1. Whichever ends up being the breaking point for Zoom.
What will be the next step for Zoom’s adoption of WebRTC?
Zoom has two main things working for it today, as far as I can see:
- It just works
- Quality is great
Both are user/market perception more than they are an objective reality (if there even is such a thing).
1. It just works
It just works is about simplicity. It is the reason Zoom started with using getUserMedia and later the data channel. Without it, guest access to Zoom would mandate installing their app. At a time when none of their competitors require installation, that’s a problem. The problem is that the small friction that is left means that “it just works” is no longer a Zoom advantage. It becomes a hindrance.
2. Quality is great
Zoom uses H.264, at least according to the analysis done by webrtcHacks (based on packet header inspection).
Since WebRTC has H.264 support, my assumption is that Zoom’s H.264 implementation is proprietary or at the very least, not compliant with the WebRTC one. They might have their own H.264 implementation which they like, value and can’t live without – or at least can’t replace in a single day.
At some point, that implementation is going to lose its luster and its advantages, and that day is rather close now.
H.264 is computationally simpler than VP9 and AV1 – a good thing. But at the same time, VP9 and AV1 offer better quality than H.264 at the same bitrate.
When Zoom’s competitors migrate to using VP9 or AV1, what is Zoom to do?
It can probably adopt VP9 or go for HEVC. It might even decide to use AV1 when the time comes.
But what if it does that without supporting WebRTC? Would running an implementation of a video codec twice or three times as complex as H.264 in WebAssembly make sense? Will it be able to compete against hardware implementations or optimized software implementations that will be found at that point in web browsers?
Without relying on WebRTC, Zoom will be impacted severely in its web browser implementation, and will need to stick to installing an app. At some point, this will no longer be acceptable.
If I were Zoom, I’d start working on a migration plan towards WebRTC. One lasting at least 2-3 years. It is going to be long, complicated, painful and necessary.
Microsoft has taken that route with Skype. Cisco did the same with WebEx.
Both Microsoft and Cisco are probably mostly there but not there yet.
Zoom should start that route.
The end of proprietary communications
In a way, this marks the end of proprietary communications. At least for the coming 5-10 years.
It is funny how things flip.
The market used to look like this:
Companies standardized on signaling, placed acceptable standardized codecs. And then pushed proprietary non-standard improvements to their codecs.
And now it looks like this:
Companies standardize on codecs, using whatever WebRTC has available (and complaining about it), placing their own proprietary signaling and infrastructure to make things work well.
In that same challenge, you’ll find additional vendors:
Agora.io, who have their own proprietary codecs, claiming superior error resiliency. They just joined AOMedia (the Alliance for Open Media), becoming one of the companies behind the AV1 video codec.
Dolby, who have their own proprietary voice codec, offering a 3D spatial experience. It works great, but is limited when it comes to the browser environment.
As WebRTC democratized communications, it also killed a lot of what proprietary optimizations at the codec level can do to help gain a competitive advantage.
It isn’t that better codecs don’t exist. It is that using them has an impossibly high limitation of not being able to be used inside browsers – and that’s where everyone is these days.
Want to effectively connect WebRTC sessions with the success rate that Zoom is capable of? Check out my free mini course
Connect more WebRTC sessions
The post When will Zoom use WebRTC? appeared first on BlogGeek.me.
How different companies (and industries) are trying to fight spam calls
What I like about how companies are tackling spam calls and robocalling is that the solutions they bring to the table are based a lot on their DNA.
There is more than one way to solve a problem. There is usually more than one way to solve a problem effectively. Which means that it isn’t that easy to pick the best solution – simply because there are a few good alternatives. This is the case with spam calls. These spam calls are also called robocalls, which we’ll get to later.
When I wanted to explain it through an analogy – it hit me!
It is akin to the many lightbulb jokes. At the end of the day, each industry or persona type takes a different approach to changing a lightbulb.
Let’s change the subject instead of the lightbulb though. We’re talking about calls. Here in Israel we get a few unsolicited spam calls. Not that many if you consider what’s going on in the US – and it is still not that fun.
My own spam calls experience(s) – or lack of

I used to live in a rather religious city a few years back. We are a secular family. The neighborhood and the city around us changed to become more religious over time through a kind of natural selection. At that time, I used to get a call or two a week, starting with a recording with a wording that can roughly be translated to a preacher saying “Precious Jews!” – that’s the point where I hung up automatically, so I have no clue how that “conversation” progressed.
This miraculously stopped as we moved to a city nearby. This time a secular one (almost too secular). It stopped not only in our landline but also in our mobile phones, which was interesting. This week though, I started receiving different calls, probably due to the upcoming election here. These calls start with something like “Save Liberalism” – which I again identified as my cue to hang up the call.
Here in Israel? This isn’t such a big deal. Probably due to the exorbitant cost of call automation or simply because the market is too small or too immature for it. In the US? It seems like there this plague is so common that many people don’t answer their phones for numbers they don’t have in their address book.
Here’s what Andy Abramson has to say about his spam calling experience:
Sure my regular dial up and mobile phone numbers rings throughout the day with calls from toll-free and from numbers that look like they’re from a neighbor, when they’re nothing but spam like calls.
Most of his conversations take place over OTTs these days, which don’t carry spam.
Up until recently, this seemed like a necessary evil that no one was really going to handle. But something has changed this past year. So much so that this now looks to be the main issue in phone calling. Especially if what we’re looking for is maintaining a semblance of usefulness in using telecom carriers to handle our phone conversations.
How did we get to this point?

I get a feeling that it involves a mixture of reasons:
- The low cost of calling (or sending an SMS) to people
- The ability to programmatically automate that process and leave humans out of the equation for the spammer
- We’ve gone through a digital transformation in telecom – from analog to digital communications – which made the interfaces towards telecommunication networks easier and more accessible via the internet. At the same time, the capacity of these networks to handle calls grew significantly
- When carriers interconnected with each other, they didn’t really think that far into the future of the types of abuse and attack vectors available today
Remember that until just a few years ago, the concept of encrypting traffic other than financial transactions seemed an exaggeration (encryption and cryptographic authentication in communications was not part of an MVP, a version 1 or a version 2 of a product, and it almost never interoperated well out of the box). Today? We’re discussing end-to-end encryption as if that’s a human right and zero trust networks as if that’s the norm.
–
There is no doubt a problem. And this problem is getting bigger each year. How are companies tackling it? Each one with the tools it has available and the DNA it has.
Carriers: Let's standardize

One of the main concerns with spam calls is spoofing – the ability of the originator of the call to masquerade as any number they want, including local numbers close to that of the called party. This technique tries to add trust to the originating call, to get past people's automatic response of not answering calls that look somewhat fishy.
You'd think that by 2019 this wouldn't be such a simple thing to do (zero trust anyone?), but it is. So much so that the suggested standard – SHAKEN/STIR – is a cryptographic authentication of caller IDs. As explained by the FCC on combating spoofing:
This means that calls traveling through interconnected phone networks would have their caller ID “signed” as legitimate by originating carriers and validated by other carriers before reaching consumers. SHAKEN/STIR digitally validates the handoff of phone calls passing through the complex web of networks, allowing the phone company of the consumer receiving the call to verify that a call is from the person making it.
For the FCC and carriers such a solution makes a lot of sense:
- You start by better defining the problem – it isn’t spam calls but rather caller identity spoofing
- Then you continue by picking a solution – authentication of caller identity
- And then you go spec it out as a standard – SHAKEN/STIR
- Last but not least, you get all carriers (100’s of them) to implement the new standard
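The sign-then-verify flow the FCC describes can be sketched as a toy Python example. Note the hedging: real SHAKEN/STIR uses PASSporT tokens signed with ES256 and carrier-issued X.509 certificates, not the shared-secret HMAC used here for brevity – only the shape of the flow is the same.

```python
import hashlib
import hmac
import json
import time

# Stand-in for the originating carrier's signing key. Real SHAKEN/STIR
# uses ES256 with X.509 certificates issued to carriers, not a shared secret.
CARRIER_KEY = b"originating-carrier-secret"

def sign_caller_id(orig_number: str, dest_number: str) -> dict:
    """Originating carrier builds a simplified PASSporT-like attestation."""
    claims = {
        "orig": {"tn": orig_number},    # the calling number being attested
        "dest": {"tn": [dest_number]},  # the called number
        "iat": int(time.time()),        # issued-at timestamp, limits replay
        "attest": "A",                  # full attestation of the caller ID
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(CARRIER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": signature}

def verify_caller_id(token: dict) -> bool:
    """Terminating carrier re-computes the signature before alerting the user."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(CARRIER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])

token = sign_caller_id("+14155550100", "+972501234567")
print(verify_caller_id(token))  # True: caller ID checks out

# A spoofer rewriting the caller ID invalidates the signature
token["claims"]["orig"]["tn"] = "+972501234568"
print(verify_caller_id(token))  # False: displayed as unauthenticated
```

Which also shows the limitation discussed below: verification only labels the call – nothing here blocks it.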
In the US, AT&T, T-Mobile and Comcast have started implementing SHAKEN/STIR (PDF). I didn’t find much information about other carriers around the globe.
Here are a few challenges with this approach:
- SHAKEN/STIR doesn’t block calls. It just indicates whether they are authenticated or not. Think of it as the green indicator of the past on your browser bar for websites served via HTTPS. Or even worse – the Extended Validation Certificates for HTTPS (now officially dead and useless). In other words, you will keep getting spam calls, but something on your display will allow you to better decide if you wish to answer or ignore
- It requires software changes on mobile devices (and landline phones). Since it blocks no calls, the indication of unauthenticated caller ID needs to appear on your display when there’s an incoming call
- It requires all carriers to be effective. Otherwise, a lot of phone numbers will come in unauthenticated, adding too much noise
- It requires OTTs, CPaaS vendors, UC vendors, contact centers, enterprises and anyone interconnecting with carriers for their voice traffic to authenticate their numbers using the same standard specification
These challenges mean that by the time we see real value from this initiative, we will be well into 2025 or so.
The “go it alone” carriers

There are instances where carriers are going it alone, trying to solve spam on their own.
The notable example here is Verizon, offering free and paid call filtering services targeted at robocalling. They now pre-enable it on Android phones.
Frankly? This approach is again within the realm of carriers-DNA. From Verizon’s website:
- With carriers, everything has a price. Caller ID – if that means authentication like SHAKEN/STIR (or SHAKEN/STIR itself), then why only under a paid plan? Aren’t carriers supposed to take care of spam similarly to how most email services do today?
- Automated call blocking by filtering them as spam is great, but what about false positives? How many important calls from businesses is this going to block? (and yes, I know I complained before about SHAKEN/STIR not blocking calls)
Verizon isn’t alone in this approach. Other carriers are offering similar solutions as well.
My challenge here? I’ve never seen a carrier app on mobile that works well. They always seem and feel half baked.
There’s an app for that

And a lot more than a single app.
Since our smartphones allow for apps, there are those who created apps that allow blocking incoming calls based on who the caller is. The intent is to be able to block robocalls/spam from coming in. Which is great.
The challenges are?
- Not all call blocking apps are created equal. Some offer an abysmal user experience while others integrate nicely with the operating system. It is left to the user to pick one that works for him
- These apps often build their database via crowdsourcing the spam indication from users. While great, this did block calls from my insurance company a few times when I really needed to receive these calls. This also means that different people have a different definition of what spam is and that will affect what gets blocked on your phone
- They are selling your data. Or at least that’s the current news. Robocall blocking apps collect more data than they should and use it for unknown reasons (it might just be developer log collection data, but some of these apps actually might sell data)
- You need to actively install these apps on your phone. Select one and register to it. So not frictionless
Mobile operating systems also offer some semblance of control that is, or can be, given to the user.
If you are using Google’s phone application on your Android device, then you can use Android’s caller ID & spam protection.
This relies on Google to decide if an incoming call is suspect of spam or not (more on that later), and be able to simply block it. Users also have the ability to mark calls as spam, which I am sure Google then uses as crowdsourced information as well.
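The crowdsourcing loop itself is simple enough to sketch. The threshold and report counts below are made up for illustration – Google's actual signals and models are not public:

```python
from collections import defaultdict

# Toy crowdsourced spam database: counts of "marked as spam" reports
# per phone number, aggregated across users.
reports = defaultdict(int)

def mark_as_spam(number: str) -> None:
    """A user marks an incoming call as spam."""
    reports[number] += 1

def is_suspected_spam(number: str, threshold: int = 3) -> bool:
    """Enough independent reports tip a number into the suspect bucket."""
    return reports[number] >= threshold

# Many users report the same robocaller
for _ in range(5):
    mark_as_spam("+18005551234")

# A single report – possibly a false positive like my insurance company
mark_as_spam("+14155550100")

print(is_suspected_spam("+18005551234"))  # True – likely blocked/flagged
print(is_suspected_spam("+14155550100"))  # False – rings through
```

The threshold is exactly where the false-positive trade-off mentioned earlier lives: set it too low and legitimate business calls get blocked.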
Why this approach by Google? Google is a data-first company, so any challenge gets first solved using data.
Apple, on the other hand, decided to not look or rely on their users’ data. What they did was add a simple rule in iOS 13 to silence unknown callers. This will just not ring your phone if the caller ID isn’t found in your address book. While a nice feature, this doesn’t really scale and the result is too aggressive.
Why can Apple take this route? To get more businesses into its Apple Business Chat solution – effectively enticing businesses to communicate with iPhone users via Apple, and getting them into the user’s address book.
Google: AI

There’s one more thing Google is now doing for their new Pixel phones, called Call Screen.
Call Screen is a kind of virtual assistant or voicebot that “lives” in your phone. It can answer calls on your behalf, transcribing and checking on your behalf who is calling and why. You then continue interacting with the caller via menu buttons on the screen, instead of actually talking to them.
Why this approach?
It does what only Google can do. Run speech to text as well as text to speech on device, in real time, and do that with an accuracy that is good enough.
The funny thing is that it gets robocalls interacting with voicebots. I wonder if this is communications or can we start talking here about the M2M (machine to machine) market instead…
The problems? You still need to manually handle all these spam calls. It would be better if we could just make them go away to begin with. Oh… and it is available only on Pixel phones for now.
Twilio: Programmable Identification

In their recent Signal event, Twilio announced Verified by Twilio.
The idea here is to create a kind of a marketplace where Twilio customers add metadata to their outgoing calls to users – like who is calling and the reason for the call. And then that data gets picked up by caller id apps and shown to the users when that call rings on their smartphone.
This is a nice thing, but it does have its own set of challenges:
- It requires businesses to identify their intent via APIs. And they can do it only through Twilio today. This isn’t an open standard
- As a user you still need to install an app to make this work
- And it doesn’t block the calls – just gives you a bit more information before you answer it
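To make the marketplace idea concrete, here's a purely hypothetical sketch of the kind of metadata a business might register for an outbound call. None of the field names or functions below are Twilio's actual API – they only illustrate the concept:

```python
# Hypothetical sketch only – the field names here are NOT Twilio's real
# API. They illustrate attaching intent metadata to an outbound call so
# caller ID apps can display it to the called user.

def build_call_metadata(business: str, reason: str, from_number: str) -> dict:
    """Metadata a business would register before dialing out."""
    return {
        "business_name": business,
        "call_reason": reason,   # shown to the user before they answer
        "from": from_number,
        "verified": True,        # set by the platform, not by the business
    }

metadata = build_call_metadata(
    business="Acme Dental",
    reason="Appointment reminder for tomorrow",
    from_number="+18005551234",
)

# A caller ID app on the receiving device would look this record up by
# number and render the reason alongside the incoming-call screen.
print(metadata["call_reason"])  # Appointment reminder for tomorrow
```

Note that, as listed above, the call still rings – the metadata only informs the answer/ignore decision.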
That said, if Twilio can pull it off, it will secure its lead even further in the CPaaS market.
Is all robocalling spam?

No.
A lot of it is transactional.
I get a call every 6 months from the dentist. An automated reminder a day before a visit. If I don’t answer it and press “1”, it tries to hunt me down. Never checked what happens to my appointment if it fails to do so.
That term digital transformation is old by now, but the transition we are going through towards digitizing and automating interactions between businesses and users is a real one, and it is a growing trend. The purpose of it isn’t just to deflect incoming calls and communications so customers don’t bother “us businesses”. The purpose is to genuinely improve the customer experience and to do so at scale, while relying less on human agents (or at least not relying on them in the boring and the trivial).
Then how do we filter out these spam calls from the automated transactional ones that we really want to receive?
Today, it seems, there are two main solutions:
- Block “spam”, which might catch real calls and block them as well. My guess is that false positives here are higher than what email spam shields are doing – think of it as being 10-20 years behind in the technology curve
- Mark intent of the calls or “manage” incoming calls, which means users are still being bothered with it, just a bit less so
Not a good solution in sight yet.
Back to lightbulbs

I started with lightbulbs so better finish with that. Especially since there’s no aha-moment here for us, or a great lightbulb idea to work with when it comes to spam calls.
So… How many board meetings does it take to get a light bulb changed? (or to fight spam calls)
This topic was resumed from last week’s discussion, but is incomplete pending resolution of some action items. It will be continued next week. Meanwhile . . .
The post How different companies (and industries) are trying to fight spam calls appeared first on BlogGeek.me.
AI in communications is inevitable
Resistance is futile when it comes to AI in communications. Are you going to get there on your own or dragged there?
I’ve worked with Chad Hart last year on a unique report that covers AI in RTC. Since then, we’ve seen the trends we’ve analyzed in the report strengthen and take shape. Why did we start on that route? Because of an understanding that the use of machine learning and artificial intelligence will be part and parcel of the communications industry – simply because it is now penetrating all industries. The difference here, though, is that with communications this isn’t just about finding churn or optimizing workflows – it is about looking at the data itself – the communications – in real time.
One of the things we looked at and debated at length was how communication vendors are going to integrate AI into their own products. Are they going to have that done in house? Will they outsource that part? If they outsource, would that be outsourcing everything or just certain parts? Will they be using cloud based AI APIs for that. And if they do, would they go for Amazon, Google, Microsoft and/or IBM. Or would they rather go for smaller, more specialized vendors?
The answers were all over the place, and the main challenge was deciding how the future would look like. Me? I thought and still think, that a lot of that technology and experience must come in-house for vendors who want to keep an edge and a competitive advantage.
You can’t use third parties for a technology like AI and assume you will gain differentiation.
Why? Because everyone else will be doing the same.
Need proof?
A year ago, almost everyone we spoke to said they were experimenting with or rolling out call summaries on recorded calls. How were they implementing that? By using Voicera (which became Voicea and was acquired by Cisco. More on that later).
When Google introduced their Contact Center AI, everyone and his uncle partnered with them for the announcement.
How does that assist in differentiation? I don’t know.
During August, two interesting acquisition announcements took place on the exact same day:
- Cisco announced its acquisition of Voicea
- Vonage announced its acquisition of Over.ai
Both acquisitions are in the communication space. Both acquisitions are around NLP/NLU capabilities (natural language processing and understanding).
Looking for better understanding of the AI space in communications? Check out the overview of our AI in RTC report
Cisco & Voicea

Voicea does two things:
- Offers transcription
- Captures decisions, notes and action items
The transcription part is usually referred to as ASR (Automatic Speech Recognition). Dumbing it down a bit, this is speech-to-text capabilities.
The second part is about NLU (Natural Language Understanding). The “AI” or “assistant” is capable of reviewing what is said (usually on the textual level) and deducing from that what it means. In the case of Voicea and their EVA product, this is about the creation of meeting summaries and action items taking.
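As a gross simplification of what such NLU does, here's a toy Python sketch that splits a meeting transcript into decisions and action items using keyword cues. Real products like EVA use trained models, not rules – this only shows the pipeline shape: transcript in, summary out.

```python
import re

# Simplistic cue patterns – stand-ins for trained NLU classifiers.
ACTION_CUES = re.compile(r"\b(will|should|need to|by friday|by monday)\b", re.I)
DECISION_CUES = re.compile(r"\b(we decided|agreed|approved|go with)\b", re.I)

def summarize(transcript):
    """Classify each utterance of a transcript into decisions/action items."""
    summary = {"decisions": [], "action_items": []}
    for utterance in transcript:
        if DECISION_CUES.search(utterance):
            summary["decisions"].append(utterance)
        elif ACTION_CUES.search(utterance):
            summary["action_items"].append(utterance)
    return summary

meeting = [
    "We agreed to go with the new pricing model.",
    "Dana will send the updated deck by Friday.",
    "The weather was terrible this week.",  # ignored – no cue matched
]
result = summarize(meeting)
print(result["decisions"])     # ['We agreed to go with the new pricing model.']
print(result["action_items"])  # ['Dana will send the updated deck by Friday.']
```

The hard part the vendors are solving is exactly what this sketch skips: doing the classification reliably across accents, domains and noisy transcriptions.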
In many ways, this is similar to Dialpad’s acquisition of TalkIQ a year earlier.
Where do these two differ?
Cisco
From the press release, it seems that Cisco intends to wrap Voicea into WebEx, with a focus on collaboration. This means help facilitate meetings and conversations within an enterprise. As Amy Chang, SVP & GM, Cisco Collaboration states in the press release:
“Voicea’s true market leading technology will be a game changer for our Webex customers to experience more productive and actionable meetings”
Cisco’s approach stems from their focus on what they call “cognitive collaboration”. I am sure Voicea will find its way into the sales calls that Cisco enables for its customers, but first priority seems to be collaboration.
Dialpad
Dialpad focused on getting TalkIQ into its contact center offering, creating an AI assistant for salespeople and adding it to the unified communications offering that they have. In the acquisition press release, the initial capabilities mentioned are real-time call transcription, smart notes, real-time sentiment analysis for call centers and real-time coaching.
–
Cisco decided to bring in-house the competency of NLP and NLU. It isn’t the first AI technology that Cisco is acquiring. It makes me wonder if they’ll keep the Voicea team independent, fold them under WebEx or wrap them into an AI team they already have (probably through a past acquisition).
Vonage & Over.ai

Vonage's acquisition of Over.ai may seem somewhat similar to Cisco's acquisition of Voicea at first glance. Both acquired an NLP/NLU startup. Both plan on wrapping the tech into their communications offerings. Both acquired an AI team in a location close to one of their existing offices. But that's where things start to look somewhat different.
Over.ai is focused more on voicebots than it is on just speech analytics. As such, it doesn’t seek to glean meaning, converting it into summaries and action items. It is geared towards understanding intents and “holding” a conversation between a human and a bot.
This requires knowledge and competencies in speech to text, text to speech, intent recognition and building something akin to Google’s Dialogflow (and the application logic on top of it).
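The intent-recognition piece of such a pipeline can be sketched minimally, assuming speech-to-text has already produced the user's utterance as text. The intents and the bag-of-words matching below are made up for illustration – engines like Dialogflow use trained models:

```python
# Made-up intents with keyword vocabularies – a stand-in for
# Dialogflow-style training phrases and a trained classifier.
INTENTS = {
    "book_appointment": {"book", "appointment", "schedule", "visit"},
    "opening_hours": {"open", "hours", "close", "when"},
    "talk_to_human": {"human", "agent", "person", "representative"},
}

RESPONSES = {
    "book_appointment": "Sure - what day works for you?",
    "opening_hours": "We are open 9am to 6pm, Sunday to Thursday.",
    "talk_to_human": "Transferring you to an agent now.",
    None: "Sorry, I didn't get that. Could you rephrase?",
}

def recognize_intent(utterance):
    """Pick the intent whose vocabulary overlaps the utterance most."""
    words = set(utterance.lower().split())
    best, best_overlap = None, 0
    for intent, vocabulary in INTENTS.items():
        overlap = len(words & vocabulary)
        if overlap > best_overlap:
            best, best_overlap = intent, overlap
    return best

def bot_reply(utterance):
    """One turn of the conversation; a TTS engine would voice the reply."""
    return RESPONSES[recognize_intent(utterance)]

print(bot_reply("I want to book an appointment"))  # Sure - what day works for you?
print(bot_reply("can I talk to a human please"))   # Transferring you to an agent now.
```

The application logic on top – slot filling, context across turns, handoff to a human – is where most of the real engineering effort sits.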
What can Vonage do with such technology? Here are some initial thoughts and ideas:
- Offer its own transcription engine, across its product line (API for developers via Nexmo and TokBox, contact center via NewVoiceMedia and unified communications via Vonage Business Cloud)
- Replace and/or augment IVRs across that same product offerings
- Create outbound calling bots, probably with a focus on NewVoiceMedia and maybe Nexmo
- Expand to other AI related challenges in the communication space
That last one is the most interesting.
The main challenge vendors have today with AI is finding experienced developers. Or more accurately, experienced employees. In this acquisition, Vonage got itself a complete team with expertise around communication-related AI. This includes developers, testers, product managers, sales and marketing – the whole shebang. While we tend to focus on developers and their experience, AI has proven to be a technology that needs all these added functions to be experienced in it as well.
This gives Vonage a nice head start, where others will need to build such capabilities in-house or acquire them elsewhere, as Vonage did.
Communications + AI = Future

Dialpad has a really nice explanation of today’s state of AI in communications:
It is hard as hell and requires lots of customizations, so you can mostly use it only at scale.
I believe that the interesting use cases are still ahead of us. And that many of them would require cracking the scaling issue – how to be able to deploy AI algorithms and models that can work well for small businesses and not only at the largest ones where customization and fine tuning is part of the process.
Looking for better understanding of the AI space in communications? Check out the overview of our AI in RTC report
The post AI in communications is inevitable appeared first on BlogGeek.me.
Facebook eavesdropping Whatsapp? The everlasting tension between security and privacy
While this is a non-story, it does raise an interesting conversation about security, privacy and the tension between them.
This one’s going to be philosophical. Might be due to my birthday and old age. Feel free to skip or join me on this somewhat different journey of an article…
A month ago, an article on Forbes started a storm in a teacup. The article discussed a Facebook plan to thwart encryption in WhatsApp by adding client-side moderation of sorts:
“In Facebook’s vision, the actual end-to-end encryption client itself such as WhatsApp will include embedded content moderation and blacklist filtering algorithms. These algorithms will be continually updated from a central cloud service, but will run locally on the user’s device, scanning each cleartext message before it is sent and each encrypted message after it is decrypted.”
A few days later, Facebook disputed these as something they’d never do.
Problem solved.
As I was working at the time on security-related product requirements for one of my clients, this story stuck with me, especially bringing home the challenge and the difference between security and privacy: in most enterprise scenarios, you just can’t have them both.
I’d like to raise here a few of my thoughts on this subject, and also look at some of the differences between individuals, businesses and governments. This will also mix with the fact that I am a father of two children at ages 9 and 12 who both use WhatsApp on their smartphones regularly (that’s the norm here in Israel – if I could, I’d wait with smartphones for them a bit longer).
Trying to understand security in WebRTC? Here’s a developer’s checklist you should follow:
Download the WebRTC security checklist
Protecting our privacy

There’s an expectation here that whatever we do online will stay private, similarly to how things work in the real world.
It sounds great, but what exactly is privacy anyway? And how do we translate it to our daily lives without technology?
Without technology, conversations were transient. They were never stored in any way, so people who talked to friends never had a recording of that conversation – nor a transcript. And if we’re talking about technology, do we include the written word as part of technological advancement, or is that a pre-tech thing?
Today though, I can search and find a conversation with my daughter’s teacher from 5 years ago on WhatsApp. Is that a breach of the privacy of the teacher?
I don’t know the answers, and I am not advocating against privacy.
At the very least, I believe in encryption everywhere and in the concepts behind zero trust (ie – not trusting machines in your own network). Is that privacy? Or security?
The challenge with privacy – the idea that the things we do in private stay private – is when you try and mix it with security.
Securing our society

I live in a country which seems to be at constant war with its neighboring countries and with the many who want to harm it and its citizens.
When going to a shopping mall, I am used to having my bags scanned and my privacy breached. Why? Because in the context of where I live – it saves lives. In order to maintain security, some privileges around privacy cannot be maintained.
The challenge here is when do we breach privacy too much in the name of security. How far is too far?
Taking all this online makes things even more challenging. Can governments rely on the ability to “spy” on their own citizens in the name of security? Are there things better spied upon to make sure people don’t die? How far is too far?
Our society today values the lives of people so much. Is the life of a single person saved worth being able to spy on everyone?
Then there’s the bigger issue of corporations being multinational and social networks being global – who is securing society here? The corporations or the countries? Should corporations and social networks secure people against their governments or vice versa?
Securing our children

This one is where things get really tricky.
I’ve got kids. They’re small. I am in charge of them.
I’d like that whatever they do online will be private and ephemeral in its nature. Stupid stuff they do today shouldn’t haunt them as grownups. Good luck with that request…
On the other hand, how much privacy should they be allowed on social networks and on the Internet?
Should I be spying on them? Should I be able to filter content? Should I be alerted about questionable content that they get exposed to or are exposing themselves?
If anything, do my kids have the same privacy we so much value for ourselves against me being able to educate them on what’s out there lurking in the shadows of the internet?
There are different apps to help parents with that. Most of them are quite invasive. I decided to go with something rather lightweight here but I can’t say it lets me sleep well at night. Nothing really does when you have kids.
Securing our business

If you are a business owner, you somehow need to know what your employees do on your behalf. This affects how customers look at and value your brand, so the privacy of your employees… well…
If a customer complains about a transaction, you’d like to go back and figure out the history of the interactions with that customer. If you’re in an industry that has strict rules and regulations, you might be forced by law to make a record of your employees’ interactions anyways.
How does that compare to the requirement for privacy? How does that fit with the march towards end-to-end encryption where the service provider himself (=you) can’t look at the interactions?
On one hand, you want and need encryption and security, on the other hand, this might not go hand in hand with securing the privacy of an individual employee. What works for consumers may not work in enterprise scenarios.
Our age of automation

Then there’s automation and machine learning and artificial intelligence.
As businesses, we want to automate as much of what we do in order to scale faster, better and at a lower price point.
As consumers, we want easier lives with less “steps” to make and remember. We’ve shifted from physical buttons on TVs to remote controls to voice control and content recommendations. At some point, these steps involve smarts and optimization that can only be obtained by looking at large collections of data across users.
In other words, we’re at a point in time that much of the next level of automation can only be introduced by collecting data, which in turn means breaching privacy.
Here are a few recent examples from all the great voice interfaces that are cropping up:
- Apple contractors were allegedly listening to 1,000 Siri recordings a day — each
- Google workers can listen to what people say to its AI home devices
- Your Xbox Is Listening to You, and So Are Microsoft’s Contractors
- Amazon reportedly employs thousands of people to listen to your Alexa conversations
- Facebook admits contractors listened to users’ recordings without their knowledge
Here’s the funny bit – it doesn’t really seem like there’s anyone we can trust, while we need to trust everyone.
As an employee, I need to trust my employer. At least to some extent.
As a citizen, I need to trust my government. Especially in democracies, where I choose that government along with my fellow citizens. At least to some extent.
As a user of “apps”, I need to trust the apps I use. At least to some extent.
And yet, none of these organizations have shown that they should be trusted too much.
So in Blockchain we trust?
I beg to differ, at least today, with all the data and security breaches, along with other scandals around it. I can’t see this as a trusting environment.
Can we have both privacy and security?

Companies are looking for ways to bridge between the two alternatives.
It is interesting to see how Apple and Google each take a side. Apple vies for privacy more than Google, while Google tries to use security and math to offer some extent of privacy while maintaining its machine learning advantage and its ability to serve ads.
Then there are cloud-based end-to-end encryption solutions for enterprises, where privacy is maintained by letting the enterprise hold the keys to its messaging kingdom and not letting the cloud provider have a peek. Cisco Webex, for example, does a good job here, going as far as giving granular controls over where end-to-end encryption applies, on an individual or a group level.
Today though, we still don’t have a good solution that can offer both privacy and security and work well at all the levels we expect it to. I am not sure if we ever will have.
Why Facebook’s idea isn’t farfetched

While Facebook said this isn’t even planned, the solution makes sense on many levels.
What are the challenges Facebook has with messaging?
- The need to offer end-to-end encryption to its users. This is table stakes in social messaging these days
- The need to play nice with governments around the globe. Each government has its own rules and nuances. Europe has GDPR and the right to be forgotten. The US is somewhat less restrictive in its privacy policies. China has all encryption keys to its kingdom. Different countries offer different privacy profiles to their citizens
- Facebook has everyone looking over its shoulder, waiting for it to fail with user’s privacy. And it fails almost on a weekly basis
- It has competitors to contend with. These competitors are working on bots and automation to improve the user experience. Facebook needs to do (and is doing) the same. Which requires access to user interaction data at some level
To do all this, Facebook needs to be able to access the messages, read them, decide what to do about them, but not do it on its own servers so they aren’t “exposed” to the actual content and data of the user. Which is exactly what this no-news item was all about. Let’s go back to the original quote from the Forbes piece:
“In Facebook’s vision, the actual end-to-end encryption client itself such as WhatsApp will include embedded content moderation and blacklist filtering algorithms. These algorithms will be continually updated from a central cloud service, but will run locally on the user’s device, scanning each cleartext message before it is sent and each encrypted message after it is decrypted.”
will include embedded content moderation and blacklist filtering algorithms – there will be a piece of code running on the mobile device of the user reading the messages. Here’s the news for you – there already is. That piece of code is the one sending the messages or receiving them. It is now going to be somewhat smarter though and look at the content itself, for various purposes – “content moderation” and “blacklist filtering”. → I definitely don’t want Facebook to do this for my content, but I really do want them to do it for my kids’ content and report back to me
these algorithms will be continually updated from a central cloud service – they already are. We call this software updates. Each release of an app gives us more features (or bug fixes). With machine learning, which these algorithms are doing, there’s a need to tweak and tune the model continually to keep it relevant. Makes perfect sense that this is needed.
will run locally on the user’s device – the content itself isn’t going to be stored in the cloud by Facebook. At least not in a way they can read and share directly. Which is what end-to-end encryption is all about.
This immediately reminded me of someone else who is doing that and offering it as an API. Google Smart Reply:
The Smart Reply model generates reply suggestions based on the full context of a conversation, and not just a single message, resulting in suggestions that are more helpful to your users. It detects the language of the conversation and only attempts to provide responses when the language is determined to be English. The model also compares the messages against a list of sensitive topics and won’t provide suggestions when it detects a sensitive topic.
It runs on device, reads all messages in cleartext. Applies algorithms and determines what to do next offering it as reply suggestions. All running on device, without interacting with the cloud.
To get to this level though, Google had to look and read a lot of messages and build a model for the algorithms it uses.
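Mechanically, this kind of on-device scanning is simple to picture. Here’s a toy sketch of “blacklist filtering” running on the client, before encryption ever happens (or after decryption). The word-list approach here is mine for illustration – real deployments would use a continually updated ML model, as the Forbes quote describes:

```javascript
// Toy on-device content filter: scans a cleartext message locally
// against a blacklist. In a real messaging client this would run
// before the message is encrypted and sent (or after it is decrypted
// on receipt), so the server never sees the content itself.
function moderateLocally(message, blacklist) {
  const words = message.toLowerCase().split(/\W+/);
  const hits = words.filter((w) => blacklist.has(w));
  return { allowed: hits.length === 0, hits };
}
```

The point of the sketch: the classification happens entirely on the device, and only the verdict (or a report) would ever need to leave it.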
–
Figuring out security and/or privacy in modern applications and services isn’t easy. It comes with a lot of tradeoffs that need to be made throughout the whole process – from requirements to deployment.
Trying to understand security in WebRTC? Here’s a developer’s checklist you should follow:
Download the WebRTC security checklist
The post Facebook eavesdropping Whatsapp? The everlasting tension between security and privacy appeared first on BlogGeek.me.
Twilio Signal 2019 and the future of the programmable enterprise
Twilio Signal 2019 was all about contact centers and legendary customer engagement. And then some.
This one is going to be long. There’s just a lot of ground to cover…
Twilio Signal is the biggest event of the programmable communications industry. I had the opportunity to join the first one and have been covering these events ever since – mostly remotely but also in person. This time, I had to do it remotely.
I took the time last week to sit down for the two hours needed to watch the first day’s keynote and a few more hours thinking and reading about the event everywhere I could.
If you haven’t seen the event already, but CPaaS and programmable communications is of interest to you, then I warmly suggest you take the time to sit down and view the keynote:
From here on, I’ll try to explain my thoughts about the keynote and the new announcements in it.
The main theme? 160,000 customers.
Seriously. Jeff Lawson, CEO and co-founder of Twilio, and the other Twilions brought on stage made certain to mention that number: 160,000.
I’ll have more on numbers later, but this is something that Jeff and Twilio really wanted you to remember. That they are the biggest and baddest vendor offering programmable communications.
The real theme though was Legendary Customer Engagement:
It came up some 35 minutes into the event and set the tone for much of the scope, context and announcements that Twilio wanted to make at Signal.
“Legendary Customer Engagement” translates into a focus on contact center use cases in 2019. Or is it the other way around?
Anyways, here’s what Twilio had in store for us in the keynote:
If you ask me, Flex dictated that focus. SendGrid gave it the marketing tint it was missing, but it seemed to be there mostly as lip service. More on that later.
Topics covered
In the two hours of the keynote, Twilio tried to cover a lot of territory. A lot is going on for Twilio in a year – too much to cover it all. My coverage will be based on the announcements more than on the actual flow and topics dictated by the keynote, though I’ll keep most of it in the order it was presented.
Twilio by the numbers
This year, Twilio took over the Moscone Center in San Francisco, the largest conference venue in town, to be able to fit all the visitors it wanted to cater to. I am assuming that meant doubling the number of visitors compared to the previous year.
Two main numbers were important to Twilio: Developers and Customers
That said, I’ll start in the reverse order of what Jeff shared… putting it in the order of interest to me.
Other NumbersHere’s what got shared later on during the metrics section of the keynote:
- 750B human interactions / year
- 32,500 calls / minute
- 13,000 peak SMS / second (double last year’s)
- 2.8B unique phone numbers / year (dialed in or out)
- 3B email addresses / quarter (in and out)
Twilio. Is. Big.
But big where? And in what way?
Twilio was interested in showing peak traffic and not traffic totals.
What does 750B human interactions mean exactly?
We got the calls per minute at peak or maximum – a really impressive stat, but what’s the average? How about calls per day or month or year? And while we’re at it, how many of these calls are short or long? What’s the average call length on the Twilio platform?
Peak SMS is nice, but again, where are we with average? With total? The number shows the ability to scale more than actual scale.
Phone numbers and email addresses show touch points, but not the amount of traffic. That part was definitely missing from the numbers. I wonder why.
160,000 CustomersThat’s the number of companies that use Twilio. How many of them actively use it and how many “forgot” a number they acquired and happen to pay a dollar a month out of credits on Twilio? I don’t know. But that’s a really big number.
I wonder how many of these are above $100/month customers, and how many are really large customers. Those would be the relevant/interesting ones.
6 Million Developers
Jeff started strong, revealing that the number of developers using Twilio is 6 million. More accurately though, this would be registered accounts. I have 2-3 accounts on the Twilio platform myself.
As big as this number is, it isn’t interesting. Not really.
When you look at mature platforms, they stop talking about registrations and start talking about engagement. That’s why most messaging platforms switched to revealing their Monthly Active Users (MAU), some going as far as daily active users. The idea being that people don’t care how many people are on the platform but rather how many are actively using it.
Twilio can definitely do the same today, and change the way CPaaS vendors flaunt their growth – I’d be surprised if anyone comes near to their size today.
Why? 160,000 customers
Enterprise Compliance
Twilio has been moving toward the enterprise for a few years now, beefing up security, adding an enterprise plan, user roles, etc.
What comes next is compliance.
The part that Jeff Lawson mentioned and wanted to share was support for HIPAA compliance across Twilio’s product lines. This is coming in 2020, and will probably start with one or two products and spread from there to the rest, based on their own internal roadmap.
For those who aren’t aware of it, HIPAA is the regulation put in place in the US for patient privacy. Every and all vendors who wish to offer communication in this space must be HIPAA compliant, and to do so, if they are using a third party like Twilio or other CPaaS vendors, they need them to sign something known as BAA (Business Associate Agreement). Up until now, it wasn’t easy to get a BAA from Twilio, now it will be possible.
This announcement was kind of an aside in the keynote, done to get it out and into the open publicly on stage. It is a really important topic just not as glamorous as the rest. It wasn’t picked up by many who were covering the event. Searching Google for Twilio HIPAA gives results from 2017 but not anything from the keynote…
Twilio CLI
Twilio CLI is a command line interface to the Twilio API. Much like the Nexmo CLI or even the WordPress CLI, it offers easy access for scripting scenarios without the need to “really code”. Here’s the Twilio CLI announcement.
This is a long overdue addition to Twilio. What sweetens the wait though is what this implementation includes out of the box:
- Coverage of a lot of the Twilio APIs already (maybe all of them, I am not sure)
- Plugins, which Twilio and third parties can build for the CLI and use
- Proxy capability to handle webhooks locally during development
- Ready-made templates in their documentation for the CLI
- Ability to pipe the tail of the log straight through the CLI
To show off the power of the CLI, they even created a kind of online competition around it. You can watch it at the 24:30 mark in the keynote.
All in all, this is the only announcement around the Twilio developer tools, but definitely an important addition.
SendGrid
Up until that point in the keynote, Jeff was setting the stage. He got a few things out of the way – explained that Twilio is big by sharing numbers, shared Twilio’s diversity goals and progress, and announced the Twilio CLI (because it didn’t fit the overall theme of the event, but was hugely important to share on stage).
When getting to the communications part, and Twilio’s key points for 2019 and on, they had to say something about SendGrid. This was a big acquisition announced last year, and the intent was to show progress and cohesion.
Here’s how things went for this 20 minutes session:
Sameer Dholakia, CEO of Twilio SendGrid, came on stage. He spent most of the time explaining SendGrid, why email is important, and why it should be delegated to SaaS vendors.
Two things about SendGrid:
- They’re also big
- They’re APIs
That means a perfect match for Twilio.
The explanation given wasn’t really necessary for those used to using email APIs (which may seem like an obvious requirement to many developers). It was meant for the people in the room – Twilio customers – communications people who understand SMS and voice but are somewhat less conversant in the need and importance of email. Remember – most CPaaS vendors are used to explaining why carrier relations is complicated, but probably feel like sending emails should be way simpler.
Why spend all that time about SendGrid and emails? To increase the use of SendGrid by existing Twilio customers.
Sameer shared 3 new features/capabilities that SendGrid rolled out, none of them in too many words:
- Email Validation API; built around the same concepts as Twilio Verify for phone numbers. The intent here is to be able to know if an email is valid or not and with what confidence. Sending emails to invalid addresses may ruin your standing and hurt the deliverability of emails you really want to send to real people, so this can come in really handy
- Marketing campaigns; support for email sequences and the ability to send them as a campaign, which is nice. I use this in my own business through a marketing automation vendor. This means SendGrid is starting to go up the stack and the food chain, as Twilio did with Studio and Flex
- Ads; ability to integrate personalized ads in emails. Another usable feature (which was in beta since last year)
All these are targeted towards marketing activities and less towards communications.
The time spent sharing these new features? 3 minutes. Not nearly enough to grok them.
10 minutes into SendGrid, the discussion came back to Twilio. And SendGrid.
The intent was to show that Twilio+SendGrid = more. This makes perfect sense, especially when shifting towards a world of multiple channels – which is exactly the world Twilio is pitching at Signal. It wasn’t explained in such a way, or integrated that much, at this point. A kind of missed opportunity.
It was done by sharing live code of how you can add email sending via SendGrid within an application that uses Twilio for SMS communications. It was nice. I hope Twilio will have better integration and surprises for 2020 on the Twilio+SendGrid storyline.
Twilio Flex
Dave Michels said it quite well: “Flex was a major theme of Signal 2019, and I anticipate it will be an even bigger theme at Signal 2020.”
Flex had 3 parts to it in the keynote:
- Why is Flex so great
- What have we done new in Flex since last time?
- Here’s how flexible Flex is
While the product is great from a concept point of view (and I am sure from an implementation point of view as well – never really used it myself), I think the delivery during the keynote could have been improved a bit.
1. Why is Flex so greatHere’s what Jeff said during the keynote about Flex:
“A year ago, if you’ve had told me that the legacy vendors in the space would have even heard of Flex I would have thought you were crazy. Let alone buying up all the ads around Moscone for Signal this year. I mean, isn’t that crazy? But you know what? I think they are afraid. I think they are afraid of what happens when we unleash the creative ability of millions of software developers to innovate in the contact center.”
Next, it went into positioning of Flex within the industry:
First, came the legacy vendors, who were positioned as:
- Hard to support
- Hard to maintain
- Expensive
- Hard to change
All true by the way.
Then came the turn of cloud contact centers in SaaS models, said to be 15% of the market now. These were described as “neither scalable nor flexible in the way we need […] not built for continuous improvement”.
Ouch.
True or not, that’s Twilio’s stance now, and it seems like they’re competing head to head with their cloud contact center customers – more verbally than last year.
For me, Flex is a great product – it shows that future enterprise products must be programmable, ushering in a new era of the programmable enterprise. But that’s for some future article.
Two numbers were given for Flex:
- 500% growth in interactions
- 250 ecosystem partners
Both are somewhat meaningless without context, and there is none, as this product is just too new in the market.
2. What have we done new in Flex since last time?
There’s more work and focus on customer acquisition in Flex than in Twilio’s other products.
We will probably see more of that in 2020 and 2021 – the same way that Amazon Connect is now winning a lot more deals and going into a lot more deployments than we’ve seen when it launched. It takes time for these products to really get adopted and integrated.
What did get into the announcements here, probably to give it some tint of progress, were the following:
- Native Zendesk CTI integration. Mentioned in a single sentence during the whole keynote
- Autopilot is now GA. The thing is, Autopilot is a distinct Twilio product in its own right, and it lives as a feature in Flex. The tie-in here is meant to show progress in Flex more than to say anything about Autopilot. More on the Autopilot announcement here
- Announced Media Streams API in beta, a brand new product for Twilio. I think this should have gotten a section of its own rather than just a mention and a minute or two wrapped inside Flex
I think Flex is now in the grueling phase of getting from an MVP to a version 1.0 that Twilio can be proud of. Once there, Twilio will definitely take it off the back burner and start showing a lot more product-specific progress in Flex itself.
3. Here’s how flexible Flex isThat was given to software engineer in the Flex team (I couldn’t catch her name and didn’t find her as a speaker in other sessions at Signal, so sorry about that).
She showed a few ways in which Flex can easily be extended, adding some custom features to it. It was an eye opener to the speed at which you can modify a deployed contact center and improve it, but I am not sure how well that was conveyed to the audience.
I think Twilio has to experiment here a bit further on how to make that into a wow moment.
Media Streams API
The Media Streams API is the generic approach through which CPaaS vendors can offer (and already do offer) ML (Machine Learning) capabilities to their customers.
With AI and ML you never truly know what a customer would like to use or how:
- There are many cloud based SaaS offerings today around ML that can bring value. Which ones would your customer want to use?
- Each customer is unique, and might want to train a model using their own unique data
- As a generic player (CPaaS), it is hard to decide what solutions to offer in the ML space
- It is still early days
The approach here is to tap into an ongoing session, collecting the media from it and sending it to a machine learning algorithm for classification. That can be used for things like speech-to-text, sentiment analysis, recognition, etc.
The easiest way to integrate such a thing today? By shipping the media over a websocket to a cloud service. Other CPaaS vendors are doing this already. Twilio has now added that capability.
Out of the box, the Amazon Lex and Google speech-to-text engines are provided via the websocket interface. This means there’s a bit of integration work on the developer’s end to get them working, but it also means it is a kind of open interface that can be fitted (with some effort) to virtually anything.
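To give a feel for that integration work, here’s a minimal Node sketch of handling the websocket messages such a stream produces. The JSON shape (an `event` field, with base64 audio under `media.payload`) follows my reading of Twilio’s Media Streams docs – treat the exact field names as an approximation:

```javascript
// Sketch: consuming Media Streams-style websocket messages (Node).
// Each message is a JSON frame; "media" frames carry base64-encoded
// audio that you forward to the speech engine of your choice.
function handleStreamMessage(raw, onAudio) {
  const msg = JSON.parse(raw);
  switch (msg.event) {
    case 'start':
      // the stream identifier arrives once, at the beginning
      return { type: 'start', streamSid: msg.start && msg.start.streamSid };
    case 'media': {
      const audio = Buffer.from(msg.media.payload, 'base64');
      onAudio(audio); // hand the raw audio chunk to your ML pipeline
      return { type: 'media', bytes: audio.length };
    }
    case 'stop':
      return { type: 'stop' };
    default:
      return { type: 'unknown' };
  }
}
```

The actual websocket transport and the speech-to-text call are where the real integration work lives; the framing itself is this simple.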
Gridspace was added as an exclusive launch partner with a direct integration, making it simpler to use than others. Twilio promises this will change with more partners added into the roster, which stands to reason.
I am assuming this will come to Programmable Video at some point in the future (to analyze not only the voice channel but also the video).
Here’s the launch announcement for Media Streams API.
This got a really small portion of the keynote this year.
The most revealing slides about messaging
I found the following two slides the most interesting in the whole keynote at Signal 2019.
In the above, Jeff indicates the reply rate Twilio customers see with the Twilio API for WhatsApp.
To me, this means the following:
- With social messaging, enterprises are actually making an effort to make it a two-way interaction and not only one-way notifications
- People are more inclined to reply over social networks (WhatsApp in this case) than they are to do the same over SMS. Why? Because we’re trained to assume companies won’t listen to us over SMS anyway
- For companies seeking true conversations with their customers, social channels are better today than carrier based channels (SMS)
The second slide was revealing about how Twilio sees things:
This was given as context to unveiling Twilio Conversations.
Here, Jeff tried to explain what is driving these programmable conversations. He did it left-to-right.
Messaging Apps
The ones mentioned were Apple Business Chat, WhatsApp and Facebook Messenger.
Somehow, Telegram, Instagram, Twitter, Viber, WeChat and all the others were ignored. Why? Because the top 3 probably have over a billion users each. They are the biggest ones, and also the easiest for Twilio to reach (being predominantly a US West Coast company).
One thing I found even more interesting here is the reference to Apple Business Chat – Apple hasn’t allowed any CPaaS vendor access to its API to date. Up until now, Apple’s focus has been on chat widgets and contact center type applications. Does this give us any indication of a possible future announcement of Twilio as the first CPaaS vendor with Apple Business Chat? Or maybe of Apple opening up to more CPaaS vendors, with Twilio being just one of them?
The other thing that caught my attention came later, when Twilio Conversations was unveiled. It doesn’t yet include support for Facebook Messenger. For social, there’s only WhatsApp. For now.
One alternative is that Twilio is banking on Facebook merging the Facebook Messenger and WhatsApp infrastructure to a point where supporting WhatsApp alone would win. Or simply that they saw more traction for their Twilio API for WhatsApp than for their Facebook Messenger support, so WhatsApp got prioritized for Twilio Conversations.
In any case, the coming year or two is going to be limited in the number of social channels that Twilio will be focusing on.
Carriers
For some reason, the slide includes RCS and 5G. I am not sure why 5G is there, besides making the slide more appealing to the eye. Jeff might even agree with me, as he made no mention of 5G – on this slide or in any other part of the keynote.
RCS? That’s the carriers’ replacement for SMS. It has been promised in the last 20 years and with Google’s own adoption of it, it is getting more attention. Will it be deployed? I don’t know. Right now, the prospects aren’t that good.
Twilio isn’t alone here. Many CPaaS vendors made and are making announcements around RCS support.
Going back to my thoughts around WhatsApp and social, time will tell if enterprises will use RCS to “spam” customers with notifications and marketing, or if they will truly try to use it for conversations. If the former happens, then conversations are better off done via social messaging platforms, leaving RCS to be the enterprise “notifications center” that SMS is today.
Data
For Data, Jeff had AI (Artificial Intelligence). Twilio decided to go with the marketing term as opposed to the technical one (ML, for Machine Learning). Makes sense. For non-technical people they are one and the same, and AI just sounds way better.
ML is happening in communications. I’ve got a whole report on AI in RTC, written with Chad Hart, if you’re interested.
In the context of conversations, this is what will allow us to optimize and scale.
Twilio Conversations
Twilio Conversations was revealed as the last part of the keynote at Twilio Signal.
It is a shame that at this point the main stage screen started fidgeting. It probably couldn’t wait to see what Twilio Conversations is about.
After a long setup explaining how the world is headed towards multiple channels, how customers expect to be treated better, how enterprises that make an effort to strike up conversations win, and how legendary customer experiences are what we should strive for, came Twilio Conversations.
This is a messaging component that handles conversations across multiple channels. Dumbing things down a bit, this is what Twilio Programmable Chat does, but across channels and with history storage.
What does that mean?
It means that on the business side, users can use a single interface on their mobile device or in a browser to interact with customers. And that customers can use different channels to interact with the business. And all this is done as a conversation.
The result? A party.
The demo that Twilio put in place shows that conversations are messy and ever changing, and that Twilio Conversations is something that can assist there.
Here’s what you get with Twilio Conversations:
- Support for multiple channels: Chat (IP messaging), SMS, MMS and WhatsApp
- Group chat, mixed with different channels
- Text, images and videos
- History storage of the conversations
Here’s where it can be improved (probably in future releases if this gets adopted):
- Multiple channels are nice, but Facebook Messenger is somehow missing, even though Twilio supports it, at least in beta
- Voice (and video) channels, as part of the conversation
- Email. They have SendGrid and decided not to use/include it in the first release of Twilio Conversations
And here’s the announcement on the Twilio blog.
Interestingly, Twilio decided not to cover pricing for Twilio Conversations during the keynote. They usually mention it for new products, at least to some extent. Checking their website, it seems like Twilio went for a mixture of a MAU+storage approach.
You pay per Monthly Active User (count the number of users who did “something” in the last month, multiply that by the price per user). And then you pay per GB for the storage you have inside conversations. I am assuming storage is there for images and videos, but it might also cover text. Not written there, but also a reasonable assumption: you pay for each channel’s interactions separately, based on its price list.
An interesting (and reasonable) choice.
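To make the model above concrete, here’s a back-of-the-envelope cost estimator. The rates are made up for illustration – check Twilio’s pricing page for the real numbers:

```javascript
// MAU + storage pricing sketch for a Conversations-style product.
// The default rates here are hypothetical placeholders, not Twilio's.
function estimateConversationsCost({ activeUsers, storedGB }, rates = { perMAU: 0.05, perGB: 0.25 }) {
  const userCost = activeUsers * rates.perMAU;   // per Monthly Active User
  const storageCost = storedGB * rates.perGB;    // per GB of stored media/text
  // Channel traffic (SMS, WhatsApp, ...) would be billed separately
  // per each channel's own price list, on top of this.
  return { userCost, storageCost, total: userCost + storageCost };
}
```

At hypothetical rates of $0.05/MAU and $0.25/GB, 1,000 active users with 10GB stored comes out to $52.50/month before channel traffic.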
Is Conversations necessary? It probably is.
As we’re moving forward with the use of CPaaS and communication APIs, there’s a need to provide ever higher abstractions, giving developers more of the tooling they need and having them write less of that tooling.
With omnichannel being hammered as the next big thing for about a decade, such tooling makes sense.
Twilio isn’t the only vendor headed in this direction either. Nexmo has their own variant, called Nexmo Conversation, somewhat hidden within their developer documentation.
As more conversation revolves around “Twilio is competing with their customers” and “Twilio is expensive”, having new constructs like Twilio Conversations makes perfect sense. It is a usable tool for customers, not competing with their own products (at least for most customers) AND it is a lot more than just SMS and voice.
If Twilio Conversations becomes a popular product for Twilio, this places many of the other CPaaS vendors yet again on the hamster wheel, playing catch-up.
Other announcements
There were other interesting announcements made by Twilio that weren’t introduced during the keynote itself.
Verified by Twilio
This is a different product.
It wasn’t part of the first day keynote, and all I could find is the post about it on Twilio’s blog, the press release and rehash of these two elsewhere. Not much on the product page besides an invitation to the beta program.
Here’s the problem: spam in calls. Lots of it. Most calls probably. Which is why people stopped answering their phone.
Everyone’s trying to solve that problem, each with his own concept – which I find really interesting. How the solutions proposed fit well into the DNA and thought processes of each company. This though, is for another, future article.
Verified by Twilio connects businesses who want their calls to get through the noise of spam calls, customers who do not want to be bothered with spam, and call identification apps. The deal is this:
- A customer installs a call identification app (unrelated to Twilio)
- The call identification app integrates with Twilio (via an API obviously)
- Whenever a call comes in, the app checks with Twilio who the caller is and whether the intent of the call is known. If it isn’t, the app does whatever it did up until today. If Twilio has that information, it is shown to the customer
- The business identifies itself and its brand to Twilio, connecting that information to phone numbers
- The business can also indicate per call what the intent of that call is (or on the phone number level – that information is somewhat murky at this point)
Tada!
Problem solved.
It just needs most users to install these apps on their phones, and businesses to use this new API.
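The moving parts described above reduce to a simple lookup. Here’s a toy simulation of the flow – all names here are mine for illustration, not Twilio’s actual API:

```javascript
// Toy model of the Verified-by-Twilio flow: businesses register their
// numbers, brand and call intent; a call identification app looks the
// incoming caller up before deciding what to show the user.
const registry = new Map(); // phone number -> { brand, intent }

function registerBusiness(number, brand, intent) {
  registry.set(number, { brand, intent });
}

function identifyCall(number) {
  const info = registry.get(number);
  return info
    ? { known: true, ...info }          // show brand + intent to the user
    : { known: false };                 // app falls back to its old behavior
}
```

The hard part, of course, isn’t the lookup – it’s getting enough businesses registered and enough users running the identification apps for the registry to matter.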
Is that going to be opened to Twilio’s competitors? To the carriers themselves? Would this be adopted by Apple and Google?
Lots of questions. Great initiative.
More on the whole spam solutions space in September.
Missing at the Twilio Signal 2019 keynote
A two hour keynote session is long. Twilio had to pick and choose what to talk about and what to leave aside. A few of the things left aside reveal what interests Twilio, or at least what they want to interest us with.
Here are 4 things that didn’t make it to the keynote that I felt were missing.
1. “Architecture” diagram
Every year, Jeff has shared an architecture view of the Twilio product portfolio. It was a fascinating view of how Twilio sees its mix of products and how it thinks they fit into a customer’s thought processes.
I don’t recall if there was such a bird’s eye view for Signal 2018, but there definitely wasn’t for 2019.
Where is Twilio Conversations located compared to Authy and Notify? What about the new Media Streams APIs? How does Twilio rationalize SendGrid or simply email into the mix and within its ever-growing set of products?
If you head to the Twilio products page, you get this categorization:
- Solutions
- Flex
- Marketing Campaigns
- Services
- Identity
- Lookup
- Verify
- Authy
- Intelligence
- Autopilot
- TaskRouter
- Conversations
- Twilio Conversations
- Proxy
- Identity
- Channels APIs
- Programmable SMS
- Programmable Voice
- Twilio SendGrid
- Programmable Chat
- Programmable Video
- Programmable Fax
- Twilio APIs for WhatsApp
- Super Network
- Phone Numbers
- Programmable Wireless
- Short Codes
- Super SIM
- Elastic SIP Trunking
- Narrowband
- Interconnect
- Tools
- Runtime
Nothing on that page is visual. Just a long list of products. It is workable if you know Twilio, but feels somewhat hard for newcomers.
A few interesting things here:
- Conversations as a category under Services and Twilio Conversations as a product inside it. I am assuming marketing has debated this one a bit, trying to resolve the confusion it could bring and decided just to use the same word
- The word APIs appears only on the Channels. Services are considered higher level constructs here, even if (obviously) they are treated as APIs
- Tools includes only “Runtime”, wrapping into them things like Functions and Studio, and ignoring other free of charge tools and capabilities such as the new CLI
- Twilio Pay, Media Streams API and probably a slew of other capabilities get hidden inside the layers of products and not mentioned here at all
I think not speaking about these layers – and how Twilio sees the world of communication APIs as a layered structure – in their yearly keynote is a missed opportunity.
2. Video
No video. Nothing.
It wasn’t mentioned at all.
Was it because it isn’t a money maker?
Was it because there was nothing new to say about it?
Was it because the progress made wasn’t important enough to fit in the keynote?
3. VoiceThis one’s interesting.
Thinking back on the keynote, there was nothing substantial in it about voice calls. Sure, it was covered somewhat as part of the Twilio CLI and Flex, but nothing more.
I am guessing that support for voice is a done deal, so no real need to say anything new about it during the keynote itself.
The closest thing to voice coverage at Twilio Signal 2019 was the new Media Streams API, and even that wasn’t covered much.
4. IoT
The Twilio SIM card and its M2M play weren’t part of Signal.
The focus was Flex and human interactions. Not bots-to-bots or machines-to-machines.
I am wondering what progress and announcements could have been made.
Where is CPaaS headed?
CPaaS is complicated these days.
SMS and voice? Sure.
But are SMS and voice mandatory? Some think they are, and place the whole focus on that part.
Is video part of CPaaS? What about only video?
WhatsApp support? Other channels?
Email anyone?
What makes a CPaaS platform well… CPaaS?
For me, it is being generic, appealing to developers and handling real time interactions (be it text, voice, video or whatever). And it also needs to somehow deal with communications between humans.
We’re at a point where CPaaS is appealing. Both for enterprises who need to improve their communications and to new entrants to the CPaaS market – either as vendors, enablers, partners, or whatnot. There’s an ecosystem building in the CPaaS space for a few years now.
Twilio is somehow managing to stay ahead of most vendors in both the breadth and depth of their offering, which is not an easy task.
Trying to figure out CPaaS?
- Who are the players in this market?
- How do they differentiate?
- Where is the industry headed?
- Is CPaaS being commoditized?
- How do you compare to other vendors in this space?
Ping me if you want answers to any of these questions for a private consultation.
The post Twilio Signal 2019 and the future of the programmable enterprise appeared first on BlogGeek.me.
Common (beginner) mistakes in WebRTC
WebRTC can be hacked-away with great results. Often though, this leads to sub-par experiences.
WebRTC as a VoIP technology is the best thing ever. It “democratizes” this whole domain, taking it from the hands of experts into the hands of the masses of developers out there. Slapping a bit of code and seeing real time video is magical. And we’re now starting to add it to more and more businesses using web technology.
While this all seems easy now (and it is a lot easier than it used to be before WebRTC), there are a few mistakes that many beginners make in WebRTC. And to be honest, these mistakes are not made only by beginners. That is why I am sharing a few common (beginner) mistakes in WebRTC that I’ve been seeing for a couple of years now.
1. Using an outdated signaling server (from github)
This happens all too often. You start by wanting to build something, you search github, you pick a project, and with WebRTC – it just doesn’t work. It might work for the simple scenario, but it won’t handle edge cases, scale nicely, or accommodate the more complex thing you’re thinking about.
The truth is that today there’s no single, good, off-the-shelf, out-of-the-box, ready-made, pure-goodness, open source signaling server for WebRTC that you can use. Sorry.
There might be a few contenders, but I haven’t seen any specific project that everyone’s using (unlike TURN for example, where coturn definitely rulz). The sadder truth? SFUs offer better signaling than dedicated signaling servers with WebRTC (and I’d almost always suggest against using their signaling directly in front of your WebRTC client).
2. Mis-configuring NAT traversal
This should have been trivial by now, but apparently it isn’t.
A few rules of thumb:
- Don’t. Rely. On. Google. Public. STUN
- Don’t use free github STUN and TURN server lists
- Don’t decide not to deploy TURN because your server has a public IP address
- And then a few
This is such a basic and common mistake that I even created a free video course for it: Effectively Connecting WebRTC Sessions.
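To make the rules above concrete, here is a minimal sketch of an RTCPeerConnection configuration that follows them. The server URLs and credentials are placeholders for STUN/TURN servers you deploy and operate yourself (coturn, for example), not real endpoints:

```javascript
// Build an RTCPeerConnection configuration with your own STUN and TURN
// servers. The URLs and credentials here are placeholders - point them
// at servers you deploy and operate yourself.
function buildIceConfig({ stunUrl, turnUrl, username, credential }) {
  return {
    iceServers: [
      { urls: [stunUrl] },                       // your own STUN, not a public one or a github list
      { urls: [turnUrl], username, credential }, // deploy TURN even if your server has a public IP
    ],
    // Uncomment while testing to verify that TURN actually relays media:
    // iceTransportPolicy: 'relay',
  };
}

const config = buildIceConfig({
  stunUrl: 'stun:stun.example.com:3478',
  turnUrl: 'turn:turn.example.com:3478',
  username: 'user',
  credential: 'secret',
});
// In the browser: new RTCPeerConnection(config);
```

Setting `iceTransportPolicy: 'relay'` during testing is a quick way to confirm your TURN server is reachable and actually being used, rather than silently skipped.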
3. Testing locally
This one’s also basic.
Locally things tend to work well. Due to different network configuration, but also due to fairy dust that I am sure you sprinkle over your local router (I know I do every morning).
Once you go to the real world with real networks, things tend to break.
Test in the real world and not on your machine using 2 tabs, or, to be professional about it, from a Chrome tab to a Firefox tab.
The real world is messy and messy isn’t healthy for naive deployments.
Need help with automating that? Look at testRTC, but don’t neglect real world testing.
4. Not using adapter.js
WebRTC is a great specification but it is rather new.
This means that:
- Different browsers are going to behave a bit differently
- Browser implementations are somewhat buggy
- Different versions of the same browser act differently
And I haven’t even started on getting WebRTC browser implementations to be spec compliant with 1.0.
This all boils down to you having to work out a strategy in your code for where all the if/then logic that deals with these differences takes place.
The alternatives?
- Whenever you find such an issue, add that if/then statement in the code directly (the most common approach, albeit not really smart in the long term)
- Create a shim/polyfill/whatever you want to call it, where you do all these if/then thingies (great, but not easy to maintain)
- Just use adapter.js
Guess which one I prefer?
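To illustrate the difference, here is a toy version of option 2 – a shim that centralizes the if/then logic for getUserMedia. The function name is mine; adapter.js does this exhaustively and keeps up with browser releases, which is exactly why option 3 wins:

```javascript
// A toy shim centralizing browser differences for getUserMedia.
// adapter.js covers far more than this (and correctly); this only
// illustrates the approach of keeping the if/then logic in one place.
function shimGetUserMedia(nav) {
  // Modern, promise-based API
  if (nav.mediaDevices && nav.mediaDevices.getUserMedia) {
    return (constraints) => nav.mediaDevices.getUserMedia(constraints);
  }
  // Legacy, callback-based, vendor-prefixed APIs
  const legacy = nav.webkitGetUserMedia || nav.mozGetUserMedia;
  if (legacy) {
    return (constraints) =>
      new Promise((resolve, reject) => legacy.call(nav, constraints, resolve, reject));
  }
  throw new Error('getUserMedia is not supported in this browser');
}

// In the browser: shimGetUserMedia(navigator)({ audio: true, video: true })
```

Every new browser quirk means another branch here, which is why maintaining your own shim becomes a treadmill – and why simply importing adapter.js is the smarter long-term choice.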
5. Forgetting to take care of security
Two reasons for you to forget about security:
- WebRTC is secure, so why should you do anything more about it?
- Because your service doesn’t deal with payments or sensitive data so why bother?
Neither reason will lead you to a good place. In 2019, security is coming to the front, especially with communications. You can ask Zoom about it, or go check what Google’s Project Zero did recently.
Want a good starting point? I’ve got a WebRTC security checklist for you.
6. Assuming you can outsource it all
You can’t. Not really.
Need a design for a whitepaper? An article written? A WordPress website created? Find someone on Upwork, Fiverr or the slew of other websites out there and be done with it.
With WebRTC? Don’t even think about it.
WebRTC is ever-changing, which means that whatever you deploy today, you will need to maintain later. If you are outsourcing the work – some of it or all of it – assume this is going to be a long term relationship, and that for the most part, you may be able to outsource the development work but not the responsibility.
Going this route? Here are 6 things to ask yourself before hiring an outsourcing WebRTC vendor.
7. Diving into the code before grokking WebRTC
- Go to github.
- Pick a project.
- “Install” it.
- Run it.
- Fix a few lines of code.
- Assume you’re done.
No. WebRTC is much more complicated than that scenario above. There are a few different servers you’ll need to deploy and use, there’s geography sensitivity to consider, and lots of other things.
You need to understand WebRTC if you want to really use it properly. Even if all you’re doing is using a 3rd party.
Don’t make these mistakes!
Be sure to review these to see if there’s anything you’re doin’ wrong:
- Using an outdated signaling server (from github)
- Mis-configuring NAT traversal
- Testing locally
- Not using adapter.js
- Forgetting to take care of security
- Assuming you can outsource it all
- Diving into the code before grokking WebRTC
Check out my free WebRTC Basics course, or the bigger Advanced WebRTC one.
The post Common (beginner) mistakes in WebRTC appeared first on BlogGeek.me.
WebRTC Courses: Free, Advanced and Tooling
The next evolution of my WebRTC training program is here.
A few years ago I wanted to try something new, so I spent a few months creating the Advanced WebRTC Architecture course. 3 years and 300 students later, it is time for a refresh.
While I keep my course up to date, hosting office hours, adding links on a monthly basis and modifying existing lessons when the need arises, there were things that I just never got around to. Which is why, three months ago, I sat down and planned the next stage for my course – thinking of how to add more content without imploding the course and its price point.
The end result?
4 separate courses, 3 courses available starting this month, and the fourth one? Once I am done creating it.
I’ve renamed them a bit, at least on the higher level, for simplicity, while keeping the Advanced WebRTC Architecture course mouthful-name inside the course itself (it made no sense to record it all again just for a “name-change”). Here is the new structure:
- WebRTC Basics
- Advanced WebRTC
- WebRTC Tooling
- Supporting WebRTC
The WebRTC Basics course is something I’ve been thinking about on and off for quite some time. The content of this course is quite simple – it is the first module of my Advanced WebRTC Architecture course.
I even made that module free to access in my existing course in the past few months, though it is hard to tell how many people understood that it is free to access. For this reason, and a few others, I’ve decided to split it from the main course and offer it as a stand-alone free course.
Interested in learning the basics of WebRTC? You can enroll in this new course today for free and watch the lessons at your own pace.
Advanced WebRTC (Architecture course)
This is my signature WebRTC course. It got a facelift in this round:
- The learning experience has been upgraded and made modern
- Content got updated as well, to reflect the current WebRTC state
- Lessons now have lesson briefs, to make it easier for you to get back to them and review content without watching the lesson videos
- Lessons now have Q&A, helping with common questions on that specific topic, and to assist you in making sure you understand the lesson content
- At the request of some of the students, I’ve added a link to the glossary terms which appear in each lesson as part of the additional materials on each lesson
If you are a student of this course already, login today and see if you can notice the difference.
One thing that didn’t make it in the migration is your course progress… all in the name of… progress.
WebRTC Tooling – a brand new “course”
This one’s brand new and is geared to become a rich library of resources.
Today, it includes two modules:
- Interviews – 10-15 minute interviews (which I unwisely called “in 10 minutes or less”) with the people behind popular commercial and open source tools in the industry. The idea? If you want to pick a tool, you can quickly skim through the relevant video interviews to filter out alternatives, saving you tons of time
- Snippets – technical answers to common technical questions that I see. They can be found inside the Advanced WebRTC course in various places or deduced from it, but here you have them in byte-size chunks of 3-8 minutes each
In each of these modules there are already over 8 “lessons”, and I plan to grow the list on a monthly basis – especially by request/demand of the students who enroll in it.
For this week only, the All Included course comes with the Tooling course for free (it is priced like the Advanced WebRTC course).
Supporting WebRTC – coming soon
This is a new course I’ve been thinking of on and off over the last year. It seems like I am getting more and more requests for such a thing, and in some of my consulting engagements I end up working directly with support teams on figuring out what they see in WebRTC dumps.
The intention of this course is to focus on support teams and what they need to know about WebRTC to effectively assist their customers.
This is in the ideation phase for me, but will soon go into the creation phase. If you are interested in learning more or participating – contact me.
All Included – a bundled offering
This is a bundle of the Advanced WebRTC and the WebRTC Tooling courses into one package.
It costs less than enrolling to each separately. And for the coming week, it is priced like the Advanced WebRTC course. Which means large savings.
In the one week launch period, there are 3 eBooks that will be supplied for free as well. Which leads me to the next part –
eBooks
While we’re at it, I’ve written a new eBook and made two other eBooks available for purchase:
- Best practices in scaling WebRTC deployments – a new eBook, detailing the various aspects of scaling WebRTC services. This should get you going in understanding what’s expected of you and what are the common best practices in the industry
- Scaling Jitsi – Jitsi operates Jitsi Meet, a global, scalable group video calling service. I’ve sat down with them two years ago, to get the gist of how they’ve managed to scale it. This eBook details that
- The perfect WebRTC developer profile – what do managers and entrepreneurs look for in WebRTC developers? This holds interviews I’ve done with 7 managers and developers who have been working with WebRTC for quite some time
During the coming week, through the launch period of the course, these eBooks will be freely available as part of the All Included bundle. If you’re not interested in the courses, but interested in one or all of the eBooks, you can purchase them separately.
Q&A about this WebRTC course restructuring
I understand that this might be a bit confusing, especially for students who are already enrolled in the course. I’ll try to address these issues and other questions here –
What happens to those who enrolled in the WebRTC course in the last 12 months?
Nothing special.
They get to enjoy the new tools available for them in the Advanced WebRTC course. If you are one of these people and you have difficulties logging in – contact me.
What if I enrolled more than 12 months ago?
Then your subscription to the course is over. If you still want access, contact me.
When is the next office hours round taking place?
After the summer vacation.
I plan on starting these come September.
When will this restructuring take place?
It already has.
The courses and eBooks are now all available on webrtccourse.com.
Where can I learn more about the WebRTC courses?
On the course website.
You can find there testimonials from people who took the course, an FAQ, the list of partners, the syllabus and other details.
If you have specific questions, feel free to reach out to me and ask them.
The post WebRTC Courses: Free, Advanced and Tooling appeared first on BlogGeek.me.
WebRTC connectivity is challenging (a free video course)
Connecting WebRTC sessions effectively isn’t overly complicated, but it is something you need to be mindful of.
Every other day someone asks somewhere on the internet why their sessions don’t get connected with WebRTC. This can happen on discuss-webrtc, through my contact page, on open source WebRTC related forums, etc. Here’s one that was published on Stack Overflow this month:
I am working on video calling functionality using webRTC. I have used “Google webRTC” framework instead of libJingle. Once my peerconnection established it remains always in “RTCICEConnectionChecking” state.
I have few question.
1) Peerconnection state always remain in “RTCICEConnectionChecking”.
2) When network is different (3g/4g) video call is not working.
3) Same network it is working fine.
I have used many turn server but could not get success.
Please, suggest me ,thanks in advance.
The usual complaint?
WebRTC works fine on a local network, but stops working when trying to run it on other networks.
That’s so common you’d think people would know what to do with it by now.
That nice question has another angle to it – “I have used many turn server but could not get success”. Hmm… someone here feels WebRTC should be free.
If you haven’t read about it already, then please do – Why Doesn’t Google Provide a Free TURN Server? It turns out that TURN costs real money to operate. And at scale even serious money. Which is why finding “turn server” and “get success” is rather hard (and probably impossible for the long run).
This continuous, unstoppable flow of similar questions over the past couple of years got me to the point where it was time to put out a nice answer to it. Which is why I created my latest video mini-course – 3 short videos that will explain how we got to this ridiculous point: being unable to connect simple use cases with WebRTC.
In these videos, I’ll be teaching you the problem that is causing this to happen, the mistakes developers usually make when trying to solve that problem (think “used many turn server”), and then 2 actionable solutions for you that will guarantee that more WebRTC sessions get connected.
Why am I doing this?
First because I like receiving emails from people saying “thank you“ (so if you’ll find this course useful – be sure to reply with a thank you note).
But also because another round of office hours will take place soon for my WebRTC course. For this one, I am making a lot of changes in the structure of my WebRTC course and creating almost 3 additional hours worth of content.
Want to know how to get more WebRTC sessions connected?
Learn how to effectively connect WebRTC sessions
The post WebRTC connectivity is challenging (a free video course) appeared first on BlogGeek.me.
Zoom app vulnerability shows why WebRTC is important
It must have been a fun week for Zoom. It showed why WebRTC is needed if you value security.
For those who haven’t followed the tech news, a week ago a serious vulnerability was publicly disclosed about Zoom by Jonathan Leitschuh. If you have a Mac and installed Zoom to join a meeting, then people could use web pages and links to force your machine to open up your Zoom client and camera. To make things worse, uninstalling Zoom was… impossible. That same link would forcefully reinstall zoom as well.
I don’t want to get into the details of the question of how serious the actual vulnerability that was found is, but rather want to discuss what got Zoom there, and to some extent, why WebRTC is the better technical choice.
What caused the Zoom vulnerability?
The road to hell is paved with good intentions.
When the Zoom app installs on your machine, it tries to integrate itself with the browser, in an effort to make it really quick to respond. The idea behind it is to reduce friction to the user.
An installation process is usually a multistep process these days:
- You click a link on the browser
- The link downloads an executable file
- You then need to double click that executable
- A pop up will ask you if you are sure you want to install
- The installation will take place and the app will run
Anything can go wrong in each step along the way – and when things can go wrong they usually do. At scale, this means a lot of frustration for users.
I’ve been at this game myself. Before the good days of WebRTC, when I worked at a video conferencing company, this was a real pain for us. My company at the time developed its own desktop client, as an app that gets downloaded as a browser plugin. Lots of issues and bugs in getting this installed properly and removing friction.
These days, you can’t install browser plugins, so you’re left with installing an app.
Zoom tried to do two things here:
- If the Zoom app was installed, automate the process of running it from a web page
- If the Zoom app was not installed, try and automate the process of installing and running it
That first thing? Everyone tries to do it these days. We’re in removing friction for users – remember?
The second one? That’s something that people consider outrageous. You uninstall the Zoom app, and if you open a web page with a link to a zoom meeting it will go about silently installing it in the background for the user. Why? Because there’s a “virus” left by the Zoom installation in your system. A web server that waits for commands and one of them is installing the Zoom client.
Here’s how joining a Zoom call looks on my Chrome browser in Linux:
The Zoom URL for joining a meeting opens the above window. Sometimes, it pops up a dialog and sometimes it doesn’t. When it doesn’t, you’re stuck on the page with either the need to “download & run Zoom” (which is weird, since it is already installed on my machine), “join from your browser” which we already know gives crappy quality or “click here”.
Since I am used to this weirdly broken behavior, I already know that I need to “click here”. This will bring about this lovely pop up:
This isn’t Zoom – it is Chrome opening a dialog of its own indicating that the browser page is trying to open a natively installed Linux application. It took me quite some time to decide to click that “Open xdg-open” button for these kinds of installed apps. For the most part, this is friction. Ugly friction at its best.
Does the Google Chrome team care? No. Why should it? Companies wanting to take the experience out of the domain of the browser into native-land is something it would prefer not to happen.
Does Zoom care? It does. Not on Linux apparently (otherwise, this page would have been way better in its explanation of what to do). But on Mac? It cares so much that it went above and beyond to reduce that friction, going as far as trying to hack its way around security measures set by the Safari team.
Is the Zoom vulnerability really serious?
Maybe. Probably. I don’t know.
It was disclosed as a zero-day vulnerability, which is considered rather serious.
The original analysis of the vulnerability indicated quite a few avenues of attack:
- The use of an undocumented API on a locally installed web server
- Disguising the API calls as images to bypass and ignore a browser security policy
- Ability to force a user to join a meeting with a click of a link without further request for permissions. The user doesn’t need to even approve that meeting
- Ability to force a webcam to open in meeting on a click of a link without further request for permissions. The user doesn’t need to even approve that meeting
- Denial of service attack by forcing the Zoom app to open over and over again
- Silently installing Zoom if it was uninstalled
Some of these issues have been patched by Zoom already, but the thing that remains here is the responsibility of developers in applications they write. We will get to it a bit later.
While I am no security expert, this got the attention of Apple, who decided to automate the process and simply remove the Zoom web server from all Mac machines remotely and be done with it. It was serious enough for Apple.
Security is a game of cat and mouse
There are 3 main arms races going on on the internet these days:
- Privacy vs data collection
- Ads vs ad blockers (related to the first one)
- Hackers vs security measures
Zoom fell for that 3rd one.
Assume that every application and service you use has security issues and unknown bugs that might be exploited. The many data breaches we’ve had in the last few years at companies large and small indicate that clearly. So do the ransomware attacks on US cities.
Unified communications and video conferencing services are no different. As video use and popularity grows, so will the breaches and security exploits that will be found.
There were security breaches for these services before and there will be after. This isn’t the first or the last time we will be seeing this.
Could Zoom or any other company minimize its exposure? Sure.
Zoom’s response
My friend Chris thinks Zoom handled this nicely, with Eric Yuan joining a video call with security hackers. I see it more as a PR stunt. One that ended up backfiring, or at least not helping Zoom’s case here.
The end result? This post from Zoom, signed by the CEO as the author. This one resonates here:
Our current escalation process clearly wasn’t good enough in this instance. We have taken steps to improve our process for receiving, escalating, and closing the loop on all future security-related concerns
At the end, this won’t reduce the amount of people using Zoom or even slow Zoom’s growth. Users like the service and are unlikely to switch. A few people might heed to John Gruber’s suggestion to “eradicate it and never install it again”, but I don’t see this happening en masse.
Zoom got scorched by the fire and I have a feeling they’ll be doing better than most in this space from now on.
Competitors’ dancing moves
A few competitors of Zoom were quick to respond. The 3 that got to my email and RSS feed?
LogMeIn, had a post on the GoToMeeting website, taking this stance:
- “We don’t have that vulnerability or architectural problem”
- “We launch our app from the browser, but through the standard means”
- “Our uninstalls are clean”
- “We offer a web client so users don’t need to install anything if they don’t want to”
- “We’re name-dropping words like SOC2 to make you feel secure”
- “Here’s our security whitepaper for you to download and read”
Lifesize issued a message from their CEO:
- “Zoom is sacrificing security for convenience”
- “Their response is indefensibly unsatisfactory”
- “Zoom still does not encrypt video calls by default for the vast majority of its customers”
- “We take security seriously”
Apizee decided to join the party:
- “We use WebRTC which is secure”
- “We’re doing above and beyond in security as well”
The truth? I’d do the same if I were a competitor and comfortable with my security solution.
The challenge? Jonathan Leitschuh or some other security researcher might well go check them out, and who knows what they will find.
Why does WebRTC improve security?
For those who don’t know, WebRTC offers voice and video communications from inside the browser. Most vendors today use WebRTC, and for some reason, Zoom doesn’t.
There are two main reasons why WebRTC improves security of real time communication apps:
- It is implemented by browser vendors
- It only allows encrypted communications
Many have complained about WebRTC and the fact that you cannot send unencrypted media with it. All VoIP services prior to WebRTC ran unencrypted by default, adding encryption as an optional feature.
Unencrypted media is easier to debug and record, but it also enables eavesdropping. Encrypted media is thought to be a CPU hog due to the encryption process – a notion that in 2019 should be outdated.
When Zoom decided not to use WebRTC, they essentially decided to take full responsibility and ownership of all security issues. They did that from a point of view and stance of an application developer or maybe a video conferencing vendor. They didn’t view it from a point of view of a browser vendor.
Browsers are secured by default, or at least try to be. Since they are general purpose containers for web applications that users end up using, they run with sandboxed environments and they do their best to mitigate any security risks and issues. They do it so often that I’d be surprised if there are any other teams (barring the operating system vendors themselves) who have better processes and technologies in place to handle security.
By striving for frictionless interactions, Zoom came head-on into an area where browser vendors handle security threats of unknown code execution. Zoom made the mistake of trying to hack its way through the security fence that the Safari browser team put in place instead of working within the boundaries provided.
Why did they take that approach? Company DNA.
Zoom “just works”, or so the legend goes. So anything that Zoom developers can do to perpetuate that is something they will go the length to do.
WebRTC has a large set of security tools and measures put in place. These enable running it frictionlessly without the compromises that Zoom had to make to get to a similar behavior.
Where may WebRTC fail?
There are several places where WebRTC is failing when it comes to security. Some of these are issues that are being addressed, while others are rather debatable.
I’d like to mention 4 areas here:
#1 – WebRTC IP leak
Like any other VoIP solution, WebRTC requires access to the local IP addresses of devices to work. Unlike any other VoIP solution, WebRTC exposes these IP addresses to the web application on top of it in JavaScript in order to work. Why? Because it has no other way to do this.
This has been known as the WebRTC IP leak issue, which is a minor issue if you compare it to the Zoom zero day exploit. It is also one that is being addressed with the introduction of mDNS, which I wrote about last time.
A few months from now, the WebRTC IP leak will be a distant problem.
I also wouldn’t categorize it as a security threat. At most it is a privacy issue.
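With mDNS, the browser replaces the raw private IP in a host candidate with a generated `.local` hostname. Telling the two apart is simple, since the fifth space-separated field of a candidate line is the connection address. A small sketch (the function name is my own):

```javascript
// Check whether an ICE candidate line carries an mDNS (.local) hostname
// instead of a raw local IP address. Candidate line layout:
// "candidate:<foundation> <component> <protocol> <priority> <address> <port> typ <type> ..."
function isMdnsCandidate(candidateLine) {
  const address = candidateLine.split(' ')[4] || '';
  return address.endsWith('.local');
}

console.log(isMdnsCandidate(
  'candidate:1 1 udp 2113937151 4d29b3cc-3b4e-4d43-9e7a-aabbccddeeff.local 56789 typ host generation 0'
)); // true
```

Only the two peers’ browsers resolve these hostnames (over multicast DNS on the local network), so the JavaScript on the page never sees the actual private IP.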
#2 – Default access to web camera and microphone
When you use WebRTC, the browser is going to ask you to allow access to your camera and microphone, which is great. It shows that users need to agree to that.
But they only need to agree once per domain.
Go to the Google AppRTC demo page. If it is the first time you’re using it, it will ask you to allow access to your camera and microphone. Close the page and reopen it – and it won’t ask again. That’s at least the behavior on Chrome. Each browser takes its own approach here.
Clicking the Allow button above would cause all requests for camera and microphone access from appr.tc to be approved from now on without the need for an explicit user consent.
Is that a good thing? A bad thing?
It reduces friction, but ends up doing exactly what Jonathan Leitschuh complained about with Zoom as well – being able to open a user’s webcam without explicit consent just by clicking on a web link.
This today is considered standard practice with WebRTC and with video meetings in general. I’d go further and say that if there’s anything that pisses me off, it is video conferencing services that make you join with muted video, requiring me to explicitly unmute my video.
As I said, I am not a security expert, so I leave this for you to decide.
#3 – Ugly exploits
Did I say a cat and mouse game? Advertising and ad blockers are there as well.
Advertisers try to push their ads, sometimes aggressively, which brought into the world the ad blockers, who then deal with cleaning up the mess. So advertisers try to hack their way through ad blockers.
Since there’s big advertising money involved, there are those who try to game the system. Either by using machines to automate ad viewing and clicking to increase revenue, getting real humans in poor countries to manually click ads for the same reason, or just injecting their own code and ads instead of the ads that should have appeared.
That last one was found using WebRTC to inject its code, by placing it in the data channel. There’s some more information on the DEVCON website. Interestingly, this exploit works best via Webview inside apps like Facebook that open web pages internally instead of through the browser. It makes it a lot harder to research and find in that game of cat and mouse.
I don’t know if this is being addressed at all at the moment by browser vendors or the standards bodies.
#4 – Lazy developers
This is the biggest threat by far.
Developers using WebRTC who don’t know better, or who just assume that WebRTC protects them and do their best not to take responsibility for their part of the application.
Remember that WebRTC is a building block – a piece of browser based technology that you use in your own application. Also, it has no signaling protocol of its own, so it is up to you to decide, implement and operate that signaling protocol yourself.
Whatever you do on top of WebRTC needs to be done securely as well, but it is your responsibility. I’ve written a WebRTC security checklist. Check it out:
Download the WebRTC security checklist
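Since WebRTC leaves signaling entirely to you, validating what arrives on your signaling channel is part of that responsibility. A minimal server-side sketch – the message shape (type/sdp/candidate fields) is an assumed convention for illustration, not any standard:

```javascript
// Validate incoming signaling messages before acting on them. The schema
// here is an assumed convention - WebRTC itself defines no signaling
// protocol, so your own design must reject anything unexpected.
const ALLOWED_TYPES = new Set(['offer', 'answer', 'candidate', 'bye']);

function validateSignalingMessage(msg) {
  if (typeof msg !== 'object' || msg === null) return false;
  if (!ALLOWED_TYPES.has(msg.type)) return false;
  // Offers and answers must carry SDP; candidates must carry a candidate line
  if ((msg.type === 'offer' || msg.type === 'answer') && typeof msg.sdp !== 'string') return false;
  if (msg.type === 'candidate' && typeof msg.candidate !== 'string') return false;
  return true;
}

console.log(validateSignalingMessage({ type: 'offer', sdp: 'v=0\r\n' })); // true
console.log(validateSignalingMessage({ type: 'exec', payload: 'evil' })); // false
```

Real deployments would add authentication, rate limiting and size limits on top – the point is simply that nothing crossing your signaling channel should be trusted blindly.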
Why isn’t Zoom using WebRTC?
Zoom was founded in 2011.
WebRTC was just announced in 2011.
At the time it started, WebRTC wasn’t a thing.
When WebRTC became a thing, Zoom were probably already too invested in their own technology to be bothered with switching over to WebRTC.
While Zoom wanted frictionless communications for its customers, it probably had and still has too big a price to pay to switch to WebRTC. This is probably why, when Zoom decided to support browsers directly with no downloads, they went for WebAssembly rather than WebRTC. The results are a lot poorer, but it allowed Zoom to stay within the technology stack it already had.
The biggest headache for Zoom here is probably the video codec implementation. I’ll take a guess and assume that Zoom is using its own proprietary video codec derived from H.264. The closest indication I could find for it was this post on the Zoom website:
We have better coding and compression for our screen sharing than any other software on the market
If Zoom had codecs that are compatible with WebRTC or that can easily be made compatible with WebRTC they would have adopted WebRTC already.
Zoom took the approach of using this as a differentiator and focusing on improving their codecs, most probably thinking that media quality was the leading factor for people to choose Zoom over alternative solutions.
Where do we go from here?
It is 2019.
If you are debating using WebRTC or a proprietary technology then stop debating. Use WebRTC.
It will save you time and improve the security as well as many other aspects of your application.
If you’re still not sure, you can always contact me.
The post Zoom app vulnerability shows why WebRTC is important appeared first on BlogGeek.me.
PSA: mDNS and .local ICE candidates are coming
Another destabilizing WebRTC experiment in Chrome is about to become reality.
I’ve had clients approaching me in the past month or two with questions about a new type of address cropping up as ICE candidates. As it so happens, these new candidates have caused some broken experiences.
In this article, I’ll try to untangle how local ICE candidates work, what mDNS is, how it is used in WebRTC, why it breaks WebRTC and how this could have been handled better.
How do local ICE candidates work in WebRTC?
Before we go into mDNS, let’s start with understanding why we’re headed there with WebRTC.
When trying to connect a session over WebRTC, there are 3 types of addresses that a WebRTC client tries to negotiate:
- Local IP addresses
- Public IP addresses, found through STUN servers
- Public IP addresses, allocated on TURN servers
During the ICE negotiation process, your browser (or app) will contact its configured STUN and TURN servers, asking them for addresses. It will also check with the operating system what local IP addresses it has at its disposal.
Why do we need a local IP address?
If both machines that need to connect to each other using WebRTC sit within the same private network, then there’s no need for the communication to leave the local network either.
Why do we need a public IP address through STUN?
If the machines are on different networks, then by punching a hole through the NAT/firewall, we might be able to use the public IP address that gets allocated to our machine to communicate with the remote peer.
Why do we need a public IP address on a TURN server?
If all else fails, then we need to relay our media through a “third party”. That third party is a TURN server.
Local IP addresses as a privacy risk
That part about sharing local IP addresses? It can really improve the chances of getting calls connected.
It is also something that is widely used and common in VoIP services. The difference though is that VoIP services that aren’t WebRTC and don’t run in the browsers are a bit harder to hack or abuse. They need to be installed first.
WebRTC gives web developers “superpowers” in knowing your local IP address. That scares privacy advocates, who see this as a breach of privacy and even gave it the name “WebRTC Leak”.
A few things about that:
- Any application running on your device knows your IP address and can report it back to someone
- Only WebRTC (as far as I know) gives the ability to know your local IP addresses in the JavaScript code running inside the browser
- People using VPNs assume the VPN takes care of that (browsers do offer mechanisms to remove local IP addresses), but VPNs sometimes fail to add WebRTC support properly
- Local IP addresses can be used by JavaScript developers for things like fingerprinting users or deciding if there’s a browser bot or a real human looking at the page, though there are better ways of doing these things
- There is no security risk here. Just privacy risk – leaking a local IP address. How much risk does that entail? I don’t really know
Yes, we have known about that problem ever since the NY Times used a WebRTC-based script to gather IP addresses back in 2015. “WebRTC IP leak” is one of the most common search terms (SEO hacking at its best).
Luckily for us, Google is collecting anonymous usage statistics from Chrome, making the information available through a public chromestatus metrics site. We can use that to see what percentage of the page loads WebRTC is used. The numbers are quite… big:
RTCPeerConnection calls on % of Chrome page loads (see here)
Currently, 8% of page loads create an RTCPeerConnection. 8%. That is quite a bit. We can see two large increases, one in early 2018 when 4% of pageloads used RTCPeerConnection and then another jump in November to 8%.
Now that just means RTCPeerConnection is used. In order to gather local IPs the setLocalDescription call is required. There are statistics for this one as well:
setLocalDescription calls on % of Chrome page loads (see here)
The numbers here are significantly lower than for the constructor. This means a lot of peer connections are constructed but not used. It is somewhat unclear why this happens. We can see a really big increase in November 2018 to 4%, at about the same time that RTCPeerConnection calls jumped to 7-8%. While it makes no sense, this is what we have to work with.
Now, WebRTC could be used legitimately to establish a peer-to-peer connection. For that we need both setLocalDescription and setRemoteDescription and we have statistics for the latter as well:
setRemoteDescription calls on % of Chrome page loads (see here)
Since the big jump in late 2017 (which is explained by a different way of gathering data), the usage of setRemoteDescription hovers between 0.03% and 0.04% of pageloads. That means a peer connection actually gets connected on only about 1% of the pages that create one.
We can get another idea about how popular WebRTC is from the getUserMedia statistics:
getUserMedia calls on % of Chrome page loads (see here)
This is consistently around 0.05% of pageloads. A bit more than RTCPeerConnection being used to actually open a session (that setRemoteDescription graph) but there are use-cases such as taking a photo which do not require WebRTC.
Here’s what we’ve arrived at, assuming the metrics collection of chromestats reflects real use behavior. We have 0.04% of pageloads compared to 4%. This shows that a considerable percentage of RTCPeerConnections are potentially used for a purpose other than what WebRTC was designed for. That is a problem that needs to be solved.
* credits and thanks to Philipp Hancke for assisting in collecting and analyzing the chromestats metrics
What is mDNS?
Switching to a different topic before we go back to WebRTC leaks and local IP addresses.
mDNS stands for Multicast DNS. It is defined in IETF RFC 6762.
mDNS is meant to deal with having names for machines on local networks without needing to register them on DNS servers. This is especially useful when there are no DNS servers you can control – think of a home with a couple of devices who need to interact locally without going to the internet – Chromecast and network printers are some good examples. What we want is something lightweight that requires no administration to make that magic work.
And how does it work exactly? In a similar fashion to DNS itself, just without any global registration – no DNS server.
In its basic form, when a machine wants to know the IP address within the local network of a device with a given name (let’s say tsahi-laptop), it will send out an mDNS query on a known multicast IP address (the exact address and details can be found in the spec) with a request to find “tsahi-laptop.local”. There’s a separate registration mechanism, whereby devices can announce and register their mDNS names within the local network.
Since the request is sent over a multicast address, all machines within the local network receive it. The machine with that name (probably my laptop, assuming it supports mDNS and is discoverable on the local network) will respond with its IP address, doing that over multicast as well.
That means all machines in the local network hear the response and can now cache that fact – the local network IP address of the machine called tsahi-laptop.
How is mDNS used in WebRTC?
Back to that WebRTC leak and how mDNS can help us.
Why do we need local IP addresses? So that sessions that need to take place in a local network don’t need to use public IP addresses. This makes routing a lot simpler and efficient in such cases.
But we also don’t want to share these local IP addresses with the JavaScript application running in the browser. That would be considered a breach of privacy.
Which is why mDNS was suggested as a solution. It is described in a new IETF draft known as draft-ietf-rtcweb-mdns-ice-candidates-03. The authors behind it? Developers at both Apple and Google.
The reason for it? Fixing the longstanding complaint about WebRTC leaking out IP addresses. From its abstract:
WebRTC applications collect ICE candidates as part of the process of creating peer-to-peer connections. To maximize the probability of a direct peer-to-peer connection, client private IP addresses are included in this candidate collection. However, disclosure of these addresses has privacy implications. This document describes a way to share local IP addresses with other clients while preserving client privacy. This is achieved by concealing IP addresses with dynamically generated Multicast DNS (mDNS) names.
How does this work?
Assuming WebRTC needs to share a local IP address which it deduces is private, it will use an mDNS address for it instead. If there is no mDNS address for it yet, it will generate and register a random one on the local network. That random mDNS name will then be used as a replacement for the local IP address in all SDP and ICE message negotiations.
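A rough sketch of that substitution (not Chrome’s actual code – the field positions follow the standard ICE candidate syntax, and the name generator here is a simplified stand-in for the UUID-style names browsers generate):

```javascript
// Generate a random ".local" name. Browsers use a UUID here;
// a random hex token is enough for illustration.
function randomMdnsName() {
  const hex = () => Math.floor(Math.random() * 16).toString(16);
  return Array.from({ length: 32 }, hex).join('') + '.local';
}

// Swap the private IP in a host candidate for an mDNS name,
// the way the browser now obfuscates local addresses.
function obfuscateHostCandidate(candidateLine) {
  const parts = candidateLine.split(' ');
  // Field 4 is the connection address; field 7 is the candidate type.
  const privateIp = /^(10\.|192\.168\.|172\.(1[6-9]|2\d|3[01])\.)/;
  if (parts[7] === 'host' && privateIp.test(parts[4])) {
    parts[4] = randomMdnsName();
  }
  return parts.join(' ');
}

console.log(obfuscateHostCandidate(
  'candidate:1 1 udp 2122260223 192.168.1.17 51772 typ host'
));
```

srflx and relay candidates pass through untouched – only the private host address gets concealed.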
The result?
- The local IP address isn’t exposed to the JavaScript code of the application. The receiver of such an mDNS address can perform a lookup on its local network and deduce the local IP address from there only if the device is within the same local network
- A positive side effect is that now, the local IP address isn’t exposed to media, signaling and other servers either. Just the mDNS name is known to them. This reduces the level of trust needed to connect two devices via WebRTC even further
Here’s the rub though. mDNS breaks WebRTC implementations.
mDNS is supposed to be innocuous:
- It uses a top-level domain name of its own (.local) that shouldn’t be used elsewhere anyway
- mDNS is sent over multicast, on its own dedicated IP and port, so it is limited to its own closed world
- If the mDNS name (tsahi-laptop.local) is processed by a DNS server, it just won’t find it and that will be the end of it
- It doesn’t leave the world of the local network
- It is shared in places where one wants to share DNS names
With WebRTC though, mDNS names are shared instead of IP addresses. And they are sent over the public network, inside a protocol that expects to receive only IP addresses and not DNS names.
The result? Questions like this recent one on discuss-webrtc:
Weird address format in c= line from browser
I am getting an offer SDP from browser with a connection line as such:
c=IN IP4 3db1cebd-e606-4dc1-b561-e0af5b4fd327.local
This is causing trouble in a webrtc server that we have since the parser is bad (it is expecting a normal ipv4 address format)
[…]
This isn’t a singular occurrence. I’ve had multiple clients approach me with similar complaints.
What happens here, and in many other cases, is that the IP addresses that are expected to be in SDP messages are replaced with mDNS names – instead of x.x.x.x:yyyy the servers receive <random-ugly-something>.local and the parsing of that information is totally different.
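A server-side parser written to accept only dotted IPv4 in that position will fail on such a name. A tolerant parser needs to accept both forms – here’s an illustrative sketch (the function name is mine, not taken from any real server):

```javascript
// Tolerant handling of the address in a c= line: it may now be a
// dotted IPv4 address OR an mDNS name ending in ".local".
function parseConnectionAddress(cLine) {
  // c=IN IP4 <address>
  const addr = cLine.trim().split(' ')[2];
  if (/^\d{1,3}(\.\d{1,3}){3}$/.test(addr)) return { kind: 'ipv4', addr };
  if (/\.local$/.test(addr)) return { kind: 'mdns', addr };
  return { kind: 'unknown', addr };
}

console.log(parseConnectionAddress('c=IN IP4 203.0.113.7'));
// ...and the mDNS form from the question above:
console.log(parseConnectionAddress(
  'c=IN IP4 3db1cebd-e606-4dc1-b561-e0af5b4fd327.local'
));
```

A server receiving the mDNS form can either resolve it (if it sits on the same local network) or simply ignore the host candidate and rely on srflx/relay candidates instead.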
This applies to all types of media servers – the common SFU media server used for group video calls, gateways to other systems, PBX products, recording servers, etc.
Some of these have already been updated to support mDNS addresses inside ICE candidates. Others probably haven’t, like the recent one above. But more importantly, many deployments whose owners don’t want, need or care to upgrade their server software frequently are now broken as well, and will have to be upgraded.
Could Google have handled this better?
In January, Google announced this new experiment on discuss-webrtc. More importantly, it stated that:
No application code is affected by this feature, so there are no actions for developers with regard to this experiment.
Within a week, it got this in a reply:
As it stands right now, most ICE libraries will fail to parse a session description with FQDN in the candidate address and will fail to negotiate.
More importantly, current experiment does not work with anything except Chrome due to c= line population error. It would break on the basic session setup with Firefox. I would assume at least some testing should be attempted before releasing something as “experiment” to the public. I understand the purpose of this experiment, but since it was released without testing, all we got as a result are guaranteed failures whenever it is enabled.
The interesting discussion that ensued for some reason focused on how people interpret the various DNS and ICE related standards, and on whether libnice (an open source implementation of ICE) breaks or doesn’t break due to mDNS.
But it failed to encompass the much bigger issue – developers were somehow expected to write their code in a way that wouldn’t break with the introduction of mDNS in WebRTC – without even being aware that this was going to happen at some point in the future.
Ignoring that fact, Google has been running mDNS as an experiment for a few Chrome releases already. As an experiment, two things were decided:
- It runs almost “randomly” on users’ Chrome browsers, without any real control by the user or the service over whether it happens (nothing automated or obvious at least)
- It was added only when local IP addresses had to be shared and no permission for the camera or microphone was asked for (receive-only scenarios)
The bigger issue here is that many view-only WebRTC solutions are developed and deployed by people who aren’t “in the know” when it comes to WebRTC. They know the standard and may know how to implement with it, but most times they don’t roam the discuss-webrtc mailing list, and their names and faces aren’t known within the tight-knit WebRTC community. They have no voice in front of those that make such decisions.
In that same thread discussion, Google also shared the following statement:
FWIW, we are also considering to add an option to let user force this feature on regardless of getUserMedia permissions.
Mind you – that statement was a one liner inside a forum discussion thread, from a person who didn’t identify in his message with a title or the fact that he speaks for Google and is a decision maker.
Which is the reason I sat down to write this article.
mDNS is GREAT. AWESOME. Really. It is simple, elegant and gets the job done better than any other solution people could come up with. But it is a breaking change. And that is a fact that seems to be lost on Google for some reason.
By enforcing mDNS addresses on all local IP addresses (which is a very good thing to do), Chrome will undoubtedly break a lot of services out there. Most of them might be small, and not part of the small minority that makes up the billion-minutes club.
Google needs to be a lot more transparent and public about such a change. This is by no means a singular case.
Just digging into what mDNS is, how it affects WebRTC negotiation and what might break took me time. The initial messages about an mDNS experiment are just not enough to get people to do anything about it. Google did a way better job with their explanation about the migration from Plan B to Unified Plan as well as the ensuing changes in getStats().
My main worry is that this type of transparency doesn’t happen as part of a planned rollout program. It is done ad-hoc with each initiative finding its own creative solution to convey the changes to the ecosystem.
This just isn’t enough.
WebRTC is huge today. Many businesses rely on it. It should be treated as the mission critical system that developers who use it see in it.
It is time for Google to step up its game here and put the mechanisms in place for that.
What should you do as a developer?
First? Go check if mDNS breaks your app. You can enable this functionality at chrome://flags/#enable-webrtc-hide-local-ips-with-mdns
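With the flag enabled, you can inspect the SDP your application generates and check whether .local candidates appear in it. Something as simple as this sketch (illustrative, not an official tool) will do:

```javascript
// Check whether an SDP blob contains mDNS (".local") ICE candidates.
// If it does and your server-side parser expects raw IP addresses,
// the mDNS experiment is likely to break your app.
function hasMdnsCandidates(sdp) {
  return sdp
    .split(/\r?\n/)
    .filter((line) => line.startsWith('a=candidate:'))
    .some((line) => /\s\S+\.local\s/.test(line));
}
```

Feed it the `sdp` string of the offer created by your RTCPeerConnection (or an SDP captured from chrome://webrtc-internals) and see what comes back.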
In the long run? My best suggestion would be to follow messages coming out of Google in discuss-webrtc about their implementation of WebRTC. To actively read them. Read the replies and discussions that take place around them. To understand what they mean. And to engage in that conversation instead of silently reading the threads.
Test your applications on the beta and Canary releases of Chrome. Collect WebRTC behavior related metrics from your deployment to find unexpected changes there.
Apart from that? Nothing much you can do.
As for mDNS, it is a great improvement. I’ll be adding a snippet explanation about it to my WebRTC Tools course, something new that will be added next month to the WebRTC Course. Stay tuned!
The post PSA: mDNS and .local ICE candidates are coming appeared first on BlogGeek.me.
Migrating BlogGeek.me and why it is quiet here lately
Marketing automation isn’t easy.
I’ve been doing that for a few years now in BlogGeek.me, trying to figure it out as I go along. My newsletter service configuration and settings look like a large ball of spaghetti at this point, with little way for me to handle things in it. This, as well as a few more reasons, got me to switch my marketing automation provider as part of a larger project I am running.
It has taken its toll. Mainly a lot of time and energy spent on figuring things out yet again and cleaning up stuff. Along this process, I’ve enrolled in an online course and learned some more about what I can do without pissing off subscribers. Hopefully, I’ll be headed down that road a bit more in the coming months.
Anyways, a few quick notes:
- I am currently in “mid-migration”. All emails from now on (this would be the first broadcast one at that) are sent out of a different provider
- If you’re unhappy with it – unsubscribe, or just reply back and I’ll try figuring out what’s going on
- I am restructuring my WebRTC course as well as adding to it some fresh new content. More on that closer to the end of the month, once it is ready. If you’re interested about it, just ping me
- Fewer articles here during July. I am going to be on a business trip as well as a vacation. On top of that, I got two largish consulting projects with my clients (clients get prioritized over writing articles here)
- Why this post then? To test if the new newsletter provider is working well for me
See you on the other end of my infrastructure nightmare
The post Migrating BlogGeek.me and why it is quiet here lately appeared first on BlogGeek.me.
What’s the status of WebRTC in 2019?
In 2019, WebRTC is ready, but there’s still work ahead.
When I wrote that WebRTC is ready over 6 months ago it pissed a few people off.
Here’s the thing – WebRTC is ready simply because the industry deems it ready and companies are deploying products that rely on WebRTC to work for them.
Are there challenges along the way? Sure.
Do things break? Sure.
But if you are thinking of whether you should start using WebRTC and build an application on top of it or wait for the next fad to come by for your video calling service, then don’t. Use WebRTC as nothing else will do today.
Trying to understand where WebRTC is available? Download my free cheat sheet
WebRTC 1.0 – the specification
In 2015 I remember someone telling me that WebRTC 1.0 would be closed and published by year end.
I heard the same in 2016. And later in 2017.
In 2018 I ignored such promises.
2019? There is a small chance that things will be ready. Why? Because the spec is almost completed. That almost is the sticking point.
But then again, who cares?
Everyone is already using WebRTC as if it is a done deal. Because it is.
We’ve agreed on the technology (WebRTC). We’ve agreed on the larger picture and the ways things are going to look like (peer connection and how browsers implement it today). We’re left with the nitty gritty details of how to make the experience easier and uniform across browsers for developers. We will get there, but just remember – users expect it to work, and it does.
Chrome and WebRTC
Consider Chrome to be the de facto specification for WebRTC. It isn’t WebRTC 1.0 compliant. Yet. According to Statista, 69% of the desktop internet is driven by Chrome. On this website? 74% of the visitors use Chrome.
The thing about Chrome is that it is slowly getting the missing WebRTC 1.0 support, and by moving there it is breaking things up with each release. Usually because the way it works today isn’t exactly spec compliant, so things have to break – or just because the additions are delicate and the work done breaks behavior that developers relied on in the past. At times, it is because Google has no qualms when it comes to technical debt and code rewrites and when it sees a need to optimize something it usually does that (we’re now in the 3rd generation of echo canceller in WebRTC, each one was a complete rewrite of the previous one).
If you are developing anything that needs to run in the browser and use WebRTC, then Chrome is the first thing you should be developing for.
Firefox and WebRTC
Firefox is close to being spec compliant when it comes to WebRTC.
They had it easy with the recent decision to adopt Unified Plan instead of Plan B in the WebRTC specification. Where Google had to shift from Plan B to Unified Plan, Firefox had only slight modifications to make.
The problem is that Firefox is a distant second to Chrome in market share. At times, developers actively decide not to support Firefox just because they consider it a waste of time. This is doubly true for those who use Chrome for guest access and as a stepping stone to getting their users to download their Electron app instead.
Safari and WebRTC
Safari now supports WebRTC. That includes things like simulcast and both VP8 and H.264. Which is to say that most WebRTC features already work in Safari, but not all of them.
You wouldn’t find VP9, which isn’t mandatory or popular yet, but is something that is more than desirable. And then some of the more complicated scenarios, such as multiparty sessions, have more pending open issues of both functionality and interoperability than Chrome or Firefox have.
The challenge is that Safari is important to developers. Both because it is the only way to get on iOS devices and because it is the default browser for Mac, a desktop/laptop that for some reason is becoming a fad with developers (go figure).
Edge and WebRTC
Edge was once its own browser with its own technology stack, but is now becoming just another flavor of Chrome. Microsoft announced that Edge will be using Chromium as its browser engine. This has gotten Edge to work on Mac already with rumors of a possible Linux release.
Edge runs on Chromium.
Chrome runs on Chromium.
Chrome isn’t WebRTC spec compliant because Chromium isn’t WebRTC spec compliant.
So Edge isn’t spec compliant either. But it is well… the same as Chrome.
This all relates to the upcoming official release of Edge.
Microsoft IE and WebRTC
Still dream about Internet Explorer at night?
Stop it.
IE won’t be supporting WebRTC. Not now and not ever.
Use a plugin or just use Electron. Or better yet – update to a more modern browser.
Opera/Brave/whoever and WebRTC
Most of the other browsers out there, be it Opera, Brave or anything else, are just forks of Chromium or skins on top of Chromium.
For all intents and purposes, they are Chrome, offering the same spec compliance for WebRTC as Chrome does. At least if they haven’t gone and intentionally made changes to it (like disabling it in the name of privacy).
Android and WebRTC
Android has support for WebRTC.
Chrome browser that ships with Android has WebRTC support.
Other browsers shipping on Android have WebRTC support (such as Firefox).
Sometimes, a device manufacturer ends up shipping its own browser (Samsung for example). Then WebRTC compliance and availability are somewhat questionable.
The good thing is that the Webview in Android also supports WebRTC. So built-in application browsers such as the one used by Facebook or Slack also end up supporting WebRTC experiences.
And if you write your own app, you can use the Webview, a precompiled version of WebRTC for Android or compile it on your own.
iOS and WebRTC
On iOS things are slightly trickier.
Safari supports WebRTC on iOS and there are companies making commercial use of it already.
Other browsers don’t and can’t support WebRTC on iOS. That’s because the supplied iOS Webview still doesn’t support WebRTC (or disables it on purpose).
If you write your own app, you can use a precompiled version of WebRTC for iOS or compile it on your own. No Webview for you yet.
Your Next Steps?
Haven’t started with WebRTC yet? Now’s the time. I can help.
Trying to understand where WebRTC is available? Download my free cheat sheet
The post What’s the status of WebRTC in 2019? appeared first on BlogGeek.me.
WebRTC video recording may be more useful than WebRTC video calling
Video recording using WebRTC can be a lot more lucrative a business than WebRTC video calling.
There’s been an ongoing rumble around WebRTC in a lot of discussions I had about it and sometimes from what you read online – What’s the market size of WebRTC? How do you make money out of it? Who is making money out of it?
Questions that are really hard to answer. Usually because people don’t like to hear the answers to them.
Looking to understand where and how to fit WebRTC into your business? Let’s talk
The Zoom IPO
Is there money in video conferencing or video calling?
The service today is practically free, spread across a multitude of different service types:
Social
- Apple FaceTime
- Google Duo & Google Hangouts
- Facebook Messenger
- Skype
- Houseparty
- …
An unending list of social communication services that happen to have video calling in them. I’ve bunched Apple and Google in here simply because they “own” the smartphones we use today.
Business
- Google Meet
- Zoom
Here you’ll find services that are free to a certain extent. They are either time limited, feature limited, or just bundled into bigger offerings.
Zoom were probably the first to go this route with a well-featured product where the biggest limit for a free account was time – 40 minutes per session. Long enough for a lot of uses.
Consumer/Soho
There are many consumer-type services that got built using WebRTC and gained traction. The services started as free offerings, and each grew of its own accord. Jitsi Meet got acquired by Atlassian, and then 8×8 acquired it from Atlassian. Appear.in started offering paid Pro accounts and got acquired by Videonor. Talky became a showcase for SimpleWebRTC.
Others started with a free service, ending with a paid service, like Gruveo.
Show me the money
This is where things got complicated.
No one saw a way to make money out of WebRTC. Or video.
At least not until Zoom IPO’d. ~$425 million annual run rate, growing at over 100% a year. Alex Clayton has a nice breakdown of their filing:
The moment this happened, both BlueJeans and LifeSize decided to publish their numbers – BlueJeans reached $100m ARR while Lifesize reached $100m in bookings. Their message? Zoom isn’t alone.
For the record, and to make this clear:
- Zoom doesn’t use WebRTC
- BlueJeans and Lifesize use WebRTC though both existed before WebRTC
The question here is about the video conferencing service, and how you make money out of it. You can, if you’re big enough, though it will be hard to join the game now and try to outdo Zoom in video conferencing by using their own playbook.
The challenge is probably that everyone is looking under the lamp post.
You’ve got hundreds of developers, startups, enterprises and whatnot vying to disrupt the video conferencing market with WebRTC. The challenge is that with so many players coming in with the same technology, only a few will be left standing.
Differentiation is tough in this space. Why would someone pick up your service and not another? How will they find you? Why should they pay?
Which brings me to the reason I started writing this in the first place –
Not video calling – WebRTC video recording
I went to AppSumo this week, deciding to purchase another deal on their site. Every once in a while I find some great deals and new services there to use for my business. The latest featured offer on that site? Dubb (now sold out).
Dubb
This is a service that runs as a Chrome extension, enabling its users to record a short video and share it with customers over SMS, email or other networks.
I don’t know if Dubb supports WebRTC or not, but –
- It works in the browser with no need to install anything (besides a Chrome extension)
- It records video and voice right there inside the browser
In all likelihood, this is using WebRTC’s MediaRecorder to record locally and upload the result to the Dubb cloud service.
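For the curious, here’s a minimal sketch of what such in-browser recording could look like with getUserMedia and MediaRecorder. The function name and the upload endpoint are hypothetical – this isn’t Dubb’s actual code:

```javascript
// Hypothetical sketch: record a short clip in the browser with MediaRecorder.
// Browser-only APIs; the /upload endpoint is an assumption for illustration.
async function recordClip(durationMs) {
  // Ask the browser for camera and microphone access
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });

  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  const chunks = [];
  recorder.ondataavailable = (e) => chunks.push(e.data); // collect recorded data locally

  const done = new Promise((resolve) => {
    recorder.onstop = () => resolve(new Blob(chunks, { type: 'video/webm' }));
  });

  recorder.start();
  setTimeout(() => recorder.stop(), durationMs);

  // The resulting Blob can then be uploaded to the service's cloud, e.g.:
  // fetch('/upload', { method: 'POST', body: await done });
  return done;
}
```

Everything happens on the client side – the server only ever sees the finished file, which is part of what makes this kind of product so lightweight to build.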
Dubb is positioned as a sales tool to build rapport – not as a video conferencing or a communication tool. There’s no “real time”, “collaboration” or “conferencing” here.
Seeing it got me thinking of another tool I bumped into recently – Loom
Loom
I started a coaching program a few months back. My WebRTC Course showed success in the last 3 years of its existence and I wanted to grow it in size – have more people enroll and learn WebRTC in the process. The coaching program is interesting. I am learning a ton in it, some of it already found its way into the course and a lot more will be coming in the next course launch in a few months time.
Anyways, when I ask questions via email, I usually get back video recordings of my coach reviewing the question and answering it, thinking through the issues I raise. I can see him and his screen, which is great. The link and tool he uses? Loom.
So I checked it out:
Similarly to Dubb, this one is about recording videos from the browser, with no installation needed. In Loom’s case, they are even trying to showcase the various uses of their tool.
WebRTC isn’t only about calling
WebRTC isn’t only about calling.
It has other capabilities. There’s the data channel, there’s the simple access to the camera and mic and there’s the ability to record media on the client side to name a few.
That client-side recording enables these services – Dubb and Loom. There’s also Ziggeo and Pipe for those looking for a managed API for it.
I am wondering: while everyone is looking closely at video calling, trying to figure out how to make money out of that space, does the real utility of WebRTC lie elsewhere altogether?
Looking to understand where and how to fit WebRTC into your business? Let’s talk
The post WebRTC video recording may be more useful than WebRTC video calling appeared first on BlogGeek.me.
WebRTC vs WebSockets
WebRTC vs WebSockets: They. Are. Not. The. Same.
Sometimes, there are things that seem obvious once you’re “in the know” but just aren’t when you’re new to the topic. It seems that the difference between WebRTC and WebSockets is one such thing. Philipp Hancke pinged me the other day, asking if I have an article about WebRTC vs WebSockets, and I didn’t – it made no sense to me. That is, until I asked Google about it:
It seems like Google believes the most pressing (and popular) comparison people search for is between WebRTC and WebSockets. I should probably also write about the other comparisons there, but for now, let’s focus on that first one.
Need to learn WebRTC? Check out my online course – the first module is free.
What are WebSockets?
WebSockets are a bidirectional mechanism for browser communication.
There are two types of transport channels for communication in browsers: HTTP and WebSockets.
HTTP is what gets used to fetch web pages, images, stylesheets and JavaScript files, as well as other resources. In essence, HTTP is a client-server protocol, where the browser is the client and the web server is the server:
My WebRTC course covers this in detail, but suffice to say here that with HTTP, your browser connects to a web server and requests *something* of it. The server then sends a response to that request and that’s the end of it.
The challenge starts when you want to send an unsolicited message from the server to the client. You can’t do it if you don’t send a request from the web browser to the web server, and while you can use different schemes such as XHR and SSE to do that, they end up feeling like hacks or workarounds more than solutions.
Enter WebSockets, which are meant to solve exactly that – the web browser connects to the web server by establishing a WebSocket connection. Over that connection, both the browser and the server can send each other unsolicited messages. Not only that, they can send binary (gasp!) messages – something impossible in HTTP without yet another hack (known as base64).
Because WebSockets are built for purpose, unlike the XHR/SSE hacks, they perform better both in terms of speed and the resources they eat up on both browsers and servers.
WebSockets are rather simple to use as a web developer – you’ve got a straightforward WebSocket API for them, which is nicely illustrated by HPBN:
var ws = new WebSocket('wss://example.com/socket');

ws.onerror = function (error) { ... }
ws.onclose = function () { ... }

ws.onopen = function () {
  ws.send("Connection established. Hello server!");
}

ws.onmessage = function (msg) {
  if (msg.data instanceof Blob) {
    processBlob(msg.data);
  } else {
    processText(msg.data);
  }
}

You’ve got calls for send and close and callbacks for onopen, onerror, onclose and onmessage. Of course there’s more to it than that, but this holds the essence of WebSockets.
It leads us to what we usually use WebSockets for, and I’d like to explain it this time not by actual scenarios and use cases but rather by the keywords I’ve seen associated with WebSockets:
- Bi-directional, full-duplex
- Signaling
- Real-time data transfer
- Low latency
- Interactive
- High performance
- Chat, two way conversation
Funnily, a lot of this sometimes gets associated with WebRTC as well, which might be the cause of the comparison that is made between the two.
WebRTC, in the context of WebSockets
There are numerous articles here about WebRTC, including a What is WebRTC one.
In the context of WebRTC vs WebSockets, WebRTC enables sending arbitrary data across browsers without the need to relay that data through a server (most of the time). That data can be voice, video or just data.
Here’s where things get interesting –
WebRTC has no signaling channel
When starting a WebRTC session, you need to negotiate the capabilities for the session and the connection itself. That is done out of the scope of WebRTC, by whatever means you deem fit. And in a browser, this can be either HTTP or… WebSocket.
So from this point of view, WebSocket isn’t a replacement to WebRTC but rather complementary – as an enabler.
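To make that enabler role concrete, here’s a rough sketch of the answering side of a WebRTC negotiation carried over a WebSocket. The JSON message shape ({ type, sdp/candidate }) is this example’s own convention – WebRTC doesn’t mandate any:

```javascript
// Sketch: a WebSocket as the signaling channel that bootstraps a WebRTC session.
// The message format used here is an assumption, not part of any standard.
function setupSignaling(url) {
  const signaling = new WebSocket(url);
  const pc = new RTCPeerConnection();

  // Trickle our ICE candidates to the peer through the signaling server
  pc.onicecandidate = ({ candidate }) => {
    if (candidate) signaling.send(JSON.stringify({ type: 'candidate', candidate }));
  };

  signaling.onmessage = async ({ data }) => {
    const msg = JSON.parse(data);
    if (msg.type === 'offer') {
      // Apply the remote offer and answer it over the same WebSocket
      await pc.setRemoteDescription(msg);
      await pc.setLocalDescription(await pc.createAnswer());
      signaling.send(JSON.stringify(pc.localDescription));
    } else if (msg.type === 'candidate') {
      await pc.addIceCandidate(msg.candidate); // ICE candidate from the peer
    }
  };

  return { signaling, pc };
}
```

Once the negotiation completes, the media (or data channel messages) flows peer to peer – the WebSocket has done its job.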
You can send media over a WebSocket
Sort of.
I’ll start with an example. If you want to connect to a cloud-based speech-to-text API and you happen to use IBM Watson, then you can use its WebSocket interface. The first sentence in the first paragraph of the documentation?
The WebSocket interface of the Speech to Text service is the most natural way for a client to interact with the service.
So you stream the speech (voice) over a WebSocket to connect it to the cloud API service.
That said, it is highly unlikely to be used for anything else.
In most cases, real time media will get sent over WebRTC or other protocols such as RTSP, RTMP, HLS, etc.
WebRTC’s data channel
WebRTC has a data channel. It has many different uses. In some cases, it is used in place of a WebSocket connection:
The illustration above shows how a message would pass from one browser to another over a WebSocket versus doing the same over a WebRTC data channel. Each has its advantages and challenges.
Funnily, the data channel in WebRTC shares a similar set of APIs to the WebSocket ones:
const peerConnection = new RTCPeerConnection();
const dataChannel = peerConnection.createDataChannel("myLabel", dataChannelOptions);

dataChannel.onerror = (error) => { … };
dataChannel.onclose = () => { … };

dataChannel.onopen = () => {
  dataChannel.send("Hello World!");
};

dataChannel.onmessage = (event) => { … };

Again, we’ve got calls for send and close and callbacks for onopen, onerror, onclose and onmessage.
This makes an awful lot of sense, but can be a bit confusing.
There’s one tiny detail – to get the data channel working, you first need to negotiate the connection. And that you do either with HTTP or with a WebSocket.
When should you use WebRTC instead of a WebSocket?
Almost never. That’s the truth.
If you’re deliberating between the two and you don’t know a lot about WebRTC, then you’re probably in need of WebSockets, or will be better off using them.
I’d think of data channels either when there are things you want to pass directly across browsers without any server intervention in the message itself (and these use cases are quite scarce), or you are in need of a low latency messaging solution across browsers where a relay via a WebSocket will be too time consuming.
Need to learn WebRTC? Check out my online course – the first module is free.
The post WebRTC vs WebSockets appeared first on BlogGeek.me.
WebRTC simulcast and ABR – two sides of the same coin
WebRTC simulcast and ABR are all about offering choice to “viewers”.
I’ve been dealing recently with more clients who are looking to create live broadcast experiences. Solutions where one or more users have to broadcast their streams from a single session to a large audience. Large is a somewhat loose target, stretching anywhere from 100 to 1,000,000 viewers. And yes, most of these clients want viewers to have instantaneous access to the stream(s) – a lag of 1-2 seconds at most, as opposed to the 10 or more seconds of latency you get from HLS.
Simulcast, ABR – need a quick reference to understand their similarities and differences? Download the free cheatsheet:
Compare simulcast to ABR
What I started seeing more and more recently are solutions that make use of ABR. What’s ABR? It is just like simulcast, but… different.
What’s Simulcast?
Simulcast is a mechanism in WebRTC by which a device/client/user will be sending a video stream that contains multiple bitrates in it. I explained it a bit in my WebRTC Multiparty Architectures article last month.
With simulcast, a WebRTC client will generate these multiple bitrates, where each offers a different video quality – the higher the bitrate, the higher the quality.
These video streams are then received by the SFU, and the SFU can pick and choose which stream to send to which participant/viewer. This decision is usually made based on the available bandwidth, but it can (and should) make use of a lot of other factors as well – display size and video layout on the viewer device, CPU utilization of the viewer, etc.
The great thing about simulcast? The SFU doesn’t work too hard. It just selects what to send where.
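That selection step can be sketched as plain logic. The layer names and bitrates below are made-up numbers for illustration – real SFUs factor in much more, as noted above:

```javascript
// Hypothetical simulcast layers a client might be sending (bitrates are assumptions)
const layers = [
  { rid: 'low',  bitrate: 150_000 },
  { rid: 'mid',  bitrate: 500_000 },
  { rid: 'high', bitrate: 1_500_000 },
];

// Pick the highest layer that fits a viewer's available bandwidth.
// Below the lowest layer we still forward it rather than send nothing.
function pickLayer(availableBps) {
  const fitting = layers.filter((l) => l.bitrate <= availableBps);
  return fitting.length ? fitting[fitting.length - 1] : layers[0];
}
```

The point is that the SFU never touches the media itself – per viewer, it just decides which of the client-generated streams to forward.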
What’s ABR?
ABR stands for Adaptive Bitrate Streaming. Don’t ask me why R and not S in the acronym – probably because they didn’t want to mix this up with car brakes. Anyway, ABR comes from streaming, long before WebRTC was introduced to our lives.
With streaming, you’ve got a user watching a recorded (or “live”) video online. The server then streams that media towards the user. What happens if the available bitrate from the server to the user is low? Buffering.
Streaming technology uses TCP, which in turn uses retransmissions. It isn’t designed for real-time, and well… we want to SEE the content and would rather wait a bit than not see it at all.
Today, with 1080p and 4K resolutions, streaming at high quality requires lots and lots of bandwidth. If the network isn’t capable, would users rather wait and be buffered or would it be better to just lower the quality?
Most prefer lowering the quality.
But how do you do that with “static” content? A pre-recorded video file is what it is.
You use ABR:
With ABR, you segment bandwidth into ranges. Each range will be receiving a different media stream. Each such stream has a different bitrate.
Say you have a media stream of 300kbps – you define the segment bandwidth for it as 300-500kbps. Why? Because from 500kbps there’s another media stream available.
These media streams all contain the same content, just in different bitrates, denoting different quality levels. What you try doing is sending the highest quality range to each viewer without getting into that dreaded buffering state. Since the available bitrate is dynamic in nature (as the illustration above shows), you can end up switching across media streams based on the bitrate available to the viewer at any given point in time. That’s why they call it adaptive.
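Using the 300kbps example above, the adaptive selection could be sketched like this (the 500kbps and 1,200kbps rungs are assumptions added to complete the ladder):

```javascript
// ABR ladder: each stream serves viewers whose bandwidth falls into its range
const ladder = [
  { name: '300kbps',  bitrate: 300_000 },   // serves the 300-500kbps range
  { name: '500kbps',  bitrate: 500_000 },   // serves 500-1,200kbps
  { name: '1200kbps', bitrate: 1_200_000 }, // serves 1,200kbps and up
];

// Re-run this whenever the measured bandwidth changes - that's the "adaptive" part
function selectStream(measuredBps) {
  let chosen = ladder[0]; // below 300kbps we still serve the lowest stream
  for (const rung of ladder) {
    if (measuredBps >= rung.bitrate) chosen = rung;
  }
  return chosen;
}
```

The selection logic itself looks almost identical to simulcast – the difference is who produced the alternate streams in the first place.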
And it sounds rather similar to simulcast… just on the server side, as ABR is something a server generates – the original media gets to a server, which creates multiple output streams from it in different bitrates, to use when needed.
The ABR challenge for WebRTC media servers
Recently, I’ve seen more discussions and solutions looking at using ABR and similar techniques with WebRTC. Mainly to scale a session beyond 10k viewers and to support low latency broadcasting in CDNs.
Why these two areas?
- Because beyond 10k viewers, simulcast isn’t enough anymore. Simulcast today supports up to 3 media streams and the variety you get with 10k viewers is higher than that. There are a few other reasons as well, but that’s for another time
- Because CDNs and video streaming have been comfortable with ABR for years now, so them shifting towards WebRTC or low latency means they are looking for much the same technologies and mechanisms they already know
But here’s the problem.
We’ve been doing SFUs with WebRTC for most of the time that WebRTC existed. Around 7-8 years. We’re all quite comfortable now with the concept of paying on bandwidth and not eating too much CPU – which is the performance profile of an SFU.
Simulcast fits right into that philosophy – the one creating the alternate streams is the client and not the SFU – it is sending more media towards the SFU, which now has more options. The client pays the price of higher bitrates and higher CPU use.
ABR places that burden on the server, which needs to generate the additional alternate streams on its own, and it needs to do so in real time – there’s no offline pre-processing activity for generating these streams from a pre-existing media file as there is with CDNs. This means that SFUs now need to think about CPU loads, muck around with transcoding, experiment with GPU acceleration – the works. Things they haven’t done so far.
Is this in our future? Sure it is. For some, it is already their present.
Simulcast, ABR – need a quick reference to understand their similarities and differences? Download the free cheatsheet:
Compare simulcast to ABR
What’s next?
WebRTC is growing and evolving. The ecosystem around it is becoming much richer as time goes by. Today, you can find different media servers of different types and characteristics, and the solutions available are quite different from one another.
If you are planning on developing your own application using a media server – make sure you pick a media server that fits your use case.
The post WebRTC simulcast and ABR – two sides of the same coin appeared first on BlogGeek.me.