Get trained to be your company’s WebRTC guy
Demand for WebRTC developers is stronger than supply.
My inbox is filled with requests for experienced WebRTC developers on a daily basis. They range from entrepreneurs looking for a technical partner to managers searching for outsourcing vendors to help them out. My only challenge here is that developers and testers who know a thing or two about WebRTC are hard to find. Developers who are aware of the media stack in WebRTC, and who haven't just dabbled with a github "hello world" demo – these are truly rare.
This is why I created my WebRTC course almost 2 years ago. The idea was to try and share my knowledge and experience around VoIP, media processing and of course WebRTC, with people who need it. This WebRTC training has been a pleasant success, with over 200 people who took it already. And now it is time for the 4th round of office hours for this course.
Who is this WebRTC training for?
This WebRTC course is for anyone who uses WebRTC in their daily work, directly or indirectly. Developers, testers, software architects and product managers will be the ones who benefit from it the most.
It has been designed to give you the information necessary from the ground up.
If you are clueless about VoIP and networking, then this course will guide you through the steps needed to get to WebRTC: explaining what TCP and UDP are, how HTTP and WebSockets fit on top of them, and then moving on to the acronyms used by WebRTC (SRTP, STUN, TURN and many others).
If you have VoIP knowledge and experience, then this course will cover the missing parts – where WebRTC fits into your world, and what to pay special attention to coming from a VoIP background (WebRTC brings with it a different mindset to the development process).
What I didn’t want to do, is have a course that is so focused on the specification that: (1) it becomes irrelevant the moment the next Chrome browser is released; (2) it doesn’t explain the ecosystem around WebRTC or give you design patterns of common use cases. Which is why I baked into the course a lot of materials around higher level media processing, the WebRTC ecosystem and common architectures in WebRTC.
TL;DR – if you follow this blog and find it useful, then this course is for you.
Why take it?
The question should be why not?
There are so many mistakes and bad decisions I see companies making with WebRTC: from how to model their media routes, to where to place (and how to configure) their TURN servers, through how to design for scale, to which open source frameworks to pick. Such mistakes end up a lot more expensive than any online course would ever be.
In April, next month, I will be starting the next round of office hours.
While the course is pre-recorded and available online, I conduct office hours for a span of 3-4 months twice a year. In these live office hours I go through parts of the course, share new content and answer any questions.
What does it include?
The course includes:
- 40+ lessons split into 7 different modules with an additional bonus module
- 15 hours of video content, along with additional links for extra reading material
- Several e-books available only as part of the course, like how the Jitsi team scales Jitsi Meet, and what the sought-after characteristics of WebRTC developers are
- A private online forum
- The office hours
In the past two months I've been working on refreshing some of the content, getting it up to date with recent developments – we've seen Edge and Safari introduce WebRTC during that time, for example. These updated lessons will be added to the course before the official launch.
When can I start?
Whenever you want. In April, I will be officially launching the office hours for this course round. At that point in time, the updated lessons will be part of the course.
What's more, there will be a new lesson added – this one about WebRTC 1.0. Philipp Hancke was kind enough to host this lesson with me as a live webinar (free to attend live), which will then become an integral lesson in the course.
If you are interested in joining this lesson live:
What if I am not ready?
You can always take it later on, but I won't be able to guarantee pricing or availability of the office hours at that point in time.
If you plan on doing anything with WebRTC in the next 6 months, you should probably enroll today.
And by the way – if you need to come as a team to up the knowledge and experience in WebRTC in your company, then there are corporate plans for the course as well.
If you are serious about learning WebRTC, then check out my online WebRTC training:
How WebRTC Statistics and Performance Monitoring Changed VoIP Monitoring
Monitoring focus is shifting from server-side to client-side in WebRTC statistics collection.
WebRTC happens to decentralize everything when it comes to VoIP. We're on a journey here to shift the weight from the backend to the edge devices. While the technology in WebRTC isn't any different from most other VoIP solutions, the way we end up using it and architecting our services around it is vastly different.
One of the prime examples here is how we shifted focus for group calling from an MCU mixing model to an SFU routing model. Suddenly, almost overnight, the notion of deploying an MCU started to seem ridiculous. And believe me – I should know – I worked at a company where 60%+ came from MCUs.
The shift towards SFU means we're leaning more on the capabilities and performance of the edge device, giving it more power in the interaction when it comes to how to lay out the display, instead of doing all the heavy lifting in the backend using an MCU. The next step here will be to build mesh networks, though I can't see that future materializing any time soon.
VoIP != WebRTC. Maybe not from a direct technical standpoint, but definitely in how we end up using it. If you need to learn more about WebRTC, then my WebRTC training is exactly what you need.
What I wanted to mention here is something else that is happening, playing towards exactly the same trend – we are moving the collection of VoIP performance statistics (or more accurately, WebRTC statistics) from the backend to the edge – we now prefer doing it directly from the browser/device.
VoIP Statistics Collection and Monitoring
If you are not familiar with VoIP statistics collection and monitoring, then here's a quick explainer for you:
VoIP is built on the notion of interoperability. Developers build their products and then test them against the spec and in interoperability events. Then those deploying them integrate, install and run a service. Sometimes this ends up using a single vendor, but more often than not, multiple vendor products run in the same deployment.
There is no real specification or standard for how monitoring needs to happen or what kind of statistics can, should or will be collected. There are a few means of collecting that data, and one of the most common approaches is employing HEP/EEP. As the specification states:
The Extensible Encapsulation protocol (“EEP”) provides a method to duplicate an IP datagram to a collector by encapsulating the original datagram and its relative header properties (as payload, in form of concatenated chunks) within a new IP datagram transmitted over UDP/TCP/SCTP connections for remote collection. Encapsulation allows for the original content to be transmitted without altering the original IP datagram and header contents and provides flexible allocation of additional chunks containing additional arbitrary data. The method is NOT designed or intended for “tunneling” of IP datagrams over network segments, and best serves as vector for passive duplication of packets intended for remote or centralized collection and long term storage and analysis.
Translating this to plain English: media packets are duplicated for the purpose of sending them off to be analyzed via a monitoring service.
The duplication of the packets happens in the backend, through the different media servers that can be found in a VoIP network. Here’s how it is depicted on HOMER/SIPCAPTURE’s website:
HOMER collects its data directly from the servers – OpenSIPS, FreeSWITCH, Asterisk, Kamailio – there’s no user devices here – just backend servers.
Other systems rely on the switches, routers and network devices that again reside in the backend infrastructure. Since in VoIP production networks, we almost always route the media through the backend servers, the assumption is that it is easier to collect it here where we have more control than from the devices.
This works great, but it isn't really needed or helpful with WebRTC.
WebRTC Statistics Collection and Monitoring
With WebRTC, there are only a handful of browsers (4 to be exact), and they all adhere to the same API (that would be WebRTC). And they all have that thing called getStats() implemented in them. It gets you the same information you find in chrome://webrtc-internals.
Many deployments end up running peer-to-peer, having the media traverse directly through the internet and not through the backend of the service itself. Google Hangouts decided to take that route two years ago. Jitsi added this capability under the name Jitsi P2P4121. How do these services control and understand the quality of their users?
If you look at the media servers out there, most of them are only a few years old. WebRTC itself is just 6 years old now. So everyone's focused on features and stability right now. Quality and monitoring are not in their focus area just yet.
Last, but not least, WebRTC is encrypted. Always. And everywhere. So sniffing packets and deducing quality from them isn’t that easy or accurate any longer.
This led to the focus of WebRTC applications in gathering WebRTC statistics from the browsers and devices directly, and not trying to get that information from the media servers.
The end result? Open source projects such as rtcstats and commercial services such as callstats.io. At the heart of these, WebRTC statistics get collected using the getStats() API at an interval of one or more seconds, and sent over to a monitoring server, where they are collected, stored, aggregated and analyzed. We use a similar mechanism at testRTC to collect, analyze and visualize the results of our own probes.
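To make this concrete, here's a minimal sketch of what such client-side collection can look like. The collector URL, the one-second interval and the filtering are illustrative assumptions only – rtcstats and callstats.io each have their own formats and transports:

```javascript
// Minimal client-side stats collection sketch. Assumes `pc` is an existing
// RTCPeerConnection and that you run your own (hypothetical) collector endpoint.
async function sampleStats(pc) {
  const report = await pc.getStats();   // promise-based, spec-compliant getStats()
  const samples = [];
  report.forEach(stat => {
    // Keep only the RTP stream stats for this sketch; a real service grabs much more
    if (stat.type === 'inbound-rtp' || stat.type === 'outbound-rtp') {
      samples.push(stat);
    }
  });
  // Ship the sample off to the monitoring backend (placeholder URL)
  navigator.sendBeacon('https://stats.example.com/collect', JSON.stringify(samples));
}

// Poll once a second - roughly what most monitoring services do
setInterval(() => sampleStats(pc), 1000);
```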
What does that give us?
- The most accurate indication of performance for the end user – since the statistics are collected directly on the user’s device, there’s no loss of information from backend collection
- Easy access to the information – there’s a uniform means of data collection here taking place. One you can also implement inside native mobile and desktop apps that use WebRTC
- Increased reliance on the edge, a trend we see everywhere with WebRTC anyway
WebRTC changes a lot of things when it comes to how we think about and architect VoIP networks. How and why this affects statistics and monitoring is something I haven't seen discussed much, so I wanted to share it here.
The reason for that is threefold:
- Someone asked me a similar question on my contact page in the last couple of days, so it made sense to write a longform answer as well
- We’re contemplating at testRTC offering a passive monitoring product to use “on premise”. If you want to collect, store and analyze your own WebRTC statistics without giving it to any third party cloud service, then ping us at testRTC
- My online WebRTC training is getting a refresher and a new round of office hours. This all starts in April. Time to enroll if you want to educate yourself on WebRTC
Twilio Flex = Twilio Flexing its Flexibility (or the programmable contact centers)
Twilio Flex is a peek into the future of enterprise software.
This week, Twilio announced a new product called Flex. The name and the broad strokes about what Flex is found their way to TechCrunch some two weeks ago. I wanted to share my thoughts about Twilio Flex.
A few notes before I start
- Twilio isn't paying me for writing this
- They are a customer in other areas, but this one is all me. I think Flex (as well as Studio, Engagement Cloud, Functions, etc.) are interesting products coming from Twilio, and they are worth a long form analysis and review
- Articles on BlogGeek.me are never paid for. Neither are guest posts or interviews. If something interests me, I’ll write about it
- The information here is based mainly on a briefing I received about Flex and what I found since then on other sites (and on Twilio’s website)
- Flex is a departure from many things Twilio has been doing, making it an interesting initiative to analyze
Twilio Flex is CCaaS (Contact Center as a Service). It isn't the first one. Twilio is touting it as a Programmable Contact Center, in line with how they refer to all of their products.
Here's Jeff Lawson's keynote from Enterprise Connect. As usual, Jeff's keynotes are worth the time and attention:
Where Twilio tries to differentiate Flex from existing solutions is by making it a fully functional contact center solution that is Flexible enough to customize and modify. It has APIs, but day-to-day users won't see them, and a lot of the customizations needed don't require digging deep into the API layer either. That's the intent, at least (I haven't had the chance to see the integration and API layers of Flex yet).
Twilio highlights 5 main benefits with Flex:
- Unlimited customization – through the lower layers of Twilio’s product portfolio, along with a new addition to it, the Flex UI (not a lot/enough was explained about it thus far)
- Instant omnichannel – support for multiple communication channels. More on this later
- Contextual intelligence – Twilio's ML/AI roadmap lies here
- Trusted scale – due to its use of the Twilio infrastructure
- 2 million developers – that’s the number of Twilio registered developers
Flex fits well into one of Twilio's largest market segments – the contact center. And there, Twilio is aiming for contact centers with 1,000+ seats. The big boyz.
As it was working to move up the food chain, offering ever larger components, migrating away from developers towards end users in the B2B space and in contact centers made sense.
Flex and the Twilio Portfolio
If I had to map the road Twilio is taking with its portfolio, it would end up being something like this (I've removed a lot of the products for simplicity):
Transactional: It started with SMS and Voice, adding VoIP services and later on expanding horizontally to other components and building blocks such as IP Messaging and others. In this layer, and to some extent in Omnichannel, Twilio’s focus is in a horizontal expansion towards “Best of Suite” offering.
Omnichannel: In 2017, Twilio added the Twilio Engagement Cloud. It placed a few existing products from its portfolio in that layer and added Notify and Proxy to them. They stated that these are “Declarative APIs” talking about general intent while including logic of their own. At the end of the day, many of the products/APIs in this layer are Omnichannel – they work across channels using the one available/preferred/whatever for the task at hand.
Visual: This is where the story became really interesting. Twilio added Studio to its portfolio. It went up the food chain again, this time, with a visual IDE and a message that Twilio is no longer a company that serves only developers, but one that can be used by others within the organization.
Programmable Enterprise Software: This is where Flex comes in, going up the food chain again. This time, offering a solution that doesn’t interact with the end users only as a consequence (a phone rings), but rather has a new set of users – people who aren’t developers or planners who sit in front of the tool every day and use it. The contact center agents and personnel.
Flex was defined to me in the domain of “Programmable Applications”. Twilio, in a way, trying to do two things with this definition:
- Programmable means it isn’t diverging from its roots completely, just taking the obvious next step in its evolution. All of its core products are Programmable X (X being SMS, Voice, Video, …)
- It allows it to position Flex not as another contact center, but rather as something new that is different
To me it is about the future of enterprise software and how to make it programmable and flexible in ways that are still impossible today. The closest to that we’ve got is probably having so many vendors integrate with Zapier.
I am sold on that kind of a future, but I am not sure others will be.
Flex Channels Proposition
Flex leans on a lot of other products in Twilio's portfolio. One of its core values lies in omnichannel, and in the fact that Twilio is already investing in a programmable layer that handles that (the Engagement Cloud). The proposition here is that whatever Twilio adds as a channel for developers gets almost automatically added to Flex for its contact center customers.
Out the door, Flex comes with support for Voice, SMS, Chat, Video, Email, Fax, Twitter DM, Google RCS, Facebook Messenger and LINE. It also includes Screen Sharing and Co-Browsing as additional capabilities within the interactions. Developers can add additional channels to customize their contact center as well.
The list of channels is impressive, but somehow Apple Business Chat is missing in that list. Apple’s launch partners in this case were contact center vendors (LivePerson, Nuance, Genesys and Salesforce). Twilio, which is still recognized solely as a CPaaS vendor didn’t make the cut. I am sure Twilio tried becoming a partner, so this is more likely a decision made by Apple. I am also sure that once Apple opens up Business Chat to more developers, Twilio will be adding support to it.
The biggest promise here? Twilio is already committed to omnichannel in its products, and Flex will benefit from that commitment, as will Flex's customers.
Think you know how WebRTC fits in a contact center? Check out The Complete WebRTC Contact Center Uses Swipefile
Get the swipefile
Machine Learning and Artificial Intelligence in Flex
A year or two ago, ML and AI in CPaaS were science fiction. Twilio, as well as its competitors, dealt in the real time – in transactional and transient communications. If any machine learning work was taking place, it was in the operational layers – in an effort to optimize the cost and deliverability of its service to its customers.
Last year, Twilio launched Understand, a layer built on top of Google's Natural Language Processing (NLP) capabilities. Understand is where Twilio started looking into ML and AI in the context of actual services for its customers. It looks at the problem domain of its customers (mainly contact centers) and tries to offer higher level APIs that are easier to use and are targeted at NLU (Natural Language Understanding). This then gets focused on the specific domain of the customer's needs, and you get something that is usable today (as opposed to building a general purpose AI such as Siri, Alexa or Google Assistant).
The result in Understand is a way to simplify the development processes and requirements for Twilio’s customers when it comes to NLU.
That also got wrapped into Flex, at least on slides.
My feelings? The AI story of Flex is built out of two parts:
- Collecting all the existing ML/AI/intelligent related capabilities of Twilio and wrapping them inside Flex. This is done through internal APIs as well as via partners
- Having a roadmap vision / story of what AI means in Flex moving forward
AI being the holy grail that it is, you can’t ignore it when launching a new service these days.
Flex Pricing is Key
Pricing for Flex hasn't been announced, but one thing was made clear – it will be based on a per-seat price and not usage based like other Twilio products.
This is where things get somewhat challenging for Twilio, and here’s why:
- Twilio has been comfortable so far with a usage based model. Switching to a per-seat model will change how it calculates its revenue and margins
- By opting for per seat pricing, Twilio falls into the contact center industry “comfort zone” – the model is known and accepted already
- But this also makes comparing Twilio Flex pricing to other contact centers rather “easy”. It means I can now compare apples to apples when selecting between Flex and any other vendor
- We don't have price points yet, but if the pricing is based on the industry average or accepted standard, then many analysts and experts will end up saying that there's no disruption or anything new in Twilio Flex. For the pundits, Flex may seem like an ordinary contact center, and with that mindset, without price disruption there can be no disruption
- If the price points are too high, then Twilio will be going after its own contact center customers, who will see this as direct competition. Such a move can signal to others that Twilio is willing to go into their turf as well, and it will call into question the potential and attractiveness of joining the Flex marketplace
- If the price points are lower, then where will the margins be for Twilio?
My guess is that Twilio is still looking for price validation and it is doing so this week at Enterprise Connect and planning to continue doing so in the coming weeks until it is ready to announce the price points publicly.
Who is Twilio Flex for?
This is the main question, and one I am not sure I have the answer to.
Twilio is saying the target audience is contact centers with 1,000+ seats. It makes sense to go for the larger contact centers at a time when the transition towards the cloud and the digital transformation of contact centers are accelerating.
But would I be using it in my business or go through a third party?
Should a Twilio customer that built a contact center on its own on top of Twilio migrate to Flex?
Should a Twilio customer that built a contact center for others to use on top of Twilio see Flex as a threat or as an opportunity to improve its own contact center offering?
Twilio stated that 89% of contact centers today are still deployed on premise, and that the market is large enough. This statement was meant to answer two questions:
- The market is big enough for both its existing customers and for Flex, so it isn’t competing directly with its customers (I guess its customers will have to decide if that’s true for them or not)
- The market is big for Twilio to grow in. Twilio is relying on that to keep growing
Twilio was already trending upwards when word about Flex was leaked by TechCrunch on Feb 17, and it has kept increasing since:
source: Google
Whether that is related to Flex or not, I can't say. To me, going after contact centers as an adjacent market and eating up more of the pie there is a bold move. If it succeeds, Twilio will be much bigger than it is today.
The Unknowns
There are things that are still unknown to me here. They are technical ones, but important for my own perspective and analysis. They relate to what wasn't directly in the briefing or the materials I've seen since the official announcement.
Here are a few things I am really interested in:
- What are the exact integration points for Flex?
- How are developers expected to integrate with it?
- Where do you use Twilio APIs? Where will you be making use of Twilio Studio? Where do you write a Twilio Function? How about Twilio Understand?
- Flex UI is brand new. How does it fare as a standalone product enabler? What can developers do with it?
- What will it mean to integrate Flex with a CRM? Does it make more sense to integrate the CRM into the Flex UI or does it make more sense to integrate Flex into the CRM UI?
- What parts of “contextual intelligence” really exist in Flex today? How does it compare to existing market offerings?
- What do contact center vendors using Twilio think about Flex? How will they react to it?
Maybe.
Here’s one way to map the communications landscape:
And here’s another:
What’s your worldview here?
WebRTC 1.0 – What on earth is it anyway? (register to the webinar)
TL;DR – register to this webinar about WebRTC 1.0
As I am prepping for another launch of my Advanced WebRTC Architecture Course, I went through the content to make sure it is up to date. This is by far the hardest thing about a course on something like WebRTC – what was right for Chrome 63 might not be correct anymore for Chrome 64. Or is it 65 now?
I ended up spending time updating and refreshing some of the lessons with new material, but I was left with one area where the course is weak. And that's WebRTC 1.0 information.
The problem there is that while I can tell some of the story, I definitely can't tell it at the level I wanted. That got me to partner again with Philipp Hancke, whom I love working with on lots of mini-projects. I asked Philipp if he would be willing to host such a lesson for me as a live webinar, and he said yes (yippie).
What's in the Webinar?
So here's what we're going to do:
Next month, right after Passover, and because Philipp asked for April, we’re going to host a lesson/webinar about WebRTC 1.0.
Philipp will skim quickly over the backstory of WebRTC 1.0, where we are today and more importantly where we’re headed with it. What we will cover in more detail will include answers to questions like:
- What should you change in your app due to WebRTC 1.0?
- What new tricks did 1.0 teach the “old” WebRTC dog?
- Do you need to update your app to be compliant and work in Chrome next year?
- How much effort is involved in this migration to WebRTC 1.0 anyway?
- If you pick out a WebRTC project on github, how would you know if it supports WebRTC 1.0 or not?
What I want here is for you (and me) to really understand the impact WebRTC 1.0 is going to have on all of us in 2018 and on.
When?
This webinar/lesson will take place on
Tuesday, April 10
1-2PM EST (view in your timezone)
The session's recording will NOT be available after the event itself. While this lesson is free to attend live, the recording will become an integral part of the course's lessons.
You Better Ignore the Default Protocol Ports You Implement
Default protocol ports are great, but ones that will work in the real world are better.
If you want something done properly, you should probably ignore the specification of the protocols you use every once in a while. When I worked years ago on implementing protocols directly, there was this notion – you need to send messages in the strictest format possible, but be very lenient in how you receive them. The reasoning is that by being strict on the sender side, you achieve higher interoperability (more devices will be able to "decipher" what you sent), and by being lenient on the receiving side, you achieve the same (being able to understand messages from more devices). Somehow, it isn't worth being right here – it just makes more sense to be smart.
The same applies to default protocol ports.
Assume for the sake of argument that we have a theoretical protocol that requires the use of port number 5349. You set up the server, configure it to listen on that port (after all, we want to be standards compliant), and you run your service.
Will that work well for you?
For the most part, as the illustration above shows, yes it will.
The protocol is probably client-server based. A client somewhere inside a private network accesses the Internet, goes to the public IP of your server on that specific port and connects. Life is good.
Only sometimes it isn’t.
Hmm… what's going on here now? Someone in the IT department decided to block outgoing traffic to port 5349. Or maybe, just maybe, he decided to open outgoing traffic solely for ports 80 and 443. And why would he do that? Because that's where HTTP and HTTPS traffic go – to the web servers our browsers connect to. And I don't know any white collar employee today who would be able to do his job without connecting to the Internet with his browser. Writing this draft of an article requires such a connection (I do it in Google Docs and then copy it to WordPress once done).
So the same scenario, with the same requirements, won't work if our server uses the default port 5349.
What if we decide to pass it through port 443?
Now it has a better chance of working. Why? Because port 443 is reserved for TLS traffic, which is encrypted. This means that beyond the destination of the data, the firewall we're dealing with can't know a thing about what's being sent or where, so it will usually treat it as "HTTPS" type of traffic and just pass it along.
There are caveats here. If the enterprise is enforcing a local trusted web proxy, that proxy actually acts as a man in the middle and opens all packets, which means it now sees the traffic and might decide not to pass it along since it can't understand it.
What we’re aiming for is best coverage. And port 443 will give us that. It might get blocked, but there’s less of a chance for that to happen.
Here are a few examples where ignoring your protocol default ports is suggested:
TURN
The reason for this article is TURN. TURN is used by WebRTC (and other protocols) to get your media session connected in case you can't send it directly peer-to-peer. It acts as a media relay that sits on the public internet with the sole purpose of punching holes in NATs and traversing firewalls.
TURN runs over UDP, TCP and TLS. And yes. You WANT to configure and run it on UDP, TCP and TLS (don’t be lazy – configure them all – it won’t cost you more).
Want to learn more about WebRTC in general and NAT traversal specifically? Enroll to my WebRTC training today to become a pro WebRTC developer.
The default ports for your STUN and TURN servers (you’re most probably going to deploy them in the same process) are:
- 3478 for STUN (over UDP)
- 3478 for TURN over UDP – same as STUN
- 3478 for TURN over TCP – same as STUN and as TURN over UDP
- 5349 for TURN over TLS
A few things that come to mind from this list above:
- We’re listening to the same port for both UDP and TCP, and for both STUN and TURN – which is just fine
- Remember that 5349 from my story above?
Here's the thing. If you deploy only STUN, then many WebRTC sessions won't connect. If you also deploy TURN/UDP, then some sessions still won't connect (mainly because of IT admins blocking UDP altogether). TURN/TCP might not connect either. And guess what – TURN/TLS on 5349 can still be blocked.
What is a developer to do in such a case?
Just point your WebRTC devices towards port 443 for ALL of your STUN/TURN traffic and be done with it. This approach has no real downsides versus deploying with the default ports and all the potential upsides.
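In practice, that means something like the following peer connection configuration – the hostname and credentials are placeholders, and your TURN server (coturn, for example) obviously needs to be configured to listen on 443 as well:

```javascript
// All STUN/TURN traffic pointed at port 443 - hostnames and credentials are placeholders
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:turn.example.com:443' },
    {
      urls: [
        'turn:turn.example.com:443?transport=udp',
        'turn:turn.example.com:443?transport=tcp',
        'turns:turn.example.com:443?transport=tcp'   // TURN over TLS
      ],
      username: 'webrtcuser',
      credential: 'secret'
    }
  ]
});
```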
Here's how a couple of services I checked almost at random do this properly (I've used chrome://webrtc-internals to get them):
Hangouts Meet
Or Google Hangouts. Or Google Meet. Or whatever name it now has. I did use the Meet one:
https://meet.google.com/goe-nxxv-ryp?authuser=1, { iceServers: [stun:stun.l.google.com:19302, stun:stun1.l.google.com:19302, stun:stun2.l.google.com:19302, stun:stun3.l.google.com:19302, stun:stun4.l.google.com:19302], iceTransportPolicy: all, bundlePolicy: max-bundle, rtcpMuxPolicy: require, iceCandidatePoolSize: 0 }, {enableDtlsSrtp: {exact: false}, enableRtpDataChannels: {exact: true}, advanced: [{googHighStartBitrate: {exact: 0}}, {googPayloadPadding: {exact: true}}, {googScreencastMinBitrate: {exact: 400}}, {googCpuOveruseDetection: {exact: true}}, {googCpuOveruseEncodeUsage: {exact: true}}, {googCpuUnderuseThreshold: {exact: 55}}, {googCpuOveruseThreshold: {exact: 85}}]}
Google Meet comes with STUN:19302 with 5 different subdomain names for the server. There’s no TURN here because the service uses ICE-TCP directly from their media servers.
The selection of port 19302 is quaint. I couldn’t find any reference to that number or why it is interesting (not even a mathematical one).
Google AppRTC
You'd think Google's showcase of WebRTC would be an exemplary citizen of a solid STUN/TURN configuration. Well… here's what it got me:
https://appr.tc/r/986533821, { iceServers: [turn:74.125.140.127:19305?transport=udp, turn:[2a00:1450:400c:c08::7f]:19305?transport=udp, turn:74.125.140.127:443?transport=tcp, turn:[2a00:1450:400c:c08::7f]:443?transport=tcp, stun:stun.l.google.com:19302], iceTransportPolicy: all, bundlePolicy: max-bundle, rtcpMuxPolicy: require, iceCandidatePoolSize: 0 },
It had TURN/UDP at 19305, TURN/TCP at 443 and STUN at 19302. Unlike others, it had explicit IPv6 addresses. It had no TURN/TLS.
Jitsi Meet
https://meet.jit.si/RandomWerewolvesPierceAlone, { iceServers: [stun:all-eu-central-1-turn.jitsi.net:443, turn:all-eu-central-1-turn.jitsi.net:443, turn:all-eu-central-1-turn.jitsi.net:443?transport=tcp, stun:all-eu-west-1-turn.jitsi.net:443, turn:all-eu-west-1-turn.jitsi.net:443, turn:all-eu-west-1-turn.jitsi.net:443?transport=tcp, stun:all-eu-west-2-turn.jitsi.net:443, turn:all-eu-west-2-turn.jitsi.net:443, turn:all-eu-west-2-turn.jitsi.net:443?transport=tcp], iceTransportPolicy: all, bundlePolicy: balanced, rtcpMuxPolicy: require, iceCandidatePoolSize: 0 }, {advanced: [{googHighStartBitrate: {exact: 0}}, {googPayloadPadding: {exact: true}}, {googScreencastMinBitrate: {exact: 400}}, {googCpuOveruseDetection: {exact: true}}, {googCpuOveruseEncodeUsage: {exact: true}}, {googCpuUnderuseThreshold: {exact: 55}}, {googCpuOveruseThreshold: {exact: 85}}, {googEnableVideoSuspendBelowMinBitrate: {exact: true}}]}
Jitsi shows multiple locations for STUN and TURN – eu-central, eu-west with STUN:443, TURN/UDP:443 and TURN/TCP:443. No TURN/TLS.
appear.in
https://appear.in/bloggeek, { iceServers: [turn:turn.appear.in:443?transport=udp, turn:turn.appear.in:443?transport=tcp, turns:turn.appear.in:443?transport=tcp], iceTransportPolicy: all, bundlePolicy: balanced, rtcpMuxPolicy: require, iceCandidatePoolSize: 0 }, {advanced: [{googCpuOveruseDetection: {exact: true}}]}
appear.in went for TURN/UDP:443, TURN/TCP:443 and TURN/TLS:443. STUN is implicit here via the use of TURN.
Facebook Messenger
https://www.messenger.com/videocall/incall/?peer_id=100000919010117, { iceServers: [stun:stun.fbsbx.com:3478, turn:157.240.1.48:40002?transport=udp, turn:157.240.1.48:3478?transport=tcp, turn:157.240.1.48:443?transport=tcp], iceTransportPolicy: all, bundlePolicy: balanced, rtcpMuxPolicy: require, iceCandidatePoolSize: 0 }, {advanced: [{enableDtlsSrtp: {exact: true}}]}
Messenger uses port 3478 for STUN, TURN over UDP on port 40002, TURN over TCP on port 3478. It also uses TURN over TCP on port 443. No TURN/TLS for Messenger.
Here's what I've learned here:
- People don't use the default STUN/TURN ports in their deployments
- Even when they don't go for ports that make sense (443), they may not use the default ports either (see Google Meet)
- With seemingly something straightforward as STUN/TURN, everyone ends up implementing it differently
We've looked at NAT traversal and its STUN and TURN servers. But what about some signaling protocols? The first one that came to mind when I thought about other examples was MQTT.
MQTT is a messaging protocol that is used in the IOT and M2M space. Others use it as well – Facebook for example:
They explained how MQTT is used as part of their Messenger backend for the WebRTC signaling (and I guess all other messages they send over Messenger).
MQTT can run over TCP, listening on port 1883, and over TLS on port 8883. But then when you look at the AWS documentation for AWS IoT, you find this:
There's no port 1883 at all, and port 443 can now be used directly if needed.
It would be interesting to know whether Facebook Messenger's mobile app uses MQTT over port 443 or 8883 – and if it is port 443, whether it is MQTT over TLS or MQTT over WebSocket. If what they do with their STUN and TURN servers is any indication, any port number here is a good guess.
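If you wanted to take the same approach yourself, here's a minimal sketch using the mqtt.js client over a WebSocket on port 443 – the broker URL and topics are placeholders, and whether your broker accepts MQTT over WebSocket there depends entirely on your deployment:

```javascript
// MQTT over WebSocket on port 443 - broker URL and topics are placeholders
const mqtt = require('mqtt');

const client = mqtt.connect('wss://broker.example.com:443/mqtt');

client.on('connect', () => {
  client.subscribe('devices/1234/commands');
  client.publish('devices/1234/telemetry', JSON.stringify({ temperature: 23.5 }));
});
```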
SIP
SIP is the most common VoIP signaling protocol out there. I didn't remember the details, so I checked Wikipedia:
SIP clients typically use TCP or UDP on port numbers 5060 or 5061 for SIP traffic to servers and other endpoints. Port 5060 is commonly used for non-encrypted signaling traffic whereas port 5061 is typically used for traffic encrypted with Transport Layer Security (TLS).
Port 5060 for UDP and TCP traffic. And port 5061 for TLS traffic.
Then I asked a friend who knows a thing or two about SIP (he’s built more than his share of production SIP networks). His immediate answer?
443.
He remembered that 5060 was UDP, 5061 was TCP and 443 was for TLS.
When you want to deploy a production SIP network, you configure your servers to do SIP over TLS on port 443.
Next Steps
If you are looking at protocol implementations and you happen to see some default ports that are required, ask yourself if using them is in your best interest. To get past firewalls and other nasty devices along the route, you might want to consider using other ports.
While you’re at it, I’d avoid sending stuff in the clear if possible and opt for TLS on the connection, which brings us back to 443. Possibly the most important port on the Internet.
If you are serious about learning WebRTC, then check out my online WebRTC training:
“Open Source” SDK for SaaS and CPaaS are… Meh
Open Source SDKs from SaaS vendors aren’t interesting.
Every once in a while, I see a SaaS vendor boasting about having open source SDKs. The assumption is that slapping "open source" on something you are doing immediately makes that thing free and open. The truth is far from it.
Planning on selecting a CPaaS vendor? Check out this shortlist of CPaaS vendor selection metrics:
Get the shortlist
Open Source Today
I want to start with an explanation of open source today.
Open source is a way for a vendor or a single developer to share their code with the "community" at large. There are many reasons why a vendor would do such a thing:
- To get others in the industry to assist in the effort of building and maintaining that code base (in most cases, such initiatives fail to meet their objective)
- To show technical savviness as a company. This is good for the brand’s name and when a company wants to attract top notch developers
- To showcase one’s technical abilities. An individual developer can use his github account to attract potential employers and projects
- To offer a reference implementation or a helper library for integrating with the company’s application
The above reasons are related to companies with proprietary software that they want protected. What they end up doing, is share modules or parts of their codebase as open source. Usually ones they assume won’t help a competitor copy and compete with them directly.
The other approach is to use open source as a full-fledged business model:
- Releasing a project as open source, then offering a non-open source license
- Or offering support and an SLA to it
- Or offering a hosted version of it
- Or offering customization work around it
A good example here is FreeSWITCH. They are offering support and customization work around this popular open source project. And now, there’s SignalWire, an upcoming hosted version of FreeSWITCH.
You see, for a company to employ open source, there needs to be an upside. Philanthropy isn’t a business model for most.
Cloud versus On-premise when Consuming Open Source
SaaS changes the equation a bit.
I tried placing different open source licenses on a kind of a graph, alongside different deployment models. Here’s what I got:
(if you’re interested here’s where to learn more about open source licenses)
CPaaS and SaaS in general are cloud deployments. This gives the company more leeway in the type of open source licenses it can consume. An on-premise type of business had better beware of using GPL, whereas a cloud deployment is just fine using GPL.
This isn't to say that GPL can't be used in on-premise deployments – just that it complicates things to the point that oftentimes the risks of doing so outweigh the potential reward.
CPaaS / SaaS vendors and Interfaces
On the other end of the equation you'll find how customers interact with CPaaS vendors.
Towards that goal, the main approach today is by way of an API. And APIs today are almost always defined using REST.
In the illustration above, we have a SaaS or CPaaS vendor exposing a REST API. On top of that API, customers can build their own applications. The vendor wants to make life easier for them, to increase adoption, so it ends up implementing helper libraries. The helper libraries can be official or unofficial ones, created either by third parties or by the vendor itself. They can also just be reference implementations on top of the API, offered as starting points to customers with no real documentation or interface of their own.
For the most part, helper libraries are something I’d expect customers to deploy and run on their servers, to make it easier for them to connect from whatever language and framework they want to use to the vendor’s service.
On a client device, we have SDKs. In some ways, SDKs are just like helper libraries. They connect to the backend REST API, though sometimes they may have a more direct/optimized connection to the platform (proprietary, undocumented WebSocket connection for example).
SDKs are something you'll find with most of the services where a state machine needs to be maintained on the client side. In the context of most of the things I write here, this includes CPaaS platforms deciding to offer VoIP calling (voice or video) by way of WebRTC or by other means in non-browser implementations. In many of these cases, the developers never actually implement REST calls – they just use the SDK's interface to get things done.
Which is where the notion of open source SDKs sometimes comes up.
The Open Source SDK
If we're talking about a SaaS platform, then having the source code of the SDK has its benefits, but none of them relate to "open source". There's no ecosystem or adoption at play for the open source code.
The reasons why we’d like to have the source code of an SDK are varied:
- Reading the code can give us better understanding of how the service works
- Being able to run the code step by step in a debugger makes it easier to troubleshoot stuff
- Stack traces are more meaningful in crashes
Here’s the thing though –
Trying to market the SDK as open source is kinda misleading as to what you’re getting out of your end of the deal.
When it comes to CPaaS and WebRTC, there's this added complexity: vendors will "open source" or hand out the source code of their JS SDK (because there's no real alternative today, at least not until WebAssembly becomes commonplace). As for the Android and iOS SDKs, I don't remember seeing one that is offered in source code form – probably because all vendors are tweaking and modifying the baseline WebRTC code.
SaaS and Open Source
In a way, SaaS has changed the models and uses of open source. When it was first introduced to the world, software was executed on premise only. There was no cloud, and SDKs and frameworks were commercially licensed. If you wanted something done, you either had to license it or build it yourself.
Open source came and changed all that by enabling vendors to build on top of open source code. Vendors came out with business models around dual licensing of code as well as support and customization models.
SaaS vendors today use open source in three different ways:
- They use it to build their platform. Due to their model, they are less restricted as to the type of open source licenses they can live with
- They open source code modules. Either by forking and sharing modified open source modules they use or by open sourcing specific modules
- Mostly because their developers push towards that goal
- And because they believe these modules won’t give away any of their competitive advantages
- Or to attract potential customers
- They may open source their whole platform. Not common, but it does happen. Idea here is to make revenue out of hosting the service at scale and giving away the baseline service for free (think WordPress for example)
Do I Need a Media Server for a One-to-Many WebRTC Broadcast?
TL;DR – YES.
Do I need a media server for a one-to-many WebRTC broadcast?
That’s the question I was asked on my chat widget this week. The answer was simple enough – yes.
Decided you need a media server? Here are a few questions to ask yourself when selecting an open source media server alternative.
Get the Selection Sheet
Then I received a follow up question that I didn’t expect:
Why?
That caught me off-guard. Not because I don’t know the answer. Because I didn’t know how to explain it in a single sentence that fits nicely in the chat widget. I guess it isn’t such a simple question either.
The simple answer is a limit in resources, along with the fact that we don’t control most of these resources.
The Hard Upper Limit
Whenever we want to connect one browser to another with a direct stream, we need to create and use a peer connection.
Chrome 65 includes an upper limit on peer connections, used for garbage collection purposes. Chrome is not going to allow more than 500 concurrent peer connections to exist.
500 is a really large number. If you plan on more than 10 concurrent peer connections, you should be one of those who know what they are doing (and don't need this blog). Going above 50 seems like a bad idea for all use cases I can remember taking part in.
Understand that resources are limited. Free and implemented in the browser doesn’t mean that there aren’t any costs associated with it or a need for you to implement stuff and sweat while doing so.
Bitrates, Speeds and Feeds
This is probably the main reason why you can't broadcast with WebRTC, or with any other technology.
We are looking at a challenging domain with WebRTC. Media processing is hard. Real time media processing is harder.
Assume we want to broadcast a video at a low VGA resolution. We checked and decided that 500kbps of bitrate offers good results for our needs.
What happens if we want to broadcast our stream to 10 people?
Broadcasting our stream to 10 people requires an uplink bitrate of 5mbps.
If we’re on an ADSL connection, then we can find ourselves with 1-3mbps uplink only, so we won’t be able to broadcast the stream to our 10 viewers.
For the most part, we don’t control where our broadcasters are going to be. Over ADSL? WiFi? 3G network with poor connectivity? The moment we start dealing with broadcast we will need to make such assumptions.
That’s for 10 viewers. What if we’re looking for 100 viewers? A 1,000? A million?
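To see why, here's a naive "broadcast without a media server" sketch – one peer connection per viewer, all fed the same local stream. Signaling is omitted, and the function and parameter names are illustrative only:

```javascript
// Naive mesh broadcast: every viewer gets its own peer connection from the broadcaster.
// Uplink bandwidth (and encoding CPU) grows linearly with the number of viewers.
async function naiveBroadcast(viewerIds, iceConfig) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });

  return viewerIds.map(viewerId => {
    const pc = new RTCPeerConnection(iceConfig);
    stream.getTracks().forEach(track => pc.addTrack(track, stream));
    // Offer/answer exchange with each viewer would happen here (omitted).
    // 10 viewers at 500kbps each already means ~5mbps of upload from this one client.
    return pc;
  });
}
```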
With a media server, we decide the network connectivity, the machine type of the server, etc. We can decide to cascade media servers to grow our scale of the broadcast. We have more control over the situation.
Broadcasting a WebRTC stream requires a media server.
Sender Uniformity
I see this one a lot in the context of a mesh group call, but it is just as relevant to broadcast.
When we use WebRTC for a broadcast type of service, a lot of decisions end up taking place in the media server. If a viewer has a bad network, this will result in packet loss being reported to the media server. What should the media server do in such a case?
While there’s no simple answer to this question, the alternatives here include:
- Asking the broadcaster to send a new I-frame, which will affect all viewers and increase bandwidth use for the near future (you don’t want to do it too much as a media server)
- Asking the broadcaster to reduce bitrate and media quality to accommodate for the packet losses, affecting all viewers and not only the one on the bad network
- Ignoring the issue of packet loss, sacrificing the user for the “greater good” of the other viewers
- Using Simulcast or SVC, and move the viewer to a lower “layer” with lower media quality, without affecting other users
You can't do most of these in a browser. The browser will tend to use the same single encoded stream as-is to send to all others, and it won't do a good job of estimating bandwidth properly in front of multiple users. It is just not designed or implemented to do that.
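For completeness, here's roughly what the simulcast option looks like on the sending side using the spec's addTransceiver() API – the rid names and bitrates are arbitrary, and the point still stands: the browser only produces the layers, while deciding which layer each viewer receives remains the media server's job:

```javascript
// Ask the browser to send three simulcast layers of the same video track.
// The SFU then picks a layer per viewer based on that viewer's network conditions.
pc.addTransceiver(videoTrack, {
  direction: 'sendonly',
  sendEncodings: [
    { rid: 'low',  maxBitrate: 150000,  scaleResolutionDownBy: 4 },
    { rid: 'mid',  maxBitrate: 500000,  scaleResolutionDownBy: 2 },
    { rid: 'high', maxBitrate: 1500000 }
  ]
});
```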
You Need a Media Server
In most scenarios, you will need a media server in your implementation at some point.
If you are broadcasting, then a media server is mandatory. And no. Google doesn’t offer such a free service or even open source code that is geared towards that use case.
It doesn’t mean it is impossible – just that you’ll need to work harder to get there.
Looking to learn more about WebRTC? In the coming weeks, I’ll be refreshing my online WebRTC training. Join now so you don’t miss out.
The Internet of Things or Things on the Internet?
Time to stop placing things on the internet and start building the internet of things.
We’ve been using that stupid IOT acronym for quite some time. Probably a decade. The idea and notion that every object can be network enabled, share its collected data and receive its commands remotely is quite exciting. I think we’re far from that vision.
It isn’t that we’re not making progress. We are. The apartment building I now live in is 3 years old. It is more automated than the previous apartment building I lived in, which was 15 years old. I wouldn’t call it IOT or a smart building quite yet. And I don’t think there’s a simple way to turn a dumb building into a smart one either.
When we moved to our new apartment we renovated a bit. There was this opportunity to add smart-home capabilities to the apartment. There was just a teeny set of problems here:
- There’s no real business case for us yet. As a family, we really don’t need a smart-home, and frankly – I still haven’t seen one to appreciate the added benefit
- Since we're in a highrise, the need for an apartment security/surveillance system seemed like overkill. The most we ended up with is a peephole camera for the door, mainly to empower our kids to see who's knocking (no IOT or smarts in it)
- Talking to the electrician who ended up dealing with our power outlets at home, I understood that there aren't enough electricians available who know how to install a smart-home kit here in Israel
And to top it all off, it felt like a one-time undertaking that would be hard/impossible to upgrade or modify later on without a complete overhaul. That wasn't what I was aiming for.
Mozilla just announced their Things Gateway that can be installed on a Raspberry Pi 3. It is a rather interesting project, especially since its learnings are then applied to the W3C Web of Things Interest Group with the intent of reducing the fragmentation of IOT. They’ve got their hands full of work.
IOT today is a patchwork of devices and companies, each trying to become a dominant player. The end result is that we're living in a world where things can be placed on the internet, but they don't amount to an internet of things.
Here are a few questions/hurdles that I think we’ll need to answer as an industry before we can reach that vision of IOT.
Security
I am putting security here first. Here's why:
- We all know it is mandatory
- We all know it is left as a backlog item if it is considered at all
I’ve seen it happen with VoIP and it is definitely happening today with IOT.
Until this becomes a priority, IOT will not really happen.
Security has many different aspects to it:
- Encryption of the communications, to maintain privacy and allow for authorization and authentication of it
- Upgradability, which itself should be secure, straightforward and automated
- Audit logs that are hard to tamper with, so we can investigate hacks
Most vendors won't be able to get these done properly to begin with. And they don't have any real incentive to do that either.
Standardization
There's a need for standardization in this space. One that tackles all levels of the IOT food-chain.
Off the top of my head, here are a few areas:
- Physical – Wi-Fi, Zigbee, Bluetooth – all are standards for the underlying network layer to be used. There's also RFID and other types of connections that can be used. And we need to factor in 5G at some point. We've got wireless ones and wireline ones. A total mess. Just look at the Mozilla Things Gateway announcement for the set of connectors they support and how these get supported. Too much information to get things done easily
- Transport – once we get communications, and assume (naively) that we have IP communications going, do we then run our data over TCP? Or TLS? Or maybe UDP? Or should we go for QUIC? Or HTTP/2? Should we do it over MQTT maybe? Over a WebSocket? There are too many alternatives here
- Signaling – What are the types of messages we’re going to allow? What controls what sensor data? How do we describe it in a way that can be easily extendable and unambiguous? I’ve been there with VoIP and it was hard enough. Doing it for IOT is an order of magnitude harder (more players, more devices, more everything)
- Processing – this relates to the next topic of automation. Once we can collect, control and make decisions over a single device, can we do it in aggregate, and in ways that won’t lock us in to a single vendor?
I don’t believe we’ll get this thing standardized properly in our industry for quite some time.
Automation
I've seen a lot of rules engines when it comes to IOT. You can program them to create sequences of events – if the density sensor indicates someone is at home, turn on the lights.
The problem is that you need to program them. This can’t scale.
The other problem is the issue of what to do with all that sensor data? Someone needs to collect it, aggregate it, process it, analyze it and make decisions out of it.
Simple rule engines are nice, but they won’t get us far down the IOT path.
We also need to add machine learning and AI into the mix.
The end result? Probably something similar in nature to AWS DeepLens. The only problem: it needs to be really generic and flexible.
Different Industries, Different Requirements and Ecosystems
There are different markets in IOT. They have different needs and different customers. They will have different ecosystems around them.
In broad strokes, we can split to consumer and enterprise. Enterprise here includes industrial, smart cities, etc. The consumer is all about the home, the car and the self.
Who will be the players here?
From Smartphones to Smart Speakers
This is where I think we made the most progress.
Up until a year ago, IOT was something you ended up delivering to customers via apps on a smartphone. You purchase a lightbulb, you get an app. You get a new TV, there's an app. Refrigerator? App.
Amazon Alexa did something miraculous. It moved the discussion over the home from an app towards a stationary home device with voice activation and control. No screen or touch screen needed.
Since then, Google and Apple have joined and voice assistants in the home are all the rage now.
In some ways, I expect this to find its way into the enterprise as well. First via conference rooms and later – who knows?
This is one more piece in the IOT puzzle.
Where do we go from here?
I have no clue.
To me, it seems that we're still at the "things on the internet" stage, and we will be there for a lot longer.
5 Mistakes to Avoid When Developing WebRTC Applications
There are things you don’t want to do when you are NIH’ing your way to a stellar WebRTC application.
Here's a true, sad story. This month, the unimaginable happened: rain (!) dropped from the sky here in Israel. The end result was that 6 apartments in my building are suffering from moisture due to a leak from the penthouse balcony. Being a new building, we're at the mercy of the contractor to fix it.
Nothing in the construction market moves fast in Israel – or without threats – so we had to start sending official-sounding letters to the contractor about the leak. I took charge, and immediately said we need to lawyer up and have a professional assist us in writing the letter to the contractor. Others were of the opinion that we could do it on our own, since we need a lawyer only if he is signed directly on the document.
And then it hit me. The reason I wanted to lawyer up is that I see many smart people failing with WebRTC. They are making rookie mistakes, and I didn't want to make rookie mistakes when it comes to the moisture problems in my apartment.
Why are we Failing with WebRTC?I am not sure that smart people fail a lot more around WebRTC technology than they do with other technologies, but it certainly feels that way.
A famous Mark Twain quote goes like this:
“There is no such thing as a new idea. It is impossible. We simply take a lot of old ideas and put them into a sort of mental kaleidoscope. We give them a turn and they make new and curious combinations. We keep on turning and making new combinations indefinitely; but they are the same old pieces of colored glass that have been in use through all the ages.”
Many of the rookie mistakes people make with WebRTC stem from this. WebRTC is that kind of new. It is simply a lot of old ideas meshed into a new and curious combination. So we know it. And we assume we know how to handle ourselves around it.
Entrepreneurs? Skype is 14 years old. It shouldn’t be that hard to build something like Skype today.
VoIP developers? SIP we know. WebRTC is just SIP without the signaling. So we force SIP onto it and we’re done.
Web developers? WebRTC is part of HTML5. A few lines of JS code and we’re practically ready to go live.
Video developers? We can just take the WebRTC video feeds and put them on a CDN. Can’t we?
The result?
- Smart people decide they know enough to go it alone. And end up making some interesting mistakes
- People put their faith in one of the above personas… only to fail
My biggest gripe recently is people who decide in 2018 that peerJS is what they need for their WebRTC application. A project with 402 lines of code, last updated in 2015 (!). You can’t use such code with WebRTC. Code older than a year is stale or dead already. WebRTC is still too new and too dynamic.
That said, it isn’t as if you have a choice anymore. Flash is dying, and there’s no other serious alternative to WebRTC. If you’re thinking of adopting WebRTC, then here are five mistakes to avoid.
Mistake #1: Failing to Configure STUN/TURNYou wouldn't believe how often developers fail to configure NAT traversal servers. Just yesterday someone asked me over the chat widget of my website how he can run his application by hosting his signaling and web servers on HostGator without any STUN/TURN servers. It just doesn't work.
The simple answer is that you can’t – barring some esoteric use cases, you will definitely need STUN servers. And for most use cases, TURN servers will also be mandatory if you want sessions to connect.
In the past month, I found myself explaining quite a lot about NAT traversal:
- You must use STUN and TURN servers
- Don’t rely on free STUN servers, and definitely don’t use “free” TURN servers
- Don’t force all sessions via TURN unless you absolutely know what you’re doing
- There is no added security in using TURN
- You don’t need more than 1 STUN server and 3 TURN servers (UDP, TCP and TLS) in your servers configuration in WebRTC
- Use temporary/ephemeral passwords in your TURN configuration
- STUN doesn’t affect media quality
- coturn or restund are great options for STUN/TURN servers
There’s more, but this should get you started.
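To make some of the items above concrete, here is a minimal sketch of how ephemeral TURN credentials and a lean iceServers configuration can fit together. The hostnames and shared secret are placeholders, and the credential scheme follows coturn's static-auth-secret (REST API) mechanism – adapt it to whatever your TURN deployment actually uses.

```typescript
// Ephemeral TURN credentials the way coturn's static-auth-secret (REST API)
// mechanism expects them: the username is "expiry:userId", the credential is an
// HMAC-SHA1 of that username signed with the shared secret configured in coturn.
// All hostnames and the secret below are placeholders for your own deployment.
import { createHmac } from "crypto";

function ephemeralTurnCredentials(sharedSecret: string, userId: string, ttlSeconds = 3600) {
  const username = `${Math.floor(Date.now() / 1000) + ttlSeconds}:${userId}`;
  const credential = createHmac("sha1", sharedSecret).update(username).digest("base64");
  return { username, credential };
}

// The iceServers configuration handed to the browser: one STUN server and the
// three TURN transports (UDP, TCP and TLS over 443 for strict firewalls).
function buildIceServers(userId: string) {
  const { username, credential } = ephemeralTurnCredentials("replace-with-shared-secret", userId);
  return [
    { urls: "stun:stun.example.com:3478" },
    {
      urls: [
        "turn:turn.example.com:3478?transport=udp",
        "turn:turn.example.com:3478?transport=tcp",
        "turns:turn.example.com:443?transport=tcp",
      ],
      username,
      credential,
    },
  ];
}

console.log(JSON.stringify(buildIceServers("alice"), null, 2));
```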
Mistake #2: Selecting the WRONG Signaling FrameworkPeerJS anyone? PeerJS feels like a tourist trap:
With 1,693 stars and 499 forks, PeerJS is one of the most popular WebRTC projects on github. What can go wrong?
Maybe the fact that it is older than the internet?
A WebRTC project that had its last commit 3 years ago can’t be used today.
Same goes for using Muaz Khan’s code snippets and expecting them to be commercial grade, stable, highly scalable products. They’re not. They’re just very useful code snippets.
Planning to use some open source project? Then:
- Make sure it was updated recently (=the last couple of months)
- Make sure it is popular enough
- Make sure you can understand the framework’s code and can maintain it on your own if needed
- Try to check if there’s someone behind it that can help you in times of trouble
Don’t take the selection process here lightly. Not when it comes to a signaling server and not when it comes to a media server.
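If you want to make the first two checks less of a gut feeling, the public GitHub REST API is enough for a quick sanity check. A rough sketch (the repo below is just an example, and unauthenticated calls are rate limited):

```typescript
// A quick look at an open source project's vital signs via GitHub's public REST API.
async function repoVitals(owner: string, repo: string) {
  const res = await fetch(`https://api.github.com/repos/${owner}/${repo}`);
  if (!res.ok) throw new Error(`GitHub API returned ${res.status}`);
  const data = (await res.json()) as any;
  const monthsSincePush =
    (Date.now() - new Date(data.pushed_at).getTime()) / (1000 * 60 * 60 * 24 * 30);
  return {
    stars: data.stargazers_count,
    openIssues: data.open_issues_count,
    monthsSincePush: Math.round(monthsSincePush),
  };
}

// Popularity alone means nothing if monthsSincePush is measured in years.
repoVitals("peers", "peerjs").then(console.log);
```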
Mistake #3: Not Using Media Servers When You ShouldI know what you’re thinking. WebRTC is peer to peer so there’s no need for servers. Some think that even signaling and web servers aren’t needed – I hope they can explain how participants are going to find each other.
To some, this peer to peer concept also means that you can run these ridiculously large scale sessions with no servers that carry on media.
Here are two such “architectures” I come across:
Mesh. It’s great. Don’t assume you can get it to run properly this year or the next. Move on.
Live broadcasting by forwarding content. It can be done, but most probably not the way you expect it to grow to a million users with no infrastructure and zero latency.
For many of the use cases out there, you will need a media server to process and route the media for you. Now that you are aware of it, go search for an open source media server. Or a commercial one.
Mistake #4: Thinking Short-TermYou get an outsourcing vendor. Write him a nice requirements doc. Pay him. Get something implemented. And you’re done.
Not really.
WebRTC is still in its infancy. The spec is changing. Browser implementations are changing. It is all in flux all the time. If you're going to use WebRTC, either:
- Use some WebRTC API platform (here are a few), and you’ll be able to invest a bit less on an ongoing basis. There will be maintenance work, but not much
- Develop on your own or by outsourcing. In this case, you will need to continue investing in the project for at least the next 3 years or more
WebRTC code rots faster than most other HTML5 code. It will eventually change, but we’re not there yet.
It is also the reason I started with a few colleagues testRTC a few years ago. To help with the lifecycle of WebRTC applications, especially in the area of testing and monitoring.
Mistake #5: Failing to Understand WebRTCThey say assumption is the mother of all mistakes. Google seems to agree with it. Almost.
WebRTC isn’t trivial. It sits somewhere between VoIP and the web. It is new, and the information out there on the Internet about it is scattered and somewhat dynamic (which means lots of it isn’t accurate).
If you plan on using WebRTC, make sure you first understand it and its intricacies. Understand the servers that are needed to deploy a WebRTC application. Understand the signaling mechanisms that are built into WebRTC. Understand how media is processed and sent over the network. Understand the rich ecosystem of solutions that can be used with WebRTC to build a production ready system.
Lots of things to learn here. Don’t assume you know WebRTC just because you know web development or because you know VoIP or video processing.
If you are looking to seriously learn WebRTC, why not enroll in my Advanced WebRTC Architecture course?
–
What about my apartment? We've lawyered up, and now I have someone review and fix all the official sounding letters we're sending out. Hopefully, it will get us to a resolution faster.
The post 5 Mistakes to Avoid When Developing WebRTC Applications appeared first on BlogGeek.me.
WebRTC Electron Implementations are on 🔥
For WebRTC, Mobile and PC are moving in different directions. In the desktop, WebRTC Electron apps are gaining momentum.
In the good old days, people used to complain that WebRTC isn’t available on all browsers. Mobile was less of an issue for most as mobile application developers port WebRTC and use it natively on both iOS and Android.
How times change.
Need to know where WebRTC is available? Download this free WebRTC Device Cheat Sheet.
Today? All modern browsers support WebRTC. We’ve got Chrome, Firefox, Edge and Safari with official WebRTC implementations.
The challenge? None of the browsers are ready:
- Chrome uses Plan B, switching to Unified Plan
- Firefox is doing fine, but isn’t high on the priority list
- Edge doesn’t support the data channel, had its market share isn’t that great
- Safari doesn’t support VP8 and breaks a wee bit too often at the moment
What’s a developer to do?
Use adapter.js. Or go for a plugin. Or just ignore a few browsers.
Or maybe. Just maybe you should treat PCs and laptops the same way you do mobile? And build an app.
If that’s what you plan on doing then you’re not alone.
The most popular way to build an app for the desktop is by using Electron. There are other ways, like CEF and actual native development, but Electron is by far the most common approach.
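If you have never looked at Electron, the barrier to entry is low. The sketch below is roughly all it takes to wrap an existing WebRTC web app in a desktop shell (the URL is a placeholder for your own app):

```typescript
// Minimal Electron main process that wraps an existing WebRTC web app.
// The renderer is plain Chromium, so getUserMedia/RTCPeerConnection work as-is.
import { app, BrowserWindow } from "electron";

app.whenReady().then(() => {
  const win = new BrowserWindow({
    width: 1280,
    height: 720,
  });
  // Placeholder URL - point this at your own web app
  win.loadURL("https://app.example.com/call");
});

app.on("window-all-closed", () => app.quit());
```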
Here are 3 vendors making use of Electron (and WebRTC) for their desktop application:
#1 – SlackSlack are a popular team collaboration application. I’ve been using it in the browser for the last 3 years, but switched to their desktop Electron app on both my Ubuntu desktop and my Windows 10 laptop.
Why didn’t I use the app for so long? Because I don’t like installing things.
Why have I installed it now? Because I need to track 3+ slack accounts in parallel at all times now. This means a tab per slack account in my browser. On the desktop app, they don’t “eat up” multiple tabs. It isn’t a matter of memory or performance for me. Just one of “esthetics” – trying to preserve a tabs diet on my Chrome.
And that’s how Slack likes it. During the last Kranky Geek, the Slack team gave an interesting presentation about their current plans. It had about a minute dedicated to Electron in 2:30 of the session:
This recording lacks the Q&A part of the session. In an answer to a question regarding browser support, Andrew MacDonald of Slack said their focus is on their desktop app – not the browser. They make sure everything works on Chrome, invest less time and effort on the other browsers, and focus a lot on their Slack desktop application.
It was telling.
If you are looking for desktop-application-only-features in Slack, then besides having a single window for all projects, there’s the collaboration they offer during screen sharing that isn’t available in the browser (yet another reason for me to switch – to check it out).
During that session, at the 2:30 minute mark, Andrew explains why Electron is so useful to Slack, and it is in the domain of cross platform development and time to market – with their team size, they can't update as fast as Electron does, so they took it "as is", along with its built-in WebRTC implementation.
#2 – DiscordDiscord is a kind of Slack, but different. A social network targeting gamers, though you can also find non-gaming groups there. Discord is doing all it can to get you from the comfort of your browser right into their native application.
Here’s how the homepage looks like:
From the get go their call to action is to either Open Discord (in the browser) or Download for your operating system. On mobile, if you’re curious, the only alternative is to download the app.
Here’s the interesting part, though.
Discord’s call to action suggest by using green buttons you open Discord in the browser. That’s a lower friction action. You select a user name. Then pick an email and password (or use an unclaimed channel until you add your username and password). And now that you’re signed up for the service, it is time to suggest again you use their app:
And… if you skip this one, you’ll get a top bar reminder as well (that orange strip at the top):
You can do with Discord almost anything inside the browser, but they really really really want to get you off that damn internet and into their desktop app.
And it is working for them!
#3 – TalkDeskTalkDesk has its own reason for adopting Electron.
TalkDesk is a contact center solution that integrates with CRMs and third party systems. Towards that goal, you can:
- Use the TalkDesk application (=browser web app)
- Install the TalkDesk extension from Chrome, and have it latch on to other CRM systems
- Install the Chrome Callbar app, so you can use it as a standalone without the need to have the browser open at all
That third option is going the way of the dodo, along with Chrome apps. TalkDesk solved that by introducing Callbar Electron.
What we see here differs slightly from the previous two examples.
Where Slack and Discord try getting people off the web and into their desktop application, TalkDesk is just trying to be everywhere for them. Using HTML5 and Electron means they need not write yet-another-application for the desktop – they can reuse parts of their web app.
They are NOT AloneThere are other vendors I know of that are using Electron for their WebRTC applications. They do it for one of the following reasons:
- It is an easy way to support Internet Explorer by not supporting it (or Safari)
- They want a “native” app because they need more control than what a browser could ever offer, but still want to work with cross platform development, and HTML5/JS seems like the cleanest approach
- Their users work in front of the service all day, so the browser isn’t the best interface for them
- They don’t want to tether themselves or limit themselves to the browser. Using web technology is just how they want to develop
- It brings with it "stability", as it is up to you to decide when to push an update to your users, as opposed to having browser vendors do it on their own timeframe. It is only a semblance of stability though, as most would still support both browsers and applications in parallel
Add to that CPaaS vendors officially supporting Electron. Vidyo.io and TokBox are such examples. They do it not because they think it is nice, but because there’s customer demand for it.
This shift towards Electron apps makes it harder to estimate the real usage base of WebRTC. If most communications are shifting from the Chrome browser (let's face it, most WebRTC comms happen in Chrome today if you only care about browsers) towards applications, then the statistics and trends collected by Google about WebRTC use are skewed. That said, it makes Chrome all the more dominant, as Electron use can be attributed back to Chromium.
Expect vendors to continue adopting Electron for their WebRTC applications. This trend is on 🔥.
Need to know where WebRTC is available? Download this free WebRTC Device Cheat Sheet.
The post WebRTC Electron Implementations are on 🔥 appeared first on BlogGeek.me.
AWS DeepLens and the Future of AI Cameras and Vision
Are AI cameras in our future?
In last year's AWS re:invent event, which took place at the end of November, Amazon unveiled an interesting product: AWS DeepLens.
There’s decent information about this new device on Amazon’s own website but very little of anything else out there. I decided to put my own thoughts on “paper” here as well.
Interested in AI, vision and where it meets communications? I am going to cover this topic in future articles, so you might want to sign-up for my newsletter
What is AWS DeepLens?AWS DeepLens is the combination of 3 components: hardware (camera + machine), software and cloud. These 3 come in a tight integration that I haven’t seen before in a device that is first and foremost targeting developers.
With DeepLens, you can handle inference of video (and probably audio) inputs in the camera itself, without shipping the captured media towards the cloud.
The hype words that go along with this device? Machine Vision (or Computer Vision), Deep Learning (or Machine Learning), Serverless, IoT, Edge Computing.
It is all these words and probably more, but it is also somewhat less. It is a first tentative step towards what a camera module will look like 5 years from today.
I’d like to go over the hardware and software and see how they combine into a solution.
AWS DeepLens HardwareAWS DeepLens hardware is essentially a camera that has been glued to an Intel NUC device:
Neither the camera nor the compute are on the higher end of the scale, which is just fine considering where we're headed here – gazillions of low cost devices that can see.
The device itself was built in collaboration with Intel. Like all chipset vendors, Intel is plunging into AI and deep learning as well. More on AWS+Intel vs Google later.
Here’s what’s in this package, based on the AWS blog post on DeepLens:
- 4 megapixel camera with the ability to capture 1080p video resolution
- Nothing is said about the frame rate at which this can run. I'd assume 30 fps
- The quality of this camera hasn’t been detailed either. In many cases, I’d say these devices will need to work in rather extreme lighting conditions
- 2D microphone array
- While it is easy to understand why such a device needs a microphone, a 2D microphone array is very intriguing in this one
- This allows for better handling of things like directional sound and noise reduction algorithms to be used
- None of the deep learning samples provided by Amazon seem to make use of the microphone inputs. I hope these will come later as well
- Intel Atom X5 processor
- This one has 4 cores and 4 threads
- 8GB of memory and 16GB of storage – this is meant to run workloads and not store them for long periods of time
- Intel Gen9 graphics engine
- If you are into numbers, then this does over 100 GFLOPS – quite capable for a “low end” device
- Remember that 1080p@30fps produces more than 62 million pixels a second to process, so we get ~1600 operations per pixel here
- You can squeeze out more “per pixel” by reducing frame rate or reducing resolution (both are probably done for most use cases)
- Like most Intel NUC devices, it has Wi-Fi, USB and micro HDMI ports. There’s also a micro SD port for additional memory based on the image above
The hardware tries to look somewhat polished, but it isn’t. Although this isn’t written anywhere, this is:
- The first version of what will be an iterative process for Amazon
- A reference design. Developers are expected to build the proof of concept with this, later shifting to their own form factor – I don’t see this specific device getting sold to end customers as a final product
In a way, this is just a more polished hardware version of Google’s computer vision kit. The real difference comes with the available tooling and workflow that Amazon baked into AWS DeepLens.
AWS DeepLens SoftwareThe AWS DeepLens software is where things get really interesting.
Before we get there, we need to understand a bit about how machine learning works. At its most basic, machine learning is about giving a "machine" a large dataset, letting it learn the data in one way or another, and then when you introduce similar new data, it will be able to classify it.
Dumbing down the whole process and theory, at the end of the day, machine learning is built out of two main steps:
- TRAINING: You take a large set of data and use it for training purposes. You curate and classify it so the training process has something to check itself against. Then you pass the data through a process that ends up generating a trained model. This model is the algorithm we will be using later
- DEPLOY: When new data comes in (in our case, this will probably be an image or a video stream), we use our trained model to classify that data or even to run an algorithm on the data itself and modify it
With AWS DeepLens, the intent is to run the training in the AWS cloud (obviously), and then run the deployment step for real time classification directly on the AWS DeepLens device. This also means that we can run this while being disconnected from the cloud and from any other network.
How does all this come to play in AWS DeepLens software stack?
On deviceOn the device, AWS DeepLens runs two main packages:
- AWS Greengrass Core SDK – Greengrass enables running AWS Lambda functions directly on devices. If Lambda is called serverless, then Greengrass can truly run serverless
- Device optimized MXNet package – an Apache open source project for machine learning
Why MXNet and not TensorFlow?
- TensorFlow comes from Google, which makes it less preferable for Amazon, a direct cloud competitor – and less preferable for Intel as well (see below)
- MXNet is considered faster and more optimized at the moment. It uses less memory and less CPU power to handle the same task
The main component here is the new Amazon SageMaker:
SageMaker takes away the effort of managing machine learning training, streamlining the whole process. That last Deploy step takes place in this case directly on AWS DeepLens.
Besides SageMaker, when using DeepLens you will probably make use of Amazon S3 for storage, Amazon Lambda when running serverless in the cloud, as well as other AWS services. Amazon even suggests using AWS DeepLens along with the newly announced Amazon Rekognition Video service.
To top it all, Amazon has a few pre-trained models and sample projects, shortening the path from getting a hold of an AWS DeepLens device to seeing it in action.
AWS+Intel vs GoogleSo we’ve got AWS DeepLens. With its set of on-device and cloud software tools. Time to see what that means in the bigger picture.
I’d like to start with the main players in this story. Amazon, Intel and Google. Obviously, Google wasn’t part of the announcement. Its TensorFlow project was mentioned in various places and can be made to work with AWS DeepLens. But that’s about it.
Google is interesting here because it is THE company today that is synonymous to AI. And there’s the increasing rivalry between Amazon and Google that seems to be going on multiple fronts.
When Google came out with TensorFlow, it was with the intent of creating a baseline for artificial intelligence modeling that everyone will be using. It open sourced the code and let people play with it. That part succeeded nicely. TensorFlow is definitely one of the first projects developers would try to dabble with when it comes to machine learning. The problem with TensorFlow seems to be the amount of memory and CPU it requires for its computations compared to other frameworks. That is probably one of the main reasons why Amazon decided to place its own managed AI services on a different framework, ending up with MXNet which is said to be leaner with good scaling capabilities.
Google did one more thing though. It created its own special Tensor processing unit, calling it TPU. This is an ASIC type of a chip, designed specifically for high performance of machine learning calculations. In a research paper released by Google earlier last year, they show how their TPUs perform better than GPUs when it comes to TensorFlow machine learning work loads:
And if you're wondering – you can get a Cloud TPU on the Google Cloud Platform, albeit this is still in alpha stage.
This gives Google an advantage in hosting managed TensorFlow jobs, posing a threat to AWS when it comes to AI heavy applications (which is where we’re all headed anyway). So Amazon couldn’t really pick TensorFlow as its winning horse here.
Intel? They don't sell TPUs at the moment. And like any other chip vendor, they are banking on and investing heavily in AI. Which made partnering with AWS on optimizing and building end-to-end machine learning solutions for the internet of things, in the form of AWS DeepLens, an obvious choice.
Artificial Intelligence and VisionThese days, it seems that every possible action or task is being scrutinized to see if artificial intelligence can be used to improve it. Vision is no different. You may find it called computer vision or machine vision, and it covers a broad set of capabilities and algorithms.
Roughly speaking, there are two types of use cases here:
- Classification – with classification, the image or video stream is analyzed to find certain objects or things. From being able to distinguish certain objects, through person and face detection, to face recognition, and on to activity and intent recognition
- Modification – AWS DeepLens Artistic Style Transfer example is one such scenario. Another one is fixing the nagging direct eye contact problem in video calls (hint – you never really experience it today)
As with anything else in artificial intelligence and analytics, none of this is workable at the moment for a broad spectrum of classifications. You need to be very specific in what you are searching and aiming for, and this isn’t going to change in the near future.
On the other hand, there are many many cases where what you need is a camera to classify a very specific and narrow vision problem. The usual things include person detection for security cameras, counting people at an entrance to a store, etc. There are other areas you hear about today such as using drones for visual inspection of facilities and robots being more flexible in assembly lines.
We’re at a point where we already have billions of cameras out there. They are in our smartphones and are considered a commodity. These cameras and sensors are now headed into a lot of devices to power the IOT world and allow it to “see”. The AWS DeepLens is one such tool that just happened to package and streamline the whole process of machine vision.
PricingOn the price side, the AWS DeepLens is far from a cheap product.
The baseline cost of an AWS DeepLens camera? $249
But as with other connected devices, that’s only a small part of the story. The device is intended to be connected to the AWS cloud and there the real story (and costs) takes place.
The two leading cost centers after the device itself are going to be AWS Greengrass and Amazon SageMaker.
AWS Greengrass starts at $1.49 per year per device. Amazon SageMaker costs 20-25% on top of the usual AWS EC2 machine prices. To that, add the usual bandwidth and storage pricing of AWS, higher prices for certain regions, and discounts on large quantities.
It isn’t cheap.
This is a new service that is quite generic and is aimed at tinkerers. Startups looking to try out and experiment with new ideas. It is also the first iteration of Amazon with such an intriguing device.
I, for one, can’t wait to see where this is leading us.
3 Different Compute Models for Machine VisionAWS DeepLens is one of 3 different compute models that I see in this space of machine vision.
Here are all 3 of them:
#1 – CloudIn a cloud based model, the expectation is that the actual media is streamed towards the cloud:
- In real time
- Or at some future point in time
- When events occur, like motion being detected or sound picked up on the mic
The data can be a video stream, or more often than not, it is just a set of captured images.
And that data gets classified in the cloud.
Here are two recent examples from a domain close to my heart – WebRTC.
At the last Kranky Geek event, Philipp Hancke shared how appear.in is trying to determine NSFW (Not Safe For Work):
The way this is done is by using Yahoo's Open NSFW open source package. They had to resize images, send them to a server, and there, using Python, classify the image, determining if it is safe for work or not. Watch the video – it really is insightful on how to tackle such a project in the real world.
The other one comes from Chad Hart, who wrote a lengthy post about connecting WebRTC to TensorFlow for machine vision. The same technique was used – one of capturing still images from the stream and sending them towards a server for classification.
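The frame-grabbing part of both approaches is plain browser code. Here is a hedged sketch of the idea – the classification endpoint and input size are made-up placeholders, not what appear.in or Chad actually used:

```typescript
// Grab a still frame from a live <video> element (local or remote WebRTC stream),
// downscale it on a canvas and POST it to a classification endpoint.
async function classifyFrame(video: HTMLVideoElement): Promise<void> {
  const canvas = document.createElement("canvas");
  canvas.width = 224; // a typical input size for image classification models
  canvas.height = 224;
  const ctx = canvas.getContext("2d")!;
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);

  const blob: Blob = await new Promise((resolve) =>
    canvas.toBlob((b) => resolve(b!), "image/jpeg", 0.8)
  );

  // Hypothetical endpoint - replace with your own classification server
  const res = await fetch("https://classifier.example.com/nsfw", {
    method: "POST",
    headers: { "Content-Type": "image/jpeg" },
    body: blob,
  });
  console.log("classification:", await res.json());
}

// e.g. sample a frame every few seconds:
// setInterval(() => classifyFrame(document.querySelector("video")!), 5000);
```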
These approaches are nice, but they have their challenges:
- They are gravitating towards still images and not video streams at the moment. This relates to the costs and bandwidth involved in shipping and then analyzing such streams on a server. To give you an understanding of the costs – using Amazon Rekognition for one minute of video stream analysis costs $0.12. For a single minute. It is high, and the reason is that it really does require some powerful processing to achieve
- Sometimes, you really need to classify and make faster decisions. You can’t wait that extra 100’s of milliseconds or more for the classification to take place. Think augmented reality type of scenarios
- At least with WebRTC, I haven’t seen anyone who figured how to do this classification on the server side in real time for a video stream and not still images. Yet
#2 – In the DeviceThis alternative is what we have today in smartphones and probably in modern room based video conferencing devices.
The camera is just the optics; the heavy lifting takes place in the main processor, which is doing other things as well. Most modern CPUs today already have GPUs embedded as part of the SoC, and chip vendors are actively working on AI specific additions to their chips (think Apple's AI chip in the iPhone X or Google's computational photography packed into the Pixel phones).
The underlying concept here is that the camera is always tethered or embedded in a device that is powerful enough to handle the machine learning algorithms necessary.
They aren’t part of the camera but rather the camera is part of the device.
This works rather well, but you end up with a pricy device which doesn’t always make sense. Remember that our purpose here is to aim at having a larger number of camera sensors deployed and having an expensive computing device attached to it won’t make sense for many of the use cases.
#3 – In the CameraThis is the AWS DeepLens model.
The computing power needed to run the classification algorithms is made part of the camera instead of taking place on another CPU.
We’re talking about $249 right now, but assuming this approach becomes popular, prices should go down. I can easily see such devices retailing at $49 on the low end in 2-3 technology cycles (5 years or so). And when that happens, the power developers will have over what use cases can be created are endless.
Think about a home surveillance system that costs below $1,000 to purchase and install. It is smart enough to have a lot less false positives in alerting its users. AND can be upgraded in its classification as time goes by. There can be a service put in place behind it with a monthly fee that includes such things. You can add face detection and classification of certain people – alerting you when the kids come home or leave for example. Ignoring a stray cat that came into view of the camera. And this system is independent of an external network to run on a regular basis. You can update it when an external network is connected, but other than that, it can live “offline” quite nicely.
No Winning ModelYet.
All of the 3 models have their place in the world today. Amazon just made it a lot easier to get us to that third alternative of “in the camera”.
IoT and the CloudEdge computing. Fog computing. Cloud computing. You hear these words thrown in the air when talking about the billions of devices that will comprise the internet of things.
For IoT to scale, there are a few main computing concepts that will need to be decided sooner rather than later:
- Decentralized – with so many devices, IoT services won’t be able to be centralized. It won’t be around scale out of servers to meet the demands, but rather on the edges becoming smarter – doing at least part of the necessary analysis. Which is why the concept of AWS DeepLens is so compelling
- On net and off net – IoT services need to be able to operate without being connected to the cloud at all times. Think of an autonomous car that needs to be connected to the cloud at all times – a no go for me
- Secured – it seems like the last thing people care about in IoT at the moment is security. The many data breaches and the ease at which devices can be hijacked point that out all too clearly. Something needs to be done there and it can’t be on the individual developer/company level. It needs to take place a lot earlier in the “food chain”
I was reading The Meridian Ascent recently. A science fiction book in a long series. There’s a large AI machine there called Big John which sifts through the world’s digital data:
“The most impressive thing about Big John was that nobody comprehended exactly how it worked. The scientists who had designed the core network of processors understood the fundamentals: feed sufficient information to uniquely identify a target, and then allow Big John to scan all known information – financial transactions, medical records, jobs, photographs, DNA, fingerprints, known associates, acquaintances, and so on.
But that’s where things shifted into another realm. Using the vast network of processors at its disposal, Big John began sifting external information through its nodes, allowing individual neurons to apply weight to data that had no apparent relation to the target, each node making its own relevance and correlation calculations.”
I’ve emphasized that sentence. To me, this shows the view of the same IoT network looking at it from a cloud perspective. There, the individual sensors and nodes need to be smart enough to make their own decisions and take their own actions.
–
All these words for a device that will only be launched in April 2018…
We’re not there yet when it comes to IoT and the cloud, but developers are working on getting the pieces of the puzzle in place.
Interested in AI, vision and where it meets communications? I am going to cover this topic in future articles, so you might want to sign-up for my newsletter
The post AWS DeepLens and the Future of AI Cameras and Vision appeared first on BlogGeek.me.
How Many Users Can Fit in a WebRTC Call?
As many as you like. You can cram anywhere from one to a million users into a WebRTC call.
You’ve been asked to create a group video call, and obviously, the technology selected for the project was WebRTC. It is almost the only alternative out there and certainly the one with the best price-performance ratio. Here’s the big question: How many users can we fit into that single group WebRTC call?
Need to understand your WebRTC group calling application backend? Take this free video mini-course on the untold story of WebRTC’s server side.
At least once a week I get approached by someone saying WebRTC is peer-to-peer and asking me if you can use it for larger groups, as the technology might not fit for such use cases. Well… WebRTC fits well into larger group calls.
You need to think of WebRTC as a set of technological building blocks that you mix and match as you see fit, and the browser implementation of WebRTC is just one building block.
The most common building block today in WebRTC for supporting group video calls is the SFU (Selective Forwarding Unit): a media router that receives media streams from all participants in a session and decides who to route that media to.
What I want to do in this article, is review a few of the aspects and decisions you’ll need to take when trying to create applications that support large group video sessions using WebRTC.
Analyze the ComplexityThe first step in our journey today will be to analyze the complexity of our use case.
With WebRTC, and real time video communications in general, it will all boil down to speeds and feeds:
- Speeds – the resolution and bitrate we’re expecting in our service
- Feeds – the stream count of the single session
Let’s start with an example.
Assume you want to run a group calling service for the enterprise. It runs globally. People will join work sessions together. You plan on limiting group sessions to 4 people. I know you want more, but I am trying to keep things simple here for us.
The illustration above shows you how a 4-participant conference would look.
Magic Squares: 720pIf the layout you want for this conference is the magic squares one, we’re in the domain of:
You want high quality video. That’s what everyone wants. So you plan on having all participants send out 720p video resolution, aiming for WQHD monitors (that’s 2560×1440). Say that eats up 1.5Mbps (I am stingy here – it can take more), so:
- Each participant in the session sends out 1.5Mbps and receives 3 streams of 1.5Mbps
- Across 4 participants, the media server needs to receive 6Mbps and send out 18Mbps
Summing it up in a simple table, we get:
- Resolution: 720p
- Bitrate: 1.5Mbps
- User outgoing: 1.5Mbps (1 stream)
- User incoming: 4.5Mbps (3 streams)
- SFU outgoing: 18Mbps (12 streams)
- SFU incoming: 6Mbps (4 streams)
Magic Squares: VGAIf you're not interested in resolution that much, you can aim for VGA resolution and even limit bitrates to 600Kbps:
- Resolution: VGA
- Bitrate: 600Kbps
- User outgoing: 0.6Mbps (1 stream)
- User incoming: 1.8Mbps (3 streams)
- SFU outgoing: 7.2Mbps (12 streams)
- SFU incoming: 2.4Mbps (4 streams)
The thing you may want to avoid when going VGA is the need to upscale the resolution on the display – it can look ugly, especially on the larger 4K displays.
With crude back of the napkin calculations, you can potentially cram 3 VGA conferences for the “price” of 1 720p conference.
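If you want to play with these numbers yourself, the napkin math is easy to capture in a few lines. A sketch for the magic squares layout, assuming every participant sends one stream and receives everyone else's:

```typescript
// Back-of-the-napkin speeds and feeds for a "magic squares" layout routed via an SFU.
interface SfuLoad {
  userOutgoingMbps: number;
  userIncomingMbps: number;
  sfuIncomingMbps: number; // streams arriving at the SFU
  sfuOutgoingMbps: number; // streams the SFU forwards out
  sfuStreamsOut: number;
}

function magicSquaresLoad(participants: number, bitrateMbps: number): SfuLoad {
  const incomingPerUser = participants - 1;
  return {
    userOutgoingMbps: bitrateMbps,
    userIncomingMbps: incomingPerUser * bitrateMbps,
    sfuIncomingMbps: participants * bitrateMbps,
    sfuOutgoingMbps: participants * incomingPerUser * bitrateMbps,
    sfuStreamsOut: participants * incomingPerUser,
  };
}

console.log(magicSquaresLoad(4, 1.5)); // the 720p example above
console.log(magicSquaresLoad(4, 0.6)); // the VGA example above
```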
Hangouts StyleBut what if our layout is a bit different? A main speaker and smaller viewports for the other participants:
I call it Hangouts style, because Hangouts is pretty known for this layout and was one of the first to use it exclusively without offering a larger set of additional layouts.
This time, we will be using simulcast, with the plan of having everyone send out high quality video and the SFU deciding which incoming stream belongs to the dominant speaker – forwarding that one at a higher resolution while the rest are forwarded at lower resolutions.
You will be aiming for 720p, because after a few experiments, you decided that lower resolutions when scaled to the larger displays don’t look that good. You end up with this:
- Each participant in the session sends out 2.2Mbps (that's 1.5Mbps for the 720p stream and an additional ~700Kbps for the lower resolutions you'll be simulcasting with it)
- Each participant in the session receives 1.5Mbps from the dominant speaker and 2 additional incoming streams of ~300Kbps for the smaller video windows
- Across 4 participants, the media server needs to receive 8.8Mbps and send out 8.4Mbps
- User outgoing: 2.2Mbps (1 stream, simulcast)
- User incoming: 1.5Mbps (1 stream) + 0.3Mbps (2 streams)
- SFU outgoing: 8.4Mbps (12 streams)
- SFU incoming: 8.8Mbps (4 streams)
This is what we have learned:
Different use cases of group video with the same number of users translate into different workloads on the media server.
And if it wasn’t mentioned specifically, simulcast works great and improves the effectiveness and quality of group calls (simulcast is what we used in our Hangouts Style meeting).
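For completeness, here is roughly what enabling simulcast looks like with the spec's sendEncodings API. Treat it as a sketch – browser support for this API was still in flux at the time of writing, and many deployments achieve the same thing through SDP munging instead:

```typescript
// A sketch of publishing three simulcast layers (identified by rid) on a video track.
async function publishWithSimulcast(pc: RTCPeerConnection) {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { width: 1280, height: 720 },
    audio: true,
  });
  const [videoTrack] = stream.getVideoTracks();

  pc.addTransceiver(videoTrack, {
    direction: "sendonly",
    streams: [stream],
    sendEncodings: [
      { rid: "q", scaleResolutionDownBy: 4, maxBitrate: 150_000 }, // thumbnail
      { rid: "h", scaleResolutionDownBy: 2, maxBitrate: 500_000 }, // medium
      { rid: "f", maxBitrate: 1_500_000 },                         // full 720p
    ],
  });
  // The SFU then forwards the "f" layer for the dominant speaker and the
  // "q"/"h" layers for the smaller viewports.
}
```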
Across the 3 scenarios we depicted here for 4-way video call, we got this variety of activity in the SFU:
- SFU outgoing: 18Mbps (Magic Squares: 720p), 7.2Mbps (Magic Squares: VGA), 8.4Mbps (Hangouts Style)
- SFU incoming: 6Mbps (Magic Squares: 720p), 2.4Mbps (Magic Squares: VGA), 8.8Mbps (Hangouts Style)
Here’s your homework – now assume we want to do a 2-way session that gets broadcasted to 100 people over WebRTC. Now calculate the number of streams and bandwidths you’ll need on the server side.
How Many Users Can be Active in a WebRTC Call?That’s a tough one.
If you use an MCU, you can get as many users on a call as your MCU can handle.
If you are using an SFU, it depends on 3 different parameters:
- The level of sophistication of your media server, along with the performance it has
- The power you’ve got available on the client devices
- The way you’ve architected your infrastructure and worked out cascading
We’re going to review them in a sec.
Same Scenario, Different ImplementationsAnything above 8-10 users in a single call becomes complicated. Here's an example of a publicly available service I want to share here.
The scenario:
- 9 participants in a single session, magic squares layout
- I use testRTC to get the users into the session, so it is all automated
- I run it for a minute. After that, it kills the session since it is a demo
- It takes into account that with 9 people on the screen, resolutions can be reduced to VGA for everyone, but it still allocates 1.3Mbps for that resolution
- Leading to the browsers receiving 10Mbps of data to process
The media server decided here how to limit and gauge traffic.
And here’s another service with an online demo running the exact same scenario:
Now the incoming bitrate on average per browser was only 2.7Mbps – almost a fourth of the other service.
Same scenario. Different implementations.
What About Some Popular Services?What about some popular services that do video conferencing in an SFU routed model? What kind of size restrictions do they put on their applications?
Here’s what I found browsing around:
- Google Hangouts – up to 25 participants in a single session. It was 10 in the past. When I did my first-ever office hour for my WebRTC training, I maxed out at 10, which got me to start using other services
- Hangouts Meet – placed its maximum number at 50 participants in a single session
- Houseparty – decided on 8 participants
- Skype – 25 participants
- appear.in – their PRO accounts support up to 12 participants in a room
- Amazon Chime – 16 participants on the desktop and up to 8 participants on iOS (no Android support yet)
Does this mean you can’t get above 50?
My take on it is that there’s an increasing degree of difficulty as the meeting size increases:
The CPaaS Limit on SizeWhen you look at CPaaS platforms, those supporting video and group calling often have limits to their meeting size. In most cases, they give out an arbitrary number they have tested against or are comfortable with. As we’ve seen, that number is suitable for a very specific scenario, which might not be the one you are thinking about.
In CPaaS, these numbers vary from 10 participants to 100's of participants in a single session. Usually, if you can go higher, the additional participants will be view-only.
Key Points to RememberFew things to keep in mind:
- The larger the group size, the more complicated it is to implement and optimize
- The browser needs to run multiple decoders, which is a burden in itself
- Mobile devices, especially older ones, can be brought down to their knees quite quickly in such cases. Test on the oldest, puniest devices you plan on supporting before determining the group size to support
- You can build the SFU in a way that it doesn’t route all incoming media to everyone but rather picks partial data to send out. For example, maybe only a single speaker on the audio channels, or the 4 loudest streams
Sizing and media servers is something I have been doing lately at testRTC. We’ve played a bit with Kurento in the past and are planning to tinker with other media servers. I get this question on every other project I am involved with:
How many sessions / users / streams can we cram into a single media server?
Given what we’ve seen above about speeds and feeds, it is safe to say that it really really really depends on what it is that you are doing.
If what you are looking for is group calling where everyone’s active, you should aim for 100-500 participants in total on a single server. The numbers will vary based on the machine you pick for the media server and the bitrates you are planning per stream on average.
If what you are looking for is a broadcast of a single person to a larger audience, all done over WebRTC to maintain low latency, 200-1,000 is probably a better estimate. Maybe even more.
Big Machines or Small Machines?Another thing you will need to address is on which machines are you going to host your media server. Will that be the biggest baddest machines available or will you be comfortable with smaller ones?
Going for big machines means you’ll be able to cram larger audiences and sessions into a single machine, so the complexity of your service will be lower. If something crashes (media servers do crash), more users will be impacted. And when you’ll need to upgrade your media server (and you will), that process can cost you more or become somewhat more complicated as well.
The bigger the machine, the more cores it will have. Which results in media servers that need to run in multithreaded mode. Which means they are more complicated to build, debug and fix. More moving parts.
Going for small machines means you’ll hit scale problems earlier and they will require algorithms and heuristics that are more elaborate. You’ll have more edge cases in the way you load balance your service.
Scale Based on Streams, Bandwidth or CPU?How do you decide that your media server achieved full capacity? How do you decide if the next session needs to be crammed into a new machine or another one or be placed on the current media server you’re using? If you use the current one, and new participants want to join a session actively running in this media server, will there be room enough for them?
These aren’t easy questions to answer.
I've seen 3 different metrics used to decide when to scale out from a single media server to others. Here are the general alternatives:
Based on CPU – when the CPU hits a certain percentage, it means the machine is “full”. It works best when you use smaller machines, as CPU would be one of the first resources you’ll deplete.
Based on Bandwidth – SFUs eat up lots of networking resources. If you are using bigger machines, you probably won't hit the CPU limit, but you'll end up eating too much bandwidth. So you'll end up determining the available capacity by way of bandwidth monitoring.
Based on Streams – the challenge sometimes with CPU and Bandwidth is that the number of sessions and streams that can be supported may vary, depending on dynamic conditions. Your scaling strategy might not be able to cope with that and you may want more control over the calculations. Which will lead to you sizing the machine using either CPU or bandwidth, but placing rules in place that are based on the number of streams the server can support.
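Whichever metric you pick, the allocation logic itself ends up looking something like the sketch below. The thresholds are invented placeholders – the real numbers only come out of load testing your specific media server on your specific machine type:

```typescript
// A simplified scale-out decision combining the three metrics discussed above.
interface ServerStats {
  cpuPercent: number;
  bandwidthMbps: number;
  streamCount: number;
}

// Placeholder limits - derive real ones from load testing
const LIMITS = { cpuPercent: 75, bandwidthMbps: 800, streamCount: 500 };

function hasRoomForSession(stats: ServerStats, extraStreams: number): boolean {
  return (
    stats.cpuPercent < LIMITS.cpuPercent &&
    stats.bandwidthMbps < LIMITS.bandwidthMbps &&
    stats.streamCount + extraStreams <= LIMITS.streamCount
  );
}

// Allocate on the first server with headroom, otherwise scale out (-1).
function pickServer(servers: ServerStats[], extraStreams: number): number {
  return servers.findIndex((s) => hasRoomForSession(s, extraStreams));
}
```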
–
The challenge here is that whatever scenario you pick, sizing is something you’ll need to be doing on your own. I see many who come to use testRTC when they need to address this problem.
Cascading a Single SessionCascading is the process of connecting one media server to another. The diagram below shows what I mean:
We have a 4-way group video call that is spread across 3 different media servers. The servers route the media between them as needed to get it connected. Why would you want to do this?
#1 – Geographical DistributionWhen you run a global service and have SFUs as part of it, the question that is raised immediately is: for a new session, which SFU will you allocate for it? In which of the data centers? Since we want to get our media servers as close as possible to the users, we either have pre-knowledge about the session and know where to allocate it, or decide by some reasonable means, like geolocation – we pick the data center closest to the user that created the meeting.
Assume 4 people are on a call. 3 of them join from New York, while the 4th person is from France. What happens if the French guy joins first?
The server will be hosted in France. 3 out of 4 people will be located far from the media server. Not the best approach…
One solution is to conduct the meeting by spreading it across servers closest to each of the participants:
We use more server resources to get this session served, but we have a lot more control over the media routes, so we can optimize them better. This improves media quality for the session.
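The "closest data center per participant" decision can start out as simple as a distance calculation. A naive sketch with made-up region coordinates – real deployments tend to rely on GeoIP databases, anycast or latency probing instead:

```typescript
// Pick the nearest region for each participant using great-circle (haversine) distance.
const REGIONS = [
  { name: "us-east", lat: 39.0, lng: -77.5 },
  { name: "eu-west", lat: 53.3, lng: -6.3 },
  { name: "ap-southeast", lat: 1.35, lng: 103.8 },
];

function haversineKm(lat1: number, lng1: number, lat2: number, lng2: number): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLng = toRad(lng2 - lng1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLng / 2) ** 2;
  return 6371 * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}

function closestRegion(userLat: number, userLng: number): string {
  return REGIONS.reduce((best, r) =>
    haversineKm(userLat, userLng, r.lat, r.lng) <
    haversineKm(userLat, userLng, best.lat, best.lng) ? r : best
  ).name;
}

console.log(closestRegion(48.85, 2.35)); // a participant in Paris -> "eu-west"
```

Each participant then gets attached to their closest SFU, and the SFUs cascade the session between them.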
#2 – Fragmented AllocationsAssume that we can connect up to 100 participants in a single media server. Furthermore, every meeting can hold up to 10 participants. Ideally, we won’t want to assign more than 10 meetings per media server.
But what if I told you the average meeting size is 2 participants? It can get us to this type of an allocation:
This causes a lot of wasted server resources. How can we solve that?
- By having people commit in advance to the maximum meeting size. Not something you really want to do
- Taking a risk: assume that if you allocate new meetings only up to 50% of a server's capacity, the rest of the capacity can be left for existing meetings to grow into. You still have wasted resources, but to a lower degree. There will be edge cases where you won't be able to fill out the meetings due to server resources
- Migrating sessions across media servers in an effort to “defragment” the servers. It is as ugly as it sounds, and probably just as disrupting to the users
- Cascade sessions. Allow them to grow across machines
That last one of cascading? You can do that by reserving some of a media server’s resources for cascading existing sessions to other media servers.
#3 – Larger MeetingsAssuming you want to create larger meetings than a single media server can handle, your only choice is to cascade.
If your media server can hold 100 participants and you want meetings at the size of 5,000 participants, then you’ll need to be able to cascade to support them. This isn’t easy, which explains why there aren’t many such solutions available, but it definitely is possible.
Mind you, in such large meetings, the media flow won’t be bidirectional. You’ll have fewer participants sending media and a lot more only receiving media. For the pure broadcasting scenario, I’ve written a guest post on the scaling challenges on Red5 Pro’s blog.
RecapWe’ve touched a lot of areas here. Here’s what you should do when trying to decide how many users can fit in your WebRTC calls:
- Whatever meeting size you have in mind it is possible to support with WebRTC
- It will be a matter of costs and aligning it with your business model that will make or break that one
- The larger the meeting size, the more complex it will be to get it done right, and the more limitations and assumptions you’ll need to add to the equation
- Analyze the complexity you need to support
- Count the incoming and outgoing streams to each device and media server
- Decide on the video quality (resolution and bitrate) for each stream
- Define the media server you’ll be using
- Select a machine type to run the media server on
- Figure out the sizing needed before you reach scale out
- Check if the growth is linear on the server’s resources
- Decide if you scale out based on bandwidth, CPU, streams count or anything else
- Figure how cascading fits into the picture
- Use it to offer better geolocation support
- Use it to deal with resource fragmentation on the cloud infrastructure
- Or use it to grow meetings beyond a single media server’s capacity
What’s the size of your WebRTC meetings?
Need to understand your WebRTC group calling application backend? Take this free video mini-course on the untold story of WebRTC’s server side.
The post How Many Users Can Fit in a WebRTC Call? appeared first on BlogGeek.me.
7 CPaaS Trends to Follow in 2018
Here are CPaaS trends you should be expecting this year.
There's no doubt about it. CPaaS is growing and it is doing so rapidly. It is a multi-billion dollar industry, and while still small, there's no sign of its growth stopping anytime soon. You'll see the numbers $4 billion and $8 billion a year appearing in different reports and estimates that are flying around when talking about the near future of the CPaaS market size and growth potential. I have no clue if the numbers are correct – I've never been one to play with estimates.
What I do know, is that we’ve got multiple CPaaS vendors now with ARR (Annual Run Rate) higher than $100 million. Most of it may still come from good old SMS and phone calls, but I think this will change along with how consumers communicate.
This change will make CPaaS a lot more interesting and diversified than the boring race to the bottom that seems to be prevalent in some of the players’ offering and messaging in this market. The problem with CPaaS today is twofold:
- SMS and voice are somewhat commoditized. There are only so many ways in which you can send and receive SMS and phone calls over phone numbers; we exhausted them – and how to express them in a simple API for developers to use – years ago. Since then, the game we played was one of scalability, stability and price points
- Developers are resistant to paying for IP based communications services at the moment. They somehow believe that these are a lot easier to develop. While that is correct for the “hello world” implementation, once you need to provide long term maintenance and scalability capabilities this can grow into a huge headache – especially when you couple this with some of the trends in communication that are being introduced
Which brings me to what you can expect in 2018. Here are 7 CPaaS trends that will grow and become important this year – and more importantly – what they mean.
Planning on selecting a CPaaS vendor? Check out this shortlist of CPaaS vendor selection metrics:
#1 – ServerlessServerless is also known as Functions.
You might know about serverless from AWS Lambda, Azure Functions, Google’s Cloud Functions and Apache’s OpenWhisk. The list here isn’t random – it goes to show that all big cloud platforms are now offering serverless capabilities.
This still isn’t prevalent in CPaaS, where for the most part, developers are expected to develop, maintain and operate their own servers that communicate with the CPaaS vendor’s infrastructure. But we do see signs of serverless making its way here.
I’ve covered that last year, when I took a deeper look into the Twilio Functions offering and what that means to the CPaaS market.
At the time, Twilio stated that Functions is already Twilio’s fastest growing product ever. Here’s where they explain what it does:
Twilio being the market leader in CPaaS, and Functions being a fast growing product of theirs means that other CPaaS vendors will follow. Simply because demand here is obvious.
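To give a feel for what this looks like in practice, here is a tiny function in the shape Twilio documents for its Functions runtime (context, event, callback), responding to an inbound SMS. Treat the details as a sketch and check the current docs before relying on them:

```typescript
// The Twilio Functions runtime (Node.js) provides "Twilio" as a global and expects
// a handler of the shape (context, event, callback). Sketch only.
declare const Twilio: any;

exports.handler = function (context: any, event: any, callback: any) {
  // event carries the incoming webhook parameters, e.g. Body for an inbound SMS
  const twiml = new Twilio.twiml.MessagingResponse();
  twiml.message(`You said: ${event.Body || "(nothing)"}`);
  // No server to provision, deploy or patch - the CPaaS vendor hosts and scales this
  callback(null, twiml);
};
```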
#2 – OmnichannelWhen SMS just isn’t enough.
Not sure when you last used SMS for personal reasons – I know that I rarely end up inside that app on my smartphone. The way things are going, SMS can be considered the spam channel of 2018. Or maybe the channel used by businesses who’ve been told that this is the best way to reach customers and interrupt them.
While I definitely see value in SMS, I also think that businesses should strive to communicate with their customers on other channels – channels their users are now focusing on with their social life. In Israel that would be Whatsapp. In the US probably a mixture of Facebook and iMessage will work better. Telegram would be the choice for Russia.
Whatever that channel is, to support it, someone needs to integrate with it. And then decide which channel to use for which customer and for what interaction. For CPaaS, that’s what Omnichannel is about. Enabling developers, and by extension businesses to communicate with their customers on the customer’s preferred channel.
2018 is going to be the year Omnichannel becomes a serious requirement.
Why?
Because now we can actually use it.
Apple’s own Business Chat service is planned to make its public debut this year.
Facebook has its own APIs already, and Whatsapp announced business accounts (=APIs).
That alone covers a large majority of customer bases.
Throw in SMS, mix and choose the ones you want. And voila! Omnichannel.
For businesses, relying on CPaaS for omnichannel makes sense, as the hassle of adding all of these channels and maintaining them is expensive. Omnichannel CPaaS APIs will abstract that away.
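What would such an abstraction look like from the developer's side? Here is a purely hypothetical sketch – none of these names map to an actual CPaaS vendor's API – just to illustrate the channel fallback idea:

```typescript
// Hypothetical omnichannel abstraction: try the customer's preferred channels in order.
type Channel = "sms" | "whatsapp" | "messenger" | "imessage";

interface OutboundMessage {
  to: string;                   // phone number or channel-specific user id
  body: string;
  preferredChannels: Channel[]; // ordered by the customer's preference
}

interface ChannelProvider {
  channel: Channel;
  send(to: string, body: string): Promise<boolean>; // true on success
}

async function sendOmnichannel(msg: OutboundMessage, providers: ChannelProvider[]) {
  for (const channel of msg.preferredChannels) {
    const provider = providers.find((p) => p.channel === channel);
    if (provider && (await provider.send(msg.to, msg.body))) return channel;
  }
  throw new Error("No channel could deliver the message");
}
```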
For CPaaS vendors, this is a way to differentiate and make switching between vendors harder.
A win-win.
The ones offering that already? Nexmo with their Chat App and Twilio through their Engagement Cloud.
#3 – Visual / IDEFrom code, to REST, to point-and-click.
We used to use DOS as an "operating system". I worked at a small computer shop as a kid growing up. For a couple of years, my role was to go to people's homes and explain to them how to use the new computer they just purchased. How to put the DOS disk inside the floppy drive, list the files on a floppy, run games and other applications.
Then came Windows (along with Mac and OS/2 and others) and we all just moved to using a visual operating system and a mouse.
As a kid, I programmed using Logo and Basic. Then Turbo Pascal – in a decent IDE for the first time. In university, I got acquainted with Tcl/Tk. And then UI development seemed fun. Even if it was by writing code by hand. Then one day, vtcl came to life – a visual editor. Things got easier.
Developing communications is taking the same path now.
It started by needing to build your own stuff from scratch, then with open source frameworks and later CPaaS and REST (or god forbid SOAP) APIs.
In 2017, Twilio Studio was announced – a visual IDE to use on top of the Twilio functionality. In that corner, you can also count Amazon Connect, though not CPaaS but still in the domain of communications – it has a visual IDE of its own.
In a recent VoxImplant event I was invited to speak at in Russia, VoxImplant introduced a new service in beta called Smartcalls – a visual IDE on top of their CPaaS offering. Albeit… in Russian.
The concept of using visual tools requiring less coding can greatly increase productivity and the target audience of these tools. They are no longer restricted to developers “who code”. Hell – I can use these tools. I played with Twilio Studio a bit – it was fun and intuitive. It guides the way you think about what needs to be done. About the flow of the service.
I really can’t see how other CPaaS vendors are going to ignore this trend and not work on their own visual offerings during 2018.
#4 – Machine Learning and Artificial IntelligenceIt is time to be smart about communications
When I worked at Amdocs some years ago, we looked into the area of Big Data Analytics. It was all about how you take the boatloads of information telecommunication companies have and do something with it. You start by analyzing and visualizing it, moving towards making it actionable.
It frustrated the hell out of me to understand how little communication vendors are doing with their data compared to enterprises in other markets. Or at least that was my impression looking from inside a vendor.
Fast forward to today, and what you find with CPaaS vendors is that they are offering a well oiled machine that provides generic communications. You can do whatever you want with it, and the smart ones are adding analytics on top for their own needs.
But what about the CPaaS vendors themselves? Shouldn’t they be doing something about analytics? Or its better branded colleague known as machine learning?
Gustavo Garcia wrote a good article about it – improving real time communications with machine learning. This is where most CPaaS vendors are probably looking today, optimizing their network to offer a better service.
But it is just scratching the surface.
The obvious is adding things around NLP – speech to text, text to speech, translation. All those are being done by integrating with third parties today, and many of the CPaaS vendors offer these out of the box.
To move the needle and differentiate, more needs to be done:
- The internal structure of the CPaaS vendors should take into account the need for researching data. Data scientists and machine learning people have to be part of the development and product teams for this to ever happen
- CPaaS vendors need to start thinking on what they can offer by analyzing their own data (and their customer’s communications) beyond just optimizing it
If you are a CPaaS vendor and you don’t have at least a data scientist, a machine learning developer and a product manager savvy in this domain yet, then start recruiting.
#5 – AR/VRTime to connect ARKit and ARCore to communications.
Augmented reality and virtual reality have been around for the better part of the last decade or two. But somehow, they are only now becoming interesting.
I guess the popularity of AR has grown a lot, and where it fits directly in smartphones today (and not the bulky 3D headsets) is with things like Pokemon Go and camera filters (popularized by Snapchat and found everywhere today).
With the introduction of Apple ARKit and Google ARCore, this is only going to get more commonplace. And what we see now is CPaaS vendors finding their way around this technology.
The most interesting one yet is Twilio’s work with ARKit, which they showcased at last year’s Kranky Geek event:
With all the focus put in this domain, I am sure we’ll see more CPaaS vendors looking into it.
#6 – BotsOmnichannel + Machine Learning + Automation = Bots
Chat bots are all the rage. Search the internet and you’ll be thinking that humans no longer talk to customers. It is all taken care of by bots.
I’ve added a chat widget to certain pages on my website. And every once in a while I get a question there asking if that’s a human they’re interacting with.
Bots require integration and APIs. They are also about communications. Which is probably why CPaaS vendors are taking a step in this direction as well. The ones adding Omnichannel offerings are in effect enabling bots to be created across all of those channels.
That’s a first step though, as the next would be to cater to this market better by enabling conversational interfaces and easing the work of packaging the bots for the various channels.
Expect to see a few announcements around bots made by CPaaS vendors this year. A lot of it will revolve around Amazon Alexa and Google Home.
#7 – GDPRThe governance headache we’ve all been waiting for.
GDPR stands for General Data Protection Regulation. It is a new set of EU rules that have been put in place to protect the data related to EU citizens that is collected and stored.
While it is easy to assume that CPaaS vendors store no data – they “live” in real time – that isn’t accurate.
Stored metadata and logs may fall under GDPR, and recording services definitely do. With the introduction of Omnichannel and Bots comes chat history storage.
Twilio jumped on this bandwagon last year with a GDPR program. Other vendors such as MessageBird indicated future support of GDPR. All global CPaaS vendors will need to support GDPR, and since these regulations come into force this year, 2018 will be the year GDPR gets more attention and focus from CPaaS vendors.
2018 – The Year CPaaS Vendors DifferentiatedIn the past few years, we’ve seen CPaaS vendors struggling in two directions:
- Increasing their customer base, mainly around SMS and voice offerings – which is where most of the revenue is these days
- Growing from a telecom focused player to a global player
That second point is important. Up until recently, CPaaS equated to running one or two data centers (or the equivalent of running from a small number of cloud based data centers), connecting developers via REST APIs to the telecom backend. With the introduction of IP based communications (and WebRTC), there was a growing need for client side SDKs along with more points of presence closer to the end user.
We seem to be past that hurdle for most CPaaS vendors. Most of them have grown their footprint to include a global infrastructure.
The next frontier is going to happen elsewhere:
- Serverless – in making the services easier for developers to adopt by reducing the requirement for customers to deploy their own machines
- Omnichannel – extending the reach beyond the telecom channels of SMS and voice into social networks
- Visual / IDE – grow the service beyond developers, making it easier to use and faster to deploy with
- Machine Learning and Artificial Intelligence – add intelligence and analytics based services
- AR/VR – capture the new world of augmented and virtual reality and enhance it with communications
- Bots – align with the A2P model of businesses communicating with customers through automation
- GDPR – provide support for the new EU initiative, adding governance and regulation as another added value of choosing CPaaS instead of in-house development
CPaaS will move at a rapid pace in the next few years. Vendors who won’t invest and grow their offerings and business will not stay with us for long.
Planning on selecting a CPaaS vendor? Check out this shortlist of CPaaS vendor selection metrics:
Get the shortlist
The post 7 CPaaS Trends to Follow in 2018 appeared first on BlogGeek.me.
What is WebRTC adapter.js and Why do we Need it?
adapter.js is the glue that sticks your code to the different browser implementations of WebRTC.
This article was co-written with Philipp Hancke. He has been the driving force behind adapter.js in the last two years, so it seemed like the best approach to have him contribute large portions of it. You can follow his writing here.
One of the visuals I created when I started out with WebRTC was this one:
It had several incarnations, and the main concept here is to show how WebRTC is different than traditional VoIP.
With traditional VoIP, you have multiple vendors implementing the specification, in hopes (as well as active interoperability testing) that the implementations will work in front of each other. If you knew one VoIP implementation, it said nothing about your ability to work with another.
WebRTC was different. It brought to the table the concept of free, but also HTML5; and by that, I mean having a single API that every developer can use to add interactive voice and video to his application.
getUserMedia, PeerConnection and the data channel are all APIs specified in WebRTC. We’re now all speaking the same language when we’re implementing applications. And that, in turn, creates an ecosystem around it. One that was never there with such force with traditional VoIP.
Problem is, you can think of the WebRTC API as a suggestion only. That’s because today, version 1.0 of the specification isn’t yet a reality. We’ve got a candidate for it, but that says nothing about the implementations. Browser implementations of WebRTC are more like dialects of the same language. When you speak one, you understand another, but not fully. Not its nuances. And bad things can happen if two people with different dialects try to talk to each other without patience or understanding.
Which is probably where adapter.js comes into play.
Before we ask ourselves if adapter.js is needed today (it is), it would be worthwhile to understand how it came to be.
adapter.js Origin Storyadapter.js has been around since the early days of WebRTC in late 2012 and early 2013. It was originally part of Google’s apprtc sample application. The original version can still be found in the Chrome tree. It was a very small project, less than 150 lines. The main job was to hide prefix differences like webkitRTCPeerConnection and mozRTCPeerConnection and to provide helper functions to attach a MediaStream to an HTML <audio> or <video> element.
During those wild west days of WebRTC, everyone wrote their own library to make WebRTC easier. This started to change in mid-2015 when Microsoft Edge came along. While Edge did not require prefixes for getUserMedia, attaching the MediaStream to a video element still worked in three different ways in as many implementations. This showed that there was a need to move to standardized behaviour. Also, as Microsoft’s Bernard Aboba pointed out, books were printed that showed the prefixed versions of the APIs — which is the wrong thing to teach.
Preferring ORTC over the WebRTC 1.0 API, Microsoft was extremely happy to support the addition of a shim of the RTCPeerConnection API on top of ORTC. This enabled early interoperability tests and allowed ironing out some bugs before the first public ORTC-enabled Edge version.
MS showing love for our #webrtc polyfill (adapter.js) and sample code https://t.co/YhHstGjQps
(thanks @HCornflower) pic.twitter.com/qPzwZEA3VK
— Justin Uberti (@juberti) April 4, 2016
A bit later, Promise support was added to adapter.js. Moving to Promises was one of the first big changes in the WebRTC specification and while Firefox had been adding them swiftly, Chrome was lagging behind. At that point, the “mission statement” for adapter changed. Instead of just trying to fill the gaps it became an enabler, allowing developers to write modern WebRTC Javascript. Mozilla’s Jan-Ivar Bruaroey recognized that and started contributing more elaborate pieces like a shim for the getUserMedia constraints.
When Safari started shipping WebRTC they contributed a shim for the “legacy” bits of the WebRTC API that they did not want to ship. This was an interesting attempt to get developers to write modern, promise-based WebRTC code. However, it does not seem to have worked out, as sadly the release version shipped with the legacy API enabled by default.
With growing complexity (currently over 2,200 lines of code) and being in the “hot path”, testing of changes to the adapter.js code itself became more of an issue. Initially powered by Selenium, the tests have been split up into unit tests and end-to-end tests that use standard testing tools like karma, mocha and chai to make assertions while running in a multitude of browsers on Travis-CI for every pull request, comparing the results to previous runs. This shows the state of the art for testing WebRTC libraries and has been adopted by other projects as well.
During much of 2017, the main focus was on shimming the track-based API in Chrome. This is one of the bigger pieces of the move toward the WebRTC 1.0 API, described in this blog post by Mozilla and it was in adapter.js as well. The tests proved useful to ensure the consistency of the API which is particularly tricky since existing code might rely on certain interactions with the legacy API and that API (along with the interactions) is not specified. As is usual with large changes, there were a number of regressions — however, it is much better to discover those regressions in a javascript library where the version can be pinned than to have Chrome ship them natively. Early in 2018, Chrome 64 will become stable and the native addTrack version will take over from the shimmed variant. Note: addTrack turned out not to be quite ready for production yet due to a bug related to getStats. The shim will continue to be preferred until Chrome M65 — make sure your adapter version is updated after that change.
adapter.js TodayFor a quick and dirty project you can simply include https://webrtc.github.io/adapter/adapter-latest.js in your code.
This will give you the latest published version. Note however that your application will automatically pull any changes so this is not recommended for larger applications.
The main source of adapter.js downloads is NPM. In most Javascript projects, you install webrtc-adapter as follows:
npm install webrtc-adapter

Note: Since adapter.js is manipulating the core WebRTC javascript APIs, upgrading it is somewhat risky. Therefore it is recommended to keep the exact version specified in your package.json file and to test a lot when upgrading that version.
To use it, just require the module in one of your javascript files:
const adapter = require('webrtc-adapter');

Since it is a polyfill, it transparently modifies the window object by default. The adapter object gives you information about the browser variant and version it detected in the browserDetails object:
console.log(adapter.browserDetails.browser);
console.log(adapter.browserDetails.version);
This is slightly different from a version detection library like platform as it treats Chromium-based browsers like Opera as Chrome — since they run the same WebRTC engine, that makes sense.
You can use the detected browser and version to add your own logic for working around bugs present in certain Chrome versions (e.g. the Chrome 61/Android video freeze or the Chrome 58 TURN/TCP issue).
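Here is a minimal sketch of what such a workaround gate could look like. The version check and the workaround body are illustrative assumptions, not actual fixes:

```javascript
// Illustrative sketch only: gate an application-level workaround on the
// browser and version that adapter.js detected. The exact version number
// and the workaround itself are placeholders.
const adapter = require('webrtc-adapter');

function needsTurnTcpWorkaround() {
  return adapter.browserDetails.browser === 'chrome' &&
         adapter.browserDetails.version === 58; // e.g. the Chrome 58 TURN/TCP issue
}

if (needsTurnTcpWorkaround()) {
  // ...apply your application-specific workaround here...
}
```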
To check WebRTC support you will need to check that RTCPeerConnection is defined:
!!window.RTCPeerConnection

and, if your use-case requires it, getUserMedia:

!!(navigator.mediaDevices && navigator.mediaDevices.getUserMedia)

or the createDataChannel method of the RTCPeerConnection:

'createDataChannel' in RTCPeerConnection.prototype

After that you can simply write your WebRTC code as shown in the specification:
http://w3c.github.io/webrtc-pc/#simple-peer-to-peer-example
The official WebRTC samples are a great way to get started as they show a lot of use-cases and the maintainers ensure that they are semantically correct. Most of the shims are written in such a way that they will not be active when the native variant is available.
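To give a feel for what that spec-style code looks like, here is a minimal, promise-based sketch of a loopback call between two peer connections in the same page. Element IDs and constraints are illustrative; adapter.js is what keeps this behaving consistently across browsers:

```javascript
const adapter = require('webrtc-adapter'); // shims the window object as a side effect

async function loopbackCall() {
  const pc1 = new RTCPeerConnection();
  const pc2 = new RTCPeerConnection();

  // Both ends live in the same page, so candidates can be handed over directly
  pc1.onicecandidate = e => e.candidate && pc2.addIceCandidate(e.candidate);
  pc2.onicecandidate = e => e.candidate && pc1.addIceCandidate(e.candidate);

  // Render the remote stream once a track arrives (the track-based API)
  pc2.ontrack = e => {
    document.getElementById('remoteVideo').srcObject = e.streams[0];
  };

  // Capture local media and add its tracks to the sending peer connection
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  stream.getTracks().forEach(track => pc1.addTrack(track, stream));

  // The standard offer/answer exchange, promise-based as in the specification
  await pc1.setLocalDescription(await pc1.createOffer());
  await pc2.setRemoteDescription(pc1.localDescription);
  await pc2.setLocalDescription(await pc2.createAnswer());
  await pc1.setRemoteDescription(pc2.localDescription);
}
```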
Moving ForwardThere are 4 forces at play with adapter.js:
- The WebRTC specification itself. This is what we expect and suggest developers build against.
- The browser’s implementation of WebRTC. At the moment, this is lagging behind the WebRTC specification and will take time to catch up. Until that time, use of adapter.js is suggested (you can write your own, but why bother maintaining it?)
- The adapter.js implementation, where you’ll need to keep an eye on newer versions, adopt them and test against them
- Your own implementation, and how it interacts with the other 3 forces
Will a day come when we no longer need adapter.js?
Definitely.
But don’t wait up for it.
If the lifespan of jQuery is any indication (11 years and still going strong, with the last 4 of them accompanied by articles on why we don’t need jQuery), we will be using adapter.js for many years to come.
The post What is WebRTC adapter.js and Why do we Need it? appeared first on BlogGeek.me.
10 Massive Applications Using WebRTC
WebRTC is… everywhere.
WebRTC started some 6 years ago. It was just another VoIP protocol specification that happened to be targeted at browsers.
Six years in, and now WebRTC is everywhere. There are still those who believe it has failed, or that it hasn’t lived up to its expectations. I’d say the vendors who failed to adopt it are the ones that have failed.
How do I know?
It has to do with those that are using it. Here are 10 massive applications that are making use of WebRTC. These companies trust WebRTC to offer them the leverage they need to deliver the user experience they strive for.
Looking for more vendors using WebRTC? Here are 10 interviews with inspiring vendors using WebRTC.
Download the eBook
What’s Massive in WebRTC Land?Before we start though, I want to say a word about what massive is.
It is really hard to know what’s massive. How do you count it? Especially when none of the vendors are willing to share their numbers in meaningful ways here.
So let’s do a back-of-the-napkin kind of calculation here for a sec –
In the recent Kranky Geek event, Google shared in their session an interesting statistic:
Over 1.5 billion of weekly audio/video minutes.
That’s easily upwards of 214 million minutes a day.
And that’s only on Chrome.
This number does not include:
- Other browsers. Today that means Firefox, Edge and Safari
- Usage through plugins. Which covers Internet Explorer
- Electron and CEF based applications. And there are a few very popular ones I can think of
- Mobile applications, making use of WebRTC
- Those who take the bits and pieces of WebRTC that they need, integrate it with their service, and then just make use of it (not always with proper attribution)
So the numbers are larger. Much larger.
The Google Machine and its LeftoversBack to that more than 214 million minutes a day.
During March 2017, Serge Lachapelle, the person in charge of WebRTC in the past and now of Google Hangouts and Meet, shared some numbers about video conferencing at Google during Google Cloud Next 2017:
9+ years daily translates to over 4.7 million minutes daily.
That’s the amount of use Google makes internally of Hangouts.
It is safe to assume that external use of non-Googlers can double that number with little effort to over 9 million minutes a day.
And continuing this lenient calculation, Hangouts accounts for 4-5% of all voice and video traffic in WebRTC.
Consider here the fact that I counted Hangouts over multiple devices, browsers and applications while comparing it to Chrome-only numbers, so I am fudging here a bit. On the other hand, I took non-Googlers to account for only half the usage, which is probably way too little.
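For those who want to check the math, here is the back-of-the-napkin calculation behind those numbers (the doubling for non-Googlers is the same lenient assumption as above):

```javascript
// "9+ years daily" of internal Google video calls, expressed in minutes per day
const googleInternalMinutesPerDay = 9 * 365 * 24 * 60;          // ~4.7 million
// Lenient assumption: non-Googlers double that usage
const hangoutsMinutesPerDay = googleInternalMinutesPerDay * 2;  // ~9.5 million
// Google's Kranky Geek number: 1.5 billion weekly audio/video minutes on Chrome
const chromeWebrtcMinutesPerDay = 1.5e9 / 7;                    // ~214 million

console.log((googleInternalMinutesPerDay / 1e6).toFixed(1));                       // "4.7"
console.log((hangoutsMinutesPerDay / chromeWebrtcMinutesPerDay * 100).toFixed(1)); // "4.4" percent
```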
Anyways, let’s look at the 10 massive applications that are already using WebRTC.
1. Google Meet and Google Hangouts9+ years daily. Inside Google alone.
Google Meet (or more accurately, Hangouts) is most probably one of the main reasons we have WebRTC.
Google had their own video conferencing service, working from Gmail, but it needed a plugin. Real time video just wasn’t there in the browser, which is where and why WebRTC started. And it started with a contribution by Google which we now know as webrtc.org.
To date, Google Meet (or Hangouts), is a massive application that makes use of WebRTC.
2. Facebook MessengerHere’s something I wrote some 5 years ago. It is about Skype vs Facebook. Here’s how I phrased it then:
Facebook can adopt WebRTC and provide a calling experience that surpasses most VoIP players.
The rest of the analysis then is kinda funny. Facebook did end up adopting WebRTC wholeheartedly into Messenger, but none of my suggestions were implemented (which in hindsight was probably for the best).
Here’s where Facebook have integrated WebRTC already:
- Messenger – video chat and group video chat, mobile and browser
- Facebook Live – when co-broadcasting
- VR Chat – video calls in Oculus
- Then there’s Workplace by Facebook and Instagram Live Video Chat
All using WebRTC. I am even ignoring WhatsApp here (not sure what parts of WebRTC they use exactly).
At the recent Kranky Geek, we had Li-Tal Mashiach of Facebook talk about what it is they are doing with WebRTC and how they scale their service.
No minutes here, but 400 million people using WebRTC every month. That’s 13+ million people a day on average. With only a minute each this is already massive.
3. DiscordI came across Discord and its use of WebRTC in July 2016. That’s when I added them to my dataset, through a message I saw on Facebook somewhere. As any other vendor that gets into my radar, I continued to follow them closely.
Discord is a social platform for gamers (for lack of a better term). They have been around for only 2.5 years. This month, they shared a few numbers. Specifically:
Nothing here about voice and video, but I do know that the numbers here are impressive.
4. Amazon ChimeAmazon Chime is new to the scene of unified communications and already big.
Chime started with an acquisition only a year ago, of a company called Biba. It was probably already well underway to become a replacement for Amazon’s own internal video conferencing services. At Amazon’s re:invent event last month, Amazon shared a few numbers of how they use Chime internally:
24.8 million minutes a month. That’s almost a million minutes a day. From Amazon’s internal meetings only. Not including any of their Chime customers.
Not as massive as the others, but still quite large.
One thing to note – this isn’t “pure” WebRTC. Amazon took the approach of supporting legacy video conferencing systems first, so they “did” something to WebRTC to make it work. Their roadmap for next year is to add direct browser access for users as well. What we do know is that this uses WebRTC technologies inside today already.
Oh… and I didn’t even mention Amazon Connect, Alexa and Mayday – all making use of WebRTC.
5. HousepartyHouseparty is huge. Especially if you’re a teen. My daughter will probably start using it in a few years… once she grows out of Whatsapp and Musical.ly. Or so I’ve been told.
Houseparty makes use of WebRTC, although it is a mobile only service.
There aren’t many numbers going around about Houseparty this year, so I’ll stick to the ones we know from a year ago.
20 million minutes a day.
Enough said.
6. Appear.inAppear.in started as a summer internship project at Telenor Digital somewhere, growing up to this point in time. Today it got acquired by Videonor.
The service is a favorite of many in the WebRTC community (and elsewhere – they are doing million of minutes a day).
If you haven’t tried it yet, then you should: appear.in
And yes. It is in the league of the other vendors here when it comes to size.
7. GotomeetingThere are many traditional VoIP (interesting that VoIP can now be considered traditional) that have started adding WebRTC to their offerings.
Most can probably make it into this list of massive applications.
Out of them, I decided to choose GoToMeeting. Why? Because the integration they’ve done was quite a natural one. I’ve been using it for well over a year now whenever someone invited me into a meeting over GoToMeeting – in most cases, they weren’t even aware of the browser option.
8. Peer5I wanted to add a company that doesn’t do voice and video. Or rather ones that are making use of the WebRTC’s data channel.
The one I picked here was Peer5. It was the easiest for me to get numbers from (I am an advisor there).
The P2P CDN scene is getting quite interesting lately. Alongside the startups like Peer5 that are pushing the envelope we now see companies like Akamai who stated publicly that they are headed this way with WebRTC as well.
In this year’s Kranky Geek event, Hadar Weiss, Co-founder and CEO of Peer5, shared a few of their numbers:
1 billion connections a day is large. Compared to millions of minutes a day. But we have to remember – a lot of these connections are short-lived in nature (viewers reaching out to peers they might stream data from or to) and that the more interesting number, which isn’t publicly available yet, is about actual data traffic.
9. CPaaS vendorsCPaaS vendors drive this industry forwards. They do so for the smaller vendors as well as the largest ones.
Need examples?
In 2016, Twilio claimed to process “more than a billion minutes of WebRTC calls made through Twilio” as part of their launch of Voice Insights.
TokBox has stated this year that they power social video apps including Monkey, Houseparty, Fam and live.ly.
And they are not alone with it. There are 20+ such vendors catering to the needs of other developers.
Some of the CPaaS vendors can definitely be considered massive when it comes to the WebRTC traffic they generate.
10. Back to you nowI most definitely forgot a vendor or two here.
Scroll down and comment below with your 10th candidate for the massive application using WebRTC.
WebRTC is Still MinusculeLet’s look at some other engagement metrics out there.
Netflix shared their numbers for the year this month:
Netflix members around the world watched more than 140 million hours per day
Hours. Not minutes. In minutes? That’s 8.4 billion minutes a day. For a single vendor. Compared to WebRTC’s 214 million minutes a day on Chrome.
I’d say WebRTC has room to grow.
Here’s for a bigger 2018.
Looking for more vendors using WebRTC? Here are 10 interviews with inspiring vendors using WebRTC.
Download the eBook
The post 10 Massive Applications Using WebRTC appeared first on BlogGeek.me.
WebRTC API Platform Pricing is… Complicated
Are you doing your WebRTC pricing per minute? per gigabyte? per device?
You’re a developer. You decide it is time to build an application. But you don’t really want to do everything from scratch. Hell – you don’t even want to maintain and update all of that media backend – what do you really know about video? So you go look for someone to do it for you, finding a nice set of vendors offering WebRTC PaaS services. You can easily plug into their SDK and in no time have your service do group calling.
You probably won’t be conquering the world as the next Whatsapp with such an approach, but getting that healthcare service, education application or visual contact center up and running is now within easy reach.
And you won’t be alone in this either. About a third of the dataset of vendors using WebRTC that I am tracking is using third parties. Most of them use managed services.
But here comes the question. Do you know how much you’re going to pay for that WebRTC PaaS service?
I get requests to assist in vendor selection on a weekly basis. This has been going on for a few years now. This year, one of the main focus areas in this process has been pricing. Or more accurately, understanding the pricing schemes of the different vendors, and comparing their costs.
There’s no easy way to get that done…
Why?
- Because vendors have different pricing models
- Because you need to fully understand your scenario
- Because it just isn’t straightforward
Let’s review the 3 leading pricing parameters that are going to dictate your costs:
MinutesThis one may seem easy.
You are going to pay for the number of minutes you use in a service.
It should be easy to calculate. Easy to understand the value (the more you use the more you pay).
But somehow, people translate minutes to the “old” days of telecom, where you paid top dollars to make phone calls. By the minute of course.
The devil is in the details here.
Here are few differences you’ll see between vendors.
- Is there a minimum allowance of minutes? In many cases, a baseline monthly fee will be requested. That monthly fee will include pre-calculated minutes that you can use. They will usually be priced at their cost value. This is:
- Seriousness fee. You pay so the vendor will spend the time necessary in answering your nagging support questions
- Signal to customers. If that fee is high (hundreds of dollars or more), it is meant to signal you they are interested in businesses with money to spend – probably enterprises: “we’re taking only premium customers”. The alternative of very low monthly fee indicates a stance of “we cater all developers and happy to embrace the long tail”
- Reduce noise. Non-paying “free-tier” customers are noise. Lots and lots of noise. They ask the most questions, and usually these questions (and demands) won’t lead to a sale anyway. So vendors put some built-in must-pay price point to filter out the free riders who probably won’t help their bottom line anyway
- Flat rate? Tiered? Pre-commit? Call us? Different vendors use different methods to offer better price points (discounts) based on usage. Here’s what I’ve seen vendors do:
- Flat rate. There’s a single price point. Take it or leave it. You just take the number, multiply it by the minutes and voila! You get your costs. It always comes with text saying that high volume pricing is available
- Tiered. First X minutes are free (included in the plan). Next Y minutes come at a certain price. The Z following minutes are at a lower price point, and so on. Later minutes cost you less
- Pre-commit. Commit in advance (and pay) for a certain number of minutes. If you pass that number, the low price point you already committed to will continue to apply
- Call us. Almost always there in all plans. For big enough customers, we will negotiate deals suitable for both sides
- What gets counted? Saying the price is per minute is nice, but what are these minutes counted against? Here are a few examples:
- Actual media minutes. This is a common approach. You got an SDK of the vendor connected to a session, the time starts ticking
- Connected devices. Then there’s the approach of connected devices. You are connected – you pay. Even if you send or receive nothing. This isn’t a common approach, but it does exist when the price per minute is low and combined with bandwidth payment (see below). It can also be tiered
- Subscriptions. See below
The great thing about minutes? They are easy to comprehend and count.
If you have 10 people in a call for 10 minutes – that’s 100 minutes (assuming we count per device here).
The downside is that with minutes, there’s usually less regard to what is done in that minute. A video minute is the same as a voice minute on most platforms when it comes to pricing. And a low resolution video minute is the same as a high resolution video minute.
SubscriptionsSubscriptions are related to minutes, and deal with the question of what it is you count the minutes against.
The two most common practices here are to count devices or to count subscriptions.
Some of the WebRTC PaaS services work off the notion of a publish subscribe mechanism. Devices can publish media streams into a session, and devices can subscribe to media streams from the session. This is an elegant approach that can nicely be used when describing a complex scenario with asymmetric behaviors.
In an SFU group video call model, where each user publishes his own media streams and subscribes to the media streams of all other participants, the number of subscriptions grows at a polynomial rate: with N active users in a session, you’ll be counting N*(N-1) subscribed media streams.
In WebRTC PaaS, paying per subscribed minutes tends to be cheaper than paying per device minutes for lower group sizes (and vice versa)
It makes sense for a vendor to apply a per-subscription price as in many cases, its own costs are probably tightly coupled with the number of media subscriptions in the system.
Subscriptions are slightly harder to count than devices, but they still give you a solid number and an easy estimate.
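Here is a rough sketch of how the two counting models differ for a single SFU group call where everyone publishes and subscribes to everyone else (this only counts the units you would be billed on – price points are deliberately left out):

```javascript
// Per-device counting: every connected participant accrues minutes
function deviceMinutes(participants, durationMinutes) {
  return participants * durationMinutes;
}

// Per-subscription counting: each of N participants subscribes to the other N-1 streams
function subscriptionMinutes(participants, durationMinutes) {
  return participants * (participants - 1) * durationMinutes;
}

console.log(deviceMinutes(10, 10));        // 100 device-minutes
console.log(subscriptionMinutes(10, 10));  // 900 subscription-minutes
```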
BandwidthThe main complaint about per minute pricing is that it is a reminder of the old telecom days. The notion was that once we go for VoIP, cloud, web, WebRTC or whatever you want to call it, you can price it closer to the usage and not stay at the high level of a minute concept.
If you use AppRTC, Google’s “hello world” implementation of WebRTC, you can easily get 2.5mbps in each direction at 720p or full HD resolution using VP8. Audio only? That would normally take 40kbps:
If it was limited only to the difference between audio and video then so be it. Give two price points per minute and you’re done. But video is different. It becomes more of a hassle with video. You can probably get video going with as little as 300kbps with 10-20mbps being applicable to 4K video resolutions. That’s not including things like 360 videos and other crazy trends like 8K or 10K resolutions that were just added to the HDMI spec.
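To get a feel for the data volumes involved, here is a back-of-the-napkin sketch that turns a per-stream bitrate into gigabytes per minute – the unit a per-bandwidth pricing model ends up billing on. The bitrates follow the numbers above and are rough assumptions:

```javascript
// Convert a stream bitrate (in kbps) into gigabytes per minute of media
function gigabytesPerMinute(bitrateKbps) {
  const kilobitsPerMinute = bitrateKbps * 60; // seconds in a minute
  return kilobitsPerMinute / 8 / 1024 / 1024; // kilobits -> kilobytes -> GB
}

console.log(gigabytesPerMinute(2500).toFixed(3)); // ~0.018 GB for an HD video minute
console.log(gigabytesPerMinute(40).toFixed(4));   // ~0.0003 GB for an audio minute
```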
So vendors are now looking into taking the route that is so common in IaaS – pricing per bandwidth processed.
Usually, that would be subscribed bandwidth. The reason is that cloud services usually charge the vendor for the bandwidth it sends to browsers and mobile devices and not for the bandwidth it receives on its cloud servers.
Here are a few quick things to validate in these pricing schemes:
- Is price calculated on subscribed bandwidth only or on both send and receive?
- If media gets routed towards the vendor (recording or SFU usually) AND the session needs to be relayed via a TURN server, do you pay for both the TURN related traffic AND the server processing traffic?
Note that if you’re doing peer-to-peer sessions (that means doing a 1-on-1 session where you don’t want media to go through the vendor’s servers), you won’t be paying for bandwidth at all – unless the media gets relayed via TURN. TURN relay depends on network conditions and can’t be estimated properly (highly reliant on your users), but a rule of thumb of 15-20% of the sessions is usually used here.
Paying per bandwidth will tend to be cheaper than by minute. The reason is that the end result will be tailored to your exact usage pattern. That said, there are several downsides here:
- It is usually hard to estimate in advance, as translating minutes of use to bandwidth isn’t straightforward
- Different services will give different bitrates for seemingly the same service (I am working for a customer now, looking into the differences across many group video services, and it is devilishly hard to find commonality across the applications)
- It is harder to calculate than the rest, and it usually also involves per-minute counting alongside the bandwidth calculation
Going for this IaaS type of a model is a great way to lower price points for customers, but at the same time it is a great way of handing them a huge headache.
At testRTC, I’ve been trying for some time now with my colleagues there to estimate what our costs are/should be. How much will we end up paying our IaaS vendors every month? It is so hard that I usually can’t even understand the detailed invoices we receive at the end of each month. I fear that the same is/will occur with per-bandwidth pricing in WebRTC PaaS.
Where Do We Go From Here?In the latest update to my WebRTC PaaS report I’ve included a new appendix explaining pricing models in this space.
But the coolest thing yet was the inclusion of a new tool – a price calculator.
It is probably the 4th or 5th that I’ve created in 2017, each with its own nuances, target use cases and complexities.
This one was meant to be as generic and as simple as possible.
You enter the expected number of sessions you plan to have on a monthly basis, the number of users and the bandwidth per stream (there are a few suggested values in there).
Then you enter the pricing model and the price points of the vendors you want to compare, and the result will be the expected monthly cost you’ll have for each vendor.
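To illustrate the kind of calculation such a tool performs, here is a simplified sketch for a per-subscribed-minute pricing model. All the inputs and the price point are made-up examples, not real vendor numbers:

```javascript
// Estimate a monthly bill under a per-subscribed-minute pricing model
function monthlyCost({ sessionsPerMonth, usersPerSession, minutesPerSession, pricePerSubscribedMinute }) {
  const subscribedStreams = usersPerSession * (usersPerSession - 1); // SFU: everyone subscribes to everyone else
  const subscribedMinutes = sessionsPerMonth * subscribedStreams * minutesPerSession;
  return subscribedMinutes * pricePerSubscribedMinute;
}

console.log(monthlyCost({
  sessionsPerMonth: 1000,
  usersPerSession: 4,
  minutesPerSession: 30,
  pricePerSubscribedMinute: 0.004, // made-up price point
})); // 1440 – the expected monthly cost in this example
```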
Need something a bit more tailored? Reach out to me and I’ll help you out.
The post WebRTC API Platform Pricing is… Complicated appeared first on BlogGeek.me.
My WebRTC PaaS Report: December Release
This latest update of my WebRTC PaaS report brings with it new vendors as well as a new price calculator.
It is becoming a ritual. Every 8 months or so I update the WebRTC PaaS (or CPaaS) report.
Every time I am surprised by the changes that occur. They come in 4 different areas:
- There are new vendors joining this market
- There are old vendors leaving this market
- There are changes in the feature set of existing vendors already covered in the report
- There are new trends that need to be covered
How did we do since last time?
New Vendors Covered ECLWebRTC by NTT CommunicationsI’ve been watching the work done by NTT Communications for quite some time. It started as a project that has signaling capabilities in it. At the time, they called it SkyWay.
Later on, they developed and added an SFU into the mix.
In September 2017 they decided to open up their platform globally. That’s the point where it made sense to add them to the report.
PhenixPhenix has been an enigma to me in the past two years.
From afar, it looked like a vendor trying to go after the broadcast market with a low latency technology based on WebRTC. Recently they approached me to explain what it is that they do and to check if it fits into this report.
And it did.
Phenix is focused on the large scale interactive streaming sessions. Places where you want to pick one or a few broadcasters and have their interactions shared with a larger audience.
Vendors Closing DoorsWe had those as well.
Tropo by CiscoAcquisitions of WebRTC CPaaS vendors are sometimes beneficial and sometimes terrible for their customers.
TokBox’ acquisition by Telefonica was a good thing.
Tropo’s acquisition by Cisco… not so much.
Two years after its acquisition, Tropo closed doors to new customers. The signs were out there, since the platform didn’t really evolve. The service is still up and running, but I don’t think Tropo customers are happy to be using Tropo right now, and I don’t think Tropo/Cisco are happy about needing to serve these customers. A lose-lose situation here.
Cisco simply pivoted. They decided that Tropo was not the right strategy and wanted to double down on Cisco Spark APIs and developer ecosystem.
forge by XuraForge is another sad story of our industry.
Starting life as Crocodile RCS, it has been acquired by Acision. Acision was acquired by Comverse. Which got rebranded to Xura. Which was taken off the market by Siris Capital.
Forge, and probably other assets of Xura were just collateral damage in this process.
M&A and Pivots in WebRTC PaaS Apidaze acquired by VoIP InnovationsVoIP Innovations acquired Apidaze. This is a good signal for the platform’s health. Looking at the investment section of Apidaze’ 4-pager in my report shows the story:
A lot of the attention and focus was taken from Apidaze API platform and put towards Ottspot, a “slack business phone app”.
This acquisition by VoIP Innovations might mean a renewed focus on the Apidaze platform and the developers who use it.
TrueVoice is now VoxeetTrueVoice was added to the report earlier this year. At the time, Voxeet added it as another product offering. This time around, Voxeet is making the APIs the main product.
This caused the TrueVoice brand to be removed, and Voxeet to be the actual thing.
Building a platform for developers is an all consuming process. Larger companies might be able to cope with doing that in parallel to other activities, but the smaller vendors will struggle. The fact that Voxeet decided to pivot and focus on developers is a good sign.
Putting it all in a VisualHere’s what it means visually:
2 in. 2 out. A few minor changes elsewhere.
The report shows the transitions in this market since 2014.
What’s in the report?The report is quite long. It now contains 223 pages. This includes:
- The explanation of WebRTC from the point of view of someone who has a build vs buy decision to make
- KPIs to use in the selection process – and why they should matter to you
- Vendor sections (20 of them) – 4 pages per vendor
- Old vendors – to give an understanding of why they “left” the market, and maybe use it as signals to the existing vendors and their future stability
- Appendixes. 9 of them
Want to get a sneak peek into the report? You can check out these two PDF resources:
As you can see, this time, TokBox were kind enough to sponsor their 4-pager of the report and have it publicly available.
Here’s what Badri Rajasekar, TokBox CTO had to say:
2017 has been a big year for WebRTC. In what many considered a very significant piece of the puzzle, Apple announced support for WebRTC in Safari, finally allowing developers to use WebRTC on any browser platform. At the same time, we’ve seen a surge in adoption of live video communications driven in part by consumer demand. BlogGeek.me’s evaluation of this market is a valuable read for those looking for snapshot of this year’s trends in WebRTC.
Check out TokBox 4-pager from the report. You can expect to see 19 other such detailed profiles of the other vendors that the report covers.
Report ToolsThe report doesn’t come only as a “standalone” PDF file. You get access to a few additional tools:
- Price calculator – an Excel sheet designed to make it easier to estimate your costs using different vendors
- Online vendors comparison matrix – an online comparison matrix you can use to quickly validate which vendors offer the feature set and capabilities you need
- Vendor selection blueprint – an Excel sheet and Word workbook with a step-by-step guide on how to narrow down and score vendors for your application
- Presentation visuals – the presentation visuals from the report, easily available for use in your own internal or external presentations
There’s a ton more in the report, and work I do with vendors in this space – those offering such services, looking to offer such services or want to use these services.
Feel free to reach out to me or to enquire further about the report.
The post My WebRTC PaaS Report: December Release appeared first on BlogGeek.me.
The Makeup of a WebRTC API Platform
WebRTC API Platforms are different than the classic/legacy/common CPaaS.
As I am working on getting the final TBDs in my upcoming report update on Choosing a WebRTC API Platform, I wanted to share something that may seem obvious, but probably isn’t.
When talking about CPaaS, WebRTC brings with it something more than just accessibility from the browser.
Here’s the makeup of a CPaaS platform:
There’s backend telephony in there, built out of some VoIP server components, connected to the carriers to handle things like phone numbers and actual calling.
Developers connect to that backend via REST APIs, or some other form of scripting interface.
Latencies and wait times aren’t important for the most part, so the CPaaS vendor doesn’t need to be spread across the globe to provide the service. A couple of data centers for redundancy and some reduction in latencies is usually enough.
Here’s what a WebRTC API platform looks like:
There might or might not be REST APIs. They are important, but definitely aren’t the main way developers interact with the system. That’s done via the SDKs. The SDKs are wrappers around the REST APIs or some other interface (probably WebSocket based), allowing getting the actual media and processing it as part of the SDK – either in the browser or on a mobile device.
And then there’s the backend. Signaling and NAT traversal are rather mandatory. Without them, this won’t be a WebRTC API platform. In the majority of the cases, you’ll also have access to an SFU, allowing you to support group video calls. All that backend? Especially the media parts of NAT traversal and SFU? They have to be as close to the end user as possible, so these platforms often deploy globally, on all possible data centers of a cloud provider (think AWS or GCE) and sometimes running on multiple cloud providers to increase their reach.
The difference then?
- SDK that handles actual media processing; with less focus on REST APIs
- Globally spread backend, to reduce latencies
There’s a challenge selling to developers. They tend to underestimate the effort involved. And they usually prefer building new shiny toys than polishing and maintaining something that’s working. This is made worse by the seemingly “easy” fashion by which you can get a WebRTC peer-to-peer call happen inside a browser between two tabs. It gives the impression that developing and running WebRTC at scale is trivial.
Especially when you compare it to connecting to a phone number and dialing it. Doing this via an API is easy. But how do you go about dialing out a number on your own without the assistance of CPaaS? Is there a really simple example of this? Not really. This requires more than just programming – the value here is the accessibility to the phone network, which is considered a royal ongoing headache. So it is easy to outsource and to understand its value.
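To make the contrast concrete, here is roughly what dialing out to a phone number looks like through a CPaaS REST API – a sketch using Twilio’s Node.js helper library, with placeholder credentials, numbers and webhook URL:

```javascript
// Sketch only: place an outbound phone call via a CPaaS REST API (Twilio's Node.js helper)
const twilio = require('twilio');
const client = twilio('ACCOUNT_SID', 'AUTH_TOKEN'); // placeholders

client.calls
  .create({
    to: '+15551234567',                   // placeholder destination number
    from: '+15557654321',                 // placeholder CPaaS-provided number
    url: 'https://example.com/call-flow', // webhook returning the call instructions
  })
  .then(call => console.log(call.sid));
```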
Here’s how the thinking goes:
SDKs? Sure. We can write them.
Signaling? I found a project on github that looks popular enough.
NAT Traversal? Everyone’s already using coturn. Should be simple enough to get it up and running.
SFU? Just passing data around. Can be written in a weekend.
Will WebRTC API Platform vendors be able to overcome this challenge? How can this be explained to developers? There is a lot that goes into building such a platform. More than the mere initial technical hurdles.
Browsers are changing. There are now 4 of them that have “support” for WebRTC. That support is different between browsers. New browser versions break things that used to work before. The specification is being finalized now, but no browser supports it yet.
Media backends need to be maintained. Monitored. Updated. Secured. In an ongoing basis.
In the coming years we will see a shift from H.264 and VP8 video codecs to VP9, HEVC and/or AV1 video codecs. This will require additional investment in the infrastructure.
And still it is believed to be easy and simple.
It isn’t.
Planning on Launching Your Own WebRTC API Platform?If you are planning to launch your own WebRTC API Platform, then you should know what you’re up against.
In the past 4 years I’ve been looking at this market, analyzing it. Seeing it grow and mature. The report covers 20+ vendors offering WebRTC API Platforms. Most of them are active. A few died or got acquired and taken off the market.
One of the things to note is how new WebRTC API Platform vendors make their decision to launch their service. What do they decide to include in their initial launch? What do they use as differentiating factors from the existing players?
The space is rather crowded already, even if no clear winner exists yet.
Make sure to do your homework here. Understand what you’re up against and why should developers come to you and not to others. And plan for the long run.
Planning to Use a WebRTC API Platform?If you are at the build vs buy decision point, then think of the alternative costs of each approach. Also figure out your time to market for each, and the risk of failure. For new projects, I tend to suggest a platform instead of self development. It reduces risk and upfront costs, but more than that, it enables experimenting and proving the business before committing too much into the project.
If you decided to build on your own, make sure your reasoning is rock solid. If the only reason is cost, then I suggest you recalculate.
If you decided to buy into a platform instead, then pick a platform that fits your need. But make sure it is here to stay as much as you can – this market is dynamic and is bound to stay that way for a few more years.
The Report UpdateThe updated report will get published later this week.
If you want to learn more about it, just contact me.
The post The Makeup of a WebRTC API Platform appeared first on BlogGeek.me.
Are You Listed in the WebRTC Index?
WebRTC Index has been around for 3 years now. Are you listed?
I don’t remember whose idea it was, but by the end of 2014, I launched the WebRTC Index along with Amir Zmora.
The idea behind it was quite simple. We create a place where someone can come and publish his company and its services – assuming they are related to WebRTC. The list grew, and now stands at 250 published vendors.
What we also did, was make sure the site is sustainable (there’s work to be done to keep it up to date). We chose the sponsorship approach:
Vendors can be listed freely in the index, but if you are a sponsor, then you get a bit of extra juice. You appear on the main page as a sponsor, get listed first on relevant search results, and get a few more ways to express what it is you offer on your own page.
What the WebRTC Index turned out into is a place to search for relevant vendors to assist people in understanding the industry and to pick up someone to work with.
And here comes my question to you?
Are you listed in the WebRTC Index?
Go check – http://webrtcindex.com/
I’ll sit and wait here. In the dark. Next to the nameless virtual machine that is hosting this website of mine.
Not there? Then read on…
How can you join the WebRTC Index?The system is easy and works as a manual process.
- Go to https://webrtcindex.com and check if your company is already listed
- If it isn’t, then just press the red button saying “Add your company”:
- Fill out the Google Form you reached
- Wait a couple of days (a week tops – I promise) – until you get an email with your listing
It really is that simple.
And it is a free process – no need to pay anything to join the list.
So why wait?
The post Are You Listed in the WebRTC Index? appeared first on BlogGeek.me.
Is the Future of CPaaS Serverless?
Twilio isn’t the first CPaaS vendor to offer serverless. And it definitely won’t be the last. Expect serverless CPaaS offerings in the future.
When I started researching for my first WebRTC API platforms report, one of the vendors I looked at was Voximplant. One of the things they referred me to was something they call VoxEngine. As its web page describes it, it is “an application engine that runs your apps inside the VoxImplant cloud” = Serverless.
I liked the idea, but didn’t think much of it at the time. It was rather new anyway.
What is Serverless Computing?If you haven’t been following the API scene, then you might have missed the notion of serverless computing. It is a concept where the code you write gets executed by the cloud. Directly. No need to run your own OS, VM or whatever container. Write the code. And it runs. Magically.
If you look at the compute models of XaaS, here’s the picture you’ll probably find:
- If you use On Premise, then you’re in charge of EVERYTHING
- With IaaS, everything up to the operating system is something “someone else” is taking care of. Amazon, Google, Microsoft or someone else entirely
- Then there’s PaaS. With it, everything up to the runtime is something “done for you”. Your data and application are yours to worry about. You connect with the runtime via APIs (not always, but quite common)
- SaaS is just getting the whole thing out of the box. Not our worry here
Where would Serverless fit in?
With Serverless, you write the “Application” but it and its data get handled and maintained by someone else.
What do you gain out of it?
- Scalability – you no longer need to care about it. Someone else now does that for you. You wrote the core logic of what you want to achieve, and the platform hosting your code is the one that needs to sweat it when it comes to scaling the thing as needed
- Maintenance – less code means you have less to maintain. And you’re shedding here all the boring work of getting the thing to work. In a way, you’re writing the initial prototype, and have it run in production
- Security – assuming the PaaS vendor handles the headache of security well, then you have less to deal with here
- Time to market – less to write also means faster time to market. It will take you less time to get that application in the hands of customers
- Latency – since the code runs directly on top of the PaaS APIs, on the servers of the same vendor, there’s a lot less latency involved in the API calls. Might be important, or might not be – just a fact
What do we have here then? Economies of scale at play. The vendor doing PaaS is already handling scalability, maintenance and security for you and a lot of other customers, so theoretically, they are doing and can do a better job of it than you can in the long run. This frees you up to focus more on the user experience, ending with a better application and faster time to market. And there’s the added benefit of where the code is running (closer to the rest of the code).
Serverless = FunctionsWhile Serverless is the popular name, there’s another one that has been coined – FaaS – Functions as a Service; which then made it into the names of many of these products: Google Cloud Functions, PubNub Functions and Twilio Functions to name a few.
The most widely known example is probably AWS Lambda; and then there’s the open source project Apache OpenWhisk.
Many API vendors now are starting to offer these serverless capabilities – so now you no longer need to have a server of your own connected to their service – you can just run your code in their XXX Functions product instead.
In some cases, using these Functions product is free, while in most cases, there’s a usage based payment model on running these Functions.
Serverless CPaaSBack to CPaaS and where serverless fits.
I think there are only two vendors in the CPaaS market today who are offering serverless (If I missed anyone – please share in the comments below):
- Voximplant, via their VoxEngine
- Twilio Functions
In the last Twilio Signal event in London, Jeff Lawson mentioned that Functions was Twilio’s fastest growing product since its launch, so there must be a market for that.
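As an illustration of what serverless CPaaS code looks like, here is a small sketch in the shape of a Twilio Function – the handler runs inside the vendor’s cloud and responds to an incoming call event without any server of your own (the greeting text is obviously just an example):

```javascript
// Runs inside the CPaaS cloud: respond to an incoming voice call with a short greeting
exports.handler = function(context, event, callback) {
  const twiml = new Twilio.twiml.VoiceResponse(); // Twilio helper available in the Functions runtime
  twiml.say('Hello! This call was answered by a serverless function.');
  callback(null, twiml); // hand the generated instructions back to the platform
};
```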
CPaaS is slightly more complex these days, so it is important to see where serverless fits first. Let’s split CPaaS into a couple of API layers and products:
API Layers
- Proprietary scripting languages, offered as responses to webhooks from the CPaaS vendor
- REST APIs
- Client SDKs
Products
- SMS and voice (via phone numbers)
- IP messaging, chat and omnichannel messaging
- VoIP (voice and video via WebRTC)
In some ways, the proprietary scripting language API layer can be viewed as a crude form of serverless. You state your needs inside a piece of script that indicates the flow of actions to take on events, offering it as response to webhooks from the CPaaS vendor.
The REST APIs are those that are easily usable within a serverless environment. Instead of making remote calls via APIs from one server to another, handling things like security, authentication and scale, you just run the call as close as possible to its destination.
And then there’s the client SDKs. These run on the target devices themselves, and it is hard to see how you can translate them into serverless – they are already built to communicate with the CPaaS vendor’s backend, so they’re out of scope here.
Since CPaaS products are roughly aligned by the types of API layers that are used for them, we can reach the following conclusions:
A few things to note here:
- IP Messaging makes more sense to run in serverless computing when traffic is high and latency is important
- Latency is usually less of an issue when it comes to SMS and voice
- VoIP has its own set of solutions other than serverless. These usually come in the form of pre-built widgets and iframes (but that’s for another article)
From a vendor’s perspective, serverless is now becoming important.
Why?
Simply because it is part of Twilio’s runtime offering. And one that Twilio states is growing rapidly. I wouldn’t want to be left behind as a competitor.
Why not use an IaaS vendor’s FaaS offering?Just had to put these two in the same sentence.
Since the dominant IaaS vendors (Azure, AWS and Google Cloud) all have a serverless offering, why do you need one in CPaaS? Can’t you just connect the IaaS one to the CPaaS one?
You most certainly can. But you will be using two different vendors now. And to some extent, using something like AWS Lambda only makes sense if you are already making use of multiple AWS services.
Assuming what you do gravitates around communications, then using a Serverless CPaaS product makes more sense. It will bring with it reduced latency and improved security over using an external serverless product.
Serverless is coming to CPaaSLike it or not, serverless is coming to CPaaS.
If you are a CPaaS vendor and you are asking yourself what’s next – make sure you’ve got serverless in your offering or your immediate roadmap.
If you are a developer using CPaaS – see if serverless can help you develop your application faster.
Selecting a CPaaS vendor for your WebRTC application? Check out my WebRTC APIs report.
The post Is the Future of CPaaS Serverless? appeared first on BlogGeek.me.