bloggeek

The leading authority on WebRTC

Advanced WebRTC Architecture Course – Updates

Mon, 10/17/2016 - 12:00

The WebRTC course starts next Monday.

At long last, the wait is coming to an end – and with it, my recent sleepless nights. I’ve been working these past months to put together the content for the course, not knowing how it would end up.

Most of the materials have been recorded, uploaded and prepared already, waiting for me to just manually add all the people who enrolled. There’s a lot of material in that course, and a lot more that I am sure is still missing in there. Trying to cover WebRTC in its entirety isn’t easy.

Through the process of putting this stuff up and out there, I’ve learned a lot myself.

 

The course is split into 7 sections:

  1. The basics of WebRTC – explanation of what WebRTC is, a review of its APIs and call flows, and general knowledge. This should get you up to speed about what it is and will probably place you among the first 10,000 people in the world who know it at this depth. It will also enable you to read the stuff that is out there about WebRTC more critically
  2. Networking basics – while we all use the Internet, many of us don’t know the distinction between TCP and UDP, or what Websockets really is. This section tries to put these things in perspective and lay the groundwork for later sections. It will be super useful for VoIP developers but also great for web developers. It also covers the NAT traversal challenge and the solutions found in WebRTC for it
  3. WebRTC signaling – signaling isn’t part of WebRTC, but is often something to contemplate. This section dives into the signaling alternatives that are available, different types of transport protocols, as well as a lesson on SDP. It also covers the security aspects relevant to WebRTC – and it sheds some light on the FUD (fear, uncertainty and doubt) around WebRTC
  4. Codecs – I love codecs. I know little about them, but somehow, more than most. This section explains voice and video codecs, while focusing on what you need to know about them in the context of WebRTC. You won’t be able to implement a codec after this section (I never implemented a codec), but you will gain the understanding necessary for you to decide the codecs for your own scenarios
  5. Media processing – media processing is at the heart of most decisions you will take in your use case. In this section, I take the time to review how RTP and RTCP work, and then dive into different architectures and processes you might need in your back end. Things like mixing, routing and recording
  6. 3rd party frameworks and services – here we will be diverting from the beaten path of “normal course material”, and instead of talking about specifications, standards and capabilities, we will look into the various products and open source frameworks that are out there. We will review them and see which one fits what use case, and also gain an understanding of the various routes available to us, trying to match our company’s DNA and requirements to the alternatives at hand
  7. Common WebRTC design patterns – this is where we will take specific scenarios and challenges, from a list of those I see every day when people reach out to me, and analyze them. Go over the scenario, break it down to requirements and then map them into architectural alternatives. The idea here is to give you the tools to do such things on your own with your products

Most of the lessons are already ready. There are around 6 lessons that I still need to write. Hopefully, they will be available on launch day, but if not, then the following week.

 

I want to answer a few quick questions here – things I’ve been asked time and again in the past month:

Is this a one-time thing?

Yes and no.

The course takes place October 24 and lasts for 2 months. Those who enroll with office hours get an extended duration of 4 months.

I don’t plan on making this an ongoing thing where you can enroll whenever you want and take the course. I will be taking the time throughout these two months to listen to the students and see if there’s anything that requires updating in “real time”. I can’t do that if this is an ongoing thing.

This might change in the future, but for now, there’s this timing.

I might do that some months from now, after I rest a bit from the effort and decide if it makes financial sense to run it again.

If you have your own timing issues, then understand that the course is self-paced. You can “leave” for a week or two and come back, do it faster or slower.

Is the course for me?

I can’t really say.

Here are a few types of students that I have already enrolled for the course:

  • Developers who need to start using WebRTC, more often than not through a framework that was already selected. They know how it works, but are looking to gain deeper understanding so they can troubleshoot issues or add features to their product
  • Product managers who want to learn and understand more about WebRTC. Mostly to give them the language necessary to talk with their developers. And mainly to keep the developers honest
  • Teams who work with WebRTC, so they can get the experience together as a team and improve their proficiency
  • Testers wanting to understand the technology better and find effective ways to design their test plans

The course doesn’t include too much code. There’s the occasional piece of code shown, but the idea isn’t to explain to you how to develop with WebRTC. In truth – most of you won’t develop with WebRTC directly anyway – you’ll end up using a framework or a third party for that.

The intent is to give you an understanding of the limits and capabilities of WebRTC. To know how to wield this amazing tool and how to use it effectively in your product.

How is the course conducted?

If you enrolled, then you will be receiving an email a day or two prior to the course.

I will be registering you to the course mini-site inside the BlogGeek.me website. Once you login, you will be able to access all course sections and lessons.

Each lesson has a page of its own in the site. Most lessons have a recorded video session as the main bulk of it, along with text and additional reading material. In most cases, that additional reading material is important.

You can “tune in” to any lesson you wish and learn it at your own pace and in your free time.

There is an online forum for the course. Students will be able to raise their questions, issues and feedback there. If things require changes on my end, I’ll try making the changes to the lessons as we move along, maybe even adding course materials and lessons if the need arises. I will also be using the forum to ask questions myself, and to check on the progress of students.

For those taking office hours, these will take place twice a week at different times to accommodate different time zones. In there, I will answer questions as they come and basically make myself available to you “in the flesh”. I haven’t decided yet which WebRTC service to use for that – suggestions are welcome.

I am still debating whether I should use quizzes as part of the course, placed at the end of each section. If you have an opinion – please voice it (even if you’re not going to attend the course).

 

Enroll today

Learn how to design the best architecture for your WebRTC service in this new Advanced WebRTC Architecture course.

 

The course starts next week.

There’s a Q&A page that may answer additional questions you might have.

Official course syllabus is also available in PDF form.

I’d be happy to meet you if you decide to enroll to the course. This is a new thing for me and I am quite excited about it.

If you are not sure about the course – email me. If there’s no fit – I’ll tell you immediately. If this might help you, I’ll explain what you will gain from it so you can make a better decision.

Until next Monday – have an awesome week.

The post Advanced WebRTC Architecture Course – Updates appeared first on BlogGeek.me.

Dailymotion, Peer5 and the Future of Streaming

Mon, 10/10/2016 - 12:00

The future of streaming includes WebRTC.

Disclaimer: I am an advisor for Peer5.

If you look at reports from Ericsson or Cisco, what you’ll notice is the growth of video as a large portion of what we do over the Internet. As video takes an order of magnitude more data to pass than almost anything else we share today, this is no wonder. Here are a few numbers from Cisco’s forecast from Feb 2016:

  • Mobile video traffic accounted for 55 percent of total mobile data traffic in 2015. Mobile video traffic now accounts for more than half of all mobile data traffic
  • Three-fourths of the world’s mobile data traffic will be video by 2020. Mobile video will increase 11-fold between 2015 and 2020, accounting for 75 percent of total mobile data traffic by the end of the forecast period

Source: Cisco

I think there are a few reasons for this growth:

  1. While we’re continuously moving towards HD video resolutions, 4K is already being experimented with. The increase in resolution and frame rates is inevitable. We’ve seen this growth with the displays of our devices and with the cameras we hold in our pockets. Time to see it in the videos we play back
  2. The hegemony of content creators is broken. User generated content is growing rapidly. It started with YouTube, moving to services such as Vine and now live streaming services such as Periscope, Facebook Live, YouNow and others. More creators = more video sources
  3. Viewing habits are changing. We are no longer interested in TV series broadcast on air, but rather pick and choose what we want to watch and when we want to watch it, from an exponentially larger pool and variety of content

The challenge really begins when you look at the Internet technologies available to stream these massive amounts of content:

  • Flash / RTMP. This is how we streamed video over the Internet for years, and that period is coming to an end. Google announced limiting its support of Flash by requiring users to opt in on sites that make use of it. This is causing large content sites to scurry towards HTML5 based streaming technologies
  • HLS. HTTP Live Streaming – Apple’s mechanism, used on iOS devices and enforced if you wish to stream to them. To some extent, this makes it “necessary” to support HLS elsewhere – so there’s also an HLS player for browsers (see the sketch after this list)
  • MPEG-DASH – the standardized cousin of HLS
  • Something else, not necessarily intended for video streaming
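
As an aside, here is roughly what browser-side HLS playback looks like through such a player – a minimal sketch assuming the hls.js library is loaded, with a hypothetical stream URL:

    const video = document.querySelector('video');
    const streamUrl = 'https://example.com/stream/playlist.m3u8'; // hypothetical URL

    if (Hls.isSupported()) {
      // Browsers without native HLS support play it via MSE through hls.js
      const hls = new Hls();
      hls.loadSource(streamUrl);
      hls.attachMedia(video);
    } else if (video.canPlayType('application/vnd.apple.mpegurl')) {
      video.src = streamUrl; // Safari plays HLS natively
    }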

The challenge with HLS and MPEG-DASH is latency. While this might be suitable for many use cases, there are those who require low latency live streaming:

[Figure: from my course on WebRTC architecture]

For those who can use HLS and MPEG-DASH, there’s this nagging issue of needing to use CDNs and pay for expensive bandwidth costs (when you stream that amount of video, everything becomes expensive).

Which brings me to the recent deal between Peer5 and Dailymotion. To bring you up to speed:

  • Dailymotion is huge
    • Similarweb ranks them #4 in their category, after YouTube, Netflix and niconico
    • Their site states they have 300 million unique monthly visitors and they stream 3.5 billion videos a month
  • Peer5 is a startup dealing with peer assisted delivery
    • They offload video traffic and reduce strain on servers and CDNs by sending video data across peers
    • They do this by using WebRTC’s data channel (a rough sketch follows this list)
  • Some of the traffic of Dailymotion now flows via Peer5’s technology, and that’s now official
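
Peer5’s actual implementation is proprietary, but the general idea of peer-assisted delivery over the data channel can be sketched in a few lines. This is a hypothetical illustration only – it assumes an already-open RTCDataChannel negotiated through your own signaling, and a made-up message format:

    // Ask a connected peer for a video segment over the data channel,
    // falling back to the CDN if the peer fails to deliver.
    function fetchSegment(channel, segmentId, cdnUrl) {
      return new Promise((resolve, reject) => {
        channel.onmessage = (event) => resolve(event.data); // segment bytes from the peer
        channel.onerror = reject;
        channel.send(JSON.stringify({ want: segmentId })); // made-up protocol message
      }).catch(() => fetch(cdnUrl).then((res) => res.arrayBuffer())); // CDN fallback
    }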

There are other startups with similar technologies to Peer5, but this is the first time any of them has publicized a customer win, and with such a high profile to top it off.

In a way, this validates the technology as well as the need for new mechanisms to assist in our current state of video streaming – especially in large scales.

WebRTC seems to fit nicely here, and in more than one way. I am seeing more cases where companies use WebRTC either as a complementary technology or even as the main broadcast technology for their service.

It is also the reason I’ve added this important topic to my upcoming course – Advanced WebRTC Architecture. There is a lesson dedicated to low latency live broadcasting, where I explain the various technologies and how WebRTC can be brought into the mix in several different combinations.

If you would like to learn more about WebRTC and see how to best fit it into your scenario – this course is definitely for you. It starts October 24, so enroll now.

Learn how to design the best architecture for your WebRTC service in this new Advanced WebRTC Architecture course.

 

The post Dailymotion, Peer5 and the Future of Streaming appeared first on BlogGeek.me.

What’s Your Preferred Language for WebRTC Development?

Mon, 10/03/2016 - 12:00

WebRTC isn’t limited to JavaScript.

This is something I don’t get asked directly, but it does pop up from time to time. Especially when people come up with a specific language in mind and ask if it is suitable for WebRTC.

While the answer is almost always yes, I think a quick explanation of where programming languages meet WebRTC exactly is in order.

We will start with a small “diagram”, to show where we can find WebRTC related entities and move from there.

We’ve got both client and server entities with WebRTC, and I think the above depicts the main ones. There are more as your service gets more complicated, but that’s all an issue of scaling and pure development not directly related to WebRTC.

 

Learn how to design the best architecture for your WebRTC service in this new Advanced WebRTC Architecture course.

 

So what do we have here?

Web app

The web app is what most people think about when they think WebRTC.

This is what ends up running in the browser, loaded from an HTML and its derivatives.

What this means is that the language you end up with is JavaScript.
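
To make this concrete, here is a minimal sketch of the browser side – acquire media, create a peer connection, and generate an offer to pass to the other side through whatever signaling you chose:

    navigator.mediaDevices.getUserMedia({ audio: true, video: true })
      .then((stream) => {
        const pc = new RTCPeerConnection();
        stream.getTracks().forEach((track) => pc.addTrack(track, stream));
        // (a real app would also call pc.setLocalDescription(offer))
        return pc.createOffer();
      })
      .then((offer) => {
        // send offer.sdp to the remote side over your own signaling channel
        console.log(offer.sdp);
      });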

Mobile app

When it comes to the mobile domain, there are two ways to end up with WebRTC. The first is by having the web app served inside a mobile browser, which brings you back to JavaScript.

The more common approach though is to use WebRTC inside an app. You end up compiling and linking the WebRTC codebase as an SDK.

The languages here?

  • C, C++ for the low level stuff that makes up WebRTC. In all likelihood, you won’t need to handle this (either because it will just work or because you’ll be outsourcing it to someone else)
  • Java for native Android app development
  • Objective-C and/or Swift for native iOS app development

There’s also the alternative of C# via Xamarin, or JavaScript again if you use something like Crosswalk. With these approaches, someone should already have WebRTC wrapped for you on these platforms.

Embedded app

Embedded is where things get interesting.

There are cases where you want devices to run WebRTC for one reason or another.

Two main approaches here will dictate the languages of choice:

  1. C, C++ if you port the webrtc.org code base and use it. And then whatever else you fancy on top of it
  2. Any language you wish (Java anyone?), while implementing what you need of the WebRTC protocol (=what goes on the network) on your own

In general, here you’ll be going to lower levels of abstraction, getting as close as possible to the machine language (but stopping at C most probably).

TURN server

STUN and TURN servers are also necessary. Most likely, you won’t need to do a thing about them besides compiling, configuring and running them.

So no programming languages here.

I would note that the popular open source alternatives are all written in C. Again – this doesn’t matter.
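
The one place these servers do show up in your code is in the client’s configuration. A minimal sketch, with hypothetical addresses and credentials:

    const pc = new RTCPeerConnection({
      iceServers: [
        { urls: 'stun:stun.example.com:3478' }, // hypothetical STUN server
        {
          urls: 'turn:turn.example.com:3478',   // hypothetical TURN server
          username: 'user',
          credential: 'secret'                  // usually ephemeral, issued by your app server
        }
      ]
    });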

Media server

Media servers come in different shapes and sizes. I’ve covered them here recently, discussing Jitsi/Kurento and later Kurento/Janus.

The programming languages here depend on the media server itself. Jitsi and Kurento are Java based. Janus is mostly C. In most cases – you wouldn’t care.

Media servers are usually entities that you communicate with via REST or WebSocket, so you can just use whatever language you like on the controlling side. It is a very popular choice to use Node.js (=JavaScript) in front of a Kurento server, for example. That also brings us to the last entity.
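
As a flavor of what that controlling side looks like, here is a rough Node.js sketch using the kurento-client module to build a loopback pipeline – verify the exact signatures against the current Kurento documentation:

    const kurento = require('kurento-client');

    kurento('ws://localhost:8888/kurento')            // the media server's WebSocket API
      .then((client) => client.create('MediaPipeline'))
      .then((pipeline) => pipeline.create('WebRtcEndpoint'))
      .then((endpoint) => {
        endpoint.connect(endpoint);                   // loopback: media is sent back to its source
        // endpoint.processOffer(sdpOffer) would answer a browser's SDP offer
      });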

App/Signaling server

The funny thing is that this is where the question is mostly targeted. The application and/or signaling server is what stitches everything together. It serves the web app, communicates with the mobile and embedded apps, offers the details of the TURN server and handles its ephemeral passwords, and controls the media servers.

And it is also where the bulk of the development happens since it holds the business logic of the application.

And here the answer is rather simple – use whatever you want.

  • Node.js and JavaScript are a great and popular choice (there are good reasons for that)
  • Java seems to be a thing in enterprises though for the life of me I just can’t understand why
  • PHP works well. It is used by many WordPress plugins for WebRTC
  • Erlang seems to be something that adventurous developers like to adopt – and like
  • Ruby and Python are also good choices
  • .Net is something I’ve seen once or twice used

In general, whatever you can use to build websites can be used to build a WebRTC service.
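
As an illustration, a bare-bones signaling server in Node.js fits in a dozen lines using the ws package. Note that it does nothing WebRTC-specific – it just relays opaque messages (SDP offers/answers, ICE candidates) between connected clients:

    const WebSocket = require('ws');
    const wss = new WebSocket.Server({ port: 8080 });

    wss.on('connection', (socket) => {
      socket.on('message', (message) => {
        // Forward to every other client; a real app would route by room or user id
        wss.clients.forEach((client) => {
          if (client !== socket && client.readyState === WebSocket.OPEN) {
            client.send(message);
          }
        });
      });
    });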

What’s your language?

Back to you. What are the programming languages you use with WebRTC?

If you are looking for developers, then what would be the languages you’d view as mandatory and which ones as preferable with applicants?

This, as well as other topics, is covered in my upcoming Advanced WebRTC Architecture course. Be sure to enroll if you wish to deepen your understanding of this topic.

The post What’s Your Preferred Language for WebRTC Development? appeared first on BlogGeek.me.

Advanced WebRTC Architecture Course: Adding a Premium Package

Fri, 09/30/2016 - 12:00

So far so good, but it is time to add some more options for you.

[Figure: a selection of three different course packages]

I am working to complete all lessons for the course. It takes time to work things through, go over the lessons, make sure everything is in order and record the sessions.

The interesting thing to me is the variety of people that enroll to this course – they come from all over the globe, varying from small startups to large companies. I found some interesting vendors who are looking at WebRTC that I wasn’t aware of.

A few updates about the course

There are a few minor updates that are taking place in the course:

  • I will most probably add a forum to go along with the course. The forum is open to all packages, and it will be a place where discussions and questions can take place between the students
  • The FAQ page was updated, based on questions I received in the past several weeks – check it out
  • The enrollment page now shows a pricing table, in an effort to make things clearer
  • There are now 3 packages:
    1. Basic – access to the course for 2 months + forum
    2. Plus – access to the course for 4 months + forum + office hours
    3. Premium – a new package – see below for information
  • For those who wish to enroll by wire transfer instead of PayPal – just contact me through my contact form

Course length

The course duration is 8 weeks, give or take a few days.

That said, if you want access to the recorded materials for a longer period, then you might want to consider going for the Plus or Premium packages.

The Plus package extends access to the course materials, the forum and the office hours by an additional 2 months.

Office hours happen twice a week, at two different times to accommodate multiple time zones. During office hours I will be reviewing with the students their learning and understanding of WebRTC and assisting personally in areas that come up. I might even decide to hold a quick online lesson on relevant or timely topics during the office hours.

The Premium package extends access to the course materials up to a full year. More about the premium package below.

Groups

If you want to enroll multiple employees or just join as a team, then contact me directly.

For large enough groups, I can offer discounts. For others, just the service of proforma invoice and wire transfer (which can still be better than PayPal for you).

We will be having 3-4 medium sized groups in our course this time, which will make things interesting – especially during office hours.

The Premium Package

I decided to add a premium package to the offering.

The idea behind it is to give those who want it more access to my time, in a more private way.

The premium package offers two substantial additions on top of the Plus package:

  1. Access to course materials for a full year (instead of 2 or 4 months)
  2. Two private consultation calls with me

In the past few months I’ve noticed a lot of small companies that end up wanting advice: a few hours of my time in which they explain to me what they are doing and we chat about it, to see if there’s anything I can suggest. I decided to offer this service through this course as well, by bundling it as two consultation calls that go on top of the course itself.

We select together the agenda of these calls and what you want to achieve in them before we start. We then schedule the time and medium to use for the call (think something with WebRTC and a webcam, but not necessarily). And then we sit and chat.

If you already enrolled

If you already enrolled via PayPal and haven’t heard anything from me other than an order form and an invoice – don’t worry. I will be reaching out to all students a week or two before the course.

I am excited to do this, and really hope you are too.

 

See you next month!

 

Learn how to design the best architecture for your WebRTC service in this new Advanced WebRTC Architecture course.

 

The post Advanced WebRTC Architecture Course: Adding a Premium Package appeared first on BlogGeek.me.

Recording WebRTC Sessions: client side or server side?

Mon, 09/26/2016 - 12:00

Recording WebRTC? Definitely server side. But maybe client side.

This article is again taken partially from one of the lessons in my upcoming WebRTC Architecture Course. There, it is given in greater detail, and in recorded form.

Learn how to design the best architecture for your WebRTC service in this new Advanced WebRTC Architecture course.

 

Recording is obviously not part of what WebRTC does. WebRTC offers the means to send media, but little more (which is just as it should be). If you want to record, you’ll need to take matters into your own hands.

Generally speaking, there are 3 different mechanisms that can be used to record:

  1. Server side recording
  2. Client side recording
  3. Media forwarding

Let’s review them all and see where that leads us.

#1 – Server side recording

This is the technique I usually suggest developers use. Somehow, it fits best in most cases (though not always).

What we do in server-side recording is route our media via a media server instead of directly between the browsers. This isn’t TURN relay – a TURN relay doesn’t get to “see” what’s inside the packets, as they are encrypted end-to-end. What we do is terminate the WebRTC session at the server on both sides of the call – route the media via the server, and at the same time send the decoded media to post processing and recording.

What do I mean by post processing?

  • We might want to mix the inputs from all participants and combine it all to a single media file
  • We might want to lower the filesize that we end up storing
  • Change format (and maybe the codecs?), to prepare it for playback in other types of devices and mediums

There are many things that factor into a recording decision besides just saying “I want to record WebRTC”.

If I had to put pros vs cons for server side media recording in WebRTC, I’d probably get to this kind of a table:

Pros:

  • No change in client-side requirements
  • No assumptions on client-side capabilities or behavior
  • Can fit the resulting recording to whatever medium and quality level necessary

Cons:

  • Another server in the infrastructure
  • Lots of bandwidth (and processing)
  • Now we must route media

#2 – Client side recording

In many cases, developers will shy away from server-side recording, trying to solve the world’s problems on the client side. I guess it is partially because many WebRTC developers tend to be JavaScript coders and not full stack developers who know how to run complex backends. After all, putting up a media server comes with its own set of headaches and costs.

So the basics of client-side recording lean towards the following flow:

We first record stuff locally – WebRTC allows that.

And then we upload what we recorded to the server. Here we don’t really use WebRTC – just pure file upload.
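
The local recording part relies on the MediaRecorder API. A minimal sketch – the upload endpoint here is hypothetical:

    // 'stream' comes from getUserMedia (or from a remote peer connection)
    const recorder = new MediaRecorder(stream);
    const chunks = [];

    recorder.ondataavailable = (event) => chunks.push(event.data);
    recorder.onstop = () => {
      const blob = new Blob(chunks, { type: 'video/webm' });
      fetch('/upload', { method: 'POST', body: blob }); // hypothetical upload endpoint
    };
    recorder.start();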

Great on paper, somewhat less so in reality. Why? There are a few interesting challenges when you record locally, via a browser, on a machine you don’t know or control:

  • Do you even know how much storage you have available for the recording? Will it be enough for that full hour session you planned for your e-learning service?
  • Now that the session is done, you’re uploading a GB-sized file. Is the user just going to sit there and wait, without closing the browser or the tab that is uploading the recording?
  • Where and what do you record? If both sides record, then how do you synchronize the recordings?

It all leads to the fact that at the end of the day, client side recording isn’t something you can use – unless the recording is short (a few minutes) or you have complete control over the browser environment (and even then I would probably not recommend it).

There are things you can do to mitigate some of these issues too. Like upload fragments of the recording every few seconds or minutes throughout the session, or even do it in parallel to the session continuously. But somehow, they tend not to work that well and are quite sensitive.

Want the pros and cons of client side recording? Here you go:

Pros:

  • No need to add a media server to the media flow

Cons:

  • Client side logic is complex and quite dependent on the use case
  • Requires more on the uplink of the user – or more wait time at the end of the session
  • Need to know the client’s device and behavior in advance

#3 – Media forwarding

This is a lesser known technique – or at least something I haven’t really seen in the wild. It is here because this alternative is technically possible.

The idea behind this one is that you don’t get to record locally, but you don’t get to route media via a server either.

What is done here is that media is forwarded by one or both of the participants to a recording server.

The latest releases of Chrome allow forwarding incoming peer connection media, making this possible.
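
Conceptually, the client takes the tracks received on one peer connection and feeds them into a second peer connection towards the recording server. A rough sketch using the track-based API – the signaling for the second connection is up to you, and exact browser support varies:

    const pcFromPeer = new RTCPeerConnection();     // the "real" call
    const pcToRecorder = new RTCPeerConnection();   // extra leg towards the recording server

    pcFromPeer.ontrack = (event) => {
      // forward the incoming remote track to the recording server
      pcToRecorder.addTrack(event.track, event.streams[0]);
    };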

This is what I can say further about this specific alternative:

Pros:

  • No need to add a media server into the flow – just an additional external recording server

Cons:

  • Requires twice the uplink or more
  • Do you want to be the first to try this technique?

Things to remember

Recording doesn’t end with how you record media.

There’s metadata to handle (recording, playback, sync, etc.).

And then there’s the playback part – where, how, when, etc.

There are also security issues to deal with and think about – both on the recording end and on the playback side.

These are covered in a bit more detail in the course.

What’s next?

If you are going to record, start by leaning towards server side recording.

Sit down and list all of your requirements for recording, archiving and playback – they are interconnected. Then start finding the solution that will fit your needs.

And if you feel that you still have gaps there, then why not enroll to the Advanced WebRTC Architecture course?

 

Learn how to design the best architecture for your WebRTC service in this new Advanced WebRTC Architecture course.

The post Recording WebRTC Sessions: client side or server side? appeared first on BlogGeek.me.

Twilio’s Voice Insights for WebRTC – a line on the sand

Fri, 09/23/2016 - 12:00

Analytics != Operation

Twilio just announced a new addition to its growing cadre of services. This time – Voice Insights.

What to expect in the coming days

This week Twilio announced several interesting initiatives:

  1. Country specific guidelines on using SMS
  2. A new Voice Insights service
  3. The Kurento acquisition

Add to that their recent announcement of a new Enterprise offering and the way they seem to be adding more number choices in countries, and what we get is a lot of work covering a single vendor in this industry.

Twilio is enhancing its services in breadth and depth at the same time, doing so while trying to reach out to new customer types. I will be covering all of these issues soon enough. Some of it here, some on other blogs where I write. Customers with an active subscription for my WebRTC PaaS report will receive a longform written analysis separately covering all these aspects later this month.

What I want to cover in this article

I already wrote about Twilio’s Kurento acquisition. This time, I want to focus on Voice Insights.

All the media outlets I’ve checked to read about Voice Insights were regurgitating the Twilio announcement with little to add. At most, they had callstats.io to refer to. I think a lot is missing from the current conversation. So let’s dig in.

What is Voice Insights?

Voice Insights is a set of tools that can be used to understand what’s going on under the hood. When you use a communications API platform – or build your own for that matter – the first thing you notice is the lack of understanding of what’s really happening.

Most dashboards focus on giving you the basics – what sessions you created, how long they were, how much money you owe us. Others add some indication of quality metrics.

The tools under the Voice Insights title at Twilio include:

  1. Collection of all network stats, so you can check them out in the Twilio console
  2. Real time triggers on the client, telling you when network issues arise or the volume is too low/high
  3. Pre-call network test on the client
  4. User feedback collection (the Skype “how was your call quality” nag)

Some of them were already available in some form or another in the Twilio offering – such as user feedback collection.

The features here can be split into two types:

  1. Client side – the real time triggers, pre-call network test
  2. Server side – collection of network stats

Twilio gave a good introduction to all of these capabilities, so I won’t be repeating them here.
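
That said, to give a flavor of the raw material such services are built on: the standard getStats API is what a client uses to collect network metrics. A rough sketch using the spec’s promise-based API – browser support and stat field names vary:

    // 'pc' is your RTCPeerConnection
    setInterval(() => {
      pc.getStats().then((report) => {
        report.forEach((stat) => {
          if (stat.type === 'inbound-rtp') {
            // ship these off to your analytics backend
            console.log(stat.packetsLost, stat.jitter);
          }
        });
      });
    }, 1000);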

What is interesting is how they decided to implement the real time triggers – are they triggered from the backend, or directly by rules running on the device? But I digress.

How is it priced?

Interestingly, Voice Insights is priced separately from the calling service itself.

If you want insights into the voice minutes you use on Twilio, there’s an extra charge associated with it.

Prices start at $0.004 per minute, going down to ~$0.002 per minute for those who can commit to 1 million voice minutes a month. At the highest tiers it goes down to just above $0.001 a minute.

For comparison, SIP-to-SIP voice calling on Twilio starts at $0.005 per minute, making Voice Insights a rather expensive service.

Comparisons with callstats.io are necessary at this point. If you take a low tier of 10,000 voice minutes a month, callstats.io is priced at 19 EUR (based on their calculator – it can get higher or lower based on “data points”) whereas Twilio Voice Insights stands at 40 USD. How these two vendors’ prices compare at higher volumes is an exercise I’ll leave to others.

Is this high? low? market price? I have no clue.

TokBox, on the other hand, has their own tool called Inspector and another feature called Pre-Call Test. And it is given for free as part of the service.

Where is it headed?

Voice Insights can take several directions with Twilio:

  • Extend it to support video sessions as well
  • Enhance and deepen the analytics capabilities, probably once enough feedback is received from customers on this feature
  • Switch from a paid to free offering, again, based on customer feedback
  • Unbundle it from Twilio and offer it as a stand-alone service to others – maybe to all the vendors that are using Kurento on premise?

With analytics, the sky usually isn’t the limit. It is just the beginning of the dreams and stories you can build upon a large data set. The problem is how you can take these dreams and make them come true.

Which brings us to the next issue.

The future of Analytics in Comm APIs

There’s a line drawn in the sand here. Between communications and analytics.

Analytics has a perceived value of its own – on top of enabling the interaction itself.

Will this hold water? Will other communication API vendors add such capabilities? Will they be charging extra for them?

I’ve had my share of stories around CEM (Customer Experience Management). Network equipment vendors and those handling video streaming are marketing it to their customers. Analytics on network data. This isn’t much different.

Time will tell if this is something that will become common place and desired, or just a failed attempt. I still don’t have an opinion where this will go.

Up next

Next in my quick series of articles on Twilio comes coverage of their new Enterprise plan, and how Twilio is trying to grow in breadth and depth at the same time.

 

Test and Monitor your WebRTC Service like a pro - check out how testRTC can improve your service's stability and performance.

The post Twilio’s Voice Insights for WebRTC – a line on the sand appeared first on BlogGeek.me.

Discount on the Advanced WebRTC Architecture Course ends tomorrow

Thu, 09/22/2016 - 12:00

If you haven’t yet enrolled to my Advanced WebRTC Architecture course – then why wait?

I just noticed that I haven’t written any specific post here about the upcoming course, so consider this that announcement. In my defense – I sent it out a few days ago to the monthly newsletter I have.

Why a course on WebRTC architecture?

I’ve been working with entrepreneurs, developers, product managers and people in general on their WebRTC products for quite some time. But somehow I failed to notice that in many such discussions there were large gaps between what people thought about WebRTC and what WebRTC really is.

There’s lots of beginner’s information out there for WebRTC, but somehow it always focuses on how to use the WebRTC APIs in the browser, or what the meaning of a specific feature in the standard is. There is also a large set of walk-throughs of different frameworks that you can use, but no one seems to offer a path for a developer to decide on his architecture. To answer the question of “what should I be choosing for my service?”

So I set out to put together a course that answers that specific question. It gives the basics of what WebRTC is, and then dives into what it means to put an architecture in place:

  • How to analyze the real requirements of your scenarios?
  • What are the various components you will need?
  • Go through common design patterns that crop up in popular service archetypes

What’s in the course?

The easiest way is to go through the course syllabus. It is available online here and also in PDF form.

When will the course take place?

The course is all conducted online, but not live.

It starts on October 24, and I am now in final preparation of recording the materials after creating them in the past two months.

The course is designed to be:

  • Built out of 7 modules
  • Have 40 lessons, give or take; each should take you around 30 minutes on average
  • This means that if you take a lesson every working day, you should complete it in 2 months
  • You can do it at a faster pace if you wish
  • Course materials are available online for students for a period of 2 months. This can be extended to 4 months for those who wish to add Office Hours on top of the course

Any discount for friends and family?

Enrollment in the course is $247 USD. Adding Office Hours on top of it means an additional $150 USD.

Until tomorrow, there’s a $50 USD discount – so enroll now if you’re already certain you want to.

There are discounts for those who want to enroll as a larger group – contact me for that.

Have more questions?

Check the FAQ. I’ll be updating it as more questions come in.

If you can’t find what you need there – just contact me.

The post Discount on the Advanced WebRTC Architecture Course ends tomorrow appeared first on BlogGeek.me.

Twilio Acquires Kurento. Who will Acquire Janus?

Wed, 09/21/2016 - 12:00

Open source media frameworks in WebRTC are all the rage these days.

Jitsi got acquired by Atlassian early last year and now Twilio grabs Kurento.

What to expect in the coming days

Yesterday Twilio announced several interesting initiatives:

  1. Country specific guidelines on using SMS
  2. A new Voice Insights service
  3. The Kurento acquisition

Add to that their recent announcement of a new Enterprise offering and the way they seem to be adding more number choices in countries, and what we get is a lot of work covering a single vendor in this industry.

Twilio is enhancing its services in breadth and depth at the same time, doing so while trying to reach out to new customer types. I will be covering all of these issues soon enough. Some of it here, some on other blogs where I write. Customers with an active subscription for my WebRTC PaaS report will receive a longform written analysis separately covering all these aspects later this month.

What I want to cover in this article

What I want to cover in this part of my analysis of the recent Twilio announcements is their acquisition of Kurento.

Things I’ll be touching on are why Kurento – how it will further Twilio’s goals – and also what will happen to the many users of Kurento.

I’ll also touch on the open source media server space, and the fact that the next runner-up in the acquisition roulette of our industry should be Janus.

But first things first.

What is Kurento?

Kurento is an open source WebRTC server-side media framework implemented on top of GStreamer. While it may not be limited to WebRTC, my guess is that most if not all of its users make use of WebRTC with it.

What does that mean exactly?

  • Open source – anyone can download and use Kurento. And many do
    • There’s a vibrant community around it: developers who use it independently, outsourcing development shops that use it in projects for their customers, and the Kurento team itself offering free and paid support
    • It is distributed under the Apache license which is quite lenient and enterprise-friendly
  • server-side media framework – when you want to process media in WebRTC for recording, multiparty or other processes, a server-side media framework is necessary
  • GStreamer – another popular open source project for media processing. Just another tidbit you may want to remember

I am seeing Kurento everywhere I go. Every couple of meetings I have with companies, one indicates that it makes use of Kurento – or it is apparent from looking at their service that it does. Somehow, it has become one of those universal packages that developers turn to when they need stuff done.

The Kurento team is running multiple activities/businesses (I might be making a few mistakes here – it is always hard to follow such internal structures):

  1. Kurento, the open source project itself
    • Assisted by research done at the Universidad Rey Juan Carlos, located in Madrid, Spain
    • Funding raised through the European Commission
    • Money received by selling support and customization services
  2. NUBOMEDIA
    • A new initiative focused on scaling and an open source PaaS offering on top of Kurento
    • You can read more about it in a guest post by Luis Lopez (the face of Kurento)
  3. elasticRTC
    • Another new initiative, but a commercial one
    • Focused at getting scalable Kurento running on AWS
  4. Naevatec / Tikal Technologies SL
    • The business side of the Kurento project, where customization and support is done for a price

Kurento has a busy team…

What did Twilio acquire exactly?

This is where things get complicated. From my understanding, reading the materials online and through a briefing held with Twilio, this is what you can expect:

  • Kurento as an open source project is left open source, untouched and un-acquired. That said, the bulk of the team maintaining Kurento (the Naevatec developers) will be moving to be Twilio employees
  • Naevatec was not acquired and will live on. A new team will need to be hired and trained. During the transition period, the Twilio team will work on the Kurento project, fulfilling any existing obligations. After that, Naevatec will supposedly have the internal manpower to take charge of that part of the business
  • elasticRTC was acquired. They will not be onboarding any new customers, but will continue supporting existing customers
    • This sounds like the story of AddLive and Snapchat (they waited for support contracts to expire and worked diligently but legally to get customers off the AddLive service)
    • That said, it seems like Twilio wants to leverage these early adopters of elasticRTC to design and build their own Twilio API offering around that domain (more on that later)
    • As I don’t believe there are many customers to elasticRTC, I don’t see this as a real blow to anyone
  • NUBOMEDIA was not mentioned in any of the announcements of the acquisition
    • I forgot to prod about it in my briefing…
    • Twilio are probably unhappy about this one, but had nothing to do about it
    • NUBOMEDIA is funded by multiple European projects, so was either impossible to acquire or too expensive for what Twilio had an appetite for
    • It might also have had more partners than just the Kurento team(s)
    • How will the acquisition affect NUBOMEDIA’s project and the zeal with which Twilio’s new employees from Naevatec will have for it is an open question

To sum things up:

Twilio acqui-hired the team behind the Kurento project and took their elasticRTC offering out of the market before it became too popular.

How will Twilio use Kurento?

I’d like to split this one into short term and long term.

Short term – multiparty calling

Twilio needed an SFU. Desperately.

In April 2015 the Twilio Video initiative was announced. Almost 18 months later and that service is still in beta. It is also still 1:1 calling or mesh for multiparty.

Something had to be done. While I am sure Twilio has been working for quite some time on a solid multiparty option, they probably had a few roadblocks, which got them to start using Kurento – or decide they need to buy that technology instead of build it internally.

Which got them to the point of the acquisition. Twilio will probably embed Kurento into their Twilio Video offer, adding three new capabilities to their platform with it:

  1. Multiparty calling, in an SFU model, and maybe an MCU one
  2. Video recording capability – a popular Kurento use case
  3. PSTN connectivity for video calling – Kurento has a SIP-Gateway component that can be used for that purpose

Long term – generic media server

In the long term, Twilio can employ the full power of Kurento and offer it in the cloud with a flexible API that pipelines media in real time.

This can be used in our brave new world of AI, bots, IoT and AR – all those acronyms people love talking about.

It will be interesting to see how Twilio ends up implementing it and what kind of an API and an offering they will put in place, as there are many challenges here:

  • How do you do something so generic but still maintain low resource consumption?
  • How do you price it in an attractive way?
  • How do you decide which use cases to cover and which to ignore?
  • How do you design it for scale, especially if you are as big as Twilio?
  • How do you design simple yet flexible and powerful API for something so generic in nature?

This is one of the most interesting projects in our industry at the moment, and if Twilio is working towards that goal, then I envy their product managers and developers.

What will be left of the Kurento project?

That’s the big unknown. Luis Lopez, project lead of Kurento details the official stance of Kurento and Twilio on the Kurento blog. It is an expected positive looking write up, but it leaves the hard questions unanswered.

Maintaining the Kurento project

Twilio is known for their openness and the way they work with developers. While that is true, the Twilio github has little in the way of projects that aren’t samples written on top of the Twilio platform or open sourced projects that touch the core of Twilio. While that is understandable and expected, the question is how will Twilio treat the Kurento open source project?

Now that most of the workforce that is leading Kurento are becoming Twilio employees, will they work on the open source Kurento build or on internal needs and builds of Twilio? Here are a few hard questions that have no real answers to them:

  • What will be contributed back to the Kurento project besides stability and bug fixes?
  • If Twilio works on optimizing Kurento for higher capacities or adds horizontal scalability modules to Kurento, will that be open sourced or left inside Twilio?
  • How will Twilio prioritize bugs and requests coming from the large Kurento community versus handling their own internal roadmap?

In many cases, with Kurento the answer would have been that Naevatec could just limit access to higher level modules to paying customers – at least there was someone you could talk to when you wanted to purchase such modules. Now with Twilio, that route is over. Twilio is not in the business of paid support and customization of open source projects – it is in the business of cloud APIs.

There will be ongoing friction inside Twilio with the decision between investing in the open source Kurento platform and using it internally. If you thought that was bad with Atlassian acquiring Jitsi – it is doubly so here, where Twilio may have to compete with the build-vs-buy decisions of companies where “build” is done on top of Kurento.

I assume Twilio doesn’t have the answers to these questions yet either.

Maintaining the business model

Kurento has customers. Not only users and developers.

These customers pay Naevatec. They pay for support hours or for customization work.

Will this be allowed moving forward?

Can the yet-to-be-hired new team at Naevatec handle the support?

What happens when someone wants to pay a large sum of money to Naevatec in order to deploy a scalable Kurento service in the cloud? Will Naevatec pick that project? If said customer also wants to build an API platform on top of it, will that be something Naevatec will still do?

What will others who see themselves as Twilio competitors do if they made use of Kurento up until now? Especially if they were a Naevatec paying customer…

The good thing is that many of the Kurento users ended up getting paid support and customization from third party vendors. Now if only you could know which one of them does a decent job…

Should TokBox be worried?

Yes and no.

Yes, because it means Twilio will be getting their multiparty story, and by that competing with TokBox. Twilio has a wider set of features as well, making them more attractive in some cases.

No, because there’s room for more players, and for video calling services at the moment, TokBox is the go-to vendor. I wonder if they can maintain their lead.

What about Janus?

I recently compared Jitsi to Kurento.

Little did I know then that Twilio decided on Kurento and was in the process of acquiring it.

I also raised the question about Janus.

To some extent, Janus is next-in-line:

  • Those I know who use the project are happy with it and its architecture. A lot more than other smaller open source media framework projects
  • Slack has been using Janus for awhile now
  • Other vendors, some got acquired recently, also make use of it

Whether Meetecho, the company behind Janus, is willing to sell isn’t important. It is a matter of price points.

We’ve seen the larger vendors veer towards acquiring the technology that they are using.

Will Slack go after Janus? Maybe Vonage/Nexmo? Oracle, to beef up their own WebRTC offering?

Open source media frameworks have proven to be extremely effective in churning out commercial services on top of them. WebRTC made that happen by being its own open source initiative.

It is good to see Kurento finding a new home and growing up. Kudos to the Kurento team.

 

Learn how to design the best architecture for your WebRTC service in this new Advanced WebRTC Architecture course.

 

The post Twilio Acquires Kurento. Who will Acquire Janus? appeared first on BlogGeek.me.

How Media and Signaling flows look like in WebRTC?

Mon, 09/19/2016 - 12:00

I hope this will clear up some of the confusion around WebRTC media flows.

I guess this is one of the main reasons why I started with my new project of an Advanced WebRTC Architecture Course. In too many conversations I’ve had recently it seemed like people didn’t know exactly what happens with that WebRTC magic – what bits go where. While you can probably find that out by reading the specifications and the explanations around the WebRTC APIs or how ICE works, they all fail to consider the real use cases – the ones requiring media engines to be deployed.

So here we go.

In this article, I’ll be showing some of these flows. I made them part of the course – a whole lesson. If you are interested in learning more – then make sure to enroll to the course.

#1 – Basic P2P Call

[Figure: Direct WebRTC P2P call]

We will start off with the basics and build on that as we move along.

Our entities will be colored in red. Signaling flows in green and media flows in blue.

What you see above is the classic explanation of WebRTC. Our entities:

  1. Two browsers, connected to an application server
  2. The application server is a simple web server that is used to “connect” both browsers. It can be something like the Facebook website, an ecommerce site, your healthcare provider or my own site with its monthly virtual coffee sessions
  3. Our STUN and TURN server (yes – you don’t need two separate servers; they almost always come as a single server/process). We’re not using it in this case, but we will in the next scenarios

What we have here is the classic VoIP (or WebRTC?) triangle. Signaling flows vertically towards the server but media flows directly across the browsers.

BTW – there’s some signaling going on from the browsers towards the STUN/TURN server in practically all types of scenarios. At the very least, it is used to find the public IP address of the browsers. And almost always, we don’t draw this relationship (until you really need to fix a bug, STUN seems obvious and too simple to even mention).
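
If you want to see that STUN interaction for yourself, a quick sketch: create a peer connection pointed at a STUN server and watch for the server reflexive (srflx) candidates carrying your public address. The STUN server address here is Google’s public one:

    const pc = new RTCPeerConnection({
      iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
    });
    pc.createDataChannel('probe'); // need at least one channel/track to trigger gathering
    pc.onicecandidate = (event) => {
      if (event.candidate && event.candidate.candidate.includes('srflx')) {
        console.log('public address candidate:', event.candidate.candidate);
      }
    };
    pc.createOffer().then((offer) => pc.setLocalDescription(offer));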

 

Summing this one up: nothing to write home about.

Moving on…

#2 – Basic Relay Call

[Figure: Basic WebRTC relay call]

This is probably the main drawing you’ll see when ICE and TURN get explained.

In essence, the browsers couldn’t (or weren’t allowed to) reach each other directly with their media, so a third party needs to facilitate that for them and route the media. This is exactly why we use TURN servers in WebRTC (and other VoIP protocols).

This means that WebRTC isn’t necessarily P2P and P2P can’t be enforced – it is just a best effort thing.

So far so good. But somewhat boring and expected.

Let’s start looking at more interesting scenarios. Ones where we need a media server to handle the media:

#3 – WebRTC Media Server Direct Call, Centralized Signaling

[Figure: WebRTC media server direct call, centralized signaling]

Now things start to become interesting.

We’ve added a new entity into the mix – a media server. It can be used to record the calls, manage multiparty scenarios, gateway to other networks, do some other processing on the media – whatever you fancy.

To make things simple, we’ve dropped the relay via TURN. We will get to it in a moment, but for now – bear with me please.

Media

The media now needs to flow through the media server. This may look like the previous drawing, where the media was routed through the TURN server – but it isn’t.

Where the TURN server relays the media without looking at it – and without being able to look at it (it is encrypted end-to-end) – the media server acts as a termination point for the media and the WebRTC session itself. What we really see here is two separate WebRTC sessions – one from the browser on the left to the media server, and a second one from the media server to the browser on the right. This is important to understand – since these are two separate WebRTC sessions, you need to think about and treat them separately as well.

Another important note to make about media servers is that putting them on a public IP isn’t enough – you will still need a TURN server.

Signaling

On the signaling front, most assume that signaling continues as it always has. In which case, the media server needs to be controlled in some manner, presumably using backend-to-backend signaling with the application server.

This is a great approach that keeps things simple with a single source of truth in the system, but it doesn’t always happen.

Why? Because we have APIs everywhere. Including in media servers. And these APIs are sometimes used (and even abused) by clients running browsers.

Which leads us to our next scenario:

#4 – WebRTC Media Server Direct Call, Split Signaling

[Figure: WebRTC media server direct call, split signaling]

This scenario is what we usually get to when we add a media server into the mix.

More often than not, signaling will be done between the browser and the media server, while at the same time we will have signaling between the browser and the application server.

This is easier to develop and start running, but comes with a few drawbacks:

  1. Authorization now needs to take place between multiple different servers written in different technologies
  2. It is harder to get a single source of truth in the system, which means it is harder for the application server to know what is really going on
  3. Doing such work from a browser opens up vulnerabilities and attack vectors on the system – as the code itself is wide open and exposes more of the backend infrastructure

Skip it if you can.

Now let’s add that STUN/TURN server back into the mix.

#5 – WebRTC Media Server Call Relay

[Figure: WebRTC media server call relay]

This scenario is actually #3 with one minor difference – the media gets relayed via TURN.

It will happen if the browsers are behind firewalls, or in special cases when this is something that we enforce for our own reasons.

Nothing special about this scenario besides the fact that it may well happen when your intent is to run scenario #3 – hard to tell your users which network to use to access your service.

#6 – WebRTC Media Server Call Partial Relay

[Figure: WebRTC media server call partial relay]

Just like #5, this is also a derivative of #3 that we need to remember.

The relay may well happen only in one side of the media server – I hope you remember that each side is a WebRTC session on its own.

If you notice, I decided here to have signaling direct to the media server, but could have used the backend to backend signaling.

#7 – WebRTC Media Server and TURN Co-location

[Figure: WebRTC media server and TURN co-location]

This scenario shows a different type of a decision making point. The challenge here is to answer the question of where to deploy the STUN/TURN server.

While we can put it as an independent entity that stands on its own, we can co-locate it with the media server itself.

What do we gain by this? Less moving parts. Scales with the media server. Less routing headaches. Flexibility to get media into your infrastructure as close to the user as possible.

What do we lose? Two different functions in one box – at a time when micro services are the latest tech fad. We can’t scale them separately and at times we do want to scale them separately.

Know Your Flows

These are some of the decisions you’ll need to make if you go to deploy your own WebRTC infrastructure; and even if you don’t do that and just end up going for a communication API vendor – it is worthwhile understanding the underlying nature of the service. I’ve seen more than a single startup go work with a communication API vendor only to fail due to specific requirements and architectures that had to be put in place.

One last thing – this is 1 of 40 different lessons in my Advanced WebRTC Architecture Course. If you find this relevant to you – you should join me and enroll to the course. There’s an early bird discount valid until the end of this week.

The post How Media and Signaling flows look like in WebRTC? appeared first on BlogGeek.me.

IMTC: Supporting WebRTC Interoperability

Thu, 09/15/2016 - 12:00

Where is the IMTC focusing its efforts when it comes to WebRTC?

[Bernard Aboba, who is IMTC Director and Principal Architect for Microsoft wanted to clarify a bit what the IMTC is doing in the WebRTC Activity Group. I was happy to give him this floor, clarifying a bit the tweet I shared in an earlier post]

One of the IMTC’s core missions is to enhance interoperability in multimedia communications, with real-time video communications having been a focus of the organization since its inception. With IMTC’s membership including many companies within the video industry, IMTC has over the years dealt with a wide range of video interoperability issues, from simple 1:1 video scenarios to telepresence use cases involving multiple participants, each with multiple cameras and screens.

With WebRTC browsers now adding support for H.264/AVC as well as VP9, and support for advanced video functionality such as simulcast and scalable video coding (SVC) becoming available, the need for WebRTC video protocol and API interoperability testing has grown, particularly in scenarios implemented by video conferencing applications. As a result, the IMTC’s WebRTC Activity Group has been working to further interoperability testing between WebRTC browsers.

In the past, the IMTC has sponsored development of test suites, including a test suite for SIP over IPv6, and most recently a tool for testing interoperability of HEVC/H.265 scalable video coding. For SuperOp 2016, the WebRTC AG took on testing of WebRTC audio and video interoperability. So a logical next step was to work on development of automated WebRTC interoperability tests. Challenges include:

  1. Developing basic audio and video tests that can run on all browsers without rewriting the test code for each new browser to be supported.
  2. Developing tests covering not only basic use cases (e.g. peer-to-peer audio/video), but also advanced use cases requiring a central conferencing server (e.g. conferencing scenarios involving multiple participants, simulcast, scalable video coding, screen sharing, etc.)

For its initial work, IMTC decided to focus on the first problem. To enable interoperability testing of the VP9 and H.264/AVC implementations now available in browsers, the IMTC supported Philipp Hancke (known to the community as “fippo”) in enhancing automated WebRTC interoperability tests, now available at https://github.com/fippo/testbed. Sample code used in the automated tests is available at https://github.com/webrtc/samples.

The interoperability tests depend on adapter.js, a JavaScript “shim” library originally developed by the Chrome team to enable tests to be run on Chrome and Firefox. Support for VP9 and H.264/AVC has been rolled into adapter.js 2.0, as well as support for Edge (first added by fippo in October 2015). The testbed also depends on a merged fix (not yet released) in version 2.0.2. The latest adapter.js release as well as ongoing fixes are available at https://github.com/webrtc/adapter.

With the enhancements rolled into adapter.js 2.0, the shim library enables WebRTC developers to ship audio and video applications running across browsers using a single code base. At ClueCon 2016, Anthony Minessale of Freeswitch demonstrated the Verto client written to the WebRTC 1.0 API supporting audio and video interoperability between Chrome, Firefox and Edge.
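To give a sense of what a single code base means in practice, here's a minimal sketch using adapter.js – assuming the shim is loaded before this code runs:

    // With adapter.js in place, the standard APIs below run unchanged
    // on Chrome, Firefox and Edge - no browser-specific branches
    var pc = new RTCPeerConnection(null);
    navigator.mediaDevices.getUserMedia({ audio: true, video: true })
      .then(function (stream) {
        pc.addStream(stream); // shimmed where the browser lacks it
        return pc.createOffer();
      })
      .then(function (offer) {
        return pc.setLocalDescription(offer);
      })
      .catch(function (err) {
        console.error('Failed to set up the call:', err);
      });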

Got questions or want to learn more about the IMTC and its involvement with WebRTC? Email the IMTC directly.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post IMTC: Supporting WebRTC Interoperability appeared first on BlogGeek.me.

Do you still need TURN if your media server has a public IP address?

Mon, 09/12/2016 - 12:00

Yes you do. Sorry.

This is something I bumped into recently, and I was quite surprised it wasn't obvious, which led me to the conclusion that the WebRTC Architecture course I am launching is… mandatory. This was a company that had their media server on a public IP address, thinking that this would remove their need to run a TURN server. Apparently, the only thing it did was lower their connection rate.

It is high time I write about it here, as over the past year I actually saw 3 different ways in which vendors break their connectivity:

  1. They don’t put a TURN server at all, relying on media servers with public IP addresses
  2. They don’t put a TURN server at all, assuming STUN is enough for a peer to peer based service (!)
  3. They don’t configure the TURN server they use for TCP and TLS connectivity, assuming UDP relay is more than enough

Newsflash: THIS ISN’T ENOUGH

I digress though. I want to explain why the first alternative is broken:

Why a public IP address for your media server isn’t enough

With WebRTC, traffic goes peer to peer. Or at least it should:

But this doesn’t always work because one or both of the browsers are on private networks, so they don’t really have a public address to use – or don’t know it. If one of them has a public IP, then things should be simpler – the other end will direct traffic to that address, and from that “pinhole” that gets created, traffic can flow the other way.

The end result? If you put your media server on a public IP address – you're set for success.

But the thing is you really aren’t.

There’s this notion of IT and security people that you should only open ports that need to be used. And since all traffic to the internet flows over HTTP(S); and HTTP(S) flows over TCP – you can just block UDP and be done with it.

Now, something that usually gets overlooked is that WebRTC uses UDP for its media traffic – unless TURN relay over TCP/TLS is configured and necessary, which sometimes it is. I asked a colleague of mine about the traffic they see, and got something similar to this distribution table:

With up to 20% of the sessions requiring TURN with TCP or TLS – it is no wonder a public IP configured on a media server just isn’t enough.

Oh, and while we’re talking security – I am not certain that in the long run, you really want your media server on the internet with nothing in front of it to handle nasty stuff like DDoS.

What should you do then?
  1. Make sure you have TURN configured in your service
    • But make sure you have TCP and TLS enabled in it and included in your peer connection's configuration (see the sketch after this list)
    • I don’t care if you do that as part of your media server (because it is sophisticated), using a TURN server you cobbled up or through a third party service
  2. Check out my new WebRTC Architecture course
    • It covers other aspects of TURN servers, IP addresses and things imperative for a production deployment
    • The images used in this article come from the materials I’ve newly created for it
  3. Test the configuration you have in place
    • Limit UDP on your test machines, do it on live networks
    • Or just use testRTC – the service has simple mechanisms in place to run these specific scenarios
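To make the first item above concrete, here is a minimal sketch of a peer connection configuration with UDP, TCP and TLS relay all enabled – the hostnames, ports and credentials are placeholders, so use your own:

    var pc = new RTCPeerConnection({
      iceServers: [
        // UDP relay - the cheap, common case
        { urls: 'turn:turn.example.com:3478',
          username: 'user', credential: 'secret' },
        // TCP relay - for networks that block UDP
        { urls: 'turn:turn.example.com:3478?transport=tcp',
          username: 'user', credential: 'secret' },
        // TLS relay on port 443 - for networks that allow only HTTPS
        { urls: 'turns:turn.example.com:443?transport=tcp',
          username: 'user', credential: 'secret' }
      ]
    });

The TLS entry on port 443 is what saves you in the strictest networks – to the firewall it looks just like HTTPS traffic.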

Whatever you do though, don’t rely on a public IP address in your media server to be enough.

The post Do you still need TURN if your media server has a public IP address? appeared first on BlogGeek.me.

Should you use Kurento or Jitsi for your multiparty WebRTC video conference product?

Mon, 09/05/2016 - 12:00

Kurento or Jitsi; Kurento vs Jitsi – is this the ultimate head to head comparison for open source media servers in WebRTC?

Yes and no. And if you want an easy answer of “Kurento is the way to go” or “Jitsi will solve all of your headaches” then you’ve come to the wrong place. As with everything else here, the answer depends a lot on what it is you are trying to achieve.

Since this is something that gets raised quite often these days by the people I chat with, I decided to share my views here. To do that, the best way I know is to start by explaining how I compartmentalize these two projects in my mind:

Jitsi Videobridge

The Jitsi Videobridge is an SFU. It is an open source one, which is currently owned and maintained by Atlassian.

The acquisition of the Jitsi Videobridge serves Atlassian in two ways:

  1. Integrating Jitsi Videobridge into HipChat while owning the technology (it took the better part of the last 18 months)
  2. Showing some open source love – they did change the license of Jitsi from LGPL to APL

Here’s the intro of Jitsi from its github page:

Jitsi Videobridge is an XMPP server component that allows for multiuser video communication. Unlike the expensive dedicated hardware videobridges, Jitsi Videobridge does not mix the video channels into a composite video stream, but only relays the received video channels to all call participants. Therefore, while it does need to run on a server with good network bandwidth, CPU horsepower is not that critical for performance.

I emphasized the important parts for you. Here’s what they mean:

  • XMPP server component – a decision was made as to the signaling of Jitsi. It was made years ago, where the idea was to “compete” head-to-head with Google Hangouts. So the choice was made to use XMPP signaling. This means that if you need/want/desire anything else, you are in for a world of pain – doable, but not fun
  • does not mix the video channels – it doesn't look into the media at all, nor can it process raw video in any way
  • only relays the received video – it is an SFU

Put simply – Jitsi is an SFU with XMPP signaling.

If this is what you're looking for, then this baby is for you. If you don't want/need an SFU, or use some other signaling protocol, better start elsewhere.

You can find outsourcing vendors who are happy to use Jitsi and have it customized or integrated to your use case.

Kurento

Kurento is a kind of media server framework. This too is open source, but one that is maintained by Kurento Technologies.

With Kurento you can essentially build whatever you want when it comes to backend media processing: SFU, MCU, recording, transcoding, gateway, etc.

This is an advantage and a disadvantage.

An advantage because it means you can practically use it for any type of use case you have.

A disadvantage because there’s more work to be done with it than something that is single purpose and focused.

Kurento has its own set of vendors who are happy to support, customize and integrate it for you, one of which is the actual team that authors and maintains the Kurento code base.

Which one’s for you? Kurento or Jitsi?

Both frameworks are very popular, with each having at the very least tens of independent installations and integrations done on top of them, running in production services.

Kurento or Jitsi? Not always an easy choice, but here's where I draw the line:

If what you need is a pure SFU with XMPP on top, then go with Jitsi. Or find some other “out of the box” SFU that you like.

If what you need is more complex, or necessitates more integration points, then you are probably better off using Kurento.

What about Janus?

Janus is… somewhat tougher to explain.

Their website states that it is a “general purpose WebRTC Gateway”. So in my mind it will mostly fit into the role of a WebRTC-SIP gateway.

That said, I’ve seen more than a single vendor using it in totally other ways – anything from an SFU to an IOT gateway.

I need to see more evidence of production services using it for multiparty, as opposed to using it as a gateway component, before suggesting it as a solid alternative.

Oh – and there are other frameworks out there as well – open source or commercial.

Where can I learn more?

Multiparty and server components are a small part of what is needed when going about building a WebRTC infrastructure for a communication service.

In the past few months, I've noticed a growing number of requests rooted in challenges and misunderstandings of how WebRTC works and what it really is. People tend to focus on the obvious side of the browser APIs that WebRTC has, and forget to think about the backend infrastructure for it – something that is just as important, if not more.

It is why I’ve decided to launch an online WebRTC Architecture course that tackles these types of questions.

Course starts October 24, priced at $247 USD per student. If you enroll before October 10, there’s a $50 discount – so why wait?

The post Should you use Kurento or Jitsi for your multiparty WebRTC video conference product? appeared first on BlogGeek.me.

Will there ever be a decentralized web?

Mon, 08/29/2016 - 12:00

No. Yes. Don’t know.

I’ve recently read an article at iSchool@Syracuse. For lack of a better term on my part, pundits opining about the decentralized web.

It is an interesting read. Going through the opinions there, you can divide the crowd into 3 factions:

  1. We want privacy. Also we hate governments and monopolies. This is the largest group
  2. There’s this great tech we can put in place to make the internet more robust
  3. We actually don’t know

I am… somewhat split across all of these three groups.

#1 – Privacy, Gatekeepers and Monopolies

Like any other person, I want privacy. On the other hand, I want security, which in many cases (and especially today) comes at the price of privacy. I also want convenience, and at the age of artificial intelligence and chat bots – this can easily mean less privacy.

As for governments and monopolies – I don’t think these will change due to a new protocol or a decentralized web. The web started as something decentralized and utopian to some extent. It degraded to what it is today because governments caught on and because companies grew inside the internet to become monopolies. Can we redesign it all in a way that will not allow for governments to rule over the data going into them or for monopolies to not exist? I doubt it.

I am taking part now in a few projects where location matters. Where you position your servers, how you architect your network, and even how you communicate your intent with governments – all these can make or break your service. I just can't envision how protocols can change that on a global scale – and how the powers that be that need to promote and push these things will actively do so.

I think it is a good thing to strive for, but something that is going to be very challenging to achieve:

  • Most powerful services today rely on big data = no real privacy (at least not in front of the service you end up using). This will always cause tension between our desire for privacy and our desire for personalization and automation
  • Most governments can enforce rules in the long run in ways that catch up with protocols – or simply abuse weaknesses in products
  • Popular services bubble to the top, in the long run making them into monopolies and gatekeepers by choice – no one forces us to use Google for search, and yet most of us view search on the web and Google as synonymous

#2 – Tech

Yes. Our web is client-server for the most part, with browsers getting their data fix from backend servers.

We now have technologies that can work differently (WebRTC’s data channel is one of them, and there are others still).

We can and should work on making our infrastructure more robust. More resilient to malicious attackers and less prone to errors. We should make it scale better. And yes. Decentralization is usually a good design pattern to achieve these goals.

But if at the end of the day, the decentralized web is only about maintaining the same user experience, then this is just a slow evolution of what we’re already doing.

Tech is great. I love tech. Most people don’t really care.

#3 – We just don’t know

As with many other definitions out there, there’s no clear definition of what the decentralized web is or should be. Just a set of opinions by different pundits – most with an agenda for putting out that specific definition.

I really don’t know what that is or what it should be. I just know that our web today is centralized in many ways, but in other ways it is already rather decentralized. The idea that I have this website hosted somewhere (I am clueless as to where), while I write these words from my home in Israel, it is being served either directly or from a CDN to different locations around the globe – all done through a set of intermediaries – some of which I specifically selected (and pay for or use for free) – to me that’s rather decentralized.

At the end of the day, the work being done by researchers for finding ways to utilize our existing protocols to offer decentralized, robust services or to define and develop new protocols that are inherently decentralized is fascinating. I’ve had my share of it in my university days. This field is a great place to research and learn about networks and communications. I can’t wait to see how these will evolve our every day networks.

 

 

The post Will there ever be a decentralized web? appeared first on BlogGeek.me.

Are WebRTC room systems interesting again?

Mon, 08/22/2016 - 12:00

I get a feeling that the room system is actually about to change. And that’s probably a good thing.

For many years, video conferencing was defined by the “codec”. The “codec” in this case wasn't H.264 or any other specification of a video compression standard. It was the term given to the grey box sitting inside a meeting room, connected to a camera. For me, a better term for it was always the “room system”. The first ones started as purpose-designed, proprietary hardware, running proprietary embedded operating systems. They were connected to a specific camera that was either a part of the box or connected to it externally – but in most cases was again a proprietary camera.

There have been attempts in the past to replace the room system with something less expensive. I even remember GIPS (remember them? Google acquired them 6 years ago and made WebRTC out of them) writing a post on their blog on how to build your own video conferencing system from an Intel machine and a Logitech webcam. It was nice, but it really didn’t change the industry.

Little has changed in the video conferencing room system. When I stopped following that industry closely, which was a few years ago, things were still on the same trajectory:

  • Use proprietary hardware (the industry leaned towards the TI DSP at the time)
  • Use Embedded Linux as the OS (at the time, this was actually a refreshing sidestep from VxWorks)
  • Use an external proprietary camera (sourced from Sony if you wanted expensive highend or from another vendor if you wanted expensive “lowend”)

Software was taking the same design concepts of embedded platforms and closed systems at the time. You wrote ugly proprietary code from scratch with specialized UI frameworks. No fun at all.

When I decided to write my first posts about WebRTC, I wanted to share my views of what WebRTC will do to the video conferencing room system. I noted three changes we will see:

So how will we handle it now?

  1. Commodity hardware, probably still with proprietary cameras
  2. Android operating system
  3. WebRTC multimedia and a web browser for signaling and everything else

I wrote that more than 4 years ago. And it still hasn't happened. What I failed to see was how two additional changes were going to affect this industry:

  1. Migration towards cloud based deployments, services and business models (specifically in the video conferencing industry)
  2. Open hardware. Or at the very least, the constant grind of Moore’s Law and the stupidly capable hardware we have today

Hardware is cool again. IoT (the Internet of Things) made sure of that. Everything from wristbands, to drones, to self driving cars. Somehow, hardware startups had to also look at the video conferencing system.

Highfive was an early indication of that. A company conceived in 2012, just about the time I wrote my own thoughts on the video conferencing room system. To some extent, so was Double Robotics, who made use of an iPad and a Segway-like device. Both employed the cloud for their distribution, selling a service around their devices. They were pioneers in selling their own video “codec” (=room system) coupled with a service they host and manage.

In the past month, things seem to be progressing in this same trajectory. Three items on the news recently caught my attention:

#1 – HELLO

HELLO is a video conferencing room system created by Solaborate. Solaborate is a social business/collaboration platform that has been around for several years now. Their CEO, Labinot Bytyqi, was interviewed here a few years ago about Solaborate. I am not sure how they have been faring since then, but they must have been busy.

It seems that they are now adding a hardware component to the Solaborate platform in the form of HELLO. And what better place to go about doing that than a Kickstarter campaign?

HELLO Kickstarter

The thing I liked most is the image they shared of their first prototype:

For the uninitiated, that’s the Logitech C920 webcam, cut from its plastic contraption and glued together to something that looks like one of them Linux or Android-in-a-stick devices. Probably what holds the quad core ARM processor. Commodity hardware at its best.

Solaborate took a low goal for their Kickstarter campaign, passing it and then some. They will probably end up below the million dollar mark, but with a rather solid number of backers considering this is at the end of the day an enterprise product.

Oh – and did I mention they use WebRTC?

#2 – Pluot

Pluot is a new startup I came across over TechCrunch when they reported that Pluot raised $2.5 million.

The idea isn’t any different than the previous set of vendors. You get a small box and a camera, connected to the Pluot service.

From a hardware standpoint, it isn’t much different than the HELLO box. The camera from the picture is a Logitech C920 one.

The box, if you ask me, is too similar to an Intel NUC.

And it is actually running an Intel off-the-shelf commodity hardware:

The Pluot device is an Intel NUC running Ubuntu Core. […]

All the WebRTC media streams are peer-to-peer. […] That’s why we’re using an Intel Core i3 instead of a cheaper ARM option.

And yes. It is using WebRTC. And guess what? As with Skype, Pluot is also based on Electron (and, by extension, Chromium):

So we scratched our own itch and built a little appliance, using WebRTC and atom-shell (which is now electron).

Pluot took a different business model approach – one used extensively by mobile operators: the box is free and you pay for the monthly subscription service only.

Commodity hardware, commodity software, commodity video conferencing core inside a Chromium shell, powering the whole video conferencing service.

#3 – Cisco trimming its workforce

In seemingly unrelated news, Cisco is trimming down its workforce. Everywhere in the news that this is mentioned, it also comes with an indication that the cuts are mainly on the hardware side of the house. There’s a need to focus more on software these days.

As one of the biggest players in video conferencing room systems, I wonder what that means for Cisco. Is it a move towards leaner, more software-focused room systems? Are the room systems at Cisco considered hardware or software in essence? Will we see a shift in business models?

The room system is slowly starting to change and take a new shape.

This change isn’t just a technical one in the specification of the hardware and software, but goes a lot deeper than that. These changes come with a change of how the room system is built, which parts are developed and which are “sourced” from open source alternatives (or paid third parties), who offers the service and how the business model look like.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post Are WebRTC room systems interesting again? appeared first on BlogGeek.me.

Microsoft Acquires Beam, Showing the Value of WebRTC to Interactive Live Streaming

Mon, 08/15/2016 - 12:00

Low latency is critical for interactive live streaming.

Microsoft acquired Beam last week – a company focused on an interactive live streaming service for gamers.

According to CrunchBase, Beam had been around for almost 2 years before getting plucked by Microsoft. The investment in it was less than 0.5M USD.

For some reason unknown to me, there are people who love watching other people play games. I guess it is similar to some extent to people sitting down to watch a soccer game. Another thing I can't really understand. It is the reason why Twitch was acquired by Amazon for almost a billion dollars – a month prior to Beam's founding.

What Beam worked on was a way to enable viewers to be a part of the game and up their engagement. You do this by allowing viewers to push feedback to the gamers – add challenges to them, buy virtual goods for them, etc. From Beam’s website:

We make it possible for streamers to involve viewers in their gameplay, no matter what game they’re playing.

Want to let your viewers choose your weapon, make quests for you, or even fly a drone around your room? You can do that, all in realtime. Our SDK allows developers to create interactive experiences for existing games with as few as 25 lines of code.

In the console world, there are two major players – Microsoft Xbox and Sony PlayStation. With the acquisition of Beam, Microsoft is trying to build an ecosystem of viewers around the gamers and games offered in Xbox. Will they share the SDK and platform with Sony? It is too soon to tell, especially now that Microsoft is opening up and trying to build large ecosystem around its services as opposed to its operating systems. It might just be that Microsoft is trying to become a big player in gaming in general – not just console ones but also mobile.

Back to Beam and video streaming.

To enable richer interactions between viewers and gamers and to up that engagement, latency higher than a second is detrimental. This makes the HLS and MPEG-DASH protocols irrelevant. Flash is on its way out the window. The only technology left that can get to sub-second latency for real time video streaming is WebRTC.

 

WebRTC is exactly what Beam has been using in their “protocol”, dubbed FTL. It uses WebRTC to stream video to the viewers instead of the more traditional mechanism of Flash.

I have been a believer in WebRTC for live streaming and broadcast for over a year now. It is just another place where WebRTC makes a lot of sense, but it will take time for us to get there. The main reason for that is that current implementations are too focused on video chat scenarios – trying to leverage the WebRTC implementation found in Chrome and hooking it up to backend media servers that are again geared towards video chat use cases.

There are 4 different techniques by which WebRTC can be leveraged for interactive live streaming (or streaming in general):

  1. Use WebRTC’s data channel as a replacement for HTTP(S) to send video packets
    • Theoretically, this should be faster than HTTP and enables buffering optimizations
    • No one has taken that route yet as far as I can tell (a rough sketch of the idea follows below)
  2. Build a kind of P2P CDN on top of WebRTC’s data channel
    • Think BitTorrent inside the browser
    • Peer5 and a few other vendors are doing just that
  3. Use WebRTC in its full glory – voice and video channels opened and streamed
    • Acquire the original live stream using WebRTC or some other mechanism, and then use WebRTC to connect the viewers via a VOD like architecture to the broadcast
    • Probably the most wasteful of all approaches
    • And the one I am guessing Beam is currently employing
  4. Optimize on (3) to offer something akin to a Flash/HLS streamer
    • Handle multiple bitrates and resolutions
    • Be able to get high density of streams in a single machine

Options (1) and (2) require knowledge of networking.

Option (2) requires knowledge of P2P networks.

Option (3) requires WebRTC knowledge at its basic level.

Option (4) means you practically implement a WebRTC stack of your own with a focus on live streaming.
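To make option (1) a bit more concrete, here is a rough receiver-side sketch: media segments arrive over a data channel and get fed to the player through Media Source Extensions. Everything in it is an assumption for illustration – the codec string, and especially the naive handling of segment boundaries:

    // Assumes 'pc' is an already-negotiated RTCPeerConnection and the
    // far end pushes complete WebM segments as ArrayBuffer messages
    var video = document.querySelector('video');
    var mediaSource = new MediaSource();
    video.src = URL.createObjectURL(mediaSource);

    mediaSource.addEventListener('sourceopen', function () {
      // Hypothetical codec string - must match what the sender produces
      var buffer = mediaSource.addSourceBuffer('video/webm; codecs="vp9"');
      pc.ondatachannel = function (event) {
        event.channel.binaryType = 'arraybuffer';
        event.channel.onmessage = function (msg) {
          // Naive: real code must queue appends while the buffer is
          // updating, and reassemble segments split into smaller chunks
          buffer.appendBuffer(msg.data);
        };
      };
    });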

My guess is that with time, we will see vendors implementing options (2) and (4) which will be the winning architectures for live streaming.

Option (2) will be deployed to support today’s use cases, while option (4) will be deployed to support future use cases, where interactivity between viewer and broadcaster are important.

Beam took the right challenge on itself. It got the company acquired in a short timespan, and in a way redefined live streaming and low latency.

For Microsoft, this is yet another acquisition in the WebRTC space, and another area in which it now relies on this technology – even without supporting it on IE.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post Microsoft Acquires Beam, Showing the Value of WebRTC to Interactive Live Streaming appeared first on BlogGeek.me.

WebRTC Plugin? An Electron WebRTC app is the only viable fallback

Mon, 08/08/2016 - 12:00

I was meaning to write something about Skype, Linux and WebRTC. But never got around to it. Until now.

The reason why I decided to write about it eventually? This tweet by Alex:

IMTC (Microsoft, Cisco, polycom, unify, sonus, …) to provide free (no cost) and free (do what you want) webrtc plugin for I.E. And Safari.

— Dr. Alex. Gouaillard (@agouaillard) August 3, 2016

Hmm. The IMTC is planning to offer a FREE plugin for IE and Safari.

Sounds like Temasys, coming from the person who worked at Temasys at the time their plugin was released – a plugin that is now a commercial offering rather than a free one.

While some like this plugin, others don’t. They tried it and decided that the warning messages it pops up when being installed aren’t worth the effort.

The Electron WebRTC app approach

What did catch my eye was the Skype for Linux announcement. This is an alpha release of the Skype app for Linux – something that Microsoft has been neglecting for quite some time now.

The interesting bit isn't that Microsoft is actively investing in a Linux version of Skype and acknowledging this part of its user base, but rather how they did it and the stance they took.

Here are a few lines from the announcement on the Skype community site:

The new version of Skype for Linux is a brand new client using WebRTC, the launch of which ensures we can continue to support our Linux users in the years to come.

[…] you’ll be using the latest, fastest and most responsive Skype UI, so you can share files, photos, videos and a whole new range of new emoticons with your friends.

The highlighted text is my own addition.

Here are my thoughts:

  • This is implemented on top of WebRTC and not ORTC. In a way, we've gone full circle with Microsoft – from ORTC, to adding WebRTC support in Edge, to using WebRTC to develop their own products where needed
  • Microsoft gives the best reasoning behind using WebRTC in its own development: to ensure continued support for Linux
    • For the most part, using WebRTC equates to better support for more devices and platforms than any other technology out there today
    • Yes. You still need to put some effort into getting it working on some platforms – but with a lot less of a hassle than any other technology and at a lower cost
  • Responsive Skype UI = HTML5. So there’s some browser engine / rendering engine for HTML in there somewhere
  • Latest and fastest…

It turns out Microsoft decided to use Electron.

What is Electron? It is a framework around Chromium that can be used to create desktop apps from web apps. And it is the most popular platform for doing so these days.

The irony.

Microsoft. Who owns, develops and promotes IE and Edge. Who was against WebRTC and for ORTC. That Microsoft used Chromium (effectively Chrome) to bring its Linux Skype app to market.

A few years ago, that would have been unheard of. Today? It makes too much sense – it actually increased the value of Microsoft in my eyes. Making the most practical decision of all and putting the ego aside.

Back to a WebRTC Plugin

So.

The IMTC is now investing its time and effort in a WebRTC plugin. Call me skeptic, but I can’t see this heading in the right direction.

Here’s why:

  • The IMTC is an interoperability group. Its strength lies in getting multiple vendors into the same room and having them test their products against each other. “their products” being products that follow the same specification and end up being deployed in the same network and service
  • Companies put their money into the IMTC to gain access to these testing services
  • The problem with WebRTC and the IMTC is that WebRTC doesn’t really require interoperability per se – besides that between browser vendors. And browser vendors aren’t exactly the type of audience the IMTC caters for. To be exact, Microsoft is the only browser vendor who is part of the IMTC – and that’s probably for their Skype for Business product and not Edge or IE
  • Writing and maintaining a WebRTC plugin is hard work. It gets updated too frequently to be considered a one-time effort, so maintaining it comes at a cost – a type of cost that is new to the IMTC and its member companies

I believe it will be hard for the IMTC to maintain such a plugin on their own, and if the idea is to open source it to the larger community so the external community can take it up and continue to work and maintain it for the IMTC then that’s just wishful thinking. Open source projects are not synonymous with community development – they don’t all get picked up, adopted, used and maintained by the masses. The webrtc-everywhere project on github shows that – 2 contributors, a few forks, but not much of a collaboration or community around it.

Since the IMTC is a group of vendors who all seek interoperability with the spec while maintaining a technical advantage over the rest of the vendors (I was there once), I can't see them cooperating on the long term development of such a thing, putting resources into it while contributing back to the community.

Furthermore, do we really need a WebRTC plugin?

Yes. I know. Safari. Important. IE. All those poor enterprise guys forced to use it. You can’t live without it and such.

But guess what? That same target market? How receptive do you think it will be for a plugin? What will be the install rate and usage rate for a plugin in such environments?

I have a warm place in my heart for the IMTC, but I think it is losing its way when it comes to WebRTC. I can’t see how a free plugin for WebRTC today will make a change. There are better things to focus on.

What to do in 2016 with WebRTC on IE/Safari?

There are two use cases here:

  1. I need to use the service daily
  2. I just want to get on a URL and do whatever needs to be done (call a doctor for example)

The first one can be solved with an installed PC app. A quaint choice maybe, but one which seems to be popular with comms vendors who started from the web. Think Slack or even Whatsapp – they both have a PC app. If you are using a service daily, the idea goes, you might as well have it somewhere handy in the background of your PC instead of having it open in a browser tab all the time.

The second one is where things get nasty. Asking for a plugin installation for it is just like asking for an app installation for it. Maybe worse if the installer of the plugin comes with a large set of browser warnings (because browsers now hate plugins). So you might just rethink the app option – or just ask the user to come back with a better browser.

My suggestion?

Explore the option of using Electron instead of a plugin.
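To show how low the barrier really is, here's a minimal sketch of an Electron main process wrapping a WebRTC web app – the app URL is a placeholder:

    // main.js - run with: electron main.js
    var electron = require('electron');
    var app = electron.app;
    var BrowserWindow = electron.BrowserWindow;

    var win = null; // keep a reference so the window isn't garbage collected

    app.on('ready', function () {
      win = new BrowserWindow({ width: 1024, height: 768 });
      // Chromium inside Electron ships full WebRTC support, so the
      // web app runs as-is - no plugin and no browser detection
      win.loadURL('https://app.example.com');
    });

Package that with your web app and you have an installable PC app for the daily-use case, while browsers that support WebRTC keep getting the plain web version.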

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post WebRTC Plugin? An Electron WebRTC app is the only viable fallback appeared first on BlogGeek.me.

Surprise: Free Video Calling is no Guarantee for Success (or Adoption)

Mon, 08/01/2016 - 12:00

Guess what? Mozilla is removing Hello from Firefox.

It will still be available as an add-on, but it seems to have degraded in its importance to Mozilla, which is understandable.

Goodbye Hello

What is/was Hello?

Hello was Mozilla’s attempt to build a video calling service. Something that is baked right into the browser, but can be used by any browser supporting WebRTC. Think FaceTime or Hangouts but without the app or even a website.

Mozilla partnered for Hello with TokBox (a Telefonica company), which provided the backend of the service – mainly NAT traversal, as far as I can tell.

When Hello was announced, I had my doubts and questions about it.

What went wrong?

A few things were wrong from the onset in Firefox Hello:

  1. While it debuted on a desktop browser, its main purpose was mobile. The problem is that Firefox OS got scrapped/pivoted, leaving Hello with no real use
  2. It came at a low point in Mozilla’s history. Mozilla partnered during 2014 with 3 vendors, trying to reduce Google’s hold on it: Yahoo, Cisco and Telefonica
    • Yahoo is all but dead – it just got acquired by Verizon
    • Telefonica needed Firefox OS on mobile, and now that that hasn’t matured, my guess is that its interests lie elsewhere these days, so having Telefonica/TokBox as part of Hello probably isn’t helping too much today
    • Cisco only wanted to protect its H.264 investments, in which it succeeded
    • This cost Mozilla in focus and diluted its brand from being a pure open alternative
  3. Firefox has no real network effect or user base to rely on. It doesn't connect users to one another but rather connects viewers to web pages. Having hundreds of millions of viewers doesn't equate to monthly active users for a personal communication tool baked into that same product
  4. Hello was simple, but offered nothing interesting/innovative/new/needed. People who used apps continued to use apps. Those that wanted to meet over URLs used URLs. Having the button in the browser wasn’t enough to make people leap for the opportunity to use it
  5. While available in all WebRTC supporting browsers (=Chrome & Firefox), it was really a Firefox thing. This limited the user base, and especially the ability to start or to really receive a call over a mobile device

The main issue though is that a free video calling service isn’t that much of a deal these days (if this surprises you – just ask Google).

So Mozilla started by embedding Hello right into the browser. Then making it into a system add-on. And now it is making it into just another add-on. I assume it has a lot to do with the usage they’ve seen over the past year for Hello (and its non-adoption). It makes no sense to continue investing the time and effort in it if no one is using it – and having it officially released with the browser once every few months is a waste. Better throw it out of the browser and simplify the browser releases.

The next step might be to sunset the add-on/service altogether and say goodbye to Hello.

Is this predictive to Google’s Duo app?

Google announced Duo and is about to release it. Simplifying things a bit (and dumbing it down), Duo is a FaceTime clone. I covered Allo/Duo a few months back.

At face value, there's no reason why Google Duo won't meet a similar fate to Mozilla Hello.

That said, there are a few notable differences:

  • Duo is a mobile only app, whereas Hello focused on desktop browsers
  • Duo will probably be released on Android and iOS, covering 100% of the mobile market from day one
  • Google has a large user base on Android and the ability to get Duo in front of users. It also has the social graph of these people – via the phone's address book
  • While Google kept Duo simple, it did bake two features into it:
    • Speed of connectivity, taking it to the extreme by adding QUIC into the mix
    • Caller’s video sent even before you accept the call

Will this be enough for Google Duo to get the adoption? I don’t know.

Where do we go from here?

In 2016 there should be no doubt anymore:

If you plan to monetize a video calling service, you need a serious business plan.

Most services I see launched have no business plan. They attempt to grow to millions of users. There’s a lot of dumb luck involved in it.

I’ve had my doubts about the viability of Wire as a company due to the same reasons. The only progress made by Wire is open sourcing their app – this doesn’t strike me like a business plan or a signal of strength and healthy growth.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post Surprise: Free Video Calling is no Guarantee for Success (or Adoption) appeared first on BlogGeek.me.

VP9 Hardware Acceleration is Real

Mon, 06/20/2016 - 12:00

Hardware acceleration for video codecs is almost mandatory.

VP9 is getting a performance boost

There are three things that keep VP8 in the game when compared to H.264:

  1. It was the only video codec in Chrome for WebRTC in the last 5 years, giving it a headstart in deployments
  2. H.264, while available in mobile chipsets, isn't always accessible to the developer (or doesn't always work as it should when it is accessible)
  3. VP8 and H.264 are rather old now, so software implementations of them are quite decent

 

With VP9, the main worry was that it would be left behind and not get the love and attention of chipset vendors – leading it to the same fate as VP8: abysmal, if any, hardware acceleration support. It is probably why Google went to great lengths to get it running on YouTube so soon and is publicizing its stats all the time.

This worry is now rather behind us. Recent signs show some serious adoption from the companies that we should really care about:

#1 – ARM

Mobile=ARM

Without checking stats, I’d say that 99% or more of all smartphones sold in the past 5 years are based on ARM.

If and when ARM decides to support a feature directly, that brings said feature very close to world domination in future smartphones.

Which is more or less what happened last week – ARM announced its Mali Egil Video Processor with VP9 acceleration.

Here’s a deck they shared:

ARM Mali "Egil" technical preview from Phil Hughes

Being farther away from chipsets than I was 5 years ago, it is hard for me to say if this is an integral part of an ARM processor, but I believe that it isn't. It is an add-on component that takes care of video processing, which chipset vendors add next to their ARM core. They can source the design from ARM or other suppliers – or they can develop their own.

Not sure how popular the ARM alternative is for video processing, but they have the advantage of being the first alternative for any chipset vendor (hell – they already source the ARM core itself, so why not bundle?). Which also means every other vendor needs to match up to their feature set – and improve on it.

Now that VP9 encode/decode capabilities are front and center in the ARM Mali Egil, it has become a mandatory checkmark for everyone else as well.

#2 – Intel

If ARM is the king of mobile, then Intel rules the desktop.

As with ARM, I haven’t been following up on Intel CPU acceleration lately. And as with ARM, it was Fippo who got my attention with this link here: the new Intel Media SDK.

For those who don’t know, Intel is providing several interesting software packages that make direct use of its chipset capabilities. Especially when it comes to optimizing different types of workloads. The Intel IPP and Media SDKs handle media related processing, and are quite popular by low level developers who need access to such facilities.

From the release page itself:

With this release we are happy to announce new full hardware accelerated support for HEVC and VP9.

  • HEVC Main 10 (10-bit) encoder and decoder support
  • VP9 8-bit and 10-bit decoder support

So… HEVC (=H.265) has encode and decode while VP9 only has decode support.

Probably because HEVC has been in the works for a lot longer than VP9, but there’s hope still.

#3 – Alliance of Open Media

The Alliance of Open Media. I’ve published a recent update on the alliance.

Intel was there from the start. The recent additions include ARM, AMD and NVIDIA.

I am sure additional chipset vendors will be joining in the coming months – there seems to be a ramp up in memberships there, with Ateme and Adobe added to their logos just last week.

While the alliance is about what comes after VP9, it is easy to see how these vendors may sway towards using VP9 in the interim.

The Future

The future is most definitely one of royalty free video codecs. We got there with voice, now that we have OPUS (though Speex and SILK were there before to pave the way). We will get there with video as well.

Coding technologies need to be accessible and available to everyone – freely – if we are to achieve Benedict Evans’ latest claims: Video is the new HTML. But for that, I’ll need another post.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post VP9 Hardware Acceleration is Real appeared first on BlogGeek.me.

Will Microsoft’s Acquisition of LinkedIn Change the WebRTC Landscape?

Tue, 06/14/2016 - 12:00

It’s good to have Fippo when there’s lack of ideas in your head.

While synergies abound, flawless execution is necessary

Yap. Fippo again prodded me about a topic, so here comes the post for it.

If you missed it, yesterday Microsoft acquired LinkedIn. $26.2B.

In some ways, Microsoft now rules the enterprise space – communication, collaboration and creation:

  • Microsoft Office suite (Excel, PowerPoint and Word as the main pillars)
  • Microsoft Outlook and the Exchange server (Email)
  • Yammer (Enterprise communications)
  • Skype (Voice and video communications)
  • LinkedIn (User identities and profiles)

Dean Bubley puts it nicely:

The @microsoft / @linkedin deal has nailed enterprise comms federation. Complete map of who knows whom. Add Skype4B & goodbye telephony

— Dean Bubley (@disruptivedean) June 13, 2016

There’s a longform here, but I am less convinced.

I am more inclined to how Radio Free Mobile sees this:

However, for all of this to work, LinkedIn’s systems and data has to become deeply integrated with those of Microsoft which with the companies remaining independent, will be orders of magnitude more difficult.

Microsoft of late has an issue with the ability to execute and follow through.

Skype, while huge, hasn't grown since Microsoft's acquisition. It is actually letting others take its place.

Same with Yammer. Have you heard anything about it in the last few years? The news is all about Slack, and worse still – it is about how Atlassian’s HipChat is struggling because of Slack – Yammer isn’t even mentioned as a competitor/contender in this space.

Which brings us to LinkedIn, Microsoft’s intents for it and its ability and willingness to follow through.

Back to LinkedIn

I wrote about LinkedIn exactly a year ago. It was about their acquisition at the time of Lynda, a learning company, and me griping on why LinkedIn isn’t doing anything about comms (and WebRTC).

The people at LinkedIn aren’t stupid. They are $26.2B smarter than I am. And frankly, that’s also $17.7B smarter than Skype.

What does that tell us?

  • LinkedIn saw no real value in real time communications
    • Not enough to invest in it and build something with WebRTC
    • Not enough to acquire someone outright
    • Not enough to partner with and integrate someone like Skype (Facebook did that in the past, for example)
  • That decision played well for LinkedIn – they just got acquired
  • Messaging isn’t that important to LinkedIn either
    • They have rudimentary messaging capability in their platform
    • But it is lacking in so many ways that it is hard to enumerate them
    • And you can’t call its messaging anything similar to… messaging. If feels more like emails

If LinkedIn can’t find value in real time communications for its platform on its own, can Microsoft do a better job at it?

I don’t know.

Now let's look at the Microsoft assets that can be integrated with LinkedIn.

Skype and LinkedIn

As Dean suggested, there is some synergy in Skype connecting to LinkedIn.

LinkedIn can slap a Skype button on its profiles, making it easy to connect to the people you’re connected with on LinkedIn.

While that’s great, most communication today happens OUTSIDE of LinkedIn. You reach out to people on it, connect with them, and then shift to email and other means of communications. Especially once you know a person to some extent.

To make a point – I wouldn’t send a message to Dean over LinkedIn – I’ll make it over email. Or just ping him on Skype, because that’s where he is.

When someone asks me for an introduction, it usually goes like this: “I saw you are connected to John Doe on LinkedIn. Can you send an intro email for me?”. It happens a lot less on LinkedIn even when it is driven from LinkedIn.

Getting the communication back to LinkedIn will be hard. Getting slightly more communications from LinkedIn directly to Skype is possible, though I am not sure it will be widely accepted.

Yammer and LinkedIn

Yammer isn’t best of breed in enterprise messaging. Not even sure if doing anything with it and LinkedIn is worth the effort.

My suggestion is to open the coffers and take out a few more billions of dollars and acquire Slack. Then throw out all voice integrations and bolt Skype in there. But that has nothing to do with LinkedIn.

Outlook/Exchange and LinkedIn

Email is what drives LinkedIn in the most effective way.

Having the ability to embed and merge profiles properly into Outlook – without any ugly add-ons – that’s great.

But nothing earth shattering that we haven’t seen before with Rapportive on Gmail.

Office and LinkedIn

I guess that having a tighter integration between PowerPoint and Slideshare would be great. But that isn’t the reason LinkedIn was acquired.

Sarah Perez of TechCrunch wrote about the integration of Office and LinkedIn. It includes Outlook. Focuses on Outlook.

And mostly goes one-way: how LinkedIn can enrich Office/Outlook related information. A bit on how Office can enrich LinkedIn data by adding more users. But nothing about how LinkedIn’s functionality can grow. A shame.

If this is where things are headed – growing Office but not growing LinkedIn – then I am afraid LinkedIn can expect a similar fate to Yammer and Skype. Its days of greatness will be behind it, and its level of innovation and introduction of powerful features that can compete in the market will come to an end.

Other Domains

Cortana and Microsoft’s CRM are areas I missed. You can read more about them in Richard’s analysis on Radio Free Mobile.

The Corporate Structure

It seems that LinkedIn will sit as an independent entity within Microsoft under Satya Nadella directly.

I wonder how that will make things easy for the tight integrations envisioned between LinkedIn and the rest of Microsoft's assets. How easy will it be to get the Skype team to cooperate and assist the LinkedIn team in integrating Skype for Web? What will the Office team want in return for the data they will be passing to LinkedIn? Will legal even authorize it?

There will be a lot of coordination taking place here, and I do hope that along the way they won't lose sight of what needs to be done – there are a lot of synergies and a lot of power here, but this will require a lot of agility from a huge company.

Back to WebRTC

This affects larger players in the UC space. If (and that's a big if) Microsoft can connect the dots between Office, Exchange, Skype and LinkedIn – this makes for a very compelling offering. One that can differentiate it from, and top, Cisco and Google.

If Microsoft can make LinkedIn into the congregation point of people across enterprises – and not only a place to find CVs – it will be in a position to expand its offering towards real time communications in ways that others will find hard to compete against. LinkedIn lacked this vision. I wonder if Microsoft can follow through – or will they as well see it as unnecessary.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post Will Microsoft’s Acquisition of LinkedIn Change the WebRTC Landscape? appeared first on BlogGeek.me.

The Alliance of Open Media – 10 Months in

Thu, 06/09/2016 - 12:00

How time flies.

About 10 months ago, the announcement of the creation of a new alliance caught me off guard.

Somehow, Google, Microsoft and a few other companies put their differences aside and decided to create the Alliance of Open Media. The intent – create a royalty free video codec to rival H.265/HEVC. I've written about the Alliance of Open Media before. It is time to revisit the topic.

A few things happened these last few months that are worth mentioning:

  1. We’ve learned more about the alliance – Jan Ozer  wrote a good progress report
  2. AMD, ARM and Nvidia joined the alliance
  3. Ittiam joined the alliance
  4. Vidyo joined the alliance

I am told work is being done on the actual codec itself. From the report Jan Ozer wrote, the following is apparent:

  • Baseline for the codec is VP10 (Google)
  • Most contributions of technologies on top of it come from Mozilla and Cisco; though I assume Microsoft is contributing there as well
  • Hardware vendors are putting their weight to make sure the algorithms used are easy to place in a hardware design
  • There’s a focus on GPU acceleration, which is important
  • Intent is to have it integrated into a browser by the beginning of 2017 and have hardware acceleration a year later

All the right moves.

ARM and Nvidia

Adding ARM and Nvidia is quite a catch.

ARM is in charge of the architecture of most smartphones on the market today, along with many of the IOT devices out there. Having them on board means that considerations for mobile and low power devices are taken into consideration by the alliance – but also that the work of the alliance will find its way into future designs of ARM.

Nvidia is where you find GPU processing power. They complement the attendance of Intel, bringing the important GPU players to the table. In a recent whitepaper I've written for Surf, I touched on the GPU issue briefly. I've done some research in that domain, and it does seem like the GPU is the best candidate to handle our future video coding – having the relevant GPU players on board from the start is an important catch for the alliance.

Ittiam

Ittiam is a recent addition to the alliance.

I’ve had the chance to know Ittiam a decade ago, while competing head to head with their VoIP software. They have expertise in the multimedia space and in video compression, but they still are the smallest (or least relevant) player in this alliance. Having them is required to fill in the ranks and grow in numbers.

It would be nice to see others join such as Imagination Technologies (who are larger and a lot more meaningful).

Vidyo

Vidyo just joined the alliance. On one hand, it surprised me. On the other hand, it shouldn't have.

Vidyo has been collaborating with Google for a long time now on VPx and WebRTC. Recently it reiterated that with the work it is doing on VP9 SVC for WebRTC (you can find out more about it in a guest post Alex Eleftheriadis shared here on scalability and VP9).

Their addition to the alliance means several things:

  • Vidyo is making itself an integral part of every initiative related to future video codecs. This is a smart move, as it maintains its lead in the backend side and the smarts that is placed on top of SVC capabilities
  • This future codec will have SVC support in it, hopefully from the moment it is released to market
  • While a smaller company compared to the other members, Vidyo's contribution to the alliance can be larger than that of many other members

Qualcomm

Qualcomm is missing.

So is Samsung.

And a few other smaller mobile chipset vendors.

I think it is their loss, as well as a missed opportunity.

They both should have joined the alliance at its inception.

Apple

Apple being Apple, they aren’t a part of it. Putting ads in the App Store and changing subscription revenue sharing models were more important to them, which is understandable.

The thing I don’t understand here is that Apple has removed most of its support in H.265. What does it have to lose by joining the alliance?

There are three paths available to Apple:

  1. Go with H.265. The current reduction in its support of H.265 can only be explained as a negotiation tactic in such a case
  2. Go with the Alliance of Open Media. Which it could do at any point in time. But if that is the case, then why wait?
  3. Release its own unique iCodec. Apple knows best, and it is time to lock in its customers a bit further anyway

I wonder which route they are taking here.

Content Creators and Service Providers

We’ve got YouTub, Netflix and Amazon already covered. The internet may rejoice.

But what about Game of Thrones? Or the next movie blockbuster? Are they staying on the route of H.265 or will they veer away from it towards the alliance?

Hard to tell, though for the life of me, I can’t understand a long term decision of staying with H.265.

It would be nice to see the large studios and even Bollywood join the alliance – or at the very least back it publicly.

Timeline

If we look at the VP9 timeline, we have the following estimates:

  • 1 year – Chrome decoding, along with a small percentage of YouTube videos supported
  • 2 years – First chipsets and reference designs support. My bet is on Nvidia and Intel here
  • 2.5 years – Chrome official support of it for WebRTC

H.264 in WebRTC

H.264 is here to stay. More worrying – H.264 will grow in popularity in WebRTC services during 2016.

This progress and success of the alliance changes nothing in the current ecosystem and the current video technology.

The future of H.265

The future of H.265 does look grim. I do hope the alliance will kill it.

H.265 is on a collision course with VP9. It is still the more “popular” choice in legacy businesses, but that may change, as commercial deployments of it are small or non-existent.

The alliance simply means that the future codec will be based on the VPx line of codecs instead of the H.26x one. Developers shifting from H.264 to a better codec will now need to decide if they switch codec lines now or later.

The royalty issues around H.265 along with the progress made in the alliance should tip the scales towards VP9 on this one.

What’s next?

Money time.

Where does that leave us all?

  • Vendors who handle codecs directly should join the alliance. The benefits outweigh the risks.
  • Consumers and users can continue not caring
  • Developers, especially those of backend media servers, need to decide if they shift towards VP9 or wait for the next generation to switch to a royalty free codec. They also need to decide if they want to use VP8 or H.264 today
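On that last decision, codec selection from the browser today still comes down to reordering payload types in the SDP before applying it. A rough sketch – it deliberately ignores the rtx/fec payload types a real implementation would move along with the codec:

    // Move a codec to the front of the m=video line so the browser
    // prefers it during negotiation
    function preferVideoCodec(sdp, codec) {
      var lines = sdp.split('\r\n');
      var mLineIndex = -1;
      var payloads = [];
      lines.forEach(function (line, i) {
        if (line.indexOf('m=video') === 0) mLineIndex = i;
        var match = line.match(new RegExp('^a=rtpmap:(\\d+) ' + codec + '/'));
        if (match) payloads.push(match[1]);
      });
      if (mLineIndex === -1 || payloads.length === 0) return sdp;
      // First three fields are "m=video <port> <proto>"; the rest are payloads
      var parts = lines[mLineIndex].split(' ');
      var rest = parts.slice(3).filter(function (pt) {
        return payloads.indexOf(pt) === -1;
      });
      lines[mLineIndex] = parts.slice(0, 3).concat(payloads, rest).join(' ');
      return lines.join('\r\n');
    }

    // Usage, e.g. after createOffer():
    // offer.sdp = preferVideoCodec(offer.sdp, 'VP9');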

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post The Alliance of Open Media – 10 Months in appeared first on BlogGeek.me.
