News from Industry

New Kamailio module: acc_json

miconda - Wed, 04/11/2018 - 18:43
Julien Chavanton from Flowroute recently added a new module: acc_json.

The module builds JSON documents from the accounting records and can send them to mqueue, to be consumed by other processes, or write them to syslog. For example, when configured with mqueue, the consumers (e.g., started with the rtimer module) can send the accounting JSON document to an external system via HTTP (see the http_client or http_async_client modules), RabbitMQ, NSQ, or even as the payload of a new SIP message (see the uac module).

More details about the acc_json module can be read at:

And do not forget about the next Kamailio World Conference, taking place in Berlin, Germany, during May 14-16, 2018. It is the place to network with Kamailio developers and community members!

Thanks for flying Kamailio!
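
As a sketch of what such an mqueue consumer might do, here is a minimal Python illustration of the pattern described above: parse one accounting JSON document and hand it to a transport callback (an HTTP POST in a real deployment). The `forward_acc` helper and the field names are hypothetical – the actual keys depend on how acc_json is configured in kamailio.cfg.

```python
import json
from typing import Callable

def forward_acc(doc: str, post: Callable[[bytes], int]) -> int:
    """Parse one accounting JSON document (as acc_json would emit)
    and forward it via the given transport callback.

    The field names checked here are illustrative only."""
    record = json.loads(doc)
    # Minimal sanity check before shipping the record onwards.
    if "method" not in record:
        raise ValueError("not an accounting record")
    return post(json.dumps(record).encode("utf-8"))

# A consumer (e.g. one started by rtimer) would call this for every
# document popped from the mqueue, with `post` doing the actual HTTP
# POST (http_client / http_async_client on the Kamailio side).
sample = '{"method": "INVITE", "from_tag": "a1b2", "code": "200"}'
status = forward_acc(sample, post=lambda body: 200)
```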

WebRTC 1.0 Training and Free Webinar Tomorrow (on Tuesday)

bloggeek - Sun, 04/08/2018 - 12:00

Join Philipp Hancke and me for a free training on WebRTC 1.0, prior to the relaunch of my advanced WebRTC training.

Here’s something that I get at least once a week through my website’s chat widget:

It is one of the main reasons why I’ve created my advanced WebRTC course. It is a paid WebRTC course that is designed to fill in the gaps and answer the many questions developers face when needing to deal with WebRTC.

Elephants, Blind Men, Alligators and WebRTC

I wanted to connect it to the parable of the six blind men and an elephant, explaining how wherever you go on the Internet, you are going to get a glimpse of WebRTC and never a full, clear picture. I even searched for a good illustration to use for it. Then I bumped into this illustration:

It depicts what happens with WebRTC and developers all too well.

If you haven’t guessed it, the elephants here are WebRTC and the requirements of the application and that flat person is the developer.

This fits well with another joke I heard yesterday from a friend’s kid:

Q: Why can’t you go into the woods between 14:00-16:00?

A: Because the elephants are skydiving

There’s a follow up joke as well:

Q: Why are the alligators flat?

A: Because they entered the woods between 14:00-16:00

WebRTC development has a lot of rules. Many of which are unwritten.

WebRTC 1.0

There are a lot of nuances to WebRTC. A lot of written material, old and new – some of it irrelevant now, the rest possibly correct but jumbled. And WebRTC is a moving target; it is hard to keep track of all the changes. There’s a lot of knowledge around WebRTC that is required – knowledge that doesn’t look like an API call and isn’t written in the standard specification.

This means that I get to update my course every few months just to keep up.

With WebRTC 1.0, there’s both a real challenge as well as an opportunity.

It is a challenge:

  • WebRTC 1.0 still isn’t here. There’s a working draft, which should get standardized *soon* (where “soon” started in 2015 and will hopefully end in 2018)
  • Browser implementations lag behind the latest WebRTC 1.0 draft
  • Browser implementations don’t behave the same, or implement the same parts of the latest WebRTC 1.0 draft

It is an opportunity:

We might actually get to a point where we have a stable API with stable implementations.

But we’re still not there

Should you wait?

No.

We’re 6-7 years into WebRTC (depending on who does the counting), and this hasn’t stopped well over 1,000 vendors from jumping in and making use of WebRTC in production services.

There’s already massive use of WebRTC.

Me and WebRTC 1.0

For me, WebRTC 1.0 is somewhat of a new topic.

I try to avoid the discussions going on around WebRTC in the standardization bodies. The work they do is important and critical, but often tedious. I had my fair share of it in the past with other standards and it isn’t something I enjoy these days.

This caused a kind of challenge for me as well. How can I teach WebRTC, in a premium course, without explaining WebRTC 1.0 – a topic that needs to be addressed, as developers need to prepare for the changes that are coming?

The answer was to ask Philipp Hancke to help out here and create a course lesson for me on WebRTC 1.0. I like doing projects with Philipp, and do so on many fronts, so this is one additional project. It isn’t the first time either – the bonus materials of my WebRTC course include a recorded lesson by Philipp about video quality in WebRTC.

Free WebRTC 1.0 Webinar

Tomorrow, we will be recording the WebRTC 1.0 lesson together for my course. I’ll be there, and this time,  partially as a student.

To make things a bit more interesting, as well as promoting the whole course, this lesson will be given live in the form of a free webinar:

  • Anyone can join for free to learn about WebRTC 1.0
  • The recording will only be available as part of the advanced WebRTC course

This webinar/lesson will take place on

Tuesday, April 10

2-3PM EST (view in your timezone)

Save your seat →

The session’s recording will NOT be available after the event itself. While this lesson is free to attend live, the recording will become an integral part of the course’s lessons.

The post WebRTC 1.0 Training and Free Webinar Tomorrow (on Tuesday) appeared first on BlogGeek.me.

So your VPN is leaking because of Chrome’s WebRTC…

webrtchacks - Tue, 04/03/2018 - 03:14

We have covered the “WebRTC is leaking your IP address” topic a few times, like when I reported what the NY Times was doing and in my WebRTC-Notifier. This topic comes up now and again in the blogosphere, generally with great shock and horror. It happened again recently, so here is an updated look […]

The post So your VPN is leaking because of Chrome’s WebRTC… appeared first on webrtcHacks.

AV1 Specification Released: Can we kiss goodbye to HEVC and royalty bearing video codecs?

bloggeek - Mon, 04/02/2018 - 12:00

AV1 is to video coding what Opus is to audio coding.

The Alliance of Open Media (AOMedia) issued last week a press release announcing its public release of the AV1 specification.

Last time I wrote about AOMedia was over a year ago. AOMedia is a very interesting organization. Which got me to sit down with Alex Eleftheriadis, Chief Scientist and Co-founder of Vidyo, for a talk about AV1, AOMedia and the future of real time video codecs. It was really timely, as I’ve been meaning to write about AV1 at some point. The press release, and my chat with Alex pushed me towards this subject.

TL;DR:

  • We are moving towards a future of royalty free video codecs
  • This is due to the drastic changes in our industry in the last decade
  • It won’t happen tomorrow, but we won’t be waiting too long either

Before you start, if you need to make a decision today on your video codec, then check out this free online mini video course

H.264 or VP8?

Now let’s start, shall we?

AOMedia and AV1 are the result of greed

When AOMedia was announced I was pleasantly surprised. It isn’t that apparent that the founding members of AOMedia would actually find the strength to put their differences aside for the greater good of the video coding industry.

Video codec royalties 101

You see, video codecs at that point in time were a profit center for companies. You invested in research around video coding with the main focus on inventing new patents that would be incorporated into video codecs that would then be used globally. The vendors adopting these video codecs would pay royalties.

With H.264, said royalties came with a cap – if you distributed above a certain number of devices that use H.264, you didn’t have to pay more. And the same scheme was put in place when it came to HEVC (H.265) – just with a higher cap.

Why do we need this cap?

  1. Companies want to cap their commitment and expense. In many cases, you don’t see direct revenue per device, so no cap means it is harder to match with asymmetric business models and applications that today scale to hundreds of millions of users
  2. If a company needs to pay based on the number of devices they sell, then the one holding the patents and getting the payment for royalties knows that number exactly – something which is considered trade secret for many companies

So how much money did MPEG-LA take in?

Being a private company, this is hard to know. I’ve seen estimates of $10M-50M, as well as $17.5B on Quora. The truth is probably somewhere in the middle. Which is still a considerable amount of money that was funnelled to the patent owners.

With royalty revenues flowing in, is it any wonder that companies wanted more?

An interesting tidbit about this greed (or shall we say rightfulness) can be found in the Wikipedia page of VP8:

In February 2011, MPEG LA invited patent holders to identify patents that may be essential to VP8 in order to form a joint VP8 patent pool. As a result, in March the United States Department of Justice (DoJ) started an investigation into MPEG LA for its role in possibly attempting to stifle competition. In July 2011, MPEG LA announced that 12 patent holders had responded to its call to form a VP8 patent pool, without revealing the patents in question, and despite On2 having gone to great lengths to avoid such patents.

So… we have a licensing company whose members are after royalty payments on patents. They are blinded by the success of H.264 and its royalty scheme and payments, so they go after anything and everything that looks and smells like competition. And they are working towards maintaining their market position and revenue in the upcoming HEVC specification.

The HEVC/H.265 royalties mess

Leonardo Chiariglione, founder and chairman of MPEG, attests in a rather revealing post:

Good stories have an end, so the MPEG business model could not last forever. Over the years proprietary and “royalty free” products have emerged but have not been able to dent the success of MPEG standards. More importantly IP holders – often companies not interested in exploiting MPEG standards, so called Non Practicing Entities (NPE) – have become more and more aggressive in extracting value from their IP.

HEVC, being a new playing ground, meant that there were new patents to be had – new areas where companies could claim having IP. And MPEG-LA found itself one of many patent holder groups:

MPEG-LA indicated its wish to take home $0.2 per device using HEVC, with a high cap of around $25M.

HEVC Advance started with a ridiculously greedy target of $0.8 per device AND 0.5% of the gross margin of streaming services (unheard of at the time) – with no cap. It has since rescinded, making things somewhat better. It did so a bit too late in the game though.
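
To make the difference a cap makes concrete, here is a back-of-the-envelope sketch using the per-device figures above. The 500-million-device volume is an invented example, and HEVC Advance’s content royalty on gross margin is left out:

```python
def royalty(devices: int, per_device: float, cap: float = float("inf")) -> float:
    """Total royalty owed: a per-device fee, bounded by the cap (if any)."""
    return min(devices * per_device, cap)

# MPEG-LA's published HEVC terms: $0.2 per device, capped around $25M.
mpeg_la = royalty(500_000_000, 0.2, cap=25_000_000)  # hits the cap
# HEVC Advance's original ask: $0.8 per device, with no cap,
# so the bill keeps growing with every device shipped.
hevc_advance = royalty(500_000_000, 0.8)
```

At this (hypothetical) volume the capped scheme costs $25M while the uncapped one costs $400M – which is exactly why asymmetric, consumer-scale businesses balk at uncapped royalties.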

Velos Media spent money on a clean and positive website. Their Q&A indicates that they haven’t yet made a decision on royalties, caps and content royalties. Which gives great confidence to those wanting to use HEVC today.

And then there are the unaffiliated: companies claiming patents related to HEVC who are not in any pool. And if you think they won’t be suing anyone, then think again – BlackBerry just sued Facebook over messaging-related patents – it is easy to see them suing over HEVC patents in their current position. Who can blame them? They have been repeatedly sued by patent trolls in the past.

HEVC is said to be the next big thing in video coding – the successor of our aging H.264 technology. And yet, there are too many unknowns about the true price of using it. Should one pay royalties to MPEG-LA, HEVC Advance and Velos Media, or only one of them? Would paying royalties protect from patent litigation?

Is it even economically viable to use HEVC?

Yes. Apple has introduced HEVC in iOS 11 and the iPhone X. My guess is that they are willing to pay the price as long as this keeps the headache and mess in the Android camp (I can’t see the vendors there coming to terms on who in the value chain will end up paying the royalties for it).

With such greed and uncertainty, a void was left. One that got filled by AOMedia and AV1.

AOMedia – The who’s who of our industry

AOMedia is a who’s who list of our industry. It started small, with just 7 big names, and now has 12 founding members and 22 promoter members.

Some of these members are members of MPEG-LA or already have patents in HEVC and video coding. And this is important. Members of AOMedia effectively allow free access to essential patents in the implementation of AOMedia related specifications. I am sure there are restrictions applied here, but the intent is to have the codecs coming out of AOMedia royalty free.

A few interesting things to note about these members:

  • All browser vendors are there: Google, Mozilla, Microsoft and Apple
  • All large online streaming vendors are there: Google (=YouTube), Amazon and Netflix
  • From that same streaming industry, we also have Hulu, Bitmovin and Videolan
  • Most of the important chipset vendors are there: Intel, AMD, NVidia, Arm and Broadcom
  • Facebook is there
  • Of the enterprise video conferencing vendors we have Cisco, Vidyo and Polycom
  • Qualcomm is missing

AOMedia is at a point that stopping it will be hard.

Here’s how AOMedia visualizes its members’ products:

What’s in AV1?

AV1 is a video codec specification, similar to VP8, H.264, VP9 and HEVC.

AV1 is built out of 3 main premises:

  1. Royalty free – what gets boiled into the specification is either based on patents of the members of AOMedia or uses techniques that aren’t patented. It doesn’t mean that companies can’t claim IP on AV1, but as far as the effort on developing AV1 goes, they aren’t knowingly letting in patents
  2. Open source reference implementation – AV1 comes with an open source implementation that you can take and start using. So it isn’t just a specification that you need to read and build with a codec from scratch
  3. Simple – similar to how WebRTC is way simpler than other real time media protocols, AV1 is designed to be simple

Simple probably needs a bit more elaboration here. It is probably the best news I heard from Alex about AV1.

Simplicity in AV1

You see, in standardization organizations, you’ll have competing vendors vying for an advantage on one another. I’ve been there during the glorious days of H.323 and 3G-324M. What happens there, is that companies come up with a suggestion. Oftentimes, they will have patents on that specific suggestion. So other vendors will try to block it from getting into the spec. Or at the very least delay it as much as they can. Another vendor will come up with a similar but different enough approach, with their own patents, of course. And now you’re in a deadlock – which one do you choose? Coalitions start emerging around each approach, with the end result being that both approaches will be accepted with some modifications and get added into the specification.

But do we really need both of these approaches? The more alternatives we have to do something similar, the more complex the end result. The more complex the end result, the harder it is to implement. The harder it is to implement, well… the closer it looks like HEVC.

Here’s the thing.

From what I understand – and I am not privy to the intricate details, but I’ve seen specifications in the past and been part of making them happen – HEVC is your standard design-by-committee specification. HEVC was conceived by MPEG, which in the last 20 years has given us MPEG-2, H.264 and HEVC. The number of companies with interests in getting some skin in this game is large and growing. I am sure that HEVC was a mess of a headache to contend with.

This is where AV1 diverges. I think there’s a lot less politics going on in AOMedia at the moment than in MPEG-LA. Probably due to 2 main reasons:

  1. It is a newer organization, starting fresh. There’s politics there, as there are multiple companies and many people involved, but since it is newer, the amount of politics is lower than in an organization that has been around for 20+ years
  2. There’s less money involved. No royalties means no pie to split between patent holders, so fewer fights about who gets their tools and techniques incorporated into the specification

The end result? The design is simpler, which makes for better implementations that are just easier to develop.

AV1 IRL

In real life, we’re yet to see if AV1 performs better than HEVC and in what ways.

Current estimates are that AV1 performs equal to or better than HEVC when it comes to real time. That’s because AV1 has better tools for a similar computation load than what can be found in HEVC.

So… if you have all the time in the world to analyze the video and pick your tools, HEVC might end up with better compression quality. But for the most part, we can’t really wait that long when we encode video – unless we’re encoding the latest movie coming out of Hollywood. For the rest of us, faster is better, so AV1 wins.

The exact comparison isn’t there yet, but I was told that experiments done on the implementations of both AV1 and HEVC show that AV1 is equal to or better than HEVC.

Streaming, Real Time and SVC

There is something to be said about real time, which brings me back to WebRTC.

Real time low delay considerations of AV1 were discussed from the onset. There are many who focus on streaming and offline encoding of videos within AOMedia, like Netflix and Hulu. But some of the founding members are really interested in real time video coding – Google, Facebook, Cisco, Polycom and Vidyo to name a few.

Polycom and Vidyo are chairing the real time work group, and SVC is considered a first-class citizen within AV1. It is being incorporated into the specification from the start, instead of being bolted on later, as was done with H.264 and VP9.

Low bitrate

Then there’s the aspect of working at low bitrates.

With the newer codecs, you see a real desire to push the envelope. In many cases, this means increasing the resolutions and frame rates a video codec supports.

As far as I understand, a lot of effort is being put into AV1 at the other end of the scale – working at low resolutions and doing that really well. This is important for Google, for example, if you look at what they decided to share about VP9 on YouTube:

For YouTube, it isn’t only about 4K and UHD – it is on getting videos to be streamed everywhere.

Based on many of the projects I am involved with today, I can say that there are a lot of developers out there who don’t care too much about HD or 4K – they just want to get decent video being sent and that means VGA resolutions or even less. Being able to do that with lower bitrates is a boon.

Is AV1 “next gen”?

I have always considered AV1 to be the next next generation:

We have H.264 and VP8 as the current generation of video codecs, then HEVC and VP9 as the next generation, and then there’s AV1 as the next next generation.

In my mind, this is what you’d get when it comes to compression vs power requirements:

Alex opened my eyes here, explaining that reality is slightly different. If I try translating his words to a diagram, here’s what I get:

AV1 is an improvement over HEVC but probably isn’t a next generation video codec. And this is an advantage. When you start working on a new generation of a codec, the work necessary is long and arduous. Look at H.261, H.263, H.264 and HEVC codec generations:

Here are some interesting things that occurred to me while placing the video codecs on a timeline:

  • The year indicated for each codec is the year in which an initial official release was published
  • Understand that each video codec went through iterations of improvements, annexes, appendices and versions (HEVC already has 4 versions)
  • It takes 7-10 years from one version until the next one gets released. On the H.26x track, the number of years between versions has grown over time
  • VP8 and VP9 have only 4 years between them. It makes sense, as VP8 came late in the game, playing catch-up with H.264, while VP9 is timed nicely with HEVC
  • AV1 comes only 6 years after HEVC. Not enough time for research breakthroughs that would suggest a brand new video codec generation, but probably enough to make improvements on HEVC and VP9

About the latest press release

AOMedia has been working towards this important milestone for quite some time – the 1.0 version specification of AV1.

The first thing I thought when seeing it: they got there faster than WebRTC 1.0. WebRTC was announced 6 years ago and we’re still waiting for 1.0 to be finalized (the 1.0 draft has been in the works since 2015). AOMedia started in 2015 and already has its 1.0 ready.

The second one? I was interested in the quotes at the end of that release. They show the viewpoints of the various members involved.

  • Amazon – great viewing experience
  • Arm – bringing high-quality video to mobile and consumer markets
  • Cisco – ongoing success of collaboration products and services
  • Facebook – video being watched and shared online
  • Google – future of media experiences consumers love to watch, upload and stream
  • Intel – unmatched video quality and lower delivery costs across consumer and business devices as well as the cloud’s video delivery infrastructure
  • NVIDIA – server-generated content to consumers. […] streaming video at a higher quality […] over networks with limited bandwidth
  • Mozilla – making state-of-the-art video compression technology royalty-free and accessible to creators and consumers everywhere
  • Netflix – better streaming quality
  • Microsoft – empowering the media and entertainment industry
  • Adobe – faster and higher resolution content is on its way at a lower cost to the consumer
  • AMD – best media experiences for consumers
  • Amlogic – watch more streaming media
  • Argon Design – streaming media ecosystem
  • Bitmovin – greater innovation in the way we watch content
  • Broadcom – enhance the video experience across all forms of viewing
  • Hulu – Improving streaming quality
  • Ittiam Systems – the future of online video and video compression
  • NGCodec – higher quality and more immersive video experiences
  • Vidyo – solve the ongoing WebRTC browser fragmentation problem, and achieve universal video interoperability across all browsers and communication devices
  • Xilinx – royalty-free video across the entire streaming media ecosystem

Apple decided not to share a quote in the press release.

Most of the quotes there are about media streaming, with only a few looking at collaboration and social. This somewhat saddens me when it comes from the likes of Broadcom.

I am glad to see Intel and Arm taking active roles. Both as founding members and in their quotes to the press release. It is bad that Qualcomm and Samsung aren’t here, but you can’t have it all.

I also think Vidyo are spot-on. More about that later.

What’s next for AOMedia?

There’s work to be done within AOMedia with AV1. This is but a first release. There are bound to be some updates to it in the coming year.

Current plans are to have some meaningful software implementation of AV1 encoder/decoder by the end of 2018, and somewhere during 2019 (end of most probably) have hardware implementations available. Here’s the announced timeline from AOMedia:

Rather ambitious.

Realistically, mass adoption would happen somewhere in 2020-2022. Until then, we’ll be chugging along with VP8/H.264 and fighting it out around HEVC and VP9.

There are talks about adding a still image format based on the work done in AV1, which makes sense. It wouldn’t be farfetched to also incorporate future voice codecs into AOMedia. This organization has shown it can bring the industry leaders to the table and come up with royalty free codecs that benefit everyone.

AV1 and WebRTC

Will we see AV1 in WebRTC? Definitely.

When? Probably after WebRTC 1.0. Or maybe not

It will take time, but the benefits are quite clear, which is what Alex of Vidyo alluded to in the quote given in the press release:

“solve the ongoing WebRTC browser fragmentation problem, and achieve universal video interoperability across all browsers and communication devices”

We’re still stuck in the challenge of which video codec to select in WebRTC applications.

  • Should we go for VP8, just because everyone does, it is there and it is royalty free?
  • Or should we opt for H.264, because Safari supports it and it has better hardware support?
  • Maybe we should go for VP9 as it offers better quality, and “suffer” the computational hit that comes with it?
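
The trade-off in the three bullets above can be sketched as a toy decision helper. This is a deliberately simplified heuristic for illustration, not a recommendation – `pick_codec` and its inputs are invented, and real applications should test against their own device mix:

```python
def pick_codec(need_safari: bool, cpu_constrained: bool) -> str:
    """Toy helper mirroring the codec-selection trade-offs above."""
    if need_safari:
        # Safari (at the time of writing) only ships H.264, which
        # also enjoys the widest hardware acceleration support.
        return "H.264"
    if cpu_constrained:
        # VP8 is royalty free and universally available in WebRTC.
        return "VP8"
    # VP9 buys better compression quality at a higher computational cost.
    return "VP9"

choice = pick_codec(need_safari=True, cpu_constrained=False)
```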

AV1 is to video coding what Opus is to audio coding. That article I wrote in 2013? It is now becoming true for video. Once adoption of AV1 hits – and it will in the next 3-5 years – the dilemma of which video codec to select will be gone.

Until then, check out this free mini course on how to select the video codec for your application

Sign up for free

The post AV1 Specification Released: Can we kiss goodbye to HEVC and royalty bearing video codecs? appeared first on BlogGeek.me.

Progressive Web Apps (PWA) for WebRTC (Trond Kjetil Bremnes)

webrtchacks - Wed, 03/28/2018 - 13:30

One of WebRTC’s biggest challenges has been providing consistent, reliable support across platforms. For most apps, especially those that started on the web, this generally means developing a native or hybrid mobile app in addition to supporting the web app.  Progressive Web Apps (PWA) is a new concept that promises to unify the web for […]

The post Progressive Web Apps (PWA) for WebRTC (Trond Kjetil Bremnes) appeared first on webrtcHacks.

Kamailio World 2018 – Participation Grants

miconda - Tue, 03/27/2018 - 19:30
Once again, we are continuing the program from previous years and offering free event passes for the next Kamailio World (May 14-16, 2018) to several people from the academic environment (universities or research institutes – bachelor, master or PhD programs qualify), as well as to people from underrepresented groups.

Kamailio has its origin in the academic environment, being started by the FhG Fokus Research Institute, Berlin, Germany, and evolving over time into a project developed worldwide, with an open and friendly community.

If you think you are eligible and want to participate, email <registration [at] kamailio.org>. Participation in all the content of the event (workshops, conference and social event) is free, but you will have to cover the expenses for travel and accommodation. Write a short description of your interest in real time communications and, where applicable, the university or research institute you are affiliated with.

Also, if you are not a student, but you are in touch with some or have access to student forums/mailing lists, it would be much appreciated if you forwarded these details.

All this is possible thanks to the Kamailio World Conference sponsors: Evosip, 2600hz, Sipwise, Netaxis, Sipgate, FhG Fokus, Asipto, Simwood, LOD.com, NG-Voice, Evariste Systems, Digium, VoiceTel, Pascom and Core Network Dynamics.

More information about Kamailio World Conference 2018 is available on the web site:

Thanks for flying Kamailio!

Get trained to be your company’s WebRTC guy

bloggeek - Mon, 03/26/2018 - 12:00

Demand for WebRTC developers is stronger than supply.

My inbox is filled with requests for experienced WebRTC developers on a daily basis, ranging from entrepreneurs looking for a technical partner to managers searching for outsourcing vendors to help them out. The challenge is that developers and testers who know a thing or two about WebRTC are hard to find. Developers who are aware of the media stack in WebRTC, and haven’t just dabbled with a GitHub “hello world” demo – these are truly rare.

This is why I created my WebRTC course almost 2 years ago. The idea was to try and share my knowledge and experience around VoIP, media processing and of course WebRTC, with people who need it. This WebRTC training has been a pleasant success, with over 200 people having taken it already. And now it is time for the 4th round of office hours for this course.

Who is this WebRTC training for?

This WebRTC course is for anyone who uses WebRTC in their daily work, directly or indirectly. Developers, testers, software architects and product managers will benefit from it the most.

It has been designed to give you the information necessary from the ground up.

If you are clueless about VoIP and networking, then this course will guide you through the steps needed to get to WebRTC: explaining what TCP and UDP are, how HTTP and WebSockets fit on top of them, and moving on to the acronyms used by WebRTC (SRTP, STUN, TURN and many others).
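
The transport layering mentioned here can be poked at with a few lines of stdlib Python. Below is a loopback UDP exchange – the kind of datagram traffic that SRTP media ultimately rides on, while signaling typically travels over TCP via HTTP or WebSockets:

```python
import socket

# Two UDP sockets on the loopback interface. Unlike TCP, there is no
# connection setup: each sendto() ships an independent datagram.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))          # let the OS pick a free port
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# WebRTC media (SRTP over UDP) is conceptually this: fire-and-forget
# packets, with loss handled at the application/codec layer.
send.sendto(b"rtp-ish payload", recv.getsockname())
data, addr = recv.recvfrom(2048)

send.close()
recv.close()
```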

If you have VoIP knowledge and experience, then this course will cover the missing parts – where WebRTC fits into your world, and what to take special attention to, assuming a VoIP background (WebRTC brings with it a different mindset to the development process).

What I didn’t want to do, is have a course that is so focused on the specification that: (1) it becomes irrelevant the moment the next Chrome browser is released; (2) it doesn’t explain the ecosystem around WebRTC or give you design patterns of common use cases. Which is why I baked into the course a lot of materials around higher level media processing, the WebRTC ecosystem and common architectures in WebRTC.

TL;DR – if you follow this blog and find it useful, then this course is for you.

Why take it?

The question should be why not?

There are so many mistakes and bad decisions I see companies making with WebRTC: from how to model their media routes, to where to place (and how to configure) their TURN servers, to how to design for scale, to which open source frameworks to pick. Such mistakes end up a lot more expensive than any online course would ever be.

In April, next month, I will be starting the next round of office hours.

While the course is pre-recorded and available online, I conduct office hours for a span of 3-4 months twice a year. In these live office hours I go through parts of the course, share new content and answer any questions.

What does it include?

The course includes:

  • 40+ lessons split into 7 different modules with an additional bonus module
  • 15 hours of video content, along with additional links for extra reading material
  • Several e-books available only as part of the course, like how the Jitsi team scales Jitsi Meet, and what are sought after characteristics in WebRTC developers
  • A private online forum
  • The office hours

In the past two months I’ve been working on refreshing some of the content, getting it up to date with recent developments – we’ve seen Edge and Safari introduce WebRTC during that time, for example. These refreshed lessons will be added to the course before the official launch.

When can I start?

Whenever you want. In April, I will be officially launching the office hours for this course round. At that point in time, the updated lessons will be part of the course.

What’s more, there will be a new lesson added – this one about WebRTC 1.0. Philipp Hancke was kind enough to host this lesson with me as a live webinar (free to attend live) that will become an integral lesson in the course.

If you are interested in joining this lesson live:

Free WebRTC 1.0 Live Lesson

What if I am not ready?

You can always take it later on, but I won’t be able to guarantee pricing or availability of the office hours at that point in time.

If you plan on doing anything with WebRTC in the next 6 months, you should probably enroll today.

And by the way – if you need to come as a team to up the knowledge and experience in WebRTC in your company, then there are corporate plans for the course as well.

CONTENT UPGRADE: If you are serious about learning WebRTC, then check out my online WebRTC training:

Enroll to course

The post Get trained to be your company’s WebRTC guy appeared first on BlogGeek.me.

YouTube Does WebRTC – Here’s How

webrtchacks - Fri, 03/23/2018 - 15:22

I logged into YouTube on Tuesday and noticed this new camera icon in the upper right corner, with a “Go Live (New)” option, so I clicked on it to try. It turns out you can now live stream directly from the browser. This smelled a lot like WebRTC, so I loaded up chrome://webrtc-internals to see […]

The post YouTube Does WebRTC – Here’s How appeared first on webrtcHacks.

New Kamailio module: app_python3

miconda - Tue, 03/20/2018 - 21:00
A while ago the app_python3 module was added to Kamailio’s GIT master branch (to be released as stable version 5.2.0 in several months), thanks to the development efforts of Anthony Alba. Although it started from the old app_python, besides being implemented to work with Python3, the new module adds a lot of improvements, leveraging the Python3 architecture for better performance, as well as including support for reloading the Python script at runtime via an RPC command (so no need to restart Kamailio — the feature was ported to app_python meanwhile). The readme of the module is available at:Now all the Kemi interpreter modules can reload the SIP routing scripts without restarting Kamailio — it works for Lua, JavaScript, Python2/3 and Squirrel languages. Happy SIP routing in Python3! You can learn more about the Kemi scripting languages at Kamailio World Conference 2018 — a workshop is dedicated to this topic! Thanks for flying Kamailio!

How WebRTC Statistics and Performance Monitoring Changed VoIP Monitoring

bloggeek - Mon, 03/19/2018 - 12:00

Monitoring focus is shifting from server-side to client-side in WebRTC statistics collection.

WebRTC happens to decentralize everything when it comes to VoIP. We’re on a journey here to shift the weight from the backend to the edge devices. While the technology in WebRTC isn’t any different than most other VoIP solutions, the way we end up using it and architecting our services around it is vastly different.

One of the prime examples here is how we shifted focus for group calling from an MCU mixing model to an SFU routing model. Suddenly, almost overnight, the notion of deploying an MCU started to seem ridiculous. And believe me – I should know – I worked at a company where 60%+ of revenue came from MCUs.

The shift towards SFU means we’re leaning more on the capabilities and performance of the edge device, giving it more power in the interaction when it comes to how to layout the display, instead of doing all the heavy lifting in the backend using an MCU. The next step here will be to build mesh networks, though I can’t see that future materializing any time soon.

VoIP != WebRTC. Maybe not from a direct technical point, but definitely from how we end up using it. If you need to learn more about WebRTC, then my WebRTC training is exactly what you need:

Enroll to course

What I wanted to mention here is something else that is happening, playing towards the same trend exactly – we are moving the collection of VoIP performance statistics (or more accurately WebRTC statistics) from the backend to the edge – we now prefer doing it directly from the browser/device.

VoIP Statistics Collection and Monitoring

If you are not familiar with VoIP statistics collecting and monitoring, then here’s a quick explainer for you:

VoIP is built on the notion of interoperability. Developers build their products and then test them against the spec and in interoperability events. Then those deploying them integrate, install and run a service. Sometimes this ends up with a single vendor, but more often than not, multiple vendor products run in the same deployment.

There is no real specification or standard for how monitoring needs to happen or what kind of statistics can, should or are collected. There are a few means of collecting that data, and one of the most common approaches is employing HEP/EEP. As the specification states:

The Extensible Encapsulation protocol (“EEP”) provides a method to duplicate an IP datagram to a collector by encapsulating the original datagram and its relative header properties (as payload, in form of concatenated chunks) within a new IP datagram transmitted over UDP/TCP/SCTP connections for remote collection. Encapsulation allows for the original content to be transmitted without altering the original IP datagram and header contents and provides flexible allocation of additional chunks containing additional arbitrary data. The method is NOT designed or intended for “tunneling” of IP datagrams over network segments, and best serves as vector for passive duplication of packets intended for remote or centralized collection and long term storage and analysis.

Translating this to plain English: media packets are duplicated for the purpose of sending them off to be analyzed via a monitoring service.

The duplication of the packets happens in the backend, through the different media servers that can be found in a VoIP network. Here’s how it is depicted on HOMER/SIPCAPTURE’s website:

HOMER collects its data directly from the servers – OpenSIPS, FreeSWITCH, Asterisk, Kamailio – there’s no user devices here – just backend servers.

Other systems rely on the switches, routers and network devices that again reside in the backend infrastructure. Since in VoIP production networks, we almost always route the media through the backend servers, the assumption is that it is easier to collect it here where we have more control than from the devices.

This works great, but not really needed or helpful with WebRTC.

WebRTC Statistics Collection and Monitoring

With WebRTC, there are only a handful of browsers (4 to be exact), and they all adhere to the same API (that would be WebRTC). And they all have that thing called getStats() implemented in them. It returns the same information you find in chrome://webrtc-internals.

Many deployments end up running peer-to-peer, having the media traverse directly through the internet and not through the backend of the service itself. Google Hangouts decided to take that route two years ago. Jitsi added this capability under the name Jitsi P2P4121. How do these services control and understand the quality of their users?

If you look at the media servers out there, most of them are only a few years old. WebRTC itself is just 6 years old now. So everyone’s focused on features and stability right now. Quality and monitoring are not in their focus area just yet.

Last, but not least, WebRTC is encrypted. Always. And everywhere. So sniffing packets and deducing quality from them isn’t that easy or accurate any longer.

This led to the focus of WebRTC applications in gathering WebRTC statistics from the browsers and devices directly, and not trying to get that information from the media servers.

The end result? Open source projects such as rtcstats and commercial services such as callstats.io. At the heart of these, WebRTC statistics get collected using the getStats() API at an interval of one or more seconds, then sent over to a monitoring server, where they are collected, stored, aggregated and analyzed. We use a similar mechanism at testRTC to collect, analyze and visualize the results of our own probes.
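To make the client-side collection concrete, here is a minimal sketch of the aggregation step such tools perform. The snapshot shape (inbound-rtp entries with packetsReceived/packetsLost counters) follows the W3C stats definitions, but the function name, polling loop and `sendToMonitoringServer` endpoint are illustrative assumptions – not how rtcstats or callstats.io actually implement it:

```javascript
// Sketch: compute downstream packet loss (%) between two successive
// getStats() snapshots, each flattened into an array of stat objects.
function packetLossBetween(prev, curr) {
  let received = 0;
  let lost = 0;
  for (const stat of curr) {
    if (stat.type !== 'inbound-rtp') continue;
    const before = prev.find((s) => s.id === stat.id);
    if (!before) continue; // new stream, no delta to compute yet
    received += stat.packetsReceived - before.packetsReceived;
    lost += stat.packetsLost - before.packetsLost;
  }
  const total = received + lost;
  return total === 0 ? 0 : (100 * lost) / total;
}

// In the browser, the polling loop would look roughly like this
// (pc is an RTCPeerConnection, sendToMonitoringServer is your own code):
//
// let last = [];
// setInterval(async () => {
//   const report = [...(await pc.getStats()).values()];
//   sendToMonitoringServer({ packetLoss: packetLossBetween(last, report) });
//   last = report;
// }, 1000);
```

The delta-based calculation is the important bit: the counters in getStats() are cumulative, so per-interval quality is always derived by subtracting the previous snapshot.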

What does that give us?

  1. The most accurate indication of performance for the end user – since the statistics are collected directly on the user’s device, there’s no loss of information from backend collection
  2. Easy access to the information – there’s a uniform means of data collection here taking place. One you can also implement inside native mobile and desktop apps that use WebRTC
  3. Increased reliance on the edge, a trend we see everywhere with WebRTC anyway
What’s Next?

WebRTC changes a lot of things when it comes to how we think about and architect VoIP networks. How and why this plays out in statistics and monitoring is something I haven’t seen discussed much, so I wanted to share it here.

The reason for that is threefold:

  1. Someone asked me a similar question on my contact page in the last couple of days, so it made sense to write a longform answer as well
  2. We’re contemplating at testRTC offering a passive monitoring product to use “on premise”. If you want to collect, store and analyze your own WebRTC statistics without giving it to any third party cloud service, then ping us at testRTC
  3. My online WebRTC training is getting a refresher and a new round of office hours. This all starts in April. Time to enroll if you want to educate yourself on WebRTC

 

The post How WebRTC Statistics and Performance Monitoring Changed VoIP Monitoring appeared first on BlogGeek.me.

Twilio Flex = Twilio Flexing its Flexibility (or the programmable contact centers)

bloggeek - Wed, 03/14/2018 - 12:00

Twilio Flex is a peek into the future of enterprise software.

This week, Twilio announced a new product called Flex. The name and the broad strokes about what Flex is found their way to TechCrunch some two weeks ago. I wanted to share my thoughts about Twilio Flex.

A few notes before I start
  • Twilio isn’t paying me for writing this
    • They are a customer in other areas, but this one is all me. I think Flex (as well as Studio, Engagement Cloud, Functions, etc.) are interesting products coming from Twilio, and they are worth a long form analysis and review
    • Articles on BlogGeek.me are never paid for. Neither are guest posts or interviews. If something interests me, I’ll write about it
  • The information here is based mainly on a briefing I received about Flex and what I found since then on other sites (and on Twilio’s website)
  • Flex is a departure of many things Twilio has been doing, making it an interesting initiative to analyze
What is Twilio Flex?

Twilio Flex is CCaaS (Contact Center as a Service). It isn’t the first one. Twilio is touting it as a Programmable Contact Center, which is how they are referring to all of their products.

Here’s Jeff Lawson’s keynote from Enterprise Connect; as usual, Jeff’s keynotes are worth the time and attention:

Where Twilio tried to differentiate Flex from existing solutions is by making it a fully functional contact center solution that is Flexible enough to customize and modify. It has APIs, but the day-to-day users won’t see them, and a lot of the customizations needed don’t require digging deep into the API layer either. That’s at least the intent (I didn’t have the chance to see the integration and API layers of Flex yet).

Twilio highlights 5 main benefits with Flex:

  • Unlimited customization – through the lower layers of Twilio’s product portfolio, along with a new addition to it, the Flex UI (not a lot/enough was explained about it thus far)
  • Instant omnichannel – support for multiple communication channels. More on this later
  • Contextual intelligence – Twilio’s ML/AI roadmap lies here
  • Trusted scale – due to its use of the Twilio infrastructure
  • 2 million developers – that’s the number of Twilio registered developers

Flex fits well into one of Twilio’s largest market segments – the contact center. And there, Twilio is aiming for contact centers with 1,000+ seats. The big boyz.

As it was working to move up the food chain, offering ever larger components, migrating away from developers towards end users in the B2B space and in contact centers made sense.

Flex and the Twilio Portfolio

If I had to map the road Twilio is taking with its portfolio, it would end up being something like this (I’ve removed a lot of the products for simplicity):

Transactional: It started with SMS and Voice, adding VoIP services and later on expanding horizontally to other components and building blocks such as IP Messaging and others. In this layer, and to some extent in Omnichannel, Twilio’s focus is on horizontal expansion towards a “Best of Suite” offering.

Omnichannel: In 2017, Twilio added the Twilio Engagement Cloud. It placed a few existing products from its portfolio in that layer and added Notify and Proxy to them. They stated that these are “Declarative APIs” talking about general intent while including logic of their own. At the end of the day, many of the products/APIs in this layer are Omnichannel – they work across channels using the one available/preferred/whatever for the task at hand.

Visual: This is where the story became really interesting. Twilio added Studio to its portfolio. It went up the food chain again, this time, with a visual IDE and a message that Twilio is no longer a company that serves only developers, but one that can be used by others within the organization.

Programmable Enterprise Software: This is where Flex comes in, going up the food chain again. This time, offering a solution that doesn’t interact with end users only as a consequence (a phone rings), but rather has a new set of users – people who aren’t developers or planners, who sit in front of the tool every day and use it: the contact center agents and personnel.

Flex was defined to me as being in the domain of “Programmable Applications”. Twilio is, in a way, trying to do two things with this definition:

  1. Programmable means it isn’t diverging from its roots completely, just taking the obvious next step in its evolution. All of its core products are Programmable X (X being SMS, Voice, Video, …)
  2. It allows Twilio to position Flex not as just another contact center, but rather as something new and different

To me it is about the future of enterprise software and how to make it programmable and flexible in ways that are still impossible today. The closest to that we’ve got is probably having so many vendors integrate with Zapier.

I am sold on that kind of a future, but I am not sure others will be.

Flex Channels Proposition

Flex leans on a lot of other products in Twilio’s portfolio. One of its core values lies in omnichannel, and in the fact that Twilio is already investing in a programmable layer that handles it (the Engagement Cloud). The proposition here is that whatever Twilio adds as a channel for developers gets almost automatically added to Flex for its contact center customers.

Out the door, Flex comes with support for Voice, SMS, Chat, Video, Email, Fax, Twitter DM, Google RCS, Facebook Messenger and LINE. It also includes Screen Sharing and Co-Browsing as additional capabilities within the interactions. Developers can add additional channels to customize their contact center as well.

The list of channels is impressive, but somehow Apple Business Chat is missing from it. Apple’s launch partners in this case were contact center vendors (LivePerson, Nuance, Genesys and Salesforce). Twilio, which is still recognized solely as a CPaaS vendor, didn’t make the cut. I am sure Twilio tried becoming a partner, so this is more likely a decision made by Apple. I am also sure that once Apple opens up Business Chat to more developers, Twilio will add support for it.

The biggest promise here? Twilio is already committed to omnichannel in its products, and Flex will benefit from that commitment, as will Flex’s customers.

Think you know how WebRTC fits in a contact center? Check out with The Complete WebRTC Contact Center Uses Swipefile

Get the swipefile Machine Learning and Artificial Intelligence in Flex

A year or two ago, ML and AI in CPaaS were science fiction. Twilio, as well as its competitors, dealt in the real time – in transactional and transient communications. If any machine learning work was taking place, it was in the operational layers – in an effort to optimize the cost and deliverability of its service to its customers.

Last year, Twilio launched Understand, a layer built on top of Google’s Natural Language Processing capabilities (NLP). Understand is where Twilio started looking into ML and AI in the context of actual services for its customers. It looks at the problem domain of its customers (mainly contact centers) and tries to offer higher level APIs that are easier to use and are targeted at NLU (Natural Language Understanding). This then gets focused on the specific domain of the customer’s needs, and you get something that is usable today (as opposed to building a general purpose AI such as Siri, Alexa or Google Assistant).

The result in Understand is a way to simplify the development processes and requirements for Twilio’s customers when it comes to NLU.

That also got wrapped into Flex, at least on slides.

My feelings? The AI story of Flex is built out of two parts:

  1. Collecting all the existing ML/AI/intelligent related capabilities of Twilio and wrapping them inside Flex. This is done through internal APIs as well as via partners
  2. Having a roadmap vision / story of what AI means in Flex moving forward

AI being the holy grail that it is, you can’t ignore it when launching a new service these days.

Flex Pricing is Key

Pricing for Flex hasn’t been announced, but one thing was made clear – it will be based on a per seat price and not usage based like other Twilio products.

This is where things get somewhat challenging for Twilio, and here’s why:

  • Twilio has been comfortable so far offering a usage based model. Switching to a per seat model will change how it calculates its revenue and margins
  • By opting for per seat pricing, Twilio falls into the contact center industry “comfort zone” – the model is known and accepted already
  • But this also makes comparing Twilio Flex pricing to other contact centers rather “easy”. It means I can now compare apples to apples when selecting between Flex and any other vendor
  • We don’t have price points, but if the price point is based on the industry average or accepted standard, then many analysts and experts will end up saying that there’s no disruption or anything new in Twilio Flex. For the pundits, Flex may seem like an ordinary contact center, and with that mindset, without price disruption there can be no disruption
  • If the price points are too high, then Twilio will be going after its own contact center customers, who will see this as direct competition. Such a move can signal others that Twilio is willing to go into their turf as well, and will call into question the potential and attractiveness of joining the Flex marketplace
  • If the price points are too low, then where will the margins be for Twilio?

My guess is that Twilio is still looking for price validation. It is doing so this week at Enterprise Connect and plans to continue in the coming weeks, until it is ready to announce the price points publicly.

Who is Twilio Flex for?

This is the main question, and one that I am not sure of the answer.

Twilio is saying the target audience is 1,000+ seat contact centers. It makes sense to go for the larger contact centers at a time when the transition to the cloud and the digital transformation of contact centers are picking up pace.

But would I be using it in my business or go through a third party?

Should a Twilio customer that built a contact center on its own on top of Twilio migrate to Flex?

Should a Twilio customer that built a contact center for others to use on top of Twilio see Flex as a threat or as an opportunity to improve its own contact center offering?

Twilio stated that 89% of contact centers today are still deployed on premise, and that the market is large enough. These statements were meant to answer two questions:

  1. The market is big enough for both its existing customers and for Flex, so it isn’t competing directly with its customers (I guess its customers will have to decide if that’s true for them or not)
  2. The market is big for Twilio to grow in. Twilio is relying on that to keep growing

Twilio was already trending upwards when word about Flex was leaked by TechCrunch on Feb 17, and it has been increasing since:

source: Google

Whether that is related to Flex or not, I can’t say. To me, going after contact centers as an adjacent market and eating up more of the pie there is a bold move. If it succeeds, then Twilio will be much bigger than it is today.

The Unknowns

There are things that are still unknown to me here. They are technical ones, but important for my own perspective and analysis. They relate to what wasn’t covered directly in the briefing or in the materials I’ve seen since the official announcement.

Here are a few things I am really interested in:

  • What are the exact integration points for Flex?
  • How are developers expected to integrate with it?
  • Where do you use Twilio APIs? Where will you be making use of Twilio Studio? Where do you write a Twilio Function? How about Twilio Understand?
  • Flex UI is brand new. How does it fare as a standalone product enabler? What can developers do with it?
  • What will it mean to integrate Flex with a CRM? Does it make more sense to integrate the CRM into the Flex UI or does it make more sense to integrate Flex into the CRM UI?
  • What parts of “contextual intelligence” really exist in Flex today? How does it compare to existing market offerings?
  • What do contact center vendors using Twilio think about Flex? How will they react to it?
Is CPaaS Eating CCaaS?

Maybe.

Here’s one way to map the communications landscape:

And here’s another:

What’s your worldview here?

 

The post Twilio Flex = Twilio Flexing its Flexibility (or the programmable contact centers) appeared first on BlogGeek.me.

Kamailio At Fossasia Summit 2018

miconda - Wed, 03/14/2018 - 11:00
I, as a co-founder of Kamailio, will give a presentation at Fossasia Summit 2018, an event taking place in Singapore during March 22-25. It is the largest conference in Asia, gathering a sizable group of speakers from many projects and organisations developing or supporting open source software. My presentation, titled “Kamailio – The open source framework to build your own VoIP service”, is scheduled at 18:00 on Saturday, March 24, 2018. The focus is on highlighting how to easily build VoIP and realtime communication services with Kamailio on the server side and other open source applications for the client apps. If you attend the event or just happen to be in the city during the event, get in touch via email (miconda [at] gmail.com) in case you want to chat more about Kamailio and open source RTC. After Fossasia, the next event where you can meet many folks from our community is the Kamailio World Conference, May 14-16, 2018, in Berlin, Germany.

WebRTC 1.0 – What on earth is it anyway? (register to the webinar)

bloggeek - Mon, 03/12/2018 - 12:00

TL;DR – register to this webinar about WebRTC 1.0

As I am prepping to another launch of my Advanced WebRTC Architecture Course, I went through the content to make sure it is up to date. This is by far the hardest thing about a course about something like WebRTC – what was right on Chrome 63 might not be correct anymore for Chrome 64. Or is it 65 now?

I ended up spending time in updating and refreshing some of the lessons with some new material, but I ended up with one area that the course is weak at. And that’s WebRTC 1.0 information.

The problem there is that while I can tell some of the story, I definitely can’t tell it to the level I wanted. It got me to partner again with Philipp Hancke, whom I love working with on lots of mini-projects. I asked Philipp if he would be willing to host such a lesson for me, and he said yes (yippie).

What’s in the Webinar?

So here’s what we’re going to do:

Next month, right after Passover, and because Philipp asked for April, we’re going to host a lesson/webinar about WebRTC 1.0.

Philipp will skim quickly over the backstory of WebRTC 1.0, where we are today and more importantly where we’re headed with it. What we will cover in more detail will include answers to questions like:

  • What should you change in your app due to WebRTC 1.0?
  • What new tricks did 1.0 teach the “old” WebRTC dog?
  • Do you need to update your app to be compliant and work in Chrome next year?
  • How much effort is involved in this migration to WebRTC 1.0 anyway?
  • If you pick out a WebRTC project on github, how would you know if it supports WebRTC 1.0 or not?

What I want here is for you (and me) to really understand the impact WebRTC 1.0 is going to have on all of us in 2018 and on.

When?

This webinar/lesson will take place on

Tuesday, April 10

1-2PM EST (view in your timezone)

Save your seat →

The session’s recording will NOT be available after the event itself. While this lesson is free to attend live, the recording will become an integral part of the course’s lessons.

The post WebRTC 1.0 – What on earth is it anyway? (register to the webinar) appeared first on BlogGeek.me.

Kamailio At Asterisk Africa Conference 2018

miconda - Fri, 03/09/2018 - 12:30
Alex Balashov from Evariste Systems, one of our Kamailio management team members, went the long route from Atlanta, USA, to Johannesburg, South Africa, to participate at Asterisk Community Conference Africa 2018, an event happening during March 14-15. He is presenting two sessions. The event is promoting Asterisk and open source VoIP technologies, with a selected group of local speakers and invited international guests; besides Alex, one can meet there with Matt Fredrikson (project lead of Asterisk), David Duffett (community manager of Asterisk) or Lorenzo Emilitri (QueueMetrics), and interact via remote video participation with Allison Smith (the Asterisk IVR voice) and Dan Jenkins (CommCon UK). Should you be in the area and working with real time communications, try not to miss this conference. Catch Alex around and get more familiar with Kamailio and the latest project updates! Also do not forget about the next Kamailio World Conference, May 14-16, 2018, in Berlin, Germany! Alex will be there as well, and the details for most of the sessions are published. There are still a few weeks left at the early registration price; however, be aware that the number of seats is limited – at the past editions we were fully booked. Do not delay your registration in order to secure your participation! Thanks for flying Kamailio!

Part 2: Building a AIY Vision Kit Web Server with UV4L

webrtchacks - Tue, 03/06/2018 - 12:36

In part 1 of this set, I showed how one can use UV4L with the AIY Vision Kit send the camera stream and any of the default annotations to any point on the Web with WebRTC. In this post I will build on this by showing how to send image inference data over a WebRTC […]

The post Part 2: Building a AIY Vision Kit Web Server with UV4L appeared first on webrtcHacks.

AIY Vision Kit Part 1: TensorFlow Computer Vision on a Raspberry Pi Zero

webrtchacks - Tue, 03/06/2018 - 12:35

A couple years ago I did a TADHack  where I envisioned a cheap, low-powered camera that could run complex computer vision and stream remotely when needed. After considering what it would take to build something like this myself, I waited patiently for this tech to come. Today with Google’s new AIY Vision kit, we are […]

The post AIY Vision Kit Part 1: TensorFlow Computer Vision on a Raspberry Pi Zero appeared first on webrtcHacks.

You Better Ignore the Default Protocol Ports You Implement

bloggeek - Mon, 03/05/2018 - 12:00

Default protocol ports are great, but ones that will work in the real world are better.

If you want something done properly, you should probably ignore the specifications of the protocols you use every once in a while. When I worked years ago implementing protocols directly, there was this notion – you need to send messages in the strictest format possible, but be very lenient in how you receive them. The reason behind that is that by being strict on the sender side, you achieve higher interoperability (more devices will be able to “decipher” what you sent), and by being lenient on the receiving side, you achieve the same (being able to understand messages from more devices). Somehow, it isn’t worth it to be right here – it just makes more sense to be smart.

The same applies to default protocol ports.

Assume for the sake of argument that we have a theoretical protocol that requires the use of port number 5349. You set up the server, configure it to listen on that port (after all, we want to be standard compliant), and you run your service.

Will that work well for you?

For the most part, as the illustration above shows, yes it will.

The protocol is probably client-server based. A client somewhere from inside his private network is accessing the Internet, going to the public IP of your server to that specific port and connects. Life is good.

Only sometimes it isn’t.

Hmm… what’s going on here now? Someone in the IT department decided to block outgoing traffic to port 5349. Or maybe, just maybe, he decided to open outgoing traffic solely for ports 80 and 443. And why would he do that? Because that’s where HTTP and HTTPS traffic goes – to the web servers our browsers connect to. And I don’t know any white collar employee today who would be able to do his job without connecting to the Internet with his browser. Writing this draft of an article requires such a connection (I do it on Google Docs and then copy it to WordPress once done).

So the same scenario, with the same requirements won’t work if our server decides to use the default port 5349.

What if we decide to pass it through port 443?

Now it has a better chance of working. Why? Because port 443 is reserved for TLS traffic, which is encrypted. This means that beyond the destination of the data, the firewall we’re dealing with can’t know a thing about what’s being sent or to where, so it will usually treat it as “HTTPS” traffic and just pass it along.

There are caveats here. If the enterprise enforces a local trusted web proxy, that proxy actually acts as a man in the middle and opens all packets, which means it now sees the traffic and might decide not to pass it along, since it can’t understand it.

What we’re aiming for is best coverage. And port 443 will give us that. It might get blocked, but there’s less of a chance for that to happen.

Here are a few examples where ignoring your protocol default ports is suggested:

TURN

The reason for this article is TURN. TURN is used by WebRTC (and other protocols) to get your media session connected in case you can’t send it directly peer-to-peer. It acts as a relay for the media, sitting on the public internet with the sole purpose of punching holes in NATs and traversing firewalls.

TURN runs over UDP, TCP and TLS. And yes. You WANT to configure and run it on UDP, TCP and TLS (don’t be lazy – configure them all – it won’t cost you more).

Want to learn more about WebRTC in general and NAT traversal specifically? Enroll to my WebRTC training today to become a pro WebRTC developer.

Enroll to course

The default ports for your STUN and TURN servers (you’re most probably going to deploy them in the same process) are:

  • 3478 for STUN (over UDP)
  • 3478 for TURN over UDP – same as STUN
  • 3478 for TURN over TCP – same as STUN and as TURN over UDP
  • 5349 for TURN over TLS

A few things that come to mind from this list above:

  1. We’re listening on the same port for both UDP and TCP, and for both STUN and TURN – which is just fine
  2. Remember that 5349 from my story above?

Here’s the thing. If you deploy only STUN, then many WebRTC sessions won’t connect. If you also deploy TURN/UDP, then some sessions still won’t connect (mainly because of IT admins blocking UDP altogether). TURN/TCP might not connect either. And guess what – TURN/TLS on 5349 can still be blocked.

What is a developer to do in such a case?

Just point your WebRTC devices towards port 443 for ALL of your STUN/TURN traffic and be done with it. This approach has no real downsides versus deploying with the default ports, and all the potential upsides.
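As a sketch, here is what such a client-side configuration might look like. The hostname (turn.example.com) and credentials are placeholders – substitute your own TURN deployment, which needs to be listening on port 443 for all transports:

```javascript
// Sketch: an RTCPeerConnection configuration sending ALL STUN/TURN
// traffic to port 443. Hostname and credentials are placeholders.
const iceConfig = {
  iceServers: [
    { urls: 'stun:turn.example.com:443' },
    {
      urls: [
        'turn:turn.example.com:443?transport=udp',
        'turn:turn.example.com:443?transport=tcp',
        'turns:turn.example.com:443?transport=tcp', // TURN over TLS
      ],
      username: 'user',
      credential: 'secret',
    },
  ],
};

// In the browser:
// const pc = new RTCPeerConnection(iceConfig);
```

Note that all four entries – STUN, TURN/UDP, TURN/TCP and TURN/TLS – point at 443; ICE will try them in parallel and pick whichever gets through the firewall.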

Here’s how a couple of services I checked almost at random do this properly (I’ve used chrome://webrtc-internals to get them):

Hangouts Meet

Or Google Hangouts. Or Google Meet. Or whatever name it now has. I did use the Meet one:

https://meet.google.com/goe-nxxv-ryp?authuser=1, { iceServers: [stun:stun.l.google.com:19302, stun:stun1.l.google.com:19302, stun:stun2.l.google.com:19302, stun:stun3.l.google.com:19302, stun:stun4.l.google.com:19302], iceTransportPolicy: all, bundlePolicy: max-bundle, rtcpMuxPolicy: require, iceCandidatePoolSize: 0 }, {enableDtlsSrtp: {exact: false}, enableRtpDataChannels: {exact: true}, advanced: [{googHighStartBitrate: {exact: 0}}, {googPayloadPadding: {exact: true}}, {googScreencastMinBitrate: {exact: 400}}, {googCpuOveruseDetection: {exact: true}}, {googCpuOveruseEncodeUsage: {exact: true}}, {googCpuUnderuseThreshold: {exact: 55}}, {googCpuOveruseThreshold: {exact: 85}}]}

Google Meet comes with STUN:19302 with 5 different subdomain names for the server. There’s no TURN here because the service uses ICE-TCP directly from their media servers.

The selection of port 19302 is curious. I couldn’t find any explanation for that number or why it was chosen (not even a mathematical one).

Google AppRTC

You’d think Google’s showcase of WebRTC would be an exemplary citizen of a solid STUN/TURN configuration. Well… here’s what it got me:

https://appr.tc/r/986533821, { iceServers: [turn:74.125.140.127:19305?transport=udp, turn:[2a00:1450:400c:c08::7f]:19305?transport=udp, turn:74.125.140.127:443?transport=tcp, turn:[2a00:1450:400c:c08::7f]:443?transport=tcp, stun:stun.l.google.com:19302], iceTransportPolicy: all, bundlePolicy: max-bundle, rtcpMuxPolicy: require, iceCandidatePoolSize: 0 },

It had TURN/UDP at 19305, TURN/TCP at 443 and STUN at 19302. Unlike others, it had explicit IPv6 addresses. It had no TURN/TLS.

Jitsi Meet

https://meet.jit.si/RandomWerewolvesPierceAlone, { iceServers: [stun:all-eu-central-1-turn.jitsi.net:443, turn:all-eu-central-1-turn.jitsi.net:443, turn:all-eu-central-1-turn.jitsi.net:443?transport=tcp, stun:all-eu-west-1-turn.jitsi.net:443, turn:all-eu-west-1-turn.jitsi.net:443, turn:all-eu-west-1-turn.jitsi.net:443?transport=tcp, stun:all-eu-west-2-turn.jitsi.net:443, turn:all-eu-west-2-turn.jitsi.net:443, turn:all-eu-west-2-turn.jitsi.net:443?transport=tcp], iceTransportPolicy: all, bundlePolicy: balanced, rtcpMuxPolicy: require, iceCandidatePoolSize: 0 }, {advanced: [{googHighStartBitrate: {exact: 0}}, {googPayloadPadding: {exact: true}}, {googScreencastMinBitrate: {exact: 400}}, {googCpuOveruseDetection: {exact: true}}, {googCpuOveruseEncodeUsage: {exact: true}}, {googCpuUnderuseThreshold: {exact: 55}}, {googCpuOveruseThreshold: {exact: 85}}, {googEnableVideoSuspendBelowMinBitrate: {exact: true}}]}

Jitsi shows multiple locations for STUN and TURN – eu-central, eu-west with STUN:443, TURN/UDP:443 and TURN/TCP:443. No TURN/TLS.

appear.in

https://appear.in/bloggeek, { iceServers: [turn:turn.appear.in:443?transport=udp, turn:turn.appear.in:443?transport=tcp, turns:turn.appear.in:443?transport=tcp], iceTransportPolicy: all, bundlePolicy: balanced, rtcpMuxPolicy: require, iceCandidatePoolSize: 0 }, {advanced: [{googCpuOveruseDetection: {exact: true}}]}

appear.in went for TURN/UDP:443, TURN/TCP:443 and TURN/TLS:443. STUN is implicit here via the use of TURN.

Facebook Messenger

https://www.messenger.com/videocall/incall/?peer_id=100000919010117, { iceServers: [stun:stun.fbsbx.com:3478, turn:157.240.1.48:40002?transport=udp, turn:157.240.1.48:3478?transport=tcp, turn:157.240.1.48:443?transport=tcp], iceTransportPolicy: all, bundlePolicy: balanced, rtcpMuxPolicy: require, iceCandidatePoolSize: 0 }, {advanced: [{enableDtlsSrtp: {exact: true}}]}

Messenger uses port 3478 for STUN, TURN over UDP on port 40002, TURN over TCP on port 3478. It also uses TURN over TCP on port 443. No TURN/TLS for Messenger.
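The comparison above was done by eyeballing the URLs. The same bookkeeping can be sketched in a few lines of JavaScript – treat this as an illustration, not a production parser (the default ports follow the STUN/TURN URI conventions):

```javascript
// Sketch: parse ICE server URLs (as seen in chrome://webrtc-internals)
// into scheme/host/port/transport – the bookkeeping done by hand above.
function parseIceUrl(url) {
  const m = url.match(/^(stuns?|turns?):(\[[^\]]+\]|[^:?]+):?(\d+)?(?:\?transport=(udp|tcp))?$/);
  if (!m) throw new Error(`unrecognized ICE URL: ${url}`);
  const [, scheme, host, port] = m;
  // Default ports when none is given: 3478 plain, 5349 over TLS
  const defaults = { stun: 3478, stuns: 5349, turn: 3478, turns: 5349 };
  return {
    scheme,
    host,
    port: port ? Number(port) : defaults[scheme],
    transport: m[4] || (scheme === 'turns' ? 'tcp' : 'udp'),
  };
}

parseIceUrl('turns:turn.appear.in:443?transport=tcp');
// → { scheme: 'turns', host: 'turn.appear.in', port: 443, transport: 'tcp' }
```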

Here’s what I’ve learned here:

  • People don’t use the default STUN/TURN ports in their deployments
  • Even when a service doesn’t use ports that make sense (443), it may not use the default ports either (see Google Meet)
  • With seemingly something straightforward as STUN/TURN, everyone ends up implementing it differently

MQTT

We’ve looked at NAT traversal and its STUN and TURN servers. But what about signaling protocols? The first one that came to mind when I thought about other examples was MQTT.

MQTT is a messaging protocol used in the IoT and M2M space. Others use it as well – Facebook, for example:

They explained how MQTT is used as part of their Messenger backend for the WebRTC signaling (and I guess all other messages they send over Messenger).

MQTT can run over TCP, listening on port 1883, and over TLS on port 8883. But then, when you look at the AWS documentation for AWS IoT, you find this:

There’s no port 1883 at all, and now port 443 can be used directly if needed.
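The port choice described here can be captured in a small JavaScript sketch (the `/mqtt` WebSocket path mirrors a common broker convention – your broker may differ):

```javascript
// Map the MQTT transport choice to the conventional endpoint:
// plain MQTT on 1883, MQTT over TLS on 8883, and MQTT over a secure
// WebSocket on 443 – the firewall-friendly option AWS IoT documents.
function mqttEndpoint(host, { tls = true, websocket = false } = {}) {
  if (websocket) return `wss://${host}:443/mqtt`; // rides on the HTTPS port
  return tls ? `mqtts://${host}:8883` : `mqtt://${host}:1883`;
}
```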

 

It would be interesting to know whether the Facebook Messenger mobile app uses MQTT over port 443 or 8883 – and if it is port 443, whether it is MQTT over TLS or MQTT over WebSocket. If what they do with their STUN and TURN servers is any indication, any port number is a good guess.

SIP

SIP is the most common VoIP signaling protocol out there. I didn’t remember the details, so I checked Wikipedia:

SIP clients typically use TCP or UDP on port numbers 5060 or 5061 for SIP traffic to servers and other endpoints. Port 5060 is commonly used for non-encrypted signaling traffic whereas port 5061 is typically used for traffic encrypted with Transport Layer Security (TLS).

Port 5060 for UDP and TCP traffic. And port 5061 for TLS traffic.

Then I asked a friend who knows a thing or two about SIP (he’s built more than his share of production SIP networks). His immediate answer?

443.

He remembered 5060 was UDP, 5061 was TCP and 443 was for TLS.

When you want to deploy a production SIP network, you configure your servers to do SIP over TLS on port 443.
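The same pattern as before, sketched in JavaScript – the conventional port mapping plus the production-grade "just use 443 for TLS" override (the domain is a placeholder):

```javascript
// Conventional SIP ports: 5060 for UDP/TCP, 5061 for TLS.
// Production networks often move TLS to 443 to get through firewalls.
function sipTarget(host, transport, { firewallFriendly = false } = {}) {
  const conventional = { udp: 5060, tcp: 5060, tls: 5061 };
  const port = transport === 'tls' && firewallFriendly ? 443 : conventional[transport];
  return `${host}:${port};transport=${transport}`;
}
```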

Next Steps

If you are looking at protocol implementations and you happen to see some default ports that are required, ask yourself if using them is in your best interest. To get past firewalls and other nasty devices along the route, you might want to consider using other ports.

While you’re at it, I’d avoid sending stuff in the clear if possible and opt for TLS on the connection, which brings us back to 443. Possibly the most important port on the Internet.

If you are serious about learning WebRTC, then check out my online WebRTC training:

Enroll to course

The post You Better Ignore the Default Protocol Ports You Implement appeared first on BlogGeek.me.

Kamailio v5.1.2 Released

miconda - Thu, 03/01/2018 - 21:00
Kamailio SIP Server v5.1.2 stable is out – a minor release including fixes in code and documentation since v5.1.1. The configuration file and database schema compatibility is preserved, which means you don’t have to change anything to update.

Kamailio® v5.1.2 is based on the latest source code of GIT branch 5.1 and represents the latest stable version. We recommend those running previous 5.1.x or older versions to upgrade. No changes to the configuration file or database structure are needed compared with previous releases of the v5.1 branch.

Resources for Kamailio version 5.1.2

Source tarballs are available at:

Detailed changelog:

Download via GIT:

# git clone https://github.com/kamailio/kamailio kamailio
# cd kamailio
# git checkout -b 5.1 origin/5.1

Relevant notes, binaries and packages will be uploaded at:

Modules’ documentation:

What is new in the 5.1.x release series is summarized in the announcement of v5.1.0:

Do not forget about the next Kamailio World Conference, taking place in Berlin, Germany, during May 14-16, 2018. The first group of sessions and speakers has been announced and registration is open!

Thanks for flying Kamailio!

Kamailio v5.0.6 Released

miconda - Tue, 02/27/2018 - 19:00
Kamailio SIP Server v5.0.6 stable is out – a minor release including fixes in code and documentation since v5.0.5. The configuration file and database schema compatibility is preserved, which means you don’t have to change anything to update.

Kamailio v5.0.6 is based on the latest version of GIT branch 5.0. We recommend those running previous 5.0.x or older versions to upgrade. No changes to the configuration file or database structure are needed compared with the previous release of the v5.0 branch.

Resources for Kamailio version 5.0.6

Source tarballs are available at:

Detailed changelog:

Download via GIT:

# git clone https://github.com/kamailio/kamailio kamailio
# cd kamailio
# git checkout -b 5.0 origin/5.0

Relevant notes, binaries and packages will be uploaded at:

Modules’ documentation:

What is new in the 5.0.x release series is summarized in the announcement of v5.0.0:

Note: branch 5.0 is the previous stable branch. The latest stable branch is 5.1, currently with v5.1.1 released out of it. Be aware that you may need to change the configuration files and database structures from 5.0.x to 5.1.x. See more details at:

Check also the details of the next Kamailio World Conference, taking place in Berlin, Germany, during May 14-16, 2018. Details of a selection of speakers and sessions have been published. Registration is open!

Thanks for flying Kamailio!

Kamailio v4.4.7 Released

miconda - Mon, 02/26/2018 - 18:00
Kamailio SIP Server v4.4.7 stable is out – a minor release including fixes in code and documentation since v4.4.6. The configuration file and database schema compatibility is preserved, which means you don’t have to change anything to update.

Kamailio v4.4.7 is based on the latest version of GIT branch 4.4. We recommend those running previous 4.4.x versions to upgrade either to v4.4.7 or, even better, to the 5.0.x or 5.1.x series. When upgrading to v4.4.7, no changes to the configuration file or database structure are needed compared with the previous release of the v4.4 branch.

Important: Kamailio v4.4.7 is the last planned release in the 4.4.x series. From this moment, the maintained stable release series are 5.0.x and 5.1.x.

Resources for Kamailio version 4.4.7

Source tarballs are available at:

Detailed changelog:

Download via GIT:

# git clone https://github.com/kamailio/kamailio kamailio
# cd kamailio
# git checkout -b 4.4 origin/4.4

Relevant notes, binaries and packages will be uploaded at:

Modules’ documentation:

What is new in the 4.4.x release series is summarized in the announcement of v4.4.0:

Note: branch 4.4 is an old stable branch, going out of maintenance with the release of v4.4.7 – if no major regression is discovered, no further releases will be made out of branch 4.4. The latest stable branch is 5.1, currently with v5.1.1 released out of it. The project officially maintains the last two stable branches, now 5.0 and 5.1. An alternative is therefore to upgrade to the latest 5.1.x – be aware that you may need to change the configuration files and database structures from 4.4.x or 5.0.x to 5.1.x. See more details at:

We hope to meet many of you at the next Kamailio World Conference, May 14-16, 2018, in Berlin, Germany. The first selection of speakers and sessions has already been published and registration is open. See more on the website of the event at:

Thanks for flying Kamailio!
