So I talked about Skype and Viber at KrankyGeek two weeks ago. Watch the video on YouTube or take a look at the slides. No “reports” or packet dumps to publish this time, mostly because it is very hard to draw conclusions from the results.
The VoIP services we have looked at so far use the RTP protocol for transferring media. RTP uses a packet header which is not encrypted and contains a number of attributes such as the payload type (identifying the codec used), a synchronization source (which identifies the source of the stream), a sequence number, and a timestamp. This allows routers to identify RTP packets and prioritize them. It also allows someone monitoring all network traffic (“Pervasive Monitoring”), or someone wiretapping your internet connection, to easily identify VoIP traffic.
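To make this concrete, here is a minimal sketch (mine, not from the talk) of how little code it takes to read those cleartext header fields from a captured UDP payload; field offsets are per RFC 3550:

// Minimal sketch: reading the cleartext RTP header fields from a raw
// UDP payload (a Node.js Buffer). Field offsets per RFC 3550.
function parseRtpHeader(buf) {
  return {
    version: buf[0] >> 6,               // 2 for RTP - a cheap first filter
    payloadType: buf[1] & 0x7f,         // identifies the codec
    sequenceNumber: buf.readUInt16BE(2),
    timestamp: buf.readUInt32BE(4),
    ssrc: buf.readUInt32BE(8)           // identifies the stream source
  };
}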
Skype and Viber encrypt all packets. Does that make them less susceptible to this kind of attack?
Bear with me; the answer is going to be very technical. tl;dr:
Not expecting to find much, I ran a standard set of scenarios with Skype on Android and iOS, similar to those used in the WhatsApp analysis.
A first look did not show much. Luckily, when analyzing WhatsApp I had developed some tooling to deal with RTP. I modified those tools, removing the RTP parser, and was greeted with these graphs:
While the bitrate alone (blue is my iPad 3 with a 172.16. IP address, black is my old Android phone) is not very interesting, the packet rate of exactly 50 packets per second was. Also, the packet length distribution was similar to Opus. As I figured out later from the integrated debugging (on the Android device; this must be too technical for iOS users!), this was the SILK codec. In fact, if you account for some overhead, the black distribution matches what we saw from WhatsApp earlier and what is now known to be Opus at 16 kHz or 8 kHz.
So the encryption did not change the traffic pattern. Nor does it hide the fact that a call is happening.
When muting the audio on one device, one can even see regular spikes in the traffic every ten seconds. Presumably, those are keepalive packets.
Let’s look at some video traffic. Notice the two distinct distributions in the third graph? Let’s suppose that the left one is audio and everything else is video. This works well enough when looking at the last graph, which shows the ‘audio’ traffic in green and orange respectively.
The accuracy could probably be improved a little by also looking at the packet rate, which is pretty much constant for audio.
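As an illustration, a naive split might look like the following sketch (the 300-byte threshold is an invented number for illustration, not one measured from Skype):

// Naive audio/video split: audio packets are short and arrive at a
// near-constant rate, so treat everything below the threshold as audio.
function classifyPacket(lengthBytes) {
  return lengthBytes < 300 ? 'audio' : 'video';
}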
In RTP, we would use the synchronization source (SSRC) field from the header to accomplish this. But that just makes things easier for routers.
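With cleartext RTP, that demultiplexing is trivial; here is a sketch reusing the header parser from above (packets being a hypothetical array of captured payloads):

// Group captured packets by their SSRC - no guessing required.
var streams = {};
packets.forEach(function (pkt) {
  var ssrc = parseRtpHeader(pkt).ssrc;
  (streams[ssrc] = streams[ssrc] || []).push(pkt);
});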
Last but not least: relays. When testing this from Europe, I was surprised to see my traffic being routed through Redmond, Washington.
This is quite interesting in comparison to the first graph. The packet rate stays roughly the same, but the bitrate doubles to 100 kilobits per second. That is quite some overhead compared to the standard TURN protocol, which adds next to nothing. The packet length distribution is shifted to the right and there are a couple of very large packets. Latency was probably higher, too, but that is very hard to measure.
While I got some pretty interesting results from Skype, Viber turned out to be harder. Thanks to the tooling, it now took only a matter of seconds to discover that, like WhatsApp, it uses a relay server to help with call establishment:
Blue traffic is captured locally before it is sent to the peer; the black and green traffic is received from the remote end. The traffic shown in black almost vanishes after a couple of initial spikes (which contain very large packets at a low frequency). Visualizations of this kind are a lot easier to understand than the packet dumps captured with Wireshark.
And for the sake of completeness, muting audio on both sides showed keepalive traffic, visible as tiny periodic spikes in this graph:
VoIP security is hard. And this is not really news; attacks on encrypted VoIP traffic have been known for quite a while, see e.g. this paper from 2008 and the more recent ‘Phonotactic Reconstruction’ attacks.
The fact that RTP does not encrypt the header data makes it slightly easier to identify, but it seems that a determined attacker could have come to the same conclusions about the encrypted traffic of services like Skype. Keep that in mind when talking about the security of your service. Also, keep the story of the ECB penguin in mind.
Or, as Emil Ivov said about the security of peer-to-peer: “Unless there is a cable going between your computer and the other guy’s computer and you can see the entire cable, then you’re probably in for a rude awakening”.
{“author”: “Philipp Hancke“}
The post Traffic Encryption appeared first on webrtcHacks.
4K isn’t part of the current round of fighting.
A quick disclaimer: I own a Chromecast dongle. I don’t use it much. My daughter plays Just Dance Now every couple of days on it. And sometimes we watch our pictures on the large screen. So I can’t be called a true user of these devices.
That said, these devices are heavily used for streaming, which means video, which means a video codec. Which means I am a bit interested in them lately. Especially now with the H.265 crisis and the newly founded Alliance for Open Media.
We had two launches lately and rumors of a third one. Let’s look at each of them through the prism of codec support and resolution.
Apple TV

Apple TV has its issues with the web. The specs of this upcoming device, from Apple’s website, include the following video formats:
H.264 video up to 1080p, 60 frames per second, High or Main Profile level 4.2 or lower
H.264 Baseline Profile level 3.0 or lower with AAC-LC audio up to 160 Kbps per channel, 48kHz, stereo audio in .m4v, .mp4, and .mov file formats
MPEG-4 video up to 2.5 Mbps, 640 by 480 pixels, 30 frames per second, Simple Profile with AAC-LC audio up to 160 Kbps, 48kHz, stereo audio in .m4v, .mp4, and .mov file formats
Since it runs an A8 chip, one can deduce that it might actually have H.265 capabilities, but Apple decided not to use them for the time being – the same way it removed H.265 from FaceTime on the iPhone 6.
They also aren’t going overboard with the resolution, sticking to 1080p, streamed with H.264. The nice thing here is their 60 fps support.
There’s no 4K though. And no H.265.
Amazon Fire TV

Amazon announced its own response to the Apple TV a day after the Apple TV announcement. As with all classic after-Apple announcements, this one had the two obvious features: a lower price point and better hardware.
The better hardware part boils down to support for 4K resolutions.
The specs indicate the following content formats:
Video: H.265, H.264
Audio: AAC-LC, AC3, eAC3 (Dolby Digital Plus), FLAC, MP3, PCM/Wave, Vorbis, Dolby Atmos (EC3_JOC)
Photo: JPEG, PNG, GIF, BMP
So higher resolutions probably get streamed at H.265 while everything else is H.264.
Here’s the rub though:
This is a hardware device. There is no real option to add or replace video codecs easily – at least not at such high resolutions. They worked on this one for over a year, so they couldn’t have foretold the mess that H.265 patents have become. They didn’t want to (or couldn’t) risk it with VP9. So now what?
Will this 4K device be useful for watching Amazon video movies at 4K? How much higher will these need to be priced to deal with the royalty headaches of H.265?
Google’s YouTube service certainly isn’t going to support H.265 for its 4K streams anytime soon.
I can’t see how 4K using H.265 on a hardware device in 2015 is the right choice. Sorry.
Google Chromecast

Only rumors for now, but it seems this one will be announced on September 29th. We will know soon enough how stupid my estimates really are.
Here we go – these are my own estimates:
We will see in a week how I fared on this one.
Bottom Line

While 4K is a higher resolution than 1080p, it is too new and too niche at this point:
Kranky and I are planning the next Kranky Geek - Q1 2016. Interested in speaking? Just ping me through my contact page.
The post Apple TV, Amazon Fire TV or a new Google Chromecast Dongle – 4K Won’t Matter appeared first on BlogGeek.me.
Hello, again. This past week in the FreeSWITCH master branch we had 57 commits. The features for this week include: added support for timestamp-based counting for the jitter buffer in audio mode, added support for X-headers in 3PCC mode in mod_sofia, and fine-tuning of FEC with repacketization and improved jitter buffer debugging with FEC in mod_opus. And, a security issue was fixed by properly handling malformed JSON when parsing!
Join us on Wednesdays at 12:00 CT for some more FreeSWITCH fun! And head over to freeswitch.com to learn more about FreeSWITCH support.
Security Issues:
New features that were added:
Improvements in build system, cross platform support, and packaging:
The following bugs were squashed:
And, this past week in the FreeSWITCH 1.4 branch we had 7 new commits merged in from master. And the FreeSWITCH 1.4.21 release is here! Go check it out!
Security Issues:
New features that were added:
The following bugs were fixed:
Yes and no.
Microsoft just announced officially that they have added ORTC to Edge. ORTC is… well… it’s kind’a like’a WebRTC. But not exactly.
Someone is doing his best NOT to mention WebRTC in all this…
Here are a few random thoughts I had on the subject:
All in all, another good indicator for the health of this community and real time communications in the web.
For a real analysis, read Alex’s ruminations on ORTC in Edge.
The post Do we Care about ORTC on Edge? appeared first on BlogGeek.me.
From Microsoft – http://blogs.windows.com/msedgedev/2015/09/18/ortc-api-is-now-available-in-microsoft-edge/
Our initial ORTC implementation includes the following components:
More here..
ORTC support in Edge has been announced today. A while back, we saw this on twitter:
Windows Insider Preview build 10525 is now available for PCs: http://t.co/zeXQJocgLs This release lays groundwork for ORTC in Microsoft Edge
— Microsoft Edge Dev (@MSEdgeDev) August 18, 2015
“This release [build 10525] lays the groundwork for ORTC” was quite an understatement. It was considered experimental, and while the implementation still differs slightly from the specification (which is itself still a work in progress), it already works, and as a developer you can get familiar with how ORTC works and how it differs from the RTCPeerConnection API.
If you want to test this, please use builds newer than 10547. Join the Windows Insider Program to get them and make sure you’re on the fast ring.
The approach taken differs from the RTCPeerConnection way of giving you a blob that you exchange, as this WebRTC PC1 sample shows quite well. ORTC is more about giving you the building blocks.
In ORTC, you have to incrementally build up things. Let’s walk through the code (available on github):
Setting up a Peer to Peer connection

var gatherer1 = new RTCIceGatherer(iceOptions);
var transport1 = new RTCIceTransport(gatherer1);
var dtls1 = new RTCDtlsTransport(transport1);

There are three elements on the transport side:
* the RTCIceGatherer which gathers ICE candidates to be sent to the peer,
* the RTCIceTransport where you add the candidates from the peer,
* the RTCDtlsTransport, which sits on top of the ICE transport and deals with encryption.
As in the PeerConnection API, you exchange the candidates:
// Exchange ICE candidates.
gatherer1.onlocalcandidate = function (evt) {
  console.log('1 -> 2', evt.candidate);
  transport2.addRemoteCandidate(evt.candidate);
};
gatherer2.onlocalcandidate = function (evt) {
  console.log('2 -> 1', evt.candidate);
  transport1.addRemoteCandidate(evt.candidate);
};

Also, you need to exchange the ICE parameters (usernameFragment and password) and start the ICE transport:
transport1.start(gatherer1, gatherer2.getLocalParameters(), 'controlling');
transport1.onicestatechange = function() {
  console.log('ICE transport 1 state change', transport1.state);
};

This is done with SDP in the PeerConnection API. One side needs to be controlling, the other is controlled.
You also need to start the DTLS transport with the remote fingerprint and hash algorithm:
dtls1.start(dtls2.getLocalParameters());
dtls1.ondtlsstatechange = function() {
  console.log('DTLS transport 1 state change', dtls1.state);
};

Once this is done, you can see the candidates being exchanged and the ICE and DTLS state changes on both sides.
Cool. Now what?
Sending a MediaStream track over the connection

Let’s send a MediaStream track. First, we acquire it using the promise-based navigator.mediaDevices.getUserMedia API and attach it to the local video element.
// call getUserMedia to get a MediaStream.
navigator.mediaDevices.getUserMedia({video: true})
  .then(function(stream) {
    document.getElementById('localVideo').srcObject = stream;

Next, we determine the send and receive parameters. This is where the PeerConnection API does the “offer/answer” magic.
Since our sending capabilities match the receiving capabilities, there is little we need to do here.
Some black magic is still involved; check the specification for the gory details.
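For this simple symmetric case, the step can be approximated like this (a sketch only; the exact parameter shape is defined by the ORTC specification, and the github sample linked above has the real code):

// Derive parameters from the local receive capabilities. This works here
// because both ends are Edge and their capabilities match.
var caps = RTCRtpReceiver.getCapabilities('video');
var params = {
  muxId: '',
  codecs: caps.codecs,                 // assumes both ends accept the same codecs
  headerExtensions: caps.headerExtensions,
  encodings: [],                       // empty: let the implementation pick defaults
  rtcp: {cname: '', reducedSize: false, ssrc: 0, mux: true}
};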
Then, we start the RtpReceiver with those parameters:
// Start the RtpReceiver to receive the track.
receiver = new RTCRtpReceiver(dtls2, 'video');
receiver.receive(params);
var remoteStream = new MediaStream();
remoteStream.addTrack(receiver.track);
document.getElementById('remoteVideo').srcObject = remoteStream;

Note that the Edge implementation is slightly different from the current ORTC specification here, since you need to specify the media type as the second argument when creating the RtpReceiver.
We create a stream to contain the track and attach it to the remote video element.
Last but not least, let’s send the video track we got:
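The github sample has the actual code; its shape mirrors the receiver side above, roughly:

// Send the local video track over the DTLS transport (sketch).
var sender = new RTCRtpSender(stream.getVideoTracks()[0], dtls1);
sender.send(params);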
That’s it. It gets slightly more complicated when you have to deal with multiple tracks, and have to actually negotiate capabilities in order to interop between Chrome and Edge. But that’s a longer story…
{“author”: “Philipp Hancke“}
The post First steps with ORTC appeared first on webrtcHacks.
There are a few side stories around Apple lately that relate to WebRTC. I wanted to share them here.
Apple TV ships with no HTML5 support

It seems that in the Apple TV reboot, there are going to be apps. But not ones that can make use of HTML5. Just native apps. John Gruber points to a post around that topic titled Everything but the Web, concluding that Web views won’t find their way to the TV screen if Apple has anything to do with it.
He doesn’t state the reason though. If you ask me, this has nothing to do with the Apple/Flash war of the past. It also has nothing to do with design or aesthetics. It has everything to do with ecosystem control. For Apple, the ability to do cross-platform development is an aberration. Why on earth enable developers to write their code once and then run it elsewhere? There’s nothing outside the closed garden of Apple, so why bother?
Killing HTML5 on the Mac is impossible. Killing it on the iPhone or iPad is also rather hard – too many apps already use it, and there’s that pesky browser people use. But on the TV? That’s greenfield! So why not just forget about HTML5 altogether?
If you ask me, the good people at Apple see only one reason for HTML5 to exist – and that is for people to be able to go to the apple.com website. Other than that? Useless.
The future of WebKit

There has been a lot of back and forth lately about the future of the web. Should we run full steam ahead with it or sit and wait? Some people prefer having it change and progress less. I can’t see why – when every year thousands of new APIs are rained on us by Apple at WWDC and by Google at I/O, why can’t the Web improve? Why should it stay static?
WebKit, on the other hand, is a rather dead rendering engine at this point in time. It might be fast and optimized, but it is becoming a bit old when it comes to adopting and supporting standards. WebRTC isn’t there, and multiple other technologies aren’t either. It seems to be keeping up with HTML5 and CSS notation, but the programmatic parts of JavaScript? Falling behind the other browsers.
I’ve written before on how Microsoft Edge is getting way better. Mozilla is getting their act together and modernizing the older parts of their Firefox browser (extensions, for example), and Google is speeding up and optimizing Chrome now that it has become huge. But Safari? Microsoft Edge will keep Google and Mozilla on edge and get them to improve. I don’t think the other browser vendors care too much about Safari getting too good anytime soon.
I wonder how much care and affection the Safari/WebKit team gets inside Apple these days. Probably not that much.
This runs somewhat counter to the positive assertions Alex made here about Apple and WebRTC.
H.265 / VP9

Apple and H.265 take center stage in my video codec sessions lately. You can see the video codec wars session I gave at TokBox last week.
My usual spiel?
But then I get directed to this interesting post in 9to5Mac:
Another interesting detail: 4K videos are being recorded in H.264, and Apple is no longer making reference to H.265 support for any purpose, FaceTime or otherwise
Hmm…
Is it only me, or did Apple just drop H.265 support and start shifting camps? Or at the very least, sit on the fence. It might have something to do with the HEVC Advance stupidity that brought the gang to open up the Alliance for Open Media. They might be edging away from royalty-bearing codecs and moving to the free alternative. Or they might be trying to use it as leverage over HEVC Advance to make its licensing terms more palatable.
How do you do 4K video resolutions with a camera if not by using H.265? Use H.264? Ridiculous. But that’s exactly what seems to be happening now with the new iPhone 6s.
Should they be moving to VP9 instead? Probably, but it will be hard on Apple. They rely heavily on hardware acceleration and they don’t seem to have it on their chipsets at the moment.
This is a loss to the H.26x camp at the moment.
–
Where is this all headed?
I am not sure, but here are a couple of things I’d plan if I were given that task.
The post Thoughts on Apple, WebRTC, HTML5, H.265 and VP9 appeared first on BlogGeek.me.
It turns out people like their smartphone apps, so native mobile is pretty important. For WebRTC that usually leads to venturing outside of JavaScript into the world of C++/Swift for iOS and Java for Android. You can try hybrid applications (see our post on this), but many modern web applications use JavaScript frameworks like AngularJS, Backbone.js, Ember.js, or others, and those don’t always mesh well with these hybrid app environments.
Can you have it all? Facebook is trying with React which includes the ReactJS framework and React Native for iOS and now Android too. There has been a lot of positive fanfare with this new framework, but will it help WebRTC developers? To find out I asked VoxImplant’s Alexey Aylarov to give us a walkthrough of using React Native for a native iOS app with WebRTC.
{“editor”: “chad hart“}
If you haven’t heard about ReactJS or React Native, then I recommend checking them out. They already have a big influence on web development, and with the React Native release for iOS and the just-released Android version, they have started influencing mobile app development too. It sounds familiar, doesn’t it? We’ve heard the same about WebRTC, since it changes the way web and mobile developers implement real-time communication in their apps. So what is React Native after all?
“React Native enables you to build world-class application experiences on native platforms using a consistent developer experience based on JavaScript and React. The focus of React Native is on developer efficiency across all the platforms you care about — learn once, write anywhere. Facebook uses React Native in multiple production apps and will continue investing in React Native.”
https://facebook.github.io/react-native/
I can simplify it to “one of the best ways for web/javascript developers to build native mobile apps, using familiar tools like Javascript, NodeJS, etc.”. If you are connected to the WebRTC world (like me), the first idea that comes to your mind when you play with React Native is “adding WebRTC there should be a big thing, how can I make it?”, and then from the React Native documentation you’ll find out that there is a way to create your own Native Modules:
Sometimes an app needs access to a platform API and React Native doesn’t have a corresponding module yet. Maybe you want to reuse some existing Objective-C, Swift or C++ code without having to reimplement it in JavaScript, or write some high performance, multi-threaded code such as for image processing, a database, or any number of advanced extensions.
That’s exactly what we needed! Our WebRTC module in this case is a low-level library that provides a high-level Javascript API for React Native developers. Another good thing about React Native is that it’s an open source framework, so you can find a lot of the required info on GitHub. That’s very useful, since React Native is still very young and it’s not easy to find details about native module development. You can always reach out to folks on Twitter (yes, it works! Look for #reactnative or https://twitter.com/Vjeux) or join their IRC channel to ask your questions, but checking examples from GitHub is a good option.
React Native’s module architecture

Native modules can have C/C++, Objective-C, and Javascript code. This means you can put the native WebRTC libraries, signaling, and some other libs written in C/C++ as the low-level part of your module, implement video element rendering in Objective-C, and offer a Javascript/JSX API to React Native developers.
Technically low-level and high-level code is divided in the following way:
In Objective-C you can interact with the OS and C/C++ libs, and even create iOS widgets. The ready-to-use native module(s) can be distributed in a number of different ways, the easiest one being via an npm package.
WebRTC module API

We’ve been implementing a React Native module for our own platform and already knew which of our API functions we would expose to Javascript. Creating a WebRTC module that is independent of signaling and can be used by any WebRTC developer is a much more complicated problem.
We can divide the process into a few parts:
Integration with WebRTC

Since WebRTC does not dictate how developers discover user names and network connection information, this signaling can be done in multiple ways. Google’s WebRTC implementation is known as libwebrtc. libwebrtc has a built-in library called libjingle that provides “signaling” functionality.
There are three ways libwebrtc can be used to establish communication:
1. The simplest one, leveraging libjingle. In this case signaling is implemented in libjingle via the XMPP protocol.
2. A more complicated one, with signaling on the application side. In this case you need to implement the SDP and ICE candidate exchange and pass the data to webrtc. One popular method is to use a SIP library for signaling.
3. For the hardcore, you can avoid signaling altogether. This means the application should take care of all RTP session params: RTP/RTCP ports, audio/video codecs, codec params, etc. An example of this type of integration can be found in the WebRTC sources, in the WebRTCDemo app for Objective-C (src/talk/app/webrtc).
Adding Signaling

We used the 2nd approach in our implementation. Here are some code examples for making/receiving calls (C++):
First of all, we need to create a react-native module (https://facebook.github.io/react-native/docs/native-modules-ios.html), where we describe the API and implement audio/video calling using WebRTC (Obj-C, iOS):
@interface YourVoipModule ()
@end

@implementation YourVoipModule

RCT_EXPORT_MODULE();

RCT_EXPORT_METHOD(createCall: (NSString *)to withVideo: (BOOL)video ResponseCallback: (RCTResponseSenderBlock)callback)
{
  // createVoipCall:withVideo: is the module's internal call-setup helper.
  NSString *callId = [self createVoipCall:to withVideo:video];
  callback(@[callId]);
}

@end

If we want to support video calling, we will need an additional component to show the local camera (Preview) or the remote video stream (RemoteView):
@interface YourRendererView : RCTView
@end

Initialization and deinitialization can be implemented in the following methods:
- (void)removeFromSuperview {
  [videoTrack removeRenderer:self];
  [super removeFromSuperview];
}

- (void)didMoveToSuperview {
  [super didMoveToSuperview];
  [videoTrack addRenderer:self];
}

You can find the code examples on our GitHub page – just swap the references to our signaling with your own. We found examples very useful while developing the module, so hopefully they will help you understand the whole idea much faster.
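On the Javascript side the exported method then shows up on NativeModules; a hypothetical call, with names matching the Obj-C sketch above, would look like this:

// Calling the native module from React Native Javascript.
var React = require('react-native');
React.NativeModules.YourVoipModule.createCall('username', true, function (callId) {
  console.log('call created with id', callId);
});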
Demo

The end result can look as follows:
Closing Thoughts

When the WebRTC community started working on the standard, one of the main ideas was to make real-time communications simpler for web developers and to provide them with a convenient Javascript API. React Native has a similar goal: it lets web developers build native apps using Javascript. In our opinion, bringing WebRTC to the set of available React Native APIs makes a lot of sense – web app developers will be able to build their RTC apps for mobile platforms. The guys behind React Native have just released it for Android at the Scale conference, so we will update this article or write a new one about building the module compatible with Android as soon as we know all the details.
{“author”: “Alexey Aylarov”}
The post Reacting to React Native for native WebRTC apps (Alexey Aylarov) appeared first on webrtcHacks.
Hello, again. This past week in the FreeSWITCH master branch we had 35 commits. This week we had one feature: the handling of a=sendonly, a=sendrecv, a=recvonly to change who is sending video during a call.
Join us on Wednesdays at 12:00 CT for some more FreeSWITCH fun! And head over to freeswitch.com to learn more about FreeSWITCH support.
New features that were added:
Improvements in build system, cross platform support, and packaging:
The following bugs were squashed:
And, this past week in the FreeSWITCH 1.4 branch we had 9 new commits merged in from master. And the FreeSWITCH 1.4.21 release is here! Go check it out!
The following bugs were fixed:
We’re slightly more than 20 days away from KazooCon 2015, and we want you here to see the power and scalability of our new VoIP platform firsthand. If you are still on the fence about attending, we’re revealing the top ten reasons why you need to attend the Unified Communications Revolution. Early-bird pricing ends on Sunday, September 20th, so register now and save money. So what are you waiting for? Go to www.KazooCon.com and register.
Number 10 - Demos, Demos, Demos
We heard your feedback from KazooCon 2014, and the overwhelming favorite session was the breakout demo. So we are doubling down this year and will be hosting five hands-on demos (hint: bring your laptop). You will get a first-hand look at our new Kazoo APIs, and our Engineering team will take you through them step-by-step. Our interactive demos will include:
Number 9 - Amazing Guest Speakers
Our staff is poised to give outstanding presentations, and we have been prepping demos for weeks. But we’re not the only ones who will be on the podium. Telco gurus from around the world are going to wow you with talks presenting their own success stories, tools for enhancing the telecom industry, and more. We have C-Level speakers from Mast Mobile, Orange, Kamailio, FreeSWITCH, VirtualPBX, IBM Cloudant, Telnexus and SIPLABS. You don’t want to miss it!
Number 8 - Let’s make it a week in San Francisco
Join us for our official Kazoo Training directly following KazooCon from October 7-9. This three-day training will teach you about Kazoo and all of the third-party components that power the platform. You will deep-dive into Kazoo APIs, and learn how to set up a cluster, GUI, WhApps, FreeSWITCH, BigCouch and much more. Purchase both an Early-Bird KazooCon + Kazoo Training ticket and save money! Register Here!
Number 7 - The After-Party
No conference is complete without an after-party. Join us this year for some spirited libations and telco networking. If you attended last year, you bore witness to our alcohol-fueled Scrabble, Pictionary, and arcade games. Last year our Lead Designer Josh was undefeated at Mario Kart, so we’re all practicing to take him down this year. We’ve got plenty of fun and collaborative events planned for this year’s party, but we’re not spoiling the surprise.
Stay tuned as we count down to Number 1. But Register Now at www.KazooCon.com
[Tellybean company snapshot: WebRTC on the big screen; video conferencing; small company; voice and video]
[If you are new around here, then you should know I’ve been writing about WebRTC lately. You can skim through the WebRTC post series or just read what WebRTC is all about.]

I am not a fan of video calling in the living room. Not because I have real issues with it, but because I think it is a steep mountain to climb – I am more of the low-hanging-fruit kind of guy.
That’s not the case with Tellybean, a company focused on TV video calling and recently doing that using WebRTC.
Cami Hongell, CEO of Tellybean, found the time to chat with me and answer a few questions about what they are doing and the challenges they are facing.
What is Tellybean all about?
Tellybean is all about easy video calling on the TV. My two co-founders are Aussies living in Finland and they had a problem. A software update or a forgotten password too often got in the way of their weekly Skype call with grandma Down Under. Once audio and video were finally working both ways, there were four people fighting for a spot in front of the 13” screen.
We realised that modern life tends to separate families and our problem was far from unique. That’s when we decided to build an easy video calling service for the TV. It had to be so easy that even grandma could use it from the comfort of her couch. At the same time as we worked hard to eliminate complexity, we also needed to keep it affordable and build a channel which would provide users an easy way of getting the service.
Today we have an app which allows easy video calls on Android TV devices of our TV and set-top box partners. Currently you can make calls between selected Tellybean enabled Android TV devices and our web app on www.tellybean.com. To make it as easy as possible to call somebody from your TV, we will release apps for Android and iOS mobiles and tablets in the future.
You started by building your TV solution using Skype. What made you switch to WebRTC?
When we founded Tellybean four years ago, the tech landscape looked very different from today. WebRTC wasn’t there. Android TV and Tizen weren’t there – the TV operating systems were all over the place. So initially we set out to build an easy service which would run on our own dedicated Linux box. Our intention was to allow our service to connect with other existing services by putting our own UI on top of headless clients developed using the SDKs provided by some of those services. We started with SkypeKit and had a first version of it ready a few years ago. We were going to continue by adding Gtalk.
However, Skype decided to wind down the support of 3rd party developers and Google stopped Gtalk development. This happened almost at the same time as WebRTC was starting to gain traction. Switching to WebRTC turned out to be an easy decision once we looked into it and moved over to working on Android and 3rd party hardware only.
What excites you about working in WebRTC?
Having tried different VoIP platforms in the past, we have learned to appreciate the fact that working with WebRTC has allowed us to focus our resources on the more important UX and UI development. Since WebRTC offers a plugin-free, no-download alternative for video calling in modern browsers, combined with our TV and upcoming mobile device approach, we are able to provide easy use for a huge audience, with almost all entry barriers removed.
We are excited about having a great service which is getting a lot of interest from everybody in the Android TV value chain, from the chip manufacturers to the TV and STB manufacturers as well as the operators. We’ve announced co-operation with TP Vision / Philips TVs and Nvidia, and have much more in the pipeline. The great support and resources available in the WebRTC community, coupled with the support from the hardware manufacturers, means that WebRTC is truly becoming a compelling open source alternative for service developers such as ourselves.
Can you tell me a bit about the challenges of getting WebRTC to operate properly in an embedded environment fit for the TV?
An overall problem has been that we are moving slightly ahead of the curve.
Firstly, we need access to a regular USB camera. Unfortunately the Android TV platform and most devices lack UVC camera support. So we have been pushing everybody, Google, the device manufacturers and the chip suppliers, to add camera support. The powerful Nvidia Shield Console has camera support and we already have a few of the other major players implementing it for us.
Secondly, there are still devices that are underpowered and/or lack support for VP8 HW encoding, meaning that it is hard for us to provide a satisfactory call quality. Luckily again, most of the devices launched this year can handle video calling and our app.
The third problem relates to fine tuning the audio for our use case where the distance between the USB camera’s mic and the TV’s speakers is not a constant. Third time lucky: WebRTC provides us pretty good echo cancellation and other tools to optimize this and produce good audio quality.
What signaling have you decided to integrate on top of WebRTC?
Wanting to support browsers for user convenience and to get going quickly, we started out building our own solution with Socket.IO, but we are transitioning to MQTT for two reasons. Firstly, we came to the conclusion that MQTT provided us much more efficient scalability. Secondly, MQTT is much easier on the battery for mobile devices.
Current implementations of MQTT also allow us to use websockets for persistent connections in browsers, so it suits our purposes well. Additionally, some transaction-like functionality is done using REST. We are writing our own custom protocol as we go, which allows us to grow the service organically instead of trying to match a specification set forth by another party that doesn’t match our requirements or introduces undue complexity in architecture or implementation.
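As a rough illustration of that kind of setup (a sketch using the public mqtt npm package; the topic names and message shape are invented for illustration, not Tellybean’s actual protocol):

// Exchanging WebRTC signaling over MQTT (sketch).
var mqtt = require('mqtt');
var client = mqtt.connect('wss://broker.example.com');  // hypothetical broker URL
client.subscribe('calls/alice/inbox');
client.on('message', function (topic, payload) {
  var msg = JSON.parse(payload);
  // feed msg.sdp / msg.candidate into the local WebRTC stack here
});
// offerSdp would come from RTCPeerConnection.createOffer() elsewhere.
client.publish('calls/bob/inbox', JSON.stringify({type: 'offer', sdp: offerSdp}));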
Backend. What technologies and architecture are you using there?
We have server instances on Amazon Web Services, running our MQTT brokers and REST API, as well as the TURN/STUN service required for WebRTC. We use Node.js on the servers and MongoDB from a cloud service which allows us easy distributed access to shared data.
Where do you see WebRTC going in 2-5 years?
The recent inclusion of H.264 will lead to broader adoption of WebRTC in online services, and also in dedicated hardware devices since H.264 decoders are readily available. Microsoft is also starting to adopt WebRTC in their new Edge browser, so it seems like there’s a bright future for rich communication using WebRTC once all the players have started moving. Like everybody else, we would naturally like full WebRTC support from Microsoft and Apple sooner rather than later, and it will be hard for them to ignore it with all the support it is already receiving. In this timeframe, at least high-end mobile devices should have powerful enough hardware to support WebRTC in the native browsers without issues. With this kind of background infrastructure a lot of online services will be starting to use WebRTC in some form, instead of more isolated projects. With everyone moving towards a new infrastructure, hopefully any interoperability issues between different endpoints have been sorted out, which allows service developers to focus on their core ideas.
If you had one piece of advice for those thinking of adopting WebRTC, what would it be?
WebRTC is still an emerging technology that will surely have an impact on developers and businesses going forward, but it’s not completely mature yet. We’ve seen a lot of good development over time, so for a specific use case it might be a plug-and-play experience, or, in a more advanced case, you may need a lot of development work.
Given the opportunity, what would you change in WebRTC?
WebRTC has been improving a lot during the time that we’ve worked with it, so we believe that current issues will be improved on and disappear over time. The big issue right now on the browser side is obviously adoption, with Microsoft and especially Apple not up to speed yet. We would also like to see good support for all WebRTC codecs from involved parties, to avoid transcoding and to be able to use existing hardware components for a great user experience.
What’s next for Tellybean?
We’ve recently launched our Android TV app and are seeing the first users on the Nvidia Shield console, the first compatible device. We are now learning a lot and have a chance to fine-tune our app. From a business point of view, we currently have full focus on building a partner network which will provide us the platform for 100+ million TV installations in the coming years. Next we are starting development of mobile apps for Android and iOS. Later we will need to decide whether moving to other TV operating systems or, e.g., enabling other video calling services to connect to Tellybean TVs will be the next most important step towards achieving our aim of becoming THE video calling solution for the TV.
–
The interviews are intended to give different viewpoints than my own – you can read more WebRTC interviews.
The post Tellybean and WebRTC: An Interview With Cami Hongell appeared first on BlogGeek.me.
Hello, again. This past week in the FreeSWITCH master branch we had 75 commits! The new features for this week include: improvements to the Verto communicator, allowing FreeSWITCH to initiate late offer calls, the addition of CUSTOM ESL events menu::enter and menu::exit when a call enters and exits an IVR menu, and more work toward the new module, mod_hiredis!
Join us on Wednesdays at 12:00 CT for some more FreeSWITCH fun! And head over to freeswitch.com to learn more about FreeSWITCH support.
New features that were added:
Improvements in build system, cross platform support, and packaging:
The following bugs were squashed:
And, this past week in the FreeSWITCH 1.4 branch we had 9 new commits merged in from master. And the FreeSWITCH 1.4.21 release is here! Go check it out!
New features that were added:
Improvements in build system, cross platform support, and packaging:
The following bugs were fixed:
The FreeSWITCH 1.6.0 release is here! This is the release of the Video/MCU branch!
Release files are located here:
And we’re dropping support in packaging for anything older than Debian 8.0 and anything older than CentOS 7.0, due to a number of dependency issues on older platforms.
New features that were added:
Improvements in build system, cross platform support, and packaging:
The following bugs were squashed: