I needed to test some master-slave software in a scenario where the master first communicated with the slave over NAT (the master’s IP address was replaced with the firewall’s external address), and then NAT was removed: the master and slave kept the same addresses, but the slave saw the master directly.
This is the test scenario that worked on my desk, without having to add any routing to the LAN.
atom02 is the computer that emulates the slave system. It is connected back-to-back to alix102 and has only one IP address to communicate with:
ip link set dev eth0 up
ip addr add 192.168.1.50/31 dev eth0

alix102 is a Linux box with multiple Ethernet ports: eth0 is connected to my home LAN and has a DHCP address, 192.168.1.142/24. eth1 (192.168.1.51/31) is connected directly to atom02.
The following configuration makes alix102 answer ARP requests for 192.168.1.50 and forward packets to atom02, replacing the source address with 192.168.1.51. In addition, atom02 can open an SSH connection to 192.168.1.51:3022 and be connected to another box in the LAN that emulates the software master (192.168.1.147:22).
# Enable IP forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward

# Bring up eth1
ip link set dev eth1 up
ip addr add 192.168.1.51/31 dev eth1

# Enable proxy ARP on eth0
echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp

# Set up the NAT translation
iptables -t nat -A POSTROUTING -o eth1 -j SNAT --to 192.168.1.51
iptables -t nat -A PREROUTING -p tcp --dport 3022 -i eth1 -j DNAT --to 192.168.1.147:22

After that, atom02 can be re-connected directly to the LAN, keeping the address 192.168.1.50 with a /24 network mask, and the software can be tested with direct communication. alix102 has to be disconnected from the LAN, so that it does not pollute it with proxy ARP responses.
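Before re-cabling anything, the setup can be sanity-checked from alix102. This is only a sketch, using the interface names and addresses from the configuration above:

```shell
# Show the NAT rules with packet counters; non-zero counters confirm
# that the SNAT/DNAT rules are actually matching traffic
iptables -t nat -L -n -v

# Watch eth1 while atom02 connects to 192.168.1.51:3022; after DNAT the
# packets should head toward 192.168.1.147:22
tcpdump -ni eth1 'tcp port 3022 or tcp port 22'
```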
I needed to install CentOS 6 on an old Acer Aspire One notebook (with an Intel Atom CPU) for some software testing. The problem was that it could not perform a reboot, and I needed to press the power button every time. The usual instructions for the reboot=X kernel parameter did not help at all.
What really helped is the `kernel-ml` package from the elrepo.org repositories. At the time of writing, it was version `4.0.0-1.el6.elrepo.x86_64`.
Keep in mind that after installing the kernel-ml package, you need to edit /etc/grub.conf and make the new kernel the default. No additional boot options are required.
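The whole procedure can be sketched in two steps. This assumes the elrepo-release package is already installed (see elrepo.org for the exact rpm command for CentOS 6), and that the freshly installed kernel ends up first in the grub.conf entry list, which is the usual behavior:

```shell
# Install the mainline kernel from the ELRepo kernel repository
yum --enablerepo=elrepo-kernel install -y kernel-ml

# Make the new kernel the default: entries in /etc/grub.conf are
# numbered from 0, and the newest kernel is normally added on top
sed -i 's/^default=.*/default=0/' /etc/grub.conf
```

After a reboot, `uname -r` should report the 4.0.0 kernel.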
Vancouver is one of the hotbeds of IP communication technology and is home to many developers. With the advent of WebRTC, integration of voice and video chat into almost any application is within reach, but as always there are pitfalls. Sounds like a great reason to start a WebRTC meetup in Vancouver!
As of today, Vancouver has its own WebRTC meetup group. If you are interested in linking up and talking to like-minded RTC geeks implementing real-time comms using WebRTC, please join and let’s get together. We will also be looking for meetup facilities and sponsors (snacks, drinks, etc.).
I am thinking our first meetup will be in May sometime, not sure on exact dates yet.
The agenda and topic for the first meeting are wide open. Topics like “WebRTC 101” or “Dos and Don’ts” come to mind, but we can decide on that once we have heard from some active members.
We will also be bringing in some live guests from time to time via what else, WebRTC!
Hope to see you soon!
/Erik
One of our first posts was a Wireshark analysis of Amazon’s Mayday service to see if it was actually using WebRTC. In the very early days of WebRTC, verifying a major deployment like this was an important milestone for the WebRTC community. More recently, Philipp Hancke – aka Fippo – did several great posts analyzing Google Hangouts and Mozilla’s Hello service in Firefox. These analyses validate that WebRTC can be successfully deployed by major companies at scale. They also provide valuable insight for developers and architects on how to build a WebRTC service.
These posts are awesome and of course we want more.
I am happy to say many more are coming. In an effort to disseminate factual information about WebRTC, Google’s WebRTC team has asked &yet – Fippo’s employer – to write a series of publicly available, in-depth, reverse engineering and trace analysis reports. Philipp has agreed to write summary posts outlining the findings and implications for the WebRTC community here at webrtcHacks. This analysis is very time consuming. Making it consumable for a broad audience is even more intensive, so webrtcHacks is happy to help with this effort in our usual impartial, non-commercial fashion.
Please see below for Fippo’s deconstruction of WhatsApp voice calling.
{“editor”: “chad“}
After some rumors (e.g. on TechCrunch), WhatsApp recently launched voice calls for Android. This spurred some interest in the WebRTC world with the usual suspects like Tsahi Levent-Levi chiming in and starting a heated debate. Unfortunately, the comment box on Tsahi’s BlogGeek.Me blog was too narrow for my comments so I came back here to webrtchacks.
At that point, I had considered doing an analysis of some mobile services already and, thanks to support from the Google WebRTC team, I was able to spend a number of days looking at Wireshark traces from WhatsApp in a variety of scenarios.
Initially, I was merely trying to validate the capture setup (to be explained in a future blog post), but it turned out that there is quite a lot of interesting information here, and even some lessons for WebRTC. So I ended up writing a full fifteen-page report, which you can get here, along with the packet captures (available for download here). It will be very boring if you are not an engineer, so let me try to summarize the key points.
Summary

WhatsApp is using the PJSIP library to implement Voice over IP (VoIP) functionality. The captures show no signs of DTLS, which suggests the use of SDES encryption (see here for Victor’s past post on this). Even though STUN is used, the binding requests do not contain ICE-specific attributes. RTP and RTCP are multiplexed on the same port.
The audio codec cannot be fully determined. The sampling rate is 16 kHz, the codec bitrate is about 20 kbit/s, and the bitrate stayed the same when muted.
An inspection of the binary using the strings tool shows both PJSIP and several strings hinting at the use of elements from the webrtc.org voice engine, such as the acoustic echo cancellation (AEC), AECM, gain control (AGC), noise suppression and the high-pass filter.
Comparison with WebRTC

Feature     | WebRTC/RTCWeb specifications | WhatsApp
SDES        | MUST NOT offer               | probably uses SDES
ICE         | RFC 5245                     | no ICE, STUN connectivity checks
TURN usage  | used as last resort          | uses a similar mechanism first
Audio codec | Opus or G.711                | unclear; 16 kHz with 20 kbps bitrate

Switching from a relayed session to a p2p session

The most impressive thing I found is the optimization for a fast call setup by using a relay initially and then switching to a peer-to-peer session. This also opens up the possibility of a future multi-party VoIP call, which would certainly be supported by this architecture. The relay server is called “conf bridge” in the binary.
Let’s look at the first session to illustrate this (see the PDF for the full, lengthy description):
Now, if we decode everything as RTP (something Wireshark doesn’t get right by default, so it needs a little help), we can change the filter to rtp.ssrc == 0x0088a82d and see this clearly. The intent here is to first try a connection that is almost guaranteed to work (I recently used a similar rationale in the minimal viable SDP post) and then switch to a peer-to-peer connection in order to minimize the load on the TURN servers.
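The same decode-as-RTP step can also be scripted with tshark instead of the Wireshark GUI. A sketch, where the capture file name and the UDP port are placeholders; use whatever port your own trace shows:

```shell
# Force-decode the UDP stream as RTP, then filter on the SSRC of interest
tshark -r whatsapp-call.pcap -d udp.port==45395,rtp -Y 'rtp.ssrc == 0x0088a82d'
```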
Wow, that is pretty slick. It likely reduces the call setup time the user perceives. Let me repeat that: this is a hack which makes the user experience better!
By how much is hard to quantify. Only a large-scale measurement of both this approach and the standard approach can answer that.
Lessons for WebRTC

In WebRTC, we can do something similar, but it is a little more effort right now. We can set up the call with iceTransports: ‘relay’, which will skip host and server-reflexive candidates. Also, using a relay helps guarantee the connection will work (in conditions where WebRTC will work at all).
There are some drawbacks to this approach in terms of round-trip-times due to TURN’s permission mechanism. Basically when creating a TURN-relayed candidate the following happens (in Chrome; Firefox’s behavior differs slightly):
Compared to this, the proprietary mechanism used by WhatsApp saves a number of round trips.
If we start with just relay candidates (which also hides the IP addresses of the parties from each other), we might even establish the relayed connection and do the DTLS handshake before the callee accepts the call. This is known as transport warmup; it reduces the perceived time until media starts flowing.
Once the relayed connection is established, we can call setConfiguration (formerly known as updateIce; currently not implemented) to remove the restriction to relay candidates, and then do an ICE restart by calling createOffer again with the iceRestart flag set to true. The restart might determine that a P2P connection can be established.
Despite updateIce not being implemented, we can still switch from a relay to peer-to-peer today. ICE restarts work in Chrome, so the only bit we’re missing is iceTransports: ‘relay’, which generates only relay candidates. The same effect can be simulated in Javascript by dropping any non-relay candidates during the first iteration. It was pretty easy to implement this behaviour in my favorite SDP munging sample. The switch from relayed to P2P just works. The code is committed here.
While the ICE restart is currently inefficient, the actual media switch (which is the hard part) happens very seamlessly.
In my humble opinion
WhatsApp’s usage of STUN and RTP seems a little out of date. Arguably, the way STUN is used is very straightforward and makes things like the switch from relayed calls to P2P mode easier to implement. But ICE provides methods to accomplish the same thing in a more robust way. Using custom TURN-like functionality that delivers raw RTP from the conference bridge saves a few bytes of overhead compared to TURN channels, but that overhead is typically negligible.
Not using DTLS-SRTP with ciphers capable of perfect forward secrecy is a pretty big issue in terms of privacy. SDES is known to have drawbacks and can be decrypted retroactively if the key (which is transmitted via the signaling server) is known. Note that the signaling exchange might still be protected the same way it is done for text messages.
In terms of user experience, the mid-call blocking of P2P showed that this scenario had been considered, which demonstrates quite some thought. Echo cancellation is a serious problem, though. The webrtc.org echo cancellation does a much better job and seems to be included in the binary already. Maybe the team there would even offer their help in exchange for an acknowledgement… or awesome chocolate.
{“author”: “Philipp Hancke“}
Want to keep up on our latest posts? Please click here to subscribe to our mailing list if you have not already. We only email post updates. You can also follow us on twitter at @webrtcHacks for blog updates and news of technical WebRTC topics or our individual feeds @chadwallacehart, @reidstidolph, @victorpascual and @tsahil.
The post What’s up with WhatsApp and WebRTC? appeared first on webrtcHacks.
WebRTC-based services are seeing new and larger deployments every week. One of the challenges I’m personally facing is troubleshooting, as many different problems can occur (network, device, components…) and it’s not always easy to get useful diagnostic data from users.
Earlier this week, Tsahi, Chad and I participated in the WebRTC Global Summit in London and had the chance to catch up with some friends from Google, who publicly announced the launch of test.webrtc.org. This is a great diagnostic tool, but to me the best thing is that it can easily be integrated into your own applications; in fact, we are already integrating it into some of our WebRTC apps.
Sam, André and Christoffer from Google are providing here a brief description of the tool. Enjoy it and happy troubleshooting!
{“intro-by”: “victor“}
The WebRTC Troubleshooter: test.webrtc.org (by Google)

Why did we decide to build this?

We have spent countless hours debugging when a bug report comes in for a real-time application. Besides the application itself, there are many other components (audio, video, network) that can and eventually will go wrong due to the huge diversity of users’ system configurations.
By running small tests targeted at each component, we hoped to identify issues and make it possible to gather information about the system, reducing the round trips between developers and users needed to resolve bug reports.
It was important that this diagnostic tool could run without installing any software, and ideally it should integrate closely with an application, making it possible to clearly separate bugs in the application from bugs in the components that power it.
To accomplish this, we created a collection of tests that verify basic real-time functionality from within a web page: video capture, audio capture, connectivity, network limitations, stats on encode time, supported resolutions, etc… See details here.
We then bundled the tests on a web page that enables the user to download a report, or make it available via a URL that can be shared with developers looking into the issue.
How can you use it?

Take a look at test.webrtc.org and find out which tests you could incorporate into your app to help detect or diagnose user issues. For example, simple tests to distinguish application failures from system component failures, or more complex tests such as detecting whether the camera is delivering frozen frames, or telling users that their network signal quality is weak.
https://webrtchacks.com/wp-content/uploads/2015/04/test.webrtc.org_.mp4

Take ideas and code from GitHub and integrate similar functionality into your own UX. Using test.webrtc.org should be part of any “support request” flow for real-time applications. We encourage developers to contribute!
In particular we’d love some help getting a uniform getStats API between browsers.
What’s next?

We are working on adding more tests (e.g. network analysis that detects issues affecting audio and video performance is on the way).
We want to learn how developers integrate our tests into their apps and we want to make them easier to use!
{“authors”: [“Sam“, “André“, “Christoffer”]}
The post The WebRTC Troubleshooter: test.webrtc.org appeared first on webrtcHacks.
3CX is a Silver Sponsor at Microsoft Ignite 2015, taking place in Chicago from May 4 to 8.
The main focus of this year’s Microsoft Ignite is cloud technology, unified communications and mobility: practically tailor-made for 3CX! Industry insiders, experts and opinion leaders will be attending, so sign up and join in.
Live demonstrations of 3CX Phone System and of our integrated web conferencing solution, 3CX WebMeeting, based on WebRTC technology, will be given throughout the conference.
Come and meet the 3CX USA team and 3CX CEO Nick Galea at booth #307.
To avoid hitches or overlaps, please make an appointment via e-mail.
We look forward to meeting you in person at Microsoft Ignite 2015!
Further reading

After AOL, Google, Yahoo and the rest, the giant from Redmond is also entering the voice-over-IP market, and it is doing so by developing, in collaboration with major hardware manufacturers, a solution designed for [...]
Response Point: it was meant to be the thoroughbred with which Microsoft would extend its “leadership” to the voice-over-IP sector as well. Apparently, though, Microsoft’s experiment can already [...]
With an “unexpected twist” (at least for me), TellMe, a company recently acquired by Microsoft, has launched a new application for the RIM platform that allows searches to be made via voice commands.
How it works [...]
One of the biggest complaints about WebRTC is the lack of support for it in Safari and iOS’s WebView. Sure, you can use an SDK or build your own native iOS app, but that is a lot of work compared to Android, which has Chrome and WebRTC inside the native WebView on Android 5 (Lollipop) today. Apple, being Apple, provides no external indication of what it plans to do with WebRTC. It is unlikely they will completely ignore a W3C standard, but who knows whether iOS support is coming tomorrow or in 2 years.
Former webrtcHacks guest interviewee Alex Gouaillard came to me with an idea a few months ago for helping to push Apple and get some visibility. The idea is simple: leverage Apple’s bug process to publicly demonstrate the desire for WebRTC support today, and hopefully get some kind of response from them. See below for details on Alex’s suggestion and some additional Q&A at the end.
Note: Alex is also involved in the webrtcinwebkit project – that is a separate project that is not directly related, although it shares the same goal of pushing Apple. Stay tuned for some coverage on that topic.
{“intro-by”: “chad“}
Plan to Get Apple to Support WebRTC

The situation

According to some polls, adding WebRTC support to Safari, especially on iOS and in native apps on iOS, is the most wanted WebRTC item today.
The technical side of the problem is simple: any native app has to follow Apple’s store rules to be accepted into the store. These rules state that any app that “browses the web” needs to use the Apple-provided WebView [rule 2.17], which is based on the WebKit framework. Safari is also based on WebKit. WebKit does not support WebRTC… yet!
First technical step

The webrtcinwebkit.org project aims to address the technical problem within the first half of 2015. However, bringing WebRTC support to WebKit is just part of the overall problem. Only Apple can decide to use it in their products, and they do not comment on products that have not been released.
There have been lots of signs though that Apple is not opposed to WebRTC in WebKit/Safari.
So how do you let Apple know that you want it, and soon (potentially this year)?
Let Apple know!

Chrome and Internet Explorer (IE), for example, have set up pages where web developers can directly give feedback about which features they want to see next (WebRTC-related items generally rank high, by the way). There is no such thing yet for Apple’s products.
The only way to formally provide feedback to Apple is through the bug process. One needs to have or create a developer account and open a bug to let Apple know they want something. Free accounts are available, so there is no financial cost associated with the process. One can open a bug in any given category; the bugs are then triaged and end up in a “WebRTC” placeholder internally.
Volume counts. The more people ask for this feature, the more likely Apple is to support it. The more requests, the better.
But that is not the only thing that counts. Users of WebRTC libraries, or any third party who has a business depending on WebRTC can also raise their case with Apple that their business would profit from Apple supporting WebRTC in their product. Here too, volume (of business) counts.
As new releases of Safari are usually made with new releases of the OS, and generally in or around September, it is very unlikely to see WebRTC in Safari (if ever) before the next release, late 2015.
We need you

You want WebRTC support on iOS? You can help. See below for a step-by-step guide on how.
How-to guide

Step-by-step guide

It is very important that you write WHY, in your own words, you want WebRTC support in Safari. There are multiple reasons you might want it:
Oftentimes, communities organize “bug writing campaigns” that include boilerplate text to paste into a bug. Reviewers have a natural tendency to discount those bugs somewhat, because they feel more like a “me too” than a bug filed by someone who took 60 seconds to write up a report in their own words.
{“author”: “Alex Gouaillard“}
{“editor”: “chad“}
Chad’s follow-up Q&A with Alex

Chad: What is Apple’s typical response to these bug-filing campaigns?
Alex: I do not have a direct answer to this, and I guess only Apple does. However, here are two very clear comments from an Apple representative:
The only way to let Apple know that a feature is needed is through bug filing.
“I would just encourage people to describe why WebRTC (or any feature) is important to them in their own words. People sometimes start “bug writing campaigns” that include boilerplate text to include in a bug, and I think people here have a natural tendency to discount those bugs somewhat because they feel like more of a “me too” than a bug filed by someone that took 60 seconds to write up a report in their own words.”
So my initiative here is not to start a bug campaign per se, where everybody would copy-paste the same text or click the same report to increment a counter. My goal here is to let the community know that they can give Apple their opinion in a way that counts.
[Editor’s note: I was not able to get a direct confirmation from Apple (big surprise), but I did directly confirm that at least one relevant Apple employee agrees with the sentiment above.]
Chad: Do you have any examples of where this process has worked in the past to add a whole new W3C-defined capability like WebRTC?
Alex: I do not. However, the first comment above from the Apple representative was very clear: whether or not this eventually works, there is no other way.
Chad: Is there any kind of threshold on the number of bug filings you think the community needs to meet?
Alex: My understanding is that it’s not so much about the number of people that file bugs; it’s more about the case they make. It’s a blend of business opportunities and number of people. I guess volume counts, whether it is people or dollars. This is why it is so important that people use their own words and describe their own case.
Let’s say my friends at various other WebRTC Platform-as-a-Service providers want to show how important having WebRTC in iOS or Safari is to them: one representative of the company could go in and explain their use case and their numbers for the platform / service. They could also ask their devs to file a bug describing the application they developed on top of their WebRTC platform. They could also ask their users to describe why, as users of the WebRTC app, they feel left behind compared to their friends who own a Samsung tablet and can enjoy WebRTC while they cannot on their iPad. (That is just an example, and I do not suggest that anyone should write exactly this. Again, everybody should use their own words.)
If I understand correctly, it does not matter whether one or several employees of the above-named company file one or several bugs for the same company use case.
Chad: Are you confident this will be a good use of the WebRTC developer’s community’s time?
Alex: Ha ha. Well, let’s put it this way: the whole process takes around a couple of minutes in general, and maybe a little more for companies that have a bigger use case and want to weigh in. That is less than what you are spending reading this blog post. If you don’t have a couple of minutes to file a bug with Apple, then I guess you don’t really need the feature.
More seriously, I have been contacted by enough people who just wanted a way, any way, to make it happen, that I know this information will be useful. For the cynics out there, I’m tempted to say: worst-case scenario, you lose a couple of minutes to prove me wrong. Don’t miss the opportunity.
Yes, I’m positive this will be a good use of everybody’s time.
{“interviewer”: “chad“}
{“interviewee”: “Alex Gouaillard“}
The post Put in a Bug in Apple’s Apple – Alex Gouaillard’s Plan appeared first on webrtcHacks.
The dedicated ARM hosting servers at Scaleway appear to be a decent platform for a mid-sized PBX.
In short, the platform displays the following results in performance tests:
The following tests are a slight modification of my previous test scenario: it appears that a channel using the OPUS codec cannot execute the `echo` or `delay_echo` FreeSWITCH applications, as they copy RTP frames, and the OPUS codec is stateful and does not accept such copying. So, an extra bridge is made to ensure that echo is always executed on a PCMA channel.
XML dialplan in public context (here IPADDR is the public address on the Scaleway host):
<!-- Extension 100 accepts the initial call, plays echo, and on pressing *1 it transfers to 101 -->
<extension name="100">
  <condition field="destination_number" expression="^100$">
    <action application="answer"/>
    <action application="bind_meta_app" data="1 a si transfer::101 XML ${context}"/>
    <action application="delay_echo" data="1000"/>
  </condition>
</extension>

<!-- Extension 101 plays a beep, then makes an outgoing SIP call to our own external profile and extension 200 -->
<extension name="101">
  <condition field="destination_number" expression="^101$">
    <action application="playback" data="tone_stream://%(100,100,1400,2060,2450,2600)"/>
    <action application="unbind_meta_app" data=""/>
    <action application="bridge" data="{absolute_codec_string=PCMA}sofia/external/200@IPADDR:5080"/>
  </condition>
</extension>

<!-- Extension 200 enforces transcoding and sends the call to 201 -->
<extension name="200">
  <condition field="destination_number" expression="^200$">
    <action application="answer"/>
    <action application="bridge" data="{max_forwards=65}{absolute_codec_string=OPUS}sofia/external/201@IPADDR:5080"/>
  </condition>
</extension>

<!-- Extension 201 returns the call to 100, guaranteeing it to be in PCMA -->
<extension name="201">
  <condition field="destination_number" expression="^201$">
    <action application="answer"/>
    <action application="bridge" data="{max_forwards=65}{absolute_codec_string=PCMA}sofia/external/100@IPADDR:5080"/>
  </condition>
</extension>

The initial call is sent to extension 100 in the public context, and then each press of *1 creates 6 additional channels, of which two calls perform the transcoding from PCMA to OPUS and back. So, if “show channels” shows 43 total channels, it corresponds to 42 = 6*7 test channels plus the incoming one, or 14 transcoding calls.
#### Good quality ####
# fs_cli -x 'show channels' | grep total
43 total.

# mpstat -P ALL 10
Linux 3.19.3-192 (scw01)   04/10/2015   _armv7l_   (4 CPU)

10:08:41 PM  CPU   %usr  %nice  %sys  %iowait  %irq  %soft  %steal  %guest  %idle
10:08:51 PM  all  82.67   0.00  2.75     0.00  0.00   1.30    0.00    0.00  13.28
10:08:51 PM    0  92.80   0.00  1.30     0.00  0.00   5.20    0.00    0.00   0.70
10:08:51 PM    1  95.30   0.00  1.60     0.00  0.00   0.00    0.00    0.00   3.10
10:08:51 PM    2  89.90   0.00  2.50     0.00  0.00   0.00    0.00    0.00   7.60
10:08:51 PM    3  52.70   0.00  5.60     0.00  0.00   0.00    0.00    0.00  41.70

10:08:51 PM  CPU   %usr  %nice  %sys  %iowait  %irq  %soft  %steal  %guest  %idle
10:09:01 PM  all  84.88   0.00  2.43     0.00  0.00   1.23    0.00    0.00  11.47
10:09:01 PM    0  94.50   0.00  0.50     0.00  0.00   4.90    0.00    0.00   0.10
10:09:01 PM    1  97.60   0.00  1.50     0.00  0.00   0.00    0.00    0.00   0.90
10:09:01 PM    2  87.70   0.00  2.20     0.00  0.00   0.00    0.00    0.00  10.10
10:09:01 PM    3  59.70   0.00  5.50     0.00  0.00   0.00    0.00    0.00  34.80

#### Quite OK quality, with some minor distortions ####
# fs_cli -x 'show channels' | grep total
49 total.

# mpstat -P ALL 10
Linux 3.19.3-192 (scw01)   04/10/2015   _armv7l_   (4 CPU)

10:10:29 PM  CPU   %usr  %nice  %sys  %iowait  %irq  %soft  %steal  %guest  %idle
10:10:39 PM  all  95.65   0.00  2.40     0.00  0.00   0.83    0.00    0.00   1.12
10:10:39 PM    0  95.30   0.00  1.20     0.00  0.00   3.30    0.00    0.00   0.20
10:10:39 PM    1  96.90   0.00  2.20     0.00  0.00   0.00    0.00    0.00   0.90
10:10:39 PM    2  95.80   0.00  3.50     0.00  0.00   0.00    0.00    0.00   0.70
10:10:39 PM    3  94.60   0.00  2.70     0.00  0.00   0.00    0.00    0.00   2.70

10:10:39 PM  CPU   %usr  %nice  %sys  %iowait  %irq  %soft  %steal  %guest  %idle
10:10:49 PM  all  91.55   0.00  1.55     0.00  0.00   0.78    0.00    0.00   6.12
10:10:49 PM    0  89.90   0.00  1.20     0.00  0.00   3.10    0.00    0.00   5.80
10:10:49 PM    1  96.60   0.00  0.70     0.00  0.00   0.00    0.00    0.00   2.70
10:10:49 PM    2  90.60   0.00  1.70     0.00  0.00   0.00    0.00    0.00   7.70
10:10:49 PM    3  89.10   0.00  2.60     0.00  0.00   0.00    0.00    0.00   8.30

#### Bad quality, barely audible ####
# fs_cli -x 'show channels' | grep total
55 total.

If the OPUS codec is replaced with SILK in the above configuration, the test is not usable: SILK appears not to tolerate multiple transcodings, and after 4 transcodings the sound is almost not propagated at all. Further transcoding sessions treat the input as silence and do not load the CPU.
If G722 is used, 36 transcoded calls still leave plenty of CPU resources for other tasks:
# fs_cli -x 'show channels' | grep total
109 total.

# mpstat -P ALL 10
Linux 3.19.3-192 (scw01)   04/10/2015   _armv7l_   (4 CPU)

10:37:31 PM  CPU   %usr  %nice   %sys  %iowait  %irq  %soft  %steal  %guest   %idle
10:37:41 PM  all  19.75   0.00   5.40     0.00  0.00   0.00    0.00    0.00   74.85
10:37:41 PM    0  27.00   0.00  12.10     0.00  0.00   0.00    0.00    0.00   60.90
10:37:41 PM    1   4.30   0.00   9.50     0.00  0.00   0.00    0.00    0.00   86.20
10:37:41 PM    2  47.60   0.00   0.00     0.00  0.00   0.00    0.00    0.00   52.40
10:37:41 PM    3   0.10   0.00   0.00     0.00  0.00   0.00    0.00    0.00   99.90

10:37:41 PM  CPU   %usr  %nice   %sys  %iowait  %irq  %soft  %steal  %guest   %idle
10:37:51 PM  all  17.57   0.00   7.42     0.00  0.00   0.00    0.00    0.00   75.00
10:37:51 PM    0   0.00   0.00   0.00     0.00  0.00   0.00    0.00    0.00  100.00
10:37:51 PM    1  20.30   0.00  29.70     0.00  0.00   0.00    0.00    0.00   50.00
10:37:51 PM    2  50.00   0.00   0.00     0.00  0.00   0.00    0.00    0.00   50.00
10:37:51 PM    3   0.00   0.00   0.00     0.00  0.00   0.00    0.00    0.00  100.00

Test 2: parallel transcoding

The following piece of the public dialplan takes the call at extension 300, makes a call in OPUS to extension 301, and then the call is bridged to 302 in PCMA, where a speech test file is played endlessly. Thus, a call to 300 produces 5 channels, which are the equivalent of two transcoded calls.
<extension name="300">
  <condition field="destination_number" expression="^300$">
    <action application="answer"/>
    <action application="bridge" data="{absolute_codec_string=OPUS}sofia/external/301@IPADDR:5080"/>
  </condition>
</extension>

<extension name="301">
  <condition field="destination_number" expression="^301$">
    <action application="answer"/>
    <action application="bridge" data="{absolute_codec_string=PCMA}sofia/external/302@IPADDR:5080"/>
  </condition>
</extension>

<extension name="302">
  <condition field="destination_number" expression="^302$">
    <action application="answer"/>
    <action application="endless_playback" data="/var/tmp/t02.wav"/>
  </condition>
</extension>

In parallel to a call to 300 from outside, additional endless calls were produced from fs_cli:
originate sofia/external/300@IPADDR:5080 &endless_playback(/var/tmp/t02.wav)

This originate command produces 6 new channels, equivalent to two transcoded calls. The command was repeated until the human caller heard distortions.
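The ramp-up can also be scripted by wrapping the originate in a small shell loop. A sketch: IPADDR and the wav path are as in the dialplan above, and the 20-second interval is an arbitrary choice to let each batch settle:

```shell
# Add one batch of 6 channels (two transcoded calls) every 20 seconds;
# stop with Ctrl-C when the monitored call starts to distort
while true; do
  fs_cli -x "originate sofia/external/300@IPADDR:5080 &endless_playback(/var/tmp/t02.wav)"
  sleep 20
done
```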
OPUS transcoding was functioning fine with 16 transcoded calls and 95% average CPU load, while SILK and G722 started showing distortions at around 65-75% of CPU load.
Scaleway (a cloud service by online.net) offers ARM-based dedicated servers for EUR 9.99/month, with the first month free. The platform is powerful enough to run a small FreeSWITCH server, and it shows nice results in voice quality tests.
These instructions are for the Debian Wheezy distribution.
By default, the server is created with Linux kernel 3.2.34, and this kernel version does not have a high-resolution timer. You need to choose 3.19.3 in server settings.
At Scaleway, you get a dedicated public IP address and 1:1 NAT to a private IP address on your server. So, the FreeSWITCH SIP profiles need to be updated ("ext-rtp-ip" and "ext-sip-ip" should point to your public IP address).
FreeSWITCH compiles and links the "mpg123-1.13.2" library, which fails to compile on the ARM architecture. You need to edit the corresponding files to point to "mpg123-1.19.0" and commit the change back to Git, because the build scripts check whether any modified and uncommitted files exist in the source tree. The patch also forces the use of gcc-4.7, as gcc-4.6 is known to have problems on the ARM architecture.
apt-get update && apt-get install -y make curl git sox flac

mkdir -p /usr/src/freeswitch
cd /usr/src/freeswitch/
git clone https://gist.github.com/b27f4e41cc02f49d31a0.git
git clone -b v1.4 https://stash.freeswitch.org/scm/fs/freeswitch.git /usr/src/freeswitch/src
cd src
git apply ../b27f4e41cc02f49d31a0/freeswitch-arm.patch
git add --all
git commit -m 'mpg123-1.19.0.patch'
./debian/util.sh build-all -i -z1 -aarmhf -cwheezy
# This will run for about 4 hours, and you can build the sound packages
# in parallel in another terminal.

mkdir /usr/src/freeswitch-sounds
cd /usr/src/freeswitch-sounds
git clone https://github.com/traviscross/freeswitch-sounds.git music-default
cd music-default
./debian/bootstrap.sh -p freeswitch-music-default
./debian/rules get-orig-source
tar -xv --strip-components=1 -f *_*.orig.tar.xz && mv *_*.orig.tar.xz ../
dpkg-buildpackage -uc -us -Zxz -z1

cd /usr/src/freeswitch-sounds
git clone https://github.com/traviscross/freeswitch-sounds.git sounds-en-us-callie
cd sounds-en-us-callie
./debian/bootstrap.sh -p freeswitch-sounds-en-us-callie
./debian/rules get-orig-source
tar -xv --strip-components=1 -f *_*.orig.tar.xz && mv *_*.orig.tar.xz ../
dpkg-buildpackage -uc -us -Zxz -z1

cd /usr/src/freeswitch-sounds
dpkg -i *.deb

cd /usr/src/freeswitch
# this will fail because dependencies are not installed
dpkg -i freeswitch-all_*
# this will add dependencies
apt-get -f install
# finally, install FreeSWITCH
dpkg -i freeswitch-all_*

# Minimal configuration that you can use
cd /etc
git clone https://github.com/voxserv/freeswitch_conf_minimal.git freeswitch
# edit sip_profiles/*.xml and put the public IP address into "ext-rtp-ip" and "ext-sip-ip"
insserv freeswitch
service freeswitch start

MUNICH, GERMANY, MARCH 27, 2015 – 3CX, developer of the next-generation software PBX 3CX Phone System, beat the competition in the "Unified Communications" category with its new product, 3CX WebMeeting, and took the "Most Innovative Product" award. This happened at CeBIT 2015 in Hanover, one of the world's most important IT trade fairs. The award was accepted by CEO Nick Galea and Markus Kogel, Sales Manager for the EMEA region.
3CX WebMeeting was chosen for its innovative use of WebRTC technology. WebRTC is Google's new open-standard platform that allows users to launch web meetings directly from the browser, without having to download and install any client. 3CX launched the hosted version of 3CX WebMeeting in August 2014 and the on-premise version in February 2015. Since its launch, 3CX WebMeeting has received positive feedback from both partners and end users. 3CX WebMeeting is free for up to 10 concurrent users with all 3CX Phone System v12.5 licenses.
The Innovationspreis-IT 2015 awards are organized by Initiative Mittelstand, an online information portal that keeps companies up to date on the most innovative products and technologies available.
3CX CEO Nick Galea said:
"This award recognizes 3CX as a company at the forefront of the telephony and Unified Communications industry. We are the first vendor to offer a multipoint video conferencing solution based on WebRTC technology that is also integrated with our PBX at no extra cost. The 'Most Innovative Product' award, selected by a jury of experts, is a very prestigious recognition in Germany, and we are very happy that our capacity for innovation is being recognized within the IT industry."
About 3CX (www.3cx.it)
3CX is the developer of 3CX Phone System, an open-standard unified communications platform for Windows that works with standard SIP phones and replaces any proprietary PBX. 3CX Phone System is easier to manage than standard PBX systems and delivers substantial cost savings and productivity gains. Some of the world's leading companies and organizations use 3CX Phone System, including Boeing, Mitsubishi Motors, Intercontinental Hotels & Resorts, Harley Davidson, the City of Vienna and Pepsi.
3CX won the 2014 Comms National Award in the 'Best Enterprise On-Premise Solution' category, was included in CRN's 2014 Annual Network Connectivity Services Partner Program Guide, and earned a 5-star rating in CRN's 2013 partner program. 3CX was also named an Emerging Vendor by CRN in 2011 and 2012, has received Windows Server certification, and has won several awards: the Windowsnetworking.com Gold Award, the Windows IT Pro 2008 Editor's Best Award and a best product award from Computer Shopper.
3CX has offices in Australia, Cyprus, Germany, Italy, South Africa, the United Kingdom and the United States. Visit the website at http://www.3cx.com, the Facebook page at www.facebook.com/3CX and the Twitter channel @3cx.
Further reading

3CX is a Silver Sponsor at Microsoft Ignite 2015, which will be held in Chicago from May 4 to 8.
The main focus of this year's Microsoft Ignite is Cloud technology, Unified Communications and [...]
The FON movement has recently launched the FONTENNA, designed specifically to extend the range of our home hotspots. Let's take a close look at the characteristics of this 6.5 dB antenna [...]
True to its reputation as an innovative company, 3CX is one of the first PBX vendors to offer a Mac client with a full set of professional features. With the new update of the popular 3CXPhone for Mac, users receive an email notification when they miss a call. This is perfect for users who are always on the move and away from their desk: they will always be notified of every missed call and will be able to call back.

Other new features in the 3CXPhone for Mac update

For more information on the new features, look here. Download 3CXPhone for Mac here.

Further reading

The new major release of 3CX Phone System is ready! Our Research & Development team has done it again and delivered an extraordinary version: Cloud-ready and equipped with [...]
Security and privacy are two aspects of the IT world that will increasingly affect our phone calls as well: with the spread of VoIP, we will become familiar with VPNs, encryption keys and secure connections. Cellcrypt [...]
We know hopes alone won't keep us going, but they certainly help us live better. It seems that Mountain View has given itself away (who knows how involuntarily), revealing the upcoming features of the Gtalk client: [...]
One evening last week, I was nerd-sniped by a question Max Ogden asked:
That is quite an interesting question. I somewhat dislike using Session Description Protocol (SDP) in the signaling protocol anyway and prefer nice JSON objects for the API and ugly XML blobs on the wire to the ugly SDP blobs used by the WebRTC API.
The question is really about the minimum amount of information that needs to be exchanged for a WebRTC connection to succeed.
WebRTC uses ICE and DTLS to establish a secure connection between peers. This mandates two constraints: the ICE credentials (ice-ufrag and ice-pwd) and at least one candidate must be exchanged between the peers, and so must the DTLS fingerprint that authenticates the key exchange.
Now the stock SDP that WebRTC uses (explained here) is a rather big blob of text, more than 1500 characters for an audio-video offer not even considering the ICE candidates yet.
Do we really need all this? It turns out that you can establish a P2P connection with just a little more than 100 characters sent in each direction. The minimal-webrtc repository shows you how to do that. I had to use quite a number of tricks to make this work, it’s a real hack.
How I did it

Get some SDP

First, we want to establish a datachannel connection. Once we have this, we can potentially use it to negotiate a second audio/video peerconnection without being constrained by the size of the offer or the answer. Also, the SDP for the data channel is a lot smaller to start with, since there is no codec negotiation. Here is how to get that SDP:
var pc = new webkitRTCPeerConnection(null);
var dc = pc.createDataChannel('webrtchacks');
pc.createOffer(
    function (offer) {
        pc.setLocalDescription(offer);
        console.log(offer.sdp);
    },
    function (err) {
        console.error(err);
    }
);

The resulting SDP is slightly more than 400 bytes. Now we also need some candidates included, so we wait for the end-of-candidates event:
pc.onicecandidate = function (event) {
    if (!event.candidate) console.log(pc.localDescription.sdp);
};
v=0
o=- 4596489990601351948 2 IN IP4 127.0.0.1
s=-
t=0 0
a=msid-semantic: WMS
m=application 47299 DTLS/SCTP 5000
c=IN IP4 192.168.20.129
a=candidate:1966762134 1 udp 2122260223 192.168.20.129 47299 typ host generation 0
a=candidate:211962667 1 udp 2122194687 10.0.3.1 40864 typ host generation 0
a=candidate:1002017894 1 tcp 1518280447 192.168.20.129 0 typ host tcptype active generation 0
a=candidate:1109506011 1 tcp 1518214911 10.0.3.1 0 typ host tcptype active generation 0
a=ice-ufrag:1/MvHwjAyVf27aLu
a=ice-pwd:3dBU7cFOBl120v33cynDvN1E
a=ice-options:google-ice
a=fingerprint:sha-256 75:74:5A:A6:A4:E5:52:F4:A7:67:4C:01:C7:EE:91:3F:21:3D:A2:E3:53:7B:6F:30:86:F2:30:AA:65:FB:04:24
a=setup:actpass
a=mid:data
a=sctpmap:5000 webrtc-datachannel 1024

Only take what you need

We are only interested in a few bits of information here:
The ice-ufrag is 16 characters long due to the randomness requirements of RFC 5245. While it is possible to reduce that, it's probably not worth the effort. The same applies to the 24 characters of the ice-pwd. Both are random, so there is not much to gain from compressing them either.
The DTLS fingerprint is a hex representation of the 32 bytes of the SHA-256 hash. Its length can easily be reduced from 95 characters to an almost optimal (assuming we want to stay binary-safe) 44 characters:
var line = "a=fingerprint:sha-256 75:74:5A:A6:A4:E5:52:F4:A7:67:4C:01:C7:EE:91:3F:21:3D:A2:E3:53:7B:6F:30:86:F2:30:AA:65:FB:04:24";
var hex = line.substr(22).split(':').map(function (h) {
    return parseInt(h, 16);
});
console.log(btoa(String.fromCharCode.apply(String, hex)));
// yields dXRapqTlUvSnZ0wBx+6RPyE9ouNTe28whvIwqmX7BCQ=

So we're at 84 characters now. We can hardcode everything else in the application.
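On the receiving side, the 44-character base64 form has to be expanded back into the colon-separated hex notation before it can go into an SDP blob. A minimal sketch of the reverse step (the helper name expandFingerprint is mine, not from the original post):

```javascript
// Expand the compact base64 fingerprint back into the colon-separated
// uppercase hex notation expected on the a=fingerprint: line.
function expandFingerprint(b64) {
    return Array.from(atob(b64), function (c) {
        return ('0' + c.charCodeAt(0).toString(16)).slice(-2).toUpperCase();
    }).join(':');
}

console.log('a=fingerprint:sha-256 ' +
    expandFingerprint('dXRapqTlUvSnZ0wBx+6RPyE9ouNTe28whvIwqmX7BCQ='));
```

Running btoa on the result of expandFingerprint (after splitting and parsing the hex bytes again) round-trips back to the same 44-character string.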
Dealing with candidates

Let's look at the candidates. Wait, we only got host candidates. This is not going to work unless people are on the same network. STUN does not help much either, since it only works in approximately 80% of all cases.
So we need candidates that were gathered from a TURN server. In Chrome, the easy way to achieve this is to set the iceTransports constraint to 'relay', which will not even gather host and srflx candidates. In Firefox, you currently need to ignore all non-relay candidates yourself.
If you use the minimal-webrtc demo, you need to use your own TURN credentials; the ones in the repository will no longer work, since they use the time-based credential scheme. On my machine, two candidates were gathered:
a=candidate:1211076970 1 udp 41885439 104.130.198.83 47751 typ relay raddr 0.0.0.0 rport 0 generation 0
a=candidate:1211076970 1 udp 41819903 104.130.198.83 38132 typ relay raddr 0.0.0.0 rport 0 generation 0

I believe this is a bug in Chrome, which gathers a relay candidate for an interface that is not routable, so I filed an issue.
Let's look at the first candidate using the grammar defined in RFC 5245:
If we were to simply append both candidates to the 84 bytes we already have we would end up with 290 bytes. But we don’t need most of the information in there.
The most interesting information is the IP and port. For IPv4, that is 32 bits for the IP and 16 bits for the port. We can encode that using btoa again, which yields 7 + 4 characters per candidate. Actually, if both candidates share the same IP, we can skip encoding it again, reducing the size further.
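To illustrate the idea (this is my own sketch of one possible packing, not the exact encoding minimal-webrtc uses — the helper names encodeHostPort/decodeHostPort are hypothetical), the six bytes of an IPv4 address plus port can be packed into base64 like this:

```javascript
// Pack an IPv4 address and a port (6 bytes total) into a base64 string.
function encodeHostPort(ip, port) {
    var bytes = ip.split('.').map(Number); // 4 bytes of IPv4 address
    bytes.push(port >> 8, port & 0xff);    // 2 bytes of port, big-endian
    return btoa(String.fromCharCode.apply(String, bytes));
}

// Unpack the string again on the receiving side.
function decodeHostPort(s) {
    var bytes = Array.from(atob(s), function (c) { return c.charCodeAt(0); });
    return {
        ip: bytes.slice(0, 4).join('.'),
        port: (bytes[4] << 8) | bytes[5]
    };
}

console.log(encodeHostPort('104.130.198.83', 47751)); // "aILGU7qH"
```

Six bytes map to exactly eight base64 characters with no padding, which is in the same ballpark as the 7 + 4 characters the post arrives at.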
After consulting RFC 5245 it turned out that the foundation and priority can actually be skipped, even though that requires some effort. And everything else can be easily hard-coded in the application.
sdp.length = 106

Let's summarize what we have so far:
Now we also want to encode whether this is an offer or an answer. Let’s use uppercase O and A respectively. Next, we concatenate this and separate the fields with a ‘,’ character. While that is less efficient than a binary encoding or one that relies on fixed field lengths, it is flexible. The result is a string like:
O,1/MvHwjAyVf27aLu,3dBU7cFOBl120v33cynDvN1E,dXRapqTlUvSnZ0wBx+6RPyE9ouNTe28whvIwqmX7BCQ=,1k85hij,1ek7,157k

106 characters! So that is tweetable. Yay!
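The comma-separated format also makes the receiving side trivial. A hedged sketch of the unpacking step (the function name and result shape are my own; the candidate fields are left opaque here, since their exact encoding is an implementation detail of minimal-webrtc):

```javascript
// Split the compact, comma-separated string back into named fields.
function unpackMinimalSdp(s) {
    var fields = s.split(',');
    return {
        type: fields[0] === 'O' ? 'offer' : 'answer',
        iceUfrag: fields[1],
        icePwd: fields[2],
        fingerprintBase64: fields[3],
        candidates: fields.slice(4) // still in their compact encoding
    };
}

var parsed = unpackMinimalSdp(
    'O,1/MvHwjAyVf27aLu,3dBU7cFOBl120v33cynDvN1E,' +
    'dXRapqTlUvSnZ0wBx+6RPyE9ouNTe28whvIwqmX7BCQ=,1k85hij,1ek7,157k');
console.log(parsed.type);     // "offer"
console.log(parsed.iceUfrag); // "1/MvHwjAyVf27aLu"
```

The comma separator costs a few characters compared to fixed field lengths, but keeps the format self-describing enough to parse in a handful of lines.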
You better be fast

Now, if you try this, it turns out that it does not usually work unless you are fast enough at pasting things.
ICE is short for Interactive Connectivity Establishment. If you are not fast enough in transferring the answer and starting ICE at the Offerer, it will fail. You have less than 30 seconds between creating the answer at the Answerer and setting it at the Offerer. That’s pretty tough for humans doing copy-paste. And it will not work via twitter.
What happens is that the Answerer is trying to perform connectivity checks as explained in RFC 5245. But those never reach the Offerer since we are using a TURN server. The TURN server does not allow traffic from the Answerer to be relayed to the Offerer before the Offerer creates a TURN permission for the candidate, which it can only do once the Offerer receives the answer. Even if we could ignore permissions, the Offerer can not form the STUN username without the Answerer’s ice-ufrag and ice-pwd. And if the Offerer does not reply to the connectivity checks by Answerer, the Answerer will conclude that ICE has failed.
So what was the point of this?
Now… it is pretty hard to come up with a use case for this. It fits into an SMS. But sending your peer a URL where you both connect using a third-party signaling server is a lot more viable most of the time. Especially given that, to achieve this, I had to make some tough design decisions, like forcing a TURN server and taking some shortcuts with the ICE candidates which are not really safe. Also, this cannot use trickle ICE.
¯\_(ツ)_/¯
So is this just a case study in arcane signaling protocols? Probably. But hey, I can now use IRC as a signaling protocol for WebRTC. IRC has a limit of 512 characters, so one can include even more candidates and information. CTCP WEBRTC, anyone?
{“author”: “Philipp Hancke“}
The post The Minimum Viable SDP appeared first on webrtcHacks.